Understanding Cloud Bigtable performance

This page describes the approximate performance that Cloud Bigtable can provide under optimal conditions, factors that can affect performance, and tips for testing and troubleshooting Cloud Bigtable performance issues.

Performance for typical workloads

Cloud Bigtable delivers highly predictable performance that is linearly scalable. When you avoid the causes of slower performance described below, each Cloud Bigtable node can provide the following approximate throughput, depending on which type of storage the cluster uses:

Storage Type   Peak Reads                            Peak Writes                           Peak Scans
SSD            10,000 rows per second or 10 MB/s     10,000 rows per second or 10 MB/s     220 MB/s
HDD            500 rows per second or 0.5 MB/s       10,000 rows per second or 10 MB/s     180 MB/s

These estimates assume that each row contains 1 KB of data.

In general, a cluster's performance increases linearly as you add nodes to the cluster. For example, if you create an SSD cluster with 10 nodes, the cluster can support up to 100,000 rows per second for a typical read-only or write-only workload.
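
As a rough capacity-planning aid, the per-node figures in the table above can be turned into a node-count estimate under this linear-scaling assumption. The following Go sketch is illustrative only; the function and its read-only assumption are not part of any Cloud Bigtable API, and real workloads should be sized by testing.

    package main

    import (
        "fmt"
        "math"
    )

    // Approximate peak rows per second per node for 1 KB rows, taken from the
    // table above.
    var peakReadsPerNode = map[string]float64{"SSD": 10000, "HDD": 500}

    // nodesForReadLoad estimates how many nodes a read-only workload needs,
    // assuming traffic is spread evenly and throughput scales linearly.
    func nodesForReadLoad(storage string, targetRowsPerSecond float64) int {
        return int(math.Ceil(targetRowsPerSecond / peakReadsPerNode[storage]))
    }

    func main() {
        // 100,000 reads per second on SSD needs about 10 nodes.
        fmt.Println(nodesForReadLoad("SSD", 100000))
    }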

Causes of slower performance

There are several factors that can cause Cloud Bigtable to perform more slowly than the estimates shown above:

  • The table's schema is not designed correctly. To get good performance from Cloud Bigtable, it's essential to design a schema that makes it possible to distribute reads and writes evenly across each table. See Designing Your Schema for more information, and the write sketch after this list for an example of a well-distributed row key.
  • The workload isn't appropriate for Cloud Bigtable. If you test with a small amount (< 300 GB) of data, or if you test for a very short period of time (seconds rather than minutes or hours), Cloud Bigtable won't be able to balance your data in a way that gives you good performance. It needs time to learn your access patterns, and it needs large enough shards of data to make use of all of the nodes in your cluster.
  • The rows in your Cloud Bigtable table contain large amounts of data. The performance estimates shown above assume that each row contains 1 KB of data. You can read and write larger amounts of data per row, but increasing the amount of data per row will also reduce the number of rows per second.
  • The rows in your Cloud Bigtable table contain a very large number of cells. It takes time for Cloud Bigtable to process each cell in a row. Also, each cell adds some overhead to the amount of data that's stored in your table and sent over the network. For example, if you're storing 1 KB (1,024 bytes) of data, it's much more space-efficient to store that data in a single cell, rather than spreading the data across 1,024 cells that each contain 1 byte. If you split your data across more cells than necessary, you might not get the best possible performance.
  • The Cloud Bigtable cluster doesn't have enough nodes. If your Cloud Bigtable cluster is overloaded, adding more nodes can improve performance. Use the monitoring tools to check whether the cluster is overloaded.
  • The Cloud Bigtable cluster was scaled up or scaled down recently. After you change the number of nodes in a cluster, it can take up to 20 minutes under load before you see a significant improvement in the cluster's performance.
  • The Cloud Bigtable cluster uses HDD disks. In most cases, your cluster should use SSD disks, which have significantly better performance than HDD disks. See Choosing between SSD and HDD storage for details.
  • The Cloud Bigtable instance is a development instance. Because a development instance's performance is equivalent to an instance with one single-node cluster, it will not perform as well as a production instance. In particular, you might see a large number of errors if the instance is under heavy load.
  • There are issues with the network connection. Network issues can reduce throughput and cause reads and writes to take longer than usual. In particular, you might see issues if your clients are not running in the same zone as your Cloud Bigtable cluster, or if your clients run outside of Google Cloud.
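
To illustrate the schema and cell-size points above (the first and fourth items), here is a minimal Go write sketch. The project, instance, table, column family, and row key format are placeholder assumptions, not recommendations for your specific workload.

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/bigtable"
    )

    func main() {
        ctx := context.Background()

        // Placeholder project and instance IDs.
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
        if err != nil {
            log.Fatalf("NewClient: %v", err)
        }
        defer client.Close()

        tbl := client.Open("my-table")

        // A row key that leads with a well-distributed value (here, a user ID)
        // rather than a timestamp helps spread writes across tablets and nodes.
        rowKey := "user123#2024-01-15T00:00:00Z"

        // Store the ~1 KB payload in a single cell instead of splitting it
        // across many tiny cells, each of which adds storage and network overhead.
        payload := make([]byte, 1024)
        mut := bigtable.NewMutation()
        mut.Set("cf1", "payload", bigtable.Now(), payload)

        if err := tbl.Apply(ctx, rowKey, mut); err != nil {
            log.Fatalf("Apply: %v", err)
        }
    }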

Because different workloads can cause performance to vary, you should perform tests with your own workloads to obtain the most accurate benchmarks.

Replication and performance

Enabling replication will affect the performance of a Cloud Bigtable instance. The effect is positive for some metrics and negative for others. You should understand potential impacts on performance before deciding to enable replication.

Read throughput

Replication can improve read throughput, especially when you use multi-cluster routing. Additionally, replication can reduce read latency by placing your Cloud Bigtable data geographically closer to your application's users.

Write throughput

Although replication can improve availability and read performance, it does not increase write throughput. A write to one cluster must be replicated to all other clusters in the instance. As a result, each cluster is expending CPU resources to pull changes from the other clusters. Write throughput might actually go down because replication requires each cluster to do additional work.

For example, suppose you have a single-cluster instance, and the cluster has 3 nodes:

Single-cluster instance that has 3 nodes

If you add nodes to the cluster, the effect on write throughput is different than if you enable replication by adding a second 3-node cluster to the instance.

Adding nodes to the original cluster: You can add 3 nodes to the cluster, for a total of 6 nodes. The write throughput for the instance doubles, but the instance's data is available in only one zone:

Single-cluster instance that has 6 nodes

With replication: Alternatively, you can add a second cluster with 3 nodes, for a total of 6 nodes. The instance now writes each piece of data twice: when the write is first received and again when it is replicated to the other cluster. The write throughput does not increase, and might go down, but you benefit from having your data available in two different zones:

Two-cluster instance that has 6 nodes

In these examples, the single-cluster instance can handle twice the write throughput that the replicated instance can handle, even though each instance's clusters have a total of 6 nodes.
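
To put approximate numbers on these examples, using the SSD write figure of roughly 10,000 1 KB rows per second per node from the table above: the original 3-node cluster can absorb about 30,000 simple writes per second, the 6-node single cluster about 60,000, while the replicated instance with two 3-node clusters stays at about 30,000 or less, because every write must also be applied on the second cluster.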

Replication latency

When you use multi-cluster routing, replication for Cloud Bigtable is eventually consistent. As a general rule, it takes longer to replicate data across a greater distance. Replicated clusters in different regions will typically have higher replication latency than replicated clusters in the same region.

App profiles and traffic routing

Depending on your use case, you will use one or more app profiles to route your Cloud Bigtable traffic. Each app profile uses either multi-cluster or single-cluster routing. The choice of routing can affect performance.

Multi-cluster routing can minimize latency. An app profile with multi-cluster routing automatically routes requests to the closest cluster in an instance from the perspective of the application, and the writes are then replicated to the other clusters in the instance. This automatic choice of the shortest distance results in the lowest possible latency.

An app profile that uses single-cluster routing can be optimal for certain use cases, such as separating workloads or providing read-after-write semantics on a single cluster, but it will not reduce latency in the way multi-cluster routing does.

To understand how to configure your app profiles for these and other use cases, see Examples of Replication Settings.
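
As an illustration, the following Go sketch opens the same table through two hypothetical app profiles, one intended for latency-sensitive serving traffic and one for batch work. The profile IDs, project, instance, and table names are placeholders, and the routing policy itself (multi-cluster or single-cluster) is configured on each app profile, not in client code.

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/bigtable"
    )

    func main() {
        ctx := context.Background()

        // Hypothetical app profile for latency-sensitive serving traffic.
        servingClient, err := bigtable.NewClientWithConfig(ctx, "my-project", "my-instance",
            bigtable.ClientConfig{AppProfile: "serving-profile"})
        if err != nil {
            log.Fatalf("NewClientWithConfig: %v", err)
        }
        defer servingClient.Close()

        // Hypothetical app profile for batch workloads.
        batchClient, err := bigtable.NewClientWithConfig(ctx, "my-project", "my-instance",
            bigtable.ClientConfig{AppProfile: "batch-profile"})
        if err != nil {
            log.Fatalf("NewClientWithConfig: %v", err)
        }
        defer batchClient.Close()

        // Requests sent through each client are routed according to that
        // client's app profile.
        _ = servingClient.Open("my-table")
        _ = batchClient.Open("my-table")
    }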

How Cloud Bigtable optimizes your data over time

To store the underlying data for each of your tables, Cloud Bigtable shards the data into multiple tablets, which can be moved between nodes in your Cloud Bigtable cluster. This storage method enables Cloud Bigtable to use two different strategies for optimizing your data over time:

  1. Cloud Bigtable tries to store roughly the same amount of data on each Cloud Bigtable node.
  2. Cloud Bigtable tries to distribute reads and writes equally across all Cloud Bigtable nodes.

Sometimes these strategies conflict with one another. For example, if one tablet's rows are read extremely frequently, Cloud Bigtable might store that tablet on its own node, even though this causes some nodes to store more data than others.

As part of this process, Cloud Bigtable might also split a tablet into two or more smaller tablets, either to reduce a tablet's size or to isolate hot rows within an existing tablet.

The following sections explain each of these strategies in more detail.

Distributing the amount of data across nodes

As you write data to a Cloud Bigtable table, Cloud Bigtable shards the table's data into tablets. Each tablet contains a contiguous range of rows within the table.

If you have written only a small amount of data to the table, Cloud Bigtable will store all of the tablets on a single node within your cluster:

A cluster with four tablets on a single node.

As more tablets accumulate, Cloud Bigtable will move some of them to other nodes in the cluster so that the amount of data is balanced more evenly across the cluster:

Additional tablets are distributed across multiple nodes.

Distributing reads and writes evenly across nodes

If you've designed your schema correctly, then reads and writes should be distributed fairly evenly across your entire table. However, there are some cases where you can't avoid accessing certain rows more frequently than others. Cloud Bigtable helps you deal with these cases by taking reads and writes into account when it balances tablets across nodes.

For example, suppose that 25% of reads are going to a small number of tablets within a cluster, and reads are spread evenly across all other tablets:

Out of 48 tablets, 25% of reads are going to 3 tablets.

Cloud Bigtable will redistribute the existing tablets so that reads are spread as evenly as possible across the entire cluster:

The three hot tablets are isolated on their own node.

Testing performance with Cloud Bigtable

If you're running a performance test that depends upon Cloud Bigtable, be sure to follow these steps as you plan and execute your test:

  1. Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.
  2. Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data. However, 300 GB of data is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use at least 100 GB of data per node. For simplicity, test with a single table.
  3. Stay below the recommended storage utilization per node. For details, see Storage utilization per node.
  4. Before you test, run a heavy pre-test for several minutes. This step gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.
  5. Run your test for at least 10 minutes. This step lets Cloud Bigtable further optimize your data, and it helps ensure that you will test reads from disk as well as cached reads from memory.

You can use the Cloud Bigtable loadtest tool, written in Go, as a starting point for developing your own performance test. The loadtest tool performs a simple workload made up of 50% reads and 50% writes.
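
If you prefer to script a simple warm-up or load phase yourself, the following Go sketch mirrors the tool's 50% read / 50% write mix over randomly chosen row keys. The project, instance, table, and column family names are placeholders; a realistic test would also pre-populate the table (so that most reads return valid results) and run many concurrent workers rather than a single loop.

    package main

    import (
        "context"
        "fmt"
        "log"
        "math/rand"
        "time"

        "cloud.google.com/go/bigtable"
    )

    func main() {
        ctx := context.Background()

        // Placeholder project, instance, table, and column family names.
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
        if err != nil {
            log.Fatalf("NewClient: %v", err)
        }
        defer client.Close()
        tbl := client.Open("loadtest-table")

        payload := make([]byte, 1024) // ~1 KB per row, matching the estimates above
        deadline := time.Now().Add(10 * time.Minute)

        for time.Now().Before(deadline) {
            // Random keys keep the workload spread across the key space.
            rowKey := fmt.Sprintf("row-%08d", rand.Intn(10000000))

            if rand.Intn(2) == 0 {
                mut := bigtable.NewMutation()
                mut.Set("cf1", "payload", bigtable.Now(), payload)
                if err := tbl.Apply(ctx, rowKey, mut); err != nil {
                    log.Printf("write %s: %v", rowKey, err)
                }
            } else {
                if _, err := tbl.ReadRow(ctx, rowKey); err != nil {
                    log.Printf("read %s: %v", rowKey, err)
                }
            }
        }
    }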

Troubleshooting performance issues

If you think that Cloud Bigtable might be creating a performance bottleneck in your application, be sure to check all of the following:

  • Look at the Key Visualizer scans for your table. The Key Visualizer tool for Cloud Bigtable provides daily scans that show the usage patterns for each table in a cluster. Key Visualizer makes it easy to check whether your usage patterns are causing undesirable results, such as hotspots on specific rows or excessive CPU utilization. Learn how to get started with Key Visualizer.
  • Try commenting out the code that performs Cloud Bigtable reads and writes. If the performance issue disappears, then you're probably using Cloud Bigtable in a way that results in suboptimal performance. If the performance issue persists, the issue is probably not related to Cloud Bigtable.
  • Ensure that you're creating as few clients as possible. Creating a client for Cloud Bigtable is a relatively expensive operation. Therefore, you should create the smallest possible number of clients (see the sketch after this list):

    • If you use replication, or if you use app profiles to identify different types of traffic to your instance, create one client per app profile and share the clients throughout your application.
    • If you don't use replication or app profiles, create a single client and share it throughout your application.

    If you're using the HBase client for Java, you create a Connection object rather than a client, so you should create as few connections as possible.

  • Make sure you're reading and writing many different rows in your table. Cloud Bigtable performs best when reads and writes are evenly distributed throughout your table, which helps Cloud Bigtable distribute the workload across all of the nodes in your cluster. If reads and writes cannot be spread across all of your Cloud Bigtable nodes, performance will suffer.

    If you find that you're reading and writing only a small number of rows, you might need to redesign your schema so that reads and writes are more evenly distributed.

  • Verify that you see approximately the same performance for reads and writes. If you find that reads are much faster than writes, you might be trying to read row keys that do not exist, or a large range of row keys that contains only a small number of rows.

    To make a valid comparison between reads and writes, you should aim for at least 90% of your reads to return valid results. Also, if you're reading a large range of row keys, measure performance based on the actual number of rows in that range, rather than the maximum number of rows that could exist.

  • Use the right type of write requests for your data. Choosing the optimal way to write your data helps maintain high performance.
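
As a sketch of the client-reuse guidance above, create the Cloud Bigtable client once at startup and share it across request handlers instead of constructing a new client per request. The project, instance, and table names and the server structure are placeholders.

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/bigtable"
    )

    // server holds a single shared Cloud Bigtable client for the lifetime of
    // the process.
    type server struct {
        tbl *bigtable.Table
    }

    func newServer(ctx context.Context) (*server, error) {
        // Create the client once; creating a client per request is expensive.
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
        if err != nil {
            return nil, err
        }
        return &server{tbl: client.Open("my-table")}, nil
    }

    func (s *server) handleRead(ctx context.Context, rowKey string) error {
        // Every request path reuses the shared client and table handle.
        _, err := s.tbl.ReadRow(ctx, rowKey)
        return err
    }

    func main() {
        ctx := context.Background()
        s, err := newServer(ctx)
        if err != nil {
            log.Fatalf("newServer: %v", err)
        }
        if err := s.handleRead(ctx, "row-00000001"); err != nil {
            log.Printf("read: %v", err)
        }
    }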
