Understanding Cloud Bigtable Performance

Suppose you want to visit a friend. Should you travel by airplane? It depends where you're going. If your friend lives nearby, there are probably better ways to get to your friend's house. But if your friend lives very far away, traveling by airplane is probably the fastest way to go.

Using Cloud Bigtable is a lot like traveling by airplane. Cloud Bigtable excels at handling very large amounts of data (terabytes or petabytes) over a relatively long period of time (hours or days). It's not designed to work with small amounts of data in short bursts. If you test Cloud Bigtable for 30 seconds with a few GB of data, you won't get an accurate picture of its performance.

Keep reading to learn more about the performance you can expect from Cloud Bigtable.

Performance for typical workloads

Under a typical workload, Cloud Bigtable delivers highly predictable performance. When everything is running smoothly, a typical workload can achieve the following performance for each node in the Cloud Bigtable cluster, depending on which type of storage the cluster uses:

Storage Type    Reads                 Writes                Scans
SSD             10,000 QPS¹ @ 6 ms    10,000 QPS @ 6 ms     220 MB/s
HDD             500 QPS @ 200 ms      10,000 QPS @ 50 ms    180 MB/s

These estimates assume that each row contains 1 KB of data.

¹ Queries per second. A query is a read or write operation against a single row.

In general, a cluster's performance increases linearly as you add nodes to the cluster. For example, if you create an SSD cluster with 10 nodes, the cluster can support up to 100,000 QPS for a typical read-only or write-only workload, with 6 ms latency for each read or write operation.
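To make the arithmetic concrete, here is a minimal Go sketch (a hypothetical helper, not part of any client library) that applies the linear scaling described above to the per-node SSD figures from the table:

```go
package main

import "fmt"

// Per-node estimates for an SSD cluster, taken from the table above
// (assumes ~1 KB rows and an evenly distributed read-only or write-only workload).
const (
	ssdQPSPerNode    = 10_000 // reads or writes per second, per node
	ssdScanMBPerNode = 220    // MB/s of scan throughput, per node
)

// estimateSSDCapacity is a hypothetical helper that scales the per-node
// figures linearly with the node count.
func estimateSSDCapacity(nodes int) (qps, scanMBps int) {
	return nodes * ssdQPSPerNode, nodes * ssdScanMBPerNode
}

func main() {
	qps, scan := estimateSSDCapacity(10)
	fmt.Printf("10-node SSD cluster: ~%d QPS, ~%d MB/s scan throughput\n", qps, scan)
}
```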

These performance estimates are guidelines, not hard and fast rules. Per-node performance will vary based on your workload and the typical size of each row in your table.

Also, these performance estimates reflect a read-only or write-only workload. Performance for a mixed workload of reads and writes will vary. Depending on your workload, the overall throughput might be less than the typical QPS for read-only or write-only workloads.

Causes of slower performance

There are several factors that can cause Cloud Bigtable to perform more slowly than the estimates shown above:

  • The table's schema is not designed correctly. To get good performance from Cloud Bigtable, it's essential to design a schema that makes it possible to distribute reads and writes evenly across each table. See Designing Your Schema for more information, and see the row-key sketch after this list.
  • The workload isn't appropriate for Cloud Bigtable. If you test with a small amount (< 300 GB) of data, or if you test for a very short period of time (seconds rather than minutes or hours), Cloud Bigtable won't be able to balance your data in a way that gives you good performance. It needs time to learn your access patterns, and it needs large enough shards of data to make use of all of the nodes in your cluster.
  • The rows in your Cloud Bigtable table contain large amounts of data. The performance estimates shown above assume that each row contains 1 KB of data. You can read and write larger amounts of data per row, but increasing the amount of data per row will also reduce QPS.
  • The rows in your Cloud Bigtable table contain a very large number of cells. It takes time for Cloud Bigtable to process each cell in a row. Also, each cell adds some overhead to the amount of data that's stored in your table and sent over the network. For example, if you're storing 1 KB (1,024 bytes) of data, it's much more space-efficient to store that data in a single cell, rather than spreading the data across 1,024 cells that each contain 1 byte. If you split your data across more cells than necessary, you might not get the best possible performance.
  • The Cloud Bigtable cluster doesn't have enough nodes. If your Cloud Bigtable cluster is overloaded, adding more nodes can improve performance. Use the monitoring tools to check whether the cluster is overloaded.
  • The Cloud Bigtable cluster was scaled up or scaled down recently. After you change the number of nodes in a cluster, it can take up to 20 minutes under load before you see a significant improvement in the cluster's performance.
  • The Cloud Bigtable cluster uses HDD disks. In most cases, your cluster should use SSD disks, which have significantly better performance than HDD disks. See Choosing between SSD and HDD storage for details.
  • The Cloud Bigtable instance is a development instance. Because a development instance's performance is equivalent to an instance with one single-node cluster, it will not perform as well as a production instance. In particular, you might see a large number of errors if the instance is under heavy load.
  • There are issues with the network connection. Network issues can reduce throughput and cause reads and writes to take longer than usual. In particular, you might see issues if your clients are not running in the same zone as your Cloud Bigtable cluster, or if your clients run outside of Google Cloud Platform.
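
For the schema point above, a common way to avoid hotspots is to lead the row key with a high-cardinality value (such as a user ID) rather than a timestamp. The following Go sketch uses the Cloud Bigtable Go client with hypothetical project, instance, table, and column-family names; it illustrates the pattern rather than prescribing a schema:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()
	// Hypothetical project, instance, table, and column-family names.
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
	if err != nil {
		log.Fatalf("bigtable.NewClient: %v", err)
	}
	defer client.Close()
	tbl := client.Open("user-events")

	userID := "user4711"
	eventTime := time.Now().UTC()

	// Anti-pattern: a key that starts with the timestamp. Sequential writes all
	// land at the end of the keyspace, so one tablet (and one node) takes the load.
	//   hotKey := fmt.Sprintf("%d#%s", eventTime.Unix(), userID)

	// Better: lead with a high-cardinality value such as the user ID, so writes
	// from many users start with different prefixes and spread across tablets.
	rowKey := fmt.Sprintf("%s#%d", userID, eventTime.Unix())

	mut := bigtable.NewMutation()
	mut.Set("events", "page", bigtable.Now(), []byte("/checkout"))
	if err := tbl.Apply(ctx, rowKey, mut); err != nil {
		log.Fatalf("Apply: %v", err)
	}
}
```

Because writes from many different users now begin with different prefixes, they spread across tablets instead of piling onto the tablet that holds the newest timestamps.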

Because different workloads can cause performance to vary, you should perform tests with your own workloads to obtain the most accurate benchmarks.

How Cloud Bigtable optimizes your data over time

To store the underlying data for each of your tables, Cloud Bigtable shards the data into multiple tablets, which can be moved between nodes in your Cloud Bigtable cluster. This storage method enables Cloud Bigtable to use two different strategies for optimizing your data over time:

  1. Cloud Bigtable tries to store roughly the same amount of data on each Cloud Bigtable node.
  2. Cloud Bigtable tries to distribute reads and writes equally across all Cloud Bigtable nodes.

Sometimes these strategies conflict with one another. For example, if one tablet's rows are read extremely frequently, Cloud Bigtable might store that tablet on its own node, even though this causes some nodes to store more data than others.

As part of this process, Cloud Bigtable might also split a tablet into two or more smaller tablets, either to reduce a tablet's size or to isolate hot rows within an existing tablet.

The following sections explain each of these strategies in more detail.

Distributing the amount of data across nodes

As you write data to a Cloud Bigtable table, Cloud Bigtable shards the table's data into tablets. Each tablet contains a contiguous range of rows within the table.

If you have written only a small amount of data to the table, Cloud Bigtable will store all of the tablets on a single node within your cluster:

A cluster with four tablets on a single node.

As more tablets accumulate, Cloud Bigtable will move some of them to other nodes in the cluster so that the amount of data is balanced more evenly across the cluster:

Additional tablets are distributed across multiple nodes.

Distributing reads and writes evenly across nodes

If you've designed your schema correctly, then reads and writes should be distributed fairly evenly across your entire table. However, there are some cases where you can't avoid accessing certain rows more frequently than others. Cloud Bigtable helps you deal with these cases by taking reads and writes into account when it balances tablets across nodes.

For example, suppose that 25% of reads are going to a small number of tablets within a cluster, and reads are spread evenly across all other tablets:

Out of 48 tablets, 25% of reads are going to 3 tablets.

Cloud Bigtable will redistribute the existing tablets so that reads are spread as evenly as possible across the entire cluster:

The three hot tablets are isolated on their own node.

Testing performance with Cloud Bigtable

If you're running a performance test that depends upon Cloud Bigtable, be sure to follow these steps as you plan and execute your test:

  1. Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.
  2. Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data. However, 300 GB of data is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use at least 100 GB of data per node.
  3. Stay below the recommended storage utilization per node. For details, see Storage utilization per node.
  4. Before you test, run a heavy pre-test for several minutes. This step gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.
  5. Run your test for at least 10 minutes. This step lets Cloud Bigtable further optimize your data, and it helps ensure that you will test reads from disk as well as cached reads from memory.

You can use the Cloud Bigtable loadtest tool, written in Go, as a starting point for developing your own performance test. The loadtest tool performs a simple workload made up of 50% reads and 50% writes.
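
If you want to prototype a similar mixed workload yourself, the sketch below (in Go, with hypothetical project, instance, table, and column-family names) alternates single-row reads and roughly 1 KB writes over random row keys and reports a rough QPS figure. It is only a starting point; a meaningful test should follow the steps above, including running for at least 10 minutes against at least 300 GB of data.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/rand"
	"time"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()
	// Hypothetical project, instance, table, and column-family names.
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
	if err != nil {
		log.Fatalf("bigtable.NewClient: %v", err)
	}
	defer client.Close()
	tbl := client.Open("loadtest-table")

	const ops = 1000
	start := time.Now()
	for i := 0; i < ops; i++ {
		// Random row keys spread operations across the keyspace.
		key := fmt.Sprintf("row-%08d", rand.Intn(1_000_000))
		if i%2 == 0 {
			// Write ~1 KB to a single cell, matching the row-size assumption above.
			mut := bigtable.NewMutation()
			mut.Set("cf", "col", bigtable.Now(), make([]byte, 1024))
			if err := tbl.Apply(ctx, key, mut); err != nil {
				log.Printf("write %s: %v", key, err)
			}
		} else {
			if _, err := tbl.ReadRow(ctx, key); err != nil {
				log.Printf("read %s: %v", key, err)
			}
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("%d ops in %v (%.0f QPS)\n", ops, elapsed, float64(ops)/elapsed.Seconds())
}
```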

Troubleshooting performance issues

If you think that Cloud Bigtable might be creating a performance bottleneck in your application, be sure to check all of the following:

  • Look at the Key Visualizer scans for your table. The Key Visualizer tool for Cloud Bigtable provides daily scans that show the usage patterns for each table in a cluster. Key Visualizer makes it easy to check whether your usage patterns are causing undesirable results, such as hotspots on specific rows or excessive CPU utilization. Learn how to get started with Key Visualizer.
  • Try commenting out the code that performs Cloud Bigtable reads and writes. If the performance issue disappears, then you're probably using Cloud Bigtable in a way that results in suboptimal performance. If the performance issue persists, the issue is probably not related to Cloud Bigtable.
  • Ensure that you're creating as few clients as possible. Creating a client for Cloud Bigtable is a relatively expensive operation. Therefore, you should create the smallest possible number of clients (see the sketch after this list):

    • If you use replication, or if you use app profiles to identify different types of traffic to your instance, create one client per app profile and share the clients throughout your application.
    • If you don't use replication or app profiles, create a single client and share it throughout your application.

    If you're using the HBase client for Java, you create a Connection object rather than a client, so you should create as few connections as possible.

  • Make sure you're reading and writing many different rows in your table. Cloud Bigtable performs best when reads and writes are evenly distributed throughout your table, which helps Cloud Bigtable distribute the workload across all of the nodes in your cluster. If reads and writes cannot be spread across all of your Cloud Bigtable nodes, performance will suffer.

    If you find that you're reading and writing only a small number of rows, you might need to redesign your schema so that reads and writes are more evenly distributed.

  • Verify that you see approximately the same performance for reads and writes. If you find that reads are much faster than writes, you might be trying to read row keys that do not exist, or a large range of row keys that contains only a small number of rows.

    To make a valid comparison between reads and writes, you should aim for at least 90% of your reads to return valid results. Also, if you're reading a large range of row keys, measure performance based on the actual number of rows in that range, rather than the maximum number of rows that could exist.
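
As an illustration of the single-client guidance above, the following Go sketch (hypothetical project, instance, and table names; no replication or app profiles assumed) creates one client at startup and shares the resulting table handle across all HTTP handlers rather than creating a client per request:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"cloud.google.com/go/bigtable"
)

// A single table handle, created once at startup and shared by all handlers.
var tbl *bigtable.Table

func main() {
	ctx := context.Background()
	// Hypothetical project, instance, and table names.
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
	if err != nil {
		log.Fatalf("bigtable.NewClient: %v", err)
	}
	tbl = client.Open("my-table")

	http.HandleFunc("/row", func(w http.ResponseWriter, r *http.Request) {
		// Reuse the shared handle instead of creating a new client per request.
		row, err := tbl.ReadRow(r.Context(), r.URL.Query().Get("key"))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Printf("read %d column families", len(row))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```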
