
Cloud Bigtable schema tips: Key salting

October 18, 2022
Greg Colella

Software Engineer, Cloud Bigtable

Billy Jacobson

Developer Advocate, Cloud Bigtable

Cloud Bigtable is a low-latency, high-throughput NoSQL database. With NoSQL, you want to design a schema that can scale and adapt as your business grows. When working with large real-world datasets, there will likely be access-pattern outliers with significantly more activity, and those require extra planning. In this article, we'll learn how to optimize a Bigtable schema to increase performance for highly active rows in an otherwise well-balanced schema.

Row key design refresher

Bigtable performs best when throughput is evenly distributed across the entire row key space and can be spread across all the nodes. Bigtable rows are physically stored in tablets containing groups of contiguous row keys, and tablets are distributed across the available nodes. If rows on the same tablet receive a disproportionately large share of requests compared to other tablets on that node, performance can suffer.

Typically, row keys are designed to be optimized for particular queries. For example, to center queries around individual users, you may put a user id at the beginning, like so: user_id-xxx-yyy. When some users are significantly more active than others, as is the case for celebrity accounts, reads and writes to their rows can cause hotspotting by putting too much pressure on specific nodes.

If we can distribute the logical row by physically dividing it amongst multiple tablets, then the rows can get balanced across the available nodes and reduce the hotspot. 

Key prefix salting

A well-distributed user id would typically work as a row key prefix, so we can use this as the starting point for our row key design:
user_id-xxx-yyy

One strategy to distribute this unbalanced throughput across all Bigtable nodes is to prepend an additional value to the row key design: 
01-user_id-xxx-yyy
02-user_id-xxx-yyy

This example has two physical rows corresponding to one logical row, which divides the throughput in half. It distributes the rows for a particular user id across the rest of the keyspace. Since their prefixes differ, the rows can live on different tablets and are more likely to be hosted on multiple nodes. Note that both prefixes may still land on the same node, or one prefix may be split across multiple nodes; the goal of this setup is simply to give the load-balancing mechanism more options.

Choosing a prefix

It's important to choose a prefix that doesn't add much complexity to requests. If we used random prefixes, each get request would turn into multiple get requests to ensure the correct row was located. If the prefix is deterministic from the row key, single-row read and write requests need only minimal changes.

If we would like N divisions, we can take the hash of the entire existing row key modulo N. We will also refer to N as our salt range.

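For example, here is a minimal sketch in Java; the salt range of 4, the use of String.hashCode(), and the zero-padded prefix are illustrative choices, not requirements:

```java
// Compute a salted row key: hash the full logical key, take it modulo N,
// and prepend the result as a fixed-width prefix.
public class KeySalter {
  // N: the salt range, i.e. how many physical divisions per logical row.
  private static final int SALT_RANGE = 4;

  public static String getSaltedRowKey(String rowKey) {
    // Math.floorMod keeps the result non-negative even when hashCode() is negative.
    int salt = Math.floorMod(rowKey.hashCode(), SALT_RANGE);
    // Zero-pad so salted keys sort consistently, e.g. "03-user_id-xxx-yyy".
    return String.format("%02d-%s", salt, rowKey);
  }
}
```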

Point lookups and writes still work, since the physical key can be computed from the logical key. Salting won't eliminate the hotspots, but it spreads them into N hotspots of strength 1/N each. These less severe hotspots can be more easily processed by individual nodes.

Prefix options

If you have common scans over prefixes that you would like to stay intact, you can also hash just part of the row key rather than the entire row key. 

For a row key of the format user_id-site-timestamp, you might want efficient scans over user_id and site combinations. Here, we can leave off the timestamp when creating the hash, so the time-series data for those combinations will always be grouped together.

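A sketch of this variant, assuming the user_id-site-timestamp layout above; only the user_id-site portion feeds the hash, so all rows for one combination share a salt:

```java
// Salt based only on the user_id-site portion of the key, so all
// time-series rows for that combination stay under one salted prefix.
public static String getSaltedRowKey(String userId, String site, long timestamp) {
  String scanPrefix = userId + "-" + site;
  int salt = Math.floorMod(scanPrefix.hashCode(), SALT_RANGE);
  return String.format("%02d-%s-%d", salt, scanPrefix, timestamp);
}
```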

Keys that share a frequently scanned logical prefix can still be scanned efficiently.

This strategy is less resistant to hotspots: the same problem that salting is supposed to mitigate can come up again if individual user_id-site combinations receive significant access.

Implementation

To implement this in your code, you'll need to change the places where you make requests to Bigtable data. You can view the full source code example on GitHub.

Writing

Using this new technique, if you want to write data, follow these steps:

  1. Take the row key you intend to write to

  2. Compute the prefix using your hash function

  3. Construct the salted row key by concatenating the prefix and row key

  4. Then use the salted row key for writing your data

You will need to integrate this flow everywhere you write data.

In Java, it would look something like this:

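This sketch uses the Cloud Bigtable Java client; the project, instance, table, column family, and cell values are hypothetical placeholders, and getSaltedRowKey is the helper sketched earlier:

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.RowMutation;

// Hypothetical identifiers for illustration.
String projectId = "my-project";
String instanceId = "my-instance";
String tableId = "user-events";

try (BigtableDataClient dataClient = BigtableDataClient.create(projectId, instanceId)) {
  String rowKey = "user_id-xxx-yyy";
  // Compute the physical (salted) key from the logical key, then write to it.
  String saltedRowKey = KeySalter.getSaltedRowKey(rowKey);
  RowMutation mutation =
      RowMutation.create(tableId, saltedRowKey)
          .setCell("cell_plan", "data_plan", "10GB");
  dataClient.mutateRow(mutation);
}
```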

Reading

Gets

To read individual rows in a table with salted keys, you follow the same initial steps as when writing the data:

  1. Take the row key you intend to read

  2. Compute the prefix using your hash function

  3. Construct the salted row key by concatenating the prefix and row key

  4. Then use the salted row key for reading your data

Since the physical row key is computed deterministically from the logical row key, only one read needs to be issued for each logical key.

In Java, it would look something like this:

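A sketch with the same hypothetical names as the write example; because the mapping is deterministic, a single readRow call is enough:

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Row;

try (BigtableDataClient dataClient = BigtableDataClient.create(projectId, instanceId)) {
  // One logical key maps to exactly one physical key, so a single read suffices.
  String saltedRowKey = KeySalter.getSaltedRowKey("user_id-xxx-yyy");
  Row row = dataClient.readRow(tableId, saltedRowKey);
}
```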

Scans

You can follow these steps for each scan:

  1. Take the row key prefix you intend to scan

  2. For each salt value from 0 to N-1 (each potential salt option)

    1. Construct the salted row key prefix by concatenating the salt value and the row key prefix

    2. Then use the salted row key for your prefix scan

    3. Issue this scan in parallel

  3. Combine the results of all the scans

Let's look at an example. Say you wanted to get all the data for a user and one subcategory; you would do a prefix scan on "user_id-xxx-". If you're working with salted rows, you need one prefix scan per salt value. If our salt range is 4, then we would do 4 prefix scans:

  • 00-user_id-xxx-
  • 01-user_id-xxx-
  • 02-user_id-xxx-
  • 03-user_id-xxx-

For the best performance, issue each scan in parallel rather than sending all the prefixes in one request. Since the requests run in parallel, the rows may not be returned in sorted order. If row order is important, you will have to do some additional sorting once the results are received.

Because the physical row keys are no longer one contiguous range, these scans may consume more Bigtable CPU, which is an important consideration when choosing a salting factor for a scan-heavy workload. Large scans, however, may be more performant, since more resources can be used in parallel to serve the request.

In Java, it would look something like this:

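A sketch of the parallel fan-out, reusing the assumptions above; the thread pool, in-memory merge, and the 3-character salt prefix width are illustrative simplifications:

```java
import com.google.api.gax.rpc.ServerStream;
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Query;
import com.google.cloud.bigtable.data.v2.models.Row;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Issue one prefix scan per salt value in parallel, then merge and sort.
List<Row> saltedScan(BigtableDataClient dataClient, String tableId, String logicalPrefix)
    throws Exception {
  ExecutorService executor = Executors.newFixedThreadPool(SALT_RANGE);
  List<Future<List<Row>>> futures = new ArrayList<>();
  for (int salt = 0; salt < SALT_RANGE; salt++) {
    String saltedPrefix = String.format("%02d-%s", salt, logicalPrefix);
    futures.add(executor.submit(() -> {
      // Each task runs one prefix scan over a single salted keyspace.
      List<Row> rows = new ArrayList<>();
      ServerStream<Row> stream =
          dataClient.readRows(Query.create(tableId).prefix(saltedPrefix));
      for (Row row : stream) {
        rows.add(row);
      }
      return rows;
    }));
  }
  // Combine all partial results. Parallel scans return rows out of global
  // order, so re-sort by the logical key (the salted key minus "NN-").
  List<Row> results = new ArrayList<>();
  for (Future<List<Row>> future : futures) {
    results.addAll(future.get());
  }
  results.sort(Comparator.comparing((Row r) -> r.getKey().toStringUtf8().substring(3)));
  executor.shutdown();
  return results;
}
```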

Forward-looking migrations

It can be difficult to make a large change to an existing dataset, so one way to migrate is to apply the salt only going forward. If you have timestamps at the end of the key, change the code to salt row keys past a certain fixed point in time, and use unsalted keys for old/existing data, as in the sketch below.
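A hedged sketch of that cutover logic, assuming the user_id-site-timestamp key layout; the cutover timestamp and helper names are hypothetical. Reads and scans need to apply the same rule: salted keys after the cutover, unsalted keys before it.

```java
// Hypothetical cutover point: keys with timestamps at or after this moment
// get salted; everything older keeps its original, unsalted key.
private static final long CUTOVER_MILLIS = 1666051200000L; // 2022-10-18T00:00:00Z

public static String toPhysicalKey(String userId, String site, long timestampMillis) {
  String logicalKey = userId + "-" + site + "-" + timestampMillis;
  if (timestampMillis < CUTOVER_MILLIS) {
    return logicalKey; // old/existing data stays unsalted
  }
  return KeySalter.getSaltedRowKey(logicalKey); // new data gets the salt prefix
}
```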
