This page explains how to design a schema for a Cloud Bigtable table. Before you read this page, you should be familiar with the overview of Cloud Bigtable.
Designing a Cloud Bigtable schema is very different from designing a schema for a relational database. As you design your Cloud Bigtable schema, keep the following concepts in mind:
- Each table has only one index, the row key. There are no secondary indices.
- Rows are sorted lexicographically by row key, from the lowest to the highest byte string. Row keys are sorted in big-endian byte order (sometimes called network byte order), the binary equivalent of alphabetical order. This means that integers do not automatically sort numerically: the keys 3 and 20 sort as 20, then 3. If you want to store integers and sort them numerically, pad them with leading zeroes so that 03 sorts before 20. This is especially true for timestamps where range-based queries are desired.
- Columns are grouped by column family and sorted in lexicographic order within the column family.
- All operations are atomic at the row level. For example, if you update two
rows in a table, it's possible that one row will be updated successfully and the
other update will fail. For this reason, follow these guidelines:
- Avoid schema designs that require atomicity across rows.
- In general, keep all information for an entity in a single row. However, if your use case doesn't require you to make atomic updates or reads to an entity, you can split the entity across multiple rows. Splitting across multiple rows is recommended if the entity data is large (hundreds of MB).
- Ideally, both reads and writes should be distributed evenly across the row space of the table.
- Related entities should be stored in adjacent rows, which makes reads more efficient.
- Cloud Bigtable tables are sparse. Empty columns don't take up any space. As a result, it often makes sense to create a very large number of columns, even if most columns are empty in most rows.
- It's better to have a few large tables than many small tables. Cloud Bigtable earned its name because it performs best with really big tables. You might justifiably want a separate table for a completely different use case that requires a completely different schema, but you should not use separate tables for similar data. For example, you shouldn't create a new table because it's a new year or you have a new customer.
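The integer-sorting pitfall noted in the list above can be illustrated in a few lines of plain Python (pad_key is a hypothetical helper for this sketch, not part of any Bigtable client library):

```python
def pad_key(n: int, width: int = 10) -> str:
    """Zero-pad an integer so its string form sorts numerically."""
    return str(n).zfill(width)

# Unpadded integers sort lexicographically: "20" comes before "3".
assert sorted(["3", "20"]) == ["20", "3"]

# Zero-padded keys sort in true numeric order.
assert sorted([pad_key(20), pad_key(3)]) == [pad_key(3), pad_key(20)]
```

The same padding applies to any numeric component of a row key, including millisecond timestamps, whenever you plan to scan a range of rows.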
Cloud Bigtable places size limits on the data that you store within your tables. Make sure you consider these limits as you design your schema.
Because Cloud Bigtable reads rows atomically, it's especially important to limit the total amount of data that you store in a single row. As a best practice, store a maximum of 10 MB in a single cell and 100 MB in a single row. You must also stay below the hard limits on data size within cells and rows.
These size limits are measured in binary megabytes (MB), where 1 MB is 2^20 bytes. This unit of measurement is also known as a mebibyte (MiB).
Choosing a row key
To get the best performance out of Cloud Bigtable, it's essential to think carefully about how you compose your row key. That's because the most efficient Cloud Bigtable queries use the row key, a row key prefix, or a row range to retrieve the data. Other types of queries trigger a full table scan, which is much less efficient. By choosing the correct row key now, you can avoid a painful data-migration process later.
Start by asking how you'll use the data that you plan to store. For example:
- User information: Do you need quick access to information about connections between users (for example, whether user A follows user B)?
- User-generated content: If you show users a sample of a large amount of user-generated content, such as status updates, how will you decide which status updates to display to a given user?
- Time series data: Will you often need to retrieve the most recent N records, or records that fall within a certain time range? If you're storing data for several kinds of events, will you need to filter based on the type of event?
By understanding your needs up front, you can ensure that your row key, and your overall schema design, provide enough flexibility to query your data efficiently.
Types of row keys
This section describes some of the most commonly used types of row keys and explains when to use each type of key.
As a general rule of thumb, keep your row keys reasonably short. Long row keys take up additional memory and storage and increase the time it takes to get responses from the Cloud Bigtable server.
Reverse domain names
If you're storing data about entities that can be represented as domain names, consider using a reverse domain name (for example, com.company.product) as the row key. Using a reverse domain name is an especially good idea if each row's data tends to overlap with adjacent rows. In this case, Cloud Bigtable can compress your data more efficiently.
This approach works best when your data is spread across many different reverse domain names. If you expect to store most of your data in a small number of reverse domain names, consider other values for your row key. Otherwise, you might overload a tablet by pushing most writes to a single node in your cluster.
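Reversing a domain name is a one-line transformation; the sketch below (reverse_domain is a hypothetical helper) shows how reversed names cause related subdomains to sort into adjacent rows:

```python
def reverse_domain(domain: str) -> str:
    """Reverse the labels of a domain name for use as a row key."""
    return ".".join(reversed(domain.split(".")))

assert reverse_domain("product.company.com") == "com.company.product"

# Reversed, all of company.com's subdomains share a prefix and sort together.
keys = sorted(map(reverse_domain,
                  ["mail.company.com", "company.com", "product.company.com"]))
assert keys == ["com.company", "com.company.mail", "com.company.product"]
```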
String identifiers
If you're storing data about entities that can be identified with a simple string (for example, user IDs), you might want to use the string identifier as the row key, or as a portion of the row key.
In the past, this page recommended using a hash of the string identifier, rather than the actual string identifier. We no longer recommend using a hash. We've found that hashed row keys make it very difficult to troubleshoot issues with Cloud Bigtable, because hashed row keys are effectively meaningless. For example, if your row key is a hash of the user ID, it will be difficult or impossible to find out what user ID is tied to the row key.
Use human-readable values instead of hashing the row key. Also, if your row key includes multiple values, separate those values with a delimiter. These practices make it much easier to use the Key Visualizer tool to troubleshoot issues with Cloud Bigtable.
Timestamps
If you often need to retrieve data based on the time when it was recorded, it's a good idea to include a timestamp as part of your row key. Using the timestamp by itself as the row key is not recommended, as most writes would be pushed onto a single node. For the same reason, avoid placing a timestamp at the start of the row key.
For example, your application might need to record performance-related data, such as CPU and memory usage, once per second for a large number of machines. Your row key for this data could combine an identifier for the machine with a timestamp for the data (for example, machine_4223421#1425330757685).
If you usually retrieve the most recent records first, you can use a reversed
timestamp in the row key by subtracting the timestamp from your programming
language's maximum value for long integers (in Java,
java.lang.Long.MAX_VALUE). With a reversed timestamp, the records will be
ordered from most recent to least recent.
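A reversed-timestamp key can be sketched as follows (reversed_timestamp_key is a hypothetical helper; the constant mirrors java.lang.Long.MAX_VALUE, and the machine-ID-plus-timestamp layout is the one described above):

```python
JAVA_LONG_MAX = 2**63 - 1  # java.lang.Long.MAX_VALUE

def reversed_timestamp_key(machine_id: str, ts_millis: int) -> str:
    """Row key whose timestamp component sorts newest-first.

    Zero-padding to 19 digits keeps lexicographic order equal to
    numeric order for the reversed timestamp."""
    return f"{machine_id}#{JAVA_LONG_MAX - ts_millis:019d}"

older = reversed_timestamp_key("machine_4223421", 1425330757685)
newer = reversed_timestamp_key("machine_4223421", 1425330760000)
# The more recent record sorts before the older one.
assert newer < older
```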
Multiple values in a single row key
Because the only way to query Cloud Bigtable efficiently is by row key, it's often useful to include multiple identifiers in your row key. When your row key includes multiple values, it's especially important to have a clear understanding of how you'll use your data.
For example, suppose your application enables users to post messages, and users
can mention one another in posts. You want an efficient way to list all the
users who have tagged a specific user in a post. One way to achieve this goal
is to use a row key that contains the tagged username, followed by the username
that did the tagging, with the two separated by a delimiter (for example,
wmckinley#gwashington). To find out who has tagged a specific username, or to
show all the posts that tag that username, you can simply retrieve the range of
rows whose row keys start with the username.
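Retrieving the range of rows that share a prefix can be sketched in plain Python (mention_key and prefix_range are hypothetical helpers; a real client would pass the computed start and end keys to a row-range read):

```python
def mention_key(tagged_user: str, tagging_user: str) -> str:
    """Row key: tagged username, a delimiter, then the tagging username."""
    return f"{tagged_user}#{tagging_user}"

def prefix_range(prefix: str) -> tuple[str, str]:
    """Start (inclusive) and end (exclusive) keys covering every row key
    that begins with prefix. Incrementing the last character produces the
    end key; this sketch assumes that character is not already the maximum."""
    return prefix, prefix[:-1] + chr(ord(prefix[-1]) + 1)

key = mention_key("wmckinley", "gwashington")
start, end = prefix_range("wmckinley#")
# Every row tagging wmckinley falls inside the computed range.
assert start <= key < end
```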
It's important to create a row key that still makes it possible to retrieve a well-defined range of rows. Otherwise, your query will require a table scan, which is much slower than retrieving specific rows. For example, suppose you're storing performance-related data once per second. If your row key consisted of a timestamp followed by the machine identifier (for example, 1425330757685#machine_4223421), there would be no efficient way to limit your query to a specific machine; you could only limit your query based on a time range.
Row key prefixes
The first value in a multi-value row key is called a row key prefix. Well-planned row key prefixes let you take advantage of Cloud Bigtable's sorting order to store related data in contiguous rows. Storing related data in contiguous rows enables you to access related data as a range of rows, rather than running inefficient table scans.
Using row key prefixes for multi-tenancy
Row key prefixes provide a scalable solution for a "multi-tenancy" use case, a scenario in which you store similar data, using the same data model, on behalf of multiple clients. Using one table for all tenants is the most performant way to store and access multi-tenant data.
For example, let's say you store and track purchase histories on behalf of many companies. You can use your unique ID for each company as a row key prefix. All data for a tenant is stored in contiguous rows in the same table, and you can query or filter by using the row key prefix. Then, when a company is no longer your customer and you need to delete the purchase history data you were storing for the company, you can drop the range of rows that use that customer's row key prefix.
In contrast, if you store data on behalf of each company in its own table, you can experience performance and scalability issues. You are also more likely to inadvertently reach Cloud Bigtable's limit of 1,000 tables per instance. After an instance reaches this limit, Cloud Bigtable prevents you from creating more tables in the instance.
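The multi-tenancy pattern can be sketched with an in-memory dict standing in for a table (the tenant IDs here are invented; in a real deployment, offboarding a tenant would use a drop-row-range operation rather than per-row deletes):

```python
# Stand-in for a table keyed by "<tenant_id>#<order_id>" row keys.
table = {
    "acme#order_001": b"...",
    "acme#order_002": b"...",
    "globex#order_001": b"...",
}

def rows_for_tenant(table: dict, tenant_id: str) -> list[str]:
    """All row keys that share the tenant's row key prefix."""
    prefix = f"{tenant_id}#"
    return sorted(k for k in table if k.startswith(prefix))

assert rows_for_tenant(table, "acme") == ["acme#order_001", "acme#order_002"]

# Offboarding a tenant means dropping the row range that shares its prefix.
for key in rows_for_tenant(table, "acme"):
    del table[key]
assert rows_for_tenant(table, "acme") == []
```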
Row keys to avoid
Some types of row keys can make it difficult to query your data or result in poor performance. This section describes some types of row keys that you should avoid using in Cloud Bigtable.
Domain names
Avoid using standard, non-reversed domain names as row keys. Using standard
domain names makes it inefficient to retrieve all of the rows within a portion
of the domain (for example, all rows that relate to
company.com will be in
separate row ranges like
product.company.com and so
on). In addition, using standard domain names causes rows to be sorted in such
a way that related data is not grouped together in one place, which can result
in less efficient compression.
Sequential numeric IDs
Suppose your system assigns a numeric ID to each of your application's users. You might be tempted to use the user's numeric ID as the row key for your table. However, because new users are more likely to be active users, this approach is likely to push most of your traffic to a small number of nodes.
A safer approach is to use a reversed version of the user's numeric ID, which spreads traffic more evenly across all of the nodes for your Cloud Bigtable table.
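Reversing a numeric ID can be sketched as follows (reversed_id_key is a hypothetical helper; note that the ID is zero-padded to a fixed width before reversing, so keys of different lengths still compare sensibly):

```python
def reversed_id_key(user_id: int, width: int = 10) -> str:
    """Zero-pad, then reverse the digits so sequential IDs scatter."""
    return str(user_id).zfill(width)[::-1]

# Consecutive IDs no longer produce adjacent row keys: the leading
# character (the original last digit) differs, spreading writes out.
assert reversed_id_key(1000234) == "4320001000"
assert reversed_id_key(1000235) == "5320001000"
```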
Frequently updated identifiers
Avoid using a single row key to identify a value that must be updated very
frequently. For example, if you store memory-usage data once per second,
do not use a single row key named
memusage and update the row repeatedly.
This type of operation overloads the tablet that stores the frequently used row.
It can also cause a row to exceed its size limit, because a cell's previous
values take up space for a while.
Instead, store one value per row, using a row key that contains the type of
metric, a delimiter, and a timestamp. For example, to track memory usage over
time, you could use row keys similar to
memusage#1423523569918. This strategy
is efficient because in Cloud Bigtable, creating a new row takes no more
time than creating a new cell. In addition, this strategy enables you to quickly
read data from a specific date range by calculating the appropriate start and end keys.
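Calculating start and end row keys for a date range can be sketched like this (metric_key and day_range are hypothetical helpers; the timestamp component is zero-padded so lexicographic order matches numeric order):

```python
from datetime import datetime, timedelta, timezone

def metric_key(metric: str, ts_millis: int) -> str:
    """Row key: metric name, a delimiter, and a padded millisecond timestamp."""
    return f"{metric}#{ts_millis:013d}"

def day_range(metric: str, year: int, month: int, day: int) -> tuple[str, str]:
    """Start (inclusive) and end (exclusive) row keys for one UTC day."""
    start = datetime(year, month, day, tzinfo=timezone.utc)
    end = start + timedelta(days=1)
    return (metric_key(metric, int(start.timestamp() * 1000)),
            metric_key(metric, int(end.timestamp() * 1000)))

start, end = day_range("memusage", 2015, 3, 2)
# A sample recorded on 2015-03-02 UTC falls inside the day's key range.
assert start <= metric_key("memusage", 1425330757685) < end
```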
For values that change very frequently, such as a counter that is updated hundreds of times each minute, it's best to simply keep the data in memory, at the application layer, and write new rows to Cloud Bigtable periodically.
Hashed values
As discussed above, an earlier version of this page recommended using hashed values in row keys. We no longer recommend this practice. It results in row keys that are basically meaningless, which makes it challenging to use the Key Visualizer tool to troubleshoot issues with Cloud Bigtable.
Use human-readable values instead of hashed values. If your row key includes multiple values, separate those values with a delimiter.
Column families and column qualifiers
This section provides guidance on how to think about column families and column qualifiers within your table.
In Cloud Bigtable, unlike in HBase, you can use up to about 100 column families while maintaining excellent performance. As a result, whenever a row contains multiple values that are related to one another, it's a good practice to group those values into the same column family. Grouping data into column families enables you to retrieve data from a single family, or multiple families, rather than retrieving all of the data in each row. Group data as closely as you can to get just the information that you need, but no more, in your most frequent API calls.
Also, the names of your column families should be short, because they're included in the data that is transferred for each request.
Because Cloud Bigtable tables are sparse, you can create as many column qualifiers as you need in each row. There is no space penalty for empty cells in a row. As a result, it often makes sense to treat column qualifiers as data. For example, if your table is storing user posts, you could use the unique identifier for each post as the column qualifier.
That said, you should avoid splitting your data across more column qualifiers than necessary, which can result in rows that have a very large number of non-empty cells. It takes time for Cloud Bigtable to process each cell in a row. Also, each cell adds some overhead to the amount of data that's stored in your table and sent over the network. For example, if you're storing 1 KB (1,024 bytes) of data, it's much more space-efficient to store that data in a single cell, rather than spreading the data across 1,024 cells that each contain 1 byte. If you normally read or write a few related values all at once, consider storing all of those values together in one cell, using a format that allows you to extract the individual values later (such as the protocol buffer binary format).
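The space trade-off can be sketched with Python's struct module standing in for a serialization format such as protocol buffers (the metric names and layout are illustrative):

```python
import struct

def pack_metrics(cpu: float, mem: float, disk: float) -> bytes:
    """Pack three related metrics into a single cell value."""
    return struct.pack(">ddd", cpu, mem, disk)

def unpack_metrics(cell: bytes) -> tuple[float, float, float]:
    """Recover the individual values from the packed cell."""
    return struct.unpack(">ddd", cell)

cell = pack_metrics(0.75, 0.42, 0.91)
# One 24-byte cell instead of three cells, each with per-cell overhead.
assert len(cell) == 24
assert unpack_metrics(cell) == (0.75, 0.42, 0.91)
```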
Similarly, it's a good idea to keep the names of column qualifiers short, which helps to reduce the amount of data that is transferred for each request.
Cloud Bigtable has a limit of 1,000 tables per instance. In most cases, you should have far fewer tables than that. In other database systems, you might choose to store data in multiple tables based on the subject and number of columns. In Cloud Bigtable, however, you are better off storing all your data in one big table. You assign a unique row key prefix to each data set, so that Cloud Bigtable stores the related data in a contiguous range of rows that you can then query by row key prefix.
Creating many small tables is a Cloud Bigtable anti-pattern for a few reasons:
- Sending requests to many different tables can increase backend connection overhead, resulting in increased tail latency.
- Having multiple tables of different sizes can disrupt the behind-the-scenes load balancing that makes Cloud Bigtable performant.
What's next
- Learn how to design a schema for time-series data.
- Review the types of write requests you can send to Cloud Bigtable.