The distributed architecture of Cloud Spanner lets you design your schema to avoid hotspots: situations in which too many requests sent to the same server saturate that server's resources and cause high latency.
This page describes best practices for designing your schemas to avoid creating hotspots. One way to avoid hotspots is to adjust the schema design to allow Spanner to split and distribute the data across multiple servers. Distributing data across servers helps your Spanner database operate efficiently, particularly when performing bulk data insertions.
Choose a primary key to prevent hotspots
As mentioned in Schema and data model, you should be careful when choosing a primary key in the schema design to not accidentally create hotspots in your database. One cause of hotspots is having a column whose value monotonically increases as the first key part, because this results in all inserts occurring at the end of your key space. This pattern is undesirable because Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that will end up doing all the work.
For example, suppose you want to maintain a last access timestamp column on rows
of the UserAccessLog
table. The following table definition, which uses a
timestamp as the first key part, is an anti-pattern if the
table will see a high rate of inserts:
Google Standard SQL
```sql
-- ANTI-PATTERN: USING A COLUMN WHOSE VALUE MONOTONICALLY INCREASES OR
-- DECREASES AS THE FIRST KEY PART OF A HIGH WRITE RATE TABLE
CREATE TABLE UserAccessLog (
  LastAccess TIMESTAMP NOT NULL,
  UserId INT64 NOT NULL,
  ...
) PRIMARY KEY (LastAccess, UserId);
```
PostgreSQL
```sql
-- ANTI-PATTERN: USING A COLUMN WHOSE VALUE MONOTONICALLY INCREASES OR
-- DECREASES AS THE FIRST KEY PART OF A HIGH WRITE RATE TABLE
CREATE TABLE UserAccessLog (
  LastAccess TIMESTAMPTZ NOT NULL,
  UserId bigint NOT NULL,
  ...
  PRIMARY KEY (LastAccess, UserId)
);
```
The problem here is that rows will be written to this table in order of last access timestamp, and because last access timestamps are always increasing, they're always written to the end of the table. The hotspot is created because a single Spanner server will receive all of the writes, which overloads that one server.
The diagram below illustrates this pitfall:
The UserAccessLog
table above includes five example rows of data, which
represent five different users taking some sort of user action about a
millisecond apart from each other. The diagram also annotates the order in which
the rows are inserted (the labeled arrows indicate the order of writes for each
row). Because inserts are ordered by timestamp, and the timestamp value is
always increasing, the inserts are always added to the end of the table and are
directed at the same split. (As discussed in
Schema and data model, a
split is a set of rows from one or more related tables that are stored in order
of row key.)
This is problematic because Spanner assigns work to different servers in units of splits, so the server assigned to this particular split ends up handling all the insert requests. As the frequency of user access events increases, so does the frequency of insert requests to that server. The server is then prone to becoming a hotspot, as indicated by the red border and background above. Note that in this simplified illustration, each server handles at most one split, but in actuality each Spanner server can be assigned more than one split.
When more rows are appended to the table, the split grows, and when it reaches approximately 8 GB, Spanner creates another split, as described in Load-based splitting. Subsequent new rows are appended to this new split, and the server that is assigned to it becomes the new potential hotspot.
When hotspots occur, you might observe that your inserts are slow and other work
on the same server might slow down. Changing the LastAccess
column to descending order doesn't solve this problem, because then all the writes
are inserted at the beginning of the table instead, which still sends all the inserts
to a single server.
Schema design best practice #1: Do not choose a column whose value monotonically increases or decreases as the first key part for a high write rate table.
Swap the order of keys
One way to spread writes over the key space is to swap the order of the keys so that the column that contains the monotonically increasing or decreasing value is not the first key part:
Google Standard SQL
```sql
CREATE TABLE UserAccessLog (
  UserId INT64 NOT NULL,
  LastAccess TIMESTAMP NOT NULL,
  ...
) PRIMARY KEY (UserId, LastAccess);
```
PostgreSQL
```sql
CREATE TABLE UserAccessLog (
  UserId bigint NOT NULL,
  LastAccess TIMESTAMPTZ NOT NULL,
  ...
  PRIMARY KEY (UserId, LastAccess)
);
```
In this modified schema, inserts are now first ordered by UserId
, rather than
by chronological last access timestamp. This schema spreads writes among
different splits because it's unlikely that a single user will produce thousands
of events per second.
The diagram below illustrates the five rows from the UserAccessLog
table
ordered by UserId
instead of by access timestamp:
Here the UserAccessLog
data is chunked into three splits, with each split
containing on the order of a thousand rows of ordered UserId
values. This is a
reasonable estimate of how the user data could be split, assuming each row
contains about 1 MB of user data and given a maximum split size of approximately
8 GB. Even though the user events occurred about a millisecond
apart, each event was raised by a different user, so the order of inserts is
much less likely to create a hotspot compared with ordering by timestamp.
See also the related best practice for ordering timestamp-based keys.
Hash the unique key and spread the writes across logical shards
Another common technique for spreading the load across multiple servers is to create a column that contains the hash of the actual unique key, then use the hash column (or the hash column and the unique key columns together) as the primary key. This pattern helps avoid hotspots, because new rows are spread more evenly across the key space.
You can use the hash value to create logical shards, or partitions, in your
database. (In a physically sharded database, the rows are spread across several
databases. In a logically sharded database, the shards are defined by the data
in the table.) For example, to spread writes to the UserAccessLog
table across
N logical shards, you could prepend a ShardId
key column to the table:
Google Standard SQL
```sql
CREATE TABLE UserAccessLog (
  ShardId INT64 NOT NULL,
  LastAccess TIMESTAMP NOT NULL,
  UserId INT64 NOT NULL,
  ...
) PRIMARY KEY (ShardId, LastAccess, UserId);
```
PostgreSQL
```sql
CREATE TABLE UserAccessLog (
  ShardId bigint NOT NULL,
  LastAccess TIMESTAMPTZ NOT NULL,
  UserId bigint NOT NULL,
  ...
  PRIMARY KEY (ShardId, LastAccess, UserId)
);
```
To compute the ShardId
, hash a combination of the primary key columns and then
calculate modulo N of the hash. For example:
ShardId = hash(LastAccess and UserId) % N
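As a sketch, the shard computation could look like the following Python, assuming a SHA-256 hash and a hypothetical shard count of 10. Spanner doesn't care which hash you use; it only matters that the write path and the read path agree on the function and on N:

```python
import hashlib

N_SHARDS = 10  # hypothetical shard count; the read path must use the same value


def compute_shard_id(last_access_micros: int, user_id: int, n: int = N_SHARDS) -> int:
    """Hash the key columns together, then take the result modulo N."""
    key = f"{last_access_micros}|{user_id}".encode()
    digest = hashlib.sha256(key).digest()
    # Interpret the first 8 bytes of the digest as an unsigned integer.
    return int.from_bytes(digest[:8], "big") % n


# Events with adjacent timestamps from different users land on
# (very likely) different shards, spreading the writes.
shard = compute_shard_id(1700000000000000, 42)
```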
Your choice of hash function and combination of columns determines how the rows are spread across the key space. Spanner will then create splits across the rows to optimize performance. Note that the splits might not align with the logical shards.
The diagram below illustrates how using a hash to create three logical shards can spread write throughput more evenly across servers:
Here the UserAccessLog
table is ordered by ShardId
, which is calculated as a
hash function of key columns. The five UserAccessLog
rows are chunked into
three logical shards, each of which is coincidentally in a different split. The
inserts are spread evenly among the splits, which balances write throughput to
the three servers that handle the splits.
Your choice of hash function will determine how well your insertions are spread across the key range. You don't need a cryptographic hash, although a cryptographic hash can be a good choice. When picking a hash function, you need to consider several factors:
- Avoiding hotspots. A function that results in more hash values tends to reduce hotspots.
- Reading efficiency. Reads across all hash values are faster if there are fewer hash values to scan.
- Node count.
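To make the read trade-off concrete, the following Python sketch (with a hypothetical shard count) builds a "since time X" query over the sharded table. Because ShardId is the first key part, such a read must cover every shard value; the IN list is what grows as you add shards:

```python
N_SHARDS = 10  # hypothetical shard count, shared with the write path


def accesses_since_query(n_shards: int = N_SHARDS) -> str:
    """Build a query that reads recent accesses across every logical shard.

    Rows are keyed by ShardId first, so a "since time X" read has to cover
    all shard values; more shards means more key ranges to scan.
    """
    shard_list = ", ".join(str(s) for s in range(n_shards))
    return (
        "SELECT UserId, LastAccess FROM UserAccessLog "
        f"WHERE ShardId IN ({shard_list}) AND LastAccess > @since"
    )
```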
Use a Universally Unique Identifier (UUID)
You can use a Universally Unique Identifier (UUID) as defined by RFC 4122 as the primary key. Version 4 UUID is recommended, because it uses random values in the bit sequence. Version 1 UUID stores the timestamp in the high order bits and is not recommended.
There are several ways to store the UUID as the primary key:
- In a STRING(36) column.
- In a pair of INT64 columns.
- In a BYTES(16) column.
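As an illustration of those three storage options, the following Python sketch generates a version 4 UUID and derives each representation. One caveat this sketch makes explicit: Spanner's INT64 is signed, so storing the two 64-bit halves requires mapping them into signed range:

```python
import uuid

u = uuid.uuid4()  # version 4: random bits, so keys spread across the key space

as_string = str(u)   # fits a STRING(36) column
as_bytes = u.bytes   # fits a BYTES(16) column


def to_signed64(x: int) -> int:
    """Map an unsigned 64-bit value into a signed INT64 range."""
    return x - (1 << 64) if x >= (1 << 63) else x


high = to_signed64(u.int >> 64)             # first INT64 column
low = to_signed64(u.int & ((1 << 64) - 1))  # second INT64 column
```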
There are a few disadvantages to using a UUID:
- They are slightly large, using 16 bytes or more. Other options for primary keys don't use this much storage.
- They carry no information about the record. For example, a primary key of SingerId and AlbumId has an inherent meaning, while a UUID does not.
- You lose locality between records that are related, which is why using a UUID eliminates hotspots.
Bit-reverse sequential values
When you generate unique primary keys that are numerical, the high order bits of subsequent numbers should be distributed roughly equally over the entire number space. One way to do this is to generate sequential numbers by conventional means, then bit-reverse them to obtain the final values.
Reversing the bits maintains unique values across the primary keys. You need to store only the reversed value, because you can recalculate the original value in your application code.
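A minimal sketch of the bit-reversal step, assuming 64-bit keys. (Note that reversing sets the high bit for odd inputs, which falls outside signed INT64 range; a real implementation would map the result into signed range, as with the UUID halves above.)

```python
def bit_reverse_64(n: int) -> int:
    """Reverse the 64-bit pattern of a non-negative sequence number."""
    result = 0
    for i in range(64):
        if n & (1 << i):
            result |= 1 << (63 - i)
    return result


# Sequential inputs map to keys spread far apart in the number space...
keys = [bit_reverse_64(i) for i in (1, 2, 3)]
# ...and reversing again recovers the original sequence number, so only
# the reversed value needs to be stored.
originals = [bit_reverse_64(k) for k in keys]
```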
Use descending order for timestamp-based keys
If you have a table for your history that's keyed by timestamp, consider using descending order for the key column(s) if any of the following apply:
- If you want to read the most recent history, you're using an interleaved table for the history, and you're reading the parent row. In this case, with a DESC timestamp column, the latest history entries are stored adjacent to the parent row. Otherwise, reading the parent row and its recent history will require a seek in the middle to skip over the older history.
- If you're reading sequential entries in reverse chronological order, and you don't know exactly how far back you're going. For example, you might use a SQL query with a LIMIT to get the most recent N events, or you might plan to cancel the read after you've read a certain number of rows. In these cases, you want to start with the most recent entries and read sequentially older entries until your condition has been met, which Spanner does more efficiently for timestamp keys that are stored in descending order.
Add the DESC
keyword to make the timestamp key descending. For example:
Google Standard SQL
```sql
CREATE TABLE UserAccessLog (
  UserId INT64 NOT NULL,
  LastAccess TIMESTAMP NOT NULL,
  ...
) PRIMARY KEY (UserId, LastAccess DESC);
```
Schema design best practice #2: Choose ascending or descending order for timestamp keys based on your queries: use descending order if readers want the newest entries first, and ascending order if they want the oldest first.
Use an interleaved index on a column whose value monotonically increases or decreases
Similar to the previous primary key anti-pattern, it's also a bad idea to create non-interleaved indexes on columns whose values are monotonically increasing or decreasing, even if they aren't primary key columns.
For example, suppose you define the following table, in which LastAccess
is a
non-primary-key column:
Google Standard SQL
```sql
CREATE TABLE Users (
  UserId INT64 NOT NULL,
  LastAccess TIMESTAMP,
  ...
) PRIMARY KEY (UserId);
```
PostgreSQL
```sql
CREATE TABLE Users (
  UserId bigint NOT NULL,
  LastAccess TIMESTAMPTZ,
  ...
  PRIMARY KEY (UserId)
);
```
It might seem convenient to define an index on the LastAccess
column for
quickly querying the database for user accesses "since time X", like this:
Google Standard SQL
```sql
-- ANTI-PATTERN: CREATING A NON-INTERLEAVED INDEX ON A COLUMN WHOSE VALUE
-- MONOTONICALLY INCREASES OR DECREASES ON A HIGH WRITE RATE COLUMN
CREATE NULL_FILTERED INDEX UsersByLastAccess ON Users(LastAccess);
```
PostgreSQL
```sql
-- ANTI-PATTERN: CREATING A NON-INTERLEAVED INDEX ON A COLUMN WHOSE VALUE
-- MONOTONICALLY INCREASES OR DECREASES ON A HIGH WRITE RATE COLUMN
CREATE INDEX UsersByLastAccess ON Users(LastAccess)
WHERE LastAccess IS NOT NULL;
```
However, this results in the same pitfall as described in the previous best practice, because indexes are implemented as tables under the hood, and the resulting index table would use a column whose value monotonically increases as its first key part.
It is okay to create an interleaved index like this though, because rows of interleaved indexes are interleaved in corresponding parent rows, and it's unlikely for a single parent row to produce thousands of events per second.
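For reference, an interleaved index might look like the following sketch, which assumes a UserAccessLog table that is itself interleaved in Users (the index name and column choice are illustrative):

```sql
-- Assumes UserAccessLog is interleaved in Users; the index rows are then
-- stored alongside the corresponding parent Users rows.
CREATE INDEX UserAccessLogByUserAndAccess
    ON UserAccessLog(UserId, LastAccess),
    INTERLEAVE IN Users;
```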
Schema design best practice #3: Do not create a non-interleaved index on a high write rate column whose value monotonically increases or decreases. If an interleaved index is not an option, apply the techniques used for the base table primary key design to the index columns instead—for example, add a `ShardId`.
What's next
- Look through examples of schema designs.
- Learn about bulk loading data.