Package google.spanner.v1

Spanner

Cloud Spanner API

The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.

BeginTransaction

rpc BeginTransaction(BeginTransactionRequest) returns (Transaction)

Begins a new transaction. This step can often be skipped: Read, ExecuteSql and Commit can begin a new transaction as a side-effect.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

Commit

rpc Commit(CommitRequest) returns (CommitResponse)

Commits a transaction. The request includes the mutations to be applied to rows in the database.

Commit might return an ABORTED error. This can occur at any time; commonly, the cause is conflicts with concurrent transactions. However, it can also happen for a variety of other reasons. If Commit returns ABORTED, the caller should re-attempt the transaction from the beginning, re-using the same session.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

CreateSession

rpc CreateSession(CreateSessionRequest) returns (Session)

Creates a new session. A session can be used to perform transactions that read and/or modify data in a Cloud Spanner database. Sessions are meant to be reused for many consecutive transactions.

Sessions can only execute one transaction at a time. To execute multiple concurrent read-write/write-only transactions, create multiple sessions. Note that standalone reads and queries use a transaction internally, and count toward the one transaction limit.

Cloud Spanner limits the number of sessions that can exist at any given time; thus, it is a good idea to delete idle and/or unneeded sessions. Aside from explicit deletes, Cloud Spanner can delete sessions for which no operations are sent for more than an hour. If a session is deleted, requests to it return NOT_FOUND.

Idle sessions can be kept alive by sending a trivial SQL query periodically, e.g., "SELECT 1".
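
For example, a keep-alive can be as small as an ExecuteSqlRequest like the following sketch (the session name is illustrative; real session names are assigned by Cloud Spanner when the session is created):

{
  "session": "projects/<project>/instances/<instance>/databases/<database>/sessions/<session>",
  "sql": "SELECT 1"
}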

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

DeleteSession

rpc DeleteSession(DeleteSessionRequest) returns (Empty)

Ends a session, releasing server resources associated with it.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

ExecuteSql

rpc ExecuteSql(ExecuteSqlRequest) returns (ResultSet)

Executes an SQL query, returning all rows in a single reply. This method cannot be used to return a result set larger than 10 MiB; if the query yields more data than that, the query fails with a FAILED_PRECONDITION error.

Queries inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details.

Larger result sets can be fetched in streaming fashion by calling ExecuteStreamingSql instead.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

ExecuteStreamingSql

rpc ExecuteStreamingSql(ExecuteSqlRequest) returns (PartialResultSet)

Like ExecuteSql, except returns the result set as a stream. Unlike ExecuteSql, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

GetSession

rpc GetSession(GetSessionRequest) returns (Session)

Gets a session. Returns NOT_FOUND if the session does not exist. This is mainly useful for determining whether a session is still alive.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

ListSessions

rpc ListSessions(ListSessionsRequest) returns (ListSessionsResponse)

Lists all sessions in a given database.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

PartitionQuery

rpc PartitionQuery(PartitionQueryRequest) returns (PartitionResponse)

Creates a set of partition tokens that can be used to execute a query operation in parallel. Each of the returned partition tokens can be used by ExecuteStreamingSql to specify a subset of the query result to read. The same session and read-only transaction must be used by the PartitionQueryRequest used to create the partition tokens and the ExecuteSqlRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted or begins a new transaction.
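
For example, assuming the query is root partitionable and using illustrative names, a client might begin a read-only transaction with BeginTransaction and then request partitions:

{
  "session": "<session name>",
  "transaction": { "id": "<transaction id from BeginTransaction>" },
  "sql": "SELECT UserId, UserName FROM Users"
}

Then, for each partition_token in the PartitionResponse, it would issue an ExecuteStreamingSql request whose common fields exactly match the partition request:

{
  "session": "<session name>",
  "transaction": { "id": "<transaction id from BeginTransaction>" },
  "sql": "SELECT UserId, UserName FROM Users",
  "partition_token": "<token from PartitionResponse.partitions[i]>"
}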

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

PartitionRead

rpc PartitionRead(PartitionReadRequest) returns (PartitionResponse)

Creates a set of partition tokens that can be used to execute a read operation in parallel. Each of the returned partition tokens can be used by StreamingRead to specify a subset of the read result to read. The same session and read-only transaction must be used by the PartitionReadRequest used to create the partition tokens and the ReadRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted or begins a new transaction.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

Read

rpc Read(ReadRequest) returns (ResultSet)

Reads rows from the database using key lookups and scans, as a simple key/value style alternative to ExecuteSql. This method cannot be used to return a result set larger than 10 MiB; if the read matches more data than that, the read fails with a FAILED_PRECONDITION error.

Reads inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details.

Larger result sets can be yielded in streaming fashion by calling StreamingRead instead.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

Rollback

rpc Rollback(RollbackRequest) returns (Empty)

Rolls back a transaction, releasing any locks it holds. It is a good idea to call this for any transaction that includes one or more Read or ExecuteSql requests and ultimately decides not to commit.

Rollback returns OK if it successfully aborts the transaction, the transaction was already aborted, or the transaction is not found. Rollback never returns ABORTED.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

StreamingRead

rpc StreamingRead(ReadRequest) returns (PartialResultSet)

Like Read, except returns the result set as a stream. Unlike Read, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB.

Authorization Scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/spanner.data
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Auth Guide.

BeginTransactionRequest

The request for BeginTransaction.

Fields
session

string

Required. The session in which the transaction runs.

Authorization requires one or more of the following Google IAM permissions on the specified resource session:

  • spanner.databases.beginReadOnlyTransaction
  • spanner.databases.beginOrRollbackReadWriteTransaction

options

TransactionOptions

Required. Options for the new transaction.

CommitRequest

The request for Commit.

Fields
session

string

Required. The session in which the transaction to be committed is running.

Authorization requires the following Google IAM permission on the specified resource session:

  • spanner.databases.write

mutations[]

Mutation

The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list.

Union field transaction. Required. The transaction in which to commit. transaction can be only one of the following:
transaction_id

bytes

Commit a previously-started transaction.

single_use_transaction

TransactionOptions

Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the CommitRequest is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it is possible that the mutations are executed more than once. If this is undesirable, use BeginTransaction and Commit instead.
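
As an illustrative sketch, a CommitRequest that inserts one row in a single-use transaction might look like the following. The Users table and its columns are borrowed from the ResultSetMetadata example later on this page, and "1" is the decimal-string encoding of an INT64 value:

{
  "session": "<session name>",
  "single_use_transaction": { "read_write": {} },
  "mutations": [
    {
      "insert": {
        "table": "Users",
        "columns": ["UserId", "UserName"],
        "values": [["1", "Alice"]]
      }
    }
  ]
}

Because a single-use commit is not idempotent, retrying this exact request after an ambiguous failure could apply the mutations twice; the BeginTransaction plus transaction_id form avoids that.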

CommitResponse

The response for Commit.

Fields
commit_timestamp

Timestamp

The Cloud Spanner timestamp at which the transaction committed.

CreateSessionRequest

The request for CreateSession.

Fields
database

string

Required. The database in which the new session is created.

Authorization requires the following Google IAM permission on the specified resource database:

  • spanner.sessions.create

session

Session

The session to create.

DeleteSessionRequest

The request for DeleteSession.

Fields
name

string

Required. The name of the session to delete.

Authorization requires the following Google IAM permission on the specified resource name:

  • spanner.sessions.delete

ExecuteSqlRequest

The request for ExecuteSql and ExecuteStreamingSql.

Fields
session

string

Required. The session in which the SQL query should be performed.

Authorization requires the following Google IAM permission on the specified resource session:

  • spanner.databases.select

transaction

TransactionSelector

The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency.

sql

string

Required. The SQL query string.

params

Struct

The SQL query string can contain parameter placeholders. A parameter placeholder consists of '@' followed by the parameter name. Parameter names consist of any combination of letters, numbers, and underscores.

Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100"

It is an error to execute an SQL query with unbound parameters.

Parameter values are specified using params, which is a JSON object whose keys are parameter names, and whose values are the corresponding parameter values.

param_types

map<string, Type>

It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings.

In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL query parameters. See the definition of Type for more information about SQL types.
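
For example, the @msg_id placeholder from the example above could be bound and explicitly typed as INT64 like this sketch (the Messages query text is illustrative; INT64 parameter values are encoded as decimal strings):

"sql": "SELECT * FROM Messages WHERE id > @msg_id AND id < @msg_id + 100",
"params": { "msg_id": "1000" },
"param_types": { "msg_id": { "code": "INT64" } }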

resume_token

bytes

If this request is resuming a previously interrupted SQL query execution, resume_token should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new SQL query execution to resume where the last one left off. The rest of the request parameters must exactly match the request that yielded this token.

query_mode

QueryMode

Used to control the amount of debugging information returned in ResultSetStats. If partition_token is set, query_mode can only be set to QueryMode.NORMAL.

partition_token

bytes

If present, results will be restricted to the specified partition previously created using PartitionQuery(). There must be an exact match for the values of fields common to this message and the PartitionQueryRequest message used to create this partition_token.

QueryMode

Mode in which the query must be processed.

Enums
NORMAL The default mode, in which only the query result is returned, without any information about the query plan.
PLAN This mode returns only the query plan, without any result rows or execution statistics information.
PROFILE This mode returns both the query plan and the execution statistics along with the result rows.

GetSessionRequest

The request for GetSession.

Fields
name

string

Required. The name of the session to retrieve.

Authorization requires the following Google IAM permission on the specified resource name:

  • spanner.sessions.get

KeyRange

KeyRange represents a range of rows in a table or index.

A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key.

Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described here.

For example, consider the following table definition:

CREATE TABLE UserEvents (
  UserName STRING(MAX),
  EventDate STRING(10)
) PRIMARY KEY(UserName, EventDate);

The following keys name rows in this table:

["Bob", "2014-09-23"]
["Alfred", "2015-06-12"]

Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.

Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:

"start_closed": ["Bob", "2015-01-01"]
"end_closed": ["Bob", "2015-12-31"]

Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included.

For example, the following range includes all events for "Bob" that occurred during and after the year 2000:

"start_closed": ["Bob", "2000-01-01"]
"end_closed": ["Bob"]

The next example retrieves all events for "Bob":

"start_closed": ["Bob"]
"end_closed": ["Bob"]

To retrieve events before the year 2000:

"start_closed": ["Bob"]
"end_open": ["Bob", "2000-01-01"]

The following range includes all rows in the table:

"start_closed": []
"end_closed": []

This range returns all users whose UserName begins with any character from A to C:

"start_closed": ["A"]
"end_open": ["D"]

This range returns all users whose UserName begins with B:

"start_closed": ["B"]
"end_open": ["C"]

Key ranges honor column sort order. For example, suppose a table is defined as follows:

CREATE TABLE DescendingSortedTable (
  Key INT64,
  ...
) PRIMARY KEY(Key DESC);

The following range retrieves all rows with key values between 1 and 100 inclusive:

"start_closed": ["100"]
"end_closed": ["1"]

Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.

Fields
Union field start_key_type. The start key must be provided. It can be either closed or open. start_key_type can be only one of the following:
start_closed

ListValue

If the start is closed, then the range includes all rows whose first len(start_closed) key columns exactly match start_closed.

start_open

ListValue

If the start is open, then the range excludes rows whose first len(start_open) key columns exactly match start_open.

Union field end_key_type. The end key must be provided. It can be either closed or open. end_key_type can be only one of the following:
end_closed

ListValue

If the end is closed, then the range includes all rows whose first len(end_closed) key columns exactly match end_closed.

end_open

ListValue

If the end is open, then the range excludes rows whose first len(end_open) key columns exactly match end_open.

KeySet

KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.

If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.

Fields
keys[]

ListValue

A list of specific keys. Entries in keys should have exactly as many elements as there are columns in the primary or index key with which this KeySet is used. Individual key values are encoded as described here.

ranges[]

KeyRange

A list of key ranges. See KeyRange for more information about key range specifications.

all

bool

For convenience all can be set to true to indicate that this KeySet matches all keys in the table or index. Note that any keys specified in keys or ranges are only yielded once.
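
For example, using the UserEvents table from the KeyRange section, the following KeySet names one specific row plus all of Alfred's events in 2015 (a sketch):

{
  "keys": [["Bob", "2014-09-23"]],
  "ranges": [{
    "start_closed": ["Alfred", "2015-01-01"],
    "end_open": ["Alfred", "2016-01-01"]
  }]
}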

ListSessionsRequest

The request for ListSessions.

Fields
database

string

Required. The database in which to list sessions.

Authorization requires the following Google IAM permission on the specified resource database:

  • spanner.sessions.list

page_size

int32

Number of sessions to be returned in the response. If 0 or less, defaults to the server's maximum allowed page size.

page_token

string

If non-empty, page_token should contain a next_page_token from a previous ListSessionsResponse.

filter

string

An expression for filtering the results of the request. Filter rules are case insensitive. The fields eligible for filtering are:

  • labels.key where key is the name of a label

Some examples of using filters are:

  • labels.env:* --> The session has the label "env".
  • labels.env:dev --> The session has the label "env" and the value of the label contains the string "dev".
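
Putting these fields together, a request that lists sessions labeled env=dev might look like the following sketch (the database name is illustrative):

{
  "database": "projects/<project>/instances/<instance>/databases/<database>",
  "page_size": 100,
  "filter": "labels.env:dev"
}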

ListSessionsResponse

The response for ListSessions.

Fields
sessions[]

Session

The list of requested sessions.

next_page_token

string

next_page_token can be sent in a subsequent ListSessions call to fetch more of the matching sessions.

Mutation

A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a Commit call.

Fields
Union field operation. Required. The operation to perform. operation can be only one of the following:
insert

Write

Insert new rows in a table. If any of the rows already exist, the write or transaction fails with error ALREADY_EXISTS.

update

Write

Update existing rows in a table. If any of the rows do not already exist, the transaction fails with error NOT_FOUND.

insert_or_update

Write

Like insert, except that if the row already exists, then its column values are overwritten with the ones provided. Any column values not explicitly written are preserved.

replace

Write

Like insert, except that if the row already exists, it is deleted, and the column values provided are inserted instead. Unlike insert_or_update, this means any values not explicitly written become NULL.

delete

Delete

Delete rows from a table. Succeeds whether or not the named rows were present.

Delete

Arguments to delete operations.

Fields
table

string

Required. The table whose rows will be deleted.

key_set

KeySet

Required. The primary keys of the rows within table to delete. Delete is idempotent. The transaction will succeed even if some or all rows do not exist.

Write

Arguments to insert, update, insert_or_update, and replace operations.

Fields
table

string

Required. The table whose rows will be written.

columns[]

string

The names of the columns in table to be written.

The list of columns must contain enough columns to allow Cloud Spanner to derive values for all primary key columns in the row(s) to be modified.

values[]

ListValue

The values to be written. values can contain more than one list of values. If it does, then multiple rows are written, one for each entry in values. Each list in values must have exactly as many entries as there are entries in columns above. Sending multiple lists is equivalent to sending multiple Mutations, each containing one values entry and repeating table and columns. Individual values in each list are encoded as described here.
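
For example, an insert_or_update mutation that writes two rows to the Users table from the ResultSetMetadata example might look like this sketch (INT64 key values are encoded as decimal strings):

{
  "insert_or_update": {
    "table": "Users",
    "columns": ["UserId", "UserName"],
    "values": [
      ["1", "Alice"],
      ["2", "Bob"]
    ]
  }
}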

PartialResultSet

Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.

Fields
metadata

ResultSetMetadata

Metadata about the result set, such as row type information. Only present in the first response.

values[]

Value

A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.

Most values are encoded based on type as described here.

It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:

  • bool/number/null: cannot be chunked
  • string: concatenate the strings
  • list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
  • object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.

Some examples of merging:

# Strings are concatenated.
"foo", "bar" => "foobar"

# Lists of non-strings are concatenated.
[2, 3], [4] => [2, 3, 4]

# Lists are concatenated, but the last and first elements are merged
# because they are strings.
["a", "b"], ["c", "d"] => ["a", "bc", "d"]

# Lists are concatenated, but the last and first elements are merged
# because they are lists. Recursively, the last and first elements
# of the inner lists are merged because they are strings.
["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]

# Non-overlapping object fields are combined.
{"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}

# Overlapping object fields are merged.
{"a": "1"}, {"a": "2"} => {"a": "12"}

# Examples of merging objects containing lists of strings.
{"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}

For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:

{
  "metadata": { ... }
  "values": ["Hello", "W"]
  "chunked_value": true
  "resume_token": "Af65..."
}
{
  "values": ["orl"]
  "chunked_value": true
  "resume_token": "Bqp2..."
}
{
  "values": ["d"]
  "resume_token": "Zx1B..."
}

This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".

chunked_value

bool

If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.

resume_token

bytes

Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.

stats

ResultSetStats

Query plan and execution statistics for the query that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream.

Partition

Information returned for each partition returned in a PartitionResponse.

Fields
partition_token

bytes

This token can be passed to Read, StreamingRead, ExecuteSql, or ExecuteStreamingSql requests to restrict the results to those identified by this partition token.

PartitionOptions

Options for a PartitionQueryRequest and PartitionReadRequest.

Fields
partition_size_bytes

int64

Note: This hint is currently ignored by PartitionQuery and PartitionRead requests.

The desired data size for each partition generated. The default for this option is currently 1 GiB. This is only a hint. The actual size of each partition may be smaller or larger than this size request.

max_partitions

int64

Note: This hint is currently ignored by PartitionQuery and PartitionRead requests.

The desired maximum number of partitions to return. For example, this may be set to the number of workers available. The default for this option is currently 10,000. The maximum value is currently 200,000. This is only a hint. The actual number of partitions returned may be smaller or larger than this maximum count request.

PartitionQueryRequest

The request for PartitionQuery

Fields
session

string

Required. The session used to create the partitions.

Authorization requires the following Google IAM permission on the specified resource session:

  • spanner.databases.partitionQuery

transaction

TransactionSelector

Read-only snapshot transactions are supported; read-write and single-use transactions are not.

sql

string

The query request to generate partitions for. The request will fail if the query is not root partitionable. The query plan of a root partitionable query has a single distributed union operator. A distributed union operator conceptually divides one or more tables into multiple splits, remotely evaluates a subquery independently on each split, and then unions all results.

params

Struct

The SQL query string can contain parameter placeholders. A parameter placeholder consists of '@' followed by the parameter name. Parameter names consist of any combination of letters, numbers, and underscores.

Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100"

It is an error to execute an SQL query with unbound parameters.

Parameter values are specified using params, which is a JSON object whose keys are parameter names, and whose values are the corresponding parameter values.

param_types

map<string, Type>

It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings.

In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL query parameters. See the definition of Type for more information about SQL types.

partition_options

PartitionOptions

Additional options that affect how many partitions are created.

PartitionReadRequest

The request for PartitionRead

Fields
session

string

Required. The session used to create the partitions.

Authorization requires the following Google IAM permission on the specified resource session:

  • spanner.databases.partitionRead

transaction

TransactionSelector

Read-only snapshot transactions are supported; read-write and single-use transactions are not.

table

string

Required. The name of the table in the database to be read.

index

string

If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information.

columns[]

string

The columns of table to be returned for each row matching this request.

key_set

KeySet

Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in table to be yielded, unless index is present. If index is present, then key_set instead names index keys in index.

It is not an error for the key_set to name rows that do not exist in the database. Read yields nothing for nonexistent rows.

partition_options

PartitionOptions

Additional options that affect how many partitions are created.

PartitionResponse

The response for PartitionQuery or PartitionRead

Fields
partitions[]

Partition

Partitions created by this request.

transaction

Transaction

Transaction created by this request.

PlanNode

Node information for nodes appearing in a QueryPlan.plan_nodes.

Fields
index

int32

The PlanNode's index in the node list.

kind

Kind

Used to determine the type of node. May be needed for visualizing different kinds of nodes differently. For example, if the node is a SCALAR node, it will have a condensed representation which can be used to directly embed a description of the node in its parent.

display_name

string

The display name for the node.

short_representation

ShortRepresentation

Condensed representation for SCALAR nodes.

metadata

Struct

Attributes relevant to the node contained in a group of key-value pairs. For example, a Parameter Reference node could have the following information in its metadata:

{
  "parameter_reference": "param1",
  "parameter_type": "array"
}

execution_stats

Struct

The execution statistics associated with the node, contained in a group of key-value pairs. Only present if the plan was returned as a result of a profile query. For example, number of executions, number of rows/time per execution etc.

Kind

The kind of PlanNode. Distinguishes between the two different kinds of nodes that can appear in a query plan.

Enums
KIND_UNSPECIFIED Not specified.
RELATIONAL Denotes a Relational operator node in the expression tree. Relational operators represent iterative processing of rows during query execution. For example, a TableScan operation that reads rows from a table.
SCALAR Denotes a Scalar node in the expression tree. Scalar nodes represent non-iterable entities in the query plan. For example, constants or arithmetic operators appearing inside predicate expressions or references to column names.

ShortRepresentation

Condensed representation of a node and its subtree. Only present for SCALAR PlanNode(s).

Fields
description

string

A string representation of the expression subtree rooted at this node.

subqueries

map<string, int32>

A mapping of (subquery variable name) -> (subquery node id) for cases where the description string of this node references a SCALAR subquery contained in the expression subtree rooted at this node. The referenced SCALAR subquery may not necessarily be a direct child of this node.

QueryPlan

Contains an ordered list of nodes appearing in the query plan.

Fields
plan_nodes[]

PlanNode

The nodes in the query plan. Plan nodes are returned in pre-order starting with the plan root. Each PlanNode's id corresponds to its index in plan_nodes.

ReadRequest

The request for Read and StreamingRead.

Fields
session

string

Required. The session in which the read should be performed.

Authorization requires the following Google IAM permission on the specified resource session:

  • spanner.databases.read

transaction

TransactionSelector

The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency.

table

string

Required. The name of the table in the database to be read.

index

string

If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information.

columns[]

string

The columns of table to be returned for each row matching this request.

key_set

KeySet

Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in table to be yielded, unless index is present. If index is present, then key_set instead names index keys in index.

If the partition_token field is empty, rows are yielded in table primary key order (if index is empty) or index key order (if index is non-empty). If the partition_token field is not empty, rows will be yielded in an unspecified order.

It is not an error for the key_set to name rows that do not exist in the database. Read yields nothing for nonexistent rows.

limit

int64

If greater than zero, only the first limit rows are yielded. If limit is zero, the default is no limit. A limit cannot be specified if partition_token is set.

resume_token

bytes

If this request is resuming a previously interrupted read, resume_token should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new read to resume where the last read left off. The rest of the request parameters must exactly match the request that yielded this token.

partition_token

bytes

If present, results will be restricted to the specified partition previously created using PartitionRead(). There must be an exact match for the values of fields common to this message and the PartitionReadRequest message used to create this partition_token.
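
For example, a ReadRequest that reads Bob's 2015 events from the UserEvents table defined in the KeyRange section might look like this sketch (the session name is illustrative):

{
  "session": "<session name>",
  "transaction": { "single_use": { "read_only": { "strong": true } } },
  "table": "UserEvents",
  "columns": ["UserName", "EventDate"],
  "key_set": {
    "ranges": [{
      "start_closed": ["Bob", "2015-01-01"],
      "end_closed": ["Bob", "2015-12-31"]
    }]
  }
}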

ResultSet

Results from Read or ExecuteSql.

Fields
metadata

ResultSetMetadata

Metadata about the result set, such as row type information.

rows[]

ListValue

Each element in rows is a row whose format is defined by metadata.row_type. The ith element in each row matches the ith field in metadata.row_type. Elements are encoded based on type as described here.

stats

ResultSetStats

Query plan and execution statistics for the query that produced this result set. These can be requested by setting ExecuteSqlRequest.query_mode.

ResultSetMetadata

Metadata about a ResultSet or PartialResultSet.

Fields
row_type

StructType

Indicates the field names and types for the rows in the result set. For example, a SQL query like "SELECT UserId, UserName FROM Users" could return a row_type value like:

"fields": [
  { "name": "UserId", "type": { "code": "INT64" } },
  { "name": "UserName", "type": { "code": "STRING" } }
]

transaction

Transaction

If the read or SQL query began a transaction as a side-effect, the information about the new transaction is yielded here.

ResultSetStats

Additional statistics about a ResultSet or PartialResultSet.

Fields
query_plan

QueryPlan

QueryPlan for the query associated with this result.

query_stats

Struct

Aggregated statistics from the execution of the query. Only present when the query is profiled. For example, a query could return the statistics as follows:

{
  "rows_returned": "3",
  "elapsed_time": "1.22 secs",
  "cpu_time": "1.19 secs"
}

RollbackRequest

The request for Rollback.

Fields
session

string

Required. The session in which the transaction to roll back is running.

Authorization requires the following Google IAM permission on the specified resource session:

  • spanner.databases.beginOrRollbackReadWriteTransaction

transaction_id

bytes

Required. The transaction to roll back.

Session

A session in the Cloud Spanner API.

Fields
name

string

The name of the session. This is always system-assigned; values provided when creating a session are ignored.

labels

map<string, string>

The labels for the session.

  • Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.
  • Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?.
  • No more than 64 labels can be associated with a given session.

See https://goo.gl/xmQnxf for more information on and examples of labels.
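
For example, a session carrying the env label used by the ListSessions filter examples might include the following fragment (a sketch; the label value is illustrative):

"labels": {
  "env": "dev"
}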

create_time

Timestamp

Output only. The timestamp when the session is created.

approximate_last_use_time

Timestamp

Output only. The approximate timestamp when the session is last used. It is typically earlier than the actual last use time.

StructType

StructType defines the fields of a STRUCT type.

Fields
fields[]

Field

The list of fields that make up this struct. Order is significant, because values of this struct type are represented as lists, where the order of field values matches the order of fields in the StructType. In turn, the order of fields matches the order of columns in a read request, or the order of fields in the SELECT clause of a query.

Field

Message representing a single field of a struct.

Fields
name

string

The name of the field. For reads, this is the column name. For SQL queries, it is the column alias (e.g., "Word" in the query "SELECT 'hello' AS Word"), or the column name (e.g., "ColName" in the query "SELECT ColName FROM Table"). Some columns might have an empty name (e.g., "SELECT UPPER(ColName)"). Note that a query result can contain multiple fields with the same name.

type

Type

The type of the field.

Transaction

A transaction.

Fields
id

bytes

id may be used to identify the transaction in subsequent Read, ExecuteSql, Commit, or Rollback calls.

Single-use read-only transactions do not have IDs, because single-use transactions do not support multiple requests.

read_timestamp

Timestamp

For snapshot read-only transactions, the read timestamp chosen for the transaction. Not returned by default: see TransactionOptions.ReadOnly.return_read_timestamp.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

TransactionOptions

Transactions

Each session can have at most one active transaction at a time. After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.

Transaction Modes

Cloud Spanner supports two transaction modes:

  1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.

  2. Snapshot read-only. This transaction type provides guaranteed consistency across several reads, but does not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past. Snapshot read-only transactions do not need to be committed.

For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed.

Transactions may only read/write data in a single database. They may, however, read/write data in different tables within that database.

Locking Read-Write Transactions

Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.

Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.

Reads performed within a transaction acquire locks on the data being read. Writes can only be done at commit time, after all reads have been completed. Conceptually, a read-write transaction consists of zero or more reads or SQL queries followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction.

Semantics

Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.

Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.

Retrying Aborted Transactions

When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous.

Under some circumstances (e.g., many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of wall time spent retrying.

Idle Transactions

A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.

If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.

Snapshot Read-Only Transactions

Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.

Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.

Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.

Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so).

To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp.

The types of timestamp bound are:

  • Strong (the default).
  • Bounded staleness.
  • Exact staleness.

If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.

Each type of timestamp bound is discussed in detail below.

Strong

Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.

Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.

See TransactionOptions.ReadOnly.strong.

Exact Staleness

These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp <= the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished.

The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.

These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.

See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness.

Bounded Staleness

Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.

All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.

Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.

As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.

Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.

See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp.

Old Read Timestamps and Garbage Collection

Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION.

Fields
Union field mode. Required. The type of transaction. mode can be only one of the following:
read_write

ReadWrite

Transaction may write.

Authorization to begin a read-write transaction requires spanner.databases.beginOrRollbackReadWriteTransaction permission on the session resource.

read_only

ReadOnly

Transaction will not write.

Authorization to begin a read-only transaction requires spanner.databases.beginReadOnlyTransaction permission on the session resource.

ReadOnly

Message type to initiate a read-only transaction.

Fields
return_read_timestamp

bool

If true, the Cloud Spanner-selected read timestamp is included in the Transaction message that describes the transaction.

Union field timestamp_bound. How to choose the timestamp for the read-only transaction. timestamp_bound can be only one of the following:
strong

bool

Read at a timestamp where all previously committed transactions are visible.

min_read_timestamp

Timestamp

Executes all reads at a timestamp >= min_read_timestamp.

This is useful for requesting fresher data than some previous read, or data that is fresh enough to observe the effects of some previously committed transaction whose timestamp is known.

Note that this option can only be used in single-use transactions.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

max_staleness

Duration

Read data at a timestamp >= NOW - max_staleness seconds. Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud Spanner chooses the exact timestamp, this mode works even if the client's local clock is substantially skewed from Cloud Spanner commit timestamps.

Useful for reading the freshest data available at a nearby replica, while bounding the possible staleness if the local replica has fallen behind.

Note that this option can only be used in single-use transactions.

read_timestamp

Timestamp

Executes all reads at the given timestamp. Unlike other modes, reads at a specific timestamp are repeatable; the same read at the same timestamp always returns the same data. If the timestamp is in the future, the read will block until the specified timestamp, modulo the read's deadline.

Useful for large scale consistent reads such as mapreduces, or for coordinating many reads against a consistent snapshot of the data.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

exact_staleness

Duration

Executes all reads at a timestamp that is exact_staleness old. The timestamp is chosen soon after the read is started.

Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud Spanner chooses the exact timestamp, this mode works even if the client's local clock is substantially skewed from Cloud Spanner commit timestamps.

Useful for reading at nearby replicas without the distributed timestamp negotiation overhead of max_staleness.
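
For example, a read-only transaction that reads at a timestamp exactly 15 seconds in the past and reports the chosen timestamp could be configured as follows (a sketch; the "15s" form assumes the standard JSON encoding of Duration):

{
  "read_only": {
    "exact_staleness": "15s",
    "return_read_timestamp": true
  }
}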

ReadWrite

Message type to initiate a read-write transaction. Currently this transaction type has no options.

TransactionSelector

This message is used to select the transaction in which a Read or ExecuteSql call runs.

See TransactionOptions for more information about transactions.

Fields
Union field selector. If no fields are set, the default is a single use transaction with strong concurrency. selector can be only one of the following:
single_use

TransactionOptions

Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query.

id

bytes

Execute the read or SQL query in a previously-started transaction.

begin

TransactionOptions

Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction.
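
For example, the three selector forms look like the following sketches (the transaction id is illustrative):

{ "single_use": { "read_only": { "strong": true } } }

{ "id": "<id returned by BeginTransaction or ResultSetMetadata.transaction>" }

{ "begin": { "read_write": {} } }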

Type

Type indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.

Fields
code

TypeCode

Required. The TypeCode for this type.

array_element_type

Type

If code == ARRAY, then array_element_type is the type of the array elements.

struct_type

StructType

If code == STRUCT, then struct_type provides type information for the struct's fields.
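
For example, the Type of an array of 64-bit integers nests an array_element_type (a sketch):

{
  "code": "ARRAY",
  "array_element_type": { "code": "INT64" }
}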

TypeCode

TypeCode is used as part of Type to indicate the type of a Cloud Spanner value.

Each legal value of a type can be encoded to or decoded from a JSON value, using the encodings described below. All Cloud Spanner values can be null, regardless of type; nulls are always encoded as a JSON null.

Enums
TYPE_CODE_UNSPECIFIED Not specified.
BOOL Encoded as JSON true or false.
INT64 Encoded as string, in decimal format.
FLOAT64 Encoded as number, or the strings "NaN", "Infinity", or "-Infinity".
TIMESTAMP

Encoded as string in RFC 3339 timestamp format. The time zone must be present, and must be "Z".

If the schema has the column option allow_commit_timestamp=true, the placeholder string "spanner.commit_timestamp()" can be used to instruct the system to insert the commit timestamp associated with the transaction commit.

DATE Encoded as string in RFC 3339 date format.
STRING Encoded as string.
BYTES Encoded as a base64-encoded string, as described in RFC 4648, section 4.
ARRAY Encoded as list, where the list elements are represented according to array_element_type.
STRUCT Encoded as list, where list element i is represented according to struct_type.fields[i].