Index
Spanner (interface)
BatchCreateSessionsRequest (message)
BatchCreateSessionsResponse (message)
BatchWriteRequest (message)
BatchWriteRequest.MutationGroup (message)
BatchWriteResponse (message)
BeginTransactionRequest (message)
CommitRequest (message)
CommitResponse (message)
CommitResponse.CommitStats (message)
CreateSessionRequest (message)
DeleteSessionRequest (message)
DirectedReadOptions (message)
DirectedReadOptions.ExcludeReplicas (message)
DirectedReadOptions.IncludeReplicas (message)
DirectedReadOptions.ReplicaSelection (message)
DirectedReadOptions.ReplicaSelection.Type (enum)
ExecuteBatchDmlRequest (message)
ExecuteBatchDmlRequest.Statement (message)
ExecuteBatchDmlResponse (message)
ExecuteSqlRequest (message)
ExecuteSqlRequest.QueryMode (enum)
ExecuteSqlRequest.QueryOptions (message)
GetSessionRequest (message)
KeyRange (message)
KeySet (message)
ListSessionsRequest (message)
ListSessionsResponse (message)
Mutation (message)
Mutation.Delete (message)
Mutation.Write (message)
PartialResultSet (message)
Partition (message)
PartitionOptions (message)
PartitionQueryRequest (message)
PartitionReadRequest (message)
PartitionResponse (message)
PlanNode (message)
PlanNode.ChildLink (message)
PlanNode.Kind (enum)
PlanNode.ShortRepresentation (message)
QueryPlan (message)
ReadRequest (message)
RequestOptions (message)
RequestOptions.Priority (enum)
ResultSet (message)
ResultSetMetadata (message)
ResultSetStats (message)
RollbackRequest (message)
Session (message)
StructType (message)
StructType.Field (message)
Transaction (message)
TransactionOptions (message)
TransactionOptions.PartitionedDml (message)
TransactionOptions.ReadOnly (message)
TransactionOptions.ReadWrite (message)
TransactionSelector (message)
Type (message)
TypeAnnotationCode (enum)
TypeCode (enum)
Spanner
Cloud Spanner API
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
BatchCreateSessions |
---|
Creates multiple new sessions. This API can be used to initialize a session cache on the clients. See https://goo.gl/TgSFN2 for best practices on session cache management.
|
BatchWrite |
---|
Batches the supplied mutation groups in a collection of efficient transactions. All mutations in a group are committed atomically. However, mutations across groups can be committed non-atomically in an unspecified order and thus, they must be independent of each other. Partial failure is possible, i.e., some groups may have been committed successfully, while some may have failed. The results of individual batches are streamed into the response as the batches are applied. BatchWrite requests are not replay protected, meaning that each mutation group may be applied more than once. Replays of non-idempotent mutations may have undesirable effects. For example, replays of an insert mutation may produce an already exists error or if you use generated or commit timestamp-based keys, it may result in additional rows being added to the mutation's table. We recommend structuring your mutation groups to be idempotent to avoid this issue.
|
BeginTransaction |
---|
Begins a new transaction. This step can often be skipped: Read, ExecuteSql and Commit can begin a new transaction as a side-effect.
|
Commit |
---|
Commits a transaction. The request includes the mutations to be applied to rows in the database. Commit might return an ABORTED error. If it does, the caller should retry the transaction from the beginning, reusing the same session.
On very rare occasions, Commit might return UNKNOWN. This can happen, for example, if the client job experiences a 1+ hour networking failure. At that point, Cloud Spanner has lost track of the transaction outcome, and we recommend that you perform another read from the database to see the state of things as they are now.
|
CreateSession |
---|
Creates a new session. A session can be used to perform transactions that read and/or modify data in a Cloud Spanner database. Sessions are meant to be reused for many consecutive transactions. Sessions can only execute one transaction at a time. To execute multiple concurrent read-write/write-only transactions, create multiple sessions. Note that standalone reads and queries use a transaction internally, and count toward the one transaction limit. Active sessions use additional server resources, so it is a good idea to delete idle and unneeded sessions. Aside from explicit deletes, Cloud Spanner may delete sessions for which no operations are sent for more than an hour. If a session is deleted, requests to it return NOT_FOUND. Idle sessions can be kept alive by sending a trivial SQL query periodically, e.g., "SELECT 1".
|
DeleteSession |
---|
Ends a session, releasing server resources associated with it. This will asynchronously trigger cancellation of any operations that are running with this session.
|
ExecuteBatchDml |
---|
Executes a batch of SQL DML statements. This method allows many statements to be run with lower latency than submitting them sequentially with ExecuteSql. Statements are executed in sequential order. A request can succeed even if a statement fails. The ExecuteBatchDmlResponse.status field in the response provides information about the statement that failed. Clients must inspect this field to determine whether an error occurred. Execution stops after the first failed statement; the remaining statements are not executed.
|
ExecuteSql |
---|
Executes an SQL statement, returning all results in a single reply. This method cannot be used to return a result set larger than 10 MiB; if the query yields more data than that, the query fails with a FAILED_PRECONDITION error. Operations inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details. Larger result sets can be fetched in streaming fashion by calling ExecuteStreamingSql instead. The query string can be SQL or Graph Query Language (GQL).
|
ExecuteStreamingSql |
---|
Like ExecuteSql, except returns the result set as a stream. Unlike ExecuteSql, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB. The query string can be SQL or Graph Query Language (GQL).
|
GetSession |
---|
Gets a session. Returns NOT_FOUND if the session does not exist. This is mainly useful for determining whether a session is still alive.
|
ListSessions |
---|
Lists all sessions in a given database.
|
PartitionQuery |
---|
Creates a set of partition tokens that can be used to execute a query operation in parallel. Each of the returned partition tokens can be used by ExecuteStreamingSql to specify a subset of the query result to read. The same session and read-only transaction must be used by the PartitionQueryRequest used to create the partition tokens and the ExecuteSqlRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it is not possible to resume the query, and the whole operation must be restarted from the beginning.
|
PartitionRead |
---|
Creates a set of partition tokens that can be used to execute a read operation in parallel. Each of the returned partition tokens can be used by StreamingRead to specify a subset of the read result to read. The same session and read-only transaction must be used by the PartitionReadRequest used to create the partition tokens and the ReadRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it is not possible to resume the read, and the whole operation must be restarted from the beginning.
|
Read |
---|
Reads rows from the database using key lookups and scans, as a simple key/value style alternative to ExecuteSql. This method cannot be used to return a result set larger than 10 MiB; if the read matches more data than that, the read fails with a FAILED_PRECONDITION error. Reads inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details. Larger result sets can be yielded in streaming fashion by calling StreamingRead instead.
|
Rollback |
---|
Rolls back a transaction, releasing any locks it holds. It is a good idea to call this for any transaction that includes one or more Read or ExecuteSql requests and ultimately decides not to commit. Rollback returns OK if it successfully aborts the transaction, the transaction was already aborted, or the transaction is not found. Rollback never returns ABORTED.
|
StreamingRead |
---|
Like Read, except returns the result set as a stream. Unlike Read, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB.
|
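Most applications drive these RPCs through a client library rather than calling them directly. A minimal sketch, assuming the google-cloud-spanner Python client and hypothetical instance/database names (later sketches in this section reuse this database object):

# Minimal sketch: the client library manages sessions and issues the RPCs
# above (CreateSession/BatchCreateSessions, ExecuteStreamingSql, Commit, ...)
# on your behalf. "my-instance" and "my-database" are hypothetical names.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# Runs as a temporary, strong read-only transaction (see TransactionSelector).
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT 1"):
        print(row)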
BatchCreateSessionsRequest
The request for BatchCreateSessions.
Fields | |
---|---|
database |
Required. The database in which the new sessions are created. Authorization requires the following IAM permission on the specified resource database: spanner.sessions.create
|
session_template |
Parameters to be applied to each created session. |
session_count |
Required. The number of sessions to be created in this batch call. The API may return fewer than the requested number of sessions. If a specific number of sessions are desired, the client can make additional calls to BatchCreateSessions (adjusting session_count as necessary). |
BatchCreateSessionsResponse
The response for BatchCreateSessions.
Fields | |
---|---|
session[] |
The freshly created sessions. |
BatchWriteRequest
The request for BatchWrite.
Fields | |
---|---|
session |
Required. The session in which the batch request is to be run. Authorization requires the following IAM permission on the specified resource session: spanner.databases.write
|
request_options |
Common options for this request. |
mutation_groups[] |
Required. The groups of mutations to be applied. |
MutationGroup
A group of mutations to be committed together. Related mutations should be placed in a group. For example, two mutations inserting rows with the same primary key prefix in both parent and child tables are related.
Fields | |
---|---|
mutations[] |
Required. The mutations in this group. |
BatchWriteResponse
The result of applying a batch of mutations.
Fields | |
---|---|
indexes[] |
The mutation groups applied in this batch. The values index into the mutation_groups field in the corresponding request. |
status |
An OK status indicates success. Any other status indicates a failure. |
commit_timestamp |
The commit timestamp of the transaction that applied this batch. Present if status is OK, absent otherwise. |
BeginTransactionRequest
The request for BeginTransaction.
Fields | |
---|---|
session |
Required. The session in which the transaction runs. Authorization requires one or more of the following IAM permissions on the specified resource session: spanner.databases.beginOrRollbackReadWriteTransaction, spanner.databases.beginReadOnlyTransaction, spanner.databases.beginPartitionedDmlTransaction
|
options |
Required. Options for the new transaction. |
request_options |
Common options for this request. Priority is ignored for this request. Setting the priority in this request_options struct will not do anything. To set the priority for a transaction, set it on the reads and writes that are part of this transaction instead. |
CommitRequest
The request for Commit.
Fields | |
---|---|
session |
Required. The session in which the transaction to be committed is running. Authorization requires the following IAM permission on the specified resource session: spanner.databases.write
|
mutations[] |
The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list. |
return_commit_stats |
If true, then statistics related to the transaction will be included in the CommitResponse. Default value is false. |
max_commit_delay |
Optional. The amount of latency this request is configured to incur in order to improve throughput. If this field is not set, Spanner assumes requests are relatively latency sensitive and automatically determines an appropriate delay time. You can specify a commit delay value between 0 and 500 ms. |
request_options |
Common options for this request. |
Union field transaction . Required. The transaction in which to commit. transaction can be only one of the following: |
|
transaction_id |
Commit a previously-started transaction. |
single_use_transaction |
Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the CommitRequest is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it is possible that the mutations are executed more than once. If this is undesirable, use BeginTransaction and Commit instead. |
CommitResponse
The response for Commit.
Fields | |
---|---|
commit_timestamp |
The Cloud Spanner timestamp at which the transaction committed. |
commit_stats |
The statistics about this Commit. Not returned by default. For more information, see CommitRequest.return_commit_stats. |
CommitStats
Additional statistics about a commit.
Fields | |
---|---|
mutation_count |
The total number of mutations for the transaction. Knowing the mutation_count value can help you maximize the number of mutations in a transaction and minimize the number of API round trips. You can also monitor this value to prevent transactions from exceeding the system limit. |
CreateSessionRequest
The request for CreateSession.
Fields | |
---|---|
database |
Required. The database in which the new session is created. Authorization requires the following IAM permission on the specified resource database: spanner.sessions.create
|
session |
Required. The session to create. |
DeleteSessionRequest
The request for DeleteSession.
Fields | |
---|---|
name |
Required. The name of the session to delete. Authorization requires the following IAM permission on the specified resource name: spanner.sessions.delete
|
DirectedReadOptions
The DirectedReadOptions can be used to indicate which replicas or regions should be used for non-transactional reads or queries.
DirectedReadOptions may only be specified for a read-only transaction, otherwise the API will return an INVALID_ARGUMENT error.
Fields | |
---|---|
Union field replicas . Required. At most one of either include_replicas or exclude_replicas should be present in the message. replicas can be only one of the following: |
|
include_replicas |
Include_replicas indicates the order of replicas (as they appear in this list) to process the request. If auto_failover_disabled is set to true and all replicas are exhausted without finding a healthy replica, Spanner will wait for a replica in the list to become available; requests may fail due to DEADLINE_EXCEEDED errors. |
exclude_replicas |
Exclude_replicas indicates that specified replicas should be excluded from serving requests. Spanner will not route requests to the replicas in this list. |
ExcludeReplicas
An ExcludeReplicas contains a repeated set of ReplicaSelection that should be excluded from serving requests.
Fields | |
---|---|
replica_selections[] |
The directed read replica selector. |
IncludeReplicas
An IncludeReplicas contains a repeated set of ReplicaSelection which indicates the order in which replicas should be considered.
Fields | |
---|---|
replica_selections[] |
The directed read replica selector. |
auto_failover_disabled |
If true, Spanner will not route requests to a replica outside the include_replicas list when all of the specified replicas are unavailable or unhealthy. Default value is false. |
ReplicaSelection
The directed read replica selector. Callers must provide one or more of the following fields for replica selection:
- location - The location must be one of the regions within the multi-region configuration of your database.
- type - The type of the replica.
Some examples of using replica_selectors are:
- location:us-east1 --> The "us-east1" replica(s) of any available type will be used to process the request.
- type:READ_ONLY --> The "READ_ONLY" type replica(s) in nearest available location will be used to process the request.
- location:us-east1 type:READ_ONLY --> The "READ_ONLY" type replica(s) in location "us-east1" will be used to process the request.
(A construction sketch follows the Type enum below.)
Fields | |
---|---|
location |
The location or region of the serving requests, e.g. "us-east1". |
type |
The type of replica. |
Type
Indicates the type of replica.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Not specified. |
READ_WRITE |
Read-write replicas support both reads and writes. |
READ_ONLY |
Read-only replicas only support reads (not writes). |
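A construction sketch for these options, assuming the generated google.cloud.spanner_v1 types (proto-plus typically renames the reserved field name type to type_ in Python):

# Sketch: DirectedReadOptions preferring READ_ONLY replicas in "us-east1",
# with failover outside the list disabled. The message can then be set as
# directed_read_options on ExecuteSqlRequest/ReadRequest (or as a
# client-wide default, where the client library supports it).
from google.cloud.spanner_v1 import DirectedReadOptions

directed_read = DirectedReadOptions(
    include_replicas=DirectedReadOptions.IncludeReplicas(
        replica_selections=[
            DirectedReadOptions.ReplicaSelection(
                location="us-east1",
                type_=DirectedReadOptions.ReplicaSelection.Type.READ_ONLY,
            )
        ],
        auto_failover_disabled=True,
    )
)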
ExecuteBatchDmlRequest
The request for ExecuteBatchDml.
Fields | |
---|---|
session |
Required. The session in which the DML statements should be performed. Authorization requires the following IAM permission on the specified resource session: spanner.databases.write
|
transaction |
Required. The transaction to use. Must be a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. |
statements[] |
Required. The list of statements to execute in this batch. Statements are executed serially, such that the effects of statement i are visible to statement i+1. Each statement must be a DML statement. Execution stops at the first failed statement; the remaining statements are not executed. Callers must provide at least one statement. |
seqno |
Required. A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one will succeed. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction may be aborted. Replays of previously handled requests will yield the same response as the first execution. |
request_options |
Common options for this request. |
Statement
A single DML statement.
Fields | |
---|---|
sql |
Required. The DML string. |
params |
Parameter names and values that bind to placeholders in the DML string. A parameter placeholder consists of the @ character followed by the parameter name (for example, @firstName). Parameter names can contain letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100"
It is an error to execute a SQL statement with unbound parameters. |
param_types |
It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings. In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL statement parameters. See the definition of Type for more information about SQL types. |
ExecuteBatchDmlResponse
The response for ExecuteBatchDml. Contains a list of ResultSet messages, one for each DML statement that has successfully executed, in the same order as the statements in the request. If a statement fails, the status in the response body identifies the cause of the failure.
To check for DML statements that failed, use the following approach:
- Check the status in the response message. The google.rpc.Code enum value OK indicates that all statements were executed successfully.
- If the status was not OK, check the number of result sets in the response. If the response contains N ResultSet messages, then statement N+1 in the request failed.
Example 1:
- Request: 5 DML statements, all executed successfully.
- Response: 5 ResultSet messages, with the status OK.
Example 2:
- Request: 5 DML statements. The third statement has a syntax error.
- Response: 2 ResultSet messages, and a syntax error (INVALID_ARGUMENT) status. The number of ResultSet messages indicates that the third statement failed, and the fourth and fifth statements were not executed.
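A checking sketch using the Python client, whose Transaction.batch_update wraps ExecuteBatchDml and returns the status plus one row count per successful statement (the table names are hypothetical):

# Sketch: per the rules above, if the returned status is not OK, then the
# statement after the last counted one failed and later ones never ran.
from google.rpc.code_pb2 import OK

def do_batch(transaction):
    status, row_counts = transaction.batch_update(
        [
            "UPDATE Albums SET MarketingBudget = 0 WHERE MarketingBudget IS NULL",
            "DELETE FROM Singers WHERE FirstName = 'Alice'",
        ]
    )
    if status.code != OK:
        # row_counts has one entry per successful statement.
        raise RuntimeError(
            f"statement {len(row_counts) + 1} failed: {status.message}"
        )

database.run_in_transaction(do_batch)  # `database` as in the earlier sketch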
Fields | |
---|---|
result_sets[] |
One ResultSet for each statement in the request that ran successfully, in the same order as the statements in the request. Each ResultSet does not contain any rows. The ResultSetStats in each ResultSet contain the number of rows modified by the statement. Only the first ResultSet in the response contains valid ResultSetMetadata. |
status |
If all DML statements are executed successfully, the status is OK. Otherwise, the error status of the first failed statement. |
ExecuteSqlRequest
The request for ExecuteSql and ExecuteStreamingSql.
Fields | |
---|---|
session |
Required. The session in which the SQL query should be performed. Authorization requires the following IAM permission on the specified resource session: spanner.databases.select
|
transaction |
The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong concurrency. Standard DML statements require a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. Partitioned DML requires an existing Partitioned DML transaction ID. |
sql |
Required. The SQL string. |
params |
Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the @ character followed by the parameter name (for example, @firstName). Parameter names can contain letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100"
It is an error to execute a SQL statement with unbound parameters. (See the sketch following this fields table.) |
param_types |
It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings. In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL statement parameters. See the definition of Type for more information about SQL types. |
resume_token |
If this request is resuming a previously interrupted SQL statement execution, resume_token should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new SQL statement execution to resume where the last one left off. The rest of the request parameters must exactly match the request that yielded this token. |
query_mode |
Used to control the amount of debugging information returned in ResultSetStats. If partition_token is set, query_mode can only be set to QueryMode.NORMAL. |
partition_token |
If present, results will be restricted to the specified partition previously created using PartitionQuery(). There must be an exact match for the values of fields common to this message and the PartitionQueryRequest message used to create this partition_token. |
seqno |
A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one will succeed. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction may be aborted. Replays of previously handled requests will yield the same response as the first execution. Required for DML statements. Ignored for queries. |
query_options |
Query optimizer configuration to use for the given query. |
request_options |
Common options for this request. |
directed_read_options |
Directed read options for this request. |
data_boost_enabled |
If this is for a partitioned query and this field is set to true, the request is executed with Spanner Data Boost independent compute resources. If the field is set to true but the request does not set partition_token, the API returns an INVALID_ARGUMENT error. |
QueryMode
Mode in which the statement must be processed.
Enums | |
---|---|
NORMAL |
The default mode. Only the statement results are returned. |
PLAN |
This mode returns only the query plan, without any results or execution statistics information. |
PROFILE |
This mode returns the query plan, overall execution statistics, operator level execution statistics along with the results. This has a performance overhead compared to the other modes. It is not recommended to use this mode for production traffic. |
QueryOptions
Query optimizer configuration.
Fields | |
---|---|
optimizer_version |
An option to control the selection of optimizer version. This parameter allows individual queries to pick different query optimizer versions. Specifying latest as a value instructs Cloud Spanner to use the latest supported query optimizer version. If not specified, Cloud Spanner uses the optimizer version set at the database level options. Any other positive integer (from the list of supported optimizer versions) overrides the default optimizer version for query execution. The list of supported optimizer versions can be queried from SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement with an invalid optimizer version fails with an INVALID_ARGUMENT error. See https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer for more information on managing the query optimizer. The optimizer_version statement hint has precedence over this setting. |
optimizer_statistics_package |
An option to control the selection of optimizer statistics package. This parameter allows individual queries to use a different query optimizer statistics package. Specifying latest as a value instructs Cloud Spanner to use the latest generated statistics package. If not specified, Cloud Spanner uses the statistics package set at the database level options, or the latest package if the database option is not set. The statistics package requested by the query has to be exempt from garbage collection. This can be achieved with the following DDL statement:
ALTER STATISTICS <package_name> SET OPTIONS (allow_gc=false)
The list of available statistics packages can be queried from INFORMATION_SCHEMA.SPANNER_STATISTICS. Executing a SQL statement with an invalid optimizer statistics package or with a statistics package that allows garbage collection fails with an INVALID_ARGUMENT error. |
GetSessionRequest
The request for GetSession.
Fields | |
---|---|
name |
Required. The name of the session to retrieve. Authorization requires the following IAM permission on the specified resource name: spanner.sessions.get
|
KeyRange
KeyRange represents a range of rows in a table or index.
A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key.
Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described for TypeCode.
For example, consider the following table definition:
CREATE TABLE UserEvents (
UserName STRING(MAX),
EventDate STRING(10)
) PRIMARY KEY(UserName, EventDate);
The following keys name rows in this table:
["Bob", "2014-09-23"]
["Alfred", "2015-06-12"]
Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.
Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:
"start_closed": ["Bob", "2015-01-01"]
"end_closed": ["Bob", "2015-12-31"]
Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included.
For example, the following range includes all events for "Bob" that occurred during and after the year 2000:
"start_closed": ["Bob", "2000-01-01"]
"end_closed": ["Bob"]
The next example retrieves all events for "Bob":
"start_closed": ["Bob"]
"end_closed": ["Bob"]
To retrieve events before the year 2000:
"start_closed": ["Bob"]
"end_open": ["Bob", "2000-01-01"]
The following range includes all rows in the table:
"start_closed": []
"end_closed": []
This range returns all users whose UserName begins with any character from A to C:
"start_closed": ["A"]
"end_open": ["D"]
This range returns all users whose UserName begins with B:
"start_closed": ["B"]
"end_open": ["C"]
Key ranges honor column sort order. For example, suppose a table is defined as follows:
CREATE TABLE DescendingSortedTable (
Key INT64,
...
) PRIMARY KEY(Key DESC);
The following range retrieves all rows with key values between 1 and 100 inclusive:
"start_closed": ["100"]
"end_closed": ["1"]
Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
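Expressed with the Python client's types, the "events for Bob during and after 2000" range above might look like this sketch:

# Sketch: a KeySet holding one KeyRange, passed to a read on UserEvents.
from google.cloud.spanner_v1 import KeyRange, KeySet

keyset = KeySet(
    ranges=[KeyRange(start_closed=["Bob", "2000-01-01"], end_closed=["Bob"])]
)
with database.snapshot() as snapshot:  # `database` as in the earlier sketch
    for row in snapshot.read(
        table="UserEvents",
        columns=("UserName", "EventDate"),
        keyset=keyset,
    ):
        print(row)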
Fields | |
---|---|
Union field start_key_type . The start key must be provided. It can be either closed or open. start_key_type can be only one of the following: |
|
start_closed |
If the start is closed, then the range includes all rows whose first len(start_closed) key columns exactly match start_closed. |
start_open |
If the start is open, then the range excludes rows whose first len(start_open) key columns exactly match start_open. |
Union field end_key_type . The end key must be provided. It can be either closed or open. end_key_type can be only one of the following: |
|
end_closed |
If the end is closed, then the range includes all rows whose first len(end_closed) key columns exactly match end_closed. |
end_open |
If the end is open, then the range excludes rows whose first len(end_open) key columns exactly match end_open. |
KeySet
KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.
If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.
Fields | |
---|---|
keys[] |
A list of specific keys. Entries in keys should have exactly as many elements as there are columns in the primary or index key with which this KeySet is used. Individual key values are encoded as described for TypeCode. |
ranges[] |
A list of key ranges. See KeyRange for more information about key range specifications. |
all |
For convenience all can be set to true to indicate that this KeySet matches all keys in the table or index. Note that any keys specified in keys or ranges are only yielded once. |
ListSessionsRequest
The request for ListSessions.
Fields | |
---|---|
database |
Required. The database in which to list sessions. Authorization requires the following IAM permission on the specified resource database: spanner.sessions.list
|
page_size |
Number of sessions to be returned in the response. If 0 or less, defaults to the server's maximum allowed page size. |
page_token |
If non-empty, page_token should contain a next_page_token from a previous ListSessionsResponse. |
filter |
An expression for filtering the results of the request. Filter rules are case insensitive. The fields eligible for filtering are:
- labels.key where key is the name of a label
Some examples of using filters are:
- labels.env:* --> The session has the label "env".
- labels.env:dev --> The session has the label "env" and the value of the label contains the string "dev".
|
ListSessionsResponse
The response for ListSessions.
Fields | |
---|---|
sessions[] |
The list of requested sessions. |
next_page_token |
next_page_token can be sent in a subsequent ListSessions call to fetch more of the matching sessions. |
Mutation
A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a Commit call.
Fields | |
---|---|
Union field operation . Required. The operation to perform. operation can be only one of the following: |
|
insert |
Insert new rows in a table. If any of the rows already exist, the write or transaction fails with error ALREADY_EXISTS. |
update |
Update existing rows in a table. If any of the rows does not already exist, the transaction fails with error NOT_FOUND. |
insert_or_update |
Like insert, except that if the row already exists, then its column values are overwritten with the ones provided. Any column values not explicitly written are preserved. When using insert_or_update, just as when using insert, all NOT NULL columns in the table must be given a value. This holds true even when the row already exists and will therefore actually be updated. |
replace |
Like insert, except that if the row already exists, it is deleted, and the column values provided are inserted instead. Unlike insert_or_update, this means any values not explicitly written become NULL. In an interleaved table, if you create the child table with the ON DELETE CASCADE annotation, then replacing a parent row also deletes the child rows. Otherwise, you must delete the child rows before you replace the parent row. |
delete |
Delete rows from a table. Succeeds whether or not the named rows were present. |
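As a sketch, the Python client's Batch buffers these mutation kinds and sends them in one CommitRequest (the table and columns are hypothetical):

# Sketch: insert, update, and delete mutations committed atomically.
from google.cloud.spanner_v1 import KeySet

with database.batch() as batch:  # `database` as in the earlier sketch
    batch.insert(
        table="Singers",
        columns=("SingerId", "FirstName"),
        values=[(1, "Marc"), (2, "Catalina")],
    )
    batch.update(
        table="Singers",
        columns=("SingerId", "FirstName"),
        values=[(1, "Marco")],
    )
    batch.delete(table="Singers", keyset=KeySet(keys=[[2]]))
# Exiting the block issues a single Commit containing all three mutations.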
Delete
Arguments to delete operations.
Fields | |
---|---|
table |
Required. The table whose rows will be deleted. |
key_set |
Required. The primary keys of the rows within table to delete. The primary keys must be specified in the order in which they appear in the PRIMARY KEY() clause of the table's equivalent DDL statement (the DDL statement used to create the table). Delete is idempotent. The transaction will succeed even if some or all rows do not exist. |
Write
Arguments to insert, update, insert_or_update, and replace operations.
Fields | |
---|---|
table |
Required. The table whose rows will be written. |
columns[] |
The names of the columns in table to be written. The list of columns must contain enough columns to allow Cloud Spanner to derive values for all primary key columns in the row(s) to be modified. |
values[] |
The values to be written. values can contain more than one list of values. If it does, then multiple rows are written, one for each entry in values. Each list in values must have exactly as many entries as there are entries in columns above. Sending multiple lists is equivalent to sending multiple Mutations, each containing one values entry and repeating table and columns. Individual values in each list are encoded as described for TypeCode. |
PartialResultSet
Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
Fields | |
---|---|
metadata |
Metadata about the result set, such as row type information. Only present in the first response. |
values[] |
A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and, possibly, large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.
Most values are encoded based on type as described for TypeCode.
It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:
- bool/number/null values cannot be chunked
- string values are concatenated
- list values are concatenated; if the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively
- object values are merged by concatenating their (field name, field value) pairs; if a field name is duplicated, then apply these rules recursively to merge the field values
For example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field; a sequence of PartialResultSets carrying values ["Hello", "W"] (chunked), ["orl"] (chunked), and ["d"] encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d". (A sketch of these merge rules follows this fields table.)
Not all PartialResultSets contain a resume_token; execution can only be resumed from a previously yielded resume_token. |
chunked_value |
If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value. |
resume_token |
Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token. |
stats |
Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream. This field will also be present in the last response for DML statements. |
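The client libraries' streamed result sets apply the merge rules above automatically; the following standalone sketch just restates those rules in Python:

# Sketch of the documented chunk-merging rules: strings concatenate, lists
# concatenate (merging boundary elements recursively), objects merge
# duplicate keys recursively; bool/number/null are never chunked.
def merge_chunk(last, first):
    if isinstance(last, str) and isinstance(first, str):
        return last + first
    if isinstance(last, list) and isinstance(first, list):
        if last and first and isinstance(last[-1], (str, list, dict)):
            return last[:-1] + [merge_chunk(last[-1], first[0])] + first[1:]
        return last + first
    if isinstance(last, dict) and isinstance(first, dict):
        merged = dict(last)
        for key, value in first.items():
            merged[key] = merge_chunk(merged[key], value) if key in merged else value
        return merged
    raise ValueError("bool/number/null values are never chunked")

assert merge_chunk(["W"], ["orl"]) == ["Worl"]    # from the example above
assert merge_chunk(["Worl"], ["d"]) == ["World"]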
Partition
Information returned for each partition returned in a PartitionResponse.
Fields | |
---|---|
partition_token |
This token can be passed to Read, StreamingRead, ExecuteSql, or ExecuteStreamingSql requests to restrict the results to those identified by this partition token. |
PartitionOptions
Options for a PartitionQueryRequest and PartitionReadRequest.
Fields | |
---|---|
partition_size_bytes |
Note: This hint is currently ignored by PartitionQuery and PartitionRead requests. The desired data size for each partition generated. The default for this option is currently 1 GiB. This is only a hint. The actual size of each partition may be smaller or larger than this size request. |
max_partitions |
Note: This hint is currently ignored by PartitionQuery and PartitionRead requests. The desired maximum number of partitions to return. For example, this may be set to the number of workers available. The default for this option is currently 10,000. The maximum value is currently 200,000. This is only a hint. The actual number of partitions returned may be smaller or larger than this maximum count request. |
PartitionQueryRequest
The request for PartitionQuery.
Fields | |
---|---|
session |
Required. The session used to create the partitions. Authorization requires the following IAM permission on the specified resource session: spanner.databases.partitionQuery
|
transaction |
Read-only snapshot transactions are supported; read/write and single-use transactions are not. |
sql |
Required. The query request to generate partitions for. The request fails if the query is not root partitionable. For a query to be root partitionable, it needs to satisfy a few conditions. For example, if the query execution plan contains a distributed union operator, then it must be the first operator in the plan. For more information about other conditions, see Read data in parallel. The query request must not contain DML commands, such as INSERT, UPDATE, or DELETE. Use ExecuteStreamingSql with a PartitionedDml transaction for large, partition-friendly DML operations. |
params |
Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the @ character followed by the parameter name (for example, @firstName). Parameter names can contain letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100"
It is an error to execute a SQL statement with unbound parameters. |
param_types |
It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings. In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL statement parameters. See the definition of Type for more information about SQL types. |
partition_options |
Additional options that affect how many partitions are created. |
PartitionReadRequest
The request for PartitionRead.
Fields | |
---|---|
session |
Required. The session used to create the partitions. Authorization requires the following IAM permission on the specified resource session: spanner.databases.partitionRead
|
transaction |
Read-only snapshot transactions are supported; read/write and single-use transactions are not. |
table |
Required. The name of the table in the database to be read. |
index |
If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information. |
columns[] |
The columns of table to be returned for each row matching this request. |
key_set |
Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in table to be yielded, unless index is present. If index is present, then key_set instead names index keys in index. It is not an error for the key_set to name rows that do not exist in the database. Read yields nothing for nonexistent rows. |
partition_options |
Additional options that affect how many partitions are created. |
PartitionResponse
The response for PartitionQuery or PartitionRead.
Fields | |
---|---|
partitions[] |
Partitions created by this request. |
transaction |
Transaction created by this request. |
PlanNode
Node information for nodes appearing in a QueryPlan.plan_nodes.
Fields | |
---|---|
index |
The PlanNode's index in node list. |
kind |
Used to determine the type of node. May be needed for visualizing different kinds of nodes differently. For example, if the node is a SCALAR node, it will have a condensed representation which can be used to directly embed a description of the node in its parent. |
display_ |
The display name for the node. |
child_links[] |
List of child node indexes and their relationship to this parent. |
short_representation |
Condensed representation for SCALAR nodes. |
metadata |
Attributes relevant to the node contained in a group of key-value pairs. For example, a Parameter Reference node could have the following information in its metadata:
{"parameter_reference": "param1", "parameter_type": "array"}
|
execution_stats |
The execution statistics associated with the node, contained in a group of key-value pairs. Only present if the plan was returned as a result of a profile query. For example, number of executions, number of rows/time per execution etc. |
ChildLink
Metadata associated with a parent-child relationship appearing in a PlanNode.
Fields | |
---|---|
child_index |
The node to which the link points. |
type |
The type of the link. For example, in Hash Joins this could be used to distinguish between the build child and the probe child, or in the case of the child being an output variable, to represent the tag associated with the output variable. |
variable |
Only present if the child node is SCALAR and corresponds to an output variable of the parent node. The field carries the name of the output variable. For example, a TableScan operator that reads rows from a table will have child links to the SCALAR nodes representing the output variables created for each column that is read by the operator. The corresponding variable fields will be set to the variable names assigned to the columns. |
Kind
The kind of PlanNode. Distinguishes between the two different kinds of nodes that can appear in a query plan.
Enums | |
---|---|
KIND_UNSPECIFIED |
Not specified. |
RELATIONAL |
Denotes a Relational operator node in the expression tree. Relational operators represent iterative processing of rows during query execution. For example, a TableScan operation that reads rows from a table. |
SCALAR |
Denotes a Scalar node in the expression tree. Scalar nodes represent non-iterable entities in the query plan. For example, constants or arithmetic operators appearing inside predicate expressions or references to column names. |
ShortRepresentation
Condensed representation of a node and its subtree. Only present for SCALAR PlanNode(s).
Fields | |
---|---|
description |
A string representation of the expression subtree rooted at this node. |
subqueries |
A mapping of (subquery variable name) -> (subquery node id) for cases where the description string of this node references a SCALAR subquery contained in the expression subtree rooted at this node. The referenced SCALAR subquery may not necessarily be a direct child of this node. |
QueryPlan
Contains an ordered list of nodes appearing in the query plan.
Fields | |
---|---|
plan_nodes[] |
The nodes in the query plan. Plan nodes are returned in pre-order starting with the plan root. Each PlanNode's id corresponds to its index in plan_nodes. |
ReadRequest
The request for Read and StreamingRead.
Fields | |
---|---|
session |
Required. The session in which the read should be performed. Authorization requires the following IAM permission on the specified resource session: spanner.databases.read
|
transaction |
The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency. |
table |
Required. The name of the table in the database to be read. |
index |
If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information. |
columns[] |
Required. The columns of table to be returned for each row matching this request. |
key_set |
Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in table to be yielded, unless index is present. If index is present, then key_set instead names index keys in index. If the partition_token field is empty, rows are yielded in table primary key order (if index is empty) or index key order (if index is non-empty). If the partition_token field is not empty, rows will be yielded in an unspecified order. It is not an error for the key_set to name rows that do not exist in the database. Read yields nothing for nonexistent rows. |
limit |
If greater than zero, only the first limit rows are yielded. If limit is zero, the default is no limit. A limit cannot be used when partition_token is set. |
resume_token |
If this request is resuming a previously interrupted read, resume_token should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new read to resume where the last read left off. The rest of the request parameters must exactly match the request that yielded this token. |
partition_token |
If present, results will be restricted to the specified partition previously created using PartitionRead(). There must be an exact match for the values of fields common to this message and the PartitionReadRequest message used to create this partition_token. |
request_options |
Common options for this request. |
directed_read_options |
Directed read options for this request. |
data_boost_enabled |
If this is for a partitioned read and this field is set to true, the request is executed with Spanner Data Boost independent compute resources. If the field is set to true but the request does not set partition_token, the API returns an INVALID_ARGUMENT error. |
RequestOptions
Common request options for various APIs.
Fields | |
---|---|
priority |
Priority for the request. |
request_tag |
A per-request tag which can be applied to queries or reads, used for statistics collection. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. This field is ignored for requests where it's not applicable (e.g. CommitRequest). Legal characters for request_tag values are all printable characters (ASCII 32 - 126) and the length of a request_tag is limited to 50 characters. Values that exceed this limit are truncated. Any leading underscore (_) characters will be removed from the string. |
transaction_tag |
A tag used for statistics collection about this transaction. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. The value of transaction_tag should be the same for all requests belonging to the same transaction. If this request doesn't belong to any transaction, transaction_tag will be ignored. Legal characters for transaction_tag values are all printable characters (ASCII 32 - 126) and the length of a transaction_tag is limited to 50 characters. Values that exceed this limit are truncated. Any leading underscore (_) characters will be removed from the string. |
Priority
The relative priority for requests. Note that priority is not applicable for BeginTransaction.
The priority acts as a hint to the Cloud Spanner scheduler and does not guarantee priority or order of execution. For example:
- Some parts of a write operation always execute at PRIORITY_HIGH, regardless of the specified priority. This may cause you to see an increase in high priority workload even when executing a low priority request. This can also potentially cause a priority inversion where a lower priority request will be fulfilled ahead of a higher priority request.
- If a transaction contains multiple operations with different priorities, Cloud Spanner does not guarantee to process the higher priority operations first. There may be other constraints to satisfy, such as order of operations.
Enums | |
---|---|
PRIORITY_UNSPECIFIED |
PRIORITY_UNSPECIFIED is equivalent to PRIORITY_HIGH . |
PRIORITY_LOW |
This specifies that the request is low priority. |
PRIORITY_MEDIUM |
This specifies that the request is medium priority. |
PRIORITY_HIGH |
This specifies that the request is high priority. |
ResultSet
Results from Read or ExecuteSql.
Fields | |
---|---|
metadata |
Metadata about the result set, such as row type information. |
rows[] |
Each element in rows is a row whose format is defined by metadata.row_type. The ith element in each row matches the ith field in metadata.row_type. Elements are encoded based on type as described for TypeCode. |
stats |
Query plan and execution statistics for the SQL statement that produced this result set. These can be requested by setting ExecuteSqlRequest.query_mode. DML statements always produce stats containing the number of rows modified, unless executed using the ExecuteSqlRequest.QueryMode.PLAN query mode. Other fields might or might not be populated, based on the ExecuteSqlRequest.query_mode. |
ResultSetMetadata
Metadata about a ResultSet or PartialResultSet.
Fields | |
---|---|
row_type |
Indicates the field names and types for the rows in the result set. For example, a SQL query like "SELECT UserId, UserName FROM Users" could return a row_type value like:
"fields": [ { "name": "UserId", "type": { "code": "INT64" } }, { "name": "UserName", "type": { "code": "STRING" } } ]
|
transaction |
If the read or SQL query began a transaction as a side-effect, the information about the new transaction is yielded here. |
undeclared_parameters |
A SQL query can be parameterized. In PLAN mode, these parameters can be undeclared. This indicates the field names and types for those undeclared parameters in the SQL query. For example, a SQL query like "SELECT * FROM Users WHERE UserId = @userId and UserName = @userName" could return an undeclared_parameters value like:
"fields": [ { "name": "UserId", "type": { "code": "INT64" } }, { "name": "UserName", "type": { "code": "STRING" } } ]
|
ResultSetStats
Additional statistics about a ResultSet or PartialResultSet.
Fields | |
---|---|
query_plan |
QueryPlan for the query associated with this result. |
query_stats |
Aggregated statistics from the execution of the query. Only present when the query is profiled. For example, a query could return the statistics as follows:
{ "rows_returned": "3", "elapsed_time": "1.22 secs", "cpu_time": "1.19 secs" }
|
Union field row_count . The number of rows modified by the DML statement. row_count can be only one of the following: |
|
row_count_exact |
Standard DML returns an exact count of rows that were modified. |
row_count_lower_bound |
Partitioned DML does not offer exactly-once semantics, so it returns a lower bound of the rows modified. |
RollbackRequest
The request for Rollback.
Fields | |
---|---|
session |
Required. The session in which the transaction to roll back is running. Authorization requires the following IAM permission on the specified resource session: spanner.databases.beginOrRollbackReadWriteTransaction
|
transaction_id |
Required. The transaction to roll back. |
Session
A session in the Cloud Spanner API.
Fields | |
---|---|
name |
Output only. The name of the session. This is always system-assigned. |
labels |
The labels for the session.
- Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.
- Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?.
- No more than 64 labels can be associated with a given session.
See https://goo.gl/xmQnxf for more information on and examples of labels. |
create_time |
Output only. The timestamp when the session is created. |
approximate_last_use_time |
Output only. The approximate timestamp when the session is last used. It is typically earlier than the actual last use time. |
creator_role |
The database role which created this session. |
multiplexed |
Optional. If true, specifies a multiplexed session. Use a multiplexed session for multiple, concurrent read-only operations. Don't use them for read-write transactions, partitioned reads, or partitioned queries. Use CreateSession to create multiplexed sessions. Don't use BatchCreateSessions to create a multiplexed session. You can't delete or list multiplexed sessions. |
StructType
StructType defines the fields of a STRUCT type.
Fields | |
---|---|
fields[] |
The list of fields that make up this struct. Order is significant, because values of this struct type are represented as lists, where the order of field values matches the order of fields in the |
Field
Message representing a single field of a struct.
Fields | |
---|---|
name |
The name of the field. For reads, this is the column name. For SQL queries, it is the column alias (e.g., "Word" in the query "SELECT 'hello' AS Word"), or the column name (e.g., "ColName" in the query "SELECT ColName FROM Table"). Some columns might have an empty name (e.g., "SELECT UPPER(ColName)"). Note that a query result can contain multiple fields with the same name. |
type |
The type of the field. |
Transaction
A transaction.
Fields | |
---|---|
id |
id may be used to identify the transaction in subsequent Read, ExecuteSql, Commit, or Rollback calls. Single-use read-only transactions do not have IDs, because single-use transactions do not support multiple requests. |
read_timestamp |
For snapshot read-only transactions, the read timestamp chosen for the transaction. Not returned by default: see TransactionOptions.ReadOnly.return_read_timestamp. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". |
TransactionOptions
Transactions:
Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.
Transaction modes:
Cloud Spanner supports three transaction modes:
Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.
Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details.

Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.
For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed.
Transactions may only read/write data in a single database. They may, however, read/write data in different tables within that database.
Locking read-write transactions:
Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction.
Semantics:
Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.
Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.
Retrying aborted transactions:
When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous.
Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an ABORTED error.
Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying.
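A retry sketch with the Python client: run_in_transaction reuses one session across attempts (preserving lock priority) and takes a total time budget via timeout_secs, matching the advice above (the schema is hypothetical):

# Sketch: retried automatically on ABORTED until committed or 60s elapse.
from google.cloud import spanner

def transfer(transaction):
    balance = transaction.execute_sql(
        "SELECT Balance FROM Accounts WHERE AccountId = 1"
    ).one()[0]
    transaction.execute_update(
        "UPDATE Accounts SET Balance = @b WHERE AccountId = 1",
        params={"b": balance - 100},
        param_types={"b": spanner.param_types.INT64},
    )

# `database` as in the earlier sketch; the budget bounds total retry time,
# not the number of attempts.
database.run_in_transaction(transfer, timeout_secs=60)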
Idle transactions:
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error ABORTED.
If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, SELECT 1) prevents the transaction from becoming idle.
Snapshot read-only transactions:
Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.
Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.
Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.
Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so).
To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp.
The types of timestamp bound are:
- Strong (the default).
- Bounded staleness.
- Exact staleness.
If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.
Each type of timestamp bound is discussed in detail below.
Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.
Queries on change streams (see below for more details) must also specify the strong read timestamp bound.
See TransactionOptions.ReadOnly.strong.
Exact staleness:
These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished.
The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.
These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.
See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness.
Bounded staleness:
Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.
All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.
Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.
As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.
Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.
See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp.
Old read timestamps and garbage collection:
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION.
You can configure and extend the VERSION_RETENTION_PERIOD of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.
Querying change streams:
A Change Stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database.
When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_<change_stream_name>.
All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries.
In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries.
Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs.
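A query sketch against a hypothetical change stream named MyStream, executed as a single-use strong read-only transaction via ExecuteStreamingSql (the TVF argument names follow the change streams documentation; verify them against your schema):

# Sketch: read the last 10 minutes of change records from READ_MyStream.
from datetime import datetime, timedelta, timezone
from google.cloud import spanner

start = datetime.now(timezone.utc) - timedelta(minutes=10)
with database.snapshot() as snapshot:  # single-use, strong by default
    results = snapshot.execute_sql(
        """SELECT ChangeRecord FROM READ_MyStream(
               start_timestamp => @start,
               end_timestamp => NULL,
               partition_token => NULL,
               heartbeat_milliseconds => 10000)""",
        params={"start": start},
        param_types={"start": spanner.param_types.TIMESTAMP},
    )
    for row in results:
        print(row)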
Partitioned DML transactions:
Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions.
Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another.
To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time.
That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions.
The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.
The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.
Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.

The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that statement was never executed against other rows.
Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.
Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.
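A sketch of an idempotent, table-wide delete run as Partitioned DML with the Python client (the cutoff date is hypothetical):

# Sketch: execute_partitioned_dml begins a PartitionedDml transaction and
# returns a lower bound on the number of rows modified.
row_count = database.execute_partitioned_dml(
    "DELETE FROM UserEvents WHERE EventDate < '2020-01-01'"
)
print(f"deleted at least {row_count} rows")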
Fields | |
---|---|
Union field mode . Required. The type of transaction. mode can be only one of the following: |
|
read_write |
Transaction may write. Authorization to begin a read-write transaction requires spanner.databases.beginOrRollbackReadWriteTransaction permission on the session resource. |
partitioned_dml |
Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires spanner.databases.beginPartitionedDmlTransaction permission on the session resource. |
read_only |
Transaction will not write. Authorization to begin a read-only transaction requires spanner.databases.beginReadOnlyTransaction permission on the session resource. |
PartitionedDml
This type has no fields.
Message type to initiate a Partitioned DML transaction.
ReadOnly
Message type to initiate a read-only transaction.
Fields | |
---|---|
return_read_timestamp |
If true, the Cloud Spanner-selected read timestamp is included in the Transaction message that describes the transaction. |
Union field timestamp_bound . How to choose the timestamp for the read-only transaction. timestamp_bound can be only one of the following: |
|
strong |
Read at a timestamp where all previously committed transactions are visible. |
min_read_timestamp |
Executes all reads at a timestamp >= min_read_timestamp. This is useful for requesting fresher data than some previous read, or data that is fresh enough to observe the effects of some previously committed transaction whose timestamp is known. Note that this option can only be used in single-use transactions. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". |
max_staleness |
Read data at a timestamp >= NOW - max_staleness seconds. Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud Spanner chooses the exact timestamp, this mode works even if the client's local clock is substantially skewed from Cloud Spanner commit timestamps. Useful for reading the freshest data available at a nearby replica, while bounding the possible staleness if the local replica has fallen behind. Note that this option can only be used in single-use transactions. |
read_timestamp |
Executes all reads at the given timestamp. Unlike other modes, reads at a specific timestamp are repeatable; the same read at the same timestamp always returns the same data. If the timestamp is in the future, the read will block until the specified timestamp, modulo the read's deadline. Useful for large scale consistent reads such as mapreduces, or for coordinating many reads against a consistent snapshot of the data. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: |
exact_staleness |
Executes all reads at a timestamp that is exact_staleness old. The timestamp is chosen soon after the read is started. Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud Spanner chooses the exact timestamp, this mode works even if the client's local clock is substantially skewed from Cloud Spanner commit timestamps. Useful for reading at nearby replicas without the distributed timestamp negotiation overhead of max_staleness. |
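The Python client maps these bounds onto snapshot options, one keyword per field above; a sketch:

# Sketch: exact staleness is repeatable; bounded staleness is single-use
# and lets Spanner pick the freshest non-blocking timestamp within bound.
import datetime

with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snap:
    list(snap.execute_sql("SELECT 1"))

with database.snapshot(max_staleness=datetime.timedelta(seconds=10)) as snap:
    list(snap.execute_sql("SELECT 1"))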
ReadWrite
This type has no fields.
Message type to initiate a read-write transaction. Currently this transaction type has no options.
TransactionSelector
This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions.
Fields | |
---|---|
Union field selector . If no fields are set, the default is a single use transaction with strong concurrency. selector can be only one of the following: |
|
single_use |
Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. |
id |
Execute the read or SQL query in a previously-started transaction. |
begin |
Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. |
Type
Type indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.
Fields | |
---|---|
code |
Required. The TypeCode for this type. |
array_element_type |
If code == ARRAY, then array_element_type is the type of the array elements. |
struct_type |
If code == STRUCT, then struct_type provides type information for the struct's fields. |
type_annotation |
The TypeAnnotationCode that disambiguates SQL type that Spanner will use to represent values of this type during query processing. This is necessary for some type codes because a single TypeCode can be mapped to different SQL types depending on the SQL dialect. type_annotation typically is not needed to process the content of a value (it doesn't affect serialization) and clients can ignore it on the read path. |
proto_type_fqn |
If code == PROTO or code == ENUM, then proto_type_fqn is the fully qualified name of the proto type representing the proto/enum definition. |
TypeAnnotationCode
TypeAnnotationCode is used as a part of Type to disambiguate SQL types that should be used for a given Cloud Spanner value. Disambiguation is needed because the same Cloud Spanner type can be mapped to different SQL types depending on SQL dialect. TypeAnnotationCode doesn't affect the way value is serialized.
Enums | |
---|---|
TYPE_ANNOTATION_CODE_UNSPECIFIED |
Not specified. |
PG_NUMERIC |
PostgreSQL compatible NUMERIC type. This annotation needs to be applied to Type instances having NUMERIC type code to specify that values of this type should be treated as PostgreSQL NUMERIC values. Currently this annotation is always needed for NUMERIC when a client interacts with PostgreSQL-enabled Spanner databases. |
PG_JSONB |
PostgreSQL compatible JSONB type. This annotation needs to be applied to Type instances having JSON type code to specify that values of this type should be treated as PostgreSQL JSONB values. Currently this annotation is always needed for JSON when a client interacts with PostgreSQL-enabled Spanner databases. |
TypeCode
TypeCode is used as part of Type to indicate the type of a Cloud Spanner value.
Each legal value of a type can be encoded to or decoded from a JSON value, using the encodings described below. All Cloud Spanner values can be null, regardless of type; nulls are always encoded as a JSON null.
Enums | |
---|---|
TYPE_CODE_UNSPECIFIED |
Not specified. |
BOOL |
Encoded as JSON true or false . |
INT64 |
Encoded as string , in decimal format. |
FLOAT64 |
Encoded as number , or the strings "NaN" , "Infinity" , or "-Infinity" . |
FLOAT32 |
Encoded as number , or the strings "NaN" , "Infinity" , or "-Infinity" . |
TIMESTAMP |
Encoded as string in RFC 3339 timestamp format. The time zone must be present, and must be "Z". If the schema has the column option allow_commit_timestamp=true, the placeholder string "spanner.commit_timestamp()" can be used to instruct the system to insert the commit timestamp associated with the transaction commit. |
DATE |
Encoded as string in RFC 3339 date format. |
STRING |
Encoded as string . |
BYTES |
Encoded as a base64-encoded string , as described in RFC 4648, section 4. |
ARRAY |
Encoded as list , where the list elements are represented according to array_element_type . |
STRUCT |
Encoded as list , where list element i is represented according to struct_type.fields[i] . |
NUMERIC |
Encoded as string, in decimal format or scientific notation format. Decimal format: [+-]Digits[.[Digits]] or [+-][Digits].Digits. Scientific notation: [+-]Digits[.[Digits]][ExponentIndicator[+-]Digits] or [+-][Digits].Digits[ExponentIndicator[+-]Digits] (ExponentIndicator is "e" or "E"). |
JSON |
Encoded as a JSON-formatted string as described in RFC 7159. The following rules are applied when parsing JSON input:
- Whitespace characters are not preserved.
- If a JSON object has duplicate keys, only the first key is preserved.
- Members of a JSON object are not guaranteed to have their order preserved.
- JSON array elements will have their order preserved.
|
PROTO |
Encoded as a base64-encoded string , as described in RFC 4648, section 4. |
ENUM |
Encoded as string , in decimal format. |