Classes
BatchCreateSessionsRequest
The request for BatchCreateSessions.
Protobuf type google.spanner.v1.BatchCreateSessionsRequest
BatchCreateSessionsRequest.Builder
The request for BatchCreateSessions.
Protobuf type google.spanner.v1.BatchCreateSessionsRequest
BatchCreateSessionsResponse
The response for BatchCreateSessions.
Protobuf type google.spanner.v1.BatchCreateSessionsResponse
BatchCreateSessionsResponse.Builder
The response for BatchCreateSessions.
Protobuf type google.spanner.v1.BatchCreateSessionsResponse
BeginTransactionRequest
The request for BeginTransaction.
Protobuf type google.spanner.v1.BeginTransactionRequest
BeginTransactionRequest.Builder
The request for BeginTransaction.
Protobuf type google.spanner.v1.BeginTransactionRequest
CommitRequest
The request for Commit.
Protobuf type google.spanner.v1.CommitRequest
CommitRequest.Builder
The request for Commit.
Protobuf type google.spanner.v1.CommitRequest
CommitResponse
The response for Commit.
Protobuf type google.spanner.v1.CommitResponse
CommitResponse.Builder
The response for Commit.
Protobuf type google.spanner.v1.CommitResponse
CommitResponse.CommitStats
Additional statistics about a commit.
Protobuf type google.spanner.v1.CommitResponse.CommitStats
CommitResponse.CommitStats.Builder
Additional statistics about a commit.
Protobuf type google.spanner.v1.CommitResponse.CommitStats
CommitResponseProto
CreateSessionRequest
The request for CreateSession.
Protobuf type google.spanner.v1.CreateSessionRequest
CreateSessionRequest.Builder
The request for CreateSession.
Protobuf type google.spanner.v1.CreateSessionRequest
DatabaseName
DatabaseName.Builder
Builder for projects/{project}/instances/{instance}/databases/{database}.
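A minimal usage sketch, assuming the of/toString helpers that generated resource-name classes conventionally expose; "my-project", "my-instance", and "my-db" are placeholder IDs:
import com.google.spanner.v1.DatabaseName;

public class DatabaseNameExample {
  public static void main(String[] args) {
    // Compose the fully qualified resource name from its components.
    DatabaseName name = DatabaseName.of("my-project", "my-instance", "my-db");
    // Prints: projects/my-project/instances/my-instance/databases/my-db
    System.out.println(name.toString());
  }
}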
DeleteSessionRequest
The request for DeleteSession.
Protobuf type google.spanner.v1.DeleteSessionRequest
DeleteSessionRequest.Builder
The request for DeleteSession.
Protobuf type google.spanner.v1.DeleteSessionRequest
ExecuteBatchDmlRequest
The request for ExecuteBatchDml.
Protobuf type google.spanner.v1.ExecuteBatchDmlRequest
ExecuteBatchDmlRequest.Builder
The request for ExecuteBatchDml.
Protobuf type google.spanner.v1.ExecuteBatchDmlRequest
ExecuteBatchDmlRequest.Statement
A single DML statement.
Protobuf type google.spanner.v1.ExecuteBatchDmlRequest.Statement
ExecuteBatchDmlRequest.Statement.Builder
A single DML statement.
Protobuf type google.spanner.v1.ExecuteBatchDmlRequest.Statement
ExecuteBatchDmlResponse
The response for ExecuteBatchDml. Contains a list of ResultSet messages, one for each DML statement that has successfully executed, in the same order as the statements in the request. If a statement fails, the status in the response body identifies the cause of the failure. To check for DML statements that failed, use the following approach:
- Check the status in the response message. The google.rpc.Code enum value OK indicates that all statements were executed successfully.
- If the status was not OK, check the number of result sets in the response. If the response contains N ResultSet messages, then statement N+1 in the request failed.
Example 1:
- Request: 5 DML statements, all executed successfully.
- Response: 5 ResultSet messages, with the status OK.
Example 2:
- Request: 5 DML statements. The third statement has a syntax error.
- Response: 2 ResultSet messages, and a syntax error (INVALID_ARGUMENT) status. The number of ResultSet messages indicates that the third statement failed, and the fourth and fifth statements were not executed.
Protobuf type google.spanner.v1.ExecuteBatchDmlResponse
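A minimal sketch of the failure check described above, assuming a response and statement count obtained from an ExecuteBatchDml call made elsewhere:
import com.google.rpc.Code;
import com.google.spanner.v1.ExecuteBatchDmlResponse;

class BatchDmlCheck {
  // If the status is not OK and the response carries N result sets,
  // statement N+1 (counting from 1) in the request is the one that failed.
  static void check(ExecuteBatchDmlResponse response, int statementCount) {
    if (response.getStatus().getCode() == Code.OK_VALUE) {
      System.out.println("All " + statementCount + " statements succeeded.");
    } else {
      int executed = response.getResultSetsCount();
      System.out.println("Statement " + (executed + 1) + " failed: "
          + response.getStatus().getMessage()
          + "; later statements were not executed.");
    }
  }
}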
ExecuteBatchDmlResponse.Builder
The response for ExecuteBatchDml. Contains a list of ResultSet messages, one for each DML statement that has successfully executed, in the same order as the statements in the request. If a statement fails, the status in the response body identifies the cause of the failure. To check for DML statements that failed, use the following approach:
- Check the status in the response message. The google.rpc.Code enum value OK indicates that all statements were executed successfully.
- If the status was not OK, check the number of result sets in the response. If the response contains N ResultSet messages, then statement N+1 in the request failed.
Example 1:
- Request: 5 DML statements, all executed successfully.
- Response: 5 ResultSet messages, with the status OK.
Example 2:
- Request: 5 DML statements. The third statement has a syntax error.
- Response: 2 ResultSet messages, and a syntax error (INVALID_ARGUMENT) status. The number of ResultSet messages indicates that the third statement failed, and the fourth and fifth statements were not executed.
Protobuf type google.spanner.v1.ExecuteBatchDmlResponse
ExecuteSqlRequest
The request for ExecuteSql and ExecuteStreamingSql.
Protobuf type google.spanner.v1.ExecuteSqlRequest
ExecuteSqlRequest.Builder
The request for ExecuteSql and ExecuteStreamingSql.
Protobuf type google.spanner.v1.ExecuteSqlRequest
ExecuteSqlRequest.QueryOptions
Query optimizer configuration.
Protobuf type google.spanner.v1.ExecuteSqlRequest.QueryOptions
ExecuteSqlRequest.QueryOptions.Builder
Query optimizer configuration.
Protobuf type google.spanner.v1.ExecuteSqlRequest.QueryOptions
GetSessionRequest
The request for GetSession.
Protobuf type google.spanner.v1.GetSessionRequest
GetSessionRequest.Builder
The request for GetSession.
Protobuf type google.spanner.v1.GetSessionRequest
KeyRange
KeyRange represents a range of rows in a table or index.
A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key.
Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described here.
For example, consider the following table definition:
CREATE TABLE UserEvents (
  UserName STRING(MAX),
  EventDate STRING(10)
) PRIMARY KEY(UserName, EventDate);
The following keys name rows in this table:
["Bob", "2014-09-23"]
["Alfred", "2015-06-12"]
Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.
Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:
"start_closed": ["Bob", "2015-01-01"]
"end_closed": ["Bob", "2015-12-31"]
Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included.
For example, the following range includes all events for "Bob" that occurred during and after the year 2000:
"start_closed": ["Bob", "2000-01-01"]
"end_closed": ["Bob"]
The next example retrieves all events for "Bob":
"start_closed": ["Bob"]
"end_closed": ["Bob"]
To retrieve events before the year 2000:
"start_closed": ["Bob"]
"end_open": ["Bob", "2000-01-01"]
The following range includes all rows in the table:
"start_closed": []
"end_closed": []
This range returns all users whose UserName begins with any character from A to C:
"start_closed": ["A"]
"end_open": ["D"]
This range returns all users whose UserName begins with B:
"start_closed": ["B"]
"end_open": ["C"]
Key ranges honor column sort order. For example, suppose a table is defined as follows:
CREATE TABLE DescendingSortedTable (
  Key INT64,
  ...
) PRIMARY KEY(Key DESC);
The following range retrieves all rows with key values between 1 and 100 inclusive:
"start_closed": ["100"]
"end_closed": ["1"]
Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
Protobuf type google.spanner.v1.KeyRange
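A minimal sketch of the "all events for user Bob in 2015" range above, built directly on the protobuf types; each key is a ListValue whose elements are the primary-key components, and key(...) is an illustrative helper:
import com.google.protobuf.ListValue;
import com.google.protobuf.Value;
import com.google.spanner.v1.KeyRange;

class KeyRangeExample {
  // Illustrative helper: wrap string components as a key list.
  static ListValue key(String... components) {
    ListValue.Builder b = ListValue.newBuilder();
    for (String c : components) {
      b.addValues(Value.newBuilder().setStringValue(c));
    }
    return b.build();
  }

  // Closed on both ends, so the boundary dates are included.
  static KeyRange bob2015() {
    return KeyRange.newBuilder()
        .setStartClosed(key("Bob", "2015-01-01"))
        .setEndClosed(key("Bob", "2015-12-31"))
        .build();
  }
}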
KeyRange.Builder
KeyRange represents a range of rows in a table or index.
A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key.
Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described here.
For example, consider the following table definition:
CREATE TABLE UserEvents (
  UserName STRING(MAX),
  EventDate STRING(10)
) PRIMARY KEY(UserName, EventDate);
The following keys name rows in this table:
["Bob", "2014-09-23"]
["Alfred", "2015-06-12"]
Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.
Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:
"start_closed": ["Bob", "2015-01-01"]
"end_closed": ["Bob", "2015-12-31"]
Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included.
For example, the following range includes all events for "Bob" that occurred during and after the year 2000:
"start_closed": ["Bob", "2000-01-01"]
"end_closed": ["Bob"]
The next example retrieves all events for "Bob":
"start_closed": ["Bob"]
"end_closed": ["Bob"]
To retrieve events before the year 2000:
"start_closed": ["Bob"]
"end_open": ["Bob", "2000-01-01"]
The following range includes all rows in the table:
"start_closed": []
"end_closed": []
This range returns all users whose UserName begins with any character from A to C:
"start_closed": ["A"]
"end_open": ["D"]
This range returns all users whose UserName begins with B:
"start_closed": ["B"]
"end_open": ["C"]
Key ranges honor column sort order. For example, suppose a table is defined as follows:
CREATE TABLE DescendingSortedTable (
  Key INT64,
  ...
) PRIMARY KEY(Key DESC);
The following range retrieves all rows with key values between 1 and 100 inclusive:
"start_closed": ["100"]
"end_closed": ["1"]
Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
Protobuf type google.spanner.v1.KeyRange
KeySet
KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.
If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.
Protobuf type google.spanner.v1.KeySet
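A minimal sketch: a KeySet naming one specific row plus a range; if the key also fell inside the range, Cloud Spanner would behave as if it were specified once. key(...) and bob2015() are the illustrative helpers from the KeyRange sketch above:
import com.google.spanner.v1.KeySet;

class KeySetExample {
  static KeySet alfredPlusBob2015() {
    return KeySet.newBuilder()
        .addKeys(KeyRangeExample.key("Alfred", "2015-06-12"))
        .addRanges(KeyRangeExample.bob2015())
        .build();
  }
}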
KeySet.Builder
KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.
If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.
Protobuf type google.spanner.v1.KeySet
KeysProto
ListSessionsRequest
The request for ListSessions.
Protobuf type google.spanner.v1.ListSessionsRequest
ListSessionsRequest.Builder
The request for ListSessions.
Protobuf type google.spanner.v1.ListSessionsRequest
ListSessionsResponse
The response for ListSessions.
Protobuf type google.spanner.v1.ListSessionsResponse
ListSessionsResponse.Builder
The response for ListSessions.
Protobuf type google.spanner.v1.ListSessionsResponse
Mutation
A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a Commit call.
Protobuf type google.spanner.v1.Mutation
Mutation.Builder
A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a Commit call.
Protobuf type google.spanner.v1.Mutation
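A minimal sketch of an insert mutation against the UserEvents example table from the KeyRange description; in a real call it would be attached to a CommitRequest via addMutations:
import com.google.protobuf.ListValue;
import com.google.protobuf.Value;
import com.google.spanner.v1.Mutation;

class MutationExample {
  static Mutation insertUserEvent() {
    return Mutation.newBuilder()
        .setInsert(Mutation.Write.newBuilder()
            .setTable("UserEvents")
            .addColumns("UserName")
            .addColumns("EventDate")
            // One ListValue per row, with values in column order.
            .addValues(ListValue.newBuilder()
                .addValues(Value.newBuilder().setStringValue("Bob"))
                .addValues(Value.newBuilder().setStringValue("2014-09-23"))))
        .build();
  }
}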
Mutation.Delete
Arguments to delete operations.
Protobuf type google.spanner.v1.Mutation.Delete
Mutation.Delete.Builder
Arguments to delete operations.
Protobuf type google.spanner.v1.Mutation.Delete
Mutation.Write
Arguments to insert, update, insert_or_update, and replace operations.
Protobuf type google.spanner.v1.Mutation.Write
Mutation.Write.Builder
Arguments to insert, update, insert_or_update, and replace operations.
Protobuf type google.spanner.v1.Mutation.Write
MutationProto
PartialResultSet
Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
Protobuf type google.spanner.v1.PartialResultSet
PartialResultSet.Builder
Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
Protobuf type google.spanner.v1.PartialResultSet
Partition
Information returned for each partition returned in a PartitionResponse.
Protobuf type google.spanner.v1.Partition
Partition.Builder
Information returned for each partition returned in a PartitionResponse.
Protobuf type google.spanner.v1.Partition
PartitionOptions
Options for a PartitionQueryRequest and PartitionReadRequest.
Protobuf type google.spanner.v1.PartitionOptions
PartitionOptions.Builder
Options for a PartitionQueryRequest and PartitionReadRequest.
Protobuf type google.spanner.v1.PartitionOptions
PartitionQueryRequest
The request for PartitionQuery.
Protobuf type google.spanner.v1.PartitionQueryRequest
PartitionQueryRequest.Builder
The request for PartitionQuery.
Protobuf type google.spanner.v1.PartitionQueryRequest
PartitionReadRequest
The request for PartitionRead.
Protobuf type google.spanner.v1.PartitionReadRequest
PartitionReadRequest.Builder
The request for PartitionRead.
Protobuf type google.spanner.v1.PartitionReadRequest
PartitionResponse
The response for PartitionQuery or PartitionRead.
Protobuf type google.spanner.v1.PartitionResponse
PartitionResponse.Builder
The response for PartitionQuery or PartitionRead.
Protobuf type google.spanner.v1.PartitionResponse
PlanNode
Node information for nodes appearing in a QueryPlan.plan_nodes.
Protobuf type google.spanner.v1.PlanNode
PlanNode.Builder
Node information for nodes appearing in a QueryPlan.plan_nodes.
Protobuf type google.spanner.v1.PlanNode
PlanNode.ChildLink
Metadata associated with a parent-child relationship appearing in a PlanNode.
Protobuf type google.spanner.v1.PlanNode.ChildLink
PlanNode.ChildLink.Builder
Metadata associated with a parent-child relationship appearing in a PlanNode.
Protobuf type google.spanner.v1.PlanNode.ChildLink
PlanNode.ShortRepresentation
Condensed representation of a node and its subtree. Only present for SCALAR PlanNode(s).
Protobuf type google.spanner.v1.PlanNode.ShortRepresentation
PlanNode.ShortRepresentation.Builder
Condensed representation of a node and its subtree. Only present for SCALAR PlanNode(s).
Protobuf type google.spanner.v1.PlanNode.ShortRepresentation
QueryPlan
Contains an ordered list of nodes appearing in the query plan.
Protobuf type google.spanner.v1.QueryPlan
QueryPlan.Builder
Contains an ordered list of nodes appearing in the query plan.
Protobuf type google.spanner.v1.QueryPlan
QueryPlanProto
ReadRequest
The request for Read and StreamingRead.
Protobuf type google.spanner.v1.ReadRequest
ReadRequest.Builder
The request for Read and StreamingRead.
Protobuf type google.spanner.v1.ReadRequest
RequestOptions
Common request options for various APIs.
Protobuf type google.spanner.v1.RequestOptions
RequestOptions.Builder
Common request options for various APIs.
Protobuf type google.spanner.v1.RequestOptions
ResultSet
Results from Read or ExecuteSql.
Protobuf type google.spanner.v1.ResultSet
ResultSet.Builder
Results from Read or ExecuteSql.
Protobuf type google.spanner.v1.ResultSet
ResultSetMetadata
Metadata about a ResultSet or PartialResultSet.
Protobuf type google.spanner.v1.ResultSetMetadata
ResultSetMetadata.Builder
Metadata about a ResultSet or PartialResultSet.
Protobuf type google.spanner.v1.ResultSetMetadata
ResultSetProto
ResultSetStats
Additional statistics about a ResultSet or PartialResultSet.
Protobuf type google.spanner.v1.ResultSetStats
ResultSetStats.Builder
Additional statistics about a ResultSet or PartialResultSet.
Protobuf type google.spanner.v1.ResultSetStats
RollbackRequest
The request for Rollback.
Protobuf type google.spanner.v1.RollbackRequest
RollbackRequest.Builder
The request for Rollback.
Protobuf type google.spanner.v1.RollbackRequest
Session
A session in the Cloud Spanner API.
Protobuf type google.spanner.v1.Session
Session.Builder
A session in the Cloud Spanner API.
Protobuf type google.spanner.v1.Session
SessionName
SessionName.Builder
Builder for projects/{project}/instances/{instance}/databases/{database}/sessions/{session}.
SpannerGrpc
Cloud Spanner API
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
SpannerGrpc.SpannerBlockingStub
Cloud Spanner API
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
SpannerGrpc.SpannerFutureStub
Cloud Spanner API
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
SpannerGrpc.SpannerImplBase
Cloud Spanner API
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
SpannerGrpc.SpannerStub
Cloud Spanner API
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
SpannerProto
StructType
StructType defines the fields of a STRUCT type.
Protobuf type google.spanner.v1.StructType
StructType.Builder
StructType defines the fields of a STRUCT type.
Protobuf type google.spanner.v1.StructType
StructType.Field
Message representing a single field of a struct.
Protobuf type google.spanner.v1.StructType.Field
StructType.Field.Builder
Message representing a single field of a struct.
Protobuf type google.spanner.v1.StructType.Field
Transaction
A transaction.
Protobuf type google.spanner.v1.Transaction
Transaction.Builder
A transaction.
Protobuf type google.spanner.v1.Transaction
TransactionOptions
Transactions
Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.
Transaction Modes
Cloud Spanner supports three transaction modes:
- Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.
- Snapshot read-only. This transaction type provides guaranteed consistency across several reads, but does not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past. Snapshot read-only transactions do not need to be committed.
- Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.
For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed.
Transactions may only read/write data in a single database. They may, however, read/write data in different tables within that database.
## Locking Read-Write Transactions
Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction.
## Semantics
Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.
Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.
## Retrying Aborted Transactions
When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous.
Under some circumstances (e.g., many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of wall time spent retrying.
## Idle Transactions
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.
If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.
## Snapshot Read-Only Transactions
Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.
Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.
Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.
Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so).
To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are:
- Strong (the default).
- Bounded staleness.
- Exact staleness.
If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.
Each type of timestamp bound is discussed in detail below.
## Strong
Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.
See TransactionOptions.ReadOnly.strong.
## Exact Staleness
These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp <= the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished.
The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.
These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.
See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness.
## Bounded Staleness
Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.
All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.
Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.
As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.
Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.
See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp.
## Old Read Timestamps and Garbage Collection
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION.
## Partitioned DML Transactions
Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions.
Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another.
To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time.
That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions.
- The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.
- The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.
- Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement will be applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.
- The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows.
- Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.
Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.
Protobuf type google.spanner.v1.TransactionOptions
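A minimal sketch of the bounded-staleness bound discussed above: a read-only mode that lets Cloud Spanner pick any read timestamp at most 10 seconds stale (usable only with single-use read-only transactions, as noted):
import com.google.protobuf.Duration;
import com.google.spanner.v1.TransactionOptions;

class ReadOnlyOptionsExample {
  static TransactionOptions boundedStaleness() {
    return TransactionOptions.newBuilder()
        .setReadOnly(TransactionOptions.ReadOnly.newBuilder()
            .setMaxStaleness(Duration.newBuilder().setSeconds(10))
            // Ask Cloud Spanner to report the timestamp it chose.
            .setReturnReadTimestamp(true))
        .build();
  }
}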
TransactionOptions.Builder
Transactions
Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.
Transaction Modes
Cloud Spanner supports three transaction modes:
- Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.
- Snapshot read-only. This transaction type provides guaranteed consistency across several reads, but does not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past. Snapshot read-only transactions do not need to be committed.
- Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.
For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed.
Transactions may only read/write data in a single database. They may, however, read/write data in different tables within that database.
## Locking Read-Write Transactions
Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction.
## Semantics
Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.
Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.
## Retrying Aborted Transactions
When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous.
Under some circumstances (e.g., many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of wall time spent retrying.
## Idle Transactions
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.
If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.
## Snapshot Read-Only Transactions
Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.
Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.
Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.
Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so).
To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are:
- Strong (the default).
- Bounded staleness.
- Exact staleness.
If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.
Each type of timestamp bound is discussed in detail below.
## Strong
Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.
See TransactionOptions.ReadOnly.strong.
## Exact Staleness
These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp <= the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished.
The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.
These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.
See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness.
## Bounded Staleness
Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.
All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.
Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.
As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.
Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.
See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp.
## Old Read Timestamps and Garbage Collection
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION.
## Partitioned DML Transactions
Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions.
Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another.
To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time.
That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions.
- The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.
- The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.
- Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement will be applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.
- The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows.
- Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.
Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.
Protobuf type google.spanner.v1.TransactionOptions
TransactionOptions.PartitionedDml
Message type to initiate a Partitioned DML transaction.
Protobuf type google.spanner.v1.TransactionOptions.PartitionedDml
TransactionOptions.PartitionedDml.Builder
Message type to initiate a Partitioned DML transaction.
Protobuf type google.spanner.v1.TransactionOptions.PartitionedDml
TransactionOptions.ReadOnly
Message type to initiate a read-only transaction.
Protobuf type google.spanner.v1.TransactionOptions.ReadOnly
TransactionOptions.ReadOnly.Builder
Message type to initiate a read-only transaction.
Protobuf type google.spanner.v1.TransactionOptions.ReadOnly
TransactionOptions.ReadWrite
Message type to initiate a read-write transaction. Currently this transaction type has no options.
Protobuf type google.spanner.v1.TransactionOptions.ReadWrite
TransactionOptions.ReadWrite.Builder
Message type to initiate a read-write transaction. Currently this transaction type has no options.
Protobuf type google.spanner.v1.TransactionOptions.ReadWrite
TransactionProto
TransactionSelector
This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions.
Protobuf type google.spanner.v1.TransactionSelector
TransactionSelector.Builder
This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions.
Protobuf type google.spanner.v1.TransactionSelector
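A minimal sketch: select a temporary single-use transaction for one read or query, here with the default strong timestamp bound (the boundedStaleness() options sketched above could be substituted for a stale read):
import com.google.spanner.v1.TransactionOptions;
import com.google.spanner.v1.TransactionSelector;

class SelectorExample {
  static TransactionSelector singleUseStrong() {
    return TransactionSelector.newBuilder()
        .setSingleUse(TransactionOptions.newBuilder()
            .setReadOnly(TransactionOptions.ReadOnly.newBuilder()
                .setStrong(true)))
        .build();
  }
}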
Type
Type indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.
Protobuf type google.spanner.v1.Type
Type.Builder
Type indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.
Protobuf type google.spanner.v1.Type
TypeProto
Interfaces
BatchCreateSessionsRequestOrBuilder
BatchCreateSessionsResponseOrBuilder
BeginTransactionRequestOrBuilder
CommitRequestOrBuilder
CommitResponse.CommitStatsOrBuilder
CommitResponseOrBuilder
CreateSessionRequestOrBuilder
DeleteSessionRequestOrBuilder
ExecuteBatchDmlRequest.StatementOrBuilder
ExecuteBatchDmlRequestOrBuilder
ExecuteBatchDmlResponseOrBuilder
ExecuteSqlRequest.QueryOptionsOrBuilder
ExecuteSqlRequestOrBuilder
GetSessionRequestOrBuilder
KeyRangeOrBuilder
KeySetOrBuilder
ListSessionsRequestOrBuilder
ListSessionsResponseOrBuilder
Mutation.DeleteOrBuilder
Mutation.WriteOrBuilder
MutationOrBuilder
PartialResultSetOrBuilder
PartitionOptionsOrBuilder
PartitionOrBuilder
PartitionQueryRequestOrBuilder
PartitionReadRequestOrBuilder
PartitionResponseOrBuilder
PlanNode.ChildLinkOrBuilder
PlanNode.ShortRepresentationOrBuilder
PlanNodeOrBuilder
QueryPlanOrBuilder
ReadRequestOrBuilder
RequestOptionsOrBuilder
ResultSetMetadataOrBuilder
ResultSetOrBuilder
ResultSetStatsOrBuilder
RollbackRequestOrBuilder
SessionOrBuilder
StructType.FieldOrBuilder
StructTypeOrBuilder
TransactionOptions.PartitionedDmlOrBuilder
TransactionOptions.ReadOnlyOrBuilder
TransactionOptions.ReadWriteOrBuilder
TransactionOptionsOrBuilder
TransactionOrBuilder
TransactionSelectorOrBuilder
TypeOrBuilder
Enums
CommitRequest.TransactionCase
ExecuteSqlRequest.QueryMode
Mode in which the statement must be processed.
Protobuf enum google.spanner.v1.ExecuteSqlRequest.QueryMode
KeyRange.EndKeyTypeCase
KeyRange.StartKeyTypeCase
Mutation.OperationCase
PlanNode.Kind
The kind of PlanNode. Distinguishes between the two different kinds of nodes that can appear in a query plan.
Protobuf enum google.spanner.v1.PlanNode.Kind
RequestOptions.Priority
The relative priority for requests. Note that priority is not applicable for BeginTransaction. The priority acts as a hint to the Cloud Spanner scheduler and does not guarantee priority or order of execution. For example:
- Some parts of a write operation always execute at PRIORITY_HIGH, regardless of the specified priority. This may cause you to see an increase in high priority workload even when executing a low priority request. This can also potentially cause a priority inversion where a lower priority request will be fulfilled ahead of a higher priority request.
- If a transaction contains multiple operations with different priorities, Cloud Spanner does not guarantee to process the higher priority operations first. There may be other constraints to satisfy, such as order of operations.
Protobuf enum google.spanner.v1.RequestOptions.Priority
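A minimal sketch: tagging a request as low priority; per the notes above this is only a hint to the scheduler, and some parts of writes still run at PRIORITY_HIGH:
import com.google.spanner.v1.RequestOptions;

class PriorityExample {
  static RequestOptions lowPriority() {
    return RequestOptions.newBuilder()
        .setPriority(RequestOptions.Priority.PRIORITY_LOW)
        .build();
  }
}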
ResultSetStats.RowCountCase
TransactionOptions.ModeCase
TransactionOptions.ReadOnly.TimestampBoundCase
TransactionSelector.SelectorCase
TypeCode
TypeCode is used as part of Type to indicate the type of a Cloud Spanner value.
Each legal value of a type can be encoded to or decoded from a JSON value, using the encodings described below. All Cloud Spanner values can be null, regardless of type; nulls are always encoded as a JSON null.
Protobuf enum google.spanner.v1.TypeCode