public static final class SpannerGrpc.SpannerStub extends AbstractAsyncStub<SpannerGrpc.SpannerStub>
A stub to allow clients to do asynchronous rpc calls to service Spanner.
The Cloud Spanner API can be used to manage sessions and execute transactions on data stored in Cloud Spanner databases.
Inheritance
java.lang.Object > io.grpc.stub.AbstractStub > io.grpc.stub.AbstractAsyncStub > SpannerGrpc.SpannerStub

Methods
batchCreateSessions(BatchCreateSessionsRequest request, StreamObserver<BatchCreateSessionsResponse> responseObserver)
public void batchCreateSessions(BatchCreateSessionsRequest request, StreamObserver<BatchCreateSessionsResponse> responseObserver)
Creates multiple new sessions. This API can be used to initialize a session cache on the clients. See https://goo.gl/TgSFN2 for best practices on session cache management.
Parameters

Name | Description
---|---
request | BatchCreateSessionsRequest
responseObserver | io.grpc.stub.StreamObserver<BatchCreateSessionsResponse>
batchWrite(BatchWriteRequest request, StreamObserver<BatchWriteResponse> responseObserver)
public void batchWrite(BatchWriteRequest request, StreamObserver<BatchWriteResponse> responseObserver)
Batches the supplied mutation groups in a collection of efficient transactions. All mutations in a group are committed atomically. However, mutations across groups can be committed non-atomically in an unspecified order and thus, they must be independent of each other. Partial failure is possible: some groups might have been committed successfully, while others might have failed. The results of individual batches are streamed into the response as the batches are applied.

BatchWrite requests are not replay protected, meaning that each mutation group can be applied more than once. Replays of non-idempotent mutations can have undesirable effects. For example, replays of an insert mutation can produce an already exists error or, if you use generated or commit-timestamp-based keys, can result in additional rows being added to the mutation's table. We recommend structuring your mutation groups to be idempotent to avoid this issue.
Parameters

Name | Description
---|---
request | BatchWriteRequest
responseObserver | io.grpc.stub.StreamObserver<BatchWriteResponse>
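The replay guidance above can be sketched as follows. This is a hedged example, not the library's prescribed pattern: it assumes a SpannerGrpc.SpannerStub and a session name are already in scope, and the table, columns, and values are illustrative. Using insert_or_update with a client-chosen key keeps each group idempotent, so a replayed group overwrites the same row rather than failing or adding rows.

```java
import com.google.protobuf.ListValue;
import com.google.protobuf.Value;
import com.google.spanner.v1.BatchWriteRequest;
import com.google.spanner.v1.BatchWriteResponse;
import com.google.spanner.v1.Mutation;
import com.google.spanner.v1.SpannerGrpc;
import io.grpc.stub.StreamObserver;

static void idempotentBatchWrite(SpannerGrpc.SpannerStub stub, String sessionName) {
  // insert_or_update with a client-supplied key is idempotent under replay.
  Mutation upsert = Mutation.newBuilder()
      .setInsertOrUpdate(Mutation.Write.newBuilder()
          .setTable("Albums") // illustrative table and columns
          .addColumns("AlbumId").addColumns("Title")
          .addValues(ListValue.newBuilder()
              .addValues(Value.newBuilder().setStringValue("a1"))
              .addValues(Value.newBuilder().setStringValue("Go, Go, Go"))))
      .build();
  BatchWriteRequest request = BatchWriteRequest.newBuilder()
      .setSession(sessionName)
      .addMutationGroups(
          BatchWriteRequest.MutationGroup.newBuilder().addMutations(upsert))
      .build();
  stub.batchWrite(request, new StreamObserver<BatchWriteResponse>() {
    @Override public void onNext(BatchWriteResponse r) {
      // Each streamed response reports the outcome for one or more groups
      // (by index); a non-OK status here means those groups failed while
      // other groups may still have been committed.
      System.out.println("groups " + r.getIndexesList() + ": " + r.getStatus().getCode());
    }
    @Override public void onError(Throwable t) { t.printStackTrace(); }
    @Override public void onCompleted() { }
  });
}
```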
beginTransaction(BeginTransactionRequest request, StreamObserver<Transaction> responseObserver)
public void beginTransaction(BeginTransactionRequest request, StreamObserver<Transaction> responseObserver)
Begins a new transaction. This step can often be skipped: Read, ExecuteSql and Commit can begin a new transaction as a side-effect.
Parameters

Name | Description
---|---
request | BeginTransactionRequest
responseObserver | io.grpc.stub.StreamObserver<Transaction>
build(Channel channel, CallOptions callOptions)
protected SpannerGrpc.SpannerStub build(Channel channel, CallOptions callOptions)
Parameters

Name | Description
---|---
channel | io.grpc.Channel
callOptions | io.grpc.CallOptions

Returns

Type | Description
---|---
SpannerGrpc.SpannerStub |
commit(CommitRequest request, StreamObserver<CommitResponse> responseObserver)
public void commit(CommitRequest request, StreamObserver<CommitResponse> responseObserver)
Commits a transaction. The request includes the mutations to be applied to rows in the database.

Commit might return an ABORTED error. This can occur at any time; commonly, the cause is conflicts with concurrent transactions. However, it can also happen for a variety of other reasons. If Commit returns ABORTED, the caller should retry the transaction from the beginning, reusing the same session.

On very rare occasions, Commit might return UNKNOWN. This can happen, for example, if the client job experiences a 1+ hour networking failure. At that point, Cloud Spanner has lost track of the transaction outcome and we recommend that you perform another read from the database to see the state of things as they are now.
Parameters

Name | Description
---|---
request | CommitRequest
responseObserver | io.grpc.stub.StreamObserver<CommitResponse>
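The retry-on-ABORTED behavior described above can be sketched as follows; a hedged example assuming a SpannerGrpc.SpannerStub, a session name, and a transaction id from an earlier BeginTransaction are in scope. The retryWholeTransaction callback is a hypothetical hook standing in for rerunning the application's transaction logic.

```java
import com.google.protobuf.ByteString;
import com.google.spanner.v1.CommitRequest;
import com.google.spanner.v1.CommitResponse;
import com.google.spanner.v1.SpannerGrpc;
import io.grpc.Status;
import io.grpc.stub.StreamObserver;

static void commitOnce(SpannerGrpc.SpannerStub stub, String sessionName,
    ByteString transactionId, Runnable retryWholeTransaction) {
  CommitRequest request = CommitRequest.newBuilder()
      .setSession(sessionName)
      .setTransactionId(transactionId) // from an earlier BeginTransaction
      .build();
  stub.commit(request, new StreamObserver<CommitResponse>() {
    @Override public void onNext(CommitResponse response) {
      System.out.println("committed at " + response.getCommitTimestamp());
    }
    @Override public void onError(Throwable t) {
      if (Status.fromThrowable(t).getCode() == Status.Code.ABORTED) {
        // Rerun the entire transaction (including its reads) from the
        // beginning, reusing the same session.
        retryWholeTransaction.run();
      }
    }
    @Override public void onCompleted() { }
  });
}
```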
createSession(CreateSessionRequest request, StreamObserver<Session> responseObserver)
public void createSession(CreateSessionRequest request, StreamObserver<Session> responseObserver)
Creates a new session. A session can be used to perform transactions that read and/or modify data in a Cloud Spanner database. Sessions are meant to be reused for many consecutive transactions.

Sessions can only execute one transaction at a time. To execute multiple concurrent read-write/write-only transactions, create multiple sessions. Note that standalone reads and queries use a transaction internally and count toward the one-transaction limit.

Active sessions use additional server resources, so it's a good idea to delete idle and unneeded sessions. Aside from explicit deletes, Cloud Spanner can delete sessions when no operations are sent for more than an hour. If a session is deleted, requests to it return NOT_FOUND.

Idle sessions can be kept alive by sending a trivial SQL query periodically, for example, "SELECT 1".
Parameters

Name | Description
---|---
request | CreateSessionRequest
responseObserver | io.grpc.stub.StreamObserver<Session>
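The keepalive advice above can be sketched like this; a hedged example assuming a SpannerGrpc.SpannerStub, a database name, a ScheduledExecutorService, and a ResultSet observer are supplied by the caller. The 30-minute interval is an assumption chosen to stay well inside the roughly one-hour idle window.

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.google.spanner.v1.CreateSessionRequest;
import com.google.spanner.v1.ExecuteSqlRequest;
import com.google.spanner.v1.ResultSet;
import com.google.spanner.v1.Session;
import com.google.spanner.v1.SpannerGrpc;
import io.grpc.stub.StreamObserver;

static void createAndKeepAlive(SpannerGrpc.SpannerStub stub, String database,
    ScheduledExecutorService scheduler, StreamObserver<ResultSet> ignoreResult) {
  CreateSessionRequest request =
      CreateSessionRequest.newBuilder().setDatabase(database).build();
  stub.createSession(request, new StreamObserver<Session>() {
    @Override public void onNext(Session session) {
      // Ping well inside the ~1 hour idle window so the server does not
      // delete the session (later requests would then return NOT_FOUND).
      ExecuteSqlRequest ping = ExecuteSqlRequest.newBuilder()
          .setSession(session.getName())
          .setSql("SELECT 1")
          .build();
      scheduler.scheduleAtFixedRate(
          () -> stub.executeSql(ping, ignoreResult), 30, 30, TimeUnit.MINUTES);
    }
    @Override public void onError(Throwable t) { t.printStackTrace(); }
    @Override public void onCompleted() { }
  });
}
```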
deleteSession(DeleteSessionRequest request, StreamObserver<Empty> responseObserver)
public void deleteSession(DeleteSessionRequest request, StreamObserver<Empty> responseObserver)
Ends a session, releasing server resources associated with it. This asynchronously triggers the cancellation of any operations that are running with this session.
Parameters

Name | Description
---|---
request | DeleteSessionRequest
responseObserver | io.grpc.stub.StreamObserver<Empty>
executeBatchDml(ExecuteBatchDmlRequest request, StreamObserver<ExecuteBatchDmlResponse> responseObserver)
public void executeBatchDml(ExecuteBatchDmlRequest request, StreamObserver<ExecuteBatchDmlResponse> responseObserver)
Executes a batch of SQL DML statements. This method allows many statements to be run with lower latency than submitting them sequentially with ExecuteSql. Statements are executed in sequential order. A request can succeed even if a statement fails. The ExecuteBatchDmlResponse.status field in the response provides information about the statement that failed. Clients must inspect this field to determine whether an error occurred. Execution stops after the first failed statement; the remaining statements are not executed.
Parameters

Name | Description
---|---
request | ExecuteBatchDmlRequest
responseObserver | io.grpc.stub.StreamObserver<ExecuteBatchDmlResponse>
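Because the RPC can succeed even when a statement fails, the status field must be inspected explicitly. A hedged sketch, assuming a SpannerGrpc.SpannerStub, a session name, and a read-write transaction id are in scope; the SQL statements are illustrative:

```java
import com.google.protobuf.ByteString;
import com.google.rpc.Code;
import com.google.spanner.v1.ExecuteBatchDmlRequest;
import com.google.spanner.v1.ExecuteBatchDmlResponse;
import com.google.spanner.v1.SpannerGrpc;
import com.google.spanner.v1.TransactionSelector;
import io.grpc.stub.StreamObserver;

static void runBatchDml(SpannerGrpc.SpannerStub stub, String sessionName,
    ByteString transactionId) {
  ExecuteBatchDmlRequest request = ExecuteBatchDmlRequest.newBuilder()
      .setSession(sessionName)
      .setTransaction(TransactionSelector.newBuilder().setId(transactionId))
      .setSeqno(1) // per-transaction sequence number for replay protection
      .addStatements(ExecuteBatchDmlRequest.Statement.newBuilder()
          .setSql("UPDATE Albums SET Title = 'A' WHERE AlbumId = 'a1'"))
      .addStatements(ExecuteBatchDmlRequest.Statement.newBuilder()
          .setSql("DELETE FROM Albums WHERE AlbumId = 'a2'"))
      .build();
  stub.executeBatchDml(request, new StreamObserver<ExecuteBatchDmlResponse>() {
    @Override public void onNext(ExecuteBatchDmlResponse response) {
      // One ResultSet per statement that ran; if a statement failed,
      // result sets cover only the statements before it and status
      // describes the failure. Execution stops at the first failure.
      if (response.getStatus().getCode() != Code.OK_VALUE) {
        System.err.println("statement " + response.getResultSetsCount()
            + " failed: " + response.getStatus().getMessage());
      }
    }
    @Override public void onError(Throwable t) { t.printStackTrace(); }
    @Override public void onCompleted() { }
  });
}
```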
executeSql(ExecuteSqlRequest request, StreamObserver<ResultSet> responseObserver)
public void executeSql(ExecuteSqlRequest request, StreamObserver<ResultSet> responseObserver)
Executes an SQL statement, returning all results in a single reply. This method can't be used to return a result set larger than 10 MiB; if the query yields more data than that, the query fails with a FAILED_PRECONDITION error.

Operations inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details.

Larger result sets can be fetched in streaming fashion by calling ExecuteStreamingSql instead.

The query string can be SQL or Graph Query Language (GQL).
Parameters

Name | Description
---|---
request | ExecuteSqlRequest
responseObserver | io.grpc.stub.StreamObserver<ResultSet>
executeStreamingSql(ExecuteSqlRequest request, StreamObserver<PartialResultSet> responseObserver)
public void executeStreamingSql(ExecuteSqlRequest request, StreamObserver<PartialResultSet> responseObserver)
Like ExecuteSql, except returns the result set as a stream. Unlike ExecuteSql, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB. The query string can be SQL or Graph Query Language (GQL).
Parameters

Name | Description
---|---
request | ExecuteSqlRequest
responseObserver | io.grpc.stub.StreamObserver<PartialResultSet>
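Consuming the stream looks roughly like this; a hedged sketch assuming a SpannerGrpc.SpannerStub and a session name are in scope, with an illustrative query. Note that values arrive in chunks and a single value can span chunk boundaries.

```java
import com.google.spanner.v1.ExecuteSqlRequest;
import com.google.spanner.v1.PartialResultSet;
import com.google.spanner.v1.SpannerGrpc;
import io.grpc.stub.StreamObserver;

static void streamQuery(SpannerGrpc.SpannerStub stub, String sessionName) {
  ExecuteSqlRequest request = ExecuteSqlRequest.newBuilder()
      .setSession(sessionName)
      .setSql("SELECT SingerId, FullName FROM Singers") // illustrative
      .build();
  stub.executeStreamingSql(request, new StreamObserver<PartialResultSet>() {
    @Override public void onNext(PartialResultSet chunk) {
      // chunked_value == true means the last value in this chunk
      // continues in the next PartialResultSet and must be merged.
      System.out.println(chunk.getValuesList()
          + (chunk.getChunkedValue() ? " (continued...)" : ""));
    }
    @Override public void onError(Throwable t) { t.printStackTrace(); }
    @Override public void onCompleted() { System.out.println("done"); }
  });
}
```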
getSession(GetSessionRequest request, StreamObserver<Session> responseObserver)
public void getSession(GetSessionRequest request, StreamObserver<Session> responseObserver)
Gets a session. Returns NOT_FOUND if the session doesn't exist. This is mainly useful for determining whether a session is still alive.
Parameters

Name | Description
---|---
request | GetSessionRequest
responseObserver | io.grpc.stub.StreamObserver<Session>
listSessions(ListSessionsRequest request, StreamObserver<ListSessionsResponse> responseObserver)
public void listSessions(ListSessionsRequest request, StreamObserver<ListSessionsResponse> responseObserver)
Lists all sessions in a given database.
Parameters

Name | Description
---|---
request | ListSessionsRequest
responseObserver | io.grpc.stub.StreamObserver<ListSessionsResponse>
partitionQuery(PartitionQueryRequest request, StreamObserver<PartitionResponse> responseObserver)
public void partitionQuery(PartitionQueryRequest request, StreamObserver<PartitionResponse> responseObserver)
Creates a set of partition tokens that can be used to execute a query operation in parallel. Each of the returned partition tokens can be used by ExecuteStreamingSql to specify a subset of the query result to read. The same session and read-only transaction must be used by the PartitionQueryRequest used to create the partition tokens and the ExecuteSqlRequests that use the partition tokens.

Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it isn't possible to resume the query, and the whole operation must be restarted from the beginning.
Parameters

Name | Description
---|---
request | PartitionQueryRequest
responseObserver | io.grpc.stub.StreamObserver<PartitionResponse>
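The fan-out described above can be sketched as follows; a hedged example assuming a SpannerGrpc.SpannerStub, a session name, a read-only transaction id, and a PartialResultSet observer are supplied by the caller, with an illustrative query. The key constraint, per the description, is that every shard must reuse the session and read-only transaction that produced the tokens.

```java
import com.google.protobuf.ByteString;
import com.google.spanner.v1.ExecuteSqlRequest;
import com.google.spanner.v1.PartialResultSet;
import com.google.spanner.v1.Partition;
import com.google.spanner.v1.PartitionQueryRequest;
import com.google.spanner.v1.PartitionResponse;
import com.google.spanner.v1.SpannerGrpc;
import com.google.spanner.v1.TransactionSelector;
import io.grpc.stub.StreamObserver;

static void partitionedQuery(SpannerGrpc.SpannerStub stub, String sessionName,
    ByteString readOnlyTxnId, StreamObserver<PartialResultSet> rows) {
  String sql = "SELECT * FROM Singers"; // illustrative; must be partitionable
  PartitionQueryRequest request = PartitionQueryRequest.newBuilder()
      .setSession(sessionName)
      .setTransaction(TransactionSelector.newBuilder().setId(readOnlyTxnId))
      .setSql(sql)
      .build();
  stub.partitionQuery(request, new StreamObserver<PartitionResponse>() {
    @Override public void onNext(PartitionResponse response) {
      for (Partition partition : response.getPartitionsList()) {
        // Each token can be consumed in parallel, but must reuse the
        // same session and read-only transaction that produced it.
        stub.executeStreamingSql(ExecuteSqlRequest.newBuilder()
            .setSession(sessionName)
            .setTransaction(TransactionSelector.newBuilder().setId(readOnlyTxnId))
            .setSql(sql)
            .setPartitionToken(partition.getPartitionToken())
            .build(), rows);
      }
    }
    @Override public void onError(Throwable t) { t.printStackTrace(); }
    @Override public void onCompleted() { }
  });
}
```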
partitionRead(PartitionReadRequest request, StreamObserver<PartitionResponse> responseObserver)
public void partitionRead(PartitionReadRequest request, StreamObserver<PartitionResponse> responseObserver)
Creates a set of partition tokens that can be used to execute a read operation in parallel. Each of the returned partition tokens can be used by StreamingRead to specify a subset of the read result to read. The same session and read-only transaction must be used by the PartitionReadRequest used to create the partition tokens and the ReadRequests that use the partition tokens.

There are no ordering guarantees on rows returned among the returned partition tokens, or even within each individual StreamingRead call issued with a partition_token.

Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it isn't possible to resume the read, and the whole operation must be restarted from the beginning.
Parameters

Name | Description
---|---
request | PartitionReadRequest
responseObserver | io.grpc.stub.StreamObserver<PartitionResponse>
read(ReadRequest request, StreamObserver<ResultSet> responseObserver)
public void read(ReadRequest request, StreamObserver<ResultSet> responseObserver)
Reads rows from the database using key lookups and scans, as a simple key/value style alternative to ExecuteSql. This method can't be used to return a result set larger than 10 MiB; if the read matches more data than that, the read fails with a FAILED_PRECONDITION error.

Reads inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details.

Larger result sets can be yielded in streaming fashion by calling StreamingRead instead.
Parameters

Name | Description
---|---
request | ReadRequest
responseObserver | io.grpc.stub.StreamObserver<ResultSet>
rollback(RollbackRequest request, StreamObserver<Empty> responseObserver)
public void rollback(RollbackRequest request, StreamObserver<Empty> responseObserver)
Rolls back a transaction, releasing any locks it holds. It's a good idea to call this for any transaction that includes one or more Read or ExecuteSql requests and ultimately decides not to commit.

Rollback returns OK if it successfully aborts the transaction, the transaction was already aborted, or the transaction isn't found. Rollback never returns ABORTED.
Parameters

Name | Description
---|---
request | RollbackRequest
responseObserver | io.grpc.stub.StreamObserver<Empty>
streamingRead(ReadRequest request, StreamObserver<PartialResultSet> responseObserver)
public void streamingRead(ReadRequest request, StreamObserver<PartialResultSet> responseObserver)
Like Read, except returns the result set as a stream. Unlike Read, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB.
Parameters

Name | Description
---|---
request | ReadRequest
responseObserver | io.grpc.stub.StreamObserver<PartialResultSet>