Package com.google.cloud.bigquery.storage.v1beta2 (3.1.0)

A client for the BigQuery Storage API

The interfaces provided are listed below, along with usage samples.

BaseBigQueryReadClient

Service Description: BigQuery Read API.

The Read API can be used to read data from BigQuery.

New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

Sample for BaseBigQueryReadClient:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
   ProjectName parent = ProjectName.of("[PROJECT]");
   ReadSession readSession = ReadSession.newBuilder().build();
   int maxStreamCount = 940837515;
   ReadSession response =
       baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
 }
 

BigQueryWriteClient

Service Description: BigQuery Write API.

The Write API can be used to write data to BigQuery.

The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

Sample for BigQueryWriteClient:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   TableName parent = TableName.of("[PROJECT]", "[DATASET]", "[TABLE]");
   WriteStream writeStream = WriteStream.newBuilder().build();
   WriteStream response = bigQueryWriteClient.createWriteStream(parent, writeStream);
 }
 

Classes

AppendRowsRequest

Request message for AppendRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsRequest

AppendRowsRequest.Builder

Request message for AppendRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsRequest

AppendRowsRequest.ProtoData

Proto schema and data.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsRequest.ProtoData

AppendRowsRequest.ProtoData.Builder

Proto schema and data.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsRequest.ProtoData

AppendRowsResponse

Response message for AppendRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsResponse

AppendRowsResponse.AppendResult

AppendResult is returned for successful append requests.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsResponse.AppendResult

AppendRowsResponse.AppendResult.Builder

AppendResult is returned for successful append requests.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsResponse.AppendResult

AppendRowsResponse.Builder

Response message for AppendRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.AppendRowsResponse

ArrowProto

ArrowRecordBatch

Arrow RecordBatch.

Protobuf type google.cloud.bigquery.storage.v1beta2.ArrowRecordBatch

ArrowRecordBatch.Builder

Arrow RecordBatch.

Protobuf type google.cloud.bigquery.storage.v1beta2.ArrowRecordBatch

ArrowSchema

Arrow schema as specified in https://arrow.apache.org/docs/python/api/datatypes.html and serialized to bytes using IPC: https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc

See code samples on how this message can be deserialized.

Protobuf type google.cloud.bigquery.storage.v1beta2.ArrowSchema

ArrowSchema.Builder

Arrow schema as specified in https://arrow.apache.org/docs/python/api/datatypes.html and serialized to bytes using IPC: https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc

See code samples on how this message can be deserialized.

Protobuf type google.cloud.bigquery.storage.v1beta2.ArrowSchema

ArrowSerializationOptions

Contains options specific to Arrow Serialization.

Protobuf type google.cloud.bigquery.storage.v1beta2.ArrowSerializationOptions

ArrowSerializationOptions.Builder

Contains options specific to Arrow Serialization.

Protobuf type google.cloud.bigquery.storage.v1beta2.ArrowSerializationOptions

AvroProto

AvroRows

Avro rows.

Protobuf type google.cloud.bigquery.storage.v1beta2.AvroRows

AvroRows.Builder

Avro rows.

Protobuf type google.cloud.bigquery.storage.v1beta2.AvroRows

AvroSchema

Avro schema.

Protobuf type google.cloud.bigquery.storage.v1beta2.AvroSchema

AvroSchema.Builder

Avro schema.

Protobuf type google.cloud.bigquery.storage.v1beta2.AvroSchema

BQTableSchemaToProtoDescriptor

Converts a BigQuery table schema to a protobuf descriptor. All field names are converted to lowercase when constructing the protobuf descriptor. The mappings between field types and field modes are shown in the ImmutableMaps below.
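
As a rough illustration, the sketch below builds a one-column TableSchema and converts it to a protobuf Descriptor. It assumes the static convertBQTableSchemaToProtoDescriptor method of this class, so treat it as a template rather than a definitive call.


 // Sketch only: convert a one-column TableSchema to a protobuf Descriptor.
 // Assumes BQTableSchemaToProtoDescriptor.convertBQTableSchemaToProtoDescriptor(TableSchema).
 TableSchema tableSchema =
     TableSchema.newBuilder()
         .addFields(
             TableFieldSchema.newBuilder()
                 .setName("col1")
                 .setType(TableFieldSchema.Type.STRING)
                 .setMode(TableFieldSchema.Mode.NULLABLE)
                 .build())
         .build();
 Descriptors.Descriptor descriptor =
     BQTableSchemaToProtoDescriptor.convertBQTableSchemaToProtoDescriptor(tableSchema);
 // Field names in the resulting descriptor are lowercased, as noted above.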

BaseBigQueryReadClient

Service Description: BigQuery Read API.

The Read API can be used to read data from BigQuery.

New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
   ProjectName parent = ProjectName.of("[PROJECT]");
   ReadSession readSession = ReadSession.newBuilder().build();
   int maxStreamCount = 940837515;
   ReadSession response =
       baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
 }
 

Note: close() needs to be called on the BaseBigQueryReadClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().

Methods
Method | Description | Method Variants

CreateReadSession

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • createReadSession(CreateReadSessionRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • createReadSession(ProjectName parent, ReadSession readSession, int maxStreamCount)

  • createReadSession(String parent, ReadSession readSession, int maxStreamCount)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • createReadSessionCallable()
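
For comparison with the flattened sample at the top of this page, a minimal sketch of the request-object variant (placeholder values; the same template caveats as the generated snippets apply):


 try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
   CreateReadSessionRequest request =
       CreateReadSessionRequest.newBuilder()
           .setParent(ProjectName.of("[PROJECT]").toString())
           .setReadSession(
               ReadSession.newBuilder()
                   .setTable("projects/[PROJECT]/datasets/[DATASET]/tables/[TABLE]")
                   .setDataFormat(DataFormat.AVRO)
                   .build())
           .setMaxStreamCount(1)
           .build();
   ReadSession response = baseBigQueryReadClient.createReadSession(request);
 }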

ReadRows

Reads rows from the stream in the format prescribed by the ReadSession. Each response contains one or more table rows, up to a maximum of 100 MiB per response; read requests which attempt to read individual rows larger than 100 MiB will fail.

Each request also returns a set of stream statistics reflecting the current state of the stream.

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • readRowsCallable()
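
Because ReadRows is a server-streaming call, it is exposed only through the callable variant. A minimal sketch of consuming the stream (placeholder values; the same template caveats apply):


 try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
   ReadRowsRequest request =
       ReadRowsRequest.newBuilder()
           .setReadStream(
               ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]").toString())
           .setOffset(0)
           .build();
   ServerStream<ReadRowsResponse> stream = baseBigQueryReadClient.readRowsCallable().call(request);
   for (ReadRowsResponse response : stream) {
     // Each response carries a block of Avro- or Arrow-encoded rows plus stream statistics.
   }
 }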

SplitReadStream

Splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • splitReadStream(SplitReadStreamRequest request)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • splitReadStreamCallable()
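
A minimal sketch of the request-object variant (placeholder values; the same template caveats apply):


 try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
   SplitReadStreamRequest request =
       SplitReadStreamRequest.newBuilder()
           .setName(
               ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]").toString())
           .setFraction(0.5) // fractional point at which to split the remaining rows
           .build();
   SplitReadStreamResponse response = baseBigQueryReadClient.splitReadStream(request);
   ReadStream primary = response.getPrimaryStream();
   ReadStream residual = response.getRemainderStream();
 }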

See the individual methods for example code.

Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
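
For example, a minimal sketch of formatting and then parsing a read stream resource name (placeholder values):


 String streamName = ReadStreamName.format("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
 ReadStreamName parsed = ReadStreamName.parse(streamName);
 String project = parsed.getProject(); // "[PROJECT]"
 String stream = parsed.getStream();   // "[STREAM]"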

This class can be customized by passing in a custom instance of BaseBigQueryReadSettings to create(). For example:

To customize credentials:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryReadSettings baseBigQueryReadSettings =
     BaseBigQueryReadSettings.newBuilder()
         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
         .build();
 BaseBigQueryReadClient baseBigQueryReadClient =
     BaseBigQueryReadClient.create(baseBigQueryReadSettings);
 

To customize the endpoint:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryReadSettings baseBigQueryReadSettings =
     BaseBigQueryReadSettings.newBuilder().setEndpoint(myEndpoint).build();
 BaseBigQueryReadClient baseBigQueryReadClient =
     BaseBigQueryReadClient.create(baseBigQueryReadSettings);
 

Please refer to the GitHub repository's samples for more quickstart code snippets.

BaseBigQueryReadSettings

Settings class to configure an instance of BaseBigQueryReadClient.

The default instance has everything set to sensible defaults:

  • The default service address (bigquerystorage.googleapis.com) and default port (443) are used.
  • Credentials are acquired automatically through Application Default Credentials.
  • Retries are configured for idempotent methods but not for non-idempotent methods.

The builder of this class is recursive, so contained classes are themselves builders. When build() is called, the tree of builders is called to create the complete settings object.

For example, to set the total timeout of createReadSession to 30 seconds:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryReadSettings.Builder baseBigQueryReadSettingsBuilder =
     BaseBigQueryReadSettings.newBuilder();
 baseBigQueryReadSettingsBuilder
     .createReadSessionSettings()
     .setRetrySettings(
         baseBigQueryReadSettingsBuilder
             .createReadSessionSettings()
             .getRetrySettings()
             .toBuilder()
             .setTotalTimeout(Duration.ofSeconds(30))
             .build());
 BaseBigQueryReadSettings baseBigQueryReadSettings = baseBigQueryReadSettingsBuilder.build();
 

BaseBigQueryReadSettings.Builder

Builder for BaseBigQueryReadSettings.

BatchCommitWriteStreamsRequest

Request message for BatchCommitWriteStreams.

Protobuf type google.cloud.bigquery.storage.v1beta2.BatchCommitWriteStreamsRequest

BatchCommitWriteStreamsRequest.Builder

Request message for BatchCommitWriteStreams.

Protobuf type google.cloud.bigquery.storage.v1beta2.BatchCommitWriteStreamsRequest

BatchCommitWriteStreamsResponse

Response message for BatchCommitWriteStreams.

Protobuf type google.cloud.bigquery.storage.v1beta2.BatchCommitWriteStreamsResponse

BatchCommitWriteStreamsResponse.Builder

Response message for BatchCommitWriteStreams.

Protobuf type google.cloud.bigquery.storage.v1beta2.BatchCommitWriteStreamsResponse

BigDecimalByteStringEncoder

BigQueryReadClient

Service Description: BigQuery Read API.

The Read API can be used to read data from BigQuery.

This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:

 
 try (BigQueryReadClient bigQueryReadClient = BigQueryReadClient.create()) {
   String parent = "";
   ReadSession readSession = ReadSession.newBuilder().build();
   int maxStreamCount = 0;
   ReadSession response =
       bigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
 }
 
 

Note: close() needs to be called on the BigQueryReadClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().

The surface of this class includes several types of Java methods for each of the API's methods:

  1. A "flattened" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.
  2. A "request object" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.
  3. A "callable" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.

See the individual methods for example code.

Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.

This class can be customized by passing in a custom instance of BigQueryReadSettings to create(). For example:

To customize credentials:

 
 BigQueryReadSettings bigQueryReadSettings =
     BigQueryReadSettings.newBuilder()
         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
         .build();
 BigQueryReadClient bigQueryReadClient =
     BigQueryReadClient.create(bigQueryReadSettings);
 
 

To customize the endpoint:

 
 BigQueryReadSettings bigQueryReadSettings =
     BigQueryReadSettings.newBuilder().setEndpoint(myEndpoint).build();
 BigQueryReadClient bigQueryReadClient =
     BigQueryReadClient.create(bigQueryReadSettings);
 
 

BigQueryReadGrpc

BigQuery Read API. The Read API can be used to read data from BigQuery. New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

BigQueryReadGrpc.BigQueryReadBlockingStub

A stub to allow clients to do synchronous rpc calls to service BigQueryRead.

BigQuery Read API. The Read API can be used to read data from BigQuery. New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

BigQueryReadGrpc.BigQueryReadFutureStub

A stub to allow clients to do ListenableFuture-style rpc calls to service BigQueryRead.

BigQuery Read API. The Read API can be used to read data from BigQuery. New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

BigQueryReadGrpc.BigQueryReadImplBase

Base class for the server implementation of the service BigQueryRead.

BigQuery Read API. The Read API can be used to read data from BigQuery. New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

BigQueryReadGrpc.BigQueryReadStub

A stub to allow clients to do asynchronous rpc calls to service BigQueryRead.

BigQuery Read API. The Read API can be used to read data from BigQuery. New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

BigQueryReadSettings

Settings class to configure an instance of BigQueryReadClient.

The default instance has everything set to sensible defaults:

  • The default service address (bigquerystorage.googleapis.com) and default port (443) are used.
  • Credentials are acquired automatically through Application Default Credentials.
  • Retries are configured for idempotent methods but not for non-idempotent methods.

The builder of this class is recursive, so contained classes are themselves builders. When build() is called, the tree of builders is called to create the complete settings object.

For example, to set the total timeout of createReadSession to 30 seconds:

 
 BigQueryReadSettings.Builder bigQueryReadSettingsBuilder = BigQueryReadSettings.newBuilder();
 bigQueryReadSettingsBuilder
     .createReadSessionSettings()
     .setRetrySettings(
         bigQueryReadSettingsBuilder.createReadSessionSettings().getRetrySettings().toBuilder()
             .setTotalTimeout(Duration.ofSeconds(30))
             .build());
 BigQueryReadSettings bigQueryReadSettings = bigQueryReadSettingsBuilder.build();
 
 

BigQueryReadSettings.Builder

Builder for BigQueryReadSettings.

BigQueryWriteClient

Service Description: BigQuery Write API.

The Write API can be used to write data to BigQuery.

The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   TableName parent = TableName.of("[PROJECT]", "[DATASET]", "[TABLE]");
   WriteStream writeStream = WriteStream.newBuilder().build();
   WriteStream response = bigQueryWriteClient.createWriteStream(parent, writeStream);
 }
 

Note: close() needs to be called on the BigQueryWriteClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().

Methods
Method | Description | Method Variants

CreateWriteStream

Creates a write stream to the given table. Additionally, every table has a special COMMITTED stream named '_default' to which data can be written. This stream doesn't need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • createWriteStream(CreateWriteStreamRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • createWriteStream(TableName parent, WriteStream writeStream)

  • createWriteStream(String parent, WriteStream writeStream)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • createWriteStreamCallable()
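
A minimal sketch of the request-object variant, creating a PENDING stream that must later be finalized and batch-committed (placeholder values; the same template caveats as the generated snippets apply):


 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   CreateWriteStreamRequest request =
       CreateWriteStreamRequest.newBuilder()
           .setParent(TableName.of("[PROJECT]", "[DATASET]", "[TABLE]").toString())
           .setWriteStream(WriteStream.newBuilder().setType(WriteStream.Type.PENDING).build())
           .build();
   WriteStream response = bigQueryWriteClient.createWriteStream(request);
 }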

AppendRows

Appends data to the given stream.

If an offset is specified, it is checked against the end of the stream. The server returns OUT_OF_RANGE in AppendRowsResponse if an attempt is made to append to an offset beyond the current end of the stream, or ALREADY_EXISTS if the user provides an offset that has already been written to. The user can retry with an adjusted offset within the same RPC stream. If no offset is specified, the append happens at the end of the stream.

The response contains the offset at which the append happened. Responses are received in the same order in which requests are sent. There will be one response for each successful request. If the offset is not set in the response, the append did not happen due to an error. If one request fails, all subsequent requests will also fail until a successful request is made again.

If the stream is of PENDING type, data will only be available for read operations after the stream is committed.

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • appendRowsCallable()
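
AppendRows is a bidirectional streaming call, so it is exposed only through the callable variant. A minimal sketch of sending one request and iterating over the responses (placeholder values; a real writer would populate the proto_rows payload and handle errors and retries):


 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   BidiStream<AppendRowsRequest, AppendRowsResponse> bidiStream =
       bigQueryWriteClient.appendRowsCallable().call();
   AppendRowsRequest request =
       AppendRowsRequest.newBuilder()
           .setWriteStream(
               WriteStreamName.of("[PROJECT]", "[DATASET]", "[TABLE]", "[STREAM]").toString())
           .build();
   bidiStream.send(request);
   for (AppendRowsResponse response : bidiStream) {
     // Each response reports the offset at which the append happened, or an error.
   }
 }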

GetWriteStream

Gets a write stream.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • getWriteStream(GetWriteStreamRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • getWriteStream(WriteStreamName name)

  • getWriteStream(String name)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • getWriteStreamCallable()

FinalizeWriteStream

Finalize a write stream so that no new data can be appended to the stream. Finalize is not supported on the '_default' stream.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • finalizeWriteStream(FinalizeWriteStreamRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • finalizeWriteStream(WriteStreamName name)

  • finalizeWriteStream(String name)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • finalizeWriteStreamCallable()
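
A minimal sketch of the flattened variant (placeholder values):


 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   WriteStreamName name = WriteStreamName.of("[PROJECT]", "[DATASET]", "[TABLE]", "[STREAM]");
   FinalizeWriteStreamResponse response = bigQueryWriteClient.finalizeWriteStream(name);
   long rowCount = response.getRowCount(); // number of rows in the finalized stream
 }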

BatchCommitWriteStreams

Atomically commits a group of PENDING streams that belong to the same parent table. Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • batchCommitWriteStreams(BatchCommitWriteStreamsRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • batchCommitWriteStreams(String parent)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • batchCommitWriteStreamsCallable()
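
A minimal sketch of the request-object variant; the write_streams field lists the finalized streams to commit (placeholder values):


 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   BatchCommitWriteStreamsRequest request =
       BatchCommitWriteStreamsRequest.newBuilder()
           .setParent(TableName.of("[PROJECT]", "[DATASET]", "[TABLE]").toString())
           .addWriteStreams(
               WriteStreamName.of("[PROJECT]", "[DATASET]", "[TABLE]", "[STREAM]").toString())
           .build();
   BatchCommitWriteStreamsResponse response =
       bigQueryWriteClient.batchCommitWriteStreams(request);
   // On success the commit time is set; otherwise inspect response.getStreamErrorsList().
 }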

FlushRows

Flushes rows to a BUFFERED stream. If users are appending rows to a BUFFERED stream, a flush operation is required in order for the rows to become available for reading. A flush operation flushes up to any previously flushed offset in a BUFFERED stream, to the offset specified in the request. Flush is not supported on the _default stream, since it is not BUFFERED.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • flushRows(FlushRowsRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • flushRows(WriteStreamName writeStream)

  • flushRows(String writeStream)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • flushRowsCallable()
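
A minimal sketch of the request-object variant; the offset is an Int64Value, and the response reports the offset up to which rows were flushed (placeholder values):


 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   FlushRowsRequest request =
       FlushRowsRequest.newBuilder()
           .setWriteStream(
               WriteStreamName.of("[PROJECT]", "[DATASET]", "[TABLE]", "[STREAM]").toString())
           .setOffset(Int64Value.of(10))
           .build();
   FlushRowsResponse response = bigQueryWriteClient.flushRows(request);
   long flushedOffset = response.getOffset();
 }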

See the individual methods for example code.

Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.

This class can be customized by passing in a custom instance of BigQueryWriteSettings to create(). For example:

To customize credentials:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BigQueryWriteSettings bigQueryWriteSettings =
     BigQueryWriteSettings.newBuilder()
         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
         .build();
 BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create(bigQueryWriteSettings);
 

To customize the endpoint:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BigQueryWriteSettings bigQueryWriteSettings =
     BigQueryWriteSettings.newBuilder().setEndpoint(myEndpoint).build();
 BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create(bigQueryWriteSettings);
 

Please refer to the GitHub repository's samples for more quickstart code snippets.

BigQueryWriteGrpc

BigQuery Write API. The Write API can be used to write data to BigQuery. The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

BigQueryWriteGrpc.BigQueryWriteBlockingStub

A stub to allow clients to do synchronous rpc calls to service BigQueryWrite.

BigQuery Write API. The Write API can be used to write data to BigQuery. The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

BigQueryWriteGrpc.BigQueryWriteFutureStub

A stub to allow clients to do ListenableFuture-style rpc calls to service BigQueryWrite.

BigQuery Write API. The Write API can be used to write data to BigQuery. The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

BigQueryWriteGrpc.BigQueryWriteImplBase

Base class for the server implementation of the service BigQueryWrite.

BigQuery Write API. The Write API can be used to write data to BigQuery. The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

BigQueryWriteGrpc.BigQueryWriteStub

A stub to allow clients to do asynchronous rpc calls to service BigQueryWrite.

BigQuery Write API. The Write API can be used to write data to BigQuery. The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

BigQueryWriteSettings

Settings class to configure an instance of BigQueryWriteClient.

The default instance has everything set to sensible defaults:

  • The default service address (bigquerystorage.googleapis.com) and default port (443) are used.
  • Credentials are acquired automatically through Application Default Credentials.
  • Retries are configured for idempotent methods but not for non-idempotent methods.

The builder of this class is recursive, so contained classes are themselves builders. When build() is called, the tree of builders is called to create the complete settings object.

For example, to set the total timeout of createWriteStream to 30 seconds:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BigQueryWriteSettings.Builder bigQueryWriteSettingsBuilder = BigQueryWriteSettings.newBuilder();
 bigQueryWriteSettingsBuilder
     .createWriteStreamSettings()
     .setRetrySettings(
         bigQueryWriteSettingsBuilder
             .createWriteStreamSettings()
             .getRetrySettings()
             .toBuilder()
             .setTotalTimeout(Duration.ofSeconds(30))
             .build());
 BigQueryWriteSettings bigQueryWriteSettings = bigQueryWriteSettingsBuilder.build();
 

BigQueryWriteSettings.Builder

Builder for BigQueryWriteSettings.

CivilTimeEncoder

Ported from the ZetaSQL CivilTimeEncoder. The original code can be found at https://github.com/google/zetasql/blob/master/java/com/google/zetasql/CivilTimeEncoder.java. Encoder for TIME and DATETIME values, according to civil_time encoding.

The valid range and the number of bits required by each date/time field are as follows:

Range and bits for date/time fields

Field    Range              #Bits
Year     [1, 9999]          14
Month    [1, 12]            4
Day      [1, 31]            5
Hour     [0, 23]            5
Minute   [0, 59]            6
Second   [0, 59]*           6
Micros   [0, 999999]        20
Nanos    [0, 999999999]     30

* Leap second is not supported.

When encoding a TIME or DATETIME into a bit field, the larger date/time field is placed on the more significant side.
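
As an illustration of this layout only (not a call into CivilTimeEncoder itself), the sketch below packs and unpacks a TIME value with microsecond precision using the bit widths from the table above; check the linked ZetaSQL source for the authoritative encoding.


 // Illustration of the packed layout: hour (5 bits) | minute (6) | second (6) | micros (20),
 // with larger fields on the more significant side.
 int hour = 13, minute = 14, second = 15, micros = 123456;
 long packed = ((((((long) hour << 6) | minute) << 6) | second) << 20) | micros;

 int decodedMicros = (int) (packed & 0xFFFFF);        // low 20 bits
 int decodedSecond = (int) ((packed >>> 20) & 0x3F);  // next 6 bits
 int decodedMinute = (int) ((packed >>> 26) & 0x3F);  // next 6 bits
 int decodedHour = (int) ((packed >>> 32) & 0x1F);    // top 5 bits of the packed value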

CreateReadSessionRequest

Request message for CreateReadSession.

Protobuf type google.cloud.bigquery.storage.v1beta2.CreateReadSessionRequest

CreateReadSessionRequest.Builder

Request message for CreateReadSession.

Protobuf type google.cloud.bigquery.storage.v1beta2.CreateReadSessionRequest

CreateWriteStreamRequest

Request message for CreateWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.CreateWriteStreamRequest

CreateWriteStreamRequest.Builder

Request message for CreateWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.CreateWriteStreamRequest

FinalizeWriteStreamRequest

Request message for invoking FinalizeWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.FinalizeWriteStreamRequest

FinalizeWriteStreamRequest.Builder

Request message for invoking FinalizeWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.FinalizeWriteStreamRequest

FinalizeWriteStreamResponse

Response message for FinalizeWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.FinalizeWriteStreamResponse

FinalizeWriteStreamResponse.Builder

Response message for FinalizeWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.FinalizeWriteStreamResponse

FlushRowsRequest

Request message for FlushRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.FlushRowsRequest

FlushRowsRequest.Builder

Request message for FlushRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.FlushRowsRequest

FlushRowsResponse

Response message for FlushRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.FlushRowsResponse

FlushRowsResponse.Builder

Response message for FlushRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.FlushRowsResponse

GetWriteStreamRequest

Request message for GetWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.GetWriteStreamRequest

GetWriteStreamRequest.Builder

Request message for GetWriteStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.GetWriteStreamRequest

JsonStreamWriter

A StreamWriter that can write JSON data (JSONObjects) to BigQuery tables. The JsonStreamWriter is built on top of a StreamWriter and simply converts all JSON data to protobuf messages, then calls StreamWriter's append() method to write to BigQuery tables.

This client library is deprecated; please use v1 instead.
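
A heavily hedged usage sketch follows; the newBuilder and append signatures shown are assumptions about this deprecated v1beta2 surface and may differ, and new code should prefer the v1 JsonStreamWriter.


 // Hedged sketch only: verify the builder/append signatures against this class's reference.
 try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
   WriteStream writeStream =
       bigQueryWriteClient.createWriteStream(
           TableName.of("[PROJECT]", "[DATASET]", "[TABLE]"),
           WriteStream.newBuilder().setType(WriteStream.Type.COMMITTED).build());
   JsonStreamWriter writer =
       JsonStreamWriter.newBuilder(writeStream.getName(), writeStream.getTableSchema()).build();
   JSONArray rows = new JSONArray();
   rows.put(new JSONObject().put("col1", "value"));
   ApiFuture<AppendRowsResponse> future = writer.append(rows);
   AppendRowsResponse response = future.get();
   writer.close();
 }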

JsonStreamWriter.Builder

JsonToProtoMessage

Converts JSON data to protocol buffer messages, given the protocol buffer descriptor. The protobuf descriptor must have all field names lowercased.

This client library is deprecated; please use v1 instead.

ProjectName

ProjectName.Builder

Builder for projects/{project}.

ProtoBufProto

ProtoRows

Protobuf type google.cloud.bigquery.storage.v1beta2.ProtoRows

ProtoRows.Builder

Protobuf type google.cloud.bigquery.storage.v1beta2.ProtoRows

ProtoSchema

ProtoSchema describes the schema of the serialized protocol buffer data rows.

Protobuf type google.cloud.bigquery.storage.v1beta2.ProtoSchema

ProtoSchema.Builder

ProtoSchema describes the schema of the serialized protocol buffer data rows.

Protobuf type google.cloud.bigquery.storage.v1beta2.ProtoSchema

ProtoSchemaConverter

ReadRowsRequest

Request message for ReadRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadRowsRequest

ReadRowsRequest.Builder

Request message for ReadRows.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadRowsRequest

ReadRowsResponse

Response from calling ReadRows may include row data, progress and throttling information.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadRowsResponse

ReadRowsResponse.Builder

Response from calling ReadRows may include row data, progress and throttling information.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadRowsResponse

ReadSession

Information about the ReadSession.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadSession

ReadSession.Builder

Information about the ReadSession.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadSession

ReadSession.TableModifiers

Additional attributes when reading a table.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadSession.TableModifiers

ReadSession.TableModifiers.Builder

Additional attributes when reading a table.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadSession.TableModifiers

ReadSession.TableReadOptions

Options dictating how we read a table.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadSession.TableReadOptions

ReadSession.TableReadOptions.Builder

Options dictating how we read a table.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadSession.TableReadOptions

ReadStream

Information about a single stream that gets data out of the storage system. Most of the information about ReadStream instances is aggregated, making ReadStream lightweight.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadStream

ReadStream.Builder

Information about a single stream that gets data out of the storage system. Most of the information about ReadStream instances is aggregated, making ReadStream lightweight.

Protobuf type google.cloud.bigquery.storage.v1beta2.ReadStream

ReadStreamName

ReadStreamName.Builder

Builder for projects/{project}/locations/{location}/sessions/{session}/streams/{stream}.

SplitReadStreamRequest

Request message for SplitReadStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.SplitReadStreamRequest

SplitReadStreamRequest.Builder

Request message for SplitReadStream.

Protobuf type google.cloud.bigquery.storage.v1beta2.SplitReadStreamRequest

SplitReadStreamResponse

Protobuf type google.cloud.bigquery.storage.v1beta2.SplitReadStreamResponse

SplitReadStreamResponse.Builder

Protobuf type google.cloud.bigquery.storage.v1beta2.SplitReadStreamResponse

StorageError

Structured custom BigQuery Storage error message. The error can be attached as error details in the returned rpc Status. In particular, the use of error codes allows more structured error handling, and reduces the need to evaluate unstructured error text strings.

Protobuf type google.cloud.bigquery.storage.v1beta2.StorageError
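
As a sketch of how such an error might be recovered when the StorageError is attached to a google.rpc.Status as error details:


 // Sketch: pull a StorageError out of the details of a returned google.rpc.Status.
 static StorageError extractStorageError(com.google.rpc.Status status)
     throws InvalidProtocolBufferException {
   for (Any detail : status.getDetailsList()) {
     if (detail.is(StorageError.class)) {
       return detail.unpack(StorageError.class); // exposes getCode(), getEntity(), getErrorMessage()
     }
   }
   return null; // no StorageError attached
 }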

StorageError.Builder

Structured custom BigQuery Storage error message. The error can be attached as error details in the returned rpc Status. In particular, the use of error codes allows more structured error handling, and reduces the need to evaluate unstructured error text strings.

Protobuf type google.cloud.bigquery.storage.v1beta2.StorageError

StorageProto

StreamConnection

StreamConnection is responsible for writing requests to a gRPC bidirectional connection.

StreamWriter creates a connection. Two callback functions are necessary: request_callback and done_callback. Request callback is used for every request, and done callback is used to notify the user that the connection is closed and no more callbacks will be received from this connection.

The stream writer accepts all requests without flow control and makes the callbacks in the order in which they are received.

It is the user's responsibility to handle flow control and to maintain the lifetime of the requests.

This client library is deprecated; please use v1 instead.

StreamProto

StreamStats

Estimated stream statistics for a given Stream.

Protobuf type google.cloud.bigquery.storage.v1beta2.StreamStats

StreamStats.Builder

Estimated stream statistics for a given Stream.

Protobuf type google.cloud.bigquery.storage.v1beta2.StreamStats

StreamStats.Progress

Protobuf type google.cloud.bigquery.storage.v1beta2.StreamStats.Progress

StreamStats.Progress.Builder

Protobuf type google.cloud.bigquery.storage.v1beta2.StreamStats.Progress

StreamWriterV2

A BigQuery stream writer that can be used to write data into a BigQuery table.

TODO: Support batching.

TODO: Support schema change.

This client library is deprecated; please use v1 instead.

StreamWriterV2.Builder

A builder of StreamWriterV2s.

TableFieldSchema

A field in TableSchema.

Protobuf type google.cloud.bigquery.storage.v1beta2.TableFieldSchema

TableFieldSchema.Builder

A field in TableSchema.

Protobuf type google.cloud.bigquery.storage.v1beta2.TableFieldSchema

TableName

TableName.Builder

Builder for projects/{project}/datasets/{dataset}/tables/{table}.

TableProto

TableSchema

Schema of a table.

Protobuf type google.cloud.bigquery.storage.v1beta2.TableSchema

TableSchema.Builder

Schema of a table.

Protobuf type google.cloud.bigquery.storage.v1beta2.TableSchema

ThrottleState

Information about whether the current connection is being throttled.

Protobuf type google.cloud.bigquery.storage.v1beta2.ThrottleState

ThrottleState.Builder

Information about whether the current connection is being throttled.

Protobuf type google.cloud.bigquery.storage.v1beta2.ThrottleState

WriteStream

Information about a single stream that gets data inside the storage system.

Protobuf type google.cloud.bigquery.storage.v1beta2.WriteStream

WriteStream.Builder

Information about a single stream that gets data inside the storage system.

Protobuf type google.cloud.bigquery.storage.v1beta2.WriteStream

WriteStreamName

WriteStreamName.Builder

Builder for projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}.

Interfaces

AppendRowsRequest.ProtoDataOrBuilder

AppendRowsRequestOrBuilder

AppendRowsResponse.AppendResultOrBuilder

AppendRowsResponseOrBuilder

ArrowRecordBatchOrBuilder

ArrowSchemaOrBuilder

ArrowSerializationOptionsOrBuilder

AvroRowsOrBuilder

AvroSchemaOrBuilder

BatchCommitWriteStreamsRequestOrBuilder

BatchCommitWriteStreamsResponseOrBuilder

BigQueryReadGrpc.AsyncService

BigQuery Read API. The Read API can be used to read data from BigQuery. New code should use the v1 Read API going forward, provided it does not also use the Write API at the same time.

BigQueryReadSettings.RetryAttemptListener

BigQueryWriteGrpc.AsyncService

BigQuery Write API. The Write API can be used to write data to BigQuery. The google.cloud.bigquery.storage.v1 API should be used instead of the v1beta2 API for BigQueryWrite operations.

CreateReadSessionRequestOrBuilder

CreateWriteStreamRequestOrBuilder

FinalizeWriteStreamRequestOrBuilder

FinalizeWriteStreamResponseOrBuilder

FlushRowsRequestOrBuilder

FlushRowsResponseOrBuilder

GetWriteStreamRequestOrBuilder

ProtoRowsOrBuilder

ProtoSchemaOrBuilder

ReadRowsRequestOrBuilder

ReadRowsResponseOrBuilder

ReadSession.TableModifiersOrBuilder

ReadSession.TableReadOptionsOrBuilder

ReadSessionOrBuilder

ReadStreamOrBuilder

SplitReadStreamRequestOrBuilder

SplitReadStreamResponseOrBuilder

StorageErrorOrBuilder

StreamConnection.DoneCallback

Invoked when server closes the connection.

StreamConnection.RequestCallback

Invoked when a response is received from the server.

StreamStats.ProgressOrBuilder

StreamStatsOrBuilder

TableFieldSchemaOrBuilder

TableSchemaOrBuilder

ThrottleStateOrBuilder

WriteStreamOrBuilder

Enums

AppendRowsRequest.RowsCase

AppendRowsResponse.ResponseCase

ArrowSerializationOptions.Format

The IPC format to use when serializing Arrow streams.

Protobuf enum google.cloud.bigquery.storage.v1beta2.ArrowSerializationOptions.Format

DataFormat

Data format for input or output data.

Protobuf enum google.cloud.bigquery.storage.v1beta2.DataFormat

ReadRowsResponse.RowsCase

ReadRowsResponse.SchemaCase

ReadSession.SchemaCase

StorageError.StorageErrorCode

Error code for StorageError.

Protobuf enum google.cloud.bigquery.storage.v1beta2.StorageError.StorageErrorCode

TableFieldSchema.Mode

Protobuf enum google.cloud.bigquery.storage.v1beta2.TableFieldSchema.Mode

TableFieldSchema.Type

Protobuf enum google.cloud.bigquery.storage.v1beta2.TableFieldSchema.Type

WriteStream.Type

Type enum of the stream.

Protobuf enum google.cloud.bigquery.storage.v1beta2.WriteStream.Type