A client for the BigQuery Storage API.
The interfaces provided are listed below, along with usage samples.
BaseBigQueryReadClient
Service Description: BigQuery Read API.
The Read API can be used to read data from BigQuery.
Sample for BaseBigQueryReadClient:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
ProjectName parent = ProjectName.of("[PROJECT]");
ReadSession readSession = ReadSession.newBuilder().build();
int maxStreamCount = 940837515;
ReadSession response =
baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
}
BigQueryWriteClient
Service Description: BigQuery Write API.
The Write API can be used to write data to BigQuery.
For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
Sample for BigQueryWriteClient:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
TableName parent = TableName.of("[PROJECT]", "[DATASET]", "[TABLE]");
WriteStream writeStream = WriteStream.newBuilder().build();
WriteStream response = bigQueryWriteClient.createWriteStream(parent, writeStream);
}
Classes
AnnotationsProto
AppendRowsRequest
Request message for AppendRows.
Because AppendRows is a bidirectional streaming RPC, certain parts of the AppendRowsRequest need only be specified for the first request before switching table destinations. You can also switch table destinations within the same connection for the default stream.
The size of a single AppendRowsRequest must be less than 10 MB. Requests larger than this return an error, typically INVALID_ARGUMENT.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsRequest
AppendRowsRequest.Builder
Request message for AppendRows.
Because AppendRows is a bidirectional streaming RPC, certain parts of the AppendRowsRequest need only be specified for the first request before switching table destinations. You can also switch table destinations within the same connection for the default stream.
The size of a single AppendRowsRequest must be less than 10 MB. Requests larger than this return an error, typically INVALID_ARGUMENT.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsRequest
AppendRowsRequest.ProtoData
ProtoData contains the data rows and schema when constructing append requests.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsRequest.ProtoData
AppendRowsRequest.ProtoData.Builder
ProtoData contains the data rows and schema when constructing append requests.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsRequest.ProtoData
AppendRowsResponse
Response message for AppendRows.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsResponse
AppendRowsResponse.AppendResult
AppendResult is returned for successful append requests.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsResponse.AppendResult
AppendRowsResponse.AppendResult.Builder
AppendResult is returned for successful append requests.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsResponse.AppendResult
AppendRowsResponse.Builder
Response message for AppendRows.
Protobuf type google.cloud.bigquery.storage.v1.AppendRowsResponse
ArrowProto
ArrowRecordBatch
Arrow RecordBatch.
Protobuf type google.cloud.bigquery.storage.v1.ArrowRecordBatch
ArrowRecordBatch.Builder
Arrow RecordBatch.
Protobuf type google.cloud.bigquery.storage.v1.ArrowRecordBatch
ArrowSchema
Arrow schema as specified in https://arrow.apache.org/docs/python/api/datatypes.html and serialized to bytes using IPC: https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc
See code samples for how this message can be deserialized.
Protobuf type google.cloud.bigquery.storage.v1.ArrowSchema
ArrowSchema.Builder
Arrow schema as specified in https://arrow.apache.org/docs/python/api/datatypes.html and serialized to bytes using IPC: https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc
See code samples for how this message can be deserialized.
Protobuf type google.cloud.bigquery.storage.v1.ArrowSchema
ArrowSerializationOptions
Contains options specific to Arrow Serialization.
Protobuf type google.cloud.bigquery.storage.v1.ArrowSerializationOptions
ArrowSerializationOptions.Builder
Contains options specific to Arrow Serialization.
Protobuf type google.cloud.bigquery.storage.v1.ArrowSerializationOptions
AvroProto
AvroRows
Avro rows.
Protobuf type google.cloud.bigquery.storage.v1.AvroRows
AvroRows.Builder
Avro rows.
Protobuf type google.cloud.bigquery.storage.v1.AvroRows
AvroSchema
Avro schema.
Protobuf type google.cloud.bigquery.storage.v1.AvroSchema
AvroSchema.Builder
Avro schema.
Protobuf type google.cloud.bigquery.storage.v1.AvroSchema
AvroSerializationOptions
Contains options specific to Avro Serialization.
Protobuf type google.cloud.bigquery.storage.v1.AvroSerializationOptions
AvroSerializationOptions.Builder
Contains options specific to Avro Serialization.
Protobuf type google.cloud.bigquery.storage.v1.AvroSerializationOptions
BQTableSchemaToProtoDescriptor
Converts a BQ table schema to a protobuf descriptor. All field names will be converted to lowercase when constructing the protobuf descriptor. The mappings between field types and field modes are shown in the ImmutableMaps below.
BaseBigQueryReadClient
Service Description: BigQuery Read API.
The Read API can be used to read data from BigQuery.
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
ProjectName parent = ProjectName.of("[PROJECT]");
ReadSession readSession = ReadSession.newBuilder().build();
int maxStreamCount = 940837515;
ReadSession response =
baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
}
Note: close() needs to be called on the BaseBigQueryReadClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
Method | Description | Method Variants |
---|---|---|
CreateReadSession | Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned. A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read. Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments. Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
ReadRows | Reads rows from the stream in the format prescribed by the ReadSession. Each response contains one or more table rows, up to a maximum of 100 MiB per response; read requests which attempt to read individual rows larger than 100 MiB will fail. Each request also returns a set of stream statistics reflecting the current state of the stream. |
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
SplitReadStream | Splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream. Moreover, the two child streams will be allocated back-to-back in the original ReadStream. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
See the individual methods for example code.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
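As a rough illustration of how such a format/parse pair behaves, the sketch below mimics the "projects/{project}" pattern used by ProjectName (listed later in this document). This is a simplified stand-in, not the actual generated class; the helper names formatProjectName and parseProject are hypothetical.

```java
// Sketch of a resource-name format/parse pair for the
// "projects/{project}" pattern. Hypothetical helpers, not the
// real ProjectName API.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResourceNameSketch {
  private static final Pattern PROJECT_PATTERN = Pattern.compile("projects/([^/]+)");

  // Format an identifier into a resource-name string.
  static String formatProjectName(String project) {
    return "projects/" + project;
  }

  // Parse the identifier back out of a resource name returned by the service.
  static String parseProject(String name) {
    Matcher m = PROJECT_PATTERN.matcher(name);
    if (!m.matches()) {
      throw new IllegalArgumentException("not a project resource name: " + name);
    }
    return m.group(1);
  }

  public static void main(String[] args) {
    String name = formatProjectName("my-project");
    if (!name.equals("projects/my-project")) throw new AssertionError();
    if (!parseProject(name).equals("my-project")) throw new AssertionError();
  }
}
```

The real classes follow the same round-trip contract: formatting an identifier and parsing the result yields the identifier back.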
This class can be customized by passing in a custom instance of BaseBigQueryReadSettings to create(). For example:
To customize credentials:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BaseBigQueryReadSettings baseBigQueryReadSettings =
BaseBigQueryReadSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
.build();
BaseBigQueryReadClient baseBigQueryReadClient =
BaseBigQueryReadClient.create(baseBigQueryReadSettings);
To customize the endpoint:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BaseBigQueryReadSettings baseBigQueryReadSettings =
BaseBigQueryReadSettings.newBuilder().setEndpoint(myEndpoint).build();
BaseBigQueryReadClient baseBigQueryReadClient =
BaseBigQueryReadClient.create(baseBigQueryReadSettings);
Please refer to the GitHub repository's samples for more quickstart code snippets.
BaseBigQueryReadSettings
Settings class to configure an instance of BaseBigQueryReadClient.
The default instance has everything set to sensible defaults:
- The default service address (bigquerystorage.googleapis.com) and default port (443) are used.
- Credentials are acquired automatically through Application Default Credentials.
- Retries are configured for idempotent methods but not for non-idempotent methods.
The builder of this class is recursive, so contained classes are themselves builders. When build() is called, the tree of builders is called to create the complete settings object.
For example, to set the total timeout of createReadSession to 30 seconds:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BaseBigQueryReadSettings.Builder baseBigQueryReadSettingsBuilder =
BaseBigQueryReadSettings.newBuilder();
baseBigQueryReadSettingsBuilder
.createReadSessionSettings()
.setRetrySettings(
baseBigQueryReadSettingsBuilder
.createReadSessionSettings()
.getRetrySettings()
.toBuilder()
.setTotalTimeout(Duration.ofSeconds(30))
.build());
BaseBigQueryReadSettings baseBigQueryReadSettings = baseBigQueryReadSettingsBuilder.build();
BaseBigQueryReadSettings.Builder
Builder for BaseBigQueryReadSettings.
BatchCommitWriteStreamsRequest
Request message for BatchCommitWriteStreams.
Protobuf type google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsRequest
BatchCommitWriteStreamsRequest.Builder
Request message for BatchCommitWriteStreams.
Protobuf type google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsRequest
BatchCommitWriteStreamsResponse
Response message for BatchCommitWriteStreams.
Protobuf type google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsResponse
BatchCommitWriteStreamsResponse.Builder
Response message for BatchCommitWriteStreams.
Protobuf type google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsResponse
BigDecimalByteStringEncoder
BigQueryReadClient
Service Description: BigQuery Read API.
The Read API can be used to read data from BigQuery.
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
try (BigQueryReadClient bigQueryReadClient = BigQueryReadClient.create()) {
String parent = "";
ReadSession readSession = ReadSession.newBuilder().build();
int maxStreamCount = 0;
ReadSession response = bigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
}
Note: close() needs to be called on the BigQueryReadClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
The surface of this class includes several types of Java methods for each of the API's methods:
- A "flattened" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.
- A "request object" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.
- A "callable" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.
See the individual methods for example code.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
This class can be customized by passing in a custom instance of BigQueryReadSettings to create(). For example:
To customize credentials:
BigQueryReadSettings bigQueryReadSettings =
BigQueryReadSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
.build();
BigQueryReadClient bigQueryReadClient =
BigQueryReadClient.create(bigQueryReadSettings);
To customize the endpoint:
BigQueryReadSettings bigQueryReadSettings =
BigQueryReadSettings.newBuilder().setEndpoint(myEndpoint).build();
BigQueryReadClient bigQueryReadClient =
BigQueryReadClient.create(bigQueryReadSettings);
BigQueryReadGrpc
BigQuery Read API. The Read API can be used to read data from BigQuery.
BigQueryReadGrpc.BigQueryReadBlockingStub
A stub to allow clients to do synchronous rpc calls to service BigQueryRead.
BigQuery Read API. The Read API can be used to read data from BigQuery.
BigQueryReadGrpc.BigQueryReadFutureStub
A stub to allow clients to do ListenableFuture-style rpc calls to service BigQueryRead.
BigQuery Read API. The Read API can be used to read data from BigQuery.
BigQueryReadGrpc.BigQueryReadImplBase
Base class for the server implementation of the service BigQueryRead.
BigQuery Read API. The Read API can be used to read data from BigQuery.
BigQueryReadGrpc.BigQueryReadStub
A stub to allow clients to do asynchronous rpc calls to service BigQueryRead.
BigQuery Read API. The Read API can be used to read data from BigQuery.
BigQueryReadSettings
Settings class to configure an instance of BigQueryReadClient.
The default instance has everything set to sensible defaults:
- The default service address (bigquerystorage.googleapis.com) and default port (443) are used.
- Credentials are acquired automatically through Application Default Credentials.
- Retries are configured for idempotent methods but not for non-idempotent methods.
The builder of this class is recursive, so contained classes are themselves builders. When build() is called, the tree of builders is called to create the complete settings object.
For example, to set the total timeout of createReadSession to 30 seconds:
BigQueryReadSettings.Builder bigQueryReadSettingsBuilder =
BigQueryReadSettings.newBuilder();
bigQueryReadSettingsBuilder
.createReadSessionSettings()
.setRetrySettings(
bigQueryReadSettingsBuilder
.createReadSessionSettings()
.getRetrySettings()
.toBuilder()
.setTotalTimeout(Duration.ofSeconds(30))
.build());
BigQueryReadSettings bigQueryReadSettings = bigQueryReadSettingsBuilder.build();
BigQueryReadSettings.Builder
Builder for BigQueryReadSettings.
BigQuerySchemaUtil
BigQueryWriteClient
Service Description: BigQuery Write API.
The Write API can be used to write data to BigQuery.
For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create()) {
TableName parent = TableName.of("[PROJECT]", "[DATASET]", "[TABLE]");
WriteStream writeStream = WriteStream.newBuilder().build();
WriteStream response = bigQueryWriteClient.createWriteStream(parent, writeStream);
}
Note: close() needs to be called on the BigQueryWriteClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
Method | Description | Method Variants |
---|---|---|
CreateWriteStream | Creates a write stream to the given table. Additionally, every table has a special stream named '_default' to which data can be written. This stream doesn't need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
AppendRows | Appends data to the given stream. If offset is specified, the offset is checked against the end of the stream. The server returns OUT_OF_RANGE in AppendRowsResponse if an attempt is made to append to an offset beyond the current end of the stream, or ALREADY_EXISTS if the provided offset has already been written to. The response contains an optional offset at which the append happened. No offset information will be returned for appends to a default stream. Responses are received in the same order in which requests are sent. There will be one response for each successfully inserted request. Responses may optionally embed error information if the originating AppendRequest was not successfully processed. The specifics of when successfully appended data is made visible to the table are governed by the type of stream. |
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
GetWriteStream | Gets information about a write stream. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
FinalizeWriteStream | Finalize a write stream so that no new data can be appended to the stream. Finalize is not supported on the '_default' stream. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
BatchCommitWriteStreams | Atomically commits a group of PENDING streams that belong to the same parent table. Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
FlushRows | Flushes rows to a BUFFERED stream. If users are appending rows to a BUFFERED stream, a flush operation is required in order for the rows to become available for reading. A flush operation flushes up to any previously flushed offset in a BUFFERED stream, to the offset specified in the request. Flush is not supported on the _default stream, since it is not BUFFERED. |
Request object method variants only take one parameter, a request object, which must be constructed before the call.
"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.
Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.
|
See the individual methods for example code.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
This class can be customized by passing in a custom instance of BigQueryWriteSettings to create(). For example:
To customize credentials:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BigQueryWriteSettings bigQueryWriteSettings =
BigQueryWriteSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
.build();
BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create(bigQueryWriteSettings);
To customize the endpoint:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BigQueryWriteSettings bigQueryWriteSettings =
BigQueryWriteSettings.newBuilder().setEndpoint(myEndpoint).build();
BigQueryWriteClient bigQueryWriteClient = BigQueryWriteClient.create(bigQueryWriteSettings);
Please refer to the GitHub repository's samples for more quickstart code snippets.
BigQueryWriteGrpc
BigQuery Write API. The Write API can be used to write data to BigQuery. For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
BigQueryWriteGrpc.BigQueryWriteBlockingStub
A stub to allow clients to do synchronous rpc calls to service BigQueryWrite.
BigQuery Write API. The Write API can be used to write data to BigQuery. For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
BigQueryWriteGrpc.BigQueryWriteFutureStub
A stub to allow clients to do ListenableFuture-style rpc calls to service BigQueryWrite.
BigQuery Write API. The Write API can be used to write data to BigQuery. For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
BigQueryWriteGrpc.BigQueryWriteImplBase
Base class for the server implementation of the service BigQueryWrite.
BigQuery Write API. The Write API can be used to write data to BigQuery. For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
BigQueryWriteGrpc.BigQueryWriteStub
A stub to allow clients to do asynchronous rpc calls to service BigQueryWrite.
BigQuery Write API. The Write API can be used to write data to BigQuery. For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
BigQueryWriteSettings
Settings class to configure an instance of BigQueryWriteClient.
The default instance has everything set to sensible defaults:
- The default service address (bigquerystorage.googleapis.com) and default port (443) are used.
- Credentials are acquired automatically through Application Default Credentials.
- Retries are configured for idempotent methods but not for non-idempotent methods.
The builder of this class is recursive, so contained classes are themselves builders. When build() is called, the tree of builders is called to create the complete settings object.
For example, to set the total timeout of createWriteStream to 30 seconds:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BigQueryWriteSettings.Builder bigQueryWriteSettingsBuilder = BigQueryWriteSettings.newBuilder();
bigQueryWriteSettingsBuilder
.createWriteStreamSettings()
.setRetrySettings(
bigQueryWriteSettingsBuilder
.createWriteStreamSettings()
.getRetrySettings()
.toBuilder()
.setTotalTimeout(Duration.ofSeconds(30))
.build());
BigQueryWriteSettings bigQueryWriteSettings = bigQueryWriteSettingsBuilder.build();
BigQueryWriteSettings.Builder
Builder for BigQueryWriteSettings.
CivilTimeEncoder
Ported from the ZetaSQL CivilTimeEncoder. Original code can be found at: https://github.com/google/zetasql/blob/master/java/com/google/zetasql/CivilTimeEncoder.java This is an encoder for TIME and DATETIME values, according to civil_time encoding.
The valid range and number of bits required by each date/time field are as follows:
Field | Range | #Bits |
---|---|---|
Year | [1, 9999] | 14 |
Month | [1, 12] | 4 |
Day | [1, 31] | 5 |
Hour | [0, 23] | 5 |
Minute | [0, 59] | 6 |
Second | [0, 59]* | 6 |
Micros | [0, 999999] | 20 |
Nanos | [0, 999999999] | 30 |
* Leap second is not supported.
When encoding a TIME or DATETIME into a bit field, larger date/time fields are placed on the more significant side.
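The bit layout above can be sketched for a second-precision DATETIME value as follows. This is a simplified illustration of the packing scheme implied by the table, not the actual CivilTimeEncoder, which also handles the micro- and nanosecond variants and validates field ranges.

```java
// Sketch of civil-time bit packing for a second-precision DATETIME.
// Layout (most significant to least): year:14 | month:4 | day:5 | hour:5 | minute:6 | second:6
public class CivilTimePackingSketch {
  // Pack the fields into a single long, larger fields on the more significant side.
  static long pack(int year, int month, int day, int hour, int minute, int second) {
    return ((((((long) year << 4 | month) << 5 | day) << 5 | hour) << 6 | minute) << 6) | second;
  }

  // Unpack by masking each field back out at its bit offset.
  static int[] unpack(long v) {
    int second = (int) (v & 0x3F);          // bits 0-5
    int minute = (int) ((v >> 6) & 0x3F);   // bits 6-11
    int hour   = (int) ((v >> 12) & 0x1F);  // bits 12-16
    int day    = (int) ((v >> 17) & 0x1F);  // bits 17-21
    int month  = (int) ((v >> 22) & 0xF);   // bits 22-25
    int year   = (int) (v >> 26);           // bits 26-39
    return new int[] {year, month, day, hour, minute, second};
  }

  public static void main(String[] args) {
    long v = pack(2024, 5, 17, 13, 45, 59);
    int[] f = unpack(v);
    int[] expected = {2024, 5, 17, 13, 45, 59};
    for (int i = 0; i < 6; i++) {
      if (f[i] != expected[i]) throw new AssertionError("field " + i + " did not round-trip");
    }
  }
}
```

The round trip holds because the field widths in the table (14 + 4 + 5 + 5 + 6 + 6 = 40 bits) are wide enough for each field's valid range and fit comfortably in a 64-bit long.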
ConnectionWorkerPool
Pool of connections that accepts appends and distributes them across connections.
ConnectionWorkerPool.Settings
Settings for connection pool.
ConnectionWorkerPool.Settings.Builder
Builder for the options to configure ConnectionWorkerPool.
CreateReadSessionRequest
Request message for CreateReadSession.
Protobuf type google.cloud.bigquery.storage.v1.CreateReadSessionRequest
CreateReadSessionRequest.Builder
Request message for CreateReadSession.
Protobuf type google.cloud.bigquery.storage.v1.CreateReadSessionRequest
CreateWriteStreamRequest
Request message for CreateWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.CreateWriteStreamRequest
CreateWriteStreamRequest.Builder
Request message for CreateWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.CreateWriteStreamRequest
Exceptions
Exceptions for Storage Client Libraries.
FinalizeWriteStreamRequest
Request message for invoking FinalizeWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.FinalizeWriteStreamRequest
FinalizeWriteStreamRequest.Builder
Request message for invoking FinalizeWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.FinalizeWriteStreamRequest
FinalizeWriteStreamResponse
Response message for FinalizeWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.FinalizeWriteStreamResponse
FinalizeWriteStreamResponse.Builder
Response message for FinalizeWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.FinalizeWriteStreamResponse
FlushRowsRequest
Request message for FlushRows.
Protobuf type google.cloud.bigquery.storage.v1.FlushRowsRequest
FlushRowsRequest.Builder
Request message for FlushRows.
Protobuf type google.cloud.bigquery.storage.v1.FlushRowsRequest
FlushRowsResponse
Response message for FlushRows.
Protobuf type google.cloud.bigquery.storage.v1.FlushRowsResponse
FlushRowsResponse.Builder
Response message for FlushRows.
Protobuf type google.cloud.bigquery.storage.v1.FlushRowsResponse
GetWriteStreamRequest
Request message for GetWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.GetWriteStreamRequest
GetWriteStreamRequest.Builder
Request message for GetWriteStream.
Protobuf type google.cloud.bigquery.storage.v1.GetWriteStreamRequest
JsonStreamWriter
A StreamWriter that can write JSON data (JSONObjects) to BigQuery tables. The JsonStreamWriter is built on top of a StreamWriter: it converts all JSON data to protobuf messages and then calls StreamWriter's append() method to write to BigQuery tables. It retains all StreamWriter functions and also provides schema update support: if the BigQuery table schema is updated, users will be able to ingest data on the new schema after some time (on the order of minutes).
JsonStreamWriter.Builder
JsonToProtoMessage
Converts JSON data to Protobuf messages given the Protobuf descriptor and BigQuery table schema. The Protobuf descriptor must have all fields lowercased.
ProjectName
ProjectName.Builder
Builder for projects/{project}.
ProtoBufProto
ProtoRows
Protobuf type google.cloud.bigquery.storage.v1.ProtoRows
ProtoRows.Builder
Protobuf type google.cloud.bigquery.storage.v1.ProtoRows
ProtoSchema
ProtoSchema describes the schema of the serialized protocol buffer data rows.
Protobuf type google.cloud.bigquery.storage.v1.ProtoSchema
ProtoSchema.Builder
ProtoSchema describes the schema of the serialized protocol buffer data rows.
Protobuf type google.cloud.bigquery.storage.v1.ProtoSchema
ProtoSchemaConverter
ReadRowsRequest
Request message for ReadRows.
Protobuf type google.cloud.bigquery.storage.v1.ReadRowsRequest
ReadRowsRequest.Builder
Request message for ReadRows.
Protobuf type google.cloud.bigquery.storage.v1.ReadRowsRequest
ReadRowsResponse
Response from calling ReadRows; may include row data, progress, and throttling information.
Protobuf type google.cloud.bigquery.storage.v1.ReadRowsResponse
ReadRowsResponse.Builder
Response from calling ReadRows; may include row data, progress, and throttling information.
Protobuf type google.cloud.bigquery.storage.v1.ReadRowsResponse
ReadSession
Information about the ReadSession.
Protobuf type google.cloud.bigquery.storage.v1.ReadSession
ReadSession.Builder
Information about the ReadSession.
Protobuf type google.cloud.bigquery.storage.v1.ReadSession
ReadSession.TableModifiers
Additional attributes when reading a table.
Protobuf type google.cloud.bigquery.storage.v1.ReadSession.TableModifiers
ReadSession.TableModifiers.Builder
Additional attributes when reading a table.
Protobuf type google.cloud.bigquery.storage.v1.ReadSession.TableModifiers
ReadSession.TableReadOptions
Options dictating how we read a table.
Protobuf type google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions
ReadSession.TableReadOptions.Builder
Options dictating how we read a table.
Protobuf type google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions
ReadStream
Information about a single stream that gets data out of the storage system.
Most of the information about ReadStream instances is aggregated, making ReadStream lightweight.
Protobuf type google.cloud.bigquery.storage.v1.ReadStream
ReadStream.Builder
Information about a single stream that gets data out of the storage system.
Most of the information about ReadStream instances is aggregated, making ReadStream lightweight.
Protobuf type google.cloud.bigquery.storage.v1.ReadStream
ReadStreamName
ReadStreamName.Builder
Builder for projects/{project}/locations/{location}/sessions/{session}/streams/{stream}.
RowError
The message that conveys row-level error info in a request.
Protobuf type google.cloud.bigquery.storage.v1.RowError
RowError.Builder
The message that conveys row-level error info in a request.
Protobuf type google.cloud.bigquery.storage.v1.RowError
SchemaAwareStreamWriter<T>
A StreamWriter that can write data to BigQuery tables. The SchemaAwareStreamWriter is built on top of a StreamWriter; it converts all data to protobuf messages using the provided converter and then calls StreamWriter's append() method to write to BigQuery tables. It retains all StreamWriter functionality and adds one feature: schema update support. If the BigQuery table schema is updated, users will be able to ingest data on the new schema after some time (on the order of minutes).
NOTE: Schema update support is disabled when you pass a table schema explicitly through the writer. It is recommended that users either use JsonStreamWriter (which fully manages the table schema) or StreamWriter (which accepts raw proto format, leaving the user to handle schema update events themselves). If you use this class, be very cautious about possible mismatches between the writer's schema and the input data; any mismatch between the two will cause data corruption.
SchemaAwareStreamWriter.Builder<T>
SplitReadStreamRequest
Request message for SplitReadStream.
Protobuf type google.cloud.bigquery.storage.v1.SplitReadStreamRequest
SplitReadStreamRequest.Builder
Request message for SplitReadStream.
Protobuf type google.cloud.bigquery.storage.v1.SplitReadStreamRequest
SplitReadStreamResponse
Response message for SplitReadStream.
Protobuf type google.cloud.bigquery.storage.v1.SplitReadStreamResponse
SplitReadStreamResponse.Builder
Response message for SplitReadStream.
Protobuf type google.cloud.bigquery.storage.v1.SplitReadStreamResponse
StorageError
Structured custom BigQuery Storage error message. The error can be attached as error details in the returned rpc Status. In particular, the use of error codes allows more structured error handling, and reduces the need to evaluate unstructured error text strings.
Protobuf type google.cloud.bigquery.storage.v1.StorageError
StorageError.Builder
Structured custom BigQuery Storage error message. The error can be attached as error details in the returned rpc Status. In particular, the use of error codes allows more structured error handling, and reduces the need to evaluate unstructured error text strings.
Protobuf type google.cloud.bigquery.storage.v1.StorageError
StorageProto
StreamProto
StreamStats
Estimated stream statistics for a given read stream.
Protobuf type google.cloud.bigquery.storage.v1.StreamStats
StreamStats.Builder
Estimated stream statistics for a given read stream.
Protobuf type google.cloud.bigquery.storage.v1.StreamStats
StreamStats.Progress
Protobuf type google.cloud.bigquery.storage.v1.StreamStats.Progress
StreamStats.Progress.Builder
Protobuf type google.cloud.bigquery.storage.v1.StreamStats.Progress
StreamWriter
A BigQuery stream writer that can be used to write data into a BigQuery table.
TODO: Support batching.
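A hedged sample for StreamWriter (a sketch only: RowMessage is a hypothetical generated protobuf class whose descriptor matches the destination table's schema, and the resource names are placeholders):

```java
import com.google.cloud.bigquery.storage.v1.ProtoRows;
import com.google.cloud.bigquery.storage.v1.ProtoSchema;
import com.google.cloud.bigquery.storage.v1.ProtoSchemaConverter;
import com.google.cloud.bigquery.storage.v1.StreamWriter;
import com.google.cloud.bigquery.storage.v1.TableName;

public class StreamWriterSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.of("my-project", "my_dataset", "my_table");
    // Derive the wire schema from the (hypothetical) generated message class.
    ProtoSchema schema = ProtoSchemaConverter.convert(RowMessage.getDescriptor());
    try (StreamWriter writer =
        StreamWriter.newBuilder(table.toString() + "/streams/_default")
            .setWriterSchema(schema)
            .build()) {
      ProtoRows rows =
          ProtoRows.newBuilder()
              .addSerializedRows(RowMessage.newBuilder().build().toByteString())
              .build();
      // Offset -1 lets the server assign offsets, as required on the default stream.
      writer.append(rows, -1).get();
    }
  }
}
```

Unlike JsonStreamWriter, this path hands the server raw serialized protos, so the caller owns keeping the writer schema in sync with the table.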
StreamWriter.Builder
A builder of StreamWriters.
StreamWriter.SingleConnectionOrConnectionPool
In single-table mode, appends go directly to the connectionWorker; in multiplexing mode, they go to the connection pool.
TableFieldSchema
TableFieldSchema defines a single field/column within a table schema.
Protobuf type google.cloud.bigquery.storage.v1.TableFieldSchema
TableFieldSchema.Builder
TableFieldSchema defines a single field/column within a table schema.
Protobuf type google.cloud.bigquery.storage.v1.TableFieldSchema
TableName
TableName.Builder
Builder for projects/{project}/datasets/{dataset}/tables/{table}.
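A small sketch of how the typed resource-name helpers such as TableName build and parse fully qualified paths (placeholder names throughout):

```java
import com.google.cloud.bigquery.storage.v1.TableName;

public class TableNameSketch {
  public static void main(String[] args) {
    TableName name = TableName.of("my-project", "my_dataset", "my_table");
    // toString() renders the full resource path:
    // projects/my-project/datasets/my_dataset/tables/my_table
    String path = name.toString();
    // parse() round-trips a path back into the typed name.
    TableName parsed = TableName.parse(path);
    System.out.println(parsed.getDataset()); // prints my_dataset
  }
}
```

The other *Name classes listed here (ProjectName, ReadStreamName, WriteStreamName) follow the same of()/parse()/toString() pattern for their respective path templates.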
TableProto
TableSchema
Schema of a table. This schema is a subset of google.cloud.bigquery.v2.TableSchema containing the information necessary to generate valid messages to write to BigQuery.
Protobuf type google.cloud.bigquery.storage.v1.TableSchema
TableSchema.Builder
Schema of a table. This schema is a subset of google.cloud.bigquery.v2.TableSchema containing the information necessary to generate valid messages to write to BigQuery.
Protobuf type google.cloud.bigquery.storage.v1.TableSchema
ThrottleState
Information on whether the current connection is being throttled.
Protobuf type google.cloud.bigquery.storage.v1.ThrottleState
ThrottleState.Builder
Information on whether the current connection is being throttled.
Protobuf type google.cloud.bigquery.storage.v1.ThrottleState
WriteStream
Information about a single stream that gets data inside the storage system.
Protobuf type google.cloud.bigquery.storage.v1.WriteStream
WriteStream.Builder
Information about a single stream that gets data inside the storage system.
Protobuf type google.cloud.bigquery.storage.v1.WriteStream
WriteStreamName
WriteStreamName.Builder
Builder for projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}.
Interfaces
AppendRowsRequest.ProtoDataOrBuilder
AppendRowsRequestOrBuilder
AppendRowsResponse.AppendResultOrBuilder
AppendRowsResponseOrBuilder
ArrowRecordBatchOrBuilder
ArrowSchemaOrBuilder
ArrowSerializationOptionsOrBuilder
AvroRowsOrBuilder
AvroSchemaOrBuilder
AvroSerializationOptionsOrBuilder
BatchCommitWriteStreamsRequestOrBuilder
BatchCommitWriteStreamsResponseOrBuilder
BigQueryReadGrpc.AsyncService
BigQuery Read API. The Read API can be used to read data from BigQuery.
BigQueryReadSettings.RetryAttemptListener
BigQueryWriteGrpc.AsyncService
BigQuery Write API. The Write API can be used to write data to BigQuery. For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
CreateReadSessionRequestOrBuilder
CreateWriteStreamRequestOrBuilder
FinalizeWriteStreamRequestOrBuilder
FinalizeWriteStreamResponseOrBuilder
FlushRowsRequestOrBuilder
FlushRowsResponseOrBuilder
GetWriteStreamRequestOrBuilder
ProtoRowsOrBuilder
ProtoSchemaOrBuilder
ReadRowsRequestOrBuilder
ReadRowsResponseOrBuilder
ReadSession.TableModifiersOrBuilder
ReadSession.TableReadOptionsOrBuilder
ReadSessionOrBuilder
ReadStreamOrBuilder
RowErrorOrBuilder
SplitReadStreamRequestOrBuilder
SplitReadStreamResponseOrBuilder
StorageErrorOrBuilder
StreamStats.ProgressOrBuilder
StreamStatsOrBuilder
TableFieldSchemaOrBuilder
TableSchemaOrBuilder
ThrottleStateOrBuilder
ToProtoConverter<T>
WriteStreamOrBuilder
Enums
AppendRowsRequest.MissingValueInterpretation
An enum to indicate how to interpret missing values of fields that are present in user schema but missing in rows. A missing value can represent a NULL or a column default value defined in BigQuery table schema.
Protobuf enum google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretation
AppendRowsRequest.RowsCase
AppendRowsResponse.ResponseCase
ArrowSerializationOptions.CompressionCodec
Compression codecs supported by Arrow.
Protobuf enum google.cloud.bigquery.storage.v1.ArrowSerializationOptions.CompressionCodec
DataFormat
Data format for input or output data.
Protobuf enum google.cloud.bigquery.storage.v1.DataFormat
ReadRowsResponse.RowsCase
ReadRowsResponse.SchemaCase
ReadSession.SchemaCase
ReadSession.TableReadOptions.OutputFormatSerializationOptionsCase
ReadSession.TableReadOptions.ResponseCompressionCodec
Specifies which compression codec to attempt on the entire serialized response payload (either Arrow record batch or Avro rows). This is not to be confused with the Apache Arrow native compression codecs specified in ArrowSerializationOptions. For performance reasons, when creating a read session requesting Arrow responses, setting both native Arrow compression and application-level response compression will not be allowed - choose, at most, one kind of compression.
Protobuf enum google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions.ResponseCompressionCodec
RowError.RowErrorCode
Error code for RowError.
Protobuf enum google.cloud.bigquery.storage.v1.RowError.RowErrorCode
StorageError.StorageErrorCode
Error code for StorageError.
Protobuf enum google.cloud.bigquery.storage.v1.StorageError.StorageErrorCode
StreamWriter.SingleConnectionOrConnectionPool.Kind
Kind of connection operation mode.
TableFieldSchema.Mode
Protobuf enum google.cloud.bigquery.storage.v1.TableFieldSchema.Mode
TableFieldSchema.Type
Protobuf enum google.cloud.bigquery.storage.v1.TableFieldSchema.Type
WriteStream.Type
Type enum of the stream.
Protobuf enum google.cloud.bigquery.storage.v1.WriteStream.Type
WriteStream.WriteMode
Mode enum of the stream.
Protobuf enum google.cloud.bigquery.storage.v1.WriteStream.WriteMode
WriteStreamView
WriteStreamView is a view enum that controls what details about a write stream should be returned.
Protobuf enum google.cloud.bigquery.storage.v1.WriteStreamView
Exceptions
Exceptions.AppendSerializationError
This exception is thrown from SchemaAwareStreamWriter#append(Iterable) when client-side Proto serialization fails. It can also be thrown by the server in case the rows contain invalid data. The exception contains a Map from the indexes of the faulty rows to the corresponding error messages.
Exceptions.AppendSerializtionError
This class has a typo in its name. It will be removed soon; please use AppendSerializationError instead.
Exceptions.DataHasUnknownFieldException
The input data object has a field unknown to the schema of the SchemaAwareStreamWriter. Users can either turn on the IgnoreUnknownFields option on the SchemaAwareStreamWriter or, if they don't want the error to be ignored, recreate the SchemaAwareStreamWriter with the updated table schema.
Exceptions.FieldParseError
This exception is used internally to handle field level parsing errors.
Exceptions.InflightBytesLimitExceededException
Exceptions.InflightLimitExceededException
If FlowController.LimitExceededBehavior is set to Block and the inflight limit is exceeded, this exception will be thrown. If it is just a spike, you may retry the request. Otherwise, you can increase the inflight limit or create more StreamWriters to handle your traffic.
Exceptions.InflightRequestsLimitExceededException
Exceptions.JsonDataHasUnknownFieldException
This class has been replaced by a generic one. It will be removed soon; please use DataHasUnknownFieldException instead.
Exceptions.MaximumRequestCallbackWaitTimeExceededException
The connection was shut down because a callback was not received within the maximum wait time.
Exceptions.OffsetAlreadyExists
Offset already exists. This indicates that the append request attempted to write data to an offset before the current end of the stream. This is an expected exception when exactly-once delivery is enforced; you can safely ignore it and keep appending until there is new data to append.
Exceptions.OffsetOutOfRange
Offset out of range. This indicates that the append request attempted to write data past the current end of the stream. To append data successfully, you must either specify the offset corresponding to the current end of the stream or omit the offset from the append request. This usually indicates a bug in your code that has introduced a gap in the appends.
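A hedged sketch of handling these two offset exceptions when appending with explicit offsets (the StreamWriter, rows, and offset are assumed to come from surrounding code):

```java
import com.google.cloud.bigquery.storage.v1.Exceptions;
import com.google.cloud.bigquery.storage.v1.ProtoRows;
import com.google.cloud.bigquery.storage.v1.StreamWriter;
import java.util.concurrent.ExecutionException;

public class OffsetErrorHandlingSketch {
  // Appends rows at an explicit offset, tolerating duplicate appends.
  static void appendOnce(StreamWriter writer, ProtoRows rows, long offset) throws Exception {
    try {
      writer.append(rows, offset).get();
    } catch (ExecutionException e) {
      Throwable cause = e.getCause();
      if (cause instanceof Exceptions.OffsetAlreadyExists) {
        // Rows at this offset were already committed; safe to ignore under exactly-once.
        return;
      }
      // OffsetOutOfRange (a gap in the append sequence) and any other failure
      // usually signal an application bug, so surface them to the caller.
      throw e;
    }
  }
}
```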
Exceptions.SchemaMismatchedException
There was a schema mismatch because the BigQuery table has fewer fields than the input message. This can be resolved by updating the table's schema with the message schema.
Exceptions.StorageException
Main storage exception. Might contain a map of streams to the errors for those streams.
Exceptions.StreamFinalizedException
The write stream has already been finalized and will not accept further appends or flushes. To send additional requests, you will need to create a new write stream via CreateWriteStream.
Exceptions.StreamNotFound
The stream is not found. Possible causes include incorrectly specifying the stream identifier or attempting to use an old stream identifier that no longer exists. You can invoke CreateWriteStream to create a new stream.
Exceptions.StreamWriterClosedException
This writer instance has either been closed by the user explicitly or has encountered non-retriable errors. To continue writing to the same stream, you will need to create a new writer instance.