Class BaseBigQueryStorageClient (2.47.0)

public class BaseBigQueryStorageClient implements BackgroundResource

Service Description: BigQuery storage API.

The BigQuery storage API can be used to read data stored in BigQuery.

The v1beta1 API is not yet officially deprecated, and will go through a full deprecation cycle (https://cloud.google.com/products#product-launch-stages) before the service is turned down. However, new code should use the v1 API going forward.

This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   ProjectName parent = ProjectName.of("[PROJECT]");
   int requestedStreams = 1017221410;
   Storage.ReadSession response =
       baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
 }
 

Note: close() needs to be called on the BaseBigQueryStorageClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().

The surface of this class includes several types of Java methods for each of the API's methods:

  1. A "flattened" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.
  2. A "request object" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.
  3. A "callable" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.

See the individual methods for example code.
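As a sketch, the three surfaces for the same RPC look like this, using FinalizeStream as the example (names and types taken from the samples below; like all snippets on this page, this is a template and needs real values to run):

```java
// Sketch: the three method surfaces for the FinalizeStream RPC.
// Requires the google-cloud-bigquerystorage v1beta1 client on the classpath.
try (BaseBigQueryStorageClient client = BaseBigQueryStorageClient.create()) {
  Storage.Stream stream = Storage.Stream.newBuilder().build();

  // 1. Flattened method: request fields become method parameters.
  client.finalizeStream(stream);

  // 2. Request object method: build the request explicitly first.
  Storage.FinalizeStreamRequest request =
      Storage.FinalizeStreamRequest.newBuilder().setStream(stream).build();
  client.finalizeStream(request);

  // 3. Callable method: returns an immutable callable that can be
  //    reused and invoked asynchronously.
  ApiFuture<Empty> future = client.finalizeStreamCallable().futureCall(request);
  future.get(); // blocks until the RPC completes
}
```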

Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
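For example, `ProjectName` (used by createReadSession below) follows the usual surface of generated resource-name classes; assuming the standard `format`/`parse`/`getProject` methods, round-tripping a name looks like:

```java
// Sketch: formatting and parsing a resource name with ProjectName.
// Assumes the standard generated resource-name surface.
String formatted = ProjectName.format("my-project"); // "projects/my-project"
ProjectName name = ProjectName.parse(formatted);     // back to a typed name
String project = name.getProject();                  // "my-project"
```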

This class can be customized by passing in a custom instance of BaseBigQueryStorageSettings to create(). For example:

To customize credentials:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryStorageSettings baseBigQueryStorageSettings =
     BaseBigQueryStorageSettings.newBuilder()
         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
         .build();
 BaseBigQueryStorageClient baseBigQueryStorageClient =
     BaseBigQueryStorageClient.create(baseBigQueryStorageSettings);
 

To customize the endpoint:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryStorageSettings baseBigQueryStorageSettings =
     BaseBigQueryStorageSettings.newBuilder().setEndpoint(myEndpoint).build();
 BaseBigQueryStorageClient baseBigQueryStorageClient =
     BaseBigQueryStorageClient.create(baseBigQueryStorageSettings);
 

Please refer to the GitHub repository's samples for more quickstart code snippets.

Inheritance

java.lang.Object > BaseBigQueryStorageClient

Implements

BackgroundResource

Static Methods

create()

public static final BaseBigQueryStorageClient create()

Constructs an instance of BaseBigQueryStorageClient with default settings.

Returns
Type: BaseBigQueryStorageClient

Exceptions
Type: IOException

create(BaseBigQueryStorageSettings settings)

public static final BaseBigQueryStorageClient create(BaseBigQueryStorageSettings settings)

Constructs an instance of BaseBigQueryStorageClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set.

Parameter
settings: BaseBigQueryStorageSettings

Returns
Type: BaseBigQueryStorageClient

Exceptions
Type: IOException

create(BigQueryStorageStub stub)

public static final BaseBigQueryStorageClient create(BigQueryStorageStub stub)

Constructs an instance of BaseBigQueryStorageClient, using the given stub for making calls. This is for advanced usage; prefer using create(BaseBigQueryStorageSettings).

Parameter
stub: BigQueryStorageStub

Returns
Type: BaseBigQueryStorageClient

Constructors

BaseBigQueryStorageClient(BaseBigQueryStorageSettings settings)

protected BaseBigQueryStorageClient(BaseBigQueryStorageSettings settings)

Constructs an instance of BaseBigQueryStorageClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.

Parameter
settings: BaseBigQueryStorageSettings

BaseBigQueryStorageClient(BigQueryStorageStub stub)

protected BaseBigQueryStorageClient(BigQueryStorageStub stub)
Parameter
stub: BigQueryStorageStub

Methods

awaitTermination(long duration, TimeUnit unit)

public boolean awaitTermination(long duration, TimeUnit unit)
Parameters
duration: long
unit: TimeUnit

Returns
Type: boolean

Exceptions
Type: InterruptedException
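When try-with-resources is not an option, the shutdown(), awaitTermination(), and shutdownNow() methods documented on this page can be combined for a graceful shutdown; a minimal sketch (the 30-second grace period is an arbitrary choice for illustration):

```java
// Sketch: graceful shutdown of a long-lived client outside try-with-resources.
// Requires the google-cloud-bigquerystorage v1beta1 client on the classpath.
BaseBigQueryStorageClient client = BaseBigQueryStorageClient.create();
try {
  // ... issue calls on the client ...
} finally {
  client.shutdown(); // stop accepting new calls, let in-flight calls finish
  if (!client.awaitTermination(30, TimeUnit.SECONDS)) {
    client.shutdownNow(); // force shutdown if the grace period expires
  }
}
```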

batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)

public final Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.BatchCreateReadSessionStreamsRequest request =
       Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
           .setSession(Storage.ReadSession.newBuilder().build())
           .setRequestedStreams(1017221410)
           .build();
   Storage.BatchCreateReadSessionStreamsResponse response =
       baseBigQueryStorageClient.batchCreateReadSessionStreams(request);
 }
 
Parameter
request: Storage.BatchCreateReadSessionStreamsRequest
The request object containing all of the parameters for the API call.

Returns
Type: Storage.BatchCreateReadSessionStreamsResponse

batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams)

public final Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams)

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.ReadSession session = Storage.ReadSession.newBuilder().build();
   int requestedStreams = 1017221410;
   Storage.BatchCreateReadSessionStreamsResponse response =
       baseBigQueryStorageClient.batchCreateReadSessionStreams(session, requestedStreams);
 }
 
Parameters
session: Storage.ReadSession
Required. Must be a non-expired session obtained from a call to CreateReadSession. Only the name field needs to be set.

requestedStreams: int
Required. Number of new streams requested. Must be positive. Number of added streams may be less than this, see CreateReadSessionRequest for more information.

Returns
Type: Storage.BatchCreateReadSessionStreamsResponse

batchCreateReadSessionStreamsCallable()

public final UnaryCallable<Storage.BatchCreateReadSessionStreamsRequest,Storage.BatchCreateReadSessionStreamsResponse> batchCreateReadSessionStreamsCallable()

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.BatchCreateReadSessionStreamsRequest request =
       Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
           .setSession(Storage.ReadSession.newBuilder().build())
           .setRequestedStreams(1017221410)
           .build();
   ApiFuture<Storage.BatchCreateReadSessionStreamsResponse> future =
       baseBigQueryStorageClient.batchCreateReadSessionStreamsCallable().futureCall(request);
   // Do something.
   Storage.BatchCreateReadSessionStreamsResponse response = future.get();
 }
 
Returns
Type: UnaryCallable<BatchCreateReadSessionStreamsRequest,BatchCreateReadSessionStreamsResponse>

close()

public final void close()

createReadSession(Storage.CreateReadSessionRequest request)

public final Storage.ReadSession createReadSession(Storage.CreateReadSessionRequest request)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.CreateReadSessionRequest request =
       Storage.CreateReadSessionRequest.newBuilder()
           .setTableReference(TableReferenceProto.TableReference.newBuilder().build())
           .setParent(ProjectName.of("[PROJECT]").toString())
           .setTableModifiers(TableReferenceProto.TableModifiers.newBuilder().build())
           .setRequestedStreams(1017221410)
           .setReadOptions(ReadOptions.TableReadOptions.newBuilder().build())
           .setFormat(Storage.DataFormat.forNumber(0))
           .setShardingStrategy(Storage.ShardingStrategy.forNumber(0))
           .build();
   Storage.ReadSession response = baseBigQueryStorageClient.createReadSession(request);
 }
 
Parameter
request: Storage.CreateReadSessionRequest
The request object containing all of the parameters for the API call.

Returns
Type: Storage.ReadSession

createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams)

public final Storage.ReadSession createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   ProjectName parent = ProjectName.of("[PROJECT]");
   int requestedStreams = 1017221410;
   Storage.ReadSession response =
       baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
 }
 
Parameters
tableReference: TableReferenceProto.TableReference
Required. Reference to the table to read.

parent: ProjectName
Required. String of the form projects/{project_id} indicating the project this ReadSession is associated with. This is the project that will be billed for usage.

requestedStreams: int
Initial number of streams. If unset or 0, we will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table and the maximum amount of parallelism allowed by the system.

Streams must be read starting from offset 0.

Returns
Type: Storage.ReadSession

createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams)

public final Storage.ReadSession createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   String parent = ProjectName.of("[PROJECT]").toString();
   int requestedStreams = 1017221410;
   Storage.ReadSession response =
       baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
 }
 
Parameters
tableReference: TableReferenceProto.TableReference
Required. Reference to the table to read.

parent: String
Required. String of the form projects/{project_id} indicating the project this ReadSession is associated with. This is the project that will be billed for usage.

requestedStreams: int
Initial number of streams. If unset or 0, we will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table and the maximum amount of parallelism allowed by the system.

Streams must be read starting from offset 0.

Returns
Type: Storage.ReadSession

createReadSessionCallable()

public final UnaryCallable<Storage.CreateReadSessionRequest,Storage.ReadSession> createReadSessionCallable()

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.CreateReadSessionRequest request =
       Storage.CreateReadSessionRequest.newBuilder()
           .setTableReference(TableReferenceProto.TableReference.newBuilder().build())
           .setParent(ProjectName.of("[PROJECT]").toString())
           .setTableModifiers(TableReferenceProto.TableModifiers.newBuilder().build())
           .setRequestedStreams(1017221410)
           .setReadOptions(ReadOptions.TableReadOptions.newBuilder().build())
           .setFormat(Storage.DataFormat.forNumber(0))
           .setShardingStrategy(Storage.ShardingStrategy.forNumber(0))
           .build();
   ApiFuture<Storage.ReadSession> future =
       baseBigQueryStorageClient.createReadSessionCallable().futureCall(request);
   // Do something.
   Storage.ReadSession response = future.get();
 }
 
Returns
Type: UnaryCallable<CreateReadSessionRequest,ReadSession>

finalizeStream(Storage.FinalizeStreamRequest request)

public final void finalizeStream(Storage.FinalizeStreamRequest request)

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.FinalizeStreamRequest request =
       Storage.FinalizeStreamRequest.newBuilder()
           .setStream(Storage.Stream.newBuilder().build())
           .build();
   baseBigQueryStorageClient.finalizeStream(request);
 }
 
Parameter
request: Storage.FinalizeStreamRequest
The request object containing all of the parameters for the API call.

finalizeStream(Storage.Stream stream)

public final void finalizeStream(Storage.Stream stream)

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.Stream stream = Storage.Stream.newBuilder().build();
   baseBigQueryStorageClient.finalizeStream(stream);
 }
 
Parameter
stream: Storage.Stream
Required. Stream to finalize.

finalizeStreamCallable()

public final UnaryCallable<Storage.FinalizeStreamRequest,Empty> finalizeStreamCallable()

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.FinalizeStreamRequest request =
       Storage.FinalizeStreamRequest.newBuilder()
           .setStream(Storage.Stream.newBuilder().build())
           .build();
   ApiFuture<Empty> future =
       baseBigQueryStorageClient.finalizeStreamCallable().futureCall(request);
   // Do something.
   future.get();
 }
 
Returns
Type: UnaryCallable<FinalizeStreamRequest,Empty>

getSettings()

public final BaseBigQueryStorageSettings getSettings()
Returns
Type: BaseBigQueryStorageSettings

getStub()

public BigQueryStorageStub getStub()
Returns
Type: BigQueryStorageStub

isShutdown()

public boolean isShutdown()
Returns
Type: boolean

isTerminated()

public boolean isTerminated()
Returns
Type: boolean

readRowsCallable()

public final ServerStreamingCallable<Storage.ReadRowsRequest,Storage.ReadRowsResponse> readRowsCallable()

Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.

Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.ReadRowsRequest request =
       Storage.ReadRowsRequest.newBuilder()
           .setReadPosition(Storage.StreamPosition.newBuilder().build())
           .build();
   ServerStream<Storage.ReadRowsResponse> stream =
       baseBigQueryStorageClient.readRowsCallable().call(request);
   for (Storage.ReadRowsResponse response : stream) {
     // Do something when a response is received.
   }
 }
 
Returns
Type: ServerStreamingCallable<ReadRowsRequest,ReadRowsResponse>

shutdown()

public void shutdown()

shutdownNow()

public void shutdownNow()

splitReadStream(Storage.SplitReadStreamRequest request)

public final Storage.SplitReadStreamResponse splitReadStream(Storage.SplitReadStreamRequest request)

Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.SplitReadStreamRequest request =
       Storage.SplitReadStreamRequest.newBuilder()
           .setOriginalStream(Storage.Stream.newBuilder().build())
           .setFraction(-1653751294)
           .build();
   Storage.SplitReadStreamResponse response = baseBigQueryStorageClient.splitReadStream(request);
 }
 
Parameter
request: Storage.SplitReadStreamRequest
The request object containing all of the parameters for the API call.

Returns
Type: Storage.SplitReadStreamResponse

splitReadStream(Storage.Stream originalStream)

public final Storage.SplitReadStreamResponse splitReadStream(Storage.Stream originalStream)

Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.Stream originalStream = Storage.Stream.newBuilder().build();
   Storage.SplitReadStreamResponse response =
       baseBigQueryStorageClient.splitReadStream(originalStream);
 }
 
Parameter
originalStream: Storage.Stream
Required. Stream to split.

Returns
Type: Storage.SplitReadStreamResponse

splitReadStreamCallable()

public final UnaryCallable<Storage.SplitReadStreamRequest,Storage.SplitReadStreamResponse> splitReadStreamCallable()

Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.SplitReadStreamRequest request =
       Storage.SplitReadStreamRequest.newBuilder()
           .setOriginalStream(Storage.Stream.newBuilder().build())
           .setFraction(-1653751294)
           .build();
   ApiFuture<Storage.SplitReadStreamResponse> future =
       baseBigQueryStorageClient.splitReadStreamCallable().futureCall(request);
   // Do something.
   Storage.SplitReadStreamResponse response = future.get();
 }
 
Returns
Type: UnaryCallable<SplitReadStreamRequest,SplitReadStreamResponse>