Reference documentation and code samples for the BigQuery Storage V1 API class Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.
Client for the BigQueryRead service.
BigQuery Read API.
The Read API can be used to read data from BigQuery.
Inherits
- Object
Methods
.configure
def self.configure() { |config| ... } -> Client::Configuration
Configure the BigQueryRead Client class.
See Configuration for a description of the configuration fields.
- (config) — Configure the Client class.
- config (Client::Configuration)
# Modify the configuration for all BigQueryRead clients
::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
  config.timeout = 10.0
end
#configure
def configure() { |config| ... } -> Client::Configuration
Configure the BigQueryRead Client instance.
The configuration is set to the derived mode, meaning that values can be changed, but structural changes (adding new fields, etc.) are not allowed. Structural changes should be made on Client.configure.
See Configuration for a description of the configuration fields.
- (config) — Configure the Client instance.
- config (Client::Configuration)
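As a minimal sketch, an instance-level override applies only to that client; class-level defaults used by other clients are unchanged (the 30.0 second timeout below is an arbitrary illustrative value):

# Override the timeout for a single client instance. Other clients
# keep the class-level configuration.
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
client.configure do |config|
  config.timeout = 30.0
end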
#create_read_session
def create_read_session(request, options = nil) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession
def create_read_session(parent: nil, read_session: nil, max_stream_count: nil, preferred_min_stream_count: nil) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession
Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.
Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
def create_read_session(request, options = nil) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession
Pass arguments to create_read_session via a request object, either of type CreateReadSessionRequest or an equivalent Hash.
- request (::Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def create_read_session(parent: nil, read_session: nil, max_stream_count: nil, preferred_min_stream_count: nil) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession
Pass arguments to create_read_session via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- parent (::String) — Required. The request project that owns the session, in the form of projects/{project_id}.
- read_session (::Google::Cloud::Bigquery::Storage::V1::ReadSession, ::Hash) — Required. Session to be created.
- max_stream_count (::Integer) — Max initial number of streams. If unset or zero, the server will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. There is a default system max limit of 1,000. This must be greater than or equal to preferred_min_stream_count. Typically, clients should either leave this unset to let the system determine an upper bound OR set this to a size for the maximum "units of work" it can gracefully handle.
- preferred_min_stream_count (::Integer) — The minimum preferred stream count. This parameter can be used to inform the service that there is a desired lower bound on the number of streams. This is typically a target parallelism of the client (e.g. a Spark cluster with N workers would set this to a low multiple of N to ensure good cluster utilization). The system will make a best effort to provide at least this number of streams, but in some cases might provide less.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::Bigquery::Storage::V1::ReadSession)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
require "google/cloud/bigquery/storage/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest.new

# Call the create_read_session method.
result = client.create_read_session request

# The returned object is of type Google::Cloud::Bigquery::Storage::V1::ReadSession.
p result
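Beyond the basic example, the keyword-argument form can populate the session inline. The following is a sketch only: the project, dataset, table, column names, and row filter are hypothetical placeholders, and the field values shown are illustrative:

require "google/cloud/bigquery/storage/v1"

client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# "my-project", "my_dataset", "my_table", and the column/filter values
# below are hypothetical placeholders, not real resources.
session = client.create_read_session(
  parent: "projects/my-project",
  read_session: {
    table: "projects/my-project/datasets/my_dataset/tables/my_table",
    data_format: :AVRO,
    read_options: {
      selected_fields: ["name", "number"], # push down a column projection
      row_restriction: "number > 0"        # push down a row filter
    }
  },
  max_stream_count: 4 # cap the initial number of streams
)

# The returned session lists the server-assigned streams.
session.streams.each { |stream| p stream.name }

The read_session Hash is coerced to a ReadSession message, so nested messages such as TableReadOptions can likewise be given as Hashes.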
#initialize
def initialize() { |config| ... } -> Client
Create a new BigQueryRead client object.
- (config) — Configure the BigQueryRead client.
- config (Client::Configuration)
- (Client) — a new instance of Client
# Create a client using the default configuration
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# Create a client using a custom configuration
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
  config.timeout = 10.0
end
#read_rows
def read_rows(request, options = nil) -> ::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>
def read_rows(read_stream: nil, offset: nil) -> ::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>
Reads rows from the stream in the format prescribed by the ReadSession. Each response contains one or more table rows, up to a maximum of 100 MiB per response; read requests which attempt to read individual rows larger than 100 MiB will fail.
Each request also returns a set of stream statistics reflecting the current state of the stream.
def read_rows(request, options = nil) -> ::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>
Pass arguments to read_rows via a request object, either of type ReadRowsRequest or an equivalent Hash.
- request (::Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def read_rows(read_stream: nil, offset: nil) -> ::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>
Pass arguments to read_rows via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- read_stream (::String) — Required. Stream to read rows from.
- offset (::Integer) — The offset requested must be less than the last row read from Read. Requesting a larger offset is undefined. If not specified, start reading from offset zero.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
require "google/cloud/bigquery/storage/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest.new

# Call the read_rows method to start streaming.
output = client.read_rows request

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse
output.each do |current_response|
  p current_response
end
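Continuing the create_read_session sketch above, each server-assigned stream name can be passed to read_rows via the keyword-argument form; the explicit offset of 0 is the default and is shown only for illustration:

# Assumes `session` is a ReadSession returned by create_read_session,
# as in the earlier sketch.
session.streams.each do |stream|
  responses = client.read_rows read_stream: stream.name, offset: 0

  responses.each do |response|
    # row_count reports how many rows this response carries; the row data
    # itself arrives serialized in the session's format (Avro or Arrow).
    puts "#{stream.name}: #{response.row_count} rows"
  end
end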
#split_read_stream
def split_read_stream(request, options = nil) -> ::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse
def split_read_stream(name: nil, fraction: nil) -> ::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse
Splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, that original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.
def split_read_stream(request, options = nil) -> ::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse
Pass arguments to split_read_stream via a request object, either of type SplitReadStreamRequest or an equivalent Hash.
- request (::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def split_read_stream(name: nil, fraction: nil) -> ::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse
Pass arguments to split_read_stream via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- name (::String) — Required. Name of the stream to split.
- fraction (::Float) — A value in the range (0.0, 1.0) that specifies the fractional point at which the original stream should be split. The actual split point is evaluated on pre-filtered rows, so if a filter is provided, then there is no guarantee that the division of the rows between the new child streams will be proportional to this fractional value. Additionally, because the server-side unit for assigning data is collections of rows, this fraction will always map to a data storage boundary on the server side.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
require "google/cloud/bigquery/storage/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest.new

# Call the split_read_stream method.
result = client.split_read_stream request

# The returned object is of type Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse.
p result
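As a sketch of the fraction parameter, the following splits a stream roughly in half and reads both children. Here stream_name is a hypothetical stream name obtained from an earlier create_read_session call, and the actual split point may differ from 0.5 because it snaps to a server-side storage boundary:

# `stream_name` is a placeholder for a stream obtained elsewhere.
split = client.split_read_stream name: stream_name, fraction: 0.5

# The response carries the primary and residual child streams.
primary  = split.primary_stream
residual = split.remainder_stream

# Read to completion, the two children together yield the same rows
# as the original stream would have.
[primary, residual].each do |stream|
  client.read_rows(read_stream: stream.name).each do |response|
    puts "#{stream.name}: #{response.row_count} rows"
  end
end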