BigQuery Storage V1 API - Class Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest (v0.13.0)

Reference documentation and code samples for the BigQuery Storage V1 API class Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest.

Request message for CreateReadSession.
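
For orientation, a minimal sketch of constructing this message in Ruby; the project and table paths are placeholder values, not real resources:

require "google/cloud/bigquery/storage/v1"

# Build the request message. The embedded ReadSession names the table to
# read; "my-project", "my_dataset", and "my_table" are hypothetical.
request = Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest.new(
  parent: "projects/my-project",
  read_session: Google::Cloud::Bigquery::Storage::V1::ReadSession.new(
    table: "projects/my-project/datasets/my_dataset/tables/my_table",
    data_format: :AVRO
  )
)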

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#max_stream_count

def max_stream_count() -> ::Integer
Returns
  • (::Integer) — Max initial number of streams. If unset or zero, the server will choose an appropriate number of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. There is a default system max limit of 1,000.

    This must be greater than or equal to preferred_min_stream_count. Typically, clients should either leave this unset to let the system determine an upper bound, or set it to the maximum number of "units of work" the client can gracefully handle.

#max_stream_count=

def max_stream_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Max initial number of streams. If unset or zero, the server will choose an appropriate number of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. There is a default system max limit of 1,000.

    This must be greater than or equal to preferred_min_stream_count. Typically, clients should either leave this unset to let the system determine an upper bound, or set it to the maximum number of "units of work" the client can gracefully handle.

Returns
  • (::Integer) — Max initial number of streams. If unset or zero, the server will choose an appropriate number of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. There is a default system max limit of 1,000.

    This must be greater than or equal to preferred_min_stream_count. Typically, clients should either leave this unset to let the system determine an upper bound, or set it to the maximum number of "units of work" the client can gracefully handle.
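
As an illustration, a small sketch of setting this field; the value 8 is an arbitrary stand-in for the client's maximum "units of work", and request is assumed to be a CreateReadSessionRequest built as shown earlier:

# Cap the session at 8 streams (arbitrary example value). The server may
# still return fewer streams than this, but never more.
request.max_stream_count = 8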

#parent

def parent() -> ::String
Returns
  • (::String) — Required. The request project that owns the session, in the form of projects/{project_id}.

#parent=

def parent=(value) -> ::String
Parameter
  • value (::String) — Required. The request project that owns the session, in the form of projects/{project_id}.
Returns
  • (::String) — Required. The request project that owns the session, in the form of projects/{project_id}.
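
For example, a sketch of setting the parent in the expected format, where "my-project" is a placeholder project ID:

# The parent names the project that owns the session, in
# projects/{project_id} form.
request.parent = "projects/my-project"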

#preferred_min_stream_count

def preferred_min_stream_count() -> ::Integer
Returns
  • (::Integer) — The minimum preferred stream count. This parameter can be used to inform the service that there is a desired lower bound on the number of streams. This is typically a target parallelism of the client (e.g. a Spark cluster with N workers would set this to a low multiple of N to ensure good cluster utilization).

    The system will make a best effort to provide at least this number of streams, but in some cases might provide less.

#preferred_min_stream_count=

def preferred_min_stream_count=(value) -> ::Integer
Parameter
  • value (::Integer) — The minimum preferred stream count. This parameter can be used to inform the service that there is a desired lower bound on the number of streams. This is typically a target parallelism of the client (e.g. a Spark cluster with N workers would set this to a low multiple of N to ensure good cluster utilization).

    The system will make a best effort to provide at least this number of streams, but in some cases might provide less.

Returns
  • (::Integer) — The minimum preferred stream count. This parameter can be used to inform the service that there is a desired lower bound on the number of streams. This is typically a target parallelism of the client (e.g. a Spark cluster with N workers would set this to a low multiple of N to ensure good cluster utilization).

    The system will make a best effort to provide at least this number of streams, but in some cases might provide less.
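
A hedged sketch of the Spark-style sizing described above, assuming a hypothetical cluster of 10 workers; the multiplier 2 is an arbitrary "low multiple":

# Ask for at least a low multiple of the worker count so every worker has
# streams to read; this is best effort, and the server may provide fewer.
worker_count = 10
request.preferred_min_stream_count = 2 * worker_count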

#read_session

def read_session() -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession) — Required. Session to be created.

#read_session=

def read_session=(value) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession
Parameter
  • value (::Google::Cloud::Bigquery::Storage::V1::ReadSession) — Required. Session to be created.
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession) — Required. Session to be created.
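
Putting the pieces together, an end-to-end sketch that passes the request to BigQueryRead::Client#create_read_session; all resource names are placeholders:

require "google/cloud/bigquery/storage/v1"

client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

request = Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest.new(
  parent: "projects/my-project",
  read_session: Google::Cloud::Bigquery::Storage::V1::ReadSession.new(
    table: "projects/my-project/datasets/my_dataset/tables/my_table",
    data_format: :ARROW
  ),
  max_stream_count: 4
)

session = client.create_read_session request
# Each returned stream can then be read with client.read_rows.
session.streams.each { |stream| puts stream.name }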