BigQuery Storage V1 API - Class Google::Cloud::Bigquery::Storage::V1::ReadSession (v0.14.0)

Reference documentation and code samples for the BigQuery Storage V1 API class Google::Cloud::Bigquery::Storage::V1::ReadSession.

Information about the ReadSession.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#arrow_schema

def arrow_schema() -> ::Google::Cloud::Bigquery::Storage::V1::ArrowSchema
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ArrowSchema) — Output only. Arrow schema.

#avro_schema

def avro_schema() -> ::Google::Cloud::Bigquery::Storage::V1::AvroSchema
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::AvroSchema) — Output only. Avro schema.

#data_format

def data_format() -> ::Google::Cloud::Bigquery::Storage::V1::DataFormat
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::DataFormat) — Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.

#data_format=

def data_format=(value) -> ::Google::Cloud::Bigquery::Storage::V1::DataFormat
Parameter
  • value (::Google::Cloud::Bigquery::Storage::V1::DataFormat) — Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::DataFormat) — Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.

#estimated_total_bytes_scanned

def estimated_total_bytes_scanned() -> ::Integer
Returns
  • (::Integer) — Output only. An estimate on the number of bytes this session will scan when all streams are completely consumed. This estimate is based on metadata from the table which might be incomplete or stale.

#expire_time

def expire_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp) — Output only. Time at which the session becomes invalid. After this time, subsequent requests to read this Session will return errors. The expire_time is automatically assigned and currently cannot be specified or updated.

#name

def name() -> ::String
Returns
  • (::String) — Output only. Unique identifier for the session, in the form projects/{project_id}/locations/{location}/sessions/{session_id}.

#read_options

def read_options() -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions) — Optional. Read options for this session (e.g. column selection, filters).

#read_options=

def read_options=(value) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions
Parameter
  • value (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions) — Optional. Read options for this session (e.g. column selection, filters).
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions) — Optional. Read options for this session (e.g. column selection, filters).

#streams

def streams() -> ::Array<::Google::Cloud::Bigquery::Storage::V1::ReadStream>
Returns
  • (::Array<::Google::Cloud::Bigquery::Storage::V1::ReadStream>) — Output only. A list of streams created with the session.

    At least one stream is created with the session. In the future, larger request_stream_count values may result in this list being unpopulated; in that case, the user will need to retrieve the streams with a List method instead, which is not yet available.

#table

def table() -> ::String
Returns
  • (::String) — Immutable. Table that this ReadSession is reading from, in the form projects/{project_id}/datasets/{dataset_id}/tables/{table_id}

#table=

def table=(value) -> ::String
Parameter
  • value (::String) — Immutable. Table that this ReadSession is reading from, in the form projects/{project_id}/datasets/{dataset_id}/tables/{table_id}
Returns
  • (::String) — Immutable. Table that this ReadSession is reading from, in the form projects/{project_id}/datasets/{dataset_id}/tables/{table_id}

#table_modifiers

def table_modifiers() -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers) — Optional. Any modifiers which are applied when reading from the specified table.

#table_modifiers=

def table_modifiers=(value) -> ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers
Parameter
  • value (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers) — Optional. Any modifiers which are applied when reading from the specified table.
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers) — Optional. Any modifiers which are applied when reading from the specified table.

#trace_id

def trace_id() -> ::String
Returns
  • (::String) — Optional. ID set by client to annotate a session identity. This does not need to be strictly unique, but the same ID should be used to group logically connected sessions (e.g., using the same ID for all sessions needed to complete a Spark SQL query is reasonable).

    Maximum length is 256 bytes.

#trace_id=

def trace_id=(value) -> ::String
Parameter
  • value (::String) — Optional. ID set by client to annotate a session identity. This does not need to be strictly unique, but the same ID should be used to group logically connected sessions (e.g., using the same ID for all sessions needed to complete a Spark SQL query is reasonable).

    Maximum length is 256 bytes.

Returns
  • (::String) — Optional. ID set by client to annotate a session identity. This does not need to be strictly unique, but the same ID should be used to group logically connected sessions (e.g., using the same ID for all sessions needed to complete a Spark SQL query is reasonable).

    Maximum length is 256 bytes.