BigQuery Storage V1 API - Class Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest (v0.21.0)

Reference documentation and code samples for the BigQuery Storage V1 API class Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.

Request message for AppendRows.

Because AppendRows is a bidirectional streaming RPC, certain parts of the AppendRowsRequest need only be specified for the first request before switching table destinations. You can also switch table destinations within the same connection for the default stream.

A single AppendRowsRequest must be less than 10 MB in size. Requests larger than this return an error, typically INVALID_ARGUMENT.
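
For orientation, here is a minimal sketch of opening an AppendRows connection with this request class. The project, dataset, and table names are hypothetical, and proto_data stands in for a ProtoData payload built as described under #proto_rows below.

require "google/cloud/bigquery/storage/v1"

client = Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client.new

# Hypothetical resource name for a table's default stream.
stream_name = "projects/my-project/datasets/my_dataset/tables/my_table/streams/_default"

# Only the first request on a connection needs write_stream (and, for proto
# input, a writer schema) unless the destination changes mid-connection.
first_request = Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.new(
  write_stream: stream_name,
  proto_rows:   proto_data # assumed: see #proto_rows below
)

# append_rows takes an Enumerable of requests and returns a stream of responses.
client.append_rows([first_request]).each do |response|
  puts response.append_result&.offset&.value
end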

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#missing_value_interpretations

def missing_value_interpretations() -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}
Returns
  • (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}) — A map indicating how to interpret missing values for some fields. Missing values are fields present in the user schema but missing in rows. The key is the field name; the value is the interpretation of missing values for that field.

    For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all missing values in field foo are interpreted as NULL, and all missing values in field bar are interpreted as the default value of field bar in the table schema.

    If a field is not in this map and has missing values, the missing values in this field are interpreted as NULL.

    This field only applies to the current request; it won't affect other requests on the connection.

    Currently, the field name can only be a top-level column name; it can't be a struct field path such as 'foo.bar'.

#missing_value_interpretations=

def missing_value_interpretations=(value) -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}
Parameter
  • value (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}) — A map indicating how to interpret missing values for some fields. Missing values are fields present in the user schema but missing in rows. The key is the field name; the value is the interpretation of missing values for that field.

    For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all missing values in field foo are interpreted as NULL, and all missing values in field bar are interpreted as the default value of field bar in the table schema.

    If a field is not in this map and has missing values, the missing values in this field are interpreted as NULL.

    This field only applies to the current request; it won't affect other requests on the connection.

    Currently, the field name can only be a top-level column name; it can't be a struct field path such as 'foo.bar'.

Returns
  • (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}) — A map indicating how to interpret missing values for some fields. Missing values are fields present in the user schema but missing in rows. The key is the field name; the value is the interpretation of missing values for that field.

    For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all missing values in field foo are interpreted as NULL, and all missing values in field bar are interpreted as the default value of field bar in the table schema.

    If a field is not in this map and has missing values, the missing values in this field are interpreted as NULL.

    This field only applies to the current request; it won't affect other requests on the connection.

    Currently, the field name can only be a top-level column name; it can't be a struct field path such as 'foo.bar'.
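
As a sketch, the {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} map from the description above can be built as follows, assuming foo and bar are top-level columns of the destination table:

require "google/cloud/bigquery/storage/v1"

request = Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.new(
  missing_value_interpretations: {
    "foo" => :NULL_VALUE,    # missing foo values become NULL
    "bar" => :DEFAULT_VALUE  # missing bar values take the column's default
  }
)

request.missing_value_interpretations["foo"] # => :NULL_VALUE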

#offset

def offset() -> ::Google::Protobuf::Int64Value
Returns
  • (::Google::Protobuf::Int64Value) — If present, the write is only performed if the next append offset is the same as the provided value. If not present, the write is performed at the current end of stream. Specifying a value for this field is not allowed when calling AppendRows for the '_default' stream.

#offset=

def offset=(value) -> ::Google::Protobuf::Int64Value
Parameter
  • value (::Google::Protobuf::Int64Value) — If present, the write is only performed if the next append offset is the same as the provided value. If not present, the write is performed at the current end of stream. Specifying a value for this field is not allowed when calling AppendRows for the '_default' stream.
Returns
  • (::Google::Protobuf::Int64Value) — If present, the write is only performed if the next append offset is the same as the provided value. If not present, the write is performed at the current end of stream. Specifying a value for this field is not allowed when calling AppendRows for the '_default' stream.
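
Note that offset is a Google::Protobuf::Int64Value wrapper rather than a bare Integer, so it must be set with the wrapper type. A sketch, assuming an explicitly created stream whose next append offset should be 42:

require "google/cloud/bigquery/storage/v1"

request = Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.new
# Wrap the integer; the append fails unless the stream's next offset is 42.
request.offset = Google::Protobuf::Int64Value.new(value: 42)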

#proto_rows

def proto_rows() -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData) — Rows in proto format.

#proto_rows=

def proto_rows=(value) -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData
Parameter
  • value (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData) — Rows in proto format.
Returns
  • (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData) — Rows in proto format.
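
ProtoData carries the writer schema plus the serialized rows. A sketch of building one, where descriptor_proto is assumed to be a Google::Protobuf::DescriptorProto describing a row message whose fields match the table's columns, and my_row is an instance of that message:

require "google/cloud/bigquery/storage/v1"

proto_data = Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData.new(
  # The writer schema is required in the first request on a connection.
  writer_schema: Google::Cloud::Bigquery::Storage::V1::ProtoSchema.new(
    proto_descriptor: descriptor_proto # assumed: built elsewhere
  ),
  rows: Google::Cloud::Bigquery::Storage::V1::ProtoRows.new(
    serialized_rows: [my_row.to_proto] # each entry is one binary-encoded row
  )
)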

#trace_id

def trace_id() -> ::String
Returns
  • (::String) — ID set by the client to annotate its identity. Only the setting from the initial request is respected.

#trace_id=

def trace_id=(value) -> ::String
Parameter
  • value (::String) — ID set by the client to annotate its identity. Only the setting from the initial request is respected.
Returns
  • (::String) — ID set by the client to annotate its identity. Only the setting from the initial request is respected.
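
For example, a client might identify itself in the first request of a connection; the identifier below is hypothetical:

first_request.trace_id = "my-ingest-job/1.4.2" # values on later requests are ignored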

#write_stream

def write_stream() -> ::String
Returns
  • (::String) — Required. The write_stream identifies the append operation. It must be provided in the following scenarios:

    • In the first request to an AppendRows connection.

    • In all subsequent requests to an AppendRows connection, if you use the same connection to write to multiple tables or change the input schema for default streams.

    For explicitly created write streams, the format is:

    • projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}

    For the special default stream, the format is:

    • projects/{project}/datasets/{dataset}/tables/{table}/streams/_default

    An example of a possible sequence of requests with write_stream fields within a single connection:

    • r1: {write_stream: stream_name_1}

    • r2: {write_stream: /omit/}

    • r3: {write_stream: /omit/}

    • r4: {write_stream: stream_name_2}

    • r5: {write_stream: stream_name_2}

    The destination changed in r4, so the write_stream field must be populated in all subsequent requests on this connection.

#write_stream=

def write_stream=(value) -> ::String
Parameter
  • value (::String) — Required. The write_stream identifies the append operation. It must be provided in the following scenarios:

    • In the first request to an AppendRows connection.

    • In all subsequent requests to an AppendRows connection, if you use the same connection to write to multiple tables or change the input schema for default streams.

    For explicitly created write streams, the format is:

    • projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}

    For the special default stream, the format is:

    • projects/{project}/datasets/{dataset}/tables/{table}/streams/_default

    An example of a possible sequence of requests with write_stream fields within a single connection:

    • r1: {write_stream: stream_name_1}

    • r2: {write_stream: /omit/}

    • r3: {write_stream: /omit/}

    • r4: {write_stream: stream_name_2}

    • r5: {write_stream: stream_name_2}

    The destination changed in r4, so the write_stream field must be populated in all subsequent requests on this connection.

Returns
  • (::String) — Required. The write_stream identifies the append operation. It must be provided in the following scenarios:

    • In the first request to an AppendRows connection.

    • In all subsequent requests to an AppendRows connection, if you use the same connection to write to multiple tables or change the input schema for default streams.

    For explicitly created write streams, the format is:

    • projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}

    For the special default stream, the format is:

    • projects/{project}/datasets/{dataset}/tables/{table}/streams/_default

    An example of a possible sequence of requests with write_stream fields within a single connection:

    • r1: {write_stream: stream_name_1}

    • r2: {write_stream: /omit/}

    • r3: {write_stream: /omit/}

    • r4: {write_stream: stream_name_2}

    • r5: {write_stream: stream_name_2}

    The destination changed in r4, so the write_stream field must be populated in all subsequent requests on this connection.
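
As a sketch, the r1..r5 sequence above could be built like this, assuming stream_name_1 and stream_name_2 hold valid write stream resource names, batch_1 through batch_5 are prebuilt ProtoData payloads, and client is a BigQueryWrite client:

requests = [
  { write_stream: stream_name_1, proto_rows: batch_1 }, # r1: first request names the stream
  { proto_rows: batch_2 },                              # r2: same destination, field omitted
  { proto_rows: batch_3 },                              # r3: still the same destination
  { write_stream: stream_name_2, proto_rows: batch_4 }, # r4: destination changed
  { write_stream: stream_name_2, proto_rows: batch_5 }  # r5: must remain populated
].map { |attrs| Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.new(**attrs) }

client.append_rows requests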