Cloud Spanner V1 API - Class Google::Cloud::Spanner::V1::PartialResultSet (v1.3.0)

Reference documentation and code samples for the Cloud Spanner V1 API class Google::Cloud::Spanner::V1::PartialResultSet.

Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#chunked_value

def chunked_value() -> ::Boolean
Returns
  • (::Boolean) — If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.

#chunked_value=

def chunked_value=(value) -> ::Boolean
Parameter
  • value (::Boolean) — If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.
Returns
  • (::Boolean) — If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.

#metadata

def metadata() -> ::Google::Cloud::Spanner::V1::ResultSetMetadata
Returns
  • (::Google::Cloud::Spanner::V1::ResultSetMetadata) — Metadata about the result set, such as row type information. Only present in the first response.

#metadata=

def metadata=(value) -> ::Google::Cloud::Spanner::V1::ResultSetMetadata
Parameter
  • value (::Google::Cloud::Spanner::V1::ResultSetMetadata) — Metadata about the result set, such as row type information. Only present in the first response.
Returns
  • (::Google::Cloud::Spanner::V1::ResultSetMetadata) — Metadata about the result set, such as row type information. Only present in the first response.

#precommit_token

def precommit_token() -> ::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken
Returns
  • (::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken) — Optional. A precommit token will be included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction. This feature is not yet supported and will result in an UNIMPLEMENTED error.

#precommit_token=

def precommit_token=(value) -> ::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken
Parameter
  • value (::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken) — Optional. A precommit token will be included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction. This feature is not yet supported and will result in an UNIMPLEMENTED error.
Returns
  • (::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken) — Optional. A precommit token will be included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction. This feature is not yet supported and will result in an UNIMPLEMENTED error.

#resume_token

def resume_token() -> ::String
Returns
  • (::String) — Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.
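    The resume flow can be sketched as follows. `fetch_partial_result_sets` and `read_all_values` are hypothetical helpers, not part of the client library; the stub stands in for a real streaming call (such as `execute_streaming_sql`) so the retry logic is self-contained.

    ```ruby
    # Sketch only: resuming an interrupted stream with resume_token.
    # fetch_partial_result_sets is a hypothetical stub standing in for a
    # real streaming call; it yields hash-shaped PartialResultSets and
    # raises once to simulate a dropped connection.
    RESPONSES = [
      { values: ["Hello"], resume_token: "tok1" },
      { values: ["World"], resume_token: "tok2" }
    ].freeze

    $fail_once = true

    def fetch_partial_result_sets resume_token: nil
      start = 0
      if resume_token
        idx = RESPONSES.index { |r| r[:resume_token] == resume_token }
        start = idx + 1 if idx
      end
      RESPONSES[start..].each do |rs|
        yield rs
        if $fail_once
          $fail_once = false
          raise IOError, "connection lost" # simulated interruption
        end
      end
    end

    # Resume loop: re-send the request with the last resume_token seen,
    # so no responses are lost or duplicated after an interruption.
    def read_all_values
      values = []
      token  = nil
      begin
        fetch_partial_result_sets resume_token: token do |rs|
          values.concat rs[:values]
          token = rs[:resume_token]
        end
      rescue IOError
        retry
      end
      values
    end
    ```

    Tracking the most recent resume_token as responses arrive is what makes the retry cheap: the re-sent request skips everything already consumed.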

#resume_token=

def resume_token=(value) -> ::String
Parameter
  • value (::String) — Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.
Returns
  • (::String) — Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.

#stats

def stats() -> ::Google::Cloud::Spanner::V1::ResultSetStats
Returns
  • (::Google::Cloud::Spanner::V1::ResultSetStats) — Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream.

#stats=

def stats=(value) -> ::Google::Cloud::Spanner::V1::ResultSetStats
Parameter
  • value (::Google::Cloud::Spanner::V1::ResultSetStats) — Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream.
Returns
  • (::Google::Cloud::Spanner::V1::ResultSetStats) — Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream.

#values

def values() -> ::Array<::Google::Protobuf::Value>
Returns
  • (::Array<::Google::Protobuf::Value>) — A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.

    Most values are encoded based on type, as described in TypeCode.

    It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:

    • bool/number/null: cannot be chunked
    • string: concatenate the strings
    • list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
    • object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.

    Some examples of merging:

    # Strings are concatenated.
    "foo", "bar" => "foobar"
    
    # Lists of non-strings are concatenated.
    [2, 3], [4] => [2, 3, 4]
    
    # Lists are concatenated, but the last and first elements are merged
    # because they are strings.
    ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
    
    # Lists are concatenated, but the last and first elements are merged
    # because they are lists. Recursively, the last and first elements
    # of the inner lists are merged because they are strings.
    ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
    
    # Non-overlapping object fields are combined.
    {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
    
    # Overlapping object fields are merged.
    {"a": "1"}, {"a": "2"} => {"a": "12"}
    
    # Examples of merging objects containing lists of strings.
    {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
    

    For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:

    {
      "metadata": { ... },
      "values": ["Hello", "W"],
      "chunked_value": true,
      "resume_token": "Af65..."
    }
    {
      "values": ["orl"],
      "chunked_value": true,
      "resume_token": "Bqp2..."
    }
    {
      "values": ["d"],
      "resume_token": "Zx1B..."
    }
    

    This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".
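
    The merge rules above can be sketched in Ruby. `merge_values` is a hypothetical helper, not part of the client library; it assumes values have already been converted to plain Ruby strings, arrays, and hashes.

    ```ruby
    # Sketch only: one way to implement the chunk-merge rules above.
    # merge_values is a hypothetical helper, not part of the client library.
    def merge_values prev, nxt
      if prev.is_a?(String) && nxt.is_a?(String)
        prev + nxt # strings: concatenate
      elsif prev.is_a?(Array) && nxt.is_a?(Array)
        last_e  = prev.last
        first_e = nxt.first
        # lists: concatenate; if the boundary elements are both strings,
        # both lists, or both objects, merge them recursively
        if [String, Array, Hash].any? { |t| last_e.is_a?(t) && first_e.is_a?(t) }
          prev[0...-1] + [merge_values(last_e, first_e)] + nxt[1..]
        else
          prev + nxt
        end
      elsif prev.is_a?(Hash) && nxt.is_a?(Hash)
        # objects: union of fields; duplicated field names merge recursively
        prev.merge(nxt) { |_key, a, b| merge_values(a, b) }
      else
        raise ArgumentError, "bool/number/null values cannot be chunked"
      end
    end

    # The values arrays from the three PartialResultSets above;
    # chunked_value was true on the first two, so each array is
    # merged into the next one:
    chunks = [["Hello", "W"], ["orl"], ["d"]]
    values = chunks.reduce { |acc, vs| merge_values(acc, vs) }
    # => ["Hello", "World"]
    ```

    Note the recursion at list and object boundaries: it is what allows a deeply nested value to be split at an arbitrary point and still be reassembled exactly.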

#values=

def values=(value) -> ::Array<::Google::Protobuf::Value>
Parameter
  • value (::Array<::Google::Protobuf::Value>) — A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.

    Most values are encoded based on type, as described in TypeCode.

    It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:

    • bool/number/null: cannot be chunked
    • string: concatenate the strings
    • list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
    • object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.

    Some examples of merging:

    # Strings are concatenated.
    "foo", "bar" => "foobar"
    
    # Lists of non-strings are concatenated.
    [2, 3], [4] => [2, 3, 4]
    
    # Lists are concatenated, but the last and first elements are merged
    # because they are strings.
    ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
    
    # Lists are concatenated, but the last and first elements are merged
    # because they are lists. Recursively, the last and first elements
    # of the inner lists are merged because they are strings.
    ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
    
    # Non-overlapping object fields are combined.
    {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
    
    # Overlapping object fields are merged.
    {"a": "1"}, {"a": "2"} => {"a": "12"}
    
    # Examples of merging objects containing lists of strings.
    {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
    

    For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:

    {
      "metadata": { ... },
      "values": ["Hello", "W"],
      "chunked_value": true,
      "resume_token": "Af65..."
    }
    {
      "values": ["orl"],
      "chunked_value": true,
      "resume_token": "Bqp2..."
    }
    {
      "values": ["d"],
      "resume_token": "Zx1B..."
    }
    

    This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".

Returns
  • (::Array<::Google::Protobuf::Value>) — A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.

    Most values are encoded based on type, as described in TypeCode.

    It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:

    • bool/number/null: cannot be chunked
    • string: concatenate the strings
    • list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
    • object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.

    Some examples of merging:

    # Strings are concatenated.
    "foo", "bar" => "foobar"
    
    # Lists of non-strings are concatenated.
    [2, 3], [4] => [2, 3, 4]
    
    # Lists are concatenated, but the last and first elements are merged
    # because they are strings.
    ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
    
    # Lists are concatenated, but the last and first elements are merged
    # because they are lists. Recursively, the last and first elements
    # of the inner lists are merged because they are strings.
    ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
    
    # Non-overlapping object fields are combined.
    {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
    
    # Overlapping object fields are merged.
    {"a": "1"}, {"a": "2"} => {"a": "12"}
    
    # Examples of merging objects containing lists of strings.
    {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
    

    For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:

    {
      "metadata": { ... },
      "values": ["Hello", "W"],
      "chunked_value": true,
      "resume_token": "Af65..."
    }
    {
      "values": ["orl"],
      "chunked_value": true,
      "resume_token": "Bqp2..."
    }
    {
      "values": ["d"],
      "resume_token": "Zx1B..."
    }
    

    This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".