Reference documentation and code samples for the Cloud Spanner V1 API class Google::Cloud::Spanner::V1::PartialResultSet.
Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
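For example, the ExecuteStreamingSql RPC returns an enumerable of PartialResultSet messages. The following is a minimal consumption sketch with the low-level V1 client; the session path, SQL text, and column names are placeholders.

require "google/cloud/spanner/v1"

# Minimal sketch: stream a SQL query with the generated V1 client. The
# session path and SQL below are placeholders.
client = Google::Cloud::Spanner::V1::Spanner::Client.new

request = Google::Cloud::Spanner::V1::ExecuteSqlRequest.new(
  session: "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session",
  sql:     "SELECT SingerId, SingerName FROM Singers"
)

# execute_streaming_sql returns an Enumerable of PartialResultSet messages.
client.execute_streaming_sql(request).each do |partial|
  # metadata appears only on the first response; values arrive in batches
  # and the final value of a batch may be chunked (see #chunked_value).
  puts partial.values.inspect
end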
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#chunked_value
def chunked_value() -> ::Boolean
- (::Boolean) — If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.
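For instance, a consumer might hold back the last value of a response while this flag is set and merge it with the first value of the next response. A minimal sketch for string values only; partial_results stands for any enumerable of PartialResultSet messages and handle_complete_value is a hypothetical callback.

# Sketch: hold back the final value while chunked_value is true and merge it
# with the first value of the next response. Only the string case is shown;
# list and object chunks follow the merge rules documented on #values.
pending = nil

partial_results.each do |partial|
  values = partial.values.to_a
  unless values.empty?
    if pending
      values[0] = Google::Protobuf::Value.new(
        string_value: pending.string_value + values[0].string_value
      )
      pending = nil
    end
    pending = values.pop if partial.chunked_value
  end
  values.each { |v| handle_complete_value v }  # hypothetical callback
end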
#chunked_value=
def chunked_value=(value) -> ::Boolean
- value (::Boolean) — If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.
- (::Boolean) — If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.
#metadata
def metadata() -> ::Google::Cloud::Spanner::V1::ResultSetMetadata
- (::Google::Cloud::Spanner::V1::ResultSetMetadata) — Metadata about the result set, such as row type information. Only present in the first response.
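For example, a consumer can capture the column names and type codes from the first response. A sketch, where partial_results stands for any enumerable of PartialResultSet messages:

# Sketch: the first PartialResultSet carries the row type; later responses
# leave metadata unset. Capture the column names and type codes from it.
columns = nil

partial_results.each do |partial|
  if columns.nil? && partial.metadata
    columns = partial.metadata.row_type.fields.map { |f| [f.name, f.type.code] }
    puts "Each row has #{columns.size} values: #{columns.inspect}"
  end
  # ... consume partial.values using the captured layout ...
end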
#metadata=
def metadata=(value) -> ::Google::Cloud::Spanner::V1::ResultSetMetadata
- value (::Google::Cloud::Spanner::V1::ResultSetMetadata) — Metadata about the result set, such as row type information. Only present in the first response.
- (::Google::Cloud::Spanner::V1::ResultSetMetadata) — Metadata about the result set, such as row type information. Only present in the first response.
#precommit_token
def precommit_token() -> ::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken
- (::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken) — Optional. A precommit token will be included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction. This feature is not yet supported and will result in an UNIMPLEMENTED error.
#precommit_token=
def precommit_token=(value) -> ::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken
- value (::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken) — Optional. A precommit token will be included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction. This feature is not yet supported and will result in an UNIMPLEMENTED error.
- (::Google::Cloud::Spanner::V1::MultiplexedSessionPrecommitToken) — Optional. A precommit token will be included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction. This feature is not yet supported and will result in an UNIMPLEMENTED error.
#resume_token
def resume_token() -> ::String
- (::String) — Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.
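A sketch of a resume loop, assuming the client and session_name from the overview example and assuming transient stream failures surface as Google::Cloud::Error during enumeration; `process` is a hypothetical handler:

# Sketch: remember the latest non-empty resume_token and re-send the original
# request with it if the stream is interrupted. A real client would bound the
# number of retries. session_name is a placeholder.
request = Google::Cloud::Spanner::V1::ExecuteSqlRequest.new(
  session: session_name,
  sql:     "SELECT SingerName FROM Singers"
)

begin
  client.execute_streaming_sql(request).each do |partial|
    process partial  # hypothetical handler for each response
    # Resuming with this token continues the stream after this response.
    request.resume_token = partial.resume_token unless partial.resume_token.empty?
  end
rescue ::Google::Cloud::Error
  retry
end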
#resume_token=
def resume_token=(value) -> ::String
- value (::String) — Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.
- (::String) — Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.
#stats
def stats() -> ::Google::Cloud::Spanner::V1::ResultSetStats
- (::Google::Cloud::Spanner::V1::ResultSetStats) — Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream. This field will also be present in the last response for DML statements.
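For example, requesting PROFILE mode returns both rows and statistics, with the stats arriving only on the final message. A sketch, reusing the client and placeholder session_name from the overview example:

# Sketch: request execution statistics by setting query_mode to PROFILE.
# stats is populated only on the last response in the stream.
request = Google::Cloud::Spanner::V1::ExecuteSqlRequest.new(
  session:    session_name,
  sql:        "SELECT SingerName FROM Singers",
  query_mode: :PROFILE
)

last_stats = nil
client.execute_streaming_sql(request).each do |partial|
  last_stats = partial.stats if partial.stats
  # ... consume partial.values as usual ...
end

puts last_stats.query_stats.inspect if last_stats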
#stats=
def stats=(value) -> ::Google::Cloud::Spanner::V1::ResultSetStats
- value (::Google::Cloud::Spanner::V1::ResultSetStats) — Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream. This field will also be present in the last response for DML statements.
- (::Google::Cloud::Spanner::V1::ResultSetStats) — Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream. This field will also be present in the last response for DML statements.
#values
def values() -> ::Array<::Google::Protobuf::Value>
- (::Array<::Google::Protobuf::Value>) — A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.
Most values are encoded based on type, as described in TypeCode.
It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:
- bool/number/null: cannot be chunked
- string: concatenate the strings
- list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
- object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.
Some examples of merging:
# Strings are concatenated.
"foo", "bar" => "foobar"
# Lists of non-strings are concatenated.
[2, 3], [4] => [2, 3, 4]
# Lists are concatenated, but the last and first elements are merged
# because they are strings.
["a", "b"], ["c", "d"] => ["a", "bc", "d"]
# Lists are concatenated, but the last and first elements are merged
# because they are lists. Recursively, the last and first elements
# of the inner lists are merged because they are strings.
["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
# Non-overlapping object fields are combined.
{"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
# Overlapping object fields are merged.
{"a": "1"}, {"a": "2"} => {"a": "12"}
# Examples of merging objects containing lists of strings.
{"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:
{ "metadata": { ... }
  "values": ["Hello", "W"]
  "chunked_value": true
  "resume_token": "Af65..." }
{ "values": ["orl"]
  "chunked_value": true
  "resume_token": "Bqp2..." }
{ "values": ["d"]
  "resume_token": "Zx1B..." }
This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".
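The following sketch implements the reassembly described above, restricted to rows of string values; merging list and object chunks would apply the same rules recursively. partial_results is any enumerable of PartialResultSet messages, e.g. the return value of execute_streaming_sql.

# Sketch: rebuild complete rows of STRING columns from a stream of
# PartialResultSet messages. Only string values are merged here.
def reassemble_string_rows partial_results
  rows      = []
  buffer    = []   # complete values not yet grouped into a row
  carry     = nil  # trailing chunk waiting for its continuation
  row_width = nil

  partial_results.each do |partial|
    row_width ||= partial.metadata.row_type.fields.size if partial.metadata

    values = partial.values.map(&:string_value)
    unless values.empty?
      values[0] = carry + values[0] if carry          # merge the pending chunk
      carry = partial.chunked_value ? values.pop : nil
    end
    buffer.concat values

    while row_width&.positive? && buffer.size >= row_width
      rows << buffer.shift(row_width)
    end
  end
  rows
end

Applied to the three messages in the example above, this returns [["Hello"], ["World"]].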
#values=
def values=(value) -> ::Array<::Google::Protobuf::Value>
- value (::Array<::Google::Protobuf::Value>) — A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.
Most values are encoded based on type, as described in TypeCode.
It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:
- bool/number/null: cannot be chunked
- string: concatenate the strings
- list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
- object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.
Some examples of merging:
# Strings are concatenated.
"foo", "bar" => "foobar"
# Lists of non-strings are concatenated.
[2, 3], [4] => [2, 3, 4]
# Lists are concatenated, but the last and first elements are merged
# because they are strings.
["a", "b"], ["c", "d"] => ["a", "bc", "d"]
# Lists are concatenated, but the last and first elements are merged
# because they are lists. Recursively, the last and first elements
# of the inner lists are merged because they are strings.
["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
# Non-overlapping object fields are combined.
{"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
# Overlapping object fields are merged.
{"a": "1"}, {"a": "2"} => {"a": "12"}
# Examples of merging objects containing lists of strings.
{"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:
{ "metadata": { ... }
  "values": ["Hello", "W"]
  "chunked_value": true
  "resume_token": "Af65..." }
{ "values": ["orl"]
  "chunked_value": true
  "resume_token": "Bqp2..." }
{ "values": ["d"]
  "resume_token": "Zx1B..." }
This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".
- (::Array<::Google::Protobuf::Value>) — A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields.
Most values are encoded based on type, as described in TypeCode.
It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:
- bool/number/null: cannot be chunked
- string: concatenate the strings
- list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
- object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.
Some examples of merging:
# Strings are concatenated.
"foo", "bar" => "foobar"
# Lists of non-strings are concatenated.
[2, 3], [4] => [2, 3, 4]
# Lists are concatenated, but the last and first elements are merged
# because they are strings.
["a", "b"], ["c", "d"] => ["a", "bc", "d"]
# Lists are concatenated, but the last and first elements are merged
# because they are lists. Recursively, the last and first elements
# of the inner lists are merged because they are strings.
["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
# Non-overlapping object fields are combined.
{"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
# Overlapping object fields are merged.
{"a": "1"}, {"a": "2"} => {"a": "12"}
# Examples of merging objects containing lists of strings.
{"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:
{ "metadata": { ... }
  "values": ["Hello", "W"]
  "chunked_value": true
  "resume_token": "Af65..." }
{ "values": ["orl"]
  "chunked_value": true
  "resume_token": "Bqp2..." }
{ "values": ["d"]
  "resume_token": "Zx1B..." }
This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".