Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
A streamed result set consists of a stream of values, which
might be split into many PartialResultSet messages to
accommodate large rows and/or large values. Every N complete
values defines a row, where N is equal to the number of entries
in [metadata.row_type.fields][google.spanner.v1.StructType.fields].
Most values are encoded based on type as described
[here][google.spanner.v1.TypeCode].

It is possible that the last value in values is "chunked",
meaning that the rest of the value is sent in subsequent
PartialResultSet message(s). This is denoted by the
[chunked_value][google.spanner.v1.PartialResultSet.chunked_value]
field. Two or more chunked values can be merged to form a
complete value as follows:

-  bool/number/null: cannot be chunked
-  string: concatenate the strings
-  list: concatenate the lists. If the last element in a list is a
   string, list, or object, merge it with the first element in the
   next list by applying these rules recursively.
-  object: concatenate the (field name, field value) pairs. If a
   field name is duplicated, then apply these rules recursively to
   merge the field values.

Some examples of merging:

::

   # Strings are concatenated.
   "foo", "bar" => "foobar"

   # Lists of non-strings are concatenated.
   [2, 3], [4] => [2, 3, 4]

   # Lists are concatenated, but the last and first elements are merged
   # because they are strings.
   ["a", "b"], ["c", "d"] => ["a", "bc", "d"]

   # Lists are concatenated, but the last and first elements are merged
   # because they are lists. Recursively, the last and first elements
   # of the inner lists are merged because they are strings.
   ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]

   # Non-overlapping object fields are combined.
   {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}

   # Overlapping object fields are merged.
   {"a": "1"}, {"a": "2"} => {"a": "12"}

   # Examples of merging objects containing lists of strings.
   {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
For a more complete example, suppose a streaming SQL query is
yielding a result set whose rows contain a single string field.
The following PartialResultSet messages might be yielded:

::

   {
     "metadata": { ... }
     "values": ["Hello", "W"]
     "chunked_value": true
     "resume_token": "Af65..."
   }
   {
     "values": ["orl"]
     "chunked_value": true
     "resume_token": "Bqp2..."
   }
   {
     "values": ["d"]
     "resume_token": "Zx1B..."
   }

This sequence of PartialResultSet messages encodes two rows: one
containing the field value "Hello", and a second containing the
field value "World" = "W" + "orl" + "d".
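Continuing this example, a consumer can reassemble the rows by
tracking whether the previous message ended mid-value and merging
across message boundaries. The sketch below reuses the hypothetical
merge_chunks helper from above, with plain dicts standing in for the
three messages:

::

   partials = [
       {"values": ["Hello", "W"], "chunked_value": True},
       {"values": ["orl"], "chunked_value": True},
       {"values": ["d"], "chunked_value": False},
   ]

   values = []            # completed values, in order
   pending_chunk = False  # did the previous message end mid-value?
   for part in partials:
       incoming = list(part["values"])
       if pending_chunk and incoming:
           # Finish the value that was split across message boundaries.
           incoming[0] = merge_chunks(values.pop(), incoming[0])
       values.extend(incoming)
       pending_chunk = part["chunked_value"]

   # metadata.row_type has a single field here, so each value is a row.
   rows = [[v] for v in values]
   print(rows)  # [['Hello'], ['World']]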
Streaming calls might be interrupted for a variety of reasons,
such as TCP connection loss. If this occurs, the stream of results
can be resumed by re-sending the original request and including
resume_token. Note that executing any other transaction in the
same session invalidates the token.
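As a rough illustration of resumption with the low-level generated
client, a caller might re-send the same ExecuteSqlRequest carrying
the most recent resume_token after a transient failure. In the
sketch below, session_name and process are placeholders (an existing
session resource name and your own per-message handling); the
higher-level database/snapshot API in this library typically
performs this bookkeeping, along with the chunk merging above, for
you:

::

   from google.api_core import exceptions
   from google.cloud import spanner_v1

   client = spanner_v1.SpannerClient()
   request = spanner_v1.ExecuteSqlRequest(
       session=session_name,  # placeholder: existing session resource name
       sql="SELECT SingerId, FullName FROM Singers",
   )

   resume_token = b""
   while True:
       request.resume_token = resume_token
       try:
           for partial in client.execute_streaming_sql(request=request):
               process(partial)  # placeholder: handle each PartialResultSet
               if partial.resume_token:
                   resume_token = partial.resume_token
           break  # stream completed normally
       except exceptions.ServiceUnavailable:
           # Transient interruption (e.g. lost TCP connection): re-send the
           # original request with the latest resume_token.
           continue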