Class SchemaAwareStreamWriter<T> (2.38.0)

public class SchemaAwareStreamWriter<T> implements AutoCloseable

A StreamWriter that can write data to BigQuery tables. The SchemaAwareStreamWriter is built on top of a StreamWriter: it converts all data to Protobuf messages using the provided converter, then calls StreamWriter's append() method to write to BigQuery tables. It retains all StreamWriter functionality and adds one feature: schema update support. If the BigQuery table schema is updated, users are able to ingest data on the new schema after some time (on the order of minutes).
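The convert-then-append flow described above can be sketched with a self-contained toy model. The ToyConverter and ToyStreamWriter types below are illustrative stand-ins, not the real com.google.cloud.bigquery.storage.v1 classes, and the real ToProtoConverter produces protobuf messages rather than strings:

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineSketch {
    // Stand-in for ToProtoConverter<T>: the real one produces protobuf messages.
    interface ToyConverter<T> {
        String convertToProtoMessage(T item);
    }

    // Stand-in for the underlying StreamWriter: records what would be appended.
    static class ToyStreamWriter {
        final List<String> sent = new ArrayList<>();
        void append(List<String> messages) { sent.addAll(messages); }
    }

    // Mirrors the shape of SchemaAwareStreamWriter.append(Iterable<T>): convert
    // every item with the converter, then delegate the batch to the writer.
    static <T> List<String> appendAll(Iterable<T> items, ToyConverter<T> converter,
                                      ToyStreamWriter writer) {
        List<String> converted = new ArrayList<>();
        for (T item : items) {
            converted.add(converter.convertToProtoMessage(item));
        }
        writer.append(converted);
        return writer.sent;
    }

    public static void main(String[] args) {
        ToyStreamWriter writer = new ToyStreamWriter();
        List<String> sent = appendAll(List.of(1, 2, 3), i -> "row{value=" + i + "}", writer);
        System.out.println(sent); // [row{value=1}, row{value=2}, row{value=3}]
    }
}
```

The real class additionally watches for schema update events and transparently recreates the underlying StreamWriter with the new TableSchema, as described for the append() methods below.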

Inheritance

java.lang.Object > SchemaAwareStreamWriter<T>

Implements

AutoCloseable

Type Parameter

T

Static Methods

<T>newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)

public static SchemaAwareStreamWriter.Builder<T> <T>newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)

Constructs a SchemaAwareStreamWriter builder whose TableSchema is initialized by the StreamWriter by default.

Parameters

streamOrTableName (String): name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+"

client (BigQueryWriteClient): the BigQueryWriteClient

toProtoConverter (ToProtoConverter<T>)

Returns

SchemaAwareStreamWriter.Builder<T>

<T>newBuilder(String streamOrTableName, TableSchema tableSchema, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)

public static SchemaAwareStreamWriter.Builder<T> <T>newBuilder(String streamOrTableName, TableSchema tableSchema, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)

Constructs a SchemaAwareStreamWriter builder.

The table schema passed in is updated automatically whenever a schema update event occurs. At Writer creation time it should be the latest schema, so if you are reusing an existing stream, use newBuilder(String streamOrTableName, BigQueryWriteClient client) instead; the created Writer will then be based on a fresh schema.

Parameters

streamOrTableName (String): name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+"

tableSchema (TableSchema): the schema of the table when the stream was created, which is passed back through WriteStream

client (BigQueryWriteClient)

toProtoConverter (ToProtoConverter<T>)

Returns

SchemaAwareStreamWriter.Builder<T>

<T>newBuilder(String streamOrTableName, TableSchema tableSchema, ToProtoConverter<T> toProtoConverter)

public static SchemaAwareStreamWriter.Builder<T> <T>newBuilder(String streamOrTableName, TableSchema tableSchema, ToProtoConverter<T> toProtoConverter)

Constructs a SchemaAwareStreamWriter builder whose BigQuery client is initialized by the StreamWriter by default.

The table schema passed in is updated automatically whenever a schema update event occurs. At Writer creation time it should be the latest schema, so if you are reusing an existing stream, use newBuilder(String streamOrTableName, BigQueryWriteClient client) instead; the created Writer will then be based on a fresh schema.

Parameters

streamOrTableName (String): name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+", or a table name following "projects/[^/]+/datasets/[^/]+/tables/[^/]+"

tableSchema (TableSchema): the schema of the table when the stream was created, which is passed back through WriteStream

toProtoConverter (ToProtoConverter<T>)

Returns

SchemaAwareStreamWriter.Builder<T>
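The accepted name formats above can be checked with plain java.util.regex. The patterns are taken verbatim from the parameter descriptions; the helper class and method names are illustrative:

```java
import java.util.regex.Pattern;

public class NameFormats {
    // Patterns quoted in the streamOrTableName parameter descriptions.
    static final Pattern STREAM_NAME =
        Pattern.compile("projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+");
    static final Pattern TABLE_NAME =
        Pattern.compile("projects/[^/]+/datasets/[^/]+/tables/[^/]+");

    static boolean isStreamName(String name) {
        return STREAM_NAME.matcher(name).matches();
    }

    static boolean isTableName(String name) {
        return TABLE_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isStreamName("projects/p/datasets/d/tables/t/streams/s")); // true
        System.out.println(isTableName("projects/p/datasets/d/tables/t"));            // true
        System.out.println(isStreamName("projects/p/datasets/d/tables/t"));           // false
    }
}
```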

Methods

append(Iterable<T> items)

public ApiFuture<AppendRowsResponse> append(Iterable<T> items)

Writes a collection that contains objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the current end of the stream. If there is a schema update, the current StreamWriter is closed and a new StreamWriter is created with the updated TableSchema.

Parameter

items (Iterable<T>): the collection that contains objects to be written

Returns

ApiFuture<AppendRowsResponse>: an AppendRowsResponse message wrapped in an ApiFuture

Exceptions

IOException
DescriptorValidationException

append(Iterable<T> items, long offset)

public ApiFuture<AppendRowsResponse> append(Iterable<T> items, long offset)

Writes a collection that contains objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the specified offset. If there is a schema update, the current StreamWriter is closed and a new StreamWriter is created with the updated TableSchema.

Parameters

items (Iterable<T>): the collection that contains objects to be written

offset (long): offset for deduplication

Returns

ApiFuture<AppendRowsResponse>: an AppendRowsResponse message wrapped in an ApiFuture

Exceptions

IOException
DescriptorValidationException
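The offset parameter enables deduplication: the service accepts an append at a given offset only when it matches the current end of the stream, so a retried duplicate is rejected rather than written twice. A self-contained toy model of that check (illustrative only; the real check happens server-side in the BigQuery service, which signals a mismatch via an error rather than a boolean):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetDedupSketch {
    // Toy stream that deduplicates by offset: an append is accepted only if
    // its offset equals the current end of the stream.
    static class ToyStream {
        final List<String> rows = new ArrayList<>();

        boolean append(List<String> batch, long offset) {
            if (offset != rows.size()) {
                return false; // duplicate or out-of-order append is rejected
            }
            rows.addAll(batch);
            return true;
        }
    }

    public static void main(String[] args) {
        ToyStream stream = new ToyStream();
        System.out.println(stream.append(List.of("a", "b"), 0)); // true: accepted
        System.out.println(stream.append(List.of("a", "b"), 0)); // false: retry deduplicated
        System.out.println(stream.append(List.of("c"), 2));      // true: next offset
    }
}
```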

close()

public void close()

Closes the underlying StreamWriter.

getDescriptor()

public Descriptors.Descriptor getDescriptor()

Gets the current descriptor.

Returns

Descriptors.Descriptor

getInflightWaitSeconds()

public long getInflightWaitSeconds()

Returns the time a request waited on the client side before being sent to the server. A request can wait on the client because it has reached the client-side inflight request limit (adjustable when constructing the Writer). The value is the wait time of the most recently sent request. A consistently high wait value indicates a need for more throughput: create a new Stream in the exclusive-stream case, or create a new Writer in the default-stream case.

Returns

long

getLocation()

public String getLocation()

Gets the location of the destination.

Returns

String: the location of the destination

getMissingValueInterpretationMap()

public Map<String,AppendRowsRequest.MissingValueInterpretation> getMissingValueInterpretationMap()
Returns

Map<String,AppendRowsRequest.MissingValueInterpretation>: the missing value interpretation map used by the writer

getStreamName()

public String getStreamName()
Returns

String: the name of the write stream associated with this writer

getWriterId()

public String getWriterId()
Returns

String: a unique Id for this writer

isClosed()

public boolean isClosed()
Returns

boolean: whether the writer can no longer be used for writing, either because the SchemaAwareStreamWriter was explicitly closed or because the underlying connection broke while no connection pool was in use. The client should recreate the SchemaAwareStreamWriter in this case.

isUserClosed()

public boolean isUserClosed()
Returns

boolean: whether the user explicitly closed the writer

setMissingValueInterpretationMap(Map<String,AppendRowsRequest.MissingValueInterpretation> missingValueInterpretationMap)

public void setMissingValueInterpretationMap(Map<String,AppendRowsRequest.MissingValueInterpretation> missingValueInterpretationMap)

Sets the missing value interpretation map for the SchemaAwareStreamWriter. The input missingValueInterpretationMap is used for all append requests unless otherwise changed.

Parameter

missingValueInterpretationMap (Map<String,AppendRowsRequest.MissingValueInterpretation>): the missing value interpretation map used by the SchemaAwareStreamWriter
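Conceptually, the map tells the service how to fill in columns that an append request omits. A pure-JDK sketch of that per-column lookup (the enum and the default-value table here are illustrative stand-ins; the real AppendRowsRequest.MissingValueInterpretation enum lives in the client library, and the actual fill-in is performed server-side):

```java
import java.util.Map;

public class MissingValueSketch {
    // Stand-in for AppendRowsRequest.MissingValueInterpretation.
    enum Interpretation { NULL_VALUE, DEFAULT_VALUE }

    // Illustrative resolution: a missing column mapped to DEFAULT_VALUE takes
    // the table's declared default; otherwise it is written as null.
    static String resolveMissing(String column,
                                 Map<String, Interpretation> interpretations,
                                 Map<String, String> columnDefaults) {
        if (interpretations.getOrDefault(column, Interpretation.NULL_VALUE)
                == Interpretation.DEFAULT_VALUE) {
            return columnDefaults.get(column);
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Interpretation> map =
            Map.of("created_at", Interpretation.DEFAULT_VALUE);
        Map<String, String> defaults =
            Map.of("created_at", "CURRENT_TIMESTAMP()");
        System.out.println(resolveMissing("created_at", map, defaults)); // CURRENT_TIMESTAMP()
        System.out.println(resolveMissing("name", map, defaults));       // null
    }
}
```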