Class v1.BigQueryWriteClient (4.1.0)

BigQuery Write API.

The Write API can be used to write data to BigQuery.

For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api

Package

@google-cloud/bigquery-storage

Constructors

(constructor)(opts, gaxInstance)

constructor(opts?: ClientOptions, gaxInstance?: typeof gax | typeof gax.fallback);

Construct an instance of BigQueryWriteClient.

Parameters
NameDescription
opts ClientOptions
gaxInstance typeof gax | typeof fallback

loaded instance of google-gax. Useful if you need to avoid loading the default gRPC version and want to use the fallback HTTP implementation. Load only the fallback version and pass it to the constructor:

  const gax = require('google-gax/build/src/fallback'); // avoids loading google-gax with gRPC
  const client = new BigQueryWriteClient({fallback: 'rest'}, gax);

Properties

apiEndpoint

static get apiEndpoint(): string;

The DNS address for this API service - same as servicePath(), exists for compatibility reasons.

auth

auth: gax.GoogleAuth;

bigQueryWriteStub

bigQueryWriteStub?: Promise<{
        [name: string]: Function;
    }>;

descriptors

descriptors: Descriptors;

innerApiCalls

innerApiCalls: {
        [name: string]: Function;
    };

pathTemplates

pathTemplates: {
        [name: string]: gax.PathTemplate;
    };

port

static get port(): number;

The port for this API service.

scopes

static get scopes(): string[];

The scopes needed to make gRPC calls for every method defined in this service.

servicePath

static get servicePath(): string;

The DNS address for this API service.

warn

warn: (code: string, message: string, warnType?: string) => void;

Methods

appendRows(options)

appendRows(options?: CallOptions): gax.CancellableStream;

Appends data to the given stream.

If offset is specified, it is checked against the end of the stream. The server returns OUT_OF_RANGE in AppendRowsResponse if an attempt is made to append at an offset beyond the current end of the stream, or ALREADY_EXISTS if the provided offset has already been written to. The user can retry with an adjusted offset within the same RPC connection. If offset is not specified, the append happens at the current end of the stream.

The response contains an optional offset at which the append happened. No offset information will be returned for appends to a default stream.

Responses are received in the same order in which requests are sent. There will be one response for each successfully inserted request. Responses may optionally embed error information if the originating AppendRequest was not successfully processed.

The specifics of when successfully appended data is made visible to the table are governed by the type of stream:

* For COMMITTED streams (which includes the default stream), data is visible immediately upon successful append.

* For BUFFERED streams, data is made visible via a subsequent FlushRows rpc which advances a cursor to a newer offset in the stream.

* For PENDING streams, data is not made visible until the stream itself is finalized (via the FinalizeWriteStream rpc), and the stream is explicitly committed via the BatchCommitWriteStreams rpc.

Parameter
NameDescription
options CallOptions

Call options. See CallOptions for more details.

Returns
TypeDescription
gax.CancellableStream

{Stream} An object stream which is both readable and writable. It accepts objects representing AppendRowsRequest for the write() method, and will emit objects representing AppendRowsResponse on the 'data' event asynchronously. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#bi-directional-streaming) for more details and examples.

Example

  /**
   * This snippet has been automatically generated and should be regarded as a code template only.
   * It will require modifications to work.
   * It may require correct/in-range values for request initialization.
   * TODO(developer): Uncomment these variables before running the sample.
   */
  /**
   *  Required. The write_stream identifies the target of the append operation,
   *  and only needs to be specified as part of the first request on the gRPC
   *  connection. If provided for subsequent requests, it must match the value of
   *  the first request.
   *  For explicitly created write streams, the format is:
   *  * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
   *  For the special default stream, the format is:
   *  * `projects/{project}/datasets/{dataset}/tables/{table}/streams/_default`.
   */
  // const writeStream = 'abc123'
  /**
   *  If present, the write is only performed if the next append offset is the
   *  same as the provided value. If not present, the write is performed at the
   *  current end of stream. Specifying a value for this field is not allowed
   *  when calling AppendRows for the '_default' stream.
   */
  // const offset = {}
  /**
   *  Rows in proto format.
   */
  // const protoRows = {}
  /**
   *  Id set by client to annotate its identity. Only initial request setting is
   *  respected.
   */
  // const traceId = 'abc123'
  /**
   *  A map to indicate how to interpret missing value for some fields. Missing
   *  values are fields present in user schema but missing in rows. The key is
   *  the field name. The value is the interpretation of missing values for the
   *  field.
   *  For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all
   *  missing values in field foo are interpreted as NULL, all missing values in
   *  field bar are interpreted as the default value of field bar in table
   *  schema.
   *  If a field is not in this map and has missing values, the missing values
   *  in this field are interpreted as NULL.
   *  This field only applies to the current request, it won't affect other
   *  requests on the connection.
   *  Currently, field name can only be top-level column name, can't be a struct
   *  field path like 'foo.bar'.
   */
  // const missingValueInterpretations = 1234

  // Imports the Storage library
  const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

  // Instantiates a client
  const storageClient = new BigQueryWriteClient();

  async function callAppendRows() {
    // Construct request
    const request = {
      writeStream,
    };

    // Run request
    const stream = await storageClient.appendRows();
    stream.on('data', (response) => { console.log(response) });
    stream.on('error', (err) => { throw(err) });
    stream.on('end', () => { /* API call completed */ });
    stream.write(request);
    stream.end(); 
  }

  callAppendRows();
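The COMMITTED/BUFFERED/PENDING workflow described above spans several RPCs. Below is a minimal sketch of the request objects involved in the PENDING-stream lifecycle, built offline with hypothetical placeholder resource names; field names follow the v1 protos, but verify them against your installed version before use:

```javascript
// Sketch of the request objects for the PENDING-stream lifecycle.
// All resource names below are hypothetical placeholders.
const parent = 'projects/my-project/datasets/my_dataset/tables/my_table';

// 1. CreateWriteStream: request a PENDING stream on the parent table.
const createRequest = {
  parent,
  writeStream: {type: 'PENDING'},
};

// 2. AppendRows: the first request on the connection names the stream;
//    protoRows carries the writer schema plus the serialized rows.
const appendRequest = {
  writeStream: `${parent}/streams/some-stream-id`,
  protoRows: {/* writerSchema and serialized rows go here */},
};

// 3. FinalizeWriteStream: no further appends are accepted afterwards.
const finalizeRequest = {name: `${parent}/streams/some-stream-id`};

// 4. BatchCommitWriteStreams: atomically commits the finalized streams,
//    making their data visible for reads.
const commitRequest = {
  parent,
  writeStreams: [`${parent}/streams/some-stream-id`],
};

console.log(commitRequest.writeStreams.length);
```

Each of these objects would be passed to the corresponding client method (createWriteStream, appendRows via stream.write(), finalizeWriteStream, batchCommitWriteStreams) in that order.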

batchCommitWriteStreams(request, options)

batchCommitWriteStreams(request?: protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest, options?: CallOptions): Promise<[
        protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse,
        (protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | undefined),
        {} | undefined
    ]>;

Atomically commits a group of PENDING streams that belong to the same parent table.

Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations.

Parameters
NameDescription
request IBatchCommitWriteStreamsRequest

The request object that will be sent.

options CallOptions

Call options. See CallOptions for more details.

Returns
TypeDescription
Promise<[ protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, (protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | undefined), {} | undefined ]>

{Promise} - The promise which resolves to an array. The first element of the array is an object representing BatchCommitWriteStreamsResponse. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples.

Example

  /**
   * This snippet has been automatically generated and should be regarded as a code template only.
   * It will require modifications to work.
   * It may require correct/in-range values for request initialization.
   * TODO(developer): Uncomment these variables before running the sample.
   */
  /**
   *  Required. Parent table that all the streams should belong to, in the form
   *  of `projects/{project}/datasets/{dataset}/tables/{table}`.
   */
  // const parent = 'abc123'
  /**
   *  Required. The group of streams that will be committed atomically.
   */
  // const writeStreams = 'abc123'

  // Imports the Storage library
  const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

  // Instantiates a client
  const storageClient = new BigQueryWriteClient();

  async function callBatchCommitWriteStreams() {
    // Construct request
    const request = {
      parent,
      writeStreams,
    };

    // Run request
    const response = await storageClient.batchCommitWriteStreams(request);
    console.log(response);
  }

  callBatchCommitWriteStreams();

batchCommitWriteStreams(request, options, callback)

batchCommitWriteStreams(request: protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IBatchCommitWriteStreamsRequest
options CallOptions
callback Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

batchCommitWriteStreams(request, callback)

batchCommitWriteStreams(request: protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IBatchCommitWriteStreamsRequest
callback Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

close()

close(): Promise<void>;

Terminate the gRPC channel and close the client.

The client will no longer be usable and all future behavior is undefined.

Returns
TypeDescription
Promise<void>

{Promise} A promise that resolves when the client is closed.

createWriteStream(request, options)

createWriteStream(request?: protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest, options?: CallOptions): Promise<[
        protos.google.cloud.bigquery.storage.v1.IWriteStream,
        (protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | undefined),
        {} | undefined
    ]>;

Creates a write stream to the given table. Additionally, every table has a special stream named '_default' to which data can be written. This stream doesn't need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received.

Parameters
NameDescription
request ICreateWriteStreamRequest

The request object that will be sent.

options CallOptions

Call options. See CallOptions for more details.

Returns
TypeDescription
Promise<[ protos.google.cloud.bigquery.storage.v1.IWriteStream, (protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | undefined), {} | undefined ]>

{Promise} - The promise which resolves to an array. The first element of the array is an object representing WriteStream. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples.

Example

  /**
   * This snippet has been automatically generated and should be regarded as a code template only.
   * It will require modifications to work.
   * It may require correct/in-range values for request initialization.
   * TODO(developer): Uncomment these variables before running the sample.
   */
  /**
   *  Required. Reference to the table to which the stream belongs, in the format
   *  of `projects/{project}/datasets/{dataset}/tables/{table}`.
   */
  // const parent = 'abc123'
  /**
   *  Required. Stream to be created.
   */
  // const writeStream = {}

  // Imports the Storage library
  const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

  // Instantiates a client
  const storageClient = new BigQueryWriteClient();

  async function callCreateWriteStream() {
    // Construct request
    const request = {
      parent,
      writeStream,
    };

    // Run request
    const response = await storageClient.createWriteStream(request);
    console.log(response);
  }

  callCreateWriteStream();
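As the description above notes, the special '_default' stream never needs a CreateWriteStream call; a caller only needs its resource name. A standalone sketch of deriving that name from a table's identifiers (the project/dataset/table values are hypothetical, and defaultStreamName is an illustrative helper, not part of the client):

```javascript
// Build the resource name of a table's special '_default' write stream.
// No CreateWriteStream call is needed for this stream; data written to it
// is committed as soon as an acknowledgement is received.
function defaultStreamName(project, dataset, table) {
  return `projects/${project}/datasets/${dataset}/tables/${table}/streams/_default`;
}

const name = defaultStreamName('my-project', 'my_dataset', 'my_table');
console.log(name);
```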

createWriteStream(request, options, callback)

createWriteStream(request: protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request ICreateWriteStreamRequest
options CallOptions
callback Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

createWriteStream(request, callback)

createWriteStream(request: protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request ICreateWriteStreamRequest
callback Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

finalizeWriteStream(request, options)

finalizeWriteStream(request?: protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest, options?: CallOptions): Promise<[
        protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse,
        (protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | undefined),
        {} | undefined
    ]>;

Finalize a write stream so that no new data can be appended to the stream. Finalize is not supported on the '_default' stream.

Parameters
NameDescription
request IFinalizeWriteStreamRequest

The request object that will be sent.

options CallOptions

Call options. See CallOptions for more details.

Returns
TypeDescription
Promise<[ protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, (protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | undefined), {} | undefined ]>

{Promise} - The promise which resolves to an array. The first element of the array is an object representing FinalizeWriteStreamResponse. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples.

Example

  /**
   * This snippet has been automatically generated and should be regarded as a code template only.
   * It will require modifications to work.
   * It may require correct/in-range values for request initialization.
   * TODO(developer): Uncomment these variables before running the sample.
   */
  /**
   *  Required. Name of the stream to finalize, in the form of
   *  `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`.
   */
  // const name = 'abc123'

  // Imports the Storage library
  const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

  // Instantiates a client
  const storageClient = new BigQueryWriteClient();

  async function callFinalizeWriteStream() {
    // Construct request
    const request = {
      name,
    };

    // Run request
    const response = await storageClient.finalizeWriteStream(request);
    console.log(response);
  }

  callFinalizeWriteStream();

finalizeWriteStream(request, options, callback)

finalizeWriteStream(request: protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IFinalizeWriteStreamRequest
options CallOptions
callback Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

finalizeWriteStream(request, callback)

finalizeWriteStream(request: protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IFinalizeWriteStreamRequest
callback Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

flushRows(request, options)

flushRows(request?: protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest, options?: CallOptions): Promise<[
        protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse,
        protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | undefined,
        {} | undefined
    ]>;

Flushes rows to a BUFFERED stream.

If users are appending rows to a BUFFERED stream, a flush operation is required in order for the rows to become available for reading. A flush operation flushes the stream from any previously flushed offset up to the offset specified in the request.

Flush is not supported on the _default stream, since it is not BUFFERED.

Parameters
NameDescription
request IFlushRowsRequest

The request object that will be sent.

options CallOptions

Call options. See CallOptions for more details.

Returns
TypeDescription
Promise<[ protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | undefined, {} | undefined ]>

{Promise} - The promise which resolves to an array. The first element of the array is an object representing FlushRowsResponse. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples.

Example

  /**
   * This snippet has been automatically generated and should be regarded as a code template only.
   * It will require modifications to work.
   * It may require correct/in-range values for request initialization.
   * TODO(developer): Uncomment these variables before running the sample.
   */
  /**
   *  Required. The stream that is the target of the flush operation.
   */
  // const writeStream = 'abc123'
  /**
   *  Ending offset of the flush operation. Rows before this offset (including
   *  this offset) will be flushed.
   */
  // const offset = {}

  // Imports the Storage library
  const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

  // Instantiates a client
  const storageClient = new BigQueryWriteClient();

  async function callFlushRows() {
    // Construct request
    const request = {
      writeStream,
    };

    // Run request
    const response = await storageClient.flushRows(request);
    console.log(response);
  }

  callFlushRows();
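A minimal sketch of a FlushRows request that includes an explicit ending offset. The stream name is a hypothetical placeholder, and the offset is written in the Int64Value wrapper shape ({value: ...}) assumed from the v1 protos; confirm the exact field shape against your installed protos:

```javascript
// Sketch: flush a BUFFERED stream up to (and including) offset 41.
// The stream name is a hypothetical placeholder; the offset uses the
// Int64Value wrapper shape ({value: ...}) assumed from the v1 protos.
const flushRequest = {
  writeStream:
    'projects/my-project/datasets/my_dataset/tables/my_table/streams/my-stream',
  offset: {value: 41},
};

// Rows at offsets 0..41 become readable once the call succeeds.
// Calling FlushRows on the '_default' stream would be rejected,
// since that stream is not BUFFERED.
console.log(flushRequest.offset.value);
```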

flushRows(request, options, callback)

flushRows(request: protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IFlushRowsRequest
options CallOptions
callback Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

flushRows(request, callback)

flushRows(request: protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IFlushRowsRequest
callback Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

getProjectId()

getProjectId(): Promise<string>;
Returns
TypeDescription
Promise<string>

getProjectId(callback)

getProjectId(callback: Callback<string, undefined, undefined>): void;
Parameter
NameDescription
callback Callback<string, undefined, undefined>
Returns
TypeDescription
void

getWriteStream(request, options)

getWriteStream(request?: protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest, options?: CallOptions): Promise<[
        protos.google.cloud.bigquery.storage.v1.IWriteStream,
        (protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | undefined),
        {} | undefined
    ]>;

Gets information about a write stream.

Parameters
NameDescription
request IGetWriteStreamRequest

The request object that will be sent.

options CallOptions

Call options. See CallOptions for more details.

Returns
TypeDescription
Promise<[ protos.google.cloud.bigquery.storage.v1.IWriteStream, (protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | undefined), {} | undefined ]>

{Promise} - The promise which resolves to an array. The first element of the array is an object representing WriteStream. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples.

Example

  /**
   * This snippet has been automatically generated and should be regarded as a code template only.
   * It will require modifications to work.
   * It may require correct/in-range values for request initialization.
   * TODO(developer): Uncomment these variables before running the sample.
   */
  /**
   *  Required. Name of the stream to get, in the form of
   *  `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`.
   */
  // const name = 'abc123'
  /**
   *  Indicates whether to get the full or partial view of the WriteStream. If
   *  not set, the view returned will be basic.
   */
  // const view = {}

  // Imports the Storage library
  const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

  // Instantiates a client
  const storageClient = new BigQueryWriteClient();

  async function callGetWriteStream() {
    // Construct request
    const request = {
      name,
    };

    // Run request
    const response = await storageClient.getWriteStream(request);
    console.log(response);
  }

  callGetWriteStream();

getWriteStream(request, options, callback)

getWriteStream(request: protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IGetWriteStreamRequest
options CallOptions
callback Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

getWriteStream(request, callback)

getWriteStream(request: protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | null | undefined, {} | null | undefined>): void;
Parameters
NameDescription
request IGetWriteStreamRequest
callback Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | null | undefined, {} | null | undefined>
Returns
TypeDescription
void

initialize()

initialize(): Promise<{
        [name: string]: Function;
    }>;

Initialize the client. Performs asynchronous operations (such as authentication) and prepares the client. This function will be called automatically when any class method is called for the first time, but if you need to initialize it before calling an actual method, feel free to call initialize() directly.

You can await on this method if you want to make sure the client is initialized.

Returns
TypeDescription
Promise<{ [name: string]: Function; }>

{Promise} A promise that resolves to an authenticated service stub.

matchDatasetFromTableName(tableName)

matchDatasetFromTableName(tableName: string): string | number;

Parse the dataset from Table resource.

Parameter
NameDescription
tableName string

A fully-qualified path representing Table resource.

Returns
TypeDescription
string | number

{string} A string representing the dataset.

matchDatasetFromWriteStreamName(writeStreamName)

matchDatasetFromWriteStreamName(writeStreamName: string): string | number;

Parse the dataset from WriteStream resource.

Parameter
NameDescription
writeStreamName string

A fully-qualified path representing WriteStream resource.

Returns
TypeDescription
string | number

{string} A string representing the dataset.

matchLocationFromReadSessionName(readSessionName)

matchLocationFromReadSessionName(readSessionName: string): string | number;

Parse the location from ReadSession resource.

Parameter
NameDescription
readSessionName string

A fully-qualified path representing ReadSession resource.

Returns
TypeDescription
string | number

{string} A string representing the location.

matchLocationFromReadStreamName(readStreamName)

matchLocationFromReadStreamName(readStreamName: string): string | number;

Parse the location from ReadStream resource.

Parameter
NameDescription
readStreamName string

A fully-qualified path representing ReadStream resource.

Returns
TypeDescription
string | number

{string} A string representing the location.

matchProjectFromProjectName(projectName)

matchProjectFromProjectName(projectName: string): string | number;

Parse the project from Project resource.

Parameter
NameDescription
projectName string

A fully-qualified path representing Project resource.

Returns
TypeDescription
string | number

{string} A string representing the project.

matchProjectFromReadSessionName(readSessionName)

matchProjectFromReadSessionName(readSessionName: string): string | number;

Parse the project from ReadSession resource.

Parameter
NameDescription
readSessionName string

A fully-qualified path representing ReadSession resource.

Returns
TypeDescription
string | number

{string} A string representing the project.

matchProjectFromReadStreamName(readStreamName)

matchProjectFromReadStreamName(readStreamName: string): string | number;

Parse the project from ReadStream resource.

Parameter
NameDescription
readStreamName string

A fully-qualified path representing ReadStream resource.

Returns
TypeDescription
string | number

{string} A string representing the project.

matchProjectFromTableName(tableName)

matchProjectFromTableName(tableName: string): string | number;

Parse the project from Table resource.

Parameter
NameDescription
tableName string

A fully-qualified path representing Table resource.

Returns
TypeDescription
string | number

{string} A string representing the project.

matchProjectFromWriteStreamName(writeStreamName)

matchProjectFromWriteStreamName(writeStreamName: string): string | number;

Parse the project from WriteStream resource.

Parameter
NameDescription
writeStreamName string

A fully-qualified path representing WriteStream resource.

Returns
TypeDescription
string | number

{string} A string representing the project.

matchSessionFromReadSessionName(readSessionName)

matchSessionFromReadSessionName(readSessionName: string): string | number;

Parse the session from ReadSession resource.

Parameter
NameDescription
readSessionName string

A fully-qualified path representing ReadSession resource.

Returns
TypeDescription
string | number

{string} A string representing the session.

matchSessionFromReadStreamName(readStreamName)

matchSessionFromReadStreamName(readStreamName: string): string | number;

Parse the session from ReadStream resource.

Parameter
NameDescription
readStreamName string

A fully-qualified path representing ReadStream resource.

Returns
TypeDescription
string | number

{string} A string representing the session.

matchStreamFromReadStreamName(readStreamName)

matchStreamFromReadStreamName(readStreamName: string): string | number;

Parse the stream from ReadStream resource.

Parameter
NameDescription
readStreamName string

A fully-qualified path representing ReadStream resource.

Returns
TypeDescription
string | number

{string} A string representing the stream.

matchStreamFromWriteStreamName(writeStreamName)

matchStreamFromWriteStreamName(writeStreamName: string): string | number;

Parse the stream from WriteStream resource.

Parameter
NameDescription
writeStreamName string

A fully-qualified path representing WriteStream resource.

Returns
TypeDescription
string | number

{string} A string representing the stream.

matchTableFromTableName(tableName)

matchTableFromTableName(tableName: string): string | number;

Parse the table from Table resource.

Parameter
NameDescription
tableName string

A fully-qualified path representing Table resource.

Returns
TypeDescription
string | number

{string} A string representing the table.

matchTableFromWriteStreamName(writeStreamName)

matchTableFromWriteStreamName(writeStreamName: string): string | number;

Parse the table from WriteStream resource.

Parameter
NameDescription
writeStreamName string

A fully-qualified path representing WriteStream resource.

Returns
TypeDescription
string | number

{string} A string representing the table.

projectPath(project)

projectPath(project: string): string;

Return a fully-qualified project resource name string.

Parameter
NameDescription
project string
Returns
TypeDescription
string

{string} Resource name string.

readSessionPath(project, location, session)

readSessionPath(project: string, location: string, session: string): string;

Return a fully-qualified readSession resource name string.

Parameters
NameDescription
project string
location string
session string
Returns
TypeDescription
string

{string} Resource name string.

readStreamPath(project, location, session, stream)

readStreamPath(project: string, location: string, session: string, stream: string): string;

Return a fully-qualified readStream resource name string.

Parameters
NameDescription
project string
location string
session string
stream string
Returns
TypeDescription
string

{string} Resource name string.

tablePath(project, dataset, table)

tablePath(project: string, dataset: string, table: string): string;

Return a fully-qualified table resource name string.

Parameters
NameDescription
project string
dataset string
table string
Returns
TypeDescription
string

{string} Resource name string.

writeStreamPath(project, dataset, table, stream)

writeStreamPath(project: string, dataset: string, table: string, stream: string): string;

Return a fully-qualified writeStream resource name string.

Parameters
NameDescription
project string
dataset string
table string
stream string
Returns
TypeDescription
string

{string} Resource name string.
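The path builders (writeStreamPath and friends) and the matchXFromWriteStreamName parsers above are inverses of each other over the template projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}. A standalone sketch of the same round-trip, using hypothetical helper names rather than the client's own implementation:

```javascript
// Standalone sketch of the writeStream path-template round-trip.
// buildWriteStreamPath/parseWriteStreamPath are hypothetical helpers
// mirroring writeStreamPath() and the matchXFromWriteStreamName() parsers.
function buildWriteStreamPath(project, dataset, table, stream) {
  return `projects/${project}/datasets/${dataset}/tables/${table}/streams/${stream}`;
}

function parseWriteStreamPath(name) {
  const m = name.match(
    /^projects\/([^/]+)\/datasets\/([^/]+)\/tables\/([^/]+)\/streams\/([^/]+)$/
  );
  if (!m) throw new Error(`not a WriteStream resource name: ${name}`);
  const [, project, dataset, table, stream] = m;
  return {project, dataset, table, stream};
}

const name = buildWriteStreamPath('my-project', 'my_dataset', 'my_table', 's1');
const parts = parseWriteStreamPath(name);
console.log(parts.dataset); // the segment matchDatasetFromWriteStreamName would return
```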