Package Methods (2.25.0)

Summary of entries of Methods for bigquerystorage.

google.cloud.bigquery_storage_v1.client.BigQueryReadClient

BigQueryReadClient(
    *,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    transport: typing.Optional[
        typing.Union[
            str,
            google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,
        ]
    ] = None,
    client_options: typing.Optional[
        typing.Union[google.api_core.client_options.ClientOptions, dict]
    ] = None,
    client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>,
)

Instantiates the BigQuery Read client.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.__exit__

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

Returns a fully-qualified billing_account string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_folder_path

common_folder_path(folder: str) -> str

Returns a fully-qualified folder string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_folder_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_location_path

common_location_path(project: str, location: str) -> str

Returns a fully-qualified location string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_location_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_organization_path

common_organization_path(organization: str) -> str

Returns a fully-qualified organization string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_organization_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_project_path

common_project_path(project: str) -> str

Returns a fully-qualified project string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_project_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.create_read_session

create_read_session(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    read_session: typing.Optional[
        google.cloud.bigquery_storage_v1.types.stream.ReadSession
    ] = None,
    max_stream_count: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.ReadSession

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_file

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Return the API endpoint and client cert source for mutual TLS.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.get_mtls_endpoint_and_cert_source

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

Parse a billing_account path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

Parse a folder path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_folder_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

Parse a location path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_location_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

Parse an organization path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_organization_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

Parse a project path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_project_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path: str) -> typing.Dict[str, str]

Parses a read_session path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_session_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path: str) -> typing.Dict[str, str]

Parses a read_stream path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_stream_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

Parses a table path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_table_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_rows

read_rows(
    name,
    offset=0,
    retry=_MethodDefault._DEFAULT_VALUE,
    timeout=_MethodDefault._DEFAULT_VALUE,
    metadata=(),
    retry_delay_callback=None,
)

Reads rows from the table in the format prescribed by the read session.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_rows

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_session_path

read_session_path(project: str, location: str, session: str) -> str

Returns a fully-qualified read_session string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_session_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str) -> str

Returns a fully-qualified read_stream string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_stream_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.split_read_stream

split_read_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest, dict
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.split_read_stream

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.table_path

table_path(project: str, dataset: str, table: str) -> str

Returns a fully-qualified table string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryReadClient.table_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient

BigQueryWriteClient(
    *,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    transport: typing.Optional[
        typing.Union[
            str,
            google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport,
        ]
    ] = None,
    client_options: typing.Optional[
        typing.Union[google.api_core.client_options.ClientOptions, dict]
    ] = None,
    client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>,
)

Instantiates the BigQuery Write client.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.__exit__

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.append_rows

append_rows(
    requests: typing.Optional[
        typing.Iterator[
            google.cloud.bigquery_storage_v1.types.storage.AppendRowsRequest
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Iterable[google.cloud.bigquery_storage_v1.types.storage.AppendRowsResponse]

Appends data to the given stream.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.append_rows

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsResponse

Atomically commits a group of PENDING streams that belong to the same parent table.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

Returns a fully-qualified billing_account string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_folder_path

common_folder_path(folder: str) -> str

Returns a fully-qualified folder string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_folder_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_location_path

common_location_path(project: str, location: str) -> str

Returns a fully-qualified location string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_location_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_organization_path

common_organization_path(organization: str) -> str

Returns a fully-qualified organization string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_organization_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_project_path

common_project_path(project: str) -> str

Returns a fully-qualified project string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_project_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.create_write_stream

create_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.CreateWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    write_stream: typing.Optional[
        google.cloud.bigquery_storage_v1.types.stream.WriteStream
    ] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.WriteStream

Creates a write stream to the given table.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.create_write_stream

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamResponse

Finalizes a write stream so that no new data can be appended to the stream.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.finalize_write_stream

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.flush_rows

flush_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.FlushRowsRequest, dict
        ]
    ] = None,
    *,
    write_stream: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.FlushRowsResponse

Flushes rows to a BUFFERED stream.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.flush_rows

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Return the API endpoint and client cert source for mutual TLS.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.get_write_stream

get_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.GetWriteStreamRequest, dict
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.WriteStream

Gets information about a write stream.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.get_write_stream

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

Parse a billing_account path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

Parse a folder path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_folder_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

Parse a location path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_location_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

Parse an organization path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_organization_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

Parse a project path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_project_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

Parses a table path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_table_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path: str) -> typing.Dict[str, str]

Parses a write_stream path into its component segments.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_write_stream_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.table_path

table_path(project: str, dataset: str, table: str) -> str

Returns a fully-qualified table string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.table_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.write_stream_path

write_stream_path(project: str, dataset: str, table: str, stream: str) -> str

Returns a fully-qualified write_stream string.

See more: google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.write_stream_path

google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.__iter__

__iter__()

Iterator for each row in all pages.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.__iter__

google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_arrow

to_arrow()

Create a pyarrow.Table of all rows in the stream.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_arrow

google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_dataframe

to_dataframe(dtypes=None)

Create a pandas.DataFrame of all rows in the stream.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_dataframe

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__iter__

__iter__()

A ReadRowsPage is an iterator.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__iter__

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__next__

__next__()

Get the next row in the page.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__next__

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.next

next()

Get the next row in the page.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsPage.next

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_arrow

to_arrow()

Create a pyarrow.RecordBatch of rows in the page.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_arrow

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_dataframe

to_dataframe(dtypes=None)

Create a pandas.DataFrame of rows in the page.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_dataframe

google.cloud.bigquery_storage_v1.reader.ReadRowsStream

ReadRowsStream(client, name, offset, read_rows_kwargs, retry_delay_callback=None)

Construct a ReadRowsStream.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsStream

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.__iter__

__iter__()

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.rows

rows(read_session=None)

Iterate over all rows in the stream.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsStream.rows

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_arrow

to_arrow(read_session=None)

Create a pyarrow.Table of all rows in the stream.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_arrow

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_dataframe

to_dataframe(read_session=None, dtypes=None)

Create a pandas.DataFrame of all rows in the stream.

See more: google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_dataframe

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient

BigQueryReadAsyncClient(
    *,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    transport: typing.Union[
        str,
        google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,
    ] = 'grpc_asyncio',
    client_options: typing.Optional[google.api_core.client_options.ClientOptions] = None,
    client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>,
)

Instantiates the BigQuery Read async client.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.create_read_session

create_read_session(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    read_session: typing.Optional[
        google.cloud.bigquery_storage_v1.types.stream.ReadSession
    ] = None,
    max_stream_count: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.ReadSession

Creates a new read session to a given table.

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Return the API endpoint and client cert source for mutual TLS.

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.get_transport_class

get_transport_class() -> (
    typing.Type[
        google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport
    ]
)

Returns an appropriate transport class.

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_read_session_path

parse_read_session_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_read_stream_path

parse_read_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_rows

read_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.ReadRowsRequest, dict
        ]
    ] = None,
    *,
    read_stream: typing.Optional[str] = None,
    offset: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Awaitable[
    typing.AsyncIterable[
        google.cloud.bigquery_storage_v1.types.storage.ReadRowsResponse
    ]
]

Reads rows from the stream in the format prescribed by the ReadSession.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_rows

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_session_path

read_session_path(project: str, location: str, session: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.split_read_stream

split_read_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest, dict
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.split_read_stream

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient

BigQueryReadClient(
    *,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    transport: typing.Optional[
        typing.Union[
            str,
            google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,
        ]
    ] = None,
    client_options: typing.Optional[
        typing.Union[google.api_core.client_options.ClientOptions, dict]
    ] = None,
    client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>,
)

Instantiates the BigQuery Read client.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.__exit__

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.create_read_session

create_read_session(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    read_session: typing.Optional[
        google.cloud.bigquery_storage_v1.types.stream.ReadSession
    ] = None,
    max_stream_count: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.ReadSession

Creates a new read session.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.create_read_session

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Returns the API endpoint and client cert source for mutual TLS.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.get_mtls_endpoint_and_cert_source
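Endpoint selection is driven by the GOOGLE_API_USE_MTLS_ENDPOINT environment variable ("never", "auto", or "always"). A hedged sketch of that decision, assuming the standard GAPIC behavior; the hostname constants are the real BigQuery Storage endpoints, but `choose_endpoint` is local to this sketch, not a client method:

```python
# Sketch of how the mTLS endpoint is chosen from GOOGLE_API_USE_MTLS_ENDPOINT:
# "always" forces the mTLS endpoint, "never" forces the regular one, and
# "auto" uses mTLS only when a client certificate source is available.
DEFAULT_ENDPOINT = "bigquerystorage.googleapis.com"
MTLS_ENDPOINT = "bigquerystorage.mtls.googleapis.com"

def choose_endpoint(use_mtls_env: str, have_client_cert: bool) -> str:
    if use_mtls_env == "always":
        return MTLS_ENDPOINT
    if use_mtls_env == "auto" and have_client_cert:
        return MTLS_ENDPOINT
    return DEFAULT_ENDPOINT

print(choose_endpoint("auto", True))
```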

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path: str) -> typing.Dict[str, str]

Parses a read_session path into its component segments.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_read_session_path

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path: str) -> typing.Dict[str, str]

Parses a read_stream path into its component segments.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_read_stream_path

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_rows

read_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.ReadRowsRequest, dict
        ]
    ] = None,
    *,
    read_stream: typing.Optional[str] = None,
    offset: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Iterable[google.cloud.bigquery_storage_v1.types.storage.ReadRowsResponse]

Reads rows from the stream in the format prescribed by the ReadSession.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_rows
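The offset argument to read_rows is the zero-based row to resume from within a single stream, which is what lets a reader recover from a dropped connection without re-reading everything. A local sketch of that resumption pattern, with a plain in-memory generator standing in for the server stream (nothing here touches the real API):

```python
# Sketch of the resumption pattern read_rows' offset enables: track how many
# rows were consumed, and on a transient failure reopen at that offset.
class TransientError(Exception):
    pass

def flaky_stream(rows, offset, fail_at):
    # Stand-in for a server stream that may fail partway through.
    for i, row in enumerate(rows[offset:], start=offset):
        if fail_at is not None and i == fail_at:
            raise TransientError()
        yield row

def read_all(rows, fail_at=None):
    consumed, offset = [], 0
    while True:
        try:
            for row in flaky_stream(rows, offset, fail_at):
                consumed.append(row)
                offset += 1
            return consumed
        except TransientError:
            fail_at = None  # retry, resuming at the recorded offset

print(read_all(["a", "b", "c", "d"], fail_at=2))
```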

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_session_path

read_session_path(project: str, location: str, session: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str) -> str
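Both builders are pure string formatters: a read stream's resource name is its parent read session's name plus a /streams/{stream} suffix. A local sketch of the two templates (helper names are stand-ins for the classmethods):

```python
# Local sketch of the read-session and read-stream resource-name formats
# produced by read_session_path and read_stream_path.
def read_session_path(project: str, location: str, session: str) -> str:
    return f"projects/{project}/locations/{location}/sessions/{session}"

def read_stream_path(project: str, location: str, session: str, stream: str) -> str:
    return f"{read_session_path(project, location, session)}/streams/{stream}"

print(read_stream_path("my-project", "us", "s1", "st1"))
```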

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.split_read_stream

split_read_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest, dict
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more: google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.split_read_stream

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.table_path

table_path(project: str, dataset: str, table: str) -> str
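table_path and parse_table_path are inverses over the table resource-name template. A local round-trip sketch (the regex-based parser is a stand-in for the real parse_table_path classmethod):

```python
import re

# Local sketch of the table resource-name format used by table_path and
# parse_table_path: projects/{project}/datasets/{dataset}/tables/{table}.
def table_path(project: str, dataset: str, table: str) -> str:
    return f"projects/{project}/datasets/{dataset}/tables/{table}"

def parse_table_path(path: str) -> dict:
    m = re.match(
        r"^projects/(?P<project>.+?)/datasets/(?P<dataset>.+?)/tables/(?P<table>.+?)$",
        path,
    )
    return m.groupdict() if m else {}

p = table_path("my-project", "my_dataset", "my_table")
print(parse_table_path(p))
```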

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient

BigQueryWriteAsyncClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Union[str, google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport] = 'grpc_asyncio', client_options: typing.Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

Instantiates the BigQuery write async client.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.append_rows

append_rows(
    requests: typing.Optional[
        typing.AsyncIterator[
            google.cloud.bigquery_storage_v1.types.storage.AppendRowsRequest
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Awaitable[
    typing.AsyncIterable[
        google.cloud.bigquery_storage_v1.types.storage.AppendRowsResponse
    ]
]

Appends data to the given stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.append_rows
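append_rows takes a stream of AppendRowsRequest messages over one bidirectional connection. By convention, the first request on the connection names the write_stream and carries the writer schema; subsequent requests may omit both and carry only row batches. A local sketch of that convention, with plain dicts standing in for the request messages:

```python
# Sketch of the AppendRows request-stream convention: only the first request
# carries the stream name and writer schema; later ones carry just rows.
def build_requests(stream_name, schema, row_batches):
    first = True
    for batch in row_batches:
        req = {"rows": batch}
        if first:
            req["write_stream"] = stream_name
            req["writer_schema"] = schema
            first = False
        yield req

reqs = list(build_requests(
    "projects/p/datasets/d/tables/t/streams/_default",
    "<schema>",
    [["r1"], ["r2"]],
))
print([sorted(r) for r in reqs])
```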

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

batch_commit_write_streams(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsResponse

Atomically commits a group of PENDING streams that belong to the same parent table.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams
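create_write_stream, append_rows, finalize_write_stream, and batch_commit_write_streams together implement the PENDING-stream workflow: rows appended to a PENDING stream stay invisible to readers until the stream is finalized and the batch is committed, at which point they become readable atomically. A local sketch of that lifecycle, with a dict standing in for server-side stream state:

```python
# Sketch of the PENDING write-stream lifecycle: append while PENDING,
# finalize to stop appends, batch-commit to make rows visible atomically.
def make_stream():
    return {"state": "PENDING", "rows": [], "visible": []}

def append(stream, rows):
    assert stream["state"] == "PENDING", "cannot append after finalize"
    stream["rows"].extend(rows)

def finalize(stream):
    stream["state"] = "FINALIZED"

def batch_commit(streams):
    for s in streams:
        assert s["state"] == "FINALIZED"
        s["visible"] = list(s["rows"])  # rows become readable here

s = make_stream()
append(s, ["a", "b"])
finalize(s)
batch_commit([s])
print(s["visible"])
```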

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.create_write_stream

create_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.CreateWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    write_stream: typing.Optional[
        google.cloud.bigquery_storage_v1.types.stream.WriteStream
    ] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.WriteStream

Creates a write stream to the given table.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.create_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

finalize_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamResponse

Finalizes a write stream so that no new data can be appended to the stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.flush_rows

flush_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.FlushRowsRequest, dict
        ]
    ] = None,
    *,
    write_stream: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.FlushRowsResponse

Flushes rows to a BUFFERED stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.flush_rows

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Returns the API endpoint and client cert source for mutual TLS.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_mtls_endpoint_and_cert_source

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_transport_class

get_transport_class() -> (
    typing.Type[
        google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport
    ]
)

Returns an appropriate transport class.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_transport_class

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_write_stream

get_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.GetWriteStreamRequest, dict
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.WriteStream

Gets information about a write stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_write_stream_path

parse_write_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.write_stream_path

write_stream_path(project: str, dataset: str, table: str, stream: str) -> str
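Like its read-side counterparts, this builder is a pure string formatter: a write stream's resource name is its parent table path plus a /streams/{stream} suffix. A local sketch (the function name stands in for the classmethod):

```python
# Local sketch of the write-stream resource-name format produced by
# write_stream_path: the table path plus a /streams/{stream} suffix.
def write_stream_path(project: str, dataset: str, table: str, stream: str) -> str:
    return (
        f"projects/{project}/datasets/{dataset}"
        f"/tables/{table}/streams/{stream}"
    )

print(write_stream_path("my-project", "d", "t", "_default"))
```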

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient

BigQueryWriteClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Optional[typing.Union[str, google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport]] = None, client_options: typing.Optional[typing.Union[google.api_core.client_options.ClientOptions, dict]] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

Instantiates the BigQuery write client.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.__exit__

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.append_rows

append_rows(
    requests: typing.Optional[
        typing.Iterator[
            google.cloud.bigquery_storage_v1.types.storage.AppendRowsRequest
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Iterable[google.cloud.bigquery_storage_v1.types.storage.AppendRowsResponse]

Appends data to the given stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.append_rows

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsResponse

Atomically commits a group of PENDING streams that belong to the same parent table.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.create_write_stream

create_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.CreateWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    write_stream: typing.Optional[
        google.cloud.bigquery_storage_v1.types.stream.WriteStream
    ] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.WriteStream

Creates a write stream to the given table.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.create_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamResponse

Finalizes a write stream so that no new data can be appended to the stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.finalize_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.flush_rows

flush_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.FlushRowsRequest, dict
        ]
    ] = None,
    *,
    write_stream: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.storage.FlushRowsResponse

Flushes rows to a BUFFERED stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.flush_rows

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Returns the API endpoint and client cert source for mutual TLS.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.get_write_stream

get_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1.types.storage.GetWriteStreamRequest, dict
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1.types.stream.WriteStream

Gets information about a write stream.

See more: google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.get_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.write_stream_path

write_stream_path(project: str, dataset: str, table: str, stream: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient

BigQueryReadClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Optional[typing.Union[str, google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport]] = None, client_options: typing.Optional[typing.Union[google.api_core.client_options.ClientOptions, dict]] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

Instantiates the BigQuery read client.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.__exit__

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

Returns a fully-qualified billing_account string.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_billing_account_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.create_read_session

create_read_session(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.CreateReadSessionRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    read_session: typing.Optional[
        google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession
    ] = None,
    max_stream_count: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession

Creates a new read session.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.create_read_session

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Returns the API endpoint and client cert source for mutual TLS.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.get_mtls_endpoint_and_cert_source

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

Parse a billing_account path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

Parse a folder path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_folder_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

Parse a location path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_location_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

Parse an organization path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_organization_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

Parse a project path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_project_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path: str) -> typing.Dict[str, str]

Parses a read_session path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_session_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path: str) -> typing.Dict[str, str]

Parses a read_stream path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_stream_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

Parses a table path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_table_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_rows

read_rows(
    name,
    offset=0,
    retry=_MethodDefault._DEFAULT_VALUE,
    timeout=_MethodDefault._DEFAULT_VALUE,
    metadata=(),
    retry_delay_callback=None,
)

Reads rows from the table in the format prescribed by the read session.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_rows

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_session_path

read_session_path(project: str, location: str, session: str) -> str

Returns a fully-qualified read_session string.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_session_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str) -> str

Returns a fully-qualified read_stream string.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_stream_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.split_read_stream

split_read_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamRequest,
            dict,
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.split_read_stream

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.table_path

table_path(project: str, dataset: str, table: str) -> str

Returns a fully-qualified table string.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.table_path
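
The path helpers are thin formatters over the resource-name template `projects/{project}/datasets/{dataset}/tables/{table}`, and the `parse_*` counterparts invert them. A stdlib-only sketch of the pair (not the library's implementation) makes the round trip explicit:

```python
import re

_TABLE_TEMPLATE = "projects/{project}/datasets/{dataset}/tables/{table}"
_TABLE_PATTERN = re.compile(
    r"^projects/(?P<project>[^/]+)/datasets/(?P<dataset>[^/]+)/tables/(?P<table>[^/]+)$"
)

def table_path(project: str, dataset: str, table: str) -> str:
    # Mirrors BigQueryReadClient.table_path: fill in the template.
    return _TABLE_TEMPLATE.format(project=project, dataset=dataset, table=table)

def parse_table_path(path: str) -> dict:
    # Mirrors parse_table_path: component segments, or {} on no match.
    m = _TABLE_PATTERN.match(path)
    return m.groupdict() if m else {}

path = table_path("my-project", "my_dataset", "my_table")
parts = parse_table_path(path)
```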

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient

BigQueryWriteClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Optional[typing.Union[str, google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport]] = None, client_options: typing.Optional[typing.Union[google.api_core.client_options.ClientOptions, dict]] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

Instantiates the big query write client.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.__exit__

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.append_rows

append_rows(
    requests: typing.Optional[
        typing.Iterator[
            google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Iterable[
    google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsResponse
]
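
append_rows consumes an iterator of AppendRowsRequest messages: the first request names the target write stream (and, for proto data, carries the writer schema), while subsequent requests only need row payloads. A hedged sketch of such a request generator using dict-shaped messages, with the serialized row bytes stubbed out:

```python
from typing import Iterator, List

def append_requests(stream_name: str,
                    row_batches: List[List[bytes]]) -> Iterator[dict]:
    """Yield AppendRowsRequest-shaped dicts; only the first request
    names the write stream (a sketch, omitting the writer schema)."""
    for i, batch in enumerate(row_batches):
        request = {"proto_rows": {"rows": {"serialized_rows": batch}}}
        if i == 0:
            request["write_stream"] = stream_name
        yield request

reqs = list(append_requests(
    "projects/p/datasets/d/tables/t/streams/s",
    [[b"row0", b"row1"], [b"row2"]],
))
```

A live call would then be `responses = client.append_rows(requests=append_requests(...))`, iterating the responses to confirm each append.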

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> (
    google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsResponse
)

Atomically commits a group of PENDING streams that belong to the same parent table.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.batch_commit_write_streams
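
The commit request pairs the parent table with the names of the PENDING streams to make visible atomically. As a dict-shaped sketch (resource names are placeholders):

```python
def build_batch_commit_request(table: str, stream_names: list) -> dict:
    # BatchCommitWriteStreamsRequest-shaped dict: parent is the table
    # the PENDING streams were created against (a hedged sketch).
    return {"parent": table, "write_streams": list(stream_names)}

req = build_batch_commit_request(
    "projects/p/datasets/d/tables/t",
    ["projects/p/datasets/d/tables/t/streams/s1"],
)
```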

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.create_write_stream

create_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.CreateWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    write_stream: typing.Optional[
        google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream
    ] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamResponse

Finalizes a write stream so that no new data can be appended to the stream.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.finalize_write_stream

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.flush_rows

flush_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsRequest, dict
        ]
    ] = None,
    *,
    write_stream: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsResponse

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.get_write_stream

get_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.GetWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

Parse a billing_account path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

Parse a folder path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_folder_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

Parse a location path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_location_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

Parse an organization path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_organization_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

Parse a project path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_project_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

Parses a table path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_table_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path: str) -> typing.Dict[str, str]

Parses a write_stream path into its component segments.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_write_stream_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.table_path

table_path(project: str, dataset: str, table: str) -> str

Returns a fully-qualified table string.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.table_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.write_stream_path

write_stream_path(project: str, dataset: str, table: str, stream: str) -> str

Returns a fully-qualified write_stream string.

See more: google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.write_stream_path
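
write_stream_path follows the template `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`. A stdlib-only sketch of the formatter (the `_default` stream name below is illustrative):

```python
_WRITE_STREAM_TEMPLATE = (
    "projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}"
)

def write_stream_path(project: str, dataset: str, table: str, stream: str) -> str:
    # Mirrors BigQueryWriteClient.write_stream_path: fill in the template.
    return _WRITE_STREAM_TEMPLATE.format(
        project=project, dataset=dataset, table=table, stream=stream
    )

name = write_stream_path("my-project", "my_dataset", "my_table", "_default")
```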

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient

BigQueryReadAsyncClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Union[str, google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport] = 'grpc_asyncio', client_options: typing.Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

Instantiates the big query read async client.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.create_read_session

create_read_session(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.CreateReadSessionRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    read_session: typing.Optional[
        google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession
    ] = None,
    max_stream_count: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.get_transport_class

get_transport_class() -> (
    typing.Type[
        google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport
    ]
)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_read_session_path

parse_read_session_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_read_stream_path

parse_read_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_rows

read_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsRequest, dict
        ]
    ] = None,
    *,
    read_stream: typing.Optional[str] = None,
    offset: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Awaitable[
    typing.AsyncIterable[
        google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsResponse
    ]
]

Reads rows from the stream in the format prescribed by the ReadSession.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_rows
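
Per the signature above, the async client's read_rows resolves to an AsyncIterable of ReadRowsResponse messages, so consumption is an `async for` loop. A self-contained sketch with a stubbed stream standing in for the live RPC (field names on the dicts are illustrative):

```python
import asyncio
from typing import AsyncIterator

async def fake_read_rows(n: int) -> AsyncIterator[dict]:
    # Stand-in for the RPC: yields ReadRowsResponse-shaped dicts,
    # each reporting how many rows its block carries.
    for i in range(n):
        yield {"row_count": 10, "batch_index": i}

async def count_rows(stream) -> int:
    total = 0
    # Same consumption shape as: async for r in await client.read_rows(...)
    async for response in stream:
        total += response["row_count"]
    return total

total = asyncio.run(count_rows(fake_read_rows(3)))
```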

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_session_path

read_session_path(project: str, location: str, session: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.split_read_stream

split_read_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamRequest,
            dict,
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamResponse

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient

BigQueryReadClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Optional[typing.Union[str, google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport]] = None, client_options: typing.Optional[typing.Union[google.api_core.client_options.ClientOptions, dict]] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.__exit__

__exit__(type, value, traceback)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.create_read_session

create_read_session(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.CreateReadSessionRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    read_session: typing.Optional[
        google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession
    ] = None,
    max_stream_count: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_rows

read_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsRequest, dict
        ]
    ] = None,
    *,
    read_stream: typing.Optional[str] = None,
    offset: typing.Optional[int] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Iterable[
    google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsResponse
]

Reads rows from the stream in the format prescribed by the ReadSession.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_rows

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_session_path

read_session_path(project: str, location: str, session: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.split_read_stream

split_read_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamRequest,
            dict,
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.split_read_stream

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient

BigQueryWriteAsyncClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Union[str, google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport] = 'grpc_asyncio', client_options: typing.Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = 

Instantiates the big query write async client.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.append_rows

append_rows(
    requests: typing.Optional[
        typing.AsyncIterator[
            google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Awaitable[
    typing.AsyncIterable[
        google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsResponse
    ]
]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

batch_commit_write_streams(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> (
    google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsResponse
)

Atomically commits a group of PENDING streams that belong to the same parent table.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.create_write_stream

create_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.CreateWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    write_stream: typing.Optional[
        google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream
    ] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

finalize_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamResponse

Finalizes a write stream so that no new data can be appended to the stream.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.flush_rows

flush_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsRequest, dict
        ]
    ] = None,
    *,
    write_stream: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsResponse

Flushes rows to a BUFFERED stream.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Returns the API endpoint and client cert source for mutual TLS.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.get_transport_class

get_transport_class() -> (
    typing.Type[
        google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport
    ]
)

Returns an appropriate transport class.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.get_write_stream

get_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.GetWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary_async.AsyncRetry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

Gets a write stream.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_write_stream_path

parse_write_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.write_stream_path

write_stream_path(project: str, dataset: str, table: str, stream: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient

BigQueryWriteClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Optional[typing.Union[str, google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport]] = None, client_options: typing.Optional[typing.Union[google.api_core.client_options.ClientOptions, dict]] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)

Instantiates the big query write client.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.append_rows

append_rows(
    requests: typing.Optional[
        typing.Iterator[
            google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest
        ]
    ] = None,
    *,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> typing.Iterable[
    google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsResponse
]

Appends data to the given stream.

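`append_rows` is a bidirectional streaming call: the caller supplies an iterator of `AppendRowsRequest` messages and consumes an iterable of `AppendRowsResponse` messages, one response per request. The request/response pairing can be sketched in plain Python; the `echo_append_rows` stand-in below is hypothetical and only illustrates the streaming shape, not the RPC itself:

```python
from typing import Dict, Iterable, Iterator, List

def echo_append_rows(requests: Iterator[Dict]) -> Iterable[Dict]:
    # Hypothetical stand-in for the bidirectional RPC: each request in
    # produces one response out, carrying the offset that was appended.
    for offset, request in enumerate(requests):
        yield {"offset": offset, "rows": request["rows"]}

def make_requests() -> Iterator[Dict]:
    # On a real stream, the first request must name the write stream;
    # subsequent requests may omit it.
    yield {"write_stream": "projects/p/datasets/d/tables/t/streams/s",
           "rows": ["row-0"]}
    yield {"rows": ["row-1"]}

responses: List[Dict] = list(echo_append_rows(make_requests()))
```

The real method has the same lazy shape: responses are only produced as the request iterator is consumed.
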
google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> (
    google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsResponse
)

Atomically commits a group of PENDING streams that belong to the same parent table.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_folder_path

common_folder_path(folder: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_location_path

common_location_path(project: str, location: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_organization_path

common_organization_path(organization: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_project_path

common_project_path(project: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.create_write_stream

create_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.CreateWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    parent: typing.Optional[str] = None,
    write_stream: typing.Optional[
        google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream
    ] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

Creates a write stream to the given table.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.finalize_write_stream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.flush_rows

flush_rows(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsRequest, dict
        ]
    ] = None,
    *,
    write_stream: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsResponse

Flushes rows to a BUFFERED stream.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

See more: google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: typing.Optional[
        google.api_core.client_options.ClientOptions
    ] = None,
)

Returns the API endpoint and client cert source for mutual TLS.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.get_write_stream

get_write_stream(
    request: typing.Optional[
        typing.Union[
            google.cloud.bigquery_storage_v1beta2.types.storage.GetWriteStreamRequest,
            dict,
        ]
    ] = None,
    *,
    name: typing.Optional[str] = None,
    retry: typing.Optional[
        typing.Union[
            google.api_core.retry.retry_unary.Retry,
            google.api_core.gapic_v1.method._MethodDefault,
        ]
    ] = _MethodDefault._DEFAULT_VALUE,
    timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

Gets a write stream.

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_table_path

parse_table_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path: str) -> typing.Dict[str, str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.table_path

table_path(project: str, dataset: str, table: str) -> str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.write_stream_path

write_stream_path(project: str, dataset: str, table: str, stream: str) -> str
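
The `*_path` helpers build fully-qualified resource names from individual IDs, and the `parse_*_path` helpers invert them. Their behavior can be sketched with plain string formatting and a regular expression (an illustration of the expected templates, not the library's implementation):

```python
import re

def write_stream_path(project: str, dataset: str, table: str, stream: str) -> str:
    # Builds the fully-qualified resource name used by the Storage Write API.
    return (f"projects/{project}/datasets/{dataset}"
            f"/tables/{table}/streams/{stream}")

def parse_write_stream_path(path: str) -> dict:
    # Inverse of write_stream_path: returns {} when the path does not match.
    m = re.match(
        r"^projects/(?P<project>.+?)/datasets/(?P<dataset>.+?)"
        r"/tables/(?P<table>.+?)/streams/(?P<stream>.+?)$",
        path,
    )
    return m.groupdict() if m else {}

path = write_stream_path("my-project", "my_dataset", "my_table", "_default")
parsed = parse_write_stream_path(path)
```

`table_path` / `parse_table_path` follow the same pattern without the trailing `/streams/{stream}` segment.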

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.add_done_callback

add_done_callback(fn)

Add a callback to be executed when the operation is complete.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.add_done_callback

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.cancel

cancel()

Stops pulling messages and shuts down the background thread consuming messages.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.cancel

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.cancelled

cancelled()

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.done

done(
    retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = None,
) -> bool

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.exception

exception(timeout=None)

Get the exception from the operation, blocking if necessary.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.exception

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.result

result(timeout=None)

Get the result of the operation, blocking if necessary.

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.running

running()

True if the operation is currently running.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.running

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_exception

set_exception(exception)

Set the result of the future as being the given exception.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_exception

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_result

set_result(result)

Set the return value of work associated with the future.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_result
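
`AppendRowsFuture` follows the standard `concurrent.futures.Future` interface: callbacks registered with `add_done_callback` fire once `set_result` or `set_exception` resolves the future, and `result()` blocks until then. The lifecycle can be demonstrated with the stdlib class it mirrors (the string result below stands in for the `AppendRowsResponse` a real stream would deliver):

```python
from concurrent.futures import Future

events = []

future = Future()
# Registered before resolution; invoked automatically on set_result.
future.add_done_callback(lambda f: events.append(f.result()))

assert not future.done()            # nothing has resolved the future yet
future.set_result("rows appended")  # the stream consumer resolves the future
assert future.done()
assert future.result(timeout=0) == "rows appended"
```
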

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream

AppendRowsStream(
    client: google.cloud.bigquery_storage_v1beta2.services.big_query_write.client.BigQueryWriteClient,
    initial_request_template: google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest,
    metadata: typing.Sequence[typing.Tuple[str, str]] = (),
)

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.add_close_callback

add_close_callback(callback: typing.Callable)

Schedules a callable to run when the manager closes.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.add_close_callback

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.close

close(reason: typing.Optional[Exception] = None)

Stop consuming messages and shut down all helper threads.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.close

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.send

send(
    request: google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest,
) -> google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture

Send an append rows request to the open stream.

See more: google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.send
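
Putting the pieces together, a typical flow with `AppendRowsStream` is: construct the manager from a client and a template request, `send` individual `AppendRowsRequest` messages, wait on the returned `AppendRowsFuture`, then `close`. A minimal sketch, assuming valid application-default credentials, a table whose schema matches the serialized protobuf rows, and placeholder project/dataset/table IDs:

```python
from google.cloud import bigquery_storage_v1beta2
from google.cloud.bigquery_storage_v1beta2 import types, writer

def append_one_batch(serialized_rows, proto_schema):
    # Placeholders: substitute real resource IDs before running.
    client = bigquery_storage_v1beta2.BigQueryWriteClient()
    stream_name = client.write_stream_path(
        "my-project", "my_dataset", "my_table", "_default"
    )

    # Every request sent on the stream inherits fields from this template.
    template = types.AppendRowsRequest(write_stream=stream_name)
    template.proto_rows.writer_schema = proto_schema

    stream = writer.AppendRowsStream(client, template)

    request = types.AppendRowsRequest()
    request.proto_rows.rows.serialized_rows.extend(serialized_rows)

    future = stream.send(request)   # returns an AppendRowsFuture
    response = future.result()      # blocks until the append is acknowledged
    stream.close()
    return response
```

On a real call, `future.result()` raises if the append failed, and `stream.close` shuts down the background consumer and helper threads.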