Discovery Engine V1 API - Class Google::Cloud::DiscoveryEngine::V1::BigQuerySource (v0.4.0)

Reference documentation and code samples for the Discovery Engine V1 API class Google::Cloud::DiscoveryEngine::V1::BigQuerySource.

BigQuery source to import data from.
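
A minimal construction sketch in Ruby; the require path follows the standard layout of generated Google Cloud gems, and every project, dataset, and table name below is an illustrative placeholder rather than a value from this reference:

    require "google/cloud/discovery_engine/v1"

    # Describe a BigQuery table to import from. All identifiers are placeholders.
    bigquery_source = Google::Cloud::DiscoveryEngine::V1::BigQuerySource.new(
      project_id:  "my-project",   # optional; defaults to the parent request's project
      dataset_id:  "my_dataset",   # required
      table_id:    "my_table",     # required
      data_schema: "document"      # schema used to parse each row
    )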

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#data_schema

def data_schema() -> ::String
Returns
  • (::String) —

    The schema to use when parsing the data from the source.

    Supported values for user event imports:

    • user_event (default): One UserEvent per row.

    Supported values for document imports:

    • document (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data.

    • custom: One custom data per row in arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.

#data_schema=

def data_schema=(value) -> ::String
Parameter
  • value (::String) —

    The schema to use when parsing the data from the source.

    Supported values for user event imports:

    • user_event (default): One UserEvent per row.

    Supported values for document imports:

    • document (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data.

    • custom: One custom data per row in arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.

Returns
  • (::String) —

    The schema to use when parsing the data from the source.

    Supported values for user event imports:

    • user_event (default): One UserEvent per row.

    Supported values for document imports:

    • document (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data.

    • custom: One custom data per row in arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.
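
As a hedged sketch of how this accessor pair behaves (the user_event and document values come from the field description above; the message itself is empty and illustrative):

    source = Google::Cloud::DiscoveryEngine::V1::BigQuerySource.new

    # User event import: each BigQuery row is parsed as one UserEvent.
    source.data_schema = "user_event"

    # Document import: each row is parsed as one Document instead.
    source.data_schema = "document"

    source.data_schema #=> "document"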

#dataset_id

def dataset_id() -> ::String
Returns
  • (::String) — Required. The BigQuery dataset to copy the data from, with a length limit of 1,024 characters.

#dataset_id=

def dataset_id=(value) -> ::String
Parameter
  • value (::String) — Required. The BigQuery dataset to copy the data from, with a length limit of 1,024 characters.
Returns
  • (::String) — Required. The BigQuery dataset to copy the data from, with a length limit of 1,024 characters.

#gcs_staging_dir

def gcs_staging_dir() -> ::String
Returns
  • (::String) — Intermediate Cloud Storage directory used for the import, with a length limit of 2,000 characters. Specify this if you want the BigQuery export to go to a specific Cloud Storage directory.

#gcs_staging_dir=

def gcs_staging_dir=(value) -> ::String
Parameter
  • value (::String) — Intermediate Cloud Storage directory used for the import, with a length limit of 2,000 characters. Specify this if you want the BigQuery export to go to a specific Cloud Storage directory.
Returns
  • (::String) — Intermediate Cloud Storage directory used for the import, with a length limit of 2,000 characters. Specify this if you want the BigQuery export to go to a specific Cloud Storage directory.
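
A small sketch of staging the intermediate BigQuery export under a chosen Cloud Storage prefix; the bucket name and prefix are placeholders:

    source = Google::Cloud::DiscoveryEngine::V1::BigQuerySource.new(
      dataset_id: "my_dataset",
      table_id:   "my_table"
    )

    # Stage the BigQuery export under a specific Cloud Storage directory.
    source.gcs_staging_dir = "gs://my-staging-bucket/discovery-engine-import"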

#partition_date

def partition_date() -> ::Google::Type::Date
Returns
  • (::Google::Type::Date) — BigQuery time partitioned table's _PARTITIONDATE in YYYY-MM-DD format.

#partition_date=

def partition_date=(value) -> ::Google::Type::Date
Parameter
  • value (::Google::Type::Date) — BigQuery time partitioned table's _PARTITIONDATE in YYYY-MM-DD format.
Returns
  • (::Google::Type::Date) — BigQuery time partitioned table's _PARTITIONDATE in YYYY-MM-DD format.
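
A short sketch of selecting a single partition; Google::Type::Date is the common date proto, and the table name and date are placeholders:

    require "google/type/date_pb"

    source = Google::Cloud::DiscoveryEngine::V1::BigQuerySource.new(
      dataset_id: "my_dataset",
      table_id:   "my_partitioned_table"
    )

    # Import only rows whose _PARTITIONDATE is 2023-06-01.
    source.partition_date = Google::Type::Date.new(year: 2023, month: 6, day: 1)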

#project_id

def project_id() -> ::String
Returns
  • (::String) — The project ID or project number that the BigQuery source is in, with a length limit of 128 characters. If not specified, the project ID is inherited from the parent request.

#project_id=

def project_id=(value) -> ::String
Parameter
  • value (::String) — The project ID or project number that the BigQuery source is in, with a length limit of 128 characters. If not specified, the project ID is inherited from the parent request.
Returns
  • (::String) — The project ID or project number that the BigQuery source is in, with a length limit of 128 characters. If not specified, the project ID is inherited from the parent request.

#table_id

def table_id() -> ::String
Returns
  • (::String) — Required. The BigQuery table to copy the data from, with a length limit of 1,024 characters.

#table_id=

def table_id=(value) -> ::String
Parameter
  • value (::String) — Required. The BigQuery table to copy the data from, with a length limit of 1,024 characters.
Returns
  • (::String) — Required. The BigQuery table to copy the data from, with a length limit of 1,024 characters.
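
Taken together, a hedged end-to-end sketch: this message is typically supplied as the bigquery_source of an ImportDocumentsRequest sent through the DocumentService client, and the parent branch path and all identifiers below are assumptions chosen for illustration:

    require "google/cloud/discovery_engine/v1"

    client = Google::Cloud::DiscoveryEngine::V1::DocumentService::Client.new

    request = Google::Cloud::DiscoveryEngine::V1::ImportDocumentsRequest.new(
      parent: "projects/my-project/locations/global/collections/default_collection/" \
              "dataStores/my-data-store/branches/default_branch",
      bigquery_source: Google::Cloud::DiscoveryEngine::V1::BigQuerySource.new(
        dataset_id:  "my_dataset",
        table_id:    "my_table",
        data_schema: "document"
      )
    )

    # import_documents returns a long-running operation; block until it finishes.
    operation = client.import_documents request
    operation.wait_until_done!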