Class LoadJobConfig (3.25.0)

LoadJobConfig(**kwargs)

Configuration options for load jobs.

Set properties on the constructed configuration by using the property name as the name of a keyword argument. Values which are unset or None use the BigQuery REST API default values. For a list of default values, see the BigQuery REST API reference documentation: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad

Required options differ based on the source_format value. For example, the BigQuery API's default value for source_format is "CSV". When loading a CSV file, either schema must be set or autodetect must be set to True.
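For example, a minimal CSV load configuration might look like the following sketch (the source URI and table ID are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Either `schema` or `autodetect` must be provided for CSV sources.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )

    load_job = client.load_table_from_uri(
        "gs://example-bucket/data.csv",    # hypothetical source URI
        "my-project.my_dataset.my_table",  # hypothetical destination table
        job_config=job_config,
    )
    load_job.result()  # wait for the load to complete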

Properties

allow_jagged_rows

Optional[bool]: Allow missing trailing optional columns (CSV only).

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.allow_jagged_rows

allow_quoted_newlines

Optional[bool]: Allow quoted data containing newline characters (CSV only).

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.allow_quoted_newlines

autodetect

Optional[bool]: Automatically infer the schema from a sample of the data.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.autodetect

clustering_fields

Optional[List[str]]: Fields defining clustering for the table (defaults to None).

Clustering fields are immutable after table creation.
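A sketch of a load configuration that clusters the destination table by one column (the column names are hypothetical):

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        schema=[
            bigquery.SchemaField("customer_id", "STRING"),
            bigquery.SchemaField("order_date", "DATE"),
            bigquery.SchemaField("amount", "NUMERIC"),
        ],
        # Clustering fields cannot be changed once the table exists.
        clustering_fields=["customer_id"],
    )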

column_name_character_map

Optional[google.cloud.bigquery.job.ColumnNameCharacterMap]: Character map supported for column names in CSV/Parquet loads. Defaults to STRICT and can be overridden by Project Config Service. Using this option with unsupported load formats will result in an error.

See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.column_name_character_map

connection_properties

create_disposition

create_session

[Preview] If True, creates a new session, where session_info will contain a random server-generated session id.

If False, runs the load job with an existing session_id passed in connection_properties; otherwise runs the load job in non-session mode (see the sketch below).

See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.create_session

New in version 3.7.0.
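A sketch of the session flow described above: the first load job creates the session, and a later job reuses it through connection_properties. The URIs and table IDs are hypothetical, and this assumes the finished job exposes session_info as described:

    from google.cloud import bigquery

    client = bigquery.Client()

    # First job: create a new session.
    first_config = bigquery.LoadJobConfig(create_session=True)
    first_job = client.load_table_from_uri(
        "gs://example-bucket/part1.csv",   # hypothetical source URI
        "my-project.my_dataset.my_table",  # hypothetical destination table
        job_config=first_config,
    )
    first_job.result()
    session_id = first_job.session_info.session_id  # assumes session_info is populated

    # Second job: reuse the existing session via connection_properties.
    second_config = bigquery.LoadJobConfig(
        connection_properties=[
            bigquery.ConnectionProperty(key="session_id", value=session_id)
        ],
    )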

decimal_target_types

Possible SQL data types to which the source decimal values are converted.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.decimal_target_types

New in version 2.21.0.
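For example, a sketch that prefers NUMERIC, then BIGNUMERIC, then STRING when converting source decimal values:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        decimal_target_types=["NUMERIC", "BIGNUMERIC", "STRING"],
    )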

destination_encryption_configuration

Optional[google.cloud.bigquery.encryption_configuration.EncryptionConfiguration]: Custom encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or None if using default encryption.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.destination_encryption_configuration
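A sketch using a customer-managed Cloud KMS key (the key resource name is hypothetical):

    from google.cloud import bigquery

    kms_key_name = (
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    )
    job_config = bigquery.LoadJobConfig(
        destination_encryption_configuration=bigquery.EncryptionConfiguration(
            kms_key_name=kms_key_name
        ),
    )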

destination_table_description

destination_table_friendly_name

encoding

field_delimiter

hive_partitioning

Optional[google.cloud.bigquery.external_config.HivePartitioningOptions]: [Beta] When set, it configures hive partitioning support.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.hive_partitioning_options
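A sketch that auto-detects hive partition keys under a common source URI prefix (the bucket path is hypothetical):

    from google.cloud import bigquery

    hive_opts = bigquery.HivePartitioningOptions()
    hive_opts.mode = "AUTO"
    hive_opts.source_uri_prefix = "gs://example-bucket/tables/my_table/"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        hive_partitioning=hive_opts,
    )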

ignore_unknown_values

Optional[bool]: Ignore extra values not represented in the table schema.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.ignore_unknown_values

job_timeout_ms

Optional parameter. Job timeout in milliseconds. If this time limit is exceeded, BigQuery might attempt to stop the job.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfiguration.FIELDS.job_timeout_ms

For example:

    job_config = bigquery.QueryJobConfig(job_timeout_ms=5000)

or

    job_config.job_timeout_ms = 5000
Exceptions
Type Description
ValueError If value type is invalid.

json_extension

Optional[str]: The extension to use for writing JSON data to BigQuery. Only supports GeoJSON currently.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.json_extension

labels

Dict[str, str]: Labels for the job.

This property always returns a dict. Once a job has been created on the server, its labels can no longer be modified.

Exceptions
Type Description
ValueError If value type is invalid.

max_bad_records

null_marker

parquet_options

preserve_ascii_control_characters

Optional[bool]: Preserves the embedded ASCII control characters when sourceFormat is set to CSV.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.preserve_ascii_control_characters

projection_fields

Optional[List[str]]: If source_format is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.

Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.projection_fields

quote_character

Optional[str]: Character used to quote data sections (CSV only).

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.quote

range_partitioning

Optional[google.cloud.bigquery.table.RangePartitioning]: Configures range-based partitioning for destination table.

Specify at most one of time_partitioning or range_partitioning (see the sketch below the exceptions table).

Exceptions
Type Description
ValueError If the value is not RangePartitioning or None.
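A sketch of integer-range partitioning on a numeric column (the field name and range are hypothetical):

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        range_partitioning=bigquery.RangePartitioning(
            field="zipcode",
            range_=bigquery.PartitionRange(start=0, end=100000, interval=10),
        ),
    )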

reference_file_schema_uri

Optional[str]: When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats:

AVRO, PARQUET, ORC

schema

Optional[Sequence[Union[SchemaField, Mapping[str, Any]]]]: Schema of the destination table.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.schema
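The sequence may mix SchemaField objects and plain mappings in the REST representation; a short sketch (the column names are hypothetical):

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        schema=[
            bigquery.SchemaField("full_name", "STRING", mode="REQUIRED"),
            {"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
        ],
    )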

schema_update_options

Optional[List[google.cloud.bigquery.job.SchemaUpdateOption]]: Specifies updates to the destination table schema to allow as a side effect of the load job.
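For example, a sketch that lets an appending load job add new columns and relax REQUIRED columns to NULLABLE on the existing table:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        schema_update_options=[
            bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION,
            bigquery.SchemaUpdateOption.ALLOW_FIELD_RELAXATION,
        ],
    )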

skip_leading_rows

Optional[int]: Number of rows to skip when reading data (CSV only).

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.skip_leading_rows

source_format

time_partitioning

Optional[google.cloud.bigquery.table.TimePartitioning]: Specifies time-based partitioning for the destination table.

Specify at most one of time_partitioning or range_partitioning.
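A sketch of daily partitioning on a date column (the field name is hypothetical):

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        time_partitioning=bigquery.TimePartitioning(
            type_=bigquery.TimePartitioningType.DAY,
            field="transaction_date",
        ),
    )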

use_avro_logical_types

Optional[bool]: For loads of Avro data, governs whether Avro logical types are converted to their corresponding BigQuery types (e.g. TIMESTAMP) rather than raw types (e.g. INTEGER).

write_disposition

Methods

__setattr__

__setattr__(name, value)

Override to raise an error if an unknown property is set.

from_api_repr

from_api_repr(resource: dict) -> google.cloud.bigquery.job.base._JobConfig

Factory: construct a job configuration given its API representation.

Parameter
Name Description
resource Dict

A job configuration in the same representation as is returned from the API.

Returns
Type Description
google.cloud.bigquery.job._JobConfig Configuration parsed from resource.
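A sketch of constructing a LoadJobConfig with from_api_repr; the resource dict shown is a minimal, hypothetical example of the API representation:

    from google.cloud import bigquery

    resource = {"load": {"sourceFormat": "CSV", "autodetect": True}}
    job_config = bigquery.LoadJobConfig.from_api_repr(resource)
    print(job_config.source_format)  # CSV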

to_api_repr

to_api_repr() -> dict

Build an API representation of the job config.

Returns
Type Description
Dict A dictionary in the format used by the BigQuery API.
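A minimal sketch of calling to_api_repr on a configured LoadJobConfig:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
    )
    print(job_config.to_api_repr())
    # e.g. {'load': {'sourceFormat': 'CSV', 'autodetect': True}}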