Class LoadJobConfiguration.Builder (2.17.1)

public static final class LoadJobConfiguration.Builder extends JobConfiguration.Builder<LoadJobConfiguration,LoadJobConfiguration.Builder> implements LoadConfiguration.Builder

Inheritance

java.lang.Object > JobConfiguration.Builder > LoadJobConfiguration.Builder

Methods

build()

public LoadJobConfiguration build()

Creates a LoadJobConfiguration object from the values set on this builder.

Returns: LoadJobConfiguration

Overrides: JobConfiguration.Builder.build()
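
As an illustration of the overall builder flow, a minimal sketch (assuming the google-cloud-bigquery client library is on the classpath; the dataset, table, and bucket names are placeholders):

```java
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class BuildExample {
  public static void main(String[] args) {
    // Hypothetical destination table and Cloud Storage source.
    TableId destination = TableId.of("my_dataset", "my_table");
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(destination, "gs://my-bucket/data/*.csv")
            .setFormatOptions(FormatOptions.csv())
            .setCreateDisposition(JobInfo.CreateDisposition.CREATE_IF_NEEDED)
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
            .build();
    System.out.println(config.getFormat());
  }
}
```

The resulting configuration is typically submitted with BigQuery.create(JobInfo.of(config)).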

setAutodetect(Boolean autodetect)

public LoadJobConfiguration.Builder setAutodetect(Boolean autodetect)

[Experimental] Sets automatic inference of the options and schema for CSV and JSON sources.

Parameter: autodetect (Boolean)

Returns: LoadJobConfiguration.Builder

setClustering(Clustering clustering)

public LoadJobConfiguration.Builder setClustering(Clustering clustering)

Sets the clustering specification for the destination table.

Parameter: clustering (Clustering)

Returns: LoadJobConfiguration.Builder

setCreateDisposition(JobInfo.CreateDisposition createDisposition)

public LoadJobConfiguration.Builder setCreateDisposition(JobInfo.CreateDisposition createDisposition)

Sets whether the job is allowed to create new tables.

Parameter: createDisposition (JobInfo.CreateDisposition)

Returns: LoadJobConfiguration.Builder

setDecimalTargetTypes(List<String> decimalTargetTypes)

public LoadJobConfiguration.Builder setDecimalTargetTypes(List<String> decimalTargetTypes)

Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values.

Parameter: decimalTargetTypes (List<String>) — decimalTargetType, or null for none

Returns: LoadJobConfiguration.Builder
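
A sketch of the precedence order described above, preferring NUMERIC, then BIGNUMERIC, with STRING as the universal fallback (the table and bucket names are placeholders):

```java
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class DecimalTargetTypesExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.parquet")
            .setFormatOptions(FormatOptions.parquet())
            // List order expresses preference; STRING supports any precision/scale.
            .setDecimalTargetTypes(Arrays.asList("NUMERIC", "BIGNUMERIC", "STRING"))
            .build();
    System.out.println(config.getDecimalTargetTypes());
  }
}
```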

setDestinationEncryptionConfiguration(EncryptionConfiguration encryptionConfiguration)

public LoadJobConfiguration.Builder setDestinationEncryptionConfiguration(EncryptionConfiguration encryptionConfiguration)

Sets the encryption configuration (for example, a customer-managed Cloud KMS key) to use for the destination table.

Parameter: encryptionConfiguration (EncryptionConfiguration)

Returns: LoadJobConfiguration.Builder

setDestinationTable(TableId destinationTable)

public LoadJobConfiguration.Builder setDestinationTable(TableId destinationTable)

Sets the destination table to load the data into.

Parameter: destinationTable (TableId)

Returns: LoadJobConfiguration.Builder

setFormatOptions(FormatOptions formatOptions)

public LoadJobConfiguration.Builder setFormatOptions(FormatOptions formatOptions)

Sets the source format, and possibly some parsing options, of the external data. Supported formats are CSV, NEWLINE_DELIMITED_JSON and DATASTORE_BACKUP. If not specified, CSV format is assumed.

See also: Source Format

Parameter: formatOptions (FormatOptions)

Returns: LoadJobConfiguration.Builder
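
For example, CSV-specific parsing settings can be supplied through CsvOptions, a FormatOptions subclass, rather than the plain FormatOptions.csv() (names below are placeholders):

```java
import com.google.cloud.bigquery.CsvOptions;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class FormatOptionsExample {
  public static void main(String[] args) {
    CsvOptions csv = CsvOptions.newBuilder()
        .setSkipLeadingRows(1)   // skip a header row
        .setFieldDelimiter(";")  // semicolon-delimited input
        .build();
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setFormatOptions(csv)
            .build();
    System.out.println(config.getFormat());
  }
}
```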

setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)

public LoadJobConfiguration.Builder setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)

Sets the hive partitioning options for the load job.

Parameter: hivePartitioningOptions (HivePartitioningOptions)

Returns: LoadJobConfiguration.Builder

setIgnoreUnknownValues(Boolean ignoreUnknownValues)

public LoadJobConfiguration.Builder setIgnoreUnknownValues(Boolean ignoreUnknownValues)

Sets whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default, unknown values are not allowed.

Parameter: ignoreUnknownValues (Boolean)

Returns: LoadJobConfiguration.Builder

setJobTimeoutMs(Long jobTimeoutMs)

public LoadJobConfiguration.Builder setJobTimeoutMs(Long jobTimeoutMs)

[Optional] Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

Parameter: jobTimeoutMs (Long) — jobTimeoutMs, or null for none

Returns: LoadJobConfiguration.Builder

setLabels(Map<String,String> labels)

public LoadJobConfiguration.Builder setLabels(Map<String,String> labels)

The labels associated with this job. You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter, and each label in the list must have a different key.

Parameter: labels (Map<String,String>) — labels, or null for none

Returns: LoadJobConfiguration.Builder
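
A sketch of attaching labels that satisfy the rules above (the label keys and values here are purely illustrative):

```java
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import java.util.HashMap;
import java.util.Map;

public class LabelsExample {
  public static void main(String[] args) {
    // Lowercase keys and values, at most 63 characters, keys start with a letter.
    Map<String, String> labels = new HashMap<>();
    labels.put("team", "analytics");
    labels.put("env", "prod");
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setLabels(labels)
            .build();
    System.out.println(config.getLabels());
  }
}
```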

setMaxBadRecords(Integer maxBadRecords)

public LoadJobConfiguration.Builder setMaxBadRecords(Integer maxBadRecords)

Sets the maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. By default, no bad records are ignored.

Parameter: maxBadRecords (Integer)

Returns: LoadJobConfiguration.Builder
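
Bad-record tolerance is often combined with setIgnoreUnknownValues; a sketch (names are placeholders):

```java
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class BadRecordsExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setFormatOptions(FormatOptions.csv())
            .setMaxBadRecords(10)         // tolerate up to 10 unparsable rows
            .setIgnoreUnknownValues(true) // drop columns not in the schema
            .build();
    System.out.println(config.getMaxBadRecords());
  }
}
```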

setNullMarker(String nullMarker)

public LoadJobConfiguration.Builder setNullMarker(String nullMarker)

Sets the string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

Parameter: nullMarker (String)

Returns: LoadJobConfiguration.Builder
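
A sketch of the "\N" example above; note that in Java source the marker must be escaped as "\\N" to produce the two-character sequence \N (table and bucket names are placeholders):

```java
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class NullMarkerExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setFormatOptions(FormatOptions.csv())
            // "\\N" in Java source is the two-character sequence \N in the file.
            .setNullMarker("\\N")
            .build();
    System.out.println(config.getNullMarker());
  }
}
```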

setRangePartitioning(RangePartitioning rangePartitioning)

public LoadJobConfiguration.Builder setRangePartitioning(RangePartitioning rangePartitioning)

Sets the range partitioning specification for the destination table. Only one of timePartitioning and rangePartitioning should be specified.

Parameter: rangePartitioning (RangePartitioning) — rangePartitioning, or null for none

Returns: LoadJobConfiguration.Builder
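
A sketch of range partitioning on a hypothetical integer column "customer_id", split into buckets of 100 (all names are placeholders):

```java
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.RangePartitioning;
import com.google.cloud.bigquery.TableId;

public class RangePartitioningExample {
  public static void main(String[] args) {
    // Partition rows with 0 <= customer_id < 1000 into buckets of width 100.
    RangePartitioning.Range range = RangePartitioning.Range.newBuilder()
        .setStart(0L)
        .setEnd(1000L)
        .setInterval(100L)
        .build();
    RangePartitioning partitioning = RangePartitioning.newBuilder()
        .setField("customer_id")
        .setRange(range)
        .build();
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setRangePartitioning(partitioning)
            .build();
    System.out.println(config.getRangePartitioning().getField());
  }
}
```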

setReferenceFileSchemaUri(String referenceFileSchemaUri)

public LoadJobConfiguration.Builder setReferenceFileSchemaUri(String referenceFileSchemaUri)

When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats: AVRO, PARQUET, ORC.

Parameter: referenceFileSchemaUri (String) — or null for none

Returns: LoadJobConfiguration.Builder

setSchema(Schema schema)

public LoadJobConfiguration.Builder setSchema(Schema schema)

Sets the schema for the destination table. The schema can be omitted if the destination table already exists, or if you're loading data from a Google Cloud Datastore backup (i.e. DATASTORE_BACKUP format option).

Parameter: schema (Schema)

Returns: LoadJobConfiguration.Builder
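
A sketch of supplying an explicit schema for a destination table that does not yet exist (the column names are hypothetical):

```java
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.TableId;

public class SchemaExample {
  public static void main(String[] args) {
    // A two-column schema for the destination table.
    Schema schema = Schema.of(
        Field.of("name", StandardSQLTypeName.STRING),
        Field.of("age", StandardSQLTypeName.INT64));
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setSchema(schema)
            .build();
    System.out.println(config.getSchema().getFields().size());
  }
}
```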

setSchemaUpdateOptions(List<JobInfo.SchemaUpdateOption> schemaUpdateOptions)

public LoadJobConfiguration.Builder setSchemaUpdateOptions(List<JobInfo.SchemaUpdateOption> schemaUpdateOptions)

[Experimental] Sets options allowing the schema of the destination table to be updated as a side effect of the load job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema.

Parameter: schemaUpdateOptions (List<JobInfo.SchemaUpdateOption>)

Returns: LoadJobConfiguration.Builder
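
A sketch of the WRITE_APPEND case described above, allowing new fields to be added and REQUIRED fields to be relaxed (names are placeholders):

```java
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class SchemaUpdateExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            // Schema updates require WRITE_APPEND (or WRITE_TRUNCATE on a partition).
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
            .setSchemaUpdateOptions(Arrays.asList(
                JobInfo.SchemaUpdateOption.ALLOW_FIELD_ADDITION,
                JobInfo.SchemaUpdateOption.ALLOW_FIELD_RELAXATION))
            .build();
    System.out.println(config.getSchemaUpdateOptions());
  }
}
```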

setSourceUris(List<String> sourceUris)

public LoadJobConfiguration.Builder setSourceUris(List<String> sourceUris)

Sets the fully-qualified URIs that point to source data in Google Cloud Storage (e.g. gs://bucket/path). Each URI can contain one '*' wildcard character and it must come after the 'bucket' name.

Parameter: sourceUris (List<String>)

Returns: LoadJobConfiguration.Builder
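
A sketch showing multiple source URIs, including one with the '*' wildcard placed after the bucket name (all paths are placeholders):

```java
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class SourceUrisExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/placeholder.csv")
            // Replaces the URI given to newBuilder; '*' must come after the bucket name.
            .setSourceUris(Arrays.asList(
                "gs://my-bucket/exports/part-*",
                "gs://my-bucket/extra/one-file.csv"))
            .build();
    System.out.println(config.getSourceUris());
  }
}
```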

setTimePartitioning(TimePartitioning timePartitioning)

public LoadJobConfiguration.Builder setTimePartitioning(TimePartitioning timePartitioning)

Sets the time partitioning specification for the destination table.

Parameter: timePartitioning (TimePartitioning)

Returns: LoadJobConfiguration.Builder
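
A sketch of daily time partitioning, combined with clustering on a hypothetical "customer_id" column (table and bucket names are placeholders):

```java
import com.google.cloud.bigquery.Clustering;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TimePartitioning;
import java.util.Arrays;

public class TimePartitioningExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            .setTimePartitioning(TimePartitioning.of(TimePartitioning.Type.DAY))
            // Cluster rows within each partition by a hypothetical column.
            .setClustering(
                Clustering.newBuilder().setFields(Arrays.asList("customer_id")).build())
            .build();
    System.out.println(config.getTimePartitioning().getType());
  }
}
```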

setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)

public LoadJobConfiguration.Builder setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)

If FormatOptions is set to AVRO, sets whether BigQuery should interpret Avro logical types as their corresponding types (such as TIMESTAMP) instead of using only their raw types (such as INTEGER). The value may be null.

Parameter: useAvroLogicalTypes (Boolean)

Returns: LoadJobConfiguration.Builder

setWriteDisposition(JobInfo.WriteDisposition writeDisposition)

public LoadJobConfiguration.Builder setWriteDisposition(JobInfo.WriteDisposition writeDisposition)

Sets the action that should occur if the destination table already exists.

Parameter: writeDisposition (JobInfo.WriteDisposition)

Returns: LoadJobConfiguration.Builder
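
A sketch choosing among the write dispositions (names are placeholders):

```java
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class WriteDispositionExample {
  public static void main(String[] args) {
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
            // WRITE_TRUNCATE overwrites existing table data; WRITE_APPEND adds to it;
            // WRITE_EMPTY fails if the table already contains data.
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
            .build();
    System.out.println(config.getWriteDisposition());
  }
}
```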