public static final class LoadJobConfiguration.Builder extends JobConfiguration.Builder<LoadJobConfiguration,LoadJobConfiguration.Builder> implements LoadConfiguration.Builder
Methods
build()
public LoadJobConfiguration build()
Overrides
setAutodetect(Boolean autodetect)
public LoadJobConfiguration.Builder setAutodetect(Boolean autodetect)
[Experimental] Sets automatic inference of the options and schema for CSV and JSON sources.
Parameter

Name | Description
---|---
autodetect | Boolean
setClustering(Clustering clustering)
public LoadJobConfiguration.Builder setClustering(Clustering clustering)
Sets the clustering specification for the destination table.
setConnectionProperties(List<ConnectionProperty> connectionProperties)
public LoadJobConfiguration.Builder setConnectionProperties(List<ConnectionProperty> connectionProperties)
setCreateDisposition(JobInfo.CreateDisposition createDisposition)
public LoadJobConfiguration.Builder setCreateDisposition(JobInfo.CreateDisposition createDisposition)
Sets whether the job is allowed to create new tables.
setCreateSession(Boolean createSession)
public LoadJobConfiguration.Builder setCreateSession(Boolean createSession)
Parameter

Name | Description
---|---
createSession | Boolean
setDecimalTargetTypes(List<String> decimalTargetTypes)
public LoadJobConfiguration.Builder setDecimalTargetTypes(List<String> decimalTargetTypes)
Defines the list of possible SQL data types to which the source decimal values are converted.
This list and the precision and the scale parameters of the decimal field determine the
target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in
the specified list and if it supports the precision and the scale. STRING supports all
precision and scale values.
Parameter

Name | Description
---|---
decimalTargetTypes | List&lt;String&gt;: the target types, or null for none
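As a sketch (the dataset, table, and bucket names are placeholders, and the usual `com.google.cloud.bigquery` and `java.util` imports are assumed), the preference order can be passed directly:

```java
// Hypothetical dataset/table and GCS path, for illustration only.
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.avro")
        // Try NUMERIC first, then BIGNUMERIC, then STRING, depending on
        // each decimal field's precision and scale.
        .setDecimalTargetTypes(Arrays.asList("NUMERIC", "BIGNUMERIC", "STRING"))
        .build();
```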
setDestinationEncryptionConfiguration(EncryptionConfiguration encryptionConfiguration)
public LoadJobConfiguration.Builder setDestinationEncryptionConfiguration(EncryptionConfiguration encryptionConfiguration)
setDestinationTable(TableId destinationTable)
public LoadJobConfiguration.Builder setDestinationTable(TableId destinationTable)
Sets the destination table to load the data into.
Parameter

Name | Description
---|---
destinationTable | TableId
setFormatOptions(FormatOptions formatOptions)
public LoadJobConfiguration.Builder setFormatOptions(FormatOptions formatOptions)
Sets the source format, and possibly some parsing options, of the external data. Supported
formats are CSV, NEWLINE_DELIMITED_JSON and DATASTORE_BACKUP. If not specified, the CSV
format is assumed.
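A minimal sketch of combining a format with parsing options, here CSV with a header row (all names are placeholders; imports from `com.google.cloud.bigquery` are assumed):

```java
// CsvOptions extends FormatOptions, so it can be passed to setFormatOptions.
CsvOptions csvOptions =
    CsvOptions.newBuilder()
        .setSkipLeadingRows(1)   // skip a header row
        .setFieldDelimiter(",")
        .build();
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
        .setFormatOptions(csvOptions)
        .build();
```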
setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)
public LoadJobConfiguration.Builder setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)
setIgnoreUnknownValues(Boolean ignoreUnknownValues)
public LoadJobConfiguration.Builder setIgnoreUnknownValues(Boolean ignoreUnknownValues)
Sets whether BigQuery should allow extra values that are not represented in the table schema.
If true, the extra values are ignored. If false, records with extra columns are treated as
bad records, and if there are too many bad records, an invalid error is returned in the job
result. By default, unknown values are not allowed.

Parameter

Name | Description
---|---
ignoreUnknownValues | Boolean
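For instance, a tolerant load of newline-delimited JSON might look like the following sketch (names are illustrative; `com.google.cloud.bigquery` imports assumed):

```java
// Drop any JSON fields that are not represented in the table schema,
// rather than treating those rows as bad records.
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.json")
        .setFormatOptions(FormatOptions.json())
        .setIgnoreUnknownValues(true)
        .build();
```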
setJobTimeoutMs(Long jobTimeoutMs)
public LoadJobConfiguration.Builder setJobTimeoutMs(Long jobTimeoutMs)
[Optional] Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt
to terminate the job.
Parameter

Name | Description
---|---
jobTimeoutMs | Long: the timeout in milliseconds, or null for none
setLabels(Map<String,String> labels)
public LoadJobConfiguration.Builder setLabels(Map<String,String> labels)
The labels associated with this job. You can use these to organize and group your jobs. Label
keys and values can be no longer than 63 characters and can contain only lowercase letters,
numeric characters, underscores, and dashes. International characters are allowed. Label
values are optional. Label keys must start with a letter, and each label in the list must
have a different key.
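A sketch of attaching labels, assuming hypothetical key/value pairs that satisfy the constraints above:

```java
// Keys start with a lowercase letter; values are optional.
Map<String, String> labels = new HashMap<>();
labels.put("team", "analytics");
labels.put("env", "prod");
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
        .setLabels(labels)
        .build();
```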
setMaxBadRecords(Integer maxBadRecords)
public LoadJobConfiguration.Builder setMaxBadRecords(Integer maxBadRecords)
Sets the maximum number of bad records that BigQuery can ignore when running the job. If the
number of bad records exceeds this value, an invalid error is returned in the job result. By
default, no bad records are ignored.

Parameter

Name | Description
---|---
maxBadRecords | Integer
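As a sketch with an arbitrary threshold (names are placeholders):

```java
// Tolerate up to 10 malformed rows before the whole job fails.
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
        .setMaxBadRecords(10)
        .build();
```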
setNullMarker(String nullMarker)
public LoadJobConfiguration.Builder setNullMarker(String nullMarker)
Sets the string that represents a null value in a CSV file. For example, if you specify "N",
BigQuery interprets "N" as a null value when loading a CSV file. The default value is the
empty string. If you set this property to a custom value, BigQuery throws an error if an
empty string is present for all data types except STRING and BYTE. For STRING and BYTE
columns, BigQuery interprets the empty string as an empty value.

Parameter

Name | Description
---|---
nullMarker | String
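For example, a sketch treating the sequence `\N` (a common database-export convention, used here purely as an illustration) as NULL:

```java
// "\\N" in Java source is the two-character string \N.
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
        .setNullMarker("\\N")
        .build();
```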
setRangePartitioning(RangePartitioning rangePartitioning)
public LoadJobConfiguration.Builder setRangePartitioning(RangePartitioning rangePartitioning)
Range partitioning specification for this table. Only one of timePartitioning and
rangePartitioning should be specified.
Parameter

Name | Description
---|---
rangePartitioning | RangePartitioning, or null for none
setReferenceFileSchemaUri(String referenceFileSchemaUri)
public LoadJobConfiguration.Builder setReferenceFileSchemaUri(String referenceFileSchemaUri)
When creating an external table, the user can provide a reference file with the table schema.
This is enabled for the following formats: AVRO, PARQUET, ORC.
Parameter

Name | Description
---|---
referenceFileSchemaUri | String, or null for none
setSchema(Schema schema)
public LoadJobConfiguration.Builder setSchema(Schema schema)
Sets the schema for the destination table. The schema can be omitted if the destination table
already exists, or if you're loading data from a Google Cloud Datastore backup (i.e. the
DATASTORE_BACKUP format option).

Parameter

Name | Description
---|---
schema | Schema
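A minimal sketch of supplying an explicit schema for a table that does not exist yet (field names and types are examples only):

```java
// Two example columns; StandardSQLTypeName names the column types.
Schema schema =
    Schema.of(
        Field.of("name", StandardSQLTypeName.STRING),
        Field.of("age", StandardSQLTypeName.INT64));
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.csv")
        .setSchema(schema)
        .build();
```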
setSchemaUpdateOptions(List<JobInfo.SchemaUpdateOption> schemaUpdateOptions)
public LoadJobConfiguration.Builder setSchemaUpdateOptions(List<JobInfo.SchemaUpdateOption> schemaUpdateOptions)
[Experimental] Sets options allowing the schema of the destination table to be updated as a
side effect of the load job. Schema update options are supported in two cases: when
writeDisposition is WRITE_APPEND, or when writeDisposition is WRITE_TRUNCATE and the
destination table is a partition of a table, specified by partition decorators. For normal
tables, WRITE_TRUNCATE always overwrites the schema.
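A sketch of the WRITE_APPEND case (names are placeholders), allowing appended loads to evolve the schema:

```java
// Permit appends to add new columns or relax REQUIRED columns to NULLABLE.
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"), "gs://my-bucket/data.avro")
        .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
        .setSchemaUpdateOptions(
            Arrays.asList(
                JobInfo.SchemaUpdateOption.ALLOW_FIELD_ADDITION,
                JobInfo.SchemaUpdateOption.ALLOW_FIELD_RELAXATION))
        .build();
```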
setSourceUris(List<String> sourceUris)
public LoadJobConfiguration.Builder setSourceUris(List<String> sourceUris)
Sets the fully-qualified URIs that point to source data in Google Cloud Storage (e.g.
gs://bucket/path). Each URI can contain one '*' wildcard character and it must come after the
'bucket' name.
setTimePartitioning(TimePartitioning timePartitioning)
public LoadJobConfiguration.Builder setTimePartitioning(TimePartitioning timePartitioning)
Sets the time partitioning specification for the destination table.
setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)
public LoadJobConfiguration.Builder setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)
If FormatOptions is set to AVRO, you can interpret logical types into their corresponding
types (such as TIMESTAMP) instead of only using their raw types (such as INTEGER). The value
may be null.

Parameter

Name | Description
---|---
useAvroLogicalTypes | Boolean
setWriteDisposition(JobInfo.WriteDisposition writeDisposition)
public LoadJobConfiguration.Builder setWriteDisposition(JobInfo.WriteDisposition writeDisposition)
Sets the action that should occur if the destination table already exists.
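Putting several of the methods above together, a sketch of a truncate-and-reload job from wildcard URIs (bucket, dataset, and table names are all illustrative):

```java
// Each URI may contain one '*' wildcard after the bucket name.
LoadJobConfiguration config =
    LoadJobConfiguration.newBuilder(
            TableId.of("my_dataset", "my_table"),
            Arrays.asList("gs://my-bucket/exports/part-*.csv"))
        .setFormatOptions(FormatOptions.csv())
        // Create the table if it is missing; replace its contents if it exists.
        .setCreateDisposition(JobInfo.CreateDisposition.CREATE_IF_NEEDED)
        .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
        .build();
```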