Types overview

AggregateClassificationMetrics

Aggregate metrics for classification/classifier models. For multi-class models, the metrics are either macro-averaged or micro-averaged. When macro-averaged, the metrics are calculated for each label and then an unweighted average is taken of those values. When micro-averaged, the metric is calculated globally by counting the total number of correctly predicted rows.
Fields
accuracy

number (double format)

Accuracy is the fraction of predictions that were given the correct label. For multiclass models this is a micro-averaged metric.

f1Score

number (double format)

The F1 score is the harmonic mean of precision and recall. For multiclass models this is a macro-averaged metric.

logLoss

number (double format)

Logarithmic Loss. For multiclass this is a macro-averaged metric.

precision

number (double format)

Precision is the fraction of positive predictions that had positive actual labels. For multiclass models this is a macro-averaged metric, treating each class as a binary classifier.

recall

number (double format)

Recall is the fraction of actual positive labels that were given a positive prediction. For multiclass this is a macro-averaged metric.

rocAuc

number (double format)

Area Under a ROC Curve. For multiclass this is a macro-averaged metric.

threshold

number (double format)

Threshold at which the metrics are computed. For binary classification models, this is the positive class threshold. For multi-class classification models, this is the confidence threshold.
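For intuition, the macro/micro distinction described above can be sketched in plain Python. The per-class counts below are invented for illustration and are not part of the API:

```python
# Per-class true-positive / false-positive counts for a toy 3-class
# model (hypothetical numbers, for illustration only).
counts = {
    "a": {"tp": 8, "fp": 2},
    "b": {"tp": 3, "fp": 1},
    "c": {"tp": 5, "fp": 3},
}

# Macro-averaged precision: compute precision per label, then take an
# unweighted mean of the per-label values.
per_label = [c["tp"] / (c["tp"] + c["fp"]) for c in counts.values()]
macro_precision = sum(per_label) / len(per_label)

# Micro-averaged precision: pool the counts globally before dividing,
# so frequent classes dominate the result.
tp = sum(c["tp"] for c in counts.values())
fp = sum(c["fp"] for c in counts.values())
micro_precision = tp / (tp + fp)

print(round(macro_precision, 4), round(micro_precision, 4))
```

The two averages differ whenever class sizes are unbalanced, which is why the reference states per metric which averaging scheme applies.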

Argument

Input/output argument of a function or a stored procedure.
Fields
argumentKind

enum

Optional. Defaults to FIXED_TYPE.

Enum type. Can be one of the following:
ARGUMENT_KIND_UNSPECIFIED (No description provided)
FIXED_TYPE The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
ANY_TYPE The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
dataType

object (StandardSqlDataType)

Required unless argument_kind = ANY_TYPE.

mode

enum

Optional. Specifies whether the argument is input or output. Can be set for procedures only.

Enum type. Can be one of the following:
MODE_UNSPECIFIED (No description provided)
IN The argument is input-only.
OUT The argument is output-only.
INOUT The argument is both an input and an output.
name

string

Optional. The name of this argument. Can be absent for the function return argument.
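As an illustrative sketch (the names and values here are invented, not taken from the reference), an Argument object for a procedure parameter might be assembled like this:

```python
import json

# Illustrative Argument object for a procedure's input parameter.
# argumentKind defaults to FIXED_TYPE, so dataType is required here.
argument = {
    "name": "threshold",
    "argumentKind": "FIXED_TYPE",
    "mode": "IN",  # mode can be set for procedures only
    "dataType": {"typeKind": "FLOAT64"},
}

# An ANY_TYPE argument omits dataType entirely, since dataType is
# required only unless argument_kind = ANY_TYPE.
templated = {"name": "x", "argumentKind": "ANY_TYPE"}

print(json.dumps(argument, indent=2))
```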

ArimaCoefficients

Arima coefficients.
Fields
autoRegressiveCoefficients[]

number (double format)

Auto-regressive coefficients, an array of doubles.

interceptCoefficient

number (double format)

Intercept coefficient, a single double rather than an array.

movingAverageCoefficients[]

number (double format)

Moving-average coefficients, an array of doubles.

ArimaFittingMetrics

ARIMA model fitting metrics.
Fields
aic

number (double format)

AIC.

logLikelihood

number (double format)

Log-likelihood.

variance

number (double format)

Variance.

ArimaForecastingMetrics

Model evaluation metrics for ARIMA forecasting models.
Fields
arimaFittingMetrics[]

object (ArimaFittingMetrics)

Arima model fitting metrics.

arimaSingleModelForecastingMetrics[]

object (ArimaSingleModelForecastingMetrics)

Repeated as there can be many metric sets (one for each model) in auto-arima and the large-scale case.

hasDrift[]

boolean

Whether the ARIMA model was fitted with drift. This is always false when d is not 1.

nonSeasonalOrder[]

object (ArimaOrder)

Non-seasonal order.

seasonalPeriods[]

string

Seasonal periods. Repeated because multiple periods are supported for one time series.

timeSeriesId[]

string

ID used to differentiate different time series for the large-scale case.

ArimaModelInfo

Arima model information.
Fields
arimaCoefficients

object (ArimaCoefficients)

Arima coefficients.

arimaFittingMetrics

object (ArimaFittingMetrics)

Arima fitting metrics.

hasDrift

boolean

Whether the ARIMA model was fitted with drift. This is always false when d is not 1.

hasHolidayEffect

boolean

If true, holiday_effect is a part of time series decomposition result.

hasSpikesAndDips

boolean

If true, spikes_and_dips is a part of time series decomposition result.

hasStepChanges

boolean

If true, step_changes is a part of time series decomposition result.

nonSeasonalOrder

object (ArimaOrder)

Non-seasonal order.

seasonalPeriods[]

string

Seasonal periods. Repeated because multiple periods are supported for one time series.

timeSeriesId

string

The time_series_id value for this time series. It will be one of the unique values from the time_series_id_column specified during ARIMA model training. Only present when time_series_id_column training option was used.

timeSeriesIds[]

string

The tuple of time_series_ids identifying this time series. It will be one of the unique tuples of values present in the time_series_id_columns specified during ARIMA model training. Only present when the time_series_id_columns training option was used, and the order of values here is the same as the order of time_series_id_columns.

ArimaOrder

Arima order, can be used for both non-seasonal and seasonal parts.
Fields
d

string (int64 format)

Order of the differencing part.

p

string (int64 format)

Order of the autoregressive part.

q

string (int64 format)

Order of the moving-average part.

ArimaResult

(Auto-)arima fitting result. Wrap everything in ArimaResult for easier refactoring if we want to use model-specific iteration results.
Fields
arimaModelInfo[]

object (ArimaModelInfo)

This message is repeated because multiple ARIMA models are fitted in auto-ARIMA. For a non-auto-ARIMA model, its size is one.

seasonalPeriods[]

string

Seasonal periods. Repeated because multiple periods are supported for one time series.

ArimaSingleModelForecastingMetrics

Model evaluation metrics for a single ARIMA forecasting model.
Fields
arimaFittingMetrics

object (ArimaFittingMetrics)

Arima fitting metrics.

hasDrift

boolean

Whether the ARIMA model was fitted with drift. This is always false when d is not 1.

hasHolidayEffect

boolean

If true, holiday_effect is a part of time series decomposition result.

hasSpikesAndDips

boolean

If true, spikes_and_dips is a part of time series decomposition result.

hasStepChanges

boolean

If true, step_changes is a part of time series decomposition result.

nonSeasonalOrder

object (ArimaOrder)

Non-seasonal order.

seasonalPeriods[]

string

Seasonal periods. Repeated because multiple periods are supported for one time series.

timeSeriesId

string

The time_series_id value for this time series. It will be one of the unique values from the time_series_id_column specified during ARIMA model training. Only present when time_series_id_column training option was used.

timeSeriesIds[]

string

The tuple of time_series_ids identifying this time series. It will be one of the unique tuples of values present in the time_series_id_columns specified during ARIMA model training. Only present when the time_series_id_columns training option was used, and the order of values here is the same as the order of time_series_id_columns.

AuditConfig

Specifies the audit configuration for a service. The configuration determines which permission types are logged, and what identities, if any, are exempted from logging. An AuditConfig must have one or more AuditLogConfigs. If there are AuditConfigs for both allServices and a specific service, the union of the two AuditConfigs is used for that service: the log_types specified in each AuditConfig are enabled, and the exempted_members in each AuditLogConfig are exempted. Example policy with multiple AuditConfigs:

{
  "audit_configs": [
    {
      "service": "allServices",
      "audit_log_configs": [
        {
          "log_type": "DATA_READ",
          "exempted_members": ["user:jose@example.com"]
        },
        { "log_type": "DATA_WRITE" },
        { "log_type": "ADMIN_READ" }
      ]
    },
    {
      "service": "sampleservice.googleapis.com",
      "audit_log_configs": [
        { "log_type": "DATA_READ" },
        {
          "log_type": "DATA_WRITE",
          "exempted_members": ["user:aliya@example.com"]
        }
      ]
    }
  ]
}

For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ logging. It also exempts jose@example.com from DATA_READ logging, and aliya@example.com from DATA_WRITE logging.
Fields
auditLogConfigs[]

object (AuditLogConfig)

The configuration for logging of each type of permission.

service

string

Specifies a service that will be enabled for audit logging. For example, storage.googleapis.com, cloudsql.googleapis.com. allServices is a special value that covers all services.
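The union rule described above (the allServices config merged with a service-specific config, with exemptions applied per log type) can be sketched as follows. effective_audit_config is a hypothetical helper written for illustration, not an API method:

```python
def effective_audit_config(configs, service):
    """Merge the allServices AuditConfig with the service-specific one:
    log types are unioned, and exempted members apply per log type."""
    enabled = {}  # log_type -> set of exempted members
    for cfg in configs:
        if cfg["service"] not in ("allServices", service):
            continue
        for lc in cfg["audit_log_configs"]:
            members = enabled.setdefault(lc["log_type"], set())
            members.update(lc.get("exempted_members", []))
    return enabled

# The example policy from the AuditConfig description above.
policy = [
    {"service": "allServices",
     "audit_log_configs": [
         {"log_type": "DATA_READ",
          "exempted_members": ["user:jose@example.com"]},
         {"log_type": "DATA_WRITE"},
         {"log_type": "ADMIN_READ"},
     ]},
    {"service": "sampleservice.googleapis.com",
     "audit_log_configs": [
         {"log_type": "DATA_READ"},
         {"log_type": "DATA_WRITE",
          "exempted_members": ["user:aliya@example.com"]},
     ]},
]

merged = effective_audit_config(policy, "sampleservice.googleapis.com")
# DATA_READ, DATA_WRITE and ADMIN_READ are all enabled for sampleservice.
print(sorted(merged))
```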

AuditLogConfig

Provides the configuration for logging a type of permissions. Example:

{
  "audit_log_configs": [
    {
      "log_type": "DATA_READ",
      "exempted_members": ["user:jose@example.com"]
    },
    { "log_type": "DATA_WRITE" }
  ]
}

This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting jose@example.com from DATA_READ logging.
Fields
exemptedMembers[]

string

Specifies the identities that do not cause logging for this type of permission. Follows the same format of Binding.members.

logType

enum

The log type that this config enables.

Enum type. Can be one of the following:
LOG_TYPE_UNSPECIFIED Default case. Should never be this.
ADMIN_READ Admin reads. Example: CloudIAM getIamPolicy
DATA_WRITE Data writes. Example: CloudSQL Users create
DATA_READ Data reads. Example: CloudSQL Users list

AvroOptions

(No description provided)
Fields
useAvroLogicalTypes

boolean

[Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).

BiEngineReason

(No description provided)
Fields
code

string

[Output-only] High-level BI Engine reason for partial or disabled acceleration.

message

string

[Output-only] Free form human-readable reason for partial or disabled acceleration.

BiEngineStatistics

(No description provided)
Fields
biEngineMode

string

[Output-only] Specifies which mode of BI Engine acceleration was performed (if any).

biEngineReasons[]

object (BiEngineReason)

In case of DISABLED or PARTIAL bi_engine_mode, these contain the explanatory reasons as to why BI Engine could not accelerate. In case the full query was accelerated, this field is not populated.

BigQueryModelTraining

(No description provided)
Fields
currentIteration

integer (int32 format)

[Output-only, Beta] Index of current ML training iteration. Updated during create model query job to show job progress.

expectedTotalIterations

string (int64 format)

[Output-only, Beta] Expected number of iterations for the create model query job specified as num_iterations in the input query. The actual total number of iterations may be less than this number due to early stop.

BigtableColumn

(No description provided)
Fields
encoding

string

[Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.

fieldName

string

[Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.

onlyReadLatest

boolean

[Optional] If this is set, only the latest version of the value in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.

qualifierEncoded

string (bytes format)

[Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.

qualifierString

string

(No description provided)

type

string

[Optional] The type to convert the value in cells of this column to. The values are expected to be encoded using the HBase Bytes.toBytes function when using the BINARY encoding value. The following BigQuery types are allowed (case-sensitive): BYTES, STRING, INTEGER, FLOAT, BOOLEAN. The default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.

BigtableColumnFamily

(No description provided)
Fields
columns[]

object (BigtableColumn)

[Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.

encoding

string

[Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.

familyId

string

Identifier of the column family.

onlyReadLatest

boolean

[Optional] If this is set, only the latest version of each value is exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.

type

string

[Optional] The type to convert the value in cells of this column family to. The values are expected to be encoded using the HBase Bytes.toBytes function when using the BINARY encoding value. The following BigQuery types are allowed (case-sensitive): BYTES, STRING, INTEGER, FLOAT, BOOLEAN. The default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.

BigtableOptions

(No description provided)
Fields
columnFamilies[]

object (BigtableColumnFamily)

[Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.

ignoreUnspecifiedColumnFamilies

boolean

[Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.

readRowkeyAsString

boolean

[Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.

BinaryClassificationMetrics

Evaluation metrics for binary classification/classifier models.
Fields
aggregateClassificationMetrics

object (AggregateClassificationMetrics)

Aggregate classification metrics.

binaryConfusionMatrixList[]

object (BinaryConfusionMatrix)

Binary confusion matrix at multiple thresholds.

negativeLabel

string

Label representing the negative class.

positiveLabel

string

Label representing the positive class.

BinaryConfusionMatrix

Confusion matrix for binary classification models.
Fields
accuracy

number (double format)

The fraction of predictions that were given the correct label.

f1Score

number (double format)

The harmonic mean of precision and recall.

falseNegatives

string (int64 format)

Number of positive-label samples that were predicted as negative.

falsePositives

string (int64 format)

Number of negative-label samples that were predicted as positive.

positiveClassThreshold

number (double format)

Threshold value used when computing each of the following metrics.

precision

number (double format)

The fraction of positive predictions that had positive actual labels.

recall

number (double format)

The fraction of actual positive labels that were given a positive prediction.

trueNegatives

string (int64 format)

Number of negative-label samples that were predicted as negative.

truePositives

string (int64 format)

Number of positive-label samples that were predicted as positive.
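The ratio fields of a BinaryConfusionMatrix follow directly from the four counts. A minimal sketch, using invented counts for illustration:

```python
# Hypothetical confusion-matrix counts at one positiveClassThreshold.
tp, fp, tn, fn = 90, 10, 80, 20

# accuracy: fraction of all predictions that were correct.
accuracy = (tp + tn) / (tp + fp + tn + fn)

# precision: fraction of positive predictions with positive actual labels.
precision = tp / (tp + fp)

# recall: fraction of positive-label samples predicted positive.
recall = tp / (tp + fn)

# f1Score: harmonic mean of precision and recall.
f1_score = 2 * precision * recall / (precision + recall)

print(accuracy, precision, round(recall, 4), round(f1_score, 4))
```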

Binding

Associates members, or principals, with a role.
Fields
condition

object (Expr)

The condition that is associated with this binding. If the condition evaluates to true, then this binding applies to the current request. If the condition evaluates to false, then this binding does not apply to the current request. However, a different role binding might grant the same role to one or more of the principals in this binding. To learn which resources support conditions in their IAM policies, see the IAM documentation.

members[]

string

Specifies the principals requesting access for a Google Cloud resource. members can have the following values:
* allUsers: A special identifier that represents anyone who is on the internet, with or without a Google account.
* allAuthenticatedUsers: A special identifier that represents anyone who is authenticated with a Google account or a service account. Does not include identities that come from external identity providers (IdPs) through identity federation.
* user:{emailid}: An email address that represents a specific Google account. For example, alice@example.com.
* serviceAccount:{emailid}: An email address that represents a Google service account. For example, my-other-app@appspot.gserviceaccount.com.
* serviceAccount:{projectid}.svc.id.goog[{namespace}/{kubernetes-sa}]: An identifier for a Kubernetes service account. For example, my-project.svc.id.goog[my-namespace/my-kubernetes-sa].
* group:{emailid}: An email address that represents a Google group. For example, admins@example.com.
* deleted:user:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a user that has been recently deleted. For example, alice@example.com?uid=123456789012345678901. If the user is recovered, this value reverts to user:{emailid} and the recovered user retains the role in the binding.
* deleted:serviceAccount:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a service account that has been recently deleted. For example, my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901. If the service account is undeleted, this value reverts to serviceAccount:{emailid} and the undeleted service account retains the role in the binding.
* deleted:group:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a Google group that has been recently deleted. For example, admins@example.com?uid=123456789012345678901. If the group is recovered, this value reverts to group:{emailid} and the recovered group retains the role in the binding.
* domain:{domain}: The G Suite domain (primary) that represents all the users of that domain. For example, google.com or example.com.

role

string

Role that is assigned to the list of members, or principals. For example, roles/viewer, roles/editor, or roles/owner.

BqmlIterationResult

(No description provided)
Fields
durationMs

string (int64 format)

[Output-only, Beta] Time taken to run the training iteration in milliseconds.

evalLoss

number (double format)

[Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.

index

integer (int32 format)

[Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.

learnRate

number (double format)

[Output-only, Beta] Learning rate used for this iteration, it varies for different training iterations if learn_rate_strategy option is not constant.

trainingLoss

number (double format)

[Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.

BqmlTrainingRun

(No description provided)
Fields
iterationResults[]

object (BqmlIterationResult)

[Output-only, Beta] List of results for each training iteration.

startTime

string (date-time format)

[Output-only, Beta] Training run start time in milliseconds since the epoch.

state

string

[Output-only, Beta] The state of the training run. Possible values:
IN PROGRESS: Training run is in progress.
FAILED: Training run ended due to a non-retryable failure.
SUCCEEDED: Training run successfully completed.
CANCELLED: Training run cancelled by the user.

trainingOptions

object

[Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.

trainingOptions.earlyStop

boolean

(No description provided)

trainingOptions.l1Reg

number (double format)

(No description provided)

trainingOptions.l2Reg

number (double format)

(No description provided)

trainingOptions.learnRate

number (double format)

(No description provided)

trainingOptions.learnRateStrategy

string

(No description provided)

trainingOptions.lineSearchInitLearnRate

number (double format)

(No description provided)

trainingOptions.maxIteration

string (int64 format)

(No description provided)

trainingOptions.minRelProgress

number (double format)

(No description provided)

trainingOptions.warmStart

boolean

(No description provided)

CategoricalValue

Representative value of a categorical feature.
Fields
categoryCounts[]

object (CategoryCount)

Counts of all categories for the categorical feature. If there are more than ten categories, we return top ten (by count) and return one more CategoryCount with category "OTHER" and count as aggregate counts of remaining categories.
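The top-ten-plus-"OTHER" aggregation described above can be sketched in a few lines. category_counts is a hypothetical helper, not part of the API:

```python
from collections import Counter

def category_counts(samples, top=10):
    """Return the top-N categories by count, folding the remaining
    categories into a single "OTHER" entry with their aggregate count."""
    ranked = Counter(samples).most_common()
    result = [{"category": c, "count": n} for c, n in ranked[:top]]
    rest = sum(n for _, n in ranked[top:])
    if rest:
        result.append({"category": "OTHER", "count": rest})
    return result

# 12 distinct categories with counts 12 down to 1: the two rarest
# (counts 2 and 1) are folded into "OTHER".
samples = [f"cat{i}" for i in range(12) for _ in range(12 - i)]
out = category_counts(samples)
print(len(out), out[-1])
```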

CategoryCount

Represents the count of a single category within the cluster.
Fields
category

string

The name of the category.

count

string (int64 format)

The count of training samples matching the category within the cluster.

CloneDefinition

(No description provided)
Fields
baseTableReference

object (TableReference)

[Required] Reference describing the ID of the table that was cloned.

cloneTime

string (date-time format)

[Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.

Cluster

Message containing the information about one cluster.
Fields
centroidId

string (int64 format)

Centroid id.

count

string (int64 format)

Count of training data rows that were assigned to this cluster.

featureValues[]

object (FeatureValue)

Values of highly variant features for this cluster.

ClusterInfo

Information about a single cluster for clustering model.
Fields
centroidId

string (int64 format)

Centroid id.

clusterRadius

number (double format)

Cluster radius, the average distance from centroid to each point assigned to the cluster.

clusterSize

string (int64 format)

Cluster size, the total number of points assigned to the cluster.

Clustering

(No description provided)
Fields
fields[]

string

[Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.

ClusteringMetrics

Evaluation metrics for clustering models.
Fields
clusters[]

object (Cluster)

Information for all clusters.

daviesBouldinIndex

number (double format)

Davies-Bouldin index.

meanSquaredDistance

number (double format)

Mean of squared distances between each sample to its cluster centroid.

ConfusionMatrix

Confusion matrix for multi-class classification models.
Fields
confidenceThreshold

number (double format)

Confidence threshold used when computing the entries of the confusion matrix.

rows[]

object (Row)

One row per actual label.

ConnectionProperty

(No description provided)
Fields
key

string

[Required] Name of the connection property to set.

value

string

[Required] Value of the connection property.

CsvOptions

(No description provided)
Fields
allowJaggedRows

boolean

[Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

allowQuotedNewlines

boolean

[Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

encoding

string

[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

fieldDelimiter

string

[Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').

nullMarker

string

[Optional] A custom string that represents a NULL value in CSV import data.

preserveAsciiControlCharacters

boolean

[Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.

quote

string

[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

skipLeadingRows

string (int64 format)

[Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following:
* skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise, data is read starting from the second row.
* skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row.
* skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise, row N is used to extract column names for the detected schema.
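The skipLeadingRows autodetect behavior can be summarized as a small decision function. header_detection_plan is a hypothetical helper written for illustration only:

```python
def header_detection_plan(skip_leading_rows=None):
    """Sketch of the autodetect behavior described for skipLeadingRows.
    Returns (rows skipped outright, 1-based row inspected for headers),
    where None in the second slot means no header detection is attempted."""
    if skip_leading_rows is None:
        # Unspecified: try to detect headers in the first row.
        return 0, 1
    if skip_leading_rows == 0:
        # Explicit 0: no headers; data starts at the first row.
        return 0, None
    # N > 0: skip N-1 rows, then try to detect headers in row N.
    return skip_leading_rows - 1, skip_leading_rows

print(header_detection_plan(), header_detection_plan(0),
      header_detection_plan(3))
```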

DataMaskingStatistics

(No description provided)
Fields
dataMaskingApplied

boolean

[Output-only] [Preview] Whether any accessed data was protected by data masking. The actual evaluation is done by accessStats.masked_field_count > 0. Since this is only used for discovery_doc generation purposes, as long as the type (boolean) matches, the client library can leverage this. The actual evaluation of the variable is done elsewhere.

DataSplitResult

Data split result. This contains references to the training and evaluation data tables that were used to train the model.
Fields
evaluationTable

object (TableReference)

Table reference of the evaluation data after split.

testTable

object (TableReference)

Table reference of the test data after split.

trainingTable

object (TableReference)

Table reference of the training data after split.

Dataset

(No description provided)
Fields
access[]

object

[Optional] An array of objects that define dataset access for one or more entities. You can set this property when inserting or updating a dataset in order to control who is allowed to access the data. If unspecified at dataset creation time, BigQuery adds default dataset access for the following entities:
access.specialGroup: projectReaders; access.role: READER
access.specialGroup: projectWriters; access.role: WRITER
access.specialGroup: projectOwners; access.role: OWNER
access.userByEmail: [dataset creator email]; access.role: OWNER

access.dataset

object (DatasetAccessEntry)

[Pick one] A grant authorizing all resources of a particular type in a particular dataset access to this dataset. Only views are supported for now. The role field is not required when this field is set. If that dataset is deleted and re-created, its access needs to be granted again via an update operation.

access.domain

string

[Pick one] A domain to grant access to. Any users signed in with the domain specified will be granted the specified access. Example: "example.com". Maps to IAM policy member "domain:DOMAIN".

access.groupByEmail

string

[Pick one] An email address of a Google Group to grant access to. Maps to IAM policy member "group:GROUP".

access.iamMember

string

[Pick one] Some other type of member that appears in the IAM Policy but isn't a user, group, domain, or special group.

access.role

string

[Required] An IAM role ID that should be granted to the user, group, or domain specified in this access entry. The following legacy mappings will be applied:
OWNER: roles/bigquery.dataOwner
WRITER: roles/bigquery.dataEditor
READER: roles/bigquery.dataViewer
This field will accept any of the above formats, but will return only the legacy format. For example, if you set this field to "roles/bigquery.dataOwner", it will be returned back as "OWNER".

access.routine

object (RoutineReference)

[Pick one] A routine from a different dataset to grant access to. Queries executed against that routine will have read access to views/tables/routines in this dataset. Only UDF is supported for now. The role field is not required when this field is set. If that routine is updated by any user, access to the routine needs to be granted again via an update operation.

access.specialGroup

string

[Pick one] A special group to grant access to. Possible values include:
projectOwners: Owners of the enclosing project.
projectReaders: Readers of the enclosing project.
projectWriters: Writers of the enclosing project.
allAuthenticatedUsers: All authenticated BigQuery users.
Maps to similarly-named IAM members.

access.userByEmail

string

[Pick one] An email address of a user to grant access to. For example: fred@example.com. Maps to IAM policy member "user:EMAIL" or "serviceAccount:EMAIL".

access.view

object (TableReference)

[Pick one] A view from a different dataset to grant access to. Queries executed against that view will have read access to tables in this dataset. The role field is not required when this field is set. If that view is updated by any user, access to the view needs to be granted again via an update operation.

creationTime

string (int64 format)

[Output-only] The time when this dataset was created, in milliseconds since the epoch.

datasetReference

object (DatasetReference)

[Required] A reference that identifies the dataset.

defaultCollation

string

[Output-only] The default collation of the dataset.

defaultEncryptionConfiguration

object (EncryptionConfiguration)

(No description provided)

defaultPartitionExpirationMs

string (int64 format)

[Optional] The default partition expiration for all partitioned tables in the dataset, in milliseconds. Once this property is set, all newly-created partitioned tables in the dataset will have an expirationMs property in the timePartitioning settings set to this value, and changing the value will only affect new tables, not existing ones. The storage in a partition will have an expiration time of its partition time plus this value. Setting this property overrides the use of defaultTableExpirationMs for partitioned tables: only one of defaultTableExpirationMs and defaultPartitionExpirationMs will be used for any new partitioned table. If you provide an explicit timePartitioning.expirationMs when creating or updating a partitioned table, that value takes precedence over the default partition expiration time indicated by this property.

defaultTableExpirationMs

string (int64 format)

[Optional] The default lifetime of all tables in the dataset, in milliseconds. The minimum value is 3600000 milliseconds (one hour). Once this property is set, all newly-created tables in the dataset will have an expirationTime property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the expirationTime for a given table is reached, that table will be deleted automatically. If a table's expirationTime is modified or removed before the table expires, or if you provide an explicit expirationTime when creating a table, that value takes precedence over the default expiration time indicated by this property.
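Both defaults are int64 millisecond values carried as strings in JSON. A quick sketch (with illustrative project and dataset names) of setting a 90-day default partition expiration alongside a 7-day default table expiration:

```python
# Both expiration defaults are int64 millisecond counts, serialized as
# strings in the JSON representation of the dataset resource.
MS_PER_DAY = 24 * 60 * 60 * 1000

dataset = {
    "datasetReference": {"projectId": "my-project", "datasetId": "my_dataset"},
    # Partitioned tables: partitions expire 90 days after their partition time.
    "defaultPartitionExpirationMs": str(90 * MS_PER_DAY),
    # Non-partitioned tables: expire 7 days after creation.
    "defaultTableExpirationMs": str(7 * MS_PER_DAY),
}

# The documented minimum for defaultTableExpirationMs is one hour.
assert int(dataset["defaultTableExpirationMs"]) >= 3600000
```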

description

string

[Optional] A user-friendly description of the dataset.

etag

string

[Output-only] A hash of the resource.

friendlyName

string

[Optional] A descriptive name for the dataset.

id

string

[Output-only] The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field. When creating a new dataset, leave this field blank, and instead specify the datasetId field.

isCaseInsensitive

boolean

[Optional] Indicates if table names are case insensitive in the dataset.

kind

string

[Output-only] The resource type.

labels

map (key: string, value: string)

The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information.

lastModifiedTime

string (int64 format)

[Output-only] The date when this dataset or any of its tables was last modified, in milliseconds since the epoch.

location

string

The geographic location where the dataset should reside. The default value is US. See details at https://cloud.google.com/bigquery/docs/locations.

maxTimeTravelHours

string (int64 format)

[Optional] The maximum time travel window, in hours, for all tables in the dataset.

satisfiesPzs

boolean

[Output-only] Reserved for future use.

selfLink

string

[Output-only] A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource.

tags[]

object

[Optional] The tags associated with this dataset. Tag keys are globally unique.

tags.tagKey

string

[Required] The namespaced friendly name of the tag key, e.g. "12345/environment", where 12345 is the org ID.

tags.tagValue

string

[Required] Friendly short name of the tag value, e.g. "production".

DatasetAccessEntry

(No description provided)
Fields
dataset

object (DatasetReference)

[Required] The dataset this entry applies to.

targetTypes[]

string

(No description provided)

DatasetList

(No description provided)
Fields
datasets[]

object

An array of the dataset resources in the project. Each resource contains basic information. For full information about a particular dataset resource, use the Datasets: get method. This property is omitted when there are no datasets in the project.

datasets.datasetReference

object (DatasetReference)

The dataset reference. Use this property to access specific parts of the dataset's ID, such as project ID or dataset ID.

datasets.friendlyName

string

A descriptive name for the dataset, if one exists.

datasets.id

string

The fully-qualified, unique, opaque ID of the dataset.

datasets.kind

string

The resource type. This property always returns the value "bigquery#dataset".

datasets.labels

map (key: string, value: string)

The labels associated with this dataset. You can use these to organize and group your datasets.

datasets.location

string

The geographic location where the data resides.

etag

string

A hash value of the results page. You can use this property to determine if the page has changed since the last request.

kind

string

The list type. This property always returns the value "bigquery#datasetList".

nextPageToken

string

A token that can be used to request the next results page. This property is omitted on the final results page.
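The omission rules for datasets and nextPageToken imply a simple paging loop. A sketch, where fetch_page stands in for an HTTP GET of the dataset list endpoint (the function and its canned responses are purely illustrative):

```python
# Sketch of consuming a paged DatasetList. `fetch_page` stands in for an
# HTTP GET of the datasets list endpoint and is illustrative only.
def all_datasets(fetch_page):
    token = None
    while True:
        page = fetch_page(pageToken=token)
        # "datasets" is omitted when the project has no datasets.
        yield from page.get("datasets", [])
        # "nextPageToken" is omitted on the final results page.
        token = page.get("nextPageToken")
        if not token:
            return

# Fake two-page responses for demonstration:
pages = {
    None: {"datasets": [{"id": "p:a"}], "nextPageToken": "t1"},
    "t1": {"datasets": [{"id": "p:b"}]},
}
ids = [d["id"] for d in all_datasets(lambda pageToken: pages[pageToken])]
```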

DatasetReference

(No description provided)
Fields
datasetId

string

[Required] A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
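The datasetId character and length rules can be checked client-side before issuing a request. A sketch of such a validation (the helper name is illustrative):

```python
import re

# Documented rules for datasetId: only letters (a-z, A-Z), numbers (0-9),
# or underscores (_), at most 1,024 characters.
DATASET_ID_RE = re.compile(r"^[A-Za-z0-9_]{1,1024}$")

def is_valid_dataset_id(dataset_id: str) -> bool:
    """Client-side check of the documented datasetId constraints."""
    return bool(DATASET_ID_RE.match(dataset_id))
```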

projectId

string

[Optional] The ID of the project containing this dataset.

DestinationTableProperties

(No description provided)
Fields
description

string

[Optional] The description for the destination table. This will only be used if the destination table is newly created. If the table already exists and a value different from the current description is provided, the job will fail.

expirationTime

string (date-time format)

[Internal] This field is for Google internal use only.

friendlyName

string

[Optional] The friendly name for the destination table. This will only be used if the destination table is newly created. If the table already exists and a value different from the current friendly name is provided, the job will fail.

labels

map (key: string, value: string)

[Optional] The labels associated with this table. You can use these to organize and group your tables. This will only be used if the destination table is newly created. If the table already exists and the provided labels differ from the current labels, the job will fail.

DimensionalityReductionMetrics

Model evaluation metrics for dimensionality reduction models.
Fields
totalExplainedVarianceRatio

number (double format)

Total percentage of variance explained by the selected principal components.

DmlStatistics

(No description provided)
Fields
deletedRowCount

string (int64 format)

Number of deleted rows. Populated by DML DELETE, MERGE, and TRUNCATE statements.

insertedRowCount

string (int64 format)

Number of inserted rows. Populated by DML INSERT and MERGE statements.

updatedRowCount

string (int64 format)

Number of updated rows. Populated by DML UPDATE and MERGE statements.

DoubleCandidates

Discrete candidates of a double hyperparameter.
Fields
candidates[]

number (double format)

Candidates for the double parameter in increasing order.

DoubleHparamSearchSpace

Search space for a double hyperparameter.
Fields
candidates

object (DoubleCandidates)

Candidates of the double hyperparameter.

range

object (DoubleRange)

Range of the double hyperparameter.

DoubleRange

Range of a double hyperparameter.
Fields
max

number (double format)

Max value of the double parameter.

min

number (double format)

Min value of the double parameter.

EncryptionConfiguration

(No description provided)
Fields
kmsKeyName

string

[Optional] Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
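Cloud KMS encryption keys are referenced by their full resource name, which follows the projects/*/locations/*/keyRings/*/cryptoKeys/* pattern. A sketch of a client-side format check (the key name itself is illustrative):

```python
import re

# Cloud KMS key names follow this resource-name pattern; a quick sanity
# check before submitting an EncryptionConfiguration.
KMS_KEY_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/keyRings/[^/]+/cryptoKeys/[^/]+$"
)

config = {
    "kmsKeyName": (
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    )
}
```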

Entry

A single entry in the confusion matrix.
Fields
itemCount

string (int64 format)

Number of items being predicted as this label.

predictedLabel

string

The predicted label. For confidence_threshold > 0, we will also add an entry indicating the number of items under the confidence threshold.

ErrorProto

(No description provided)
Fields
debugInfo

string

Debugging information. This property is internal to Google and should not be used.

location

string

Specifies where the error occurred, if present.

message

string

A human-readable description of the error.

reason

string

A short error code that summarizes the error.

EvaluationMetrics

Evaluation metrics of a model. These are either computed on all training data or just the eval data based on whether eval data was used during training. These are not present for imported models.
Fields
arimaForecastingMetrics

object (ArimaForecastingMetrics)

Populated for ARIMA models.

binaryClassificationMetrics

object (BinaryClassificationMetrics)

Populated for binary classification/classifier models.

clusteringMetrics

object (ClusteringMetrics)

Populated for clustering models.

dimensionalityReductionMetrics

object (DimensionalityReductionMetrics)

Evaluation metrics when the model is a dimensionality reduction model, which currently includes PCA.

multiClassClassificationMetrics

object (MultiClassClassificationMetrics)

Populated for multi-class classification/classifier models.

rankingMetrics

object (RankingMetrics)

Populated for implicit feedback type matrix factorization models.

regressionMetrics

object (RegressionMetrics)

Populated for regression models and explicit feedback type matrix factorization models.

ExplainQueryStage

(No description provided)
Fields
completedParallelInputs

string (int64 format)

Number of parallel input segments completed.

computeMsAvg

string (int64 format)

Milliseconds the average shard spent on CPU-bound tasks.

computeMsMax

string (int64 format)

Milliseconds the slowest shard spent on CPU-bound tasks.

computeRatioAvg

number (double format)

Relative amount of time the average shard spent on CPU-bound tasks.

computeRatioMax

number (double format)

Relative amount of time the slowest shard spent on CPU-bound tasks.

endMs

string (int64 format)

Stage end time represented as milliseconds since epoch.

id

string (int64 format)

Unique ID for stage within plan.

inputStages[]

string (int64 format)

IDs for stages that are inputs to this stage.

name

string

Human-readable name for stage.

parallelInputs

string (int64 format)

Number of parallel input segments to be processed.

readMsAvg

string (int64 format)

Milliseconds the average shard spent reading input.

readMsMax

string (int64 format)

Milliseconds the slowest shard spent reading input.

readRatioAvg

number (double format)

Relative amount of time the average shard spent reading input.

readRatioMax

number (double format)

Relative amount of time the slowest shard spent reading input.

recordsRead

string (int64 format)

Number of records read into the stage.

recordsWritten

string (int64 format)

Number of records written by the stage.

shuffleOutputBytes

string (int64 format)

Total number of bytes written to shuffle.

shuffleOutputBytesSpilled

string (int64 format)

Total number of bytes written to shuffle and spilled to disk.

slotMs

string (int64 format)

Slot-milliseconds used by the stage.

startMs

string (int64 format)

Stage start time represented as milliseconds since epoch.

status

string

Current status for the stage.

steps[]

object (ExplainQueryStep)

List of operations within the stage in dependency order (approximately chronological).

waitMsAvg

string (int64 format)

Milliseconds the average shard spent waiting to be scheduled.

waitMsMax

string (int64 format)

Milliseconds the slowest shard spent waiting to be scheduled.

waitRatioAvg

number (double format)

Relative amount of time the average shard spent waiting to be scheduled.

waitRatioMax

number (double format)

Relative amount of time the slowest shard spent waiting to be scheduled.

writeMsAvg

string (int64 format)

Milliseconds the average shard spent on writing output.

writeMsMax

string (int64 format)

Milliseconds the slowest shard spent on writing output.

writeRatioAvg

number (double format)

Relative amount of time the average shard spent on writing output.

writeRatioMax

number (double format)

Relative amount of time the slowest shard spent on writing output.

ExplainQueryStep

(No description provided)
Fields
kind

string

Machine-readable operation type.

substeps[]

string

Human-readable stage descriptions.

Explanation

Explanation for a single feature.
Fields
attribution

number (double format)

Attribution of feature.

featureName

string

The full feature name. For non-numerical features, this will be formatted like <column_name>.<encoded_category>. The feature name is always truncated to its first 120 characters.

Expr

Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.
Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100"
Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email"
Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'"
Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)"
The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
Fields
description

string

Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.

expression

string

Textual representation of an expression in Common Expression Language syntax.

location

string

Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.

title

string

Optional. Title for the expression, i.e. a short string describing its purpose. This can be used, for example, in UIs that allow entering the expression.
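As a concrete illustration, the first CEL example above can be carried as an Expr object in a JSON payload:

```python
# The "Summary size limit" example from the Expr description, expressed
# as the JSON object an API request would carry.
expr = {
    "title": "Summary size limit",
    "description": "Determines if a summary is less than 100 chars",
    "expression": "document.summary.size() < 100",
}

# Only the four documented fields are valid on an Expr.
assert set(expr) <= {"title", "description", "expression", "location"}
```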

ExternalDataConfiguration

(No description provided)
Fields
autodetect

boolean

Try to detect schema and format options automatically. Any option specified explicitly will be honored.

avroOptions

object (AvroOptions)

Additional properties to set if sourceFormat is set to Avro.

bigtableOptions

object (BigtableOptions)

[Optional] Additional options if sourceFormat is set to BIGTABLE.

compression

string

[Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.

connectionId

string

[Optional, Trusted Tester] Connection for external data source.

csvOptions

object (CsvOptions)

Additional properties to set if sourceFormat is set to CSV.

decimalTargetTypes[]

string

[Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
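The selection rule can be sketched as a small function. The capacity limits below (29 integer digits and scale 9 for NUMERIC; 38 and 38 for BIGNUMERIC) are inferred from the worked examples in the description rather than stated exhaustively there:

```python
# Sketch of the documented decimalTargetTypes selection rule.
PREFERENCE = ["NUMERIC", "BIGNUMERIC", "STRING"]

def supports(type_name: str, precision: int, scale: int) -> bool:
    """Whether a target type can hold a decimal of this precision/scale."""
    if type_name == "STRING":
        return True  # STRING supports all precision and scale values.
    if type_name == "NUMERIC":
        return scale <= 9 and precision - scale <= 29
    if type_name == "BIGNUMERIC":
        return scale <= 38 and precision - scale <= 38
    return False

def pick_decimal_target_type(targets, precision, scale):
    allowed = set(targets)  # order in the field is ignored
    for t in PREFERENCE:
        if t in allowed and supports(t, precision, scale):
            return t
    # Nothing fits: fall back to the widest type in the list
    # (later in PREFERENCE means wider range). Reads may still error.
    return max(allowed, key=PREFERENCE.index)
```

Running the doc's own examples with ["NUMERIC", "BIGNUMERIC"] reproduces the listed outcomes, including the (77,38) fallback case.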

googleSheetsOptions

object (GoogleSheetsOptions)

[Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.

hivePartitioningOptions

object (HivePartitioningOptions)

[Optional] Options to configure hive partitioning support.

ignoreUnknownValues

boolean

[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: for CSV, trailing columns; for JSON, named values that don't match any column names. For Google Cloud Bigtable, Google Cloud Datastore backups, and Avro formats, this setting is ignored.

maxBadRecords

integer (int32 format)

[Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.

parquetOptions

object (ParquetOptions)

Additional properties to set if sourceFormat is set to Parquet.

referenceFileSchemaUri

string

[Optional] A reference file with the expected table schema. Enabled for the AVRO, PARQUET, and ORC formats.

schema

object (TableSchema)

[Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.

sourceFormat

string

[Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".

sourceUris[]

string

[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.

FeatureValue

Representative value of a single feature within the cluster.
Fields
categoricalValue

object (CategoricalValue)

The categorical feature value.

featureColumn

string

The feature column name.

numericalValue

number (double format)

The numerical feature value. This is the centroid value for this feature.

GetIamPolicyRequest

Request message for GetIamPolicy method.
Fields
options

object (GetPolicyOptions)

OPTIONAL: A GetPolicyOptions object for specifying options to GetIamPolicy.

GetPolicyOptions

Encapsulates settings provided to GetIamPolicy.
Fields
requestedPolicyVersion

integer (int32 format)

Optional. The maximum policy version that will be used to format the policy. Valid values are 0, 1, and 3. Requests specifying an invalid value will be rejected. Requests for policies with any conditional role bindings must specify version 3. Policies with no conditional role bindings may specify any valid value or leave the field unset. The policy in the response might use the policy version that you specified, or it might use a lower policy version. For example, if you specify version 3, but the policy has no conditional role bindings, the response uses version 1. To learn which resources support conditions in their IAM policies, see the IAM documentation.

GetQueryResultsResponse

(No description provided)
Fields
cacheHit

boolean

Whether the query result was fetched from the query cache.

errors[]

object (ErrorProto)

[Output-only] The first errors or warnings encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful.

etag

string

A hash of this response.

jobComplete

boolean

Whether the query has completed or not. If rows or totalRows are present, this will always be true. If this is false, totalRows will not be available.

jobReference

object (JobReference)

Reference to the BigQuery Job that was created to run the query. This field will be present even if the original request timed out, in which case GetQueryResults can be used to read the results once the query has completed. Since this API only returns the first page of results, subsequent pages can be fetched via the same mechanism (GetQueryResults).

kind

string

The resource type of the response.

numDmlAffectedRows

string (int64 format)

[Output-only] The number of rows affected by a DML statement. Present only for DML statements INSERT, UPDATE or DELETE.

pageToken

string

A token used for paging results.

rows[]

object (TableRow)

An object with as many results as can be contained within the maximum permitted reply size. To get any additional rows, you can call GetQueryResults and specify the jobReference returned above. Present only when the query completes successfully.

schema

object (TableSchema)

The schema of the results. Present only when the query completes successfully.

totalBytesProcessed

string (int64 format)

The total number of bytes processed for this query.

totalRows

string (uint64 format)

The total number of rows in the complete query result set, which can be more than the number of rows in this single page of results. Present only when the query completes successfully.
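The jobComplete flag above implies a polling loop before reading rows or totalRows. A sketch, where get_query_results stands in for a getQueryResults call (the function and canned responses are illustrative only):

```python
# Sketch of polling on jobComplete. `get_query_results` stands in for a
# getQueryResults call and is illustrative only.
def wait_for_rows(get_query_results, job_ref):
    while True:
        resp = get_query_results(job_ref)
        if resp["jobComplete"]:
            # rows/totalRows are only present once the query completes.
            return resp.get("rows", [])

# Canned responses: one incomplete poll, then a completed one.
responses = iter([
    {"jobComplete": False},
    {"jobComplete": True, "rows": [{"f": [{"v": "1"}]}]},
])
rows = wait_for_rows(lambda _ref: next(responses), {"jobId": "job_123"})
```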

GetServiceAccountResponse

(No description provided)
Fields
email

string

The service account email address.

kind

string

The resource type of the response.

GlobalExplanation

Global explanations containing the top most important features after training.
Fields
classLabel

string

Class label for this set of global explanations. Will be empty/null for binary logistic and linear regression models. Sorted alphabetically in descending order.

explanations[]

object (Explanation)

A list of the top global explanations. Sorted by absolute value of attribution in descending order.

GoogleSheetsOptions

(No description provided)
Fields
range

string

[Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id. For example: sheet1!A1:B20.

skipLeadingRows

string (int64 format)

[Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, the behavior is the following:
* skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise, data is read starting from the second row.
* skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row.
* skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise, row N is used to extract column names for the detected schema.

HivePartitioningOptions

(No description provided)
Fields
mode

string

[Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.

requirePartitionFilter

boolean

[Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.

sourceUriPrefix

string

[Optional] When hive partition detection is requested, a common prefix for all source URIs should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:
gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro
When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/ (the trailing slash does not matter).
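With AUTO or STRINGS detection, the partition keys are the key=value path segments that follow the prefix. A sketch of recovering them from the example layout (helper name is illustrative):

```python
# Sketch: extract hive partition key/value pairs from a source URI,
# given the common sourceUriPrefix from the layout above.
def partition_keys(uri: str, prefix: str) -> dict:
    rest = uri[len(prefix):].strip("/")
    # Keep only key=value path segments; the trailing file name has no "=".
    pairs = [seg.split("=", 1) for seg in rest.split("/") if "=" in seg]
    return dict(pairs)

uri = "gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro"
keys = partition_keys(uri, "gs://bucket/path_to_table")
```

The same result is obtained with the trailing-slash form of the prefix, mirroring the note that the trailing slash does not matter.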

HparamSearchSpaces

Hyperparameter search spaces. These should be a subset of training_options.
Fields
activationFn

object (StringHparamSearchSpace)

Activation functions of neural network models.

batchSize

object (IntHparamSearchSpace)

Mini batch sample size.

boosterType

object (StringHparamSearchSpace)

Booster type for boosted tree models.

colsampleBylevel

object (DoubleHparamSearchSpace)

Subsample ratio of columns for each level for boosted tree models.

colsampleBynode

object (DoubleHparamSearchSpace)

Subsample ratio of columns for each node (split) for boosted tree models.

colsampleBytree

object (DoubleHparamSearchSpace)

Subsample ratio of columns when constructing each tree for boosted tree models.

dartNormalizeType

object (StringHparamSearchSpace)

Dart normalization type for boosted tree models.

dropout

object (DoubleHparamSearchSpace)

Dropout probability for dnn model training and boosted tree models using dart booster.

hiddenUnits

object (IntArrayHparamSearchSpace)

Hidden units for neural network models.

l1Reg

object (DoubleHparamSearchSpace)

L1 regularization coefficient.

l2Reg

object (DoubleHparamSearchSpace)

L2 regularization coefficient.

learnRate

object (DoubleHparamSearchSpace)

Learning rate of training jobs.

maxTreeDepth

object (IntHparamSearchSpace)

Maximum depth of a tree for boosted tree models.

minSplitLoss

object (DoubleHparamSearchSpace)

Minimum split loss for boosted tree models.

minTreeChildWeight

object (IntHparamSearchSpace)

Minimum sum of instance weight needed in a child for boosted tree models.

numClusters

object (IntHparamSearchSpace)

Number of clusters for k-means.

numFactors

object (IntHparamSearchSpace)

Number of latent factors to train on.

numParallelTree

object (IntHparamSearchSpace)

Number of parallel trees for boosted tree models.

optimizer

object (StringHparamSearchSpace)

Optimizer of TF models.

subsample

object (DoubleHparamSearchSpace)

Subsample the training data to grow tree to prevent overfitting for boosted tree models.

treeMethod

object (StringHparamSearchSpace)

Tree construction algorithm for boosted tree models.

walsAlpha

object (DoubleHparamSearchSpace)

Hyperparameter for matrix factorization when the implicit feedback type is specified.
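To illustrate how the candidate/range alternatives compose, a sketch of an HparamSearchSpaces object pairing a continuous learnRate range with discrete numClusters candidates (values are illustrative; int64 candidates travel as strings in JSON):

```python
# Sketch of an HparamSearchSpaces message: a DoubleHparamSearchSpace with
# a range, and an IntHparamSearchSpace with discrete candidates. int64
# values are serialized as strings in the JSON representation.
search_spaces = {
    "learnRate": {"range": {"min": 0.01, "max": 0.5}},
    "numClusters": {"candidates": {"candidates": ["3", "4", "5"]}},
}

lr = search_spaces["learnRate"]["range"]
assert lr["min"] < lr["max"]
```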

HparamTuningTrial

Training info of a trial in hyperparameter tuning models.
Fields
endTimeMs

string (int64 format)

Ending time of the trial.

errorMessage

string

Error message for FAILED and INFEASIBLE trials.

evalLoss

number (double format)

Loss computed on the eval data at the end of the trial.

evaluationMetrics

object (EvaluationMetrics)

Evaluation metrics of this trial calculated on the test data. Empty in Job API.

hparamTuningEvaluationMetrics

object (EvaluationMetrics)

Hyperparameter tuning evaluation metrics of this trial calculated on the eval data. Unlike evaluation_metrics, only the fields corresponding to the hparam_tuning_objectives are set.

hparams

object (TrainingOptions)

The hyperparameters selected for this trial.

startTimeMs

string (int64 format)

Starting time of the trial.

status

enum

The status of the trial.

Enum type. Can be one of the following:
TRIAL_STATUS_UNSPECIFIED (No description provided)
NOT_STARTED Scheduled but not started.
RUNNING Running state.
SUCCEEDED The trial succeeded.
FAILED The trial failed.
INFEASIBLE The trial is infeasible due to invalid parameters.
STOPPED_EARLY Trial stopped early because it's not promising.
trainingLoss

number (double format)

Loss computed on the training data at the end of the trial.

trialId

string (int64 format)

1-based index of the trial.

IndexUnusedReason

(No description provided)
Fields
base_table

object (TableReference)

[Output-only] Specifies the base table involved in the reason that no search index was used.

code

string

[Output-only] Specifies the high-level reason for the scenario when no search index was used.

index_name

string

[Output-only] Specifies the name of the unused search index, if available.

message

string

[Output-only] Free-form, human-readable reason for the scenario when no search index was used.

IntArray

An array of ints.
Fields
elements[]

string (int64 format)

Elements in the int array.

IntArrayHparamSearchSpace

Search space for int array.
Fields
candidates[]

object (IntArray)

Candidates for the int array parameter.

IntCandidates

Discrete candidates of an int hyperparameter.
Fields
candidates[]

string (int64 format)

Candidates for the int parameter in increasing order.

IntHparamSearchSpace

Search space for an int hyperparameter.
Fields
candidates

object (IntCandidates)

Candidates of the int hyperparameter.

range

object (IntRange)

Range of the int hyperparameter.

IntRange

Range of an int hyperparameter.
Fields
max

string (int64 format)

Max value of the int parameter.

min

string (int64 format)

Min value of the int parameter.

IterationResult

(No description provided)
Fields
durationMs

string (int64 format)

Time taken to run the iteration in milliseconds.

evalLoss

number (double format)

Loss computed on the eval data at the end of the iteration.

index

integer (int32 format)

0-based index of the iteration.

learnRate

number (double format)

Learn rate used for this iteration.

trainingLoss

number (double format)

Loss computed on the training data at the end of the iteration.

Job

(No description provided)
Fields
configuration

object (JobConfiguration)

[Required] Describes the job configuration.

configuration.load.rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

configuration.load.rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

configuration.load.rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

etag

string

[Output-only] A hash of this resource.

id

string

[Output-only] Opaque ID field of the job.

jobReference

object (JobReference)

[Optional] Reference describing the unique-per-user name of the job.

kind

string

[Output-only] The type of the resource.

selfLink

string

[Output-only] A URL that can be used to access this resource again.

statistics

object (JobStatistics)

[Output-only] Information about the job, including starting time and ending time of the job.

statistics.query.reservationUsage.name

string

[Output only] Reservation name or "unreserved" for on-demand resources usage.

statistics.query.reservationUsage.slotMs

string (int64 format)

[Output only] Slot-milliseconds the job spent in the given reservation.

statistics.reservationUsage.name

string

[Output-only] Reservation name or "unreserved" for on-demand resources usage.

statistics.reservationUsage.slotMs

string (int64 format)

[Output-only] Slot-milliseconds the job spent in the given reservation.

status

object (JobStatus)

[Output-only] The status of this job. Examine this value when polling an asynchronous job to see if the job is complete.

user_email

string

[Output-only] Email address of the user who ran the job.
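
When polling an asynchronous job, the resource above is inspected through status.state and, once the state is DONE, status.errorResult. A small sketch of that check, using hand-built Job dicts rather than a live API call:

```python
def job_outcome(job):
    """Classify a Job resource as 'running', 'succeeded', or 'failed'.

    A job is finished when status.state == "DONE"; an errorResult in the
    status, if present, explains the failure.
    """
    status = job.get("status", {})
    if status.get("state") != "DONE":
        return "running"
    return "failed" if "errorResult" in status else "succeeded"
```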

JobCancelResponse

(No description provided)
Fields
job

object (Job)

The final state of the job.

job.configuration.load.rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

job.configuration.load.rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

job.configuration.load.rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

job.statistics.query.reservationUsage.name

string

[Output only] Reservation name or "unreserved" for on-demand resources usage.

job.statistics.query.reservationUsage.slotMs

string (int64 format)

[Output only] Slot-milliseconds the job spent in the given reservation.

job.statistics.reservationUsage.name

string

[Output-only] Reservation name or "unreserved" for on-demand resources usage.

job.statistics.reservationUsage.slotMs

string (int64 format)

[Output-only] Slot-milliseconds the job spent in the given reservation.

kind

string

The resource type of the response.

JobConfiguration

(No description provided)
Fields
copy

object (JobConfigurationTableCopy)

[Pick one] Copies a table.

dryRun

boolean

[Optional] If set, don't actually run this job. A valid query will return a mostly empty response with some processing statistics, while an invalid query will return the same error it would if it wasn't a dry run. Behavior of non-query jobs is undefined.

extract

object (JobConfigurationExtract)

[Pick one] Configures an extract job.

jobTimeoutMs

string (int64 format)

[Optional] Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

jobType

string

[Output-only] The type of the job. Can be QUERY, LOAD, EXTRACT, COPY or UNKNOWN.

labels

map (key: string, value: string)

The labels associated with this job. You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter, and each label in the list must have a different key.

load

object (JobConfigurationLoad)

[Pick one] Configures a load job.

load.rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

load.rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

load.rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

query

object (JobConfigurationQuery)

[Pick one] Configures a query job.
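
The [Pick one] annotations mean exactly one of copy, extract, load, or query may be set in a JobConfiguration. A sketch of a client-side sanity check (the validator is illustrative, not part of any client library):

```python
def job_type_of(config):
    """Return the job type implied by which [Pick one] key is present."""
    kinds = [k for k in ("copy", "extract", "load", "query") if k in config]
    if len(kinds) != 1:
        raise ValueError("exactly one of copy/extract/load/query must be set")
    return kinds[0].upper()

# A dry-run query configuration, for example:
config = {"query": {"query": "SELECT 1", "useLegacySql": False}, "dryRun": True}
```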

JobConfigurationExtract

(No description provided)
Fields
compression

string

[Optional] The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro. Not applicable when extracting models.

destinationFormat

string

[Optional] The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON, PARQUET or AVRO for tables and ML_TF_SAVED_MODEL or ML_XGBOOST_BOOSTER for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is ML_TF_SAVED_MODEL.

destinationUri

string

[Pick one] DEPRECATED: Use destinationUris instead, passing only one URI as necessary. The fully-qualified Google Cloud Storage URI where the extracted table should be written.

destinationUris[]

string

[Pick one] A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

fieldDelimiter

string

[Optional] Delimiter to use between fields in the exported data. Default is ','. Not applicable when extracting models.

printHeader

boolean

[Optional] Whether to print out a header row in the results. Default is true. Not applicable when extracting models.

sourceModel

object (ModelReference)

A reference to the model being exported.

sourceTable

object (TableReference)

A reference to the table being exported.

useAvroLogicalTypes

boolean

[Optional] If destinationFormat is set to "AVRO", this flag indicates whether to enable extracting applicable column types (such as TIMESTAMP) to their corresponding AVRO logical types (timestamp-micros), instead of only using their raw types (avro-long). Not applicable when extracting models.
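
As a sketch, an extract configuration that exports a table to Cloud Storage as gzipped CSV might look like this (the project, dataset, table, and bucket names are placeholders):

```python
extract_config = {
    "sourceTable": {
        "projectId": "my-project",   # placeholder IDs
        "datasetId": "my_dataset",
        "tableId": "my_table",
    },
    "destinationUris": ["gs://my-bucket/export/part-*.csv"],
    "destinationFormat": "CSV",   # the default for tables
    "compression": "GZIP",
    "fieldDelimiter": ",",
    "printHeader": True,
}
```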

JobConfigurationLoad

(No description provided)
Fields
allowJaggedRows

boolean

[Optional] Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

allowQuotedNewlines

boolean

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

autodetect

boolean

[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON sources.

clustering

object (Clustering)

[Beta] Clustering specification for the destination table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered.

connectionProperties[]

object (ConnectionProperty)

Connection properties.

createDisposition

string

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.

createSession

boolean

If true, creates a new session, where the session ID is a server-generated random ID. If false, runs the load job with an existing session_id passed in ConnectionProperty; otherwise runs the load job in non-session mode.

decimalTargetTypes[]

string

[Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
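
The selection rule above can be restated as code. This sketch assumes the documented limits of NUMERIC (precision 38, scale 9, hence 29 integer digits) and BIGNUMERIC (roughly 38 integer and 38 fractional decimal digits):

```python
def pick_decimal_target(listed, precision, scale):
    """Pick the target type for a decimal source column.

    In the fixed order NUMERIC, BIGNUMERIC, STRING, return the first
    listed type that supports the precision and scale; if none fits,
    fall back to the widest listed type (values may then overflow).
    """
    limits = {"NUMERIC": (29, 9), "BIGNUMERIC": (38, 38)}  # (int digits, frac digits)
    for t in ("NUMERIC", "BIGNUMERIC", "STRING"):
        if t not in listed:
            continue
        if t == "STRING":
            return t  # STRING supports all precision and scale values
        max_int, max_frac = limits[t]
        if scale <= max_frac and precision - scale <= max_int:
            return t
    for t in ("STRING", "BIGNUMERIC", "NUMERIC"):  # widest listed type
        if t in listed:
            return t
```

Running the documented examples with listed = ["NUMERIC", "BIGNUMERIC"] reproduces the mapping given above.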

destinationEncryptionConfiguration

object (EncryptionConfiguration)

Custom encryption configuration (e.g., Cloud KMS keys).

destinationTable

object (TableReference)

[Required] The destination table to load the data into.

destinationTableProperties

object (DestinationTableProperties)

[Beta] [Optional] Properties with which to create the destination table if it is new.

encoding

string

[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

fieldDelimiter

string

[Optional] The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').

hivePartitioningOptions

object (HivePartitioningOptions)

[Optional] Options to configure hive partitioning support.

ignoreUnknownValues

boolean

[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names

jsonExtension

string

[Optional] If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON. For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON, set this field to GEOJSON to process the data as newline-delimited GeoJSON.

maxBadRecords

integer (int32 format)

[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV and JSON. The default value is 0, which requires that all records are valid.

nullMarker

string

[Optional] Specifies a string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

parquetOptions

object (ParquetOptions)

[Optional] Options to configure parquet support.

preserveAsciiControlCharacters

boolean

[Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.

projectionFields[]

string

If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.

quote

string

[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

rangePartitioning

object (RangePartitioning)

[TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.

rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

referenceFileSchemaUri

string

User-provided reference file with the expected reader schema. Available for the formats AVRO, PARQUET, and ORC.

schema

object (TableSchema)

[Optional] The schema for the destination table. The schema can be omitted if the destination table already exists, or if you're loading data from Google Cloud Datastore.

schemaInline

string

[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT".

schemaInlineFormat

string

[Deprecated] The format of the schemaInline property.

schemaUpdateOptions[]

string

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

skipLeadingRows

integer (int32 format)

[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

sourceFormat

string

[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". The default value is CSV.

sourceUris[]

string

[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.

timePartitioning

object (TimePartitioning)

Time-based partitioning specification for the destination table. Only one of timePartitioning and rangePartitioning should be specified.

useAvroLogicalTypes

boolean

[Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).

writeDisposition

string

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
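
As a sketch, a CSV load configuration using the fields above might look like this (all identifiers and the Cloud Storage URI are placeholders):

```python
load_config = {
    "sourceUris": ["gs://my-bucket/data/*.csv"],   # placeholder bucket
    "sourceFormat": "CSV",
    "destinationTable": {
        "projectId": "my-project",   # placeholder IDs
        "datasetId": "my_dataset",
        "tableId": "events",
    },
    "skipLeadingRows": 1,            # skip the header row
    "fieldDelimiter": ",",
    "allowQuotedNewlines": True,
    "maxBadRecords": 10,             # tolerate up to 10 bad records
    "writeDisposition": "WRITE_APPEND",        # the load default
    "createDisposition": "CREATE_IF_NEEDED",   # the default
}
```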

JobConfigurationQuery

(No description provided)
Fields
allowLargeResults

boolean

[Optional] If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

clustering

object (Clustering)

[Beta] Clustering specification for the destination table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered.

connectionProperties[]

object (ConnectionProperty)

Connection properties.

createDisposition

string

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.

createSession

boolean

If true, creates a new session, where the session ID is a server-generated random ID. If false, runs the query with an existing session_id passed in ConnectionProperty; otherwise runs the query in non-session mode.

defaultDataset

object (DatasetReference)

[Optional] Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names.

destinationEncryptionConfiguration

object (EncryptionConfiguration)

Custom encryption configuration (e.g., Cloud KMS keys).

destinationTable

object (TableReference)

[Optional] Describes the table where the query results should be stored. If not present, a new table will be created to store the results. This property must be set for large results that exceed the maximum response size.

flattenResults

boolean

[Optional] If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

maximumBillingTier

integer (int32 format)

[Optional] Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

maximumBytesBilled

string (int64 format)

[Optional] Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

parameterMode

string

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

preserveNulls

boolean

[Deprecated] This property is deprecated.

priority

string

[Optional] Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE.

query

string

[Required] SQL query text to execute. The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL.

queryParameters[]

object (QueryParameter)

Query parameters for standard SQL queries.

rangePartitioning

object (RangePartitioning)

[TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.

rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

schemaUpdateOptions[]

string

Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

tableDefinitions

map (key: string, value: object (ExternalDataConfiguration))

[Optional] If querying an external data source outside of BigQuery, describes the data format, location and other properties of the data source. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.

timePartitioning

object (TimePartitioning)

Time-based partitioning specification for the destination table. Only one of timePartitioning and rangePartitioning should be specified.

useLegacySql

boolean

Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/. When useLegacySql is set to false, the value of flattenResults is ignored; the query will be run as if flattenResults is false.

useQueryCache

boolean

[Optional] Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

userDefinedFunctionResources[]

object (UserDefinedFunctionResource)

Describes user-defined function resources used in the query.

writeDisposition

string

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_EMPTY. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
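
As a sketch, a parameterized standard SQL query configuration might look like this (the table name is a placeholder; the queryParameters shape follows the QueryParameter type referenced above):

```python
query_config = {
    "query": (
        "SELECT name FROM `my-project.my_dataset.users` "  # placeholder table
        "WHERE age > @min_age"
    ),
    "useLegacySql": False,       # query parameters require standard SQL
    "parameterMode": "NAMED",    # named (@myparam) parameters
    "queryParameters": [
        {
            "name": "min_age",
            "parameterType": {"type": "INT64"},
            "parameterValue": {"value": "21"},  # int64 carried as a string
        }
    ],
    "maximumBytesBilled": "1000000000",  # fail fast past ~1 GB billed
}
```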

JobConfigurationTableCopy

(No description provided)
Fields
createDisposition

string

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationEncryptionConfiguration

object (EncryptionConfiguration)

Custom encryption configuration (e.g., Cloud KMS keys).

destinationExpirationTime

any

[Optional] The time when the destination table expires. Expired tables will be deleted and their storage reclaimed.

destinationTable

object (TableReference)

[Required] The destination table.

operationType

string

[Optional] Supported operation types in table copy job.

sourceTable

object (TableReference)

[Pick one] Source table to copy.

sourceTables[]

object (TableReference)

[Pick one] Source tables to copy.

writeDisposition

string

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_EMPTY. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
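
As a sketch, a copy configuration using the multi-source sourceTables[] form (all identifiers are placeholders):

```python
copy_config = {
    "sourceTables": [   # [Pick one]: sourceTables[], not sourceTable
        {"projectId": "my-project", "datasetId": "my_dataset", "tableId": "shard1"},
        {"projectId": "my-project", "datasetId": "my_dataset", "tableId": "shard2"},
    ],
    "destinationTable": {
        "projectId": "my-project", "datasetId": "my_dataset", "tableId": "merged",
    },
    "writeDisposition": "WRITE_TRUNCATE",      # overwrite existing data
    "createDisposition": "CREATE_IF_NEEDED",
}
```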

JobList

(No description provided)
Fields
etag

string

A hash of this page of results.

jobs[]

object

List of jobs that were requested.

jobs.configuration

object (JobConfiguration)

[Full-projection-only] Specifies the job configuration.

jobs.configuration.load.rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

jobs.configuration.load.rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

jobs.configuration.load.rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

jobs.errorResult

object (ErrorProto)

A result object that will be present only if the job has failed.

jobs.id

string

Unique opaque ID of the job.

jobs.jobReference

object (JobReference)

Job reference uniquely identifying the job.

jobs.kind

string

The resource type.

jobs.state

string

Running state of the job. When the state is DONE, errorResult can be checked to determine whether the job succeeded or failed.

jobs.statistics

object (JobStatistics)

[Output-only] Information about the job, including starting time and ending time of the job.

jobs.statistics.query.reservationUsage.name

string

[Output only] Reservation name or "unreserved" for on-demand resources usage.

jobs.statistics.query.reservationUsage.slotMs

string (int64 format)

[Output only] Slot-milliseconds the job spent in the given reservation.

jobs.statistics.reservationUsage.name

string

[Output-only] Reservation name or "unreserved" for on-demand resources usage.

jobs.statistics.reservationUsage.slotMs

string (int64 format)

[Output-only] Slot-milliseconds the job spent in the given reservation.

jobs.status

object (JobStatus)

[Full-projection-only] Describes the state of the job.

jobs.user_email

string

[Full-projection-only] Email address of the user who ran the job.

kind

string

The resource type of the response.

nextPageToken

string

A token to request the next page of results.
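
JobList responses are paginated through nextPageToken. A sketch of the standard page loop; fetch_page is a stand-in for the actual jobs.list call, not a real client method:

```python
def iter_jobs(fetch_page):
    """Yield every job across pages; fetch_page(token) returns one JobList."""
    token = None
    while True:
        page = fetch_page(token)
        yield from page.get("jobs", [])
        token = page.get("nextPageToken")
        if not token:   # last page: no token
            break

# Two hand-built pages standing in for API responses:
pages = {
    None: {"jobs": [{"id": "j1"}], "nextPageToken": "t1"},
    "t1": {"jobs": [{"id": "j2"}]},
}
```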

JobReference

(No description provided)
Fields
jobId

string

[Required] The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

location

string

The geographic location of the job. See details at https://cloud.google.com/bigquery/docs/locations#specifying_your_location.

projectId

string

[Required] The ID of the project containing this job.
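
The jobId character and length rules above can be checked client-side with a regex (a convenience sketch, not an API guarantee):

```python
import re

# Letters, numbers, underscores, or dashes; at most 1,024 characters.
JOB_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,1024}$")

def is_valid_job_id(job_id):
    return bool(JOB_ID_RE.match(job_id))
```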

JobStatistics

(No description provided)
Fields
completionRatio

number (double format)

[TrustedTester] [Output-only] Job progress (0.0 -> 1.0) for LOAD and EXTRACT jobs.

copy

object (JobStatistics5)

[Output-only] Statistics for a copy job.

creationTime

string (int64 format)

[Output-only] Creation time of this job, in milliseconds since the epoch. This field will be present on all jobs.

dataMaskingStatistics

object (DataMaskingStatistics)

[Output-only] Statistics for data masking. Present only for query and extract jobs.

endTime

string (int64 format)

[Output-only] End time of this job, in milliseconds since the epoch. This field will be present whenever a job is in the DONE state.

extract

object (JobStatistics4)

[Output-only] Statistics for an extract job.

load

object (JobStatistics3)

[Output-only] Statistics for a load job.

numChildJobs

string (int64 format)

[Output-only] Number of child jobs executed.

parentJobId

string

[Output-only] If this is a child job, the id of the parent.

query

object (JobStatistics2)

[Output-only] Statistics for a query job.

query.reservationUsage.name

string

[Output only] Reservation name or "unreserved" for on-demand resources usage.

query.reservationUsage.slotMs

string (int64 format)

[Output only] Slot-milliseconds the job spent in the given reservation.

quotaDeferments[]

string

[Output-only] Quotas which delayed this job's start time.

reservationUsage[]

object

[Output-only] Job resource usage breakdown by reservation.

reservationUsage.name

string

[Output-only] Reservation name or "unreserved" for on-demand resources usage.

reservationUsage.slotMs

string (int64 format)

[Output-only] Slot-milliseconds the job spent in the given reservation.

reservation_id

string

[Output-only] Name of the primary reservation assigned to this job. Note that this could differ from the reservations reported in the reservation usage field if parent reservations were used to execute this job.

rowLevelSecurityStatistics

object (RowLevelSecurityStatistics)

[Output-only] [Preview] Statistics for row-level security. Present only for query and extract jobs.

scriptStatistics

object (ScriptStatistics)

[Output-only] Statistics for a child job of a script.

sessionInfo

object (SessionInfo)

[Output-only] [Preview] Information of the session if this job is part of one.

startTime

string (int64 format)

[Output-only] Start time of this job, in milliseconds since the epoch. This field will be present when the job transitions from the PENDING state to either RUNNING or DONE.

totalBytesProcessed

string (int64 format)

[Output-only] [Deprecated] Use the bytes processed in the query statistics instead.

totalSlotMs

string (int64 format)

[Output-only] Slot-milliseconds for the job.

transactionInfo

object (TransactionInfo)

[Output-only] [Alpha] Information of the multi-statement transaction if this job is part of one.
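
Since creationTime, startTime, and endTime are epoch milliseconds encoded as int64 strings, queue wait and run time fall out by subtraction. A sketch:

```python
def job_timings_ms(stats):
    """Derive queue wait and run duration from JobStatistics timestamps."""
    created = int(stats["creationTime"])
    started = int(stats["startTime"])
    ended = int(stats["endTime"])   # present only once the job is DONE
    return {
        "pending_ms": started - created,  # time spent queued
        "running_ms": ended - started,    # running until DONE
    }
```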

JobStatistics2

(No description provided)
Fields
biEngineStatistics

object (BiEngineStatistics)

[Output only] BI Engine-specific statistics.

billingTier

integer (int32 format)

[Output only] Billing tier for the job.

cacheHit

boolean

[Output only] Whether the query result was fetched from the query cache.

ddlAffectedRowAccessPolicyCount

string (int64 format)

[Output only] [Preview] The number of row access policies affected by a DDL statement. Present only for DROP ALL ROW ACCESS POLICIES queries.

ddlDestinationTable

object (TableReference)

[Output only] The DDL destination table. Present only for ALTER TABLE RENAME TO queries. Note that ddl_target_table is used just for its type information.

ddlOperationPerformed

string

The DDL operation performed, possibly dependent on the pre-existence of the DDL target. Possible values (new values might be added in the future): "CREATE": The query created the DDL target. "SKIP": No-op. Example cases: the query is CREATE TABLE IF NOT EXISTS while the table already exists, or the query is DROP TABLE IF EXISTS while the table does not exist. "REPLACE": The query replaced the DDL target. Example case: the query is CREATE OR REPLACE TABLE, and the table already exists. "DROP": The query deleted the DDL target.

ddlTargetDataset

object (DatasetReference)

[Output only] The DDL target dataset. Present only for CREATE/ALTER/DROP SCHEMA queries.

ddlTargetRoutine

object (RoutineReference)

The DDL target routine. Present only for CREATE/DROP FUNCTION/PROCEDURE queries.

ddlTargetRowAccessPolicy

object (RowAccessPolicyReference)

[Output only] [Preview] The DDL target row access policy. Present only for CREATE/DROP ROW ACCESS POLICY queries.

ddlTargetTable

object (TableReference)

[Output only] The DDL target table. Present only for CREATE/DROP TABLE/VIEW and DROP ALL ROW ACCESS POLICIES queries.

dmlStats

object (DmlStatistics)

[Output only] Detailed statistics for DML statements. Present only for the DML statements INSERT, UPDATE, DELETE, and TRUNCATE.

estimatedBytesProcessed

string (int64 format)

[Output only] The original estimate of bytes processed for the job.

mlStatistics

object (MlStatistics)

[Output only] Statistics of a BigQuery ML training job.

modelTraining

object (BigQueryModelTraining)

[Output only, Beta] Information about create model query job progress.

modelTrainingCurrentIteration

integer (int32 format)

[Output only, Beta] Deprecated; do not use.

modelTrainingExpectedTotalIteration

string (int64 format)

[Output only, Beta] Deprecated; do not use.

numDmlAffectedRows

string (int64 format)

[Output only] The number of rows affected by a DML statement. Present only for DML statements INSERT, UPDATE or DELETE.

queryPlan[]

object (ExplainQueryStage)

[Output only] Describes execution plan for the query.

referencedRoutines[]

object (RoutineReference)

[Output only] Referenced routines (persistent user-defined functions and stored procedures) for the job.

referencedTables[]

object (TableReference)

[Output only] Referenced tables for the job. Queries that reference more than 50 tables will not have a complete list.

reservationUsage[]

object

[Output only] Job resource usage breakdown by reservation.

reservationUsage.name

string

[Output only] Reservation name or "unreserved" for on-demand resources usage.

reservationUsage.slotMs

string (int64 format)

[Output only] Slot-milliseconds the job spent in the given reservation.

schema

object (TableSchema)

[Output only] The schema of the results. Present only for successful dry run of non-legacy SQL queries.

searchStatistics

object (SearchStatistics)

[Output only] Search query specific statistics.

sparkStatistics

object (SparkStatistics)

[Output only] Statistics of a Spark procedure job.

statementType

string

The type of query statement, if valid. Possible values (new values might be added in the future):
"SELECT": SELECT query.
"INSERT": INSERT query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language.
"UPDATE": UPDATE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language.
"DELETE": DELETE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language.
"MERGE": MERGE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language.
"ALTER_TABLE": ALTER TABLE query.
"ALTER_VIEW": ALTER VIEW query.
"ASSERT": ASSERT condition AS 'description'.
"CREATE_FUNCTION": CREATE FUNCTION query.
"CREATE_MODEL": CREATE [OR REPLACE] MODEL ... AS SELECT ... .
"CREATE_PROCEDURE": CREATE PROCEDURE query.
"CREATE_TABLE": CREATE [OR REPLACE] TABLE without AS SELECT.
"CREATE_TABLE_AS_SELECT": CREATE [OR REPLACE] TABLE ... AS SELECT ... .
"CREATE_VIEW": CREATE [OR REPLACE] VIEW ... AS SELECT ... .
"DROP_FUNCTION": DROP FUNCTION query.
"DROP_PROCEDURE": DROP PROCEDURE query.
"DROP_TABLE": DROP TABLE query.
"DROP_VIEW": DROP VIEW query.

timeline[]

object (QueryTimelineSample)

[Output only] [Beta] Describes a timeline of job execution.

totalBytesBilled

string (int64 format)

[Output only] Total bytes billed for the job.

totalBytesProcessed

string (int64 format)

[Output only] Total bytes processed for the job.

totalBytesProcessedAccuracy

string

[Output only] For dry-run jobs, totalBytesProcessed is an estimate and this field specifies the accuracy of the estimate. Possible values:
UNKNOWN: accuracy of the estimate is unknown.
PRECISE: estimate is precise.
LOWER_BOUND: estimate is a lower bound of what the query would cost.
UPPER_BOUND: estimate is an upper bound of what the query would cost.

totalPartitionsProcessed

string (int64 format)

[Output only] Total number of partitions processed from all partitioned tables referenced in the job.

totalSlotMs

string (int64 format)

[Output only] Slot-milliseconds for the job.

transferredBytes

string (int64 format)

[Output-only] Total bytes transferred for cross-cloud queries such as Cross Cloud Transfer and CREATE TABLE AS SELECT (CTAS).

undeclaredQueryParameters[]

object (QueryParameter)

Standard SQL only: list of undeclared query parameters detected during a dry run validation.
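
As a sketch of how these dry-run statistics are consumed, the helper below reads a JobStatistics2-shaped dict (the field names are from the JSON representation above; the sample values are made up). Note that int64 fields arrive as decimal strings and must be parsed before arithmetic.

```python
def summarize_dry_run(stats2: dict) -> str:
    """Summarize the cost estimate in a JobStatistics2 dict from a dry run.

    int64 fields are serialized as decimal strings in the JSON
    representation, so they are parsed with int() here.
    """
    processed = int(stats2.get("totalBytesProcessed", "0"))
    accuracy = stats2.get("totalBytesProcessedAccuracy", "UNKNOWN")
    tib = processed / 2**40  # bytes -> TiB, the unit on-demand pricing uses
    return f"{processed} bytes ({accuracy}), ~{tib:.4f} TiB"

# Hypothetical dry-run statistics (not real API output):
stats = {
    "totalBytesProcessed": "1099511627776",
    "totalBytesProcessedAccuracy": "PRECISE",
    "statementType": "SELECT",
}
print(summarize_dry_run(stats))  # 1099511627776 bytes (PRECISE), ~1.0000 TiB
```

When totalBytesProcessedAccuracy is LOWER_BOUND or UPPER_BOUND, the summary should be read as a bound rather than an exact scan size.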

JobStatistics3

(No description provided)
Fields
badRecords

string (int64 format)

[Output-only] The number of bad records encountered. Note that if the job has failed because of more bad records encountered than the maximum allowed in the load job configuration, then this number can be less than the total number of bad records present in the input data.

inputFileBytes

string (int64 format)

[Output-only] Number of bytes of source data in a load job.

inputFiles

string (int64 format)

[Output-only] Number of source files in a load job.

outputBytes

string (int64 format)

[Output-only] Size of the loaded data in bytes. Note that while a load job is in the running state, this value may change.

outputRows

string (int64 format)

[Output-only] Number of rows imported in a load job. Note that while an import job is in the running state, this value may change.

JobStatistics4

(No description provided)
Fields
destinationUriFileCounts[]

string (int64 format)

[Output-only] Number of files per destination URI or URI pattern specified in the extract configuration. These values will be in the same order as the URIs specified in the 'destinationUris' field.

inputBytes

string (int64 format)

[Output-only] Number of user bytes extracted into the result. This is the byte count as computed by BigQuery for billing purposes.

JobStatistics5

(No description provided)
Fields
copied_logical_bytes

string (int64 format)

[Output-only] Number of logical bytes copied to the destination table.

copied_rows

string (int64 format)

[Output-only] Number of rows copied to the destination table.

JobStatus

(No description provided)
Fields
errorResult

object (ErrorProto)

[Output-only] Final error result of the job. If present, indicates that the job has completed and was unsuccessful.

errors[]

object (ErrorProto)

[Output-only] The first errors encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful.

state

string

[Output-only] Running state of the job.

ListModelsResponse

(No description provided)
Fields
models[]

object (Model)

Models in the requested dataset. Only the following fields are populated: model_reference, model_type, creation_time, last_modified_time and labels.

nextPageToken

string

A token to request the next page of results.

ListRoutinesResponse

(No description provided)
Fields
nextPageToken

string

A token to request the next page of results.

routines[]

object (Routine)

Routines in the requested dataset. Unless read_mask is set in the request, only the following fields are populated: etag, project_id, dataset_id, routine_id, routine_type, creation_time, last_modified_time, and language.

ListRowAccessPoliciesResponse

Response message for the ListRowAccessPolicies method.
Fields
nextPageToken

string

A token to request the next page of results.

rowAccessPolicies[]

object (RowAccessPolicy)

Row access policies on the requested table.

LocationMetadata

BigQuery-specific metadata about a location. This will be set on google.cloud.location.Location.metadata in Cloud Location API responses.
Fields
legacyLocationId

string

The legacy BigQuery location ID, e.g. “EU” for the “europe” location. This is for any API consumers that need the legacy “US” and “EU” locations.

MaterializedViewDefinition

(No description provided)
Fields
enableRefresh

boolean

[Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".

lastRefreshTime

string (int64 format)

[Output-only] [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.

maxStaleness

string (bytes format)

[Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).

query

string

[Required] A query whose result is persisted.

refreshIntervalMs

string (int64 format)

[Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).

MlStatistics

(No description provided)
Fields
iterationResults[]

object (IterationResult)

Results for all completed iterations.

maxIterations

string (int64 format)

Maximum number of iterations specified as max_iterations in the 'CREATE MODEL' query. The actual number of iterations may be less than this number due to early stop.

Model

(No description provided)
Fields
bestTrialId

string (int64 format)

The best trial_id across all training runs.

creationTime

string (int64 format)

Output only. The time when this model was created, in milliseconds since the epoch.

defaultTrialId

string (int64 format)

Output only. The default trial_id to use in TVFs when the trial_id is not passed in. For single-objective hyperparameter tuning models, this is the best trial ID. For multi-objective hyperparameter tuning models, this is the smallest trial ID among all Pareto optimal trials.

description

string

Optional. A user-friendly description of this model.

encryptionConfiguration

object (EncryptionConfiguration)

Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with PatchModel to update the encryption key for an already encrypted model.

etag

string

Output only. A hash of this resource.

expirationTime

string (int64 format)

Optional. The time when this model expires, in milliseconds since the epoch. If not present, the model will persist indefinitely. Expired models will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created models.

featureColumns[]

object (StandardSqlField)

Output only. Input feature columns that were used to train this model.

friendlyName

string

Optional. A descriptive name for this model.

hparamSearchSpaces

object (HparamSearchSpaces)

Output only. All hyperparameter search spaces in this model.

hparamTrials[]

object (HparamTuningTrial)

Output only. Trials of a hyperparameter tuning model sorted by trial_id.

labelColumns[]

object (StandardSqlField)

Output only. Label columns that were used to train this model. The output of the model will have a "predicted_" prefix to these columns.

labels

map (key: string, value: string)

The labels associated with this model. You can use these to organize and group your models. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.

lastModifiedTime

string (int64 format)

Output only. The time when this model was last modified, in milliseconds since the epoch.

location

string

Output only. The geographic location where the model resides. This value is inherited from the dataset.

modelReference

object (ModelReference)

Required. Unique identifier for this model.

modelType

enum

Output only. Type of the model resource.

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED (No description provided)
LINEAR_REGRESSION Linear regression model.
LOGISTIC_REGRESSION Logistic regression based classification model.
KMEANS K-means clustering model.
MATRIX_FACTORIZATION Matrix factorization model.
DNN_CLASSIFIER DNN classifier model.
TENSORFLOW An imported TensorFlow model.
DNN_REGRESSOR DNN regressor model.
BOOSTED_TREE_REGRESSOR Boosted tree regressor model.
BOOSTED_TREE_CLASSIFIER Boosted tree classifier model.
ARIMA ARIMA model.
AUTOML_REGRESSOR AutoML Tables regression model.
AUTOML_CLASSIFIER AutoML Tables classification model.
PCA Principal Component Analysis model.
DNN_LINEAR_COMBINED_CLASSIFIER Wide-and-deep classifier model.
DNN_LINEAR_COMBINED_REGRESSOR Wide-and-deep regressor model.
AUTOENCODER Autoencoder model.
ARIMA_PLUS New name for the ARIMA model.
optimalTrialIds[]

string (int64 format)

Output only. For single-objective hyperparameter tuning models, it only contains the best trial. For multi-objective hyperparameter tuning models, it contains all Pareto optimal trials sorted by trial_id.

trainingRuns[]

object (TrainingRun)

Information for all training runs in increasing order of start_time.

ModelDefinition

(No description provided)
Fields
modelOptions

object

[Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.

modelOptions.labels[]

string

(No description provided)

modelOptions.lossType

string

(No description provided)

modelOptions.modelType

string

(No description provided)

trainingRuns[]

object (BqmlTrainingRun)

[Output-only, Beta] Information about ML training runs. Each training run comprises multiple iterations, and a model may have multiple training runs if warm start is used or if a user decides to continue a previously cancelled query.

ModelReference

(No description provided)
Fields
datasetId

string

[Required] The ID of the dataset containing this model.

modelId

string

[Required] The ID of the model. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

projectId

string

[Required] The ID of the project containing this model.

MultiClassClassificationMetrics

Evaluation metrics for multi-class classification/classifier models.
Fields
aggregateClassificationMetrics

object (AggregateClassificationMetrics)

Aggregate classification metrics.

confusionMatrixList[]

object (ConfusionMatrix)

Confusion matrix at different thresholds.

ParquetOptions

(No description provided)
Fields
enableListInference

boolean

[Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.

enumAsString

boolean

[Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.

Policy

An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation.
JSON example:
{
  "bindings": [
    {
      "role": "roles/resourcemanager.organizationAdmin",
      "members": [
        "user:mike@example.com",
        "group:admins@example.com",
        "domain:google.com",
        "serviceAccount:my-project-id@appspot.gserviceaccount.com"
      ]
    },
    {
      "role": "roles/resourcemanager.organizationViewer",
      "members": ["user:eve@example.com"],
      "condition": {
        "title": "expirable access",
        "description": "Does not grant access after Sep 2020",
        "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')"
      }
    }
  ],
  "etag": "BwWWja0YfJA=",
  "version": 3
}
YAML example:
bindings:
- members:
  - user:mike@example.com
  - group:admins@example.com
  - domain:google.com
  - serviceAccount:my-project-id@appspot.gserviceaccount.com
  role: roles/resourcemanager.organizationAdmin
- members:
  - user:eve@example.com
  role: roles/resourcemanager.organizationViewer
  condition:
    title: expirable access
    description: Does not grant access after Sep 2020
    expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
etag: BwWWja0YfJA=
version: 3
For a description of IAM and its features, see the IAM documentation.
Fields
auditConfigs[]

object (AuditConfig)

Specifies cloud audit logging configuration for this policy.

bindings[]

object (Binding)

Associates a list of members, or principals, with a role. Optionally, may specify a condition that determines how and when the bindings are applied. Each of the bindings must contain at least one principal. The bindings in a Policy can refer to up to 1,500 principals; up to 250 of these principals can be Google groups. Each occurrence of a principal counts towards these limits. For example, if the bindings grant 50 different roles to user:alice@example.com, and not to any other principal, then you can add another 1,450 principals to the bindings in the Policy.

etag

string (bytes format)

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy. Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.

version

integer (int32 format)

Specifies the format of the policy. Valid values are 0, 1, and 3. Requests that specify an invalid value are rejected. Any operation that affects conditional role bindings must specify version 3. This requirement applies to the following operations: * Getting a policy that includes a conditional role binding * Adding a conditional role binding to a policy * Changing a conditional role binding in a policy * Removing any role binding, with or without a condition, from a policy that includes conditions Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost. If a policy does not include any conditions, operations on that policy may specify any valid version or leave the field unset. To learn which resources support conditions in their IAM policies, see the IAM documentation.
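
The etag-based read-modify-write cycle described above can be sketched as follows. This is a minimal illustration using a plain dict in place of real getIamPolicy/setIamPolicy calls; `add_binding` is a hypothetical helper, not part of the API.

```python
import copy

def add_binding(policy: dict, role: str, member: str) -> dict:
    """Return an updated policy that grants `member` the given `role`.

    The etag read with the policy is carried through unchanged, so a
    subsequent setIamPolicy call can detect a concurrent modification.
    """
    updated = copy.deepcopy(policy)
    for binding in updated.setdefault("bindings", []):
        # Append to an existing unconditional binding for the same role.
        if binding["role"] == role and "condition" not in binding:
            if member not in binding["members"]:
                binding["members"].append(member)
            return updated
    updated["bindings"].append({"role": role, "members": [member]})
    return updated

# Hypothetical policy as returned by getIamPolicy:
policy = {"version": 1, "etag": "BwWWja0YfJA=", "bindings": []}
updated = add_binding(policy, "roles/bigquery.dataViewer", "user:eve@example.com")
# updated["etag"] is still "BwWWja0YfJA="; pass `updated` to setIamPolicy.
```

Sending the unchanged etag back is what makes the write safe: if another writer updated the policy in between, the etag no longer matches and the update is rejected rather than silently overwriting.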

PrincipalComponentInfo

Principal component info, used only for eigen-decomposition-based models, e.g., PCA. Ordered by explained_variance in descending order.
Fields
cumulativeExplainedVarianceRatio

number (double format)

The explained_variance values are pre-ordered in descending order to compute the cumulative explained variance ratio.

explainedVariance

number (double format)

Explained variance by this principal component, which is simply the eigenvalue.

explainedVarianceRatio

number (double format)

Explained_variance over the total explained variance.

principalComponentId

string (int64 format)

Id of the principal component.
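
The relationship between the three variance fields can be sketched as a small computation: each ratio is an eigenvalue over the total explained variance, and the cumulative ratio is a running sum over the descending-ordered components (the eigenvalues below are made up).

```python
def variance_ratios(explained_variances):
    """Derive explainedVarianceRatio and cumulativeExplainedVarianceRatio
    from a list of eigenvalues already sorted in descending order."""
    total = sum(explained_variances)
    ratios = [v / total for v in explained_variances]
    cumulative, running = [], 0.0
    for r in ratios:
        running += r
        cumulative.append(running)
    return ratios, cumulative

# Hypothetical eigenvalues for a 3-component PCA model:
ratios, cumulative = variance_ratios([6.0, 3.0, 1.0])
# ratios ~ [0.6, 0.3, 0.1]; cumulative climbs to ~1.0 (up to float rounding)
```

A common use of the cumulative ratio is picking the smallest number of components that explains, say, 90% of the variance.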

ProjectList

(No description provided)
Fields
etag

string

A hash of the page of results.

kind

string

The type of list.

nextPageToken

string

A token to request the next page of results.

projects[]

object

Projects to which you have at least READ access.

projects.friendlyName

string

A descriptive name for this project.

projects.id

string

An opaque ID of this project.

projects.kind

string

The resource type.

projects.numericId

string (uint64 format)

The numeric ID of this project.

projects.projectReference

object (ProjectReference)

A unique reference to this project.

totalItems

integer (int32 format)

The total number of projects in the list.

ProjectReference

(No description provided)
Fields
projectId

string

[Required] ID of the project. Can be either the numeric ID or the assigned ID of the project.

QueryParameter

(No description provided)
Fields
name

string

[Optional] If unset, this is a positional parameter. Otherwise, should be unique within a query.

parameterType

object (QueryParameterType)

[Required] The type of this parameter.

parameterType.structTypes.description

string

[Optional] Human-oriented description of the field.

parameterType.structTypes.name

string

[Optional] The name of this field.

parameterType.structTypes.type

object (QueryParameterType)

[Required] The type of this field.

parameterValue

object (QueryParameterValue)

[Required] The value of this parameter.

QueryParameterType

(No description provided)
Fields
arrayType

object (QueryParameterType)

[Optional] The type of the array's elements, if this is an array.

structTypes[]

object

[Optional] The types of the fields of this struct, in order, if this is a struct.

structTypes.description

string

[Optional] Human-oriented description of the field.

structTypes.name

string

[Optional] The name of this field.

structTypes.type

object (QueryParameterType)

[Required] The type of this field.

type

string

[Required] The top level type of this field.

QueryParameterValue

(No description provided)
Fields
arrayValues[]

object (QueryParameterValue)

[Optional] The array values, if this is an array type.

structValues

map (key: string, value: object (QueryParameterValue))

[Optional] The struct field values, in order of the struct type's declaration.

value

string

[Optional] The value of this parameter, if it is a simple scalar type.
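
The three types above nest together: a QueryParameter carries a QueryParameterType and a matching QueryParameterValue. The helper below is a hypothetical convenience for building a named ARRAY parameter; note that scalar values are passed as strings in the JSON representation.

```python
def named_array_param(name, element_type, values):
    """Build a QueryParameter dict for a named ARRAY parameter (e.g. @ids)."""
    return {
        "name": name,
        "parameterType": {
            "type": "ARRAY",
            # arrayType describes the element type of the array
            "arrayType": {"type": element_type},
        },
        "parameterValue": {
            # each element is itself a QueryParameterValue with a string value
            "arrayValues": [{"value": str(v)} for v in values]
        },
    }

param = named_array_param("ids", "INT64", [1, 2, 3])
```

A parameter built this way would be referenced in the query text as @ids, with parameterMode set to NAMED on the request.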

QueryRequest

(No description provided)
Fields
connectionProperties[]

object (ConnectionProperty)

Connection properties.

createSession

boolean

If true, creates a new session with a server-generated random session ID. If false, the query runs with the session_id passed in ConnectionProperty if one is provided; otherwise it runs in non-session mode.

defaultDataset

object (DatasetReference)

[Optional] Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'.

dryRun

boolean

[Optional] If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false.

kind

string

The resource type of the request.

labels

map (key: string, value: string)

The labels associated with this job. You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.

location

string

The geographic location where the job should run. See details at https://cloud.google.com/bigquery/docs/locations#specifying_your_location.

maxResults

integer (uint32 format)

[Optional] The maximum number of rows of data to return per page of results. Setting this flag to a small value such as 1000 and then paging through results might improve reliability when the query result set is large. In addition to this limit, responses are also limited to 10 MB. By default, there is no maximum row count, and only the byte limit applies.

maximumBytesBilled

string (int64 format)

[Optional] Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

parameterMode

string

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

preserveNulls

boolean

[Deprecated] This property is deprecated.

query

string

[Required] A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

queryParameters[]

object (QueryParameter)

Query parameters for Standard SQL queries.

requestId

string

A unique user-provided identifier to ensure idempotent behavior for queries. Note that this is different from the job_id. It has the following properties:
1. It is case-sensitive, limited to up to 36 ASCII characters. A UUID is recommended.
2. Read-only queries can ignore this token since they are nullipotent by definition.
3. For the purposes of idempotency ensured by the request_id, a request is considered a duplicate of another only if they have the same request_id and are actually duplicates. When determining whether a request is a duplicate of the previous request, all parameters in the request that may affect the behavior are considered. For example, query, connection_properties, query_parameters, and use_legacy_sql affect the result and are considered, but properties like timeout_ms don't affect the result and are thus not considered. Dry-run query requests are never considered a duplicate of another request.
4. When a duplicate mutating query request is detected, it returns: a. the results of the mutation if it completes successfully within the timeout; b. the running operation if it is still in progress at the end of the timeout.
5. Its lifetime is limited to 15 minutes. In other words, if two requests are sent with the same request_id, but more than 15 minutes apart, idempotency is not guaranteed.

timeoutMs

integer (uint32 format)

[Optional] How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with the 'jobComplete' flag set to false. You can call GetQueryResults() to wait for the query to complete and read the results. The default value is 10000 milliseconds (10 seconds).

useLegacySql

boolean

Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ When useLegacySql is set to false, the value of flattenResults is ignored; query will be run as if flattenResults is false.

useQueryCache

boolean

[Optional] Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true.
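
Putting the fields above together, a minimal jobs.query request body for a named-parameter query might look like the following sketch. The table name, parameter name, and values are made up for illustration.

```python
# A hypothetical QueryRequest body for the jobs.query REST method.
request_body = {
    "kind": "bigquery#queryRequest",
    "query": "SELECT name FROM mydataset.names WHERE id = @id",
    "useLegacySql": False,      # named parameters require standard SQL
    "parameterMode": "NAMED",   # @id is a named parameter
    "queryParameters": [
        {
            "name": "id",
            "parameterType": {"type": "INT64"},
            "parameterValue": {"value": "42"},  # int64 passed as a string
        }
    ],
    "maxResults": 1000,   # rows per page; responses are also capped at 10 MB
    "timeoutMs": 10000,   # request timeout, not a query timeout
    "dryRun": False,
}
```

Setting dryRun to True on the same body would return cost statistics (and the result schema) without running the query.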

QueryResponse

(No description provided)
Fields
cacheHit

boolean

Whether the query result was fetched from the query cache.

dmlStats

object (DmlStatistics)

[Output-only] Detailed statistics for DML statements. Present only for the DML statements INSERT, UPDATE, DELETE, and TRUNCATE.

errors[]

object (ErrorProto)

[Output-only] The first errors or warnings encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful.

jobComplete

boolean

Whether the query has completed or not. If rows or totalRows are present, this will always be true. If this is false, totalRows will not be available.

jobReference

object (JobReference)

Reference to the Job that was created to run the query. This field will be present even if the original request timed out, in which case GetQueryResults can be used to read the results once the query has completed. Since this API only returns the first page of results, subsequent pages can be fetched via the same mechanism (GetQueryResults).

kind

string

The resource type.

numDmlAffectedRows

string (int64 format)

[Output-only] The number of rows affected by a DML statement. Present only for DML statements INSERT, UPDATE or DELETE.

pageToken

string

A token used for paging results.

rows[]

object (TableRow)

An object with as many results as can be contained within the maximum permitted reply size. To get any additional rows, you can call GetQueryResults and specify the jobReference returned above.

schema

object (TableSchema)

The schema of the results. Present only when the query completes successfully.

sessionInfo

object (SessionInfo)

[Output-only] [Preview] Information of the session if this job is part of one.

totalBytesProcessed

string (int64 format)

The total number of bytes processed for this query. If this query was a dry run, this is the number of bytes that would be processed if the query were run.

totalRows

string (uint64 format)

The total number of rows in the complete query result set, which can be more than the number of rows in this single page of results.
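
The paging contract described under pageToken and rows can be sketched as a loop: keep fetching while a pageToken is present. Here `fetch_page` is a stand-in for jobs.getQueryResults and the two fake pages are made-up data, not real API output.

```python
def all_rows(fetch_page):
    """Collect rows across all pages of a query result.

    `fetch_page(page_token)` must return a QueryResponse-shaped dict;
    paging continues while the response carries a pageToken.
    """
    rows, token = [], None
    while True:
        resp = fetch_page(token)
        rows.extend(resp.get("rows", []))
        token = resp.get("pageToken")
        if not token:
            return rows

# Two fake pages standing in for real responses; rows use the
# TableRow shape ({"f": [{"v": ...}, ...]}).
pages = {
    None: {"rows": [{"f": [{"v": "a"}]}], "pageToken": "p2"},
    "p2": {"rows": [{"f": [{"v": "b"}]}]},
}
rows = all_rows(lambda token: pages[token])  # collects both pages
```

In a real client the first page comes from jobs.query and subsequent pages from jobs.getQueryResults using the jobReference in the response; checking jobComplete before reading rows is also required when the initial request times out.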

QueryTimelineSample

(No description provided)
Fields
activeUnits

string (int64 format)

Total number of units currently being processed by workers. This does not correspond directly to slot usage. This is the largest value observed since the last sample.

completedUnits

string (int64 format)

Total parallel units of work completed by this query.

elapsedMs

string (int64 format)

Milliseconds elapsed since the start of query execution.

estimatedRunnableUnits

string (int64 format)

Units of work that can be scheduled immediately. Providing additional slots for these units of work will speed up the query, provided no other query in the reservation needs additional slots.

pendingUnits

string (int64 format)

Total units of work remaining for the query. This number can be revised (increased or decreased) while the query is running.

totalSlotMs

string (int64 format)

Cumulative slot-ms consumed by the query.
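
Since totalSlotMs is cumulative and elapsedMs measures wall time, dividing one by the other gives a rough average slot count for the query so far. A small sketch, with a made-up sample:

```python
def average_slots(sample: dict) -> float:
    """Estimate average concurrent slot usage from one timeline sample.

    Both fields are int64 values serialized as strings, so they are
    parsed before dividing slot-milliseconds by elapsed milliseconds.
    """
    return int(sample["totalSlotMs"]) / int(sample["elapsedMs"])

# Hypothetical sample: 500,000 slot-ms consumed over 10 s of execution.
avg = average_slots({"totalSlotMs": "500000", "elapsedMs": "10000"})  # 50.0
```

This is an average over the whole execution, not an instantaneous reading; comparing consecutive timeline samples gives a finer-grained picture.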

RangePartitioning

(No description provided)
Fields
field

string

[TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.

range

object

[TrustedTester] [Required] Defines the ranges for range partitioning.

range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.
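
The range fields above determine which partition an integer value lands in. The helper below is an illustrative sketch of that mapping, assuming values outside [start, end) fall into a catch-all partition (returned as None here).

```python
def partition_for(value, start, end, interval):
    """Return the starting value of the partition containing `value`,
    or None if `value` falls outside [start, end)."""
    if not (start <= value < end):
        return None  # outside the declared range
    # Partitions are half-open intervals of equal width:
    # [start, start+interval), [start+interval, start+2*interval), ...
    return start + ((value - start) // interval) * interval

# range.start=0, range.end=100, range.interval=10
# -> partitions [0,10), [10,20), ..., [90,100)
p = partition_for(42, 0, 100, 10)  # 40
```

Note that range.end is exclusive: with the values above, 100 does not belong to any declared partition.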

RankingMetrics

Evaluation metrics used by weighted-ALS models specified by feedback_type=implicit.
Fields
averageRank

number (double format)

Determines the goodness of a ranking by computing the percentile rank from the predicted confidence and dividing it by the original rank.

meanAveragePrecision

number (double format)

Calculates a precision per user for all the items by ranking them and then averages all the precisions across all the users.

meanSquaredError

number (double format)

Similar to the mean squared error computed for regression and explicit recommendation models, except that instead of computing the rating directly, the output from evaluate is computed against a preference that is 1 or 0 depending on whether the rating exists.

normalizedDiscountedCumulativeGain

number (double format)

A metric to determine the goodness of a ranking calculated from the predicted confidence by comparing it to an ideal rank measured by the original ratings.

RegressionMetrics

Evaluation metrics for regression and explicit feedback type matrix factorization models.
Fields
meanAbsoluteError

number (double format)

Mean absolute error.

meanSquaredError

number (double format)

Mean squared error.

meanSquaredLogError

number (double format)

Mean squared log error.

medianAbsoluteError

number (double format)

Median absolute error.

rSquared

number (double format)

R^2 score. This corresponds to r2_score in ML.EVALUATE.

RemoteFunctionOptions

Options for a remote user-defined function.
Fields
connection

string

Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"

endpoint

string

Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add

maxBatchingRows

string (int64 format)

Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.

userDefinedContext

map (key: string, value: string)

User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
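
A minimal sketch of a remoteFunctionOptions object as a Python dict, including a check of the documented 8KB limit on userDefinedContext (the connection name and context keys are hypothetical; the endpoint is the example from the field description above):

```python
# Sketch of a RemoteFunctionOptions object for a remote UDF.
remote_function_options = {
    "connection": "projects/my-project/locations/us-east1/connections/my-conn",  # hypothetical
    "endpoint": "https://us-east1-my_gcf_project.cloudfunctions.net/remote_add",
    "maxBatchingRows": "50",             # int64 as a JSON string; "0" lets BigQuery decide
    "userDefinedContext": {"mode": "add"},  # hypothetical key/value pair
}

# The total bytes of keys and values in userDefinedContext must stay under 8KB.
ctx_bytes = sum(
    len(k.encode("utf-8")) + len(v.encode("utf-8"))
    for k, v in remote_function_options["userDefinedContext"].items()
)
within_limit = ctx_bytes < 8 * 1024
```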

Routine

A user-defined function or a stored procedure.
Fields
arguments[]

object (Argument)

Optional.

creationTime

string (int64 format)

Output only. The time when this routine was created, in milliseconds since the epoch.

definitionBody

string

Required. The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
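
The escaping difference described above can be sketched in Python: a SQL definition_body keeps the "\n" escape as two literal characters, while a JavaScript definition_body contains the already-evaluated line break:

```python
# Sketch of the definition_body escaping rules described above.
sql_definition_body = r'concat(x, "\n", y)'  # backslash + n, kept verbatim
js_definition_body = 'return "\n";\n'        # real newline characters

sql_has_newline = "\n" in sql_definition_body   # the SQL body has no real newline
js_newline_count = js_definition_body.count("\n")  # the JS body has two
```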

description

string

Optional. The description of the routine, if defined.

determinismLevel

enum

Optional. The determinism level of the JavaScript UDF, if defined.

Enum type. Can be one of the following:
DETERMINISM_LEVEL_UNSPECIFIED The determinism of the UDF is unspecified.
DETERMINISTIC The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
NOT_DETERMINISTIC The UDF is not deterministic.
etag

string

Output only. A hash of this resource.

importedLibraries[]

string

Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.

language

enum

Optional. Defaults to "SQL".

Enum type. Can be one of the following:
LANGUAGE_UNSPECIFIED (No description provided)
SQL SQL language.
JAVASCRIPT JavaScript language.
PYTHON Python language.
lastModifiedTime

string (int64 format)

Output only. The time when this routine was last modified, in milliseconds since the epoch.

remoteFunctionOptions

object (RemoteFunctionOptions)

Optional. Remote function specific options.

returnTableType

object (StandardSqlTableType)

Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast, at query time, to match the column types specified in the return table type.

returnType

object (StandardSqlDataType)

Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.

routineReference

object (RoutineReference)

Required. Reference describing the ID of this routine.

routineType

enum

Required. The type of routine.

Enum type. Can be one of the following:
ROUTINE_TYPE_UNSPECIFIED (No description provided)
SCALAR_FUNCTION Non-builtin permanent scalar function.
PROCEDURE Stored procedure.
TABLE_VALUED_FUNCTION Non-builtin permanent TVF.
sparkOptions

object (SparkOptions)

Optional. Spark specific options.

strictMode

boolean

Optional. Can be set for procedures only. If true (default), the definition body is validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
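
Putting the Routine fields together, a minimal routines.insert request body for a SQL scalar function might look like the following sketch (all identifiers are hypothetical, and the argument's name key is assumed from the Argument type):

```python
# Sketch of a Routine resource body for a SQL scalar UDF.
routine_body = {
    "routineReference": {
        "projectId": "my-project",   # hypothetical
        "datasetId": "my_dataset",   # hypothetical
        "routineId": "add_one",
    },
    "routineType": "SCALAR_FUNCTION",
    "language": "SQL",
    "arguments": [
        {"name": "x", "dataType": {"typeKind": "INT64"}},  # "name" assumed
    ],
    "returnType": {"typeKind": "INT64"},
    # For SQL functions, the body is the expression inside the AS clause.
    "definitionBody": "x + 1",
}
```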

RoutineReference

(No description provided)
Fields
datasetId

string

[Required] The ID of the dataset containing this routine.

projectId

string

[Required] The ID of the project containing this routine.

routineId

string

[Required] The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.

Row

A single row in the confusion matrix.
Fields
actualLabel

string

The original label of this row.

entries[]

object (Entry)

Info describing predicted label distribution.

RowAccessPolicy

Represents access on a subset of rows on the specified table, defined by its filter predicate. Access to the subset of rows is controlled by its IAM policy.
Fields
creationTime

string (Timestamp format)

Output only. The time when this row access policy was created, in milliseconds since the epoch.

etag

string

Output only. A hash of this resource.

filterPredicate

string

Required. A SQL boolean expression that represents the rows defined by this row access policy, similar to the boolean expression in a WHERE clause of a SELECT query on a table. References to other tables, routines, and temporary functions are not supported. Examples: region="EU"; date_field = CAST('2019-9-27' as DATE); nullable_field is not NULL; numeric_field BETWEEN 1.0 AND 5.0

lastModifiedTime

string (Timestamp format)

Output only. The time when this row access policy was last modified, in milliseconds since the epoch.

rowAccessPolicyReference

object (RowAccessPolicyReference)

Required. Reference describing the ID of this row access policy.

RowAccessPolicyReference

(No description provided)
Fields
datasetId

string

[Required] The ID of the dataset containing this row access policy.

policyId

string

[Required] The ID of the row access policy. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.

projectId

string

[Required] The ID of the project containing this row access policy.

tableId

string

[Required] The ID of the table containing this row access policy.

RowLevelSecurityStatistics

(No description provided)
Fields
rowLevelSecurityApplied

boolean

[Output-only] [Preview] Whether any accessed data was protected by row access policies.

ScriptStackFrame

(No description provided)
Fields
endColumn

integer (int32 format)

[Output-only] One-based end column.

endLine

integer (int32 format)

[Output-only] One-based end line.

procedureId

string

[Output-only] Name of the active procedure, empty if in a top-level script.

startColumn

integer (int32 format)

[Output-only] One-based start column.

startLine

integer (int32 format)

[Output-only] One-based start line.

text

string

[Output-only] Text of the current statement/expression.

ScriptStatistics

(No description provided)
Fields
evaluationKind

string

[Output-only] Whether this child job was a statement or expression.

stackFrames[]

object (ScriptStackFrame)

Stack trace showing the line/column/procedure name of each frame on the stack at the point where the current evaluation happened. The leaf frame is first; the primary script is last. Never empty.

SearchStatistics

(No description provided)
Fields
indexUnusedReason[]

object (IndexUnusedReason)

When index_usage_mode is UNUSED or PARTIALLY_USED, this field explains why the index was not used in all or part of the search query. If index_usage_mode is FULLY_USED, this field is not populated.

indexUsageMode

string

Specifies index usage mode for the query.

SessionInfo

(No description provided)
Fields
sessionId

string

[Output-only] [Preview] ID of the session.

SetIamPolicyRequest

Request message for SetIamPolicy method.
Fields
policy

object (Policy)

REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few tens of KB. An empty policy is a valid policy, but certain Google Cloud services (such as Projects) might reject it.

updateMask

string (FieldMask format)

OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only the fields in the mask will be modified. If no mask is provided, the following default mask is used: paths: "bindings, etag"

SnapshotDefinition

(No description provided)
Fields
baseTableReference

object (TableReference)

[Required] Reference describing the ID of the table that was snapshotted.

snapshotTime

string (date-time format)

[Required] The time at which the base table was snapshotted. This value is reported in the JSON response using RFC3339 format.

SparkLoggingInfo

(No description provided)
Fields
project_id

string

[Output-only] Project ID used for logging.

resource_type

string

[Output-only] Resource type used for logging.

SparkOptions

Options for a user-defined Spark routine.
Fields
archiveUris[]

string

Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.

connection

string

Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"

containerImage

string

Custom container image for the runtime environment.

fileUris[]

string

Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.

jarUris[]

string

JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.

mainFileUri

string

The main file URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set.

properties

map (key: string, value: string)

Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark.

pyFileUris[]

string

Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.

runtimeVersion

string

Runtime version. If not specified, the default runtime version is used.

SparkStatistics

(No description provided)
Fields
endpoints

map (key: string, value: string)

[Output-only] Endpoints generated for the Spark job.

logging_info

object (SparkLoggingInfo)

[Output-only] Logging info is used to generate a link to Cloud Logging.

spark_job_id

string

[Output-only] Spark job id if a Spark job is created successfully.

spark_job_location

string

[Output-only] Location where the Spark job is executed.

StandardSqlDataType

The data type of a variable such as a function argument. Examples include: * INT64: {"typeKind": "INT64"} * ARRAY<STRING>: { "typeKind": "ARRAY", "arrayElementType": {"typeKind": "STRING"} } * STRUCT<x STRING, y ARRAY<DATE>>: { "typeKind": "STRUCT", "structType": { "fields": [ { "name": "x", "type": {"typeKind": "STRING"} }, { "name": "y", "type": { "typeKind": "ARRAY", "arrayElementType": {"typeKind": "DATE"} } } ] } }
Fields
arrayElementType

object (StandardSqlDataType)

The type of the array's elements, if type_kind = "ARRAY".

structType

object (StandardSqlStructType)

The fields of this struct, in order, if type_kind = "STRUCT".

typeKind

enum

Required. The top level type of this field. Can be any standard SQL data type (e.g., "INT64", "DATE", "ARRAY").

Enum type. Can be one of the following:
TYPE_KIND_UNSPECIFIED Invalid type.
INT64 Encoded as a string in decimal format.
BOOL Encoded as a boolean "false" or "true".
FLOAT64 Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
STRING Encoded as a string value.
BYTES Encoded as a base64 string per RFC 4648, section 4.
TIMESTAMP Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
DATE Encoded as RFC 3339 full-date format string: 1985-04-12
TIME Encoded as RFC 3339 partial-time format string: 23:20:50.52
DATETIME Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
INTERVAL Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
GEOGRAPHY Encoded as WKT
NUMERIC Encoded as a decimal string.
BIGNUMERIC Encoded as a decimal string.
JSON Encoded as a string.
ARRAY Encoded as a list with types matching Type.array_type.
STRUCT Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
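
As a sketch of building the nested JSON objects described above, two small helpers that compose StandardSqlDataType values via arrayElementType and structType (helper names are ours, not part of the API):

```python
# Sketch: compose StandardSqlDataType JSON objects.
def array_of(element_type):
    """ARRAY type whose elements have the given StandardSqlDataType."""
    return {"typeKind": "ARRAY", "arrayElementType": element_type}

def struct_of(*fields):
    """STRUCT type from (name, StandardSqlDataType) pairs, in order."""
    return {
        "typeKind": "STRUCT",
        "structType": {"fields": [{"name": n, "type": t} for n, t in fields]},
    }

# STRUCT<x STRING, y ARRAY<DATE>> from the type's description:
t = struct_of(
    ("x", {"typeKind": "STRING"}),
    ("y", array_of({"typeKind": "DATE"})),
)
```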

StandardSqlField

A field or a column.
Fields
name

string

Optional. The name of this field. Can be absent for struct fields.

type

object (StandardSqlDataType)

Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).

StandardSqlStructType

(No description provided)
Fields
fields[]

object (StandardSqlField)

(No description provided)

StandardSqlTableType

A table type
Fields
columns[]

object (StandardSqlField)

The columns in this table type

Streamingbuffer

(No description provided)
Fields
estimatedBytes

string (uint64 format)

[Output-only] A lower-bound estimate of the number of bytes currently in the streaming buffer.

estimatedRows

string (uint64 format)

[Output-only] A lower-bound estimate of the number of rows currently in the streaming buffer.

oldestEntryTime

string (uint64 format)

[Output-only] Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.

StringHparamSearchSpace

Search space for string and enum.
Fields
candidates[]

string

Candidates for the string or enum parameter, in lowercase.

Table

(No description provided)
Fields
cloneDefinition

object (CloneDefinition)

[Output-only] Clone definition.

clustering

object (Clustering)

[Beta] Clustering specification for the table. Must be specified with partitioning; data in the table will be first partitioned and subsequently clustered.

creationTime

string (int64 format)

[Output-only] The time when this table was created, in milliseconds since the epoch.

defaultCollation

string

[Output-only] The default collation of the table.

description

string

[Optional] A user-friendly description of this table.

encryptionConfiguration

object (EncryptionConfiguration)

Custom encryption configuration (e.g., Cloud KMS keys).

etag

string

[Output-only] A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.

expirationTime

string (int64 format)

[Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.

externalDataConfiguration

object (ExternalDataConfiguration)

[Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.

friendlyName

string

[Optional] A descriptive name for this table.

id

string

[Output-only] An opaque ID uniquely identifying the table.

kind

string

[Output-only] The type of the resource.

labels

map (key: string, value: string)

The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.

lastModifiedTime

string (uint64 format)

[Output-only] The time when this table was last modified, in milliseconds since the epoch.

location

string

[Output-only] The geographic location where the table resides. This value is inherited from the dataset.

materializedView

object (MaterializedViewDefinition)

[Optional] Materialized view definition.

maxStaleness

string (bytes format)

[Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).

model

object (ModelDefinition)

[Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.

model.modelOptions.labels[]

string

(No description provided)

model.modelOptions.lossType

string

(No description provided)

model.modelOptions.modelType

string

(No description provided)

numBytes

string (int64 format)

[Output-only] The size of this table in bytes, excluding any data in the streaming buffer.

numLongTermBytes

string (int64 format)

[Output-only] The number of bytes in the table that are considered "long-term storage".

numPhysicalBytes

string (int64 format)

[Output-only] [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.

numRows

string (uint64 format)

[Output-only] The number of rows of data in this table, excluding any data in the streaming buffer.

num_active_logical_bytes

string (int64 format)

[Output-only] Number of logical bytes that are less than 90 days old.

num_active_physical_bytes

string (int64 format)

[Output-only] Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.

num_long_term_logical_bytes

string (int64 format)

[Output-only] Number of logical bytes that are more than 90 days old.

num_long_term_physical_bytes

string (int64 format)

[Output-only] Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.

num_partitions

string (int64 format)

[Output-only] The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.

num_time_travel_physical_bytes

string (int64 format)

[Output-only] Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.

num_total_logical_bytes

string (int64 format)

[Output-only] Total number of logical bytes in the table or materialized view.

num_total_physical_bytes

string (int64 format)

[Output-only] The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.

rangePartitioning

object (RangePartitioning)

[TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.

rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

requirePartitionFilter

boolean

[Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.

schema

object (TableSchema)

[Optional] Describes the schema of this table.

selfLink

string

[Output-only] A URL that can be used to access this resource again.

snapshotDefinition

object (SnapshotDefinition)

[Output-only] Snapshot definition.

streamingBuffer

object (Streamingbuffer)

[Output-only] Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

tableReference

object (TableReference)

[Required] Reference describing the ID of this table.

timePartitioning

object (TimePartitioning)

Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.

type

string

[Output-only] Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.

view

object (ViewDefinition)

[Optional] The view definition.

TableCell

(No description provided)
Fields
v

any

(No description provided)

TableDataInsertAllRequest

(No description provided)
Fields
ignoreUnknownValues

boolean

[Optional] Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.

kind

string

The resource type of the response.

rows[]

object

The rows to insert.

rows.insertId

string

[Optional] A unique ID for each row. BigQuery uses this property to detect duplicate insertion requests on a best-effort basis.

rows.json

object (JsonObject)

[Required] A JSON object that contains a row of data. The object's properties and values must match the destination table's schema.

skipInvalidRows

boolean

[Optional] Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.

templateSuffix

string

If specified, treats the destination table as a base template, and inserts the rows into an instance table named "{destination}{templateSuffix}". BigQuery will manage creation of the instance table, using the schema of the base template table. See https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables for considerations when working with template tables.
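
Combining the fields above, a minimal tabledata.insertAll request body might be sketched as follows (the row payload and column names are hypothetical; insertId is a per-row best-effort deduplication key):

```python
import uuid

# Sketch of a TableDataInsertAllRequest body.
request_body = {
    "kind": "bigquery#tableDataInsertAllRequest",
    "skipInvalidRows": True,       # insert valid rows even if some rows are invalid
    "ignoreUnknownValues": False,  # treat values not matching the schema as errors
    "rows": [
        {
            "insertId": str(uuid.uuid4()),  # best-effort dedup on retries
            "json": {"customer_id": 42, "amount": 19.99},  # must match table schema
        },
    ],
}
```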

TableDataInsertAllResponse

(No description provided)
Fields
insertErrors[]

object

An array of errors for rows that were not inserted.

insertErrors.errors[]

object (ErrorProto)

Error information for the row indicated by the index property.

insertErrors.index

integer (uint32 format)

The index of the row that the error applies to.

kind

string

The resource type of the response.

TableDataList

(No description provided)
Fields
etag

string

A hash of this page of results.

kind

string

The resource type of the response.

pageToken

string

A token used for paging results. Providing this token instead of the startIndex parameter can help you retrieve stable results when an underlying table is changing.

rows[]

object (TableRow)

Rows of results.

totalRows

string (int64 format)

The total number of rows in the complete table.

TableFieldSchema

(No description provided)
Fields
categories

object

[Optional] The categories attached to this field, used for field-level access control.

categories.names[]

string

A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.

collation

string

Optional. Collation specification of the field. It can only be set on a string-type field.

defaultValueExpression

string

Optional. A SQL expression to specify the default value for this field. It can only be set for top-level fields (columns). You can use a struct or array expression to specify a default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - The following functions: CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE, CURRENT_DATETIME, GENERATE_UUID, RAND, SESSION_USER, ST_GEOGPOINT. - A struct or array composed of the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01'].

description

string

[Optional] The field description. The maximum length is 1,024 characters.

fields[]

object (TableFieldSchema)

[Optional] Describes the nested schema fields if the type property is set to RECORD.

maxLength

string (int64 format)

[Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠ "STRING" and ≠ "BYTES".

mode

string

[Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.

name

string

[Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.

policyTags

object

(No description provided)

policyTags.names[]

string

A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.

precision

string (int64 format)

[Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠ "NUMERIC" and ≠ "BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10^(P-S) + 10^(-S), 10^(P-S) - 10^(-S)] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10^P + 1, 10^P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.

scale

string (int64 format)

[Optional] See documentation for precision.

type

string

[Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
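
The value range implied by the precision and scale fields above can be sketched in Python (a direct transcription of the documented formula, with P = precision and S = scale):

```python
# Sketch: the [lo, hi] range permitted for a NUMERIC/BIGNUMERIC field with
# the given precision and scale. With scale omitted (interpreted as zero),
# 10**(P - 0) - 10**0 reduces to 10**P - 1, matching the second case above.
def numeric_value_range(precision, scale=0):
    hi = 10 ** (precision - scale) - 10 ** (-scale)
    return (-hi, hi)
```

For example, a field with precision 3 and no scale admits integers from -999 to 999.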

TableList

(No description provided)
Fields
etag

string

A hash of this page of results.

kind

string

The type of list.

nextPageToken

string

A token to request the next page of results.

tables[]

object

Tables in the requested dataset.

tables.clustering

object (Clustering)

[Beta] Clustering specification for this table, if configured.

tables.creationTime

string (int64 format)

The time when this table was created, in milliseconds since the epoch.

tables.expirationTime

string (int64 format)

[Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

tables.friendlyName

string

The user-friendly name for this table.

tables.id

string

An opaque ID of the table.

tables.kind

string

The resource type.

tables.labels

map (key: string, value: string)

The labels associated with this table. You can use these to organize and group your tables.

tables.rangePartitioning

object (RangePartitioning)

The range partitioning specification for this table, if configured.

tables.rangePartitioning.range.end

string (int64 format)

[TrustedTester] [Required] The end of range partitioning, exclusive.

tables.rangePartitioning.range.interval

string (int64 format)

[TrustedTester] [Required] The width of each interval.

tables.rangePartitioning.range.start

string (int64 format)

[TrustedTester] [Required] The start of range partitioning, inclusive.

tables.tableReference

object (TableReference)

A reference uniquely identifying the table.

tables.timePartitioning

object (TimePartitioning)

The time-based partitioning specification for this table, if configured.

tables.type

string

The type of table. Possible values are: TABLE, VIEW.

tables.view

object

Additional details for a view.

tables.view.useLegacySql

boolean

True if view is defined in legacy SQL dialect, false if in standard SQL.

totalItems

integer (int32 format)

The total number of tables in the dataset.

TableReference

(No description provided)
Fields
datasetId

string

[Required] The ID of the dataset containing this table.

projectId

string

[Required] The ID of the project containing this table.

tableId

string

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

TableRow

(No description provided)
Fields
f[]

object (TableCell)

Represents a single row in the result set, consisting of one or more fields.

TableSchema

(No description provided)
Fields
fields[]

object (TableFieldSchema)

Describes the fields in a table.

TestIamPermissionsRequest

Request message for TestIamPermissions method.
Fields
permissions[]

string

The set of permissions to check for the resource. Permissions with wildcards (such as * or storage.*) are not allowed. For more information see IAM Overview.

TestIamPermissionsResponse

Response message for TestIamPermissions method.
Fields
permissions[]

string

A subset of TestPermissionsRequest.permissions that the caller is allowed.

TimePartitioning

(No description provided)
Fields
expirationMs

string (int64 format)

[Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.

field

string

[Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.

requirePartitionFilter

boolean

(No description provided)

type

string

[Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
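Putting the TimePartitioning fields together: a hypothetical JSON payload (note that int64 fields such as expirationMs are encoded as strings in the REST API), plus a sketch of the documented expiration rule that a partition's storage expires at its partition time plus expirationMs. The field names follow the reference above; the column name `event_ts` is invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical TimePartitioning resource as it would appear in the JSON API.
time_partitioning = {
    "type": "DAY",                  # DAY is also the default when type is unset
    "field": "event_ts",            # top-level TIMESTAMP or DATE column
    "expirationMs": str(90 * 24 * 60 * 60 * 1000),  # keep partitions 90 days
    "requirePartitionFilter": True,
}

def partition_expiry(partition_time: datetime, expiration_ms: str) -> datetime:
    """Per the docs: a partition expires at its partition time plus expirationMs."""
    return partition_time + timedelta(milliseconds=int(expiration_ms))
```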

TrainingOptions

Options used in model training.
Fields
adjustStepChanges

boolean

If true, detect step changes and make data adjustments in the input time series.

autoArima

boolean

Whether to enable auto ARIMA or not.

autoArimaMaxOrder

string (int64 format)

The max value of non-seasonal p and q.

batchSize

string (int64 format)

Batch size for dnn models.

boosterType

enum

Booster type for boosted tree models.

Enum type. Can be one of the following:
BOOSTER_TYPE_UNSPECIFIED Unspecified booster type.
GBTREE Gbtree booster.
DART Dart booster.
calculatePValues

boolean

Whether or not p-value test should be computed for this model. Only available for linear and logistic regression models.

cleanSpikesAndDips

boolean

If true, clean spikes and dips in the input time series.

colorSpace

enum

Enums for color space, used for processing images in Object Table. See more details at https://www.tensorflow.org/io/tutorials/colorspace.

Enum type. Can be one of the following:
COLOR_SPACE_UNSPECIFIED Unspecified color space
RGB RGB
HSV HSV
YIQ YIQ
YUV YUV
GRAYSCALE GRAYSCALE
colsampleBylevel

number (double format)

Subsample ratio of columns for each level for boosted tree models.

colsampleBynode

number (double format)

Subsample ratio of columns for each node (split) for boosted tree models.

colsampleBytree

number (double format)

Subsample ratio of columns when constructing each tree for boosted tree models.

dartNormalizeType

enum

Type of normalization algorithm for boosted tree models using dart booster.

Enum type. Can be one of the following:
DART_NORMALIZE_TYPE_UNSPECIFIED Unspecified dart normalize type.
TREE New trees have the same weight as each of the dropped trees.
FOREST New trees have the same weight as the sum of the dropped trees.
dataFrequency

enum

The data frequency of a time series.

Enum type. Can be one of the following:
DATA_FREQUENCY_UNSPECIFIED (No description provided)
AUTO_FREQUENCY Automatically inferred from timestamps.
YEARLY Yearly data.
QUARTERLY Quarterly data.
MONTHLY Monthly data.
WEEKLY Weekly data.
DAILY Daily data.
HOURLY Hourly data.
PER_MINUTE Per-minute data.
dataSplitColumn

string

The column to split data with. This column won't be used as a feature. 1. When data_split_method is CUSTOM, the corresponding column should be boolean. Rows with a true value are evaluation data, and rows with a false value are training data. 2. When data_split_method is SEQ, the first DATA_SPLIT_EVAL_FRACTION rows (from smallest to largest) in the corresponding column are used as training data, and the rest are evaluation data. This respects the ordering of orderable data types: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#data-type-properties

dataSplitEvalFraction

number (double format)

The fraction of evaluation data over the whole input data. The rest of data will be used as training data. The format should be double. Accurate to two decimal places. Default value is 0.2.
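The CUSTOM and RANDOM split behaviors described above can be sketched in a few lines; `custom_split` and `random_split` below are hypothetical helpers illustrating the documented semantics (boolean tag column for CUSTOM, eval fraction defaulting to 0.2 for RANDOM), not anything the API exposes.

```python
import random

def custom_split(rows, split_flags):
    """CUSTOM split sketch: rows whose boolean tag is True become
    evaluation data, rows tagged False become training data."""
    train = [r for r, flag in zip(rows, split_flags) if not flag]
    evals = [r for r, flag in zip(rows, split_flags) if flag]
    return train, evals

def random_split(rows, eval_fraction=0.2, seed=0):
    """RANDOM split sketch using dataSplitEvalFraction (default 0.2)."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_eval = round(len(shuffled) * eval_fraction)
    return shuffled[n_eval:], shuffled[:n_eval]
```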

dataSplitMethod

enum

The data split type for training and evaluation, e.g. RANDOM.

Enum type. Can be one of the following:
DATA_SPLIT_METHOD_UNSPECIFIED (No description provided)
RANDOM Splits data randomly.
CUSTOM Splits data with the user provided tags.
SEQUENTIAL Splits data sequentially.
NO_SPLIT Data split will be skipped.
AUTO_SPLIT Splits data automatically: Uses NO_SPLIT if the data size is small. Otherwise uses RANDOM.
decomposeTimeSeries

boolean

If true, perform time series decomposition and save the results.

distanceType

enum

Distance type for clustering models.

Enum type. Can be one of the following:
DISTANCE_TYPE_UNSPECIFIED (No description provided)
EUCLIDEAN Euclidean distance.
COSINE Cosine distance.
dropout

number (double format)

Dropout probability for dnn models.

earlyStop

boolean

Whether to stop early when the loss doesn't improve significantly any more (compared to min_relative_progress). Used only for iterative training algorithms.

enableGlobalExplain

boolean

If true, enable global explanation during training.

feedbackType

enum

Feedback type that specifies which algorithm to run for matrix factorization.

Enum type. Can be one of the following:
FEEDBACK_TYPE_UNSPECIFIED (No description provided)
IMPLICIT Use weighted-als for implicit feedback problems.
EXPLICIT Use nonweighted-als for explicit feedback problems.
hiddenUnits[]

string (int64 format)

Hidden units for dnn models.

holidayRegion

enum

The geographical region whose holidays are considered in time series modeling. If a valid value is specified, holiday effects modeling is enabled.

Enum type. Can be one of the following:
HOLIDAY_REGION_UNSPECIFIED Holiday region unspecified.
GLOBAL Global.
NA North America.
JAPAC Japan and Asia Pacific: Korea, Greater China, India, Australia, and New Zealand.
EMEA Europe, the Middle East and Africa.
LAC Latin America and the Caribbean.
AE United Arab Emirates
AR Argentina
AT Austria
AU Australia
BE Belgium
BR Brazil
CA Canada
CH Switzerland
CL Chile
CN China
CO Colombia
CS Czechoslovakia
CZ Czech Republic
DE Germany
DK Denmark
DZ Algeria
EC Ecuador
EE Estonia
EG Egypt
ES Spain
FI Finland
FR France
GB Great Britain (United Kingdom)
GR Greece
HK Hong Kong
HU Hungary
ID Indonesia
IE Ireland
IL Israel
IN India
IR Iran
IT Italy
JP Japan
KR Korea (South)
LV Latvia
MA Morocco
MX Mexico
MY Malaysia
NG Nigeria
NL Netherlands
NO Norway
NZ New Zealand
PE Peru
PH Philippines
PK Pakistan
PL Poland
PT Portugal
RO Romania
RS Serbia
RU Russian Federation
SA Saudi Arabia
SE Sweden
SG Singapore
SI Slovenia
SK Slovakia
TH Thailand
TR Turkey
TW Taiwan
UA Ukraine
US United States
VE Venezuela
VN Viet Nam
ZA South Africa
horizon

string (int64 format)

The number of periods ahead that need to be forecasted.

hparamTuningObjectives[]

string

The target evaluation metrics to optimize the hyperparameters for.

includeDrift

boolean

Include drift when fitting an ARIMA model.

initialLearnRate

number (double format)

Specifies the initial learning rate for the line search learn rate strategy.

inputLabelColumns[]

string

Name of input label columns in training data.

integratedGradientsNumSteps

string (int64 format)

Number of integral steps for the integrated gradients explain method.

itemColumn

string

Item column specified for matrix factorization models.

kmeansInitializationColumn

string

The column used to provide the initial centroids for the kmeans algorithm when kmeans_initialization_method is CUSTOM.

kmeansInitializationMethod

enum

The method used to initialize the centroids for the kmeans algorithm.

Enum type. Can be one of the following:
KMEANS_INITIALIZATION_METHOD_UNSPECIFIED Unspecified initialization method.
RANDOM Initializes the centroids randomly.
CUSTOM Initializes the centroids using data specified in kmeans_initialization_column.
KMEANS_PLUS_PLUS Initializes with kmeans++.
l1Regularization

number (double format)

L1 regularization coefficient.

l2Regularization

number (double format)

L2 regularization coefficient.

labelClassWeights

map (key: string, value: number (double format))

Weights associated with each label class, for rebalancing the training data. Only applicable for classification models.
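The API leaves the choice of weights to the caller. One common heuristic (an assumption here, not prescribed by the reference above) is inverse-frequency weighting, so rare classes count more in the loss; `inverse_frequency_weights` is a hypothetical helper:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each label class inversely to its frequency, so that the
    weighted class counts are balanced. Weights average to 1 across classes."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}
```

The resulting map is shaped like the labelClassWeights field (string keys, double values).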

learnRate

number (double format)

Learning rate in training. Used only for iterative training algorithms.

learnRateStrategy

enum

The strategy to determine learn rate for the current iteration.

Enum type. Can be one of the following:
LEARN_RATE_STRATEGY_UNSPECIFIED (No description provided)
LINE_SEARCH Use line search to determine learning rate.
CONSTANT Use a constant learning rate.
lossType

enum

Type of loss function used during training run.

Enum type. Can be one of the following:
LOSS_TYPE_UNSPECIFIED (No description provided)
MEAN_SQUARED_LOSS Mean squared loss, used for linear regression.
MEAN_LOG_LOSS Mean log loss, used for logistic regression.
maxIterations

string (int64 format)

The maximum number of iterations in training. Used only for iterative training algorithms.

maxParallelTrials

string (int64 format)

Maximum number of trials to run in parallel.

maxTimeSeriesLength

string (int64 format)

Truncates the time series to the last n points. Use separately from time_series_length_fraction and min_time_series_length.

maxTreeDepth

string (int64 format)

Maximum depth of a tree for boosted tree models.

minRelativeProgress

number (double format)

When early_stop is true, stops training when accuracy improvement is less than 'min_relative_progress'. Used only for iterative training algorithms.
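The early_stop / min_relative_progress / max_iterations interaction described above can be sketched as a loop over per-iteration losses; `run_iterations` is an invented name and the loop is only an illustration of the documented stopping rule, not the service's implementation.

```python
def run_iterations(losses, early_stop=True, min_relative_progress=0.01,
                   max_iterations=None):
    """Stop once the relative improvement over the previous iteration
    falls below min_relative_progress, or max_iterations is reached."""
    completed = []
    prev = None
    for i, loss in enumerate(losses):
        if max_iterations is not None and i >= max_iterations:
            break
        completed.append(loss)
        if early_stop and prev is not None:
            if (prev - loss) / prev < min_relative_progress:
                break  # improvement too small: stop training
        prev = loss
    return completed
```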

minSplitLoss

number (double format)

Minimum split loss for boosted tree models.

minTimeSeriesLength

string (int64 format)

Set fast trend ARIMA_PLUS model minimum training length. Use in pair with time_series_length_fraction.

minTreeChildWeight

string (int64 format)

Minimum sum of instance weight needed in a child for boosted tree models.

modelUri

string

Google Cloud Storage URI from which the model was imported. Only applicable for imported models.

nonSeasonalOrder

object (ArimaOrder)

A specification of the non-seasonal part of the ARIMA model: the three components (p, d, q) are the AR order, the degree of differencing, and the MA order.

numClusters

string (int64 format)

Number of clusters for clustering models.

numFactors

string (int64 format)

Number of factors specified for matrix factorization models.

numParallelTree

string (int64 format)

Number of parallel trees constructed during each iteration for boosted tree models.

numTrials

string (int64 format)

Number of trials to run this hyperparameter tuning job.

optimizationStrategy

enum

Optimization strategy for training linear regression models.

Enum type. Can be one of the following:
OPTIMIZATION_STRATEGY_UNSPECIFIED (No description provided)
BATCH_GRADIENT_DESCENT Uses an iterative batch gradient descent algorithm.
NORMAL_EQUATION Uses a normal equation to solve linear regression problem.
preserveInputStructs

boolean

Whether to preserve the input structs in output feature names. Suppose there is a struct A with field b. When false (default), the output feature name is A_b. When true, the output feature name is A.b.
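The naming rule above (struct A with field b becomes A_b by default, A.b when preserve_input_structs is true) reduces to a choice of separator; `output_feature_name` is a hypothetical helper illustrating it:

```python
def output_feature_name(path, preserve_input_structs=False):
    """Join a struct field path into an output feature name.

    Per the docs: ['A', 'b'] -> 'A_b' when preserve_input_structs is
    false (the default), 'A.b' when it is true.
    """
    sep = "." if preserve_input_structs else "_"
    return sep.join(path)
```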

sampledShapleyNumPaths

string (int64 format)

Number of paths for the sampled Shapley explain method.

subsample

number (double format)

Subsample fraction of the training data used to grow each tree, to prevent overfitting, for boosted tree models.

timeSeriesDataColumn

string

Column to be designated as time series data for ARIMA model.

timeSeriesIdColumn

string

The time series id column that was used during ARIMA model training.

timeSeriesIdColumns[]

string

The time series id columns that were used during ARIMA model training.

timeSeriesLengthFraction

number (double format)

Truncates the time series to the given fraction of its length.

timeSeriesTimestampColumn

string

Column to be designated as time series timestamp for ARIMA model.

treeMethod

enum

Tree construction algorithm for boosted tree models.

Enum type. Can be one of the following:
TREE_METHOD_UNSPECIFIED Unspecified tree method.
AUTO Use heuristic to choose the fastest method.
EXACT Exact greedy algorithm.
APPROX Approximate greedy algorithm using quantile sketch and gradient histogram.
HIST Fast histogram optimized approximate greedy algorithm.
trendSmoothingWindowSize

string (int64 format)

The smoothing window size for the trend component of the time series.

userColumn

string

User column specified for matrix factorization models.

walsAlpha

number (double format)

Hyperparameter for matrix factorization when the implicit feedback type is specified.

warmStart

boolean

Whether to train a model from the last checkpoint.

TrainingRun

Information about a single training query run for the model.
Fields
classLevelGlobalExplanations[]

object (GlobalExplanation)

Output only. Global explanation contains the explanation of top features on the class level. Applies to classification models only.

dataSplitResult

object (DataSplitResult)

Output only. Data split result of the training run. Only set when the input data is actually split.

evaluationMetrics

object (EvaluationMetrics)

Output only. The evaluation metrics over training/eval data that were computed at the end of training.

modelLevelGlobalExplanation

object (GlobalExplanation)

Output only. Global explanation contains the explanation of top features on the model level. Applies to both regression and classification models.

results[]

object (IterationResult)

Output only. Output of each iteration run, results.size() <= max_iterations.

startTime

string (Timestamp format)

Output only. The start time of this training run.

trainingOptions

object (TrainingOptions)

Output only. Options that were used for this training run, including both user-specified and default options.

trainingStartTime

string (int64 format)

Output only. The start time of this training run, in milliseconds since epoch.

vertexAiModelId

string

The model id in the Vertex AI Model Registry for this training run.

vertexAiModelVersion

string

Output only. The model version in the Vertex AI Model Registry for this training run.

TransactionInfo

(No description provided)
Fields
transactionId

string

[Output-only] [Alpha] ID of the transaction.

UserDefinedFunctionResource

This is used for defining User Defined Function (UDF) resources only when using legacy SQL. Users of Standard SQL should leverage either DDL (e.g. CREATE [TEMPORARY] FUNCTION ... ) or the Routines API to define UDF resources. For additional information on migrating, see: https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#differences_in_user-defined_javascript_functions
Fields
inlineCode

string

[Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

resourceUri

string

[Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
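The "[Pick one]" constraint between inlineCode and resourceUri can be enforced when building the resource dict; `udf_resource` below is a hypothetical constructor, not a client-library function.

```python
def udf_resource(inline_code=None, resource_uri=None):
    """Build a UserDefinedFunctionResource dict, enforcing the
    documented pick-one constraint between inlineCode and resourceUri."""
    if (inline_code is None) == (resource_uri is None):
        raise ValueError(
            "exactly one of inline_code or resource_uri must be given")
    if inline_code is not None:
        return {"inlineCode": inline_code}
    return {"resourceUri": resource_uri}
```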

ViewDefinition

(No description provided)
Fields
query

string

[Required] A query that BigQuery executes when the view is referenced.

useExplicitColumnNames

boolean

True if the column names are explicitly specified, for example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/

useLegacySql

boolean

Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.

userDefinedFunctionResources[]

object (UserDefinedFunctionResource)

Describes user-defined function resources used in the query.
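Putting the ViewDefinition fields together: a hypothetical request payload as it might appear in a table resource's view property. The field names follow the reference above; the query and table names are invented for illustration.

```python
# Hypothetical ViewDefinition payload. useLegacySql defaults to true,
# so standard SQL views must set it to false explicitly.
view_definition = {
    "query": "SELECT c1, c2 FROM `my_project.my_dataset.my_table`",
    "useLegacySql": False,           # standard SQL; referencing queries must agree
    "useExplicitColumnNames": True,  # as with CREATE VIEW v(c1, c2) AS ...
    "userDefinedFunctionResources": [],  # legacy-SQL-only; empty for standard SQL
}
```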