All BigQuery code samples

Add a column using a load job

Add a new column to a BigQuery table while appending rows using a load job.

View in documentation

Add a column using a query job

Add a new column to a BigQuery table while appending rows using a query job with an explicit destination table.

View in documentation

Add a label

Add a label to a table.

Add an empty column

Manually add an empty column.

View in documentation

Array parameters

Run a query with array parameters.

View in documentation

Cancel a job

Attempt to cancel a job.

View in documentation

Check dataset existence

A function to check whether a dataset exists.

View in documentation

Clustered table

Load data from a CSV file on Cloud Storage to a clustered table.

View in documentation

Column-based time partitioning

Create a table that uses column-based time partitioning.

Copy a single-source table

Copy a single-source table to a given destination.

View in documentation

Copy a table

Copy a table with customer-managed encryption keys.

View in documentation

Copy multiple tables

Copy multiple source tables to a given destination.

View in documentation

Create a client with a service account key file

Create a BigQuery client using a service account key file.

Create a client with application default credentials

Create a BigQuery client using application default credentials.

View in documentation

Create a dataset

Create an AWS dataset.

Create a dataset with the BigQuery API

Create a dataset named my_new_dataset.

View in documentation

Create a job

Run a BigQuery job (query, load, extract, or copy) in a specified location with additional configuration.

View in documentation

Create a model

Create a model within an existing dataset.

Create a routine

Create a routine within an existing dataset.

Create a table

Create a table with customer-managed encryption keys.

View in documentation

Create a view

Create a view within a dataset.

View in documentation

Create a view with DDL

Create a view using a DDL query.

Create an authorized view

Create an authorized view using GitHub public data.

View in documentation

Create an external table

Create an external AWS table.

Create an integer-range partitioned table

Create a new integer-range partitioned table in an existing dataset.

Create credentials with scopes

Create credentials with Drive and BigQuery API scopes.

View in documentation

Delete a dataset

Delete a dataset from a project.

View in documentation

Delete a dataset and its contents

Delete a dataset and its contents from a project.

Delete a label from a dataset

Remove a label from a dataset.

View in documentation

Delete a label from a table

Remove a label from a table.

View in documentation

Delete a model

Delete a model from a dataset.

View in documentation

Delete a routine

Delete a routine from a dataset.

Delete a table

Delete a table from a dataset.

View in documentation

Disable query cache

Run a query that disables use of the query cache.

View in documentation

Download public table data to DataFrame

Use the BigQuery Storage API to speed up downloads of large tables to a DataFrame.

Download public table data to DataFrame from the sandbox

Use the BigQuery Storage API to download query results to a DataFrame.

Download query results to DataFrame

Get query results as a pandas DataFrame.

Enable large results

Run a legacy SQL query that enables large result sets.

View in documentation

Export a table to a compressed file

Export a table to a compressed file in a Cloud Storage bucket.

View in documentation

Export a table to a CSV file

Export a table to a CSV file in a Cloud Storage bucket.

View in documentation

Export a table to a JSON file

Export a table to a newline-delimited JSON file in a Cloud Storage bucket.

Export query results

Export query results to an Amazon S3 bucket.

Get a model

Get a model resource for a given model ID.

View in documentation

Get a routine

Get a routine resource for a given routine ID.

Get dataset labels

Retrieve the labels of a dataset for a given dataset ID.

View in documentation

Get dataset properties

Retrieve the properties of a dataset.

View in documentation

Get job properties

Retrieve the properties of a job for a given job ID.

View in documentation

Get table labels

Retrieve the labels of a table for a given table ID.

View in documentation

Get table properties

Retrieve the properties of a table for a given table ID.

View in documentation

Get total rows

Run a query and get the total number of rows.

Get view properties

Retrieve the properties of a view for a given view ID.

View in documentation

Grant view access

Authorize and grant access to a view.

View in documentation

Import a local file

Import a local file into a table.

View in documentation

Insert rows with no IDs

Insert rows into a table without supplying row IDs.

View in documentation

List by label

List datasets, filtering by labels.

View in documentation

List datasets

List all existing datasets in a project.

View in documentation

List jobs

List all jobs in a project.

View in documentation

List models

List all existing models in a dataset.

View in documentation

List models using streaming

List all existing models in a dataset using streaming.

List routines

List all existing routines in a dataset.

Load a CSV file

Load a CSV file from Cloud Storage using an explicit schema.

View in documentation

Load a CSV file to replace a table

Load a CSV file from Cloud Storage, replacing a table.

View in documentation

Load a CSV file with autodetect schema

Load a CSV file from Cloud Storage using an autodetected schema.

Load a JSON file

Load a JSON file from Cloud Storage using an explicit schema.

Load a JSON file to replace a table

Load a JSON file from Cloud Storage, replacing a table.

View in documentation

Load a JSON file with autodetect schema

Load a JSON file from Cloud Storage using an autodetected schema.

View in documentation

Load a Parquet file

Load a Parquet file from Cloud Storage into a new table.

Load a Parquet file to replace a table

Load a Parquet file from Cloud Storage, replacing a table.

Load a table in JSON format

Load a table from Cloud Storage in JSON format using customer-managed encryption keys.

View in documentation

Load an Avro file

Load an Avro file from Cloud Storage into a new table.

View in documentation

Load an Avro file to replace a table

Load an Avro file from Cloud Storage, replacing existing table data.

View in documentation

Load an ORC file

Load an ORC file from Cloud Storage into a new table.

View in documentation

Load an ORC file to replace a table

Load an ORC file from Cloud Storage, replacing a table.

View in documentation

Load data from DataFrame

Load the contents of a pandas DataFrame into a table.

Load data into a column-based time partitioning table

Load data into a table that uses column-based time partitioning.

Named parameters

Run a query with named parameters.

View in documentation

Named parameters and provided types

Run a query with named parameters and provided parameter types.

Nested repeated schema

Specify nested and repeated columns in schema.

Positional parameters

Run a query with positional parameters.

View in documentation

Positional parameters and provided types

Run a query with positional parameters and provided parameter types.

Preview table data

Retrieve selected row data from a table.

Query a clustered table

Query a table that has a clustering specification.

View in documentation

Query a column-based time-partitioned table

Query a table that uses column-based time partitioning.

Query a table

Query a table with customer-managed encryption keys.

View in documentation

Query an external data source

Query an external data source using a permanent table.

Query Bigtable using a permanent table

Query data from a Bigtable instance by creating a permanent table.

View in documentation

Query Bigtable using a temporary table

Query data from a Bigtable instance by creating a temporary table.

View in documentation

Query Cloud Storage with a permanent table

Query data from a file on Cloud Storage by creating a permanent table.

View in documentation

Query Cloud Storage with a temporary table

Query data from a file on Cloud Storage by creating a temporary table.

View in documentation

Query script

Run a query script.

Query Sheets with a permanent table

Query data from a Google Sheets file by creating a permanent table.

View in documentation

Query Sheets with a temporary table

Query data from a Google Sheets file by creating a temporary table.

View in documentation

Read from the BigQuery Storage API

Read data from a table using a read stream.

Relax a column

Change columns from required to nullable.

View in documentation

Relax a column in a load append job

Change a column from required to nullable in a load append job.

View in documentation

Relax a column in a query append job

Change a column from required to nullable in a query append job.

View in documentation

Run a query with batch priority

Run a query job using batch priority.

Run a query with legacy SQL

Run a query using legacy SQL.

View in documentation

Save query results

Run a query that saves its results to a permanent table.

Set user agent

Set a custom user agent on a BigQuery client.

Streaming insert

Insert simple rows into a table using the streaming API (insertAll).

View in documentation

Streaming insert with complex data types

Insert data of various BigQuery-supported types into a table.

Struct parameters

Run a query with struct parameters.

View in documentation

Table exists

A function to check whether a table exists.

Timestamp parameters

Run a query with timestamp parameters.

View in documentation

Update a description

Update the description of an existing dataset resource.

View in documentation

Update a label

Update an existing label on a dataset.

Update a model description

Update a model's description property for a given model ID.

View in documentation

Update a query

Update a view's query.

View in documentation

Update a routine

Update an existing routine resource.

Update a table

Update a table with customer-managed encryption keys.

View in documentation

Update a table description

Update a table's description.

Update an expiration time

Update a table's expiration time.

Update default table expiration times

Update a dataset's default table expiration times.

View in documentation

Update the require partition filter

Update the require partition filter on a table.

View in documentation

Write to destination table

Run a query on the natality public dataset and write the results to a destination table.