Client(
project=None,
credentials=None,
_http=None,
location=None,
default_query_job_config=None,
default_load_job_config=None,
client_info=None,
client_options=None,
)
Client to bundle configuration needed for API requests.
Parameters

| Name | Description |
| --- | --- |
| `project` | `Optional[str]`: Project ID for the project which the client acts on behalf of. Will be passed when creating a dataset / job. If not passed, falls back to the default inferred from the environment. |
| `credentials` | `Optional[google.auth.credentials.Credentials]`: The OAuth2 credentials to use for this client. If not passed (and if no `_http` object is passed), falls back to the default inferred from the environment. |
| `_http` | `Optional[requests.Session]`: HTTP object to make requests. Can be any object that defines `request()` with the same interface as `requests.Session.request`. |
| `location` | `Optional[str]`: Default location for jobs / datasets / tables. |
| `default_query_job_config` | `Optional[google.cloud.bigquery.job.QueryJobConfig]`: Default `QueryJobConfig`. Will be merged into job configs passed into the `query` method. |
| `default_load_job_config` | `Optional[google.cloud.bigquery.job.LoadJobConfig]`: Default `LoadJobConfig`. Will be merged into job configs passed into the `load_table_*` methods. |
| `client_info` | `Optional[google.api_core.client_info.ClientInfo]`: The client info used to send a user-agent string along with API requests. If `None`, then default info will be used. |
| `client_options` | `Optional[Union[google.api_core.client_options.ClientOptions, Dict]]`: Client options used to set user options on the client. The API endpoint should be set through `client_options`. |
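A minimal construction sketch (assuming Application Default Credentials are configured; the project ID and location below are placeholders, not values defined by this library):

```python
from google.cloud import bigquery

# With no arguments, project and credentials are inferred from the
# environment. "my-project" and "US" are placeholder values.
client = bigquery.Client(project="my-project", location="US")
```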
Properties

default_load_job_config
Default `LoadJobConfig`. Will be merged into job configs passed into the `load_table_*` methods.

default_query_job_config
Default `QueryJobConfig` or `None`. Will be merged into job configs passed into the `query` or `query_and_wait` methods.

location
Default location for jobs / datasets / tables.
Methods
__getstate__
__getstate__()
Explicitly state that clients are not pickleable.
cancel_job
cancel_job(job_id: str, project: typing.Optional[str] = None, location: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> typing.Union[google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob]
Attempt to cancel a job from a job ID.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/cancel
Parameters

| Name | Description |
| --- | --- |
| `job_id` | `Union[str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob]`: Job identifier. |
| `project` | `Optional[str]`: ID of the project which owns the job (defaults to the client's project). |
| `location` | `Optional[str]`: Location where the job was run. Ignored if `job_id` is a job object. |

Returns

| Type | Description |
| --- | --- |
| `Union[google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob]` | Job instance, based on the resource returned by the API. |
close
close()
Close the underlying transport objects, releasing system resources.
copy_table
copy_table(sources: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, typing.Sequence[typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]]], destination: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, job_config: typing.Optional[google.cloud.bigquery.job.copy_.CopyJobConfig] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.copy_.CopyJob
Copy one or more tables to another table.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationtablecopy
Parameters

| Name | Description |
| --- | --- |
| `sources` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[Union[Table, TableReference, TableListItem, str]]]`: Table or tables to be copied. |
| `destination` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: Table into which data is to be copied. |
| `job_id` | `Optional[str]`: The ID of the job. |
| `job_id_prefix` | `Optional[str]`: The user-provided prefix for a randomly generated job ID. This parameter will be ignored if a `job_id` is also given. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `job_config` is not an instance of the `CopyJobConfig` class. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.CopyJob` | A new copy job instance. |
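A brief usage sketch; the table IDs below are hypothetical, and standard SQL table ID strings are used for both source and destination:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table IDs.
job = client.copy_table(
    "my-project.my_dataset.source_table",
    "my-project.my_dataset.destination_table",
)
job.result()  # Wait for the copy job to complete.
```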
create_dataset
create_dataset(dataset: typing.Union[str, google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem], exists_ok: bool = False, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.dataset.Dataset
API call: create the dataset via a POST request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/insert
Parameters

| Name | Description |
| --- | --- |
| `dataset` | `Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str]`: A `Dataset` to create. If `dataset` is a reference, an empty dataset is created with the specified ID and the client's default location. |
| `exists_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "already exists" errors when creating the dataset. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Exceptions

| Type | Description |
| --- | --- |
| `google.cloud.exceptions.Conflict` | If the dataset already exists. |

Example:

    >>> from google.cloud import bigquery
    >>> client = bigquery.Client()
    >>> dataset = bigquery.Dataset('my_project.my_dataset')
    >>> dataset = client.create_dataset(dataset)

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.dataset.Dataset` | A new `Dataset` returned from the API. |
create_job
create_job(job_config: dict, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> typing.Union[google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob]
Create a new job.
Parameters

| Name | Description |
| --- | --- |
| `job_config` | `dict`: A configuration job representation as returned from the API. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `Union[google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob]` | A new job instance. |
create_routine
create_routine(routine: google.cloud.bigquery.routine.routine.Routine, exists_ok: bool = False, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.routine.routine.Routine
[Beta] Create a routine via a POST request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/routines/insert
Parameters

| Name | Description |
| --- | --- |
| `routine` | `google.cloud.bigquery.routine.Routine`: A `Routine` to create. The dataset that the routine belongs to must already exist. |
| `exists_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "already exists" errors when creating the routine. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Exceptions

| Type | Description |
| --- | --- |
| `google.cloud.exceptions.Conflict` | If the routine already exists. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.routine.Routine` | A new `Routine` returned from the service. |
create_table
create_table(table: typing.Union[str, google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem], exists_ok: bool = False, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.Table
API call: create a table via a POST request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: A `Table` to create. If `table` is a reference, an empty table is created with the specified ID. The dataset that the table belongs to must already exist. |
| `exists_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "already exists" errors when creating the table. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Exceptions

| Type | Description |
| --- | --- |
| `google.cloud.exceptions.Conflict` | If the table already exists. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.table.Table` | A new `Table` returned from the service. |
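A brief sketch of creating a table with an explicit schema; the table ID and field names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID and schema fields.
table = bigquery.Table(
    "my-project.my_dataset.my_table",
    schema=[
        bigquery.SchemaField("name", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("age", "INTEGER"),
    ],
)
table = client.create_table(table, exists_ok=True)
```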
dataset
dataset(
dataset_id: str, project: typing.Optional[str] = None
) -> google.cloud.bigquery.dataset.DatasetReference
Deprecated: Construct a reference to a dataset.
As of google-cloud-bigquery version 1.7.0, all client methods that take a `DatasetReference` or `TableReference` also take a string in standard SQL format, e.g. `project.dataset_id` or `project.dataset_id.table_id`.

Parameters

| Name | Description |
| --- | --- |
| `dataset_id` | `str`: ID of the dataset. |
| `project` | `Optional[str]`: Project ID for the dataset (defaults to the project of the client). |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.dataset.DatasetReference` | A new `DatasetReference` instance. |
delete_dataset
delete_dataset(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], delete_contents: bool = False, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None
Delete a dataset.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/delete
Parameters

| Name | Description |
| --- | --- |
| `dataset` | `Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str]`: A reference to the dataset to delete. If a string is passed in, this method attempts to create a dataset reference from a string using `from_string`. |
| `delete_contents` | `Optional[bool]`: If `True`, delete all the tables in the dataset. If `False` and the dataset contains tables, the request will fail. Default is `False`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `not_found_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "not found" errors when deleting the dataset. |
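A brief sketch; the dataset ID is a placeholder:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Remove the dataset and any tables it contains; ignore a missing dataset.
client.delete_dataset(
    "my-project.my_dataset",  # placeholder dataset ID
    delete_contents=True,
    not_found_ok=True,
)
```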
delete_job_metadata
delete_job_metadata(job_id: typing.Union[str, google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob], project: typing.Optional[str] = None, location: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, not_found_ok: bool = False)
[Beta] Delete job metadata from job history.
Note: This does not stop a running job. Use `cancel_job` instead.
Parameters

| Name | Description |
| --- | --- |
| `job_id` | `Union[str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob]`: Job or job identifier. |
| `project` | `Optional[str]`: ID of the project which owns the job (defaults to the client's project). |
| `location` | `Optional[str]`: Location where the job was run. Ignored if `job_id` is a job object. |
delete_model
delete_model(model: typing.Union[google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None
[Beta] Delete a model
See https://cloud.google.com/bigquery/docs/reference/rest/v2/models/delete
Parameters

| Name | Description |
| --- | --- |
| `model` | `Union[google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str]`: A reference to the model to delete. If a string is passed in, this method attempts to create a model reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `not_found_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "not found" errors when deleting the model. |
delete_routine
delete_routine(routine: typing.Union[google.cloud.bigquery.routine.routine.Routine, google.cloud.bigquery.routine.routine.RoutineReference, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None
[Beta] Delete a routine.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/routines/delete
Parameters

| Name | Description |
| --- | --- |
| `routine` | `Union[google.cloud.bigquery.routine.Routine, google.cloud.bigquery.routine.RoutineReference, str]`: A reference to the routine to delete. If a string is passed in, this method attempts to create a routine reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `not_found_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "not found" errors when deleting the routine. |
delete_table
delete_table(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None
Delete a table
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/delete
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: A reference to the table to delete. If a string is passed in, this method attempts to create a table reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `not_found_ok` | `Optional[bool]`: Defaults to `False`. If `True`, ignore "not found" errors when deleting the table. |
extract_table
extract_table(source: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str], destination_uris: typing.Union[str, typing.Sequence[str]], job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, job_config: typing.Optional[google.cloud.bigquery.job.extract.ExtractJobConfig] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, source_type: str = 'Table') -> google.cloud.bigquery.job.extract.ExtractJob
Start a job to extract a table into Cloud Storage files.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationextract
Parameters

| Name | Description |
| --- | --- |
| `source` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str]`: Table or Model to be extracted. |
| `destination_uris` | `Union[str, Sequence[str]]`: URIs of Cloud Storage file(s) into which table data is to be extracted; in format `gs://<bucket_name>/<object_name_or_glob>`. |
| `job_id` | `Optional[str]`: The ID of the job. |
| `job_id_prefix` | `Optional[str]`: The user-provided prefix for a randomly generated job ID. This parameter will be ignored if a `job_id` is also given. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `job_config` is not an instance of the `ExtractJobConfig` class. |
| `ValueError` | If `source_type` is not among `Table`, `Model`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.ExtractJob` | A new extract job instance. |
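A brief sketch of exporting a table to Cloud Storage; the table ID and bucket name are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID and bucket; the "*" wildcard lets BigQuery
# shard large exports across multiple files.
job = client.extract_table(
    "my-project.my_dataset.my_table",
    "gs://my-bucket/exports/my_table-*.csv",
)
job.result()  # Wait for the extract job to complete.
```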
from_service_account_info
from_service_account_info(info, *args, **kwargs)
Factory to retrieve JSON credentials while creating client.
Parameters

| Name | Description |
| --- | --- |
| `info` | `dict`: The JSON object with a private key and other credentials information (downloaded from the Google APIs console). |
| `args` | `tuple`: Remaining positional arguments to pass to the constructor. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If there is a conflict with the kwargs and the credentials created by the factory. |

Returns

| Type | Description |
| --- | --- |
| `Client` | The client created with the retrieved JSON credentials. |
from_service_account_json
from_service_account_json(json_credentials_path, *args, **kwargs)
Factory to retrieve JSON credentials while creating client.
Parameters

| Name | Description |
| --- | --- |
| `json_credentials_path` | `str`: The path to a private key file (this file was given to you when you created the service account). This file must contain a JSON object with a private key and other credentials information (downloaded from the Google APIs console). |
| `args` | `tuple`: Remaining positional arguments to pass to the constructor. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If there is a conflict with the kwargs and the credentials created by the factory. |

Returns

| Type | Description |
| --- | --- |
| `Client` | The client created with the retrieved JSON credentials. |
get_dataset
get_dataset(dataset_ref: typing.Union[google.cloud.bigquery.dataset.DatasetReference, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.dataset.Dataset
Fetch the dataset referenced by `dataset_ref`.

Parameters

| Name | Description |
| --- | --- |
| `dataset_ref` | `Union[google.cloud.bigquery.dataset.DatasetReference, str]`: A reference to the dataset to fetch from the BigQuery API. If a string is passed in, this method attempts to create a dataset reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.dataset.Dataset` | A `Dataset` instance. |
get_job
get_job(job_id: typing.Union[str, google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob], project: typing.Optional[str] = None, location: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> typing.Union[google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob, google.cloud.bigquery.job.base.UnknownJob]
Fetch a job for the project associated with this client.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get
Parameters

| Name | Description |
| --- | --- |
| `job_id` | `Union[str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob]`: Job identifier. |
| `project` | `Optional[str]`: ID of the project which owns the job (defaults to the client's project). |
| `location` | `Optional[str]`: Location where the job was run. Ignored if `job_id` is a job object. |

Returns

| Type | Description |
| --- | --- |
| `Union[job.LoadJob, job.CopyJob, job.ExtractJob, job.QueryJob, job.UnknownJob]` | Job instance, based on the resource returned by the API. |
get_model
get_model(model_ref: typing.Union[google.cloud.bigquery.model.ModelReference, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.model.Model
[Beta] Fetch the model referenced by `model_ref`.

Parameters

| Name | Description |
| --- | --- |
| `model_ref` | `Union[google.cloud.bigquery.model.ModelReference, str]`: A reference to the model to fetch from the BigQuery API. If a string is passed in, this method attempts to create a model reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.model.Model` | A `Model` instance. |
get_routine
get_routine(routine_ref: typing.Union[google.cloud.bigquery.routine.routine.Routine, google.cloud.bigquery.routine.routine.RoutineReference, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.routine.routine.Routine
[Beta] Get the routine referenced by `routine_ref`.

Parameters

| Name | Description |
| --- | --- |
| `routine_ref` | `Union[google.cloud.bigquery.routine.Routine, google.cloud.bigquery.routine.RoutineReference, str]`: A reference to the routine to fetch from the BigQuery API. If a string is passed in, this method attempts to create a reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the API call. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.routine.Routine` | A `Routine` instance. |
get_service_account_email
get_service_account_email(project: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> str
Get the email address of the project's BigQuery service account.

Parameters

| Name | Description |
| --- | --- |
| `project` | `Optional[str]`: Project ID to use for retrieving the service account email. Defaults to the client's project. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `str` | Service account email address. |

Example:

    >>> from google.cloud import bigquery
    >>> client = bigquery.Client()
    >>> client.get_service_account_email()
    my_service_account@my-project.iam.gserviceaccount.com
get_table
get_table(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.Table
Fetch the table referenced by `table`.

Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: A reference to the table to fetch from the BigQuery API. If a string is passed in, this method attempts to create a table reference from a string using `from_string`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.table.Table` | A `Table` instance. |
insert_rows
insert_rows(
table: typing.Union[
google.cloud.bigquery.table.Table,
google.cloud.bigquery.table.TableReference,
str,
],
rows: typing.Union[
typing.Iterable[typing.Tuple], typing.Iterable[typing.Mapping[str, typing.Any]]
],
selected_fields: typing.Optional[
typing.Sequence[google.cloud.bigquery.schema.SchemaField]
] = None,
**kwargs
) -> typing.Sequence[typing.Dict[str, typing.Any]]
Insert rows into a table via the streaming API.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
BigQuery will reject insertAll payloads that exceed a defined limit (10MB). Additionally, if a payload vastly exceeds this limit, the request is rejected by the intermediate architecture, which returns a 413 (Payload Too Large) status code.
See https://cloud.google.com/bigquery/quotas#streaming_inserts
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str]`: The destination table for the row data, or a reference to it. |
| `rows` | `Union[Sequence[Tuple], Sequence[Dict]]`: Row data to be inserted. If a list of tuples is given, each tuple should contain data for each schema field on the current table and in the same order as the schema fields. If a list of dictionaries is given, the keys must include all required fields in the schema. Keys which do not correspond to a field in the schema are ignored. |
| `selected_fields` | `Sequence[google.cloud.bigquery.schema.SchemaField]`: The fields to return. Required if `table` is a `TableReference`. |
| `kwargs` | `dict`: Keyword arguments to `insert_rows_json`. |

Exceptions

| Type | Description |
| --- | --- |
| `ValueError` | If the table's schema is not set or `rows` is not a `Sequence`. |

Returns

| Type | Description |
| --- | --- |
| `Sequence[Mapping]` | One mapping per row with insert errors: the "index" key identifies the row, and the "errors" key contains a list of the mappings describing one or more problems with the row. |
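A brief sketch; the table ID and row values are placeholders. Fetching the table first supplies the schema that `insert_rows` needs:

```python
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.my_table")  # placeholder ID

rows = [
    {"name": "Ada", "age": 36},    # dicts keyed by schema field names
    {"name": "Grace", "age": 45},  # (tuples in schema order also work)
]
errors = client.insert_rows(table, rows)  # returns [] on success
if errors:
    print("Insert errors:", errors)
```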
insert_rows_from_dataframe
insert_rows_from_dataframe(
table: typing.Union[
google.cloud.bigquery.table.Table,
google.cloud.bigquery.table.TableReference,
str,
],
dataframe,
selected_fields: typing.Optional[
typing.Sequence[google.cloud.bigquery.schema.SchemaField]
] = None,
chunk_size: int = 500,
**kwargs: typing.Dict
) -> typing.Sequence[typing.Sequence[dict]]
Insert rows into a table from a dataframe via the streaming API.
BigQuery will reject insertAll payloads that exceed a defined limit (10MB). Additionally, if a payload vastly exceeds this limit, the request is rejected by the intermediate architecture, which returns a 413 (Payload Too Large) status code.
See https://cloud.google.com/bigquery/quotas#streaming_inserts
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str]`: The destination table for the row data, or a reference to it. |
| `dataframe` | `pandas.DataFrame`: A `DataFrame` containing the data to load. |
| `selected_fields` | `Sequence[google.cloud.bigquery.schema.SchemaField]`: The fields to return. Required if `table` is a `TableReference`. |
| `chunk_size` | `int`: The number of rows to stream in a single chunk. Must be positive. |
| `kwargs` | `Dict`: Keyword arguments to `insert_rows_json`. |

Exceptions

| Type | Description |
| --- | --- |
| `ValueError` | If the table's schema is not set. |

Returns

| Type | Description |
| --- | --- |
| `Sequence[Sequence[Mapping]]` | A list with insert errors for each insert chunk. Each element is a list containing one mapping per row with insert errors: the "index" key identifies the row, and the "errors" key contains a list of the mappings describing one or more problems with the row. |
insert_rows_json
insert_rows_json(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], json_rows: typing.Sequence[typing.Mapping[str, typing.Any]], row_ids: typing.Optional[typing.Union[typing.Iterable[typing.Optional[str]], google.cloud.bigquery.enums.AutoRowIDs]] = AutoRowIDs.GENERATE_UUID, skip_invalid_rows: typing.Optional[bool] = None, ignore_unknown_values: typing.Optional[bool] = None, template_suffix: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> typing.Sequence[dict]
Insert rows into a table without applying local type conversions.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
BigQuery will reject insertAll payloads that exceed a defined limit (10MB). Additionally, if a payload vastly exceeds this limit, the request is rejected by the intermediate architecture, which returns a 413 (Payload Too Large) status code.
See https://cloud.google.com/bigquery/quotas#streaming_inserts
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: The destination table for the row data, or a reference to it. |
| `json_rows` | `Sequence[Dict]`: Row data to be inserted. Keys must match the table schema fields and values must be JSON-compatible representations. |
| `row_ids` | `Union[Iterable[str], AutoRowIDs, None]`: Unique IDs, one per row being inserted. An ID can also be `None`, indicating that an explicit insert ID should not be included with the row. |
| `skip_invalid_rows` | `Optional[bool]`: Insert all valid rows of a request, even if invalid rows exist. The default value is `False`, which causes the entire request to fail if any invalid rows exist. |
| `ignore_unknown_values` | `Optional[bool]`: Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is `False`, which treats unknown values as errors. |
| `template_suffix` | `Optional[str]`: Treat `name` as a template table and provide a suffix. BigQuery will create the table based on the schema of the template table. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `json_rows` is not a `Sequence`. |

Returns

| Type | Description |
| --- | --- |
| `Sequence[Mapping]` | One mapping per row with insert errors: the "index" key identifies the row, and the "errors" key contains a list of the mappings describing one or more problems with the row. |
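A brief sketch; the table ID and rows are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

rows = [{"name": "Ada", "age": 36}]  # values already JSON-compatible
errors = client.insert_rows_json(
    "my-project.my_dataset.my_table",  # placeholder table ID
    rows,
    ignore_unknown_values=True,
)
assert errors == [], errors
```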
job_from_resource
job_from_resource(
resource: dict,
) -> typing.Union[
google.cloud.bigquery.job.copy_.CopyJob,
google.cloud.bigquery.job.extract.ExtractJob,
google.cloud.bigquery.job.load.LoadJob,
google.cloud.bigquery.job.query.QueryJob,
google.cloud.bigquery.job.base.UnknownJob,
]
Detect correct job type from resource and instantiate.
Parameters

| Name | Description |
| --- | --- |
| `resource` | `Dict`: One job resource from the API response. |

Returns

| Type | Description |
| --- | --- |
| `Union[job.CopyJob, job.ExtractJob, job.LoadJob, job.QueryJob, job.UnknownJob]` | The job instance, constructed via the resource. |
list_datasets
list_datasets(project: typing.Optional[str] = None, include_all: bool = False, filter: typing.Optional[str] = None, max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator
List datasets for the project associated with this client.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/list
Parameters

| Name | Description |
| --- | --- |
| `project` | `Optional[str]`: Project ID to use for retrieving datasets. Defaults to the client's project. |
| `include_all` | `Optional[bool]`: `True` if results include hidden datasets. Defaults to `False`. |
| `filter` | `Optional[str]`: An expression for filtering the results by label. For syntax, see https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/list#body.QUERY_PARAMETERS.filter |
| `max_results` | `Optional[int]`: Maximum number of datasets to return. |
| `page_token` | `Optional[str]`: Token representing a cursor into the datasets. If not passed, the API will return the first page of datasets. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `page_size` | `Optional[int]`: Maximum number of datasets to return per page. |

Returns

| Type | Description |
| --- | --- |
| `google.api_core.page_iterator.Iterator` | Iterator of `DatasetListItem` associated with the project. |
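A brief sketch of consuming the returned iterator:

```python
from google.cloud import bigquery

client = bigquery.Client()

for dataset in client.list_datasets():  # yields DatasetListItem objects
    print(dataset.dataset_id)
```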
list_jobs
list_jobs(project: typing.Optional[str] = None, parent_job: typing.Optional[typing.Union[google.cloud.bigquery.job.query.QueryJob, str]] = None, max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, all_users: typing.Optional[bool] = None, state_filter: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, min_creation_time: typing.Optional[datetime.datetime] = None, max_creation_time: typing.Optional[datetime.datetime] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator
List jobs for the project associated with this client.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/list
Parameters

| Name | Description |
| --- | --- |
| `project` | `Optional[str]`: Project ID to use for retrieving jobs. Defaults to the client's project. |
| `parent_job` | `Optional[Union[google.cloud.bigquery.job.QueryJob, str]]`: If set, retrieve only child jobs of the specified parent. |
| `max_results` | `Optional[int]`: Maximum number of jobs to return. |
| `page_token` | `Optional[str]`: Opaque marker for the next "page" of jobs. If not passed, the API will return the first page of jobs. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `all_users` | `Optional[bool]`: If true, include jobs owned by all users in the project. Defaults to `False`. |
| `state_filter` | `Optional[str]`: If set, include only jobs matching the given state. One of: `"done"`, `"pending"`, `"running"`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `min_creation_time` | `Optional[datetime.datetime]`: Min value for job creation time. If set, only jobs created after or at this timestamp are returned. If the datetime has no time zone, UTC is assumed. |
| `max_creation_time` | `Optional[datetime.datetime]`: Max value for job creation time. If set, only jobs created before or at this timestamp are returned. If the datetime has no time zone, UTC is assumed. |
| `page_size` | `Optional[int]`: Maximum number of jobs to return per page. |

Returns

| Type | Description |
| --- | --- |
| `google.api_core.page_iterator.Iterator` | Iterable of job instances. |
list_models
list_models(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator
[Beta] List models in the dataset.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/models/list
Parameters

| Name | Description |
| --- | --- |
| `dataset` | `Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str]`: A reference to the dataset whose models to list from the BigQuery API. If a string is passed in, this method attempts to create a dataset reference from a string using `from_string`. |
| `max_results` | `Optional[int]`: Maximum number of models to return. Defaults to a value set by the API. |
| `page_token` | `Optional[str]`: Token representing a cursor into the models. If not passed, the API will return the first page of models. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `page_size` | `Optional[int]`: Maximum number of models to return per page. Defaults to a value set by the API. |

Returns

| Type | Description |
| --- | --- |
| `google.api_core.page_iterator.Iterator` | Iterator of `Model` contained within the requested dataset. |
list_partitions
list_partitions(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> typing.Sequence[str]
List the partitions in a table.
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: The table or reference from which to get partition info. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `List[str]` | A list of the partition IDs present in the partitioned table. |
list_projects
list_projects(max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator
List projects for the project associated with this client.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/projects/list
Parameters

| Name | Description |
| --- | --- |
| `max_results` | `Optional[int]`: Maximum number of projects to return. Defaults to a value set by the API. |
| `page_token` | `Optional[str]`: Token representing a cursor into the projects. If not passed, the API will return the first page of projects. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `page_size` | `Optional[int]`: Maximum number of projects to return in each page. Defaults to a value set by the API. |

Returns

| Type | Description |
| --- | --- |
| `google.api_core.page_iterator.Iterator` | Iterator of `Project` accessible to the current client. |
list_routines
list_routines(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator
[Beta] List routines in the dataset.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/routines/list
Parameters

| Name | Description |
| --- | --- |
| `dataset` | `Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str]`: A reference to the dataset whose routines to list from the BigQuery API. If a string is passed in, this method attempts to create a dataset reference from a string using `from_string`. |
| `max_results` | `Optional[int]`: Maximum number of routines to return. Defaults to a value set by the API. |
| `page_token` | `Optional[str]`: Token representing a cursor into the routines. If not passed, the API will return the first page of routines. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `page_size` | `Optional[int]`: Maximum number of routines to return per page. Defaults to a value set by the API. |

Returns

| Type | Description |
| --- | --- |
| `google.api_core.page_iterator.Iterator` | Iterator of all `Routine`s contained within the requested dataset, limited by `max_results`. |
list_rows
list_rows(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.table.TableReference, str], selected_fields: typing.Optional[typing.Sequence[google.cloud.bigquery.schema.SchemaField]] = None, max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, start_index: typing.Optional[int] = None, page_size: typing.Optional[int] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.RowIterator
List the rows of the table.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/list
Parameters

| Name | Description |
| --- | --- |
| `table` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.table.TableReference, str]`: The table to list, or a reference to it. When the table object does not contain a schema and `selected_fields` is not supplied, this method calls `get_table` to fetch the table schema. |
| `selected_fields` | `Sequence[google.cloud.bigquery.schema.SchemaField]`: The fields to return. If not supplied, data for all columns are downloaded. |
| `max_results` | `Optional[int]`: Maximum number of rows to return. |
| `page_token` | `Optional[str]`: Token representing a cursor into the table's rows. If not passed, the API will return the first page of the rows. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `start_index` | `Optional[int]`: The zero-based index of the starting row to read. |
| `page_size` | `Optional[int]`: The maximum number of rows in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.table.RowIterator` | Iterator of row data `Row`s. During each page, the iterator will have the `total_rows` attribute set, which counts the total number of rows **in the table** (this is distinct from the total number of rows in the current page: `iterator.page.num_items`). |
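A brief sketch; the table ID and column name are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

table = client.get_table("my-project.my_dataset.my_table")  # placeholder ID
# Restrict the download to a single (placeholder) column and ten rows.
name_fields = [f for f in table.schema if f.name == "name"]
for row in client.list_rows(table, selected_fields=name_fields, max_results=10):
    print(row["name"])
```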
list_tables
list_tables(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator
List tables in the dataset.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/list
Parameters

| Name | Description |
| --- | --- |
| `dataset` | `Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str]`: A reference to the dataset whose tables to list from the BigQuery API. If a string is passed in, this method attempts to create a dataset reference from a string using `from_string`. |
| `max_results` | `Optional[int]`: Maximum number of tables to return. Defaults to a value set by the API. |
| `page_token` | `Optional[str]`: Token representing a cursor into the tables. If not passed, the API will return the first page of tables. The token marks the beginning of the iterator to be returned, and the value of the `page_token` can be accessed at `next_page_token` of the returned iterator. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `page_size` | `Optional[int]`: Maximum number of tables to return per page. Defaults to a value set by the API. |

Returns

| Type | Description |
| --- | --- |
| `google.api_core.page_iterator.Iterator` | Iterator of `TableListItem` contained within the requested dataset. |
load_table_from_dataframe
load_table_from_dataframe(
dataframe: pandas.DataFrame,
destination: typing.Union[
google.cloud.bigquery.table.Table,
google.cloud.bigquery.table.TableReference,
str,
],
num_retries: int = 6,
job_id: typing.Optional[str] = None,
job_id_prefix: typing.Optional[str] = None,
location: typing.Optional[str] = None,
project: typing.Optional[str] = None,
job_config: typing.Optional[google.cloud.bigquery.job.load.LoadJobConfig] = None,
parquet_compression: str = "snappy",
timeout: typing.Union[None, float, typing.Tuple[float, float]] = None,
) -> google.cloud.bigquery.job.load.LoadJob
Upload the contents of a table from a pandas DataFrame.

Similar to `load_table_from_uri`, this method creates, starts and returns a `LoadJob`.

Parameters

| Name | Description |
| --- | --- |
| `dataframe` | `pandas.DataFrame`: A `DataFrame` containing the data to load. |
| `destination` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str]`: The destination table to use for loading the data. If it is an existing table, the schema of the `DataFrame` must match the schema of the destination table. If the table does not yet exist, the schema is inferred from the `DataFrame`. |

Exceptions

| Type | Description |
| --- | --- |
| `ValueError` | If a usable parquet engine cannot be found. This method requires `pyarrow` to be installed. |
| `TypeError` | If `job_config` is not an instance of the `LoadJobConfig` class. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.LoadJob` | A new load job. |
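A brief sketch (assumes pandas and pyarrow are installed; the table ID is a placeholder):

```python
import pandas
from google.cloud import bigquery

client = bigquery.Client()

df = pandas.DataFrame({"name": ["Ada", "Grace"], "age": [36, 45]})
job = client.load_table_from_dataframe(
    df, "my-project.my_dataset.my_table"  # placeholder table ID
)
job.result()  # Wait for the load job to complete.
```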
load_table_from_file
load_table_from_file(
file_obj: typing.IO[bytes],
destination: typing.Union[
google.cloud.bigquery.table.Table,
google.cloud.bigquery.table.TableReference,
google.cloud.bigquery.table.TableListItem,
str,
],
rewind: bool = False,
size: typing.Optional[int] = None,
num_retries: int = 6,
job_id: typing.Optional[str] = None,
job_id_prefix: typing.Optional[str] = None,
location: typing.Optional[str] = None,
project: typing.Optional[str] = None,
job_config: typing.Optional[google.cloud.bigquery.job.load.LoadJobConfig] = None,
timeout: typing.Union[None, float, typing.Tuple[float, float]] = None,
) -> google.cloud.bigquery.job.load.LoadJob
Upload the contents of this table from a file-like object.
Similar to `load_table_from_uri`, this method creates, starts and returns a `LoadJob`.

Parameters

| Name | Description |
| --- | --- |
| `file_obj` | `typing.IO[bytes]`: A file handle opened in binary mode for reading. |
| `destination` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: Table into which data is to be loaded. If a string is passed in, this method attempts to create a table reference from a string using `from_string`. |
| `rewind` | `Optional[bool]`: If `True`, seek to the beginning of the file handle before reading the file. |
| `size` | `Optional[int]`: The number of bytes to read from the file handle. If `size` is `None`, the file is read to the end. |

Exceptions

| Type | Description |
| --- | --- |
| `ValueError` | If `size` is not passed in and can not be determined, or if the `file_obj` can be detected to be a file opened in text mode. |
| `TypeError` | If `job_config` is not an instance of the `LoadJobConfig` class. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.LoadJob` | A new load job. |
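A brief sketch of loading a local CSV file; the file name and table ID are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,
)
with open("data.csv", "rb") as f:  # must be opened in binary mode
    job = client.load_table_from_file(
        f, "my-project.my_dataset.my_table", job_config=job_config
    )
job.result()  # Wait for the load job to complete.
```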
load_table_from_json
load_table_from_json(
json_rows: typing.Iterable[typing.Dict[str, typing.Any]],
destination: typing.Union[
google.cloud.bigquery.table.Table,
google.cloud.bigquery.table.TableReference,
google.cloud.bigquery.table.TableListItem,
str,
],
num_retries: int = 6,
job_id: typing.Optional[str] = None,
job_id_prefix: typing.Optional[str] = None,
location: typing.Optional[str] = None,
project: typing.Optional[str] = None,
job_config: typing.Optional[google.cloud.bigquery.job.load.LoadJobConfig] = None,
timeout: typing.Union[None, float, typing.Tuple[float, float]] = None,
) -> google.cloud.bigquery.job.load.LoadJob
Upload the contents of a table from a JSON string or dict.
Parameters

| Name | Description |
| --- | --- |
| `json_rows` | `Iterable[Dict[str, Any]]`: Row data to be inserted. Keys must match the table schema fields and values must be JSON-compatible representations. (See the note below.) |
| `destination` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: Table into which data is to be loaded. If a string is passed in, this method attempts to create a table reference from a string using `from_string`. |
| `num_retries` | `Optional[int]`: Number of upload retries. |
| `job_id` | `Optional[str]`: Name of the job. |
| `job_id_prefix` | `Optional[str]`: The user-provided prefix for a randomly generated job ID. This parameter will be ignored if a `job_id` is also given. |

Note: If your data is already a newline-delimited JSON string, it is best to wrap it into a file-like object and pass it to `load_table_from_file`:

    import io
    from google.cloud import bigquery

    data = u'{"foo": "bar"}'
    data_as_file = io.StringIO(data)

    client = bigquery.Client()
    client.load_table_from_file(data_as_file, ...)

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `job_config` is not an instance of the `LoadJobConfig` class. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.LoadJob` | A new load job. |
load_table_from_uri
load_table_from_uri(source_uris: typing.Union[str, typing.Sequence[str]], destination: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, job_config: typing.Optional[google.cloud.bigquery.job.load.LoadJobConfig] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.load.LoadJob
Starts a job for loading data into a table from Cloud Storage.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationload
Parameters

| Name | Description |
| --- | --- |
| `source_uris` | `Union[str, Sequence[str]]`: URIs of data files to be loaded; in format `gs://<bucket_name>/<object_name_or_glob>`. |
| `destination` | `Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]`: Table into which data is to be loaded. If a string is passed in, this method attempts to create a table reference from a string using `from_string`. |
| `job_id` | `Optional[str]`: Name of the job. |
| `job_id_prefix` | `Optional[str]`: The user-provided prefix for a randomly generated job ID. This parameter will be ignored if a `job_id` is also given. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `job_config` is not an instance of the `LoadJobConfig` class. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.LoadJob` | A new load job. |
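A brief sketch of loading newline-delimited JSON from Cloud Storage; the URIs and table ID are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
)
job = client.load_table_from_uri(
    "gs://my-bucket/data/*.json",      # placeholder source URIs
    "my-project.my_dataset.my_table",  # placeholder destination
    job_config=job_config,
)
job.result()  # Wait for the load job to complete.
```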
query
query(query: str, job_config: typing.Optional[google.cloud.bigquery.job.query.QueryJobConfig] = None, job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None, job_retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, api_method: typing.Union[str, google.cloud.bigquery.enums.QueryApiMethod] = QueryApiMethod.INSERT) -> google.cloud.bigquery.job.query.QueryJob
Run a SQL query.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationquery
Parameters

| Name | Description |
| --- | --- |
| `query` | `str`: SQL query to be executed. Defaults to the standard SQL dialect. Use the `job_config` parameter to change dialects. |
| `job_config` | `Optional[google.cloud.bigquery.job.QueryJobConfig]`: Extra configuration options for the job. To override any options that were previously set in the `default_query_job_config` given to the `Client` constructor, manually set those options to `None`, or whatever value is preferred. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `job_config` is not an instance of the `QueryJobConfig` class, or if both `job_id` and a non-`None`, non-default `job_retry` are provided. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.job.QueryJob` | A new query job instance. |
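A brief sketch of the asynchronous pattern: `query` returns a `QueryJob` immediately, and `result()` blocks until it finishes:

```python
from google.cloud import bigquery

client = bigquery.Client()

job = client.query("SELECT 1 AS x")  # returns a QueryJob immediately
for row in job.result():             # result() blocks until the job finishes
    print(row["x"])
```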
query_and_wait
query_and_wait(query, *, job_config: typing.Optional[google.cloud.bigquery.job.query.QueryJobConfig] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, api_timeout: typing.Optional[float] = None, wait_timeout: typing.Optional[float] = None, retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, job_retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, page_size: typing.Optional[int] = None, max_results: typing.Optional[int] = None) -> google.cloud.bigquery.table.RowIterator
Run the query, wait for it to finish, and return the results.
While `jobCreationMode=JOB_CREATION_OPTIONAL` is in preview in the `jobs.query` REST API, use the default `jobCreationMode` unless the environment variable `QUERY_PREVIEW_ENABLED=true` is set. After `jobCreationMode` is GA, this method will always use `jobCreationMode=JOB_CREATION_OPTIONAL`. See: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query
Parameters

| Name | Description |
| --- | --- |
| `query` | `str`: SQL query to be executed. Defaults to the standard SQL dialect. Use the `job_config` parameter to change dialects. |
| `job_config` | `Optional[google.cloud.bigquery.job.QueryJobConfig]`: Extra configuration options for the job. To override any options that were previously set in the `default_query_job_config` given to the `Client` constructor, manually set those options to `None`, or whatever value is preferred. |
| `location` | `Optional[str]`: Location where to run the job. Must match the location of the table used in the query as well as the destination table. |
| `project` | `Optional[str]`: Project ID of the project of where to run the job. Defaults to the client's project. |
| `api_timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |
| `wait_timeout` | `Optional[float]`: The number of seconds to wait for the query to finish. If the query doesn't finish before this timeout, the client attempts to cancel the query. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. This only applies to making RPC calls. It isn't used to retry failed jobs. This has a reasonable default that should only be overridden with care. |
| `job_retry` | `Optional[google.api_core.retry.Retry]`: How to retry failed jobs. The default retries rate-limit-exceeded errors. Passing `None` disables job retry. Not all jobs can be retried. |
| `page_size` | `Optional[int]`: The maximum number of rows in each page of results from this request. Non-positive values are ignored. |
| `max_results` | `Optional[int]`: The maximum total number of rows from this request. |

Exceptions

| Type | Description |
| --- | --- |
| `TypeError` | If `job_config` is not an instance of the `QueryJobConfig` class. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.table.RowIterator` | Iterator of row data `Row`s. During each page, the iterator will have the `total_rows` attribute set, which counts the total number of rows **in the result set** (this is distinct from the total number of rows in the current page: `iterator.page.num_items`). If the query is a special query that produces no results, e.g. a DDL query, an `_EmptyRowIterator` instance is returned. |
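A brief sketch using a public dataset; `query_and_wait` blocks until the query finishes and returns a `RowIterator` directly, without an intermediate `QueryJob`:

```python
from google.cloud import bigquery

client = bigquery.Client()

rows = client.query_and_wait(
    "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` LIMIT 5"
)
for row in rows:
    print(row["name"])
```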
schema_from_json
schema_from_json(
file_or_path: PathType,
) -> typing.List[google.cloud.bigquery.schema.SchemaField]
Takes a file object or file path that contains JSON describing a table schema.

Returns

| Type | Description |
| --- | --- |
| `List[SchemaField]` | List of `SchemaField` objects. |
schema_to_json
schema_to_json(
schema_list: typing.Sequence[google.cloud.bigquery.schema.SchemaField],
destination: PathType,
)
Serializes a list of schema field objects as JSON to a file. The destination may be a file path or a file object.
update_dataset
update_dataset(dataset: google.cloud.bigquery.dataset.Dataset, fields: typing.Sequence[str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.dataset.Dataset
Change some fields of a dataset.
Use `fields` to specify which fields to update. At least one field must be provided. If a field is listed in `fields` and is `None` in `dataset`, it will be deleted.

If `dataset.etag` is not `None`, the update will only succeed if the dataset on the server has the same ETag. Thus reading a dataset with `get_dataset`, changing its fields, and then passing it to `update_dataset` will ensure that the changes will only be saved if no modifications to the dataset occurred since the read.

Parameters

| Name | Description |
| --- | --- |
| `dataset` | `google.cloud.bigquery.dataset.Dataset`: The dataset to update. |
| `fields` | `Sequence[str]`: The properties of `dataset` to change. These are strings corresponding to the properties of `Dataset`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: How to retry the RPC. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.dataset.Dataset` | The modified `Dataset` instance. |
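A brief sketch of the read-modify-write pattern described above; the dataset ID and description are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

dataset = client.get_dataset("my-project.my_dataset")  # placeholder ID
dataset.description = "Curated reporting tables"
# Only the fields named here are sent to the API; the ETag read above
# guards against clobbering concurrent modifications.
dataset = client.update_dataset(dataset, fields=["description"])
```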
update_model
update_model(model: google.cloud.bigquery.model.Model, fields: typing.Sequence[str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.model.Model
[Beta] Change some fields of a model.
Use `fields` to specify which fields to update. At least one field must be provided. If a field is listed in `fields` and is `None` in `model`, the field value will be deleted.

If `model.etag` is not `None`, the update will only succeed if the model on the server has the same ETag. Thus reading a model with `get_model`, changing its fields, and then passing it to `update_model` will ensure that the changes will only be saved if no modifications to the model occurred since the read.

Parameters

| Name | Description |
| --- | --- |
| `model` | `google.cloud.bigquery.model.Model`: The model to update. |
| `fields` | `Sequence[str]`: The properties of `model` to change. These are strings corresponding to the properties of `Model`. |
| `retry` | `Optional[google.api_core.retry.Retry]`: A description of how to retry the API call. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.model.Model` | The model resource returned from the API call. |
update_routine
update_routine(routine: google.cloud.bigquery.routine.routine.Routine, fields: typing.Sequence[str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.routine.routine.Routine
[Beta] Change some fields of a routine.
Use `fields` to specify which fields to update. At least one field must be provided. If a field is listed in `fields` and is `None` in `routine`, the field value will be deleted.

If `routine.etag` is not `None`, the update will only succeed if the resource on the server has the same ETag. Thus reading a routine with `get_routine`, changing its fields, and then passing it to this method will ensure that the changes will only be saved if no modifications to the resource occurred since the read.

Parameters

| Name | Description |
| --- | --- |
| `routine` | `google.cloud.bigquery.routine.Routine`: The routine to update. |
| `fields` | `Sequence[str]`: The fields of `routine` to change, spelled as the `Routine` properties. |
| `retry` | `Optional[google.api_core.retry.Retry]`: A description of how to retry the API call. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.routine.Routine` | The routine resource returned from the API call. |
update_table
update_table(table: google.cloud.bigquery.table.Table, fields: typing.Sequence[str], retry: google.api_core.retry.Retry = <google.api_core.retry.Retry object>, timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.Table
Change some fields of a table.
Use `fields` to specify which fields to update. At least one field must be provided. If a field is listed in `fields` and is `None` in `table`, the field value will be deleted.

If `table.etag` is not `None`, the update will only succeed if the table on the server has the same ETag. Thus reading a table with `get_table`, changing its fields, and then passing it to `update_table` will ensure that the changes will only be saved if no modifications to the table occurred since the read.

Parameters

| Name | Description |
| --- | --- |
| `table` | `google.cloud.bigquery.table.Table`: The table to update. |
| `fields` | `Sequence[str]`: The fields of `table` to change, spelled as the `Table` properties. |
| `retry` | `Optional[google.api_core.retry.Retry]`: A description of how to retry the API call. |
| `timeout` | `Optional[float]`: The number of seconds to wait for the underlying HTTP transport before using `retry`. |

Returns

| Type | Description |
| --- | --- |
| `google.cloud.bigquery.table.Table` | The table resource returned from the API call. |