ComputeOptions(
    maximum_bytes_billed: typing.Optional[int] = None,
    enable_multi_query_execution: bool = False,
)
Encapsulates the configuration for compute options.
Examples:
>>> import bigframes.pandas as bpd
>>> df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins")
>>> bpd.options.compute.maximum_bytes_billed = 500
>>> # df.to_pandas() # this should fail
google.api_core.exceptions.InternalServerError: 500 Query exceeded limit for bytes billed: 500. 10485760 or higher required.
>>> bpd.options.compute.maximum_bytes_billed = None # reset option
Attributes:

Name | Description
---|---
maximum_bytes_billed | Optional[int]. Limits the bytes billed for query jobs. Queries that would exceed this limit fail (without incurring a charge). If unspecified, the project default applies. See maximum_bytes_billed: https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.job.QueryJobConfig#google_cloud_bigquery_job_QueryJobConfig_maximum_bytes_billed.
enable_multi_query_execution | bool. If enabled, large queries may be factored into multiple smaller queries to avoid generating queries that are too complex for the query engine to handle. However, this may increase cost and latency. See the example below.