Migrating from pandas-gbq

The pandas-gbq library is a community-led project maintained by the pandas community. The BigQuery client library, google-cloud-bigquery, is the official Python library for interacting with BigQuery. If you use the pandas-gbq library, you are already using the google-cloud-bigquery library, because pandas-gbq uses google-cloud-bigquery to make API calls to BigQuery.

This topic provides code samples comparing google-cloud-bigquery and pandas-gbq. The samples use the following library versions:

google-cloud-bigquery==1.20.0
google-cloud-bigquery-storage==0.7.0
pandas==0.25.1
pandas-gbq==0.11.0
pyarrow==0.14.1
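
These versions can be installed together with pip, for example:

 pip install google-cloud-bigquery==1.20.0 google-cloud-bigquery-storage==0.7.0 \
     pandas==0.25.1 pandas-gbq==0.11.0 pyarrow==0.14.1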

Key differences in the level of functionality and support between the two libraries include:

Support
  pandas-gbq: Open source library maintained by PyData and volunteer contributors
  google-cloud-bigquery: Open source library maintained by Google

BigQuery API functionality covered
  pandas-gbq: Limited to running queries and saving data from pandas DataFrames to tables
  google-cloud-bigquery: Full BigQuery API functionality, with added support for reading and writing pandas DataFrames and a Jupyter magic for running queries

Cadence of new features
  pandas-gbq: New features added to the library only if implemented by volunteer contributors
  google-cloud-bigquery: New features implemented as they are released in the BigQuery API
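
The Jupyter magic mentioned above lets you run a query directly from a notebook cell. A minimal sketch, assuming you are in a Jupyter or IPython session with google-cloud-bigquery installed:

%load_ext google.cloud.bigquery

Then, in a new cell, run the query and save the results to a DataFrame named df:

%%bigquery df
SELECT name
FROM `bigquery-public-data.usa_names.usa_1910_current`
WHERE state = 'TX'
LIMIT 100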

Running queries

Both libraries support querying data stored in BigQuery. Key differences between the libraries include:

Default SQL syntax
  pandas-gbq: Standard SQL (configurable with pandas_gbq.context.dialect)
  google-cloud-bigquery: Standard SQL

Query configurations
  pandas-gbq: Sent as a dictionary in the format specified in the BigQuery REST reference.
  google-cloud-bigquery: Use the QueryJobConfig class, which contains properties for the various API configuration options.

Querying data with the standard SQL syntax

The following sample shows how to run a standard SQL query with and without explicitly specifying a project. For both libraries, if a project is not specified, the project will be determined from the default credentials.
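
If you are not sure which project the default credentials resolve to, the google-auth library (a dependency of both options) can tell you. A minimal sketch:

import google.auth

# The default credentials also carry a default project ID, which may be
# None if the environment does not define one.
credentials, default_project = google.auth.default()
print(default_project)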

pandas-gbq:

import pandas

sql = """
    SELECT name
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    WHERE state = 'TX'
    LIMIT 100
"""

# Run a Standard SQL query using the environment's default project
df = pandas.read_gbq(sql, dialect='standard')

# Run a Standard SQL query with the project set explicitly
project_id = 'your-project-id'
df = pandas.read_gbq(sql, project_id=project_id, dialect='standard')

google-cloud-bigquery:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT name
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    WHERE state = 'TX'
    LIMIT 100
"""

# Run a Standard SQL query using the environment's default project
df = client.query(sql).to_dataframe()

# Run a Standard SQL query with the project set explicitly
project_id = 'your-project-id'
df = client.query(sql, project=project_id).to_dataframe()

Querying data with the legacy SQL syntax

The following sample shows how to run a query using legacy SQL syntax. See the Standard SQL migration guide for guidance on updating your queries to standard SQL.

pandas-gbq:

import pandas

sql = """
    SELECT name
    FROM [bigquery-public-data:usa_names.usa_1910_current]
    WHERE state = 'TX'
    LIMIT 100
"""

df = pandas.read_gbq(sql, dialect='legacy')

google-cloud-bigquery:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT name
    FROM [bigquery-public-data:usa_names.usa_1910_current]
    WHERE state = 'TX'
    LIMIT 100
"""
query_config = bigquery.QueryJobConfig(use_legacy_sql=True)

df = client.query(sql, job_config=query_config).to_dataframe()

Using the BigQuery Storage API to download large results

Use the BigQuery Storage API to speed up downloads of large results by 15 to 31 times.

pandas-gbq:

import pandas

sql = "SELECT * FROM `bigquery-public-data.irs_990.irs_990_2012`"

# Use the BigQuery Storage API to download results more quickly.
df = pandas.read_gbq(sql, dialect='standard', use_bqstorage_api=True)

google-cloud-bigquery:

import google.auth
from google.cloud import bigquery
from google.cloud import bigquery_storage_v1beta1

# Create a BigQuery client and a BigQuery Storage API client with the same
# credentials to avoid authenticating twice.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
client = bigquery.Client(credentials=credentials, project=project_id)
bqstorage_client = bigquery_storage_v1beta1.BigQueryStorageClient(
    credentials=credentials
)
sql = "SELECT * FROM `bigquery-public-data.irs_990.irs_990_2012`"

# Use a BigQuery Storage API client to download results more quickly.
df = client.query(sql).to_dataframe(bqstorage_client=bqstorage_client)

Running a query with a configuration

Sending a configuration with a BigQuery API request is required to perform certain complex operations, such as running a parameterized query or specifying a destination table to store the query results. In pandas-gbq, the configuration must be sent as a dictionary in the format specified in the BigQuery REST reference. In google-cloud-bigquery, job configuration classes are provided, such as QueryJobConfig, which contain the necessary properties to configure complex jobs.

The following sample shows how to run a query with named parameters.

pandas-gbq:

import pandas

sql = """
    SELECT name
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    WHERE state = @state
    LIMIT @limit
"""
query_config = {
    'query': {
        'parameterMode': 'NAMED',
        'queryParameters': [
            {
                'name': 'state',
                'parameterType': {'type': 'STRING'},
                'parameterValue': {'value': 'TX'}
            },
            {
                'name': 'limit',
                'parameterType': {'type': 'INTEGER'},
                'parameterValue': {'value': 100}
            }
        ]
    }
}

df = pandas.read_gbq(sql, configuration=query_config)

google-cloud-bigquery:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT name
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    WHERE state = @state
    LIMIT @limit
"""
query_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter('state', 'STRING', 'TX'),
        bigquery.ScalarQueryParameter('limit', 'INTEGER', 100)
    ]
)

df = client.query(sql, job_config=query_config).to_dataframe()
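
QueryJobConfig also covers the other configuration mentioned above: writing query results to a destination table. A minimal sketch, assuming a dataset named my_dataset already exists in your project:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    GROUP BY name
"""
# Write the query results to a table in an existing dataset.
table_ref = client.dataset('my_dataset').table('query_results')
query_config = bigquery.QueryJobConfig(destination=table_ref)

# Wait for the query job to complete.
client.query(sql, job_config=query_config).result()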

Loading a pandas DataFrame to a BigQuery table

Both libraries support uploading data from a pandas DataFrame to a new table in BigQuery. Key differences include:

Type support
  pandas-gbq: Converts the DataFrame to CSV format before sending it to the API, which does not support nested or array values.
  google-cloud-bigquery: Converts the DataFrame to Parquet format before sending it to the API, which supports nested and array values. Note that pyarrow, the Parquet engine used to send the DataFrame data to the BigQuery API, must be installed to load the DataFrame to a table.

Load configurations
  pandas-gbq: Sent as a dictionary in the format specified in the BigQuery REST reference.
  google-cloud-bigquery: Use the LoadJobConfig class, which contains properties for the various API configuration options.

pandas-gbq:

import pandas

df = pandas.DataFrame(
    {
        'my_string': ['a', 'b', 'c'],
        'my_int64': [1, 2, 3],
        'my_float64': [4.0, 5.0, 6.0],
    }
)
table_id = 'my_dataset.new_table'

df.to_gbq(table_id)

google-cloud-bigquery:

The google-cloud-bigquery package requires the pyarrow library to serialize a pandas DataFrame to a Parquet file.

Install the pyarrow package:

 conda install -c conda-forge pyarrow

or

 pip install pyarrow

from google.cloud import bigquery
import pandas

df = pandas.DataFrame(
    {
        'my_string': ['a', 'b', 'c'],
        'my_int64': [1, 2, 3],
        'my_float64': [4.0, 5.0, 6.0],
    }
)
client = bigquery.Client()
table_id = 'my_dataset.new_table'
# Since string columns use the "object" dtype, pass in a (partial) schema
# to ensure the correct BigQuery data type.
job_config = bigquery.LoadJobConfig(schema=[
    bigquery.SchemaField("my_string", "STRING"),
])

job = client.load_table_from_dataframe(
    df, table_id, job_config=job_config
)

# Wait for the load job to complete.
job.result()
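
To confirm that the data arrived, you can fetch the table's metadata with the same client. A short sketch:

# Fetch the table metadata and report how many rows were loaded.
table = client.get_table(table_id)
print('Loaded {} rows to {}'.format(table.num_rows, table_id))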

Features not supported by pandas-gbq

While the pandas-gbq library provides a useful interface for querying data and writing data to tables, it does not cover much of the rest of the BigQuery API. Operations such as managing datasets and tables, copying tables, loading data from files or from Cloud Storage, and exporting table data all require the google-cloud-bigquery library (or the REST API) instead.
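
For example, managing datasets is handled with google-cloud-bigquery. A minimal sketch of creating a dataset, assuming your default project is set and the dataset ID my_new_dataset is not already in use:

from google.cloud import bigquery

client = bigquery.Client()

# Build a dataset reference in the client's default project and create it.
dataset = bigquery.Dataset(client.dataset('my_new_dataset'))
dataset.location = 'US'
client.create_dataset(dataset)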
