Scheduling queries

This page describes how to schedule recurring queries in BigQuery.

You can schedule queries to run on a recurring basis. Scheduled queries must be written in GoogleSQL, which can include data definition language (DDL) and data manipulation language (DML) statements. You can organize query results by date and time by parameterizing the query string and destination table.

When you create or update the schedule for a query, the scheduled time for the query is converted from your local time to UTC. UTC is not affected by daylight saving time.

Before you begin

Required permissions

To schedule a query, you need the following IAM permissions:

  • To create the transfer, you must either have the bigquery.transfers.update and bigquery.datasets.get permissions, or the bigquery.jobs.create, bigquery.transfers.get, and bigquery.datasets.get permissions.

  • To run a scheduled query, you must have:

    • bigquery.datasets.get permissions on the target dataset
    • bigquery.jobs.create

To modify or delete a scheduled query, you need one of the following IAM permissions:

  • bigquery.transfers.update
  • bigquery.jobs.create and ownership over the scheduled query

The predefined roles/bigquery.admin IAM role includes the permissions that you need in order to schedule or modify a query.

For more information about IAM roles in BigQuery, see Predefined roles and permissions.

To create or update scheduled queries run by a service account, you must have access to that service account. For more information on granting users the service account role, see Service Account user role. To select a service account in the scheduled query UI of the Google Cloud console, you need the following IAM permissions:

  • iam.serviceAccounts.list
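
For example, granting a user the Service Account User role on a specific service account can be done with the Google Cloud CLI. This is a hedged sketch; the service account and user email addresses are placeholders, and your organization might manage these grants differently:

gcloud iam service-accounts add-iam-policy-binding \
my-sa@my-project.iam.gserviceaccount.com \
--member='user:alice@example.com' \
--role='roles/iam.serviceAccountUser'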

Configuration options

Query string

The query string must be valid and written in GoogleSQL. Each run of a scheduled query can receive the following query parameters.

To manually test a query string with @run_time and @run_date parameters before scheduling a query, use the bq command-line tool.
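
For example, the following is a minimal sketch of how you might test such a query by passing literal values through the bq tool's --parameter flag. The query text and the parameter values are placeholders; substitute the run time and run date that you want to simulate:

bq query \
--use_legacy_sql=false \
--parameter=run_time:TIMESTAMP:'2018-02-15 00:00:00' \
--parameter=run_date:DATE:2018-02-15 \
'SELECT @run_time AS run_time, @run_date AS run_date'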

Available parameters

  • @run_time (GoogleSQL type TIMESTAMP): Represented in UTC time. For regularly scheduled queries, run_time represents the intended time of execution. For example, if the scheduled query is set to "every 24 hours", the run_time difference between two consecutive queries is exactly 24 hours, even though the actual execution time might vary slightly.
  • @run_date (GoogleSQL type DATE): Represents a logical calendar date.

Example

The @run_time parameter is part of the query string in this example, which queries a public dataset named hacker_news.stories.

SELECT @run_time AS time,
  title,
  author,
  text
FROM `bigquery-public-data.hacker_news.stories`
LIMIT
  1000

Destination table

If the destination table for your results doesn't exist when you set up the scheduled query, BigQuery attempts to create the table for you.

If you are using a DDL or DML query, then in the Google Cloud console, choose the Processing location or region. Processing location is required for DDL or DML queries that create the destination table.

If the destination table does exist and you are using the WRITE_APPEND write preference, BigQuery appends data to the destination table and tries to map the schema. BigQuery automatically allows field additions and reordering, and accommodates missing optional fields. If the table schema changes so much between runs that BigQuery can't process the changes automatically, the scheduled query fails.

Queries can reference tables from different projects and different datasets. When configuring your scheduled query, you don't need to include the destination dataset in the table name. You specify the destination dataset separately.

The destination dataset and table for a scheduled query must be in the same project as the scheduled query.

Write preference

The write preference you select determines how your query results are written to an existing destination table.

  • WRITE_TRUNCATE: If the table exists, BigQuery overwrites the table data.
  • WRITE_APPEND: If the table exists, BigQuery appends the data to the table.

If you're using a DDL or DML query, you can't use the write preference option.

Creating, truncating, or appending a destination table only happens if BigQuery is able to successfully complete the query. Creation, truncation, or append actions occur as one atomic update upon job completion.
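
For example, when you later create the scheduled query with the bq command-line tool (described in the Set up scheduled queries section), the write preference is carried by the write_disposition parameter. The following is a minimal sketch; the dataset, table, and query names are placeholders:

bq mk \
--transfer_config \
--target_dataset=mydataset \
--display_name='Append Results Example' \
--params='{"query":"SELECT * FROM mydataset.source_table","destination_table_name_template":"mytable","write_disposition":"WRITE_APPEND"}' \
--data_source=scheduled_query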

Clustering

Scheduled queries can apply clustering only to new tables that are created with a DDL CREATE TABLE AS SELECT statement. See Creating a clustered table from a query result on the Using data definition language statements page.
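
As an illustration, a scheduled query could use a DDL statement along these lines to create a clustered (and, in this sketch, also partitioned) destination table. The dataset, table, and column names are hypothetical:

CREATE TABLE mydataset.clustered_sales
PARTITION BY DATE(transaction_ts)
CLUSTER BY customer_id
AS
SELECT customer_id, transaction_ts, amount
FROM mydataset.raw_sales;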

Partitioning options

Scheduled queries can create partitioned or non-partitioned destination tables. Partitioning is available in the Google Cloud console, bq command-line tool, and API setup methods. If you're using a DDL or DML query with partitioning, leave the Destination table partitioning field blank.

You can use the following types of table partitioning in BigQuery:

  • Integer range partitioning
  • Time-unit column partitioning
  • Ingestion time partitioning

To create a partitioned table by using a scheduled query in the Google Cloud console, use the following options:

  • To use integer range partitioning, leave the Destination table partitioning field blank.

  • To use time-unit column partitioning, specify the column name in the Destination table partitioning field when you set up a scheduled query.

  • To use ingestion time partitioning, leave the Destination table partitioning field blank and indicate the date partitioning in the destination table's name. For example, mytable${run_date}. For more information, see Parameter templating syntax.
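
For example, when you create the scheduled query with the bq command-line tool or the API (described later on this page), time-unit column partitioning can be expressed through the partitioning_field parameter, the same parameter used in the client library samples later on this page. The following is a hedged sketch with placeholder names:

bq mk \
--transfer_config \
--target_dataset=mydataset \
--display_name='Partitioned Scheduled Query' \
--params='{"query":"SELECT * FROM mydataset.source_table","destination_table_name_template":"mytable","write_disposition":"WRITE_TRUNCATE","partitioning_field":"transaction_date"}' \
--data_source=scheduled_query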

Available parameters

When setting up the scheduled query, you can specify how you want to partition the destination table with runtime parameters.

  • run_time (formatted timestamp): In UTC time, per the schedule. For regularly scheduled queries, run_time represents the intended time of execution. For example, if the scheduled query is set to "every 24 hours", the run_time difference between two consecutive queries is exactly 24 hours, even though the actual execution time may vary slightly. See TransferRun.runTime.
  • run_date (date string): The date of the run_time parameter in the following format: %Y-%m-%d; for example, 2018-01-01. This format is compatible with ingestion-time partitioned tables.

Templating system

Scheduled queries support runtime parameters in the destination table name with a templating syntax.

Parameter templating syntax

The templating syntax supports basic string templating and time offsetting. Parameters are referenced in the following formats:

  • {run_date}
  • {run_time[+\-offset]|"time_format"}

The parameters have the following purposes:

  • run_date: This parameter is replaced by the date in format YYYYMMDD.
  • run_time: This parameter supports the following properties:

    • offset: Time offset expressed in hours (h), minutes (m), and seconds (s), in that order. Days (d) are not supported. Decimals are allowed, for example: 1.5h.
    • time_format: A formatting string. The most common formatting parameters are years (%Y), months (%m), and days (%d). For partitioned tables, YYYYMMDD is the required suffix; it is equivalent to "%Y%m%d". Read more about formatting datetime elements.

Usage notes:
  • No whitespace is allowed between run_time, offset, and time format.
  • To include literal curly braces in the string, you can escape them as '\{' and '\}'.
  • To include literal quotes or a vertical bar in the time_format, such as "YYYY|MM|DD", you can escape them in the format string as: '\"' or '\|'.

Parameter templating examples

These examples demonstrate specifying destination table names with different time formats, and offsetting the run time.
In each of the following examples, the run_time (UTC) is 2018-02-15 00:00:00. Each entry shows a templated parameter and the output destination table name it produces:

  • mytable → mytable
  • mytable_{run_time|"%Y%m%d"} → mytable_20180215
  • mytable_{run_time+25h|"%Y%m%d"} → mytable_20180216
  • mytable_{run_time-1h|"%Y%m%d"} → mytable_20180214
  • mytable_{run_time+1.5h|"%Y%m%d%H"} or mytable_{run_time+90m|"%Y%m%d%H"} → mytable_2018021501
  • {run_time+97s|"%Y%m%d"}_mytable_{run_time+97s|"%H%M%S"} → 20180215_mytable_000137
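
As an illustration, the same templating syntax can appear in the destination_table_name_template parameter when the scheduled query is created with the bq command-line tool (covered in the setup section that follows). The inner quotes around the time format are escaped because the template is embedded in a JSON string; all names are placeholders:

bq mk \
--transfer_config \
--target_dataset=mydataset \
--display_name='Templated Destination Example' \
--params='{"query":"SELECT @run_time AS t","destination_table_name_template":"mytable_{run_time-24h|\"%Y%m%d\"}","write_disposition":"WRITE_TRUNCATE"}' \
--data_source=scheduled_query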

Using a service account

You can set up a scheduled query to authenticate as a service account. A service account is a Google Account associated with your Google Cloud project. The service account can run jobs, such as scheduled queries or batch processing pipelines, with its own service credentials rather than an end user's credentials.

Read more about authenticating with service accounts in Introduction to authentication.

Specify encryption key with scheduled queries

You can specify customer-managed encryption keys (CMEKs) to encrypt data for a transfer run. You can use a CMEK to support transfers from scheduled queries.

When you specify a CMEK with a transfer, the BigQuery Data Transfer Service applies the CMEK to any intermediate on-disk cache of ingested data so that the entire data transfer workflow is CMEK compliant.

You cannot update an existing transfer to add a CMEK if the transfer was not originally created with a CMEK. For example, you cannot change a destination table that was originally default encrypted to now be encrypted with CMEK. Conversely, you also cannot change a CMEK-encrypted destination table to have a different type of encryption.

You can update a CMEK for a transfer if the transfer configuration was originally created with a CMEK encryption. When you update a CMEK for a transfer configuration, the BigQuery Data Transfer Service propagates the CMEK to the destination tables at the next run of the transfer, where the BigQuery Data Transfer Service replaces any outdated CMEKs with the new CMEK during the transfer run. For more information, see Update a transfer.

You can also use project default keys. When you specify a project default key with a transfer, the BigQuery Data Transfer Service uses the project default key as the default key for any new transfer configurations.
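
For example, when you create the scheduled query with the bq command-line tool (described later on this page), the key is supplied through the --destination_kms_key flag. This is a hedged sketch; the project, key ring, and key names are placeholders:

bq mk \
--transfer_config \
--target_dataset=mydataset \
--display_name='My CMEK Scheduled Query' \
--params='{"query":"SELECT 1 FROM mydataset.test","destination_table_name_template":"mytable","write_disposition":"WRITE_TRUNCATE"}' \
--data_source=scheduled_query \
--destination_kms_key='projects/myproject/locations/us/keyRings/mykeyring/cryptoKeys/mykey'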

Set up scheduled queries

For a description of the schedule syntax, see Formatting the schedule. For additional details, see the schedule field under Resource: TransferConfig.

Console

  1. Open the BigQuery page in the Google Cloud console.

    Go to BigQuery

  2. Run the query that you're interested in. When you are satisfied with your results, click Schedule.

    Create new scheduled query in Google Cloud console.

  3. The scheduled query options open in the New scheduled query pane.

    New scheduled query pane.

  4. On the New scheduled query pane:

    • For Name for the scheduled query, enter a name such as My scheduled query. The scheduled query name can be any value that you can identify later if you need to modify the query.
    • Optional: By default, the query is scheduled to run Daily. You can change the default schedule by selecting an option from the Repeats drop-down menu:

      • To specify a custom frequency, select Custom, then enter a cron-like time specification in the Custom schedule field, for example, every mon 23:30 or every 6 hours. For details about valid schedules including custom intervals, see the schedule field under Resource: TransferConfig.

        Formatting a custom scheduled query.

      • To change the start time, select the Start at set time option, and then enter the desired start date and time.

      • To specify an end time, select the Schedule end time option, and then enter the desired end date and time.

      • To save the query without a schedule, so you can run it on demand later, select On-demand in the Repeats menu.

  5. For a GoogleSQL SELECT query, select the Set a destination table for query results option and provide the following information about the destination dataset.

    • For Dataset name, choose the appropriate destination dataset.
    • For Table name, enter the name of your destination table.
    • For Destination table write preference, choose either Append to table to append data to the table or Overwrite table to overwrite the destination table.

      New scheduled query destination.

  6. Choose the Location Type.

    • If you enabled the Set a destination table for query results option, you can select Automatic location selection to automatically select the location where the destination table resides.

    • Otherwise, choose the location where the data being queried is located.

  7. Advanced options:

    • Optional: CMEK. If you use customer-managed encryption keys, you can select Customer-managed key under Advanced options. A list of your available CMEKs appears for you to choose from. For information about how customer-managed encryption keys (CMEKs) work with the BigQuery Data Transfer Service, see Specify encryption key with scheduled queries.

    • Authenticate as a service account: If you have one or more service accounts associated with your Google Cloud project, you can associate a service account with your scheduled query instead of using your user credentials. Under Scheduled query credential, click the menu to see a list of your available service accounts. A service account is required if you are signed in as a federated identity.

      Scheduled query advanced options.

  8. Additional configurations:

    • Optional: Check Send email notifications to allow email notifications of transfer run failures.

    • Optional: For Pub/Sub topic, enter your Pub/Sub topic name, for example: projects/myproject/topics/mytopic.

      New scheduled query DDL and DML.

  9. Click Save.

bq

Option 1: Use the bq query command.

To create a scheduled query, add the --destination_table (or --target_dataset), --schedule, and --display_name options to your bq query command.

bq query \
--display_name=name \
--destination_table=table \
--schedule=interval

Replace the following:

  • name. The display name for the scheduled query. The display name can be any value that you can identify later if you need to modify the query.
  • table. The destination table for the query results.
    • --target_dataset is an alternative way to name the target dataset for the query results, when used with DDL and DML queries.
    • Use either --destination_table or --target_dataset, but not both.
  • interval. When used with bq query, makes a query a recurring scheduled query. A schedule for how often the query should run is required. For details about valid schedules including custom intervals, see the schedule field under Resource: TransferConfig. Examples:
    • --schedule='every 24 hours'
    • --schedule='every 3 hours'
    • --schedule='every monday 09:00'
    • --schedule='1st sunday of sep,oct,nov 00:00'

Optional flags:

  • --project_id is your project ID. If --project_id isn't specified, the default project is used.

  • --replace overwrites the destination table with the query results after every run of the scheduled query. Any existing data is erased. For non-partitioned tables, the schema is also erased.

  • --append_table appends results to the destination table.

  • For DDL and DML queries, you can also supply the --location flag to specify a particular region for processing. If --location isn't specified, the nearest Google Cloud location is used.

For example, the following command creates a scheduled query named My Scheduled Query using the simple query SELECT 1 from mydataset.test. The destination table is mytable in the dataset mydataset. The scheduled query is created in the default project:

    bq query \
    --use_legacy_sql=false \
    --destination_table=mydataset.mytable \
    --display_name='My Scheduled Query' \
    --schedule='every 24 hours' \
    --replace=true \
    'SELECT
      1
    FROM
      mydataset.test'


Option 2: Use the bq mk command.

Scheduled queries are a kind of transfer. To schedule a query, you can use the bq command-line tool to make a transfer configuration.

Queries must be written in GoogleSQL to be scheduled.

Enter the bq mk command and supply the following required flags:

  • --transfer_config
  • --data_source
  • --target_dataset (optional for DDL and DML queries)
  • --display_name
  • --params

Optional flags:

  • --project_id is your project ID. If --project_id isn't specified, the default project is used.

  • --schedule is how often you want the query to run. If --schedule isn't specified, the default is 'every 24 hours' based on creation time.

  • For DDL and DML queries, you can also supply the --location flag to specify a particular region for processing. If --location isn't specified, the nearest Google Cloud location is used.

  • --service_account_name is for authenticating your scheduled query with a service account instead of your individual user account.

  • --destination_kms_key specifies the key resource ID for the key if you use a customer-managed encryption key (CMEK) for this transfer. For information about how CMEKs work with the BigQuery Data Transfer Service, see Specify encryption key with scheduled queries.

bq mk \
--transfer_config \
--target_dataset=dataset \
--display_name=name \
--params='parameters' \
--data_source=data_source

Replace the following:

  • dataset. The target dataset for the transfer configuration.
    • This parameter is optional for DDL and DML queries. It is required for all other queries.
  • name. The display name for the transfer configuration. The display name can be any value that you can identify later if you need to modify the query.
  • parameters. Contains the parameters for the created transfer configuration in JSON format. For example: --params='{"param":"param_value"}'.
    • For a scheduled query, you must supply the query parameter.
    • The destination_table_name_template parameter is the name of your destination table.
      • This parameter is optional for DDL and DML queries. It is required for all other queries.
    • For the write_disposition parameter, you can choose WRITE_TRUNCATE to truncate (overwrite) the destination table or WRITE_APPEND to append the query results to the destination table.
      • This parameter is optional for DDL and DML queries. It is required for all other queries.
  • data_source. The data source: scheduled_query.
  • Optional: The --service_account_name flag is for authenticating with a service account instead of an individual user account.
  • Optional: The --destination_kms_key specifies the key resource ID for the Cloud KMS key—for example, projects/project_name/locations/us/keyRings/key_ring_name/cryptoKeys/key_name.

For example, the following command creates a scheduled query transfer configuration named My Scheduled Query using the simple query SELECT 1 from mydataset.test. The destination table mytable is truncated for every write, and the target dataset is mydataset. The scheduled query is created in the default project, and authenticates as a service account:

bq mk \
--transfer_config \
--target_dataset=mydataset \
--display_name='My Scheduled Query' \
--params='{"query":"SELECT 1 from mydataset.test","destination_table_name_template":"mytable","write_disposition":"WRITE_TRUNCATE"}' \
--data_source=scheduled_query \
--service_account_name=abcdef-test-sa@abcdef-test.iam.gserviceaccount.com

The first time you run the command, you receive a message like the following:

[URL omitted] Please copy and paste the above URL into your web browser and follow the instructions to retrieve an authentication code.

Follow the instructions in the message and paste the authentication code on the command line.

API

Use the projects.locations.transferConfigs.create method and supply an instance of the TransferConfig resource.
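
As a rough sketch, a request along the following lines creates a scheduled query over REST. It assumes the gcloud CLI is available for an access token; the project ID, location, and all parameter values are placeholders:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://bigquerydatatransfer.googleapis.com/v1/projects/PROJECT_ID/locations/us/transferConfigs" \
-d '{
  "destinationDatasetId": "mydataset",
  "displayName": "My Scheduled Query",
  "dataSourceId": "scheduled_query",
  "schedule": "every 24 hours",
  "params": {
    "query": "SELECT 1 FROM mydataset.test",
    "destination_table_name_template": "mytable",
    "write_disposition": "WRITE_TRUNCATE"
  }
}'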

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.CreateTransferConfigRequest;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.ProjectName;
import com.google.cloud.bigquery.datatransfer.v1.TransferConfig;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sample to create a scheduled query
public class CreateScheduledQuery {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    final String projectId = "MY_PROJECT_ID";
    final String datasetId = "MY_DATASET_ID";
    final String query =
        "SELECT CURRENT_TIMESTAMP() as current_time, @run_time as intended_run_time, "
            + "@run_date as intended_run_date, 17 as some_integer";
    Map<String, Value> params = new HashMap<>();
    params.put("query", Value.newBuilder().setStringValue(query).build());
    params.put(
        "destination_table_name_template",
        Value.newBuilder().setStringValue("my_destination_table_{run_date}").build());
    params.put("write_disposition", Value.newBuilder().setStringValue("WRITE_TRUNCATE").build());
    params.put("partitioning_field", Value.newBuilder().build());
    TransferConfig transferConfig =
        TransferConfig.newBuilder()
            .setDestinationDatasetId(datasetId)
            .setDisplayName("Your Scheduled Query Name")
            .setDataSourceId("scheduled_query")
            .setParams(Struct.newBuilder().putAllFields(params).build())
            .setSchedule("every 24 hours")
            .build();
    createScheduledQuery(projectId, transferConfig);
  }

  public static void createScheduledQuery(String projectId, TransferConfig transferConfig)
      throws IOException {
    try (DataTransferServiceClient dataTransferServiceClient = DataTransferServiceClient.create()) {
      ProjectName parent = ProjectName.of(projectId);
      CreateTransferConfigRequest request =
          CreateTransferConfigRequest.newBuilder()
              .setParent(parent.toString())
              .setTransferConfig(transferConfig)
              .build();
      TransferConfig config = dataTransferServiceClient.createTransferConfig(request);
      System.out.println("\nScheduled query created successfully :" + config.getName());
    } catch (ApiException ex) {
      System.out.print("\nScheduled query was not created." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery_datatransfer

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

# The project where the query job runs is the same as the project
# containing the destination dataset.
project_id = "your-project-id"
dataset_id = "your_dataset_id"

# This service account will be used to execute the scheduled queries. Omit
# this request parameter to run the query as the user with the credentials
# associated with this client.
service_account_name = "abcdef-test-sa@abcdef-test.iam.gserviceaccount.com"

# Use standard SQL syntax for the query.
query_string = """
SELECT
  CURRENT_TIMESTAMP() as current_time,
  @run_time as intended_run_time,
  @run_date as intended_run_date,
  17 as some_integer
"""

parent = transfer_client.common_project_path(project_id)

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id=dataset_id,
    display_name="Your Scheduled Query Name",
    data_source_id="scheduled_query",
    params={
        "query": query_string,
        "destination_table_name_template": "your_table_{run_date}",
        "write_disposition": "WRITE_TRUNCATE",
        "partitioning_field": "",
    },
    schedule="every 24 hours",
)

transfer_config = transfer_client.create_transfer_config(
    bigquery_datatransfer.CreateTransferConfigRequest(
        parent=parent,
        transfer_config=transfer_config,
        service_account_name=service_account_name,
    )
)

print("Created scheduled query '{}'".format(transfer_config.name))

Set up scheduled queries with a service account

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.CreateTransferConfigRequest;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.ProjectName;
import com.google.cloud.bigquery.datatransfer.v1.TransferConfig;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sample to create a scheduled query with service account
public class CreateScheduledQueryWithServiceAccount {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    final String projectId = "MY_PROJECT_ID";
    final String datasetId = "MY_DATASET_ID";
    final String serviceAccount = "MY_SERVICE_ACCOUNT";
    final String query =
        "SELECT CURRENT_TIMESTAMP() as current_time, @run_time as intended_run_time, "
            + "@run_date as intended_run_date, 17 as some_integer";
    Map<String, Value> params = new HashMap<>();
    params.put("query", Value.newBuilder().setStringValue(query).build());
    params.put(
        "destination_table_name_template",
        Value.newBuilder().setStringValue("my_destination_table_{run_date}").build());
    params.put("write_disposition", Value.newBuilder().setStringValue("WRITE_TRUNCATE").build());
    params.put("partitioning_field", Value.newBuilder().build());
    TransferConfig transferConfig =
        TransferConfig.newBuilder()
            .setDestinationDatasetId(datasetId)
            .setDisplayName("Your Scheduled Query Name")
            .setDataSourceId("scheduled_query")
            .setParams(Struct.newBuilder().putAllFields(params).build())
            .setSchedule("every 24 hours")
            .build();
    createScheduledQueryWithServiceAccount(projectId, transferConfig, serviceAccount);
  }

  public static void createScheduledQueryWithServiceAccount(
      String projectId, TransferConfig transferConfig, String serviceAccount) throws IOException {
    try (DataTransferServiceClient dataTransferServiceClient = DataTransferServiceClient.create()) {
      ProjectName parent = ProjectName.of(projectId);
      CreateTransferConfigRequest request =
          CreateTransferConfigRequest.newBuilder()
              .setParent(parent.toString())
              .setTransferConfig(transferConfig)
              .setServiceAccountName(serviceAccount)
              .build();
      TransferConfig config = dataTransferServiceClient.createTransferConfig(request);
      System.out.println(
          "\nScheduled query with service account created successfully :" + config.getName());
    } catch (ApiException ex) {
      System.out.print("\nScheduled query with service account was not created." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery_datatransfer

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

# The project where the query job runs is the same as the project
# containing the destination dataset.
project_id = "your-project-id"
dataset_id = "your_dataset_id"

# This service account will be used to execute the scheduled queries. Omit
# this request parameter to run the query as the user with the credentials
# associated with this client.
service_account_name = "abcdef-test-sa@abcdef-test.iam.gserviceaccount.com"

# Use standard SQL syntax for the query.
query_string = """
SELECT
  CURRENT_TIMESTAMP() as current_time,
  @run_time as intended_run_time,
  @run_date as intended_run_date,
  17 as some_integer
"""

parent = transfer_client.common_project_path(project_id)

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id=dataset_id,
    display_name="Your Scheduled Query Name",
    data_source_id="scheduled_query",
    params={
        "query": query_string,
        "destination_table_name_template": "your_table_{run_date}",
        "write_disposition": "WRITE_TRUNCATE",
        "partitioning_field": "",
    },
    schedule="every 24 hours",
)

transfer_config = transfer_client.create_transfer_config(
    bigquery_datatransfer.CreateTransferConfigRequest(
        parent=parent,
        transfer_config=transfer_config,
        service_account_name=service_account_name,
    )
)

print("Created scheduled query '{}'".format(transfer_config.name))

View scheduled query status

Console

To view the status of your scheduled queries, click Scheduled queries in the navigation pane. Refresh the page to see the updated status of your scheduled queries. Click a scheduled query to get more details about it.

List scheduled queries.

bq

Scheduled queries are a kind of transfer. To show the details of a scheduled query, you can first use the bq command-line tool to list your transfer configurations.

Enter the bq ls command and supply the transfer flag --transfer_config. The following flags are also required:

  • --transfer_location

For example:

bq ls \
--transfer_config \
--transfer_location=us

To show the details of a single scheduled query, enter the bq show command and supply the resource name of that scheduled query (transfer configuration).

For example:

bq show \
--transfer_config \
projects/862514376110/locations/us/transferConfigs/5dd12f26-0000-262f-bc38-089e0820fe38

API

Use the projects.locations.transferConfigs.list method and supply an instance of the TransferConfig resource.

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.ListTransferConfigsRequest;
import com.google.cloud.bigquery.datatransfer.v1.ProjectName;
import java.io.IOException;

// Sample to get list of transfer config
public class ListTransferConfigs {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    final String projectId = "MY_PROJECT_ID";
    listTransferConfigs(projectId);
  }

  public static void listTransferConfigs(String projectId) throws IOException {
    try (DataTransferServiceClient dataTransferServiceClient = DataTransferServiceClient.create()) {
      ProjectName parent = ProjectName.of(projectId);
      ListTransferConfigsRequest request =
          ListTransferConfigsRequest.newBuilder().setParent(parent.toString()).build();
      dataTransferServiceClient
          .listTransferConfigs(request)
          .iterateAll()
          .forEach(config -> System.out.print("Success! Config ID :" + config.getName() + "\n"));
    } catch (ApiException ex) {
      System.out.println("Config list not found due to error." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery_datatransfer

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

project_id = "my-project"
parent = transfer_client.common_project_path(project_id)

configs = transfer_client.list_transfer_configs(parent=parent)
print("Got the following configs:")
for config in configs:
    print(f"\tID: {config.name}, Schedule: {config.schedule}")

Update scheduled queries

Console

To update a scheduled query, follow these steps:

  1. In the navigation pane, click Scheduled queries.
  2. In the list of scheduled queries, click the name of the query that you want to change.
  3. On the Scheduled query details page that opens, click Edit.

    Edit scheduled query details.
  4. Optional: Change the query text in the query editing pane.
  5. Click Schedule query and then select Update scheduled query.
  6. Optional: Change any other scheduling options for the query.
  7. Click Update.

bq

Scheduled queries are a kind of transfer. To update a scheduled query, you can use the bq command-line tool to update the transfer configuration.

Enter the bq update command with the required --transfer_config flag.

Optional flags:

  • --project_id is your project ID. If --project_id isn't specified, the default project is used.

  • --schedule is how often you want the query to run. If --schedule isn't specified, the default is 'every 24 hours' based on creation time.

  • --service_account_name only takes effect if --update_credentials is also set. For more information, see Update scheduled query credentials.

  • --target_dataset (optional for DDL and DML queries) names the target dataset for the query results.

  • --display_name is the name for the scheduled query.

  • --params specifies the parameters for the transfer configuration in JSON format. For example: --params='{"param":"param_value"}'.

  • --destination_kms_key specifies the key resource ID for the Cloud KMS key if you use a customer-managed encryption key (CMEK) for this transfer. For information about how CMEKs work with the BigQuery Data Transfer Service, see Specify encryption key with scheduled queries.

bq update \
--target_dataset=dataset \
--display_name=name \
--params='parameters' \
--transfer_config \
RESOURCE_NAME

Replace the following:

  • dataset. The target dataset for the transfer configuration. This parameter is optional for DDL and DML queries. It is required for all other queries.
  • name. The display name for the transfer configuration. The display name can be any value that you can identify later if you need to modify the query.
  • parameters. Contains the parameters for the created transfer configuration in JSON format. For example: --params='{"param":"param_value"}'.
    • For a scheduled query, you must supply the query parameter.
    • The destination_table_name_template parameter is the name of your destination table. This parameter is optional for DDL and DML queries. It is required for all other queries.
    • For the write_disposition parameter, you can choose WRITE_TRUNCATE to truncate (overwrite) the destination table or WRITE_APPEND to append the query results to the destination table. This parameter is optional for DDL and DML queries. It is required for all other queries.
  • Optional: The --destination_kms_key specifies the key resource ID for the Cloud KMS key—for example, projects/project_name/locations/us/keyRings/key_ring_name/cryptoKeys/key_name.
  • RESOURCE_NAME: The transfer's resource name (also referred to as the transfer configuration). If you don't know the transfer's resource name, find the resource name with: bq ls --transfer_config --transfer_location=location.

For example, the following command updates a scheduled query transfer configuration named My Scheduled Query using the simple query SELECT 1 from mydataset.test. The destination table mytable is truncated for every write, and the target dataset is mydataset:

bq update \
--target_dataset=mydataset \
--display_name='My Scheduled Query' \
--params='{"query":"SELECT 1 from mydataset.test","destination_table_name_template":"mytable","write_disposition":"WRITE_TRUNCATE"}' \
--transfer_config \
projects/myproject/locations/us/transferConfigs/1234a123-1234-1a23-1be9-12ab3c456de7

API

Use the projects.transferConfigs.patch method and supply the transfer's Resource Name using the transferConfig.name parameter. If you do not know the transfer's Resource Name, use the bq ls --transfer_config --transfer_location=location command to list all transfers or call the projects.locations.transferConfigs.list method and supply the project ID using the parent parameter.

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.TransferConfig;
import com.google.cloud.bigquery.datatransfer.v1.UpdateTransferConfigRequest;
import com.google.protobuf.FieldMask;
import com.google.protobuf.util.FieldMaskUtil;
import java.io.IOException;

// Sample to update transfer config.
public class UpdateTransferConfig {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String configId = "MY_CONFIG_ID";
    TransferConfig transferConfig =
        TransferConfig.newBuilder()
            .setName(configId)
            .setDisplayName("UPDATED_DISPLAY_NAME")
            .build();
    FieldMask updateMask = FieldMaskUtil.fromString("display_name");
    updateTransferConfig(transferConfig, updateMask);
  }

  public static void updateTransferConfig(TransferConfig transferConfig, FieldMask updateMask)
      throws IOException {
    try (DataTransferServiceClient dataTransferServiceClient = DataTransferServiceClient.create()) {
      UpdateTransferConfigRequest request =
          UpdateTransferConfigRequest.newBuilder()
              .setTransferConfig(transferConfig)
              .setUpdateMask(updateMask)
              .build();
      TransferConfig updateConfig = dataTransferServiceClient.updateTransferConfig(request);
      System.out.println("Transfer config updated successfully :" + updateConfig.getDisplayName());
    } catch (ApiException ex) {
      System.out.print("Transfer config was not updated." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery_datatransfer
from google.protobuf import field_mask_pb2

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

transfer_config_name = "projects/1234/locations/us/transferConfigs/abcd"
new_display_name = "My Transfer Config"

transfer_config = bigquery_datatransfer.TransferConfig(name=transfer_config_name)
transfer_config.display_name = new_display_name

transfer_config = transfer_client.update_transfer_config(
    {
        "transfer_config": transfer_config,
        "update_mask": field_mask_pb2.FieldMask(paths=["display_name"]),
    }
)

print(f"Updated config: '{transfer_config.name}'")
print(f"New display name: '{transfer_config.display_name}'")

Update scheduled queries with ownership restrictions

If you try to update a scheduled query you don't own, the update might fail with the following error message:

Cannot modify restricted parameters without taking ownership of the transfer configuration.

The owner of the scheduled query is the user associated with the scheduled query or the user who has access to the service account associated with the scheduled query. The associated user can be seen in the configuration details of the scheduled query. For information on how to update the scheduled query to take ownership, see Update scheduled query credentials. To grant users access to a service account, you must have the Service Account user role.

Any user who is not the owner of the scheduled query, but has access to all resources mentioned by the query, might still be allowed to update the query. This scenario is only supported if the query can be validated within a minute or two. Otherwise, you get the same error message mentioned previously. If the query is too complex, you can instead update the scheduled query credentials to take direct ownership of the scheduled query or to use a service account.

Update scheduled query credentials

If you're scheduling an existing query, you might need to update the user credentials on the query. Credentials are automatically up to date for new scheduled queries.

Some other situations that could require updating credentials include the following:

  • You want to query Google Drive data in a scheduled query.
  • You receive an INVALID_USER error when you attempt to schedule the query:

    Error code 5 : Authentication failure: User Id not found. Error code: INVALID_USERID

  • You receive the following restricted parameters error when you attempt to update the query:

    Cannot modify restricted parameters without taking ownership of the transfer configuration.

Console

To refresh the existing credentials on a scheduled query:

  1. Find and view the status of a scheduled query.

  2. Click the MORE button and select Update credentials.

    Update scheduled query credentials.

  3. Allow 10 to 20 minutes for the change to take effect. You might need to clear your browser's cache.

bq

Scheduled queries are a kind of transfer. To update the credentials of a scheduled query, you can use the bq command-line tool to update the transfer configuration.

Enter the bq update command and supply the transfer flag --transfer_config. The following flags are also required:

  • --update_credentials

Optional flag:

  • --service_account_name is for authenticating your scheduled query with a service account instead of your individual user account.

For example, the following command updates a scheduled query transfer configuration to authenticate as a service account:

bq update \
--update_credentials \
--service_account_name=abcdef-test-sa@abcdef-test.iam.gserviceaccount.com \
--transfer_config \
projects/myproject/locations/us/transferConfigs/1234a123-1234-1a23-1be9-12ab3c456de7

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.TransferConfig;
import com.google.cloud.bigquery.datatransfer.v1.UpdateTransferConfigRequest;
import com.google.protobuf.FieldMask;
import com.google.protobuf.util.FieldMaskUtil;
import java.io.IOException;

// Sample to update credentials in transfer config.
public class UpdateCredentials {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String configId = "MY_CONFIG_ID";
    String serviceAccount = "MY_SERVICE_ACCOUNT";
    TransferConfig transferConfig = TransferConfig.newBuilder().setName(configId).build();
    FieldMask updateMask = FieldMaskUtil.fromString("service_account_name");
    updateCredentials(transferConfig, serviceAccount, updateMask);
  }

  public static void updateCredentials(
      TransferConfig transferConfig, String serviceAccount, FieldMask updateMask)
      throws IOException {
    try (DataTransferServiceClient dataTransferServiceClient = DataTransferServiceClient.create()) {
      UpdateTransferConfigRequest request =
          UpdateTransferConfigRequest.newBuilder()
              .setTransferConfig(transferConfig)
              .setUpdateMask(updateMask)
              .setServiceAccountName(serviceAccount)
              .build();
      dataTransferServiceClient.updateTransferConfig(request);
      System.out.println("Credentials updated successfully");
    } catch (ApiException ex) {
      System.out.print("Credentials was not updated." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery_datatransfer
from google.protobuf import field_mask_pb2

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

service_account_name = "abcdef-test-sa@abcdef-test.iam.gserviceaccount.com"
transfer_config_name = "projects/1234/locations/us/transferConfigs/abcd"

transfer_config = bigquery_datatransfer.TransferConfig(name=transfer_config_name)

transfer_config = transfer_client.update_transfer_config(
    {
        "transfer_config": transfer_config,
        "update_mask": field_mask_pb2.FieldMask(paths=["service_account_name"]),
        "service_account_name": service_account_name,
    }
)

print("Updated config: '{}'".format(transfer_config.name))

Set up a manual run on historical dates

In addition to scheduling a query to run in the future, you can also trigger immediate runs manually. Triggering an immediate run might be necessary if, for example, your query uses the run_date parameter and there were issues during a prior run.

For example, every day at 09:00 you query a source table for rows that match the current date. However, you find that data wasn't added to the source table for the last three days. In this situation, you can set the query to run on historical data within a date range that you specify. Your query runs using combinations of run_date and run_time parameters that correspond to the dates you configured in your scheduled query.

After setting up a scheduled query, here's how you can run the query by using a historical date range:

Console

After clicking Schedule to save your scheduled query, you can click the Scheduled queries button to see the list of currently scheduled queries. Click any display name to see the query schedule's details. At the top right of the page, click Schedule backfill to specify a historical date range.

Schedule backfill button.

The chosen runtimes are all within your selected range, including the first date and excluding the last date.

Set historic dates.

Example 1

Your scheduled query is set to run every day 09:00 Pacific Time. You're missing data from January 1, January 2, and January 3. Choose the following historic date range:

Start Time = 1/1/19
End Time = 1/4/19

Your query runs using run_date and run_time parameters that correspond to the following times:

  • 1/1/19 09:00 Pacific Time
  • 1/2/19 09:00 Pacific Time
  • 1/3/19 09:00 Pacific Time

Example 2

Your scheduled query is set to run every day 23:00 Pacific Time. You're missing data from January 1, January 2, and January 3. Choose the following historic date range (later dates are chosen because UTC has a different date at 23:00 Pacific Time):

Start Time = 1/2/19
End Time = 1/5/19

Your query runs using run_date and run_time parameters that correspond to the following times:

  • 1/2/19 06:00 UTC, or 1/1/2019 23:00 Pacific Time
  • 1/3/19 06:00 UTC, or 1/2/2019 23:00 Pacific Time
  • 1/4/19 06:00 UTC, or 1/3/2019 23:00 Pacific Time

After setting up manual runs, refresh the page to see them in the list of runs.

bq

To manually run the query on a historical date range:

Enter the bq mk command and supply the transfer run flag --transfer_run. The following flags are also required:

  • --start_time
  • --end_time
bq mk \
--transfer_run \
--start_time='start_time' \
--end_time='end_time' \
resource_name

Replace the following:

  • start_time and end_time. Timestamps that end in Z or contain a valid time zone offset. Examples:
    • 2017-08-19T12:11:35.00Z
    • 2017-05-25T00:00:00+00:00
  • resource_name. The scheduled query's (or transfer's) Resource Name. The Resource Name is also known as the transfer configuration.

For example, the following command schedules a backfill for scheduled query resource (or transfer configuration): projects/myproject/locations/us/transferConfigs/1234a123-1234-1a23-1be9-12ab3c456de7.

  bq mk \
  --transfer_run \
  --start_time 2017-05-25T00:00:00Z \
  --end_time 2017-05-25T00:00:00Z \
  projects/myproject/locations/us/transferConfigs/1234a123-1234-1a23-1be9-12ab3c456de7

For more information, see bq mk --transfer_run.

API

Use the projects.locations.transferConfigs.scheduleRun method and supply a path of the TransferConfig resource.

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.ScheduleTransferRunsRequest;
import com.google.cloud.bigquery.datatransfer.v1.ScheduleTransferRunsResponse;
import com.google.protobuf.Timestamp;
import java.io.IOException;
import org.threeten.bp.Clock;
import org.threeten.bp.Instant;
import org.threeten.bp.temporal.ChronoUnit;

// Sample to run schedule back fill for transfer config
public class ScheduleBackFill {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String configId = "MY_CONFIG_ID";
    Clock clock = Clock.systemDefaultZone();
    Instant instant = clock.instant();
    Timestamp startTime =
        Timestamp.newBuilder()
            .setSeconds(instant.minus(5, ChronoUnit.DAYS).getEpochSecond())
            .setNanos(instant.minus(5, ChronoUnit.DAYS).getNano())
            .build();
    Timestamp endTime =
        Timestamp.newBuilder()
            .setSeconds(instant.minus(2, ChronoUnit.DAYS).getEpochSecond())
            .setNanos(instant.minus(2, ChronoUnit.DAYS).getNano())
            .build();
    scheduleBackFill(configId, startTime, endTime);
  }

  public static void scheduleBackFill(String configId, Timestamp startTime, Timestamp endTime)
      throws IOException {
    try (DataTransferServiceClient client = DataTransferServiceClient.create()) {
      ScheduleTransferRunsRequest request =
          ScheduleTransferRunsRequest.newBuilder()
              .setParent(configId)
              .setStartTime(startTime)
              .setEndTime(endTime)
              .build();
      ScheduleTransferRunsResponse response = client.scheduleTransferRuns(request);
      System.out.println("Schedule backfill run successfully :" + response.getRunsCount());
    } catch (ApiException ex) {
      System.out.print("Schedule backfill was not run." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import datetime

from google.cloud.bigquery_datatransfer_v1 import (
    DataTransferServiceClient,
    StartManualTransferRunsRequest,
)

# Create a client object
client = DataTransferServiceClient()

# Replace with your transfer configuration name
transfer_config_name = "projects/1234/locations/us/transferConfigs/abcd"
now = datetime.datetime.now(datetime.timezone.utc)
start_time = now - datetime.timedelta(days=5)
end_time = now - datetime.timedelta(days=2)

# Some data sources, such as scheduled_query, only support daily runs.
# Truncate start_time and end_time to midnight time (00:00AM UTC).
start_time = datetime.datetime(
    start_time.year, start_time.month, start_time.day, tzinfo=datetime.timezone.utc
)
end_time = datetime.datetime(
    end_time.year, end_time.month, end_time.day, tzinfo=datetime.timezone.utc
)

requested_time_range = StartManualTransferRunsRequest.TimeRange(
    start_time=start_time,
    end_time=end_time,
)

# Initialize request argument(s)
request = StartManualTransferRunsRequest(
    parent=transfer_config_name,
    requested_time_range=requested_time_range,
)

# Make the request
response = client.start_manual_transfer_runs(request=request)

# Handle the response
print("Started manual transfer runs:")
for run in response.runs:
    print(f"backfill: {run.run_time} run: {run.name}")

Delete scheduled queries

Console

To delete a scheduled query through the console:

  1. In the navigation pane, click Scheduled queries.

  2. In the list of scheduled queries, click the name of the scheduled query that you want to delete.

  3. On the Scheduled query details page that opens, click Delete.

Java

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.DeleteTransferConfigRequest;
import java.io.IOException;

// Sample to delete a transfer config
public class DeleteTransferConfig {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // i.e projects/{project_id}/transferConfigs/{config_id}` or
    // `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`
    String configId = "MY_CONFIG_ID";
    deleteTransferConfig(configId);
  }

  public static void deleteTransferConfig(String configId) throws IOException {
    try (DataTransferServiceClient dataTransferServiceClient = DataTransferServiceClient.create()) {
      DeleteTransferConfigRequest request =
          DeleteTransferConfigRequest.newBuilder().setName(configId).build();
      dataTransferServiceClient.deleteTransferConfig(request);
      System.out.println("Transfer config deleted successfully");
    } catch (ApiException ex) {
      System.out.println("Transfer config was not deleted." + ex.toString());
    }
  }
}

Python

To learn how to install and use the client library for BigQuery, see BigQuery client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import google.api_core.exceptions
from google.cloud import bigquery_datatransfer

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

transfer_config_name = "projects/1234/locations/us/transferConfigs/abcd"
try:
    transfer_client.delete_transfer_config(name=transfer_config_name)
except google.api_core.exceptions.NotFound:
    print("Transfer config not found.")
else:
    print(f"Deleted transfer config: {transfer_config_name}")

Quotas

Scheduled queries are executed with the creator's credentials and project, as if you were executing the query yourself. Scheduled queries are always run as batch query jobs.

Although scheduled queries use features of BigQuery Data Transfer Service, they are not transfers and are not subject to the load jobs quota. Scheduled queries are subject to the same BigQuery quotas and limits as manual queries.

Pricing

Scheduled queries are priced the same as manual BigQuery queries.

Supported regions

Scheduled queries are supported in the following locations.

Regions

The following list shows the regions in the Americas where BigQuery is available:

  • Columbus, Ohio: us-east5
  • Dallas: us-south1 (Low CO2)
  • Iowa: us-central1 (Low CO2)
  • Las Vegas: us-west4
  • Los Angeles: us-west2
  • Mexico¹: northamerica-south1
  • Montréal¹: northamerica-northeast1 (Low CO2)
  • Northern Virginia: us-east4
  • Oregon: us-west1 (Low CO2)
  • Salt Lake City: us-west3
  • São Paulo: southamerica-east1 (Low CO2)
  • Santiago: southamerica-west1 (Low CO2)
  • South Carolina: us-east1
  • Toronto: northamerica-northeast2 (Low CO2)

The following list shows the regions in Asia Pacific where BigQuery is available:

  • Delhi: asia-south2
  • Hong Kong: asia-east2
  • Jakarta: asia-southeast2
  • Melbourne: australia-southeast2
  • Mumbai: asia-south1
  • Osaka¹: asia-northeast2
  • Seoul: asia-northeast3
  • Singapore: asia-southeast1
  • Sydney: australia-southeast1
  • Taiwan: asia-east1
  • Tokyo: asia-northeast1

The following list shows the regions in Europe where BigQuery is available:

  • Belgium: europe-west1 (Low CO2)
  • Berlin: europe-west10 (Low CO2)
  • Finland: europe-north1 (Low CO2)
  • Frankfurt: europe-west3 (Low CO2)
  • London: europe-west2 (Low CO2)
  • Madrid: europe-southwest1 (Low CO2)
  • Milan: europe-west8
  • Netherlands: europe-west4 (Low CO2)
  • Paris: europe-west9 (Low CO2)
  • Turin: europe-west12
  • Warsaw: europe-central2
  • Zürich: europe-west6 (Low CO2)

The following list shows the regions in the Middle East where BigQuery is available:

  • Dammam: me-central2
  • Doha: me-central1
  • Tel Aviv: me-west1

The following list shows the regions in Africa where BigQuery is available:

  • Johannesburg: africa-south1

¹ The Mexico, Montreal, and Osaka regions have three zones within one or two physical data centers. These regions are in the process of expanding to at least three physical data centers. For more information, see Cloud locations and Google Cloud Platform SLAs. To help improve the reliability of your workloads, consider a multi-regional deployment.

Multi-regions

The following list shows the multi-regions where BigQuery is available:

  • EU: Data centers within member states of the European Union¹
  • US: Data centers in the United States²

¹ Data located in the EU multi-region is only stored in one of the following locations: europe-west1 (Belgium) or europe-west4 (Netherlands). The exact location in which the data is stored and processed is determined automatically by BigQuery.

² Data located in the US multi-region is only stored in one of the following locations: us-central1 (Iowa), us-west1 (Oregon), or us-central2 (Oklahoma). The exact location in which the data is stored and processed is determined automatically by BigQuery.

What's next