Translate SQL queries with the translation API

This document describes how to use the translation API in BigQuery to translate scripts written in other SQL dialects into GoogleSQL queries. The translation API can simplify the process of migrating workloads to BigQuery.

Before you begin

Before you submit a translation job, complete the following steps:

  1. Ensure that you have all the required permissions.
  2. Enable the BigQuery Migration API.
  3. Collect the source files containing the SQL scripts and queries to be translated.
  4. Upload the source files to Cloud Storage.

Required permissions

To get the permissions that you need to create translation jobs using the translation API, ask your administrator to grant you the MigrationWorkflow Editor (roles/bigquerymigration.editor) IAM role on the parent resource. For more information about granting roles, see Manage access.

This predefined role contains the following permissions, which are required to create translation jobs using the translation API:

  • bigquerymigration.workflows.create
  • bigquerymigration.workflows.get

You might also be able to get these permissions with custom roles or other predefined roles.
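
For example, an administrator can grant the role with the Google Cloud CLI. In the following command, PROJECT_ID and USER_EMAIL are placeholders for your project and the user receiving the role:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/bigquerymigration.editor"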

Enable the BigQuery Migration API

If your Google Cloud project was created before February 15, 2022, enable the BigQuery Migration API as follows:

  1. In the Google Cloud console, go to the BigQuery Migration API page.

    Go to BigQuery Migration API

  2. Click Enable.
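
Alternatively, you can enable the API with the Google Cloud CLI, replacing PROJECT_ID with your project ID:

gcloud services enable bigquerymigration.googleapis.com --project=PROJECT_ID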

Upload input files to Cloud Storage

If you want to use the Google Cloud console or the BigQuery Migration API to perform a translation job, you must upload the source files containing the queries and scripts you want to translate to Cloud Storage. You can also upload any metadata files or configuration YAML files to the same Cloud Storage bucket containing the source files. For more information about creating buckets and uploading files to Cloud Storage, see Create buckets and Upload objects from a filesystem.
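
For example, the following gcloud CLI commands create a bucket and upload a local directory of SQL files. The bucket name, location, and local path are placeholders:

gcloud storage buckets create gs://BUCKET_NAME --location=LOCATION
gcloud storage cp --recursive ./sql-scripts gs://BUCKET_NAME/input/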

Supported task types

The translation API can translate the following SQL dialects into GoogleSQL:

  • Amazon Redshift SQL - Redshift2BigQuery_Translation
  • Apache HiveQL and Beeline CLI - HiveQL2BigQuery_Translation
  • Apache Spark SQL - SparkSQL2BigQuery_Translation
  • Azure Synapse T-SQL - AzureSynapse2BigQuery_Translation
  • IBM Netezza SQL and NZPLSQL - Netezza2BigQuery_Translation
  • MySQL SQL - MySQL2BigQuery_Translation
  • Oracle SQL, PL/SQL, and Exadata SQL - Oracle2BigQuery_Translation
  • PostgreSQL SQL - Postgresql2BigQuery_Translation
  • Presto or Trino SQL - Presto2BigQuery_Translation
  • Snowflake SQL - Snowflake2BigQuery_Translation
  • SQL Server T-SQL - SQLServer2BigQuery_Translation
  • Teradata and Teradata Vantage - Teradata2BigQuery_Translation
  • Vertica SQL - Vertica2BigQuery_Translation

The translation API also supports the following translation tasks:

  • Translate Oracle SQL to PostgreSQL SQL - Oracle2Postgresql_Translation
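
In a request, the task type is the value of the type field, and the key of the tasks map is a task name of your choice (shown as "string" in the examples later in this document). For instance, a fragment requesting a Teradata-to-BigQuery translation might look like the following, with the remaining task fields elided:

"tasks": {
  "string": {
    "type": "Teradata2BigQuery_Translation",
    ...
  }
}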

Locations

The translation API is available in the following processing locations:

Region description       Region name             Details
Asia Pacific
Tokyo                    asia-northeast1
Mumbai                   asia-south1
Singapore                asia-southeast1
Sydney                   australia-southeast1
Europe
EU multi-region          eu
Warsaw                   europe-central2
Finland                  europe-north1           Low CO2
Madrid                   europe-southwest1
Belgium                  europe-west1            Low CO2
London                   europe-west2            Low CO2
Frankfurt                europe-west3            Low CO2
Netherlands              europe-west4
Zürich                   europe-west6            Low CO2
Paris                    europe-west9            Low CO2
Turin                    europe-west12
Americas
São Paulo                southamerica-east1      Low CO2
US multi-region          us
Iowa                     us-central1             Low CO2
South Carolina           us-east1
Northern Virginia        us-east4
Columbus, Ohio           us-east5
Dallas                   us-south1
Oregon                   us-west1                Low CO2
Los Angeles              us-west2
Salt Lake City           us-west3

Submit a translation job

To submit a translation job using the translation API, use the projects.locations.workflows.create method and supply an instance of the MigrationWorkflow resource with a supported task type.

Example: Create a batch translation job

The following curl command creates a batch translation job where the input and output files are stored in Cloud Storage. The source_target_mapping field contains a list that maps the source directory entries to an optional relative path for the target output.

curl -d "{
  \"tasks\": {
      string: {
        \"type\": \"TYPE\",
        \"translation_details\": {
            \"target_base_uri\": \"TARGET_BASE\",
            \"source_target_mapping\": {
              \"source_spec\": {
                  \"base_uri\": \"BASE\"
              }
            },
        }
      }
  }
  }" \
  -H "Content-Type:application/json" \
  -H "Authorization: Bearer TOKEN" -X POST https://bigquerymigration.googleapis.com/v2alpha/projects/PROJECT_ID/locations/LOCATION/workflows

Replace the following:

  • TYPE: the task type of the translation, which determines the source and target dialect.
  • TARGET_BASE: the base URI for all translation outputs.
  • BASE: the base URI for all files read as sources for translation.
  • TOKEN: the token for authentication. To generate a token, use the gcloud auth print-access-token command or the OAuth 2.0 playground (use the scope https://www.googleapis.com/auth/cloud-platform).
  • PROJECT_ID: the project to process the translation.
  • LOCATION: the location where the job is processed.

The preceding command returns a response that includes a workflow ID written in the format projects/PROJECT_ID/locations/LOCATION/workflows/WORKFLOW_ID.
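
The response body is the created MigrationWorkflow resource. A trimmed, illustrative example follows; the exact fields and values vary by job:

{
  "name": "projects/PROJECT_ID/locations/LOCATION/workflows/WORKFLOW_ID",
  "state": "RUNNING"
}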

Example: Create an interactive translation job with string literal inputs and outputs

The following curl command creates a translation job with string literal inputs and outputs. The source_target_mapping field contains a list that maps the source literal entries to an optional relative path for the target output.

curl -d "{
  \"tasks\": {
      string: {
        \"type\": \"TYPE\",
        \"translation_details\": {
        \"source_target_mapping\": {
            \"source_spec\": {
              \"literal\": {
              \"relative_path\": \"PATH\",
              \"literal_string\": \"STRING\"
              }
            }
        },
        \"target_return_literals\": \"TARGETS\",
        }
      }
  }
  }" \
  -H "Content-Type:application/json" \
  -H "Authorization: Bearer TOKEN" -X POST https://bigquerymigration.googleapis.com/v2alpha/projects/PROJECT_ID/locations/LOCATION/workflows

Replace the following:

  • TYPE: the task type of the translation, which determines the source and target dialect.
  • PATH: the identifier of the literal entry, similar to a filename or path.
  • STRING: string of literal input data (for example, SQL) to be translated.
  • TARGETS: the expected targets that you want returned directly in the response, in literal format. Targets must be in the target URI format (for example, GENERATED_DIR + target_spec.relative_path + source_spec.literal.relative_path). Anything not in this list is not returned in the response. The generated directory, GENERATED_DIR, for general SQL translations is sql/. See the worked example after this list.
  • TOKEN: the token for authentication. To generate a token, use the gcloud auth print-access-token command or the OAuth 2.0 playground (use the scope https://www.googleapis.com/auth/cloud-platform).
  • PROJECT_ID: the project to process the translation.
  • LOCATION: the location where the job is processed.
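
As a worked example, suppose you submit a single literal entry whose relative_path is query1.sql. Because the generated directory for general SQL translations is sql/, the translated output for that entry is addressed as sql/query1.sql, which is the value to request in target_return_literals. The SQL string below is only illustrative:

"source_target_mapping": [{
    "source_spec": {
        "literal": {
            "relative_path": "query1.sql",
            "literal_string": "SELECT 1;"
        }
    }
}],
"target_return_literals": ["sql/query1.sql"]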

The preceding command returns a response that includes a workflow ID written in the format projects/PROJECT_ID/locations/LOCATION/workflows/WORKFLOW_ID.

Explore the translation output

After running the translation job, retrieve the results by specifying the translation job workflow ID using the following command:

curl \
-H "Content-Type:application/json" \
-H "Authorization:Bearer TOKEN" -X GET https://bigquerymigration.googleapis.com/v2alpha/projects/PROJECT_ID/locations/LOCATION/workflows/WORKFLOW_ID

Replace the following:

  • TOKEN: the token for authentication. To generate a token, use the gcloud auth print-access-token command or the OAuth 2.0 playground (use the scope https://www.googleapis.com/auth/cloud-platform).
  • PROJECT_ID: the project to process the translation.
  • LOCATION: the location where the job is processed.
  • WORKFLOW_ID: the ID generated when you create a translation workflow.
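
The response is the MigrationWorkflow resource for the job. The following is a minimal polling sketch in bash, assuming the resource's state field reports values such as RUNNING and COMPLETED and that jq is installed:

while true; do
  STATE=$(curl -s \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://bigquerymigration.googleapis.com/v2alpha/projects/PROJECT_ID/locations/LOCATION/workflows/WORKFLOW_ID" \
    | jq -r '.state')
  echo "Workflow state: ${STATE}"
  if [ "${STATE}" = "COMPLETED" ]; then
    break
  fi
  sleep 10
done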