BigQuery Migration API Client Libraries

This page shows how to get started with the Cloud Client Libraries for the BigQuery Migration API. Read more about the client libraries for Cloud APIs, including the older Google API Client Libraries, in Client Libraries Explained.

Installing the client library


Go

For more information, see Setting Up a Go Development Environment.

go get cloud.google.com/go/bigquery/migration/apiv2alpha


Java

For more information, see Setting Up a Java Development Environment.
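For Maven users, the library can be declared as a project dependency. The artifact coordinates below reflect the published name of the BigQuery Migration client at the time of writing; check Maven Central for the current version before copying this in:

```xml
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquerymigration</artifactId>
  <!-- Replace with the latest version from Maven Central. -->
  <version>LATEST_VERSION</version>
</dependency>
```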


Python

For more information, see Setting Up a Python Development Environment.

pip install --upgrade google-cloud-bigquery-migration

Setting up authentication

To run the client library, you must first set up authentication. One way to do that is to create a service account and set an environment variable, as shown in the following steps. For other ways to authenticate, see Authenticating as a service account.


Console

Create a service account:

  1. In the Cloud console, go to the Create service account page.

    Go to Create service account
  2. Select your project.
  3. In the Service account name field, enter a name. The Cloud console fills in the Service account ID field based on this name.

    In the Service account description field, enter a description. For example, Service account for quickstart.

  4. Click Create and continue.
  5. To provide access to your project, grant the Project > Owner role to your service account.

    In the Select a role list, select a role.

    For additional roles, click Add another role and add each additional role.

  6. Click Continue.
  7. Click Done to finish creating the service account.

    Do not close your browser window. You will use it in the next step.

Create a service account key:

  1. In the Cloud console, click the email address for the service account that you created.
  2. Click Keys.
  3. Click Add key, and then click Create new key.
  4. Click Create. A JSON key file is downloaded to your computer.
  5. Click Close.


gcloud

Set up authentication:

  1. Create the service account. Replace NAME with a name for the service account:

    gcloud iam service-accounts create NAME
  2. Grant roles to the service account. Run the following command once for each of the following IAM roles: roles/owner:

    gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:NAME@PROJECT_ID.iam.gserviceaccount.com" --role=ROLE

    Replace the following:

    • NAME: the name of the service account
    • PROJECT_ID: the project ID where you created the service account
    • ROLE: the role to grant
  3. Generate the key file:

    gcloud iam service-accounts keys create FILE_NAME.json --iam-account=NAME@PROJECT_ID.iam.gserviceaccount.com

    Replace the following:

    • FILE_NAME: a name for the key file
    • NAME: the name of the service account
    • PROJECT_ID: the project ID where you created the service account
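Taken together, the three gcloud steps above look like the following script. The service account name and project ID here are example values for this sketch, not values prescribed by this guide:

```shell
# Example values -- substitute your own service account name and project ID.
NAME=quickstart-sa
PROJECT_ID=my-project
ROLE=roles/owner

# 1. Create the service account.
gcloud iam service-accounts create "$NAME"

# 2. Grant the role on the project to the service account.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="$ROLE"

# 3. Generate a JSON key for the service account.
gcloud iam service-accounts keys create key.json \
  --iam-account="$NAME@$PROJECT_ID.iam.gserviceaccount.com"
```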

Provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS. This variable applies only to your current shell session. If you want the variable to apply to future shell sessions, set the variable in your shell startup file, for example in the ~/.bashrc or ~/.profile file.

Linux or macOS

export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

Replace KEY_PATH with the path of the JSON file that contains your service account key.

For example:

export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"


Windows

For PowerShell:

$env:GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

Replace KEY_PATH with the path of the JSON file that contains your service account key.

For example:

$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\service-account-file.json"

For command prompt:

set GOOGLE_APPLICATION_CREDENTIALS=KEY_PATH

Replace KEY_PATH with the path of the JSON file that contains your service account key.

Using the client library

The following example demonstrates some basic interactions with the BigQuery Migration API.


To use this sample, prepare your machine for Go development, and complete the BigQuery Migration API quickstart. For more information, see the BigQuery Migration API Go API reference documentation.

// The bigquery_migration_quickstart application demonstrates basic usage of the
// BigQuery migration API by executing a workflow that performs a batch SQL
// translation task.
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"time"

	migration "cloud.google.com/go/bigquery/migration/apiv2alpha"
	"google.golang.org/protobuf/types/known/anypb"

	translationtaskpb "google.golang.org/genproto/googleapis/cloud/bigquery/migration/tasks/translation/v2alpha"
	migrationpb "google.golang.org/genproto/googleapis/cloud/bigquery/migration/v2alpha"
)

func main() {
	// Define command line flags for controlling the behavior of this quickstart.
	projectID := flag.String("project_id", "", "Cloud Project ID.")
	location := flag.String("location", "us", "BigQuery Migration location used for interactions.")
	outputPath := flag.String("output", "", "Cloud Storage path for translated resources.")
	// Parse flags and do some minimal validation.
	flag.Parse()
	if *projectID == "" {
		log.Fatal("empty --project_id specified, please provide a valid project ID")
	}
	if *location == "" {
		log.Fatal("empty --location specified, please provide a valid location")
	}
	if *outputPath == "" {
		log.Fatal("empty --output specified, please provide a valid cloud storage path")
	}

	ctx := context.Background()
	migClient, err := migration.NewClient(ctx)
	if err != nil {
		log.Fatalf("migration.NewClient: %v", err)
	}
	defer migClient.Close()

	workflow, err := executeTranslationWorkflow(ctx, migClient, *projectID, *location, *outputPath)
	if err != nil {
		log.Fatalf("workflow execution failed: %v", err)
	}

	reportWorkflowStatus(workflow)
}

// executeTranslationWorkflow constructs a migration workflow that performs batch SQL translation.
func executeTranslationWorkflow(ctx context.Context, client *migration.Client, projectID, location, outPath string) (*migrationpb.MigrationWorkflow, error) {

	// Tasks are extensible; the translation task is defined by the BigQuery Migration API, and so we construct the appropriate
	// details for the task.
	detailsTranslation := &translationtaskpb.TranslationTaskDetails{
		// The path to objects in cloud storage containing queries to be translated.  This is a prefix to some input text files.
		InputPath: "gs://cloud-samples-data/bigquery/migration/translation/input/",
		// The path to objects in cloud storage containing DDL create statements.  This is a prefix to some input DDL text files.
		SchemaPath: "gs://cloud-samples-data/bigquery/migration/translation/schema/",
		// This is the cloud storage path where results will be written.  In this case it will contain translated queries,
		// and possibly error files.
		OutputPath: outPath,
	}

	// We then convert the task details for translation into the suitable protobuf `Any` representation needed
	// to define the workflow.
	detailsAny, err := anypb.New(detailsTranslation)
	if err != nil {
		return nil, err
	}

	// Finally, construct the workflow creation request.
	req := &migrationpb.CreateMigrationWorkflowRequest{
		Parent: fmt.Sprintf("projects/%s/locations/%s", projectID, location),
		MigrationWorkflow: &migrationpb.MigrationWorkflow{
			DisplayName: "example SQL conversion",
			Tasks: map[string]*migrationpb.MigrationTask{
				"example_conversion": {
					Type:    "Translation_Teradata",
					Details: detailsAny,
				},
			},
		},
	}

	// Create the workflow using the request.
	workflow, err := client.CreateMigrationWorkflow(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("CreateMigrationWorkflow: %w", err)
	}

	// This is an asynchronous process, so we now poll the workflow
	// until completion or a suitable timeout has elapsed.
	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Minute)
	defer cancel()
	for {
		select {
		case <-timeoutCtx.Done():
			return nil, fmt.Errorf("task %s didn't complete due to context expiring", workflow.GetName())
		default:
			polledWorkflow, err := client.GetMigrationWorkflow(timeoutCtx, &migrationpb.GetMigrationWorkflowRequest{
				Name: workflow.GetName(),
			})
			if err != nil {
				return nil, fmt.Errorf("polling ended in error: %w", err)
			}
			if polledWorkflow.GetState() == migrationpb.MigrationWorkflow_COMPLETED {
				// polledWorkflow contains the most recent metadata about the workflow, so we return that.
				return polledWorkflow, nil
			}
			// workflow still isn't complete, so sleep briefly before polling again.
			time.Sleep(5 * time.Second)
		}
	}
}

// reportWorkflowStatus prints information about the workflow execution in a more human readable format.
func reportWorkflowStatus(workflow *migrationpb.MigrationWorkflow) {
	fmt.Printf("Migration workflow %s ended in state %s.\n", workflow.GetName(), workflow.GetState().String())
	for k, task := range workflow.GetTasks() {
		fmt.Printf(" - Task %s had id %s", k, task.GetId())
		if task.GetProcessingError() != nil {
			fmt.Printf(" with processing error: %s", task.GetProcessingError().GetReason())
		}
		fmt.Println()
	}
}

What's next?

For more background, see the Introduction to the BigQuery Migration Service page.