BigQuery Migration API Client Libraries

This page shows how to get started with the Cloud Client Libraries for the BigQuery Migration API. Read more about the client libraries for Cloud APIs, including the older Google API Client Libraries, in Client Libraries Explained.

Installing the client library

Go

For more information, see Setting Up a Go Development Environment.

go get cloud.google.com/go/bigquery

Java

For more information, see Setting Up a Java Development Environment.
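
If you use Maven, you can add the client library as a dependency in your pom.xml file. The coordinates below follow the standard Cloud Java artifact naming (google-cloud-bigquerymigration); check Maven Central for the current release:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquerymigration</artifactId>
  <version>VERSION</version>
</dependency>

Replace VERSION with the latest version listed on Maven Central.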

Python

For more information, see Setting Up a Python Development Environment.

pip install --upgrade google-cloud-bigquery-migration

Setting up authentication

To run the client library, you must first set up authentication. One way to do that is to create a service account and set an environment variable, as shown in the following steps. For other ways to authenticate, see Authenticating as a service account.

Cloud Console

Create a service account:

  1. In the Cloud Console, go to the Create service account page.

    Go to Create service account
  2. Select a project.
  3. In the Service account name field, enter a name. The Cloud Console fills in the Service account ID field based on this name.

    In the Service account description field, enter a description. For example, Service account for quickstart.

  4. Click Create and continue.
  5. Click the Select a role field.

    Under Quick access, click Basic, then click Owner.

  6. Click Continue.
  7. Click Done to finish creating the service account.

    Do not close your browser window. You will use it in the next step.

Create a service account key:

  1. In the Cloud Console, click the email address for the service account that you created.
  2. Click Keys.
  3. Click Add key, then click Create new key.
  4. Click Create. A JSON key file is downloaded to your computer.
  5. Click Close.

Command line

You can run the following commands using the Cloud SDK on your local machine or in Cloud Shell.

  1. Create the service account. Replace NAME with a name for the service account.

    gcloud iam service-accounts create NAME
  2. Grant permissions to the service account. Replace PROJECT_ID with your project ID.

    gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:NAME@PROJECT_ID.iam.gserviceaccount.com" --role="roles/owner"
  3. Generate the key file. Replace FILE_NAME with a name for the key file.

    gcloud iam service-accounts keys create FILE_NAME.json --iam-account=NAME@PROJECT_ID.iam.gserviceaccount.com
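
For example, with a service account named quickstart-sa in the project my-project (both names here are illustrative), the commands look like this:

    gcloud iam service-accounts create quickstart-sa
    gcloud projects add-iam-policy-binding my-project --member="serviceAccount:quickstart-sa@my-project.iam.gserviceaccount.com" --role="roles/owner"
    gcloud iam service-accounts keys create key.json --iam-account=quickstart-sa@my-project.iam.gserviceaccount.com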

Provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS. This variable applies only to your current shell session. If you want the variable to apply to future shell sessions, set the variable in your shell startup file, for example in the ~/.bashrc or ~/.profile file.

Linux or macOS

export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

Replace KEY_PATH with the path of the JSON file that contains your service account key.

For example:

export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"

Windows

For PowerShell:

$env:GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

Replace KEY_PATH with the path of the JSON file that contains your service account key.

For example:

$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\service-account-file.json"

For command prompt:

set GOOGLE_APPLICATION_CREDENTIALS=KEY_PATH

Replace KEY_PATH with the path of the JSON file that contains your service account key.
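
As an alternative to setting the environment variable, most Cloud client libraries also let you pass the key file path directly when you create a client. The following minimal Go sketch uses the standard option.WithCredentialsFile client option for this; the key file path is a placeholder:

package main

import (
	"context"
	"log"

	migration "cloud.google.com/go/bigquery/migration/apiv2alpha"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Load credentials from an explicit key file instead of relying on the
	// GOOGLE_APPLICATION_CREDENTIALS environment variable. The path below is
	// a placeholder for your downloaded service account key.
	client, err := migration.NewClient(ctx, option.WithCredentialsFile("/path/to/service-account-file.json"))
	if err != nil {
		log.Fatalf("migration.NewClient: %v", err)
	}
	defer client.Close()
	log.Println("client created successfully")
}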

Using the client library

The following example demonstrates some basic interactions with the BigQuery Migration API.

Go

To use this sample, prepare your machine for Go development, and complete the BigQuery Migration API quickstart. For more information, see the BigQuery Migration API Go API reference documentation.


// The bigquery_migration_quickstart application demonstrates basic usage of the
// BigQuery migration API by executing a workflow that performs a batch SQL
// translation task.
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"time"

	migration "cloud.google.com/go/bigquery/migration/apiv2alpha"
	translationtaskpb "google.golang.org/genproto/googleapis/cloud/bigquery/migration/tasks/translation/v2alpha"
	migrationpb "google.golang.org/genproto/googleapis/cloud/bigquery/migration/v2alpha"
	"google.golang.org/protobuf/types/known/anypb"
)

func main() {
	// Define command line flags for controlling the behavior of this quickstart.
	projectID := flag.String("project_id", "", "Cloud Project ID.")
	location := flag.String("location", "us", "BigQuery Migration location used for interactions.")
	outputPath := flag.String("output", "", "Cloud Storage path for translated resources.")
	// Parse flags and do some minimal validation.
	flag.Parse()
	if *projectID == "" {
		log.Fatal("empty --project_id specified, please provide a valid project ID")
	}
	if *location == "" {
		log.Fatal("empty --location specified, please provide a valid location")
	}
	if *outputPath == "" {
		log.Fatal("empty --output specified, please provide a valid Cloud Storage path")
	}

	ctx := context.Background()
	migClient, err := migration.NewClient(ctx)
	if err != nil {
		log.Fatalf("migration.NewClient: %v", err)
	}
	defer migClient.Close()

	workflow, err := executeTranslationWorkflow(ctx, migClient, *projectID, *location, *outputPath)
	if err != nil {
		log.Fatalf("workflow execution failed: %v", err)
	}

	reportWorkflowStatus(workflow)
}

// executeTranslationWorkflow constructs a migration workflow that performs batch SQL translation.
func executeTranslationWorkflow(ctx context.Context, client *migration.Client, projectID, location, outPath string) (*migrationpb.MigrationWorkflow, error) {

	// Tasks are extensible; the translation task is defined by the BigQuery Migration API, and so we construct the appropriate
	// details for the task.
	detailsTranslation := &translationtaskpb.TranslationTaskDetails{
		// The path to objects in Cloud Storage containing queries to be translated.  This is a prefix to the input text files.
		InputPath: "gs://cloud-samples-data/bigquery/migration/translation/input/",
		// The path to objects in Cloud Storage containing DDL create statements.  This is a prefix to the input DDL text files.
		SchemaPath: "gs://cloud-samples-data/bigquery/migration/translation/schema/",
		// The Cloud Storage path where results will be written.  In this case it will contain translated queries,
		// and possibly error files.
		OutputPath: outPath,
	}

	// We then convert the task details for translation into the suitable protobuf `Any` representation needed
	// to define the workflow.
	detailsAny, err := anypb.New(detailsTranslation)
	if err != nil {
		return nil, err
	}

	// Finally, construct the workflow creation request.
	req := &migrationpb.CreateMigrationWorkflowRequest{
		Parent: fmt.Sprintf("projects/%s/locations/%s", projectID, location),
		MigrationWorkflow: &migrationpb.MigrationWorkflow{
			DisplayName: "example SQL conversion",
			Tasks: map[string]*migrationpb.MigrationTask{
				"example_conversion": {
					Type:    "Translation_Teradata",
					Details: detailsAny,
				},
			},
		},
	}

	// Create the workflow using the request.
	workflow, err := client.CreateMigrationWorkflow(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("CreateMigrationWorkflow: %v", err)
	}

	// This is an asynchronous process, so we now poll the workflow
	// until it completes or a suitable timeout elapses.
	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Minute)
	defer cancel()
	for {
		select {
		case <-timeoutCtx.Done():
			return nil, fmt.Errorf("task %s didn't complete due to context expiring", workflow.GetName())
		default:
			polledWorkflow, err := client.GetMigrationWorkflow(timeoutCtx, &migrationpb.GetMigrationWorkflowRequest{
				Name: workflow.GetName(),
			})
			if err != nil {
				return nil, fmt.Errorf("polling ended in error: %v", err)
			}
			if polledWorkflow.GetState() == migrationpb.MigrationWorkflow_COMPLETED {
				// polledWorkflow contains the most recent metadata about the workflow, so we return that.
				return polledWorkflow, nil
			}
			// workflow still isn't complete, so sleep briefly before polling again.
			time.Sleep(5 * time.Second)
		}
	}
}

// reportWorkflowStatus prints information about the workflow execution in a human-readable format.
func reportWorkflowStatus(workflow *migrationpb.MigrationWorkflow) {
	fmt.Printf("Migration workflow %s ended in state %s.\n", workflow.GetName(), workflow.GetState().String())
	for k, task := range workflow.GetTasks() {
		fmt.Printf(" - Task %s had id %s", k, task.GetId())
		if task.GetProcessingError() != nil {
			fmt.Printf(" with processing error: %s", task.GetProcessingError().GetReason())
		}
		fmt.Println()
	}
}
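
To try the quickstart, run it with the flags defined in main. For example, assuming the sample is saved in the current directory, and using my-project and my-bucket as placeholders for your project ID and Cloud Storage bucket:

go run . --project_id=my-project --output=gs://my-bucket/translation-output/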

What's next?

For more background, see the Introduction to the BigQuery Migration Service page.