Create a data pipeline

This quickstart shows you how to:

  1. Create a Cloud Data Fusion instance.
  2. Deploy a sample pipeline that's provided with your Cloud Data Fusion instance. The pipeline does the following:
    1. Reads a JSON file containing NYT bestseller data from Cloud Storage.
    2. Runs transformations on the file to parse and clean the data.
    3. Loads the top-rated books added in the last week that cost less than $25 into BigQuery.
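The filtering step the sample pipeline performs can be sketched in plain Python. This is an illustration only: the field names (`rating`, `price`, `added_date`) and the rating threshold are assumptions, not the actual NYT dataset schema or the pipeline's exact criteria.

```python
import json
from datetime import datetime, timedelta

def top_rated_inexpensive(records, today, max_price=25.0, min_rating=4.5):
    """Keep top-rated books added within the last week that cost less
    than max_price, mirroring the sample pipeline's filter.
    Field names and the rating threshold are illustrative assumptions."""
    one_week_ago = today - timedelta(days=7)
    results = []
    for rec in records:
        added = datetime.strptime(rec["added_date"], "%Y-%m-%d")
        if (rec["rating"] >= min_rating
                and rec["price"] < max_price
                and added >= one_week_ago):
            results.append(rec)
    return results

# Hypothetical sample records, standing in for the JSON file in Cloud Storage.
books = [
    {"title": "A", "rating": 4.8, "price": 12.99, "added_date": "2024-06-03"},
    {"title": "B", "rating": 3.9, "price": 9.99,  "added_date": "2024-06-04"},
    {"title": "C", "rating": 4.9, "price": 35.00, "added_date": "2024-06-02"},
]
print([b["title"] for b in top_rated_inexpensive(books, datetime(2024, 6, 5))])
```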

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Enable the Cloud Data Fusion API.

    Enable the API


Create a Cloud Data Fusion instance

  1. Click Create an instance.

    Go to Instances

  2. Enter an Instance name.
  3. Enter a Description for your instance.
  4. Enter the Region in which to create the instance.
  5. Choose the Cloud Data Fusion Version to use.
  6. Choose the Cloud Data Fusion Edition.
  7. For Cloud Data Fusion versions 6.2.3 and later, in the Authorization field, choose the Dataproc service account to use for running your Cloud Data Fusion pipeline in Dataproc. The default value, Compute Engine account, is pre-selected.
  8. Click Create. The creation process can take up to 30 minutes. While Cloud Data Fusion creates your instance, a progress wheel appears next to the instance name on the Instances page. When the process completes, the wheel turns into a green check mark, indicating that the instance is ready to use.
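If you prefer the command line, an instance can also be created with the gcloud CLI. This is a sketch only: INSTANCE_NAME and REGION are placeholders, it requires an authenticated gcloud session with the API enabled, and flag names can vary by gcloud version, so confirm them with the built-in help before running.

```shell
# Hypothetical sketch: create a Basic-edition Cloud Data Fusion instance.
# Verify current flags with: gcloud beta data-fusion instances create --help
gcloud beta data-fusion instances create INSTANCE_NAME \
    --location=REGION \
    --edition=basic
```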

When using Cloud Data Fusion, you use both the console and the separate Cloud Data Fusion UI.

  • In the console, you can create a Google Cloud project, create and delete Cloud Data Fusion instances, and view Cloud Data Fusion instance details.

  • In the Cloud Data Fusion web UI, you can use the various pages, such as Studio or Wrangler, to use Cloud Data Fusion functionality.

To navigate the Cloud Data Fusion UI, follow these steps:

  1. In the console, open the Instances page.

    Go to Instances

  2. In the instance Actions column, click the View Instance link.
  3. In the Cloud Data Fusion web UI, use the left navigation panel to navigate to the page you need.

Deploy a sample pipeline

Sample pipelines are available through the Cloud Data Fusion Hub, which lets you share reusable Cloud Data Fusion pipelines, plugins, and solutions.

  1. In the Cloud Data Fusion web UI, click Hub.
  2. In the left panel, click Pipelines.
  3. Click the Cloud Data Fusion Quickstart pipeline.
  4. Click Create.
  5. In the Cloud Data Fusion Quickstart configuration panel, click Finish.
  6. Click Customize Pipeline. A visual representation of your pipeline appears on the Studio page, which is a graphical interface for developing data integration pipelines. Available pipeline plugins are listed on the left, and your pipeline is displayed on the main canvas area. You can explore your pipeline by holding the pointer over each pipeline node and clicking Properties. The properties menu for each node lets you view the objects and operations associated with the node.
  7. In the top-right menu, click Deploy. This submits the pipeline to Cloud Data Fusion. You will execute the pipeline in the next section of this quickstart.

View your pipeline

The deployed pipeline appears in the pipeline details view, where you can do the following:

  • View the pipeline's structure and configuration.
  • Run the pipeline manually or set up a schedule or a trigger.
  • View a summary of the pipeline's historical runs, including execution times, logs, and metrics.

Execute your pipeline

In the pipeline details view, click Run to execute your pipeline.


View the results

After a few minutes, the pipeline finishes. The pipeline status changes to Succeeded and the number of records processed by each node is displayed.

  1. Go to the BigQuery UI.
  2. To view a sample of the results, go to the DataFusionQuickstart dataset in your project, click the top_rated_inexpensive table, and then run a simple query, such as:

    SELECT * FROM `PROJECT_ID.DataFusionQuickstart.top_rated_inexpensive` LIMIT 10

    Replace PROJECT_ID with your project's ID.
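The same spot check can be run from the command line with the bq CLI; a sketch, assuming the DataFusionQuickstart dataset name used above and PROJECT_ID replaced with your project's ID:

```shell
# Query the pipeline's output table from the command line.
bq query --use_legacy_sql=false \
    'SELECT * FROM `PROJECT_ID.DataFusionQuickstart.top_rated_inexpensive` LIMIT 10'
```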


Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

  1. Delete the BigQuery dataset your pipeline wrote to in this quickstart.
  2. Delete the Cloud Data Fusion instance.

  3. (Optional) Delete the project.

    1. In the console, go to the Manage resources page.

      Go to Manage resources

    2. In the project list, select the project that you want to delete, and then click Delete.
    3. In the dialog, type the project ID, and then click Shut down to delete the project.
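The cleanup steps above can also be sketched from the command line. All names are placeholders, the dataset name assumes the DataFusionQuickstart dataset used in this quickstart, and these commands are destructive, so double-check each one before running.

```shell
# Delete the BigQuery dataset the pipeline wrote to (recursive, no prompt).
bq rm -r -f PROJECT_ID:DataFusionQuickstart

# Delete the Cloud Data Fusion instance.
gcloud beta data-fusion instances delete INSTANCE_NAME --location=REGION

# Optional: delete the entire project.
gcloud projects delete PROJECT_ID
```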

What's next