Using Dataflow SQL

This tutorial shows you how to run Dataflow jobs using SQL and the Dataflow SQL UI. It walks you through an example that joins a stream of data from Pub/Sub with data from a BigQuery table.

Objectives

In this tutorial, you:

  • Use SQL to join Pub/Sub streaming data with BigQuery table data
  • Deploy a Dataflow job from the Dataflow SQL UI

Costs

This tutorial uses billable components of Google Cloud, including:

  • Dataflow
  • Cloud Storage
  • Pub/Sub

Use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  4. Enable the Cloud Dataflow, Compute Engine, Stackdriver Logging, Cloud Storage, Cloud Storage JSON, BigQuery, Cloud Pub/Sub, and Cloud Resource Manager APIs.

    Enable the APIs

  5. Set up authentication:
    1. In the Cloud Console, go to the Create service account key page.

      Go to the Create Service Account Key page
    2. From the Service account list, select New service account.
    3. In the Service account name field, enter a name.
    4. From the Role list, select Project > Owner.

      Note: The Role field authorizes your service account to access resources. You can view and change this field later by using the Cloud Console. If you are developing a production app, specify more granular permissions than Project > Owner. For more information, see granting roles to service accounts.
    5. Click Create. A JSON file that contains your key downloads to your computer.
  6. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. This variable only applies to your current shell session, so if you open a new session, set the variable again.

  7. Install and initialize the Cloud SDK. Choose one of the installation options. You might need to set the project property to the project that you are using for this walkthrough.
  8. Go to the BigQuery web UI in the Cloud Console. This opens your most recently accessed project. To switch to a different project, click the name of the project at the top of the BigQuery web UI, and search for the project you want to use.
    Go to the BigQuery web UI
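
Optionally, you can confirm that your credentials and project are picked up correctly before continuing. The following minimal sketch assumes you have installed the google-cloud-bigquery Python client library, which is not otherwise required for this tutorial; it creates a client with your default credentials and lists the datasets in your project.

# Optional setup check. Assumes: pip install google-cloud-bigquery
from google.cloud import bigquery

# The client reads GOOGLE_APPLICATION_CREDENTIALS from the environment.
client = bigquery.Client()

print("Using project:", client.project)
for dataset in client.list_datasets():
    print("Found dataset:", dataset.dataset_id)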

Switch to the Dataflow SQL UI

In the BigQuery web UI, follow these steps to switch to the Dataflow SQL UI.

  1. Click the More drop-down menu and select Query settings.

  2. In the Query settings menu that opens on the right, select Dataflow engine.

  3. If your project does not have the Dataflow and Data Catalog APIs enabled, you will be prompted to enable them. Click Enable APIs. Enabling the Dataflow and Data Catalog APIs might take a few minutes.

  4. When enabling the APIs is complete, click Save.

Create example sources

To follow the example provided in this tutorial, create the following sources and use them in the later steps. A scripted way to create both sources is sketched after this list.

  • A Pub/Sub topic called transactions - A stream of transaction data that arrives via a subscription to the Pub/Sub topic. The data for each transaction includes information like the product purchased, the sale price, and the city and state in which the purchase occurred. After you create the Pub/Sub topic, you create a script that publishes messages to your topic. You will run this script in a later section of this tutorial.
  • A BigQuery table called us_state_salesregions - A table that provides a mapping of states to sales regions. Before you create this table, you need to create a BigQuery dataset.
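
If you prefer to create these sources from a script rather than in the Cloud Console, the following Python sketch shows one possible approach. It assumes the google-cloud-pubsub and google-cloud-bigquery client libraries are installed, and it gives us_state_salesregions a two-column layout (state_code, sales_region) that matches the columns referenced by the queries later in this tutorial; the sample rows are placeholders, not a real state-to-region mapping.

# Hypothetical helper for creating the example sources.
# Assumes: pip install google-cloud-pubsub google-cloud-bigquery
from google.cloud import bigquery, pubsub_v1

project_id = "project-id"  # replace with your project ID

# Create the transactions Pub/Sub topic.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, "transactions")
publisher.create_topic(name=topic_path)

# Create the BigQuery dataset and the us_state_salesregions table.
bq = bigquery.Client(project=project_id)
bq.create_dataset("dataflow_sql_dataset", exists_ok=True)

table_id = f"{project_id}.dataflow_sql_dataset.us_state_salesregions"
schema = [
    bigquery.SchemaField("state_code", "STRING"),
    bigquery.SchemaField("sales_region", "STRING"),
]
table = bq.create_table(bigquery.Table(table_id, schema=schema), exists_ok=True)

# Placeholder rows; replace with a complete state-to-region mapping.
bq.insert_rows(table, [("CA", "Region_1"), ("NY", "Region_2")])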

Find Pub/Sub sources

The Dataflow SQL UI provides a way to find Pub/Sub data source objects for any project you have access to, so you don't have to remember their full names.

For the example in this tutorial, add the transactions Pub/Sub topic that you created:

  1. In the left navigation panel, click the Add data drop-down list and select Cloud Dataflow sources.

  2. In the Add Cloud Dataflow source panel that opens on the right, choose Pub/Sub topics. In the search box, search for transactions. Select the topic and click Add.

Assign a schema to your Pub/Sub topic

Assigning a schema lets you run SQL queries on your Pub/Sub topic data. Currently, Dataflow SQL expects messages in Pub/Sub topics to be serialized in JSON format. Support for other formats such as Avro will be added in the future.

After adding the example Pub/Sub topic as a Dataflow source, complete the following steps to assign a schema to the topic in the Dataflow SQL UI:

  1. Select the topic in the Resources panel.

  2. In the Schema tab, click Edit schema. The Schema side panel opens on the right.

  3. Toggle the Edit as text button and paste the following inline schema into the editor. Then, click Submit.

    [
      {
          "description": "Transaction time string",
          "name": "tr_time_str",
          "type": "STRING"
      },
      {
          "description": "First name",
          "name": "first_name",
          "type": "STRING"
      },
      {
          "description": "Last name",
          "name": "last_name",
          "type": "STRING"
      },
      {
          "description": "City",
          "name": "city",
          "type": "STRING"
      },
      {
          "description": "State",
          "name": "state",
          "type": "STRING"
      },
      {
          "description": "Product",
          "name": "product",
          "type": "STRING"
      },
      {
          "description": "Amount of transaction",
          "name": "amount",
          "type": "FLOAT"
      }
    ]
    
  4. (Optional) Click Preview topic to examine the content of your messages and confirm that they match the schema you defined.

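The publishing script that streams data to the transactions topic (referred to later in this tutorial as transactions_injector.py) is provided separately and is not reproduced here. As a rough illustration only, a publisher that emits JSON messages matching the schema above might look like the following sketch; it assumes the google-cloud-pubsub client library, and the field values are made up.

# Illustrative publisher only; not the tutorial's transactions_injector.py script.
# Assumes: pip install google-cloud-pubsub
import json
import time

from google.cloud import pubsub_v1

project_id = "project-id"  # replace with your project ID
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, "transactions")

while True:
    message = {
        "tr_time_str": time.strftime("%Y-%m-%d %H:%M:%S"),
        "first_name": "Jane",
        "last_name": "Doe",
        "city": "Seattle",
        "state": "WA",
        "product": "Product 2",
        "amount": 42.42,
    }
    # Dataflow SQL expects the message payload to be serialized as JSON.
    publisher.publish(topic_path, data=json.dumps(message).encode("utf-8"))
    time.sleep(1)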

View the schema

  1. In the left navigation panel of the Dataflow SQL UI, click Cloud Dataflow sources.
  2. Click Pub/Sub topics.
  3. Click transactions.
  4. Under Schema, you can view the schema you assigned to the transactions Pub/Sub topic.

Create a SQL query

The Dataflow SQL UI lets you create SQL queries to run your Dataflow jobs.

The following SQL query is a data enrichment query. It adds a sales_region field to the Pub/Sub stream of events (transactions), using a BigQuery table (us_state_salesregions) that maps states to sales regions.

Copy and paste the following SQL query into the Query editor. Replace project-id with your project ID.

SELECT tr.payload.*, sr.sales_region
FROM pubsub.topic.`project-id`.transactions AS tr
  INNER JOIN bigquery.table.`project-id`.dataflow_sql_dataset.us_state_salesregions AS sr
  ON tr.payload.state = sr.state_code

When you enter a query in the Dataflow SQL UI, the query validator verifies the query syntax. A green check mark icon is displayed if the query is valid, and a red exclamation point icon is displayed if it is invalid. If your query is invalid, clicking the validator icon provides information about what you need to fix.

The following screenshot shows the valid query in the Query editor. The validator displays a green check mark.

Create a Dataflow job to run your SQL query

To run your SQL query, create a Dataflow job from the Dataflow SQL UI.

  1. Below the Query editor, click Create Cloud Dataflow job.

  2. In the Create Cloud Dataflow job panel that opens on the right, change the default Table name to dfsqltable_sales.

  3. Click Create. Your Dataflow job will take a few minutes to start running.

  4. The Query results panel appears in the UI. To return to a job's Query results panel later, find the job in the Job history panel and click the Open in query editor button, as described in View past jobs and edit your queries.

  5. Under Job information, click the Job ID link. This opens a new browser tab with the Dataflow Job Details page in the Dataflow web UI.

View the Dataflow job and output

Dataflow turns your SQL query into an Apache Beam pipeline. In the Dataflow web UI that opened in a new browser tab, you can see a graphical representation of your pipeline.

You can click the boxes to see a breakdown of the transformations occurring in the pipeline. For example, if you click the top box in the graphical representation, labeled Run SQL Query, a graphic appears that shows the operations taking place behind the scenes.

The top two boxes represent the two inputs you joined: the Pub/Sub topic, transactions, and the BigQuery table, us_state_salesregions.

To view the output table that contains the job results, go back to the browser tab with the Dataflow SQL UI. In the left navigation panel, under your project, click the dataflow_sql_dataset dataset you created. Then, click on the output table, dfsqltable_sales. The Preview tab displays the contents of the output table.
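
If you prefer to inspect the output from a script instead of the Preview tab, a quick sketch like the following (again assuming the google-cloud-bigquery client library) reads a few rows from the output table:

# Sketch: read a few rows from the output table created by the Dataflow job.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT *
    FROM `dataflow_sql_dataset.dfsqltable_sales`
    LIMIT 10
"""
for row in client.query(query).result():
    print(dict(row))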

View past jobs and edit your queries

The Dataflow SQL UI stores past jobs and queries in the Job history panel. Jobs are grouped by the day they started. Days that contain running jobs are listed first, followed by days with no running jobs.

You can use the job history list to edit previous SQL queries and run new Dataflow jobs. For example, suppose you want to modify your query to aggregate sales by sales region every 15 seconds. Use the Job history panel to access the running job that you started earlier in the tutorial, change the SQL query, and run another job with the modified query.

  1. In the left navigation panel, click Job history.

  2. Under Job history, click Cloud Dataflow. All past jobs for your project appear.

  3. Click on the job you want to edit. Click Open in query editor.

  4. Edit your SQL query in the Query editor to add tumbling windows. Replace project-id with your project ID if you copy the following query.

     SELECT
       sr.sales_region,
       TUMBLE_START("INTERVAL 15 SECOND") AS period_start,
       SUM(tr.payload.amount) AS amount
     FROM pubsub.topic.`project-id`.transactions AS tr
       INNER JOIN bigquery.table.`project-id`.dataflow_sql_dataset.us_state_salesregions AS sr
       ON tr.payload.state = sr.state_code
     GROUP BY
       sr.sales_region,
       TUMBLE(tr.event_timestamp, "INTERVAL 15 SECOND")
    
  5. Below the Query editor, click Create Cloud Dataflow job to create a new job with the modified query.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial:

  1. Stop your transactions_injector.py publishing script if it is still running.

  2. Stop your running Dataflow jobs. Go to the Dataflow web UI in the Cloud Console.

    Go to the Dataflow web UI

    For each job that you created by following this walkthrough, complete the following steps:

    1. Click the name of the job.

    2. In the Job summary panel for the job, click Stop job. The Stop Job dialog appears with your options for how to stop your job.

    3. Select Cancel.

    4. Click Stop job. The service halts all data ingestion and processing as soon as possible. Because Cancel immediately halts processing, you might lose any "in-flight" data. Stopping a job might take a few minutes.

  3. Delete your BigQuery dataset. Go to the BigQuery web UI in the Cloud Console.

    Go to the BigQuery web UI

    1. In the navigation panel, in the Resources section, click the dataflow_sql_dataset dataset you created.

    2. In the details panel, on the right side, click Delete dataset. This action deletes the dataset, the table, and all the data.

    3. In the Delete dataset dialog box, confirm the delete command by typing the name of your dataset (dataflow_sql_dataset) and then click Delete.

  4. Delete your Pub/Sub topic. Go to the Pub/Sub topics page in the Cloud Console.

    Go to the Pub/Sub topics page

    1. Check the checkbox next to the transactions topic.

    2. Click Delete to permanently delete the topic.

    3. Go to the Pub/Sub subscriptions page.

    4. Check the checkbox next to any remaining subscriptions to transactions. If your jobs are not running anymore, there might not be any subscriptions.

    5. Click Delete to permanently delete the subscriptions.

  5. Delete the Dataflow staging bucket in Cloud Storage. Go to the Cloud Storage browser in the Cloud Console.

    Go to the Cloud Storage browser

    1. Check the checkbox next to the Dataflow staging bucket.

    2. Click Delete to permanently delete the bucket.
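
If you would rather script steps 3 through 5 of the cleanup than click through the Cloud Console, a sketch along the following lines deletes the same resources. It assumes the google-cloud-bigquery, google-cloud-pubsub, and google-cloud-storage client libraries; replace the placeholder project ID and bucket name with your own, and stop your Dataflow jobs first so that no new data is written.

# Hypothetical cleanup sketch covering steps 3-5 above. Stop your Dataflow jobs
# first (in the Console, or with the gcloud CLI).
# Assumes: pip install google-cloud-bigquery google-cloud-pubsub google-cloud-storage
from google.cloud import bigquery, pubsub_v1, storage

project_id = "project-id"               # replace with your project ID
staging_bucket = "your-staging-bucket"  # replace with your Dataflow staging bucket

# Step 3: delete the BigQuery dataset, its tables, and all of their data.
bigquery.Client(project=project_id).delete_dataset(
    "dataflow_sql_dataset", delete_contents=True, not_found_ok=True
)

# Step 4: delete the Pub/Sub topic and any remaining subscriptions to it.
publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()
topic_path = publisher.topic_path(project_id, "transactions")
for subscription in publisher.list_topic_subscriptions(topic=topic_path):
    subscriber.delete_subscription(subscription=subscription)
publisher.delete_topic(topic=topic_path)

# Step 5: delete the Dataflow staging bucket. force=True also deletes the objects
# in the bucket; for buckets with many objects, delete the objects first.
storage.Client(project=project_id).bucket(staging_bucket).delete(force=True)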
