Using Stackdriver Monitoring for Cloud Dataflow pipelines

Stackdriver provides powerful monitoring, logging, and diagnostics. Cloud Dataflow integration with Stackdriver Monitoring allows you to access Cloud Dataflow job metrics such as Job Status, Element Counts, System Lag (for streaming jobs), and User Counters from the Stackdriver dashboards. You can also employ Stackdriver alerting capabilities to be notified of a variety of conditions, such as long streaming system lag or failed jobs.

Before you begin

Follow one of the quickstarts to set up your Cloud Dataflow project, then construct and run your pipeline.

Explore Metrics

You can explore Cloud Dataflow metrics using Stackdriver. Follow the steps in this section to observe the standard metrics provided for each of your Apache Beam pipelines.

Note: Any aggregator you define in your Apache Beam pipeline will be reported by Cloud Dataflow to Stackdriver as a custom metric. Cloud Dataflow will report incremental updates to Stackdriver approximately every 30 seconds. All user metrics will be exported as the “double” data type to avoid conflicts.

  1. In the Google Cloud Platform Console, select Stackdriver Monitoring:

    Go to Monitoring

  2. If the Add your project to a Workspace dialog is displayed, create a new Workspace by selecting your GCP project under New Workspace and then clicking Add. In this example, the GCP project name is Quickstart.

    The Add your project to a Workspace dialog is displayed only when you have at least one existing Workspace available to you. The Workspaces listed under Existing Workspace are Workspaces you've created or Workspaces for GCP projects where you have Editor permissions. Using this dialog, you can either create a new Workspace or add your project to an existing Workspace.

  3. In the Resource menu, select Metrics Explorer.

  4. In the Find a resource type and/or a metric pane, select the dataflow_job resource type.

  5. From the list that appears, select a metric you'd like to observe for one of your jobs.

    For example, this streaming pipeline reads from a Cloud Pub/Sub topic and writes to BigQuery. It has 5 steps, one of which is PubsubIO.Read. You can observe the dataflow/job/element_count metric for the PubsubIO.Read step of the pipeline.
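Beyond Metrics Explorer, the same metrics can be read programmatically through the Stackdriver Monitoring API. As a minimal sketch (the helper name and the job name my-job are placeholders, not part of the API), this builds the filter string a projects.timeSeries.list request expects for the dataflow_job resource type:

```python
def dataflow_metric_filter(metric, job_name=None):
    """Build a Stackdriver Monitoring API filter for a Cloud Dataflow job metric.

    `metric` is the suffix shown in Metrics Explorer, e.g. 'job/element_count'.
    """
    parts = [
        'metric.type = "dataflow.googleapis.com/{}"'.format(metric),
        'resource.type = "dataflow_job"',
    ]
    if job_name:
        # Optionally restrict to a single pipeline by its job name.
        parts.append('resource.labels.job_name = "{}"'.format(job_name))
    return ' AND '.join(parts)

print(dataflow_metric_filter('job/element_count', job_name='my-job'))
```

The resulting string can be passed as the filter parameter of a Monitoring API time-series query.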

Create alerts and dashboards

In addition to giving you access to Cloud Dataflow-related metrics, Stackdriver lets you create alerts and dashboards so that you can chart time series of metrics and choose to be notified when these metrics reach specified values.

Create groups of resources

You can create resource groups that include multiple Apache Beam pipelines so that you can easily set alerts and build dashboards.

  1. In the Google Cloud Platform Console, select Stackdriver Monitoring:

    Go to Monitoring

  2. In the Groups menu, select Create Groups.

  3. Add filter criteria that define the Cloud Dataflow resources included in the group. For example, one of your filter criteria can be the name prefix of your pipelines.

  4. After the group is created, you will be able to see the basic metrics related to resources in that group.

Create alerts for Cloud Dataflow metrics

Stackdriver gives you the ability to create alerts and be notified when a certain metric crosses a specified threshold, for example, when the System Lag of a streaming pipeline rises above a predefined value.

  1. In the Google Cloud Platform Console, select Stackdriver Monitoring:

    Go to Monitoring

  2. In the Alerting menu, select Policies Overview.

  3. Click on Add Policy.

  4. In the Create new alerting policy page, you can define the alerting conditions and the channels of communication for alerts.
    For example, to set an alert on the System Lag for the WindowedWordCount Apache Beam pipeline group, select 'Dataflow Job' in the Resource Type dropdown, 'Group' in the Applies To dropdown, and 'System Lag' in the If Metric dropdown.

  5. After you've created an alert, you can review the events related to Cloud Dataflow by navigating to Alerting > Events. Every time an alert is triggered by a Metric Threshold condition, an Incident and a corresponding Event are created in Stackdriver. If you specified a notification mechanism in the alert (email, SMS, and so on), you also receive a notification.
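For reference, an alerting policy with a Metric Threshold condition on System Lag can also be expressed declaratively as a Monitoring API alert policy. The following is a sketch; the 30-second threshold, the 300s duration, and the display names are illustrative values, not defaults:

```json
{
  "displayName": "High system lag (illustrative)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "System lag above 30s",
      "conditionThreshold": {
        "filter": "metric.type = \"dataflow.googleapis.com/job/system_lag\" AND resource.type = \"dataflow_job\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 30,
        "duration": "300s"
      }
    }
  ]
}
```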

Build your own custom monitoring dashboard

You can build Stackdriver monitoring dashboards with the most relevant Cloud Dataflow-related charts.

  1. Go to the Google Cloud Platform Console, and select Stackdriver Monitoring:

    Go to Monitoring

  2. Select Dashboards > Create Dashboard.

  3. Click on Add Chart.

  4. In the Add Chart window, select "Dataflow Job" as the Resource Type, select a metric you want to chart in the Metric Type field, and select a group that contains Apache Beam pipelines in the Filter panel.

You can add as many charts to the dashboard as you like.

Receive worker VM metrics from Stackdriver Monitoring agent

If you would like to monitor persistent disk, CPU, network, and process metrics from your Cloud Dataflow worker VM instances, you can enable the Stackdriver Monitoring Agent when you run your pipeline. See the list of available Monitoring agent metrics.

To enable the Monitoring agent, use the --experiments=enable_stackdriver_agent_metrics option when running your pipeline.

To disable the Monitoring agent without stopping your pipeline, update your pipeline by launching a replacement job without the --experiments=enable_stackdriver_agent_metrics parameter.

What's next

To learn more, consider exploring these other resources:
