Package google.dataflow.v1beta3

FlexTemplatesService

Provides a service for Flex templates.

LaunchFlexTemplate

rpc LaunchFlexTemplate(LaunchFlexTemplateRequest) returns (LaunchFlexTemplateResponse)

Launch a job with a FlexTemplate.
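As an illustration, here is a minimal sketch of this call through the generated Python client (the google-cloud-dataflow-client package); the project ID, bucket paths, and template parameters are placeholders, and the snake_case method and message names are assumed to follow the RPC and message names listed on this page:

    # Sketch: launch a Flex Template job. Assumes the generated client library
    # (pip install google-cloud-dataflow-client); all IDs and paths are placeholders.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.FlexTemplatesServiceClient()

    request = dataflow_v1beta3.LaunchFlexTemplateRequest(
        project_id="my-project",
        location="us-central1",  # regional endpoint to direct the request to
        launch_parameter=dataflow_v1beta3.LaunchFlexTemplateParameter(
            job_name="example-flex-job",
            container_spec_gcs_path="gs://my-bucket/templates/spec.json",
            parameters={"inputTopic": "projects/my-project/topics/my-topic"},
            environment=dataflow_v1beta3.FlexTemplateRuntimeEnvironment(
                temp_location="gs://my-bucket/temp",
                max_workers=5,
            ),
        ),
    )

    response = client.launch_flex_template(request=request)
    print(response.job.id, response.job.current_state)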

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

JobsV1Beta3

Provides methods to create and modify Google Cloud Dataflow jobs. A Job is a multi-stage computation graph run by the Cloud Dataflow service.

AggregatedListJobs

rpc AggregatedListJobs(ListJobsRequest) returns (ListJobsResponse)

List the jobs of a project across all regions.

Note: This method doesn't support filtering the list of jobs by name.
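A minimal sketch with the generated Python client, assuming the aggregated RPC is exposed as aggregated_list_jobs and returns an iterable pager of Job messages; the project ID is a placeholder:

    # Sketch: list active jobs across all regions of a project (no name filter).
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()

    request = dataflow_v1beta3.ListJobsRequest(
        project_id="my-project",  # placeholder
        filter=dataflow_v1beta3.ListJobsRequest.Filter.ACTIVE,
    )

    for job in client.aggregated_list_jobs(request=request):
        print(job.id, job.location, job.current_state)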

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CheckActiveJobs

rpc CheckActiveJobs(CheckActiveJobsRequest) returns (CheckActiveJobsResponse)

Check for existence of active jobs in the given project across all regions.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateJob

rpc CreateJob(CreateJobRequest) returns (Job)

Creates a Cloud Dataflow job.

To create a job, we recommend using projects.locations.jobs.create with a regional endpoint. Using projects.jobs.create is not recommended, as your job will always start in us-central1.

Do not enter confidential information when you supply string values using the API.
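In practice the Job body (steps, environment, and so on) is produced by an SDK or a template rather than written by hand; the sketch below, assuming the generated Python client, only shows the request shape with placeholder values:

    # Sketch: shape of a direct CreateJob request. A real request also needs the
    # steps and environment that an SDK or template normally generates.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()

    request = dataflow_v1beta3.CreateJobRequest(
        project_id="my-project",  # placeholder
        location="us-central1",   # regional endpoint that will own the job
        job=dataflow_v1beta3.Job(
            name="example-job",   # must match [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
        ),
        view=dataflow_v1beta3.JobView.JOB_VIEW_SUMMARY,
    )

    job = client.create_job(request=request)
    print(job.id, job.current_state)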

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetJob

rpc GetJob(GetJobRequest) returns (Job)

Gets the state of the specified Cloud Dataflow job.

To get the state of a job, we recommend using projects.locations.jobs.get with a regional endpoint. Using projects.jobs.get is not recommended, as you can only get the state of jobs that are running in us-central1.
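A minimal sketch with the generated Python client; the project ID, job ID, and region are placeholders:

    # Sketch: fetch the current state of a job in a specific region.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()

    job = client.get_job(
        request=dataflow_v1beta3.GetJobRequest(
            project_id="my-project",  # placeholder
            job_id="JOB_ID",          # placeholder
            location="us-central1",   # region the job runs in
            view=dataflow_v1beta3.JobView.JOB_VIEW_SUMMARY,
        )
    )
    print(job.name, job.current_state)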

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListJobs

rpc ListJobs(ListJobsRequest) returns (ListJobsResponse)

List the jobs of a project.

To list the jobs of a project in a region, we recommend using projects.locations.jobs.list with a regional endpoint. To list all jobs across all regions, use projects.jobs.aggregated. Using projects.jobs.list is not recommended, because you can only get the list of jobs that are running in us-central1.

projects.locations.jobs.list and projects.jobs.list support filtering the list of jobs by name. Filtering by name isn't supported by projects.jobs.aggregated.
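A minimal sketch of a regional listing with the optional name filter, assuming the generated Python client; project, region, and job name are placeholders:

    # Sketch: list active jobs in one region, optionally filtered by job name.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()

    request = dataflow_v1beta3.ListJobsRequest(
        project_id="my-project",                                # placeholder
        location="us-central1",                                 # one region per call
        filter=dataflow_v1beta3.ListJobsRequest.Filter.ACTIVE,
        name="example-job",                                     # optional name filter
    )

    for job in client.list_jobs(request=request):               # pager follows page_token
        print(job.id, job.name, job.current_state)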

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

SnapshotJob

rpc SnapshotJob(SnapshotJobRequest) returns (Snapshot)

Snapshot the state of a streaming job.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateJob

rpc UpdateJob(UpdateJobRequest) returns (Job)

Updates the state of an existing Cloud Dataflow job.

To update the state of an existing job, we recommend using projects.locations.jobs.update with a regional endpoint. Using projects.jobs.update is not recommended, as you can only update the state of jobs that are running in us-central1.
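For example, cancelling a running job amounts to setting requested_state on the job body; a minimal sketch with the generated Python client, placeholders throughout:

    # Sketch: request cancellation of a running job via UpdateJob.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()

    request = dataflow_v1beta3.UpdateJobRequest(
        project_id="my-project",  # placeholder
        job_id="JOB_ID",          # placeholder
        location="us-central1",
        job=dataflow_v1beta3.Job(
            requested_state=dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
        ),
    )

    job = client.update_job(request=request)
    print(job.current_state, job.requested_state)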

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

MessagesV1Beta3

The Dataflow Messages API is used for monitoring the progress of Dataflow jobs.

ListJobMessages

rpc ListJobMessages(ListJobMessagesRequest) returns (ListJobMessagesResponse)

Request the job status.

To request the status of a job, we recommend using projects.locations.jobs.messages.list with a regional endpoint. Using projects.jobs.messages.list is not recommended, as you can only request the status of jobs that are running in us-central1.
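A minimal sketch with the generated Python client, filtering to warnings and errors; the IDs are placeholders:

    # Sketch: page through job messages with importance >= WARNING.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.MessagesV1Beta3Client()

    request = dataflow_v1beta3.ListJobMessagesRequest(
        project_id="my-project",  # placeholder
        job_id="JOB_ID",          # placeholder
        location="us-central1",
        minimum_importance=dataflow_v1beta3.JobMessageImportance.JOB_MESSAGE_WARNING,
    )

    for message in client.list_job_messages(request=request):  # pager over job_messages
        print(message.time, message.message_importance.name, message.message_text)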

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

MetricsV1Beta3

The Dataflow Metrics API lets you monitor the progress of Dataflow jobs.

GetJobExecutionDetails

rpc GetJobExecutionDetails(GetJobExecutionDetailsRequest) returns (JobExecutionDetails)

Request detailed information about the execution status of the job.

EXPERIMENTAL. This API is subject to change or removal without notice.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetJobMetrics

rpc GetJobMetrics(GetJobMetricsRequest) returns (JobMetrics)

Request the job status.

To request the status of a job, we recommend using projects.locations.jobs.getMetrics with a regional endpoint. Using projects.jobs.getMetrics is not recommended, as you can only request the status of jobs that are running in us-central1.
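A minimal sketch with the generated Python client; project, job ID, and region are placeholders:

    # Sketch: fetch the latest metric values for a job.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.MetricsV1Beta3Client()

    metrics = client.get_job_metrics(
        request=dataflow_v1beta3.GetJobMetricsRequest(
            project_id="my-project",  # placeholder
            job_id="JOB_ID",          # placeholder
            location="us-central1",
        )
    )

    for update in metrics.metrics:
        print(update.name.name, update.kind, update.scalar)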

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetStageExecutionDetails

rpc GetStageExecutionDetails(GetStageExecutionDetailsRequest) returns (StageExecutionDetails)

Request detailed information about the execution status of a stage of the job.

EXPERIMENTAL. This API is subject to change or removal without notice.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

TemplatesService

Provides methods to create Cloud Dataflow jobs from templates.

CreateJobFromTemplate

rpc CreateJobFromTemplate(CreateJobFromTemplateRequest) returns (Job)

Creates a Cloud Dataflow job from a template. Do not enter confidential information when you supply string values using the API.

To create a job, we recommend using projects.locations.templates.create with a regional endpoint. Using projects.templates.create is not recommended, because your job will always start in us-central1.
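A minimal sketch with the generated Python client, pointing at the Google-provided Word_Count classic template; the output bucket and temp location are placeholders:

    # Sketch: run a classic (legacy) template from a Cloud Storage path.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.TemplatesServiceClient()

    request = dataflow_v1beta3.CreateJobFromTemplateRequest(
        project_id="my-project",                               # placeholder
        location="us-central1",
        job_name="example-wordcount",
        gcs_path="gs://dataflow-templates/latest/Word_Count",  # Google-provided template
        parameters={
            "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
            "output": "gs://my-bucket/wordcount/output",       # placeholder bucket
        },
        environment=dataflow_v1beta3.RuntimeEnvironment(
            temp_location="gs://my-bucket/temp",               # placeholder bucket
            max_workers=3,
        ),
    )

    job = client.create_job_from_template(request=request)
    print(job.id, job.current_state)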

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetTemplate

rpc GetTemplate(GetTemplateRequest) returns (GetTemplateResponse)

Gets the metadata and parameters associated with the specified template.

To get the template, we recommend using projects.locations.templates.get with a regional endpoint. Using projects.templates.get is not recommended, because only templates that are running in us-central1 are retrieved.
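A minimal sketch with the generated Python client, retrieving only the template metadata; the template path shown is the Google-provided Word_Count template, and the TemplateView enum is assumed to be nested under GetTemplateRequest as documented below:

    # Sketch: inspect a template's metadata and declared parameters.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.TemplatesServiceClient()

    response = client.get_template(
        request=dataflow_v1beta3.GetTemplateRequest(
            project_id="my-project",  # placeholder
            location="us-central1",
            gcs_path="gs://dataflow-templates/latest/Word_Count",
            view=dataflow_v1beta3.GetTemplateRequest.TemplateView.METADATA_ONLY,
        )
    )

    for param in response.metadata.parameters:
        print(param.name, param.is_optional, param.help_text)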

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

LaunchTemplate

rpc LaunchTemplate(LaunchTemplateRequest) returns (LaunchTemplateResponse)

Launches a template.

To launch a template, we recommend using projects.locations.templates.launch with a regional endpoint. Using projects.templates.launch is not recommended, because jobs launched from the template will always start in us-central1.
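Compared to projects.templates.create, this RPC wraps the launched job in a response message and supports validate_only and dynamic templates. A minimal sketch with the generated Python client, placeholders as before:

    # Sketch: launch a classic template; set validate_only=True for a dry run.
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.TemplatesServiceClient()

    request = dataflow_v1beta3.LaunchTemplateRequest(
        project_id="my-project",                               # placeholder
        location="us-central1",
        gcs_path="gs://dataflow-templates/latest/Word_Count",
        validate_only=False,
        launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
            job_name="example-launch",
            parameters={
                "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
                "output": "gs://my-bucket/wordcount/output",   # placeholder bucket
            },
            environment=dataflow_v1beta3.RuntimeEnvironment(
                temp_location="gs://my-bucket/temp",           # placeholder bucket
            ),
        ),
    )

    response = client.launch_template(request=request)
    print(response.job.id)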

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/compute
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

AutoscalingAlgorithm

Specifies the algorithm used to determine the number of worker processes to run at any given point in time, based on the amount of data left to process, the number of workers, and how quickly existing workers are processing data.

Enums
AUTOSCALING_ALGORITHM_UNKNOWN The algorithm is unknown, or unspecified.
AUTOSCALING_ALGORITHM_NONE Disable autoscaling.
AUTOSCALING_ALGORITHM_BASIC Increase worker count over time to reduce job execution time.

AutoscalingEvent

A structured message reporting an autoscaling decision made by the Dataflow service.

Fields
current_num_workers

int64

The current number of workers the job has.

target_num_workers

int64

The target number of workers that the worker pool wants to resize to.

event_type

AutoscalingEventType

The type of autoscaling event to report.

description

StructuredMessage

A message describing why the system decided to adjust the current number of workers, why it failed, or why the system decided to not make any changes to the number of workers.

time

Timestamp

The time this event was emitted to indicate a new target or current num_workers value.

worker_pool

string

A short and friendly name for the worker pool this event refers to.

AutoscalingEventType

Indicates the type of autoscaling event.

Enums
TYPE_UNKNOWN Default type for the enum. Value should never be returned.
TARGET_NUM_WORKERS_CHANGED The TARGET_NUM_WORKERS_CHANGED type should be used when the target worker pool size has changed at the start of an actuation. An event should always be specified as TARGET_NUM_WORKERS_CHANGED if it reflects a change in the target_num_workers.
CURRENT_NUM_WORKERS_CHANGED The CURRENT_NUM_WORKERS_CHANGED type should be used when actual worker pool size has been changed, but the target_num_workers has not changed.
ACTUATION_FAILURE The ACTUATION_FAILURE type should be used when we want to report an error to the user indicating why the current number of workers in the pool could not be changed. Displayed in the current status and history widgets.
NO_CHANGE Used when we want to report to the user a reason why we are not currently adjusting the number of workers. Should specify target_num_workers, current_num_workers, and a decision_message.

AutoscalingSettings

Settings for WorkerPool autoscaling.

Fields
algorithm

AutoscalingAlgorithm

The algorithm to use for autoscaling.

max_num_workers

int32

The maximum number of workers to cap scaling at.

BigQueryIODetails

Metadata for a BigQuery connector used by the job.

Fields
table

string

Table accessed in the connection.

dataset

string

Dataset accessed in the connection.

project_id

string

Project accessed in the connection.

query

string

Query used to access data in the connection.

BigTableIODetails

Metadata for a Cloud Bigtable connector used by the job.

Fields
project_id

string

ProjectId accessed in the connection.

instance_id

string

InstanceId accessed in the connection.

table_id

string

TableId accessed in the connection.

CheckActiveJobsRequest

Request to check whether active jobs exist for a project.

Fields
project_id

string

The project which owns the jobs.

CheckActiveJobsResponse

Response for CheckActiveJobsRequest.

Fields
active_jobs_exist

bool

If true, active jobs exist for the project; otherwise false.

CreateJobFromTemplateRequest

A request to create a Cloud Dataflow job from a template.

Fields
project_id

string

Required. The ID of the Cloud Platform project that the job belongs to.

job_name

string

Required. The job name to use for the created job.

parameters

map<string, string>

The runtime parameters to pass to the job.

environment

RuntimeEnvironment

The runtime environment for the job.

location

string

The regional endpoint to which to direct the request.

Union field template. The template from which to create the job. template can be only one of the following:
gcs_path

string

Required. A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with gs://.

CreateJobRequest

Request to create a Cloud Dataflow job.

Fields
project_id

string

The ID of the Cloud Platform project that the job belongs to.

job

Job

The job to create.

view

JobView

The level of information requested in response.

replace_job_id

string

Deprecated. This field is now in the Job message.

location

string

The regional endpoint that contains this job.

DataSamplingConfig

Configuration options for sampling elements.

Fields
behaviors[]

DataSamplingBehavior

List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, for example behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling.

If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored.

Ordering does not matter.
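For instance, a sketch of building this configuration with the generated Python client, assuming the DataSamplingBehavior enum (described below) is nested under DataSamplingConfig:

    # Sketch: enable both periodic element sampling and exception sampling,
    # then attach the config to a job's debug options.
    from google.cloud import dataflow_v1beta3

    Behavior = dataflow_v1beta3.DataSamplingConfig.DataSamplingBehavior  # assumed nesting

    debug_options = dataflow_v1beta3.DebugOptions(
        data_sampling=dataflow_v1beta3.DataSamplingConfig(
            behaviors=[Behavior.ALWAYS_ON, Behavior.EXCEPTIONS],
        )
    )

    environment = dataflow_v1beta3.Environment(debug_options=debug_options)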

DataSamplingBehavior

The following enum defines what to sample for a running job.

Enums
DATA_SAMPLING_BEHAVIOR_UNSPECIFIED If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
DISABLED When given, disables element sampling. Has same behavior as not setting the behavior.
ALWAYS_ON When given, enables sampling in-flight from all PCollections.
EXCEPTIONS When given, enables sampling input elements when a user-defined DoFn causes an exception.

DatastoreIODetails

Metadata for a Datastore connector used by the job.

Fields
namespace

string

Namespace used in the connection.

project_id

string

ProjectId accessed in the connection.

DebugOptions

Describes any options that have an effect on the debugging of pipelines.

Fields
enable_hot_key_logging

bool

Optional. When true, enables the logging of the literal hot key to the user's Cloud Logging.

data_sampling

DataSamplingConfig

Configuration options for sampling elements from a running pipeline.

DefaultPackageSet

The default set of packages to be staged on a pool of workers.

Enums
DEFAULT_PACKAGE_SET_UNKNOWN The default set of packages to stage is unknown, or unspecified.
DEFAULT_PACKAGE_SET_NONE Indicates that no packages should be staged at the worker unless explicitly specified by the job.
DEFAULT_PACKAGE_SET_JAVA Stage packages typically useful to workers written in Java.
DEFAULT_PACKAGE_SET_PYTHON Stage packages typically useful to workers written in Python.

Disk

Describes the data disk used by a workflow job.

Fields
size_gb

int32

Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

disk_type

string

Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default.

For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone.

Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this:

compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

mount_point

string

Directory in a VM where disk is mounted.

DisplayData

Data provided with a pipeline or transform to provide descriptive info.

Fields
key

string

The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

namespace

string

The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

short_str_value

string

A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

url

string

An optional full URL.

label

string

An optional label to display in a dax UI for the element.

Union field Value. Various value types which can be used for display data. Only one will be set. Value can be only one of the following:
str_value

string

Contains value if the data is of string type.

int64_value

int64

Contains value if the data is of int64 type.

float_value

float

Contains value if the data is of float type.

java_class_value

string

Contains value if the data is of java class type.

timestamp_value

Timestamp

Contains value if the data is of timestamp type.

duration_value

Duration

Contains value if the data is of duration type.

bool_value

bool

Contains value if the data is of a boolean type.

DynamicTemplateLaunchParams

Parameters to pass when launching a dynamic template.

Fields
gcs_path

string

Path to the dynamic template specification file on Cloud Storage. The file must be a JSON serialized DynamicTemplateFileSpec object.

staging_location

string

Cloud Storage path for staging dependencies. Must be a valid Cloud Storage URL, beginning with gs://.

Environment

Describes the environment in which a Dataflow Job runs.

Fields
temp_storage_prefix

string

The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is:

Google Cloud Storage:

  • storage.googleapis.com/{bucket}/{object}
  • bucket.storage.googleapis.com/{object}

cluster_manager_api_service

string

The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

experiments[]

string

The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

service_options[]

string

Optional. The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

service_kms_key_name

string

Optional. If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK).

Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

worker_pools[]

WorkerPool

The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

user_agent

Struct

A description of the process that generated the request.

version

Struct

A structure describing which components and their versions of the service are required in order to run the job.

dataset

string

Optional. The dataset for the current project where various workflow related tables are stored.

The supported resource type is:

Google BigQuery: bigquery.googleapis.com/{dataset}

sdk_pipeline_options

Struct

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

internal_experiments

Any

Experimental settings.

service_account_email

string

Optional. Identity to run virtual machines as. Defaults to the default account.

flex_resource_scheduling_goal

FlexResourceSchedulingGoal

Optional. Which Flexible Resource Scheduling mode to run in.

worker_region

string

Optional. The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

worker_zone

string

Optional. The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

shuffle_mode

ShuffleMode

Output only. The shuffle mode used for the job.

debug_options

DebugOptions

Optional. Any debugging options to be supplied to the job.

use_streaming_engine_resource_based_billing

bool

Output only. Whether the job uses the Streaming Engine resource-based billing model.

streaming_mode

StreamingMode

Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode.

ExecutionStageState

A message describing the state of a particular execution stage.

Fields
execution_stage_name

string

The name of the execution stage.

execution_stage_state

JobState

Execution stage states allow the same set of values as JobState.

current_state_time

Timestamp

The time at which the stage transitioned to this state.

ExecutionStageSummary

Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning.

Fields
name

string

Dataflow service generated name for this stage.

id

string

Dataflow service generated id for this stage.

kind

KindType

Type of transform this stage is executing.

input_source[]

StageSource

Input sources for this stage.

output_source[]

StageSource

Output sources for this stage.

prerequisite_stage[]

string

Other stages that must complete before this stage can run.

component_transform[]

ComponentTransform

Transforms that comprise this execution stage.

component_source[]

ComponentSource

Collections produced and consumed by component transforms of this stage.

ComponentSource

Description of an interstitial value between transforms in an execution stage.

Fields
user_name

string

Human-readable name for this transform; may be user or system generated.

name

string

Dataflow service generated name for this source.

original_transform_or_collection

string

User name for the original user transform or collection with which this source is most closely associated.

ComponentTransform

Description of a transform executed as part of an execution stage.

Fields
user_name

string

Human-readable name for this transform; may be user or system generated.

name

string

Dataflow service generated name for this source.

original_transform

string

User name for the original user transform with which this transform is most closely associated.

StageSource

Description of an input or output of an execution stage.

Fields
user_name

string

Human-readable name for this source; may be user or system generated.

name

string

Dataflow service generated name for this source.

original_transform_or_collection

string

User name for the original user transform or collection with which this source is most closely associated.

size_bytes

int64

Size of the source, if measurable.

ExecutionState

The state of some component of job execution.

Enums
EXECUTION_STATE_UNKNOWN The component state is unknown or unspecified.
EXECUTION_STATE_NOT_STARTED The component is not yet running.
EXECUTION_STATE_RUNNING The component is currently running.
EXECUTION_STATE_SUCCEEDED The component succeeded.
EXECUTION_STATE_FAILED The component failed.
EXECUTION_STATE_CANCELLED Execution of the component was cancelled.

FailedLocation

Indicates which regional endpoint failed to respond to a request for data.

Fields
name

string

The name of the regional endpoint that failed to respond.

FileIODetails

Metadata for a File connector used by the job.

Fields
file_pattern

string

File Pattern used to access files by the connector.

FlexResourceSchedulingGoal

Specifies the resource to optimize for in Flexible Resource Scheduling.

Enums
FLEXRS_UNSPECIFIED Run in the default mode.
FLEXRS_SPEED_OPTIMIZED Optimize for lower execution time.
FLEXRS_COST_OPTIMIZED Optimize for lower cost.

FlexTemplateRuntimeEnvironment

The environment values to be set at runtime for flex template.

Fields
num_workers

int32

The initial number of Google Compute Engine instances for the job.

max_workers

int32

The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

zone

string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

service_account_email

string

The email address of the service account to run the job as.

temp_location

string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

machine_type

string

The machine type to use for the job. Defaults to the value from the template if not specified.

additional_experiments[]

string

Additional experiment flags for the job.

network

string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

subnetwork

string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

additional_user_labels

map<string, string>

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions page. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

kms_key_name

string

Name for the Cloud KMS key for the job. Key format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

ip_configuration

WorkerIPAddressConfiguration

Configuration for VM IPs.

worker_region

string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

worker_zone

string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

enable_streaming_engine

bool

Whether to enable Streaming Engine for the job.

flexrs_goal

FlexResourceSchedulingGoal

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

staging_location

string

The Cloud Storage path for staging local files. Must be a valid Cloud Storage URL, beginning with gs://.

sdk_container_image

string

Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

disk_size_gb

int32

Worker disk size, in gigabytes.

autoscaling_algorithm

AutoscalingAlgorithm

The algorithm to use for autoscaling

dump_heap_on_oom

bool

If true, when processing time is spent almost entirely on garbage collection (GC), saves a heap dump before ending the thread or process. If false, ends the thread or process without saving a heap dump. Does not save a heap dump when the Java Virtual Machine (JVM) has an out of memory error during processing. The location of the heap file is either echoed back to the user, or the user is given the opportunity to download the heap file.

save_heap_dumps_to_gcs_path

string

Cloud Storage bucket (directory) to upload heap dumps to. Enabling this field implies that dump_heap_on_oom is set to true.

launcher_machine_type

string

The machine type to use for launching the job. The default is n1-standard-1.

enable_launcher_vm_serial_port_logging

bool

If true, serial port logging will be enabled for the launcher VM.

streaming_mode

StreamingMode

Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode.

GetJobExecutionDetailsRequest

Request to get job execution details.

Fields
project_id

string

A project id.

job_id

string

The job to get execution details for.

location

string

The regional endpoint that contains the job specified by job_id.

page_size

int32

If specified, determines the maximum number of stages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results.

page_token

string

If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned.

GetJobMetricsRequest

Request to get job metrics.

Fields
project_id

string

A project id.

job_id

string

The job to get metrics for.

start_time

Timestamp

Return only metric data that has changed since this time. Default is to return all information about all metrics for the job.

location

string

The regional endpoint that contains the job specified by job_id.

GetJobRequest

Request to get the state of a Cloud Dataflow job.

Fields
project_id

string

The ID of the Cloud Platform project that the job belongs to.

job_id

string

The job ID.

view

JobView

The level of information requested in response.

location

string

The regional endpoint that contains this job.

GetStageExecutionDetailsRequest

Request to get information about a particular execution stage of a job. Currently only tracked for Batch jobs.

Fields
project_id

string

A project id.

job_id

string

The job to get execution details for.

location

string

The regional endpoint that contains the job specified by job_id.

stage_id

string

The stage for which to fetch information.

page_size

int32

If specified, determines the maximum number of work items to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results.

page_token

string

If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned.

start_time

Timestamp

Lower time bound of work items to include, by start time.

end_time

Timestamp

Upper time bound of work items to include, by start time.

GetTemplateRequest

A request to retrieve a Cloud Dataflow job template.

Fields
project_id

string

Required. The ID of the Cloud Platform project that the job belongs to.

view

TemplateView

The view to retrieve. Defaults to METADATA_ONLY.

location

string

The regional endpoint to which to direct the request.

Union field template. The template from which to create the job. template can be only one of the following:
gcs_path

string

Required. A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with gs://.

TemplateView

The various views of a template that may be retrieved.

Enums
METADATA_ONLY Template view that retrieves only the metadata associated with the template.

GetTemplateResponse

The response to a GetTemplate request.

Fields
status

Status

The status of the get template request. Any problems with the request will be indicated in the error_details.

metadata

TemplateMetadata

The template metadata describing the template name, available parameters, etc.

template_type

TemplateType

Template Type.

runtime_metadata

RuntimeMetadata

Describes the runtime metadata with SDKInfo and available parameters.

TemplateType

Template Type.

Enums
UNKNOWN Unknown Template Type.
LEGACY Legacy Template.
FLEX Flex Template.

HotKeyDebuggingInfo

Information useful for debugging a hot key detection.

Fields
detected_hot_keys

map<uint64, HotKeyInfo>

Debugging information for each detected hot key. Keyed by a hash of the key.

HotKeyInfo

Information about a hot key.

Fields
hot_key_age

Duration

The age of the hot key measured from when it was first detected.

key

string

A detected hot key that is causing limited parallelism. This field will be populated only if the following flag is set to true: "--enable_hot_key_logging".

key_truncated

bool

If true, then the above key is truncated and cannot be deserialized. This occurs if the key above is populated and the key size is >5MB.

InvalidTemplateParameters

Used in the error_details field of a google.rpc.Status message, this indicates problems with the template parameter.

Fields
parameter_violations[]

ParameterViolation

Describes all parameter violations in a template request.

ParameterViolation

A specific template-parameter violation.

Fields
parameter

string

The parameter that failed to validate.

description

string

A description of why the parameter failed to validate.

Job

Defines a job to be run by the Cloud Dataflow service. Do not enter confidential information when you supply string values using the API.

Fields
id

string

The unique ID of this job.

This field is set by the Dataflow service when the job is created, and is immutable for the life of the job.

project_id

string

The ID of the Google Cloud project that the job belongs to.

name

string

Optional. The user-specified Dataflow job name.

Only one active job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a job with the same name as an active job that already exists, the attempt returns the existing job.

The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

type

JobType

Optional. The type of Dataflow job.

environment

Environment

Optional. The environment for the job.

steps[]

Step

Exactly one of steps or steps_location should be specified.

The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

steps_location

string

The Cloud Storage location where the steps are stored.

current_state

JobState

The current state of the job.

Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified.

A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made.

This field might be mutated by the Dataflow service; callers cannot mutate it.

current_state_time

Timestamp

The timestamp associated with the current state.

requested_state

JobState

The job's requested state. Applies to UpdateJob requests.

Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state.

This field has no effect on CreateJob requests.

execution_info

JobExecutionInfo

Deprecated.

create_time

Timestamp

The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

replace_job_id

string

If this job is an update of an existing job, this field is the job ID of the job it replaced.

When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

transform_name_mapping

map<string, string>

Optional. The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

client_request_id

string

The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

replaced_by_job_id

string

If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

temp_files[]

string

A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported.

The supported files are:

Google Cloud Storage:

  • storage.googleapis.com/{bucket}/{object}
  • bucket.storage.googleapis.com/{object}

labels

map<string, string>

User-defined labels for this job.

The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:

  • Keys must conform to regexp: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
  • Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
  • Both keys and values are additionally constrained to be <= 128 bytes in size.

location

string

Optional. The regional endpoint that contains this job.

pipeline_description

PipelineDescription

Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

stage_states[]

ExecutionStageState

This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

job_metadata

JobMetadata

This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

start_time

Timestamp

The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

created_from_snapshot_id

string

If this is specified, the job's initial state is populated from the given snapshot.

satisfies_pzs

bool

Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

runtime_updatable_params

RuntimeUpdatableParams

This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

satisfies_pzi

bool

Output only. Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

service_resources

ServiceResources

Output only. Resources used by the Dataflow Service to run the job.

JobExecutionDetails

Information about the execution of a job.

Fields
stages[]

StageSummary

The stages of the job execution.

next_page_token

string

If present, this response does not contain all requested tasks. To obtain the next page of results, repeat the request with page_token set to this value.

JobExecutionInfo

Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job.

Fields
stages

map<string, JobExecutionStageInfo>

A mapping from each stage to the information about that stage.

JobExecutionStageInfo

Contains information about how a particular google.dataflow.v1beta3.Step will be executed.

Fields
step_name[]

string

The steps associated with the execution stage. Note that stages may have several steps, and that a given step might be run by more than one stage.

JobMessage

A particular message pertaining to a Dataflow job.

Fields
id

string

Deprecated.

time

Timestamp

The timestamp of the message.

message_text

string

The text of the message.

message_importance

JobMessageImportance

Importance level of the message.

JobMessageImportance

Indicates the importance of the message.

Enums
JOB_MESSAGE_IMPORTANCE_UNKNOWN The message importance isn't specified, or is unknown.
JOB_MESSAGE_DEBUG The message is at the 'debug' level: typically only useful for software engineers working on the code the job is running. Typically, Dataflow pipeline runners do not display log messages at this level by default.
JOB_MESSAGE_DETAILED The message is at the 'detailed' level: somewhat verbose, but potentially useful to users. Typically, Dataflow pipeline runners do not display log messages at this level by default. These messages are displayed by default in the Dataflow monitoring UI.
JOB_MESSAGE_BASIC The message is at the 'basic' level: useful for keeping track of the execution of a Dataflow pipeline. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
JOB_MESSAGE_WARNING The message is at the 'warning' level: indicating a condition pertaining to a job which may require human intervention. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
JOB_MESSAGE_ERROR The message is at the 'error' level: indicating a condition preventing a job from succeeding. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.

JobMetadata

Metadata available primarily for filtering jobs. Will be included in the ListJobs response and Job SUMMARY view.

Fields
sdk_version

SdkVersion

The SDK version used to run the job.

spanner_details[]

SpannerIODetails

Identification of a Spanner source used in the Dataflow job.

bigquery_details[]

BigQueryIODetails

Identification of a BigQuery source used in the Dataflow job.

big_table_details[]

BigTableIODetails

Identification of a Cloud Bigtable source used in the Dataflow job.

pubsub_details[]

PubSubIODetails

Identification of a Pub/Sub source used in the Dataflow job.

file_details[]

FileIODetails

Identification of a File source used in the Dataflow job.

datastore_details[]

DatastoreIODetails

Identification of a Datastore source used in the Dataflow job.

user_display_properties

map<string, string>

List of display properties to help UI filter jobs.

JobMetrics

JobMetrics contains a collection of metrics describing the detailed progress of a Dataflow job. Metrics correspond to user-defined and system-defined metrics in the job. For more information, see Dataflow job metrics.

This resource captures only the most recent values of each metric; time-series data can be queried for them (under the same metric names) from Cloud Monitoring.

Fields
metric_time

Timestamp

Timestamp as of which metric values are current.

metrics[]

MetricUpdate

All metrics for this job.

JobState

Describes the overall state of a google.dataflow.v1beta3.Job.

Enums
JOB_STATE_UNKNOWN The job's run state isn't specified.
JOB_STATE_STOPPED JOB_STATE_STOPPED indicates that the job has not yet started to run.
JOB_STATE_RUNNING JOB_STATE_RUNNING indicates that the job is currently running.
JOB_STATE_DONE JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
JOB_STATE_FAILED JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_CANCELLED JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
JOB_STATE_UPDATED JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_DRAINING JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
JOB_STATE_DRAINED JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
JOB_STATE_PENDING JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
JOB_STATE_CANCELLING JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
JOB_STATE_QUEUED JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
JOB_STATE_RESOURCE_CLEANING_UP JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.

JobType

Specifies the processing model used by a google.dataflow.v1beta3.Job, which determines the way the Job is managed by the Cloud Dataflow service (how workers are scheduled, how inputs are sharded, etc).

Enums
JOB_TYPE_UNKNOWN The type of the job is unspecified, or unknown.
JOB_TYPE_BATCH A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
JOB_TYPE_STREAMING A continuously streaming job with no end: data is read, processed, and written continuously.

JobView

Selector for how much information is returned in Job responses.

Enums
JOB_VIEW_UNKNOWN The job view to return isn't specified, or is unknown. Responses will contain at least the JOB_VIEW_SUMMARY information, and may contain additional information.
JOB_VIEW_SUMMARY Request summary information only: Project ID, Job ID, job name, job type, job status, start/end time, and Cloud SDK version details.
JOB_VIEW_ALL Request all information available for this job. When the job is in JOB_STATE_PENDING, the job has been created but is not yet running, and not all job information is available. For complete job information, wait until the job in is JOB_STATE_RUNNING. For more information, see JobState.
JOB_VIEW_DESCRIPTION Request summary info and limited job description data for steps, labels and environment.

KindType

Type of transform or stage operation.

Enums
UNKNOWN_KIND Unrecognized transform type.
PAR_DO_KIND ParDo transform.
GROUP_BY_KEY_KIND Group By Key transform.
FLATTEN_KIND Flatten transform.
READ_KIND Read transform.
WRITE_KIND Write transform.
CONSTANT_KIND Constructs from a constant value, such as with Create.of.
SINGLETON_KIND Creates a Singleton view of a collection.
SHUFFLE_KIND Opening or closing a shuffle session, often as part of a GroupByKey.

LaunchFlexTemplateParameter

Launch FlexTemplate Parameter.

Fields
job_name

string

Required. The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

parameters

map<string, string>

The parameters for the Flex Template. For example: {"num_workers":"5"}

launch_options

map<string, string>

Launch options for this flex template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

environment

FlexTemplateRuntimeEnvironment

The runtime environment for the FlexTemplate job

update

bool

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.

transform_name_mappings

map<string, string>

Use this to pass transform_name_mappings for streaming update jobs. For example: {"oldTransformName":"newTransformName",...}

Union field template. Launch Mechanism. template can be only one of the following:
container_spec_gcs_path

string

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

LaunchFlexTemplateRequest

A request to launch a Cloud Dataflow job from a FlexTemplate.

Fields
project_id

string

Required. The ID of the Cloud Platform project that the job belongs to.

launch_parameter

LaunchFlexTemplateParameter

Required. Parameter to launch a job from a Flex Template.

location

string

Required. The regional endpoint to which to direct the request. E.g., us-central1, us-west1.

validate_only

bool

If true, the request is validated but not actually executed. Defaults to false.

LaunchFlexTemplateResponse

Response to the request to launch a job from Flex Template.

Fields
job

Job

The job that was launched, if the request was not a dry run and the job was successfully launched.

LaunchTemplateParameters

Parameters to provide to the template being launched. Note that the metadata in the pipeline code determines which runtime parameters are valid.

Fields
job_name

string

Required. The job name to use for the created job.

The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

parameters

map<string, string>

The runtime parameters to pass to the job.

environment

RuntimeEnvironment

The runtime environment for the job.

update

bool

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

transform_name_mapping

map<string, string>

Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

LaunchTemplateRequest

A request to launch a template.

Fields
project_id

string

Required. The ID of the Cloud Platform project that the job belongs to.

validate_only

bool

If true, the request is validated but not actually executed. Defaults to false.

launch_parameters

LaunchTemplateParameters

The parameters of the template to launch. Part of the body of the POST request.

location

string

The regional endpoint to which to direct the request.

Union field template. The template to use to create the job. template can be only one of the following:
gcs_path

string

A Cloud Storage path to the template to use to create the job. Must be valid Cloud Storage URL, beginning with gs://.

dynamic_template

DynamicTemplateLaunchParams

Parameters for launching a dynamic template.

LaunchTemplateResponse

Response to the request to launch a template.

Fields
job

Job

The job that was launched, if the request was not a dry run and the job was successfully launched.

ListJobMessagesRequest

Request to list job messages. Up to max_results messages will be returned in the specified time range, starting with the oldest messages first. If no time range is specified, the results will start with the oldest message.

Fields
project_id

string

A project id.

job_id

string

The job to get messages about.

minimum_importance

JobMessageImportance

Filter to only get messages with importance >= level.

page_size

int32

If specified, determines the maximum number of messages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results.

page_token

string

If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned.

start_time

Timestamp

If specified, return only messages with timestamps >= start_time. The default is the job creation time (i.e. beginning of messages).

end_time

Timestamp

Return only messages with timestamps < end_time. The default is now (i.e. return up to the latest messages available).

location

string

The regional endpoint that contains the job specified by job_id.

ListJobMessagesResponse

Response to a request to list job messages.

Fields
job_messages[]

JobMessage

Messages in ascending timestamp order.

next_page_token

string

The token to obtain the next page of results if there are more.

autoscaling_events[]

AutoscalingEvent

Autoscaling events in ascending timestamp order.

ListJobsRequest

Request to list Cloud Dataflow jobs.

Fields
filter

Filter

The kind of filter to use.

project_id

string

The project which owns the jobs.

view
(deprecated)

JobView

Deprecated. ListJobs always returns summaries now. Use GetJob for other JobViews.

page_size

int32

If there are many jobs, limit response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit.

page_token

string

Set this to the 'next_page_token' field of a previous response to request additional results in a long list.

location

string

The regional endpoint that contains this job.

name

string

Optional. The job name.

Filter

This field filters out and returns jobs in the specified job state. The order of data returned is determined by the filter used, and is subject to change.

Enums
UNKNOWN The filter isn't specified, or is unknown. This returns all jobs ordered on descending JobUuid.
ALL Returns all running jobs first ordered on creation timestamp, then returns all terminated jobs ordered on the termination timestamp.
TERMINATED Filters the jobs that have a terminated state, ordered on the termination timestamp. Example terminated states: JOB_STATE_STOPPED, JOB_STATE_UPDATED, JOB_STATE_DRAINED, etc.
ACTIVE Filters the jobs that are running ordered on the creation timestamp.

ListJobsResponse

Response to a request to list Cloud Dataflow jobs in a project. This might be a partial response, depending on the page size in the ListJobsRequest. However, if the project does not have any jobs, an instance of ListJobsResponse is not returned and the request's response body is empty {}.

Fields
jobs[]

Job

A subset of the requested job information.

next_page_token

string

Set if there may be more results than fit in this response.

failed_location[]

FailedLocation

Zero or more messages describing the regional endpoints that failed to respond.

MetricStructuredName

Identifies a metric, by describing the source which generated the metric.

Fields
origin

string

Origin (namespace) of metric name. May be blank for user-defined metrics; will be "dataflow" for metrics defined by the Dataflow service or SDK.

name

string

Worker-defined metric name.

context

map<string, string>

Zero or more labeled fields which identify the part of the job this metric is associated with, such as the name of a step or collection.

For example, built-in counters associated with steps will have context['step'] = <step-name>. Counters associated with PCollections in the SDK will have context['pcollection'] = <pcollection-name>.

MetricUpdate

Describes the state of a metric.

Fields
name

MetricStructuredName

Name of the metric.

kind

string

Metric aggregation kind. The possible metric aggregation kinds are "Sum", "Max", "Min", "Mean", "Set", "And", "Or", and "Distribution". The specified aggregation kind is case-insensitive.

If omitted, this is not an aggregated value but instead a single metric sample value.

cumulative

bool

True if this metric is reported as the total cumulative aggregate value accumulated since the worker started working on this WorkItem. By default this is false, indicating that this metric is reported as a delta that is not associated with any WorkItem.

scalar

Value

Worker-computed aggregate value for aggregation kinds "Sum", "Max", "Min", "And", and "Or". The possible value types are Long, Double, and Boolean.

mean_sum

Value

Worker-computed aggregate value for the "Mean" aggregation kind. This holds the sum of the aggregated values and is used in combination with mean_count below to obtain the actual mean aggregate value. The only possible value types are Long and Double.

mean_count

Value

Worker-computed aggregate value for the "Mean" aggregation kind. This holds the count of the aggregated values and is used in combination with mean_sum above to obtain the actual mean aggregate value. The only possible value type is Long.

set

Value

Worker-computed aggregate value for the "Set" aggregation kind. The only possible value type is a list of Values whose type can be Long, Double, or String, according to the metric's type. All Values in the list must be of the same type.

distribution

Value

A struct value describing properties of a distribution of numeric values.

gauge

Value

A struct value describing properties of a Gauge. Metrics of gauge type show the value of a metric across time, and are aggregated based on the newest value.

internal

Value

Worker-computed aggregate value for internal use by the Dataflow service.

update_time

Timestamp

Timestamp associated with the metric value. Optional when workers are reporting work progress; it will be filled in responses from the metrics API.
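
As a small worked example, a "Mean" aggregate is recovered from mean_sum and mean_count by division, and the structured name's context map ties the metric to a step. The values and names below are hypothetical and only illustrate the field relationships described above.

    # Hypothetical "Mean" metric update: the reported value is split into
    # mean_sum and mean_count, and the actual mean is their quotient.
    mean_sum = 1240.0    # sum of the aggregated values (Long or Double)
    mean_count = 31      # count of the aggregated values (Long)
    mean_value = mean_sum / mean_count   # = 40.0

    # The structured name identifies the metric's source; built-in step
    # counters carry the step name in context['step'].
    structured_name = {
        "origin": "dataflow",
        "name": "ElementCount",              # hypothetical metric name
        "context": {"step": "ReadFromSource"},
    }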

Package

The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool.

This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run.

Fields
name

string

The name of the package.

location

string

The resource to read the package from. The supported resource type is:

Google Cloud Storage:

storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

ParameterMetadata

Metadata for a specific parameter.

Fields
name

string

Required. The name of the parameter.

label

string

Required. The label to display for the parameter.

help_text

string

Required. The help text to display for the parameter.

is_optional

bool

Optional. Whether the parameter is optional. Defaults to false.

regexes[]

string

Optional. Regexes that the parameter must match.

param_type

ParameterType

Optional. The type of the parameter. Used for selecting input picker.

custom_metadata

map<string, string>

Optional. Additional metadata for describing this parameter.

group_name

string

Optional. Specifies a group name for this parameter to be rendered under. Group header text will be rendered exactly as specified in this field. Only considered when parent_name is NOT provided.

parent_name

string

Optional. Specifies the name of the parent parameter. Used in conjunction with 'parent_trigger_values' to make this parameter conditional (will only be rendered conditionally). Should be mappable to a ParameterMetadata.name field.

parent_trigger_values[]

string

Optional. The value(s) of the 'parent_name' parameter which will trigger this parameter to be shown. If left empty, ANY non-empty value in parent_name will trigger this parameter to be shown. Only considered when this parameter is conditional (when 'parent_name' has been provided).

enum_options[]

ParameterMetadataEnumOption

Optional. The options shown when ENUM ParameterType is specified.

default_value

string

Optional. The default value will pre-populate the parameter with the given value from the proto. If default_value is left empty, the parameter will be populated with a default of the relevant type, e.g. false for a boolean.

hidden_ui

bool

Optional. Whether the parameter should be hidden in the UI.

ParameterMetadataEnumOption

ParameterMetadataEnumOption specifies the option shown in the enum form.

Fields
value

string

Required. The value of the enum option.

label

string

Optional. The label to display for the enum option.

description

string

Optional. The description to display for the enum option.

ParameterType

ParameterType specifies what kind of input we need for this parameter.

Enums
DEFAULT Default input type.
TEXT The parameter specifies generic text input.
GCS_READ_BUCKET The parameter specifies a Cloud Storage Bucket to read from.
GCS_WRITE_BUCKET The parameter specifies a Cloud Storage Bucket to write to.
GCS_READ_FILE The parameter specifies a Cloud Storage file path to read from.
GCS_WRITE_FILE The parameter specifies a Cloud Storage file path to write to.
GCS_READ_FOLDER The parameter specifies a Cloud Storage folder path to read from.
GCS_WRITE_FOLDER The parameter specifies a Cloud Storage folder to write to.
PUBSUB_TOPIC The parameter specifies a Pub/Sub Topic.
PUBSUB_SUBSCRIPTION The parameter specifies a Pub/Sub Subscription.
BIGQUERY_TABLE The parameter specifies a BigQuery table.
JAVASCRIPT_UDF_FILE The parameter specifies a JavaScript UDF in Cloud Storage.
SERVICE_ACCOUNT The parameter specifies a Service Account email.
MACHINE_TYPE The parameter specifies a Machine Type.
KMS_KEY_NAME The parameter specifies a KMS Key name.
WORKER_REGION The parameter specifies a Worker Region.
WORKER_ZONE The parameter specifies a Worker Zone.
BOOLEAN The parameter specifies a boolean input.
ENUM The parameter specifies an enum input.
NUMBER The parameter specifies a number input.
KAFKA_TOPIC Deprecated. Please use KAFKA_READ_TOPIC instead.
KAFKA_READ_TOPIC The parameter specifies the fully-qualified name of an Apache Kafka topic. This can be either a Google Managed Kafka topic or a non-managed Kafka topic.
KAFKA_WRITE_TOPIC The parameter specifies the fully-qualified name of an Apache Kafka topic. This can be an existing Google Managed Kafka topic, the name for a new Google Managed Kafka topic, or an existing non-managed Kafka topic.
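
For illustration, the sketch below builds a ParameterMetadata for a conditional ENUM parameter using the Python proto types, assuming they are exposed by the google-cloud-dataflow-client library; the parameter names and values are invented for the example.

    from google.cloud import dataflow_v1beta3

    # Hypothetical template parameter: an ENUM rendered only when the
    # parameter named "outputType" is set to "bigquery".
    param = dataflow_v1beta3.ParameterMetadata(
        name="writeDisposition",
        label="BigQuery write disposition",
        help_text="How to write to the destination table.",
        is_optional=True,
        param_type=dataflow_v1beta3.ParameterType.ENUM,
        enum_options=[
            dataflow_v1beta3.ParameterMetadataEnumOption(
                value="WRITE_APPEND", label="Append"),
            dataflow_v1beta3.ParameterMetadataEnumOption(
                value="WRITE_TRUNCATE", label="Truncate"),
        ],
        parent_name="outputType",
        parent_trigger_values=["bigquery"],
        default_value="WRITE_APPEND",
    )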

PipelineDescription

A descriptive representation of the submitted pipeline as well as the executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow-provided metrics.

Fields
original_pipeline_transform[]

TransformSummary

Description of each transform in the pipeline and collections between them.

execution_pipeline_stage[]

ExecutionStageSummary

Description of each stage of execution of the pipeline.

display_data[]

DisplayData

Pipeline level display data.

step_names_hash

string

A hash value of the submitted pipeline's portable graph step names, if it exists.

ProgressTimeseries

Information about the progress of some component of job execution.

Fields
current_progress

double

The current progress of the component, in the range [0,1].

data_points[]

Point

History of progress for the component.

Points are sorted by time.

Point

A point in the timeseries.

Fields
time

Timestamp

The timestamp of the point.

value

double

The value of the point.

PubSubIODetails

Metadata for a Pub/Sub connector used by the job.

Fields
topic

string

Topic accessed in the connection.

subscription

string

Subscription used in the connection.

PubsubSnapshotMetadata

Represents a Pubsub snapshot.

Fields
topic_name

string

The name of the Pubsub topic.

snapshot_name

string

The name of the Pubsub snapshot.

expire_time

Timestamp

The expire time of the Pubsub snapshot.

RuntimeEnvironment

The environment values to set at runtime.

Fields
num_workers

int32

Optional. The initial number of Google Compute Engine instances for the job. The default value is 11.

max_workers

int32

Optional. The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000. The default value is 1.

zone

string

Optional. The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

service_account_email

string

Optional. The email address of the service account to run the job as.

temp_location

string

Required. The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

bypass_temp_dir_validation

bool

Optional. Whether to bypass the safety checks for the job's temporary directory. Use with caution.

machine_type

string

Optional. The machine type to use for the job. Defaults to the value from the template if not specified.

additional_experiments[]

string

Optional. Additional experiment flags for the job, specified with the --experiments option.

network

string

Optional. Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

subnetwork

string

Optional. Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

additional_user_labels

map<string, string>

Optional. Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

kms_key_name

string

Optional. Name for the Cloud KMS key for the job. Key format is: projects/<project>/locations/<location>/keyRings/<keyring>/cryptoKeys/<key>

ip_configuration

WorkerIPAddressConfiguration

Optional. Configuration for VM IPs.

worker_region

string

Required. The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

worker_zone

string

Optional. The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

enable_streaming_engine

bool

Optional. Whether to enable Streaming Engine for the job.

disk_size_gb

int32

Optional. The disk size, in gigabytes, to use on each remote Compute Engine worker instance.

streaming_mode

StreamingMode

Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode.
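
As an illustration of how these fields fit together, the sketch below launches a classic template with a RuntimeEnvironment via the Python client's TemplatesServiceClient. The bucket, template path, project, and parameter names are placeholders, and the client surface is assumed from the google-cloud-dataflow-client library.

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.TemplatesServiceClient()
    environment = dataflow_v1beta3.RuntimeEnvironment(
        temp_location="gs://my-bucket/temp",   # required, must be a gs:// path
        max_workers=10,
        worker_region="us-west1",
        additional_user_labels={"team": "analytics"},
    )
    request = dataflow_v1beta3.LaunchTemplateRequest(
        project_id="my-project",                              # placeholder
        location="us-west1",
        gcs_path="gs://my-bucket/templates/my-template",      # placeholder template
        launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
            job_name="example-job",
            parameters={"inputFile": "gs://my-bucket/input.txt"},
            environment=environment,
        ),
    )
    response = client.launch_template(request=request)
    print(response.job.id, response.job.current_state)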

RuntimeMetadata

RuntimeMetadata describing a runtime environment.

Fields
sdk_info

SDKInfo

SDK Info for the template.

parameters[]

ParameterMetadata

The parameters for the template.

RuntimeUpdatableParams

Additional job parameters that can only be updated during runtime using the projects.jobs.update method. These fields have no effect when specified during job creation.

Fields
max_num_workers

int32

The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

min_num_workers

int32

The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

worker_utilization_hint

double

Target worker utilization, compared against the aggregate utilization of the worker pool by the autoscaler to determine upscaling and downscaling in the absence of other constraints such as backlog. For more information, see Update an existing pipeline.
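
A minimal sketch of updating these runtime parameters on a running Streaming Engine job through projects.jobs.update follows, assuming the Job proto exposes a runtime_updatable_params field in the Python client and that the field mask paths shown are accepted by the service; the identifiers are placeholders.

    from google.cloud import dataflow_v1beta3
    from google.protobuf import field_mask_pb2

    client = dataflow_v1beta3.JobsV1Beta3Client()
    updated_job = dataflow_v1beta3.Job(
        runtime_updatable_params=dataflow_v1beta3.RuntimeUpdatableParams(
            min_num_workers=2,
            max_num_workers=20,
        ),
    )
    request = dataflow_v1beta3.UpdateJobRequest(
        project_id="my-project",                   # placeholder
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",   # placeholder
        job=updated_job,
        # Assumed mask paths; verify the field paths the service accepts.
        update_mask=field_mask_pb2.FieldMask(paths=[
            "runtime_updatable_params.min_num_workers",
            "runtime_updatable_params.max_num_workers",
        ]),
    )
    client.update_job(request=request)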

SDKInfo

SDK Information.

Fields
language

Language

Required. The SDK Language.

version

string

Optional. The SDK version.

Language

SDK Language.

Enums
UNKNOWN UNKNOWN Language.
JAVA Java.
PYTHON Python.
GO Go.

SdkBug

A bug found in the Dataflow SDK.

Fields
type

Type

Output only. Describes the impact of this SDK bug.

severity

Severity

Output only. How severe the SDK bug is.

uri

string

Output only. Link to more information on the bug.

Severity

Indicates the severity of the bug. Other severities may be added to this list in the future.

Enums
SEVERITY_UNSPECIFIED A bug of unknown severity.
NOTICE A minor bug that may reduce reliability or performance for some jobs. Impact will be minimal or non-existent for most jobs.
WARNING A bug that has some likelihood of causing performance degradation, data loss, or job failures.
SEVERE A bug with extremely significant impact. Jobs may fail erroneously, performance may be severely degraded, and data loss may be very likely.

Type

Nature of the issue, ordered from least severe to most. Other bug types may be added to this list in the future.

Enums
TYPE_UNSPECIFIED Unknown issue with this SDK.
GENERAL Catch-all for SDK bugs that don't fit in the below categories.
PERFORMANCE Using this version of the SDK may result in degraded performance.
DATALOSS Using this version of the SDK may cause data loss.

SdkHarnessContainerImage

Defines an SDK harness container for executing Dataflow pipelines.

Fields
container_image

string

A Docker container image that resides in Google Container Registry.

use_single_core_per_container

bool

If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

environment_id

string

Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

capabilities[]

string

The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

SdkVersion

The version of the SDK used to run the job.

Fields
version

string

The version of the SDK used to run the job.

version_display_name

string

A readable string describing the version of the SDK.

sdk_support_status

SdkSupportStatus

The support status for this SDK version.

bugs[]

SdkBug

Output only. Known bugs found in this SDK version.

SdkSupportStatus

The support status of the SDK used to run the job.

Enums
UNKNOWN Cloud Dataflow is unaware of this version.
SUPPORTED This is a known version of an SDK, and is supported.
STALE A newer version of the SDK family exists, and an update is recommended.
DEPRECATED This version of the SDK is deprecated and will eventually be unsupported.
UNSUPPORTED Support for this SDK version has ended and it should no longer be used.

ServiceResources

Resources used by the Dataflow Service to run the job.

Fields
zones[]

string

Output only. List of Cloud Zones being used by the Dataflow Service for this job. Example: us-central1-c

ShuffleMode

Specifies the shuffle mode used by a google.dataflow.v1beta3.Job, which determines how data is shuffled during processing. More details in: https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#dataflow-shuffle

Enums
SHUFFLE_MODE_UNSPECIFIED Shuffle mode information is not available.
VM_BASED Shuffle is done on the worker VMs.
SERVICE_BASED Shuffle is done on the service side.

Snapshot

Represents a snapshot of a job.

Fields
id

string

The unique ID of this snapshot.

project_id

string

The project this snapshot belongs to.

source_job_id

string

The job this snapshot was created from.

creation_time

Timestamp

The time this snapshot was created.

ttl

Duration

The time after which this snapshot will be automatically deleted.

state

SnapshotState

State of the snapshot.

pubsub_metadata[]

PubsubSnapshotMetadata

Pub/Sub snapshot metadata.

description

string

User-specified description of the snapshot. May be empty.

disk_size_bytes

int64

The disk byte size of the snapshot. Only available for snapshots in READY state.

region

string

Cloud region where this snapshot lives, e.g., "us-central1".

SnapshotJobRequest

Request to create a snapshot of a job.

Fields
project_id

string

The project which owns the job to be snapshotted.

job_id

string

The job to be snapshotted.

ttl

Duration

TTL for the snapshot.

location

string

The location that contains this job.

snapshot_sources

bool

If true, perform snapshots for sources which support this.

description

string

User-specified description of the snapshot. May be empty.
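
For illustration, a minimal sketch of snapshotting a streaming job with a TTL using the Python client follows; the project, region, and job IDs are placeholders, and the client surface is assumed from the google-cloud-dataflow-client library.

    from google.cloud import dataflow_v1beta3
    from google.protobuf import duration_pb2

    client = dataflow_v1beta3.JobsV1Beta3Client()
    request = dataflow_v1beta3.SnapshotJobRequest(
        project_id="my-project",                   # placeholder
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",   # placeholder streaming job
        ttl=duration_pb2.Duration(seconds=7 * 24 * 3600),  # keep for 7 days
        snapshot_sources=True,
        description="Pre-update snapshot",
    )
    snapshot = client.snapshot_job(request=request)
    print(snapshot.id, snapshot.state)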

SnapshotState

Snapshot state.

Enums
UNKNOWN_SNAPSHOT_STATE Unknown state.
PENDING Snapshot intent to create has been persisted, snapshotting of state has not yet started.
RUNNING Snapshotting is being performed.
READY Snapshot has been created and is ready to be used.
FAILED Snapshot failed to be created.
DELETED Snapshot has been deleted.

SpannerIODetails

Metadata for a Spanner connector used by the job.

Fields
project_id

string

ProjectId accessed in the connection.

instance_id

string

InstanceId accessed in the connection.

database_id

string

DatabaseId accessed in the connection.

StageExecutionDetails

Information about the workers and work items within a stage.

Fields
workers[]

WorkerDetails

Workers that have done work on the stage.

next_page_token

string

If present, this response does not contain all requested tasks. To obtain the next page of results, repeat the request with page_token set to this value.

StageSummary

Information about a particular execution stage of a job.

Fields
stage_id

string

ID of this stage.

state

ExecutionState

State of this stage.

start_time

Timestamp

Start time of this stage.

end_time

Timestamp

End time of this stage.

If the work item is completed, this is the actual end time of the stage. Otherwise, it is the predicted end time.

progress

ProgressTimeseries

Progress for this stage. Only applicable to Batch jobs.

metrics[]

MetricUpdate

Metrics for this stage.

straggler_summary

StragglerSummary

Straggler summary for this stage.

Step

Defines a particular step within a Cloud Dataflow job.

A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job.

Note: The properties of this object are not stable and might change.

Here's an example of a sequence of steps which together implement a Map-Reduce job:

  • Read a collection of data from some source, parsing the collection's elements.

  • Validate the elements.

  • Apply a user-defined function to map each element to some value and extract an element-specific key value.

  • Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.

  • Write the elements out to some data sink.

Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce.

Fields
kind

string

The kind of step in the Cloud Dataflow job.

name

string

The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

properties

Struct

Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

Straggler

Information for a straggler.

Fields
Union field straggler_info. Information useful for straggler identification and debugging. straggler_info can be only one of the following:
batch_straggler

StragglerInfo

Batch straggler identification and debugging information.

streaming_straggler

StreamingStragglerInfo

Streaming straggler identification and debugging information.

StragglerInfo

Information useful for straggler identification and debugging.

Fields
start_time

Timestamp

The time when the work item attempt became a straggler.

causes

map<string, StragglerDebuggingInfo>

The straggler causes, keyed by the string representation of the StragglerCause enum, with specialized debugging information for each straggler cause.

StragglerDebuggingInfo

Information useful for debugging a straggler. Each type will provide specialized debugging information relevant for a particular cause. The StragglerDebuggingInfo maps 1:1 to the StragglerCause enum.

Fields

Union field straggler_debugging_info_value. straggler_debugging_info_value can be only one of the following:

hot_key

HotKeyDebuggingInfo

Hot key debugging details.

StragglerSummary

Summarized straggler identification details.

Fields
total_straggler_count

int64

The total count of stragglers.

straggler_cause_count

map<string, int64>

Aggregated counts of straggler causes, keyed by the string representation of the StragglerCause enum.

recent_stragglers[]

Straggler

The most recent stragglers.
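
As a sketch of how these straggler messages surface in practice, the example below reads per-stage execution details and prints each stage's straggler counts by cause, assuming the Python client's MetricsV1Beta3Client exposes get_job_execution_details as a pager over StageSummary entries; the identifiers are placeholders.

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.MetricsV1Beta3Client()
    request = dataflow_v1beta3.GetJobExecutionDetailsRequest(
        project_id="my-project",                   # placeholder
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",   # placeholder
    )
    # Each element is a StageSummary; its straggler_summary aggregates
    # straggler counts by cause and lists the most recent stragglers.
    for stage in client.get_job_execution_details(request=request):
        summary = stage.straggler_summary
        if summary.total_straggler_count:
            print(stage.stage_id, dict(summary.straggler_cause_count))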

StreamingMode

Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages written to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode.

Enums
STREAMING_MODE_UNSPECIFIED Run in the default mode.
STREAMING_MODE_EXACTLY_ONCE In this mode, message deduplication is performed against persistent state to make sure each message is processed and committed to storage exactly once.
STREAMING_MODE_AT_LEAST_ONCE Message deduplication is not performed. Messages might be processed multiple times, and the results are applied multiple times. Note: Setting this value also enables Streaming Engine and Streaming Engine resource-based billing.

StreamingStragglerInfo

Information useful for streaming straggler identification and debugging.

Fields
start_time

Timestamp

Start time of this straggler.

end_time

Timestamp

End time of this straggler.

worker_name

string

Name of the worker where the straggler was detected.

data_watermark_lag

Duration

The event-time watermark lag at the time of the straggler detection.

system_watermark_lag

Duration

The system watermark lag at the time of the straggler detection.

StructuredMessage

A rich message format, including a human readable string, a key for identifying the message, and structured data associated with the message for programmatic consumption.

Fields
message_text

string

Human-readable version of message.

message_key

string

Identifier for this message type. Used by external systems to internationalize or personalize the message.

parameters[]

Parameter

The structured data associated with this message.

Parameter

Structured data associated with this message.

Fields
key

string

Key or name for this parameter.

value

Value

Value for this parameter.

TaskRunnerSettings

Taskrunner configuration settings.

Fields
task_user

string

The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

task_group

string

The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

oauth_scopes[]

string

The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

base_url

string

The base URL for the taskrunner to use when accessing Google Cloud APIs.

When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators".

If not specified, the default value is "http://www.googleapis.com/"

dataflow_api_version

string

The API version of the endpoint, e.g. "v1b3"

parallel_worker_settings

WorkerSettings

The settings to pass to the parallel worker harness.

base_task_dir

string

The location on the worker for task-specific subdirectories.

continue_on_exception

bool

Whether to continue taskrunner if an exception is hit.

log_to_serialconsole

bool

Whether to send taskrunner log info to Google Compute Engine VM serial console.

alsologtostderr

bool

Whether to also send taskrunner log info to stderr.

log_upload_location

string

Indicates where to put logs. If this is not specified, the logs will not be uploaded.

The supported resource type is:

Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

log_dir

string

The directory on the VM to store logs.

temp_storage_prefix

string

The prefix of the resources the taskrunner should use for temporary storage.

The supported resource type is:

Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

harness_command

string

The command to launch the worker harness.

workflow_file_name

string

The file to store the workflow in.

commandlines_file_name

string

The file to store preprocessing commands in.

vm_id

string

The ID string of the VM.

language_hint

string

The suggested backend language.

streaming_worker_main_class

string

The streaming worker main class name.

TeardownPolicy

Specifies what happens to a resource when a Cloud Dataflow google.dataflow.v1beta3.Job has completed.

Enums
TEARDOWN_POLICY_UNKNOWN The teardown policy isn't specified, or is unknown.
TEARDOWN_ALWAYS Always teardown the resource.
TEARDOWN_ON_SUCCESS Teardown the resource on success. This is useful for debugging failures.
TEARDOWN_NEVER Never teardown the resource. This is useful for debugging and development.

TemplateMetadata

Metadata describing a template.

Fields
name

string

Required. The name of the template.

description

string

Optional. A description of the template.

parameters[]

ParameterMetadata

The parameters for the template.

streaming

bool

Optional. Indicates if the template is streaming or not.

supports_at_least_once

bool

Optional. Indicates if the streaming template supports at least once mode.

supports_exactly_once

bool

Optional. Indicates if the streaming template supports exactly once mode.

default_streaming_mode

string

Optional. Indicates the default streaming mode for a streaming template. Only valid if both supports_at_least_once and supports_exactly_once are true. Possible values: UNSPECIFIED, EXACTLY_ONCE, and AT_LEAST_ONCE.

TransformSummary

Description of the type, names/ids, and input/outputs for a transform.

Fields
kind

KindType

Type of transform.

id

string

SDK generated id of this transform instance.

name

string

User provided name for this transform instance.

display_data[]

DisplayData

Transform-specific display data.

output_collection_name[]

string

User names for all collection outputs to this transform.

input_collection_name[]

string

User names for all collection inputs to this transform.

UpdateJobRequest

Request to update a Cloud Dataflow job.

Fields
project_id

string

The ID of the Cloud Platform project that the job belongs to.

job_id

string

The job ID.

job

Job

The updated job. Only the job state is updatable; other fields will be ignored.

location

string

The regional endpoint that contains this job.

update_mask

FieldMask

The list of fields to update relative to Job. If empty, only RequestedJobState will be considered for update. If the FieldMask is not empty and RequestedJobState is none/empty, the fields specified in the update mask will be the only ones considered for update. If both RequestedJobState and update_mask are specified, an error will be returned, as we cannot update both state and mask.
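
The most common case, leaving update_mask empty and changing only the requested state, is sketched below for cancelling a job with the Python client; the project, region, and job IDs are placeholders, and the client surface is assumed from the google-cloud-dataflow-client library.

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()
    # With an empty update_mask, only the requested job state is considered,
    # so setting requested_state requests cancellation of the job.
    request = dataflow_v1beta3.UpdateJobRequest(
        project_id="my-project",                   # placeholder
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",   # placeholder
        job=dataflow_v1beta3.Job(
            requested_state=dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
        ),
    )
    client.update_job(request=request)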

WorkItemDetails

Information about an individual work item execution.

Fields
task_id

string

Name of this work item.

attempt_id

string

Attempt ID of this work item.

start_time

Timestamp

Start time of this work item attempt.

end_time

Timestamp

End time of this work item attempt.

If the work item is completed, this is the actual end time of the work item. Otherwise, it is the predicted end time.

state

ExecutionState

State of this work item.

progress

ProgressTimeseries

Progress of this work item.

metrics[]

MetricUpdate

Metrics for this work item.

straggler_info

StragglerInfo

Information about straggler detections for this work item.

WorkerDetails

Information about a worker.

Fields
worker_name

string

Name of this worker.

work_items[]

WorkItemDetails

Work items processed by this worker, sorted by time.

WorkerIPAddressConfiguration

Specifies how to allocate IP addresses to worker machines. You can also use pipeline options to specify whether Dataflow workers use external IP addresses.

Enums
WORKER_IP_UNSPECIFIED The configuration is unknown, or unspecified.
WORKER_IP_PUBLIC Workers should have public IP addresses.
WORKER_IP_PRIVATE Workers should have private IP addresses.

WorkerPool

Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.

Fields
kind

string

The kind of the worker pool; currently only harness and shuffle are supported.

num_workers

int32

Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

packages[]

Package

Packages to be installed on workers.

default_package_set

DefaultPackageSet

The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

machine_type

string

Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

teardown_policy

TeardownPolicy

Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down.

If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs.

If unknown or unspecified, the service will attempt to choose a reasonable default.

disk_size_gb

int32

Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

disk_type

string

Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

disk_source_image

string

Fully qualified source image for disks.

zone

string

Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

taskrunner_settings

TaskRunnerSettings

Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

on_host_maintenance

string

The action to take on host maintenance, as defined by the Google Compute Engine API.

data_disks[]

Disk

Data disks that are used by a VM in this workflow.

metadata

map<string, string>

Metadata to set on the Google Compute Engine VMs.

autoscaling_settings

AutoscalingSettings

Settings for autoscaling of this WorkerPool.

pool_args

Any

Extra arguments for this worker pool.

network

string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

subnetwork

string

Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

worker_harness_container_image

string

Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry.

Deprecated for the Fn API path. Use sdk_harness_container_images instead.

num_threads_per_worker

int32

The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

ip_configuration

WorkerIPAddressConfiguration

Configuration for VM IPs.

sdk_harness_container_images[]

SdkHarnessContainerImage

Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

WorkerSettings

Provides data to pass through to the worker harness.

Fields
base_url

string

The base URL for accessing Google Cloud APIs.

When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators".

If not specified, the default value is "http://www.googleapis.com/"

reporting_enabled

bool

Whether to send work progress updates to the service.

service_path

string

The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".

shuffle_service_path

string

The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".

worker_id

string

The ID of the worker running this pipeline.

temp_storage_prefix

string

The prefix of the resources the system should use for temporary storage.

The supported resource type is:

Google Cloud Storage:

storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}