Types overview

ApproximateProgress

Obsolete in favor of ApproximateReportedProgress and ApproximateSplitRequest.
Fields
percentComplete

number (float format)

Obsolete.

position

object (Position)

Obsolete.

remainingTime

string (Duration format)

Obsolete.

ApproximateReportedProgress

A progress measurement of a WorkItem by a worker.
Fields
consumedParallelism

object (ReportedParallelism)

Total amount of parallelism in the portion of input of this task that has already been consumed and is no longer active. In the first two examples above (see remaining_parallelism), the value should be 29 or 2 respectively. The sum of remaining_parallelism and consumed_parallelism should equal the total amount of parallelism in this work item. If specified, must be finite.

fractionConsumed

number (double format)

Completion as fraction of the input consumed, from 0.0 (beginning, nothing consumed), to 1.0 (end of the input, entire input consumed).

position

object (Position)

A Position within the work to represent progress.

remainingParallelism

object (ReportedParallelism)

Total amount of parallelism in the input of this task that remains (i.e. can be delegated to this task and any new tasks via dynamic splitting). Always at least 1 for non-finished work items and 0 for finished. "Amount of parallelism" refers to how many non-empty parts of the input can be read in parallel. This does not necessarily equal the number of records. An input that can be read in parallel down to the individual records is called "perfectly splittable". An example of non-perfectly parallelizable input is a block-compressed file format where a block of records has to be read as a whole, but different blocks can be read in parallel.

Examples:
* If we are processing record #30 (starting at 1) out of 50 in a perfectly splittable 50-record input, this value should be 21 (20 remaining + 1 current).
* If we are reading through block 3 in a block-compressed file consisting of 5 blocks, this value should be 3 (since blocks 4 and 5 can be processed in parallel by new tasks via dynamic splitting, and the current task remains processing block 3).
* If we are reading through the last block in a block-compressed file, or reading or processing the last record in a perfectly splittable input, this value should be 1, because apart from the current task, no additional remainder can be split off.
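To make the field definitions concrete, the sketch below shows what a progress report for the block-compressed-file example (reading block 3 of 5) might look like as a Python dict mirroring the JSON shape of ApproximateReportedProgress. The ReportedParallelism shape ({"value", "isInfinite"}), the Position field recordIndex, and all numeric values are assumptions chosen to match the example, not output copied from the service.

```python
# Hypothetical ApproximateReportedProgress for a worker reading block 3 of a
# 5-block block-compressed file (values chosen to match the example above).
progress = {
    # Blocks 1 and 2 are fully consumed and no longer active.
    "consumedParallelism": {"value": 2, "isInfinite": False},
    # Block 3 (current) plus blocks 4 and 5, which could still be split off.
    "remainingParallelism": {"value": 3, "isInfinite": False},
    # Rough fraction of the whole input read so far (illustrative).
    "fractionConsumed": 0.5,
    # Reader-specific position; here an assumed record index within the input.
    "position": {"recordIndex": "120"},
}
```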

ApproximateSplitRequest

A suggestion by the service to the worker to dynamically split the WorkItem.
Fields
fractionConsumed

number (double format)

A fraction at which to split the work item, from 0.0 (beginning of the input) to 1.0 (end of the input).

fractionOfRemainder

number (double format)

The fraction of the remainder of work to split the work item at, from 0.0 (split at the current position) to 1.0 (end of the input).

position

object (Position)

A Position at which to split the work item.
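One way to read fractionOfRemainder, following the field description above, is that it is relative to the work still left, so the corresponding fraction of the whole input can be derived from the progress the worker last reported. A minimal sketch of that conversion, assuming the last reported fractionConsumed is available:

```python
def absolute_split_fraction(fraction_consumed: float, fraction_of_remainder: float) -> float:
    """Convert a split expressed as a fraction of the remaining work into a
    fraction of the whole input, per the field definitions above."""
    return fraction_consumed + fraction_of_remainder * (1.0 - fraction_consumed)

# With 40% of the input consumed, fractionOfRemainder = 0.5 corresponds to
# splitting at roughly the 70% point of the whole input.
print(absolute_split_fraction(0.4, 0.5))  # ~0.7
```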

AutoscalingEvent

A structured message reporting an autoscaling decision made by the Dataflow service.
Fields
currentNumWorkers

string (int64 format)

The current number of workers the job has.

description

object (StructuredMessage)

A message describing why the system decided to adjust the current number of workers, why it failed, or why the system decided to not make any changes to the number of workers.

eventType

enum

The type of autoscaling event to report.

Enum type. Can be one of the following:
TYPE_UNKNOWN Default type for the enum. Value should never be returned.
TARGET_NUM_WORKERS_CHANGED The TARGET_NUM_WORKERS_CHANGED type should be used when the target worker pool size has changed at the start of an actuation. An event should always be specified as TARGET_NUM_WORKERS_CHANGED if it reflects a change in the target_num_workers.
CURRENT_NUM_WORKERS_CHANGED The CURRENT_NUM_WORKERS_CHANGED type should be used when actual worker pool size has been changed, but the target_num_workers has not changed.
ACTUATION_FAILURE The ACTUATION_FAILURE type should be used when we want to report an error to the user indicating why the current number of workers in the pool could not be changed. Displayed in the current status and history widgets.
NO_CHANGE Used when we want to report to the user a reason why we are not currently adjusting the number of workers. Should specify target_num_workers, current_num_workers, and a decision_message.
targetNumWorkers

string (int64 format)

The target number of workers the worker pool wants to resize to use.

time

string (Timestamp format)

The time this event was emitted to indicate a new target or current num_workers value.

workerPool

string

A short and friendly name for the worker pool this event refers to.

AutoscalingSettings

Settings for WorkerPool autoscaling.
Fields
algorithm

enum

The algorithm to use for autoscaling.

Enum type. Can be one of the following:
AUTOSCALING_ALGORITHM_UNKNOWN The algorithm is unknown, or unspecified.
AUTOSCALING_ALGORITHM_NONE Disable autoscaling.
AUTOSCALING_ALGORITHM_BASIC Increase worker count over time to reduce job execution time.
maxNumWorkers

integer (int32 format)

The maximum number of workers to cap scaling at.
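For reference, an AutoscalingSettings object (as it appears inside a WorkerPool) is small; a sketch of its JSON shape using the two fields above, with an illustrative worker cap:

```python
# Illustrative AutoscalingSettings payload: basic autoscaling capped at 50 workers.
autoscaling_settings = {
    "algorithm": "AUTOSCALING_ALGORITHM_BASIC",
    "maxNumWorkers": 50,
}
```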

BigQueryIODetails

Metadata for a BigQuery connector used by the job.
Fields
dataset

string

Dataset accessed in the connection.

projectId

string

Project accessed in the connection.

query

string

Query used to access data in the connection.

table

string

Table accessed in the connection.

BigTableIODetails

Metadata for a Cloud Bigtable connector used by the job.
Fields
instanceId

string

InstanceId accessed in the connection.

projectId

string

ProjectId accessed in the connection.

tableId

string

TableId accessed in the connection.

CPUTime

Modeled after information exposed by /proc/stat.
Fields
rate

number (double format)

Average CPU utilization rate (% non-idle cpu / second) since previous sample.

timestamp

string (Timestamp format)

Timestamp of the measurement.

totalMs

string (uint64 format)

Total active CPU time across all cores (i.e., non-idle) in milliseconds since start-up.
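Because totalMs is cumulative since start-up while rate covers only the interval since the previous sample, a consumer of consecutive CPUTime samples can recompute utilization itself. A rough sketch using the field names above; the RFC 3339 timestamp layout and the sample values are assumptions:

```python
from datetime import datetime

def utilization_between(prev: dict, curr: dict) -> float:
    """Approximate average CPU utilization between two CPUTime samples,
    computed from the cumulative totalMs counters and the sample timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"  # assumed timestamp layout
    elapsed_s = (datetime.strptime(curr["timestamp"], fmt)
                 - datetime.strptime(prev["timestamp"], fmt)).total_seconds()
    active_s = (int(curr["totalMs"]) - int(prev["totalMs"])) / 1000.0
    return active_s / elapsed_s if elapsed_s > 0 else 0.0

prev = {"timestamp": "2024-01-01T00:00:00.000000Z", "totalMs": "120000"}
curr = {"timestamp": "2024-01-01T00:01:00.000000Z", "totalMs": "150000"}
print(utilization_between(prev, curr))  # 0.5 -> half a core busy on average
```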

ComponentSource

Description of an interstitial value between transforms in an execution stage.
Fields
name

string

Dataflow service generated name for this source.

originalTransformOrCollection

string

User name for the original user transform or collection with which this source is most closely associated.

userName

string

Human-readable name for this transform; may be user or system generated.

ComponentTransform

Description of a transform executed as part of an execution stage.
Fields
name

string

Dataflow service generated name for this source.

originalTransform

string

User name for the original user transform with which this transform is most closely associated.

userName

string

Human-readable name for this transform; may be user or system generated.

ComputationTopology

All configuration data for a particular Computation.
Fields
computationId

string

The ID of the computation.

inputs[]

object (StreamLocation)

The inputs to the computation.

keyRanges[]

object (KeyRangeLocation)

The key ranges processed by the computation.

outputs[]

object (StreamLocation)

The outputs from the computation.

stateFamilies[]

object (StateFamilyConfig)

The state family values.

systemStageName

string

The system stage name.

ConcatPosition

A position that encapsulates an inner position and an index for the inner position. A ConcatPosition can be used by a reader of a source that encapsulates a set of other sources.
Fields
index

integer (int32 format)

Index of the inner source.

position

object (Position)

Position within the inner source.

ContainerSpec

Container Spec.
Fields
defaultEnvironment

object (FlexTemplateRuntimeEnvironment)

Default runtime environment for the job.

image

string

Name of the docker container image. E.g., gcr.io/project/some-image

metadata

object (TemplateMetadata)

Metadata describing a template including description and validation rules.

sdkInfo

object (SDKInfo)

Required. SDK info of the Flex Template.
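A Flex Template spec file stored in Cloud Storage is essentially a serialized ContainerSpec. The sketch below shows a minimal JSON shape built from the fields above; the image name, the SDKInfo language value, and the TemplateMetadata parameter entries are placeholders/assumptions, not values taken from this reference.

```python
import json

# Hypothetical Flex Template spec; image, metadata, and SDK values are placeholders.
container_spec = {
    "image": "gcr.io/my-project/my-flex-template:latest",
    "sdkInfo": {"language": "PYTHON"},  # SDKInfo; language value assumed
    "metadata": {
        "name": "my-template",
        "parameters": [
            {"name": "inputFile", "label": "Input file", "helpText": "gs:// path to read"},
        ],
    },
    "defaultEnvironment": {"maxWorkers": 10},
}
print(json.dumps(container_spec, indent=2))
```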

CounterMetadata

CounterMetadata includes all static non-name non-value counter attributes.
Fields
description

string

Human-readable description of the counter semantics.

kind

enum

Counter aggregation kind.

Enum type. Can be one of the following:
INVALID Counter aggregation kind was not set.
SUM Aggregated value is the sum of all contributed values.
MAX Aggregated value is the max of all contributed values.
MIN Aggregated value is the min of all contributed values.
MEAN Aggregated value is the mean of all contributed values.
OR Aggregated value represents the logical 'or' of all contributed values.
AND Aggregated value represents the logical 'and' of all contributed values.
SET Aggregated value is a set of unique contributed values.
DISTRIBUTION Aggregated value captures statistics about a distribution.
LATEST_VALUE Aggregated value tracks the latest value of a variable.
otherUnits

string

A string referring to the unit type.

standardUnits

enum

System-defined units; see the enum above.

Enum type. Can be one of the following:
BYTES Counter returns a value in bytes.
BYTES_PER_SEC Counter returns a value in bytes per second.
MILLISECONDS Counter returns a value in milliseconds.
MICROSECONDS Counter returns a value in microseconds.
NANOSECONDS Counter returns a value in nanoseconds.
TIMESTAMP_MSEC Counter returns a timestamp in milliseconds.
TIMESTAMP_USEC Counter returns a timestamp in microseconds.
TIMESTAMP_NSEC Counter returns a timestamp in nanoseconds.

CounterStructuredName

Identifies a counter within a per-job namespace. Counters whose structured names are the same get merged into a single value for the job.
Fields
componentStepName

string

Name of the optimized step being executed by the workers.

executionStepName

string

Name of the stage. An execution step contains multiple component steps.

inputIndex

integer (int32 format)

Index of an input collection that's being read from/written to as a side input. The index identifies a step's side inputs starting from 1 (e.g. the first side input has input_index 1, the third has input_index 3). Side inputs are identified by a pair of (original_step_name, input_index). This field helps uniquely identify them.

name

string

Counter name. Not necessarily globally-unique, but unique within the context of the other fields. Required.

origin

enum

One of the standard Origins defined above.

Enum type. Can be one of the following:
SYSTEM Counter was created by the Dataflow system.
USER Counter was created by the user.
originNamespace

string

A string containing a more specific namespace of the counter's origin.

originalRequestingStepName

string

The step name requesting an operation, such as GBK, i.e. the ParDo causing a read/write from shuffle to occur, or a read from side inputs.

originalStepName

string

System generated name of the original step in the user's graph, before optimization.

portion

enum

Portion of this counter, either key or value.

Enum type. Can be one of the following:
ALL Counter portion has not been set.
KEY Counter reports a key.
VALUE Counter reports a value.
workerId

string

ID of a particular worker.

CounterStructuredNameAndMetadata

A single message which encapsulates structured name and metadata for a given counter.
Fields
metadata

object (CounterMetadata)

Metadata associated with a counter

name

object (CounterStructuredName)

Structured name of the counter.

CounterUpdate

An update to a Counter sent from a worker.
Fields
boolean

boolean

Boolean value for And, Or.

cumulative

boolean

True if this counter is reported as the total cumulative aggregate value accumulated since the worker started working on this WorkItem. By default this is false, indicating that this counter is reported as a delta.

distribution

object (DistributionUpdate)

Distribution data

floatingPoint

number (double format)

Floating point value for Sum, Max, Min.

floatingPointList

object (FloatingPointList)

List of floating point numbers, for Set.

floatingPointMean

object (FloatingPointMean)

Floating point mean aggregation value for Mean.

integer

object (SplitInt64)

Integer value for Sum, Max, Min.

integerGauge

object (IntegerGauge)

Gauge data

integerList

object (IntegerList)

List of integers, for Set.

integerMean

object (IntegerMean)

Integer mean aggregation value for Mean.

internal

any

Value for internally-defined counters used by the Dataflow service.

nameAndKind

object (NameAndKind)

Counter name and aggregation type.

shortId

string (int64 format)

The service-generated short identifier for this counter. The short_id -> (name, metadata) mapping is constant for the lifetime of a job.

stringList

object (StringList)

List of strings, for Set.

structuredNameAndMetadata

object (CounterStructuredNameAndMetadata)

Counter structured name and metadata.
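Integer-valued counters in this API are carried as SplitInt64 objects rather than raw JSON numbers. The sketch below builds a cumulative SUM CounterUpdate; the SplitInt64 {lowBits, highBits} encoding and the NameAndKind field names are assumptions about the v1b3 API, and the counter name is hypothetical.

```python
def to_split_int64(value: int) -> dict:
    """Encode a non-negative Python int as the assumed SplitInt64 shape."""
    return {"lowBits": value & 0xFFFFFFFF, "highBits": (value >> 32) & 0xFFFFFFFF}

# Hypothetical cumulative element-count counter reported by a worker.
counter_update = {
    "nameAndKind": {"name": "my-step-elements-read", "kind": "SUM"},
    "cumulative": True,
    "integer": to_split_int64(5_000_000_000),
}
```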

CreateJobFromTemplateRequest

A request to create a Cloud Dataflow job from a template.
Fields
environment

object (RuntimeEnvironment)

The runtime environment for the job.

gcsPath

string

Required. A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with gs://.

jobName

string

Required. The job name to use for the created job.

location

string

The regional endpoint to which to direct the request.

parameters

map (key: string, value: string)

The runtime parameters to pass to the job.
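For orientation, a classic-template job is created by POSTing this request body to the regional templates endpoint. A hedged sketch against the plain REST surface follows; the endpoint path is assumed to follow the v1b3 projects.locations.templates.create method, the access token is a placeholder, and the template path, bucket, and parameter names are illustrative.

```python
import requests

ACCESS_TOKEN = "ya29.placeholder"  # replace with a real OAuth2 access token
project, location = "my-project", "us-central1"
body = {
    "jobName": "wordcount-run-1",
    "gcsPath": "gs://dataflow-templates/latest/Word_Count",
    "parameters": {"inputFile": "gs://my-bucket/input.txt", "output": "gs://my-bucket/out"},
    "environment": {"tempLocation": "gs://my-bucket/temp"},
}
resp = requests.post(
    f"https://dataflow.googleapis.com/v1b3/projects/{project}/locations/{location}/templates",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
print(resp.json())  # on success, a Job resource
```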

CustomSourceLocation

Identifies the location of a custom source.
Fields
stateful

boolean

Whether this source is stateful.

DataDiskAssignment

Data disk assignment for a given VM instance.
Fields
dataDisks[]

string

Mounted data disks. The order is important: a data disk's 0-based index in this list defines which persistent directory the disk is mounted to, for example the list { "myproject-1014-104817-4c2-harness-0-disk-0" }, { "myproject-1014-104817-4c2-harness-0-disk-1" }.

vmInstance

string

VM instance name the data disks are mounted to, for example "myproject-1014-104817-4c2-harness-0".

DatastoreIODetails

Metadata for a Datastore connector used by the job.
Fields
namespace

string

Namespace used in the connection.

projectId

string

ProjectId accessed in the connection.

DebugOptions

Describes any options that have an effect on the debugging of pipelines.
Fields
enableHotKeyLogging

boolean

When true, enables the logging of the literal hot key to the user's Cloud Logging.

DerivedSource

Specification of one of the bundles produced as a result of splitting a Source (e.g. when executing a SourceSplitRequest, or when splitting an active task using WorkItemStatus.dynamic_source_split), relative to the source being split.
Fields
derivationMode

enum

What source to base the produced source on (if any).

Enum type. Can be one of the following:
SOURCE_DERIVATION_MODE_UNKNOWN The source derivation is unknown, or unspecified.
SOURCE_DERIVATION_MODE_INDEPENDENT Produce a completely independent Source with no base.
SOURCE_DERIVATION_MODE_CHILD_OF_CURRENT Produce a Source based on the Source being split.
SOURCE_DERIVATION_MODE_SIBLING_OF_CURRENT Produce a Source based on the base of the Source being split.
source

object (Source)

Specification of the source.

Disk

Describes the data disk used by a workflow job.
Fields
diskType

string

Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

mountPoint

string

Directory in a VM where disk is mounted.

sizeGb

integer (int32 format)

Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

DisplayData

Data provided with a pipeline or transform to provide descriptive info.
Fields
boolValue

boolean

Contains value if the data is of a boolean type.

durationValue

string (Duration format)

Contains value if the data is of duration type.

floatValue

number (float format)

Contains value if the data is of float type.

int64Value

string (int64 format)

Contains value if the data is of int64 type.

javaClassValue

string

Contains value if the data is of java class type.

key

string

The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

label

string

An optional label to display in a dax UI for the element.

namespace

string

The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

shortStrValue

string

A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

strValue

string

Contains value if the data is of string type.

timestampValue

string (Timestamp format)

Contains value if the data is of timestamp type.

url

string

An optional full URL.
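To make the shortStrValue relationship concrete, a single display-data entry for a Java DoFn might be shaped like the sketch below; the class and label names are illustrative.

```python
# Hypothetical DisplayData entry for a user transform implemented by a Java class.
display_data_entry = {
    "namespace": "com.mypackage",
    "key": "fn",
    "javaClassValue": "com.mypackage.MyDoFn",
    "shortStrValue": "MyDoFn",  # shown inline; the full class name appears as a tooltip
    "label": "Transform function",
}
```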

DistributionUpdate

A metric value representing a distribution.
Fields
count

object (SplitInt64)

The count of the number of elements present in the distribution.

histogram

object (Histogram)

(Optional) Histogram of value counts for the distribution.

max

object (SplitInt64)

The maximum value present in the distribution.

min

object (SplitInt64)

The minimum value present in the distribution.

sum

object (SplitInt64)

Use an int64 since we'd prefer the added precision. If overflow is a common problem we can detect it and use an additional int64 or a double.

sumOfSquares

number (double format)

Use a double since the sum of squares is likely to overflow int64.
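The count/sum/sumOfSquares triplet is enough to recover the mean and (population) variance without the raw values. A small sketch, assuming the SplitInt64 aggregates have already been decoded to Python ints:

```python
def distribution_stats(count: int, total: int, sum_of_squares: float) -> tuple:
    """Mean and population variance from a DistributionUpdate's aggregates."""
    mean = total / count
    variance = sum_of_squares / count - mean ** 2
    return mean, variance

# Values 2, 4, 6: count=3, sum=12, sumOfSquares=56 -> mean 4.0, variance ~2.67
print(distribution_stats(3, 12, 56.0))
```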

DynamicSourceSplit

When a task splits using WorkItemStatus.dynamic_source_split, this message describes the two parts of the split relative to the description of the current task's input.
Fields
primary

object (DerivedSource)

Primary part (continued to be processed by worker). Specified relative to the previously-current source. Becomes current.

residual

object (DerivedSource)

Residual part (returned to the pool of work). Specified relative to the previously-current source.

Environment

Describes the environment in which a Dataflow Job runs.
Fields
clusterManagerApiService

string

The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

dataset

string

The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

debugOptions

object (DebugOptions)

Any debugging options to be supplied to the job.

experiments[]

string

The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

flexResourceSchedulingGoal

enum

Which Flexible Resource Scheduling mode to run in.

Enum type. Can be one of the following:
FLEXRS_UNSPECIFIED Run in the default mode.
FLEXRS_SPEED_OPTIMIZED Optimize for lower execution time.
FLEXRS_COST_OPTIMIZED Optimize for lower cost.
internalExperiments

map (key: string, value: any)

Experimental settings.

sdkPipelineOptions

map (key: string, value: any)

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

serviceAccountEmail

string

Identity to run virtual machines as. Defaults to the default account.

serviceKmsKeyName

string

If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

serviceOptions[]

string

The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

shuffleMode

enum

Output only. The shuffle mode used for the job.

Enum type. Can be one of the following:
SHUFFLE_MODE_UNSPECIFIED Shuffle mode information is not available.
VM_BASED Shuffle is done on the worker VMs.
SERVICE_BASED Shuffle is done on the service side.
tempStoragePrefix

string

The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

userAgent

map (key: string, value: any)

A description of the process that generated the request.

version

map (key: string, value: any)

A structure describing which components and their versions of the service are required in order to run the job.

workerPools[]

object (WorkerPool)

The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

workerRegion

string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.

workerZone

string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

ExecutionStageState

A message describing the state of a particular execution stage.
Fields
currentStateTime

string (Timestamp format)

The time at which the stage transitioned to this state.

executionStageName

string

The name of the execution stage.

executionStageState

enum

Execution stage states allow the same set of values as JobState.

Enum type. Can be one of the following:
JOB_STATE_UNKNOWN The job's run state isn't specified.
JOB_STATE_STOPPED JOB_STATE_STOPPED indicates that the job has not yet started to run.
JOB_STATE_RUNNING JOB_STATE_RUNNING indicates that the job is currently running.
JOB_STATE_DONE JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
JOB_STATE_FAILED JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_CANCELLED JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
JOB_STATE_UPDATED JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_DRAINING JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
JOB_STATE_DRAINED JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
JOB_STATE_PENDING JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
JOB_STATE_CANCELLING JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
JOB_STATE_QUEUED JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
JOB_STATE_RESOURCE_CLEANING_UP JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.

ExecutionStageSummary

Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning.
Fields
componentSource[]

object (ComponentSource)

Collections produced and consumed by component transforms of this stage.

componentTransform[]

object (ComponentTransform)

Transforms that comprise this execution stage.

id

string

Dataflow service generated id for this stage.

inputSource[]

object (StageSource)

Input sources for this stage.

kind

enum

Type of transform this stage is executing.

Enum type. Can be one of the following:
UNKNOWN_KIND Unrecognized transform type.
PAR_DO_KIND ParDo transform.
GROUP_BY_KEY_KIND Group By Key transform.
FLATTEN_KIND Flatten transform.
READ_KIND Read transform.
WRITE_KIND Write transform.
CONSTANT_KIND Constructs from a constant value, such as with Create.of.
SINGLETON_KIND Creates a Singleton view of a collection.
SHUFFLE_KIND Opening or closing a shuffle session, often as part of a GroupByKey.
name

string

Dataflow service generated name for this stage.

outputSource[]

object (StageSource)

Output sources for this stage.

prerequisiteStage[]

string

Other stages that must complete before this stage can run.

FailedLocation

Indicates which regional endpoint failed to respond to a request for data.
Fields
name

string

The name of the regional endpoint that failed to respond.

FileIODetails

Metadata for a File connector used by the job.
Fields
filePattern

string

File Pattern used to access files by the connector.

FlattenInstruction

An instruction that copies its inputs (zero or more) to its (single) output.
Fields
inputs[]

object (InstructionInput)

Describes the inputs to the flatten instruction.

FlexTemplateRuntimeEnvironment

The environment values to be set at runtime for flex template.
Fields
additionalExperiments[]

string

Additional experiment flags for the job.

additionalUserLabels

map (key: string, value: string)

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions page. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

autoscalingAlgorithm

enum

The algorithm to use for autoscaling

Enum type. Can be one of the following:
AUTOSCALING_ALGORITHM_UNKNOWN The algorithm is unknown, or unspecified.
AUTOSCALING_ALGORITHM_NONE Disable autoscaling.
AUTOSCALING_ALGORITHM_BASIC Increase worker count over time to reduce job execution time.
diskSizeGb

integer (int32 format)

Worker disk size, in gigabytes.

dumpHeapOnOom

boolean

If true, save a heap dump before killing a thread or process which is GC thrashing or out of memory. The location of the heap file will either be echoed back to the user, or the user will be given the opportunity to download the heap file.

enableStreamingEngine

boolean

Whether to enable Streaming Engine for the job.

flexrsGoal

enum

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

Enum type. Can be one of the following:
FLEXRS_UNSPECIFIED Run in the default mode.
FLEXRS_SPEED_OPTIMIZED Optimize for lower execution time.
FLEXRS_COST_OPTIMIZED Optimize for lower cost.
ipConfiguration

enum

Configuration for VM IPs.

Enum type. Can be one of the following:
WORKER_IP_UNSPECIFIED The configuration is unknown, or unspecified.
WORKER_IP_PUBLIC Workers should have public IP addresses.
WORKER_IP_PRIVATE Workers should have private IP addresses.
kmsKeyName

string

Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/

launcherMachineType

string

The machine type to use for launching the job. The default is n1-standard-1.

machineType

string

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers

integer (int32 format)

The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network

string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers

integer (int32 format)

The initial number of Google Compute Engine instances for the job.

saveHeapDumpsToGcsPath

string

Cloud Storage bucket (directory) to which heap dumps are uploaded. Enabling this implies that heap dumps should be generated on OOM (dump_heap_on_oom is set to true).

sdkContainerImage

string

Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

serviceAccountEmail

string

The email address of the service account to run the job as.

stagingLocation

string

The Cloud Storage path for staging local files. Must be a valid Cloud Storage URL, beginning with gs://.

subnetwork

string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation

string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion

string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.

workerZone

string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone

string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

FloatingPointList

A metric value representing a list of floating point numbers.
Fields
elements[]

number (double format)

Elements of the list.

FloatingPointMean

A representation of a floating point mean metric contribution.
Fields
count

object (SplitInt64)

The number of values being aggregated.

sum

number (double format)

The sum of all values being aggregated.

GetDebugConfigRequest

Request to get updated debug configuration for component.
Fields
componentId

string

The internal component id for which debug configuration is requested.

location

string

The regional endpoint that contains the job specified by job_id.

workerId

string

The worker id, i.e., VM hostname.

GetDebugConfigResponse

Response to a get debug configuration request.
Fields
config

string

The encoded debug configuration for the requested component.

GetTemplateResponse

The response to a GetTemplate request.
Fields
metadata

object (TemplateMetadata)

The template metadata describing the template name, available parameters, etc.

runtimeMetadata

object (RuntimeMetadata)

Describes the runtime metadata with SDKInfo and available parameters.

status

object (Status)

The status of the get template request. Any problems with the request will be indicated in the error_details.

templateType

enum

Template Type.

Enum type. Can be one of the following:
UNKNOWN Unknown Template Type.
LEGACY Legacy Template.
FLEX Flex Template.

Histogram

Histogram of value counts for a distribution. Buckets have an inclusive lower bound and exclusive upper bound and use "1,2,5 bucketing": The first bucket range is from [0,1) and all subsequent bucket boundaries are powers of ten multiplied by 1, 2, or 5. Thus, bucket boundaries are 0, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, ... Negative values are not supported.
Fields
bucketCounts[]

string (int64 format)

Counts of values in each bucket. For efficiency, prefix and trailing buckets with count = 0 are elided. Buckets can store the full range of values of an unsigned long, with ULLONG_MAX falling into the 59th bucket with range [1e19, 2e19).

firstBucketOffset

integer (int32 format)

Starting index of first stored bucket. The non-inclusive upper-bound of the ith bucket is given by: pow(10,(i-first_bucket_offset)/3) * (1,2,5)[(i-first_bucket_offset)%3]
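The 1,2,5 bucket boundaries described above can be computed directly. The sketch below maps a non-negative value to its global bucket index; the index into bucketCounts would then be this value minus firstBucketOffset.

```python
import math

def bucket_index(value: float) -> int:
    """Global 1,2,5 bucket index: bucket 0 is [0, 1), and the upper bound of
    bucket i is 10 ** (i // 3) * (1, 2, 5)[i % 3]."""
    if value < 1:
        return 0
    exp = math.floor(math.log10(value))
    base = value / 10 ** exp
    if base >= 10:  # guard against floating-point rounding of log10
        exp, base = exp + 1, base / 10
    step = 1 if base < 2 else (2 if base < 5 else 3)
    return 3 * exp + step

print([bucket_index(v) for v in (0, 1, 3, 7, 10, 99, 1000)])  # [0, 1, 2, 3, 4, 6, 10]
```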

HotKeyDetection

Proto describing a hot key detected on a given WorkItem.
Fields
hotKeyAge

string (Duration format)

The age of the hot key measured from when it was first detected.

systemName

string

System-defined name of the step containing this hot key. Unique across the workflow.

userStepName

string

User-provided name of the step that contains this hot key.

InstructionInput

An input of an instruction, as a reference to an output of a producer instruction.
Fields
outputNum

integer (int32 format)

The output index (origin zero) within the producer.

producerInstructionIndex

integer (int32 format)

The index (origin zero) of the parallel instruction that produces the output to be consumed by this input. This index is relative to the list of instructions in this input's instruction's containing MapTask.

InstructionOutput

An output of an instruction.
Fields
codec

map (key: string, value: any)

The codec to use to encode data being written via this output.

name

string

The user-provided name of this output.

onlyCountKeyBytes

boolean

For system-generated byte and mean byte metrics, certain instructions should only report the key size.

onlyCountValueBytes

boolean

For system-generated byte and mean byte metrics, certain instructions should only report the value size.

originalName

string

System-defined name for this output in the original workflow graph. Outputs that do not contribute to an original instruction do not set this.

systemName

string

System-defined name of this output. Unique across the workflow.

IntegerGauge

A metric value representing temporal values of a variable.
Fields
timestamp

string (Timestamp format)

The time at which this value was measured. Measured as msecs from epoch.

value

object (SplitInt64)

The value of the variable represented by this gauge.

IntegerList

A metric value representing a list of integers.
Fields
elements[]

object (SplitInt64)

Elements of the list.

IntegerMean

A representation of an integer mean metric contribution.
Fields
count

object (SplitInt64)

The number of values being aggregated.

sum

object (SplitInt64)

The sum of all values being aggregated.

Job

Defines a job to be run by the Cloud Dataflow service.
Fields
clientRequestId

string

The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

createTime

string (Timestamp format)

The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

createdFromSnapshotId

string

If this is specified, the job's initial state is populated from the given snapshot.

currentState

enum

The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

Enum type. Can be one of the following:
JOB_STATE_UNKNOWN The job's run state isn't specified.
JOB_STATE_STOPPED JOB_STATE_STOPPED indicates that the job has not yet started to run.
JOB_STATE_RUNNING JOB_STATE_RUNNING indicates that the job is currently running.
JOB_STATE_DONE JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
JOB_STATE_FAILED JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_CANCELLED JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
JOB_STATE_UPDATED JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_DRAINING JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
JOB_STATE_DRAINED JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
JOB_STATE_PENDING JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
JOB_STATE_CANCELLING JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
JOB_STATE_QUEUED JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
JOB_STATE_RESOURCE_CLEANING_UP JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
currentStateTime

string (Timestamp format)

The timestamp associated with the current state.

environment

object (Environment)

The environment for the job.

executionInfo

object (JobExecutionInfo)

Deprecated.

id

string

The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

jobMetadata

object (JobMetadata)

This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

labels

map (key: string, value: string)

User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

location

string

The regional endpoint that contains this job.

name

string

The user-specified Cloud Dataflow job name. Only one Job with a given name may exist in a project at any given time. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,38}[a-z0-9])?

pipelineDescription

object (PipelineDescription)

Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

projectId

string

The ID of the Cloud Platform project that the job belongs to.

replaceJobId

string

If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

replacedByJobId

string

If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

requestedState

enum

The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

Enum type. Can be one of the following:
JOB_STATE_UNKNOWN The job's run state isn't specified.
JOB_STATE_STOPPED JOB_STATE_STOPPED indicates that the job has not yet started to run.
JOB_STATE_RUNNING JOB_STATE_RUNNING indicates that the job is currently running.
JOB_STATE_DONE JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
JOB_STATE_FAILED JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_CANCELLED JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
JOB_STATE_UPDATED JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
JOB_STATE_DRAINING JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
JOB_STATE_DRAINED JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
JOB_STATE_PENDING JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
JOB_STATE_CANCELLING JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
JOB_STATE_QUEUED JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
JOB_STATE_RESOURCE_CLEANING_UP JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
satisfiesPzs

boolean

Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

stageStates[]

object (ExecutionStageState)

This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

startTime

string (Timestamp format)

The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.

steps[]

object (Step)

Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

stepsLocation

string

The Cloud Storage location where the steps are stored.

tempFiles[]

string

A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

transformNameMapping

map (key: string, value: string)

The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

type

enum

The type of Cloud Dataflow job.

Enum type. Can be one of the following:
JOB_TYPE_UNKNOWN The type of the job is unspecified, or unknown.
JOB_TYPE_BATCH A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
JOB_TYPE_STREAMING A continuously streaming job with no end: data is read, processed, and written continuously.

JobExecutionDetails

Information about the execution of a job.
Fields
nextPageToken

string

If present, this response does not contain all requested tasks. To obtain the next page of results, repeat the request with page_token set to this value.

stages[]

object (StageSummary)

The stages of the job execution.

JobExecutionInfo

Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job.
Fields
stages

map (key: string, value: object (JobExecutionStageInfo))

A mapping from each stage to the information about that stage.

JobExecutionStageInfo

Contains information about how a particular google.dataflow.v1beta3.Step will be executed.
Fields
stepName[]

string

The steps associated with the execution stage. Note that stages may have several steps, and that a given step might be run by more than one stage.

JobMessage

A particular message pertaining to a Dataflow job.
Fields
id

string

Deprecated.

messageImportance

enum

Importance level of the message.

Enum type. Can be one of the following:
JOB_MESSAGE_IMPORTANCE_UNKNOWN The message importance isn't specified, or is unknown.
JOB_MESSAGE_DEBUG The message is at the 'debug' level: typically only useful for software engineers working on the code the job is running. Typically, Dataflow pipeline runners do not display log messages at this level by default.
JOB_MESSAGE_DETAILED The message is at the 'detailed' level: somewhat verbose, but potentially useful to users. Typically, Dataflow pipeline runners do not display log messages at this level by default. These messages are displayed by default in the Dataflow monitoring UI.
JOB_MESSAGE_BASIC The message is at the 'basic' level: useful for keeping track of the execution of a Dataflow pipeline. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
JOB_MESSAGE_WARNING The message is at the 'warning' level: indicating a condition pertaining to a job which may require human intervention. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
JOB_MESSAGE_ERROR The message is at the 'error' level: indicating a condition preventing a job from succeeding. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
messageText

string

The text of the message.

time

string (Timestamp format)

The timestamp of the message.

JobMetadata

Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view.
Fields
bigTableDetails[]

object (BigTableIODetails)

Identification of a Cloud Bigtable source used in the Dataflow job.

bigqueryDetails[]

object (BigQueryIODetails)

Identification of a BigQuery source used in the Dataflow job.

datastoreDetails[]

object (DatastoreIODetails)

Identification of a Datastore source used in the Dataflow job.

fileDetails[]

object (FileIODetails)

Identification of a File source used in the Dataflow job.

pubsubDetails[]

object (PubSubIODetails)

Identification of a Pub/Sub source used in the Dataflow job.

sdkVersion

object (SdkVersion)

The SDK version used to run the job.

spannerDetails[]

object (SpannerIODetails)

Identification of a Spanner source used in the Dataflow job.

JobMetrics

JobMetrics contains a collection of metrics describing the detailed progress of a Dataflow job. Metrics correspond to user-defined and system-defined metrics in the job. This resource captures only the most recent values of each metric; time-series data can be queried for them (under the same metric names) from Cloud Monitoring.
Fields
metricTime

string (Timestamp format)

Timestamp as of which metric values are current.

metrics[]

object (MetricUpdate)

All metrics for this job.

KeyRangeDataDiskAssignment

Data disk assignment information for a specific key-range of a sharded computation. Currently we only support UTF-8 character splits to simplify encoding into JSON.
Fields
dataDisk

string

The name of the data disk where data for this range is stored. This name is local to the Google Cloud Platform project and uniquely identifies the disk within that project, for example "myproject-1014-104817-4c2-harness-0-disk-1".

end

string

The end (exclusive) of the key range.

start

string

The start (inclusive) of the key range.

KeyRangeLocation

Location information for a specific key-range of a sharded computation. Currently we only support UTF-8 character splits to simplify encoding into JSON.
Fields
dataDisk

string

The name of the data disk where data for this range is stored. This name is local to the Google Cloud Platform project and uniquely identifies the disk within that project, for example "myproject-1014-104817-4c2-harness-0-disk-1".

deliveryEndpoint

string

The physical location of this range assignment to be used for streaming computation cross-worker message delivery.

deprecatedPersistentDirectory

string

DEPRECATED. The location of the persistent state for this range, as a persistent directory in the worker local filesystem.

end

string

The end (exclusive) of the key range.

start

string

The start (inclusive) of the key range.

LaunchFlexTemplateParameter

Launch FlexTemplate Parameter.
Fields
containerSpec

object (ContainerSpec)

Spec about the container image to launch.

containerSpecGcsPath

string

Cloud Storage path to a file with json serialized ContainerSpec as content.

environment

object (FlexTemplateRuntimeEnvironment)

The runtime environment for the FlexTemplate job

jobName

string

Required. The job name to use for the created job. For update job request, job name should be same as the existing running job.

launchOptions

map (key: string, value: string)

Launch options for this flex template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

parameters

map (key: string, value: string)

The parameters for FlexTemplate. Ex. {"num_workers":"5"}

transformNameMappings

map (key: string, value: string)

Use this to pass transform_name_mappings for streaming update jobs. Ex: {"oldTransformName":"newTransformName",...}

update

boolean

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
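Putting the fields above together, the launchParameter portion of a Flex Template launch request could look like the following sketch; the job name, Cloud Storage paths, and pipeline parameter names are placeholders.

```python
# Hypothetical LaunchFlexTemplateParameter payload.
launch_parameter = {
    "jobName": "streaming-ingest-1",
    "containerSpecGcsPath": "gs://my-bucket/templates/streaming_ingest.json",
    "parameters": {"num_workers": "5", "input_topic": "projects/my-project/topics/events"},
    "environment": {"tempLocation": "gs://my-bucket/temp", "maxWorkers": 20},
    "update": False,
}
# A LaunchFlexTemplateRequest then wraps it: {"launchParameter": launch_parameter}
```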

LaunchFlexTemplateRequest

A request to launch a Cloud Dataflow job from a FlexTemplate.
Fields
launchParameter

object (LaunchFlexTemplateParameter)

Required. Parameter to launch a job from a Flex Template.

validateOnly

boolean

If true, the request is validated but not actually executed. Defaults to false.

LaunchFlexTemplateResponse

Response to the request to launch a job from Flex Template.
Fields
job

object (Job)

The job that was launched, if the request was not a dry run and the job was successfully launched.

LaunchTemplateParameters

Parameters to provide to the template being launched.
Fields
environment

object (RuntimeEnvironment)

The runtime environment for the job.

jobName

string

Required. The job name to use for the created job.

parameters

map (key: string, value: string)

The runtime parameters to pass to the job.

transformNameMapping

map (key: string, value: string)

Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

update

boolean

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

LaunchTemplateResponse

Response to the request to launch a template.
Fields
job

object (Job)

The job that was launched, if the request was not a dry run and the job was successfully launched.

LeaseWorkItemRequest

Request to lease WorkItems.
Fields
currentWorkerTime

string (Timestamp format)

The current timestamp at the worker.

location

string

The regional endpoint that contains the WorkItem's job.

requestedLeaseDuration

string (Duration format)

The initial lease period.

unifiedWorkerRequest

map (key: string, value: any)

Untranslated bag-of-bytes WorkRequest from UnifiedWorker.

workItemTypes[]

string

Filter for WorkItem type.

workerCapabilities[]

string

Worker capabilities. WorkItems might be limited to workers with specific capabilities.

workerId

string

Identifies the worker leasing work -- typically the ID of the virtual machine running the worker.

LeaseWorkItemResponse

Response to a request to lease WorkItems.
Fields
unifiedWorkerResponse

map (key: string, value: any)

Untranslated bag-of-bytes WorkResponse for UnifiedWorker.

workItems[]

object (WorkItem)

A list of the leased WorkItems.

ListJobMessagesResponse

Response to a request to list job messages.
Fields
autoscalingEvents[]

object (AutoscalingEvent)

Autoscaling events in ascending timestamp order.

jobMessages[]

object (JobMessage)

Messages in ascending timestamp order.

nextPageToken

string

The token to obtain the next page of results if there are more.

ListJobsResponse

Response to a request to list Cloud Dataflow jobs in a project. This might be a partial response, depending on the page size in the ListJobsRequest. However, if the project does not have any jobs, an instance of ListJobsResponse is not returned and the request's response body is empty {}.
Fields
failedLocation[]

object (FailedLocation)

Zero or more messages describing the regional endpoints that failed to respond.

jobs[]

object (Job)

A subset of the requested job information.

nextPageToken

string

Set if there may be more results than fit in this response.
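
A small sketch of paging through ListJobsResponse results using nextPageToken. It assumes the discovery-based google-api-python-client for the Dataflow v1b3 API and default application credentials; the project and region values are placeholders.

```python
from googleapiclient.discovery import build  # assumed client library

service = build("dataflow", "v1b3")
jobs, page_token = [], None
while True:
    # projects.locations.jobs.list returns a ListJobsResponse
    # (or {} if the project has no jobs).
    resp = service.projects().locations().jobs().list(
        projectId="example-project", location="us-central1",
        pageToken=page_token).execute()
    jobs.extend(resp.get("jobs", []))
    page_token = resp.get("nextPageToken")
    if not page_token:   # nextPageToken is only set when more results remain
        break
```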

ListSnapshotsResponse

List of snapshots.
Fields
snapshots[]

object (Snapshot)

Returned snapshots.

MapTask

MapTask consists of an ordered set of instructions, each of which describes one particular low-level operation for the worker to perform in order to accomplish the MapTask's WorkItem. Each instruction must appear in the list before any instructions which depend on its output.
Fields
counterPrefix

string

Counter prefix that can be used to prefix counters. Not currently used in Dataflow.

instructions[]

object (ParallelInstruction)

The instructions in the MapTask.

stageName

string

System-defined name of the stage containing this MapTask. Unique across the workflow.

systemName

string

System-defined name of this MapTask. Unique across the workflow.

MemInfo

Information about the memory usage of a worker or a container within a worker.
Fields
currentLimitBytes

string (uint64 format)

Instantaneous memory limit in bytes.

currentRssBytes

string (uint64 format)

Instantaneous memory (RSS) size in bytes.

timestamp

string (Timestamp format)

Timestamp of the measurement.

totalGbMs

string (uint64 format)

Total memory (RSS) usage since start up in GB * ms.
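
As an illustration of the GB * ms unit, a tiny sketch that accumulates a totalGbMs-style value from periodic RSS samples. The sample values and the one-second interval are made up, and treating a GB as 2**30 bytes is an assumption rather than something stated in this reference.

```python
# Illustrative accumulation of memory usage in GB * ms (made-up sample values).
samples_rss_bytes = [512 * 1024**2, 640 * 1024**2, 600 * 1024**2]
interval_ms = 1_000                                 # time between samples

total_gb_ms = 0.0
for rss in samples_rss_bytes:
    total_gb_ms += (rss / 1024**3) * interval_ms    # GB held for interval_ms milliseconds

print(round(total_gb_ms))   # the kind of value a worker might report as totalGbMs
```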

MetricShortId

The metric short id is returned to the user alongside an offset into ReportWorkItemStatusRequest.
Fields
metricIndex

integer (int32 format)

The index of the corresponding metric in the ReportWorkItemStatusRequest. Required.

shortId

string (int64 format)

The service-generated short identifier for the metric.

MetricStructuredName

Identifies a metric, by describing the source which generated the metric.
Fields
context

map (key: string, value: string)

Zero or more labeled fields which identify the part of the job this metric is associated with, such as the name of a step or collection. For example, built-in counters associated with steps will have context['step'] = <step-name>. Counters associated with PCollections in the SDK will have context['pcollection'] = <pcollection-name>.

name

string

Worker-defined metric name.

origin

string

Origin (namespace) of metric name. May be blank for user-defined metrics; will be "dataflow" for metrics defined by the Dataflow service or SDK.

MetricUpdate

Describes the state of a metric.
Fields
cumulative

boolean

True if this metric is reported as the total cumulative aggregate value accumulated since the worker started working on this WorkItem. By default this is false, indicating that this metric is reported as a delta that is not associated with any WorkItem.

distribution

any

A struct value describing properties of a distribution of numeric values.

gauge

any

A struct value describing properties of a Gauge. Metrics of gauge type show the value of a metric across time, and are aggregated based on the newest value.

internal

any

Worker-computed aggregate value for internal use by the Dataflow service.

kind

string

Metric aggregation kind. The possible metric aggregation kinds are "Sum", "Max", "Min", "Mean", "Set", "And", "Or", and "Distribution". The specified aggregation kind is case-insensitive. If omitted, this is not an aggregated value but instead a single metric sample value.

meanCount

any

Worker-computed aggregate value for the "Mean" aggregation kind. This holds the count of the aggregated values and is used in combination with mean_sum above to obtain the actual mean aggregate value. The only possible value type is Long.

meanSum

any

Worker-computed aggregate value for the "Mean" aggregation kind. This holds the sum of the aggregated values and is used in combination with mean_count below to obtain the actual mean aggregate value. The only possible value types are Long and Double.

name

object (MetricStructuredName)

Name of the metric.

scalar

any

Worker-computed aggregate value for aggregation kinds "Sum", "Max", "Min", "And", and "Or". The possible value types are Long, Double, and Boolean.

set

any

Worker-computed aggregate value for the "Set" aggregation kind. The only possible value type is a list of Values whose type can be Long, Double, or String, according to the metric's type. All Values in the list must be of the same type.

updateTime

string (Timestamp format)

Timestamp associated with the metric value. Optional when workers are reporting work progress; it will be filled in responses from the metrics API.
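
A hedged example of a cumulative "Sum" MetricUpdate, together with its MetricStructuredName, of the kind a worker might include in a status report. The step name, metric name, and values are placeholders.

```python
# Example MetricUpdate for a cumulative Sum counter (placeholder values).
metric_update = {
    "name": {                                  # MetricStructuredName
        "origin": "dataflow",                  # "dataflow" for service/SDK metrics, blank for user metrics
        "name": "ElementCount",
        "context": {"step": "s2"},             # identifies the step this metric belongs to
    },
    "kind": "Sum",                             # aggregation kind (case-insensitive)
    "cumulative": True,                        # total since the worker started this WorkItem
    "scalar": 1024,                            # worker-computed aggregate value
    "updateTime": "2020-01-01T00:00:00Z",      # optional when reporting progress
}
```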

MountedDataDisk

Describes mounted data disk.
Fields
dataDisk

string

The name of the data disk. This name is local to the Google Cloud Platform project and uniquely identifies the disk within that project, for example "myproject-1014-104817-4c2-harness-0-disk-1".

MultiOutputInfo

Information about an output of a multi-output DoFn.
Fields
tag

string

The id of the tag the user code will emit to this output by; this should correspond to the tag of some SideInputInfo.

NameAndKind

Basic metadata about a counter.
Fields
kind

enum

Counter aggregation kind.

Enum type. Can be one of the following:
INVALID Counter aggregation kind was not set.
SUM Aggregated value is the sum of all contributed values.
MAX Aggregated value is the max of all contributed values.
MIN Aggregated value is the min of all contributed values.
MEAN Aggregated value is the mean of all contributed values.
OR Aggregated value represents the logical 'or' of all contributed values.
AND Aggregated value represents the logical 'and' of all contributed values.
SET Aggregated value is a set of unique contributed values.
DISTRIBUTION Aggregated value captures statistics about a distribution.
LATEST_VALUE Aggregated value tracks the latest value of a variable.
name

string

Name of the counter.

Package

The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool. This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run.
Fields
location

string

The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

name

string

The name of the package.

ParDoInstruction

An instruction that does a ParDo operation. Takes one main input and zero or more side inputs, and produces zero or more outputs. Runs user code.
Fields
input

object (InstructionInput)

The input.

multiOutputInfos[]

object (MultiOutputInfo)

Information about each of the outputs, if user_fn is a MultiDoFn.

numOutputs

integer (int32 format)

The number of outputs.

sideInputs[]

object (SideInputInfo)

Zero or more side inputs.

userFn

map (key: string, value: any)

The user function to invoke.

ParallelInstruction

Describes a particular operation comprising a MapTask.
Fields
flatten

object (FlattenInstruction)

Additional information for Flatten instructions.

name

string

User-provided name of this operation.

originalName

string

System-defined name for the operation in the original workflow graph.

outputs[]

object (InstructionOutput)

Describes the outputs of the instruction.

parDo

object (ParDoInstruction)

Additional information for ParDo instructions.

partialGroupByKey

object (PartialGroupByKeyInstruction)

Additional information for PartialGroupByKey instructions.

read

object (ReadInstruction)

Additional information for Read instructions.

systemName

string

System-defined name of this operation. Unique across the workflow.

write

object (WriteInstruction)

Additional information for Write instructions.

Parameter

Structured data associated with this message.
Fields
key

string

Key or name for this parameter.

value

any

Value for this parameter.

ParameterMetadata

Metadata for a specific parameter.
Fields
customMetadata

map (key: string, value: string)

Optional. Additional metadata for describing this parameter.

helpText

string

Required. The help text to display for the parameter.

isOptional

boolean

Optional. Whether the parameter is optional. Defaults to false.

label

string

Required. The label to display for the parameter.

name

string

Required. The name of the parameter.

paramType

enum

Optional. The type of the parameter. Used for selecting input picker.

Enum type. Can be one of the following:
DEFAULT Default input type.
TEXT The parameter specifies generic text input.
GCS_READ_BUCKET The parameter specifies a Cloud Storage Bucket to read from.
GCS_WRITE_BUCKET The parameter specifies a Cloud Storage Bucket to write to.
GCS_READ_FILE The parameter specifies a Cloud Storage file path to read from.
GCS_WRITE_FILE The parameter specifies a Cloud Storage file path to write to.
GCS_READ_FOLDER The parameter specifies a Cloud Storage folder path to read from.
GCS_WRITE_FOLDER The parameter specifies a Cloud Storage folder to write to.
PUBSUB_TOPIC The parameter specifies a Pub/Sub Topic.
PUBSUB_SUBSCRIPTION The parameter specifies a Pub/Sub Subscription.
regexes[]

string

Optional. Regexes that the parameter must match.
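
A hedged example of a single ParameterMetadata entry, as it might appear in a template's metadata. The parameter name, help text, and regex are placeholders.

```python
# Example ParameterMetadata entry (placeholder values).
parameter_metadata = {
    "name": "inputFile",                          # required
    "label": "Input Cloud Storage file",          # required
    "helpText": "Path of the file to read from",  # required
    "isOptional": False,
    "paramType": "GCS_READ_FILE",                 # selects the input picker in the UI
    "regexes": ["^gs:\\/\\/[^\\n\\r]+$"],         # the supplied value must match this pattern
    "customMetadata": {},
}
```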

PartialGroupByKeyInstruction

An instruction that does a partial group-by-key. One input and one output.
Fields
input

object (InstructionInput)

Describes the input to the partial group-by-key instruction.

inputElementCodec

map (key: string, value: any)

The codec to use for interpreting an element in the input PTable.

originalCombineValuesInputStoreName

string

If this instruction includes a combining function, this is the name of the intermediate store between the GBK and the CombineValues.

originalCombineValuesStepName

string

If this instruction includes a combining function, this is the name of the CombineValues instruction lifted into this instruction.

sideInputs[]

object (SideInputInfo)

Zero or more side inputs.

valueCombiningFn

map (key: string, value: any)

The value combining function to invoke.

PipelineDescription

A descriptive representation of a submitted pipeline as well as its executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow-provided metrics.
Fields
displayData[]

object (DisplayData)

Pipeline level display data.

executionPipelineStage[]

object (ExecutionStageSummary)

Description of each stage of execution of the pipeline.

originalPipelineTransform[]

object (TransformSummary)

Description of each transform in the pipeline and collections between them.

Point

A point in the timeseries.
Fields
time

string (Timestamp format)

The timestamp of the point.

value

number (double format)

The value of the point.

Position

Position defines a position within a collection of data. The value can be either the end position, a key (used with ordered collections), a byte offset, or a record index.
Fields
byteOffset

string (int64 format)

Position is a byte offset.

concatPosition

object (ConcatPosition)

CloudPosition is a concat position.

end

boolean

Position is past all other positions. Also useful for the end position of an unbounded range.

key

string

Position is a string key, ordered lexicographically.

recordIndex

string (int64 format)

Position is a record index.

shufflePosition

string

CloudPosition is a base64 encoded BatchShufflePosition (with FIXED sharding).
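
Three hedged examples of Position values, one per representation described above; a Position carries exactly one of these. The offset and key values are placeholders.

```python
# Each Position uses exactly one representation (placeholder values).
byte_offset_position = {"byteOffset": "8192"}   # int64 values are encoded as JSON strings
key_position = {"key": "user#000042"}           # lexicographically ordered string key
end_position = {"end": True}                    # past all other positions
```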

ProgressTimeseries

Information about the progress of some component of job execution.
Fields
currentProgress

number (double format)

The current progress of the component, in the range [0,1].

dataPoints[]

object (Point)

History of progress for the component. Points are sorted by time.

PubSubIODetails

Metadata for a Pub/Sub connector used by the job.
Fields
subscription

string

Subscription used in the connection.

topic

string

Topic accessed in the connection.

PubsubLocation

Identifies a pubsub location to use for transferring data into or out of a streaming Dataflow job.
Fields
dropLateData

boolean

Indicates whether the pipeline allows late-arriving data.

idLabel

string

If set, contains a pubsub label from which to extract record ids. If left empty, record deduplication will be strictly best effort.

subscription

string

A pubsub subscription, in the form of "pubsub.googleapis.com/subscriptions/<project-id>/<subscription-name>"

timestampLabel

string

If set, contains a pubsub label from which to extract record timestamps. If left empty, record timestamps will be generated upon arrival.

topic

string

A pubsub topic, in the form of "pubsub.googleapis.com/topics/<project-id>/<topic-name>"

trackingSubscription

string

If set, specifies the pubsub subscription that will be used for tracking custom time timestamps for watermark estimation.

withAttributes

boolean

If true, then the client has requested to get pubsub attributes.

PubsubSnapshotMetadata

Represents a Pubsub snapshot.
Fields
expireTime

string (Timestamp format)

The expire time of the Pubsub snapshot.

snapshotName

string

The name of the Pubsub snapshot.

topicName

string

The name of the Pubsub topic.

QueryInfo

Information about a validated query.
Fields
queryProperty[]

string

Includes an entry for each satisfied QueryProperty.

ReadInstruction

An instruction that reads records. Takes no inputs, produces one output.
Fields
source

object (Source)

The source to read from.

ReportWorkItemStatusRequest

Request to report the status of WorkItems.
Fields
currentWorkerTime

string (Timestamp format)

The current timestamp at the worker.

location

string

The regional endpoint that contains the WorkItem's job.

unifiedWorkerRequest

map (key: string, value: any)

Untranslated bag-of-bytes WorkProgressUpdateRequest from UnifiedWorker.

workItemStatuses[]

object (WorkItemStatus)

The order is unimportant, except that the order of the WorkItemServiceState messages in the ReportWorkItemStatusResponse corresponds to the order of WorkItemStatus messages here.

workerId

string

The ID of the worker reporting the WorkItem status. If this does not match the ID of the worker which the Dataflow service believes currently has the lease on the WorkItem, the report will be dropped (with an error response).

ReportWorkItemStatusResponse

Response from a request to report the status of WorkItems.
Fields
unifiedWorkerResponse

map (key: string, value: any)

Untranslated bag-of-bytes WorkProgressUpdateResponse for UnifiedWorker.

workItemServiceStates[]

object (WorkItemServiceState)

A set of messages indicating the service-side state for each WorkItem whose status was reported, in the same order as the WorkItemStatus messages in the ReportWorkItemStatusRequest which resulted in this response.

ReportedParallelism

Represents the level of parallelism in a WorkItem's input, reported by the worker.
Fields
isInfinite

boolean

Specifies whether the parallelism is infinite. If true, "value" is ignored. Infinite parallelism means the service will assume that the work item can always be split into more non-empty work items by dynamic splitting. This is a work-around for lack of support for infinity by the current JSON-based Java RPC stack.

value

number (double format)

Specifies the level of parallelism in case it is finite.

ResourceUtilizationReport

Worker metrics exported from workers. This contains resource utilization metrics accumulated from a variety of sources. For more information, see go/df-resource-signals.
Fields
containers

map (key: string, value: object (ResourceUtilizationReport))

Per container information. Key: container name.

cpuTime[]

object (CPUTime)

CPU utilization samples.

memoryInfo[]

object (MemInfo)

Memory utilization samples.

RuntimeEnvironment

The environment values to set at runtime.
Fields
additionalExperiments[]

string

Additional experiment flags for the job, specified with the --experiments option.

additionalUserLabels

map (key: string, value: string)

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

bypassTempDirValidation

boolean

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

enableStreamingEngine

boolean

Whether to enable Streaming Engine for the job.

ipConfiguration

enum

Configuration for VM IPs.

Enum type. Can be one of the following:
WORKER_IP_UNSPECIFIED The configuration is unknown, or unspecified.
WORKER_IP_PUBLIC Workers should have public IP addresses.
WORKER_IP_PRIVATE Workers should have private IP addresses.
kmsKeyName

string

Name for the Cloud KMS key for the job. Key format is: projects/<project>/locations/<location>/keyRings/<keyring>/cryptoKeys/<key>

machineType

string

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers

integer (int32 format)

The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network

string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers

integer (int32 format)

The initial number of Google Compute Engine instances for the job.

serviceAccountEmail

string

The email address of the service account to run the job as.

subnetwork

string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation

string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion

string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

workerZone

string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone

string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

RuntimeMetadata

RuntimeMetadata describing a runtime environment.
Fields
parameters[]

object (ParameterMetadata)

The parameters for the template.

sdkInfo

object (SDKInfo)

SDK Info for the template.

SDKInfo

SDK Information.
Fields
language

enum

Required. The SDK Language.

Enum type. Can be one of the following:
UNKNOWN UNKNOWN Language.
JAVA Java.
PYTHON Python.
version

string

Optional. The SDK version.

SdkHarnessContainerImage

Defines an SDK harness container for executing Dataflow pipelines.
Fields
containerImage

string

A docker container image that resides in Google Container Registry.

environmentId

string

Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

useSingleCorePerContainer

boolean

If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

SdkVersion

The version of the SDK used to run the job.
Fields
sdkSupportStatus

enum

The support status for this SDK version.

Enum type. Can be one of the following:
UNKNOWN Cloud Dataflow is unaware of this version.
SUPPORTED This is a known version of an SDK, and is supported.
STALE A newer version of the SDK family exists, and an update is recommended.
DEPRECATED This version of the SDK is deprecated and will eventually be unsupported.
UNSUPPORTED Support for this SDK version has ended and it should no longer be used.
version

string

The version of the SDK used to run the job.

versionDisplayName

string

A readable string describing the version of the SDK.

SendDebugCaptureRequest

Request to send encoded debug information.
Fields
componentId

string

The internal component id for which debug information is sent.

data

string

The encoded debug information.

location

string

The regional endpoint that contains the job specified by job_id.

workerId

string

The worker id, i.e., VM hostname.

SendWorkerMessagesRequest

A request for sending worker messages to the service.
Fields
location

string

The regional endpoint that contains the job.

workerMessages[]

object (WorkerMessage)

The WorkerMessages to send.

SendWorkerMessagesResponse

The response to the worker messages.
Fields
workerMessageResponses[]

object (WorkerMessageResponse)

The server's responses to the worker messages.

SeqMapTask

Describes a particular function to invoke.
Fields
inputs[]

object (SideInputInfo)

Information about each of the inputs.

name

string

The user-provided name of the SeqDo operation.

outputInfos[]

object (SeqMapTaskOutputInfo)

Information about each of the outputs.

stageName

string

System-defined name of the stage containing the SeqDo operation. Unique across the workflow.

systemName

string

System-defined name of the SeqDo operation. Unique across the workflow.

userFn

map (key: string, value: any)

The user function to invoke.

SeqMapTaskOutputInfo

Information about an output of a SeqMapTask.
Fields
sink

object (Sink)

The sink to write the output value to.

tag

string

The id of the TupleTag the user code will tag the output value by.

ShellTask

A task which consists of a shell command for the worker to execute.
Fields
command

string

The shell command to run.

exitCode

integer (int32 format)

Exit code for the task.

SideInputInfo

Information about a side input of a DoFn or an input of a SeqDoFn.
Fields
kind

map (key: string, value: any)

How to interpret the source element(s) as a side input value.

sources[]

object (Source)

The source(s) to read element(s) from to get the value of this side input. If more than one source, then the elements are taken from the sources, in the specified order if order matters. At least one source is required.

tag

string

The id of the tag the user code will access this side input by; this should correspond to the tag of some MultiOutputInfo.

Sink

A sink that records can be encoded and written to.
Fields
codec

map (key: string, value: any)

The codec to use to encode data written to the sink.

spec

map (key: string, value: any)

The sink to write to, plus its parameters.

Snapshot

Represents a snapshot of a job.
Fields
creationTime

string (Timestamp format)

The time this snapshot was created.

description

string

User-specified description of the snapshot. May be empty.

diskSizeBytes

string (int64 format)

The disk byte size of the snapshot. Only available for snapshots in READY state.

id

string

The unique ID of this snapshot.

projectId

string

The project this snapshot belongs to.

pubsubMetadata[]

object (PubsubSnapshotMetadata)

Pub/Sub snapshot metadata.

region

string

Cloud region where this snapshot lives, e.g., "us-central1".

sourceJobId

string

The job this snapshot was created from.

state

enum

State of the snapshot.

Enum type. Can be one of the following:
UNKNOWN_SNAPSHOT_STATE Unknown state.
PENDING Snapshot intent to create has been persisted, snapshotting of state has not yet started.
RUNNING Snapshotting is being performed.
READY Snapshot has been created and is ready to be used.
FAILED Snapshot failed to be created.
DELETED Snapshot has been deleted.
ttl

string (Duration format)

The time after which this snapshot will be automatically deleted.

SnapshotJobRequest

Request to create a snapshot of a job.
Fields
description

string

User-specified description of the snapshot. May be empty.

location

string

The location that contains this job.

snapshotSources

boolean

If true, perform snapshots for sources which support this.

ttl

string (Duration format)

TTL for the snapshot.

Source

A source that records can be read and decoded from.
Fields
baseSpecs[]

object

While splitting, sources may specify the produced bundles as differences against another source, in order to save backend-side memory and allow bigger jobs. For details, see SourceSplitRequest. To support this use case, the full set of parameters of the source is logically obtained by taking the latest explicitly specified value of each parameter in the order: base_specs (later items win), spec (overrides anything in base_specs).

codec

map (key: string, value: any)

The codec to use to decode data read from the source.

doesNotNeedSplitting

boolean

Setting this value to true hints to the framework that the source doesn't need splitting, and using SourceSplitRequest on it would yield SOURCE_SPLIT_OUTCOME_USE_CURRENT. E.g. a file splitter may set this to true when splitting a single file into a set of byte ranges of appropriate size, and set this to false when splitting a filepattern into individual files. However, for efficiency, a file splitter may decide to produce file subranges directly from the filepattern to avoid a splitting round-trip. See SourceSplitRequest for an overview of the splitting process. This field is meaningful only in the Source objects populated by the user (e.g. when filling in a DerivedSource). Source objects supplied by the framework to the user don't have this field populated.

metadata

object (SourceMetadata)

Optionally, metadata for this source can be supplied right away, avoiding a SourceGetMetadataOperation roundtrip (see SourceOperationRequest). This field is meaningful only in the Source objects populated by the user (e.g. when filling in a DerivedSource). Source objects supplied by the framework to the user don't have this field populated.

spec

map (key: string, value: any)

The source to read from, plus its parameters.

SourceFork

DEPRECATED in favor of DynamicSourceSplit.
Fields
primary

object (SourceSplitShard)

DEPRECATED

primarySource

object (DerivedSource)

DEPRECATED

residual

object (SourceSplitShard)

DEPRECATED

residualSource

object (DerivedSource)

DEPRECATED

SourceGetMetadataRequest

A request to compute the SourceMetadata of a Source.
Fields
source

object (Source)

Specification of the source whose metadata should be computed.

SourceGetMetadataResponse

The result of a SourceGetMetadataOperation.
Fields
metadata

object (SourceMetadata)

The computed metadata.

SourceMetadata

Metadata about a Source useful for automatically optimizing and tuning the pipeline, etc.
Fields
estimatedSizeBytes

string (int64 format)

An estimate of the total size (in bytes) of the data that would be read from this source. This estimate is in terms of external storage size, before any decompression or other processing done by the reader.

infinite

boolean

Specifies that the size of this source is known to be infinite (this is a streaming source).

producesSortedKeys

boolean

Whether this source is known to produce key/value pairs with the (encoded) keys in lexicographically sorted order.

SourceOperationRequest

A work item that represents the different operations that can be performed on a user-defined Source specification.
Fields
getMetadata

object (SourceGetMetadataRequest)

Information about a request to get metadata about a source.

name

string

User-provided name of the Read instruction for this source.

originalName

string

System-defined name for the Read instruction for this source in the original workflow graph.

split

object (SourceSplitRequest)

Information about a request to split a source.

stageName

string

System-defined name of the stage containing the source operation. Unique across the workflow.

systemName

string

System-defined name of the Read instruction for this source. Unique across the workflow.

SourceOperationResponse

The result of a SourceOperationRequest, specified in ReportWorkItemStatusRequest.source_operation when the work item is completed.
Fields
getMetadata

object (SourceGetMetadataResponse)

A response to a request to get metadata about a source.

split

object (SourceSplitResponse)

A response to a request to split a source.

SourceSplitOptions

Hints for splitting a Source into bundles (parts for parallel processing) using SourceSplitRequest.
Fields
desiredBundleSizeBytes

string (int64 format)

The source should be split into a set of bundles where the estimated size of each is approximately this many bytes.

desiredShardSizeBytes

string (int64 format)

DEPRECATED in favor of desired_bundle_size_bytes.

SourceSplitRequest

Represents the operation to split a high-level Source specification into bundles (parts for parallel processing). At a high level, splitting of a source into bundles happens as follows: SourceSplitRequest is applied to the source. If it returns SOURCE_SPLIT_OUTCOME_USE_CURRENT, no further splitting happens and the source is used "as is". Otherwise, splitting is applied recursively to each produced DerivedSource. As an optimization, for any Source, if its does_not_need_splitting is true, the framework assumes that splitting this source would return SOURCE_SPLIT_OUTCOME_USE_CURRENT, and doesn't initiate a SourceSplitRequest. This applies both to the initial source being split and to bundles produced from it.
Fields
options

object (SourceSplitOptions)

Hints for tuning the splitting process.

source

object (Source)

Specification of the source to be split.
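
A rough sketch of the splitting procedure the description above outlines: apply the split, and if splitting happened, recurse into each produced bundle, skipping sources that declare does_not_need_splitting. The split_source callable is a stand-in for the actual SourceSplitRequest interaction and is not part of this API.

```python
def expand(source: dict, split_source, options: dict) -> list:
    """Recursively split `source` into leaf bundles (illustrative sketch only).

    `split_source(source, options)` stands in for issuing a SourceSplitRequest
    and returns a SourceSplitResponse-shaped dict.
    """
    if source.get("doesNotNeedSplitting"):
        # The framework assumes SOURCE_SPLIT_OUTCOME_USE_CURRENT and skips the request.
        return [source]
    response = split_source(source, options)
    if response["outcome"] == "SOURCE_SPLIT_OUTCOME_USE_CURRENT":
        return [source]
    bundles = []
    for derived in response.get("bundles", []):   # DerivedSource objects
        bundles.extend(expand(derived["source"], split_source, options))
    return bundles
```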

SourceSplitResponse

The response to a SourceSplitRequest.
Fields
bundles[]

object (DerivedSource)

If outcome is SPLITTING_HAPPENED, then this is a list of bundles into which the source was split. Otherwise this field is ignored. This list can be empty, which means the source represents an empty input.

outcome

enum

Indicates whether splitting happened and produced a list of bundles. If this is SOURCE_SPLIT_OUTCOME_USE_CURRENT, the current source should be processed "as is" without splitting, and "bundles" is ignored. If this is SOURCE_SPLIT_OUTCOME_SPLITTING_HAPPENED, then "bundles" contains a list of bundles into which the source was split.

Enum type. Can be one of the following:
SOURCE_SPLIT_OUTCOME_UNKNOWN The source split outcome is unknown, or unspecified.
SOURCE_SPLIT_OUTCOME_USE_CURRENT The current source should be processed "as is" without splitting.
SOURCE_SPLIT_OUTCOME_SPLITTING_HAPPENED Splitting produced a list of bundles.
shards[]

object (SourceSplitShard)

DEPRECATED in favor of bundles.

SourceSplitShard

DEPRECATED in favor of DerivedSource.
Fields
derivationMode

enum

DEPRECATED

Enum type. Can be one of the following:
SOURCE_DERIVATION_MODE_UNKNOWN The source derivation is unknown, or unspecified.
SOURCE_DERIVATION_MODE_INDEPENDENT Produce a completely independent Source with no base.
SOURCE_DERIVATION_MODE_CHILD_OF_CURRENT Produce a Source based on the Source being split.
SOURCE_DERIVATION_MODE_SIBLING_OF_CURRENT Produce a Source based on the base of the Source being split.
source

object (Source)

DEPRECATED

SpannerIODetails

Metadata for a Spanner connector used by the job.
Fields
databaseId

string

DatabaseId accessed in the connection.

instanceId

string

InstanceId accessed in the connection.

projectId

string

ProjectId accessed in the connection.

SplitInt64

A representation of an int64, n, that is immune to precision loss when encoded in JSON.
Fields
highBits

integer (int32 format)

The high order bits, including the sign: n >> 32.

lowBits

integer (uint32 format)

The low order bits: n & 0xffffffff.

StageExecutionDetails

Information about the workers and work items within a stage.
Fields
nextPageToken

string

If present, this response does not contain all requested tasks. To obtain the next page of results, repeat the request with page_token set to this value.

workers[]

object (WorkerDetails)

Workers that have done work on the stage.

StageSource

Description of an input or output of an execution stage.
Fields
name

string

Dataflow service generated name for this source.

originalTransformOrCollection

string

User name for the original user transform or collection with which this source is most closely associated.

sizeBytes

string (int64 format)

Size of the source, if measurable.

userName

string

Human-readable name for this source; may be user or system generated.

StageSummary

Information about a particular execution stage of a job.
Fields
endTime

string (Timestamp format)

End time of this stage. If the work item is completed, this is the actual end time of the stage. Otherwise, it is the predicted end time.

metrics[]

object (MetricUpdate)

Metrics for this stage.

progress

object (ProgressTimeseries)

Progress for this stage. Only applicable to Batch jobs.

stageId

string

ID of this stage.

startTime

string (Timestamp format)

Start time of this stage.

state

enum

State of this stage.

Enum type. Can be one of the following:
EXECUTION_STATE_UNKNOWN The component state is unknown or unspecified.
EXECUTION_STATE_NOT_STARTED The component is not yet running.
EXECUTION_STATE_RUNNING The component is currently running.
EXECUTION_STATE_SUCCEEDED The component succeeded.
EXECUTION_STATE_FAILED The component failed.
EXECUTION_STATE_CANCELLED Execution of the component was cancelled.

StateFamilyConfig

State family configuration.
Fields
isRead

boolean

If true, this family corresponds to a read operation.

stateFamily

string

The state family value.

Status

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Fields
code

integer (int32 format)

The status code, which should be an enum value of google.rpc.Code.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

Step

Defines a particular step within a Cloud Dataflow job. A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job. Here's an example of a sequence of steps which together implement a Map-Reduce job: * Read a collection of data from some source, parsing the collection's elements. * Validate the elements. * Apply a user-defined function to map each element to some value and extract an element-specific key value. * Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection. * Write the elements out to some data sink. Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce.
Fields
kind

string

The kind of step in the Cloud Dataflow job.

name

string

The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

properties

map (key: string, value: any)

Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

StreamLocation

Describes a stream of data, either as input to be processed or as output of a streaming Dataflow job.
Fields
customSourceLocation

object (CustomSourceLocation)

The stream is a custom source.

pubsubLocation

object (PubsubLocation)

The stream is a pubsub stream.

sideInputLocation

object (StreamingSideInputLocation)

The stream is a streaming side input.

streamingStageLocation

object (StreamingStageLocation)

The stream is part of another computation within the current streaming Dataflow job.

StreamingApplianceSnapshotConfig

Streaming appliance snapshot configuration.
Fields
importStateEndpoint

string

Indicates which endpoint is used to import appliance state.

snapshotId

string

If set, indicates the snapshot id for the snapshot being performed.

StreamingComputationConfig

Configuration information for a single streaming computation.
Fields
computationId

string

Unique identifier for this computation.

instructions[]

object (ParallelInstruction)

Instructions that comprise the computation.

stageName

string

Stage name of this computation.

systemName

string

System defined name for this computation.

transformUserNameToStateFamily

map (key: string, value: string)

Map from user name of stateful transforms in this stage to their state family.

StreamingComputationRanges

Describes full or partial data disk assignment information of the computation ranges.
Fields
computationId

string

The ID of the computation.

rangeAssignments[]

object (KeyRangeDataDiskAssignment)

Data disk assignments for ranges from this computation.

StreamingComputationTask

A task which describes what action should be performed for the specified streaming computation ranges.
Fields
computationRanges[]

object (StreamingComputationRanges)

Contains ranges of a streaming computation this task should apply to.

dataDisks[]

object (MountedDataDisk)

Describes the set of data disks this task should apply to.

taskType

enum

A type of streaming computation task.

Enum type. Can be one of the following:
STREAMING_COMPUTATION_TASK_UNKNOWN The streaming computation task is unknown, or unspecified.
STREAMING_COMPUTATION_TASK_STOP Stop processing specified streaming computation range(s).
STREAMING_COMPUTATION_TASK_START Start processing specified streaming computation range(s).

StreamingConfigTask

A task that carries configuration information for streaming computations.
Fields
commitStreamChunkSizeBytes

string (int64 format)

Chunk size for commit streams from the harness to windmill.

getDataStreamChunkSizeBytes

string (int64 format)

Chunk size for get data streams from the harness to windmill.

maxWorkItemCommitBytes

string (int64 format)

Maximum size for a work item commit supported by the windmill storage layer.

streamingComputationConfigs[]

object (StreamingComputationConfig)

Set of computation configuration information.

userStepToStateFamilyNameMap

map (key: string, value: string)

Map from user step names to state families.

windmillServiceEndpoint

string

If present, the worker must use this endpoint to communicate with Windmill Service dispatchers, otherwise the worker must continue to use whatever endpoint it had been using.

windmillServicePort

string (int64 format)

If present, the worker must use this port to communicate with Windmill Service dispatchers. Only applicable when windmill_service_endpoint is specified.

StreamingSetupTask

A task which initializes part of a streaming Dataflow job.
Fields
drain

boolean

The user has requested drain.

receiveWorkPort

integer (int32 format)

The TCP port on which the worker should listen for messages from other streaming computation workers.

snapshotConfig

object (StreamingApplianceSnapshotConfig)

Configures streaming appliance snapshot.

streamingComputationTopology

object (TopologyConfig)

The global topology of the streaming Dataflow job.

workerHarnessPort

integer (int32 format)

The TCP port used by the worker to communicate with the Dataflow worker harness.

StreamingSideInputLocation

Identifies the location of a streaming side input.
Fields
stateFamily

string

Identifies the state family where this side input is stored.

tag

string

Identifies the particular side input within the streaming Dataflow job.

StreamingStageLocation

Identifies the location of a streaming computation stage, for stage-to-stage communication.
Fields
streamId

string

Identifies the particular stream within the streaming Dataflow job.

StringList

A metric value representing a list of strings.
Fields
elements[]

string

Elements of the list.

StructuredMessage

A rich message format, including a human readable string, a key for identifying the message, and structured data associated with the message for programmatic consumption.
Fields
messageKey

string

Identifier for this message type. Used by external systems to internationalize or personalize the message.

messageText

string

Human-readable version of message.

parameters[]

object (Parameter)

The structured data associated with this message.

TaskRunnerSettings

Taskrunner configuration settings.
Fields
alsologtostderr

boolean

Whether to also send taskrunner log info to stderr.

baseTaskDir

string

The location on the worker for task-specific subdirectories.

baseUrl

string

The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

commandlinesFileName

string

The file to store preprocessing commands in.

continueOnException

boolean

Whether to continue taskrunner if an exception is hit.

dataflowApiVersion

string

The API version of endpoint, e.g. "v1b3"

harnessCommand

string

The command to launch the worker harness.

languageHint

string

The suggested backend language.

logDir

string

The directory on the VM to store logs.

logToSerialconsole

boolean

Whether to send taskrunner log info to Google Compute Engine VM serial console.

logUploadLocation

string

Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

oauthScopes[]

string

The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

parallelWorkerSettings

object (WorkerSettings)

The settings to pass to the parallel worker harness.

streamingWorkerMainClass

string

The streaming worker main class name.

taskGroup

string

The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

taskUser

string

The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

tempStoragePrefix

string

The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

vmId

string

The ID string of the VM.

workflowFileName

string

The file to store the workflow in.

TemplateMetadata

Metadata describing a template.
Fields
description

string

Optional. A description of the template.

name

string

Required. The name of the template.

parameters[]

object (ParameterMetadata)

The parameters for the template.

TopologyConfig

Global topology of the streaming Dataflow job, including all computations and their sharded locations.
Fields
computations[]

object (ComputationTopology)

The computations associated with a streaming Dataflow job.

dataDiskAssignments[]

object (DataDiskAssignment)

The disks assigned to a streaming Dataflow job.

forwardingKeyBits

integer (int32 format)

The size (in bits) of keys that will be assigned to source messages.

persistentStateVersion

integer (int32 format)

Version number for persistent state.

userStageToComputationNameMap

map (key: string, value: string)

Maps user stage names to stable computation names.

TransformSummary

Description of the type, names/ids, and input/outputs for a transform.
Fields
displayData[]

object (DisplayData)

Transform-specific display data.

id

string

SDK generated id of this transform instance.

inputCollectionName[]

string

User names for all collection inputs to this transform.

kind

enum

Type of transform.

Enum type. Can be one of the following:
UNKNOWN_KIND Unrecognized transform type.
PAR_DO_KIND ParDo transform.
GROUP_BY_KEY_KIND Group By Key transform.
FLATTEN_KIND Flatten transform.
READ_KIND Read transform.
WRITE_KIND Write transform.
CONSTANT_KIND Constructs from a constant value, such as with Create.of.
SINGLETON_KIND Creates a Singleton view of a collection.
SHUFFLE_KIND Opening or closing a shuffle session, often as part of a GroupByKey.
name

string

User provided name for this transform instance.

outputCollectionName[]

string

User names for all collection outputs to this transform.

ValidateResponse

Response to the validation request.
Fields
errorMessage

string

Will be empty if validation succeeds.

queryInfo

object (QueryInfo)

Information about the validated query. Not defined if validation fails.

WorkItem

WorkItem represents basic information about a WorkItem to be executed in the cloud.
Fields
configuration

string

Work item-specific configuration as an opaque blob.

id

string (int64 format)

Identifies this WorkItem.

initialReportIndex

string (int64 format)

The initial index to use when reporting the status of the WorkItem.

jobId

string

Identifies the workflow job this WorkItem belongs to.

leaseExpireTime

string (Timestamp format)

Time when the lease on this Work will expire.

mapTask

object (MapTask)

Additional information for MapTask WorkItems.

packages[]

object (Package)

Any required packages that need to be fetched in order to execute this WorkItem.

projectId

string

Identifies the cloud project this WorkItem belongs to.

reportStatusInterval

string (Duration format)

Recommended reporting interval.

seqMapTask

object (SeqMapTask)

Additional information for SeqMapTask WorkItems.

shellTask

object (ShellTask)

Additional information for ShellTask WorkItems.

sourceOperationTask

object (SourceOperationRequest)

Additional information for source operation WorkItems.

streamingComputationTask

object (StreamingComputationTask)

Additional information for StreamingComputationTask WorkItems.

streamingConfigTask

object (StreamingConfigTask)

Additional information for StreamingConfigTask WorkItems.

streamingSetupTask

object (StreamingSetupTask)

Additional information for StreamingSetupTask WorkItems.

WorkItemDetails

Information about an individual work item execution.
Fields
attemptId

string

Attempt ID of this work item.

endTime

string (Timestamp format)

End time of this work item attempt. If the work item is completed, this is the actual end time of the work item. Otherwise, it is the predicted end time.

metrics[]

object (MetricUpdate)

Metrics for this work item.

progress

object (ProgressTimeseries)

Progress of this work item.

startTime

string (Timestamp format)

Start time of this work item attempt.

state

enum

State of this work item.

Enum type. Can be one of the following:
EXECUTION_STATE_UNKNOWN The component state is unknown or unspecified.
EXECUTION_STATE_NOT_STARTED The component is not yet running.
EXECUTION_STATE_RUNNING The component is currently running.
EXECUTION_STATE_SUCCEEDED The component succeeded.
EXECUTION_STATE_FAILED The component failed.
EXECUTION_STATE_CANCELLED Execution of the component was cancelled.
taskId

string

Name of this work item.

WorkItemServiceState

The Dataflow service's idea of the current state of a WorkItem being processed by a worker.
Fields
completeWorkStatus

object (Status)

If set, a request to complete the work item with the given status. This will not be set to OK, unless supported by the specific kind of WorkItem. It can be used for the backend to indicate a WorkItem must terminate, e.g., for aborting work.

harnessData

map (key: string, value: any)

Other data returned by the service, specific to the particular worker harness.

hotKeyDetection

object (HotKeyDetection)

A hot key is a symptom of poor data distribution in which there are enough elements mapped to a single key to impact pipeline performance. When present, this field includes metadata associated with any hot key.

leaseExpireTime

string (Timestamp format)

Time at which the current lease will expire.

metricShortId[]

object (MetricShortId)

The short ids that workers should use in subsequent metric updates. Workers should strive to use short ids whenever possible, but it is ok to request the short_id again if a worker lost track of it (e.g. if the worker is recovering from a crash). NOTE: it is possible that the response may have short ids for a subset of the metrics.

nextReportIndex

string (int64 format)

The index value to use for the next report sent by the worker. Note: If the report call fails for whatever reason, the worker should reuse this index for subsequent report attempts.

reportStatusInterval

string (Duration format)

New recommended reporting interval.

splitRequest

object (ApproximateSplitRequest)

The progress point in the WorkItem where the Dataflow service suggests that the worker truncate the task.

suggestedStopPoint

object (ApproximateProgress)

DEPRECATED in favor of split_request.

suggestedStopPosition

object (Position)

Obsolete, always empty.

WorkItemStatus

Conveys a worker's progress through the work described by a WorkItem.
Fields
completed

boolean

True if the WorkItem was completed (successfully or unsuccessfully).

counterUpdates[]

object (CounterUpdate)

Worker output counters for this WorkItem.

dynamicSourceSplit

object (DynamicSourceSplit)

See documentation of stop_position.

errors[]

object (Status)

Specifies errors which occurred during processing. If errors are provided, and completed = true, then the WorkItem is considered to have failed.

metricUpdates[]

object (MetricUpdate)

DEPRECATED in favor of counter_updates.

progress

object (ApproximateProgress)

DEPRECATED in favor of reported_progress.

reportIndex

string (int64 format)

The report index. When a WorkItem is leased, the lease will contain an initial report index. When a WorkItem's status is reported to the system, the report should be sent with that report index, and the response will contain the index the worker should use for the next report. Reports received with unexpected index values will be rejected by the service. In order to preserve idempotency, the worker should not alter the contents of a report, even if the worker must submit the same report multiple times before getting back a response. The worker should not submit a subsequent report until the response for the previous report has been received from the service.

reportedProgress

object (ApproximateReportedProgress)

The worker's progress through this WorkItem.

requestedLeaseDuration

string (Duration format)

Amount of time the worker requests for its lease.

sourceFork

object (SourceFork)

DEPRECATED in favor of dynamic_source_split.

sourceOperationResponse

object (SourceOperationResponse)

If the work item represented a SourceOperationRequest, and the work is completed, contains the result of the operation.

stopPosition

object (Position)

A worker may split an active map task in two parts, "primary" and "residual", continuing to process the primary part and returning the residual part into the pool of available work. This event is called a "dynamic split" and is critical to the dynamic work rebalancing feature. The two obtained sub-tasks are called "parts" of the split. The parts, if concatenated, must represent the same input as would be read by the current task if the split did not happen. The exact way in which the original task is decomposed into the two parts is specified either as a position demarcating them (stop_position), or explicitly as two DerivedSources, if this task consumes a user-defined source type (dynamic_source_split). The "current" task is adjusted as a result of the split: after a task with range [A, B) sends a stop_position update at C, its range is considered to be [A, C), e.g.: * Progress should be interpreted relative to the new range, e.g. "75% completed" means "75% of [A, C) completed" * The worker should interpret proposed_stop_position relative to the new range, e.g. "split at 68%" should be interpreted as "split at 68% of [A, C)". * If the worker chooses to split again using stop_position, only stop_positions in [A, C) will be accepted. * Etc. dynamic_source_split has similar semantics: e.g., if a task with source S splits using dynamic_source_split into {P, R} (where P and R must be together equivalent to S), then subsequent progress and proposed_stop_position should be interpreted relative to P, and in a potential subsequent dynamic_source_split into {P', R'}, P' and R' must be together equivalent to P, etc.

totalThrottlerWaitTimeSeconds

number (double format)

Total time the worker spent being throttled by external systems.

workItemId

string

Identifies the WorkItem.
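
A hedged sketch of the worker-side bookkeeping implied by the reportIndex and stopPosition descriptions above: the same index is replayed on retries, the index only advances to the nextReportIndex returned by the service, and reporting a stop_position narrows the task's byte range for all later progress and split calculations. The class and identifiers are illustrative, not part of this API.

```python
from typing import Optional

class WorkItemTracker:
    """Illustrative worker-side bookkeeping for WorkItemStatus reporting (sketch)."""

    def __init__(self, initial_report_index: int, start: int, end: int):
        self.report_index = initial_report_index   # from the WorkItem lease
        self.range = (start, end)                  # current byte range [start, end)

    def build_status(self, completed: bool, stop_offset: Optional[int] = None) -> dict:
        status = {
            "workItemId": "example-work-item",      # placeholder id
            "reportIndex": str(self.report_index),  # reuse this exact index on retries
            "completed": completed,
        }
        if stop_offset is not None:
            # Dynamic split: the range becomes [start, stop_offset), and subsequent
            # progress and proposed splits are interpreted relative to it.
            status["stopPosition"] = {"byteOffset": str(stop_offset)}
            self.range = (self.range[0], stop_offset)
        return status

    def on_response(self, service_state: dict) -> None:
        # Advance only after the service acknowledges the previous report.
        self.report_index = int(service_state["nextReportIndex"])
```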

WorkerDetails

Information about a worker.
Fields
workItems[]

object (WorkItemDetails)

Work items processed by this worker, sorted by time.

workerName

string

Name of this worker.

WorkerHealthReport

WorkerHealthReport contains information about the health of a worker. The VM should be identified by the labels attached to the WorkerMessage that this health ping belongs to.
Fields
msg

string

Message describing any unusual health reports.

pods[]

object

The pods running on the worker. See: http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_pod This field is used by the worker to send the status of the individual containers running on each worker.

reportInterval

string (Duration format)

The interval at which the worker is sending health reports. The default value of 0 should be interpreted as the field is not being explicitly set by the worker.

vmBrokenCode

string

Code to describe a specific reason, if known, that a VM has reported broken state.

vmIsBroken

boolean

Whether the VM is in a permanently broken state. Broken VMs should be abandoned or deleted ASAP to avoid assigning or completing any work.

vmIsHealthy

boolean

Whether the VM is currently healthy.

vmStartupTime

string (Timestamp format)

The time the VM was booted.

WorkerHealthReportResponse

WorkerHealthReportResponse contains information returned to the worker in response to a health ping.
Fields
reportInterval

string (Duration format)

A positive value indicates the worker should change its reporting interval to the specified value. The default value of zero means no change in report rate is requested by the server.

WorkerLifecycleEvent

A report of an event in a worker's lifecycle. The proto contains one event, because the worker is expected to asynchronously send each message immediately after the event. Due to this asynchrony, messages may arrive out of order (or missing), and it is up to the consumer to interpret. The timestamp of the event is in the enclosing WorkerMessage proto.
Fields
containerStartTime

string (Timestamp format)

The start time of this container. All events will report this so that events can be grouped together across container/VM restarts.

event

enum

The event being reported.

Enum type. Can be one of the following:
UNKNOWN_EVENT Invalid event.
OS_START The time the VM started.
CONTAINER_START Our container code starts running. Multiple containers could be distinguished with WorkerMessage.labels if desired.
NETWORK_UP The worker has a functional external network connection.
STAGING_FILES_DOWNLOAD_START Started downloading staging files.
STAGING_FILES_DOWNLOAD_FINISH Finished downloading all staging files.
SDK_INSTALL_START For applicable SDKs, started installation of SDK and worker packages.
SDK_INSTALL_FINISH Finished installing SDK.
metadata

map (key: string, value: string)

Other stats that can accompany an event. E.g. { "downloaded_bytes" : "123456" }
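For example, a finished staging-file download might be reported roughly like this (timestamps invented; the metadata key is the one given as an example above):

# Hypothetical WorkerLifecycleEvent (values invented).
worker_lifecycle_event = {
    "containerStartTime": "2023-01-01T12:00:05Z",  # constant across this container's events
    "event": "STAGING_FILES_DOWNLOAD_FINISH",
    "metadata": {"downloaded_bytes": "123456"},
}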

WorkerMessage

WorkerMessage provides information to the backend about a worker.
Fields
labels

map (key: string, value: string)

Labels are used to group WorkerMessages. For example, a worker_message about a particular container might have the labels: { "JOB_ID": "2015-04-22", "WORKER_ID": "wordcount-vm-2015…", "CONTAINER_TYPE": "worker", "CONTAINER_ID": "ac1234def" } Label tags typically correspond to Label enum values. However, for ease of development other strings can be used as tags. LABEL_UNSPECIFIED should not be used here.

time

string (Timestamp format)

The timestamp of the worker_message.

workerHealthReport

object (WorkerHealthReport)

The health of a worker.

workerLifecycleEvent

object (WorkerLifecycleEvent)

Record of worker lifecycle events.

workerMessageCode

object (WorkerMessageCode)

A worker message code.

workerMetrics

object (ResourceUtilizationReport)

Resource metrics reported by workers.

workerShutdownNotice

object (WorkerShutdownNotice)

Shutdown notice by workers.
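Tying the pieces together, a single WorkerMessage carries the grouping labels, the event timestamp, and exactly one payload such as a lifecycle event; the sketch below is illustrative only and all values are invented.

# Hypothetical WorkerMessage carrying one lifecycle event (values invented).
worker_message = {
    "labels": {"JOB_ID": "2015-04-22", "CONTAINER_TYPE": "worker"},
    "time": "2023-01-01T12:00:07Z",                 # timestamp of the event
    "workerLifecycleEvent": {
        "containerStartTime": "2023-01-01T12:00:05Z",
        "event": "NETWORK_UP",
    },
}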

WorkerMessageCode

A message code is used to report status and error messages to the service. The message codes are intended to be machine readable. The service will take care of translating these into user understandable messages if necessary. Example use cases: 1. Worker processes reporting successful startup. 2. Worker processes reporting specific errors (e.g. package staging failure).
Fields
code

string

The code is a string intended for consumption by a machine that identifies the type of message being sent. Examples: 1. "HARNESS_STARTED" might be used to indicate the worker harness has started. 2. "GCS_DOWNLOAD_ERROR" might be used to indicate an error downloading a Cloud Storage file as part of the boot process of one of the worker containers. This is a string and not an enum to make it easy to add new codes without waiting for an API change.

parameters

map (key: string, value: any)

Parameters contains specific information about the code. This is a struct to allow parameters of different types. Examples: 1. For a "HARNESS_STARTED" message, parameters might provide the name of the worker and additional data like timing information. 2. For a "GCS_DOWNLOAD_ERROR" message, parameters might contain fields listing the Cloud Storage objects being downloaded and fields containing errors. In general, complex data structures should be avoided. If a worker needs to send a specific and complicated data structure, then please consider defining a new proto and adding it to the data oneof in WorkerMessageResponse. Conventions: Parameters should only be used for information that isn't typically passed as a label. The hostname and other worker identifiers should almost always be passed as labels, since they will be included on most messages.
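As a sketch of these conventions, a download failure might be reported along the following lines; the parameter names are invented here purely to illustrate code-specific information.

# Hypothetical WorkerMessageCode (parameter names and values invented).
worker_message_code = {
    "code": "GCS_DOWNLOAD_ERROR",
    "parameters": {
        "object": "gs://example-bucket/staging/pipeline.jar",   # invented value
        "error": "403 Forbidden",                                # invented value
    },
}
# Worker identifiers such as the hostname belong in WorkerMessage.labels, not here.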

WorkerMessageResponse

A worker_message response allows the server to pass information to the sender.
Fields
workerHealthReportResponse

object (WorkerHealthReportResponse)

The service's response to a worker's health report.

workerMetricsResponse

object (ResourceUtilizationReportResponse)

Service's response to reporting worker metrics (currently empty).

workerShutdownNoticeResponse

object (WorkerShutdownNoticeResponse)

Service's response to shutdown notice (currently empty).

WorkerPool

Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.
Fields
autoscalingSettings

object (AutoscalingSettings)

Settings for autoscaling of this WorkerPool.

dataDisks[]

object (Disk)

Data disks that are used by a VM in this workflow.

defaultPackageSet

enum

The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

Enum type. Can be one of the following:
DEFAULT_PACKAGE_SET_UNKNOWN The default set of packages to stage is unknown, or unspecified.
DEFAULT_PACKAGE_SET_NONE Indicates that no packages should be staged at the worker unless explicitly specified by the job.
DEFAULT_PACKAGE_SET_JAVA Stage packages typically useful to workers written in Java.
DEFAULT_PACKAGE_SET_PYTHON Stage packages typically useful to workers written in Python.
diskSizeGb

integer (int32 format)

Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

diskSourceImage

string

Fully qualified source image for disks.

diskType

string

Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

ipConfiguration

enum

Configuration for VM IPs.

Enum type. Can be one of the following:
WORKER_IP_UNSPECIFIED The configuration is unknown, or unspecified.
WORKER_IP_PUBLIC Workers should have public IP addresses.
WORKER_IP_PRIVATE Workers should have private IP addresses.
kind

string

The kind of the worker pool; currently only harness and shuffle are supported.

machineType

string

Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

metadata

map (key: string, value: string)

Metadata to set on the Google Compute Engine VMs.

network

string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numThreadsPerWorker

integer (int32 format)

The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

numWorkers

integer (int32 format)

Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

onHostMaintenance

string

The action to take on host maintenance, as defined by the Google Compute Engine API.

packages[]

object (Package)

Packages to be installed on workers.

poolArgs

map (key: string, value: any)

Extra arguments for this worker pool.

sdkHarnessContainerImages[]

object (SdkHarnessContainerImage)

Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

subnetwork

string

Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

taskrunnerSettings

object (TaskRunnerSettings)

Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

teardownPolicy

enum

Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down, regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down only if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy, except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

Enum type. Can be one of the following:
TEARDOWN_POLICY_UNKNOWN The teardown policy isn't specified, or is unknown.
TEARDOWN_ALWAYS Always teardown the resource.
TEARDOWN_ON_SUCCESS Teardown the resource on success. This is useful for debugging failures.
TEARDOWN_NEVER Never teardown the resource. This is useful for debugging and development.
workerHarnessContainerImage

string

Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

zone

string

Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
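As a rough sketch, a harness pool that relies on service defaults for most settings might be configured like this; the specific values are invented, and the autoscalingSettings sub-fields follow the AutoscalingSettings type documented elsewhere in this reference.

# Hypothetical WorkerPool configuration (values invented; omitted fields
# are left for the service to default).
worker_pool = {
    "kind": "harness",
    "numWorkers": 3,
    "machineType": "n1-standard-4",
    "diskSizeGb": 100,
    "ipConfiguration": "WORKER_IP_PRIVATE",
    "teardownPolicy": "TEARDOWN_ALWAYS",
    "network": "default",
    "autoscalingSettings": {
        "algorithm": "AUTOSCALING_ALGORITHM_BASIC",
        "maxNumWorkers": 10,
    },
}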

WorkerSettings

Provides data to pass through to the worker harness.
Fields
baseUrl

string

The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".

reportingEnabled

boolean

Whether to send work progress updates to the service.

servicePath

string

The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".

shuffleServicePath

string

The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
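Since these paths are resolved against baseUrl as relative URLs, the resolution can be reproduced with the Python standard library (urljoin follows RFC 3986, which for these simple cases gives the same result as RFC 1808); the values below are the defaults and examples quoted above.

# Resolving the service paths against the default baseUrl.
from urllib.parse import urljoin

base_url = "http://www.googleapis.com/"
print(urljoin(base_url, "dataflow/v1b3/projects"))  # http://www.googleapis.com/dataflow/v1b3/projects
print(urljoin(base_url, "shuffle/v1beta1"))         # http://www.googleapis.com/shuffle/v1beta1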

tempStoragePrefix

string

The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

workerId

string

The ID of the worker running this pipeline.

WorkerShutdownNotice

Shutdown notification from workers. This is to be sent by the shutdown script of the worker VM so that the backend knows that the VM is being shut down.
Fields
reason

string

The reason for the worker shutdown. Current possible values are: "UNKNOWN": shutdown reason is unknown. "PREEMPTION": shutdown reason is preemption. Other possible reasons may be added in the future.

WriteInstruction

An instruction that writes records. Takes one input, produces no outputs.
Fields
input

object (InstructionInput)

The input.

sink

object (Sink)

The sink to write to.