Index
FlexTemplatesService (interface)
JobsV1Beta3 (interface)
MessagesV1Beta3 (interface)
MetricsV1Beta3 (interface)
TemplatesService (interface)
AutoscalingAlgorithm (enum)
AutoscalingEvent (message)
AutoscalingEvent.AutoscalingEventType (enum)
AutoscalingSettings (message)
BigQueryIODetails (message)
BigTableIODetails (message)
CheckActiveJobsRequest (message)
CheckActiveJobsResponse (message)
CreateJobFromTemplateRequest (message)
CreateJobRequest (message)
DataSamplingConfig (message)
DataSamplingConfig.DataSamplingBehavior (enum)
DatastoreIODetails (message)
DebugOptions (message)
DefaultPackageSet (enum)
Disk (message)
DisplayData (message)
DynamicTemplateLaunchParams (message)
Environment (message)
ExecutionStageState (message)
ExecutionStageSummary (message)
ExecutionStageSummary.ComponentSource (message)
ExecutionStageSummary.ComponentTransform (message)
ExecutionStageSummary.StageSource (message)
ExecutionState (enum)
FailedLocation (message)
FileIODetails (message)
FlexResourceSchedulingGoal (enum)
FlexTemplateRuntimeEnvironment (message)
GetJobExecutionDetailsRequest (message)
GetJobMetricsRequest (message)
GetJobRequest (message)
GetStageExecutionDetailsRequest (message)
GetTemplateRequest (message)
GetTemplateRequest.TemplateView (enum)
GetTemplateResponse (message)
GetTemplateResponse.TemplateType (enum)
HotKeyDebuggingInfo (message)
HotKeyDebuggingInfo.HotKeyInfo (message)
InvalidTemplateParameters (message)
InvalidTemplateParameters.ParameterViolation (message)
Job (message)
JobExecutionDetails (message)
JobExecutionInfo (message)
JobExecutionStageInfo (message)
JobMessage (message)
JobMessageImportance (enum)
JobMetadata (message)
JobMetrics (message)
JobState (enum)
JobType (enum)
JobView (enum)
KindType (enum)
LaunchFlexTemplateParameter (message)
LaunchFlexTemplateRequest (message)
LaunchFlexTemplateResponse (message)
LaunchTemplateParameters (message)
LaunchTemplateRequest (message)
LaunchTemplateResponse (message)
ListJobMessagesRequest (message)
ListJobMessagesResponse (message)
ListJobsRequest (message)
ListJobsRequest.Filter (enum)
ListJobsResponse (message)
MetricStructuredName (message)
MetricUpdate (message)
Package (message)
ParameterMetadata (message)
ParameterMetadataEnumOption (message)
ParameterType (enum)
PipelineDescription (message)
ProgressTimeseries (message)
ProgressTimeseries.Point (message)
PubSubIODetails (message)
PubsubSnapshotMetadata (message)
RuntimeEnvironment (message)
RuntimeMetadata (message)
RuntimeUpdatableParams (message)
SDKInfo (message)
SDKInfo.Language (enum)
SdkBug (message)
SdkBug.Severity (enum)
SdkBug.Type (enum)
SdkHarnessContainerImage (message)
SdkVersion (message)
SdkVersion.SdkSupportStatus (enum)
ServiceResources (message)
ShuffleMode (enum)
Snapshot (message)
SnapshotJobRequest (message)
SnapshotState (enum)
SpannerIODetails (message)
StageExecutionDetails (message)
StageSummary (message)
Step (message)
Straggler (message)
StragglerInfo (message)
StragglerInfo.StragglerDebuggingInfo (message)
StragglerSummary (message)
StreamingMode (enum)
StreamingStragglerInfo (message)
StructuredMessage (message)
StructuredMessage.Parameter (message)
TaskRunnerSettings (message)
TeardownPolicy (enum)
TemplateMetadata (message)
TransformSummary (message)
UpdateJobRequest (message)
WorkItemDetails (message)
WorkerDetails (message)
WorkerIPAddressConfiguration (enum)
WorkerPool (message)
WorkerSettings (message)
FlexTemplatesService
Provides a service for Flex templates.
LaunchFlexTemplate: Launch a job with a FlexTemplate.
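As an illustration, the following minimal sketch calls LaunchFlexTemplate through the generated Python client (the google-cloud-dataflow-client package); using that library, as well as the project, region, bucket path, and parameter names below, are assumptions for illustration only.

```python
from google.cloud import dataflow_v1beta3

# Sketch: launch a job from a Flex Template container spec.
# "my-project", "us-central1", and the gs:// paths are hypothetical values.
client = dataflow_v1beta3.FlexTemplatesServiceClient()

launch_parameter = dataflow_v1beta3.LaunchFlexTemplateParameter(
    job_name="example-flex-job",
    container_spec_gcs_path="gs://my-bucket/templates/spec.json",
    parameters={"inputSubscription": "projects/my-project/subscriptions/in"},
)

request = dataflow_v1beta3.LaunchFlexTemplateRequest(
    project_id="my-project",
    location="us-central1",
    launch_parameter=launch_parameter,
)

response = client.launch_flex_template(request=request)
print(response.job.id, response.job.current_state)
```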
JobsV1Beta3
Provides a method to create and modify Google Cloud Dataflow jobs. A Job is a multi-stage computation graph run by the Cloud Dataflow service.
AggregatedListJobs: List the jobs of a project across all regions. Note: This method doesn't support filtering the list of jobs by name.
CheckActiveJobs: Check for existence of active jobs in the given project across all regions.
CreateJob: Creates a Cloud Dataflow job. To create a job, we recommend using projects.locations.jobs.create with a regional endpoint. Do not enter confidential information when you supply string values using the API.
GetJob: Gets the state of the specified Cloud Dataflow job. To get the state of a job, we recommend using projects.locations.jobs.get with a regional endpoint.
ListJobs: List the jobs of a project. To list the jobs of a project in a region, we recommend using projects.locations.jobs.list with a regional endpoint.
SnapshotJob: Snapshot the state of a streaming job.
UpdateJob: Updates the state of an existing Cloud Dataflow job. To update the state of an existing job, we recommend using projects.locations.jobs.update with a regional endpoint.
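To make the job lifecycle concrete, here is a short sketch using the generated Python client (an assumption; the project, region, and job ID are placeholders) that fetches one job and then lists the jobs in a region.

```python
from google.cloud import dataflow_v1beta3

# Sketch only; "my-project", "us-central1", and the job ID are placeholders.
jobs_client = dataflow_v1beta3.JobsV1Beta3Client()

# GetJob: fetch a single job, requesting summary-level information.
job = jobs_client.get_job(
    request=dataflow_v1beta3.GetJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",
        view=dataflow_v1beta3.JobView.JOB_VIEW_SUMMARY,
    )
)
print(job.name, job.current_state)

# ListJobs: iterate over jobs in the region (results are paginated).
for j in jobs_client.list_jobs(
    request=dataflow_v1beta3.ListJobsRequest(
        project_id="my-project", location="us-central1"
    )
):
    print(j.id, j.current_state)
```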
MessagesV1Beta3
The Dataflow Messages API is used for monitoring the progress of Dataflow jobs.
ListJobMessages: Request the job status. To request the status of a job, we recommend using projects.locations.jobs.messages.list with a regional endpoint.
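The sketch below lists messages for a job at WARNING importance or higher via the generated Python client; the client library and all identifiers are assumptions, not values from this reference.

```python
from google.cloud import dataflow_v1beta3

# Sketch: list job messages at WARNING importance or higher.
# Project, region, and job ID are placeholder values.
messages_client = dataflow_v1beta3.MessagesV1Beta3Client()

request = dataflow_v1beta3.ListJobMessagesRequest(
    project_id="my-project",
    location="us-central1",
    job_id="2024-01-01_00_00_00-1234567890",
    minimum_importance=dataflow_v1beta3.JobMessageImportance.JOB_MESSAGE_WARNING,
)

for message in messages_client.list_job_messages(request=request):
    print(message.time, message.message_text)
```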
MetricsV1Beta3
The Dataflow Metrics API lets you monitor the progress of Dataflow jobs.
GetJobExecutionDetails: Request detailed information about the execution status of the job. EXPERIMENTAL. This API is subject to change or removal without notice.
GetJobMetrics: Request the job status. To request the status of a job, we recommend using projects.locations.jobs.getMetrics with a regional endpoint.
GetStageExecutionDetails: Request detailed information about the execution status of a stage of the job. EXPERIMENTAL. This API is subject to change or removal without notice.
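A minimal sketch of GetJobMetrics with the generated Python client follows; the library, project, region, and job ID are assumptions, and the printed fields follow the MetricUpdate and MetricStructuredName messages described later in this reference.

```python
from google.cloud import dataflow_v1beta3

# Sketch: fetch the latest metric values for a job. Identifiers are placeholders.
metrics_client = dataflow_v1beta3.MetricsV1Beta3Client()

job_metrics = metrics_client.get_job_metrics(
    request=dataflow_v1beta3.GetJobMetricsRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",
    )
)

for update in job_metrics.metrics:
    # Each MetricUpdate carries a structured name (origin, name, context).
    print(update.name.origin, update.name.name, update.scalar)
```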
TemplatesService
Provides a method to create Cloud Dataflow jobs from templates.
CreateJobFromTemplate: Creates a Cloud Dataflow job from a template. Do not enter confidential information when you supply string values using the API. To create a job, we recommend using projects.locations.templates.create with a regional endpoint.
GetTemplate: Get the template associated with a template. To get the template, we recommend using projects.locations.templates.get with a regional endpoint.
LaunchTemplate: Launches a template. To launch a template, we recommend using projects.locations.templates.launch with a regional endpoint.
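For LaunchTemplate, a hedged sketch with the generated Python client is shown below; the gs:// path, parameter names, and identifiers are placeholders rather than documented values.

```python
from google.cloud import dataflow_v1beta3

# Sketch: launch a classic template by its Cloud Storage path.
# All identifiers and paths below are placeholders.
templates_client = dataflow_v1beta3.TemplatesServiceClient()

response = templates_client.launch_template(
    request=dataflow_v1beta3.LaunchTemplateRequest(
        project_id="my-project",
        location="us-central1",
        gcs_path="gs://my-bucket/templates/my_template",
        launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
            job_name="example-launch-job",
            parameters={"inputFile": "gs://my-bucket/input/*.txt"},
        ),
    )
)
print(response.job.id)
```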
AutoscalingAlgorithm
Specifies the algorithm used to determine the number of worker processes to run at any given point in time, based on the amount of data left to process, the number of workers, and how quickly existing workers are processing data.
Enums | |
---|---|
AUTOSCALING_ALGORITHM_UNKNOWN |
The algorithm is unknown, or unspecified. |
AUTOSCALING_ALGORITHM_NONE |
Disable autoscaling. |
AUTOSCALING_ALGORITHM_BASIC |
Increase worker count over time to reduce job execution time. |
AutoscalingEvent
A structured message reporting an autoscaling decision made by the Dataflow service.
Fields | |
---|---|
current_ |
The current number of workers the job has. |
target_ |
The target number of workers the worker pool wants to resize to use. |
event_ |
The type of autoscaling event to report. |
description |
A message describing why the system decided to adjust the current number of workers, why it failed, or why the system decided to not make any changes to the number of workers. |
time |
The time this event was emitted to indicate a new target or current num_workers value. |
worker_ |
A short and friendly name for the worker pool this event refers to. |
AutoscalingEventType
Indicates the type of autoscaling event.
Enums | |
---|---|
TYPE_UNKNOWN |
Default type for the enum. Value should never be returned. |
TARGET_NUM_WORKERS_CHANGED |
The TARGET_NUM_WORKERS_CHANGED type should be used when the target worker pool size has changed at the start of an actuation. An event should always be specified as TARGET_NUM_WORKERS_CHANGED if it reflects a change in the target_num_workers. |
CURRENT_NUM_WORKERS_CHANGED |
The CURRENT_NUM_WORKERS_CHANGED type should be used when actual worker pool size has been changed, but the target_num_workers has not changed. |
ACTUATION_FAILURE |
The ACTUATION_FAILURE type should be used when we want to report an error to the user indicating why the current number of workers in the pool could not be changed. Displayed in the current status and history widgets. |
NO_CHANGE |
Used when we want to report to the user a reason why we are not currently adjusting the number of workers. Should specify both target_num_workers, current_num_workers and a decision_message. |
AutoscalingSettings
Settings for WorkerPool autoscaling.
Fields | |
---|---|
algorithm |
The algorithm to use for autoscaling. |
max_ |
The maximum number of workers to cap scaling at. |
BigQueryIODetails
Metadata for a BigQuery connector used by the job.
Fields | |
---|---|
table |
Table accessed in the connection. |
dataset |
Dataset accessed in the connection. |
project_ |
Project accessed in the connection. |
query |
Query used to access data in the connection. |
BigTableIODetails
Metadata for a Cloud Bigtable connector used by the job.
Fields | |
---|---|
project_ |
ProjectId accessed in the connection. |
instance_ |
InstanceId accessed in the connection. |
table_ |
TableId accessed in the connection. |
CheckActiveJobsRequest
Request to check whether active jobs exist for a project.
Fields | |
---|---|
project_ |
The project which owns the jobs. |
CheckActiveJobsResponse
Response for CheckActiveJobsRequest.
Fields | |
---|---|
active_ |
If true, active jobs exist for the project; false otherwise. |
CreateJobFromTemplateRequest
A request to create a Cloud Dataflow job from a template.
Fields | |
---|---|
project_ |
Required. The ID of the Cloud Platform project that the job belongs to. |
job_ |
Required. The job name to use for the created job. |
parameters |
The runtime parameters to pass to the job. |
environment |
The runtime environment for the job. |
location |
The regional endpoint to which to direct the request. |
Union field template . The template from which to create the job. template can be only one of the following: |
|
gcs_ |
Required. A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'. |
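To show how these request fields fit together, here is a hedged sketch of CreateJobFromTemplate using the generated Python client; the library, identifiers, bucket paths, and parameter names are assumptions for illustration.

```python
from google.cloud import dataflow_v1beta3

# Sketch: create a job from a classic template stored in Cloud Storage.
# The gs:// paths, parameter names, and identifiers are placeholders.
templates_client = dataflow_v1beta3.TemplatesServiceClient()

job = templates_client.create_job_from_template(
    request=dataflow_v1beta3.CreateJobFromTemplateRequest(
        project_id="my-project",
        location="us-central1",
        job_name="example-template-job",
        gcs_path="gs://my-bucket/templates/my_template",
        parameters={"inputFile": "gs://my-bucket/input/*.txt"},
        environment=dataflow_v1beta3.RuntimeEnvironment(
            temp_location="gs://my-bucket/temp",
        ),
    )
)
print(job.id, job.current_state)
```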
CreateJobRequest
Request to create a Cloud Dataflow job.
Fields | |
---|---|
project_ |
The ID of the Cloud Platform project that the job belongs to. |
job |
The job to create. |
view |
The level of information requested in response. |
replace_ |
Deprecated. This field is now in the Job message. |
location |
The regional endpoint that contains this job. |
DataSamplingConfig
Configuration options for sampling elements.
Fields | |
---|---|
behaviors[] |
List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter. |
DataSamplingBehavior
The following enum defines what to sample for a running job.
Enums | |
---|---|
DATA_SAMPLING_BEHAVIOR_UNSPECIFIED |
If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value. |
DISABLED |
When given, disables element sampling. Has same behavior as not setting the behavior. |
ALWAYS_ON |
When given, enables sampling in-flight from all PCollections. |
EXCEPTIONS |
When given, enables sampling input elements when a user-defined DoFn causes an exception. |
DatastoreIODetails
Metadata for a Datastore connector used by the job.
Fields | |
---|---|
namespace |
Namespace used in the connection. |
project_ |
ProjectId accessed in the connection. |
DebugOptions
Describes any options that have an effect on the debugging of pipelines.
Fields | |
---|---|
enable_ |
Optional. When true, enables the logging of the literal hot key to the user's Cloud Logging. |
data_ |
Configuration options for sampling elements from a running pipeline. |
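The sketch below shows how DataSamplingConfig and DebugOptions might be combined when building an Environment; the client-library types and the DebugOptions field name data_sampling are assumptions inferred from the field prefixes above, not confirmed by this reference.

```python
from google.cloud import dataflow_v1beta3

# Sketch: enable element sampling and hot-key logging via DebugOptions.
# Field name data_sampling is an assumption based on the table above.
sampling = dataflow_v1beta3.DataSamplingConfig(
    behaviors=[
        dataflow_v1beta3.DataSamplingConfig.DataSamplingBehavior.ALWAYS_ON,
        dataflow_v1beta3.DataSamplingConfig.DataSamplingBehavior.EXCEPTIONS,
    ]
)

debug_options = dataflow_v1beta3.DebugOptions(
    enable_hot_key_logging=True,
    data_sampling=sampling,
)

environment = dataflow_v1beta3.Environment(debug_options=debug_options)
```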
DefaultPackageSet
The default set of packages to be staged on a pool of workers.
Enums | |
---|---|
DEFAULT_PACKAGE_SET_UNKNOWN |
The default set of packages to stage is unknown, or unspecified. |
DEFAULT_PACKAGE_SET_NONE |
Indicates that no packages should be staged at the worker unless explicitly specified by the job. |
DEFAULT_PACKAGE_SET_JAVA |
Stage packages typically useful to workers written in Java. |
DEFAULT_PACKAGE_SET_PYTHON |
Stage packages typically useful to workers written in Python. |
Disk
Describes the data disk used by a workflow job.
Fields | |
---|---|
size_ |
Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default. |
disk_ |
Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard |
mount_ |
Directory in a VM where disk is mounted. |
DisplayData
Data provided with a pipeline or transform to provide descriptive info.
Fields | |
---|---|
key |
The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system. |
namespace |
The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering. |
short_ |
A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip. |
url |
An optional full URL. |
label |
An optional label to display in a dax UI for the element. |
Union field Value . Various value types which can be used for display data. Only one will be set. Value can be only one of the following: |
|
str_ |
Contains value if the data is of string type. |
int64_ |
Contains value if the data is of int64 type. |
float_ |
Contains value if the data is of float type. |
java_ |
Contains value if the data is of java class type. |
timestamp_ |
Contains value if the data is of timestamp type. |
duration_ |
Contains value if the data is of duration type. |
bool_ |
Contains value if the data is of a boolean type. |
DynamicTemplateLaunchParams
Parameters to pass when launching a dynamic template.
Fields | |
---|---|
gcs_ |
Path to the dynamic template specification file on Cloud Storage. The file must be a JSON serialized |
staging_ |
Cloud Storage path for staging dependencies. Must be a valid Cloud Storage URL, beginning with 'gs://'. |
Environment
Describes the environment in which a Dataflow Job runs.
Fields | |
---|---|
temp_ |
The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object} |
cluster_ |
The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com". |
experiments[] |
The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options. |
service_ |
Optional. The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on). |
service_ |
Optional. If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY |
worker_ |
The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers. |
user_ |
A description of the process that generated the request. |
version |
A structure describing which components and their versions of the service are required in order to run the job. |
dataset |
Optional. The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset} |
sdk_ |
The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way. |
internal_ |
Experimental settings. |
service_ |
Optional. Identity to run virtual machines as. Defaults to the default account. |
flex_ |
Optional. Which Flexible Resource Scheduling mode to run in. |
worker_ |
Optional. The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region. |
worker_ |
Optional. The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. |
shuffle_ |
Output only. The shuffle mode used for the job. |
debug_ |
Optional. Any debugging options to be supplied to the job. |
use_ |
Output only. Whether the job uses the Streaming Engine resource-based billing model. |
streaming_ |
Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode. |
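A minimal sketch of constructing an Environment for a job request follows; it uses the generated Python client types, and the bucket, service account, and region values are placeholders rather than documented defaults.

```python
from google.cloud import dataflow_v1beta3

# Sketch: an Environment for a job create request. The temp_storage_prefix
# uses the storage.googleapis.com/{bucket}/{object} form described above;
# all concrete values are hypothetical.
environment = dataflow_v1beta3.Environment(
    temp_storage_prefix="storage.googleapis.com/my-bucket/temp",
    service_account_email="dataflow-worker@my-project.iam.gserviceaccount.com",
    worker_region="us-central1",  # mutually exclusive with worker_zone
)

job = dataflow_v1beta3.Job(
    name="example-job",
    environment=environment,
)
```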
ExecutionStageState
A message describing the state of a particular execution stage.
Fields | |
---|---|
execution_ |
The name of the execution stage. |
execution_ |
Executions stage states allow the same set of values as JobState. |
current_ |
The time at which the stage transitioned to this state. |
ExecutionStageSummary
Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning.
Fields | |
---|---|
name |
Dataflow service generated name for this stage. |
id |
Dataflow service generated id for this stage. |
kind |
Type of transform this stage is executing. |
input_ |
Input sources for this stage. |
output_ |
Output sources for this stage. |
prerequisite_ |
Other stages that must complete before this stage can run. |
component_ |
Transforms that comprise this execution stage. |
component_ |
Collections produced and consumed by component transforms of this stage. |
ComponentSource
Description of an interstitial value between transforms in an execution stage.
Fields | |
---|---|
user_ |
Human-readable name for this transform; may be user or system generated. |
name |
Dataflow service generated name for this source. |
original_ |
User name for the original user transform or collection with which this source is most closely associated. |
ComponentTransform
Description of a transform executed as part of an execution stage.
Fields | |
---|---|
user_ |
Human-readable name for this transform; may be user or system generated. |
name |
Dataflow service generated name for this source. |
original_ |
User name for the original user transform with which this transform is most closely associated. |
StageSource
Description of an input or output of an execution stage.
Fields | |
---|---|
user_ |
Human-readable name for this source; may be user or system generated. |
name |
Dataflow service generated name for this source. |
original_ |
User name for the original user transform or collection with which this source is most closely associated. |
size_ |
Size of the source, if measurable. |
ExecutionState
The state of some component of job execution.
Enums | |
---|---|
EXECUTION_STATE_UNKNOWN |
The component state is unknown or unspecified. |
EXECUTION_STATE_NOT_STARTED |
The component is not yet running. |
EXECUTION_STATE_RUNNING |
The component is currently running. |
EXECUTION_STATE_SUCCEEDED |
The component succeeded. |
EXECUTION_STATE_FAILED |
The component failed. |
EXECUTION_STATE_CANCELLED |
Execution of the component was cancelled. |
FailedLocation
Indicates which regional endpoint failed to respond to a request for data.
Fields | |
---|---|
name |
The name of the regional endpoint that failed to respond. |
FileIODetails
Metadata for a File connector used by the job.
Fields | |
---|---|
file_ |
File Pattern used to access files by the connector. |
FlexResourceSchedulingGoal
Specifies the resource to optimize for in Flexible Resource Scheduling.
Enums | |
---|---|
FLEXRS_UNSPECIFIED |
Run in the default mode. |
FLEXRS_SPEED_OPTIMIZED |
Optimize for lower execution time. |
FLEXRS_COST_OPTIMIZED |
Optimize for lower cost. |
FlexTemplateRuntimeEnvironment
The environment values to be set at runtime for the Flex Template.
Fields | |
---|---|
num_ |
The initial number of Google Compute Engine instances for the job. |
max_ |
The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000. |
zone |
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence. |
service_ |
The email address of the service account to run the job as. |
temp_ |
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with 'gs://'. |
machine_ |
The machine type to use for the job. Defaults to the value from the template if not specified. |
additional_ |
Additional experiment flags for the job. |
network |
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default". |
subnetwork |
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL. |
additional_ |
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions page. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }. |
kms_ |
Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY |
ip_ |
Configuration for VM IPs. |
worker_ |
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region. |
worker_ |
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence. |
enable_ |
Whether to enable Streaming Engine for the job. |
flexrs_ |
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs |
staging_ |
The Cloud Storage path for staging local files. Must be a valid Cloud Storage URL, beginning with 'gs://'. |
sdk_ |
Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note: this field is only valid for portable pipelines. |
disk_ |
Worker disk size, in gigabytes. |
autoscaling_ |
The algorithm to use for autoscaling |
dump_ |
If true, when processing time is spent almost entirely on garbage collection (GC), saves a heap dump before ending the thread or process. If false, ends the thread or process without saving a heap dump. Does not save a heap dump when the Java Virtual Machine (JVM) has an out of memory error during processing. The location of the heap file is either echoed back to the user, or the user is given the opportunity to download the heap file. |
save_ |
Cloud Storage bucket (directory) to upload heap dumps to. Enabling this field implies that dump_heap_on_oom is set to true. |
launcher_ |
The machine type to use for launching the job. The default is n1-standard-1. |
enable_ |
If true, serial port logging will be enabled for the launcher VM. |
streaming_ |
Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode. |
GetJobExecutionDetailsRequest
Request to get job execution details.
Fields | |
---|---|
project_ |
A project id. |
job_ |
The job to get execution details for. |
location |
The regional endpoint that contains the job specified by job_id. |
page_ |
If specified, determines the maximum number of stages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results. |
page_ |
If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned. |
GetJobMetricsRequest
Request to get job metrics.
Fields | |
---|---|
project_ |
A project id. |
job_ |
The job to get metrics for. |
start_ |
Return only metric data that has changed since this time. Default is to return all information about all metrics for the job. |
location |
The regional endpoint that contains the job specified by job_id. |
GetJobRequest
Request to get the state of a Cloud Dataflow job.
Fields | |
---|---|
project_ |
The ID of the Cloud Platform project that the job belongs to. |
job_ |
The job ID. |
view |
The level of information requested in response. |
location |
The regional endpoint that contains this job. |
GetStageExecutionDetailsRequest
Request to get information about a particular execution stage of a job. Currently only tracked for Batch jobs.
Fields | |
---|---|
project_ |
A project id. |
job_ |
The job to get execution details for. |
location |
The regional endpoint that contains the job specified by job_id. |
stage_ |
The stage for which to fetch information. |
page_ |
If specified, determines the maximum number of work items to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results. |
page_ |
If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned. |
start_ |
Lower time bound of work items to include, by start time. |
end_ |
Upper time bound of work items to include, by start time. |
GetTemplateRequest
A request to retrieve a Cloud Dataflow job template.
Fields | |
---|---|
project_ |
Required. The ID of the Cloud Platform project that the job belongs to. |
view |
The view to retrieve. Defaults to METADATA_ONLY. |
location |
The regional endpoint to which to direct the request. |
Union field template . The template from which to create the job. template can be only one of the following: |
|
gcs_ |
Required. A Cloud Storage path to the template from which to create the job. Must be valid Cloud Storage URL, beginning with 'gs://'. |
TemplateView
The various views of a template that may be retrieved.
Enums | |
---|---|
METADATA_ONLY |
Template view that retrieves only the metadata associated with the template. |
GetTemplateResponse
The response to a GetTemplate request.
Fields | |
---|---|
status |
The status of the get template request. Any problems with the request will be indicated in the error_details. |
metadata |
The template metadata describing the template name, available parameters, etc. |
template_ |
Template Type. |
runtime_ |
Describes the runtime metadata with SDKInfo and available parameters. |
TemplateType
Template Type.
Enums | |
---|---|
UNKNOWN |
Unknown Template Type. |
LEGACY |
Legacy Template. |
FLEX |
Flex Template. |
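The following hedged sketch retrieves template metadata with the generated Python client and inspects the response fields described above; the library, project, region, and template path are assumptions.

```python
from google.cloud import dataflow_v1beta3

# Sketch: retrieve template metadata and inspect its declared parameters.
# The project, region, and template path are placeholders.
templates_client = dataflow_v1beta3.TemplatesServiceClient()

response = templates_client.get_template(
    request=dataflow_v1beta3.GetTemplateRequest(
        project_id="my-project",
        location="us-central1",
        gcs_path="gs://my-bucket/templates/my_template",
        view=dataflow_v1beta3.GetTemplateRequest.TemplateView.METADATA_ONLY,
    )
)

print(response.template_type)  # LEGACY or FLEX
for param in response.metadata.parameters:
    print(param.name, param.label, param.is_optional)
```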
HotKeyDebuggingInfo
Information useful for debugging a hot key detection.
Fields | |
---|---|
detected_ |
Debugging information for each detected hot key. Keyed by a hash of the key. |
HotKeyInfo
Information about a hot key.
Fields | |
---|---|
hot_ |
The age of the hot key measured from when it was first detected. |
key |
A detected hot key that is causing limited parallelism. This field will be populated only if the following flag is set to true: "--enable_hot_key_logging". |
key_ |
If true, then the above key is truncated and cannot be deserialized. This occurs if the key above is populated and the key size is >5MB. |
InvalidTemplateParameters
Used in the error_details field of a google.rpc.Status message, this indicates problems with the template parameter.
Fields | |
---|---|
parameter_ |
Describes all parameter violations in a template request. |
ParameterViolation
A specific template-parameter violation.
Fields | |
---|---|
parameter |
The parameter that failed to validate. |
description |
A description of why the parameter failed to validate. |
Job
Defines a job to be run by the Cloud Dataflow service. Do not enter confidential information when you supply string values using the API.
Fields | |
---|---|
id |
The unique ID of this job. This field is set by the Dataflow service when the job is created, and is immutable for the life of the job. |
project_ |
The ID of the Google Cloud project that the job belongs to. |
name |
Optional. The user-specified Dataflow job name. Only one active job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a job with the same name as an active job that already exists, the attempt returns the existing job. The name must match the regular expression |
type |
Optional. The type of Dataflow job. |
environment |
Optional. The environment for the job. |
steps[] |
Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL. |
steps_ |
The Cloud Storage location where the steps are stored. |
current_ |
The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field might be mutated by the Dataflow service; callers cannot mutate it. |
current_ |
The timestamp associated with the current state. |
requested_ |
The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests. |
execution_ |
Deprecated. |
create_ |
The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service. |
replace_ |
If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job. |
transform_ |
Optional. The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. |
client_ |
The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it. |
replaced_ |
If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job. |
temp_ |
A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object} |
labels |
User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:
|
location |
Optional. The regional endpoint that contains this job. |
pipeline_ |
Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL. |
stage_ |
This field may be mutated by the Cloud Dataflow service; callers cannot mutate it. |
job_ |
This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher. |
start_ |
The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service. |
created_ |
If this is specified, the job's initial state is populated from the given snapshot. |
satisfies_ |
Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests. |
runtime_ |
This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation. |
satisfies_ |
Output only. Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests. |
service_ |
Output only. Resources used by the Dataflow Service to run the job. |
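Since requested_state is the field used to stop a running job, a hedged sketch of that pattern via UpdateJob is shown below; the generated Python client and all identifiers are assumptions, and the allowed transitions are governed by JobState as described later.

```python
from google.cloud import dataflow_v1beta3

# Sketch: request cancellation of a job by setting requested_state through
# UpdateJob. Identifiers are placeholders.
client = dataflow_v1beta3.JobsV1Beta3Client()

cancelled = client.update_job(
    request=dataflow_v1beta3.UpdateJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",
        job=dataflow_v1beta3.Job(
            requested_state=dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
        ),
    )
)
print(cancelled.current_state)
```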
JobExecutionDetails
Information about the execution of a job.
Fields | |
---|---|
stages[] |
The stages of the job execution. |
next_ |
If present, this response does not contain all requested tasks. To obtain the next page of results, repeat the request with page_token set to this value. |
JobExecutionInfo
Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job.
Fields | |
---|---|
stages |
A mapping from each stage to the information about that stage. |
JobExecutionStageInfo
Contains information about how a particular google.dataflow.v1beta3.Step
will be executed.
Fields | |
---|---|
step_ |
The steps associated with the execution stage. Note that stages may have several steps, and that a given step might be run by more than one stage. |
JobMessage
A particular message pertaining to a Dataflow job.
Fields | |
---|---|
id |
Deprecated. |
time |
The timestamp of the message. |
message_ |
The text of the message. |
message_ |
Importance level of the message. |
JobMessageImportance
Indicates the importance of the message.
Enums | |
---|---|
JOB_MESSAGE_IMPORTANCE_UNKNOWN |
The message importance isn't specified, or is unknown. |
JOB_MESSAGE_DEBUG |
The message is at the 'debug' level: typically only useful for software engineers working on the code the job is running. Typically, Dataflow pipeline runners do not display log messages at this level by default. |
JOB_MESSAGE_DETAILED |
The message is at the 'detailed' level: somewhat verbose, but potentially useful to users. Typically, Dataflow pipeline runners do not display log messages at this level by default. These messages are displayed by default in the Dataflow monitoring UI. |
JOB_MESSAGE_BASIC |
The message is at the 'basic' level: useful for keeping track of the execution of a Dataflow pipeline. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI. |
JOB_MESSAGE_WARNING |
The message is at the 'warning' level: indicating a condition pertaining to a job which may require human intervention. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI. |
JOB_MESSAGE_ERROR |
The message is at the 'error' level: indicating a condition preventing a job from succeeding. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI. |
JobMetadata
Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view.
Fields | |
---|---|
sdk_ |
The SDK version used to run the job. |
spanner_ |
Identification of a Spanner source used in the Dataflow job. |
bigquery_ |
Identification of a BigQuery source used in the Dataflow job. |
big_ |
Identification of a Cloud Bigtable source used in the Dataflow job. |
pubsub_ |
Identification of a Pub/Sub source used in the Dataflow job. |
file_ |
Identification of a File source used in the Dataflow job. |
datastore_ |
Identification of a Datastore source used in the Dataflow job. |
user_ |
List of display properties to help UI filter jobs. |
JobMetrics
JobMetrics contains a collection of metrics describing the detailed progress of a Dataflow job. Metrics correspond to user-defined and system-defined metrics in the job. For more information, see Dataflow job metrics.
This resource captures only the most recent values of each metric; time-series data can be queried for them (under the same metric names) from Cloud Monitoring.
Fields | |
---|---|
metric_ |
Timestamp as of which metric values are current. |
metrics[] |
All metrics for this job. |
JobState
Describes the overall state of a google.dataflow.v1beta3.Job
.
Enums | |
---|---|
JOB_STATE_UNKNOWN |
The job's run state isn't specified. |
JOB_STATE_STOPPED |
JOB_STATE_STOPPED indicates that the job has not yet started to run. |
JOB_STATE_RUNNING |
JOB_STATE_RUNNING indicates that the job is currently running. |
JOB_STATE_DONE |
JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING . It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state. |
JOB_STATE_FAILED |
JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING . |
JOB_STATE_CANCELLED |
JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state. |
JOB_STATE_UPDATED |
JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING . |
JOB_STATE_DRAINING |
JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING . Jobs that are draining may only transition to JOB_STATE_DRAINED , JOB_STATE_CANCELLED , or JOB_STATE_FAILED . |
JOB_STATE_DRAINED |
JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING . |
JOB_STATE_PENDING |
JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING , or JOB_STATE_FAILED . |
JOB_STATE_CANCELLING |
JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED . |
JOB_STATE_QUEUED |
JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED . |
JOB_STATE_RESOURCE_CLEANING_UP |
JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested. |
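As a small illustration of the terminal states listed above, here is a hedged polling sketch using the generated Python client; the terminal set mirrors the enum descriptions, and the client, polling interval, and identifiers are assumptions.

```python
import time

from google.cloud import dataflow_v1beta3

# Sketch: poll until a job reaches a terminal state.
TERMINAL_STATES = {
    dataflow_v1beta3.JobState.JOB_STATE_DONE,
    dataflow_v1beta3.JobState.JOB_STATE_FAILED,
    dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
    dataflow_v1beta3.JobState.JOB_STATE_UPDATED,
    dataflow_v1beta3.JobState.JOB_STATE_DRAINED,
}

def wait_for_terminal_state(project_id, location, job_id):
    client = dataflow_v1beta3.JobsV1Beta3Client()
    while True:
        job = client.get_job(
            request=dataflow_v1beta3.GetJobRequest(
                project_id=project_id, location=location, job_id=job_id
            )
        )
        if job.current_state in TERMINAL_STATES:
            return job.current_state
        time.sleep(30)  # arbitrary polling interval for this sketch
```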
JobType
Specifies the processing model used by a google.dataflow.v1beta3.Job
, which determines the way the Job is managed by the Cloud Dataflow service (how workers are scheduled, how inputs are sharded, etc).
Enums | |
---|---|
JOB_TYPE_UNKNOWN |
The type of the job is unspecified, or unknown. |
JOB_TYPE_BATCH |
A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done. |
JOB_TYPE_STREAMING |
A continuously streaming job with no end: data is read, processed, and written continuously. |
JobView
Selector for how much information is returned in Job responses.
Enums | |
---|---|
JOB_VIEW_UNKNOWN |
The job view to return isn't specified, or is unknown. Responses will contain at least the JOB_VIEW_SUMMARY information, and may contain additional information. |
JOB_VIEW_SUMMARY |
Request summary information only: Project ID, Job ID, job name, job type, job status, start/end time, and Cloud SDK version details. |
JOB_VIEW_ALL |
Request all information available for this job. When the job is in JOB_STATE_PENDING, the job has been created but is not yet running, and not all job information is available. For complete job information, wait until the job is in JOB_STATE_RUNNING. For more information, see JobState. |
JOB_VIEW_DESCRIPTION |
Request summary info and limited job description data for steps, labels and environment. |
KindType
Type of transform or stage operation.
Enums | |
---|---|
UNKNOWN_KIND |
Unrecognized transform type. |
PAR_DO_KIND |
ParDo transform. |
GROUP_BY_KEY_KIND |
Group By Key transform. |
FLATTEN_KIND |
Flatten transform. |
READ_KIND |
Read transform. |
WRITE_KIND |
Write transform. |
CONSTANT_KIND |
Constructs from a constant value, such as with Create.of. |
SINGLETON_KIND |
Creates a Singleton view of a collection. |
SHUFFLE_KIND |
Opening or closing a shuffle session, often as part of a GroupByKey. |
LaunchFlexTemplateParameter
Launch FlexTemplate Parameter.
Fields | |
---|---|
job_ |
Required. The job name to use for the created job. For update job request, job name should be same as the existing running job. |
parameters |
The parameters for FlexTemplate. Ex. {"num_workers":"5"} |
launch_ |
Launch options for this flex template job. This is a common set of options across languages and templates. This should not be used to pass job parameters. |
environment |
The runtime environment for the FlexTemplate job |
update |
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job. |
transform_ |
Use this to pass transform_name_mappings for streaming update jobs. Ex: {"oldTransformName":"newTransformName",...} |
Union field template . Launch Mechanism. template can be only one of the following: |
|
container_ |
Cloud Storage path to a file with json serialized ContainerSpec as content. |
LaunchFlexTemplateRequest
A request to launch a Cloud Dataflow job from a FlexTemplate.
Fields | |
---|---|
project_ |
Required. The ID of the Cloud Platform project that the job belongs to. |
launch_ |
Required. Parameters to launch a job from a Flex Template. |
location |
Required. The regional endpoint to which to direct the request. E.g., us-central1, us-west1. |
validate_ |
If true, the request is validated but not actually executed. Defaults to false. |
LaunchFlexTemplateResponse
Response to the request to launch a job from Flex Template.
Fields | |
---|---|
job |
The job that was launched, if the request was not a dry run and the job was successfully launched. |
LaunchTemplateParameters
Parameters to provide to the template being launched. Note that the metadata in the pipeline code determines which runtime parameters are valid.
Fields | |
---|---|
job_ |
Required. The job name to use for the created job. The name must match the regular expression |
parameters |
The runtime parameters to pass to the job. |
environment |
The runtime environment for the job. |
update |
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state. |
transform_ |
Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. |
LaunchTemplateRequest
A request to launch a template.
Fields | |
---|---|
project_ |
Required. The ID of the Cloud Platform project that the job belongs to. |
validate_ |
If true, the request is validated but not actually executed. Defaults to false. |
launch_ |
The parameters of the template to launch. Part of the body of the POST request. |
location |
The regional endpoint to which to direct the request. |
Union field template . The template to use to create the job. template can be only one of the following: |
|
gcs_ |
A Cloud Storage path to the template to use to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'. |
dynamic_ |
Parameters for launching a dynamic template. |
LaunchTemplateResponse
Response to the request to launch a template.
Fields | |
---|---|
job |
The job that was launched, if the request was not a dry run and the job was successfully launched. |
ListJobMessagesRequest
Request to list job messages. Up to max_results messages will be returned in the time range specified, starting with the oldest messages first. If no time range is specified, the results will start with the oldest message.
Fields | |
---|---|
project_ |
A project id. |
job_ |
The job to get messages about. |
minimum_ |
Filter to only get messages with importance >= level |
page_ |
If specified, determines the maximum number of messages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results. |
page_ |
If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned. |
start_ |
If specified, return only messages with timestamps >= start_time. The default is the job creation time (i.e. beginning of messages). |
end_ |
Return only messages with timestamps < end_time. The default is now (i.e. return up to the latest messages available). |
location |
The regional endpoint that contains the job specified by job_id. |
ListJobMessagesResponse
Response to a request to list job messages.
Fields | |
---|---|
job_ |
Messages in ascending timestamp order. |
next_ |
The token to obtain the next page of results if there are more. |
autoscaling_ |
Autoscaling events in ascending timestamp order. |
ListJobsRequest
Request to list Cloud Dataflow jobs.
Fields | |
---|---|
filter |
The kind of filter to use. |
project_ |
The project which owns the jobs. |
view |
Deprecated. ListJobs always returns summaries now. Use GetJob for other JobViews. |
page_ |
If there are many jobs, limit response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit. |
page_ |
Set this to the 'next_page_token' field of a previous response to request additional results in a long list. |
location |
The regional endpoint that contains this job. |
name |
Optional. The job name. |
Filter
This field filters out and returns jobs in the specified job state. The order of data returned is determined by the filter used, and is subject to change.
Enums | |
---|---|
UNKNOWN |
The filter isn't specified, or is unknown. This returns all jobs ordered on descending JobUuid . |
ALL |
Returns all running jobs first ordered on creation timestamp, then returns all terminated jobs ordered on the termination timestamp. |
TERMINATED |
Filters the jobs that have a terminated state, ordered on the termination timestamp. Example terminated states: JOB_STATE_STOPPED , JOB_STATE_UPDATED , JOB_STATE_DRAINED , etc. |
ACTIVE |
Filters the jobs that are running ordered on the creation timestamp. |
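The sketch below shows the ACTIVE filter in use with the generated Python client; the library, project, and region are assumptions, not values from this reference.

```python
from google.cloud import dataflow_v1beta3

# Sketch: list only active jobs in a region using the ACTIVE filter.
# Project and region are placeholders.
client = dataflow_v1beta3.JobsV1Beta3Client()

request = dataflow_v1beta3.ListJobsRequest(
    project_id="my-project",
    location="us-central1",
    filter=dataflow_v1beta3.ListJobsRequest.Filter.ACTIVE,
)

for job in client.list_jobs(request=request):
    print(job.id, job.name, job.current_state)
```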
ListJobsResponse
Response to a request to list Cloud Dataflow jobs in a project. This might be a partial response, depending on the page size in the ListJobsRequest. However, if the project does not have any jobs, an instance of ListJobsResponse is not returned and the request's response body is empty {}.
Fields | |
---|---|
jobs[] |
A subset of the requested job information. |
next_ |
Set if there may be more results than fit in this response. |
failed_ |
Zero or more messages describing the regional endpoints that failed to respond. |
MetricStructuredName
Identifies a metric, by describing the source which generated the metric.
Fields | |
---|---|
origin |
Origin (namespace) of metric name. May be blank for user-defined metrics; will be "dataflow" for metrics defined by the Dataflow service or SDK. |
name |
Worker-defined metric name. |
context |
Zero or more labeled fields which identify the part of the job this metric is associated with, such as the name of a step or collection. For example, built-in counters associated with steps will have context['step'] = <step-name>. |
MetricUpdate
Describes the state of a metric.
Fields | |
---|---|
name |
Name of the metric. |
kind |
Metric aggregation kind. The possible metric aggregation kinds are "Sum", "Max", "Min", "Mean", "Set", "And", "Or", and "Distribution". The specified aggregation kind is case-insensitive. If omitted, this is not an aggregated value but instead a single metric sample value. |
cumulative |
True if this metric is reported as the total cumulative aggregate value accumulated since the worker started working on this WorkItem. By default this is false, indicating that this metric is reported as a delta that is not associated with any WorkItem. |
scalar |
Worker-computed aggregate value for aggregation kinds "Sum", "Max", "Min", "And", and "Or". The possible value types are Long, Double, and Boolean. |
mean_ |
Worker-computed aggregate value for the "Mean" aggregation kind. This holds the sum of the aggregated values and is used in combination with mean_count below to obtain the actual mean aggregate value. The only possible value types are Long and Double. |
mean_ |
Worker-computed aggregate value for the "Mean" aggregation kind. This holds the count of the aggregated values and is used in combination with mean_sum above to obtain the actual mean aggregate value. The only possible value type is Long. |
set |
Worker-computed aggregate value for the "Set" aggregation kind. The only possible value type is a list of Values whose type can be Long, Double, or String, according to the metric's type. All Values in the list must be of the same type. |
distribution |
A struct value describing properties of a distribution of numeric values. |
gauge |
A struct value describing properties of a Gauge. Metrics of gauge type show the value of a metric across time, and is aggregated based on the newest value. |
internal |
Worker-computed aggregate value for internal use by the Dataflow service. |
update_ |
Timestamp associated with the metric value. Optional when workers are reporting work progress; it will be filled in responses from the metrics API. |
Package
The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool.
This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run.
Fields | |
---|---|
name |
The name of the package. |
location |
The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/ |
ParameterMetadata
Metadata for a specific parameter.
Fields | |
---|---|
name |
Required. The name of the parameter. |
label |
Required. The label to display for the parameter. |
help_ |
Required. The help text to display for the parameter. |
is_ |
Optional. Whether the parameter is optional. Defaults to false. |
regexes[] |
Optional. Regexes that the parameter must match. |
param_ |
Optional. The type of the parameter. Used for selecting input picker. |
custom_ |
Optional. Additional metadata for describing this parameter. |
group_ |
Optional. Specifies a group name for this parameter to be rendered under. Group header text will be rendered exactly as specified in this field. Only considered when parent_name is NOT provided. |
parent_ |
Optional. Specifies the name of the parent parameter. Used in conjunction with 'parent_trigger_values' to make this parameter conditional (will only be rendered conditionally). Should be mappable to a ParameterMetadata.name field. |
parent_ |
Optional. The value(s) of the 'parent_name' parameter which will trigger this parameter to be shown. If left empty, ANY non-empty value in parent_name will trigger this parameter to be shown. Only considered when this parameter is conditional (when 'parent_name' has been provided). |
enum_ |
Optional. The options shown when ENUM ParameterType is specified. |
default_ |
Optional. The default values will pre-populate the parameter with the given value from the proto. If default_value is left empty, the parameter will be populated with a default of the relevant type, e.g. false for a boolean. |
ParameterMetadataEnumOption
ParameterMetadataEnumOption specifies the option shown in the enum form.
Fields | |
---|---|
value |
Required. The value of the enum option. |
label |
Optional. The label to display for the enum option. |
description |
Optional. The description to display for the enum option. |
ParameterType
ParameterType specifies what kind of input we need for this parameter.
Enums | |
---|---|
DEFAULT |
Default input type. |
TEXT |
The parameter specifies generic text input. |
GCS_READ_BUCKET |
The parameter specifies a Cloud Storage Bucket to read from. |
GCS_WRITE_BUCKET |
The parameter specifies a Cloud Storage Bucket to write to. |
GCS_READ_FILE |
The parameter specifies a Cloud Storage file path to read from. |
GCS_WRITE_FILE |
The parameter specifies a Cloud Storage file path to write to. |
GCS_READ_FOLDER |
The parameter specifies a Cloud Storage folder path to read from. |
GCS_WRITE_FOLDER |
The parameter specifies a Cloud Storage folder to write to. |
PUBSUB_TOPIC |
The parameter specifies a Pub/Sub Topic. |
PUBSUB_SUBSCRIPTION |
The parameter specifies a Pub/Sub Subscription. |
BIGQUERY_TABLE |
The parameter specifies a BigQuery table. |
JAVASCRIPT_UDF_FILE |
The parameter specifies a JavaScript UDF in Cloud Storage. |
SERVICE_ACCOUNT |
The parameter specifies a Service Account email. |
MACHINE_TYPE |
The parameter specifies a Machine Type. |
KMS_KEY_NAME |
The parameter specifies a KMS Key name. |
WORKER_REGION |
The parameter specifies a Worker Region. |
WORKER_ZONE |
The parameter specifies a Worker Zone. |
BOOLEAN |
The parameter specifies a boolean input. |
ENUM |
The parameter specifies an enum input. |
NUMBER |
The parameter specifies a number input. |
KAFKA_TOPIC |
Deprecated. Please use KAFKA_READ_TOPIC instead. |
KAFKA_READ_TOPIC |
The parameter specifies the fully-qualified name of an Apache Kafka topic. This can be either a Google Managed Kafka topic or a non-managed Kafka topic. |
KAFKA_WRITE_TOPIC |
The parameter specifies the fully-qualified name of an Apache Kafka topic. This can be an existing Google Managed Kafka topic, the name for a new Google Managed Kafka topic, or an existing non-managed Kafka topic. |
PipelineDescription
A descriptive representation of submitted pipeline as well as the executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow provided metrics.
Fields | |
---|---|
original_pipeline_transform[] |
Description of each transform in the pipeline and collections between them. |
execution_pipeline_stage[] |
Description of each stage of execution of the pipeline. |
display_data[] |
Pipeline level display data. |
step_names_hash |
A hash value of the submitted pipeline's portable graph step names, if it exists. |
ProgressTimeseries
Information about the progress of some component of job execution.
Fields | |
---|---|
current_progress |
The current progress of the component, in the range [0,1]. |
data_points[] |
History of progress for the component. Points are sorted by time. |
Point
A point in the timeseries.
Fields | |
---|---|
time |
The timestamp of the point. |
value |
The value of the point. |
PubSubIODetails
Metadata for a Pub/Sub connector used by the job.
Fields | |
---|---|
topic |
Topic accessed in the connection. |
subscription |
Subscription used in the connection. |
PubsubSnapshotMetadata
Represents a Pubsub snapshot.
Fields | |
---|---|
topic_name |
The name of the Pubsub topic. |
snapshot_name |
The name of the Pubsub snapshot. |
expire_time |
The expire time of the Pubsub snapshot. |
RuntimeEnvironment
The environment values to set at runtime.
Fields | |
---|---|
num_workers |
Optional. The initial number of Google Compute Engine instances for the job. The default value is 11. |
max_workers |
Optional. The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000. The default value is 1. |
zone |
Optional. The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence. |
service_account_email |
Optional. The email address of the service account to run the job as. |
temp_location |
Required. The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://. |
bypass_temp_dir_validation |
Optional. Whether to bypass the safety checks for the job's temporary directory. Use with caution. |
machine_type |
Optional. The machine type to use for the job. Defaults to the value from the template if not specified. |
additional_experiments[] |
Optional. Additional experiment flags for the job, specified with the --experiments option. |
network |
Optional. Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default". |
subnetwork |
Optional. Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL. |
additional_user_labels |
Optional. Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }. |
kms_key_name |
Optional. Name for the Cloud KMS key for the job. Key format is: projects/<project>/locations/<location>/keyRings/<keyring>/cryptoKeys/<key> |
ip_configuration |
Optional. Configuration for VM IPs. |
worker_region |
Required. The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region. |
worker_zone |
Optional. The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence. |
enable_streaming_engine |
Optional. Whether to enable Streaming Engine for the job. |
disk_size_gb |
Optional. The disk size, in gigabytes, to use on each remote Compute Engine worker instance. |
streaming_mode |
Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode. |
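The following is a minimal, illustrative sketch of populating a RuntimeEnvironment when launching a classic template, assuming the google-cloud-dataflow-client Python package (google.cloud.dataflow_v1beta3) and its GAPIC types are available as shown; the project, bucket, region, and template path are placeholders.

```python
from google.cloud import dataflow_v1beta3

# Assumed: google-cloud-dataflow-client exposes the v1beta3 GAPIC surface.
# All identifiers below are placeholders for illustration only.
client = dataflow_v1beta3.TemplatesServiceClient()

environment = dataflow_v1beta3.RuntimeEnvironment(
    temp_location="gs://my-bucket/temp",   # required; must be a gs:// URL
    max_workers=10,
    machine_type="n2-standard-2",
    additional_user_labels={"team": "data-eng"},
    worker_region="us-central1",
    # WorkerIPAddressConfiguration is the enum documented later in this page.
    ip_configuration=dataflow_v1beta3.WorkerIPAddressConfiguration.WORKER_IP_PRIVATE,
)

request = dataflow_v1beta3.LaunchTemplateRequest(
    project_id="my-project",
    location="us-central1",
    gcs_path="gs://dataflow-templates/latest/Word_Count",
    launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
        job_name="wordcount-from-template",
        parameters={"inputFile": "gs://my-bucket/input.txt",
                    "output": "gs://my-bucket/output"},
        environment=environment,
    ),
)
response = client.launch_template(request=request)
print(response.job.id, response.job.current_state)
```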
RuntimeMetadata
RuntimeMetadata describing a runtime environment.
Fields | |
---|---|
sdk_info |
SDK Info for the template. |
parameters[] |
The parameters for the template. |
RuntimeUpdatableParams
Additional job parameters that can only be updated during runtime using the projects.jobs.update method. These fields have no effect when specified during job creation.
Fields | |
---|---|
max_num_workers |
The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs. |
min_num_workers |
The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs. |
worker_utilization_hint |
Target worker utilization, compared against the aggregate utilization of the worker pool by the autoscaler, to determine upscaling and downscaling when absent other constraints such as backlog. For more information, see Update an existing pipeline. |
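As an illustration, the sketch below adjusts a running Streaming Engine job's autoscaling range through projects.jobs.update, assuming the google-cloud-dataflow-client package; the update_mask paths are an assumption inferred from this reference, and the project, location, and job ID are placeholders.

```python
from google.cloud import dataflow_v1beta3
from google.protobuf import field_mask_pb2

# Sketch only: update the autoscaling range of a running Streaming Engine job.
# The mask paths below are assumptions based on the field names in this page.
client = dataflow_v1beta3.JobsV1Beta3Client()

job = dataflow_v1beta3.Job(
    runtime_updatable_params=dataflow_v1beta3.RuntimeUpdatableParams(
        min_num_workers=2,
        max_num_workers=50,
    ),
)

request = dataflow_v1beta3.UpdateJobRequest(
    project_id="my-project",
    location="us-central1",
    job_id="2024-01-01_00_00_00-1234567890123456789",
    job=job,
    update_mask=field_mask_pb2.FieldMask(
        paths=[
            "runtime_updatable_params.min_num_workers",
            "runtime_updatable_params.max_num_workers",
        ]
    ),
)
updated = client.update_job(request=request)
print(updated.runtime_updatable_params)
```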
SDKInfo
SDK Information.
Fields | |
---|---|
language |
Required. The SDK Language. |
version |
Optional. The SDK version. |
Language
SDK Language.
Enums | |
---|---|
UNKNOWN |
UNKNOWN Language. |
JAVA |
Java. |
PYTHON |
Python. |
GO |
Go. |
SdkBug
A bug found in the Dataflow SDK.
Fields | |
---|---|
type |
Output only. Describes the impact of this SDK bug. |
severity |
Output only. How severe the SDK bug is. |
uri |
Output only. Link to more information on the bug. |
Severity
Indicates the severity of the bug. Other severities may be added to this list in the future.
Enums | |
---|---|
SEVERITY_UNSPECIFIED |
A bug of unknown severity. |
NOTICE |
A minor bug that may reduce reliability or performance for some jobs. Impact will be minimal or non-existent for most jobs. |
WARNING |
A bug that has some likelihood of causing performance degradation, data loss, or job failures. |
SEVERE |
A bug with extremely significant impact. Jobs may fail erroneously, performance may be severely degraded, and data loss may be very likely. |
Type
Nature of the issue, ordered from least severe to most. Other bug types may be added to this list in the future.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Unknown issue with this SDK. |
GENERAL |
Catch-all for SDK bugs that don't fit in the below categories. |
PERFORMANCE |
Using this version of the SDK may result in degraded performance. |
DATALOSS |
Using this version of the SDK may cause data loss. |
SdkHarnessContainerImage
Defines an SDK harness container for executing Dataflow pipelines.
Fields | |
---|---|
container_image |
A docker container image that resides in Google Container Registry. |
use_single_core_per_container |
If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed. |
environment_id |
Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness. |
capabilities[] |
The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto |
SdkVersion
The version of the SDK used to run the job.
Fields | |
---|---|
version |
The version of the SDK used to run the job. |
version_display_name |
A readable string describing the version of the SDK. |
sdk_support_status |
The support status for this SDK version. |
bugs[] |
Output only. Known bugs found in this SDK version. |
SdkSupportStatus
The support status of the SDK used to run the job.
Enums | |
---|---|
UNKNOWN |
Cloud Dataflow is unaware of this version. |
SUPPORTED |
This is a known version of an SDK, and is supported. |
STALE |
A newer version of the SDK family exists, and an update is recommended. |
DEPRECATED |
This version of the SDK is deprecated and will eventually be unsupported. |
UNSUPPORTED |
Support for this SDK version has ended and it should no longer be used. |
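A brief sketch of inspecting a job's SdkVersion, assuming the google-cloud-dataflow-client package and assuming SdkVersion is surfaced via the job's JobMetadata (job.job_metadata.sdk_version); the project, location, and job ID are placeholders.

```python
from google.cloud import dataflow_v1beta3

# Sketch: check whether a job's SDK version is still supported and list any
# known bugs attached to it. Identifiers are placeholders.
client = dataflow_v1beta3.JobsV1Beta3Client()
job = client.get_job(
    request=dataflow_v1beta3.GetJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890123456789",
    )
)
sdk = job.job_metadata.sdk_version  # assumed location of SdkVersion on the Job
unsupported = (
    dataflow_v1beta3.SdkVersion.SdkSupportStatus.DEPRECATED,
    dataflow_v1beta3.SdkVersion.SdkSupportStatus.UNSUPPORTED,
)
if sdk.sdk_support_status in unsupported:
    print(f"Consider upgrading: {sdk.version_display_name}")
for bug in sdk.bugs:  # Output only. Known bugs for this SDK version.
    print(bug.severity, bug.uri)
```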
ServiceResources
Resources used by the Dataflow Service to run the job.
Fields | |
---|---|
zones[] |
Output only. List of Cloud Zones being used by the Dataflow Service for this job. Example: us-central1-c |
ShuffleMode
Specifies the shuffle mode used by a google.dataflow.v1beta3.Job
, which determines how data is shuffled during processing. For more details, see: https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#dataflow-shuffle
Enums | |
---|---|
SHUFFLE_MODE_UNSPECIFIED |
Shuffle mode information is not available. |
VM_BASED |
Shuffle is done on the worker VMs. |
SERVICE_BASED |
Shuffle is done on the service side. |
Snapshot
Represents a snapshot of a job.
Fields | |
---|---|
id |
The unique ID of this snapshot. |
project_id |
The project this snapshot belongs to. |
source_job_id |
The job this snapshot was created from. |
creation_time |
The time this snapshot was created. |
ttl |
The time after which this snapshot will be automatically deleted. |
state |
State of the snapshot. |
pubsub_metadata[] |
Pub/Sub snapshot metadata. |
description |
User specified description of the snapshot. May be empty. |
disk_size_bytes |
The disk byte size of the snapshot. Only available for snapshots in READY state. |
region |
Cloud region where this snapshot lives, e.g., "us-central1". |
SnapshotJobRequest
Request to create a snapshot of a job.
Fields | |
---|---|
project_id |
The project which owns the job to be snapshotted. |
job_id |
The job to be snapshotted. |
ttl |
TTL for the snapshot. |
location |
The location that contains this job. |
snapshot_sources |
If true, perform snapshots for sources which support this. |
description |
User specified description of the snapshot. May be empty. |
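A minimal sketch of creating a snapshot with SnapshotJobRequest, assuming the google-cloud-dataflow-client package exposes SnapshotJob on the JobsV1Beta3 client as snapshot_job; the project, location, and job ID are placeholders.

```python
from google.cloud import dataflow_v1beta3
from google.protobuf import duration_pb2

# Sketch: request a snapshot of a running streaming job. Identifiers are
# placeholders; the response is the Snapshot message documented above.
client = dataflow_v1beta3.JobsV1Beta3Client()
snapshot = client.snapshot_job(
    request=dataflow_v1beta3.SnapshotJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890123456789",
        ttl=duration_pb2.Duration(seconds=7 * 24 * 3600),  # keep for 7 days
        snapshot_sources=True,   # also snapshot Pub/Sub sources that support it
        description="Pre-upgrade snapshot",
    )
)
print(snapshot.id, snapshot.state)  # PENDING/RUNNING until the snapshot is READY
```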
SnapshotState
Snapshot state.
Enums | |
---|---|
UNKNOWN_SNAPSHOT_STATE |
Unknown state. |
PENDING |
Snapshot intent to create has been persisted, snapshotting of state has not yet started. |
RUNNING |
Snapshotting is being performed. |
READY |
Snapshot has been created and is ready to be used. |
FAILED |
Snapshot failed to be created. |
DELETED |
Snapshot has been deleted. |
SpannerIODetails
Metadata for a Spanner connector used by the job.
Fields | |
---|---|
project_id |
ProjectId accessed in the connection. |
instance_id |
InstanceId accessed in the connection. |
database_id |
DatabaseId accessed in the connection. |
StageExecutionDetails
Information about the workers and work items within a stage.
Fields | |
---|---|
workers[] |
Workers that have done work on the stage. |
next_page_token |
If present, this response does not contain all requested tasks. To obtain the next page of results, repeat the request with page_token set to this value. |
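A short sketch of paging through a stage's workers and work items, assuming the google-cloud-dataflow-client package, whose generated pager follows next_page_token automatically; the project, location, job ID, and stage ID are placeholders.

```python
from google.cloud import dataflow_v1beta3

# Sketch: list workers and work items for one execution stage. The pager
# iterates WorkerDetails across all pages, requesting new pages via
# next_page_token under the hood. Identifiers are placeholders.
client = dataflow_v1beta3.MetricsV1Beta3Client()
pager = client.get_stage_execution_details(
    request=dataflow_v1beta3.GetStageExecutionDetailsRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890123456789",
        stage_id="F42",   # placeholder ExecutionStageSummary ID
        page_size=50,
    )
)
for worker in pager:
    for item in worker.work_items:
        print(worker.worker_name, item.task_id, item.state,
              item.progress.current_progress)
```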
StageSummary
Information about a particular execution stage of a job.
Fields | |
---|---|
stage_id |
ID of this stage |
state |
State of this stage. |
start_time |
Start time of this stage. |
end_time |
End time of this stage. If the work item is completed, this is the actual end time of the stage. Otherwise, it is the predicted end time. |
progress |
Progress for this stage. Only applicable to Batch jobs. |
metrics[] |
Metrics for this stage. |
straggler_summary |
Straggler summary for this stage. |
Step
Defines a particular step within a Cloud Dataflow job.
A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job.
Note: The properties of this object are not stable and might change.
Here's an example of a sequence of steps which together implement a Map-Reduce job:
Read a collection of data from some source, parsing the collection's elements.
Validate the elements.
Apply a user-defined function to map each element to some value and extract an element-specific key value.
Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.
Write the elements out to some data sink.
Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce.
Fields | |
---|---|
kind |
The kind of step in the Cloud Dataflow job. |
name |
The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job. |
properties |
Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL. |
Straggler
Information for a straggler.
Fields | |
---|---|
Union field straggler_info. Information useful for straggler identification and debugging. straggler_info can be only one of the following: |
|
batch_straggler |
Batch straggler identification and debugging information. |
streaming_straggler |
Streaming straggler identification and debugging information. |
StragglerInfo
Information useful for straggler identification and debugging.
Fields | |
---|---|
start_time |
The time when the work item attempt became a straggler. |
causes |
The straggler causes, keyed by the string representation of the StragglerCause enum, with specialized debugging information for each straggler cause. |
StragglerDebuggingInfo
Information useful for debugging a straggler. Each type will provide specialized debugging information relevant for a particular cause. The StragglerDebuggingInfo will be 1:1 mapping to the StragglerCause enum.
Fields | |
---|---|
Union field straggler_debugging_info. Debugging information for this straggler cause. straggler_debugging_info can be only one of the following: |
|
hot_key |
Hot key debugging details. |
StragglerSummary
Summarized straggler identification details.
Fields | |
---|---|
total_straggler_count |
The total count of stragglers. |
straggler_cause_count |
Aggregated counts of straggler causes, keyed by the string representation of the StragglerCause enum. |
recent_stragglers[] |
The most recent stragglers. |
StreamingMode
Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages written to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode.
Enums | |
---|---|
STREAMING_MODE_UNSPECIFIED |
Run in the default mode. |
STREAMING_MODE_EXACTLY_ONCE |
In this mode, message deduplication is performed against persistent state to make sure each message is processed and committed to storage exactly once. |
STREAMING_MODE_AT_LEAST_ONCE |
Message deduplication is not performed. Messages might be processed multiple times, and the results are applied multiple times. Note: Setting this value also enables Streaming Engine and Streaming Engine resource-based billing. |
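As a sketch, at-least-once processing can be requested at launch time through RuntimeEnvironment.streaming_mode, assuming the google-cloud-dataflow-client package exposes the StreamingMode enum at the package level as shown; the bucket path is a placeholder.

```python
from google.cloud import dataflow_v1beta3

# Sketch: opt a streaming launch into at-least-once processing. Only do this
# for pipelines that tolerate duplicates (e.g. idempotent or deduplicating sinks).
env = dataflow_v1beta3.RuntimeEnvironment(
    temp_location="gs://my-bucket/temp",
    enable_streaming_engine=True,  # at-least-once mode runs on Streaming Engine
    streaming_mode=dataflow_v1beta3.StreamingMode.STREAMING_MODE_AT_LEAST_ONCE,
)
print(env.streaming_mode.name)
```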
StreamingStragglerInfo
Information useful for streaming straggler identification and debugging.
Fields | |
---|---|
start_time |
Start time of this straggler. |
end_time |
End time of this straggler. |
worker_name |
Name of the worker where the straggler was detected. |
data_watermark_lag |
The event-time watermark lag at the time of the straggler detection. |
system_watermark_lag |
The system watermark lag at the time of the straggler detection. |
StructuredMessage
A rich message format, including a human readable string, a key for identifying the message, and structured data associated with the message for programmatic consumption.
Fields | |
---|---|
message_text |
Human-readable version of message. |
message_key |
Identifier for this message type. Used by external systems to internationalize or personalize the message. |
parameters[] |
The structured data associated with this message. |
Parameter
Structured data associated with this message.
Fields | |
---|---|
key |
Key or name for this parameter. |
value |
Value for this parameter. |
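A small, self-contained sketch of rendering a StructuredMessage (message_text plus key/value parameters) as a single log line; the helper function and the sample data are illustrative only, not part of any official client library.

```python
from types import SimpleNamespace


def format_structured_message(msg) -> str:
    """Render a StructuredMessage-shaped object into one log line.

    Works with any object exposing message_key, message_text, and parameters
    (each with key/value), per the fields documented above.
    """
    params = ", ".join(f"{p.key}={p.value}" for p in msg.parameters)
    suffix = f" ({params})" if params else ""
    return f"[{msg.message_key}] {msg.message_text}{suffix}"


# Stand-in data for demonstration only.
demo = SimpleNamespace(
    message_key="autoscaling.raised_target",
    message_text="Raised the number of workers to 25.",
    parameters=[SimpleNamespace(key="target", value="25")],
)
print(format_structured_message(demo))
```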
TaskRunnerSettings
Taskrunner configuration settings.
Fields | |
---|---|
task_user |
The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root". |
task_group |
The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel". |
oauth_scopes[] |
The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API. |
base_url |
The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/". |
dataflow_api_version |
The API version of the endpoint, e.g. "v1b3". |
parallel_worker_settings |
The settings to pass to the parallel worker harness. |
base_task_dir |
The location on the worker for task-specific subdirectories. |
continue_on_exception |
Whether to continue taskrunner if an exception is hit. |
log_to_serialconsole |
Whether to send taskrunner log info to the Google Compute Engine VM serial console. |
alsologtostderr |
Whether to also send taskrunner log info to stderr. |
log_upload_location |
Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object} |
log_dir |
The directory on the VM to store logs. |
temp_storage_prefix |
The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object} |
harness_command |
The command to launch the worker harness. |
workflow_file_name |
The file to store the workflow in. |
commandlines_file_name |
The file to store preprocessing commands in. |
vm_id |
The ID string of the VM. |
language_hint |
The suggested backend language. |
streaming_worker_main_class |
The streaming worker main class name. |
TeardownPolicy
Specifies what happens to a resource when a Cloud Dataflow google.dataflow.v1beta3.Job
has completed.
Enums | |
---|---|
TEARDOWN_POLICY_UNKNOWN |
The teardown policy isn't specified, or is unknown. |
TEARDOWN_ALWAYS |
Always teardown the resource. |
TEARDOWN_ON_SUCCESS |
Teardown the resource on success. This is useful for debugging failures. |
TEARDOWN_NEVER |
Never teardown the resource. This is useful for debugging and development. |
TemplateMetadata
Metadata describing a template.
Fields | |
---|---|
name |
Required. The name of the template. |
description |
Optional. A description of the template. |
parameters[] |
The parameters for the template. |
streaming |
Optional. Indicates if the template is streaming or not. |
supports_at_least_once |
Optional. Indicates if the streaming template supports at least once mode. |
supports_exactly_once |
Optional. Indicates if the streaming template supports exactly once mode. |
default_streaming_mode |
Optional. Indicates the default streaming mode for a streaming template. Only valid if both supports_at_least_once and supports_exactly_once are true. Possible values: UNSPECIFIED, EXACTLY_ONCE and AT_LEAST_ONCE |
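A sketch of TemplateMetadata as a JSON-style Python dict using the field names above; the template ("Streaming CSV to BigQuery") and its single parameter are hypothetical, and the exact metadata-file layout consumed by a template build is an assumption for illustration.

```python
import json

# Illustrative TemplateMetadata shape; field names follow this reference.
template_metadata = {
    "name": "Streaming CSV to BigQuery",
    "description": "Reads CSV records and writes them to a BigQuery table.",
    "streaming": True,
    "supports_at_least_once": True,
    "supports_exactly_once": True,
    "default_streaming_mode": "EXACTLY_ONCE",
    "parameters": [
        {
            "name": "outputTable",
            "label": "Output BigQuery table",
            "help_text": "Table spec in the form project:dataset.table.",
            "is_optional": False,
            "param_type": "BIGQUERY_TABLE",
        }
    ],
}
print(json.dumps(template_metadata, indent=2))
```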
TransformSummary
Description of the type, names/ids, and input/outputs for a transform.
Fields | |
---|---|
kind |
Type of transform. |
id |
SDK generated id of this transform instance. |
name |
User provided name for this transform instance. |
display_data[] |
Transform-specific display data. |
output_collection_name[] |
User names for all collection outputs to this transform. |
input_collection_name[] |
User names for all collection inputs to this transform. |
UpdateJobRequest
Request to update a Cloud Dataflow job.
Fields | |
---|---|
project_id |
The ID of the Cloud Platform project that the job belongs to. |
job_id |
The job ID. |
job |
The updated job. Only the job state is updatable; other fields will be ignored. |
location |
The regional endpoint that contains this job. |
update_mask |
The list of fields to update relative to Job. If empty, only RequestedJobState will be considered for update. If the FieldMask is not empty and RequestedJobState is none/empty, the fields specified in the update mask will be the only ones considered for update. If both RequestedJobState and update_mask are specified, an error will be returned, as we cannot update both state and mask. |
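A minimal sketch of cancelling a job through this request by setting only the requested state, assuming the google-cloud-dataflow-client package and that the Job message exposes requested_state as shown; the project, location, and job ID are placeholders.

```python
from google.cloud import dataflow_v1beta3

# Sketch: per the description above, only the job state is updatable here, so
# the request body carries just a Job with requested_state set.
client = dataflow_v1beta3.JobsV1Beta3Client()
cancelled = client.update_job(
    request=dataflow_v1beta3.UpdateJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890123456789",
        job=dataflow_v1beta3.Job(
            requested_state=dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
        ),
    )
)
print(cancelled.current_state)
```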
WorkItemDetails
Information about an individual work item execution.
Fields | |
---|---|
task_id |
Name of this work item. |
attempt_id |
Attempt ID of this work item |
start_time |
Start time of this work item attempt. |
end_time |
End time of this work item attempt. If the work item is completed, this is the actual end time of the work item. Otherwise, it is the predicted end time. |
state |
State of this work item. |
progress |
Progress of this work item. |
metrics[] |
Metrics for this work item. |
straggler_info |
Information about straggler detections for this work item. |
WorkerDetails
Information about a worker
Fields | |
---|---|
worker_name |
Name of this worker |
work_items[] |
Work items processed by this worker, sorted by time. |
WorkerIPAddressConfiguration
Specifies how to allocate IP addresses to worker machines. You can also use pipeline options to specify whether Dataflow workers use external IP addresses.
Enums | |
---|---|
WORKER_IP_UNSPECIFIED |
The configuration is unknown, or unspecified. |
WORKER_IP_PUBLIC |
Workers should have public IP addresses. |
WORKER_IP_PRIVATE |
Workers should have private IP addresses. |
WorkerPool
Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.
Fields | |
---|---|
kind |
The kind of the worker pool; currently only harness and shuffle are supported. |
num_workers |
Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default. |
packages[] |
Packages to be installed on workers. |
default_package_set |
The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language. |
machine_type |
Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default. |
teardown_policy |
Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default. |
disk_size_gb |
Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default. |
disk_type |
Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default. |
disk_source_image |
Fully qualified source image for disks. |
zone |
Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default. |
taskrunner_settings |
Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field. |
on_host_maintenance |
The action to take on host maintenance, as defined by the Google Compute Engine API. |
data_disks[] |
Data disks that are used by a VM in this workflow. |
metadata |
Metadata to set on the Google Compute Engine VMs. |
autoscaling_settings |
Settings for autoscaling of this WorkerPool. |
pool_args |
Extra arguments for this worker pool. |
network |
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default". |
subnetwork |
Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK". |
worker_harness_container_image |
Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead. |
num_threads_per_worker |
The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming). |
ip_configuration |
Configuration for VM IPs. |
sdk_harness_container_images[] |
Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries. |
WorkerSettings
Provides data to pass through to the worker harness.
Fields | |
---|---|
base_url |
The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/". |
reporting_enabled |
Whether to send work progress updates to the service. |
service_path |
The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects". |
shuffle_service_path |
The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1". |
worker_id |
The ID of the worker running this pipeline. |
temp_storage_prefix |
The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object} |