Package types (0.8.13)

API documentation for the dataflow_v1beta3.types package.

Classes

AutoscalingAlgorithm

Specifies the algorithm used to determine the number of worker processes to run at any given point in time, based on the amount of data left to process, the number of workers, and how quickly existing workers are processing data.

AutoscalingEvent

A structured message reporting an autoscaling decision made by the Dataflow service.

AutoscalingSettings

Settings for WorkerPool autoscaling.

BigQueryIODetails

Metadata for a BigQuery connector used by the job.

BigTableIODetails

Metadata for a Cloud Bigtable connector used by the job.

CheckActiveJobsRequest

Request to check whether active jobs exist for a project.

CheckActiveJobsResponse

Response for CheckActiveJobsRequest.

ComputationTopology

All configuration data for a particular Computation.

ContainerSpec

Container specification.

CreateJobFromTemplateRequest

A request to create a Cloud Dataflow job from a template.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
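For example, a minimal sketch of building this request and creating a job with TemplatesServiceClient; the project, bucket, and parameter values are hypothetical placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.TemplatesServiceClient()
    request = dataflow_v1beta3.CreateJobFromTemplateRequest(
        project_id="my-project",        # placeholder project ID
        location="us-central1",
        job_name="job-from-template",
        gcs_path="gs://my-bucket/templates/my-template",   # one member of the template oneof
        parameters={"input": "gs://my-bucket/input.txt"},  # template-specific parameters
    )
    job = client.create_job_from_template(request=request)  # returns a Job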

CreateJobRequest

Request to create a Cloud Dataflow job.

CustomSourceLocation

Identifies the location of a custom source.

DataDiskAssignment

Data disk assignment for a given VM instance.

DatastoreIODetails

Metadata for a Datastore connector used by the job.

DebugOptions

Describes any options that have an effect on the debugging of pipelines.

DefaultPackageSet

The default set of packages to be staged on a pool of workers.

DeleteSnapshotRequest

Request to delete a snapshot.

DeleteSnapshotResponse

Response from deleting a snapshot.

Disk

Describes the data disk used by a workflow job.

DisplayData

Data provided with a pipeline or transform to provide descriptive info.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
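In proto-plus, this means setting one member clears the others; a minimal sketch with DisplayData's Value oneof::

    from google.cloud import dataflow_v1beta3

    data = dataflow_v1beta3.DisplayData(key="runner", str_value="DataflowRunner")
    data.int64_value = 42    # set a different member of the "Value" oneof
    print(data.str_value)    # prints "" -- str_value was cleared automatically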

DynamicTemplateLaunchParams

Parameters to pass when launching a dynamic template.

Environment

Describes the environment in which a Dataflow Job runs.

ExecutionStageState

A message describing the state of a particular execution stage.

ExecutionStageSummary

Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning.

ExecutionState

The state of some component of job execution.

FailedLocation

Indicates which regional endpoint failed to respond to a request for data.

FileIODetails

Metadata for a File connector used by the job.

FlexResourceSchedulingGoal

Specifies the resource to optimize for in Flexible Resource Scheduling.

FlexTemplateRuntimeEnvironment

The environment values to be set at runtime for a Flex Template.

GetJobExecutionDetailsRequest

Request to get job execution details.

GetJobMetricsRequest

Request to get job metrics.

GetJobRequest

Request to get the state of a Cloud Dataflow job.

GetSnapshotRequest

Request to get information about a snapshot.

GetStageExecutionDetailsRequest

Request to get information about a particular execution stage of a job. Currently only tracked for Batch jobs.

GetTemplateRequest

A request to retrieve a Cloud Dataflow job template.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

GetTemplateResponse

The response to a GetTemplate request.

InvalidTemplateParameters

Used in the error_details field of a google.rpc.Status message, this indicates problems with the template parameters.

Job

Defines a job to be run by the Cloud Dataflow service.

JobExecutionDetails

Information about the execution of a job.

JobExecutionInfo

Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job.

JobExecutionStageInfo

Contains information about how a particular [google.dataflow.v1beta3.Step] will be executed.

JobMessage

A particular message pertaining to a Dataflow job.

JobMessageImportance

Indicates the importance of the message.

JobMetadata

Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view.

JobMetrics

JobMetrics contains a collection of metrics describing the detailed progress of a Dataflow job. Metrics correspond to user-defined and system-defined metrics in the job.

This resource captures only the most recent values of each metric; time-series data can be queried for them (under the same metric names) from Cloud Monitoring.
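A minimal sketch of retrieving them with MetricsV1Beta3Client; the project and job IDs are placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.MetricsV1Beta3Client()
    request = dataflow_v1beta3.GetJobMetricsRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
        job_id="my-job-id",        # placeholder
    )
    metrics = client.get_job_metrics(request=request)  # returns JobMetrics
    for update in metrics.metrics:                     # each entry is a MetricUpdate
        print(update.name.name, update.scalar)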

JobState

Describes the overall state of a [google.dataflow.v1beta3.Job].

JobType

Specifies the processing model used by a [google.dataflow.v1beta3.Job], which determines the way the Job is managed by the Cloud Dataflow service (how workers are scheduled, how inputs are sharded, etc.).

JobView

Selector for how much information is returned in Job responses.

Values:

JOB_VIEW_UNKNOWN (0):
    The job view to return isn't specified, or
    is unknown. Responses will contain at least the
    JOB_VIEW_SUMMARY information, and may contain
    additional information.
JOB_VIEW_SUMMARY (1):
    Request summary information only:
    Project ID, Job ID, job name, job type, job
    status, start/end time, and Cloud SDK version
    details.
JOB_VIEW_ALL (2):
    Request all information available for this
    job.
JOB_VIEW_DESCRIPTION (3):
    Request summary info and limited job
    description data for steps, labels and
    environment.
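For instance, a minimal sketch of requesting the full view of a job through JobsV1Beta3Client; the IDs are placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()
    request = dataflow_v1beta3.GetJobRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
        job_id="my-job-id",        # placeholder
        view=dataflow_v1beta3.JobView.JOB_VIEW_ALL,
    )
    job = client.get_job(request=request)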

KeyRangeDataDiskAssignment

Data disk assignment information for a specific key-range of a sharded computation. Currently we only support UTF-8 character splits to simplify encoding into JSON.

KeyRangeLocation

Location information for a specific key-range of a sharded computation. Currently we only support UTF-8 character splits to simplify encoding into JSON.

KindType

Type of transform or stage operation.

LaunchFlexTemplateParameter

Parameters for launching a Flex Template.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

LaunchFlexTemplateRequest

A request to launch a Cloud Dataflow job from a FlexTemplate.
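A minimal sketch of such a launch via FlexTemplatesServiceClient; the IDs and paths are placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.FlexTemplatesServiceClient()
    request = dataflow_v1beta3.LaunchFlexTemplateRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
        launch_parameter=dataflow_v1beta3.LaunchFlexTemplateParameter(
            job_name="flex-template-job",
            # container_spec_gcs_path is one member of the template oneof
            container_spec_gcs_path="gs://my-bucket/templates/spec.json",
            parameters={"input": "gs://my-bucket/input.txt"},
        ),
    )
    response = client.launch_flex_template(request=request)  # LaunchFlexTemplateResponse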

LaunchFlexTemplateResponse

Response to the request to launch a job from a Flex Template.

LaunchTemplateParameters

Parameters to provide to the template being launched.

LaunchTemplateRequest

A request to launch a template.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
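For example, a minimal sketch using the gcs_path member of the template oneof; the IDs and paths are placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.TemplatesServiceClient()
    request = dataflow_v1beta3.LaunchTemplateRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
        gcs_path="gs://my-bucket/templates/my-template",  # one member of the oneof
        launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
            job_name="launched-from-template",
            parameters={"input": "gs://my-bucket/input.txt"},
        ),
        validate_only=True,  # validate the request without launching the job
    )
    response = client.launch_template(request=request)  # LaunchTemplateResponse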

LaunchTemplateResponse

Response to the request to launch a template.

ListJobMessagesRequest

Request to list job messages. Up to max_results messages will be returned in the specified time range, starting with the oldest messages first. If no time range is specified, the results will start with the oldest message.
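A minimal sketch of issuing this request through MessagesV1Beta3Client, whose pager follows page tokens automatically; the IDs are placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.MessagesV1Beta3Client()
    request = dataflow_v1beta3.ListJobMessagesRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
        job_id="my-job-id",        # placeholder
        minimum_importance=dataflow_v1beta3.JobMessageImportance.JOB_MESSAGE_WARNING,
    )
    for message in client.list_job_messages(request=request):  # yields JobMessage items
        print(message.time, message.message_text)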

ListJobMessagesResponse

Response to a request to list job messages.

ListJobsRequest

Request to list Cloud Dataflow jobs.

ListJobsResponse

Response to a request to list Cloud Dataflow jobs in a project. This might be a partial response, depending on the page size in the ListJobsRequest. However, if the project does not have any jobs, an instance of ListJobsResponse is not returned and the request's response body is empty {}.
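A minimal sketch of listing jobs with JobsV1Beta3Client, whose pager handles partial responses and page tokens; the IDs are placeholders::

    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()
    request = dataflow_v1beta3.ListJobsRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
    )
    for job in client.list_jobs(request=request):  # pager yields Job messages
        print(job.id, job.name, job.current_state)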

ListSnapshotsRequest

Request to list snapshots.

ListSnapshotsResponse

List of snapshots.

MetricStructuredName

Identifies a metric, by describing the source which generated the metric.

MetricUpdate

Describes the state of a metric.

MountedDataDisk

Describes a mounted data disk.

Package

The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool.

This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run.

ParameterMetadata

Metadata for a specific parameter.

ParameterType

ParameterType specifies what kind of input we need for this parameter.

PipelineDescription

A descriptive representation of a submitted pipeline as well as its executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow-provided metrics.

ProgressTimeseries

Information about the progress of some component of job execution.

PubSubIODetails

Metadata for a Pub/Sub connector used by the job.

PubsubLocation

Identifies a Pub/Sub location to use for transferring data into or out of a streaming Dataflow job.

PubsubSnapshotMetadata

Represents a Pub/Sub snapshot.

RuntimeEnvironment

The environment values to set at runtime.

RuntimeMetadata

RuntimeMetadata describing a runtime environment.

SDKInfo

SDK Information.

SdkHarnessContainerImage

Defines an SDK harness container for executing Dataflow pipelines.

SdkVersion

The version of the SDK used to run the job.

ShuffleMode

Specifies the shuffle mode used by a [google.dataflow.v1beta3.Job], which determines how data is shuffled during processing. More details at: https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#dataflow-shuffle

Snapshot

Represents a snapshot of a job.

SnapshotJobRequest

Request to create a snapshot of a job.
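A minimal sketch of snapshotting a streaming job via JobsV1Beta3Client; the IDs and TTL are placeholders::

    from google.protobuf import duration_pb2
    from google.cloud import dataflow_v1beta3

    client = dataflow_v1beta3.JobsV1Beta3Client()
    request = dataflow_v1beta3.SnapshotJobRequest(
        project_id="my-project",   # placeholder
        location="us-central1",
        job_id="my-job-id",        # placeholder
        ttl=duration_pb2.Duration(seconds=3600),  # retain the snapshot for one hour
    )
    snapshot = client.snapshot_job(request=request)  # returns a Snapshot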

SnapshotState

Snapshot state.

SpannerIODetails

Metadata for a Spanner connector used by the job.

StageExecutionDetails

Information about the workers and work items within a stage.

StageSummary

Information about a particular execution stage of a job.

StateFamilyConfig

State family configuration.

Step

Defines a particular step within a Cloud Dataflow job.

A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job.

Here's an example of a sequence of steps which together implement a Map-Reduce job:

  • Read a collection of data from some source, parsing the collection's elements.

  • Validate the elements.

  • Apply a user-defined function to map each element to some value and extract an element-specific key value.

  • Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.

  • Write the elements out to some data sink.

Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce.
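When a Job is fetched with view=JOB_VIEW_ALL (see the GetJobRequest sketch under JobView above), its steps are populated and can be inspected::

    # assuming `job` is a Job fetched with view=JOB_VIEW_ALL
    for step in job.steps:
        print(step.kind, step.name)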

StreamLocation

Describes a stream of data, either as input to be processed or as output of a streaming Dataflow job.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

StreamingApplianceSnapshotConfig

Streaming appliance snapshot configuration.

StreamingComputationRanges

Describes full or partial data disk assignment information of the computation ranges.

StreamingSideInputLocation

Identifies the location of a streaming side input.

StreamingStageLocation

Identifies the location of a streaming computation stage, for stage-to-stage communication.

StructuredMessage

A rich message format, including a human readable string, a key for identifying the message, and structured data associated with the message for programmatic consumption.

TaskRunnerSettings

Taskrunner configuration settings.

TeardownPolicy

Specifies what happens to a resource when a Cloud Dataflow [google.dataflow.v1beta3.Job] has completed.

TemplateMetadata

Metadata describing a template.

TopologyConfig

Global topology of the streaming Dataflow job, including all computations and their sharded locations.

TransformSummary

Description of the type, names/ids, and input/outputs for a transform.

UpdateJobRequest

Request to update a Cloud Dataflow job.

WorkItemDetails

Information about an individual work item execution.

WorkerDetails

Information about a worker.

WorkerIPAddressConfiguration

Specifies how IP addresses should be allocated to the worker machines.

WorkerPool

Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.

WorkerSettings

Provides data to pass through to the worker harness.