API documentation for dataproc_v1.types
module.
Classes
AcceleratorConfig
Specifies the type and number of accelerator cards attached to the instances of an instance group (see `GPUs on Compute Engine <https://cloud.google.com/compute/docs/gpus/>`__).
.. attribute:: accelerator_type_uri
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See `Compute Engine AcceleratorTypes <https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes>`__. Examples:
- ``https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80``
- ``projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80``
- ``nvidia-tesla-k80``
Auto Zone Exception: If you are using the Dataproc `Auto Zone Placement <https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement>`__ feature, you must use the short name of the accelerator type resource, for example, ``nvidia-tesla-k80``.
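For illustration, here is a minimal sketch of attaching an accelerator to a worker group using these types; the accelerator count and the construction style are assumptions introduced for the example, not the library's documented canonical usage.

.. code-block:: python

    from google.cloud import dataproc_v1

    # Short-name form, as required when using Auto Zone Placement.
    gpu = dataproc_v1.types.AcceleratorConfig(
        accelerator_type_uri="nvidia-tesla-k80",
        accelerator_count=1,
    )

    # Attach the accelerator to a (hypothetical) two-node worker group.
    worker_config = dataproc_v1.types.InstanceGroupConfig(
        num_instances=2,
        accelerators=[gpu],
    )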
Any
API documentation for dataproc_v1.types.Any
class.
AutoscalingConfig
Autoscaling Policy config associated with the cluster. .. attribute:: policy_uri
Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples:
- ``https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]``
- ``projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]``
Note that the policy must be in the same project and Dataproc region.
AutoscalingPolicy
Describes an autoscaling policy for Dataproc cluster autoscaler. .. attribute:: id
Required. The policy id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
Autoscaling algorithm for policy.
Optional. Describes how the autoscaler will operate for secondary workers.
BasicAutoscalingAlgorithm
Basic algorithm for autoscaling. .. attribute:: yarn_config
Required. YARN autoscaling configuration.
BasicYarnAutoscalingConfig
Basic autoscaling configurations for YARN. .. attribute:: graceful_decommission_timeout
Required. Timeout for YARN graceful decommissioning of Node Managers. Specifies the duration to wait for jobs to complete before forcefully removing workers (and potentially interrupting jobs). Only applicable to downscaling operations. Bounds: [0s, 1d].
Required. Fraction of average pending memory in the last cooldown period for which to remove workers. A scale-down factor of 1 will result in scaling down so that there is no available memory remaining after the update (more aggressive scaling). A scale-down factor of 0 disables removing workers, which can be beneficial for autoscaling a single job. Bounds: [0.0, 1.0].
Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2 worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0.
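To show how these autoscaling types fit together, a hedged sketch follows; ``scale_up_factor``, ``max_instances``, and the policy id are assumptions introduced for the example and are not described above.

.. code-block:: python

    from google.cloud import dataproc_v1

    yarn_config = dataproc_v1.types.BasicYarnAutoscalingConfig(
        # Wait up to 10 minutes for graceful YARN decommissioning (bounds: [0s, 1d]).
        graceful_decommission_timeout=dataproc_v1.types.Duration(seconds=600),
        scale_up_factor=0.5,
        # 1.0 scales down aggressively; 0.0 disables removing workers.
        scale_down_factor=1.0,
        # In a 20-worker cluster, require at least a 2-worker recommendation to scale down.
        scale_down_min_worker_fraction=0.1,
    )

    policy = dataproc_v1.types.AutoscalingPolicy(
        id="example-policy",  # hypothetical id: 3-50 chars, letters, numbers, _ and -
        basic_algorithm=dataproc_v1.types.BasicAutoscalingAlgorithm(yarn_config=yarn_config),
        worker_config=dataproc_v1.types.InstanceGroupAutoscalingPolicyConfig(
            min_instances=2, max_instances=10
        ),
    )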
CancelJobRequest
A request to cancel a job. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
Required. The job ID.
CancelOperationRequest
API documentation for dataproc_v1.types.CancelOperationRequest
class.
Cluster
Describes the identifying information, config, and status of a cluster of Compute Engine instances. .. attribute:: project_id
Required. The Google Cloud Platform project ID that the cluster belongs to.
Required. The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
Output only. Cluster status.
Output only. A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
ClusterConfig
The cluster config. .. attribute:: config_bucket
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see `Dataproc staging bucket <https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket>`__).
Optional. The Compute Engine config settings for the master instance in a cluster.
Optional. The Compute Engine config settings for additional worker instances in a cluster.
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node’s role metadata to run an executable on a master or worker node, as shown below using ``curl`` (you can also use ``wget``):

    ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
Optional. Lifecycle setting for the cluster.
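As a sketch of how these settings combine, the snippet below builds a cluster config with a staging bucket and one initialization action; the bucket, script URI, and machine type are hypothetical placeholders, and the construction style is an assumption rather than the library's documented usage.

.. code-block:: python

    from google.cloud import dataproc_v1

    cluster_config = dataproc_v1.types.ClusterConfig(
        config_bucket="my-staging-bucket",  # hypothetical; omit to let Dataproc choose one
        master_config=dataproc_v1.types.InstanceGroupConfig(
            num_instances=1, machine_type_uri="n1-standard-4"
        ),
        worker_config=dataproc_v1.types.InstanceGroupConfig(
            num_instances=2, machine_type_uri="n1-standard-4"
        ),
        initialization_actions=[
            dataproc_v1.types.NodeInitializationAction(
                executable_file="gs://my-bucket/scripts/setup.sh",  # hypothetical script URI
                execution_timeout=dataproc_v1.types.Duration(seconds=300),
            )
        ],
    )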
ClusterMetrics
Contains cluster daemon metrics, such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release. .. attribute:: hdfs_metrics
The HDFS metrics.
ClusterOperation
The cluster operation triggered by a workflow. .. attribute:: operation_id
Output only. The id of the cluster operation.
Output only. Indicates the operation is done.
ClusterOperationMetadata
Metadata describing the operation. .. attribute:: cluster_name
Output only. Name of the cluster for the operation.
Output only. Current operation status.
Output only. The operation type.
Output only. Labels associated with the operation
ClusterOperationStatus
The status of the operation. .. attribute:: state
Output only. A message containing the operation state.
Output only. A message containing any operation metadata details.
ClusterSelector
A selector that chooses target cluster for jobs based on metadata. .. attribute:: zone
Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
ClusterStatus
The status of a cluster and its instances. .. attribute:: state
Output only. The cluster’s state.
Output only. Time when this state was entered (see JSON representation of `Timestamp <https://developers.google.com/protocol-buffers/docs/proto3#json>`__).
CreateAutoscalingPolicyRequest
A request to create an autoscaling policy. .. attribute:: parent
Required. The “resource name” of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.autoscalingPolicies.create``, the resource name of the region has the following format: ``projects/{project_id}/regions/{region}``
- For ``projects.locations.autoscalingPolicies.create``, the resource name of the location has the following format: ``projects/{project_id}/locations/{location}``
CreateClusterRequest
A request to create a cluster. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
Required. The cluster to create.
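A hedged end-to-end sketch of this request follows; the project, region, and cluster names are placeholders, and the positional ``create_cluster(project_id, region, cluster)`` call shape varies between releases of the client library, so treat it as an assumption.

.. code-block:: python

    from google.cloud import dataproc_v1

    client = dataproc_v1.ClusterControllerClient()

    cluster = dataproc_v1.types.Cluster(
        project_id="my-project",         # placeholder
        cluster_name="example-cluster",  # placeholder
        config=dataproc_v1.types.ClusterConfig(
            worker_config=dataproc_v1.types.InstanceGroupConfig(num_instances=2)
        ),
    )

    # Returns a long-running operation; result() blocks until the cluster is ready.
    operation = client.create_cluster("my-project", "us-central1", cluster)
    print(operation.result().cluster_name)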
CreateWorkflowTemplateRequest
A request to create a workflow template. .. attribute:: parent
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates.create``, the resource name of the region has the following format: ``projects/{project_id}/regions/{region}``
- For ``projects.locations.workflowTemplates.create``, the resource name of the location has the following format: ``projects/{project_id}/locations/{location}``
DeleteAutoscalingPolicyRequest
A request to delete an autoscaling policy. Autoscaling policies in use by one or more clusters will not be deleted. .. attribute:: name
Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.autoscalingPolicies.delete``, the resource name of the policy has the following format: ``projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}``
- For ``projects.locations.autoscalingPolicies.delete``, the resource name of the policy has the following format: ``projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}``
DeleteClusterRequest
A request to delete a cluster. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
Required. The cluster name.
Optional. A unique id used to identify the request. If the server receives two [DeleteClusterRequest][google.cloud.dataproc.v1.DeleteClusterRequest] requests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned. It is recommended to always set this value to a `UUID <https://en.wikipedia.org/wiki/Universally_unique_identifier>`_. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
DeleteJobRequest
A request to delete a job. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
Required. The job ID.
DeleteOperationRequest
API documentation for dataproc_v1.types.DeleteOperationRequest
class.
DeleteWorkflowTemplateRequest
A request to delete a workflow template. Currently started workflows will remain running. .. attribute:: name
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates.delete``, the resource name of the template has the following format: ``projects/{project_id}/regions/{region}/workflowTemplates/{template_id}``
- For ``projects.locations.workflowTemplates.delete``, the resource name of the template has the following format: ``projects/{project_id}/locations/{location}/workflowTemplates/{template_id}``
DiagnoseClusterRequest
A request to collect cluster diagnostic information. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
Required. The cluster name.
DiagnoseClusterResults
The location of diagnostic output. .. attribute:: output_uri
Output only. The Cloud Storage URI of the diagnostic output. The output report is a plain text file with a summary of collected diagnostics.
DiskConfig
Specifies the config of disk options for a group of VM instances. .. attribute:: boot_disk_type
Optional. Type of the boot disk (default is “pd-standard”). Valid values: “pd-ssd” (Persistent Disk Solid State Drive) or “pd-standard” (Persistent Disk Hard Disk Drive).
Optional. Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and `HDFS <https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html>`__ data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
Duration
API documentation for dataproc_v1.types.Duration
class.
Empty
API documentation for dataproc_v1.types.Empty
class.
EncryptionConfig
Encryption settings for the cluster. .. attribute:: gce_pd_kms_key_name
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
FieldMask
API documentation for dataproc_v1.types.FieldMask
class.
GceClusterConfig
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. .. attribute:: zone_uri
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the “global” region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples:
- ``https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]``
- ``projects/[project_id]/zones/[zone]``
- ``us-central1-f``
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples:
- ``https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0``
- ``projects/[project_id]/regions/us-east1/subnetworks/sub0``
- ``sub0``
Optional. The `Dataproc service account <https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc>`__ (also see `VM Data Plane identity <https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity>`__) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the `Compute Engine default service account <https://cloud.google.com/compute/docs/access/service-accounts#default_service_account>`__ is used.
The Compute Engine tags to add to all instances (see `Tagging instances <https://cloud.google.com/compute/docs/label-or-tag-resources#tags>`__).
Optional. Reservation Affinity for consuming Zonal reservation.
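A brief sketch combining these settings; the subnetwork, service account, and tag values are placeholders, and the field-by-field construction is an assumption based on the attribute names above.

.. code-block:: python

    from google.cloud import dataproc_v1

    gce_config = dataproc_v1.types.GceClusterConfig(
        zone_uri="us-central1-f",  # short-name form
        subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/sub0",  # placeholder
        service_account="cluster-vm@my-project.iam.gserviceaccount.com",            # placeholder
        tags=["dataproc-node"],    # placeholder network tag
    )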
GetAutoscalingPolicyRequest
A request to fetch an autoscaling policy. .. attribute:: name
Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.autoscalingPolicies.get``, the resource name of the policy has the following format: ``projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}``
- For ``projects.locations.autoscalingPolicies.get``, the resource name of the policy has the following format: ``projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}``
GetClusterRequest
Request to get the resource representation for a cluster in a project. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
Required. The cluster name.
GetJobRequest
A request to get the resource representation for a job in a project. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
Required. The job ID.
GetOperationRequest
API documentation for dataproc_v1.types.GetOperationRequest
class.
GetWorkflowTemplateRequest
A request to fetch a workflow template. .. attribute:: name
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates.get``, the resource name of the template has the following format: ``projects/{project_id}/regions/{region}/workflowTemplates/{template_id}``
- For ``projects.locations.workflowTemplates.get``, the resource name of the template has the following format: ``projects/{project_id}/locations/{location}/workflowTemplates/{template_id}``
HadoopJob
A Dataproc job for running `Apache Hadoop MapReduce <https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html>`__ jobs on `Apache Hadoop YARN <https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html>`__.
.. attribute:: driver
Required. Indicates the location of the driver’s main class. Specify either the jar file that contains the main class or the main class name. To specify both, add the jar file to ``jar_file_uris``, and then specify the main class name in this property.
The name of the driver’s main class. The jar file containing the class must be in the default CLASSPATH or specified in ``jar_file_uris``.
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
Optional. The runtime log config for job execution.
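A short sketch of the main-class form of the driver specification; the class, jar URI, and arguments are hypothetical, and ``args`` is a field assumed from the API rather than described above.

.. code-block:: python

    from google.cloud import dataproc_v1

    hadoop_job = dataproc_v1.types.HadoopJob(
        # Name the main class and make the jar that contains it available on the CLASSPATH.
        main_class="org.example.WordCount",                   # hypothetical class
        jar_file_uris=["gs://my-bucket/jars/wordcount.jar"],  # hypothetical jar URI
        args=["gs://my-bucket/input/", "gs://my-bucket/output/"],
    )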
HiveJob
A Dataproc job for running `Apache Hive <https://hive.apache.org/>`__ queries on YARN.
.. attribute:: queries
Required. The sequence of Hive queries to execute, specified as either an HCFS file URI or a list of queries.
A list of queries.
Optional. Mapping of query variable names to values (equivalent to the Hive command: ``SET name="value";``).
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
InstanceGroupAutoscalingPolicyConfig
Configuration for the size bounds of an instance group, including its proportional size to other groups. .. attribute:: min_instances
Optional. Minimum number of instances for this group. Primary workers - Bounds: [2, max_instances]. Default: 2. Secondary workers - Bounds: [0, max_instances]. Default: 0.
Optional. Weight for the instance group, which is used to determine the fraction of total workers in the cluster from this instance group. For example, if primary workers have weight 2, and secondary workers have weight 1, the cluster will have approximately 2 primary workers for each secondary worker. The cluster may not reach the specified balance if constrained by min/max bounds or other autoscaling settings. For example, if ``max_instances`` for secondary workers is 0, then only primary workers will be added. The cluster can also be out of balance when created. If weight is not set on any instance group, the cluster will default to equal weight for all groups: the cluster will attempt to maintain an equal number of workers in each group within the configured size bounds for each group. If weight is set for one group only, the cluster will default to zero weight on the unset group. For example, if weight is set only on primary workers, the cluster will use primary workers only and no secondary workers.
InstanceGroupConfig
The config settings for Compute Engine resources in an instance group, such as a master or worker group. .. attribute:: num_instances
Optional. The number of VM instances in the instance group. For master instance groups, must be set to 1.
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples:
- ``https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]``
- ``projects/[project_id]/global/images/[image-id]``
- ``image-id``
Image family examples. Dataproc will use the most recent image from the family:
- ``https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]``
- ``projects/[project_id]/global/images/family/[custom-image-family-name]``
If the URI is unspecified, it will be inferred from ``SoftwareConfig.image_version`` or the system default.
Optional. Disk option config settings.
Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
Optional. Specifies the minimum cpu platform for the Instance Group. See `Dataproc -> Minimum CPU Platform <https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu>`__.
InstantiateInlineWorkflowTemplateRequest
A request to instantiate an inline workflow template. .. attribute:: parent
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates.instantiateInline``, the resource name of the region has the following format: ``projects/{project_id}/regions/{region}``
- For ``projects.locations.workflowTemplates.instantiateInline``, the resource name of the location has the following format: ``projects/{project_id}/locations/{location}``
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a `UUID <https://en.wikipedia.org/wiki/Universally_unique_identifier>`_. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
InstantiateWorkflowTemplateRequest
A request to instantiate a workflow template. .. attribute:: name
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates.instantiate``, the resource name of the template has the following format: ``projects/{project_id}/regions/{region}/workflowTemplates/{template_id}``
- For ``projects.locations.workflowTemplates.instantiate``, the resource name of the template has the following format: ``projects/{project_id}/locations/{location}/workflowTemplates/{template_id}``
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a `UUID <https://en.wikipedia.org/wiki/Universally_unique_identifier>`_. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
Job
A Dataproc job resource. .. attribute:: reference
Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
Required. The application/framework-specific portion of the job.
Optional. Job is a Spark job.
Optional. Job is a Hive job.
Optional. Job is a SparkR job.
Optional. Job is a Presto job.
Output only. The previous job status.
Output only. A URI pointing to the location of the stdout of the job’s driver program.
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to `RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>`__. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to `RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>`__. No more than 32 labels can be associated with a job.
Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
JobMetadata
Job Operation metadata. .. attribute:: job_id
Output only. The job id.
Output only. Operation type.
JobPlacement
Dataproc job config. .. attribute:: cluster_name
Required. The name of the cluster where the job will be submitted.
JobReference
Encapsulates the full scoping used to reference a job. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
JobScheduling
Job scheduling options. .. attribute:: max_failures_per_hour
Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window. Maximum value is 10.
JobStatus
Dataproc job status. .. attribute:: state
Output only. A state message specifying the overall job state.
Output only. The time when this state was entered.
KerberosConfig
Specifies Kerberos related configuration. .. attribute:: enable_kerberos
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
Required. The uri of the KMS key used to encrypt various sensitive files.
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
LifecycleConfig
Specifies the cluster auto-delete schedule configuration. .. attribute:: idle_delete_ttl
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of `Duration <https://developers.google.com/protocol-buffers/docs/proto3#json>`__).
Optional. The time when cluster will be auto-deleted (see JSON representation of `Timestamp <https://developers.google.com/protocol-buffers/docs/proto3#json>`__).
Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of `Timestamp <https://developers.google.com/protocol-buffers/docs/proto3#json>`__).
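Since ``idle_delete_ttl`` is a Duration, a minimal sketch of setting it looks like this; the two-hour value is an arbitrary placeholder within the 10-minute to 14-day bounds.

.. code-block:: python

    from google.cloud import dataproc_v1

    lifecycle = dataproc_v1.types.LifecycleConfig(
        # Auto-delete the cluster after two hours of idleness.
        idle_delete_ttl=dataproc_v1.types.Duration(seconds=2 * 60 * 60),
    )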
ListAutoscalingPoliciesRequest
A request to list autoscaling policies in a project. .. attribute:: parent
Required. The “resource name” of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.autoscalingPolicies.list``, the resource name of the region has the following format: ``projects/{project_id}/regions/{region}``
- For ``projects.locations.autoscalingPolicies.list``, the resource name of the location has the following format: ``projects/{project_id}/locations/{location}``
Optional. The page token, returned by a previous call, to request the next page of results.
ListAutoscalingPoliciesResponse
A response to a request to list autoscaling policies in a project. .. attribute:: policies
Output only. Autoscaling policies list.
ListClustersRequest
A request to list the clusters in a project. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax: field = value [AND [field = value]] … where field is one of ``status.state``, ``clusterName``, or ``labels.[KEY]``, and ``[KEY]`` is a label key. value can be ``*`` to match all values. ``status.state`` can be one of the following: ``ACTIVE``, ``INACTIVE``, ``CREATING``, ``RUNNING``, ``ERROR``, ``DELETING``, or ``UPDATING``. ``ACTIVE`` contains the ``CREATING``, ``UPDATING``, and ``RUNNING`` states. ``INACTIVE`` contains the ``DELETING`` and ``ERROR`` states. ``clusterName`` is the name of the cluster provided at creation time. Only the logical ``AND`` operator is supported; space-separated items are treated as having an implicit ``AND`` operator. Example filter: status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = *
Optional. The standard List page token.
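A hedged sketch of building such a request with the example filter from above; the project and region values are placeholders.

.. code-block:: python

    from google.cloud import dataproc_v1

    request = dataproc_v1.types.ListClustersRequest(
        project_id="my-project",  # placeholder
        region="us-central1",     # placeholder
        filter="status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging",
    )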
ListClustersResponse
The list of all clusters in a project. .. attribute:: clusters
Output only. The clusters in the project.
ListJobsRequest
A request to list jobs in a project. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
Optional. The number of results to return in each response.
Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster.
Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax: [field = value] AND [field [= value]] … where field is ``status.state`` or ``labels.[KEY]``, and ``[KEY]`` is a label key. value can be ``*`` to match all values. ``status.state`` can be either ``ACTIVE`` or ``NON_ACTIVE``. Only the logical ``AND`` operator is supported; space-separated items are treated as having an implicit ``AND`` operator. Example filter: status.state = ACTIVE AND labels.env = staging AND labels.starred = *
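Analogously, a sketch of a jobs listing request constrained to one cluster and the example filter; the project, region, and cluster names are placeholders.

.. code-block:: python

    from google.cloud import dataproc_v1

    request = dataproc_v1.types.ListJobsRequest(
        project_id="my-project",         # placeholder
        region="us-central1",            # placeholder
        cluster_name="example-cluster",  # only jobs submitted to this cluster
        filter="status.state = ACTIVE AND labels.env = staging",
    )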
ListJobsResponse
A list of jobs in a project. .. attribute:: jobs
Output only. Jobs list.
ListOperationsRequest
API documentation for dataproc_v1.types.ListOperationsRequest
class.
ListOperationsResponse
API documentation for dataproc_v1.types.ListOperationsResponse
class.
ListWorkflowTemplatesRequest
A request to list workflow templates in a project. .. attribute:: parent
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates.list``, the resource name of the region has the following format: ``projects/{project_id}/regions/{region}``
- For ``projects.locations.workflowTemplates.list``, the resource name of the location has the following format: ``projects/{project_id}/locations/{location}``
Optional. The page token, returned by a previous call, to request the next page of results.
ListWorkflowTemplatesResponse
A response to a request to list workflow templates in a project. .. attribute:: templates
Output only. WorkflowTemplates list.
LoggingConfig
The runtime logging config of the job. .. attribute:: driver_log_levels
The per-package log levels for the driver. This may include “root” package name to configure rootLogger. Examples: ‘com.google = FATAL’, ‘root = INFO’, ‘org.apache = DEBUG’
ManagedCluster
Cluster that is managed by the workflow. .. attribute:: cluster_name
Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long. Label values must be between 1 and 63 characters long. No more than 32 labels can be associated with a given cluster.
ManagedGroupConfig
Specifies the resources used to actively manage an instance group. .. attribute:: instance_template_name
Output only. The name of the Instance Template used for the Managed Instance Group.
NodeInitializationAction
Specifies an executable to run on a fully configured node and a timeout period for executable completion. .. attribute:: executable_file
Required. Cloud Storage URI of executable file.
Operation
API documentation for dataproc_v1.types.Operation
class.
OperationInfo
API documentation for dataproc_v1.types.OperationInfo
class.
OrderedJob
A job executed by the workflow. .. attribute:: step_id
Required. The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job ``goog-dataproc-workflow-step-id`` label, and in [prerequisiteStepIds][google.cloud.dataproc.v1.OrderedJob.prerequisite_step_ids] field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
Optional. Job is a SparkR job.
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long. Label values must be between 1 and 63 characters long. No more than 32 labels can be associated with a given job.
Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
ParameterValidation
Configuration for parameter validation. .. attribute:: validation_type
Required. The type of validation to be performed.
Validation based on a list of allowed values.
PigJob
A Dataproc job for running `Apache Pig <https://pig.apache.org/>`__ queries on YARN.
.. attribute:: queries
Required. The sequence of Pig queries to execute, specified as an HCFS file URI or a list of queries.
A list of queries.
Optional. Mapping of query variable names to values (equivalent to the Pig command: ``name=[value]``).
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
PrestoJob
A Dataproc job for running `Presto <https://prestosql.io/>`__ queries. IMPORTANT: The `Dataproc Presto Optional Component <https://cloud.google.com/dataproc/docs/concepts/components/presto>`__ must be enabled when the cluster is created to submit a Presto job to the cluster.
.. attribute:: queries
Required. The sequence of Presto queries to execute, specified as either an HCFS file URI or as a list of queries.
A list of queries.
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
Optional. A mapping of property names to values. Used to set `Presto session properties <https://prestodb.io/docs/current/sql/set-session.html>`__. Equivalent to using the ``--session`` flag in the Presto CLI.
PySparkJob
A Dataproc job for running `Apache PySpark <https://spark.apache.org/docs/0.9.0/python-programming-guide.html>`__ applications on YARN.
.. attribute:: main_python_file_uri
Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
QueryList
A list of queries to run on a cluster. .. attribute:: queries
Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:

    "hiveJob": {
      "queryList": {
        "queries": [
          "query1",
          "query2",
          "query3;query4"
        ]
      }
    }
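The same queries can be expressed with these types from Python; a hedged sketch, assuming ``query_list`` is the HiveJob field that carries a QueryList.

.. code-block:: python

    from google.cloud import dataproc_v1

    hive_job = dataproc_v1.types.HiveJob(
        query_list=dataproc_v1.types.QueryList(
            queries=[
                "query1",
                "query2",
                "query3;query4",  # several queries in one string, separated by semicolons
            ]
        )
    )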
RegexValidation
Validation based on regular expressions. .. attribute:: regexes
Required. RE2 regular expressions used to validate the parameter’s value. The value must match the regex in its entirety (substring matches are not sufficient).
ReservationAffinity
Reservation Affinity for consuming Zonal reservation. .. attribute:: consume_reservation_type
Optional. Type of reservation to consume
Optional. Corresponds to the label values of reservation resource.
SecurityConfig
Security related configuration, including Kerberos. .. attribute:: kerberos_config
Kerberos related configuration.
SoftwareConfig
Specifies the selection and config of software inside the cluster. .. attribute:: image_version
Optional. The version of software inside the cluster. It must be one of the supported `Dataproc Versions <https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions>`__, such as “1.2” (including a subminor version, such as “1.2.29”), or the `“preview” version <https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions>`__. If unspecified, it defaults to the latest Debian version.
Optional. The set of components to activate on the cluster.
SparkJob
A Dataproc job for running `Apache Spark <http://spark.apache.org/>`__ applications on YARN.
.. attribute:: driver
Required. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to ``CommonJob.jar_file_uris``, and then specify the main class name in ``main_class``.
The name of the driver’s main class. The jar file that contains the class must be in the default CLASSPATH or specified in ``jar_file_uris``.
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Optional. The runtime log config for job execution.
SparkRJob
A Dataproc job for running `Apache SparkR <https://spark.apache.org/docs/latest/sparkr.html>`__ applications on YARN.
.. attribute:: main_r_file_uri
Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
SparkSqlJob
A Dataproc job for running `Apache Spark SQL <http://spark.apache.org/sql/>`__ queries.
.. attribute:: queries
Required. The sequence of Spark SQL queries to execute, specified as either an HCFS file URI or as a list of queries.
A list of queries.
Optional. A mapping of property names to values, used to configure Spark SQL’s SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
Optional. The runtime log config for job execution.
Status
API documentation for dataproc_v1.types.Status
class.
SubmitJobRequest
A request to submit a job. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
Required. The job resource.
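A hedged sketch of submitting a job; the cluster name and file URI are placeholders, and the positional ``submit_job(project_id, region, job)`` call shape is an assumption that varies across library releases.

.. code-block:: python

    from google.cloud import dataproc_v1

    job = dataproc_v1.types.Job(
        placement=dataproc_v1.types.JobPlacement(cluster_name="example-cluster"),  # placeholder
        pyspark_job=dataproc_v1.types.PySparkJob(
            main_python_file_uri="gs://my-bucket/jobs/wordcount.py"  # placeholder .py URI
        ),
    )

    client = dataproc_v1.JobControllerClient()
    submitted = client.submit_job("my-project", "us-central1", job)
    print(submitted.reference.job_id)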
TemplateParameter
A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector) .. attribute:: name
Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
Optional. Brief description of the parameter. Must not exceed 1024 characters.
Timestamp
API documentation for dataproc_v1.types.Timestamp
class.
UpdateAutoscalingPolicyRequest
A request to update an autoscaling policy. .. attribute:: policy
Required. The updated autoscaling policy.
UpdateClusterRequest
A request to update a cluster. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project the cluster belongs to.
Required. The cluster name.
Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day (see JSON representation of `Duration <https://developers.google.com/protocol-buffers/docs/proto3#json>`__). Only supported on Dataproc image versions 1.2 and higher.
Optional. A unique id used to identify the request. If the server receives two [UpdateClusterRequest][google.cloud.dataproc.v1.UpdateClusterRequest] requests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned. It is recommended to always set this value to a `UUID <https://en.wikipedia.org/wiki/Universally_unique_identifier>`_. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
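A partial sketch showing the graceful decommission timeout and a UUID request id on this request; the project, region, and cluster names are placeholders, and fields not described above (such as the updated cluster and its field mask) are omitted.

.. code-block:: python

    import uuid

    from google.cloud import dataproc_v1

    request = dataproc_v1.types.UpdateClusterRequest(
        project_id="my-project",         # placeholder
        region="us-central1",            # placeholder
        cluster_name="example-cluster",  # placeholder
        # Give running jobs up to one hour to finish before nodes are removed.
        graceful_decommission_timeout=dataproc_v1.types.Duration(seconds=3600),
        request_id=str(uuid.uuid4()),    # recommended: a UUID, at most 40 characters
    )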
UpdateJobRequest
A request to update a job. .. attribute:: project_id
Required. The ID of the Google Cloud Platform project that the job belongs to.
Required. The job ID.
Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job the update_mask parameter would be specified as ``labels``, and the ``PATCH`` request body would specify the new value. Note: Currently, ``labels`` is the only field that can be updated.
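A hedged sketch of the labels-only update described above; the project, region, and job id are placeholders, and ``FieldMask(paths=[...])`` is assumed from the FieldMask type listed earlier.

.. code-block:: python

    from google.cloud import dataproc_v1

    request = dataproc_v1.types.UpdateJobRequest(
        project_id="my-project",  # placeholder
        region="us-central1",     # placeholder
        job_id="my-job-id",       # placeholder
        job=dataproc_v1.types.Job(labels={"env": "staging"}),
        # labels is currently the only updatable field, so the mask names that path.
        update_mask=dataproc_v1.types.FieldMask(paths=["labels"]),
    )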
UpdateWorkflowTemplateRequest
A request to update a workflow template. .. attribute:: template
Required. The updated workflow template. The ``template.version`` field must match the current version.
ValueValidation
Validation based on a list of allowed values. .. attribute:: values
Required. List of allowed values for the parameter.
WaitOperationRequest
API documentation for dataproc_v1.types.WaitOperationRequest
class.
WorkflowGraph
The workflow graph. .. attribute:: nodes
Output only. The workflow nodes.
WorkflowMetadata
A Dataproc workflow template resource. .. attribute:: template
Output only. The resource name of the workflow template as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates``, the resource name of the template has the following format: ``projects/{project_id}/regions/{region}/workflowTemplates/{template_id}``
- For ``projects.locations.workflowTemplates``, the resource name of the template has the following format: ``projects/{project_id}/locations/{location}/workflowTemplates/{template_id}``
Output only. The create cluster operation metadata.
Output only. The delete cluster operation metadata.
Output only. The name of the target cluster.
Output only. Workflow start time.
Output only. The UUID of target cluster.
WorkflowNode
The workflow node. .. attribute:: step_id
Output only. The name of the node.
Output only. The job id; populated after the node enters RUNNING state.
Output only. The error detail.
WorkflowTemplate
A Dataproc workflow template resource. .. attribute:: name
Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For ``projects.regions.workflowTemplates``, the resource name of the template has the following format: ``projects/{project_id}/regions/{region}/workflowTemplates/{template_id}``
- For ``projects.locations.workflowTemplates``, the resource name of the template has the following format: ``projects/{project_id}/locations/{location}/workflowTemplates/{template_id}``
Output only. The time template was created.
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to `RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>`__. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to `RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>`__. No more than 32 labels can be associated with a template.
Required. The Directed Acyclic Graph of Jobs to submit.
WorkflowTemplatePlacement
Specifies workflow execution target. Either ``managed_cluster`` or ``cluster_selector`` is required.
.. attribute:: placement
Required. Specifies where workflow executes; either on a managed cluster or an existing cluster chosen by labels.
Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
YarnApplication
A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto. Beta Feature: This report is available for testing purposes only. It may be changed before final release. .. attribute:: name
Required. The application name.
Required. The numerical progress of the application, from 1 to 100.