API documentation for dataproc_v1beta2.types module.
Classes
AcceleratorConfig
Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine: https://cloud.google.com/compute/docs/gpus/).
The number of accelerator cards of this type exposed to this instance.
Any
API documentation for dataproc_v1beta2.types.Any class.
AutoscalingConfig
Autoscaling Policy config associated with the cluster.
AutoscalingPolicy
Describes an autoscaling policy for Dataproc cluster autoscaler.
Output only. The "resource name" of the autoscaling policy, as
described in
https://cloud.google.com/apis/design/resource_names. - For
projects.regions.autoscalingPolicies
, the resource name of
the policy has the following format: projects/{project_id
}/regions/{region}/autoscalingPolicies/{policy_id}
- For
projects.locations.autoscalingPolicies
, the resource name
of the policy has the following format: projects/{proj
ect_id}/locations/{location}/autoscalingPolicies/{policy_id}
Required. Describes how the autoscaler will operate for primary workers.
BasicAutoscalingAlgorithm
Basic algorithm for autoscaling.
Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: [2m, 1d]. Default: 2m.
BasicYarnAutoscalingConfig
Basic autoscaling configurations for YARN.
Required. Fraction of average pending memory in the last cooldown period for which to add workers. A scale-up factor of 1.0 will result in scaling up so that there is no pending memory remaining after the update (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: [0.0, 1.0].
Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0.
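For illustration, the Python sketch below shows the dict form of a hypothetical autoscaling policy that ties these settings together; the field names follow the AutoscalingPolicy, BasicAutoscalingAlgorithm, BasicYarnAutoscalingConfig, and InstanceGroupAutoscalingPolicyConfig messages on this page, while the concrete values are made up::

    # Hypothetical values; only the field names come from the messages above.
    autoscaling_policy = {
        "id": "example-policy",
        "basic_algorithm": {
            "cooldown_period": "120s",  # JSON form of a Duration; bounds [2m, 1d]
            "yarn_config": {
                "graceful_decommission_timeout": "600s",
                "scale_up_factor": 0.5,   # add half of the average pending YARN memory
                "scale_down_factor": 1.0,
                "scale_up_min_worker_fraction": 0.0,
            },
        },
        "worker_config": {"min_instances": 2, "max_instances": 10},
        "secondary_worker_config": {"min_instances": 0, "max_instances": 20},
    }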
CancelJobRequest
A request to cancel a job.
Required. The Dataproc region in which to handle the request.
CancelOperationRequest
API documentation for dataproc_v1beta2.types.CancelOperationRequest class.
Cluster
Describes the identifying information, config, and status of a cluster of Compute Engine instances.
Required. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
Output only. The previous cluster status.
Output only. Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
ClusterConfig
The cluster config.
Optional. The shared Compute Engine config settings for all instances in a cluster.
Optional. The Compute Engine config settings for worker instances in a cluster.
Optional. The config settings for software inside the cluster.
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget)::

    ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
Optional. Security related configuration.
ClusterMetrics
Contains cluster daemon metrics, such as HDFS and YARN stats.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
The YARN metrics.
ClusterOperation
The cluster operation triggered by a workflow.
Output only. Error, if operation failed.
ClusterOperationMetadata
Metadata describing the operation.
Output only. Cluster UUID for the operation.
Output only. The previous operation status.
Output only. Short description of operation.
Output only. Errors encountered during operation execution.
ClusterOperationStatus
The status of the operation.
Output only. A message containing the detailed operation state.
Output only. The time this state was entered.
ClusterSelector
A selector that chooses target cluster for jobs based on metadata.
Required. The cluster labels. Cluster must have all labels to match.
ClusterStatus
The status of a cluster and its instances.
Output only. Optional details of cluster's state.
Output only. Additional state information that includes status reported by the agent.
CreateAutoscalingPolicyRequest
A request to create an autoscaling policy.
Required. The autoscaling policy to create.
CreateClusterRequest
A request to create a cluster.
Required. The Dataproc region in which to handle the request.
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
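A minimal sketch of this message in dict form, showing a UUID-style request_id alongside the other fields; project, region, and cluster names are placeholders::

    import uuid

    # Placeholder identifiers; keys follow CreateClusterRequest and Cluster.
    create_cluster_request = {
        "project_id": "my-project",
        "region": "us-central1",
        "cluster": {"project_id": "my-project", "cluster_name": "example-cluster"},
        # A UUID stays within the 40-character limit and the allowed characters.
        "request_id": str(uuid.uuid4()),
    }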
CreateWorkflowTemplateRequest
A request to create a workflow template.
Required. The Dataproc workflow template to create.
DeleteAutoscalingPolicyRequest
A request to delete an autoscaling policy.
Autoscaling policies in use by one or more clusters will not be deleted.
DeleteClusterRequest
A request to delete a cluster.
Required. The Dataproc region in which to handle the request.
Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.
DeleteJobRequest
A request to delete a job.
Required. The Dataproc region in which to handle the request.
DeleteOperationRequest
API documentation for dataproc_v1beta2.types.DeleteOperationRequest class.
DeleteWorkflowTemplateRequest
A request to delete a workflow template.
Currently started workflows will remain running.
Optional. The version of workflow template to delete. If specified, will only delete the template if the current server version matches specified version.
DiagnoseClusterRequest
A request to collect cluster diagnostic information.
Required. The Dataproc region in which to handle the request.
DiagnoseClusterResults
The location of diagnostic output.
DiskConfig
Specifies the config of disk options for a group of VM instances.
Optional. Size in GB of the boot disk (default is 500GB).
Duration
API documentation for dataproc_v1beta2.types.Duration class.
Empty
API documentation for dataproc_v1beta2.types.Empty class.
EncryptionConfig
Encryption settings for the cluster.
EndpointConfig
Endpoint config for this cluster.
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
FieldMask
API documentation for dataproc_v1beta2.types.FieldMask class.
GceClusterConfig
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks: https://cloud.google.com/compute/docs/subnetworks for more information). A full URL, partial URI, or short name are valid. Examples:
- https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
- projects/[project_id]/regions/global/default
- default
Optional. If true, all instances in the cluster will only have
internal IP addresses. By default, clusters are not restricted
to internal IP addresses, and will have ephemeral external IP
addresses assigned to each instance. This internal_ip_only
restriction can only be enabled for subnetwork enabled
networks, and all off-cluster dependencies must be configured
to be accessible without external IP addresses.
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included:
- https://www.googleapis.com/auth/cloud.useraccounts.readonly
- https://www.googleapis.com/auth/devstorage.read_write
- https://www.googleapis.com/auth/logging.write
If no scopes are specified, the following defaults are also provided:
- https://www.googleapis.com/auth/bigquery
- https://www.googleapis.com/auth/bigtable.admin.table
- https://www.googleapis.com/auth/bigtable.data
- https://www.googleapis.com/auth/devstorage.full_control
The Compute Engine metadata entries to add to all instances (see Project and instance metadata: https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata).
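The sketch below shows how these settings might look in dict form; the keys follow GceClusterConfig, and the zone, network, scope, and metadata values are placeholders::

    # Placeholder values; only the keys come from GceClusterConfig.
    gce_cluster_config = {
        "zone_uri": "us-east1-a",
        # network_uri and subnetwork_uri are mutually exclusive; the short
        # name "default" refers to the project's default network.
        "network_uri": "default",
        "internal_ip_only": False,
        "service_account_scopes": ["https://www.googleapis.com/auth/cloud-platform"],
        "metadata": {"example-key": "example-value"},
    }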
GetAutoscalingPolicyRequest
A request to fetch an autoscaling policy.
GetClusterRequest
Request to get the resource representation for a cluster in a project.
Required. The Dataproc region in which to handle the request.
GetJobRequest
A request to get the resource representation for a job in a project.
Required. The Dataproc region in which to handle the request.
GetOperationRequest
API documentation for dataproc_v1beta2.types.GetOperationRequest class.
GetWorkflowTemplateRequest
A request to fetch a workflow template.
Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved. If unspecified, retrieves the current version.
HadoopJob
A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
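A sketch of a HadoopJob in dict form using the fields documented here; the jar URI is taken from the example above, and the arguments and property are placeholders::

    # Illustrative only; args and properties are placeholders.
    hadoop_job = {
        "main_jar_file_uri": "file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        # Arguments that do not collide with settable job properties.
        "args": ["wordcount", "gs://my-bucket/input/", "gs://my-bucket/output/"],
        # Plain Hadoop property names; the Dataproc API may overwrite conflicts.
        "properties": {"mapreduce.job.maps": "8"},
    }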
HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
The HCFS URI of the script that contains Hive queries.
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
InstanceGroupAutoscalingPolicyConfig
Configuration for the size bounds of an instance group, including its proportional size to other groups.
Optional. Maximum number of instances for this group. Required for primary workers. Note that by default, clusters will not use secondary workers. Required for secondary workers if the minimum number of secondary instances is set. Primary workers - Bounds: [min_instances, ). Secondary workers - Bounds: [min_instances, ). Default: 0.
InstanceGroupConfig
The config settings for Compute Engine resources in an instance group, such as a master or worker group.
Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples:
- https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
- projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
- n1-standard-2
Auto Zone Exception: If you are using the Dataproc Auto Zone Placement feature (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement), you must use the short name of the machine type resource, for example, n1-standard-2.
Optional. Specifies that this instance group contains preemptible instances.
Optional. The Compute Engine accelerator configuration for these instances.
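For example, a worker group might be described in dict form as in the sketch below; keys follow InstanceGroupConfig, DiskConfig, and AcceleratorConfig, and the sizes and type names are placeholders::

    # Placeholder sizes and type names; keys follow the messages above.
    worker_config = {
        "num_instances": 2,
        # Short machine type name, as required with Auto Zone Placement.
        "machine_type_uri": "n1-standard-2",
        "disk_config": {"boot_disk_size_gb": 500},
        "is_preemptible": False,
        "accelerators": [
            {"accelerator_type_uri": "nvidia-tesla-k80", "accelerator_count": 1},
        ],
    }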
InstantiateInlineWorkflowTemplateRequest
A request to instantiate an inline workflow template.
Required. The workflow template to instantiate.
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
InstantiateWorkflowTemplateRequest
A request to instantiate a workflow template.
Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template.
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
Job
A Dataproc job resource.
Required. Job information, including how, when, and where to run the job.
Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Output only. A URI pointing to the location of the stdout of the job's driver program.
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
JobPlacement
Dataproc job config.
Output only. A cluster UUID generated by the Dataproc service when the job is submitted.
JobReference
Encapsulates the full scoping used to reference a job.
Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
JobScheduling
Job scheduling options.
JobStatus
Dataproc job status.
Output only. Optional Job state details, such as an error description if the state is ERROR.
Output only. Additional state information, which includes status reported by the agent.
KerberosConfig
Specifies Kerberos related configuration.
Required. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, then the default value of 10 will be used.
LifecycleConfig
Specifies the cluster auto-delete schedule configuration.
Either the exact time the cluster should be deleted at or the cluster maximum age.
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration: https://developers.google.com/protocol-buffers/docs/proto3#json).
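In dict form, Duration fields use their JSON representation (a decimal number of seconds with an "s" suffix). A sketch with placeholder values::

    # Placeholder values; keys follow LifecycleConfig.
    lifecycle_config = {
        "idle_delete_ttl": "1800s",   # auto-delete after 30 idle minutes
        "auto_delete_ttl": "86400s",  # hard lifetime of 1 day (10m to 14d allowed)
    }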
ListAutoscalingPoliciesRequest
A request to list autoscaling policies in a project.
Optional. The maximum number of results to return in each response. Must be less than or equal to 1000. Defaults to 100.
ListAutoscalingPoliciesResponse
A response to a request to list autoscaling policies in a project.
Output only. This token is included in the response if there are more results to fetch.
ListClustersRequest
A request to list the clusters in a project.
Required. The Dataproc region in which to handle the request.
Optional. The standard List page size.
ListClustersResponse
The list of all clusters in a project.
Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListClustersRequest.
ListJobsRequest
A request to list jobs in a project.
Required. The Dataproc region in which to handle the request.
Optional. The page token, returned by a previous call, to request the next page of results.
Optional. Specifies enumerated categories of jobs to list (default = match ALL jobs). If filter is provided, jobStateMatcher will be ignored.
ListJobsResponse
A list of jobs in a project.
Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest.
ListOperationsRequest
API documentation for dataproc_v1beta2.types.ListOperationsRequest class.
ListOperationsResponse
API documentation for dataproc_v1beta2.types.ListOperationsResponse class.
ListWorkflowTemplatesRequest
A request to list workflow templates in a project.
Optional. The maximum number of results to return in each response.
ListWorkflowTemplatesResponse
A response to a request to list workflow templates in a project.
Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest.
LoggingConfig
The runtime logging config of the job.
ManagedCluster
Cluster that is managed by the workflow.
Required. The cluster configuration.
ManagedGroupConfig
Specifies the resources used to actively manage an instance group.
Output only. The name of the Instance Group Manager for this group.
NodeInitializationAction
Specifies an executable to run on a fully configured node and a timeout period for executable completion.
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration: https://developers.google.com/protocol-buffers/docs/proto3#json). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
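A sketch of one initialization action in dict form; the script URI is a placeholder, and execution_timeout uses the JSON Duration representation::

    # Placeholder script URI; keys follow NodeInitializationAction.
    init_action = {
        "executable_file": "gs://my-bucket/scripts/install-deps.sh",
        "execution_timeout": "600s",  # cluster creation fails if not done in 10 minutes
    }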
Operation
API documentation for dataproc_v1beta2.types.Operation class.
OperationInfo
API documentation for dataproc_v1beta2.types.OperationInfo class.
OrderedJob
A job executed by the workflow.
Required. The job definition.
Optional. Job scheduling configuration.
ParameterValidation
Configuration for parameter validation.
Validation based on regular expressions.
PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
The HCFS URI of the script that contains the Pig queries.
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
Optional. The runtime log config for job execution.
PySparkJob
A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
Optional. HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Optional. The runtime log config for job execution.
QueryList
A list of queries to run on a cluster.
RegexValidation
Validation based on regular expressions.
ReservationAffinity
Reservation Affinity for consuming Zonal reservation.
Optional. Corresponds to the label key of reservation resource.
SecurityConfig
Security related configuration, including encryption, Kerberos, etc.
SoftwareConfig
Specifies the selection and config of software inside the cluster.
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
- capacity-scheduler: capacity-scheduler.xml
- core: core-site.xml
- distcp: distcp-default.xml
- hdfs: hdfs-site.xml
- hive: hive-site.xml
- mapred: mapred-site.xml
- pig: pig.properties
- spark: spark-defaults.conf
- yarn: yarn-site.xml
For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
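A sketch of the properties map using the prefix:property convention described above; the chosen image version, properties, and values are only examples::

    # Example keys in prefix:property form; values are illustrative.
    software_config = {
        "image_version": "1.4",
        "properties": {
            "core:hadoop.tmp.dir": "/tmp/hadoop",              # goes to core-site.xml
            "spark:spark.executor.memory": "4g",               # goes to spark-defaults.conf
            "yarn:yarn.nodemanager.resource.cpu-vcores": "4",  # goes to yarn-site.xml
        },
    }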
SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to CommonJob.jar_file_uris, and then specify the main class name in main_class.
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Optional. The runtime log config for job execution.
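The sketch below uses the main_class form described above, supplying the jar that contains the class through jar_file_uris; the class, bucket, and property values are placeholders::

    # Placeholder class and URIs; keys follow SparkJob.
    spark_job = {
        "main_class": "com.example.SparkWordCount",
        "jar_file_uris": ["gs://my-bucket/jars/wordcount.jar"],
        "args": ["gs://my-bucket/input/", "gs://my-bucket/output/"],
        "properties": {"spark.executor.cores": "2"},
    }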
SparkRJob
A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Optional. The runtime log config for job execution.
SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
The HCFS URI of the script that contains SQL queries.
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
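A sketch combining query_file_uri with script_variables, which behave like SET name="value" commands; the URI and variable are placeholders::

    # Placeholder URI and variable; keys follow SparkSqlJob.
    spark_sql_job = {
        "query_file_uri": "gs://my-bucket/queries/report.sql",
        "script_variables": {"run_date": "2020-01-01"},
        "jar_file_uris": ["gs://my-bucket/jars/udfs.jar"],
    }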
Status
API documentation for dataproc_v1beta2.types.Status class.
SubmitJobRequest
A request to submit a job.
Required. The Dataproc region in which to handle the request.
Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest requests with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
TemplateParameter
A configurable parameter that replaces one or more fields in the template. Parameterizable fields:
- Labels
- File uris
- Job properties
- Job arguments
- Script variables
- Main class (in HadoopJob and SparkJob)
- Zone (in ClusterSelector)
Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax:
- Values in maps can be referenced by key:
  - labels['key']
  - placement.clusterSelector.clusterLabels['key']
  - placement.managedCluster.labels['key']
  - placement.clusterSelector.clusterLabels['key']
  - jobs['step-id'].labels['key']
- Jobs in the jobs list can be referenced by step-id:
  - jobs['step-id'].hadoopJob.mainJarFileUri
  - jobs['step-id'].hiveJob.queryFileUri
  - jobs['step-id'].pySparkJob.mainPythonFileUri
  - jobs['step-id'].hadoopJob.jarFileUris[0]
  - jobs['step-id'].hadoopJob.archiveUris[0]
  - jobs['step-id'].hadoopJob.fileUris[0]
  - jobs['step-id'].pySparkJob.pythonFileUris[0]
- Items in repeated fields can be referenced by a zero-based index:
  - jobs['step-id'].sparkJob.args[0]
- Other examples:
  - jobs['step-id'].hadoopJob.properties['key']
  - jobs['step-id'].hadoopJob.args[0]
  - jobs['step-id'].hiveJob.scriptVariables['key']
  - jobs['step-id'].hadoopJob.mainJarFileUri
  - placement.clusterSelector.zone
It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:
- placement.clusterSelector.clusterLabels
- jobs['step-id'].sparkJob.args
Optional. Validation rules to be applied to this parameter's value.
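A sketch of one template parameter in dict form, combining the field-path syntax above with a value validation; the parameter name, step id, and allowed values are placeholders::

    # Placeholder names; keys follow TemplateParameter, ParameterValidation,
    # and ValueValidation.
    template_parameter = {
        "name": "ZONE",
        "fields": [
            "placement.clusterSelector.zone",
            "jobs['step-id'].hadoopJob.args[0]",
        ],
        "description": "Zone to run in, also passed as the first job argument.",
        "validation": {"values": {"values": ["us-central1-a", "us-central1-b"]}},
    }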
Timestamp
API documentation for dataproc_v1beta2.types.Timestamp class.
UpdateAutoscalingPolicyRequest
A request to update an autoscaling policy.
UpdateClusterRequest
A request to update a cluster.
Required. The Dataproc region in which to handle the request.
Required. The changes to the cluster.
Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows::

    {
      "config":{
        "workerConfig":{
          "numInstances":"5"
        }
      }
    }

Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows::

    {
      "config":{
        "secondaryWorkerConfig":{
          "numInstances":"5"
        }
      }
    }

Note: currently only the following fields can be updated:

Mask | Purpose
labels | Updates labels
config.worker_config.num_instances | Resize primary worker group
config.secondary_worker_config.num_instances | Resize secondary worker group
config.lifecycle_config.auto_delete_ttl | Reset MAX TTL duration
config.lifecycle_config.auto_delete_time | Update MAX TTL deletion timestamp
config.lifecycle_config.idle_delete_ttl | Update Idle TTL duration
config.autoscaling_config.policy_uri | Use, stop using, or change autoscaling policies
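Putting that together, an update that resizes the primary worker group to 5 instances could be described in dict form as in the sketch below; project, region, and cluster names are placeholders::

    # Placeholder identifiers; keys follow UpdateClusterRequest.
    update_cluster_request = {
        "project_id": "my-project",
        "region": "us-central1",
        "cluster_name": "example-cluster",
        # Only the masked path is read from the cluster body below.
        "update_mask": "config.worker_config.num_instances",
        "cluster": {"config": {"worker_config": {"num_instances": 5}}},
    }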
UpdateJobRequest
A request to update a job.
Required. The Dataproc region in which to handle the request.
Required. The changes to the job.
UpdateWorkflowTemplateRequest
A request to update a workflow template.
ValueValidation
Validation based on a list of allowed values.
WaitOperationRequest
API documentation for dataproc_v1beta2.types.WaitOperationRequest class.
WorkflowGraph
The workflow graph.
WorkflowMetadata
A Dataproc workflow template resource.
Output only. The version of template at the time of workflow instantiation.
Output only. The workflow graph.
Output only. The workflow state.
Map from parameter names to values that were used for those parameters.
Output only. Workflow end time.
WorkflowNode
The workflow node.
Output only. Node's prerequisite nodes.
Output only. The node state.
WorkflowTemplate
A Dataproc workflow template resource.
Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
- For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
Output only. The time template was created.
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
Required. The Directed Acyclic Graph of Jobs to submit.
WorkflowTemplatePlacement
Specifies workflow execution target.
Either managed_cluster or cluster_selector is required.
Optional. A cluster that is managed by the workflow.
YarnApplication
A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Output only. The application state.
Output only. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.