Module types (0.6.1)

API documentation for dataproc_v1.types module.

Classes

AcceleratorConfig

Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine </compute/docs/gpus/>__).

The number of the accelerator cards of this type exposed to this instance.

Any

API documentation for dataproc_v1.types.Any class.

AutoscalingConfig

Autoscaling Policy config associated with the cluster.

CancelJobRequest

A request to cancel a job.

Required. The Cloud Dataproc region in which to handle the request.

CancelOperationRequest

API documentation for dataproc_v1.types.CancelOperationRequest class.

Cluster

Describes the identifying information, config, and status of a cluster of Compute Engine instances.

Required. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>. No more than 32 labels can be associated with a cluster.

Output only. The previous cluster status.

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
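
The fields above are normally supplied as a nested mapping (coerced into a dataproc_v1.types.Cluster message) when creating a cluster. A minimal sketch, assuming the 0.6.x client surface and hypothetical project, region, and cluster names::

    from google.cloud import dataproc_v1

    project_id = "my-project"   # hypothetical
    region = "us-central1"      # hypothetical

    client = dataproc_v1.ClusterControllerClient()

    # Dicts are coerced into Cluster and its nested config messages.
    cluster = {
        "project_id": project_id,
        "cluster_name": "example-cluster",
        "labels": {"env": "test"},  # keys/values must conform to RFC 1035
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
        },
    }

    operation = client.create_cluster(project_id, region, cluster)
    result = operation.result()  # blocks until the long-running operation completes
    print(result.cluster_name, result.status.state)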

ClusterConfig

The cluster config.

Optional. The shared Compute Engine config settings for all instances in a cluster.

Optional. The Compute Engine config settings for worker instances in a cluster.

Optional. The config settings for software inside the cluster.

Optional. Encryption settings for the cluster.

Optional. Security settings for the cluster.

ClusterMetrics

Contains cluster daemon metrics, such as HDFS and YARN stats.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

The YARN metrics.

ClusterOperation

The cluster operation triggered by a workflow.

Output only. Error, if operation failed.

ClusterOperationMetadata

Metadata describing the operation.

Output only. Cluster UUID for the operation.

Output only. The previous operation status.

Output only. Short description of operation.

Output only. Errors encountered during operation execution.

ClusterOperationStatus

The status of the operation.

Output only. A message containing the detailed operation state.

Output only. The time this state was entered.

ClusterSelector

A selector that chooses target cluster for jobs based on metadata.

Required. The cluster labels. Cluster must have all labels to match.

ClusterStatus

The status of a cluster and its instances.

Optional. Output only. Details of cluster's state.

Output only. Additional state information that includes status reported by the agent.

CreateClusterRequest

A request to create a cluster.

Required. The Cloud Dataproc region in which to handle the request.

Optional. A unique id used to identify the request. If the server receives two [CreateClusterRequest][google.cloud.dataproc.v1.CreateClusterRequest] requests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned. It is recommended to always set this value to a UUID <https://en.wikipedia.org/wiki/Universally_unique_identifier>__. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
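
A short sketch of idempotent creation using a UUID request id; it assumes create_cluster in this client version accepts request_id as a keyword argument, and the identifiers are hypothetical::

    import uuid

    from google.cloud import dataproc_v1

    client = dataproc_v1.ClusterControllerClient()
    project_id, region = "my-project", "us-central1"
    cluster = {"project_id": project_id, "cluster_name": "example-cluster", "config": {}}

    # Reusing the same request_id on a retry returns the original operation
    # instead of creating a second cluster.
    request_id = str(uuid.uuid4())
    operation = client.create_cluster(project_id, region, cluster, request_id=request_id)
    operation.result()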

CreateWorkflowTemplateRequest

A request to create a workflow template.

Required. The Dataproc workflow template to create.

DeleteClusterRequest

A request to delete a cluster.

Required. The Cloud Dataproc region in which to handle the request.

Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if cluster with specified UUID does not exist.

DeleteJobRequest

A request to delete a job.

Required. The Cloud Dataproc region in which to handle the request.

DeleteOperationRequest

API documentation for dataproc_v1.types.DeleteOperationRequest class.

DeleteWorkflowTemplateRequest

A request to delete a workflow template.

Currently started workflows will remain running.

Optional. The version of workflow template to delete. If specified, will only delete the template if the current server version matches specified version.

DiagnoseClusterRequest

A request to collect cluster diagnostic information.

Required. The Cloud Dataproc region in which to handle the request.

DiagnoseClusterResults

The location of diagnostic output.

DiskConfig

Specifies the config of disk options for a group of VM instances.

Optional. Size in GB of the boot disk (default is 500GB).

Duration

API documentation for dataproc_v1.types.Duration class.

Empty

API documentation for dataproc_v1.types.Empty class.

EncryptionConfig

Encryption settings for the cluster.

FieldMask

API documentation for dataproc_v1.types.FieldMask class.

GceClusterConfig

Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks </compute/docs/subnetworks>__ for more information). A full URL, partial URI, or short name are valid. Examples: - https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default - projects/[project_id]/regions/global/default - default

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: - https://www.googleapis.com/auth/cloud.useraccounts.readonly - https://www.googleapis.com/auth/devstorage.read_write - https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: - https://www.googleapis.com/auth/bigquery - https://www.googleapis.com/auth/bigtable.admin.table - https://www.googleapis.com/auth/bigtable.data - https://www.googleapis.com/auth/devstorage.full_control

The Compute Engine metadata entries to add to all instances (see Project and instance metadata <https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata>__).
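
A sketch of the gce_cluster_config portion of a ClusterConfig; the subnetwork, scopes, metadata, and tags shown are hypothetical::

    gce_cluster_config = {
        "subnetwork_uri": "projects/my-project/regions/us-central1/subnetworks/my-subnet",
        "internal_ip_only": True,  # requires a subnetwork-enabled network
        "service_account_scopes": ["https://www.googleapis.com/auth/cloud-platform"],
        "metadata": {"startup-env": "test"},
        "tags": ["dataproc-cluster"],
    }

    cluster_config = {"gce_cluster_config": gce_cluster_config}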

GetClusterRequest

Request to get the resource representation for a cluster in a project.

Required. The Cloud Dataproc region in which to handle the request.

GetJobRequest

A request to get the resource representation for a job in a project.

Required. The Cloud Dataproc region in which to handle the request.

GetOperationRequest

API documentation for dataproc_v1.types.GetOperationRequest class.

GetWorkflowTemplateRequest

A request to fetch a workflow template.

Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved. If unspecified, retrieves the current version.

HadoopJob

A Cloud Dataproc job for running Apache Hadoop MapReduce <https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html> jobs on Apache Hadoop YARN <https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html>.

The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'

Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
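
A sketch of a HadoopJob payload inside a Job; the bucket paths are hypothetical, and the jar is the stock MapReduce examples jar referenced above::

    hadoop_job = {
        "main_jar_file_uri": "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        "args": ["wordcount", "gs://my-bucket/input/", "gs://my-bucket/output/"],
        # Properties that conflict with values set by the Dataproc API may be overwritten.
        "properties": {"mapreduce.job.maps": "8"},
    }

    job = {"placement": {"cluster_name": "example-cluster"}, "hadoop_job": hadoop_job}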

HiveJob

A Cloud Dataproc job for running Apache Hive <https://hive.apache.org/>__ queries on YARN.

The HCFS URI of the script that contains Hive queries.

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.

InstanceGroupConfig

Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group.

Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: - https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 - projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 - n1-standard-2 Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement </dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement>__ feature, you must use the short name of the machine type resource, for example, n1-standard-2.

Optional. Specifies that this instance group contains preemptible instances.

Optional. The Compute Engine accelerator configuration for these instances.
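
A sketch of primary and secondary (preemptible) worker groups, combining InstanceGroupConfig, DiskConfig, and AcceleratorConfig; the machine and accelerator types use short names, as required with auto zone placement, and the sizes are hypothetical::

    worker_config = {
        "num_instances": 2,
        "machine_type_uri": "n1-standard-4",
        "disk_config": {"boot_disk_size_gb": 500},
        "accelerators": [
            {"accelerator_type_uri": "nvidia-tesla-t4", "accelerator_count": 1},
        ],
    }

    secondary_worker_config = {
        "num_instances": 2,
        "is_preemptible": True,
    }

    cluster_config = {
        "worker_config": worker_config,
        "secondary_worker_config": secondary_worker_config,
    }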

InstantiateInlineWorkflowTemplateRequest

A request to instantiate an inline workflow template.

Required. The workflow template to instantiate.

InstantiateWorkflowTemplateRequest

A request to instantiate a workflow template.

Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template.

Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 100 characters.

Job

A Cloud Dataproc job resource.

Required. Job information, including how, when, and where to run the job.

Job is a Hadoop job.

Job is a PySpark job.

Job is a Pig job.

Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

Optional. Job scheduling configuration.

JobPlacement

Cloud Dataproc job config.

Output only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted.

JobReference

Encapsulates the full scoping used to reference a job.

Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.

JobScheduling

Job scheduling options.

JobStatus

Cloud Dataproc job status.

Optional. Output only. Job state details, such as an error description if the state is ERROR.

Output only. Additional state information, which includes status reported by the agent.

KerberosConfig

Specifies Kerberos related configuration.

Required. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
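
A sketch of the security_config portion of a ClusterConfig that enables Kerberos; the KMS key and Cloud Storage URIs are hypothetical::

    security_config = {
        "kerberos_config": {
            "enable_kerberos": True,
            "root_principal_password_uri": "gs://my-secrets/root-password.encrypted",
            "kms_key_uri": "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
            "tgt_lifetime_hours": 10,  # the documented default
        },
    }

    cluster_config = {"security_config": security_config}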

ListClustersRequest

A request to list the clusters in a project.

Required. The Cloud Dataproc region in which to handle the request.

Optional. The standard List page size.

ListClustersResponse

The list of all clusters in a project.

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListClustersRequest.
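
The generated client hides the page_token handling behind an iterator; a minimal sketch, assuming the 0.6.x client surface and hypothetical identifiers::

    from google.cloud import dataproc_v1

    client = dataproc_v1.ClusterControllerClient()
    project_id, region = "my-project", "us-central1"

    # The iterator fetches subsequent pages on demand, passing each response's
    # next_page_token as the page_token of the following ListClustersRequest.
    for cluster in client.list_clusters(project_id, region):
        print(cluster.cluster_name, cluster.status.state)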

ListJobsRequest

A request to list jobs in a project.

Required. The Cloud Dataproc region in which to handle the request.

Optional. The page token, returned by a previous call, to request the next page of results.

Optional. Specifies enumerated categories of jobs to list. (default = match ALL jobs). If filter is provided, jobStateMatcher will be ignored.

ListJobsResponse

A list of jobs in a project.

Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest.

ListOperationsRequest

API documentation for dataproc_v1.types.ListOperationsRequest class.

ListOperationsResponse

API documentation for dataproc_v1.types.ListOperationsResponse class.

ListWorkflowTemplatesRequest

A request to list workflow templates in a project.

Optional. The maximum number of results to return in each response.

ListWorkflowTemplatesResponse

A response to a request to list workflow templates in a project.

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest.

LoggingConfig

The runtime logging config of the job.

ManagedCluster

Cluster that is managed by the workflow.

Required. The cluster configuration.

ManagedGroupConfig

Specifies the resources used to actively manage an instance group.

Output only. The name of the Instance Group Manager for this group.

NodeInitializationAction

Specifies an executable to run on a fully configured node and a timeout period for executable completion.

Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
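
A sketch of initialization actions inside a ClusterConfig; the script URI is hypothetical, and the timeout is a Duration expressed in seconds::

    cluster_config = {
        "initialization_actions": [
            {
                "executable_file": "gs://my-bucket/scripts/install-deps.sh",
                "execution_timeout": {"seconds": 600},  # 10 minutes, the default
            },
        ],
    }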

Operation

API documentation for dataproc_v1.types.Operation class.

OperationInfo

API documentation for dataproc_v1.types.OperationInfo class.

OrderedJob

A job executed by the workflow.

Required. The job definition.

Job is a Spark job.

Job is a Hive job.

Job is a SparkSql job.

Optional. Job scheduling configuration.

ParameterValidation

Configuration for parameter validation.

Validation based on regular expressions.

PigJob

A Cloud Dataproc job for running Apache Pig <https://pig.apache.org/>__ queries on YARN.

The HCFS URI of the script that contains the Pig queries.

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.

Optional. The runtime log config for job execution.

PySparkJob

A Cloud Dataproc job for running Apache PySpark <https://spark.apache.org/docs/0.9.0/python-programming-guide.html>__ applications on YARN.

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

Optional. HCFS URIs of archives to be extracted into the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Optional. The runtime log config for job execution.

QueryList

A list of queries to run on a cluster.

RegexValidation

Validation based on regular expressions.

SecurityConfig

Security related configuration, including Kerberos.

SoftwareConfig

Specifies the selection and config of software inside the cluster.

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:

- capacity-scheduler: capacity-scheduler.xml
- core: core-site.xml
- distcp: distcp-default.xml
- hdfs: hdfs-site.xml
- hive: hive-site.xml
- mapred: mapred-site.xml
- pig: pig.properties
- spark: spark-defaults.conf
- yarn: yarn-site.xml

For more information, see Cluster properties </dataproc/docs/concepts/cluster-properties>__.
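
A sketch of a SoftwareConfig using the prefix:property key format described above; the image version and property values are hypothetical::

    software_config = {
        "image_version": "1.4",
        "properties": {
            "core:hadoop.tmp.dir": "/tmp/hadoop",                # core-site.xml
            "spark:spark.executor.memory": "4g",                 # spark-defaults.conf
            "yarn:yarn.nodemanager.resource.memory-mb": "8192",  # yarn-site.xml
        },
    }

    cluster_config = {"software_config": software_config}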

SparkJob

A Cloud Dataproc job for running Apache Spark <http://spark.apache.org/>__ applications on YARN.

The HCFS URI of the jar file that contains the main class.

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Optional. HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

SparkSqlJob

A Cloud Dataproc job for running Apache Spark SQL <http://spark.apache.org/sql/>__ queries.

The HCFS URI of the script that contains SQL queries.

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

Status

API documentation for dataproc_v1.types.Status class.

SubmitJobRequest

A request to submit a job.

Required. The Cloud Dataproc region in which to handle the request.

Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest requests with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID <https://en.wikipedia.org/wiki/Universally_unique_identifier>__. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
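
A sketch of submitting a PySparkJob with a UUID request id; it assumes submit_job in this client version accepts request_id as a keyword argument, and the bucket and cluster names are hypothetical::

    import uuid

    from google.cloud import dataproc_v1

    job_client = dataproc_v1.JobControllerClient()
    project_id, region = "my-project", "us-central1"

    job = {
        "placement": {"cluster_name": "example-cluster"},
        "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/word_count.py"},
    }

    # Reusing the same request_id on a retry returns the original Job.
    submitted = job_client.submit_job(
        project_id, region, job, request_id=str(uuid.uuid4())
    )
    print(submitted.reference.job_id, submitted.status.state)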

TemplateParameter

A configurable parameter that replaces one or more fields in the template. Parameterizable fields: Labels, File uris, Job properties, Job arguments, Script variables, Main class (in HadoopJob and SparkJob), Zone (in ClusterSelector).

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a [google.protobuf.FieldMask][google.protobuf.FieldMask]. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax:

- Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']
- Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]
- Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]
- Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args

Optional. Validation rules to be applied to this parameter's value.
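
A sketch of a workflow template with one parameter and regex validation, then an instantiation that supplies the parameter value; it assumes the 0.6.x WorkflowTemplateServiceClient surface, and the project, bucket, and template names are hypothetical::

    from google.cloud import dataproc_v1

    wf_client = dataproc_v1.WorkflowTemplateServiceClient()
    project_id, region = "my-project", "us-central1"
    parent = "projects/{}/regions/{}".format(project_id, region)

    template = {
        "id": "word-count",
        "placement": {"cluster_selector": {"cluster_labels": {"env": "test"}}},
        "jobs": [
            {
                "step_id": "count",
                "pyspark_job": {"main_python_file_uri": "gs://my-bucket/word_count.py"},
            },
        ],
        "parameters": [
            {
                "name": "MAIN_PY",
                # Field path referencing the job by step-id, as described above.
                "fields": ["jobs['count'].pySparkJob.mainPythonFileUri"],
                "validation": {"regex": {"regexes": [r"gs://.+\.py"]}},
            },
        ],
    }
    wf_client.create_workflow_template(parent, template)

    # Parameter values are substituted at instantiation time.
    name = "{}/workflowTemplates/word-count".format(parent)
    operation = wf_client.instantiate_workflow_template(
        name, parameters={"MAIN_PY": "gs://my-bucket/word_count_v2.py"}
    )
    operation.result()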

Timestamp

API documentation for dataproc_v1.types.Timestamp class.

UpdateClusterRequest

A request to update a cluster.

Required. The Cloud Dataproc region in which to handle the request.

Required. The changes to the cluster.

Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows::

    { "config": { "workerConfig": { "numInstances": "5" } } }

Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows::

    { "config": { "secondaryWorkerConfig": { "numInstances": "5" } } }

Note: Currently, only the following fields can be updated:

- labels: Update labels
- config.worker_config.num_instances: Resize primary worker group
- config.secondary_worker_config.num_instances: Resize secondary worker group
- config.autoscaling_config.policy_uri: Use, stop using, or change autoscaling policies
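
A sketch of the same resize expressed through the Python client, where the mask is passed as a FieldMask mapping; it assumes the 0.6.x update_cluster signature, and the identifiers are hypothetical::

    from google.cloud import dataproc_v1

    client = dataproc_v1.ClusterControllerClient()
    project_id, region = "my-project", "us-central1"

    # The cluster argument carries only the new value; update_mask names the
    # field being changed (config.worker_config.num_instances -> 5 workers).
    cluster = {"config": {"worker_config": {"num_instances": 5}}}
    update_mask = {"paths": ["config.worker_config.num_instances"]}

    operation = client.update_cluster(
        project_id, region, "example-cluster", cluster, update_mask
    )
    operation.result()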

UpdateJobRequest

A request to update a job.

Required. The Cloud Dataproc region in which to handle the request.

Required. The changes to the job.

UpdateWorkflowTemplateRequest

A request to update a workflow template.

ValueValidation

Validation based on a list of allowed values.

WaitOperationRequest

API documentation for dataproc_v1.types.WaitOperationRequest class.

WorkflowGraph

The workflow graph.

WorkflowMetadata

A Cloud Dataproc workflow template resource.

Output only. The version of template at the time of workflow instantiation.

Output only. The workflow graph.

Output only. The workflow state.

Map from parameter names to values that were used for those parameters.

Output only. Workflow end time.

WorkflowNode

The workflow node.

Output only. Node's prerequisite nodes.

Output only. The node state.

WorkflowTemplate

A Cloud Dataproc workflow template resource.

Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.

Output only. The time template was last updated.

Required. WorkflowTemplate scheduling information.

Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
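
A sketch of the read-modify-write flow described above for the version field; it assumes the 0.6.x WorkflowTemplateServiceClient surface, and the template name is hypothetical::

    from google.cloud import dataproc_v1

    wf_client = dataproc_v1.WorkflowTemplateServiceClient()
    name = "projects/my-project/regions/us-central1/workflowTemplates/word-count"

    # get_workflow_template returns the current template with version filled
    # in by the server; send it back unchanged so the update is consistent.
    template = wf_client.get_workflow_template(name)
    template.labels["owner"] = "data-team"
    updated = wf_client.update_workflow_template(template)
    print(updated.version)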

WorkflowTemplatePlacement

Specifies workflow execution target.

Either managed_cluster or cluster_selector is required.

A cluster that is managed by the workflow.

YarnApplication

A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Required. The application state.

Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.