EnvironmentConfig

Environment configuration for a workload.

JSON representation
{
  "executionConfig": {
    object (ExecutionConfig)
  },
  "peripheralsConfig": {
    object (PeripheralsConfig)
  }
}
Fields
executionConfig

object (ExecutionConfig)

Optional. Execution configuration for a workload.

peripheralsConfig

object (PeripheralsConfig)

Optional. Peripherals configuration that the workload has access to.
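
A filled-in example (a minimal sketch; the project, subnetwork, service account, and service names are hypothetical):

{
  "executionConfig": {
    "serviceAccount": "spark-sa@example-project.iam.gserviceaccount.com",
    "subnetworkUri": "projects/example-project/regions/us-central1/subnetworks/example-subnet"
  },
  "peripheralsConfig": {
    "metastoreService": "projects/example-project/locations/us-central1/services/example-metastore"
  }
}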

ExecutionConfig

Execution configuration for a workload.

JSON representation
{
  "serviceAccount": string,
  "networkTags": [
    string
  ],
  "kmsKey": string,
  "idleTtl": string,
  "ttl": string,
  "stagingBucket": string,

  // Union field network can be only one of the following:
  "networkUri": string,
  "subnetworkUri": string
  // End of list of possible types for union field network.
}
Fields
serviceAccount

string

Optional. Service account used to execute the workload.
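
Example (hypothetical service account email):

  • spark-sa@example-project.iam.gserviceaccount.com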

networkTags[]

string

Optional. Tags used for network traffic control.
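
Example (hypothetical tag values):

  "networkTags": ["dataproc-workload", "allow-internal"]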

kmsKey

string

Optional. The Cloud KMS key to use for encryption.
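
Example (standard Cloud KMS key resource name format):

  • projects/[projectId]/locations/[region]/keyRings/[key_ring]/cryptoKeys/[key]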

idleTtl

string (Duration format)

Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration). Defaults to 1 hour if not set. If both ttl and idleTtl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idleTtl or when ttl has been exceeded, whichever occurs first.
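
Example: a one-hour idle timeout, in the JSON Duration format (seconds with an "s" suffix):

  "idleTtl": "3600s"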

ttl

string (Duration format)

Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idleTtl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idleTtl or when ttl has been exceeded, whichever occurs first.
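
Example: an interactive session that terminates after 8 hours, or after 30 idle minutes, whichever occurs first:

  "ttl": "28800s",
  "idleTtl": "1800s"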

stagingBucket

string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
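
Example (hypothetical bucket; note the bare bucket name rather than a gs://... URI):

  "stagingBucket": "my-staging-bucket"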

Union field network. Network configuration for workload execution. network can be only one of the following:
networkUri

string

Optional. Network URI to connect the workload to.

subnetworkUri

string

Optional. Subnetwork URI to connect the workload to.
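
For example, to connect the workload to a specific subnetwork (hypothetical project and subnetwork names), set subnetworkUri and omit networkUri:

  "subnetworkUri": "projects/example-project/regions/us-central1/subnetworks/example-subnet"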

PeripheralsConfig

Auxiliary services configuration for a workload.

JSON representation
{
  "metastoreService": string,
  "sparkHistoryServerConfig": {
    object (SparkHistoryServerConfig)
  }
}
Fields
metastoreService

string

Optional. Resource name of an existing Dataproc Metastore service.

Example:

  • projects/[projectId]/locations/[region]/services/[service_id]
sparkHistoryServerConfig

object (SparkHistoryServerConfig)

Optional. The Spark History Server configuration for the workload.
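
A filled-in example (hypothetical project, region, service, and cluster names):

{
  "metastoreService": "projects/example-project/locations/us-central1/services/example-metastore",
  "sparkHistoryServerConfig": {
    "dataprocCluster": "projects/example-project/regions/us-central1/clusters/example-history-cluster"
  }
}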

SparkHistoryServerConfig

Spark History Server configuration for the workload.

JSON representation
{
  "dataprocCluster": string
}
Fields
dataprocCluster

string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.

Example:

  • projects/[projectId]/regions/[region]/clusters/[clusterName]
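
A filled-in example (hypothetical names):

{
  "dataprocCluster": "projects/example-project/regions/us-central1/clusters/example-history-cluster"
}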