CustomJobSpec

Represents the spec of a CustomJob.

JSON representation
{
  "workerPoolSpecs": [
    {
      object (WorkerPoolSpec)
    }
  ],
  "scheduling": {
    object (Scheduling)
  },
  "serviceAccount": string,
  "network": string,
  "reservedIpRanges": [
    string
  ],
  "baseOutputDirectory": {
    object (GcsDestination)
  },
  "protectedArtifactLocationId": string,
  "tensorboard": string,
  "enableWebAccess": boolean,
  "enableDashboardAccess": boolean,
  "experiment": string,
  "experimentRun": string,
  "models": [
    string
  ]
}
Fields
workerPoolSpecs[]

object (WorkerPoolSpec)

Required. The spec of the worker pools, including machine type and Docker image. All worker pools except the first are optional and can be skipped by providing an empty value.
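For instance, a distributed layout can use the first and third worker pools while skipping the second by providing an empty value. A sketch with placeholder machine types and image URIs:

```python
# Sketch of a workerPoolSpecs list that skips the second pool.
# Machine types and image URIs below are placeholders.
worker_pool_specs = [
    {   # first pool (required), e.g. the chief replica
        "machineSpec": {"machineType": "n1-standard-4"},
        "replicaCount": "1",
        "containerSpec": {"imageUri": "gcr.io/my-project/trainer:latest"},
    },
    {},  # second pool skipped with an empty value
    {   # third pool
        "machineSpec": {"machineType": "n1-standard-4"},
        "replicaCount": "2",
        "containerSpec": {"imageUri": "gcr.io/my-project/trainer:latest"},
    },
]
```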

scheduling

object (Scheduling)

Scheduling options for a CustomJob.

serviceAccount

string

Specifies the service account to use as the workload's run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.

network

string

Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.

To specify this field, you must have already configured VPC Network Peering for Vertex AI.

If this field is left unspecified, the job is not peered with any network.

reservedIpRanges[]

string

Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job.

If set, the job is deployed within the provided IP ranges. Otherwise, the job is deployed to any IP ranges under the provided VPC network.

Example: ['vertex-ai-ip-range'].

baseOutputDirectory

object (GcsDestination)

The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory.

The following Vertex AI environment variables will be passed to containers or Python modules when this field is set:

For CustomJob:

  • AIP_MODEL_DIR = <baseOutputDirectory>/model/
  • AIP_CHECKPOINT_DIR = <baseOutputDirectory>/checkpoints/
  • AIP_TENSORBOARD_LOG_DIR = <baseOutputDirectory>/logs/

For CustomJob backing a Trial of HyperparameterTuningJob:

  • AIP_MODEL_DIR = <baseOutputDirectory>/<trial_id>/model/
  • AIP_CHECKPOINT_DIR = <baseOutputDirectory>/<trial_id>/checkpoints/
  • AIP_TENSORBOARD_LOG_DIR = <baseOutputDirectory>/<trial_id>/logs/
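Training code running in the container can pick these variables up from the environment. A minimal sketch; the fallback paths are made-up local defaults for running the same script outside Vertex AI:

```python
import os

# Vertex AI sets these variables in each container when
# baseOutputDirectory is configured; fall back to local paths
# so the script also runs outside the service.
model_dir = os.environ.get("AIP_MODEL_DIR", "/tmp/model/")
checkpoint_dir = os.environ.get("AIP_CHECKPOINT_DIR", "/tmp/checkpoints/")
tb_log_dir = os.environ.get("AIP_TENSORBOARD_LOG_DIR", "/tmp/logs/")
```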

protectedArtifactLocationId

string

The ID of the location in which to store protected artifacts, for example us-central1. Populate only when the location is different from the CustomJob's location. For the list of supported locations, see https://cloud.google.com/vertex-ai/docs/general/locations

tensorboard

string

Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

enableWebAccess

boolean

Optional. Whether you want Vertex AI to enable interactive shell access to training containers.

If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).

enableDashboardAccess

boolean

Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container.

If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).

experiment

string

Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}

experimentRun

string

Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}

models[]

string

Optional. The name of the Model resources for which to generate a mapping to artifact URIs. Applicable only to some of the Google-provided custom jobs. Format: projects/{project}/locations/{location}/models/{model}

In order to retrieve a specific version of the model, also provide the version ID or version alias. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version ID or alias is specified, the "default" version is returned. The "default" version alias is created for the first version of the model and can be moved to other versions later. There will be exactly one default version.
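Putting the fields above together, a minimal CustomJobSpec request body might look like the sketch below; every resource name, image URI, and bucket path is a placeholder:

```python
import json

# A minimal CustomJobSpec as a Python dict, ready to serialize as a JSON
# request body. All names below are placeholders, not real resources.
custom_job_spec = {
    "workerPoolSpecs": [
        {
            "machineSpec": {"machineType": "n1-standard-4"},
            "replicaCount": "1",  # int64 values are JSON strings
            "containerSpec": {
                "imageUri": "gcr.io/my-project/trainer:latest",
                "args": ["--epochs", "10"],
            },
        }
    ],
    "scheduling": {"timeout": "86400s"},  # 24 hours, Duration format
    "serviceAccount": "trainer@my-project.iam.gserviceaccount.com",
    "baseOutputDirectory": {"outputUriPrefix": "gs://my-bucket/jobs/run-1"},
}

request_body = json.dumps(custom_job_spec, indent=2)
```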

WorkerPoolSpec

Represents the spec of a worker pool in a job.

JSON representation
{
  "machineSpec": {
    object (MachineSpec)
  },
  "replicaCount": string,
  "nfsMounts": [
    {
      object (NfsMount)
    }
  ],
  "diskSpec": {
    object (DiskSpec)
  },

  // Union field task can be only one of the following:
  "containerSpec": {
    object (ContainerSpec)
  },
  "pythonPackageSpec": {
    object (PythonPackageSpec)
  }
  // End of list of possible types for union field task.
}
Fields
machineSpec

object (MachineSpec)

Optional. Immutable. The specification of a single machine.

replicaCount

string (int64 format)

Optional. The number of worker replicas to use for this worker pool.

nfsMounts[]

object (NfsMount)

Optional. List of NFS mount specs.

diskSpec

object (DiskSpec)

Disk spec.

Union field task. The custom task to be executed in this worker pool. task can be only one of the following:
containerSpec

object (ContainerSpec)

The custom container task.

pythonPackageSpec

object (PythonPackageSpec)

The Python packaged task.
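The union constraint means each worker pool carries exactly one task: either a containerSpec or a pythonPackageSpec, never both. A small client-side check one might write before submitting a job (this helper is hypothetical, not part of the API):

```python
def validate_task(worker_pool_spec: dict) -> None:
    """Enforce the union field `task`: exactly one of containerSpec or
    pythonPackageSpec must be present in a non-empty WorkerPoolSpec."""
    present = [k for k in ("containerSpec", "pythonPackageSpec")
               if k in worker_pool_spec]
    if len(present) != 1:
        raise ValueError(f"expected exactly one task field, got {present}")
```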

ContainerSpec

The spec of a Container.

JSON representation
{
  "imageUri": string,
  "command": [
    string
  ],
  "args": [
    string
  ],
  "env": [
    {
      object (EnvVar)
    }
  ]
}
Fields
imageUri

string

Required. The URI of a container image in the Container Registry that is to be run on each worker replica.

command[]

string

The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.

args[]

string

The arguments to be passed when starting the container.

env[]

object (EnvVar)

Environment variables to be passed to the container. Maximum limit is 100.
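For example, a containerSpec whose command overrides the image's Dockerfile entrypoint; the image URI and module name are placeholders:

```python
# Placeholder containerSpec: `command` replaces the Dockerfile ENTRYPOINT,
# `args` are passed when the container starts, and `env` holds
# EnvVar objects (at most 100).
container_spec = {
    "imageUri": "gcr.io/my-project/trainer:latest",
    "command": ["python", "-m", "trainer.task"],
    "args": ["--learning-rate", "0.01"],
    "env": [{"name": "MODE", "value": "train"}],
}
```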

PythonPackageSpec

The spec of Python packaged code.

JSON representation
{
  "executorImageUri": string,
  "packageUris": [
    string
  ],
  "pythonModule": string,
  "args": [
    string
  ],
  "env": [
    {
      object (EnvVar)
    }
  ]
}
Fields
executorImageUri

string

Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.

packageUris[]

string

Required. The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.

pythonModule

string

Required. The Python module name to run after installing the packages.

args[]

string

Command line arguments to be passed to the Python task.

env[]

object (EnvVar)

Environment variables to be passed to the Python module. Maximum limit is 100.
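As an illustration, a pythonPackageSpec might look like the sketch below. The executorImageUri and package path are placeholders; a real job must use an executor image from the pre-built containers list:

```python
# Placeholder pythonPackageSpec. The executorImageUri below is illustrative;
# choose a real one from the pre-built training containers list.
python_package_spec = {
    "executorImageUri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu:latest",
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # at most 100 URIs
    "pythonModule": "trainer.task",
    "args": ["--epochs", "5"],
}
```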

NfsMount

Represents a Network File System (NFS) mount configuration.

JSON representation
{
  "server": string,
  "path": string,
  "mountPoint": string
}
Fields
server

string

Required. IP address of the NFS server.

path

string

Required. Source path exported from the NFS server. Must start with '/'; combined with the server IP address, it indicates the source mount path in the form server:path.

mountPoint

string

Required. Destination mount path. The NFS share will be mounted for the user under /mnt/nfs/.
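A sketch of an NfsMount; the server address and paths are placeholders:

```python
# Placeholder NfsMount. `path` must start with '/' and combines with
# `server` as server:path; the share is exposed to the job under /mnt/nfs/.
nfs_mount = {
    "server": "10.0.0.5",
    "path": "/exports/data",
    "mountPoint": "datasets",
}
```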

Scheduling

All parameters related to queuing and scheduling of custom jobs.

JSON representation
{
  "timeout": string,
  "restartJobOnWorkerRestart": boolean,
  "disableRetries": boolean
}
Fields
timeout

string (Duration format)

The maximum job running time. The default is 7 days.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

restartJobOnWorkerRestart

boolean

Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.

disableRetries

boolean

Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
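For example, a Scheduling object that caps the job at one hour, with the timeout in Duration format:

```python
# Placeholder Scheduling object. `timeout` uses the Duration format:
# seconds with up to nine fractional digits, ending in 's'.
scheduling = {
    "timeout": "3600s",                  # 1 hour (default is 7 days)
    "restartJobOnWorkerRestart": False,
    "disableRetries": False,
}
```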