gcloud beta ai-platform versions create

NAME
gcloud beta ai-platform versions create - create a new AI Platform version
SYNOPSIS
gcloud beta ai-platform versions create VERSION --model=MODEL [--accelerator=[count=COUNT],[type=TYPE]] [--async] [--config=CONFIG] [--description=DESCRIPTION] [--explanation-method=EXPLANATION_METHOD] [--framework=FRAMEWORK] [--labels=[KEY=VALUE,…]] [--machine-type=MACHINE_TYPE] [--num-integral-steps=NUM_INTEGRAL_STEPS; default=50] [--num-paths=NUM_PATHS; default=50] [--origin=ORIGIN] [--python-version=PYTHON_VERSION] [--region=REGION] [--runtime-version=RUNTIME_VERSION] [--service-account=SERVICE_ACCOUNT] [--staging-bucket=STAGING_BUCKET] [--package-uris=[PACKAGE_URI,…] --prediction-class=PREDICTION_CLASS] [GCLOUD_WIDE_FLAG]
DESCRIPTION
(BETA) Creates a new version of an AI Platform model.

For more details on managing AI Platform models and versions, see https://cloud.google.com/ml-engine/docs/how-tos/managing-models-jobs

POSITIONAL ARGUMENTS
VERSION
Name of the model version.
REQUIRED FLAGS
--model=MODEL
Name of the model.
OPTIONAL FLAGS
--accelerator=[count=COUNT],[type=TYPE]
Manage the accelerator config for GPU serving. When deploying a model with the new alpha Compute Engine machine types, a GPU accelerator may also be selected. Accelerator config for version creation is currently available only in us-central1.
type
The type of the accelerator. Choices are 'nvidia-tesla-k80', 'nvidia-tesla-p100', 'nvidia-tesla-p4', 'nvidia-tesla-t4', 'nvidia-tesla-v100'.
count
The number of accelerators to attach to each machine running the job.
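As a sketch, attaching a single GPU to each serving node (the model name, version name, bucket, and machine type below are placeholders, not values from this reference):

```shell
# Hypothetical example: names, bucket, and machine type are placeholders.
# Accelerator serving is noted above as available only in us-central1.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model \
  --machine-type=n1-standard-4 \
  --accelerator=count=1,type=nvidia-tesla-t4 \
  --region=us-central1
```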
--async
Return immediately, without waiting for the operation in progress to complete.
--config=CONFIG
Path to a YAML configuration file containing configuration parameters for the version to create.

Note that not all attributes of a version are configurable; available attributes (with example values) are:

  description: A free-form description of the version.
  deploymentUri: gs://path/to/source
  runtimeVersion: '1.0'
  manualScaling:
    nodes: 10  # The number of nodes to allocate for this model.
  autoScaling:
    minNodes: 0  # The minimum number of nodes to allocate for this model.
  labels:
    user-defined-key: user-defined-value

The name of the version must always be specified via the required VERSION argument.

Only one of manualScaling or autoScaling may be specified. If both are specified in the same YAML file, an error is returned.

If an option is specified both in the configuration file and via command-line arguments, the command-line arguments override the configuration file.
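For example, assuming a local file named version.yaml with attributes like those sketched above, an invocation might look like this (file name, model name, and description are placeholders):

```shell
# version.yaml is assumed to contain, e.g., deploymentUri and runtimeVersion.
# Command-line flags override the corresponding file attributes.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --config=version.yaml \
  --description="overrides any description set in version.yaml"
```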

--description=DESCRIPTION
The description of the version.
--explanation-method=EXPLANATION_METHOD
Enable explanations and select the explanation method to use.

The valid options are:

  integrated-gradients: Use Integrated Gradients.
  sampled-shapley: Use Sampled Shapley.
  xrai: Use XRAI.

EXPLANATION_METHOD must be one of: integrated-gradients, sampled-shapley, xrai.
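A sketch enabling Sampled Shapley explanations (the model name, bucket, and path count are placeholder assumptions):

```shell
# Hypothetical example: --num-paths applies only with sampled-shapley.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model \
  --explanation-method=sampled-shapley \
  --num-paths=25
```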

--framework=FRAMEWORK
The ML framework used to train this version of the model. If not specified, defaults to 'tensorflow'. FRAMEWORK must be one of: scikit-learn, tensorflow, xgboost.
--labels=[KEY=VALUE,…]
List of label KEY=VALUE pairs to add.

Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
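For example (the label keys and values here are arbitrary placeholders that satisfy the constraints above):

```shell
# Hypothetical example: label keys/values are placeholders.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --labels=env=prod,team=ml-serving
```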

--machine-type=MACHINE_TYPE
Type of machine on which to serve the model. Currently only applies to online prediction. For available machine types, see https://cloud.google.com/ml-engine/docs/tensorflow/online-predict#machine-types.
--num-integral-steps=NUM_INTEGRAL_STEPS; default=50
Number of integral steps for Integrated Gradients. Only valid when --explanation-method=integrated-gradients or --explanation-method=xrai is specified.
--num-paths=NUM_PATHS; default=50
Number of paths for Sampled Shapley. Only valid when --explanation-method=sampled-shapley is specified.
--origin=ORIGIN
Location of the model "directory" (as output by the TensorFlow Saver; see https://www.tensorflow.org/versions/r0.12/api_docs/python/state_ops.html#Saver).

This overrides deploymentUri in the --config file. If this flag is not passed, deploymentUri must be specified in the file from --config.

Can be a Google Cloud Storage (gs://) path or a local file path (no prefix). In the latter case, the files are uploaded to Google Cloud Storage, and a --staging-bucket argument is required.
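A sketch of the local-path case (the directory and bucket names are placeholders):

```shell
# ./saved_model is a local directory, so --staging-bucket is required;
# the files are uploaded to the staging bucket before version creation.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --origin=./saved_model \
  --staging-bucket=gs://my-staging-bucket
```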

--python-version=PYTHON_VERSION
Version of Python used when creating the version. If not set, the default version is 2.7. Python 3.5 is available when --runtime-version is set to 1.4 and above. Python 2.7 works with all supported runtime versions.
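For example, pinning both versions explicitly (the specific runtime version here is illustrative; check the supported-versions list referenced under --runtime-version):

```shell
# Hypothetical example: Python 3.5 requires --runtime-version 1.4 or above.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model \
  --runtime-version=1.14 \
  --python-version=3.5
```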
--region=REGION
Google Cloud region of the regional endpoint to use for this command. If unspecified, the command uses the global endpoint of the AI Platform Training and Prediction API.

Learn more about regional endpoints and see a list of available regions: https://cloud.google.com/ai-platform/prediction/docs/regional-endpoints

REGION must be one of: asia-east1, europe-west4, us-central1.

--runtime-version=RUNTIME_VERSION
AI Platform runtime version to use for this version. It is defined in documentation along with the list of supported versions: https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list
--service-account=SERVICE_ACCOUNT
Specifies the service account for resource access control.
--staging-bucket=STAGING_BUCKET
Bucket in which to stage training archives.

Required only if a file upload is necessary (that is, other flags include local paths) and no other flags implicitly specify an upload path.

Configure user code for prediction. AI Platform allows a model to have user-provided prediction code; these flags configure that code.
--package-uris=[PACKAGE_URI,…]
Comma-separated list of Google Cloud Storage URIs ('gs://…') for user-supplied Python packages to use.
--prediction-class=PREDICTION_CLASS
The fully-qualified name of the custom prediction class in the package provided for custom prediction.

For example, --prediction-class=my_package.SequenceModel.
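Putting the two custom-code flags together, a hedged end-to-end sketch (the bucket, package path, and class name are placeholders, reusing the example class name above):

```shell
# Hypothetical example: the package must contain my_package.SequenceModel.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model \
  --package-uris=gs://my-bucket/packages/my_package-0.1.tar.gz \
  --prediction-class=my_package.SequenceModel
```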

GCLOUD WIDE FLAGS
These flags are available to all commands: --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

EXAMPLES
To create an AI Platform version with the version ID 'versionId' for the model named 'model-name', run:
gcloud beta ai-platform versions create versionId --model=model-name
NOTES
This command is currently in BETA and may change without notice. These variants are also available:
gcloud ai-platform versions create
gcloud alpha ai-platform versions create