gcloud alpha ai-platform predict

NAME
gcloud alpha ai-platform predict - run AI Platform online prediction
SYNOPSIS
gcloud alpha ai-platform predict --model=MODEL (--json-instances=JSON_INSTANCES | --json-request=JSON_REQUEST | --text-instances=TEXT_INSTANCES) [--signature-name=SIGNATURE_NAME] [--version=VERSION] [GCLOUD_WIDE_FLAG]
DESCRIPTION
(ALPHA) gcloud alpha ai-platform predict sends a prediction request to AI Platform for the given instances. This command reads up to 100 instances, though the service itself accepts request payloads up to the size limit (currently 1.5 MB). To predict on more instances, use batch prediction via
  $ gcloud alpha ai-platform jobs submit prediction.
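The limits above can be checked locally before sending a request. A minimal sketch, assuming a hypothetical instances file and a placeholder model name my_model:

```shell
# Hypothetical instances file: two newline-delimited JSON instances.
printf '%s\n' \
  '{"images": [0.0, 0.1], "key": 3}' \
  '{"images": [0.0, 0.1], "key": 2}' > instances.json

# Stay within the documented limits: 100 instances read by the command,
# roughly 1.5 MB of request payload accepted by the service.
lines=$(wc -l < instances.json)
bytes=$(wc -c < instances.json)
if [ "$lines" -le 100 ] && [ "$bytes" -le 1500000 ]; then
  echo "within limits"
  # gcloud alpha ai-platform predict --model=my_model --json-instances=instances.json
else
  echo "use batch prediction: gcloud alpha ai-platform jobs submit prediction"
fi
```

The gcloud call is shown as a comment because it requires a deployed model and active credentials.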
REQUIRED FLAGS
--model=MODEL
Name of the model.
Exactly one of these must be specified:
--json-instances=JSON_INSTANCES
Path to a local file from which instances are read. Instances are in JSON format; newline delimited.

An example of the JSON instances file:

  {"images": [0.0, …, 0.1], "key": 3}
  {"images": [0.0, …, 0.1], "key": 2}
  …

This flag accepts "-" for stdin.
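Since the flag accepts "-", instances can also be piped on stdin. A sketch, with my_model as a placeholder for a deployed model:

```shell
# Write a single newline-delimited JSON instance (values are illustrative).
printf '{"images": [0.0, 0.1], "key": 3}\n' > one_instance.json

# Pipe it on stdin with --json-instances=- (shown as a comment; running it
# requires a deployed model and credentials):
#   gcloud alpha ai-platform predict --model=my_model --json-instances=- < one_instance.json

cat one_instance.json
```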

--json-request=JSON_REQUEST
Path to a local file containing the body of the JSON request.

An example of a JSON request:

  {
    "instances": [
      {"x": [1, 2], "y": [3, 4]},
      {"x": [-1, -2], "y": [-3, -4]}
    ]
  }

This flag accepts "-" for stdin.
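A sketch of building a complete request body for this flag (the field names "x" and "y" are only illustrative, and my_model is a placeholder):

```shell
# Write a request body in the --json-request format: a single JSON object
# with an "instances" list.
cat > request.json <<'EOF'
{
  "instances": [
    {"x": [1, 2], "y": [3, 4]},
    {"x": [-1, -2], "y": [-3, -4]}
  ]
}
EOF

# Sketch invocation (needs a deployed model and credentials):
#   gcloud alpha ai-platform predict --model=my_model --json-request=request.json

grep -c '"x"' request.json   # two instances in the body
```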

--text-instances=TEXT_INSTANCES
Path to a local file from which instances are read. Instances are in UTF-8 encoded text format; newline delimited.

An example of the text instances file:

  107,4.9,2.5,4.5,1.7
  100,5.7,2.8,4.1,1.3
  …

This flag accepts "-" for stdin.
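A sketch of preparing a text-instances file (the CSV-style values are illustrative, and my_model is a placeholder):

```shell
# Write UTF-8 text instances, one per line.
printf '%s\n' \
  '107,4.9,2.5,4.5,1.7' \
  '100,5.7,2.8,4.1,1.3' > instances.txt

# Sketch invocation (needs a deployed model and credentials):
#   gcloud alpha ai-platform predict --model=my_model --text-instances=instances.txt

wc -l < instances.txt   # two instances
```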

OPTIONAL FLAGS
--signature-name=SIGNATURE_NAME
The name of the signature defined in the SavedModel to use for this job. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY in https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants, which is "serving_default". Only applies to TensorFlow models.
--version=VERSION
Model version to be used.

If unspecified, the default version of the model will be used. To list model versions, run

  $ gcloud alpha ai-platform versions list
GCLOUD WIDE FLAGS
These flags are available to all commands: --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in ALPHA and may change without notice. If this command fails with API permission errors despite specifying the right project, you may be trying to access an API with an invitation-only early access whitelist. These variants are also available:
  $ gcloud ai-platform predict
  $ gcloud beta ai-platform predict