Method: projects.locations.models.predict

Perform a prediction.

HTTP request

POST https://automl.googleapis.com/v1beta1/{name}:predict

Path parameters

Parameters
name

string

Name of the model requested to serve the prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict
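
The predict URL is formed by substituting the full model resource name into the path. As a minimal sketch (the helper name and sample resource name are illustrative, not part of the API):

```python
def predict_url(name: str) -> str:
    """Build the v1beta1 predict URL from a full model resource name.

    `name` is expected to look like
    projects/{project}/locations/{location}/models/{model}.
    """
    return f"https://automl.googleapis.com/v1beta1/{name}:predict"
```

For example, `predict_url("projects/my-project/locations/us-central1/models/my-model")` yields the URL to POST the request body to.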

Request body

The request body contains data with the following structure:

JSON representation
{
  "payload": {
    object(ExamplePayload)
  },
  "params": {
    string: string,
    ...
  }
}
Fields
payload

object(ExamplePayload)

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params

map (key: string, value: string)

Additional domain-specific parameters. Any string value must be at most 25000 characters long.

  • For Image Classification:

score_threshold - (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it returns only the results whose confidence score is at least this threshold. The default is 0.5.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
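
The request body above can be assembled directly as JSON. A minimal sketch for an image payload (the helper name is illustrative; per the reference, `params` values must be strings, so the threshold is stringified):

```python
import json

def build_predict_request(image_b64: str, score_threshold: float = 0.5) -> str:
    """Assemble the JSON body for models.predict with an image payload.

    image_b64: the image content, already base64-encoded.
    score_threshold: passed through the domain-specific `params` map
    as a string, as the reference requires for map values.
    """
    body = {
        "payload": {"image": {"imageBytes": image_b64}},
        "params": {"score_threshold": str(score_threshold)},
    }
    return json.dumps(body)
```

The resulting string would be sent as the POST body, with an OAuth bearer token carrying the cloud-platform scope listed under Authorization Scopes.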

Response body

If successful, the response body contains data with the following structure:

Response message for PredictionService.Predict.

Currently, this is only used to return an image recognition prediction result. More prediction output metadata might be introduced in the future.

JSON representation
{
  "payload": [
    {
      object(AnnotationPayload)
    }
  ],
  "metadata": {
    string: string,
    ...
  }
}
Fields
payload[]

object(AnnotationPayload)

Prediction result.

metadata

map (key: string, value: string)

Additional domain-specific prediction response metadata.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ExamplePayload

Example data used for training or prediction.

JSON representation
{

  // Union field payload can be only one of the following:
  "image": {
    object(Image)
  },
  "textSnippet": {
    object(TextSnippet)
  }
  // End of list of possible types for union field payload.
}
Fields
Union field payload. Required. Input only. The example data. payload can be only one of the following:
image

object(Image)

An example image.

textSnippet

object(TextSnippet)

Example text.

Image

A representation of an image.

JSON representation
{
  "thumbnailUri": string,

  // Union field data can be only one of the following:
  "imageBytes": string,
  "inputConfig": {
    object(InputConfig)
  }
  // End of list of possible types for union field data.
}
Fields
thumbnailUri

string

Output only. HTTP URI to the thumbnail image.

Union field data. Input only. The data representing the image. For Predict calls, image_bytes must be set, as other options are not currently supported by the prediction API. You can read the contents of an uploaded image by using the content_uri field. data can be only one of the following:
imageBytes

string (bytes format)

Image content represented as a stream of bytes. Note: As with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

A base64-encoded string.
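
Since the JSON representation of a bytes field is base64, raw image bytes have to be encoded before being placed in imageBytes. A one-line sketch (function name is illustrative):

```python
import base64

def encode_image(raw: bytes) -> str:
    # JSON representations of bytes fields use standard base64 encoding.
    return base64.b64encode(raw).decode("ascii")
```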

inputConfig

object(InputConfig)

An input config specifying the content of the image.

TextSnippet

A representation of a text snippet.

JSON representation
{
  "content": string,
  "mimeType": string,
  "contentUri": string
}
Fields
content

string

Required. The content of the text snippet as a string. Up to 250000 characters long.

mimeType

string

The format of the source text. For example, "text/html" or "text/plain". If left blank, the format is automatically determined from the type of the uploaded content. The default is "text/html". Up to 25000 characters long.

contentUri

string

Output only. HTTP URI where you can download the content.
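
For text models, the ExamplePayload would carry a TextSnippet instead of an Image. A sketch of building one, enforcing the 250000-character limit on content stated above (the helper name and the choice of "text/plain" as a default are illustrative):

```python
def build_text_payload(text: str, mime_type: str = "text/plain") -> dict:
    """Build an ExamplePayload dict holding a TextSnippet.

    The reference caps `content` at 250000 characters.
    """
    if len(text) > 250000:
        raise ValueError("content exceeds 250000 characters")
    return {"textSnippet": {"content": text, "mimeType": mime_type}}
```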

AnnotationPayload

Contains annotation information that is relevant to AutoML.

JSON representation
{
  "annotationSpecId": string,
  "displayName": string,

  // Union field detail can be only one of the following:
  "translation": {
    object(TranslationAnnotation)
  },
  "classification": {
    object(ClassificationAnnotation)
  }
  // End of list of possible types for union field detail.
}
Fields
annotationSpecId

string

Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.

displayName

string

Output only. The value of AnnotationSpec.display_name at the time the model was trained. Because this field returns the value captured at training time, different models trained on the same dataset may return different values if the model owner updated the display name between trainings.

Union field detail. Output only. Additional information about the annotation specific to the AutoML solution. detail can be only one of the following:
translation

object(TranslationAnnotation)

Annotation details for translation.

classification

object(ClassificationAnnotation)

Annotation details for content or image classification.

TranslationAnnotation

Annotation details specific to translation.

JSON representation
{
  "translatedContent": {
    object(TextSnippet)
  }
}
Fields
translatedContent

object(TextSnippet)

Output only. The translated content.

ClassificationAnnotation

Contains annotation details specific to classification.

JSON representation
{
  "score": number
}
Fields
score

number

Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative or 1 for positive.
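
Putting the response fields together: a predict response carries a payload[] list of AnnotationPayload objects, each classification result holding a score. A sketch of selecting the highest-confidence classification from a decoded response (the helper name is illustrative):

```python
from typing import Optional

def best_annotation(response: dict) -> Optional[dict]:
    """Return the AnnotationPayload with the highest classification score.

    Ignores payload entries without a classification detail (e.g.
    translation results); returns None if none are present.
    """
    payloads = response.get("payload", [])
    scored = [p for p in payloads if "classification" in p]
    if not scored:
        return None
    return max(scored, key=lambda p: p["classification"]["score"])
```

Note that for image classification the server already filters results below score_threshold, so this only picks among the results it chose to return.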