Method: projects.locations.models.predict

Perform an online prediction. The prediction result is returned directly in the response.

You can call the predict method for an image in .JPEG, .GIF, or .PNG format, up to 30MB in size.

See Annotating images for more details.

HTTP request

POST https://automl.googleapis.com/v1beta1/{name}:predict

Path parameters

Parameters
name

string

Name of the model requested to serve the prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict

Request body

The request body contains data with the following structure:

JSON representation
{
  "payload": {
    object(ExamplePayload)
  },
  "params": {
    string: string,
    ...
  }
}
Fields
payload

object(ExamplePayload)

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params

map (key: string, value: string)

Additional domain-specific parameters; any string value must be at most 25,000 characters long.

You can set the following fields:

  • score_threshold - (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

  • max_bounding_box_count - (int64) When the model makes predictions for an image, no more than this number of bounding boxes are returned in the response. The default is 100. The requested value might be limited by the server.

See Annotating images for more details.
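To make the request shape concrete, here is a minimal Python sketch using the requests library. The model resource name, image file, and access token are placeholders (a token can be obtained with, e.g., gcloud auth print-access-token), and the param values are arbitrary examples.

import base64
import requests

# Placeholder resource name; substitute your project, location, and model IDs.
name = "projects/my-project/locations/us-central1/models/my-model-id"
url = f"https://automl.googleapis.com/v1beta1/{name}:predict"

# Placeholder token, e.g. from `gcloud auth print-access-token`.
access_token = "ACCESS_TOKEN"

# Read a local image and base64-encode it for the imageBytes field.
with open("photo.jpg", "rb") as f:
    image_bytes = base64.b64encode(f.read()).decode("utf-8")

body = {
    "payload": {"image": {"imageBytes": image_bytes}},
    "params": {
        "score_threshold": "0.6",        # only return results scoring >= 0.6
        "max_bounding_box_count": "50",  # return at most 50 boxes
    },
}

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {access_token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json())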

Response body

If successful, the response body contains data with the following structure:

Response message for PredictionService.Predict.

JSON representation
{
  "payload": [
    {
      object(AnnotationPayload)
    }
  ],
  "metadata": {
    string: string,
    ...
  }
}
Fields
payload[]

object(AnnotationPayload)

Prediction result.

metadata

map (key: string, value: string)

Additional domain-specific prediction response metadata.
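Continuing the sketch above, the response can be consumed as plain JSON; the field names follow the representation shown here, and the imageObjectDetection branch is just one of the possible detail types.

# `resp` is the requests.Response from the predict sketch above.
result = resp.json()

for annotation in result.get("payload", []):
    label = annotation.get("displayName")
    detection = annotation.get("imageObjectDetection", {})
    score = detection.get("score")
    vertices = detection.get("boundingBox", {}).get("normalizedVertices", [])
    print(label, score, vertices)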

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ExamplePayload

Example data used for training or prediction.

JSON representation
{

  // Union field payload can be only one of the following:
  "image": {
    object(Image)
  },
  "document": {
    object(Document)
  },
  "row": {
    object(Row)
  }
  // End of list of possible types for union field payload.
}
Fields
Union field payload. Required. Input only. The example data. payload can be only one of the following:
image

object(Image)

Example image.

document

object(Document)

Example document.

row

object(Row)

Example relational table row.

Document

A structured text document, e.g. a PDF.

JSON representation
{
  "inputConfig": {
    object(DocumentInputConfig)
  }
}
Fields
inputConfig

object(DocumentInputConfig)

An input config specifying the content of the document.

DocumentInputConfig

Input configuration of a Document.

JSON representation
{
  "gcsSource": {
    object(GcsSource)
  }
}
Fields
gcsSource

object(GcsSource)

The Google Cloud Storage location of the document file. Only a single path should be given. Max supported size: 512MB. Supported extensions: .PDF.
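As an illustration, a document payload might look like the following; the bucket and file name are hypothetical, and the inputUris field comes from the GcsSource message, which is not reproduced on this page.

# Hypothetical Cloud Storage path; the file must be a .PDF of at most 512MB.
document_payload = {
    "document": {
        "inputConfig": {
            "gcsSource": {"inputUris": ["gs://my-bucket/contract.pdf"]}
        }
    }
}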

Row

A representation of a row in a relational table.

JSON representation
{
  "columnSpecIds": [
    string
  ],
  "values": [
    value
  ]
}
Fields
columnSpecIds[]

string

The resource IDs of the column specs describing the columns of the row. If set, it must contain all input feature columnSpecIds of the Model this row is being passed to, though possibly in a different order. Note: if this field is set, the values field below must match its order.

values[]

value (Value format)

Required. The values of the row cells, given in the same order as the columnSpecIds or, if columnSpecIds is not set, in the same order as the input feature columnSpecs of the Model this row is being passed to.
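A sketch of a row payload with hypothetical column spec IDs and values, illustrating the ordering rule: when columnSpecIds is set, values must follow the same order.

row_payload = {
    "row": {
        # Hypothetical column spec resource IDs, in the order the values follow.
        "columnSpecIds": ["1234", "5678"],
        # values[0] is the cell for column spec "1234", values[1] for "5678".
        "values": ["blue", 42],
    }
}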

AnnotationPayload

Contains annotation information that is relevant to AutoML.

JSON representation
{
  "annotationSpecId": string,
  "displayName": string,

  // Union field detail can be only one of the following:
  "classification": {
    object(ClassificationAnnotation)
  },
  "imageObjectDetection": {
    object(ImageObjectDetectionAnnotation)
  },
  "imageSegmentation": {
    object(ImageSegmentationAnnotation)
  },
  "textSentiment": {
    object(TextSentimentAnnotation)
  }
  // End of list of possible types for union field detail.
}
Fields
annotationSpecId

string

Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.

displayName

string

Output only. The value of displayName when the model was trained. Because this field captures the value at training time, different models trained from the same dataset can return different values, since the model owner may update the displayName between any two trainings.

Union field detail. Output only. Additional information about the annotation specific to the AutoML domain. detail can be only one of the following:
classification

object(ClassificationAnnotation)

Annotation details for classification predictions.

imageObjectDetection

object(ImageObjectDetectionAnnotation)

Annotation details for image object detection.

imageSegmentation

object(ImageSegmentationAnnotation)

Annotation details for image segmentation.

textSentiment

object(TextSentimentAnnotation)

Annotation details for text sentiment.

ClassificationAnnotation

Contains annotation details specific to classification.

JSON representation
{
  "score": number
}
Fields
score

number

Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative and 1 for positive.
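For example, a client consuming classification predictions might keep only labels above some cutoff; a sketch assuming a parsed response dict as in the earlier example, with an arbitrary 0.8 threshold.

confident_labels = [
    a.get("displayName")
    for a in result.get("payload", [])
    if a.get("classification", {}).get("score", 0.0) >= 0.8  # arbitrary cutoff
]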

ImageObjectDetectionAnnotation

Annotation details for image object detection.

JSON representation
{
  "boundingBox": {
    object(BoundingPoly)
  },
  "score": number,
  "iouScore": number,
  "createTimePositivity": enum(Positivity)
}
Fields
boundingBox

object(BoundingPoly)

Output only. The rectangle representing the object location.

score

number

Output only. The confidence that this annotation is positive for the parent example; a value in [0, 1], where a higher value means greater confidence that the annotation is positive.

iouScore

number

Output only. The intersection-over-union score, a value in [0, 1] or -1, measuring the overlap of this annotation's bounding box with the ground-truth bounding box on the example. The score is meaningful only for positive-on-creation annotations, i.e. those created by a user or on examples that already had a ground truth; it is set to -1 if and only if it is not meaningful. For annotations created by the user, the iouScore is 1.

createTimePositivity

enum(Positivity)

Output only. Was the annotation spec of this annotation considered positive for the parent example when this annotation was created? In other words, what was the ground truth for the parent example with respect to the annotation spec at the moment of this annotation's creation. Only set for annotations whose source is a model and that were created on examples already existing in a dataset.

BoundingPoly

A bounding polygon of a detected object on a plane. On output, the normalizedVertices are provided. The polygon is formed by connecting vertices in the order they are listed.

JSON representation
{
  "normalizedVertices": [
    {
      object(NormalizedVertex)
    }
  ]
}
Fields
normalizedVertices[]

object(NormalizedVertex)

Output only. The bounding polygon normalized vertices.

NormalizedVertex

A vertex represents a 2D point in the image. The normalized vertex coordinates are fractions between 0 and 1, relative to the original plane (image, video). For example, if the plane (e.g. the whole image) has size 10 x 20, then a point with normalized coordinates (0.1, 0.3) would be at position (1, 6) on that plane.

JSON representation
{
  "x": number,
  "y": number
}
Fields
x

number

Required. Horizontal coordinate.

y

number

Required. Vertical coordinate.
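A tiny sketch of the conversion described above, reproducing the 10 x 20 example:

def to_pixels(vertex, width, height):
    """Convert a NormalizedVertex dict to absolute pixel coordinates."""
    return (vertex["x"] * width, vertex["y"] * height)

# The example from the description: (0.1, 0.3) on a 10 x 20 plane.
print(to_pixels({"x": 0.1, "y": 0.3}, width=10, height=20))  # (1.0, 6.0)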

Positivity

Status on whether an annotation is considered positive (i.e. whether the annotation matches the example). UNDECIDED is the starting value by default.

Enums

  • POSITIVITY_UNSPECIFIED - Should not be used; an unset enum has this value by default.

  • UNDECIDED - It has not yet been decided (e.g. no "score threshold" was set) whether this annotation matches the example.

  • POSITIVE - The annotation matches the example.

  • NEGATIVE - The annotation does not match the example.

ImageSegmentationAnnotation

Annotation details for image segmentation.

JSON representation
{
  "confidenceMask": {
    object(Image)
  },
  "categoryMask": {
    object(Image)
  }
}
Fields
confidenceMask

object(Image)

Output only. A one-channel image encoded as an 8-bit lossless PNG. The size of the image is the same as the original image. For a specific pixel, a darker color means less confidence in the correctness of the category in the categoryMask for the corresponding pixel. Black means no confidence and white means full confidence.

categoryMask

object(Image)

Required. A one-channel image encoded as an 8-bit lossless PNG. Each pixel in the image mask represents the category that the corresponding pixel in the original image was predicted to belong to. For a prediction result, the category must correspond to one of the thin_annotation_specs categories in imageSegmentationModelMetadata. The model chooses the most likely category and, if none of the categories reaches the confidence threshold, the pixel is marked as background.
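As an illustration of working with the masks, the sketch below decodes one into a Pillow image; it assumes the mask arrives as a base64 imageBytes string (see the Image message below), and Pillow is a third-party library, not part of this API.

import base64
import io

from PIL import Image as PILImage  # aliased to avoid clashing with the Image message

def decode_mask(image_bytes_b64):
    """Decode a base64-encoded one-channel PNG mask into a Pillow image."""
    raw = base64.b64decode(image_bytes_b64)
    return PILImage.open(io.BytesIO(raw))

# Hypothetical usage with a segmentation annotation from a prediction response:
# mask = decode_mask(annotation["imageSegmentation"]["categoryMask"]["imageBytes"])
# print(mask.size, mask.getpixel((0, 0)))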

Image

A representation of an image. Only images up to 30MB in size are supported.

JSON representation
{
  "thumbnailUri": string,
  "contentUri": string,

  // Union field data can be only one of the following:
  "imageBytes": string,
  "inputConfig": {
    object(InputConfig)
  }
  // End of list of possible types for union field data.
}
Fields
thumbnailUri

string

Output only. HTTP URI to the thumbnail image.

contentUri

string

Output only. HTTP URI to the normalized image.

Union field data. Input only. The data representing the image. For Predict calls, imageBytes must be set, as other options are not currently supported by the prediction API. You can read the contents of an uploaded image by using the contentUri field. data can be only one of the following:
imageBytes

string (bytes format)

Image content represented as a stream of bytes. Note: As with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

A base64-encoded string.

inputConfig

object(InputConfig)

An input config specifying the content of the image.
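A short sketch of populating the imageBytes branch from a local file; the file name is a placeholder. Note that the JSON representation carries base64 text, not raw binary.

import base64

with open("photo.png", "rb") as f:  # placeholder file name
    encoded = base64.b64encode(f.read()).decode("utf-8")

image_payload = {"image": {"imageBytes": encoded}}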

TextSentimentAnnotation

Contains annotation details specific to text sentiment.

JSON representation
{
  "sentiment": number
}
Fields
sentiment

number

Output only. The sentiment, with the same semantics as given to AutoMl.ImportData when populating the dataset from which the model used for the prediction was trained. The sentiment values are between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive), with a higher value meaning more positive sentiment. The values are completely relative: 0 means the least positive sentiment and sentimentMax means the most positive among the sentiments present in the training data. For example, if the training data contained only negative sentiment, then sentimentMax would still be negative (although the least negative). This sentiment should not be confused with the "score" or "magnitude" from the Natural Language Sentiment Analysis API.
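Because the values are meaningful only relative to sentimentMax, a consumer might rescale them; a trivial sketch, where sentiment_max would be read from the dataset metadata.

def normalized_sentiment(sentiment, sentiment_max):
    """Map a sentiment in [0, sentiment_max] to a fraction of the scale."""
    return sentiment / sentiment_max

print(normalized_sentiment(3, 4))  # 0.75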
