After you have created (trained) and deployed a model, you can make online (or synchronous) prediction requests to it.
Online (individual) prediction example
After you have deployed your trained model, you can request a prediction for an image using the predict method, or use the UI to get prediction annotations. The predict method applies labels to object bounding boxes in your image.
Your model incurs charges while it is deployed. After you have made predictions with your trained model, you can undeploy it if you no longer want to incur model hosting charges.
Web UI
1. Open the AutoML Vision Object Detection UI and click the Models tab (with lightbulb icon) in the left navigation bar to display the available models.
   To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.
2. Click the row for the model you want to use to label your images.
3. If your model is not yet deployed, deploy it now by selecting Deploy model.
   Your model must be deployed to use online predictions. Deploying your model incurs costs. For more information, see the pricing page.
4. Click the Test & Use tab just below the title bar.
5. Click Upload Images to upload the images that you want to label.
REST
To test prediction, you must first deploy your Cloud-hosted model.
Before using any of the request data, make the following replacements:
- project-id: your GCP project ID.
- model-id: the ID of your model, from the response when you created the model. The ID is the last element of the model name. For example:
  - model name: projects/project-id/locations/location-id/models/IOD4412217016962778756
  - model id: IOD4412217016962778756
- base64-encoded-image: the base64 representation (ASCII string) of your binary image data. This string should look similar to the following string: /9j/4QAYRXhpZgAA...9tAVx/zDQDlGxn//2Q== Visit the base64 encode topic for more information. A sketch for producing this string programmatically follows the request JSON body below.
Field-specific considerations:
- scoreThreshold: a value from 0 to 1. Only results with a score of at least this value are returned. The default value is 0.5.
- maxBoundingBoxCount: the greatest number (upper bound) of bounding boxes returned in a response. The default value is 100 and the maximum is 500. This value is subject to resource constraints, and may be limited by the server.
HTTP method and URL:
POST https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:predict
Request JSON body:
{ "payload": { "image": { "imageBytes": "BASE64_ENCODED_IMAGE" } }, "params": { "scoreThreshold": "0.5", "maxBoundingBoxCount": "100" } }
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: project-id" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:predict"
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "project-id" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:predict" | Select-Object -Expand Content
Output is returned in JSON form. The predictions from your AutoML Vision Object Detection model are contained in the payload field:
- boundingBox: the bounding box of a detected object, specified by two diagonally opposed vertices.
- displayName: the object's label as predicted by the AutoML Vision Object Detection model.
- score: a confidence level indicating that the specified label applies to the image. It ranges from 0 (no confidence) to 1 (high confidence).
{ "payload": [ { "imageObjectDetection": { "boundingBox": { "normalizedVertices": [ { "x": 0.034553755, "y": 0.015524037 }, { "x": 0.941527, "y": 0.9912563 } ] }, "score": 0.9997793 }, "displayName": "Salad" }, { "imageObjectDetection": { "boundingBox": { "normalizedVertices": [ { "x": 0.11737197, "y": 0.7098793 }, { "x": 0.510878, "y": 0.87987 } ] }, "score": 0.63219965 }, "displayName": "Tomato" } ] }
Go
Before trying this sample, follow the setup instructions for this language on the Client Libraries page.
Java
Before trying this sample, follow the setup instructions for this language on the Client Libraries page.
Node.js
Before trying this sample, follow the setup instructions for this language on the Client Libraries page.
Python
Before trying this sample, follow the setup instructions for this language on the Client Libraries page.
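As a rough illustration of the client-library flow, the sketch below uses the google-cloud-automl package for Python. It mirrors the REST request above; the project ID, model ID, and file path are placeholders, the score_threshold key corresponds to scoreThreshold in the REST body, and the exact client surface may vary between library versions, so treat this as a sketch rather than a definitive sample.

from google.cloud import automl

# Placeholder values; replace with your own project, model, and image.
project_id = "your-project-id"
model_id = "IOD4412217016962778756"
file_path = "path/to/local/image.jpg"

prediction_client = automl.PredictionServiceClient()

# Full resource name of the deployed model.
model_full_id = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

# Read the image into memory; the library handles encoding for you.
with open(file_path, "rb") as content_file:
    content = content_file.read()

payload = automl.ExamplePayload(image=automl.Image(image_bytes=content))

# Optional parameter, passed as a string (see the REST body above).
params = {"score_threshold": "0.5"}

response = prediction_client.predict(
    name=model_full_id, payload=payload, params=params
)

for result in response.payload:
    detection = result.image_object_detection
    print(result.display_name, detection.score, detection.bounding_box)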
Additional languages
C#: Please follow the C# setup instructions on the client libraries page and then visit the AutoML Vision Object Detection reference documentation for .NET.
PHP: Please follow the PHP setup instructions on the client libraries page and then visit the AutoML Vision Object Detection reference documentation for PHP.
Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the AutoML Vision Object Detection reference documentation for Ruby.