Detect Multiple Objects

The Cloud Vision API can detect and extract multiple objects in an image with Object Localization.

Object localization identifies multiple objects in an image and returns a LocalizedObjectAnnotation for each one. Each LocalizedObjectAnnotation identifies the object, its position, and the rectangular bounds of the region of the image that contains it.

Object localization identifies both significant and less-prominent objects in an image.

Object information is returned in English only. The Cloud Translation API can translate English labels into any of a number of other languages.
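
For example, the following minimal Python sketch passes a returned object name to the Cloud Translation API. It assumes the google-cloud-translate package is installed and application default credentials are configured:

from google.cloud import translate_v2 as translate

# Translate an English object name returned by the Vision API into French.
translate_client = translate.Client()
result = translate_client.translate("Bicycle wheel", target_language="fr")
print(result["translatedText"])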

Sample image: bicycle_example.png

For example, the API might return the following information and bounding location data for the objects in the image above:

Name          | mid       | Score      | Bounds (normalized vertices)
Bicycle wheel | /m/01bqk0 | 0.89648587 | (0.32076266, 0.78941387), (0.43812272, 0.78941387), (0.43812272, 0.97331065), (0.32076266, 0.97331065)
Bicycle       | /m/0199g  | 0.886761   | (0.312, 0.6616471), (0.638353, 0.6616471), (0.638353, 0.9705882), (0.312, 0.9705882)
Bicycle wheel | /m/01bqk0 | 0.6345275  | (0.5125398, 0.760708), (0.6256646, 0.760708), (0.6256646, 0.94601655), (0.5125398, 0.94601655)
Picture frame | /m/06z37_ | 0.6207608  | (0.79177403, 0.16160682), (0.97047985, 0.16160682), (0.97047985, 0.31348917), (0.79177403, 0.31348917)
Tire          | /m/0h9mv  | 0.55886006 | (0.32076266, 0.78941387), (0.43812272, 0.78941387), (0.43812272, 0.97331065), (0.32076266, 0.97331065)
Door          | /m/02dgv  | 0.5160098  | (0.77569866, 0.37104446), (0.9412425, 0.37104446), (0.9412425, 0.81507325), (0.77569866, 0.81507325)
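
The bounds are normalized vertices: x and y values in the range [0, 1], measured relative to the image width and height. To convert a vertex to pixel coordinates, multiply its x value by the image width and its y value by the image height, as in this short Python sketch (the image dimensions below are assumed for illustration):

# Assumed dimensions of the sample image; substitute the actual values.
width, height = 1280, 960

# Normalized vertices of the first "Bicycle wheel" annotation above.
normalized_vertices = [
    (0.32076266, 0.78941387),
    (0.43812272, 0.78941387),
    (0.43812272, 0.97331065),
    (0.32076266, 0.97331065),
]

# Scale each vertex to pixel coordinates.
pixel_vertices = [(round(x * width), round(y * height)) for x, y in normalized_vertices]
print(pixel_vertices)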

mid contains a machine-generated identifier (MID) corresponding to a label's Google Knowledge Graph entry. For information on inspecting mid values, see the Google Knowledge Graph Search API documentation.
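
As an illustration, the following sketch looks up one of the mid values above with the Knowledge Graph Search API. It assumes the requests package is installed; YOUR_API_KEY is a placeholder for a valid API key:

import requests

# Look up the "Bicycle wheel" entity (/m/01bqk0) in the Knowledge Graph.
response = requests.get(
    "https://kgsearch.googleapis.com/v1/entities:search",
    params={"ids": "/m/01bqk0", "key": "YOUR_API_KEY"},
)
print(response.json())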

Code samples

For samples in a number of programming languages, see the Cloud Vision client libraries documentation.

Object Localization requests

Before trying this sample, set up your GCP project and authentication.

Object localization

PowerShell

To make an object localization request using Windows PowerShell, make a POST request to the https://vision.googleapis.com/v1/images:annotate endpoint and specify OBJECT_LOCALIZATION as the value of features.type.

Images can be passed in one of two ways: as a Google Cloud Storage URI or as a publicly accessible HTTPS or HTTP URL. See Making requests for more information.

See the AnnotateImageRequest reference documentation for more information on configuring the request body.

# Obtain an access token from your Application Default Credentials.
$cred = gcloud auth application-default print-access-token
$headers = @{ Authorization = "Bearer $cred" }

Invoke-WebRequest `
  -Method Post `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -Body "{
      'requests': [
        {
          'image': {
            'source': {
              'imageUri': 'https://cloud.google.com/vision/docs/images/bicycle_example.png'
            }
          },
          'features': [
            {
              'type': 'OBJECT_LOCALIZATION'
            }
          ]
        }
      ]
    }" `
  -Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

curl command

To make an object detection and localization request, make a POST request to the following endpoint:

POST https://vision.googleapis.com/v1/images:annotate

Specify OBJECT_LOCALIZATION as the value of the features.type field.

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     --data "{
      'requests': [
        {
          'image': {
            'source': {
              'imageUri': 'https://cloud.google.com/vision/docs/images/bicycle_example.png'
            }
          },
          'features': [
            {
              'type': 'OBJECT_LOCALIZATION'
            }
          ]
        }
      ]
    }" "https://vision.googleapis.com/v1/images:annotate"

Object Localization response

The JSON response from an Object Localization request is in the following format:

{
  "responses": [
    {
      "localizedObjectAnnotations": [
        {
          "mid": "/m/01bqk0",
          "name": "Bicycle wheel",
          "score": 0.89648587,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.32076266,
                "y": 0.78941387
              },
              {
                "x": 0.43812272,
                "y": 0.78941387
              },
              {
                "x": 0.43812272,
                "y": 0.97331065
              },
              {
                "x": 0.32076266,
                "y": 0.97331065
              }
            ]
          }
        },
        {
          "mid": "/m/0199g",
          "name": "Bicycle",
          "score": 0.886761,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.312,
                "y": 0.6616471
              },
              {
                "x": 0.638353,
                "y": 0.6616471
              },
              {
                "x": 0.638353,
                "y": 0.9705882
              },
              {
                "x": 0.312,
                "y": 0.9705882
              }
            ]
          }
        },
        {
          "mid": "/m/01bqk0",
          "name": "Bicycle wheel",
          "score": 0.6345275,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.5125398,
                "y": 0.760708
              },
              {
                "x": 0.6256646,
                "y": 0.760708
              },
              {
                "x": 0.6256646,
                "y": 0.94601655
              },
              {
                "x": 0.5125398,
                "y": 0.94601655
              }
            ]
          }
        },
        {
          "mid": "/m/06z37_",
          "name": "Picture frame",
          "score": 0.6207608,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.79177403,
                "y": 0.16160682
              },
              {
                "x": 0.97047985,
                "y": 0.16160682
              },
              {
                "x": 0.97047985,
                "y": 0.31348917
              },
              {
                "x": 0.79177403,
                "y": 0.31348917
              }
            ]
          }
        },
        {
          "mid": "/m/0h9mv",
          "name": "Tire",
          "score": 0.55886006,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.32076266,
                "y": 0.78941387
              },
              {
                "x": 0.43812272,
                "y": 0.78941387
              },
              {
                "x": 0.43812272,
                "y": 0.97331065
              },
              {
                "x": 0.32076266,
                "y": 0.97331065
              }
            ]
          }
        },
        {
          "mid": "/m/02dgv",
          "name": "Door",
          "score": 0.5160098,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.77569866,
                "y": 0.37104446
              },
              {
                "x": 0.9412425,
                "y": 0.37104446
              },
              {
                "x": 0.9412425,
                "y": 0.81507325
              },
              {
                "x": 0.77569866,
                "y": 0.81507325
              }
            ]
          }
        }
      ]
    }
  ]
}
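
For comparison, here is a minimal sketch of the same request and response handling using the Cloud Vision client library for Python. It assumes the google-cloud-vision package is installed and application default credentials are configured:

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Reference the sample image by its publicly accessible URL.
image = vision.Image()
image.source.image_uri = "https://cloud.google.com/vision/docs/images/bicycle_example.png"

# object_localization is the client helper for the OBJECT_LOCALIZATION feature.
response = client.object_localization(image=image)

for annotation in response.localized_object_annotations:
    print(f"{annotation.name} (mid: {annotation.mid}, score: {annotation.score:.2f})")
    for vertex in annotation.bounding_poly.normalized_vertices:
        print(f"  ({vertex.x:.4f}, {vertex.y:.4f})")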