This page shows you how to send three feature detection and annotation requests to the Cloud Vision API using the REST interface and the `curl` command.
Cloud Vision API enables easy integration of Google vision recognition technologies into developer applications. You can send image data and desired feature types to the Vision API, which then returns a corresponding response based on the image attributes you are interested in. For more information about the feature types offered, see the List of all Vision API features.
Before you begin
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- In the Cloud Console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- Enable the Vision API.
- Set up authentication:
  - In the Cloud Console, go to the Create service account key page.
  - From the Service account list, select New service account.
  - In the Service account name field, enter a name.
  - From the Role list, select Project > Owner.
    Note: The Role field authorizes your service account to access resources. You can view and change this field later by using the Cloud Console. If you are developing a production app, specify more granular permissions than Project > Owner. For more information, see granting roles to service accounts.
  - Click Create. A JSON file that contains your key downloads to your computer.
- Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the file path of the JSON file that contains your service account key. This variable applies only to your current shell session, so if you open a new session, set the variable again.
- Install and initialize the Cloud SDK.
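As a quick sanity check before making requests, the following Python sketch verifies that `GOOGLE_APPLICATION_CREDENTIALS` is set and points to a valid service account key file. The `check_credentials` helper is our own illustration, not part of the official setup; the Google client libraries perform this resolution for you automatically.

```python
import json
import os


def check_credentials() -> str:
    """Verify GOOGLE_APPLICATION_CREDENTIALS points to a readable JSON key file.

    Hypothetical helper for illustration only. Returns the key's "type" field,
    which is "service_account" for a downloaded service account key.
    """
    path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if not path:
        raise RuntimeError("GOOGLE_APPLICATION_CREDENTIALS is not set")
    with open(path, encoding="utf-8") as f:
        key = json.load(f)  # raises ValueError if the file is not valid JSON
    return key.get("type", "unknown")
```

If the function raises, re-check the "Set up authentication" step above before continuing.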
Make an image annotation request
After completing the Before you begin steps, you can use the Vision API to annotate an image file.
In this example, you use `curl` to send a request to the Vision API using the following image:
Cloud Storage URI:
gs://cloud-samples-data/vision/using_curl/shanghai.jpeg
HTTPS URL:
https://console.cloud.google.com/storage/browser/cloud-samples-data/vision/using_curl/shanghai.jpeg

Create the request JSON
The following `request.json` file demonstrates how to request three `images:annotate` features and limit the results in the response.
Create the JSON request file with the following text, and save it as a plain text file named `request.json` in your working directory:

```json
{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "gs://cloud-samples-data/vision/using_curl/shanghai.jpeg"
        }
      },
      "features": [
        { "type": "LABEL_DETECTION", "maxResults": 3 },
        { "type": "OBJECT_LOCALIZATION", "maxResults": 1 },
        { "type": "TEXT_DETECTION", "maxResults": 1, "model": "builtin/latest" }
      ]
    }
  ]
}
```
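If you prefer to generate the file rather than type it by hand, a small Python sketch of ours (not part of the Vision documentation) can write the same request body:

```python
import json

# The same annotate request as above, built as a Python dict.
request_body = {
    "requests": [
        {
            "image": {
                "source": {
                    "imageUri": "gs://cloud-samples-data/vision/using_curl/shanghai.jpeg"
                }
            },
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 3},
                {"type": "OBJECT_LOCALIZATION", "maxResults": 1},
                {"type": "TEXT_DETECTION", "maxResults": 1, "model": "builtin/latest"},
            ],
        }
    ]
}

# Write it as the request.json file referenced by the curl command below.
with open("request.json", "w", encoding="utf-8") as f:
    json.dump(request_body, f, indent=2)
```

Building the body as a dict also makes it easy to swap in a different image URI or feature list later.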
Send the request
You use `curl` and the body content from `request.json` to send the request to the Cloud Vision API. Enter the following on your command line:

```shell
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  https://vision.googleapis.com/v1/images:annotate \
  -d @request.json
```
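The same request can be assembled from Python's standard library. This sketch of ours builds (but does not send) an equivalent POST request; the access token is assumed to come from `gcloud auth application-default print-access-token`, and in practice the official client libraries handle authentication for you.

```python
import json
import urllib.request


def build_annotate_request(access_token: str, body: dict) -> urllib.request.Request:
    """Build the same POST request that the curl command above issues.

    Hypothetical helper: `access_token` is assumed to be obtained separately,
    e.g. from `gcloud auth application-default print-access-token`.
    """
    return urllib.request.Request(
        url="https://vision.googleapis.com/v1/images:annotate",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )
```

Passing the built request to `urllib.request.urlopen` would return the same JSON response body that `curl` prints.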
Interpret the response
You should see a JSON response similar to the one below. The request JSON body specified `maxResults` for each annotation type, so the response JSON contains:

- three `labelAnnotations` results
- one `textAnnotations` result (shortened for clarity)
- one `localizedObjectAnnotations` result
Label detection results
- description: "People", score: 0.950
- description: "Street", score: 0.891
- description: "Mode of transport", score: 0.890

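The label results above can also be pulled out of the raw response programmatically. In this sketch, the `summarize_labels` helper is our own; the `responses`, `labelAnnotations`, `description`, and `score` field names are those of the v1 response shown on this page.

```python
def summarize_labels(response: dict) -> list:
    """Extract (description, score) pairs from the first annotate response."""
    annotations = response["responses"][0].get("labelAnnotations", [])
    return [(a["description"], a["score"]) for a in annotations]
```

Called on the response for this example image, it would yield pairs like ("People", 0.950).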
Text detection results
- text: 牛牛面馆\n
- vertices: (x: 159, y: 212), (x: 947, y: 212), (x: 947, y: 354), (x: 159, y: 354)

Object detection results
- name: "Person", score: 0.944
- normalized vertices: (x: 0.260, y: 0.468), (x: 0.407, y: 0.468), (x: 0.407, y: 0.895), (x: 0.260, y: 0.895)

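Note that, unlike text detection, object localization returns vertices normalized to the [0, 1] range, so mapping them back to pixels means multiplying by the image's width and height. A minimal sketch; the helper name and the example dimensions in the comment are our assumptions:

```python
def to_pixels(normalized_vertices, width, height):
    """Scale normalized [0, 1] bounding-poly vertices to pixel coordinates.

    normalized_vertices: list of {"x": float, "y": float} dicts, as returned
    under localizedObjectAnnotations[].boundingPoly.normalizedVertices.
    width, height: the dimensions of the original image in pixels
    (you supply these; the annotate response does not include them).
    """
    return [(round(v["x"] * width), round(v["y"] * height)) for v in normalized_vertices]
```

For example, with a hypothetical 1000x1000 image, the first vertex above (x: 0.260, y: 0.468) maps to the pixel (260, 468).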
Congratulations! You've sent your first request to Vision API.
What's next
- See a list of all feature types and their uses.
- Get started with the Vision API in your language of choice by using a Vision API Client Library.
- Use the How-to guides to learn more about specific features, see example annotations, and get annotations for an individual file or image.
- Learn about batch image and file (PDF/TIFF/GIF) annotation.
- Work through the sample applications.
- Browse more specific use cases on the community tutorials page.