Face detection detects multiple faces within an image, along with the associated key facial attributes such as emotional state or wearing headwear.
Recognition of specific individuals is not supported.
Try it for yourself
If you're new to Google Cloud, create an account to evaluate how Cloud Vision API performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
Try Cloud Vision API free
Face detection requests
Set up your GCP project and authentication
Detect Faces in a local image
The Vision API can perform feature detection on a local image file by sending the contents of the image file as a base64 encoded string in the body of your request.
REST & CMD LINE
Before using any of the request data below, make the following replacements:
- base64-encoded-image: the base64 representation (ASCII string) of your binary image data. This string should look similar to the following:
  /9j/4QAYRXhpZgAA...9tAVx/zDQDlGxn//2Q==
HTTP method and URL:
POST https://vision.googleapis.com/v1/images:annotate
Request JSON body:
{
  "requests": [
    {
      "image": {
        "content": "base64-encoded-image"
      },
      "features": [
        {
          "maxResults": 10,
          "type": "FACE_DETECTION"
        }
      ]
    }
  ]
}
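The request body can also be assembled programmatically. A minimal Python sketch that base64-encodes a local image file into the JSON structure shown above (the file path and helper name are illustrative):

```python
import base64
import json


def build_face_detection_request(image_path, max_results=10):
    """Build the images:annotate request body for FACE_DETECTION
    from a local image file."""
    with open(image_path, "rb") as f:
        # The API expects the raw image bytes as a base64 ASCII string.
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {
        "requests": [
            {
                "image": {"content": encoded},
                "features": [
                    {"maxResults": max_results, "type": "FACE_DETECTION"}
                ],
            }
        ]
    }


# For example, write the body to request.json for use with curl:
# with open("request.json", "w") as f:
#     json.dump(build_face_detection_request("face.jpg"), f)
```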
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://vision.googleapis.com/v1/images:annotate
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content
If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.
A FACE_DETECTION response includes bounding boxes for all detected faces, landmarks detected on the faces (eyes, nose, mouth, and so on), and confidence ratings for face and image properties (joy, sorrow, anger, surprise, and so on).
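To give a sense of the response shape, the sketch below pulls a few fields out of a FACE_DETECTION response. The sample dictionary is abbreviated and illustrative, not actual API output; real responses contain many more landmarks and fields per face:

```python
# Illustrative, abbreviated FACE_DETECTION response (not real API output).
sample_response = {
    "responses": [
        {
            "faceAnnotations": [
                {
                    "boundingPoly": {
                        "vertices": [
                            {"x": 10, "y": 20}, {"x": 110, "y": 20},
                            {"x": 110, "y": 140}, {"x": 10, "y": 140},
                        ]
                    },
                    "joyLikelihood": "VERY_LIKELY",
                    "detectionConfidence": 0.98,
                }
            ]
        }
    ]
}


def summarize_faces(response):
    """Return (bounding-box vertices, joy likelihood, confidence)
    for each detected face in an images:annotate response."""
    faces = response["responses"][0].get("faceAnnotations", [])
    return [
        (
            # Vertices may omit x or y when the value is 0.
            [(v.get("x", 0), v.get("y", 0)) for v in face["boundingPoly"]["vertices"]],
            face.get("joyLikelihood", "UNKNOWN"),
            face.get("detectionConfidence", 0.0),
        )
        for face in faces
    ]
```

Note that likelihood fields such as joyLikelihood are returned as enum strings (UNKNOWN, VERY_UNLIKELY through VERY_LIKELY) rather than raw probabilities.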
C#
Before trying this sample, follow the C# setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision C# API reference documentation.
Go
Before trying this sample, follow the Go setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Node.js API reference documentation.
PHP
Before trying this sample, follow the PHP setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision PHP API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Python API reference documentation.
Ruby
Before trying this sample, follow the Ruby setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Ruby API reference documentation.
Detect Faces in a remote image
The Vision API can perform feature detection directly on an image file located in Cloud Storage or on the web, without requiring you to send the contents of the image file in the body of your request.
REST & CMD LINE
Before using any of the request data below, make the following replacements:
- cloud-storage-image-uri: the path to a valid image file in a Cloud Storage bucket. You must have at least read access to the file. Example:
  gs://cloud-samples-data/vision/face/faces.jpeg
HTTP method and URL:
POST https://vision.googleapis.com/v1/images:annotate
Request JSON body:
{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "cloud-storage-image-uri"
        }
      },
      "features": [
        {
          "maxResults": 10,
          "type": "FACE_DETECTION"
        }
      ]
    }
  ]
}
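Building the remote variant of the request body differs from the local case only in the image field: a source.imageUri replaces the base64 content. A minimal Python sketch (the helper name is illustrative):

```python
def build_remote_face_detection_request(image_uri, max_results=10):
    """Build the images:annotate request body for an image hosted
    in Cloud Storage (gs://...) or on the web (http/https)."""
    return {
        "requests": [
            {
                # No base64 encoding needed; the API fetches the image itself.
                "image": {"source": {"imageUri": image_uri}},
                "features": [
                    {"maxResults": max_results, "type": "FACE_DETECTION"}
                ],
            }
        ]
    }
```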
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://vision.googleapis.com/v1/images:annotate
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content
If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.
A FACE_DETECTION response includes bounding boxes for all detected faces, landmarks detected on the faces (eyes, nose, mouth, and so on), and confidence ratings for face and image properties (joy, sorrow, anger, surprise, and so on).
C#
Before trying this sample, follow the C# setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision C# API reference documentation.
Go
Before trying this sample, follow the Go setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Node.js API reference documentation.
PHP
Before trying this sample, follow the PHP setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision PHP API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Python API reference documentation.
Ruby
Before trying this sample, follow the Ruby setup instructions in the Vision Quickstart Using Client Libraries. For more information, see the Vision Ruby API reference documentation.
gcloud
To perform face detection, use the gcloud ml vision detect-faces command as shown in the following example:
gcloud ml vision detect-faces gs://cloud-samples-data/vision/face/faces.jpeg
Try it
Try face detection below. You can use the image specified already (gs://cloud-samples-data/vision/face/faces.jpeg) or specify your own image in its place. Send the request by selecting Execute.
Request body:
{
  "requests": [
    {
      "features": [
        {
          "maxResults": 10,
          "type": "FACE_DETECTION"
        }
      ],
      "image": {
        "source": {
          "imageUri": "gs://cloud-samples-data/vision/face/faces.jpeg"
        }
      }
    }
  ]
}