Object tracking tracks objects detected in an input video. To make an object tracking request, call the annotate method and specify OBJECT_TRACKING in the features field.
For entities and spatial locations that are detected in a video or video segments, an object tracking request annotates the video with the appropriate labels for these entities and spatial locations. For example, a video of vehicles crossing a traffic signal might produce labels such as "car", "truck", "bike", "tires", "lights", "window", and so on. Each label can include a series of bounding boxes, where each bounding box has an associated time segment containing a time offset that indicates the duration offset from the beginning of the video. The annotation also contains additional entity information, including an entity ID that you can use to find more information about that entity in the Google Knowledge Graph Search API.
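Because the entity ID is a Knowledge Graph machine ID, you can pass it to the Knowledge Graph Search API. As an illustration, here is a minimal Python sketch, assuming you have your own Knowledge Graph Search API key; the key value is a placeholder, and the entity ID /m/0k4j ("car") is taken from the sample response later on this page:
import json
import urllib.parse
import urllib.request

# Placeholder API key; the Knowledge Graph Search API requires your own key.
api_key = "YOUR_API_KEY"
params = urllib.parse.urlencode({"ids": "/m/0k4j", "key": api_key})
url = f"https://kgsearch.googleapis.com/v1/entities:search?{params}"

# Look up the entity ID returned in an object tracking annotation.
with urllib.request.urlopen(url) as response:
    print(json.dumps(json.loads(response.read()), indent=2))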
Object tracking versus label detection
Object tracking differs from label detection: label detection provides labels without bounding boxes, whereas object tracking provides labels for the individual objects present in a given video, along with a bounding box for each object instance at each time step.
Multiple instances of the same object type are assigned to different instances of the ObjectTrackingAnnotation message, and all instances of a given object track are kept in their own ObjectTrackingAnnotation instance. For example, if a red car and a blue car appear in a video for 5 seconds, the tracking request should return two instances of ObjectTrackingAnnotation. The first instance contains the locations of one of the two cars, for example the red car, while the second contains the locations of the other car.
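To make this concrete, here is a short sketch, assuming the AnnotateVideoResponse returned by the Python client sample later on this page; it counts how many separate tracks came back per entity description:
from collections import Counter

# "result" is the AnnotateVideoResponse from operation.result() in the
# Python sample below. Each ObjectTrackingAnnotation is one track, so two
# cars visible at the same time produce two "car" entries, not one.
track_counts = Counter(
    annotation.entity.description
    for annotation in result.annotation_results[0].object_annotations
)
for description, count in track_counts.items():
    print(f"{description}: {count} track(s)")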
Request object tracking for a video on Cloud Storage
The following example demonstrates object tracking on a file located in Cloud Storage.
REST
Send the process request
The following shows how to send a POST request to the annotate method. The example uses the access token for a service account set up for the project using the Google Cloud CLI. For instructions on installing the Google Cloud CLI, setting up a project with a service account, and obtaining an access token, see the Video Intelligence quickstart.
Before using any of the request data, make the following replacements:
- INPUT_URI: STORAGE_URI
  For example:
  "inputUri": "gs://cloud-videointelligence-demo/assistant.mp4",
- PROJECT_NUMBER: the numeric identifier for your Google Cloud project
HTTP method and URL:
POST https://videointelligence.googleapis.com/v1/videos:annotate
Request JSON body:
{ "inputUri": "STORAGE_URI", "features": ["OBJECT_TRACKING"] }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://videointelligence.googleapis.com/v1/videos:annotate"
PowerShell (Windows)
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://videointelligence.googleapis.com/v1/videos:annotate" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "name": "projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID" }
If the request is successful, the Video Intelligence API returns the name of your operation. The preceding example shows such a response, where PROJECT_NUMBER is the number of your project and OPERATION_ID is the ID of the long-running operation created for the request.
Get the results
To get the results of your request, send a GET request using the operation name returned from the call to videos:annotate, as shown in the following example.
Before using any of the request data, make the following replacements:
- OPERATION_NAME: the name of the operation as returned by the Video Intelligence API. The operation name has the format projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID
- PROJECT_NUMBER: the numeric identifier for your Google Cloud project
HTTP method and URL:
GET https://videointelligence.googleapis.com/v1/OPERATION_NAME
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER" \
"https://videointelligence.googleapis.com/v1/OPERATION_NAME"
PowerShell (Windows)
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://videointelligence.googleapis.com/v1/OPERATION_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
// Object tracking annotations are returned as an objectAnnotations list.
{
"name": "projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID",
"metadata": {
"@type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoProgress",
"annotationProgress": [
{
"inputUri": "/cloud-ml-sandbox/video/chicago.mp4",
"progressPercent": 100,
"startTime": "2019-12-21T16:56:46.755199Z",
"updateTime": "2019-12-21T16:59:17.911197Z"
}
]
},
"done": true,
"response": {
"@type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoResponse",
"annotationResults": [
{
"inputUri": "/cloud-ml-sandbox/video/chicago.mp4",
"objectAnnotations": [
{
"entity": {
"entityId": "/m/0k4j",
"description": "car",
"languageCode": "en-US"
},
"frames": [
{
"normalizedBoundingBox": {
"left": 0.2672763,
"top": 0.5677657,
"right": 0.4388713,
"bottom": 0.7623171
},
"timeOffset": "0s"
},
{
"normalizedBoundingBox": {
"left": 0.26920167,
"top": 0.5659805,
"right": 0.44331276,
"bottom": 0.76780635
},
"timeOffset": "0.100495s"
},
...
{
"normalizedBoundingBox": {
"left": 0.83573246,
"top": 0.6645812,
"right": 1,
"bottom": 0.99865407
},
"timeOffset": "2.311402s"
}
],
"segment": {
"startTimeOffset": "0s",
"endTimeOffset": "2.311402s"
},
"confidence": 0.99488896
},
...
{
"entity": {
"entityId": "/m/0cgh4",
"description": "building",
"languageCode": "en-US"
},
"frames": [
{
"normalizedBoundingBox": {
"left": 0.12340179,
"top": 0.010383379,
"right": 0.21914443,
"bottom": 0.5591795
},
"timeOffset": "0s"
},
{
"normalizedBoundingBox": {
"left": 0.12340179,
"top": 0.009684974,
"right": 0.22915152,
"bottom": 0.56070584
},
"timeOffset": "0.100495s"
},
...
{
"normalizedBoundingBox": {
"left": 0.12340179,
"top": 0.008624528,
"right": 0.22723165,
"bottom": 0.56158626
},
"timeOffset": "0.401983s"
}
],
"segment": {
"startTimeOffset": "0s",
"endTimeOffset": "0.401983s"
},
"confidence": 0.33914912
},
...
{
"entity": {
"entityId": "/m/0cgh4",
"description": "building",
"languageCode": "en-US"
},
"frames": [
{
"normalizedBoundingBox": {
"left": 0.79324204,
"top": 0.0006896425,
"right": 0.99659824,
"bottom": 0.5324423
},
"timeOffset": "37.585421s"
},
{
"normalizedBoundingBox": {
"left": 0.78935236,
"top": 0.0011992548,
"right": 0.99659824,
"bottom": 0.5374946
},
"timeOffset": "37.685917s"
},
...
{
"normalizedBoundingBox": {
"left": 0.79404694,
"right": 0.99659824,
"bottom": 0.5280966
},
"timeOffset": "38.590379s"
}
],
"segment": {
"startTimeOffset": "37.585421s",
"endTimeOffset": "38.590379s"
},
"confidence": 0.3415429
}
]
}
]
}
}
Download annotation results
Copy the annotation from the source to your destination bucket (see Copy files and objects):
gsutil cp gcs_uri gs://my-bucket
Note: If an output gcs uri is provided by the user, then the annotation is stored in that gcs uri.
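Alternatively, you can have the API write the annotation JSON to Cloud Storage for you by setting an output URI on the request. A minimal sketch with the Python client follows; the bucket and object names are placeholders:
from google.cloud import videointelligence

video_client = videointelligence.VideoIntelligenceServiceClient()
operation = video_client.annotate_video(
    request={
        "features": [videointelligence.Feature.OBJECT_TRACKING],
        "input_uri": "gs://cloud-samples-data/video/cat.mp4",
        # Placeholder destination; annotations are written here as JSON.
        "output_uri": "gs://my-bucket/annotations/cat.json",
    }
)
operation.result(timeout=500)  # when done, results are in output_uri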
Go
import (
"context"
"fmt"
"io"
video "cloud.google.com/go/videointelligence/apiv1"
videopb "cloud.google.com/go/videointelligence/apiv1/videointelligencepb"
"github.com/golang/protobuf/ptypes"
)
// objectTrackingGCS analyzes a video and extracts entities with their bounding boxes.
func objectTrackingGCS(w io.Writer, gcsURI string) error {
// gcsURI := "gs://cloud-samples-data/video/cat.mp4"
ctx := context.Background()
// Creates a client.
client, err := video.NewClient(ctx)
if err != nil {
return fmt.Errorf("video.NewClient: %w", err)
}
defer client.Close()
op, err := client.AnnotateVideo(ctx, &videopb.AnnotateVideoRequest{
InputUri: gcsURI,
Features: []videopb.Feature{
videopb.Feature_OBJECT_TRACKING,
},
})
if err != nil {
return fmt.Errorf("AnnotateVideo: %w", err)
}
resp, err := op.Wait(ctx)
if err != nil {
return fmt.Errorf("Wait: %w", err)
}
// Only one video was processed, so get the first result.
result := resp.GetAnnotationResults()[0]
for _, annotation := range result.ObjectAnnotations {
fmt.Fprintf(w, "Description: %q\n", annotation.Entity.GetDescription())
if len(annotation.Entity.EntityId) > 0 {
fmt.Fprintf(w, "\tEntity ID: %q\n", annotation.Entity.GetEntityId())
}
segment := annotation.GetSegment()
start, _ := ptypes.Duration(segment.GetStartTimeOffset())
end, _ := ptypes.Duration(segment.GetEndTimeOffset())
fmt.Fprintf(w, "\tSegment: %v to %v\n", start, end)
fmt.Fprintf(w, "\tConfidence: %f\n", annotation.GetConfidence())
// Here we print only the bounding box of the first frame in this segment.
frame := annotation.GetFrames()[0]
seconds := float32(frame.GetTimeOffset().GetSeconds())
nanos := float32(frame.GetTimeOffset().GetNanos())
fmt.Fprintf(w, "\tTime offset of the first frame: %fs\n", seconds+nanos/1e9)
box := frame.GetNormalizedBoundingBox()
fmt.Fprintf(w, "\tBounding box position:\n")
fmt.Fprintf(w, "\t\tleft : %f\n", box.GetLeft())
fmt.Fprintf(w, "\t\ttop : %f\n", box.GetTop())
fmt.Fprintf(w, "\t\tright : %f\n", box.GetRight())
fmt.Fprintf(w, "\t\tbottom: %f\n", box.GetBottom())
}
return nil
}
Java
/**
* Track objects in a video.
*
* @param gcsUri the path to the video file to analyze.
*/
public static VideoAnnotationResults trackObjectsGcs(String gcsUri) throws Exception {
try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
// Create the request
AnnotateVideoRequest request =
AnnotateVideoRequest.newBuilder()
.setInputUri(gcsUri)
.addFeatures(Feature.OBJECT_TRACKING)
.setLocationId("us-east1")
.build();
// asynchronously perform object tracking on videos
OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
client.annotateVideoAsync(request);
System.out.println("Waiting for operation to complete...");
// The first result is retrieved because a single video was processed.
AnnotateVideoResponse response = future.get(450, TimeUnit.SECONDS);
VideoAnnotationResults results = response.getAnnotationResults(0);
// Get only the first annotation for demo purposes.
ObjectTrackingAnnotation annotation = results.getObjectAnnotations(0);
System.out.println("Confidence: " + annotation.getConfidence());
if (annotation.hasEntity()) {
Entity entity = annotation.getEntity();
System.out.println("Entity description: " + entity.getDescription());
System.out.println("Entity id:: " + entity.getEntityId());
}
if (annotation.hasSegment()) {
VideoSegment videoSegment = annotation.getSegment();
Duration startTimeOffset = videoSegment.getStartTimeOffset();
Duration endTimeOffset = videoSegment.getEndTimeOffset();
// Display the segment time in seconds, 1e9 converts nanos to seconds
System.out.println(
String.format(
"Segment: %.2fs to %.2fs",
startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9,
endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
}
// Here we print only the bounding box of the first frame in this segment.
ObjectTrackingFrame frame = annotation.getFrames(0);
// Display the offset time in seconds, 1e9 converts nanos to seconds
Duration timeOffset = frame.getTimeOffset();
System.out.println(
String.format(
"Time offset of the first frame: %.2fs",
timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
// Display the bounding box of the detected object
NormalizedBoundingBox normalizedBoundingBox = frame.getNormalizedBoundingBox();
System.out.println("Bounding box position:");
System.out.println("\tleft: " + normalizedBoundingBox.getLeft());
System.out.println("\ttop: " + normalizedBoundingBox.getTop());
System.out.println("\tright: " + normalizedBoundingBox.getRight());
System.out.println("\tbottom: " + normalizedBoundingBox.getBottom());
return results;
}
}
Node.js
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/video-intelligence');
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();
/**
* TODO(developer): Uncomment the following line before running the sample.
*/
// const gcsUri = 'GCS URI of the video to analyze, e.g. gs://my-bucket/my-video.mp4';
const request = {
inputUri: gcsUri,
features: ['OBJECT_TRACKING'],
//recommended to use us-east1 for the best latency due to different types of processors used in this region and others
locationId: 'us-east1',
};
// Detects objects in a video
const [operation] = await video.annotateVideo(request);
console.log('Waiting for operation to complete...');
const results = await operation.promise();
//Gets annotations for video
const annotations = results[0].annotationResults[0];
const objects = annotations.objectAnnotations;
objects.forEach(object => {
console.log(`Entity description: ${object.entity.description}`);
console.log(`Entity id: ${object.entity.entityId}`);
const time = object.segment;
console.log(
`Segment: ${time.startTimeOffset.seconds || 0}` +
`.${(time.startTimeOffset.nanos / 1e6).toFixed(0)}s to ${
time.endTimeOffset.seconds || 0
}.` +
`${(time.endTimeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log(`Confidence: ${object.confidence}`);
const frame = object.frames[0];
const box = frame.normalizedBoundingBox;
const timeOffset = frame.timeOffset;
console.log(
`Time offset for the first frame: ${timeOffset.seconds || 0}` +
`.${(timeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log('Bounding box position:');
console.log(` left :${box.left}`);
console.log(` top :${box.top}`);
console.log(` right :${box.right}`);
console.log(` bottom :${box.bottom}`);
});
Python
"""Object tracking in a video stored on GCS."""
from google.cloud import videointelligence
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.Feature.OBJECT_TRACKING]
operation = video_client.annotate_video(
request={"features": features, "input_uri": gcs_uri}
)
print("\nProcessing video for object annotations.")
result = operation.result(timeout=500)
print("\nFinished processing.\n")
# The first result is retrieved because a single video was processed.
object_annotations = result.annotation_results[0].object_annotations
for object_annotation in object_annotations:
print("Entity description: {}".format(object_annotation.entity.description))
if object_annotation.entity.entity_id:
print("Entity id: {}".format(object_annotation.entity.entity_id))
print(
"Segment: {}s to {}s".format(
object_annotation.segment.start_time_offset.seconds
+ object_annotation.segment.start_time_offset.microseconds / 1e6,
object_annotation.segment.end_time_offset.seconds
+ object_annotation.segment.end_time_offset.microseconds / 1e6,
)
)
print("Confidence: {}".format(object_annotation.confidence))
# Here we print only the bounding box of the first frame in the segment
frame = object_annotation.frames[0]
box = frame.normalized_bounding_box
print(
"Time offset of the first frame: {}s".format(
frame.time_offset.seconds + frame.time_offset.microseconds / 1e6
)
)
print("Bounding box position:")
print("\tleft : {}".format(box.left))
print("\ttop : {}".format(box.top))
print("\tright : {}".format(box.right))
print("\tbottom: {}".format(box.bottom))
print("\n")
Other languages
C#: Follow the C# setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for .NET.
PHP: Follow the PHP setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for PHP.
Ruby: Follow the Ruby setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for Ruby.
Request object tracking for video from a local file
The following example demonstrates object tracking on a file stored locally.
REST
Send the process request
To perform annotation on a local video file, base64-encode the contents of the video file. Include the base64-encoded contents in the inputContent field of the request. For information on how to base64-encode the contents of a video file, see Base64 Encoding.
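For example, a short Python sketch that base64-encodes a local video and writes a request.json body for the REST call below (the file name is a placeholder):
import base64
import json

# Read the local video and base64-encode it for the JSON request body.
with open("my-video.mp4", "rb") as video_file:
    encoded_content = base64.b64encode(video_file.read()).decode("utf-8")

request_body = {"inputContent": encoded_content, "features": ["OBJECT_TRACKING"]}
with open("request.json", "w") as out:
    json.dump(request_body, out)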
The following shows how to send a POST request to the videos:annotate method. The example uses the access token for a service account set up for the project using the Google Cloud CLI. For instructions on installing the Google Cloud CLI, setting up a project with a service account, and obtaining an access token, see the Video Intelligence quickstart.
Before using any of the request data, make the following replacements:
- inputContent: BASE64_ENCODED_CONTENT
  For example: "UklGRg41AwBBVkkgTElTVAwBAABoZHJsYXZpaDgAAAA1ggAAxPMBAAAAAAAQCAA..."
- PROJECT_NUMBER: the numeric identifier for your Google Cloud project
HTTP method and URL:
POST https://videointelligence.googleapis.com/v1/videos:annotate
Request JSON body:
{ "inputContent": "BASE64_ENCODED_CONTENT", "features": ["OBJECT_TRACKING"] }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://videointelligence.googleapis.com/v1/videos:annotate"
PowerShell (Windows)
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://videointelligence.googleapis.com/v1/videos:annotate" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "name": "projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID" }
If the request is successful, Video Intelligence returns the name of your operation. The preceding example shows such a response, where PROJECT_NUMBER is the number of your project and OPERATION_ID is the ID of the long-running operation created for the request.
Get the results
To get the results of your request, send a GET request using the operation name returned from the call to videos:annotate, as shown in the following example.
Before using any of the request data, make the following replacements:
- OPERATION_NAME: the name of the operation as returned by the Video Intelligence API. The operation name has the format projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID
- PROJECT_NUMBER: the numeric identifier for your Google Cloud project
HTTP method and URL:
GET https://videointelligence.googleapis.com/v1/OPERATION_NAME
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER" \
"https://videointelligence.googleapis.com/v1/OPERATION_NAME"
PowerShell (Windows)
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://videointelligence.googleapis.com/v1/OPERATION_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
// Object tracking annotations are returned as an objectAnnotations list.
{
"name": "projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID",
"metadata": {
"@type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoProgress",
"annotationProgress": [
{
"inputContent": "UklGRg41AwBBVkkgTElTVAwBAABoZHJsYXZpaDgAAAA1ggAAxPMBAAAAAAAQCAA...",
"progressPercent": 100,
"startTime": "2018-06-21T16:56:46.755199Z",
"updateTime": "2018-06-21T16:59:17.911197Z"
}
]
},
"done": true,
"response": {
"@type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoResponse",
"annotationResults": [
{
"inputContent": "/cloud-ml-sandbox/video/chicago.mp4",
"objectAnnotations": [
{
"entity": {
"entityId": "/m/0k4j",
"description": "car",
"languageCode": "en-US"
},
"frames": [
{
"normalizedBoundingBox": {
"left": 0.2672763,
"top": 0.5677657,
"right": 0.4388713,
"bottom": 0.7623171
},
"timeOffset": "0s"
},
{
"normalizedBoundingBox": {
"left": 0.26920167,
"top": 0.5659805,
"right": 0.44331276,
"bottom": 0.76780635
},
"timeOffset": "0.100495s"
},
...
{
"normalizedBoundingBox": {
"left": 0.83573246,
"top": 0.6645812,
"right": 1,
"bottom": 0.99865407
},
"timeOffset": "2.311402s"
}
],
"segment": {
"startTimeOffset": "0s",
"endTimeOffset": "2.311402s"
},
"confidence": 0.99488896
},
...
{
"entity": {
"entityId": "/m/0cgh4",
"description": "building",
"languageCode": "en-US"
},
"frames": [
{
"normalizedBoundingBox": {
"left": 0.12340179,
"top": 0.010383379,
"right": 0.21914443,
"bottom": 0.5591795
},
"timeOffset": "0s"
},
{
"normalizedBoundingBox": {
"left": 0.12340179,
"top": 0.009684974,
"right": 0.22915152,
"bottom": 0.56070584
},
"timeOffset": "0.100495s"
},
...
{
"normalizedBoundingBox": {
"left": 0.12340179,
"top": 0.008624528,
"right": 0.22723165,
"bottom": 0.56158626
},
"timeOffset": "0.401983s"
}
],
"segment": {
"startTimeOffset": "0s",
"endTimeOffset": "0.401983s"
},
"confidence": 0.33914912
},
...
{
"entity": {
"entityId": "/m/0cgh4",
"description": "building",
"languageCode": "en-US"
},
"frames": [
{
"normalizedBoundingBox": {
"left": 0.79324204,
"top": 0.0006896425,
"right": 0.99659824,
"bottom": 0.5324423
},
"timeOffset": "37.585421s"
},
{
"normalizedBoundingBox": {
"left": 0.78935236,
"top": 0.0011992548,
"right": 0.99659824,
"bottom": 0.5374946
},
"timeOffset": "37.685917s"
},
...
{
"normalizedBoundingBox": {
"left": 0.79404694,
"right": 0.99659824,
"bottom": 0.5280966
},
"timeOffset": "38.590379s"
}
],
"segment": {
"startTimeOffset": "37.585421s",
"endTimeOffset": "38.590379s"
},
"confidence": 0.3415429
}
]
}
]
}
}
Go
import (
"context"
"fmt"
"io"
"io/ioutil"
video "cloud.google.com/go/videointelligence/apiv1"
videopb "cloud.google.com/go/videointelligence/apiv1/videointelligencepb"
"github.com/golang/protobuf/ptypes"
)
// objectTracking analyzes a video and extracts entities with their bounding boxes.
func objectTracking(w io.Writer, filename string) error {
// filename := "../testdata/cat.mp4"
ctx := context.Background()
// Creates a client.
client, err := video.NewClient(ctx)
if err != nil {
return fmt.Errorf("video.NewClient: %w", err)
}
defer client.Close()
fileBytes, err := ioutil.ReadFile(filename)
if err != nil {
return err
}
op, err := client.AnnotateVideo(ctx, &videopb.AnnotateVideoRequest{
InputContent: fileBytes,
Features: []videopb.Feature{
videopb.Feature_OBJECT_TRACKING,
},
})
if err != nil {
return fmt.Errorf("AnnotateVideo: %w", err)
}
resp, err := op.Wait(ctx)
if err != nil {
return fmt.Errorf("Wait: %w", err)
}
// Only one video was processed, so get the first result.
result := resp.GetAnnotationResults()[0]
for _, annotation := range result.ObjectAnnotations {
fmt.Fprintf(w, "Description: %q\n", annotation.Entity.GetDescription())
if len(annotation.Entity.EntityId) > 0 {
fmt.Fprintf(w, "\tEntity ID: %q\n", annotation.Entity.GetEntityId())
}
segment := annotation.GetSegment()
start, _ := ptypes.Duration(segment.GetStartTimeOffset())
end, _ := ptypes.Duration(segment.GetEndTimeOffset())
fmt.Fprintf(w, "\tSegment: %v to %v\n", start, end)
fmt.Fprintf(w, "\tConfidence: %f\n", annotation.GetConfidence())
// Here we print only the bounding box of the first frame in this segment.
frame := annotation.GetFrames()[0]
seconds := float32(frame.GetTimeOffset().GetSeconds())
nanos := float32(frame.GetTimeOffset().GetNanos())
fmt.Fprintf(w, "\tTime offset of the first frame: %fs\n", seconds+nanos/1e9)
box := frame.GetNormalizedBoundingBox()
fmt.Fprintf(w, "\tBounding box position:\n")
fmt.Fprintf(w, "\t\tleft : %f\n", box.GetLeft())
fmt.Fprintf(w, "\t\ttop : %f\n", box.GetTop())
fmt.Fprintf(w, "\t\tright : %f\n", box.GetRight())
fmt.Fprintf(w, "\t\tbottom: %f\n", box.GetBottom())
}
return nil
}
Java
/**
* Track objects in a video.
*
* @param filePath the path to the video file to analyze.
*/
public static VideoAnnotationResults trackObjects(String filePath) throws Exception {
try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
// Read file
Path path = Paths.get(filePath);
byte[] data = Files.readAllBytes(path);
// Create the request
AnnotateVideoRequest request =
AnnotateVideoRequest.newBuilder()
.setInputContent(ByteString.copyFrom(data))
.addFeatures(Feature.OBJECT_TRACKING)
.setLocationId("us-east1")
.build();
// asynchronously perform object tracking on videos
OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
client.annotateVideoAsync(request);
System.out.println("Waiting for operation to complete...");
// The first result is retrieved because a single video was processed.
AnnotateVideoResponse response = future.get(450, TimeUnit.SECONDS);
VideoAnnotationResults results = response.getAnnotationResults(0);
// Get only the first annotation for demo purposes.
ObjectTrackingAnnotation annotation = results.getObjectAnnotations(0);
System.out.println("Confidence: " + annotation.getConfidence());
if (annotation.hasEntity()) {
Entity entity = annotation.getEntity();
System.out.println("Entity description: " + entity.getDescription());
System.out.println("Entity id:: " + entity.getEntityId());
}
if (annotation.hasSegment()) {
VideoSegment videoSegment = annotation.getSegment();
Duration startTimeOffset = videoSegment.getStartTimeOffset();
Duration endTimeOffset = videoSegment.getEndTimeOffset();
// Display the segment time in seconds, 1e9 converts nanos to seconds
System.out.println(
String.format(
"Segment: %.2fs to %.2fs",
startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9,
endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
}
// Here we print only the bounding box of the first frame in this segment.
ObjectTrackingFrame frame = annotation.getFrames(0);
// Display the offset time in seconds, 1e9 converts nanos to seconds
Duration timeOffset = frame.getTimeOffset();
System.out.println(
String.format(
"Time offset of the first frame: %.2fs",
timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
// Display the bounding box of the detected object
NormalizedBoundingBox normalizedBoundingBox = frame.getNormalizedBoundingBox();
System.out.println("Bounding box position:");
System.out.println("\tleft: " + normalizedBoundingBox.getLeft());
System.out.println("\ttop: " + normalizedBoundingBox.getTop());
System.out.println("\tright: " + normalizedBoundingBox.getRight());
System.out.println("\tbottom: " + normalizedBoundingBox.getBottom());
return results;
}
}
Node.js
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/video-intelligence');
const fs = require('fs');
const util = require('util');
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();
/**
* TODO(developer): Uncomment the following line before running the sample.
*/
// const path = 'Local file to analyze, e.g. ./my-file.mp4';
// Reads a local video file and converts it to base64
const file = await util.promisify(fs.readFile)(path);
const inputContent = file.toString('base64');
const request = {
inputContent: inputContent,
features: ['OBJECT_TRACKING'],
//recommended to use us-east1 for the best latency due to different types of processors used in this region and others
locationId: 'us-east1',
};
// Detects objects in a video
const [operation] = await video.annotateVideo(request);
console.log('Waiting for operation to complete...');
const results = await operation.promise();
//Gets annotations for video
const annotations = results[0].annotationResults[0];
const objects = annotations.objectAnnotations;
objects.forEach(object => {
console.log(`Entity description: ${object.entity.description}`);
console.log(`Entity id: ${object.entity.entityId}`);
const time = object.segment;
console.log(
`Segment: ${time.startTimeOffset.seconds || 0}` +
`.${(time.startTimeOffset.nanos / 1e6).toFixed(0)}s to ${
time.endTimeOffset.seconds || 0
}.` +
`${(time.endTimeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log(`Confidence: ${object.confidence}`);
const frame = object.frames[0];
const box = frame.normalizedBoundingBox;
const timeOffset = frame.timeOffset;
console.log(
`Time offset for the first frame: ${timeOffset.seconds || 0}` +
`.${(timeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log('Bounding box position:');
console.log(` left :${box.left}`);
console.log(` top :${box.top}`);
console.log(` right :${box.right}`);
console.log(` bottom :${box.bottom}`);
});
Python
"""Object tracking in a local video."""
from google.cloud import videointelligence
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.Feature.OBJECT_TRACKING]
with io.open(path, "rb") as file:
input_content = file.read()
operation = video_client.annotate_video(
request={"features": features, "input_content": input_content}
)
print("\nProcessing video for object annotations.")
result = operation.result(timeout=500)
print("\nFinished processing.\n")
# The first result is retrieved because a single video was processed.
object_annotations = result.annotation_results[0].object_annotations
# Get only the first annotation for demo purposes.
object_annotation = object_annotations[0]
print("Entity description: {}".format(object_annotation.entity.description))
if object_annotation.entity.entity_id:
print("Entity id: {}".format(object_annotation.entity.entity_id))
print(
"Segment: {}s to {}s".format(
object_annotation.segment.start_time_offset.seconds
+ object_annotation.segment.start_time_offset.microseconds / 1e6,
object_annotation.segment.end_time_offset.seconds
+ object_annotation.segment.end_time_offset.microseconds / 1e6,
)
)
print("Confidence: {}".format(object_annotation.confidence))
# Here we print only the bounding box of the first frame in this segment
frame = object_annotation.frames[0]
box = frame.normalized_bounding_box
print(
"Time offset of the first frame: {}s".format(
frame.time_offset.seconds + frame.time_offset.microseconds / 1e6
)
)
print("Bounding box position:")
print("\tleft : {}".format(box.left))
print("\ttop : {}".format(box.top))
print("\tright : {}".format(box.right))
print("\tbottom: {}".format(box.bottom))
print("\n")
Other languages
C#: Follow the C# setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for .NET.
PHP: Follow the PHP setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for PHP.
Ruby: Follow the Ruby setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for Ruby.