INPUT_URI: The Cloud Storage bucket that contains the file you want to annotate, including the file name. Must start with gs://. For example: `'inputUri': 'gs://cloud-samples-data/video/googlework_short.mp4'`
PROJECT_NUMBER: The numeric identifier for your Google Cloud project
HTTP method and URL:
POST https://videointelligence.googleapis.com/v1/videos:annotate
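For the Cloud Storage case, the request body pairs the inputUri with the FACE_DETECTION feature. A minimal sketch of the JSON body, mirroring the local-file example later on this page:

{
  "inputUri": "gs://cloud-samples-data/video/googlework_short.mp4",
  "features": ["FACE_DETECTION"]
}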
Face detection annotations are returned as a faceDetectionAnnotations list.
Note: The done field is returned only when its value is True. The field is not included in responses for operations that have not completed.
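To check progress, you can poll the long-running operation that videos:annotate returns. A sketch using curl, assuming OPERATION_NAME is the name field from the annotate response (for example, projects/PROJECT_NUMBER/locations/LOCATION_ID/operations/OPERATION_ID):

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://videointelligence.googleapis.com/v1/OPERATION_NAME"

When done is true, the response contains the annotation results.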
Java
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
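A common way to set up Application Default Credentials on a local machine is with the gcloud CLI:

gcloud auth application-default login

The client libraries in the samples below pick up these credentials automatically.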
import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.videointelligence.v1.AnnotateVideoProgress;
import com.google.cloud.videointelligence.v1.AnnotateVideoRequest;
import com.google.cloud.videointelligence.v1.AnnotateVideoResponse;
import com.google.cloud.videointelligence.v1.DetectedAttribute;
import com.google.cloud.videointelligence.v1.FaceDetectionAnnotation;
import com.google.cloud.videointelligence.v1.FaceDetectionConfig;
import com.google.cloud.videointelligence.v1.Feature;
import com.google.cloud.videointelligence.v1.TimestampedObject;
import com.google.cloud.videointelligence.v1.Track;
import com.google.cloud.videointelligence.v1.VideoAnnotationResults;
import com.google.cloud.videointelligence.v1.VideoContext;
import com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient;
import com.google.cloud.videointelligence.v1.VideoSegment;

public class DetectFacesGcs {

  public static void detectFacesGcs() throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String gcsUri = "gs://cloud-samples-data/video/googlework_short.mp4";
    detectFacesGcs(gcsUri);
  }

  // Detects faces in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.
  public static void detectFacesGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient videoIntelligenceServiceClient =
        VideoIntelligenceServiceClient.create()) {
      FaceDetectionConfig faceDetectionConfig =
          FaceDetectionConfig.newBuilder()
              // Must set includeBoundingBoxes to true to get facial attributes.
              .setIncludeBoundingBoxes(true)
              .setIncludeAttributes(true)
              .build();
      VideoContext videoContext =
          VideoContext.newBuilder().setFaceDetectionConfig(faceDetectionConfig).build();

      AnnotateVideoRequest request =
          AnnotateVideoRequest.newBuilder()
              .setInputUri(gcsUri)
              .addFeatures(Feature.FACE_DETECTION)
              .setVideoContext(videoContext)
              .build();

      // Detects faces in a video
      OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
          videoIntelligenceServiceClient.annotateVideoAsync(request);

      System.out.println("Waiting for operation to complete...");

      AnnotateVideoResponse response = future.get();

      // Gets annotations for video
      VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);

      // Annotations for list of people detected, tracked and recognized in video.
      for (FaceDetectionAnnotation faceDetectionAnnotation :
          annotationResult.getFaceDetectionAnnotationsList()) {
        System.out.print("Face detected:\n");
        for (Track track : faceDetectionAnnotation.getTracksList()) {
          VideoSegment segment = track.getSegment();
          System.out.printf(
              "\tStart: %d.%.0fs\n",
              segment.getStartTimeOffset().getSeconds(),
              segment.getStartTimeOffset().getNanos() / 1e6);
          System.out.printf(
              "\tEnd: %d.%.0fs\n",
              segment.getEndTimeOffset().getSeconds(),
              segment.getEndTimeOffset().getNanos() / 1e6);

          // Each segment includes timestamped objects that
          // include characteristics of the face detected.
          TimestampedObject firstTimestampedObject = track.getTimestampedObjects(0);

          for (DetectedAttribute attribute : firstTimestampedObject.getAttributesList()) {
            // Attributes include glasses, headwear, smiling, direction of gaze
            System.out.printf(
                "\tAttribute %s: %s %s\n",
                attribute.getName(), attribute.getValue(), attribute.getConfidence());
          }
        }
      }
    }
  }
}
Node.js
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const gcsUri = 'GCS URI of the video to analyze, e.g. gs://my-bucket/my-video.mp4';

// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/video-intelligence').v1;

// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

async function detectFacesGCS() {
  const request = {
    inputUri: gcsUri,
    features: ['FACE_DETECTION'],
    videoContext: {
      faceDetectionConfig: {
        // Must set includeBoundingBoxes to true to get facial attributes.
        includeBoundingBoxes: true,
        includeAttributes: true,
      },
    },
  };
  // Detects faces in a video
  // We get the first result because we only process 1 video
  const [operation] = await video.annotateVideo(request);
  console.log('Waiting for operation to complete...');
  const results = await operation.promise();

  // Gets annotations for video
  const faceAnnotations =
    results[0].annotationResults[0].faceDetectionAnnotations;
  for (const {tracks} of faceAnnotations) {
    console.log('Face detected:');
    for (const {segment, timestampedObjects} of tracks) {
      console.log(
        `\tStart: ${segment.startTimeOffset.seconds}.` +
          `${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
      );
      console.log(
        `\tEnd: ${segment.endTimeOffset.seconds}.` +
          `${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
      );

      // Each segment includes timestamped objects that
      // include characteristics of the face detected.
      const [firstTimestampedObject] = timestampedObjects;

      for (const {name} of firstTimestampedObject.attributes) {
        // Attributes include 'glasses', 'headwear', 'smiling'.
        console.log(`\tAttribute: ${name}; `);
      }
    }
  }
}

detectFacesGCS();
Python
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
from google.cloud import videointelligence_v1 as videointelligence


def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
    """Detects faces in a video."""

    client = videointelligence.VideoIntelligenceServiceClient()

    # Configure the request
    config = videointelligence.FaceDetectionConfig(
        include_bounding_boxes=True, include_attributes=True
    )
    context = videointelligence.VideoContext(face_detection_config=config)

    # Start the asynchronous request
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.FACE_DETECTION],
            "input_uri": gcs_uri,
            "video_context": context,
        }
    )

    print("\nProcessing video for face detection annotations.")
    result = operation.result(timeout=300)

    print("\nFinished processing.\n")

    # Retrieve the first result, because a single video was processed.
    annotation_result = result.annotation_results[0]

    for annotation in annotation_result.face_detection_annotations:
        print("Face detected:")
        for track in annotation.tracks:
            print(
                "Segment: {}s to {}s".format(
                    track.segment.start_time_offset.seconds
                    + track.segment.start_time_offset.microseconds / 1e6,
                    track.segment.end_time_offset.seconds
                    + track.segment.end_time_offset.microseconds / 1e6,
                )
            )

            # Each segment includes timestamped faces that include
            # characteristics of the face detected.
            # Grab the first timestamped face
            timestamped_object = track.timestamped_objects[0]
            box = timestamped_object.normalized_bounding_box
            print("Bounding box:")
            print("\tleft  : {}".format(box.left))
            print("\ttop   : {}".format(box.top))
            print("\tright : {}".format(box.right))
            print("\tbottom: {}".format(box.bottom))

            # Attributes include glasses, headwear, smiling, direction of gaze
            print("Attributes:")
            for attribute in timestamped_object.attributes:
                print(
                    "\t{}: {} {}".format(
                        attribute.name, attribute.value, attribute.confidence
                    )
                )
The following example uses face detection to look for entities in a video file uploaded from your local machine.
REST
Send the process request
To perform face detection on a local video file, base64-encode the contents of the video file. For information on how to base64-encode the contents of a video file, see Base64 Encoding. Then, make a POST request to the videos:annotate method. Include the base64-encoded content in the inputContent field of the request and specify the FACE_DETECTION feature.
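As a sketch, on Linux the encoding can be produced with the base64 tool (flag syntax differs on macOS, so adjust as needed); your-video.mp4 and input-content.txt are placeholder names:

base64 -w 0 your-video.mp4 > input-content.txt

Paste the resulting text into the inputContent field of the request body.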
The following shows an example of a POST request using curl; the full command appears after the request body below. The example uses the Google Cloud CLI to create an access token. For instructions on installing the gcloud CLI, see the Video Intelligence API quickstart.
Before using any of the request data, make the following replacements:
inputContent: The local video file in binary format, base64-encoded.
For example: 'AAAAGGZ0eXBtcDQyAAAAAGlzb21tcDQyAAGVYW1vb3YAAABsbXZoZAAAAADWvhlR1r4ZUQABX5ABCOxo
AAEAAAEAAAAAAA4...'
PROJECT_NUMBER: The numeric identifier for your Google Cloud project
HTTP method and URL:
POST https://videointelligence.googleapis.com/v1/videos:annotate
Request JSON body:
{
inputContent: "Local video file in binary format",
"features": ["FACE_DETECTION"]
}
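The request can then be sent with curl, as mentioned above. A sketch, assuming the JSON body is saved as request.json and gcloud provides the access token:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "x-goog-user-project: PROJECT_NUMBER" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://videointelligence.googleapis.com/v1/videos:annotate"

The response contains the operation name to poll, as described earlier.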
Java
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.videointelligence.v1.AnnotateVideoProgress;
import com.google.cloud.videointelligence.v1.AnnotateVideoRequest;
import com.google.cloud.videointelligence.v1.AnnotateVideoResponse;
import com.google.cloud.videointelligence.v1.DetectedAttribute;
import com.google.cloud.videointelligence.v1.FaceDetectionAnnotation;
import com.google.cloud.videointelligence.v1.FaceDetectionConfig;
import com.google.cloud.videointelligence.v1.Feature;
import com.google.cloud.videointelligence.v1.TimestampedObject;
import com.google.cloud.videointelligence.v1.Track;
import com.google.cloud.videointelligence.v1.VideoAnnotationResults;
import com.google.cloud.videointelligence.v1.VideoContext;
import com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient;
import com.google.cloud.videointelligence.v1.VideoSegment;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DetectFaces {

  public static void detectFaces() throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String localFilePath = "resources/googlework_short.mp4";
    detectFaces(localFilePath);
  }

  // Detects faces in a video stored in a local file using the Cloud Video Intelligence API.
  public static void detectFaces(String localFilePath) throws Exception {
    try (VideoIntelligenceServiceClient videoIntelligenceServiceClient =
        VideoIntelligenceServiceClient.create()) {
      // Reads a local video file into a ByteString; the client library handles encoding.
      Path path = Paths.get(localFilePath);
      byte[] data = Files.readAllBytes(path);
      ByteString inputContent = ByteString.copyFrom(data);

      FaceDetectionConfig faceDetectionConfig =
          FaceDetectionConfig.newBuilder()
              // Must set includeBoundingBoxes to true to get facial attributes.
              .setIncludeBoundingBoxes(true)
              .setIncludeAttributes(true)
              .build();
      VideoContext videoContext =
          VideoContext.newBuilder().setFaceDetectionConfig(faceDetectionConfig).build();

      AnnotateVideoRequest request =
          AnnotateVideoRequest.newBuilder()
              .setInputContent(inputContent)
              .addFeatures(Feature.FACE_DETECTION)
              .setVideoContext(videoContext)
              .build();

      // Detects faces in a video
      OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
          videoIntelligenceServiceClient.annotateVideoAsync(request);

      System.out.println("Waiting for operation to complete...");

      AnnotateVideoResponse response = future.get();

      // Gets annotations for video
      VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);

      // Annotations for list of faces detected, tracked and recognized in video.
      for (FaceDetectionAnnotation faceDetectionAnnotation :
          annotationResult.getFaceDetectionAnnotationsList()) {
        System.out.print("Face detected:\n");
        for (Track track : faceDetectionAnnotation.getTracksList()) {
          VideoSegment segment = track.getSegment();
          System.out.printf(
              "\tStart: %d.%.0fs\n",
              segment.getStartTimeOffset().getSeconds(),
              segment.getStartTimeOffset().getNanos() / 1e6);
          System.out.printf(
              "\tEnd: %d.%.0fs\n",
              segment.getEndTimeOffset().getSeconds(),
              segment.getEndTimeOffset().getNanos() / 1e6);

          // Each segment includes timestamped objects that
          // include characteristics of the face detected.
          TimestampedObject firstTimestampedObject = track.getTimestampedObjects(0);

          for (DetectedAttribute attribute : firstTimestampedObject.getAttributesList()) {
            // Attributes include glasses, headwear, smiling, direction of gaze
            System.out.printf(
                "\tAttribute %s: %s %s\n",
                attribute.getName(), attribute.getValue(), attribute.getConfidence());
          }
        }
      }
    }
  }
}
Node.js
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const path = 'Local file to analyze, e.g. ./my-file.mp4';

// Imports the Google Cloud Video Intelligence library + Node's fs library
const Video = require('@google-cloud/video-intelligence').v1;
const fs = require('fs');

// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

// Reads a local video file and converts it to base64
const file = fs.readFileSync(path);
const inputContent = file.toString('base64');

async function detectFaces() {
  const request = {
    inputContent: inputContent,
    features: ['FACE_DETECTION'],
    videoContext: {
      faceDetectionConfig: {
        // Must set includeBoundingBoxes to true to get facial attributes.
        includeBoundingBoxes: true,
        includeAttributes: true,
      },
    },
  };
  // Detects faces in a video
  // We get the first result because we only process 1 video
  const [operation] = await video.annotateVideo(request);
  console.log('Waiting for operation to complete...');
  const results = await operation.promise();

  // Gets annotations for video
  const faceAnnotations =
    results[0].annotationResults[0].faceDetectionAnnotations;
  for (const {tracks} of faceAnnotations) {
    console.log('Face detected:');
    for (const {segment, timestampedObjects} of tracks) {
      console.log(
        `\tStart: ${segment.startTimeOffset.seconds}.` +
          `${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
      );
      console.log(
        `\tEnd: ${segment.endTimeOffset.seconds}.` +
          `${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
      );

      // Each segment includes timestamped objects that
      // include characteristics of the face detected.
      const [firstTimestampedObject] = timestampedObjects;

      for (const {name} of firstTimestampedObject.attributes) {
        // Attributes include 'glasses', 'headwear', 'smiling'.
        console.log(`\tAttribute: ${name}; `);
      }
    }
  }
}

detectFaces();
Python
To authenticate to Video Intelligence, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import io

from google.cloud import videointelligence_v1 as videointelligence


def detect_faces(local_file_path="path/to/your/video-file.mp4"):
    """Detects faces in a video from a local file."""

    client = videointelligence.VideoIntelligenceServiceClient()

    with io.open(local_file_path, "rb") as f:
        input_content = f.read()

    # Configure the request
    config = videointelligence.FaceDetectionConfig(
        include_bounding_boxes=True, include_attributes=True
    )
    context = videointelligence.VideoContext(face_detection_config=config)

    # Start the asynchronous request
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.FACE_DETECTION],
            "input_content": input_content,
            "video_context": context,
        }
    )

    print("\nProcessing video for face detection annotations.")
    result = operation.result(timeout=300)

    print("\nFinished processing.\n")

    # Retrieve the first result, because a single video was processed.
    annotation_result = result.annotation_results[0]

    for annotation in annotation_result.face_detection_annotations:
        print("Face detected:")
        for track in annotation.tracks:
            print(
                "Segment: {}s to {}s".format(
                    track.segment.start_time_offset.seconds
                    + track.segment.start_time_offset.microseconds / 1e6,
                    track.segment.end_time_offset.seconds
                    + track.segment.end_time_offset.microseconds / 1e6,
                )
            )

            # Each segment includes timestamped faces that include
            # characteristics of the face detected.
            # Grab the first timestamped face
            timestamped_object = track.timestamped_objects[0]
            box = timestamped_object.normalized_bounding_box
            print("Bounding box:")
            print("\tleft  : {}".format(box.left))
            print("\ttop   : {}".format(box.top))
            print("\tright : {}".format(box.right))
            print("\tbottom: {}".format(box.bottom))

            # Attributes include glasses, headwear, smiling, direction of gaze
            print("Attributes:")
            for attribute in timestamped_object.attributes:
                print(
                    "\t{}: {} {}".format(
                        attribute.name, attribute.value, attribute.confidence
                    )
                )
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2024-11-19(UTC)"],[],[]]