# Text Detection

*Text Detection* performs Optical Character Recognition (OCR) to detect visible text in video frames or video segments, and returns the detected text along with its frame-level location and timestamp in the video.

Text Detection is particularly useful for media & entertainment use cases, such as detecting and extracting cast lists at the end of shows and movies, or detecting the presence of burnt-in subtitles.

Text detection is available for the [languages](/vision/docs/languages) supported by the Cloud Vision API.

To detect visible text in a video or video segments, call the [`annotate`](/video-intelligence/docs/reference/rest/v1p2beta1/videos/annotate) method and specify [`TEXT_DETECTION`](/video-intelligence/docs/reference/rest/v1p2beta1/videos#Feature) in the `features` field.

Check out the [Video Intelligence API visualizer](https://zackakil.github.io/video-intelligence-api-visualiser/#Text%20Detection) to see this feature in action.

For examples of requesting text detection and getting the annotated results, see [Text Detection](/video-intelligence/docs/text-detection).

Last updated: 2025-08-17 (UTC).
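The `annotate` request with the `TEXT_DETECTION` feature can be sketched as follows. This is a minimal illustration, not a live call: the Cloud Storage URI and the detected text are hypothetical placeholders, and the response is an abridged example of the shape the long-running operation eventually returns (detected text plus segment-level timestamps).

```python
import json

# Minimal sketch of the REST request body for the `videos:annotate`
# method (v1p2beta1). The bucket path below is a hypothetical
# placeholder; a video can also be sent inline via `inputContent`.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # hypothetical path
    "features": ["TEXT_DETECTION"],
}
payload = json.dumps(request_body)

# Abridged, illustrative response shape: each detected string carries
# the video segments (with start/end time offsets) where it appears.
sample_response = {
    "annotationResults": [{
        "textAnnotations": [{
            "text": "Directed by Jane Doe",  # hypothetical detected text
            "segments": [{
                "segment": {"startTimeOffset": "101.5s",
                            "endTimeOffset": "104.0s"},
                "confidence": 0.98,
            }],
        }],
    }],
}

# Walk the results and print each detected string with its timestamps.
for result in sample_response["annotationResults"]:
    for annotation in result["textAnnotations"]:
        seg = annotation["segments"][0]["segment"]
        print(annotation["text"],
              seg["startTimeOffset"], seg["endTimeOffset"])
```

In a real workflow the `payload` would be POSTed to `https://videointelligence.googleapis.com/v1p2beta1/videos:annotate` with an authorized client, and the operation polled until the annotation results are available.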