[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-08-19(UTC)"],[],[],null,["# Make a Vision API request\n\nThe Cloud Vision API is a REST API that uses HTTP POST operations to perform\ndata analysis on images you send in the request. The API uses JSON for both\nrequests and responses.\n\nSummary\n-------\n\n- Requests are POST requests to `https://vision.googleapis.com/v1/images:annotate`.\n- You must [authenticate](/vision/docs/auth) your requests.\n- The request body looks like [this](#json_request_format). Responses look sort of like [this](#response_anchor), but the fields will vary depending on what type of annotation you're doing.\n- Here's how to [send a request with cURL](/vision/docs/using-curl).\n- There are also [client libraries](/vision/docs/reference/libraries).\n- Looking for a quick demo? Just [drag and drop](/vision/docs/drag-and-drop)!\n\nEndpoint\n--------\n\nThe Vision API consists of a single endpoint\n(`https://vision.googleapis.com/v1/images`) that supports one HTTP request\nmethod (`annotate`): \n\n POST https://vision.googleapis.com/v1/images:annotate\n\nAuthentication\n--------------\n\nThe POST request must authenticate by passing either an API key or an\nOAuth token. For details, refer to the [Authenticate](/vision/docs/auth) page.\n\nJSON request format\n-------------------\n\nThe body of your POST request contains a JSON object, containing a single\n`requests` list, which itself contains one or more objects of type\n[`AnnotateImageRequest`](/vision/docs/reference/rest/v1/images/annotate#AnnotateImageRequest): \n\n {\n \"requests\":[\n {\n \"image\":{\n \"content\":\"/9j/7QBEUGhvdG9...image contents...eYxxxzj/Coa6Bax//Z\"\n },\n \"features\":[\n {\n \"type\":\"LABEL_DETECTION\",\n \"maxResults\":1\n }\n ]\n }\n ]\n }\n\nEvery request:\n\n- Must contain a `requests` list.\n\nWithin the `requests` list:\n\n- `image` specifies the image file. It can be sent as a base64-encoded string,\n a Cloud Storage file location, or as a publicly-accessible URL. See\n [Providing the image](#providing_the_image) for details.\n\n- `features` lists the types of annotation to perform on the image. You can\n specify one or many types, as well as the `maxResults` to return for each.\n\n- `imageContext` (not shown in the example above) specifies hints to the\n service to help with annotation: bounding boxes, languages, and crop hints\n aspect ratios.\n\nProviding the image\n-------------------\n\nYou can provide the image in your request in one of three ways:\n\n- As a base64-encoded image string. If the image is stored locally, you can\n convert it to a string and pass it as the value of `image.content`:\n\n {\n \"requests\":[\n {\n \"image\":{\n \"content\":\"/9j/7QBEUGhvdG9zaG9...image contents...fXNWzvDEeYxxxzj/Coa6Bax//Z\"\n },\n \"features\":[\n {\n \"type\":\"FACE_DETECTION\",\n \"maxResults\":10\n }\n ]\n }\n ]\n }\n\n See [Base64-encoding](/vision/docs/base64) for instructions on encoding on\n various platforms.\n- As a [Cloud Storage](/storage) URI. 
Providing the image
-------------------

You can provide the image in your request in one of three ways:

- As a base64-encoded image string. If the image is stored locally, you can
  convert it to a string and pass it as the value of `image.content`:

      {
        "requests":[
          {
            "image":{
              "content":"/9j/7QBEUGhvdG9zaG9...image contents...fXNWzvDEeYxxxzj/Coa6Bax//Z"
            },
            "features":[
              {
                "type":"FACE_DETECTION",
                "maxResults":10
              }
            ]
          }
        ]
      }

  See [Base64-encoding](/vision/docs/base64) for instructions on encoding on
  various platforms.

- As a [Cloud Storage](/storage) URI. Pass the full URI as the value of
  `image.source.imageUri`:

      {
        "requests":[
          {
            "image":{
              "source":{
                "imageUri":"gs://bucket_name/path_to_image_object"
              }
            },
            "features":[
              {
                "type":"LABEL_DETECTION",
                "maxResults":1
              }
            ]
          }
        ]
      }

  The file in Cloud Storage must be accessible to the authentication method
  you're using. If you're using an API key, the file must be publicly
  accessible. If you're using a service account, the file must be accessible
  to the user who created the service account.

- As a publicly accessible HTTP or HTTPS URL. Pass the URL as the value of
  `image.source.imageUri`:

      {
        "requests":[
          {
            "image":{
              "source":{
                "imageUri":"https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png"
              }
            },
            "features":[
              {
                "type":"LOGO_DETECTION",
                "maxResults":1
              }
            ]
          }
        ]
      }

  When fetching images from HTTP/HTTPS URLs, Google cannot guarantee that the
  request will be completed. Your request may fail if the specified host
  denies it (for example, due to request throttling or DoS prevention), or if
  Google throttles requests to the site for abuse prevention. As a best
  practice, don't depend on externally hosted images for production
  applications.

JSON response format
--------------------

The `annotate` request receives a JSON response of type
[`AnnotateImageResponse`](/vision/docs/reference/rest/v1/images/annotate#AnnotateImageResponse).
Although requests are similar for each feature type, the responses can differ
substantially from one feature type to another. Consult the
[Vision API Reference](/vision/docs/reference/rest/v1/images/annotate#AnnotateImageResponse)
for complete information.

The following is a sample label detection response for a photo of a dog:

```
{
  "responses": [
    {
      "labelAnnotations": [
        {
          "mid": "/m/0bt9lr",
          "description": "dog",
          "score": 0.97346616
        },
        {
          "mid": "/m/09686",
          "description": "vertebrate",
          "score": 0.85700572
        },
        {
          "mid": "/m/01pm38",
          "description": "clumber spaniel",
          "score": 0.84881884
        },
        {
          "mid": "/m/04rky",
          "description": "mammal",
          "score": 0.847575
        },
        {
          "mid": "/m/02wbgd",
          "description": "english cocker spaniel",
          "score": 0.75829375
        }
      ]
    }
  ]
}
```

Client libraries
----------------

Google provides client libraries in a number of programming languages to
simplify the process of building and sending requests and receiving and
parsing responses.

Refer to the [Client libraries](/vision/docs/reference/libraries) page for
installation and usage instructions.
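As an illustration, here is a minimal sketch of a label detection request made
through the Python client library (the `google-cloud-vision` package). It
assumes the package is installed and that Application Default Credentials are
configured.

```
from google.cloud import vision  # pip install google-cloud-vision

# The client picks up Application Default Credentials automatically.
client = vision.ImageAnnotatorClient()

# Reference the image by URL; a gs:// URI or base64 content also works.
image = vision.Image()
image.source.image_uri = (
    "https://www.google.com/images/branding/googlelogo/2x/"
    "googlelogo_color_272x92dp.png"
)

# label_detection is a convenience wrapper that calls images:annotate
# with a single LABEL_DETECTION feature.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

The library handles endpoint selection, authentication, and response parsing,
so the code never constructs the JSON request body by hand.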