# Interpret prediction results from image object detection models

After requesting a prediction, Vertex AI returns results based on your model's objective. AutoML image object detection prediction responses return all objects found in an image. Each found object has an annotation (label and normalized bounding box) with a corresponding confidence score. The bounding box is written as:
"bboxes": [
[xMin, xMax, yMin, yMax],
...]
where `xMin` and `xMax` are the minimum and maximum x values, and `yMin` and `yMax` are the minimum and maximum y values, respectively.
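For example, here is a minimal sketch of converting a normalized bounding box in the format above back to pixel coordinates. The 640×480 image dimensions are illustrative assumptions, not values from any response:

```python
# Convert a normalized [xMin, xMax, yMin, yMax] bounding box to pixel
# coordinates. Width and height are assumed properties of the source image.
def to_pixels(bbox, width, height):
    x_min, x_max, y_min, y_max = bbox
    return (
        int(x_min * width), int(x_max * width),    # x range in pixels
        int(y_min * height), int(y_max * height),  # y range in pixels
    )

# Hypothetical 640x480 image with the first bbox from the example below.
print(to_pixels([0.1, 0.2, 0.3, 0.4], 640, 480))
# (64, 128, 144, 192)
```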
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-08 UTC。"],[],[],null,["# Interpret prediction results from image object detection models\n\nAfter requesting a prediction, Vertex AI returns results based on your model's objective. AutoML image object detection prediction responses return all objects found in an image. Each found object has an annotation (label and normalized bounding box) with a corresponding confidence score. The bounding box is written as:\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\n`\n\"bboxes\": [\n[xMin, xMax, yMin, yMax],\n...]\n`\nWhere `xMin, xMax` are the minimum and maximum x values and `\nyMin, yMax` are the minimum and maximum y values respectively.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n#### Example batch prediction output\n\nBatch AutoML image object detection prediction responses are stored as\nJSON Lines files in Cloud Storage buckets. Each line of the JSON Lines\nfile\ncontains all objects found in a single image file. Each found object has\nan annotation (label and normalized bounding box) with a corresponding\nconfidence score.\n| **Note: Zero coordinate values omitted.** When the API detects a coordinate (\"x\" or \"y\") value of 0, ***that coordinate is omitted in the\n| JSON response*** . Thus, a response with a bounding poly around the entire image would be \n| **\\[{},{\"x\": 1,\"y\": 1}\\]** . For more information, see [Method: projects.locations.models.predict](https://cloud.google.com/automl/docs/reference/rest/v1/projects.locations.models/predict#boundingpoly).\n\n\n| **Note**: The following JSON Lines example includes line breaks for\n| readability. In your JSON Lines files, line breaks are included only after each\n| each JSON object.\n\n\u003cbr /\u003e\n\n\n\u003cbr /\u003e\n\n**Important:** Bounding boxes are specified as:\n\n\n`\n\"bboxes\": [\n[xMin, xMax, yMin, yMax],\n...]\n`\nWhere `xMin` and `xMax` are the minimum and maximum x values and `\nyMin` and `yMax` are the minimum and maximum y values respectively.\n\n\u003cbr /\u003e\n\n```\n{\n \"instance\": {\"content\": \"gs://bucket/image.jpg\", \"mimeType\": \"image/jpeg\"},\n \"prediction\": {\n \"ids\": [1, 2],\n \"displayNames\": [\"cat\", \"dog\"],\n \"bboxes\": [\n [0.1, 0.2, 0.3, 0.4],\n [0.2, 0.3, 0.4, 0.5]\n ],\n \"confidences\": [0.7, 0.5]\n }\n}\n```"]]