[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Object detector guide\n\nThe **Object detector model** can identify and locate more than 500 types\nof objects in a video. The model accepts a video stream as input and outputs\na [protocol buffer](https://developers.google.com/protocol-buffers) with the detection results to\nBigQuery. The model runs at one FPS. When you create an app that uses\nthe object detector model, you must direct model output to a BigQuery\nconnector to view prediction output.\n\nObject detector model app specifications\n----------------------------------------\n\nUse the following instructions to create a object detector model in the\nGoogle Cloud console. \n\n### Console\n\n**Create an app in the Google Cloud console**\n\n1. To create a object detector app, follow instructions in\n [Build an application](/vision-ai/docs/build-app).\n\n [Go to the Applications tab](https://console.cloud.google.com/ai/vision-ai/applications)\n\n**Add an object detector model**\n\n1. When you add model nodes, select the **Object detector** from the list of pre-trained models.\n\n**Add a BigQuery connector**\n\n1. To use the output, connect the app to a **BigQuery**\n connector.\n\n For information about using the **BigQuery** connector,\n see [Connect and store data to BigQuery](/vision-ai/docs/connect-bigquery). For\n BigQuery pricing information, see the\n [BigQuery pricing](/bigquery/pricing) page.\n\n**View output results in BigQuery**\n\nAfter the model outputs data to BigQuery, view output\nannotations in the BigQuery dashboard.\n\nIf you didn't specify a BigQuery path, you can view the\nsystem-created path in the Vertex AI Vision\nschema**Studio** page.\n\n1. In the Google Cloud console, open the BigQuery page.\n\n [Go to BigQuery](https://console.cloud.google.com/bigquery)\n2. Select arrow_drop_down**Expand**\n next to the target project, dataset name, and application name.\n\n3. In the table detail view, click **Preview** . View results in the\n **annotation** column. For a description of the output format, see\n [model output](#model-output).\n\nThe application stores results in chronological order. The oldest\nresults are the beginning of the table, while the most recent results\nare added to the end of the table. To check the latest results, click\nthe page number to go to the last table page.\n\nModel output\n------------\n\nThe model outputs bounding boxes, their object labels, and confidence scores\nfor each video frame. The output also contains a timestamp. 
Model output
------------

The model outputs bounding boxes, their object labels, and confidence scores
for each video frame. The output also contains a timestamp, and the rate of
the output stream is one frame per second.

In the [protocol buffer](https://developers.google.com/protocol-buffers) output example that follows, note the
following:

- Timestamp - The timestamp corresponds to the time of this inference result.
- Identified boxes - The main detection result, which includes the box identity, bounding box information, confidence score, and object prediction.

### Sample annotation output JSON object

```
{
  "currentTime": "2022-11-09T02:18:54.777154048Z",
  "identifiedBoxes": [
    {
      "boxId": "0",
      "normalizedBoundingBox": {
        "xmin": 0.6963465,
        "ymin": 0.23144785,
        "width": 0.23944569,
        "height": 0.3544306
      },
      "confidenceScore": 0.49874997,
      "entity": {
        "labelId": "0",
        "labelString": "Houseplant"
      }
    }
  ]
}
```

### Protocol buffer definition

```proto
// The prediction result protocol buffer for object detection
message ObjectDetectionPredictionResult {
  // Current timestamp
  protobuf.Timestamp timestamp = 1;

  // The entity information for annotations from object detection prediction
  // results
  message Entity {
    // Label id
    int64 label_id = 1;

    // The human-readable label string
    string label_string = 2;
  }

  // The identified box contains the location and the entity of the object
  message IdentifiedBox {
    // A unique id for this box
    int64 box_id = 1;

    // Bounding box in normalized coordinates [0, 1]
    message NormalizedBoundingBox {
      // Min in x coordinate
      float xmin = 1;
      // Min in y coordinate
      float ymin = 2;
      // Width of the bounding box
      float width = 3;
      // Height of the bounding box
      float height = 4;
    }
    // Bounding box in the normalized coordinates
    NormalizedBoundingBox normalized_bounding_box = 2;

    // Confidence score associated with this bounding box
    float confidence_score = 3;

    // Entity of this box
    Entity entity = 4;
  }
  // A list of identified boxes
  repeated IdentifiedBox identified_boxes = 2;
}
```
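Because the bounding boxes are in normalized `[0, 1]` coordinates, you
typically convert them to pixel coordinates before drawing them on a frame.
The following is a minimal sketch that parses an annotation value shaped like
the sample JSON above; it uses only the Python standard library, and the
field names come from the sample output.

```python
# Minimal sketch: parse an annotation JSON value like the sample above
# and convert its normalized bounding boxes to pixel coordinates.
# Assumption: frame_width and frame_height describe the source video
# frames; field names match the sample annotation output above.
import json

def boxes_to_pixels(annotation_json: str, frame_width: int, frame_height: int):
    """Return (label, confidence, (x, y, w, h)) tuples in pixels."""
    result = json.loads(annotation_json)
    boxes = []
    for box in result.get("identifiedBoxes", []):
        bbox = box["normalizedBoundingBox"]
        # Normalized [0, 1] coordinates scale linearly to pixel units.
        x = int(bbox["xmin"] * frame_width)
        y = int(bbox["ymin"] * frame_height)
        w = int(bbox["width"] * frame_width)
        h = int(bbox["height"] * frame_height)
        boxes.append((box["entity"]["labelString"],
                      box["confidenceScore"], (x, y, w, h)))
    return boxes

# With the sample annotation above and a 1920 x 1080 frame, this returns:
# [("Houseplant", 0.49874997, (1336, 249, 459, 382))]
```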
Best practices and limitations
------------------------------

To get the best results when you use the object detector, consider the
following when you source data and use the model.

### Source data recommendations

Recommended: Make sure the objects in the picture are clear and are not
covered or largely obscured by other objects.

Sample image data the object detector is able to process correctly:

Sending the model this image data returns the following object detection
information\*:

\* The annotations in the image are for illustrative purposes only. The
bounding boxes, labels, and confidence scores are manually drawn and not
added by the model or any Google Cloud console tool.

*Image source: [Spacejoy](https://unsplash.com/photos/umAXneH4GhA) on
[Unsplash](https://unsplash.com/) (annotations manually added).*

Not recommended: Avoid image data where the key objects are too small in
the frame.

Sample image data the object detector isn't able to process correctly:

Not recommended: Avoid image data that shows the key objects partially or
fully covered by other objects.

Sample image data the object detector isn't able to process correctly:

### Limitations

- **Video resolution**: The recommended maximum input video resolution is 1920 x 1080, and the recommended minimum resolution is 160 x 120.
- **Lighting**: The model's performance is sensitive to lighting conditions. Extreme brightness or darkness might lead to lower detection quality.
- **Object size**: The object detector has a minimum detectable object size. Make sure the target objects are sufficiently large and visible in your video data. You can also discard very small detections client-side, as shown in the sketch after this list.
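Because detections of very small or poorly lit objects tend to be less
reliable, you might filter low-confidence or tiny boxes before using the
results. The following is a minimal client-side post-processing sketch, not
part of the model itself; the threshold values are illustrative assumptions.

```python
# Minimal sketch: client-side post-filtering of identified boxes.
# The thresholds below are illustrative assumptions, not values
# recommended by the model; tune them for your own data.
MIN_CONFIDENCE = 0.5   # drop low-confidence detections
MIN_BOX_AREA = 0.001   # drop boxes covering less than 0.1% of the frame

def filter_boxes(identified_boxes):
    """Keep boxes that meet confidence and normalized-area thresholds."""
    kept = []
    for box in identified_boxes:
        bbox = box["normalizedBoundingBox"]
        area = bbox["width"] * bbox["height"]  # normalized area in [0, 1]
        if box["confidenceScore"] >= MIN_CONFIDENCE and area >= MIN_BOX_AREA:
            kept.append(box)
    return kept
```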