# Detect multiple objects in a Cloud Storage file
Perform object detection for multiple objects in an image on a file stored in Cloud Storage.

## Explore further

For detailed documentation that includes this code sample, see the following:

- [Detect multiple objects](/vision/docs/object-localizer)

## Code sample
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],[],[],[],null,["# Detect multiple objects in a Cloud Storage file.\n\nPerform object detection for multiple objects in an image on a file stored in Cloud Storage.\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Detect multiple objects](/vision/docs/object-localizer)\n\nCode sample\n-----------\n\n### Go\n\n\nBefore trying this sample, follow the Go setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision Go API\nreference documentation](https://godoc.org/cloud.google.com/go/vision/apiv1).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n\n // localizeObjects gets objects and bounding boxes from the Vision API for an image at the given file path.\n func localizeObjectsURI(w io.Writer, file string) error {\n \tctx := context.Background()\n\n \tclient, err := vision.NewImageAnnotatorClient(ctx)\n \tif err != nil {\n \t\treturn err\n \t}\n\n \timage := vision.NewImageFromURI(file)\n \tannotations, err := client.LocalizeObjects(ctx, image, nil)\n \tif err != nil {\n \t\treturn err\n \t}\n\n \tif len(annotations) == 0 {\n \t\tfmt.Fprintln(w, \"No objects found.\")\n \t\treturn nil\n \t}\n\n \tfmt.Fprintln(w, \"Objects:\")\n \tfor _, annotation := range annotations {\n \t\tfmt.Fprintln(w, annotation.Name)\n \t\tfmt.Fprintln(w, annotation.Score)\n\n \t\tfor _, v := range annotation.BoundingPoly.NormalizedVertices {\n \t\t\tfmt.Fprintf(w, \"(%f,%f)\\n\", v.X, v.Y)\n \t\t}\n \t}\n\n \treturn nil\n }\n\n### Java\n\n\nBefore trying this sample, follow the Java setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision Java API\nreference documentation](/java/docs/reference/google-cloud-vision/latest/overview).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n /**\n * Detects localized objects in a remote image on Google Cloud Storage.\n *\n * @param gcsPath The path to the remote file on Google Cloud Storage to detect localized objects\n * on.\n * @throws Exception on errors while closing the client.\n * @throws IOException on Input/Output errors.\n */\n public static void detectLocalizedObjectsGcs(String gcsPath) throws IOException {\n List\u003cAnnotateImageRequest\u003e requests = new ArrayList\u003c\u003e();\n\n ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();\n Image img = Image.newBuilder().setSource(imgSource).build();\n\n AnnotateImageRequest request =\n AnnotateImageRequest.newBuilder()\n .addFeatures(Feature.newBuilder().setType(Type.OBJECT_LOCALIZATION))\n 
.setImage(img)\n .build();\n requests.add(request);\n\n // Initialize client that will be used to send requests. This client only needs to be created\n // once, and can be reused for multiple requests. After completing all of your requests, call\n // the \"close\" method on the client to safely clean up any remaining background resources.\n try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {\n // Perform the request\n BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);\n List\u003cAnnotateImageResponse\u003e responses = response.getResponsesList();\n client.close();\n // Display the results\n for (AnnotateImageResponse res : responses) {\n for (LocalizedObjectAnnotation entity : res.getLocalizedObjectAnnotationsList()) {\n System.out.format(\"Object name: %s%n\", entity.getName());\n System.out.format(\"Confidence: %s%n\", entity.getScore());\n System.out.format(\"Normalized Vertices:%n\");\n entity\n .getBoundingPoly()\n .getNormalizedVerticesList()\n .forEach(vertex -\u003e System.out.format(\"- (%s, %s)%n\", vertex.getX(), vertex.getY()));\n }\n }\n }\n }\n\n### Node.js\n\n\nBefore trying this sample, follow the Node.js setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision Node.js API\nreference documentation](https://googleapis.dev/nodejs/vision/latest).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n // Imports the Google Cloud client libraries\n const vision = require('https://cloud.google.com/nodejs/docs/reference/vision/latest/overview.html');\n\n // Creates a client\n const client = new vision.https://cloud.google.com/nodejs/docs/reference/vision/latest/overview.html();\n\n /**\n * TODO(developer): Uncomment the following line before running the sample.\n */\n // const gcsUri = `gs://bucket/bucketImage.png`;\n\n const [result] = await client.objectLocalization(gcsUri);\n const objects = https://cloud.google.com/nodejs/docs/reference/vision/latest/vision/protos.google.longrunning.operation.html.localizedObjectAnnotations;\n objects.forEach(object =\u003e {\n console.log(`Name: ${object.name}`);\n console.log(`Confidence: ${object.score}`);\n const veritices = object.boundingPoly.normalizedVertices;\n veritices.forEach(v =\u003e console.log(`x: ${v.x}, y:${v.y}`));\n });\n\n### PHP\n\n\nBefore trying this sample, follow the PHP setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision PHP API\nreference documentation](/php/docs/reference/cloud-vision/latest).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n namespace Google\\Cloud\\Samples\\Vision;\n\n use Google\\Cloud\\Vision\\V1\\ImageAnnotatorClient;\n\n /**\n * @param string $path GCS path to the image, e.g. 
\"gs://path/to/your/image.jpg\"\n */\n function detect_object_gcs(string $path)\n {\n $imageAnnotator = new ImageAnnotatorClient();\n\n # annotate the image\n $response = $imageAnnotator-\u003eobjectLocalization($path);\n $objects = $response-\u003egetLocalizedObjectAnnotations();\n\n foreach ($objects as $object) {\n $name = $object-\u003egetName();\n $score = $object-\u003egetScore();\n $vertices = $object-\u003egetBoundingPoly()-\u003egetNormalizedVertices();\n\n printf('%s (confidence %d)):' . PHP_EOL, $name, $score);\n print('normalized bounding polygon vertices: ');\n foreach ($vertices as $vertex) {\n printf(' (%d, %d)', $vertex-\u003egetX(), $vertex-\u003egetY());\n }\n print(PHP_EOL);\n }\n\n $imageAnnotator-\u003eclose();\n }\n\n### Python\n\n\nBefore trying this sample, follow the Python setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision Python API\nreference documentation](/python/docs/reference/vision/latest).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n def localize_objects_uri(uri):\n \"\"\"Localize objects in the image on Google Cloud Storage\n\n Args:\n uri: The path to the file in Google Cloud Storage (gs://...)\n \"\"\"\n from google.cloud import vision\n\n client = vision.https://cloud.google.com/python/docs/reference/vision/latest/google.cloud.vision_v1.services.image_annotator.ImageAnnotatorClient.html()\n\n image = vision.https://cloud.google.com/python/docs/reference/vision/latest/google.cloud.vision_v1.types.Image.html()\n image.source.image_uri = uri\n\n objects = client.object_localization(image=image).localized_object_annotations\n\n print(f\"Number of objects found: {len(objects)}\")\n for object_ in objects:\n print(f\"\\n{object_.name} (confidence: {object_.score})\")\n print(\"Normalized bounding polygon vertices: \")\n for vertex in object_.bounding_poly.normalized_vertices:\n print(f\" - ({vertex.x}, {vertex.y})\")\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=vision)."]]