# Detect multiple objects in a local file
Perform object detection for multiple objects in an image on a local file.
Explore further
---------------

For detailed documentation that includes this code sample, see the following:

- [Detect multiple objects](/vision/docs/object-localizer)
Except as otherwise noted, the content of this page is licensed under a Creative Commons Attribution 4.0 License, and code samples are licensed under an Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Code sample
-----------
### Go

Before trying this sample, follow the Go setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Go API reference documentation](https://godoc.org/cloud.google.com/go/vision/apiv1).

To authenticate to Vision, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```go
// localizeObjects gets objects and bounding boxes from the Vision API for an image at the given file path.
func localizeObjects(w io.Writer, file string) error {
	ctx := context.Background()

	client, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		return err
	}

	f, err := os.Open(file)
	if err != nil {
		return err
	}
	defer f.Close()

	image, err := vision.NewImageFromReader(f)
	if err != nil {
		return err
	}
	annotations, err := client.LocalizeObjects(ctx, image, nil)
	if err != nil {
		return err
	}

	if len(annotations) == 0 {
		fmt.Fprintln(w, "No objects found.")
		return nil
	}

	fmt.Fprintln(w, "Objects:")
	for _, annotation := range annotations {
		fmt.Fprintln(w, annotation.Name)
		fmt.Fprintln(w, annotation.Score)

		for _, v := range annotation.BoundingPoly.NormalizedVertices {
			fmt.Fprintf(w, "(%f,%f)\n", v.X, v.Y)
		}
	}

	return nil
}
```

### Java

Before trying this sample, follow the Java setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Java API reference documentation](/java/docs/reference/google-cloud-vision/latest/overview).

To authenticate to Vision, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```java
/**
 * Detects localized objects in the specified local image.
 *
 * @param filePath The path to the file to perform localized object detection on.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectLocalizedObjects(String filePath) throws IOException {
  List<AnnotateImageRequest> requests = new ArrayList<>();

  ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));

  Image img = Image.newBuilder().setContent(imgBytes).build();
  AnnotateImageRequest request =
      AnnotateImageRequest.newBuilder()
          .addFeatures(Feature.newBuilder().setType(Type.OBJECT_LOCALIZATION))
          .setImage(img)
          .build();
  requests.add(request);

  // Initialize client that will be used to send requests. This client only needs to be created
  // once, and can be reused for multiple requests. After completing all of your requests, call
  // the "close" method on the client to safely clean up any remaining background resources.
  try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
    // Perform the request
    BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
    List<AnnotateImageResponse> responses = response.getResponsesList();

    // Display the results
    for (AnnotateImageResponse res : responses) {
      for (LocalizedObjectAnnotation entity : res.getLocalizedObjectAnnotationsList()) {
        System.out.format("Object name: %s%n", entity.getName());
        System.out.format("Confidence: %s%n", entity.getScore());
        System.out.format("Normalized Vertices:%n");
        entity
            .getBoundingPoly()
            .getNormalizedVerticesList()
            .forEach(vertex -> System.out.format("- (%s, %s)%n", vertex.getX(), vertex.getY()));
      }
    }
  }
}
```

### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Node.js API reference documentation](https://googleapis.dev/nodejs/vision/latest).

To authenticate to Vision, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```javascript
// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision');
const fs = require('fs');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const fileName = `/path/to/localImage.png`;
const request = {
  image: {content: fs.readFileSync(fileName)},
};

const [result] = await client.objectLocalization(request);
const objects = result.localizedObjectAnnotations;
objects.forEach(object => {
  console.log(`Name: ${object.name}`);
  console.log(`Confidence: ${object.score}`);
  const vertices = object.boundingPoly.normalizedVertices;
  vertices.forEach(v => console.log(`x: ${v.x}, y:${v.y}`));
});
```

### PHP

Before trying this sample, follow the PHP setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision PHP API reference documentation](/php/docs/reference/cloud-vision/latest).

To authenticate to Vision, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```php
namespace Google\Cloud\Samples\Vision;

use Google\Cloud\Vision\V1\ImageAnnotatorClient;

/**
 * @param string $path Path to the image, e.g. "path/to/your/image.jpg"
 */
function detect_object(string $path)
{
    $imageAnnotator = new ImageAnnotatorClient();

    # annotate the image
    $image = file_get_contents($path);
    $response = $imageAnnotator->objectLocalization($image);
    $objects = $response->getLocalizedObjectAnnotations();

    foreach ($objects as $object) {
        $name = $object->getName();
        $score = $object->getScore();
        $vertices = $object->getBoundingPoly()->getNormalizedVertices();

        printf('%s (confidence %f):' . PHP_EOL, $name, $score);
        print('normalized bounding polygon vertices: ');
        foreach ($vertices as $vertex) {
            printf(' (%f, %f)', $vertex->getX(), $vertex->getY());
        }
        print(PHP_EOL);
    }

    $imageAnnotator->close();
}
```

### Python

Before trying this sample, follow the Python setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Python API reference documentation](/python/docs/reference/vision/latest).

To authenticate to Vision, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```python
def localize_objects(path):
    """Localize objects in the local image.

    Args:
        path: The path to the local file.
    """
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as image_file:
        content = image_file.read()
    image = vision.Image(content=content)

    objects = client.object_localization(image=image).localized_object_annotations

    print(f"Number of objects found: {len(objects)}")
    for object_ in objects:
        print(f"\n{object_.name} (confidence: {object_.score})")
        print("Normalized bounding polygon vertices: ")
        for vertex in object_.bounding_poly.normalized_vertices:
            print(f" - ({vertex.x}, {vertex.y})")
```

What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=vision).
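Every sample above prints the bounding polygons as *normalized* vertices, with `x` and `y` in the range [0, 1] relative to the image width and height. To draw boxes or crop regions you usually need pixel coordinates; a minimal sketch of that conversion in plain Python (no API call — the helper name `to_pixel_coords` and the example dimensions are illustrative, not part of the client libraries):

```python
def to_pixel_coords(normalized_vertices, width, height):
    """Convert normalized [0, 1] vertices to integer pixel coordinates.

    Args:
        normalized_vertices: iterable of (x, y) pairs in the range [0, 1],
            as found in boundingPoly.normalizedVertices.
        width: image width in pixels.
        height: image height in pixels.
    Returns:
        A list of (x, y) integer pixel tuples.
    """
    return [(round(x * width), round(y * height)) for x, y in normalized_vertices]


# Example: a bounding polygon on a 640x480 image.
polygon = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.9), (0.25, 0.9)]
print(to_pixel_coords(polygon, 640, 480))
# -> [(160, 120), (480, 120), (480, 432), (160, 432)]
```

The API does not return the image dimensions, so read them from the file yourself (for example with an image library) before converting.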