[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],[],[],[],null,["# Process the Cloud Vision API response when faces are detected in an image.\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Face detection tutorial](/vision/docs/face-tutorial)\n\nCode sample\n-----------\n\n### Java\n\n\nBefore trying this sample, follow the Java setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision Java API\nreference documentation](/java/docs/reference/google-cloud-vision/latest/overview).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n /** Reads image {@code inputPath} and writes {@code outputPath} with {@code faces} outlined. */\n private static void writeWithFaces(Path inputPath, Path outputPath, List\u003cFaceAnnotation\u003e faces)\n throws IOException {\n BufferedImage img = ImageIO.read(inputPath.toFile());\n annotateWithFaces(img, faces);\n ImageIO.write(img, \"jpg\", outputPath.toFile());\n }\n\n /** Annotates an image {@code img} with a polygon around each face in {@code faces}. */\n public static void annotateWithFaces(BufferedImage img, List\u003cFaceAnnotation\u003e faces) {\n for (FaceAnnotation face : faces) {\n annotateWithFace(img, face);\n }\n }\n\n /** Annotates an image {@code img} with a polygon defined by {@code face}. 
### Java

Before trying this sample, follow the Java setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Java API reference documentation](/java/docs/reference/google-cloud-vision/latest/overview).

To authenticate to Vision, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    import com.google.cloud.vision.v1.FaceAnnotation;
    import com.google.cloud.vision.v1.Vertex;
    import java.awt.BasicStroke;
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.Polygon;
    import java.awt.image.BufferedImage;
    import java.io.IOException;
    import java.nio.file.Path;
    import java.util.List;
    import javax.imageio.ImageIO;

    /** Reads image {@code inputPath} and writes {@code outputPath} with {@code faces} outlined. */
    private static void writeWithFaces(Path inputPath, Path outputPath, List<FaceAnnotation> faces)
        throws IOException {
      BufferedImage img = ImageIO.read(inputPath.toFile());
      annotateWithFaces(img, faces);
      ImageIO.write(img, "jpg", outputPath.toFile());
    }

    /** Annotates an image {@code img} with a polygon around each face in {@code faces}. */
    public static void annotateWithFaces(BufferedImage img, List<FaceAnnotation> faces) {
      for (FaceAnnotation face : faces) {
        annotateWithFace(img, face);
      }
    }

    /** Annotates an image {@code img} with a polygon defined by {@code face}. */
    private static void annotateWithFace(BufferedImage img, FaceAnnotation face) {
      Graphics2D gfx = img.createGraphics();
      Polygon poly = new Polygon();
      // fdBoundingPoly is the tighter polygon that encloses only the skin area of the face.
      for (Vertex vertex : face.getFdBoundingPoly().getVertices()) {
        poly.addPoint(vertex.getX(), vertex.getY());
      }
      gfx.setStroke(new BasicStroke(5));
      gfx.setColor(new Color(0x00ff00));
      gfx.draw(poly);
    }

### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Node.js API reference documentation](https://googleapis.dev/nodejs/vision/latest).

To authenticate to Vision, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    const fs = require('fs');

    async function highlightFaces(inputFile, faces, outputFile, PImage) {
      // Open the original image
      const stream = fs.createReadStream(inputFile);
      let promise;
      if (inputFile.match(/\.jpg$/)) {
        promise = PImage.decodeJPEGFromStream(stream);
      } else if (inputFile.match(/\.png$/)) {
        promise = PImage.decodePNGFromStream(stream);
      } else {
        throw new Error(`Unknown filename extension ${inputFile}`);
      }
      const img = await promise;
      const context = img.getContext('2d');
      context.drawImage(img, 0, 0, img.width, img.height, 0, 0, img.width, img.height);

      // Now draw boxes around all the faces
      context.strokeStyle = 'rgba(0,255,0,0.8)';
      context.lineWidth = 5;

      faces.forEach(face => {
        context.beginPath();
        let origX = 0;
        let origY = 0;
        face.boundingPoly.vertices.forEach((bounds, i) => {
          if (i === 0) {
            origX = bounds.x;
            origY = bounds.y;
            context.moveTo(bounds.x, bounds.y);
          } else {
            context.lineTo(bounds.x, bounds.y);
          }
        });
        // Close the polygon back to the first vertex
        context.lineTo(origX, origY);
        context.stroke();
      });

      // Write the result to a file
      console.log(`Writing to file ${outputFile}`);
      const writeStream = fs.createWriteStream(outputFile);
      await PImage.encodePNGToStream(img, writeStream);
    }

### PHP

Before trying this sample, follow the PHP setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision PHP API reference documentation](/php/docs/reference/cloud-vision/latest).

To authenticate to Vision, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    # draw a box around each face
    if ($faces->count() && $outFile) {
        $imageCreateFunc = [
            'png' => 'imagecreatefrompng',
            'gd' => 'imagecreatefromgd',
            'gif' => 'imagecreatefromgif',
            'jpg' => 'imagecreatefromjpeg',
            'jpeg' => 'imagecreatefromjpeg',
        ];
        $imageWriteFunc = [
            'png' => 'imagepng',
            'gd' => 'imagegd',
            'gif' => 'imagegif',
            'jpg' => 'imagejpeg',
            'jpeg' => 'imagejpeg',
        ];

        copy($path, $outFile);
        $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
        if (!array_key_exists($ext, $imageCreateFunc)) {
            throw new \Exception('Unsupported image extension');
        }
        $outputImage = call_user_func($imageCreateFunc[$ext], $outFile);

        foreach ($faces as $face) {
            $vertices = $face->getBoundingPoly()->getVertices();
            if ($vertices) {
                // Opposite corners of the bounding box are at indexes 0 and 2
                $x1 = $vertices[0]->getX();
                $y1 = $vertices[0]->getY();
                $x2 = $vertices[2]->getX();
                $y2 = $vertices[2]->getY();
                imagerectangle($outputImage, $x1, $y1, $x2, $y2, 0x00ff00);
            }
        }

        # write the annotated image back to $outFile
        call_user_func($imageWriteFunc[$ext], $outputImage, $outFile);
    }

### Python

Before trying this sample, follow the Python setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).

For more information, see the
[Vision Python API reference documentation](/python/docs/reference/vision/latest).

To authenticate to Vision, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    from PIL import Image, ImageDraw

    def highlight_faces(image, faces, output_filename):
        """Draws a polygon around the faces, then saves to output_filename.

        Args:
            image: a file containing the image with the faces.
            faces: a list of faces found in the file. This should be in the format
                returned by the Vision API.
            output_filename: the name of the image file to be created, where the
                faces have polygons drawn around them.
        """
        im = Image.open(image)
        draw = ImageDraw.Draw(im)
        for face in faces:
            box = [(vertex.x, vertex.y) for vertex in face.bounding_poly.vertices]
            # Draw the polygon, closing it back to the first vertex
            draw.line(box + [box[0]], width=5, fill="#00ff00")
            # Place the confidence score of each detected face above the
            # detection box (detection_confidence is in [0, 1], so scale
            # it to a percentage); Pillow's default font is used
            draw.text(
                (box[0][0], box[0][1] - 30),
                format(face.detection_confidence * 100, ".1f") + "%",
                fill="#FF0000",
            )
        im.save(output_filename)
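To tie the two Python pieces together, a hypothetical driver might read as follows; `detect_faces` is the sketch shown earlier under "Code sample", and the file names are placeholders.

    # Hypothetical end-to-end usage: detect faces, then draw the boxes.
    faces = detect_faces("face-input.jpg")
    highlight_faces("face-input.jpg", faces, "face-output.jpg")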
What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=vision).