[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],[],[],[],null,["# Detect handwritten text in a local file (beta)\n\nPerform handwritten text detection on a local file (for beta launch).\n\nCode sample\n-----------\n\n### Java\n\n\nBefore trying this sample, follow the Java setup instructions in the\n[Vision quickstart using\nclient libraries](/vision/docs/quickstart-client-libraries).\n\n\nFor more information, see the\n[Vision Java API\nreference documentation](/java/docs/reference/google-cloud-vision/latest/overview).\n\n\nTo authenticate to Vision, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n /**\n * Performs handwritten text detection on a local image file.\n *\n * @param filePath The path to the local file to detect handwritten text on.\n * @param out A {@link PrintStream} to write the results to.\n * @throws Exception on errors while closing the client.\n * @throws IOException on Input/Output errors.\n */\n public static void detectHandwrittenOcr(String filePath, PrintStream out) throws Exception {\n List\u003cAnnotateImageRequest\u003e requests = new ArrayList\u003c\u003e();\n\n ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));\n\n Image img = Image.newBuilder().setContent(imgBytes).build();\n Feature feat = Feature.newBuilder().setType(Type.DOCUMENT_TEXT_DETECTION).build();\n // Set the Language Hint codes for handwritten OCR\n ImageContext imageContext =\n ImageContext.newBuilder().addLanguageHints(\"en-t-i0-handwrit\").build();\n\n AnnotateImageRequest request =\n AnnotateImageRequest.newBuilder()\n .addFeatures(feat)\n .setImage(img)\n .setImageContext(imageContext)\n .build();\n requests.add(request);\n\n try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {\n BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);\n List\u003cAnnotateImageResponse\u003e responses = response.getResponsesList();\n client.close();\n\n for (AnnotateImageResponse res : responses) {\n if (res.hasError()) {\n out.printf(\"Error: %s\\n\", res.getError().getMessage());\n return;\n }\n\n // For full list of available annotations, see http://g.co/cloud/vision/docs\n TextAnnotation annotation = res.getFullTextAnnotation();\n for (Page page : annotation.getPagesList()) {\n String pageText = \"\";\n for (Block block : page.getBlocksList()) {\n String blockText = \"\";\n for (Paragraph para : block.getParagraphsList()) {\n String paraText = \"\";\n for (Word word : para.getWordsList()) {\n String wordText = \"\";\n for (Symbol symbol : word.getSymbolsList()) {\n wordText = wordText + symbol.getText();\n out.format(\n \"Symbol text: %s (confidence: %f)\\n\",\n symbol.getText(), symbol.getConfidence());\n }\n out.format(\"Word text: %s (confidence: %f)\\n\\n\", wordText, word.getConfidence());\n paraText = String.format(\"%s %s\", paraText, wordText);\n }\n // Output Example using Paragraph:\n out.println(\"\\nParagraph: \\n\" + paraText);\n out.format(\"Paragraph Confidence: %f\\n\", para.getConfidence());\n blockText = blockText + paraText;\n }\n pageText = pageText + blockText;\n }\n }\n 
### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).
For more information, see the
[Vision Node.js API reference documentation](https://googleapis.dev/nodejs/vision/latest).

To authenticate to Vision, set up Application Default Credentials. For more
information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```javascript
// Imports the Google Cloud client libraries (v1p3beta1 beta surface)
const vision = require('@google-cloud/vision').v1p3beta1;
const fs = require('fs');

// Creates a client
const client = new vision.ImageAnnotatorClient();

async function detectHandwrittenOcr() {
  /**
   * TODO(developer): Uncomment the following line before running the sample.
   */
  // const fileName = `/path/to/localImage.png`;

  const request = {
    image: {
      content: fs.readFileSync(fileName),
    },
    // Language hints for handwritten OCR are passed in the image context.
    imageContext: {
      languageHints: ['en-t-i0-handwrit'],
    },
  };

  const [result] = await client.documentTextDetection(request);
  const fullTextAnnotation = result.fullTextAnnotation;
  console.log(`Full text: ${fullTextAnnotation.text}`);
}

detectHandwrittenOcr();
```

### Python

Before trying this sample, follow the Python setup instructions in the
[Vision quickstart using client libraries](/vision/docs/quickstart-client-libraries).
For more information, see the
[Vision Python API reference documentation](/python/docs/reference/vision/latest).

To authenticate to Vision, set up Application Default Credentials. For more
information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```python
def detect_handwritten_ocr(path):
    """Detects handwritten characters in a local image.

    Args:
        path: The path to the local file.
    """
    from google.cloud import vision_v1p3beta1 as vision

    client = vision.ImageAnnotatorClient()

    # Read the local image into memory.
    with open(path, "rb") as image_file:
        content = image_file.read()

    image = vision.Image(content=content)

    # Language hint codes for handwritten OCR:
    # en-t-i0-handwrit, mul-Latn-t-i0-handwrit
    # Note: Use only one language hint code per request for handwritten OCR.
    image_context = vision.ImageContext(language_hints=["en-t-i0-handwrit"])

    response = client.document_text_detection(image=image, image_context=image_context)

    print(f"Full Text: {response.full_text_annotation.text}")
    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            print(f"\nBlock confidence: {block.confidence}\n")

            for paragraph in block.paragraphs:
                print(f"Paragraph confidence: {paragraph.confidence}")

                for word in paragraph.words:
                    word_text = "".join([symbol.text for symbol in word.symbols])
                    print(f"Word text: {word_text} (confidence: {word.confidence})")

                    for symbol in word.symbols:
                        print(f"\tSymbol: {symbol.text} (confidence: {symbol.confidence})")

    if response.error.message:
        raise Exception(
            f"{response.error.message}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors"
        )
```
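To try the Python sample end to end, the function above can be called directly. The sketch below is illustrative rather than part of the official sample: it assumes the `google-cloud-vision` package is installed, Application Default Credentials are configured, and the image path is a hypothetical placeholder.

```python
# Minimal usage sketch (assumptions: google-cloud-vision is installed and
# the path below is a hypothetical placeholder for a real handwriting image).
if __name__ == "__main__":
    detect_handwritten_ocr("/path/to/handwriting.png")
```

If the call fails with an authentication error, one way to confirm that Application Default Credentials are discoverable is to resolve them with the `google-auth` package (a dependency of the Vision client library). This check is a convenience sketch, not part of the sample:

```python
import google.auth

# Raises DefaultCredentialsError if no Application Default Credentials
# can be found in the environment.
credentials, project_id = google.auth.default()
print(f"Resolved credentials for project: {project_id}")
```

As the in-code comments state, handwritten OCR accepts only one language hint per request; the sample lists `mul-Latn-t-i0-handwrit` as the alternative to `en-t-i0-handwrit` for handwriting in multiple Latin-script languages.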
What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=vision).