Optical character recognition (OCR)

The Vision API can detect and extract text from images. There are two annotation features that support OCR:

  • TEXT_DETECTION detects and extracts text from any image. For example, a photograph might contain a street sign or traffic sign. The JSON includes the entire extracted string, as well as individual words, and their bounding boxes.

  • DOCUMENT_TEXT_DETECTION also extracts text from an image, but the response is optimized for dense text and documents. The JSON includes page, block, paragraph, word, and break information.

Specifying the language

Both types of OCR requests support one or more languageHints that specify the language of any text in the image. However, in most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting languageHints is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong). Text detection returns an error if one or more of the specified languages is not one of the supported languages.
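
For example, if the text in an image is known to be Japanese, the hint is passed through the request's imageContext. The following is a minimal sketch of such a request body (the image URI is a placeholder):

{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "gs://bucket-name-123/japanese_sign.jpg"
        }
      },
      "features": [
        {
          "type": "TEXT_DETECTION"
        }
      ],
      "imageContext": {
        "languageHints": ["ja"]
      }
    }
  ]
}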


Text detection requests

Before making requests, set up your GCP project and authentication.

Detect Text

PowerShell

To make a text detection request using Windows PowerShell, make a POST request to the https://vision.googleapis.com/v1/images:annotate endpoint and specify TEXT_DETECTION as the value of features.type, as shown in the following example:

$cred = gcloud auth application-default print-access-token
$headers = @{ Authorization = "Bearer $cred" }

Invoke-WebRequest `
  -Method Post `
  -Headers $headers `
  -ContentType "application/json; charset=utf-8" `
  -Body "{
      'requests': [
        {
          'image': {
            'source': {
              'imageUri': 'gs://bucket-name-123/abbey_road.jpg'
            }
          },
          'features': [
            {
              'type': 'TEXT_DETECTION'
            }
          ]
        }
      ]
    }" `
  -Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

Images can be passed in one of three ways: as a base64-encoded string; as a Google Cloud Storage URI; or as a publicly-accessible HTTPS or HTTP URL. See Making requests for more information.
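
For example, to inline an image instead of referencing Cloud Storage, base64-encode the file and place the result in image.content rather than image.source.imageUri. A minimal Python sketch (the file name is a placeholder):

import base64
import json

# Read a local image and base64-encode it for the JSON request body.
with open('abbey_road.jpg', 'rb') as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode('utf-8')

body = json.dumps({
    'requests': [{
        'image': {'content': encoded_image},  # inline bytes instead of a URI
        'features': [{'type': 'TEXT_DETECTION'}]
    }]
})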

See the AnnotateImageRequest reference documentation for more information on configuring the request body.

curl command

To make a text detection request using curl from the Linux or macOS command line, make a POST request to the https://vision.googleapis.com/v1/images:annotate endpoint and specify TEXT_DETECTION as the value of features.type, as shown in the following example:

curl -X POST \
     -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
     -H "Content-Type: application/json; charset=utf-8" \
     --data "{
      'requests': [
        {
          'image': {
            'source': {
              'imageUri': 'gs://bucket-name-123/abbey_road.jpg'
            }
          },
          'features': [
            {
              'type': 'TEXT_DETECTION'
            }
          ]
        }
      ]
    }" "https://vision.googleapis.com/v1/images:annotate"

For document text detection, substitute "type": "DOCUMENT_TEXT_DETECTION" in the request above.

Images can be passed in one of three ways: as a base64-encoded string; as a Google Cloud Storage URI (shown above); or as a web URI. See Making requests for more information.

See the AnnotateImageRequest reference documentation for more information on configuring the request body.

gcloud command

To detect text in an image, use the gcloud ml vision detect-text command as shown in the following example:

gcloud ml vision detect-text "gs://bucket-name-123/abbey_road.jpg"
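
For dense documents, the gcloud CLI offers a companion command; assuming your gcloud installation includes the ml vision command group, document text detection works the same way:

gcloud ml vision detect-document "gs://bucket-name-123/abbey_road.jpg"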

Text detection responses

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.

A TEXT_DETECTION response includes the detected phrase, its bounding box, and individual words and their bounding boxes.

{
  "responses": [
    {
      "textAnnotations": [
        {
          "locale": "en",
          "description": "ABBEY\nROAD NW8\nCITY OF WESTMINSTER\n",
          "boundingPoly": {
            "vertices": [
              {
                "x": 45,
                "y": 43
              },
              {
                "x": 269,
                "y": 43
              },
              {
                "x": 269,
                "y": 178
              },
              {
                "x": 45,
                "y": 178
              }
            ]
          }
        },
        {
          "description": "ABBEY",
          "boundingPoly": {
            "vertices": [
              {
                "x": 45,
                "y": 50
              },
              {
                "x": 181,
                "y": 43
              },
              {
                "x": 183,
                "y": 80
              },
              {
                "x": 47,
                "y": 87
              }
            ]
          }
        },
        {
          "description": "ROAD",
          "boundingPoly": {
            "vertices": [
              {
                "x": 48,
                "y": 96
              },
              {
                "x": 155,
                "y": 96
              },
              {
                "x": 155,
                "y": 132
              },
              {
                "x": 48,
                "y": 132
              }
            ]
          }
        },
        {
          "description": "NW8",
          "boundingPoly": {
            "vertices": [
              {
                "x": 182,
                "y": 95
              },
              {
                "x": 269,
                "y": 95
              },
              {
                "x": 269,
                "y": 130
              },
              {
                "x": 182,
                "y": 130
              }
            ]
          }
        },
        {
          "description": "CITY",
          "boundingPoly": {
            "vertices": [
              {
                "x": 51,
                "y": 162
              },
              {
                "x": 85,
                "y": 161
              },
              {
                "x": 85,
                "y": 177
              },
              {
                "x": 51,
                "y": 178
              }
            ]
          }
        },
        {
          "description": "OF",
          "boundingPoly": {
            "vertices": [
              {
                "x": 95,
                "y": 162
              },
              {
                "x": 111,
                "y": 162
              },
              {
                "x": 111,
                "y": 176
              },
              {
                "x": 95,
                "y": 176
              }
            ]
          }
        },
        {
          "description": "WESTMINSTER",
          "boundingPoly": {
            "vertices": [
              {
                "x": 124,
                "y": 162
              },
              {
                "x": 249,
                "y": 160
              },
              {
                "x": 249,
                "y": 174
              },
              {
                "x": 124,
                "y": 176
              }
            ]
          }
        }
      ]
    }
  ]
}
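
In a response like the one above, the first element of textAnnotations holds the entire extracted string, and each subsequent element is an individual word with its bounding box. The following Python sketch walks a response from the client library (attribute names mirror the JSON fields above):

def print_text(response):
    """Prints the full extracted string, then each word with its box."""
    annotations = response.text_annotations
    if not annotations:
        print('No text found.')
        return
    # The first annotation contains the entire detected string.
    print('Full text:\n{}'.format(annotations[0].description))
    # The remaining annotations are individual words.
    for word in annotations[1:]:
        vertices = ['({},{})'.format(v.x, v.y)
                    for v in word.bounding_poly.vertices]
        print('{}: {}'.format(word.description, ','.join(vertices)))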

A DOCUMENT_TEXT_DETECTION response includes additional layout information, such as page, block, paragraph, word, and break information, along with confidence scores for each. (The sample below uses a simplified example; the response from a dense document is too long to display on this page.)

{
  "responses": [
    {
      "textAnnotations": [
        {
          "locale": "en",
          "description": "O Google Cloud Platform\n",
          "boundingPoly": {
            "vertices": [
              {
                "x": 14, "y": 11
              },
              {
                "x": 279, "y": 11
              },
              {
                "x": 279, "y": 37
              },
              {
                "x": 14, "y": 37
              }
            ]
          }
        }
      ],
      "fullTextAnnotation": {
        "pages": [
          {
            "property": {
              "detectedLanguages": [
                {
                  "languageCode": "en",
                  "confidence": 1
                }
              ]
            },
            "width": 281,
            "height": 44,
            "blocks": [
              {
                "property": {
                  "detectedLanguages": [
                    {
                      "languageCode": "en",
                      "confidence": 1
                    }
                  ]
                },
                "boundingBox": {
                  "vertices": [
                    {
                      "x": 14, "y": 11
                    },
                    {
                      "x": 279, "y": 11
                    },
                    {
                      "x": 279, "y": 37
                    },
                    {
                      "x": 14, "y": 37
                    }
                  ]
                },
                "paragraphs": [
                  {
                    "property": {
                      "detectedLanguages": [
                        {
                          "languageCode": "en"
                        }
                      ]
                    },
                    "boundingBox": {
                      "vertices": [
                        {
                          "x": 14, "y": 11
                        },
                        {
                          "x": 279, "y": 11
                        },
                        {
                          "x": 279, "y": 37
                        },
                        {
                          "x": 14, "y": 37
                        }
                      ]
                    },
                    "words": [
                      {
                        "property": {
                          "detectedLanguages": [
                            {
                              "languageCode": "en"
                            }
                          ]
                        },
                        "boundingBox": {
                          "vertices": [
                            {
                              "x": 14, "y": 11
                            },
                            {
                              "x": 23, "y": 11
                            },
                            {
                              "x": 23, "y": 37
                            },
                            {
                              "x": 14, "y": 37
                            }
                          ]
                        },
                        "symbols": [
                          {
                            "property": {
                              "detectedLanguages": [
                                {
                                  "languageCode": "en"
                                }
                              ],
                              "detectedBreak": {
                                "type": "SPACE"
                              }
                            },
                            "boundingBox": {
                              "vertices": [
                                {
                                  "x": 14, "y": 11
                                },
                                {
                                  "x": 23, "y": 11
                                },
                                {
                                  "x": 23, "y": 37
                                },
                                {
                                  "x": 14, "y": 37
                                }
                              ]
                            },
                            "text": "O"
                          }
                        ]
                      }
                    ]
                  }
                ],
                "blockType": "TEXT"
              }
            ]
          }
        ],
        "text": "Google Cloud Platform\n"
      }
    }
  ]
}
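
One use for the break information is reassembling the text yourself. The following Python sketch walks the fullTextAnnotation hierarchy and re-inserts whitespace from each symbol's detectedBreak (a simplified sketch for the pre-2.0 google-cloud-vision client used elsewhere on this page; only the space and line-break types are handled):

from google.cloud import vision_v1p3beta1 as vision

def rebuild_text(full_text_annotation):
    """Reassembles page text from symbols and their detected breaks."""
    break_type = vision.enums.TextAnnotation.DetectedBreak.BreakType
    pieces = []
    for page in full_text_annotation.pages:
        for block in page.blocks:
            for paragraph in block.paragraphs:
                for word in paragraph.words:
                    for symbol in word.symbols:
                        pieces.append(symbol.text)
                        detected = symbol.property.detected_break.type
                        if detected in (break_type.SPACE, break_type.SURE_SPACE):
                            pieces.append(' ')
                        elif detected in (break_type.EOL_SURE_SPACE,
                                          break_type.LINE_BREAK):
                            pieces.append('\n')
    return ''.join(pieces)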

Detecting Handwriting

Document text detection can also detect handwriting in an image. When an image is known to contain handwriting, a language hint can be provided to improve results.

Sample code

You can detect handwriting in an image by passing the image's file path or public URL to the Cloud Vision API.

Protocol

To detect handwriting in an image, specify the DOCUMENT_TEXT_DETECTION feature and include a language hint of "en-t-i0-handwrit". The language hint tells the Vision API to run handwriting detection on the image. Then make a POST request and provide the appropriate request body:

POST https://vision.googleapis.com/v1p3beta1/images:annotate
Authorization: Bearer ACCESS_TOKEN

{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "<var>image-url</var>"
        }
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "imageContext": {
        "languageHints": ["en-t-i0-handwrit"]
      }
    }
  ]
}
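
You can send this body with curl in the same way as the earlier example; for instance, assuming the body above is saved to a file named request.json:

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     --data @request.json \
     "https://vision.googleapis.com/v1p3beta1/images:annotate"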

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java API reference documentation.

/**
 * Performs handwritten text detection on a local image file.
 *
 * @param filePath The path to the local file to detect handwritten text on.
 * @param out A {@link PrintStream} to write the results to.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectHandwrittenOcr(String filePath, PrintStream out) throws Exception {
  List<AnnotateImageRequest> requests = new ArrayList<>();

  ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));

  Image img = Image.newBuilder().setContent(imgBytes).build();
  Feature feat = Feature.newBuilder().setType(Type.DOCUMENT_TEXT_DETECTION).build();
  // Set the Language Hint codes for handwritten OCR
  ImageContext imageContext =
      ImageContext.newBuilder().addLanguageHints("en-t-i0-handwrit").build();

  AnnotateImageRequest request =
      AnnotateImageRequest.newBuilder()
          .addFeatures(feat)
          .setImage(img)
          .setImageContext(imageContext)
          .build();
  requests.add(request);

  try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
    BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
    List<AnnotateImageResponse> responses = response.getResponsesList();

    for (AnnotateImageResponse res : responses) {
      if (res.hasError()) {
        out.printf("Error: %s\n", res.getError().getMessage());
        return;
      }

      // For full list of available annotations, see http://g.co/cloud/vision/docs
      TextAnnotation annotation = res.getFullTextAnnotation();
      for (Page page : annotation.getPagesList()) {
        String pageText = "";
        for (Block block : page.getBlocksList()) {
          String blockText = "";
          for (Paragraph para : block.getParagraphsList()) {
            String paraText = "";
            for (Word word : para.getWordsList()) {
              String wordText = "";
              for (Symbol symbol : word.getSymbolsList()) {
                wordText = wordText + symbol.getText();
                out.format(
                    "Symbol text: %s (confidence: %f)\n",
                    symbol.getText(), symbol.getConfidence());
              }
              out.format("Word text: %s (confidence: %f)\n\n", wordText, word.getConfidence());
              paraText = String.format("%s %s", paraText, wordText);
            }
            // Output Example using Paragraph:
            out.println("\nParagraph: \n" + paraText);
            out.format("Paragraph Confidence: %f\n", para.getConfidence());
            blockText = blockText + paraText;
          }
          pageText = pageText + blockText;
        }
      }
      out.println("\nComplete annotation:");
      out.println(annotation.getText());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Node.js API reference documentation.

// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision').v1p3beta1;
const fs = require('fs');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const fileName = `/path/to/localImage.png`;

const request = {
  image: {
    content: fs.readFileSync(fileName),
  },
  imageContext: {
    languageHints: ['en-t-i0-handwrit'],
  },
};
client
  .documentTextDetection(request)
  .then(results => {
    const fullTextAnnotation = results[0].fullTextAnnotation;
    console.log(`Full text: ${fullTextAnnotation.text}`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });

Python

Before trying this sample, follow the Python setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Python API reference documentation.

def detect_handwritten_ocr(path):
    """Detects handwritten characters in a local image.

    Args:
    path: The path to the local file.
    """
    import io
    from google.cloud import vision_v1p3beta1 as vision
    client = vision.ImageAnnotatorClient()

    with io.open(path, 'rb') as image_file:
        content = image_file.read()

    image = vision.types.Image(content=content)

    # Language hint codes for handwritten OCR:
    # en-t-i0-handwrit, mul-Latn-t-i0-handwrit
    # Note: Use only one language hint code per request for handwritten OCR.
    image_context = vision.types.ImageContext(
        language_hints=['en-t-i0-handwrit'])

    response = client.document_text_detection(image=image,
                                              image_context=image_context)

    print('Full Text: {}'.format(response.full_text_annotation.text))
    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            print('\nBlock confidence: {}\n'.format(block.confidence))

            for paragraph in block.paragraphs:
                print('Paragraph confidence: {}'.format(
                    paragraph.confidence))

                for word in paragraph.words:
                    word_text = ''.join([
                        symbol.text for symbol in word.symbols
                    ])
                    print('Word text: {} (confidence: {})'.format(
                        word_text, word.confidence))

                    for symbol in word.symbols:
                        print('\tSymbol: {} (confidence: {})'.format(
                            symbol.text, symbol.confidence))

In addition to reading a local file and inlining the image in the request, you can also specify the Google Cloud Storage URI of the image from which you wish to extract handwriting.

Protocol

To detect handwriting in an image, specify the DOCUMENT_TEXT_DETECTION feature and include a language hint of "en-t-i0-handwrit". The language hint tells the Vision API to run handwriting detection on the image. Then make a POST request and provide the appropriate request body:

POST https://vision.googleapis.com/v1p3beta1/images:annotate
Authorization: Bearer ACCESS_TOKEN

{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "gs://YOUR_BUCKET_NAME/YOUR_FILE_NAME"
        }
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "imageContext": {
        "languageHints": ["en-t-i0-handwrit"]
      }
    }
  ]
}

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java API reference documentation.

/**
 * Performs handwritten text detection on a remote image on Google Cloud Storage.
 *
 * @param gcsPath The path to the remote file on Google Cloud Storage to detect handwritten text
 *     on.
 * @param out A {@link PrintStream} to write the results to.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectHandwrittenOcrGcs(String gcsPath, PrintStream out) throws Exception {
  List<AnnotateImageRequest> requests = new ArrayList<>();

  ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
  Image img = Image.newBuilder().setSource(imgSource).build();

  Feature feat = Feature.newBuilder().setType(Type.DOCUMENT_TEXT_DETECTION).build();
  // Set the parameters for the image
  ImageContext imageContext =
      ImageContext.newBuilder().addLanguageHints("en-t-i0-handwrit").build();

  AnnotateImageRequest request =
      AnnotateImageRequest.newBuilder()
          .addFeatures(feat)
          .setImage(img)
          .setImageContext(imageContext)
          .build();
  requests.add(request);

  try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
    BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
    List<AnnotateImageResponse> responses = response.getResponsesList();

    for (AnnotateImageResponse res : responses) {
      if (res.hasError()) {
        out.printf("Error: %s\n", res.getError().getMessage());
        return;
      }

      // For full list of available annotations, see http://g.co/cloud/vision/docs
      TextAnnotation annotation = res.getFullTextAnnotation();
      for (Page page : annotation.getPagesList()) {
        String pageText = "";
        for (Block block : page.getBlocksList()) {
          String blockText = "";
          for (Paragraph para : block.getParagraphsList()) {
            String paraText = "";
            for (Word word : para.getWordsList()) {
              String wordText = "";
              for (Symbol symbol : word.getSymbolsList()) {
                wordText = wordText + symbol.getText();
                out.format(
                    "Symbol text: %s (confidence: %f)\n",
                    symbol.getText(), symbol.getConfidence());
              }
              out.format("Word text: %s (confidence: %f)\n\n", wordText, word.getConfidence());
              paraText = String.format("%s %s", paraText, wordText);
            }
            // Output Example using Paragraph:
            out.println("\nParagraph: \n" + paraText);
            out.format("Paragraph Confidence: %f\n", para.getConfidence());
            blockText = blockText + paraText;
          }
          pageText = pageText + blockText;
        }
      }
      out.println("\nComplete annotation:");
      out.println(annotation.getText());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Node.js API reference documentation.

// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision').v1p3beta1;

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const uri = `gs://bucket/bucketImage.png`;

const request = {
  image: {
    source: {
      imageUri: uri,
    },
  },
  imageContext: {
    languageHints: ['en-t-i0-handwrit'],
  },
};

client
  .documentTextDetection(request)
  .then(results => {
    const fullTextAnnotation = results[0].fullTextAnnotation;
    console.log(`Full text: ${fullTextAnnotation.text}`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });

Python

Before trying this sample, follow the Python setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Python API reference documentation.

def detect_handwritten_ocr_uri(uri):
    """Detects handwritten characters in the file located in Google Cloud
    Storage.

    Args:
    uri: The path to the file in Google Cloud Storage (gs://...)
    """
    from google.cloud import vision_v1p3beta1 as vision
    client = vision.ImageAnnotatorClient()
    image = vision.types.Image()
    image.source.image_uri = uri

    # Language hint codes for handwritten OCR:
    # en-t-i0-handwrit, mul-Latn-t-i0-handwrit
    # Note: Use only one language hint code per request for handwritten OCR.
    image_context = vision.types.ImageContext(
        language_hints=['en-t-i0-handwrit'])

    response = client.document_text_detection(image=image,
                                              image_context=image_context)

    print('Full Text: {}'.format(response.full_text_annotation.text))
    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            print('\nBlock confidence: {}\n'.format(block.confidence))

            for paragraph in block.paragraphs:
                print('Paragraph confidence: {}'.format(
                    paragraph.confidence))

                for word in paragraph.words:
                    word_text = ''.join([
                        symbol.text for symbol in word.symbols
                    ])
                    print('Word text: {} (confidence: {})'.format(
                        word_text, word.confidence))

                    for symbol in word.symbols:
                        print('\tSymbol: {} (confidence: {})'.format(
                            symbol.text, symbol.confidence))

