Small batch file annotation online

The Vision API can provide online (immediate) annotation of multiple pages or frames from PDF, TIFF, or GIF files stored in Cloud Storage.

You can request online feature detection and annotation of up to 5 frames (GIF, "image/gif") or pages (PDF, "application/pdf", or TIFF, "image/tiff") of your choosing for each file.

The example annotations on this page are for DOCUMENT_TEXT_DETECTION, but online small batch annotation is available for all Vision API features.

[Image: first five pages of the example PDF file]
gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf

Page 1

[Image: page 1 of the example PDF]
...
"text": "á\n7.1.15\nOIL, GAS AND MINERAL LEASE
\nNORVEL J. CHITTIM, ET AL\n.\n.
\nTO\nW. L. SCHEIG\n"
},
"context": {"pageNumber": 1}
...

Page 2

[Image: page 2 (top) of the example PDF]
...
"text": "...\n.\n*\n.\n.\n.\nA\nNY\nALA...\n7
\n| THE STATE OF TEXAS
\nOIL, GAS AND MINERAL LEASE
\nCOUNTY OF MAVERICK ]
\nTHIS AGREEMENT made this 14 day of_June
\n1954, between Norvel J. Chittim and his wife, Lieschen G. Chittim;
\nMary Anne Chittim Parker, joined herein pro forma by her husband,
\nJoseph Bright Parker; Dorothea Chittim Oppenheimer, joined herein
\npro forma by her husband, Fred J. Oppenheimer; Tuleta Chittim
\nWright, joined herein pro forma by her husband, Gilbert G. Wright,
\nJr.; Gilbert G. Wright, III; Dela Wright White, joined herein pro
\nforma by her husband, John H. White; Anne Wright Basse, joined
\nherein pro forma by her husband, E. A. Basse, Jr.; Norvel J.
\nChittim, Independent Executor and Trustee for Estate of Marstella
\nChittim, Deceased; Mary Louise Roswell, joined herein pro forma by
\nher husband, Charles M. 'Roswell; and James M. Chittim and his wife,
\nThelma Neal Chittim; as LESSORS, and W. L. Scheig of San Antonio,
\nTexas, as LESSEE,
[Image: page 2 (bottom) of the example PDF]
\nW I T N E s s E T H:
\n1. Lessors, in consideration of $10.00, cash in hand paid,
\nof the royalties herein provided, and of the agreement of Lessee
\nherein contained, hereby grant, lease and let exclusively unto
\nLessee the tracts of land hereinafter described for the purpose of
\ntesting for mineral indications, and in such tests use the Seismo-
\ngraph, Torsion Balance, Core Drill, or any other tools, machinery,
\nequipment or explosive necessary and proper; and also prospecting,
\ndrilling and mining for and producing oil, gas and other minerals
\n(except metallic minerals), laying pipe lines, building tanks,
\npower stations, telephone lines and other structures thereon to
\nproduce, save, take care of, treat, transport and own said pro-
\nducts and housing its employees (Lessee to conduct its geophysical
\nwork in such manner as not to damage the buildings, water tanks
\nor wells of Lessors, or the livestock of Lessors or Lessors' ten- !
\nants, )said lands being situated in Maverick, Zavalla and Dimmit
\nCounties, Texas, to-wit:\n3-1.\n"
},
"context": {"pageNumber": 2}
...

Page 3

[Image: page 3 (top) of the example PDF]
...
"text": "Being a tract consisting of 140,769.86 acres, more or
\nless, out of what is known as the \"Chittim Ranch\" in said counties,
\nas designated and described in Exhibit \"A\" hereto attached and
\nmade a part hereof as if fully written herein. It being under-
\nstood that the acreage intended to be included in this lease aggre-
\ngates approximately 140,769.86 acres whether it actually comprises
\nmore or less, but for the purpose of calculating the payments
\nhereinafter provided for, it is agreed that the land included with-
\nin the terms of this lease is One hundred forty thousand seven
\nhundred sixty-nine and eighty-six one hundredths (140,769.86) acres,
\nand that each survey listed above contains the acreage stated above.
\nIt is understood that tract designated \"TRACT II\" in
\nExhibit \"A\" is subject to a one-sixteenth (1/16) royalty reserved.
\nto the State of Texas, and the rights of the State of Texas must
\nbe respected in the development of the said property.
[Image: page 3 (bottom) of the example PDF]
\n2. Subject to the other provisions hereof, this lease shall
\nbe for a term of ten (10) years from date hereof (called \"Primary
\nTerm\"), and as long thereafter as oil, gas or other minerals
\n(except metallic minerals) are produced from said land hereunder
\nin paying quantities, subject, however, to all of the terms and
\nprovisions of this lease. After expiration of the primary term,
\nthis lease shall terminate as to all lands included herein, save
\nand except as to those tracts which lessee maintains in force and
\neffect according to the requirements hereof.
\n3. The royalties to be paid by Lessee are (a) on oil, one-
\neighth (1/8) of that produced and saved from said land, the same to
\nbe delivered at the well or to the credit of Lessors into the pipe i
\nline to which the well may be connected; (b) on gas, including
\ni casinghead gas or other gaseous or vaporous substance, produced
\nfrom the leased premises and sold or used by Lessee off the leased
\npremises or in the manufacture of gasoline or other products, the
\nmarket value, at the mouth of the well, of one-eighth (1/8) of
\n.\n3-2-\n?\n"
},
"context": {"pageNumber": 3}
...

Page 4

[Image: page 4 (top) of the example PDF]
...
"text": "•\n:\n.\nthe gas or casinghead gas so used or sold. On all gas or casing-
\nhead gas sold at the well, the royalty shall be one-eighth (1/8)
\nof the amounts realized from such sales. While gas from any well
\nproducing gas only is being used or sold by. Lessee, Lessor may have
\nenough of said gas for all stoves and inside lights in the prin-
\ncipal dwelling house on the leased premises by making Lessors' own
\nconnections with the well and by assuming all risk and paying all
\nexpenses. And (c) on all other minerals (except metallic minerals)
\nmined and marketed, one tenth (1/10). either in kind or value at the
\nwell or mine at Lessee's election.
\nFor the purpose of royalty payments under 3 (b) hereof,
\nall liquid hydrocarbons (including distillate) recovered and saved
\n| by Lessee in separators or traps on the leased premises shall be
\nconsidered as oil. Should such a plant be constructed by another
\nthan Lessee to whom Lessee should sell or deliver the gas or cas-
\ninghead gas produced from the leased premises for processing, then
\nthe royalty thereon shall be one-eighth (1/8) of the amounts
\nrealized by Lessee from such sales or deliveries.
[Image: page 4 (bottom) of the example PDF]
\nOr if such plant is owned or constructed or operated by
\nLessee, then the royalty shall be on the basis of one-eighth (1/8) |
\nof the prevailing price in the area for such products..
\nThe provisions of this paragraph shall control as to any
\nconflict with Paragraph 3 (b). Lessors shall also be entitled to
\nsaid royalty interest in all residue gas .obtained, saved and mar-
\nketed from said premises, or used off the premises, or that may be
\nreplaced in the reservoir by 'any recycling process, settlement
\ntherefor to be made to Lessors when such gas is marketed or used
\noff the premises. !
\nIf at the expiration of the primary term of this lease
\nLessee has not found and produced oil or gas in paying quantities
\nin any formation lying fifty (50) feet below the base of what is
\nknown as the Rhodessa section at the particular point where the
\nwell is drilled, then, subject to the further provisions hereof,
\nthis lease shall terminate as to all horizons below fifty (50)
\nI feet below the Rhodessa section. And if at the expiration of the
\n3 -3-\n"
},
"context": {"pageNumber": 4}
...

Page 5

[Image: page 5 (top) of the example PDF]
...
"text": ".\n.\n:\nI\n.\n.\n.:250:-....\n.\n...\n.\n....\n....\n..\n..\n. ..
\n.\n..\n.\n...\n...\n.-\n.\n.\n..\n..\n17\n.\n:\n-\n-\n-\n.\n..\n.
\nprimary term production of oil or gas in paying quantities is not
\nfound in the Jurassic, then this lease shall terminate as to the
\nJurassic and lower formations unless Lessee shall have completed
\nat least two (2) tests in the Jurassic. And after the primary
\nterm Lessee shall complete at least one (1) Jurassic test each
\nthree years on said property as to which this lease is still in
\neffect, until paying production is obtained in or below the
\nJurassic, or upon failure so to do Lessee shall release this
\nlease as to all formations below the top of the Jurassic. Upon
\ncompliance with the above provisions as to Jurassic tests, and
\nif production is found in the Jurassic, then, subject to the
\nother provisions hereof, this lease shall be effective as to all
\nhorizons, including the Jurassic..
\n5. It is understood and expressly agreed that the consider-
\niation first recited in this lease, the down cash payment, receipt
\nof which is hereby acknowledged by Lessors, is full and adequate
\nconsideration to maintain this lease in full force and effect for
\na period of one year from the date hereof, and does not impose
\nany obligation on the part of Lessee to drill and develop this
\nlease during the said term of one year from date of this lease.
[Image: page 5 (bottom) of the example PDF]
\n6. This lease shall terminate as to both parties unless
\non or before one year from this date, Lessee shall pay to or ten- !
\nder to Lessors or to the credit of Lessors, in the National Bank
\nof Commerce, at San Antonio, Texas, (which bank and its successors
\nare Lessors' agent, and shall continue as the depository for all \"
\nrental payable hereunder regardless of changes in ownership of
\nsaid land or the rental), the sum of One Dollar ($1.00) per acre
\nas to all acreage then covered by this lease, and not surrendered,
\nor maintained by production of oil, gas or other minerals, or by
\ndrilling-reworking operations, all as hereinafter fully set out, :
\nwhich shall maintain this lease in full force and effect for
\nanother twelve-month period, without imposing any obligation on
\nthe part of Lessee to drill and develop this lease. In like
\nmanner, and upon like payment or tender annually, Lessee may
\nmaintain this lease .in full force and effect for successive
\ntwelve-month periods during the primary term, without imposing
\n.\n--.\n.\n.\n.\n-\n::\n---
\n-\n3\n.\n..-\n-\n-\n:.\n.\n::\n.
\n3-4-\n"
},
"context": {"pageNumber": 5}
...

Limitations

At most 5 pages of each file are annotated. You can specify which 5 pages to annotate.

Authentication

Set up your GCP project and authentication

Currently supported feature types

Feature type and description:

  • CROP_HINTS: Determine suggested vertices for a crop region on an image.
  • DOCUMENT_TEXT_DETECTION: Perform OCR on dense text images, such as documents (PDF/TIFF), and images with handwriting. TEXT_DETECTION can be used for sparse text images. Takes precedence when both DOCUMENT_TEXT_DETECTION and TEXT_DETECTION are present.
  • FACE_DETECTION: Detect faces within the image.
  • IMAGE_PROPERTIES: Compute a set of image properties, such as the image's dominant colors.
  • LABEL_DETECTION: Add labels based on image content.
  • LANDMARK_DETECTION: Detect geographic landmarks within the image.
  • LOGO_DETECTION: Detect company logos within the image.
  • OBJECT_LOCALIZATION: Detect and extract multiple objects in an image.
  • SAFE_SEARCH_DETECTION: Run Safe Search to detect potentially unsafe or undesirable content.
  • TEXT_DETECTION: Perform Optical Character Recognition (OCR) on text within the image. Text detection is optimized for areas of sparse text within a larger image. If the image is a document (PDF/TIFF), has dense text, or contains handwriting, use DOCUMENT_TEXT_DETECTION instead.
  • WEB_DETECTION: Detect topical entities such as news, events, or celebrities within the image, and find similar images on the web using the power of Google Image Search.

Sample code

You can either send an annotation request with a locally stored file, or use a file that is stored on Cloud Storage.

Using a locally stored file

Use the following code samples to get any feature annotation for a locally stored file.

REST & CMD LINE

To perform online PDF/TIFF/GIF feature detection for a small batch of files, make a POST request and provide the appropriate request body:

Before using any of the request data below, make the following replacements:

  • base64-encoded-file: The base64 representation (ASCII string) of your binary file data. This string should look similar to the following string:
    • JVBERi0xLjUNCiW1tbW1...ydHhyZWYNCjk5NzM2OQ0KJSVFT0Y=
    Visit the base64 encode topic for more information.
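If you need to produce this string yourself, any standard base64 tool works. The following is a minimal Python sketch (the `encode_file` helper and the `document.pdf` path are illustrative, not part of the API):

```python
import base64


def encode_file(path):
    """Return the ASCII base64 string expected in inputConfig's "content" field."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


# Example usage; "document.pdf" is a placeholder path.
# encoded = encode_file("document.pdf")
```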

Field-specific considerations:

  • inputConfig.mimeType - One of the following: "application/pdf", "image/tiff" or "image/gif".
  • pages - specifies which pages of the file to perform feature detection on.

HTTP method and URL:

POST https://vision.googleapis.com/v1/files:annotate

Request JSON body:

{
  "requests": [
    {
      "inputConfig": {
        "content": "base64-encoded-file",
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "pages": [
        1,2,3,4,5
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://vision.googleapis.com/v1/files:annotate

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/files:annotate" | Select-Object -Expand Content

Response:

A successful annotate request immediately returns a JSON response.

For this feature (DOCUMENT_TEXT_DETECTION) the JSON response is similar to that of an image's document text detection request. The response contains bounding boxes for blocks broken down by paragraphs, words, and individual symbols, as well as the full text detected. The response also contains a context field showing the location of the PDF or TIFF that was specified and the result's page number in the file.

The response JSON shown is only for a single page (page 2), and has been shortened for clarity.
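Because the response is plain JSON, the per-page results can also be pulled out without a client library. The sketch below assumes the raw response has been parsed into a Python dict whose structure mirrors the excerpts above (the `pages_from_response` helper is illustrative, not part of the API):

```python
def pages_from_response(response):
    """Yield (page_number, full_text) pairs from a files:annotate response dict.

    The outer "responses" list has one entry per file in the request; each
    entry's inner "responses" list has one entry per annotated page.
    """
    for file_response in response.get("responses", []):
        for image_response in file_response.get("responses", []):
            context = image_response.get("context", {})
            text = image_response.get("fullTextAnnotation", {}).get("text", "")
            yield context.get("pageNumber"), text
```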

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java API reference documentation.

/*
 * Please include the following imports to run this sample.
 *
 * import com.google.cloud.vision.v1.AnnotateFileRequest;
 * import com.google.cloud.vision.v1.AnnotateImageResponse;
 * import com.google.cloud.vision.v1.BatchAnnotateFilesRequest;
 * import com.google.cloud.vision.v1.BatchAnnotateFilesResponse;
 * import com.google.cloud.vision.v1.Block;
 * import com.google.cloud.vision.v1.Feature;
 * import com.google.cloud.vision.v1.ImageAnnotatorClient;
 * import com.google.cloud.vision.v1.InputConfig;
 * import com.google.cloud.vision.v1.Page;
 * import com.google.cloud.vision.v1.Paragraph;
 * import com.google.cloud.vision.v1.Symbol;
 * import com.google.cloud.vision.v1.Word;
 * import com.google.protobuf.ByteString;
 * import java.nio.file.Files;
 * import java.nio.file.Path;
 * import java.nio.file.Paths;
 * import java.util.Arrays;
 * import java.util.List;
 */

/**
 * Perform batch file annotation
 *
 * @param filePath Path to local pdf file, e.g. /path/document.pdf
 */
public static void sampleBatchAnnotateFiles(String filePath) {
  try (ImageAnnotatorClient imageAnnotatorClient = ImageAnnotatorClient.create()) {
    // filePath = "resources/kafka.pdf";

    // Supported mime_type: application/pdf, image/tiff, image/gif
    String mimeType = "application/pdf";
    Path path = Paths.get(filePath);
    byte[] data = Files.readAllBytes(path);
    ByteString content = ByteString.copyFrom(data);
    InputConfig inputConfig =
        InputConfig.newBuilder().setMimeType(mimeType).setContent(content).build();
    Feature.Type type = Feature.Type.DOCUMENT_TEXT_DETECTION;
    Feature featuresElement = Feature.newBuilder().setType(type).build();
    List<Feature> features = Arrays.asList(featuresElement);

    // The service can process up to 5 pages per document file. Here we specify the
    // first, second, and last page of the document to be processed.
    int pagesElement = 1;
    int pagesElement2 = 2;
    int pagesElement3 = -1;
    List<Integer> pages = Arrays.asList(pagesElement, pagesElement2, pagesElement3);
    AnnotateFileRequest requestsElement =
        AnnotateFileRequest.newBuilder()
            .setInputConfig(inputConfig)
            .addAllFeatures(features)
            .addAllPages(pages)
            .build();
    List<AnnotateFileRequest> requests = Arrays.asList(requestsElement);
    BatchAnnotateFilesRequest request =
        BatchAnnotateFilesRequest.newBuilder().addAllRequests(requests).build();
    BatchAnnotateFilesResponse response = imageAnnotatorClient.batchAnnotateFiles(request);
    for (AnnotateImageResponse imageResponse :
        response.getResponsesList().get(0).getResponsesList()) {
      System.out.printf("Full text: %s\n", imageResponse.getFullTextAnnotation().getText());
      for (Page page : imageResponse.getFullTextAnnotation().getPagesList()) {
        for (Block block : page.getBlocksList()) {
          System.out.printf("\nBlock confidence: %s\n", block.getConfidence());
          for (Paragraph par : block.getParagraphsList()) {
            System.out.printf("\tParagraph confidence: %s\n", par.getConfidence());
            for (Word word : par.getWordsList()) {
              System.out.printf("\t\tWord confidence: %s\n", word.getConfidence());
              for (Symbol symbol : word.getSymbolsList()) {
                System.out.printf(
                    "\t\t\tSymbol: %s, (confidence: %s)\n",
                    symbol.getText(), symbol.getConfidence());
              }
            }
          }
        }
      }
    }
  } catch (Exception exception) {
    System.err.println("Failed to create the client due to: " + exception);
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Node.js API reference documentation.


const vision = require('@google-cloud/vision').v1;

const fs = require('fs');
/**
 * Perform batch file annotation
 *
 * @param filePath {string} Path to local pdf file, e.g. /path/document.pdf
 */
function sampleBatchAnnotateFiles(filePath) {
  const client = new vision.ImageAnnotatorClient();
  // const filePath = 'resources/kafka.pdf';

  // Supported mime_type: application/pdf, image/tiff, image/gif
  const mimeType = 'application/pdf';
  const content = fs.readFileSync(filePath).toString('base64');
  const inputConfig = {
    mimeType: mimeType,
    content: content,
  };
  const type = 'DOCUMENT_TEXT_DETECTION';
  const featuresElement = {
    type: type,
  };
  const features = [featuresElement];

  // The service can process up to 5 pages per document file. Here we specify the first, second, and
  // last page of the document to be processed.
  const pagesElement = 1;
  const pagesElement2 = 2;
  const pagesElement3 = -1;
  const pages = [pagesElement, pagesElement2, pagesElement3];
  const requestsElement = {
    inputConfig: inputConfig,
    features: features,
    pages: pages,
  };
  const requests = [requestsElement];
  client.batchAnnotateFiles({requests: requests})
    .then(responses => {
      const response = responses[0];
      for (const imageResponse of response.responses[0].responses) {
        console.log(`Full text: ${imageResponse.fullTextAnnotation.text}`);
        for (const page of imageResponse.fullTextAnnotation.pages) {
          for (const block of page.blocks) {
            console.log(`\nBlock confidence: ${block.confidence}`);
            for (const par of block.paragraphs) {
              console.log(`\tParagraph confidence: ${par.confidence}`);
              for (const word of par.words) {
                console.log(`\t\tWord confidence: ${word.confidence}`);
                for (const symbol of word.symbols) {
                  console.log(`\t\t\tSymbol: ${symbol.text}, (confidence: ${symbol.confidence})`);
                }
              }
            }
          }
        }
      }
    })
    .catch(err => {
      console.error(err);
    });
}

Python

Before trying this sample, follow the Python setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Python API reference documentation.


from google.cloud import vision_v1
from google.cloud.vision_v1 import enums
import io
import six

def sample_batch_annotate_files(file_path):
  """
    Perform batch file annotation

    Args:
      file_path Path to local pdf file, e.g. /path/document.pdf
    """

  client = vision_v1.ImageAnnotatorClient()

  # file_path = 'resources/kafka.pdf'

  if isinstance(file_path, six.binary_type):
    file_path = file_path.decode('utf-8')

  # Supported mime_type: application/pdf, image/tiff, image/gif
  mime_type = 'application/pdf'
  with io.open(file_path, 'rb') as f:
    content = f.read()
  input_config = {'mime_type': mime_type, 'content': content}
  type_ = enums.Feature.Type.DOCUMENT_TEXT_DETECTION
  features_element = {'type': type_}
  features = [features_element]

  # The service can process up to 5 pages per document file. Here we specify the
  # first, second, and last page of the document to be processed.
  pages_element = 1
  pages_element_2 = 2
  pages_element_3 = -1
  pages = [pages_element, pages_element_2, pages_element_3]
  requests_element = {'input_config': input_config, 'features': features, 'pages': pages}
  requests = [requests_element]

  response = client.batch_annotate_files(requests)
  for image_response in response.responses[0].responses:
    print('Full text: {}'.format(image_response.full_text_annotation.text))
    for page in image_response.full_text_annotation.pages:
      for block in page.blocks:
        print('\nBlock confidence: {}'.format(block.confidence))
        for par in block.paragraphs:
          print('\tParagraph confidence: {}'.format(par.confidence))
          for word in par.words:
            print('\t\tWord confidence: {}'.format(word.confidence))
            for symbol in word.symbols:
              print('\t\t\tSymbol: {}, (confidence: {})'.format(symbol.text, symbol.confidence))

Ruby

Before trying this sample, follow the Ruby setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Ruby API reference documentation.


 # Perform batch file annotation
 #
 # @param file_path {String} Path to local pdf file, e.g. /path/document.pdf
def sample_batch_annotate_files(file_path)
  # Instantiate a client
  image_annotator_client = Google::Cloud::Vision::ImageAnnotator.new version: :v1

  # file_path = "resources/kafka.pdf"

  # Supported mime_type: application/pdf, image/tiff, image/gif
  mime_type = "application/pdf"
  content = File.binread file_path
  input_config = { mime_type: mime_type, content: content }
  type = :DOCUMENT_TEXT_DETECTION
  features_element = { type: type }
  features = [features_element]

  # The service can process up to 5 pages per document file. Here we specify the first, second, and
  # last page of the document to be processed.
  pages_element = 1
  pages_element_2 = 2
  pages_element_3 = -1
  pages = [pages_element, pages_element_2, pages_element_3]
  requests_element = {
    input_config: input_config,
    features: features,
    pages: pages
  }
  requests = [requests_element]

  response = image_annotator_client.batch_annotate_files(requests)
  response.responses[0].responses.each do |image_response|
    puts "Full text: #{image_response.full_text_annotation.text}"
    image_response.full_text_annotation.pages.each do |page|
      page.blocks.each do |block|
        puts "\nBlock confidence: #{block.confidence}"
        block.paragraphs.each do |par|
          puts "\tParagraph confidence: #{par.confidence}"
          par.words.each do |word|
            puts "\t\tWord confidence: #{word.confidence}"
            word.symbols.each do |symbol|
              puts "\t\t\tSymbol: #{symbol.text}, (confidence: #{symbol.confidence})"
            end
          end
        end
      end
    end
  end
end

Using a file on Cloud Storage

Use the following code samples to get any feature annotation for a file on Cloud Storage.

REST & CMD LINE

To perform online PDF/TIFF/GIF feature detection for a small batch of files, make a POST request and provide the appropriate request body:

Before using any of the request data below, make the following replacements:

  • cloud-storage-file-uri: the path to a valid file (PDF/TIFF) in a Cloud Storage bucket. You must have at least read privileges to the file. Example:
    • gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf

Field-specific considerations:

  • inputConfig.mimeType - One of the following: "application/pdf", "image/tiff" or "image/gif".
  • pages - specifies which pages of the file to perform feature detection on.

HTTP method and URL:

POST https://vision.googleapis.com/v1/files:annotate

Request JSON body:

{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "cloud-storage-file-uri"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "pages": [
        1,2,3,4,5
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://vision.googleapis.com/v1/files:annotate

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/files:annotate" | Select-Object -Expand Content

Response:

A successful annotate request immediately returns a JSON response.

For this feature (DOCUMENT_TEXT_DETECTION) the JSON response is similar to that of an image's document text detection request. The response contains bounding boxes for blocks broken down by paragraphs, words, and individual symbols, as well as the full text detected. The response also contains a context field showing the location of the PDF or TIFF that was specified and the result's page number in the file.

The response JSON shown is only for a single page (page 2), and has been shortened for clarity.

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java API reference documentation.

/*
 * Please include the following imports to run this sample.
 *
 * import com.google.cloud.vision.v1.AnnotateFileRequest;
 * import com.google.cloud.vision.v1.AnnotateImageResponse;
 * import com.google.cloud.vision.v1.BatchAnnotateFilesRequest;
 * import com.google.cloud.vision.v1.BatchAnnotateFilesResponse;
 * import com.google.cloud.vision.v1.Block;
 * import com.google.cloud.vision.v1.Feature;
 * import com.google.cloud.vision.v1.GcsSource;
 * import com.google.cloud.vision.v1.ImageAnnotatorClient;
 * import com.google.cloud.vision.v1.InputConfig;
 * import com.google.cloud.vision.v1.Page;
 * import com.google.cloud.vision.v1.Paragraph;
 * import com.google.cloud.vision.v1.Symbol;
 * import com.google.cloud.vision.v1.Word;
 * import java.util.Arrays;
 * import java.util.List;
 */

/**
 * Perform batch file annotation
 *
 * @param storageUri Cloud Storage URI to source image in the format gs://[bucket]/[file]
 */
public static void sampleBatchAnnotateFiles(String storageUri) {
  try (ImageAnnotatorClient imageAnnotatorClient = ImageAnnotatorClient.create()) {
    // storageUri = "gs://cloud-samples-data/vision/document_understanding/kafka.pdf";
    GcsSource gcsSource = GcsSource.newBuilder().setUri(storageUri).build();
    InputConfig inputConfig = InputConfig.newBuilder().setGcsSource(gcsSource).build();
    Feature.Type type = Feature.Type.DOCUMENT_TEXT_DETECTION;
    Feature featuresElement = Feature.newBuilder().setType(type).build();
    List<Feature> features = Arrays.asList(featuresElement);

    // The service can process up to 5 pages per document file.
    // Here we specify the first, second, and last page of the document to be processed.
    int pagesElement = 1;
    int pagesElement2 = 2;
    int pagesElement3 = -1;
    List<Integer> pages = Arrays.asList(pagesElement, pagesElement2, pagesElement3);
    AnnotateFileRequest requestsElement =
        AnnotateFileRequest.newBuilder()
            .setInputConfig(inputConfig)
            .addAllFeatures(features)
            .addAllPages(pages)
            .build();
    List<AnnotateFileRequest> requests = Arrays.asList(requestsElement);
    BatchAnnotateFilesRequest request =
        BatchAnnotateFilesRequest.newBuilder().addAllRequests(requests).build();
    BatchAnnotateFilesResponse response = imageAnnotatorClient.batchAnnotateFiles(request);
    for (AnnotateImageResponse imageResponse :
        response.getResponsesList().get(0).getResponsesList()) {
      System.out.printf("Full text: %s\n", imageResponse.getFullTextAnnotation().getText());
      for (Page page : imageResponse.getFullTextAnnotation().getPagesList()) {
        for (Block block : page.getBlocksList()) {
          System.out.printf("\nBlock confidence: %s\n", block.getConfidence());
          for (Paragraph par : block.getParagraphsList()) {
            System.out.printf("\tParagraph confidence: %s\n", par.getConfidence());
            for (Word word : par.getWordsList()) {
              System.out.printf("\t\tWord confidence: %s\n", word.getConfidence());
              for (Symbol symbol : word.getSymbolsList()) {
                System.out.printf(
                    "\t\t\tSymbol: %s, (confidence: %s)\n",
                    symbol.getText(), symbol.getConfidence());
              }
            }
          }
        }
      }
    }
  } catch (Exception exception) {
    System.err.println("Failed to create the client due to: " + exception);
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Node.js API reference documentation.


const vision = require('@google-cloud/vision').v1;

/**
 * Perform batch file annotation
 *
 * @param storageUri {string} Cloud Storage URI to source image in the format gs://[bucket]/[file]
 */
function sampleBatchAnnotateFiles(storageUri) {
  const client = new vision.ImageAnnotatorClient();
  // const storageUri = 'gs://cloud-samples-data/vision/document_understanding/kafka.pdf';
  const gcsSource = {
    uri: storageUri,
  };
  const inputConfig = {
    gcsSource: gcsSource,
  };
  const type = 'DOCUMENT_TEXT_DETECTION';
  const featuresElement = {
    type: type,
  };
  const features = [featuresElement];

  // The service can process up to 5 pages per document file.
  // Here we specify the first, second, and last page of the document to be processed.
  const pagesElement = 1;
  const pagesElement2 = 2;
  const pagesElement3 = -1;
  const pages = [pagesElement, pagesElement2, pagesElement3];
  const requestsElement = {
    inputConfig: inputConfig,
    features: features,
    pages: pages,
  };
  const requests = [requestsElement];
  client.batchAnnotateFiles({requests: requests})
    .then(responses => {
      const response = responses[0];
      for (const imageResponse of response.responses[0].responses) {
        console.log(`Full text: ${imageResponse.fullTextAnnotation.text}`);
        for (const page of imageResponse.fullTextAnnotation.pages) {
          for (const block of page.blocks) {
            console.log(`\nBlock confidence: ${block.confidence}`);
            for (const par of block.paragraphs) {
              console.log(`\tParagraph confidence: ${par.confidence}`);
              for (const word of par.words) {
                console.log(`\t\tWord confidence: ${word.confidence}`);
                for (const symbol of word.symbols) {
                  console.log(`\t\t\tSymbol: ${symbol.text}, (confidence: ${symbol.confidence})`);
                }
              }
            }
          }
        }
      }
    })
    .catch(err => {
      console.error(err);
    });
}

Python

Before trying this sample, follow the Python setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Python API reference documentation.


from google.cloud import vision_v1
from google.cloud.vision_v1 import enums
import six

def sample_batch_annotate_files(storage_uri):
  """Perform batch file annotation.

  Args:
    storage_uri: Cloud Storage URI to the source file, in the format
      gs://[bucket]/[file]
  """

  client = vision_v1.ImageAnnotatorClient()

  # storage_uri = 'gs://cloud-samples-data/vision/document_understanding/kafka.pdf'

  if isinstance(storage_uri, six.binary_type):
    storage_uri = storage_uri.decode('utf-8')
  gcs_source = {'uri': storage_uri}
  input_config = {'gcs_source': gcs_source}
  type_ = enums.Feature.Type.DOCUMENT_TEXT_DETECTION
  features_element = {'type': type_}
  features = [features_element]

  # The service can process up to 5 pages per document file.
  # Here we specify the first, second, and last page of the document to be
  # processed.
  pages_element = 1
  pages_element_2 = 2
  pages_element_3 = -1
  pages = [pages_element, pages_element_2, pages_element_3]
  requests_element = {'input_config': input_config, 'features': features, 'pages': pages}
  requests = [requests_element]

  response = client.batch_annotate_files(requests)
  for image_response in response.responses[0].responses:
    print('Full text: {}'.format(image_response.full_text_annotation.text))
    for page in image_response.full_text_annotation.pages:
      for block in page.blocks:
        print('\nBlock confidence: {}'.format(block.confidence))
        for par in block.paragraphs:
          print('\tParagraph confidence: {}'.format(par.confidence))
          for word in par.words:
            print('\t\tWord confidence: {}'.format(word.confidence))
            for symbol in word.symbols:
              print('\t\t\tSymbol: {}, (confidence: {})'.format(symbol.text, symbol.confidence))

Ruby

Before trying this sample, follow the Ruby setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Ruby API reference documentation.


 # Perform batch file annotation
 #
 # @param storage_uri {String} Cloud Storage URI to the source file, in the format gs://[bucket]/[file]
def sample_batch_annotate_files(storage_uri)
  # Instantiate a client
  image_annotator_client = Google::Cloud::Vision::ImageAnnotator.new version: :v1

  # storage_uri = "gs://cloud-samples-data/vision/document_understanding/kafka.pdf"
  gcs_source = { uri: storage_uri }
  input_config = { gcs_source: gcs_source }
  type = :DOCUMENT_TEXT_DETECTION
  features_element = { type: type }
  features = [features_element]

  # The service can process up to 5 pages per document file.
  # Here we specify the first, second, and last page of the document to be processed.
  pages_element = 1
  pages_element_2 = 2
  pages_element_3 = -1
  pages = [pages_element, pages_element_2, pages_element_3]
  requests_element = {
    input_config: input_config,
    features: features,
    pages: pages
  }
  requests = [requests_element]

  response = image_annotator_client.batch_annotate_files(requests)
  response.responses[0].responses.each do |image_response|
    puts "Full text: #{image_response.full_text_annotation.text}"
    image_response.full_text_annotation.pages.each do |page|
      page.blocks.each do |block|
        puts "\nBlock confidence: #{block.confidence}"
        block.paragraphs.each do |par|
          puts "\tParagraph confidence: #{par.confidence}"
          par.words.each do |word|
            puts "\t\tWord confidence: #{word.confidence}"
            word.symbols.each do |symbol|
              puts "\t\t\tSymbol: #{symbol.text}, (confidence: #{symbol.confidence})"
            end
          end
        end
      end
    end
  end

end

Try it

Try small batch online feature detection below.

You can use the PDF file already specified, or specify your own file in its place.

First five pages of the example PDF file:
gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf

There are three feature types specified for this request:

  • DOCUMENT_TEXT_DETECTION
  • LABEL_DETECTION
  • CROP_HINTS

You can add or remove other feature types by changing the appropriate object in the request ({"type": "FEATURE_NAME"}).

Send the request by selecting Execute.
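For reference, the request body sent by the Try it panel looks roughly like the following. This is a sketch of a `files:annotate` REST request body; the `mimeType` and `pages` values are assumptions matching the example file above, not values the panel necessarily pre-fills:

```json
{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {"type": "DOCUMENT_TEXT_DETECTION"},
        {"type": "LABEL_DETECTION"},
        {"type": "CROP_HINTS"}
      ],
      "pages": [1, 2, 3, 4, 5]
    }
  ]
}
```

To change the feature set, add or remove the corresponding object in the `features` array.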
