Small batch file annotation online

The Vision API can provide online (immediate) annotation of multiple pages or frames from PDF, TIFF, or GIF files stored in Cloud Storage.

You can request online feature detection and annotation of 5 frames (GIF; "image/gif") or pages (PDF; "application/pdf", or TIFF; "image/tiff") of your choosing for each file.

The example annotations on this page are for DOCUMENT_TEXT_DETECTION, but small batch online annotation is available for all Vision features.

The first five pages of a PDF file
gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf

Page 1

Page 1 of the example PDF
...
"text": "á\n7.1.15\nOIL, GAS AND MINERAL LEASE
\nNORVEL J. CHITTIM, ET AL\n.\n.
\nTO\nW. L. SCHEIG\n"
},
"context": {"pageNumber": 1}
...

Page 2

Page 2 (top) of the example PDF
...
"text": "...\n.\n*\n.\n.\n.\nA\nNY\nALA...\n7
\n| THE STATE OF TEXAS
\nOIL, GAS AND MINERAL LEASE
\nCOUNTY OF MAVERICK ]
\nTHIS AGREEMENT made this 14 day of_June
\n1954, between Norvel J. Chittim and his wife, Lieschen G. Chittim;
\nMary Anne Chittim Parker, joined herein pro forma by her husband,
\nJoseph Bright Parker; Dorothea Chittim Oppenheimer, joined herein
\npro forma by her husband, Fred J. Oppenheimer; Tuleta Chittim
\nWright, joined herein pro forma by her husband, Gilbert G. Wright,
\nJr.; Gilbert G. Wright, III; Dela Wright White, joined herein pro
\nforma by her husband, John H. White; Anne Wright Basse, joined
\nherein pro forma by her husband, E. A. Basse, Jr.; Norvel J.
\nChittim, Independent Executor and Trustee for Estate of Marstella
\nChittim, Deceased; Mary Louise Roswell, joined herein pro forma by
\nher husband, Charles M. 'Roswell; and James M. Chittim and his wife,
\nThelma Neal Chittim; as LESSORS, and W. L. Scheig of San Antonio,
\nTexas, as LESSEE,
Page 2 (bottom) of the example PDF
\nW I T N E s s E T H:
\n1. Lessors, in consideration of $10.00, cash in hand paid,
\nof the royalties herein provided, and of the agreement of Lessee
\nherein contained, hereby grant, lease and let exclusively unto
\nLessee the tracts of land hereinafter described for the purpose of
\ntesting for mineral indications, and in such tests use the Seismo-
\ngraph, Torsion Balance, Core Drill, or any other tools, machinery,
\nequipment or explosive necessary and proper; and also prospecting,
\ndrilling and mining for and producing oil, gas and other minerals
\n(except metallic minerals), laying pipe lines, building tanks,
\npower stations, telephone lines and other structures thereon to
\nproduce, save, take care of, treat, transport and own said pro-
\nducts and housing its employees (Lessee to conduct its geophysical
\nwork in such manner as not to damage the buildings, water tanks
\nor wells of Lessors, or the livestock of Lessors or Lessors' ten- !
\nants, )said lands being situated in Maverick, Zavalla and Dimmit
\nCounties, Texas, to-wit:\n3-1.\n"
},
"context": {"pageNumber": 2}
...

Page 3

Page 3 (top) of the example PDF
...
"text": "Being a tract consisting of 140,769.86 acres, more or
\nless, out of what is known as the \"Chittim Ranch\" in said counties,
\nas designated and described in Exhibit \"A\" hereto attached and
\nmade a part hereof as if fully written herein. It being under-
\nstood that the acreage intended to be included in this lease aggre-
\ngates approximately 140,769.86 acres whether it actually comprises
\nmore or less, but for the purpose of calculating the payments
\nhereinafter provided for, it is agreed that the land included with-
\nin the terms of this lease is One hundred forty thousand seven
\nhundred sixty-nine and eighty-six one hundredths (140,769.86) acres,
\nand that each survey listed above contains the acreage stated above.
\nIt is understood that tract designated \"TRACT II\" in
\nExhibit \"A\" is subject to a one-sixteenth (1/16) royalty reserved.
\nto the State of Texas, and the rights of the State of Texas must
\nbe respected in the development of the said property.
Page 3 (bottom) of the example PDF
\n2. Subject to the other provisions hereof, this lease shall
\nbe for a term of ten (10) years from date hereof (called \"Primary
\nTerm\"), and as long thereafter as oil, gas or other minerals
\n(except metallic minerals) are produced from said land hereunder
\nin paying quantities, subject, however, to all of the terms and
\nprovisions of this lease. After expiration of the primary term,
\nthis lease shall terminate as to all lands included herein, save
\nand except as to those tracts which lessee maintains in force and
\neffect according to the requirements hereof.
\n3. The royalties to be paid by Lessee are (a) on oil, one-
\neighth (1/8) of that produced and saved from said land, the same to
\nbe delivered at the well or to the credit of Lessors into the pipe i
\nline to which the well may be connected; (b) on gas, including
\ni casinghead gas or other gaseous or vaporous substance, produced
\nfrom the leased premises and sold or used by Lessee off the leased
\npremises or in the manufacture of gasoline or other products, the
\nmarket value, at the mouth of the well, of one-eighth (1/8) of
\n.\n3-2-\n?\n"
},
"context": {"pageNumber": 3}
...

Page 4

Page 4 (top) of the example PDF
...
"text": "•\n:\n.\nthe gas or casinghead gas so used or sold. On all gas or casing-
\nhead gas sold at the well, the royalty shall be one-eighth (1/8)
\nof the amounts realized from such sales. While gas from any well
\nproducing gas only is being used or sold by. Lessee, Lessor may have
\nenough of said gas for all stoves and inside lights in the prin-
\ncipal dwelling house on the leased premises by making Lessors' own
\nconnections with the well and by assuming all risk and paying all
\nexpenses. And (c) on all other minerals (except metallic minerals)
\nmined and marketed, one tenth (1/10). either in kind or value at the
\nwell or mine at Lessee's election.
\nFor the purpose of royalty payments under 3 (b) hereof,
\nall liquid hydrocarbons (including distillate) recovered and saved
n| by Lessee in separators or traps on the leased premises shall be
\nconsidered as oil. Should such a plant be constructed by another
\nthan Lessee to whom Lessee should sell or deliver the gas or cas-
\ninghead gas produced from the leased premises for processing, then
\nthe royalty thereon shall be one-eighth (1/8) of the amounts
\nrealized by Lessee from such sales or deliveries.
Page 4 (bottom) of the example PDF
\nOr if such plant is owned or constructed or operated by
\nLessee, then the royalty shall be on the basis of one-eighth (1/8) |
\nof the prevailing price in the area for such products..
\nThe provisions of this paragraph shall control as to any
\nconflict with Paragraph 3 (b). Lessors shall also be entitled to
\nsaid royalty interest in all residue gas .obtained, saved and mar-
\nketed from said premises, or used off the premises, or that may be
\nreplaced in the reservoir by 'any recycling process, settlement
\ntherefor to be made to Lessors when such gas is marketed or used
\noff the premises. !
\nIf at the expiration of the primary term of this lease
\nLessee has not found and produced oil or gas in paying quantities
\nin any formation lying fifty (50) feet below the base of what is
\nknown as the Rhodessa section at the particular point where the
\nwell is drilled, then, subject to the further provisions hereof,
\nthis lease shall terminate as to all horizons below fifty (50)
\nI feet below the Rhodessa section. And if at the expiration of the
\n3 -3-\n"
},
"context": {"pageNumber": 4}
...

Page 5

Page 5 (top) of the example PDF
...
"text": ".\n.\n:\nI\n.\n.\n.:250:-....\n.\n...\n.\n....\n....\n..\n..\n. ..
\n.\n..\n.\n...\n...\n.-\n.\n.\n..\n..\n17\n.\n:\n-\n-\n-\n.\n..\n.
\nprimary term production of oil or gas in paying quantities is not
\nfound in the Jurassic, then this lease shall terminate as to the
\nJurassic and lower formations unless Lessee shall have completed
\nat least two (2) tests in the Jurassic. And after the primary
\nterm Lessee shall complete at least one (1) Jurassic test each
\nthree years on said property as to which this lease is still in
\neffect, until paying production is obtained in or below the
\nJurassic, or upon failure so to do Lessee shall release this
\nlease as to all formations below the top of the Jurassic. Upon
\ncompliance with the above provisions as to Jurassic tests, and
\nif production is found in the Jurassic, then, subject to the
\nother provisions hereof, this lease shall be effective as to all
\nhorizons, including the Jurassic..
\n5. It is understood and expressly agreed that the consider-
\niation first recited in this lease, the down cash payment, receipt
\nof which is hereby acknowledged by Lessors, is full and adequate
\nconsideration to maintain this lease in full force and effect for
\na period of one year from the date hereof, and does not impose
\nany obligation on the part of Lessee to drill and develop this
\nlease during the said term of one year from date of this lease.
Page 5 (bottom) of the example PDF
\n6. This lease shall terminate as to both parties unless
\non or before one year from this date, Lessee shall pay to or ten- !
\nder to Lessors or to the credit of Lessors, in the National Bank
\nof Commerce, at San Antonio, Texas, (which bank and its successors
\nare Lessors' agent, and shall continue as the depository for all \"
\nrental payable hereunder regardless of changes in ownership of
\nsaid land or the rental), the sum of One Dollar ($1.00) per acre
\nas to all acreage then covered by this lease, and not surrendered,
\nor maintained by production of oil, gas or other minerals, or by
\ndrilling-reworking operations, all as hereinafter fully set out, :
\nwhich shall maintain this lease in full force and effect for
\nanother twelve-month period, without imposing any obligation on
\nthe part of Lessee to drill and develop this lease. In like
\nmanner, and upon like payment or tender annually, Lessee may
\nmaintain this lease .in full force and effect for successive
\ntwelve-month periods during the primary term, without imposing
\n.\n--.\n.\n.\n.\n-\n::\n---
\n-\n3\n.\n..-\n-\n-\n:.\n.\n::\n.
\n3-4-\n"
},
"context": {"pageNumber": 5}
...

Limitations

At most five pages are annotated per file. You can specify which 5 pages to annotate.
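
For illustration, the following is a minimal Python sketch of page selection (the file name is a placeholder); page numbers start at 1, negative values count from the end of the file, and at most five may be listed per file.

from google.cloud import vision_v1

# Minimal sketch: annotate five arbitrary pages of a local PDF.
# "statement.pdf" is a placeholder file name.
client = vision_v1.ImageAnnotatorClient()

with open("statement.pdf", "rb") as f:
    content = f.read()

request = {
    "input_config": {"content": content, "mime_type": "application/pdf"},
    "features": [{"type_": vision_v1.Feature.Type.DOCUMENT_TEXT_DETECTION}],
    # Any five pages may be listed; 1 is the first page and -1 is the last.
    # If the list is omitted, the service annotates the first five pages.
    "pages": [2, 4, 6, -2, -1],
}

response = client.batch_annotate_files(requests=[request])
print(response.responses[0].responses[0].full_text_annotation.text)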

Authentication

Set up your Google Cloud project and authentication

Currently supported feature types

Feature type
CROP_HINTS Determines suggested vertices for a crop region on an image.
DOCUMENT_TEXT_DETECTION Performs OCR on dense-text images, such as documents (PDF/TIFF), and on images with handwriting. TEXT_DETECTION can be used for sparse-text images. Takes precedence when both DOCUMENT_TEXT_DETECTION and TEXT_DETECTION are present.
FACE_DETECTION Detects faces within the image.
IMAGE_PROPERTIES Computes a set of image properties, such as the image's dominant colors.
LABEL_DETECTION Adds labels based on the image content.
LANDMARK_DETECTION Detects geographic landmarks within the image.
LOGO_DETECTION Detects company logos within an image.
OBJECT_LOCALIZATION Detects and extracts multiple objects in an image.
SAFE_SEARCH_DETECTION Runs SafeSearch to detect potentially unsafe or undesirable content.
TEXT_DETECTION Performs optical character recognition (OCR) on text within the image. Text detection is optimized for areas of sparse text within a larger image. If the image is a document (PDF/TIFF), has dense text, or contains handwriting, use DOCUMENT_TEXT_DETECTION instead.
WEB_DETECTION Detects topical entities such as news, events, or celebrities within the image, and finds similar images on the web using Google Image Search.
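
Multiple feature types can be combined in a single file request by listing them in the features array. As a rough sketch (using the public sample PDF referenced above and assuming the Python client library), the following requests document text, labels, and crop hints in one call.

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()

# The public sample file referenced elsewhere on this page.
gcs_source = {
    "uri": "gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf"
}

request = {
    "input_config": {"gcs_source": gcs_source, "mime_type": "application/pdf"},
    # Several feature types can be requested at once; each per-page response
    # then contains annotations for every requested feature.
    "features": [
        {"type_": vision_v1.Feature.Type.DOCUMENT_TEXT_DETECTION},
        {"type_": vision_v1.Feature.Type.LABEL_DETECTION},
        {"type_": vision_v1.Feature.Type.CROP_HINTS},
    ],
    "pages": [1, 2, 3, 4, 5],
}

response = client.batch_annotate_files(requests=[request])
for page_response in response.responses[0].responses:
    print(f"Page {page_response.context.page_number}:")
    print([label.description for label in page_response.label_annotations])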

Sample code

You can send an annotation request with a locally stored file, or use a file that is stored in Cloud Storage.

Use a locally stored file

Use the following code samples to get any feature annotation for a locally stored file.

REST

To perform online PDF/TIFF/GIF feature detection for a small batch of files, make a POST request and provide the appropriate request body:

Before using any of the request data below, make the following replacements:

  • BASE64_ENCODED_FILE: The base64 representation (ASCII string) of your binary file data. This string should look similar to the following:
    • JVBERi0xLjUNCiW1tbW1...ydHhyZWYNCjk5NzM2OQ0KJSVFT0Y=
    See Base64 encoding for more information; a minimal encoding sketch follows this list.
  • PROJECT_ID: Your Google Cloud project ID.
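
If you build the request body yourself, the file can be base64-encoded with standard library tools. A minimal Python sketch, with placeholder file names, is shown below.

import base64
import json

# Build the JSON body for files:annotate from a local PDF.
# "file.pdf" and "request.json" are placeholder names.
with open("file.pdf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [
        {
            "inputConfig": {"content": encoded, "mimeType": "application/pdf"},
            "features": [{"type": "DOCUMENT_TEXT_DETECTION"}],
            "pages": [1, 2, 3, 4, 5],
        }
    ]
}

with open("request.json", "w") as f:
    json.dump(body, f)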

Field-specific considerations:

  • inputConfig.mimeType: One of the following: "application/pdf", "image/tiff", or "image/gif".
  • pages: Specifies the pages of the file on which to perform feature detection.

HTTP method and URL:

POST https://vision.googleapis.com/v1/files:annotate

Request JSON body:

{
  "requests": [
    {
      "inputConfig": {
        "content": "BASE64_ENCODED_FILE",
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "pages": [
        1,2,3,4,5
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/files:annotate"

PowerShell

Save the request body in a file named request.json, and run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/files:annotate" | Select-Object -Expand Content

Response:

A successful annotate request immediately returns a JSON response.

For this feature (DOCUMENT_TEXT_DETECTION), the JSON response is similar to that of an image's document text detection request. The response contains bounding boxes for blocks, broken down further into paragraphs, words, and individual symbols. The full text is also detected. The response additionally contains a context field showing the location of the PDF or TIFF that was specified and the file's result page number.

The following response JSON is for a single page (page 2) and has been abbreviated for clarity.
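
As a rough illustration of that structure, the following Python sketch reads a saved response (assuming the curl output was redirected to a file named response.json, a placeholder) and prints each annotated page's number and detected text length.

import json

# "response.json" is a placeholder: the JSON returned by files:annotate,
# saved to disk (for example, by redirecting the curl output).
with open("response.json") as f:
    result = json.load(f)

# files:annotate returns one entry per file; each file entry contains one
# per-page response with a fullTextAnnotation and a context field.
for file_response in result["responses"]:
    for page_response in file_response["responses"]:
        page_number = page_response["context"]["pageNumber"]
        text = page_response["fullTextAnnotation"]["text"]
        print(f"Page {page_number}: {len(text)} characters detected")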

Java

Before trying this sample, follow the Java setup instructions in the Vision API quickstart using client libraries. For more information, see the Vision API Java reference documentation.

import com.google.cloud.vision.v1.AnnotateFileRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateFilesRequest;
import com.google.cloud.vision.v1.BatchAnnotateFilesResponse;
import com.google.cloud.vision.v1.Block;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.InputConfig;
import com.google.cloud.vision.v1.Page;
import com.google.cloud.vision.v1.Paragraph;
import com.google.cloud.vision.v1.Symbol;
import com.google.cloud.vision.v1.Word;
import com.google.protobuf.ByteString;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BatchAnnotateFiles {

  public static void batchAnnotateFiles() throws IOException {
    String filePath = "path/to/your/file.pdf";
    batchAnnotateFiles(filePath);
  }

  public static void batchAnnotateFiles(String filePath) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient imageAnnotatorClient = ImageAnnotatorClient.create()) {
      // You can send multiple files to be annotated; this sample demonstrates how to do this with
      // one file. If you want to use multiple files, you have to create an `AnnotateFileRequest`
      // object for each file that you want annotated.
      // First read the files contents
      Path path = Paths.get(filePath);
      byte[] data = Files.readAllBytes(path);
      ByteString content = ByteString.copyFrom(data);

      // Specify the input config with the file's contents and its type.
      // Supported mime_type: application/pdf, image/tiff, image/gif
      // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#inputconfig
      InputConfig inputConfig =
          InputConfig.newBuilder().setMimeType("application/pdf").setContent(content).build();

      // Set the type of annotation you want to perform on the file
      // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.Feature.Type
      Feature feature = Feature.newBuilder().setType(Feature.Type.DOCUMENT_TEXT_DETECTION).build();

      // Build the request object for that one file. Note: for additional file you have to create
      // additional `AnnotateFileRequest` objects and store them in a list to be used below.
      // Since we are sending a file of type `application/pdf`, we can use the `pages` field to
      // specify which pages to process. The service can process up to 5 pages per document file.
      // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateFileRequest
      AnnotateFileRequest fileRequest =
          AnnotateFileRequest.newBuilder()
              .setInputConfig(inputConfig)
              .addFeatures(feature)
              .addPages(1) // Process the first page
              .addPages(2) // Process the second page
              .addPages(-1) // Process the last page
              .build();

      // Add each `AnnotateFileRequest` object to the batch request.
      BatchAnnotateFilesRequest request =
          BatchAnnotateFilesRequest.newBuilder().addRequests(fileRequest).build();

      // Make the synchronous batch request.
      BatchAnnotateFilesResponse response = imageAnnotatorClient.batchAnnotateFiles(request);

      // Process the results, just get the first result, since only one file was sent in this
      // sample.
      for (AnnotateImageResponse imageResponse :
          response.getResponsesList().get(0).getResponsesList()) {
        System.out.format("Full text: %s%n", imageResponse.getFullTextAnnotation().getText());
        for (Page page : imageResponse.getFullTextAnnotation().getPagesList()) {
          for (Block block : page.getBlocksList()) {
            System.out.format("%nBlock confidence: %s%n", block.getConfidence());
            for (Paragraph par : block.getParagraphsList()) {
              System.out.format("\tParagraph confidence: %s%n", par.getConfidence());
              for (Word word : par.getWordsList()) {
                System.out.format("\t\tWord confidence: %s%n", word.getConfidence());
                for (Symbol symbol : word.getSymbolsList()) {
                  System.out.format(
                      "\t\t\tSymbol: %s, (confidence: %s)%n",
                      symbol.getText(), symbol.getConfidence());
                }
              }
            }
          }
        }
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision API Node.js reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const fileName = 'path/to/your/file.pdf';

// Imports the Google Cloud client libraries
const {ImageAnnotatorClient} = require('@google-cloud/vision').v1;
const fs = require('fs').promises;

// Instantiates a client
const client = new ImageAnnotatorClient();

// You can send multiple files to be annotated, this sample demonstrates how to do this with
// one file. If you want to use multiple files, you have to create a request object for each file that you want annotated.
async function batchAnnotateFiles() {
  // First Specify the input config with the file's path and its type.
  // Supported mime_type: application/pdf, image/tiff, image/gif
  // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#inputconfig
  const inputConfig = {
    mimeType: 'application/pdf',
    content: await fs.readFile(fileName),
  };

  // Set the type of annotation you want to perform on the file
  // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.Feature.Type
  const features = [{type: 'DOCUMENT_TEXT_DETECTION'}];

  // Build the request object for that one file. Note: for additional files you have to create
  // additional file request objects and store them in a list to be used below.
  // Since we are sending a file of type `application/pdf`, we can use the `pages` field to
  // specify which pages to process. The service can process up to 5 pages per document file.
  // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateFileRequest
  const fileRequest = {
    inputConfig: inputConfig,
    features: features,
    // Annotate the first two pages and the last one (max 5 pages)
    // First page starts at 1, and not 0. Last page is -1.
    pages: [1, 2, -1],
  };

  // Add each `AnnotateFileRequest` object to the batch request.
  const request = {
    requests: [fileRequest],
  };

  // Make the synchronous batch request.
  const [result] = await client.batchAnnotateFiles(request);

  // Process the results, just get the first result, since only one file was sent in this
  // sample.
  const responses = result.responses[0].responses;

  for (const response of responses) {
    console.log(`Full text: ${response.fullTextAnnotation.text}`);
    for (const page of response.fullTextAnnotation.pages) {
      for (const block of page.blocks) {
        console.log(`Block confidence: ${block.confidence}`);
        for (const paragraph of block.paragraphs) {
          console.log(` Paragraph confidence: ${paragraph.confidence}`);
          for (const word of paragraph.words) {
            const symbol_texts = word.symbols.map(symbol => symbol.text);
            const word_text = symbol_texts.join('');
            console.log(
              `  Word text: ${word_text} (confidence: ${word.confidence})`
            );
            for (const symbol of word.symbols) {
              console.log(
                `   Symbol: ${symbol.text} (confidence: ${symbol.confidence})`
              );
            }
          }
        }
      }
    }
  }
}

batchAnnotateFiles();

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision API Python reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.



from google.cloud import vision_v1


def sample_batch_annotate_files(file_path="path/to/your/document.pdf"):
    """Perform batch file annotation."""
    client = vision_v1.ImageAnnotatorClient()

    # Supported mime_type: application/pdf, image/tiff, image/gif
    mime_type = "application/pdf"
    with open(file_path, "rb") as f:
        content = f.read()
    input_config = {"mime_type": mime_type, "content": content}
    features = [{"type_": vision_v1.Feature.Type.DOCUMENT_TEXT_DETECTION}]

    # The service can process up to 5 pages per document file. Here we specify
    # the first, second, and last page of the document to be processed.
    pages = [1, 2, -1]
    requests = [{"input_config": input_config, "features": features, "pages": pages}]

    response = client.batch_annotate_files(requests=requests)
    for image_response in response.responses[0].responses:
        print(f"Full text: {image_response.full_text_annotation.text}")
        for page in image_response.full_text_annotation.pages:
            for block in page.blocks:
                print(f"\nBlock confidence: {block.confidence}")
                for par in block.paragraphs:
                    print(f"\tParagraph confidence: {par.confidence}")
                    for word in par.words:
                        print(f"\t\tWord confidence: {word.confidence}")
                        for symbol in word.symbols:
                            print(
                                "\t\t\tSymbol: {}, (confidence: {})".format(
                                    symbol.text, symbol.confidence
                                )
                            )

Use a file in Cloud Storage

Use the following code samples to get any feature annotation for a file in Cloud Storage.

REST

To perform online PDF/TIFF/GIF feature detection for a small batch of files, make a POST request and provide the appropriate request body:

Before using any of the request data below, make the following replacements:

  • CLOUD_STORAGE_FILE_URI: The path to a valid file (PDF/TIFF) in a Cloud Storage bucket. You must at least have read privileges to the file. Example:
    • gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf
  • PROJECT_ID: Your Google Cloud project ID.

Field-specific considerations:

  • inputConfig.mimeType: One of the following: "application/pdf", "image/tiff", or "image/gif".
  • pages: Specifies the pages of the file on which to perform feature detection.

HTTP method and URL:

POST https://vision.googleapis.com/v1/files:annotate

Request JSON body:

{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "CLOUD_STORAGE_FILE_URI"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "pages": [
        1,2,3,4,5
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/files:annotate"

PowerShell

Save the request body in a file named request.json, and run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/files:annotate" | Select-Object -Expand Content

Response:

A successful annotate request immediately returns a JSON response.

For this feature (DOCUMENT_TEXT_DETECTION), the JSON response is similar to that of an image's document text detection request. The response contains bounding boxes for blocks, broken down further into paragraphs, words, and individual symbols. The full text is also detected. The response additionally contains a context field showing the location of the PDF or TIFF that was specified and the file's result page number.

The following response JSON is for a single page (page 2) and has been abbreviated for clarity.

Java

Before trying this sample, follow the Java setup instructions in the Vision API quickstart using client libraries. For more information, see the Vision API Java reference documentation.

import com.google.cloud.vision.v1.AnnotateFileRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateFilesRequest;
import com.google.cloud.vision.v1.BatchAnnotateFilesResponse;
import com.google.cloud.vision.v1.Block;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.GcsSource;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.InputConfig;
import com.google.cloud.vision.v1.Page;
import com.google.cloud.vision.v1.Paragraph;
import com.google.cloud.vision.v1.Symbol;
import com.google.cloud.vision.v1.Word;
import java.io.IOException;

public class BatchAnnotateFilesGcs {

  public static void batchAnnotateFilesGcs() throws IOException {
    String gcsUri = "gs://cloud-samples-data/vision/document_understanding/kafka.pdf";
    batchAnnotateFilesGcs(gcsUri);
  }

  public static void batchAnnotateFilesGcs(String gcsUri) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient imageAnnotatorClient = ImageAnnotatorClient.create()) {
      // You can send multiple files to be annotated; this sample demonstrates how to do this with
      // one file. If you want to use multiple files, you have to create an `AnnotateFileRequest`
      // object for each file that you want annotated.
      // First specify where the vision api can find the image
      GcsSource gcsSource = GcsSource.newBuilder().setUri(gcsUri).build();

      // Specify the input config with the file's uri and its type.
      // Supported mime_type: application/pdf, image/tiff, image/gif
      // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#inputconfig
      InputConfig inputConfig =
          InputConfig.newBuilder().setMimeType("application/pdf").setGcsSource(gcsSource).build();

      // Set the type of annotation you want to perform on the file
      // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.Feature.Type
      Feature feature = Feature.newBuilder().setType(Feature.Type.DOCUMENT_TEXT_DETECTION).build();

      // Build the request object for that one file. Note: for additional file you have to create
      // additional `AnnotateFileRequest` objects and store them in a list to be used below.
      // Since we are sending a file of type `application/pdf`, we can use the `pages` field to
      // specify which pages to process. The service can process up to 5 pages per document file.
      // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateFileRequest
      AnnotateFileRequest fileRequest =
          AnnotateFileRequest.newBuilder()
              .setInputConfig(inputConfig)
              .addFeatures(feature)
              .addPages(1) // Process the first page
              .addPages(2) // Process the second page
              .addPages(-1) // Process the last page
              .build();

      // Add each `AnnotateFileRequest` object to the batch request.
      BatchAnnotateFilesRequest request =
          BatchAnnotateFilesRequest.newBuilder().addRequests(fileRequest).build();

      // Make the synchronous batch request.
      BatchAnnotateFilesResponse response = imageAnnotatorClient.batchAnnotateFiles(request);

      // Process the results, just get the first result, since only one file was sent in this
      // sample.
      for (AnnotateImageResponse imageResponse :
          response.getResponsesList().get(0).getResponsesList()) {
        System.out.format("Full text: %s%n", imageResponse.getFullTextAnnotation().getText());
        for (Page page : imageResponse.getFullTextAnnotation().getPagesList()) {
          for (Block block : page.getBlocksList()) {
            System.out.format("%nBlock confidence: %s%n", block.getConfidence());
            for (Paragraph par : block.getParagraphsList()) {
              System.out.format("\tParagraph confidence: %s%n", par.getConfidence());
              for (Word word : par.getWordsList()) {
                System.out.format("\t\tWord confidence: %s%n", word.getConfidence());
                for (Symbol symbol : word.getSymbolsList()) {
                  System.out.format(
                      "\t\t\tSymbol: %s, (confidence: %s)%n",
                      symbol.getText(), symbol.getConfidence());
                }
              }
            }
          }
        }
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision API Node.js reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const gcsSourceUri = 'gs://cloud-samples-data/vision/document_understanding/kafka.pdf';

// Imports the Google Cloud client libraries
const {ImageAnnotatorClient} = require('@google-cloud/vision').v1;

// Instantiates a client
const client = new ImageAnnotatorClient();

// You can send multiple files to be annotated, this sample demonstrates how to do this with
// one file. If you want to use multiple files, you have to create a request object for each file that you want annotated.
async function batchAnnotateFiles() {
  // First Specify the input config with the file's uri and its type.
  // Supported mime_type: application/pdf, image/tiff, image/gif
  // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#inputconfig
  const inputConfig = {
    mimeType: 'application/pdf',
    gcsSource: {
      uri: gcsSourceUri,
    },
  };

  // Set the type of annotation you want to perform on the file
  // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.Feature.Type
  const features = [{type: 'DOCUMENT_TEXT_DETECTION'}];

  // Build the request object for that one file. Note: for additional files you have to create
  // additional file request objects and store them in a list to be used below.
  // Since we are sending a file of type `application/pdf`, we can use the `pages` field to
  // specify which pages to process. The service can process up to 5 pages per document file.
  // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateFileRequest
  const fileRequest = {
    inputConfig: inputConfig,
    features: features,
    // Annotate the first two pages and the last one (max 5 pages)
    // First page starts at 1, and not 0. Last page is -1.
    pages: [1, 2, -1],
  };

  // Add each `AnnotateFileRequest` object to the batch request.
  const request = {
    requests: [fileRequest],
  };

  // Make the synchronous batch request.
  const [result] = await client.batchAnnotateFiles(request);

  // Process the results, just get the first result, since only one file was sent in this
  // sample.
  const responses = result.responses[0].responses;

  for (const response of responses) {
    console.log(`Full text: ${response.fullTextAnnotation.text}`);
    for (const page of response.fullTextAnnotation.pages) {
      for (const block of page.blocks) {
        console.log(`Block confidence: ${block.confidence}`);
        for (const paragraph of block.paragraphs) {
          console.log(` Paragraph confidence: ${paragraph.confidence}`);
          for (const word of paragraph.words) {
            const symbol_texts = word.symbols.map(symbol => symbol.text);
            const word_text = symbol_texts.join('');
            console.log(
              `  Word text: ${word_text} (confidence: ${word.confidence})`
            );
            for (const symbol of word.symbols) {
              console.log(
                `   Symbol: ${symbol.text} (confidence: ${symbol.confidence})`
              );
            }
          }
        }
      }
    }
  }
}

batchAnnotateFiles();

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision API Python reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


from google.cloud import vision_v1


def sample_batch_annotate_files(
    storage_uri="gs://cloud-samples-data/vision/document_understanding/kafka.pdf",
):
    """Perform batch file annotation."""
    mime_type = "application/pdf"

    client = vision_v1.ImageAnnotatorClient()

    gcs_source = {"uri": storage_uri}
    input_config = {"gcs_source": gcs_source, "mime_type": mime_type}
    features = [{"type_": vision_v1.Feature.Type.DOCUMENT_TEXT_DETECTION}]

    # The service can process up to 5 pages per document file.
    # Here we specify the first, second, and last page of the document to be
    # processed.
    pages = [1, 2, -1]
    requests = [{"input_config": input_config, "features": features, "pages": pages}]

    response = client.batch_annotate_files(requests=requests)
    for image_response in response.responses[0].responses:
        print(f"Full text: {image_response.full_text_annotation.text}")
        for page in image_response.full_text_annotation.pages:
            for block in page.blocks:
                print(f"\nBlock confidence: {block.confidence}")
                for par in block.paragraphs:
                    print(f"\tParagraph confidence: {par.confidence}")
                    for word in par.words:
                        print(f"\t\tWord confidence: {word.confidence}")
                        for symbol in word.symbols:
                            print(
                                "\t\t\tSymbol: {}, (confidence: {})".format(
                                    symbol.text, symbol.confidence
                                )
                            )

Try it

Try online small batch feature detection below.

You can use the PDF file already specified, or specify your own file instead.

The first five pages of a PDF file
gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf

Three feature types are specified for this request:

  • DOCUMENT_TEXT_DETECTION
  • LABEL_DETECTION
  • CROP_HINTS

You can add or remove other feature types by changing the appropriate object in the request ({"type": "FEATURE_NAME"}).

To send the request, select Execute.

Request body:

{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://cloud-samples-data/vision/document_understanding/custom_0773375000.pdf"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        },
        {
          "type": "LABEL_DETECTION"
        },
        {
          "type": "CROP_HINTS"
        }
      ],
      "pages": [
        1,
        2,
        3,
        4,
        5
      ]
    }
  ]
}