Choose a document processing function

This document provides a comparison of the document processing functions available in BigQuery ML, which are ML.GENERATE_TEXT and ML.PROCESS_DOCUMENT.

You can use the information in this document to help you decide which function to use in cases where the functions have overlapping capabilities.

At a high level, the difference between these functions is as follows:

  • ML.GENERATE_TEXT is a good choice for performing natural language processing (NLP) tasks where some of the content resides in documents. This function offers the following benefits:

    • Lower costs
    • More language support
    • Faster throughput
    • Model tuning capability
    • Availability of multimodal models

    For examples of document processing tasks that work best with this approach, see Explore document processing capabilities with the Gemini API.

  • ML.PROCESS_DOCUMENT is a good choice for performing document processing tasks that require document parsing and a predefined, structured response.
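To make the first option concrete, the following query is a sketch of calling ML.GENERATE_TEXT over an object table of PDF files. The model, dataset, and table names are placeholders, and the exact call depends on how your remote model and object table are set up:

```sql
-- Sketch only: `mydataset.gemini_model` is assumed to be a remote model over
-- a Gemini endpoint, and `mydataset.report_pdfs` an object table of PDF files.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_model`,
  TABLE `mydataset.report_pdfs`,
  STRUCT(
    'What is the quarterly revenue for each division?' AS prompt,
    TRUE AS flatten_json_output));
```

The function returns one row per input document, with the generated text in the result columns.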

Supported models

Supported models are as follows:

  • ML.GENERATE_TEXT: you can use a subset of the Vertex AI Gemini models to generate text. For more information on supported models, see the ML.GENERATE_TEXT syntax.
  • ML.PROCESS_DOCUMENT: you use the default model of the Document AI API. Using the Document AI API gives you access to many different document processors, such as the invoice parser, layout parser, and form parser. You can use these document processors to work with PDF files with many different structures.
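Both functions require a remote model created over a BigQuery connection. The following statements sketch one possible setup for each; the project, dataset, connection, endpoint, and processor values are placeholders that you replace with your own:

```sql
-- Hypothetical names throughout: replace the dataset, connection, endpoint,
-- and processor resource name with your own values.

-- Remote model over a Gemini endpoint, for use with ML.GENERATE_TEXT:
CREATE OR REPLACE MODEL `mydataset.gemini_model`
  REMOTE WITH CONNECTION `us.my_connection`
  OPTIONS (ENDPOINT = 'gemini-1.5-flash-001');

-- Remote model over a Document AI processor, for use with ML.PROCESS_DOCUMENT:
CREATE OR REPLACE MODEL `mydataset.layout_parser`
  REMOTE WITH CONNECTION `us.my_connection`
  OPTIONS (
    REMOTE_SERVICE_TYPE = 'CLOUD_AI_DOCUMENT_V1',
    DOCUMENT_PROCESSOR = 'projects/myproject/locations/us/processors/my_processor_id');
```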

Supported tasks

Supported tasks are as follows:

  • ML.GENERATE_TEXT: you can perform any NLP task where the input is a document. For example, given a financial document for a company, you can retrieve document information by providing a prompt such as What is the quarterly revenue for each division?
  • ML.PROCESS_DOCUMENT: you can perform specialized document processing for different document types, such as invoices, tax forms, and financial statements. You can also perform document chunking. For more information on how to use the ML.PROCESS_DOCUMENT function for this task, see Parse PDFs in a retrieval-augmented generation pipeline.
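As a sketch of the second option, the following query shows how ML.PROCESS_DOCUMENT might be called; the model and object table names are placeholders, and the output columns depend on the processor that backs the remote model:

```sql
-- Sketch only: `mydataset.layout_parser` is assumed to be a remote model over
-- a Document AI processor, and `mydataset.invoice_pdfs` an object table of
-- PDF files to parse.
SELECT *
FROM ML.PROCESS_DOCUMENT(
  MODEL `mydataset.layout_parser`,
  TABLE `mydataset.invoice_pdfs`);
```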

Pricing

Pricing is as follows:

  • ML.GENERATE_TEXT: For pricing of the Vertex AI models that you use with this function, see Vertex AI pricing. Supervised tuning of supported models is charged at dollars per node hour. For more information, see Vertex AI custom training pricing.
  • ML.PROCESS_DOCUMENT: For pricing of the Cloud AI service that you use with this function, see Document AI API pricing.

Supervised tuning

Supervised tuning support is as follows:

  • ML.GENERATE_TEXT: supervised tuning is supported for some models.
  • ML.PROCESS_DOCUMENT: supervised tuning isn't supported.

Queries per minute (QPM) limit

QPM limits are as follows:

  • ML.GENERATE_TEXT: 60 QPM in the default us-central1 region for gemini-1.5-pro models, and 200 QPM in the default us-central1 region for gemini-1.5-flash models. For more information, see Generative AI on Vertex AI quotas.
  • ML.PROCESS_DOCUMENT: 120 QPM per processor type, with an overall limit of 600 QPM per project. For more information, see Quotas list.

To increase your quota, see Request a higher quota.

Token limit

Token limits are as follows:

  • ML.GENERATE_TEXT: 700,000 input tokens, and 8,192 output tokens.
  • ML.PROCESS_DOCUMENT: No token limit. However, this function does have different page limits depending on the processor you use. For more information, see Limits.

Supported languages

Supported languages are as follows:

  • ML.GENERATE_TEXT: supports the same languages as Gemini.
  • ML.PROCESS_DOCUMENT: language support depends on the document processor type; most processors support only English. For more information, see Processor list.

Region availability

Region availability is as follows:

  • ML.GENERATE_TEXT: available in all Generative AI for Vertex AI regions.
  • ML.PROCESS_DOCUMENT: available in the EU and US multi-regions for all processors. Some processors are also available in certain single regions. For more information, see Regional and multi-regional support.