Parse PDFs in a retrieval-augmented generation pipeline
This tutorial walks you through creating a retrieval-augmented generation (RAG) pipeline based on parsed PDF content.
PDF files, such as financial documents, can be challenging to use in RAG pipelines because of their complex structure and mix of text, figures, and tables. This tutorial shows you how to use BigQuery ML capabilities in combination with Document AI's Layout Parser to build a RAG pipeline based on key information extracted from a PDF file.
You can alternatively perform this tutorial by using a Colab Enterprise notebook.
Objectives
This tutorial covers the following tasks:
- Creating a Cloud Storage bucket and uploading a sample PDF file.
- Creating a Cloud resource connection so that you can connect to Cloud Storage and Vertex AI from BigQuery.
- Creating an object table over the PDF file to make the PDF file available in BigQuery.
- Creating a Document AI processor that you can use to parse the PDF file.
- Creating a remote model that lets you use the Document AI API to access the document processor from BigQuery.
- Using the remote model with the `ML.PROCESS_DOCUMENT` function to parse the PDF contents into chunks, and then writing that content to a BigQuery table.
- Extracting PDF content from the JSON data returned by the `ML.PROCESS_DOCUMENT` function, and then writing that content to a BigQuery table.
- Creating a remote model that lets you use the Vertex AI `text-embedding-004` embedding generation model from BigQuery.
- Using the remote model with the `ML.GENERATE_EMBEDDING` function to generate embeddings from the parsed PDF content, and then writing those embeddings to a BigQuery table. Embeddings are numerical representations of the PDF content that enable you to perform semantic search and retrieval on the PDF content.
- Using the `VECTOR_SEARCH` function on the embeddings to identify semantically similar PDF content.
- Creating a remote model that lets you use the Vertex AI `gemini-1.5-flash` text generation model from BigQuery.
- Performing retrieval-augmented generation (RAG) by using the remote model with the `ML.GENERATE_TEXT` function to generate text, using vector search results to augment the prompt input and improve results.
Costs
In this document, you use the following billable components of Google Cloud:
- BigQuery: You incur costs for the data that you process in BigQuery.
- Vertex AI: You incur costs for calls to Vertex AI models.
- Document AI: You incur costs for calls to the Document AI API.
- Cloud Storage: You incur costs for object storage in Cloud Storage.
To generate a cost estimate based on your projected usage, use the pricing calculator. For more information, see the pricing pages for BigQuery, Vertex AI, Document AI, and Cloud Storage.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the BigQuery, BigQuery Connection, Vertex AI, Document AI, and Cloud Storage APIs.
Required roles
- To create Cloud Storage buckets and objects, you need membership in the `roles/storage.storageAdmin` role.
- To create a Document AI processor, you need membership in the `roles/documentai.editor` role.
- To create a connection, you need membership in the `roles/bigquery.connectionAdmin` role.
- To grant permissions to the connection's service account, you need membership in the `roles/resourcemanager.projectIamAdmin` role.
- The IAM permissions needed in this tutorial for the remaining BigQuery operations are included in the following two roles:
  - BigQuery Data Editor (`roles/bigquery.dataEditor`) to create models, tables, and indexes.
  - BigQuery User (`roles/bigquery.user`) to run BigQuery jobs.
Create a dataset
Create a BigQuery dataset to store your ML model:
- In the Google Cloud console, go to the BigQuery page.
- In the Explorer pane, click your project name.
- Click View actions > Create dataset.
- On the Create dataset page, do the following:
  - For Dataset ID, enter `bqml_tutorial`.
  - For Location type, select Multi-region, and then select US (multiple regions in United States). The public datasets are stored in the `US` multi-region. For simplicity, store your dataset in the same location.
  - Leave the remaining default settings as they are, and click Create dataset.
Create a connection
Create a Cloud resource connection and get the connection's service account. Create the connection in the same location as the dataset that you created in the previous section.
Select one of the following options:
Console
- Go to the BigQuery page.
- To create a connection, click Add, and then click Connections to external data sources.
- In the Connection type list, select Vertex AI remote models, remote functions and BigLake (Cloud Resource).
- In the Connection ID field, enter a name for your connection.
- Click Create connection.
- Click Go to connection.
- In the Connection info pane, copy the service account ID for use in a later step.
bq
In a command-line environment, create a connection:
```sh
bq mk --connection --location=REGION --project_id=PROJECT_ID \
    --connection_type=CLOUD_RESOURCE CONNECTION_ID
```
The `--project_id` parameter overrides the default project.

Replace the following:

- `REGION`: your connection region
- `PROJECT_ID`: your Google Cloud project ID
- `CONNECTION_ID`: an ID for your connection
When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.
Troubleshooting: If you get the following connection error, update the Google Cloud SDK:
```
Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
```
Retrieve and copy the service account ID for use in a later step:
```sh
bq show --connection PROJECT_ID.REGION.CONNECTION_ID
```
The output is similar to the following:
```
name                       properties
1234.REGION.CONNECTION_ID  {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
```
Terraform
Use the `google_bigquery_connection` resource.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following example creates a Cloud resource connection named `my_cloud_resource_connection` in the `US` region.
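A minimal sketch of such a configuration, assuming the `google_bigquery_connection` resource from the Terraform Google provider with an empty `cloud_resource` block (the resource label `default` is an arbitrary placeholder):

```hcl
# Sketch: a BigQuery Cloud resource connection in the US multi-region.
# The resource label "default" is a placeholder; set the project and
# location to match your environment.
resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  location      = "US"
  cloud_resource {}
}
```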
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
- Set the default Google Cloud project where you want to apply your Terraform configurations. You only need to run this command once per project, and you can run it in any directory:

  ```sh
  export GOOGLE_CLOUD_PROJECT=PROJECT_ID
  ```

  Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The file name must have the `.tf` extension, for example `main.tf`. In this tutorial, the file is referred to as `main.tf`:

  ```sh
  mkdir DIRECTORY && cd DIRECTORY && touch main.tf
  ```
- If you are following a tutorial, you can copy the sample code in each section or step.

  Copy the sample code into the newly created `main.tf`.

  Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory:

  ```sh
  terraform init
  ```

  Optionally, to use the latest Google provider version, include the `-upgrade` option:

  ```sh
  terraform init -upgrade
  ```
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:

  ```sh
  terraform plan
  ```

  Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering `yes` at the prompt:

  ```sh
  terraform apply
  ```

  Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Grant access to the service account
Select one of the following options:
Console
- Go to the IAM & Admin page.
- Click Grant access. The Add principals dialog opens.
- In the New principals field, enter the service account ID that you copied earlier.
- In the Select a role field, select Document AI, and then select Document AI Viewer.
- Click Add another role.
- In the Select a role field, select Cloud Storage, and then select Storage Object Viewer.
- Click Add another role.
- In the Select a role field, select Vertex AI, and then select Vertex AI User.
- Click Save.
gcloud
Use the `gcloud projects add-iam-policy-binding` command:

```sh
gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/documentai.viewer' --condition=None
gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/storage.objectViewer' --condition=None
gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/aiplatform.user' --condition=None
```
Replace the following:
- `PROJECT_NUMBER`: your project number.
- `MEMBER`: the service account ID that you copied earlier.
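Optionally, you can verify the bindings by reading back the project's IAM policy. The following is a sketch using standard `gcloud projects get-iam-policy` flags; the `--flatten`/`--filter`/`--format` combination shown is one common way to list only the roles bound to a single principal:

```sh
# Optional check: list the roles currently bound to the
# connection's service account.
gcloud projects get-iam-policy 'PROJECT_NUMBER' \
    --flatten='bindings[].members' \
    --filter='bindings.members:serviceAccount:MEMBER' \
    --format='value(bindings.role)'
```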
Upload the sample PDF to Cloud Storage
To upload the sample PDF to Cloud Storage, follow these steps:
- Download the `scf23.pdf` sample PDF by going to https://www.federalreserve.gov/publications/files/scf23.pdf and clicking download.
- Create a Cloud Storage bucket.
- Upload the `scf23.pdf` file to the bucket. If you prefer the command line to the Google Cloud console, see the sketch after this list.
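The following sketch covers the bucket and upload steps from the command line, assuming the `gcloud storage` commands and a placeholder bucket name `BUCKET` (bucket names must be globally unique):

```sh
# Create a Cloud Storage bucket in the US multi-region
# (BUCKET is a placeholder name).
gcloud storage buckets create gs://BUCKET --location=US

# Upload the downloaded sample PDF to the bucket.
gcloud storage cp scf23.pdf gs://BUCKET/
```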
Create an object table
Create an object table over the PDF file in Cloud Storage:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
CREATE OR REPLACE EXTERNAL TABLE `bqml_tutorial.pdf`
WITH CONNECTION `LOCATION.CONNECTION_ID`
OPTIONS(
  object_metadata = 'SIMPLE',
  uris = ['gs://BUCKET/scf23.pdf']);
```
Replace the following:
- `LOCATION`: the connection location.
- `CONNECTION_ID`: the ID of your BigQuery connection. When you view the connection details in the Google Cloud console, the `CONNECTION_ID` is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example `projects/myproject/locations/connection_location/connections/myconnection`.
- `BUCKET`: the Cloud Storage bucket containing the `scf23.pdf` file. The full `uris` option value should look similar to `['gs://mybucket/scf23.pdf']`.
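Optionally, to confirm that BigQuery can see the file through the object table, query the table's metadata columns. This is a sketch that assumes the standard object table metadata columns (`uri`, `content_type`, `size`); it should return one row for the uploaded PDF:

```sql
-- Spot-check the object table: expect one row describing scf23.pdf.
SELECT uri, content_type, size
FROM `bqml_tutorial.pdf`;
```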
Create a document processor
Create a document processor based on the Layout Parser processor in the `us` multi-region.
Create the remote model for the document processor
Create a remote model to access the Document AI processor:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
CREATE OR REPLACE MODEL `bqml_tutorial.parser_model`
REMOTE WITH CONNECTION `LOCATION.CONNECTION_ID`
OPTIONS(
  remote_service_type = 'CLOUD_AI_DOCUMENT_V1',
  document_processor = 'PROCESSOR_ID');
```
Replace the following:
- `LOCATION`: the connection location.
- `CONNECTION_ID`: the ID of your BigQuery connection. When you view the connection details in the Google Cloud console, the `CONNECTION_ID` is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example `projects/myproject/locations/connection_location/connections/myconnection`.
- `PROCESSOR_ID`: the document processor ID. To find this value, view the processor details, and then look at the ID row in the Basic Information section.
Parse the PDF file into chunks
Use the document processor with the `ML.PROCESS_DOCUMENT` function to parse the PDF file into chunks, and then write that content to a table. The `ML.PROCESS_DOCUMENT` function returns the PDF chunks in JSON format.
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
CREATE OR REPLACE TABLE `bqml_tutorial.chunked_pdf` AS (
  SELECT *
  FROM ML.PROCESS_DOCUMENT(
    MODEL `bqml_tutorial.parser_model`,
    TABLE `bqml_tutorial.pdf`,
    PROCESS_OPTIONS => (JSON '{"layout_config": {"chunking_config": {"chunk_size": 250}}}')
  )
);
```
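Optionally, before extracting fields in the next section, you can inspect the raw JSON that the parser produced. This sketch assumes the output column is named `ml_process_document_result`, the name that the extraction query in the next section also relies on:

```sql
-- Inspect the raw parser output: each row carries the chunked document as JSON.
SELECT uri, ml_process_document_result
FROM `bqml_tutorial.chunked_pdf`
LIMIT 1;
```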
Parse the PDF chunk data into separate columns
Extract the PDF content and metadata information from the JSON data returned by the `ML.PROCESS_DOCUMENT` function, and then write that content to a table:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement to parse the PDF content:
```sql
CREATE OR REPLACE TABLE `bqml_tutorial.parsed_pdf` AS (
  SELECT
    uri,
    JSON_EXTRACT_SCALAR(json, '$.chunkId') AS id,
    JSON_EXTRACT_SCALAR(json, '$.content') AS content,
    JSON_EXTRACT_SCALAR(json, '$.pageFooters[0].text') AS page_footers_text,
    JSON_EXTRACT_SCALAR(json, '$.pageSpan.pageStart') AS page_span_start,
    JSON_EXTRACT_SCALAR(json, '$.pageSpan.pageEnd') AS page_span_end
  FROM `bqml_tutorial.chunked_pdf`,
    UNNEST(JSON_EXTRACT_ARRAY(ml_process_document_result.chunkedDocument.chunks, '$')) json
);
```

In the query editor, run the following statement to view a subset of the parsed PDF content:

```sql
SELECT * FROM `bqml_tutorial.parsed_pdf`
ORDER BY id
LIMIT 5;
```
The output is similar to the following:
```
+-------------------------+------+----------------------------------------------------------------------------------------------------+-------------------+-----------------+---------------+
| uri                     | id   | content                                                                                            | page_footers_text | page_span_start | page_span_end |
+-------------------------+------+----------------------------------------------------------------------------------------------------+-------------------+-----------------+---------------+
| gs://mybucket/scf23.pdf | c1   | •BOARD OF OF FEDERAL GOVERN NOR RESERVE SYSTEM RESEARCH & ANALYSIS                                 | NULL              | 1               | 1             |
| gs://mybucket/scf23.pdf | c10  | • In 2022, 20 percent of all families, 14 percent of families in the bottom half of the usual ... | NULL              | 8               | 9             |
| gs://mybucket/scf23.pdf | c100 | The SCF asks multiple questions intended to capture whether families are credit constrained, ...  | NULL              | 48              | 48            |
| gs://mybucket/scf23.pdf | c101 | Bankruptcy behavior over the past five years is based on a series of retrospective questions ...  | NULL              | 48              | 48            |
| gs://mybucket/scf23.pdf | c102 | # Percentiles of the Distributions of Income and Net Worth                                        | NULL              | 48              | 49            |
+-------------------------+------+----------------------------------------------------------------------------------------------------+-------------------+-----------------+---------------+
```
Create the remote model for embedding generation
Create a remote model that represents a hosted Vertex AI text embedding generation model:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
CREATE OR REPLACE MODEL `bqml_tutorial.embedding_model`
REMOTE WITH CONNECTION `LOCATION.CONNECTION_ID`
OPTIONS (ENDPOINT = 'text-embedding-004');
```
Replace the following:
- `LOCATION`: the connection location.
- `CONNECTION_ID`: the ID of your BigQuery connection. When you view the connection details in the Google Cloud console, the `CONNECTION_ID` is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example `projects/myproject/locations/connection_location/connections/myconnection`.
Generate embeddings
Generate embeddings for the parsed PDF content and then write them to a table:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
CREATE OR REPLACE TABLE `bqml_tutorial.embeddings` AS
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `bqml_tutorial.embedding_model`,
  TABLE `bqml_tutorial.parsed_pdf`
);
```
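Optionally, spot-check the output before building on it. This sketch assumes the standard `ML.GENERATE_EMBEDDING` output columns: `ml_generate_embedding_result` holds the embedding vector, and `ml_generate_embedding_status` is an empty string on success:

```sql
-- Spot-check the embeddings: text-embedding-004 returns 768-dimension vectors
-- by default, and a non-empty status indicates a row that failed to embed.
SELECT
  id,
  ARRAY_LENGTH(ml_generate_embedding_result) AS embedding_dimensions,
  ml_generate_embedding_status
FROM `bqml_tutorial.embeddings`
LIMIT 5;
```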
Run a vector search
Run a vector search against the parsed PDF content.

The following query takes text input, creates an embedding for that input using the `ML.GENERATE_EMBEDDING` function, and then uses the `VECTOR_SEARCH` function to match the input embedding with the most similar PDF content embeddings. The results are the top ten PDF chunks that are most semantically similar to the input.
Go to the BigQuery page.
In the query editor, run the following SQL statement:
```sql
SELECT
  query.query,
  base.id AS pdf_chunk_id,
  base.content,
  distance
FROM
  VECTOR_SEARCH(
    TABLE `bqml_tutorial.embeddings`,
    'ml_generate_embedding_result',
    (
      SELECT
        ml_generate_embedding_result,
        content AS query
      FROM
        ML.GENERATE_EMBEDDING(
          MODEL `bqml_tutorial.embedding_model`,
          (SELECT 'Did the typical family net worth increase? If so, by how much?' AS content)
        )
    ),
    top_k => 10,
    OPTIONS => '{"fraction_lists_to_search": 0.01}')
ORDER BY distance DESC;
```
The output is similar to the following:
```
+--------------------------------------------------+--------------+--------------------------------------------------------------------------------------------+---------------------+
| query                                            | pdf_chunk_id | content                                                                                    | distance            |
+--------------------------------------------------+--------------+--------------------------------------------------------------------------------------------+---------------------+
| Did the typical family net worth increase? ,...  | c9           | ## Assets                                                                                  | 0.31113668174119469 |
|                                                  |              |                                                                                            |                     |
|                                                  |              | The homeownership rate increased slightly between 2019 and 2022, to 66.1 percent. For ... |                     |
+--------------------------------------------------+--------------+--------------------------------------------------------------------------------------------+---------------------+
| Did the typical family net worth increase? ,...  | c50          | # Box 3. Net Housing Wealth and Housing Affordability                                      | 0.30973592073929113 |
|                                                  |              |                                                                                            |                     |
|                                                  |              | For families that own their primary residence ...                                         |                     |
+--------------------------------------------------+--------------+--------------------------------------------------------------------------------------------+---------------------+
| Did the typical family net worth increase? ,...  | c50          | 3 In the 2019 SCF, a small portion of the data collection overlapped with early months of  | 0.29270064592817646 |
|                                                  |              | the COVID- ...                                                                             |                     |
+--------------------------------------------------+--------------+--------------------------------------------------------------------------------------------+---------------------+
```
Create the remote model for text generation
Create a remote model that represents a hosted Vertex AI text generation model:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
CREATE OR REPLACE MODEL `bqml_tutorial.text_model`
REMOTE WITH CONNECTION `LOCATION.CONNECTION_ID`
OPTIONS (ENDPOINT = 'gemini-1.5-flash-002');
```
Replace the following:
- `LOCATION`: the connection location.
- `CONNECTION_ID`: the ID of your BigQuery connection. When you view the connection details in the Google Cloud console, the `CONNECTION_ID` is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example `projects/myproject/locations/connection_location/connections/myconnection`.
Generate text augmented by vector search results
Perform a vector search on the embeddings to identify semantically similar PDF content, and then use the `ML.GENERATE_TEXT` function with the vector search results to augment the prompt input and improve the text generation results. In this case, the query uses information from the PDF chunks to answer a question about the change in family net worth over the past decade.
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
```sql
SELECT ml_generate_text_llm_result AS generated
FROM
  ML.GENERATE_TEXT(
    MODEL `bqml_tutorial.text_model`,
    (
      SELECT
        CONCAT(
          'Did the typical family net worth change? How does this compare to the SCF survey a decade earlier? Be concise and use the following context:',
          STRING_AGG(FORMAT("context: %s and reference: %s", base.content, base.uri), ',\n')) AS prompt
      FROM
        VECTOR_SEARCH(
          TABLE `bqml_tutorial.embeddings`,
          'ml_generate_embedding_result',
          (
            SELECT
              ml_generate_embedding_result,
              content AS query
            FROM
              ML.GENERATE_EMBEDDING(
                MODEL `bqml_tutorial.embedding_model`,
                (SELECT 'Did the typical family net worth change? How does this compare to the SCF survey a decade earlier?' AS content)
              )
          ),
          top_k => 10,
          OPTIONS => '{"fraction_lists_to_search": 0.01}')
    ),
    STRUCT(512 AS max_output_tokens, TRUE AS flatten_json_output)
  );
```
The output is similar to the following:
```
+-------------------------------------------------------------------------------+
| generated                                                                     |
+-------------------------------------------------------------------------------+
| Between the 2019 and 2022 Survey of Consumer Finances (SCF), real median      |
| family net worth surged 37 percent to $192,900, and real mean net worth       |
| increased 23 percent to $1,063,700. This represents the largest three-year    |
| increase in median net worth in the history of the modern SCF, exceeding the  |
| next largest by more than double. In contrast, between 2010 and 2013, real    |
| median net worth decreased 2 percent, and real mean net worth remained        |
| unchanged.                                                                    |
+-------------------------------------------------------------------------------+
```
Clean up
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.