Create a Custom Document Splitter in the Google Cloud console

You can create Custom Document Splitters (CDS) that are specifically suited to your documents, trained and evaluated with your own data. This processor identifies classes of documents from a user-defined set of classes. You can then use the trained processor on additional documents. You would typically use a CDS on files that contain documents of different types, and then use the identification to pass each document to an extraction processor that extracts its entities.

A typical workflow to create and use a CDS is as follows:

  1. Create a Custom Document Splitter in Document AI Workbench.
  2. Create a dataset using an empty Cloud Storage bucket.
  3. Define and create the processor schema.
  4. Import documents.
  5. Assign documents to the Training and Test sets.
  6. Annotate documents manually in Document AI Workbench or with Labeling Tasks.
  7. Train the processor.
  8. Evaluate the processor.
  9. Deploy the processor.
  10. Test the processor.
  11. Use the processor on your documents.

You can make your own configuration choices that suit your workflow.

This guide describes how to use Document AI Workbench to create and train a Custom Document Splitter that splits and classifies procurement documents. Most of the document preparation work has been done so that you can focus on the other mechanics of creating a CDS.


To follow step-by-step guidance for this task directly in the Google Cloud console, click Guide me.


Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Document AI and Cloud Storage APIs.

    Enable the APIs

Create a processor

  1. In the Google Cloud console, in the Document AI section, go to the Workbench page.

    Workbench

  2. For Custom Document Splitter, click Create processor.

    Select CDS processor

  3. In the Create processor menu, enter a name for your processor, such as my-custom-document-splitter.

    Create CDS processor

  4. Select the region closest to you.

  5. Click Create. The Processor Details tab appears.

Create a Cloud Storage bucket for the dataset

To train this new processor, you must create a dataset with training and testing data that helps the processor identify the documents that you want to split and classify.

This dataset requires a new Cloud Storage bucket. Do not use the same bucket where your documents are currently stored.

  1. Go to your processor's Train tab.

  2. You have the option to let Google create a Cloud Storage bucket for you, or you can use your own. For this tutorial, select Google-managed storage under Advanced Options.

  3. Click Continue to create the dataset. Creating the dataset might take several minutes.

    Create dataset
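
If you prefer to manage the dataset bucket yourself instead of using Google-managed storage, you can create an empty bucket ahead of time. The following is a minimal sketch using the google-cloud-storage Python client library; the project ID, bucket name, and location are placeholders to replace with your own values.

    from google.cloud import storage

    # Assumption: Application Default Credentials are configured for your project.
    client = storage.Client(project="your-project-id")

    # The dataset bucket must be new and empty; don't reuse the bucket
    # that holds your source documents.
    bucket = client.create_bucket("your-cds-dataset-bucket", location="us")
    print(f"Created dataset bucket: {bucket.name}")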

Define processor schema

You can create the processor schema either before or after you import documents into your dataset. The schema provides labels that you will use to annotate documents.

  1. On the Train tab, click Edit Schema in the lower left. The Manage labels page opens.

  2. Click Create label.

  3. Enter a name for the label, and then click Create. Refer to Define processor schema for detailed instructions on creating and editing a schema.

    Note: Labels cannot be deleted. Instead, you can disable any label you do not want to use.

  4. Create each of the following labels for the processor schema.

    • bank_statement
    • form_1040
    • form_w2
    • form_w9
    • paystub
  5. Click Save when the labels are complete.

    Manage labels console

Import an unlabeled document into a dataset

Next, you import an unlabeled document into your dataset and label it.

If you're working on your own project, you decide how to label your data. Refer to Labeling options.

Document AI Custom Processors require a minimum of 10 documents in the training and test sets, along with 10 instances of each label in each set. We recommend at least 50 documents in each set, with 50 instances of each label for best performance. In general, more training data produces higher accuracy.

  1. On the Train tab, click Import documents.

    Import documents

  2. For this example, enter the following path in Source path. This folder contains one PDF document.

    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-Unlabeled
    
  3. Set the Document label to None.

  4. Set the Dataset split dropdown to Unassigned.

    The document in this folder is not given a label or assigned to the testing or training set by default.

  5. Click Import. Document AI reads the documents from the bucket into the dataset. It does not modify the import bucket or read from the bucket after the import is complete.

When you import documents, you can assign them to the Training or Test set at import time, or wait and assign them later.

If you want to delete a document or documents that you have imported, select them on the Train tab, and click Delete.

For more information about preparing your data for import, refer to the Data preparation guide.
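
If you'd rather script imports than click through the console, the Document AI Workbench dataset API (v1beta3 at the time of writing) exposes an importDocuments method. The following is a hedged sketch with placeholder project and processor IDs; depending on your region, you might also need to set a regional API endpoint through client options.

    from google.cloud import documentai_v1beta3 as documentai

    client = documentai.DocumentServiceClient()
    dataset = (
        "projects/your-project-id/locations/us/"
        "processors/your-processor-id/dataset"
    )

    request = documentai.ImportDocumentsRequest(
        dataset=dataset,
        batch_documents_import_configs=[
            documentai.ImportDocumentsRequest.BatchDocumentsImportConfig(
                batch_input_config=documentai.BatchDocumentsInputConfig(
                    gcs_prefix=documentai.GcsPrefix(
                        gcs_uri_prefix="gs://cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-Unlabeled"
                    )
                ),
                # Mirrors the Unassigned dataset split in the console.
                dataset_split=documentai.DatasetSplitType.DATASET_SPLIT_UNASSIGNED,
            )
        ],
    )

    # Import runs as a long-running operation.
    operation = client.import_documents(request=request)
    operation.result()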

Label a document

The process of applying labels to a document is known as annotation.

  1. Return to the Train tab, and click a document to open the Label management console.

  2. This document contains multiple page groups that need to be identified and labeled. First, identify the split points. Move your pointer between pages 1 and 2 in the image view, and click the + symbol.

    Page group

  3. Create split points before the following page numbers: 2, 3, 4, 5.

    Your console should look like this when finished. Page group

  4. In the Document type dropdown, select the appropriate label for each page group.

    Page(s)   Document type
    1         paystub
    2         form_w9
    3         bank_statement
    4         form_w2
    5 & 6     form_1040

    The labeled document should look like this when complete: Labeled mixed document

  5. Click Mark as Labeled when you have finished annotating the document.

    On the Train tab, the left-hand panel shows that 1 document has been labeled.
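
In the underlying Document format, each page group that you label becomes an entity whose page anchor lists the pages in the group. As an illustrative sketch (page numbers are zero-based in the API), the annotations for this document are conceptually equivalent to:

    # Conceptual sketch of the split annotations, not literal API output.
    # Each entity is one page group; page indexes are zero-based.
    entities = [
        {"type": "paystub",        "pages": [0]},
        {"type": "form_w9",        "pages": [1]},
        {"type": "bank_statement", "pages": [2]},
        {"type": "form_w2",        "pages": [3]},
        {"type": "form_1040",      "pages": [4, 5]},
    ]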

Assign annotated document to the training set

Now that you have labeled this example document, you can assign it to the training set.

  1. On the Train tab, select the Select All checkbox.

  2. From the Assign to Set list, select Training.

In the left-hand panel, you can see that 1 document has been assigned to the training set.

Import data with batch labeling

Next, you import unlabeled PDF files that are sorted into different Cloud Storage folders by their type. Batch labeling helps save time on labeling by assigning a label at import time based on the file path.

  1. On the Train tab, click Import documents.

  2. Enter the following path in Source path. This folder contains PDFs of bank statements.

    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-CDS-BatchLabel/bank-statement
    
  3. Set the Document label to bank_statement.

  4. Set the Dataset split dropdown to Auto-split. This automatically splits the documents to have 80% in the training set and 20% in the test set.

  5. Click Add Another Folder to add more folders.

  6. Repeat the previous steps with the following paths and document labels:

    Bucket path                                                                          Document label
    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-CDS-BatchLabel/1040        form_1040
    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-CDS-BatchLabel/w2          form_w2
    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-CDS-BatchLabel/w9          form_w9
    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-CDS-BatchLabel/paystub     paystub

    The console should look like this when complete: Batch label import

  7. Click Import. The import takes several minutes.

When the import is finished, find the documents on the Train tab.

Import prelabeled data

In this guide, you are provided with prelabeled data in the Document format as JSON files.

This is the same format that Document AI outputs when processing a document, labeling with Human-in-the-Loop, or exporting a dataset.

  1. On the Train tab, click Import documents.

  2. Enter the following path in Source path.

    cloud-samples-data/documentai/Custom/Lending-Splitter/JSON-Labeled
    
  3. Set the Document label to None.

  4. Set the Dataset split dropdown to Auto-split.

  5. Click Import.

When the import is finished, find the documents on the Train tab.
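
If you want to sanity-check a prelabeled Document JSON file before importing it, you can load it with the Python client library and list the labeled page groups. A minimal sketch; the local file name is a placeholder.

    from google.cloud import documentai

    # Assumption: one of the labeled JSON files has been copied locally.
    with open("labeled-document.json", "r", encoding="utf-8") as f:
        document = documentai.Document.from_json(f.read(), ignore_unknown_fields=True)

    # Each entity is one labeled page group.
    for entity in document.entities:
        pages = [page_ref.page for page_ref in entity.page_anchor.page_refs]
        print(f"{entity.type_}: pages {pages}")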

Train the processor

Now that you have imported the training and test data, you can train the processor. Because training might take several hours, make sure you have set up the processor with the appropriate data and labels before you begin training.

  1. Click Train New Version.

  2. In the Version name field, enter a name for this processor version, such as my-cds-version-1.

  3. (Optional) Click View Label Stats to view information about the document labels. This can help you check your label coverage. Click Close to return to the training setup.

  4. Click Start training. You can check the status in the right-hand panel.
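
Training can also be started from code. The following is a sketch using the trainProcessorVersion method (available in the v1beta3 client at the time of writing); the project and processor IDs are placeholders.

    from google.cloud import documentai_v1beta3 as documentai

    client = documentai.DocumentProcessorServiceClient()
    parent = client.processor_path("your-project-id", "us", "your-processor-id")

    request = documentai.TrainProcessorVersionRequest(
        parent=parent,
        processor_version=documentai.ProcessorVersion(display_name="my-cds-version-1"),
    )

    # Training is a long-running operation and can take several hours.
    operation = client.train_processor_version(request=request)
    print(f"Training operation: {operation.operation.name}")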

Deploy the processor version

  1. After training is complete, navigate to the Manage Versions tab. You can view details about the version you just trained.

  2. Click the three vertical dots on the right of the version you want to deploy, and select Deploy version.

  3. Select Deploy from the popup window.

    Deployment takes a few minutes to complete.
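
Deployment is also exposed through the API as a long-running operation. A minimal sketch with placeholder IDs:

    from google.cloud import documentai_v1 as documentai

    client = documentai.DocumentProcessorServiceClient()
    name = client.processor_version_path(
        "your-project-id", "us", "your-processor-id", "your-version-id"
    )

    # Deploys the trained version; this takes a few minutes.
    operation = client.deploy_processor_version(name=name)
    operation.result()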

Evaluate and test the processor

  1. After deployment is complete, navigate to the Evaluate & Test tab.

    On this page, you can view evaluation metrics, including the F1 score, precision, and recall, for the full document and for individual labels. For more information about evaluation and statistics, refer to Evaluate processor.

  2. Download a document that has not been used in previous training or testing so that you can use it to evaluate the processor version. If you're using your own data, use a document that was set aside for this purpose.

    Download PDF

  3. Click Upload Test Document and select the document you just downloaded.

    The Custom Document Splitter analysis page opens. The output shows how the document was split and classified.

    The console should look like this when complete: Evaluation

    You can also re-run the evaluation against a different test set or processor version.
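
You can also kick off an evaluation programmatically with the evaluateProcessorVersion method. A hedged sketch with placeholder IDs; by default, the evaluation runs against the documents currently in the test set.

    from google.cloud import documentai_v1 as documentai

    client = documentai.DocumentProcessorServiceClient()
    processor_version = client.processor_version_path(
        "your-project-id", "us", "your-processor-id", "your-version-id"
    )

    # Runs an evaluation against the dataset's test set.
    operation = client.evaluate_processor_version(processor_version=processor_version)
    response = operation.result()

    # Fetch the finished evaluation by its resource name.
    evaluation = client.get_evaluation(name=response.evaluation)
    print(f"Evaluated {evaluation.document_counters.evaluated_documents_count} documents")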

(Optional) Import data with autolabeling

After deploying a trained processor version, you can use Auto-labeling to save time on labeling when importing new documents.

  1. On the Train tab, click Import documents.

  2. Enter the following path in Source path. This folder contains unlabeled PDFs of multiple document types.

    cloud-samples-data/documentai/Custom/Lending-Splitter/PDF-CDS-AutoLabel
    
  3. Set the Document label to Auto-label.

  4. Set the Dataset split dropdown to Auto-split.

  5. In the Auto-labeling section, set the Version as the version you previously trained.

    • For example: 2af620b2fd4d1fcf
  6. Click Import and wait for the documents to import.

  7. You cannot use autolabeled documents for training or testing without marking them as labeled. Go to the Auto-labeled section to view the autolabeled documents.

  8. Select the first document to enter the labeling console.

  9. Verify that the label is correct, and adjust it if necessary.

  10. Select Mark as Labeled when finished.

  11. Repeat the label verification for each autolabeled document.

  12. Return to the Train page and click Train New Version to use the data for training.

Use the processor

You have successfully created and trained a Custom Document Splitter processor.

You can manage your custom-trained processor versions just like any other processor version. For more information, refer to Managing processor versions.

Once deployed, you can Send a processing request to your custom processor, and the response can be handled the same as other splitter processors.
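
As a sketch of what handling a splitter response can look like, the following processes a local PDF with your deployed version and writes each detected page group out as its own PDF. It assumes the third-party pypdf package and uses placeholder IDs and file names.

    from google.cloud import documentai_v1 as documentai
    from pypdf import PdfReader, PdfWriter

    client = documentai.DocumentProcessorServiceClient()
    name = client.processor_version_path(
        "your-project-id", "us", "your-processor-id", "your-version-id"
    )

    with open("mixed-document.pdf", "rb") as f:
        pdf_bytes = f.read()

    request = documentai.ProcessRequest(
        name=name,
        raw_document=documentai.RawDocument(
            content=pdf_bytes, mime_type="application/pdf"
        ),
    )
    document = client.process_document(request=request).document

    # Each entity is one classified page group; write it to a separate file.
    reader = PdfReader("mixed-document.pdf")
    for i, entity in enumerate(document.entities):
        writer = PdfWriter()
        for page_ref in entity.page_anchor.page_refs:
            writer.add_page(reader.pages[int(page_ref.page)])
        with open(f"split-{i}-{entity.type_}.pdf", "wb") as out:
            writer.write(out)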

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, use the Google Cloud console to delete your processor and project if you no longer need them.

If you created a new project to learn about Document AI and you no longer need the project, delete the project.

If you used an existing Google Cloud project, delete the resources you created to avoid incurring charges to your account:

  1. In the Google Cloud console navigation menu, click Document AI and select My Processors.

  2. Click More actions in the same row as the processor you want to delete.

  3. Click Delete processor, type the processor name, then click Delete again to confirm.

What's next