Last updated: 2025-09-04 (UTC).

Creating Instructions for HITL Review
=====================================

| **Caution**: Document AI Human-in-the-Loop is deprecated and will no longer be available on Google Cloud after January 16, 2025. New customers are not allowlisted. If you want to use HITL but don't see the option available, contact your Google Account team.
|
| To implement a human review and correction solution that meets your requirements, we recommend working with a Google Cloud certified partner such as Devoteam, Searce, or Quantiphi. See [Deprecations](/document-ai/docs/deprecation) for details.

| **Note**: This product is subject to the [Data Processing and Security Terms](/terms/data-processing-terms).

The HITL Labeler Workbench provides a What You See Is What You Get (WYSIWYG) interface that maps document entities to the extracted labels, which makes it easy for the labeler to compare and correct. Even so, an instructions document is needed to tell the human labelers which labels to look for and add in case a label is missed by the Document AI model or by the HITL validation filters. The instructions should cover:

- Which labels to review.
- Whether any fields are mandatory or optional.
- Any business logic to:
  - Correct labels (such as adding "USA" to United States addresses that don't specify it).
  - Reject documents with the correct rejection field (such as rejecting invoices over $10,000).
- Special label names in the document that map to schema labels, so labelers can add these (such as "Client #" = "Account #").
- These rules can also be set up as filters in the HITL task configuration.

Design good instructions
------------------------

Good instructions are the most important factor in getting good human labeling results. Good instructions let human labelers know exactly what you want them to do. Here are some guidelines for creating them:

- The human labelers might not have your domain knowledge. The distinctions you ask labelers to make must be easy to understand for someone unfamiliar with your use case.
- Avoid making the instructions too long.
  It is best if a labeler can review and understand them within 20 minutes.
- Instructions must describe the concept of the task as well as details about how to label the data.
- If your instructions have a corresponding label set, they must cover all labels in that set. The label name in the instructions must match the name in the label set.
- It often takes several iterations to create good instructions. We recommend having a small dataset labeled first, then adjusting your instructions based on what you see in the results you receive.

A good instructions file must include the following sections:

- Label list and descriptions: list all of the labels that are used and describe the meaning of each label.
- Examples: for each label, give at least three positive examples and one negative example. These examples must cover different cases.
- Edge cases: clarify as many edge cases as you can. This reduces the need for the labeler to interpret the label. For example, if you need to draw a bounding box for a person, it is better to clarify:
  - If there are multiple people, do you need a box for each person?
  - If a person is occluded, do you need a box?
  - Do you need a box for a person who is only partially shown in the image?
  - Do you need a box for a person in a picture or painting?
- How to add annotations. For example:
  - For a bounding box, do you need a tight box or a loose box?
  - For text entity extraction, where should the entity of interest start and end?
- Clarification on labels: if two labels are similar or easy to confuse, give examples to clarify the differences.

Visual Examples
---------------

Visual examples clarify for labelers where to expect different entities in the document and how those entities map to the extracted labels in the schema. Include visual examples in your instructions like the following:
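The business logic described earlier (mandatory fields, label-name mappings such as "Client #" = "Account #", address correction, and rejecting invoices over $10,000) can also be expressed as a deterministic pre-check that runs before or alongside human review. The sketch below is a hypothetical illustration under assumed entity names and rules; it is not part of the Document AI API, and the US-address check in particular is deliberately naive:

```python
# Hypothetical post-extraction review rules. Entity names, the alias
# table, and the thresholds are illustrative assumptions only.

REQUIRED_FIELDS = {"invoice_id", "total_amount", "supplier_address"}
LABEL_ALIASES = {"Client #": "Account #"}  # document label -> schema label


def review_entities(entities):
    """Apply correction and rejection rules to extracted entities.

    `entities` maps a schema label to its extracted text value.
    Returns (corrected_entities, rejection_reason_or_None).
    """
    corrected = dict(entities)

    # Map special document label names onto schema labels.
    for doc_label, schema_label in LABEL_ALIASES.items():
        if doc_label in corrected and schema_label not in corrected:
            corrected[schema_label] = corrected.pop(doc_label)

    # Mandatory-field check: missing fields need labeler attention.
    missing = REQUIRED_FIELDS - corrected.keys()
    if missing:
        return corrected, f"missing required fields: {sorted(missing)}"

    # Correction rule: append "USA" when the country is omitted.
    # (Naive stand-in for real US-address detection.)
    address = corrected["supplier_address"]
    if "USA" not in address:
        corrected["supplier_address"] = address + ", USA"

    # Rejection rule: invoices over $10,000 are routed for rejection.
    amount = float(corrected["total_amount"].replace("$", "").replace(",", ""))
    if amount > 10_000:
        return corrected, "invoice total exceeds $10,000"

    return corrected, None
```

Keeping such rules in code means the instructions document only has to explain the judgment calls a machine cannot make, which helps keep it within the 20-minute review budget suggested above.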