Annotate assessments

This page explains how you can provide feedback on the accuracy of the assessments created by reCAPTCHA Enterprise. With this information, reCAPTCHA Enterprise can tune your site-specific model and provide improved performance for your site over time.

When to annotate assessments

When reCAPTCHA Enterprise creates an assessment, it provides a score that helps you understand the level of risk that user interactions pose. Later, when your site has more information about a user interaction and can determine whether it was legitimate or fraudulent, you can confirm or correct reCAPTCHA Enterprise's assessment. To do this, you send the reCAPTCHA assessment IDs back to Google labeled as LEGITIMATE or FRAUDULENT. Confirming or correcting reCAPTCHA Enterprise's assessment improves the performance of reCAPTCHA Enterprise for your site.

To improve the performance of reCAPTCHA Enterprise, send annotations for true positives and true negatives in addition to annotations for potential assessment errors. For example, for a user who successfully authenticated using a 2-factor-authentication method and received a high reCAPTCHA score, you can annotate the assessment as LEGITIMATE. Alternatively, if the reCAPTCHA score was low and your site determined that the interaction was fraudulent or abusive, you can annotate the assessment as FRAUDULENT.

The following list shows sample signals that can indicate whether a user interaction was legitimate or fraudulent:

  • Credit card chargebacks or other concrete evidence of fraud indicate that an earlier financial transaction was fraudulent.
  • A new account that sends messages that are flagged as spam by other users might indicate that the account creation was fraudulent.
  • A support case filed when a user has difficulty logging in to their account might indicate that the login attempt was legitimate.
  • A purchase or booking on a site that uses reCAPTCHA Enterprise to defend against scraping might indicate that the user is legitimate.
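The signals above can be turned into annotation labels on your site's side. The following sketch is purely illustrative; the signal names are hypothetical and are not part of the reCAPTCHA Enterprise API.

```python
from typing import Optional

def annotation_for_signal(signal: str) -> Optional[str]:
    """Map site-side evidence to the annotation label sent back to Google.

    Returns "FRAUDULENT", "LEGITIMATE", or None when the evidence is
    inconclusive and no annotation should be sent.
    """
    # Hypothetical signal names modeled on the examples in the list above.
    fraudulent_signals = {"chargeback", "spam_flagged_new_account"}
    legitimate_signals = {"login_support_case", "completed_purchase"}
    if signal in fraudulent_signals:
        return "FRAUDULENT"
    if signal in legitimate_signals:
        return "LEGITIMATE"
    return None

print(annotation_for_signal("chargeback"))          # FRAUDULENT
print(annotation_for_signal("completed_purchase"))  # LEGITIMATE
```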

Before you begin

Retrieve the assessment ID

To annotate an assessment, retrieve the unique assessment ID in one of the following ways:

  • For web and mobile integrations, you can retrieve the unique assessment ID from the assessment response.

    After you create an assessment, you receive a JSON response as shown in the following example.

    Retrieve the unique assessment ID from the name field in the JSON response.

    {
      "tokenProperties": {
        "valid": true,
        "hostname": "",
        "action": "homepage",
        "createTime": "2019-03-28T12:24:17.894Z"
      },
      "riskAnalysis": {
        "score": 0.1,
        "reasons": ["AUTOMATION"]
      },
      "event": {
        "token": "RESPONSE_TOKEN",
        "siteKey": "KEY_ID"
      },
      "name": "ASSESSMENT_ID"
    }
  • For WAF integrations, you can retrieve the unique assessment ID from the reCAPTCHA token. The unique assessment ID is the 16-character alphanumeric string that appears at the end of the reCAPTCHA token after :U=. For example, if the reCAPTCHA token is .................U=6ZZZZe73fZZZZZZ0, then the assessment ID is 6ZZZZe73fZZZZZZ0.
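Both retrieval paths can be sketched in a few lines. This is an illustrative sketch, not official client code; the response payload and token below are made-up examples in the shapes described above.

```python
import json

# 1) Web/mobile: the assessment ID is the "name" field of the JSON
#    response returned by the create-assessment call.
response_body = """{
  "riskAnalysis": {"score": 0.1, "reasons": ["AUTOMATION"]},
  "event": {"token": "RESPONSE_TOKEN", "siteKey": "KEY_ID"},
  "name": "ASSESSMENT_ID"
}"""
assessment_id = json.loads(response_body)["name"]
print(assessment_id)  # ASSESSMENT_ID

# 2) WAF: the assessment ID is the alphanumeric string that follows the
#    final ":U=" marker in the reCAPTCHA token.
def assessment_id_from_token(token: str) -> str:
    marker = ":U="
    if marker not in token:
        raise ValueError("token does not contain a ':U=' marker")
    return token.rsplit(marker, 1)[1]

print(assessment_id_from_token("opaque-token-prefix:U=6ZZZZe73fZZZZZZ0"))
# 6ZZZZe73fZZZZZZ0
```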

Annotate an assessment

  1. Determine the information and labels to add in the request JSON body depending on your use case.

    The following list describes the labels and values that you can use to annotate events:

    reasons
      Optional. A label to support your assessments.

      Provide real-time event details in the reasons label within a few seconds or minutes after the event, because they influence real-time detection.

      For the list of possible values, see reasons values.

      Example: To detect account takeovers, annotate whether the entered password was correct with the CORRECT_PASSWORD or INCORRECT_PASSWORD value. If you deployed your own MFA, you can add the following values: INITIATED_TWO_FACTOR, and PASSED_TWO_FACTOR or FAILED_TWO_FACTOR.

      Request example: "reasons": ["INCORRECT_PASSWORD"]

    annotation
      Optional. A label to indicate the legitimacy of assessments.

      Provide facts about login and registration events in the annotation label to validate or correct your risk assessments.

      Possible values: LEGITIMATE or FRAUDULENT.

      You can send this information at any time or as part of a batch job. However, we recommend sending it within a few seconds or minutes after the event, because it influences real-time detection.

      Request example: "annotation": "LEGITIMATE"

  2. Annotate an assessment using the projects.assessments.annotate method.

    Before using any of the request data, make the following replacements:

    • ASSESSMENT_ID: Value of the name field returned from the projects.assessments.create call
    • ANNOTATION_LABEL: The label to indicate whether the assessment is legitimate or fraudulent. Possible values are LEGITIMATE or FRAUDULENT.
    • ANNOTATION_REASON: Describes the context for the annotation that was chosen for this assessment. To learn about possible values, see reasons for annotating an assessment.

    HTTP method and URL:


    Request JSON body:

    "annotation": "ANNOTATION_LABEL"

    To send your request, choose one of these options:


    Using curl, save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json


    Using PowerShell, save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "" | Select-Object -Expand Content

    You should receive a successful status code (2xx) and an empty response.

What's next