Interpreting an assessment

This page explains how to interpret an assessment score to understand the level of risk that user interactions pose, and how to take appropriate actions for your site. reCAPTCHA Enterprise returns a score for each request based on interactions with your site, regardless of the site key type. After you receive the score, you must interpret it and act accordingly.

Before you begin

Create an assessment.

Interpreting the assessment

After your backend submits a user's reCAPTCHA response token to reCAPTCHA Enterprise, you receive an assessment as a JSON response.

To interpret an assessment, consider the following parameters:

  • action: a user interaction that triggered reCAPTCHA Enterprise verification.
  • expectedAction: the expected action from a user that you specified when creating the assessment.
  • score: level of risk the user interaction poses.
  • reasons: additional information about how reCAPTCHA Enterprise has interpreted the user interaction.
      "userAgent":"(USER-PROVIDED STRING)",


Verifying actions

The JSON response contains the action parameter that you specified for a user interaction when calling execute() and the expectedAction parameter that you specified when creating the assessment.

Verify that action matches the expectedAction. For example, a login action should be returned on your login page. If there is a mismatch, it indicates that an attacker is attempting to falsify actions. You can take action against the user interaction, such as requiring additional verification or blocking the interaction, to prevent fraudulent activity.
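A minimal check of this match might look as follows; the field names assume the assessment layout described earlier, and the helper name is hypothetical:

```python
def verify_action(assessment: dict) -> bool:
    """Return True only when the reported action matches the expected one."""
    action = assessment.get("tokenProperties", {}).get("action")
    expected = assessment.get("event", {}).get("expectedAction")
    # A mismatch suggests an attacker may be falsifying actions.
    return action is not None and action == expected

# A token executed on the login page, checked against a "login" expectedAction:
ok = verify_action({
    "tokenProperties": {"action": "login"},
    "event": {"expectedAction": "login"},
})
print(ok)  # True

# A token replayed from another page fails the check:
spoofed = verify_action({
    "tokenProperties": {"action": "homepage"},
    "event": {"expectedAction": "login"},
})
print(spoofed)  # False
```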

Interpreting scores

The reCAPTCHA Enterprise scoring system expands on prior versions of reCAPTCHA to allow greater granularity in responses. reCAPTCHA Enterprise has 11 score levels, with values ranging from 0.0 to 1.0. A score of 1.0 indicates that the interaction poses low risk and is very likely legitimate, whereas 0.0 indicates that the interaction poses high risk and might be fraudulent. Of the 11 levels, only the following four score levels are available by default: 0.1, 0.3, 0.7, and 0.9.

reCAPTCHA Enterprise learns by monitoring real traffic on your site. Therefore, scores in a staging environment and within 7 days of implementation might differ from the long-term production scores.

Because score-based site keys do not interrupt the user flow, you can first run reCAPTCHA Enterprise without taking action and then decide on thresholds by looking at the traffic.

Based on the score, you can take an appropriate action in the context of your site. To protect your site better, we recommend that you take the action in the background instead of blocking traffic.

The following table lists some of the actions you might take:

| Use case | Action |
| --- | --- |
| homepage | See a cohesive view of your traffic on the admin console while filtering scrapers. |
| login | With low scores, require MFA or email verification to prevent credential stuffing attacks. |
| social | Limit unanswered friend requests from abusive users and send risky comments to moderation. |
| e-commerce | Put your real sales ahead of bots and identify risky transactions. |
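A score-to-action policy can be sketched as a simple threshold function. The thresholds below are illustrative only; as noted above, you should first run reCAPTCHA Enterprise without taking action and tune thresholds against your own traffic:

```python
# Illustrative thresholds -- tune them by observing real traffic
# before enforcing any action.
ALLOW_THRESHOLD = 0.7
CHALLENGE_THRESHOLD = 0.3

def decide(score: float) -> str:
    """Map a risk score (0.0-1.0) to a site-specific action (sketch)."""
    if score >= ALLOW_THRESHOLD:
        return "allow"      # low risk, very likely legitimate
    if score >= CHALLENGE_THRESHOLD:
        return "challenge"  # e.g. require MFA or email verification
    return "block"          # high risk, might be fraudulent

print(decide(0.9))  # allow
print(decide(0.3))  # challenge
print(decide(0.1))  # block
```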

Understanding reason codes

Some scores might be returned with reason codes that provide additional information about how reCAPTCHA Enterprise interpreted the interactions.

The following table lists the reason codes and their descriptions:

| Reason code | Description |
| --- | --- |
| AUTOMATION | The interaction matches the behavior of an automated agent. |
| UNEXPECTED_ENVIRONMENT | The event originated from an illegitimate environment. |
| TOO_MUCH_TRAFFIC | Traffic volume from the event source is higher than normal. |
| UNEXPECTED_USAGE_PATTERNS | The interaction with your site was significantly different from expected patterns. |
| LOW_CONFIDENCE_SCORE | Too little traffic was received from this site to generate quality risk analysis. |
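On the backend, the reasons list can inform follow-up handling. The codes below come from the table above; the suggested handling for each code is purely illustrative:

```python
# Hypothetical mapping from reason codes to follow-up actions.
REASON_ACTIONS = {
    "AUTOMATION": "flag as likely bot traffic",
    "UNEXPECTED_ENVIRONMENT": "flag the client environment for review",
    "TOO_MUCH_TRAFFIC": "apply rate limiting to the event source",
    "UNEXPECTED_USAGE_PATTERNS": "queue the interaction for manual review",
    "LOW_CONFIDENCE_SCORE": "rely on other signals until traffic grows",
}

def describe_reasons(reasons: list[str]) -> list[str]:
    """Translate reason codes from an assessment into follow-up notes."""
    return [REASON_ACTIONS.get(r, f"unknown reason code: {r}") for r in reasons]

print(describe_reasons(["AUTOMATION", "TOO_MUCH_TRAFFIC"]))
```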

What's next

  • To tune your site-specific model, you can send the assessment IDs back to Google to confirm true positives and true negatives, or correct errors. For details, see Annotating an assessment.