Detecting and preventing account-related fraudulent activities

This document shows you how to use reCAPTCHA Enterprise account defender to detect and prevent account-related fraudulent activities.

reCAPTCHA Enterprise helps you protect critical actions, such as login and checkout. However, there are many subtle forms of account abuse that can be detected by observing a specific user's behavior on the site over a period of time. reCAPTCHA Enterprise account defender helps identify these kinds of subtle abuse by creating a site-specific model for your website to detect a trend of suspicious behavior or a change in activity. By using the site-specific model, reCAPTCHA Enterprise account defender helps you detect the following:

  • Suspicious activities
  • Accounts with similar behaviors
  • Requests coming from devices that were marked as trusted for specific users

Based on the analysis of reCAPTCHA Enterprise account defender and the site-specific model, you can do the following:

  • Restrict or disable fraudulent accounts.
  • Prevent account takeover attempts.
  • Mitigate successful account takeovers.
  • Grant access only to requests coming from legitimate user accounts.
  • Reduce friction for users logging in from one of their trusted devices.

Before you begin

Choose the best method for setting up reCAPTCHA Enterprise in your environment and complete the setup.

Enable reCAPTCHA Enterprise account defender

  1. In the Cloud console, go to the reCAPTCHA Enterprise page.

    Go to reCAPTCHA Enterprise

  2. Verify that the name of your project appears in the resource selector at the top of the page.

    If you don't see the name of your project, click the resource selector, then select your project.

  3. Click Settings.

  4. In the Account defender pane, click Enable.

  5. In the Configure account defender dialog, click Enable.

It might take a few hours for reCAPTCHA Enterprise account defender enablement to propagate to our systems. After the enablement propagates, you should start receiving account defender-related responses as part of your assessments.

Understand the workflow

To use reCAPTCHA Enterprise account defender, perform the following steps:

  1. Install score-based site keys on clients (web pages or mobile applications).
  2. Create assessments using hashedAccountId.
  3. Interpret the assessment details.
  4. Annotate assessments with account-related metadata.

After completing these steps, you can optionally identify accounts with similar behaviors.

Install score-based site keys on clients

To use reCAPTCHA Enterprise account defender, install score-based site keys on the login and registration pages by following the instructions for your platform.

For accurate results, we recommend installing score-based site keys not only on the pages with critical actions such as login and checkout, but also on the pages that users visit immediately before and after them. For example, consider a workflow in which a customer visits the home page, the login page, and then a welcome page. To protect the login page, install a score-based site key on it; for more accurate results, also install score-based site keys on the home page and the welcome page.

When you install score-based site keys with unique actions on various pages of your website or mobile application, reCAPTCHA Enterprise account defender creates a custom site-specific model based on the behaviors of legitimate and fraudulent accounts. To learn how to install score-based site keys with unique actions, see installing score-based site keys for user actions.

Create assessments using hashedAccountId

Create assessments for critical actions. Critical actions include successful and failed logins, registrations, and other actions taken by logged-in users.

To create an assessment using hashedAccountId, do the following:

  1. Generate a unique, stable hashed user identifier using the SHA256-HMAC method for a user account on your website. You can generate this identifier from a user ID, username, or email address.

  2. Provide the hashed user identifier in the hashedAccountId parameter of the projects.assessments.create method. hashedAccountId lets reCAPTCHA Enterprise account defender detect fraudulent accounts and hijacked accounts.

    Before using any of the request data, make the following replacements:

    • PROJECT_ID: your Google Cloud project ID
    • TOKEN: token returned from the grecaptcha.enterprise.execute() call
    • KEY: reCAPTCHA key associated with the site/app
    • HASHED_ACCOUNT_ID: a stable hashed user identifier generated using the SHA256-HMAC method for a user account on your website

    HTTP method and URL:

    POST https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments

    Request JSON body:

    {
      "event": {
        "token": "TOKEN",
        "siteKey": "KEY",
        "hashedAccountId": "HASHED_ACCOUNT_ID"
      }
    }
    

    To send your request, choose one of these options:

    curl

    Save the request body in a file called request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments"

    PowerShell

    Save the request body in a file called request.json, and execute the following command:

    $cred = gcloud auth application-default print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    {
      "tokenProperties": {
        "valid": true,
        "hostname": "www.google.com",
        "action": "login",
        "createTime": "2019-03-28T12:24:17.894Z"
      },
      "riskAnalysis": {
        "score": 0.6
      },
      "event": {
        "token": "TOKEN",
        "siteKey": "KEY",
        "expectedAction": "USER_ACTION"
      },
      "name": "projects/PROJECT_ID/assessments/b6ac310000000000",
      "accountDefenderAssessment": {
        "labels": ["SUSPICIOUS_LOGIN_ACTIVITY"]
      }
    }
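As a sketch of the identifier generation in step 1: the following assumes a server-side secret key (the key and user ID below are placeholders, not values from this document), and encodes the digest as base64, which is how the REST API represents bytes fields.

```python
import base64
import hashlib
import hmac

def hashed_account_id(user_id: str, secret_key: bytes) -> str:
    """Derive a stable hashed user identifier with HMAC-SHA256.

    The same (user_id, secret_key) pair always yields the same value,
    so the identifier stays stable across requests for the same account.
    """
    digest = hmac.new(secret_key, user_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Placeholder key; in practice, keep the secret outside source control.
SECRET_KEY = b"replace-with-your-own-secret"
print(hashed_account_id("user@example.com", SECRET_KEY))
```

Because the identifier must be stable, derive it from an attribute that does not change for the account (such as an internal user ID) rather than from data the user can edit.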
    
    

Interpret the assessment details

After creating an assessment, you receive a response like the following JSON example. reCAPTCHA Enterprise account defender returns accountDefenderAssessment as part of the assessment response. The value of accountDefenderAssessment helps you assess whether the user activity is legitimate or fraudulent.

{
  "tokenProperties": {
    "valid": true,
    "hostname": "www.google.com",
    "action": "login",
    "createTime": "2019-03-28T12:24:17.894Z"
  },
  "riskAnalysis": {
    "score": 0.6
  },
  "event": {
    "token": "TOKEN",
    "siteKey": "KEY",
    "expectedAction": "USER_ACTION"
  },
  "name": "projects/PROJECT_ID/assessments/b6ac310000000000",
  "accountDefenderAssessment": {
    "labels": ["SUSPICIOUS_LOGIN_ACTIVITY"]
  }
}

The labels field in accountDefenderAssessment can contain any of the following values:

  • PROFILE_MATCH: indicates that the attributes of the user match attributes seen earlier for this particular user. This value suggests that the user is on a trusted device that was previously used to access your website.

    PROFILE_MATCH is returned only in the following scenarios:

    • You use multi-factor authentication (MFA) or two-factor authentication (2FA), and reCAPTCHA Enterprise account defender marks user profiles as trusted after users pass the MFA or 2FA challenge.

    • You annotate assessments as LEGITIMATE and reCAPTCHA Enterprise account defender marks the corresponding user profile as trusted.

  • SUSPICIOUS_LOGIN_ACTIVITY: indicates that the request matched a profile that previously had a suspicious login activity. This value might indicate that there have been credential stuffing attacks similar to this request.

  • SUSPICIOUS_ACCOUNT_CREATION: indicates that the request matched a profile that previously had suspicious account creation behavior. This value might indicate that this account is fake or fraudulent.
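The label strings above come directly from the assessment response; one way to act on them is a small routing helper. The decision names and the precedence order below are illustrative assumptions, not part of the API:

```python
def route_login(labels: list[str]) -> str:
    """Map account defender labels to an illustrative next step.

    Checking suspicion before trust is a design choice for safety,
    not an API rule.
    """
    if "SUSPICIOUS_LOGIN_ACTIVITY" in labels:
        return "require_mfa"      # possible credential stuffing
    if "SUSPICIOUS_ACCOUNT_CREATION" in labels:
        return "manual_review"    # possibly a fake or fraudulent account
    if "PROFILE_MATCH" in labels:
        return "reduce_friction"  # trusted device seen before for this user
    return "default_flow"
```

For example, `route_login(["SUSPICIOUS_LOGIN_ACTIVITY"])` returns `"require_mfa"`, so the login flow could demand a second factor before granting access.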

Annotate assessments with account-related metadata

Annotate your assessments to enable reCAPTCHA Enterprise account defender to perform the following actions:

  • Analyze all interactions with a specific identifier, and return accurate scores and reason codes.
  • Create a site-specific model of attackers on your site.

When annotating an assessment, confirm true positives and true negatives by adding labels to known events. Ideally, annotate an event at the time you determine whether the request is legitimate or fraudulent, and include the reason for that determination. However, you can also annotate any previous assessment without providing a reason.

When you annotate an assessment, you send a request to the projects.assessments.annotate method with the assessment ID. In the body of that request, you include labels providing additional information about an event described in the assessment.

To annotate an assessment, do the following:

  1. Determine the labels to add in the request JSON body depending on your use case.

    The following descriptions explain the labels that you can use to annotate assessments, with a sample use case for each:

    annotation

    A label to indicate the legitimacy of assessments. Possible values are LEGITIMATE or FRAUDULENT.

    Use this label when you want to indicate the legitimacy of critical actions like login.

    To indicate that a login event was legitimate, use the following request JSON body:

    {
    "annotation": "LEGITIMATE"
    }
    
    reasons

    A label to support your assessments. For the list of possible values, see reasons values.

    Use this label to provide reasons that support your annotations of critical actions. We recommend using INCORRECT_PASSWORD for requests that are made for accounts that do not exist.

    • To differentiate a failed login attempt, use the following request JSON body:

      {
      "reasons": ["INCORRECT_PASSWORD"]
      }
      
    • To indicate that a user successfully passed a two-factor challenge, use the following request JSON body:

      {
      "annotation": "LEGITIMATE",
      "reasons": ["PASSED_TWO_FACTOR"]
      }
      
    hashedAccountId

    A label to associate a hashed account ID with an event. A hashed account ID is a stable hashed user identifier generated using the SHA256-HMAC method for a user account on your website.

    If you created an assessment without a hashed account ID, use this label to provide the hashed account ID of an event.

    To indicate that an event is associated with a given hashed account ID, use the following request JSON body:

    {
    "hashedAccountId": "HASHED_ACCOUNT_ID"
    }
    

  2. Create an annotate request with the appropriate labels.

    Before using any of the request data, make the following replacements:

    • ASSESSMENT_ID: Value of the name field returned from the projects.assessments.create call.
    • ANNOTATION: Optional. A label to indicate whether the assessment is legitimate or fraudulent.
    • REASONS: Optional. Reasons that support your annotation. For the list of possible values, see reasons values.
    • HASHED_ACCOUNT_ID: Optional. A stable hashed user identifier generated using the SHA256-HMAC method for a user account on your website.

    For more information, see labels for annotations.

    HTTP method and URL:

    POST https://recaptchaenterprise.googleapis.com/v1/ASSESSMENT_ID:annotate

    Request JSON body:

    {
      "annotation": ANNOTATION,
      "reasons": REASONS,
      "hashedAccountId": HASHED_ACCOUNT_ID
    }
    

    To send your request, choose one of these options:

    curl

    Save the request body in a file called request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://recaptchaenterprise.googleapis.com/v1/ASSESSMENT_ID:annotate"

    PowerShell

    Save the request body in a file called request.json, and execute the following command:

    $cred = gcloud auth application-default print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://recaptchaenterprise.googleapis.com/v1/ASSESSMENT_ID:annotate" | Select-Object -Expand Content

    You should receive a successful status code (2xx) and an empty response.

What's next