Detecting and preventing account-related fraudulent activities

This document shows you how to use reCAPTCHA Enterprise account defender to detect and prevent account-related fraudulent activities.

reCAPTCHA Enterprise helps you protect critical actions, such as login and checkout. However, there are many subtle forms of account abuse that can be detected by observing a specific user's behavior on the site over a period of time. reCAPTCHA Enterprise account defender helps identify these kinds of subtle abuse by creating a site-specific model for your website to detect a trend of suspicious behavior or a change in activity. By using the site-specific model, reCAPTCHA Enterprise account defender helps you detect the following:

  • Suspicious activities
  • Accounts with similar behaviors
  • Requests coming from devices that were marked as trusted for specific users

Based on the analysis of reCAPTCHA Enterprise account defender and the site-specific model, you can do the following:

  • Restrict or disable fraudulent accounts.
  • Prevent account takeover attempts.
  • Mitigate successful account takeovers.
  • Grant access only to requests coming from legitimate user accounts.
  • Reduce friction for users logging in from one of their trusted devices.

Before you begin

Choose the best method for setting up reCAPTCHA Enterprise in your environment and complete the setup.

Enable reCAPTCHA Enterprise account defender

  1. In the Google Cloud console, go to the reCAPTCHA Enterprise page.

    Go to reCAPTCHA Enterprise

  2. Verify that the name of your project appears in the resource selector at the top of the page.

    If you don't see the name of your project, click the resource selector, then select your project.

  3. Click Settings.

  4. In the Account defender pane, click Enable.

  5. In the Configure account defender dialog, click Enable.

It might take a few hours for reCAPTCHA Enterprise account defender enablement to propagate to our systems. After the enablement propagates, you start receiving account defender responses as part of your assessments.

Understand the workflow

To use reCAPTCHA Enterprise account defender, perform the following steps:

  1. Install score-based site keys on clients (web pages or mobile applications).
  2. Create assessments using hashedAccountId.
  3. Interpret the assessment details.
  4. Annotate assessments with account-related metadata.

After completing these steps, you can optionally identify accounts with similar behaviors.

Install score-based site keys on clients

To use reCAPTCHA Enterprise account defender, install score-based site keys on the login and registration pages by following the platform-specific instructions for your environment.

For accurate results, we recommend installing score-based site keys not only on pages with critical actions like login and checkout, but also on the pages that appear before and after them. For example, consider a customer workflow that includes visiting the home page, login page, and welcome page on a website. In this workflow, you protect the login page by installing a score-based site key on it, and you install score-based site keys on the home page and welcome page as well.

When you install score-based site keys with unique actions on various pages of your website or mobile application, reCAPTCHA Enterprise account defender creates a custom site-specific model based on the behaviors of legitimate and fraudulent accounts. To learn how to install score-based site keys with unique actions, see installing score-based site keys for user actions.

Create assessments using hashedAccountId

Create assessments for critical actions. Critical actions include successful and failed logins, registrations, and other actions that logged-in users perform.

To create an assessment using hashedAccountId, do the following:

  1. Generate a unique, stable hashed user identifier using the SHA256-HMAC method for a user account on your website. You can generate this identifier from a user ID, username, or email address.

  2. Pass the hashed user identifier in the hashedAccountId parameter of the projects.assessments.create method. hashedAccountId lets reCAPTCHA Enterprise account defender detect fraudulent accounts and hijacked accounts.

    Before using any of the request data, make the following replacements:

    • PROJECT_ID: your Google Cloud project ID
    • TOKEN: token returned from the grecaptcha.enterprise.execute() call
    • KEY: reCAPTCHA key associated with the site/app
    • HASHED_ACCOUNT_ID: a stable hashed user identifier generated using the SHA256-HMAC method for a user account on your website

    HTTP method and URL:

    POST https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments

    Request JSON body:

    {
      "event": {
        "token": "TOKEN",
        "siteKey": "KEY",
        "hashedAccountId": "HASHED_ACCOUNT_ID"
      }
    }
    

    To send your request, choose one of these options:

    curl

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments"

    PowerShell

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    {
      "tokenProperties": {
        "valid": true,
        "hostname": "www.google.com",
        "action": "login",
        "createTime": "2019-03-28T12:24:17.894Z"
      },
      "riskAnalysis": {
        "score": 0.6
      },
      "event": {
        "token": "TOKEN",
        "siteKey": "KEY",
        "expectedAction": "USER_ACTION"
      },
      "name": "projects/PROJECT_NUMBER/assessments/b6ac310000000000",
      "accountDefenderAssessment": {
        "labels": ["SUSPICIOUS_LOGIN_ACTIVITY"]
      }
    }
    
    

Code sample

Java

To authenticate to reCAPTCHA Enterprise, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.recaptchaenterprise.v1.RecaptchaEnterpriseServiceClient;
import com.google.protobuf.ByteString;
import com.google.recaptchaenterprise.v1.AccountDefenderAssessment.AccountDefenderLabel;
import com.google.recaptchaenterprise.v1.Assessment;
import com.google.recaptchaenterprise.v1.CreateAssessmentRequest;
import com.google.recaptchaenterprise.v1.Event;
import com.google.recaptchaenterprise.v1.ProjectName;
import com.google.recaptchaenterprise.v1.RiskAnalysis.ClassificationReason;
import com.google.recaptchaenterprise.v1.TokenProperties;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.UUID;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class AccountDefenderAssessment {

  public static void main(String[] args)
      throws IOException, NoSuchAlgorithmException, InvalidKeyException {
    // TODO(developer): Replace these variables before running the sample.
    // projectId: Google Cloud Project ID
    String projectId = "project-id";

    // recaptchaSiteKey: Site key obtained by registering a domain/app to use recaptcha
    // services.
    String recaptchaSiteKey = "recaptcha-site-key";

    // token: The token obtained from the client on passing the recaptchaSiteKey.
    // To get the token, integrate the recaptchaSiteKey with frontend. See,
    // https://cloud.google.com/recaptcha-enterprise/docs/instrument-web-pages#frontend_integration_score
    String token = "recaptcha-token";

    // recaptchaAction: The action name corresponding to the token.
    String recaptchaAction = "recaptcha-action";

    // Unique ID of the customer, such as email, customer ID, etc.
    String userIdentifier = "default" + UUID.randomUUID().toString().split("-")[0];

    // Change this to a secret not shared with Google.
    final String HMAC_KEY = "SOME_INTERNAL_UNSHARED_KEY";

    // Get instance of Mac object implementing HmacSHA256, and initialize it with the above
    // secret key.
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(HMAC_KEY.getBytes(StandardCharsets.UTF_8),
        "HmacSHA256"));
    byte[] hashBytes = mac.doFinal(userIdentifier.getBytes(StandardCharsets.UTF_8));
    ByteString hashedAccountId = ByteString.copyFrom(hashBytes);

    accountDefenderAssessment(projectId, recaptchaSiteKey, token, recaptchaAction, hashedAccountId);
  }

  /**
   * This assessment detects account takeovers. See
   * https://cloud.google.com/recaptcha-enterprise/docs/account-takeovers
   * The input is the hashed account ID, and the result indicates whether the action represents an
   * account takeover. You can optionally trigger multi-factor authentication based on the result.
   */
  public static void accountDefenderAssessment(
      String projectId,
      String recaptchaSiteKey,
      String token,
      String recaptchaAction,
      ByteString hashedAccountId)
      throws IOException {
    try (RecaptchaEnterpriseServiceClient client = RecaptchaEnterpriseServiceClient.create()) {

      // Set the properties of the event to be tracked.
      Event event =
          Event.newBuilder()
              .setSiteKey(recaptchaSiteKey)
              .setToken(token)
              // Set the hashed account id (of the user).
              // Recommended approach: HMAC SHA256 along with salt (or secret key).
              .setHashedAccountId(hashedAccountId)
              .build();

      // Build the assessment request.
      CreateAssessmentRequest createAssessmentRequest =
          CreateAssessmentRequest.newBuilder()
              .setParent(ProjectName.of(projectId).toString())
              .setAssessment(Assessment.newBuilder().setEvent(event).build())
              .build();

      Assessment response = client.createAssessment(createAssessmentRequest);

      // Check integrity of the response token.
      if (!checkTokenIntegrity(response.getTokenProperties(), recaptchaAction)) {
        return;
      }

      // Get the reason(s) and the reCAPTCHA risk score.
      // For more information on interpreting the assessment,
      // see: https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment
      for (ClassificationReason reason : response.getRiskAnalysis().getReasonsList()) {
        System.out.println(reason);
      }
      float recaptchaScore = response.getRiskAnalysis().getScore();
      System.out.println("The reCAPTCHA score is: " + recaptchaScore);
      String assessmentName = response.getName();
      System.out.println(
          "Assessment name: " + assessmentName.substring(assessmentName.lastIndexOf("/") + 1));

      // Get the Account Defender result.
      com.google.recaptchaenterprise.v1.AccountDefenderAssessment accountDefenderAssessment =
          response.getAccountDefenderAssessment();
      System.out.println(accountDefenderAssessment);

      // Get Account Defender label.
      List<AccountDefenderLabel> defenderResult =
          response.getAccountDefenderAssessment().getLabelsList();
      // Based on the result, you can choose the next steps.
      // If the 'defenderResult' field is empty, it indicates that Account Defender did not have
      // anything to add to the score.
      // Some result labels: ACCOUNT_DEFENDER_LABEL_UNSPECIFIED, PROFILE_MATCH,
      // SUSPICIOUS_LOGIN_ACTIVITY, SUSPICIOUS_ACCOUNT_CREATION, RELATED_ACCOUNTS_NUMBER_HIGH.
      // For more information on interpreting the assessment, see:
      // https://cloud.google.com/recaptcha-enterprise/docs/account-defender#interpret-assessment-details
      System.out.println("Account Defender Assessment Result: " + defenderResult);
    }
  }

  private static boolean checkTokenIntegrity(
      TokenProperties tokenProperties, String recaptchaAction) {
    // Check if the token is valid.
    if (!tokenProperties.getValid()) {
      System.out.println(
          "The Account Defender Assessment call failed because the token was: "
              + tokenProperties.getInvalidReason().name());
      return false;
    }

    // Check if the expected action was executed.
    if (!tokenProperties.getAction().equals(recaptchaAction)) {
      System.out.printf(
          "The action attribute in the reCAPTCHA tag '%s' does not match "
              + "the action '%s' you are expecting to score",
          tokenProperties.getAction(), recaptchaAction);
      return false;
    }
    return true;
  }
}

Interpret the assessment details

After creating an assessment, you receive a response like the following JSON example. reCAPTCHA Enterprise account defender returns accountDefenderAssessment as part of the assessment response. The value of accountDefenderAssessment helps you assess whether the user activity is legitimate or fraudulent.

{
  "tokenProperties": {
    "valid": true,
    "hostname": "www.google.com",
    "action": "login",
    "createTime": "2019-03-28T12:24:17.894Z"
  },
  "riskAnalysis": {
    "score": 0.6
  },
  "event": {
    "token": "TOKEN",
    "siteKey": "KEY",
    "expectedAction": "USER_ACTION"
  },
  "name": "projects/PROJECT_ID/assessments/b6ac310000000000",
  "accountDefenderAssessment": {
    "labels": ["SUSPICIOUS_LOGIN_ACTIVITY"]
  }
}

The accountDefenderAssessment field can have any of the following values:

  • PROFILE_MATCH: indicates that the attributes of the user match the attributes that have been seen earlier for this particular user. This value is an indicator that this user is on a trusted device that was used before to access your website.

    PROFILE_MATCH is returned only in the following scenarios:

    • You use multi-factor authentication (MFA) or two-factor authentication (2FA), and reCAPTCHA Enterprise account defender marks user profiles as trusted after users pass the MFA or 2FA challenge.

    • You annotate assessments as LEGITIMATE and reCAPTCHA Enterprise account defender marks the corresponding user profile as trusted.

  • SUSPICIOUS_LOGIN_ACTIVITY: indicates that the request matched a profile that previously showed suspicious login activity. This value might indicate that this request resembles credential stuffing attacks.

  • SUSPICIOUS_ACCOUNT_CREATION: indicates that the request matched a profile that previously showed suspicious account creation behavior. This value might indicate that this account is fake or fraudulent.
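Based on these labels, your backend can decide the next step for a request. The following sketch is only an illustration, not part of the reCAPTCHA Enterprise API; the method name and the returned action strings are invented for this example, while the label strings are the documented values:

```java
import java.util.List;

public class AccountDefenderLabelPolicy {

  /**
   * Maps the account defender labels from an assessment response to an
   * example next step for a login flow. The returned actions are
   * illustrative policy choices, not API values.
   */
  public static String nextStep(List<String> labels) {
    if (labels.contains("SUSPICIOUS_LOGIN_ACTIVITY")) {
      // Possible credential stuffing: add friction, such as an MFA challenge.
      return "REQUIRE_MFA";
    }
    if (labels.contains("SUSPICIOUS_ACCOUNT_CREATION")) {
      // Possibly a fake or fraudulent account: hold it for review.
      return "REVIEW_ACCOUNT";
    }
    if (labels.contains("PROFILE_MATCH")) {
      // The user is on a trusted device: you can reduce friction.
      return "ALLOW";
    }
    // No account defender label: fall back to the riskAnalysis.score.
    return "USE_SCORE";
  }
}
```

For example, a response labeled PROFILE_MATCH maps to "ALLOW", so the login flow could skip an additional challenge for that request.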

Annotate assessments with account-related metadata

Annotate your assessments to enable reCAPTCHA Enterprise account defender to perform the following actions:

  • Analyze all interactions with a specific identifier, and return accurate scores and reason codes.
  • Create a site-specific model of attackers on your site.

When annotating an assessment, confirm true positives and true negatives by adding labels to known events. Ideally, annotate an event when you can determine whether the request was legitimate or fraudulent, and include the reason for that determination. However, you can also annotate any previous assessment without providing a reason.

When you annotate an assessment, you send a request to the projects.assessments.annotate method with the assessment ID. In the body of that request, you include labels providing additional information about an event described in the assessment.

To annotate an assessment, do the following:

  1. Determine the labels to add in the request JSON body depending on your use case.

    You can use the following labels to annotate assessments:

    annotation

    A label to indicate the legitimacy of assessments. Possible values are LEGITIMATE or FRAUDULENT.

    Use this label when you want to indicate the legitimacy of critical actions like login.

    To indicate that a login event was legitimate, use the following request JSON body:

    {
    "annotation": "LEGITIMATE"
    }
    
    reasons

    A label to support your assessments. For the list of possible values, see reasons values.

    Use this label to provide reasons that support your annotations of critical actions. We recommend also using INCORRECT_PASSWORD for login requests made against accounts that do not exist.

    • To differentiate a failed login attempt, use the following request JSON body:

      {
      "reasons": ["INCORRECT_PASSWORD"]
      }
      
    • To indicate that a user successfully passed a two-factor challenge, use the following request JSON body:

      {
      "annotation": "LEGITIMATE",
      "reasons": ["PASSED_TWO_FACTOR"]
      }
      
    hashedAccountId

    A label to associate a hashed account ID with an event. A hashed account ID is a stable hashed user identifier generated using the SHA256-HMAC method for a user account on your website.

    If you created an assessment without a hashed account ID, use this label to provide the hashed account ID of an event.

    To indicate that an event is associated with a given hashed account ID, use the following request JSON body:

    {
    "hashedAccountId": "HASHED_ACCOUNT_ID"
    }
    

  2. Create an annotate request with the appropriate labels.

    Before using any of the request data, make the following replacements:

    • ASSESSMENT_ID: Value of the name field returned from the projects.assessments.create call.
    • ANNOTATION: Optional. A label to indicate whether the assessment is legitimate or fraudulent.
    • REASONS: Optional. Reasons that support your annotation. For the list of possible values, see reasons values.
    • HASHED_ACCOUNT_ID: Optional. A stable hashed user identifier generated using the SHA256-HMAC method for a user account on your website.

    For more information, see labels for annotations.

    HTTP method and URL:

    POST https://recaptchaenterprise.googleapis.com/v1/ASSESSMENT_ID:annotate

    Request JSON body:

    {
      "annotation": ANNOTATION,
      "reasons": REASONS,
      "hashedAccountId": HASHED_ACCOUNT_ID
    }
    

    To send your request, choose one of these options:

    curl

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://recaptchaenterprise.googleapis.com/v1/ASSESSMENT_ID:annotate"

    PowerShell

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://recaptchaenterprise.googleapis.com/v1/ASSESSMENT_ID:annotate" | Select-Object -Expand Content

    You should receive a successful status code (2xx) and an empty response.

Code sample

Java

To authenticate to reCAPTCHA Enterprise, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.recaptchaenterprise.v1.RecaptchaEnterpriseServiceClient;
import com.google.protobuf.ByteString;
import com.google.recaptchaenterprise.v1.AnnotateAssessmentRequest;
import com.google.recaptchaenterprise.v1.AnnotateAssessmentRequest.Annotation;
import com.google.recaptchaenterprise.v1.AnnotateAssessmentRequest.Reason;
import com.google.recaptchaenterprise.v1.AnnotateAssessmentResponse;
import com.google.recaptchaenterprise.v1.AssessmentName;
import java.io.IOException;
import java.security.NoSuchAlgorithmException;

public class AnnotateAccountDefenderAssessment {

  public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
    // TODO(developer): Replace these variables before running the sample.
    // projectID: Google Cloud project ID.
    String projectID = "project-id";

    // assessmentId: Value of the 'name' field returned from the CreateAssessment call.
    String assessmentId = "account-defender-assessment-id";

    // hashedAccountId: Set the hashedAccountId corresponding to the assessment id.
    ByteString hashedAccountId = ByteString.copyFrom(new byte[] {});

    annotateAssessment(projectID, assessmentId, hashedAccountId);
  }

  /**
   * Prerequisite: Create an assessment before annotating. Annotate an assessment to provide
   * feedback on the correctness of the reCAPTCHA prediction.
   */
  public static void annotateAssessment(
      String projectID, String assessmentId, ByteString hashedAccountId) throws IOException {

    try (RecaptchaEnterpriseServiceClient client = RecaptchaEnterpriseServiceClient.create()) {
      // Build the annotation request.
      // For more info on when/how to annotate, see:
      // https://cloud.google.com/recaptcha-enterprise/docs/annotate-assessment#when_to_annotate
      AnnotateAssessmentRequest annotateAssessmentRequest =
          AnnotateAssessmentRequest.newBuilder()
              .setName(AssessmentName.of(projectID, assessmentId).toString())
              .setAnnotation(Annotation.LEGITIMATE)
              .addReasons(Reason.PASSED_TWO_FACTOR)
              .setHashedAccountId(hashedAccountId)
              .build();

      // Empty response is sent back.
      AnnotateAssessmentResponse response = client.annotateAssessment(annotateAssessmentRequest);
      System.out.println("Annotated assessment sent successfully: " + response);
    }
  }
}
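To tie the annotation labels to concrete login outcomes, the following sketch chooses an annotation and reasons for a few common cases. It is only an illustration: the outcome names and the AnnotationChoice record are invented for this example, while LEGITIMATE, FRAUDULENT, PASSED_TWO_FACTOR, and INCORRECT_PASSWORD are the documented label values:

```java
import java.util.List;

public class AnnotationChooser {

  /** Holds the annotation and reasons to send with an annotate request. */
  public record AnnotationChoice(String annotation, List<String> reasons) {}

  /**
   * Example mapping from login outcomes (invented names) to the labels you
   * would put in the annotate request body.
   */
  public static AnnotationChoice forOutcome(String outcome) {
    switch (outcome) {
      case "LOGIN_PASSED_2FA":
        // The user proved their identity with a second factor.
        return new AnnotationChoice("LEGITIMATE", List.of("PASSED_TWO_FACTOR"));
      case "WRONG_PASSWORD":
      case "UNKNOWN_ACCOUNT":
        // Also use INCORRECT_PASSWORD for accounts that do not exist.
        return new AnnotationChoice(null, List.of("INCORRECT_PASSWORD"));
      case "CONFIRMED_TAKEOVER":
        // A confirmed account takeover: annotate the event as fraudulent.
        return new AnnotationChoice("FRAUDULENT", List.of());
      default:
        // Nothing conclusive to report; skip the annotation.
        return null;
    }
  }
}
```

A null annotation means the request body would carry only the reasons label, as in the failed-login example above.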

What's next