Creating and scheduling Cloud DLP inspection jobs

This topic describes in detail how to create a Cloud Data Loss Prevention (DLP) inspection job, and how to schedule recurring inspection jobs by creating a job trigger. For a quick walkthrough of how to create a new job trigger using the Cloud DLP UI, see Quickstart: Creating a Cloud DLP job trigger.

About inspection jobs and job triggers

When Cloud DLP performs an inspection scan to identify sensitive data, each scan runs as a job. Cloud DLP creates and runs a job resource whenever you tell it to inspect your Google Cloud storage repositories, including Cloud Storage buckets, BigQuery tables, and Datastore kinds.

You schedule Cloud DLP inspection scan jobs by creating job triggers. A job trigger automates the creation of DLP jobs on a periodic basis, and can also be run on demand.

To learn more about jobs and job triggers in Cloud DLP, see the Jobs and job triggers conceptual page.

Create a new inspection job

To create a new Cloud DLP inspection job:

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. From the Create menu, choose Job or job trigger.

    Alternatively, click the following button:

    Create new job

The Create job page contains the following sections:

Choose input data

Name

Enter a name for the job. You can use letters, numbers, and hyphens. Naming your job is optional. If you don't enter a name, Cloud DLP will give the job a unique numeric identifier.

Location

From the Storage type menu, choose the kind of repository that stores the data you want to scan:

  • Cloud Storage: Either enter the URL of the bucket you want to scan, or choose Include/exclude from the Location type menu, and then click Browse to navigate to the bucket or subfolder you want to scan. Select the Scan folder recursively checkbox to scan the specified directory and all contained directories. Leave it unselected to scan only the specified directory and no deeper.
  • BigQuery: Enter the identifiers for the project, dataset, and table that you want to scan.
  • Datastore: Enter the identifiers for the project, namespace (optional), and kind that you want to scan.

Sampling

Sampling is an optional way to save resources if you have a very large amount of data.

Under Sampling, you can choose whether to scan all the selected data or to sample the data by scanning a certain percentage. Sampling works differently depending on the type of storage repository you're scanning:

  • For BigQuery, you can sample a subset of the total selected rows, corresponding to the percentage of rows you specify to include in the scan.
  • For Cloud Storage, if any file exceeds the size specified in the Max byte size to scan per file, Cloud DLP scans it up to that maximum file size and then moves on to the next file.

To turn on sampling, choose one of the following options from the first menu:

  • Start sampling from top: Cloud DLP starts the partial scan at the beginning of the data. For BigQuery, this starts the scan at the first row. For Cloud Storage, this starts the scan at the beginning of each file, and stops scanning once Cloud DLP has scanned up to any specified maximum file size (see above).
  • Start sampling from random start: Cloud DLP starts the partial scan at a random location within the data. For BigQuery, this starts the scan at a random row. For Cloud Storage, this setting only applies to files that exceed any specified maximum size. Cloud DLP scans files under the maximum file size in their entirety, and scans files above the maximum file size up to the maximum.

To perform a partial scan, you must also choose what percentage of the data you want to scan. Use the slider to set the percentage.
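
If you use the DLP API directly, these sampling settings correspond to fields on the job's storageConfig: rowsLimitPercent and sampleMethod for BigQuery, and bytesLimitPerFile, filesLimitPercent, and sampleMethod for Cloud Storage. The following fragment is a minimal sketch of a Cloud Storage sampling configuration; the bucket placeholder and the limit values shown are illustrative assumptions, not defaults:

{
  "storageConfig": {
    "cloudStorageOptions": {
      "fileSet": {
        "url": "gs://[BUCKET-NAME]/*"
      },
      "bytesLimitPerFile": "1073741824",
      "filesLimitPercent": 10,
      "sampleMethod": "RANDOM_START"
    }
  }
}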

Advanced configuration

When you create a job for a scan of Cloud Storage buckets or BigQuery tables, you can narrow your search by specifying an advanced configuration. Specifically, you can configure:

  • Files (Cloud Storage only): The file types to scan for, which include text, binary, and image files.
  • Identifying fields (BigQuery only): Unique row identifiers within the table.

Files

For files stored in Cloud Storage, you can specify the types to include in your scan under Files.

You can choose from binary, text, image, Microsoft Word, PDF, and Apache Avro files. An exhaustive list of file extensions that Cloud DLP can scan in Cloud Storage buckets is included on the API reference page for FileType. Be aware that choosing Binary causes Cloud DLP to scan files of types that are unrecognized.
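
If you use the DLP API directly, the file type checkboxes correspond to the fileTypes field of cloudStorageOptions. As a minimal sketch (the particular selection and the bucket placeholder are illustrative assumptions), a configuration that scans only text and image files looks like this:

{
  "storageConfig": {
    "cloudStorageOptions": {
      "fileSet": {
        "url": "gs://[BUCKET-NAME]/*"
      },
      "fileTypes": ["TEXT_FILE", "IMAGE"]
    }
  }
}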

Identifying fields

For tables in BigQuery, under Identifying fields you can direct Cloud DLP to scan only rows that have values in a specific field or fields.

To add a field, click Add identifying field. Enter the field name, using dot notation to specify nested fields, if necessary.

You can add as many fields as you want. To remove a field, click Delete item (the trashcan icon) next to the field you want to delete.
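
In API terms, this setting corresponds to the identifyingFields field of bigQueryOptions. The following fragment is a minimal sketch; the field name person.id is a hypothetical example of dot notation for a nested field:

{
  "storageConfig": {
    "bigQueryOptions": {
      "tableReference": {
        "projectId": "[PROJECT-ID]",
        "datasetId": "[DATASET-ID]",
        "tableId": "[TABLE-ID]"
      },
      "identifyingFields": [
        {
          "name": "person.id"
        }
      ]
    }
  }
}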

Configure detection

The Configure detection section is where you specify the types of sensitive data you want to scan for. Completing this section is optional. If you skip this section, Cloud DLP will scan your data for a set of common types of sensitive data using the Most common option, which corresponds to the ALL_BASIC infoType detector.

Template

You can optionally use a Cloud DLP template to reuse configuration information you've specified previously.

If you have already created a template that you want to use, click in the Template name field to see a list of existing inspection templates. Choose or type the name of the template you want to use.

For more information about creating templates, see Creating Cloud DLP inspection templates.

InfoTypes

InfoType detectors find sensitive data of a certain type. For example, the Cloud DLP US_SOCIAL_SECURITY_NUMBER built-in infoType detector finds US Social Security numbers. In addition to the built-in infoType detectors, you can create your own custom infoType detectors.

Under InfoTypes, choose the infoType detector that corresponds to a data type you want to scan for. You can also leave this field blank to scan for all default infoTypes. More information about each detector is provided in InfoType detector reference.

You can also add custom infoType detectors in the Custom infoTypes section, and customize both built-in and custom infoType detectors in the Inspection rulesets section.

Custom infoTypes

To add a custom infoType detector:

  1. Click Add custom infoType.
  2. Choose the type of custom infoType detector you want to create:
    • Words or phrases: Matches on one or more words or phrases that you enter into the field. Use this custom infoType when you have just a few words or phrases to search for. Give your custom infoType a name, and then, under List of words or phrases, type the word or phrase you want Cloud DLP to match on. To search on multiple words or phrases, press Enter after each one. For more information, see Creating a regular custom dictionary detector.
    • Dictionary path: Searches your content for items in a list of words and phrases. The list is stored in a text file in Cloud Storage. Use this custom infoType when you have anywhere from a few to several hundred thousand words or phrases to search for. This method is also useful if your list contains sensitive elements and you don't want to store them inside of a job or template. Give your custom infoType a name, and then, under Dictionary location, enter or browse to the Cloud Storage path where the dictionary file is stored. For more information, see Creating a regular custom dictionary detector.
    • Regex: Matches content based on a regular expression. Give your custom infoType a name, and then, in the Regex field, enter a regex pattern to match words and phrases. See the supported regex syntax.
    • Stored infoType: This option adds a stored custom dictionary detector, which is a kind of dictionary detector that is built from either a large text file stored in Cloud Storage or a single column of a BigQuery table. Use this kind of custom infoType when you have anywhere from several hundred thousand to tens of millions of words or phrases to search for. Be aware that this is the only option in this menu for which you must have already created the stored infoType to use it. Give your custom infoType a name (different from the name you gave the stored infoType), and then, in the Stored infoType field, enter the name of the stored infoType. For more information about creating stored custom dictionaries, see Creating a stored custom dictionary detector.

Click Add custom infoType again to add additional custom infoType detectors.
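
In the DLP API, custom infoType detectors are expressed in the customInfoTypes list of the job's inspectConfig. The following fragment is a minimal sketch showing one regex detector and one dictionary-path detector; the detector names, the regex pattern, and the dictionary path are illustrative assumptions:

{
  "inspectConfig": {
    "customInfoTypes": [
      {
        "infoType": {
          "name": "PROJECT_CODE"
        },
        "regex": {
          "pattern": "PROJ-[0-9]{4}"
        }
      },
      {
        "infoType": {
          "name": "RESTRICTED_TERMS"
        },
        "dictionary": {
          "cloudStoragePath": {
            "path": "gs://[BUCKET-NAME]/dictionary.txt"
          }
        }
      }
    ]
  }
}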

Inspection rulesets

Inspection rulesets allow you to customize both built-in and custom infoType detectors using context rules. The two types of inspection rules are hotword rules, which adjust the likelihood of findings based on words or phrases that appear near them, and exclusion rules, which remove findings that match certain criteria from your results.

To add a new ruleset, first specify one or more built-in or custom infoType detectors in the InfoTypes section. These are the infoType detectors that your rulesets will be modifying. Then, do the following:

  1. Click in the Choose infoTypes field. The infoType or infoTypes you specified previously appear below the field in a menu.
  2. Choose an infoType from the menu, and then click Add rule. A menu appears with the two options Hotword rule and Exclusion rule.

For hotword rules, choose Hotword rule. Then, do the following:

  1. In the Hotword field, enter a regular expression that Cloud DLP should look for.
  2. From the Hotword proximity menu, choose whether the hotword you entered is found before or after the chosen infoType.
  3. In Hotword distance from infoType, enter the approximate number of characters between the hotword and the chosen infoType.
  4. In Confidence level adjustment, choose whether to assign matches a fixed likelihood level, or to increase or decrease the default likelihood level by a certain amount.

For exclusion rules, choose Exclusion rule. Then, do the following:

  1. In the Exclude field, enter a regular expression (regex) that Cloud DLP should look for.
  2. From the Matching type menu, choose one of the following:
    • Full match: The finding must completely match the regex.
    • Partial match: A substring of the finding can match the regex.
    • Inverse match: The finding doesn't match the regex.

You can add additional hotword or exclusion rules and rulesets to further refine your scan results.
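
In the DLP API, these rules are expressed in the ruleSet field of the job's inspectConfig. The following fragment is a minimal sketch of a ruleset that attaches one hotword rule and one exclusion rule to the PERSON_NAME detector; the regex patterns, proximity window, and likelihood adjustment shown are illustrative assumptions:

{
  "inspectConfig": {
    "infoTypes": [
      {
        "name": "PERSON_NAME"
      }
    ],
    "ruleSet": [
      {
        "infoTypes": [
          {
            "name": "PERSON_NAME"
          }
        ],
        "rules": [
          {
            "hotwordRule": {
              "hotwordRegex": {
                "pattern": "patient"
              },
              "proximity": {
                "windowBefore": 50
              },
              "likelihoodAdjustment": {
                "fixedLikelihood": "VERY_LIKELY"
              }
            }
          },
          {
            "exclusionRule": {
              "regex": {
                "pattern": "Jane Doe"
              },
              "matchingType": "MATCHING_TYPE_FULL_MATCH"
            }
          }
        ]
      }
    ]
  }
}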

Confidence threshold

Every time Cloud DLP detects a potential match for sensitive data, it assigns it a likelihood value on a scale from "Very unlikely" to "Very likely." When you set a likelihood value here, you are instructing Cloud DLP to only match on data that corresponds to that likelihood value or higher.

The default value of "Possible" is sufficient for most purposes. If you routinely get matches that are too broad, move the slider up. If you get too few matches, move the slider down.

When you're done, click Continue.

Add actions

In the Add actions step, you select the action or actions you want Cloud DLP to take after the job completes.

Your choices are:

  • Save to BigQuery: This option saves findings to a BigQuery table. The findings that are stored in BigQuery contain details about each finding's location and match likelihood. If you don't store findings, the completed job will only contain statistics about the number and infoTypes of the findings. If you don't specify a table ID, BigQuery assigns a default name to a new table. If you specify an existing table, findings are appended to it. Choose the Include quote checkbox to include contextual text in each match result.
  • Publish to Pub/Sub: This option sends a notification message to a Pub/Sub topic when the job completes. Click New Topic to specify one or more topic names to which you want to publish the notification.
  • Publish to Google Cloud Security Command Center: This option publishes a summary of your results to Security Command Center. For more information, see Sending Cloud DLP scan results to Security Command Center.
  • Publish to Data Catalog: Choose this option to send inspection results to Data Catalog, Google Cloud's metadata management service.
  • Publish to Stackdriver: Choose this option to send inspection results to Cloud Monitoring, Google Cloud's operations suite.
  • Notify by email: This option causes Cloud DLP to send an email to project owners and editors when the job completes.
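
In the DLP API, each selection corresponds to an entry in the actions list of the job configuration. The following fragment is a minimal sketch combining several of the actions above; the project, dataset, and topic placeholders are illustrative:

{
  "actions": [
    {
      "saveFindings": {
        "outputConfig": {
          "table": {
            "projectId": "[PROJECT-ID]",
            "datasetId": "[DATASET-ID]"
          }
        }
      }
    },
    {
      "pubSub": {
        "topic": "projects/[PROJECT-ID]/topics/[TOPIC-NAME]"
      }
    },
    {
      "publishSummaryToCscc": {}
    },
    {
      "jobNotificationEmails": {}
    }
  ]
}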

When you're done selecting actions, click Continue.

Review

The Review section contains a JSON-formatted summary of the job settings you just specified.

Click Create to create the job and run it once. The job's information page appears, which contains status and other information. If the job is currently running, you can click the Cancel button to stop it. You can also delete the job by clicking Delete.

To return to the main Cloud DLP page, click the Back arrow in the Cloud Console.

Protocol

A job is represented in the DLP API by the DlpJob resource. You can create a new job by using the DlpJob resource's projects.dlpJobs.create method.

This sample JSON can be sent in a POST request to the Cloud DLP REST endpoint. This example JSON demonstrates how to create a job in Cloud DLP. The job is a BigQuery inspection scan limited to a specific time span.

To quickly try this out, you can use the API Explorer on the reference page for projects.dlpJobs.create. Keep in mind that a successful request, even one created in API Explorer, will create a job. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.
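
Assuming the global endpoint, the request in this example is sent to the following URL, where [PROJECT-ID] is your project identifier:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs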

JSON input:

{
  "inspectJob": {
    "storageConfig": {
      "bigQueryOptions": {
        "tableReference": {
          "projectId": "bigquery-public-data",
          "datasetId": "san_francisco_sfpd_incidents",
          "tableId": "sfpd_incidents"
        }
      },
      "timespanConfig": {
        "startTime": "2020-01-01T00:00:01Z",
        "endTime": "2020-01-31T23:59:59Z",
        "timestampField": {
          "name": "timestamp"
        }
      }
    },
    "inspectConfig": {
      "infoTypes": [
        {
          "name": "PERSON_NAME"
        },
        {
          "name": "STREET_ADDRESS"
        }
      ],
      "excludeInfoTypes": false,
      "includeQuote": true,
      "minLikelihood": "LIKELY"
    },
    "actions": [
      {
        "saveFindings": {
          "outputConfig": {
            "table": {
              "projectId": "[PROJECT-ID]",
              "datasetId": "[DATASET-ID]"
            }
          }
        }
      }
    ]
  }
}

JSON output:

The following output indicates that the job was successfully created.

{
  "name": "projects/[PROJECT-ID]/dlpJobs/[JOB-ID]",
  "type": "INSPECT_JOB",
  "state": "PENDING",
  "inspectDetails": {
    "requestedOptions": {
      "snapshotInspectTemplate": {},
      "jobConfig": {
        "storageConfig": {
          "bigQueryOptions": {
            "tableReference": {
              "projectId": "bigquery-public-data",
              "datasetId": "san_francisco_sfpd_incidents",
              "tableId": "sfpd_incidents"
            }
          },
          "timespanConfig": {
            "startTime": "2020-01-01T00:00:01Z",
            "endTime": "2020-01-31T23:59:59Z",
            "timestampField": {
              "name": "timestamp"
            }
          }
        },
        "inspectConfig": {
          "infoTypes": [
            {
              "name": "PERSON_NAME"
            },
            {
              "name": "STREET_ADDRESS"
            }
          ],
          "minLikelihood": "LIKELY",
          "limits": {},
          "includeQuote": true
        },
        "actions": [
          {
            "saveFindings": {
              "outputConfig": {
                "table": {
                  "projectId": "[PROJECT-ID]",
                  "datasetId": "[DATASET-ID]",
                  "tableId": "[TABLE-ID]"
                }
              }
            }
          }
        ]
      }
    },
    "result": {}
  },
  "createTime": "2020-07-10T07:26:33.643Z"
}

Java


import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.Action;
import com.google.privacy.dlp.v2.CloudStorageOptions;
import com.google.privacy.dlp.v2.CreateDlpJobRequest;
import com.google.privacy.dlp.v2.DlpJob;
import com.google.privacy.dlp.v2.InfoType;
import com.google.privacy.dlp.v2.InspectConfig;
import com.google.privacy.dlp.v2.InspectJobConfig;
import com.google.privacy.dlp.v2.Likelihood;
import com.google.privacy.dlp.v2.LocationName;
import com.google.privacy.dlp.v2.StorageConfig;
import com.google.privacy.dlp.v2.StorageConfig.TimespanConfig;
import java.io.IOException;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JobsCreate {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    String gcsPath = "gs://" + "your-bucket-name" + "/path/to/file.txt";
    createJobs(projectId, gcsPath);
  }

  // Creates a DLP Job
  public static void createJobs(String projectId, String gcsPath) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

      // Set autoPopulateTimespan to true to scan only new content
      boolean autoPopulateTimespan = true;
      TimespanConfig timespanConfig =
          TimespanConfig.newBuilder()
              .setEnableAutoPopulationOfTimespanConfig(autoPopulateTimespan)
              .build();

      // Specify the GCS file to be inspected.
      CloudStorageOptions cloudStorageOptions =
          CloudStorageOptions.newBuilder()
              .setFileSet(CloudStorageOptions.FileSet.newBuilder().setUrl(gcsPath))
              .build();
      StorageConfig storageConfig =
          StorageConfig.newBuilder()
              .setCloudStorageOptions(cloudStorageOptions)
              .setTimespanConfig(timespanConfig)
              .build();

      // Specify the type of info the inspection will look for.
      // See https://cloud.google.com/dlp/docs/infotypes-reference for complete list of info types
      List<InfoType> infoTypes =
          Stream.of("EMAIL_ADDRESS", "PERSON_NAME", "LOCATION", "PHONE_NUMBER")
              .map(it -> InfoType.newBuilder().setName(it).build())
              .collect(Collectors.toList());
      // The minimum likelihood required before returning a match:
      // See: https://cloud.google.com/dlp/docs/likelihood
      Likelihood minLikelihood = Likelihood.UNLIKELY;

      // The maximum number of findings to report (0 = server maximum)
      InspectConfig.FindingLimits findingLimits =
          InspectConfig.FindingLimits.newBuilder().setMaxFindingsPerItem(100).build();

      InspectConfig inspectConfig =
          InspectConfig.newBuilder()
              .addAllInfoTypes(infoTypes)
              .setIncludeQuote(true)
              .setMinLikelihood(minLikelihood)
              .setLimits(findingLimits)
              .build();

      // Specify the action that is triggered when the job completes.
      Action.PublishSummaryToCscc publishSummaryToCscc =
          Action.PublishSummaryToCscc.getDefaultInstance();
      Action action = Action.newBuilder().setPublishSummaryToCscc(publishSummaryToCscc).build();

      // Configure the inspection job we want the service to perform.
      InspectJobConfig inspectJobConfig =
          InspectJobConfig.newBuilder()
              .setInspectConfig(inspectConfig)
              .setStorageConfig(storageConfig)
              .addActions(action)
              .build();

      // Construct the job creation request to be sent by the client.
      CreateDlpJobRequest createDlpJobRequest =
          CreateDlpJobRequest.newBuilder()
              .setParent(LocationName.of(projectId, "global").toString())
              .setInspectJob(inspectJobConfig)
              .build();

      // Send the job creation request and process the response.
      DlpJob createdDlpJob = dlpServiceClient.createDlpJob(createDlpJobRequest);
      System.out.println("Job created successfully: " + createdDlpJob.getName());
    }
  }
}

Create a new job trigger

To create a new Cloud DLP job trigger:

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. From the Create menu, choose Job or job trigger.

    Alternatively, click the following button:

    Create new job trigger

The Create job trigger page contains the following sections:

Choose input data

Name

Enter a name for the job trigger. You can use letters, numbers, and hyphens. Naming your job trigger is optional. If you don't enter a name, Cloud DLP will give the job trigger a unique numeric identifier.

Location

From the Storage type menu, choose the kind of repository that stores the data you want to scan:

  • Cloud Storage: Either enter the URL of the bucket you want to scan, or choose Include/exclude from the Location type menu, and then click Browse to navigate to the bucket or subfolder you want to scan. Select the Scan folder recursively checkbox to scan the specified directory and all contained directories. Leave it unselected to scan only the specified directory and no deeper.
  • BigQuery: Enter the identifiers for the project, dataset, and table that you want to scan.
  • Datastore: Enter the identifiers for the project, namespace (optional), and kind that you want to scan.

Sampling

Sampling is an optional way to save resources if you have a very large amount of data.

Under Sampling, you can choose whether to scan all the selected data or to sample the data by scanning a certain percentage. Sampling works differently depending on the type of storage repository you're scanning:

  • For BigQuery, you can sample a subset of the total selected rows, corresponding to the percentage of rows you specify to include in the scan.
  • For Cloud Storage, if any file exceeds the size specified in the Max byte size to scan per file, Cloud DLP scans it up to that maximum file size and then moves on to the next file.

To turn on sampling, choose one of the following options from the first menu:

  • Start sampling from top: Cloud DLP starts the partial scan at the beginning of the data. For BigQuery, this starts the scan at the first row. For Cloud Storage, this starts the scan at the beginning of each file, and stops scanning once Cloud DLP has scanned up to any specified maximum file size (see above).
  • Start sampling from random start: Cloud DLP starts the partial scan at a random location within the data. For BigQuery, this starts the scan at a random row. For Cloud Storage, this setting only applies to files that exceed any specified maximum size. Cloud DLP scans files under the maximum file size in their entirety, and scans files above the maximum file size up to the maximum.

To perform a partial scan, you must also choose what percentage of the data you want to scan. Use the slider to set the percentage.

Advanced configuration

When you create a job trigger for a scan of Cloud Storage buckets or BigQuery tables, you can narrow your search by specifying an advanced configuration. Specifically, you can configure:

  • Files (Cloud Storage only): The file types to scan for, which include text, binary, and image files.
  • Identifying fields (BigQuery only): Unique row identifiers within the table.

Files

For files stored in Cloud Storage, you can specify the types to include in your scan under Files.

You can choose from binary, text, image, Microsoft Word, PDF, and Apache Avro files. An exhaustive list of file extensions that Cloud DLP can scan in Cloud Storage buckets is included on the API reference page for FileType. Be aware that choosing Binary causes Cloud DLP to scan files of types that are unrecognized.

Identifying fields

For tables in BigQuery, under Identifying fields you can direct Cloud DLP to scan only rows that have values in a specific field or fields.

To add a field, click Add identifying field. Enter the field name, using dot notation to specify nested fields, if necessary.

You can add as many fields as you want. To remove a field, click Delete item (the trashcan icon) next to the field you want to delete.

Configure detection

The Configure detection section is where you specify the types of sensitive data you want to scan for. Completing this section is optional. If you skip this section, Cloud DLP will scan your data for a set of common types of sensitive data using the Most common option, which corresponds to the ALL_BASIC infoType detector.

Template

You can optionally use a Cloud DLP template to reuse configuration information you've specified previously.

If you have already created a template that you want to use, click in the Template name field to see a list of existing inspection templates. Choose or type the name of the template you want to use.

For more information about creating templates, see Creating Cloud DLP inspection templates.

InfoTypes

InfoType detectors find sensitive data of a certain type. For example, the Cloud DLP US_SOCIAL_SECURITY_NUMBER built-in infoType detector finds US Social Security numbers. In addition to the built-in infoType detectors, you can create your own custom infoType detectors.

Under InfoTypes, choose the infoType detector that corresponds to a data type you want to scan for. You can also leave this field blank to scan for all default infoTypes. More information about each detector is provided in InfoType detector reference.

You can also add custom infoType detectors in the Custom infoTypes section, and customize both built-in and custom infoType detectors in the Inspection rulesets section.

Custom infoTypes

To add a custom infoType detector:

  1. Click Add custom infoType.
  2. Choose the type of custom infoType detector you want to create:
    • Words or phrases: Matches on one or more words or phrases that you enter into the field. Use this custom infoType when you have just a few words or phrases to search for. Give your custom infoType a name, and then, under List of words or phrases, type the word or phrase you want Cloud DLP to match on. To search on multiple words or phrases, press Enter after each one. For more information, see Creating a regular custom dictionary detector.
    • Dictionary path: Searches your content for items in a list of words and phrases. The list is stored in a text file in Cloud Storage. Use this custom infoType when you have anywhere from a few to several hundred thousand words or phrases to search for. This method is also useful if your list contains sensitive elements and you don't want to store them inside of a job or template. Give your custom infoType a name, and then, under Dictionary location, enter or browse to the Cloud Storage path where the dictionary file is stored. For more information, see Creating a regular custom dictionary detector.
    • Regex: Matches content based on a regular expression. Give your custom infoType a name, and then, in the Regex field, enter a regex pattern to match words and phrases. See the supported regex syntax.
    • Stored infoType: This option adds a stored custom dictionary detector, which is a kind of dictionary detector that is built from either a large text file stored in Cloud Storage or a single column of a BigQuery table. Use this kind of custom infoType when you have anywhere from several hundred thousand to tens of millions of words or phrases to search for. Be aware that this is the only option in this menu for which you must have already created the stored infoType to use it. Give your custom infoType a name (different from the name you gave the stored infoType), and then, in the Stored infoType field, enter the name of the stored infoType. For more information about creating stored custom dictionaries, see Creating a stored custom dictionary detector.

Click Add custom infoType again to add additional custom infoType detectors.

Inspection rulesets

Inspection rulesets allow you to customize both built-in and custom infoType detectors using context rules. The two types of inspection rules are hotword rules, which adjust the likelihood of findings based on words or phrases that appear near them, and exclusion rules, which remove findings that match certain criteria from your results.

To add a new ruleset, first specify one or more built-in or custom infoType detectors in the InfoTypes section. These are the infoType detectors that your rulesets will be modifying. Then, do the following:

  1. Click in the Choose infoTypes field. The infoType or infoTypes you specified previously appear below the field in a menu.
  2. Choose an infoType from the menu, and then click Add rule. A menu appears with the two options Hotword rule and Exclusion rule.

For hotword rules, choose Hotword rule. Then, do the following:

  1. In the Hotword field, enter a regular expression that Cloud DLP should look for.
  2. From the Hotword proximity menu, choose whether the hotword you entered is found before or after the chosen infoType.
  3. In Hotword distance from infoType, enter the approximate number of characters between the hotword and the chosen infoType.
  4. In Confidence level adjustment, choose whether to assign matches a fixed likelihood level, or to increase or decrease the default likelihood level by a certain amount.

For exclusion rules, choose Exclusion rule. Then, do the following:

  1. In the Exclude field, enter a regular expression (regex) that Cloud DLP should look for.
  2. From the Matching type menu, choose one of the following:
    • Full match: The finding must completely match the regex.
    • Partial match: A substring of the finding can match the regex.
    • Inverse match: The finding doesn't match the regex.

You can add additional hotword or exclusion rules and rulesets to further refine your scan results.

Confidence threshold

Every time Cloud DLP detects a potential match for sensitive data, it assigns it a likelihood value on a scale from "Very unlikely" to "Very likely." When you set a likelihood value here, you are instructing Cloud DLP to only match on data that corresponds to that likelihood value or higher.

The default value of "Possible" is sufficient for most purposes. If you routinely get matches that are too broad, move the slider up. If you get too few matches, move the slider down.

When you're done, click Continue.

Add actions

In the Add actions step, you select the action or actions you want Cloud DLP to take after the job completes.

Your choices are:

  • Save to BigQuery: This option saves findings to a BigQuery table. The findings that are stored in BigQuery contain details about each finding's location and match likelihood. If you don't store findings, the completed job will only contain statistics about the number and infoTypes of the findings. If you don't specify a table ID, BigQuery assigns a default name to a new table. If you specify an existing table, findings are appended to it. Choose the Include quote checkbox to include contextual text in each match result.
  • Publish to Pub/Sub: This option sends a notification message to a Pub/Sub topic when the job completes. Click New Topic to specify one or more topic names to which you want to publish the notification.
  • Publish to Google Cloud Security Command Center: This option publishes a summary of your results to Security Command Center. For more information, see Sending Cloud DLP scan results to Security Command Center.
  • Publish to Data Catalog: Choose this option to send inspection results to Data Catalog, Google Cloud's metadata management service.
  • Publish to Stackdriver: Choose this option to send inspection results to Cloud Monitoring, Google Cloud's operations suite.
  • Notify by email: This option causes Cloud DLP to send an email to project owners and editors when the job completes.

When you're done selecting actions, click Continue.

Schedule

In the Schedule section, you can do two things:

  • Specify time span: This option limits the files or rows to scan by date. Click Start time to specify the earliest file timestamp to include. Leave this value blank to specify all files. Click End time to specify the latest file timestamp to include. Leave this value blank to specify no upper timestamp limit.
  • Create a trigger to run the job on a periodic schedule: This option creates the job trigger, and sets it to run the job you've specified on a periodic schedule. The default value is also the minimum value: 24 hours. The maximum value is 60 days. If you only want Cloud DLP to scan new files or rows, select the Limit scans only to new content checkbox.
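
In the DLP API, the time span settings map to the timespanConfig of the job's storageConfig, and the periodic schedule maps to the trigger's schedule. The following fragment is a minimal sketch, assuming a 24-hour recurrence and automatic limiting of each scan to new content:

{
  "jobTrigger": {
    "triggers": [
      {
        "schedule": {
          "recurrencePeriodDuration": "86400s"
        }
      }
    ],
    "inspectJob": {
      "storageConfig": {
        "timespanConfig": {
          "enableAutoPopulationOfTimespanConfig": true
        }
      }
    }
  }
}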

Review

The Review section contains a JSON-formatted summary of the job settings you just specified.

Click Create to create the job trigger. The job trigger's information page appears, which contains status and other information. If a triggered job is currently running, you can click the Cancel button to stop it. You can also delete the job trigger by clicking Delete.

To return to the main Cloud DLP page, click the Back arrow in the Cloud Console.

Protocol

A job trigger is represented in the DLP API by the JobTrigger resource. You can create a new job trigger by using the JobTrigger resource's projects.jobTriggers.create method.

This sample JSON can be sent in a POST request to the specified Cloud DLP REST endpoint. This example JSON demonstrates how to create a job trigger in Cloud DLP. The job that this trigger will kick off is a Datastore inspection scan. The job trigger that is created runs every 86,400 seconds (or 24 hours).

To quickly try this out, you can use the API Explorer on the reference page for projects.jobTriggers.create. Keep in mind that a successful request, even one created in API Explorer, will create a new scheduled job trigger. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.
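
Assuming the global endpoint, the request in this example is sent to the following URL, where [PROJECT-ID] is your project identifier:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/jobTriggers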

JSON input:

{
  "jobTrigger":{
    "displayName":"JobTrigger1",
    "description":"Starts a DLP scan job of a Datastore kind",
    "triggers":[
      {
        "schedule":{
          "recurrencePeriodDuration":"86400s"
        }
      }
    ],
    "status":"HEALTHY",
    "inspectJob":{
      "storageConfig":{
        "datastoreOptions":{
          "kind":{
            "name":"Example-Kind"
          },
          "partitionId":{
            "projectId":"[PROJECT_ID]",
            "namespaceId":"[NAMESPACE_ID]"
          }
        }
      },
      "inspectConfig":{
        "infoTypes":[
          {
            "name":"PHONE_NUMBER"
          }
        ],
        "excludeInfoTypes":false,
        "includeQuote":true,
        "minLikelihood":"LIKELY"
      },
      "actions":[
        {
          "saveFindings":{
            "outputConfig":{
              "table":{
                "projectId":"[PROJECT_ID]",
                "datasetId":"[BIGQUERY_DATASET_NAME]",
                "tableId":"[BIGQUERY_TABLE_NAME]"
              }
            }
          }
        }
      ]
    }
  }
}

JSON output:

The following output indicates that the job trigger was successfully created.

{
  "name":"projects/[PROJECT_ID]/jobTriggers/[JOB_TRIGGER_NAME]",
  "displayName":"JobTrigger1",
  "description":"Starts a DLP scan job of a Datastore kind",
  "inspectJob":{
    "storageConfig":{
      "datastoreOptions":{
        "partitionId":{
          "projectId":"[PROJECT_ID]",
          "namespaceId":"[NAMESPACE_ID]"
        },
        "kind":{
          "name":"Example-Kind"
        }
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"PHONE_NUMBER"
        }
      ],
      "minLikelihood":"LIKELY",
      "limits":{

      },
      "includeQuote":true
    },
    "actions":[
      {
        "saveFindings":{
          "outputConfig":{
            "table":{
              "projectId":"[PROJECT_ID]",
              "datasetId":"[BIGQUERY_DATASET_NAME]",
              "tableId":"[BIGQUERY_TABLE_NAME]"
            }
          }
        }
      }
    ]
  },
  "triggers":[
    {
      "schedule":{
        "recurrencePeriodDuration":"86400s"
      }
    }
  ],
  "createTime":"2018-11-30T01:52:41.171857Z",
  "updateTime":"2018-11-30T01:52:41.171857Z",
  "status":"HEALTHY"
}

Java


import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.CloudStorageOptions;
import com.google.privacy.dlp.v2.CreateJobTriggerRequest;
import com.google.privacy.dlp.v2.InfoType;
import com.google.privacy.dlp.v2.InspectConfig;
import com.google.privacy.dlp.v2.InspectJobConfig;
import com.google.privacy.dlp.v2.JobTrigger;
import com.google.privacy.dlp.v2.LocationName;
import com.google.privacy.dlp.v2.Schedule;
import com.google.privacy.dlp.v2.StorageConfig;
import com.google.privacy.dlp.v2.StorageConfig.TimespanConfig;
import com.google.protobuf.Duration;
import java.io.IOException;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TriggersCreate {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    String gcsPath = "gs://" + "your-bucket-name" + "/path/to/file.txt";
    createTrigger(projectId, gcsPath);
  }

  public static void createTrigger(String projectId, String gcsPath) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

      // Set autoPopulateTimespan to true to scan only new content
      boolean autoPopulateTimespan = true;
      TimespanConfig timespanConfig =
          TimespanConfig.newBuilder()
              .setEnableAutoPopulationOfTimespanConfig(autoPopulateTimespan)
              .build();

      // Specify the GCS file to be inspected.
      CloudStorageOptions cloudStorageOptions =
          CloudStorageOptions.newBuilder()
              .setFileSet(CloudStorageOptions.FileSet.newBuilder().setUrl(gcsPath))
              .build();
      StorageConfig storageConfig =
          StorageConfig.newBuilder()
              .setCloudStorageOptions(cloudStorageOptions)
              .setTimespanConfig(timespanConfig)
              .build();

      // Specify the type of info the inspection will look for.
      // See https://cloud.google.com/dlp/docs/infotypes-reference for complete list of info types
      List<InfoType> infoTypes =
          Stream.of("PHONE_NUMBER", "EMAIL_ADDRESS", "CREDIT_CARD_NUMBER")
              .map(it -> InfoType.newBuilder().setName(it).build())
              .collect(Collectors.toList());

      InspectConfig inspectConfig = InspectConfig.newBuilder().addAllInfoTypes(infoTypes).build();

      // Configure the inspection job we want the service to perform.
      InspectJobConfig inspectJobConfig =
          InspectJobConfig.newBuilder()
              .setInspectConfig(inspectConfig)
              .setStorageConfig(storageConfig)
              .build();

      // Set scanPeriod to the number of days between scans (minimum: 1 day)
      int scanPeriod = 1;

      // Optionally set a display name of max 100 chars and a description of max 250 chars
      String displayName = "Daily Scan";
      String description = "A daily inspection for personally identifiable information.";

      // Schedule scan of GCS bucket every scanPeriod number of days (minimum = 1 day)
      Duration duration = Duration.newBuilder().setSeconds(scanPeriod * 24 * 3600).build();
      Schedule schedule = Schedule.newBuilder().setRecurrencePeriodDuration(duration).build();
      JobTrigger.Trigger trigger = JobTrigger.Trigger.newBuilder().setSchedule(schedule).build();
      JobTrigger jobTrigger =
          JobTrigger.newBuilder()
              .setInspectJob(inspectJobConfig)
              .setDisplayName(displayName)
              .setDescription(description)
              .setStatus(JobTrigger.Status.HEALTHY)
              .addTriggers(trigger)
              .build();

      // Create scan request to be sent by client
      CreateJobTriggerRequest createJobTriggerRequest =
          CreateJobTriggerRequest.newBuilder()
              .setParent(LocationName.of(projectId, "global").toString())
              .setJobTrigger(jobTrigger)
              .build();

      // Send the scan request and process the response
      JobTrigger createdJobTrigger = dlpServiceClient.createJobTrigger(createJobTriggerRequest);

      System.out.println("Created Trigger: " + createdJobTrigger.getName());
      System.out.println("Display Name: " + createdJobTrigger.getDisplayName());
      System.out.println("Description: " + createdJobTrigger.getDescription());
    }
  }
}

Node.js

// Imports the Google Cloud Data Loss Prevention library
const DLP = require('@google-cloud/dlp');

// Instantiates a client
const dlp = new DLP.DlpServiceClient();

// The project ID to run the API call under
// const callingProjectId = process.env.GCLOUD_PROJECT;

// (Optional) The name of the trigger to be created.
// const triggerId = 'my-trigger';

// (Optional) A display name for the trigger to be created
// const displayName = 'My Trigger';

// (Optional) A description for the trigger to be created
// const description = "This is a sample trigger.";

// The name of the bucket to scan.
// const bucketName = 'YOUR-BUCKET';

// Limit scan to new content only.
// const autoPopulateTimespan = true;

// How often to wait between scans, in days (minimum = 1 day)
// const scanPeriod = 1;

// The infoTypes of information to match
// const infoTypes = [{ name: 'PHONE_NUMBER' }, { name: 'EMAIL_ADDRESS' }, { name: 'CREDIT_CARD_NUMBER' }];

// The minimum likelihood required before returning a match
// const minLikelihood = 'LIKELIHOOD_UNSPECIFIED';

// The maximum number of findings to report per request (0 = server maximum)
// const maxFindings = 0;

// Get reference to the bucket to be inspected
const storageItem = {
  cloudStorageOptions: {
    fileSet: {url: `gs://${bucketName}/*`},
  },
  timespanConfig: {
    enableAutoPopulationOfTimespanConfig: autoPopulateTimespan,
  },
};

// Construct job to be triggered
const job = {
  inspectConfig: {
    infoTypes: infoTypes,
    minLikelihood: minLikelihood,
    limits: {
      maxFindingsPerRequest: maxFindings,
    },
  },
  storageConfig: storageItem,
};

// Construct trigger creation request
const request = {
  parent: `projects/${callingProjectId}/locations/global`,
  jobTrigger: {
    inspectJob: job,
    displayName: displayName,
    description: description,
    triggers: [
      {
        schedule: {
          recurrencePeriodDuration: {
            seconds: scanPeriod * 60 * 60 * 24, // Trigger the scan daily
          },
        },
      },
    ],
    status: 'HEALTHY',
  },
  triggerId: triggerId,
};

try {
  // Run trigger creation request
  const [trigger] = await dlp.createJobTrigger(request);
  console.log(`Successfully created trigger ${trigger.name}.`);
} catch (err) {
  console.log(`Error in createTrigger: ${err.message || err}`);
}

Python

def create_trigger(
    project,
    bucket,
    scan_period_days,
    info_types,
    trigger_id=None,
    display_name=None,
    description=None,
    min_likelihood=None,
    max_findings=None,
    auto_populate_timespan=False,
):
    """Creates a scheduled Data Loss Prevention API inspect_content trigger.
    Args:
        project: The Google Cloud project id to use as a parent resource.
        bucket: The name of the GCS bucket to scan. This sample scans all
            files in the bucket using a wildcard.
        scan_period_days: How often to repeat the scan, in days.
            The minimum is 1 day.
        info_types: A list of strings representing info types to look for.
            A full list of info type categories can be fetched from the API.
        trigger_id: The id of the trigger. If omitted, an id will be randomly
            generated.
        display_name: The optional display name of the trigger.
        description: The optional description of the trigger.
        min_likelihood: A string representing the minimum likelihood threshold
            that constitutes a match. One of: 'LIKELIHOOD_UNSPECIFIED',
            'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY'.
        max_findings: The maximum number of findings to report; 0 = no maximum.
        auto_populate_timespan: Automatically populates time span config start
            and end times in order to scan new content only.
    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library
    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Prepare info_types by converting the list of strings into a list of
    # dictionaries (protos are also accepted).
    info_types = [{"name": info_type} for info_type in info_types]

    # Construct the configuration dictionary. Keys which are None may
    # optionally be omitted entirely.
    inspect_config = {
        "info_types": info_types,
        "min_likelihood": min_likelihood,
        "limits": {"max_findings_per_request": max_findings},
    }

    # Construct a cloud_storage_options dictionary with the bucket's URL.
    url = "gs://{}/*".format(bucket)
    storage_config = {
        "cloud_storage_options": {"file_set": {"url": url}},
        # Time-based configuration for each storage object.
        "timespan_config": {
            # Auto-populate start and end times in order to scan new objects
            # only.
            "enable_auto_population_of_timespan_config": auto_populate_timespan
        },
    }

    # Construct the job definition.
    job = {"inspect_config": inspect_config, "storage_config": storage_config}

    # Construct the schedule definition:
    schedule = {
        "recurrence_period_duration": {
            "seconds": scan_period_days * 60 * 60 * 24
        }
    }

    # Construct the trigger definition.
    job_trigger = {
        "inspect_job": job,
        "display_name": display_name,
        "description": description,
        "triggers": [{"schedule": schedule}],
        "status": "HEALTHY",
    }

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Call the API.
    response = dlp.create_job_trigger(
        parent, job_trigger=job_trigger, trigger_id=trigger_id
    )

    print("Successfully created trigger {}".format(response.name))

Go

import (
	"context"
	"fmt"
	"io"

	dlp "cloud.google.com/go/dlp/apiv2"
	"github.com/golang/protobuf/ptypes/duration"
	dlppb "google.golang.org/genproto/googleapis/privacy/dlp/v2"
)

// createTrigger creates a trigger with the given configuration.
func createTrigger(w io.Writer, projectID string, triggerID, displayName, description, bucketName string, infoTypeNames []string) error {
	// projectID := "my-project-id"
	// triggerID := "my-trigger"
	// displayName := "My Trigger"
	// description := "My trigger description"
	// bucketName := "my-bucket"
	// infoTypeNames := []string{"US_SOCIAL_SECURITY_NUMBER"}

	ctx := context.Background()

	client, err := dlp.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("dlp.NewClient: %v", err)
	}

	// Convert the info type strings to a list of InfoTypes.
	var infoTypes []*dlppb.InfoType
	for _, it := range infoTypeNames {
		infoTypes = append(infoTypes, &dlppb.InfoType{Name: it})
	}

	// Create a configured request.
	req := &dlppb.CreateJobTriggerRequest{
		Parent:    fmt.Sprintf("projects/%s/locations/global", projectID),
		TriggerId: triggerID,
		JobTrigger: &dlppb.JobTrigger{
			DisplayName: displayName,
			Description: description,
			Status:      dlppb.JobTrigger_HEALTHY,
			// Triggers control when the job will start.
			Triggers: []*dlppb.JobTrigger_Trigger{
				{
					Trigger: &dlppb.JobTrigger_Trigger_Schedule{
						Schedule: &dlppb.Schedule{
							Option: &dlppb.Schedule_RecurrencePeriodDuration{
								RecurrencePeriodDuration: &duration.Duration{
									Seconds: 10 * 60 * 60 * 24, // 10 days in seconds.
								},
							},
						},
					},
				},
			},
			// Job configures the job to run when the trigger runs.
			Job: &dlppb.JobTrigger_InspectJob{
				InspectJob: &dlppb.InspectJobConfig{
					InspectConfig: &dlppb.InspectConfig{
						InfoTypes:     infoTypes,
						MinLikelihood: dlppb.Likelihood_POSSIBLE,
						Limits: &dlppb.InspectConfig_FindingLimits{
							MaxFindingsPerRequest: 10,
						},
					},
					StorageConfig: &dlppb.StorageConfig{
						Type: &dlppb.StorageConfig_CloudStorageOptions{
							CloudStorageOptions: &dlppb.CloudStorageOptions{
								FileSet: &dlppb.CloudStorageOptions_FileSet{
									Url: "gs://" + bucketName + "/*",
								},
							},
						},
						// Time-based configuration for each storage object. See more at
						// https://cloud.google.com/dlp/docs/reference/rest/v2/InspectJobConfig#TimespanConfig
						TimespanConfig: &dlppb.StorageConfig_TimespanConfig{
							// Auto-populate start and end times in order to scan new objects only.
							EnableAutoPopulationOfTimespanConfig: true,
						},
					},
				},
			},
		},
	}

	// Send the request.
	resp, err := client.CreateJobTrigger(ctx, req)
	if err != nil {
		return fmt.Errorf("CreateJobTrigger: %v", err)
	}
	fmt.Fprintf(w, "Successfully created trigger: %v", resp.GetName())
	return nil
}

PHP

/**
 * Create a Data Loss Prevention API job trigger.
 */
use Google\Cloud\Dlp\V2\DlpServiceClient;
use Google\Cloud\Dlp\V2\JobTrigger;
use Google\Cloud\Dlp\V2\JobTrigger\Trigger;
use Google\Cloud\Dlp\V2\JobTrigger\Status;
use Google\Cloud\Dlp\V2\InspectConfig;
use Google\Cloud\Dlp\V2\InspectJobConfig;
use Google\Cloud\Dlp\V2\Schedule;
use Google\Cloud\Dlp\V2\CloudStorageOptions;
use Google\Cloud\Dlp\V2\CloudStorageOptions_FileSet;
use Google\Cloud\Dlp\V2\StorageConfig;
use Google\Cloud\Dlp\V2\StorageConfig_TimespanConfig;
use Google\Cloud\Dlp\V2\InfoType;
use Google\Cloud\Dlp\V2\Likelihood;
use Google\Cloud\Dlp\V2\InspectConfig\FindingLimits;
use Google\Protobuf\Duration;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';
// $bucketName = 'The name of the bucket to scan';
// $triggerId = '';   // (Optional) The name of the trigger to be created';
// $displayName = ''; // (Optional) The human-readable name to give the trigger';
// $description = ''; // (Optional) A description for the trigger to be created';
// $scanPeriod = 1; // (Optional) How often to wait between scans, in days (minimum = 1 day)
// $autoPopulateTimespan = true; // (Optional) Automatically limit scan to new content only
// $maxFindings = 0; // (Optional) The maximum number of findings to report per request (0 = server maximum)

// Instantiate a client.
$dlp = new DlpServiceClient();

// ----- Construct job config -----
// The infoTypes of information to match
$personNameInfoType = (new InfoType())
    ->setName('PERSON_NAME');
$phoneNumberInfoType = (new InfoType())
    ->setName('PHONE_NUMBER');
$infoTypes = [$personNameInfoType, $phoneNumberInfoType];

// The minimum likelihood required before returning a match
$minLikelihood = Likelihood::LIKELIHOOD_UNSPECIFIED;

// Specify finding limits
$limits = (new FindingLimits())
    ->setMaxFindingsPerRequest($maxFindings);

// Create the inspectConfig object
$inspectConfig = (new InspectConfig())
    ->setMinLikelihood($minLikelihood)
    ->setLimits($limits)
    ->setInfoTypes($infoTypes);

// Create triggers
$duration = (new Duration())
    ->setSeconds($scanPeriod * 60 * 60 * 24);

$schedule = (new Schedule())
    ->setRecurrencePeriodDuration($duration);

$triggerObject = (new Trigger())
    ->setSchedule($schedule);

// Create the storageConfig object
$fileSet = (new CloudStorageOptions_FileSet())
    ->setUrl('gs://' . $bucketName . '/*');

$storageOptions = (new CloudStorageOptions())
    ->setFileSet($fileSet);

// Auto-populate start and end times in order to scan new objects only.
$timespanConfig = (new StorageConfig_TimespanConfig())
    ->setEnableAutoPopulationOfTimespanConfig($autoPopulateTimespan);

$storageConfig = (new StorageConfig())
    ->setCloudStorageOptions($storageOptions)
    ->setTimespanConfig($timespanConfig);

// Construct the jobConfig object
$jobConfig = (new InspectJobConfig())
    ->setInspectConfig($inspectConfig)
    ->setStorageConfig($storageConfig);

// ----- Construct trigger object -----
$jobTriggerObject = (new JobTrigger())
    ->setTriggers([$triggerObject])
    ->setInspectJob($jobConfig)
    ->setStatus(Status::HEALTHY)
    ->setDisplayName($displayName)
    ->setDescription($description);

// Run trigger creation request
$parent = "projects/$callingProjectId/locations/global";
$trigger = $dlp->createJobTrigger($parent, [
    'jobTrigger' => $jobTriggerObject,
    'triggerId' => $triggerId
]);

// Print results
printf('Successfully created trigger %s' . PHP_EOL, $trigger->getName());

C#


using Google.Api.Gax.ResourceNames;
using Google.Cloud.Dlp.V2;
using System;
using System.Collections.Generic;
using static Google.Cloud.Dlp.V2.CloudStorageOptions.Types;
using static Google.Cloud.Dlp.V2.InspectConfig.Types;
using static Google.Cloud.Dlp.V2.JobTrigger.Types;
using static Google.Cloud.Dlp.V2.StorageConfig.Types;

public class TriggersCreate
{
    public static JobTrigger Create(
        string projectId,
        string bucketName,
        Likelihood minLikelihood,
        int maxFindings,
        bool autoPopulateTimespan,
        int scanPeriod,
        IEnumerable<InfoType> infoTypes,
        string triggerId,
        string displayName,
        string description)
    {
        var dlp = DlpServiceClient.Create();

        var jobConfig = new InspectJobConfig
        {
            InspectConfig = new InspectConfig
            {
                MinLikelihood = minLikelihood,
                Limits = new FindingLimits
                {
                    MaxFindingsPerRequest = maxFindings
                },
                InfoTypes = { infoTypes }
            },
            StorageConfig = new StorageConfig
            {
                CloudStorageOptions = new CloudStorageOptions
                {
                    FileSet = new FileSet
                    {
                        Url = $"gs://{bucketName}/*"
                    }
                },
                TimespanConfig = new TimespanConfig
                {
                    EnableAutoPopulationOfTimespanConfig = autoPopulateTimespan
                }
            }
        };

        var jobTrigger = new JobTrigger
        {
            Triggers =
            {
                new Trigger
                {
                    Schedule = new Schedule
                    {
                        RecurrencePeriodDuration = new Google.Protobuf.WellKnownTypes.Duration
                        {
                            Seconds = scanPeriod * 60 * 60 * 24
                        }
                    }
                }
            },
            InspectJob = jobConfig,
            Status = Status.Healthy,
            DisplayName = displayName,
            Description = description
        };

        var response = dlp.CreateJobTrigger(
            new CreateJobTriggerRequest
            {
                Parent = new LocationName(projectId, "global").ToString(),
                JobTrigger = jobTrigger,
                TriggerId = triggerId
            });

        Console.WriteLine($"Successfully created trigger {response.Name}");
        return response;
    }
}

List all jobs

To list all jobs for the current project:

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. Under the Jobs & job triggers tab, click the All jobs tab.

The console displays a list of all jobs for the current project, including their job identifiers, state, creation time, and end time. You can get more information about any job, including a summary of its results, by clicking its identifier.

Protocol

The DlpJob resource has a projects.dlpJobs.list method, with which you can list all jobs.

To list all jobs currently defined in your project, send a GET request to the dlpJobs endpoint, as shown here:

URL:

GET https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs?key={YOUR_API_KEY}

The following JSON output lists one of the jobs returned. Note that the structure of the job mirrors that of the DlpJob resource.

JSON output:

{
  "jobs":[
    {
      "name":"projects/[PROJECT-ID]/dlpJobs/i-5270277269264714623",
      "type":"INSPECT_JOB",
      "state":"DONE",
      "inspectDetails":{
        "requestedOptions":{
          "snapshotInspectTemplate":{
          },
          "jobConfig":{
            "storageConfig":{
              "cloudStorageOptions":{
                "fileSet":{
                  "url":"[CLOUD-STORAGE-URL]"
                },
                "fileTypes":[
                  "FILE_TYPE_UNSPECIFIED"
                ],
                "filesLimitPercent":100
              },
              "timespanConfig":{
                "startTime":"2019-09-08T22:43:16.623Z",
                "enableAutoPopulationOfTimespanConfig":true
              }
            },
            "inspectConfig":{
              "infoTypes":[
                {
                  "name":"US_SOCIAL_SECURITY_NUMBER"
                },
                {
                  "name":"CANADA_SOCIAL_INSURANCE_NUMBER"
                }
              ],
              "minLikelihood":"LIKELY",
              "limits":{
              },
              "includeQuote":true
            },
            "actions":[
              {
                "saveFindings":{
                  "outputConfig":{
                    "table":{
                      "projectId":"[PROJECT-ID]",
                      "datasetId":"[DATASET-ID]",
                      "tableId":"[TABLE-ID]"
                    }
                  }
                }
              }
            ]
          }
        },
        "result":{
          ...
        }
      },
      "createTime":"2019-09-09T22:43:16.918Z",
      "startTime":"2019-09-09T22:43:16.918Z",
      "endTime":"2019-09-09T22:43:53.091Z",
      "jobTriggerName":"projects/[PROJECT-ID]/jobTriggers/sample-trigger2"
    },
    ...

To quickly try this out, you can use the API Explorer that's embedded below. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.
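
If you want to script this request instead of using the API Explorer, the following is a minimal Python sketch that sends the same GET request. It assumes Application Default Credentials are configured and uses the google-auth library's AuthorizedSession; the snippet is illustrative rather than an official sample.

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Resolve Application Default Credentials and the associated project ID.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# Send the GET request to the dlpJobs endpoint.
response = session.get(
    "https://dlp.googleapis.com/v2/projects/{}/dlpJobs".format(project_id)
)
response.raise_for_status()

# Print the name and state of each returned job.
for job in response.json().get("jobs", []):
    print(job["name"], job["state"])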

Java


import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.DlpJob;
import com.google.privacy.dlp.v2.DlpJobType;
import com.google.privacy.dlp.v2.ListDlpJobsRequest;
import com.google.privacy.dlp.v2.LocationName;
import java.io.IOException;

public class JobsList {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    listJobs(projectId);
  }

  // Lists DLP jobs
  public static void listJobs(String projectId) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

      // Construct the request to be sent by the client.
      // For more info on filters and job types,
      // see https://cloud.google.com/dlp/docs/reference/rest/v2/projects.dlpJobs/list
      ListDlpJobsRequest listDlpJobsRequest =
          ListDlpJobsRequest.newBuilder()
              .setParent(LocationName.of(projectId, "global").toString())
              .setFilter("state=DONE")
              .setType(DlpJobType.valueOf("INSPECT_JOB"))
              .build();

      // Send the request to list jobs and process the response
      DlpServiceClient.ListDlpJobsPagedResponse response =
          dlpServiceClient.listDlpJobs(listDlpJobsRequest);

      System.out.println("DLP jobs found:");
      for (DlpJob dlpJob : response.getPage().getValues()) {
        System.out.println(dlpJob.getName() + " -- " + dlpJob.getState());
      }
    }
  }
}

Node.js

// Imports the Google Cloud Data Loss Prevention library
const DLP = require('@google-cloud/dlp');

// Instantiates a client
const dlp = new DLP.DlpServiceClient();

// The project ID to run the API call under
// const callingProjectId = process.env.GCLOUD_PROJECT;

// The filter expression to use
// For more information and filter syntax, see https://cloud.google.com/dlp/docs/reference/rest/v2/projects.dlpJobs/list
// const filter = `state=DONE`;

// The type of job to list (either 'INSPECT_JOB' or 'RISK_ANALYSIS_JOB')
// const jobType = 'INSPECT_JOB';

// Construct request for listing DLP scan jobs
const request = {
  parent: `projects/${callingProjectId}/locations/global`,
  filter: filter,
  type: jobType,
};

try {
  // Run job-listing request
  const [jobs] = await dlp.listDlpJobs(request);
  jobs.forEach(job => {
    console.log(`Job ${job.name} status: ${job.state}`);
  });
} catch (err) {
  console.log(`Error in listJobs: ${err.message || err}`);
}

Python

def list_dlp_jobs(project, filter_string=None, job_type=None):
    """Uses the Data Loss Prevention API to lists DLP jobs that match the
        specified filter in the request.
    Args:
        project: The project id to use as a parent resource.
        filter_string: (Optional) Allows filtering.
            Supported syntax:
            * Filter expressions are made up of one or more restrictions.
            * Restrictions can be combined by 'AND' or 'OR' logical operators.
            A sequence of restrictions implicitly uses 'AND'.
            * A restriction has the form of '<field> <operator> <value>'.
            * Supported fields/values for inspect jobs:
                - `state` - PENDING|RUNNING|CANCELED|FINISHED|FAILED
                - `inspected_storage` - DATASTORE|CLOUD_STORAGE|BIGQUERY
                - `trigger_name` - The resource name of the trigger that
                                   created job.
            * Supported fields for risk analysis jobs:
                - `state` - RUNNING|CANCELED|FINISHED|FAILED
            * The operator must be '=' or '!='.
            Examples:
            * inspected_storage = cloud_storage AND state = done
            * inspected_storage = cloud_storage OR inspected_storage = bigquery
            * inspected_storage = cloud_storage AND
                                  (state = done OR state = canceled)
        job_type: (Optional) The type of job. Defaults to 'INSPECT_JOB'.
            Choices:
            DLP_JOB_TYPE_UNSPECIFIED
            INSPECT_JOB: The job inspected content for sensitive data.
            RISK_ANALYSIS_JOB: The job executed a Risk Analysis computation.

    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library.
    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Job type dictionary
    job_type_to_int = {
        "DLP_JOB_TYPE_UNSPECIFIED":
            google.cloud.dlp.enums.DlpJobType.DLP_JOB_TYPE_UNSPECIFIED,
        "INSPECT_JOB": google.cloud.dlp.enums.DlpJobType.INSPECT_JOB,
        "RISK_ANALYSIS_JOB": google.cloud.dlp.enums.DlpJobType.RISK_ANALYSIS_JOB,
    }
    # If job type is specified, convert job type to number through enums.
    if job_type:
        job_type = job_type_to_int[job_type]

    # Call the API to get a list of jobs.
    response = dlp.list_dlp_jobs(parent, filter_=filter_string, type_=job_type)

    # Iterate over results.
    for job in response:
        print("Job: %s; status: %s" % (job.name, job.JobState.Name(job.state)))

Go

import (
	"context"
	"fmt"
	"io"

	dlp "cloud.google.com/go/dlp/apiv2"
	"google.golang.org/api/iterator"
	dlppb "google.golang.org/genproto/googleapis/privacy/dlp/v2"
)

// listJobs lists jobs matching the given optional filter and optional jobType.
func listJobs(w io.Writer, projectID, filter, jobType string) error {
	// projectID := "my-project-id"
	// filter := "`state` = FINISHED"
	// jobType := "RISK_ANALYSIS_JOB"
	ctx := context.Background()
	client, err := dlp.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("dlp.NewClient: %v", err)
	}

	// Create a configured request.
	req := &dlppb.ListDlpJobsRequest{
		Parent: fmt.Sprintf("projects/%s/locations/global", projectID),
		Filter: filter,
		Type:   dlppb.DlpJobType(dlppb.DlpJobType_value[jobType]),
	}
	// Send the request and iterate over the results.
	it := client.ListDlpJobs(ctx, req)
	for {
		j, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return fmt.Errorf("Next: %v", err)
		}
		fmt.Fprintf(w, "Job %v status: %v\n", j.GetName(), j.GetState())
	}
	return nil
}

PHP

/**
 * List Data Loss Prevention API jobs corresponding to a given filter.
 */
use Google\Cloud\Dlp\V2\DlpServiceClient;
use Google\Cloud\Dlp\V2\DlpJobType;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';
// $filter = 'The filter expression to use';

// Instantiate a client.
$dlp = new DlpServiceClient();

// The type of job to list (either 'INSPECT_JOB' or 'RISK_ANALYSIS_JOB')
$jobType = DlpJobType::INSPECT_JOB;

// Run job-listing request
// For more information and filter syntax,
// @see https://cloud.google.com/dlp/docs/reference/rest/v2/projects.dlpJobs/list
$parent = "projects/$callingProjectId/locations/global";
$response = $dlp->listDlpJobs($parent, [
  'filter' => $filter,
  'type' => $jobType
]);

// Print job list
$jobs = $response->iterateAllElements();
foreach ($jobs as $job) {
    printf('Job %s status: %s' . PHP_EOL, $job->getName(), $job->getState());
    $infoTypeStats = $job->getInspectDetails()->getResult()->getInfoTypeStats();

    if (count($infoTypeStats) > 0) {
        foreach ($infoTypeStats as $infoTypeStat) {
            printf(
                '  Found %s instance(s) of type %s' . PHP_EOL,
                $infoTypeStat->getCount(),
                $infoTypeStat->getInfoType()->getName()
            );
        }
    } else {
        print('  No findings.' . PHP_EOL);
    }
}

C#


using Google.Api.Gax;
using Google.Api.Gax.ResourceNames;
using Google.Cloud.Dlp.V2;

public class JobsList
{
    public static PagedEnumerable<ListDlpJobsResponse, DlpJob> ListDlpJobs(string projectId, string filter, DlpJobType jobType)
    {
        var dlp = DlpServiceClient.Create();

        var response = dlp.ListDlpJobs(new ListDlpJobsRequest
        {
            Parent = new LocationName(projectId, "global").ToString(),
            Filter = filter,
            Type = jobType
        });

        // Uncomment to print jobs
        // foreach (var job in response)
        // {
        //     Console.WriteLine($"Job: {job.Name} status: {job.State}");
        // }

        return response;
    }
}

List all job triggers

To list all job triggers for the current project:

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. Under the Jobs & job triggers tab, click the Job triggers tab.

The console displays a list of all job triggers for the current project.

Protocol

The JobTrigger resource has a projects.jobTriggers.list method, with which you can list all job triggers.

To list all job triggers currently defined in your project, send a GET request to the jobTriggers endpoint, as shown here:

URL:

GET https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/jobTriggers?key={YOUR_API_KEY}

The following JSON output lists the job trigger we created in the previous section. Note that the structure of the job trigger mirrors that of the JobTrigger resource.

JSON output:

{
  "jobTriggers":[
    {
      "name":"projects/[PROJECT_ID]/jobTriggers/[JOB_TRIGGER_NAME]",
      "displayName":"JobTrigger1",
      "description":"Starts a DLP scan job of a Datastore kind",
      "inspectJob":{
        "storageConfig":{
          "datastoreOptions":{
            "partitionId":{
              "projectId":"[PROJECT_ID]",
              "namespaceId":"[NAMESPACE_ID]"
            },
            "kind":{
              "name":"Example-Kind"
            }
          }
        },
        "inspectConfig":{
          "infoTypes":[
            {
              "name":"PHONE_NUMBER"
            }
          ],
          "minLikelihood":"LIKELY",
          "limits":{

          },
          "includeQuote":true
        },
        "actions":[
          {
            "saveFindings":{
              "outputConfig":{
                "table":{
                  "projectId":"[PROJECT_ID]",
                  "datasetId":"[BIGQUERY_DATASET_NAME]",
                  "tableId":"[BIGQUERY_TABLE_NAME]"
                }
              }
            }
          }
        ]
      },
      "triggers":[
        {
          "schedule":{
            "recurrencePeriodDuration":"86400s"
          }
        }
      ],
      "createTime":"2018-11-30T01:52:41.171857Z",
      "updateTime":"2018-11-30T01:52:41.171857Z",
      "status":"HEALTHY"
    },
    ...
  ],
  "nextPageToken":"KkwKCQjivJ2UpPreAgo_Kj1wcm9qZWN0cy92ZWx2ZXR5LXN0dWR5LTE5NjEwMS9qb2JUcmlnZ2Vycy8xNTA5NzEyOTczMDI0MDc1NzY0"
}

To quickly try this out, you can use the API Explorer that's embedded below. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.
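
Note the nextPageToken value in the response above. When there are more job triggers than fit in one response, you can pass that token back as the pageToken query parameter to fetch the next page. The following illustrative Python sketch, which assumes the AuthorizedSession set up in the jobs-listing example earlier, pages through all job triggers:

# Page through all job triggers, passing nextPageToken back as pageToken.
url = "https://dlp.googleapis.com/v2/projects/{}/jobTriggers".format(project_id)
page_token = None
while True:
    params = {"pageToken": page_token} if page_token else {}
    body = session.get(url, params=params).json()
    for trigger in body.get("jobTriggers", []):
        print(trigger["name"])
    page_token = body.get("nextPageToken")
    if not page_token:
        break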

Java


import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.JobTrigger;
import com.google.privacy.dlp.v2.ListJobTriggersRequest;
import com.google.privacy.dlp.v2.LocationName;
import java.io.IOException;

class TriggersList {
  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    listTriggers(projectId);
  }

  public static void listTriggers(String projectId) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {
      // Build the request to be sent by the client
      ListJobTriggersRequest listJobTriggersRequest =
          ListJobTriggersRequest.newBuilder()
              .setParent(LocationName.of(projectId, "global").toString())
              .build();

      // Use the client to send the API request.
      DlpServiceClient.ListJobTriggersPagedResponse response =
          dlpServiceClient.listJobTriggers(listJobTriggersRequest);

      // Parse the response and process the results
      System.out.println("DLP triggers found:");
      for (JobTrigger trigger : response.getPage().getValues()) {
        System.out.println("Trigger: " + trigger.getName());
        System.out.println("\tCreated: " + trigger.getCreateTime());
        System.out.println("\tUpdated: " + trigger.getUpdateTime());
        if (trigger.getDisplayName() != null) {
          System.out.println("\tDisplay name: " + trigger.getDisplayName());
        }
        if (trigger.getDescription() != null) {
          System.out.println("\tDescription: " + trigger.getDescription());
        }
        System.out.println("\tStatus: " + trigger.getStatus());
        System.out.println("\tError count: " + trigger.getErrorsCount());
      }
    }
  }
}

Node.js

// Imports the Google Cloud Data Loss Prevention library
const DLP = require('@google-cloud/dlp');

// Instantiates a client
const dlp = new DLP.DlpServiceClient();

// The project ID to run the API call under
// const callingProjectId = process.env.GCLOUD_PROJECT;

// Construct trigger listing request
const request = {
  parent: `projects/${callingProjectId}/locations/global`,
};

// Helper function to pretty-print dates
const formatDate = date => {
  const msSinceEpoch = parseInt(date.seconds, 10) * 1000;
  return new Date(msSinceEpoch).toLocaleString('en-US');
};

try {
  // Run trigger listing request
  const [triggers] = await dlp.listJobTriggers(request);
  triggers.forEach(trigger => {
    // Log trigger details
    console.log(`Trigger ${trigger.name}:`);
    console.log(`  Created: ${formatDate(trigger.createTime)}`);
    console.log(`  Updated: ${formatDate(trigger.updateTime)}`);
    if (trigger.displayName) {
      console.log(`  Display Name: ${trigger.displayName}`);
    }
    if (trigger.description) {
      console.log(`  Description: ${trigger.description}`);
    }
    console.log(`  Status: ${trigger.status}`);
    console.log(`  Error count: ${trigger.errors.length}`);
  });
} catch (err) {
  console.log(`Error in listTriggers: ${err.message || err}`);
}

Python

def list_triggers(project):
    """Lists all Data Loss Prevention API triggers.
    Args:
        project: The Google Cloud project id to use as a parent resource.
    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the standard time module and the client library.
    import time

    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Call the API.
    response = dlp.list_job_triggers(parent)

    # Define a helper function to convert the API's "seconds since the epoch"
    # time format into a human-readable string.
    def human_readable_time(timestamp):
        return str(time.localtime(timestamp.seconds))

    for trigger in response:
        print("Trigger {}:".format(trigger.name))
        print("  Created: {}".format(human_readable_time(trigger.create_time)))
        print("  Updated: {}".format(human_readable_time(trigger.update_time)))
        if trigger.display_name:
            print("  Display Name: {}".format(trigger.display_name))
        if trigger.description:
            print("  Description: {}".format(trigger.discription))
        print("  Status: {}".format(trigger.status))
        print("  Error count: {}".format(len(trigger.errors)))

Go

import (
	"context"
	"fmt"
	"io"
	"time"

	dlp "cloud.google.com/go/dlp/apiv2"
	"github.com/golang/protobuf/ptypes"
	"google.golang.org/api/iterator"
	dlppb "google.golang.org/genproto/googleapis/privacy/dlp/v2"
)

// listTriggers lists the triggers for the given project.
func listTriggers(w io.Writer, projectID string) error {
	// projectID := "my-project-id"

	ctx := context.Background()

	client, err := dlp.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("dlp.NewClient: %v", err)
	}

	// Create a configured request.
	req := &dlppb.ListJobTriggersRequest{
		Parent: fmt.Sprintf("projects/%s/locations/global", projectID),
	}
	// Send the request and iterate over the results.
	it := client.ListJobTriggers(ctx, req)
	for {
		t, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return fmt.Errorf("Next: %v", err)
		}
		fmt.Fprintf(w, "Trigger %v\n", t.GetName())
		c, err := ptypes.Timestamp(t.GetCreateTime())
		if err != nil {
			return fmt.Errorf("CreateTime Timestamp: %v", err)
		}
		fmt.Fprintf(w, "  Created: %v\n", c.Format(time.RFC1123))
		u, err := ptypes.Timestamp(t.GetUpdateTime())
		if err != nil {
			return fmt.Errorf("UpdateTime Timestamp: %v", err)
		}
		fmt.Fprintf(w, "  Updated: %v\n", u.Format(time.RFC1123))
		fmt.Fprintf(w, "  Display Name: %q\n", t.GetDisplayName())
		fmt.Fprintf(w, "  Description: %q\n", t.GetDescription())
		fmt.Fprintf(w, "  Status: %v\n", t.GetStatus())
		fmt.Fprintf(w, "  Error Count: %v\n", len(t.GetErrors()))
	}

	return nil
}

PHP

/**
 * List Data Loss Prevention API job triggers.
 */
use Google\Cloud\Dlp\V2\DlpServiceClient;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';

// Instantiate a client.
$dlp = new DlpServiceClient();

$parent = "projects/$callingProjectId/locations/global";

// Run request
$response = $dlp->listJobTriggers($parent);

// Print results
$triggers = $response->iterateAllElements();
foreach ($triggers as $trigger) {
    printf('Trigger %s' . PHP_EOL, $trigger->getName());
    printf('  Created: %s' . PHP_EOL, $trigger->getCreateTime()->getSeconds());
    printf('  Updated: %s' . PHP_EOL, $trigger->getUpdateTime()->getSeconds());
    printf('  Display Name: %s' . PHP_EOL, $trigger->getDisplayName());
    printf('  Description: %s' . PHP_EOL, $trigger->getDescription());
    printf('  Status: %s' . PHP_EOL, $trigger->getStatus());
    printf('  Error count: %s' . PHP_EOL, count($trigger->getErrors()));
    $timespanConfig = $trigger->getInspectJob()->getStorageConfig()->getTimespanConfig();
    printf('  Auto-populates timespan config: %s' . PHP_EOL,
        ($timespanConfig && $timespanConfig->getEnableAutoPopulationOfTimespanConfig() ? 'yes' : 'no'));
}

C#


using Google.Api.Gax;
using Google.Api.Gax.ResourceNames;
using Google.Cloud.Dlp.V2;
using System;

public class TriggersList
{
    public static PagedEnumerable<ListJobTriggersResponse, JobTrigger> List(string projectId)
    {
        var dlp = DlpServiceClient.Create();

        var response = dlp.ListJobTriggers(
            new ListJobTriggersRequest
            {
                Parent = new LocationName(projectId, "global").ToString(),
            });

        foreach (var trigger in response)
        {
            Console.WriteLine($"Name: {trigger.Name}");
            Console.WriteLine($"  Created: {trigger.CreateTime}");
            Console.WriteLine($"  Updated: {trigger.UpdateTime}");
            Console.WriteLine($"  Display Name: {trigger.DisplayName}");
            Console.WriteLine($"  Description: {trigger.Description}");
            Console.WriteLine($"  Status: {trigger.Status}");
            Console.WriteLine($"  Error count: {trigger.Errors.Count}");
        }

        return response;
    }
}

Delete a job

To delete a job from your project, including its results, do the following. Any results saved externally (such as to BigQuery) are unaffected by this operation.

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. Under the Jobs & job triggers tab, click the All jobs tab. The Google Cloud Console displays a list of all jobs for the current project.

  3. In the Actions column for the job you want to delete, click the more actions menu (displayed as three dots arranged vertically), and then click Delete.

Alternatively, from the list of jobs, click the identifier of the job you want to delete. On the job's detail page, click Delete.

Protocol

To delete a job from the current project, send a DELETE request to the dlpJobs endpoint, as shown here. Replace the [JOB-IDENTIFIER] field with the identifier of the job, which starts with i-.

URL:

DELETE https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs/[JOB-IDENTIFIER]?key={YOUR_API_KEY}

If the request was successful, the Cloud DLP API will return a success response. To verify the job was successfully deleted, list all jobs.

To quickly try this out, you can use the API Explorer that's embedded below. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.
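
As a scripted alternative to the API Explorer, the sketch below issues the same DELETE request using the AuthorizedSession pattern from the jobs-listing example earlier; the job identifier is a placeholder.

# Delete a job by its full resource name; a successful call returns an
# empty JSON body.
job_name = "projects/{}/dlpJobs/i-1234567890".format(project_id)  # placeholder ID
response = session.delete("https://dlp.googleapis.com/v2/" + job_name)
response.raise_for_status()
print("Successfully deleted {}".format(job_name))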

Java


import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.DeleteDlpJobRequest;
import com.google.privacy.dlp.v2.DlpJobName;
import java.io.IOException;

public class JobsDelete {
  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    String jobId = "your-job-id";
    deleteJobs(projectId, jobId);
  }

  // Deletes a DLP Job with the given jobId
  public static void deleteJobs(String projectId, String jobId) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

      // Construct the complete job name from the projectId and jobId
      DlpJobName jobName = DlpJobName.of(projectId, jobId);

      // Construct the job deletion request to be sent by the client.
      DeleteDlpJobRequest deleteDlpJobRequest =
          DeleteDlpJobRequest.newBuilder().setName(jobName.toString()).build();

      // Send the job deletion request
      dlpServiceClient.deleteDlpJob(deleteDlpJobRequest);
      System.out.println("Job deleted successfully.");
    }
  }
}

Node.js

// Imports the Google Cloud Data Loss Prevention library
const DLP = require('@google-cloud/dlp');

// Instantiates a client
const dlp = new DLP.DlpServiceClient();

// The name of the job whose results should be deleted
// Parent project ID is automatically extracted from this parameter
// const jobName = 'projects/my-project/dlpJobs/X-#####'

// Construct job deletion request
const request = {
  name: jobName,
};

// Run job deletion request
dlp
  .deleteDlpJob(request)
  .then(() => {
    console.log(`Successfully deleted job ${jobName}.`);
  })
  .catch(err => {
    console.log(`Error in deleteJob: ${err.message || err}`);
  });

Python

def delete_dlp_job(project, job_name):
    """Uses the Data Loss Prevention API to delete a long-running DLP job.
    Args:
        project: The project id to use as a parent resource.
        job_name: The name of the DlpJob resource to be deleted.

    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library.
    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Convert the project id and job name into a full resource id.
    name = dlp.dlp_job_path(project, job_name)

    # Call the API to delete job.
    dlp.delete_dlp_job(name)

    print("Successfully deleted %s" % job_name)

Go

import (
	"context"
	"fmt"
	"io"

	dlp "cloud.google.com/go/dlp/apiv2"
	dlppb "google.golang.org/genproto/googleapis/privacy/dlp/v2"
)

// deleteJob deletes the job with the given name.
func deleteJob(w io.Writer, jobName string) error {
	// jobName := "job-example"
	ctx := context.Background()
	client, err := dlp.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("dlp.NewClient: %v", err)
	}
	req := &dlppb.DeleteDlpJobRequest{
		Name: jobName,
	}
	if err = client.DeleteDlpJob(ctx, req); err != nil {
		return fmt.Errorf("DeleteDlpJob: %v", err)
	}
	fmt.Fprintf(w, "Successfully deleted job")
	return nil
}

PHP

/**
 * Delete a Data Loss Prevention API job (including its results).
 */
use Google\Cloud\Dlp\V2\DlpServiceClient;

/** Uncomment and populate these variables in your code */
// $jobId = 'The name of the job whose results should be deleted';

// Instantiate a client.
$dlp = new DlpServiceClient();

// Run job-deletion request
// The Parent project ID is automatically extracted from this parameter
$dlp->deleteDlpJob($jobId);

// Print status
printf('Successfully deleted job %s' . PHP_EOL, $jobId);

C#


using System;
using Google.Cloud.Dlp.V2;

public class JobsDelete
{
    public static void DeleteJob(string jobName)
    {
        var dlp = DlpServiceClient.Create();

        dlp.DeleteDlpJob(new DeleteDlpJobRequest
        {
            Name = jobName
        });

        Console.WriteLine($"Successfully deleted job {jobName}.");
    }
}

Delete a job trigger

To delete an existing job trigger:

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. Under the Jobs & job triggers tab, click the Job triggers tab. The console displays a list of all job triggers for the current project.

  3. In the Actions column for the job trigger you want to delete, click the more actions menu (displayed as three dots arranged vertically), and then click Delete.

Alternatively, from the list of job triggers, click the name of the job trigger you want to delete. On the job trigger's detail page, click Delete.

Protocol

To delete a job trigger from the current project, send a DELETE request to the jobTriggers endpoint, as shown here. Replace the [JOB-TRIGGER-NAME] field with the name of the job trigger.

URL:

DELETE https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/jobTriggers/[JOB-TRIGGER-NAME]?key={YOUR_API_KEY}

If the request was successful, the Cloud DLP API will return a success response. To verify the job trigger was successfully deleted, list all job triggers.

To quickly try this out, you can use the API Explorer that's embedded below. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.

Java

import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.DeleteJobTriggerRequest;
import com.google.privacy.dlp.v2.ProjectJobTriggerName;
import java.io.IOException;

class TriggersDelete {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    String triggerId = "your-trigger-id";
    deleteTrigger(projectId, triggerId);
  }

  public static void deleteTrigger(String projectId, String triggerId) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

      // Get the full trigger name from the given triggerId and ProjectId
      ProjectJobTriggerName triggerName = ProjectJobTriggerName.of(projectId, triggerId);

      // Construct the trigger deletion request to be sent by the client
      DeleteJobTriggerRequest deleteJobTriggerRequest =
          DeleteJobTriggerRequest.newBuilder().setName(triggerName.toString()).build();

      // Send the trigger deletion request
      dlpServiceClient.deleteJobTrigger(deleteJobTriggerRequest);
      System.out.println("Trigger deleted: " + triggerName.toString());
    }
  }
}

Node.js

// Imports the Google Cloud Data Loss Prevention library
const DLP = require('@google-cloud/dlp');

// Instantiates a client
const dlp = new DLP.DlpServiceClient();

// The name of the trigger to be deleted
// Parent project ID is automatically extracted from this parameter
// const triggerId = 'projects/my-project/triggers/my-trigger';

// Construct trigger deletion request
const request = {
  name: triggerId,
};
try {
  // Run trigger deletion request
  await dlp.deleteJobTrigger(request);
  console.log(`Successfully deleted trigger ${triggerId}.`);
} catch (err) {
  console.log(`Error in deleteTrigger: ${err.message || err}`);
}

Python

def delete_trigger(project, trigger_id):
    """Deletes a Data Loss Prevention API trigger.
    Args:
        project: The id of the Google Cloud project which owns the trigger.
        trigger_id: The id of the trigger to delete.
    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library
    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Combine the trigger id with the parent id.
    trigger_resource = "{}/jobTriggers/{}".format(parent, trigger_id)

    # Call the API.
    dlp.delete_job_trigger(trigger_resource)

    print("Trigger {} successfully deleted.".format(trigger_resource))




if __name__ == "__main__":
    default_project = os.environ.get("GOOGLE_CLOUD_PROJECT")

    parser = argparse.ArgumentParser(description=__doc__)
    subparsers = parser.add_subparsers(
        dest="action", help="Select which action to perform."
    )
    subparsers.required = True

    parser_create = subparsers.add_parser("create", help="Create a trigger.")
    parser_create.add_argument(
        "bucket", help="The name of the GCS bucket containing the file."
    )
    parser_create.add_argument(
        "scan_period_days",
        type=int,
        help="How often to repeat the scan, in days. The minimum is 1 day.",
    )
    parser_create.add_argument(
        "--trigger_id",
        help="The id of the trigger. If omitted, an id will be randomly "
        "generated",
    )
    parser_create.add_argument(
        "--display_name", help="The optional display name of the trigger."
    )
    parser_create.add_argument(
        "--description", help="The optional description of the trigger."
    )
    parser_create.add_argument(
        "--project",
        help="The Google Cloud project id to use as a parent resource.",
        default=default_project,
    )
    parser_create.add_argument(
        "--info_types",
        nargs="+",
        help="Strings representing info types to look for. A full list of "
        "info categories and types is available from the API. Examples "
        'include "FIRST_NAME", "LAST_NAME", "EMAIL_ADDRESS". '
        "If unspecified, the three above examples will be used.",
        default=["FIRST_NAME", "LAST_NAME", "EMAIL_ADDRESS"],
    )
    parser_create.add_argument(
        "--min_likelihood",
        choices=[
            "LIKELIHOOD_UNSPECIFIED",
            "VERY_UNLIKELY",
            "UNLIKELY",
            "POSSIBLE",
            "LIKELY",
            "VERY_LIKELY",
        ],
        help="A string representing the minimum likelihood threshold that "
        "constitutes a match.",
    )
    parser_create.add_argument(
        "--max_findings",
        type=int,
        help="The maximum number of findings to report; 0 = no maximum.",
    )
    parser_create.add_argument(
        "--auto_populate_timespan",
        type=bool,
        help="Limit scan to new content only.",
    )

    parser_list = subparsers.add_parser("list", help="List all triggers.")
    parser_list.add_argument(
        "--project",
        help="The Google Cloud project id to use as a parent resource.",
        default=default_project,
    )

    parser_delete = subparsers.add_parser("delete", help="Delete a trigger.")
    parser_delete.add_argument(
        "trigger_id", help="The id of the trigger to delete."
    )
    parser_delete.add_argument(
        "--project",
        help="The Google Cloud project id to use as a parent resource.",
        default=default_project,
    )

    args = parser.parse_args()

    if args.action == "create":
        create_trigger(
            args.project,
            args.bucket,
            args.scan_period_days,
            args.info_types,
            trigger_id=args.trigger_id,
            display_name=args.display_name,
            description=args.description,
            min_likelihood=args.min_likelihood,
            max_findings=args.max_findings,
            auto_populate_timespan=args.auto_populate_timespan,
        )
    elif args.action == "list":
        list_triggers(args.project)
    elif args.action == "delete":
        delete_trigger(args.project, args.trigger_id)

Go

import (
	"context"
	"fmt"
	"io"

	dlp "cloud.google.com/go/dlp/apiv2"
	dlppb "google.golang.org/genproto/googleapis/privacy/dlp/v2"
)

// deleteTrigger deletes the given trigger.
func deleteTrigger(w io.Writer, triggerID string) error {
	// projectID := "my-project-id"
	// triggerID := "my-trigger"

	ctx := context.Background()

	client, err := dlp.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("dlp.NewClient: %v", err)
	}

	req := &dlppb.DeleteJobTriggerRequest{
		Name: triggerID,
	}

	if err := client.DeleteJobTrigger(ctx, req); err != nil {
		return fmt.Errorf("DeleteJobTrigger: %v", err)
	}
	fmt.Fprintf(w, "Successfully deleted trigger %v", triggerID)
	return nil
}

PHP

/**
 * Delete a Data Loss Prevention API job trigger.
 */
use Google\Cloud\Dlp\V2\DlpServiceClient;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';
// $triggerId = 'The name of the trigger to be deleted.';

// Instantiate a client.
$dlp = new DlpServiceClient();

// Run request
// The Parent project ID is automatically extracted from this parameter
$triggerName = "projects/$callingProjectId/locations/global/jobTriggers/$triggerId";
$response = $dlp->deleteJobTrigger($triggerName);

// Print the results
printf('Successfully deleted trigger %s' . PHP_EOL, $triggerName);

C#


using Google.Cloud.Dlp.V2;
using System;

public class TriggersDelete
{

    public static void Delete(string triggerName)
    {
        var dlp = DlpServiceClient.Create();

        dlp.DeleteJobTrigger(
            new DeleteJobTriggerRequest
            {
                Name = triggerName
            });

        Console.WriteLine($"Successfully deleted trigger {triggerName}.");
    }
}

Get a job

To retrieve a job from your project, including its current state and a summary of its results, do the following.

Protocol

To get a job from the current project, send a GET request to the dlpJobs endpoint, as shown here. Replace the [JOB-IDENTIFIER] field with the identifier of the job, which starts with i-.

URL:

GET https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs/[JOB-IDENTIFIER]?key={YOUR_API_KEY}

If the request was successful, the Cloud DLP API returns the requested DlpJob resource in JSON format.

To quickly try this out, you can use the API Explorer that's embedded below. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.

Java


import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.DlpJob;
import com.google.privacy.dlp.v2.DlpJobName;
import com.google.privacy.dlp.v2.GetDlpJobRequest;
import java.io.IOException;

public class JobsGet {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    String jobId = "your-job-id";
    getJobs(projectId, jobId);
  }

  // Gets a DLP Job with the given jobId
  public static void getJobs(String projectId, String jobId) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

      // Construct the complete job name from the projectId and jobId
      DlpJobName jobName = DlpJobName.of(projectId, jobId);

      // Construct the get job request to be sent by the client.
      GetDlpJobRequest getDlpJobRequest =
          GetDlpJobRequest.newBuilder().setName(jobName.toString()).build();

      // Send the get job request and print the job's name and state.
      DlpJob dlpJob = dlpServiceClient.getDlpJob(getDlpJobRequest);
      System.out.println("Job " + dlpJob.getName() + " status: " + dlpJob.getState());
    }
  }
}
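
Python

The client libraries can also fetch a job directly. The following is a minimal sketch written against the same pre-2.0 google-cloud-dlp conventions as the other Python samples in this topic; the function name is illustrative.

def get_dlp_job(project, job_id):
    """Uses the Data Loss Prevention API to fetch a single DLP job."""

    # Import the client library.
    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Convert the project id and job id into a full resource name.
    name = dlp.dlp_job_path(project, job_id)

    # Call the API to fetch the job.
    job = dlp.get_dlp_job(name)

    print("Job: {}; status: {}".format(job.name, job.JobState.Name(job.state)))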

Update an existing job trigger

In addition to creating, listing, and deleting job triggers, you can also update an existing job trigger. To change the configuration for an existing job trigger:

Console

  1. In the Cloud Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. Click the Job triggers tab. The console displays a list of all job triggers for the current project.

  3. In the Actions column for the job trigger you want to update, click the more actions menu (displayed as three dots arranged vertically), and then click View details.

  4. On the job trigger detail page, click Edit.

  5. On the Edit trigger page, you can change the location of the input data; detection details such as templates, infoTypes, or likelihood; any post-scan actions; and the job trigger's schedule. When you're done making changes, click Save.

Protocol

Use the projects.jobTriggers.patch method to send new JobTrigger values to the Cloud DLP API to update those values within a specified job trigger.

For example, consider the following simple job trigger. This JSON represents the job trigger, and was returned after sending a GET request to the current project's job trigger endpoint.

JSON output:

{
  "name":"projects/[PROJECT_ID]/jobTriggers/[JOB_TRIGGER_NAME]",
  "inspectJob":{
    "storageConfig":{
      "cloudStorageOptions":{
        "fileSet":{
          "url":"gs://dlptesting/*"
        },
        "fileTypes":[
          "FILE_TYPE_UNSPECIFIED"
        ],
        "filesLimitPercent":100
      },
      "timespanConfig":{
        "enableAutoPopulationOfTimespanConfig":true
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"US_SOCIAL_SECURITY_NUMBER"
        }
      ],
      "minLikelihood":"POSSIBLE",
      "limits":{

      }
    },
    "actions":[
      {
        "jobNotificationEmails":{

        }
      }
    ]
  },
  "triggers":[
    {
      "schedule":{
        "recurrencePeriodDuration":"86400s"
      }
    }
  ],
  "createTime":"2019-03-06T21:19:45.774841Z",
  "updateTime":"2019-03-06T21:19:45.774841Z",
  "status":"HEALTHY"
}

The following JSON, when sent with a PATCH request to the specified endpoint, updates the given job trigger with a new infoType to scan for, as well as a new minimum likelihood. Note that you must also specify the updateMask attribute, and that its value is in FieldMask format.

JSON input:

PATCH https://dlp.googleapis.com/v2/projects/[PROJECT_ID]/jobTriggers/[JOB_TRIGGER_NAME]?key={YOUR_API_KEY}

{
  "jobTrigger":{
    "inspectJob":{
      "inspectConfig":{
        "infoTypes":[
          {
            "name":"US_INDIVIDUAL_TAXPAYER_IDENTIFICATION_NUMBER"
          }
        ],
        "minLikelihood":"LIKELY"
      }
    }
  },
  "updateMask":"inspectJob(inspectConfig(infoTypes,minLikelihood))"
}

After you send this JSON to the specified URL, the Cloud DLP API returns the following, which represents the updated job trigger. Note that the original infoType and likelihood values have been replaced by the new values.

JSON output:

{
  "name":"projects/[PROJECT_ID]/jobTriggers/[JOB_TRIGGER_NAME]",
  "inspectJob":{
    "storageConfig":{
      "cloudStorageOptions":{
        "fileSet":{
          "url":"gs://dlptesting/*"
        },
        "fileTypes":[
          "FILE_TYPE_UNSPECIFIED"
        ],
        "filesLimitPercent":100
      },
      "timespanConfig":{
        "enableAutoPopulationOfTimespanConfig":true
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"US_INDIVIDUAL_TAXPAYER_IDENTIFICATION_NUMBER"
        }
      ],
      "minLikelihood":"LIKELY",
      "limits":{

      }
    },
    "actions":[
      {
        "jobNotificationEmails":{

        }
      }
    ]
  },
  "triggers":[
    {
      "schedule":{
        "recurrencePeriodDuration":"86400s"
      }
    }
  ],
  "createTime":"2019-03-06T21:19:45.774841Z",
  "updateTime":"2019-03-06T21:27:01.650183Z",
  "lastRunTime":"1970-01-01T00:00:00Z",
  "status":"HEALTHY"
}

To quickly try this out, you can use the API Explorer that's embedded below. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.
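
The client libraries expose the same patch operation. The following minimal Python sketch, written against the same pre-2.0 google-cloud-dlp library as the other Python samples in this topic, applies the equivalent update; the function name and trigger ID are illustrative.

def update_trigger_info_types(project, trigger_id):
    """Patches a job trigger's infoTypes and minimum likelihood."""

    # Import the client library.
    import google.cloud.dlp

    # Instantiate a client.
    dlp = google.cloud.dlp_v2.DlpServiceClient()

    # Combine the trigger id with the parent id.
    name = "{}/jobTriggers/{}".format(dlp.project_path(project), trigger_id)

    # The new values to apply to the trigger.
    job_trigger = {
        "inspect_job": {
            "inspect_config": {
                "info_types": [
                    {"name": "US_INDIVIDUAL_TAXPAYER_IDENTIFICATION_NUMBER"}
                ],
                "min_likelihood": google.cloud.dlp.enums.Likelihood.LIKELY,
            }
        }
    }

    # The update mask restricts the patch to the two fields listed above.
    update_mask = {
        "paths": [
            "inspect_job.inspect_config.info_types",
            "inspect_job.inspect_config.min_likelihood",
        ]
    }

    # Call the API and print the updated trigger's name.
    trigger = dlp.update_job_trigger(
        name, job_trigger=job_trigger, update_mask=update_mask
    )
    print("Updated trigger: {}".format(trigger.name))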

Use a job trigger

This section describes how to use job triggers to scan only new content, and how to use Cloud Functions to trigger a job every time a file is uploaded to Cloud Storage.

Limit scans to only new content

You can set an option to automatically populate the scan timespan for files stored in either Cloud Storage or BigQuery. After you set the TimespanConfig object to auto-populate, Cloud DLP scans only data that has been added or modified since the trigger last ran:

...
  timespan_config {
    enable_auto_population_of_timespan_config: true
  }
...
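
For example, the following minimal Python sketch creates a daily trigger whose timespan is auto-populated, so each run scans only content added or changed since the previous run. It uses the same pre-2.0 google-cloud-dlp conventions as the Python samples above; the project ID, bucket, and infoType are placeholders.

import google.cloud.dlp

# Instantiate a client and build the parent resource name.
dlp = google.cloud.dlp_v2.DlpServiceClient()
parent = dlp.project_path("my-project")  # placeholder project ID

inspect_job = {
    "storage_config": {
        "cloud_storage_options": {
            "file_set": {"url": "gs://my-bucket/*"}  # placeholder bucket
        },
        # Scan only content added or changed since the last trigger run.
        "timespan_config": {"enable_auto_population_of_timespan_config": True},
    },
    "inspect_config": {"info_types": [{"name": "PHONE_NUMBER"}]},
}

trigger = dlp.create_job_trigger(
    parent,
    job_trigger={
        "inspect_job": inspect_job,
        # Run once every 24 hours (86400 seconds).
        "triggers": [
            {"schedule": {"recurrence_period_duration": {"seconds": 86400}}}
        ],
        "status": google.cloud.dlp.enums.JobTrigger.Status.HEALTHY,
    },
)
print("Created trigger {}".format(trigger.name))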

Trigger jobs at file upload

In addition to the support for job triggers that is built into Cloud DLP, Google Cloud also has a variety of other components that you can use to integrate or trigger DLP jobs. For example, you can use Cloud Functions to trigger a DLP scan every time a file is uploaded to Cloud Storage; a sketch of this pattern follows.
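
The following is a minimal, illustrative Python sketch of a background Cloud Function that starts a DLP inspection job for each object uploaded to a bucket. The project ID and infoType are placeholders, and the tutorial linked below covers the production-ready approach.

def inspect_uploaded_object(event, context):
    """Background Cloud Function triggered by a Cloud Storage upload."""

    # Import the client library.
    import google.cloud.dlp

    # Instantiate a client and build the parent resource name.
    dlp = google.cloud.dlp_v2.DlpServiceClient()
    parent = dlp.project_path("my-project")  # placeholder project ID

    # Inspect only the object that triggered this function.
    inspect_job = {
        "storage_config": {
            "cloud_storage_options": {
                "file_set": {
                    "url": "gs://{}/{}".format(event["bucket"], event["name"])
                }
            }
        },
        "inspect_config": {"info_types": [{"name": "PHONE_NUMBER"}]},
    }

    # Start the inspection job.
    job = dlp.create_dlp_job(parent, inspect_job=inspect_job)
    print("Started DLP job {}".format(job.name))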

For step-by-step instructions about how to do this, see Automating the classification of data uploaded to Cloud Storage.