Inspecting storage and databases for sensitive data

Properly managing sensitive data stored in a storage repository starts with storage classification: identifying where your sensitive data is in the repository, what type of sensitive data it is, and how it's being used. This knowledge can help you properly set access control and sharing permissions, and it can be part of an ongoing monitoring plan.

Cloud Data Loss Prevention (DLP) can detect and classify sensitive data stored in a Cloud Storage location, Cloud Datastore kind, or BigQuery table. For a list of the file extensions that Cloud DLP can scan in Cloud Storage, see the API reference page for FileType. Files of unrecognized types are scanned as binary files.

To inspect storage and databases for sensitive data, you specify the location of the data and the type of sensitive data that Cloud DLP should look for. Cloud DLP initiates a job that inspects the data at the given location, and then makes available details about infoTypes found in the content, likelihood values, and more.

You can set up inspection of storage and databases using Cloud DLP in the GCP Console, via the RESTful Cloud DLP API, or programmatically using a Cloud DLP client library in one of several languages.
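For example, the following minimal Python sketch creates an inspect job over a Cloud Storage bucket. It uses the same google-cloud-dlp client library as the complete samples later in this topic; the project ID, bucket name, and infoTypes are placeholder values.

# Minimal sketch: specify where the data lives and which infoTypes to look
# for, then create the inspect job. 'my-project' and 'my-bucket' are
# placeholder values.
import google.cloud.dlp

dlp = google.cloud.dlp.DlpServiceClient()
parent = dlp.project_path('my-project')

inspect_job = {
    'storage_config': {
        'cloud_storage_options': {
            'file_set': {'url': 'gs://my-bucket/*'}
        }
    },
    'inspect_config': {
        'info_types': [{'name': 'PHONE_NUMBER'}, {'name': 'EMAIL_ADDRESS'}]
    },
}

job = dlp.create_dlp_job(parent, inspect_job=inspect_job)
print('Created job: {}'.format(job.name))
# Details about the findings become available once the job completes.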

This topic includes:

  • Best practices for setting up scans of storage repositories and databases.
  • Instructions for setting up an inspection scan using Cloud DLP in the GCP Console, and (optionally) for scheduling periodic repeating inspection scans.
  • Example JSON for each Google Cloud Platform storage repository type (Cloud Storage, Cloud Datastore, and BigQuery), and code samples in several programming languages.
  • A detailed overview of the configuration options for scan jobs.
  • Instructions for how to retrieve scan results and how to manage the scan jobs that are created from each successful request.

Best practices

Identify and prioritize scanning

It's important to evaluate your assets and specify which have the highest priority for scanning. When you're just getting started, you may have a large backlog of data that needs classification, and it will be impossible to scan it all immediately. Initially, choose the data that poses the highest risk, such as data that is frequently accessed, widely accessible, or unknown.

Ensure that Cloud DLP can access your data

Cloud DLP must be able to access data to be scanned. Be sure that the Cloud DLP service account is permitted to read your resources.
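As a hedged sketch, the following Python snippet grants the Cloud DLP service account read access to a Cloud Storage bucket. It assumes the google-cloud-storage library with dict-style IAM policy access, and uses placeholder project, bucket, and service-account values; the Cloud DLP service account typically has the form service-[PROJECT_NUMBER]@dlp-api.iam.gserviceaccount.com.

# Sketch: grant the Cloud DLP service account object-read access on a bucket.
# The bucket name and service account address below are placeholders.
from google.cloud import storage

client = storage.Client(project='my-project')
bucket = client.bucket('my-bucket')

policy = bucket.get_iam_policy()
policy['roles/storage.objectViewer'].add(
    'serviceAccount:service-123456789012@dlp-api.iam.gserviceaccount.com')
bucket.set_iam_policy(policy)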

Limit the scope of your first scans

For best results, limit the scope of your first jobs instead of scanning all of your data. Start with one table, one bucket, or a few files and use sampling. By limiting the scope of your first scans, you can better determine what detectors to enable and what exclusion rules might be needed to reduce false positives so that your findings will be more meaningful. Avoid turning on all infoTypes if you don't need them all, as false positives or unusable findings may make it harder to assess your risk. While useful in certain scenarios, infoTypes such as DATE, TIME, DOMAIN_NAME, and URL match on a broad range of findings and may not be useful to turn on for large data scans.
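As an illustration, a narrowly scoped first job might use a configuration like the following sketch, written in the plain Python dict form used by the client-library samples later in this topic; the bucket path and limits are placeholder values.

# Sketch: a focused first scan -- one bucket prefix, a per-file byte cap, a
# findings limit, and only the detectors you need. Values are placeholders.
inspect_job = {
    'storage_config': {
        'cloud_storage_options': {
            'file_set': {'url': 'gs://my-bucket/some-prefix/*'},
            # Inspect at most ~1 GB of each file.
            'bytes_limit_per_file': 1073741824,
        }
    },
    'inspect_config': {
        # A short, deliberate detector list; broad infoTypes such as DATE,
        # TIME, DOMAIN_NAME, and URL are intentionally omitted.
        'info_types': [{'name': 'EMAIL_ADDRESS'}, {'name': 'CREDIT_CARD_NUMBER'}],
        'limits': {'max_findings_per_request': 100},
    },
}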

Schedule your scans

Use Cloud DLP job triggers to automatically run scans and generate findings daily, weekly, or quarterly. These scans can also be configured to only inspect data that has changed since the last scan, which can save time and reduce costs. Running scans on a regular basis can help you identify trends or anomalies in your scan results.
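As a sketch, a daily trigger that inspects only changed data might look like the following. It assumes the same google-cloud-dlp client library used in the samples later in this topic, and the project and bucket values are placeholders.

# Sketch: a daily job trigger that scans only data added or changed since the
# previous run. Project and bucket names are placeholders.
import google.cloud.dlp

dlp = google.cloud.dlp.DlpServiceClient()
parent = dlp.project_path('my-project')

job_trigger = {
    'inspect_job': {
        'storage_config': {
            'cloud_storage_options': {
                'file_set': {'url': 'gs://my-bucket/*'}
            },
            # Auto-populate the timespan so each run inspects only data that
            # is new or changed since the last successful run.
            'timespan_config': {
                'enable_auto_population_of_timespan_config': True
            },
        },
        'inspect_config': {
            'info_types': [{'name': 'PHONE_NUMBER'}]
        },
    },
    # Run once every 24 hours.
    'triggers': [
        {'schedule': {'recurrence_period_duration': {'seconds': 60 * 60 * 24}}}
    ],
    'status': 'HEALTHY',
}

trigger = dlp.create_job_trigger(parent, job_trigger=job_trigger)
print('Created trigger: {}'.format(trigger.name))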

Before you begin

The instructions provided in this topic assume the following:

  • Storage classification requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform. For more information, see Authenticating to the Cloud DLP API.
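For example, with Application Default Credentials in Python, you can request that scope explicitly; the following is a sketch using the google-auth library.

# Sketch: obtain Application Default Credentials with the required scope.
import google.auth

credentials, project_id = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])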

Inspect a Cloud Storage location

You can set up a Cloud DLP inspection of a Cloud Storage location using the Google Cloud Platform Console, the Cloud DLP API via REST or RPC requests, or programmatically in several languages using a client library.

To set up a scan job of a Cloud Storage bucket using Cloud DLP:

Console

  1. In the GCP Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. From the Create menu, choose Job or job trigger.

  3. Enter the Cloud DLP job information and click Continue to complete each step:

    • For Step 1: Choose input data, name the job by entering a value in the Name field. In Location, choose Cloud Storage from the Storage type menu, and then enter the location of the data to scan. The Sampling section is pre-configured to run a sample scan against your data. You can adjust the Percentage of objects scanned within bucket field to save resources if you have a large amount of data. For more details, see Choose input data.

    • (Optional) For Step 2: Configure detection, you can configure what types of data to look for, called "infoTypes." You can select from the list of pre-defined infoTypes, or you can select a template if one exists. For more details, see Configure detection.

    • (Optional) For Step 3: Add actions, make sure Notify by email is enabled.

      Enable Save to BigQuery to publish your Cloud DLP findings to a BigQuery table. Provide the following:

      • For Project ID, enter the project ID where your results are stored.
      • For Dataset ID, enter the name of the dataset that stores your results.
      • (Optional) For Table ID, enter the name of the table that stores your results. If no table ID is specified, a default name is assigned to a new table similar to the following: dlp_googleapis_[DATE]_1234567890. If you specify an existing table, findings are appended to it.

      You can also save results to Cloud Pub/Sub, Cloud Security Command Center, and Data Catalog. For more details, see Add actions.

    • (Optional) For Step 4: Schedule, to run the scan one time only, leave the menu set to None. To schedule scans to run periodically, click Create a trigger to run the job on a periodic schedule. For more details, see Schedule.

  4. Click Create.

  5. After the Cloud DLP job completes, you are redirected to the job details page and notified via email. You can view the results of the inspection on the job details page.

  6. (Optional) If you chose to publish Cloud DLP findings to BigQuery, on the Job details page, click View Findings in BigQuery to open the table in the BigQuery web UI. You can then query the table and analyze your findings. For more information on querying your results in BigQuery, see Querying Cloud DLP findings in BigQuery.
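For example, the following hedged Python sketch tallies findings by infoType directly from the output table. It assumes the google-cloud-bigquery library and the default findings schema, which includes an info_type.name column; the project, dataset, and table names are placeholders.

# Sketch: count findings per infoType in the BigQuery output table.
# Table path and schema assumptions are noted in the text above.
from google.cloud import bigquery

bq = bigquery.Client(project='my-project')
query = '''
    SELECT info_type.name AS info_type, COUNT(*) AS findings
    FROM `my-project.my_dataset.my_findings_table`
    GROUP BY info_type
    ORDER BY findings DESC
'''
for row in bq.query(query).result():
    print('{}: {}'.format(row.info_type, row.findings))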

Protocol

Following is sample JSON that can be sent in a POST request to the specified Cloud DLP REST endpoint. This example JSON demonstrates how to use the Cloud DLP API to inspect Cloud Storage buckets. For information about the parameters included with the request, see "Configure storage inspection," later in this topic.

To quickly try this out, you can use APIs Explorer. Keep in mind that a successful request, even in APIs Explorer, will create a new scan job. For information about how to control scan jobs, see "Retrieve inspection results," later in this topic. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.

JSON input:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs?key={YOUR_API_KEY}

{
  "inspectJob":{
    "storageConfig":{
      "cloudStorageOptions":{
        "fileSet":{
          "url":"gs://[BUCKET-NAME]/*"
        },
        "bytesLimitPerFile":"1073741824"
      },
      "timespanConfig":{
        "startTime":"2017-11-13T12:34:29.965633345Z",
        "endTime":"2018-01-05T04:45:04.240912125Z"
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"PHONE_NUMBER"
        }
      ],
      "excludeInfoTypes":false,
      "includeQuote":true,
      "minLikelihood":"LIKELY"
    },
    "actions":[
      {
        "saveFindings":{
          "outputConfig":{
            "table":{
              "projectId":"[PROJECT-ID]",
              "datasetId":"[DATASET-ID]"
            }
          }
        }
      }
    ]
  }
}

JSON output:

{
  "name":"projects/[PROJECT-ID]/dlpJobs/[JOB-ID]",
  "type":"INSPECT_JOB",
  "state":"PENDING",
  "inspectDetails":{
    "requestedOptions":{
      "snapshotInspectTemplate":{

      },
      "jobConfig":{
        "storageConfig":{
          "cloudStorageOptions":{
            "fileSet":{
              "url":"gs://[BUCKET-NAME]/*"
            },
            "bytesLimitPerFile":"1073741824"
          },
          "timespanConfig":{
            "startTime":"2017-11-13T12:34:29.965633345Z",
            "endTime":"2018-01-05T04:45:04.240912125Z"
          }
        },
        "inspectConfig":{
          "infoTypes":[
            {
              "name":"PHONE_NUMBER"
            }
          ],
          "minLikelihood":"LIKELY",
          "limits":{

          },
          "includeQuote":true
        },
        "actions":[
          {
            "saveFindings":{
              "outputConfig":{
                "table":{
                  "projectId":"[PROJECT-ID]",
                  "datasetId":"[DATASET-ID]",
                  "tableId":"[NEW-TABLE-ID]"
                }
              }
            }
          }
        ]
      }
    }
  },
  "createTime":"2018-11-07T18:01:14.225Z"
}
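Note that the job is created in the PENDING state and transitions through RUNNING to DONE; the findings summary is available only once the job completes. The following Python sketch polls the job until it reaches a terminal state. It assumes the google-cloud-dlp client library used in the samples below, including its google.cloud.dlp_v2.enums module, and the job name is a placeholder for the name value returned at creation.

# Sketch: poll a DLP inspect job until it finishes, then print per-infoType
# counts. The job name is a placeholder.
import time

import google.cloud.dlp
from google.cloud.dlp_v2 import enums

dlp = google.cloud.dlp.DlpServiceClient()
job_name = 'projects/my-project/dlpJobs/[JOB-ID]'

job = dlp.get_dlp_job(job_name)
while job.state in (enums.DlpJob.JobState.PENDING,
                    enums.DlpJob.JobState.RUNNING):
    # Poll every 30 seconds; the Pub/Sub-based samples below avoid polling.
    time.sleep(30)
    job = dlp.get_dlp_job(job_name)

print('Job state: {}'.format(job.state))
for stat in job.inspect_details.result.info_type_stats:
    print('Info type: {}; Count: {}'.format(stat.info_type.name, stat.count))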

Java

/**
 * Inspect a GCS file for infoTypes, and wait on job completion using a Google Cloud Pub/Sub
 * notification
 *
 * @param bucketName The name of the bucket where the file resides.
 * @param fileName The path to the file within the bucket to inspect (can include wildcards, e.g.
 *     my-image.*)
 * @param minLikelihood The minimum likelihood required before returning a match
 * @param infoTypes The infoTypes of information to match
 * @param customInfoTypes The custom infoTypes of information to match
 * @param maxFindings The maximum number of findings to report (0 = server maximum)
 * @param topicId Google Cloud Pub/Sub topic Id to notify of job status
 * @param subscriptionId Google Cloud Subscription to above topic to listen for job status updates
 * @param projectId Google Cloud project ID
 */
private static void inspectGcsFile(
    String bucketName,
    String fileName,
    Likelihood minLikelihood,
    List<InfoType> infoTypes,
    List<CustomInfoType> customInfoTypes,
    int maxFindings,
    String topicId,
    String subscriptionId,
    String projectId)
    throws Exception {
  // Instantiates a client
  try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

    CloudStorageOptions cloudStorageOptions =
        CloudStorageOptions.newBuilder()
            .setFileSet(
                CloudStorageOptions.FileSet.newBuilder()
                    .setUrl("gs://" + bucketName + "/" + fileName))
            .build();

    StorageConfig storageConfig =
        StorageConfig.newBuilder().setCloudStorageOptions(cloudStorageOptions).build();

    FindingLimits findingLimits =
        FindingLimits.newBuilder().setMaxFindingsPerRequest(maxFindings).build();

    InspectConfig inspectConfig =
        InspectConfig.newBuilder()
            .addAllInfoTypes(infoTypes)
            .addAllCustomInfoTypes(customInfoTypes)
            .setMinLikelihood(minLikelihood)
            .setLimits(findingLimits)
            .build();

    String pubSubTopic = String.format("projects/%s/topics/%s", projectId, topicId);
    Action.PublishToPubSub publishToPubSub =
        Action.PublishToPubSub.newBuilder().setTopic(pubSubTopic).build();

    Action action = Action.newBuilder().setPubSub(publishToPubSub).build();

    InspectJobConfig inspectJobConfig =
        InspectJobConfig.newBuilder()
            .setStorageConfig(storageConfig)
            .setInspectConfig(inspectConfig)
            .addActions(action)
            .build();

    // Semi-synchronously submit an inspect job, and wait on results
    CreateDlpJobRequest createDlpJobRequest =
        CreateDlpJobRequest.newBuilder()
            .setParent(ProjectName.of(projectId).toString())
            .setInspectJob(inspectJobConfig)
            .build();

    DlpJob dlpJob = dlpServiceClient.createDlpJob(createDlpJobRequest);

    System.out.println("Job created with ID:" + dlpJob.getName());

    final SettableApiFuture<Boolean> done = SettableApiFuture.create();

    // Set up a Pub/Sub subscriber to listen on the job completion status
    Subscriber subscriber =
        Subscriber.newBuilder(
                ProjectSubscriptionName.of(projectId, subscriptionId),
          (pubsubMessage, ackReplyConsumer) -> {
            if (pubsubMessage.getAttributesCount() > 0
                && pubsubMessage
                    .getAttributesMap()
                    .get("DlpJobName")
                    .equals(dlpJob.getName())) {
              // notify job completion
              done.set(true);
              ackReplyConsumer.ack();
            }
          })
            .build();
    subscriber.startAsync();

    // Wait for job completion semi-synchronously
    // For long jobs, consider using a truly asynchronous execution model such as Cloud Functions
    try {
      done.get(1, TimeUnit.MINUTES);
      Thread.sleep(500); // Wait for the job to become available
    } catch (Exception e) {
      System.out.println("Unable to verify job completion.");
    }

    DlpJob completedJob =
        dlpServiceClient.getDlpJob(
            GetDlpJobRequest.newBuilder().setName(dlpJob.getName()).build());

    System.out.println("Job status: " + completedJob.getState());
    InspectDataSourceDetails inspectDataSourceDetails = completedJob.getInspectDetails();
    InspectDataSourceDetails.Result result = inspectDataSourceDetails.getResult();
    if (result.getInfoTypeStatsCount() > 0) {
      System.out.println("Findings: ");
      for (InfoTypeStats infoTypeStat : result.getInfoTypeStatsList()) {
        System.out.print("\tInfo type: " + infoTypeStat.getInfoType().getName());
        System.out.println("\tCount: " + infoTypeStat.getCount());
      }
    } else {
      System.out.println("No findings.");
    }
  }
}

Node.js

// Import the Google Cloud client libraries
const DLP = require('@google-cloud/dlp');
const {PubSub} = require('@google-cloud/pubsub');

// Instantiates clients
const dlp = new DLP.DlpServiceClient();
const pubsub = new PubSub();

// The project ID to run the API call under
// const callingProjectId = process.env.GCLOUD_PROJECT;

// The name of the bucket where the file resides.
// const bucketName = 'YOUR-BUCKET';

// The path to the file within the bucket to inspect.
// Can contain wildcards, e.g. "my-image.*"
// const fileName = 'my-image.png';

// The minimum likelihood required before returning a match
// const minLikelihood = 'LIKELIHOOD_UNSPECIFIED';

// The maximum number of findings to report per request (0 = server maximum)
// const maxFindings = 0;

// The infoTypes of information to match
// const infoTypes = [{ name: 'PHONE_NUMBER' }, { name: 'EMAIL_ADDRESS' }, { name: 'CREDIT_CARD_NUMBER' }];

// The customInfoTypes of information to match
// const customInfoTypes = [{ name: 'DICT_TYPE', dictionary: { wordList: { words: ['foo', 'bar', 'baz']}}},
//   { name: 'REGEX_TYPE', regex: '\\(\\d{3}\\) \\d{3}-\\d{4}'}];

// The name of the Pub/Sub topic to notify once the job completes
// TODO(developer): create a Pub/Sub topic to use for this
// const topicId = 'MY-PUBSUB-TOPIC'

// The name of the Pub/Sub subscription to use when listening for job
// completion notifications
// TODO(developer): create a Pub/Sub subscription to use for this
// const subscriptionId = 'MY-PUBSUB-SUBSCRIPTION'

// Get reference to the file to be inspected
const storageItem = {
  cloudStorageOptions: {
    fileSet: {url: `gs://${bucketName}/${fileName}`},
  },
};

// Construct request for creating an inspect job
const request = {
  parent: dlp.projectPath(callingProjectId),
  inspectJob: {
    inspectConfig: {
      infoTypes: infoTypes,
      customInfoTypes: customInfoTypes,
      minLikelihood: minLikelihood,
      limits: {
        maxFindingsPerRequest: maxFindings,
      },
    },
    storageConfig: storageItem,
    actions: [
      {
        pubSub: {
          topic: `projects/${callingProjectId}/topics/${topicId}`,
        },
      },
    ],
  },
};

try {
  // Create a GCS File inspection job and wait for it to complete
  const [topicResponse] = await pubsub.topic(topicId).get();
  // Verify the Pub/Sub topic and listen for job notifications via an
  // existing subscription.
  const subscription = await topicResponse.subscription(subscriptionId);
  const [jobsResponse] = await dlp.createDlpJob(request);
  // Get the job's ID
  const jobName = jobsResponse.name;
  // Watch the Pub/Sub topic until the DLP job finishes
  await new Promise((resolve, reject) => {
    const messageHandler = message => {
      if (message.attributes && message.attributes.DlpJobName === jobName) {
        message.ack();
        subscription.removeListener('message', messageHandler);
        subscription.removeListener('error', errorHandler);
        resolve(jobName);
      } else {
        message.nack();
      }
    };

    const errorHandler = err => {
      subscription.removeListener('message', messageHandler);
      subscription.removeListener('error', errorHandler);
      reject(err);
    };

    subscription.on('message', messageHandler);
    subscription.on('error', errorHandler);
  });

  setTimeout(() => {
    console.log(`Waiting for DLP job to fully complete`);
  }, 500);
  const [job] = await dlp.getDlpJob({name: jobName});
  console.log(`Job ${job.name} status: ${job.state}`);

  const infoTypeStats = job.inspectDetails.result.infoTypeStats;
  if (infoTypeStats.length > 0) {
    infoTypeStats.forEach(infoTypeStat => {
      console.log(
        `  Found ${infoTypeStat.count} instance(s) of infoType ${infoTypeStat.infoType.name}.`
      );
    });
  } else {
    console.log(`No findings.`);
  }
} catch (err) {
  console.log(`Error in inspectGCSFile: ${err.message || err}`);
}

Python

def inspect_gcs_file(project, bucket, filename, topic_id, subscription_id,
                     info_types, custom_dictionaries=None,
                     custom_regexes=None, min_likelihood=None,
                     max_findings=None, timeout=300):
    """Uses the Data Loss Prevention API to analyze a file on GCS.
    Args:
        project: The Google Cloud project id to use as a parent resource.
        bucket: The name of the GCS bucket containing the file, as a string.
        filename: The name of the file in the bucket, including the path, as a
            string; e.g. 'images/myfile.png'.
        topic_id: The id of the Cloud Pub/Sub topic to which the API will
            broadcast job completion. The topic must already exist.
        subscription_id: The id of the Cloud Pub/Sub subscription to listen on
            while waiting for job completion. The subscription must already
            exist and be subscribed to the topic.
        info_types: A list of strings representing info types to look for.
            A full list of info type categories can be fetched from the API.
        custom_dictionaries: A list of strings, each a comma-separated list
            of words to match as a custom dictionary, or None.
        custom_regexes: A list of strings representing custom regular
            expression patterns to match, or None.
        min_likelihood: A string representing the minimum likelihood threshold
            that constitutes a match. One of: 'LIKELIHOOD_UNSPECIFIED',
            'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY'.
        max_findings: The maximum number of findings to report; 0 = no maximum.
        timeout: The number of seconds to wait for a response from the API.
    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library.
    import google.cloud.dlp

    # This sample additionally uses Cloud Pub/Sub to receive results from
    # potentially long-running operations.
    import google.cloud.pubsub

    # This sample also uses threading.Event() to wait for the job to finish.
    import threading

    # Instantiate a client.
    dlp = google.cloud.dlp.DlpServiceClient()

    # Prepare info_types by converting the list of strings into a list of
    # dictionaries (protos are also accepted).
    if not info_types:
        info_types = ['FIRST_NAME', 'LAST_NAME', 'EMAIL_ADDRESS']
    info_types = [{'name': info_type} for info_type in info_types]

    # Prepare custom_info_types by parsing the dictionary word lists and
    # regex patterns.
    if custom_dictionaries is None:
        custom_dictionaries = []
    dictionaries = [{
        'info_type': {'name': 'CUSTOM_DICTIONARY_{}'.format(i)},
        'dictionary': {
            'word_list': {'words': custom_dict.split(',')}
        }
    } for i, custom_dict in enumerate(custom_dictionaries)]
    if custom_regexes is None:
        custom_regexes = []
    regexes = [{
        'info_type': {'name': 'CUSTOM_REGEX_{}'.format(i)},
        'regex': {'pattern': custom_regex}
    } for i, custom_regex in enumerate(custom_regexes)]
    custom_info_types = dictionaries + regexes

    # Construct the configuration dictionary. Keys which are None may
    # optionally be omitted entirely.
    inspect_config = {
        'info_types': info_types,
        'custom_info_types': custom_info_types,
        'min_likelihood': min_likelihood,
        'limits': {'max_findings_per_request': max_findings},
    }

    # Construct a storage_config containing the file's URL.
    url = 'gs://{}/{}'.format(bucket, filename)
    storage_config = {
        'cloud_storage_options': {
            'file_set': {'url': url}
        }
    }

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Tell the API where to send a notification when the job is complete.
    actions = [{
        'pub_sub': {'topic': '{}/topics/{}'.format(parent, topic_id)}
    }]

    # Construct the inspect_job, which defines the entire inspect content task.
    inspect_job = {
        'inspect_config': inspect_config,
        'storage_config': storage_config,
        'actions': actions,
    }

    operation = dlp.create_dlp_job(parent, inspect_job=inspect_job)

    # Create a Pub/Sub client and find the subscription. The subscription is
    # expected to already be listening to the topic.
    subscriber = google.cloud.pubsub.SubscriberClient()
    subscription_path = subscriber.subscription_path(
        project, subscription_id)

    # Set up a callback to acknowledge a message. This closes around an event
    # so that it can signal that it is done and the main thread can continue.
    job_done = threading.Event()

    def callback(message):
        try:
            if (message.attributes['DlpJobName'] == operation.name):
                # This is the message we're looking for, so acknowledge it.
                message.ack()

                # Now that the job is done, fetch the results and print them.
                job = dlp.get_dlp_job(operation.name)
                if job.inspect_details.result.info_type_stats:
                    for finding in job.inspect_details.result.info_type_stats:
                        print('Info type: {}; Count: {}'.format(
                            finding.info_type.name, finding.count))
                else:
                    print('No findings.')

                # Signal to the main thread that we can exit.
                job_done.set()
            else:
                # This is not the message we're looking for.
                message.drop()
        except Exception as e:
            # Because this is executing in a thread, an exception won't be
            # noted unless we print it manually.
            print(e)
            raise

    subscriber.subscribe(subscription_path, callback=callback)
    finished = job_done.wait(timeout=timeout)
    if not finished:
        print('No event received before the timeout. Please verify that the '
              'subscription provided is subscribed to the topic provided.')

Go


// inspectGCSFile searches for the given info types in the given file.
func inspectGCSFile(w io.Writer, client *dlp.Client, project string, minLikelihood dlppb.Likelihood, maxFindings int32, includeQuote bool, infoTypes []string, customDictionaries []string, customRegexes []string, pubSubTopic, pubSubSub, bucketName, fileName string) {
	// Convert the info type strings to a list of InfoTypes.
	var i []*dlppb.InfoType
	for _, it := range infoTypes {
		i = append(i, &dlppb.InfoType{Name: it})
	}
	// Convert the custom dictionary word lists and custom regexes to a list of CustomInfoTypes.
	var customInfoTypes []*dlppb.CustomInfoType
	for idx, it := range customDictionaries {
		customInfoTypes = append(customInfoTypes, &dlppb.CustomInfoType{
			InfoType: &dlppb.InfoType{
				Name: fmt.Sprintf("CUSTOM_DICTIONARY_%d", idx),
			},
			Type: &dlppb.CustomInfoType_Dictionary_{
				Dictionary: &dlppb.CustomInfoType_Dictionary{
					Source: &dlppb.CustomInfoType_Dictionary_WordList_{
						WordList: &dlppb.CustomInfoType_Dictionary_WordList{
							Words: strings.Split(it, ","),
						},
					},
				},
			},
		})
	}
	for idx, it := range customRegexes {
		customInfoTypes = append(customInfoTypes, &dlppb.CustomInfoType{
			InfoType: &dlppb.InfoType{
				Name: fmt.Sprintf("CUSTOM_REGEX_%d", idx),
			},
			Type: &dlppb.CustomInfoType_Regex_{
				Regex: &dlppb.CustomInfoType_Regex{
					Pattern: it,
				},
			},
		})
	}

	ctx := context.Background()

	// Create a PubSub Client used to listen for when the inspect job finishes.
	pClient, err := pubsub.NewClient(ctx, project)
	if err != nil {
		log.Fatalf("Error creating PubSub client: %v", err)
	}
	defer pClient.Close()

	// Create a PubSub subscription we can use to listen for messages.
	s, err := setupPubSub(ctx, pClient, project, pubSubTopic, pubSubSub)
	if err != nil {
		log.Fatalf("Error setting up PubSub: %v\n", err)
	}

	// topic is the PubSub topic string where messages should be sent.
	topic := "projects/" + project + "/topics/" + pubSubTopic

	// Create a configured request.
	req := &dlppb.CreateDlpJobRequest{
		Parent: "projects/" + project,
		Job: &dlppb.CreateDlpJobRequest_InspectJob{
			InspectJob: &dlppb.InspectJobConfig{
				// StorageConfig describes where to find the data.
				StorageConfig: &dlppb.StorageConfig{
					Type: &dlppb.StorageConfig_CloudStorageOptions{
						CloudStorageOptions: &dlppb.CloudStorageOptions{
							FileSet: &dlppb.CloudStorageOptions_FileSet{
								Url: "gs://" + bucketName + "/" + fileName,
							},
						},
					},
				},
				// InspectConfig describes what fields to look for.
				InspectConfig: &dlppb.InspectConfig{
					InfoTypes:       i,
					CustomInfoTypes: customInfoTypes,
					MinLikelihood:   minLikelihood,
					Limits: &dlppb.InspectConfig_FindingLimits{
						MaxFindingsPerRequest: maxFindings,
					},
					IncludeQuote: includeQuote,
				},
				// Send a message to PubSub using Actions.
				Actions: []*dlppb.Action{
					{
						Action: &dlppb.Action_PubSub{
							PubSub: &dlppb.Action_PublishToPubSub{
								Topic: topic,
							},
						},
					},
				},
			},
		},
	}
	// Create the inspect job.
	j, err := client.CreateDlpJob(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintf(w, "Created job: %v\n", j.GetName())

	// Wait for the inspect job to finish by waiting for a PubSub message.
	ctx, cancel := context.WithCancel(ctx)
	err = s.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		// If this is the wrong job, do not process the result.
		if msg.Attributes["DlpJobName"] != j.GetName() {
			msg.Nack()
			return
		}
		msg.Ack()
		resp, err := client.GetDlpJob(ctx, &dlppb.GetDlpJobRequest{
			Name: j.GetName(),
		})
		if err != nil {
			log.Fatalf("Error getting completed job: %v\n", err)
		}
		r := resp.GetInspectDetails().GetResult().GetInfoTypeStats()
		if len(r) == 0 {
			fmt.Fprintf(w, "No results")
		}
		for _, s := range r {
			fmt.Fprintf(w, "  Found %v instances of infoType %v\n", s.GetCount(), s.GetInfoType().GetName())
		}
		// Stop listening for more messages.
		cancel()
	})
	if err != nil {
		log.Fatalf("Error receiving from PubSub: %v\n", err)
	}
}

PHP

/**
 * Inspect a file stored on Google Cloud Storage, using Pub/Sub for job status notifications.
 */
use Google\Cloud\Core\ExponentialBackoff;
use Google\Cloud\Dlp\V2\DlpServiceClient;
use Google\Cloud\Dlp\V2\CloudStorageOptions;
use Google\Cloud\Dlp\V2\CloudStorageOptions\FileSet;
use Google\Cloud\Dlp\V2\InfoType;
use Google\Cloud\Dlp\V2\InspectConfig;
use Google\Cloud\Dlp\V2\StorageConfig;
use Google\Cloud\Dlp\V2\Likelihood;
use Google\Cloud\Dlp\V2\DlpJob\JobState;
use Google\Cloud\Dlp\V2\InspectConfig\FindingLimits;
use Google\Cloud\Dlp\V2\Action;
use Google\Cloud\Dlp\V2\Action\PublishToPubSub;
use Google\Cloud\Dlp\V2\InspectJobConfig;
use Google\Cloud\PubSub\PubSubClient;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';
// $topicId = 'The name of the Pub/Sub topic to notify once the job completes';
// $subscriptionId = 'The name of the Pub/Sub subscription to use when listening for job completion notifications';
// $bucketId = 'The name of the bucket where the file resides';
// $file = 'The path to the file within the bucket to inspect. Can contain wildcards e.g. "my-image.*"';
// $maxFindings = 0;  // (Optional) The maximum number of findings to report per request (0 = server maximum)

// Instantiate a client.
$dlp = new DlpServiceClient([
    'projectId' => $callingProjectId,
]);
$pubsub = new PubSubClient([
    'projectId' => $callingProjectId,
]);
$topic = $pubsub->topic($topicId);

// The infoTypes of information to match
$personNameInfoType = (new InfoType())
    ->setName('PERSON_NAME');
$creditCardNumberInfoType = (new InfoType())
    ->setName('CREDIT_CARD_NUMBER');
$infoTypes = [$personNameInfoType, $creditCardNumberInfoType];

// The minimum likelihood required before returning a match
$minLikelihood = Likelihood::LIKELIHOOD_UNSPECIFIED;

// Specify finding limits
$limits = (new FindingLimits())
    ->setMaxFindingsPerRequest($maxFindings);

// Construct items to be inspected
$fileSet = (new FileSet())
    ->setUrl('gs://' . $bucketId . '/' . $file);

$cloudStorageOptions = (new CloudStorageOptions())
    ->setFileSet($fileSet);

$storageConfig = (new StorageConfig())
    ->setCloudStorageOptions($cloudStorageOptions);

// Construct the inspect config object
$inspectConfig = (new InspectConfig())
    ->setMinLikelihood($minLikelihood)
    ->setLimits($limits)
    ->setInfoTypes($infoTypes);

// Construct the action to run when job completes
$pubSubAction = (new PublishToPubSub())
    ->setTopic($topic->name());

$action = (new Action())
    ->setPubSub($pubSubAction);

// Construct inspect job config to run
$inspectJob = (new InspectJobConfig())
    ->setInspectConfig($inspectConfig)
    ->setStorageConfig($storageConfig)
    ->setActions([$action]);

// Listen for job notifications via an existing topic/subscription.
$subscription = $topic->subscription($subscriptionId);

// Submit request
$parent = $dlp->projectName($callingProjectId);
$job = $dlp->createDlpJob($parent, [
    'inspectJob' => $inspectJob
]);

// Poll Pub/Sub using exponential backoff until job finishes
$backoff = new ExponentialBackoff(25);
$backoff->execute(function () use ($subscription, $dlp, &$job) {
    printf('Waiting for job to complete' . PHP_EOL);
    foreach ($subscription->pull() as $message) {
        if (isset($message->attributes()['DlpJobName']) &&
            $message->attributes()['DlpJobName'] === $job->getName()) {
            $subscription->acknowledge($message);
            // Get the updated job. Loop to avoid race condition with DLP API.
            do {
                $job = $dlp->getDlpJob($job->getName());
            } while ($job->getState() == JobState::RUNNING);
            return true;
        }
    }
    throw new Exception('Job has not yet completed');
});

// Print finding counts
printf('Job %s status: %s' . PHP_EOL, $job->getName(), JobState::name($job->getState()));
switch ($job->getState()) {
    case JobState::DONE:
        $infoTypeStats = $job->getInspectDetails()->getResult()->getInfoTypeStats();
        if (count($infoTypeStats) === 0) {
            print('No findings.' . PHP_EOL);
        } else {
            foreach ($infoTypeStats as $infoTypeStat) {
                printf('  Found %s instance(s) of infoType %s' . PHP_EOL, $infoTypeStat->getCount(), $infoTypeStat->getInfoType()->getName());
            }
        }
        break;
    case JobState::FAILED:
        printf('Job %s had errors:' . PHP_EOL, $job->getName());
        $errors = $job->getErrors();
        foreach ($errors as $error) {
            var_dump($error->getDetails());
        }
        break;
    default:
        print('Unexpected job state. Most likely, the job is either running or has not yet started.');
}

C#


public static object InspectGCS(
    string projectId,
    string minLikelihood,
    int maxFindings,
    bool includeQuote,
    IEnumerable<InfoType> infoTypes,
    IEnumerable<CustomInfoType> customInfoTypes,
    string bucketName,
    string topicId,
    string subscriptionId)
{
    var inspectJob = new InspectJobConfig
    {
        StorageConfig = new StorageConfig
        {
            CloudStorageOptions = new CloudStorageOptions
            {
                FileSet = new CloudStorageOptions.Types.FileSet { Url = $"gs://{bucketName}/*.txt" },
                BytesLimitPerFile = 1073741824
            },
        },
        InspectConfig = new InspectConfig
        {
            InfoTypes = { infoTypes },
            CustomInfoTypes = { customInfoTypes },
            ExcludeInfoTypes = false,
            IncludeQuote = includeQuote,
            Limits = new FindingLimits
            {
                MaxFindingsPerRequest = maxFindings
            },
            MinLikelihood = (Likelihood)System.Enum.Parse(typeof(Likelihood), minLikelihood)
        },
        Actions =
        {
            new Google.Cloud.Dlp.V2.Action
            {
                // Send results to Pub/Sub topic
                PubSub = new Google.Cloud.Dlp.V2.Action.Types.PublishToPubSub
                {
                    // The Pub/Sub action requires the full topic resource name.
                    Topic = $"projects/{projectId}/topics/{topicId}",
                }
            }
        }
    };

    // Issue Create Dlp Job Request
    DlpServiceClient client = DlpServiceClient.Create();
    var request = new CreateDlpJobRequest
    {
        InspectJob = inspectJob,
        ParentAsProjectName = new ProjectName(projectId),
    };

    // We need created job name
    var dlpJob = client.CreateDlpJob(request);

    // Get a pub/sub subscription and listen for DLP results
    var fireEvent = new ManualResetEventSlim();

    var subscriptionName = new SubscriptionName(projectId, subscriptionId);
    var subscriber = SubscriberClient.CreateAsync(subscriptionName).Result;
    subscriber.StartAsync(
        (pubSubMessage, cancellationToken) =>
        {
            // Given a message that we receive on this subscription, we should either acknowledge or decline it
            if (pubSubMessage.Attributes["DlpJobName"] == dlpJob.Name)
            {
                fireEvent.Set();
                return Task.FromResult(SubscriberClient.Reply.Ack);
            }

            return Task.FromResult(SubscriberClient.Reply.Nack);
        });

    // We block here until receiving a signal from a separate thread that is waiting on a message indicating receiving a result of Dlp job
    if (fireEvent.Wait(TimeSpan.FromMinutes(1)))
    {
        // Stop the thread that is listening to messages as a result of StartAsync call earlier
        subscriber.StopAsync(CancellationToken.None).Wait();

        // Now we can inspect full job results
        // dlpJob.Name is already the full resource name, so pass it directly.
        var job = client.GetDlpJob(new GetDlpJobRequest { Name = dlpJob.Name });

        // Inspect Job details
        Console.WriteLine($"Processed bytes: {job.InspectDetails.Result.ProcessedBytes}");
        Console.WriteLine($"Total estimated bytes: {job.InspectDetails.Result.TotalEstimatedBytes}");
        var stats = job.InspectDetails.Result.InfoTypeStats;
        Console.WriteLine("Found stats:");
        foreach (var stat in stats)
        {
            Console.WriteLine($"{stat.InfoType.Name}");
        }
    }
    else
    {
        Console.WriteLine("Error: The wait failed on timeout");
    }

    return 0;
}

Inspect a Cloud Datastore kind

You can set up an inspection of a Cloud Datastore kind using the Google Cloud Platform Console, the Cloud DLP API via REST or RPC requests, or programmatically in several languages using a client library.

To set up a scan job of a Cloud Datastore kind using Cloud DLP:

Console

  1. In the GCP Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. From the Create menu, choose Job or job trigger.

  3. Enter the Cloud DLP job information and click Continue to complete each step:

    • For Step 1: Choose input data, enter the identifiers for the project, namespace (optional), and kind that you want to scan. For more details, see Choose input data.

    • (Optional) For Step 2: Configure detection, you can configure what types of data to look for, called "infoTypes." You can select from the list of pre-defined infoTypes, or you can select a template if one exists. For more details, see Configure detection.

    • (Optional) For Step 3: Add actions, make sure Notify by email is enabled.

      Enable Save to BigQuery to publish your Cloud DLP findings to a BigQuery table. Provide the following:

      • For Project ID, enter the project ID where your results are stored.
      • For Dataset ID, enter the name of the dataset that stores your results.
      • (Optional) For Table ID, enter the name of the table that stores your results. If no table ID is specified, a default name is assigned to a new table similar to the following: dlp_googleapis_[DATE]_1234567890. If you specify an existing table, findings are appended to it.

      For more information about the other actions listed, see Add actions.

    • (Optional) For Step 4: Schedule, configure a time span or schedule by selecting either Specify time span or Create a trigger to run the job on a periodic schedule. For more information, see Schedule.

  4. Click Create.

  5. After the Cloud DLP job completes, you are redirected to the job details page and notified via email. You can view the results of the inspection on the job details page.

  6. (Optional) If you chose to publish Cloud DLP findings to BigQuery, on the Job details page, click View Findings in BigQuery to open the table in the BigQuery web UI. You can then query the table and analyze your findings. For more information on querying your results in BigQuery, see Querying Cloud DLP findings in BigQuery.

Protocol

Following is sample JSON that can be sent in a POST request to the specified Cloud DLP API REST endpoint. This example JSON demonstrates how to use the Cloud DLP API to inspect Cloud Datastore kinds. For information about the parameters included with the request, see "Configure storage inspection," later in this topic.

To quickly try this out, you can use APIs Explorer. Keep in mind that a successful request, even in APIs Explorer, will create a new scan job. For information about how to control scan jobs, see "Retrieve inspection results," later in this topic. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.

JSON input:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs?key={YOUR_API_KEY}

{
  "inspectJob":{
    "storageConfig":{
      "datastoreOptions":{
        "kind":{
          "name":"Example-Kind"
        },
        "partitionId":{
          "namespaceId":"[NAMESPACE-ID]",
          "projectId":"[PROJECT-ID]"
        }
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"PHONE_NUMBER"
        }
      ],
      "excludeInfoTypes":false,
      "includeQuote":true,
      "minLikelihood":"LIKELY"
    },
    "actions":[
      {
        "saveFindings":{
          "outputConfig":{
            "table":{
              "projectId":"[PROJECT-ID]",
              "datasetId":"[BIGQUERY-DATASET-NAME]",
              "tableId":"[BIGQUERY-TABLE-NAME]"
            }
          }
        }
      }
    ]
  }
}

Java

/**
 * Inspect a Datastore kind
 *
 * @param projectId The project ID containing the target Datastore
 * @param namespaceId The namespace ID of the Datastore entities to inspect
 * @param kind The kind of the Datastore entity to inspect
 * @param minLikelihood The minimum likelihood required before returning a match
 * @param infoTypes The infoTypes of information to match
 * @param customInfoTypes The custom infoTypes of information to match
 * @param maxFindings The maximum number of findings to report (0 = server maximum)
 * @param topicId Google Cloud Pub/Sub topic to notify job status updates
 * @param subscriptionId Google Cloud Pub/Sub subscription to above topic to receive status
 *     updates
 */
private static void inspectDatastore(
    String projectId,
    String namespaceId,
    String kind,
    Likelihood minLikelihood,
    List<InfoType> infoTypes,
    List<CustomInfoType> customInfoTypes,
    int maxFindings,
    String topicId,
    String subscriptionId) {
  // Instantiates a client
  try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {

    // Reference to the Datastore namespace
    PartitionId partitionId =
        PartitionId.newBuilder().setProjectId(projectId).setNamespaceId(namespaceId).build();

    // Reference to the Datastore kind
    KindExpression kindExpression = KindExpression.newBuilder().setName(kind).build();
    DatastoreOptions datastoreOptions =
        DatastoreOptions.newBuilder().setKind(kindExpression).setPartitionId(partitionId).build();

    // Construct Datastore configuration to be inspected
    StorageConfig storageConfig =
        StorageConfig.newBuilder().setDatastoreOptions(datastoreOptions).build();

    FindingLimits findingLimits =
        FindingLimits.newBuilder().setMaxFindingsPerRequest(maxFindings).build();

    InspectConfig inspectConfig =
        InspectConfig.newBuilder()
            .addAllInfoTypes(infoTypes)
            .addAllCustomInfoTypes(customInfoTypes)
            .setMinLikelihood(minLikelihood)
            .setLimits(findingLimits)
            .build();

    String pubSubTopic = String.format("projects/%s/topics/%s", projectId, topicId);
    Action.PublishToPubSub publishToPubSub =
        Action.PublishToPubSub.newBuilder().setTopic(pubSubTopic).build();

    Action action = Action.newBuilder().setPubSub(publishToPubSub).build();

    InspectJobConfig inspectJobConfig =
        InspectJobConfig.newBuilder()
            .setStorageConfig(storageConfig)
            .setInspectConfig(inspectConfig)
            .addActions(action)
            .build();

    // Asynchronously submit an inspect job, and wait on results
    CreateDlpJobRequest createDlpJobRequest =
        CreateDlpJobRequest.newBuilder()
            .setParent(ProjectName.of(projectId).toString())
            .setInspectJob(inspectJobConfig)
            .build();

    DlpJob dlpJob = dlpServiceClient.createDlpJob(createDlpJobRequest);

    System.out.println("Job created with ID:" + dlpJob.getName());

    final SettableApiFuture<Boolean> done = SettableApiFuture.create();

    // Set up a Pub/Sub subscriber to listen on the job completion status
    Subscriber subscriber =
        Subscriber.newBuilder(
                ProjectSubscriptionName.of(projectId, subscriptionId),
          (pubsubMessage, ackReplyConsumer) -> {
            if (pubsubMessage.getAttributesCount() > 0
                && pubsubMessage
                    .getAttributesMap()
                    .get("DlpJobName")
                    .equals(dlpJob.getName())) {
              // notify job completion
              done.set(true);
              ackReplyConsumer.ack();
            }
          })
            .build();
    subscriber.startAsync();

    // Wait for job completion semi-synchronously
    // For long jobs, consider using a truly asynchronous execution model such as Cloud Functions
    try {
      done.get(1, TimeUnit.MINUTES);
      Thread.sleep(500); // Wait for the job to become available
    } catch (Exception e) {
      System.out.println("Unable to verify job completion.");
    }

    DlpJob completedJob =
        dlpServiceClient.getDlpJob(
            GetDlpJobRequest.newBuilder().setName(dlpJob.getName()).build());

    System.out.println("Job status: " + completedJob.getState());
    InspectDataSourceDetails inspectDataSourceDetails = completedJob.getInspectDetails();
    InspectDataSourceDetails.Result result = inspectDataSourceDetails.getResult();
    if (result.getInfoTypeStatsCount() > 0) {
      System.out.println("Findings: ");
      for (InfoTypeStats infoTypeStat : result.getInfoTypeStatsList()) {
        System.out.print("\tInfo type: " + infoTypeStat.getInfoType().getName());
        System.out.println("\tCount: " + infoTypeStat.getCount());
      }
    } else {
      System.out.println("No findings.");
    }
  } catch (Exception e) {
    System.out.println("inspectDatastore Problems: " + e.getMessage());
  }
}

Node.js

// Import the Google Cloud client libraries
const DLP = require('@google-cloud/dlp');
const {PubSub} = require('@google-cloud/pubsub');

// Instantiates clients
const dlp = new DLP.DlpServiceClient();
const pubsub = new PubSub();

// The project ID to run the API call under
// const callingProjectId = process.env.GCLOUD_PROJECT;

// The project ID the target Datastore is stored under
// This may or may not equal the calling project ID
// const dataProjectId = process.env.GCLOUD_PROJECT;

// (Optional) The ID namespace of the Datastore document to inspect.
// To ignore Datastore namespaces, set this to an empty string ('')
// const namespaceId = '';

// The kind of the Datastore entity to inspect.
// const kind = 'Person';

// The minimum likelihood required before returning a match
// const minLikelihood = 'LIKELIHOOD_UNSPECIFIED';

// The maximum number of findings to report per request (0 = server maximum)
// const maxFindings = 0;

// The infoTypes of information to match
// const infoTypes = [{ name: 'PHONE_NUMBER' }, { name: 'EMAIL_ADDRESS' }, { name: 'CREDIT_CARD_NUMBER' }];

// The customInfoTypes of information to match
// const customInfoTypes = [{ name: 'DICT_TYPE', dictionary: { wordList: { words: ['foo', 'bar', 'baz']}}},
//   { name: 'REGEX_TYPE', regex: '\\(\\d{3}\\) \\d{3}-\\d{4}'}];

// The name of the Pub/Sub topic to notify once the job completes
// TODO(developer): create a Pub/Sub topic to use for this
// const topicId = 'MY-PUBSUB-TOPIC'

// The name of the Pub/Sub subscription to use when listening for job
// completion notifications
// TODO(developer): create a Pub/Sub subscription to use for this
// const subscriptionId = 'MY-PUBSUB-SUBSCRIPTION'

// Construct items to be inspected
const storageItems = {
  datastoreOptions: {
    partitionId: {
      projectId: dataProjectId,
      namespaceId: namespaceId,
    },
    kind: {
      name: kind,
    },
  },
};

// Construct request for creating an inspect job
const request = {
  parent: dlp.projectPath(callingProjectId),
  inspectJob: {
    inspectConfig: {
      infoTypes: infoTypes,
      customInfoTypes: customInfoTypes,
      minLikelihood: minLikelihood,
      limits: {
        maxFindingsPerRequest: maxFindings,
      },
    },
    storageConfig: storageItems,
    actions: [
      {
        pubSub: {
          topic: `projects/${callingProjectId}/topics/${topicId}`,
        },
      },
    ],
  },
};
try {
  // Run inspect-job creation request
  const [topicResponse] = await pubsub.topic(topicId).get();
  // Verify the Pub/Sub topic and listen for job notifications via an
  // existing subscription.
  const subscription = await topicResponse.subscription(subscriptionId);
  const [jobsResponse] = await dlp.createDlpJob(request);
  const jobName = jobsResponse.name;
  // Watch the Pub/Sub topic until the DLP job finishes
  await new Promise((resolve, reject) => {
    const messageHandler = message => {
      if (message.attributes && message.attributes.DlpJobName === jobName) {
        message.ack();
        subscription.removeListener('message', messageHandler);
        subscription.removeListener('error', errorHandler);
        resolve(jobName);
      } else {
        message.nack();
      }
    };

    const errorHandler = err => {
      subscription.removeListener('message', messageHandler);
      subscription.removeListener('error', errorHandler);
      reject(err);
    };

    subscription.on('message', messageHandler);
    subscription.on('error', errorHandler);
  });
  // Wait for DLP job to fully complete
  setTimeout(() => {
    console.log(`Waiting for DLP job to fully complete`);
  }, 500);
  const [job] = await dlp.getDlpJob({name: jobName});
  console.log(`Job ${job.name} status: ${job.state}`);

  const infoTypeStats = job.inspectDetails.result.infoTypeStats;
  if (infoTypeStats.length > 0) {
    infoTypeStats.forEach(infoTypeStat => {
      console.log(
        `  Found ${infoTypeStat.count} instance(s) of infoType ${infoTypeStat.infoType.name}.`
      );
    });
  } else {
    console.log(`No findings.`);
  }
} catch (err) {
  console.log(`Error in inspectDatastore: ${err.message || err}`);
}

Python

def inspect_datastore(project, datastore_project, kind,
                      topic_id, subscription_id, info_types,
                      custom_dictionaries=None, custom_regexes=None,
                      namespace_id=None, min_likelihood=None,
                      max_findings=None, timeout=300):
    """Uses the Data Loss Prevention API to analyze Datastore data.
    Args:
        project: The Google Cloud project id to use as a parent resource.
        datastore_project: The Google Cloud project id of the target Datastore.
        kind: The kind of the Datastore entity to inspect, e.g. 'Person'.
        topic_id: The id of the Cloud Pub/Sub topic to which the API will
            broadcast job completion. The topic must already exist.
        subscription_id: The id of the Cloud Pub/Sub subscription to listen on
            while waiting for job completion. The subscription must already
            exist and be subscribed to the topic.
        info_types: A list of strings representing info types to look for.
            A full list of info type categories can be fetched from the API.
        custom_dictionaries: A list of strings, each a comma-separated list
            of words to match as a custom dictionary, or None.
        custom_regexes: A list of strings representing custom regular
            expression patterns to match, or None.
        namespace_id: The namespace of the Datastore document, if applicable.
        min_likelihood: A string representing the minimum likelihood threshold
            that constitutes a match. One of: 'LIKELIHOOD_UNSPECIFIED',
            'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY'.
        max_findings: The maximum number of findings to report; 0 = no maximum.
        timeout: The number of seconds to wait for a response from the API.
    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library.
    import google.cloud.dlp

    # This sample additionally uses Cloud Pub/Sub to receive results from
    # potentially long-running operations.
    import google.cloud.pubsub

    # This sample also uses threading.Event() to wait for the job to finish.
    import threading

    # Instantiate a client.
    dlp = google.cloud.dlp.DlpServiceClient()

    # Prepare info_types by converting the list of strings into a list of
    # dictionaries (protos are also accepted).
    if not info_types:
        info_types = ['FIRST_NAME', 'LAST_NAME', 'EMAIL_ADDRESS']
    info_types = [{'name': info_type} for info_type in info_types]

    # Prepare custom_info_types by parsing the dictionary word lists and
    # regex patterns.
    if custom_dictionaries is None:
        custom_dictionaries = []
    dictionaries = [{
        'info_type': {'name': 'CUSTOM_DICTIONARY_{}'.format(i)},
        'dictionary': {
            'word_list': {'words': custom_dict.split(',')}
        }
    } for i, custom_dict in enumerate(custom_dictionaries)]
    if custom_regexes is None:
        custom_regexes = []
    regexes = [{
        'info_type': {'name': 'CUSTOM_REGEX_{}'.format(i)},
        'regex': {'pattern': custom_regex}
    } for i, custom_regex in enumerate(custom_regexes)]
    custom_info_types = dictionaries + regexes

    # Construct the configuration dictionary. Keys which are None may
    # optionally be omitted entirely.
    inspect_config = {
        'info_types': info_types,
        'custom_info_types': custom_info_types,
        'min_likelihood': min_likelihood,
        'limits': {'max_findings_per_request': max_findings},
    }

    # Construct a storage_config containing the target Datastore info.
    storage_config = {
        'datastore_options': {
            'partition_id': {
                'project_id': datastore_project,
                'namespace_id': namespace_id,
            },
            'kind': {
                'name': kind
            },
        }
    }

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Tell the API where to send a notification when the job is complete.
    actions = [{
        'pub_sub': {'topic': '{}/topics/{}'.format(parent, topic_id)}
    }]

    # Construct the inspect_job, which defines the entire inspect content task.
    inspect_job = {
        'inspect_config': inspect_config,
        'storage_config': storage_config,
        'actions': actions,
    }

    operation = dlp.create_dlp_job(parent, inspect_job=inspect_job)

    # Create a Pub/Sub client and find the subscription. The subscription is
    # expected to already be listening to the topic.
    subscriber = google.cloud.pubsub.SubscriberClient()
    subscription_path = subscriber.subscription_path(
        project, subscription_id)

    # Set up a callback to acknowledge a message. This closes around an event
    # so that it can signal that it is done and the main thread can continue.
    job_done = threading.Event()

    def callback(message):
        try:
            if (message.attributes['DlpJobName'] == operation.name):
                # This is the message we're looking for, so acknowledge it.
                message.ack()

                # Now that the job is done, fetch the results and print them.
                job = dlp.get_dlp_job(operation.name)
                if job.inspect_details.result.info_type_stats:
                    for finding in job.inspect_details.result.info_type_stats:
                        print('Info type: {}; Count: {}'.format(
                            finding.info_type.name, finding.count))
                else:
                    print('No findings.')

                # Signal to the main thread that we can exit.
                job_done.set()
            else:
                # This is not the message we're looking for.
                message.drop()
        except Exception as e:
            # Because this is executing in a thread, an exception won't be
            # noted unless we print it manually.
            print(e)
            raise

    # Register the callback and wait on the event.
    subscriber.subscribe(subscription_path, callback=callback)

    finished = job_done.wait(timeout=timeout)
    if not finished:
        print('No event received before the timeout. Please verify that the '
              'subscription provided is subscribed to the topic provided.')

Go


// inspectDatastore searches for the given info types in the given Datastore kind.
func inspectDatastore(w io.Writer, client *dlp.Client, project string, minLikelihood dlppb.Likelihood, maxFindings int32, includeQuote bool, infoTypes []string, customDictionaries []string, customRegexes []string, pubSubTopic, pubSubSub, dataProject, namespaceID, kind string) {
	// Convert the info type strings to a list of InfoTypes.
	var i []*dlppb.InfoType
	for _, it := range infoTypes {
		i = append(i, &dlppb.InfoType{Name: it})
	}
	// Convert the custom dictionary word lists and custom regexes to a list of CustomInfoTypes.
	var customInfoTypes []*dlppb.CustomInfoType
	for idx, it := range customDictionaries {
		customInfoTypes = append(customInfoTypes, &dlppb.CustomInfoType{
			InfoType: &dlppb.InfoType{
				Name: fmt.Sprintf("CUSTOM_DICTIONARY_%d", idx),
			},
			Type: &dlppb.CustomInfoType_Dictionary_{
				Dictionary: &dlppb.CustomInfoType_Dictionary{
					Source: &dlppb.CustomInfoType_Dictionary_WordList_{
						WordList: &dlppb.CustomInfoType_Dictionary_WordList{
							Words: strings.Split(it, ","),
						},
					},
				},
			},
		})
	}
	for idx, it := range customRegexes {
		customInfoTypes = append(customInfoTypes, &dlppb.CustomInfoType{
			InfoType: &dlppb.InfoType{
				Name: fmt.Sprintf("CUSTOM_REGEX_%d", idx),
			},
			Type: &dlppb.CustomInfoType_Regex_{
				Regex: &dlppb.CustomInfoType_Regex{
					Pattern: it,
				},
			},
		})
	}

	ctx := context.Background()

	// Create a PubSub Client used to listen for when the inspect job finishes.
	pClient, err := pubsub.NewClient(ctx, project)
	if err != nil {
		log.Fatalf("Error creating PubSub client: %v", err)
	}
	defer pClient.Close()

	// Create a PubSub subscription we can use to listen for messages.
	s, err := setupPubSub(ctx, pClient, project, pubSubTopic, pubSubSub)
	if err != nil {
		log.Fatalf("Error setting up PubSub: %v\n", err)
	}

	// topic is the PubSub topic string where messages should be sent.
	topic := "projects/" + project + "/topics/" + pubSubTopic

	// Create a configured request.
	req := &dlppb.CreateDlpJobRequest{
		Parent: "projects/" + project,
		Job: &dlppb.CreateDlpJobRequest_InspectJob{
			InspectJob: &dlppb.InspectJobConfig{
				// StorageConfig describes where to find the data.
				StorageConfig: &dlppb.StorageConfig{
					Type: &dlppb.StorageConfig_DatastoreOptions{
						DatastoreOptions: &dlppb.DatastoreOptions{
							PartitionId: &dlppb.PartitionId{
								ProjectId:   dataProject,
								NamespaceId: namespaceID,
							},
							Kind: &dlppb.KindExpression{
								Name: kind,
							},
						},
					},
				},
				// InspectConfig describes what fields to look for.
				InspectConfig: &dlppb.InspectConfig{
					InfoTypes:       i,
					CustomInfoTypes: customInfoTypes,
					MinLikelihood:   minLikelihood,
					Limits: &dlppb.InspectConfig_FindingLimits{
						MaxFindingsPerRequest: maxFindings,
					},
					IncludeQuote: includeQuote,
				},
				// Send a message to PubSub using Actions.
				Actions: []*dlppb.Action{
					{
						Action: &dlppb.Action_PubSub{
							PubSub: &dlppb.Action_PublishToPubSub{
								Topic: topic,
							},
						},
					},
				},
			},
		},
	}
	// Create the inspect job.
	j, err := client.CreateDlpJob(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintf(w, "Created job: %v\n", j.GetName())

	// Wait for the inspect job to finish by waiting for a PubSub message.
	ctx, cancel := context.WithCancel(ctx)
	err = s.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		// If this is the wrong job, do not process the result.
		if msg.Attributes["DlpJobName"] != j.GetName() {
			msg.Nack()
			return
		}
		msg.Ack()
		resp, err := client.GetDlpJob(ctx, &dlppb.GetDlpJobRequest{
			Name: j.GetName(),
		})
		if err != nil {
			log.Fatalf("Error getting completed job: %v\n", err)
		}
		r := resp.GetInspectDetails().GetResult().GetInfoTypeStats()
		if len(r) == 0 {
			fmt.Fprintf(w, "No results")
		}
		for _, s := range r {
			fmt.Fprintf(w, "  Found %v instances of infoType %v\n", s.GetCount(), s.GetInfoType().GetName())
		}
		// Stop listening for more messages.
		cancel()
	})
	if err != nil {
		log.Fatalf("Error receiving from PubSub: %v\n", err)
	}
}

PHP

/**
 * Inspect Datastore, using Pub/Sub for job status notifications.
 */
use Google\Cloud\Core\ExponentialBackoff;
use Google\Cloud\Dlp\V2\DlpServiceClient;
use Google\Cloud\Dlp\V2\DatastoreOptions;
use Google\Cloud\Dlp\V2\InfoType;
use Google\Cloud\Dlp\V2\Action;
use Google\Cloud\Dlp\V2\Action\PublishToPubSub;
use Google\Cloud\Dlp\V2\InspectConfig;
use Google\Cloud\Dlp\V2\InspectJobConfig;
use Google\Cloud\Dlp\V2\KindExpression;
use Google\Cloud\Dlp\V2\PartitionId;
use Google\Cloud\Dlp\V2\StorageConfig;
use Google\Cloud\Dlp\V2\Likelihood;
use Google\Cloud\Dlp\V2\DlpJob\JobState;
use Google\Cloud\Dlp\V2\InspectConfig\FindingLimits;
use Google\Cloud\PubSub\PubSubClient;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';
// $dataProjectId = 'The project ID containing the target Datastore';
// $topicId = 'The name of the Pub/Sub topic to notify once the job completes';
// $subscriptionId = 'The name of the Pub/Sub subscription to use when listening for job completion notifications';
// $kind = 'The datastore kind to inspect';
// $namespaceId = 'The namespace ID of the Datastore document to inspect';
// $maxFindings = 0;  // (Optional) The maximum number of findings to report per request (0 = server maximum)

// Instantiate clients
$dlp = new DlpServiceClient();
$pubsub = new PubSubClient();
$topic = $pubsub->topic($topicId);

// The infoTypes of information to match
$personNameInfoType = (new InfoType())
    ->setName('PERSON_NAME');
$phoneNumberInfoType = (new InfoType())
    ->setName('PHONE_NUMBER');
$infoTypes = [$personNameInfoType, $phoneNumberInfoType];

// The minimum likelihood required before returning a match
$minLikelihood = Likelihood::LIKELIHOOD_UNSPECIFIED;

// Specify finding limits
$limits = (new FindingLimits())
    ->setMaxFindingsPerRequest($maxFindings);

// Construct items to be inspected
$partitionId = (new PartitionId())
    ->setProjectId($dataProjectId)
    ->setNamespaceId($namespaceId);

$kindExpression = (new KindExpression())
    ->setName($kind);

$datastoreOptions = (new DatastoreOptions())
    ->setPartitionId($partitionId)
    ->setKind($kindExpression);

// Construct the inspect config object
$inspectConfig = (new InspectConfig())
    ->setInfoTypes($infoTypes)
    ->setMinLikelihood($minLikelihood)
    ->setLimits($limits);

// Construct the storage config object
$storageConfig = (new StorageConfig())
    ->setDatastoreOptions($datastoreOptions);

// Construct the action to run when job completes
$pubSubAction = (new PublishToPubSub())
    ->setTopic($topic->name());

$action = (new Action())
    ->setPubSub($pubSubAction);

// Construct inspect job config to run
$inspectJob = (new InspectJobConfig())
    ->setInspectConfig($inspectConfig)
    ->setStorageConfig($storageConfig)
    ->setActions([$action]);

// Listen for job notifications via an existing topic/subscription.
$subscription = $topic->subscription($subscriptionId);

// Submit request
$parent = $dlp->projectName($callingProjectId);
$job = $dlp->createDlpJob($parent, [
    'inspectJob' => $inspectJob
]);

// Poll Pub/Sub using exponential backoff until job finishes
$backoff = new ExponentialBackoff(25);
$backoff->execute(function () use ($subscription, $dlp, &$job) {
    printf('Waiting for job to complete' . PHP_EOL);
    foreach ($subscription->pull() as $message) {
        if (isset($message->attributes()['DlpJobName']) &&
            $message->attributes()['DlpJobName'] === $job->getName()) {
            $subscription->acknowledge($message);
            // Get the updated job. Loop to avoid race condition with DLP API.
            do {
                $job = $dlp->getDlpJob($job->getName());
            } while ($job->getState() == JobState::RUNNING);
            return true;
        }
    }
    throw new Exception('Job has not yet completed');
});

// Print finding counts
printf('Job %s status: %s' . PHP_EOL, $job->getName(), JobState::name($job->getState()));
switch ($job->getState()) {
    case JobState::DONE:
        $infoTypeStats = $job->getInspectDetails()->getResult()->getInfoTypeStats();
        if (count($infoTypeStats) === 0) {
            print('No findings.' . PHP_EOL);
        } else {
            foreach ($infoTypeStats as $infoTypeStat) {
                printf('  Found %s instance(s) of infoType %s' . PHP_EOL, $infoTypeStat->getCount(), $infoTypeStat->getInfoType()->getName());
            }
        }
        break;
    case JobState::FAILED:
        printf('Job %s had errors:' . PHP_EOL, $job->getName());
        $errors = $job->getErrors();
        foreach ($errors as $error) {
            var_dump($error->getDetails());
        }
        break;
    default:
        print('Unexpected job state.');
}

C#

public static object InspectCloudDataStore(
    string projectId,
    string minLikelihood,
    int maxFindings,
    bool includeQuote,
    string kindName,
    string namespaceId,
    IEnumerable<InfoType> infoTypes,
    IEnumerable<CustomInfoType> customInfoTypes,
    string datasetId,
    string tableId)
{
    var inspectJob = new InspectJobConfig
    {
        StorageConfig = new StorageConfig
        {
            DatastoreOptions = new DatastoreOptions
            {
                Kind = new KindExpression { Name = kindName },
                PartitionId = new PartitionId
                {
                    NamespaceId = namespaceId,
                    ProjectId = projectId,
                }
            },
            TimespanConfig = new StorageConfig.Types.TimespanConfig
            {
                StartTime = Timestamp.FromDateTime(System.DateTime.UtcNow.AddYears(-1)),
                EndTime = Timestamp.FromDateTime(System.DateTime.UtcNow)
            }
        },

        InspectConfig = new InspectConfig
        {
            InfoTypes = { infoTypes },
            CustomInfoTypes = { customInfoTypes },
            Limits = new FindingLimits
            {
                MaxFindingsPerRequest = maxFindings
            },
            ExcludeInfoTypes = false,
            IncludeQuote = includeQuote,
            MinLikelihood = (Likelihood)System.Enum.Parse(typeof(Likelihood), minLikelihood)
        },
        Actions =
        {
            new Google.Cloud.Dlp.V2.Action
            {
                // Save results in BigQuery Table
                SaveFindings = new Google.Cloud.Dlp.V2.Action.Types.SaveFindings
                {
                    OutputConfig = new OutputStorageConfig
                    {
                        Table = new Google.Cloud.Dlp.V2.BigQueryTable
                        {
                            ProjectId = projectId,
                            DatasetId = datasetId,
                            TableId = tableId
                        }
                    }
                },
            }
        }
    };

    // Issue Create Dlp Job Request
    DlpServiceClient client = DlpServiceClient.Create();
    var request = new CreateDlpJobRequest
    {
        InspectJob = inspectJob,
        ParentAsProjectName = new ProjectName(projectId),
    };

    // We need the name of the created job
    var dlpJob = client.CreateDlpJob(request);
    var jobName = dlpJob.Name;

    // Make sure the job finishes before inspecting the results.
    // Alternatively, we could inspect results opportunistically, but
    // for testing purposes we want a consistent outcome.
    bool jobFinished = EnsureJobFinishes(projectId, jobName);
    if (jobFinished)
    {
        var bigQueryClient = BigQueryClient.Create(projectId);
        var table = bigQueryClient.GetTable(datasetId, tableId);

        // Fetch and print only the first page of 10 rows
        Console.WriteLine("DLP v2 Results:");
        var firstPage = table.ListRows(new ListRowsOptions { StartIndex = 0, PageSize = 10 });
        foreach (var item in firstPage)
        {
            Console.WriteLine($"\t {item[""]}");
        }
    }

    return 0;
}

Inspect a BigQuery table

You can set up an inspection of a BigQuery table using Cloud DLP via REST requests, or programmatically in several languages using a client library.

To set up a scan job of a BigQuery table using Cloud DLP:

Console

  1. In the GCP Console, open Cloud DLP.

    Go to Cloud DLP UI

  2. From the Create menu, choose Job or job trigger.

  3. Enter the Cloud DLP job information and click Continue to complete each step:

    • For Step 1: Choose input data, name the job by entering a value in the Name field. In Location, choose BigQuery from the Storage type menu, and then enter the information for the table to scan. The Sampling section is pre-configured to run a sample scan against your data. You can adjust the Limit rows by and Maximum number of rows fields to save resources if you have a large amount of data. For more details, see Choose input data.

    • (Optional) For Step 2: Configure detection, you can configure what types of data to look for, called "infoTypes." You can select from the list of pre-defined infoTypes, or you can select a template if one exists. For more details, see Configure detection.

    • (Optional) For Step 3: Add actions, make sure Notify by email is enabled.

      Enable Save to BigQuery to publish your Cloud DLP findings to a BigQuery table. Provide the following:

      • For Project ID, enter the project ID where your results are stored.
      • For Dataset ID, enter the name of the dataset that stores your results.
      • (Optional) For Table ID, enter the name of the table that stores your results. If no table ID is specified, a default name is assigned to a new table similar to the following: dlp_googleapis_[DATE]_1234567890. If you specify an existing table, findings are appended to it.

      You can also save results to Cloud Pub/Sub, Cloud Security Command Center, and Data Catalog. For more details, see Add actions.

    • (Optional) For Step 4: Schedule, to run the scan one time only, leave the menu set to None. To schedule scans to run periodically, click Create a trigger to run the job on a periodic schedule. For more details, see Schedule.

  4. Click Create.

  5. After the Cloud DLP job completes, you are redirected to the job details page and notified via email. You can view the results of the inspection on the job details page.

  6. (Optional) If you chose to publish Cloud DLP findings to BigQuery, on the Job details page, click View Findings in BigQuery to open the table in the BigQuery web UI. You can then query the table and analyze your findings. For more information on querying your results in BigQuery, see Querying Cloud DLP findings in BigQuery.

Protocol

Following is sample JSON that can be sent in a POST request to the specified Cloud DLP API REST endpoint. This example JSON demonstrates how to use the Cloud DLP API to inspect BigQuery tables. For information about the parameters included with the request, see "Configure storage inspection," later in this topic.

To quickly try this out, you can use APIs Explorer. Keep in mind that a successful request, even in APIs Explorer, will create a new scan job. For information about how to control scan jobs, see "Retrieve inspection results," later in this topic. For general information about using JSON to send requests to the Cloud DLP API, see the JSON quickstart.

JSON input:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs?key={YOUR_API_KEY}

{
  "inspectJob":{
    "storageConfig":{
      "bigQueryOptions":{
        "tableReference":{
          "projectId":"[PROJECT-ID]",
          "datasetId":"[BIGQUERY-DATASET-NAME]",
          "tableId":"[BIGQUERY-TABLE-NAME]"
        },
        "identifyingFields":[
          {
            "name":"person.contactinfo"
          }
        ]
      },
      "timespanConfig":{
        "startTime":"2017-11-13T12:34:29.965633345Z ",
        "endTime":"2018-01-05T04:45:04.240912125Z "
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"PHONE_NUMBER"
        }
      ],
      "excludeInfoTypes":false,
      "includeQuote":true,
      "minLikelihood":"LIKELY"
    },
    "actions":[
      {
        "saveFindings":{
          "outputConfig":{
            "table":{
              "projectId":"[PROJECT-ID]",
              "datasetId":"[BIGQUERY-DATASET-NAME]",
              "tableId":"[BIGQUERY-TABLE-NAME]"
            }
          },
          "outputSchema": "BASIC_COLUMNS"
        }
      }
    ]
  }
}

Java

/**
 * Inspect a BigQuery table
 *
 * @param projectId The project ID to run the API call under
 * @param datasetId The ID of the dataset to inspect, e.g. 'my_dataset'
 * @param tableId The ID of the table to inspect, e.g. 'my_table'
 * @param minLikelihood The minimum likelihood required before returning a match
 * @param infoTypes The infoTypes of information to match
 * @param maxFindings The maximum number of findings to report (0 = server maximum)
 * @param topicId Topic ID for pubsub.
 * @param subscriptionId Subscription ID for pubsub.
 */
private static void inspectBigquery(
    String projectId,
    String datasetId,
    String tableId,
    Likelihood minLikelihood,
    List<InfoType> infoTypes,
    List<CustomInfoType> customInfoTypes,
    int maxFindings,
    String topicId,
    String subscriptionId) {
  // Instantiates a client
  try (DlpServiceClient dlpServiceClient = DlpServiceClient.create()) {
    // Reference to the BigQuery table
    BigQueryTable tableReference =
        BigQueryTable.newBuilder()
            .setProjectId(projectId)
            .setDatasetId(datasetId)
            .setTableId(tableId)
            .build();
    BigQueryOptions bigQueryOptions =
        BigQueryOptions.newBuilder().setTableReference(tableReference).build();

    // Construct BigQuery configuration to be inspected
    StorageConfig storageConfig =
        StorageConfig.newBuilder().setBigQueryOptions(bigQueryOptions).build();

    FindingLimits findingLimits =
        FindingLimits.newBuilder().setMaxFindingsPerRequest(maxFindings).build();

    InspectConfig inspectConfig =
        InspectConfig.newBuilder()
            .addAllInfoTypes(infoTypes)
            .addAllCustomInfoTypes(customInfoTypes)
            .setMinLikelihood(minLikelihood)
            .setLimits(findingLimits)
            .build();

    ProjectTopicName topic = ProjectTopicName.of(projectId, topicId);
    Action.PublishToPubSub publishToPubSub =
        Action.PublishToPubSub.newBuilder().setTopic(topic.toString()).build();

    Action action = Action.newBuilder().setPubSub(publishToPubSub).build();

    InspectJobConfig inspectJobConfig =
        InspectJobConfig.newBuilder()
            .setStorageConfig(storageConfig)
            .setInspectConfig(inspectConfig)
            .addActions(action)
            .build();

    // Asynchronously submit an inspect job, and wait on results
    CreateDlpJobRequest createDlpJobRequest =
        CreateDlpJobRequest.newBuilder()
            .setParent(ProjectName.of(projectId).toString())
            .setInspectJob(inspectJobConfig)
            .build();

    DlpJob dlpJob = dlpServiceClient.createDlpJob(createDlpJobRequest);

    System.out.println("Job created with ID:" + dlpJob.getName());

    // Wait for job completion semi-synchronously
    // For long jobs, consider using a truly asynchronous execution model such as Cloud Functions
    final SettableApiFuture<Boolean> done = SettableApiFuture.create();

    // Set up a Pub/Sub subscriber to listen on the job completion status
    Subscriber subscriber =
        Subscriber.newBuilder(
                ProjectSubscriptionName.of(projectId, subscriptionId),
                (pubsubMessage, ackReplyConsumer) -> {
                  if (pubsubMessage.getAttributesCount() > 0
                      && pubsubMessage
                          .getAttributesMap()
                          .get("DlpJobName")
                          .equals(dlpJob.getName())) {
                    // Notify job completion.
                    done.set(true);
                    ackReplyConsumer.ack();
                  }
                })
            .build();
    subscriber.startAsync();

    try {
      done.get(1, TimeUnit.MINUTES);
      Thread.sleep(500); // Wait for the job to become available
    } catch (Exception e) {
      System.out.println("Unable to verify job completion.");
    }

    DlpJob completedJob =
        dlpServiceClient.getDlpJob(
            GetDlpJobRequest.newBuilder().setName(dlpJob.getName()).build());

    System.out.println("Job status: " + completedJob.getState());
    InspectDataSourceDetails inspectDataSourceDetails = completedJob.getInspectDetails();
    InspectDataSourceDetails.Result result = inspectDataSourceDetails.getResult();
    if (result.getInfoTypeStatsCount() > 0) {
      System.out.println("Findings: ");
      for (InfoTypeStats infoTypeStat : result.getInfoTypeStatsList()) {
        System.out.print("\tInfo type: " + infoTypeStat.getInfoType().getName());
        System.out.println("\tCount: " + infoTypeStat.getCount());
      }
    } else {
      System.out.println("No findings.");
    }
  } catch (Exception e) {
    System.out.println("inspectBigquery Problems: " + e.getMessage());
  }
}

Node.js

// Import the Google Cloud client libraries
const DLP = require('@google-cloud/dlp');
const {PubSub} = require('@google-cloud/pubsub');

// Instantiates clients
const dlp = new DLP.DlpServiceClient();
const pubsub = new PubSub();

// The project ID to run the API call under
// const callingProjectId = process.env.GCLOUD_PROJECT;

// The project ID the table is stored under
// This may or (for public datasets) may not equal the calling project ID
// const dataProjectId = process.env.GCLOUD_PROJECT;

// The ID of the dataset to inspect, e.g. 'my_dataset'
// const datasetId = 'my_dataset';

// The ID of the table to inspect, e.g. 'my_table'
// const tableId = 'my_table';

// The minimum likelihood required before returning a match
// const minLikelihood = 'LIKELIHOOD_UNSPECIFIED';

// The maximum number of findings to report per request (0 = server maximum)
// const maxFindings = 0;

// The infoTypes of information to match
// const infoTypes = [{ name: 'PHONE_NUMBER' }, { name: 'EMAIL_ADDRESS' }, { name: 'CREDIT_CARD_NUMBER' }];

// The customInfoTypes of information to match
// const customInfoTypes = [{ name: 'DICT_TYPE', dictionary: { wordList: { words: ['foo', 'bar', 'baz']}}},
//   { name: 'REGEX_TYPE', regex: '\\(\\d{3}\\) \\d{3}-\\d{4}'}];

// The name of the Pub/Sub topic to notify once the job completes
// TODO(developer): create a Pub/Sub topic to use for this
// const topicId = 'MY-PUBSUB-TOPIC'

// The name of the Pub/Sub subscription to use when listening for job
// completion notifications
// TODO(developer): create a Pub/Sub subscription to use for this
// const subscriptionId = 'MY-PUBSUB-SUBSCRIPTION'

// Construct item to be inspected
const storageItem = {
  bigQueryOptions: {
    tableReference: {
      projectId: dataProjectId,
      datasetId: datasetId,
      tableId: tableId,
    },
  },
};

// Construct request for creating an inspect job
const request = {
  parent: dlp.projectPath(callingProjectId),
  inspectJob: {
    inspectConfig: {
      infoTypes: infoTypes,
      customInfoTypes: customInfoTypes,
      minLikelihood: minLikelihood,
      limits: {
        maxFindingsPerRequest: maxFindings,
      },
    },
    storageConfig: storageItem,
    actions: [
      {
        pubSub: {
          topic: `projects/${callingProjectId}/topics/${topicId}`,
        },
      },
    ],
  },
};

try {
  // Run inspect-job creation request
  const [topicResponse] = await pubsub.topic(topicId).get();
  // Verify the Pub/Sub topic and listen for job notifications via an
  // existing subscription.
  const subscription = topicResponse.subscription(subscriptionId);
  const [jobsResponse] = await dlp.createDlpJob(request);
  const jobName = jobsResponse.name;
  // Watch the Pub/Sub topic until the DLP job finishes
  await new Promise((resolve, reject) => {
    const messageHandler = message => {
      if (message.attributes && message.attributes.DlpJobName === jobName) {
        message.ack();
        subscription.removeListener('message', messageHandler);
        subscription.removeListener('error', errorHandler);
        resolve(jobName);
      } else {
        message.nack();
      }
    };

    const errorHandler = err => {
      subscription.removeListener('message', messageHandler);
      subscription.removeListener('error', errorHandler);
      reject(err);
    };

    subscription.on('message', messageHandler);
    subscription.on('error', errorHandler);
  });
  // Wait briefly for the DLP job results to become available before fetching.
  await new Promise(resolve => setTimeout(resolve, 500));
  const [job] = await dlp.getDlpJob({name: jobName});
  console.log(`Job ${job.name} status: ${job.state}`);

  const infoTypeStats = job.inspectDetails.result.infoTypeStats;
  if (infoTypeStats.length > 0) {
    infoTypeStats.forEach(infoTypeStat => {
      console.log(
        `  Found ${infoTypeStat.count} instance(s) of infoType ${infoTypeStat.infoType.name}.`
      );
    });
  } else {
    console.log(`No findings.`);
  }
} catch (err) {
  console.log(`Error in inspectBigquery: ${err.message || err}`);
}

Python

def inspect_bigquery(project, bigquery_project, dataset_id, table_id,
                     topic_id, subscription_id, info_types,
                     custom_dictionaries=None, custom_regexes=None,
                     min_likelihood=None, max_findings=None, timeout=300):
    """Uses the Data Loss Prevention API to analyze BigQuery data.
    Args:
        project: The Google Cloud project id to use as a parent resource.
        bigquery_project: The Google Cloud project id of the target table.
        dataset_id: The id of the target BigQuery dataset.
        table_id: The id of the target BigQuery table.
        topic_id: The id of the Cloud Pub/Sub topic to which the API will
            broadcast job completion. The topic must already exist.
        subscription_id: The id of the Cloud Pub/Sub subscription to listen on
            while waiting for job completion. The subscription must already
            exist and be subscribed to the topic.
        info_types: A list of strings representing info types to look for.
            A full list of info type categories can be fetched from the API.
        custom_dictionaries: A list of comma-separated strings, each
            representing a custom dictionary word list.
        custom_regexes: A list of strings representing custom regex patterns.
        min_likelihood: A string representing the minimum likelihood threshold
            that constitutes a match. One of: 'LIKELIHOOD_UNSPECIFIED',
            'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY'.
        max_findings: The maximum number of findings to report; 0 = no maximum.
        timeout: The number of seconds to wait for a response from the API.
    Returns:
        None; the response from the API is printed to the terminal.
    """

    # Import the client library.
    import google.cloud.dlp

    # This sample additionally uses Cloud Pub/Sub to receive results from
    # potentially long-running operations.
    import google.cloud.pubsub

    # This sample also uses threading.Event() to wait for the job to finish.
    import threading

    # Instantiate a client.
    dlp = google.cloud.dlp.DlpServiceClient()

    # Prepare info_types by converting the list of strings into a list of
    # dictionaries (protos are also accepted).
    if not info_types:
        info_types = ['FIRST_NAME', 'LAST_NAME', 'EMAIL_ADDRESS']
    info_types = [{'name': info_type} for info_type in info_types]

    # Prepare custom_info_types by parsing the dictionary word lists and
    # regex patterns.
    if custom_dictionaries is None:
        custom_dictionaries = []
    dictionaries = [{
        'info_type': {'name': 'CUSTOM_DICTIONARY_{}'.format(i)},
        'dictionary': {
            'word_list': {'words': custom_dict.split(',')}
        }
    } for i, custom_dict in enumerate(custom_dictionaries)]
    if custom_regexes is None:
        custom_regexes = []
    regexes = [{
        'info_type': {'name': 'CUSTOM_REGEX_{}'.format(i)},
        'regex': {'pattern': custom_regex}
    } for i, custom_regex in enumerate(custom_regexes)]
    custom_info_types = dictionaries + regexes

    # Construct the configuration dictionary. Keys which are None may
    # optionally be omitted entirely.
    inspect_config = {
        'info_types': info_types,
        'custom_info_types': custom_info_types,
        'min_likelihood': min_likelihood,
        'limits': {'max_findings_per_request': max_findings},
    }

    # Construct a storage_config containing the target BigQuery info.
    storage_config = {
        'big_query_options': {
            'table_reference': {
                'project_id': bigquery_project,
                'dataset_id': dataset_id,
                'table_id': table_id,
            }
        }
    }

    # Convert the project id into a full resource id.
    parent = dlp.project_path(project)

    # Tell the API where to send a notification when the job is complete.
    actions = [{
        'pub_sub': {'topic': '{}/topics/{}'.format(parent, topic_id)}
    }]

    # Construct the inspect_job, which defines the entire inspect content task.
    inspect_job = {
        'inspect_config': inspect_config,
        'storage_config': storage_config,
        'actions': actions,
    }

    operation = dlp.create_dlp_job(parent, inspect_job=inspect_job)

    # Create a Pub/Sub client and find the subscription. The subscription is
    # expected to already be listening to the topic.
    subscriber = google.cloud.pubsub.SubscriberClient()
    subscription_path = subscriber.subscription_path(
        project, subscription_id)

    # Set up a callback to acknowledge a message. This closes over an event
    # so that it can signal that it is done and the main thread can continue.
    job_done = threading.Event()

    def callback(message):
        try:
            if message.attributes['DlpJobName'] == operation.name:
                # This is the message we're looking for, so acknowledge it.
                message.ack()

                # Now that the job is done, fetch the results and print them.
                job = dlp.get_dlp_job(operation.name)
                if job.inspect_details.result.info_type_stats:
                    for finding in job.inspect_details.result.info_type_stats:
                        print('Info type: {}; Count: {}'.format(
                            finding.info_type.name, finding.count))
                else:
                    print('No findings.')

                # Signal to the main thread that we can exit.
                job_done.set()
            else:
                # This is not the message we're looking for.
                message.drop()
        except Exception as e:
            # Because this is executing in a thread, an exception won't be
            # noted unless we print it manually.
            print(e)
            raise

    # Register the callback and wait on the event.
    subscriber.subscribe(subscription_path, callback=callback)
    finished = job_done.wait(timeout=timeout)
    if not finished:
        print('No event received before the timeout. Please verify that the '
              'subscription provided is subscribed to the topic provided.')

Go


// inspectBigquery searches for the given info types in the given BigQuery table.
func inspectBigquery(w io.Writer, client *dlp.Client, project string, minLikelihood dlppb.Likelihood, maxFindings int32, includeQuote bool, infoTypes []string, customDictionaries []string, customRegexes []string, pubSubTopic, pubSubSub, dataProject, datasetID, tableID string) {
	// Convert the info type strings to a list of InfoTypes.
	var i []*dlppb.InfoType
	for _, it := range infoTypes {
		i = append(i, &dlppb.InfoType{Name: it})
	}
	// Convert the custom dictionary word lists and custom regexes to a list of CustomInfoTypes.
	var customInfoTypes []*dlppb.CustomInfoType
	for idx, it := range customDictionaries {
		customInfoTypes = append(customInfoTypes, &dlppb.CustomInfoType{
			InfoType: &dlppb.InfoType{
				Name: fmt.Sprintf("CUSTOM_DICTIONARY_%d", idx),
			},
			Type: &dlppb.CustomInfoType_Dictionary_{
				Dictionary: &dlppb.CustomInfoType_Dictionary{
					Source: &dlppb.CustomInfoType_Dictionary_WordList_{
						WordList: &dlppb.CustomInfoType_Dictionary_WordList{
							Words: strings.Split(it, ","),
						},
					},
				},
			},
		})
	}
	for idx, it := range customRegexes {
		customInfoTypes = append(customInfoTypes, &dlppb.CustomInfoType{
			InfoType: &dlppb.InfoType{
				Name: fmt.Sprintf("CUSTOM_REGEX_%d", idx),
			},
			Type: &dlppb.CustomInfoType_Regex_{
				Regex: &dlppb.CustomInfoType_Regex{
					Pattern: it,
				},
			},
		})
	}

	ctx := context.Background()

	// Create a PubSub Client used to listen for when the inspect job finishes.
	pClient, err := pubsub.NewClient(ctx, project)
	if err != nil {
		log.Fatalf("Error creating PubSub client: %v", err)
	}
	defer pClient.Close()

	// Create a PubSub subscription we can use to listen for messages.
	s, err := setupPubSub(ctx, pClient, project, pubSubTopic, pubSubSub)
	if err != nil {
		log.Fatalf("Error setting up PubSub: %v\n", err)
	}

	// topic is the PubSub topic string where messages should be sent.
	topic := "projects/" + project + "/topics/" + pubSubTopic

	// Create a configured request.
	req := &dlppb.CreateDlpJobRequest{
		Parent: "projects/" + project,
		Job: &dlppb.CreateDlpJobRequest_InspectJob{
			InspectJob: &dlppb.InspectJobConfig{
				// StorageConfig describes where to find the data.
				StorageConfig: &dlppb.StorageConfig{
					Type: &dlppb.StorageConfig_BigQueryOptions{
						BigQueryOptions: &dlppb.BigQueryOptions{
							TableReference: &dlppb.BigQueryTable{
								ProjectId: dataProject,
								DatasetId: datasetID,
								TableId:   tableID,
							},
						},
					},
				},
				// InspectConfig describes what fields to look for.
				InspectConfig: &dlppb.InspectConfig{
					InfoTypes:       i,
					CustomInfoTypes: customInfoTypes,
					MinLikelihood:   minLikelihood,
					Limits: &dlppb.InspectConfig_FindingLimits{
						MaxFindingsPerRequest: maxFindings,
					},
					IncludeQuote: includeQuote,
				},
				// Send a message to PubSub using Actions.
				Actions: []*dlppb.Action{
					{
						Action: &dlppb.Action_PubSub{
							PubSub: &dlppb.Action_PublishToPubSub{
								Topic: topic,
							},
						},
					},
				},
			},
		},
	}
	// Create the inspect job.
	j, err := client.CreateDlpJob(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintf(w, "Created job: %v\n", j.GetName())

	// Wait for the inspect job to finish by waiting for a PubSub message.
	ctx, cancel := context.WithCancel(ctx)
	err = s.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		// If this is the wrong job, do not process the result.
		if msg.Attributes["DlpJobName"] != j.GetName() {
			msg.Nack()
			return
		}
		msg.Ack()
		resp, err := client.GetDlpJob(ctx, &dlppb.GetDlpJobRequest{
			Name: j.GetName(),
		})
		if err != nil {
			log.Fatalf("Error getting completed job: %v\n", err)
		}
		r := resp.GetInspectDetails().GetResult().GetInfoTypeStats()
		if len(r) == 0 {
			fmt.Fprintf(w, "No results")
		}
		for _, s := range r {
			fmt.Fprintf(w, "  Found %v instances of infoType %v\n", s.GetCount(), s.GetInfoType().GetName())
		}
		// Stop listening for more messages.
		cancel()
	})
	if err != nil {
		log.Fatalf("Error receiving from PubSub: %v\n", err)
	}
}

PHP

/**
 * Inspect a BigQuery table, using Pub/Sub for job status notifications.
 */
use Google\Cloud\Core\ExponentialBackoff;
use Google\Cloud\Dlp\V2\DlpServiceClient;
use Google\Cloud\Dlp\V2\BigQueryOptions;
use Google\Cloud\Dlp\V2\InfoType;
use Google\Cloud\Dlp\V2\InspectConfig;
use Google\Cloud\Dlp\V2\StorageConfig;
use Google\Cloud\Dlp\V2\BigQueryTable;
use Google\Cloud\Dlp\V2\Likelihood;
use Google\Cloud\Dlp\V2\DlpJob\JobState;
use Google\Cloud\Dlp\V2\InspectConfig\FindingLimits;
use Google\Cloud\Dlp\V2\Action;
use Google\Cloud\Dlp\V2\Action\PublishToPubSub;
use Google\Cloud\Dlp\V2\InspectJobConfig;
use Google\Cloud\PubSub\PubSubClient;

/** Uncomment and populate these variables in your code */
// $callingProjectId = 'The project ID to run the API call under';
// $dataProjectId = 'The project ID containing the target BigQuery table';
// $topicId = 'The name of the Pub/Sub topic to notify once the job completes';
// $subscriptionId = 'The name of the Pub/Sub subscription to use when listening for job completion notifications';
// $datasetId = 'The ID of the dataset to inspect';
// $tableId = 'The ID of the table to inspect';
// $maxFindings = 0;  // (Optional) The maximum number of findings to report per request (0 = server maximum)

// Instantiate clients
$dlp = new DlpServiceClient();
$pubsub = new PubSubClient();
$topic = $pubsub->topic($topicId);

// The infoTypes of information to match
$personNameInfoType = (new InfoType())
    ->setName('PERSON_NAME');
$creditCardNumberInfoType = (new InfoType())
    ->setName('CREDIT_CARD_NUMBER');
$infoTypes = [$personNameInfoType, $creditCardNumberInfoType];

// The minimum likelihood required before returning a match
$minLikelihood = Likelihood::LIKELIHOOD_UNSPECIFIED;

// Specify finding limits
$limits = (new FindingLimits())
    ->setMaxFindingsPerRequest($maxFindings);

// Construct items to be inspected
$bigqueryTable = (new BigQueryTable())
    ->setProjectId($dataProjectId)
    ->setDatasetId($datasetId)
    ->setTableId($tableId);

$bigQueryOptions = (new BigQueryOptions())
    ->setTableReference($bigqueryTable);

$storageConfig = (new StorageConfig())
    ->setBigQueryOptions($bigQueryOptions);

// Construct the inspect config object
$inspectConfig = (new InspectConfig())
    ->setMinLikelihood($minLikelihood)
    ->setLimits($limits)
    ->setInfoTypes($infoTypes);

// Construct the action to run when job completes
$pubSubAction = (new PublishToPubSub())
    ->setTopic($topic->name());

$action = (new Action())
    ->setPubSub($pubSubAction);

// Construct inspect job config to run
$inspectJob = (new InspectJobConfig())
    ->setInspectConfig($inspectConfig)
    ->setStorageConfig($storageConfig)
    ->setActions([$action]);

// Listen for job notifications via an existing topic/subscription.
$subscription = $topic->subscription($subscriptionId);

// Submit request
$parent = $dlp->projectName($callingProjectId);
$job = $dlp->createDlpJob($parent, [
    'inspectJob' => $inspectJob
]);

// Poll Pub/Sub using exponential backoff until job finishes
$backoff = new ExponentialBackoff(25);
$backoff->execute(function () use ($subscription, $dlp, &$job) {
    printf('Waiting for job to complete' . PHP_EOL);
    foreach ($subscription->pull() as $message) {
        if (isset($message->attributes()['DlpJobName']) &&
            $message->attributes()['DlpJobName'] === $job->getName()) {
            $subscription->acknowledge($message);
            // Get the updated job. Loop to avoid race condition with DLP API.
            do {
                $job = $dlp->getDlpJob($job->getName());
            } while ($job->getState() == JobState::RUNNING);
            return true;
        }
    }
    throw new Exception('Job has not yet completed');
});

// Print finding counts
printf('Job %s status: %s' . PHP_EOL, $job->getName(), JobState::name($job->getState()));
switch ($job->getState()) {
    case JobState::DONE:
        $infoTypeStats = $job->getInspectDetails()->getResult()->getInfoTypeStats();
        if (count($infoTypeStats) === 0) {
            print('No findings.' . PHP_EOL);
        } else {
            foreach ($infoTypeStats as $infoTypeStat) {
                printf(
                    '  Found %s instance(s) of infoType %s' . PHP_EOL,
                    $infoTypeStat->getCount(),
                    $infoTypeStat->getInfoType()->getName()
                );
            }
        }
        break;
    case JobState::FAILED:
        printf('Job %s had errors:' . PHP_EOL, $job->getName());
        $errors = $job->getErrors();
        foreach ($errors as $error) {
            var_dump($error->getDetails());
        }
        break;
    default:
        printf('Unexpected job state. Most likely, the job is either running or has not yet started.');
}

C#

public static object InspectBigQuery(
    string projectId,
    string minLikelihood,
    int maxFindings,
    bool includeQuote,
    IEnumerable<FieldId> identifyingFields,
    IEnumerable<InfoType> infoTypes,
    IEnumerable<CustomInfoType> customInfoTypes,
    string datasetId,
    string tableId)
{
    var inspectJob = new InspectJobConfig
    {
        StorageConfig = new StorageConfig
        {
            BigQueryOptions = new BigQueryOptions
            {
                TableReference = new Google.Cloud.Dlp.V2.BigQueryTable
                {
                    ProjectId = projectId,
                    DatasetId = datasetId,
                    TableId = tableId,
                },
                IdentifyingFields =
                {
                    identifyingFields
                }
            },

            TimespanConfig = new StorageConfig.Types.TimespanConfig
            {
                StartTime = Timestamp.FromDateTime(System.DateTime.UtcNow.AddYears(-1)),
                EndTime = Timestamp.FromDateTime(System.DateTime.UtcNow)
            }
        },

        InspectConfig = new InspectConfig
        {
            InfoTypes = { infoTypes },
            CustomInfoTypes = { customInfoTypes },
            Limits = new FindingLimits
            {
                MaxFindingsPerRequest = maxFindings
            },
            ExcludeInfoTypes = false,
            IncludeQuote = includeQuote,
            MinLikelihood = (Likelihood)System.Enum.Parse(typeof(Likelihood), minLikelihood)
        },
        Actions =
        {
            new Google.Cloud.Dlp.V2.Action
            {
                // Save results in BigQuery Table
                SaveFindings = new Google.Cloud.Dlp.V2.Action.Types.SaveFindings
                {
                    OutputConfig = new OutputStorageConfig
                    {
                        Table = new Google.Cloud.Dlp.V2.BigQueryTable
                        {
                            ProjectId = projectId,
                            DatasetId = datasetId,
                            TableId = tableId
                        }
                    }
                },
            }
        }
    };

    // Issue Create Dlp Job Request
    DlpServiceClient client = DlpServiceClient.Create();
    var request = new CreateDlpJobRequest
    {
        InspectJob = inspectJob,
        ParentAsProjectName = new ProjectName(projectId),
    };

    // We need the name of the created job
    var dlpJob = client.CreateDlpJob(request);
    string jobName = dlpJob.Name;

    // Make sure the job finishes before inspecting the results.
    // Alternatively, we could inspect results opportunistically, but
    // for testing purposes we want a consistent outcome.
    bool jobFinished = EnsureJobFinishes(projectId, jobName);
    if (jobFinished)
    {
        var bigQueryClient = BigQueryClient.Create(projectId);
        var table = bigQueryClient.GetTable(datasetId, tableId);

        // Fetch and print only the first page of 10 rows
        Console.WriteLine("DLP v2 Results:");
        var firstPage = table.ListRows(new ListRowsOptions { StartIndex = 0, PageSize = 10 });
        foreach (var item in firstPage)
        {
            Console.WriteLine($"\t {item[""]}");
        }
    }

    return 0;
}

Configure storage inspection

To inspect a Cloud Storage location, Cloud Datastore kind, or BigQuery table, you send a request to the projects.dlpJobs.create method of the Cloud DLP API that contains at least the location of the data to scan and what to scan for. Beyond those required parameters, you can also specify where to write the scan results, size and likelihood thresholds, and more. A successful request results in the creation of a DlpJob object instance, which is discussed in "Retrieve inspection results."

The available configuration options are summarized here:

  • InspectJobConfig object: Contains the configuration information for the inspection job. Note that the InspectJobConfig object is also used by the JobTriggers object for scheduling the creation of DlpJobs. This object includes:

    • StorageConfig object: Required. Contains details about the storage repository to scan:

      • One of the following must be included in the StorageConfig object, depending on the type of storage repository being scanned:

        • CloudStorageOptions object: Contains information about the Cloud Storage bucket to scan.

        • DatastoreOptions object: Contains information about the Cloud Datastore kind to scan.

        • BigQueryOptions object: Contains information about the BigQuery table (and, optionally, identifying fields) to scan. This object also enables results sampling. For more information, see "Limit the amount of content inspected," later in this topic.

      • TimespanConfig object: Optional. Specifies the timespan of the items to include in the scan.

    • InspectConfig object: Required. Specifies what to scan for, such as infoTypes and likelihood values.

      • InfoType objects: Required. One or more infoType values to scan for.
      • Likelihood enumeration: Optional. When set, Cloud DLP will only return findings equal to or above this likelihood threshold. If this enum is omitted, the default value is POSSIBLE.
      • FindingLimits object: Optional. When set, this object enables you to specify a limit for the number of findings returned.
      • includeQuote parameter: Optional. Defaults to false. When set to true, each finding will include a contextual quote from the data that triggered it.
      • excludeInfoTypes parameter: Optional. Defaults to false. When set to true, scan results will exclude type information for the findings.
      • CustomInfoType objects: One or more custom, user-created infoTypes. For more information about creating custom infoTypes, see Creating custom infoType detectors.
    • inspectTemplateName string: Optional. Specifies a template to use to populate default values in the InspectConfig object. If you've already specified InspectConfig, template values will be merged in.

    • Action objects: Optional. One or more actions to execute at the completion of the job. Actions are executed in the order in which they're listed. This is where you specify where to write results, or whether to publish a notification to a Cloud Pub/Sub topic.

  • jobId: Optional. An identifier for the job returned by Cloud DLP. If jobId is omitted or empty, the system creates an ID for the job. If specified, the job is assigned this ID value. The job ID must be unique, and can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the following regular expression: [a-zA-Z\d-]+.
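Putting these options together, the following Python fragment is a minimal sketch of a job creation request, using the same pre-1.0 google-cloud-dlp client library as the samples above. The bucket URL and job ID are placeholders, and everything beyond storage_config and inspect_config is optional:

import google.cloud.dlp

dlp = google.cloud.dlp.DlpServiceClient()
parent = dlp.project_path('[PROJECT-ID]')

inspect_job = {
    # Required: where to scan (a placeholder Cloud Storage bucket here).
    'storage_config': {
        'cloud_storage_options': {
            'file_set': {'url': 'gs://[BUCKET-NAME]/*'}
        }
    },
    # Required: what to scan for.
    'inspect_config': {
        'info_types': [{'name': 'EMAIL_ADDRESS'}],
        'min_likelihood': 'POSSIBLE',  # optional; POSSIBLE is the default
    },
    # Actions are optional; without one, only the summary returned by
    # projects.dlpJobs.get is available.
}

# job_id is optional and must match [a-zA-Z\d-]+.
job = dlp.create_dlp_job(parent, inspect_job=inspect_job, job_id='my-scan-job-1')
print('Created job: {}'.format(job.name))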

Limit the amount of content inspected

If you are scanning BigQuery tables or Cloud Storage buckets, Cloud DLP can scan just a subset of the data. This provides a sampling of scan results without incurring the potential costs of scanning an entire dataset.

The following sections contain information about limiting the size of both Cloud Storage scans and BigQuery scans.

Limit Cloud Storage scans

You can enable sampling in Cloud Storage by limiting the amount of data that is scanned. You can instruct the Cloud DLP API to scan only files under a certain size, only certain file types, and only a certain percentage of the total number of files in the input file set. To do so, specify the following optional fields within CloudStorageOptions:

  • bytesLimitPerFile: Sets the maximum number of bytes to scan from a file. If a scanned file's size is larger than this value, the rest of the bytes are omitted.
  • fileTypes[]: Lists the file type groups to include in the scan. This can be set to one or more of the following FileType enumerated types:
    • FILE_TYPE_UNSPECIFIED: All files.
    • BINARY_FILE: All file extensions not included in TEXT_FILE.
    • TEXT_FILE: Several text file formats. For the most up-to-date list, see FileType.
  • filesLimitPercent: Limits the number of files to scan to the specified percentage of the input FileSet. Specifying either 0 or 100 here indicates there is no limit.
  • sampleMethod: How to sample bytes if not all bytes are scanned. Specifying this value is meaningful only when used in conjunction with bytesLimitPerFile. If not specified, scanning starts from the top. This field can be set to one of two values:
    • TOP: Scanning starts from the top.
    • RANDOM_START: For each file larger than the size specified in bytesLimitPerFile, randomly pick the offset to start scanning. The scanned bytes are contiguous.

The following example JSON demonstrates using the Cloud DLP API to scan 90% of the text files in a Cloud Storage bucket for person names, inspecting at most 200 bytes per file and starting each file's scan from a random offset.

JSON input:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs?key={YOUR_API_KEY}

{
  "inspectJob":{
    "storageConfig":{
      "cloudStorageOptions":{
        "fileSet":{
          "url":"gs://[BUCKET-NAME]/*"
        },
        "bytesLimitPerFile":"200",
        "fileTypes":[
          "TEXT_FILE"
        ],
        "filesLimitPercent":90,
        "sampleMethod":"RANDOM_START"
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"PERSON_NAME"
        }
      ],
      "excludeInfoTypes":true,
      "includeQuote":true,
      "minLikelihood":"POSSIBLE"
    },
    "actions":[
      {
        "saveFindings":{
          "outputConfig":{
            "table":{
              "projectId":"[PROJECT-ID]",
              "datasetId":"testingdlp"
            },
            "outputSchema":"BASIC_COLUMNS"
          }
        }
      }
    ]
  }
}

After sending the JSON input in a POST request to the specified endpoint, a Cloud DLP job is created, and the API sends the following response.

JSON output:

{
  "name":"projects/[PROJECT-ID]/dlpJobs/[JOB-ID]",
  "type":"INSPECT_JOB",
  "state":"PENDING",
  "inspectDetails":{
    "requestedOptions":{
      "snapshotInspectTemplate":{

      },
      "jobConfig":{
        "storageConfig":{
          "cloudStorageOptions":{
            "fileSet":{
              "url":"gs://[BUCKET_NAME]/*"
            },
            "bytesLimitPerFile":"200",
            "fileTypes":[
              "TEXT_FILE"
            ],
            "sampleMethod":"TOP",
            "filesLimitPercent":90
          }
        },
        "inspectConfig":{
          "infoTypes":[
            {
              "name":"PERSON_NAME"
            }
          ],
          "minLikelihood":"POSSIBLE",
          "limits":{

          },
          "includeQuote":true,
          "excludeInfoTypes":true
        },
        "actions":[
          {
            "saveFindings":{
              "outputConfig":{
                "table":{
                  "projectId":"[PROJECT-ID]",
                  "datasetId":"[DATASET-ID]",
                  "tableId":"[TABLE-ID]"
                },
                "outputSchema":"BASIC_COLUMNS"
              }
            }
          }
        ]
      }
    }
  },
  "createTime":"2018-05-30T22:22:08.279Z"
}

Limit BigQuery scans

To enable sampling in BigQuery by limiting the amount of data that is scanned, specify the following optional fields within BigQueryOptions:

  • rowsLimit: The maximum number of rows to scan. If the table has more rows than this value, the rest of the rows are omitted. If not set, or if set to 0, all rows will be scanned.
  • sampleMethod: How to sample rows if not all rows are scanned. If not specified, scanning starts from the top. This field can be set to one of two values:
    • TOP: Scanning starts from the top.
    • RANDOM_START: Scanning starts from a randomly selected row.

The following example JSON demonstrates using the Cloud DLP API to scan a 1000-row subset of a BigQuery table. The scan starts from a random row.

JSON input:

POST https://dlp.googleapis.com/v2/projects/[PROJECT-ID]/dlpJobs?key={YOUR_API_KEY}

{
  "inspectJob":{
    "storageConfig":{
      "bigQueryOptions":{
        "tableReference":{
          "projectId":"bigquery-public-data",
          "datasetId":"usa_names",
          "tableId":"usa_1910_current"
        },
        "rowsLimit":"1000",
        "sampleMethod":"RANDOM_START",
        "identifyingFields":[
          {
            "name":"name"
          }
        ]
      }
    },
    "inspectConfig":{
      "infoTypes":[
        {
          "name":"FIRST_NAME"
        }
      ],
      "includeQuote":true
    },
    "actions":[
      {
        "saveFindings":{
          "outputConfig":{
            "table":{
              "projectId":"[PROJECT-ID]",
              "datasetId":"testingdlp",
              "tableId":"bqsample3"
            },
            "outputSchema":"BASIC_COLUMNS"
          }
        }
      }
    ]
  }
}

After sending the JSON input in a POST request to the specified endpoint, a Cloud DLP job is created, and the API sends the following response.

JSON output:

{
  "name":"projects/[PROJECT-ID]/dlpJobs/[JOB-ID]",
  "type":"INSPECT_JOB",
  "state":"PENDING",
  "inspectDetails":{
    "requestedOptions":{
      "snapshotInspectTemplate":{

      },
      "jobConfig":{
        "storageConfig":{
          "bigQueryOptions":{
            "tableReference":{
              "projectId":"bigquery-public-data",
              "datasetId":"usa_names",
              "tableId":"usa_1910_current"
            },
            "rowsLimit":"1000",
            "sampleMethod":"RANDOM_START"
          }
        },
        "inspectConfig":{
          "infoTypes":[
            {
              "name":"FIRST_NAME"
            }
          ],
          "minLikelihood":"POSSIBLE",
          "limits":{

          },
          "includeQuote":true
        },
        "actions":[
          {
            "saveFindings":{
              "outputConfig":{
                "table":{
                  "projectId":"[PROJECT-ID]",
                  "datasetId":"testingdlp",
                  "tableId":"bqsample3"
                },
                "outputSchema":"BASIC_COLUMNS"
              }
            }
          }
        ]
      }
    }
  },
  "createTime":"2018-05-25T21:02:50.655Z"
}

When the inspect job finishes running and its results have been processed by BigQuery, the results of the scan are available in the specified BigQuery output table. For more information about retrieving inspection results, see the next section.

Retrieve inspection results

You can retrieve a summary of a DlpJob using the projects.dlpJobs.get method. The returned DlpJob includes its InspectDataSourceDetails object, which contains both a summary of the job’s configuration (RequestedOptions) and a summary of the outcome of the job (Result). The outcome summary includes:

  • processedBytes: The total size in bytes that have been processed.
  • totalEstimatedBytes: An estimate of the total number of bytes to process.
  • InfoTypeStatistics object: Statistics of how many instances of each infoType were found during the inspection job.
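
For example, here is a brief Python sketch (same client library as in the samples above; the job name is a placeholder) that fetches a job and prints this summary:

import google.cloud.dlp

dlp = google.cloud.dlp.DlpServiceClient()

# Placeholder name of the form projects/[PROJECT-ID]/dlpJobs/[JOB-ID].
job = dlp.get_dlp_job('projects/[PROJECT-ID]/dlpJobs/[JOB-ID]')

result = job.inspect_details.result
print('Processed {} of an estimated {} bytes'.format(
    result.processed_bytes, result.total_estimated_bytes))
for stat in result.info_type_stats:
    print('{}: {} finding(s)'.format(stat.info_type.name, stat.count))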

For complete inspection job results, you have two options. Depending on the Action you've chosen, inspection results are either:

  • Saved to BigQuery (the SaveFindings object) in the table specified. Before viewing or analyzing the results, first ensure that the job has completed by using the projects.dlpJobs.get method, which is described below. Note that you can specify a schema for storing findings using the OutputSchema object.
  • Published to a Cloud Pub/Sub topic (the PublishToPubSub object). The topic must have granted publishing access rights to the Cloud DLP service account that runs the DlpJob sending the notifications.

To help sift through large amounts of data generated by Cloud DLP, you can use built-in BigQuery tools to run rich SQL analytics or tools such as Google Data Studio to generate reports. For more information, see Analyzing and reporting on Cloud DLP findings. For some sample queries, see Querying findings in BigQuery.
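
As an illustration, the following Python sketch runs one such query with the BigQuery client library. It assumes findings were saved using the BASIC_COLUMNS output schema, which includes an info_type column; the table name is a placeholder, and you should verify the column names against your actual output table:

from google.cloud import bigquery

client = bigquery.Client(project='[PROJECT-ID]')

# Count findings per infoType. The info_type column assumes the
# BASIC_COLUMNS output schema; adjust to match your table.
query = """
    SELECT info_type, COUNT(*) AS findings
    FROM `[PROJECT-ID].[DATASET-ID].[TABLE-ID]`
    GROUP BY info_type
    ORDER BY findings DESC
"""
for row in client.query(query).result():
    print('{}: {}'.format(row.info_type, row.findings))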

Sending a storage repository inspection request to Cloud DLP creates and runs a DlpJob object instance in response. These jobs can take seconds, minutes, or hours to run depending on the size of your data and the configuration that you have specified. Choosing to publish to a Cloud Pub/Sub topic (by specifying PublishToPubSub in Action) automatically sends notifications to the topic with the specified name when the job's status changes. The name of the Cloud Pub/Sub topic is specified in the form projects/[PROJECT-ID]/topics/[PUBSUB-TOPIC-NAME].

You have full control over the jobs you create, including the following management methods:

  • projects.dlpJobs.cancel method: Stops a job that is currently in progress. The server makes a best effort to cancel the job, but success is not guaranteed. The job and its configuration will remain until you delete it (with projects.dlpJobs.delete).
  • projects.dlpJobs.delete method: Deletes a job and its configuration.
  • projects.dlpJobs.get method: Retrieves a single job and returns its status, its configuration, and, if the job is done, summary results.
  • projects.dlpJobs.list method: Retrieves a list of all jobs, and includes the ability to filter results.
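
The following Python sketch exercises these management methods using the same client library as the samples above; the job names are placeholders, and the list filter shown is one example of the filter syntax accepted by projects.dlpJobs.list:

import google.cloud.dlp

dlp = google.cloud.dlp.DlpServiceClient()
parent = dlp.project_path('[PROJECT-ID]')

# List jobs, filtering to those that have finished.
for job in dlp.list_dlp_jobs(parent, filter_='state=DONE'):
    print('{}: {}'.format(job.name, job.state))

# Best-effort cancellation of an in-progress job (placeholder name).
dlp.cancel_dlp_job('projects/[PROJECT-ID]/dlpJobs/[JOB-ID]')

# Delete a job and its configuration once they are no longer needed.
dlp.delete_dlp_job('projects/[PROJECT-ID]/dlpJobs/[JOB-ID]')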