Package google.privacy.dlp.v2

DlpService

The Cloud Data Loss Prevention (DLP) API is a service that allows clients to detect the presence of Personally Identifiable Information (PII) and other privacy-sensitive data in user-supplied, unstructured data streams, like text blocks or images. The service also includes methods for sensitive data redaction and scheduling of data scans on Google Cloud Platform-based data sets.

To learn more about concepts and find how-to guides, see https://cloud.google.com/dlp/docs/.

ActivateJobTrigger

rpc ActivateJobTrigger(ActivateJobTriggerRequest) returns (DlpJob)

Activates a job trigger. Causes the immediate execution of a trigger instead of waiting on the trigger event to occur.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CancelDlpJob

rpc CancelDlpJob(CancelDlpJobRequest) returns (Empty)

Starts asynchronous cancellation on a long-running DlpJob. The server makes a best effort to cancel the DlpJob, but success is not guaranteed. See https://cloud.google.com/dlp/docs/inspecting-storage and https://cloud.google.com/dlp/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateDeidentifyTemplate

rpc CreateDeidentifyTemplate(CreateDeidentifyTemplateRequest) returns (DeidentifyTemplate)

Creates a DeidentifyTemplate for reusing frequently used configuration for de-identifying content, images, and storage. See https://cloud.google.com/dlp/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateDlpJob

rpc CreateDlpJob(CreateDlpJobRequest) returns (DlpJob)

Creates a new job to inspect storage or calculate risk metrics. See https://cloud.google.com/dlp/docs/inspecting-storage and https://cloud.google.com/dlp/docs/compute-risk-analysis to learn more.

When no InfoTypes or CustomInfoTypes are specified in inspect jobs, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
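
A minimal sketch only: the request below uses the Python client library (google-cloud-dlp) to start an inspection job over a Cloud Storage bucket and save findings to a BigQuery table. The project, bucket, dataset, and table names are placeholders, and the exact request shape may differ slightly between client library versions.

    # Sketch: inspection job over a Cloud Storage bucket, saving findings
    # to BigQuery. All resource names below are placeholders.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    project_id = "my-project"  # placeholder
    parent = f"projects/{project_id}/locations/global"

    inspect_job = {
        "storage_config": {
            "cloud_storage_options": {"file_set": {"url": "gs://my-bucket/"}}
        },
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
        },
        "actions": [
            {
                # SaveFindings action: persist detailed findings to BigQuery.
                "save_findings": {
                    "output_config": {
                        "table": {
                            "project_id": project_id,
                            "dataset_id": "dlp_results",
                            "table_id": "findings",
                        }
                    }
                }
            }
        ],
    }

    job = dlp.create_dlp_job(request={"parent": parent, "inspect_job": inspect_job})
    print("Started job:", job.name)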

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateInspectTemplate

rpc CreateInspectTemplate(CreateInspectTemplateRequest) returns (InspectTemplate)

Creates an InspectTemplate for reusing frequently used configuration for inspecting content, images, and storage. See https://cloud.google.com/dlp/docs/creating-templates to learn more.
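
A minimal sketch, assuming the Python client library (google-cloud-dlp); the project ID, display name, and chosen infoTypes are placeholders.

    # Sketch: create a reusable InspectTemplate.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    template = dlp.create_inspect_template(
        request={
            "parent": parent,
            "inspect_template": {
                "display_name": "pii-basic",  # placeholder
                "description": "Common PII detectors.",
                "inspect_config": {
                    "info_types": [
                        {"name": "EMAIL_ADDRESS"},
                        {"name": "US_SOCIAL_SECURITY_NUMBER"},
                    ],
                    "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
                },
            },
        }
    )
    print("Created template:", template.name)

The returned template.name can then be referenced through the inspect_template_name field of InspectContent requests or inspect job configs instead of repeating the inspect_config.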

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateJobTrigger

rpc CreateJobTrigger(CreateJobTriggerRequest) returns (JobTrigger)

Creates a job trigger to run DLP actions such as scanning storage for sensitive information on a set schedule. See https://cloud.google.com/dlp/docs/creating-job-triggers to learn more.
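
A minimal sketch, assuming the Python client library (google-cloud-dlp); the project, bucket, and display name are placeholders.

    # Sketch: a trigger that re-scans a Cloud Storage bucket every 24 hours.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    job_trigger = {
        "display_name": "daily-bucket-scan",  # placeholder
        "inspect_job": {
            "storage_config": {
                "cloud_storage_options": {"file_set": {"url": "gs://my-bucket/"}}
            },
            "inspect_config": {"info_types": [{"name": "CREDIT_CARD_NUMBER"}]},
        },
        # Run the inspect job once per recurrence period (24 hours here).
        "triggers": [{"schedule": {"recurrence_period_duration": {"seconds": 86400}}}],
        "status": dlp_v2.JobTrigger.Status.HEALTHY,
    }

    trigger = dlp.create_job_trigger(request={"parent": parent, "job_trigger": job_trigger})
    print("Created trigger:", trigger.name)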

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateStoredInfoType

rpc CreateStoredInfoType(CreateStoredInfoTypeRequest) returns (StoredInfoType)

Creates a pre-built stored infoType to be used for inspection. See https://cloud.google.com/dlp/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeidentifyContent

rpc DeidentifyContent(DeidentifyContentRequest) returns (DeidentifyContentResponse)

De-identifies potentially sensitive info from a ContentItem. This method has limits on input size and output size. See https://cloud.google.com/dlp/docs/deidentify-sensitive-data to learn more.

When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
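
A minimal sketch, assuming the Python client library (google-cloud-dlp); the project ID and input string are placeholders. It replaces each detected value with the name of its infoType.

    # Sketch: replace detected email addresses with "[EMAIL_ADDRESS]".
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    response = dlp.deidentify_content(
        request={
            "parent": parent,
            "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": "My email is jane.doe@example.com"},
        }
    )
    print(response.item.value)  # e.g. "My email is [EMAIL_ADDRESS]"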

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteDeidentifyTemplate

rpc DeleteDeidentifyTemplate(DeleteDeidentifyTemplateRequest) returns (Empty)

Deletes a DeidentifyTemplate. See https://cloud.google.com/dlp/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteDlpJob

rpc DeleteDlpJob(DeleteDlpJobRequest) returns (Empty)

Deletes a long-running DlpJob. This method indicates that the client is no longer interested in the DlpJob result. The job will be canceled if possible. See https://cloud.google.com/dlp/docs/inspecting-storage and https://cloud.google.com/dlp/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteInspectTemplate

rpc DeleteInspectTemplate(DeleteInspectTemplateRequest) returns (Empty)

Deletes an InspectTemplate. See https://cloud.google.com/dlp/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteJobTrigger

rpc DeleteJobTrigger(DeleteJobTriggerRequest) returns (Empty)

Deletes a job trigger. See https://cloud.google.com/dlp/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteStoredInfoType

rpc DeleteStoredInfoType(DeleteStoredInfoTypeRequest) returns (Empty)

Deletes a stored infoType. See https://cloud.google.com/dlp/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

FinishDlpJob

rpc FinishDlpJob(FinishDlpJobRequest) returns (Empty)

Finishes a running hybrid DlpJob. Triggers the finalization steps and runs any enabled actions that have not yet run.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetDeidentifyTemplate

rpc GetDeidentifyTemplate(GetDeidentifyTemplateRequest) returns (DeidentifyTemplate)

Gets a DeidentifyTemplate. See https://cloud.google.com/dlp/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetDlpJob

rpc GetDlpJob(GetDlpJobRequest) returns (DlpJob)

Gets the latest state of a long-running DlpJob. See https://cloud.google.com/dlp/docs/inspecting-storage and https://cloud.google.com/dlp/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetInspectTemplate

rpc GetInspectTemplate(GetInspectTemplateRequest) returns (InspectTemplate)

Gets an InspectTemplate. See https://cloud.google.com/dlp/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetJobTrigger

rpc GetJobTrigger(GetJobTriggerRequest) returns (JobTrigger)

Gets a job trigger. See https://cloud.google.com/dlp/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetStoredInfoType

rpc GetStoredInfoType(GetStoredInfoTypeRequest) returns (StoredInfoType)

Gets a stored infoType. See https://cloud.google.com/dlp/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

HybridInspectDlpJob

rpc HybridInspectDlpJob(HybridInspectDlpJobRequest) returns (HybridInspectResponse)

Inspects hybrid content and stores findings to a job. To review the findings, inspect the job. Inspection occurs asynchronously.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

HybridInspectJobTrigger

rpc HybridInspectJobTrigger(HybridInspectJobTriggerRequest) returns (HybridInspectResponse)

Inspects hybrid content and stores findings to a trigger. The inspection is processed asynchronously. To review the findings, monitor the jobs within the trigger.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

InspectContent

rpc InspectContent(InspectContentRequest) returns (InspectContentResponse)

Finds potentially sensitive info in content. This method has limits on input size, processing time, and output size.

When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.

For how-to guides, see https://cloud.google.com/dlp/docs/inspecting-images and https://cloud.google.com/dlp/docs/inspecting-text.
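
A minimal sketch, assuming the Python client library (google-cloud-dlp); the project ID and sample text are placeholders.

    # Sketch: inspect a short string for two built-in infoTypes.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    response = dlp.inspect_content(
        request={
            "parent": parent,
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
                "include_quote": True,  # return the matched text with each finding
            },
            "item": {"value": "Call 555-555-0100 or write to jane.doe@example.com"},
        }
    )
    for finding in response.result.findings:
        print(finding.info_type.name, finding.likelihood, finding.quote)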

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListDeidentifyTemplates

rpc ListDeidentifyTemplates(ListDeidentifyTemplatesRequest) returns (ListDeidentifyTemplatesResponse)

Lists DeidentifyTemplates. See https://cloud.google.com/dlp/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListDlpJobs

rpc ListDlpJobs(ListDlpJobsRequest) returns (ListDlpJobsResponse)

Lists DlpJobs that match the specified filter in the request. See https://cloud.google.com/dlp/docs/inspecting-storage and https://cloud.google.com/dlp/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListInfoTypes

rpc ListInfoTypes(ListInfoTypesRequest) returns (ListInfoTypesResponse)

Returns a list of the sensitive information types that the DLP API supports. See https://cloud.google.com/dlp/docs/infotypes-reference to learn more.
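
A minimal sketch, assuming the Python client library (google-cloud-dlp); no project-level resources are needed for this call, and the parent value shown is an assumption of the global location.

    # Sketch: list the built-in infoType detectors.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    response = dlp.list_info_types(request={"parent": "locations/global"})
    for info_type in response.info_types:
        print(info_type.name, "-", info_type.display_name)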

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListInspectTemplates

rpc ListInspectTemplates(ListInspectTemplatesRequest) returns (ListInspectTemplatesResponse)

Lists InspectTemplates. See https://cloud.google.com/dlp/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListJobTriggers

rpc ListJobTriggers(ListJobTriggersRequest) returns (ListJobTriggersResponse)

Lists job triggers. See https://cloud.google.com/dlp/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListStoredInfoTypes

rpc ListStoredInfoTypes(ListStoredInfoTypesRequest) returns (ListStoredInfoTypesResponse)

Lists stored infoTypes. See https://cloud.google.com/dlp/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

RedactImage

rpc RedactImage(RedactImageRequest) returns (RedactImageResponse)

Redacts potentially sensitive info from an image. This method has limits on input size, processing time, and output size. See https://cloud.google.com/dlp/docs/redacting-sensitive-data-images to learn more.

When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
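
A minimal sketch, assuming the Python client library (google-cloud-dlp); the project ID and file paths are placeholders.

    # Sketch: redact detected email addresses in a PNG by boxing them out.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    with open("input.png", "rb") as f:  # placeholder path
        image_bytes = f.read()

    response = dlp.redact_image(
        request={
            "parent": parent,
            "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
            # Draw a box over every match of this infoType.
            "image_redaction_configs": [{"info_type": {"name": "EMAIL_ADDRESS"}}],
            "byte_item": {
                "type_": dlp_v2.ByteContentItem.BytesType.IMAGE_PNG,
                "data": image_bytes,
            },
        }
    )
    with open("redacted.png", "wb") as f:  # placeholder path
        f.write(response.redacted_image)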

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ReidentifyContent

rpc ReidentifyContent(ReidentifyContentRequest) returns (ReidentifyContentResponse)

Re-identifies content that has been de-identified. See https://cloud.google.com/dlp/docs/pseudonymization#re-identification_in_free_text_code_example to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateDeidentifyTemplate

rpc UpdateDeidentifyTemplate(UpdateDeidentifyTemplateRequest) returns (DeidentifyTemplate)

Updates the DeidentifyTemplate. See https://cloud.google.com/dlp/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateInspectTemplate

rpc UpdateInspectTemplate(UpdateInspectTemplateRequest) returns (InspectTemplate)

Updates the InspectTemplate. See https://cloud.google.com/dlp/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateJobTrigger

rpc UpdateJobTrigger(UpdateJobTriggerRequest) returns (JobTrigger)

Updates a job trigger. See https://cloud.google.com/dlp/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateStoredInfoType

rpc UpdateStoredInfoType(UpdateStoredInfoTypeRequest) returns (StoredInfoType)

Updates the stored infoType by creating a new version. The existing version will continue to be used until the new version is ready. See https://cloud.google.com/dlp/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

Action

A task to execute on the completion of a job. See https://cloud.google.com/dlp/docs/concepts-actions to learn more.

Fields

Union field action.

action can be only one of the following:

save_findings

SaveFindings

Save resulting findings in a provided location.

pub_sub

PublishToPubSub

Publish a notification to a Pub/Sub topic.

publish_summary_to_cscc

PublishSummaryToCscc

Publish summary to Cloud Security Command Center (Alpha).

publish_findings_to_cloud_data_catalog

PublishFindingsToCloudDataCatalog

Publish findings to Cloud Data Catalog.

deidentify

Deidentify

Create a de-identified copy of the input data.

job_notification_emails

JobNotificationEmails

Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts.

publish_to_stackdriver

PublishToStackdriver

Enable Stackdriver metric dlp.googleapis.com/finding_count.

Deidentify

Create a de-identified copy of the requested table or files.

A TransformationDetail will be created for each transformation.

If any rows in BigQuery are skipped during de-identification (transformation errors or row size exceeds BigQuery insert API limits) they are placed in the failure output table. If the original row exceeds the BigQuery insert API limit it will be truncated when written to the failure output table. The failure output table can be set in the action.deidentify.output.big_query_output.deidentified_failure_output_table field. If no table is set, a table will be automatically created in the same project and dataset as the original table.

Compatible with: Inspect

Fields
transformation_config

TransformationConfig

User specified deidentify templates and configs for structured, unstructured, and image files.

transformation_details_storage_config

TransformationDetailsStorageConfig

Config for storing transformation details. This is separate from the de-identified content, and contains metadata about the successful transformations and/or failures that occurred while de-identifying. This needs to be set in order for users to access information about the status of each transformation (see TransformationDetails message for more information about what is noted).

file_types_to_transform[]

FileType

List of user-specified file type groups to transform. If specified, only the files with these file types will be transformed. If empty, all supported files will be transformed. Supported types may be automatically added over time. If a file type is set in this field that isn't supported by the Deidentify action, the job will fail and will not be successfully created/started. Currently the only file types supported are IMAGES, TEXT_FILES, CSV, and TSV.

Union field output.

output can be only one of the following:

cloud_storage_output

string

Required. User-settable Cloud Storage bucket and folders to store de-identified files. This field must be set for Cloud Storage de-identification. The output Cloud Storage bucket must be different from the input bucket. De-identified files will overwrite files in the output path.

Form of: gs://bucket/folder/ or gs://bucket

JobNotificationEmails

This type has no fields.

Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts.

PublishFindingsToCloudDataCatalog

This type has no fields.

Publish findings of a DlpJob to Data Catalog. In Data Catalog, tag templates are applied to the resource that Cloud DLP scanned. Data Catalog tag templates are stored in the same project and region where the BigQuery table exists. For Cloud DLP to create and apply the tag template, the Cloud DLP service agent must have the roles/datacatalog.tagTemplateOwner permission on the project. The tag template contains fields summarizing the results of the DlpJob. Any field values previously written by another DlpJob are deleted. InfoType naming patterns are strictly enforced when using this feature.

Findings are persisted in Data Catalog storage and are governed by service-specific policies for Data Catalog. For more information, see Service Specific Terms.

Only a single instance of this action can be specified. This action is allowed only if all resources being scanned are BigQuery tables. Compatible with: Inspect

PublishSummaryToCscc

This type has no fields.

Publish the result summary of a DlpJob to Security Command Center. This action is available for only projects that belong to an organization. This action publishes the count of finding instances and their infoTypes. The summary of findings are persisted in Security Command Center and are governed by service-specific policies for Security Command Center. Only a single instance of this action can be specified. Compatible with: Inspect

PublishToPubSub

Publish a message into a given Pub/Sub topic when the DlpJob has completed. The message contains a single field, DlpJobName, which is equal to the finished job's DlpJob.name. Compatible with: Inspect, Risk

Fields
topic

string

Cloud Pub/Sub topic to send notifications to. The topic must have granted publishing access rights to the DLP API service account that executes the long-running DlpJob sending the notifications. Format is projects/{project}/topics/{topic}.

PublishToStackdriver

This type has no fields.

Enable Stackdriver metric dlp.googleapis.com/finding_count. This will publish a metric to Stackdriver for each infoType requested, recording how many findings were found for it. CustomDetectors will be bucketed as 'Custom' under the Stackdriver label 'info_type'.

SaveFindings

If set, the detailed findings will be persisted to the specified OutputStorageConfig. Only a single instance of this action can be specified. Compatible with: Inspect, Risk

Fields
output_config

OutputStorageConfig

Location to store findings outside of DLP.

ActionDetails

The results of an Action.

Fields
Union field details. Summary of what occurred in the actions. details can be only one of the following:
deidentify_details

DeidentifyDataSourceDetails

Outcome of a de-identification action.

ActivateJobTriggerRequest

Request message for ActivateJobTrigger.

Fields
name

string

Required. Resource name of the trigger to activate, for example projects/dlp-test-project/jobTriggers/53234423.

Authorization requires one or more of the following IAM permissions on the specified resource name:

  • dlp.jobTriggers.get
  • dlp.jobs.create

AnalyzeDataSourceRiskDetails

Result of a risk analysis operation request.

Fields
requested_privacy_metric

PrivacyMetric

Privacy metric to compute.

requested_source_table

BigQueryTable

Input dataset to compute metrics over.

requested_options

RequestedRiskAnalysisOptions

The configuration used for this job.

Union field result. Values associated with this metric. result can be only one of the following:
numerical_stats_result

NumericalStatsResult

Numerical stats result

categorical_stats_result

CategoricalStatsResult

Categorical stats result

k_anonymity_result

KAnonymityResult

K-anonymity result

l_diversity_result

LDiversityResult

L-diversity result

k_map_estimation_result

KMapEstimationResult

K-map result

delta_presence_estimation_result

DeltaPresenceEstimationResult

Delta-presence result

CategoricalStatsResult

Result of the categorical stats computation.

Fields
value_frequency_histogram_buckets[]

CategoricalStatsHistogramBucket

Histogram of value frequencies in the column.

CategoricalStatsHistogramBucket

Histogram of value frequencies in the column.

Fields
value_frequency_lower_bound

int64

Lower bound on the value frequency of the values in this bucket.

value_frequency_upper_bound

int64

Upper bound on the value frequency of the values in this bucket.

bucket_size

int64

Total number of values in this bucket.

bucket_values[]

ValueFrequency

Sample of value frequencies in this bucket. The total number of values returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct values in this bucket.

DeltaPresenceEstimationResult

Result of the δ-presence computation. Note that these results are an estimation, not exact values.

Fields
delta_presence_estimation_histogram[]

DeltaPresenceEstimationHistogramBucket

The intervals [min_probability, max_probability) do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records: {min_probability: 0, max_probability: 0.1, frequency: 17} {min_probability: 0.2, max_probability: 0.3, frequency: 42} {min_probability: 0.3, max_probability: 0.4, frequency: 99} mean that there are no records with an estimated probability in [0.1, 0.2) or greater than or equal to 0.4.

DeltaPresenceEstimationHistogramBucket

A DeltaPresenceEstimationHistogramBucket message with the following values: min_probability: 0.1 max_probability: 0.2 frequency: 42 means that there are 42 records for which δ is in [0.1, 0.2). An important particular case is when min_probability = max_probability = 1: then, every individual who shares this quasi-identifier combination is in the dataset.

Fields
min_probability

double

Between 0 and 1.

max_probability

double

Always greater than or equal to min_probability.

bucket_size

int64

Number of records within these probability bounds.

bucket_values[]

DeltaPresenceEstimationQuasiIdValues

Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct quasi-identifier tuple values in this bucket.

DeltaPresenceEstimationQuasiIdValues

A tuple of values for the quasi-identifier columns.

Fields
quasi_ids_values[]

Value

The quasi-identifier values.

estimated_probability

double

The estimated probability that a given individual sharing these quasi-identifier values is in the dataset. This value, typically called δ, is the ratio between the number of records in the dataset with these quasi-identifier values, and the total number of individuals (inside and outside the dataset) with these quasi-identifier values. For example, if there are 15 individuals in the dataset who share the same quasi-identifier values, and an estimated 100 people in the entire population with these values, then δ is 0.15.

KAnonymityResult

Result of the k-anonymity computation.

Fields
equivalence_class_histogram_buckets[]

KAnonymityHistogramBucket

Histogram of k-anonymity equivalence classes.

KAnonymityEquivalenceClass

The set of columns' values that share the same k-anonymity equivalence class.

Fields
quasi_ids_values[]

Value

Set of values defining the equivalence class. One value per quasi-identifier column in the original KAnonymity metric message. The order is always the same as the original request.

equivalence_class_size

int64

Size of the equivalence class, for example, the number of rows with the above set of values.

KAnonymityHistogramBucket

Histogram of k-anonymity equivalence classes.

Fields
equivalence_class_size_lower_bound

int64

Lower bound on the size of the equivalence classes in this bucket.

equivalence_class_size_upper_bound

int64

Upper bound on the size of the equivalence classes in this bucket.

bucket_size

int64

Total number of equivalence classes in this bucket.

bucket_values[]

KAnonymityEquivalenceClass

Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct equivalence classes in this bucket.

KMapEstimationResult

Result of the reidentifiability analysis. Note that these results are an estimation, not exact values.

Fields
k_map_estimation_histogram[]

KMapEstimationHistogramBucket

The intervals [min_anonymity, max_anonymity] do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records: {min_anonymity: 1, max_anonymity: 1, frequency: 17} {min_anonymity: 2, max_anonymity: 3, frequency: 42} {min_anonymity: 5, max_anonymity: 10, frequency: 99} mean that there are no records with an estimated anonymity of 4, or with an estimated anonymity larger than 10.

KMapEstimationHistogramBucket

A KMapEstimationHistogramBucket message with the following values: min_anonymity: 3 max_anonymity: 5 frequency: 42 means that there are 42 records whose quasi-identifier values correspond to 3, 4 or 5 people in the overlying population. An important particular case is when min_anonymity = max_anonymity = 1: the frequency field then corresponds to the number of uniquely identifiable records.

Fields
min_anonymity

int64

Always positive.

max_anonymity

int64

Always greater than or equal to min_anonymity.

bucket_size

int64

Number of records within these anonymity bounds.

bucket_values[]

KMapEstimationQuasiIdValues

Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct quasi-identifier tuple values in this bucket.

KMapEstimationQuasiIdValues

A tuple of values for the quasi-identifier columns.

Fields
quasi_ids_values[]

Value

The quasi-identifier values.

estimated_anonymity

int64

The estimated anonymity for these quasi-identifier values.

LDiversityResult

Result of the l-diversity computation.

Fields
sensitive_value_frequency_histogram_buckets[]

LDiversityHistogramBucket

Histogram of l-diversity equivalence class sensitive value frequencies.

LDiversityEquivalenceClass

The set of columns' values that share the same l-diversity value.

Fields
quasi_ids_values[]

Value

Quasi-identifier values defining the k-anonymity equivalence class. The order is always the same as the original request.

equivalence_class_size

int64

Size of the k-anonymity equivalence class.

num_distinct_sensitive_values

int64

Number of distinct sensitive values in this equivalence class.

top_sensitive_values[]

ValueFrequency

Estimated frequencies of top sensitive values.

LDiversityHistogramBucket

Histogram of l-diversity equivalence class sensitive value frequencies.

Fields
sensitive_value_frequency_lower_bound

int64

Lower bound on the sensitive value frequencies of the equivalence classes in this bucket.

sensitive_value_frequency_upper_bound

int64

Upper bound on the sensitive value frequencies of the equivalence classes in this bucket.

bucket_size

int64

Total number of equivalence classes in this bucket.

bucket_values[]

LDiversityEquivalenceClass

Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct equivalence classes in this bucket.

NumericalStatsResult

Result of the numerical stats computation.

Fields
min_value

Value

Minimum value appearing in the column.

max_value

Value

Maximum value appearing in the column.

quantile_values[]

Value

List of 99 values that partition the set of field values into 100 equal sized buckets.

RequestedRiskAnalysisOptions

Risk analysis options.

Fields
job_config

RiskAnalysisJobConfig

The job config for the risk job.

BigQueryField

Message defining a field of a BigQuery table.

Fields
table

BigQueryTable

Source table of the field.

field

FieldId

Designated field in the BigQuery table.

BigQueryKey

Row key for identifying a record in BigQuery table.

Fields
table_reference

BigQueryTable

Complete BigQuery table reference.

row_number

int64

Row number inferred at the time the table was scanned. This value is nondeterministic, cannot be queried, and may be null for inspection jobs. To locate findings within a table, specify inspect_job.storage_config.big_query_options.identifying_fields in CreateDlpJobRequest.

BigQueryOptions

Options defining BigQuery table and row identifiers.

Fields
table_reference

BigQueryTable

Complete BigQuery table reference.

identifying_fields[]

FieldId

Table fields that may uniquely identify a row within the table. When actions.saveFindings.outputConfig.table is specified, the values of columns specified here are available in the output table under location.content_locations.record_location.record_key.id_values. Nested fields such as person.birthdate.year are allowed.

rows_limit

int64

Max number of rows to scan. If the table has more rows than this value, the rest of the rows are omitted. If not set, or if set to 0, all rows will be scanned. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig.

rows_limit_percent

int32

Max percentage of rows to scan. The rest are omitted. The number of rows scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit. Defaults to 0. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig.

sample_method

SampleMethod

excluded_fields[]

FieldId

References to fields excluded from scanning. This allows you to skip inspection of entire columns which you know have no findings.

included_fields[]

FieldId

Limit scanning only to these fields.

SampleMethod

How to sample rows if not all rows are scanned. Meaningful only when used in conjunction with either rows_limit or rows_limit_percent. If not specified, rows are scanned in the order BigQuery reads them.

Enums
SAMPLE_METHOD_UNSPECIFIED
TOP Scan groups of rows in the order BigQuery provides (default). Multiple groups of rows may be scanned in parallel, so results may not appear in the same order the rows are read.
RANDOM_START Randomly pick groups of rows to scan.

BigQueryTable

Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: <project_id>:<dataset_id>.<table_id> or <project_id>.<dataset_id>.<table_id>.

Fields
project_id

string

The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.

dataset_id

string

Dataset ID of the table.

table_id

string

Name of the table.

BoundingBox

Bounding box encompassing detected text within an image.

Fields
top

int32

Top coordinate of the bounding box. (0,0) is upper left.

left

int32

Left coordinate of the bounding box. (0,0) is upper left.

width

int32

Width of the bounding box in pixels.

height

int32

Height of the bounding box in pixels.

BucketingConfig

Generalization function that buckets values based on ranges. The ranges and replacement values are dynamically provided by the user for custom behavior, such as 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH. This can be used on data of type: number, long, string, timestamp. If the bound Value type differs from the type of data being transformed, we will first attempt converting the type of the data to be transformed to match the type of the bound before comparing. See https://cloud.google.com/dlp/docs/concepts-bucketing to learn more.

Fields
buckets[]

Bucket

Set of buckets. Ranges must be non-overlapping.

Bucket

Bucket is represented as a range, along with replacement values.

Fields
min

Value

Lower bound of the range, inclusive. Type should be the same as max if used.

max

Value

Upper bound of the range, exclusive; type must match min.

replacement_value

Value

Required. Replacement value for this bucket.

ByteContentItem

Container for bytes to inspect or redact.

Fields
type

BytesType

The type of data stored in the bytes string. Default will be TEXT_UTF8.

data

bytes

Content data to inspect or redact.

BytesType

The type of data being sent for inspection. To learn more, see Supported file types.

Enums
BYTES_TYPE_UNSPECIFIED Unused
IMAGE Any image type.
IMAGE_JPEG jpeg
IMAGE_BMP bmp
IMAGE_PNG png
IMAGE_SVG svg
TEXT_UTF8 plain text
WORD_DOCUMENT docx, docm, dotx, dotm
PDF pdf
POWERPOINT_DOCUMENT pptx, pptm, potx, potm, pot
EXCEL_DOCUMENT xlsx, xlsm, xltx, xltm
AVRO avro
CSV csv
TSV tsv

CancelDlpJobRequest

The request message for canceling a DLP job.

Fields
name

string

Required. The name of the DlpJob resource to be cancelled.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobs.cancel

CharacterMaskConfig

Partially mask a string by replacing a given number of characters with a fixed character. Masking can start from the beginning or end of the string. This can be used on data of any type (numbers, longs, and so on) and when de-identifying structured data we'll attempt to preserve the original data's type. (This allows you to take a long like 123 and modify it to a string like **3.)

Fields
masking_character

string

Character to use to mask the sensitive values—for example, * for an alphabetic string such as a name, or 0 for a numeric string such as ZIP code or credit card number. This string must have a length of 1. If not supplied, this value defaults to * for strings, and 0 for digits.

number_to_mask

int32

Number of characters to mask. If not set, all matching chars will be masked. Skipped characters do not count towards this tally.

If number_to_mask is negative, this denotes inverse masking. Cloud DLP masks all but a number of characters. For example, suppose you have the following values:

  • masking_character is *
  • number_to_mask is -4
  • reverse_order is false
  • CharsToIgnore includes -
  • Input string is 1234-5678-9012-3456

The resulting de-identified string is ****-****-****-3456. Cloud DLP masks all but the last four characters. If reverse_order is true, all but the first four characters are masked as 1234-****-****-****.

reverse_order

bool

Mask characters in reverse order. For example, if masking_character is 0, number_to_mask is 14, and reverse_order is false, then the input string 1234-5678-9012-3456 is masked as 00000000000000-3456. If masking_character is *, number_to_mask is 3, and reverse_order is true, then the string 12345 is masked as 12***.

characters_to_ignore[]

CharsToIgnore

When masking a string, items in this list will be skipped when replacing characters. For example, if the input string is 555-555-5555 and you instruct Cloud DLP to skip - and mask 5 characters with *, Cloud DLP returns ***-**5-5555.
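
As a sketch, the card-number example above can be reproduced through DeidentifyContent with the Python client library (google-cloud-dlp). The project ID and sample value are placeholders, and whether this particular sample string is flagged depends on the CREDIT_CARD_NUMBER detector.

    # Sketch: mask all but the last four characters, skipping "-"
    # (mirrors the ****-****-****-3456 example above).
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    mask_config = {
        "masking_character": "*",
        "number_to_mask": -4,  # negative value: mask all but 4 characters
        "characters_to_ignore": [{"characters_to_skip": "-"}],
    }

    response = dlp.deidentify_content(
        request={
            "parent": parent,
            "inspect_config": {"info_types": [{"name": "CREDIT_CARD_NUMBER"}]},
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"character_mask_config": mask_config}}
                    ]
                }
            },
            "item": {"value": "card: 1234-5678-9012-3456"},  # placeholder value
        }
    )
    print(response.item.value)  # expected form: "card: ****-****-****-3456"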

CharsToIgnore

Characters to skip when doing de-identification of a value. These will be left alone and skipped.

Fields

Union field characters.

characters can be only one of the following:

characters_to_skip

string

Characters to not transform when masking.

common_characters_to_ignore

CommonCharsToIgnore

Common characters to not transform when masking. Useful to avoid removing punctuation.

CommonCharsToIgnore

Convenience enum for indicating common characters to not transform.

Enums
COMMON_CHARS_TO_IGNORE_UNSPECIFIED Unused.
NUMERIC 0-9
ALPHA_UPPER_CASE A-Z
ALPHA_LOWER_CASE a-z
PUNCTUATION US Punctuation, one of !"#$%&'()*+,-./:;<=>?@[]^_`{|}~
WHITESPACE Whitespace character, one of [ \t\n\x0B\f\r]

CloudStorageFileSet

Message representing a set of files in Cloud Storage.

Fields
url

string

The url, in the format gs://<bucket>/<path>. Trailing wildcard in the path is allowed.

CloudStorageOptions

Options defining a file or a set of files within a Cloud Storage bucket.

Fields
file_set

FileSet

The set of one or more files to scan.

bytes_limit_per_file

int64

Max number of bytes to scan from a file. If a scanned file's size is bigger than this value then the rest of the bytes are omitted. Only one of bytes_limit_per_file and bytes_limit_per_file_percent can be specified. This field can't be set if de-identification is requested. For certain file types, setting this field has no effect. For more information, see Limits on bytes scanned per file.

bytes_limit_per_file_percent

int32

Max percentage of bytes to scan from a file. The rest are omitted. The number of bytes scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit. Defaults to 0. Only one of bytes_limit_per_file and bytes_limit_per_file_percent can be specified. This field can't be set if de-identification is requested. For certain file types, setting this field has no effect. For more information, see Limits on bytes scanned per file.

file_types[]

FileType

List of file type groups to include in the scan. If empty, all files are scanned and available data format processors are applied. In addition, the binary content of the selected files is always scanned as well. Images are scanned only as binary if the specified region does not support image inspection and no file_types were specified. Image inspection is restricted to 'global', 'us', 'asia', and 'europe'.

sample_method

SampleMethod

files_limit_percent

int32

Limits the number of files to scan to this percentage of the input FileSet. The number of files scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit. Defaults to 0.

FileSet

Set of files to scan.

Fields
url

string

The Cloud Storage url of the file(s) to scan, in the format gs://<bucket>/<path>. Trailing wildcard in the path is allowed.

If the url ends in a trailing slash, the bucket or directory represented by the url will be scanned non-recursively (content in sub-directories will not be scanned). This means that gs://mybucket/ is equivalent to gs://mybucket/*, and gs://mybucket/directory/ is equivalent to gs://mybucket/directory/*.

Exactly one of url or regex_file_set must be set.

regex_file_set

CloudStorageRegexFileSet

The regex-filtered set of files to scan. Exactly one of url or regex_file_set must be set.

SampleMethod

How to sample bytes if not all bytes are scanned. Meaningful only when used in conjunction with bytes_limit_per_file. If not specified, scanning starts from the top.

Enums
SAMPLE_METHOD_UNSPECIFIED
TOP Scan from the top (default).
RANDOM_START For each file larger than bytes_limit_per_file, randomly pick the offset to start scanning. The scanned bytes are contiguous.

CloudStoragePath

Message representing a single file or path in Cloud Storage.

Fields
path

string

A url representing a file or path (no wildcards) in Cloud Storage. Example: gs://[BUCKET_NAME]/dictionary.txt

CloudStorageRegexFileSet

Message representing a set of files in a Cloud Storage bucket. Regular expressions are used to allow fine-grained control over which files in the bucket to include.

Included files are those that match at least one item in include_regex and do not match any items in exclude_regex. Note that a file that matches items from both lists will not be included. For a match to occur, the entire file path (i.e., everything in the url after the bucket name) must match the regular expression.

For example, given the input {bucket_name: "mybucket", include_regex: ["directory1/.*"], exclude_regex: ["directory1/excluded.*"]}:

  • gs://mybucket/directory1/myfile will be included
  • gs://mybucket/directory1/directory2/myfile will be included (.* matches across /)
  • gs://mybucket/directory0/directory1/myfile will not be included (the full path doesn't match any items in include_regex)
  • gs://mybucket/directory1/excludedfile will not be included (the path matches an item in exclude_regex)

If include_regex is left empty, it will match all files by default (this is equivalent to setting include_regex: [".*"]).

Some other common use cases (a configuration sketch follows this list):

  • {bucket_name: "mybucket", exclude_regex: [".*\.pdf"]} will include all files in mybucket except for .pdf files
  • {bucket_name: "mybucket", include_regex: ["directory/[^/]+"]} will include all files directly under gs://mybucket/directory/, without matching across /
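
A minimal configuration sketch in Python, mirroring the mybucket example above; it covers only the storage_config portion of an inspect job and reuses the documented placeholder bucket and regexes.

    # Sketch: scan only files under directory1/ in mybucket, excluding
    # paths that start with directory1/excluded.
    storage_config = {
        "cloud_storage_options": {
            "file_set": {
                "regex_file_set": {
                    "bucket_name": "mybucket",
                    "include_regex": ["directory1/.*"],
                    "exclude_regex": ["directory1/excluded.*"],
                }
            }
        }
    }
    # Pass this dict as inspect_job["storage_config"] in a CreateDlpJob
    # request (see the CreateDlpJob sketch earlier on this page).
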
Fields
bucket_name

string

The name of a Cloud Storage bucket. Required.

include_regex[]

string

A list of regular expressions matching file paths to include. All files in the bucket that match at least one of these regular expressions will be included in the set of files, except for those that also match an item in exclude_regex. Leaving this field empty will match all files by default (this is equivalent to including .* in the list).

Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

exclude_regex[]

string

A list of regular expressions matching file paths to exclude. All files in the bucket that match at least one of these regular expressions will be excluded from the scan.

Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

Color

Represents a color in the RGB color space.

Fields
red

float

The amount of red in the color as a value in the interval [0, 1].

green

float

The amount of green in the color as a value in the interval [0, 1].

blue

float

The amount of blue in the color as a value in the interval [0, 1].

ColumnDataProfile

The profile for a scanned column within a table.

Fields
name

string

The name of the profile.

profile_status

ProfileStatus

Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated.

state

State

State of a profile.

profile_last_generated

Timestamp

The last time the profile was generated.

table_data_profile

string

The resource name to the table data profile.

table_full_resource

string

The resource name of the table this column is within.

dataset_project_id

string

The Google Cloud project ID that owns the BigQuery dataset.

dataset_location

string

The BigQuery location where the dataset's data is stored. See https://cloud.google.com/bigquery/docs/locations for supported locations.

dataset_id

string

The BigQuery dataset ID.

table_id

string

The BigQuery table ID.

column

string

The name of the column.

sensitivity_score

SensitivityScore

The sensitivity of this column.

data_risk_level

DataRiskLevel

The data risk level for this column.

column_info_type

InfoTypeSummary

If it's been determined this column can be identified as a single type, this will be set. Otherwise the column either has unidentifiable content or mixed types.

other_matches[]

OtherInfoTypeSummary

Other types found within this column. The list is unordered.

estimated_null_percentage

NullPercentageLevel

Approximate percentage of entries being null in the column.

estimated_uniqueness_score

UniquenessScoreLevel

Approximate uniqueness of the column.

free_text_score

double

The likelihood that this column contains free-form text. A value close to 1 may indicate the column is likely to contain free-form or natural language text. Range in 0-1.

column_type

ColumnDataType

The data type of a given column.

policy_state

ColumnPolicyState

Indicates if a policy tag has been applied to the column.

ColumnDataType

Data types that a column can be. Types may be added over time.

Enums
COLUMN_DATA_TYPE_UNSPECIFIED Invalid type.
TYPE_INT64 Encoded as a string in decimal format.
TYPE_BOOL Encoded as a boolean "false" or "true".
TYPE_FLOAT64 Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
TYPE_STRING Encoded as a string value.
TYPE_BYTES Encoded as a base64 string per RFC 4648, section 4.
TYPE_TIMESTAMP Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
TYPE_DATE Encoded as RFC 3339 full-date format string: 1985-04-12
TYPE_TIME Encoded as RFC 3339 partial-time format string: 23:20:50.52
TYPE_DATETIME Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
TYPE_GEOGRAPHY Encoded as WKT
TYPE_NUMERIC Encoded as a decimal string.
TYPE_RECORD Container of ordered fields, each with a type and field name.
TYPE_BIGNUMERIC Decimal type.
TYPE_JSON Json type.

ColumnPolicyState

The possible policy states for a column.

Enums
COLUMN_POLICY_STATE_UNSPECIFIED No policy tags.
COLUMN_POLICY_TAGGED Column has policy tag applied.

State

Possible states of a profile. New items may be added.

Enums
STATE_UNSPECIFIED Unused.
RUNNING The profile is currently running. Once a profile has finished it will transition to DONE.
DONE The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded; otherwise, it failed.

Container

Represents a container that may contain DLP findings. Examples of a container include a file, table, or database record.

Fields
type

string

Container type, for example BigQuery or Cloud Storage.

project_id

string

Project where the finding was found. Can be different from the project that owns the finding.

full_path

string

A string representation of the full container name. Examples: - BigQuery: 'Project:DataSetId.TableId' - Cloud Storage: 'gs://Bucket/folders/filename.txt'

root_path

string

The root of the container. Examples:

  • For BigQuery table project_id:dataset_id.table_id, the root is dataset_id
  • For Cloud Storage file gs://bucket/folder/filename.txt, the root is gs://bucket
relative_path

string

The rest of the path after the root. Examples:

  • For BigQuery table project_id:dataset_id.table_id, the relative path is table_id
  • For Cloud Storage file gs://bucket/folder/filename.txt, the relative path is folder/filename.txt
update_time

Timestamp

Findings container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated.

version

string

Findings container version, if available ("generation" for Cloud Storage).

ContentItem

Fields
Union field data_item. Data of the item, either as a byte array, a UTF-8 string, or a table. data_item can be only one of the following:
value

string

String data to inspect or redact.

table

Table

Structured content for inspection. See https://cloud.google.com/dlp/docs/inspecting-text#inspecting_a_table to learn more. A table sketch follows this fields list.

byte_item

ByteContentItem

Content data to inspect or redact. Replaces type and data.
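
A minimal sketch of the table field noted above, assuming the Python client library (google-cloud-dlp); the project ID and cell values are placeholders.

    # Sketch: inspect a small structured table instead of a plain string.
    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder

    table_item = {
        "table": {
            "headers": [{"name": "name"}, {"name": "email"}],
            "rows": [
                {"values": [{"string_value": "Jane Doe"},
                            {"string_value": "jane.doe@example.com"}]}
            ],
        }
    }

    response = dlp.inspect_content(
        request={
            "parent": parent,
            "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
            "item": table_item,
        }
    )
    for finding in response.result.findings:
        # content_locations carries the record/field location of each finding.
        print(finding.info_type.name, finding.location.content_locations)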

ContentLocation

Precise location of the finding within a document, record, image, or metadata container.

Fields
container_name

string

Name of the container where the finding is located. The top level name is the source file name or table name. Names of some common storage containers are formatted as follows:

  • BigQuery tables: {project_id}:{dataset_id}.{table_id}
  • Cloud Storage files: gs://{bucket}/{path}
  • Datastore namespace: {namespace}

Nested names could be absent if the embedded object has no string identifier (for example, an image contained within a document).

container_timestamp

Timestamp

Finding container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated.

container_version

string

Finding container version, if available ("generation" for Cloud Storage).

Union field location. Type of the container within the file with location of the finding. location can be only one of the following:
record_location

RecordLocation

Location within a row or record of a database table.