Package google.privacy.dlp.v2

Index

DlpService

Sensitive Data Protection provides access to a powerful sensitive data inspection, classification, and de-identification platform that works on text, images, and Google Cloud storage repositories. To learn more about concepts and find how-to guides, see https://cloud.google.com/sensitive-data-protection/docs/.

ActivateJobTrigger

rpc ActivateJobTrigger(ActivateJobTriggerRequest) returns (DlpJob)

Activate a job trigger. Causes the trigger to execute immediately instead of waiting for the trigger event to occur.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CancelDlpJob

rpc CancelDlpJob(CancelDlpJobRequest) returns (Empty)

Starts asynchronous cancellation on a long-running DlpJob. The server makes a best effort to cancel the DlpJob, but success is not guaranteed. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateConnection

rpc CreateConnection(CreateConnectionRequest) returns (Connection)

Create a Connection to an external data source.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateDeidentifyTemplate

rpc CreateDeidentifyTemplate(CreateDeidentifyTemplateRequest) returns (DeidentifyTemplate)

Creates a DeidentifyTemplate for reusing frequently used configuration for de-identifying content, images, and storage. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateDiscoveryConfig

rpc CreateDiscoveryConfig(CreateDiscoveryConfigRequest) returns (DiscoveryConfig)

Creates a config for discovery to scan and profile storage.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateDlpJob

rpc CreateDlpJob(CreateDlpJobRequest) returns (DlpJob)

Creates a new job to inspect storage or calculate risk metrics. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.

When no InfoTypes or CustomInfoTypes are specified in inspect jobs, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
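As a sketch of what a CreateDlpJob call carries, the following builds a REST-style request body (the HTTP API uses camelCase field names, unlike the snake_case proto fields documented on this page). The bucket URL, project, and dataset names are placeholders, not real resources.

```python
# Sketch of a CreateDlpJob REST request body
# (POST .../v2/projects/{project}/dlpJobs). All resource names are placeholders.
inspect_job_request = {
    "inspectJob": {
        "storageConfig": {
            "cloudStorageOptions": {"fileSet": {"url": "gs://example-bucket/*"}}
        },
        # Leaving infoTypes unset lets the service choose detectors
        # automatically, as described above.
        "inspectConfig": {"minLikelihood": "POSSIBLE"},
        "actions": [
            {"saveFindings": {"outputConfig": {"table": {
                "projectId": "example-project",
                "datasetId": "dlp_results",
            }}}}
        ],
    }
}
```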

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateInspectTemplate

rpc CreateInspectTemplate(CreateInspectTemplateRequest) returns (InspectTemplate)

Creates an InspectTemplate for reusing frequently used configuration for inspecting content, images, and storage. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
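A minimal sketch of what such a request might look like over REST (camelCase field names; the template ID, display name, and infoType choices below are illustrative placeholders):

```python
# Sketch of a CreateInspectTemplate REST request body
# (POST .../v2/projects/{project}/inspectTemplates). Names are placeholders.
template_request = {
    "templateId": "email-and-phone",
    "inspectTemplate": {
        "displayName": "Email and phone scan",
        "inspectConfig": {
            "infoTypes": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            "minLikelihood": "LIKELY",
        },
    },
}
```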

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateJobTrigger

rpc CreateJobTrigger(CreateJobTriggerRequest) returns (JobTrigger)

Creates a job trigger to run DLP actions such as scanning storage for sensitive information on a set schedule. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateStoredInfoType

rpc CreateStoredInfoType(CreateStoredInfoTypeRequest) returns (StoredInfoType)

Creates a pre-built stored infoType to be used for inspection. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeidentifyContent

rpc DeidentifyContent(DeidentifyContentRequest) returns (DeidentifyContentResponse)

De-identifies potentially sensitive info from a ContentItem. This method has limits on input size and output size. See https://cloud.google.com/sensitive-data-protection/docs/deidentify-sensitive-data to learn more.

When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
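To make the shape of a de-identification request concrete, here is a sketch of a REST request body (camelCase field names; the sample text is made up) that replaces each email finding with its infoType name:

```python
# Sketch of a DeidentifyContent REST request body
# (POST .../v2/projects/{project}/content:deidentify).
deidentify_request = {
    "item": {"value": "Contact me at jane.doe@example.com"},
    "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}]},
    "deidentifyConfig": {
        "infoTypeTransformations": {
            "transformations": [
                # Replace each finding with its infoType name, e.g. [EMAIL_ADDRESS].
                {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
            ]
        }
    },
}
```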

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteConnection

rpc DeleteConnection(DeleteConnectionRequest) returns (Empty)

Delete a Connection.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteDeidentifyTemplate

rpc DeleteDeidentifyTemplate(DeleteDeidentifyTemplateRequest) returns (Empty)

Deletes a DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteDiscoveryConfig

rpc DeleteDiscoveryConfig(DeleteDiscoveryConfigRequest) returns (Empty)

Deletes a discovery configuration.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteDlpJob

rpc DeleteDlpJob(DeleteDlpJobRequest) returns (Empty)

Deletes a long-running DlpJob. This method indicates that the client is no longer interested in the DlpJob result. The job will be canceled if possible. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteFileStoreDataProfile

rpc DeleteFileStoreDataProfile(DeleteFileStoreDataProfileRequest) returns (Empty)

Delete a FileStoreDataProfile. Will not prevent the profile from being regenerated if the resource is still included in a discovery configuration.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteInspectTemplate

rpc DeleteInspectTemplate(DeleteInspectTemplateRequest) returns (Empty)

Deletes an InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteJobTrigger

rpc DeleteJobTrigger(DeleteJobTriggerRequest) returns (Empty)

Deletes a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteStoredInfoType

rpc DeleteStoredInfoType(DeleteStoredInfoTypeRequest) returns (Empty)

Deletes a stored infoType. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteTableDataProfile

rpc DeleteTableDataProfile(DeleteTableDataProfileRequest) returns (Empty)

Delete a TableDataProfile. Will not prevent the profile from being regenerated if the table is still included in a discovery configuration.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

FinishDlpJob

rpc FinishDlpJob(FinishDlpJobRequest) returns (Empty)

Finish a running hybrid DlpJob. Triggers the finalization steps and running of any enabled actions that have not yet run.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetColumnDataProfile

rpc GetColumnDataProfile(GetColumnDataProfileRequest) returns (ColumnDataProfile)

Gets a column data profile.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetConnection

rpc GetConnection(GetConnectionRequest) returns (Connection)

Get a Connection by name.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetDeidentifyTemplate

rpc GetDeidentifyTemplate(GetDeidentifyTemplateRequest) returns (DeidentifyTemplate)

Gets a DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetDiscoveryConfig

rpc GetDiscoveryConfig(GetDiscoveryConfigRequest) returns (DiscoveryConfig)

Gets a discovery configuration.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetDlpJob

rpc GetDlpJob(GetDlpJobRequest) returns (DlpJob)

Gets the latest state of a long-running DlpJob. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetFileStoreDataProfile

rpc GetFileStoreDataProfile(GetFileStoreDataProfileRequest) returns (FileStoreDataProfile)

Gets a file store data profile.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetInspectTemplate

rpc GetInspectTemplate(GetInspectTemplateRequest) returns (InspectTemplate)

Gets an InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetJobTrigger

rpc GetJobTrigger(GetJobTriggerRequest) returns (JobTrigger)

Gets a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetProjectDataProfile

rpc GetProjectDataProfile(GetProjectDataProfileRequest) returns (ProjectDataProfile)

Gets a project data profile.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetStoredInfoType

rpc GetStoredInfoType(GetStoredInfoTypeRequest) returns (StoredInfoType)

Gets a stored infoType. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetTableDataProfile

rpc GetTableDataProfile(GetTableDataProfileRequest) returns (TableDataProfile)

Gets a table data profile.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

HybridInspectDlpJob

rpc HybridInspectDlpJob(HybridInspectDlpJobRequest) returns (HybridInspectResponse)

Inspect hybrid content and store findings in a job. Inspection occurs asynchronously. To review the findings, inspect the job.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

HybridInspectJobTrigger

rpc HybridInspectJobTrigger(HybridInspectJobTriggerRequest) returns (HybridInspectResponse)

Inspect hybrid content and store findings in a trigger. The inspection is processed asynchronously. To review the findings, monitor the jobs within the trigger.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

InspectContent

rpc InspectContent(InspectContentRequest) returns (InspectContentResponse)

Finds potentially sensitive info in content. This method has limits on input size, processing time, and output size.

When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.

For how-to guides, see https://cloud.google.com/sensitive-data-protection/docs/inspecting-images and https://cloud.google.com/sensitive-data-protection/docs/inspecting-text.
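A sketch of a minimal inspection request body over REST (camelCase field names; the sample text is invented):

```python
# Sketch of an InspectContent REST request body
# (POST .../v2/projects/{project}/content:inspect).
inspect_request = {
    "item": {"value": "My phone number is (555) 253-0000."},
    "inspectConfig": {
        "infoTypes": [{"name": "PHONE_NUMBER"}],
        "includeQuote": True,  # return the matched text in each finding
    },
}
```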

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListColumnDataProfiles

rpc ListColumnDataProfiles(ListColumnDataProfilesRequest) returns (ListColumnDataProfilesResponse)

Lists column data profiles for an organization.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListConnections

rpc ListConnections(ListConnectionsRequest) returns (ListConnectionsResponse)

Lists Connections in a parent. Use SearchConnections to see all connections within an organization.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListDeidentifyTemplates

rpc ListDeidentifyTemplates(ListDeidentifyTemplatesRequest) returns (ListDeidentifyTemplatesResponse)

Lists DeidentifyTemplates. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListDiscoveryConfigs

rpc ListDiscoveryConfigs(ListDiscoveryConfigsRequest) returns (ListDiscoveryConfigsResponse)

Lists discovery configurations.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListDlpJobs

rpc ListDlpJobs(ListDlpJobsRequest) returns (ListDlpJobsResponse)

Lists DlpJobs that match the specified filter in the request. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListFileStoreDataProfiles

rpc ListFileStoreDataProfiles(ListFileStoreDataProfilesRequest) returns (ListFileStoreDataProfilesResponse)

Lists file store data profiles for an organization.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListInfoTypes

rpc ListInfoTypes(ListInfoTypesRequest) returns (ListInfoTypesResponse)

Returns a list of the sensitive information types that the DLP API supports. See https://cloud.google.com/sensitive-data-protection/docs/infotypes-reference to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListInspectTemplates

rpc ListInspectTemplates(ListInspectTemplatesRequest) returns (ListInspectTemplatesResponse)

Lists InspectTemplates. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListJobTriggers

rpc ListJobTriggers(ListJobTriggersRequest) returns (ListJobTriggersResponse)

Lists job triggers. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListProjectDataProfiles

rpc ListProjectDataProfiles(ListProjectDataProfilesRequest) returns (ListProjectDataProfilesResponse)

Lists project data profiles for an organization.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListStoredInfoTypes

rpc ListStoredInfoTypes(ListStoredInfoTypesRequest) returns (ListStoredInfoTypesResponse)

Lists stored infoTypes. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListTableDataProfiles

rpc ListTableDataProfiles(ListTableDataProfilesRequest) returns (ListTableDataProfilesResponse)

Lists table data profiles for an organization.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

RedactImage

rpc RedactImage(RedactImageRequest) returns (RedactImageResponse)

Redacts potentially sensitive info from an image. This method has limits on input size, processing time, and output size. See https://cloud.google.com/sensitive-data-protection/docs/redacting-sensitive-data-images to learn more.

When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
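A sketch of the REST request shape (camelCase field names): image bytes are sent base64-encoded in a byteItem, and redaction is configured per infoType. The PNG bytes below are a placeholder, not a real image.

```python
import base64

# Sketch of a RedactImage REST request body
# (POST .../v2/projects/{project}/image:redact).
fake_png_bytes = b"\x89PNG..."  # placeholder, not a valid PNG
redact_request = {
    "byteItem": {
        "type": "IMAGE_PNG",
        "data": base64.b64encode(fake_png_bytes).decode("ascii"),
    },
    # Omitting infoTypes would let the service pick detectors automatically.
    "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}]},
    "imageRedactionConfigs": [
        {"infoType": {"name": "EMAIL_ADDRESS"}, "redactionColor": {"red": 1.0}}
    ],
}
```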

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ReidentifyContent

rpc ReidentifyContent(ReidentifyContentRequest) returns (ReidentifyContentResponse)

Re-identifies content that has been de-identified. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization#re-identification_in_free_text_code_example to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

SearchConnections

rpc SearchConnections(SearchConnectionsRequest) returns (SearchConnectionsResponse)

Searches for Connections in a parent.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateConnection

rpc UpdateConnection(UpdateConnectionRequest) returns (Connection)

Update a Connection.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateDeidentifyTemplate

rpc UpdateDeidentifyTemplate(UpdateDeidentifyTemplateRequest) returns (DeidentifyTemplate)

Updates the DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateDiscoveryConfig

rpc UpdateDiscoveryConfig(UpdateDiscoveryConfigRequest) returns (DiscoveryConfig)

Updates a discovery configuration.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateInspectTemplate

rpc UpdateInspectTemplate(UpdateInspectTemplateRequest) returns (InspectTemplate)

Updates the InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateJobTrigger

rpc UpdateJobTrigger(UpdateJobTriggerRequest) returns (JobTrigger)

Updates a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateStoredInfoType

rpc UpdateStoredInfoType(UpdateStoredInfoTypeRequest) returns (StoredInfoType)

Updates the stored infoType by creating a new version. The existing version will continue to be used until the new version is ready. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

Action

A task to execute on the completion of a job. See https://cloud.google.com/sensitive-data-protection/docs/concepts-actions to learn more.

Fields
Union field action. Extra events to execute after the job has finished. action can be only one of the following:
save_findings

SaveFindings

Save resulting findings in a provided location.

pub_sub

PublishToPubSub

Publish a notification to a Pub/Sub topic.

publish_summary_to_cscc

PublishSummaryToCscc

Publish summary to Cloud Security Command Center (Alpha).

publish_findings_to_cloud_data_catalog

PublishFindingsToCloudDataCatalog

Publish findings to Data Catalog.

deidentify

Deidentify

Create a de-identified copy of the input data.

job_notification_emails

JobNotificationEmails

Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts.

publish_to_stackdriver

PublishToStackdriver

Enable Stackdriver metric dlp.googleapis.com/finding_count.

Deidentify

Create a de-identified copy of the requested table or files.

A TransformationDetail will be created for each transformation.

If any rows in BigQuery are skipped during de-identification (because of transformation errors or because the row size exceeds BigQuery insert API limits), they are placed in the failure output table. If the original row exceeds the BigQuery insert API limit, it is truncated when written to the failure output table. The failure output table can be set in the action.deidentify.output.big_query_output.deidentified_failure_output_table field. If no table is set, a table is created automatically in the same project and dataset as the original table.

Compatible with: Inspect

Fields
transformation_config

TransformationConfig

User specified deidentify templates and configs for structured, unstructured, and image files.

transformation_details_storage_config

TransformationDetailsStorageConfig

Config for storing transformation details. This is separate from the de-identified content, and contains metadata about the successful transformations and/or failures that occurred while de-identifying. This needs to be set in order for users to access information about the status of each transformation (see TransformationDetails message for more information about what is noted).

file_types_to_transform[]

FileType

List of user-specified file type groups to transform. If specified, only files of these types will be transformed. If empty, all supported files will be transformed. Supported types may be automatically added over time. If this field includes a file type that the Deidentify action doesn't support, the job will fail and will not be created or started. Currently the only supported file types are: IMAGES, TEXT_FILES, CSV, TSV.

Union field output. Where to store the output. output can be only one of the following:
cloud_storage_output

string

Required. User-settable Cloud Storage bucket and folders to store de-identified files. This field must be set for Cloud Storage de-identification. The output Cloud Storage bucket must be different from the input bucket. De-identified files will overwrite files in the output path.

Form of: gs://bucket/folder/ or gs://bucket
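The two accepted path forms can be checked with a small helper; the regex below is our own sketch of those two forms, not part of the API.

```python
import re

# Accepts "gs://bucket" or "gs://bucket/folder/" (trailing slash required
# when folders are given), per the forms listed above.
_GS_PATH = re.compile(r"^gs://[^/]+(/.*/)?$")

def is_valid_output_path(path: str) -> bool:
    return bool(_GS_PATH.match(path))
```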

JobNotificationEmails

This type has no fields.

Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts.

PublishFindingsToCloudDataCatalog

This type has no fields.

Publish findings of a DlpJob to Data Catalog. In Data Catalog, tag templates are applied to the resource that Cloud DLP scanned. Data Catalog tag templates are stored in the same project and region where the BigQuery table exists. For Cloud DLP to create and apply the tag template, the Cloud DLP service agent must have the roles/datacatalog.tagTemplateOwner permission on the project. The tag template contains fields summarizing the results of the DlpJob. Any field values previously written by another DlpJob are deleted. InfoType naming patterns are strictly enforced when using this feature.

Findings are persisted in Data Catalog storage and are governed by service-specific policies for Data Catalog. For more information, see Service Specific Terms.

Only a single instance of this action can be specified. This action is allowed only if all resources being scanned are BigQuery tables. Compatible with: Inspect

PublishSummaryToCscc

This type has no fields.

Publish the result summary of a DlpJob to Security Command Center. This action is available only for projects that belong to an organization. This action publishes the count of finding instances and their infoTypes. The summary of findings is persisted in Security Command Center and is governed by service-specific policies for Security Command Center. Only a single instance of this action can be specified. Compatible with: Inspect

PublishToPubSub

Publish a message into a given Pub/Sub topic when the DlpJob has completed. The message contains a single field, DlpJobName, which is equal to the finished job's DlpJob.name. Compatible with: Inspect, Risk

Fields
topic

string

Cloud Pub/Sub topic to send notifications to. The topic must grant publishing access to the DLP API service account that executes the long-running DlpJob and sends the notifications. Format is projects/{project}/topics/{topic}.
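As a sketch, the action would appear in a job's actions list like this (REST camelCase field names; the project and topic names are placeholders):

```python
# Sketch of a PublishToPubSub action entry inside a job's "actions" list.
pubsub_action = {
    "pubSub": {"topic": "projects/example-project/topics/dlp-job-notifications"}
}
```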

PublishToStackdriver

This type has no fields.

Enable Stackdriver metric dlp.googleapis.com/finding_count. This publishes a metric to Stackdriver for each requested infoType, recording how many findings were found for it. CustomDetectors will be bucketed as 'Custom' under the Stackdriver label 'info_type'.

SaveFindings

If set, the detailed findings will be persisted to the specified OutputStorageConfig. Only a single instance of this action can be specified. Compatible with: Inspect, Risk

Fields
output_config

OutputStorageConfig

Location to store findings outside of DLP.

ActionDetails

The results of an Action.

Fields
Union field details. Summary of what occurred in the actions. details can be only one of the following:
deidentify_details

DeidentifyDataSourceDetails

Outcome of a de-identification action.

ActivateJobTriggerRequest

Request message for ActivateJobTrigger.

Fields
name

string

Required. Resource name of the trigger to activate, for example projects/dlp-test-project/jobTriggers/53234423.

Authorization requires one or more of the following IAM permissions on the specified resource name:

  • dlp.jobTriggers.get
  • dlp.jobs.create

AllOtherDatabaseResources

This type has no fields.

Match database resources not covered by any other filter.

AllOtherResources

This type has no fields.

Match discovery resources not covered by any other filter.

AmazonS3Bucket

Amazon S3 bucket.

Fields
aws_account

AwsAccount

The AWS account.

bucket_name

string

Required. The bucket name.

AmazonS3BucketConditions

Amazon S3 bucket conditions.

Fields
bucket_types[]

BucketType

Optional. Bucket types that should be profiled. Defaults to TYPE_ALL_SUPPORTED if unspecified.

object_storage_classes[]

ObjectStorageClass

Optional. Object classes that should be profiled. Defaults to ALL_SUPPORTED_CLASSES if unspecified.

BucketType

Supported Amazon S3 bucket types. Defaults to TYPE_ALL_SUPPORTED.

Enums
TYPE_UNSPECIFIED Unused.
TYPE_ALL_SUPPORTED All supported bucket types.
TYPE_GENERAL_PURPOSE A general purpose Amazon S3 bucket.

ObjectStorageClass

Supported Amazon S3 object storage classes. Defaults to ALL_SUPPORTED_CLASSES.

Enums
UNSPECIFIED Unused.
ALL_SUPPORTED_CLASSES All supported classes.
STANDARD Standard object class.
STANDARD_INFREQUENT_ACCESS Standard - infrequent access object class.
GLACIER_INSTANT_RETRIEVAL Glacier - instant retrieval object class.
INTELLIGENT_TIERING Objects in the S3 Intelligent-Tiering access tiers.

AmazonS3BucketRegex

Amazon S3 bucket regex.

Fields
aws_account_regex

AwsAccountRegex

The AWS account regex.

bucket_name_regex

string

Optional. Regex to test the bucket name against. If empty, all buckets match.

AnalyzeDataSourceRiskDetails

Result of a risk analysis operation request.

Fields
requested_privacy_metric

PrivacyMetric

Privacy metric to compute.

requested_source_table

BigQueryTable

Input dataset to compute metrics over.

requested_options

RequestedRiskAnalysisOptions

The configuration used for this job.

Union field result. Values associated with this metric. result can be only one of the following:
numerical_stats_result

NumericalStatsResult

Numerical stats result

categorical_stats_result

CategoricalStatsResult

Categorical stats result

k_anonymity_result

KAnonymityResult

K-anonymity result

l_diversity_result

LDiversityResult

L-diversity result

k_map_estimation_result

KMapEstimationResult

K-map result

delta_presence_estimation_result

DeltaPresenceEstimationResult

Delta-presence result

CategoricalStatsResult

Result of the categorical stats computation.

Fields
value_frequency_histogram_buckets[]

CategoricalStatsHistogramBucket

Histogram of value frequencies in the column.

CategoricalStatsHistogramBucket

Histogram of value frequencies in the column.

Fields
value_frequency_lower_bound

int64

Lower bound on the value frequency of the values in this bucket.

value_frequency_upper_bound

int64

Upper bound on the value frequency of the values in this bucket.

bucket_size

int64

Total number of values in this bucket.

bucket_values[]

ValueFrequency

Sample of value frequencies in this bucket. The total number of values returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct values in this bucket.
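As background for this histogram, a value frequency is simply how many times each distinct value occurs in the column; buckets then group values whose frequencies fall between the bucket's bounds. A small sketch (plain Python, not an API call):

```python
from collections import Counter

# Frequency of each distinct value in a column.
freqs = Counter(["a", "b", "a", "c", "a", "b"])

# Values whose frequency lies in [lower_bound, upper_bound] would land
# in the corresponding histogram bucket.
in_bucket = [v for v, f in freqs.items() if 2 <= f <= 3]
assert freqs["a"] == 3
assert sorted(in_bucket) == ["a", "b"]
```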

DeltaPresenceEstimationResult

Result of the δ-presence computation. Note that these results are an estimation, not exact values.

Fields
delta_presence_estimation_histogram[]

DeltaPresenceEstimationHistogramBucket

The intervals [min_probability, max_probability) do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records:

  • {min_probability: 0, max_probability: 0.1, frequency: 17}
  • {min_probability: 0.2, max_probability: 0.3, frequency: 42}
  • {min_probability: 0.3, max_probability: 0.4, frequency: 99}

mean that there are no records with an estimated probability in [0.1, 0.2) or greater than or equal to 0.4.

DeltaPresenceEstimationHistogramBucket

A DeltaPresenceEstimationHistogramBucket message with the following values:

  • min_probability: 0.1
  • max_probability: 0.2
  • frequency: 42

means that there are 42 records for which δ is in [0.1, 0.2). An important particular case is when min_probability = max_probability = 1: then, every individual who shares this quasi-identifier combination is in the dataset.

Fields
min_probability

double

Between 0 and 1.

max_probability

double

Always greater than or equal to min_probability.

bucket_size

int64

Number of records within these probability bounds.

bucket_values[]

DeltaPresenceEstimationQuasiIdValues

Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct quasi-identifier tuple values in this bucket.

DeltaPresenceEstimationQuasiIdValues

A tuple of values for the quasi-identifier columns.

Fields
quasi_ids_values[]

Value

The quasi-identifier values.

estimated_probability

double

The estimated probability that a given individual sharing these quasi-identifier values is in the dataset. This value, typically called δ, is the ratio between the number of records in the dataset with these quasi-identifier values, and the total number of individuals (inside and outside the dataset) with these quasi-identifier values. For example, if there are 15 individuals in the dataset who share the same quasi-identifier values, and an estimated 100 people in the entire population with these values, then δ is 0.15.
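The ratio described above is straightforward to compute; the sketch below uses hypothetical counts (`records_in_dataset`, `population_with_values`), not API fields:

```python
def estimated_delta(records_in_dataset: int, population_with_values: int) -> float:
    """δ: the ratio between dataset records sharing these quasi-identifier
    values and all individuals (inside and outside the dataset) sharing them."""
    return records_in_dataset / population_with_values

# The example from the description: 15 dataset records, 100 people overall.
assert estimated_delta(15, 100) == 0.15
```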

KAnonymityResult

Result of the k-anonymity computation.

Fields
equivalence_class_histogram_buckets[]

KAnonymityHistogramBucket

Histogram of k-anonymity equivalence classes.

KAnonymityEquivalenceClass

The set of columns' values that share the same k-anonymity value.

Fields
quasi_ids_values[]

Value

Set of values defining the equivalence class. One value per quasi-identifier column in the original KAnonymity metric message. The order is always the same as the original request.

equivalence_class_size

int64

Size of the equivalence class, for example, the number of rows with the above set of values.
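As an illustration of how equivalence classes arise, grouping rows by their quasi-identifier tuple yields the class sizes; the helper below is a sketch, not part of the API:

```python
from collections import Counter

def equivalence_class_sizes(rows, quasi_id_columns):
    """Group rows by their quasi-identifier tuple; each group's size is
    the equivalence class size (the k in k-anonymity)."""
    return Counter(tuple(row[c] for c in quasi_id_columns) for row in rows)

rows = [
    {"zip": "94043", "age": 34, "name": "alice"},
    {"zip": "94043", "age": 34, "name": "bob"},
    {"zip": "10001", "age": 51, "name": "carol"},
]
sizes = equivalence_class_sizes(rows, ["zip", "age"])
assert sizes[("94043", 34)] == 2  # two rows share this quasi-identifier tuple
assert sizes[("10001", 51)] == 1
```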

KAnonymityHistogramBucket

Histogram of k-anonymity equivalence classes.

Fields
equivalence_class_size_lower_bound

int64

Lower bound on the size of the equivalence classes in this bucket.

equivalence_class_size_upper_bound

int64

Upper bound on the size of the equivalence classes in this bucket.

bucket_size

int64

Total number of equivalence classes in this bucket.

bucket_values[]

KAnonymityEquivalenceClass

Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct equivalence classes in this bucket.

KMapEstimationResult

Result of the reidentifiability analysis. Note that these results are an estimation, not exact values.

Fields
k_map_estimation_histogram[]

KMapEstimationHistogramBucket

The intervals [min_anonymity, max_anonymity] do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records:

  • {min_anonymity: 1, max_anonymity: 1, frequency: 17}
  • {min_anonymity: 2, max_anonymity: 3, frequency: 42}
  • {min_anonymity: 5, max_anonymity: 10, frequency: 99}

mean that there are no records with an estimated anonymity of 4 or greater than 10.

KMapEstimationHistogramBucket

A KMapEstimationHistogramBucket message with the following values:

  • min_anonymity: 3
  • max_anonymity: 5
  • frequency: 42

means that there are 42 records whose quasi-identifier values correspond to 3, 4, or 5 people in the overlying population. An important particular case is when min_anonymity = max_anonymity = 1: the frequency field then corresponds to the number of uniquely identifiable records.

Fields
min_anonymity

int64

Always positive.

max_anonymity

int64

Always greater than or equal to min_anonymity.

bucket_size

int64

Number of records within these anonymity bounds.

bucket_values[]

KMapEstimationQuasiIdValues

Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct quasi-identifier tuple values in this bucket.
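The particular case called out above (min_anonymity = max_anonymity = 1) can be read off a histogram directly; this sketch represents buckets as plain dicts using the message's field names:

```python
def uniquely_identifiable(buckets):
    """Sum bucket_size over buckets where min_anonymity == max_anonymity == 1;
    these count records whose quasi-identifier values map to a single person."""
    return sum(b["bucket_size"] for b in buckets
               if b["min_anonymity"] == b["max_anonymity"] == 1)

histogram = [
    {"min_anonymity": 1, "max_anonymity": 1, "bucket_size": 17},
    {"min_anonymity": 2, "max_anonymity": 3, "bucket_size": 42},
]
assert uniquely_identifiable(histogram) == 17
```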

KMapEstimationQuasiIdValues

A tuple of values for the quasi-identifier columns.

Fields
quasi_ids_values[]

Value

The quasi-identifier values.

estimated_anonymity

int64

The estimated anonymity for these quasi-identifier values.

LDiversityResult

Result of the l-diversity computation.

Fields
sensitive_value_frequency_histogram_buckets[]

LDiversityHistogramBucket

Histogram of l-diversity equivalence class sensitive value frequencies.

LDiversityEquivalenceClass

The set of columns' values that share the same l-diversity value.

Fields
quasi_ids_values[]

Value

Quasi-identifier values defining the k-anonymity equivalence class. The order is always the same as the original request.

equivalence_class_size

int64

Size of the k-anonymity equivalence class.

num_distinct_sensitive_values

int64

Number of distinct sensitive values in this equivalence class.

top_sensitive_values[]

ValueFrequency

Estimated frequencies of top sensitive values.
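To illustrate the quantities in this message, the following sketch groups rows into k-anonymity equivalence classes and counts distinct sensitive values per class (a hypothetical helper, not an API call):

```python
from collections import defaultdict

def l_diversity_per_class(rows, quasi_id_columns, sensitive_column):
    """For each equivalence class (rows sharing quasi-identifier values),
    count the distinct sensitive values it contains."""
    classes = defaultdict(set)
    for row in rows:
        key = tuple(row[c] for c in quasi_id_columns)
        classes[key].add(row[sensitive_column])
    return {k: len(v) for k, v in classes.items()}

rows = [
    {"zip": "94043", "diagnosis": "flu"},
    {"zip": "94043", "diagnosis": "cold"},
    {"zip": "10001", "diagnosis": "flu"},
]
div = l_diversity_per_class(rows, ["zip"], "diagnosis")
assert div[("94043",)] == 2 and div[("10001",)] == 1
```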

LDiversityHistogramBucket

Histogram of l-diversity equivalence class sensitive value frequencies.

Fields
sensitive_value_frequency_lower_bound

int64

Lower bound on the sensitive value frequencies of the equivalence classes in this bucket.

sensitive_value_frequency_upper_bound

int64

Upper bound on the sensitive value frequencies of the equivalence classes in this bucket.

bucket_size

int64

Total number of equivalence classes in this bucket.

bucket_values[]

LDiversityEquivalenceClass

Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20.

bucket_value_count

int64

Total number of distinct equivalence classes in this bucket.

NumericalStatsResult

Result of the numerical stats computation.

Fields
min_value

Value

Minimum value appearing in the column.

max_value

Value

Maximum value appearing in the column.

quantile_values[]

Value

List of 99 values that partition the set of field values into 100 equal-sized buckets.
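A rough sketch of such a quantile partition over a sorted column (nearest-rank style; the service's exact interpolation method isn't specified here):

```python
def quantile_values(values, n_buckets=100):
    """Return n_buckets - 1 boundary values that split the sorted values
    into n_buckets roughly equal-sized groups (99 values for percentiles)."""
    s = sorted(values)
    return [s[len(s) * i // n_buckets] for i in range(1, n_buckets)]

bounds = quantile_values(range(1000))
assert len(bounds) == 99
assert bounds[49] == 500  # the median boundary for 0..999
```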

RequestedRiskAnalysisOptions

Risk analysis options.

Fields
job_config

RiskAnalysisJobConfig

The job config for the risk job.

AwsAccount

AWS account.

Fields
account_id

string

Required. AWS account ID.

AwsAccountRegex

AWS account regex.

Fields
account_id_regex

string

Optional. Regex to test the AWS account ID against. If empty, all accounts match.

BigQueryDiscoveryTarget

Target used to match against for discovery with BigQuery tables

Fields
filter

DiscoveryBigQueryFilter

Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table.

conditions

DiscoveryBigQueryConditions

In addition to matching the filter, these conditions must be true before a profile is generated.

Union field frequency. The generation rule includes the logic on how frequently to update the data profiles. If not specified, discovery will re-run and update no more than once a month if new columns appear in the table. frequency can be only one of the following:
cadence

DiscoveryGenerationCadence

How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity.

disabled

Disabled

Tables that match this filter will not have profiles created.

BigQueryField

Message defining a field of a BigQuery table.

Fields
table

BigQueryTable

Source table of the field.

field

FieldId

Designated field in the BigQuery table.

BigQueryKey

Row key for identifying a record in BigQuery table.

Fields
table_reference

BigQueryTable

Complete BigQuery table reference.

row_number

int64

Row number inferred at the time the table was scanned. This value is nondeterministic, cannot be queried, and may be null for inspection jobs. To locate findings within a table, specify inspect_job.storage_config.big_query_options.identifying_fields in CreateDlpJobRequest.

BigQueryOptions

Options defining BigQuery table and row identifiers.

Fields
table_reference

BigQueryTable

Complete BigQuery table reference.

identifying_fields[]

FieldId

Table fields that may uniquely identify a row within the table. When actions.saveFindings.outputConfig.table is specified, the values of columns specified here are available in the output table under location.content_locations.record_location.record_key.id_values. Nested fields such as person.birthdate.year are allowed.

rows_limit

int64

Max number of rows to scan. If the table has more rows than this value, the rest of the rows are omitted. If not set, or if set to 0, all rows will be scanned. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig.

rows_limit_percent

int32

Max percentage of rows to scan. The rest are omitted. The number of rows scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit. Defaults to 0. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig.

Caution: A known issue is causing the rowsLimitPercent field to behave unexpectedly. We recommend using rowsLimit instead.

sample_method

SampleMethod

How to sample the data.

excluded_fields[]

FieldId

References to fields excluded from scanning. This allows you to skip inspection of entire columns which you know have no findings. When inspecting a table, we recommend that you inspect all columns. Otherwise, findings might be affected because hints from excluded columns will not be used.

included_fields[]

FieldId

Limit scanning only to these fields. When inspecting a table, we recommend that you inspect all columns. Otherwise, findings might be affected because hints from excluded columns will not be used.

SampleMethod

How to sample rows if not all rows are scanned. Meaningful only when used in conjunction with either rows_limit or rows_limit_percent. If not specified, rows are scanned in the order BigQuery reads them.

Enums
SAMPLE_METHOD_UNSPECIFIED No sampling.
TOP Scan groups of rows in the order BigQuery provides (default). Multiple groups of rows may be scanned in parallel, so results may not appear in the same order the rows are read.
RANDOM_START Randomly pick groups of rows to scan.

BigQueryRegex

A pattern to match against one or more tables, datasets, or projects that contain BigQuery tables. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

Fields
project_id_regex

string

For organizations, if unset, will match all projects. Has no effect for data profile configurations created within a project.

dataset_id_regex

string

If unset, this property matches all datasets.

table_id_regex

string

If unset, this property matches all tables.

BigQueryRegexes

A collection of regular expressions to determine what tables to match against.

Fields
patterns[]

BigQueryRegex

A single BigQuery regular expression pattern to match against one or more tables, datasets, or projects that contain BigQuery tables.

BigQuerySchemaModification

Attributes evaluated to determine if a schema has been modified. New values may be added at a later time.

Enums
SCHEMA_MODIFICATION_UNSPECIFIED Unused
SCHEMA_NEW_COLUMNS Profiles should be regenerated when new columns are added to the table. Default.
SCHEMA_REMOVED_COLUMNS Profiles should be regenerated when columns are removed from the table.

BigQueryTable

Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: <project_id>:<dataset_id>.<table_id> or <project_id>.<dataset_id>.<table_id>.

Fields
project_id

string

The Google Cloud project ID of the project containing the table. If omitted, project ID is inferred from the API call.

dataset_id

string

Dataset ID of the table.

table_id

string

Name of the table.
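Splitting the string forms mentioned above into the three fields can be sketched as follows; this simple parser assumes the IDs themselves contain no : or . separators:

```python
def parse_table_reference(ref: str) -> dict:
    """Split a '<project_id>.<dataset_id>.<table_id>' or
    '<project_id>:<dataset_id>.<table_id>' string into its parts."""
    project, _, rest = ref.replace(":", ".", 1).partition(".")
    dataset, _, table = rest.partition(".")
    return {"project_id": project, "dataset_id": dataset, "table_id": table}

assert parse_table_reference("my-proj:my_ds.my_table") == {
    "project_id": "my-proj", "dataset_id": "my_ds", "table_id": "my_table"}
```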

BigQueryTableCollection

Specifies a collection of BigQuery tables. Used for Discovery.

Fields
Union field pattern. Maximum of 100 entries. The first filter containing a pattern that matches a table will be used. pattern can be only one of the following:
include_regexes

BigQueryRegexes

A collection of regular expressions to match a BigQuery table against.

BigQueryTableModification

Attributes evaluated to determine if a table has been modified. New values may be added at a later time.

Enums
TABLE_MODIFICATION_UNSPECIFIED Unused.
TABLE_MODIFIED_TIMESTAMP A table will be considered modified when the last_modified_time from BigQuery has been updated.

BigQueryTableType

Over time new types may be added. Currently VIEW, MATERIALIZED_VIEW, and non-BigLake external tables are not supported.

Enums
BIG_QUERY_TABLE_TYPE_UNSPECIFIED Unused.
BIG_QUERY_TABLE_TYPE_TABLE A normal BigQuery table.
BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE A table that references data stored in Cloud Storage.
BIG_QUERY_TABLE_TYPE_SNAPSHOT A snapshot of a BigQuery table.

BigQueryTableTypeCollection

Over time new types may be added. Currently VIEW, MATERIALIZED_VIEW, and non-BigLake external tables are not supported.

Enums
BIG_QUERY_COLLECTION_UNSPECIFIED Unused.
BIG_QUERY_COLLECTION_ALL_TYPES Automatically generate profiles for all tables, even if the table type is not yet fully supported for analysis. Profiles for unsupported tables will be generated with errors to indicate their partial support. When full support is added, the tables will automatically be profiled during the next scheduled run.
BIG_QUERY_COLLECTION_ONLY_SUPPORTED_TYPES Only those types fully supported will be profiled. Will expand automatically as Cloud DLP adds support for new table types. Unsupported table types will not have partial profiles generated.

BigQueryTableTypes

The types of BigQuery tables supported by Cloud DLP.

Fields
types[]

BigQueryTableType

A set of BigQuery table types.

BoundingBox

Bounding box encompassing detected text within an image.

Fields
top

int32

Top coordinate of the bounding box. (0,0) is upper left.

left

int32

Left coordinate of the bounding box. (0,0) is upper left.

width

int32

Width of the bounding box in pixels.

height

int32

Height of the bounding box in pixels.

BucketingConfig

Generalization function that buckets values based on ranges. The ranges and replacement values are dynamically provided by the user for custom behavior, such as 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH.

This can be used on data of type: number, long, string, timestamp.

If the bound Value type differs from the type of data being transformed, we will first attempt converting the type of the data to be transformed to match the type of the bound before comparing. See https://cloud.google.com/sensitive-data-protection/docs/concepts-bucketing to learn more.

Fields
buckets[]

Bucket

Set of buckets. Ranges must be non-overlapping.

Bucket

Bucket is represented as a range, along with replacement values.

Fields
min

Value

Lower bound of the range, inclusive. Type should be the same as max if used.

max

Value

Upper bound of the range, exclusive; type must match min.

replacement_value

Value

Required. Replacement value for this bucket.
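The 1-30 -> LOW style generalization described above can be sketched as a first-match lookup over [min, max) ranges; the pass-through behavior for unmatched values is an assumption of this sketch:

```python
def bucketize(value, buckets):
    """Replace value with the replacement_value of the first bucket whose
    [min, max) range contains it; ranges must be non-overlapping."""
    for b in buckets:
        if b["min"] <= value < b["max"]:
            return b["replacement_value"]
    return value  # assumption: values outside every range pass through

buckets = [
    {"min": 1, "max": 31, "replacement_value": "LOW"},
    {"min": 31, "max": 66, "replacement_value": "MEDIUM"},
    {"min": 66, "max": 101, "replacement_value": "HIGH"},
]
assert bucketize(25, buckets) == "LOW"
assert bucketize(66, buckets) == "HIGH"
```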

ByteContentItem

Container for bytes to inspect or redact.

Fields
type

BytesType

The type of data stored in the bytes string. Default will be TEXT_UTF8.

data

bytes

Content data to inspect or redact.

BytesType

The type of data being sent for inspection. To learn more, see Supported file types.

Enums
BYTES_TYPE_UNSPECIFIED Unused
IMAGE Any image type.
IMAGE_JPEG jpeg
IMAGE_BMP bmp
IMAGE_PNG png
IMAGE_SVG svg
TEXT_UTF8 plain text
WORD_DOCUMENT docx, docm, dotx, dotm
PDF pdf
POWERPOINT_DOCUMENT pptx, pptm, potx, potm, pot
EXCEL_DOCUMENT xlsx, xlsm, xltx, xltm
AVRO avro
CSV csv
TSV tsv
AUDIO Audio file types. Only used for profiling.
VIDEO Video file types. Only used for profiling.
EXECUTABLE Executable file types. Only used for profiling.

CancelDlpJobRequest

The request message for canceling a DLP job.

Fields
name

string

Required. The name of the DlpJob resource to be cancelled.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobs.cancel

CharacterMaskConfig

Partially mask a string by replacing a given number of characters with a fixed character. Masking can start from the beginning or end of the string. This can be used on data of any type (numbers, longs, and so on) and when de-identifying structured data we'll attempt to preserve the original data's type. (This allows you to take a long like 123 and modify it to a string like **3.)

Fields
masking_character

string

Character to use to mask the sensitive values—for example, * for an alphabetic string such as a name, or 0 for a numeric string such as ZIP code or credit card number. This string must have a length of 1. If not supplied, this value defaults to * for strings, and 0 for digits.

number_to_mask

int32

Number of characters to mask. If not set, all matching chars will be masked. Skipped characters do not count towards this tally.

If number_to_mask is negative, this denotes inverse masking. Cloud DLP masks all but a number of characters. For example, suppose you have the following values:

  • masking_character is *
  • number_to_mask is -4
  • reverse_order is false
  • CharsToIgnore includes -
  • Input string is 1234-5678-9012-3456

The resulting de-identified string is ****-****-****-3456. Cloud DLP masks all but the last four characters. If reverse_order is true, all but the first four characters are masked as 1234-****-****-****.

reverse_order

bool

Mask characters in reverse order. For example, if masking_character is 0, number_to_mask is 14, and reverse_order is false, then the input string 1234-5678-9012-3456 is masked as 00000000000000-3456. If masking_character is *, number_to_mask is 3, and reverse_order is true, then the string 12345 is masked as 12***.

characters_to_ignore[]

CharsToIgnore

When masking a string, items in this list will be skipped when replacing characters. For example, if the input string is 555-555-5555 and you instruct Cloud DLP to skip - and mask 5 characters with *, Cloud DLP returns ***-**5-5555.
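The masking rules described above (number_to_mask, inverse masking, reverse_order, skipped characters) can be approximated in a short sketch; this illustrates the documented semantics and is not the service's implementation:

```python
def mask(value, masking_character="*", number_to_mask=0,
         reverse_order=False, chars_to_ignore=""):
    """Mask up to number_to_mask non-ignored characters (all if 0);
    a negative number_to_mask masks all but that many characters."""
    chars = list(value)
    # Ignored characters never count toward the tally and are never masked.
    idxs = [i for i, c in enumerate(chars) if c not in chars_to_ignore]
    if reverse_order:
        idxs.reverse()
    if number_to_mask == 0:
        to_mask = idxs
    elif number_to_mask > 0:
        to_mask = idxs[:number_to_mask]
    else:  # inverse masking: leave abs(number_to_mask) characters visible
        to_mask = idxs[:len(idxs) + number_to_mask]
    for i in to_mask:
        chars[i] = masking_character
    return "".join(chars)

# Examples from the descriptions above:
assert mask("1234-5678-9012-3456", "*", -4, False, "-") == "****-****-****-3456"
assert mask("1234-5678-9012-3456", "*", -4, True, "-") == "1234-****-****-****"
assert mask("555-555-5555", "*", 5, False, "-") == "***-**5-5555"
```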

CharsToIgnore

Characters to skip when de-identifying a value. These will be left alone and skipped.

Fields
Union field characters. Type of characters to skip. characters can be only one of the following:
characters_to_skip

string

Characters to not transform when masking.

common_characters_to_ignore

CommonCharsToIgnore

Common characters to not transform when masking. Useful to avoid removing punctuation.

CommonCharsToIgnore

Convenience enum for indicating common characters to not transform.

Enums
COMMON_CHARS_TO_IGNORE_UNSPECIFIED Unused.
NUMERIC 0-9
ALPHA_UPPER_CASE A-Z
ALPHA_LOWER_CASE a-z
PUNCTUATION US Punctuation, one of !"#$%&'()*+,-./:;<=>?@[]^_`{|}~
WHITESPACE Whitespace character, one of [ \t\n\x0B\f\r]

CloudSqlDiscoveryTarget

Target used to match against for discovery with Cloud SQL tables.

Fields
filter

DiscoveryCloudSqlFilter

Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table.

conditions

DiscoveryCloudSqlConditions

In addition to matching the filter, these conditions must be true before a profile is generated.

Union field cadence. Type of schedule. cadence can be only one of the following:
generation_cadence

DiscoveryCloudSqlGenerationCadence

How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity.

disabled

Disabled

Disable profiling for database resources that match this filter.

CloudSqlIamCredential

This type has no fields.

Use IAM authentication to connect. This requires the Cloud SQL IAM feature to be enabled on the instance, which is not the default for Cloud SQL. See https://cloud.google.com/sql/docs/postgres/authentication and https://cloud.google.com/sql/docs/mysql/authentication.

CloudSqlProperties

Cloud SQL connection properties.

Fields
connection_name

string

Optional. Immutable. The Cloud SQL instance for which the connection is defined. Only one connection per instance is allowed. This can only be set at creation time, and cannot be updated.

It is an error to use a connection_name from a different project or region than the one that holds the connection. For example, a Connection resource for Cloud SQL connection_name project-id:us-central1:sql-instance must be created under the parent projects/project-id/locations/us-central1.

max_connections

int32

Required. The DLP API will limit its connections to max_connections. Must be 2 or greater.

database_engine

DatabaseEngine

Required. The database engine used by the Cloud SQL instance that this connection configures.

Union field credential. How to authenticate to the instance. credential can be only one of the following:
username_password

SecretManagerCredential

A username and password stored in Secret Manager.

cloud_sql_iam

CloudSqlIamCredential

Built-in IAM authentication (must be configured in Cloud SQL).

DatabaseEngine

Database engine of a Cloud SQL instance. New values may be added over time.

Enums
DATABASE_ENGINE_UNKNOWN An engine that is not currently supported by Sensitive Data Protection.
DATABASE_ENGINE_MYSQL Cloud SQL for MySQL instance.
DATABASE_ENGINE_POSTGRES Cloud SQL for PostgreSQL instance.

CloudStorageDiscoveryTarget

Target used to match against for discovery with Cloud Storage buckets.

Fields
filter

DiscoveryCloudStorageFilter

Required. The buckets the generation_cadence applies to. The first target with a matching filter will be the one to apply to a bucket.

conditions

DiscoveryFileStoreConditions

Optional. In addition to matching the filter, these conditions must be true before a profile is generated.

Union field cadence. How often and when to update profiles. cadence can be only one of the following:
generation_cadence

DiscoveryCloudStorageGenerationCadence

Optional. How often and when to update profiles. New buckets that match both the filter and conditions are scanned as quickly as possible depending on system capacity.

disabled

Disabled

Optional. Disable profiling for buckets that match this filter.

CloudStorageFileSet

Message representing a set of files in Cloud Storage.

Fields
url

string

The url, in the format gs://<bucket>/<path>. Trailing wildcard in the path is allowed.

CloudStorageOptions

Options defining a file or a set of files within a Cloud Storage bucket.

Fields
file_set

FileSet

The set of one or more files to scan.

bytes_limit_per_file

int64

Max number of bytes to scan from a file. If a scanned file's size is bigger than this value then the rest of the bytes are omitted. Only one of bytes_limit_per_file and bytes_limit_per_file_percent can be specified. This field can't be set if de-identification is requested. For certain file types, setting this field has no effect. For more information, see Limits on bytes scanned per file.

bytes_limit_per_file_percent

int32

Max percentage of bytes to scan from a file. The rest are omitted. The number of bytes scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit. Defaults to 0. Only one of bytes_limit_per_file and bytes_limit_per_file_percent can be specified. This field can't be set if de-identification is requested. For certain file types, setting this field has no effect. For more information, see Limits on bytes scanned per file.

file_types[]

FileType

List of file type groups to include in the scan. If empty, all files are scanned and available data format processors are applied. In addition, the binary content of the selected files is always scanned as well. Images are scanned only as binary if the specified region does not support image inspection and no file_types were specified. Image inspection is restricted to 'global', 'us', 'asia', and 'europe'.

sample_method

SampleMethod

How to sample the data.

files_limit_percent

int32

Limits the number of files to scan to this percentage of the input FileSet. The number of files scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit. Defaults to 0.

FileSet

Set of files to scan.

Fields
url

string

The Cloud Storage url of the file(s) to scan, in the format gs://<bucket>/<path>. Trailing wildcard in the path is allowed.

If the url ends in a trailing slash, the bucket or directory represented by the url will be scanned non-recursively (content in sub-directories will not be scanned). This means that gs://mybucket/ is equivalent to gs://mybucket/*, and gs://mybucket/directory/ is equivalent to gs://mybucket/directory/*.

Exactly one of url or regex_file_set must be set.

regex_file_set

CloudStorageRegexFileSet

The regex-filtered set of files to scan. Exactly one of url or regex_file_set must be set.

SampleMethod

How to sample bytes if not all bytes are scanned. Meaningful only when used in conjunction with bytes_limit_per_file. If not specified, scanning would start from the top.

Enums
SAMPLE_METHOD_UNSPECIFIED No sampling.
TOP Scan from the top (default).
RANDOM_START For each file larger than bytes_limit_per_file, randomly pick the offset to start scanning. The scanned bytes are contiguous.

CloudStoragePath

Message representing a single file or path in Cloud Storage.

Fields
path

string

A URL representing a file or path (no wildcards) in Cloud Storage. Example: gs://[BUCKET_NAME]/dictionary.txt

CloudStorageRegex

A pattern to match against one or more file stores. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

Fields
project_id_regex

string

Optional. For organizations, if unset, will match all projects.

bucket_name_regex

string

Optional. Regex to test the bucket name against. If empty, all buckets match. Example: "marketing2021" or "(marketing)\d{4}" will both match the bucket gs://marketing2021

CloudStorageRegexFileSet

Message representing a set of files in a Cloud Storage bucket. Regular expressions are used to allow fine-grained control over which files in the bucket to include.

Included files are those that match at least one item in include_regex and do not match any items in exclude_regex. Note that a file that matches items from both lists will not be included. For a match to occur, the entire file path (i.e., everything in the url after the bucket name) must match the regular expression.

For example, given the input {bucket_name: "mybucket", include_regex: ["directory1/.*"], exclude_regex: ["directory1/excluded.*"]}:

  • gs://mybucket/directory1/myfile will be included
  • gs://mybucket/directory1/directory2/myfile will be included (.* matches across /)
  • gs://mybucket/directory0/directory1/myfile will not be included (the full path doesn't match any items in include_regex)
  • gs://mybucket/directory1/excludedfile will not be included (the path matches an item in exclude_regex)

If include_regex is left empty, it will match all files by default (this is equivalent to setting include_regex: [".*"]).

Some other common use cases:

  • {bucket_name: "mybucket", exclude_regex: [".*\.pdf"]} will include all files in mybucket except for .pdf files
  • {bucket_name: "mybucket", include_regex: ["directory/[^/]+"]} will include all files directly under gs://mybucket/directory/, without matching across /

Fields
bucket_name

string

The name of a Cloud Storage bucket. Required.

include_regex[]

string

A list of regular expressions matching file paths to include. All files in the bucket that match at least one of these regular expressions will be included in the set of files, except for those that also match an item in exclude_regex. Leaving this field empty will match all files by default (this is equivalent to including .* in the list).

Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

exclude_regex[]

string

A list of regular expressions matching file paths to exclude. All files in the bucket that match at least one of these regular expressions will be excluded from the scan.

Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.
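The include/exclude matching rules above can be sketched with full-path matching; Python's re module stands in for RE2 here, which is close enough for these simple patterns:

```python
import re

def included(file_path, include_regex, exclude_regex):
    """A file is included if its path (everything after the bucket name)
    fully matches an include pattern and no exclude pattern. An empty
    include list matches everything, per the description above."""
    inc = include_regex or [".*"]
    if not any(re.fullmatch(p, file_path) for p in inc):
        return False
    return not any(re.fullmatch(p, file_path) for p in exclude_regex)

# The examples from the description above:
assert included("directory1/myfile", ["directory1/.*"], ["directory1/excluded.*"])
assert included("directory1/directory2/myfile", ["directory1/.*"], [])
assert not included("directory0/directory1/myfile", ["directory1/.*"], [])
assert not included("directory1/excludedfile", ["directory1/.*"], ["directory1/excluded.*"])
```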

CloudStorageResourceReference

Identifies a single Cloud Storage bucket.

Fields
bucket_name

string

Required. The bucket to scan.

project_id

string

Required. If within a project-level config, then this must match the config's project id.

Color

Represents a color in the RGB color space.

Fields
red

float

The amount of red in the color as a value in the interval [0, 1].

green

float

The amount of green in the color as a value in the interval [0, 1].

blue

float

The amount of blue in the color as a value in the interval [0, 1].

ColumnDataProfile

The profile for a scanned column within a table.

Fields
name

string

The name of the profile.

profile_status

ProfileStatus

Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated.

state

State

State of a profile.

profile_last_generated

Timestamp

The last time the profile was generated.

table_data_profile

string

The resource name of the table data profile.

table_full_resource

string

The resource name of the resource this column is within.

dataset_project_id

string

The Google Cloud project ID that owns the profiled resource.

dataset_location

string

If supported, the location where the dataset's data is stored. See https://cloud.google.com/bigquery/docs/locations for supported BigQuery locations.

dataset_id

string

The BigQuery dataset ID, if the resource profiled is a BigQuery table.

table_id

string

The table ID.

column

string

The name of the column.

sensitivity_score

SensitivityScore

The sensitivity of this column.

data_risk_level

DataRiskLevel

The data risk level for this column.

column_info_type

InfoTypeSummary

If it's been determined this column can be identified as a single type, this will be set. Otherwise the column either has unidentifiable content or mixed types.

other_matches[]

OtherInfoTypeSummary

Other types found within this column. List will be unordered.

estimated_null_percentage

NullPercentageLevel

Approximate percentage of entries being null in the column.

estimated_uniqueness_score

UniquenessScoreLevel

Approximate uniqueness of the column.

free_text_score

double

The likelihood that this column contains free-form text. A value close to 1 may indicate the column is likely to contain free-form or natural language text. Range in 0-1.

column_type

ColumnDataType

The data type of a given column.

policy_state

ColumnPolicyState

Indicates if a policy tag has been applied to the column.

ColumnDataType

Data types of the data in a column. Types may be added over time.

Enums
COLUMN_DATA_TYPE_UNSPECIFIED Invalid type.
TYPE_INT64 Encoded as a string in decimal format.
TYPE_BOOL Encoded as a boolean "false" or "true".
TYPE_FLOAT64 Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
TYPE_STRING Encoded as a string value.
TYPE_BYTES Encoded as a base64 string per RFC 4648, section 4.
TYPE_TIMESTAMP Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
TYPE_DATE Encoded as RFC 3339 full-date format string: 1985-04-12
TYPE_TIME Encoded as RFC 3339 partial-time format string: 23:20:50.52
TYPE_DATETIME Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
TYPE_GEOGRAPHY Encoded as WKT
TYPE_NUMERIC Encoded as a decimal string.
TYPE_RECORD Container of ordered fields, each with a type and field name.
TYPE_BIGNUMERIC Decimal type.
TYPE_JSON Json type.
TYPE_INTERVAL Interval type.
TYPE_RANGE_DATE Range<Date> type.
TYPE_RANGE_DATETIME Range<Datetime> type.
TYPE_RANGE_TIMESTAMP Range<Timestamp> type.
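A few of these encodings can be reproduced with the standard library; the following is an illustrative sketch (not part of the DLP API) of the TYPE_BYTES, TYPE_TIMESTAMP, and TYPE_DATE encodings described above:

```python
import base64
from datetime import datetime, timezone

# TYPE_BYTES: base64 per RFC 4648, section 4.
encoded = base64.b64encode(b"hello").decode("ascii")

# TYPE_TIMESTAMP: RFC 3339 with the mandatory "Z" time zone suffix.
ts = datetime(1985, 4, 12, 23, 20, 50, 520000, tzinfo=timezone.utc)
timestamp = ts.isoformat().replace("+00:00", "Z")

# TYPE_DATE: RFC 3339 full-date.
date = ts.date().isoformat()

print(encoded)    # aGVsbG8=
print(timestamp)  # 1985-04-12T23:20:50.520000Z
print(date)       # 1985-04-12
```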

ColumnPolicyState

The possible policy states for a column.

Enums
COLUMN_POLICY_STATE_UNSPECIFIED No policy tags.
COLUMN_POLICY_TAGGED Column has policy tag applied.

State

Possible states of a profile. New items may be added.

Enums
STATE_UNSPECIFIED Unused.
RUNNING The profile is currently running. Once a profile has finished it will transition to DONE.
DONE The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded, otherwise, it failed.

Connection

A data connection to allow the DLP API to profile data in locations that require additional configuration.

Fields
name

string

Output only. Name of the connection: projects/{project}/locations/{location}/connections/{name}.

state

ConnectionState

Required. The connection's state in its lifecycle.

errors[]

Error

Output only. Set if status == ERROR, to provide additional details. Will store the last 10 errors sorted with the most recent first.

Union field properties. Type of connection. properties can be only one of the following:
cloud_sql

CloudSqlProperties

Connect to a Cloud SQL instance.

ConnectionState

State of the connection. New values may be added over time.

Enums
CONNECTION_STATE_UNSPECIFIED Unused.
MISSING_CREDENTIALS The DLP API automatically created this connection during an initial scan, and it is awaiting full configuration by a user.
AVAILABLE A configured connection that has not encountered any errors.
ERROR

A configured connection that encountered errors during its last use. It will not be used again until it is set to AVAILABLE.

If the resolution requires external action, then the client must send a request to set the status to AVAILABLE when the connection is ready for use. If the resolution doesn't require external action, then any changes to the connection properties will automatically mark it as AVAILABLE.

Container

Represents a container that may contain DLP findings. Examples of a container include a file, table, or database record.

Fields
type

string

Container type, for example BigQuery or Cloud Storage.

project_id

string

Project where the finding was found. Can be different from the project that owns the finding.

full_path

string

A string representation of the full container name. Examples: - BigQuery: 'Project:DataSetId.TableId' - Cloud Storage: 'gs://Bucket/folders/filename.txt'

root_path

string

The root of the container. Examples:

  • For BigQuery table project_id:dataset_id.table_id, the root is dataset_id
  • For Cloud Storage file gs://bucket/folder/filename.txt, the root is gs://bucket
relative_path

string

The rest of the path after the root. Examples:

  • For BigQuery table project_id:dataset_id.table_id, the relative path is table_id
  • For Cloud Storage file gs://bucket/folder/filename.txt, the relative path is folder/filename.txt
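The root_path/relative_path split described above can be sketched as a small helper; this is an illustrative client-side function, not part of the DLP API:

```python
def split_container_path(container_type, full_path):
    """Split a full container path into (root_path, relative_path).

    Illustrative only: follows the rules documented for
    Container.root_path and Container.relative_path.
    """
    if container_type == "Cloud Storage":
        # gs://bucket/folder/filename.txt -> ("gs://bucket", "folder/filename.txt")
        bucket, _, rest = full_path[len("gs://"):].partition("/")
        return "gs://" + bucket, rest
    if container_type == "BigQuery":
        # project_id:dataset_id.table_id -> ("dataset_id", "table_id")
        _, _, dataset_table = full_path.partition(":")
        dataset_id, _, table_id = dataset_table.partition(".")
        return dataset_id, table_id
    raise ValueError("unsupported container type: " + container_type)

print(split_container_path("Cloud Storage", "gs://bucket/folder/filename.txt"))
# ('gs://bucket', 'folder/filename.txt')
print(split_container_path("BigQuery", "project_id:dataset_id.table_id"))
# ('dataset_id', 'table_id')
```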
update_time

Timestamp

Findings container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated.

version

string

Findings container version, if available ("generation" for Cloud Storage).

ContentItem

Type of content to inspect.

Fields
Union field data_item. Data of the item either in the byte array or UTF-8 string form, or table. data_item can be only one of the following:
value

string

String data to inspect or redact.

table

Table

Structured content for inspection. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-text#inspecting_a_table to learn more.

byte_item

ByteContentItem

Content data to inspect or redact. Replaces type and data.

ContentLocation

Precise location of the finding within a document, record, image, or metadata container.

Fields
container_name

string

Name of the container where the finding is located. The top level name is the source file name or table name. Names of some common storage containers are formatted as follows:

  • BigQuery tables: {project_id}:{dataset_id}.{table_id}
  • Cloud Storage files: gs://{bucket}/{path}
  • Datastore namespace: {namespace}

Nested names could be absent if the embedded object has no string identifier (for example, an image contained within a document).

container_timestamp

Timestamp

Finding container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated.

container_version

string

Finding container version, if available ("generation" for Cloud Storage).

Union field location. Type of the container within the file with location of the finding. location can be only one of the following:
record_location

RecordLocation

Location within a row or record of a database table.

image_location

ImageLocation

Location within an image's pixels.

document_location

DocumentLocation

Location data for document files.

metadata_location

MetadataLocation

Location within the metadata for inspected content.

ContentOption

Deprecated and unused.

Enums
CONTENT_UNSPECIFIED Includes entire content of a file or a data stream.
CONTENT_TEXT Text content within the data, excluding any metadata.
CONTENT_IMAGE Images found in the data.

CreateConnectionRequest

Request message for CreateConnection.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization):

  • Projects scope: projects/{project_id}/locations/{location_id}
  • Organizations scope: organizations/{org_id}/locations/{location_id}

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.connections.create
connection

Connection

Required. The connection resource.

CreateDeidentifyTemplateRequest

Request message for CreateDeidentifyTemplate.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}
  • Organizations scope, location specified: organizations/{org_id}/locations/{location_id}
  • Organizations scope, no location specified (defaults to global): organizations/{org_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.deidentifyTemplates.create
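The four parent formats above can be assembled with a small helper; this is an illustrative sketch (the function name is hypothetical, not part of any client library):

```python
def deidentify_template_parent(project_id=None, org_id=None, location_id=None):
    """Build the parent string for CreateDeidentifyTemplate.

    Illustrative helper: exactly one of project_id or org_id must be
    given. Omitting location_id corresponds to the default global scope.
    """
    if (project_id is None) == (org_id is None):
        raise ValueError("specify exactly one of project_id or org_id")
    base = ("projects/" + project_id) if project_id else ("organizations/" + org_id)
    return base + "/locations/" + location_id if location_id else base

print(deidentify_template_parent(project_id="example-project",
                                 location_id="europe-west3"))
# projects/example-project/locations/europe-west3
print(deidentify_template_parent(org_id="example-org"))
# organizations/example-org
```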
deidentify_template

DeidentifyTemplate

Required. The DeidentifyTemplate to create.

template_id

string

The template ID can contain uppercase and lowercase letters, numbers, hyphens, and underscores; that is, it must match the regular expression: [a-zA-Z\d-_]+. The maximum length is 100 characters. Can be empty to allow the system to generate one.
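The documented ID constraints can be checked client-side before sending a request; a minimal sketch using Python's re module (the helper name is illustrative, not part of any client library):

```python
import re

# Pattern from the documented constraint [a-zA-Z\d-_]+ (hyphen escaped
# here so it is unambiguously a literal inside the character class).
_ID_RE = re.compile(r"[a-zA-Z\d\-_]+")

def is_valid_template_id(template_id):
    """Client-side sanity check mirroring the documented constraints."""
    return (len(template_id) <= 100
            and _ID_RE.fullmatch(template_id) is not None)

print(is_valid_template_id("my-template_01"))  # True
print(is_valid_template_id("no spaces!"))      # False
```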

location_id

string

Deprecated. This field has no effect.

CreateDiscoveryConfigRequest

Request message for CreateDiscoveryConfig.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization):

  • Projects scope: projects/{project_id}/locations/{location_id}
  • Organizations scope: organizations/{org_id}/locations/{location_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.jobTriggers.create
discovery_config

DiscoveryConfig

Required. The DiscoveryConfig to create.

config_id

string

The config ID can contain uppercase and lowercase letters, numbers, hyphens, and underscores; that is, it must match the regular expression: [a-zA-Z\d-_]+. The maximum length is 100 characters. Can be empty to allow the system to generate one.

CreateDlpJobRequest

Request message for CreateDlpJobRequest. Used to initiate long running jobs such as calculating risk metrics or inspecting Google Cloud Storage.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.jobs.create
job_id

string

The job ID can contain uppercase and lowercase letters, numbers, hyphens, and underscores; that is, it must match the regular expression: [a-zA-Z\d-_]+. The maximum length is 100 characters. Can be empty to allow the system to generate one.

location_id

string

Deprecated. This field has no effect.

Union field job. The configuration details for the specific type of job to run. job can be only one of the following:
inspect_job

InspectJobConfig

An inspection job scans a storage repository for InfoTypes.

risk_job

RiskAnalysisJobConfig

A risk analysis job calculates re-identification risk metrics for a BigQuery table.

CreateInspectTemplateRequest

Request message for CreateInspectTemplate.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}
  • Organizations scope, location specified: organizations/{org_id}/locations/{location_id}
  • Organizations scope, no location specified (defaults to global): organizations/{org_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.inspectTemplates.create
inspect_template

InspectTemplate

Required. The InspectTemplate to create.

template_id

string

The template ID can contain uppercase and lowercase letters, numbers, hyphens, and underscores; that is, it must match the regular expression: [a-zA-Z\d-_]+. The maximum length is 100 characters. Can be empty to allow the system to generate one.

location_id

string

Deprecated. This field has no effect.

CreateJobTriggerRequest

Request message for CreateJobTrigger.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires one or more of the following IAM permissions on the specified resource parent:

  • dlp.jobTriggers.create
  • dlp.jobs.create
job_trigger

JobTrigger

Required. The JobTrigger to create.

trigger_id

string

The trigger ID can contain uppercase and lowercase letters, numbers, hyphens, and underscores; that is, it must match the regular expression: [a-zA-Z\d-_]+. The maximum length is 100 characters. Can be empty to allow the system to generate one.

location_id

string

Deprecated. This field has no effect.

CreateStoredInfoTypeRequest

Request message for CreateStoredInfoType.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}
  • Organizations scope, location specified: organizations/{org_id}/locations/{location_id}
  • Organizations scope, no location specified (defaults to global): organizations/{org_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.storedInfoType.create
config

StoredInfoTypeConfig

Required. Configuration of the storedInfoType to create.

stored_info_type_id

string

The storedInfoType ID can contain uppercase and lowercase letters, numbers, hyphens, and underscores; that is, it must match the regular expression: [a-zA-Z\d-_]+. The maximum length is 100 characters. Can be empty to allow the system to generate one.

location_id

string

Deprecated. This field has no effect.

CryptoDeterministicConfig

Pseudonymization method that generates deterministic encryption for the given input. Outputs a base64 encoded representation of the encrypted output. Uses AES-SIV based on the RFC https://tools.ietf.org/html/rfc5297.

Fields
crypto_key

CryptoKey

The key used by the encryption function. For deterministic encryption using AES-SIV, the provided key is internally expanded to 64 bytes prior to use.

surrogate_info_type

InfoType

The custom info type to annotate the surrogate with. This annotation will be applied to the surrogate by prefixing it with the name of the custom info type followed by the number of characters comprising the surrogate. The following scheme defines the format: {info type name}({surrogate character count}):{surrogate}

For example, if the name of custom info type is 'MY_TOKEN_INFO_TYPE' and the surrogate is 'abc', the full replacement value will be: 'MY_TOKEN_INFO_TYPE(3):abc'

This annotation identifies the surrogate when inspecting content using the custom info type 'Surrogate'. This facilitates reversal of the surrogate when it occurs in free text.

Note: For record transformations where the entire cell in a table is being transformed, surrogates are not mandatory. Surrogates are used to denote the location of the token and are necessary for re-identification in free form text.

In order for inspection to work properly, the name of this info type must not occur naturally anywhere in your data; otherwise, inspection may either

  • reverse a surrogate that does not correspond to an actual identifier
  • be unable to parse the surrogate and result in an error

Therefore, choose your custom info type name carefully after considering what your data looks like. One way to select a name that has a high chance of yielding reliable detection is to include one or more unicode characters that are highly improbable to exist in your data. For example, assuming your data is entered from a regular ASCII keyboard, the symbol with the hex code point 29DD might be used like so: ⧝MY_TOKEN_TYPE.
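The surrogate annotation scheme described above can be illustrated with a pair of helpers; these are sketches of the documented format, not API calls:

```python
def annotate_surrogate(info_type_name, surrogate):
    """Format: {info type name}({surrogate character count}):{surrogate}."""
    return "{}({}):{}".format(info_type_name, len(surrogate), surrogate)

def parse_surrogate(annotated):
    """Inverse of annotate_surrogate; illustrative only."""
    head, _, surrogate = annotated.partition(":")
    name, _, count = head.rstrip(")").partition("(")
    if len(surrogate) != int(count):
        raise ValueError("surrogate length does not match annotation")
    return name, surrogate

print(annotate_surrogate("MY_TOKEN_INFO_TYPE", "abc"))
# MY_TOKEN_INFO_TYPE(3):abc
print(parse_surrogate("MY_TOKEN_INFO_TYPE(3):abc"))
# ('MY_TOKEN_INFO_TYPE', 'abc')
```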

context

FieldId

A context may be used for higher security and maintaining referential integrity such that the same identifier in two different contexts will be given a distinct surrogate. The context is appended to plaintext value being encrypted. On decryption the provided context is validated against the value used during encryption. If a context was provided during encryption, same context must be provided during decryption as well.

If the context is not set, plaintext would be used as is for encryption. If the context is set but:

  1. there is no record present when transforming a given value or
  2. the field is not present when transforming a given value,

plaintext would be used as is for encryption.

Note that case (1) is expected when an InfoTypeTransformation is applied to both structured and unstructured ContentItems.

CryptoHashConfig

Pseudonymization method that generates surrogates via cryptographic hashing. Uses SHA-256. The key size must be either 32 or 64 bytes. Outputs a base64 encoded representation of the hashed output (for example, L7k0BHmF1ha5U3NfGykjro4xWi1MPVQPjhMAZbSV9mM=). Currently, only string and integer values can be hashed. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization to learn more.
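The output shape of CryptoHashConfig can be sketched as follows. Note the hedge: this reference does not specify the exact keyed construction the service uses, so HMAC-SHA-256 is shown purely as a stand-in for "keyed SHA-256 with base64 output":

```python
import base64
import hashlib
import hmac

def keyed_sha256_b64(key, value):
    """Conceptual sketch of CryptoHashConfig's output shape.

    HMAC-SHA-256 is an assumption used for illustration; the service's
    actual keyed construction may differ. Key must be 32 or 64 bytes.
    """
    if len(key) not in (32, 64):
        raise ValueError("key size must be 32 or 64 bytes")
    digest = hmac.new(key, value.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

token = keyed_sha256_b64(b"\x00" * 32, "555-1234")
print(len(token))  # 44 (base64 of a 32-byte digest)
```

The key property being illustrated is determinism: the same key and input always yield the same base64-encoded surrogate.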

Fields
crypto_key

CryptoKey

The key used by the hash function.

CryptoKey

This is a data encryption key (DEK), as opposed to a key encryption key (KEK) stored by Cloud Key Management Service (Cloud KMS). When using Cloud KMS to wrap or unwrap a DEK, be sure to set an appropriate IAM policy on the KEK to ensure an attacker cannot unwrap the DEK.

Fields
Union field source. Sources of crypto keys. source can be only one of the following:
transient

TransientCryptoKey

Transient crypto key

unwrapped

UnwrappedCryptoKey

Unwrapped crypto key

kms_wrapped

KmsWrappedCryptoKey

Key wrapped using Cloud KMS

CryptoReplaceFfxFpeConfig

Replaces an identifier with a surrogate using Format Preserving Encryption (FPE) with the FFX mode of operation; however when used in the ReidentifyContent API method, it serves the opposite function by reversing the surrogate back into the original identifier. The identifier must be encoded as ASCII. For a given crypto key and context, the same identifier will be replaced with the same surrogate. Identifiers must be at least two characters long. In the case that the identifier is the empty string, it will be skipped. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization to learn more.

Note: We recommend using CryptoDeterministicConfig for all use cases which do not require preserving the input alphabet space and size, plus warrant referential integrity.

Fields
crypto_key

CryptoKey

Required. The key used by the encryption algorithm.

context

FieldId

The 'tweak', a context, may be used for higher security since the same identifier in two different contexts won't be given the same surrogate. If the context is not set, a default tweak will be used.

If the context is set but:

  1. there is no record present when transforming a given value or
  2. the field is not present when transforming a given value,

a default tweak will be used.

Note that case (1) is expected when an InfoTypeTransformation is applied to both structured and unstructured ContentItems. Currently, the referenced field may be of value type integer or string.

The tweak is constructed as a sequence of bytes in big endian byte order such that:

  • a 64 bit integer is encoded followed by a single byte of value 1
  • a string is encoded in UTF-8 format followed by a single byte of value 2
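The tweak construction described by the two bullets above can be sketched directly; this is an illustrative helper, not part of any client library:

```python
import struct

def tweak_bytes(value):
    """Construct the tweak byte sequence as documented.

    A 64-bit integer is encoded big-endian followed by a single byte of
    value 1; a string is UTF-8 encoded followed by a single byte of
    value 2. Illustrative only.
    """
    if isinstance(value, int):
        return struct.pack(">q", value) + b"\x01"
    if isinstance(value, str):
        return value.encode("utf-8") + b"\x02"
    raise TypeError("tweak field must be an integer or a string")

print(tweak_bytes(5).hex())     # 000000000000000501
print(tweak_bytes("ab").hex())  # 616202
```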
surrogate_info_type

InfoType

The custom infoType to annotate the surrogate with. This annotation will be applied to the surrogate by prefixing it with the name of the custom infoType followed by the number of characters comprising the surrogate. The following scheme defines the format: info_type_name(surrogate_character_count):surrogate

For example, if the name of custom infoType is 'MY_TOKEN_INFO_TYPE' and the surrogate is 'abc', the full replacement value will be: 'MY_TOKEN_INFO_TYPE(3):abc'

This annotation identifies the surrogate when inspecting content using the custom infoType SurrogateType. This facilitates reversal of the surrogate when it occurs in free text.

In order for inspection to work properly, the name of this infoType must not occur naturally anywhere in your data; otherwise, inspection may find a surrogate that does not correspond to an actual identifier. Therefore, choose your custom infoType name carefully after considering what your data looks like. One way to select a name that has a high chance of yielding reliable detection is to include one or more unicode characters that are highly improbable to exist in your data. For example, assuming your data is entered from a regular ASCII keyboard, the symbol with the hex code point 29DD might be used like so: ⧝MY_TOKEN_TYPE

Union field alphabet. Choose an alphabet which the data being transformed will be made up of. alphabet can be only one of the following:
common_alphabet

FfxCommonNativeAlphabet

Common alphabets.

custom_alphabet

string

Custom alphabets are supported by mapping the given characters to the alphanumeric characters that the FFX mode natively supports. This mapping is applied before encryption and reversed after decryption. Each character listed must appear only once. The number of characters must be in the range [2, 95]. The alphabet must be encoded as ASCII. The order of characters does not matter. The full list of allowed characters is: 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz~`!@#$%^&*()_-+={[}]|\:;"'<,>.?/

radix

int32

The native way to select the alphabet. Must be in the range [2, 95].

FfxCommonNativeAlphabet

These are commonly used subsets of the alphabet that the FFX mode natively supports. In the algorithm, the alphabet is selected using the "radix". Therefore each corresponds to a particular radix.

Enums
FFX_COMMON_NATIVE_ALPHABET_UNSPECIFIED Unused.
NUMERIC [0-9] (radix of 10)
HEXADECIMAL [0-9A-F] (radix of 16)
UPPER_CASE_ALPHA_NUMERIC [0-9A-Z] (radix of 36)
ALPHA_NUMERIC [0-9A-Za-z] (radix of 62)

CustomInfoType

Custom information type provided by the user. Used to find domain-specific sensitive information configurable to the data in question.

Fields
info_type

InfoType

A CustomInfoType can either be a new infoType or an extension of a built-in infoType, when the name matches one of the existing infoTypes and that infoType is specified in the InspectContent.info_types field. The latter adds findings to those detected by the system. If the built-in infoType is not specified in the InspectContent.info_types list, then the name is treated as a custom infoType.

likelihood

Likelihood

Likelihood to return for this CustomInfoType. This base value can be altered by a detection rule if the finding meets the criteria specified by the rule. Defaults to VERY_LIKELY if not specified.

detection_rules[]

DetectionRule

Set of detection rules to apply to all findings of this CustomInfoType. Rules are applied in order that they are specified. Not supported for the surrogate_type CustomInfoType.

exclusion_type

ExclusionType

If set to EXCLUSION_TYPE_EXCLUDE this infoType will not cause a finding to be returned. It still can be used for rules matching.

sensitivity_score

SensitivityScore

Sensitivity for this CustomInfoType. If this CustomInfoType extends an existing InfoType, the sensitivity here will take precedence over that of the original InfoType. If unset for a CustomInfoType, it will default to HIGH. This only applies to data profiling.

Union field type. Type of custom detector. type can be only one of the following:
dictionary

Dictionary

A list of phrases to detect as a CustomInfoType.

regex

Regex

Regular expression based CustomInfoType.

surrogate_type

SurrogateType

Message for detecting output from deidentification transformations that support reversing.

stored_type

StoredType

Load an existing StoredInfoType resource for use in InspectDataSource. Not currently supported in InspectContent.

DetectionRule

Deprecated; use InspectionRuleSet instead. Rule for modifying a CustomInfoType to alter behavior under certain circumstances, depending on the specific details of the rule. Not supported for the surrogate_type custom infoType.

Fields
Union field type. Type of hotword rule. type can be only one of the following:
hotword_rule

HotwordRule

Hotword-based detection rule.

HotwordRule

The rule that adjusts the likelihood of findings within a certain proximity of hotwords.

Fields
hotword_regex

Regex

Regular expression pattern defining what qualifies as a hotword.

proximity

Proximity

Range of characters within which the entire hotword must reside. The total length of the window cannot exceed 1000 characters. The finding itself will be included in the window, so that hotwords can be used to match substrings of the finding itself. Suppose you want Cloud DLP to promote the likelihood of the phone number regex "(\d{3}) \d{3}-\d{4}" if the area code is known to be the area code of a company's office. In this case, use the hotword regex "(xxx)", where "xxx" is the area code in question.

For tabular data, if you want to modify the likelihood of an entire column of findings, see Hotword example: Set the match likelihood of a table column.

likelihood_adjustment

LikelihoodAdjustment

Likelihood adjustment to apply to all matching findings.

LikelihoodAdjustment

Message for specifying an adjustment to the likelihood of a finding as part of a detection rule.

Fields
Union field adjustment. How the likelihood will be modified. adjustment can be only one of the following:
fixed_likelihood

Likelihood

Set the likelihood of a finding to a fixed value.

relative_likelihood

int32

Increase or decrease the likelihood by the specified number of levels. For example, if a finding would be POSSIBLE without the detection rule and relative_likelihood is 1, then it is upgraded to LIKELY, while a value of -1 would downgrade it to UNLIKELY. Likelihood may never drop below VERY_UNLIKELY or exceed VERY_LIKELY, so applying an adjustment of 1 followed by an adjustment of -1 when base likelihood is VERY_LIKELY will result in a final likelihood of LIKELY.
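The clamping behavior described above can be sketched with an ordered list of levels mirroring the Likelihood enum; this is illustrative client-side code, not part of the API:

```python
# Likelihood levels in ascending order, mirroring the Likelihood enum.
LEVELS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def adjust_likelihood(base, relative):
    """Apply a relative_likelihood adjustment with clamping.

    Illustrates the documented behavior: the result never drops below
    VERY_UNLIKELY or exceeds VERY_LIKELY, so adjustments saturate
    (e.g. VERY_LIKELY +1 then -1 yields LIKELY).
    """
    i = LEVELS.index(base) + relative
    return LEVELS[max(0, min(len(LEVELS) - 1, i))]

print(adjust_likelihood("POSSIBLE", 1))  # LIKELY
print(adjust_likelihood(adjust_likelihood("VERY_LIKELY", 1), -1))  # LIKELY
```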

Proximity

Message for specifying a window around a finding to apply a detection rule.

Fields
window_before

int32

Number of characters before the finding to consider. For tabular data, if you want to modify the likelihood of an entire column of findings, set this to 1. For more information, see Hotword example: Set the match likelihood of a table column.

window_after

int32

Number of characters after the finding to consider.

Dictionary

Custom information type based on a dictionary of words or phrases. This can be used to match sensitive information specific to the data, such as a list of employee IDs or job titles.

Dictionary words are case-insensitive and all characters other than letters and digits in the unicode Basic Multilingual Plane will be replaced with whitespace when scanning for matches, so the dictionary phrase "Sam Johnson" will match all three phrases "sam johnson", "Sam, Johnson", and "Sam (Johnson)". Additionally, the characters surrounding any match must be of a different type than the adjacent characters within the word, so letters must be next to non-letters and digits next to non-digits. For example, the dictionary word "jen" will match the first three letters of the text "jen123" but will return no matches for "jennifer".

Dictionary words containing a large number of characters that are not letters or digits may result in unexpected findings because such characters are treated as whitespace. The limits page contains details about the size limits of dictionaries. For dictionaries that do not fit within these constraints, consider using LargeCustomDictionaryConfig in the StoredInfoType API.
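The matching semantics described above can be approximated in a few lines. This is a rough sketch of the documented rules (case-insensitive, non-alphanumeric characters treated as whitespace, matches bounded by character-type changes), limited to ASCII for simplicity; the service's actual matcher covers the full Basic Multilingual Plane:

```python
import re

def dictionary_matches(phrase, text):
    """Approximate the documented dictionary-matching semantics.

    Illustrative sketch only, restricted to ASCII letters and digits.
    """
    def normalize(s):
        # Non-alphanumeric characters are treated as whitespace.
        return re.sub(r"[^a-zA-Z0-9]+", " ", s.lower()).split()

    words = normalize(phrase)
    # Tokenize text into maximal runs of letters or of digits, so that
    # "jen123" yields ["jen", "123"] but "jennifer" stays one token.
    tokens = re.findall(r"[a-zA-Z]+|[0-9]+", text.lower())
    n = len(words)
    return any(tokens[i:i + n] == words for i in range(len(tokens) - n + 1))

print(dictionary_matches("Sam Johnson", "met Sam (Johnson) today"))  # True
print(dictionary_matches("jen", "jennifer"))  # False
print(dictionary_matches("jen", "jen123"))    # True
```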

Fields
Union field source. The potential places the data can be read from. source can be only one of the following:
word_list

WordList

List of words or phrases to search for.

cloud_storage_path

CloudStoragePath

Newline-delimited file of words in Cloud Storage. Only a single file is accepted.

WordList

Message defining a list of words or phrases to search for in the data.

Fields
words[]

string

Words or phrases defining the dictionary. The dictionary must contain at least one phrase and every phrase must contain at least 2 characters that are letters or digits. [required]

ExclusionType

Type of exclusion rule.

Enums
EXCLUSION_TYPE_UNSPECIFIED A finding of this custom info type will not be excluded from results.
EXCLUSION_TYPE_EXCLUDE A finding of this custom info type will be excluded from final results, but can still affect rule execution.

Regex

Message defining a custom regular expression.

Fields
pattern

string

Pattern defining the regular expression. Its syntax (https://github.com/google/re2/wiki/Syntax) can be found under the google/re2 repository on GitHub.

group_indexes[]

int32

The index of the submatch to extract as findings. When not specified, the entire match is returned. No more than 3 may be included.
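The effect of group_indexes can be illustrated with Python's re module. Note that the DLP API uses RE2 syntax; Python's re is close enough for this simple example, and the pattern shown is hypothetical:

```python
import re

# A pattern with submatch groups, as a CustomInfoType Regex might define.
pattern = r"(\d{3})-(\d{2})-(\d{4})"
text = "ref 123-45-6789 end"

# Default behavior: the entire match is the finding.
full = re.search(pattern, text).group(0)

# With group_indexes = [1], only submatch 1 is extracted as the finding.
group1 = re.search(pattern, text).group(1)

print(full)    # 123-45-6789
print(group1)  # 123
```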

SurrogateType

This type has no fields.

Message for detecting output from deidentification transformations such as CryptoReplaceFfxFpeConfig. These types of transformations are those that perform pseudonymization, thereby producing a "surrogate" as output. This should be used in conjunction with a field on the transformation such as surrogate_info_type. This CustomInfoType does not support the use of detection_rules.

DataProfileAction

A task to execute when a data profile has been generated.

Fields
Union field action. Type of action to execute when a profile is generated. action can be only one of the following:
export_data

Export

Export data profiles into a provided location.

pub_sub_notification

PubSubNotification

Publish a message into the Pub/Sub topic.

publish_to_chronicle

PublishToChronicle

Publishes generated data profiles to Google Security Operations. For more information, see Use Sensitive Data Protection data in context-aware analytics.

publish_to_scc

PublishToSecurityCommandCenter

Publishes findings to Security Command Center for each data profile.

tag_resources

TagResources

Tags the profiled resources with the specified tag values.

EventType

Types of event that can trigger an action.

Enums
EVENT_TYPE_UNSPECIFIED Unused.
NEW_PROFILE New profile (not a re-profile).
CHANGED_PROFILE One of the following profile metrics changed: Data risk score, Sensitivity score, Resource visibility, Encryption type, Predicted infoTypes, Other infoTypes
SCORE_INCREASED Table data risk score or sensitivity score increased.
ERROR_CHANGED A user (non-internal) error occurred.

Export

If set, the detailed data profiles will be persisted to the location of your choice whenever updated.

Fields
profile_table

BigQueryTable

Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using streaming insert and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.

sample_findings_table

BigQueryTable

Store sample data profile findings in an existing table or a new table in an existing dataset. Each regeneration will result in new rows in BigQuery. Data is inserted using streaming insert and so data may be in the buffer for a period of time after the profile has finished.

PubSubNotification

Send a Pub/Sub message into the given Pub/Sub topic to connect other systems to data profile generation. The message payload data will be the byte serialization of DataProfilePubSubMessage.

Fields
topic

string

Cloud Pub/Sub topic to send notifications to. Format is projects/{project}/topics/{topic}.

event

EventType

The type of event that triggers a Pub/Sub. At most one PubSubNotification per EventType is permitted.

pubsub_condition

DataProfilePubSubCondition

Conditions (e.g., data risk or sensitivity level) for triggering a Pub/Sub.

detail_of_message

DetailLevel

How much data to include in the Pub/Sub message. To limit the size of the message, choose RESOURCE_NAME and fetch the profile fields you need separately. Applies per table profile (not per column).

DetailLevel

The levels of detail that can be included in the Pub/Sub message.

Enums
DETAIL_LEVEL_UNSPECIFIED Unused.
TABLE_PROFILE The full table data profile.
RESOURCE_NAME The name of the profiled resource.
FILE_STORE_PROFILE The full file store data profile.

PublishToChronicle

This type has no fields.

Message expressing intention to publish to Google Security Operations.

PublishToSecurityCommandCenter

This type has no fields.

If set, a summary finding will be created or updated in Security Command Center for each profile.

TagResources

If set, attaches the tags provided to profiled resources. Tags support access control. You can conditionally grant or deny access to a resource based on whether the resource has a specific tag.

Fields
tag_conditions[]

TagCondition

The tags to associate with different conditions.

profile_generations_to_tag[]

ProfileGeneration

The profile generations for which the tag should be attached to resources. If you attach a tag to only new profiles, then if the sensitivity score of a profile subsequently changes, its tag doesn't change. By default, this field includes only new profiles. To include both new and updated profiles for tagging, this field should explicitly include both PROFILE_GENERATION_NEW and PROFILE_GENERATION_UPDATE.

lower_data_risk_to_low

bool

Whether applying a tag to a resource should lower the risk of the profile for that resource. For example, in conjunction with an IAM deny policy, you can deny all principals a permission if a tag value is present, mitigating the risk of the resource. This also lowers the data risk of resources at the lower levels of the resource hierarchy. For example, reducing the data risk of a table data profile also reduces the data risk of the constituent column data profiles.

TagCondition

The tag to attach to profiles matching the condition. At most one TagCondition can be specified per sensitivity level.

Fields
tag

TagValue

The tag value to attach to resources.

Union field type. The type of condition on which attaching the tag will be predicated. type can be only one of the following:
sensitivity_score

SensitivityScore

Conditions attaching the tag to a resource on its profile having this sensitivity score.

TagValue

A value of a tag.

Fields
Union field format. The format of the tag value. format can be only one of the following:
namespaced_value

string

The namespaced name for the tag value to attach to resources. Must be in the format {parent_id}/{tag_key_short_name}/{short_name}, for example, "123456/environment/prod".
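
As a sketch, the documented namespaced format can be assembled and checked with a small helper. The helper name and validation pattern below are illustrative only; they are not part of any client library:

```python
import re

# Shape documented for namespaced_value:
# {parent_id}/{tag_key_short_name}/{short_name}, e.g. "123456/environment/prod".
NAMESPACED_VALUE = re.compile(r"^[^/]+/[^/]+/[^/]+$")

def make_namespaced_value(parent_id: str, key_short_name: str, short_name: str) -> str:
    """Build a namespaced tag value and check it against the documented shape."""
    value = f"{parent_id}/{key_short_name}/{short_name}"
    if not NAMESPACED_VALUE.match(value):
        raise ValueError(f"not a valid namespaced tag value: {value!r}")
    return value

print(make_namespaced_value("123456", "environment", "prod"))  # 123456/environment/prod
```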

DataProfileBigQueryRowSchema

The schema of data to be saved to the BigQuery table when the DataProfileAction is enabled.

Fields
Union field data_profile. Data profile type. data_profile can be only one of the following:
table_profile

TableDataProfile

Table data profile column

column_profile

ColumnDataProfile

Column data profile column

file_store_profile

FileStoreDataProfile

File store data profile column.

DataProfileConfigSnapshot

Snapshot of the configurations used to generate the profile.

Fields
inspect_config

InspectConfig

A copy of the inspection config used to generate this profile. This is a copy of the inspect_template specified in DataProfileJobConfig.

data_profile_job
(deprecated)

DataProfileJobConfig

A copy of the configuration used to generate this profile. This is deprecated, and the DiscoveryConfig field is preferred moving forward. DataProfileJobConfig will still be written here for Discovery in BigQuery for backwards compatibility, but will not be updated with new fields, while DiscoveryConfig will.

discovery_config

DiscoveryConfig

A copy of the configuration used to generate this profile.

inspect_template_name

string

Name of the inspection template used to generate this profile.

inspect_template_modified_time

Timestamp

Timestamp when the template was modified.

DataProfileFinding

Details about a piece of potentially sensitive information that was detected when the data resource was profiled.

Fields
quote

string

The content that was found. Even if the content is not textual, it may be converted to a textual representation here. If the finding exceeds 4096 bytes in length, the quote may be omitted.

infotype

InfoType

The type of content that might have been found.

quote_info

QuoteInfo

Contains data parsed from quotes. Currently supported infoTypes: DATE, DATE_OF_BIRTH, and TIME.

data_profile_resource_name

string

Resource name of the data profile associated with the finding.

finding_id

string

A unique identifier for the finding.

timestamp

Timestamp

Timestamp when the finding was detected.

location

DataProfileFindingLocation

Where the content was found.

resource_visibility

ResourceVisibility

How broadly a resource has been shared.

DataProfileFindingLocation

Location of a data profile finding within a resource.

Fields
container_name

string

Name of the container where the finding is located. The top-level name is the source file name or table name. Names of some common storage containers are formatted as follows:

  • BigQuery tables: {project_id}:{dataset_id}.{table_id}
  • Cloud Storage files: gs://{bucket}/{path}
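
As an illustration, the two documented container name formats can be split into their components with a small parser. The helper and its return shape are hypothetical, for illustration only:

```python
def parse_container_name(name: str) -> dict:
    """Split a container name into its documented components (illustrative)."""
    if name.startswith("gs://"):
        # Cloud Storage: gs://{bucket}/{path}
        bucket, _, path = name[len("gs://"):].partition("/")
        return {"kind": "gcs", "bucket": bucket, "path": path}
    if ":" in name and "." in name:
        # BigQuery: {project_id}:{dataset_id}.{table_id}
        project, rest = name.split(":", 1)
        dataset, table = rest.split(".", 1)
        return {"kind": "bigquery", "project_id": project,
                "dataset_id": dataset, "table_id": table}
    raise ValueError(f"unrecognized container name: {name!r}")

print(parse_container_name("my-project:sales.orders"))
print(parse_container_name("gs://my-bucket/exports/2024/file.csv"))
```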
Union field location_extra_details. Additional location details that may be provided for some types of profiles. At this time, only findings for table data profiles include such details. location_extra_details can be only one of the following:
data_profile_finding_record_location

DataProfileFindingRecordLocation

Location of a finding within a resource that produces a table data profile.

DataProfileFindingRecordLocation

Location of a finding within a resource that produces a table data profile.

Fields
field

FieldId

Field ID of the column containing the finding.

DataProfileJobConfig

Configuration for setting up a job to scan resources for profile generation. Only one data profile configuration may exist per organization, folder, or project.

The generated data profiles are retained according to the data retention policy.

Fields
location

DataProfileLocation

The data to scan.

project_id

string

The project that will run the scan. The DLP service account that exists within this project must have access to all resources that are profiled, and the DLP API must be enabled.

other_cloud_starting_location

OtherCloudDiscoveryStartingLocation

Must be set only when scanning other clouds.

inspect_templates[]

string

Detection logic for profile generation.

Not all template features are used by profiles. FindingLimits, include_quote and exclude_info_types have no impact on data profiling.

Multiple templates may be provided if there is data in multiple regions. At most one template may be specified per region (including "global"). Each region is scanned using the applicable template. If no region-specific template is specified, but a "global" template is specified, it will be copied to that region and used instead. If no global or region-specific template is provided for a region with data, that region's data will not be scanned.

For more information, see https://cloud.google.com/sensitive-data-protection/docs/data-profiles#data-residency.

data_profile_actions[]

DataProfileAction

Actions to execute at the completion of the job.

DataProfileLocation

The data that will be profiled.

Fields
Union field location. The location to be scanned. location can be only one of the following:
organization_id

int64

The ID of an organization to scan.

folder_id

int64

The ID of the folder within an organization to scan.

DataProfilePubSubCondition

A condition for determining whether a Pub/Sub should be triggered.

Fields
expressions

PubSubExpressions

An expression.

ProfileScoreBucket

Various score levels for resources.

Enums
PROFILE_SCORE_BUCKET_UNSPECIFIED Unused.
HIGH High risk/sensitivity detected.
MEDIUM_OR_HIGH Medium or high risk/sensitivity detected.

PubSubCondition

A condition consisting of a value.

Fields
Union field value. The value for the condition to trigger. value can be only one of the following:
minimum_risk_score

ProfileScoreBucket

The minimum data risk score that triggers the condition.

minimum_sensitivity_score

ProfileScoreBucket

The minimum sensitivity level that triggers the condition.

PubSubExpressions

An expression, consisting of an operator and conditions.

Fields
logical_operator

PubSubLogicalOperator

The operator to apply to the collection of conditions.

conditions[]

PubSubCondition

Conditions to apply to the expression.

PubSubLogicalOperator

Logical operators for conditional checks.

Enums
LOGICAL_OPERATOR_UNSPECIFIED Unused.
OR Conditional OR.
AND Conditional AND.
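Putting the pieces together, the way a PubSubExpressions message combines its PubSubConditions can be sketched in plain Python. The score ranking and data shapes below are illustrative stand-ins, not client-library types:

```python
# Illustrative ranking of profile scores; condition buckets follow
# ProfileScoreBucket: HIGH triggers only on HIGH, MEDIUM_OR_HIGH on either.
RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def condition_met(minimum_bucket: str, score: str) -> bool:
    """True if the profile score meets the condition's minimum bucket."""
    threshold = 2 if minimum_bucket == "HIGH" else 1  # MEDIUM_OR_HIGH
    return RANK[score] >= threshold

def expression_met(logical_operator: str, conditions, profile) -> bool:
    """Apply OR / AND across all conditions, per PubSubLogicalOperator."""
    combine = any if logical_operator == "OR" else all
    return combine(condition_met(minimum, profile[field])
                   for field, minimum in conditions)

profile = {"risk": "MEDIUM", "sensitivity": "HIGH"}
conditions = [("risk", "HIGH"), ("sensitivity", "HIGH")]
print(expression_met("OR", conditions, profile))   # True: sensitivity meets HIGH
print(expression_met("AND", conditions, profile))  # False: risk is only MEDIUM
```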

DataProfilePubSubMessage

Pub/Sub topic message for a DataProfileAction.PubSubNotification event. To receive a message of protocol buffer schema type, convert the message data to an object of this proto class.

Fields
profile

TableDataProfile

If DetailLevel is TABLE_PROFILE this will be fully populated. Otherwise, if DetailLevel is RESOURCE_NAME, then only name and full_resource will be populated.

file_store_profile

FileStoreDataProfile

If DetailLevel is FILE_STORE_PROFILE this will be fully populated. Otherwise, if DetailLevel is RESOURCE_NAME, then only name and file_store_path will be populated.

event

EventType

The event that caused the Pub/Sub message to be sent.

DataProfileUpdateFrequency

How frequently data profiles can be updated. New options can be added at a later time.

Enums
UPDATE_FREQUENCY_UNSPECIFIED Unspecified.
UPDATE_FREQUENCY_NEVER After the data profile is created, it will never be updated.
UPDATE_FREQUENCY_DAILY The data profile can be updated up to once every 24 hours.
UPDATE_FREQUENCY_MONTHLY The data profile can be updated up to once every 30 days. Default.

DataRiskLevel

Score is a summary of all elements in the data profile. A higher number means more risk.

Fields
score

DataRiskLevelScore

The score applied to the resource.

DataRiskLevelScore

Various score levels for resources.

Enums
RISK_SCORE_UNSPECIFIED Unused.
RISK_LOW Low risk - Lower indication of sensitive data that appears to have additional access restrictions in place or no indication of sensitive data found.
RISK_UNKNOWN Unable to determine risk.
RISK_MODERATE Medium risk - Sensitive data may be present, but additional access or fine-grained access restrictions appear to be in place. Consider limiting access even further or transforming the data to mask it.
RISK_HIGH High risk - SPII may be present. Access controls may include public ACLs. Exfiltration of data may lead to user data loss. Re-identification of users may be possible. Consider limiting usage and/or removing SPII.

DataSourceType

Message used to identify the type of resource being profiled.

Fields
data_source

string

Output only. An identifying string to the type of resource being profiled. Current values:

  • google/bigquery/table
  • google/project
  • google/sql/table
  • google/gcs/bucket

DatabaseResourceCollection

Match database resources using regex filters. Examples of database resources are tables, views, and stored procedures.

Fields
Union field pattern. The first filter containing a pattern that matches a database resource will be used. pattern can be only one of the following:
include_regexes

DatabaseResourceRegexes

A collection of regular expressions to match a database resource against.

DatabaseResourceReference

Identifies a single database resource, like a table within a database.

Fields
project_id

string

Required. If within a project-level config, then this must match the config's project ID.

instance

string

Required. The instance where this resource is located. For example: Cloud SQL instance ID.

database

string

Required. Name of a database within the instance.

database_resource

string

Required. Name of a database resource, for example, a table within the database.

DatabaseResourceRegex

A pattern to match against one or more database resources. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

Fields
project_id_regex

string

For organizations, if unset, will match all projects. Has no effect for configurations created within a project.

instance_regex

string

Regex to test the instance name against. If empty, all instances match.

database_regex

string

Regex to test the database name against. If empty, all databases match.

database_resource_name_regex

string

Regex to test the database resource's name against. An example of a database resource name is a table's name. Other database resource names like view names could be included in the future. If empty, all database resources match.
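The matching behavior described above — an empty field matches everything — can be sketched with Python's re module, which approximates RE2 for simple patterns. Whether the service anchors matches to the full name is an assumption made for illustration:

```python
import re

def resource_matches(pattern: dict, resource: dict) -> bool:
    """An empty or absent regex field matches everything, per the field docs.
    Full-match anchoring here is an illustrative assumption."""
    for field in ("project_id", "instance", "database", "database_resource_name"):
        regex = pattern.get(f"{field}_regex")
        if regex and not re.fullmatch(regex, resource[field]):
            return False
    return True

pattern = {"instance_regex": r"prod-.*", "database_resource_name_regex": r"users"}
resource = {"project_id": "my-project", "instance": "prod-sql-1",
            "database": "app", "database_resource_name": "users"}
print(resource_matches(pattern, resource))  # True
```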

DatabaseResourceRegexes

A collection of regular expressions to determine what database resources to match against.

Fields
patterns[]

DatabaseResourceRegex

A group of regular expression patterns to match against one or more database resources. Maximum of 100 entries. The sum of all regular expression's length can't exceed 10 KiB.

DatastoreKey

Record key for a finding in Cloud Datastore.

Fields
entity_key

Key

Datastore entity key.

DatastoreOptions

Options defining a data set within Google Cloud Datastore.

Fields
partition_id

PartitionId

A partition ID identifies a grouping of entities. The grouping is always by project and namespace, however the namespace ID may be empty.

kind

KindExpression

The kind to process.

DateShiftConfig

Shifts dates by random number of days, with option to be consistent for the same context. See https://cloud.google.com/sensitive-data-protection/docs/concepts-date-shifting to learn more.

Fields
upper_bound_days

int32

Required. Range of shift in days. Actual shift will be selected at random within this range (inclusive ends). Negative means shift to earlier in time. Must not be more than 365250 days (1000 years) each direction.

For example, 3 means shift date to at most 3 days into the future.

lower_bound_days

int32

Required. For example, -5 means shift date to at most 5 days back in the past.

context

FieldId

Points to the field that contains the context, for example, an entity id. If set, must also set cryptoKey. If set, shift will be consistent for the given context.

Union field method. Method for calculating shift that takes context into consideration. If set, must also set context. Can only be applied to table items. method can be only one of the following:
crypto_key

CryptoKey

Causes the shift to be computed based on this key and the context. This results in the same shift for the same context and crypto_key. If set, must also set context. Can only be applied to table items.
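The consistent-shift behavior can be illustrated with an HMAC-based sketch: deriving the day offset from the context under a key yields the same shift for every record with the same context. The service's actual derivation from a CryptoKey is not documented here, so this is purely conceptual:

```python
import hashlib
import hmac
from datetime import date, timedelta

def shift_date(value: date, lower: int, upper: int,
               key: bytes, context: str) -> date:
    """Pick a shift in [lower, upper] days derived from HMAC(key, context),
    so identical contexts always receive the identical shift. Illustrative
    only; not the service's real CryptoKey derivation."""
    digest = hmac.new(key, context.encode(), hashlib.sha256).digest()
    span = upper - lower + 1  # inclusive range of possible shifts
    offset = int.from_bytes(digest[:8], "big") % span
    return value + timedelta(days=lower + offset)

# Same context => same shift applied to different dates.
d1 = shift_date(date(2018, 1, 1), -5, 3, b"secret", context="user-42")
d2 = shift_date(date(2018, 6, 15), -5, 3, b"secret", context="user-42")
print(d1, d2)
```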

DateTime

Message for a date-time object, for example 2018-01-01 or 5th August.

Fields
date

Date

One or more of the following must be set. Must be a valid date or time value.

day_of_week

DayOfWeek

Day of week

time

TimeOfDay

Time of day

time_zone

TimeZone

Time zone

TimeZone

Time zone of the date time object.

Fields
offset_minutes

int32

Set only if the offset can be determined. Positive for time ahead of UTC. For example, for "UTC-9", this value is -540.
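The sign convention is easy to get backwards; a tiny sketch converting a "UTC±H" label (the label format is an assumption for illustration) to the offset_minutes convention:

```python
def utc_label_to_offset_minutes(label: str) -> int:
    """Convert a label like "UTC-9" to offset_minutes: positive for time
    ahead of UTC, so "UTC-9" -> -540 and "UTC+5" -> 300."""
    hours = int(label.removeprefix("UTC"))
    return hours * 60

print(utc_label_to_offset_minutes("UTC-9"))  # -540
print(utc_label_to_offset_minutes("UTC+5"))  # 300
```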

DeidentifyConfig

The configuration that controls how the data will change.

Fields
transformation_error_handling

TransformationErrorHandling

Mode for handling transformation errors. If left unspecified, the default mode is TransformationErrorHandling.ThrowError.

Union field transformation. Type of transformation transformation can be only one of the following:
info_type_transformations

InfoTypeTransformations

Treat the dataset as free-form text and apply the same free text transformation everywhere.

record_transformations

RecordTransformations

Treat the dataset as structured. Transformations can be applied to specific locations within structured datasets, such as transforming a column within a table.

image_transformations

ImageTransformations

Treat the dataset as an image and redact.

DeidentifyContentRequest

Request to de-identify a ContentItem.

Fields
parent

string

Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • serviceusage.services.use
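
The two documented parent formats can be produced by a small helper (hypothetical, for illustration):

```python
def parent_name(project_id: str, location_id: str = "") -> str:
    """Build the request parent per the documented formats; omitting the
    location defaults processing to the global region."""
    if location_id:
        return f"projects/{project_id}/locations/{location_id}"
    return f"projects/{project_id}"

print(parent_name("example-project", "europe-west3"))
# projects/example-project/locations/europe-west3
print(parent_name("example-project"))
# projects/example-project
```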
deidentify_config

DeidentifyConfig

Configuration for the de-identification of the content item. Items specified here will override the template referenced by the deidentify_template_name argument.

inspect_config

InspectConfig

Configuration for the inspector. Items specified here will override the template referenced by the inspect_template_name argument.

item

ContentItem

The item to de-identify. Will be treated as text.

This value must be of type Table if your deidentify_config is a RecordTransformations object.

inspect_template_name

string

Template to use. Any configuration directly specified in inspect_config will override those set in the template. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged.

deidentify_template_name

string

Template to use. Any configuration directly specified in deidentify_config will override those set in the template. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged.

location_id

string

Deprecated. This field has no effect.

DeidentifyContentResponse

Results of de-identifying a ContentItem.

Fields
item

ContentItem

The de-identified item.

overview

TransformationOverview

An overview of the changes that were made on the item.

DeidentifyDataSourceDetails

The results of a Deidentify action from an inspect job.

Fields
requested_options

RequestedDeidentifyOptions

De-identification config used for the request.

deidentify_stats

DeidentifyDataSourceStats

Stats about the de-identification operation.

RequestedDeidentifyOptions

De-identification options.

Fields
snapshot_deidentify_template

DeidentifyTemplate

Snapshot of the state of the DeidentifyTemplate from the Deidentify action at the time this job was run.

snapshot_structured_deidentify_template

DeidentifyTemplate

Snapshot of the state of the structured DeidentifyTemplate from the Deidentify action at the time this job was run.

snapshot_image_redact_template

DeidentifyTemplate

Snapshot of the state of the image transformation DeidentifyTemplate from the Deidentify action at the time this job was run.

DeidentifyDataSourceStats

Summary of what was modified during a transformation.

Fields
transformed_bytes

int64

Total size in bytes that were transformed in some way.

transformation_count

int64

Number of successfully applied transformations.

transformation_error_count

int64

Number of errors encountered while trying to apply transformations.

DeidentifyTemplate

DeidentifyTemplates contains instructions on how to de-identify content. See https://cloud.google.com/sensitive-data-protection/docs/concepts-templates to learn more.

Fields
name

string

Output only. The template name.

The template will have one of the following formats: projects/PROJECT_ID/deidentifyTemplates/TEMPLATE_ID OR organizations/ORGANIZATION_ID/deidentifyTemplates/TEMPLATE_ID

display_name

string

Display name (max 256 chars).

description

string

Short description (max 256 chars).

create_time

Timestamp

Output only. The creation timestamp of an inspectTemplate.

update_time

Timestamp

Output only. The last update timestamp of an inspectTemplate.

deidentify_config

DeidentifyConfig

The core content of the template.

DeleteConnectionRequest

Request message for DeleteConnection.

Fields
name

string

Required. Resource name of the Connection to be deleted, in the format: projects/{project}/locations/{location}/connections/{connection}.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.connections.delete

DeleteDeidentifyTemplateRequest

Request message for DeleteDeidentifyTemplate.

Fields
name

string

Required. Resource name of the organization and deidentify template to be deleted, for example organizations/433245324/deidentifyTemplates/432452342 or projects/project-id/deidentifyTemplates/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.deidentifyTemplates.delete

DeleteDiscoveryConfigRequest

Request message for DeleteDiscoveryConfig.

Fields
name

string

Required. Resource name of the project and the config, for example projects/dlp-test-project/discoveryConfigs/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobTriggers.delete

DeleteDlpJobRequest

The request message for deleting a DLP job.

Fields
name

string

Required. The name of the DlpJob resource to be deleted.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobs.delete

DeleteFileStoreDataProfileRequest

Request message for DeleteFileStoreProfile.

Fields
name

string

Required. Resource name of the file store data profile.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.fileStoreDataProfiles.delete

DeleteInspectTemplateRequest

Request message for DeleteInspectTemplate.

Fields
name

string

Required. Resource name of the organization and inspectTemplate to be deleted, for example organizations/433245324/inspectTemplates/432452342 or projects/project-id/inspectTemplates/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.inspectTemplates.delete

DeleteJobTriggerRequest

Request message for DeleteJobTrigger.

Fields
name

string

Required. Resource name of the project and the triggeredJob, for example projects/dlp-test-project/jobTriggers/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobTriggers.delete

DeleteStoredInfoTypeRequest

Request message for DeleteStoredInfoType.

Fields
name

string

Required. Resource name of the organization and storedInfoType to be deleted, for example organizations/433245324/storedInfoTypes/432452342 or projects/project-id/storedInfoTypes/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.storedInfoTypes.delete

DeleteTableDataProfileRequest

Request message for DeleteTableProfile.

Fields
name

string

Required. Resource name of the table data profile.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.tableDataProfiles.delete

Disabled

This type has no fields.

Do not profile the tables.

DiscoveryBigQueryConditions

Requirements that must be true before a table is scanned in discovery for the first time. There is an AND relationship between the top-level attributes. Additionally, you can set minimum conditions with an OR relationship between them (such as a minimum row count or a minimum table age) that must be met before Cloud DLP scans a table.

Fields
created_after

Timestamp

BigQuery table must have been created after this date. Used to avoid backfilling.

or_conditions

OrConditions

At least one of the conditions must be true for a table to be scanned.

Union field included_types. The type of BigQuery tables to scan. If nothing is set the default behavior is to scan only tables of type TABLE and to give errors for all unsupported tables. included_types can be only one of the following:
types

BigQueryTableTypes

Restrict discovery to specific table types.

type_collection

BigQueryTableTypeCollection

Restrict discovery to categories of table types.

OrConditions

There is an OR relationship between these attributes. They are used to determine if a table should be scanned or not in Discovery.

Fields
min_row_count

int32

Minimum number of rows that should be present before Cloud DLP profiles a table.

min_age

Duration

Minimum age a table must have before Cloud DLP can profile it. Value must be 1 hour or greater.
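The AND/OR structure described above can be sketched as a plain predicate; the parameter handling and defaults below are illustrative assumptions, not the service's implementation:

```python
from datetime import datetime, timedelta, timezone

def table_eligible(created, row_count, created_after,
                   min_row_count, min_age, now=None):
    """created_after is ANDed with the or_conditions; min_row_count and
    min_age are ORed with each other, per the message descriptions."""
    now = now or datetime.now(timezone.utc)
    if created_after is not None and created <= created_after:
        return False  # table predates the backfill cutoff
    rows_ok = row_count >= min_row_count
    age_ok = (now - created) >= min_age
    return rows_ok or age_ok

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
created = datetime(2024, 5, 30, tzinfo=timezone.utc)
cutoff = datetime(2024, 1, 1, tzinfo=timezone.utc)
# Small table, but old enough: OR condition satisfied by age.
print(table_eligible(created, 10, cutoff, 1000, timedelta(hours=1), now=now))
# Small table, too young for a 30-day minimum: neither OR condition holds.
print(table_eligible(created, 10, cutoff, 1000, timedelta(days=30), now=now))
```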

DiscoveryBigQueryFilter

Determines what tables will have profiles generated within an organization or project. Includes the ability to filter by regular expression patterns on project ID, dataset ID, and table ID.

Fields
Union field filter. Whether the filter applies to a specific set of tables or all other tables within the location being profiled. The first filter to match will be applied, regardless of the condition. If none is set, will default to other_tables. filter can be only one of the following:
tables

BigQueryTableCollection

A specific set of tables for this filter to apply to. A table collection must be specified in only one filter per config. If a table id or dataset is empty, Cloud DLP assumes all tables in that collection must be profiled. Must specify a project ID.

other_tables

AllOtherBigQueryTables

Catch-all. This should always be the last filter in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically.

table_reference

TableReference

The table to scan. Discovery configurations including this can only include one DiscoveryTarget (the DiscoveryTarget with this TableReference).

AllOtherBigQueryTables

This type has no fields.

Catch-all for all other tables not specified by other filters. Should always be last, except for single-table configurations, which will only have a TableReference target.

DiscoveryCloudSqlConditions

Requirements that must be true before a table is profiled for the first time.

Fields
database_engines[]

DatabaseEngine

Optional. Database engines that should be profiled. Defaults to ALL_SUPPORTED_DATABASE_ENGINES if unspecified.

types[]

DatabaseResourceType

Data profiles will only be generated for the database resource types specified in this field. If not specified, defaults to [DATABASE_RESOURCE_TYPE_ALL_SUPPORTED_TYPES].

DatabaseEngine

The database engines that should be profiled.

Enums
DATABASE_ENGINE_UNSPECIFIED Unused.
ALL_SUPPORTED_DATABASE_ENGINES Include all supported database engines.
MYSQL MySQL database.
POSTGRES PostgreSQL database.

DatabaseResourceType

Cloud SQL database resource types. New values can be added at a later time.

Enums
DATABASE_RESOURCE_TYPE_UNSPECIFIED Unused.
DATABASE_RESOURCE_TYPE_ALL_SUPPORTED_TYPES Includes database resource types that become supported at a later time.
DATABASE_RESOURCE_TYPE_TABLE Tables.

DiscoveryCloudSqlFilter

Determines what tables will have profiles generated within an organization or project. Includes the ability to filter by regular expression patterns on project ID, location, instance, database, and database resource name.

Fields
Union field filter. Whether the filter applies to a specific set of database resources or all other database resources within the location being profiled. The first filter to match will be applied, regardless of the condition. If none is set, will default to others. filter can be only one of the following:
collection

DatabaseResourceCollection

A specific set of database resources for this filter to apply to.

others

AllOtherDatabaseResources

Catch-all. This should always be the last target in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically.

database_resource_reference

DatabaseResourceReference

The database resource to scan. Targets including this can only include one target (the target with this database resource reference).

DiscoveryCloudSqlGenerationCadence

How often existing tables should have their profiles refreshed. New tables are scanned as quickly as possible depending on system capacity.

Fields
schema_modified_cadence

SchemaModifiedCadence

When to reprofile if the schema has changed.

refresh_frequency

DataProfileUpdateFrequency

Data changes (non-schema changes) in Cloud SQL tables can't trigger reprofiling. If you set this field, profiles are refreshed at this frequency regardless of whether the underlying tables have changed. Defaults to never.

inspect_template_modified_cadence

DiscoveryInspectTemplateModifiedCadence

Governs when to update data profiles when the inspection rules defined by the InspectTemplate change. If not set, changing the template will not cause a data profile to update.

SchemaModifiedCadence

How frequently to modify the profile when the table's schema is modified.

Fields
types[]

CloudSqlSchemaModification

The types of schema modifications to consider. Defaults to NEW_COLUMNS.

frequency

DataProfileUpdateFrequency

Frequency to regenerate data profiles when the schema is modified. Defaults to monthly.

CloudSqlSchemaModification

The type of modification that causes a profile update.

Enums
SQL_SCHEMA_MODIFICATION_UNSPECIFIED Unused.
NEW_COLUMNS New columns have appeared.
REMOVED_COLUMNS Columns have been removed from the table.

DiscoveryCloudStorageConditions

Requirements that must be true before a Cloud Storage bucket or object is scanned in discovery for the first time. There is an AND relationship between the top-level attributes.

Fields
included_object_attributes[]

CloudStorageObjectAttribute

Required. Only objects with the specified attributes will be scanned. If an object has one of the specified attributes but is inside an excluded bucket, it will not be scanned. Defaults to [ALL_SUPPORTED_OBJECTS]. A profile will be created even if no objects match the included_object_attributes.

included_bucket_attributes[]

CloudStorageBucketAttribute

Required. Only objects with the specified attributes will be scanned. Defaults to [ALL_SUPPORTED_BUCKETS] if unset.

CloudStorageBucketAttribute

The attribute of a bucket.

Enums
CLOUD_STORAGE_BUCKET_ATTRIBUTE_UNSPECIFIED Unused.
ALL_SUPPORTED_BUCKETS Scan buckets regardless of the attribute.
AUTOCLASS_DISABLED Buckets with autoclass disabled (https://cloud.google.com/storage/docs/autoclass). Only one of AUTOCLASS_DISABLED or AUTOCLASS_ENABLED should be set.
AUTOCLASS_ENABLED Buckets with autoclass enabled (https://cloud.google.com/storage/docs/autoclass). Only one of AUTOCLASS_DISABLED or AUTOCLASS_ENABLED should be set. Scanning Autoclass-enabled buckets can affect object storage classes.

CloudStorageObjectAttribute

The attribute of an object. See https://cloud.google.com/storage/docs/storage-classes for more information on storage classes.

Enums
CLOUD_STORAGE_OBJECT_ATTRIBUTE_UNSPECIFIED Unused.
ALL_SUPPORTED_OBJECTS Scan objects regardless of the attribute.
STANDARD Scan objects with the standard storage class.
NEARLINE Scan objects with the nearline storage class. This will incur retrieval fees.
COLDLINE Scan objects with the coldline storage class. This will incur retrieval fees.
ARCHIVE Scan objects with the archive storage class. This will incur retrieval fees.
REGIONAL Scan objects with the regional storage class.
MULTI_REGIONAL Scan objects with the multi-regional storage class.
DURABLE_REDUCED_AVAILABILITY Scan objects with the durable reduced availability storage class. This will incur retrieval fees.

DiscoveryCloudStorageFilter

Determines which buckets will have profiles generated within an organization or project. Includes the ability to filter by regular expression patterns on project ID and bucket name.

Fields
Union field filter. Whether the filter applies to a specific set of buckets or all other buckets within the location being profiled. The first filter to match will be applied, regardless of the condition. If none is set, will default to others. filter can be only one of the following:
collection

FileStoreCollection

Optional. A specific set of buckets for this filter to apply to.

cloud_storage_resource_reference

CloudStorageResourceReference

Optional. The bucket to scan. A config using this filter can have only one target (the target with this bucket). This enables profiling the contents of a single bucket, while the other options allow for easy profiling of many buckets within a project or an organization.

others

AllOtherResources

Optional. Catch-all. This should always be the last target in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically.
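The first-match semantics of this filter list can be sketched with plain dictionaries. This is a hypothetical illustration, not the service's implementation: field names follow the CloudStorageRegex message, but the regex-anchoring details are simplified.

```python
import re

# Hypothetical filter list illustrating first-match semantics: the first
# filter to match a bucket wins, and the catch-all "others" comes last.
filters = [
    {"collection": {"include_regexes": {"patterns": [
        {"cloud_storage_regex": {
            "project_id_regex": "prod-.*",
            "bucket_name_regex": "sensitive-.*"}}]}}},
    {"others": {}},  # anything above it applies first
]

def matching_filter(project_id, bucket_name):
    """Return the first filter that matches, mimicking evaluation order."""
    for f in filters:
        if "others" in f:
            return f
        for p in f["collection"]["include_regexes"]["patterns"]:
            r = p["cloud_storage_regex"]
            if (re.fullmatch(r["project_id_regex"], project_id)
                    and re.fullmatch(r["bucket_name_regex"], bucket_name)):
                return f
    return None
```

Because anything above the catch-all applies first, a bucket matched by the collection filter never falls through to `others`.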

DiscoveryCloudStorageGenerationCadence

How often existing buckets should have their profiles refreshed. New buckets are scanned as quickly as possible depending on system capacity.

Fields
refresh_frequency

DataProfileUpdateFrequency

Optional. Data changes in Cloud Storage can't trigger reprofiling. If you set this field, profiles are refreshed at this frequency regardless of whether the underlying buckets have changed. Defaults to never.

inspect_template_modified_cadence

DiscoveryInspectTemplateModifiedCadence

Optional. Governs when to update data profiles when the inspection rules defined by the InspectTemplate change. If not set, changing the template will not cause a data profile to update.

DiscoveryConfig

Configuration for discovery to scan resources for profile generation. Only one discovery configuration may exist per organization, folder, or project.

The generated data profiles are retained according to the data retention policy.

Fields
name

string

Unique resource name for the DiscoveryConfig, assigned by the service when the DiscoveryConfig is created, for example projects/dlp-test-project/locations/global/discoveryConfigs/53234423.

display_name

string

Display name (max 100 chars)

org_config

OrgConfig

Only set when the parent is an org.

other_cloud_starting_location

OtherCloudDiscoveryStartingLocation

Must be set only when scanning other clouds.

inspect_templates[]

string

Detection logic for profile generation.

Not all template features are used by Discovery. FindingLimits, include_quote and exclude_info_types have no impact on Discovery.

Multiple templates may be provided if there is data in multiple regions. At most one template may be specified per region (including "global"). Each region is scanned using the applicable template. If no region-specific template is specified, but a "global" template is specified, it will be copied to that region and used instead. If no global or region-specific template is provided for a region with data, that region's data will not be scanned.

For more information, see https://cloud.google.com/sensitive-data-protection/docs/data-profiles#data-residency.
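The per-region resolution described above can be sketched as a small helper. This is a hypothetical illustration of the documented behavior; the service performs this resolution internally.

```python
def resolve_template(region_templates, region):
    """Sketch of per-region inspect template resolution: a region-specific
    template wins; otherwise the "global" template is copied to the region;
    with neither, the region's data is not scanned (returns None).
    Hypothetical helper; not part of the API surface."""
    return region_templates.get(region) or region_templates.get("global")

templates = {
    "global": "projects/p/locations/global/inspectTemplates/t-global",
    "us-central1": "projects/p/locations/us-central1/inspectTemplates/t-us",
}
```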

actions[]

DataProfileAction

Actions to execute at the completion of scanning.

targets[]

DiscoveryTarget

Target to match against for determining what to scan and how frequently.

errors[]

Error

Output only. A stream of errors encountered when the config was activated. Repeated errors may result in the config automatically being paused. Will return the last 100 errors. Whenever the config is modified this list will be cleared.

create_time

Timestamp

Output only. The creation timestamp of a DiscoveryConfig.

update_time

Timestamp

Output only. The last update timestamp of a DiscoveryConfig.

last_run_time

Timestamp

Output only. The timestamp of the last time this config was executed.

status

Status

Required. A status for this configuration.
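Putting the fields above together, a minimal project-level DiscoveryConfig can be sketched in JSON form. All values here are illustrative; field names follow this message, and the BigQuery target filter shown is an assumption about a typical catch-all target.

```python
# A minimal, hypothetical project-level DiscoveryConfig.
# Field names follow the DiscoveryConfig message; values are illustrative.
discovery_config = {
    "display_name": "Profile BigQuery tables",   # max 100 chars
    "inspect_templates": [
        "projects/my-project/locations/global/inspectTemplates/my-template",
    ],
    "targets": [
        # Assumed catch-all BigQuery target; see DiscoveryTarget.
        {"big_query_target": {"filter": {"other_tables": {}}}},
    ],
    "status": "RUNNING",                          # required
}
```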

OrgConfig

Project and scan location information. Only set when the parent is an org.

Fields
location

DiscoveryStartingLocation

The data to scan: folder, org, or project

project_id

string

The project that will run the scan. The DLP service account that exists within this project must have access to all resources that are profiled, and the DLP API must be enabled.

Status

Whether the discovery config is currently active. New options may be added at a later time.

Enums
STATUS_UNSPECIFIED Unused
RUNNING The discovery config is currently active.
PAUSED The discovery config is paused temporarily.

DiscoveryFileStoreConditions

Requirements that must be true before a file store is scanned in discovery for the first time. There is an AND relationship between the top-level attributes.

Fields
created_after

Timestamp

Optional. File store must have been created after this date. Used to avoid backfilling.

min_age

Duration

Optional. Minimum age a file store must have. If set, the value must be 1 hour or greater.

Union field conditions. File store specific conditions. conditions can be only one of the following:
cloud_storage_conditions

DiscoveryCloudStorageConditions

Optional. Cloud Storage conditions.

DiscoveryGenerationCadence

What must take place for a profile to be updated and how frequently it should occur. New tables are scanned as quickly as possible depending on system capacity.

Fields
schema_modified_cadence

DiscoverySchemaModifiedCadence

Governs when to update data profiles when a schema is modified.

table_modified_cadence

DiscoveryTableModifiedCadence

Governs when to update data profiles when a table is modified.

inspect_template_modified_cadence

DiscoveryInspectTemplateModifiedCadence

Governs when to update data profiles when the inspection rules defined by the InspectTemplate change. If not set, changing the template will not cause a data profile to update.

refresh_frequency

DataProfileUpdateFrequency

Frequency at which profiles should be updated, regardless of whether the underlying resource has changed. Defaults to never.

DiscoveryInspectTemplateModifiedCadence

The cadence at which to update data profiles when the inspection rules defined by the InspectTemplate change.

Fields
frequency

DataProfileUpdateFrequency

How frequently data profiles can be updated when the template is modified. Defaults to never.

DiscoveryOtherCloudConditions

Requirements that must be true before a resource is profiled for the first time.

Fields
min_age

Duration

Minimum age a resource must reach before Cloud DLP can profile it. Value must be 1 hour or greater.

Union field conditions. The conditions to apply. conditions can be only one of the following:
amazon_s3_bucket_conditions

AmazonS3BucketConditions

Amazon S3 bucket conditions.

DiscoveryOtherCloudFilter

Determines which resources from the other cloud will have profiles generated. Includes the ability to filter by resource names.

Fields
Union field filter. Whether the filter applies to a specific set of resources or all other resources. The first filter to match will be applied, regardless of the condition. Defaults to others if none is set. filter can be only one of the following:
collection

OtherCloudResourceCollection

A collection of resources for this filter to apply to.

single_resource

OtherCloudSingleResourceReference

The resource to scan. Configs using this filter can only have one target (the target with this single resource reference).

others

AllOtherResources

Optional. Catch-all. This should always be the last target in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically.

DiscoveryOtherCloudGenerationCadence

How often existing resources should have their profiles refreshed. New resources are scanned as quickly as possible depending on system capacity.

Fields
refresh_frequency

DataProfileUpdateFrequency

Optional. Frequency to update profiles regardless of whether the underlying resource has changes. Defaults to never.

inspect_template_modified_cadence

DiscoveryInspectTemplateModifiedCadence

Optional. Governs when to update data profiles when the inspection rules defined by the InspectTemplate change. If not set, changing the template will not cause a data profile to update.

DiscoverySchemaModifiedCadence

The cadence at which to update data profiles when a schema is modified.

Fields
types[]

BigQuerySchemaModification

The type of events to consider when deciding if the table's schema has been modified and should have the profile updated. Defaults to NEW_COLUMNS.

frequency

DataProfileUpdateFrequency

How frequently profiles may be updated when schemas are modified. Defaults to monthly.

DiscoveryStartingLocation

The location to begin a discovery scan. Denotes an organization ID or folder ID within an organization.

Fields
Union field location. The location to be scanned. location can be only one of the following:
organization_id

int64

The ID of an organization to scan.

folder_id

int64

The ID of the folder within an organization to be scanned.

DiscoveryTableModifiedCadence

The cadence at which to update data profiles when a table is modified.

Fields
types[]

BigQueryTableModification

The type of events to consider when deciding if the table has been modified and should have the profile updated. Defaults to MODIFIED_TIMESTAMP.

frequency

DataProfileUpdateFrequency

How frequently data profiles can be updated when tables are modified. Defaults to never.

DiscoveryTarget

Target used to match against for Discovery.

Fields
Union field target. A target to match against for Discovery. target can be only one of the following:
big_query_target

BigQueryDiscoveryTarget

BigQuery target for Discovery. The first target to match a table will be the one applied.

cloud_sql_target

CloudSqlDiscoveryTarget

Cloud SQL target for Discovery. The first target to match a table will be the one applied.

secrets_target

SecretsDiscoveryTarget

Discovery target that looks for credentials and secrets stored in cloud resource metadata and reports them as vulnerabilities to Security Command Center. Only one target of this type is allowed.

cloud_storage_target

CloudStorageDiscoveryTarget

Cloud Storage target for Discovery. The first target to match a bucket will be the one applied.

other_cloud_target

OtherCloudDiscoveryTarget

Other clouds target for discovery. The first target to match a resource will be the one applied.

DlpJob

Combines all of the information about a DLP job.

Fields
name

string

The server-assigned name.

type

DlpJobType

The type of job.

state

JobState

State of a job.

create_time

Timestamp

Time when the job was created.

start_time

Timestamp

Time when the job started.

end_time

Timestamp

Time when the job finished.

last_modified

Timestamp

Time when the job was last modified by the system.

job_trigger_name

string

If created by a job trigger, the resource name of the trigger that instantiated the job.

errors[]

Error

A stream of errors encountered running the job.

action_details[]

ActionDetails

Events that should occur after the job has completed.

Union field details. Job details. details can be only one of the following:
risk_details

AnalyzeDataSourceRiskDetails

Results from analyzing risk of a data source.

inspect_details

InspectDataSourceDetails

Results from inspecting a data source.

JobState

Possible states of a job. New items may be added.

Enums
JOB_STATE_UNSPECIFIED Unused.
PENDING The job has not yet started.
RUNNING The job is currently running. Once a job has finished it will transition to FAILED or DONE.
DONE The job is no longer running.
CANCELED The job was canceled before it could be completed.
FAILED The job had an error and did not complete.
ACTIVE The job is currently accepting findings via hybridInspect. A hybrid job in ACTIVE state may continue to have findings added to it through the calling of hybridInspect. After the job has finished no more calls to hybridInspect may be made. ACTIVE jobs can transition to DONE.

DlpJobType

An enum to represent the various types of DLP jobs.

Enums
DLP_JOB_TYPE_UNSPECIFIED Defaults to INSPECT_JOB.
INSPECT_JOB The job inspected Google Cloud for sensitive data.
RISK_ANALYSIS_JOB The job executed a Risk Analysis computation.

DocumentLocation

Location of a finding within a document.

Fields
file_offset

int64

Offset of the line, from the beginning of the file, where the finding is located.

EncryptionStatus

How a resource is encrypted.

Enums
ENCRYPTION_STATUS_UNSPECIFIED Unused.
ENCRYPTION_GOOGLE_MANAGED Google manages server-side encryption keys on your behalf.
ENCRYPTION_CUSTOMER_MANAGED Customer provides the key.

EntityId

An entity in a dataset is a field or set of fields that correspond to a single person. For example, in medical records the EntityId might be a patient identifier, or for financial records it might be an account identifier. This message is used when generalizations or analysis must take into account that multiple rows correspond to the same entity.

Fields
field

FieldId

Composite key indicating which field contains the entity identifier.

Error

Detailed information about an error encountered during job execution or the results of an unsuccessful activation of the JobTrigger.

Fields
details

Status

Detailed error codes and messages.

timestamps[]

Timestamp

The times the error occurred. List includes the oldest timestamp and the last 9 timestamps.

extra_info

ErrorExtraInfo

Additional information about the error.

ErrorExtraInfo

Additional information about the error.

Enums
ERROR_INFO_UNSPECIFIED Unused.
IMAGE_SCAN_UNAVAILABLE_IN_REGION Image scan is not available in the region.
FILE_STORE_CLUSTER_UNSUPPORTED File store cluster is not supported for profile generation.

ExcludeByHotword

The rule to exclude findings based on a hotword. For record inspection of tables, column names are considered hotwords. An example of this is to exclude a finding if it belongs to a BigQuery column that matches a specific pattern.

Fields
hotword_regex

Regex

Regular expression pattern defining what qualifies as a hotword.

proximity

Proximity

Range of characters within which the entire hotword must reside. The total length of the window cannot exceed 1000 characters. The windowBefore property in proximity should be set to 1 if the hotword needs to be included in a column header.
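For free text, the proximity check amounts to searching for the hotword pattern in a character window around the finding. The sketch below is a simplification (the real rule has additional behavior, such as treating column names as hotwords for record inspection); the function and sample values are hypothetical.

```python
import re

def excluded_by_hotword(text, start, end, hotword_regex,
                        window_before, window_after):
    """Simplified sketch of ExcludeByHotword on free text: drop a finding
    when the hotword pattern occurs within the proximity window around
    the finding's character range [start, end)."""
    context = text[max(0, start - window_before):end + window_after]
    return re.search(hotword_regex, context) is not None

# Exclude SSN-looking findings preceded by the word "test" within 30 chars.
sample = "test account number: 222-22-2222"
```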

ExcludeInfoTypes

List of excluded infoTypes.

Fields
info_types[]

InfoType

A finding is dropped when it overlaps with, or is contained within, a finding of an infoType from this list. For example, if InspectionRuleSet.info_types contains "PHONE_NUMBER" and an exclusion_rule contains exclude_info_types.info_types with "EMAIL_ADDRESS", phone number findings are dropped if they overlap with an EMAIL_ADDRESS finding. As a result, "555-222-2222@example.org" generates only a single finding: an email address.

ExclusionRule

The rule that specifies conditions when findings of infoTypes specified in InspectionRuleSet are removed from results.

Fields
matching_type

MatchingType

How the rule is applied. See MatchingType documentation for details.

Union field type. Exclusion rule types. type can be only one of the following:
dictionary

Dictionary

Dictionary which defines the rule.

regex

Regex

Regular expression which defines the rule.

exclude_info_types

ExcludeInfoTypes

Set of infoTypes for which findings would affect this rule.

exclude_by_hotword

ExcludeByHotword

Drop if the hotword rule is contained in the proximate context. For tabular data, the context includes the column name.

FieldId

General identifier of a data field in a storage service.

Fields
name

string

Name describing the field.

FieldTransformation

The transformation to apply to the field.

Fields
fields[]

FieldId

Required. Input field(s) to apply the transformation to. When you have columns that reference their position within a list, omit the index from the FieldId. FieldId name matching ignores the index. For example, instead of "contact.nums[0].type", use "contact.nums.type".
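The index-ignoring match described above is equivalent to stripping list indexes from the path before comparing. A minimal sketch of that normalization (hypothetical helper, not part of the API surface):

```python
import re

def normalize_field_id(name):
    """Strip list indexes from a FieldId path, since FieldId name matching
    ignores the index (sketch of the documented behavior)."""
    return re.sub(r"\[\d+\]", "", name)
```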

condition

RecordCondition

Only apply the transformation if the condition evaluates to true for the given RecordCondition. The conditions are allowed to reference fields that are not used in the actual transformation.

Example Use Cases:

  • Apply a different bucket transformation to an age column if the zip code column for the same record is within a specific range.
  • Redact a field if the date of birth field is greater than 85.
Union field transformation. Transformation to apply. [required] transformation can be only one of the following:
primitive_transformation

PrimitiveTransformation

Apply the transformation to the entire field.

info_type_transformations

InfoTypeTransformations

Treat the contents of the field as free text, and selectively transform content that matches an InfoType.

FileClusterSummary

The file cluster summary.

Fields
file_cluster_type

FileClusterType

The file cluster type.

file_store_info_type_summaries[]

FileStoreInfoTypeSummary

InfoTypes detected in this cluster.

sensitivity_score

SensitivityScore

The sensitivity score of this cluster. The score will be SENSITIVITY_LOW if nothing has been scanned.

data_risk_level

DataRiskLevel

The data risk level of this cluster. RISK_LOW if nothing has been scanned.

errors[]

Error

A list of errors detected while scanning this cluster. The list is truncated to 10 per cluster.

file_extensions_scanned[]

FileExtensionInfo

A sample of file types scanned in this cluster. Empty if no files were scanned. File extensions can be derived from the file name or the file content.

file_extensions_seen[]

FileExtensionInfo

A sample of file types seen in this cluster. Empty if no files were seen. File extensions can be derived from the file name or the file content.

no_files_exist

bool

True if no files exist in this cluster. If the bucket had more files than could be listed, this will be false even if no files for this cluster were seen and file_extensions_seen is empty.

FileClusterType

Message used to identify file cluster type being profiled.

Fields
Union field file_cluster_type. File cluster type. file_cluster_type can be only one of the following:
cluster

Cluster

Cluster type.

Cluster

Cluster type. Each cluster corresponds to a set of file types. Over time, new types may be added and files may move between clusters.

Enums
CLUSTER_UNSPECIFIED Unused.
CLUSTER_UNKNOWN Unsupported files.
CLUSTER_TEXT Plain text.
CLUSTER_STRUCTURED_DATA Structured data like CSV, TSV etc.
CLUSTER_SOURCE_CODE Source code.
CLUSTER_RICH_DOCUMENT Rich document like docx, xlsx etc.
CLUSTER_IMAGE Images like jpeg, bmp.
CLUSTER_ARCHIVE Archives and containers like .zip, .tar etc.
CLUSTER_MULTIMEDIA Multimedia like .mp4, .avi etc.
CLUSTER_EXECUTABLE Executable files like .exe, .class, .apk etc.

FileExtensionInfo

Information regarding the discovered file extension.

Fields
file_extension

string

The file extension, if set (for example, .pdf, .jpg, .txt).

FileStoreCollection

Match file stores (e.g. buckets) using regex filters.

Fields
Union field pattern. The first filter containing a pattern that matches a file store will be used. pattern can be only one of the following:
include_regexes

FileStoreRegexes

Optional. A collection of regular expressions to match a file store against.

FileStoreDataProfile

The profile for a file store.

  • Cloud Storage: maps 1:1 with a bucket.
  • Amazon S3: maps 1:1 with a bucket.
Fields
name

string

The name of the profile.

data_source_type

DataSourceType

The resource type that was profiled.

project_data_profile

string

The resource name of the project data profile for this file store.

project_id

string

The Google Cloud project ID that owns the resource. For Amazon S3 buckets, this is the AWS Account Id.

file_store_location

string

The location of the file store.

data_storage_locations[]

string

For resources that have multiple storage locations, these are the individual regions. For Cloud Storage, this is the list of regions chosen for dual-region storage. file_store_location will normally be the corresponding multi-region for the list of individual locations. The first region is always picked as the processing and storage location for the data profile.

location_type

string

The location type of the bucket (region, dual-region, multi-region, etc). If dual-region, expect data_storage_locations to be populated.

file_store_path

string

The file store path.

  • Cloud Storage: gs://{bucket}
  • Amazon S3: s3://{bucket}
full_resource

string

The resource name of the resource profiled. https://cloud.google.com/apis/design/resource_names#full_resource_name

Example format of an S3 bucket full resource name: //cloudasset.googleapis.com/organizations/{org_id}/otherCloudConnections/aws/arn:aws:s3:::{bucket_name}

config_snapshot

DataProfileConfigSnapshot

The snapshot of the configurations used to generate the profile.

profile_status

ProfileStatus

Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated.

state

State

State of a profile.

profile_last_generated

Timestamp

The last time the profile was generated.

resource_visibility

ResourceVisibility

How broadly a resource has been shared.

sensitivity_score

SensitivityScore

The sensitivity score of this resource.

data_risk_level

DataRiskLevel

The data risk level of this resource.

create_time

Timestamp

The time the file store was first created.

last_modified_time

Timestamp

The time the file store was last modified.

file_cluster_summaries[]

FileClusterSummary

A FileClusterSummary for each cluster.

resource_attributes

map<string, Value>

Attributes of the resource being profiled. Currently used attributes:

  • customer_managed_encryption: boolean
    • true: the resource is encrypted with a customer-managed key.
    • false: the resource is encrypted with a provider-managed key.
resource_labels

map<string, string>

The labels applied to the resource at the time the profile was generated.

file_store_info_type_summaries[]

FileStoreInfoTypeSummary

InfoTypes detected in this file store.

sample_findings_table

BigQueryTable

The BigQuery table to which the sample findings are written.

file_store_is_empty

bool

True if the file store does not contain any files.

State

Possible states of a profile. New items may be added.

Enums
STATE_UNSPECIFIED Unused.
RUNNING The profile is currently running. Once a profile has finished it will transition to DONE.
DONE The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded, otherwise, it failed.

FileStoreInfoTypeSummary

Information regarding the discovered InfoType.

Fields
info_type

InfoType

The InfoType seen.

FileStoreRegex

A pattern to match against one or more file stores.

Fields
Union field resource_regex. The type of resource regex to use. resource_regex can be only one of the following:
cloud_storage_regex

CloudStorageRegex

Optional. Regex for Cloud Storage.

FileStoreRegexes

A collection of regular expressions to determine what file store to match against.

Fields
patterns[]

FileStoreRegex

Required. The group of regular expression patterns to match against one or more file stores. Maximum of 100 entries. The sum of all regular expression's length can't exceed 10 KiB.

FileType

Definitions of file type groups to scan. New types will be added to this list.

Enums
FILE_TYPE_UNSPECIFIED Includes all files.
BINARY_FILE Includes all file extensions not covered by another entry. Binary scanning attempts to convert the content of the file to UTF-8 to scan the file. If you wish to avoid this fallback, specify one or more of the other file types in your storage scan.
TEXT_FILE Included file extensions: asc, asp, aspx, brf, c, cc, cfm, cgi, cpp, csv, cxx, c++, cs, css, dart, dat, dot, eml, epub, ged, go, h, hh, hpp, hxx, h++, hs, html, htm, mkd, markdown, m, ml, mli, perl, pl, plist, pm, php, phtml, pht, properties, py, pyw, rb, rbw, rs, rss, rc, scala, sh, sql, swift, tex, shtml, shtm, xhtml, lhs, ics, ini, java, js, json, jsonl, kix, kml, ocaml, md, txt, text, tsv, vb, vcard, vcs, wml, xcodeproj, xml, xsl, xsd, yml, yaml.
IMAGE Included file extensions: bmp, gif, jpg, jpeg, jpe, png. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on image files. Image inspection is restricted to the global, us, asia, and europe regions.
WORD Microsoft Word files larger than 30 MB will be scanned as binary files. Included file extensions: docx, dotx, docm, dotm. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on Word files.
PDF PDF files larger than 30 MB will be scanned as binary files. Included file extensions: pdf. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on PDF files.
AVRO Included file extensions: avro
CSV Included file extensions: csv
TSV Included file extensions: tsv
POWERPOINT Microsoft PowerPoint files larger than 30 MB will be scanned as binary files. Included file extensions: pptx, pptm, potx, potm, pot. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on PowerPoint files.
EXCEL Microsoft Excel files larger than 30 MB will be scanned as binary files. Included file extensions: xlsx, xlsm, xltx, xltm. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on Excel files.

Finding

Represents a piece of potentially sensitive content.

Fields
name

string

Resource name in format projects/{project}/locations/{location}/findings/{finding} Populated only when viewing persisted findings.

quote

string

The content that was found. Even if the content is not textual, it may be converted to a textual representation here. Provided if include_quote is true and the finding is less than or equal to 4096 bytes long. If the finding exceeds 4096 bytes in length, the quote may be omitted.

info_type

InfoType

The type of content that might have been found. Provided if excluded_types is false.

likelihood

Likelihood

Confidence of how likely it is that the info_type is correct.

location

Location

Where the content was found.

create_time

Timestamp

Timestamp when finding was detected.

quote_info

QuoteInfo

Contains data parsed from quotes. Only populated if include_quote was set to true and a supported infoType was requested. Currently supported infoTypes: DATE, DATE_OF_BIRTH and TIME.

resource_name

string

The job that stored the finding.

trigger_name

string

Job trigger name, if applicable, for this finding.

labels

map<string, string>

The labels associated with this Finding.

Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.

Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?.

No more than 10 labels can be associated with a given finding.

Examples:

  • "environment" : "production"
  • "pipeline" : "etl"
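The key and value constraints above can be pre-checked client-side before attaching labels. This is a convenience sketch using the regexes quoted from the field description; the service performs the authoritative validation.

```python
import re

# Regexes quoted from the label constraints above.
KEY_RE = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")
VALUE_RE = re.compile(r"([a-z]([-a-z0-9]*[a-z0-9])?)?")

def valid_label(key, value):
    """Client-side pre-check of Finding label constraints: key 1-63 chars
    matching KEY_RE, value 0-63 chars matching VALUE_RE."""
    return (1 <= len(key) <= 63 and KEY_RE.fullmatch(key) is not None
            and len(value) <= 63 and VALUE_RE.fullmatch(value) is not None)
```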
job_create_time

Timestamp

Time the job started that produced this finding.

job_name

string

The job that stored the finding.

finding_id

string

The unique finding id.

FinishDlpJobRequest

The request message for finishing a DLP hybrid job.

Fields
name

string

Required. The name of the DlpJob resource to be finished.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobs.finish

FixedSizeBucketingConfig

Buckets values based on fixed size ranges. The Bucketing transformation can provide all of this functionality, but requires more configuration. This message is provided as a convenience to the user for simple bucketing strategies.

The transformed value will be a hyphenated string of {lower_bound}-{upper_bound}. For example, if lower_bound = 10 and upper_bound = 20, all values that are within this bucket will be replaced with "10-20".

This can be used on data of type: double, long.

If the bound Value type differs from the type of data being transformed, we will first attempt converting the type of the data to be transformed to match the type of the bound before comparing.

See https://cloud.google.com/sensitive-data-protection/docs/concepts-bucketing to learn more.

Fields
lower_bound

Value

Required. Lower bound value of buckets. All values less than lower_bound are grouped together into a single bucket; for example if lower_bound = 10, then all values less than 10 are replaced with the value "-10".

upper_bound

Value

Required. Upper bound value of buckets. All values greater than upper_bound are grouped together into a single bucket; for example if upper_bound = 89, then all values greater than 89 are replaced with the value "89+".

bucket_size

double

Required. Size of each bucket (except for minimum and maximum buckets). So if lower_bound = 10, upper_bound = 89, and bucket_size = 10, then the following buckets would be used: -10, 10-20, 20-30, 30-40, 40-50, 50-60, 60-70, 70-80, 80-89, 89+. Precision up to two decimal places is supported.
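The replacement format described above can be reproduced in a few lines. This is a sketch of the documented output format, assuming half-open buckets (a boundary value falls into the higher bucket); the service's exact boundary handling is not specified here.

```python
def fixed_size_bucket(value, lower_bound, upper_bound, bucket_size):
    """Sketch of the FixedSizeBucketingConfig replacement format: values
    below lower_bound become "-{lower}", values above upper_bound become
    "{upper}+", and everything else becomes "{lo}-{hi}", with the last
    full bucket clamped at upper_bound."""
    if value < lower_bound:
        return f"-{lower_bound:g}"
    if value > upper_bound:
        return f"{upper_bound:g}+"
    lo = lower_bound + bucket_size * int((value - lower_bound) // bucket_size)
    hi = min(lo + bucket_size, upper_bound)
    return f"{lo:g}-{hi:g}"
```

With lower_bound = 10, upper_bound = 89, and bucket_size = 10, this yields the bucket labels listed above.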

GetColumnDataProfileRequest

Request to get a column data profile.

Fields
name

string

Required. Resource name, for example organizations/12345/locations/us/columnDataProfiles/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.columnDataProfiles.get

GetConnectionRequest

Request message for GetConnection.

Fields
name

string

Required. Resource name in the format: projects/{project}/locations/{location}/connections/{connection}.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.connections.get

GetDeidentifyTemplateRequest

Request message for GetDeidentifyTemplate.

Fields
name

string

Required. Resource name of the organization and deidentify template to be read, for example organizations/433245324/deidentifyTemplates/432452342 or projects/project-id/deidentifyTemplates/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.deidentifyTemplates.get

GetDiscoveryConfigRequest

Request message for GetDiscoveryConfig.

Fields
name

string

Required. Resource name of the project and the configuration, for example projects/dlp-test-project/discoveryConfigs/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobTriggers.get

GetDlpJobRequest

The request message for GetDlpJob.

Fields
name

string

Required. The name of the DlpJob resource.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobs.get

GetFileStoreDataProfileRequest

Request to get a file store data profile.

Fields
name

string

Required. Resource name, for example organizations/12345/locations/us/fileStoreDataProfiles/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.fileStoreProfiles.get

GetInspectTemplateRequest

Request message for GetInspectTemplate.

Fields
name

string

Required. Resource name of the organization and inspectTemplate to be read, for example organizations/433245324/inspectTemplates/432452342 or projects/project-id/inspectTemplates/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.inspectTemplates.get

GetJobTriggerRequest

Request message for GetJobTrigger.

Fields
name

string

Required. Resource name of the project and the triggeredJob, for example projects/dlp-test-project/jobTriggers/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobTriggers.get

GetProjectDataProfileRequest

Request to get a project data profile.

Fields
name

string

Required. Resource name, for example organizations/12345/locations/us/projectDataProfiles/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.projectDataProfiles.get

GetStoredInfoTypeRequest

Request message for GetStoredInfoType.

Fields
name

string

Required. Resource name of the organization and storedInfoType to be read, for example organizations/433245324/storedInfoTypes/432452342 or projects/project-id/storedInfoTypes/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.storedInfoTypes.get

GetTableDataProfileRequest

Request to get a table data profile.

Fields
name

string

Required. Resource name, for example organizations/12345/locations/us/tableDataProfiles/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.tableDataProfiles.get

HybridContentItem

An individual hybrid item to inspect. Will be stored temporarily during processing.

Fields
item

ContentItem

The item to inspect.

finding_details

HybridFindingDetails

Supplementary information that will be added to each finding.

HybridFindingDetails

Populate to associate additional data with each finding.

Fields
container_details

Container

Details about the container where the content being inspected is from.

file_offset

int64

Offset in bytes of the line, from the beginning of the file, where the finding is located. Populate if the item being scanned is only part of a bigger item, such as a shard of a file, and you want to track the absolute position of the finding.

row_offset

int64

Offset of the row for tables. Populate if the row(s) being scanned are part of a bigger dataset and you want to keep track of their absolute position.

table_options

TableOptions

If the container is a table, additional information to make findings meaningful such as the columns that are primary keys. If not known ahead of time, can also be set within each inspect hybrid call and the two will be merged. Note that identifying_fields will only be stored to BigQuery, and only if the BigQuery action has been included.

labels

map<string, string>

Labels to represent user-provided metadata about the data being inspected. If configured by the job, some key values may be required. These labels are associated with the Findings produced by hybrid inspection.

Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.

Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?.

No more than 10 labels can be associated with a given finding.

Examples:

  • "environment" : "production"
  • "pipeline" : "etl"
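The key and value constraints above can be checked client-side before sending a request. The following is a hypothetical helper that mirrors the documented regular expressions and limits; the service performs its own authoritative validation:

```python
import re

# Regular expressions taken from the field description above.
KEY_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")
VALUE_RE = re.compile(r"^([a-z]([-a-z0-9]*[a-z0-9])?)?$")

def validate_finding_labels(labels):
    """Return True if a labels map satisfies the documented constraints:
    at most 10 entries, keys 1-63 chars, values 0-63 chars, both matching
    their respective patterns."""
    if len(labels) > 10:
        return False
    for key, value in labels.items():
        if not (1 <= len(key) <= 63 and KEY_RE.match(key)):
            return False
        if not (len(value) <= 63 and VALUE_RE.match(value)):
            return False
    return True

print(validate_finding_labels({"environment": "production", "pipeline": "etl"}))  # True
print(validate_finding_labels({"Environment": "production"}))  # False: uppercase key
```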

HybridInspectDlpJobRequest

Request to search for potentially sensitive info in a custom location.

Fields
name

string

Required. Resource name of the job to execute a hybrid inspect on, for example projects/dlp-test-project/dlpJob/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobs.hybridInspect
hybrid_item

HybridContentItem

The item to inspect.
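Putting the fields above together, a hybridInspect request body can be sketched as a plain dict using the documented field names. The resource name and content values are illustrative:

```python
# Sketch of a HybridInspectDlpJobRequest as a JSON-style dict, per the
# fields documented above (name, hybrid_item -> item / finding_details).
hybrid_inspect_request = {
    "name": "projects/dlp-test-project/dlpJob/53234423",
    "hybrid_item": {
        "item": {"value": "My email is test@example.org"},
        "finding_details": {
            # Offset of this shard within the larger source item.
            "file_offset": 0,
            # Labels attached to every finding from this request.
            "labels": {"environment": "production", "pipeline": "etl"},
        },
    },
}

print(hybrid_inspect_request["name"])
```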

HybridInspectJobTriggerRequest

Request to search for potentially sensitive info in a custom location.

Fields
name

string

Required. Resource name of the trigger to execute a hybrid inspect on, for example projects/dlp-test-project/jobTriggers/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobTriggers.hybridInspect
hybrid_item

HybridContentItem

The item to inspect.

HybridInspectResponse

This type has no fields.

Quota-exceeded errors are returned once the quota has been reached.

HybridInspectStatistics

Statistics related to processing hybrid inspect requests.

Fields
processed_count

int64

The number of hybrid inspection requests processed within this job.

aborted_count

int64

The number of hybrid inspection requests aborted because the job ran out of quota or was ended before they could be processed.

pending_count

int64

The number of hybrid requests currently being processed. Only populated when called via method getDlpJob. A burst of traffic may cause hybrid inspect requests to be enqueued. Processing will take place as quickly as possible, but resource limitations may impact how long a request is enqueued for.

HybridOptions

Configuration to control jobs where the content being inspected is outside of Google Cloud Platform.

Fields
description

string

A short description of where the data is coming from. Will be stored once in the job. 256 max length.

required_finding_label_keys[]

string

These are labels that each inspection request must include within their 'finding_labels' map. A request may contain other labels, but it will be rejected if any of these required keys is missing.

Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.

No more than 10 keys can be required.

labels

map<string, string>

To organize findings, these labels will be added to each finding.

Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.

Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?.

No more than 10 labels can be associated with a given finding.

Examples:

  • "environment" : "production"
  • "pipeline" : "etl"
table_options

TableOptions

If the container is a table, additional information to make findings meaningful such as the columns that are primary keys.

ImageLocation

Location of the finding within an image.

Fields
bounding_boxes[]

BoundingBox

Bounding boxes locating the pixels within the image containing the finding.

ImageTransformations

A type of transformation that is applied over images.

Fields
transforms[]

ImageTransformation

List of transforms to make.

ImageTransformation

Configuration for determining how redaction of images should occur.

Fields
redaction_color

Color

The color to use when redacting content from an image. If not specified, the default is black.

Union field target. Part of the image to transform. target can be only one of the following:
selected_info_types

SelectedInfoTypes

Apply transformation to the selected info_types.

all_info_types

AllInfoTypes

Apply transformation to all findings not specified in other ImageTransformation's selected_info_types. Only one instance is allowed within the ImageTransformations message.

all_text

AllText

Apply transformation to all text that doesn't match an infoType. Only one instance is allowed within the ImageTransformations message.
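An ImageTransformations message combining these target options can be sketched as a plain dict. The color values and infoType name below are illustrative; the Color message uses red/green/blue components as documented elsewhere in this reference:

```python
# Sketch of ImageTransformations: redact selected infoTypes in one color
# and all remaining text with the default (black) redaction.
image_transformations = {
    "transforms": [
        {
            # Target: only the listed infoTypes.
            "selected_info_types": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
            "redaction_color": {"red": 1.0, "green": 0.0, "blue": 0.0},
        },
        {
            # Target: all text that doesn't match an infoType.
            # Only one all_text instance is allowed per message.
            "all_text": {},
        },
    ]
}

print(len(image_transformations["transforms"]))  # 2
```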

AllInfoTypes

This type has no fields.

Apply transformation to all findings.

AllText

This type has no fields.

Apply to all text.

SelectedInfoTypes

Apply transformation to the selected info_types.

Fields
info_types[]

InfoType

Required. InfoTypes to apply the transformation to. Provided InfoTypes must be unique within the ImageTransformations message.

InfoType

Type of information detected by the API.

Fields
name

string

Name of the information type. Either a name of your choosing when creating a CustomInfoType, or one of the names listed at https://cloud.google.com/sensitive-data-protection/docs/infotypes-reference when specifying a built-in type. When sending Cloud DLP results to Data Catalog, infoType names should conform to the pattern [A-Za-z0-9$_-]{1,64}.

version

string

Optional version name for this InfoType.

sensitivity_score

SensitivityScore

Optional custom sensitivity for this InfoType. This only applies to data profiling.

InfoTypeCategory

Classification of infoTypes to organize them according to geographic location, industry, and data type.

Fields
Union field category. Categories of infotypes. category can be only one of the following:
location_category

LocationCategory

The region or country that issued the ID or document represented by the infoType.

industry_category

IndustryCategory

The group of relevant businesses where this infoType is commonly used.

type_category

TypeCategory

The class of identifiers where this infoType belongs.

IndustryCategory

Enum of the current industries in the category. We might add more industries in the future.

Enums
INDUSTRY_UNSPECIFIED Unused industry
FINANCE The infoType is typically used in the finance industry.
HEALTH The infoType is typically used in the health industry.
TELECOMMUNICATIONS The infoType is typically used in the telecommunications industry.

LocationCategory

Enum of the current locations. We might add more locations in the future.

Enums
LOCATION_UNSPECIFIED Unused location
GLOBAL The infoType is not issued by or tied to a specific region, but is used almost everywhere.
ARGENTINA The infoType is typically used in Argentina.
ARMENIA The infoType is typically used in Armenia.
AUSTRALIA The infoType is typically used in Australia.
AZERBAIJAN The infoType is typically used in Azerbaijan.
BELARUS The infoType is typically used in Belarus.
BELGIUM The infoType is typically used in Belgium.
BRAZIL The infoType is typically used in Brazil.
CANADA The infoType is typically used in Canada.
CHILE The infoType is typically used in Chile.
CHINA The infoType is typically used in China.
COLOMBIA The infoType is typically used in Colombia.
CROATIA The infoType is typically used in Croatia.
DENMARK The infoType is typically used in Denmark.
FRANCE The infoType is typically used in France.
FINLAND The infoType is typically used in Finland.
GERMANY The infoType is typically used in Germany.
HONG_KONG The infoType is typically used in Hong Kong.
INDIA The infoType is typically used in India.
INDONESIA The infoType is typically used in Indonesia.
IRELAND The infoType is typically used in Ireland.
ISRAEL The infoType is typically used in Israel.
ITALY The infoType is typically used in Italy.
JAPAN The infoType is typically used in Japan.
KAZAKHSTAN The infoType is typically used in Kazakhstan.
KOREA The infoType is typically used in Korea.
MEXICO The infoType is typically used in Mexico.
THE_NETHERLANDS The infoType is typically used in the Netherlands.
NEW_ZEALAND The infoType is typically used in New Zealand.
NORWAY The infoType is typically used in Norway.
PARAGUAY The infoType is typically used in Paraguay.
PERU The infoType is typically used in Peru.
POLAND The infoType is typically used in Poland.
PORTUGAL The infoType is typically used in Portugal.
RUSSIA The infoType is typically used in Russia.
SINGAPORE The infoType is typically used in Singapore.
SOUTH_AFRICA The infoType is typically used in South Africa.
SPAIN The infoType is typically used in Spain.
SWEDEN The infoType is typically used in Sweden.
SWITZERLAND The infoType is typically used in Switzerland.
TAIWAN The infoType is typically used in Taiwan.
THAILAND The infoType is typically used in Thailand.
TURKEY The infoType is typically used in Turkey.
UKRAINE The infoType is typically used in Ukraine.
UNITED_KINGDOM The infoType is typically used in the United Kingdom.
UNITED_STATES The infoType is typically used in the United States.
URUGUAY The infoType is typically used in Uruguay.
UZBEKISTAN The infoType is typically used in Uzbekistan.
VENEZUELA The infoType is typically used in Venezuela.
INTERNAL The infoType is typically used internally at Google.

TypeCategory

Enum of the current types in the category. We might add more types in the future.

Enums
TYPE_UNSPECIFIED Unused type
PII Personally identifiable information, for example, a name or phone number
SPII Personally identifiable information that is especially sensitive, for example, a passport number.
DEMOGRAPHIC Attributes that can partially identify someone, especially in combination with other attributes, like age, height, and gender.
CREDENTIAL Confidential or secret information, for example, a password.
GOVERNMENT_ID An identification document issued by a government.
DOCUMENT A document, for example, a resume or source code.
CONTEXTUAL_INFORMATION Information that is not sensitive on its own, but provides details about the circumstances surrounding an entity or an event.

InfoTypeDescription

InfoType description.

Fields
name

string

Internal name of the infoType.

display_name

string

Human readable form of the infoType name.

supported_by[]

InfoTypeSupportedBy

Which parts of the API support this InfoType.

description

string

Description of the infotype. Translated when language is provided in the request.

versions[]

VersionDescription

A list of available versions for the infotype.

categories[]

InfoTypeCategory

The category of the infoType.

sensitivity_score

SensitivityScore

The default sensitivity of the infoType.

InfoTypeStats

Statistics regarding a specific InfoType.

Fields
info_type

InfoType

The type of finding this stat is for.

count

int64

Number of findings for this infoType.

InfoTypeSummary

The infoType details for this column.

Fields
info_type

InfoType

The infoType.

estimated_prevalence
(deprecated)

int32

Not populated for predicted infotypes.

InfoTypeSupportedBy

Parts of the APIs which use certain infoTypes.

Enums
ENUM_TYPE_UNSPECIFIED Unused.
INSPECT Supported by the inspect operations.
RISK_ANALYSIS Supported by the risk analysis operations.

InfoTypeTransformations

A type of transformation that will scan unstructured text and apply various PrimitiveTransformations to each finding, where the transformation is applied to only values that were identified as a specific info_type.

Fields
transformations[]

InfoTypeTransformation

Required. Transformation for each infoType. Cannot specify more than one for a given infoType.

InfoTypeTransformation

A transformation to apply to text that is identified as a specific info_type.

Fields
info_types[]

InfoType

InfoTypes to apply the transformation to. An empty list will cause this transformation to apply to all findings that correspond to infoTypes that were requested in InspectConfig.

primitive_transformation

PrimitiveTransformation

Required. Primitive transformation to apply to the infoType.
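An InfoTypeTransformations message using these fields can be sketched as a plain dict. The primitive transformations shown (replace_with_info_type_config, redact_config) are PrimitiveTransformation options documented elsewhere in this reference; the infoType name is illustrative:

```python
# Sketch of InfoTypeTransformations per the fields above: one targeted
# transformation plus a catch-all for every other requested infoType.
info_type_transformations = {
    "transformations": [
        {
            "info_types": [{"name": "EMAIL_ADDRESS"}],
            "primitive_transformation": {"replace_with_info_type_config": {}},
        },
        {
            # Empty info_types: applies to all findings for infoTypes that
            # were requested in InspectConfig but not listed above.
            "info_types": [],
            "primitive_transformation": {"redact_config": {}},
        },
    ]
}

print(len(info_type_transformations["transformations"]))  # 2
```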

InspectConfig

Configuration description of the scanning process. When used with redactContent only info_types and min_likelihood are currently used.

Fields
info_types[]

InfoType

Restricts what info_types to look for. The values must correspond to InfoType values returned by ListInfoTypes or listed at https://cloud.google.com/sensitive-data-protection/docs/infotypes-reference.

When no InfoTypes or CustomInfoTypes are specified in a request, the system may automatically choose a default list of detectors to run, which may change over time.

If you need precise control and predictability as to what detectors are run, you should specify specific InfoTypes listed in the reference; otherwise a default list will be used, which may change over time.

min_likelihood

Likelihood

Only returns findings equal to or above this threshold. The default is POSSIBLE.

In general, the highest likelihood setting yields the fewest findings in results and the lowest chance of a false positive. For more information, see Match likelihood.

min_likelihood_per_info_type[]

InfoTypeLikelihood

Minimum likelihood per infotype. For each infotype, a user can specify a minimum likelihood. The system only returns a finding if its likelihood is above this threshold. If this field is not set, the system uses the InspectConfig min_likelihood.

limits

FindingLimits

Configuration to control the number of findings returned. This is not used for data profiling.

When redacting sensitive data from images, finding limits don't apply. They can cause unexpected or inconsistent results, where only some data is redacted. Don't include finding limits in RedactImage requests. Otherwise, Cloud DLP returns an error.

When set within an InspectJobConfig, the specified maximum values aren't hard limits. If an inspection job reaches these limits, the job ends gradually, not abruptly. Therefore, the actual number of findings that Cloud DLP returns can be multiple times higher than these maximum values.

include_quote

bool

When true, a contextual quote from the data that triggered a finding is included in the response; see Finding.quote. This is not used for data profiling.

exclude_info_types

bool

When true, excludes type information of the findings. This is not used for data profiling.

custom_info_types[]

CustomInfoType

CustomInfoTypes provided by the user. See https://cloud.google.com/sensitive-data-protection/docs/creating-custom-infotypes to learn more.

content_options[]

ContentOption

Deprecated and unused.

rule_set[]

InspectionRuleSet

Set of rules to apply to the findings for this InspectConfig. Exclusion rules contained in the set are executed last; other rules are executed in the order they are specified for each info type.
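The fields above can be combined into a request-side configuration. The following sketch shows an InspectConfig as a plain dict using the documented field names; the infoType names and limit values are illustrative:

```python
# Sketch of an InspectConfig message per the fields documented above.
inspect_config = {
    # Restrict detection to specific built-in detectors.
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
    # Global threshold; findings below it are dropped.
    "min_likelihood": "POSSIBLE",
    # Per-infoType override: lower the bar for PERSON_NAME only.
    "min_likelihood_per_info_type": [
        {"info_type": {"name": "PERSON_NAME"}, "min_likelihood": "UNLIKELY"}
    ],
    # Soft cap on findings; not honored for RedactImage requests.
    "limits": {"max_findings_per_request": 100},
    # Include a contextual quote with each finding.
    "include_quote": True,
}

print(inspect_config["min_likelihood"])  # POSSIBLE
```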

FindingLimits

Configuration to control the number of findings returned for inspection. This is not used for de-identification or data profiling.

When redacting sensitive data from images, finding limits don't apply. They can cause unexpected or inconsistent results, where only some data is redacted. Don't include finding limits in RedactImage requests. Otherwise, Cloud DLP returns an error.

Fields
max_findings_per_item

int32

Max number of findings that are returned for each item scanned.

When set within an InspectContentRequest, this field is ignored.

This value isn't a hard limit. If the number of findings for an item reaches this limit, the inspection of that item ends gradually, not abruptly. Therefore, the actual number of findings that Cloud DLP returns for the item can be multiple times higher than this value.

max_findings_per_request

int32

Max number of findings that are returned per request or job.

If you set this field in an InspectContentRequest, the resulting maximum value is the value that you set or 3,000, whichever is lower.

This value isn't a hard limit. If an inspection reaches this limit, the inspection ends gradually, not abruptly. Therefore, the actual number of findings that Cloud DLP returns can be multiple times higher than this value.

max_findings_per_info_type[]

InfoTypeLimit

Configuration of findings limit given for specified infoTypes.

InfoTypeLimit

Max findings configuration per infoType, per content item or long running DlpJob.

Fields
info_type

InfoType

Type of information the findings limit applies to. Only one limit per info_type should be provided. If InfoTypeLimit does not have an info_type, the DLP API applies the limit against all info_types that are found but not specified in another InfoTypeLimit.

max_findings

int32

Max findings limit for the given infoType.

InfoTypeLikelihood

Configuration for setting a minimum likelihood per infotype. Used to customize the minimum likelihood level for specific infotypes in the request. For example, use this if you want to lower the precision for PERSON_NAME without lowering the precision for the other infotypes in the request.

Fields
info_type

InfoType

Type of information the likelihood threshold applies to. Only one likelihood per info_type should be provided. If InfoTypeLikelihood does not have an info_type, the configuration fails.

min_likelihood

Likelihood

Only returns findings equal to or above this threshold. This field is required or else the configuration fails.

InspectContentRequest

Request to search for potentially sensitive info in a ContentItem.

Fields
parent

string

Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • serviceusage.services.use
inspect_config

InspectConfig

Configuration for the inspector. What is specified here will override the template referenced by the inspect_template_name argument.

item

ContentItem

The item to inspect.

inspect_template_name

string

Template to use. Any configuration directly specified in inspect_config will override those set in the template. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged.

location_id

string

Deprecated. This field has no effect.
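The parent formats described above can be assembled with a small helper, and the full request sketched as a plain dict. The helper is hypothetical and the content values are illustrative:

```python
def inspect_parent(project_id, location_id=None):
    """Build the parent resource name per the formats above; omitting the
    location defaults processing to the global region."""
    if location_id:
        return f"projects/{project_id}/locations/{location_id}"
    return f"projects/{project_id}"

# Sketch of an InspectContentRequest body using the fields documented above.
inspect_content_request = {
    "parent": inspect_parent("example-project", "europe-west3"),
    "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
    "item": {"value": "Contact me at test@example.org"},
}

print(inspect_content_request["parent"])  # projects/example-project/locations/europe-west3
```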

InspectContentResponse

Results of inspecting an item.

Fields
result

InspectResult

The findings.

InspectDataSourceDetails

The results of an inspect DataSource job.

Fields
requested_options

RequestedOptions

The configuration used for this job.

result

Result

A summary of the outcome of this inspection job.

RequestedOptions

Snapshot of the inspection configuration.

Fields
snapshot_inspect_template

InspectTemplate

If run with an InspectTemplate, a snapshot of its state at the time of this run.

job_config

InspectJobConfig

Inspect config.

Result

All result fields mentioned below are updated while the job is processing.

Fields
processed_bytes

int64

Total size in bytes that were processed.

total_estimated_bytes

int64

Estimate of the number of bytes to process.

info_type_stats[]

InfoTypeStats

Statistics of how many instances of each info type were found during inspect job.

num_rows_processed

int64

Number of rows scanned after sampling and time filtering (applicable for row based stores such as BigQuery).

hybrid_stats

HybridInspectStatistics

Statistics related to the processing of hybrid inspect.

InspectJobConfig

Controls what and how to inspect for findings.

Fields
storage_config

StorageConfig

The data to scan.

inspect_config

InspectConfig

How and what to scan for.

inspect_template_name

string

If provided, will be used as the default for all values in InspectConfig. inspect_config will be merged into the values persisted as part of the template.

actions[]

Action

Actions to execute at the completion of the job.

InspectResult

All the findings for a single scanned item.

Fields
findings[]

Finding

List of findings for an item.

findings_truncated

bool

If true, then this item might have more findings than were returned, and the findings returned are an arbitrary subset of all findings. The findings list might be truncated because the input items were too large, or because the server reached the maximum amount of resources allowed for a single API call. For best results, divide the input into smaller batches.

InspectTemplate

The inspectTemplate contains a configuration (set of types of sensitive data to be detected) to be used anywhere you otherwise would normally specify InspectConfig. See https://cloud.google.com/sensitive-data-protection/docs/concepts-templates to learn more.

Fields
name

string

Output only. The template name.

The template will have one of the following formats: projects/PROJECT_ID/inspectTemplates/TEMPLATE_ID or organizations/ORGANIZATION_ID/inspectTemplates/TEMPLATE_ID.

display_name

string

Display name (max 256 chars).

description

string

Short description (max 256 chars).

create_time

Timestamp

Output only. The creation timestamp of an inspectTemplate.

update_time

Timestamp

Output only. The last update timestamp of an inspectTemplate.

inspect_config

InspectConfig

The core content of the template. Configuration of the scanning process.

InspectionRule

A single inspection rule to be applied to infoTypes, specified in InspectionRuleSet.

Fields
Union field type. Inspection rule types. type can be only one of the following:
hotword_rule

HotwordRule

Hotword-based detection rule.

exclusion_rule

ExclusionRule

Exclusion rule.

InspectionRuleSet

Rule set for modifying a set of infoTypes to alter behavior under certain circumstances, depending on the specific details of the rules within the set.

Fields
info_types[]

InfoType

List of infoTypes this rule set is applied to.

rules[]

InspectionRule

Set of rules to be applied to infoTypes. The rules are applied in order.

JobTrigger

Contains a configuration to make API calls on a repeating basis. See https://cloud.google.com/sensitive-data-protection/docs/concepts-job-triggers to learn more.

Fields
name

string

Unique resource name for the triggeredJob, assigned by the service when the triggeredJob is created, for example projects/dlp-test-project/jobTriggers/53234423.

display_name

string

Display name (max 100 chars)

description

string

User provided description (max 256 chars)

triggers[]

Trigger

A list of triggers which will be OR'ed together. Only one in the list needs to trigger for a job to be started. The list may contain only a single Schedule trigger and must have at least one object.

errors[]

Error

Output only. A stream of errors encountered when the trigger was activated. Repeated errors may result in the JobTrigger automatically being paused. Will return the last 100 errors. Whenever the JobTrigger is modified this list will be cleared.

create_time

Timestamp

Output only. The creation timestamp of a triggeredJob.

update_time

Timestamp

Output only. The last update timestamp of a triggeredJob.

last_run_time

Timestamp

Output only. The timestamp of the last time this trigger executed.

status

Status

Required. A status for this trigger.

Union field job. The configuration details for the specific type of job to run. job can be only one of the following:
inspect_job

InspectJobConfig

For inspect jobs, a snapshot of the configuration.

Status

Whether the trigger is currently active. If PAUSED or CANCELLED, no jobs will be created with this configuration. The service may automatically pause triggers experiencing frequent errors. To restart a job, set the status to HEALTHY after correcting user errors.

Enums
STATUS_UNSPECIFIED Unused.
HEALTHY Trigger is healthy.
PAUSED Trigger is temporarily paused.
CANCELLED Trigger is cancelled and can not be resumed.

Trigger

What event needs to occur for a new job to be started.

Fields
Union field trigger. What event needs to occur for a new job to be started. trigger can be only one of the following:
schedule

Schedule

Create a job on a repeating basis based on the elapse of time.

manual

Manual

For use with hybrid jobs. Jobs must be manually created and finished.
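A JobTrigger combining the fields above can be sketched as a plain dict. The schedule field uses recurrence_period_duration, documented under Schedule elsewhere in this reference; the names and the 24-hour period are illustrative:

```python
# Sketch of a JobTrigger message per the fields documented above: a single
# Schedule trigger (only one is allowed in the list) that runs an inspect
# job once per day.
job_trigger = {
    "display_name": "Daily storage scan",
    "description": "Scans for card numbers every 24 hours",
    "triggers": [{"schedule": {"recurrence_period_duration": "86400s"}}],
    "status": "HEALTHY",
    "inspect_job": {
        "inspect_config": {"info_types": [{"name": "CREDIT_CARD_NUMBER"}]},
    },
}

print(job_trigger["status"])  # HEALTHY
```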

Key

A unique identifier for a Datastore entity. If a key's partition ID or any of its path kinds or names are reserved/read-only, the key is reserved/read-only. A reserved/read-only key is forbidden in certain documented contexts.

Fields
partition_id

PartitionId

Entities are partitioned into subsets, currently identified by a project ID and namespace ID. Queries are scoped to a single partition.

path[]

PathElement

The entity path. An entity path consists of one or more elements composed of a kind and a string or numerical identifier, which identify entities. The first element identifies a root entity, the second element identifies a child of the root entity, the third element identifies a child of the second entity, and so forth. The entities identified by all prefixes of the path are called the element's ancestors.

A path can never be empty, and a path can have at most 100 elements.

PathElement

A (kind, ID/name) pair used to construct a key path.

If either name or ID is set, the element is complete. If neither is set, the element is incomplete.

Fields
kind

string

The kind of the entity. A kind matching regex __.*__ is reserved/read-only. A kind must not contain more than 1500 bytes when UTF-8 encoded. Cannot be "".

Union field id_type. The type of ID. id_type can be only one of the following:
id

int64

The auto-allocated ID of the entity. Never equal to zero. Values less than zero are discouraged and may not be supported in the future.

name

string

The name of the entity. A name matching regex __.*__ is reserved/read-only. A name must not be more than 1500 bytes when UTF-8 encoded. Cannot be "".
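A Datastore Key built from these messages can be sketched as a plain dict: a partition plus a path of (kind, id-or-name) elements ordered root first. The kinds and identifiers are illustrative:

```python
# Sketch of a Datastore Key per the Key and PathElement messages above.
key = {
    "partition_id": {"project_id": "example-project", "namespace_id": ""},
    "path": [
        {"kind": "Company", "name": "acme"},   # root entity (string name)
        {"kind": "Employee", "id": 1234},      # child of the root (numeric ID)
    ],
}

# A path can never be empty and can have at most 100 elements.
print(len(key["path"]))  # 2
```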

KindExpression

A representation of a Datastore kind.

Fields
name

string

The name of the kind.

KmsWrappedCryptoKey

Include to use an existing data crypto key wrapped by KMS. The wrapped key must be a 128-, 192-, or 256-bit key. Authorization requires the following IAM permission when sending a request to perform a crypto transformation using a KMS-wrapped crypto key:

  • dlp.kms.encrypt

For more information, see Creating a wrapped key.

Note: When you use Cloud KMS for cryptographic operations, charges apply.

Fields
wrapped_key

bytes

Required. The wrapped data crypto key.

crypto_key_name

string

Required. The resource name of the KMS CryptoKey to use for unwrapping.

LargeCustomDictionaryConfig

Configuration for a custom dictionary created from a data source of any size up to the maximum size defined in the limits page. The artifacts of dictionary creation are stored in the specified Cloud Storage location. Consider using CustomInfoType.Dictionary for smaller dictionaries that satisfy the size requirements.

Fields
output_path

CloudStoragePath

Location to store dictionary artifacts in Cloud Storage. These files will only be accessible by project owners and the DLP API. If any of these artifacts are modified, the dictionary is considered invalid and can no longer be used.

Union field source. Source of the dictionary. source can be only one of the following:
cloud_storage_file_set

CloudStorageFileSet

Set of files containing newline-delimited lists of dictionary phrases.

big_query_field

BigQueryField

Field in a BigQuery table where each cell represents a dictionary phrase.

LargeCustomDictionaryStats

Summary statistics of a custom dictionary.

Fields
approx_num_phrases

int64

Approximate number of distinct phrases in the dictionary.

Likelihood

Coarse-grained confidence level of how well a particular finding satisfies the criteria to match a particular infoType.

Likelihood is calculated based on the number of signals a finding has that implies that the finding matches the infoType. For example, a string that has an '@' and a '.com' is more likely to be a match for an email address than a string that only has an '@'.

In general, the highest likelihood level has the strongest signals that indicate a match. That is, a finding with a high likelihood has a low chance of being a false positive.

For more information about each likelihood level and how likelihood works, see Match likelihood.

Enums
LIKELIHOOD_UNSPECIFIED Default value; same as POSSIBLE.
VERY_UNLIKELY Highest chance of a false positive.
UNLIKELY High chance of a false positive.
POSSIBLE Some matching signals. The default value.
LIKELY Low chance of a false positive.
VERY_LIKELY Confidence level is high. Lowest chance of a false positive.
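The ordering of these levels determines which findings pass a min_likelihood threshold. A hypothetical helper mirroring the documented semantics (including LIKELIHOOD_UNSPECIFIED defaulting to POSSIBLE):

```python
# Likelihood levels in increasing order of confidence, per the enum above.
LIKELIHOOD_ORDER = [
    "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
]

def meets_threshold(likelihood, min_likelihood="POSSIBLE"):
    """True if a finding's likelihood is at or above the threshold.
    LIKELIHOOD_UNSPECIFIED is treated the same as POSSIBLE, as documented."""
    def rank(level):
        return LIKELIHOOD_ORDER.index(
            "POSSIBLE" if level == "LIKELIHOOD_UNSPECIFIED" else level)
    return rank(likelihood) >= rank(min_likelihood)

print(meets_threshold("LIKELY"))                     # True
print(meets_threshold("UNLIKELY"))                   # False
print(meets_threshold("UNLIKELY", "VERY_UNLIKELY"))  # True
```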

ListColumnDataProfilesRequest

Request to list the profiles generated for a given organization or project.

Fields
parent

string

Required. Resource name of the organization or project, for example organizations/433245324/locations/europe or projects/project-id/locations/asia.

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.columnDataProfiles.list
page_token

string

Page token to continue retrieval.

page_size

int32

Size of the page. This value can be limited by the server. If zero, server returns a page of max size 100.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant. Only one order field at a time is allowed.

Examples:

  • project_id asc
  • table_id
  • sensitivity_level desc

Supported fields are:

  • project_id: The Google Cloud project ID.
  • dataset_id: The ID of a BigQuery dataset.
  • table_id: The ID of a BigQuery table.
  • sensitivity_level: How sensitive the data in a column is, at most.
  • data_risk_level: How much risk is associated with this data.
  • profile_last_generated: When the profile was last updated in epoch seconds.
filter

string

Allows filtering.

Supported syntax:

  • Filter expressions are made up of one or more restrictions.
  • Restrictions can be combined by AND or OR logical operators. A sequence of restrictions implicitly uses AND.
  • A restriction has the form of {field} {operator} {value}.
  • Supported fields/values:
    • table_data_profile_name - The name of the related table data profile.
    • project_id - The Google Cloud project ID. (REQUIRED)
    • dataset_id - The BigQuery dataset ID. (REQUIRED)
    • table_id - The BigQuery table ID. (REQUIRED)
    • field_id - The ID of the BigQuery field.
    • info_type - The infotype detected in the resource.
    • sensitivity_level - HIGH|MEDIUM|LOW
    • data_risk_level: How much risk is associated with this data.
    • status_code - an RPC status code as defined in https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto
  • The operator must be = for project_id, dataset_id, and table_id. Other filters also support !=.

Examples:

  • project_id = 12345 AND status_code = 1
  • project_id = 12345 AND sensitivity_level = HIGH
  • project_id = 12345 AND info_type = STREET_ADDRESS

The length of this field should be no more than 500 characters.
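The filter grammar above ({field} {operator} {value} restrictions joined by AND/OR, capped at 500 characters) can be assembled client-side. A hypothetical helper, not part of any client library, that follows the documented rules:

```python
def build_filter(restrictions, joiner="AND"):
    """Join {field} {operator} {value} restrictions into one filter string.

    `restrictions` is a list of (field, operator, value) tuples. This is
    only a client-side sketch of the documented grammar; the server does
    the authoritative validation.
    """
    parts = [f"{field} {op} {value}" for field, op, value in restrictions]
    expr = f" {joiner} ".join(parts)
    if len(expr) > 500:
        raise ValueError("filter must be no more than 500 characters")
    return expr

f = build_filter([("project_id", "=", "12345"),
                  ("sensitivity_level", "=", "HIGH")])
# "project_id = 12345 AND sensitivity_level = HIGH"
```

Note that project_id, dataset_id, and table_id only accept the = operator, so a helper like this would still let the server reject an invalid != on those fields.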

ListColumnDataProfilesResponse

List of profiles generated for a given organization or project.

Fields
column_data_profiles[]

ColumnDataProfile

List of data profiles.

next_page_token

string

The next page token.

ListConnectionsRequest

Request message for ListConnections.

Fields
parent

string

Required. Resource name of the organization or project, for example, organizations/433245324/locations/europe or projects/project-id/locations/asia.

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.connections.list
page_size

int32

Optional. Number of results per page, max 1000.

page_token

string

Optional. Page token from a previous page to return the next set of results. If set, all other request fields must match the original request.

filter

string

Optional. Supported field/value: state - MISSING|AVAILABLE|ERROR

ListConnectionsResponse

Response message for ListConnections.

Fields
connections[]

Connection

List of connections.

next_page_token

string

Token to retrieve the next page of results. An empty value means there are no more results.
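All of the List responses in this package page the same way: pass next_page_token back until the server returns an empty token. A generic sketch of that loop, where `list_page` is a stand-in for any of the List RPCs (the callable and its fake backend below are illustrative, not a real client):

```python
def iter_pages(list_page, parent):
    """Follow next_page_token until the server returns an empty token.

    `list_page(parent, page_token)` stands in for any List RPC; it must
    return a mapping with "items" and "next_page_token" keys.
    """
    token = ""
    while True:
        page = list_page(parent, token)
        yield from page["items"]
        token = page["next_page_token"]
        if not token:  # empty token means there are no more results
            break

# Fake two-page backend to show the loop terminates correctly.
pages = {"":   {"items": [1, 2], "next_page_token": "t1"},
         "t1": {"items": [3],    "next_page_token": ""}}
result = list(iter_pages(lambda parent, tok: pages[tok],
                         "projects/example-project/locations/global"))
```

Per the request docs, all other request fields must stay identical to the original request while a page_token is in flight.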

ListDeidentifyTemplatesRequest

Request message for ListDeidentifyTemplates.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}
  • Organizations scope, location specified: organizations/{org_id}/locations/{location_id}
  • Organizations scope, no location specified (defaults to global): organizations/{org_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.deidentifyTemplates.list
page_token

string

Page token to continue retrieval. Comes from the previous call to ListDeidentifyTemplates.

page_size

int32

Size of the page. This value can be limited by the server. If zero, the server returns a page of max size 100.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant.

Example: name asc, update_time, create_time desc

Supported fields are:

  • create_time: corresponds to the time the template was created.
  • update_time: corresponds to the time the template was last updated.
  • name: corresponds to the template's name.
  • display_name: corresponds to the template's display name.
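The order_by string is a comma-separated list of field names, each optionally followed by asc or desc. A hypothetical helper (not part of any client library) that builds one against the supported fields listed above:

```python
# Supported order fields for ListDeidentifyTemplates, per the docs.
SUPPORTED = {"create_time", "update_time", "name", "display_name"}

def build_order_by(fields):
    """fields: list of (name, descending?) pairs -> order_by string.

    Client-side sketch only; the server is the authority on which
    fields are actually sortable.
    """
    parts = []
    for name, desc in fields:
        if name not in SUPPORTED:
            raise ValueError(f"unsupported order field: {name}")
        parts.append(f"{name} desc" if desc else f"{name} asc")
    return ", ".join(parts)

order = build_order_by([("name", False), ("create_time", True)])
# "name asc, create_time desc"
```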
location_id

string

Deprecated. This field has no effect.

ListDeidentifyTemplatesResponse

Response message for ListDeidentifyTemplates.

Fields
deidentify_templates[]

DeidentifyTemplate

List of deidentify templates, up to page_size in ListDeidentifyTemplatesRequest.

next_page_token

string

If the next page is available then this value is the next page token to be used in the following ListDeidentifyTemplates request.

ListDiscoveryConfigsRequest

Request message for ListDiscoveryConfigs.

Fields
parent

string

Required. Parent resource name.

The format of this value is as follows: projects/{project_id}/locations/{location_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.jobTriggers.list
page_token

string

Page token to continue retrieval. Comes from the previous call to ListDiscoveryConfigs. order_by field must not change for subsequent calls.

page_size

int32

Size of the page. This value can be limited by a server.

order_by

string

Comma-separated list of config fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant.

Example: name asc, update_time, create_time desc

Supported fields are:

  • last_run_time: corresponds to the last time the DiscoveryConfig ran.
  • name: corresponds to the DiscoveryConfig's name.
  • status: corresponds to DiscoveryConfig's status.

ListDiscoveryConfigsResponse

Response message for ListDiscoveryConfigs.

Fields
discovery_configs[]

DiscoveryConfig

List of configs, up to page_size in ListDiscoveryConfigsRequest.

next_page_token

string

If the next page is available then this value is the next page token to be used in the following ListDiscoveryConfigs request.

ListDlpJobsRequest

The request message for listing DLP jobs.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.jobs.list
filter

string

Allows filtering.

Supported syntax:

  • Filter expressions are made up of one or more restrictions.
  • Restrictions can be combined by AND or OR logical operators. A sequence of restrictions implicitly uses AND.
  • A restriction has the form of {field} {operator} {value}.
  • Supported fields/values for inspect jobs:
    • state - PENDING|RUNNING|CANCELED|FINISHED|FAILED
    • inspected_storage - DATASTORE|CLOUD_STORAGE|BIGQUERY
    • trigger_name - The name of the trigger that created the job.
    • end_time - Corresponds to the time the job finished.
    • start_time - Corresponds to the time the job started.
  • Supported fields for risk analysis jobs:
    • state - RUNNING|CANCELED|FINISHED|FAILED
    • end_time - Corresponds to the time the job finished.
    • start_time - Corresponds to the time the job started.
  • The operator must be = or !=.

Examples:

  • inspected_storage = cloud_storage AND state = done
  • inspected_storage = cloud_storage OR inspected_storage = bigquery
  • inspected_storage = cloud_storage AND (state = done OR state = canceled)
  • end_time > "2017-12-12T00:00:00+00:00"

The length of this field should be no more than 500 characters.

page_size

int32

The standard list page size.

page_token

string

The standard list page token.

type

DlpJobType

The type of job. Defaults to DlpJobType.INSPECT.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant.

Example: name asc, end_time asc, create_time desc

Supported fields are:

  • create_time: corresponds to the time the job was created.
  • end_time: corresponds to the time the job ended.
  • name: corresponds to the job's name.
  • state: corresponds to state
location_id

string

Deprecated. This field has no effect.

ListDlpJobsResponse

The response message for listing DLP jobs.

Fields
jobs[]

DlpJob

A list of DlpJobs that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListFileStoreDataProfilesRequest

Request to list the file store profiles generated for a given organization or project.

Fields
parent

string

Required. Resource name of the organization or project, for example organizations/433245324/locations/europe or projects/project-id/locations/asia.

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.fileStoreProfiles.list
page_token

string

Optional. Page token to continue retrieval.

page_size

int32

Optional. Size of the page. This value can be limited by the server. If zero, server returns a page of max size 100.

order_by

string

Optional. Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant. Only one order field at a time is allowed.

Examples:

  • project_id asc
  • name
  • sensitivity_level desc

Supported fields are:

  • project_id: The Google Cloud project ID.
  • sensitivity_level: How sensitive the data in a table is, at most.
  • data_risk_level: How much risk is associated with this data.
  • profile_last_generated: When the profile was last updated in epoch seconds.
  • last_modified: The last time the resource was modified.
  • resource_visibility: Visibility restriction for this resource.
  • name: The name of the profile.
  • create_time: The time the file store was first created.
filter

string

Optional. Allows filtering.

Supported syntax:

  • Filter expressions are made up of one or more restrictions.
  • Restrictions can be combined by AND or OR logical operators. A sequence of restrictions implicitly uses AND.
  • A restriction has the form of {field} {operator} {value}.
  • Supported fields/values:
    • project_id - The Google Cloud project ID.
    • account_id - The AWS account ID.
    • file_store_path - The path like "gs://bucket".
    • data_source_type - The profile's data source type, like "google/storage/bucket".
    • data_storage_location - The location where the file store's data is stored, like "us-central1".
    • sensitivity_level - HIGH|MODERATE|LOW
    • data_risk_level - HIGH|MODERATE|LOW
    • resource_visibility: PUBLIC|RESTRICTED
    • status_code - an RPC status code as defined in https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto
  • The operator must be = or !=.

Examples:

  • project_id = 12345 AND status_code = 1
  • project_id = 12345 AND sensitivity_level = HIGH
  • project_id = 12345 AND resource_visibility = PUBLIC
  • file_store_path = "gs://mybucket"

The length of this field should be no more than 500 characters.

ListFileStoreDataProfilesResponse

List of file store data profiles generated for a given organization or project.

Fields
file_store_data_profiles[]

FileStoreDataProfile

List of data profiles.

next_page_token

string

The next page token.

ListInfoTypesRequest

Request for the list of infoTypes.

Fields
parent

string

The parent resource name.

The format of this value is as follows:

locations/{location_id}
language_code

string

BCP-47 language code for localized infoType friendly names. If omitted, or if localized strings are not available, en-US strings will be returned.

filter

string

Filter to only return infoTypes supported by certain parts of the API. Defaults to supported_by=INSPECT.

location_id

string

Deprecated. This field has no effect.

ListInfoTypesResponse

Response to the ListInfoTypes request.

Fields
info_types[]

InfoTypeDescription

Set of sensitive infoTypes.

ListInspectTemplatesRequest

Request message for ListInspectTemplates.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}
  • Organizations scope, location specified: organizations/{org_id}/locations/{location_id}
  • Organizations scope, no location specified (defaults to global): organizations/{org_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.inspectTemplates.list
page_token

string

Page token to continue retrieval. Comes from the previous call to ListInspectTemplates.

page_size

int32

Size of the page. This value can be limited by the server. If zero, the server returns a page of max size 100.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant.

Example: name asc, update_time, create_time desc

Supported fields are:

  • create_time: corresponds to the time the template was created.
  • update_time: corresponds to the time the template was last updated.
  • name: corresponds to the template's name.
  • display_name: corresponds to the template's display name.
location_id

string

Deprecated. This field has no effect.

ListInspectTemplatesResponse

Response message for ListInspectTemplates.

Fields
inspect_templates[]

InspectTemplate

List of inspectTemplates, up to page_size in ListInspectTemplatesRequest.

next_page_token

string

If the next page is available then this value is the next page token to be used in the following ListInspectTemplates request.

ListJobTriggersRequest

Request message for ListJobTriggers.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.jobTriggers.list
page_token

string

Page token to continue retrieval. Comes from the previous call to ListJobTriggers. order_by field must not change for subsequent calls.

page_size

int32

Size of the page. This value can be limited by a server.

order_by

string

Comma-separated list of triggeredJob fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant.

Example: name asc, update_time, create_time desc

Supported fields are:

  • create_time: corresponds to the time the JobTrigger was created.
  • update_time: corresponds to the time the JobTrigger was last updated.
  • last_run_time: corresponds to the last time the JobTrigger ran.
  • name: corresponds to the JobTrigger's name.
  • display_name: corresponds to the JobTrigger's display name.
  • status: corresponds to JobTrigger's status.
filter

string

Allows filtering.

Supported syntax:

  • Filter expressions are made up of one or more restrictions.
  • Restrictions can be combined by AND or OR logical operators. A sequence of restrictions implicitly uses AND.
  • A restriction has the form of {field} {operator} {value}.
  • Supported fields/values for inspect triggers:
    • status - HEALTHY|PAUSED|CANCELLED
    • inspected_storage - DATASTORE|CLOUD_STORAGE|BIGQUERY
    • last_run_time - RFC 3339 formatted timestamp, surrounded by quotation marks. Nanoseconds are ignored.
    • error_count - Number of errors that have occurred while running.
  • The operator must be = or != for status and inspected_storage.

Examples:

  • inspected_storage = cloud_storage AND status = HEALTHY
  • inspected_storage = cloud_storage OR inspected_storage = bigquery
  • inspected_storage = cloud_storage AND (status = PAUSED OR status = HEALTHY)
  • last_run_time > "2017-12-12T00:00:00+00:00"

The length of this field should be no more than 500 characters.
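The last_run_time restriction expects an RFC 3339 timestamp surrounded by quotation marks, with nanoseconds ignored. A small sketch (the helper name is hypothetical) that formats a timezone-aware datetime the way the example filters above do:

```python
from datetime import datetime, timezone

def last_run_after(ts: datetime) -> str:
    """Build a last_run_time restriction for a ListJobTriggers filter.

    Formats the timestamp as RFC 3339 in UTC and wraps it in quotation
    marks, matching the documented filter examples.
    """
    stamp = ts.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S+00:00")
    return f'last_run_time > "{stamp}"'

f = last_run_after(datetime(2017, 12, 12, tzinfo=timezone.utc))
# 'last_run_time > "2017-12-12T00:00:00+00:00"'
```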

type

DlpJobType

The type of jobs. Will use DlpJobType.INSPECT if not set.

location_id

string

Deprecated. This field has no effect.

ListJobTriggersResponse

Response message for ListJobTriggers.

Fields
job_triggers[]

JobTrigger

List of triggeredJobs, up to page_size in ListJobTriggersRequest.

next_page_token

string

If the next page is available then this value is the next page token to be used in the following ListJobTriggers request.

ListProjectDataProfilesRequest

Request to list the profiles generated for a given organization or project.

Fields
parent

string

Required. organizations/{org_id}/locations/{loc_id}

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.projectDataProfiles.list
page_token

string

Page token to continue retrieval.

page_size

int32

Size of the page. This value can be limited by the server. If zero, server returns a page of max size 100.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant. Only one order field at a time is allowed.

Examples:

  • project_id
  • sensitivity_level desc

Supported fields are:

  • project_id: Google Cloud project ID
  • sensitivity_level: How sensitive the data in a project is, at most.
  • data_risk_level: How much risk is associated with this data.
  • profile_last_generated: When the profile was last updated in epoch seconds.
filter

string

Allows filtering.

Supported syntax:

  • Filter expressions are made up of one or more restrictions.
  • Restrictions can be combined by AND or OR logical operators. A sequence of restrictions implicitly uses AND.
  • A restriction has the form of {field} {operator} {value}.
  • Supported fields/values:
  • The operator must be = or !=.

Examples:

  • project_id = 12345 AND status_code = 1
  • project_id = 12345 AND sensitivity_level = HIGH

The length of this field should be no more than 500 characters.

ListProjectDataProfilesResponse

List of profiles generated for a given organization or project.

Fields
project_data_profiles[]

ProjectDataProfile

List of data profiles.

next_page_token

string

The next page token.

ListStoredInfoTypesRequest

Request message for ListStoredInfoTypes.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.storedInfoTypes.list
page_token

string

Page token to continue retrieval. Comes from the previous call to ListStoredInfoTypes.

page_size

int32

Size of the page. This value can be limited by the server. If zero, the server returns a page of max size 100.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant.

Example: name asc, display_name, create_time desc

Supported fields are:

  • create_time: corresponds to the time the most recent version of the resource was created.
  • state: corresponds to the state of the resource.
  • name: corresponds to resource name.
  • display_name: corresponds to info type's display name.
location_id

string

Deprecated. This field has no effect.

ListStoredInfoTypesResponse

Response message for ListStoredInfoTypes.

Fields
stored_info_types[]

StoredInfoType

List of storedInfoTypes, up to page_size in ListStoredInfoTypesRequest.

next_page_token

string

If the next page is available then this value is the next page token to be used in the following ListStoredInfoTypes request.

ListTableDataProfilesRequest

Request to list the profiles generated for a given organization or project.

Fields
parent

string

Required. Resource name of the organization or project, for example organizations/433245324/locations/europe or projects/project-id/locations/asia.

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.tableDataProfiles.list
page_token

string

Page token to continue retrieval.

page_size

int32

Size of the page. This value can be limited by the server. If zero, server returns a page of max size 100.

order_by

string

Comma-separated list of fields to order by, followed by asc or desc postfix. This list is case insensitive. The default sorting order is ascending. Redundant space characters are insignificant. Only one order field at a time is allowed.

Examples:

  • project_id asc
  • table_id
  • sensitivity_level desc

Supported fields are:

  • project_id: The Google Cloud project ID.
  • dataset_id: The ID of a BigQuery dataset.
  • table_id: The ID of a BigQuery table.
  • sensitivity_level: How sensitive the data in a table is, at most.
  • data_risk_level: How much risk is associated with this data.
  • profile_last_generated: When the profile was last updated in epoch seconds.
  • last_modified: The last time the resource was modified.
  • resource_visibility: Visibility restriction for this resource.
  • row_count: Number of rows in this resource.
filter

string

Allows filtering.

Supported syntax:

  • Filter expressions are made up of one or more restrictions.
  • Restrictions can be combined by AND or OR logical operators. A sequence of restrictions implicitly uses AND.
  • A restriction has the form of {field} {operator} {value}.
  • Supported fields/values:
  • The operator must be = or !=.

Examples:

  • project_id = 12345 AND status_code = 1
  • project_id = 12345 AND sensitivity_level = HIGH
  • project_id = 12345 AND resource_visibility = PUBLIC

The length of this field should be no more than 500 characters.

ListTableDataProfilesResponse

List of profiles generated for a given organization or project.

Fields
table_data_profiles[]

TableDataProfile

List of data profiles.

next_page_token

string

The next page token.

Location

Specifies the location of the finding.

Fields
byte_range

Range

Zero-based byte offsets delimiting the finding. These are relative to the finding's containing element. Note that when the content is not textual, this references the UTF-8 encoded textual representation of the content. Omitted if content is an image.

codepoint_range

Range

Unicode character offsets delimiting the finding. These are relative to the finding's containing element. Provided when the content is text.
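The distinction between byte_range and codepoint_range matters as soon as the text contains non-ASCII characters: byte offsets count UTF-8 bytes, codepoint offsets count Unicode characters. A quick illustration in plain Python (the finding and text are made up):

```python
# byte_range counts UTF-8 bytes; codepoint_range counts Unicode
# characters. They diverge whenever a character needs more than 1 byte.
text = "café leak"
finding = "café"

cp_start = text.index(finding)                        # codepoint offset
cp_end = cp_start + len(finding)                      # 4 characters
byte_start = len(text[:cp_start].encode("utf-8"))     # bytes before start
byte_end = byte_start + len(finding.encode("utf-8"))  # 5 bytes: 'é' is 2
```

A client slicing the original content should pick the range that matches its own indexing (byte-based buffers vs. string indices).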

content_locations[]

ContentLocation

List of nested objects pointing to the precise location of the finding within the file or record.

container

Container

Information about the container where this finding occurred, if available.

Manual

This type has no fields.

Job trigger option for hybrid jobs. Jobs must be manually created and finished.

MatchingType

Type of the match which can be applied to different ways of matching, like Dictionary, regular expression and intersecting with findings of another info type.

Enums
MATCHING_TYPE_UNSPECIFIED Invalid.
MATCHING_TYPE_FULL_MATCH

Full match.

  • Dictionary: the joined dictionary results match the complete finding quote
  • Regex: the regex match spans the finding quote from start to end
  • Exclude info type: the finding is completely inside the affecting info type's findings
MATCHING_TYPE_PARTIAL_MATCH

Partial match.

  • Dictionary: at least one of the tokens in the finding matches
  • Regex: substring of the finding matches
  • Exclude info type: intersects with affecting info types findings
MATCHING_TYPE_INVERSE_MATCH

Inverse match.

  • Dictionary: no tokens in the finding match the dictionary
  • Regex: finding doesn't match the regex
  • Exclude info type: no intersection with affecting info types findings

MetadataLocation

Metadata Location

Fields
type

MetadataType

Type of metadata containing the finding.

Union field label. Label of the piece of metadata containing the finding, for example - latitude, author, caption. label can be only one of the following:
storage_label

StorageMetadataLabel

Storage metadata.

MetadataType

Type of metadata containing the finding.

Enums
METADATATYPE_UNSPECIFIED Unused
STORAGE_METADATA General file metadata provided by Cloud Storage.

NullPercentageLevel

Bucketized nullness percentage levels. A higher level means a higher percentage of the column is null.

Enums
NULL_PERCENTAGE_LEVEL_UNSPECIFIED Unused.
NULL_PERCENTAGE_VERY_LOW Very few null entries.
NULL_PERCENTAGE_LOW Some null entries.
NULL_PERCENTAGE_MEDIUM A moderate number of null entries.
NULL_PERCENTAGE_HIGH A lot of null entries.

OtherCloudDiscoveryStartingLocation

The other cloud starting location for discovery.

Fields
Union field location. The other cloud starting location for discovery. location can be only one of the following:
aws_location

AwsDiscoveryStartingLocation

The AWS starting location for discovery.

AwsDiscoveryStartingLocation

The AWS starting location for discovery.

Fields
Union field scope. The scope of this starting location. scope can be only one of the following:
account_id

string

The AWS account ID that this discovery config applies to. Within an AWS organization, you can find the AWS account ID inside an AWS account ARN. Example: arn:{partition}:organizations::{management_account_id}:account/{org_id}/{account_id}
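Given a full AWS account ARN in the documented form, the account_id is the final path segment of the resource part. A hypothetical extraction helper (the helper name and sample ARN are illustrative):

```python
def account_id_from_arn(arn: str) -> str:
    """Extract the trailing account ID from an AWS account ARN of the
    form arn:{partition}:organizations::{mgmt_account_id}:account/{org_id}/{account_id}.
    """
    resource = arn.split(":")[-1]     # "account/{org_id}/{account_id}"
    return resource.rsplit("/", 1)[-1]

arn = "arn:aws:organizations::111111111111:account/o-example/222222222222"
# account_id_from_arn(arn) == "222222222222"
```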

all_asset_inventory_assets

bool

All AWS assets stored in Asset Inventory that didn't match other AWS discovery configs.

OtherCloudDiscoveryTarget

Target used to match against for discovery of resources from other clouds. An AWS connector in Security Command Center (Enterprise) is required to use this feature.

Fields
data_source_type

DataSourceType

Required. The type of data profiles generated by this discovery target. Supported values are:

  • aws/s3/bucket

filter

DiscoveryOtherCloudFilter

Required. The resources that the discovery cadence applies to. The first target with a matching filter will be the one to apply to a resource.

conditions

DiscoveryOtherCloudConditions

Optional. In addition to matching the filter, these conditions must be true before a profile is generated.

Union field cadence. Type of cadence. cadence can be only one of the following:
generation_cadence

DiscoveryOtherCloudGenerationCadence

How often and when to update data profiles. New resources that match both the filter and conditions are scanned as quickly as possible depending on system capacity.

disabled

Disabled

Disable profiling for resources that match this filter.

OtherCloudResourceCollection

Match resources using regex filters.

Fields
Union field pattern. The first filter containing a pattern that matches a resource will be used. pattern can be only one of the following:
include_regexes

OtherCloudResourceRegexes

A collection of regular expressions to match a resource against.

OtherCloudResourceRegex

A pattern to match against one or more resources. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.

Fields
Union field resource_regex. The type of resource regex to use. resource_regex can be only one of the following:
amazon_s3_bucket_regex

AmazonS3BucketRegex

Regex for Amazon S3 buckets.

OtherCloudResourceRegexes

A collection of regular expressions to determine what resources to match against.

Fields
patterns[]

OtherCloudResourceRegex

A group of regular expression patterns to match against one or more resources. Maximum of 100 entries. The sum of all regular expression's length can't exceed 10 KiB.
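The two documented limits (at most 100 patterns, combined length at most 10 KiB) are easy to pre-check before sending the request. A hypothetical client-side validator; the server remains the authority:

```python
def validate_patterns(patterns):
    """Pre-check the documented limits for OtherCloudResourceRegexes:
    at least one pattern, at most 100 entries, and a combined regex
    length of at most 10 KiB.
    """
    if not patterns:
        raise ValueError("at least one pattern must be specified")
    if len(patterns) > 100:
        raise ValueError("maximum of 100 patterns")
    if sum(len(p) for p in patterns) > 10 * 1024:
        raise ValueError("combined pattern length exceeds 10 KiB")
    return True

validate_patterns([r"prod-.*", r"staging-[0-9]+"])
```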

OtherCloudSingleResourceReference

Identifies a single resource, like a single Amazon S3 bucket.

Fields
Union field resource. The resource to scan. resource can be only one of the following:
amazon_s3_bucket

AmazonS3Bucket

Amazon S3 bucket.

OtherInfoTypeSummary

InfoType details for other infoTypes found within a column.

Fields
info_type

InfoType

The other infoType.

estimated_prevalence

int32

Approximate percentage of non-null rows that contained data detected by this infoType.

excluded_from_analysis

bool

Whether this infoType was excluded from sensitivity and risk analysis due to factors such as low prevalence (subject to change).

OutputStorageConfig

Cloud repository for storing output.

Fields
output_schema

OutputSchema

Schema used for writing the findings for Inspect jobs. This field is only used for Inspect and must be unspecified for Risk jobs. Columns are derived from the Finding object. If appending to an existing table, any columns from the predefined schema that are missing will be added. No columns in the existing table will be deleted.

If unspecified, then all available columns will be used for a new table or an (existing) table with no schema, and no changes will be made to an existing table that has a schema. Only for use with external storage.

Union field type. Output storage types. type can be only one of the following:
table

BigQueryTable

Store findings in an existing table or a new table in an existing dataset. If table_id is not set, a new one will be generated for you with the following format: dlp_googleapis_yyyy_mm_dd_[dlp_job_id]. The Pacific time zone will be used for generating the date details.

For Inspect, each column in an existing output table must have the same name, type, and mode of a field in the Finding object.

For Risk, an existing output table should be the output of a previous Risk analysis job run on the same source table, with the same privacy metric and quasi-identifiers. Risk jobs that analyze the same table but compute a different privacy metric, or use different sets of quasi-identifiers, cannot store their results in the same table.
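As a concrete sketch, an OutputStorageConfig that writes Inspect findings to a BigQuery table could look like the following in the snake_case dict/JSON form used by the client libraries. The project, dataset, and table IDs are placeholders:

```python
# Hypothetical OutputStorageConfig sketch. Identifiers are placeholders;
# the dataset must already exist.
output_storage_config = {
    "table": {
        "project_id": "example-project",  # placeholder project
        "dataset_id": "dlp_results",      # existing dataset
        # Omitting table_id would auto-generate a name of the form
        # dlp_googleapis_yyyy_mm_dd_[dlp_job_id].
        "table_id": "findings",
    }
}
```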

OutputSchema

Predefined schemas for storing findings. Only for use with external storage.

Enums
OUTPUT_SCHEMA_UNSPECIFIED Unused.
BASIC_COLUMNS Basic schema including only info_type, quote, certainty, and timestamp.
GCS_COLUMNS Schema tailored to findings from scanning Cloud Storage.
DATASTORE_COLUMNS Schema tailored to findings from scanning Google Datastore.
BIG_QUERY_COLUMNS Schema tailored to findings from scanning Google BigQuery.
ALL_COLUMNS Schema containing all columns.

PartitionId

Datastore partition ID. A partition ID identifies a grouping of entities. The grouping is always by project and namespace; however, the namespace ID may be empty.

A partition ID contains several dimensions: project ID and namespace ID.

Fields
project_id

string

The ID of the project to which the entities belong.

namespace_id

string

If not empty, the ID of the namespace to which the entities belong.

PrimitiveTransformation

A rule for transforming a value.

Fields
Union field transformation. Type of transformation. transformation can be only one of the following:
replace_config

ReplaceValueConfig

Replace with a specified value.

redact_config

RedactConfig

Redact

character_mask_config

CharacterMaskConfig

Mask

crypto_replace_ffx_fpe_config

CryptoReplaceFfxFpeConfig

Ffx-Fpe

fixed_size_bucketing_config

FixedSizeBucketingConfig

Fixed size bucketing

bucketing_config

BucketingConfig

Bucketing

replace_with_info_type_config

ReplaceWithInfoTypeConfig

Replace with infotype

time_part_config

TimePartConfig

Time extraction

crypto_hash_config

CryptoHashConfig

Crypto

date_shift_config

DateShiftConfig

Date Shift

crypto_deterministic_config

CryptoDeterministicConfig

Deterministic Crypto

replace_dictionary_config

ReplaceDictionaryConfig

Replace with a value randomly drawn (with replacement) from a dictionary.
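Each entry in the union above maps to one transformation message. As a minimal sketch, here is a DeidentifyConfig dict (in the snake_case form used by the client libraries) that masks findings with `*` while skipping hyphens; all values are illustrative:

```python
# Sketch of a DeidentifyConfig using character_mask_config.
# Leaving info_types empty applies the transformation to all findings.
deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            {
                "primitive_transformation": {
                    "character_mask_config": {
                        "masking_character": "*",
                        "number_to_mask": 0,   # 0 = mask all matching characters
                        "reverse_order": False,
                        # Hyphens are left visible (useful for formatted IDs).
                        "characters_to_ignore": [
                            {"characters_to_skip": "-"}
                        ],
                    }
                }
            }
        ]
    }
}
```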

PrivacyMetric

Privacy metric to compute for reidentification risk analysis.

Fields
Union field type. Types of analysis. type can be only one of the following:
numerical_stats_config

NumericalStatsConfig

Numerical stats

categorical_stats_config

CategoricalStatsConfig

Categorical stats

k_anonymity_config

KAnonymityConfig

K-anonymity

l_diversity_config

LDiversityConfig

l-diversity

k_map_estimation_config

KMapEstimationConfig

k-map

delta_presence_estimation_config

DeltaPresenceEstimationConfig

delta-presence

CategoricalStatsConfig

Compute categorical stats over an individual column, including the number of distinct values and the value count distribution.

Fields
field

FieldId

Field to compute categorical stats on. All column types are supported except for arrays and structs. However, it may be more informative to use NumericalStats when the field type is supported, depending on the data.

DeltaPresenceEstimationConfig

δ-presence metric, used to estimate how likely it is for an attacker to figure out that one given individual appears in a de-identified dataset. Similarly to the k-map metric, we cannot compute δ-presence exactly without knowing the attack dataset, so we use a statistical model instead.

Fields
quasi_ids[]

QuasiId

Required. Fields considered to be quasi-identifiers. No two fields can have the same tag.

region_code

string

ISO 3166-1 alpha-2 region code to use in the statistical modeling. Set if no column is tagged with a region-specific InfoType (like US_ZIP_5) or a region code.

auxiliary_tables[]

StatisticalTable

Several auxiliary tables can be used in the analysis. Each custom_tag used to tag a quasi-identifiers field must appear in exactly one field of one auxiliary table.

KAnonymityConfig

k-anonymity metric, used for analysis of reidentification risk.

Fields
quasi_ids[]

FieldId

Set of fields to compute k-anonymity over. When multiple fields are specified, they are considered a single composite key. Structs and repeated data types are not supported; however, nested fields are supported so long as they are not structs themselves or nested within a repeated field.

entity_id

EntityId

Message indicating that multiple rows might be associated to a single individual. If the same entity_id is associated to multiple quasi-identifier tuples over distinct rows, we consider the entire collection of tuples as the composite quasi-identifier. This collection is a multiset: the order in which the different tuples appear in the dataset is ignored, but their frequency is taken into account.

Important note: a maximum of 1000 rows can be associated to a single entity ID. If more rows are associated with the same entity ID, some might be ignored.
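A PrivacyMetric for k-anonymity over two quasi-identifier columns, grouping rows by an entity column, could be sketched as follows (column names are placeholders):

```python
# Sketch of a PrivacyMetric using k-anonymity. Column names are placeholders.
privacy_metric = {
    "k_anonymity_config": {
        # The two fields together form one composite quasi-identifier key.
        "quasi_ids": [
            {"name": "zip_code"},
            {"name": "age"},
        ],
        # Rows sharing the same user_id are treated as one multiset of
        # quasi-identifier tuples (at most 1000 rows per entity ID).
        "entity_id": {"field": {"name": "user_id"}},
    }
}
```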

KMapEstimationConfig

Reidentifiability metric. This corresponds to a risk model similar to what is called "journalist risk" in the literature, except the attack dataset is statistically modeled instead of being perfectly known. This can be done using publicly available data (like the US Census), or using a custom statistical model (indicated as one or several BigQuery tables), or by extrapolating from the distribution of values in the input dataset.

Fields
quasi_ids[]

TaggedField

Required. Fields considered to be quasi-identifiers. No two columns can have the same tag.

region_code

string

ISO 3166-1 alpha-2 region code to use in the statistical modeling. Set if no column is tagged with a region-specific InfoType (like US_ZIP_5) or a region code.

auxiliary_tables[]

AuxiliaryTable

Several auxiliary tables can be used in the analysis. Each custom_tag used to tag a quasi-identifiers column must appear in exactly one column of one auxiliary table.

AuxiliaryTable

An auxiliary table contains statistical information on the relative frequency of different quasi-identifiers values. It has one or several quasi-identifiers columns, and one column that indicates the relative frequency of each quasi-identifier tuple. If a tuple is present in the data but not in the auxiliary table, the corresponding relative frequency is assumed to be zero (and thus, the tuple is highly reidentifiable).

Fields
table

BigQueryTable

Required. Auxiliary table location.

quasi_ids[]

QuasiIdField

Required. Quasi-identifier columns.

relative_frequency

FieldId

Required. The relative frequency column must contain a floating-point number between 0 and 1 (inclusive). Null values are assumed to be zero.

QuasiIdField

A quasi-identifier column has a custom_tag, used to know which column in the data corresponds to which column in the statistical model.

Fields
field

FieldId

Identifies the column.

custom_tag

string

An auxiliary field.

TaggedField

A column with a semantic tag attached.

Fields
field

FieldId

Required. Identifies the column.

Union field tag. Semantic tag that identifies what a column contains, to determine which statistical model to use to estimate the reidentifiability of each value. [required] tag can be only one of the following:
info_type

InfoType

A column can be tagged with an InfoType to use the relevant public dataset as a statistical model of population, if available. We currently support US ZIP codes, region codes, ages and genders. To programmatically obtain the list of supported InfoTypes, use ListInfoTypes with the supported_by=RISK_ANALYSIS filter.

custom_tag

string

A column can be tagged with a custom tag. In this case, the user must indicate an auxiliary table that contains statistical information on the possible values of this column (below).

inferred

Empty

If no semantic tag is indicated, we infer the statistical model from the distribution of values in the input data.
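The three tagging options above can be combined in one KMapEstimationConfig. A sketch, with placeholder table and column names, showing an InfoType-tagged column, a custom-tagged column backed by an auxiliary table, and an untagged (inferred) column:

```python
# Sketch of a KMapEstimationConfig exercising all three TaggedField options.
# Project, dataset, table, and column names are placeholders.
k_map_config = {
    "quasi_ids": [
        # Public-dataset model for US ZIP codes.
        {"field": {"name": "zip"}, "info_type": {"name": "US_ZIP_5"}},
        # Custom tag: must appear in exactly one auxiliary-table column below.
        {"field": {"name": "job_title"}, "custom_tag": "job"},
        # No tag: model inferred from the input data's value distribution.
        {"field": {"name": "favorite_color"}, "inferred": {}},
    ],
    "region_code": "US",
    "auxiliary_tables": [
        {
            "table": {"project_id": "example-project",
                      "dataset_id": "aux",
                      "table_id": "job_freq"},
            "quasi_ids": [{"field": {"name": "job_title"},
                           "custom_tag": "job"}],
            # Column holding a float in [0, 1]; NULLs count as zero.
            "relative_frequency": {"name": "rel_freq"},
        }
    ],
}
```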

LDiversityConfig

l-diversity metric, used for analysis of reidentification risk.

Fields
quasi_ids[]

FieldId

Set of quasi-identifiers indicating how equivalence classes are defined for the l-diversity computation. When multiple fields are specified, they are considered a single composite key.

sensitive_attribute

FieldId

Sensitive field for computing the l-value.

NumericalStatsConfig

Compute numerical stats over an individual column, including min, max, and quantiles.

Fields
field

FieldId

Field to compute numerical stats on. Supported types are integer, float, date, datetime, timestamp, time.

ProfileGeneration

Whether a profile being created is the first generation or an update.

Enums
PROFILE_GENERATION_UNSPECIFIED Unused.
PROFILE_GENERATION_NEW The profile is the first profile for the resource.
PROFILE_GENERATION_UPDATE The profile is an update to a previous profile.

ProfileStatus

Success or errors for the profile generation.

Fields
status

Status

Profiling status code and optional message. The status.code value is 0 (default value) for OK.

timestamp

Timestamp

Time when the profile generation status was updated.

ProjectDataProfile

An aggregated profile for this project, based on the resources profiled within it.

Fields
name

string

The resource name of the profile.

project_id

string

Project ID or account that was profiled.

profile_last_generated

Timestamp

The last time the profile was generated.

sensitivity_score

SensitivityScore

The sensitivity score of this project.

data_risk_level

DataRiskLevel

The data risk level of this project.

profile_status

ProfileStatus

Success or error status of the last attempt to profile the project.

table_data_profile_count

int64

The number of table data profiles generated for this project.

file_store_data_profile_count

int64

The number of file store data profiles generated for this project.

QuasiId

A column with a semantic tag attached.

Fields
field

FieldId

Required. Identifies the column.

Union field tag. Semantic tag that identifies what a column contains, to determine which statistical model to use to estimate the reidentifiability of each value. [required] tag can be only one of the following:
info_type

InfoType

A column can be tagged with an InfoType to use the relevant public dataset as a statistical model of population, if available. We currently support US ZIP codes, region codes, ages and genders. To programmatically obtain the list of supported InfoTypes, use ListInfoTypes with the supported_by=RISK_ANALYSIS filter.

custom_tag

string

A column can be tagged with a custom tag. In this case, the user must indicate an auxiliary table that contains statistical information on the possible values of this column (below).

inferred

Empty

If no semantic tag is indicated, we infer the statistical model from the distribution of values in the input data.

QuoteInfo

Message for infoType-dependent details parsed from quote.

Fields
Union field parsed_quote. Object representation of the quote. parsed_quote can be only one of the following:
date_time

DateTime

The date time indicated by the quote.

Range

Generic half-open interval [start, end)

Fields
start

int64

Index of the first character of the range (inclusive).

end

int64

Index of the character immediately after the last character of the range (exclusive).

RecordCondition

A condition for determining whether a transformation should be applied to a field.

Fields
expressions

Expressions

An expression.

Condition

The field type of value and field do not need to match to be considered equal, but not all comparisons are possible. EQUAL_TO and NOT_EQUAL_TO attempt to compare even with incompatible types, but all other comparisons are invalid with incompatible types. A value of type:

  • string can be compared against all other types
  • boolean can only be compared against other booleans
  • integer can be compared against doubles or a string if the string value can be parsed as an integer.
  • double can be compared against integers or a string if the string can be parsed as a double.
  • Timestamp can be compared against strings in RFC 3339 date string format.
  • TimeOfDay can be compared against timestamps and strings in the format of 'HH:mm:ss'.

If a comparison fails due to a type mismatch, a warning is given and the condition evaluates to false.

Fields
field

FieldId

Required. Field within the record this condition is evaluated against.

operator

RelationalOperator

Required. Operator used to compare the field or infoType to the value.

value

Value

Value to compare against. [Mandatory, except for EXISTS tests.]

Conditions

A collection of conditions.

Fields
conditions[]

Condition

A collection of conditions.

Expressions

An expression, consisting of an operator and conditions.

Fields
logical_operator

LogicalOperator

The operator to apply to the result of conditions. Default and currently only supported value is AND.

Union field type. Expression types. type can be only one of the following:
conditions

Conditions

Conditions to apply to the expression.

LogicalOperator

Logical operators for conditional checks.

Enums
LOGICAL_OPERATOR_UNSPECIFIED Unused
AND Conditional AND
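Putting Condition, Conditions, and Expressions together, a RecordCondition that holds only when both comparisons pass could be sketched as follows (field names are placeholders):

```python
# Sketch of a RecordCondition: true when age >= 18 AND country == "US".
# Field names are placeholders.
record_condition = {
    "expressions": {
        "logical_operator": "AND",  # currently the only supported operator
        "conditions": {
            "conditions": [
                {"field": {"name": "age"},
                 "operator": "GREATER_THAN_OR_EQUALS",
                 "value": {"integer_value": 18}},
                {"field": {"name": "country"},
                 "operator": "EQUAL_TO",
                 "value": {"string_value": "US"}},
            ]
        },
    }
}
```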

RecordKey

Message for a unique key indicating a record that contains a finding.

Fields
id_values[]

string

Values of identifying columns in the given row. Order of values matches the order of identifying_fields specified in the scanning request.

Union field type. Type of key. type can be only one of the following:
datastore_key

DatastoreKey

Datastore key

big_query_key

BigQueryKey

BigQuery key

RecordLocation

Location of a finding within a row or record.

Fields
record_key

RecordKey

Key of the finding.

field_id

FieldId

Field id of the field containing the finding.

table_location

TableLocation

Location within a ContentItem.Table.

RecordSuppression

Configuration to suppress records whose suppression conditions evaluate to true.

Fields
condition

RecordCondition

A condition that, when it evaluates to true, causes the record to be suppressed from the transformed content.

RecordTransformation

The field in a record to transform.

Fields
field_id

FieldId

For record transformations, provide a field.

container_timestamp

Timestamp

Findings container modification timestamp, if applicable.

container_version

string

Container version, if available ("generation" for Cloud Storage).

RecordTransformations

A type of transformation that is applied over structured data such as a table.

Fields
field_transformations[]

FieldTransformation

Transform the record by applying various field transformations.

record_suppressions[]

RecordSuppression

Configuration defining which records get suppressed entirely. Records that match any suppression rule are omitted from the output.
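The two mechanisms can be combined in one RecordTransformations message: suppress some rows entirely and transform a field in the rest. A sketch with placeholder column names:

```python
# Sketch of RecordTransformations: drop rows where status == "test",
# then redact the ssn column in the remaining rows. Names are placeholders.
record_transformations = {
    "record_suppressions": [
        {"condition": {"expressions": {"conditions": {"conditions": [
            {"field": {"name": "status"},
             "operator": "EQUAL_TO",
             "value": {"string_value": "test"}},
        ]}}}}
    ],
    "field_transformations": [
        {"fields": [{"name": "ssn"}],
         "primitive_transformation": {"redact_config": {}}}
    ],
}
```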

RedactConfig

This type has no fields.

Redact a given value. For example, if used with an InfoTypeTransformation transforming PHONE_NUMBER, and input 'My phone number is 206-555-0123', the output would be 'My phone number is '.

RedactImageRequest

Request to search for potentially sensitive info in an image and redact it by covering it with a colored rectangle.

Fields
parent

string

Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • serviceusage.services.use
location_id

string

Deprecated. This field has no effect.

inspect_config

InspectConfig

Configuration for the inspector.

image_redaction_configs[]

ImageRedactionConfig

The configuration for specifying what content to redact from images.

include_findings

bool

Whether the response should include findings along with the redacted image.

byte_item

ByteContentItem

The content must be PNG, JPEG, SVG or BMP.

ImageRedactionConfig

Configuration for determining how redaction of images should occur.

Fields
redaction_color

Color

The color to use when redacting content from an image. If not specified, the default is black.

Union field target. Type of information to redact from images. target can be only one of the following:
info_type

InfoType

Only one ImageRedactionConfig per info_type should be provided per request. If info_type is not specified, and redact_all_text is false, the DLP API redacts all text matching any info_type that is found but not specified in another ImageRedactionConfig.

redact_all_text

bool

If true, all text found in the image, regardless of whether it matches an info_type, is redacted. Only one should be provided.
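A RedactImageRequest combining these pieces could be sketched as follows: redact PHONE_NUMBER findings with red rectangles. The parent, image bytes, and location are placeholders, and the image data is shown base64-encoded as in the REST JSON form:

```python
import base64

# Sketch of a RedactImageRequest. The image bytes are a placeholder,
# not a valid PNG; parent and location are placeholders too.
image_bytes = b"\x89PNG-placeholder"

redact_request = {
    "parent": "projects/example-project/locations/europe-west3",
    "byte_item": {
        "type": "IMAGE_PNG",
        "data": base64.b64encode(image_bytes).decode(),  # base64 in REST JSON
    },
    "image_redaction_configs": [
        {"info_type": {"name": "PHONE_NUMBER"},
         "redaction_color": {"red": 1.0, "green": 0.0, "blue": 0.0}},
    ],
    "include_findings": True,  # also return the findings themselves
}
```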

RedactImageResponse

Results of redacting an image.

Fields
redacted_image

bytes

The redacted image. The type will be the same as the original image.

extracted_text

string

If an image was being inspected and the InspectConfig's include_quote was set to true, then this field will include all text, if any, that was found in the image.

inspect_result

InspectResult

The findings. Populated when include_findings in the request is true.

ReidentifyContentRequest

Request to re-identify an item.

Fields
parent

string

Required. Parent resource name.

The format of this value varies depending on whether you have specified a processing location:

  • Projects scope, location specified: projects/{project_id}/locations/{location_id}
  • Projects scope, no location specified (defaults to global): projects/{project_id}

The following example parent string specifies a parent project with the identifier example-project, and specifies the europe-west3 location for processing data:

parent=projects/example-project/locations/europe-west3

Authorization requires the following IAM permission on the specified resource parent:

  • serviceusage.services.use
reidentify_config

DeidentifyConfig

Configuration for the re-identification of the content item. This field shares the same proto message type that is used for de-identification, however its usage here is for the reversal of the previous de-identification. Re-identification is performed by examining the transformations used to de-identify the items and executing the reverse. This requires that only reversible transformations be provided here. The reversible transformations are:

  • CryptoDeterministicConfig
  • CryptoReplaceFfxFpeConfig
inspect_config

InspectConfig

Configuration for the inspector.

item

ContentItem

The item to re-identify. Will be treated as text.

inspect_template_name

string

Template to use. Any configuration directly specified in inspect_config will override those set in the template. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged.

reidentify_template_name

string

Template to use. References an instance of DeidentifyTemplate. Any configuration directly specified in reidentify_config or inspect_config will override those set in the template. The DeidentifyTemplate used must include only reversible transformations. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged.

location_id

string

Deprecated. This field has no effect.
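Since only reversible transformations are allowed in reidentify_config, a typical request pairs a crypto_deterministic_config with a surrogate custom infoType so the inspector can locate the tokens. A sketch with placeholder names, key material, and content:

```python
# Sketch of a ReidentifyContentRequest. The KMS key name, wrapped key,
# surrogate name, and item value are placeholders.
reidentify_request = {
    "parent": "projects/example-project/locations/global",
    "reidentify_config": {
        "info_type_transformations": {"transformations": [
            {"primitive_transformation": {"crypto_deterministic_config": {
                "crypto_key": {"kms_wrapped": {
                    "wrapped_key": "BASE64_WRAPPED_KEY",  # placeholder
                    "crypto_key_name": (
                        "projects/example-project/locations/global/"
                        "keyRings/kr/cryptoKeys/key"),    # placeholder
                }},
                "surrogate_info_type": {"name": "TOKEN"},
            }}}
        ]}
    },
    # The inspector must know the surrogate type to find tokens to reverse.
    "inspect_config": {"custom_info_types": [
        {"info_type": {"name": "TOKEN"}, "surrogate_type": {}}
    ]},
    "item": {"value": "My token is TOKEN(36):placeholder"},
}
```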

ReidentifyContentResponse

Results of re-identifying an item.

Fields
item

ContentItem

The re-identified item.

overview

TransformationOverview

An overview of the changes that were made to the item.

RelationalOperator

Operators available for comparing the value of fields.

Enums
RELATIONAL_OPERATOR_UNSPECIFIED Unused
EQUAL_TO Equal. Attempts to match even with incompatible types.
NOT_EQUAL_TO Not equal to. Attempts to match even with incompatible types.
GREATER_THAN Greater than.
LESS_THAN Less than.
GREATER_THAN_OR_EQUALS Greater than or equals.
LESS_THAN_OR_EQUALS Less than or equals.
EXISTS Exists

ReplaceDictionaryConfig

Replace each input value with a value randomly selected from the dictionary.

Fields
Union field type. Type of dictionary. type can be only one of the following:
word_list

WordList

A list of words to select from for random replacement. The limits page contains details about the size limits of dictionaries.

ReplaceValueConfig

Replace each input value with a given Value.

Fields
new_value

Value

Value to replace it with.

ReplaceWithInfoTypeConfig

This type has no fields.

Replace each matching finding with the name of the info_type.

ResourceVisibility

How broadly the data in the resource has been shared. New items may be added over time. A higher number means more restricted.

Enums
RESOURCE_VISIBILITY_UNSPECIFIED Unused.
RESOURCE_VISIBILITY_PUBLIC Visible to any user.
RESOURCE_VISIBILITY_INCONCLUSIVE May contain public items. For example, if a Cloud Storage bucket has uniform bucket level access disabled, some objects inside it may be public, but none are known yet.
RESOURCE_VISIBILITY_RESTRICTED Visible only to specific users.

RiskAnalysisJobConfig

Configuration for a risk analysis job. See https://cloud.google.com/sensitive-data-protection/docs/concepts-risk-analysis to learn more.

Fields
privacy_metric

PrivacyMetric

Privacy metric to compute.

source_table

BigQueryTable

Input dataset to compute metrics over.

actions[]

Action

Actions to execute at the completion of the job. Are executed in the order provided.

Schedule

Schedule for inspect job triggers.

Fields
Union field option. Type of schedule. option can be only one of the following:
recurrence_period_duration

Duration

With this option, a job is started on a regular periodic basis, for example every day (86400 seconds).

A scheduled start time will be skipped if the previous execution has not ended when its scheduled time occurs.

This value must be set to a time duration greater than or equal to 1 day and can be no longer than 60 days.
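In the JSON mapping, a google.protobuf.Duration is written as a decimal number of seconds with an "s" suffix, so a daily schedule within the 1-to-60-day bounds could be sketched as:

```python
# Sketch of a JobTrigger Schedule that runs once per day.
# Durations use the protobuf JSON mapping: "<seconds>s".
DAY_SECONDS = 86400
schedule = {"recurrence_period_duration": f"{DAY_SECONDS}s"}

# The API requires the period to be between 1 and 60 days.
assert DAY_SECONDS >= 86400 and DAY_SECONDS <= 60 * 86400
```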

SearchConnectionsRequest

Request message for SearchConnections.

Fields
parent

string

Required. Resource name of the organization or project with a wildcard location, for example, organizations/433245324/locations/- or projects/project-id/locations/-.

Authorization requires the following IAM permission on the specified resource parent:

  • dlp.connections.search
page_size

int32

Optional. Number of results per page, max 1000.

page_token

string

Optional. Page token from a previous page to return the next set of results. If set, all other request fields must match the original request.

filter

string

Optional. Supported field/value pair: state - MISSING|AVAILABLE|ERROR

SearchConnectionsResponse

Response message for SearchConnections.

Fields
connections[]

Connection

List of connections that match the search query. Note that only a subset of the fields will be populated, and only "name" is guaranteed to be set. For full details of a Connection, call GetConnection with the name.

next_page_token

string

Token to retrieve the next page of results. An empty value means there are no more results.

SecretManagerCredential

A credential consisting of a username and password, where the password is stored in a Secret Manager resource. Note: Secret Manager charges apply.

Fields
username

string

Required. The username.

password_secret_version_name

string

Required. The name of the Secret Manager resource that stores the password, in the form projects/project-id/secrets/secret-name/versions/version.

SecretsDiscoveryTarget

This type has no fields.

Discovery target for credentials and secrets in cloud resource metadata.

This target does not include any filtering or frequency controls. Cloud DLP will scan cloud resource metadata for secrets daily.

No inspect template should be included in the discovery config for a security benchmarks scan. Instead, the built-in list of secrets and credentials infoTypes will be used (see https://cloud.google.com/sensitive-data-protection/docs/infotypes-reference#credentials_and_secrets).

Credentials and secrets discovered will be reported as vulnerabilities to Security Command Center.

SensitivityScore

The score is calculated from all elements in the data profile. A higher level means the data is more sensitive.

Fields
score

SensitivityScoreLevel

The sensitivity score applied to the resource.

SensitivityScoreLevel

Various sensitivity score levels for resources.

Enums
SENSITIVITY_SCORE_UNSPECIFIED Unused.
SENSITIVITY_LOW No sensitive information detected. The resource isn't publicly accessible.
SENSITIVITY_UNKNOWN Unable to determine sensitivity.
SENSITIVITY_MODERATE Medium risk. Contains personally identifiable information (PII), potentially sensitive data, or fields with free-text data that are at a higher risk of having intermittent sensitive data. Consider limiting access.
SENSITIVITY_HIGH High risk. Sensitive personally identifiable information (SPII) can be present. Exfiltration of data can lead to user data loss. Re-identification of users might be possible. Consider limiting usage and/or removing SPII.

StatisticalTable

An auxiliary table containing statistical information on the relative frequency of different quasi-identifiers values. It has one or several quasi-identifiers columns, and one column that indicates the relative frequency of each quasi-identifier tuple. If a tuple is present in the data but not in the auxiliary table, the corresponding relative frequency is assumed to be zero (and thus, the tuple is highly reidentifiable).

Fields
table

BigQueryTable

Required. Auxiliary table location.

quasi_ids[]

QuasiIdentifierField

Required. Quasi-identifier columns.

relative_frequency

FieldId

Required. The relative frequency column must contain a floating-point number between 0 and 1 (inclusive). Null values are assumed to be zero.

QuasiIdentifierField

A quasi-identifier column has a custom_tag, used to know which column in the data corresponds to which column in the statistical model.

Fields
field

FieldId

Identifies the column.

custom_tag

string

A column can be tagged with a custom tag. In this case, the user must indicate an auxiliary table that contains statistical information on the possible values of this column (below).

StorageConfig

Shared message indicating Cloud storage type.

Fields
timespan_config

TimespanConfig

Configuration of the timespan of the items to include in scanning.

Union field type. Type of storage system to inspect. type can be only one of the following:
datastore_options

DatastoreOptions

Google Cloud Datastore options.

cloud_storage_options

CloudStorageOptions

Cloud Storage options.

big_query_options

BigQueryOptions

BigQuery options.

hybrid_options

HybridOptions

Hybrid inspection options.

TimespanConfig

Configuration of the timespan of the items to include in scanning. Currently only supported when inspecting Cloud Storage and BigQuery.

Fields
start_time

Timestamp

Exclude files, tables, or rows older than this value. If not set, no lower time limit is applied.

end_time

Timestamp

Exclude files, tables, or rows newer than this value. If not set, no upper time limit is applied.

timestamp_field

FieldId

Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

For BigQuery

If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.

If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

  • _PARTITIONTIME
  • _PARTITIONDATE
  • _PARTITION_LOAD_TIME

For Datastore

If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. Valid data types of the provided timestamp property are: TIMESTAMP.

See the known issue related to this operation.

enable_auto_population_of_timespan_config

bool

When the job is started by a JobTrigger we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger.

For BigQuery

Inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection and reading up to the current timestamp will result in skipped rows.

See the known issue related to this operation.
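For a BigQuery scan limited to recent rows, a TimespanConfig could be sketched as follows, using protobuf Timestamp strings in RFC 3339 form and a placeholder timestamp column:

```python
import datetime

# Sketch of a TimespanConfig covering the last 7 days of a BigQuery table.
# "update_time" is a placeholder TIMESTAMP column; rows where it is NULL
# are skipped. A fixed "now" keeps the example deterministic.
now = datetime.datetime(2024, 5, 1, tzinfo=datetime.timezone.utc)
timespan_config = {
    "start_time": (now - datetime.timedelta(days=7)).isoformat(),
    "end_time": now.isoformat(),
    "timestamp_field": {"name": "update_time"},
}
```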

StorageMetadataLabel

Storage metadata label to indicate which metadata entry contains findings.

Fields
key

string

Label name.

StoredInfoType

StoredInfoType resource message that contains information about the current version and any pending updates.

Fields
name

string

Resource name.

current_version

StoredInfoTypeVersion

Current version of the stored info type.

pending_versions[]

StoredInfoTypeVersion

Pending versions of the stored info type. Empty if no versions are pending.

StoredInfoTypeConfig

Configuration for stored infoTypes. All fields and subfield are provided by the user. For more information, see https://cloud.google.com/sensitive-data-protection/docs/creating-custom-infotypes.

Fields
display_name

string

Display name of the StoredInfoType (max 256 characters).

description

string

Description of the StoredInfoType (max 256 characters).

Union field type. Stored infotype types. type can be only one of the following:
large_custom_dictionary

LargeCustomDictionaryConfig

StoredInfoType where findings are defined by a dictionary of phrases.

dictionary

Dictionary

Store dictionary-based CustomInfoType.

regex

Regex

Store regular expression-based StoredInfoType.

StoredInfoTypeState

State of a StoredInfoType version.

Enums
STORED_INFO_TYPE_STATE_UNSPECIFIED Unused
PENDING StoredInfoType version is being created.
READY StoredInfoType version is ready for use.
FAILED StoredInfoType creation failed. All relevant error messages are returned in the StoredInfoTypeVersion message.
INVALID StoredInfoType is no longer valid because artifacts stored in user-controlled storage were modified. To fix an invalid StoredInfoType, use the UpdateStoredInfoType method to create a new version.

StoredInfoTypeStats

Statistics for a StoredInfoType.

Fields
Union field type. Stat types type can be only one of the following:
large_custom_dictionary

LargeCustomDictionaryStats

StoredInfoType where findings are defined by a dictionary of phrases.

StoredInfoTypeVersion

Version of a StoredInfoType, including the configuration used to build it, create timestamp, and current state.

Fields
config

StoredInfoTypeConfig

StoredInfoType configuration.

create_time

Timestamp

Create timestamp of the version. Read-only, determined by the system when the version is created.

state

StoredInfoTypeState

Stored info type version state. Read-only, updated by the system during dictionary creation.

errors[]

Error

Errors that occurred when creating this storedInfoType version, or anomalies detected in the storedInfoType data that render it unusable. Only the five most recent errors will be displayed, with the most recent error appearing first.

For example, some of the data for stored custom dictionaries is put in the user's Cloud Storage bucket, and if this data is modified or deleted by the user or another system, the dictionary becomes invalid.

If any errors occur, fix the problem indicated by the error message and use the UpdateStoredInfoType API method to create another version of the storedInfoType to continue using it, reusing the same config if it was not the source of the error.

stats

StoredInfoTypeStats

Statistics about this storedInfoType version.

StoredType

A reference to a StoredInfoType to use with scanning.

Fields
name

string

Resource name of the requested StoredInfoType, for example organizations/433245324/storedInfoTypes/432452342 or projects/project-id/storedInfoTypes/432452342.

create_time

Timestamp

Timestamp indicating when the version of the StoredInfoType used for inspection was created. Output-only field, populated by the system.
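
A StoredType reference is typically embedded in a CustomInfoType when configuring inspection. A minimal sketch in JSON/dict form, where the infoType name and resource name are hypothetical placeholders:

```python
# Reference a StoredInfoType from a custom infoType during inspection.
# "CUSTOM_TERMS" and the resource name below are hypothetical placeholders.
custom_info_type = {
    "info_type": {"name": "CUSTOM_TERMS"},
    "stored_type": {
        "name": "projects/my-project/storedInfoTypes/my-dictionary"
    },
}
```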

Table

Structured content to inspect. Up to 50,000 Values per request allowed. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-structured-text#inspecting_a_table to learn more.

Fields
headers[]

FieldId

Headers of the table.

rows[]

Row

Rows of the table.

Row

Values of the row.

Fields
values[]

Value

Individual cells.
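
The Table, Row, FieldId, and Value messages above can be sketched in their JSON/dict form; the field names come from this reference, while the column names and cell values are made-up sample data:

```python
# A small Table message in JSON/dict form, matching the fields above.
# Column names and cell values are hypothetical sample data.
table = {
    "headers": [{"name": "name"}, {"name": "email"}],
    "rows": [
        {"values": [{"string_value": "Alice"},
                    {"string_value": "alice@example.com"}]},
        {"values": [{"string_value": "Bob"},
                    {"string_value": "bob@example.com"}]},
    ],
}

# The 50,000-Value request limit applies to the total number of cells.
total_values = sum(len(row["values"]) for row in table["rows"])
assert total_values <= 50_000
```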

TableDataProfile

The profile for a scanned table.

Fields
name

string

The name of the profile.

data_source_type

DataSourceType

The resource type that was profiled.

project_data_profile

string

The resource name of the project data profile for this table.

dataset_project_id

string

The Google Cloud project ID that owns the resource.

dataset_location

string

If supported, the location where the dataset's data is stored. See https://cloud.google.com/bigquery/docs/locations for supported locations.

dataset_id

string

If the resource is BigQuery, the dataset ID.

table_id

string

The table ID.

full_resource

string

The Cloud Asset Inventory resource that was profiled in order to generate this TableDataProfile. https://cloud.google.com/apis/design/resource_names#full_resource_name

profile_status

ProfileStatus

Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated.

state

State

State of a profile.

sensitivity_score

SensitivityScore

The sensitivity score of this table.

data_risk_level

DataRiskLevel

The data risk level of this table.

predicted_info_types[]

InfoTypeSummary

The infoTypes predicted from this table's data.

other_info_types[]

OtherInfoTypeSummary

Other infoTypes found in this table's data.

config_snapshot

DataProfileConfigSnapshot

The snapshot of the configurations used to generate the profile.

last_modified_time

Timestamp

The time when this table was last modified.

expiration_time

Timestamp

Optional. The time when this table expires.

scanned_column_count

int64

The number of columns profiled in the table.

failed_column_count

int64

The number of columns skipped in the table because of an error.

table_size_bytes

int64

The size of the table when the profile was generated.

row_count

int64

Number of rows in the table when the profile was generated. This will not be populated for BigLake tables.

encryption_status

EncryptionStatus

How the table is encrypted.

resource_visibility

ResourceVisibility

How broadly a resource has been shared.

profile_last_generated

Timestamp

The last time the profile was generated.

resource_labels

map<string, string>

The labels applied to the resource at the time the profile was generated.

create_time

Timestamp

The time at which the table was created.

sample_findings_table

BigQueryTable

The BigQuery table to which the sample findings are written.

State

Possible states of a profile. New items may be added.

Enums
STATE_UNSPECIFIED Unused.
RUNNING The profile is currently running. Once a profile has finished it will transition to DONE.
DONE The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded, otherwise, it failed.

TableLocation

Location of a finding within a table.

Fields
row_index

int64

The zero-based index of the row where the finding is located. Only populated for resources that have a natural ordering, not BigQuery. In BigQuery, to identify the row a finding came from, populate BigQueryOptions.identifying_fields with your primary-key column names; when you store the findings, the values of those columns will be stored inside the Finding.
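
Following that guidance, a sketch of a BigQueryOptions message with identifying_fields set, in dict form (the project, dataset, table, and column names are hypothetical placeholders):

```python
# Trace each finding back to its source row via primary-key columns.
# All resource and column names below are hypothetical placeholders.
big_query_options = {
    "table_reference": {
        "project_id": "my-project",
        "dataset_id": "my_dataset",
        "table_id": "my_table",
    },
    # The values of these columns are stored with each Finding.
    "identifying_fields": [{"name": "customer_id"}],
}
```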

TableOptions

Instructions regarding the table content being inspected.

Fields
identifying_fields[]

FieldId

The columns that are the primary keys for table objects included in ContentItem. A copy of this cell's value will be stored alongside each finding so that the finding can be traced back to the specific row it came from. No more than 3 may be provided.

TableReference

Message defining the location of a BigQuery table with the projectId inferred from the parent project.

Fields
dataset_id

string

Dataset ID of the table.

table_id

string

Name of the table.

TimePartConfig

For use with Date, Timestamp, and TimeOfDay, extract or preserve a portion of the value.

Fields
part_to_extract

TimePart

The part of the time to keep.

TimePart

Components that make up time.

Enums
TIME_PART_UNSPECIFIED Unused
YEAR [0-9999]
MONTH [1-12]
DAY_OF_MONTH [1-31]
DAY_OF_WEEK [1-7]
WEEK_OF_YEAR [1-53]
HOUR_OF_DAY [0-23]
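
A TimePartConfig that keeps only the year, in dict form, plus a plain-Python illustration of the effect. The transformation itself runs server-side; the local function below merely mimics what YEAR extraction does to a date value:

```python
import datetime

# TimePartConfig keeping only the YEAR component of date/timestamp values.
time_part_config = {"part_to_extract": "YEAR"}

def extract_year(d: datetime.date) -> int:
    # Local illustration only: YEAR extraction keeps the year, in [0-9999].
    return d.year

assert extract_year(datetime.date(2021, 7, 4)) == 2021
```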

TransformationConfig

User-specified templates and configs for how to de-identify structured, unstructured, and image files. The user must provide either an unstructured de-identify template or at least one redact image config.

Fields
deidentify_template

string

De-identify template. If this template is specified, it will serve as the default de-identify template. This template cannot contain record_transformations since it can be used for unstructured content such as free-form text files. If this template is not set, a default ReplaceWithInfoTypeConfig will be used to de-identify unstructured content.

structured_deidentify_template

string

Structured de-identify template. If this template is specified, it will serve as the de-identify template for structured content such as delimited files and tables. If this template is not set but the deidentify_template is set, then deidentify_template will also apply to the structured content. If neither template is set, a default ReplaceWithInfoTypeConfig will be used to de-identify structured content.

image_redact_template

string

Image redact template. If this template is specified, it will serve as the de-identify template for images. If this template is not set, all findings in the image will be redacted with a black box.
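
A sketch of a TransformationConfig in dict form, combining all three template fields described above; the template resource names are hypothetical placeholders:

```python
# Template resource names below are hypothetical; substitute your own.
transformation_config = {
    # Default template for unstructured content (no record_transformations).
    "deidentify_template":
        "projects/my-project/deidentifyTemplates/unstructured-template",
    # Template for structured content such as tables and delimited files.
    "structured_deidentify_template":
        "projects/my-project/deidentifyTemplates/structured-template",
    # Template for images; if unset, findings are redacted with a black box.
    "image_redact_template":
        "projects/my-project/deidentifyTemplates/image-template",
}
```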

TransformationContainerType

Describes functionality of a given container in its original format.

Enums
TRANSFORM_UNKNOWN_CONTAINER Unused.
TRANSFORM_BODY Body of a file.
TRANSFORM_METADATA Metadata for a file.
TRANSFORM_TABLE A table.

TransformationDescription

A flattened description of a PrimitiveTransformation or RecordSuppression.

Fields
type

TransformationType

The transformation type.

description

string

A description of the transformation. This is empty for a RECORD_SUPPRESSION, or is the output of calling toString() on the PrimitiveTransformation protocol buffer message for any other type of transformation.

condition

string

A human-readable string representation of the RecordCondition corresponding to this transformation. Set if a RecordCondition was used to determine whether or not to apply this transformation.

Examples:

  • (age_field > 85)
  • (age_field <= 18)
  • (zip_field exists)
  • (zip_field == 01234) && (city_field != "Springville")
  • (zip_field == 01234) && (age_field <= 18) && (city_field exists)

info_type

InfoType

Set if the transformation was limited to a specific InfoType.

TransformationDetails

Details about a single transformation. This object contains a description of the transformation, information about whether the transformation was successfully applied, and the precise location where the transformation occurred. These details are stored in a user-specified BigQuery table.

Fields
resource_name

string

The name of the job that completed the transformation.

container_name

string

The top level name of the container where the transformation is located (this will be the source file name or table name).

transformation[]

TransformationDescription

Description of the transformation. This would only contain more than one element if there were multiple matching transformations and which one to apply was ambiguous. Not set for states that contain no transformation; currently, the only such state is TransformationResultStateType.METADATA_UNRETRIEVABLE.

status_details

TransformationResultStatus

Status of the transformation. If the transformation was not successful, this specifies what caused it to fail; otherwise, it shows that the transformation was successful.

transformed_bytes

int64

The number of bytes that were transformed. If transformation was unsuccessful or did not take place because there was no content to transform, this will be zero.

transformation_location

TransformationLocation

The precise location of the transformed content in the original container.

TransformationDetailsStorageConfig

Config for storing transformation details.

Fields
Union field type. Location to store the transformation summary. type can be only one of the following:
table

BigQueryTable

The BigQuery table in which to store the output. This may be an existing table or a new table in an existing dataset. If table_id is not set, a new one will be generated for you with the following format: dlp_googleapis_transformation_details_yyyy_mm_dd_[dlp_job_id]. The Pacific time zone will be used for generating the date details.

TransformationErrorHandling

How to handle transformation errors during de-identification. A transformation error occurs when the requested transformation is incompatible with the data. For example, trying to de-identify an IP address using a DateShift transformation would result in a transformation error, since date info cannot be extracted from an IP address. Information about any incompatible transformations, and how they were handled, is returned in the response as part of the TransformationOverviews.

Fields
Union field mode. How transformation errors should be handled. mode can be only one of the following:
throw_error

ThrowError

Throw an error

leave_untransformed

LeaveUntransformed

Ignore errors

LeaveUntransformed

This type has no fields.

Skips the data without modifying it if the requested transformation would cause an error. For example, if a DateShift transformation were applied to an IP address, this mode would leave the IP address unchanged in the response.

ThrowError

This type has no fields.

Throw an error and fail the request when a transformation error occurs.
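
The two mutually exclusive modes above can be sketched in dict form; both message types are empty, so the choice is expressed purely by which union field is set:

```python
# Fail the whole request when a transformation error occurs.
strict = {"transformation_error_handling": {"throw_error": {}}}

# Skip incompatible values, leaving them untransformed in the response.
lenient = {"transformation_error_handling": {"leave_untransformed": {}}}
```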

TransformationLocation

Specifies the location of a transformation.

Fields
container_type

TransformationContainerType

Information about the functionality of the container where this finding occurred, if available.

Union field location_type. Location type. location_type can be only one of the following:
finding_id

string

For infoType transformations, a link to the corresponding finding ID so that location information does not need to be duplicated. Each finding ID correlates to an entry in the findings output table; this table only gets created when users choose to save findings (by adding the save findings action to the request).

record_transformation

RecordTransformation

For record transformations, provide a field and container information.

TransformationOverview

Overview of the modifications that occurred.

Fields
transformed_bytes

int64

Total size in bytes that were transformed in some way.

transformation_summaries[]

TransformationSummary

Transformations applied to the dataset.

TransformationResultStatus

The outcome of a transformation.

Fields
result_status_type

TransformationResultStatusType

Transformation result status type. This will be either SUCCESS, or the reason why the transformation was not completely successful.

details

Status

Detailed error codes and messages

TransformationResultStatusType

Enum of possible outcomes of transformations. SUCCESS if the transformation and its storage were successful; otherwise, the reason the data was not transformed.

Enums
STATE_TYPE_UNSPECIFIED Unused.
INVALID_TRANSFORM This will be set when a finding could not be transformed (for example, it fell outside the user-set bucket range).
BIGQUERY_MAX_ROW_SIZE_EXCEEDED This will be set when a BigQuery transformation was successful but could not be stored back in BigQuery because the transformed row exceeds BigQuery's max row size.
METADATA_UNRETRIEVABLE This will be set when there is a finding in the custom metadata of a file, but at the write time of the transformed file, this key / value pair is unretrievable.
SUCCESS This will be set when the transformation and storing of it is successful.

TransformationSummary

Summary of a single transformation. Only one of 'transformation', 'field_transformation', or 'record_suppress' will be set.

Fields
info_type

InfoType

Set if the transformation was limited to a specific InfoType.

field

FieldId

Set if the transformation was limited to a specific FieldId.

transformation

PrimitiveTransformation

The specific transformation these stats apply to.

field_transformations[]

FieldTransformation

The field transformation that was applied. If multiple field transformations are requested for a single field, this list will contain all of them; otherwise, only one is supplied.

record_suppress

RecordSuppression

The specific suppression option these stats apply to.

results[]

SummaryResult

Collection of all transformations that took place or had an error.

transformed_bytes

int64

Total size in bytes that were transformed in some way.

SummaryResult

A collection that informs the user of the number of times a particular TransformationResultCode and error details occurred.

Fields
count

int64

Number of transformations counted by this result.

code

TransformationResultCode

Outcome of the transformation.

details

string

A place for warnings or errors to show up if a transformation didn't work as expected.

TransformationResultCode

Possible outcomes of transformations.

Enums
TRANSFORMATION_RESULT_CODE_UNSPECIFIED Unused
SUCCESS Transformation completed without an error.
ERROR Transformation had an error.

TransformationType

An enum of rules that can be used to transform a value. Can be a record suppression, or one of the transformation rules specified under PrimitiveTransformation.

Enums
TRANSFORMATION_TYPE_UNSPECIFIED Unused
RECORD_SUPPRESSION Record suppression
REPLACE_VALUE Replace value
REPLACE_DICTIONARY Replace value using a dictionary.
REDACT Redact
CHARACTER_MASK Character mask
CRYPTO_REPLACE_FFX_FPE FFX-FPE
FIXED_SIZE_BUCKETING Fixed size bucketing
BUCKETING Bucketing
REPLACE_WITH_INFO_TYPE Replace with info type
TIME_PART Time part
CRYPTO_HASH Crypto hash
DATE_SHIFT Date shift
CRYPTO_DETERMINISTIC_CONFIG Deterministic crypto
REDACT_IMAGE Redact image

TransientCryptoKey

Use this to have a random data crypto key generated. It will be discarded after the request finishes.

Fields
name

string

Required. Name of the key. This is an arbitrary string used to differentiate different keys. A unique key is generated per name: two separate TransientCryptoKey protos share the same generated key if their names are the same. When the data crypto key is generated, this name is not used in any way (repeating the API call will result in a different key being generated).

UniquenessScoreLevel

Bucketized uniqueness score levels. A higher uniqueness score is a strong signal that the column may contain a unique identifier, like a user ID. A low value indicates that the column contains few unique values, like booleans or other classifiers.

Enums
UNIQUENESS_SCORE_LEVEL_UNSPECIFIED Some columns do not have estimated uniqueness. Possible reasons include having too few values.
UNIQUENESS_SCORE_LOW Low uniqueness, possibly a boolean, enum, or similarly typed column.
UNIQUENESS_SCORE_MEDIUM Medium uniqueness.
UNIQUENESS_SCORE_HIGH High uniqueness, possibly a column of free text or unique identifiers.

UnwrappedCryptoKey

Using raw keys is prone to security risks due to accidentally leaking the key. Choose another type of key if possible.

Fields
key

bytes

Required. A 128/192/256 bit key.
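
If an unwrapped key must be used despite the risks noted above, one way to generate a 256-bit value is with the OS random source. This is only a sketch; secure storage and handling of the key are up to you, and in the JSON API, bytes fields are sent base64-encoded:

```python
import base64
import os

# Generate a random 256-bit (32-byte) key from the OS random source.
raw_key = os.urandom(32)  # 32 bytes == 256 bits

# bytes fields are base64-encoded in the JSON representation.
unwrapped = {"key": base64.b64encode(raw_key).decode("ascii")}

assert len(raw_key) == 32
```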

UpdateConnectionRequest

Request message for UpdateConnection.

Fields
name

string

Required. Resource name in the format: projects/{project}/locations/{location}/connections/{connection}.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.connections.update
connection

Connection

Required. The connection with new values for the relevant fields.

update_mask

FieldMask

Optional. Mask to control which fields get updated.

UpdateDeidentifyTemplateRequest

Request message for UpdateDeidentifyTemplate.

Fields
name

string

Required. Resource name of organization and deidentify template to be updated, for example organizations/433245324/deidentifyTemplates/432452342 or projects/project-id/deidentifyTemplates/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.deidentifyTemplates.update
deidentify_template

DeidentifyTemplate

New DeidentifyTemplate value.

update_mask

FieldMask

Mask to control which fields get updated.

UpdateDiscoveryConfigRequest

Request message for UpdateDiscoveryConfig.

Fields
name

string

Required. Resource name of the project and the configuration, for example projects/dlp-test-project/discoveryConfigs/53234423.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.jobTriggers.update
discovery_config

DiscoveryConfig

Required. New DiscoveryConfig value.

update_mask

FieldMask

Mask to control which fields get updated.

UpdateInspectTemplateRequest

Request message for UpdateInspectTemplate.

Fields
name

string

Required. Resource name of organization and inspectTemplate to be updated, for example organizations/433245324/inspectTemplates/432452342 or projects/project-id/inspectTemplates/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.inspectTemplates.update
inspect_template

InspectTemplate

New InspectTemplate value.

update_mask

FieldMask

Mask to control which fields get updated.

UpdateJobTriggerRequest

Request message for UpdateJobTrigger.

Fields
name

string

Required. Resource name of the project and the triggeredJob, for example projects/dlp-test-project/jobTriggers/53234423.

Authorization requires one or more of the following IAM permissions on the specified resource name:

  • dlp.jobTriggers.update
  • dlp.jobs.create
job_trigger

JobTrigger

New JobTrigger value.

update_mask

FieldMask

Mask to control which fields get updated.

UpdateStoredInfoTypeRequest

Request message for UpdateStoredInfoType.

Fields
name

string

Required. Resource name of organization and storedInfoType to be updated, for example organizations/433245324/storedInfoTypes/432452342 or projects/project-id/storedInfoTypes/432452342.

Authorization requires the following IAM permission on the specified resource name:

  • dlp.storedInfoTypes.update
config

StoredInfoTypeConfig

Updated configuration for the storedInfoType. If not provided, a new version of the storedInfoType will be created with the existing configuration.

update_mask

FieldMask

Mask to control which fields get updated.
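
A sketch of an UpdateStoredInfoTypeRequest in dict form; the resource name and Cloud Storage paths are hypothetical placeholders, and the update_mask restricts which config fields are changed:

```python
# Resource names and storage paths below are hypothetical placeholders.
request = {
    "name": "projects/my-project/storedInfoTypes/my-stored-infotype",
    "config": {
        "display_name": "Updated dictionary",
        "large_custom_dictionary": {
            "output_path": {"path": "gs://my-bucket/dlp/dictionary"},
            "cloud_storage_file_set": {"url": "gs://my-bucket/terms/*"},
        },
    },
    # Only the fields named here are applied from the new config.
    "update_mask": {"paths": ["large_custom_dictionary"]},
}
```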

Value

Set of primitive values supported by the system. Note that for the purposes of inspection or transformation, the number of bytes considered to comprise a 'Value' is based on its representation as a UTF-8 encoded string. For example, if 'integer_value' is set to 123456789, the number of bytes would be counted as 9, even though an int64 only holds up to 8 bytes of data.

Fields
Union field type. Value types type can be only one of the following:
integer_value

int64

integer

float_value

double

float

string_value

string

string

boolean_value

bool

boolean

timestamp_value

Timestamp

timestamp

time_value

TimeOfDay

time of day

date_value

Date

date

day_of_week_value

DayOfWeek

day of week
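
The byte-counting rule described for Value can be checked locally: an integer_value of 123456789 counts as 9 bytes, the UTF-8 length of its string form, even though an int64 occupies at most 8 bytes:

```python
# Per the Value description, size is the UTF-8 length of the string form.
value = {"integer_value": 123456789}
byte_count = len(str(value["integer_value"]).encode("utf-8"))
assert byte_count == 9
```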

ValueFrequency

A value of a field, including its frequency.

Fields
value

Value

A value contained in the field in question.

count

int64

How many times the value is contained in the field.

VersionDescription

Details about each available version for an infotype.

Fields
version

string

Name of the version

description

string

Description of the version.