Profile BigQuery data in a single project

This page describes how to configure BigQuery data discovery at the project level. If you want to profile an organization or folder, see Profile BigQuery data in an organization or folder.

For more information about the discovery service, see Data profiles.

To start profiling data, you create a scan configuration.

Before you begin

  1. Make sure the Cloud Data Loss Prevention API is enabled on your project:

    1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
    2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

      Go to project selector

    3. Make sure that billing is enabled for your Google Cloud project.

    4. Enable the required API.

      Enable the API

  2. Confirm that you have the IAM permissions that are required to configure data profiles at the project level.

  3. You must have an inspection template in each region where you have data to be profiled. If you want to use a single template for multiple regions, you can use a template that is stored in the global region. If organizational policies prevent you from creating an inspection template in the global region, then you must set a dedicated inspection template for each region. For more information, see Data residency considerations.

    This task lets you create an inspection template in the global region only. If you need dedicated inspection templates for one or more regions, you must create those templates before performing this task.

  4. You can configure Sensitive Data Protection to send notifications to Pub/Sub when certain events occur, such as when Sensitive Data Protection profiles a new table. If you want to use this feature, you must first create a Pub/Sub topic.

  5. You can configure Sensitive Data Protection to automatically attach tags to your resources. This feature lets you conditionally grant access to those resources based on their calculated sensitivity levels. If you want to use this feature, you must first complete the tasks in Control IAM access to resources based on data sensitivity.

Create a scan configuration

  1. Go to the Create scan configuration page.

    Go to Create scan configuration

  2. On the toolbar, click the project selector and select your project.

The following sections provide more information about the steps in the Create scan configuration page. At the end of each section, click Continue.

Select a discovery type

Select BigQuery.

Select scope

Do one of the following:

  • If you want to scan a single table, select Scan one table.

    For each table, you can have only one single-resource scan configuration. For more information, see Profile a single data resource.

    Fill in the details of the table that you want to profile.

  • If you want to perform standard project-level profiling, select Scan selected project.

Manage schedules

If the default profiling frequency suits your needs, you can skip this section of the Create scan configuration page.

Configure this section for the following reasons:

  • To make fine-grained adjustments to the profiling frequency of all your data or certain subsets of your data.
  • To specify the tables that you don't want to profile.
  • To specify the tables that you don't want profiled more than once.

To make fine-grained adjustments to profiling frequency, follow these steps:

  1. Click Add schedule.
  2. In the Filters section, define one or more filters that specify which tables are in the schedule's scope.

    Specify at least one of the following:

    • A project ID or a regular expression that specifies one or more projects
    • A dataset ID or a regular expression that specifies one or more datasets
    • A table ID or a regular expression that specifies one or more tables

    Regular expressions must follow the RE2 syntax.

    For example, if you want all tables in a dataset to be included in the filter, specify that dataset's ID and leave the two other fields blank.

    If you want to add more filters, click Add filter and repeat this step.

  3. Click Frequency.

  4. In the Frequency section, specify whether Sensitive Data Protection should profile the tables you defined in your filters, and if so, how often:

    • If you never want the tables to be profiled, turn off Profile the tables.

    • If you want the tables to be profiled at least once, leave Profile the tables on.

      In the succeeding fields in this section, you specify whether the system should reprofile your data and what events should trigger a reprofile operation. For more information, see Frequency of data profile generation.

      1. For When schema changes, specify how often Sensitive Data Protection should check whether the selected tables have had schema changes since they were last profiled. Only tables with schema changes are reprofiled.
      2. For Types of schema change, specify which types of schema changes should trigger a reprofile operation. Select one of the following:
        • New columns: Reprofile the tables that gained new columns.
        • Removed columns: Reprofile the tables that had columns removed.

        For example, suppose you have tables that gain new columns every day, and you need to profile their contents each time. You can set When schema changes to Reprofile daily, and set Types of schema change to New columns.

      3. For When table changes, specify how often Sensitive Data Protection should check whether the selected tables have had any changes since they were last profiled. Only tables with changes are reprofiled. Examples of table changes are row deletions and schema changes.

        You must select a value that is the same as, or less frequent than, the value you set in the When schema changes field.

      4. For When inspect template changes, specify whether you want your data to be reprofiled when the associated inspection template is updated, and if so, how often.

        An inspection template change is detected when either of the following occurs:

        • The name of an inspection template changes in your scan configuration.
        • The updateTime of an inspection template changes.

        For example, if you set an inspection template for the us-west1 region and you update that inspection template, then only data in the us-west1 region is reprofiled.

  5. Click Conditions.

  6. In the Conditions section, specify any conditions that the tables defined in your filters must meet before Sensitive Data Protection profiles them. If you set both minimum conditions and the time condition, Sensitive Data Protection profiles only the tables that meet both types of conditions.

    • Minimum conditions: These conditions are useful if you want to delay profiling of a table until it has enough rows or until it reaches a certain age. Turn on the conditions you want to apply, and specify the minimum row count or duration.
    • Time condition: This condition is useful if you don't want old tables to ever be profiled. Turn on the time condition, and pick a date and time. Any table created on or before that date is excluded from profiling.

    Example conditions

    Suppose you have the following configuration:

    • Minimum conditions

      • Minimum row count: 10 rows
      • Minimum duration: 24 hours
    • Time condition

      • Timestamp: 5/4/22, 11:59 PM

    In this case, Sensitive Data Protection excludes any tables created on or before May 4, 2022, 11:59 PM. Among the tables created after this date and time, Sensitive Data Protection profiles only the tables that either have 10 rows or are at least 24 hours old.

  7. In the Tables to profile section, select one of the following, depending on the types of tables that you want to profile:

    • Profile all tables: Select this option if you want Sensitive Data Protection to profile all types of tables that match your filters and conditions.

      For table types that aren't supported, Sensitive Data Protection generates only partially populated profiles. Such profiles show errors indicating that the tables they pertain to aren't supported. Select this option if you want to see the partial profiles despite the error messages.

      When Sensitive Data Protection adds support for a new table type, it fully reprofiles tables of that type during the next scheduled run.

    • Profile supported tables: Select this option if you want Sensitive Data Protection to profile only the supported tables that match your filters and conditions. Unsupported tables won't have partial profiles.

    • Profile specific table types: Select this option if you want Sensitive Data Protection to profile only the types of tables that you select. In the list that appears, select one or more types.

      When Sensitive Data Protection adds support for a new table type, it doesn't automatically profile tables of that type. To profile newly supported table types, you must edit your scan configuration and select those types.

    If you don't select an option, Sensitive Data Protection profiles only BigQuery tables and shows errors for unsupported tables.

    Pricing for data profiling varies depending on the types of tables profiled. For more information, see Data profiling pricing.

  8. Click Done.

  9. If you want to add more schedules, click Add schedule and repeat the previous steps.

  10. To specify precedence between schedules, reorder them using the up and down arrows.

    The order of the schedules specifies how conflicts between schedules are resolved. If a table matches the filters of two different schedules, the schedule higher in the schedules list dictates the profiling frequency for that table.

    The last schedule in the list is always the one labeled Default schedule. This default schedule covers the tables in your selected scope that don't match any of the schedules that you created. This default schedule follows the system default profiling frequency.

  11. If you want to adjust the default schedule, click Edit schedule, and adjust the settings as needed.
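
The filter-matching and precedence behavior described above can be sketched in a few lines of Python. This is an illustrative model, not the service's implementation: the class and function names are hypothetical, and Sensitive Data Protection evaluates filters with RE2 syntax, which differs from Python's re module in some edge cases.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Schedule:
    """Hypothetical model of one schedule's filter. A field set to None
    represents a field left blank in the console, which matches everything."""
    name: str
    project_regex: Optional[str] = None
    dataset_regex: Optional[str] = None
    table_regex: Optional[str] = None

    def matches(self, project: str, dataset: str, table: str) -> bool:
        for pattern, value in (
            (self.project_regex, project),
            (self.dataset_regex, dataset),
            (self.table_regex, table),
        ):
            # The filter matches only if every non-blank field fully matches.
            if pattern is not None and not re.fullmatch(pattern, value):
                return False
        return True

def schedule_for(project, dataset, table, schedules, default):
    # The first matching schedule in the ordered list dictates the
    # profiling frequency; tables that match no schedule fall through
    # to the default schedule.
    for schedule in schedules:
        if schedule.matches(project, dataset, table):
            return schedule
    return default
```

For example, if a schedule whose dataset filter is sales_.* sits above one whose project filter is prod-project, a table in the sales_2024 dataset of prod-project is governed by the first schedule even though it matches both.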

Select inspection template

Depending on how you want to provide an inspection configuration, choose one of the following options. Regardless of which option you choose, Sensitive Data Protection scans your data in the region where that data is stored. That is, your data doesn't leave its region of origin.

Option 1: Create an inspection template

Choose this option if you want to create a new inspection template in the global region.

  1. Click Create new inspection template.
  2. Optional: To modify the default selection of infoTypes, click Manage infoTypes.

    For more information about how to manage built-in and custom infoTypes, see Manage infoTypes through the Google Cloud console.

    You must have at least one infoType selected to continue.

  3. Optional: Configure the inspection template further by adding rulesets and setting a confidence threshold. For more information, see Configure detection.

When Sensitive Data Protection creates the scan configuration, it stores this new inspection template in the global region.

Option 2: Use an existing inspection template

Choose this option if you have existing inspection templates that you want to use.

  1. Click Select existing inspection template.

  2. Enter the full resource name of the inspection template that you want to use. The Region field is automatically populated with the name of the region where your inspection template is stored.

    The inspection template that you enter must be in the same region as the data to be profiled.

    To respect data residency, Sensitive Data Protection doesn't use an inspection template outside the region where that template is stored.

    To find the full resource name of an inspection template, follow these steps:

    1. Go to your inspection templates list. This page opens on a separate tab.

      Go to inspection templates

    2. Switch to the project that contains the inspection template that you want to use.

    3. On the Templates tab, click the template ID of the template that you want to use.

    4. On the page that opens, copy the full resource name of the template. The full resource name follows this format:

      projects/PROJECT_ID/locations/REGION/inspectTemplates/TEMPLATE_ID
    5. On the Create scan configuration page, in the Template name field, paste the full resource name of the template.

  3. To add an inspection template for another region, click Add inspection template and enter the template's full resource name. Repeat this for each region where you have a dedicated inspection template.

  4. Optional: Add an inspection template that's stored in the global region. Sensitive Data Protection automatically uses that template for data in regions where you don't have a dedicated inspection template.
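
As a sanity check before adding a template, you can extract the region segment from its full resource name and compare it against the region of the data to be profiled. The following sketch is illustrative and assumes the project-scoped name format shown above; the function name is hypothetical.

```python
import re

# Matches the format shown above:
# projects/PROJECT_ID/locations/REGION/inspectTemplates/TEMPLATE_ID
_TEMPLATE_NAME = re.compile(
    r"projects/(?P<project>[^/]+)"
    r"/locations/(?P<region>[^/]+)"
    r"/inspectTemplates/(?P<template>[^/]+)"
)

def template_region(full_resource_name: str) -> str:
    """Return the region segment of an inspection template resource name."""
    match = _TEMPLATE_NAME.fullmatch(full_resource_name)
    if match is None:
        raise ValueError(f"Unexpected template name: {full_resource_name!r}")
    return match.group("region")
```

A template whose region segment is global can serve any region that lacks a dedicated template; any other region value must equal the region of the data to be profiled.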

Add actions

In the following sections, you specify actions that you want Sensitive Data Protection to take after it generates the data profiles.

For information about how other Google Cloud services may charge you for configuring actions, see Pricing for exporting data profiles.

Publish to Security Command Center

Findings from data profiles provide context when you triage and develop response plans for your vulnerability and threat findings in Security Command Center.

Before you can use this action, Security Command Center must be activated at the organization level. Turning on Security Command Center at the organization level enables the flow of findings from integrated services like Sensitive Data Protection. Sensitive Data Protection works with Security Command Center in all service tiers.

If Security Command Center isn't activated at the organization level, Sensitive Data Protection findings won't appear in Security Command Center. For more information, see Check the activation level of Security Command Center.

To send the results of your data profiles to Security Command Center, make sure the Publish to Security Command Center option is turned on.

For more information, see Publish data profiles to Security Command Center.

Save data profile copies to BigQuery

Turning on Save data profile copies to BigQuery lets you keep a saved copy or history of all of your generated profiles. Doing so can be useful for creating audit reports and visualizing data profiles. You can also load this information into other systems.

Also, this option lets you see all of your data profiles in a single view, regardless of which region your data resides in. If you turn off this option, you can still view the data profiles in the Google Cloud console. However, in the Google Cloud console, you select one region at a time, and see only the data profiles for that region.

To export copies of the data profiles to a BigQuery table, follow these steps:

  1. Turn on Save data profile copies to BigQuery.

  2. Enter the details of the BigQuery table where you want to save the data profile copies:

    • For Project ID, enter the ID of an existing project where you want to export the data profiles.

    • For Dataset ID, enter the name of an existing dataset in that project.

    • For Table ID, enter a name for the BigQuery table where you want to save the data profile copies. If this table doesn't exist, Sensitive Data Protection automatically creates it for you using the name that you provide.

Sensitive Data Protection starts exporting profiles from the time you turn on this option. Profiles that were generated before you turned on exporting aren't saved to BigQuery.

For example queries that you can use when analyzing data profiles, see Analyze data profiles.

Save sample discovery findings to BigQuery

Sensitive Data Protection can add sample findings to a BigQuery table of your choice. Sample findings represent a subset of all findings and might not represent all infoTypes that were discovered. Normally, the system generates around 10 sample findings per table, but this number can vary for each discovery run.

Each finding includes the actual string (also called quote) that was detected and its exact location.

This action is useful if you want to evaluate whether your inspection configuration is correctly matching the type of information that you want to flag as sensitive. Using the exported data profiles and the exported sample findings, you can run queries to get more information about the specific items that were flagged, the infoTypes they matched, their exact locations, their calculated sensitivity levels, and other details.

The following example requires both Save data profile copies to BigQuery and Save sample discovery findings to BigQuery to be turned on.

The following query uses an INNER JOIN operation on both the table of exported data profiles and the table of exported sample findings. In the resulting table, each record shows the finding's quote, the infoType that it matched, the resource that contains the finding, and the calculated sensitivity level of the resource.

SELECT
  findings_table.quote,
  findings_table.infotype.name,
  findings_table.location.container_name,
  findings_table.location.data_profile_finding_record_location.field.name AS field_name,
  profiles_table.table_profile.dataset_project_id AS project_id,
  profiles_table.table_profile.dataset_id AS dataset_id,
  profiles_table.table_profile.table_id AS table_id,
  profiles_table.table_profile.sensitivity_score AS table_sensitivity_score
FROM
  `FINDINGS_TABLE_PROJECT_ID.FINDINGS_TABLE_DATASET_ID.FINDINGS_TABLE_ID_latest_v1` AS findings_table
INNER JOIN
  `PROFILES_TABLE_PROJECT_ID.PROFILES_TABLE_DATASET_ID.PROFILES_TABLE_ID_latest_v1` AS profiles_table
ON
  findings_table.data_profile_resource_name = profiles_table.table_profile.name

To save sample findings to a BigQuery table, follow these steps:

  1. Turn on Save sample discovery findings to BigQuery.

  2. Enter the details of the BigQuery table where you want to save the sample findings.

    • For Project ID, enter the ID of an existing project where you want to export the findings.

    • For Dataset ID, enter the name of an existing dataset in that project.

    • For Table ID, enter the name of the BigQuery table where you want to save the findings. If this table doesn't exist, Sensitive Data Protection automatically creates it for you using the name that you provide.

For information about the contents of each finding that is saved in the BigQuery table, see DataProfileFinding.

Attach tags to resources

Turning on Attach tags to resources instructs Sensitive Data Protection to automatically tag your data according to its calculated sensitivity level. This section requires you to first complete the tasks in Control IAM access to resources based on data sensitivity.

To automatically tag a resource according to its calculated sensitivity level, follow these steps:

  1. Turn on the Tag resources option.
  2. For each sensitivity level (high, moderate, low, and unknown), enter the path of the tag value that you created for the given sensitivity level.

    If you skip a sensitivity level, no tag is attached for it.

  3. To automatically lower the data risk level of a resource when the sensitivity level tag is present, select When a tag is applied to a resource, lower the data risk of its profile to LOW. This option helps you measure the improvement in your data security and privacy posture.

  4. Select one or both of the following options:

    • Tag a resource when it is profiled for the first time.
    • Tag a resource when its profile is updated. Select this option if you want Sensitive Data Protection to overwrite the sensitivity level tag value on succeeding discovery runs. Consequently, a principal's access to a resource changes automatically as the calculated data sensitivity level for that resource increases or decreases.

      Don't select this option if you plan to manually update the sensitivity level tag values that the discovery service attached to your resources. If you select this option, Sensitive Data Protection can overwrite your manual updates.
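
The mapping from sensitivity levels to tag values described above amounts to a simple lookup in which a skipped level gets no tag. This is an illustrative sketch only; the tag value paths below are hypothetical placeholders, not values that the service defines.

```python
from typing import Optional

# Hypothetical tag value paths created beforehand in Resource Manager,
# one per sensitivity level. "LOW" is intentionally omitted here to show
# that a skipped level simply gets no tag attached.
SENSITIVITY_TAG_VALUES = {
    "HIGH": "example-org/sensitivity/high",
    "MODERATE": "example-org/sensitivity/moderate",
    "UNKNOWN": "example-org/sensitivity/unknown",
}

def tag_value_for(sensitivity_level: str) -> Optional[str]:
    """Return the tag value path for a sensitivity level, or None if the
    level was skipped in the scan configuration (no tag is attached)."""
    return SENSITIVITY_TAG_VALUES.get(sensitivity_level)
```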

Publish to Pub/Sub

Turning on Publish to Pub/Sub lets you take programmatic actions based on profiling results. You can use Pub/Sub notifications to develop a workflow for catching and remediating findings with significant data risk or sensitivity.

To send notifications to a Pub/Sub topic, follow these steps:

  1. Turn on Publish to Pub/Sub.

    A list of options appears. Each option describes an event that causes Sensitive Data Protection to send a notification to Pub/Sub.

  2. Select the events that should trigger a Pub/Sub notification.

    If you select Send a Pub/Sub notification each time a profile is updated, Sensitive Data Protection sends a notification when there's a change in the sensitivity level, data risk level, detected infoTypes, public access, or other important metrics in the profile.

  3. For each event you select, follow these steps:

    1. Enter the name of the topic. The name must be in the following format:

      projects/PROJECT_ID/topics/TOPIC_ID
      

      Replace the following:

      • PROJECT_ID: the ID of the project associated with the Pub/Sub topic.
      • TOPIC_ID: the ID of the Pub/Sub topic.
    2. Specify whether to include the full table profile in the notification, or just the full resource name of the table that was profiled.

    3. Set the minimum data risk and sensitivity levels that must be met for Sensitive Data Protection to send a notification.

    4. Specify whether only one or both of the data risk and sensitivity conditions must be met. For example, if you choose AND, then both the data risk and the sensitivity conditions must be met before Sensitive Data Protection sends a notification.
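
The threshold logic in steps 3 and 4 can be sketched as follows. The level ordering and function name are illustrative assumptions, not the service's actual implementation.

```python
# Levels ordered from least to most severe, for comparing against minimums.
_LEVELS = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def should_notify(
    data_risk: str,
    sensitivity: str,
    min_data_risk: str,
    min_sensitivity: str,
    operator: str = "AND",
) -> bool:
    """Decide whether a profile's levels meet the configured minimums."""
    risk_met = _LEVELS[data_risk] >= _LEVELS[min_data_risk]
    sensitivity_met = _LEVELS[sensitivity] >= _LEVELS[min_sensitivity]
    # With AND, both conditions must be met; with OR, either one suffices.
    if operator == "AND":
        return risk_met and sensitivity_met
    return risk_met or sensitivity_met
```

For instance, with both minimums set to MODERATE and the operator set to AND, a profile with HIGH data risk but LOW sensitivity doesn't trigger a notification; with OR, it does.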

Send to Dataplex as tags

This action lets you create tags in Dataplex based on insights from data profiles. This action is only applied to new and updated profiles. Existing profiles that aren't updated aren't sent to Dataplex.

Dataplex is a Google Cloud service that unifies distributed data and automates data management and governance for that data. When you enable this action, tables that you profile are automatically tagged in Dataplex according to insights gathered from the data profiles. You can then search your organization and projects for tables with specific tag values.

To send the data profiles to Dataplex, make sure that the Send to Dataplex as tags option is turned on.

For more information, see Tag tables in Dataplex based on insights from data profiles.

Set location to store configuration

Click the Resource location list, and select the region where you want to store this scan configuration. All scan configurations that you later create will also be stored in this location.

Where you choose to store your scan configuration doesn't affect the data to be scanned. Your data is scanned in the same region where that data is stored. For more information, see Data residency considerations.

Review and create

  1. If you want to make sure that profiling doesn't start automatically after you create the scan configuration, select Create scan in paused mode.

    This option is useful in the following cases:

    • You opted to save data profiles to BigQuery and you want to make sure the service agent has write access to the BigQuery table where the data profile copies will be saved.
    • You opted to save sample discovery findings to BigQuery and you want to make sure that the service agent has write access to the BigQuery table where the sample findings will be saved.
    • You configured Pub/Sub notifications and you want to grant publishing access to the service agent.
    • You enabled the Attach tags to resources action and you need to grant the service agent access to the sensitivity level tag.
  2. Review your settings and click Create.

    Sensitive Data Protection creates the scan configuration and adds it to the discovery scan configurations list.

To view or manage your scan configurations, see Manage scan configurations.

If your service agent has the roles needed to access and profile your data, then Sensitive Data Protection starts scanning your data shortly after you create the scan configuration or resume a paused configuration. Otherwise, Sensitive Data Protection shows an error when you view the scan configuration details.

What's next

  • Learn how to manage data profiles.
  • Learn how to manage scan configurations.
  • Learn how to receive and parse Pub/Sub messages published by the data profiler.
  • Learn how to troubleshoot issues with data profiles.