Bigtable audit logging information

This page describes the audit logs created by Cloud Bigtable as part of Cloud Audit Logs.


Google Cloud services write audit logs to help you answer the questions, "Who did what, where, and when?" Your Cloud projects contain only the audit logs for resources that are directly within the project. Other entities, such as folders, organizations, and Cloud Billing accounts, contain the audit logs for the entity itself.

For a general overview of Cloud Audit Logs, see Cloud Audit Logs. For a deeper understanding of Cloud Audit Logs, review Understanding audit logs.

The following types of audit logs are available for Bigtable:

  • Admin Activity audit logs

    Includes "admin write" operations that write metadata or configuration information.

    You can't disable Admin Activity audit logs.

  • Data Access audit logs

    Includes "admin read" operations that read metadata or configuration information. Also includes "data read" and "data write" operations that read or write user-provided data.

    To receive Data Access audit logs, you must explicitly enable them.

Audited operations

The following summarizes which API operations correspond to each audit log type in Bigtable:

Admin Activity audit logging

Resource type  Bigtable operation        Log type
AppProfile     CreateAppProfile          ADMIN_WRITE
               GetAppProfile             ADMIN_READ
               ListAppProfiles           ADMIN_READ
               UpdateAppProfile          ADMIN_WRITE
               EnableAppProfile          ADMIN_WRITE
               DisableAppProfile         ADMIN_WRITE
               DeleteAppProfile          ADMIN_WRITE
Backup         CreateBackup              ADMIN_WRITE
               UpdateBackup              ADMIN_WRITE
               DeleteBackup              ADMIN_WRITE
               RestoreTable              ADMIN_WRITE
               ListBackups               ADMIN_READ
Cluster        CreateCluster             ADMIN_WRITE
               UpdateCluster             ADMIN_WRITE
               PartialUpdateCluster      ADMIN_WRITE
               DeleteCluster             ADMIN_WRITE
               ListClusters              ADMIN_READ
Instance       CreateInstance            ADMIN_WRITE
               UpdateInstance            ADMIN_WRITE
               PartialUpdateInstance     ADMIN_WRITE
               DeleteInstance            ADMIN_WRITE
               GetInstance               ADMIN_READ
               ListInstances             ADMIN_READ
Table          CreateTable               ADMIN_WRITE
               ModifyColumnFamilies      ADMIN_WRITE
               GenerateConsistencyToken  ADMIN_READ
               CheckConsistency          ADMIN_READ
               CheckAndMutateRow         ADMIN_WRITE

Data Access audit logging [1]

Resource type  Bigtable operation  Log type
Backup         CreateBackup        DATA_READ
Table          ReadRows            DATA_READ
               SampleRowKeys       DATA_READ
               CheckAndMutateRow   DATA_READ
               ReadModifyWriteRow  DATA_WRITE
               DropRowRange [2]    DATA_WRITE

  1. If you previously enabled Data Access audit logs for all Google Cloud services in the Cloud Audit Logs default configuration, you might need to take additional steps to enable Data Access audit logging for Bigtable. Affected customers will see a notification at the top of the Bigtable page of the Cloud Console.
  2. Admin API operation that is audited as part of Data Access audit logging.

For details about which fields are logged for Data Access audit logging, see Audit log fields.
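To make the mapping above concrete, the following minimal sketch (a hypothetical helper, not part of any Google library) looks up the audit log type for a handful of the admin operations listed in the table:

```python
# Hypothetical helper: maps a few Bigtable Admin API method names from the
# tables above to the audit log type they are recorded under.
ADMIN_LOG_TYPES = {
    "CreateAppProfile": "ADMIN_WRITE",
    "GetAppProfile": "ADMIN_READ",
    "CreateBackup": "ADMIN_WRITE",
    "ListBackups": "ADMIN_READ",
    "CreateInstance": "ADMIN_WRITE",
    "ListInstances": "ADMIN_READ",
    "CreateTable": "ADMIN_WRITE",
    "CheckConsistency": "ADMIN_READ",
}

def log_type_for(method: str) -> str:
    """Return the audit log type for an admin method, or UNKNOWN if unlisted."""
    return ADMIN_LOG_TYPES.get(method, "UNKNOWN")

print(log_type_for("CreateTable"))       # ADMIN_WRITE
print(log_type_for("CheckConsistency"))  # ADMIN_READ
```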

Audit log format

Audit log entries—which can be viewed in Cloud Logging using the Logs Viewer, the Cloud Logging API, or the gcloud command-line tool—include the following objects:

  • The log entry itself, which is an object of type LogEntry. Useful fields include the following:

    • The logName contains the project identification and audit log type.
    • The resource contains the target of the audited operation.
    • The timeStamp contains the time of the audited operation.
    • The protoPayload contains the audited information.
  • The audit logging data, which is an AuditLog object held in the protoPayload field of the log entry.

  • Optional service-specific audit information, which is a service-specific object. For older integrations, this object is held in the serviceData field of the AuditLog object; newer integrations use the metadata field.

For other fields in these objects, and how to interpret them, review Understanding audit logs.
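To show where these fields sit in practice, the sketch below parses an abridged, hypothetical audit log entry that has been exported as JSON; real entries contain many more fields, and the method and email shown are placeholders:

```python
import json

# Abridged, hypothetical audit log entry in the LogEntry JSON shape
# described above. Real entries contain many more fields.
entry_json = """
{
  "logName": "projects/my-project/logs/cloudaudit.googleapis.com%2Factivity",
  "timestamp": "2021-06-01T12:00:00Z",
  "resource": {"type": "bigtable_table"},
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "methodName": "google.bigtable.admin.v2.BigtableTableAdmin.CreateTable",
    "authenticationInfo": {"principalEmail": "alice@example.com"}
  }
}
"""

entry = json.loads(entry_json)
payload = entry["protoPayload"]           # the AuditLog object
print(entry["logName"])                   # project plus audit log type
print(entry["resource"]["type"])          # target of the audited operation
print(payload["methodName"])              # which operation was audited
print(payload["authenticationInfo"]["principalEmail"])  # who did it
```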

Log name

Cloud Audit Logs resource names indicate the Cloud project or other Google Cloud entity that owns the audit logs, and whether the log contains Admin Activity, Data Access, or System Event audit logging data. For example, the following shows the log name for a project's Admin Activity audit logs and the log name for an organization's Data Access audit logs. The variables denote project and organization identifiers.

   projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity
   organizations/ORGANIZATION_ID/logs/cloudaudit.googleapis.com%2Fdata_access

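The log ID in an audit log name is URL-encoded, which is why the slash in a log ID such as cloudaudit.googleapis.com/activity appears as %2F. A short Python sketch of how a project-level audit log name is assembled:

```python
from urllib.parse import quote

def audit_log_name(project_id: str, log_type: str) -> str:
    """Build a project-level audit log name. log_type is an audit log type
    such as 'activity' or 'data_access'; the slash in the log ID must be
    percent-encoded as %2F."""
    log_id = quote(f"cloudaudit.googleapis.com/{log_type}", safe="")
    return f"projects/{project_id}/logs/{log_id}"

print(audit_log_name("my-project", "activity"))
# projects/my-project/logs/cloudaudit.googleapis.com%2Factivity
```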
Service name

Bigtable audit logs use the service name bigtableadmin.googleapis.com for admin operations and the service name bigtable.googleapis.com for data operations.

For information about all logging services, see Mapping services to resources.

Resource types

Bigtable audit logs use the resource types bigtable_instance, bigtable_cluster, bigtable_table, and bigtable_backup. In addition, for IAM operations, audit logs use the resource type audited_resource.

For a list of other resource types, see Monitored resource types.

Enabling audit logging

Admin Activity audit logs are always enabled; you can't disable them.

Data Access audit logs are disabled by default and aren't written unless explicitly enabled (the exception is Data Access audit logs for BigQuery, which can't be disabled).

For instructions on enabling some or all of your Data Access audit logs, see Configuring Data Access logs.
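Data Access logging is controlled through the auditConfigs section of a project's IAM policy. As an illustration only (a sketch of the relevant fragment, not a complete policy), enabling data reads and writes for Bigtable's data service, and admin reads for its admin service, looks roughly like this:

```
{
  "auditConfigs": [
    {
      "service": "bigtable.googleapis.com",
      "auditLogConfigs": [
        { "logType": "DATA_READ" },
        { "logType": "DATA_WRITE" }
      ]
    },
    {
      "service": "bigtableadmin.googleapis.com",
      "auditLogConfigs": [
        { "logType": "ADMIN_READ" }
      ]
    }
  ]
}
```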

Audit log permissions

Identity and Access Management permissions and roles determine which audit logs you can view or export. Logs reside in Cloud projects and in some other entities including organizations, folders, and Cloud Billing accounts. For more information, see Understanding roles.

To view Admin Activity audit logs, you must have one of the following IAM roles in the project that contains your audit logs:

  • Project Owner, Project Editor, or Project Viewer
  • The Logging Logs Viewer role
  • A custom IAM role with the logging.logEntries.list IAM permission

To view Data Access audit logs, you must have one of the following roles in the project that contains your audit logs:

  • Project Owner
  • The Logging Private Logs Viewer role
  • A custom IAM role with the logging.privateLogEntries.list IAM permission

If you are using audit logs from a non-project entity, such as an organization, then change the Cloud project roles to suitable organization roles.

Viewing logs

To find and view audit logs, you need to know the identifier of the Cloud project, folder, or organization for which you want to view audit logging information. You can further specify other indexed LogEntry fields, like resource.type; for details, review Finding log entries quickly.
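For example, a query that returns only Data Access audit log entries for Bigtable tables in a given project might look like the following, where PROJECT_ID is a placeholder:

```
resource.type="bigtable_table"
logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access"
```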

The following are the audit log names; they include variables for the identifiers of the Cloud project, folder, or organization:

   projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity
   projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access
   projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fsystem_event
   projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fpolicy
   folders/FOLDER_ID/logs/cloudaudit.googleapis.com%2Factivity
   organizations/ORGANIZATION_ID/logs/cloudaudit.googleapis.com%2Factivity

You have several options for viewing your audit log entries.

Console

You can use the Logs Explorer in the Cloud Console to retrieve the audit log entries for your Cloud project:

  1. In the Cloud Console, go to the Logging > Logs Explorer page.

    Go to the Logs Explorer page

  2. On the Logs Explorer page, select an existing Cloud project.

  3. In the Query builder pane, do the following:

    • In Resource, select the Google Cloud resource type whose audit logs you want to see.

    • In Log name, select the audit log type that you want to see:

      • For Admin Activity audit logs, select activity.
      • For Data Access audit logs, select data_access.
      • For System Event audit logs, select system_event.
      • For Policy Denied audit logs, select policy.

    If you don't see these options, then there aren't any audit logs of that type available in the Cloud project.

    For more details about querying using the new Logs Explorer, see Building log queries.


gcloud

The gcloud command-line tool provides a command-line interface to the Cloud Logging API. Supply a valid PROJECT_ID, FOLDER_ID, or ORGANIZATION_ID in each of the log names.

To read your Google Cloud project-level audit log entries, run the following command:

gcloud logging read "logName : projects/PROJECT_ID/logs/cloudaudit.googleapis.com" --project=PROJECT_ID

To read your folder-level audit log entries, run the following command:

gcloud logging read "logName : folders/FOLDER_ID/logs/cloudaudit.googleapis.com" --folder=FOLDER_ID

To read your organization-level audit log entries, run the following command:

gcloud logging read "logName : organizations/ORGANIZATION_ID/logs/cloudaudit.googleapis.com" --organization=ORGANIZATION_ID

For more information about using the gcloud tool, see Reading log entries.


API

When building your queries, replace the variables with valid values: substitute the appropriate project-level, folder-level, or organization-level audit log name or identifier as listed in the audit log names. For example, if your query includes a PROJECT_ID, then the project identifier you supply must refer to the currently selected Cloud project.

To use the Logging API to look at your audit log entries, do the following:

  1. Go to the Try this API section in the documentation for the entries.list method.

  2. Put the following into the Request body part of the Try this API form. Clicking the prepopulated form fills the request body automatically, but you need to supply a valid PROJECT_ID in each of the log names.

      {
        "resourceNames": [
          "projects/PROJECT_ID"
        ],
        "pageSize": 5,
        "filter": "logName : projects/PROJECT_ID/logs/cloudaudit.googleapis.com"
      }

  3. Click Execute.

For more details about querying, see Logging query language.

For an example of an audit log entry and how to find the most important information in it, see Sample audit log entry.

Exporting audit logs

You can export audit logs in the same way that you export other kinds of logs. For details about how to export your logs, see Exporting logs. Here are some applications of exporting audit logs:

  • To keep audit logs for a longer period of time or to use more powerful search capabilities, you can export copies of your audit logs to Cloud Storage, BigQuery, or Pub/Sub. Using Pub/Sub, you can export to other applications, other repositories, and to third parties.

  • To manage your audit logs across an entire organization, you can create aggregated sinks that can export logs from any or all Cloud projects in the organization.

  • If your enabled Data Access audit logs are pushing your Cloud projects over their logs allotments, you can export and exclude the Data Access audit logs from Logging. For details, see Excluding logs.


Pricing

Admin Activity audit logs and System Event audit logs are free.

Data Access audit logs and Policy Denied audit logs are chargeable.

For information about Cloud Logging pricing, see Google Cloud's operations suite pricing: Cloud Logging.

Managing costs

Bigtable is typically used for large, high-volume workloads. As a result, if you don't manage the log volume, Bigtable can generate an extremely high number of DATA_READ and DATA_WRITE logs, leading to unexpectedly high log storage costs. If you use Data Access audit logging, you should take steps to manage the log volume.

When you follow the best practices for Bigtable authentication, most Data Access audit log activity is generated by service accounts. A service account is an account that an application uses to authenticate and make API calls to Google Cloud services such as Bigtable. Managing service account logs is the most important step to reduce log volume. You might want to also limit logs using other criteria.

You can enable Data Access audit logging for Bigtable as described in Configuring Data Access logs.

After you enable logging, take the following steps to restrict the volume of logs.

Identify service accounts

First, identify the service accounts whose logs you don't need. Which service accounts are not useful to log depends on your application and business needs. To get a list of service accounts that have Bigtable Data API permissions, you can search the IAM policies for your organization. You can also view them on the Members tab of the IAM Permissions page in the Cloud Console.

Set up log restrictions

Next, set up your log restrictions. There are two ways to manage your Bigtable log volume by limiting service account logs. You can either exempt service accounts using audit configuration, or you can exclude service account logs using logs exclusion filters. For each method, you can either use the Cloud Logging API or the Google Cloud Console.

Exempting service accounts using audit configuration

Exempting service accounts using audit configuration is the recommended approach because it lets you prevent certain logs from being generated in the first place. For detailed instructions, see Configuring Data Access logs.
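As an illustration of the shape of such an exemption (a sketch of an auditConfigs fragment; the service account address is a placeholder), an auditLogConfigs entry with exemptedMembers stops Data Access logs from being generated for that account:

```
{
  "service": "bigtable.googleapis.com",
  "auditLogConfigs": [
    {
      "logType": "DATA_READ",
      "exemptedMembers": [
        "serviceAccount:my-app@my-project.iam.gserviceaccount.com"
      ]
    },
    {
      "logType": "DATA_WRITE",
      "exemptedMembers": [
        "serviceAccount:my-app@my-project.iam.gserviceaccount.com"
      ]
    }
  ]
}
```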

Excluding service accounts using logs exclusion filters

Logs exclusions let you specify logs to be excluded from ingestion into your logs buckets. In this approach, logs are discarded after they have been created, so they still impose a processing load on the Bigtable service components that serve your data. Because of this load, we recommend that you use audit configuration instead. For more information on setting up exclusion filters, see Excluding logs.
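As an illustration, the sketch below (a hypothetical helper, using only the Logging query language fields shown earlier on this page; the project and service account are placeholders) builds an exclusion filter that matches Bigtable Data Access entries produced by a single service account:

```python
def bigtable_data_access_exclusion(project_id: str, service_account: str) -> str:
    """Build a Logging exclusion filter that matches Bigtable Data Access
    audit log entries generated by one service account. Entries matching
    an exclusion filter are discarded at ingestion time."""
    return (
        f'logName="projects/{project_id}/logs/'
        'cloudaudit.googleapis.com%2Fdata_access" AND '
        'resource.type="bigtable_table" AND '
        f'protoPayload.authenticationInfo.principalEmail="{service_account}"'
    )

print(bigtable_data_access_exclusion(
    "my-project", "my-app@my-project.iam.gserviceaccount.com"))
```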