Collate and route organization- and folder-level logs to supported destinations

This document describes how to create aggregated sinks. Aggregated sinks let you combine and route logs that are generated by the Google Cloud resources in your organization or folder to a centralized location.

Overview

Aggregated sinks combine and route log entries from the resources contained by an organization or folder to a destination.

To control which logs can be queried in those child resources, or routed through their sinks, configure the aggregated sink as non-intercepting or intercepting:

  • A non-intercepting aggregated sink routes logs through sinks in child resources. With this sink, you maintain visibility of logs in the resources in which they were generated. Non-intercepting sinks aren't visible to child resources.

    For example, you might create a non-intercepting aggregated sink that routes all log entries generated from the folders contained by an organization to a central Cloud Storage bucket. The logs are stored in the central Cloud Storage bucket, and also in the resources in which the logs were generated.

  • An intercepting aggregated sink prevents logs from being routed through sinks in child resources, except for _Required sinks. This sink can be useful in preventing duplicate copies of logs from being stored in multiple places.

    For example, consider Data Access audit logs, which can be large in volume and expensive to store multiple copies of. If you've enabled Data Access audit logs, you might create a folder-level intercepting sink that routes all Data Access audit logs to a central project for analysis. This intercepting sink also prevents sinks in child resources from routing copies of the logs elsewhere.

    Intercepting sinks prevent logs from being passed through the Log Router of child resources, unless the logs also match the _Required sink. Because the logs are intercepted, they don't count toward log-based metrics or log-based alerts in the child resources. You can view intercepting sinks in the Log Router page of child resources.
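
The following commands are a minimal sketch of both variants, using the gcloud CLI covered in detail later in this document. The sink names, IDs, destinations (a Cloud Storage bucket for the non-intercepting sink, a central project for the intercepting sink), and the filters are placeholders for illustration:

# Non-intercepting: logs are routed centrally and also remain in child resources.
gcloud logging sinks create archive-sink \
  storage.googleapis.com/BUCKET_NAME \
  --organization=ORGANIZATION_ID --include-children \
  --log-filter='LOG_FILTER'

# Intercepting: sinks in child resources (except _Required) no longer route these
# logs. Intercepting sinks support only a Google Cloud project as the destination.
gcloud logging sinks create data-access-sink \
  logging.googleapis.com/projects/CENTRAL_PROJECT_ID \
  --folder=FOLDER_ID --include-children --intercept-children \
  --log-filter='LOG_ID("cloudaudit.googleapis.com/data_access")'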

For information about managing sinks, see Route logs to supported destinations: Manage sinks.

You can create up to 200 sinks per folder or organization.

Supported destinations

You can use non-intercepting aggregated sinks to route logs, within the same organization or folder or between different ones, to the following destinations:

  • Cloud Logging log buckets: Provides storage in Cloud Logging. A log bucket can store logs that are received by multiple Google Cloud projects. You can combine your Cloud Logging data with other data by upgrading a log bucket to use Log Analytics, and then creating a linked BigQuery dataset. For information about viewing logs stored in log buckets, see Query and view logs overview and View logs routed to Cloud Logging buckets.
  • Google Cloud projects: Route log entries to a different Google Cloud project. When you route logs to a different Google Cloud project, the destination project's Log Router receives the logs and processes them. The sinks in the destination project determine how the received log entries are routed. Error Reporting can analyze logs when the destination project routes those logs to a log bucket owned by the destination project.
  • Pub/Sub topics: Provides support for third-party integrations, such as Splunk. Log entries are formatted into JSON and then routed to a Pub/Sub topic. For information about viewing logs routed to Pub/Sub, see View logs routed to Pub/Sub.
  • BigQuery datasets: Provides storage of log entries in BigQuery datasets. You can use big data analysis capabilities on the stored logs. To combine your Cloud Logging data with other data sources, we recommend that you upgrade your log buckets to use Log Analytics and then create a linked BigQuery dataset. For information about viewing logs routed to BigQuery, see View logs routed to BigQuery.
  • Cloud Storage buckets: Provides storage of log data in Cloud Storage. Log entries are stored as JSON files. For information about viewing logs routed to Cloud Storage, see View logs routed to Cloud Storage.
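
As a quick reference, a sink destination is specified as a path, as in the console example later in this document. The following patterns are illustrative placeholders; confirm the exact format for your destination type before you use one:

  • Log bucket: logging.googleapis.com/projects/DESTINATION_PROJECT_ID/locations/LOCATION/buckets/BUCKET_NAME
  • Google Cloud project: logging.googleapis.com/projects/DESTINATION_PROJECT_ID
  • Pub/Sub topic: pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID
  • BigQuery dataset: bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
  • Cloud Storage bucket: storage.googleapis.com/BUCKET_NAME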

Best practices for intercepting sinks

When you create an intercepting sink, we recommend that you do the following:

  • Consider whether child resources need independent control of routing their logs. If a child resource needs independent control of certain logs, ensure your intercepting sink does not route those logs.

  • Add contact information to the description of an intercepting sink. This might be helpful if those who manage the intercepting sink are different from those who manage the projects whose logs are being intercepted.

  • Test your sink configuration by first creating a non-intercepting aggregated sink and verifying that the correct logs are routed, as sketched below.
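
    A minimal sketch of that test-first workflow, assuming a hypothetical folder-level sink named test-sink and a central destination project:

    gcloud logging sinks create test-sink \
      logging.googleapis.com/projects/CENTRAL_PROJECT_ID \
      --folder=FOLDER_ID --include-children \
      --log-filter='LOG_FILTER'

    # After confirming that the expected logs reach the destination,
    # replace the sink with an intercepting version.
    gcloud logging sinks delete test-sink --folder=FOLDER_ID
    gcloud logging sinks create test-sink \
      logging.googleapis.com/projects/CENTRAL_PROJECT_ID \
      --folder=FOLDER_ID --include-children --intercept-children \
      --log-filter='LOG_FILTER'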

Aggregated sinks and VPC Service Controls

The following limitations apply when you use aggregated sinks and VPC Service Controls:

  • Aggregated sinks can access data from projects inside a service perimeter. To restrict aggregated sinks from accessing data inside a perimeter, we recommend using IAM to manage Logging permissions.

  • VPC Service Controls doesn't support adding folder or organization resources to service perimeters. Therefore, you can't use VPC Service Controls to protect folder- and organization-level logs, including aggregate logs. To manage Logging permissions at the folder or organizational level, we recommend using IAM.

  • If you route logs by using a folder- or organization-level sink to a resource that a service perimeter protects, then you must add an ingress rule to the service perimeter. The ingress rule must allow access to the resource from the service account that the aggregated sink uses. For more information, see VPC Service Controls: Cloud Logging; a hedged sketch of such an ingress rule follows this list.

  • When you specify an ingress or egress policy for a service perimeter, you can't use ANY_SERVICE_ACCOUNT and ANY_USER_ACCOUNT as an identity type when you use a log sink to route logs to Cloud Storage resources. However, you can use ANY_IDENTITY as the identity type.
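
The following is a minimal sketch of such an ingress rule for a Cloud Storage destination, applied with the gcloud CLI. The perimeter name, service account email, and project number are hypothetical placeholders, and the policy schema should be verified against the current VPC Service Controls documentation:

cat > ingress-policy.yaml <<'EOF'
- ingressFrom:
    identities:
    - serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
    sources:
    - accessLevel: "*"
  ingressTo:
    operations:
    - serviceName: storage.googleapis.com
      methodSelectors:
      - method: "*"
    resources:
    - projects/DESTINATION_PROJECT_NUMBER
EOF
gcloud access-context-manager perimeters update PERIMETER_NAME \
  --set-ingress-policies=ingress-policy.yaml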

Before you begin

Before you create a sink, ensure the following:

  • You have a Google Cloud folder or organization with logs that you can see in the Logs Explorer.

  • You have one of the following IAM roles for the Google Cloud organization or folder from which you're routing logs:

    • Owner (roles/owner)
    • Logging Admin (roles/logging.admin)
    • Logs Configuration Writer (roles/logging.configWriter)

    The permissions contained in these roles let you create, delete, or modify sinks. For information about setting IAM roles, see the Logging Access control guide.

  • You have a resource in a supported destination or have the ability to create one.

    The routing destination must exist before you create the sink; you can create it by using the Google Cloud CLI, the Google Cloud console, or the Google Cloud APIs. You can create the destination in any Google Cloud project in any organization, but you must make sure that the sink's service account has permissions to write to the destination.
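
    For example, each supported destination type can be created from the command line. These commands are illustrative, with placeholder names; run only the one that matches your destination type:

    # Log bucket
    gcloud logging buckets create BUCKET_ID --project=PROJECT_ID --location=global
    # Cloud Storage bucket
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_ID
    # BigQuery dataset
    bq mk --dataset PROJECT_ID:DATASET_ID
    # Pub/Sub topic
    gcloud pubsub topics create TOPIC_ID --project=PROJECT_ID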

Create an aggregated sink

To create a non-intercepting aggregated sink, you create a sink in a Google Cloud organization or folder, and set the sink's includeChildren parameter to True. When you set the includeChildren parameter, the sink routes log entries from the organization or folder, plus (recursively) from any contained folders, billing accounts, or Google Cloud projects. To create an intercepting sink, set both the includeChildren and interceptChildren parameters to True.

To specify log entries that you want to route to your destination, you set the sink's inclusion and exclusion filters.

To create an aggregated sink for your folder or organization, do the following:

Console

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log Router.

  2. Select an existing folder or organization.

  3. Select Create sink.

  4. In the Sink details panel, enter the following details:

    • Sink name: Provide an identifier for the sink. After you create the sink, you can't rename it, but you can delete it and create a new one.

    • Sink description (optional): Describe the purpose or use case for the sink.

  5. In the Sink destination panel, select the sink service and destination:

    • Select sink service: Select the service where you want your logs routed. If you are creating an intercepting sink, then you can only select a Google Cloud project as the destination.

    Based on the service that you select, you can select from the following destinations:

    • Cloud Logging bucket: Select or create a Logging bucket. If you create a log bucket, it must be at the project level. You can't create a log bucket at the folder or organization level.
    • BigQuery table: Select or create the particular dataset to receive the routed logs. You also have the option to use partitioned tables.
    • Cloud Storage bucket: Select or create the particular Cloud Storage bucket to receive the routed logs.
    • Pub/Sub topic: Select or create the particular topic to receive the routed logs.
    • Splunk: Select the Pub/Sub topic for your Splunk service.
    • Google Cloud project: Select the Google Cloud project to receive the routed logs.

      For example, if your sink destination is a BigQuery dataset, its identifier looks like the following:

      bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
      
  6. In the Choose logs to include in sink panel, do one of the following:

    • To create a non-intercepting aggregated sink, select Include logs ingested by this resource and all child resources.

    • To create an intercepting sink, select Intercept logs ingested by this organization and all child resources.

  7. Complete the dialog by entering a filter expression in the Build inclusion filter field that matches the log entries you want to include. If you don't set a filter, all logs from your selected resource are routed to the destination.

    For example, you might want to build a filter to route all Data Access audit logs to a single Logging bucket. This filter looks like the following:

    LOG_ID("cloudaudit.googleapis.com/data_access") OR LOG_ID("externalaudit.googleapis.com/data_access")
    

    Note that the length of a filter can't exceed 20,000 characters.

  8. (Optional) To verify you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter prepopulated.

  9. (Optional) In the Choose logs to exclude from the sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.

    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

      For example, to exclude the logs from a specific project from being routed to the destination, add the following exclusion filter:

      logName:projects/PROJECT_ID
      

      To exclude logs from multiple projects, join logName clauses with the logical-OR operator, for example, logName:projects/PROJECT_A_ID OR logName:projects/PROJECT_B_ID.

    You can create up to 50 exclusion filters per sink. Note that the length of a filter can't exceed 20,000 characters.

  10. Select Create sink.

API

To create a sink, use organizations.sinks.create or folders.sinks.create in the Logging API. Prepare the arguments to the method as follows; a sketch of the complete request follows these steps:

  1. Set the parent parameter to be the Google Cloud organization or folder in which to create the sink. The parent must be one of the following:

    • organizations/ORGANIZATION_ID
    • folders/FOLDER_ID
  2. In the LogSink object in the method request body, do one of the following:

    • To create a non-intercepting aggregated sink, set includeChildren to True.

    • To create an intercepting sink, set the includeChildren and interceptChildren parameters to True.

  3. Set the filter property to match the log entries you want to include. Note that the length of a filter can't exceed 20,000 characters.

    For some examples of useful filters, see Create filters for aggregated sinks.

  4. Set the remaining LogSink fields as you would for any sink. For more information, see Route logs to supported destinations.

  5. Call organizations.sinks.create or folders.sinks.create to create the sink.

  6. Retrieve the service account name from the writer_identity field returned from the API response.

  7. Give that service account permission to write to your sink destination.

    If you don't have permission to make that change to the sink destination, then send the service account name to someone who can make that change for you.

    For more information about granting service accounts permissions for resources, see the set destination permissions section.
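
As a concrete sketch of the preceding steps, the following request creates a non-intercepting aggregated sink at the folder level. The sink name, destination, and filter are placeholders:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "SINK_NAME",
        "destination": "storage.googleapis.com/BUCKET_NAME",
        "filter": "LOG_FILTER",
        "includeChildren": true
      }' \
  "https://logging.googleapis.com/v2/folders/FOLDER_ID/sinks"

To create an intercepting sink instead, also set "interceptChildren": true in the request body. The response is a LogSink object whose writer_identity field contains the service account name referenced in steps 6 and 7.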

gcloud

To create an aggregated sink, use the logging sinks create command. To create a non-intercepting aggregated sink, specify the --include-children flag. To create an intercepting sink, specify both the --include-children and --intercept-children flags.

  1. Supply the sink name, sink destination, filter, and the ID of the folder or organization from which you're routing logs. The following example creates a non-intercepting aggregated sink:

    gcloud logging sinks create SINK_NAME \
      SINK_DESTINATION  --include-children \
      --folder=FOLDER_ID --log-filter="LOG_FILTER"
    

    For example, if you're creating an aggregated sink at the folder level and whose destination is a BigQuery dataset, your command might look like the following:

    gcloud logging sinks create SINK_NAME \
      bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID --include-children \
      --folder=FOLDER_ID --log-filter="logName:activity"
    

    Notes:

    • To create a sink at the organization level, replace --folder=FOLDER_ID with --organization=ORGANIZATION_ID.

    • For the sink to include all resources within the organization, the --include-children flag must be set, even when the --organization flag is passed to create. If you omit the flag (the default behavior), the sink routes only the logs generated in the host resource itself.

    • For some examples of useful filters, see Create filters for aggregated sinks.

  2. Retrieve the service account name used to create the sink from the command output.

  3. Give that service account permission to write to your sink destination.

    If you don't have permission to make that change to the sink destination, then send the service account name to someone who can make that change for you.

    For more information about granting service accounts permissions for resources, see the set destination permissions section.

Any changes made to a sink might take a few minutes to apply.

Create filters for aggregated sinks

Like any sink, your aggregated sink contains a filter that selects individual log entries. For examples of filters that you might use to create your aggregated sink, see Sample queries using the Logs Explorer.

Following are some examples of filter comparisons that are useful when using the aggregated sinks feature. Some examples use the following notation:

  • : is the substring operator; don't substitute the = operator for it.
  • ... represents any additional filter comparisons.
  • Uppercase names such as PROJECT_ID are variables. Replace them with valid values.

Note that the length of a filter can't exceed 20,000 characters.

For more details about the filtering syntax, see Logging query language.

Select the log source

For an aggregated sink, the sink's inclusion and exclusion filters are applied to each log entry that is sent to each child resource of the organization or folder. A log entry that matches the inclusion filter and isn't excluded is routed.

If you want your sink to route logs from all child resources, then don't specify a project, folder, or organization in your sink's inclusion and exclusion filters. For example, suppose you configure an aggregated sink for an organization with the following filter:

resource.type="gce_instance"

With the previous filter, logs with a resource type of Compute Engine instances that are written to any child of that organization are routed by the aggregated sink to the destination.

However, there might be situations where you want to use an aggregated sink to route logs only from specific child resources. For example, for compliance reasons you might want to store audit logs from specific folders or projects in their own Cloud Storage bucket. In these situations, configure your inclusion filter to specify each child resource whose logs you want routed. To route logs from a folder and all projects within that folder, the filter must list the folder and each project that the folder contains, joined with the OR operator.

The following filters restrict logs to specific Google Cloud projects, folders, or organizations:

logName:"projects/PROJECT_ID/logs/" AND ... 
logName:("projects/PROJECT_A_ID/logs/" OR "projects/PROJECT_B_ID/logs/") AND ... 
logName:"folders/FOLDER_ID/logs/" AND ... 
logName:"organizations/ORGANIZATION_ID/logs/" AND ... 

For example, to route only Compute Engine instance logs that were written to the folder my-folder, use the following filter:

logName:"folders/my-folder/logs/" AND resource.type="gce_instance"

With the previous filter, logs written to any resource other than my-folder, including logs written to Google Cloud projects that are children of my-folder, aren't routed to the destination.

Select the monitored resource

To route logs from only a specific monitored resource in a Google Cloud project, use multiple comparisons to specify the resource exactly:

logName:"projects/PROJECT_ID/logs" AND
resource.type=RESOURCE_TYPE AND
resource.labels.instance_id=INSTANCE_ID

For a list of resource types, see Monitored resource types.

Select a sample of log entries

To route a random sample of log entries, add the sample built-in function. For example, to route only ten percent of the log entries matching your current filter, use this addition:

sample(insertId, 0.10) AND ...

For more information, see the sample function.

For more information about Cloud Logging filters, see Logging query language.

Set destination permissions

This section describes how to grant Logging the Identity and Access Management permissions to write logs to your sink's destination. For the full list of Logging roles and permissions, see Access control.

When you create or update a sink that routes logs to any destination other than a log bucket in the current project, a service account for that sink is required. Logging automatically creates and manages the service account for you:

  • Before May 22, 2023, Logging created a separate service account for each sink.
  • As of May 22, 2023, when you create a sink and no service account for the underlying resource exists, Logging creates one, and then uses that same service account for all sinks in the underlying resource. The resource can be a Google Cloud project, an organization, a folder, or a billing account.

The writer identity of a sink is the identifier of the service account associated with that sink. All sinks have a writer identity unless they write to a log bucket in the current Google Cloud project.

To route logs to a resource protected by a service perimeter, you must add the service account for that sink to an access level and then assign it to the destination service perimeter. This isn't necessary for non-aggregated sinks. For details, see VPC Service Controls: Cloud Logging.

To set permissions for your sink to route to its destination, do the following:

Console

  1. To get information about the service account for your sink, do the following:

    1. In the navigation panel of the Google Cloud console, select Logging, and then select Log Router.

    2. Find your sink, select Menu, and then select View sink details.

      In the Sink details panel, the writerIdentity field contains the identity of the service account. The serviceAccount: string is part of the service account identity. For example:

      serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
      
  2. On the destination project, grant the writer identity the role required for the service account to write to the destination. To grant a role to a principal, you must have the role of Owner (roles/owner):

    • For Cloud Storage destinations, add the sink's writer identity as a principal by using IAM, and then grant it the Storage Object Creator role (roles/storage.objectCreator).
    • For BigQuery destinations, add the sink's writer identity as a principal by using IAM, and then grant it the BigQuery Data Editor role (roles/bigquery.dataEditor).
    • For Pub/Sub destinations, including Splunk, add the sink's writer identity as a principal by using IAM, and then grant it the Pub/Sub Publisher role (roles/pubsub.publisher).
    • For Logging bucket destinations in different Google Cloud projects, add the sink's writer identity as a principal by using IAM, and then grant it the Logs Bucket Writer role (roles/logging.bucketWriter).
    • For Google Cloud project destinations, add the sink's writer identity as a principal by using IAM, and then grant it the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
    If you don't have Owner access to the destination of the sink, then ask a project owner to add the writer identity as a principal.

API

  1. To get information about the service account for your sink, call the API method organizations.sinks.get or folders.sinks.get.

    The writerIdentity field contains the identity of the service account. The serviceAccount: string is part of the service account identity. For example:

    serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
    
  2. On the destination project, grant the writer identity the role required for the service account to write to the destination. To grant a role to a principal, you must have the role of Owner (roles/owner):

    • For Cloud Storage destinations, add the sink's writer identity as a principal by using IAM, and then grant it the Storage Object Creator role (roles/storage.objectCreator).
    • For BigQuery destinations, add the sink's writer identity as a principal by using IAM, and then grant it the BigQuery Data Editor role (roles/bigquery.dataEditor).
    • For Pub/Sub destinations, including Splunk, add the sink's writer identity as a principal by using IAM, and then grant it the Pub/Sub Publisher role (roles/pubsub.publisher).
    • For Logging bucket destinations in different Google Cloud projects, add the sink's writer identity as a principal by using IAM, and then grant it the Logs Bucket Writer role (roles/logging.bucketWriter).
    • For Google Cloud project destinations, add the sink's writer identity as a principal by using IAM, and then grant it the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
    If you don't have Owner access to the destination of the sink, then ask a project owner to add the writer identity as a principal.

gcloud

  1. To get information about the service account for your sink, run the following command:

    gcloud logging sinks describe SINK_NAME
    

    The writerIdentity field contains the identity of the service account. The serviceAccount: string is part of the service account identity. For example:

    serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
    
  2. On the destination project, grant the writer identity the role required for the service account to write to the destination. To grant a role to a principal, you must have the role of Owner (roles/owner):

    • For Cloud Storage destinations, add the sink's writer identity as a principal by using IAM, and then grant it the Storage Object Creator role (roles/storage.objectCreator).
    • For BigQuery destinations, add the sink's writer identity as a principal by using IAM, and then grant it the BigQuery Data Editor role (roles/bigquery.dataEditor).
    • For Pub/Sub destinations, including Splunk, add the sink's writer identity as a principal by using IAM, and then grant it the Pub/Sub Publisher role (roles/pubsub.publisher).
    • For Logging bucket destinations in different Google Cloud projects, add the sink's writer identity as a principal by using IAM, and then grant it the Logs Bucket Writer role (roles/logging.bucketWriter).
    • For Google Cloud project destinations, add the sink's writer identity as a principal by using IAM, and then grant it the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
    If you don't have Owner access to the destination of the sink, then ask a project owner to add the writer identity as a principal.
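
For example, the grant in step 2 can be made with the gcloud CLI. The project ID, service account email, and role here are placeholders; substitute the role that matches your destination type:

gcloud projects add-iam-policy-binding DESTINATION_PROJECT_ID \
  --member='serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com' \
  --role='roles/storage.objectCreator'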

What's next