Configure aggregated sinks

This document describes how to create aggregated sinks. For information on managing existing sinks, see Configure and manage sinks.

Overview

Aggregated sinks combine and route log entries from the Google Cloud resources contained by an organization or folder. For instance, you might aggregate and route audit log entries from all the folders contained by an organization to a Cloud Storage bucket.

Without the aggregated sink feature, sinks are limited to routing log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account.

You can create aggregated sinks for Google Cloud folders and organizations. Because neither Cloud projects nor billing accounts contain child resources, you can't create aggregated sinks for those resources.

Supported destinations

You can use aggregated sinks to route logs, either within the same organization or folder or between different organizations and folders, to the following destinations:

  • Cloud Storage: JSON files stored in Cloud Storage buckets.
  • Pub/Sub: JSON messages delivered to Pub/Sub topics. Supports third-party integrations, such as Splunk, with Logging.
  • BigQuery: Tables created in BigQuery datasets.
  • Another Cloud Logging bucket: Log entries held in Cloud Logging log buckets.
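
Each destination is identified by a path when you create the sink. For reference, and with placeholder values in capitals (a sketch; the BigQuery form also appears in an example later in this document), the destination paths have the following formats:

storage.googleapis.com/BUCKET_NAME
bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID
logging.googleapis.com/projects/PROJECT_ID/locations/LOCATION/buckets/BUCKET_ID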

Before you begin

Before you create a sink, ensure the following:

  • You have a Google Cloud folder or organization with logs that you can see in the Logs Explorer.

  • You have one of the following IAM roles for the Google Cloud organization or folder from which you're routing logs.

    • Owner (roles/owner)
    • Logging Admin (roles/logging.admin)
    • Logs Configuration Writer (roles/logging.configWriter)

    The permissions contained in these roles allow you to create, delete, or modify sinks. For information on setting IAM roles, see the Logging Access control guide.

  • You have a resource in a supported destination or have the ability to create one.

    The routing destination must be created before the sink, through the gcloud command-line tool, the Cloud Console, or the Google Cloud APIs. You can create the destination in any Cloud project in any organization, but you must ensure that the sink's service account has permission to write to the destination.
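
    For example, if you plan to route logs to a new Cloud Storage bucket, a minimal sketch of creating the destination first might look like the following (BUCKET_NAME and DESTINATION_PROJECT_ID are placeholders):

      gsutil mb -p DESTINATION_PROJECT_ID gs://BUCKET_NAME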

Create an aggregated sink

To use aggregated sinks, you create a sink in a Google Cloud organization or folder and set the sink's includeChildren parameter to true. That sink can then route log entries from the organization or folder, plus (recursively) from any contained folders, billing accounts, or Cloud projects. You set the sink's inclusion and exclusion filters to specify the log entries that you want to route to your destination.
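
For instance, a minimal sketch of an aggregated sink's LogSink object, assuming a Cloud Storage destination and a filter that matches Admin Activity audit logs, looks like the following:

{
  "name": "SINK_NAME",
  "destination": "storage.googleapis.com/BUCKET_NAME",
  "filter": "LOG_ID(\"cloudaudit.googleapis.com/activity\")",
  "includeChildren": true
}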

You can create up to 200 sinks per folder or organization.

To create an aggregated sink for your folder or organization, do the following:

Console

  1. In the Cloud Console, go to the Logging > Log Router page.

    Go to the Log Router

  2. Select an existing folder or organization.

  3. Select Create sink.

  4. In the Sink details panel, enter the following details:

    • Sink name: Provide an identifier for the sink. After you create the sink, you can't rename it, but you can delete it and create a new one.

    • Sink description (optional): Describe the purpose or use case for the sink.

  5. In the Sink destination panel, select the sink service and destination:

    • Select sink service: Select the service where you want your logs routed.

    Based on the service that you select, you can select from the following destinations:

    • Cloud Logging bucket: Select or create a Logging bucket.
    • BigQuery table: Select or create the particular dataset to receive the routed logs. You also have the option to use partitioned tables.
    • Cloud Storage bucket: Select or create the particular Cloud Storage bucket to receive the routed logs.
    • Pub/Sub topic: Select or create the particular topic to receive the routed logs.
    • Splunk: Select the Pub/Sub topic for your Splunk service.

      For example, if your sink destination is a BigQuery dataset, the destination identifier looks like the following:

      bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
      
  6. In the Choose logs to include in sink panel, do the following:

    1. Select Include logs ingested by this resource and all child resources, which creates an aggregated sink.

    2. In the Build inclusion filter field, enter a filter expression that matches the log entries you want to include. If you don't set a filter, all logs from your selected resource are routed to the destination.

      For example, you might want to build a filter to route all Data Access logs to a single Logging bucket. This filter looks like the following:

      LOG_ID("cloudaudit.googleapis.com/data_access") OR LOG_ID("externalaudit.googleapis.com/data_access")
      

      Note that the length of a filter can't exceed 20,000 characters.

    3. To verify you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter prepopulated.

  7. (Optional) In the Choose logs to exclude from the sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.

    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude; for an example, see the sketch after these steps.

    You can create up to 50 exclusion filters per sink. Note that the length of a filter can't exceed 20,000 characters.

  8. Select Create sink.
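
For the exclusion filter in step 7, a hedged sketch that drops a random 75% of external HTTP(S) load balancer request logs might look like the following (the resource type and sampling rate are illustrative):

resource.type="http_load_balancer" AND sample(insertId, 0.75)

Log entries that match an exclusion filter aren't routed to the destination, so this filter routes only the remaining 25% of the matching entries.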

API

To create a sink, use organizations.sinks.create or folders.sinks.create in the Logging API. Prepare the arguments to the method as follows:

  1. Set the parent parameter to be the Google Cloud organization or folder in which to create the sink. The parent must be one of the following:

    • organizations/ORGANIZATION_ID
    • folders/FOLDER_ID
  2. In the LogSink object in the method request body, do the following:

    • Set the name field to an identifier for the sink.
    • Set the destination field to the destination's path.
    • Set the filter field to match the log entries that you want to route.
    • Set the includeChildren field to true; this setting is what makes the sink aggregated.

  3. Call organizations.sinks.create or folders.sinks.create to create the sink.

  4. Retrieve the service account name from the writerIdentity field returned in the API response.

  5. Give that service account permission to write to your sink destination.

    If you don't have permission to make that change to the sink destination, then send the service account name to someone who can make that change for you.

    For more information about granting service accounts permissions for resources, see the set destination permissions section.
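
Putting these steps together, a minimal sketch of a folders.sinks.create call using curl, assuming gcloud credentials and a Cloud Storage destination (FOLDER_ID, SINK_NAME, and BUCKET_NAME are placeholders), might look like the following:

# Create an aggregated sink on a folder; the response includes writerIdentity.
curl -X POST "https://logging.googleapis.com/v2/folders/FOLDER_ID/sinks" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "SINK_NAME",
    "destination": "storage.googleapis.com/BUCKET_NAME",
    "filter": "LOG_ID(\"cloudaudit.googleapis.com/data_access\")",
    "includeChildren": true
  }'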

gcloud

To create a sink, use the logging sinks create command.

  1. Supply the sink name, sink destination, filter, and the ID of the folder or organization from which you're routing logs:

    gcloud logging sinks create SINK_NAME \
    SINK_DESTINATION --include-children \
    --folder=FOLDER_ID --log-filter="FILTER"
    

    For example, if you're creating an aggregated sink at the folder level whose destination is a BigQuery dataset, your command might look like the following:

    gcloud logging sinks create SINK_NAME \
    bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID --include-children \
    --folder=FOLDER_ID --log-filter="logName:activity"

    Notes:

    • To create a sink at the organization level, replace --folder=FOLDER_ID with --organization=ORGANIZATION_ID.

    • For the sink to include all resources within the organization, the --include-children flag must be set, even when you pass the --organization flag to the create command. If the flag isn't set, includeChildren defaults to false, and the sink routes logs only from the resource in which it was created.

    • For some examples of useful filters, see Create filters for aggregated sinks.

  2. Retrieve the sink's service account name from the command output.

  3. Give that service account permission to write to your sink destination.

    If you don't have permission to make that change to the sink destination, then send the service account name to someone who can make that change for you.

    For more information about granting service accounts permissions for resources, see the set destination permissions section.
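
For reference, an organization-level variant of the create command in step 1, routing Data Access audit logs to a Logging bucket, might look like the following sketch (the destination path and filter are illustrative):

gcloud logging sinks create SINK_NAME \
logging.googleapis.com/projects/PROJECT_ID/locations/global/buckets/BUCKET_ID --include-children \
--organization=ORGANIZATION_ID --log-filter='LOG_ID("cloudaudit.googleapis.com/data_access")'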

Create filters for aggregated sinks

Like any sink, your aggregated sink contains a filter that selects individual log entries. For examples of filters that you might use to create your aggregated sink, see Sample queries using the Logs Explorer.

Following are some examples of filter comparisons that are useful when using the aggregated sinks feature. Some examples use the following notation:

  • : is the substring operator; don't replace it with the = operator.
  • ... represents any additional filter comparisons.
  • Variables are indicated by placeholders such as PROJECT_ID. Replace them with valid values.

Note that the length of a filter can't exceed 20,000 characters.

For more details about the filtering syntax, see Logging query language.

Select the log source

To route logs from specific Cloud projects, folders, or organizations, use one of the following sample comparisons:

logName:"projects/PROJECT_ID/logs/" AND ... 
logName:("projects/PROJECT_A_ID/logs/" OR "projects/PROJECT_B_ID/logs/") AND ... 
logName:"folders/FOLDER_ID/logs/" AND ... 
logName:"organizations/ORGANIZATION_ID/logs/" AND ... 

Select the monitored resource

To route logs from only a specific monitored resource in a Cloud project, use multiple comparisons to specify the resource exactly:

logName:"projects/PROJECT_ID/logs" AND
resource.type=RESOURCE_TYPE AND
resource.labels.instance_id=INSTANCE_ID
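
For instance, with hypothetical values filled in, a filter that selects logs from a single Compute Engine VM instance might look like the following:

logName:"projects/my-project/logs" AND
resource.type="gce_instance" AND
resource.labels.instance_id="1234567890123456789"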

For a list of resource types, see Monitored resource types.

Select a sample of log entries

To route a random sample of log entries, add the sample built-in function. For example, to route only ten percent of the log entries matching your current filter, use this addition:

sample(insertId, 0.10) AND ...

For more information, see the sample function.

For more information about Cloud Logging filters, see Logging query language.

Set destination permissions

This section describes how to grant Logging the Identity and Access Management permissions to write logs to your sink's destination. For the full list of Logging roles and permissions, see Access control.

When you create a sink, Logging creates a new service account for the sink, called a unique writer identity. Your sink destination must permit this service account to write log entries. You can't manage this service account directly, because it's owned and managed by Cloud Logging; the service account is deleted when the sink is deleted.

To route logs to a resource protected by a service perimeter, you must add the service account for that sink to an access level and then assign it to the destination service perimeter. This isn't necessary for non-aggregated sinks. For details, see VPC Service Controls: Cloud Logging.

To set permissions for your sink to route to its destination, do the following:

Console

  1. Obtain the sink's writer identity—an email address—from the new sink. Go to the Log Router page, and select menu > View sink details. The writer identity appears in the Sink details panel.

  2. If you have Owner access to the destination, add the service account to the destination in the following way:

    • For Cloud Storage destinations, add the sink's writer identity to your Cloud Storage bucket and give it the Storage Object Creator role.
    • For BigQuery destinations, add the sink's writer identity to your dataset and give it the BigQuery Data Editor role.
    • For Pub/Sub, including Splunk, add the sink's writer identity to your topic and give it the Pub/Sub Publisher role.
    • For Logging bucket destinations in different Cloud projects, add the sink's writer identity to the destination log bucket and give it the Logs Bucket Writer (roles/logging.bucketWriter) role.

    If you don't have Owner access to the sink destination, send the writer identity service account name to someone who has that ability. That person should then follow the instructions in the previous step to add the writer identity to the sink destination.

API

  1. Get the service account from the writerIdentity field in your sink by calling the API method organizations.sinks.get or folders.sinks.get.

    The service account looks similar to the following:

    serviceAccount:p123456789012-12345@gcp-sa-logging.iam.gserviceaccount.com
    
  2. Add the new sink's writer identity to the destination's permission list, giving the writer permission to write to the destination.

    If you have IAM Owner access to the destination, add the service account to the destination in the following way:

    • For Cloud Storage destinations, add the sink's writer identity to your Cloud Storage bucket and give it the Storage Object Creator role.
    • For BigQuery destinations, add the sink's writer identity to your dataset and give it the BigQuery Data Editor role.
    • For Pub/Sub, including Splunk, add the sink's writer identity to your topic and give it the Pub/Sub Publisher role.
    • For Logging bucket destinations in different Cloud projects, add the sink's writer identity to the destination log bucket and give it the Logs Bucket Writer (roles/logging.bucketWriter) role.

    If you don't have Owner access to the sink destination, send the writer identity service account name to someone who has that ability. That person should then follow the instructions in the previous step to add the writer identity to the sink destination.
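
As a sketch of step 1, retrieving the sink and its writerIdentity with curl, assuming gcloud credentials (FOLDER_ID and SINK_NAME are placeholders), might look like the following:

# The writerIdentity field is in the returned LogSink object.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://logging.googleapis.com/v2/folders/FOLDER_ID/sinks/SINK_NAME"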

gcloud

  1. Obtain the sink's writer identity—an email address—from the writerIdentity field by describing the sink:

    gcloud logging sinks describe SINK_NAME
    

    The service account looks similar to the following:

    serviceAccount:p123456789012-12345@gcp-sa-logging.iam.gserviceaccount.com
    
  2. If you have IAM Owner access to the destination, add the service account to the destination in the following way:

    • For Cloud Storage destinations, add the sink's writer identity to your Cloud Storage bucket and give it the Storage Object Creator role.
    • For BigQuery destinations, add the sink's writer identity to your dataset and give it the BigQuery Data Editor role.
    • For Pub/Sub, including Splunk, add the sink's writer identity to your topic and give it the Pub/Sub Publisher role.
    • For Logging bucket destinations in different Cloud projects, add the sink's writer identity to the destination log bucket and give it the Logs Bucket Writer (roles/logging.bucketWriter) role.

    If you don't have Owner access to the sink destination, send the writer identity service account name to someone who has that ability. That person should then follow the instructions in the previous step to add the writer identity to the sink destination.

    For example, if you're routing logs between Logging buckets in different Cloud projects, you would add roles/logging.bucketWriter to the service account as follows:

    1. Get the Identity and Access Management policy for the destination Cloud project and write it to a local file in JSON format:

      gcloud projects get-iam-policy DESTINATION_PROJECT_ID --format json > output.json
      
    2. Add an IAM condition that lets the service account write only to the destination you created. For example:

      {
      "bindings": [
       {
         "members": [
           "user:username@gmail.com"
         ],
         "role": "roles/owner"
       },
       {
         "members": [
           "SERVICE_ACCOUNT"
         ],
         "role": "roles/logging.bucketWriter",
         "condition": {
             "title": "Bucket writer condition example",
             "description": "Grants logging.bucketWriter role to service account SERVICE_ACCOUNT used by sink SINK_NAME",
             "expression":
               "resource.name.endsWith(\'locations/global/buckets/BUCKET_ID\')"
         }
       }
      ],
      "etag": "BwWd_6eERR4=",
      "version": 3
      }
    3. Update the IAM policy:

      gcloud projects set-iam-policy DESTINATION_PROJECT_ID output.json
      
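
For other destinations, a single policy-binding command is usually enough. For example, assuming the destination is a Pub/Sub topic, a sketch of the grant might look like the following (TOPIC_ID is a placeholder; SERVICE_ACCOUNT is the sink's writer identity, including the serviceAccount: prefix):

# Grant the sink's writer identity permission to publish to the topic.
gcloud pubsub topics add-iam-policy-binding TOPIC_ID \
  --member="SERVICE_ACCOUNT" \
  --role="roles/pubsub.publisher"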

What's next