Troubleshoot routing and storing logs

This document explains common routing and storage issues and how to use the Google Cloud console to view and troubleshoot configuration mistakes or unexpected results.

For general information about using logs in your sink destinations, see View logs in sink destinations.

Troubleshoot routing logs

This section describes how to troubleshoot common issues when routing your logs.

Destination contains unwanted logs

You are viewing the logs routed to a destination and determine that the destination contains unwanted logs.

To resolve this condition, update the exclusion filters for your sinks that route logs to the destination. Exclusion filters let you exclude selected logs from being routed to a destination.

For example, assume that you create an aggregated sink to route logs in an organization to a destination. To exclude the logs from a specific project from being routed to the destination, add the following exclusion filter to the sink:

logName:projects/PROJECT_ID

You can also exclude logs from multiple projects by using the logical-OR operator to join logName clauses.
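For example, the following exclusion filter joins two logName clauses with the logical-OR operator; the project IDs are placeholders:

```
logName:projects/PROJECT_A_ID OR logName:projects/PROJECT_B_ID
```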

Destination is missing logs

Perhaps the most common sink-related issue is that logs seem to be missing from a sink destination.

In some cases, an error isn't generated but you might notice that logs are unavailable when you try to access them in your destination. If you suspect that your sink isn't properly routing logs, then check your sink's system log-based metrics:

  • exports/byte_count: Number of bytes in log entries that were routed.
  • exports/log_entry_count: Number of log entries that were routed.
  • exports/error_count: Number of log entries that failed to be routed.

The metrics have labels that record the counts by sink name and destination name, letting you know whether your sink is routing logs data successfully or failing. For details about how to view metrics, see View log-based metrics.

If your sink metrics indicate that your sink isn't performing as you expected, here are some possible reasons and what to do about them:

Latency

  • No matching log entries have been received since you created or updated your sink; only new log entries are routed.

    Try waiting an hour and check your destination again.

  • Matching log entries are late-arriving.

    There can be a delay before you can view your logs in the destination. Late-arriving logs are especially common for sinks that use Cloud Storage buckets as their destinations. Try waiting a few hours and check your destination again.

Viewing scope/filter is incorrect

  • The scope you're using to view logs in Logging bucket destinations is incorrect.

    Scope your search to one or more storage views as follows:

    • If you're using the Logs Explorer, then use the Refine scope button.

    • If you're using the gcloud CLI, then use the gcloud logging read command with the --bucket, --location, and --view flags; the default view on a log bucket is _AllLogs.

  • The time range you're using to select and view data in your sink destination is too narrow.

    Try broadening the time range that you're using when selecting data in your sink destination.
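As a sketch of the gcloud CLI approach above, the following command scopes a read to a single log view; the bucket ID, location, and filter are placeholders to replace with your own values:

```shell
gcloud logging read 'severity>=ERROR' \
  --bucket=BUCKET_ID \
  --location=global \
  --view=_AllLogs \
  --limit=10
```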

Error in sink filter

  • The sink's filter is incorrect and not capturing the logs you expected to see in your destination.

    • Edit your sink's filter by using the Log Router in the Google Cloud console. To verify you entered the correct filter, select Preview logs in the Edit sink panel. This opens the Logs Explorer in a new tab with the filter pre-populated. For instructions about viewing and managing your sinks, see Manage sinks.

View errors

For each of the supported sink destinations, Logging provides error messages for improperly configured sinks.

There are two ways to view these sink-related errors; these methods are described in the following sections:

  • View the error logs generated for the sink.
  • Receive sink error notifications by email.

Error logs

The recommended method for inspecting your sink-related errors in detail is to view the error log entries generated by the sink. For details about viewing logs, see View logs by using the Logs Explorer.

You can use the following query in the query-editor pane in the Logs Explorer to review your sink's error logs. The same query works in the Logging API and the gcloud CLI.

Before you copy the query, replace the variable SINK_NAME with the name of the sink you're trying to troubleshoot. You can find your sink's name on the Log Router page in the Google Cloud console.

logName:"logging.googleapis.com%2Fsink_error"
resource.type="logging_sink"
resource.labels.name="SINK_NAME"
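As a sketch, the same query can be run from the gcloud CLI; this example assumes a sink named my-sink-123 and limits the output to recent entries:

```shell
gcloud logging read \
  'logName:"logging.googleapis.com%2Fsink_error" AND resource.type="logging_sink" AND resource.labels.name="my-sink-123"' \
  --freshness=1d \
  --limit=5
```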

For example, if your sink's name is my-sink-123, then the log entry might look similar to the following:

{
  "textPayload": "Cloud Logging export config error in my-logs-project, export sink my-sink-123: dataset_not_found ()",
  "insertId": "12akhzyb14452",
  "resource": {
    "type": "logging_sink",
    "labels": {
      "project_id": "my-logs-test-project",
      "destination": "",
      "name": "my-sink-123"
    }
  },
  "timestamp": "2021-08-02T17:01:28.620961700Z",
  "severity": "ERROR",
  "labels": {
    "error_code": "dataset_not_found",
    ...
    "destination": "bigquery.googleapis.com/projects/my-logs-project/datasets/my-dataset",
    "sink_id": "my-sink-123",
    "activity_type_name": "LoggingSinkConfigErrorV2"
  },
  "logName": "projects/cloud-logs-test-project/logs/logging.googleapis.com%2Fsink_error",
  "receiveTimestamp": "2021-08-02T17:01:30.148869575Z"
}

The LogEntry labels field and its nested key-value pairs help you target the source of your sink's error; they identify the affected resource, the affected sink, and the error code. The labels.error_code field contains a shorthand description of the error, letting you know which component of your sink needs reconfiguring.

To update your sink, use the Log Router.

In the navigation panel of the Google Cloud console, select Logging, and then select Log Router:

Go to Log Router

Email notifications

If you're subscribed to a Google Cloud project or its parent resource as a Technical Essential Contact, then you receive email notifications about sink configuration errors. If no Technical Essential Contacts are configured for a resource, then users granted the Project Owner role (roles/owner) on the resource receive the email notification.

The email message contains the following information:

  • Resource ID: The name of the Google Cloud project or other Google Cloud resource where the sink was configured.
  • Sink name: The name of the sink that contains the configuration error.
  • Sink destination: The full path of the sink's routing destination; for example, bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
  • Error code: Shorthand description of the error category; for example, dataset_not_found
  • Error detail: Detailed information about the error, including recommendations for troubleshooting the underlying error.

To view and manage your sinks, use the Log Router.

In the navigation panel of the Google Cloud console, select Logging, and then select Log Router:

Go to Log Router

Any sink configuration errors that apply to the resource appear in the list as a Cloud Logging sink configuration error. Each error contains a link to one of the log entries generated by the faulty sink. To examine the underlying errors in detail, see the section Error logs.

Types of sink errors

The following sections describe broad categories of sink-related errors and how you can troubleshoot them.

Incorrect destination

If you set up a sink but then see a configuration error that the destination couldn't be found when Logging attempted to route logs, here are some possible reasons:

  • Your sink's configuration contains a misspelling or other formatting error in the specified sink destination.

    You need to update the sink's configuration to properly specify the existing destination.

  • The specified destination might have been deleted.

    You can either change the sink's configuration to use a different, existing destination or recreate the destination with the same name.

In either case, to fix any issues, go to the Log Router page.

In the navigation panel of the Google Cloud console, select Logging, and then select Log Router:

Go to Log Router
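Alternatively, you can inspect and correct the destination from the gcloud CLI; the sink name and destination below are placeholders, and the destination format depends on your destination type:

```shell
# Show the sink's current configuration, including its destination.
gcloud logging sinks describe SINK_NAME

# Point the sink at an existing destination.
gcloud logging sinks update SINK_NAME \
  storage.googleapis.com/NEW_BUCKET_NAME
```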

Your sink begins routing logs when the destination is found and new logs that match your filter are received by Logging.

Managing sinks issues

If you disabled a sink to stop storing logs in a log bucket but still see logs being routed, then wait a few minutes for the changes to the sink to apply.

Permissions issues

If a sink tries to route a log entry but lacks the appropriate IAM permissions for the sink's destination, the sink reports an error, which you can view, and skips the log entry.

When you create a sink, the sink's service account must be granted the appropriate destination permissions. If you create the sink in the Google Cloud console in the same Google Cloud project, then the Google Cloud console assigns these permissions automatically. If you create the sink in a different Google Cloud project, or by using gcloud CLI or the Logging API, then you must configure the permissions manually.

If you're seeing permission-related errors for your sink, then add the necessary permissions to the destination or update your sink to use a different destination. For instructions on how to update these permissions, see Destination permissions.

There is a slight delay between creating the sink and using the sink's new service account to authorize writing to the destination. Your sink begins routing logs when any permissions are corrected and new logs that match your filter are received by Logging.
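As a sketch, you can look up the sink's service account and grant it access to its destination from the gcloud CLI; the sink and bucket names are placeholders, this example assumes a Cloud Storage destination, and the required role differs for other destination types:

```shell
# Find the sink's service account (writer identity).
gcloud logging sinks describe SINK_NAME --format='value(writerIdentity)'

# Grant that identity write access to a Cloud Storage bucket destination.
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member='serviceAccount:SERVICE_ACCOUNT_EMAIL' \
  --role='roles/storage.objectCreator'
```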

Organizational policy issues

If you're trying to route a log entry but encounter an organization policy that constrains Logging from writing to the sink's destination, then the sink can't route to the selected destination and reports an error.

If you're seeing errors related to organization policies, then you can do the following:

  • Update the organization policy for the destination to remove the constraints blocking the sink from routing log entries; this presupposes that you have the appropriate permissions to update the organization policy. For instructions, see Creating and editing policies.

  • If you can't update the organization policy, then update your sink in the Log Router page to use a compliant destination.

    In the navigation panel of the Google Cloud console, select Logging, and then select Log Router:

    Go to Log Router

Your sink begins routing logs when the organization policy no longer blocks the sink from writing to the destination and new logs that match your filter are received by Logging.

Encryption key issues

If you're using encryption keys, whether managed with Cloud Key Management Service or by you, to encrypt the data in the sink's destination, then you might see related errors. Here are some possible issues and ways to fix them:

  • Billing isn't enabled for the Google Cloud project that contains the Cloud KMS key.

    • Even if the sink was successfully created with the correct destination, this error message displays if there isn't a valid billing account associated with the Google Cloud project that contains the key.

    • Make sure there is a valid billing account linked to the Google Cloud project that contains the key. If a billing account isn't linked to the Google Cloud project, enable billing for that Google Cloud project or use a Cloud KMS key contained by a Google Cloud project that has a valid billing account linked to it.

  • The Cloud KMS key can't be found.

    • The Google Cloud project that contains the Cloud KMS key configured to encrypt the data isn't found.

    • Use a valid Cloud KMS key from an existing Google Cloud project.

  • The location of the Cloud KMS key doesn't match the location of the destination.

    • If the Cloud KMS key is located in a region that differs from the region of the destination, then encryption fails and the sink can't route data to that destination.

    • Use a Cloud KMS key whose region matches the region of the sink's destination.

  • Encryption key access is denied to the sink's service account.

    • Even if the sink was successfully created with the correct service account permissions, this error message displays if the sink destination uses an encryption key that doesn't give the service account sufficient permissions to encrypt or decrypt the data.

    • Grant the Cloud KMS CryptoKey Encrypter/Decrypter role for the service account specified in the sink's writerIdentity field for the key used in the destination. Also ensure that the Cloud KMS API is enabled.
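For example, the role can be granted from the gcloud CLI; the key, keyring, location, and service account values are placeholders:

```shell
gcloud kms keys add-iam-policy-binding KEY_NAME \
  --keyring=KEYRING_NAME \
  --location=LOCATION \
  --member='serviceAccount:SERVICE_ACCOUNT_EMAIL' \
  --role='roles/cloudkms.cryptoKeyEncrypterDecrypter'
```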

Quota issues

When sinks write logs, destination-specific quotas apply to the Google Cloud projects in which the sinks were created. If the quotas are exhausted, then the sink stops routing logs to the destination.

For example, when routing data to BigQuery, you might see an error that tells you your per-table streaming insert quota has been exceeded for a certain table in your dataset. In this case, your sink might be routing too many log entries too quickly. The same concept applies to the other supported sink destinations, for example to Pub/Sub topics.

To fix the quota exhaustion issues, decrease the amount of log data being routed by updating your sink's filter to match fewer log entries. You might use the sample function in your filter to select a fraction of the total number of log entries.
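As a sketch of using the sample function, the following command updates a hypothetical sink so that it routes roughly 10% of matching log entries; the sink name and base filter are placeholders:

```shell
gcloud logging sinks update SINK_NAME \
  --log-filter='resource.type="gce_instance" AND sample(insertId, 0.1)'
```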

Your sink begins routing logs to your destination when you've updated your sink to match fewer log entries or when your quotas are refreshed.

For details on the limits that might apply when you route logs, review the quota information for the appropriate destination.

In addition to the general sink error types, here are the most common destination-specific error types and how you can fix them.

Errors routing to Cloud Storage

The following are the most common errors when routing logs to Cloud Storage:

  • Late-arriving log entries:

    • Routed log entries are saved to Cloud Storage buckets in hourly batches. It might take 2 to 3 hours before the first entries begin to appear.

    • Routed log file shards with the suffix An ("Append") hold log entries that arrived late. If the Cloud Storage destination experiences an outage, then Cloud Logging buffers the data until the outage is over.

  • Unable to grant correct permissions to the destination:

    • Even if the sink was successfully created with the correct service account permissions, this error message displays if the access control model for the Cloud Storage bucket was set to uniform access when the bucket was created.

    • For existing Cloud Storage buckets, you can change the access control model for the first 90 days after bucket creation by using the Permissions tab. For new buckets, select the Fine-grained access control model during bucket creation. For details, see Creating Cloud Storage buckets.

Errors routing to BigQuery

The following are the most common errors when routing logs to BigQuery:

  • Invalid table schema:

    • Logs streamed to the table in your BigQuery dataset don't match the current table's schema. Common issues include trying to route log entries with different data types, which causes a schema mismatch. For example, one of the fields in the log entry is an integer, while a corresponding column in the schema has a string type.

    • Make sure that your log entries match the table's schema. After you fix the source of the error, you can rename your current table and let Logging create the table again.

    • BigQuery supports loading nested data into its tables. However, when loading data from Logging, the maximum nested depth limit for a column is 13 levels.

    When BigQuery identifies a schema mismatch, it creates a table within the corresponding dataset to store the error information. A table's type determines the table name. For date-sharded tables, the naming format is export_errors_YYYYMMDD. For partitioned tables, the naming format is export_errors. For information about the schema of the error tables and about how to prevent future field-type mismatches, see Mismatches in schema.

  • Log entries are outside of the permitted time boundaries:

    • Logs streamed to the partitioned BigQuery table are outside the permitted time boundaries. BigQuery doesn't accept logs that are too far in the past or future.

    • You can update your sink to route those logs to Cloud Storage and use a BigQuery load job. See the BigQuery documentation for further instructions.

  • Dataset doesn't allow the service account associated with the log sink to write to it:

    • Even if the sink was successfully created with the correct service account permissions, this error message displays if there isn't a valid billing account associated with the Google Cloud project that contains the sink destination.

    • Make sure there is a billing account linked to your Google Cloud project. If a billing account isn't linked to the sink destination Google Cloud project, enable billing for that Google Cloud project or update the sink destination so that it's located in a Google Cloud project that has a valid billing account linked to it.

  • Dataset contains duplicate log entries:

    • Duplicate log entries can occur when there are failures in streaming logs to BigQuery, including due to retries or misconfigurations. Cloud Logging deduplicates log entries with the same timestamp and insertId at query time. BigQuery doesn't eliminate duplicate log entries.

    • To ignore duplicate log entries in BigQuery, include the SELECT DISTINCT clause in your query. For example:

    SELECT DISTINCT insertId, timestamp FROM TABLE_NAME
    

Errors routing to Cloud Logging buckets

You might encounter a situation where you can see logs in the Logs Explorer that you excluded with your sink. You can still see these logs if any of the following conditions are true:

  • You're running your query in the Google Cloud project that generated the logs.

    To fix this, ensure you're running your query in the correct Google Cloud project.

  • The excluded logs were sent to multiple log buckets; you're seeing a copy of the same log you meant to exclude.

    To fix this, check your sinks in the Log Router page to ensure you aren't including the logs in other sinks' filters.

  • You have access to views in the log bucket where the logs were sent. In this case, you can see those logs by default.

    To avoid seeing these logs in the Logs Explorer, you can refine the scope of your search to your source Google Cloud project or bucket.

Troubleshoot storing logs

Why can't I delete this bucket?

If you're trying to delete a bucket, do the following:

  • Ensure that you have the correct permissions to delete the bucket. For the list of the permissions that you need, see Access control with IAM.

  • Determine whether the bucket is locked by listing the bucket's attributes. If the bucket is locked, check the bucket's retention period. You can't delete a locked bucket until all of the logs in the bucket have fulfilled the bucket's retention period.

  • Verify that the log bucket doesn't have a linked BigQuery dataset. You can't delete a log bucket with a linked dataset.

    The following error is shown in response to a delete command on a log bucket that has a linked dataset:

    FAILED_PRECONDITION: This bucket is used for advanced analytics and has an active link. The link must be deleted first before deleting the bucket
    

    To list the links associated with a log bucket, run the gcloud logging links list command or call the projects.locations.buckets.links.list API method.
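The checks above can be run from the gcloud CLI; the bucket ID and location are placeholders:

```shell
# List the bucket's attributes, including its locked state and retention period.
gcloud logging buckets describe BUCKET_ID --location=global

# List any linked BigQuery datasets that block deletion.
gcloud logging links list --bucket=BUCKET_ID --location=global
```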

Which service accounts are routing logs to my bucket?

To determine if any service accounts have IAM permissions to route logs to your bucket, do the following:

  1. In the navigation panel of the Google Cloud console, select IAM:

    Go to IAM

  2. From the Permissions tab, view by Roles. You see a table with all the IAM roles and principals associated with your Google Cloud project.

  3. In the table's Filter text box, enter Logs Bucket Writer.

    You see any principals with the Logs Bucket Writer role. If a principal is a service account, its ID contains the string gserviceaccount.com.

  4. Optional: If you want to remove a service account from being able to route logs to your Google Cloud project, select the check box for the service account and click Remove.

Why do I see logs for a Google Cloud project even though I excluded them from my _Default sink?

You might be viewing logs in a log bucket in a centralized Google Cloud project, which aggregates logs from across your organization.

If you're using the Logs Explorer to access these logs and see logs that you excluded from the _Default sink, then your view might be scoped to the Google Cloud project level.

To fix this issue, select Scope by storage in the Refine scope panel and then select the _Default bucket in your Google Cloud project. You shouldn't see the excluded logs anymore.