Known issues for Eventarc

This page lists known issues for Eventarc.

You can also check for existing issues or open new issues in the public issue trackers.

  • Newly created triggers can take up to two minutes to become operational.

  • If you update a trigger before an already generated event is delivered, the event is routed according to the previous filters and delivered to the original destination within three days of the event's generation. The new filters apply only to events generated after your update.
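
    If you need to change a trigger's filters, you can do so with the Google Cloud CLI. This is a minimal sketch; the trigger name, location, and filter values are placeholders:

        # Replace the trigger's event filters. Events generated under the old
        # filters can still be delivered to the original destination for up to
        # three days after generation.
        gcloud eventarc triggers update my-trigger \
            --location=us-central1 \
            --event-filters="type=google.cloud.audit.log.v1.written" \
            --event-filters="serviceName=storage.googleapis.com" \
            --event-filters="methodName=storage.objects.create"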

  • Some Google Cloud event sources are known to publish duplicate Cloud Audit Logs entries. When duplicate logs are published, duplicate events are delivered to destinations. To avoid acting on these duplicates, create triggers that filter on fields that ensure the event is unique (see the sketch after this list). This applies to the following event types:

    • Cloud Storage (serviceName: storage.googleapis.com, methodName: storage.buckets.list)
    • Compute Engine (serviceName: compute.googleapis.com, methodName: beta.compute.instances.insert)
    • BigQuery (serviceName: bigquery.googleapis.com)

    Note that since Workflows handles event deduplication, you don't have to ensure that the event is unique when you create a trigger for Workflows.
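
    For example, adding a resourceName path pattern restricts an audit log trigger to a specific resource, which helps keep matched events unique. This is a sketch that assumes a Cloud Run destination; the service, bucket, project, and service account names are placeholders:

        gcloud eventarc triggers create storage-audit-trigger \
            --location=us-central1 \
            --destination-run-service=my-service \
            --destination-run-region=us-central1 \
            --event-filters="type=google.cloud.audit.log.v1.written" \
            --event-filters="serviceName=storage.googleapis.com" \
            --event-filters="methodName=storage.buckets.list" \
            --event-filters-path-pattern="resourceName=/projects/_/buckets/my-bucket" \
            --service-account=trigger-sa@my-project.iam.gserviceaccount.com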

  • Cross-project triggers are not yet supported. The service that receives the events for the trigger must be in the same Google Cloud project as the trigger. If requests to your service are triggered by messages published to a Pub/Sub topic, the topic must also be in the same project as the trigger. See Route events across Google Cloud projects.
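
    For example, when a Pub/Sub trigger reuses an existing topic, the topic passed to --transport-topic must be in the same project as the trigger. A minimal sketch with placeholder names, assuming a Cloud Run destination:

        gcloud eventarc triggers create pubsub-trigger \
            --location=us-central1 \
            --destination-run-service=my-service \
            --destination-run-region=us-central1 \
            --event-filters="type=google.cloud.pubsub.topic.v1.messagePublished" \
            --transport-topic=projects/my-project/topics/my-topic \
            --service-account=trigger-sa@my-project.iam.gserviceaccount.com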

  • Regardless of where the virtual machine instance is actually located, Cloud Audit Logs triggers for Compute Engine result in events that originate from a single region: us-central1. When creating your trigger, ensure that the trigger location is set to either us-central1 or global.
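
    A sketch of a Compute Engine audit log trigger created in us-central1, assuming a Cloud Run destination; the service, project, and service account names are placeholders:

        # The trigger location must be us-central1 (or global), regardless of
        # where the instances themselves run.
        gcloud eventarc triggers create vm-insert-trigger \
            --location=us-central1 \
            --destination-run-service=my-service \
            --destination-run-region=us-central1 \
            --event-filters="type=google.cloud.audit.log.v1.written" \
            --event-filters="serviceName=compute.googleapis.com" \
            --event-filters="methodName=beta.compute.instances.insert" \
            --service-account=trigger-sa@my-project.iam.gserviceaccount.com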

  • For some event providers, you can choose to encode the event payload as application/json or application/protobuf. An event payload formatted in JSON is larger than one formatted in Protobuf, which might affect reliability depending on your event destination and its limits on event size. When this limit is reached, the event is retried according to the retry characteristics of Eventarc's transport layer, Pub/Sub. Learn how to handle Pub/Sub message failures if the maximum number of retries is exceeded.
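
    For providers that support both encodings, you can request the smaller Protobuf payload when you create the trigger. A sketch assuming a Cloud Storage provider and a Cloud Run destination; the bucket, service, and service account names are placeholders:

        gcloud eventarc triggers create storage-trigger \
            --location=us-central1 \
            --destination-run-service=my-service \
            --destination-run-region=us-central1 \
            --event-filters="type=google.cloud.storage.object.v1.finalized" \
            --event-filters="bucket=my-bucket" \
            --event-data-content-type="application/protobuf" \
            --service-account=trigger-sa@my-project.iam.gserviceaccount.com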

  • When using Workflows as a destination for an Eventarc trigger, events larger than the maximum Workflows arguments size fail to trigger workflow executions. For more information, see Quotas and limits.
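
    A sketch of a trigger with a Workflows destination; the workflow and service account names are placeholders, and events that exceed the Workflows arguments limit will not start an execution:

        gcloud eventarc triggers create workflows-trigger \
            --location=us-central1 \
            --destination-workflow=my-workflow \
            --destination-workflow-location=us-central1 \
            --event-filters="type=google.cloud.pubsub.topic.v1.messagePublished" \
            --service-account=trigger-sa@my-project.iam.gserviceaccount.com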

  • For triggers that use Cloud Audit Logs, the maximum nesting depth of each structured log entry is 64 levels. Log events that exceed this limit are dropped and not delivered by Eventarc.

  • When creating an Eventarc trigger for the first time in a Google Cloud project, there might be a delay in provisioning the Eventarc service agent. This issue can usually be resolved by attempting to create the trigger again. For more information, see Permission denied errors.
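
    One way to avoid this race is to provision the service agent before creating the first trigger. A sketch; my-project is a placeholder:

        # Force creation of the Eventarc service agent in the project.
        gcloud beta services identity create \
            --service=eventarc.googleapis.com \
            --project=my-project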