Extend Datastore with Cloud Functions (2nd gen)

With Cloud Functions and Eventarc, you can deploy code to handle events triggered by changes in your Firestore in Datastore mode database. This allows you to add server-side functionality without running your own servers.

Datastore mode triggers

Eventarc supports the following Firestore in Datastore mode event triggers to let you create Cloud Functions (2nd gen) handlers tied to Firestore in Datastore mode events:

  • google.cloud.datastore.entity.v1.created: Triggered when an entity is written for the first time.
  • google.cloud.datastore.entity.v1.updated: Triggered when an entity already exists and has any value changed.
  • google.cloud.datastore.entity.v1.deleted: Triggered when an entity is deleted.
  • google.cloud.datastore.entity.v1.written: Triggered when an entity is created, updated, or deleted.
  • google.cloud.datastore.entity.v1.created.withAuthContext: Same as created, but adds authentication information.
  • google.cloud.datastore.entity.v1.updated.withAuthContext: Same as updated, but adds authentication information.
  • google.cloud.datastore.entity.v1.deleted.withAuthContext: Same as deleted, but adds authentication information.
  • google.cloud.datastore.entity.v1.written.withAuthContext: Same as written, but adds authentication information.

Datastore mode event triggers respond only to entity changes. An update to a Datastore mode entity where data is unchanged (a no-op write) does not generate an update or write event. You cannot generate events for only specific properties.

Include authentication context in the event

To include additional authentication information about the event, use an event trigger with the withAuthContext extension. This extension adds information about the principal that triggered the event: the authtype and authid attributes, in addition to the data in the base event. See the authcontext reference for more information about attribute values.

Write an entity-triggered function

To write a function that responds to Firestore in Datastore mode events, prepare to specify the following during deployment:

  • a trigger event type
  • a trigger event filter to select the entities associated with the function
  • the function code to run

Trigger event filters

When you specify an event filter, you can specify either an exact entity match or a path pattern. Use a path pattern to match multiple entities with the wildcards * or **.

For example, you can specify an exact entity match to respond to changes to the following entity:

users/marie

Use wildcards, * or **, to respond to changes in entities that match a pattern. The * wildcard matches a single segment, and the ** multi-segment wildcard matches zero or more segments in the pattern.

For single segment matches (*) you can also use a named capture group, such as users/{userId}.

The following path patterns are valid:

  • users/* or users/{userId}: Matches all entities of kind users. Does not match descendant entities such as /users/marie/messages/33e2IxYBD9enzS50SJ68.
  • users/**: Matches all entities of kind users and all descendant entities such as /users/marie/messages/33e2IxYBD9enzS50SJ68.

To learn more about path patterns, see Eventarc path patterns.

Your trigger must always point to an entity, even if you're using a wildcard. See the following examples:

  • users/{userId=*}/{messages=*} is not valid because the final segment, {messages=*}, is a kind ID rather than an entity ID.

  • users/{userId=*}/{messages}/{messageId=*} is valid because {messageId=*} always points to an entity.

Character escaping

This section describes situations that require you to escape characters in kind IDs and entity IDs. Escaping a character lets the event filter correctly interpret the ID. A Java sketch of these escaping rules follows the list.

  • If a kind ID or entity ID includes a ~ or / character, you must escape the ID in your event filter. To escape an ID, use the format __escENCODED_ID__. Replace ENCODED_ID with a kind ID or entity ID that has all ~ and / characters replaced by the following encodings:

    • ~: ~0
    • /: ~1

    For example, the kind ID users/profile becomes __escusers~1profile__. An example path pattern with this kind ID is __escusers~1profile__/{userId}.

  • If a kind ID or entity ID is exactly . or .., you must escape the ID as follows:

    • .: __esc~2__
    • ..: __esc~2~2__

    You need to escape the . character only if the entire ID is . or ..; IDs that merely contain a period, such as the kind ID customers.info, do not require escaping.

  • If your kind or entity ID is a numeric value instead of a string value, you must escape the ID with __idNUMERIC_VALUE__. For example, the path pattern for an entity of kind 111 and entity ID 222 is __id111__/__id222__.

  • If you migrated from Legacy Cloud Datastore to Firestore in Datastore mode, your database might contain legacy IDs in a non-UTF8 encoding. You must escape these IDs with __bytesBASE64_ENCODING__. Replace BASE64_ENCODING with the base-64 encoding of the ID. For example, the path pattern Task/{task} with escaping for non-UTF8 kind ID Task becomes __bytesVGFzaw==__/{task}.
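
The following Java sketch illustrates these escaping rules. DatastoreIdEscaper is a hypothetical helper shown for illustration only; it is not part of any Google client library.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class DatastoreIdEscaper {

  // Escapes a string kind ID or entity ID for use in an Eventarc event filter.
  static String escape(String id) {
    if (id.equals(".")) {
      return "__esc~2__";
    }
    if (id.equals("..")) {
      return "__esc~2~2__";
    }
    if (id.contains("~") || id.contains("/")) {
      // Encode ~ before / so the ~1 introduced for / is not re-encoded.
      return "__esc" + id.replace("~", "~0").replace("/", "~1") + "__";
    }
    return id; // No escaping required.
  }

  // Escapes a numeric kind ID or entity ID, for example 111 becomes __id111__.
  static String escapeNumeric(long id) {
    return "__id" + id + "__";
  }

  // Escapes a legacy non-UTF-8 ID given its raw bytes.
  static String escapeBytes(byte[] rawId) {
    return "__bytes" + Base64.getEncoder().encodeToString(rawId) + "__";
  }

  public static void main(String[] args) {
    System.out.println(escape("users/profile")); // __escusers~1profile__
    System.out.println(escape("."));             // __esc~2__
    System.out.println(escapeNumeric(111));      // __id111__
    System.out.println(escapeBytes("Task".getBytes(StandardCharsets.UTF_8))); // __bytesVGFzaw==__
  }
}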

Example functions

The following sample demonstrates how to receive Datastore mode events. To work with the data involved in an event, look at the value and old_value fields.

  • value: An EntityResult object that contains a post-operation entity snapshot. This field is not populated for delete events.
  • old_value: An EntityResult object that contains a pre-operation entity snapshot. This field is only populated for update and delete events.

Java

To learn how to install and use the client library for Datastore mode, see Datastore mode client libraries. For more information, see the Datastore mode Java API reference documentation.

To authenticate to Datastore mode, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.functions.CloudEventsFunction;
import com.google.events.cloud.datastore.v1.EntityEventData;
import com.google.protobuf.InvalidProtocolBufferException;
import io.cloudevents.CloudEvent;
import java.util.logging.Logger;

public class Datastore implements CloudEventsFunction {
  private static final Logger logger = Logger.getLogger(Datastore.class.getName());

  @Override
  public void accept(CloudEvent event) throws InvalidProtocolBufferException {
    // Decode the protobuf payload into the generated EntityEventData type.
    EntityEventData datastoreEventData = EntityEventData.parseFrom(event.getData().toBytes());

    logger.info("Function triggered by event on: " + event.getSource());
    logger.info("Event type: " + event.getType());

    // Pre-operation snapshot; not populated for created events.
    logger.info("Old value:");
    logger.info(datastoreEventData.getOldValue().toString());

    // Post-operation snapshot; not populated for deleted events.
    logger.info("New value:");
    logger.info(datastoreEventData.getValue().toString());
  }
}
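
To go beyond logging, you can read individual properties from the snapshots through the generated protobuf accessors. The following is a minimal sketch, assuming the entity has a string property named name (the property name is illustrative):

// Inside accept, after parsing datastoreEventData: the post-operation snapshot
// exposes the entity, whose properties form a map of property name to Value.
com.google.events.cloud.datastore.v1.Value nameProperty =
    datastoreEventData.getValue().getEntity().getPropertiesMap().get("name");
if (nameProperty != null) {
  logger.info("name property: " + nameProperty.getStringValue());
}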

Include the proto dependencies in your source

You must include the Datastore mode data.proto file in the source directory for your function. data.proto imports additional proto files, such as struct.proto, which you must also include in your source directory.

Use the same directory structure for the dependencies. For example, place struct.proto within google/protobuf.

These files are required to decode event data. If your function source does not include these files, it returns an error when it runs.
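
As an illustration, a Java function's source directory might be laid out as follows. This is a sketch only; the exact set of imported protos comes from the import statements in data.proto, and struct.proto is shown as one example.

function-source/
  pom.xml
  src/main/java/Datastore.java
  data.proto
  google/
    protobuf/
      struct.proto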

Event attributes

Each event includes data attributes that describe the event, such as the time the event was triggered. Firestore in Datastore mode adds attributes about the database and entity involved in the event. You can access these attributes as follows:

Java
logger.info("Event time " + event.getTime());
logger.info("Event project: " + event.getExtension("project"));
logger.info("Event location: " + event.getExtension("location"));
logger.info("Database name: " + event.getExtension("database"));
logger.info("Database namespace: " + event.getExtension("namespace"));
logger.info("Database entity: " + event.getExtension("entity"));
// For withAuthContext events
logger.info("Auth information: " + event.getExtension("authid"));
logger.info("Auth information: " + event.getExtension("authtype"));

Deploy a function

Users deploying Cloud Functions must have the Cloud Functions Developer IAM role or a role that includes the same permissions. See also Additional configuration for deployment.

You can deploy a function using either the gcloud CLI or the Google Cloud console. The example below demonstrates deployment with the gcloud CLI. For details on deployment with the Google Cloud console, see Deploy Cloud Functions.

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Use the gcloud functions deploy command to deploy a function:

    gcloud functions deploy FUNCTION_NAME \
    --gen2 \
    --region=FUNCTION_LOCATION \
    --trigger-location=TRIGGER_LOCATION \
    --runtime=RUNTIME \
    --source=SOURCE_LOCATION \
    --entry-point=CODE_ENTRYPOINT \
    --trigger-event-filters="type=EVENT_FILTER_TYPE" \
    --trigger-event-filters="database=DATABASE" \
    --trigger-event-filters="namespace=NAMESPACE" \
    --trigger-event-filters-path-pattern="entity=ENTITY_OR_PATH"

    The first argument, FUNCTION_NAME, is a name for your deployed function. The function name must start with a letter followed by up to 62 letters, numbers, hyphens, or underscores, and must end with a letter or a number. Replace FUNCTION_NAME with a valid function name. Then, add the following flags:

    • The --gen2 flag specifies that you want to deploy to Cloud Functions (2nd gen). Omitting this flag results in deployment to Cloud Functions (1st gen).

    • The --region=FUNCTION_LOCATION flag specifies the region in which to deploy your function.

      To maximize proximity, set FUNCTION_LOCATION to a region near your Firestore database. If your Firestore database is in a multi-region location, set the value to us-central1 for databases in nam5 and to europe-west4 for databases in eur3. For regional Firestore locations, set it to the same region.

    • The --trigger-location=TRIGGER_LOCATION flag specifies the location of the trigger. You must set TRIGGER_LOCATION to the location of your Datastore mode database.

    • The --runtime=RUNTIME flag specifies which language runtime your function uses. Cloud Functions supports several runtimes. See Runtimes for more information. Set RUNTIME to a supported runtime.

    • The --source=SOURCE_LOCATION flag specifies the location of your function source code, such as a local directory, a Cloud Storage bucket, or a Cloud Source Repositories repository. Set SOURCE_LOCATION to the location of your function source code.

    • The --entry-point=CODE_ENTRYPOINT flag specifies the entry point to your function in your source code. This is the code that your function executes when it runs. You must set CODE_ENTRYPOINT to a function name or fully-qualified class name that exists in your source code. See Function entry point for more information.

    • The --trigger-event-filters flags define the event filter, which includes the trigger type and the entity or path that triggers events. Set the following attribute values to define your event filter:

      • type=EVENT_FILTER_TYPE: Firestore in Datastore mode supports the following event types:

        • google.cloud.datastore.entity.v1.created: event is sent when an entity is written for the first time.
        • google.cloud.datastore.entity.v1.updated: event is sent when an entity already exists and has any value changed.
        • google.cloud.datastore.entity.v1.deleted: event is sent when an entity is deleted.
        • google.cloud.datastore.entity.v1.written: event is sent when an entity is created, updated, or deleted.
        • google.cloud.datastore.entity.v1.created.withAuthContext: event is sent when an entity is written for the first time. Includes additional authentication information.
        • google.cloud.datastore.entity.v1.updated.withAuthContext: event is sent when an entity already exists and has any value changed. Includes additional authentication information.
        • google.cloud.datastore.entity.v1.deleted.withAuthContext: event is sent when an entity is deleted. Includes additional authentication information.
        • google.cloud.datastore.entity.v1.written.withAuthContext: event is sent when an entity is created, updated, or deleted. Includes additional authentication information.

        Set EVENT_FILTER_TYPE to one of these event types.

      • database=DATABASE: the Firestore database. For the default database name, set DATABASE to (default).

      • namespace=NAMESPACE: the database namespace. For the default namespace, set NAMESPACE to (default). Remove the flag to match any namespace.

      • entity=ENTITY_OR_PATH: the database path that triggers events when data is created, updated, or deleted. Accepted values for ENTITY_OR_PATH are:

        • An exact match; for example, --trigger-event-filters="entity='users/marie'"
        • A path pattern; for example, --trigger-event-filters-path-pattern="entity='users/*'". For more information, see Understand path patterns.

      You can optionally specify additional configuration, networking, and security options when you deploy a function.

      For a complete reference on the deployment command and its flags, see the gcloud functions deploy documentation.

Example deployments

The following examples demonstrate deployments with the Google Cloud CLI.

Deploy a function for a database in the us-west2 region:

gcloud functions deploy gcfv2-trigger-datastore-node \
--gen2 \
--region=us-west2 \
--trigger-location=us-west2 \
--runtime=nodejs18 \
--source=gs://example_bucket-1/datastoreEventFunction.zip \
--entry-point=makeUpperCase \
--trigger-event-filters=type=google.cloud.datastore.entity.v1.written \
--trigger-event-filters=database='(default)' \
--trigger-event-filters-path-pattern="entity='messages/{pushId}'"

Deploy a function for a database in the nam5 multi-region:

gcloud functions deploy gcfv2-trigger-datastore-python \
--gen2 \
--region=us-central1 \
--trigger-location=nam5 \
--runtime=python311 \
--source=gs://example_bucket-1/datastoreEventFunction.zip \
--entry-point=make_upper_case \
--trigger-event-filters=type=google.cloud.datastore.entity.v1.written.withAuthContext \
--trigger-event-filters=database='(default)' \
--trigger-event-filters-path-pattern="entity='messages/{pushId}'"

Limitations

Note the following limitations for Firestore in Datastore mode triggers for Cloud Functions:

  • Ordering is not guaranteed. Rapid changes can trigger function invocations in an unexpected order.
  • Events are delivered at least once, and a single event can result in multiple function invocations. Avoid depending on exactly-once mechanics, and write idempotent functions (see the sketch after this list).
  • Firestore in Datastore mode requires Cloud Functions (2nd gen). Cloud Functions (1st gen) does not support Datastore mode.
  • Cloud Functions (1st gen) only works with the "(default)" database and does not support Firestore named databases. Use Cloud Functions (2nd gen) to configure events for named databases.
  • A trigger is associated with a single database. You cannot create a trigger that matches multiple databases.
  • Deleting a database does not automatically delete any triggers for that database. The trigger stops delivering events but continues to exist until you delete the trigger.
  • If a matched event exceeds the maximum request size, the event might not be delivered to Cloud Functions (1st gen).
    • Events not delivered because of request size are logged in platform logs and count towards the log usage for the project.
    • You can find these logs in the Logs Explorer with the message "Event cannot deliver to Cloud function due to size exceeding the limit for 1st gen..." at error severity. You can find the function name under the functionName field. If the receiveTimestamp field is within the past hour, you can infer the actual event content by reading the entity in question with snapshots before and after the timestamp.
    • To avoid these undelivered events, you can:
      • Migrate and upgrade to Cloud Functions (2nd gen)
      • Reduce the size of the entity
      • Delete the Cloud Functions in question
    • You can turn off the logging itself using exclusions, but note that the oversized events will still not be delivered.
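
Because delivery is at least once, an idempotent handler typically deduplicates on the CloudEvent ID. The following is a minimal Java sketch; the in-memory set is for illustration only, since real deduplication state would live in durable storage such as a database.

import com.google.cloud.functions.CloudEventsFunction;
import io.cloudevents.CloudEvent;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.logging.Logger;

public class IdempotentHandler implements CloudEventsFunction {
  private static final Logger logger = Logger.getLogger(IdempotentHandler.class.getName());

  // Illustration only: an in-memory set survives only per instance; production
  // code would record processed event IDs in durable storage.
  private static final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

  @Override
  public void accept(CloudEvent event) {
    // Every CloudEvent carries a unique ID; skip redelivered events.
    if (!processedEventIds.add(event.getId())) {
      logger.info("Duplicate delivery of event " + event.getId() + ", skipping");
      return;
    }
    logger.info("Processing event " + event.getId());
    // ... handle the event; all effects should be safe to repeat ...
  }
}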

Eventarc and Firestore in Datastore mode locations

Eventarc does not support multi-regions for Firestore event triggers, but you can still create triggers for Firestore databases in multi-region locations. Eventarc maps Firestore multi-region locations to the following Eventarc regions:

  • nam5: us-central1
  • eur3: europe-west4

What's next