Google Cloud Firestore triggers (1st gen)
Cloud Run functions can handle events in Firestore in the same Google Cloud project as the function. You can read or update Firestore in response to these events using the Firestore APIs and client libraries.
In a typical lifecycle, a Firestore function does the following:

1. Waits for changes to a particular document.
2. Triggers when an event occurs and performs its tasks.
3. Receives a data object with a snapshot of the affected document. For `write` or `update` events, the data object contains snapshots representing document state before and after the triggering event.
Event types
Firestore supports `create`, `update`, `delete`, and `write` events. The `write` event encompasses all modifications to a document.
| Event type | Trigger |
| --- | --- |
| `providers/cloud.firestore/eventTypes/document.create` (default) | Triggered when a document is written to for the first time. |
| `providers/cloud.firestore/eventTypes/document.update` | Triggered when a document already exists and has any value changed. |
| `providers/cloud.firestore/eventTypes/document.delete` | Triggered when a document with data is deleted. |
| `providers/cloud.firestore/eventTypes/document.write` | Triggered when a document is created, updated or deleted. |
Wildcards are written in triggers using curly braces, as follows:
"projects/YOUR_PROJECT_ID/databases/(default)/documents/collection/{document_wildcard}"
Specifying the document path
To trigger your function, specify a document path to listen to. Functions only respond to document changes, and cannot monitor specific fields or collections. Below are a few examples of valid and invalid document paths:
- `users/marie`: valid trigger. Monitors a single document, `/users/marie`.
- `users/{username}`: valid trigger. Monitors all user documents. Wildcards are used to monitor all documents in the collection.
- `users/{username}/addresses`: invalid trigger. Refers to the subcollection `addresses`, not a document.
- `users/{username}/addresses/home`: valid trigger. Monitors the home address document for all users.
- `users/{username}/addresses/{addressId}`: valid trigger. Monitors all address documents.
Using wildcards and parameters
If you do not know the specific document you want to monitor, use a `{wildcard}` instead of the document ID:

- `users/{username}` listens for changes to all user documents.

In this example, when any field on any document in `users` is changed, it matches a wildcard called `{username}`.

If a document in `users` has subcollections, and a field in one of those subcollections' documents is changed, the `{username}` wildcard is not triggered.
Wildcard matches are extracted from document paths. You can define as many wildcards as you like to substitute explicit collection or document IDs.
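For example, with a trigger on `users/{username}`, the matched segment can be recovered from the document path at runtime. A minimal Python sketch, assuming a 1st gen background function in which `context.resource` carries the full document path (the function name here is illustrative):

```python
def on_user_change(data, context):
    """Triggered by a change to any document matching users/{username}.

    Args:
        data (dict): The event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    # context.resource looks like:
    # "projects/YOUR_PROJECT_ID/databases/(default)/documents/users/marie"
    document_path = context.resource.split("/documents/")[1]  # "users/marie"

    # The value matched by the {username} wildcard is the final segment.
    username = document_path.split("/")[1]
    print(f"Document for user {username} changed")
```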
Event structure
This trigger invokes your function with an event similar to the one shown below:
{ "oldValue": { // Update and Delete operations only A Document object containing a pre-operation document snapshot }, "updateMask": { // Update operations only A DocumentMask object that lists changed fields. }, "value": { // A Document object containing a post-operation document snapshot } }
Each `Document` object contains one or more `Value` objects. See the `Value` documentation for type references. This is especially useful if you're using a typed language (like Go) to write your functions.
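For example, a string field arrives wrapped in a `stringValue` entry. A short Python sketch of unwrapping one field from the payload, assuming a document with a string field named `original` (an illustrative name, not part of the event format):

```python
def print_field(data, context):
    """Prints one string field from the post-operation snapshot."""
    # A Document's fields map names to typed Value objects, for example:
    # {"original": {"stringValue": "hello"}}
    fields = data["value"]["fields"]
    print(fields["original"]["stringValue"])  # unwrap the typed Value
```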
Code sample
The sample Cloud Function below prints the fields of a triggering Cloud Firestore event:
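A sketch of this sample in Python, assuming the standard 1st gen background-function signature; the `hello_firestore` name is illustrative:

```python
import json


def hello_firestore(data, context):
    """Triggered by a change to a Firestore document.

    Args:
        data (dict): The event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    print(f"Function triggered by change to: {context.resource}")

    # oldValue is populated only for update and delete events.
    print("Old value:")
    print(json.dumps(data.get("oldValue", {})))

    # value holds the post-operation document snapshot.
    print("New value:")
    print(json.dumps(data.get("value", {})))
```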
The example below retrieves the value added by the user, converts the string at that location to uppercase, and replaces the value with the uppercase string:
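A Python sketch of this behavior follows. The `original` field name and the `make_upper_case` entry point are assumptions for illustration; note the guard against re-triggering, since writing the document fires the trigger again:

```python
from google.cloud import firestore

client = firestore.Client()


def make_upper_case(data, context):
    """Replaces a document's 'original' string field with upper case.

    Args:
        data (dict): The event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    # Recover the collection and document path from the trigger resource.
    path_parts = context.resource.split("/documents/")[1].split("/")
    collection_path = path_parts[0]
    document_path = "/".join(path_parts[1:])

    affected_doc = client.collection(collection_path).document(document_path)

    cur_value = data["value"]["fields"]["original"]["stringValue"]
    new_value = cur_value.upper()

    if cur_value == new_value:
        # The value is already upper case; writing it again would
        # re-trigger this function and cause an infinite loop.
        print("Value is already upper-case.")
        return

    print(f"Replacing value: {cur_value} --> {new_value}")
    affected_doc.set({"original": new_value})
```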
Deploying your function
The following `gcloud` command deploys a function that is triggered by write events on the document `/messages/{pushId}`:

```sh
gcloud functions deploy FUNCTION_NAME \
  --no-gen2 \
  --entry-point ENTRY_POINT \
  --runtime RUNTIME \
  --set-env-vars GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID \
  --trigger-event "providers/cloud.firestore/eventTypes/document.write" \
  --trigger-resource "projects/YOUR_PROJECT_ID/databases/(default)/documents/messages/{pushId}"
```
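As a concrete illustration, a deployment of the upper-casing sketch above might look like this (the function name, runtime version, and project ID are placeholder assumptions):

```sh
gcloud functions deploy make_upper_case \
  --no-gen2 \
  --entry-point make_upper_case \
  --runtime python310 \
  --set-env-vars GOOGLE_CLOUD_PROJECT=my-project \
  --trigger-event "providers/cloud.firestore/eventTypes/document.write" \
  --trigger-resource "projects/my-project/databases/(default)/documents/messages/{pushId}"
```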
| Argument | Description |
| --- | --- |
| `FUNCTION_NAME` | The registered name of the Cloud Function you are deploying. This can either be the name of a function in your source code or an arbitrary string. If `FUNCTION_NAME` is an arbitrary string, then you must include the `--entry-point` flag. |
| `--entry-point ENTRY_POINT` | The name of a function or class in your source code. Optional, unless you did not use `FUNCTION_NAME` to specify the function in your source code to be executed during deployment. In that case, you must use `--entry-point` to supply the name of the executable function. |
| `--runtime RUNTIME` | The name of the runtime you are using. For a complete list, see the `gcloud` reference. |
| `--set-env-vars GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID` | The unique identifier of the project as a runtime environment variable. |
| `--trigger-event NAME` | The event type that the function monitors (one of `write`, `create`, `update`, or `delete`). |
| `--trigger-resource NAME` | The fully qualified database path the function listens to, in the format `"projects/YOUR_PROJECT_ID/databases/(default)/documents/PATH"`. The `{pushId}` text is a wildcard parameter described above in Specifying the document path. |
Limitations
Note the following limitations for Firestore triggers for Cloud Run functions:
- Cloud Run functions (1st gen) requires an existing "(default)" database in Firestore native mode. It does not support Firestore named databases or Datastore mode. In such cases, use Cloud Run functions (2nd gen) to configure events.
- Ordering is not guaranteed. Rapid changes can trigger function invocations in an unexpected order.
- Events are delivered at least once, but a single event may result in multiple function invocations. Avoid depending on exactly-once mechanics, and write idempotent functions (see the sketch after this list).
- Firestore in Datastore mode requires Cloud Run functions (2nd gen). Cloud Run functions (1st gen) does not support Datastore mode.
- A trigger is associated with a single database. You cannot create a trigger that matches multiple databases.
- Deleting a database does not automatically delete any triggers for that database. The trigger stops delivering events but continues to exist until you delete the trigger.
- If a matched event exceeds the maximum request size, the event might not be delivered to Cloud Run functions (1st gen).
- Events not delivered because of request size are logged in platform logs and count towards the log usage for the project.
- You can find these logs in the Logs Explorer with the message "Event cannot deliver to Cloud function due to size exceeding the limit for 1st gen..." of `error` severity. You can find the function name under the `functionName` field. If the `receiveTimestamp` field is still within the past hour, you can infer the actual event content by reading the document in question with snapshots from before and after the timestamp.
- To avoid this problem, you can:
  - Migrate and upgrade to Cloud Run functions (2nd gen)
  - Downsize the document
  - Delete the Cloud Run functions in question
- You can turn off the logging itself using exclusions, but note that the offending events will still not be delivered.
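Because delivery is at least once, idempotency is commonly implemented by recording handled event IDs. A minimal Python sketch, assuming `context.event_id` as the deduplication key and an illustrative `processed_events` marker collection:

```python
from google.api_core import exceptions
from google.cloud import firestore

client = firestore.Client()


def handle_once(data, context):
    """Processes a Firestore event at most once per event ID."""
    marker = client.collection("processed_events").document(context.event_id)
    try:
        # create() fails if the marker already exists, so a redelivered
        # event is detected atomically.
        marker.create({"done": True})
    except exceptions.AlreadyExists:
        print(f"Event {context.event_id} already processed; skipping.")
        return

    # ... perform the real, side-effecting work here ...
```

Writing the marker before the real work guarantees no duplicate side effects, at the cost of possibly skipping a retry if the function crashes midway; performing both steps inside a Firestore transaction reverses that trade-off.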