This page shows you how to resolve issues with Firestore.
The table below describes possible causes of increased latency:
| Latency cause | Types of operations affected | Resolution |
| --- | --- | --- |
| Sustained traffic exceeding the 500-50-5 rule. | read, write | For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease.<br><br>Hot-spots (high read, write, and delete rates to a narrow document range) limit the ability of Firestore to scale. Review designing for scale and identify hot-spots in your application. |
| Contention, either from updating a single document too frequently or from transactions. | read, write | Reduce the write rate to individual documents.<br><br>Review data contention in transactions and how you use transactions. |
| Slow merge-join queries. | read | For example, queries with multiple equality filters (`==`) but without a supporting composite index can result in a slow merge-join query. To improve performance, add composite indexes for these queries. |
| Large reads that return many documents. | read | Use pagination to split large reads. |
| Too many recent deletes. | read<br><br>This greatly affects operations that list collections in a database. | If latency is caused by too many recent deletes, the issue should automatically resolve after some time. If the issue does not resolve, contact support. |
| Adding and removing listeners too quickly. | realtime listener queries | See the best practices for realtime updates. |
| Listening to large documents or to a query with many results. | realtime listener queries | See the best practices for realtime updates. |
| Index fanout, especially for array fields and map fields. | write | Review your usage of array fields and map fields. For map fields, you can exempt subfields from indexing. |
| Large writes and batched writes. | write | Try reducing the number of writes in each batched write. Batched writes are atomic, and many writes in a single batch can increase latency and contention. For example, a batch of 10 writes performs better than a batch of 500 writes.<br><br>For bulk data entry where you do not require atomicity, use a server client library with parallelized individual writes. Batched writes perform better than serialized writes but not better than parallel writes. |
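The bulk-write advice in the table can be sketched as follows. This is a minimal illustration using the Python `google-cloud-firestore` server client library's batch API; the `chunk` helper, the `items` collection name, and the document data shape are our own assumptions, not part of the library.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(items, size):
    """Split a list into sublists of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def bulk_load(db, docs, batch_size=10, workers=8):
    """Write docs as many small batches committed in parallel, rather than
    one large atomic batch.

    db:   a firestore.Client (assumed; requires credentials to run)
    docs: list of (doc_id, data) tuples (hypothetical shape)
    """
    def commit(group):
        batch = db.batch()
        for doc_id, data in group:
            batch.set(db.collection("items").document(doc_id), data)
        batch.commit()

    # Small batches reduce per-commit latency and contention; parallelism
    # recovers the throughput lost by not using one big batch.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(commit, chunk(docs, batch_size)))
```

Note that each small batch is still atomic on its own; use this pattern only when you do not need atomicity across the whole data set.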
This section lists issues that you might encounter and provides suggestions for how to fix each of them.
The following can increase `DEADLINE_EXCEEDED` errors:
- An increase in latency that causes an operation to take longer than the deadline (60 seconds by default) to complete.
```
DEADLINE_EXCEEDED A deadline was exceeded on the server.
```
To resolve this issue, see the guide to troubleshooting latency.
The following situations can increase `ABORTED` errors:
- A document receiving too many updates per second.
- Contention from overlapping transactions.
- Traffic increases that exceed the 500-50-5 rule or encounter hot-spots.
```
ABORTED Too much contention on these datastore entities. Please try again.
```
```
ABORTED Aborted due to cross-transaction contention. This occurs when multiple transactions attempt to access the same data, requiring Firestore to abort at least one in order to enforce serializability.
```
To resolve this issue:
- For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease.
- Hot-spots limit the ability of Firestore to scale up. Review designing for scale to identify hot-spots.
- Review data contention in transactions and your usage of transactions.
- Reduce the write rate to individual documents.
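When contention cannot be avoided entirely, clients typically retry aborted transactions with exponential backoff (many Firestore server client libraries already do this for you). The sketch below uses a stand-in `Aborted` exception and a hypothetical `op` callable to show the retry shape without a live database:

```python
import random
import time

class Aborted(Exception):
    """Stand-in for the ABORTED error raised by a contended transaction."""

def with_retries(op, max_attempts=5, base_delay=0.1):
    """Run op(), retrying with exponential backoff and jitter on Aborted.

    Backoff spreads retries out so contending clients stop colliding on
    the same documents at the same moments.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except Aborted:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff (2**attempt) with random jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

If your client library retries transactions itself, prefer its built-in behavior over a hand-rolled loop like this one.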
The following situation can lead to `RESOURCE_EXHAUSTED` errors:
- You exceeded the free tier quota and billing is not enabled for your project.
```
RESOURCE_EXHAUSTED Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space.
```
To resolve this issue:
- Wait for the daily reset of your free tier quota or enable billing for your project.
The following situations can cause `INVALID_ARGUMENT` errors:
- Attempting to commit a document with an indexed field value greater than 1,500 bytes. This limit applies to the UTF-8 encoding of the field value.
- Attempting to commit a document with un-indexed field values greater than 1,048,487 bytes (1 MiB - 89 bytes). This limit applies to the sum of the field values in a document. For example, four fields of 256 KiB each exceed the limit.
The 1,500-byte (indexed) and 1,048,487-byte (un-indexed) limits are hard limits on field values. They are not quotas and cannot be adjusted.
```
INVALID_ARGUMENT: The value of property field-name is longer than 1500 bytes
```
```
INVALID_ARGUMENT: The value of property field_name is longer than 1048487 bytes
```
To resolve this issue:
- For indexed field values, split the field into multiple fields. If possible, create an un-indexed field and move data that doesn't need to be indexed into the un-indexed field.
- For un-indexed field values, split the field into multiple fields or implement compression for the field value.
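Because these limits are counted in UTF-8 bytes rather than characters, a client-side pre-check can reject oversized values before a write fails. A minimal sketch, assuming string field values; the helper names and constants below are ours, not part of any Firestore client library:

```python
# Limits from the error messages above: indexed field values must fit in
# 1,500 bytes of UTF-8; the sum of a document's un-indexed field values
# must stay under 1,048,487 bytes (1 MiB - 89 bytes).
INDEXED_FIELD_LIMIT = 1500
UNINDEXED_FIELDS_LIMIT = 1_048_487

def utf8_size(value: str) -> int:
    """Byte length of the UTF-8 encoding, which is what the limit counts.
    Multi-byte characters make this larger than len(value)."""
    return len(value.encode("utf-8"))

def fits_indexed_field(value: str) -> bool:
    """True if the value can be stored in an indexed field."""
    return utf8_size(value) <= INDEXED_FIELD_LIMIT
```

For example, `"é"` counts as 2 bytes, so 750 copies of it already reach the 1,500-byte indexed limit even though the string is only 750 characters long.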