Troubleshooting

This page shows you how to resolve issues with Firestore.

Latency

The following are possible causes of increased latency, the types of operations they affect, and how to resolve them.

Sustained traffic exceeding the 500-50-5 rule (affects read and write operations)

For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease.

Hot-spots (high read, write, and delete rates to a narrow document range) limit the ability of Firestore to scale. Review designing for scale and identify hot-spots in your application.

Contention, either from updating a single document too frequently or from transactions (affects read and write operations)

Keep the write rate to individual documents under one write per second.

Review data contention in transactions and how you use transactions.
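
If a single logical value must be updated more often than once per second, one common workaround is to spread those writes across several shard documents and sum them when reading. The sketch below is a minimal illustration of that pattern using the Python server client library; the counters collection, shards subcollection, and shard count are hypothetical and should be adapted to your data model.

```python
import random

from google.cloud import firestore

db = firestore.Client()
NUM_SHARDS = 10  # more shards support a higher aggregate write rate


def increment_counter(counter_id: str, amount: int = 1) -> None:
    """Write to a randomly chosen shard so no single document
    is updated more than about once per second."""
    shard_id = str(random.randint(0, NUM_SHARDS - 1))
    shard_ref = (
        db.collection("counters")
        .document(counter_id)
        .collection("shards")
        .document(shard_id)
    )
    # Increment is a server-side transform, so writers hitting
    # different shards do not contend with each other.
    shard_ref.set({"count": firestore.Increment(amount)}, merge=True)


def get_count(counter_id: str) -> int:
    """Sum all shards to read the aggregated value."""
    shards = (
        db.collection("counters")
        .document(counter_id)
        .collection("shards")
        .stream()
    )
    return sum(doc.to_dict().get("count", 0) for doc in shards)
```

Because each shard is its own document, the aggregate write rate scales roughly with the number of shards while each individual document stays under the one-write-per-second guideline.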

Slow merge-join queries (affects read operations)

For example, queries with multiple equality (==) filters that are not backed by composite indexes can result in slow merge-join queries. To improve performance, add composite indexes for these queries. See Reason #3 in Why is my Firestore query slow?
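
As an illustration of the kind of query that can trigger a merge join, the sketch below combines two equality filters; the orders collection and its fields are hypothetical, and the import path assumes the Python server client library.

```python
from google.cloud import firestore
from google.cloud.firestore_v1.base_query import FieldFilter

db = firestore.Client()

# Two equality filters on the same query. If no composite index covers
# (status, region), Firestore may answer this by merge-joining the two
# single-field indexes, which can be slow. A composite index on both
# fields lets the query be served directly from that index.
query = (
    db.collection("orders")
    .where(filter=FieldFilter("status", "==", "shipped"))
    .where(filter=FieldFilter("region", "==", "us-east1"))
)

for doc in query.stream():
    print(doc.id, doc.to_dict())
```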
Large reads that return many documents (affects read operations)

Use pagination to split large reads.
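
A minimal pagination sketch using query cursors with the Python server client library; the page size and collection name are placeholders.

```python
from google.cloud import firestore

db = firestore.Client()
PAGE_SIZE = 100  # keep each read small instead of fetching everything at once


def read_all_pages(collection: str):
    """Stream a large collection in fixed-size pages using query cursors."""
    query = db.collection(collection).order_by("__name__").limit(PAGE_SIZE)
    last_doc = None
    while True:
        page = query.start_after(last_doc).stream() if last_doc else query.stream()
        docs = list(page)
        if not docs:
            break
        for doc in docs:
            yield doc
        # Remember the last document of this page as the cursor for the next one.
        last_doc = docs[-1]
```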
Too many recent deletes (affects read operations)

This greatly affects operations that list collections in a database. If latency is caused by too many recent deletes, the issue should automatically resolve after some time. If the issue does not resolve, contact support.
Adding and removing listeners too quickly (affects realtime listener queries)

See the best practices for realtime updates.

Listening to large documents or to a query with many results (affects realtime listener queries)

See the best practices for realtime updates.
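
As a rough illustration of those practices with the Python server client library, the sketch below attaches a single listener to a bounded query and detaches it when it is no longer needed, rather than adding and removing listeners per request; the collection, field, and limit are hypothetical.

```python
from google.cloud import firestore

db = firestore.Client()


def on_snapshot(query_snapshot, changes, read_time):
    # Process only the incremental changes instead of re-reading everything.
    for change in changes:
        print(f"{change.type.name}: {change.document.id}")


# Bound the result set and attach the listener once, keeping it for the
# lifetime of the work instead of re-creating it for every request.
query = (
    db.collection("chat_messages")
    .order_by("sent_at", direction=firestore.Query.DESCENDING)
    .limit(50)
)
watch = query.on_snapshot(on_snapshot)

# ... application runs ...

# Detach the listener only when it is genuinely no longer needed.
watch.unsubscribe()
```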
Index fanout, especially for array fields and map fields (affects write operations)

Review your usage of array fields and map fields. For map fields, you can disable indexing for subfields.
Large writes and batched writes (affects write operations)

Try reducing the number of writes in each batched write. Batched writes are atomic and many writes in a single batch can increase latency and contention. For example, a batch of 10 writes performs better than a batch of 500 writes.

For bulk data entry where you do not require atomicity, use a server client library with parallelized individual writes. Batched writes perform better than serialized writes but not better than parallel writes.
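
For bulk data entry without atomicity, recent versions of the Python server client library expose a bulk writer that sends individual writes in parallel; the sketch below assumes that API is available in your client library version, and the collection name is hypothetical.

```python
from google.cloud import firestore

db = firestore.Client()


def bulk_load(rows: list[dict]) -> None:
    """Load many documents without batching them into one atomic commit."""
    # The bulk writer sends individual writes in parallel with automatic
    # throttling and retries; it is not atomic, which is fine for bulk loads.
    writer = db.bulk_writer()
    for row in rows:
        doc_ref = db.collection("imported_items").document()  # auto-generated ID
        writer.create(doc_ref, row)
    # Block until all queued writes have been sent.
    writer.close()
```

When you do need atomicity, keeping each db.batch() commit small (tens of writes rather than hundreds) reduces latency and contention.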

Error Codes

This section lists issues that you might encounter and provides suggestions for how to fix each of them.

DEADLINE_EXCEEDED

The following can increase DEADLINE_EXCEEDED errors:

  • An increase in latency caused an operation to take longer than the deadline (60 seconds by default) to complete.

Example error message:

DEADLINE_EXCEEDED: A deadline was exceeded on the server.

To resolve this issue, see the guide to troubleshooting latency.
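
Fixing the underlying latency is the primary remedy, but if a specific operation legitimately needs more time, you can also pass per-call timeout and retry options. The sketch below assumes the Python server client library, where these are standard google-api-core parameters; the document path is hypothetical.

```python
from google.api_core.retry import Retry
from google.cloud import firestore

db = firestore.Client()

doc_ref = db.collection("reports").document("monthly-summary")

# Allow up to 120 seconds for this read instead of the default deadline,
# and retry transient failures with exponential backoff.
snapshot = doc_ref.get(
    retry=Retry(initial=1.0, maximum=10.0),
    timeout=120.0,
)
print(snapshot.to_dict())
```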

ABORTED

The following situations can increase ABORTED errors:

  • Exceeding the 1-write-per-second limit for a single document.
  • Contention from overlapping transactions.
  • Traffic increases that exceed the 500-50-5 rule or encounter hot-spots.
Example error messages:

ABORTED: Too much contention on these datastore entities. Please try again.

ABORTED: Aborted due to cross-transaction contention. This occurs when multiple transactions attempt to access the same data, requiring Firestore to abort at least one in order to enforce serializability.

To resolve this issue:

  • For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease.
  • Hot-spots limit the ability of Firestore to scale up. Review designing for scale and identify hot-spots in your application.
  • Review data contention in transactions and your usage of transactions.
  • Keep the write rate to individual documents under one write per second.
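
As an illustration with the Python server client library, the transactional function below is rerun automatically by the client if its commit is aborted by contention; keeping the set of documents it reads and writes small keeps those retries cheap. The collection, documents, and field are hypothetical.

```python
from google.cloud import firestore

db = firestore.Client()
transaction = db.transaction()


@firestore.transactional
def transfer_points(transaction, from_ref, to_ref, amount):
    # Read inside the transaction so the writes are conditional on the
    # data not having changed underneath us.
    from_snap = from_ref.get(transaction=transaction)
    to_snap = to_ref.get(transaction=transaction)

    transaction.update(from_ref, {"points": from_snap.get("points") - amount})
    transaction.update(to_ref, {"points": to_snap.get("points") + amount})


from_ref = db.collection("players").document("alice")
to_ref = db.collection("players").document("bob")

# The client library reruns the function if the commit is aborted by
# contention from overlapping transactions.
transfer_points(transaction, from_ref, to_ref, 10)
```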

RESOURCE_EXHAUSTED

The following situations can lead to RESOURCE_EXHAUSTED errors:

  • You exceeded the free tier quota and billing is not enabled for your project.
  • You exceeded the limit of 10,000 writes per second across the database or the 10 MiB per second write throughput limit.
Example error message:

RESOURCE_EXHAUSTED: Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space.

To resolve this issue:

  • If you exceeded the free tier quota, enable billing for your project.
  • If you exceeded the database-wide write limits, reduce your write rate or spread the traffic more evenly, and review designing for scale.