Troubleshooting

This page shows you how to resolve issues with Firestore.

Latency

The following list describes possible causes of increased latency, the types of operations affected, and how to resolve each cause:

Latency cause: Sustained traffic exceeding the 500-50-5 rule.
Operations affected: read, write
Resolution: For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease. Hot-spots (high read, write, and delete rates to a narrow document range) limit the ability of Firestore to scale. Review designing for scale and identify hot-spots in your application.

Latency cause: Contention, either from updating a single document too frequently or from transactions.
Operations affected: read, write
Resolution: Reduce the write rate to individual documents. Review data contention in transactions and how you use transactions.

Latency cause: Slow merge-join queries.
Operations affected: read
Resolution: Queries with multiple equality filters (==) that are not backed by composite indexes can result in slow merge-join queries. To improve performance, add composite indexes for these queries; see Reason #3 in Why is my Firestore query slow? A sketch of this query shape appears after this list.

Latency cause: Large reads that return many documents.
Operations affected: read
Resolution: Use pagination to split large reads; see the pagination sketch after this list.

Latency cause: Too many recent deletes.
Operations affected: read
Resolution: This greatly affects operations that list collections in a database. If latency is caused by too many recent deletes, the issue should automatically resolve after some time. If the issue does not resolve, contact support.

Latency cause: Adding and removing listeners too quickly.
Operations affected: realtime listener queries
Resolution: See the best practices for realtime updates.

Latency cause: Listening to large documents or to a query with many results.
Operations affected: realtime listener queries
Resolution: See the best practices for realtime updates.

Latency cause: Index fanout, especially for array fields and map fields.
Operations affected: write
Resolution: Review your usage of array fields and map fields. For map fields, you can disable subfields from indexing.

Latency cause: Large writes and batched writes.
Operations affected: write
Resolution: Try reducing the number of writes in each batched write. Batched writes are atomic, and many writes in a single batch can increase latency and contention. For example, a batch of 10 writes performs better than a batch of 500 writes. For bulk data entry where you do not require atomicity, use a server client library with parallelized individual writes. Batched writes perform better than serialized writes but not better than parallel writes; see the parallel-write sketch after this list.
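The following sketch shows the kind of query that can trigger a slow merge join. It uses the Python server client library and assumes a hypothetical restaurants collection with city and cuisine fields; without a composite index covering both fields, Firestore answers the query by merging the two single-field indexes, so adding a composite index on those fields can speed it up.

    from google.cloud import firestore

    db = firestore.Client()

    # Two equality filters on different fields. Without a composite index on
    # (city, cuisine), Firestore merges the single-field indexes at read time,
    # which can be slow for large collections.
    query = (
        db.collection("restaurants")
        .where("city", "==", "Tokyo")
        .where("cuisine", "==", "ramen")
    )

    for doc in query.stream():
        print(doc.id, doc.to_dict())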
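As an illustration of pagination, the next sketch reads a large result set in fixed-size pages using query cursors instead of one large read. It uses the Python server client library and assumes a hypothetical events collection ordered by a timestamp field; the page size is an example value.

    from google.cloud import firestore

    db = firestore.Client()

    PAGE_SIZE = 200  # example value; tune for your workload
    base_query = db.collection("events").order_by("timestamp").limit(PAGE_SIZE)

    last_snapshot = None
    while True:
        # Use a cursor to fetch the next page rather than one very large read.
        query = base_query if last_snapshot is None else base_query.start_after(last_snapshot)
        docs = query.get()
        if not docs:
            break
        for doc in docs:
            print(doc.id)  # replace with your per-document processing
        last_snapshot = docs[-1]  # cursor for the next page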
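For bulk data entry without atomicity, one option is to issue individual writes from several threads with a server client library instead of packing them into large batches. The sketch below uses the Python server client library with a thread pool; the imports collection name and the generated documents are hypothetical.

    from concurrent.futures import ThreadPoolExecutor

    from google.cloud import firestore

    db = firestore.Client()

    def write_doc(item):
        # Each write is an independent, non-atomic set() on its own document.
        db.collection("imports").document(item["id"]).set(item)

    items = [{"id": f"item-{i}", "value": i} for i in range(1000)]

    # Parallelize individual writes rather than using large batched writes.
    with ThreadPoolExecutor(max_workers=20) as executor:
        list(executor.map(write_doc, items))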

Error Codes

This section lists issues that you might encounter and provides suggestions for how to fix each of them.

DEADLINE_EXCEEDED

The following situations can increase DEADLINE_EXCEEDED errors:

  • An increase in latency causes an operation to take longer than the deadline (60 seconds by default) to complete.

This error returns a message like the following:

DEADLINE_EXCEEDED: A deadline was exceeded on the server.

To resolve this issue, see the guide to troubleshooting latency.

ABORTED

The following situations can increase ABORTED errors:

  • A document receiving too many updates per second.
  • Contention from overlapping transactions.
  • Traffic increases that exceed the 500-50-5 rule or encounter hot-spots.
This error returns one of the following messages:

ABORTED: Too much contention on these datastore entities. Please try again.

or

ABORTED: Aborted due to cross-transaction contention. This occurs when multiple transactions attempt to access the same data, requiring Firestore to abort at least one in order to enforce serializability.

To resolve this issue:

  • For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease.
  • Hot-spots limit the ability of Firestore to scale up. Review designing for scale to identify hot-spots.
  • Review data contention in transactions and your usage of transactions; a minimal transaction sketch follows this list.
  • Reduce the write rate to individual documents.
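As a reference point for reviewing your transaction usage, the following sketch keeps a transaction short and touches a single document, which narrows the window for cross-transaction contention. It uses the Python server client library; the inventory collection, document ID, and stock field are hypothetical.

    from google.cloud import firestore

    db = firestore.Client()

    @firestore.transactional
    def decrement_stock(transaction, item_ref, quantity):
        # Read and write the same document inside one short transaction.
        snapshot = item_ref.get(transaction=transaction)
        transaction.update(item_ref, {"stock": snapshot.get("stock") - quantity})

    item_ref = db.collection("inventory").document("item-123")
    decrement_stock(db.transaction(), item_ref, 1)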

RESOURCE_EXHAUSTED

The following situations can lead to RESOURCE_EXHAUSTED errors:

  • You exceeded the free tier quota and billing is not enabled for your project.
This error returns a message like the following:

RESOURCE_EXHAUSTED: Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space.

To resolve this issue:

  • To use more than the free tier allows, enable billing for your project.
  • Otherwise, wait for the free tier quota to reset; free tier quotas reset daily.

INVALID_ARGUMENT

The following situations can cause INVALID_ARGUMENT errors:

  • Attempting to commit a document with an indexed field value greater than 1,500 bytes. This limit applies to the UTF-8 encoding of the field value.
  • Attempting to commit a document with un-indexed field values greater than 1,048,487 bytes (1 MiB - 89 bytes). This limit applies to the sum of the field values in a document. For example, four fields of 256 KiB each exceed the limit.

The 1,500-byte (indexed) and 1,048,487-byte (un-indexed) limits apply to field values. They are hard limits, not quotas that can be adjusted. The error message identifies the field that exceeds the limit:

INVALID_ARGUMENT: The value of property field-name is longer than 1500 bytes

or

INVALID_ARGUMENT: The value of property field_name is longer than 1048487 bytes

To resolve this issue:

  • For indexed field values, split the field into multiple fields. If possible, create an un-indexed field and move data that doesn't need to be indexed into the un-indexed field.
  • For un-indexed field values, split the field into multiple fields or implement compression for the field value, as in the sketch after this list.
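One way to implement compression for a large field value is to store it as zlib-compressed bytes and decompress it on read. The sketch below uses the Python server client library; the articles collection, document ID, and field names are hypothetical, and the compressed payload must still fit within the per-field and per-document limits. Ideally the field holding the compressed bytes is excluded from indexing, for example with a single-field index exemption.

    import zlib

    from google.cloud import firestore

    db = firestore.Client()
    doc_ref = db.collection("articles").document("article-123")

    body = "example body text " * 5000  # roughly 90 KB of text

    # The size limits apply to the UTF-8 encoding of the value.
    raw_bytes = body.encode("utf-8")
    print(len(raw_bytes))  # size that is checked against the limits

    # Store the compressed value in an un-indexed field.
    doc_ref.set({"body_compressed": zlib.compress(raw_bytes)})

    # Decompress when reading the document back.
    snapshot = doc_ref.get()
    restored = zlib.decompress(snapshot.get("body_compressed")).decode("utf-8")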