
Build powerful gen AI applications with Firestore vector similarity search

April 11, 2024
Dong Chang

Senior Product Manager

Amarnath Mullick

Staff Software Engineer

Creating innovative AI-powered solutions for use cases such as product recommendations and chatbots often requires vector similarity search, or vector search for short. At Google Cloud Next ‘24, we announced Firestore vector search in preview, which uses exact K-nearest neighbor (KNN) search. Developers can now perform vector search on transactional Firestore data without the hassle of copying it to another vector search solution, maintaining operational simplicity and efficiency.

Developers can now use Firestore vector search with popular orchestration frameworks such as LangChain and LlamaIndex through native integrations. We’ve also launched a new Firestore extension that makes it easy to automatically compute vector embeddings on your data and to create web services for performing vector searches from a web or mobile application.

In this blog, we’ll discuss how developers can get started with Firestore’s new vector search capabilities.

How to use KNN vector search in Firestore

The first step in using vector search is to generate vector embeddings. Embeddings are representations of different kinds of data, such as text, images, and video, in a continuous vector space, and they capture semantic or syntactic similarities between the entities they represent. Embeddings can be calculated using a service such as the Vertex AI text-embeddings API.
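
For illustration, here is a minimal sketch of generating an embedding with the Vertex AI Python SDK; the project ID, region, and model name are example values, not prescribed by this post.

import vertexai
from vertexai.language_models import TextEmbeddingModel

# Example project and region; replace with your own.
vertexai.init(project="my-project", location="us-central1")

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")
embedding = model.get_embeddings(["Information about the Kahawa coffee beans."])[0]
print(len(embedding.values))  # a 768-dimensional list of floats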

Once the embeddings are generated, you can store them in Firestore using one of the supported SDKs. For example, let’s say you’ve generated an embedding, using your favorite embedding model, for the data in the field “description” in the collection “beans”. You can now add that generated embedding as a vector value to the field “embedding_field”. Simply run the following command using the Python SDK:

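A minimal sketch; the description text and embedding values below are example data, and the embedding would normally come from your embedding model.

from google.cloud import firestore
from google.cloud.firestore_v1.vector import Vector

db = firestore.Client()

# Store the precomputed embedding as a vector value on "embedding_field".
db.collection("beans").add({
    "description": "Information about the Kahawa coffee beans.",
    "embedding_field": Vector([0.1833, 0.2416, 0.3416]),  # example values
})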

Alternatively, rather than calling the embedding generation service from your application, you can automate the generation of vector embeddings from field values in your documents, using your favorite embedding model, with the Firestore vector search extension.

The next step is to create a Firestore KNN vector index on the “embedding_field” where the vector embeddings are stored. During the preview release, you will need to create the index using the gcloud command line tool. 

Continuing with our example, this is how you would create a Firestore KNN vector index:

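A sketch of the command, based on the preview-era gcloud alpha syntax; the dimension value (768 here) is an example and must match the output size of your embedding model.

gcloud alpha firestore indexes composite create \
  --collection-group=beans \
  --query-scope=COLLECTION \
  --field-config=field-path=embedding_field,vector-config='{"dimension":"768","flat":"{}"}'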

Once you have added all the vector embeddings and created the vector index, you are ready to run the K-nearest neighbor search. Use the “find_nearest” call to pass the query vector embedding to compare against the stored embeddings, and to specify the distance function you want to use.

In our example, to do a KNN search on the “embedding_field” in the “beans” collection using COSINE vector distance, you run the following query:

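A minimal sketch using the Python SDK; the query vector is an example and would normally be the embedding of the user’s search input.

from google.cloud import firestore
from google.cloud.firestore_v1.base_vector_query import DistanceMeasure
from google.cloud.firestore_v1.vector import Vector

db = firestore.Client()

# Return the five stored embeddings closest to the query embedding by cosine distance.
vector_query = db.collection("beans").find_nearest(
    vector_field="embedding_field",
    query_vector=Vector([0.3416, 0.1833, 0.2416]),  # example query embedding
    distance_measure=DistanceMeasure.COSINE,
    limit=5,
)
docs = vector_query.get()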

How to use pre-filtering with KNN vector search

One of the key benefits of Firestore’s KNN vector search is that it can be used in conjunction with other query predicates, such as equality conditions, to pre-filter the data set to only the vectors you want to search. This reduces the search space and returns faster, more relevant results.

To pre-filter and run the vector search on the filtered data set, you first need to create a composite index using the gcloud command line tool, including the fields you want to pre-filter on along with the vector field.

For example, if you want to pre-filter on the field “type” in our “beans” collection when doing a KNN vector search, create a Firestore KNN composite vector index using the command below:

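A sketch, again using the preview-era gcloud alpha syntax with an example dimension value:

gcloud alpha firestore indexes composite create \
  --collection-group=beans \
  --query-scope=COLLECTION \
  --field-config=order=ASCENDING,field-path=type \
  --field-config=field-path=embedding_field,vector-config='{"dimension":"768","flat":"{}"}'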

Once the index is created, you can do a KNN search with the pre-filter, as in the example below:

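A minimal sketch using the Python SDK; the filter value “arabica” and the query vector are example data.

from google.cloud import firestore
from google.cloud.firestore_v1.base_query import FieldFilter
from google.cloud.firestore_v1.base_vector_query import DistanceMeasure
from google.cloud.firestore_v1.vector import Vector

db = firestore.Client()

# Pre-filter on "type", then run the KNN search over only the matching documents.
vector_query = (
    db.collection("beans")
    .where(filter=FieldFilter("type", "==", "arabica"))
    .find_nearest(
        vector_field="embedding_field",
        query_vector=Vector([0.3416, 0.1833, 0.2416]),  # example query embedding
        distance_measure=DistanceMeasure.COSINE,
        limit=5,
    )
)
docs = vector_query.get()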

Ecosystem integrations

To provide application developers with tools to help them quickly and efficiently build retrieval-augmented generation (RAG) solutions using vector search and foundation models, Firestore KNN vector search now integrates with LangChain Vector Store and LlamaIndex. These integrations give developers access to the accurate and reliable information stored in Firestore from workflows orchestrated with LangChain or LlamaIndex, enhancing the credibility and trustworthiness of large language model (LLM) responses. They also enable enhanced contextual understanding by pulling contextual information from Firestore, resulting in highly relevant, personalized responses tailored to customer needs.
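
As an illustration, here is a minimal sketch of a similarity search through the LangChain integration. It assumes the langchain-google-firestore and langchain-google-vertexai packages; the collection name, embedding model, and query string are example values, and the vector store constructor arguments are passed positionally (collection, then embedding service) following the library’s documented examples.

from langchain_google_firestore import FirestoreVectorStore
from langchain_google_vertexai import VertexAIEmbeddings

# Embedding service used to embed incoming queries for the search.
embedding = VertexAIEmbeddings(model_name="textembedding-gecko@003")

# Vector store backed by the "beans" collection.
vector_store = FirestoreVectorStore("beans", embedding)
docs = vector_store.similarity_search("fruity light roast", k=5)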

For more information regarding the integrations, please see the Firestore for LangChain (or Datastore for LangChain) documentation and the LlamaIndex-Firestore integration website.

As indicated earlier, we also announced the availability of a new Firebase extension that will enable developers to use their favorite embedding model to automatically compute and store embeddings for a given field of a Firestore document. This extension also makes it easier to perform vector similarity searches by generating embeddings based on a query value, for input into vector search. For more information, see the Firestore vector search extension web page.

Pricing

Firestore customers are charged for the number of KNN vector index entries read during the computation, and for document reads only on the resulting documents that match the query. For detailed pricing, please refer to the pricing page.

Next steps

To learn more about Firestore and its vector search, check out the following resources:


Thanks to Minh Nguyen, Senior Product Manager Lead for Firestore, for his contributions to this blog post.
