What is a vector database?

A vector database is any database that allows you to store, index, and query vector embeddings: numerical representations of unstructured data such as text, images, or audio.

Google Cloud integrates these enterprise-grade capabilities directly into its managed services, including AlloyDB for PostgreSQL, Spanner, and BigQuery, helping you build intelligent applications without managing separate infrastructure.


What are vector embeddings?

Vector embeddings are numerical representations of data, typically defined as arrays of floating-point numbers. They translate complex, unstructured data—like text, images, or audio—into a format that machine learning models can process.

By mapping this data into a vector space, embeddings capture semantic meaning; similar items are positioned closer together, while dissimilar items are farther apart. This spatial relationship helps systems to identify connections between data points based on context and meaning rather than just keyword matches.
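
To make the idea concrete, here is a minimal Python sketch with tiny hand-made vectors standing in for real model output (real embeddings typically have hundreds or thousands of dimensions), showing how cosine similarity ranks related items above unrelated ones:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 for vectors pointing the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings, hand-made for illustration only.
embeddings = {
    "a heartwarming story about a fish": np.array([0.9, 0.1, 0.8, 0.2]),
    "an uplifting tale of ocean life":   np.array([0.8, 0.2, 0.9, 0.1]),
    "quarterly financial report":        np.array([0.1, 0.9, 0.1, 0.8]),
}

query = embeddings["a heartwarming story about a fish"]
for text, vector in embeddings.items():
    print(f"{cosine_similarity(query, vector):.3f}  {text}")
# The ocean-life sentence scores near 1.0; the financial report scores far lower.
```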

Other supported data types

While some specialized databases only support vector embeddings, others support many different data and query types in addition to vector embeddings. This is critical for building generative AI applications on top of rich, real-world data. As the benefits of semantic querying with vector embeddings become clear, most databases will add vector support. In the future, we believe that every database will be a vector database.

Learn how Vertex AI’s vector search supports building high-performance gen AI applications. Vertex AI’s vector search is built on Scalable Nearest Neighbors (ScaNN), an efficient, highly scalable vector search technology developed by Google Research, making it well suited to large datasets and real-time search requirements. Learn more about vector search and embeddings in the video below, and get started with this quickstart guide.

Watch the video to learn how to build LLM-powered apps with embeddings, vector search, and RAG.

How does a vector database work?

Efficiently querying a large set of vectors requires specialized indexing and search strategies that differ from traditional text or numeric fields. Because vectors don’t have a single logical ordering, vector databases rely on the following mechanisms to retrieve data:

  • K-nearest neighbor search (KNN): The most common use case is identifying the "k" vectors that are closest to a query vector. This uses distance metrics such as dot product, cosine similarity, or Euclidean distance to measure proximity in the vector space.
  • Approximate nearest neighbors (ANN): Calculating the exact distance between a query vector and every other vector can be computationally expensive. To help reduce this cost, databases use ANN algorithms. These algorithms can significantly improve search speed by trading off a small amount of accuracy (recall)—an acceptable compromise for most semantic search applications.
  • Vector indexing: To enable faster lookups, vector indexes organize data so that clusters of nearby vectors are grouped together. Common structures include lists (which represent vector clusters), graphs (connecting vectors to neighbors), and trees (where branches represent subsets of clusters). Each index type offers different tradeoffs regarding lookup speed, memory consumption, and index creation time.
  • Metadata filtering: Most applications require more than just semantic similarity. For example, a user might search for a book similar to "a heartwarming story about a fish" (vector search) but limit the results to items "under $20" (metadata filter). Advanced vector databases combine such SQL predicates with vector similarity to execute powerful hybrid queries (a brute-force version of this pattern is sketched after this list).
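
To ground these mechanisms, the following Python sketch runs a deliberately naive, brute-force k-nearest-neighbor search with cosine similarity and applies a metadata filter in the same pass. The catalog data is randomly generated for illustration; a production system would replace the linear scan with an ANN index such as ScaNN.

```python
import numpy as np

# Hypothetical catalog: each item has an embedding plus metadata (a price).
rng = np.random.default_rng(0)
catalog = [
    {"title": f"book-{i}",
     "price": float(rng.uniform(5, 60)),
     "embedding": rng.normal(size=128)}
    for i in range(10_000)
]

def knn_with_filter(query_vec, items, k=5, max_price=20.0):
    """Brute-force k-NN over items that also satisfy a metadata predicate."""
    query_vec = query_vec / np.linalg.norm(query_vec)
    scored = []
    for item in items:
        if item["price"] >= max_price:       # metadata filter, applied inline
            continue
        emb = item["embedding"]
        similarity = float(np.dot(query_vec, emb / np.linalg.norm(emb)))
        scored.append((similarity, item["title"], item["price"]))
    scored.sort(reverse=True)                # highest cosine similarity first
    return scored[:k]

query = rng.normal(size=128)                 # stand-in for an embedded user query
for similarity, title, price in knn_with_filter(query, catalog):
    print(f"{similarity:.3f}  {title}  ${price:.2f}")
```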

Use cases for vector databases

Vector embeddings capture the semantic meaning of complex data. When combined with vector databases, which provide efficient indexing and retrieval, developers can build a wide range of intelligent applications and data processing tools.

Developers can use vector databases as an external knowledge base for large language models (LLMs). By retrieving relevant, proprietary context before sending a prompt to the model, applications can reduce hallucinations and provide factually accurate, domain-specific responses. This is essential for building AI-powered support agents, legal document analyzers, and internal knowledge management systems.
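
A minimal sketch of this retrieval-augmented generation (RAG) pattern, using a tiny in-memory knowledge base with hand-made embeddings; a real system would embed both documents and questions with the same model and send the resulting prompt to an LLM.

```python
import numpy as np

# Tiny in-memory "knowledge base": (text, embedding) pairs. The vectors are
# hand-made stand-ins; a real system would produce them with an embedding model.
DOCS = [
    ("Refunds are available within 30 days of purchase.", np.array([0.9, 0.1, 0.0])),
    ("Support is available 24/7 via chat.",               np.array([0.1, 0.9, 0.1])),
    ("Standard shipping takes 3-5 business days.",        np.array([0.0, 0.2, 0.9])),
]

def retrieve(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k passages whose embeddings are most similar to the query."""
    scored = sorted(
        ((float(np.dot(query_vec, emb) /
                (np.linalg.norm(query_vec) * np.linalg.norm(emb))), text)
         for text, emb in DOCS),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_grounded_prompt(question: str, query_vec: np.ndarray) -> str:
    """Prepend retrieved context so the model answers from proprietary data."""
    context = "\n".join(retrieve(query_vec))
    return ("Answer using only the context below. If the context is "
            f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {question}")

# query_vec would normally come from embedding the question itself.
print(build_grounded_prompt("What is the refund policy?", np.array([0.8, 0.2, 0.1])))
```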

Vector databases allow developers to build personalization systems that go beyond collaborative filtering. By representing user behavior and product attributes as vectors, applications can identify similar items or match users to content that fits their preferences in real time. This architecture supports ecommerce product suggestions, content feeds, and media streaming recommendations.

Unlike traditional keyword search, vector databases enable semantic search applications that understand user intent. Developers can build search experiences that allow users to query by concept rather than exact phrasing. Additionally, because vectors can represent different data types in the same space, you can build multimodal search tools—allowing users to search for images using text descriptions or find related documents using an input image.

Vector databases can help identify irregular patterns in massive datasets. By establishing a vector space that represents "normal" behavior or transactions, developers can programmatically detect outliers that fall far from established clusters. This capability is critical for building financial fraud detection systems, network security monitoring tools, and IT infrastructure health checks.
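
One simple way to implement this idea, sketched below with synthetic data: treat the centroid of known-normal embeddings as a summary of "normal" and flag events whose distance from it exceeds a threshold. Real detectors are usually more sophisticated, for example using per-cluster distances or density estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic embeddings: 1,000 "normal" events clustered together, plus 5
# outliers injected far from the cluster to play the role of anomalies.
normal = rng.normal(loc=0.0, scale=0.5, size=(1000, 64))
outliers = rng.normal(loc=4.0, scale=0.5, size=(5, 64))
events = np.vstack([normal, outliers])

centroid = normal.mean(axis=0)                      # summary of "normal" behavior
distances = np.linalg.norm(events - centroid, axis=1)

# Flag anything farther from the centroid than mean + 3 standard deviations
# of the distances observed for normal events.
baseline = distances[: len(normal)]
threshold = baseline.mean() + 3 * baseline.std()
flagged = np.where(distances > threshold)[0]
print(f"flagged {len(flagged)} events: {flagged.tolist()}")
```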

In data engineering workflows, vector databases can help clean and unify disparate datasets. By comparing embeddings of customer records or product listings, systems can identify duplicate entries even when the text varies slightly (for example, "Main St." vs. "Main Street"). This helps organizations maintain a single, accurate view of their data.
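
A sketch of that matching step, assuming embeddings for each record already exist (the vectors and the similarity threshold below are illustrative only): pairs whose cosine similarity exceeds the threshold are treated as likely duplicates and routed for merging or review.

```python
from itertools import combinations
import numpy as np

# Hypothetical embeddings for address records; in practice these would come
# from an embedding model applied to the record text.
records = {
    "123 Main St., Springfield":    np.array([0.81, 0.58, 0.05]),
    "123 Main Street, Springfield": np.array([0.80, 0.59, 0.06]),
    "9 Ocean Ave, Shelbyville":     np.array([0.10, 0.15, 0.98]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.98  # illustrative; in practice tuned against labeled duplicates
for (name_a, vec_a), (name_b, vec_b) in combinations(records.items(), 2):
    similarity = cosine_similarity(vec_a, vec_b)
    if similarity > THRESHOLD:
        print(f"likely duplicates ({similarity:.3f}): {name_a!r} ~ {name_b!r}")
```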

Vector databases on Google Cloud

AlloyDB for PostgreSQL

AlloyDB for PostgreSQL combines the compatibility of PostgreSQL with Google’s scalable infrastructure. It includes built-in support for vector embeddings through the standard pgvector extension and enhances it with Google’s ScaNN index. This can speed up vector queries and enables "inline filtering," which can optimize hybrid searches by evaluating vector similarity and metadata filters in a single pass.

Example: Hybrid search for real estate

Consider a real estate application where users want to find homes based on "vibe" (for example, "mid-century modern with natural light") while strictly adhering to hard constraints (for example, "3 bedrooms," "under $800k," "in School District A").

  • Challenge: A standard vector search might return a "mid-century" home that is $2M or in the wrong district; a standard SQL query can filter by price but can't understand "mid-century vibe"
  • Solution: AlloyDB's inline filtering scans the vector index and checks the SQL metadata filters (price, location) simultaneously in a single pass
  • Result: The app returns homes that match the aesthetic and the budget in milliseconds, without the performance penalty of post-filtering results
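
A sketch of what that hybrid query could look like from Python, using the standard PostgreSQL driver psycopg2 and pgvector's cosine-distance operator. The table, columns, connection details, and embedding values are hypothetical, and the AlloyDB documentation should be consulted for ScaNN index creation and inline-filtering specifics.

```python
import psycopg2

# Placeholder connection details; AlloyDB speaks the standard PostgreSQL protocol.
conn = psycopg2.connect(host="10.0.0.5", dbname="realestate",
                        user="app", password="...")

# In practice this would be the embedding of the user's query, for example
# "mid-century modern with natural light"; three dimensions are used for brevity.
query_embedding = [0.12, -0.03, 0.87]
embedding_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

sql = """
    SELECT listing_id, address, price
    FROM listings                      -- hypothetical table with a vector column
    WHERE price < %s
      AND bedrooms >= %s
      AND school_district = %s
    ORDER BY embedding <=> %s::vector  -- pgvector cosine-distance operator
    LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql, (800_000, 3, "District A", embedding_literal))
    for listing_id, address, price in cur.fetchall():
        print(listing_id, address, price)
```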


Spanner

Spanner, Google’s globally distributed database, supports vector search for transactional applications. It can provide highly available, scalable vector search using exact and approximate nearest neighbor algorithms. This allows global applications to implement features like real-time recommendations or semantic search while maintaining strict consistency and reliability.

Example: Real-time recommendations for ecommerce

A global ecommerce platform wants to build a recommendation engine that handles vague user searches like "best hiking boots for rainy weather" while ensuring immediate product availability.

  • Challenge: Traditional keyword matching misses products that are relevant but don't contain the exact search terms (for example, a description saying "waterproof" might not match a search for "rainy"); additionally, verifying inventory availability across a separate vector database creates latency and data consistency risks during high-traffic events
  • Solution: The platform adds a vector column to its existing Spanner Products table and generates embeddings using Vertex AI via SQL; it uses Spanner's vector search to run a hybrid query that finds semantically similar products while simultaneously enforcing a strict inventory check (InventoryCount > 0)
  • Result: Customers receive accurate, personalized product recommendations that are guaranteed to be in stock, delivered with the low latency and global consistency necessary for a live transaction
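
A sketch of how such a query might be issued through the Spanner Python client, using GoogleSQL's COSINE_DISTANCE function for exact ranking. The instance, database, table, columns, and embedding values are hypothetical, and current Spanner documentation should be checked for approximate search and vector index syntax.

```python
from google.cloud import spanner
from google.cloud.spanner_v1 import param_types

client = spanner.Client()
database = client.instance("ecommerce-instance").database("products-db")

# In practice this embedding would be generated (for example, via Vertex AI)
# from the user's search text, such as "best hiking boots for rainy weather".
query_embedding = [0.04, -0.11, 0.93]  # hypothetical, low-dimensional for brevity

sql = """
    SELECT ProductId, Name, Price
    FROM Products
    WHERE InventoryCount > 0                               -- availability check
    ORDER BY COSINE_DISTANCE(Embedding, @query_embedding)  -- semantic ranking
    LIMIT 10
"""

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        sql,
        params={"query_embedding": query_embedding},
        param_types={"query_embedding": param_types.Array(param_types.FLOAT64)},
    )
    for product_id, name, price in rows:
        print(product_id, name, price)
```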

BigQuery

BigQuery enables you to perform vector analysis on massive datasets without moving data out of your data warehouse. Using the VECTOR_SEARCH function, you can execute similarity searches in standard SQL. This is particularly useful for analytical use cases, such as clustering customers based on behavior or identifying similar product trends across billions of rows of data.

Example: Cybersecurity threat detection at scale

A security team needs to analyze petabytes of server logs to identify malicious activity. Attackers often slightly modify their code to evade exact-match keyword searches.

  • Challenge: Keyword searches miss subtle variations of known attacks (for example, changing a variable name in a malicious script)
  • Solution: The team uses BigQuery to generate embeddings for billions of log entries; they then run a VECTOR_SEARCH query to find all logs that are semantically similar to a known exploit signature, identifying new variants of the attack
  • Result: They can detect and cluster zero-day threats across years of historical data using simple SQL, without needing to move data to a specialized vector database
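
A sketch of that workflow using the BigQuery Python client and the VECTOR_SEARCH table function. The dataset, table, and column names are hypothetical, and the function's arguments should be checked against current BigQuery documentation; a real pipeline would also generate the embeddings first, for example with ML.GENERATE_EMBEDDING.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical tables: security_logs.log_embeddings holds one embedding per log
# line; security_logs.exploit_signatures holds embeddings of known exploits.
sql = """
    SELECT
      query.signature_id,
      base.log_id,
      base.log_text,
      distance
    FROM VECTOR_SEARCH(
      TABLE security_logs.log_embeddings, 'embedding',
      (SELECT signature_id, embedding FROM security_logs.exploit_signatures),
      top_k => 20,
      distance_type => 'COSINE'
    )
    ORDER BY distance
"""

for row in client.query(sql).result():
    print(row.signature_id, row.log_id, round(row.distance, 4), row.log_text[:80])
```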

Solve your business challenges with Google Cloud

New customers get $300 in free credits to spend on Google Cloud.
Talk to a Google Cloud sales specialist to discuss your unique challenge in more detail.
