Natural language support in AlloyDB for building gen AI apps with real-time data

April 10, 2024
Sandy Ghai

Group Product Manager, AlloyDB


Generative AI offers the opportunity to build more interactive, personalized, and complete experiences for your customers, even if you don’t have specialized AI/ML expertise. Since the introduction of foundation models to the broader market, the conversation has turned swiftly from the art of the possible to the art of the viable. How can developers build real, enterprise-grade experiences that are accurate, relevant, and secure?

Operational data bridges the gap between pre-trained foundation models and real enterprise applications. A pre-trained model can name the capital of France, but it doesn’t know which items you have in stock. That means that the generative AI experiences built using these models are only as good as the data and application context that developers use to ground them. And because operational databases have the most up-to-date data, they play a crucial role in building quality experiences.

This week at Google Cloud Next ‘24, we announced natural language support in AlloyDB for PostgreSQL to help developers integrate real-time operational data into generative AI applications. Now you can build applications that accurately query your data with natural language for maximum flexibility and expressiveness. This means generative AI apps can respond to a much broader and more unpredictable set of questions.

We also announced a new generative AI-friendly security model to accommodate these new access patterns. AlloyDB introduced parameterized secure views, a new kind of database view that locks down access to end-user data at the database level to help you protect against prompt injection attacks. Together, these advances in AlloyDB present a new paradigm for integrating real-time data into generative AI apps — one that’s flexible enough to answer the full gamut of end-users’ questions while maintaining accuracy and security.

AlloyDB AI's natural language support, with features like end-user access controls and NL2SQL primitives, is now available in technology preview through the downloadable AlloyDB Omni edition.

The need for improved flexibility

Because data is so critical to ensuring accurate, relevant responses, multiple approaches have emerged for integrating data into gen AI applications. 

Last year, we announced vector database capabilities in AlloyDB and Cloud SQL to enable the most common pattern for generative AI apps to retrieve data: retrieval augmented generation (RAG). RAG leverages semantic search on unstructured data to retrieve relevant information, and is particularly powerful for searching through a knowledge base or through unstructured user inputs like chat history to retrieve the context needed by the foundation model. 

AlloyDB’s end-to-end vector support made this super easy. With a few lines of code, you could turn your operational database into a vector database, with automatic vector embedding generation and easy vector queries using SQL. And now we’re bringing Google’s state-of-the-art ScaNN algorithm — the same one that powers some of Google’s most popular services — into AlloyDB to enhance vector performance.
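To make the retrieval step in RAG concrete, here is a minimal pure-Python sketch of what a vector query does conceptually: rank stored documents by cosine similarity to a query embedding. The three-dimensional vectors and document names are invented for illustration; in practice AlloyDB computes real embeddings and runs this ranking inside the database with a SQL vector query.

```python
import math

# Toy document store: name -> (made-up) embedding vector.
docs = {
    "return policy": [0.9, 0.1, 0.0],
    "shipping times": [0.2, 0.8, 0.1],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of the user's question

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Retrieve the most semantically similar document for the LLM's context.
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # the closest match is "return policy"
```

A real vector index like ScaNN avoids this exhaustive scan by searching approximate nearest neighbors, which is what makes the approach scale to large tables.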

Another popular approach for retrieving real-time data is to package structured queries in custom LLM extensions and plugins. Here, LLM orchestration frameworks connect to APIs that retrieve the needed information. These APIs can execute any database query, including structured SQL queries as well as vector search, on behalf of the LLM. The benefit of this approach is that it is predictable and easy to secure. AlloyDB’s LangChain integration makes it easy for you to build and integrate these database-backed APIs for common questions and access patterns.
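The shape of such a database-backed API can be sketched in a few lines. This is a hypothetical example: the `flights` schema, data, and `find_flights` function are invented, and SQLite stands in for AlloyDB. The point is that the LLM can only invoke the function with arguments; it never composes the SQL itself.

```python
import sqlite3

# Stand-in operational database with an invented Cymbal Air schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (flight_no TEXT, origin TEXT, destination TEXT)")
conn.execute("INSERT INTO flights VALUES ('CY123', 'SFO', 'JFK')")

def find_flights(origin: str, destination: str) -> list:
    """A predictable, parameterized query that an orchestration framework
    can expose as a tool. The SQL is fixed; only the bind values vary."""
    return conn.execute(
        "SELECT flight_no FROM flights WHERE origin = ? AND destination = ?",
        (origin, destination),
    ).fetchall()

print(find_flights("SFO", "JFK"))  # [('CY123',)]
```

Because the query text is fixed at development time, this pattern is easy to review, test, and secure — the trade-off is that it only answers the questions you anticipated.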

However, some use cases need to support more freeform conversations with users, where it’s difficult to predict the questions and required APIs ahead of time. For these situations, developers are increasingly using models to generate SQL on-the-fly. This initially appears to work well; it’s quite remarkable how well foundation models generate SQL. But it poses a number of challenges.

First, it is very hard to ensure, with high confidence, that the generated SQL is not only syntactically correct but also semantically correct. For example, if you’re searching for a flight to New York City, it would be syntactically correct to generate an executable SQL query that filters for flights to JFK airport. But a semantically correct query would include flights to LaGuardia airport too, because it also serves the city of New York.
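The distinction can be shown with a runnable toy example (invented table and flight numbers, SQLite as a stand-in). Both queries execute without error, but only the second captures the user's intent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (flight_no TEXT, dest_airport TEXT)")
conn.executemany(
    "INSERT INTO flights VALUES (?, ?)",
    [("CY123", "JFK"), ("CY456", "LGA"), ("CY789", "SEA")],
)

# Syntactically valid, but semantically incomplete for
# "flights to New York City": it misses LaGuardia.
jfk_only = conn.execute(
    "SELECT flight_no FROM flights WHERE dest_airport = 'JFK'"
).fetchall()

# Semantically correct: include every airport serving New York City.
nyc = conn.execute(
    "SELECT flight_no FROM flights WHERE dest_airport IN ('JFK', 'LGA')"
).fetchall()

print(len(jfk_only), len(nyc))  # 1 2
```

No syntax check or EXPLAIN plan will catch this class of error; it requires knowledge about the world (which airports serve which cities) that lives outside the schema.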

Second, providing the application with the ability to execute arbitrary, generated SQL introduces security challenges, making the app more vulnerable to prompt-injection and denial-of-service attacks.
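One generic mitigation (illustrated here with SQLite's authorizer hook, not AlloyDB's mechanism) is to execute model-generated SQL only on a connection that authorizes read operations, so an injected destructive statement is rejected before it runs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

# Allow only read-style operations on this connection.
READ_ONLY = {sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ, sqlite3.SQLITE_FUNCTION}

def authorize(action, arg1, arg2, db_name, trigger):
    return sqlite3.SQLITE_OK if action in READ_ONLY else sqlite3.SQLITE_DENY

conn.set_authorizer(authorize)

# Generated SELECTs still work:
print(conn.execute("SELECT email FROM users").fetchall())

# An injected destructive statement is blocked:
try:
    conn.execute("DROP TABLE users")
except sqlite3.DatabaseError as e:
    print("blocked:", e)
```

This limits damage from prompt injection, but it does not by itself enforce per-user row access or bound query cost — which is where a database-level security model like the one described below comes in.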

In short: custom APIs are accurate and secure, but not flexible enough to handle the full range of end-user questions. Emerging natural-language-to-SQL (NL2SQL) approaches are much more flexible, but lack security and accuracy.

Introducing natural language support to AlloyDB

AlloyDB AI provides the best of both worlds: accuracy, security, and flexibility. This not only makes it easier for you to incorporate real-time operational data into gen AI experiences, but also makes it viable in an enterprise setting.

Natural language support means developers get maximum flexibility in interacting with data. In its easiest-to-use form, it takes in any natural language question, including ones that are broad or imprecise, and either returns an accurate result or suggests follow-ups such as clarifying questions. You can use this new interface to create a single LLM extension for answering questions, with flexible querying across datasets to not only get the right data, but the right insights, leveraging database joins, filters, and aggregations to improve analysis. 

Accuracy of responses

At the same time, AlloyDB AI comes packed with built-in features to improve the accuracy and relevance of responses in any scenario. These include:

  • Intent clarification: Because users are often imprecise with language, AlloyDB AI has mechanisms for interactively clarifying user intent. When AlloyDB AI encounters a question that it has difficulty translating, it responds with a follow-up question instead of making assumptions. The application can then pass these questions back to the user to clarify intent before final results are shared, improving the eventual accuracy of the interaction.

  • Context beyond schema: AlloyDB AI leverages semantic knowledge about the business and dataset to help translate the natural language input into a query that the database can execute. This knowledge includes existing in-database metadata and context as a baseline, but you can add knowledge bases, sample queries, and other sources of business context to improve results. 

Let’s consider an example. Imagine trying to book a flight using a customer service chatbot. You might ask the question: “When does the next Cymbal Air flight from San Francisco to New York City depart?” This seems like a simple question, but even the best NL2SQL technology can’t translate this correctly, for two reasons.

First, the question is ambiguous; it’s not obvious whether you’re asking about the scheduled departure time or the estimated departure time. If the flight is delayed, the query might return the wrong answer. To help with this scenario, AlloyDB AI doesn’t just return results; it might suggest a follow-up question: “Are you looking for the scheduled departure time or the estimated departure time?”

Second, the database schema itself does not contain all of the information needed to answer the question correctly. A database administrator likely has semantic knowledge that isn’t encoded in the schema, for example: that scheduled times are in the `flights` table, but estimated times are in the `flight status` table, or that the airline Cymbal Air corresponds to airline code `CY` in the `flights` table. AlloyDB AI’s context services make it easy for developers to incorporate this semantic knowledge about the dataset.
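As a sketch of what supplying that semantic context can look like, the snippet below assembles business facts (mirroring the blog's examples) into the prompt given to an NL2SQL model. The prompt format and variable names are invented for illustration; AlloyDB AI's context services manage this for you.

```python
# Semantic knowledge that the schema alone does not carry.
semantic_context = [
    "Scheduled departure times are in the flights table; "
    "estimated departure times are in the flight status table.",
    "The airline 'Cymbal Air' corresponds to airline code 'CY' "
    "in the flights table.",
]

question = ("When does the next Cymbal Air flight from "
            "San Francisco to New York City depart?")

# Assemble an NL2SQL prompt that grounds the model in business context.
prompt = "\n".join([
    "Translate the question into SQL.",
    "Business context:",
    *("- " + fact for fact in semantic_context),
    "Question: " + question,
])
print(prompt)
```

With the airline-code fact in context, the model can emit a filter like `airline = 'CY'` instead of guessing at a string match on "Cymbal Air".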

A new security model for gen AI

Most applications require fine-grained access control to protect user data. Traditionally, access control was enforced at the application level, but this was possible only when every SQL query that hit the database was composed by an application developer.

However, when developers and vendors — including AlloyDB — are synthesizing SQL on the fly and executing it on behalf of an LLM, a new security model is needed. LLMs alone cannot be trusted to enforce data access.

AlloyDB’s new parameterized secure views make it easy to lock down access in the database itself. Unlike typical row-level security, parameterized secure views don’t require you to set up a unique database user per end-user. Instead, access to certain rows in the database view is limited based on the value of parameters, like a user ID. This makes application development easier, is compatible with existing mechanisms for end-user identification and application connectivity, offers more robust security, and allows application developers to continue to take advantage of connection pooling to boost performance.
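The pattern can be sketched conceptually as follows. This is a SQLite stand-in with an invented `orders` schema, not AlloyDB's actual syntax: one shared connection (as with a connection pool) serves all end-users, and the row filter is bound as a parameter from the application's trusted identity layer, outside any model-generated text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, user_id INTEGER, item TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 101, "laptop"), (2, 102, "phone")],
)

def secure_orders_view(end_user_id: int) -> list:
    """Conceptual parameterized secure view: the user_id predicate is
    fixed in the view definition and bound as a parameter, so SQL
    layered on top of the view cannot widen the result set."""
    return conn.execute(
        "SELECT order_id, item FROM orders WHERE user_id = ?",
        (end_user_id,),
    ).fetchall()

print(secure_orders_view(101))  # only user 101's rows: [(1, 'laptop')]
```

Because the filter lives with the view rather than in the application or in generated SQL, even a successful prompt injection can only query the rows the current end-user was already entitled to see.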

How natural language support works

As described above, AlloyDB AI’s easiest-to-use form is an interface that takes in any natural language question, including ones that are broad or imprecise, and returns either accurate results or follow-ups such as disambiguation questions. Developers can use this interface to create a single tool for answering unpredictable questions, with flexible querying across datasets to get not only the right data but the right insights, leveraging joins, filters, and aggregations in the database to improve analysis.

For more advanced development, AlloyDB AI supports the broader spectrum of natural language interactions with structured data by making core primitives available to developers directly. These building blocks offer innovators more control and flexibility on intent clarification and context use, allowing you to stitch together the pieces as-needed to meet the specific needs of your applications.

Getting started

AlloyDB’s natural language support is coming soon to both AlloyDB in Google Cloud and AlloyDB Omni. In the interim, we’ve made a few primitives available as a technology preview in AlloyDB Omni today, including basic NL2SQL support and parameterized secure views.

To get started, follow our guide to deploy AlloyDB Omni on a Google Cloud virtual machine, on your server or laptop, or on the cloud of your choice.
