Choose models and infrastructure for your generative AI application

Learn which products, frameworks, and tools are the best match for building your generative AI application. The following figure shows common components in a generative AI application hosted on Google Cloud.
Venn diagram showing the components of a generative AI system
  1. Application hosting: Compute to host your application. Your application can use Google Cloud's client libraries and SDKs to talk to different Cloud products.

  2. Model hosting: Scalable and secure hosting for a generative model.

  3. Model: Generative model for text, chat, images, code, embeddings, and multimodal.

  4. Grounding solution: Anchor model output to verifiable, updated sources of information.

  5. Database: Store your application's data. You might reuse your existing database as your grounding solution, by augmenting prompts with the results of a SQL query, or by storing your data as vector embeddings using an extension like pgvector.

  6. Storage: Store files such as images, videos, or static web frontends. You might also use Cloud Storage for the raw grounding data (e.g., PDFs) that you later convert into embeddings and store in a vector database.
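The pgvector approach mentioned for the database component can be sketched as follows. This is a minimal, hypothetical example: it assumes a PostgreSQL database with the pgvector extension enabled, an illustrative `documents` table, and a DB-API connection (for example, from psycopg); the table and column names are not from this page.

```python
def find_similar_documents(conn, query_embedding, limit=5):
    """Return the documents whose embeddings are closest to the query.

    Assumes a PostgreSQL database with the pgvector extension and a
    table like: CREATE TABLE documents (id serial, content text,
    embedding vector(768)). `conn` is a DB-API connection (for
    example, from psycopg). All names here are illustrative.
    """
    # pgvector's <=> operator computes cosine distance, so ordering by
    # it ascending returns the most similar rows first.
    sql = """
        SELECT content
        FROM documents
        ORDER BY embedding <=> %s::vector
        LIMIT %s
    """
    with conn.cursor() as cur:
        cur.execute(sql, (str(query_embedding), limit))
        return [row[0] for row in cur.fetchall()]
```

The returned rows can then be spliced into the model prompt as grounding context.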

The sections below walk through each of those components, helping you choose which Google Cloud products to try.

Application hosting infrastructure

Choose a product to host and serve your application workload, which makes calls out to the generative model.

Want managed serverless infrastructure? Yes: Cloud Run.

Can your application be containerized? Yes: Google Kubernetes Engine.

Model hosting infrastructure

Google Cloud provides multiple ways to host a generative model, from the flagship Vertex AI platform to customizable, portable hosting on Google Kubernetes Engine (GKE).

Using Gemini and need enterprise features like scaling, security, data privacy, and observability? No: Gemini Developer API.

Want fully managed infrastructure, with first-class generative AI tools and APIs? Yes: Vertex AI.

Does your model require a specialized kernel or legacy OS, or have special licensing terms? Yes: Compute Engine.

Model

Google Cloud provides a set of state-of-the-art foundation models through Vertex AI, including Gemini. You can also deploy a third-party model from Vertex AI Model Garden, or self-host one on GKE, Cloud Run, or Compute Engine.
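As a rough sketch, calling a Gemini model hosted on Vertex AI from Python might look like the following. It assumes the `google-cloud-aiplatform` package and application default credentials; the project ID and model name are placeholders, not values from this page.

```python
def generate_text(prompt, project_id="your-project-id", location="us-central1"):
    """Send a text prompt to a Gemini model hosted on Vertex AI.

    Sketch only: requires the google-cloud-aiplatform package and
    application default credentials. The project ID and model name
    are placeholders.
    """
    import vertexai
    from vertexai.generative_models import GenerativeModel

    # Initialize the SDK against your project and region.
    vertexai.init(project=project_id, location=location)
    model = GenerativeModel("gemini-1.5-pro")  # or another available model
    response = model.generate_content(prompt)
    return response.text
```

Your application-hosting layer (Cloud Run, GKE, or Compute Engine) would call a function like this in response to user requests.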

Generating code? Yes: Codey (Vertex AI).

Generating images? Yes: Imagen (Vertex AI).

Generating embeddings for search, classification, or clustering? Yes: text-embedding (Vertex AI).

OK, you want to generate text. Would you like to include images or video in your text prompts (multimodal)? Yes: Gemini (Vertex AI).

OK, just text prompts. Want to use Google's most capable flagship model? Yes: Gemini (Vertex AI).

Grounding

To ensure informed and accurate model responses, you may want to ground your generative AI application with real-time data. This is called retrieval-augmented generation (RAG).

You can implement grounding with your own data in a vector database, which stores embeddings in a format optimized for operations like similarity search. Google Cloud offers multiple vector database solutions for different use cases.
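At its core, the similarity search a vector database performs ranks stored embeddings by their closeness to a query embedding. A self-contained sketch of that idea, using toy 3-dimensional vectors in place of real model embeddings (which would come from a model such as text-embedding and have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy corpus: text mapped to illustrative 3-d "embeddings".
corpus = {
    "Cloud Run hosts containers serverlessly": [0.9, 0.1, 0.0],
    "GKE runs Kubernetes workloads": [0.8, 0.3, 0.1],
    "Imagen generates images": [0.0, 0.2, 0.9],
}

def search(query_embedding, k=1):
    """Return the k corpus texts most similar to the query embedding."""
    ranked = sorted(
        corpus,
        key=lambda text: cosine_similarity(query_embedding, corpus[text]),
        reverse=True,
    )
    return ranked[:k]

print(search([0.85, 0.2, 0.05]))
```

A vector database does this at scale, with indexing structures that avoid scanning every stored embedding.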

Note: You can also ground with traditional (non-vector) databases, by simply querying an existing database like Cloud SQL or Firestore and using the result in your model prompt.
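A minimal sketch of that traditional-database approach, using Python's built-in sqlite3 in place of Cloud SQL or Firestore so the example is self-contained; the schema and prompt wording are illustrative:

```python
import sqlite3

# In production this would be a Cloud SQL or Firestore query; an
# in-memory SQLite database stands in here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("standard plan", 10.0), ("premium plan", 25.0)])

def build_grounded_prompt(question):
    """Augment the user's question with facts retrieved from the database."""
    rows = conn.execute("SELECT name, price FROM products").fetchall()
    facts = "\n".join(f"- {name}: ${price:.2f}/month" for name, price in rows)
    return (f"Answer using only these facts:\n{facts}\n\n"
            f"Question: {question}")

prompt = build_grounded_prompt("How much does the premium plan cost?")
print(prompt)
```

The model then answers from the retrieved facts rather than from its training data alone.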

Want a simple solution, with no access to the underlying embeddings? Yes: Vertex AI Search & Conversation.

Do you need low-latency vector search, large-scale serving, or do you want to use a specialized vector DB? Yes: Vertex AI Vector Search.

Is your data accessed programmatically (OLTP)? Already using a SQL database? If you want to use Google AI models directly from your database or require low latency, consider AlloyDB for PostgreSQL; otherwise, Cloud SQL.

Have a large analytical dataset (OLAP)? Require batch processing and frequent SQL table access by humans or scripts (data science)? Yes: BigQuery.

Grounding with APIs

Instead of (or in addition to) grounding with your own data, you can call the APIs that many online services offer to retrieve grounding data and augment your model prompt.
  * Create, deploy, and manage extensions that connect large language models to the APIs of external systems.
  * Explore a variety of document loaders and API integrations for your gen AI apps, from YouTube to Google Scholar.
  * If you're using models hosted in Vertex AI, you can ground model responses using Vertex AI Search, Google Search, or inline/infile text.
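The same augmentation pattern works with an external API: call the API, then splice its response into the prompt. A sketch with a stubbed weather lookup standing in for a real API client (the function, cities, and values are invented for illustration):

```python
def get_weather(city):
    """Stub for an external weather API call.

    A real application would issue an HTTP request to a weather
    service and parse its response; the values here are canned.
    """
    canned = {"Paris": "18°C, light rain", "Tokyo": "26°C, clear"}
    return canned.get(city, "unknown")

def build_api_grounded_prompt(question, city):
    """Augment the prompt with fresh data retrieved from the (stubbed) API."""
    observation = get_weather(city)
    return (f"Current weather in {city}: {observation}\n"
            f"Using that data, answer: {question}")

print(build_api_grounded_prompt("Should I bring an umbrella?", "Paris"))
```

Extensions and function calling automate this loop: the model decides which API to call, and the result is fed back into the conversation.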

Start building

LangChain is an open source framework for generative AI apps that allows you to build context into your prompts and take action based on the model's response.

View code samples for popular use cases and deploy examples of generative AI applications that are secure, efficient, resilient, high-performing, and cost-effective.