Overview of Generative AI on Vertex AI

Generative AI on Vertex AI lets you build production-ready applications that are powered by state-of-the-art generative AI models hosted on Google's advanced, global infrastructure.

Get started


Enterprise ready

Deploy your generative AI applications at scale with enterprise-grade security, data residency, access transparency, and low latency.

State-of-the-art capabilities

Expand the capabilities of your applications by using the 2,000,000-token context window supported by Gemini 1.5 Pro.

Open platform

Vertex AI gives you access to over 100 models from third-party AI companies, including Anthropic's Claude 3.5 Sonnet, Meta Llama 3, and Mistral AI Mixtral 8x7B.

Core capabilities

  • Multimodal processing

    Process multiple types of input media in a single request, such as images, video, audio, and documents.

  • Embeddings generation

    Generate embeddings to perform tasks such as search, classification, clustering, and outlier detection (see the embeddings sketch after this list).

  • Model tuning

    Adapt models to perform specific tasks with greater precision and accuracy.

  • Function calling

    Connect models to external APIs to extend the model's capabilities (see the function calling sketch after this list).

  • Grounding

    Connect models to external data sources to reduce hallucinations in responses (see the grounding sketch after this list).

  • Image generation

    Generate and edit images by using natural language text prompts.
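
For instance, generating text embeddings takes only a few lines with the Vertex AI SDK for Python. This is a minimal sketch; the model name text-embedding-004 and the input strings are illustrative placeholders.

from vertexai.language_models import TextEmbeddingModel

# Load a text embedding model and embed a few strings.
model = TextEmbeddingModel.from_pretrained("text-embedding-004")
embeddings = model.get_embeddings(["wheel", "tire", "banana"])
for embedding in embeddings:
    print(len(embedding.values))  # dimensionality of each embedding vector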

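As a sketch of function calling with the Python SDK, the example below declares a hypothetical get_current_weather function and passes it to a Gemini model as a tool; the model can then propose a structured call to that function rather than answering in free-form text.

from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool

# Describe a hypothetical external API as a function declaration.
get_weather = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather for a city",
    parameters={"type": "object", "properties": {"city": {"type": "string"}}},
)

weather_tool = Tool(function_declarations=[get_weather])
model = GenerativeModel("gemini-1.5-flash", tools=[weather_tool])
response = model.generate_content("What is the weather like in Boston?")

# The model returns a structured function call instead of plain text.
print(response.candidates[0].content.parts[0].function_call)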

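And as a sketch of grounding, assuming the grounding helper and Google Search retrieval tool in the Python SDK, a request can be grounded in web search results as follows; the prompt is illustrative.

from vertexai.generative_models import GenerativeModel, Tool, grounding

# Ground responses in Google Search results to reduce hallucinations.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "When is the next total solar eclipse in the US?", tools=[search_tool]
)
print(response.text)
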
Vertex AI and Google AI differences

The Vertex AI Gemini API and the Google AI Gemini API both let you incorporate the capabilities of Gemini models into your applications. The platform that's right for you depends on your goals, as detailed in the following comparison.

Vertex AI Gemini API

  • Designed for: Scaled deployments, enterprise use
  • Features: Technical support, modality-based pricing, indemnity protection, 100+ models in Model Garden

Google AI Gemini API

  • Designed for: Experimentation, prototyping, ease of use
  • Features: Free tier, token-based pricing

Migrate from Google AI to Vertex AI
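
In practice, the main code change when migrating is usually how the client is initialized: the Google AI Gemini API authenticates with an API key, while the Vertex AI Gemini API authenticates with a Google Cloud project and region. The following sketch contrasts the two Python SDKs; the API key, project ID, and model name are placeholders.

# Google AI Gemini API: authenticate with an API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
response = genai.GenerativeModel("gemini-1.5-flash").generate_content("Hello")

# Vertex AI Gemini API: authenticate with a Google Cloud project and region.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="YOUR_PROJECT_ID", location="us-central1")
response = GenerativeModel("gemini-1.5-flash").generate_content("Hello")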

Build using Vertex AI SDKs

Vertex AI provides SDKs in the following languages:

Python

from vertexai.generative_models import GenerativeModel, Part

model = GenerativeModel(model_name="gemini-1.5-flash")
response = model.generate_content(
    [Part.from_uri(IMAGE_URI, mime_type="image/jpeg"), "What is this?"])

Node.js

const {VertexAI} = require('@google-cloud/vertexai');

const vertexAI = new VertexAI({project: projectId, location: location});
const generativeVisionModel = vertexAI.getGenerativeModel({model: 'gemini-1.5-flash'});

const result = await generativeVisionModel.generateContent({
  contents: [{
    role: 'user',
    parts: [
      {text: 'What is this?'},
      {inlineData: {data: imgDataInBase64, mimeType: 'image/png'}},
    ],
  }],
});

Java

public static void main(String[] args) throws Exception {
  try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION)) {
    GenerativeModel model = new GenerativeModel("gemini-1.5-flash", vertexAi);
    List<Content> contents = new ArrayList<>();
    contents.add(ContentMaker
        .fromMultiModalData(
            "What is this?",
            PartMaker.fromMimeTypeAndData("image/jpeg", IMAGE_URI)));
    GenerateContentResponse response = model.generateContent(contents);
  }
}

Go

// The project and location are set on the client; GenerativeModel takes only the model name.
model := client.GenerativeModel("gemini-1.5-flash")
img := genai.ImageData("jpeg", imageBytes)
prompt := genai.Text("What is this?")
resp, err := model.GenerateContent(ctx, img, prompt)

C#

var predictionServiceClient = new PredictionServiceClientBuilder {
  Endpoint = $"{location}-aiplatform.googleapis.com"
}.Build();

var generateContentRequest = new GenerateContentRequest {
  Model = $"projects/{projectId}/locations/{location}/publishers/google/models/gemini-1.5-flash",
  Contents = {
    new Content {
      Role = "USER",
      Parts = {
        new Part {Text = "What's in this?"},
        new Part {FileData = new() {MimeType = "image/jpeg", FileUri = fileUri}}
      }
    }
  }
};

GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);
