Migrate from Gemini 1.5 API to Gemini 2.0 API on Vertex AI

This guide shows how to migrate from Gemini 1.0 and Gemini 1.5 models (both Flash and Pro) to Gemini 2.0 models.

Differences between Gemini 1.5 and Gemini 2.0

The following are some differences between Gemini 2.0 and our 1.0 and 1.5 models:

Setup

Vertex AI SDK

If you reuse the Vertex AI SDK, the setup process is the same for our 1.5 and 2.0 models. For more information, see Introduction to the Vertex AI SDK for Python.

The following is a short code sample that installs the Vertex AI SDK for Python:

# pip install --upgrade --quiet google-cloud-aiplatform

import vertexai

vertexai.init(project="PROJECT_ID", location="LOCATION")

Replace PROJECT_ID with your Google Cloud project ID, and replace LOCATION with the location of your Google Cloud project (for example, us-central1).
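If you prefer not to hard-code these values, one common pattern is to read them from environment variables and pass them to vertexai.init. The following is a minimal sketch; the variable names GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION are an assumption of this example, not a requirement of the SDK:

import os

import vertexai

# Read the project ID and location from environment variables.
# The variable names used here are an assumption of this example.
project_id = os.environ["GOOGLE_CLOUD_PROJECT"]
location = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")

vertexai.init(project=project_id, location=location)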

Gen AI SDK

If you choose to use the Gen AI SDK, the setup process differs between the 1.0 models and the 1.5 and 2.0 models. For more information, see Google Gen AI SDKs.

The following is a short code sample that installs the Gen AI SDK for Python:

# pip install --upgrade --quiet google-genai

from google import genai

client = genai.Client(vertexai=True, project="PROJECT_ID", location="LOCATION")

Replace PROJECT_ID with your Google Cloud project ID, and replace LOCATION with the location of your Google Cloud project (for example, us-central1).
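To confirm that the client is configured correctly before migrating your application code, you can send a short test prompt. The following is a minimal sketch that reuses the client shown above and the gemini-2.0-flash-001 model used later in this guide:

from google import genai

client = genai.Client(vertexai=True, project="PROJECT_ID", location="LOCATION")

# Quick check: send a short prompt and print the response text.
response = client.models.generate_content(
  model="gemini-2.0-flash-001",
  contents="The opposite of hot is")

print(response.text)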

Migrate to 2.0

The following sections show how to migrate to Gemini 2.0 using either the Vertex AI SDK or our new Gen AI SDK.

Vertex AI SDK

Each of the following pairs of code samples shows Gemini 1.5 code alongside the equivalent code after migration to Gemini 2.0.

Simple text generation

The following code samples show the differences between the Gemini 1.5 API and the Gemini 2.0 API for basic text generation:

Gemini 1.5

from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content("The opposite of hot is")
print(response.text)

Gemini 2.0

from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")
response = model.generate_content("The opposite of hot is")
print(response.text)

Text generation with parameters

The following code samples show the differences between the Gemini 1.5 API and the Gemini 2.0 API for text generation with optional generation parameters:

Gemini 1.5

from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.5-flash-002")

prompt = """You are an expert at solving word problems. Solve the following problem:
I have three houses, each with three cats. Each cat owns 4 mittens and a hat.
Each mitten was knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?
Think about it step by step, and show your work."""

response = model.generate_content(
    prompt,
    generation_config={
        "temperature": 0.1,
        "top_p": 1.0,
        "top_k": 40,
        "max_output_tokens": 800,
    },
)

print(response.text)

Gemini 2.0

from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

prompt = """You are an expert at solving word problems. Solve the following problem:
I have three houses, each with three cats. Each cat owns 4 mittens and a hat.
Each mitten was knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?
Think about it step by step, and show your work."""

response = model.generate_content(
    prompt,
    generation_config={
        "temperature": 0.1,
        "top_p": 1.0,
        "top_k": 40,
        "max_output_tokens": 800,
    },
)

print(response.text)
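If you prefer typed configuration over a plain dictionary, the Vertex AI SDK also provides a GenerationConfig class. The following is a minimal sketch of the same request using GenerationConfig; the parameter values match the samples above:

from vertexai.generative_models import GenerationConfig, GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

# Equivalent to the dictionary form above, expressed with the typed GenerationConfig class.
response = model.generate_content(
    "The opposite of hot is",
    generation_config=GenerationConfig(
        temperature=0.1,
        top_p=1.0,
        top_k=40,
        max_output_tokens=800,
    ),
)

print(response.text)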

Gen AI SDK

Each of the following pairs of code samples shows Gemini 1.5 code that uses the Vertex AI SDK alongside the equivalent Gen AI SDK code after migration to Gemini 2.0:

Gemini 1.5

import vertexai

from vertexai.generative_models import GenerativeModel

vertexai.init(project="PROJECT_ID",
  location="LOCATION")

model = GenerativeModel("gemini-1.5-flash-002")

response = model.generate_content("The opposite of hot is")

print(response.text)

Gemini 2.0

from google import genai

client = genai.Client(vertexai=True,
  project='PROJECT_ID',
  location='LOCATION',
  http_options={'api_version': 'v1'})

response = client.models.generate_content(
  model='gemini-2.0-flash-001',
  contents='The opposite of hot is')

print(response.text)
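The Gen AI SDK passes generation parameters through a config argument rather than generation_config. The following is a minimal sketch of the earlier text-generation-with-parameters example migrated to the Gen AI SDK; it assumes the types.GenerateContentConfig class from the google-genai package and reuses the parameter values shown above:

from google import genai
from google.genai import types

client = genai.Client(vertexai=True,
  project='PROJECT_ID',
  location='LOCATION')

# Generation parameters are passed through the config argument.
response = client.models.generate_content(
  model='gemini-2.0-flash-001',
  contents='The opposite of hot is',
  config=types.GenerateContentConfig(
    temperature=0.1,
    top_p=1.0,
    top_k=40,
    max_output_tokens=800))

print(response.text)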