Zero-shot optimizer

The zero-shot optimizer lets you automatically refine and improve user-written prompts. Often, a prompt doesn't produce the model response you want because of ambiguous language, missing context, or irrelevant information. This optimizer analyzes and rewrites an existing prompt to be clearer, more effective, and better aligned with the model's capabilities, ultimately leading to higher-quality responses.

The zero-shot optimizer is particularly useful for:

- Adapting to model updates: When you upgrade to a newer version of a model, your existing prompts might no longer perform optimally.
- Enhancing prompt comprehension: When the phrasing of a prompt is complex or could be misinterpreted, the tool can rephrase it for maximum clarity and precision, reducing the chance of an undesirable outcome.

There are two ways to use the optimizer:

- Instruction generation: Instead of writing complex system instructions from scratch, you can describe your goal or task in plain language. The optimizer then generates a complete, well-structured set of system instructions designed to achieve your objective (see the sketch after this list).
- Prompt refinement: You have a working prompt, but the model's output is inconsistent, slightly off-topic, or lacks the detail you want. The optimizer can help improve the prompt for better output.

The optimizer supports prompt optimization in all languages supported by Gemini and is available through the Vertex AI SDK.
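For example, in instruction-generation mode you can pass a plain-language description of your task and use the result as system instructions. The following is a minimal sketch, assuming the client.prompt_optimizer.optimize_prompt entry point shown in the sample below accepts a plain-language task description; the project ID and task description are placeholders:

import vertexai

# Assumes you have already authenticated (see the full sample below).
client = vertexai.Client(project="PROJECT", location="us-central1")

# Describe the goal in plain language instead of writing full system instructions.
task_description = "Help apprentice chefs answer questions about healthy recipes."

# The optimizer returns a structured set of system instructions for the task.
response = client.prompt_optimizer.optimize_prompt(prompt=task_description)
print(response.suggested_prompt)  # assumed attribute; see the output fields below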
Before you begin

To ensure that the Compute Engine default service account has the necessary permissions to optimize prompts, ask your administrator to grant the Compute Engine default service account the following IAM roles on the project:

- Vertex AI User (roles/aiplatform.user)
- Vertex AI Service Agent (roles/aiplatform.serviceAgent)

For more information about granting roles, see Manage access to projects, folders, and organizations.

Your administrator might also be able to give the Compute Engine default service account the required permissions through custom roles or other predefined roles.
Optimize a prompt
# Import libraries
import vertexai

# Authenticate (Google Colab; skip if you already use Application Default Credentials)
from google.colab import auth

PROJECT_NAME = "PROJECT"  # Replace with your Google Cloud project ID
auth.authenticate_user(project_id=PROJECT_NAME)

# Initialize the Vertex AI client
client = vertexai.Client(project=PROJECT_NAME, location="us-central1")
# Input original prompt to optimize
prompt = """You are a professional chef. Your goal is teaching how to cook healthy cooking recipes to your apprentice.
Given a question from your apprentice and some context, provide the correct answer to the question.
Use the context to return a single and correct answer with some explanation.
"""
# Optimize prompt
output = client.prompt_optimizer.optimize_prompt(prompt=prompt)
# View optimized prompt
print(output)
The output object is of type OptimizeResponse and provides information about the optimization process. The most important field is suggested_prompt, which contains the optimized prompt that you can use to get better results from your model. The other fields, especially applicable_guidelines, are useful for understanding why and how your prompt was improved, which can help you write better prompts in the future. Here's an example of the output:

{
"optimization_mode": "zero_shot",
"applicable_guidelines": [
{
"applicable_guideline": "Structure",
"suggested_improvement": "Add role definition.",
"text_before_change": "...",
"text_after_change": "Role: You are an AI assistant...\n\nTask Context:\n..."
},
{
"applicable_guideline": "RedundancyInstructions",
"suggested_improvement": "Remove redundant explanation.",
"text_before_change": "...",
"text_after_change": ""
}
],
"original_prompt": "...",
"suggested_prompt": "Role: You are an AI assistant...\n\nTask Context:\n..."
}
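Once you have the response, you typically read suggested_prompt to reuse the improved prompt and scan applicable_guidelines to see which changes were made and why. The following is a minimal sketch, assuming the JSON fields above are exposed as attributes of the OptimizeResponse object (adapt the accessors if your SDK version returns dictionaries):

# Reuse the optimized prompt and inspect why it was changed.
optimized_prompt = output.suggested_prompt
print("Optimized prompt:\n", optimized_prompt)

for guideline in output.applicable_guidelines:
    # Each entry describes one improvement that the optimizer applied.
    print(guideline.applicable_guideline, "-", guideline.suggested_improvement)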