Zero-shot optimizer

Use the zero-shot optimizer to automatically refine and improve your prompts. If a prompt has ambiguous language, missing context, or irrelevant information, it might not produce the response you want. The optimizer analyzes and rewrites your prompt to be clearer, more effective, and better aligned with the model's capabilities, which can lead to higher-quality responses.

When to use the zero-shot optimizer

You can use the optimizer in two ways. The following table compares the two approaches.

| Option | Description | Use case |
| --- | --- | --- |
| Instruction generation | Generates a complete and well-structured set of system instructions based on a plain-language description of your goal. | You need to create a new, complex prompt from scratch and want to ensure that it's well structured. |
| Prompt refinement | Analyzes and improves an existing prompt to produce more consistent, detailed, or on-topic responses. | You have a working prompt, but the model's output doesn't meet your quality standards. |

Before you begin

To ensure that the Compute Engine default service account has the necessary permissions to optimize prompts, ask your administrator to grant the Compute Engine default service account the following IAM roles on the project:

- Vertex AI User (roles/aiplatform.user)
- Vertex AI Service Agent (roles/aiplatform.serviceAgent)

For more information about granting roles, see Manage access to projects, folders, and organizations. Your administrator might also be able to give the Compute Engine default service account the required permissions through custom roles or other predefined roles.

Optimize a prompt

The optimizer is available through the Vertex AI SDK and supports prompt optimization in all languages that Gemini supports. The following sample shows how to call the optimizer:
```python
# Import libraries
import vertexai

# Authenticate in Google Colab (skip this step in environments that
# already have application default credentials)
from google.colab import auth

PROJECT_NAME = "PROJECT"
auth.authenticate_user(project_id=PROJECT_NAME)

# Initialize the Vertex AI client
client = vertexai.Client(project=PROJECT_NAME, location="us-central1")

# The original prompt to optimize
prompt = """You are a professional chef. Your goal is to teach your apprentice how to cook healthy recipes.
Given a question from your apprentice and some context, provide the correct answer to the question.
Use the context to return a single and correct answer with some explanation.
"""

# Optimize the prompt
output = client.prompt_optimizer.optimize_prompt(prompt=prompt)

# View the optimized prompt
print(output)
```
Understand the output

The output object is an OptimizeResponse type that contains information about the optimization process. The response includes the following key fields:

- suggested_prompt: The optimized prompt that you can use to get better results from your model.
- applicable_guidelines: Information about why and how your prompt was improved, which can help you write better prompts in the future.

The following is an example of the output:

```json
{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant...\n\nTask Context:\n..."
    },
    {
      "applicable_guideline": "RedundancyInstructions",
      "suggested_improvement": "Remove redundant explanation.",
      "text_before_change": "...",
      "text_after_change": ""
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant...\n\nTask Context:\n..."
}
```
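In practice, you typically read suggested_prompt to reuse the improved prompt and iterate over applicable_guidelines to understand what changed. The following sketch parses the example JSON above with Python's standard json module as a stand-in for a live OptimizeResponse object (the exact attribute access on the SDK object may differ; the field names are the documented ones):

```python
import json

# Example response in the shape shown above (a stand-in for a live
# OptimizeResponse object; the field names come from the documented output)
response_json = """
{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant..."
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant..."
}
"""

response = json.loads(response_json)

# The optimized prompt, ready to use in your next model call
optimized_prompt = response["suggested_prompt"]
print(optimized_prompt)

# Review why and how the prompt was improved
for guideline in response["applicable_guidelines"]:
    print(f'{guideline["applicable_guideline"]}: {guideline["suggested_improvement"]}')
```

Logging the applicable_guidelines entries over time can help you spot recurring issues, such as missing role definitions, in the prompts you write.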
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-21 UTC.