The zero-shot optimizer lets you automatically refine and improve user-written prompts. Often, a prompt may not produce the model response you want due to ambiguous language, missing context, or the inclusion of irrelevant information. This optimizer analyzes and rewrites an existing prompt to be clearer, more effective, and better aligned with the model's capabilities, ultimately leading to higher-quality responses.
The zero-shot optimizer is particularly useful for:
Adapting to Model Updates: When you upgrade to a newer version of a model, your existing prompts might no longer perform optimally. The optimizer can rewrite them so they stay aligned with the new model's capabilities.
Enhancing Prompt Comprehension: When the phrasing of a prompt is complex or could be misinterpreted, the tool can rephrase it for maximum clarity and precision, reducing the chance of an undesirable outcome.
There are two ways to use the optimizer:
Instruction Generation: Instead of writing complex system instructions from scratch, you can describe your goal or task in plain language. The optimizer will then generate a complete and well-structured set of system instructions designed to achieve your objective.
Prompt Refinement: You have a working prompt, but the model's output is inconsistent, slightly off-topic, or lacks the detail you want. The optimizer can rewrite the prompt to produce better output, as sketched below.
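In either case the input is just a string; what differs is whether you pass a plain-language goal or an existing prompt. A minimal sketch, assuming both modes are driven through the same optimize_prompt call shown in the full sample below (both strings are hypothetical examples, not part of the SDK):

# Instruction generation: describe the task in plain language and let the
# optimizer expand it into full system instructions (hypothetical input)
goal_description = "Create system instructions for a chef who teaches an apprentice to cook healthy recipes."

# Prompt refinement: pass an existing prompt that underperforms so the
# optimizer can rewrite it (hypothetical input)
existing_prompt = "You are a chef. Answer your apprentice's cooking questions."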
The optimizer works with all languages supported by Gemini and is available through the Vertex AI SDK, as shown in the following sample:
# Import libraries
import vertexai

# Google Colab authentication
from google.colab import auth

# Replace "PROJECT" with your Google Cloud project ID
PROJECT_NAME = "PROJECT"
auth.authenticate_user(project_id=PROJECT_NAME)

# Initialize the Vertex AI client
client = vertexai.Client(project=PROJECT_NAME, location='us-central1')
# Input original prompt to optimize
prompt = """You are a professional chef. Your goal is teaching how to cook healthy cooking recipes to your apprentice.
Given a question from your apprentice and some context, provide the correct answer to the question.
Use the context to return a single and correct answer with some explanation.
"""
# Optimize prompt
output = client.prompt_optimizer.optimize_prompt(prompt=prompt)
This output object is of type OptimizeResponse and provides information about the optimization process. The most important part is the suggested_prompt field, which contains the optimized prompt that you can use to get better results from your model. The other fields, especially applicable_guidelines, are useful for understanding why and how your prompt was improved, which can help you write better prompts in the future. Here's an example of the output:
{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant...\n\nTask Context:\n..."
    },
    {
      "applicable_guideline": "RedundancyInstructions",
      "suggested_improvement": "Remove redundant explanation.",
      "text_before_change": "...",
      "text_after_change": ""
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant...\n\nTask Context:\n..."
}
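In code, you can read these fields straight from the response object returned by the sample above. A minimal sketch, assuming the fields are exposed as attributes on OptimizeResponse with the names shown in the example output:

# Use the optimized prompt in place of the original
print(output.suggested_prompt)

# Review why the prompt was changed (attribute names assumed to match
# the fields in the example output above)
for guideline in output.applicable_guidelines:
    print(f"{guideline.applicable_guideline}: {guideline.suggested_improvement}")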