Zero-shot optimizer

Use the zero-shot optimizer to automatically refine and improve your prompts. If a prompt has ambiguous language, missing context, or irrelevant information, it might not produce the response you want. The optimizer analyzes and rewrites your prompt to be clearer, more effective, and better aligned with the model's capabilities, which can lead to higher-quality responses.

The zero-shot optimizer is useful in the following scenarios:

  • Adapt to model updates: When you upgrade to a newer model version, your existing prompts might not perform optimally; the optimizer can rework them for the new model.
  • Enhance prompt comprehension: If a prompt's phrasing is complex or could be misinterpreted, the optimizer can rephrase it for clarity and precision.

This page shows you how to use the zero-shot prompt optimizer and includes the following sections:

  • When to use the zero-shot optimizer: Describes the two main use cases for the optimizer: generating new instructions and refining existing prompts.
  • Optimize a prompt: Provides a code sample to demonstrate how to call the optimizer using the Vertex AI SDK.
  • Understand the output: Explains the structure of the response object, including the suggested prompt and improvement guidelines.

When to use the zero-shot optimizer

You can use the optimizer in two ways. The following table compares the two approaches.

| Option | Description | Use case |
| --- | --- | --- |
| Instruction generation | Generates a complete and well-structured set of system instructions based on a plain-language description of your goal. | When you need to create a new, complex prompt from scratch and want to ensure it's well-structured. |
| Prompt refinement | Analyzes and improves an existing prompt to produce more consistent, detailed, or on-topic responses. | When you have a working prompt but the model's output is not meeting your quality standards. |

Before you begin

To ensure that the Compute Engine default service account has the necessary permissions to optimize prompts, ask your administrator to grant it the following IAM roles on the project:

For more information about granting roles, see Manage access to projects, folders, and organizations.

Your administrator might also be able to give the Compute Engine default service account the required permissions through custom roles or other predefined roles.

Optimize a prompt

The optimizer is available through the Vertex AI SDK and supports prompt optimization in all languages that Gemini supports. The following sample shows how to call the optimizer:

# Import libraries
import vertexai

# Google Colab authentication
from google.colab import auth
PROJECT_NAME = "PROJECT"
auth.authenticate_user(project_id=PROJECT_NAME)

# Initialize the Vertex AI client
client = vertexai.Client(project=PROJECT_NAME, location='us-central1')

# Input original prompt to optimize
prompt = """You are a professional chef. Your goal is teaching how to cook healthy cooking recipes to your apprentice.

Given a question from your apprentice and some context, provide the correct answer to the question.
Use the context to return a single and correct answer with some explanation.
"""

# Optimize prompt
output = client.prompt_optimizer.optimize_prompt(prompt=prompt)

# View optimized prompt
print(output)

Understand the output

The output object is of type OptimizeResponse and contains information about the optimization process. The response includes the following key fields:

  • suggested_prompt: The optimized prompt that you can use to get better results from your model.
  • applicable_guidelines: Information about why and how your prompt was improved, which can help you write better prompts in the future.

The following is an example of the output:

{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant...\n\nTask Context:\n..."
    },
    {
      "applicable_guideline": "RedundancyInstructions",
      "suggested_improvement": "Remove redundant explanation.",
      "text_before_change": "...",
      "text_after_change": ""
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant...\n\nTask Context:\n..."
}
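In practice, you read these fields off the response object that optimize_prompt returns. The sketch below works with the example JSON above as a plain dictionary rather than a live API call, so the field access is an assumption based on the structure shown, not a verified SDK signature:

```python
import json

# Example response, mirroring the structure shown above (values abridged).
raw = """
{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant..."
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant..."
}
"""
response = json.loads(raw)

# The optimized prompt is the text you would send back to the model.
optimized_prompt = response["suggested_prompt"]

# Each guideline explains one change; reviewing them can help you
# write better prompts by hand in the future.
for guideline in response["applicable_guidelines"]:
    print(f'{guideline["applicable_guideline"]}: '
          f'{guideline["suggested_improvement"]}')
```

Storing the guidelines alongside your prompt history makes it easier to see which kinds of improvements the optimizer applies most often to your prompts.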