This document describes how to use the Vertex AI prompt optimizer to automatically optimize prompt performance by improving the system instructions for a set of prompts.
The Vertex AI prompt optimizer helps you improve your prompts quickly and at scale, without manually rewriting system instructions or individual prompts. This is especially useful when you want to reuse system instructions and prompts that were written for one model with a different model.
We offer two approaches for optimizing prompts:
- The zero-shot optimizer is a real-time, low-latency optimizer that improves a single prompt or system instruction template. It is fast and requires no setup beyond providing your original prompt or system instruction.
- The data-driven optimizer is a batch, task-level, iterative optimizer. It improves prompts by evaluating the target model's responses to a set of labeled sample prompts against evaluation metrics that you specify. It is intended for more advanced optimization: you configure the optimization parameters and provide a few labeled samples.
Both methods are available through the user interface (UI) or the Vertex AI SDK.
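To make the data-driven approach concrete, the sketch below shows the kind of evaluation loop it performs: score each candidate system instruction against labeled samples with a metric, then keep the best-scoring candidate. This is a self-contained toy illustration, not the Vertex AI SDK API; the names (`fake_model`, `exact_match`, `labeled_samples`) and the stand-in model are hypothetical.

```python
# Toy illustration of a data-driven optimization loop.
# In a real run, the model call would go to your selected target
# Gemini model and the metric would be one you configure.

labeled_samples = [
    {"prompt": "2 + 2", "target": "4"},
    {"prompt": "3 + 5", "target": "8"},
]

candidate_instructions = [
    "Answer with a full sentence.",
    "Answer with only the number.",
]

def fake_model(instruction: str, prompt: str) -> str:
    """Stand-in for the target model; answers toy arithmetic prompts."""
    answer = str(eval(prompt))  # toy only -- never eval untrusted input
    if "full sentence" in instruction:
        return f"The answer is {answer}."
    return answer

def exact_match(response: str, target: str) -> float:
    """One possible evaluation metric: exact string match."""
    return 1.0 if response == target else 0.0

def score(instruction: str) -> float:
    """Average the metric over all labeled samples for one candidate."""
    return sum(
        exact_match(fake_model(instruction, s["prompt"]), s["target"])
        for s in labeled_samples
    ) / len(labeled_samples)

# Keep the system instruction that scores highest on the samples.
best = max(candidate_instructions, key=score)
print(best)  # -> "Answer with only the number."
```

The real optimizer iterates over many generated instruction candidates rather than a fixed list, but the core idea is the same: labeled samples plus a metric define what "better" means for your task.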
## Supported target models for optimization
The zero-shot optimizer is model-independent and can improve prompts for any Google model.
The data-driven optimizer supports only generally available Gemini models.
## What's next
- Learn about the zero-shot optimizer
- Learn about the data-driven optimizer