Optimize prompts

The Vertex AI prompt optimizer helps you improve prompt performance by automatically refining the system instructions for a set of prompts. It lets you improve prompts at scale without manual rewriting, which is useful when you adapt prompts written for one model to another model.

This page compares the two prompt optimization approaches and lists the models that each method supports.

Compare prompt optimization approaches

Vertex AI offers two approaches for optimizing prompts: the zero-shot optimizer and the data-driven optimizer. You can use both methods through the Google Cloud console or the Vertex AI SDK.

The following table provides a high-level comparison of these approaches.

| Option | Description | Pros | Cons | Use case |
| --- | --- | --- | --- | --- |
| Zero-shot optimizer | A real-time, low-latency optimizer that improves a single prompt or system instruction template. | Fast and requires no additional setup. | Less configurable; optimizes a single prompt at a time. | Improving individual prompts or system instructions. |
| Data-driven optimizer | A batch, task-level iterative optimizer that uses labeled data and evaluation metrics. | Highly configurable and enables more advanced optimization. | Requires labeled data and more setup; slower batch process. | Advanced optimization for specific tasks where performance can be measured against a dataset. |
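As a quick heuristic, the selection criteria in the table above can be sketched as a small decision helper. This is illustrative only: the function name and parameters are assumptions for this example and are not part of the Vertex AI SDK.

```python
# Illustrative helper (not part of the Vertex AI SDK): encodes the
# selection criteria from the comparison table above.

def choose_optimizer(has_labeled_dataset: bool, needs_batch_optimization: bool) -> str:
    """Return which prompt optimization approach fits the scenario.

    The data-driven optimizer requires labeled data and runs as a slower
    batch process; in every other case the zero-shot optimizer is the
    lighter-weight choice.
    """
    if has_labeled_dataset and needs_batch_optimization:
        return "data-driven optimizer"
    return "zero-shot optimizer"

# A single prompt with no evaluation dataset -> zero-shot.
print(choose_optimizer(has_labeled_dataset=False, needs_batch_optimization=False))
```

In practice, the deciding factor is whether you have a labeled dataset to measure performance against; without one, only the zero-shot optimizer applies.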

Important: The prompt optimizer feature is generally available. However, the prompt optimizer SDK library is experimental and subject to change without notice; you might encounter bugs or breaking changes to APIs and functionality.

Supported target models for optimization

The zero-shot optimizer is model-independent and can improve prompts for any Google model.

The data-driven optimizer supports optimization for generally available Gemini models.

What's next