Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
The **zero-shot optimizer** lets you automatically refine and improve
user-written prompts. Often, a prompt may not produce the model response you
want due to ambiguous language, missing context, or the inclusion of irrelevant
information. This optimizer analyzes and rewrites an existing prompt to be
clearer, more effective, and better aligned with the model's capabilities,
ultimately leading to higher-quality responses.
The zero-shot optimizer is particularly useful for:
- **Adapting to Model Updates:** When you upgrade to a newer version of a
  model, your existing prompts might no longer perform optimally.

- **Enhancing Prompt Comprehension:** When the phrasing of a prompt is complex
  or could be misinterpreted, the tool can rephrase it for maximum clarity and
  precision, reducing the chance of an undesirable outcome.
There are two ways to use the optimizer:
- **Instruction Generation**: Instead of writing complex system instructions
  from scratch, you can describe your goal or task in plain language. The
  optimizer will then generate a complete and well-structured set of system
  instructions designed to achieve your objective.

- **Prompt Refinement**: You have a working prompt, but the model's output is
  inconsistent, slightly off-topic, or lacks the detail you want. The
  optimizer can help improve the prompt for a better output.
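Both modes go through the same API call; only the input string differs. The
sketch below illustrates the two kinds of input (the two example strings are
hypothetical, and the `optimize_prompt` call referenced in the comments is the
one shown later on this page):

```python
# Instruction generation: a plain-language description of the task. The
# optimizer turns this into full, structured system instructions.
task_description = "Help me write a polite follow-up email to a customer."

# Prompt refinement: an existing prompt that underperforms. The optimizer
# rewrites it for clarity and structure.
existing_prompt = """You are an email assistant. Write a follow-up.
Be polite. Also be brief but include all details."""

# Either string can be passed to the optimizer (client setup shown below):
# output = client.prompt_optimizer.optimize_prompt(prompt=task_description)
# output = client.prompt_optimizer.optimize_prompt(prompt=existing_prompt)
```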
The optimizer supports prompt optimization in all languages supported by
Gemini and is available through the [Vertex AI
SDK](/vertex-ai/generative-ai/docs/reference/libraries).
Before you begin
----------------

To ensure that the [Compute Engine default service account](/iam/docs/service-account-types#default)
has the necessary permissions to optimize prompts, ask your administrator to
grant the Compute Engine default service account the following IAM roles on
the project:

- [Vertex AI User](/iam/docs/roles-permissions/aiplatform#aiplatform.user) (`roles/aiplatform.user`)
- [Vertex AI Service Agent](/iam/docs/roles-permissions/aiplatform#aiplatform.serviceAgent) (`roles/aiplatform.serviceAgent`)

**Important:** You must grant these roles to the Compute Engine default
service account, *not* to your user account. Failure to grant the roles to
the correct principal might result in permission errors.

For more information about granting roles, see [Manage access to projects,
folders, and organizations](/iam/docs/granting-changing-revoking-access).

Your administrator might also be able to give the Compute Engine default
service account the required permissions through [custom
roles](/iam/docs/creating-custom-roles) or other [predefined
roles](/iam/docs/roles-overview#predefined).

Optimize a prompt
-----------------
```python
# Import libraries
import vertexai
import logging

# Google Colab authentication
from google.colab import auth

PROJECT_NAME = "PROJECT"
auth.authenticate_user(project_id=PROJECT_NAME)

# Initialize the Vertex AI client
client = vertexai.Client(project=PROJECT_NAME, location='us-central1')

# Input original prompt to optimize
prompt = """You are a professional chef. Your goal is teaching how to cook healthy cooking recipes to your apprentice.

Given a question from your apprentice and some context, provide the correct answer to the question.
Use the context to return a single and correct answer with some explanation.
"""

# Optimize prompt
output = client.prompt_optimizer.optimize_prompt(prompt=prompt)

# View optimized prompt
print(output.model_dump_json(indent=2))
```
This `output` object is of type `OptimizeResponse` and provides information
about the optimization process. The most important field is
`suggested_prompt`, which contains the optimized prompt that you can use to
get better results from your model. The other fields, especially
`applicable_guidelines`, are useful for understanding why and how your prompt
was improved, which can help you write better prompts in the future. Here's an
example of the output:
```json
{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant...\n\nTask Context:\n..."
    },
    {
      "applicable_guideline": "RedundancyInstructions",
      "suggested_improvement": "Remove redundant explanation.",
      "text_before_change": "...",
      "text_after_change": ""
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant...\n\nTask Context:\n..."
}
```
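To act on a response like this, read `suggested_prompt` and scan
`applicable_guidelines` for the reasoning behind each change. The sketch below
parses an abbreviated copy of the example payload as plain JSON to show the
field access; in the SDK you would read the same fields directly on the
`OptimizeResponse` object:

```python
import json

# Abbreviated example payload, matching the response format shown above.
response_json = '''
{
  "optimization_mode": "zero_shot",
  "applicable_guidelines": [
    {
      "applicable_guideline": "Structure",
      "suggested_improvement": "Add role definition.",
      "text_before_change": "...",
      "text_after_change": "Role: You are an AI assistant..."
    }
  ],
  "original_prompt": "...",
  "suggested_prompt": "Role: You are an AI assistant..."
}
'''

result = json.loads(response_json)

# The optimized prompt is what you send to the model going forward.
improved_prompt = result["suggested_prompt"]

# Each guideline entry explains one change the optimizer made.
for guideline in result["applicable_guidelines"]:
    print(f'{guideline["applicable_guideline"]}: {guideline["suggested_improvement"]}')
```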