# Compare prompts

*Last updated (UTC): 2025-09-04.*

The Compare feature lets you see how a different prompt, model, or parameter
setting changes the model's output. You can view prompts and their responses
side by side, and compare them in the following ways:

- With a new prompt.
- With another saved prompt.
- With a ground truth.

| **Note:** The Compare feature doesn't support prompts with media, or chat prompts with more than one exchange.

Before you begin
----------------

To access the Compare feature, follow these steps:

1. In the Google Cloud console, go to the **Create prompt** page.

   [Go to Create prompt](https://console.cloud.google.com/vertex-ai/studio/multimodal)
2. Select **Compare**. The **Compare** page appears.

Create a prompt in the Compare feature
--------------------------------------

On the **Compare** page, you can create a prompt before selecting another prompt
to compare results against.

To create a prompt, follow these steps:

1. In the **New Prompt** field, enter your prompt.

2. Click **Submit prompts**. The model's response appears below the prompt text
   that you entered.

3. Click **Save as new**. A **Save prompt** dialog appears.

4. Enter the name of your new prompt in the **Prompt name** field.

5. Select your region in the **Region** field, or leave it as the default region.

6. If a customer-managed encryption key (CMEK) applies, do the following:

   1. Select the **Customer-managed encryption key (CMEK)** checkbox.
   2. Select a key from the **Select a Cloud KMS key** field.
7. Click **Save**. Your prompt is saved to the list of prompts that you can use on
   the **Compare saved prompt** page.

8. Click **Submit prompts** to compare the prompts and their responses.

You can update your prompts and save the updated versions as new prompts.

Compare with a new prompt
-------------------------

To compare your saved prompt with a new prompt, follow these steps:

1. Click **Compare new prompt**. A **Compare** pane appears.
2. Optional: Click **Switch model** to use a model other than the default.
3. Optional: Expand **Outputs**.

##### Outputs:

1. Optional: If you want the model to output in a specific format such as JSON, click the **Structured output** toggle. After you select **Structured output**, the Grounding options are turned off, because grounding isn't supported with structured output.
2. Optional: Change the **Thinking budget** to one of the following options:
   - **Auto**: The model thinks only when it needs to, and adjusts how much it thinks or analyzes a situation based on what's needed at the time.
   - **Manual**: You can adjust the number of thinking budget tokens.
   - **Off**: No thinking or budgets are used.

<!-- -->

4. Optional: Expand **Tools**.

##### Tools:

1. Select one of the following options:
   - **Grounding: Google**: Grounding with Google Search or Google Maps.
   - **Grounding: Your data**: Grounding with Vertex AI RAG Engine, Vertex AI Search, or Elasticsearch.
     1. If you select **Grounding: Your data**, select the data source that you want to use.

<!-- -->

5. Optional: Expand **Advanced**.

##### Advanced:

1. Select **Region**.
2. Select **Safety Filter Settings**. A dialog appears.
   Keep the default of **Off**, or specify **Block few**, **Block some**, or
   **Block most** for each of the following options:
   - **Hate speech**: Negative or harmful comments targeting identity or protected attributes.
   - **Dangerous content**: Promotes or enables access to harmful goods, services, and activities.
   - **Sexually explicit content**: Contains references to sexual acts or other lewd content.
   - **Harassment content**: Malicious, intimidating, bullying, or abusive comments targeting another individual.
3. Click **Save** to save the settings and close the dialog.
4. Select a temperature from the **Temperature** field. The temperature controls the randomness of token selection. A lower temperature is good when you expect a true or correct response; a higher temperature can lead to more diverse or unexpected results.
5. Select an output token limit from the **Output token limit** field. The output token limit determines the maximum amount of text output from one prompt. A token is approximately four characters.
6. Select the maximum number of responses from the **Max responses** field. This value is the maximum number of model responses generated per prompt. Responses can still be blocked because of safety filters or other policies.
7. Select a value from the **Top-P** field. Top-P changes how the model selects tokens for output.
8. Click the **Stream model responses** toggle. If selected, responses are printed as they're generated.
9. Enter a stop sequence in the **Add stop sequence** field. Press **Enter** after each sequence.

<!-- -->

6. Click **Save** to save the changes to your settings.
7. Click **Apply**.
8. Click **Submit prompts** to compare the prompts and their responses.

For more information about token limits for each model, see [Control the thinking
budget](/vertex-ai/generative-ai/docs/thinking#budget).

Compare with another saved prompt
---------------------------------

To compare your saved prompt with another saved prompt, follow these steps:

1. Click **Compare saved prompt**. The **Existing Prompt** pane appears.
2. Choose up to two existing prompts to compare:

   1. Select a **Prompt name**. If you have many prompts in your list, click in the **Filter** field, select the property that you want to filter by, enter a value, and press <kbd>Enter</kbd>.
   2. Click **Apply**. The **Compare** page displays the prompt that you selected alongside the other prompts that you've created or selected for comparison.
3. Click **Submit prompts** to compare the prompts and their responses.

Compare with a ground truth
---------------------------

Ground truth is your preferred answer to the prompt. All other model responses
are evaluated against the ground truth answer.

To compare your saved prompt with a ground truth, follow these steps:

1. Click **Ground truth**. The **Ground truth** pane appears.
2. Enter your ground truth to generate additional evaluation metrics.
3. Click **Save** to save the ground truth.
4. Click **Submit prompts** to compare the prompts and their responses.

The evaluation metrics that are generated when you compare a prompt with a
ground truth aren't affected by the region that you select.

What's next
-----------

- Explore more examples of prompts in the [Prompt gallery](/vertex-ai/generative-ai/docs/prompt-gallery).
- For more information about evaluating your models, see the [Gen AI evaluation service
  overview](/vertex-ai/generative-ai/docs/models/evaluation-overview).
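As a closing illustration of two of the **Advanced** settings, the sketch below mimics how a stop sequence truncates generated text and applies the four-characters-per-token rule of thumb from the **Output token limit** step. This is a minimal, hypothetical helper in plain Python, not part of any Vertex AI SDK:

```python
def apply_stop_sequences(text: str, stop_sequences: list[str]) -> str:
    """Truncate text at the earliest stop sequence, mimicking how
    generation halts once a stop sequence is produced."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]


def estimate_tokens(text: str) -> int:
    """Rough size estimate using the rule of thumb that a token is
    approximately four characters."""
    return max(1, round(len(text) / 4))


print(apply_stop_sequences("First answer.###Second answer.", ["###"]))  # First answer.
print(estimate_tokens("First answer."))  # 3
```

Under this approximation, an **Output token limit** of 1024 corresponds to roughly 4,096 characters of output; the real tokenization varies by model, so treat this only as a sizing heuristic.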