**Preview:** This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1). Pre-GA features are available "as is" and might have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

Vertex AI Model Optimizer is a dynamic endpoint designed to simplify model selection by automatically applying the Gemini model that best meets your needs. This lets you point your prompts at a single meta-endpoint, and the service intelligently selects the most appropriate Gemini model for your query (Pro, Flash, and so on) based on your cost and quality preferences.

For more information on Model Optimizer pricing, see [Pricing](https://cloud.google.com/vertex-ai/generative-ai/pricing#token-based-pricing).

Benefits

Model Optimizer lets you:

- Simplify your model selection rather than choosing a model for each application
- Optimize for cost, quality, or both, letting you balance performance and budget
- Integrate seamlessly with existing Gemini APIs and SDKs
- Track usage and identify potential for cost savings
- Efficiently handle text-based tasks without a need for manual endpoint selection

Supported models

- Gemini 2.0 Flash (GA)
- Gemini 2.5 Pro (preview, 03-25)

Language support

Model Optimizer supports all languages that are also supported by the Gemini models (see Gemini language support).

Modality

Model Optimizer supports text use cases, including:

- Coding, including function calling and code execution
- Summarization
- Single and multi-turn chat
- Question answering

For limitations and how to handle them, see Handling unsupported features.

Getting started

To get started with Model Optimizer, see the "Getting Started with Model Optimizer" [quickstart Colab notebook](https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb), which you can also open in [Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fmodel-optimizer%2Fintro_model_optimizer.ipynb) or [Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fmodel-optimizer%2Fintro_model_optimizer.ipynb), or [view on GitHub](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb).

Using Vertex AI Model Optimizer

Python

Install the Google Gen AI SDK: `pip install --upgrade google-genai`

To learn more, see the [SDK reference documentation](https://googleapis.github.io/python-genai/).

Set environment variables to use the Gen AI SDK with Vertex AI:
```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
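If you prefer not to rely on environment variables, the Gen AI SDK also lets you configure Vertex AI access directly when constructing the client. The snippet below is a minimal sketch of that alternative, not part of the original sample; the project ID shown is a placeholder:

```python
from google import genai

# Placeholder project ID; replace with your own Google Cloud project.
PROJECT_ID = "your-project-id"

# Equivalent to exporting GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT,
# and GOOGLE_CLOUD_LOCATION before running your script.
client = genai.Client(vertexai=True, project=PROJECT_ID, location="global")
```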
```python
from google import genai
from google.genai.types import (
    FeatureSelectionPreference,
    GenerateContentConfig,
    HttpOptions,
    ModelSelectionConfig,
)

client = genai.Client(http_options=HttpOptions(api_version="v1beta1"))
response = client.models.generate_content(
    model="model-optimizer-exp-04-09",
    contents="How does AI work?",
    config=GenerateContentConfig(
        model_selection_config=ModelSelectionConfig(
            # Options: PRIORITIZE_QUALITY, BALANCED, PRIORITIZE_COST
            feature_selection_preference=FeatureSelectionPreference.BALANCED
        ),
    ),
)
print(response.text)
# Example response:
# Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
#
# Here's a simplified overview:
# ...
```
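Because Model Optimizer picks a model per request, it can be useful to confirm which Gemini model actually served a response. The sketch below is an illustrative addition rather than part of the official sample: it drives a multi-turn chat through the same meta-endpoint with a cost-oriented preference and reads the SDK's `model_version` response field, assuming that field is populated for Model Optimizer responses the same way it is for named Gemini models:

```python
from google import genai
from google.genai.types import (
    FeatureSelectionPreference,
    GenerateContentConfig,
    HttpOptions,
    ModelSelectionConfig,
)

client = genai.Client(http_options=HttpOptions(api_version="v1beta1"))

# Favor cheaper models for this chat session.
config = GenerateContentConfig(
    model_selection_config=ModelSelectionConfig(
        feature_selection_preference=FeatureSelectionPreference.PRIORITIZE_COST
    ),
)

chat = client.chats.create(model="model-optimizer-exp-04-09", config=config)

first = chat.send_message("Summarize the main ideas behind gradient descent.")
print(first.text)

follow_up = chat.send_message("Now explain it to a high-school student.")
print(follow_up.text)

# Report which model handled the last turn (assumption: model_version is
# populated for responses served through Model Optimizer).
print(follow_up.model_version)
```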
Handling unsupported features
Model Optimizer only supports text input and output. However, a request could include different modalities or tools that aren't supported. The following sections cover how Model Optimizer handles these unsupported features.
Multimodal requests

Requests that include prompts with multimodal data, such as video, images, or audio, throw an `INVALID_ARGUMENT` error.

Unsupported tools

Model Optimizer only supports function declarations for requests. If a request contains other tool types, including `google_maps`, `google_search`, `enterprise_web_search`, `retrieval`, or `browse`, an `INVALID_ARGUMENT` error is thrown.
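As an illustration of these limits, the following sketch (not from the official documentation) sends a request whose only tool is a function declaration, which is supported, and then shows how an unsupported multimodal prompt surfaces as an error. It assumes the SDK raises `google.genai.errors.ClientError` for `INVALID_ARGUMENT` responses; the function name and Cloud Storage path are hypothetical:

```python
from google import genai
from google.genai import errors
from google.genai.types import (
    FunctionDeclaration,
    GenerateContentConfig,
    HttpOptions,
    Part,
    Tool,
)

client = genai.Client(http_options=HttpOptions(api_version="v1beta1"))

# Function declarations are the only supported tool type.
weather_tool = Tool(
    function_declarations=[
        FunctionDeclaration(
            name="get_weather",  # hypothetical function, for illustration only
            description="Look up the current weather for a city.",
            parameters={
                "type": "OBJECT",
                "properties": {"city": {"type": "STRING"}},
                "required": ["city"],
            },
        )
    ]
)

response = client.models.generate_content(
    model="model-optimizer-exp-04-09",
    contents="What's the weather in Paris?",
    config=GenerateContentConfig(tools=[weather_tool]),
)
print(response.function_calls)

# A multimodal prompt (here, an image part) is rejected with INVALID_ARGUMENT.
try:
    client.models.generate_content(
        model="model-optimizer-exp-04-09",
        contents=[
            "Describe this image.",
            Part.from_uri(file_uri="gs://my-bucket/image.jpg", mime_type="image/jpeg"),
        ],
    )
except errors.ClientError as e:
    print(f"Request rejected: {e}")
```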
Send feedback

To send feedback about your experience with Model Optimizer, fill out our [feedback survey](https://forms.gle/mn9NGFrNgPUJDpTF6).

If you have questions, technical issues, or feedback about Model Optimizer, contact [model-optimizer-support@google.com](mailto:model-optimizer-support@google.com).

Customer discussion group

To connect directly with the development team, you can join the Vertex AI Model Optimizer Listening Group, where you can learn about the product and help us understand how to make the features work better for you. The group's activities include:

- Virtual workshops to learn more about the features
- Feedback surveys to share your needs and priorities
- 1:1 sessions with Google Cloud employees as we explore new features

Activities are offered about once every 6-8 weeks.
You can take part in as many or as few as you'd like, or you can opt out entirely at any time. To join the group, complete the [Vertex AI Model Optimizer discussion group sign up form](https://forms.gle/oqKp6qrTgCdYaFEP9).

Last updated 2025-08-28 UTC.