Starting April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models will not be available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
# Describe an image by using Gemini and the Chat Completions API
Generate a text description of an image by using the Chat Completions API, which lets you send requests to Vertex AI models by using the OpenAI libraries.
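The request body follows the OpenAI chat format: a message's `content` can be a list of parts that mixes text with image references. The snippet below is only a sketch of that payload shape, lifted from the full sample in the Code sample section; the Cloud Storage URI is the sample image used there.

```python
# Shape of a multimodal Chat Completions request: one user message whose
# "content" is a list of parts -- a text instruction plus an image reference.
# This is the payload used by the full, runnable sample later on this page.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the following image:"},
            {
                "type": "image_url",
                # Cloud Storage URI of the image to describe
                "image_url": "gs://cloud-samples-data/generative-ai/image/scones.jpg",
            },
        ],
    }
]
```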
Explore further
---------------
For detailed documentation that includes this code sample, see the following:

- [Examples](/vertex-ai/generative-ai/docs/migrate/openai/examples)
- [Generate content with the Gemini API in Vertex AI](/vertex-ai/generative-ai/docs/model-reference/inference)

Code sample
-----------
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],[],[],[],null,["# Describe an image by using Gemini and the Chat Completions API\n\nGenerate text description of an image by using the Chat Completions API, which lets you send requests to Vertex AI models by using the OpenAI libraries.\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Examples](/vertex-ai/generative-ai/docs/migrate/openai/examples)\n- [Generate content with the Gemini API in Vertex AI](/vertex-ai/generative-ai/docs/model-reference/inference)\n\nCode sample\n-----------\n\n### Python\n\n\nBefore trying this sample, follow the Python setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Python API\nreference documentation](/python/docs/reference/aiplatform/latest).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n\n from google.auth import default\n import google.auth.transport.requests\n\n import openai\n\n # TODO(developer): Update and un-comment below lines\n # project_id = \"PROJECT_ID\"\n # location = \"us-central1\"\n\n # Programmatically get an access token\n credentials, _ = default(scopes=[\"https://www.googleapis.com/auth/cloud-platform\"])\n credentials.refresh(google.auth.transport.requests.Request())\n\n # OpenAI Client\n client = openai.OpenAI(\n base_url=f\"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi\",\n api_key=credentials.token,\n )\n\n response = client.chat.completions.create(\n model=\"google/gemini-2.0-flash-001\",\n messages=[\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"Describe the following image:\"},\n {\n \"type\": \"image_url\",\n \"image_url\": \"gs://cloud-samples-data/generative-ai/image/scones.jpg\",\n },\n ],\n }\n ],\n )\n\n print(response)\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai)."]]