Examples

This guide provides examples of using Gemini models with the OpenAI-compatible Chat Completions API, covering the topics below.

You can call the Chat Completions API in the following two ways:

| Method | Description | Use case |
| --- | --- | --- |
| Call a managed Gemini model | Send requests to a Google-managed endpoint to use a specific Gemini model. | Best for general use, quick setup, and access to the latest Google models without having to manage infrastructure. |
| Call a self-deployed model | Send requests to the endpoint that you created when you deployed the model on Vertex AI. | Ideal if you need a dedicated endpoint for a fine-tuned model, or specific configurations that the default endpoints don't offer. |

Call Gemini by using the Chat Completions API

You can send either a non-streaming or a streaming request.

| Request type | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Non-streaming | The complete response is generated and then returned in a single block. | Simpler to implement; the full response is available at once. | Higher perceived latency, because the user must wait for the entire response to be generated. |
| Streaming | The response is returned in small chunks as it is generated. To enable streaming, set "stream": true in the request body. | Lower perceived latency and a more interactive experience, because the response is displayed incrementally. | Requires more complex client-side logic to handle the incoming data stream. |

Send a non-streaming request

The following examples show how to send a non-streaming request.

REST

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/${MODEL_ID}",
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
  

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(response)
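
If you only need the model's text rather than the full response object, you can read it from the first choice. A minimal sketch, reusing the response object created above:

# Print only the generated text from the first choice.
# Assumes `response` is the non-streaming result created above.
print(response.choices[0].message.content)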

Send a streaming request

The following examples show how to send a streaming request by setting "stream": true.

REST

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/${MODEL_ID}",
    "stream": true,
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
  

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in response:
    print(chunk)
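
Because each streamed chunk carries only an incremental delta rather than the full message, client code typically concatenates the deltas to rebuild the complete reply. A minimal sketch that replaces the print loop above:

# Rebuild the full reply from the streamed deltas instead of printing raw chunks.
# Assumes `response` is the streaming iterator created above and has not been consumed yet.
full_text = ""
for chunk in response:
    # Some chunks may carry no choices or an empty delta; skip those.
    if chunk.choices and chunk.choices[0].delta.content:
        full_text += chunk.choices[0].delta.content
print(full_text)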

Send a prompt and an image to the Gemini API in Vertex AI

The following example shows how to send a multimodal request that includes text and an image.

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the following image:"},
                {
                    "type": "image_url",
                    "image_url": "gs://cloud-samples-data/generative-ai/image/scones.jpg",
                },
            ],
        }
    ],
)

print(response)

Call a self-deployed model by using the Chat Completions API

Send a non-streaming request

The following examples show how to send a non-streaming request to a self-deployed model.

REST

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/global/endpoints/${ENDPOINT}/chat/completions \
  -d '{
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"
# model_id = "gemma-2-9b-it"
# endpoint_id = "YOUR_ENDPOINT_ID"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/{endpoint_id}",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response)

Send a streaming request

The following examples show how to send a streaming request to a self-deployed model.

REST

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
    https://aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/global/endpoints/${ENDPOINT}/chat/completions \
    -d '{
      "stream": true,
      "messages": [{
        "role": "user",
        "content": "Write a story about a magic backpack."
      }]
    }'
  

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"
# model_id = "gemma-2-9b-it"
# endpoint_id = "YOUR_ENDPOINT_ID"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/{endpoint_id}",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in response:
    print(chunk)

extra_body samples

You can use the extra_body field to pass Google-specific parameters in a request.

  • REST API: To pass parameters with the REST API, add them inside a google object.

    {
      ...,
      "extra_body": {
         "google": {
           ...,
           "thought_tag_marker": "..."
         }
       }
    }
    
  • Python SDK: To pass parameters with the Python SDK, provide a dictionary of parameters in the extra_body argument; a concrete sketch follows the template below.

    client.chat.completions.create(
      ...,
      extra_body = {
        'extra_body': { 'google': { ... } }
      },
    )
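
    For example, a hedged sketch of passing the same thinking_config settings used in the curl sample later on this page, assuming the client object configured in the earlier examples:

    # Pass Google-specific thinking settings through the OpenAI SDK's extra_body argument.
    # Assumes `client` is configured for Vertex AI as in the earlier examples;
    # the parameter values below are illustrative.
    response = client.chat.completions.create(
        model="google/gemini-2.5-flash-preview-04-17",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
        extra_body={
            "extra_body": {
                "google": {
                    "thinking_config": {
                        "include_thoughts": True,
                        "thinking_budget": 10000,
                    },
                    "thought_tag_marker": "think",
                }
            }
        },
    )
    print(response)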
    

extra_content samples

You can use the extra_content field of the REST API to add extra information to messages or tool calls.

  • With a string content field

    {
      "messages": [
        { "role": "...", "content": "...", "extra_content": { "google": { ... } } }
      ]
    }
    
  • On each part of a multipart content field

    {
      "messages": [
        {
          "role": "...",
          "content": [
            { "type": "...", ..., "extra_content": { "google": { ... } } }
          ]
        }
      ]
    }
    
  • On each tool call

    {
      "messages": [
        {
          "role": "...",
          "tool_calls": [
            {
              ...,
              "extra_content": { "google": { ... } }
            }
          ]
        }
      ]
    }
    

Sample curl requests

You can use these curl requests to interact with the API directly, without using an SDK.

extra_body with thinking_config

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/gemini-2.5-flash-preview-04-17",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Are there any prime numbers of the form n*ceil(log(n))?"
          }
        ]
      }
    ],
    "extra_body": {
      "google": {
        "thinking_config": {
          "include_thoughts": true,
          "thinking_budget": 10000
        },
        "thought_tag_marker": "think"
      }
    },
    "stream": true
  }'

Multimodal requests

The Chat Completions API supports a variety of multimodal inputs, including audio and video.

Pass image data by using image_url

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/us-central1/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/gemini-2.0-flash-001",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image" },
          { "type": "image_url", "image_url": "gs://cloud-samples-data/generative-ai/image/scones.jpg" }
        ]
      }
    ]
  }'

Pass audio data by using input_audio

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/us-central1/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/gemini-2.0-flash-001",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this: " },
          {
            "type": "input_audio",
            "input_audio": {
              "format": "audio/mp3",
              "data": "gs://cloud-samples-data/generative-ai/audio/pixel.mp3"
            }
          }
        ]
      }
    ]
  }'

Structured output

You can use the response_format parameter to have the model return structured JSON output.

Python SDK example

from google.auth import default
import google.auth.transport.requests

from pydantic import BaseModel
from openai import OpenAI

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI client configured for the Vertex AI OpenAI-compatible endpoint
client = OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="google/gemini-2.5-flash-preview-04-17",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,
)

print(completion.choices[0].message.parsed)
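
The parsed result is an instance of the Pydantic model, so you can access its fields directly. A small usage sketch:

# Access individual fields of the parsed CalendarEvent.
# `parsed` can be None if the model's output couldn't be parsed into the schema.
event = completion.choices[0].message.parsed
if event:
    print(event.name)
    print(event.date)
    print(event.participants)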

What's next