Examples

This guide provides examples of how to use the OpenAI-compatible Chat Completions API with Gemini models.

You can call the Chat Completions API in the following two ways:

| Method | Description | Use case |
| --- | --- | --- |
| Call a managed Gemini model | Send requests to a Google-managed endpoint for a specific Gemini model. | Ideal for general use cases: quick to set up, and you can use the latest Google models without managing any infrastructure. |
| Call a self-deployed model | Send requests to an endpoint that you created by deploying a model on Vertex AI. | Ideal if you need a dedicated endpoint for a fine-tuned model, or a specific configuration that isn't available on the default endpoints. |

Call Gemini with the Chat Completions API

You can send non-streaming or streaming requests.

| Request type | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Non-streaming | The complete response is generated and then returned as a single block. | Simpler to implement; the full response is available all at once. | Higher perceived latency, because the user waits for the entire response to be generated. |
| Streaming | The response is returned in small chunks as it is generated. To enable streaming, set "stream": true in the request body. | Lower perceived latency; the response appears incrementally, which provides a more interactive experience. | Requires more complex client-side logic to handle the incoming stream. |

Send a non-streaming request

The following example shows how to send a non-streaming request.

REST

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/${MODEL_ID}",
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
  

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(response)
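
The response object follows the standard OpenAI Chat Completions schema. To print only the generated text instead of the full response object, read the message content of the first choice:

# Print only the generated text from the first choice.
print(response.choices[0].message.content)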

Send a streaming request

The following example shows how to send a streaming request by setting "stream": true.

REST

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/${MODEL_ID}",
    "stream": true,
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
  

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in response:
    print(chunk)
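
Each streamed chunk follows the OpenAI chunk schema, and the newly generated text arrives in choices[0].delta.content. The following sketch shows one way to assemble the full response text on the client instead of printing each raw chunk:

# Accumulate the streamed text as it arrives.
full_text = ""
for chunk in response:
    # Some chunks (for example, a final usage chunk) may carry no new text.
    if chunk.choices and chunk.choices[0].delta.content:
        full_text += chunk.choices[0].delta.content
print(full_text)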

Send a prompt and an image to the Gemini API in Vertex AI

The following example shows how to send a multimodal request that includes text and an image.

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the following image:"},
                {
                    "type": "image_url",
                    "image_url": "gs://cloud-samples-data/generative-ai/image/scones.jpg",
                },
            ],
        }
    ],
)

print(response)
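
The example above references an image with a Cloud Storage URI. If your image is stored locally, one option is to base64-encode it and pass it as a data URL in the image_url field, using the OpenAI object form ({"url": ...}). This is a minimal sketch; whether your chosen model and endpoint accept inline base64 data URLs is an assumption you should verify:

import base64

# Hypothetical local file path; replace with your own image.
with open("scones.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the following image:"},
                {
                    # Assumes the endpoint accepts OpenAI-style base64 data URLs.
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)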

Call a self-deployed model with the Chat Completions API

Send a non-streaming request

The following example shows how to send a non-streaming request to a self-deployed model.

REST

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/global/endpoints/${ENDPOINT}/chat/completions \
  -d '{
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"
# model_id = "gemma-2-9b-it"
# endpoint_id = "YOUR_ENDPOINT_ID"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/{endpoint_id}",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response)

Send a streaming request

The following example shows how to send a streaming request to a self-deployed model.

REST

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
    https://aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/global/endpoints/${ENDPOINT}/chat/completions \
    -d '{
      "stream": true,
      "messages": [{
        "role": "user",
        "content": "Write a story about a magic backpack."
      }]
    }'
  

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.auth import default
import google.auth.transport.requests

import openai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# location = "us-central1"
# model_id = "gemma-2-9b-it"
# endpoint_id = "YOUR_ENDPOINT_ID"

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/{endpoint_id}",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in response:
    print(chunk)

extra_body examples

You can use the extra_body field to pass Google-specific parameters in a request.

  • REST API: To pass parameters by using the REST API, add them to a google object.

    {
      ...,
      "extra_body": {
         "google": {
           ...,
           "thought_tag_marker": "..."
         }
       }
    }
    
  • Python SDK: To pass parameters by using the Python SDK, provide them in a dictionary to the extra_body argument, as shown in the sketch after this list.

    client.chat.completions.create(
      ...,
      extra_body = {
        'extra_body': { 'google': { ... } }
      },
    )
    
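
For example, a complete Python request that enables Gemini thinking output through extra_body could look like the following sketch. It reuses the client configuration from the earlier examples, and the thinking_config values mirror the curl example later in this guide; adjust them for your model:

# Assumes `client` is an openai.OpenAI client configured for the Vertex AI
# OpenAI-compatible endpoint, as in the earlier examples.
response = client.chat.completions.create(
    model="google/gemini-2.5-flash-preview-04-17",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    extra_body={
        # The REST body expects these fields inside an "extra_body" object,
        # so the dictionary is nested one level deeper.
        "extra_body": {
            "google": {
                "thinking_config": {
                    "include_thoughts": True,
                    "thinking_budget": 10000,
                },
                "thought_tag_marker": "think",
            }
        }
    },
)
print(response.choices[0].message.content)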

extra_content examples

You can use the extra_content field of the REST API to attach additional information to a message or tool call.

  • With a string content field

    {
      "messages": [
        { "role": "...", "content": "...", "extra_content": { "google": { ... } } }
      ]
    }
    
  • On each part in a multipart content field

    {
      "messages": [
        {
          "role": "...",
          "content": [
            { "type": "...", ..., "extra_content": { "google": { ... } } }
          ]
        }
      ]
    }
    
  • On each tool call

    {
      "messages": [
        {
          "role": "...",
          "tool_calls": [
            {
              ...,
              "extra_content": { "google": { ... } }
            }
          ]
        }
      ]
    }
    

Sample curl requests

You can use these curl requests to interact with the API directly, without using an SDK.

Use thinking_config with extra_body

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/gemini-2.5-flash-preview-04-17",
    "messages": [
      { "role": "user",
      "content": [
        { "type": "text",
          "text": "Are there any prime numbers of the form n*ceil(log(n))?"
        }] }],
    "extra_body": {
      "google": {
          "thinking_config": {
          "include_thoughts": true, "thinking_budget": 10000
        },
        "thought_tag_marker": "think" } },
    "stream": true }'

Multimodal requests

The Chat Completions API supports a variety of multimodal inputs, including audio and video.

Pass image data with image_url

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/us-central1/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/gemini-2.0-flash-001",
    "messages": [{ "role": "user", "content": [
      { "type": "text", "text": "Describe this image" },
      { "type": "image_url", "image_url": "gs://cloud-samples-data/generative-ai/image/scones.jpg" }] }] }'

Pass audio data with input_audio

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/us-central1/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/gemini-2.0-flash-001",
    "messages": [
      { "role": "user",
        "content": [
          { "type": "text", "text": "Describe this: " },
          { "type": "input_audio", "input_audio": {
            "format": "audio/mp3",
            "data": "gs://cloud-samples-data/generative-ai/audio/pixel.mp3" } }] }] }'

Structured output

You can use the response_format parameter to request structured JSON output from the model.

Example using the Python SDK

from pydantic import BaseModel
from openai import OpenAI

# Configure the client for the Vertex AI OpenAI-compatible endpoint
# (base_url and api_key), as shown in the earlier examples in this guide.
client = OpenAI()

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="google/gemini-2.5-flash-preview-04-17",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,
)

print(completion.choices[0].message.parsed)
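
The parse helper converts the Pydantic model into a JSON schema behind the scenes. If you prefer to build the request yourself, a sketch of an equivalent call with an explicit OpenAI-style json_schema response format (assuming your model and endpoint support it) looks like this:

import json

completion = client.chat.completions.create(
    model="google/gemini-2.5-flash-preview-04-17",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    # Assumes the endpoint accepts the OpenAI-style "json_schema" response format.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "calendar_event",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "date": {"type": "string"},
                    "participants": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "date", "participants"],
            },
        },
    },
)
print(json.loads(completion.choices[0].message.content))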

What's next