Deploy GKE Inference Gateway


This page describes how to deploy GKE Inference Gateway.

This page is intended for networking specialists responsible for managing GKE infrastructure, and for platform administrators who manage AI workloads.

Before reading this page, ensure that you're familiar with the following:

GKE Inference Gateway builds on the Google Kubernetes Engine (GKE) Gateway to optimize the serving of generative AI workloads on GKE. It efficiently manages and scales AI workloads, enables workload-specific performance objectives such as latency, and improves resource utilization, observability, and AI safety.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • Enable the Compute Engine API, the Network Services API, and the Model Armor API, if needed.

    Go to Enable access to APIs and follow the instructions.

GKE Gateway controller requirements

  • GKE version 1.32.3.
  • Google Cloud CLI version 407.0.0 or later.
  • The Gateway API is supported on VPC-native clusters only.
  • You must enable a proxy-only subnet.
  • Your cluster must have the HttpLoadBalancing add-on enabled.
  • If you are using Istio, you must upgrade Istio to one of the following versions:
    • 1.15.2 or later
    • 1.14.5 or later
    • 1.13.9 or later
  • If you are using a Shared VPC, then in the host project, you need to assign the Compute Network User role to the GKE service account for the service project.

Restrictions and limitations

The following restrictions and limitations apply:

  • Multi-cluster Gateways are not supported.
  • GKE Inference Gateway is supported only on the gke-l7-regional-external-managed and gke-l7-rilb GatewayClass resources.
  • Cross-region internal Application Load Balancers are not supported.

Configure GKE Inference Gateway

To configure GKE Inference Gateway, consider the following example: a team runs vLLM and Llama3 models and is actively experimenting with two distinct LoRA fine-tuned adapters, "food-review" and "cad-fabricator".

The high-level workflow for configuring GKE Inference Gateway is as follows:

  1. Prepare your environment: set up the necessary infrastructure and components.
  2. Create an inference pool: define a pool of model servers with the InferencePool custom resource.
  3. Specify model serving objectives: specify model objectives with the InferenceModel custom resource.
  4. Create the Gateway: expose the inference service using the Gateway API.
  5. Create the HTTPRoute: define how HTTP traffic is routed to the inference service.
  6. Send inference requests: send requests to the deployed model.

Prepare your environment

  1. Install Helm.

  2. Create a GKE cluster:
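
    A cluster creation command isn't shown on this page; the following is a minimal sketch that creates a VPC-native cluster with the Gateway API enabled. CLUSTER_NAME and LOCATION are placeholders to replace:

    # Creates a VPC-native cluster (required for the Gateway API) with the
    # Gateway controller enabled.
    gcloud container clusters create CLUSTER_NAME \
        --location=LOCATION \
        --gateway-api=standard \
        --enable-ip-alias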

  3. To install the InferencePool and InferenceModel custom resource definitions (CRDs) in your GKE cluster, run the following command:

    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/download/VERSION/manifests.yaml
    

    Replace VERSION with the version of the CRDs that you want to install (for example, v0.3.0).
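
    To confirm that the CRDs were installed, you can list them (the names follow from the inference.networking.x-k8s.io API group):

    kubectl get crd inferencepools.inference.networking.x-k8s.io \
        inferencemodels.inference.networking.x-k8s.io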

  4. If your GKE version is earlier than v1.32.2-gke.1182001 and you want to use Model Armor with GKE Inference Gateway, you must install the traffic and routing extension CRDs:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-gateway-api/refs/heads/main/config/crd/networking.gke.io_gcptrafficextensions.yaml
    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-gateway-api/refs/heads/main/config/crd/networking.gke.io_gcproutingextensions.yaml
    
  5. To set up authorization to scrape metrics, create the inference-gateway-sa-metrics-reader-secret Secret:

    kubectl apply -f - <<EOF
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: inference-gateway-metrics-reader
    rules:
    - nonResourceURLs:
      - /metrics
      verbs:
      - get
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: inference-gateway-sa-metrics-reader
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: inference-gateway-sa-metrics-reader-role-binding
      namespace: default
    subjects:
    - kind: ServiceAccount
      name: inference-gateway-sa-metrics-reader
      namespace: default
    roleRef:
      kind: ClusterRole
      name: inference-gateway-metrics-reader
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: inference-gateway-sa-metrics-reader-secret
      namespace: default
      annotations:
        kubernetes.io/service-account.name: inference-gateway-sa-metrics-reader
    type: kubernetes.io/service-account-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: inference-gateway-sa-metrics-reader-secret-read
    rules:
    - resources:
      - secrets
      apiGroups: [""]
      verbs: ["get", "list", "watch"]
      resourceNames: ["inference-gateway-sa-metrics-reader-secret"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gmp-system:collector:inference-gateway-sa-metrics-reader-secret-read
      namespace: default
    roleRef:
      name: inference-gateway-sa-metrics-reader-secret-read
      kind: ClusterRole
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - name: collector
      namespace: gmp-system
      kind: ServiceAccount
    EOF
    
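
    To confirm that the Secret was populated with a service account token (an optional quick check), decode it:

    kubectl get secret inference-gateway-sa-metrics-reader-secret -n default \
        -o jsonpath='{.data.token}' | base64 --decode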

Create a model server and model deployment

This section shows how to deploy a model server and model. The example uses a vLLM model server serving a Llama3 model. The Deployment is labeled app:vllm-llama3-8b-instruct. It also uses two LoRA adapters from Hugging Face named food-review and cad-fabricator.

You can adapt this example to your own model server container and model, serving port, and deployment name. You can also configure LoRA adapters in the deployment, or serve the base model. The following steps describe how to create the necessary Kubernetes resources.

  1. Create a Kubernetes Secret to store your Hugging Face token. This token is used to access the LoRA adapters:

    kubectl create secret generic hf-token --from-literal=token=HF_TOKEN
    

    Replace HF_TOKEN with your Hugging Face token.

  2. To deploy on an nvidia-h100-80gb accelerator type, save the following manifest as vllm-llama3-8b-instruct.yaml. This manifest defines a Kubernetes Deployment with your model and model server:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vllm-llama3-8b-instruct
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: vllm-llama3-8b-instruct
      template:
        metadata:
          labels:
            app: vllm-llama3-8b-instruct
        spec:
          containers:
            - name: vllm
              image: "vllm/vllm-openai:latest"
              imagePullPolicy: Always
              command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
              args:
              - "--model"
              - "meta-llama/Llama-3.1-8B-Instruct"
              - "--tensor-parallel-size"
              - "1"
              - "--port"
              - "8000"
              - "--enable-lora"
              - "--max-loras"
              - "2"
              - "--max-cpu-loras"
              - "12"
              env:
                # Enabling LoRA support temporarily disables automatic v1, we want to force it on
                # until 0.8.3 vLLM is released.
                - name: PORT
                  value: "8000"
                - name: HUGGING_FACE_HUB_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: hf-token
                      key: token
                - name: VLLM_ALLOW_RUNTIME_LORA_UPDATING
                  value: "true"
              ports:
                - containerPort: 8000
                  name: http
                  protocol: TCP
              lifecycle:
                preStop:
                  # vLLM stops accepting connections when it receives SIGTERM, so we need to sleep
                  # to give upstream gateways a chance to take us out of rotation. The time we wait
                  # is dependent on the time it takes for all upstreams to completely remove us from
                  # rotation. Older or simpler load balancers might take upwards of 30s, but we expect
                  # our deployment to run behind a modern gateway like Envoy which is designed to
                  # probe for readiness aggressively.
                  sleep:
                    # Upstream gateway probers for health should be set on a low period, such as 5s,
                    # and the shorter we can tighten that bound the faster that we release
                    # accelerators during controlled shutdowns. However, we should expect variance,
                    # as load balancers may have internal delays, and we don't want to drop requests
                    # normally, so we're often aiming to set this value to a p99 propagation latency
                    # of readiness -> load balancer taking backend out of rotation, not the average.
                    #
                    # This value is generally stable and must often be experimentally determined on
                    # for a given load balancer and health check period. We set the value here to
                    # the highest value we observe on a supported load balancer, and we recommend
                    # tuning this value down and verifying no requests are dropped.
                    #
                    # If this value is updated, be sure to update terminationGracePeriodSeconds.
                    #
                    seconds: 30
                  #
                  # IMPORTANT: preStop.sleep is beta as of Kubernetes 1.30 - for older versions
                  # replace with this exec action.
                  #exec:
                  #  command:
                  #  - /usr/bin/sleep
                  #  - 30
              livenessProbe:
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                # vLLM's health check is simple, so we can more aggressively probe it.  Liveness
                # check endpoints should always be suitable for aggressive probing.
                periodSeconds: 1
                successThreshold: 1
                # vLLM has a very simple health implementation, which means that any failure is
                # likely significant. However, any liveness triggered restart requires the very
                # large core model to be reloaded, and so we should bias towards ensuring the
                # server is definitely unhealthy vs immediately restarting. Use 5 attempts as
                # evidence of a serious problem.
                failureThreshold: 5
                timeoutSeconds: 1
              readinessProbe:
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                # vLLM's health check is simple, so we can more aggressively probe it.  Readiness
                # check endpoints should always be suitable for aggressive probing, but may be
                # slightly more expensive than liveness probes.
                periodSeconds: 1
                successThreshold: 1
                # vLLM has a very simple health implementation, which means that any failure is
                # likely significant,
                failureThreshold: 1
                timeoutSeconds: 1
              # We set a startup probe so that we don't begin directing traffic or checking
              # liveness to this instance until the model is loaded.
              startupProbe:
                # Failure threshold is when we believe startup will not happen at all, and is set
                # to the maximum possible time we believe loading a model will take. In our
                # default configuration we are downloading a model from HuggingFace, which may
                # take a long time, then the model must load into the accelerator. We choose
                # 10 minutes as a reasonable maximum startup time before giving up and attempting
                # to restart the pod.
                #
                # IMPORTANT: If the core model takes more than 10 minutes to load, pods will crash
                # loop forever. Be sure to set this appropriately.
                failureThreshold: 3600
                # Set delay to start low so that if the base model changes to something smaller
                # or an optimization is deployed, we don't wait unnecessarily.
                initialDelaySeconds: 2
                # As a startup probe, this stops running and so we can more aggressively probe
                # even a moderately complex startup - this is a very important workload.
                periodSeconds: 1
                httpGet:
                  # vLLM does not start the OpenAI server (and hence make /health available)
                  # until models are loaded. This may not be true for all model servers.
                  path: /health
                  port: http
                  scheme: HTTP
    
              resources:
                limits:
                  nvidia.com/gpu: 1
                requests:
                  nvidia.com/gpu: 1
              volumeMounts:
                - mountPath: /data
                  name: data
                - mountPath: /dev/shm
                  name: shm
                - name: adapters
                  mountPath: "/adapters"
          initContainers:
            - name: lora-adapter-syncer
              tty: true
              stdin: true
              image: us-central1-docker.pkg.dev/k8s-staging-images/gateway-api-inference-extension/lora-syncer:main
              restartPolicy: Always
              imagePullPolicy: Always
              env:
                - name: DYNAMIC_LORA_ROLLOUT_CONFIG
                  value: "/config/configmap.yaml"
              volumeMounts: # DO NOT USE subPath, dynamic configmap updates don't work on subPaths
              - name: config-volume
                mountPath:  /config
          restartPolicy: Always
    
          # vLLM allows VLLM_PORT to be specified as an environment variable, but a user might
          # create a 'vllm' service in their namespace. That auto-injects VLLM_PORT in docker
          # compatible form as `tcp://<IP>:<PORT>` instead of the numeric value vLLM accepts
          # causing CrashLoopBackoff. Set service environment injection off by default.
          enableServiceLinks: false
    
          # Generally, the termination grace period needs to last longer than the slowest request
          # we expect to serve plus any extra time spent waiting for load balancers to take the
          # model server out of rotation.
          #
          # An easy starting point is the p99 or max request latency measured for your workload,
          # although LLM request latencies vary significantly if clients send longer inputs or
          # trigger longer outputs. Since steady state p99 will be higher than the latency
          # to drain a server, you may wish to slightly lower this value either experimentally or
          # via the calculation below.
          #
          # For most models you can derive an upper bound for the maximum drain latency as
          # follows:
          #
          #   1. Identify the maximum context length the model was trained on, or the maximum
          #      allowed length of output tokens configured on vLLM (llama2-7b was trained to
          #      4k context length, while llama3-8b was trained to 128k).
          #   2. Output tokens are the more compute intensive to calculate and the accelerator
          #      will have a maximum concurrency (batch size) - the time per output token at
          #      maximum batch with no prompt tokens being processed is the slowest an output
          #      token can be generated (for this model it would be about 100ms TPOT at a max
          #      batch size around 50)
          #   3. Calculate the worst case request duration if a request starts immediately
          #      before the server stops accepting new connections - generally when it receives
          #      SIGTERM (for this model that is about 4096 / 10 ~ 40s)
          #   4. If there are any requests generating prompt tokens that will delay when those
          #      output tokens start, and prompt token generation is roughly 6x faster than
          #      compute-bound output token generation, so add 20% to the time from above (40s +
          #      16s ~ 55s)
          #
          # Thus we think it will take us at worst about 55s to complete the longest possible
          # request the model is likely to receive at maximum concurrency (highest latency)
          # once requests stop being sent.
          #
          # NOTE: This number will be lower than steady state p99 latency since we stop receiving
          #       new requests which require continuous prompt token computation.
          # NOTE: The max timeout for backend connections from gateway to model servers should
          #       be configured based on steady state p99 latency, not drain p99 latency
          #
          #   5. Add the time the pod takes in its preStop hook to allow the load balancers have
          #      stopped sending us new requests (55s + 30s ~ 85s)
          #
          # Because termination grace period controls when the Kubelet forcibly terminates a
          # stuck or hung process (a possibility due to a GPU crash), there is operational safety
          # in keeping the value roughly proportional to the time to finish serving. There is also
          # value in adding a bit of extra time to deal with unexpectedly long workloads.
          #
          #   6. Add a 50% safety buffer to this time since the operational impact should be low
          #      (85s * 1.5 ~ 130s)
          #
          # One additional source of drain latency is that some workloads may run close to
          # saturation and have queued requests on each server. Since traffic in excess of the
          # max sustainable QPS will result in timeouts as the queues grow, we assume that failure
          # to drain in time due to excess queues at the time of shutdown is an expected failure
          # mode of server overload. If your workload occasionally experiences high queue depths
          # due to periodic traffic, consider increasing the safety margin above to account for
          # time to drain queued requests.
          terminationGracePeriodSeconds: 130
          nodeSelector:
            cloud.google.com/gke-accelerator: "nvidia-h100-80gb"
          volumes:
            - name: data
              emptyDir: {}
            - name: shm
              emptyDir:
                medium: Memory
            - name: adapters
              emptyDir: {}
            - name: config-volume
              configMap:
                name: vllm-llama3-8b-adapters
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vllm-llama3-8b-adapters
    data:
      configmap.yaml: |
          vLLMLoRAConfig:
            name: vllm-llama3.1-8b-instruct
            port: 8000
            defaultBaseModel: meta-llama/Llama-3.1-8B-Instruct
            ensureExist:
              models:
              - id: food-review
                source: Kawon/llama3.1-food-finetune_v14_r8
              - id: cad-fabricator
                source: redcathode/fabricator
    ---
    kind: HealthCheckPolicy
    apiVersion: networking.gke.io/v1
    metadata:
      name: health-check-policy
      namespace: default
    spec:
      targetRef:
        group: "inference.networking.x-k8s.io"
        kind: InferencePool
        name: vllm-llama3-8b-instruct
      default:
        config:
          type: HTTP
          httpHealthCheck:
              requestPath: /health
              port: 8000
    
  3. Apply the sample manifest to your cluster:

    kubectl apply -f vllm-llama3-8b-instruct.yaml
    

The manifest sets the following key fields and parameters:

  • replicas: specifies the number of Pods for the Deployment.
  • image: specifies the Docker image for the model server.
  • command: specifies the command to run when the container starts.
  • args: specifies the arguments to pass to the command.
  • env: specifies environment variables for the container.
  • ports: specifies the ports exposed by the container.
  • resources: specifies the resource requests and limits for the container, such as GPUs.
  • volumeMounts: specifies how volumes are mounted into the container.
  • initContainers: specifies containers that run before the application container.
  • restartPolicy: specifies the restart policy for the Pods.
  • terminationGracePeriodSeconds: specifies the grace period for Pod termination.
  • volumes: specifies the volumes used by the Pod.

You can modify these fields to match your specific requirements.
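
To check that the model server Pods start and become ready (the initial model download can take several minutes), you can watch the rollout:

kubectl rollout status deployment/vllm-llama3-8b-instruct
kubectl get pods -l app=vllm-llama3-8b-instruct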

Create an inference pool

The InferencePool Kubernetes custom resource defines a group of Pods that share a common base large language model (LLM) and compute configuration. The selector field specifies which Pods belong to this pool; its labels must exactly match the labels applied to your model server Pods. The targetPort field defines the port that the model server uses inside the Pods. The extensionRef field references an extension service that provides additional capability for the inference pool. The InferencePool enables GKE Inference Gateway to route traffic to your model server Pods.

Before you create the InferencePool, make sure that the Pods that the InferencePool selects are already running.

To create an InferencePool using Helm, perform the following steps:

helm install vllm-llama3-8b-instruct \
  --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
  --set provider.name=gke \
  --version v0.3.0 \
  oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool

Change the following field to match your Deployment:

  • inferencePool.modelServers.matchLabels.app: the key of the label used to select your model server Pods.

The Helm installation automatically installs the necessary timeout policy, the endpoint picker, and the Pods required for observability.

This creates an InferencePool object, vllm-llama3-8b-instruct, that references the model endpoint services within your Pods. It also creates an endpoint picker Deployment, labeled app:vllm-llama3-8b-instruct-epp, for the created InferencePool.
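
For reference, a hand-written InferencePool equivalent to what the Helm chart renders might look like the following sketch. The field names follow the inference.networking.x-k8s.io/v1alpha2 API used elsewhere on this page, and the endpoint picker name mirrors the one the chart creates; treat this as illustrative rather than the chart's exact output:

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  # Must exactly match the labels on the model server Pods.
  selector:
    app: vllm-llama3-8b-instruct
  # Port that the model server listens on inside each Pod.
  targetPortNumber: 8000
  # Endpoint picker extension that chooses a Pod for each request.
  extensionRef:
    name: vllm-llama3-8b-instruct-epp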

Specify model serving objectives

The InferenceModel custom resource defines a specific model to serve, including support for LoRA-tuned models and its serving criticality. You must define which models are served on an InferencePool by creating InferenceModel resources. These InferenceModel resources can reference either the base model or a LoRA adapter that the model servers in the InferencePool support.

The modelName field specifies the name of the base model or LoRA adapter. The criticality field specifies the serving criticality of the model. The poolRef field specifies the InferencePool on which the model is served.

To create an InferenceModel, perform the following steps:

  1. Save the following sample manifest as inferencemodel.yaml:

    apiVersion: inference.networking.x-k8s.io/v1alpha2
    kind: InferenceModel
    metadata:
      name: inferencemodel-sample
    spec:
      modelName: MODEL_NAME
      criticality: VALUE
      poolRef:
        name: INFERENCE_POOL_NAME
    

    Replace the following:

    • MODEL_NAME: the name of the base model or LoRA adapter. For example, food-review.
    • VALUE: the chosen serving criticality. Choose from Critical, Standard, or Sheddable. For example, Standard.
    • INFERENCE_POOL_NAME: the name of the InferencePool that you created in the previous step. For example, vllm-llama3-8b-instruct.
  2. Apply the sample manifest to your cluster:

    kubectl apply -f inferencemodel.yaml
    

The following example creates an InferenceModel object that configures the food-review LoRA model on the vllm-llama3-8b-instruct InferencePool with Standard serving criticality. The InferenceModel object also configures the base model to be served with Critical priority.

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: food-review
spec:
  modelName: food-review
  criticality: Standard
  poolRef:
    name: vllm-llama3-8b-instruct
  targetModels:
  - name: food-review
    weight: 100

---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: llama3-base-model
spec:
  modelName: meta-llama/Llama-3.1-8B-Instruct
  criticality: Critical
  poolRef:
    name: vllm-llama3-8b-instruct

Create the Gateway

The Gateway resource is the entry point for external traffic into your Kubernetes cluster. It defines the listeners that accept incoming connections.

GKE Inference Gateway works with the following Gateway Classes:

  • gke-l7-rilb: for regional internal Application Load Balancers.
  • gke-l7-regional-external-managed: for regional external Application Load Balancers.

For more information, see the Gateway Classes documentation.

To create a Gateway, perform the following steps:

  1. Save the following sample manifest as gateway.yaml:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: GATEWAY_NAME
    spec:
      gatewayClassName: GATEWAY_CLASS
      listeners:
        - protocol: HTTP
          port: 80
          name: http
    

    Replace GATEWAY_NAME with a unique name for your Gateway resource (for example, inference-gateway), and replace GATEWAY_CLASS with the Gateway Class that you want to use (for example, gke-l7-regional-external-managed).

  2. Apply the manifest to your cluster:

    kubectl apply -f gateway.yaml
    
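
    To verify that the Gateway is programmed and has been assigned an address (provisioning can take a few minutes), you can run:

    kubectl wait --for=condition=Programmed gateway/GATEWAY_NAME --timeout=10m
    kubectl get gateway GATEWAY_NAME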

Note: For more information about configuring TLS to secure your Gateway with HTTPS, see TLS configuration in the GKE documentation.
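
For illustration only (the linked documentation is authoritative), an HTTPS listener typically terminates TLS with a certificate stored in a Kubernetes Secret; CERTIFICATE_SECRET_NAME below is a placeholder:

listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
        - name: CERTIFICATE_SECRET_NAME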

Create the HTTPRoute

The HTTPRoute resource defines how the GKE Gateway routes incoming HTTP requests to backend services, which in this case is your InferencePool. The HTTPRoute resource specifies matching rules (for example, headers or paths) and the backend that traffic should be forwarded to.

  1. To create an HTTPRoute, save the following sample manifest as httproute.yaml:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: HTTPROUTE_NAME
    spec:
      parentRefs:
      - name: GATEWAY_NAME
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: PATH_PREFIX
        backendRefs:
        - name: INFERENCE_POOL_NAME
          group: inference.networking.x-k8s.io
          kind: InferencePool
    

    Replace the following:

    • HTTPROUTE_NAME: a unique name for your HTTPRoute resource. For example, my-route.
    • GATEWAY_NAME: the name of the Gateway resource that you created. For example, inference-gateway.
    • PATH_PREFIX: the path prefix used to match incoming requests. For example, / to match all paths.
    • INFERENCE_POOL_NAME: the name of the InferencePool resource that you want to route traffic to. For example, vllm-llama3-8b-instruct.
  2. Apply the manifest to your cluster:

    kubectl apply -f httproute.yaml
    
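
    To verify that the route was accepted by the Gateway, you can inspect its status conditions. Look for a condition of type Accepted with status "True" under status.parents:

    kubectl get httproute HTTPROUTE_NAME -o yaml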

Send an inference request

After you configure GKE Inference Gateway, you can send inference requests to your deployed model. This lets you generate text based on your input prompt and any parameters you specify.

To send inference requests, perform the following steps:

  1. To get the Gateway endpoint, run the following command:

    IP=$(kubectl get gateway/GATEWAY_NAME -o jsonpath='{.status.addresses[0].value}')
    PORT=PORT_NUMBER # Use 80 for HTTP
    

    Replace the following:

    • GATEWAY_NAME: the name of your Gateway resource.
    • PORT_NUMBER: the port number that you configured in the Gateway.
  2. To send a request to the /v1/completions endpoint using curl, run the following command:

    curl -i -X POST ${IP}:${PORT}/v1/completions \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d '{
        "model": "MODEL_NAME",
        "prompt": "PROMPT_TEXT",
        "max_tokens": MAX_TOKENS,
        "temperature": TEMPERATURE
    }'
    

    Replace the following:

    • MODEL_NAME: the name of the model or LoRA adapter to use.
    • PROMPT_TEXT: the input prompt for the model.
    • MAX_TOKENS: the maximum number of tokens to generate in the response.
    • TEMPERATURE: controls the randomness of the output. Use the value 0 for deterministic output, or a higher value for more creative output.

The following example shows how to send a sample request to GKE Inference Gateway:

curl -i -X POST ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -H "Authorization: Bearer $(gcloud auth print-access-token)" -d '{
    "model": "food-review",
    "prompt": "What is the best pizza in the world?",
    "max_tokens": 2048,
    "temperature": 0
}'

Note the following behaviors:

  • Request body: the request body can include additional parameters, such as stop and top_p. See the OpenAI API specification for a complete list of options.
  • Error handling: implement proper error handling in your client code to handle potential errors in the response. For example, check the HTTP status code in the curl response; a non-200 status code generally indicates an error, as shown in the sketch after this list.
  • Authentication and authorization: for production deployments, secure your API endpoint with authentication and authorization mechanisms. Include the appropriate headers (for example, Authorization) in your requests.
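
As a minimal sketch of the error-handling point above (the IP and PORT variables come from the earlier steps; the payload is illustrative):

# Capture the HTTP status code separately from the response body.
STATUS=$(curl -s -o /tmp/response.json -w '%{http_code}' -X POST ${IP}:${PORT}/v1/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -d '{"model": "food-review", "prompt": "Hello", "max_tokens": 16, "temperature": 0}')
if [ "$STATUS" -ne 200 ]; then
  echo "Request failed with HTTP ${STATUS}:" >&2
  cat /tmp/response.json >&2
fi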

What's next