[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[],[],null,["# Run LLM inference on Cloud Run GPUs with vLLM\n\nThe following codelab shows how to run a backend service that runs [vLLM](https://github.com/vllm-project/vllm), which is an\ninference engine for production systems, along with Google's [Gemma 2](https://developers.googleblog.com/en/smaller-safer-more-transparent-advancing-responsible-ai-with-gemma/), which is\na 2 billion parameters instruction-tuned model.\n\nSee the entire codelab at [Run LLM inference on Cloud Run GPUs with vLLM](https://codelabs.developers.google.com/codelabs/how-to-run-inference-cloud-run-gpu-vllm#0)."]]