# Run LLM inference on Cloud Run GPUs with Hugging Face TGI

The following example shows how to run a backend service using the [Hugging Face Text Generation Inference (TGI) toolkit](https://huggingface.co/docs/text-generation-inference), a toolkit for deploying and serving Large Language Models (LLMs), with Llama 3.

See the entire example at [Deploy Llama 3.1 8B with TGI DLC on Cloud Run](https://huggingface.co/docs/google-cloud/examples/cloud-run-tgi-deployment).
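As a rough sketch of what such a deployment can look like, the commands below deploy a TGI container to Cloud Run with an attached GPU and then query its `/generate` endpoint. The service name, region, container image URI, and model ID are illustrative assumptions, not values taken from this page; consult the linked example for the exact Deep Learning Container image and settings.

```shell
# Deploy a TGI container to Cloud Run with one NVIDIA L4 GPU.
# IMAGE_URI is a placeholder: substitute the Hugging Face TGI DLC
# image from the linked example. HF_TOKEN is needed for gated
# models such as Llama 3.
gcloud beta run deploy tgi-llama \
  --image="IMAGE_URI" \
  --region=us-central1 \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --cpu=8 \
  --memory=32Gi \
  --no-cpu-throttling \
  --max-instances=1 \
  --set-env-vars=MODEL_ID=meta-llama/Meta-Llama-3.1-8B-Instruct,HF_TOKEN=YOUR_HF_TOKEN

# Query the deployed service. TGI exposes a /generate endpoint
# that accepts a JSON body with "inputs" and "parameters".
curl "https://tgi-llama-PROJECT_HASH-uc.a.run.app/generate" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"inputs": "What is Cloud Run?", "parameters": {"max_new_tokens": 100}}'
```

Pinning `--max-instances=1` keeps GPU costs bounded while experimenting; production services would tune concurrency and instance limits to their traffic.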