# Run LLM inference on Cloud Run GPUs with Hugging Face Transformers.js

The following codelab shows how to run a backend service that runs the [Transformers.js package](https://www.npmjs.com/package/@huggingface/transformers) together with Google's [Gemma 2](https://developers.googleblog.com/en/smaller-safer-more-transparent-advancing-responsible-ai-with-gemma/) model. The Transformers.js package is functionally equivalent to the [Hugging Face transformers Python library](https://github.com/huggingface/transformers).

See the entire codelab at [How to Run Transformers.js on Cloud Run GPUs](https://codelabs.developers.google.com/codelabs/how-to-use-transformers-js-cloud-run-gpu#0).

Last updated 2025-09-04 UTC.