[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[],[],null,["# Run batch inference using GPUs on Cloud Run jobs\n\n| **Preview\n| --- GPU support for Cloud Run jobs**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nYou can run batch inference with [Meta's Llama 3.2-1b LLM](https://huggingface.co/meta-llama/Llama-3.2-1B) and [vLLM](https://github.com/vllm-project/vllm) on a Cloud Run job, then write the results directly to Cloud Storage using Cloud Run volume mounts.\n\nSee a step-by-step instructional codelab at [How to run batch inference on Cloud Run jobs](https://codelabs.developers.google.com/codelabs/cloud-run/how-to-batch-inference-cloud-run-jobs)."]]