# Fine tune LLMs using GPUs with Cloud Run jobs

> **Preview — GPU support for Cloud Run jobs**
>
> This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1). Pre-GA features are available "as is" and might have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

You can fine tune a [Gemma 3 model](https://ai.google.dev/gemma/docs/core/model_card_3) on a Cloud Run job, then serve the fine tuned model on a Cloud Run service using [vLLM](https://github.com/vllm-project/vllm).

See a step-by-step instructional codelab at [How to fine tune a model using Cloud Run jobs](https://codelabs.developers.google.com/codelabs/cloud-run/how-to-fine-tune-model-cloud-run-jobs#0).

Last updated: 2025-08-21 (UTC).
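The fine-tune-then-serve workflow can be sketched with `gcloud` commands along the following lines. This is a hedged illustration, not the documented procedure: the job name, container images, project, bucket, and environment variables are placeholders, and because GPU support for jobs is in Preview, the exact GPU flags and required release track may differ from what is shown here.

```shell
# Assumption: a container image that runs your fine-tuning script
# (for example, using Hugging Face libraries) has already been
# pushed to Artifact Registry. All names below are hypothetical.

# 1. Create a Cloud Run job with an attached GPU (Preview) that
#    fine tunes Gemma 3 and writes the tuned weights to Cloud Storage.
gcloud beta run jobs create finetune-gemma3 \
  --image=us-central1-docker.pkg.dev/PROJECT_ID/repo/finetune:latest \
  --region=us-central1 \
  --gpu=1 --gpu-type=nvidia-l4 \
  --set-env-vars=OUTPUT_BUCKET=gs://MY_BUCKET/gemma3-tuned

# 2. Execute the job and wait for it to finish.
gcloud beta run jobs execute finetune-gemma3 \
  --region=us-central1 --wait

# 3. Deploy a Cloud Run service that serves the tuned weights
#    with vLLM's OpenAI-compatible server image.
gcloud beta run deploy serve-gemma3 \
  --image=us-central1-docker.pkg.dev/PROJECT_ID/repo/vllm-serve:latest \
  --region=us-central1 \
  --gpu=1 --gpu-type=nvidia-l4 \
  --set-env-vars=MODEL_BUCKET=gs://MY_BUCKET/gemma3-tuned \
  --no-allow-unauthenticated
```

The codelab linked above walks through the authoritative version of these steps, including how the serving container loads the tuned weights.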