# About supervised fine-tuning for Translation LLM models

Last updated: 2025-09-04 (UTC).

Supervised fine-tuning is a good option when you have a translation task with
available labeled text data. It's particularly effective for domain-specific
applications where the required translations differ significantly from the
general data the large model was originally trained on.

Supervised fine-tuning adapts model behavior with a labeled dataset. This
process adjusts the model's weights to minimize the difference between its
predictions and the actual labels.

Supported models
----------------

The following Translation LLM models support supervised tuning:

- `translation-llm-002` (in Public Preview, supports text only)

Limitations
-----------

- Maximum input and output tokens:
  - Serving: 1,000 tokens (~4,000 characters)
- Validation dataset size: 1,024 examples
- Training dataset file size: up to 1 GB for JSONL
- Training example length: 1,000 tokens (~4,000 characters)
- Adapter size:
  - `Translation LLM V2`: the only supported value is 4. Any other value (for example, 1 or 8) causes the tuning job to fail.

Use cases for using supervised fine-tuning
------------------------------------------

A general pretrained translation model works well when the text to be
translated follows the common, general-purpose text structures that the model
learned from. If you want a model to learn something niche or domain-specific
that deviates from general translation, consider tuning that model.
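Before submitting a job, you can sanity-check a JSONL training file locally against the limits listed above. The following sketch is illustrative only: it approximates the 1,000-token example limit with the ~4,000-character guideline, and the field layout it scans (all string values in each JSON object) is an assumption, not the tuning API's schema.

```python
import json
import os

# Documented limits (see "Limitations" above).
MAX_FILE_BYTES = 1 * 1024**3   # training file: up to 1 GB of JSONL
MAX_EXAMPLE_CHARS = 4000       # ~1,000 tokens per training example


def check_training_file(path: str) -> list[str]:
    """Return human-readable problems found in a JSONL training dataset."""
    problems = []
    if os.path.getsize(path) > MAX_FILE_BYTES:
        problems.append("file exceeds 1 GB")
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue
            try:
                example = json.loads(line)
            except json.JSONDecodeError:
                problems.append(f"line {lineno}: not valid JSON")
                continue
            # Rough length check: total characters across all string fields.
            chars = sum(len(v) for v in example.values() if isinstance(v, str))
            if chars > MAX_EXAMPLE_CHARS:
                problems.append(
                    f"line {lineno}: ~{chars} chars, over the ~4,000-char guideline"
                )
    return problems
```

A character count is only a proxy for tokens, so treat a near-limit result as a prompt to measure actual token counts before tuning.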
For example, you can use model tuning to teach the model the following:

- Specific content from an industry domain, including its jargon or style
- Specific structures or formats for generated output
- Specific behaviors, such as when to provide a terse or a verbose output
- Specific customized outputs for specific types of inputs

Configure a tuning job region
-----------------------------

User data, such as the transformed dataset and the tuned model, is stored in
the tuning job region. The only supported region is `us-central1`.

- If you use the Vertex AI SDK, you can specify the region at initialization.
  For example:

      import vertexai

      vertexai.init(project='myproject', location='us-central1')

- If you create a supervised fine-tuning job by sending a POST request using
  the
  [`tuningJobs.create`](/vertex-ai/docs/reference/rest/v1/projects.locations.tuningJobs/create)
  method, then you use the URL to specify the region where the tuning job
  runs. For example, in the following URL, you specify a region by replacing
  both instances of `TUNING_JOB_REGION` with the region where the job runs.

      https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs

- If you use the [Google Cloud console](/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning#create_a_text_model_supervised_tuning_job),
  you can select the region in the **Region** drop-down field on the
  **Model details** page.
  This is the same page where you select the base model and name the tuned
  model.

Quota
-----

Quota is enforced on the number of concurrent tuning jobs. Every project comes
with a default quota to run at least one tuning job. This is a global quota,
shared across all available regions and supported models. If you want to run
more jobs concurrently, you need to
[request additional quota](/docs/quota_detail/view_manage#requesting_higher_quota)
for `Global concurrent tuning jobs`.

Pricing
-------

Supervised fine-tuning for `translation-llm-002` is in
[Preview](/products#product-launch-stages). While tuning is in Preview, there
is no charge to tune a model or to use it for inference.

Training tokens are calculated as the total number of tokens in your training
dataset multiplied by the number of epochs.

What's next
-----------

- Prepare a [supervised fine-tuning dataset](/vertex-ai/generative-ai/docs/models/translation-supervised-tuning-prepare).
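The training-token formula in the Pricing section amounts to a one-line calculation. The numbers below are made-up illustrative values, not measurements from a real dataset:

```python
def billable_training_tokens(dataset_tokens: int, epochs: int) -> int:
    """Total tokens in the training dataset, multiplied by the number of epochs."""
    return dataset_tokens * epochs


# Example: a dataset totalling 250,000 tokens, tuned for 3 epochs.
print(billable_training_tokens(250_000, 3))  # 750000
```

While tuning for `translation-llm-002` is in Preview there is no charge, but this count is still the quantity that pricing would be based on once the feature is generally available.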