For translation tasks, Vertex AI Studio offers the choice between Gemini models and two specialized translation models from the Cloud Translation API:
Translation LLM - Google's newest, highest-quality LLM-based translation offering. It achieves the highest translation quality while serving at reasonable latencies (roughly 3x faster than Gemini 2.0 Flash).
Cloud Translation Neural Machine Translation (NMT) model - Google's premier real-time translation offering, delivering translations at roughly 100 ms latency. It is among the best-performing real-time NMT models and continues to see ongoing quality advancements. NMT can achieve latencies up to 20x faster than Gemini 2.0 Flash.
Key advantages & differentiators of the Translation LLM
- Unmatched Translation Quality - Translation LLM offers the highest translation quality, performing significantly better on benchmarks than the other models evaluated. It is far more likely to rewrite a sentence so that it sounds natural in the target language, rather than producing the less natural, word-for-word translations often seen from other translation models.
- Superior Quality/Latency Trade-off - Translation LLM provides LLM-powered translations at latencies significantly better than Gemini 2.0 Flash. While slower than NMT, this latency is expected to be acceptable for a broad range of applications that require high quality.
Model feature comparison
Feature | Translation LLM (Powered by Gemini) | NMT model |
---|---|---|
Description | A translation-specialized Large Language Model powered by Gemini, fine-tuned for translation. Available on Vertex AI and the Cloud Translation - Advanced API. | Google's Neural Machine Translation model, available through the Cloud Translation - Advanced and Cloud Translation - Basic APIs. Optimized for simplicity and scale. |
Quality | Highest quality translation. Outperforms NMT, Gemini 2.0 Flash, and Gemini 2.0 Pro in quality. More likely to rewrite sentences for natural flow. Shows significant error reduction. | Medium to high quality depending on the language pair. Among the best-performing real-time NMT models for many language-domain combinations. |
Latency | Latency is significantly better than Gemini 2.0 Flash, but still slower than NMT. | Fastest real-time translation. Low latency, suitable for chat and real-time applications. Achieves latencies up to 20x faster than Gemini 2.0 Flash. |
Language support | Supports 19 languages: Arabic, Chinese, Czech, Dutch, English, French, German, Hindi, Indonesian, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Thai, Turkish, Ukrainian, and Vietnamese. Significant language expansion coming in April 2025. | Supports 189 languages, including Cantonese, Fijian, and Balinese. Translations from any language to any other in the supported list are possible. |
Customization | Support for - Advanced Glossaries, Supervised Fine-tuning on Vertex AI for domain/customer-specific adaptations, Adaptive translation for real-time style customization with a few examples. | Support for - Glossaries to control terminology, and training Custom Models with AutoML Translation in Cloud Translation - Advanced API. |
Translation features | HTML translation | HTML, Batch, and Formatted document translation |
API Integration | Cloud Translation - Advanced API, Vertex AI API | Cloud Translation - Basic API, Cloud Translation - Advanced API, Vertex AI API |
Usage
This section shows you how to use Vertex AI Studio to rapidly translate text from one language to another. You can use the Translation LLM, Gemini, or the NMT model to translate text by using the Google Cloud console or API. Note that the languages that each model supports can vary. Before you request translations, check that the model you're using supports your source and target languages.
Console
In the Vertex AI section of the Google Cloud console, go to the Translate text page in Vertex AI Studio.
In the Run settings pane, select a translation model in the Model field.
To change the model settings (such as temperature), expand Advanced.
Set the source and target languages.
In the input field, enter the text to translate.
Click Submit.
To get the code or curl command that demonstrates how to request translations, click Get code.
Note that in Vertex AI Studio, the Translation LLM lets you provide example translations to tailor model responses to more closely match your style, tone, and industry domain. The model uses your examples as few-shot context before translating your text.
API
Select the model to use for your translations.
Use the Vertex AI API and the Translation LLM to translate text.
Before using any of the request data, make the following replacements:
- PROJECT_NUMBER_OR_ID: The numeric or alphanumeric ID of your Google Cloud project
- LOCATION: The location where you want to run this operation. For example, us-central1.
- SOURCE_LANGUAGE_CODE: The language code of the input text. Set to one of the language codes listed in adaptive translation.
- TARGET_LANGUAGE_CODE: The target language to translate the input text to. Set to one of the language codes listed in adaptive translation.
- SOURCE_TEXT: Text in the source language to translate.
- MIME_TYPE (Optional): The format of the source text, such as text/html or text/plain. By default, the MIME type is set to text/plain.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/cloud-translate-text:predict
Request JSON body:
"instances": [{ "source_language_code": "SOURCE_LANGUAGE_CODE ", "target_language_code": "TARGET_LANGUAGE_CODE ", "contents": ["SOURCE_TEXT "], "mimeType": "MIME_TYPE ", "model": "projects/PROJECT_ID /locations/LOCATION /models/general/translation-llm" }]
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER_OR_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/cloud-translate-text:predict"
PowerShell (Windows)
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER_OR_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/cloud-translate-text:predict" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "translations": [ { "translatedText": "TRANSLATED_TEXT " } ], "languageCode": "TARGET_LANGUAGE " }
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
async function translate() {
  const request = {
    instances: [{
      source_language_code: SOURCE_LANGUAGE_CODE,
      target_language_code: TARGET_LANGUAGE_CODE,
      contents: [SOURCE_TEXT],
      model: "projects/PROJECT_ID/locations/LOCATION/models/general/translation-llm"
    }]
  };
  const {google} = require('googleapis');
  const aiplatform = google.cloud('aiplatform');
  const endpoint = aiplatform.predictionEndpoint(
    'projects/PROJECT_ID/locations/LOCATION/publishers/google/models/cloud-translate-text'
  );

  const [response] = await endpoint.predict(request);
  console.log('Translating');
  console.log(response);
}
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
from google.cloud import aiplatform


def translate():
    # Create a client
    endpoint = aiplatform.Endpoint(
        'projects/PROJECT_ID/locations/LOCATION/publishers/google/models/cloud-translate-text')

    # Initialize the request
    instances = [{
        "source_language_code": 'SOURCE_LANGUAGE_CODE',
        "target_language_code": 'TARGET_LANGUAGE_CODE',
        "contents": ["SOURCE_TEXT"],
        "model": "projects/PROJECT_ID/locations/LOCATION/models/general/translation-llm"
    }]

    # Make the request
    response = endpoint.predict(instances=instances)

    # Handle the response
    print(response)
Use the Vertex AI API and Gemini to translate text.
You can further customize Gemini responses through open prompting and prompt engineering.
Before using any of the request data, make the following replacements:
- PROJECT_NUMBER_OR_ID: the numeric or alphanumeric ID of your Google Cloud project.
- LOCATION: The location to process the request. Available options include the following partial list of regions:
  - us-central1
  - us-west4
  - northamerica-northeast1
  - us-east4
  - us-west1
  - asia-northeast3
  - asia-southeast1
  - asia-northeast1
- MODEL_ID: The ID of the model, such as gemini-1.0-pro-002.
- SOURCE_LANGUAGE_CODE: The language of the input text.
- TARGET_LANGUAGE_CODE: The target language to translate the input text to.
- SOURCE_TEXT: The text to translate.
- TEMPERATURE: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature.
- TOP_P: Top-P changes how the model selects tokens for output. Tokens are selected from the most (see top-K) to least probable until the sum of their probabilities equals the top-P value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-P value is 0.5, then the model selects either A or B as the next token by using temperature and excludes C as a candidate. Specify a lower value for less random responses and a higher value for more random responses.
- TOP_K: Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
- MAX_OUTPUT_TOKENS: Maximum number of tokens that can be generated in the response. A token is approximately four characters; 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses.
- SAFETY_CATEGORY: The safety category to configure a threshold for. Acceptable values include the following:
  - HARM_CATEGORY_SEXUALLY_EXPLICIT
  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_DANGEROUS_CONTENT
- THRESHOLD: The threshold for blocking responses that could belong to the specified safety category based on probability. Acceptable values include the following:
  - BLOCK_NONE
  - BLOCK_ONLY_HIGH
  - BLOCK_MEDIUM_AND_ABOVE (default)
  - BLOCK_LOW_AND_ABOVE

  BLOCK_LOW_AND_ABOVE blocks the most while BLOCK_ONLY_HIGH blocks the least.
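To make the interaction between temperature, top-K, and top-P concrete, here is a minimal, self-contained sketch of the selection pipeline described above: top-K filtering, then top-P filtering, then temperature-weighted sampling. This is an illustration of the sampling scheme, not the model's actual implementation, and the probabilities are made up (they mirror the A/B/C example in the top-P description).

```python
import random

def sample_token(probs, top_k, top_p, temperature, rng=random.Random(0)):
    """Illustrative decoder sampling: top-K filter, then top-P filter,
    then temperature-weighted sampling over the remaining tokens."""
    # Sort candidates from most to least probable.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

    # 1) Keep only the top-K most probable tokens.
    ranked = ranked[:top_k]

    # 2) Keep tokens until their cumulative probability reaches top-P.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # 3) Temperature reweighting: a low temperature sharpens the
    #    distribution toward the most probable remaining token.
    weights = [p ** (1.0 / temperature) for _, p in kept]
    total = sum(weights)
    return rng.choices([t for t, _ in kept], weights=[w / total for w in weights])[0]

# Using the example from the top-P description: A=0.3, B=0.2, C=0.1.
# With top-P=0.5, only A and B survive; a very low temperature then
# picks A almost deterministically.
probs = {"A": 0.3, "B": 0.2, "C": 0.1}
print(sample_token(probs, top_k=3, top_p=0.5, temperature=0.01))  # A
```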
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/MODEL_ID:streamGenerateContent
Request JSON body:
{ "contents": [ { "role": "user", "parts": [ { "text": "SOURCE_LANGUAGE_CODE :SOURCE_TEXT \nTARGET_LANGUAGE_CODE :" } ] } ], "generation_config": { "temperature":TEMPERATURE , "topP":TOP_P , "topK":TOP_K , "candidateCount": 1, "maxOutputTokens":MAX_OUTPUT_TOKENS } "safetySettings": [ { "category": "SAFETY_CATEGORY ", "threshold": "THRESHOLD " } ] }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER_OR_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/MODEL_ID:streamGenerateContent"
PowerShell (Windows)
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER_OR_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/MODEL_ID:streamGenerateContent" | Select-Object -Expand Content
You should receive a successful status code (2xx) and a stream of JSON responses containing the translation.
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
const {VertexAI} = require('@google-cloud/vertexai');

// Initialize Vertex with your Cloud project and location
const vertex_ai = new VertexAI({project: 'PROJECT_ID', location: 'LOCATION'});
const model = 'gemini-1.0-pro';

// Instantiate the model
const generativeModel = vertex_ai.preview.getGenerativeModel({
  model: model,
  generationConfig: {
    'candidate_count': 1,
    'max_output_tokens': MAX_OUTPUT_TOKENS,
    'temperature': TEMPERATURE,
    'top_p': TOP_P,
    'top_k': TOP_K,
  },
  safetySettings: [
    {'category': 'HARM_CATEGORY_HATE_SPEECH', 'threshold': 'BLOCK_MEDIUM_AND_ABOVE'},
    {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'threshold': 'BLOCK_MEDIUM_AND_ABOVE'},
    {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'threshold': 'BLOCK_MEDIUM_AND_ABOVE'},
    {'category': 'HARM_CATEGORY_HARASSMENT', 'threshold': 'BLOCK_MEDIUM_AND_ABOVE'},
  ],
});

async function generateContent() {
  const req = {
    contents: [
      {role: 'user', parts: [{text: `SOURCE_LANGUAGE_CODE: TEXT
TARGET_LANGUAGE_CODE:`}]}
    ],
  };

  const streamingResp = await generativeModel.generateContentStream(req);

  for await (const item of streamingResp.stream) {
    process.stdout.write('stream chunk: ' + JSON.stringify(item) + '\n');
  }

  process.stdout.write('aggregated response: ' + JSON.stringify(await streamingResp.response));
}

generateContent();
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import vertexai
from vertexai.generative_models import GenerativeModel
import vertexai.preview.generative_models as generative_models

generation_config = {
    "candidate_count": 1,
    "max_output_tokens": MAX_OUTPUT_TOKENS,
    "temperature": TEMPERATURE,
    "top_p": TOP_P,
    "top_k": TOP_K,
}

safety_settings = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
}


def generate():
    vertexai.init(project="PROJECT_ID", location="LOCATION")
    model = GenerativeModel("gemini-1.0-pro")
    responses = model.generate_content(
        ["""SOURCE_LANGUAGE_CODE: TEXT
TARGET_LANGUAGE_CODE:"""],
        generation_config=generation_config,
        safety_settings=safety_settings,
    )
    print(responses)


generate()
Use the Cloud Translation API and the NMT model to translate text.
Before using any of the request data, make the following replacements:
- PROJECT_NUMBER_OR_ID: the numeric or alphanumeric ID of your Google Cloud project.
- SOURCE_LANGUAGE: (Optional) The language code of the input text. For supported language codes, see Language support.
- TARGET_LANGUAGE: The target language to translate the input text to. Set to one of the supported language codes.
- SOURCE_TEXT: The text to translate.
HTTP method and URL:
POST https://translation.googleapis.com/v3/projects/PROJECT_NUMBER_OR_ID:translateText
Request JSON body:
{ "sourceLanguageCode": "SOURCE_LANGUAGE ", "targetLanguageCode": "TARGET_LANGUAGE ", "contents": ["SOURCE_TEXT1 ", "SOURCE_TEXT2 "] }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER_OR_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://translation.googleapis.com/v3/projects/PROJECT_NUMBER_OR_ID:translateText"
PowerShell (Windows)
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER_OR_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://translation.googleapis.com/v3/projects/PROJECT_NUMBER_OR_ID:translateText" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "translations": [ { "translatedText": "TRANSLATED_TEXT1 " }, { "translatedText": "TRANSLATED_TEXT2 " } ] }
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
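As a standard-library-only sketch, the request above can also be assembled programmatically before sending it with any HTTP client. The helper name, my-project, and the sample text below are illustrative, not part of the API; the field names mirror the REST request body shown earlier.

```python
import json

API_ROOT = "https://translation.googleapis.com/v3"

def build_translate_request(project, contents, target, source=None):
    """Build the URL and JSON body for a v3 translateText call (NMT model)."""
    url = f"{API_ROOT}/projects/{project}:translateText"
    body = {"targetLanguageCode": target, "contents": list(contents)}
    if source is not None:  # sourceLanguageCode is optional; the service can auto-detect
        body["sourceLanguageCode"] = source
    return url, json.dumps(body)

url, body = build_translate_request("my-project", ["Hello world"], "fr", source="en")
print(url)
print(body)
```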
Custom translations
Customize responses from the Translation LLM by providing your own example translations. Custom translations work only with the Translation LLM.
You can request customized translations through the Vertex AI Studio console or the API, with one difference: the console supports custom translations only when you provide examples in a TMX or TSV file, whereas the API supports custom translations only when you provide examples (up to 5 sentence pairs) inline as part of the translation request.
Data requirements
If you provide example translations in a file for the Google Cloud console, the examples must be written as segment pairs in a TMX or TSV file. Each pair includes a source language segment and its translated counterpart. For more information, see Prepare example translations in the Cloud Translation documentation.
To get the most accurate results, include specific examples from a wide variety of scenarios. You must include at least five sentence pairs but no more than 10,000 pairs. Also, a segment pair can be at most a total of 512 characters.
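The size limits above are easy to check mechanically before uploading a file. The following is an illustrative sketch (the helper name and messages are not part of any Google API) that validates segment pairs against the stated constraints: at least 5 pairs, at most 10,000 pairs, and at most 512 characters per pair in total.

```python
def validate_segment_pairs(pairs):
    """Check (source, target) segment pairs against the documented limits."""
    errors = []
    if len(pairs) < 5:
        errors.append(f"need at least 5 pairs, got {len(pairs)}")
    if len(pairs) > 10_000:
        errors.append(f"at most 10,000 pairs allowed, got {len(pairs)}")
    for i, (source, target) in enumerate(pairs):
        if len(source) + len(target) > 512:  # combined limit per pair
            errors.append(f"pair {i} exceeds 512 characters in total")
    return errors

pairs = [("Hello", "Bonjour")] * 5
print(validate_segment_pairs(pairs))  # [] -> valid
```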
Console
In the Vertex AI section of the Google Cloud console, go to the Translate text page in Vertex AI Studio.
In the Run settings pane, configure your translation settings.
- In the Model field, select Translation LLM.
- To change the temperature, expand Advanced.
Click Add examples.
- Select a local file or a file from Cloud Storage. Vertex AI Studio determines the source and target languages from your file.
- Select the number of examples for the model to use before generating a response.
The number of examples you select counts toward the 3,000-character input limit per request.
In the input field, enter the text to translate.
Click Submit.
Vertex AI automatically selects your specified number of reference sentences that are most similar to your input. The translation model identifies patterns from your examples and then applies those patterns when generating a response.
The output limit per request is 3,000 characters. Any text beyond this limit is dropped.
To get the code or curl command that demonstrates how to request translations, click Get code.
API
To request custom translations, include up to five reference sentence pairs in your translation request. The translation model uses all of them to identify patterns from your examples and then applies those patterns when generating a response.
Before using any of the request data, make the following replacements:
- PROJECT_NUMBER_OR_ID: The numeric or alphanumeric ID of your Google Cloud project
- LOCATION: The location where you want to run this operation. For example, us-central1.
- REFERENCE_SOURCE: A sentence in the source language that is part of a reference sentence pair.
- REFERENCE_TARGET: A sentence in the target language that is part of a reference sentence pair.
- SOURCE_LANGUAGE: The language code of the input text.
- TARGET_LANGUAGE: The target language to translate the input text to.
- SOURCE_TEXT: Text in the source language to translate.
- MIME_TYPE (Optional): The format of the source text, such as text/html or text/plain. By default, the MIME type is set to text/plain.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/translate-llm:predict
Request JSON body:
"instances": [{ "reference_sentence_config": { "reference_sentence_pair_lists": [ { "reference_sentence_pairs": [{ "source_sentence": "REFERENCE_SOURCE_1_1 ", "target_sentence": "REFERENCE_TARGET_1_1 " }, { "source_sentence": "REFERENCE_SOURCE_1_2 ", "target_sentence": "REFERENCE_SOURCE_1_2 " }] } ], "source_language_code": "SOURCE_LANGUAGE_CODE ", "target_language_code": "TARGET_LANGUAGE_CODE " }, "contents": ["SOURCE_TEXT "], "mimeType": "MIME_TYPE " }]
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_NUMBER_OR_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/translate-llm:predict"
PowerShell (Windows)
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_NUMBER_OR_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER_OR_ID/locations/LOCATION/publishers/google/models/translate-llm:predict" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "translations": [ { "translatedText": "TRANSLATED_TEXT " } ], "languageCode": "TARGET_LANGUAGE " }
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
async function translate() {
  const request = {
    instances: [{
      "reference_sentence_config": {
        "reference_sentence_pair_lists": [{
          "reference_sentence_pairs": [
            {
              "source_sentence": 'SAMPLE_REFERENCE_SOURCE_1',
              "target_sentence": 'SAMPLE_REFERENCE_TARGET_1'
            },
            {
              "source_sentence": 'SAMPLE_REFERENCE_SOURCE_2',
              "target_sentence": 'SAMPLE_REFERENCE_TARGET_2'
            }
          ]
        }],
        "source_language_code": 'SOURCE_LANGUAGE_CODE',
        "target_language_code": 'TARGET_LANGUAGE_CODE'
      },
      "contents": ["SOURCE_TEXT"]
    }]
  };
  const {google} = require('googleapis');
  const aiplatform = google.cloud('aiplatform');
  const endpoint = aiplatform.predictionEndpoint(
    'projects/PROJECT_ID/locations/LOCATION/publishers/google/models/translate-llm'
  );

  const [response] = await endpoint.predict(request);
  console.log('Translating');
  console.log(response);
}
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
from google.cloud import aiplatform


def translate():
    # Create a client
    endpoint = aiplatform.Endpoint(
        'projects/PROJECT_ID/locations/LOCATION/publishers/google/models/translate-llm')

    # Initialize the request
    instances = [{
        "reference_sentence_config": {
            "reference_sentence_pair_lists": [{
                "reference_sentence_pairs": [
                    {
                        "source_sentence": 'SAMPLE_REFERENCE_SOURCE_1',
                        "target_sentence": 'SAMPLE_REFERENCE_TARGET_1'
                    },
                    {
                        "source_sentence": 'SAMPLE_REFERENCE_SOURCE_2',
                        "target_sentence": 'SAMPLE_REFERENCE_TARGET_2'
                    }
                ]
            }],
            "source_language_code": 'SOURCE_LANGUAGE_CODE',
            "target_language_code": 'TARGET_LANGUAGE_CODE'
        },
        "contents": ["SOURCE_TEXT"]
    }]

    # Make the request
    response = endpoint.predict(instances=instances)

    # Handle the response
    print(response)
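Because the API accepts at most five inline reference pairs, it can help to build the reference_sentence_config dictionary programmatically. The helper below is an illustrative sketch (its name and error handling are not part of any Google library); it produces the same structure shown in the request body above.

```python
def build_reference_config(pairs, source_lang, target_lang):
    """Build a reference_sentence_config dict from (source, target) pairs.

    The API accepts at most 5 inline reference sentence pairs.
    """
    if not 1 <= len(pairs) <= 5:
        raise ValueError(f"expected 1-5 reference pairs, got {len(pairs)}")
    return {
        "reference_sentence_pair_lists": [{
            "reference_sentence_pairs": [
                {"source_sentence": src, "target_sentence": tgt}
                for src, tgt in pairs
            ]
        }],
        "source_language_code": source_lang,
        "target_language_code": target_lang,
    }

config = build_reference_config(
    [("Hello", "Bonjour"), ("Thank you", "Merci")], "en", "fr")
print(config["source_language_code"])  # en
```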
You can also use the Cloud Translation API to create a dataset and import your example sentence pairs. When you use the Cloud Translation API to request translations, you can include your dataset to customize responses. The dataset persists and can be reused with multiple translation requests. For more information, see Request adaptive translations in the Cloud Translation documentation.
Supported languages
Translation LLM
For the Translation LLM, you can translate to and from any of the following languages.
Language name | Language code |
---|---|
Arabic | ar |
Chinese | zh-CN |
Czech | cs |
Dutch | nl |
English | en |
French | fr |
German | de |
Hindi | hi |
Indonesian | id |
Italian | it |
Japanese | ja |
Korean | ko |
Polish | pl |
Portuguese | pt |
Russian | ru |
Spanish | es |
Thai | th |
Turkish | tr |
Ukrainian | uk |
Vietnamese | vi |
Gemini and NMT
For information about which languages Gemini and the NMT model support, see the following documentation: