Mistral AI models

Mistral AI models on Vertex AI are offered as fully managed, serverless APIs. To use a Mistral AI model on Vertex AI, send a request directly to the Vertex AI API endpoint. Because Mistral AI models use a managed API, there's no need to provision or manage infrastructure.

You can stream responses to reduce perceived end-user latency. A streamed response uses server-sent events (SSE) to return output incrementally.

You pay for Mistral AI models as you use them (pay as you go). For pay-as-you-go pricing, see Mistral AI model pricing on the Vertex AI pricing page.

Available Mistral AI models

The following models are available from Mistral AI to use in Vertex AI. To access a Mistral AI model, go to its Model Garden model card.

Mistral Large (24.11)

Mistral Large (24.11) is the latest version of Mistral AI's Large model, now with improved reasoning and function calling capabilities.

  • Agent-centric: best-in-class agentic capabilities with native function calling and JSON outputs.
  • Multi-lingual by design: dozens of languages supported, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
  • Proficient in coding: trained on 80+ coding languages such as Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran.
  • Advanced reasoning: state-of-the-art mathematical and reasoning capabilities.
Go to the Mistral Large (24.11) model card

Mistral Large (2407)

Mistral Large (2407) is Mistral AI's flagship model for text generation. It reaches top-tier reasoning capabilities and can be used for complex multilingual tasks, including text understanding, transformation, and code generation. For more information, see Mistral AI's post about Mistral Large (2407).

Mistral Large (2407) stands out on the following dimensions:

  • Multi-lingual by design: dozens of languages supported, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
  • Proficient in coding: trained on 80+ coding languages such as Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran.
  • Agent-centric: best-in-class agentic capabilities with native function calling and JSON output.
  • Advanced reasoning: state-of-the-art mathematical and reasoning capabilities.
Go to the Mistral Large (2407) model card

Mistral Nemo

Mistral Nemo is Mistral AI's most cost-efficient proprietary model. It is the ideal choice for low-latency workloads and simple tasks that can be done in bulk, such as classification, customer support, and text generation. For more information, see Mistral AI's documentation.

Mistral Nemo is optimized for the following use cases:

  • Generating and classifying text.
  • Building agents for customer support scenarios.
  • Generating code and providing code completion, review, and comments. Supports all mainstream coding languages.
Go to the Mistral Nemo model card

Codestral

Codestral is a generative model that has been specifically designed and optimized for code generation tasks, including fill-in-the-middle and code completion. Codestral was trained on more than 80 programming languages, enabling it to perform well on both common and less common languages. For more information, see Mistral AI's code generation documentation.

Codestral is optimized for the following use cases:

  • Generating code and providing code completion, suggestions, and translation.
  • Understanding your code to provide summaries and explanations.
  • Reviewing code quality by helping you refactor code, fix bugs, and generate test cases.
Go to the Codestral model card

Use Mistral AI models

When you send requests to use Mistral AI's models, use the following model names:

  • For Mistral Large (24.11), use mistral-large-2411.
  • For Mistral Large (2407), use mistral-large@2407.
  • For Mistral Nemo, use mistral-nemo@2407.
  • For Codestral, use codestral@2405.

For more information about using the Mistral AI SDK, see the Mistral AI Vertex AI documentation.
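If you prefer the SDK over raw REST calls, the following is a minimal sketch that assumes Mistral's mistralai-gcp Python package and its MistralGoogleCloud client; the exact client and method names can vary between SDK versions, so confirm them against Mistral's documentation.

# A minimal sketch assuming the `mistralai-gcp` package (pip install mistralai-gcp).
# The client and method names below may differ between SDK versions; treat this
# as illustrative rather than authoritative.
from mistralai_gcp import MistralGoogleCloud

client = MistralGoogleCloud(region="us-central1", project_id="your-project-id")

response = client.chat.complete(
    model="mistral-large-2411",
    messages=[{"role": "user", "content": "Summarize server-sent events in one sentence."}],
)
print(response.choices[0].message.content)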

Before you begin

To use Mistral AI models with Vertex AI, you must perform the following steps. The Vertex AI API (aiplatform.googleapis.com) must be enabled. If you already have a project with the Vertex AI API enabled, you can use that project instead of creating a new one.

Make sure you have the required permissions to enable and use partner models. For more information, see Grant the required permissions.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Vertex AI API.

    Enable the API

  5. Go to one of the Mistral AI Model Garden model cards, then click Enable.
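The REST samples that follow authenticate with an access token from your gcloud credentials. A quick way to confirm that Application Default Credentials are in place is a short Python check (a sketch, assuming the google-auth package is installed):

# Verifies that Application Default Credentials can mint an access token.
# Assumes `pip install google-auth` and that you have run
# `gcloud auth application-default login`.
import google.auth
from google.auth.transport.requests import Request

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())  # fetches an access token
print(f"Authenticated for project: {project_id}")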

Make a streaming call to a Mistral AI model

The following sample makes a streaming call to a Mistral AI model.

REST

After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • LOCATION: A region that supports Mistral AI models.
  • MODEL: The model name you want to use. In the request body, exclude the @ model version number.
  • ROLE: The role associated with a message. You can specify a user or an assistant. The first message must use the user role. The models operate with alternating user and assistant turns. If the final message uses the assistant role, then the response content continues immediately from the content in that message. You can use this to constrain part of the model's response, as shown in the fragment after this list.
  • STREAM: A boolean that specifies whether the response is streamed or not. Stream your response to reduce perceived end-user latency. Set to true to stream the response and false to return the response all at once.
  • CONTENT: The content, such as text, of the user or assistant message.
  • MAX_TOKENS: Maximum number of tokens that can be generated in the response. A token is approximately 3.5 characters. 100 tokens correspond to roughly 60-80 words.

    Specify a lower value for shorter responses and a higher value for potentially longer responses.
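For example, ending the messages array with a partial assistant turn makes the model continue from that text (an illustrative fragment, not a complete request body):

"messages": [
  {"role": "user", "content": "List three French cheeses."},
  {"role": "assistant", "content": "1. Camembert"}
]

The response then picks up immediately after "1. Camembert".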

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/mistralai/models/MODEL:streamRawPredict

Request JSON body:

{
  "model": "MODEL",
  "messages": [
    {
      "role": "ROLE",
      "content": "CONTENT"
    }
  ],
  "max_tokens": MAX_TOKENS,
  "stream": true
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/mistralai/models/MODEL:streamRawPredict"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/mistralai/models/MODEL:streamRawPredict" | Select-Object -Expand Content

You should receive a stream of server-sent events, where each event contains a JSON chunk of the incremental response.
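Outside of curl, you can consume the stream programmatically. The following Python sketch mirrors the REST sample above; it assumes the google-auth and requests packages, and that the streamed chunks follow Mistral's OpenAI-style chat completion format (data:-prefixed SSE events ending with [DONE]).

# Streams a chat completion and prints text as it arrives.
# Placeholders: PROJECT_ID, LOCATION, and MODEL, as in the REST sample above.
# Assumes chunks follow Mistral's OpenAI-style chat completion format.
import json

import google.auth
import requests
from google.auth.transport.requests import Request

PROJECT_ID = "your-project-id"  # placeholder
LOCATION = "us-central1"        # a region that supports Mistral AI models
MODEL = "mistral-large-2411"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/mistralai/models/{MODEL}:streamRawPredict"
)
body = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Name three French cheeses."}],
    "max_tokens": 200,
    "stream": True,
}

with requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=body,
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # SSE events are prefixed with "data: "; the stream ends with "[DONE]".
        if not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)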

Make a unary call to a Mistral AI model

The following sample makes a unary call to a Mistral AI model.

REST

After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • LOCATION: A region that supports Mistral AI models.
  • MODEL: The model name you want to use. In the request body, exclude the @ model version number.
  • ROLE: The role associated with a message. You can specify a user or an assistant. The first message must use the user role. The models operate with alternating user and assistant turns. If the final message uses the assistant role, then the response content continues immediately from the content in that message. You can use this to constrain part of the model's response.
  • STREAM: A boolean that specifies whether the response is streamed or not. Stream your response to reduce perceived end-user latency. Set to true to stream the response and false to return the response all at once.
  • CONTENT: The content, such as text, of the user or assistant message.
  • MAX_TOKENS: Maximum number of tokens that can be generated in the response. A token is approximately 3.5 characters. 100 tokens correspond to roughly 60-80 words.

    Specify a lower value for shorter responses and a higher value for potentially longer responses.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/mistralai/models/MODEL:rawPredict

Request JSON body:

{
  "model": "MODEL",
  "messages": [
    {
      "role": "ROLE",
      "content": "CONTENT"
    }
  ],
  "max_tokens": MAX_TOKENS,
  "stream": false
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/mistralai/models/MODEL:rawPredict"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/mistralai/models/MODEL:rawPredict" | Select-Object -Expand Content

You should receive a JSON response containing the model's complete message.
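The same call in Python, as a sketch under the same assumptions as the streaming example (google-auth and requests installed; Mistral's OpenAI-style chat completion response shape):

# Unary call: rawPredict instead of streamRawPredict, with "stream": false.
# Same placeholders and assumptions as the streaming sketch above.
import google.auth
import requests
from google.auth.transport.requests import Request

PROJECT_ID = "your-project-id"  # placeholder
LOCATION = "us-central1"
MODEL = "mistral-large-2411"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/mistralai/models/{MODEL}:rawPredict"
)
body = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Write a haiku about the Alps."}],
    "max_tokens": 200,
    "stream": False,
}

resp = requests.post(
    url, headers={"Authorization": f"Bearer {credentials.token}"}, json=body
)
resp.raise_for_status()
# Assumes the OpenAI-style response shape that Mistral models return.
print(resp.json()["choices"][0]["message"]["content"])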

Mistral AI model region availability and quotas

For Mistral AI models, a quota applies for each region where the model is available. The quota is specified in queries per minute (QPM) and tokens per minute (TPM). TPM includes both input and output tokens. For example, at 60 QPM and 200,000 TPM, requests averaging 2,000 combined input and output tokens reach the QPM limit first (60 × 2,000 = 120,000 TPM).

The supported regions, default quotas, and maximum context length for each Mistral AI model are listed in the following tables:

Mistral Large (24.11)

Region         Quota system          Supported context length
us-central1    60 QPM, 200,000 TPM   128,000 tokens
europe-west4   60 QPM, 200,000 TPM   128,000 tokens

Mistral Large (2407)

Region         Quota system          Supported context length
us-central1    60 QPM, 200,000 TPM   128,000 tokens
europe-west4   60 QPM, 200,000 TPM   128,000 tokens

Mistral Nemo

Region         Quota system          Supported context length
us-central1    60 QPM, 200,000 TPM   128,000 tokens
europe-west4   60 QPM, 200,000 TPM   128,000 tokens

Codestral

Region         Quota system          Supported context length
us-central1    60 QPM, 200,000 TPM   32,000 tokens
europe-west4   60 QPM, 200,000 TPM   32,000 tokens

If you want to increase any of your quotas for Generative AI on Vertex AI, you can use the Google Cloud console to request a quota increase. To learn more about quotas, see Work with quotas.