This document describes how to create a text embedding using the Vertex AI Text embeddings API.
The Vertex AI text embeddings API uses dense vector representations: gemini-embedding-001, for example, produces 3072-dimensional vectors. Dense vector embedding models use deep-learning methods similar to those used by large language models. Unlike sparse vectors, which tend to map words directly to numbers, dense vectors are designed to better represent the meaning of a piece of text. The benefit of using dense vector embeddings in generative AI is that instead of searching for direct word or syntax matches, you can search for passages that align with the meaning of the query, even if the passages don't use the same language.
The vectors are normalized, so you can use cosine similarity, dot product, or Euclidean distance to provide the same similarity rankings.
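Because the vectors are unit length, these three measures agree: cosine similarity reduces to the dot product, and squared Euclidean distance is a monotone transform of it (d² = 2 − 2·dot). A small sketch that checks both identities on toy vectors:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a = normalize([1.0, 2.0, 3.0])
b = normalize([2.0, 1.0, 0.5])

# For unit vectors: cosine similarity == dot product,
# and squared Euclidean distance == 2 - 2 * dot product.
assert math.isclose(cosine(a, b), dot(a, b))
assert math.isclose(euclidean(a, b) ** 2, 2 - 2 * dot(a, b))
```

Either identity means ranking by any one of the three measures gives the same order.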
- To learn more about embeddings, see the embeddings APIs overview.
- To learn about text embedding models, see Text embeddings.
- For information about which languages each embeddings model supports, see Supported text languages.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
  Roles required to select or create a project:
  - Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (`roles/resourcemanager.projectCreator`), which contains the `resourcemanager.projects.create` permission. Learn how to grant roles.
- Enable the Vertex AI API.
  Roles required to enable APIs: To enable APIs, you need the Service Usage Admin IAM role (`roles/serviceusage.serviceUsageAdmin`), which contains the `serviceusage.services.enable` permission. Learn how to grant roles.
- Choose a task type for your embeddings job.
Supported models
Google models
You can get text embeddings by using the following models:
| Model name | Description | Output dimensions | Max sequence length | Supported text languages |
|---|---|---|---|---|
| gemini-embedding-001 | State-of-the-art performance across English, multilingual, and code tasks. It unifies previously specialized models such as text-embedding-005 and text-multilingual-embedding-002 and achieves better performance in their respective domains. Read our Tech Report for more detail. | Up to 3072 | 2048 tokens | Supported text languages |
| text-embedding-005 | Specialized in English and code tasks. | Up to 768 | 2048 tokens | English |
| text-multilingual-embedding-002 | Specialized in multilingual tasks. | Up to 768 | 2048 tokens | Supported text languages |
For superior embedding quality, gemini-embedding-001 is our large model designed to provide the highest performance.
Open models
You can get text embeddings by using the following models:
| Model name | Description | Output dimensions | Max sequence length | Supported text languages |
|---|---|---|---|---|
| multilingual-e5-small | Part of the E5 family of text embedding models. The small variant contains 12 layers. | Up to 384 | 512 tokens | Supported languages |
| multilingual-e5-large | Part of the E5 family of text embedding models. The large variant contains 24 layers. | Up to 1024 | 512 tokens | Supported languages |
To get started, see the E5 family model card. For more information on open models, see Open models for MaaS.
Get text embeddings for a snippet of text
You can get text embeddings for a snippet of text by using the Vertex AI API or the Vertex AI SDK for Python.
API limits
For each request, you're limited to 250 input texts. The API has a maximum input
token limit of 20,000. Inputs exceeding this limit result in a 400 error. Each
individual input text is further limited to 2048 tokens; any excess is silently
truncated. You can also disable silent truncation by setting autoTruncate
to
false
.
For more information, see Text embedding limits.
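Given the 250-input-texts-per-request limit above, a client-side batching helper can split a large corpus into request-sized chunks before calling the API. A minimal sketch (the helper name is ours, not part of the API):

```python
def chunk(texts, batch_size=250):
    """Split a list of input texts into batches no larger than the
    per-request limit (250 input texts for the text embeddings API)."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

# 600 documents become three requests: 250, 250, and 100 texts.
batches = chunk([f"doc {i}" for i in range(600)])
assert [len(batch) for batch in batches] == [250, 250, 100]
```

Each resulting batch can then be sent as one embeddings request.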
Choose an embedding dimension
All models produce a full-length embedding vector by default. For
gemini-embedding-001
, this vector has 3072 dimensions, and other
models produce 768-dimensional vectors. However, by using the
output_dimensionality
parameter, users can control the size of the output
embedding vector. Selecting a smaller output dimensionality can save storage
space and increase computational efficiency for downstream applications, while
sacrificing little in terms of quality.
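One caveat worth noting: only a full-length vector is guaranteed to be unit length, because truncating a normalized vector leaves it with a norm below 1. If you work with shortened vectors, re-normalize before computing similarities. A sketch of truncate-and-renormalize (our helper, not part of the API):

```python
import math

def renormalize(vector, dim):
    """Keep the first `dim` dimensions and rescale back to unit length,
    so dot-product similarity remains meaningful after truncation."""
    truncated = vector[:dim]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

v = [0.5, 0.5, 0.5, 0.5]          # unit length in 4 dimensions
short = renormalize(v, 2)          # truncated alone would have norm ~0.707
assert math.isclose(sum(x * x for x in short), 1.0)
```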
The following examples use the `gemini-embedding-001` model.
Python
Install
pip install --upgrade google-genai
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
```shell
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
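With the environment configured, a request can be sketched as follows. This is a minimal sketch, assuming the google-genai package is installed and application default credentials are available; the `task_type` value and the helper name are illustrative:

```python
def embed_texts(texts, model="gemini-embedding-001", dim=3072):
    """Request embeddings for a list of texts from Vertex AI
    via the google-genai SDK (sketch; requires credentials to run)."""
    # Imported inside the function so the sketch is inspectable offline.
    from google import genai
    from google.genai.types import EmbedContentConfig

    client = genai.Client()  # picks up the GOOGLE_* environment variables above
    response = client.models.embed_content(
        model=model,
        contents=texts,
        config=EmbedContentConfig(
            task_type="RETRIEVAL_DOCUMENT",  # illustrative; match your task type
            output_dimensionality=dim,
        ),
    )
    return [e.values for e in response.embeddings]

# Example usage (requires credentials):
# vectors = embed_texts(["What is the meaning of life?"])
# len(vectors[0]) would match the requested output dimensionality.
```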
Go
Learn how to install or update the Go SDK.
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
```shell
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
Add an embedding to a vector database
After you've generated your embeddings, you can add them to a vector database, like Vector Search. This enables low-latency retrieval, and becomes critical as the size of your data increases.
To learn more about Vector Search, see Overview of Vector Search.
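As a point of reference, the lookup a vector database accelerates can be sketched as a brute-force nearest-neighbor search. This is illustrative only; production vector databases like Vector Search use approximate indexes to avoid scanning every stored vector:

```python
def dot(a, b):
    """Dot product; equals cosine similarity for unit-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def nearest(query, database):
    """Return the index of the most similar stored vector (exhaustive scan).
    A vector database replaces this O(n) scan with an approximate index."""
    return max(range(len(database)), key=lambda i: dot(query, database[i]))

db = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
assert nearest([0.7, 0.7], db) == 2  # [0.6, 0.8] has the highest dot product
```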
What's next
- To learn more about rate limits, see Generative AI on Vertex AI rate limits.
- To get batch predictions for embeddings, see Get batch text embeddings predictions.
- To learn more about multimodal embeddings, see Get multimodal embeddings.
- To tune an embedding, see Tune text embeddings.
- To learn more about the research behind text-embedding-005 and text-multilingual-embedding-002, see the research paper Gecko: Versatile Text Embeddings Distilled from Large Language Models.