MultiModalEmbeddingModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Generates embedding vectors from images and videos.

Examples::

    from vertexai.vision_models import Image, MultiModalEmbeddingModel, Video

    model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
    image = Image.load_from_file("image.png")
    video = Video.load_from_file("video.mp4")

    embeddings = model.get_embeddings(
        image=image,
        video=video,
        contextual_text="Hello world",
    )

    image_embedding = embeddings.image_embedding
    video_embeddings = embeddings.video_embeddings
    text_embedding = embeddings.text_embedding
Methods
MultiModalEmbeddingModel

MultiModalEmbeddingModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use MultiModalEmbeddingModel.from_pretrained(model_name=...) instead.

Parameters

Name | Type | Description
---|---|---
model_id | str | Identifier of a Model Garden Model. Example: "text-bison@001"
endpoint_name | typing.Optional[str] | Vertex Endpoint resource name for the model
from_pretrained

from_pretrained(model_name: str) -> vertexai._model_garden._model_garden_models.T

Loads a _ModelGardenModel.

Parameter

Name | Type | Description
---|---|---
model_name | str | Name of the model.

Exceptions

Type | Description
---|---
ValueError | If model_name is unknown.
ValueError | If model does not support this class.
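A minimal loading sketch that mirrors the exceptions listed above; the project and location passed to vertexai.init() are placeholder values::

    import vertexai
    from vertexai.vision_models import MultiModalEmbeddingModel

    # Placeholder project and location; replace with your own.
    vertexai.init(project="my-project", location="us-central1")

    try:
        # Load the published model rather than calling the constructor directly.
        model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
    except ValueError as exc:
        # Raised if the model name is unknown or the model does not support this class.
        print(f"Could not load model: {exc}")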
get_embeddings

get_embeddings(
    image: typing.Optional[vertexai.vision_models.Image] = None,
    video: typing.Optional[vertexai.vision_models.Video] = None,
    contextual_text: typing.Optional[str] = None,
    dimension: typing.Optional[int] = None,
    video_segment_config: typing.Optional[
        vertexai.vision_models.VideoSegmentConfig
    ] = None,
) -> vertexai.vision_models.MultiModalEmbeddingResponse

Gets embedding vectors for the provided image, video, and/or contextual text.
Parameters

Name | Type | Description
---|---|---
image | Image | Optional. The image to generate embeddings for. One of image, video, or contextual_text must be provided.
video | Video | Optional. The video to generate embeddings for. One of image, video, or contextual_text must be provided.
contextual_text | str | Optional. Contextual text for your input image or video. If provided, the model will also generate an embedding vector for the provided contextual text. The returned image and text embedding vectors are in the same semantic space with the same dimensionality, and the vectors can be used interchangeably for use cases like searching image by text or searching text by image. One of image, video, or contextual_text must be provided.
dimension | int | Optional. The number of embedding dimensions. Lower values offer decreased latency when using these embeddings for subsequent tasks, while higher values offer better accuracy. Available values: 128, 256, 512, and 1408 (default).
video_segment_config | VideoSegmentConfig | Optional. The specific video segments (in seconds) the embeddings are generated for.

Returns

Type | Description
---|---
MultiModalEmbeddingResponse | The image and text embedding vectors.
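A sketch of the optional dimension and video_segment_config parameters described above. File paths, contextual text, and segment offsets are placeholder values, vertexai.init() is assumed to have been called, and the per-segment attributes read in the loop assume the SDK's VideoEmbedding fields::

    from vertexai.vision_models import (
        Image,
        MultiModalEmbeddingModel,
        Video,
        VideoSegmentConfig,
    )

    model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")

    # Lower-dimensional image/text embeddings trade accuracy for latency.
    image_response = model.get_embeddings(
        image=Image.load_from_file("image.png"),
        contextual_text="a photo of a cat",
        dimension=256,
    )
    print(len(image_response.image_embedding), len(image_response.text_embedding))

    # Request one embedding per 15-second window of the first minute of video.
    video_response = model.get_embeddings(
        video=Video.load_from_file("video.mp4"),
        video_segment_config=VideoSegmentConfig(
            start_offset_sec=0,
            end_offset_sec=60,
            interval_sec=15,
        ),
    )
    for segment in video_response.video_embeddings:
        print(segment.start_offset_sec, segment.end_offset_sec, len(segment.embedding))

Each entry in video_embeddings covers one requested segment, so the loop above prints the segment boundaries alongside the length of each embedding vector.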