Classes for working with vision models.
Classes
GeneratedImage
GeneratedImage(
    image_bytes: typing.Optional[bytes],
    generation_parameters: typing.Dict[str, typing.Any],
    gcs_uri: typing.Optional[str] = None,
)
A generated image, together with the parameters used to generate it.
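A GeneratedImage is normally obtained from an image generation response rather than constructed directly. A minimal sketch, reusing the response variable from the ImageGenerationModel example below::

    generated = response.images[0]
    # The parameters recorded for this image (for example, the seed).
    print(generated.generation_parameters)
    generated.save("generated.png")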
Image
Image(
    image_bytes: typing.Optional[bytes] = None, gcs_uri: typing.Optional[str] = None
)
An image, backed by raw bytes or a Cloud Storage URI.
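An Image can be loaded from a local file, as the examples below do, or constructed directly per the signature above. A minimal sketch; the Cloud Storage URI is hypothetical::

    from vertexai.vision_models import Image

    image = Image.load_from_file("image.png")
    image_from_bytes = Image(image_bytes=open("image.png", "rb").read())
    image_from_gcs = Image(gcs_uri="gs://my-bucket/image.png")  # hypothetical URI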
ImageCaptioningModel
ImageCaptioningModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Generates captions from an image.
Examples::

    from vertexai.vision_models import Image, ImageCaptioningModel

    model = ImageCaptioningModel.from_pretrained("imagetext@001")
    image = Image.load_from_file("image.png")
    captions = model.get_captions(
        image=image,
        # Optional:
        number_of_results=1,
        language="en",
    )
ImageGenerationModel
ImageGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Generates images from a text prompt.
Examples::

    from vertexai.vision_models import ImageGenerationModel

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="Astronaut riding a horse",
        # Optional:
        number_of_images=1,
        seed=0,
    )
    response[0].show()
    response[0].save("image1.png")
ImageGenerationResponse
ImageGenerationResponse(images: typing.List[GeneratedImage])
Image generation response.
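The response wraps the list of generated images and supports indexing, as response[0] in the example above shows. A short sketch saving every image in a response from generate_images::

    for i, generated in enumerate(response.images):
        generated.save(f"image{i}.png")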
ImageQnAModel
ImageQnAModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Answers questions about an image.
Examples::

    from vertexai.vision_models import Image, ImageQnAModel

    model = ImageQnAModel.from_pretrained("imagetext@001")
    image = Image.load_from_file("image.png")
    answers = model.ask_question(
        image=image,
        question="What color is the car in this image?",
        # Optional:
        number_of_results=1,
    )
ImageTextModel
ImageTextModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Generates text from an image: captions, and answers to questions about the image.
Examples::

    from vertexai.vision_models import Image, ImageTextModel

    model = ImageTextModel.from_pretrained("imagetext@001")
    image = Image.load_from_file("image.png")
    captions = model.get_captions(
        image=image,
        # Optional:
        number_of_results=1,
        language="en",
    )
    answers = model.ask_question(
        image=image,
        question="What color is the car in this image?",
        # Optional:
        number_of_results=1,
    )
MultiModalEmbeddingModel
MultiModalEmbeddingModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Generates embedding vectors from images and videos.
Examples::

    from vertexai.vision_models import Image, MultiModalEmbeddingModel, Video

    model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
    image = Image.load_from_file("image.png")
    video = Video.load_from_file("video.mp4")
    embeddings = model.get_embeddings(
        image=image,
        video=video,
        contextual_text="Hello world",
    )
    image_embedding = embeddings.image_embedding
    video_embeddings = embeddings.video_embeddings
    text_embedding = embeddings.text_embedding
MultiModalEmbeddingResponse
MultiModalEmbeddingResponse(
    _prediction_response: typing.Any,
    image_embedding: typing.Optional[typing.List[float]] = None,
    video_embeddings: typing.Optional[
        typing.List[vertexai.vision_models.VideoEmbedding]
    ] = None,
    text_embedding: typing.Optional[typing.List[float]] = None,
)
The multimodal embedding response.
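The returned image, video, and text embeddings are plain float lists, so they can be compared with ordinary vector math. A minimal sketch computing cosine similarity between the image and text embeddings from the MultiModalEmbeddingModel example above::

    import math

    def cosine_similarity(a, b):
        # Dot product divided by the product of the vector norms.
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    score = cosine_similarity(embeddings.image_embedding, embeddings.text_embedding)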
Video
Video(
    video_bytes: typing.Optional[bytes] = None, gcs_uri: typing.Optional[str] = None
)
A video, backed by raw bytes or a Cloud Storage URI.
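Like Image, a Video can be loaded from a local file, as the example above does, or constructed directly per the signature above. A minimal sketch; the Cloud Storage URI is hypothetical::

    from vertexai.vision_models import Video

    video = Video.load_from_file("video.mp4")
    video_from_gcs = Video(gcs_uri="gs://my-bucket/video.mp4")  # hypothetical URI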
VideoEmbedding
VideoEmbedding(
    start_offset_sec: int, end_offset_sec: int, embedding: typing.List[float]
)
An embedding generated from a video segment, with its start and end offset times in seconds.
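Each VideoEmbedding covers one segment of the input video. A short sketch listing the segments in the embeddings variable from the MultiModalEmbeddingModel example above::

    for segment in embeddings.video_embeddings:
        print(segment.start_offset_sec, segment.end_offset_sec, len(segment.embedding))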
VideoSegmentConfig
VideoSegmentConfig(
    start_offset_sec: int = 0, end_offset_sec: int = 120, interval_sec: int = 16
)
The specific video segments (in seconds) the embeddings are generated for.
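A sketch of requesting embeddings for only the first 60 seconds of a video at 10-second intervals. It assumes the config is passed to MultiModalEmbeddingModel.get_embeddings as a video_segment_config argument, which this reference does not show::

    from vertexai.vision_models import MultiModalEmbeddingModel, Video, VideoSegmentConfig

    model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
    video = Video.load_from_file("video.mp4")
    embeddings = model.get_embeddings(
        video=video,
        video_segment_config=VideoSegmentConfig(  # assumed parameter name
            start_offset_sec=0, end_offset_sec=60, interval_sec=10
        ),
    )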
WatermarkVerificationModel
WatermarkVerificationModel(
    model_id: str, endpoint_name: typing.Optional[str] = None
)
Verifies whether an image has a watermark.
WatermarkVerificationResponse
WatermarkVerificationResponse(
    _prediction_response: Any, watermark_verification_result: Optional[str] = None
)
The watermark verification response.
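A sketch of verifying an image. The model ID "imageverification@001" and the verify_image method name are assumptions not documented in this reference; only the watermark_verification_result field appears above::

    from vertexai.vision_models import Image, WatermarkVerificationModel

    model = WatermarkVerificationModel.from_pretrained("imageverification@001")  # assumed model ID
    image = Image.load_from_file("image.png")
    response = model.verify_image(image)  # assumed method name
    print(response.watermark_verification_result)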