Module vision_models (1.53.0)

Classes for working with vision models.

Classes

GeneratedImage

GeneratedImage(
    image_bytes: typing.Optional[bytes],
    generation_parameters: typing.Dict[str, typing.Any],
    gcs_uri: typing.Optional[str] = None,
)

Generated image.
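
Example (a minimal sketch; a GeneratedImage is normally obtained from
ImageGenerationModel.generate_images rather than constructed directly, and
this assumes generation_parameters is readable as an attribute)::

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(prompt="Astronaut riding a horse")
generated_image = response[0]
# Parameters that were used to generate this image (assumed attribute):
print(generated_image.generation_parameters)
generated_image.save("generated.png")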

Image

Image(
    image_bytes: typing.Optional[bytes] = None, gcs_uri: typing.Optional[str] = None
)

An image, represented either as raw bytes or as a Cloud Storage URI.
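
Example (a minimal sketch of the supported ways to construct an Image; the
file and bucket names are placeholders)::

# From a local file:
image = Image.load_from_file("image.png")

# From raw bytes:
with open("image.png", "rb") as f:
    image = Image(image_bytes=f.read())

# From a Cloud Storage URI:
image = Image(gcs_uri="gs://my-bucket/image.png")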

ImageCaptioningModel

ImageCaptioningModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Generates captions from an image.

Examples::

model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

ImageGenerationModel

ImageGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Generates images from a text prompt.

Examples::

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")

ImageGenerationResponse

ImageGenerationResponse(images: typing.List[GeneratedImage])

Image generation response.
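
Example (a minimal sketch; the response is returned by
ImageGenerationModel.generate_images and exposes the generated images through
its images list, in addition to the index access shown above)::

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    number_of_images=2,
)
for i, generated_image in enumerate(response.images):
    generated_image.save(f"image{i}.png")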

ImageQnAModel

ImageQnAModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Answers questions about an image.

Examples::

model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)

ImageTextModel

ImageTextModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Generates text from an image: captions and answers to questions about the image.

Examples::

model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")

captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)

MultiModalEmbeddingModel

MultiModalEmbeddingModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Generates embedding vectors from images and videos.

Examples::

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
video = Video.load_from_file("video.mp4")

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
video_embeddings = embeddings.video_embeddings
text_embedding = embeddings.text_embedding

MultiModalEmbeddingResponse

MultiModalEmbeddingResponse(
    _prediction_response: typing.Any,
    image_embedding: typing.Optional[typing.List[float]] = None,
    video_embeddings: typing.Optional[
        typing.List[vertexai.vision_models.VideoEmbedding]
    ] = None,
    text_embedding: typing.Optional[typing.List[float]] = None,
)

The multimodal embedding response.
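
Example (a minimal sketch continuing the MultiModalEmbeddingModel example
above; each field may be None when the corresponding input was not provided)::

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
if embeddings.image_embedding is not None:
    print(len(embeddings.image_embedding))
for video_embedding in embeddings.video_embeddings or []:
    print(video_embedding.start_offset_sec, video_embedding.end_offset_sec)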

Video

Video(
    video_bytes: typing.Optional[bytes] = None, gcs_uri: typing.Optional[str] = None
)

A video, represented either as raw bytes or as a Cloud Storage URI.
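
Example (a minimal sketch of the supported ways to construct a Video; the
file and bucket names are placeholders)::

# From a local file:
video = Video.load_from_file("video.mp4")

# From a Cloud Storage URI:
video = Video(gcs_uri="gs://my-bucket/video.mp4")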

VideoEmbedding

VideoEmbedding(
    start_offset_sec: int, end_offset_sec: int, embedding: typing.List[float]
)

Embeddings generated from a video segment, with its start and end offsets in seconds.

VideoSegmentConfig

VideoSegmentConfig(
    start_offset_sec: int = 0, end_offset_sec: int = 120, interval_sec: int = 16
)

The specific video segments (in seconds) for which embeddings are generated.
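
Example (a minimal sketch; this assumes the config is passed to
MultiModalEmbeddingModel.get_embeddings through its video_segment_config
parameter)::

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
video = Video.load_from_file("video.mp4")
embeddings = model.get_embeddings(
    video=video,
    # Embed only the first 60 seconds, one embedding per 10-second interval:
    video_segment_config=VideoSegmentConfig(
        start_offset_sec=0,
        end_offset_sec=60,
        interval_sec=10,
    ),
)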

WatermarkVerificationModel

WatermarkVerificationModel(
    model_id: str, endpoint_name: typing.Optional[str] = None
)

Verifies whether an image has a watermark.
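
Example (a minimal sketch; the model name "imageverification@001" and the
verify_image method are assumptions and may differ in your SDK version)::

model = WatermarkVerificationModel.from_pretrained("imageverification@001")
image = Image.load_from_file("image.png")
response = model.verify_image(image)
# watermark_verification_result is expected to be a verdict string,
# e.g. "ACCEPT" or "REJECT" (assumed values):
print(response.watermark_verification_result)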

WatermarkVerificationResponse

WatermarkVerificationResponse(
    _prediction_response: Any, watermark_verification_result: Optional[str] = None
)

The watermark verification response.