ImageGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Generates images from a text prompt.
Examples::

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="Astronaut riding a horse",
        # Optional:
        number_of_images=1,
        seed=0,
    )
    response[0].show()
    response[0].save("image1.png")
Methods
ImageGenerationModel
ImageGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Creates a _ModelGardenModel.
This constructor should not be called directly. Use ImageGenerationModel.from_pretrained(model_name=...) instead.
edit_image
edit_image(
*,
prompt: str,
base_image: typing.Optional[vertexai.vision_models.Image] = None,
mask: typing.Optional[vertexai.vision_models.Image] = None,
reference_images: typing.Optional[
typing.List[vertexai.vision_models.ReferenceImage]
] = None,
negative_prompt: typing.Optional[str] = None,
number_of_images: int = 1,
guidance_scale: typing.Optional[float] = None,
edit_mode: typing.Optional[
typing.Literal[
"inpainting-insert", "inpainting-remove", "outpainting", "product-image"
]
] = None,
mask_mode: typing.Optional[
typing.Literal["background", "foreground", "semantic"]
] = None,
segmentation_classes: typing.Optional[typing.List[str]] = None,
mask_dilation: typing.Optional[float] = None,
product_position: typing.Optional[typing.Literal["fixed", "reposition"]] = None,
output_mime_type: typing.Optional[typing.Literal["image/png", "image/jpeg"]] = None,
compression_quality: typing.Optional[float] = None,
language: typing.Optional[str] = None,
seed: typing.Optional[int] = None,
output_gcs_uri: typing.Optional[str] = None,
safety_filter_level: typing.Optional[
typing.Literal["block_most", "block_some", "block_few", "block_fewest"]
] = None,
person_generation: typing.Optional[
typing.Literal["dont_allow", "allow_adult", "allow_all"]
] = None
) -> vertexai.preview.vision_models.ImageGenerationResponse
Edits an existing image based on a text prompt.
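The following is an illustrative sketch of an inpainting edit, assuming the placeholder files my-image.png and my-mask.png exist locally and that Image and ImageGenerationModel are importable from vertexai.preview.vision_models as elsewhere on this page::

    from vertexai.preview.vision_models import Image, ImageGenerationModel

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    base_image = Image.load_from_file("my-image.png")  # placeholder input image
    mask_image = Image.load_from_file("my-mask.png")   # placeholder mask image
    response = model.edit_image(
        prompt="A vase of sunflowers on the table",
        base_image=base_image,
        mask=mask_image,
        edit_mode="inpainting-insert",
        number_of_images=1,
    )
    response[0].save("edited1.png")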
from_pretrained
from_pretrained(model_name: str) -> vertexai._model_garden._model_garden_models.T
Loads a _ModelGardenModel.
Exceptions

| Type | Description |
|---|---|
| ValueError | If model_name is unknown. |
| ValueError | If the model does not support this class. |
generate_images
generate_images(
prompt: str,
*,
negative_prompt: typing.Optional[str] = None,
number_of_images: int = 1,
aspect_ratio: typing.Optional[
typing.Literal["1:1", "9:16", "16:9", "4:3", "3:4"]
] = None,
guidance_scale: typing.Optional[float] = None,
language: typing.Optional[str] = None,
seed: typing.Optional[int] = None,
output_gcs_uri: typing.Optional[str] = None,
add_watermark: typing.Optional[bool] = True,
safety_filter_level: typing.Optional[
typing.Literal["block_most", "block_some", "block_few", "block_fewest"]
] = None,
person_generation: typing.Optional[
typing.Literal["dont_allow", "allow_adult", "allow_all"]
] = None
) -> vertexai.preview.vision_models.ImageGenerationResponse
Generates images from a text prompt.
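A minimal illustrative sketch of a call that exercises the optional negative_prompt, number_of_images, and aspect_ratio arguments; the prompt text and output file names are placeholders::

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="A watercolor painting of a lighthouse at sunset",
        # Optional:
        negative_prompt="people, text",
        number_of_images=2,
        aspect_ratio="16:9",
    )
    response[0].save("lighthouse_0.png")
    response[1].save("lighthouse_1.png")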
upscale_image
upscale_image(
image: typing.Union[
vertexai.vision_models.Image, vertexai.preview.vision_models.GeneratedImage
],
new_size: typing.Optional[int] = 2048,
upscale_factor: typing.Optional[typing.Literal["x2", "x4"]] = None,
output_mime_type: typing.Optional[
typing.Literal["image/png", "image/jpeg"]
] = "image/png",
output_compression_quality: typing.Optional[int] = None,
output_gcs_uri: typing.Optional[str] = None,
) -> vertexai.vision_models.Image
Upscales an image.
This supports upscaling images generated through the generate_images()
method, or upscaling a new image.
Examples::

    # Upscale a generated image
    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="Astronaut riding a horse",
    )
    model.upscale_image(image=response[0])

    # Upscale a new 1024x1024 image
    my_image = Image.load_from_file("my-image.png")
    model.upscale_image(image=my_image)

    # Upscale a new arbitrary sized image using a x2 or x4 upscaling factor
    my_image = Image.load_from_file("my-image.png")
    model.upscale_image(image=my_image, upscale_factor="x2")

    # Upscale an image and get the result in JPEG format
    my_image = Image.load_from_file("my-image.png")
    model.upscale_image(
        image=my_image,
        output_mime_type="image/jpeg",
        output_compression_quality=90,
    )
Parameters

| Name | Description |
|---|---|
| image | Union[GeneratedImage, Image]. Required. The generated image to upscale. |
| new_size | int. The size of the biggest dimension of the upscaled image. Only 2048 and 4096 are currently supported. Results in a 2048x2048 or 4096x4096 image. Defaults to 2048 if not provided. |