The Imagen API lets you create high quality images in seconds, using a text prompt to guide the generation. You can also upscale images using the Imagen API.
View Imagen for Generation model card
Supported Models
Model | Code
---|---
Image Generation | imagen-3.0-generate-002, imagen-3.0-generate-001, imagen-3.0-fast-generate-001, imagegeneration@006, imagegeneration@005, imagegeneration@002
For more information about the features that each model supports, see Model versions.
Example syntax
Syntax to generate an image from a text prompt.
REST
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_VERSION}:predict \
  -d '{
    "instances": [
      {
        "prompt": "..."
      }
    ],
    "parameters": {
      "sampleCount": ...
    }
  }'
Python
generation_model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")
response = generation_model.generate_images(
    prompt="...",
    negative_prompt="...",
    aspect_ratio=...,
)
response.images[0].show()
Parameter list
See examples for implementation details.
Generate images
REST
Parameters | Description
---|---
prompt | Required: The text prompt for the image.
sampleCount | Required: The number of images to generate. The default value is 4.
seed | Optional: The random seed for image generation. This is not available when addWatermark is set to true.
enhancePrompt | Optional: Use an LLM-based prompt rewriting feature to deliver higher quality images that better reflect the original prompt's intent. Disabling this feature may impact image quality and prompt adherence. The default value is true.
negativePrompt | Optional: A description of what to discourage in the generated images.
aspectRatio | Optional: The aspect ratio for the image. The default value is "1:1".
outputOptions | Optional: Describes the output image format in an output options object. See Output options object.
sampleImageStyle | Optional: Describes the style for the generated images. Only supported by imagegeneration@002.
personGeneration | Optional: Allow generation of people by the model. The following values are supported: allow_adult (default), dont_allow.
safetySetting | Optional: Adds a filter level to safety filtering. The following values are supported: block_low_and_above, block_medium_and_above (default), block_only_high.
addWatermark | Optional: Add an invisible watermark to the generated images. The default value is true.
storageUri | Optional: Cloud Storage URI to store the generated images.
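As a point of reference, the following is a minimal Python sketch that assembles some of these fields into a :predict request and sends it with the google-auth and requests libraries. The project ID, location, model version, prompt, and parameter values are placeholders chosen for illustration, not recommended settings.

import google.auth
import google.auth.transport.requests
import requests

# Placeholder values -- replace with your own project, region, and model version.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
MODEL_VERSION = "imagen-3.0-generate-002"

# Obtain an access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

endpoint = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/google/models/{MODEL_VERSION}:predict"
)

# Request body that mirrors the parameter table above (values are illustrative).
body = {
    "instances": [{"prompt": "a photograph of a quiet mountain lake at sunrise"}],
    "parameters": {
        "sampleCount": 2,
        "aspectRatio": "16:9",
        "safetySetting": "block_medium_and_above",
        "personGeneration": "allow_adult",
    },
}

response = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=body,
)
response.raise_for_status()
print(len(response.json()["predictions"]), "image(s) returned")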
Output options object
The outputOptions object describes the image output.
Parameters | Description
---|---
outputOptions.mimeType | Optional: The image format that the output should be saved as. The following values are supported: image/png (default), image/jpeg.
outputOptions.compressionQuality | Optional: The level of compression if the output type is image/jpeg. Accepted values are 0 to 100. The default value is 75.
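For example, an illustrative parameters object requesting JPEG output at a moderate compression level nests the fields as follows (shown here as a Python dictionary; the values are placeholders):

# Illustrative "parameters" object requesting JPEG output at compression quality 75.
parameters = {
    "sampleCount": 1,
    "outputOptions": {
        "mimeType": "image/jpeg",
        "compressionQuality": 75,
    },
}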
Response
The response body from the REST request.
Parameter | Description
---|---
predictions | An array of vision generative model result objects, one for each generated image. See Vision generative model result object.
Vision generative model result object
Information about the model result.
Parameter | Description
---|---
bytesBase64Encoded | The base64-encoded generated image. Not present if the output image did not pass responsible AI filters.
mimeType | The type of the generated image. Not present if the output image did not pass responsible AI filters.
raiFilteredReason | The responsible AI filter reason. Only returned if includeRaiReason is enabled and this image was filtered.
safetyAttributes.categories | The safety attribute names. Only returned if includeSafetyAttributes is enabled and the image was not filtered.
safetyAttributes.scores | The safety attribute scores. Only returned if includeSafetyAttributes is enabled and the image was not filtered.
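As a sketch of how you might handle these fields, the following Python snippet decodes the predictions from a parsed :predict response into local files. The response.json path and the output filenames are placeholders.

import base64
import json

# Assumes the :predict response body was saved to response.json (placeholder path).
with open("response.json") as f:
    response_json = json.load(f)

for i, prediction in enumerate(response_json["predictions"]):
    if "bytesBase64Encoded" in prediction:
        # Pick a file extension that matches the reported MIME type.
        extension = "jpg" if prediction.get("mimeType") == "image/jpeg" else "png"
        with open(f"image_{i}.{extension}", "wb") as out:
            out.write(base64.b64decode(prediction["bytesBase64Encoded"]))
    else:
        # Present only when includeRaiReason is enabled and the image was filtered.
        print(prediction.get("raiFilteredReason", "Image was filtered."))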
Python
Parameters | Description
---|---
prompt | Required: The text prompt for the image.
number_of_images | Required: The number of images to generate. The default value is 1.
seed | Optional: The random seed for image generation. This is not available when add_watermark is set to True.
negative_prompt | Optional: A description of what to discourage in the generated images.
aspect_ratio | Optional: The aspect ratio for the image. The default value is "1:1".
output_mime_type | Optional: The image format that the output should be saved as. The following values are supported: image/png (default), image/jpeg.
compression_quality | Optional: The level of compression if the output MIME type is image/jpeg. Accepted values are 0 to 100. The default value is 75.
language | Optional: The language of the text prompt for the image.
output_gcs_uri | Optional: Cloud Storage URI to store the generated images.
add_watermark | Optional: Add a watermark to the generated image. The default value is True.
safety_filter_level | Optional: Adds a filter level to safety filtering. The following values are supported: block_low_and_above, block_medium_and_above (default), block_only_high.
person_generation | Optional: Allow generation of people by the model. The following values are supported: allow_adult (default), dont_allow.
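As an illustration of how several of these parameters combine, here is a hedged sketch using the Vertex AI SDK for Python. The project, location, prompt, and bucket path are placeholders, and parameter availability can vary by model and SDK version.

import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

# Placeholder project and region.
vertexai.init(project="my-project", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

# Parameter names follow the table above; values are illustrative.
response = model.generate_images(
    prompt="an isometric illustration of a small coastal town",
    number_of_images=2,
    aspect_ratio="16:9",
    safety_filter_level="block_medium_and_above",
    person_generation="dont_allow",
    add_watermark=True,
    output_gcs_uri="gs://my-bucket/imagen-output/",  # placeholder bucket
)
print(f"Generated {len(response.images)} image(s)")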
Upscale images
REST
Parameter | Description
---|---
mode | Required: Must be set to "upscale" for upscale requests.
upscaleConfig | Required: An upscale config object. See Upscale config object.
outputOptions | Optional: Describes the output image format in an output options object. See Output options object.
storageUri | Optional: Cloud Storage URI for where to store the generated images.
Upscale config object
Parameter | Description
---|---
upscaleConfig.upscaleFactor | Required: The upscale factor. The supported values are "x2" and "x4".
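To illustrate how these fields fit together, the following Python sketch builds an upscale request body as a dictionary. The source image path is a placeholder, and the body would be sent to the imagegeneration@002 :predict endpoint as shown in the Examples section.

import base64

# Read and base64-encode the image to upscale (placeholder path).
with open("source-photo.png", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

# Request body matching the upscale parameters described above.
upscale_body = {
    "instances": [
        {
            "prompt": "",
            "image": {"bytesBase64Encoded": encoded_image},
        }
    ],
    "parameters": {
        "sampleCount": 1,
        "mode": "upscale",
        "upscaleConfig": {"upscaleFactor": "x2"},
    },
}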
Response
The response body from the REST request.
Parameter | Description
---|---
predictions | An array of vision generative model result objects that contain the upscaled images.
Examples
The following examples show how to use the Imagen models to generate images.
Generate images
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- MODEL_VERSION: The imagegeneration model version to use. Available values:
  - Imagen 3:
    - imagen-3.0-generate-002 (newest model)
    - imagen-3.0-generate-001
    - imagen-3.0-fast-generate-001 - Low latency model version.
  - Default model version: imagegeneration - Uses the default model version v.006. As a best practice, you should always specify a model version, especially in production environments.

  For more information about model versions and features, see model versions.
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- TEXT_PROMPT: The text prompt that guides what images the model generates. This field is required for both generation and editing.
- IMAGE_COUNT: The number of generated images. Accepted integer values: 1-8 (imagegeneration@002), 1-4 (all other model versions). Default value: 4.
- ADD_WATERMARK: boolean. Optional. Whether to enable a watermark for generated images. Any image generated when the field is set to true contains a digital SynthID that you can use to verify a watermarked image. If you omit this field, the default value of true is used; you must set the value to false to disable this feature. You can use the seed field to get deterministic output only when this field is set to false.
- ASPECT_RATIO: string. Optional. A generation mode parameter that controls aspect ratio. Supported ratio values and their intended use:
  - 1:1 (default, square)
  - 3:4 (Ads, social media)
  - 4:3 (TV, photography)
  - 16:9 (landscape)
  - 9:16 (portrait)
- ENABLE_PROMPT_REWRITING: boolean. Optional. A parameter to use an LLM-based prompt rewriting feature to deliver higher quality images that better reflect the original prompt's intent. Disabling this feature may impact image quality and prompt adherence. Default value: true.
- INCLUDE_RAI_REASON: boolean. Optional. Whether to enable the Responsible AI filtered reason code in responses with blocked input or output. Default value: false.
- INCLUDE_SAFETY_ATTRIBUTES: boolean. Optional. Whether to enable rounded Responsible AI scores for a list of safety attributes in responses for unfiltered input and output. Safety attribute categories: "Death, Harm & Tragedy", "Firearms & Weapons", "Hate", "Health", "Illicit Drugs", "Politics", "Porn", "Religion & Belief", "Toxic", "Violence", "Vulgarity", "War & Conflict". Default value: false.
- MIME_TYPE: string. Optional. The MIME type of the content of the image. Available values:
  - image/jpeg
  - image/gif
  - image/png
  - image/webp
  - image/bmp
  - image/tiff
  - image/vnd.microsoft.icon
- COMPRESSION_QUALITY: integer. Optional. Only applies to JPEG output files. The level of detail the model preserves for images generated in JPEG file format. Values: 0 to 100, where a higher number means more compression. Default: 75.
- PERSON_SETTING: string. Optional. The safety setting that controls the type of people or face generation the model allows. Available values:
  - allow_adult (default): Allow generation of adults only, except for celebrity generation. Celebrity generation is not allowed for any setting.
  - dont_allow: Disable the inclusion of people or faces in generated images.
- SAFETY_SETTING: string. Optional. A setting that controls safety filter thresholds for generated images. Available values:
  - block_low_and_above: The highest safety threshold, resulting in the largest amount of generated images that are filtered. Previous value: block_most.
  - block_medium_and_above (default): A medium safety threshold that balances filtering for potentially harmful and safe content. Previous value: block_some.
  - block_only_high: A safety threshold that reduces the number of requests blocked due to safety filters. This setting might increase objectionable content generated by Imagen. Previous value: block_few.
- SEED_NUMBER: integer. Optional. Any non-negative integer you provide to make output images deterministic. Providing the same seed number always results in the same output images. If the model you're using supports digital watermarking, you must set "addWatermark": false to use this field. Accepted integer values: 1-2147483647.
- OUTPUT_STORAGE_URI: string. Optional. The Cloud Storage bucket to store the output images. If not provided, base64-encoded image bytes are returned in the response. Sample value: gs://image-bucket/output/.
Additional optional parameters
Use the following optional variables depending on your use
case. Add some or all of the following parameters in the "parameters": {}
object.
This list shows common optional parameters and isn't meant to be exhaustive. For more
information about optional parameters,
see Imagen API reference: Generate images.
"parameters": { "sampleCount": IMAGE_COUNT, "addWatermark": ADD_WATERMARK, "aspectRatio": "ASPECT_RATIO", "enhancePrompt": ENABLE_PROMPT_REWRITING, "includeRaiReason": INCLUDE_RAI_REASON, "includeSafetyAttributes": INCLUDE_SAFETY_ATTRIBUTES, "outputOptions": { "mimeType": "MIME_TYPE", "compressionQuality": COMPRESSION_QUALITY }, "personGeneration": "PERSON_SETTING", "safetySetting": "SAFETY_SETTING", "seed": SEED_NUMBER, "storageUri": "OUTPUT_STORAGE_URI" }
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict
Request JSON body:
{ "instances": [ { "prompt": "TEXT_PROMPT" } ], "parameters": { "sampleCount": IMAGE_COUNT } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict" | Select-Object -Expand Content
"sampleCount": 2
. The response returns two prediction objects, with
the generated image bytes base64-encoded.
{ "predictions": [ { "bytesBase64Encoded": "BASE64_IMG_BYTES", "mimeType": "image/png" }, { "mimeType": "image/png", "bytesBase64Encoded": "BASE64_IMG_BYTES" } ] }
If you use a model that supports prompt enhancement, the response includes an additional prompt field with the enhanced prompt used for generation:
{ "predictions": [ { "mimeType": "MIME_TYPE", "prompt": "ENHANCED_PROMPT_1", "bytesBase64Encoded": "BASE64_IMG_BYTES_1" }, { "mimeType": "MIME_TYPE", "prompt": "ENHANCED_PROMPT_2", "bytesBase64Encoded": "BASE64_IMG_BYTES_2" } ] }
Python
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
In this sample you call the generate_images method on the ImageGenerationModel (@006 version) and save the generated images locally. You can then optionally use the show() method in a notebook to display the generated images. For more information on model versions and features, see model versions.
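A minimal sketch of that workflow, assuming the Vertex AI SDK for Python is installed and Application Default Credentials are configured; the project ID, prompt, and output filenames are placeholders:

import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

# Placeholder project and region.
vertexai.init(project="my-project", location="us-central1")

# Load the default model version described above.
generation_model = ImageGenerationModel.from_pretrained("imagegeneration@006")

response = generation_model.generate_images(
    prompt="a watercolor painting of a lighthouse at dusk",  # placeholder prompt
    number_of_images=2,
)

# Save each generated image locally; in a notebook you can call .show() instead.
for i, image in enumerate(response.images):
    image.save(location=f"generated_image_{i}.png")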
Upscale images
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- PROJECT_ID: Your Google Cloud project ID.
- B64_BASE_IMAGE: The base image to edit or upscale. The image must be specified as a base64-encoded byte string. Size limit: 10 MB.
- IMAGE_SOURCE: The Cloud Storage location of the image you want to edit or upscale. For example: gs://output-bucket/source-photos/photo.png.
- UPSCALE_FACTOR: Optional. The factor to which the image will be upscaled. If not specified, the upscale factor will be determined from the longer side of the input image and sampleImageSize. Available values: x2 or x4.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@002:predict
Request JSON body:
{ "instances": [ { "prompt": "", "image": { // use one of the following to specify the image to upscale "bytesBase64Encoded": "B64_BASE_IMAGE" "gcsUri": "IMAGE_SOURCE" // end of base image input options }, } ], "parameters": { "sampleCount": 1, "mode": "upscale", "upscaleConfig": { "upscaleFactor": "UPSCALE_FACTOR" } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@002:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@002:predict" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "predictions": [ { "mimeType": "image/png", "bytesBase64Encoded": "iVBOR..[base64-encoded-upscaled-image]...YII=" } ] }
What's next
- For more information, see Imagen on Vertex AI overview and Generate images using text prompts.