Generate and edit images from text descriptions in seconds using the Gemini 2.5 Flash Image and Imagen image generation models, with APIs available for Python, Java, and Go.
New customers get up to $300 in free credits to generate images and more on Vertex AI.
Overview
Text-to-image AI is a type of artificial intelligence that can generate and edit images from text descriptions. This technology has the potential to transform how we interact with and create visual content. Google Cloud's text-to-image AI tools and resources, including pre-trained models like Imagen, Gemini 2.5 Flash Image, and Veo, available in Vertex AI, are designed to help developers easily implement text-to-image generation in their applications.
Text-to-image AI can be used in application development to generate mockups, prototypes, illustrations, test data, educational content, and visualizations for debugging. Google Cloud's Vertex AI and Cloud Vision API give developers access to a suite of image processing capabilities, including text detection, object detection, and image classification. Document AI can extract text from scanned documents to use as text descriptions for generating images.
Imagen and Gemini 2.5 Flash Image are Google's key text-to-image models.
Imagen: Imagen is a specialized, pure image model. It's built as a diffusion engine, which means its primary focus is on generating high-quality, polished, and photorealistic images from text prompts. Its strength lies in "pattern-matching text to pixels" to create beautiful, visually appealing outputs.
Gemini 2.5 Flash Image: This is a natively multimodal Large Language Model (LLM). Unlike a dedicated image model, it treats images as another form of "language." This means it was trained from the ground up to understand and process both text and images in a single, unified step. This architecture is what unlocks its unique capabilities beyond simple generation.
You can access these text-to-image AI models through Vertex AI on Google Cloud or Google AI Studio. To use a model, provide a text prompt, select any parameters the model exposes (some models let you control the style, creativity, and accuracy of the output), and generate the image.
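As a sketch of that prompt-then-generate flow, here is what a call might look like with the google-genai Python SDK. The model ID, the `response_modalities` setting, and the response layout are assumptions to verify against the current SDK reference; the SDK import is deferred so the prompt helper works without the package installed.

```python
def build_prompt(subject: str, style: str) -> str:
    """Combine a subject and an optional style hint into one text prompt."""
    return f"{subject}, in a {style} style" if style else subject

def generate_image(prompt: str) -> bytes:
    """Send the prompt to Gemini 2.5 Flash Image and return the image bytes."""
    # Deferred import: the helper above stays usable without google-genai installed.
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads API key / Vertex AI settings from the environment
    response = client.models.generate_content(
        model="gemini-2.5-flash-image",  # assumed model ID; check the model list
        contents=prompt,
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )
    for part in response.candidates[0].content.parts:
        if part.inline_data:  # the generated image arrives as inline bytes
            return part.inline_data.data
    raise RuntimeError("no image part in response")

# Usage (requires credentials):
#   png = generate_image(build_prompt("a lighthouse at dusk", "watercolor"))
#   open("lighthouse.png", "wb").write(png)
```

The same prompt can be sent to other models by changing the `model` argument; only the parameter set differs between models.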
How It Works
Text-to-image AI uses natural language processing (NLP) to convert the text description into a machine-readable format. The machine learning model, trained on a massive dataset of paired text and images, learns to identify patterns and uses them to generate or edit images. Google Cloud's text-to-image AI is built on Imagen, a state-of-the-art deep learning model that can generate photorealistic images from text descriptions.
Common Uses
Learn how to use the text-to-image generation feature of Imagen on Vertex AI and export an upscaled version of a generated image. This quickstart shows you how to use Imagen image generation in the Google Cloud console.
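The generate-then-upscale flow from the quickstart can be sketched with the Vertex AI SDK for Python. The project ID, model version string, and the 2048/4096 upscale targets are assumptions to check against the quickstart itself; the SDK import is deferred so the size helper runs on its own.

```python
# Imagen upscaling targets fixed output sizes; 2048 and 4096 are assumed here.
SUPPORTED_UPSCALE_SIZES = (2048, 4096)

def pick_upscale_size(target: int) -> int:
    """Return the smallest supported upscale size that covers the requested one."""
    for size in SUPPORTED_UPSCALE_SIZES:
        if size >= target:
            return size
    return SUPPORTED_UPSCALE_SIZES[-1]

def generate_and_upscale(prompt: str, target: int = 2048) -> None:
    # Deferred import: pick_upscale_size works without the SDK installed.
    import vertexai
    from vertexai.preview.vision_models import ImageGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = ImageGenerationModel.from_pretrained("imagegeneration@006")  # assumed version
    images = model.generate_images(prompt=prompt, number_of_images=1)
    upscaled = model.upscale_image(image=images[0], new_size=pick_upscale_size(target))
    upscaled.save(location="upscaled.png")

# Usage (requires a Google Cloud project with Vertex AI enabled):
#   generate_and_upscale("a red barn in a snowy field")
```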
With Gemini 2.5 Flash Image you can combine different images into one seamless new visual. Use multiple reference images to create a single, unified image. You can also edit images with simple, natural language instructions. From removing a person from a group photo to fixing a small detail like a stain, you can make changes through a simple conversation.
Additionally, Imagen on Vertex AI lets you edit Imagen-generated or existing images. You can specify the part of the image to modify in addition to a text description of the updates (mask-based editing).
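A natural-language edit with Gemini 2.5 Flash Image might look like the following sketch with the google-genai SDK: the existing image is sent alongside a plain-text instruction. The model ID and response layout are assumptions; the MIME-type helper is ordinary lookup logic and runs without the SDK.

```python
import pathlib

# Map common image extensions to MIME types for the inline-image request part.
_MIME_TYPES = {".png": "image/png", ".jpg": "image/jpeg",
               ".jpeg": "image/jpeg", ".webp": "image/webp"}

def mime_for(path: str) -> str:
    """Infer the MIME type of an image file from its extension."""
    suffix = pathlib.Path(path).suffix.lower()
    if suffix not in _MIME_TYPES:
        raise ValueError(f"unsupported image type: {suffix}")
    return _MIME_TYPES[suffix]

def edit_image(path: str, instruction: str) -> bytes:
    """Send an existing image plus a natural-language edit instruction to Gemini."""
    # Deferred import: mime_for works without google-genai installed.
    from google import genai
    from google.genai import types

    client = genai.Client()
    image_part = types.Part.from_bytes(
        data=pathlib.Path(path).read_bytes(), mime_type=mime_for(path)
    )
    response = client.models.generate_content(
        model="gemini-2.5-flash-image",  # assumed model ID
        contents=[image_part, instruction],
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )
    for part in response.candidates[0].content.parts:
        if part.inline_data:  # the edited image arrives as inline bytes
            return part.inline_data.data
    raise RuntimeError("no edited image in response")

# Usage (requires credentials):
#   edited = edit_image("group_photo.png", "Remove the coffee stain on the table")
```

Mask-based editing with Imagen follows the same pattern but additionally supplies a mask image marking the region to modify.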
Generate relevant descriptions for images, including detailed metadata, automated captioning, and quick descriptions of products and visual assets.
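One way to sketch captioning is to send the image to a Gemini model with an instruction listing the metadata fields you want back. The model ID and the hard-coded MIME type below are assumptions; the prompt builder is plain string logic and runs without the SDK.

```python
def caption_request(fields: list[str]) -> str:
    """Build a captioning instruction asking for specific metadata fields."""
    if not fields:
        return "Write a one-sentence caption for this image."
    return "Describe this image. Include: " + ", ".join(fields) + "."

def describe_image(path: str, fields: list[str]) -> str:
    """Return a text description of the image at `path`."""
    # Deferred import: caption_request works without google-genai installed.
    import pathlib
    from google import genai
    from google.genai import types

    client = genai.Client()
    image = types.Part.from_bytes(
        data=pathlib.Path(path).read_bytes(),
        mime_type="image/png",  # adjust to match the actual file type
    )
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # a text-output model suffices here; assumed ID
        contents=[image, caption_request(fields)],
    )
    return response.text

# Usage (requires credentials):
#   describe_image("product.png", ["product name", "color", "suggested alt text"])
```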
Digital watermarking is automatically added to images generated by certain AI models on Vertex AI, such as Imagen and Gemini 2.5 Flash Image. This is done using a technology created by Google DeepMind called SynthID, which embeds an invisible watermark directly into the image's pixels.
To detect the digital watermark in an image on Vertex AI, you can use the built-in detection tools. Using Vertex AI Media Studio, you can simply upload the image you want to verify and if a SynthID watermark is detected, the image will display a "SynthID detected" badge.