Quickstart: Send text prompts to Gemini using Vertex AI Studio

You can use Vertex AI Studio to design, test, and manage prompts for Google's Gemini large language models (LLMs).

In this quickstart, you:

  • Send sample prompts from the generative AI prompt gallery to the Gemini API, including the following:
    • A summarization text prompt
    • A code generation prompt
  • View the code used to generate the responses

Before you begin prompting in Vertex AI Studio

This quickstart requires you to complete the following steps to set up a Google Cloud project and enable the Vertex AI API.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Vertex AI API.

    Enable the API

Sample prompts in Vertex AI Studio

A prompt is a natural language request submitted to a language model that generates a response. Prompts can contain questions, instructions, contextual information, few-shot examples, and partial input for the model to complete. After the model receives a prompt, depending on the type of model used, it can generate text, embeddings, code, images, videos, music, and more.

The sample prompts in the Vertex AI Studio prompt gallery are designed to demonstrate model capabilities. Each prompt is preconfigured with specific model and parameter values, so you can open a sample prompt and click Submit to generate a response.
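If you want to send a prompt programmatically instead of through the console, the Vertex AI SDK for Python supports the same kind of request. The following is a minimal sketch, assuming the google-cloud-aiplatform package is installed and that the project ID, region, and prompt text are placeholders you replace with your own values:

    import vertexai
    from vertexai.generative_models import GenerationConfig, GenerativeModel

    # Placeholder project and region; replace with your own values.
    vertexai.init(project="your-project-id", location="us-central1")

    # Use the same default model that the prompt gallery samples use.
    model = GenerativeModel("gemini-1.5-flash-002")

    # Parameter values such as temperature can be set per request,
    # just as the gallery samples preconfigure them.
    response = model.generate_content(
        "Summarize the following text in three bullet points: ...",
        generation_config=GenerationConfig(temperature=0.2, max_output_tokens=1024),
    )
    print(response.text)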

Test the Gemini Flash model using a summarization prompt

Send a summarization prompt to the Vertex AI Gemini API. A summarization task extracts the most important information from content such as text or audio. You can provide information in the prompt to help the model create a summary, or ask the model to create a summary on its own.

  1. Go to the Prompt gallery page from the Vertex AI section in the Google Cloud console.
    Go to prompt gallery

  2. In the Tasks drop-down menu, select Summarize.

  3. Open the Audio summarization card.

    This sample prompt includes an audio file and requests a summary of the file contents in a bulleted list.

    The audio summarization prompt text and audio file

  4. Notice that in the settings panel, the default model is set to gemini-1.5-flash-002. You can choose a different Gemini model from the list.

    The Gemini model in the settings panel

  5. At the bottom of the Prompt box, click Submit to generate the summary.

    The Submit button in the Prompt box

    The output is displayed in the Response box.

  6. To view the Vertex AI API code used to generate the audio summary, click Get code.

    In the Get code panel, you can choose your preferred language to get the sample code for the prompt, or you can open the Python code in a Colab Enterprise notebook.
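The Python sample that Get code displays follows the same pattern as the console workflow: it passes the audio file and the summarization instruction to the model. A rough sketch of what that sample might look like, assuming placeholder project, region, and Cloud Storage URI values (the generated code in the panel may differ):

    import vertexai
    from vertexai.generative_models import GenerativeModel, Part

    # Placeholder project and region; replace with your own values.
    vertexai.init(project="your-project-id", location="us-central1")

    model = GenerativeModel("gemini-1.5-flash-002")

    # Placeholder Cloud Storage URI; the gallery sample uses its own audio file.
    audio = Part.from_uri("gs://your-bucket/your-audio-file.mp3", mime_type="audio/mpeg")

    # Ask for a bulleted summary of the audio contents, as in the sample prompt.
    response = model.generate_content(
        [audio, "Summarize the main points of this audio file as a bulleted list."]
    )
    print(response.text)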

Test the Gemini Flash model using a code generation prompt

Send a code generation prompt to the Vertex AI Gemini API. A code generation task generates code from a natural language description.

  1. Go to the Prompt gallery page from the Vertex AI section in the Google Cloud console.
    Go to prompt gallery

  2. In the Tasks drop-down menu, select Code.

  3. Open the Generate code from comments card.

    This sample prompt includes a system instruction that tells the model how to respond and some incomplete Java methods.

    The code generation prompt text

  4. Notice that in the settings panel, the default model is set to gemini-1.5-flash-002. You can choose a different Gemini model from the list.

    The Gemini model in the settings panel

  5. At the bottom of the Prompt box, click Submit to complete each method by generating code in the areas marked <WRITE CODE HERE>.

    The output is displayed in the Response box.

  6. To view the Vertex AI API code used to generate the code completions, click Get code.

    In the Get code panel, you can choose your preferred language to get the sample code for the prompt, or you can open the Python code in a Colab Enterprise notebook.
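As in the previous section, the Python sample behind Get code can be approximated with the Vertex AI SDK. A minimal sketch, assuming placeholder project and region values, with a simplified system instruction and Java method standing in for the gallery's actual prompt text:

    import vertexai
    from vertexai.generative_models import GenerativeModel

    # Placeholder project and region; replace with your own values.
    vertexai.init(project="your-project-id", location="us-central1")

    # A system instruction tells the model how to respond, as in the sample prompt.
    model = GenerativeModel(
        "gemini-1.5-flash-002",
        system_instruction=[
            "You are a coding assistant. Complete the Java methods and return only code."
        ],
    )

    # Simplified stand-in for the sample's incomplete Java methods.
    prompt = """
    public static int sumOfList(List<Integer> numbers) {
        <WRITE CODE HERE>
    }
    """

    response = model.generate_content(prompt)
    print(response.text)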

Discover what's next with prompts