Gemini API in Vertex AI quickstart

In this quickstart, you install the Google Gen AI SDK for your language of choice and then make your first API request. The samples differ slightly depending on whether you authenticate to Vertex AI with an API key or with Application Default Credentials (ADC).
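
For example, with the Gen AI SDK for Python, the difference shows up only in how the client is constructed. A minimal sketch (YOUR_API_KEY is a placeholder; the rest of this quickstart uses ADC):

    from google import genai

    # With ADC, the client picks up credentials and project settings from the
    # environment (see the setup steps below).
    adc_client = genai.Client(vertexai=True)

    # With an API key, no gcloud setup is needed.
    api_key_client = genai.Client(vertexai=True, api_key="YOUR_API_KEY")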

Select your authentication method.


Before you begin

If you haven't already configured ADC, complete the following steps.

Configure your project

Select a project, enable billing, enable the Vertex AI API, and install the gcloud CLI.

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.


  2. Verify that billing is enabled for your Google Cloud project.

  3. Enable the Vertex AI API. (A gcloud alternative is shown after these steps.)

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.


  4. Install the Google Cloud CLI.

  5. Configure the gcloud CLI to use your federated identity.

    For more information, see Sign in to the gcloud CLI with your federated identity.

  6. To initialize the gcloud CLI, run the following command:

    gcloud init
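
    If you prefer the command line, you can also enable the Vertex AI API (step 3 above) with gcloud:

    gcloud services enable aiplatform.googleapis.com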

    Create local authentication credentials

    Create local authentication credentials for your user account:

    gcloud auth application-default login

    If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    Required roles

    To get the permissions that you need to use the Gemini API in Vertex AI, ask your administrator to grant you the Vertex AI User (roles/aiplatform.user) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.

    You might also be able to get the required permissions through custom roles or other predefined roles.
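
    For example, an administrator can grant this role from the command line (a sketch; PROJECT_ID and USER_EMAIL are placeholders for your values):

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/aiplatform.user"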

    Install the SDK and set up your environment

    On your local machine, click one of the following tabs to install the SDK for your programming language.

    Python Gen AI SDK

    Run the following command to install and update the Gen AI SDK for Python:

    pip install --upgrade google-genai

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True
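
    Alternatively, instead of relying on these environment variables, you can pass the same values directly when constructing the client. A minimal sketch:

    from google import genai

    # Equivalent to setting GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT,
    # and GOOGLE_CLOUD_LOCATION in the environment.
    client = genai.Client(vertexai=True, project="GOOGLE_CLOUD_PROJECT_ID", location="global")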

    Go Gen AI SDK

    Run the following command to install and update the Gen AI SDK for Go:

    go get google.golang.org/genai

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    Node.js Gen AI SDK

    Run the following command to install and update the Gen AI SDK for Node.js:

    npm install @google/genai

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    Java Gen AI SDK

    Install and update the Gen AI SDK for Java as follows:

    Maven

    Add the following to your pom.xml:

    <dependencies>
      <dependency>
        <groupId>com.google.genai</groupId>
        <artifactId>google-genai</artifactId>
        <version>0.7.0</version>
      </dependency>
    </dependencies>
    

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    REST

    Set environment variables:

    GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    GOOGLE_CLOUD_LOCATION=global
    API_ENDPOINT=YOUR_API_ENDPOINT
    MODEL_ID="gemini-2.5-flash"
    GENERATE_CONTENT_API="generateContent"

    Send your first request

    Use the generateContent method to send a request to the Gemini API in Vertex AI:

    Python

    from google import genai
    from google.genai.types import HttpOptions
    
    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="How does AI work?",
    )
    print(response.text)
    # Example response:
    # Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
    #
    # Here's a simplified overview:
    # ...

    Go

    import (
    	"context"
    	"fmt"
    	"io"
    
    	"google.golang.org/genai"
    )
    
    // generateWithText shows how to generate text using a text prompt.
    func generateWithText(w io.Writer) error {
    	ctx := context.Background()
    
    	client, err := genai.NewClient(ctx, &genai.ClientConfig{
    		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    	})
    	if err != nil {
    		return fmt.Errorf("failed to create genai client: %w", err)
    	}
    
    	resp, err := client.Models.GenerateContent(ctx,
    		"gemini-2.5-flash",
    		genai.Text("How does AI work?"),
    		nil,
    	)
    	if err != nil {
    		return fmt.Errorf("failed to generate content: %w", err)
    	}
    
    	respText := resp.Text()
    
    	fmt.Fprintln(w, respText)
    	// Example response:
    	// That's a great question! Understanding how AI works can feel like ...
    	// ...
    	// **1. The Foundation: Data and Algorithms**
    	// ...
    
    	return nil
    }
    

    Node.js

    const {GoogleGenAI} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents: 'How does AI work?',
      });
    
      console.log(response.text);
    
      return response.text;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    
    public class TextGenerationWithText {
    
      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }
    
      // Generates text with text input
      public static String generateContent(String modelId) {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {
    
          GenerateContentResponse response =
              client.models.generateContent(modelId, "How does AI work?", null);
    
          System.out.print(response.text());
          // Example response:
          // Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
          //
          // Here's a simplified overview:
          // ...
          return response.text();
        }
      }
    }

    REST

    To send this prompt request, run the curl command from the command line, or include the REST call in your application.

    curl \
      -X POST \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://${API_ENDPOINT}/v1/projects/${GOOGLE_CLOUD_PROJECT}/locations/${GOOGLE_CLOUD_LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_CONTENT_API}" \
      -d $'{
      "contents": {
        "role": "user",
        "parts": {
          "text": "Explain how AI works in a few words"
        }
      }
    }'

    The model returns a response. Note that the response is generated in sections, and the safety of each section is evaluated individually.
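
    If you want to inspect those safety evaluations yourself, each candidate in the response exposes its own ratings. A minimal Python sketch, assuming the response object from the Python example above:

    # Print the safety rating categories and probabilities for each candidate.
    for candidate in response.candidates:
        for rating in candidate.safety_ratings or []:
            print(rating.category, rating.probability)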

    Generate images

    Gemini can generate and process images conversationally. You can prompt Gemini with text, images, or a combination of both to perform various image-related tasks, such as image generation and editing. The following code shows how to generate an image based on a descriptive prompt:

    You must include responseModalities: ["TEXT", "IMAGE"] in your configuration. Image-only output is not supported with these models.

    Python

    from google import genai
    from google.genai.types import GenerateContentConfig, Modality
    from PIL import Image
    from io import BytesIO
    
    client = genai.Client()
    
    response = client.models.generate_content(
        model="gemini-2.5-flash-image",
        contents=("Generate an image of the Eiffel tower with fireworks in the background."),
        config=GenerateContentConfig(
            response_modalities=[Modality.TEXT, Modality.IMAGE],
            candidate_count=1,
        safety_settings=[
            {
                "method": "PROBABILITY",
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_MEDIUM_AND_ABOVE",
            },
        ],
        ),
    )
    for part in response.candidates[0].content.parts:
        if part.text:
            print(part.text)
        elif part.inline_data:
            image = Image.open(BytesIO(part.inline_data.data))
            image.save("output_folder/example-image-eiffel-tower.png")
    # Example response:
    #   I will generate an image of the Eiffel Tower at night, with a vibrant display of
    #   colorful fireworks exploding in the dark sky behind it. The tower will be
    #   illuminated, standing tall as the focal point of the scene, with the bursts of
    #   light from the fireworks creating a festive atmosphere.

    Node.js

    const fs = require('fs');
    const {GoogleGenAI, Modality} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION =
      process.env.GOOGLE_CLOUD_LOCATION || 'us-central1';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const response = await client.models.generateContentStream({
        model: 'gemini-2.5-flash-image',
        contents:
          'Generate an image of the Eiffel tower with fireworks in the background.',
        config: {
          responseModalities: [Modality.TEXT, Modality.IMAGE],
        },
      });
    
      const generatedFileNames = [];
      let imageIndex = 0;
      for await (const chunk of response) {
        const text = chunk.text;
        const data = chunk.data;
        if (text) {
          console.debug(text);
        } else if (data) {
          const fileName = `generate_content_streaming_image_${imageIndex++}.png`;
          console.debug(`Writing response image to file: ${fileName}.`);
          try {
            fs.writeFileSync(fileName, data);
            generatedFileNames.push(fileName);
          } catch (error) {
            console.error(`Failed to write image file ${fileName}:`, error);
          }
        }
      }
    
      return generatedFileNames;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.Blob;
    import com.google.genai.types.Candidate;
    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.Part;
    import com.google.genai.types.SafetySetting;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.File;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.imageio.ImageIO;
    
    public class ImageGenMmFlashWithText {
    
      public static void main(String[] args) throws IOException {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash-image";
        String outputFile = "resources/output/example-image-eiffel-tower.png";
        generateContent(modelId, outputFile);
      }
    
      // Generates an image with text input
      public static void generateContent(String modelId, String outputFile) throws IOException {
        // Client Initialization. Once created, it can be reused for multiple requests.
        try (Client client = Client.builder().location("global").vertexAI(true).build()) {
    
          GenerateContentConfig contentConfig =
              GenerateContentConfig.builder()
                  .responseModalities("TEXT", "IMAGE")
                  .candidateCount(1)
                  .safetySettings(
                      SafetySetting.builder()
                          .method("PROBABILITY")
                          .category("HARM_CATEGORY_DANGEROUS_CONTENT")
                          .threshold("BLOCK_MEDIUM_AND_ABOVE")
                          .build())
                  .build();
    
          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  "Generate an image of the Eiffel tower with fireworks in the background.",
                  contentConfig);
    
          // Get parts of the response
          List<Part> parts =
              response
                  .candidates()
                  .flatMap(candidates -> candidates.stream().findFirst())
                  .flatMap(Candidate::content)
                  .flatMap(Content::parts)
                  .orElse(new ArrayList<>());
    
          // For each part print text if present, otherwise read image data if present and
          // write it to the output file
          for (Part part : parts) {
            if (part.text().isPresent()) {
              System.out.println(part.text().get());
            } else if (part.inlineData().flatMap(Blob::data).isPresent()) {
              BufferedImage image =
                  ImageIO.read(new ByteArrayInputStream(part.inlineData().flatMap(Blob::data).get()));
              ImageIO.write(image, "png", new File(outputFile));
            }
          }
    
          System.out.println("Content written to: " + outputFile);
          // Example response:
          // Here is the Eiffel Tower with fireworks in the background...
          //
          // Content written to: resources/output/example-image-eiffel-tower.png
        }
      }
    }

    Image understanding

    Gemini can also understand images. The following code uses a sample image stored in Cloud Storage and a different model to infer information about the image:

    Python

    from google import genai
    from google.genai.types import HttpOptions, Part
    
    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            "What is shown in this image?",
            Part.from_uri(
                file_uri="gs://cloud-samples-data/generative-ai/image/scones.jpg",
                mime_type="image/jpeg",
            ),
        ],
    )
    print(response.text)
    # Example response:
    # The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...

    Go

    import (
    	"context"
    	"fmt"
    	"io"
    
    	genai "google.golang.org/genai"
    )
    
    // generateWithTextImage shows how to generate text using both text and image input
    func generateWithTextImage(w io.Writer) error {
    	ctx := context.Background()
    
    	client, err := genai.NewClient(ctx, &genai.ClientConfig{
    		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    	})
    	if err != nil {
    		return fmt.Errorf("failed to create genai client: %w", err)
    	}
    
    	modelName := "gemini-2.5-flash"
    	contents := []*genai.Content{
    		{Parts: []*genai.Part{
    			{Text: "What is shown in this image?"},
    			{FileData: &genai.FileData{
    				// Image source: https://storage.googleapis.com/cloud-samples-data/generative-ai/image/scones.jpg
    				FileURI:  "gs://cloud-samples-data/generative-ai/image/scones.jpg",
    				MIMEType: "image/jpeg",
    			}},
    		},
    			Role: "user"},
    	}
    
    	resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
    	if err != nil {
    		return fmt.Errorf("failed to generate content: %w", err)
    	}
    
    	respText := resp.Text()
    
    	fmt.Fprintln(w, respText)
    
    	// Example response:
    	// The image shows an overhead shot of a rustic, artistic arrangement on a surface that ...
    
    	return nil
    }
    

    Node.js

    const {GoogleGenAI} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const image = {
        fileData: {
          fileUri: 'gs://cloud-samples-data/generative-ai/image/scones.jpg',
          mimeType: 'image/jpeg',
        },
      };
    
      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents: [image, 'What is shown in this image?'],
      });
    
      console.log(response.text);
    
      return response.text;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    import com.google.genai.types.Part;
    
    public class TextGenerationWithTextAndImage {
    
      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }
    
      // Generates text with text and image input
      public static String generateContent(String modelId) {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {
    
          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  Content.fromParts(
                      Part.fromText("What is shown in this image?"),
                      Part.fromUri(
                          "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg")),
                  null);
    
          System.out.print(response.text());
          // Example response:
          // The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
          return response.text();
        }
      }
    }

    Code execution

    The code execution feature of the Gemini API in Vertex AI enables the model to generate and run Python code, then learn iteratively from the results until it arrives at a final output. Vertex AI provides code execution as a tool, similar to function calling. You can use this feature to build applications that benefit from code-based reasoning and produce text output. For example:

    Python

    from google import genai
    from google.genai.types import (
        HttpOptions,
        Tool,
        ToolCodeExecution,
        GenerateContentConfig,
    )
    
    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    model_id = "gemini-2.5-flash"
    
    code_execution_tool = Tool(code_execution=ToolCodeExecution())
    response = client.models.generate_content(
        model=model_id,
        contents="Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
        config=GenerateContentConfig(
            tools=[code_execution_tool],
            temperature=0,
        ),
    )
    print("# Code:")
    print(response.executable_code)
    print("# Outcome:")
    print(response.code_execution_result)
    
    # Example response:
    # # Code:
    # def fibonacci(n):
    #     if n <= 0:
    #         return 0
    #     elif n == 1:
    #         return 1
    #     else:
    #         a, b = 0, 1
    #         for _ in range(2, n + 1):
    #             a, b = b, a + b
    #         return b
    #
    # fib_20 = fibonacci(20)
    # print(f'{fib_20=}')
    #
    # # Outcome:
    # fib_20=6765

    Go

    import (
    	"context"
    	"fmt"
    	"io"
    
    	genai "google.golang.org/genai"
    )
    
    // generateWithCodeExec shows how to generate text using the code execution tool.
    func generateWithCodeExec(w io.Writer) error {
    	ctx := context.Background()
    
    	client, err := genai.NewClient(ctx, &genai.ClientConfig{
    		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    	})
    	if err != nil {
    		return fmt.Errorf("failed to create genai client: %w", err)
    	}
    
    	prompt := "Calculate 20th fibonacci number. Then find the nearest palindrome to it."
    	contents := []*genai.Content{
    		{Parts: []*genai.Part{
    			{Text: prompt},
    		},
    			Role: "user"},
    	}
    	config := &genai.GenerateContentConfig{
    		Tools: []*genai.Tool{
    			{CodeExecution: &genai.ToolCodeExecution{}},
    		},
    		Temperature: genai.Ptr(float32(0.0)),
    	}
    	modelName := "gemini-2.5-flash"
    
    	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
    	if err != nil {
    		return fmt.Errorf("failed to generate content: %w", err)
    	}
    
    	for _, p := range resp.Candidates[0].Content.Parts {
    		if p.Text != "" {
    			fmt.Fprintf(w, "Gemini: %s", p.Text)
    		}
    		if p.ExecutableCode != nil {
    			fmt.Fprintf(w, "Language: %s\n%s\n", p.ExecutableCode.Language, p.ExecutableCode.Code)
    		}
    		if p.CodeExecutionResult != nil {
    			fmt.Fprintf(w, "Outcome: %s\n%s\n", p.CodeExecutionResult.Outcome, p.CodeExecutionResult.Output)
    		}
    	}
    
    	// Example response:
    	// Gemini: Okay, I can do that. First, I'll calculate the 20th Fibonacci number. Then, I need ...
    	//
    	// Language: PYTHON
    	//
    	// def fibonacci(n):
    	//    ...
    	//
    	// fib_20 = fibonacci(20)
    	// print(f'{fib_20=}')
    	//
    	// Outcome: OUTCOME_OK
    	// fib_20=6765
    	//
    	// Now that I have the 20th Fibonacci number (6765), I need to find the nearest palindrome. ...
    	// ...
    
    	return nil
    }
    

    Node.js

    const {GoogleGenAI} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents:
          'What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50.',
        config: {
          tools: [{codeExecution: {}}],
          temperature: 0,
        },
      });
    
      console.debug(response.executableCode);
      console.debug(response.codeExecutionResult);
    
      return response.codeExecutionResult;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    import com.google.genai.types.Tool;
    import com.google.genai.types.ToolCodeExecution;
    
    public class ToolsCodeExecWithText {
    
      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }
    
      // Generates text using the Code Execution tool
      public static String generateContent(String modelId) {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {
    
          // Create a GenerateContentConfig and set codeExecution tool
          GenerateContentConfig contentConfig =
              GenerateContentConfig.builder()
                  .tools(Tool.builder().codeExecution(ToolCodeExecution.builder().build()).build())
                  .temperature(0.0F)
                  .build();
    
          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  "Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
                  contentConfig);
    
          System.out.println("Code: \n" + response.executableCode());
          System.out.println("Outcome: \n" + response.codeExecutionResult());
          // Example response
          // Code:
          // def fibonacci(n):
          //    if n <= 0:
          //        return 0
          //    elif n == 1:
          //        return 1
          //    else:
          //        a, b = 1, 1
          //        for _ in range(2, n):
          //            a, b = b, a + b
          //        return b
          //
          // fib_20 = fibonacci(20)
          // print(f'{fib_20=}')
          //
          // Outcome:
          // fib_20=6765
          return response.executableCode();
        }
      }
    }

    For more code execution samples, see the code execution documentation.

    What's next

    Now that you've made your first API request, explore the following guides that describe how to set up more advanced Vertex AI features for production code.