Structured output

This document shows you how to use structured output to control the format of responses from Gemini models.

You can require a model's generated output to adhere to a specific schema to receive consistently formatted responses. For example, if you use an established data schema for other tasks, you can require the model to follow the same schema. This lets you directly extract data from the model's output without post-processing.

To specify the structure of a model's output, define a response schema, which works like a blueprint for model responses. When you include the response schema in your prompt, the model's response always follows your defined schema.

You can control generated output when using supported Gemini models. For open models, follow the corresponding user guide.

Example use cases

Common use cases for applying a response schema include the following:

  • Validating JSON output: Generative model outputs can vary. By including a response schema, you can require the model to return valid JSON that conforms to your schema. This allows downstream tasks to reliably process the output.
  • Constraining model responses: You can constrain how a model responds. For example, you can have a model annotate text with a predefined set of labels, such as positive or negative, instead of allowing the model to generate its own labels like good or bad.
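
For example, the following sketch (using the google-genai Python SDK shown later in this guide) constrains a sentiment label to two predefined values. The label names and the review text are illustrative, not part of any fixed API.

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

# Hedged sketch: constrain the model's answer to a predefined set of labels.
client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Classify this review: 'The battery life on this phone is outstanding.'",
    config=GenerateContentConfig(
        response_mime_type="text/x.enum",
        response_schema={
            "type": "STRING",
            "enum": ["positive", "negative"],  # the model can only return one of these labels
        },
    ),
)
print(response.text)
# Example output: positive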

Considerations

Before you use a response schema, review the following considerations and potential limitations:

  • You can only define and use a response schema with the API. The Google Cloud console doesn't support this feature.
  • The size of your response schema counts toward the input token limit.
  • Structured output supports only certain output formats, such as application/json or text/x.enum. For more information, see the responseMimeType parameter in the Gemini API reference.
  • Structured output supports a subset of the Vertex AI schema reference. For more information, see Supported schema fields.
  • A complex schema can result in an InvalidArgument: 400 error. Complexity might come from long property names, long array length limits, enums with many values, objects with many optional properties, or a combination of these factors.

    If you get this error with a valid schema, try one or more of the following changes to resolve the error:

    • Shorten property or enum names.
    • Flatten nested arrays.
    • Reduce the number of properties with constraints, such as numbers with minimum and maximum limits.
    • Reduce the number of properties with complex constraints, such as properties with complex formats like date-time.
    • Reduce the number of optional properties.
    • Reduce the number of valid values for enums.
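
For illustration, the following sketch shows one way to apply these changes; the property names and enum values are hypothetical. The first schema nests an array inside an array and uses long names, a large enum, and numeric range constraints; the second flattens the nesting and trims the rest.

# Hypothetical schema that is more likely to trigger the complexity error.
complex_schema = {
    "type": "ARRAY",
    "items": {
        "type": "ARRAY",  # array nested inside an array
        "items": {
            "type": "OBJECT",
            "properties": {
                "product_category_classification_label": {
                    "type": "STRING",
                    "enum": [
                        "electronics", "appliances", "furniture", "clothing",
                        "toys", "books", "groceries", "sporting_goods",
                    ],
                },
                "unit_price_in_us_dollars": {
                    "type": "NUMBER", "minimum": 0, "maximum": 100000,
                },
            },
        },
    },
}

# Simpler equivalent: flattened, shorter names, fewer enum values,
# and no range constraints (enforce those in your own code instead).
simplified_schema = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "category": {"type": "STRING", "enum": ["electronics", "furniture", "other"]},
            "price": {"type": "NUMBER"},
        },
        "required": ["category", "price"],
    },
}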

Supported schema fields

Structured output supports the following fields from the Vertex AI schema. If you use an unsupported field, Vertex AI handles your request but ignores the field.

  • anyOf
  • enum: only string enums are supported
  • format
  • items
  • maximum
  • maxItems
  • minimum
  • minItems
  • nullable
  • properties
  • propertyOrdering*
  • required

* propertyOrdering is specifically for structured output and not part of the Vertex AI schema. This field defines the order in which properties are generated. The listed properties must be unique and must be valid keys in the properties dictionary.

For the format field, Vertex AI supports the following values: date, date-time, duration, and time. The OpenAPI Initiative Registry describes the format of each value.
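
For example, a schema sketch like the following (the property names are hypothetical) uses propertyOrdering together with a supported format value, so the date is generated first and formatted as a date string:

# Hedged sketch: a response schema that combines propertyOrdering with format.
event_schema = {
    "type": "OBJECT",
    "properties": {
        "event_date": {"type": "STRING", "format": "date"},  # e.g. "2025-07-04"
        "event_name": {"type": "STRING"},
        "attendee_count": {"type": "INTEGER", "nullable": True},
    },
    "required": ["event_date", "event_name"],
    # Properties are generated in this order; each entry must be a key in "properties".
    "propertyOrdering": ["event_date", "event_name", "attendee_count"],
}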

Before you begin

Before sending your request, do the following:

  • Define a response schema. The schema specifies the structure of the model's output, including field names and the expected data type for each field. Use only the supported fields. All other fields are ignored.
  • Include the schema in the responseSchema field. Do not duplicate the schema in your prompt, as this can lower the quality of the generated output.

Model behavior and response schema

Keep the following points in mind about how the model interprets the schema and prompt:

  • The model uses both the schema and the prompt for context. Use a clear structure and unambiguous field names in your schema so that your intent is clear.
  • Fields are optional by default. The model can choose to populate a field or skip it. To require the model to provide a value for a field, define it as required. If there's insufficient context in the prompt for a required field, the model generates a value based on its training data.
  • If you don't get the results you expect, adjust the prompt or schema. Try adding more context to your prompt. You can also review the model's response without structured output to see how it responds naturally, and then update your schema to better fit the model's typical output structure.

Send a prompt with a response schema

By default, all fields in a schema are optional. To require the model to return a value for a field, define that field as required in your schema.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import HttpOptions

response_schema = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "recipe_name": {"type": "STRING"},
            "ingredients": {"type": "ARRAY", "items": {"type": "STRING"}},
        },
        "required": ["recipe_name", "ingredients"],
    },
}

prompt = """
    List a few popular cookie recipes.
"""

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config={
        "response_mime_type": "application/json",
        "response_schema": response_schema,
    },
)

print(response.text)
# Example output:
# [
#     {
#         "ingredients": [
#             "2 1/4 cups all-purpose flour",
#             "1 teaspoon baking soda",
#             "1 teaspoon salt",
#             "1 cup (2 sticks) unsalted butter, softened",
#             "3/4 cup granulated sugar",
#             "3/4 cup packed brown sugar",
#             "1 teaspoon vanilla extract",
#             "2 large eggs",
#             "2 cups chocolate chips",
#         ],
#         "recipe_name": "Chocolate Chip Cookies",
#     }
# ]

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithRespSchema shows how to use a response schema to generate output in a specific format.
func generateWithRespSchema(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	config := &genai.GenerateContentConfig{
		ResponseMIMEType: "application/json",
		// See the OpenAPI specification for more details and examples:
		//   https://spec.openapis.org/oas/v3.0.3.html#schema-object
		ResponseSchema: &genai.Schema{
			Type: "array",
			Items: &genai.Schema{
				Type: "object",
				Properties: map[string]*genai.Schema{
					"recipe_name": {Type: "string"},
					"ingredients": {
						Type:  "array",
						Items: &genai.Schema{Type: "string"},
					},
				},
				Required: []string{"recipe_name", "ingredients"},
			},
		},
	}
	contents := []*genai.Content{
		{Parts: []*genai.Part{
			{Text: "List a few popular cookie recipes."},
		},
			Role: "user"},
	}
	modelName := "gemini-2.5-flash"

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()

	fmt.Fprintln(w, respText)

	// Example response:
	// [
	//   {
	//     "ingredients": [
	//       "2 1/4 cups all-purpose flour",
	//       "1 teaspoon baking soda",
	//       ...
	//     ],
	//     "recipe_name": "Chocolate Chip Cookies"
	//   },
	//   {
	//     ...
	//   },
	//   ...
	// ]

	return nil
}

REST

Before using any of the request data, make the following replacements:

  • GENERATE_RESPONSE_METHOD: The type of response that you want the model to generate. Choose the method that matches how you want the model's response to be returned:
    • streamGenerateContent: The response is streamed as it's being generated to reduce the perception of latency to a human audience.
    • generateContent: The response is returned after it's fully generated.
  • LOCATION: The region to process the request.
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model that you want to use.
  • ROLE: The role in a conversation associated with the content. Specifying a role is required even in single-turn use cases. Acceptable values include the following:
    • USER: Specifies content that's sent by you.
  • TEXT: The text instructions to include in the prompt.
  • RESPONSE_MIME_TYPE: The format type of the generated candidate text. For a list of supported values, see the responseMimeType parameter in the Gemini API.
  • RESPONSE_SCHEMA: Schema for the model to follow when generating responses. For more information, see the Schema reference.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD

Request JSON body:

{
  "contents": {
    "role": "ROLE",
    "parts": {
      "text": "TEXT"
    }
  },
  "generation_config": {
    "responseMimeType": "RESPONSE_MIME_TYPE",
    "responseSchema": RESPONSE_SCHEMA,
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD" | Select-Object -Expand Content

You should receive a JSON response that contains the model's output in the format defined by your response schema.

Example curl command

LOCATION="us-central1"
MODEL_ID="gemini-2.5-flash"
PROJECT_ID="test-project"
GENERATE_RESPONSE_METHOD="generateContent"

cat << EOF > request.json
{
  "contents": {
    "role": "user",
    "parts": {
      "text": "List a few popular cookie recipes."
    }
  },
  "generation_config": {
    "maxOutputTokens": 2048,
    "responseMimeType": "application/json",
    "responseSchema": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "recipe_name": {
            "type": "string",
          },
        },
        "required": ["recipe_name"],
      },
    }
  }
}
EOF

curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_RESPONSE_METHOD} \
-d '@request.json'

Example schemas for JSON output

The following sections show sample prompts and response schemas for different use cases. A sample model response is included after each code sample.

Forecast the weather for each day of the week

The following example returns a forecast array that contains an object for each day of the week, with properties such as the expected temperature and humidity level for the day. Some properties are set to nullable so the model can return a null value when it doesn't have enough context to generate a meaningful response. This strategy helps reduce hallucinations.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

response_schema = {
    "type": "OBJECT",
    "properties": {
        "forecast": {
            "type": "ARRAY",
            "items": {
                "type": "OBJECT",
                "properties": {
                    "Day": {"type": "STRING", "nullable": True},
                    "Forecast": {"type": "STRING", "nullable": True},
                    "Temperature": {"type": "INTEGER", "nullable": True},
                    "Humidity": {"type": "STRING", "nullable": True},
                    "Wind Speed": {"type": "INTEGER", "nullable": True},
                },
                "required": ["Day", "Temperature", "Forecast", "Wind Speed"],
            },
        }
    },
}

prompt = """
    The week ahead brings a mix of weather conditions.
    Sunday is expected to be sunny with a temperature of 77°F and a humidity level of 50%. Winds will be light at around 10 km/h.
    Monday will see partly cloudy skies with a slightly cooler temperature of 72°F and the winds will pick up slightly to around 15 km/h.
    Tuesday brings rain showers, with temperatures dropping to 64°F and humidity rising to 70%.
    Wednesday may see thunderstorms, with a temperature of 68°F.
    Thursday will be cloudy with a temperature of 66°F and moderate humidity at 60%.
    Friday returns to partly cloudy conditions, with a temperature of 73°F and the Winds will be light at 12 km/h.
    Finally, Saturday rounds off the week with sunny skies, a temperature of 80°F, and a humidity level of 40%. Winds will be gentle at 8 km/h.
"""

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=response_schema,
    ),
)

print(response.text)
# Example output:
# {"forecast": [{"Day": "Sunday", "Forecast": "sunny", "Temperature": 77, "Wind Speed": 10, "Humidity": "50%"},
#   {"Day": "Monday", "Forecast": "partly cloudy", "Temperature": 72, "Wind Speed": 15},
#   {"Day": "Tuesday", "Forecast": "rain showers", "Temperature": 64, "Wind Speed": null, "Humidity": "70%"},
#   {"Day": "Wednesday", "Forecast": "thunderstorms", "Temperature": 68, "Wind Speed": null},
#   {"Day": "Thursday", "Forecast": "cloudy", "Temperature": 66, "Wind Speed": null, "Humidity": "60%"},
#   {"Day": "Friday", "Forecast": "partly cloudy", "Temperature": 73, "Wind Speed": 12},
#   {"Day": "Saturday", "Forecast": "sunny", "Temperature": 80, "Wind Speed": 8, "Humidity": "40%"}]}

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithNullables shows how to use the response schema with nullable values.
func generateWithNullables(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	prompt := `
The week ahead brings a mix of weather conditions.
Sunday is expected to be sunny with a temperature of 77°F and a humidity level of 50%. Winds will be light at around 10 km/h.
Monday will see partly cloudy skies with a slightly cooler temperature of 72°F and the winds will pick up slightly to around 15 km/h.
Tuesday brings rain showers, with temperatures dropping to 64°F and humidity rising to 70%.
Wednesday may see thunderstorms, with a temperature of 68°F.
Thursday will be cloudy with a temperature of 66°F and moderate humidity at 60%.
Friday returns to partly cloudy conditions, with a temperature of 73°F and the Winds will be light at 12 km/h.
Finally, Saturday rounds off the week with sunny skies, a temperature of 80°F, and a humidity level of 40%. Winds will be gentle at 8 km/h.
`
	contents := []*genai.Content{
		{Parts: []*genai.Part{
			{Text: prompt},
		},
			Role: "user"},
	}
	config := &genai.GenerateContentConfig{
		ResponseMIMEType: "application/json",
		// See the OpenAPI specification for more details and examples:
		//   https://spec.openapis.org/oas/v3.0.3.html#schema-object
		ResponseSchema: &genai.Schema{
			Type: "object",
			Properties: map[string]*genai.Schema{
				"forecast": {
					Type: "array",
					Items: &genai.Schema{
						Type: "object",
						Properties: map[string]*genai.Schema{
							"Day":         {Type: "string", Nullable: genai.Ptr(true)},
							"Forecast":    {Type: "string", Nullable: genai.Ptr(true)},
							"Temperature": {Type: "integer", Nullable: genai.Ptr(true)},
							"Humidity":    {Type: "string", Nullable: genai.Ptr(true)},
							"Wind Speed":  {Type: "integer", Nullable: genai.Ptr(true)},
						},
						Required: []string{"Day", "Temperature", "Forecast", "Wind Speed"},
					},
				},
			},
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()

	fmt.Fprintln(w, respText)

	// Example response:
	// {
	// 	"forecast": [
	// 		{"Day": "Sunday", "Forecast": "Sunny", "Temperature": 77, "Wind Speed": 10, "Humidity": "50%"},
	// 		{"Day": "Monday", "Forecast": "Partly Cloudy", "Temperature": 72, "Wind Speed": 15},
	// 		{"Day": "Tuesday", "Forecast": "Rain Showers", "Temperature": 64, "Wind Speed": null, "Humidity": "70%"},
	// 		{"Day": "Wednesday", "Forecast": "Thunderstorms", "Temperature": 68, "Wind Speed": null},
	// 		{"Day": "Thursday", "Forecast": "Cloudy", "Temperature": 66, "Wind Speed": null, "Humidity": "60%"},
	// 		{"Day": "Friday", "Forecast": "Partly Cloudy", "Temperature": 73, "Wind Speed": 12},
	// 		{"Day": "Saturday", "Forecast": "Sunny", "Temperature": 80, "Wind Speed": 8, "Humidity": "40%"}
	// 	]
	// }

	return nil
}

Classify a product

The following example uses an enum, requiring the model to classify an object's type from a list of given values.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What type of instrument is an oboe?",
    config=GenerateContentConfig(
        response_mime_type="text/x.enum",
        response_schema={
            "type": "STRING",
            "enum": ["Percussion", "String", "Woodwind", "Brass", "Keyboard"],
        },
    ),
)

print(response.text)
# Example output:
# Woodwind

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithEnumSchema shows how to use enum schema to generate output.
func generateWithEnumSchema(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{Parts: []*genai.Part{
			{Text: "What type of instrument is an oboe?"},
		}, Role: "user"},
	}
	config := &genai.GenerateContentConfig{
		ResponseMIMEType: "text/x.enum",
		ResponseSchema: &genai.Schema{
			Type: "STRING",
			Enum: []string{"Percussion", "String", "Woodwind", "Brass", "Keyboard"},
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()

	fmt.Fprintln(w, respText)

	// Example response:
	// Woodwind

	return nil
}

Node.js

Install

npm install @google/genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

const {GoogleGenAI, Type} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const responseSchema = {
    type: Type.STRING,
    enum: ['Percussion', 'String', 'Woodwind', 'Brass', 'Keyboard'],
  };

  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: 'What type of instrument is an oboe?',
    config: {
      responseMimeType: 'text/x.enum',
      responseSchema: responseSchema,
    },
  });

  console.log(response.text);

  return response.text;
}

Java

Learn how to install or update the Java SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True


import com.google.genai.Client;
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Schema;
import com.google.genai.types.Type;
import java.util.List;

public class ControlledGenerationWithEnumSchema {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String contents = "What type of instrument is an oboe?";
    String modelId = "gemini-2.5-flash";
    generateContent(modelId, contents);
  }

  // Generates content with an enum response schema
  public static String generateContent(String modelId, String contents) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {

      // Define the response schema with an enum.
      Schema responseSchema =
          Schema.builder()
              .type(Type.Known.STRING)
              .enum_(List.of("Percussion", "String", "Woodwind", "Brass", "Keyboard"))
              .build();

      GenerateContentConfig config =
          GenerateContentConfig.builder()
              .responseMimeType("text/x.enum")
              .responseSchema(responseSchema)
              .build();

      GenerateContentResponse response = client.models.generateContent(modelId, contents, config);

      System.out.print(response.text());
      // Example response:
      // Woodwind
      return response.text();
    }
  }
}