Generate content with function calls

This example demonstrates a text modality scenario with one function and one prompt.

More information

For detailed documentation that includes this code sample, see the Vertex AI documentation on function calling.

Code sample

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI C# API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Cloud.AIPlatform.V1;
using System;
using System.Threading.Tasks;
using Type = Google.Cloud.AIPlatform.V1.Type;
using Value = Google.Protobuf.WellKnownTypes.Value;

public class FunctionCalling
{
    public async Task<string> GenerateFunctionCall(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001")
    {
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        // Define the user's prompt in a Content object that we can reuse in
        // model calls
        var userPromptContent = new Content
        {
            Role = "USER",
            Parts =
            {
                new Part { Text = "What is the weather like in Boston?" }
            }
        };

        // Specify a function declaration and parameters for an API request
        var functionName = "get_current_weather";
        var getCurrentWeatherFunc = new FunctionDeclaration
        {
            Name = functionName,
            Description = "Get the current weather in a given location",
            Parameters = new OpenApiSchema
            {
                Type = Type.Object,
                Properties =
                {
                    ["location"] = new()
                    {
                        Type = Type.String,
                        Description = "Get the current weather in a given location"
                    },
                    ["unit"] = new()
                    {
                        Type = Type.String,
                        Description = "The unit of measurement for the temperature",
                        Enum = {"celsius", "fahrenheit"}
                    }
                },
                Required = { "location" }
            }
        };

        // Send the prompt and instruct the model to generate content using the tool that you just created
        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            GenerationConfig = new GenerationConfig
            {
                Temperature = 0f
            },
            Contents =
            {
                userPromptContent
            },
            Tools =
            {
                new Tool
                {
                    FunctionDeclarations = { getCurrentWeatherFunc }
                }
            }
        };

        GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);

        var functionCall = response.Candidates[0].Content.Parts[0].FunctionCall;
        Console.WriteLine(functionCall);

        string apiResponse = "";

        // Check the function name that the model responded with, and make an API call to an external system
        if (functionCall.Name == functionName)
        {
            // Extract the arguments to use in your API call
            string locationCity = functionCall.Args.Fields["location"].StringValue;

            // Here you can use your preferred method to make an API request to
            // fetch the current weather

            // In this example, we'll use synthetic data to simulate a response
            // payload from an external API
            apiResponse = @"{ ""location"": ""Boston, MA"",
                    ""temperature"": 38, ""description"": ""Partly Cloudy""}";
        }

        // Return the API response to Gemini so it can generate a model response or request another function call
        generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                userPromptContent, // User prompt
                response.Candidates[0].Content, // Function call from the model
                new Content
                {
                    Parts =
                    {
                        new Part
                        {
                            FunctionResponse = new()
                            {
                                Name = functionName,
                                Response = new()
                                {
                                    Fields =
                                    {
                                        { "content", new Value { StringValue = apiResponse } }
                                    }
                                }
                            }
                        }
                    }
                }
            },
            Tools =
            {
                new Tool
                {
                    FunctionDeclarations = { getCurrentWeatherFunc }
                }
            }
        };

        response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);

        string responseText = response.Candidates[0].Content.Parts[0].Text;
        Console.WriteLine(responseText);

        return responseText;
    }
}

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

// functionCallsBasic opens a chat session and sends 2 messages to the model:
// - first, to convert a text into a structured function call request
// - second, to convert a structured function call response into natural language.
// Writes output of second call to w.
func functionCallsBasic(w io.Writer, projectID, location, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"
	ctx := context.Background()
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	// Build an OpenAPI schema, in memory
	params := &genai.Schema{
		Type: genai.TypeObject,
		Properties: map[string]*genai.Schema{
			"location": {
				Type:        genai.TypeString,
				Description: "location",
			},
		},
	}
	fundecl := &genai.FunctionDeclaration{
		Name:        "getCurrentWeather",
		Description: "Get the current weather in a given location",
		Parameters:  params,
	}
	model.Tools = []*genai.Tool{
		{FunctionDeclarations: []*genai.FunctionDeclaration{fundecl}},
	}

	chat := model.StartChat()

	resp, err := chat.SendMessage(ctx, genai.Text("What's the weather like in Boston?"))
	if err != nil {
		return err
	}
	if len(resp.Candidates) == 0 ||
		len(resp.Candidates[0].Content.Parts) == 0 {
		return errors.New("empty response from model")
	}

	// The model has returned a function call to the declared function `getCurrentWeather`
	// with a value for the argument `location`.
	_, err = json.MarshalIndent(resp.Candidates[0].Content.Parts[0], "", "  ")
	if err != nil {
		return fmt.Errorf("json.MarshalIndent: %w", err)
	}

	// In this example, we'll use synthetic data to simulate a response payload from an external API
	weather := map[string]string{
		"location":    "Boston",
		"temperature": "38",
		"description": "Partly Cloudy",
		"icon":        "partly-cloudy",
		"humidity":    "65",
		"wind":        "{\"speed\": \"10\", \"direction\": \"NW\"}",
	}
	weatherJSON, _ := json.Marshal(weather)

	// Create a function call response, to simulate the result of a call to a
	// real service
	funresp := &genai.FunctionResponse{
		Name: "getCurrentWeather",
		Response: map[string]any{
			"currentWeather": weather_json,
		},
	}
	_, err = json.MarshalIndent(funresp, "", "  ")
	if err != nil {
		return fmt.Errorf("json.MarshalIndent: %w", err)
	}

	// And provide the function call response to the model
	resp, err = chat.SendMessage(ctx, funresp)
	if err != nil {
		return err
	}
	if len(resp.Candidates) == 0 ||
		len(resp.Candidates[0].Content.Parts) == 0 {
		return errors.New("empty response from model")
	}

	// The model has taken the function call response as input, and has
	// reformulated the response to the user.
	content, err := json.MarshalIndent(resp.Candidates[0].Content.Parts[0], "", "  ")
	if err != nil {
		return fmt.Errorf("json.MarshalIndent: %w", err)
	}

	fmt.Fprintf(w, "generated summary:\n%s\n", content)
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.FunctionDeclaration;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.Schema;
import com.google.cloud.vertexai.api.Tool;
import com.google.cloud.vertexai.api.Type;
import com.google.cloud.vertexai.generativeai.ChatSession;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;

public class FunctionCalling {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    String promptText = "What's the weather like in Paris?";

    whatsTheWeatherLike(projectId, location, modelName, promptText);
  }

  // A request involving the interaction with an external tool
  public static String whatsTheWeatherLike(String projectId, String location,
                                           String modelName, String promptText)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {

      FunctionDeclaration functionDeclaration = FunctionDeclaration.newBuilder()
          .setName("getCurrentWeather")
          .setDescription("Get the current weather in a given location")
          .setParameters(
              Schema.newBuilder()
                  .setType(Type.OBJECT)
                  .putProperties("location", Schema.newBuilder()
                      .setType(Type.STRING)
                      .setDescription("location")
                      .build()
                  )
                  .addRequired("location")
                  .build()
          )
          .build();

      System.out.println("Function declaration:");
      System.out.println(functionDeclaration);

      // Add the function to a "tool"
      Tool tool = Tool.newBuilder()
          .addFunctionDeclarations(functionDeclaration)
          .build();

      // Start a chat session from a model, with the use of the declared function.
      GenerativeModel model = new GenerativeModel(modelName, vertexAI)
          .withTools(Arrays.asList(tool));
      ChatSession chat = model.startChat();

      System.out.println(String.format("Ask the question: %s", promptText));
      GenerateContentResponse response = chat.sendMessage(promptText);

      // The model will most likely return a function call to the declared
      // function `getCurrentWeather` with "Paris" as the value for the
      // argument `location`.
      System.out.println("\nPrint response: ");
      System.out.println(ResponseHandler.getContent(response));

      // Provide an answer to the model so that it knows what the result
      // of a "function call" is.
      Content content =
          ContentMaker.fromMultiModalData(
              PartMaker.fromFunctionResponse(
                  "getCurrentWeather",
                  Collections.singletonMap("currentWeather", "sunny")));
      System.out.println("Provide the function response: ");
      System.out.println(content);
      response = chat.sendMessage(content);

      // See what the model replies now
      System.out.println("Print response: ");
      String finalAnswer = ResponseHandler.getText(response);
      System.out.println(finalAnswer);

      return finalAnswer;
    }
  }
}

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
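
The client library picks up these credentials automatically, so no credential-handling code is needed in the sample itself. If you want to confirm that ADC is available before running the sample, a minimal check such as the following can help (a sketch only; it assumes the google-auth package, which is installed as a dependency of the Vertex AI SDK for Python):

# Sketch: verify that Application Default Credentials can be found.
# google.auth.default() raises DefaultCredentialsError if ADC is not set up.
import google.auth

credentials, project_id = google.auth.default()
print(f"Application Default Credentials found for project: {project_id}")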

import vertexai
from vertexai.generative_models import (
    Content,
    FunctionDeclaration,
    GenerationConfig,
    GenerativeModel,
    Part,
    Tool,
)

# Initialize Vertex AI
# TODO(developer): update with your project ID
PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

# Initialize Gemini model
model = GenerativeModel("gemini-1.5-flash-002")

# Define the user's prompt in a Content object that we can reuse in model calls
user_prompt_content = Content(
    role="user",
    parts=[
        Part.from_text("What is the weather like in Boston?"),
    ],
)

# Specify a function declaration and parameters for an API request
function_name = "get_current_weather"
get_current_weather_func = FunctionDeclaration(
    name=function_name,
    description="Get the current weather in a given location",
    # Function parameters are specified in OpenAPI JSON schema format
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string", "description": "Location"}},
    },
)

# Define a tool that includes the above get_current_weather_func
weather_tool = Tool(
    function_declarations=[get_current_weather_func],
)

# Send the prompt and instruct the model to generate content using the Tool that you just created
response = model.generate_content(
    user_prompt_content,
    generation_config=GenerationConfig(temperature=0),
    tools=[weather_tool],
)
function_call = response.candidates[0].function_calls[0]
print(function_call)

api_response = ""

# Check the function name that the model responded with, and make an API call to an external system
if function_call.name == function_name:
    # Extract the arguments to use in your API call
    location = function_call.args["location"]  # noqa: F841

    # Here you can use your preferred method to make an API request to fetch the current weather, for example:
    # api_response = requests.post(weather_api_url, data={"location": location})

    # In this example, we'll use synthetic data to simulate a response payload from an external API
    api_response = """{ "location": "Boston, MA", "temperature": 38, "description": "Partly Cloudy",
                    "icon": "partly-cloudy", "humidity": 65, "wind": { "speed": 10, "direction": "NW" } }"""

# Return the API response to Gemini so it can generate a model response or request another function call
response = model.generate_content(
    [
        user_prompt_content,  # User prompt
        response.candidates[0].content,  # Function call response
        Content(
            parts=[
                Part.from_function_response(
                    name=function_name,
                    response={
                        "content": api_response,  # Return the API response to Gemini
                    },
                ),
            ],
        ),
    ],
    tools=[weather_tool],
)

# Get the model response
print(response.text)
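
In the sample above, api_response is synthetic data that simulates a weather service payload. In a real application you would replace that block with a call to an actual weather API, as the comment in the sample suggests. The sketch below shows one way to do that; weather_api_url is a hypothetical endpoint, and the third-party requests package is an assumption, not part of the Vertex AI SDK:

# Sketch only: the endpoint and the response format depend on the weather
# service you use; adjust the request accordingly.
import requests


def fetch_current_weather(location: str, weather_api_url: str) -> str:
    # Call the external weather service and return its JSON payload as a string,
    # so it can be passed back to Gemini with Part.from_function_response.
    resp = requests.get(weather_api_url, params={"location": location}, timeout=10)
    resp.raise_for_status()
    return resp.text


# Example usage with a hypothetical URL:
# api_response = fetch_current_weather(location, "https://example.com/weather")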

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.