Generate text responses using the Gemini API with external function calls in a chat scenario

Generate text responses using the Gemini API with external function calls. This example demonstrates a chat scenario with two functions and two sequential prompts.

Code sample

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const {
  VertexAI,
  FunctionDeclarationSchemaType,
} = require('@google-cloud/vertexai');

const functionDeclarations = [
  {
    function_declarations: [
      {
        name: 'get_current_weather',
        description: 'get weather in a given location',
        parameters: {
          type: FunctionDeclarationSchemaType.OBJECT,
          properties: {
            location: {type: FunctionDeclarationSchemaType.STRING},
            unit: {
              type: FunctionDeclarationSchemaType.STRING,
              enum: ['celsius', 'fahrenheit'],
            },
          },
          required: ['location'],
        },
      },
    ],
  },
];

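// The functionResponse part below stands in for the output of a real
// weather API; in an application, `content` would hold the data your
// get_current_weather implementation actually returned.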
const functionResponseParts = [
  {
    functionResponse: {
      name: 'get_current_weather',
      response: {name: 'get_current_weather', content: {weather: 'super nice'}},
    },
  },
];

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function functionCallingStreamChat(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // Create a chat session and pass your function declarations
  const chat = generativeModel.startChat({
    tools: functionDeclarations,
  });

  const chatInput1 = 'What is the weather in Boston?';

  // This should include a functionCall response from the model
  const result1 = await chat.sendMessageStream(chatInput1);
  for await (const item of result1.stream) {
    console.log(item.candidates[0]);
  }
  await result1.response;
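
  // In a real application, you would read the functionCall from the
  // aggregated response here, execute the matching function, and use
  // its output to build the functionResponse sent in the next message.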

  // Send a follow up message with a FunctionResponse
  const result2 = await chat.sendMessageStream(functionResponseParts);
  for await (const item of result2.stream) {
    console.log(item.candidates[0]);
  }

  // This should include a text response from the model using the response content
  // provided above
  const response2 = await result2.response;
  console.log(response2.candidates[0].content.parts[0].text);
}
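
// Run the sample. Replace the placeholder defaults above with your own
// project ID (and, if needed, location and model) before running.
functionCallingStreamChat().catch(console.error);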

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
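
As a quick check that Application Default Credentials resolve (for example, after running gcloud auth application-default login), here is a minimal sketch using the google-auth library, which is installed alongside the Vertex AI SDK:

import google.auth

# Raises DefaultCredentialsError if no credentials are configured.
credentials, detected_project = google.auth.default()
print(f"Authenticated for project: {detected_project}")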

import vertexai

from vertexai.generative_models import (
    FunctionDeclaration,
    GenerationConfig,
    GenerativeModel,
    Part,
    Tool,
)

# TODO(developer): Update & uncomment below line
# PROJECT_ID = "your-project-id"

# Initialize Vertex AI
vertexai.init(project=PROJECT_ID, location="us-central1")

# Specify a function declaration and parameters for an API request
get_product_sku = "get_product_sku"
get_product_sku_func = FunctionDeclaration(
    name=get_product_sku,
    description="Get the SKU for a product",
    # Function parameters are specified in OpenAPI JSON schema format
    parameters={
        "type": "object",
        "properties": {
            "product_name": {"type": "string", "description": "Product name"}
        },
    },
)

# Specify another function declaration and parameters for an API request
get_store_location_func = FunctionDeclaration(
    name="get_store_location",
    description="Get the location of the closest store",
    # Function parameters are specified in JSON schema format
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string", "description": "Location"}},
    },
)

# Define a tool that includes the above functions
retail_tool = Tool(
    function_declarations=[
        get_product_sku_func,
        get_store_location_func,
    ],
)

# Initialize Gemini model
model = GenerativeModel(
    model_name="gemini-1.5-flash-001",
    generation_config=GenerationConfig(temperature=0),
    tools=[retail_tool],
)

# Start a chat session
chat = model.start_chat()

# Send a prompt for the first conversation turn that should invoke the get_product_sku function
response = chat.send_message("Do you have the Pixel 8 Pro in stock?")

function_call = response.candidates[0].function_calls[0]
print(function_call)

# Check the function name that the model responded with, and make an API call to an external system
if function_call.name == get_product_sku:
    # Extract the arguments to use in your API call
    product_name = function_call.args["product_name"]  # noqa: F841

    # Here you can use your preferred method to make an API request to retrieve the product SKU, as in:
    # api_response = requests.post(product_api_url, data={"product_name": product_name})

    # In this example, we'll use synthetic data to simulate a response payload from an external API
    api_response = {"sku": "GA04834-US", "in_stock": "Yes"}

# Return the API response to Gemini, so it can generate a model response or request another function call
response = chat.send_message(
    Part.from_function_response(
        name=get_product_sku,
        response={
            "content": api_response,
        },
    ),
)
# Extract the text from the model response
print(response.text)

# Send a prompt for the second conversation turn that should invoke the get_store_location function
response = chat.send_message(
    "Is there a store in Mountain View, CA that I can visit to try it out?"
)

function_call = response.candidates[0].function_calls[0]
print(function_call)

# Check the function name that the model responded with, and make an API call to an external system
if function_call.name == "get_store_location":
    # Extract the arguments to use in your API call
    location = function_call.args["location"]  # noqa: F841

    # Here you can use your preferred method to make an API request to retrieve store location closest to the user, as in:
    # api_response = requests.post(store_api_url, data={"location": location})

    # In this example, we'll use synthetic data to simulate a response payload from an external API
    api_response = {"store": "2000 N Shoreline Blvd, Mountain View, CA 94043, US"}

# Return the API response to Gemini, so it can generate a model response or request another function call
response = chat.send_message(
    Part.from_function_response(
        name="get_store_location",
        response={
            "content": api_response,
        },
    ),
)

# Extract the text from the model response
print(response.text)
# Example response:
# name: "get_product_sku"
# args {
#   fields { key: "product_name" value { string_value: "Pixel 8 Pro" } }
# }
# Yes, we have the Pixel 8 Pro in stock.
# name: "get_store_location"
# args {
#   fields { key: "location" value { string_value: "Mountain View, CA" } }
# }
# Yes, there is a store located at 2000 N Shoreline Blvd, Mountain View, CA 94043, US.
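
The if-blocks above dispatch on the function name that the model returned. As more functions are declared, the same pattern generalizes to a handler table plus a loop that keeps answering function calls until the model replies with plain text. The sketch below reuses chat, Part, and the synthetic api_response payloads from the sample; the handler functions are illustrative stand-ins for real API calls:

def get_product_sku_handler(args):
    # Stand-in for a real product API request.
    return {"sku": "GA04834-US", "in_stock": "Yes"}


def get_store_location_handler(args):
    # Stand-in for a real store-locator API request.
    return {"store": "2000 N Shoreline Blvd, Mountain View, CA 94043, US"}


handlers = {
    "get_product_sku": get_product_sku_handler,
    "get_store_location": get_store_location_handler,
}


def send_and_resolve(chat, prompt):
    response = chat.send_message(prompt)
    # Keep satisfying function calls until the model responds with text.
    while response.candidates[0].function_calls:
        function_call = response.candidates[0].function_calls[0]
        api_response = handlers[function_call.name](dict(function_call.args))
        response = chat.send_message(
            Part.from_function_response(
                name=function_call.name,
                response={"content": api_response},
            )
        )
    return response.text


print(send_and_resolve(chat, "Do you have the Pixel 8 Pro in stock?"))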

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.