Large Language Models (LLMs) are powerful at solving many types of problems. However, they are constrained by the following limitations:
- They are frozen after training, leading to stale knowledge.
- They can't query or modify external data.
Function calling can address these shortcomings. Function calling is sometimes referred to as tool use because it allows the model to use external tools such as APIs and functions.
When you submit a prompt to the LLM, you also provide the model with a set of tools that it can use to respond to the user's prompt. For example, you could provide a `get_weather` function that takes a `location` parameter and returns information about the weather conditions at that location.

While processing a prompt, the model can choose to delegate certain data processing tasks to the functions that you identify. The model does not call the functions directly. Instead, the model provides structured data output that includes the function to call and the parameter values to use. For example, for the prompt "What is the weather like in Boston?", the model can delegate processing to the `get_weather` function and provide the `location` parameter value `Boston, MA`.
You can use the structured output from the model to invoke external APIs. For example, you could connect to a weather service API, provide the location `Boston, MA`, and receive information about temperature, cloud cover, and wind conditions.
You can then provide the API output back to the model, allowing it to complete its response to the prompt. For the weather example, the model might provide the following response: "It is currently 38 degrees Fahrenheit in Boston, MA with partly cloudy skies."
## Supported models

The following models provide support for function calling:

| Model | Version | Function calling launch stage | Support for parallel function calling | Support for forced function calling |
|---|---|---|---|---|
| Gemini 1.0 Pro | all versions | General Availability | No | No |
| Gemini 1.5 Flash | all versions | General Availability | Yes | Yes |
| Gemini 1.5 Pro | all versions | General Availability | Yes | Yes |
## Use cases of function calling

You can use function calling for the following tasks:

| Use Case | Example description | Example link |
|---|---|---|
| Integrate with external APIs | Get weather information using a meteorological API | Notebook tutorial |
| | Convert addresses to latitude/longitude coordinates | Notebook tutorial |
| | Convert currencies using a currency exchange API | Codelab |
| Build advanced chatbots | Answer customer questions about products and services | Notebook tutorial |
| | Create an assistant to answer financial and news questions about companies | Notebook tutorial |
| Structure and control function calls | Extract structured entities from raw log data | Notebook tutorial |
| | Extract single or multiple parameters from user input | Notebook tutorial |
| | Handle lists and nested data structures in function calls | Notebook tutorial |
| Handle function calling behavior | Handle parallel function calls and responses | Notebook tutorial |
| | Manage when and which functions the model can call | Notebook tutorial |
| Query databases with natural language | Convert natural language questions into SQL queries for BigQuery | Sample app |
| Multimodal function calling | Use images, videos, audio, and PDFs as input to trigger function calls | Notebook tutorial |
Here are some more use cases:

- Interpret voice commands: Create functions that correspond with in-vehicle tasks. For example, you can create functions that turn on the radio or activate the air conditioning. Send audio files of the user's voice commands to the model, and ask the model to convert the audio into text and identify the function that the user wants to call.
- Automate workflows based on environmental triggers: Create functions to represent processes that can be automated. Provide the model with data from environmental sensors and ask it to parse and process the data to determine whether one or more of the workflows should be activated. For example, a model could process temperature data in a warehouse and choose to activate a sprinkler function.
- Automate the assignment of support tickets: Provide the model with support tickets, logs, and context-aware rules. Ask the model to process all of this information to determine who the ticket should be assigned to. Call a function to assign the ticket to the person suggested by the model.
- Retrieve information from a knowledge base: Create functions that retrieve academic articles on a given subject and summarize them. Enable the model to answer questions about academic subjects and provide citations for its answers.
## How to create a function calling application
To enable a user to interface with the model and use function calling, you must create code that performs the following tasks:
- Set up your environment.
- Define and describe a set of available functions using function declarations.
- Submit a user's prompt and the function declarations to the model.
- Invoke a function using the structured data output from the model.
- Provide the function output to the model.
You can create an application that manages all of these tasks. This application can be a text chatbot, a voice agent, an automated workflow, or any other program.
You can use function calling to generate a single text response or to support a chat session. Ad hoc text responses are useful for specific business tasks, including code generation. Chat sessions are useful in freeform, conversational scenarios, where a user is likely to ask follow-up questions.
If you use function calling to generate a single response, you must provide the model with the full context of the interaction. On the other hand, if you use function calling in the context of a chat session, the session stores the context for you and includes it in every model request. In both cases, Vertex AI stores the history of the interaction on the client side.
This guide demonstrates how you can use function calling to generate a single text response. For an end-to-end sample, see Text examples. To learn how to use function calling to support a chat session, see Chat examples.
### Step 1: Set up your environment

Import the required modules and initialize the model:

```python
import vertexai
from vertexai.generative_models import (
    Content,
    FunctionDeclaration,
    GenerationConfig,
    GenerativeModel,
    Part,
    Tool,
    ToolConfig,
)

# Initialize Vertex AI
# TODO(developer): Update and uncomment the line below
# PROJECT_ID = 'your-project-id'
vertexai.init(project=PROJECT_ID, location="us-central1")

# Initialize Gemini model
model = GenerativeModel(model_name="gemini-1.5-flash-002")
```
### Step 2: Declare a set of functions

The application must declare a set of functions that the model can use to process the prompt. The maximum number of function declarations that can be provided with a request is 128.

You must provide function declarations in a schema format that's compatible with the OpenAPI schema. Vertex AI offers limited support of the OpenAPI schema. The following attributes are supported: `type`, `nullable`, `required`, `format`, `description`, `properties`, `items`, `enum`. The following attributes are not supported: `default`, `optional`, `maximum`, `oneOf`. For best practices related to function declarations, including tips for names and descriptions, see Best practices.

If you use the REST API, specify the schema using JSON. If you use the Vertex AI SDK for Python, you can specify the schema either manually, using a Python dictionary, or automatically, with the `from_func` helper function.
#### JSON

```json
{
  "contents": ...,
  "tools": [
    {
      "function_declarations": [
        {
          "name": "find_movies",
          "description": "find movie titles currently playing in theaters based on any description, genre, title words, etc.",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616"
              },
              "description": {
                "type": "string",
                "description": "Any kind of description including category or genre, title words, attributes, etc."
              }
            },
            "required": ["description"]
          }
        },
        {
          "name": "find_theaters",
          "description": "find theaters based on location and optionally movie title which is currently playing in theaters",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616"
              },
              "movie": {
                "type": "string",
                "description": "Any movie title"
              }
            },
            "required": ["location"]
          }
        },
        {
          "name": "get_showtimes",
          "description": "Find the start times for movies playing in a specific theater",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616"
              },
              "movie": {
                "type": "string",
                "description": "Any movie title"
              },
              "theater": {
                "type": "string",
                "description": "Name of the theater"
              },
              "date": {
                "type": "string",
                "description": "Date for requested showtime"
              }
            },
            "required": ["location", "movie", "theater", "date"]
          }
        }
      ]
    }
  ]
}
```
#### Python dictionary

The following function declaration takes a single `string` parameter:

```python
function_name = "get_current_weather"
get_current_weather_func = FunctionDeclaration(
    name=function_name,
    description="Get the current weather in a given location",
    # Function parameters are specified in JSON schema format
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city name of the location for which to get the weather.",
            }
        },
    },
)
```
The following function declaration takes both object and array parameters:

```python
extract_sale_records_func = FunctionDeclaration(
    name="extract_sale_records",
    description="Extract sale records from a document.",
    parameters={
        "type": "object",
        "properties": {
            "records": {
                "type": "array",
                "description": "A list of sale records",
                "items": {
                    "description": "Data for a sale record",
                    "type": "object",
                    "properties": {
                        "id": {"type": "integer", "description": "The unique id of the sale."},
                        "date": {"type": "string", "description": "Date of the sale, in the format of MMDDYY, e.g., 031023"},
                        "total_amount": {"type": "number", "description": "The total amount of the sale."},
                        "customer_name": {"type": "string", "description": "The name of the customer, including first name and last name."},
                        "customer_contact": {"type": "string", "description": "The phone number of the customer, e.g., 650-123-4567."},
                    },
                    "required": ["id", "date", "total_amount"],
                },
            },
        },
        "required": ["records"],
    },
)
```
#### Python from function

The following code sample declares a function that multiplies an array of numbers and uses `from_func` to generate the `FunctionDeclaration` schema:

```python
# Define a function. Could be a local function, or you can import the
# requests library to call an API.
def multiply_numbers(numbers):
    """
    Calculates the product of all numbers in an array.

    Args:
        numbers: An array of numbers to be multiplied.

    Returns:
        The product of all the numbers. If the array is empty, returns 1.
    """
    if not numbers:  # Handle empty array
        return 1

    product = 1
    for num in numbers:
        product *= num

    return product

multiply_number_func = FunctionDeclaration.from_func(multiply_numbers)

'''
multiply_number_func contains the following schema:

name: "multiply_numbers"
description: "Calculates the product of all numbers in an array."
parameters {
  type_: OBJECT
  properties {
    key: "numbers"
    value {
      description: "An array of numbers to be multiplied."
      title: "Numbers"
    }
  }
  required: "numbers"
  description: "Calculates the product of all numbers in an array."
  title: "multiply_numbers"
}
'''
```
### Step 3: Submit the prompt and function declarations to the model
When the user provides a prompt, the application must provide the model with the user prompt and the function declarations. To configure how the model generates results, the application can provide the model with a generation configuration. To configure how the model uses the function declarations, the application can provide the model with a tool configuration.
#### Define the user prompt

The following is an example of a user prompt: "What is the weather like in Boston?"

The following is an example of how you can define the user prompt:

```python
# Define the user's prompt in a Content object that we can reuse in model calls
user_prompt_content = Content(
    role="user",
    parts=[
        Part.from_text("What is the weather like in Boston?"),
    ],
)
```
For best practices related to the user prompt, see Best practices - User prompt.
#### Generation configuration

The model can generate different results for different parameter values. The temperature parameter controls the degree of randomness in this generation. Lower temperatures are good for functions that require deterministic parameter values, while higher temperatures are good for functions with parameters that accept more diverse or creative values. A temperature of `0` is deterministic. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. To learn more, see Gemini API.

To set this parameter, submit a generation configuration (`generation_config`) along with the prompt and the function declarations. You can update the `temperature` parameter during a chat conversation using the Vertex AI API and an updated `generation_config`. For an example of setting the `temperature` parameter, see How to submit the prompt and the function declarations.

For best practices related to the generation configuration, see Best practices - Generation configuration.
Tool configuration
You can place some constraints on how the model should use the function declarations that you provide it with. For example, instead of allowing the model to choose between a natural language response and a function call, you can force it to only predict function calls ("forced function calling" or "function calling with controlled generation"). You can also choose to provide the model with a full set of function declarations, but restrict its responses to a subset of these functions.
To place these constraints, submit a tool configuration (tool_config
) along
with the prompt and the function declarations. In the configuration, you can
specify one of the following modes:
Mode | Description |
---|---|
AUTO |
The default model behavior. The model decides whether to predict function calls or a natural language response. |
ANY |
The model is constrained to always predict a function call. If allowed_function_names is not provided, the model picks from all of the available function declarations. If allowed_function_names is provided, the model picks from the set of allowed functions. |
NONE |
The model must not predict function calls. This behaviour is equivalent to a model request without any associated function declarations. |
For a list of models that support the ANY
mode ("forced function calling"),
see supported models.
To learn more, see Function Calling API.
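For illustration, a request that forbids function calls for a single turn could configure the `NONE` mode as in the following sketch. It assumes a `tool` object like the one defined in the next example; the `ANY` mode is shown there.

```python
# Force a plain text answer for this request; no function calls are predicted.
no_tools_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.NONE,
    )
)

response = model.generate_content(
    user_prompt_content,
    tools=[tool],  # Declarations can still be attached; NONE disables their use
    tool_config=no_tools_config,
)
```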
#### How to submit the prompt and the function declarations

The following is an example of how you can submit the prompt and the function declarations to the model, and constrain the model to predict only `get_current_weather` function calls:

```python
# Define a tool that includes some of the functions that we declared earlier
tool = Tool(
    function_declarations=[
        get_current_weather_func,
        extract_sale_records_func,
        multiply_number_func,
    ],
)

# Send the prompt and instruct the model to generate content using the Tool
# object that you just created
response = model.generate_content(
    user_prompt_content,
    generation_config=GenerationConfig(temperature=0),
    tools=[tool],
    tool_config=ToolConfig(
        function_calling_config=ToolConfig.FunctionCallingConfig(
            # ANY mode forces the model to predict only function calls
            mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
            # Allowed function calls to predict when the mode is ANY. If empty,
            # any of the provided function calls will be predicted.
            allowed_function_names=["get_current_weather"],
        )
    ),
)
```
If the model determines that it needs the output of a particular function, the response that the application receives from the model contains the function name and the parameter values that the function should be called with.
The following is an example of a model response to the user prompt "What is the weather like in Boston?". The model proposes calling the `get_current_weather` function with the parameter `Boston, MA`:

```
candidates {
  content {
    role: "model"
    parts {
      function_call {
        name: "get_current_weather"
        args {
          fields {
            key: "location"
            value {
              string_value: "Boston, MA"
            }
          }
        }
      }
    }
  }
}
...
```
For prompts such as "Get weather details in New Delhi and San Francisco?", the model may propose several parallel function calls. To learn more, see Parallel function calling example.
### Step 4: Invoke an external API

If the application receives a function name and parameter values from the model, the application must connect to an external API and call the function.

The following example uses synthetic data to simulate a response payload from an external API:

```python
# Check the function name that the model responded with, and make an API call
# to an external system
if response.candidates[0].function_calls[0].name == "get_current_weather":
    # Extract the arguments to use in your API call
    location = response.candidates[0].function_calls[0].args["location"]

    # Here you can use your preferred method to make an API request to fetch
    # the current weather, for example:
    # api_response = requests.post(weather_api_url, data={"location": location})

    # In this example, we'll use synthetic data to simulate a response payload
    # from an external API
    api_response = """{ "location": "Boston, MA", "temperature": 38, "description": "Partly Cloudy",
                    "icon": "partly-cloudy", "humidity": 65, "wind": { "speed": 10, "direction": "NW" } }"""
```
For best practices related to API invocation, see Best practices - API invocation.
### Step 5: Provide the function's output to the model

After the application receives a response from the external API, the application must provide this response to the model. The following is an example of how you can do this using Python:

```python
response = model.generate_content(
    [
        user_prompt_content,  # User prompt
        response.candidates[0].content,  # Function call response
        Content(
            parts=[
                Part.from_function_response(
                    name="get_current_weather",
                    response={
                        "content": api_response,  # Return the API response to Gemini
                    },
                )
            ],
        ),
    ],
    tools=[tool],  # Reuse the Tool object defined in Step 3
)

# Get the model summary response
summary = response.text
```
If the model had proposed several parallel function calls, the application must provide all of the responses back to the model. To learn more, see Parallel function calling example.
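With the Vertex AI SDK for Python, providing every response in one follow-up request could look like the following sketch. Here, `new_delhi_result` and `san_francisco_result` are hypothetical outputs of your own API calls, one per proposed function call and in the same order.

```python
# Hypothetical API results, one per proposed function call, in the same order.
api_results = [new_delhi_result, san_francisco_result]

function_response_parts = [
    Part.from_function_response(
        name=function_call.name,
        response={"content": api_result},
    )
    for function_call, api_result in zip(
        response.candidates[0].function_calls, api_results
    )
]

response = model.generate_content(
    [
        user_prompt_content,
        response.candidates[0].content,  # Contains the parallel function calls
        Content(parts=function_response_parts),
    ],
    tools=[tool],
)
```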
The model may determine that the output of another function is necessary for responding to the prompt. In this case, the response that the application receives from the model contains another function name and another set of parameter values.
If the model determines that the API response is sufficient for responding to the user's prompt, it creates a natural language response and returns it to the application. In this case, the application must pass the response back to the user. The following is an example of a response:
It is currently 38 degrees Fahrenheit in Boston, MA with partly cloudy skies. The humidity is 65% and the wind is blowing at 10 mph from the northwest.
## Examples of function calling

### Text examples
You can use function calling to generate a single text response. Ad hoc text responses are useful for specific business tasks, including code generation.
If you use function calling to generate a single response, you must provide the model with the full context of the interaction. Vertex AI stores the history of the interaction on the client side.
#### Python

This example demonstrates a text scenario with one function and one prompt. It uses the `GenerativeModel` class and its methods. For more information about using the Vertex AI SDK for Python with Gemini multimodal models, see Introduction to multimodal classes in the Vertex AI SDK for Python.

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
#### Node.js

This example demonstrates a text scenario with one function and one prompt.

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
#### Go

This example demonstrates a text scenario with one function and one prompt.

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
#### C#

This example demonstrates a text scenario with one function and one prompt.

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI C# API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
#### REST

This example demonstrates a text scenario with three functions and one prompt. In this example, you call the generative AI model twice.

- In the first call, you provide the model with the prompt and the function declarations.
- In the second call, you provide the model with the API response.

First model request

The request must define a prompt in the `text` parameter. This example defines the following prompt: "Which theaters in Mountain View show the Barbie movie?".

The request must also define a tool (`tools`) with a set of function declarations (`functionDeclarations`). These function declarations must be specified in a format that's compatible with the OpenAPI schema. This example defines the following functions:

- `find_movies` finds movie titles playing in theaters.
- `find_theaters` finds theaters based on location.
- `get_showtimes` finds the start times for movies playing in a specific theater.

To learn more about the parameters of the model request, see Gemini API.

Replace `my-project` with the name of your Google Cloud project.

```sh
PROJECT_ID=my-project
MODEL_ID=gemini-1.0-pro
API=streamGenerateContent
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:${API}" \
  -d '{ "contents": { "role": "user", "parts": { "text": "Which theaters in Mountain View show the Barbie movie?" } }, "tools": [ { "function_declarations": [ { "name": "find_movies", "description": "find movie titles currently playing in theaters based on any description, genre, title words, etc.", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" }, "description": { "type": "string", "description": "Any kind of description including category or genre, title words, attributes, etc." } }, "required": [ "description" ] } }, { "name": "find_theaters", "description": "find theaters based on location and optionally movie title which is currently playing in theaters", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" }, "movie": { "type": "string", "description": "Any movie title" } }, "required": [ "location" ] } }, { "name": "get_showtimes", "description": "Find the start times for movies playing in a specific theater", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" }, "movie": { "type": "string", "description": "Any movie title" }, "theater": { "type": "string", "description": "Name of the theater" }, "date": { "type": "string", "description": "Date for requested showtime" } }, "required": [ "location", "movie", "theater", "date" ] } } ] } ] }'
```
For the prompt "Which theaters in Mountain View show the Barbie movie?", the model might return the `find_theaters` function with the parameters `Barbie` and `Mountain View, CA`.

Response to first model request:

```json
[{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "functionCall": {
              "name": "find_theaters",
              "args": {
                "movie": "Barbie",
                "location": "Mountain View, CA"
              }
            }
          }
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [
        { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE" }
      ]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 9,
    "totalTokenCount": 9
  }
}]
```
Second model request

This example uses synthetic data instead of calling the external API. There are two results, each with two parameters (`name` and `address`):

- `name`: `AMC Mountain View 16`, `address`: `2000 W El Camino Real, Mountain View, CA 94040`
- `name`: `Regal Edwards 14`, `address`: `245 Castro St, Mountain View, CA 94040`

Replace `my-project` with the name of your Google Cloud project.

```sh
PROJECT_ID=my-project
MODEL_ID=gemini-1.0-pro
API=streamGenerateContent
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:${API}" \
  -d '{ "contents": [{ "role": "user", "parts": [{ "text": "Which theaters in Mountain View show the Barbie movie?" }] }, { "role": "model", "parts": [{ "functionCall": { "name": "find_theaters", "args": { "location": "Mountain View, CA", "movie": "Barbie" } } }] }, { "parts": [{ "functionResponse": { "name": "find_theaters", "response": { "name": "find_theaters", "content": { "movie": "Barbie", "theaters": [{ "name": "AMC Mountain View 16", "address": "2000 W El Camino Real, Mountain View, CA 94040" }, { "name": "Regal Edwards 14", "address": "245 Castro St, Mountain View, CA 94040" }] } } } }] }], "tools": [{ "functionDeclarations": [{ "name": "find_movies", "description": "find movie titles currently playing in theaters based on any description, genre, title words, etc.", "parameters": { "type": "OBJECT", "properties": { "location": { "type": "STRING", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" }, "description": { "type": "STRING", "description": "Any kind of description including category or genre, title words, attributes, etc." } }, "required": ["description"] } }, { "name": "find_theaters", "description": "find theaters based on location and optionally movie title which is currently playing in theaters", "parameters": { "type": "OBJECT", "properties": { "location": { "type": "STRING", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" }, "movie": { "type": "STRING", "description": "Any movie title" } }, "required": ["location"] } }, { "name": "get_showtimes", "description": "Find the start times for movies playing in a specific theater", "parameters": { "type": "OBJECT", "properties": { "location": { "type": "STRING", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" }, "movie": { "type": "STRING", "description": "Any movie title" }, "theater": { "type": "STRING", "description": "Name of the theater" }, "date": { "type": "STRING", "description": "Date for requested showtime" } }, "required": ["location", "movie", "theater", "date"] } }] }] }'
```
The model's response might be similar to the following:

Response to second model request:

```json
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": " OK. Barbie is showing in two theaters in Mountain View, CA: AMC Mountain View 16 and Regal Edwards 14."
          }
        ]
      }
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 9,
    "candidatesTokenCount": 27,
    "totalTokenCount": 36
  }
}
```
### Chat examples
You can use function calling to support a chat session. Chat sessions are useful in freeform, conversational scenarios, where a user is likely to ask follow-up questions.
If you use function calling in the context of a chat session, the session stores the context for you and includes it in every model request. Vertex AI stores the history of the interaction on the client side.
#### Python

This example demonstrates a chat scenario with two functions and two sequential prompts. It uses the `GenerativeModel` class and its methods. For more information about using the Vertex AI SDK for Python with multimodal models, see Introduction to multimodal classes in the Vertex AI SDK for Python.

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
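The general shape of such a session is shown in the following sketch, which reuses the `get_current_weather` declaration and the `tool` object from earlier in this guide; the stubbed API payload is hypothetical.

```python
# Attach the tool at model construction so every turn can use it.
chat_model = GenerativeModel("gemini-1.5-flash-002", tools=[tool])
chat = chat_model.start_chat()

# Turn 1: the model may answer with a function call part.
response = chat.send_message("What is the weather like in Boston?")
function_call = response.candidates[0].function_calls[0]

# Call your external API (stubbed here), then return the result in-session;
# the chat session carries the conversation history for you.
api_response = '{"temperature": 38, "description": "Partly Cloudy"}'
response = chat.send_message(
    Part.from_function_response(
        name=function_call.name,
        response={"content": api_response},
    )
)
print(response.text)
```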
#### Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
#### Go
Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
#### Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
### Parallel function calling example
For prompts such as "Get weather details in New Delhi and San Francisco?", the model may propose several parallel function calls. For a list of models that support parallel function calling, see Supported models.
#### REST

This example demonstrates a scenario with one `get_current_weather` function. The user prompt is "Get weather details in New Delhi and San Francisco?". The model proposes two parallel `get_current_weather` function calls: one with the parameter `New Delhi` and the other with the parameter `San Francisco`.

To learn more about the parameters of the model request, see Gemini API.

```json
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "functionCall": {
              "name": "get_current_weather",
              "args": { "location": "New Delhi" }
            }
          },
          {
            "functionCall": {
              "name": "get_current_weather",
              "args": { "location": "San Francisco" }
            }
          }
        ]
      },
      ...
    }
  ],
  ...
}
```
The following command demonstrates how you can provide the function output to the model. Replace `my-project` with the name of your Google Cloud project.

Model request:

```sh
PROJECT_ID=my-project
MODEL_ID=gemini-1.5-pro-002
VERSION="v1"
LOCATION="us-central1"
ENDPOINT=${LOCATION}-aiplatform.googleapis.com
API="generateContent"
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${ENDPOINT}/${VERSION}/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:${API}" \
  -d '{ "contents": [ { "role": "user", "parts": { "text": "What is difference in temperature in New Delhi and San Francisco?" } }, { "role": "model", "parts": [ { "functionCall": { "name": "get_current_weather", "args": { "location": "New Delhi" } } }, { "functionCall": { "name": "get_current_weather", "args": { "location": "San Francisco" } } } ] }, { "role": "user", "parts": [ { "functionResponse": { "name": "get_current_weather", "response": { "temperature": 30.5, "unit": "C" } } }, { "functionResponse": { "name": "get_current_weather", "response": { "temperature": 20, "unit": "C" } } } ] } ], "tools": [ { "function_declarations": [ { "name": "get_current_weather", "description": "Get the current weather in a specific location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" } }, "required": [ "location" ] } } ] } ] }'
```
The natural language response created by the model is similar to the following:

Model response:

```json
[
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": "The temperature in New Delhi is 30.5C and the temperature in San Francisco is 20C. The difference is 10.5C. \n"
            }
          ]
        },
        "finishReason": "STOP",
        ...
      }
    ]
    ...
  }
]
```
## Best practices for function calling

### Function name

Function names must start with a letter or an underscore and contain only the characters a-z, A-Z, 0-9, underscores, dots, or dashes, with a maximum length of 64.
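A quick way to validate names against these rules before registering a declaration, as a sketch:

```python
import re

# First character: letter or underscore; remainder: letters, digits,
# underscores, dots, or dashes; total length at most 64.
VALID_FUNCTION_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_.-]{0,63}$")

assert VALID_FUNCTION_NAME.match("get_current_weather")
assert not VALID_FUNCTION_NAME.match("2nd_function")  # starts with a digit
```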
### Function description

Write function descriptions clearly and verbosely. For example, for a `book_flight_ticket` function:

- The following is an example of a good function description: `book flight tickets after confirming users' specific requirements, such as time, departure, destination, party size and preferred airline`
- The following is an example of a bad function description: `book flight ticket`
### Function parameters

Function parameter and nested attribute names must start with a letter or an underscore and contain only the characters a-z, A-Z, 0-9, or underscores, with a maximum length of 64. Don't use periods (`.`), dashes (`-`), or spaces in function parameter names and nested attribute names; use underscores (`_`) or other allowed characters instead.
### Descriptions

Write clear and verbose parameter descriptions, including details such as your preferred format or values. For example, for a `book_flight_ticket` function:

- The following is a good example of a `departure` parameter description: `Use the 3 char airport code to represent the airport. For example, SJC or SFO. Don't use the city name.`
- The following is a bad example of a `departure` parameter description: `the departure airport`
### Types

If possible, use strongly typed parameters to reduce model hallucinations. For example, if the parameter values come from a finite set, add an `enum` field instead of putting the set of values into the description. If the parameter value is always an integer, set the type to `integer` rather than `number`.
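For instance, a hypothetical `book_flight_ticket` function could constrain a cabin class parameter with an `enum` and a traveler count with `integer`, as in this sketch using the `FunctionDeclaration` class from Step 2:

```python
book_flight_ticket_func = FunctionDeclaration(
    name="book_flight_ticket",
    description="Book a flight ticket after confirming the user's requirements",
    parameters={
        "type": "object",
        "properties": {
            "seat_class": {
                "type": "string",
                # A finite set of values: declare them, don't describe them.
                "enum": ["economy", "premium_economy", "business", "first"],
                "description": "Cabin class for the ticket",
            },
            "party_size": {
                "type": "integer",  # always a whole number, so not "number"
                "description": "Number of travelers",
            },
        },
        "required": ["seat_class", "party_size"],
    },
)
```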
### System instructions
When using functions with date, time, or location parameters, include the current date, time, or relevant location information (for example, city and country) in the system instruction. This ensures the model has the necessary context to process the request accurately, even if the user's prompt lacks details.
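A sketch of how this could look with the SDK; the location value is a hypothetical example:

```python
import datetime

today = datetime.date.today().strftime("%A, %B %d, %Y")

model_with_context = GenerativeModel(
    model_name="gemini-1.5-flash-002",
    # Give the model the current date and the user's location so that relative
    # phrases such as "tomorrow" or "near me" resolve correctly.
    system_instruction=[
        f"Today's date is {today}.",
        "The user is located in Boston, MA.",
    ],
)
```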
### User prompt

For best results, prepend the user prompt with the following details, as illustrated in the sketch after this list:

- Additional context for the model. For example, `You are a flight API assistant to help with searching flights based on user preferences.`
- Details or instructions on how and when to use the functions. For example, `Don't make assumptions on the departure or destination airports. Always use a future date for the departure or destination time.`
- Instructions to ask clarifying questions if user queries are ambiguous. For example, `Ask clarifying questions if not enough information is available.`
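Putting these together, a prompt could be assembled as in the following sketch; the user question is a hypothetical example, and `tool` is the Tool object from earlier in this guide:

```python
# Prepend context and usage instructions to the user's question.
prompt = (
    "You are a flight API assistant to help with searching flights based on "
    "user preferences. Don't make assumptions on the departure or destination "
    "airports. Ask clarifying questions if not enough information is available.\n\n"
    "User question: I want to fly out of SJC tomorrow morning."
)

response = model.generate_content(prompt, tools=[tool])
```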
### Generation configuration

For the temperature parameter, use `0` or another low value. This instructs the model to generate more confident results and reduces hallucinations.
### API invocation
If the model proposes the invocation of a function that would send an order, update a database, or otherwise have significant consequences, validate the function call with the user before executing it.
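One possible guard, as a sketch; the set of consequential function names and the confirmation flow are application-specific assumptions:

```python
# Hypothetical set of functions whose side effects need explicit sign-off.
REQUIRES_CONFIRMATION = {"submit_order", "update_database"}

function_call = response.candidates[0].function_calls[0]
if function_call.name in REQUIRES_CONFIRMATION:
    answer = input(
        f"About to call {function_call.name} with {function_call.args}. Proceed? [y/N] "
    )
    if answer.strip().lower() != "y":
        raise SystemExit("Function call cancelled by the user.")
```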
## Pricing
The pricing for function calling is based on the number of characters within the text inputs and outputs. To learn more, see Vertex AI pricing.
Here, text input (prompt) refers to the user prompt for the current conversation turn, the function declarations for the current conversation turn, and the history of the conversation. The history of the conversation includes the queries, the function calls, and the function responses of previous conversation turns. Vertex AI truncates the history of the conversation at 32,000 characters.
Text output (response) refers to the function calls and the text responses for the current conversation turn.
## What's next

- See the API reference for function calling.
- Learn about Vertex AI extensions.
- Learn about LangChain on Vertex AI.