- Supported models: Lists the models that support function calling.
- Example syntax: Shows the basic structure of a function calling API request.
- API parameters: Details the parameters used in function calling, such as FunctionDeclaration and FunctionCallingConfig.
- Examples: Provides code examples for sending function declarations and configuring function call behavior.
Function calling improves the LLM's ability to provide relevant and contextual answers.
With the Function Calling API, you can provide custom functions to a generative AI model. The model doesn't directly invoke these functions. Instead, it generates structured data output that specifies the function name and suggested arguments. This output enables you to call external APIs or information systems, such as databases, customer relationship management (CRM) systems, and document repositories. You can then provide the resulting API output back to the model to improve its response quality.
For a conceptual overview of function calling, see Function calling.
Supported models
- Gemini 2.5 Flash-Lite
- Gemini 2.5 Flash with Live API native audio (Preview)
- Gemini 2.0 Flash with Live API (Preview)
- Vertex AI Model Optimizer (Experimental)
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Gemini 2.0 Flash
- Gemini 2.0 Flash-Lite
Limitations:
- You can provide a maximum of 128 function declarations with each request.
Example syntax
The following example shows the syntax for a function call API request.
curl
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent \
-d '{
  "contents": [{
    ...
  }],
  "tools": [{
    "function_declarations": [
      {
        ...
      }
    ]
  }]
}'
API parameters
This section describes the parameters for function calling. For implementation details, see the Examples section.
FunctionDeclaration
A FunctionDeclaration defines a function that the model can generate JSON inputs for, based on OpenAPI 3.0 specifications.
| Parameter | Description |
|---|---|
| name | The name of the function to call. The name must start with a letter or an underscore. It can contain letters (a-z, A-Z), numbers (0-9), underscores, dots, or dashes, with a maximum length of 64 characters. |
| description | Optional: A description of the function's purpose. The model uses this description to decide how and whether to call the function. For best results, we recommend that you include a description. |
| parameters | Optional: The parameters of the function, described in the OpenAPI JSON Schema Object format. |
| response | Optional: The output from the function, described in the OpenAPI JSON Schema Object format. |
For more information, see Function calling.
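For illustration, the following sketch shows how a declaration that sets all four fields might look when built with the Gen AI SDK for Python (the google-genai package). The get_exchange_rate function and its schema fields are hypothetical examples, not part of the API reference, and the exact SDK surface can vary between versions.
from google.genai import types

# Hypothetical declaration for illustration; the keyword arguments mirror the
# FunctionDeclaration parameters described above (name, description,
# parameters, response).
get_exchange_rate = types.FunctionDeclaration(
    name="get_exchange_rate",
    description="Look up the current exchange rate between two currencies.",
    parameters={
        "type": "OBJECT",
        "properties": {
            "from_currency": {"type": "STRING", "description": "ISO 4217 code, e.g. USD"},
            "to_currency": {"type": "STRING", "description": "ISO 4217 code, e.g. EUR"},
        },
        "required": ["from_currency", "to_currency"],
    },
    response={
        "type": "OBJECT",
        "properties": {
            "rate": {"type": "NUMBER", "description": "Units of to_currency per one from_currency"},
        },
    },
)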
Schema
A Schema defines the format of the input and output data in a function call, based on the OpenAPI 3.0 Schema specification.
| Parameter | Description |
|---|---|
| type | The data type. Must be one of the following: STRING, INTEGER, BOOLEAN, NUMBER, ARRAY, OBJECT. |
| description | Optional: A description of the data. |
| enum | Optional: The possible values for an element of a primitive type. |
| items | Optional: The schema for the elements of an ARRAY. |
| properties | Optional: The schema for the properties of an OBJECT. |
| required | Optional: The required properties of an OBJECT. |
| nullable | Optional: Indicates whether the value can be null. |
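As an illustration only, the following hypothetical schema combines the fields from the table above: an OBJECT with a required STRING, an enum, a nullable NUMBER, and an ARRAY of nested OBJECT items. It is written as a Python dictionary so it can be used as the parameters or response value in a function declaration; in a REST request body, the same structure is sent as JSON (with nullable written as lowercase true).
# Hypothetical schema for illustration; field names are made up.
product_query_schema = {
    "type": "OBJECT",
    "properties": {
        "product_name": {"type": "STRING", "description": "Product to look up"},
        "condition": {"type": "STRING", "enum": ["new", "used", "refurbished"]},
        "max_price": {"type": "NUMBER", "nullable": True},
        "stores": {
            "type": "ARRAY",
            "items": {
                "type": "OBJECT",
                "properties": {"city": {"type": "STRING"}},
            },
        },
    },
    "required": ["product_name"],
}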
FunctionCallingConfig
FunctionCallingConfig lets you control the model's behavior and determine which function to call.
| Parameter | Description |
|---|---|
| mode | Optional: The function calling mode. AUTO (the default) lets the model decide whether to return a function call or a natural language response. ANY constrains the model to always return a function call. NONE prevents the model from returning any function call. |
| allowedFunctionNames | Optional: A list of function names to call. You can only set this when the mode is ANY. If set, the model returns a function call from only this list of functions. |
functionCall
A functionCall is a prediction returned from the model. It contains the name of the function to call (functionDeclaration.name) and a structured JSON object with the parameters and their values.
| Parameter | Description |
|---|---|
| name | The name of the function to call. |
| args | The function parameters and their values in a JSON object format. See Function calling for parameter details. |
functionResponse
A functionResponse is the output from a FunctionCall. It contains the name of the function that was called and a structured JSON object with the function's output. You provide this response back to the model to use as context.
| Parameter | Description |
|---|---|
| name | The name of the function that was called. |
| response | The function's response in a JSON object format. |
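For example, with the Gen AI SDK for Python you might build the function response part as follows. This is a minimal sketch: the get_current_weather name matches the example declarations later on this page, and the weather values are made up to stand in for your API's real output.
from google.genai import types

# Sketch: assume the model previously returned a functionCall named
# "get_current_weather"; the response payload below is a made-up stand-in
# for the output of your own weather API.
function_response_part = types.Part.from_function_response(
    name="get_current_weather",
    response={"content": {"temperature": 20, "unit": "C"}},
)

# Append this part as a new conversation turn after the model's functionCall
# turn, then call generate_content again with the same tools so the model can
# use the result as context for its final answer.
follow_up_turn = types.Content(role="user", parts=[function_response_part])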
Examples
Send a function declaration
The following example shows how to send a query and a function declaration to the model.
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- MODEL_ID: The ID of the model that you want to use.
- ROLE: The identity of the entity that creates the message.
- TEXT: The prompt to send to the model.
- NAME: The name of the function to call.
- DESCRIPTION: Description and purpose of the function.
- For other fields, see the Parameter list table.
HTTP method and URL:
POST https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/MODEL_ID:generateContent
Request JSON body:
{ "contents": [{ "role": "ROLE", "parts": [{ "text": "TEXT" }] }], "tools": [{ "function_declarations": [ { "name": "NAME", "description": "DESCRIPTION", "parameters": { "type": "TYPE", "properties": { "location": { "type": "TYPE", "description": "DESCRIPTION" } }, "required": [ "location" ] } } ] }] }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/MODEL_ID:generateContent"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/MODEL_ID:generateContent" | Select-Object -Expand Content
Example curl command
PROJECT_ID=myproject
LOCATION=us-central1
MODEL_ID=gemini-2.5-flash
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent \
-d '{
"contents": [{
"role": "user",
"parts": [{
"text": "What is the weather in Boston?"
}]
}],
"tools": [{
"functionDeclarations": [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616"
}
},
"required": [
"location"
]
}
}
]
}]
}'
Gen AI SDK for Python
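The following is a minimal sketch of the same request with the Gen AI SDK for Python (the google-genai package). It assumes you have authenticated with Application Default Credentials and that you replace PROJECT_ID with your own project ID; the exact SDK surface can vary between versions.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="PROJECT_ID", location="global")

get_current_weather = types.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "OBJECT",
        "properties": {
            "location": {
                "type": "STRING",
                "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616",
            }
        },
        "required": ["location"],
    },
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the weather in Boston?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[get_current_weather])]
    ),
)

# The model returns a functionCall part instead of invoking the function itself.
print(response.candidates[0].content.parts[0].function_call)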
Node.js
Java
Go
REST (OpenAI)
You can call the Function Calling API by using the OpenAI library. For more information, see Call Vertex AI models by using the OpenAI library.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- MODEL_ID: The ID of the model that you want to use.
HTTP method and URL:
POST https://aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/global/endpoints/openapi/chat/completions
Request JSON body:
{ "model": "google/MODEL_ID", "messages": [ { "role": "user", "content": "What is the weather in Boston?" } ], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "OBJECT", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" } }, "required": ["location"] } } } ] }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/global/endpoints/openapi/chat/completions"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/global/endpoints/openapi/chat/completions" | Select-Object -Expand Content
Python (OpenAI)
You can call the Function Calling API by using the OpenAI library. For more information, see Call Vertex AI models by using the OpenAI library.
Configure function call behavior
The following example shows how to pass a FunctionCallingConfig to the model.
You can use functionCallingConfig to require the model to output a specific function call. To configure this behavior:
- Set the function calling mode to ANY.
- Specify the function names that you want to use in allowed_function_names. If allowed_function_names is empty, any of the provided functions can be returned.
REST
PROJECT_ID=myproject
LOCATION=us-central1
MODEL_ID=gemini-2.5-flash
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent \
-d '{
"contents": [{
"role": "user",
"parts": [{
"text": "Do you have the White Pixel 8 Pro 128GB in stock in the US?"
}]
}],
"tools": [{
"functionDeclarations": [
{
"name": "get_product_sku",
"description": "Get the available inventory for a Google products, e.g: Pixel phones, Pixel Watches, Google Home etc",
"parameters": {
"type": "object",
"properties": {
"product_name": {"type": "string", "description": "Product name"}
}
}
},
{
"name": "get_store_location",
"description": "Get the location of the closest store",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "Location"}
        }
}
}
]
}],
"toolConfig": {
"functionCallingConfig": {
"mode":"ANY",
"allowedFunctionNames": ["get_product_sku"]
}
},
"generationConfig": {
"temperature": 0.95,
"topP": 1.0,
"maxOutputTokens": 8192
}
}'
Gen AI SDK for Python
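A minimal sketch of the same configuration with the Gen AI SDK for Python. The tool_config argument mirrors the toolConfig block in the REST request above; PROJECT_ID is a placeholder and the exact SDK surface can vary between versions.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="PROJECT_ID", location="us-central1")

get_product_sku = types.FunctionDeclaration(
    name="get_product_sku",
    description="Get the available inventory for Google products, e.g. Pixel phones, Pixel Watches, Google Home",
    parameters={
        "type": "OBJECT",
        "properties": {"product_name": {"type": "STRING", "description": "Product name"}},
    },
)
get_store_location = types.FunctionDeclaration(
    name="get_store_location",
    description="Get the location of the closest store",
    parameters={
        "type": "OBJECT",
        "properties": {"location": {"type": "STRING", "description": "Location"}},
    },
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Do you have the White Pixel 8 Pro 128GB in stock in the US?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[get_product_sku, get_store_location])],
        # Force a function call and restrict it to get_product_sku.
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(
                mode="ANY",
                allowed_function_names=["get_product_sku"],
            )
        ),
        temperature=0.95,
        top_p=1.0,
        max_output_tokens=8192,
    ),
)
print(response.candidates[0].content.parts[0].function_call)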
Node.js
Go
REST (OpenAI)
You can call the Function Calling API by using the OpenAI library. For more information, see Call Vertex AI models by using the OpenAI library.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- MODEL_ID: The ID of the model that you want to use.
HTTP method and URL:
POST https://aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/global/endpoints/openapi/chat/completions
Request JSON body:
{ "model": "google/MODEL_ID", "messages": [ { "role": "user", "content": "What is the weather in Boston?" } ], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "OBJECT", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA or a zip code e.g. 95616" } }, "required": ["location"] } } } ], "tool_choice": "auto" }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/global/endpoints/openapi/chat/completions"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/global/endpoints/openapi/chat/completions" | Select-Object -Expand Content
Python (OpenAI)
You can call the Function Calling API by using the OpenAI library. For more information, see Call Vertex AI models by using the OpenAI library.
What's next
To learn more, see the following documentation: