Live API-supported models come with the built-in ability to use the following tools:

- Function calling
- Code execution
- Grounding with Google Search
- Grounding with Vertex AI RAG Engine (Preview)

To enable a particular tool for usage in returned responses, include the name of the tool in the tools list when you initialize the model. The following sections provide examples of how to use each of the built-in tools in your code.

Supported models

You can use the Live API with the following models:

Model version                              | Availability level
gemini-live-2.5-flash                      | Private GA*
gemini-live-2.5-flash-preview-native-audio | Public preview

* Reach out to your Google account team representative to request access.

Function calling

Use function calling to create a description of a function, then pass that description to the model in a request. The response from the model includes the name of a function that matches the description and the arguments to call it with.

All functions must be declared at the start of the session by sending tool definitions as part of the LiveConnectConfig message. To enable function calling, include function_declarations in the tools list:

Python
import asyncio

from google import genai
from google.genai import types

client = genai.Client(
    vertexai=True,
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_LOCATION,
)
model = "gemini-live-2.5-flash"

# Simple function definitions
turn_on_the_lights = {"name": "turn_on_the_lights"}
turn_off_the_lights = {"name": "turn_off_the_lights"}

tools = [{"function_declarations": [turn_on_the_lights, turn_off_the_lights]}]
config = {"response_modalities": ["TEXT"], "tools": tools}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "Turn on the lights please"
        await session.send_client_content(turns={"parts": [{"text": prompt}]})

        async for chunk in session.receive():
            if chunk.server_content:
                if chunk.text is not None:
                    print(chunk.text)
            elif chunk.tool_call:
                function_responses = []
                for fc in chunk.tool_call.function_calls:
                    function_response = types.FunctionResponse(
                        name=fc.name,
                        response={"result": "ok"},  # simple, hard-coded function response
                    )
                    function_responses.append(function_response)
                await session.send_tool_response(function_responses=function_responses)

if __name__ == "__main__":
    asyncio.run(main())
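The declarations above take no arguments. A declaration can also describe typed parameters with an OpenAPI-style schema, which the model uses to fill in the arguments of the function call it returns. A minimal sketch; the set_light_values function and its fields are hypothetical:

```python
# A function declaration with parameters, described by an
# OpenAPI-style JSON schema (hypothetical example).
set_light_values = {
    "name": "set_light_values",
    "description": "Set the brightness and color temperature of a room light.",
    "parameters": {
        "type": "OBJECT",
        "properties": {
            "brightness": {
                "type": "NUMBER",
                "description": "Light level from 0 to 100.",
            },
            "color_temp": {
                "type": "STRING",
                "description": "Color temperature: 'daylight', 'cool', or 'warm'.",
            },
        },
        "required": ["brightness"],
    },
}

# Pass the declaration in the tools list exactly as in the sample above.
tools = [{"function_declarations": [set_light_values]}]
```

When the model calls this function, the arguments arrive on fc.args and can be forwarded to your real implementation before sending the FunctionResponse.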
Code execution

You can use code execution with the Live API to generate and execute Python code directly. To enable code execution for your responses, include code_execution in the tools list:

Python
import asyncio

from google import genai
from google.genai import types

client = genai.Client(
    vertexai=True,
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_LOCATION,
)
model = "gemini-live-2.5-flash"

tools = [{"code_execution": {}}]
config = {"response_modalities": ["TEXT"], "tools": tools}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "Compute the largest prime palindrome under 100000."
        await session.send_client_content(turns={"parts": [{"text": prompt}]})

        async for chunk in session.receive():
            if chunk.server_content:
                if chunk.text is not None:
                    print(chunk.text)

                model_turn = chunk.server_content.model_turn
                if model_turn:
                    for part in model_turn.parts:
                        if part.executable_code is not None:
                            print(part.executable_code.code)
                        if part.code_execution_result is not None:
                            print(part.code_execution_result.output)

if __name__ == "__main__":
    asyncio.run(main())
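For the prompt above, the model writes and runs Python along these lines; running equivalent logic locally is a quick way to sanity-check the streamed code_execution_result. This is an illustrative sketch, not the model's verbatim output:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check, sufficient for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def largest_prime_palindrome(limit: int) -> int:
    """Largest prime below limit whose decimal digits read the same both ways."""
    for n in range(limit - 1, 1, -1):
        s = str(n)
        if s == s[::-1] and is_prime(n):
            return n
    raise ValueError("no prime palindrome below limit")

print(largest_prime_palindrome(100_000))  # → 98689
```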
Grounding with Google Search

You can use Grounding with Google Search with the Live API by including google_search in the tools list:

Python
import asyncio

from google import genai
from google.genai import types

client = genai.Client(
    vertexai=True,
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_LOCATION,
)
model = "gemini-live-2.5-flash"

tools = [{"google_search": {}}]
config = {"response_modalities": ["TEXT"], "tools": tools}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "When did the last Brazil vs. Argentina soccer match happen?"
        await session.send_client_content(turns={"parts": [{"text": prompt}]})

        async for chunk in session.receive():
            if chunk.server_content:
                if chunk.text is not None:
                    print(chunk.text)

                # The model might generate and execute Python code to use Search
                model_turn = chunk.server_content.model_turn
                if model_turn:
                    for part in model_turn.parts:
                        if part.executable_code is not None:
                            print(part.executable_code.code)
                        if part.code_execution_result is not None:
                            print(part.code_execution_result.output)

if __name__ == "__main__":
    asyncio.run(main())
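A session is not limited to a single built-in tool: the tools list accepts multiple entries, so Search grounding can sit alongside code execution and your own function declarations. A sketch of such a configuration; whether a given model honors every combination in one session is an assumption to verify against the model's documentation:

```python
# Hypothetical combined configuration: Google Search grounding,
# code execution, and a custom function in the same tools list.
turn_on_the_lights = {"name": "turn_on_the_lights"}

tools = [
    {"google_search": {}},
    {"code_execution": {}},
    {"function_declarations": [turn_on_the_lights]},
]
config = {"response_modalities": ["TEXT"], "tools": tools}
```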
Grounding with Vertex AI RAG Engine (Preview)

You can use Vertex AI RAG Engine with the Live API for grounding, storing, and retrieving contexts:

Python
from google import genai
from google.genai import types
from google.genai.types import (Content, LiveConnectConfig, HttpOptions, Modality, Part)
from IPython import display

PROJECT_ID = "YOUR_PROJECT_ID"
LOCATION = "YOUR_LOCATION"
TEXT_INPUT = "YOUR_TEXT_INPUT"
MODEL_NAME = "gemini-live-2.5-flash"

client = genai.Client(
    vertexai=True,
    project=PROJECT_ID,
    location=LOCATION,
)

rag_store = types.VertexRagStore(
    rag_resources=[
        types.VertexRagStoreRagResource(
            # Replace with the resource name of your RAG corpus
            rag_corpus="YOUR_RAG_CORPUS_RESOURCE"
        )
    ],
)

For more information, see Use Vertex AI RAG Engine in Gemini Live API.
(Public preview) Native audio

Gemini 2.5 Flash with Live API introduces native audio capabilities, enhancing the standard Live API features. Native audio provides richer and more natural voice interactions through 30 HD voices in 24 languages. It also includes two new features exclusive to native audio: Proactive Audio and Affective Dialog.

Use Proactive Audio

Proactive Audio allows the model to respond only when relevant. When enabled, the model generates text transcripts and audio responses proactively, but only for queries directed to the device. Non-device-directed queries are ignored.

To use Proactive Audio, configure the proactivity field in the setup message and set proactive_audio to true:

Python
config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    proactivity=ProactivityConfig(proactive_audio=True),
)
Use Affective Dialog

Affective Dialog allows models using Live API native audio to better understand and respond appropriately to users' emotional expressions, leading to more nuanced conversations.

To enable Affective Dialog, set enable_affective_dialog to true in the setup message:

Python
config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    enable_affective_dialog=True,
)
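The two settings can also appear together in a single setup message. A sketch assuming the google-genai SDK's LiveConnectConfig and ProactivityConfig types and the native-audio preview model from the table above:

```python
from google.genai.types import LiveConnectConfig, ProactivityConfig

model = "gemini-live-2.5-flash-preview-native-audio"

# Assumed combination: Proactive Audio and Affective Dialog enabled
# in the same session setup.
config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    proactivity=ProactivityConfig(proactive_audio=True),
    enable_affective_dialog=True,
)
```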
More information

For more information on using the Live API, see:

- Built-in tools for the Live API
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-15 UTC.