```python
LanggraphAgent(
    model: str,
    *,
    tools: typing.Optional[typing.Sequence[_ToolLike]] = None,
    model_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    model_tool_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    model_builder: typing.Optional[typing.Callable[..., BaseLanguageModel]] = None,
    runnable_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    runnable_builder: typing.Optional[
        typing.Callable[..., RunnableSerializable]
    ] = None,
    checkpointer_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    checkpointer_builder: typing.Optional[
        typing.Callable[..., BaseCheckpointSaver]
    ] = None,
    enable_tracing: bool = False
)
```
A LangGraph Agent.
Methods
LanggraphAgent
```python
LanggraphAgent(
    model: str,
    *,
    tools: typing.Optional[typing.Sequence[_ToolLike]] = None,
    model_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    model_tool_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    model_builder: typing.Optional[typing.Callable[..., BaseLanguageModel]] = None,
    runnable_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    runnable_builder: typing.Optional[
        typing.Callable[..., RunnableSerializable]
    ] = None,
    checkpointer_kwargs: typing.Optional[typing.Mapping[str, typing.Any]] = None,
    checkpointer_builder: typing.Optional[
        typing.Callable[..., BaseCheckpointSaver]
    ] = None,
    enable_tracing: bool = False
)
```
Initializes the LangGraph Agent.
Under the hood, assuming `.set_up()` is called, this corresponds to:

```python
model = model_builder(model_name=model, model_kwargs=model_kwargs)
runnable = runnable_builder(
    model=model,
    tools=tools,
    model_tool_kwargs=model_tool_kwargs,
    runnable_kwargs=runnable_kwargs,
)
```
When all arguments are left at their default values, this corresponds to:

```python
# model_builder
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name=model, **model_kwargs)

# runnable_builder
from langchain import agents
from langchain_core.runnables.history import RunnableWithMessageHistory

llm_with_tools = llm.bind_tools(tools=tools, **model_tool_kwargs)
agent_executor = agents.AgentExecutor(
    agent=prompt | llm_with_tools | output_parser,
    tools=tools,
    **agent_executor_kwargs,
)
runnable = RunnableWithMessageHistory(
    runnable=agent_executor,
    get_session_history=chat_history,
    **runnable_kwargs,
)
```
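The builder hooks compose as a simple pipeline: the model builder produces the language model, and the runnable builder wraps it with tools and orchestration. A minimal sketch of that wiring, using plain dictionaries as stand-ins for the real LangChain objects (all names here are illustrative, not the library's actual defaults):

```python
# Illustrative stand-ins for model_builder / runnable_builder: plain dicts
# replace the real LangChain model and runnable objects.
def stub_model_builder(model_name, model_kwargs=None, **kwargs):
    # Mirrors: model = model_builder(model_name=model, model_kwargs=model_kwargs)
    return {"name": model_name, **(model_kwargs or {})}

def stub_runnable_builder(model, tools=None, **kwargs):
    # Mirrors: runnable = runnable_builder(model=model, tools=tools, ...)
    return {"model": model, "tools": list(tools or [])}

model = stub_model_builder("gemini-1.0-pro", {"temperature": 0.28})
runnable = stub_runnable_builder(model, tools=[])
print(runnable["model"]["temperature"])  # 0.28
```

Overriding `model_builder` or `runnable_builder` replaces the corresponding step while the agent keeps the same two-stage flow.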
By default, no checkpointer is used. To enable checkpointing, provide a `checkpointer_builder` function that returns a checkpointer instance.

Example using Spanner:

```python
def checkpointer_builder(instance_id, database_id, project_id, **kwargs):
    from langchain_google_spanner import SpannerCheckpointSaver

    checkpointer = SpannerCheckpointSaver(instance_id, database_id, project_id)
    with checkpointer.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS checkpoints")
        cur.execute("DROP TABLE IF EXISTS checkpoint_writes")
    checkpointer.setup()
    return checkpointer
```
Example using an in-memory checkpointer:

```python
def checkpointer_builder(**kwargs):
    from langgraph.checkpoint.memory import MemorySaver

    return MemorySaver()
```
The `checkpointer_builder` function will be called with any keyword arguments passed to the agent's constructor. Ensure your `checkpointer_builder` function accepts `**kwargs` to handle these arguments, even if unused.
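To illustrate why `**kwargs` matters, here is a hypothetical builder exercised with the kind of extra keyword arguments the constructor forwards (`StubSaver` and the `namespace` parameter are stand-ins, not part of the real API):

```python
class StubSaver:
    """Stand-in for a real checkpointer such as MemorySaver."""
    def __init__(self, namespace):
        self.namespace = namespace

def checkpointer_builder(namespace="default", **kwargs):
    # Extra constructor kwargs (model, tools, ...) arrive here; accepting
    # **kwargs lets the builder ignore them safely instead of raising.
    return StubSaver(namespace)

saver = checkpointer_builder(
    namespace="session-store",
    model="gemini-1.0-pro",  # forwarded by the constructor, unused here
)
print(saver.namespace)  # session-store
```

Without the `**kwargs` catch-all, the forwarded `model=...` argument would raise a `TypeError` at build time.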
Parameters

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Required. The name of the model (e.g. "gemini-1.0-pro"). |
| `tools` | `Sequence[langchain_core.tools.BaseTool, Callable]` | Optional. The tools for the agent to be able to use. All input callables (e.g. a function or a class method) will be converted to a `langchain.tools.base.StructuredTool`. Defaults to None. |
| `model_kwargs` | `Mapping[str, Any]` | Optional. Additional keyword arguments for the constructor of `chat_models.ChatVertexAI`. See the example below. |
| `model_tool_kwargs` | `Mapping[str, Any]` | Optional. Additional keyword arguments when binding tools to the model using `model.bind_tools()`. |
| `model_builder` | `Callable[..., BaseLanguageModel]` | Optional. Callable that returns a new language model. Defaults to a callable that returns `ChatVertexAI` based on `model` and `model_kwargs`. |
| `runnable_kwargs` | `Mapping[str, Any]` | Optional. Additional keyword arguments for the constructor of `langchain.runnables.history.RunnableWithMessageHistory` if `chat_history` is specified. If `chat_history` is None, this will be ignored. |
| `runnable_builder` | `Callable[..., RunnableSerializable]` | Optional. Callable that returns a new runnable. This can be used for customizing the orchestration logic of the Agent based on the model returned by `model_builder`. |
| `checkpointer_kwargs` | `Mapping[str, Any]` | Optional. Additional keyword arguments for the constructor of the checkpointer returned by `checkpointer_builder`. |
| `checkpointer_builder` | `Callable[..., BaseCheckpointSaver]` | Optional. Callable that returns a checkpointer. This can be used for defining the checkpointer of the Agent. Defaults to None. |
| `enable_tracing` | `bool` | Optional. Whether to enable tracing in Cloud Trace. Defaults to False. |

An example of `model_kwargs`:

```python
{
    # temperature (float): Sampling temperature; it controls the
    # degree of randomness in token selection.
    "temperature": 0.28,
    # max_output_tokens (int): Token limit determines the maximum
    # amount of text output from one prompt.
    "max_output_tokens": 1000,
    # top_p (float): Tokens are selected from most probable to
    # least, until the sum of their probabilities equals the
    # top_p value.
    "top_p": 0.95,
    # top_k (int): How the model selects tokens for output; the
    # next token is selected from among the top_k most probable
    # tokens.
    "top_k": 40,
}
```
Exceptions

| Type | Description |
|---|---|
| `TypeError` | If there is an invalid tool (e.g. a function with an input that did not specify its type). |
clone
```python
clone() -> vertexai.preview.reasoning_engines.templates.langgraph.LanggraphAgent
```
Returns a clone of the LanggraphAgent.
get_state
```python
get_state(
    config: typing.Optional[RunnableConfig] = None, **kwargs: typing.Any
) -> typing.Dict[str, typing.Any]
```
Gets the current state of the Agent.
Parameter

| Name | Type | Description |
|---|---|---|
| `config` | `Optional[RunnableConfig]` | Optional. The config for invoking the Agent. |
Returns

| Type | Description |
|---|---|
| `Dict[str, Any]` | The current state of the Agent. |
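A `RunnableConfig` is a plain mapping. With a checkpointer enabled, LangGraph's convention is to scope persisted state by a `thread_id` under the `configurable` key, so a typical call would look like the sketch below (the `agent.get_state(...)` call is commented out because it requires a set-up agent):

```python
# Conventional LangGraph config shape: state is scoped by thread_id.
config = {"configurable": {"thread_id": "session-1"}}

# state = agent.get_state(config=config)  # requires a set-up agent
print(config["configurable"]["thread_id"])  # session-1
```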
get_state_history
```python
get_state_history(
    config: typing.Optional[RunnableConfig] = None, **kwargs: typing.Any
) -> typing.Iterable[typing.Any]
```
Gets the state history of the Agent.
Parameter

| Name | Type | Description |
|---|---|---|
| `config` | `Optional[RunnableConfig]` | Optional. The config for invoking the Agent. |
query
```python
query(
    *,
    input: typing.Union[str, typing.Mapping[str, typing.Any]],
    config: typing.Optional[RunnableConfig] = None,
    **kwargs: typing.Any
) -> typing.Dict[str, typing.Any]
```
Queries the Agent with the given input and config.
Parameters

| Name | Type | Description |
|---|---|---|
| `input` | `Union[str, Mapping[str, Any]]` | Required. The input to be passed to the Agent. |
| `config` | `langchain_core.runnables.RunnableConfig` | Optional. The config (if any) to be used for invoking the Agent. |
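The `input` argument accepts either a bare string or a mapping; the two shapes below carry the same prompt (the `agent.query(...)` call is commented out since it needs a set-up agent, and the prompt text is purely illustrative):

```python
prompt = "What is the exchange rate from US dollars to Swedish krona?"

# Either shape is accepted by query() and stream_query():
input_as_str = prompt
input_as_mapping = {"input": prompt}

# response = agent.query(input=input_as_mapping)  # requires a set-up agent
print(input_as_str == input_as_mapping["input"])  # True
```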
register_operations
```python
register_operations() -> typing.Mapping[str, typing.Sequence[str]]
```
Registers the operations of the Agent.
This mapping defines how different operation modes (e.g., "", "stream") are implemented by specific methods of the Agent. The "default" mode, represented by the empty string "", is associated with the `query` API, while the "stream" mode is associated with the `stream_query` API.
Returns

| Type | Description |
|---|---|
| `Mapping[str, Sequence[str]]` | A mapping of operation modes to a list of method names that implement those operation modes. |
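The returned mapping has the general shape sketched below; the exact method lists are an assumption based on the modes described above, not a guaranteed output:

```python
# Hypothetical shape of the mapping returned by register_operations().
operations = {
    "": ["query"],               # default mode -> the query API
    "stream": ["stream_query"],  # stream mode  -> the stream_query API
}
print(operations["stream"])  # ['stream_query']
```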
set_up
```python
set_up()
```
Sets up the agent for execution of queries at runtime.
It initializes the model, binds the model with tools, and connects it with the prompt template and output parser.
This method should not be called for an object that is being passed to the ReasoningEngine service for deployment, as it initializes clients that cannot be serialized.
stream_query
```python
stream_query(
    *,
    input: typing.Union[str, typing.Mapping[str, typing.Any]],
    config: typing.Optional[RunnableConfig] = None,
    **kwargs
) -> typing.Iterable[typing.Any]
```
Stream queries the Agent with the given input and config.
Parameters

| Name | Type | Description |
|---|---|---|
| `input` | `Union[str, Mapping[str, Any]]` | Required. The input to be passed to the Agent. |
| `config` | `langchain_core.runnables.RunnableConfig` | Optional. The config (if any) to be used for invoking the Agent. |
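Because `stream_query` returns an iterable, results are consumed incrementally rather than as a single dict. The sketch below shows the consumption pattern with a stub generator standing in for a real agent (everything here is illustrative):

```python
def stub_stream_query(*, input, config=None, **kwargs):
    # Stand-in for agent.stream_query: yields chunks instead of
    # returning one aggregated response.
    for chunk in ("The exchange ", "rate is ", "10.5 SEK."):
        yield chunk

pieces = list(stub_stream_query(input="USD to SEK?"))
print("".join(pieces))  # The exchange rate is 10.5 SEK.
```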
update_state
```python
update_state(
    config: typing.Optional[RunnableConfig] = None, **kwargs: typing.Any
) -> typing.Dict[str, typing.Any]
```
Updates the state of the Agent.
Parameter

| Name | Type | Description |
|---|---|---|
| `config` | `Optional[RunnableConfig]` | Optional. The config for invoking the Agent. |
Returns

| Type | Description |
|---|---|
| `Dict[str, Any]` | The updated state of the Agent. |