agent.query(input="What is the exchange rate from US dollars to SEK today?")
is equivalent to the following (in full form):
agent.query(input={"input":[# The input is represented as a list of messages (each message as a dict){# The role (e.g. "system", "user", "assistant", "tool")"role":"user",# The type (e.g. "text", "tool_use", "image_url", "media")"type":"text",# The rest of the message (this varies based on the type)"text":"What is the exchange rate from US dollars to Swedish currency?",},]})
Roles are used to help the model distinguish between different types of [messages](https://python.langchain.com/docs/concepts/messages) when responding. When the `role` is omitted in the input, it defaults to `"user"`.
| Role | Description |
|-------------|-------------|
| `system` | Used to tell the chat model how to behave and to provide additional context. Not supported by some chat model providers. |
| `user` | Represents input from a user interacting with the model, usually in the form of text or other interactive input. |
| `assistant` | Represents a response from the model, which can include text or a request to invoke tools. |
| `tool` | A message used to pass the results of a tool call back to the model after external data or processing has been retrieved. |
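For example, you can pass a multi-turn input with explicit roles. A minimal sketch (the `system` prompt here is a hypothetical example, and only works with model providers that support system messages):

    agent.query(input={"input": [
        # Hypothetical system prompt (assumption): supported only by some providers.
        {"role": "system", "type": "text", "text": "Answer in one short sentence."},
        {"role": "user", "type": "text", "text": "What is the exchange rate from US dollars to Swedish currency?"},
    ]})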
The `type` of the message will also determine how the rest of the message is interpreted (see [Handle multi-modal content](#multimodal-content)).
Query the agent with multi-modal content
----------------------------------------
We will use the following agent (which forwards the input to the model and does not use any tools) to illustrate how to pass multimodal inputs to an agent:
**Note:** there isn't any known support for multi-modal outputs.
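    agent = agent_engines.LangchainAgent(
        model="gemini-2.0-flash",
        runnable_builder=lambda model, **kwargs: model,
    )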
Multimodal messages are represented through content blocks that specify a `type` and corresponding data. In general, for multimodal content, you would specify the `type` to be `"media"`, the `file_uri` to point to a Cloud Storage URI, and the `mime_type` for interpreting the file.
### Image
agent.query(input={"input":[{"type":"text","text":"Describe the attached media in 5 words!"},{"type":"media","mime_type":"image/jpeg","file_uri":"gs://cloud-samples-data/generative-ai/image/cricket.jpeg"},]})
### Video
agent.query(input={"input":[{"type":"text","text":"Describe the attached media in 5 words!"},{"type":"media","mime_type":"video/mp4","file_uri":"gs://cloud-samples-data/generative-ai/video/pixel8.mp4"},]})
### Audio
agent.query(input={"input":[{"type":"text","text":"Describe the attached media in 5 words!"},{"type":"media","mime_type":"audio/mp3","file_uri":"gs://cloud-samples-data/generative-ai/audio/pixel.mp3"},]})
For the list of MIME types supported by Gemini, visit the documentation on:

- [Image](/vertex-ai/generative-ai/docs/multimodal/image-understanding#image-requirements)
- [Video](/vertex-ai/generative-ai/docs/multimodal/video-understanding#video-requirements)
- [Audio](/vertex-ai/generative-ai/docs/multimodal/audio-understanding#audio-requirements)

Query the agent with a runnable configuration
---------------------------------------------

When querying the agent, you can also specify a `config` for the agent (which follows the schema of a [`RunnableConfig`](https://python.langchain.com/docs/concepts/runnables/#runnableconfig)). Two common scenarios are:

- Default configuration parameters:
  - `run_id` / `run_name`: identifier for the run.
  - `tags` / `metadata`: classifier for the run when [tracing with OpenTelemetry](/vertex-ai/generative-ai/docs/agent-engine/develop/custom#tracing).
- Custom configuration parameters (via `configurable`):
  - `session_id`: the session under which the run is happening (see [Store chat history](/vertex-ai/generative-ai/docs/agent-engine/develop/langchain#chat-history)).
  - `thread_id`: the thread under which the run is happening (see [Store checkpoints](/vertex-ai/generative-ai/docs/agent-engine/develop/langgraph#store-checkpoints)).

As an example:

    import uuid

    run_id = uuid.uuid4()  # Generate an ID for tracking the run later.

    response = agent.query(
        input="What is the exchange rate from US dollars to Swedish currency?",
        config={  # Specify the RunnableConfig here.
            "run_id": run_id,  # Optional.
            "tags": ["config-tag"],  # Optional.
            "metadata": {"config-key": "config-value"},  # Optional.
            "configurable": {"session_id": "SESSION_ID"},  # Optional.
        },
    )

    print(response)
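Because `stream_query` supports the same type of arguments, the same configuration can be reused when streaming a response. A minimal sketch (the exact structure of each streamed chunk depends on the agent):

    # Stream the response chunk by chunk, reusing the same RunnableConfig.
    for chunk in agent.stream_query(
        input="What is the exchange rate from US dollars to Swedish currency?",
        config={"tags": ["config-tag"], "configurable": {"session_id": "SESSION_ID"}},
    ):
        print(chunk)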
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[],[],null,["# Use a LangChain agent\n\nIn addition to the general instructions for [using an agent](/vertex-ai/generative-ai/docs/agent-engine/use),\nthis page describes features that are specific to `LangchainAgent`.\n\nBefore you begin\n----------------\n\nThis tutorial assumes that you have read and followed the instructions in:\n\n- [Develop a LangChain agent](/vertex-ai/generative-ai/docs/agent-engine/develop/langchain): to develop `agent` as an instance of `LangchainAgent`.\n- [User authentication](/vertex-ai/generative-ai/docs/agent-engine/set-up#authentication) to authenticate as a user for querying the agent.\n\nSupported operations\n--------------------\n\nThe following operations are supported for `LangchainAgent`:\n\n- [`query`](/vertex-ai/generative-ai/docs/agent-engine/use#query-agent): for getting a response to a query synchronously.\n- [`stream_query`](/vertex-ai/generative-ai/docs/agent-engine/use#stream-responses): for streaming a response to a query.\n\nBoth `query` and `stream_query` methods support the same type of arguments:\n\n- [`input`](#input-messages): the messages to be sent to the agent.\n- [`config`](#runnable-configuration): the configuration (if applicable) for the context of the query.\n\nQuery the agent\n---------------\n\nThe command: \n\n agent.query(input=\"What is the exchange rate from US dollars to SEK today?\")\n\nis equivalent to the following (in full form): \n\n agent.query(input={\n \"input\": [ # The input is represented as a list of messages (each message as a dict)\n {\n # The role (e.g. \"system\", \"user\", \"assistant\", \"tool\")\n \"role\": \"user\",\n # The type (e.g. \"text\", \"tool_use\", \"image_url\", \"media\")\n \"type\": \"text\",\n # The rest of the message (this varies based on the type)\n \"text\": \"What is the exchange rate from US dollars to Swedish currency?\",\n },\n ]\n })\n\nRoles are used to help the model distinguish between different types of [messages](https://python.langchain.com/docs/concepts/messages)\nwhen responding. When the `role` is omitted in the input, it defaults to `\"user\"`.\n\nThe `type` of the message will also determine how the rest of the message is\ninterpreted (see [Handle multi-modal content](#multimodal-content)).\n\nQuery the agent with multi-modal content\n----------------------------------------\n\nWe will use the following agent (which forwards the input to the model and does\nnot use any tools) to illustrate how to pass in multimodal inputs to an agent:\n**Note:** there isn't any known support for multi-modal outputs. \n\n agent = agent_engines.LangchainAgent(\n model=\"gemini-2.0-flash\",\n runnable_builder=lambda model, **kwargs: model,\n )\n\nMultimodal messages are represented through content blocks that specify a `type`\nand corresponding data. In general, for multimodal content, you would specify\nthe `type` to be `\"media\"`, the `file_uri` to point to a Cloud Storage URI,\nand the `mime_type` for interpreting the file. 
\n\n### Image\n\n agent.query(input={\"input\": [\n {\"type\": \"text\", \"text\": \"Describe the attached media in 5 words!\"},\n {\"type\": \"media\", \"mime_type\": \"image/jpeg\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/image/cricket.jpeg\"},\n ]})\n\n### Video\n\n agent.query(input={\"input\": [\n {\"type\": \"text\", \"text\": \"Describe the attached media in 5 words!\"},\n {\"type\": \"media\", \"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/video/pixel8.mp4\"},\n ]})\n\n### Audio\n\n agent.query(input={\"input\": [\n {\"type\": \"text\", \"text\": \"Describe the attached media in 5 words!\"},\n {\"type\": \"media\", \"mime_type\": \"audio/mp3\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/audio/pixel.mp3\"},\n ]})\n\nFor the list of MIME types supported by Gemini, visit the documentation on:\n\n- [Image](/vertex-ai/generative-ai/docs/multimodal/image-understanding#image-requirements)\n- [Video](/vertex-ai/generative-ai/docs/multimodal/video-understanding#video-requirements)\n- [Audio](/vertex-ai/generative-ai/docs/multimodal/audio-understanding#audio-requirements)\n\nQuery the agent with a runnable configuration\n---------------------------------------------\n\nWhen querying the agent, you can also specify a `config` for the agent (which\nfollows the schema of a [`RunnableConfig`](https://python.langchain.com/docs/concepts/runnables/#runnableconfig)).\nTwo common scenarios are:\n\n- Default configuration parameters:\n - `run_id` / `run_name`: identifier for the run.\n - `tags` / `metadata`: classifier for the run when [tracing with OpenTelemetry](/vertex-ai/generative-ai/docs/agent-engine/develop/custom#tracing).\n- Custom configuration parameters (via `configurable`):\n - `session_id`: the session under which the run is happening (see [Store chat history](/vertex-ai/generative-ai/docs/agent-engine/develop/langchain#chat-history)).\n - `thread_id`: the thread under which the run is happening (see [Store Checkpoints](/vertex-ai/generative-ai/docs/agent-engine/develop/langgraph#store-checkpoints)).\n\nAs an example: \n\n import uuid\n\n run_id = uuid.uuid4() # Generate an ID for tracking the run later.\n\n response = agent.query(\n input=\"What is the exchange rate from US dollars to Swedish currency?\",\n config={ # Specify the RunnableConfig here.\n \"run_id\": run_id # Optional.\n \"tags\": [\"config-tag\"], # Optional.\n \"metadata\": {\"config-key\": \"config-value\"}, # Optional.\n \"configurable\": {\"session_id\": \"SESSION_ID\"} # Optional.\n },\n )\n\n print(response)\n\nWhat's next\n-----------\n\n- [Use an agent](/vertex-ai/generative-ai/docs/agent-engine/use).\n- [Evaluate an agent](/vertex-ai/generative-ai/docs/agent-engine/evaluate).\n- [Manage deployed agents](/vertex-ai/generative-ai/docs/agent-engine/manage).\n- [Get support](/vertex-ai/generative-ai/docs/agent-engine/support)."]]