Get answers and follow-ups

This page describes search with answers and follow-ups for Vertex AI Search and shows you how to implement it for generic search apps by using method calls.

Search with summarized answers and follow-up questions is based on the answer method. The answer method replaces the summarization features of the older search method and all of the features of the deprecated converse method. The answer method also has some important additional features, such as the ability to handle complex queries.

Features of the answer method

The key features of the answer method are as follows:

  • The ability to generate answers to complex queries. For example, the answer method can break down compound queries, such as the following, into multiple smaller queries, which return better results that are used to power better answers:

    • "What was Google Cloud's and Google Ads' respective revenue in 2024?"
    • "How many years after its founding did Google reach one billion dollars in revenue?"
  • The ability to combine search and answer generation across multiple turns by calling the answer method in each turn.

  • The ability to pair with the search method to reduce search latency. You can call the search method and the answer method separately and render the search results and the answer in different iframes at different times. This means that you can show your users search results within milliseconds. You don't have to wait for the answer to be generated before you can show search results.

The answer and follow-up query capabilities can be divided into three phases: query, search, and answer.

When to use answer and when to use search

Vertex AI Search provides two methods for querying apps. They have different but overlapping features.

Use the answer method when:

  • You want an AI-generated answer (or summary) of the search results.

  • You want multi-turn search, that is, search that holds context so that follow-up questions are possible.

Use the search method when:

  • You want only search results, not a generated answer.

  • You want more than ten search results ("blue links") returned.

  • You have any of the following:

    • Media or healthcare data
    • Your own embeddings
    • Synonym or redirect controls
    • Facets
    • User country codes

Use the answer method and the search method together when:

  • You want to return more than ten search results and you also want a generated answer.

  • You have latency issues and want to return and display search results quickly, before the generated answer is returned, as shown in the sketch after this list.
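
The following is a minimal sketch of that pattern, using the answer endpoint shown later on this page; the search call's request body (the query and pageSize fields) is an assumption based on the standard search method and is not shown elsewhere on this page. The first call returns search results that you can display immediately; the second call generates the answer for the same query, which you can display when it arrives.

    # Return search results quickly (display these first).
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:search" \
      -d '{ "query": "QUERY", "pageSize": 10 }'

    # Generate the answer separately (display it when it arrives).
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{ "query": { "text": "QUERY" } }'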

Query phase features

The answer and follow-ups feature supports natural-language query processing.

This section describes and illustrates the various options for query rephrasing and classification.

Query rephrasing

Query rephrasing is on by default. This feature automatically chooses the best way to rephrase queries to improve search results. It can also handle queries that don't require rephrasing.

  • Breaks down complex queries into multiple queries and performs synchronous sub-querying.

    For example: A complex query is broken down into four smaller, simpler queries.

    User input: What jobs and hobbies do Andie Ram and Arnaud Clément have in common?
    Sub-queries created from the complex query:
    • Andie Ram occupation
    • Arnaud Clément occupation
    • Andie Ram hobbies
    • Arnaud Clément hobbies
  • Synthesizes multi-turn queries so that follow-up questions are context-aware and stateful.

    For example: The queries synthesized from user input at each turn might look like this:

    User input → Synthesized query
    Turn 1: laptop for school → laptop for school
    Turn 2: not a Mac → laptop for school, not a Mac
    Turn 3: bigger screen and I also need a wireless keyboard and mouse → bigger-screen laptop for school, not a Mac, with a wireless keyboard and mouse
    Turn 4: and a backpack for it → bigger-screen laptop for school, not a Mac, with a wireless keyboard and mouse, and a backpack
  • Simplifies long queries to improve retrieval.

    For example: A long query is shortened to a simple query.

    User input: I'm trying to find out why the "Add to Cart" button on our website isn't working properly. It seems that when a user clicks the button, the item isn't added to the cart and they receive an error message. I've checked the code and it seems to be correct, so I'm not sure what the issue could be. Can you help me troubleshoot this problem?
    Simplified query: "Add to Cart" button not working on the website.
  • Performs multi-step reasoning.

    Multi-step reasoning is based on the ReAct (reason + act) paradigm, which lets an LLM solve complex tasks by using natural-language reasoning. The maximum number of steps allowed is 5.

    For example:

    User input: How many years after its founding did Google reach $1 billion in revenue?

    The two steps used to generate the answer:

    Step 1
    [Thought]: I need to know when Google was founded, and then I can query its revenue since then.
    [Act] Search: When was Google founded?
    [Observe search results]: "1998"

    Step 2
    [Thought]: Now I need Google's annual revenue since 1998 to find out when it first exceeded $1 billion.
    [Act] Search: Google revenue since 1998
    [Observe search results]: Google revenue in 1998, Google revenue in 1999...
    [Answer]: Google was founded in 1998 [2], and its revenue exceeded $1 billion in 2003 [1], five years after its founding.

Query classification

Query classification options identify adversarial queries and non-answer-seeking queries. By default, the query classification options are off.

For more information about adversarial and non-answer-seeking queries, see Ignore adversarial queries and Ignore non-answer-seeking queries.

Search phase features

For search, the answer method has the same options as the search method.

Answer phase features

During the answer phase, when an answer is generated from the search results, you can enable capabilities such as the following:

  • Get citations that indicate the supporting sources for each sentence in the answer. For more information, see Add citations.

  • Customize the tone, style, and level of detail of the answer by using a prompt preamble. For more information, see Specify a custom preamble.

  • Choose the Vertex AI model used to generate the answer. For more information, see Answer generation model versions and lifecycle.

  • Choose whether to ignore queries that are classified as adversarial or non-answer-seeking.

    For more information about adversarial and non-answer-seeking queries, see Ignore adversarial queries and Ignore non-answer-seeking queries. Non-answer-seeking queries are also called non-summary-seeking queries.
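
As a hedged sketch of how these answer-phase options fit together, an answerGenerationSpec in the request body might combine the fields that the answer phase commands later on this page show individually:

    "answerGenerationSpec": {
      "includeCitations": true,
      "promptSpec": { "preamble": "Give a detailed answer." },
      "modelSpec": { "modelVersion": "MODEL_VERSION" },
      "ignoreAdversarialQuery": true,
      "ignoreNonAnswerSeekingQuery": true
    }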

Before you begin

Depending on the type of app that you have, complete the following requirements:

  • If you have a structured or unstructured search app, make sure that Advanced LLM features is turned on.

  • If you have a website search app, make sure that the following features are turned on:

  • If you have a blended search app (that is, an app that is connected to more than one data store), contact your Google account team and ask to be added to the allowlist for the answer API with blended search.

Search and answer (basic)

The following command shows how to call the answer method and return a generated answer and a list of search results, with links to the sources.

This command shows only the required input. The options are left at their default values.

REST

To search and get results with a generated answer:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"}
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query. For example, "Compare BigQuery with Spanner database?".

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response
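
As a usage sketch (assuming the sample function above and the discoveryengine_v1 response types), you can call the sample and read individual fields from the response instead of printing the whole object:

# Usage sketch: call the sample and inspect selected fields of the AnswerQueryResponse.
response = answer_query_sample(project_id, location, engine_id)
print(response.answer.answer_text)          # The generated answer text
for citation in response.answer.citations:  # Citation metadata (populated when include_citations=True)
    print(citation)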

Query phase commands

This section shows how to specify query-phase options for the answer method call.

Search and answer (rephrasing disabled)

The following command shows how to call the answer method and return a generated answer and a list of search results. The answer might differ from the preceding answer because the rephrasing option is disabled.

REST

To search and get results with a generated answer, without applying query rephrasing:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "queryUnderstandingSpec": {
               "queryRephraserSpec": {
                  "disable": true
            }
        }
          }'
    
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • QUERY: a free-text string that contains the question or search query. For example, "Compare BigQuery with Spanner database?".

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Search and answer (specify the maximum number of rephrase steps)

The following command shows how to call the answer method and return a generated answer and a list of search results. The answer differs from the preceding answers because the number of rephrase steps has been increased.

REST

To search and get results with a generated answer, allowing up to five rephrase steps:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "queryUnderstandingSpec": {
                "queryRephraserSpec": {
                    "maxRephraseSteps": MAX_REPHRASE
                 }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query. For example, "Compare BigQuery with Spanner database?".
    • MAX_REPHRASE: the maximum number of rephrase steps. The largest allowed value is 5. If this field isn't set, or is set to a value of less than 1, the value defaults to 1.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Search and answer with query classification

The following command shows how to call the answer method to ask whether a query is adversarial, non-answer-seeking, or neither.

The response includes the classification type of the query, but the answer itself is not affected by the classification. If you want to change the answer behavior according to the query type, you can do this in the answer phase. See Ignore adversarial queries and Ignore non-answer-seeking queries.

REST

To determine whether a query is adversarial or non-answer-seeking:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "queryUnderstandingSpec": {
                "queryClassificationSpec": {
                    "types": ["QUERY_CLASSIFICATION_TYPE"]
                 }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query. For example, "hello".
    • QUERY_CLASSIFICATION_TYPE: the query types that you want to identify: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY, or both.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Search phase commands: search and answer with search result options

This section shows how to specify options for the search phase portion of the answer method call, such as setting the maximum number of documents returned, boosting, and filtering, and how to get an answer when you supply your own search results.

The following command shows how to call the answer method and specify various options for how the search results are returned. (The search results are independent of the answer.)

REST

To set various options related to which search results are returned and how they are returned:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
              "searchSpec": {
              "searchParams": {
                "maxReturnResults": MAX_RETURN_RESULTS,
                "filter": "FILTER",
                "boostSpec": BOOST_SPEC,
                "orderBy": "ORDER_BY",
                "searchResultMode": SEARCH_RESULT_MODE
               }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query. For example, "Compare BigQuery with Spanner database?".
    • MAX_RETURN_RESULTS: the number of search results to return. The default value is 10.
    • FILTER: a filter that specifies which documents are queried. If a document's metadata meets the filter specification, then the document is queried. For more information, including filter syntax, see Filter generic search for structured or unstructured data.
    • BOOST_SPEC: a boost specification that lets you boost certain documents in the search results, which can affect the answer. For more information, including the syntax of the boost specification, see Boost search results.
    • ORDER_BY: the order in which documents are returned. Documents can be sorted by a field in a Document object. The orderBy expression is case-sensitive. If this field is unrecognizable, an INVALID_ARGUMENT error is returned.
    • SEARCH_RESULT_MODE: the search result mode that you want: DOCUMENTS or CHUNKS. For more information, see Parse and chunk documents and ContentSearchSpec. This field is available only in the v1alpha version of the API.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Answer phase commands

This section shows how to specify answer-phase options for the answer method call.

Ignore adversarial queries and non-answer-seeking queries

The following command shows how to avoid answering adversarial queries and non-answer-seeking queries when calling the answer method.

REST

To skip answering queries that are adversarial or non-answer-seeking:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "ignoreAdversarialQuery": true,
               "ignoreNonAnswerSeekingQuery": true
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Show only relevant answers

Vertex AI Search can assess how relevant the search results are to the query. If no results are judged to be sufficiently relevant, then instead of generating an answer from irrelevant or minimally relevant results, you can choose to return the fallback answer: "We do not have a summary for your query."

The following command shows how to return the fallback answer when irrelevant results are returned while calling the answer method.

REST

To return the fallback answer when no relevant results are found:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "ignoreLowRelevantContent": true
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Specify the answer model

The following command shows how to change the model version used to generate answers.

For information about the supported models, see Answer generation model versions and lifecycle.

REST

To generate an answer using a model that is different from the default model:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "modelSpec": {
                  "modelVersion": "MODEL_VERSION",
               }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query.
    • MODEL_VERSION: the model version that you want to use to generate the answer. For more information, see Answer generation model versions and lifecycle.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Specify a custom preamble

The following command shows how to set a preamble for the generated answer. A preamble contains natural-language instructions for customizing the answer. You can request customizations such as length, level of detail, style of output (such as "simple"), language of output, focus of the answer, and format (such as tables, bullets, and XML). For example, a preamble might be "Explain like you would to a ten year old kid."

The preamble can have a significant effect on the quality of the generated answer. For information about what to write in preambles and for examples of good preambles, see About custom preambles.

REST

To generate an answer using a custom preamble:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "promptSpec": {
                   "preamble": "PREAMBLE",
               }
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query.
    • PREAMBLE: a natural-language instruction for customizing the answer. For example, try show the answer format in an ordered list or give a very detailed answer.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Add citations

The following command shows how to request that citations be added to the answer.

REST

To include citations in the generated answer:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "includeCitations": INCLUDE_CITATIONS
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query.
    • INCLUDE_CITATIONS: specifies whether to include citation metadata in the answer. The default value is false.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Set the answer language code

The following command shows how to set the language code for answers.

REST

To generate an answer in a language other than the default:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "answerLanguageCode": "ANSWER_LANGUAGE_CODE"
               }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string that contains the question or search query.
    • ANSWER_LANGUAGE_CODE: a language code for the answer. Use language tags defined by BCP47: Tags for Identifying Languages.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-1.5-flash-001/answer_gen/v2",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Commands for follow-up questions

Follow-up queries are multi-turn queries. After the first query in a follow-up session, subsequent turns take prior interactions into account. With follow-ups, the answer method can also suggest related questions, which your users can choose instead of entering their own follow-up questions.

All of the answer and follow-up features described in the preceding sections, such as citations, filters, SafeSearch, ignoring certain types of queries, and using a preamble to customize answers, can be applied along with follow-ups.

Example of a follow-up session

The following is an example of a session with follow-ups. Suppose that you want to know about vacationing in Mexico:

  • Turn 1:

    • You: When is the best time of the year to vacation in Mexico?

    • Answer with follow-ups: The best time to vacation in Mexico is during the dry season, which runs from November to April.

  • Turn 2:

    • You: What is the exchange rate?

    • Answer with follow-ups: 1 USD is equal to approximately 17.65 Mexican pesos.

  • Turn 3:

    • You: What is the average temperature in December?

    • Answer with follow-ups: The average temperature varies from 70-78°F. Cancun's average is approximately 77°F.

Without follow-ups, there would be no answer to your question "What is the exchange rate?", because regular search would not know that you wanted the Mexican exchange rate. Similarly, without follow-ups, there would be no context indicating that you wanted the temperature specifically for Mexico.

When you ask "When is the best time of the year to vacation in Mexico?", in addition to answering your question, answer and follow-ups can suggest other questions that you might ask, such as "What is the cheapest month to vacation in Mexico?" and "What are the tourist months in Mexico?".

When the related questions feature is enabled, the questions are returned as strings in the ConverseConversationResponse.

About sessions

To understand how follow-ups work in Vertex AI Search, you need to understand sessions.

A session is made up of text queries and responses.

These query and response pairs are sometimes referred to as turns. In the preceding example, the second turn is made up of "What is the exchange rate?" and "1 USD is equal to approximately 17.65 Mexican pesos."

Sessions are stored with the app. In the app, a session is represented by the session resource.

In addition to containing the query and response messages, the session resource has:

  • A unique name (the session ID).

  • A state (in progress or completed).

  • A user pseudo ID, which is a visitor ID that tracks the user. It can be assigned programmatically.

  • A start time and an end time.

  • Turns, that is, query-answer pairs.
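
For example, you can inspect these fields by fetching a session resource directly. The following is a hedged sketch that assumes the sessions.get REST endpoint and the session resource path format shown in the next section:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID"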

Store session information and get responses

You can use the command line to generate search responses and answers and to store them, along with each query, in a session.

REST

To use the command line to create a session and generate responses from the user's input, follow these steps:

  1. Specify the app in which you want to store the session:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions" \
      -d '{
            "userPseudoId": "USER_PSEUDO_ID"
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.

    • APP_ID: the ID of the Vertex AI Search app.

    • USER_PSEUDO_ID: a unique identifier for tracking a search visitor. For example, you can implement it with an HTTP cookie, which uniquely identifies a visitor on a single device.

    Example command and result

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)"
    -H "Content-Type: application/json"
    "https://discoveryengine.googleapis.com/v1/projects/my-project-123/locations/global/collections/default_collection/engines/my-app/sessions"
    -d '{
    "userPseudoId": "test_user"
    }'
    
    { "name": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943", "state": "IN_PROGRESS", "userPseudoId": "test_user", "startTime": "2024-09-13T18:47:10.465311Z", "endTime": "2024-09-13T18:47:10.465311Z" }
  2. Note the session ID, which is the number at the end of the name: field in the JSON response. In the example result, the ID is 16002628354770206943. You need this ID in the next step.

  3. Generate an answer and add it to a session in your app:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "session": "projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID",
              "searchSpec":{ "searchParams": {"filter": "FILTER"} }
    }'
    
    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • QUERY: a free-text string that contains the question or search query.
    • SESSION_ID: the ID of the session that you created in step 1. This is the string of digits at the end of the name: field, which you noted in step 2. For a session, use the same session ID in every turn.
    • FILTER: a text field for filtering search with a filter expression. The default value is an empty string. The way you construct your filter varies depending on whether you have unstructured data with metadata, structured data, or website data. For more information, see Filter generic search for structured or unstructured data and Filter website search.

    Example command and result

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)"
    -H "Content-Type: application/json"
    "https://discoveryengine.googleapis.com/v1/projects/my-project-123/locations/global/collections/default_collection/engines/my-app/servingConfigs/default_search:answer"
    -d '{
    "query": { "text": "Compare bigquery with spanner database?"},
    "session":  "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943",
    }'
        
    { "answer": { "name": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943/answers/4861507376861383072", "state": "SUCCEEDED", "answerText": "BigQuery and Spanner are both powerful tools that can be used together to handle transactional and analytical workloads. Spanner is a fully managed relational database optimized for transactional workloads, while BigQuery is a serverless data warehouse designed for business agility. Spanner provides seamless replication across regions in Google Cloud and processes over 1 billion requests per second at peak. BigQuery analyzes over 110 terabytes of data per second. Users can leverage federated queries to read data from Spanner and write to a native BigQuery table. \n", "steps": [ { "state": "SUCCEEDED", "description": "Rephrase the query and search.", "actions": [ { "searchAction": { "query": "Compare bigquery with spanner database?" }, "observation": { "searchResults": [ { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/ecc0e7547253f4ca3ff3328ce89995af", "uri": "https://cloud.google.com/blog/topics/developers-practitioners/how-spanner-and-bigquery-work-together-handle-transactional-and-analytical-workloads", "title": "How Spanner and BigQuery work together to handle transactional and analytical workloads | Google Cloud Blog", "snippetInfo": [ { "snippet": "Using Cloud \u003cb\u003eSpanner\u003c/b\u003e and \u003cb\u003eBigQuery\u003c/b\u003e also allows customers to build their \u003cb\u003edata\u003c/b\u003e clouds using Google Cloud, a unified, open approach to \u003cb\u003edata\u003c/b\u003e-driven transformation ...", "snippetStatus": "SUCCESS" } ] }, { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/d7e238f73608a860e00b752ef80e2941", "uri": "https://cloud.google.com/blog/products/databases/cloud-spanner-gets-stronger-with-bigquery-federated-queries", "title": "Cloud Spanner gets stronger with BigQuery-federated queries | Google Cloud Blog", "snippetInfo": [ { "snippet": "As enterprises compete for market share, their need for real-time insights has given rise to increased demand for transactional \u003cb\u003edatabases\u003c/b\u003e to support \u003cb\u003edata\u003c/b\u003e ...", "snippetStatus": "SUCCESS" } ] }, { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/e10a5a3c267dc61579e7c00fefe656eb", "uri": "https://cloud.google.com/blog/topics/developers-practitioners/replicating-cloud-spanner-bigquery-scale", "title": "Replicating from Cloud Spanner to BigQuery at scale | Google Cloud Blog", "snippetInfo": [ { "snippet": "... \u003cb\u003eSpanner data\u003c/b\u003e into \u003cb\u003eBigQuery\u003c/b\u003e for analytics. In this post, you will learn how to efficiently use this feature to replicate large tables with high throughput ...", "snippetStatus": "SUCCESS" } ] }, ... { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/8100ad36e1cac149eb9fc180a41d8f25", "uri": "https://cloud.google.com/blog/products/gcp/from-nosql-to-new-sql-how-spanner-became-a-global-mission-critical-database", "title": "How Spanner became a global, mission-critical database | Google Cloud Blog", "snippetInfo": [ { "snippet": "... SQL \u003cb\u003evs\u003c/b\u003e. NoSQL dichotomy may no longer be relevant." 
The \u003cb\u003eSpanner\u003c/b\u003e SQL query processor, while recognizable as a standard implementation, has unique ...", "snippetStatus": "SUCCESS" } ] } ] } } ] } ] }, "session": { "name": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943", "state": "IN_PROGRESS", "userPseudoId": "test_user", "turns": [ { "query": { "queryId": "projects/123456/locations/global/questions/741830", "text": "Compare bigquery with spanner database?" }, "answer": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943/answers/4861507376861383072" } ], "startTime": "2024-09-13T18:47:10.465311Z", "endTime": "2024-09-13T18:47:10.465311Z" }, "answerQueryToken": "NMwKDAjFkpK3BhDU24uZAhIkNjZlNDIyZWYtMDAwMC0yMjVmLWIxMmQtZjQwMzA0M2FkYmNj" }
  4. Repeat step 3 for each new query in the session.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def create_session(
    project_id: str,
    location: str,
    engine_id: str,
    user_pseudo_id: str,
) -> discoveryengine.Session:
    """Creates a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        user_pseudo_id: A unique identifier for tracking visitors. For example, this
          could be implemented with an HTTP cookie, which should be able to
          uniquely identify a visitor on a single device.
    Returns:
        discoveryengine.Session: The newly created Session.
    """

    client = discoveryengine.ConversationalSearchServiceClient()

    session = client.create_session(
        # The full resource name of the engine
        parent=f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}",
        session=discoveryengine.Session(user_pseudo_id=user_pseudo_id),
    )

    # Send Session name in `answer_query()`
    print(f"Session: {session.name}")
    return session
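
A session created this way can then be referenced from an answer call so that every turn is stored in it. The following is a minimal sketch, assuming the v1 answer_query API and the default_search serving config; the answer_in_session name is illustrative, and the project, location, and engine values should be replaced with your own:

from google.cloud import discoveryengine_v1 as discoveryengine


def answer_in_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_name: str,
    query_text: str,
) -> discoveryengine.AnswerQueryResponse:
    """Sketch: generates an answer and stores the turn in an existing session."""

    client = discoveryengine.ConversationalSearchServiceClient()

    # Serving config of the app; default_search is the default serving config.
    serving_config = (
        f"projects/{project_id}/locations/{location}/collections/default_collection"
        f"/engines/{engine_id}/servingConfigs/default_search"
    )

    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text=query_text),
        # Reuse the session name returned by create_session() for every turn.
        session=session_name,
    )

    response = client.answer_query(request=request)

    print(f"Answer: {response.answer.answer_text}")
    return response

For example, call create_session() once and then pass session.name as session_name for each query in the conversation.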

Get a session from the data store

The following command shows how to call the get method and get a session from the data store.

REST

To get a session from a data store, do the following:

  1. Run the following curl command:

    curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • SESSION_ID: the ID of the session that you want to retrieve.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def get_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_id: str,
) -> discoveryengine.Session:
    """Retrieves a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        session_id: The ID of the session.
    """

    client = discoveryengine.ConversationalSearchServiceClient()

    # The full resource name of the session
    name = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions/{session_id}"

    session = client.get_session(name=name)

    print(f"Session details: {session}")
    return session
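
Once retrieved, the session's turns can be used to reconstruct the conversation history. A minimal sketch that calls get_session() defined above; the project, engine, and session IDs are placeholder values:

# Hypothetical usage of get_session() defined above.
session = get_session(
    project_id="my-project-123",
    location="global",
    engine_id="my-app",
    session_id="16002628354770206943",
)

# Each turn is a query-answer pair; the answer field holds the resource name
# of the generated answer.
for i, turn in enumerate(session.turns, start=1):
    print(f"Turn {i} query: {turn.query.text}")
    print(f"Turn {i} answer resource: {turn.answer}")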

Delete a session from the app

The following command shows how to call the delete method and delete a session from the data store.

By default, sessions older than 60 days are automatically deleted. However, if you want to delete a particular session, for example, if it contains sensitive content, use this API call to delete it.

REST

To delete a session from an app, do the following:

  1. Run the following curl command:

    curl -X DELETE -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • SESSION_ID: the ID of the session that you want to delete.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def delete_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_id: str,
) -> None:
    """Deletes a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        session_id: The ID of the session.
    """

    client = discoveryengine.ConversationalSearchServiceClient()

    # The full resource name of the session
    name = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions/{session_id}"

    client.delete_session(name=name)

    print(f"Session {name} deleted.")

Update a session

There are various reasons that you might want to update a session. For example, to do one of the following:

  • Mark a session as completed
  • Merge the messages from one session into another
  • Change a user's pseudo ID

The following command shows how to call the patch method and update a session in the data store.

REST

To update a session from an app, do the following:

  1. Run the following curl command:

    curl -X PATCH \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID?updateMask=state" \
      -d '{
            "state": "NEW_STATE"
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • SESSION_ID: the ID of the session that you want to update.
    • NEW_STATE: the new value for the state, for example, IN_PROGRESS.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine
from google.protobuf import field_mask_pb2


def update_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_id: str,
) -> discoveryengine.Session:
    """Updates a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        session_id: The ID of the session.
    Returns:
        discoveryengine.Session: The updated Session.
    """
    client = discoveryengine.ConversationalSearchServiceClient()

    # The full resource name of the session
    name = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions/{session_id}"

    session = discoveryengine.Session(
        name=name,
        state=discoveryengine.Session.State.IN_PROGRESS,  # Options: IN_PROGRESS, STATE_UNSPECIFIED
    )

    # Fields to Update
    update_mask = field_mask_pb2.FieldMask(paths=["state"])

    session = client.update_session(session=session, update_mask=update_mask)
    print(f"Updated session: {session.name}")
    return session
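
The same pattern can be used for the other update reasons listed above, such as changing a user's pseudo ID. The following is a minimal sketch under the assumption that user_pseudo_id can be patched in the same way as state; the session name and visitor ID are placeholders:

from google.cloud import discoveryengine_v1 as discoveryengine
from google.protobuf import field_mask_pb2

client = discoveryengine.ConversationalSearchServiceClient()

# Placeholder session resource name.
name = (
    "projects/my-project-123/locations/global/collections/default_collection"
    "/engines/my-app/sessions/16002628354770206943"
)

session = discoveryengine.Session(
    name=name,
    user_pseudo_id="new_visitor_id",  # placeholder visitor ID
)

# Only the field listed in the update mask is changed.
update_mask = field_mask_pb2.FieldMask(paths=["user_pseudo_id"])

updated = client.update_session(session=session, update_mask=update_mask)
print(f"Updated session: {updated.name}")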

List all sessions

The following command shows how to call the list method and list the sessions in the data store.

REST

To list the sessions for an app, do the following:

  1. Run the following curl command:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def list_sessions(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.ListSessionsResponse:
    """Lists all sessions associated with a data store.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
    Returns:
        discoveryengine.ListSessionsResponse: The list of sessions.
    """

    client = discoveryengine.ConversationalSearchServiceClient()

    # The full resource name of the engine
    parent = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}"

    response = client.list_sessions(
        request=discoveryengine.ListSessionsRequest(
            parent=parent,
            filter='state="IN_PROGRESS"',  # Optional: Filter requests by userPseudoId or state
            order_by="update_time",  # Optional: Sort results
        )
    )

    print("Sessions:")
    for session in response.sessions:
        print(session)

    return response

List sessions for a user

The following command shows how to call the list method and list the sessions associated with a user or visitor.

REST

To list the sessions associated with a user or visitor, do the following:

  1. Run the following curl command:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions?filter=userPseudoId=USER_PSEUDO_ID"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • USER_PSEUDO_ID: the pseudo ID of the user whose sessions you want to list.

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def list_sessions(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.ListSessionsResponse:
    """Lists all sessions associated with a data store.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
    Returns:
        discoveryengine.ListSessionsResponse: The list of sessions.
    """

    client = discoveryengine.ConversationalSearchServiceClient()

    # The full resource name of the engine
    parent = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}"

    response = client.list_sessions(
        request=discoveryengine.ListSessionsRequest(
            parent=parent,
            filter='state="IN_PROGRESS"',  # Optional: Filter requests by userPseudoId or state
            order_by="update_time",  # Optional: Sort results
        )
    )

    print("Sessions:")
    for session in response.sessions:
        print(session)

    return response
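
The Python sample above filters by state. To match the REST example for this section, the filter can instead target the visitor ID. This is a sketch that assumes the userPseudoId field name carries over from the REST query parameter, with a placeholder visitor ID; it replaces the request built inside list_sessions() above:

# Hypothetical: list sessions for a specific visitor instead of a state.
response = client.list_sessions(
    request=discoveryengine.ListSessionsRequest(
        parent=parent,
        filter='userPseudoId="test_user"',  # assumed filter syntax, placeholder ID
        order_by="update_time",
    )
)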

List sessions for a user and state

The following command shows how to call the list method and list sessions in a given state for a particular user.

REST

To list sessions that are open or closed and associated with a given user or visitor, do the following:

  1. Run the following curl command:

    curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions?filter=userPseudoId=USER_PSEUDO_ID%20AND%20state=STATE"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • USER_PSEUDO_ID: the pseudo ID of the user whose sessions you want to list.
    • STATE: the state of the session: STATE_UNSPECIFIED (closed or unknown) or IN_PROGRESS (open).

Python

For more information, see the Vertex AI Agent Builder Python API reference documentation.

To authenticate to Vertex AI Agent Builder, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def list_sessions(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.ListSessionsResponse:
    """Lists all sessions associated with a data store.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
    Returns:
        discoveryengine.ListSessionsResponse: The list of sessions.
    """

    client = discoveryengine.ConversationalSearchServiceClient()

    # The full resource name of the engine
    parent = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}"

    response = client.list_sessions(
        request=discoveryengine.ListSessionsRequest(
            parent=parent,
            filter='state="IN_PROGRESS"',  # Optional: Filter requests by userPseudoId or state
            order_by="update_time",  # Optional: Sort results
        )
    )

    print("Sessions:")
    for session in response.sessions:
        print(session)

    return response
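
As in the previous section, the filter can combine the visitor ID with the state. A sketch, assuming the same AND syntax and userPseudoId field name as the REST example above, with a placeholder visitor ID; it replaces the request built inside list_sessions() above:

# Hypothetical: combine a visitor-ID filter with a state filter.
response = client.list_sessions(
    request=discoveryengine.ListSessionsRequest(
        parent=parent,
        filter='userPseudoId="test_user" AND state="IN_PROGRESS"',  # assumed syntax
        order_by="update_time",
    )
)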