Get answers and follow-ups

This page introduces the search-with-answer and follow-up question features of Vertex AI Search and shows how to implement them for custom search apps using method calls.

Search with answers and follow-ups is based on the answer method. The answer method replaces the summarization features of the older search method and all the features of the deprecated converse method. The answer method also has some important additional features, such as the ability to handle complex queries.

Features of the answer method

The key features of the answer method are as follows:

  • Ability to generate answers to complex queries. For example, the answer method can break compound queries, such as the following, into multiple smaller queries to return better results, which in turn produce a better answer:

    • "What were the respective revenues of Google Cloud and Google Ads in 2024?"
    • "How many years after its founding did Google reach $1 billion in revenue?"
  • Ability to combine search and answer generation in a multi-turn conversation by calling the answer method in each turn.

  • Ability to pair with the search method to reduce search latency. You can call the search method and the answer method separately and render the search results and the answer in different iframes at different times. This means you can show your users search results (the ten blue links) within milliseconds, without waiting for the answer to be generated. (For a minimal sketch of this pattern, see the example that follows this list.)
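
The following is a minimal sketch of that decoupled pattern in Python, assuming the same client library and serving config used by the samples later on this page. The PROJECT_ID and APP_ID placeholders, the sample query, and the printed fields are illustrative only.

from google.cloud import discoveryengine_v1 as discoveryengine

# Illustrative serving config; replace PROJECT_ID and APP_ID with your values.
serving_config = (
    "projects/PROJECT_ID/locations/global/collections/default_collection"
    "/engines/APP_ID/servingConfigs/default_serving_config"
)

# 1. Call the search method first and render these results right away.
search_client = discoveryengine.SearchServiceClient()
search_response = search_client.search(
    discoveryengine.SearchRequest(
        serving_config=serving_config,
        query="What is Vertex AI Search?",
        page_size=10,
    )
)
for result in search_response:
    print(result.document.name)  # Render the "ten blue links" immediately.

# 2. Call the answer method separately; render the answer whenever it arrives.
answer_client = discoveryengine.ConversationalSearchServiceClient()
answer_response = answer_client.answer_query(
    discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
    )
)
print(answer_response.answer.answer_text)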

The answer and follow-ups features are organized into three phases: query, search, and answer. These phases are described in the sections that follow.

When to use answer and when to use search

Vertex AI Search has two methods for querying apps. They have different but overlapping features.

Use the answer method when:

  • You want an AI-generated answer (or summary) of the search results.

  • You want multi-turn search, that is, search that keeps context so users can ask follow-up questions.

Use the search method when:

  • You only need search results, not a generated answer.

  • You have any of the following:

    • Media or healthcare data
    • Your own embeddings
    • Synonym or redirect controls
    • Facets
    • User country codes
  • You need to browse data in generic data stores.

Use both the answer method and the search method when:

  • You want more than ten search results, and you also want a generated answer.

  • You have latency concerns and want to return and display search results quickly, before the generated answer is returned.

Query phase features

The answer and follow-ups features support natural-language query processing.

This section describes and illustrates the various options for query rephrasing and query classification.

Query rephrasing

Query rephrasing is on by default. This feature automatically chooses the best way to rephrase queries to improve search results. The feature can also handle queries that don't need rephrasing.

  • Break a complex query into multiple queries and perform the sub-queries synchronously.

    For example: a complex query is broken into four smaller, simpler queries.

    User input: What do Andie Ram and Arnaud Clément have in common in their professions and hobbies?

    Sub-queries created from the complex query:
    • Andie Ram profession
    • Arnaud Clément profession
    • Andie Ram hobbies
    • Arnaud Clément hobbies
  • Synthesize multi-turn queries, so that follow-up questions are context-aware and stateful.

    For example, the queries synthesized from the user's input in each turn might look like this:

    User input and synthesized query per turn:
    • Turn 1: "laptops for school" → laptops for school
    • Turn 2: "not mac" → laptops for school, not mac
    • Turn 3: "bigger screen and I also need a wireless keyboard and mouse" → laptops for school with a bigger screen, not mac, with a wireless keyboard and mouse
    • Turn 4: "and a backpack" → laptops for school with a bigger screen, not mac, with a wireless keyboard and mouse, and a backpack for the laptop
  • Simplify long queries to improve retrieval (requires advanced LLM features).

    For example: a long query is shortened to a typical query.

    User input: The "add to cart" button on our website isn't working properly, and I'm trying to figure out why. It seems that when a user clicks the button, the item isn't added to the cart and the user gets an error message. I've checked the code and it seems correct, so I'm not sure what the issue could be. Can you help me troubleshoot this problem?

    Simplified query: "add to cart" button on the website isn't working.

  • Perform multi-step reasoning.

    Multi-step reasoning is based on the ReAct (reason + act) paradigm, which lets the LLM solve complex tasks using natural-language reasoning. By default, the maximum number of steps is five.

    For example:

    User input: How many years after its founding did Google reach $1 billion in revenue?

    Steps taken to generate the answer:

    Step 1
    [Thought]: I need to know when Google was founded, then I can query its revenue since then.
    [Act] Search: When was Google founded?
    [Observe search results]: "1998"

    Step 2
    [Thought]: Now I need to search Google's annual revenue since 1998 and find out when it first exceeded $1 billion.
    [Act] Search: Google revenue since 1998
    [Observe search results]: Google revenue in 1998, Google revenue in 1999…
    [Answer]: Google was founded in 1998 [2] and exceeded $1 billion in revenue in 2003, five years after its founding [1].

    Multi-step reasoning requires advanced LLM features.

Query classification

The query classification options identify adversarial queries and non-answer-seeking queries. By default, the query classification options are off.

For more information about adversarial and non-answer-seeking queries, see Ignore adversarial queries and Ignore non-summary seeking queries.

Search phase features

For search, the answer method has the same options as the search method.

Answer phase features

During the answer phase, when answers are generated from the search results, you can enable the same features as in the search method. For example:

  • Get citations that indicate the sources for each sentence in the answer. For more information, see Add citations.

  • Use a prompt preamble to customize the tone, style, and level of detail of the answer. For more information, see Specify a custom preamble.

  • Choose the Vertex AI model used to generate answers. For more information, see Answer generation model versions and lifecycle.

  • Choose whether to ignore queries that are classified as adversarial or non-answer-seeking.

    For more information about adversarial and non-answer-seeking queries, see Ignore adversarial queries and Ignore non-summary seeking queries. Non-answer-seeking queries are also called non-summary-seeking queries.

Additional answer phase features that aren't available in the search method include:

  • Get a support score for each claim (a sentence in the generated answer). The support score is a floating-point value in the range [0,1] that indicates how well the claim is grounded in the data in the data store. For more information, see Return grounding support scores.

  • Get an aggregated support score for the answer. The support score indicates how well the answer is grounded in the data in the data store. For more information, see Return grounding support scores.

  • Return only well-grounded answers. You can choose to return only answers that meet a certain support-score threshold. For more information, see Show only well-grounded answers.

  • Choose to return related questions. Related questions are suggestions that your users can select instead of typing their own questions.

  • Add personalization information to queries so that answers are tailored to individual users. For more information, see Personalize answers.

To receive multimodal answers that include charts or images in addition to text, the following options are available:

  • Get answers that include charts plotting the data contained in the answers. For more information, see Generate charts for answers.

  • Retrieve images from the data store. If the data store contains images, the answer method can return images in the answer. Images from the data store can also be returned in citations, if citations are requested. For more information, see Retrieve existing images from the data store.

Before you begin

Depending on the type of app you have, complete the following requirements:

  • If you have a structured, unstructured, or website search app, turn on the following:

    • Enterprise edition features: These give you the core generative answer capabilities. This includes all answer generation capabilities except the advanced generative answer features, such as related questions, query simplification, multi-turn queries, and multimodal answers that return images and charts.
    • Advanced LLM features: These give you the advanced generative answer capabilities, such as multi-step reasoning, query simplification, multi-turn queries, related questions, and multimodal answers that return images and charts.
  • Additionally, if you have a website search data store, turn on Advanced website indexing.

Search and answer (basic)

The following command shows how to call the answer method and return a generated answer and a list of search results with links to the sources.

This command shows only the required inputs. The options are left at their defaults.

REST

To search and get results with a generated answer:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"}
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query. For example, "Compare the BigQuery and Spanner databases?".

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Query phase commands

This section shows how to specify options for the query phase of an answer method call.

Search and answer (rephrasing disabled)

The following command shows how to call the answer method and return a generated answer and a list of search results. Because the rephrasing option is disabled, the answer might differ from the preceding one.

REST

To search and get results with a generated answer, without applying query rephrasing:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "queryUnderstandingSpec": {
               "queryRephraserSpec": {
                  "disable": true
            }
        }
          }'
    
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • QUERY: a free-text string containing the question or search query. For example, "Compare the BigQuery and Spanner databases?".

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Search and answer (specify the maximum number of steps)

The following command shows how to call the answer method and return a generated answer and a list of search results. The answer differs from the preceding one because the number of rephrase steps has been increased.

REST

To search and get results with a generated answer, allowing up to five rephrase steps:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "queryUnderstandingSpec": {
                "queryRephraserSpec": {
                    "maxRephraseSteps": MAX_REPHRASE
                 }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query. For example, "Compare the BigQuery and Spanner databases?".
    • MAX_REPHRASE: the maximum number of rephrase steps. The largest allowed value is 5. If unset, or if set to a value less than 1, the default of 1 is used.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Search and answer with query classification

The following command shows how to call the answer method to determine whether a query is adversarial, non-answer-seeking, or neither.

The response includes the classification type for the query, but the answer itself isn't affected by the classification. If you want to change the answer behavior according to the query type, you can do so in the answer phase. See Ignore adversarial queries and Ignore non-summary seeking queries.

REST

To determine whether a query is adversarial or non-answer-seeking:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "queryUnderstandingSpec": {
                "queryClassificationSpec": {
                    "types": ["QUERY_CLASSIFICATION_TYPE"]
                 }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query. For example, "hello".
    • QUERY_CLASSIFICATION_TYPE: the query types to identify: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY, or both.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Search phase commands: search and answer with search result options

This section shows how to specify options for the search phase of an answer method call, such as setting the maximum number of documents returned, boosting, and filtering, and how to get an answer when you supply your own search results.

The following command shows how to call the answer method and specify various options for how the search results are returned. (The search results are independent of the answer.)

REST

To set various options related to which search results are returned and how they're returned:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
              "searchSpec": {
              "searchParams": {
                "maxReturnResults": MAX_RETURN_RESULTS,
                "filter": "FILTER",
                "boostSpec": BOOST_SPEC,
                "orderBy": "ORDER_BY",
                "searchResultMode": SEARCH_RESULT_MODE
               }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query. For example, "Compare the BigQuery and Spanner databases?"
    • MAX_RETURN_RESULTS: the number of search results to return. The default value is 10. The maximum value is 25.
    • FILTER: a filter that specifies which documents are queried. A document is queried if its metadata meets the filter specification. For more information, including filter syntax, see Filter custom search for structured or unstructured data.
    • BOOST_SPEC: a boost specification that lets you boost certain documents in the search results, which can affect the answer. For more information, including boost specification syntax, see Boost search results.
    • ORDER_BY: the order in which documents are returned. Documents can be sorted by a field in a Document object. The orderBy expression is case-sensitive. If this field is unrecognizable, an INVALID_ARGUMENT error is returned.
    • SEARCH_RESULT_MODE: the search result mode: DOCUMENTS or CHUNKS. For more information, see Parse and chunk documents and ContentSearchSpec. This field is only available in the v1alpha version of the API.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response
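
The generic sample above doesn't set any search phase options. The following is a minimal sketch of attaching them to the request, assuming that the v1 Python client exposes the same fields as the REST searchSpec.searchParams shown earlier; the filter expression and order-by field are illustrative, and the sketch reuses the client, serving_config, and answer_generation_spec defined in the sample above.

# Hypothetical sketch: search phase options on an AnswerQueryRequest, assuming
# the Python field names mirror the REST searchSpec.searchParams shown above.
search_spec = discoveryengine.AnswerQueryRequest.SearchSpec(
    search_params=discoveryengine.AnswerQueryRequest.SearchSpec.SearchParams(
        max_return_results=10,               # Up to 25 results
        filter='category: ANY("persons")',   # Illustrative filter expression
        order_by="date",                     # Illustrative Document field
    )
)

request = discoveryengine.AnswerQueryRequest(
    serving_config=serving_config,
    query=discoveryengine.Query(text="What is Vertex AI Search?"),
    search_spec=search_spec,
    answer_generation_spec=answer_generation_spec,
)
response = client.answer_query(request)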

Answer phase commands

This section shows how to customize answer method calls. You can combine the following options as needed.

Ignore adversarial and non-answer-seeking queries

The following command shows how to avoid answering adversarial queries and non-answer-seeking queries when calling the answer method.

REST

To skip answering adversarial and non-answer-seeking queries:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "ignoreAdversarialQuery": true,
               "ignoreNonAnswerSeekingQuery": true
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Show only relevant answers

Vertex AI Search can assess how relevant the search results are to a query. If no results are sufficiently relevant, you can choose to return the fallback answer "We do not have a summary for your query." instead of an answer generated from irrelevant or minimally relevant results.

The following command shows how to return the fallback answer when the answer method call encounters irrelevant results.

REST

To return a fallback answer when no relevant results are found:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "ignoreLowRelevantContent": true
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Return grounding support scores

The following command shows how to return grounding support scores for answers and claims.

For information about grounding in Vertex AI, see Check grounding with RAG. The groundingConfigs.check method is called by the answer method.

REST

To return a support score for each claim (each sentence in the answer) and an aggregated support score for the whole answer:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "groundingSpec": {
               "includeGroundingSupports": true,
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.

Show only well-grounded answers

The following command shows how to return only answers that are considered well grounded in the corpus (the information in the data store). Poorly grounded answers are filtered out.

You choose a low or high threshold for the grounding support score, and the answer is returned only if it reaches or exceeds that level. You can experiment with the two filtering thresholds, and with no threshold at all, to determine which filtering level is likely to give your users the best results.

For information about grounding in Vertex AI, see Check grounding with RAG. The groundingConfigs.check method is called by the answer method.

REST

To return an answer only when it meets a support-score threshold:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "groundingSpec": {
               "filteringLevel": "FILTER_LEVEL"
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.
    • FILTER_LEVEL: an enum for filtering answers based on the grounding support score. The options are FILTERING_LEVEL_LOW and FILTERING_LEVEL_HIGH. If filteringLevel isn't included, no support-score filtering is applied to the answer.
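
Neither this section nor the previous one includes a Python sample. The following is a minimal, hypothetical sketch of both grounding options, assuming that the v1 Python client exposes the same fields as the REST groundingSpec shown above; the threshold choice is illustrative, and the sketch reuses the client and serving_config from the earlier samples.

# Hypothetical sketch: grounding options on an AnswerQueryRequest, assuming the
# Python field names mirror the REST groundingSpec shown above.
grounding_spec = discoveryengine.AnswerQueryRequest.GroundingSpec(
    include_grounding_supports=True,  # Return per-claim and aggregate support scores
    filtering_level=(
        discoveryengine.AnswerQueryRequest.GroundingSpec.FilteringLevel.FILTERING_LEVEL_LOW
    ),  # Only return answers that meet the low support-score threshold
)

request = discoveryengine.AnswerQueryRequest(
    serving_config=serving_config,
    query=discoveryengine.Query(text="What is Vertex AI Search?"),
    grounding_spec=grounding_spec,
)
response = client.answer_query(request)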

Specify the answer model

The following command shows how to change the model version used to generate answers.

For information about the supported models, see Answer generation model versions and lifecycle.

REST

To generate an answer using a model other than the default model:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "modelSpec": {
                  "modelVersion": "MODEL_VERSION",
               }
             }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.
    • MODEL_VERSION: the model version that you want to use to generate the answer. For more information, see Answer generation model versions and lifecycle.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Specify a custom preamble

The following command shows how to set a preamble for the generated answer. A preamble contains natural-language instructions for customizing the answer. You can request customizations such as length, level of detail, style of output (such as "simple"), language of output, focus of the answer, and format (such as tables, bullets, and XML). For example, a preamble might be "Explain it in a way a ten-year-old would understand."

The preamble can have a significant effect on the quality of the generated answer. For information about what to write in preambles, and for examples of good preambles, see About custom preambles.

REST

To generate an answer with a custom preamble:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "promptSpec": {
                   "preamble": "PREAMBLE",
               }
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.
    • PREAMBLE: a natural-language instruction for customizing the answer. For example, try show the answer format in an ordered list or give a very detailed answer.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Return related questions

The answer method can suggest related questions that your users can select instead of typing their own questions. For example, when you ask "What is the best time of year to vacation in Mexico?", in addition to answering your question, the answer method can suggest other questions you might ask, such as "What is the cheapest month to vacation in Mexico?" and "What are the tourist months in Mexico?".

To receive related questions, you must request them in each query; they're returned as an array of strings in the response.

Before you begin

Make sure that you have turned on advanced LLM features for the app.

Procedure

The following command shows how to request that related questions be included with the answer.

REST

To get related questions along with a generated answer:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "relatedQuestionsSpec": { "enable": true }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query. For example, "What kinds of data can I import into Vertex AI Search?".
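
This section has no Python sample. The following minimal sketch assumes that the v1 Python client mirrors the REST relatedQuestionsSpec field shown above and that related questions are exposed on the answer as a list of strings; it reuses the client and serving_config from the earlier samples, and the related_questions field name is an assumption.

# Hypothetical sketch: request related questions, assuming the Python field
# names mirror the REST relatedQuestionsSpec shown above.
request = discoveryengine.AnswerQueryRequest(
    serving_config=serving_config,
    query=discoveryengine.Query(
        text="What kinds of data can I import into Vertex AI Search?"
    ),
    related_questions_spec=discoveryengine.AnswerQueryRequest.RelatedQuestionsSpec(
        enable=True  # Ask for related questions in the response
    ),
)
response = client.answer_query(request)

# Related questions are returned as strings alongside the generated answer.
for question in response.answer.related_questions:
    print(question)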

Include citations

The following command shows how to request that citations be included in the answer.

REST

To include citations in the answer:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "includeCitations": INCLUDE_CITATIONS
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.
    • INCLUDE_CITATIONS: specifies whether to include citation metadata in the answer. The default value is false.

Python

For more information, see the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response
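
The sample above prints the entire response. The following is a minimal sketch of pulling the generated answer text out of the returned object; the answer_text and session field names are assumptions that mirror the answerText and session fields shown in the REST responses later on this page.

# Minimal sketch (assumed field names mirroring the REST response):
# extract the generated answer text and, if present, the session name.
response = answer_query_sample(project_id, location, engine_id)
print(response.answer.answer_text)
if response.session.name:
    print(f"Session: {response.session.name}")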

Set the answer language code

The following command shows how to set the language code for the answer.

REST

To set the language code for the answer, do the following:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
               "answerLanguageCode": "ANSWER_LANGUAGE_CODE"
               }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.
    • ANSWER_LANGUAGE_CODE: the language code for the answer. Use language tags defined in BCP47: Tags for Identifying Languages.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1 as discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"                    # Values: "global", "us", "eu"
# engine_id = "YOUR_APP_ID"


def answer_query_sample(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.AnswerQueryResponse:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.ConversationalSearchServiceClient(
        client_options=client_options
    )

    # The full resource name of the Search serving config
    serving_config = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_serving_config"

    # Optional: Options for query phase
    # The `query_understanding_spec` below includes all available query phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/QueryUnderstandingSpec
    query_understanding_spec = discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec(
        query_rephraser_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryRephraserSpec(
            disable=False,  # Optional: Disable query rephraser
            max_rephrase_steps=1,  # Optional: Number of rephrase steps
        ),
        # Optional: Classify query types
        query_classification_spec=discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec(
            types=[
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.ADVERSARIAL_QUERY,
                discoveryengine.AnswerQueryRequest.QueryUnderstandingSpec.QueryClassificationSpec.Type.NON_ANSWER_SEEKING_QUERY,
            ]  # Options: ADVERSARIAL_QUERY, NON_ANSWER_SEEKING_QUERY or both
        ),
    )

    # Optional: Options for answer phase
    # The `answer_generation_spec` below includes all available answer phase options.
    # For more details, refer to https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1/AnswerGenerationSpec
    answer_generation_spec = discoveryengine.AnswerQueryRequest.AnswerGenerationSpec(
        ignore_adversarial_query=False,  # Optional: Ignore adversarial query
        ignore_non_answer_seeking_query=False,  # Optional: Ignore non-answer seeking query
        ignore_low_relevant_content=False,  # Optional: Return fallback answer when content is not relevant
        model_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.ModelSpec(
            model_version="gemini-2.0-flash-001/answer_gen/v1",  # Optional: Model to use for answer generation
        ),
        prompt_spec=discoveryengine.AnswerQueryRequest.AnswerGenerationSpec.PromptSpec(
            preamble="Give a detailed answer.",  # Optional: Natural language instructions for customizing the answer.
        ),
        include_citations=True,  # Optional: Include citations in the response
        answer_language_code="en",  # Optional: Language code of the answer
    )

    # Initialize request argument(s)
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text="What is Vertex AI Search?"),
        session=None,  # Optional: include previous session ID to continue a conversation
        query_understanding_spec=query_understanding_spec,
        answer_generation_spec=answer_generation_spec,
        user_pseudo_id="user-pseudo-id",  # Optional: Add user pseudo-identifier for queries.
    )

    # Make the request
    response = client.answer_query(request)

    # Handle the response
    print(response)

    return response

Personalize answers

If you have information that's specific to a user, such as data from their profile, you can specify that information in the endUserMetadata object so that query results can be personalized for that user.

For example, if a signed-in user is searching for information about upgrading their phone, information from their profile, such as their current phone model and mobile plan, can be used to personalize the generated answer.

To add personal information about the user making the query and generate an answer that takes that information into account, do the following; a Python sketch follows the variable descriptions below:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
    -d '{
        "query": { "text": "QUERY"},
        "endUserSpec": {
           "endUserMetadata": [
             {
               "chunkInfo": {
                  "content": "PERSONALIZED_INFO",
                  "documentMetadata":  { "title": "INFO_DESCRIPTION"}
               }
             }
           ]
        }
      }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string containing the question or search query.
    • PERSONALIZED_INFO: a string containing information specific to the user making the query. For example, This customer has a Pixel 6 Pro purchased over a period of 24-months starting 2023-01-15. This customer is on the Business Plus International plan. No payment is due at this time. The maximum length of this string is 8,000 characters.
    • INFO_DESCRIPTION: a string that briefly describes the personalized information, for example, Customer profile data, including model, plan, and billing status. The model uses both this description and the personalized information when it generates a customized answer to the query.
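
The request body above can also be sent from Python. The following is a minimal sketch that mirrors the curl request by calling the REST endpoint directly with the requests library and Application Default Credentials; the payload keys come from the request above, while the query text and profile string are placeholder values.

import google.auth
import google.auth.transport.requests
import requests

# TODO(developer): Replace these values.
project_id = "YOUR_PROJECT_ID"
app_id = "YOUR_APP_ID"

# Obtain an access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

url = (
    "https://discoveryengine.googleapis.com/v1/"
    f"projects/{project_id}/locations/global/collections/default_collection/"
    f"engines/{app_id}/servingConfigs/default_search:answer"
)

# The payload mirrors the curl example: the personalized information is passed
# in endUserSpec.endUserMetadata as a chunk with a short description.
payload = {
    "query": {"text": "Which phone upgrade options are available to me?"},
    "endUserSpec": {
        "endUserMetadata": [
            {
                "chunkInfo": {
                    "content": "This customer has a Pixel 6 Pro on the Business Plus International plan.",
                    "documentMetadata": {"title": "Customer profile data"},
                }
            }
        ]
    },
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=payload,
)
print(response.json())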

Generate charts for answers

The answer method can generate charts and return them as part of the answer to a query.

You can ask specifically for a chart to be included in the answer, for example, "Plot the year-over-year growth rate of small business payments for the years with available data." If the system determines that there is enough data, a chart is returned. Usually, some answer text is returned along with the chart.

In addition, if there is enough data to create a chart, the answer method can return a chart even if the query didn't explicitly ask for one. For example, "What was the correlation between improvement in HDI scores and more people getting access to clean drinking water in the decade from 2010 to 2020?"

Only one chart is generated per answer. However, the chart can be a composite chart that contains other, smaller charts. Example of a composite chart:

A composite chart containing four smaller charts

Limitations

Queries must be in English.

Common failure scenarios

You won't necessarily get an answer that includes a chart. If there isn't enough data, a chart can't be generated.

Other failure scenarios include code execution failures and timeouts. If either of these occurs, rephrase your query and try again.

Before you begin

Before you run a query that requests chart generation, do the following:

  • Make sure that you have turned on Advanced LLM features for your app.

  • Make sure that you're using a Gemini 2.0 model or later. For information about the models, see Answer generation model versions and lifecycle.

  • If you have an unstructured data store with documents that contain many tables and figures, turn on the layout parser. This isn't strictly required, but it produces better results.

Procedure

REST

Call the answer method as follows to return an answer that can include a chart generated from data in the data store; a Python sketch follows the variable descriptions below:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1beta/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
              "model_spec": {
                 "model_version": "MODEL_VERSION"
             },
              "multimodalSpec": {
                 "imageSource": "IMAGE_SOURCE"
                 }
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string in English containing the question or search query.
    • MODEL_VERSION: model version gemini-2.0-flash-001/answer_gen/v1 or later. For more information, see Answer generation model versions and lifecycle.
    • IMAGE_SOURCE: an enum that requests either that the answer include a generated chart (FIGURE_GENERATION_ONLY), or that the answer can include either a generated chart or an existing image from the data store (ALL_AVAILABLE_SOURCES).
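
The following is a minimal Python sketch of the same request, using the requests library and Application Default Credentials as in the personalization sketch earlier on this page. The payload keys mirror the curl command above; the query text is a placeholder value, and the request targets the v1beta endpoint as in the curl command.

import google.auth
import google.auth.transport.requests
import requests

# TODO(developer): Replace these values.
project_id = "YOUR_PROJECT_ID"
app_id = "YOUR_APP_ID"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

# Chart generation uses the v1beta endpoint, as in the curl command above.
url = (
    "https://discoveryengine.googleapis.com/v1beta/"
    f"projects/{project_id}/locations/global/collections/default_collection/"
    f"engines/{app_id}/servingConfigs/default_search:answer"
)

# Payload keys copied from the curl example above.
payload = {
    "query": {"text": "Plot the year-over-year growth rate of small business payments."},
    "answerGenerationSpec": {
        "model_spec": {"model_version": "gemini-2.0-flash-001/answer_gen/v1"},
        "multimodalSpec": {"imageSource": "FIGURE_GENERATION_ONLY"},
    },
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=payload,
)
print(response.json())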

Retrieve existing images from the data store

You can choose to have images from your data store returned in answers and in the citation references. The data store must be an unstructured data store with the layout parser turned on.

To get images in returned answers, you must turn on Advanced LLM features.

When imageSource is CORPUS_IMAGE_ONLY or ALL_AVAILABLE_SOURCES, the answer method can retrieve images from the data store as appropriate. However, turning this option on doesn't mean that an image is always returned.

At most one image is returned per answer. Citations can contain multiple images.

Limitations

  • The app that you use must be connected to an unstructured data store. Images can't be returned from websites or from structured data stores.

  • Queries must be in English.

  • Image annotation through the layout parser must be applied to the data store. For information about the layout parser, see Parse and chunk documents.

Procedure

REST

Call the answer method as follows to return an answer that can include an image from the data store:

  1. Run the following curl command:

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1beta/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "answerGenerationSpec": {
              "model_spec": {
                 "model_version": "MODEL_VERSION"
              },
              "includeCitations": true,
              "multimodalSpec": {
                 "imageSource": "IMAGE_SOURCE"
                 }
            }
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app that you want to query.
    • QUERY: a free-text string in English containing the question or search query.
    • MODEL_VERSION: model version gemini-2.0-flash-001/answer_gen/v1 or later. For more information, see Answer generation model versions and lifecycle.
    • IMAGE_SOURCE: an enum that requests either that the answer include an image from the data store (CORPUS_IMAGE_ONLY), or that the answer can include either an image from the data store or a generated chart (ALL_AVAILABLE_SOURCES).

Commands for follow-ups

Follow-ups are multi-turn queries. After the first query in a follow-up session, subsequent "turns" take prior interactions into account. With follow-ups, the answer method can also suggest related questions, which users can select instead of entering their own follow-up questions. To get related-question suggestions, you must turn on Advanced LLM features.

All of the answer and follow-up features described in the preceding sections, such as citations, filters, SafeSearch, ignoring certain types of queries, and using a preamble to customize answers, can be applied along with follow-ups.

Example of a follow-up session

The following is an example of a session with follow-ups. Suppose that you want to know about vacationing in Mexico:

  • Turn 1:

    • You: When is the best time of the year to vacation in Mexico?

    • Answer with follow-ups: The best time to vacation in Mexico is during the dry season, which runs from November to April.

  • Turn 2:

    • You: What is the exchange rate?

    • Answer with follow-ups: 1 USD is equal to approximately 17.65 Mexican pesos.

  • Turn 3:

    • You: What is the average temperature in December?

    • Answer with follow-ups: The average temperature varies between 70-78°F. Cancun's average is approximately 77°F.

Without follow-ups, your question "What is the exchange rate?" couldn't be answered, because regular search wouldn't know that you wanted the Mexican exchange rate. Similarly, without follow-ups, there wouldn't be the context needed to give you temperatures specifically for Mexico.

About sessions

To understand how follow-ups work, you first need to understand sessions.

A session is made up of text queries provided by a user and responses provided by Vertex AI Search.

These query-and-response pairs are sometimes referred to as turns. In the preceding example, the second turn is made up of "What is the exchange rate?" and "1 USD is equal to approximately 17.65 Mexican pesos."

Sessions are stored with the app. In the app, a session is represented by the session resource.

In addition to containing the query and response messages, the session resource has the following properties:

  • A unique name (the session ID).

  • A state (in progress or completed).

  • A user pseudo ID, which is a visitor ID that tracks the user. It can be assigned programmatically. If the ID is mapped to the user pseudo ID in your app's user events, the model can help deliver personalized results for the user.

  • A start time and an end time.

  • A turn, which is a query-answer pair.

Before you begin

Before you run a query that requests follow-ups, make sure that you have turned on Advanced LLM features for the app.

Store session information and get responses

You can use the command line to generate search responses and answers and to store them, along with each query, in a session.

REST

To use the command line to create a session and generate responses from the user's input, follow these steps:

  1. Specify the app where you want to store the session:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions" \
      -d '{
            "userPseudoId": "USER_PSEUDO_ID"
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.

    • APP_ID: the ID of the Vertex AI Search app.

    • USER_PSEUDO_ID: a UTF-8 encoded string that acts as a unique, pseudonymized identifier for tracking users. The maximum length is 128 characters. Google strongly recommends using this field, because it improves model performance and personalization quality. You can use an HTTP cookie for this field, which can uniquely identify a visitor on a single device. Here are some important considerations:

      • This identifier doesn't change when the visitor signs in to or out of a website.
      • This field must not be set to the same identifier for multiple users. Otherwise, the same user ID can combine event histories from different users and degrade model quality.
      • This field must not contain personally identifiable information (PII).
      • For a given search or browse request, this field must be mapped to the corresponding userPseudoId field in user events.

      For more information, see userPseudoId.

    Example command and result

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)"
    -H "Content-Type: application/json"
    "https://discoveryengine.googleapis.com/v1/projects/my-project-123/locations/global/collections/default_collection/engines/my-app/sessions"
    -d '{
    "userPseudoId": "test_user"
    }'
    
    { "name": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943", "state": "IN_PROGRESS", "userPseudoId": "test_user", "startTime": "2024-09-13T18:47:10.465311Z", "endTime": "2024-09-13T18:47:10.465311Z" }
  2. Note the number at the end of the name: field in the JSON response. This is the session ID. In the example result, the ID is 16002628354770206943. You need this ID in the next step.

  3. Generate an answer and add it to the session in your app:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/servingConfigs/default_search:answer" \
      -d '{
            "query": { "text": "QUERY"},
            "session": "projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID",
              "searchSpec":{ "searchParams": {"filter": "FILTER"} }
    }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • QUERY: a free-text string containing the question or search query.
    • SESSION_ID: the ID of the session that you created in step 1. This is the number at the end of the name: field that you noted in step 2. For a session, use the same session ID in every turn.
    • FILTER: a text field for filtering search results using a filter expression. The default value is an empty string. How you construct your filter varies depending on whether you have unstructured data with metadata, structured data, or website data. For more information, see Filter custom search for structured or unstructured data and Filter website search results.

    Example command and result

    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)"
    -H "Content-Type: application/json"
    "https://discoveryengine.googleapis.com/v1/projects/my-project-123/locations/global/collections/default_collection/engines/my-app/servingConfigs/default_search:answer"
    -d '{
    "query": { "text": "Compare bigquery with spanner database?"},
    "session":  "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943",
    }'
        
    { "answer": { "name": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943/answers/4861507376861383072", "state": "SUCCEEDED", "answerText": "BigQuery and Spanner are both powerful tools that can be used together to handle transactional and analytical workloads. Spanner is a fully managed relational database optimized for transactional workloads, while BigQuery is a serverless data warehouse designed for business agility. Spanner provides seamless replication across regions in Google Cloud and processes over 1 billion requests per second at peak. BigQuery analyzes over 110 terabytes of data per second. Users can leverage federated queries to read data from Spanner and write to a native BigQuery table. \n", "steps": [ { "state": "SUCCEEDED", "description": "Rephrase the query and search.", "actions": [ { "searchAction": { "query": "Compare bigquery with spanner database?" }, "observation": { "searchResults": [ { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/ecc0e7547253f4ca3ff3328ce89995af", "uri": "https://cloud.google.com/blog/topics/developers-practitioners/how-spanner-and-bigquery-work-together-handle-transactional-and-analytical-workloads", "title": "How Spanner and BigQuery work together to handle transactional and analytical workloads | Google Cloud Blog", "snippetInfo": [ { "snippet": "Using Cloud \u003cb\u003eSpanner\u003c/b\u003e and \u003cb\u003eBigQuery\u003c/b\u003e also allows customers to build their \u003cb\u003edata\u003c/b\u003e clouds using Google Cloud, a unified, open approach to \u003cb\u003edata\u003c/b\u003e-driven transformation ...", "snippetStatus": "SUCCESS" } ] }, { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/d7e238f73608a860e00b752ef80e2941", "uri": "https://cloud.google.com/blog/products/databases/cloud-spanner-gets-stronger-with-bigquery-federated-queries", "title": "Cloud Spanner gets stronger with BigQuery-federated queries | Google Cloud Blog", "snippetInfo": [ { "snippet": "As enterprises compete for market share, their need for real-time insights has given rise to increased demand for transactional \u003cb\u003edatabases\u003c/b\u003e to support \u003cb\u003edata\u003c/b\u003e ...", "snippetStatus": "SUCCESS" } ] }, { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/e10a5a3c267dc61579e7c00fefe656eb", "uri": "https://cloud.google.com/blog/topics/developers-practitioners/replicating-cloud-spanner-bigquery-scale", "title": "Replicating from Cloud Spanner to BigQuery at scale | Google Cloud Blog", "snippetInfo": [ { "snippet": "... \u003cb\u003eSpanner data\u003c/b\u003e into \u003cb\u003eBigQuery\u003c/b\u003e for analytics. In this post, you will learn how to efficiently use this feature to replicate large tables with high throughput ...", "snippetStatus": "SUCCESS" } ] }, ... { "document": "projects/123456/locations/global/collections/default_collection/dataStores/my-data-store/branches/0/documents/8100ad36e1cac149eb9fc180a41d8f25", "uri": "https://cloud.google.com/blog/products/gcp/from-nosql-to-new-sql-how-spanner-became-a-global-mission-critical-database", "title": "How Spanner became a global, mission-critical database | Google Cloud Blog", "snippetInfo": [ { "snippet": "... SQL \u003cb\u003evs\u003c/b\u003e. NoSQL dichotomy may no longer be relevant." 
The \u003cb\u003eSpanner\u003c/b\u003e SQL query processor, while recognizable as a standard implementation, has unique ...", "snippetStatus": "SUCCESS" } ] } ] } } ] } ] }, "session": { "name": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943", "state": "IN_PROGRESS", "userPseudoId": "test_user", "turns": [ { "query": { "queryId": "projects/123456/locations/global/questions/741830", "text": "Compare bigquery with spanner database?" }, "answer": "projects/123456/locations/global/collections/default_collection/engines/my-app/sessions/16002628354770206943/answers/4861507376861383072" } ], "startTime": "2024-09-13T18:47:10.465311Z", "endTime": "2024-09-13T18:47:10.465311Z" }, "answerQueryToken": "NMwKDAjFkpK3BhDU24uZAhIkNjZlNDIyZWYtMDAwMC0yMjVmLWIxMmQtZjQwMzA0M2FkYmNj" }
  4. Repeat step 3 for each new query in the session.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def create_session(
    project_id: str,
    location: str,
    engine_id: str,
    user_pseudo_id: str,
) -> discoveryengine.Session:
    """Creates a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        user_pseudo_id: A unique identifier for tracking visitors. For example, this
          could be implemented with an HTTP cookie, which should be able to
          uniquely identify a visitor on a single device.
    Returns:
        discoveryengine.Session: The newly created Session.
    """

    client = discoveryengine.SessionServiceClient()

    session = client.create_session(
        # The full resource name of the engine
        parent=f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}",
        session=discoveryengine.Session(user_pseudo_id=user_pseudo_id),
    )

    # Send Session name in `answer_query()`
    print(f"Session: {session.name}")
    return session
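
To continue a conversation across turns, pass the session name returned by create_session into each answer_query call. The following is a minimal sketch that combines the create_session sample above with the answer_query pattern shown earlier on this page; it assumes an app in the global location, and the query texts are placeholder values.

from google.cloud import discoveryengine_v1 as discoveryengine

# Assumes `project_id` and `engine_id` are set and the app is in the global location.
# For other locations, pass client_options as shown in the earlier samples.
session_client = discoveryengine.SessionServiceClient()
answer_client = discoveryengine.ConversationalSearchServiceClient()

engine = f"projects/{project_id}/locations/global/collections/default_collection/engines/{engine_id}"
serving_config = f"{engine}/servingConfigs/default_serving_config"

# Step 1: Create the session that stores the conversation.
session = session_client.create_session(
    parent=engine,
    session=discoveryengine.Session(user_pseudo_id="test_user"),
)

# Steps 3-4: Reuse the same session name in every turn so that each answer
# takes the previous turns into account.
for text in [
    "Compare BigQuery with Spanner database?",
    "Which one is serverless?",
]:
    request = discoveryengine.AnswerQueryRequest(
        serving_config=serving_config,
        query=discoveryengine.Query(text=text),
        session=session.name,
    )
    response = answer_client.answer_query(request)
    print(response)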

Get a session from the data store

The following command shows how to call the get method and get a session from the data store.

REST

To get a session from the data store, do the following:

  1. Run the following curl command:

    curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • SESSION_ID: the ID of the session that you want to get.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def get_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_id: str,
) -> discoveryengine.Session:
    """Retrieves a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        session_id: The ID of the session.
    """

    client = discoveryengine.SessionServiceClient()

    # The full resource name of the session
    name = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions/{session_id}"

    session = client.get_session(name=name)

    print(f"Session details: {session}")
    return session

Delete a session from the app

The following command shows how to call the delete method and delete a session from the data store.

By default, sessions older than 60 days are automatically deleted. However, if you want to delete a particular session, for example, one that contains sensitive content, use this API call to delete it.

REST

To delete a session from an app, do the following:

  1. Run the following curl command:

    curl -X DELETE -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • SESSION_ID: the ID of the session that you want to delete.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def delete_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_id: str,
) -> None:
    """Deletes a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        session_id: The ID of the session.
    """

    client = discoveryengine.SessionServiceClient()

    # The full resource name of the session
    name = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions/{session_id}"

    client.delete_session(name=name)

    print(f"Session {name} deleted.")

Update a session

There are various reasons that you might want to update a session. For example, to do one of the following:

  • Mark a session as completed
  • Merge the messages from one session into another
  • Change a user's pseudo ID

The following command shows how to call the patch method and update a session in the data store.

REST

To update a session from an app, do the following:

  1. Run the following curl command:

    curl -X PATCH \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions/SESSION_ID?updateMask=state" \
      -d '{
            "state": "NEW_STATE"
          }'
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • SESSION_ID: the ID of the session that you want to update.
    • NEW_STATE: the new value for the state, for example, IN_PROGRESS.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine
from google.protobuf import field_mask_pb2


def update_session(
    project_id: str,
    location: str,
    engine_id: str,
    session_id: str,
) -> discoveryengine.Session:
    """Updates a session.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
        session_id: The ID of the session.
    Returns:
        discoveryengine.Session: The updated Session.
    """
    client = discoveryengine.SessionServiceClient()

    # The full resource name of the session
    name = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions/{session_id}"

    session = discoveryengine.Session(
        name=name,
        state=discoveryengine.Session.State.IN_PROGRESS,  # Options: IN_PROGRESS, STATE_UNSPECIFIED
    )

    # Fields to Update
    update_mask = field_mask_pb2.FieldMask(paths=["state"])

    session = client.update_session(session=session, update_mask=update_mask)
    print(f"Updated session: {session.name}")
    return session

List all sessions

The following command shows how to call the list method and list the sessions in the data store.

REST

To list the sessions for an app, do the following:

  1. Run the following curl command:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def list_sessions(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.ListSessionsResponse:
    """Lists all sessions associated with a data store.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
    Returns:
        discoveryengine.ListSessionsResponse: The list of sessions.
    """

    client = discoveryengine.SessionServiceClient()

    # The full resource name of the engine
    parent = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}"

    response = client.list_sessions(
        request=discoveryengine.ListSessionsRequest(
            parent=parent,
            filter='state="IN_PROGRESS"',  # Optional: Filter requests by userPseudoId or state
            order_by="update_time",  # Optional: Sort results
        )
    )

    print("Sessions:")
    for session in response.sessions:
        print(session)

    return response

List sessions for a user

The following command shows how to call the list method and list the sessions associated with a user or visitor.

REST

To list the sessions associated with a user or visitor, do the following:

  1. Run the following curl command:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions?filter=userPseudoId=USER_PSEUDO_ID"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • USER_PSEUDO_ID: the pseudo ID of the user whose sessions you want to list.

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def list_sessions(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.ListSessionsResponse:
    """Lists all sessions associated with a data store.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
    Returns:
        discoveryengine.ListSessionsResponse: The list of sessions.
    """

    client = discoveryengine.SessionServiceClient()

    # The full resource name of the engine
    parent = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}"

    response = client.list_sessions(
        request=discoveryengine.ListSessionsRequest(
            parent=parent,
            filter='state="IN_PROGRESS"',  # Optional: Filter requests by userPseudoId or state
            order_by="update_time",  # Optional: Sort results
        )
    )

    print("Sessions:")
    for session in response.sessions:
        print(session)

    return response

List sessions for a user and state

The following command shows how to call the list method to list sessions in a given state for a particular user.

REST

To list sessions that are open or closed and associated with a given user or visitor, do the following:

  1. Run the following curl command:

    curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/engines/APP_ID/sessions?filter=userPseudoId=USER_PSEUDO_ID%20AND%20state=STATE"
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • APP_ID: the ID of the Vertex AI Search app.
    • USER_PSEUDO_ID: the pseudo ID of the user whose sessions you want to list.
    • STATE: the state of the session: STATE_UNSPECIFIED (closed or unknown) or IN_PROGRESS (open).

Python

For more information, refer to the AI Applications Python API reference documentation.

To authenticate to AI Applications, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import discoveryengine_v1 as discoveryengine


def list_sessions(
    project_id: str,
    location: str,
    engine_id: str,
) -> discoveryengine.ListSessionsResponse:
    """Lists all sessions associated with a data store.

    Args:
        project_id: The ID of your Google Cloud project.
        location: The location of the app.
        engine_id: The ID of the app.
    Returns:
        discoveryengine.ListSessionsResponse: The list of sessions.
    """

    client = discoveryengine.SessionServiceClient()

    # The full resource name of the engine
    parent = f"projects/{project_id}/locations/{location}/collections/default_collection/engines/{engine_id}"

    response = client.list_sessions(
        request=discoveryengine.ListSessionsRequest(
            parent=parent,
            filter='state="IN_PROGRESS"',  # Optional: Filter requests by userPseudoId or state
            order_by="update_time",  # Optional: Sort results
        )
    )

    print("Sessions:")
    for session in response.sessions:
        print(session)

    return response
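
The Python sample above filters only by state. The following is a hedged sketch of combining a user filter with a state filter, reusing the client and parent from the sample above; the filter syntax is an assumption modeled on the userPseudoId and state parameters in the curl command, and USER_PSEUDO_ID is a placeholder.

# Assumed filter syntax, modeled on the REST example above
# (?filter=userPseudoId=USER_PSEUDO_ID AND state=STATE).
response = client.list_sessions(
    request=discoveryengine.ListSessionsRequest(
        parent=parent,
        filter='userPseudoId="USER_PSEUDO_ID" AND state="IN_PROGRESS"',
    )
)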