Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries.
If you might use Agent Assist or other CCAI products now or in the future, consider using AnalyzeContent instead of sessions.detectIntent. AnalyzeContent has additional functionality for Agent Assist and other CCAI products.
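As a sketch of what a `detectIntent` call looks like at the REST level, the snippet below builds the request URL and a minimal request body for a text query. The project ID, session ID, and query text are hypothetical placeholders; a real call additionally needs OAuth 2.0 credentials and the `dialogflow.sessions.detectIntent` permission.

```python
import json

# Hypothetical session resource name; see the supported formats below.
session = "projects/my-project/agent/sessions/123e4567-e89b-12d3-a456-426614174000"
url = f"https://dialogflow.googleapis.com/v2/{session}:detectIntent"

# Minimal request body for a natural language text query.
body = {
    "queryInput": {
        "text": {
            "text": "I want to book a flight",
            "languageCode": "en-US",
        }
    }
}
print(url)
print(json.dumps(body))
```

The response to a successful call is a `DetectIntentResponse`; because the method is not idempotent, repeating the same request may yield different results as contexts and session entity types change.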
Required. The name of the session this query is sent to. Supported formats:

- `projects/<Project ID>/agent/sessions/<Session ID>`
- `projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>`
- `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`
- `projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`
If Location ID is not specified, we assume the default 'us' location. If Environment ID is not specified, we assume the default 'draft' environment (Environment ID is sometimes referred to as the environment name). If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide.
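The naming rules above can be sketched as a small helper that assembles a session resource name and enforces the 36-character limit. The `session_name` function is a hypothetical illustration, not part of any client library; it also shows hashing a raw user identifier and using a random UUID as a session ID, both of which the guidance above recommends.

```python
import hashlib
import uuid

def session_name(project, session_id, location=None, environment=None, user=None):
    """Build a session resource name (hypothetical helper).

    Session ID and User ID must each be at most 36 characters.
    """
    for value in (session_id, user):
        if value is not None and len(value) > 36:
            raise ValueError(f"ID exceeds 36 characters: {value!r}")
    parts = [f"projects/{project}"]
    if location:
        parts.append(f"locations/{location}")
    parts.append("agent")
    if environment:
        # When User ID is not specified, "-" is used.
        parts.append(f"environments/{environment}/users/{user or '-'}")
    parts.append(f"sessions/{session_id}")
    return "/".join(parts)

# A random UUID is a reasonable session ID (exactly 36 characters).
sid = str(uuid.uuid4())
# Hash raw user identifiers rather than sending them directly.
uid = hashlib.sha1(b"alice@example.com").hexdigest()[:36]
print(session_name("my-project", sid, environment="draft", user=uid))
```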
Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
The natural language speech audio to be processed. This field should be populated if and only if queryInput is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
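In the REST API the audio bytes are sent as a base64-encoded string. The sketch below encodes raw audio for the `inputAudio` field and guards against the 1-minute limit; the sample rate and sample width are assumptions for an uncompressed 16 kHz LINEAR16 configuration, so adjust them to match your actual `inputAudioConfig`.

```python
import base64

SAMPLE_RATE_HZ = 16000   # assumed: 16 kHz LINEAR16 audio config
BYTES_PER_SAMPLE = 2     # assumed: 16-bit samples
MAX_SECONDS = 60         # a single request allows up to 1 minute of audio

def encode_input_audio(pcm_bytes):
    """Base64-encode raw audio for the inputAudio field (sketch)."""
    max_bytes = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * MAX_SECONDS
    if len(pcm_bytes) > max_bytes:
        raise ValueError("audio exceeds the 1-minute limit for a single request")
    return base64.b64encode(pcm_bytes).decode("ascii")

audio = b"\x00\x01" * SAMPLE_RATE_HZ  # one second of dummy 16-bit samples
encoded = encode_input_audio(audio)
print(encoded[:16])
```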
Last updated 2025-03-05 UTC.