Method: projects.locations.agent.sessions.detectIntent

Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries.

If you might use Agent Assist or other CCAI products now or in the future, consider using AnalyzeContent instead of sessions.detectIntent. AnalyzeContent has additional functionality for Agent Assist and other CCAI products.

Note: Always use agent versions for production traffic. See Versions and environments.

HTTP request

POST https://{endpoint}/v2beta1/{session=projects/*/locations/*/agent/sessions/*}:detectIntent

Where {endpoint} is one of the supported service endpoints.

The URLs use gRPC Transcoding syntax.

Path parameters



session
string

Required. The name of the session this query is sent to. Supported formats:

  • projects/<Project ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified, we assume the default 'us' location. If Environment ID is not specified, we assume the default 'draft' environment (Environment ID might be referred to as environment name in some places). If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide.
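The constraints above can be sketched in Python. The `session_path` helper and the example project ID are hypothetical, not part of the API; the format strings match the supported session name formats listed above.

```python
import hashlib
import uuid

def session_path(project_id, session_id, location_id=None):
    """Build a session name for detectIntent (hypothetical helper)."""
    # The docs cap Session IDs at 36 characters.
    if len(session_id) > 36:
        raise ValueError("Session ID must not exceed 36 characters")
    if location_id:
        return f"projects/{project_id}/locations/{location_id}/agent/sessions/{session_id}"
    return f"projects/{project_id}/agent/sessions/{session_id}"

# A random UUID is a reasonable Session ID (exactly 36 chars with hyphens).
sid = str(uuid.uuid4())

# A user identifier, hashed as the docs recommend, truncated to 36 chars.
uid = hashlib.sha256(b"user@example.com").hexdigest()[:36]

path = session_path("my-project", sid)
```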

Note: Always use agent versions for production traffic. See Versions and environments.

Authorization requires the following IAM permission on the specified resource session:

  • dialogflow.sessions.detectIntent

Request body

The request body contains data with the following structure:

JSON representation

{
  "queryParams": {
    object (QueryParameters)
  },
  "queryInput": {
    object (QueryInput)
  },
  "outputAudioConfig": {
    object (OutputAudioConfig)
  },
  "outputAudioConfigMask": string,
  "inputAudio": string
}

Fields

queryParams
object (QueryParameters)

The parameters of this query.


queryInput
object (QueryInput)

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.
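Since queryInput is a union, exactly one of the three options is set per request. A sketch of each variant as a JSON-ready Python dict, with assumed example values (encoding, language, and event name are illustrative, not prescribed by this page):

```python
# 1. Audio: tell the speech recognizer how to process the speech audio
#    carried in the request's inputAudio field.
audio_query = {
    "audioConfig": {
        "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
    }
}

# 2. Text: a conversational query in text form.
text_query = {
    "text": {
        "text": "I want to book a flight to Paris",
        "languageCode": "en-US",
    }
}

# 3. Event: trigger a specific intent directly.
event_query = {
    "event": {
        "name": "WELCOME",
        "languageCode": "en-US",
    }
}
```

Only one top-level key may appear in the queryInput object of any single request.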


outputAudioConfig
object (OutputAudioConfig)

Instructs the speech synthesizer how to generate the output audio. If this field is not set and the agent-level speech synthesizer is not configured, no output audio is generated.


outputAudioConfigMask
string (FieldMask format)

Mask for outputAudioConfig indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.

If unspecified or empty, outputAudioConfig replaces the agent-level config in its entirety.

This is a comma-separated list of fully qualified names of fields. Example: "user.displayName,photo".
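For example, to override only a couple of synthesizer settings while keeping the rest of the agent-level config, the mask lists just those fields. The field names below are assumed from the OutputAudioConfig message (synthesizeSpeechConfig and its sub-fields); verify them against that reference:

```python
# Override only speaking rate and pitch at request level; all other
# agent-level synthesizer settings remain in effect.
output_audio_config_mask = ",".join([
    "synthesizeSpeechConfig.speakingRate",
    "synthesizeSpeechConfig.pitch",
])
```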


inputAudio
string (bytes format)

The natural language speech audio to be processed. This field should be populated if and only if queryInput is set to an input audio config. A single request can contain up to 1 minute of speech audio data.

A base64-encoded string.
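Because the JSON/REST transport carries bytes fields as base64 strings, the raw audio must be encoded before it goes into inputAudio. A minimal sketch (the four-byte payload is a stand-in for real speech audio):

```python
import base64

raw_audio = b"\x00\x01\x02\x03"  # stand-in for up to 1 minute of audio

# inputAudio must be a base64-encoded string on the wire.
input_audio = base64.b64encode(raw_audio).decode("ascii")

# The service decodes it back to the original bytes.
assert base64.b64decode(input_audio) == raw_audio
```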

Response body

If successful, the response body contains an instance of DetectIntentResponse.
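Putting the pieces together, a request can be assembled and sent with nothing but the standard library. The project ID, session ID, and token source are placeholders; the URL shape and body structure follow the HTTP request and JSON representation above.

```python
import json
import os
import urllib.request

# Hypothetical values; substitute your own endpoint, project, and session.
ENDPOINT = "dialogflow.googleapis.com"
SESSION = "projects/my-project/agent/sessions/123e4567-e89b-12d3-a456-426614174000"

url = f"https://{ENDPOINT}/v2beta1/{SESSION}:detectIntent"
body = {
    "queryInput": {
        "text": {"text": "hello", "languageCode": "en-US"}
    }
}

# An OAuth 2.0 access token with one of the required scopes; obtaining one
# (e.g. via `gcloud auth print-access-token`) is outside this sketch.
token = os.environ.get("ACCESS_TOKEN")
if token:
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # a DetectIntentResponse
```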

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow


For more information, see the Authentication Overview.