REST Resource: projects.locations.agents

Resource: Agent

Agents are best described as Natural Language Understanding (NLU) modules that transform user requests into actionable data. You can include agents in your app, product, or service to determine user intent and respond to the user in a natural way.

After you create an agent, you can add Intents, Entity Types, Flows, Fulfillments, Webhooks, and so on to manage the conversation flows.

JSON representation
{
  "name": string,
  "displayName": string,
  "defaultLanguageCode": string,
  "supportedLanguageCodes": [
    string
  ],
  "timeZone": string,
  "description": string,
  "avatarUri": string,
  "speechToTextSettings": {
    object (SpeechToTextSettings)
  },
  "startFlow": string,
  "securitySettings": string,
  "enableStackdriverLogging": boolean,
  "enableSpellCorrection": boolean,
  "locked": boolean,
  "advancedSettings": {
    object (AdvancedSettings)
  },
  "textToSpeechSettings": {
    object (TextToSpeechSettings)
  }
}
Fields
name

string

The unique identifier of the agent. Required for the Agents.UpdateAgent method. Agents.CreateAgent populates the name automatically. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>.

displayName

string

Required. The human-readable name of the agent, unique within the location.

defaultLanguageCode

string

Required. Immutable. The default language of the agent as a language tag. See Language Support for a list of the currently supported language codes. This field cannot be set by the Agents.UpdateAgent method.

supportedLanguageCodes[]

string

The list of all languages supported by the agent (except for the defaultLanguageCode).

timeZone

string

Required. The time zone of the agent from the time zone database, e.g., America/New_York, Europe/Paris.

description

string

The description of the agent. The maximum length is 500 characters. If exceeded, the request is rejected.

avatarUri

string

The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted Web Demo integration.

speechToTextSettings

object (SpeechToTextSettings)

Settings related to speech recognition.

startFlow

string

Immutable. Name of the start flow in this agent. A start flow will be automatically created when the agent is created, and can only be deleted by deleting the agent. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>.

securitySettings

string

Name of the SecuritySettings reference for the agent. Format: projects/<Project ID>/locations/<Location ID>/securitySettings/<Security Settings ID>.

enableStackdriverLogging
(deprecated)

boolean

Indicates whether Stackdriver logging is enabled for the agent. Use agent.advanced_settings instead.

enableSpellCorrection

boolean

Indicates if automatic spell correction is enabled in detect intent requests.

locked

boolean

Indicates whether the agent is locked for changes. If the agent is locked, modifications to the agent will be rejected except for agents.restore.

advancedSettings

object (AdvancedSettings)

Hierarchical advanced settings for this agent. The settings exposed at the lower level override the settings exposed at the higher level.

textToSpeechSettings

object (TextToSpeechSettings)

Settings that instruct the speech synthesizer on how to generate the output audio content.
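
For orientation, a minimal illustrative Agent resource is sketched below; every identifier and value is hypothetical, and most optional fields are omitted:

{
  "name": "projects/my-project/locations/global/agents/my-agent-id",
  "displayName": "Travel Booking Agent",
  "defaultLanguageCode": "en",
  "supportedLanguageCodes": ["fr", "de"],
  "timeZone": "America/New_York",
  "description": "Handles flight and hotel booking requests.",
  "startFlow": "projects/my-project/locations/global/agents/my-agent-id/flows/my-start-flow-id",
  "enableSpellCorrection": true
}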

SpeechToTextSettings

Settings related to speech recognition.

JSON representation
{
  "enableSpeechAdaptation": boolean
}
Fields
enableSpeechAdaptation

boolean

Whether to use speech adaptation for speech recognition.

AdvancedSettings

Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playbackInterruptionSettings at the fulfillment level overrides only the playbackInterruptionSettings at the agent level, leaving other settings at the agent level unchanged.

DTMF settings do not override each other; DTMF settings set at different levels define DTMF detections that run in parallel.

Hierarchy: Agent->Flow->Page->Fulfillment/Parameter.

JSON representation
{
  "audioExportGcsDestination": {
    object (GcsDestination)
  },
  "loggingSettings": {
    object (LoggingSettings)
  }
}
Fields
audioExportGcsDestination

object (GcsDestination)

If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels:
- Agent level
- Flow level

loggingSettings

object (LoggingSettings)

Settings for logging: Dialogflow History, Contact Center messages, Stackdriver logs, and speech logging. Exposed at the following levels:
- Agent level
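
As an illustration of the hierarchy described above, a minimal sketch of an agent-level advancedSettings object with a hypothetical bucket and values; a flow-level advancedSettings could override just the audioExportGcsDestination sub-setting while the agent-level loggingSettings continue to apply:

{
  "audioExportGcsDestination": {
    "uri": "gs://my-agent-audio-bucket/exports/"
  },
  "loggingSettings": {
    "enableStackdriverLogging": true,
    "enableInteractionLogging": true
  }
}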

GcsDestination

Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g., an exported agent or transcripts) outside of Dialogflow.

JSON representation
{
  "uri": string
}
Fields
uri

string

Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: gs://bucket/object-name-or-prefix. Whether a full object name or just a prefix is used depends on the Dialogflow operation.

LoggingSettings

Defines logging behavior.

JSON representation
{
  "enableStackdriverLogging": boolean,
  "enableInteractionLogging": boolean
}
Fields
enableStackdriverLogging

boolean

If true, Stackdriver logging is enabled.

enableInteractionLogging

boolean

If true, Dialogflow interaction logging is enabled.

TextToSpeechSettings

Settings related to speech synthesis.

JSON representation
{
  "synthesizeSpeechConfigs": {
    string: {
      object (SynthesizeSpeechConfig)
    },
    ...
  }
}
Fields
synthesizeSpeechConfigs

map (key: string, value: object (SynthesizeSpeechConfig))

Configuration of how speech should be synthesized, mapping from language (https://cloud.google.com/dialogflow/cx/docs/reference/language) to SynthesizeSpeechConfig.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
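
For example, a hypothetical textToSpeechSettings that tunes synthesis per language (language codes and values are illustrative):

{
  "synthesizeSpeechConfigs": {
    "en": {
      "speakingRate": 1.0,
      "voice": {
        "ssmlGender": "SSML_VOICE_GENDER_FEMALE"
      }
    },
    "fr": {
      "speakingRate": 0.9
    }
  }
}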

SynthesizeSpeechConfig

Configuration of how speech should be synthesized.

JSON representation
{
  "speakingRate": number,
  "pitch": number,
  "volumeGainDb": number,
  "effectsProfileId": [
    string
  ],
  "voice": {
    object (VoiceSelectionParams)
  }
}
Fields
speakingRate

number

Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice; 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other value < 0.25 or > 4.0 returns an error.

pitch

number

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.

volumeGainDb

number

Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to 0.0 (dB), the audio plays at normal native signal amplitude. A value of -6.0 (dB) plays at approximately half the amplitude of the normal native signal, and a value of +6.0 (dB) at approximately twice that amplitude. We strongly recommend not exceeding +10 (dB), as there is usually no effective increase in loudness beyond that.
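
The half/twice approximations above follow from the standard decibel-to-amplitude relation (shown here for reference; it is not part of the API definition):

amplitude_ratio = 10^(volumeGainDb / 20)
10^(-6.0 / 20) ≈ 0.50 (about half amplitude)
10^(+6.0 / 20) ≈ 2.00 (about twice amplitude)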

effectsProfileId[]

string

Optional. An identifier that selects 'audio effects' profiles to apply to the synthesized speech. Effects are applied on top of each other in the order they are given.

voice

object (VoiceSelectionParams)

Optional. The desired voice of the synthesized audio.

VoiceSelectionParams

Description of which voice to use for speech synthesis.

JSON representation
{
  "name": string,
  "ssmlGender": enum (SsmlVoiceGender)
}
Fields
name

string

Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as languageCode and ssmlGender.

For the list of available voices, please refer to Supported voices and languages.

ssmlGender

enum (SsmlVoiceGender)

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as languageCode and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer substitutes a voice with a different gender rather than failing the request.
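
A sketch of a VoiceSelectionParams object; the voice name and gender pairing here are purely illustrative, so check Supported voices and languages for real combinations:

{
  "name": "en-US-Standard-A",
  "ssmlGender": "SSML_VOICE_GENDER_FEMALE"
}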

SsmlVoiceGender

Gender of the voice as described in SSML voice element.

Enums
SSML_VOICE_GENDER_UNSPECIFIED An unspecified gender, which means that the client doesn't care which gender the selected voice will have.
SSML_VOICE_GENDER_MALE A male voice.
SSML_VOICE_GENDER_FEMALE A female voice.
SSML_VOICE_GENDER_NEUTRAL A gender-neutral voice.

Methods

create

Creates an agent in the specified location.

delete

Deletes the specified agent.

export

Exports the specified agent to a binary file.

get

Retrieves the specified agent.

getValidationResult

Gets the latest agent validation result.

list

Returns the list of all agents in the specified location.

patch

Updates the specified agent.

restore

Restores the specified agent from a binary file.

validate

Validates the specified agent and creates or updates validation results.
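
As a sketch of how these methods map onto REST, a hypothetical list call and a trimmed response are shown below; the endpoint, project, and IDs are assumptions, so consult the individual method pages for the authoritative URLs:

GET https://dialogflow.googleapis.com/v3/projects/my-project/locations/global/agents

{
  "agents": [
    {
      "name": "projects/my-project/locations/global/agents/my-agent-id",
      "displayName": "Travel Booking Agent",
      "defaultLanguageCode": "en",
      "timeZone": "America/New_York"
    }
  ]
}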