REST Resource: projects.conversationProfiles

Resource: ConversationProfile

Defines the services to connect to incoming Dialogflow conversations.

JSON representation
{
  "name": string,
  "displayName": string,
  "createTime": string,
  "updateTime": string,
  "automatedAgentConfig": {
    object (AutomatedAgentConfig)
  },
  "humanAgentAssistantConfig": {
    object (HumanAgentAssistantConfig)
  },
  "humanAgentHandoffConfig": {
    object (HumanAgentHandoffConfig)
  },
  "notificationConfig": {
    object (NotificationConfig)
  },
  "loggingConfig": {
    object (LoggingConfig)
  },
  "newMessageEventNotificationConfig": {
    object (NotificationConfig)
  },
  "sttConfig": {
    object (SpeechToTextConfig)
  },
  "languageCode": string,
  "timeZone": string,
  "securitySettings": string,
  "ttsConfig": {
    object (SynthesizeSpeechConfig)
  }
}
Fields
name

string

The unique identifier of this conversation profile. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

displayName

string

Required. Human-readable name for this profile. Max length is 1024 bytes.

createTime

string (Timestamp format)

Output only. Create time of the conversation profile.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

updateTime

string (Timestamp format)

Output only. Update time of the conversation profile.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

automatedAgentConfig

object (AutomatedAgentConfig)

Configuration for an automated agent to use with this profile.

humanAgentAssistantConfig

object (HumanAgentAssistantConfig)

Configuration for agent assistance to use with this profile.

humanAgentHandoffConfig

object (HumanAgentHandoffConfig)

Configuration for connecting to a live agent.

Currently, this feature is not generally available; please contact Google to get access.

notificationConfig

object (NotificationConfig)

Configuration for publishing conversation lifecycle events.

loggingConfig

object (LoggingConfig)

Configuration for logging conversation lifecycle events.

newMessageEventNotificationConfig

object (NotificationConfig)

Configuration for publishing new message events. Events will be sent in the format of ConversationEvent.

sttConfig

object (SpeechToTextConfig)

Settings for speech transcription.

languageCode

string

Language code for the conversation profile. If not specified, the language defaults to en-US. The language on the ConversationProfile should be set for all non-en-US languages. This should be a BCP-47 language tag. Example: "en-US".

timeZone

string

The time zone of this conversation profile from the time zone database, e.g., America/New_York, Europe/Paris. Defaults to America/New_York.

securitySettings

string

Name of the CX SecuritySettings reference for the agent. Format: projects/<Project ID>/locations/<Location ID>/securitySettings/<Security Settings ID>.

ttsConfig

object (SynthesizeSpeechConfig)

Configuration for Text-to-Speech synthesis.

Used by the Phone Gateway to specify synthesis options. If the agent also defines synthesis options, the agent settings override the options here.
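
For illustration, a minimal ConversationProfile that publishes lifecycle events to Pub/Sub might look like the following sketch; the project, location, profile, and topic IDs are hypothetical placeholders:

{
  "name": "projects/my-project/locations/global/conversationProfiles/my-profile",
  "displayName": "Support profile",
  "languageCode": "en-US",
  "timeZone": "America/New_York",
  "notificationConfig": {
    "topic": "projects/my-project/locations/global/topics/conversation-events",
    "messageFormat": "JSON"
  }
}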

AutomatedAgentConfig

Defines the Automated Agent to connect to a conversation.

JSON representation
{
  "agent": string,
  "sessionTtl": string
}
Fields
agent

string

Required. ID of the Dialogflow agent environment to use.

The agent's project must either be the same project as the conversation, or you must grant service-<Conversation Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com the Dialogflow API Service Agent role in the agent's project.

  • For ES agents, use format: projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID or '-'>. If environment is not specified, the default draft environment is used. Refer to DetectIntentRequest for more details.

  • For CX agents, use format projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/environments/<Environment ID or '-'>. If environment is not specified, the default draft environment is used.

sessionTtl

string (Duration format)

Optional. Configures the lifetime of the Dialogflow session. By default, a Dialogflow CX session remains active, and its data is stored for 30 minutes after the last request is sent for the session. This value should be no longer than 1 day.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
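
As a sketch, an AutomatedAgentConfig that connects a CX agent's draft environment with a one-hour session lifetime might look like this; the project and agent IDs are placeholders:

{
  // CX agent; '-' selects the default draft environment.
  "agent": "projects/my-project/locations/global/agents/my-agent/environments/-",
  // One hour, within the 1-day maximum.
  "sessionTtl": "3600s"
}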

HumanAgentAssistantConfig

Defines the Human Agent Assistant to connect to a conversation.

JSON representation
{
  "notificationConfig": {
    object (NotificationConfig)
  },
  "humanAgentSuggestionConfig": {
    object (SuggestionConfig)
  },
  "endUserSuggestionConfig": {
    object (SuggestionConfig)
  },
  "messageAnalysisConfig": {
    object (MessageAnalysisConfig)
  }
}
Fields
notificationConfig

object (NotificationConfig)

Pub/Sub topic on which to publish new agent assistant events.

humanAgentSuggestionConfig

object (SuggestionConfig)

Configuration for agent assistance of the human agent participant.

endUserSuggestionConfig

object (SuggestionConfig)

Configuration for agent assistance of the end user participant.

Currently, this feature is not generally available; please contact Google to get access.

messageAnalysisConfig

object (MessageAnalysisConfig)

Configuration for message analysis.

NotificationConfig

Defines notification behavior.

JSON representation
{
  "topic": string,
  "messageFormat": enum (MessageFormat)
}
Fields
topic

string

Name of the Pub/Sub topic to publish conversation events like CONVERSATION_STARTED as serialized ConversationEvent protos.

For telephony integration to receive notifications, make sure either this topic is in the same project as the conversation, or that you grant service-<Conversation Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com the Dialogflow Service Agent role in the topic project.

For chat integration to receive notifications, make sure the API caller has been granted the Dialogflow Service Agent role for the topic.

Format: projects/<Project ID>/locations/<Location ID>/topics/<Topic ID>.

messageFormat

enum (MessageFormat)

Format of the message.
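
For example, a NotificationConfig that publishes serialized ConversationEvent protos to a hypothetical topic could look like:

{
  "topic": "projects/my-project/locations/global/topics/lifecycle-events",
  "messageFormat": "PROTO"
}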

MessageFormat

Format of the Cloud Pub/Sub message.

Enums
MESSAGE_FORMAT_UNSPECIFIED If unspecified, PROTO will be used.
PROTO The Pub/Sub message will be a serialized proto.
JSON The Pub/Sub message will be JSON.

SuggestionConfig

Detailed human agent assistant config.

JSON representation
{
  "featureConfigs": [
    {
      object (SuggestionFeatureConfig)
    }
  ],
  "groupSuggestionResponses": boolean,
  "generators": [
    string
  ],
  "disableHighLatencyFeaturesSyncDelivery": boolean
}
Fields
featureConfigs[]

object (SuggestionFeatureConfig)

Configuration of different suggestion features. One feature can have only one config.

groupSuggestionResponses

boolean

If groupSuggestionResponses is false, and there are multiple featureConfigs in event-based suggestion or StreamingAnalyzeContent, suggestions are delivered to customers as soon as a new suggestion is available. Different types of suggestions based on the same context will be in separate Pub/Sub events or StreamingAnalyzeContentResponses.

If groupSuggestionResponses is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.

generators[]

string

Optional. List of various generator resource names used in the conversation profile.

disableHighLatencyFeaturesSyncDelivery

boolean

Optional. When disableHighLatencyFeaturesSyncDelivery is true and the AnalyzeContent API is used, responses from high-latency features are not delivered in the API response. To receive responses from high-latency features via Pub/Sub instead, humanAgentAssistantConfig.notification_config must be configured and enableEventBasedSuggestion must be set to true. High-latency feature(s): KNOWLEDGE_ASSIST.
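
Putting these fields together, a SuggestionConfig that enables two features and groups their suggestions into single responses might be sketched as follows; the knowledge base and document paths are placeholders, and the per-feature options are described under SuggestionFeatureConfig below:

{
  "featureConfigs": [
    {
      "suggestionFeature": { "type": "ARTICLE_SUGGESTION" },
      "queryConfig": {
        "knowledgeBaseQuerySource": {
          "knowledgeBases": [
            "projects/my-project/locations/global/knowledgeBases/my-kb"
          ]
        }
      }
    },
    {
      "suggestionFeature": { "type": "SMART_REPLY" },
      "queryConfig": {
        "documentQuerySource": {
          "documents": [
            "projects/my-project/locations/global/knowledgeBases/my-kb/documents/my-doc"
          ]
        }
      }
    }
  ],
  "groupSuggestionResponses": true
}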

SuggestionFeatureConfig

Config for suggestion features.

JSON representation
{
  "suggestionFeature": {
    object (SuggestionFeature)
  },
  "enableEventBasedSuggestion": boolean,
  "disableAgentQueryLogging": boolean,
  "enableQuerySuggestionWhenNoAnswer": boolean,
  "enableConversationAugmentedQuery": boolean,
  "enableQuerySuggestionOnly": boolean,
  "suggestionTriggerSettings": {
    object (SuggestionTriggerSettings)
  },
  "queryConfig": {
    object (SuggestionQueryConfig)
  },
  "conversationModelConfig": {
    object (ConversationModelConfig)
  },
  "conversationProcessConfig": {
    object (ConversationProcessConfig)
  }
}
Fields
suggestionFeature

object (SuggestionFeature)

The suggestion feature.

enableEventBasedSuggestion

boolean

Automatically iterates over all participants and tries to compile suggestions.

Supported features: ARTICLE_SUGGESTION, FAQ, DIALOGFLOW_ASSIST, ENTITY_EXTRACTION, KNOWLEDGE_ASSIST.

disableAgentQueryLogging

boolean

Optional. Disable the logging of search queries sent by human agents. This can prevent those queries from being stored in answer records.

Supported features: KNOWLEDGE_SEARCH.

enableQuerySuggestionWhenNoAnswer

boolean

Optional. Enable query suggestion even if no answer can be found for the query. By default, queries are suggested only if an answer is found. Supported features: KNOWLEDGE_ASSIST

enableConversationAugmentedQuery

boolean

Optional. Enable including conversation context during query answer generation. Supported features: KNOWLEDGE_SEARCH.

enableQuerySuggestionOnly

boolean

Optional. Enable query suggestion only. Supported features: KNOWLEDGE_ASSIST

suggestionTriggerSettings

object (SuggestionTriggerSettings)

Settings of suggestion trigger.

Currently, only ARTICLE_SUGGESTION, FAQ, and DIALOGFLOW_ASSIST will use this field.

queryConfig

object (SuggestionQueryConfig)

Configs of query.

conversationModelConfig

object (ConversationModelConfig)

Configs of custom conversation model.

conversationProcessConfig

object (ConversationProcessConfig)

Configs for processing conversation.
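
For instance, a SuggestionFeatureConfig that enables event-based FAQ suggestions, triggered only by non-small-talk end-user messages, might look like this sketch (the knowledge base path is a placeholder):

{
  "suggestionFeature": { "type": "FAQ" },
  "enableEventBasedSuggestion": true,
  "suggestionTriggerSettings": {
    "noSmallTalk": true,
    "onlyEndUser": true
  },
  "queryConfig": {
    "knowledgeBaseQuerySource": {
      "knowledgeBases": [
        "projects/my-project/locations/global/knowledgeBases/my-kb"
      ]
    }
  }
}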

SuggestionFeature

The type of Human Agent Assistant API suggestion to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.

JSON representation
{
  "type": enum (Type)
}
Fields
type

enum (Type)

Type of Human Agent Assistant API feature to request.

Type

Defines the type of Human Agent Assistant feature.

Enums
TYPE_UNSPECIFIED Unspecified feature type.
ARTICLE_SUGGESTION Run article suggestion model for chat.
FAQ Run FAQ model.
SMART_REPLY Run smart reply model for chat.
DIALOGFLOW_ASSIST Run Dialogflow assist model for chat, which will return the automated agent response as a suggestion.
CONVERSATION_SUMMARIZATION Run conversation summarization model for chat.
KNOWLEDGE_ASSIST Run knowledge assist with automatic query generation.

SuggestionTriggerSettings

Settings of suggestion trigger.

JSON representation
{
  "noSmallTalk": boolean,
  "onlyEndUser": boolean
}
Fields
noSmallTalk

boolean

Do not trigger if last utterance is small talk.

onlyEndUser

boolean

Only trigger suggestion if participant role of last utterance is END_USER.

SuggestionQueryConfig

Config for suggestion query.

JSON representation
{
  "maxResults": integer,
  "confidenceThreshold": number,
  "contextFilterSettings": {
    object (ContextFilterSettings)
  },
  "sections": {
    object (Sections)
  },
  "contextSize": integer,

  // Union field query_source can be only one of the following:
  "knowledgeBaseQuerySource": {
    object (KnowledgeBaseQuerySource)
  },
  "documentQuerySource": {
    object (DocumentQuerySource)
  },
  "dialogflowQuerySource": {
    object (DialogflowQuerySource)
  }
  // End of list of possible types for union field query_source.
}
Fields
maxResults

integer

Maximum number of results to return. If unset, defaults to 10. The maximum allowed value is 20.

confidenceThreshold

number

Confidence threshold of query result.

Agent Assist gives each suggestion a score in the range [0.0, 1.0], based on the relevance between the suggestion and the current conversation context. A score of 0.0 has no relevance, while a score of 1.0 has high relevance. Only suggestions with a score greater than or equal to the value of this field are included in the results.

For a baseline model (the default), the recommended value is in the range [0.05, 0.1].

For a custom model, there is no recommended value. Tune this value by starting from a very low value and slowly increasing until you have desired results.

If this field is not set, it defaults to 0.0, which means that all suggestions are returned.

Supported features: ARTICLE_SUGGESTION, FAQ, SMART_REPLY, SMART_COMPOSE, KNOWLEDGE_SEARCH, KNOWLEDGE_ASSIST, ENTITY_EXTRACTION.

contextFilterSettings

object (ContextFilterSettings)

Determines how recent conversation context is filtered when generating suggestions. If unspecified, no messages will be dropped.

sections

object (Sections)

Optional. The customized sections chosen to return when requesting a summary of a conversation.

contextSize

integer

Optional. The number of recent messages to include in the context. Supported features: KNOWLEDGE_ASSIST.

Union field query_source. Source of query. query_source can be only one of the following:
knowledgeBaseQuerySource

object (KnowledgeBaseQuerySource)

Query from knowledge base. It is used by: ARTICLE_SUGGESTION, FAQ.

documentQuerySource

object (DocumentQuerySource)

Query from knowledge base document. It is used by: SMART_REPLY, SMART_COMPOSE.

dialogflowQuerySource

object (DialogflowQuerySource)

Query from Dialogflow agent. It is used by: DIALOGFLOW_ASSIST, ENTITY_EXTRACTION.
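
As an illustration, a SuggestionQueryConfig for ARTICLE_SUGGESTION with a baseline-model confidence threshold and a knowledge base source might be sketched as follows (the knowledge base path is a placeholder):

{
  "maxResults": 5,
  // Within the recommended [0.05, 0.1] range for a baseline model.
  "confidenceThreshold": 0.1,
  "knowledgeBaseQuerySource": {
    "knowledgeBases": [
      "projects/my-project/locations/global/knowledgeBases/my-kb"
    ]
  }
}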

KnowledgeBaseQuerySource

Knowledge base source settings.

Supported features: ARTICLE_SUGGESTION, FAQ.

JSON representation
{
  "knowledgeBases": [
    string
  ]
}
Fields
knowledgeBases[]

string

Required. Knowledge bases to query. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>. Currently, only one knowledge base is supported.

DocumentQuerySource

Document source settings.

Supported features: SMART_REPLY, SMART_COMPOSE.

JSON representation
{
  "documents": [
    string
  ]
}
Fields
documents[]

string

Required. Knowledge documents to query from. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<KnowledgeBase ID>/documents/<Document ID>. Currently, only one document is supported.

DialogflowQuerySource

Dialogflow source settings.

Supported features: DIALOGFLOW_ASSIST, ENTITY_EXTRACTION.

JSON representation
{
  "agent": string,
  "humanAgentSideConfig": {
    object (HumanAgentSideConfig)
  }
}
Fields
agent

string

Required. The name of a Dialogflow virtual agent used for end user side intent detection and suggestion. Format: projects/<Project ID>/locations/<Location ID>/agent, when multiple agents are allowed in the same Dialogflow project.

humanAgentSideConfig

object (HumanAgentSideConfig)

The Dialogflow assist configuration for the human agent.

HumanAgentSideConfig

The configuration used for human agent side Dialogflow assist suggestion.

JSON representation
{
  "agent": string
}
Fields
agent

string

Optional. The name of a Dialogflow virtual agent used for intent detection and suggestion triggered by the human agent. Format: projects/<Project ID>/locations/<Location ID>/agent.
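
For example, a dialogflowQuerySource that also configures a human-agent-side agent might look like this sketch; both agent paths are hypothetical placeholders:

{
  "agent": "projects/my-project/locations/global/agent",
  "humanAgentSideConfig": {
    "agent": "projects/my-assist-project/locations/global/agent"
  }
}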

ContextFilterSettings

Settings that determine how to filter recent conversation context when generating suggestions.

JSON representation
{
  "dropHandoffMessages": boolean,
  "dropVirtualAgentMessages": boolean,
  "dropIvrMessages": boolean
}
Fields
dropHandoffMessages

boolean

If set to true, the last message from the virtual agent (the handoff message) and the message before it (the trigger message of the handoff) are dropped.

dropVirtualAgentMessages

boolean

If set to true, all messages from the virtual agent are dropped.

dropIvrMessages

boolean

If set to true, all messages from the IVR stage are dropped.
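
For example, to generate suggestions only from end-user and human agent messages, dropping everything produced before or during automation:

{
  "dropHandoffMessages": true,
  "dropVirtualAgentMessages": true,
  "dropIvrMessages": true
}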

Sections

Custom sections to return when requesting a summary of a conversation. This is only supported when baselineModelVersion == '2.0'.

Supported features: CONVERSATION_SUMMARIZATION, CONVERSATION_SUMMARIZATION_VOICE.

JSON representation
{
  "sectionTypes": [
    enum (SectionType)
  ]
}
Fields
sectionTypes[]

enum (SectionType)

The sections selected to return when requesting a summary of a conversation. A duplicate selected section will be treated as a single selected section. If section types are not provided, the default will be {SITUATION, ACTION, RESULT}.
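
For example, to request situation, action, and resolution sections plus extracted entities:

{
  "sectionTypes": [
    "SITUATION",
    "ACTION",
    "RESOLUTION",
    "ENTITIES"
  ]
}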

SectionType

Selectable sections to return when requesting a summary of a conversation.

Enums
SECTION_TYPE_UNSPECIFIED Undefined section type, does not return anything.
SITUATION What the customer needs help with or has questions about. Section name: "situation".
ACTION What the agent does to help the customer. Section name: "action".
RESOLUTION Result of the customer service interaction. A single word describing the result of the conversation. Section name: "resolution".
REASON_FOR_CANCELLATION Reason for cancellation if the customer requests a cancellation. "N/A" otherwise. Section name: "reason_for_cancellation".
CUSTOMER_SATISFACTION "Unsatisfied" or "Satisfied" depending on the customer's feelings at the end of the conversation. Section name: "customer_satisfaction".
ENTITIES Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by "entities/".

ConversationModelConfig

Custom conversation models used in the agent assist feature.

Supported features: ARTICLE_SUGGESTION, SMART_COMPOSE, SMART_REPLY, CONVERSATION_SUMMARIZATION.

JSON representation
{
  "model": string,
  "baselineModelVersion": string
}
Fields
model

string

Conversation model resource name. Format: projects/<Project ID>/conversationModels/<Model ID>.

baselineModelVersion

string

Version of the current baseline model. It will be ignored if model is set. Valid versions are:

Article Suggestion baseline model:
  • 0.9
  • 1.0 (default)

Summarization baseline model:
  • 1.0
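
For example, to pin the Article Suggestion feature to the default baseline version:

{
  // Ignored if "model" is set to a custom model resource name.
  "baselineModelVersion": "1.0"
}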

ConversationProcessConfig

Config to process conversation.

JSON representation
{
  "recentSentencesCount": integer
}
Fields
recentSentencesCount

integer

Number of recent non-small-talk sentences to use as context for article and FAQ suggestions.

MessageAnalysisConfig

Configuration for analyses to run on each conversation message.

JSON representation
{
  "enableEntityExtraction": boolean,
  "enableSentimentAnalysis": boolean
}
Fields
enableEntityExtraction

boolean

Enable entity extraction in conversation messages on the agent assist stage. If unspecified, defaults to false.

Currently, this feature is not generally available; please contact Google to get access.

enableSentimentAnalysis

boolean

Enable sentiment analysis in conversation messages on the agent assist stage. If unspecified, defaults to false. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral: https://cloud.google.com/natural-language/docs/basics#sentimentAnalysis

  • For the Participants.StreamingAnalyzeContent method, the result will be in StreamingAnalyzeContentResponse.message.SentimentAnalysisResult.
  • For the Participants.AnalyzeContent method, the result will be in AnalyzeContentResponse.message.SentimentAnalysisResult.
  • For the Conversations.ListMessages method, the result will be in ListMessagesResponse.messages.SentimentAnalysisResult.
  • If Pub/Sub notification is configured, the result will be in ConversationEvent.new_message_payload.SentimentAnalysisResult.
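
For example, to enable sentiment analysis while leaving entity extraction (not generally available) disabled:

{
  "enableEntityExtraction": false,
  "enableSentimentAnalysis": true
}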

HumanAgentHandoffConfig

Defines the handoff to a live agent; typically, which external agent service provider to connect to the conversation.

Currently, this feature is not generally available; please contact Google to get access.

JSON representation
{

  // Union field agent_service can be only one of the following:
  "livePersonConfig": {
    object (LivePersonConfig)
  },
  "salesforceLiveAgentConfig": {
    object (SalesforceLiveAgentConfig)
  }
  // End of list of possible types for union field agent_service.
}
Fields
Union field agent_service. Required. Specifies which agent service to connect for human agent handoff. agent_service can be only one of the following:
livePersonConfig

object (LivePersonConfig)

Uses LivePerson.

salesforceLiveAgentConfig

object (SalesforceLiveAgentConfig)

Uses Salesforce Live Agent.

LivePersonConfig

Configuration specific to LivePerson.

JSON representation
{
  "accountNumber": string
}
Fields
accountNumber

string

Required. Account number of the LivePerson account to connect. This is the account number you input at the login page.

SalesforceLiveAgentConfig

Configuration specific to Salesforce Live Agent.

JSON representation
{
  "organizationId": string,
  "deploymentId": string,
  "buttonId": string,
  "endpointDomain": string
}
Fields
organizationId

string

Required. The organization ID of the Salesforce account.

deploymentId

string

Required. Live Agent deployment ID.

buttonId

string

Required. Live Agent chat button ID.

endpointDomain

string

Required. Domain of the Live Agent endpoint for this agent. You can find the endpoint URL in the Live Agent settings page. For example, if the URL has the form https://d.la4-c2-phx.salesforceliveagent.com/..., you should fill in d.la4-c2-phx.salesforceliveagent.com.
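
Putting it together, a HumanAgentHandoffConfig for Salesforce Live Agent might look like the following sketch; all IDs and the endpoint domain are hypothetical placeholders:

{
  "salesforceLiveAgentConfig": {
    "organizationId": "00D000000000001",
    "deploymentId": "572000000000001",
    "buttonId": "573000000000001",
    "endpointDomain": "d.la4-c2-phx.salesforceliveagent.com"
  }
}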

LoggingConfig

Defines logging behavior for conversation lifecycle events.

JSON representation
{
  "enableStackdriverLogging": boolean
}
Fields
enableStackdriverLogging

boolean

Whether to log conversation events like CONVERSATION_STARTED to Stackdriver in the conversation project as JSON-formatted ConversationEvent protos.

SpeechToTextConfig

Configures speech transcription for ConversationProfile.

JSON representation
{
  "speechModelVariant": enum (SpeechModelVariant),
  "model": string,
  "phraseSets": [
    string
  ],
  "audioEncoding": enum (AudioEncoding),
  "sampleRateHertz": integer,
  "languageCode": string,
  "enableWordInfo": boolean,
  "useTimeoutBasedEndpointing": boolean
}
Fields
speechModelVariant

enum (SpeechModelVariant)

The speech model used in speech-to-text. SPEECH_MODEL_VARIANT_UNSPECIFIED and USE_BEST_AVAILABLE will be treated as USE_ENHANCED. This can be overridden in AnalyzeContentRequest and StreamingAnalyzeContentRequest requests. If an enhanced model variant is specified and an enhanced version of the specified model for the language does not exist, an error is emitted.

model

string

Which Speech model to select. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, Dialogflow auto-selects a model based on the other parameters in the SpeechToTextConfig and Agent settings. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search

Leave this field unspecified to use Agent Speech settings for model selection.

phraseSets[]

string

List of names of Cloud Speech phrase sets that are used for transcription.

audioEncoding

enum (AudioEncoding)

Audio encoding of the audio content to process.

sampleRateHertz

integer

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

languageCode

string

The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

enableWordInfo

boolean

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

useTimeoutBasedEndpointing

boolean

Use timeout-based endpointing, interpreting endpointer sensitivity as seconds of timeout value.
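
As a sketch, a telephony-oriented SpeechToTextConfig might look like this; the phrase set path is a hypothetical placeholder:

{
  "speechModelVariant": "USE_ENHANCED",
  // phone_call is typically best for Agent Assist and telephony.
  "model": "phone_call",
  "audioEncoding": "AUDIO_ENCODING_MULAW",
  "sampleRateHertz": 8000,
  "languageCode": "en-US",
  "enableWordInfo": true,
  "phraseSets": [
    "projects/my-project/locations/global/phraseSets/my-phrase-set"
  ]
}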

SpeechModelVariant

Variant of the specified Speech model to use.

See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.

Enums
SPEECH_MODEL_VARIANT_UNSPECIFIED No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.
USE_BEST_AVAILABLE

Use the best available variant of the Speech model (InputAudioConfig.model) that the caller is eligible for.

Please see the Dialogflow docs for how to make your project eligible for enhanced models.

USE_STANDARD Use standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models.
USE_ENHANCED

Use an enhanced model variant:

  • If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant. The Cloud Speech documentation describes which models have enhanced variants.

  • If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the Dialogflow docs for how to make your project eligible.

AudioEncoding

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.

Enums
AUDIO_ENCODING_UNSPECIFIED Not specified.
AUDIO_ENCODING_LINEAR_16 Uncompressed 16-bit signed little-endian samples (Linear PCM).
AUDIO_ENCODING_FLAC FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples, however, not all fields in STREAMINFO are supported.
AUDIO_ENCODING_MULAW 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
AUDIO_ENCODING_AMR Adaptive Multi-Rate Narrowband codec. sampleRateHertz must be 8000.
AUDIO_ENCODING_AMR_WB Adaptive Multi-Rate Wideband codec. sampleRateHertz must be 16000.
AUDIO_ENCODING_OGG_OPUS Opus encoded audio frames in Ogg container (OggOpus). sampleRateHertz must be 16000.
AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sampleRateHertz must be 16000.
AUDIO_ENCODING_ALAW 8-bit samples that compand 13-bit audio samples using G.711 PCMA/A-law.

Methods

clearSuggestionFeatureConfig

Clears a suggestion feature from a conversation profile for the given participant role.

create

Creates a conversation profile in the specified project.

delete

Deletes the specified conversation profile.

get

Retrieves the specified conversation profile.

list

Returns the list of all conversation profiles in the specified project.

patch

Updates the specified conversation profile.

setSuggestionFeatureConfig

Adds or updates a suggestion feature in a conversation profile.