    public sealed class InputAudioConfig : IMessage<InputAudioConfig>, IEquatable<InputAudioConfig>, IDeepCloneable<InputAudioConfig>, IBufferMessage, IMessage

Reference documentation and code samples for the Google Cloud Dialogflow v2beta1 API class InputAudioConfig.

Instructs the speech recognizer on how to process the audio content.
### DisableNoSpeechRecognizedEvent

    public bool DisableNoSpeechRecognizedEvent { get; set; }

Only used in
[Participants.AnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.AnalyzeContent]
and
[Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent].
If `false` and recognition doesn't return any result, a `NO_SPEECH_RECOGNIZED`
event is triggered and sent to the Dialogflow agent.
### EnableWordInfo

    public bool EnableWordInfo { get; set; }

If `true`, Dialogflow returns
[SpeechWordInfo][google.cloud.dialogflow.v2beta1.SpeechWordInfo] in
[StreamingRecognitionResult][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult]
with information about the recognized speech words, e.g. start and end time
offsets. If `false` or unspecified, Speech doesn't return any word-level
information.
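As a short illustration (not taken from the reference itself), a config requesting word-level info might be built as follows; the encoding, sample rate, and language values here are placeholder assumptions:

```csharp
using System;
using Google.Cloud.Dialogflow.V2Beta1;

// Sketch: request word-level timing information from the recognizer.
var audioConfig = new InputAudioConfig
{
    AudioEncoding = AudioEncoding.Linear16, // placeholder encoding
    SampleRateHertz = 16000,                // must match the audio actually sent
    LanguageCode = "en-US",
    EnableWordInfo = true,                  // ask for SpeechWordInfo in results
};

Console.WriteLine(audioConfig);
```

With `EnableWordInfo` set, each `StreamingRecognitionResult` in the streaming responses carries `SpeechWordInfo` entries from which per-word start and end time offsets can be read.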
### LanguageCode

    public string LanguageCode { get; set; }

Required. The language of the supplied audio. Dialogflow does not do
translations. See [Language
Support](https://cloud.google.com/dialogflow/docs/reference/language)
for a list of the currently supported language codes. Note that queries in
the same session do not necessarily need to specify the same language.
### SingleUtterance

    public bool SingleUtterance { get; set; }

If `false` (default), recognition does not cease until the client closes the
stream. If `true`, the recognizer detects a single spoken utterance in the
input audio. Recognition ceases when it detects that the audio's voice has
stopped or paused. In this case, once a detected intent is received, the
client should close the stream and start a new request with a new stream as
needed.

Note: This setting is relevant only for streaming methods.

Note: When specified, `InputAudioConfig.single_utterance` takes precedence
over `StreamingDetectIntentRequest.single_utterance`.
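The single-utterance flow above can be sketched with the streaming client. This is a hedged sketch, not an official sample: it assumes the V2Beta1 client mirrors the v2 `SessionsClient` surface, and `my-project` / `my-session` are placeholder identifiers.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;
using Google.Protobuf;

// Sketch: one single-utterance streaming recognition session.
var client = await SessionsClient.CreateAsync();
var stream = client.StreamingDetectIntent();

// The first request carries the session and the audio configuration.
await stream.WriteAsync(new StreamingDetectIntentRequest
{
    SessionAsSessionName = SessionName.FromProjectSession("my-project", "my-session"),
    QueryInput = new QueryInput
    {
        AudioConfig = new InputAudioConfig
        {
            AudioEncoding = AudioEncoding.Linear16, // placeholder encoding
            SampleRateHertz = 16000,
            LanguageCode = "en-US",
            // Takes precedence over StreamingDetectIntentRequest.single_utterance.
            SingleUtterance = true,
        },
    },
});

// Subsequent requests would carry only raw audio bytes, e.g.:
// await stream.WriteAsync(new StreamingDetectIntentRequest
// {
//     InputAudio = ByteString.CopyFrom(audioChunk),
// });

// Once a response containing a detected intent arrives, close this stream
// and open a new one for the next utterance, as the property description says.
await stream.WriteCompleteAsync();
```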
# Google Cloud Dialogflow v2beta1 API - Class InputAudioConfig (1.0.0-beta23)

Versions
--------

- [1.0.0-beta23 (latest)](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.InputAudioConfig)
- [1.0.0-beta22](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/1.0.0-beta22/Google.Cloud.Dialogflow.V2Beta1.InputAudioConfig)

Overview
--------

- The `InputAudioConfig` class within the Google Cloud Dialogflow v2beta1 API configures how the speech recognizer processes audio content, including settings such as audio encoding, sample rate, and language.
- The class provides properties to fine-tune speech recognition, including `AudioEncoding`, `SampleRateHertz`, `LanguageCode`, and `Model`, which specify the audio format, quality, language, and speech model used.
- `InputAudioConfig` supports advanced speech recognition features such as enabling automatic punctuation, retrieving word-level information with `EnableWordInfo`, and adapting to specific phrases using `PhraseHints` or `SpeechContexts`.
- The `SingleUtterance` property controls whether the recognizer detects a single spoken utterance and then stops, or continues listening until the stream is closed, which is useful for managing user input in streaming scenarios.
- The class also allows customization of `BargeInConfig`, which defines how the recognizer behaves when there is an interruption during the processing of the input audio.

Inheritance
-----------

[object](https://learn.microsoft.com/dotnet/api/system.object) > InputAudioConfig

Implements
----------

[IMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IMessage-1.html)\<[InputAudioConfig](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.InputAudioConfig)\>, [IEquatable](https://learn.microsoft.com/dotnet/api/system.iequatable-1)\<InputAudioConfig\>, [IDeepCloneable](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IDeepCloneable-1.html)\<InputAudioConfig\>, [IBufferMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IBufferMessage.html), [IMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IMessage.html)

Inherited Members
-----------------

[object.GetHashCode()](https://learn.microsoft.com/dotnet/api/system.object.gethashcode)
[object.GetType()](https://learn.microsoft.com/dotnet/api/system.object.gettype)
[object.ToString()](https://learn.microsoft.com/dotnet/api/system.object.tostring)

Namespace
---------

[Google.Cloud.Dialogflow.V2Beta1](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1)

Assembly
--------

Google.Cloud.Dialogflow.V2Beta1.dll

Constructors
------------

### InputAudioConfig()

    public InputAudioConfig()

### InputAudioConfig(InputAudioConfig)

    public InputAudioConfig(InputAudioConfig other)

Properties
----------

The `DisableNoSpeechRecognizedEvent`, `EnableWordInfo`, `LanguageCode`, and `SingleUtterance` properties are documented above; the remaining properties follow.

### AudioEncoding

    public AudioEncoding AudioEncoding { get; set; }

Required. Audio encoding of the audio content to process.

### BargeInConfig

    public BargeInConfig BargeInConfig { get; set; }

Configuration of barge-in behavior during the streaming of input audio.

### DefaultNoSpeechTimeout

    public Duration DefaultNoSpeechTimeout { get; set; }

If set, use this no-speech timeout when the agent does not provide a
no-speech timeout itself.

### EnableAutomaticPunctuation

    public bool EnableAutomaticPunctuation { get; set; }

Enable automatic punctuation option at the speech backend.

### Model

    public string Model { get; set; }

Optional. Which Speech model to select for the given request.
For more information, see
[Speech models](https://cloud.google.com/dialogflow/es/docs/speech-models).

### ModelVariant

    public SpeechModelVariant ModelVariant { get; set; }

Which variant of the [Speech
model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] to use.

### OptOutConformerModelMigration

    public bool OptOutConformerModelMigration { get; set; }

If `true`, the request will opt out of the STT conformer model migration.
This field will be deprecated once force migration takes place in June 2024.
Please refer to [Dialogflow ES Speech model
migration](https://cloud.google.com/dialogflow/es/docs/speech-model-migration).

### PhraseHints

    [Obsolete]
    public RepeatedField<string> PhraseHints { get; }

A list of strings containing words and phrases that the speech
recognizer should recognize with higher likelihood.

See [the Cloud Speech
documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
for more details.

This field is deprecated. Please use `speech_contexts` instead. If you
specify both `phrase_hints` and `speech_contexts`, Dialogflow will
treat the `phrase_hints` as a single additional `SpeechContext`.

### PhraseSets

    public RepeatedField<string> PhraseSets { get; }

A collection of phrase set resources to use for speech adaptation.

### PhraseSetsAsPhraseSetNames

    public ResourceNameList<PhraseSetName> PhraseSetsAsPhraseSetNames { get; }

[PhraseSetName](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.PhraseSetName)-typed view over the [PhraseSets](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.InputAudioConfig#Google_Cloud_Dialogflow_V2Beta1_InputAudioConfig_PhraseSets) resource name property.

### SampleRateHertz

    public int SampleRateHertz { get; set; }

Required. Sample rate (in Hertz) of the audio content sent in the query.
Refer to [Cloud Speech API
documentation](https://cloud.google.com/speech-to-text/docs/basics) for
more details.

### SpeechContexts

    public RepeatedField<SpeechContext> SpeechContexts { get; }

Context information to assist speech recognition.

See [the Cloud Speech
documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
for more details.

Last updated 2025-09-04 UTC.