Reference documentation and code samples for the Cloud Speech V2 Client class RecognitionConfig.
Provides information to the Recognizer that specifies how to process the recognition request.
Generated from protobuf message `google.cloud.speech.v2.RecognitionConfig`
Namespace

Google \ Cloud \ Speech \ V2

Methods
__construct
Constructor.
Parameters

| Name | Description |
|---|---|
| data | array<br>Optional. Data for populating the Message object. |
| ↳ auto_decoding_config | AutoDetectDecodingConfig<br>Automatically detect decoding parameters. Preferred for supported formats. |
| ↳ explicit_decoding_config | ExplicitDecodingConfig<br>Explicitly specified decoding parameters. Required if using headerless PCM audio (linear16, mulaw, alaw). |
| ↳ model | string<br>Optional. Which model to use for recognition requests. Select the model best suited to your domain to get the best results. Guidance for choosing a model can be found in the Transcription Models documentation, and the models supported in each region can be found in the Table of Supported Models. |
| ↳ language_codes | array<br>Optional. The language of the supplied audio as a BCP-47 language tag. Language tags are normalized to BCP-47 before they are used, e.g. "en-us" becomes "en-US". Supported languages for each model are listed in the Table of Supported Models. If additional languages are provided, the recognition result will contain the transcription in the most likely language detected, and will include the language tag of the language detected in the audio. |
| ↳ features | RecognitionFeatures<br>Speech recognition features to enable. |
| ↳ adaptation | SpeechAdaptation<br>Speech adaptation context that weights recognizer predictions for specific words and phrases. |
| ↳ transcript_normalization | TranscriptNormalization<br>Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts. |
| ↳ translation_config | TranslationConfig<br>Optional. Configuration used to automatically run translation on the given audio to the desired language for supported models. |
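A minimal sketch of populating the message through the constructor's `$data` array; the `long` model and US English language code below are purely illustrative choices:

```php
use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
use Google\Cloud\Speech\V2\RecognitionConfig;

// Keys in the $data array use the proto field names shown in the table above.
$config = new RecognitionConfig([
    'auto_decoding_config' => new AutoDetectDecodingConfig(),
    'model'                => 'long',     // illustrative; pick the model suited to your domain
    'language_codes'       => ['en-US'],
]);
```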
getAutoDecodingConfig
Automatically detect decoding parameters.
Preferred for supported formats.
Returns

| Type | Description |
|---|---|
| AutoDetectDecodingConfig\|null |  |
hasAutoDecodingConfig
setAutoDecodingConfig
Automatically detect decoding parameters.
Preferred for supported formats.
Parameter

| Name | Description |
|---|---|
| var | AutoDetectDecodingConfig |

Returns

| Type | Description |
|---|---|
| $this |  |
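A short sketch of selecting automatic decoding through the setter; `hasAutoDecodingConfig()` then reports that this member of the decoding oneof is populated:

```php
use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
use Google\Cloud\Speech\V2\RecognitionConfig;

$config = new RecognitionConfig();
$config->setAutoDecodingConfig(new AutoDetectDecodingConfig());

var_dump($config->hasAutoDecodingConfig()); // bool(true)
```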
getExplicitDecodingConfig
Explicitly specified decoding parameters.
Required if using headerless PCM audio (linear16, mulaw, alaw).
Returns

| Type | Description |
|---|---|
| ExplicitDecodingConfig\|null |  |
hasExplicitDecodingConfig
setExplicitDecodingConfig
Explicitly specified decoding parameters.
Required if using headerless PCM audio (linear16, mulaw, alaw).
Parameter

| Name | Description |
|---|---|
| var | ExplicitDecodingConfig |

Returns

| Type | Description |
|---|---|
| $this |  |
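A hedged sketch for headerless LINEAR16 audio; the `encoding`, `sample_rate_hertz`, and `audio_channel_count` field names come from the `ExplicitDecodingConfig` message and should be checked against that message's own reference page:

```php
use Google\Cloud\Speech\V2\ExplicitDecodingConfig;
use Google\Cloud\Speech\V2\ExplicitDecodingConfig\AudioEncoding;
use Google\Cloud\Speech\V2\RecognitionConfig;

// Headerless 16 kHz mono PCM must be described explicitly.
$config = new RecognitionConfig();
$config->setExplicitDecodingConfig(new ExplicitDecodingConfig([
    'encoding'            => AudioEncoding::LINEAR16,
    'sample_rate_hertz'   => 16000,
    'audio_channel_count' => 1,
]));
```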
getModel
Optional. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

Guidance for choosing a model can be found in the Transcription Models documentation, and the models supported in each region can be found in the Table of Supported Models.

Returns

| Type | Description |
|---|---|
| string |  |
setModel
Optional. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

Guidance for choosing a model can be found in the Transcription Models documentation, and the models supported in each region can be found in the Table of Supported Models.

Parameter

| Name | Description |
|---|---|
| var | string |

Returns

| Type | Description |
|---|---|
| $this |  |
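Because the setters return `$this`, model selection can be chained with other setters; the model name below is illustrative:

```php
use Google\Cloud\Speech\V2\RecognitionConfig;

$config = (new RecognitionConfig())
    ->setModel('long')               // illustrative model name
    ->setLanguageCodes(['en-US']);

echo $config->getModel(); // "long"
```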
getLanguageCodes
Optional. The language of the supplied audio as a BCP-47 language tag.

Language tags are normalized to BCP-47 before they are used, e.g. "en-us" becomes "en-US". Supported languages for each model are listed in the Table of Supported Models. If additional languages are provided, the recognition result will contain the transcription in the most likely language detected, and will include the language tag of the language detected in the audio.

Returns

| Type | Description |
|---|---|
| Google\Protobuf\Internal\RepeatedField |  |
setLanguageCodes
Optional. The language of the supplied audio as a BCP-47 language tag.

Language tags are normalized to BCP-47 before they are used, e.g. "en-us" becomes "en-US". Supported languages for each model are listed in the Table of Supported Models. If additional languages are provided, the recognition result will contain the transcription in the most likely language detected, and will include the language tag of the language detected in the audio.

Parameter

| Name | Description |
|---|---|
| var | string[] |

Returns

| Type | Description |
|---|---|
| $this |  |
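A brief sketch of setting and reading the language codes; `getLanguageCodes()` returns a `Google\Protobuf\Internal\RepeatedField`, which is iterable like an array:

```php
use Google\Cloud\Speech\V2\RecognitionConfig;

$config = new RecognitionConfig();
$config->setLanguageCodes(['en-US', 'es-US']);

// RepeatedField supports foreach and count().
foreach ($config->getLanguageCodes() as $code) {
    echo $code, PHP_EOL;
}
```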
getFeatures
Speech recognition features to enable.
Returns

| Type | Description |
|---|---|
| RecognitionFeatures\|null |  |
hasFeatures
clearFeatures
setFeatures
Speech recognition features to enable.
Parameter

| Name | Description |
|---|---|
| var | RecognitionFeatures |

Returns

| Type | Description |
|---|---|
| $this |  |
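A sketch of enabling a couple of recognition features; the `enable_automatic_punctuation` and `enable_word_time_offsets` field names belong to `RecognitionFeatures` and are shown here as assumptions, so consult that message's reference for the full list:

```php
use Google\Cloud\Speech\V2\RecognitionConfig;
use Google\Cloud\Speech\V2\RecognitionFeatures;

$config = (new RecognitionConfig())->setFeatures(new RecognitionFeatures([
    'enable_automatic_punctuation' => true,
    'enable_word_time_offsets'     => true,
]));
```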
getAdaptation
Speech adaptation context that weights recognizer predictions for specific words and phrases.
Returns

| Type | Description |
|---|---|
| SpeechAdaptation\|null |  |
hasAdaptation
clearAdaptation
setAdaptation
Speech adaptation context that weights recognizer predictions for specific words and phrases.
Parameter

| Name | Description |
|---|---|
| var | SpeechAdaptation |

Returns

| Type | Description |
|---|---|
| $this |  |
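A hedged sketch that attaches an existing PhraseSet resource for adaptation; the `phrase_sets`/`phrase_set` field names and the resource name are assumptions used only to illustrate the shape of `SpeechAdaptation`:

```php
use Google\Cloud\Speech\V2\RecognitionConfig;
use Google\Cloud\Speech\V2\SpeechAdaptation;
use Google\Cloud\Speech\V2\SpeechAdaptation\AdaptationPhraseSet;

$adaptation = new SpeechAdaptation([
    'phrase_sets' => [
        new AdaptationPhraseSet([
            // Hypothetical resource name of a pre-created PhraseSet.
            'phrase_set' => 'projects/my-project/locations/global/phraseSets/my-phrase-set',
        ]),
    ],
]);

$config = (new RecognitionConfig())->setAdaptation($adaptation);
```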
getTranscriptNormalization
Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
Returns

| Type | Description |
|---|---|
| TranscriptNormalization\|null |  |
hasTranscriptNormalization
clearTranscriptNormalization
setTranscriptNormalization
Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
Parameter

| Name | Description |
|---|---|
| var | TranscriptNormalization |

Returns

| Type | Description |
|---|---|
| $this |  |
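A sketch of a single replacement rule; the `entries`, `search`, and `replace` field names are drawn from the `TranscriptNormalization` message and should be treated as assumptions:

```php
use Google\Cloud\Speech\V2\RecognitionConfig;
use Google\Cloud\Speech\V2\TranscriptNormalization;
use Google\Cloud\Speech\V2\TranscriptNormalization\Entry;

// Replace "cloud speech" with "Cloud Speech" in emitted transcripts.
$config = (new RecognitionConfig())->setTranscriptNormalization(
    new TranscriptNormalization([
        'entries' => [
            new Entry([
                'search'  => 'cloud speech',
                'replace' => 'Cloud Speech',
            ]),
        ],
    ])
);
```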
getTranslationConfig
Optional. Configuration used to automatically run translation on the given audio to the desired language for supported models.

Returns

| Type | Description |
|---|---|
| TranslationConfig\|null |  |
hasTranslationConfig
clearTranslationConfig
setTranslationConfig
Optional. Configuration used to automatically run translation on the given audio to the desired language for supported models.

Parameter

| Name | Description |
|---|---|
| var | TranslationConfig |

Returns

| Type | Description |
|---|---|
| $this |  |
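A sketch that requests translated output; the `target_language` field name comes from the `TranslationConfig` message and is an assumption here:

```php
use Google\Cloud\Speech\V2\RecognitionConfig;
use Google\Cloud\Speech\V2\TranslationConfig;

$config = (new RecognitionConfig())->setTranslationConfig(
    new TranslationConfig(['target_language' => 'es-ES'])
);
```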
getDecodingConfig
Returns

| Type | Description |
|---|---|
| string |  |
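For generated oneof accessors like this one, the returned string is normally the name of the populated decoding_config member, or an empty string if none is set; a brief sketch under that assumption:

```php
use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
use Google\Cloud\Speech\V2\RecognitionConfig;

$config = new RecognitionConfig([
    'auto_decoding_config' => new AutoDetectDecodingConfig(),
]);

echo $config->getDecodingConfig(); // "auto_decoding_config"
```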