Cloud Speech-to-Text V1 API - Class Google::Cloud::Speech::V1::StreamingRecognizeRequest (v1.3.0)
Reference documentation and code samples for the Cloud Speech-to-Text V1 API class Google::Cloud::Speech::V1::StreamingRecognizeRequest.
The top-level message sent by the client for the StreamingRecognize method.
Multiple StreamingRecognizeRequest messages are sent. The first message
must contain a streaming_config message and must not contain
audio_content. All subsequent messages must contain audio_content and
must not contain a streaming_config message.
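As a minimal sketch of that sequence (the file path, chunk size, and recognition settings are illustrative assumptions, and the client call assumes the generated Google::Cloud::Speech::V1::Speech::Client#streaming_recognize method, which accepts an enumerable of requests):

    require "google/cloud/speech/v1"

    # First request: configuration only, no audio_content.
    config_request = Google::Cloud::Speech::V1::StreamingRecognizeRequest.new(
      streaming_config: Google::Cloud::Speech::V1::StreamingRecognitionConfig.new(
        config: Google::Cloud::Speech::V1::RecognitionConfig.new(
          encoding:          :LINEAR16,
          sample_rate_hertz: 16_000,
          language_code:     "en-US"
        )
      )
    )

    # Subsequent requests: audio chunks only, no streaming_config.
    requests = Enumerator.new do |yielder|
      yielder << config_request
      File.open("path/to/audio.raw", "rb") do |file|
        while (chunk = file.read(32_768))
          yielder << Google::Cloud::Speech::V1::StreamingRecognizeRequest.new(audio_content: chunk)
        end
      end
    end

    client = Google::Cloud::Speech::V1::Speech::Client.new
    client.streaming_recognize(requests).each do |response|
      response.results.each { |result| puts result.alternatives.first&.transcript }
    end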
Inherits
Object
Extended By
Google::Protobuf::MessageExts::ClassMethods
Includes
Google::Protobuf::MessageExts
Methods
#audio_content
def audio_content() -> ::String
Returns
(::String) — The audio data to be recognized. Sequential chunks of audio data are sent
in sequential StreamingRecognizeRequest messages. The first
StreamingRecognizeRequest message must not contain audio_content data
and all subsequent StreamingRecognizeRequest messages must contain
audio_content data. The audio bytes must be encoded as specified in
RecognitionConfig. Note: as with all bytes fields, proto buffers use a
pure binary representation (not base64). See
content limits (https://cloud.google.com/speech-to-text/quotas#content).
Note: The following fields are mutually exclusive: audio_content, streaming_config. If a field in that set is populated, all other fields in the set will automatically be cleared.
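Because the two fields share a oneof, populating one clears the other. A minimal illustration (the byte string is a placeholder):

    require "google/cloud/speech/v1"

    request = Google::Cloud::Speech::V1::StreamingRecognizeRequest.new
    request.streaming_config = Google::Cloud::Speech::V1::StreamingRecognitionConfig.new
    request.audio_content = "\x00\x01\x02"  # populating audio_content clears streaming_config
    request.streaming_config                # => nil
    request.audio_content                   # => "\x00\x01\x02"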
#audio_content=
def audio_content=(value) -> ::String
Parameter
value (::String) — The audio data to be recognized. Sequential chunks of audio data are sent
in sequential StreamingRecognizeRequest messages. The first
StreamingRecognizeRequest message must not contain audio_content data
and all subsequent StreamingRecognizeRequest messages must contain
audio_content data. The audio bytes must be encoded as specified in
RecognitionConfig. Note: as with all bytes fields, proto buffers use a
pure binary representation (not base64). See
content limits (https://cloud.google.com/speech-to-text/quotas#content).
Note: The following fields are mutually exclusive: audio_content, streaming_config. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
(::String) — The audio data to be recognized. Sequential chunks of audio data are sent
in sequential StreamingRecognizeRequest messages. The first
StreamingRecognizeRequest message must not contain audio_content data
and all subsequent StreamingRecognizeRequest messages must contain
audio_content data. The audio bytes must be encoded as specified in
RecognitionConfig. Note: as with all bytes fields, proto buffers use a
pure binary representation (not base64). See
content limits (https://cloud.google.com/speech-to-text/quotas#content).
Note: The following fields are mutually exclusive: audio_content, streaming_config. If a field in that set is populated, all other fields in the set will automatically be cleared.
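Since the field expects raw bytes rather than base64 text, audio that arrives base64-encoded (for example, in a JSON payload) would need to be decoded before assignment. A hypothetical sketch, where the short literal stands in for real encoded audio:

    require "base64"
    require "google/cloud/speech/v1"

    # "AAEC" stands in for base64-encoded audio; it decodes to the bytes \x00\x01\x02.
    encoded_audio = "AAEC"
    raw_bytes = Base64.decode64(encoded_audio)

    request = Google::Cloud::Speech::V1::StreamingRecognizeRequest.new(audio_content: raw_bytes)
    request.audio_content.bytesize  # => 3, the size of the raw chunk, not of the Base64 text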
#streaming_config
def streaming_config() -> ::Google::Cloud::Speech::V1::StreamingRecognitionConfig
Returns
(::Google::Cloud::Speech::V1::StreamingRecognitionConfig) — Provides information to the recognizer that specifies how to process the
request. The first StreamingRecognizeRequest message must contain a
streaming_config message.
Note: The following fields are mutually exclusive: streaming_config, audio_content. If a field in that set is populated, all other fields in the set will automatically be cleared.
#streaming_config=
def streaming_config=(value) -> ::Google::Cloud::Speech::V1::StreamingRecognitionConfig
Parameter
value (::Google::Cloud::Speech::V1::StreamingRecognitionConfig) — Provides information to the recognizer that specifies how to process the
request. The first StreamingRecognizeRequest message must contain a
streaming_config message.
Note: The following fields are mutually exclusive: streaming_config, audio_content. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
(::Google::Cloud::Speech::V1::StreamingRecognitionConfig) — Provides information to the recognizer that specifies how to process the
request. The first StreamingRecognizeRequest message must contain a
streaming_config message.
Note: The following fields are mutually exclusive: streaming_config, audio_content. If a field in that set is populated, all other fields in the set will automatically be cleared.
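A sketch of populating this field for the first request on the stream; the recognition settings and the interim_results flag are illustrative assumptions:

    require "google/cloud/speech/v1"

    streaming_config = Google::Cloud::Speech::V1::StreamingRecognitionConfig.new(
      config: Google::Cloud::Speech::V1::RecognitionConfig.new(
        encoding:          :LINEAR16,
        sample_rate_hertz: 16_000,
        language_code:     "en-US"
      ),
      interim_results: true  # stream provisional hypotheses in addition to final results
    )

    # This message must be the first StreamingRecognizeRequest sent on the stream.
    first_request = Google::Cloud::Speech::V1::StreamingRecognizeRequest.new(
      streaming_config: streaming_config
    )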
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Cloud Speech-to-Text V1 API - Class Google::Cloud::Speech::V1::StreamingRecognizeRequest (v1.3.0)\n\nVersion latestkeyboard_arrow_down\n\n- [1.3.0 (latest)](/ruby/docs/reference/google-cloud-speech-v1/latest/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [1.2.1](/ruby/docs/reference/google-cloud-speech-v1/1.2.1/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [1.1.0](/ruby/docs/reference/google-cloud-speech-v1/1.1.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [1.0.1](/ruby/docs/reference/google-cloud-speech-v1/1.0.1/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.17.0](/ruby/docs/reference/google-cloud-speech-v1/0.17.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.16.2](/ruby/docs/reference/google-cloud-speech-v1/0.16.2/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.15.0](/ruby/docs/reference/google-cloud-speech-v1/0.15.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.14.0](/ruby/docs/reference/google-cloud-speech-v1/0.14.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.13.1](/ruby/docs/reference/google-cloud-speech-v1/0.13.1/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.12.1](/ruby/docs/reference/google-cloud-speech-v1/0.12.1/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.11.0](/ruby/docs/reference/google-cloud-speech-v1/0.11.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.10.2](/ruby/docs/reference/google-cloud-speech-v1/0.10.2/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.9.0](/ruby/docs/reference/google-cloud-speech-v1/0.9.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.8.0](/ruby/docs/reference/google-cloud-speech-v1/0.8.0/Google-Cloud-Speech-V1-StreamingRecognizeRequest)\n- [0.7.3](/ruby/docs/reference/google-cloud-speech-v1/0.7.3/Google-Cloud-Speech-V1-StreamingRecognizeRequest) \nReference documentation and code samples for the Cloud Speech-to-Text V1 API class Google::Cloud::Speech::V1::StreamingRecognizeRequest.\n\nThe top-level message sent by the client for the `StreamingRecognize` method.\nMultiple `StreamingRecognizeRequest` messages are sent. The first message\nmust contain a `streaming_config` message and must not contain\n`audio_content`. All subsequent messages must contain `audio_content` and\nmust not contain a `streaming_config` message. \n\nInherits\n--------\n\n- Object \n\nExtended By\n-----------\n\n- Google::Protobuf::MessageExts::ClassMethods \n\nIncludes\n--------\n\n- Google::Protobuf::MessageExts\n\nMethods\n-------\n\n### #audio_content\n\n def audio_content() -\u003e ::String\n\n**Returns**\n\n- (::String) --- The audio data to be recognized. Sequential chunks of audio data are sent in sequential `StreamingRecognizeRequest` messages. The first `StreamingRecognizeRequest` message must not contain `audio_content` data and all subsequent `StreamingRecognizeRequest` messages must contain `audio_content` data. The audio bytes must be encoded as specified in `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64). 
See [content limits](https://cloud.google.com/speech-to-text/quotas#content).\n\n\n Note: The following fields are mutually exclusive: `audio_content`, `streaming_config`. If a field in that set is populated, all other fields in the set will automatically be cleared.\n\n### #audio_content=\n\n def audio_content=(value) -\u003e ::String\n\n**Parameter**\n\n- **value** (::String) --- The audio data to be recognized. Sequential chunks of audio data are sent in sequential `StreamingRecognizeRequest` messages. The first `StreamingRecognizeRequest` message must not contain `audio_content` data and all subsequent `StreamingRecognizeRequest` messages must contain `audio_content` data. The audio bytes must be encoded as specified in `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64). See [content limits](https://cloud.google.com/speech-to-text/quotas#content).\n\n\nNote: The following fields are mutually exclusive: `audio_content`, `streaming_config`. If a field in that set is populated, all other fields in the set will automatically be cleared. \n**Returns**\n\n- (::String) --- The audio data to be recognized. Sequential chunks of audio data are sent in sequential `StreamingRecognizeRequest` messages. The first `StreamingRecognizeRequest` message must not contain `audio_content` data and all subsequent `StreamingRecognizeRequest` messages must contain `audio_content` data. The audio bytes must be encoded as specified in `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64). See [content limits](https://cloud.google.com/speech-to-text/quotas#content).\n\n\n Note: The following fields are mutually exclusive: `audio_content`, `streaming_config`. If a field in that set is populated, all other fields in the set will automatically be cleared.\n\n### #streaming_config\n\n def streaming_config() -\u003e ::Google::Cloud::Speech::V1::StreamingRecognitionConfig\n\n**Returns**\n\n- ([::Google::Cloud::Speech::V1::StreamingRecognitionConfig](./Google-Cloud-Speech-V1-StreamingRecognitionConfig)) --- Provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.\n\n\n Note: The following fields are mutually exclusive: `streaming_config`, `audio_content`. If a field in that set is populated, all other fields in the set will automatically be cleared.\n\n### #streaming_config=\n\n def streaming_config=(value) -\u003e ::Google::Cloud::Speech::V1::StreamingRecognitionConfig\n\n**Parameter**\n\n- **value** ([::Google::Cloud::Speech::V1::StreamingRecognitionConfig](./Google-Cloud-Speech-V1-StreamingRecognitionConfig)) --- Provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.\n\n\nNote: The following fields are mutually exclusive: `streaming_config`, `audio_content`. If a field in that set is populated, all other fields in the set will automatically be cleared. \n**Returns**\n\n- ([::Google::Cloud::Speech::V1::StreamingRecognitionConfig](./Google-Cloud-Speech-V1-StreamingRecognitionConfig)) --- Provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.\n\n\n Note: The following fields are mutually exclusive: `streaming_config`, `audio_content`. 
If a field in that set is populated, all other fields in the set will automatically be cleared."]]