public class SpeechClient implements BackgroundResource
Service Description: Service that implements the Google Cloud Speech API.
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig config = RecognitionConfig.newBuilder().build();
  RecognitionAudio audio = RecognitionAudio.newBuilder().build();
  RecognizeResponse response = speechClient.recognize(config, audio);
}
```
Note: close() needs to be called on the SpeechClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
The surface of this class includes several types of Java methods for each of the API's methods:
- A "flattened" method, in which the fields of the request type are converted into function parameters. Not all fields may be available as parameters, and not every API method has a flattened entry point.
- A "request object" method, which takes a single parameter, a request object, that must be constructed before the call. Not every API method has a request object method.
- A "callable" method, which takes no parameters and returns an immutable API callable object that can be used to initiate calls to the service.
See the individual methods for example code.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
This class can be customized by passing in a custom instance of SpeechSettings to create(). For example:
To customize credentials:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
SpeechSettings speechSettings =
    SpeechSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
        .build();
SpeechClient speechClient = SpeechClient.create(speechSettings);
```
To customize the endpoint:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
SpeechSettings speechSettings = SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();
SpeechClient speechClient = SpeechClient.create(speechSettings);
```
Please refer to the GitHub repository's samples for more quickstart code snippets.
Implements

- BackgroundResource

Static Methods
create()
public static final SpeechClient create()
Constructs an instance of SpeechClient with default settings.
Returns

| Type | Description |
| --- | --- |
| SpeechClient | |

Exceptions

| Type | Description |
| --- | --- |
| IOException | |
create(SpeechSettings settings)
public static final SpeechClient create(SpeechSettings settings)
Constructs an instance of SpeechClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set.
Parameters

| Name | Description |
| --- | --- |
| settings | SpeechSettings |

Returns

| Type | Description |
| --- | --- |
| SpeechClient | |

Exceptions

| Type | Description |
| --- | --- |
| IOException | |
create(SpeechStub stub)
public static final SpeechClient create(SpeechStub stub)
Constructs an instance of SpeechClient, using the given stub for making calls. This is for advanced usage - prefer using create(SpeechSettings).
Parameters

| Name | Description |
| --- | --- |
| stub | SpeechStub |

Returns

| Type | Description |
| --- | --- |
| SpeechClient | |
Constructors
SpeechClient(SpeechSettings settings)
protected SpeechClient(SpeechSettings settings)
Constructs an instance of SpeechClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.
Parameters

| Name | Description |
| --- | --- |
| settings | SpeechSettings |
SpeechClient(SpeechStub stub)
protected SpeechClient(SpeechStub stub)
Constructs an instance of SpeechClient, using the given stub for making calls. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.

Parameters

| Name | Description |
| --- | --- |
| stub | SpeechStub |
Methods
awaitTermination(long duration, TimeUnit unit)
public boolean awaitTermination(long duration, TimeUnit unit)
Blocks until all work has completed following a shutdown request, the timeout occurs, or the current thread is interrupted, whichever happens first.

Parameters

| Name | Description |
| --- | --- |
| duration | long |
| unit | TimeUnit |

Returns

| Type | Description |
| --- | --- |
| boolean | |

Exceptions

| Type | Description |
| --- | --- |
| InterruptedException | |
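When a client cannot be scoped to a try-with-resources block, the lifecycle methods on this class (shutdown, awaitTermination, shutdownNow) combine into a graceful-shutdown sequence. A minimal sketch, assuming default application credentials are available in the environment; the 30-second timeout is an illustrative choice:

```java
SpeechClient speechClient = SpeechClient.create();
try {
  // ... issue recognize / longRunningRecognizeAsync calls ...
} finally {
  speechClient.shutdown(); // stop accepting new work
  if (!speechClient.awaitTermination(30, TimeUnit.SECONDS)) {
    // Graceful shutdown timed out; force-terminate background threads.
    speechClient.shutdownNow();
  }
}
```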
close()
public final void close()
getOperationsClient()
public final OperationsClient getOperationsClient()
Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call.
Returns

| Type | Description |
| --- | --- |
| OperationsClient | |
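One way to use the OperationsClient is to poll a long-running operation by name instead of blocking on a future. A sketch combining the raw callable (which returns the underlying Operation) with getOperationsClient(); the empty request is illustrative only and would need real config and audio:

```java
try (SpeechClient speechClient = SpeechClient.create()) {
  LongRunningRecognizeRequest request = LongRunningRecognizeRequest.newBuilder().build();
  // The raw callable returns the underlying Operation, whose name can be saved
  // and used to check status later, even from another process.
  Operation operation = speechClient.longRunningRecognizeCallable().call(request);
  // Poll the operation's status by name.
  Operation current = speechClient.getOperationsClient().getOperation(operation.getName());
  if (current.getDone()) {
    // The response (or error) is now available on the Operation.
  }
}
```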
getSettings()
public final SpeechSettings getSettings()
Returns

| Type | Description |
| --- | --- |
| SpeechSettings | |
getStub()
public SpeechStub getStub()
Returns

| Type | Description |
| --- | --- |
| SpeechStub | |
isShutdown()
public boolean isShutdown()
Returns

| Type | Description |
| --- | --- |
| boolean | |
isTerminated()
public boolean isTerminated()
Returns

| Type | Description |
| --- | --- |
| boolean | |
longRunningRecognizeAsync(LongRunningRecognizeRequest request)
public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(LongRunningRecognizeRequest request)
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to.

Sample code:

```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  LongRunningRecognizeRequest request =
      LongRunningRecognizeRequest.newBuilder()
          .setConfig(RecognitionConfig.newBuilder().build())
          .setAudio(RecognitionAudio.newBuilder().build())
          .setOutputConfig(TranscriptOutputConfig.newBuilder().build())
          .build();
  LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();
}
```

Parameters

| Name | Description |
| --- | --- |
| request | LongRunningRecognizeRequest The request object containing all of the parameters for the API call. |

Returns

| Type | Description |
| --- | --- |
| OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> | |
longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio)
public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio)
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to.

Sample code:

```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig config = RecognitionConfig.newBuilder().build();
  RecognitionAudio audio = RecognitionAudio.newBuilder().build();
  LongRunningRecognizeResponse response =
      speechClient.longRunningRecognizeAsync(config, audio).get();
}
```

Parameters

| Name | Description |
| --- | --- |
| config | RecognitionConfig Required. Provides information to the recognizer that specifies how to process the request. |
| audio | RecognitionAudio Required. The audio data to be recognized. |

Returns

| Type | Description |
| --- | --- |
| OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> | |
longRunningRecognizeCallable()
public final UnaryCallable<LongRunningRecognizeRequest,Operation> longRunningRecognizeCallable()
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to.

Sample code:

```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  LongRunningRecognizeRequest request =
      LongRunningRecognizeRequest.newBuilder()
          .setConfig(RecognitionConfig.newBuilder().build())
          .setAudio(RecognitionAudio.newBuilder().build())
          .setOutputConfig(TranscriptOutputConfig.newBuilder().build())
          .build();
  ApiFuture<Operation> future = speechClient.longRunningRecognizeCallable().futureCall(request);
  // Do something.
  Operation response = future.get();
}
```

Returns

| Type | Description |
| --- | --- |
| UnaryCallable<LongRunningRecognizeRequest,Operation> | |
longRunningRecognizeOperationCallable()
public final OperationCallable<LongRunningRecognizeRequest,LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeOperationCallable()
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to.

Sample code:

```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  LongRunningRecognizeRequest request =
      LongRunningRecognizeRequest.newBuilder()
          .setConfig(RecognitionConfig.newBuilder().build())
          .setAudio(RecognitionAudio.newBuilder().build())
          .setOutputConfig(TranscriptOutputConfig.newBuilder().build())
          .build();
  OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =
      speechClient.longRunningRecognizeOperationCallable().futureCall(request);
  // Do something.
  LongRunningRecognizeResponse response = future.get();
}
```

Returns

| Type | Description |
| --- | --- |
| OperationCallable<LongRunningRecognizeRequest,LongRunningRecognizeResponse,LongRunningRecognizeMetadata> | |
recognize(RecognitionConfig config, RecognitionAudio audio)
public final RecognizeResponse recognize(RecognitionConfig config, RecognitionAudio audio)
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
Sample code:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig config = RecognitionConfig.newBuilder().build();
  RecognitionAudio audio = RecognitionAudio.newBuilder().build();
  RecognizeResponse response = speechClient.recognize(config, audio);
}
```

Parameters

| Name | Description |
| --- | --- |
| config | RecognitionConfig Required. Provides information to the recognizer that specifies how to process the request. |
| audio | RecognitionAudio Required. The audio data to be recognized. |

Returns

| Type | Description |
| --- | --- |
| RecognizeResponse | |
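The empty builders in the generated sample compile but will not produce useful results. A more realistic invocation is sketched below; the encoding, sample rate, and language code are illustrative assumptions, and the gs:// URI is a hypothetical bucket and object:

```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig config =
      RecognitionConfig.newBuilder()
          .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
          .setSampleRateHertz(16000)
          .setLanguageCode("en-US")
          .build();
  // Hypothetical Cloud Storage location of the audio to transcribe.
  RecognitionAudio audio =
      RecognitionAudio.newBuilder().setUri("gs://my-bucket/my-audio.wav").build();
  RecognizeResponse response = speechClient.recognize(config, audio);
  for (SpeechRecognitionResult result : response.getResultsList()) {
    // Each result may contain several alternatives; the first is the most likely.
    System.out.println(result.getAlternatives(0).getTranscript());
  }
}
```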
recognize(RecognizeRequest request)
public final RecognizeResponse recognize(RecognizeRequest request)
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
Sample code:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognizeRequest request =
      RecognizeRequest.newBuilder()
          .setConfig(RecognitionConfig.newBuilder().build())
          .setAudio(RecognitionAudio.newBuilder().build())
          .build();
  RecognizeResponse response = speechClient.recognize(request);
}
```

Parameters

| Name | Description |
| --- | --- |
| request | RecognizeRequest The request object containing all of the parameters for the API call. |

Returns

| Type | Description |
| --- | --- |
| RecognizeResponse | |
recognizeCallable()
public final UnaryCallable<RecognizeRequest,RecognizeResponse> recognizeCallable()
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
Sample code:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognizeRequest request =
      RecognizeRequest.newBuilder()
          .setConfig(RecognitionConfig.newBuilder().build())
          .setAudio(RecognitionAudio.newBuilder().build())
          .build();
  ApiFuture<RecognizeResponse> future = speechClient.recognizeCallable().futureCall(request);
  // Do something.
  RecognizeResponse response = future.get();
}
```

Returns

| Type | Description |
| --- | --- |
| UnaryCallable<RecognizeRequest,RecognizeResponse> | |
shutdown()
public void shutdown()
shutdownNow()
public void shutdownNow()
streamingRecognizeCallable()
public final BidiStreamingCallable<StreamingRecognizeRequest,StreamingRecognizeResponse> streamingRecognizeCallable()
Performs bidirectional streaming speech recognition: receive results while sending audio. This method is only available via the gRPC API (not REST).
Sample code:
```java
// This snippet has been automatically generated for illustrative purposes only.
// It may require modifications to work in your environment.
try (SpeechClient speechClient = SpeechClient.create()) {
  BidiStream<StreamingRecognizeRequest, StreamingRecognizeResponse> bidiStream =
      speechClient.streamingRecognizeCallable().call();
  StreamingRecognizeRequest request = StreamingRecognizeRequest.newBuilder().build();
  bidiStream.send(request);
  for (StreamingRecognizeResponse response : bidiStream) {
    // Do something when a response is received.
  }
}
```

Returns

| Type | Description |
| --- | --- |
| BidiStreamingCallable<StreamingRecognizeRequest,StreamingRecognizeResponse> | |
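The empty request in the generated sample glosses over the stream's ordering requirement: in the v1 streaming API, the first request on the stream carries a StreamingRecognitionConfig, and only subsequent requests carry audio. A sketch under that assumption; the encoding, sample rate, and language code are illustrative, and audioChunk is a hypothetical byte[] of LINEAR16 PCM data:

```java
try (SpeechClient speechClient = SpeechClient.create()) {
  BidiStream<StreamingRecognizeRequest, StreamingRecognizeResponse> bidiStream =
      speechClient.streamingRecognizeCallable().call();
  // First request: configuration only, no audio.
  bidiStream.send(
      StreamingRecognizeRequest.newBuilder()
          .setStreamingConfig(
              StreamingRecognitionConfig.newBuilder()
                  .setConfig(
                      RecognitionConfig.newBuilder()
                          .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                          .setSampleRateHertz(16000)
                          .setLanguageCode("en-US")
                          .build())
                  .build())
          .build());
  // Subsequent requests: raw audio bytes. audioChunk is a hypothetical byte[] of PCM data.
  bidiStream.send(
      StreamingRecognizeRequest.newBuilder()
          .setAudioContent(ByteString.copyFrom(audioChunk))
          .build());
  bidiStream.closeSend(); // signal that no more audio will be sent
  for (StreamingRecognizeResponse response : bidiStream) {
    // Do something when a response is received.
  }
}
```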