Classes
AccumulatingBatchReceiver<T>
A simple ThresholdBatchReceiver that just accumulates batches.
BatchEntry<ElementT,ElementResultT>
This class contains the element and its corresponding unresolved future, which is resolved when the batch succeeds or fails.
BatcherImpl<ElementT,ElementResultT,RequestT,ResponseT>
Queues up the elements until #flush() is called; once batching is over, the returned future resolves.
This class is not thread-safe, and expects to be used from a single thread.
BatchingCallSettings<ElementT,ElementResultT,RequestT,ResponseT>
This is an extension of the UnaryCallSettings class, used to configure a UnaryCallable for calls to an API method that supports batching. The batching settings are provided using an instance of BatchingSettings.
Retry configuration is applied to each batching RPC request.
Sample settings configuration:
BatchingCallSettings batchingCallSettings = // Default BatchingCallSettings from the client
BatchingCallSettings customBatchingCallSettings =
    batchingCallSettings
        .toBuilder()
        .setRetryableCodes(StatusCode.Code.UNAVAILABLE, ...)
        .setRetrySettings(RetrySettings.newBuilder()...build())
        .setBatchingSettings(BatchingSettings.newBuilder()...build())
        .build();
BatchingCallSettings.Builder<ElementT,ElementResultT,RequestT,ResponseT>
A base builder class for BatchingCallSettings. See the class documentation of BatchingCallSettings for a description of the different values that can be set.
BatchingFlowController<T>
Wraps a FlowController for use by batching.
BatchingSettings
Represents the batching settings to use for an API method that is capable of batching.
By default, the settings are configured not to use batching (i.e. the batch size threshold is 1). This is the safest default behavior, which is meaningful in all possible scenarios. Users are expected to configure actual batching thresholds explicitly: the element count, the request byte count, and the delay.
Warning: With the wrong settings, it is possible to cause long periods of dead waiting time.
When batching is turned on for an API method, a call to that method will result in the request being queued up with other requests. When any of the set thresholds are reached, the queued up requests are packaged together in a batch and sent to the service as a single RPC. When the response comes back, it is split apart into individual responses according to the individual input requests.
There are several supported thresholds:
- Delay Threshold: Counting from the time that the first message is queued, once this delay has passed, the batch is sent. The default value is 1 millisecond.
- Message Count Threshold: Once this many messages are queued, all of the messages are sent in a single call, even if the delay threshold hasn't elapsed yet. The default value is 1 message.
- Request Byte Threshold: Once the number of bytes in the batched request reaches this threshold, all of the messages are sent in a single call, even if neither the delay nor the message count threshold has been exceeded yet. The default value is 1 byte.
These thresholds are treated as triggers, not as limits. Thus, if a request is made with 2x the message count threshold, it will not be split apart (unless one of the limits listed further down is crossed); only one batch will be sent. Each threshold is an independent trigger and doesn't have any knowledge of the other thresholds.
Two of the values above also have limits:
- Message Count Limit: The limit of the number of messages that the server will accept in a single request.
- Request Byte Limit: The limit of the byte size of a request that the server will accept.
For these values, individual requests that surpass the limit are rejected, and the batching logic will not batch together requests if the resulting batch will surpass the limit. Thus, a batch can be sent that is actually under the threshold if the next request would put the combined request over the limit.
Batching also supports FlowControl. This can be used to prevent the batching implementation from accumulating messages without limit, eventually resulting in an OutOfMemoryError. This can occur if messages are created and added for batching faster than they can be processed. The flow control behavior is controlled using FlowControlSettings.
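For illustration, a minimal sketch of configuring explicit thresholds together with flow control (the threshold values here are arbitrary, and the Duration type is org.threeten.bp.Duration, which may vary across gax versions):
BatchingSettings batchingSettings =
    BatchingSettings.newBuilder()
        .setElementCountThreshold(100L)           // trigger after 100 queued elements
        .setRequestByteThreshold(1024L)           // trigger after 1 KiB of queued request bytes
        .setDelayThreshold(Duration.ofMillis(10)) // trigger 10 ms after the first queued element
        .setFlowControlSettings(
            FlowControlSettings.newBuilder()
                .setMaxOutstandingElementCount(1000L)
                .setMaxOutstandingRequestBytes(10L * 1024 * 1024)
                .setLimitExceededBehavior(FlowController.LimitExceededBehavior.Block)
                .build())
        .build();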
BatchingSettings.Builder
See the class documentation of BatchingSettings for a description of the different values that can be set.
BatchingThresholds
Factory methods for general-purpose batching thresholds.
DynamicFlowControlSettings
Settings for dynamic flow control.
DynamicFlowControlSettings.Builder
FlowControlEventStats
Records the statistics of flow control events.
This class is populated by FlowController, which will record throttling events. Currently it only keeps the last flow control event, but it could be expanded to record more information in the future. The events can be used to dynamically adjust concurrency in the client. For example:
// Increase flow control limits if there was throttling in the past 5 minutes and the throttled
// time was longer than 1 minute.
while (true) {
  FlowControlEvent event = flowControlEventStats.getLastFlowControlEvent();
  if (event != null
      && event.getTimestampMs() > System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(5)
      && event.getThrottledTimeInMs() > TimeUnit.MINUTES.toMillis(1)) {
    flowController.increaseThresholds(elementSteps, byteSteps);
  }
  Thread.sleep(TimeUnit.MINUTES.toMillis(10));
}
FlowControlEventStats.FlowControlEvent
A flow control event. Records the throttled time if the LimitExceededBehavior is LimitExceededBehavior#Block, or the exception if the behavior is LimitExceededBehavior#ThrowException.
FlowControlSettings
Settings for FlowController.
FlowControlSettings.Builder
FlowController
Provides flow control capability.
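A minimal standalone sketch; the send(message) call and messageBytes value are hypothetical placeholders, and reserve either blocks or throws a FlowControlException depending on the configured LimitExceededBehavior:
FlowController flowController =
    new FlowController(
        FlowControlSettings.newBuilder()
            .setMaxOutstandingElementCount(100L)
            .setMaxOutstandingRequestBytes(1_000_000L)
            .setLimitExceededBehavior(FlowController.LimitExceededBehavior.Block)
            .build());

// Reserve capacity before sending one message of messageBytes bytes (hypothetical placeholders),
// and release it once the message has been processed.
flowController.reserve(1, messageBytes);
try {
  send(message);
} finally {
  flowController.release(1, messageBytes);
}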
NumericThreshold<E>
A threshold which accumulates a count based on the provided ElementCounter.
PartitionKey
ThresholdBatcher<E>
Queues up elements until either a duration of time has passed or any threshold in a given set of thresholds is breached, and then delivers the elements in a batch to the consumer.
ThresholdBatcher.Builder<E>
Builder for a ThresholdBatcher.
Interfaces
BatchMerger<B>
BatchResource
Represents the resources used by a batch, including the element count and byte count. It can also be extended to other measures, to determine whether adding a new element needs to be flow controlled or whether the current batch needs to be flushed.
Batcher<ElementT,ElementResultT>
Represents a batching context where individual elements are accumulated and flushed in a large batch request at some point in the future. The buffered elements can be flushed manually or when triggered by an internal threshold. This is intended for high-throughput scenarios at the cost of latency.
Batcher instances are not thread-safe. To use them across different threads, create a new Batcher instance per thread.
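A minimal usage sketch (how the Batcher instance is obtained is client-specific; the client.newBatcher() call below is a hypothetical placeholder, and checked exceptions are omitted):
try (Batcher<String, String> batcher = client.newBatcher()) {  // hypothetical factory method
  List<ApiFuture<String>> results = new ArrayList<>();
  for (String element : elements) {
    results.add(batcher.add(element));  // queued; sent once a threshold triggers or on flush
  }
  batcher.flush();                      // force any remaining buffered elements to be sent
  for (ApiFuture<String> result : results) {
    System.out.println(result.get());   // each element resolves to its own result
  }
}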
BatchingDescriptor<ElementT,ElementResultT,RequestT,ResponseT>
An adapter that packs and unpacks the elements in and out of batch requests and responses.
This interface is expected to be implemented by a service-specific client or to be autogenerated by the gapic-generator.
Example implementation:
class ListDescriptor implements BatchingDescriptor<String, String, List<String>, List<String>> {

  RequestBuilder<String, List<String>> newRequestBuilder(List<String> prototype) {
    return new RequestBuilder<String, List<String>>() {
      // Accumulates the elements that have been added to the current batch.
      final List<String> list = new ArrayList<>();

      void add(String element) {
        list.add(element);
      }

      List<String> build() {
        return new ArrayList<>(list);
      }
    };
  }

  void splitResponse(List<String> callableResponse, List<SettableApiFuture<String>> batch) {
    for (int i = 0; i < callableResponse.size(); i++) {
      batch.get(i).set(callableResponse.get(i));
    }
  }

  void splitException(Throwable throwable, List<SettableApiFuture<String>> batch) {
    for (SettableApiFuture<String> result : batch) {
      result.setException(throwable);
    }
  }

  long countBytes(String element) {
    return element.length();
  }
}
BatchingRequestBuilder<ElementT,RequestT>
Adapter to pack individual elements into a larger batch request.
Implementations of this interface are provided by service-specific clients or autogenerated by the gapic-generator.
BatchingThreshold<E>
The interface representing a threshold to be used in ThresholdBatcher. Thresholds do not need to be thread-safe if they are only used inside ThresholdBatcher.
ElementCounter<E>
Interface representing an object that provides a numerical count given an object of the parameterized type.
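For example, a counter that weighs each string by its length might look like this (a minimal sketch, assuming the interface declares a single abstract count method so a lambda can be used):
ElementCounter<String> stringLengthCounter = element -> element.length();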
RequestBuilder<RequestT>
ThresholdBatchReceiver<BatchT>
Interface representing an object that receives batches from a ThresholdBatcher and takes action on them. Implementations of ThresholdBatchReceiver should be thread-safe.
Enums
FlowController.LimitExceededBehavior
Enumeration of behaviors that FlowController can use in case the flow control limits are exceeded.
Exceptions
BatchingException
Represents an exception that occurred during batching.
FlowController.FlowControlException
Base exception that signals a flow control state.
FlowController.FlowControlRuntimeException
Runtime exception that can be used in place of FlowControlException when an unchecked exception is required.
FlowController.MaxOutstandingElementCountReachedException
Exception thrown when client-side flow control is enforced based on the maximum number of outstanding in-memory elements.
FlowController.MaxOutstandingRequestBytesReachedException
Exception thrown when client-side flow control is enforced based on the maximum number of unacknowledged in-memory bytes.