Package (2.19.2)



A simple ThresholdBatchReceiver that just accumulates batches.


This class contains an element and its corresponding unresolved future, which is resolved when the batch succeeds or fails.


Queues up elements until #flush() is called; once the batch completes, the returned future resolves.

This class is not thread-safe, and expects to be used from a single thread.


This is an extension of the UnaryCallSettings class to configure a UnaryCallable for calls to an API method that supports batching. The batching settings are provided using an instance of BatchingSettings.

Retry configuration will be applied on each batching RPC request.

Sample settings configuration:

 BatchingCallSettings batchingCallSettings = // Default BatchingCallSettings from the client
 BatchingCallSettings customBatchingCallSettings =
     batchingCallSettings
         .toBuilder()
         .setRetryableCodes(StatusCode.Code.UNAVAILABLE, ...)
         .build();


A base builder class for BatchingCallSettings. See the class documentation of BatchingCallSettings for a description of the different values that can be set.


Wraps a FlowController for use by batching.


Represents the batching settings to use for an API method that is capable of batching.

By default the settings are configured not to use batching (i.e. the batch size threshold is 1). This is the safest default behavior, which is meaningful in all scenarios. Users are expected to configure the actual batching thresholds explicitly: the element count, the request byte count, and the delay.

Warning: With the wrong settings, it is possible to cause long periods of dead waiting time.

When batching is turned on for an API method, a call to that method will result in the request being queued up with other requests. When any of the set thresholds is reached, the queued-up requests are packaged together in a batch and sent to the service as a single RPC. When the response comes back, it is split apart into individual responses according to the individual input requests.

There are several supported thresholds:

  • Delay Threshold: Counting from the time that the first message is queued, once this delay has passed, then send the batch. The default value is 1 millisecond.
  • Message Count Threshold: Once this many messages are queued, send all of the messages in a single call, even if the delay threshold hasn't elapsed yet. The default value is 1 message.
  • Request Byte Threshold: Once the number of bytes in the batched request reaches this threshold, send all of the messages in a single call, even if neither the delay threshold nor the message count threshold has been exceeded yet. The default value is 1 byte.

These thresholds are treated as triggers, not as limits. Thus, if a request is made with 2x the message count threshold, it will not be split apart (unless one of the limits listed further down is crossed); only one batch will be sent. Each threshold is an independent trigger and doesn't have any knowledge of the other thresholds.
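As a sketch of how the three thresholds might be set together (assuming the BatchingSettings builder API; the numeric values are illustrative, not recommendations):

```java
 // Send a batch once 100 elements or 1 MiB are queued, or 10 ms after the
 // first element arrives, whichever trigger fires first.
 BatchingSettings batchingSettings =
     BatchingSettings.newBuilder()
         .setElementCountThreshold(100L)
         .setRequestByteThreshold(1024L * 1024L)
         .setDelayThreshold(Duration.ofMillis(10))
         .build();
```

Because each threshold is an independent trigger, lowering any one of them makes batches flush sooner.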

Two of the values above also have limits:

  • Message Count Limit: The limit of the number of messages that the server will accept in a single request.
  • Request Byte Limit: The limit of the byte size of a request that the server will accept.

For these values, individual requests that surpass the limit are rejected, and the batching logic will not batch together requests if the resulting batch will surpass the limit. Thus, a batch can be sent that is actually under the threshold if the next request would put the combined request over the limit.

Batching also supports FlowControl. This can be used to prevent the batching implementation from accumulating messages without limit, resulting eventually in an OutOfMemory exception. This can occur if messages are created and added to batching faster than they can be processed. The flow control behavior is controlled using FlowControlSettings.
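A sketch of flow control configuration (assuming the FlowControlSettings builder and the LimitExceededBehavior enum; the limits shown are illustrative):

```java
 // Block callers once 1,000 elements or 10 MiB are outstanding, instead of
 // letting the queue grow without bound.
 FlowControlSettings flowControlSettings =
     FlowControlSettings.newBuilder()
         .setMaxOutstandingElementCount(1000L)
         .setMaxOutstandingRequestBytes(10L * 1024L * 1024L)
         .setLimitExceededBehavior(FlowController.LimitExceededBehavior.Block)
         .build();
```

With LimitExceededBehavior.Block, producers are slowed down to match processing speed; LimitExceededBehavior#ThrowException instead fails fast when a limit is hit.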


See the class documentation of BatchingSettings for a description of the different values that can be set.


Factory methods for general-purpose batching thresholds.


Settings for dynamic flow control.



Record the statistics of flow control events.

This class is populated by FlowController, which will record throttling events. Currently it only keeps the last flow control event, but it could be expanded to record more information in the future. The events can be used to dynamically adjust concurrency in the client. For example:

 // Increase flow control limits if there was throttling in the past 5 minutes and throttled time
 // was longer than 1 minute.
 while (true) {
    FlowControlEvent event = flowControlEventStats.getLastFlowControlEvent();
    if (event != null
         && event.getTimestampMs() > System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(5)
         && event.getThrottledTime(TimeUnit.MILLISECONDS) > TimeUnit.MINUTES.toMillis(1)) {
      flowController.increaseThresholds(elementSteps, byteSteps);
    }
 }

A flow control event. Record throttled time if LimitExceededBehavior is LimitExceededBehavior#Block, or the exception if the behavior is LimitExceededBehavior#ThrowException.


Settings for FlowController.



Provides flow control capability.


A threshold which accumulates a count based on the provided ElementCounter.



Queues up elements until either a duration of time has passed or any threshold in a given set of thresholds is breached, and then delivers the elements in a batch to the consumer.


Builder for a ThresholdBatcher.




Represents a batching context where individual elements will be accumulated and flushed in a large batch request at some point in the future. The buffered elements can be flushed manually or when triggered by an internal threshold. This is intended to be used for high throughput scenarios at the cost of latency.

Batcher instances are not thread-safe. To use across different threads, create a new Batcher instance per thread.
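A minimal usage sketch (assuming a Batcher obtained from a gax-based client; the newBatcher factory method and element types are illustrative):

```java
 // Each add() returns a future that resolves when the element's batch completes.
 Batcher<String, String> batcher = client.newBatcher(); // hypothetical factory
 ApiFuture<String> result = batcher.add("element");
 batcher.flush();  // force any buffered elements out immediately
 batcher.close();  // flush remaining elements and release resources
```

Since each thread should own its own Batcher, typical usage creates the instance, adds elements in a loop, and closes it when the workload is done.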


An adapter that packs and unpacks the elements in and out of batch requests and responses.

This interface should be implemented either by a service-specific client or auto-generated by the gapic-generator.

Example implementation:

 class ListDescriptor implements BatchingDescriptor<String, String, List<String>, List<String>> {

   public RequestBuilder<String, List<String>> newRequestBuilder(List<String> prototype) {
     return new RequestBuilder<String, List<String>>() {
       private final List<String> list = new ArrayList<>();

       public void add(String element) {
         list.add(element);
       }

       public List<String> build() {
         return new ArrayList<>(list);
       }
     };
   }

   public void splitResponse(List<String> callableResponse, List<SettableApiFuture<String>> batch) {
     for (int i = 0; i < callableResponse.size(); i++) {
       batch.get(i).set(callableResponse.get(i));
     }
   }

   public void splitException(Throwable throwable, List<SettableApiFuture<String>> batch) {
     for (SettableApiFuture<String> result : batch) {
       result.setException(throwable);
     }
   }

   public long countBytes(String element) {
     return element.length();
   }
 }


Adapter to pack individual elements into a larger batch request.

This interface will be implemented either by a service-specific client or auto-generated by the gapic-generator.


The interface representing a threshold to be used in ThresholdBatcher. Thresholds do not need to be thread-safe if they are only used inside ThresholdBatcher.


Interface representing an object that provides a numerical count given an object of the parameterized type.



Interface representing an object that receives batches from a ThresholdBatcher and takes action on them. Implementations of ThresholdBatchReceiver should be thread-safe.



Enumeration of behaviors that FlowController can use in case the flow control limits are exceeded.



Represents an exception that occurred during batching.


Base exception that signals a flow control state.


Runtime exception that can be used in place of FlowControlException when an unchecked exception is required.


Exception thrown when client-side flow control is enforced based on the maximum number of outstanding in-memory elements.


Exception thrown when client-side flow control is enforced based on the maximum number of unacknowledged in-memory bytes.