conottle

A Java concurrency API that throttles the maximum concurrency at which tasks are processed for any given client, while the total number of clients serviced in parallel can also be throttled

User story

As an API user, I want to execute tasks for any given client at a configurable maximum concurrency, while the total number of clients serviced in parallel can also be limited.

Prerequisite

Java 8 or better

Get it…

Maven Central

Install as a compile-scope dependency in Maven or other similar build tools.

Use it…

API

public interface ClientTaskExecutor {
    /**
     * @param command
     *         {@link Runnable} command to run asynchronously. All such commands under the same {@code clientId} are run
     *         in parallel, albeit throttled at a maximum concurrency.
     * @param clientId
     *         A key representing a client whose tasks are throttled while running in parallel
     * @return {@link Future} holding the run status of the {@code command}
     */
    Future<Void> execute(Runnable command, Object clientId);

    /**
     * @param task
     *         {@link Callable} task to run asynchronously. All such tasks under the same {@code clientId} are run in
     *         parallel, albeit throttled at a maximum concurrency.
     * @param clientId
     *         A key representing a client whose tasks are throttled while running in parallel
     * @param <V>
     *         Type of the task result
     * @return {@link Future} representing the result of the {@code task}
     */
    <V> Future<V> submit(Callable<V> task, Object clientId);
}

The interface uses Future as the return type mainly to reduce the conceptual weight of the API. The implementation actually returns CompletableFuture, which can be used or cast as such if need be.
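
For example, a caller could chain further work onto the returned value by treating it as a CompletableFuture. This is a minimal, illustrative sketch: it assumes a conottle instance built as in the sample below, and the client IDs and task bodies are placeholders only.

// Submit a Callable task for a given client; the returned Future can be used as-is...
Future<String> resultFuture = conottle.submit(() -> "someTaskResult", "clientId-1");

// ...or, relying on the implementation detail noted above, cast to a
// CompletableFuture for composition and chaining.
CompletableFuture<String> completableFuture = (CompletableFuture<String>) resultFuture;
completableFuture.thenAccept(result -> System.out.println("task result: " + result));

// A Runnable command is run similarly; its Future only conveys run status.
Future<Void> runStatus = conottle.execute(() -> System.out.println("running command"), "clientId-2");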

Sample usage

import static org.awaitility.Awaitility.await;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

class submit {
    // Task (a Callable<Task>), the MIN_TASK_DURATION constant, and the info
    // logger are defined elsewhere in the test class and omitted for brevity.
    Conottle conottle = Conottle.builder()
            .maxClientsPermitted(100)
            .maxParallelismPerClient(4)
            .workerExecutorService(Executors.newCachedThreadPool())
            .build();

    @Test
    void customized() {
        int clientCount = 2;
        int clientTaskCount = 10;
        List<Future<Task>> futures = new ArrayList<>(); // class Task implements Callable<Task>
        int maxActiveExecutorCount = 0;
        for (int c = 0; c < clientCount; c++) {
            String clientId = "clientId-" + (c + 1);
            for (int t = 0; t < clientTaskCount; t++) {
                futures.add(this.conottle.submit(new Task(clientId + "-task-" + t, MIN_TASK_DURATION), clientId));
                maxActiveExecutorCount = Math.max(maxActiveExecutorCount, conottle.countActiveExecutors());
            }
        }
        assertEquals(clientCount, maxActiveExecutorCount, "should be 1:1 between a client and its executor");
        int taskTotal = futures.size();
        assertEquals(clientTaskCount * clientCount, taskTotal);
        int doneCount = 0;
        for (Future<Task> future : futures) {
            if (future.isDone()) {
                doneCount++;
            }
        }
        assertTrue(doneCount < futures.size());
        info.log("not all of the {} tasks were done immediately", taskTotal);
        info.atDebug().log("{} out of {} were done", doneCount, futures.size());
        for (Future<Task> future : futures) {
            await().until(future::isDone);
        }
        info.log("all of the {} tasks were done eventually", taskTotal);
        await().until(() -> this.conottle.countActiveExecutors() == 0);
        info.log("no active executor lingers when all tasks complete");
    }

    @AfterEach
    void close() {
        this.conottle.close();
    }
}

All builder parameters are optional.

This API has no technical or programmatic upper limit on the values that can be configured for per-client parallelism or for the total number of clients supported. Once set, the values limit only the runtime concurrency at any given moment: before proceeding, excess tasks or clients have to wait for active ones to run to completion - that is the throttling effect.
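
For illustration, a minimal sketch of building with or without customization (builder method names as shown in the sample above; the values and the defaults noted in comments are assumptions, not documented guarantees):

// Since all builder parameters are optional, an instance can be built entirely
// on defaults (whatever defaults the builder supplies when a value is omitted).
Conottle defaultConottle = Conottle.builder().build();

// Or set only the limit of interest, e.g. per-client concurrency, leaving the
// rest at their defaults; the value here is illustrative only.
Conottle perClientThrottled = Conottle.builder().maxParallelismPerClient(2).build();

// As in the sample test above, close each instance when it is no longer needed.
defaultConottle.close();
perClientThrottled.close();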