Boost Spring App with ParallelFlux: Efficient Concurrent API Calls

Optimizing ParallelFlux for Concurrent API Calls in Project Reactor

Boost Spring app performance using Project Reactor's ParallelFlux to handle multiple concurrent API calls efficiently.


Working with multiple concurrent API calls effectively can be tricky. Imagine you’re building a Spring application using Project Reactor and have to fetch data concurrently for multiple IDs via remote calls. Without proper concurrency handling, your app can quickly slow down, frustrating users with delays.

This is exactly where ParallelFlux in Project Reactor shines—enabling you to efficiently handle multiple tasks concurrently. Let’s discuss how you can unlock its full potential to reduce response times and keep your app responsive.

What Exactly Is ParallelFlux?

In Project Reactor, a regular Flux processes elements sequentially. If you’ve got a heavy process—like complex data aggregation—it tends to become a bottleneck that drags down performance. That’s when you reach for a ParallelFlux.

ParallelFlux splits data streams, allowing parallel execution on multiple processor cores. Think of it as a multi-lane highway versus a narrow one-lane road—ParallelFlux makes traffic (processing tasks) flow smoothly by spreading it over multiple lanes (threads).
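
To make the highway analogy concrete, here is a minimal, self-contained sketch (assuming reactor-core is on the classpath) that splits a stream into rails and prints which worker thread handles each element:

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class ParallelFluxDemo {
    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 8)
            .parallel()                       // split into rails (one per CPU core by default)
            .runOn(Schedulers.parallel())     // give each rail its own worker thread
            .map(i -> i * i)                  // this work now runs concurrently across rails
            .sequential()                     // merge the rails back into a single Flux
            .subscribe(n -> System.out.println(
                Thread.currentThread().getName() + " -> " + n));

        Thread.sleep(500); // give the daemon worker threads time to finish
    }
}
```

Running this prints the squares interleaved across several `parallel-N` threads rather than in strict order, which is exactly the multi-lane behavior described above.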

Dealing with Multiple IDs Simultaneously

Let’s walk through a common scenario. Suppose you’re working on a web service that receives three customer IDs. You need to fetch customer data from remote APIs for all three customers rapidly, then process and return results quickly.

Doing this sequentially is inefficient since each call must wait for the previous one to finish, creating unnecessary delays. Here’s how ParallelFlux can help you handle these simultaneous calls efficiently:


Flux.just("id1", "id2", "id3")
    .parallel()
    .runOn(Schedulers.parallel())
    .flatMap(this::fetchCustomerData)
    .sequential()
    .collectList()
    .subscribe(customers -> {
        System.out.println("Fetched customers: " + customers);
    });

In this example, parallel() converts the regular Flux into a ParallelFlux, runOn(Schedulers.parallel()) executes each rail concurrently, and sequential() merges the rails back into a single view before collecting results. This approach significantly reduces the overall response time.

Parallelizing Multiple Downstream API Calls

When processing each customer ID, you might need to invoke several downstream APIs. Simply put, each ID triggers multiple external service calls to gather details like billing information, account history, and personal details.

The naive way would be to chain these calls sequentially, something like this:


Mono<CustomerDetails> getCustomerDetails(String id) {
    return fetchPersonalDetails(id)
        .flatMap(personal ->
            fetchBillingDetails(id)
            .flatMap(billing ->
                fetchAccountHistory(id)
                .map(history -> new CustomerDetails(personal, billing, history))
            ));
}

Although easy to follow, this sequential approach means you wait for personal details before calling billing, and billing before account history—slow and inefficient.

Instead, leveraging Project Reactor’s Mono.zip() helps you invoke all three downstream APIs concurrently:


Mono<CustomerDetails> getCustomerDetails(String id) {
    return Mono.zip(
            fetchPersonalDetails(id).subscribeOn(Schedulers.boundedElastic()),
            fetchBillingDetails(id).subscribeOn(Schedulers.boundedElastic()),
            fetchAccountHistory(id).subscribeOn(Schedulers.boundedElastic())
        )
        .map(tuple -> new CustomerDetails(tuple.getT1(), tuple.getT2(), tuple.getT3()));
}

Here, each API call executes on its own thread, running simultaneously instead of waiting. Thus, the total execution time becomes roughly equivalent to the slowest service instead of the aggregate of all three.

Challenges Encountered with Unintentional Sequential Execution

You may have noticed strange behavior: Even when using ParallelFlux, downstream API calls seem sequential. Why? The common culprit is incorrect use of schedulers or sharing the same thread pool across operations.

Consider this:

  • You invoke ParallelFlux but use flatMap or another operator without properly switching scheduler contexts.
  • All operators share the same scheduler threads, causing tasks to block each other unintentionally.

This happens frequently if reactive operators aren’t thoughtfully placed or configured. To achieve true parallelism, you need careful management of schedulers.
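
The most common version of this pitfall is calling parallel() without runOn(): the stream is divided into rails, but every rail still executes on the subscribing thread. The sketch below (with a hypothetical slowLookup method that simulates a 200 ms remote call, assuming reactor-core is available) demonstrates the problem and the fix:

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SequentialPitfall {

    // Hypothetical slow remote lookup, simulated with a sleep.
    static String slowLookup(String id) {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "data-" + id;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Pitfall: parallel() without runOn() keeps all rails on the
        // subscribing thread, so lookups still run one after another.
        Flux.just("id1", "id2", "id3")
            .parallel()
            .map(SequentialPitfall::slowLookup)
            .sequential()
            .blockLast();
        System.out.println("Without runOn: "
            + (System.currentTimeMillis() - start) + " ms"); // roughly 3 x 200 ms

        start = System.currentTimeMillis();
        // Fix: runOn() assigns each rail its own worker thread.
        Flux.just("id1", "id2", "id3")
            .parallel()
            .runOn(Schedulers.parallel())
            .map(SequentialPitfall::slowLookup)
            .sequential()
            .blockLast();
        System.out.println("With runOn: "
            + (System.currentTimeMillis() - start) + " ms"); // roughly one 200 ms call
    }
}
```

The first pipeline takes about the sum of all three lookups; the second takes about the duration of one, because each rail now runs on its own thread.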

Boosting Performance with Reactive Programming

Reactive programming, especially using frameworks like Project Reactor, promotes asynchronous, event-driven execution, naturally suited to aggressive concurrency requirements. By carefully applying reactive operators, code becomes non-blocking and far more responsive.

Reactive programming not only handles multiple asynchronous API calls efficiently but also simplifies error handling and data transformation. This ensures your apps stay resilient even when external dependencies become slow or unreliable.

When encountering parallelism limitations, consider refactoring your code around operators tailored for parallel execution, such as Mono.zip(), flatMap(), and effective Scheduler use.
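
As one concrete example of such refactoring: when the calls are already non-blocking, you often don't need ParallelFlux at all. Flux.flatMap accepts a concurrency argument that controls how many inner publishers are subscribed at once. This sketch uses a hypothetical fetchCustomerData method, simulated with a delayed Mono (assuming reactor-core on the classpath):

```java
import java.time.Duration;
import java.util.List;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class FlatMapConcurrency {

    // Hypothetical non-blocking remote call, simulated with a delayed Mono.
    static Mono<String> fetchCustomerData(String id) {
        return Mono.just("customer-" + id).delayElement(Duration.ofMillis(200));
    }

    public static void main(String[] args) {
        // Up to 3 inner Monos are subscribed concurrently, so total
        // latency approaches the slowest single call, not the sum.
        List<String> customers = Flux.just("id1", "id2", "id3")
            .flatMap(FlatMapConcurrency::fetchCustomerData, 3) // concurrency = 3
            .collectList()
            .block();

        System.out.println("Fetched: " + customers);
    }
}
```

ParallelFlux shines for CPU-bound work; for I/O-bound reactive calls, the flatMap concurrency parameter is frequently the simpler tool.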

Leveraging Project Reactor’s Features for Optimal Results

Besides ParallelFlux, Project Reactor offers robust threading and scheduling features through its Scheduler API. There are several helpful schedulers:

  • Schedulers.parallel(): Uses a thread pool sized to available CPU cores—ideal for computationally intensive tasks.
  • Schedulers.boundedElastic(): A well-suited scheduler for blocking operations, including most external or database calls.
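
A minimal sketch of the boundedElastic pattern, using a hypothetical blockingFetch method to stand in for a legacy JDBC or HTTP client (assuming reactor-core on the classpath):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BoundedElasticExample {

    // Hypothetical blocking call (e.g. a legacy HTTP or JDBC client),
    // simulated here with Thread.sleep.
    static String blockingFetch(String id) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result-" + id;
    }

    public static void main(String[] args) {
        // Wrap the blocking call in fromCallable and shift its execution
        // onto boundedElastic, keeping parallel() threads free for CPU work.
        String result = Mono.fromCallable(() -> blockingFetch("id1"))
            .subscribeOn(Schedulers.boundedElastic())
            .block();

        System.out.println(result); // result-id1
    }
}
```

Mono.fromCallable defers the blocking work until subscription, and subscribeOn moves that subscription onto the boundedElastic pool, so the blocking call never occupies a parallel scheduler thread.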

Here’s how to properly assign your downstream calls to schedulers for maximum parallel throughput:


Flux.just("id1", "id2", "id3")
    .parallel()
    .runOn(Schedulers.parallel())
    .flatMap(id -> getCustomerDetails(id))
    .sequential()
    .collectList()
    .subscribe(customers -> {
        System.out.println("Optimized customers list: " + customers);
    });

By assigning the proper schedulers, you ensure optimal CPU usage and task distribution without excessive thread contention.

ParallelFlux Optimization Best Practices

To harness ParallelFlux effectively, follow these best practices:

  • Choose schedulers wisely: Use Schedulers.parallel() for CPU-bound work and Schedulers.boundedElastic() for blocking operations.
  • Fine-tune parallelism: Use the parallel(N) operator where N fits your CPU cores, preventing thread starvation or oversubscription.
  • Avoid blocking APIs in reactive streams: Keep reactive streams fully non-blocking. If blocking is unavoidable, isolate those calls on the boundedElastic scheduler.
  • Configure resources correctly: Keep thread pools efficiently sized and avoid unnecessary context switching.
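
The fine-tuning advice above can be sketched as follows, with a placeholder heavyCompute method standing in for real CPU-bound work (assuming reactor-core on the classpath):

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class ParallelTuning {

    // Placeholder for a CPU-intensive computation.
    static int heavyCompute(int n) {
        return n * n;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        Integer sum = Flux.range(1, 1_000)
            .parallel(cores)                  // explicitly one rail per CPU core
            .runOn(Schedulers.parallel())     // parallel scheduler is also sized to cores
            .map(ParallelTuning::heavyCompute)
            .sequential()
            .reduce(0, Integer::sum)
            .block();

        System.out.println("Sum of squares: " + sum);
    }
}
```

Matching the rail count to the core count keeps every core busy without oversubscribing threads; more rails than cores only adds context-switching overhead for CPU-bound work.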


Recap & Looking Ahead

Utilizing ParallelFlux effectively in Project Reactor helps you drastically improve app responsiveness and resource utilization. By properly invoking concurrent API calls, carefully choosing schedulers, and avoiding common pitfalls, you deliver seamless user experiences even during heavy load.

Reactive programming continues to evolve. As features get richer and APIs more refined, staying well-versed will significantly enhance the quality and responsiveness of your applications.

Are you ready to implement ParallelFlux optimization techniques in your next project? Let me know in the comments below—or ask if you have any specific challenges to tackle today!



Shivateja Keerthi
Hey there! I'm Shivateja Keerthi, a full-stack developer who loves diving deep into code, fixing tricky bugs, and figuring out why things break. I mainly work with JavaScript and Python, and I enjoy sharing everything I learn - especially about debugging, troubleshooting errors, and making development smoother. If you've ever struggled with weird bugs or just want to get better at coding, you're in the right place. Through my blog, I share tips, solutions, and insights to help you code smarter and debug faster. Let’s make coding less frustrating and more fun!
