In this blog post, I want to clear up some fuzziness that seems to surround Reactive Streams. It is all too easy to defeat the goals that can be achieved with Reactive Streams, especially where the application is part of an environment with both synchronous and asynchronous inputs and outputs.
A reactive approach to software development enables an application to be more responsive than a "traditional", blocking app would be. The Reactive Streams standard defines rules that can apply wherever interactions across a boundary happen asynchronously, to ensure resilience and flexibility. Understanding how it differs from blocking code is essential to making an informed decision about whether to apply it, and to using it properly.
What is a Reactive Stream?
"Reactive Streams" is an effort to standardize asynchronous stream processing. A multitude of libraries and frameworks apply it, especially in the JVM ecosystem; examples include Akka, Kafka, Project Reactor, Vert.x 3.0, Mutiny and more. The Reactive Streams website states that its main goal is to "… govern exchange of stream data across an asynchronous boundary … while ensuring that the receiving side is not forced to buffer arbitrary amounts of data". In other words, it aims to provide rules for "asynchronous stream processing with non-blocking backpressure".
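To make this concrete: since JDK 9, the standard's interfaces ship with the JDK itself as java.util.concurrent.Flow, alongside SubmissionPublisher as a stock Publisher implementation. Here is a minimal sketch of a publisher/subscriber pair (the class name and the beer-themed elements are my own; the Flow API calls are real):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class MinimalStream {
    // Thread-safe list, because onNext runs on the publisher's executor thread.
    static final List<String> signals = new CopyOnWriteArrayList<>();
    static final CountDownLatch done = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        // SubmissionPublisher is the JDK's stock Flow.Publisher implementation.
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // backpressure: ask for one element at a time
                }
                @Override public void onNext(String beer) {
                    signals.add("onNext:" + beer);
                    subscription.request(1); // ready for the next element
                }
                @Override public void onError(Throwable t) {
                    signals.add("onError");
                    done.countDown();
                }
                @Override public void onComplete() {
                    signals.add("onComplete");
                    done.countDown();
                }
            });
            publisher.submit("pils");
            publisher.submit("stout");
        } // close() signals onComplete once pending elements are delivered
        done.await();
        System.out.println(signals); // [onNext:pils, onNext:stout, onComplete]
    }
}
```

Note that the subscriber never pulls data itself; it only signals demand with request(1) and lets the elements arrive asynchronously.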
A stream of beer
Let me explain through a rough analogy. Think about a bartender (a stream processor) that can turn the beer tap (the stream publisher) on or off (applying backpressure to the stream), so they can fill all their glasses with just the right amount of beer and the right amount of foam. The bartender has plenty of room on the counter to place the beers before they are picked up. A waiter (another processor) can pick up as many beers as they can carry on their serving tray. One or more customers (stream subscribers) that are ready for another beer receive one from the waiter at their request, or kindly refuse in case they’ve had too much to drink already.
With the previous analogy, the concept of backpressure should be apparent. But what about the asynchronous and non-blocking nature of reactive streams? The asynchronous nature means that a stream publisher doesn’t wait for acknowledgement from subscribers to keep streaming, but it also means that it doesn’t have to give guarantees about the speed or frequency of stream elements. That way, it is up to the subscriber to "react" to the publisher, either by applying backpressure if it considers the stream too fast, or by cancelling its subscription altogether if it deems the stream too slow.
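Both of those subscriber reactions go through the Subscription object: request(n) signals demand, and cancel() walks away from the stream entirely. In the sketch below (again using the JDK Flow API, with invented names), the subscriber takes two elements and then cancels; because the publisher only ever sees demand for two, the remaining elements are never delivered:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class CancellingSubscriber {
    static final List<Integer> received = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch twoReceived = new CountDownLatch(2);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // demand one element; the publisher may not send more
                }
                @Override public void onNext(Integer beer) {
                    received.add(beer);
                    twoReceived.countDown();
                    if (received.size() < 2) {
                        subscription.request(1); // still thirsty: signal more demand
                    } else {
                        subscription.cancel();   // had enough: unsubscribe entirely
                    }
                }
                @Override public void onError(Throwable t) { }
                @Override public void onComplete() { }
            });
            for (int beer = 1; beer <= 5; beer++) {
                publisher.submit(beer); // the last three are never delivered
            }
            twoReceived.await();
        }
        System.out.println(received); // [1, 2]
    }
}
```

The demand bookkeeping is what makes backpressure non-negotiable here: a well-behaved publisher cannot emit beyond what was requested.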
There is no strict definition of "non-blocking", as it has come to mean slightly different things based on the context. Nonetheless, I will try to give an explanation based on my interpretation for the case of Reactive Streams. The non-blocking nature of reactive streams might seem deceptively insignificant, but it is integral to how reactive streams are applied, and it is part of why making your own implementation would be a daunting task.
While "synchronous" and "blocking", the antonyms of these two concepts, are very similar in definition, "non-blocking" means something subtly different from "asynchronous". In short, if asynchronous is a "fire-and-forget" concept, non-blocking is closer to "fire-or-something-and-forget".
The waiter that just knows
Non-blocking is different from asynchronous.
Let me explain this by continuing the beer analogy:
Imagine there are two bars, Bar One and Bar Two, both with a bartender and a waiter. The waiters can’t talk to the bartenders. Maybe it’s really busy and the music is very loud. Without talking, the waiters can instead give backpressure to the bartenders simply by refilling their serving tray less frequently. Both waiters do various chores, like bringing beer to customers, replacing empty beer kegs and cleaning tables. They’ll happily do whatever needs doing without hesitation, but there is one difference between Bar One and Bar Two.
In Bar One, sometimes there might be no beers on the counter; the waiter then cleans some tables before checking the counter again. The waiter can tell when to replace the beer keg, because the next beer on the counter will be foamy instead of flat.
In Bar Two, the beers on the counter are always flat. Sometimes the counter is empty, and the waiter does something else in the meantime. But an empty counter doesn’t mean the keg needs to be replaced! There is simply no way of knowing. Of course, they can go and replace the keg, but they can’t decide that based on the stream of beers alone.
While the streams of beers in both bars are asynchronous, the stream in Bar One is also non-blocking. Because of that, Bar One’s waiter could react to the information that the stream of beers provided: the foamy beer meant that the beer keg needed to be replaced. You could say Bar Two has a waiter and Bar One has a… reacter?
To block or not to block
One of the reasons the Reactive Streams standard is non-blocking is that it makes streams resilient against failure, which is a central idea in the Reactive Manifesto. According to Reactive Streams, a publisher can signal more than just its stream elements to a subscriber.
The publisher can signal
onError, if there is a failure, or
onComplete, if there are no more stream elements.
In this way, a subscriber can rely solely on these signals to proceed, without needing information or coordination from outside the bounds of the stream.
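The JDK's Flow API delivers these terminal signals in-band. In the sketch below (the "tap broke" failure is invented for illustration), the publisher terminates the stream with an exception via closeExceptionally, and the subscriber learns of the failure through its own onError callback rather than through some external health check:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class TerminalSignals {
    static final List<String> signals = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<>() {
            @Override public void onSubscribe(Flow.Subscription s) {
                s.request(Long.MAX_VALUE); // unbounded demand, for brevity
            }
            @Override public void onNext(String beer) {
                signals.add("onNext:" + beer);
            }
            @Override public void onError(Throwable t) {
                // The failure arrives in-band, as part of the stream itself.
                signals.add("onError:" + t.getMessage());
                done.countDown();
            }
            @Override public void onComplete() {
                signals.add("onComplete");
                done.countDown();
            }
        });
        // Terminate the stream with a failure instead of completing it normally.
        publisher.closeExceptionally(new RuntimeException("tap broke"));
        done.await();
        System.out.println(signals); // [onError:tap broke]
    }
}
```

Both onError and onComplete are terminal: after either one, the subscription is finished and no further elements will arrive.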
If a subscriber acts based on an external input, it is a good indicator that this subscriber is blocking!
The asynchronous and non-blocking aspects of Reactive Streams are essential to familiarize yourself with if you want to leverage their advantages in the features you build. While the standard is a great way to enable responsiveness in your apps, unintentionally introducing waits and blocks into reactive streams quickly defeats their intended advantages.