Event Loop mechanism and how it works

Praveen Ray
4 min read · Jun 6, 2021


Looping infinitely

The Event Loop was ‘invented’ by Node.js and has since become quite popular as a general-purpose framework pattern. In Java, we have the Vert.x framework, which is similar in concept. However, it’s not always clear exactly what an Event Loop is and what problem it solves in the first place.

What problem is it trying to solve?

It’s trying to achieve concurrency with only one thread of execution! Traditionally, concurrency was managed with threads, which the OS maps onto hardware threads. If you had too many units of work, the OS would need to distribute them across a limited number of threads. This context switching becomes a bottleneck under heavy load. Not to mention that sharing data across multiple threads requires locking, which in turn introduces complexity and further delays. Event Loops were invented to avoid this kind of switching and, ideally, achieve faster execution.

So, what is an Event Loop ?

It’s a glorified while loop which spins infinitely. Normally, an infinitely spinning while loop would render your CPU unusable (to any piece of code but the while loop). So, how is this useful?

In order to answer this question, we must look at the other basic element of the Event Loop: polling. Each iteration of the while loop starts by waiting on a number of blocking events:

while (wait-for-something-to-happen-on-one-of(
    socket100,
    fileHandle200,
    pipeCalledX
)) {
    // One of these has become available: socket100,
    // fileHandle200, or pipeCalledX has indicated it
    // has data available.
    // Process the data from the sources that have it,
    // then go back to waiting.
}

The ‘wait’ part is interesting: it keeps the Event Loop from spinning needlessly (and wasting CPU cycles). The ‘wait’ is handled by the OS, and if none of those channels (socket100, fileHandle200, or pipeCalledX) is ready, it ‘suspends’ the thread on which the Event Loop is executing. As soon as data is available, the thread is resumed and the body of the while loop starts executing.
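To make the dispatch structure concrete, here is a toy, fully synchronous sketch in JavaScript. The source names (socket100, fileHandle200) and their queued data are invented for illustration; a real Event Loop blocks inside the OS poller instead of checking in-memory queues:

```javascript
// Toy model of the "glorified while loop". Each fake source reports
// readiness from a queue of pending data, standing in for the OS
// telling us a socket or file descriptor is readable.
const sources = [
  { name: 'socket100', pending: ['packet-1', 'packet-2'] },
  { name: 'fileHandle200', pending: ['file-chunk'] },
];

const handled = [];

// The loop: "wait" for something to be ready, then dispatch.
while (sources.some((s) => s.pending.length > 0)) {
  for (const s of sources) {
    if (s.pending.length > 0) {        // is this source ready?
      const data = s.pending.shift();  // read the available data
      handled.push(`${s.name}: ${data}`);
    }
  }
}

console.log(handled);
```

Each pass over the sources corresponds to one iteration of the Event Loop's while loop; the loop exits here only because the fake sources run dry, whereas a real loop would block again in the OS.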

Your entire program lives inside the body of the while loop. As part of processing the data, your code might attempt to read from or write to other sockets or files. All your reads and writes must be done using non-blocking APIs. The Event Loop framework (Node.js or Vert.x) simply adds the resource your code is trying to read to its list of ‘wait-for’ resources (which get checked in the while condition).

So, let’s say your code makes a call to read from FileB somewhere in the body of the while loop:

readFrom(FileB, callbackFunctionPointer)

The framework, upon encountering this call, doesn’t rush to read from FileB. It simply adds FileB to the list it is maintaining, which will be waited upon later. The next iteration of the while loop then looks like this (note the newly added fileHandleB):

while (wait-for-something-to-happen-on-one-of(
    socket100,
    fileHandle200,
    pipeCalledX,
    fileHandleB
)) {
    // If fileHandleB is indicating availability, call
    // callbackFunctionPointer with the data from fileHandleB.
}

As you can see, it’s crucial to use non-blocking APIs inside the loop body; otherwise you risk blocking other resources from being checked for availability in time. Let’s say you call a version of the API that reads from FileB but blocks until data is available:

var dataFromFile = readFrom(FileB)

While this statement is blocked waiting for data to become available, socket100 becomes ready with data to be read. Remember, the hardware and OS are fully capable of multitasking. But since the Event Loop is stuck waiting on FileB, it can’t handle socket100 until the wait on FileB completes and it goes back to the while (wait-for-something-to-happen) statement. This is wasteful: instead of waiting on FileB, the Event Loop could have processed socket100’s data! It can be even worse: if socket100 is receiving data at a fast enough rate, its buffer will start overflowing and there will be real loss of data packets!
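You can observe this starvation directly in Node.js: a timer due "now" cannot fire while synchronous work holds the loop's single thread. The 50 ms busy-wait below is an arbitrary stand-in for a blocking read:

```javascript
const events = [];

// Ask the event loop to run this callback as soon as possible.
setTimeout(() => events.push('timer fired'), 0);

// Simulate a blocking call: busy-wait for ~50 ms on the loop's thread.
const start = Date.now();
while (Date.now() - start < 50) { /* hogging the event loop */ }
events.push('blocking work done');

// The timer callback can only run after the synchronous code yields,
// even though it was due almost immediately.
```

Swap the busy-wait for fs.readFileSync on a slow disk or a synchronous network call and the effect is the same: every other ready source waits behind it.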

So, what is the magic behind ‘wait-for-something-to-happen’?

The actual mechanism behind the wait is epoll on Linux, kqueue on macOS, and IOCP on Windows.

All of these work with the basic idea of watching over a multitude of resources and detecting when one of the resources is ready with data to be read or ready to accept data to be written.

What about multiprocessing?

If Event Loop runs on one thread, how do we make use of multiple cores in modern CPUs?

The Vert.x framework achieves this by running multiple Event Loops, one per core. Both Node.js and Vert.x also have the concept of Worker Pools: thread pools to which CPU-intensive work can be offloaded.

So, what’s the point of all this complexity?

The most important point is not to start drinking the Kool-Aid as soon as you encounter it. Chances are, your performance goals can easily be met with good old thread pools and blocking APIs, which are far easier to work with and maintain. Test before you drink: measure the performance gains of single-threaded Event Loop frameworks with POCs before going all in. Switching to a non-blocking codebase is a major commitment, and it comes with a steep learning curve, not to mention the difficulty of debugging the code.
