Pattern: Hybrids

This is the last part of the network patterns section of the book. It doesn’t cover a specific pattern itself, but rather the concept of making a hybrid pattern that uses one or more of the patterns described in this section.

Although any of these architectures can be applied to any kind of service (we saw FTP in the previous chapters), there’s been a lot of attention given to HTTP servers in modern times. This is unsurprising given the prevalence of the web. The Ruby community is at the forefront of this web movement and has its fair share of different HTTP servers to choose from. Hence, the real-world examples we’ll look at in this chapter are all HTTP servers.

Let’s dive into some examples.

nginx

The nginx project provides an extremely high-performance network server written in C. Indeed, its website claims it can serve 1 million concurrent requests on a single server. nginx is often used in the Ruby world as an HTTP proxy in front of web application servers, but it speaks HTTP, SMTP, and other protocols.

So how does nginx achieve this kind of concurrency?

At its core, nginx uses the Preforking pattern. However, inside each forked process is the Evented pattern. This makes a lot of sense as a high-performance choice for a few reasons.

First, all of the spawning costs are paid at boot time, when nginx forks its child processes. This ensures that nginx can take maximum advantage of multiple cores and other server resources. Second, the Evented pattern is notable in that it spawns nothing and uses no threads, so each child avoids the overhead the kernel requires to manage and context-switch between many active threads.
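This hybrid can be sketched in Ruby using only the standard library. The method below is illustrative, not nginx's actual implementation: nginx is written in C and its event loop uses epoll/kqueue rather than IO.select, and the child count here is an arbitrary assumption.

```ruby
require 'socket'

# A sketch of the hybrid: Preforking on the outside, Evented on the inside.
# Each forked child runs its own single-threaded event loop over the shared
# listening socket, so the kernel balances accepts across children.
def prefork_evented(server, children: 2)
  children.times.map do
    fork do
      connections = []

      loop do
        readable, = IO.select([server] + connections)

        readable.each do |sock|
          if sock == server
            connections << server.accept
          else
            data = sock.read_nonblock(1024, exception: false)
            case data
            when :wait_readable
              next
            when nil # client closed the connection
              connections.delete(sock)
              sock.close
            else
              sock.write(data) # echo the data back
            end
          end
        end
      end
    end
  end
end
```

To run it, create a listening socket, call `prefork_evented(server)`, and `Process.waitall` in the parent. The key property mirrors nginx: all forking happens up front, and after that each child handles many connections without spawning anything.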

nginx is packed with tons of other features that make it blazing fast, including tight memory management that can only be accomplished in a language like C, but at its core it uses a hybrid of the patterns described in the last few chapters.

Puma

The puma rubygem provides “a Ruby web server built for concurrency”. Puma is designed as the go-to HTTP server for Ruby implementations without a GIL (Rubinius or JRuby) because it leans heavily on threads. The Puma README provides a good overview of where it’s applicable and reminds us about the effect of the GIL on threading.

So how does Puma achieve its concurrency?

At a high level, Puma uses a Thread Pool to provide concurrency. The main thread always accepts new connections and then queues each one up for the thread pool to handle. That's the whole story for HTTP connections that don't use keep-alive. But Puma does support HTTP keep-alive: when the first request on a connection asks for it to be kept alive, Puma respects this and doesn't close the connection.

But now Puma can no longer simply forget about that connection once the request is handled; it needs to monitor it for new requests and process those as well. It does this with an Evented reactor. When a new request arrives on a kept-alive connection, that connection is again queued up for the Thread Pool to handle.

So Puma’s request handling is always done by a pool of threads. This is supported by a reactor that monitors any persistent connections.
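That division of labor can be sketched with the standard library alone. The class below is illustrative, not Puma's actual internals: real Puma parses HTTP and honors the Connection header, while this sketch treats every connection as kept alive and uses a pipe to wake the reactor.

```ruby
require 'socket'

# Sketch of the hybrid: a main thread accepts, a Thread Pool handles
# requests, and a single-threaded reactor monitors kept-alive connections.
class HybridServer
  def initialize(server, pool_size: 4)
    @server = server
    @todo = Queue.new            # connections waiting for a worker thread
    @persistent = Queue.new      # connections headed back to the reactor
    @reader, @writer = IO.pipe   # wakes the reactor when @persistent grows

    @workers = pool_size.times.map do
      Thread.new do
        loop { handle(@todo.pop) }
      end
    end

    @reactor = Thread.new { reactor_loop }
  end

  # The main thread does nothing but accept.
  def run
    loop { @todo << @server.accept }
  end

  private

  # Read one "request" and echo it back, then keep the connection alive
  # by handing it to the reactor for monitoring.
  def handle(conn)
    data = conn.readpartial(1024)
    conn.write(data)
    @persistent << conn
    @writer.write('!')           # nudge the reactor out of IO.select
  rescue EOFError, IOError
    conn.close rescue nil
  end

  # Single-threaded Evented loop over the persistent connections.
  def reactor_loop
    monitored = []
    loop do
      readable, = IO.select([@reader] + monitored)
      readable.each do |io|
        if io == @reader
          io.readpartial(1)      # drain the wakeup byte
          monitored << @persistent.pop until @persistent.empty?
        else
          monitored.delete(io)
          @todo << io            # new request: back to the thread pool
        end
      end
    end
  end
end
```

Note that request handling only ever happens on the pool threads; the reactor's sole job is to notice activity on idle connections and route them back to the pool, mirroring the description above.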

Again, Puma is full of other optimizations, but at its core it’s built on a hybrid of the patterns from the last few chapters.

EventMachine

EventMachine is well known in the Ruby world as an event-driven I/O library. It uses the Reactor pattern to provide high stability and scalability. Its internals are written in C and exposed to Ruby as a C extension.

So how does EventMachine achieve its concurrency?

At its core, EventMachine is an implementation of the Evented pattern. It's a single-threaded event loop that can handle network events on many concurrent connections. But EventMachine also provides a Thread Pool for deferring any long-running or blocking operations that would otherwise slow down the Reactor.
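In EventMachine this is exposed as EM.defer(operation, callback): the operation runs on a pool thread while the callback runs back on the reactor thread. The class below is a stdlib-only sketch of that idea, not EventMachine's actual implementation.

```ruby
# Sketch: a single-threaded reactor loop paired with a thread pool for
# blocking work, modeled on EventMachine's EM.defer.
class Reactor
  def initialize(pool_size: 2)
    @callbacks = Queue.new   # work to run on the reactor thread
    @work = Queue.new        # blocking jobs headed for the pool

    @pool = pool_size.times.map do
      Thread.new { loop { process(*@work.pop) } }
    end
  end

  # Run `op` on a pool thread, then `callback` with its result back on
  # the reactor thread, so the event loop never blocks on `op`.
  def defer(op, callback)
    @work << [op, callback]
  end

  # The single-threaded event loop: runs queued callbacks until stopped.
  def run
    @running = true
    @callbacks.pop.call while @running
  end

  def stop
    @callbacks << -> { @running = false }
  end

  private

  def process(op, callback)
    result = op.call                           # the blocking part
    @callbacks << -> { callback.call(result) } # back to the reactor
  end
end
```

A caller might defer a slow computation and collect its result without ever stalling the event loop, e.g. `reactor.defer(-> { slow_query }, ->(rows) { render(rows) })`, where `slow_query` and `render` are hypothetical stand-ins for real work.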

EventMachine supports a ton of features, including the ability to monitor spawned processes, built-in network protocol implementations, and more. This combining of multiple architectures is just one way that it improves concurrency.