Nagle’s algorithm is a so-called optimization applied to all TCP connections by default.
This optimization is most relevant for applications that don't buffer their writes and send very small amounts of data at a time. As such, it's often disabled by servers where those criteria don't apply. Let's review the algorithm:
After a program writes to a socket there are three possible outcomes:
- If there’s sufficient data in the local buffers to comprise an entire TCP packet then send it all immediately.
- If there’s no pending data in the local buffers and no pending acknowledgement of receipt from the receiving end, then send it immediately.
- If there’s a pending acknowledgement of receipt from the other end and not enough data to comprise an entire TCP packet, then put the data into the local buffer.
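The three outcomes above can be sketched in plain Ruby. This is illustrative only; the real algorithm lives inside the kernel's TCP stack, and the names `mss`, `buffer`, and `ack_pending` are assumptions made for this sketch:

```ruby
# Illustrative sketch of Nagle's decision logic. The real implementation
# lives in the kernel's TCP stack, not in application code.
# `mss` is the maximum segment size (bytes per full TCP packet),
# `buffer` is data already waiting locally, `ack_pending` is whether
# we're still waiting on an ACK from the other end.
def nagle_decision(data, buffer, mss, ack_pending)
  buffer += data

  if buffer.bytesize >= mss
    :send_full_packet   # enough data to fill an entire TCP packet
  elsif !ack_pending && buffer == data
    :send_immediately   # nothing buffered, nothing unacknowledged
  else
    :buffer_and_wait    # hold the data until an ACK or more data arrives
  end
end

nagle_decision("x", "", 1460, false)      # => :send_immediately
nagle_decision("x", "", 1460, true)       # => :buffer_and_wait
nagle_decision("a" * 1500, "", 1460, true) # => :send_full_packet
```

Note how a small write only goes out immediately when the connection is idle; otherwise it piggybacks on the next full packet or waits for the outstanding ACK.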
This algorithm guards against sending many tiny TCP packets. It was originally designed to combat protocols like telnet, where a single keystroke is entered at a time; without it, each character could be sent across the network in its own packet.
If you’re working with a protocol like HTTP, where the request/response is usually large enough to comprise at least one TCP packet, this algorithm will typically have no effect except to slow down the last packet sent. The algorithm is meant to guard against shooting yourself in the foot in very specific situations, such as implementing telnet. Given Ruby’s buffering and the most common kinds of protocols implemented on top of TCP, you probably want to disable this algorithm.
For example, every Ruby web server disables this option. Here’s how it can be done:
```ruby
require 'socket'

server = TCPServer.new(4481)

# Disable Nagle's algorithm. Tell the server to send with 'no delay'.
server.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
```
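The same option works on a client socket, and you can read it back with `getsockopt` to confirm it took effect. A small self-contained sketch (the throwaway server exists only so the client has something to connect to; port 0 asks the OS for any free port):

```ruby
require 'socket'

# A throwaway server so the client has something to connect to.
server = TCPServer.new(0)
client = TCPSocket.new('localhost', server.addr[1])

# Disable Nagle's algorithm on the client side as well.
client.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)

# Read the option back to confirm it took effect.
client.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY).bool # => true
```

`getsockopt` returns a `Socket::Option`, whose `#bool` method interprets the raw option value as a boolean.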