Processes Have Resource Limits
In the last chapter we saw that open resources are represented by file descriptors, and that when resources aren't closed their file descriptor numbers keep increasing. This raises a question: how many file descriptors can one process have?
The answer depends on your system configuration, but the important point is there are some resource limits imposed on a process by the kernel.
Finding the Limits
We’ll continue on the subject of file descriptors. Using Ruby we can ask directly for the maximum number of allowed file descriptors:
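As the surrounding text describes, the query is a single call to Process.getrlimit:

```ruby
# Ask the kernel for the current open-file limits.
# Returns a two-element Array: [soft limit, hard limit].
p Process.getrlimit(:NOFILE)
```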
On my machine this snippet outputs:

[2560, 9223372036854775807]
We used a method called
Process.getrlimit and asked for the maximum number of open files using the symbol
:NOFILE. It returned a two-element Array.
The first element in the Array is the soft limit for the number of file descriptors; the second element is the hard limit.
Soft Limits vs. Hard Limits
What's the difference? Glad you asked. The soft limit isn't really a limit in the strict sense: if you exceed it (in this case by opening more than 2560 resources at once) an exception will be raised, but the process can always raise that limit itself if it wants to.
Note that the hard limit on my system for the number of file descriptors is a ridiculously large integer. Is it even possible to open that many? Likely not; you'd run into hardware constraints long before that many resources could be opened at once.
On my system that number actually represents infinity. It's repeated in the constant
Process::RLIM_INFINITY. Try comparing those two values to be sure. So, on my system, I can effectively open as many resources as I'd like, once I bump the soft limit for my needs.
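One way to run that comparison yourself (the exact values are system-dependent — on the author's Mac the hard limit equals the sentinel, while on many Linux systems it's a finite number):

```ruby
# Compare the hard limit for open files against the kernel's
# "unlimited" sentinel, exposed as Process::RLIM_INFINITY.
_soft, hard = Process.getrlimit(:NOFILE)
puts hard == Process::RLIM_INFINITY
```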
So any process is able to change its own soft limit, but what about the hard limit? Typically raising it can only be done by a superuser, though your process can also bump its own hard limit if it has the required permissions. If you're interested in changing the limits at a system-wide level, start by having a look at sysctl(8).
Bumping the Soft Limit
Let’s go ahead and bump the soft limit for the current process:
Process.setrlimit(:NOFILE, 4096)
p Process.getrlimit(:NOFILE)
You can see that we set a new limit for the number of open files; upon asking again, both the soft limit and the hard limit come back as the new value 4096. That's because when Process.setrlimit is given a single value it sets the hard limit to match the soft limit.
We can optionally pass a third argument to
Process.setrlimit specifying a new hard limit as well, assuming we have the permissions to do so. Note that lowering the hard limit, as we did in that last snippet, is irreversible: once it comes down it won’t go back up.
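As a sketch of the three-argument form, here's a call that lowers both limits at once (lowering needs no special permissions, but remember that the hard limit can't come back up):

```ruby
# Pass an explicit hard limit as the third argument.
# Lowering the hard limit is irreversible for this process.
Process.setrlimit(:NOFILE, 2048, 4096)
p Process.getrlimit(:NOFILE)  # => [2048, 4096]
```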
The following example is a common way to raise the soft limit of a system resource to be equal to the hard limit, the maximum allowed value.
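A sketch of that pattern, assuming the process is permitted to use the full hard limit:

```ruby
# Raise the soft limit for open files all the way up
# to the current hard limit.
soft, hard = Process.getrlimit(:NOFILE)
Process.setrlimit(:NOFILE, hard, hard)
```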
Exceeding the Limit
Note that exceeding the soft limit will raise Errno::EMFILE:
# Set the maximum number of open files to 3. We know this
# will be maxed out because the standard streams occupy
# the first three file descriptors.
Process.setrlimit(:NOFILE, 3)

File.open('/dev/null')
Errno::EMFILE: Too many open files - /dev/null
You can use these same methods to check and modify limits on other system resources. Some common ones are:
# The maximum number of simultaneous processes
# allowed for the current user.
Process.getrlimit(:NPROC)

# The largest size file that may be created.
Process.getrlimit(:FSIZE)

# The maximum size of the stack segment of the
# process.
Process.getrlimit(:STACK)
Have a look at the documentation (http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-setrlimit) for
Process.getrlimit for a full listing of the available options.
In the Real World
Needing to modify limits for system resources isn’t a common need for most programs. However, for some specialized tools this can be very important.
One use case is any process needing to handle thousands of simultaneous network connections. An example of this is the httperf(1) HTTP performance tool. A command like
httperf --hog --server www --num-conn 5000 will ask httperf(1) to create 5000 concurrent connections. Obviously this will be a problem on my system due to its default soft limit, so httperf(1) will need to bump its soft limit before it can properly do its testing.
Another real world use case for limiting system resources is a situation where you execute third-party code and need to keep it within certain constraints. You could set limits for the processes running that code and revoke the permissions required to change them, hence ensuring that they don’t use more resources than you allow for them.
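A minimal sketch of that idea, using a CPU-time limit on a forked child as the constraint (the busy loop stands in for hypothetical third-party code):

```ruby
pid = fork do
  # Cap CPU time: SIGXCPU at 1 second (soft), SIGKILL at 2 (hard).
  # Setting the hard limit too means the child can't undo the cap.
  Process.setrlimit(:CPU, 1, 2)
  loop { } # stand-in for untrusted third-party code
end
Process.wait(pid)
# Once the child burns through its CPU allowance,
# the kernel terminates it with a signal.
```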
Process.getrlimit and Process.setrlimit map to getrlimit(2) and setrlimit(2), respectively.