Spawning Terminal Processes
A common interaction in a Ruby program is ‘shelling out’ from your program to run a command in a terminal. This happens especially when I’m writing a Ruby script to glue together some common commands for myself. There are several ways you can spawn processes to run terminal commands in Ruby.
Before we look at the different ways of ‘shelling out’, let’s look at the mechanism they’re all using under the hood.
fork + exec
All of the methods described below are variations on one theme: fork(2) + execve(2).
We’ve had a good look at fork(2) in previous chapters, but this is our first look at execve(2). It’s pretty simple: execve(2) allows you to replace the current process with a different process.
Put another way: execve(2) allows you to transform the current process into any other process. You can take a Ruby process and turn it into a Python process, or an ls(1) process, or another Ruby process.
execve(2) transforms the process and never returns. Once you’ve transformed your Ruby process into something else you can never come back.
exec 'ls', '--help'
The fork + exec combo is a common one when spawning new processes. execve(2) is a very powerful and efficient way to transform the current process into another one; the only catch is that your current process is gone. That’s where fork(2) comes in handy.
You can use fork(2) to create a new process, then use execve(2) to transform that process into anything you like. Voila! Your current process is still running just as it was before and you were able to spawn any other process that you want to.
If your program depends on the output from the execve(2) call you can use the tools you learned in previous chapters to handle that. Process.wait will ensure that your program waits for the child process to finish whatever it’s doing so you can get the result back.
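Here’s a minimal sketch of the combo (the ls(1) invocation is just for illustration):
fork do
  # In the child process: transform this Ruby process into ls(1).
  # exec never returns; when ls(1) exits, so does the child.
  exec 'ls', '--help'
end

# In the parent process: wait for the child (now running ls) to finish.
Process.wait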
File descriptors and exec
At the OS level, a call to execve(2) doesn’t close any open file descriptors by default.
However, a call to exec in Ruby will close all open file descriptors by default (excluding the standard streams). In other words, the default OS behaviour when you exec('ls') would be to give ls a copy of any open file descriptors, e.g. a database connection. This is rarely what you want, so Ruby’s default is to close all open file descriptors before doing an exec.
This default behaviour of closing file descriptors on exec prevents file descriptor ‘leaks’. A leak may happen when you fork + exec to spawn another process that has no need for the file descriptors you currently have open (like your database connections, logfiles, etc.). A leak can waste resources but, even worse, can lead to havoc when you try to close your database connection, only to find that some other process erroneously still has the connection open. However, you may sometimes want to keep a file descriptor open, to pass an open logfile or live socket to another program being booted via exec. (The Unicorn web server uses this exact behaviour to enable restarts without losing any connections. By passing the open listener socket to the new version of itself through an exec, it ensures that the listener socket is never closed during a restart.) You can control this behaviour by passing an options hash to exec mapping file descriptor numbers to IO objects, as seen in the following example.
hosts = File.open('/etc/hosts')
python_code = %Q[import os; print os.fdopen(#{hosts.fileno}).read()]
# The hash as the last argument maps any file descriptors that should
# stay open through the exec.
exec 'python', '-c', python_code, {hosts.fileno => hosts}
In this example we start up a Ruby program and open the /etc/hosts file. Then we exec a python process and tell it to open the file descriptor number that Ruby received for opening the /etc/hosts file. You can see that python recognizes this file descriptor (because it was shared via execve(2)) and is able to read from it without having to open the file again.
Notice the options hash mapping the file descriptor number to the IO object. If you remove that hash, the Python program won’t be able to open the file descriptor; that mapping is what keeps it open through the execve(2).
Unlike fork(2), execve(2) does not share memory with the newly created process. In the python example above, whatever was allocated in memory for the use of the Ruby program was essentially wiped away when execve(2) was called leaving the python program with a blank slate in terms of memory usage.
Arguments to exec
Notice that in all of the examples above I sent an array of arguments to exec, rather than passing them as a string. There’s a subtle difference between the two argument forms.
Pass a string to exec and it will actually start up a shell process and pass the string to the shell to interpret. Pass an array and it will skip the shell and set up the array directly as the ARGV to the new process.
Generally you want to avoid passing a string unless you really need to. Pass an array where possible. Passing a string and running code through the shell can raise security concerns. If user input is involved it may be possible for someone to inject a malicious command directly into the shell, potentially gaining access to any privileges the current process has. In a case where you want to do something like exec("ls * | awk '{print($1)}'") you’ll have to pass it as a string.
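To make the distinction concrete, here are the two forms side by side (each call replaces the current process, so you would only ever run one of them):
# Array form: no shell is involved; the array elements become the
# ARGV of the new process.
exec 'ls', '-l', '/etc'

# String form: a shell is started up to interpret the command, so
# shell features like globs and pipes are available.
exec "ls * | awk '{print($1)}'"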
Kernel#system
system('ls')
system('ls', '--help')
system('git log | tail -10')
The return value of Kernel#system reflects the exit code of the terminal command in the most basic way. If the exit code of the terminal command was 0 then it returns true, otherwise it returns false.
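For example, assuming the bogus flag causes ls(1) to exit with a non-zero status:
system('ls')                     # => true
system('ls', '--no-such-option') # => false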
The standard streams of the terminal command are shared with the current process (through the magic of fork(2)), so any output coming from the terminal command should be seen in the same way output is seen from the current process.
Kernel#`
`ls`
`ls --help`
%x[git log | tail -10]
Kernel#` works slightly differently. The value returned is the STDOUT of the terminal program collected into a String. As mentioned, it’s using fork(2) under the hood and it doesn’t do anything special with STDERR, so you can see in the second example that STDERR is printed to the screen just as with Kernel#system.
Kernel#` and %x[] do the exact same thing.
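A quick sketch of capturing that output:
output = `ls`
# output is a String holding everything ls(1) wrote to STDOUT;
# anything written to STDERR still goes straight to the terminal.
puts output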
Process.spawn
# This call will start up the 'rails server' process with the
# RAILS_ENV environment variable set to 'test'.
Process.spawn({'RAILS_ENV' => 'test'}, 'rails server')
# This call will merge STDERR with STDOUT for the duration
# of the 'ls --zz' program.
Process.spawn('ls', '--zz', STDERR => STDOUT)
Process.spawn is a bit different from the others in that it is non-blocking. If you compare the following two examples you will see that Kernel#system will block until the command is finished, whereas Process.spawn will return immediately.
# Do it the blocking way
system 'sleep 5'
# Do it the non-blocking way
Process.spawn 'sleep 5'
# Do it the blocking way with Process.spawn
# Notice that it returns the pid of the child process
pid = Process.spawn 'sleep 5'
Process.waitpid(pid)
The last example in this code block is a really great example of the flexibility of Unix programming. In previous chapters we talked a lot about Process.wait, but it was always in the context of forking and then running some Ruby code. You can see from this example that the kernel doesn’t care what your process is doing; it will always work the same.
So even though we fork(2) and then run the sleep(1) program (a C program) the kernel still knows how to wait for that process to finish. Not only that, it will be able to properly return the exit code just as was happening in our Ruby programs.
All code looks the same to the kernel; that's what makes it such a flexible system. You can use any programming language to interact with any other programming language, and all will be treated equally.
Process.spawn takes many options that allow you to control the behaviour of the child process. I showed a few useful ones in the example above. Consult the official rdoc (http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-spawn) for an exhaustive list.
IO.popen
# This example will return a file descriptor (IO object). Reading from it
# will return what was printed to STDOUT from the shell command.
IO.popen('ls')
The most common usage for IO.popen is an implementation of Unix pipes in pure Ruby. That’s where the ‘p’ comes from in popen. Underneath it’s still doing the fork+exec, but it’s also setting up a pipe to communicate with the spawned process. That pipe is passed as the block argument in the block form of IO.popen.
# An IO object is passed into the block. In this case we open the stream
# for writing, so the stream is set to the STDIN of the spawned process.
#
# If we open the stream for reading (the default) then
# the stream is set to the STDOUT of the spawned process.
IO.popen('less', 'w') { |stream|
stream.puts "some\ndata"
}
With IO.popen you have to choose which stream you have access to. You can’t access them all at once.
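For comparison, here’s a minimal sketch of the read side:
# Opening the stream for reading (the default) connects it to the
# STDOUT of the spawned process.
IO.popen('ls') { |stream|
  puts stream.read
}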
open3
Open3 allows simultaneous access to the STDIN, STDOUT, and STDERR of a spawned process.
# This is available as part of the standard library.
require 'open3'
Open3.popen3('grep', 'data') { |stdin, stdout, stderr|
stdin.puts "some\ndata"
stdin.close
puts stdout.read
}
# Open3 will use Process.spawn when available. Options can be passed to
# Process.spawn like so:
Open3.popen3('ls', '-uhh', :err => :out) { |stdin, stdout, stderr|
puts stdout.read
}
Open3 acts like a more flexible version of IO.popen, for those times when you need it.
In the Real World
All of these methods are common in the Real World. Since they all differ in their behaviour you have to select one based on your needs.
One drawback to all of these methods is that they rely on fork(2). What’s wrong with that? Imagine this scenario: You have a big Ruby app that is using hundreds of MB of memory. You need to shell out. If you use any of the methods above you’ll incur the cost of forking.
Even if you’re shelling out to a simple ls(1) call the kernel will still need to make sure that all of the memory that your Ruby process is using is available for that new ls(1) process. Why? Because that’s the API of fork(2). When you fork(2) the process the kernel doesn’t know that you’re about to transform that process with an exec(2). You may be forking in order to run Ruby code, in which case you’ll need to have all of the memory available.
It’s good to keep in mind that fork(2) has a cost, and sometimes it can be a performance bottleneck. What if you need to shell out a lot and don’t want to incur the cost of fork(2)?
There are some native Unix system calls for spawning processes without the overhead of fork(2). Unfortunately, they aren’t supported by Ruby’s core library. However, there is a Rubygem that provides a Ruby interface to these system calls. The posix-spawn project provides access to posix_spawn(2), which is available on most Unix systems.
posix-spawn mimics the Process.spawn API. In fact, most of the options that you pass to Process.spawn can also be passed to POSIX::Spawn.spawn. So you can keep using the same API and yet reap the benefits of faster, more resource-efficient spawning.
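As a rough sketch, assuming the posix-spawn gem is installed (the require path and method below follow that gem’s documented interface):
require 'posix/spawn'

# Spawn 'sleep 5' without first forking a copy of the whole Ruby process.
pid = POSIX::Spawn.spawn('sleep 5')
Process.waitpid(pid)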
At a basic level posix_spawn(2) is a subset of fork(2). Recall the two defining attributes of a new child process created with fork(2): 1) it gets an exact copy of everything that the parent process had in memory, and 2) it gets a copy of all the file descriptors that the parent process had open.
posix_spawn(2) preserves #2, but not #1. That’s the big difference between the two. So you can expect a newly spawned process to have access to any of the file descriptors opened by the parent, but it won’t share any of the memory. This is what makes posix_spawn(2) faster and more efficient than fork(2). But keep in mind that it also makes it less flexible.
System Calls
Ruby’s Kernel#system maps to system(3), Kernel#exec maps to execve(2), IO.popen maps to popen(3), and posix-spawn uses posix_spawn(2). Ruby controls the ‘close-on-exec’ behaviour using fcntl(2) with the FD_CLOEXEC option.