The Unix Shell

Opening

Nelle Nemo, a marine biologist, has just returned from a six-month survey of the North Pacific Gyre, where she has been sampling gelatinous marine life in the Great Pacific Garbage Patch. She has 1520 samples in all, and now needs to:

  1. Run each sample through an assay machine that will measure the relative abundance of 300 different proteins. The machine's output for a single sample is a file with one line for each protein.
  2. Calculate statistics for each of the proteins separately using a program her supervisor wrote called goostat.
  3. Compare the statistics for each protein with corresponding statistics for each other protein using a program one of the other graduate students wrote called goodiff.
  4. Write up. Her supervisor would really like her to do this by the end of the month so that her paper can appear in an upcoming special issue of Aquatic Goo Letters.

It takes about half an hour for the assay machine to process each sample. The good news is, it only takes two minutes to set each one up. Since her lab has eight assay machines that she can use in parallel, this step will "only" take about two weeks.

The bad news is, if she has to run goostat and goodiff by hand, she'll have to enter filenames and click "OK" roughly 90,000 times (300 runs of goostat, plus 300×299 runs of goodiff). At 30 seconds each, that will take 750 hours, or 18 weeks. Not only would she miss her paper deadline, the chances of her getting all 90,000 commands right are approximately zero.

This chapter is about what she should do instead. More specifically, it's about how she can use a command shell to automate the repetitive steps in her processing pipeline, so that her computer can work 24 hours a day while she writes her paper. As a bonus, once she has put a processing pipeline together, she will be able to use it again whenever she collects more data.

Instructors

Many people have questioned whether we should still teach the shell. After all, anyone who wants to rename several thousand data files can easily do so interactively in the Python interpreter, and anyone who's doing serious data analysis is probably going to do most of their work inside the IPython Notebook or R Studio. So why teach the shell?

The first answer is, "Because so much else depends on it." Installing software, configuring your default editor, and controlling remote machines frequently assume a basic familiarity with the shell, and with related ideas like standard input and output. Many tools also use its terminology (for example, the %ls and %cd magic commands in IPython).

The second answer is, "Because it's an easy way to introduce some fundamental ideas about how to use computers." As we teach people how to use the Unix shell, we teach them that they should get the computer to repeat things (via tab completion, ! followed by a command number, and for loops) rather than repeating things themselves. We also teach them to take things they've discovered they do frequently and save them for later re-use (via shell scripts), to give things sensible names, and to write a little bit of documentation (like a comment at the top of a shell script) to make their future selves' lives better.

Finally, and perhaps most importantly, teaching people the shell lets us teach them to think about programming in terms of function composition. In the case of the shell, this takes the form of pipelines rather than nested function calls, but the core idea of "small pieces, loosely joined" is the same.

All of this material can be covered in three hours as long as learners using Windows do not run into roadblocks such as:

  • not being able to figure out where their home directory is (particularly if they're using Cygwin);
  • not being able to run a plain text editor; and
  • the shell refusing to run scripts that include DOS line endings.

Here are some ways to approach this material:

  • Have learners open a shell and then do whoami, pwd, and ls. Then have them create a directory called bootcamp and cd into it, so that everything else they do during the lesson is unlikely to harm whatever files they already have.
  • Get them to run an editor and save a file in their bootcamp directory as early as possible. Doing this is usually the biggest stumbling block during the entire lesson: many will try to run the same editor as the instructor (which may leave them trapped in the awful nether hell that is Vim), or will not know how to navigate to the right directory to save their file, or will run a word processor rather than a plain text editor. The quickest way past these problems is to have more knowledgeable learners help those who need it.
  • Tab completion sounds like a small thing: it isn't. Re-running old commands using !123 or !wc isn't a small thing either, and neither are wildcard expansion and for loops. Each one is an opportunity to repeat one of the big ideas of Software Carpentry: if the computer can repeat it, some programmer somewhere will almost certainly have built some way for the computer to repeat it.
  • Building up a pipeline with four or five stages, then putting it in a shell script for re-use and calling that script inside a for loop, is a great opportunity to show how "seven plus or minus two" connects to programming. Once we have figured out how to do something moderately complicated, we make it re-usable and give it a name so that it only takes up one slot in working memory rather than several. It is also a good opportunity to talk about exploratory programming: rather than designing a program up front, we can do a few useful things and then retroactively decide which are worth encapsulating for future re-use.
  • We have to leave out many important things because of time constraints, including file permissions, job control, and SSH. If learners already understand the basic material, this can be covered instead using the online lessons as guidelines.
  • Installing Bash and a reasonable set of Unix commands on Windows always involves some fiddling and frustration. Please see the latest set of installation guidelines for advice, and try it out yourself before teaching a class.

What and Why

Objectives

  • Draw a diagram to show how the shell relates to the keyboard, the screen, the operating system, and users' programs.
  • Explain when and why command-line interfaces should be used instead of graphical interfaces.

Duration: 10 minutes.

Lesson

At a high level, computers do four things:

  • run programs;
  • store data;
  • communicate with each other; and
  • interact with us.

They can do the last of these in many different ways, including direct brain-computer links and speech interfaces. Since these are still in their infancy, most of us use windows, icons, mice, and pointers. These technologies didn't become widespread until the 1980s, but their roots go back to Doug Engelbart's work in the 1960s, which you can see in what has been called "The Mother of All Demos".

Going back even further, the only way to interact with early computers was to rewire them. But in between, from the 1950s to the 1980s, most people used a technology based on the old-fashioned typewriter. That technology is still widely used today, and is the subject of this chapter.

DECWriter LA-36
Figure 1: DECWriter LA-36

When I say "typewriter", I actually mean a line printer connected to a keyboard (Figure 1). These devices only allowed input and output of the letters, numbers, and punctuation found on a standard keyboard, so programming languages and interfaces had to be designed around that constraint—though if you had enough time on your hands, you could draw simple pictures using just those characters (Figure 2).

                    ,-.             __
                  ,'   `---.___.---'  `.
                ,'   ,-                 `-._
              ,'    /                       \
           ,\/     /                        \\
       )`._)>)     |                         \\
       `>,'    _   \                  /       ||
         )      \   |   |            |        |\\
.   ,   /        \  |    `.          |        | ))
\`. \`-'          )-|      `.        |        /((
 \ `-`   .`     _/  \ _     )`-.___.--\      /  `'
  `._         ,'     `j`.__/           `.    \
    / ,    ,'         \   /`             \   /
    \__   /           _) (               _) (
      `--'           /____\             /____\
Figure 2: ASCII Art

Today, this kind of interface is called a command-line user interface, or CLUI, to distinguish it from the graphical user interface, or GUI, that most of us now use. The heart of a CLUI is a read-evaluate-print loop, or REPL: when the user types a command, the computer reads it, executes it, and prints its output. (In the case of old teletype terminals, it literally printed the output onto paper, a line at a time.) The user then types another command, and so on until the user logs off.

This description makes it sound as though the user sends commands directly to the computer, and the computer sends output directly to the user. In fact, there is usually a program in between called a command shell (Figure 3). What the user types goes into the shell; it figures out what commands to run and orders the computer to execute them.

The Command Shell
Figure 3: The Command Shell

A shell is a program like any other. What's special about it is that its job is to run other programs, rather than to do calculations itself. The most popular Unix shell is Bash, the Bourne Again SHell (so-called because it's derived from a shell written by Stephen Bourne—this is what passes for wit among programmers). Bash is the default shell on most modern implementations of Unix, and in most packages that provide Unix-like tools for Windows, such as Cygwin.

Using Bash, or any other shell, sometimes feels more like programming than like using a mouse. Commands are terse (often only a couple of characters long), their names are frequently cryptic, and their output is lines of text rather than something visual like a graph. On the other hand, the shell allows us to combine existing tools in powerful ways with only a few keystrokes, and to set up pipelines to handle large volumes of data automatically. In addition, the command line is often the easiest way to interact with remote machines. As clusters and cloud computing become more popular for scientific data crunching, being able to drive them is becoming a necessary skill.

Summary

  • A shell is a program whose primary purpose is to read commands and run other programs.
  • The shell's main advantages are its high action-to-keystroke ratio, its support for automating repetitive tasks, and that it can be used to access networked machines.
  • The shell's main disadvantages are its primarily textual nature and how cryptic its commands and operation can be.

Files and Directories

Objectives

  • Explain the similarities and differences between a file and a directory.
  • Translate an absolute path into a relative path and vice versa.
  • Construct absolute and relative paths that identify specific files and directories.
  • Explain the steps in the shell's read-run-print cycle.
  • Identify the actual command, flags, and filenames in a command-line call.
  • Demonstrate the use of tab completion, and explain its advantages.

Duration: 15 minutes (longer if people have trouble getting an editor to work).

Lesson

The part of the operating system responsible for managing files and directories is called the file system. It organizes our data into files, which hold information, and directories (also called "folders"), which hold files or other directories.

Several commands are frequently used to create, inspect, rename, and delete files and directories. To start exploring them, let's log in to the computer by typing our user ID and password. Most systems will print stars to obscure the password, or nothing at all, in case some evildoer is shoulder surfing behind us.

login: vlad
password: ********
$

Once we have logged in we'll see a prompt, which is how the shell tells us that it's waiting for input. This is usually just a dollar sign, but it may show extra information such as our user ID or the current time. Type the command whoami, then press the Enter key (sometimes marked Return) to send the command to the shell. The command's output is the ID of the current user, i.e., it shows us who the shell thinks we are:

$ whoami
vlad
$

More specifically, when we type whoami the shell:

  1. finds a program called whoami,
  2. runs it,
  3. waits for it to display its output, and
  4. displays a new prompt to tell us that it's ready for more commands.

Getting Around

Next, let's find out where we are by running a command called pwd (which stands for "print working directory"). At any moment, our current working directory is our current default directory, i.e., the directory that the computer assumes we want to run commands in unless we explicitly specify something else. Here, the computer's response is /users/vlad, which is Vlad's home directory:

$ pwd
/users/vlad
$

Alphabet Soup

If the command to find out who we are is whoami, the command to find out where we are ought to be called whereami, so why is it pwd instead? The usual answer is that in the early 1970s, when Unix was first being developed, every keystroke counted: the devices of the day were slow, and backspacing on a teletype was so painful that cutting the number of keystrokes in order to cut the number of typing mistakes was actually a win for usability. The reality is that commands were added to Unix one by one, without any master plan, by people who were immersed in its jargon. The result is as inconsistent as the roolz uv Inglish speling, but we're stuck with it now.

To understand what a "home directory" is, let's have a look at how the file system as a whole is organized (Figure 4). At the top is the root directory that holds everything else the computer is storing. We refer to it using a slash character / on its own; this is the leading slash in /users/vlad.

File System
Figure 4: File System

Inside that directory (or underneath it, if you're drawing a tree) are several other directories: bin (which is where some built-in programs are stored), data (for miscellaneous data files), users (where users' personal directories are located), tmp (for temporary files that don't need to be stored long-term), and so on. We know that our current working directory /users/vlad is stored inside /users because /users is the first part of its name. Similarly, we know that /users is stored inside the root directory / because its name begins with /.

Underneath /users, we find one directory for each user with an account on this machine (Figure 5). The Mummy's files are stored in /users/imhotep, Wolfman's in /users/larry, and ours in /users/vlad, which is why vlad is the last part of the directory's name. Notice, by the way, that there are two meanings for the / character. When it appears at the front of a file or directory name, it refers to the root directory. When it appears inside a name, it's just a separator.

Home Directories
Figure 5: Home Directories

Let's see what's in Vlad's home directory by running ls, which stands for "listing":

$ ls
bin          data      mail       music
notes.txt    papers    pizza.cfg  solar
solar.pdf    swc
$

ls prints the names of the files and directories in the current directory in alphabetical order, arranged neatly into columns. To make its output more comprehensible, we can give it the flag -F by typing ls -F. This tells ls to add a trailing / to the names of directories:

$ ls -F
bin/         data/     mail/      music/
notes.txt    papers/   pizza.cfg  solar/
solar.pdf    swc/
$

As you can see, /users/vlad contains seven sub-directories (Figure 6). The names that don't have trailing slashes—notes.txt, pizza.cfg, and solar.pdf—are plain old files.

Vlad's Home Directory
Figure 6: Vlad's Home Directory

What's In A Name?

You may have noticed that all of Vlad's files' names are "something dot something". This is just a convention: we can call a file mythesis or almost anything else we want. However, most people use two-part names most of the time to help them (and their programs) tell different kinds of files apart. The second part of such a name is called the filename extension, and indicates what type of data the file holds: .txt signals a plain text file, .pdf indicates a PDF document, .cfg is a configuration file full of parameters for some program or other, and so on.

It's important to remember that this is just a convention. Files contain bytes: it's up to us and our programs to interpret those bytes according to the rules for PDF documents, images, and so on. For example, naming a PNG image of a whale as whale.mp3 doesn't somehow magically turn it into a recording of whalesong.

Now let's take a look at what's in Vlad's data directory by running the command ls -F data. The second parameter—the one without a leading dash—tells ls that we want a listing of something other than our current working directory:

$ ls -F data
amino_acids.txt   elements/     morse.txt
pdb/              planets.txt   sunspot.txt
$

The output shows us that there are four text files and two sub-sub-directories. Organizing things hierarchically in this way is a good way to keep track of our work: it's possible to put hundreds of files in our home directory, just as it's possible to pile hundreds of printed papers on our desk, but in the long run it's a self-defeating strategy.

Notice, by the way, that we spelled the directory name data. It doesn't have a trailing slash: that's added to directory names by ls when we use the -F flag to help us tell things apart. And it doesn't begin with a slash because it's a relative path, i.e., it tells ls how to find something from where we are, rather than from the root of the file system (Figure 7).

Absolute and Relative Paths
Figure 7: Absolute and Relative Paths

If we run ls -F /data (with a leading slash) we get a different answer, because /data is an absolute path:

$ ls -F /data
access.log    backup/    hardware.cfg
network.cfg
$

The leading / tells the computer to follow the path from the root of the filesystem, so it always refers to exactly one directory, no matter where we are when we run the command.

What if we want to change our current working directory? Before we do this, pwd shows us that we're in /users/vlad, and ls without any parameters shows us that directory's contents:

$ pwd
/users/vlad
$ ls
bin/         data/     mail/      music/
notes.txt    papers/   pizza.cfg  solar/
solar.pdf    swc/
$

We can use cd followed by a directory name to change our working directory. cd stands for "change directory", which is a bit misleading: the command doesn't change the directory, it changes the shell's idea of what directory we are in.

$ cd data
$

cd doesn't print anything, but if we run pwd after it, we can see that we are now in /users/vlad/data. If we run ls without parameters now, it lists the contents of /users/vlad/data, because that's where we now are:

$ pwd
/users/vlad/data
$ ls
amino_acids.txt   elements/     morse.txt
pdb/              planets.txt   sunspot.txt
$

OK, we can go down the directory tree: how do we go up? We could use an absolute path:

$ cd /users/vlad
$

but it's almost always simpler to use cd .. to go up one level:

$ pwd
/users/vlad/data
$ cd ..

.. is a special directory name meaning "the directory containing this one", or, more succinctly, the parent of the current directory.

No Output = OK

In general, shell programs do not produce any output if no errors occurred. The principle behind this is "if there is nothing to report, shut up": when a command does produce output, it's either because you explicitly asked it to, or because something went wrong.

This does not apply to programs like ls or pwd, since the purpose of such programs is to output something.

Sure enough, if we run pwd after running cd .., we're back in /users/vlad:

$ pwd
/users/vlad
$

The special directory .. doesn't usually show up when we run ls. If we want to display it, we can give ls the -a flag:

$ ls -F -a
./           ../       bin/       data/
mail/        music/    notes.txt  papers/
pizza.cfg    solar/    solar.pdf  swc/

-a stands for "show all"; it forces ls to show us file and directory names that begin with ., such as .. (which, if we're in /users/vlad, refers to the /users directory). As you can see, it also displays another special directory that's just called ., which means "the current working directory". It may seem redundant to have a name for it, but we'll see some uses for it soon.

Orthogonality

The special names . and .. don't belong to ls; they are interpreted the same way by every program. For example, if we are in /users/vlad/data, the command ls .. will give us a listing of /users/vlad. Programmers call this orthogonality: the meanings of the parts are the same no matter how they're combined. Orthogonal systems tend to be easier for people to learn because there are fewer special cases and exceptions to keep track of.
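
To see this in action, here is a small sketch (assuming we are back in Vlad's home directory, and that the machine holds only the directories shown in Figures 4 and 5):

$ pwd
/users/vlad
$ ls ..
imhotep   larry   vlad
$ ls ../..
bin   data   tmp   users
$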

Everything we have seen so far works on Unix and its descendants, such as Linux and Mac OS X. Things are a bit different on Windows. A typical directory path on a Windows 7 machine might be C:\Users\vlad. The first part, C:, is a drive letter that identifies which disk we're talking about. This notation dates back to the days of floppy drives; today, different "drives" are usually different filesystems on the network.

Instead of a forward slash, Windows uses a backslash to separate the names in a path. This causes headaches because Unix uses backslash for input of special characters. For example, if we want to put a space in a filename on Unix, we would write the filename as my\ results.txt. Please don't ever do this, though: if you put spaces, question marks, and other special characters in filenames on Unix, you can confuse the shell for reasons that we'll see shortly.

Finally, Windows filenames and directory names are case insensitive: upper and lower case letters mean the same thing. This means that the path name C:\Users\vlad could be spelled c:\users\VLAD, C:\Users\Vlad, and so on. Some people argue that this is more natural: after all, "VLAD" in all upper case and "Vlad" spelled normally refer to the same person. However, it causes headaches for programmers, and can be difficult for people to understand if their first language doesn't use a cased alphabet.

For Cygwin Users

Cygwin tries to make Windows paths look more like Unix paths by allowing us to use a forward slash instead of a backslash as a separator. It also allows us to refer to the C drive as /cygdrive/c/ instead of as C:. (The latter usually works too, but not always.) Paths are still case insensitive, though, which means that if you try to put files called backup.txt (in all lower case) and Backup.txt (with a capital 'B') into the same directory, the second will overwrite the first.

Cygwin does something else that frequently causes confusion. By default, it interprets a path like /home/vlad to mean C:\cygwin\home\vlad, i.e., it acts as if C:\cygwin was the root of the filesystem. This is sometimes helpful, but if you are using an editor like Notepad, and want to save a file in what Cygwin thinks of as your home directory, you need to keep this translation in mind.

Nelle's Pipeline: Organizing Files

Knowing just this much about files and directories, Nelle is ready to organize the files that the protein assay machine will create. First, she creates a directory called north-pacific-gyre (to remind herself where the data came from). Inside that, she creates a directory called 2012-07-03, which is the date she started processing the samples. She used to use names like conference-paper and revised-results, but she found them hard to understand after a couple of years. (The final straw was when she found herself creating a directory called revised-revised-results-3.)

Each of her physical samples is labelled according to her lab's convention with a unique ten-character ID, such as "NENE01729A". This is what she used in her collection log to record the location, time, depth, and other characteristics of the sample, so she decides to use it as part of each data file's name. Since the assay machine's output is plain text, she will call her files NENE01729A.txt, NENE01812A.txt, and so on. All 1520 files will go into the same directory.

If she is in her home directory, Nelle can see what files she has using the command:

$ ls north-pacific-gyre/2012-07-03/

Since this is a lot to type, she can take advantage of Bash's command completion. If she types:

$ ls no

and then presses tab, Bash will automatically complete the directory name for her:

$ ls north-pacific-gyre/

If she presses tab again, Bash will add 2012-07-03/ to the command, since it's the only possible completion. Pressing tab again does nothing, since there are 1520 possibilities; pressing tab twice brings up a list of all the files, and so on. This is called tab completion, and we will see it in many other tools as we go on.

Summary

  • The file system is responsible for managing information on disk.
  • Information is stored in files, which are stored in directories (folders).
  • Directories can also store other directories, which forms a directory tree.
  • / on its own is the root directory of the whole filesystem.
  • A relative path specifies a location starting from the current location.
  • An absolute path specifies a location from the root of the filesystem.
  • Directory names in a path are separated with '/' on Unix, but '\' on Windows.
  • '..' means "the directory above the current one"; '.' on its own means "the current directory".
  • Most files' names are something.extension; the extension isn't required, and doesn't guarantee anything, but is normally used to indicate the type of data in the file.
  • cd path changes the current working directory.
  • ls path prints a listing of a specific file or directory; ls on its own lists the current working directory.
  • pwd prints the user's current working directory (current default location in the filesystem).
  • whoami shows the user's current identity.
  • Most commands take options (flags) which begin with a '-'.

Challenges

Please refer to Figure 8 when answering the challenges below.

File System Layout for Challenge Questions
Figure 8: File System Layout for Challenge Questions
  1. If pwd displays /users/thing, what will ls ../backup display?

    1. ../backup: No such file or directory
    2. 2012-12-01 2013-01-08 2013-01-27
    3. 2012-12-01/ 2013-01-08/ 2013-01-27/
    4. original pnas_final pnas_sub
  2. If pwd displays /users/fly, what command will display

    thesis/ papers/ articles/
    
    1. ls pwd
    2. ls -r -F
    3. ls -r -F /users/fly
    4. Either #2 or #3 above, but not #1.
  3. What does the command cd without a directory name do?

    1. It has no effect.
    2. It changes the working directory to /.
    3. It changes the working directory to the user's home directory.
    4. It is an error.
  4. We said earlier that spaces in path names have to be marked with a leading backslash in order for the shell to interpret them properly. Why? What happens if we run a command like:

    $ ls my\ thesis\ files
    

    without the backslashes?

Creating Things

Objectives

  • Create a directory hierarchy that matches a given diagram.
  • Create files in that hierarchy using an editor or by copying and renaming existing files.
  • Display the contents of a directory using the command line.
  • Delete specified files and/or directories.

Duration: 10 minutes.

Lesson

We now know how to explore files and directories, but how do we create them in the first place? Let's go back to Vlad's home directory, /users/vlad, and use ls -F to see what it contains:

$ pwd
/users/vlad
$ ls -F
bin/         data/     mail/      music/
notes.txt    papers/   pizza.cfg  solar/
solar.pdf    swc/
$

Let's create a new directory called thesis using the command mkdir thesis (which has no output):

$ mkdir thesis
$

As you might (or might not) guess from its name, mkdir means "make directory". Since thesis is a relative path (i.e., doesn't have a leading slash), the new directory is made below the current one:

$ ls -F
bin/         data/     mail/      music/
notes.txt    papers/   pizza.cfg  solar/
solar.pdf    swc/      thesis/
$

However, there's nothing in it yet:

$ ls -F thesis
$

Let's change our working directory to thesis using cd, then run a text editor called Nano to create a file called draft.txt:

$ cd thesis
$ nano draft.txt

Nano is a very simple text editor that only a programmer could really love. Figure 9 shows what it looks like when it runs: the cursor is the blinking square in the upper left, and the two lines across the bottom show us the available commands. (By convention, Unix documentation uses the caret ^ followed by a letter to mean "control plus that letter", so ^O means Control+O.)

The Nano Editor
Figure 9: The Nano Editor

Let's type in a short quotation (Figure 10) then use Control-O to write our data to disk. Once our quotation is saved, we can use Control-X to quit the editor and return to the shell.

Nano in Action
Figure 10: Nano in Action

Which Editor?

When we say, "nano is a text editor," we really do mean "text": it can only work with plain character data, not tables, images, or any other human-friendly media. We use it in examples because almost anyone can drive it anywhere without training, but please use something more powerful for real work. On Unix systems (such as Linux and Mac OS X), many programmers use Emacs or Vim (both of which are completely unintuitive, even by Unix standards), or a graphical editor such as Gedit. On Windows, you may wish to use Notepad++.

No matter what editor you use, you will need to know where it searches for and saves files. If you start it from the shell, it will (probably) use your current working directory as its default location. If you use your computer's start menu, it may want to save files in your desktop or documents directory instead. You can change this by navigating to another directory the first time you "Save As..."

nano doesn't leave any output on the screen after it exits. But ls now shows that we have created a file called draft.txt:

$ ls
draft.txt
$

If we just want to quickly create a file with no content, we can use the command touch.

$ touch draft_empty.txt
$ ls
draft.txt   draft_empty.txt
$

The touch command has allowed us to create a file without leaving the command line. As with text editors like nano, touch will create the file in your current working directory.

If we touch a file that already exists, the time of last modification for the file is updated to the present time, but the file is otherwise unaltered. We can demonstrate this by running ls with the -l flag to give a long listing of files in the current directory which includes information about the date and time of last modification.

$ ls -l
-rw-r--r-- 1 username username 512 Oct  9 15:46 draft.txt
-rw-r--r-- 1 username username   0 Oct  9 15:47 draft_empty.txt
$ touch draft.txt
$ ls -l
-rw-r--r-- 1 username username 512 Oct  9 15:48 draft.txt
-rw-r--r-- 1 username username   0 Oct  9 15:47 draft_empty.txt

Note how the modification time of draft.txt changes after we touch it.

We can run ls with the -s flag (for "size") to show us how large draft.txt is:

$ ls -s
   1  draft.txt        0  draft_empty.txt
$

Unfortunately, Unix reports sizes in disk blocks by default, which might be the least helpful default imaginable. If we add the -h flag, ls switches to more human-friendly units:

$ ls -s -h
 512  draft.txt        0  draft_empty.txt
$

Here, 512 is the number of bytes in the draft.txt file. This is more than we actually typed in because the smallest unit of storage on the disk is typically a block of 512 bytes.

Let's start tidying up by running rm draft.txt:

$ rm draft.txt
$

This command removes files ("rm" is short for "remove"). If we run ls, only the draft_empty.txt file remains:

$ ls
draft_empty.txt
$

Now we'll finish cleaning up by running rm draft_empty.txt:

$ rm draft_empty.txt
$

If we run ls again, its output is empty, which tells us that both of our files are gone:

$ ls
$

Deleting Is Forever

Unix doesn't have a trash bin: when we delete files, they are unhooked from the file system so that their storage space on disk can be recycled. Tools for finding and recovering deleted files do exist, but there's no guarantee they'll work in any particular situation, since the computer may recycle the file's disk space right away.

Let's re-create draft.txt as an empty file, and then move up one directory to /users/vlad using cd ..:

$ pwd
/users/vlad/thesis
$ touch draft.txt
$ ls
draft.txt
$ cd ..
$

If we try to remove the entire thesis directory using rm thesis, we get an error message:

$ rm thesis
rm: cannot remove `thesis': Is a directory
$

This happens because rm only works on files, not directories. The right command is rmdir, which is short for "remove directory". It doesn't work yet either, though, because the directory we're trying to remove isn't empty:

$ rmdir thesis
rmdir: failed to remove `thesis': Directory not empty
$

This little safety feature can save you a lot of grief, particularly if you are a bad typist. If we really want to get rid of thesis we should first delete the file draft.txt:

$ rm thesis/draft.txt
$

The directory is now empty, so rmdir can delete it:

$ rmdir thesis
$

With Great Power Comes Great Responsibility

Removing the files in a directory just so that we can remove the directory quickly becomes tedious. Instead, we can use rm with the -r flag (which stands for "recursive"):

$ rm -r thesis
$

This removes everything in the directory, then the directory itself. If the directory contains sub-directories, rm -r does the same thing to them, and so on. It's very handy, but can do a lot of damage if used without care.
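
If we are worried about making a mistake, rm's -i flag ("interactive") makes it ask for confirmation before each deletion. Here is a sketch of what that might look like if the thesis directory still existed (the exact wording of the prompts varies from system to system):

$ rm -r -i thesis
rm: descend into directory `thesis'? y
rm: remove regular file `thesis/draft.txt'? y
rm: remove directory `thesis'? y
$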

Let's create that directory and file one more time. (Note that this time we're running touch with the path thesis/draft.txt, rather than going into the thesis directory and running touch on draft.txt there.)

$ pwd
/users/vlad
$ mkdir thesis
$ touch thesis/draft.txt
$ ls thesis
draft.txt
$

draft.txt isn't a particularly informative name, so let's change the file's name using mv, which is short for "move":

$ mv thesis/draft.txt thesis/quotes.txt
$

The first parameter tells mv what we're "moving", while the second is where it's to go. In this case, we're moving thesis/draft.txt to thesis/quotes.txt, which has the same effect as renaming the file. Sure enough, ls shows us that thesis now contains one file called quotes.txt:

$ ls thesis
quotes.txt
$

Just for the sake of consistency, mv also works on directories—there is no separate mvdir command.
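
For example, renaming an entire directory looks just like renaming a file. Here is a sketch we won't actually run, so that our working files stay where they are:

$ mv thesis old-thesis
$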

Let's move quotes.txt into the current working directory. We use mv once again, but this time we'll just use the name of a directory as the second parameter to tell mv that we want to keep the filename, but put the file somewhere new. (This is why the command is called "move".) In this case, the directory name we use is the special directory name . that we mentioned earlier:

$ mv thesis/quotes.txt .
$

The effect is to move the file from the directory it was in to the current directory. ls now shows us that thesis is empty, and that quotes.txt is in our current directory:

$ ls thesis
$ ls quotes.txt
quotes.txt
$

(Notice that ls with a filename or directory name as a parameter only lists that file or directory.)

The cp command works very much like mv, except it copies a file instead of moving it. We can check that it did the right thing using ls with two paths as parameters—like most Unix commands, ls can be given thousands of paths at once:

$ cp quotes.txt thesis/quotations.txt
$ ls quotes.txt thesis/quotations.txt
quotes.txt   thesis/quotations.txt
$

To prove that we made a copy, let's delete the quotes.txt file in the current directory, and then run that same ls again. This time, it tells us that it can't find quotes.txt in the current directory, but it does find the copy in thesis that we didn't delete:

$ rm quotes.txt
$ ls quotes.txt thesis/quotations.txt
ls: cannot access quotes.txt: No such file or directory
thesis/quotations.txt
$

Another Useful Abbreviation

The shell interprets the character ~ (tilde) at the start of a path to mean "the current user's home directory". For example, if Vlad's home directory is /home/vlad, then ~/data is equivalent to /home/vlad/data. This only works if it is the first character in the path: /~ is not the user's home directory, and here/there/~/elsewhere is not /home/vlad/elsewhere.
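
Here is a quick sketch of the tilde in use (assuming, as in the example above, that Vlad's home directory is /home/vlad):

$ cd ~/data
$ pwd
/home/vlad/data
$ cd ~
$ pwd
/home/vlad
$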

Summary

  • Unix documentation uses '^A' to mean "control-A".
  • The shell does not have a trash bin: once something is deleted, it's really gone.
  • mkdir path creates a new directory.
  • cp old new copies a file.
  • mv old new moves (renames) a file or directory.
  • nano is a very simple text editor—please use something else for real work.
  • touch path creates an empty file if it doesn't already exist.
  • rm path removes (deletes) a file.
  • rmdir path removes (deletes) an empty directory.

Challenges

  1. What is the output of the closing ls command in the sequence shown below?

    $ pwd
    /home/thing/data
    $ ls
    proteins.dat
    $ mkdir recombine
    $ mv proteins.dat recombine
    $ cp recombine/proteins.dat ../proteins-saved.dat
    $ ls
    
  2. Suppose that:

    $ ls -F
    analyzed/  fructose.dat    raw/   sucrose.dat
    

    What command(s) could you run so that the commands below will produce the output shown?

    $ ls
    analyzed   raw
    $ ls analyzed
    fructose.dat    sucrose.dat
    
  3. What does cp do when given several filenames and a directory name, as in:

    $ mkdir backup
    $ cp thesis/citations.txt thesis/quotations.txt backup
    

    What does cp do when given three or more filenames, as in:

    $ ls -F
    intro.txt    methods.txt    survey.txt
    $ cp intro.txt methods.txt survey.txt
    

    Why do you think cp's behavior is different from mv's?

  4. The command ls -R lists the contents of directories recursively, i.e., lists their sub-directories, sub-sub-directories, and so on in alphabetical order at each level. The command ls -t lists things by time of last change, with most recently changed files or directories first. In what order does ls -R -t display things?

Wildcards

Objectives

  • Write wildcard expressions that select certain subsets of the files in one or more directories.
  • Write wildcard expressions to create multiple files.
  • Explain when wildcards are expanded.
  • Understand the difference between different wildcard characters.
  • Explain what globbing is.
  • Understand how to escape special characters.

Duration: 15-20 minutes.

Lesson

At this point we have a basic toolkit which allows us to do things in the shell that you can normally do in Windows, Mac OS, or GNU/Linux with your mouse. For example, we can create, delete and move individual files. We will now learn about wildcards, which will allow us to expand our toolkit so that we can perform operations on multiple files with a single command. We'll start in an empty directory and create four files:

$ touch file1.txt file2.txt otherfile1.txt otherfile2.txt
$ ls
file1.txt   file2.txt   otherfile1.txt   otherfile2.txt
$

If we want to see all filenames that start with "file", we can run the command ls file*:

$ ls file*
file1.txt   file2.txt
$

Asterisk (*) Wildcard

The asterisk wildcard * in the expression file* matches zero or more characters of any kind. Thus the expression file* matches anything that starts with "file". On the other hand, *file* would match anything containing the word "file" at all, such as file1.txt, file2.txt, otherfile1.txt, and otherfile2.txt.

When the shell sees a wildcard, it expands it to create a list of matching filenames before passing those names to whatever command is being run. This means that when you execute ls file*, the shell first expands it to ls file1.txt file2.txt in the above example, and then runs that command. This process of expansion is called globbing. Therefore commands like ls and rm never see the wildcard characters, just what those wildcards matched. This is another example of orthogonal design.
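
One way to watch this expansion happen is to use the echo command, which simply prints whatever arguments it is given. A small sketch, using the four files created above:

$ echo file*
file1.txt file2.txt
$ echo *file*
file1.txt file2.txt otherfile1.txt otherfile2.txt
$

Because the shell expands the wildcards before echo ever runs, echo sees (and prints) the matching filenames rather than the patterns themselves.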

There are four other basic wildcard characters we will explore:

  • The question mark, ?, matches any single character, so text?.txt would match text1.txt or textz.txt, but not text_1.txt.
  • The square brackets, [ and ], which can be used to specify a set of possible characters. For example, text[135].txt would match text1.txt, text3.txt, and text5.txt. You can also specify a natural sequence of characters using a dash. For example, text[1-6].txt would match text1.txt, text2.txt, ..., text6.txt.
  • The curly braces, {}, are similar to the square brackets, except that they take a list of items separated by commas, and that they always expand all elements of the list, whether or not the corresponding filenames exist. For example, text{1,5}.txt would expand to text1.txt text5.txt whether or not text1.txt and text5.txt actually exist. The elements of the list need not be single characters, so text{12,5}.txt expands to text12.txt text5.txt. Natural sequences can be specified using .., i.e. text{1..5}.txt would expand to text1.txt text2.txt text3.txt text4.txt text5.txt.

We can use any number of wildcards at a time: for example, p*.p?* matches anything that starts with a 'p' and ends with '.', 'p', and at least one more character (since the '?' has to match one character, and the final '*' can match any number of characters). Thus, p*.p?* would match preferred.practice, and even p.pi (since the first '*' can match no characters at all), but not quality.practice (doesn't start with 'p') or preferred.p (there isn't at least one character after the '.p').

Let's see some more examples of wildcards in action. First we will remove everything in the current directory:

$ rm *
$ ls
$

Now that our working directory is empty, let's create some empty files using the touch command and the curly braces:

$ touch file{4,6,7}.txt file_{a..e}.txt
$ 

The curly braces expand the second token file{4,6,7}.txt into three separate file names, and the third token file_{a..e}.txt expands into five separate file names. So, now if we list files using the asterisk wildcard, we get:

$ ls file*
file4.txt   file7.txt    file_b.txt   file_d.txt
file6.txt   file_a.txt   file_c.txt   file_e.txt
$

We have matched all files in our current directory that begin with file. Now, let us list a subset of those files using the question mark:

$ ls file?.txt
file4.txt   file6.txt    file7.txt
$

Note how this glob pattern matched only the numbered files. That's because only the numbered files have a single character between file and .txt. The lettered files have two characters between file and .txt:

$ ls file??.txt
file_a.txt   file_b.txt    file_c.txt   file_d.txt   file_e.txt
$

It's also possible to combine and nest wildcards. For example, file{4,_?}.txt would match file4.txt and file_?.txt, where ? matches any single character. To demonstrate this, let's re-create all of our files:

$ touch file{4,6,7}.txt file_{a..e}.txt
$ ls
file4.txt   file7.txt    file_b.txt   file_d.txt
file6.txt   file_a.txt   file_c.txt   file_e.txt
$ 

Now let's run ls file{4,_?}.txt:

$ ls file{4,_?}.txt
file4.txt   file_b.txt   file_d.txt
file_a.txt   file_c.txt   file_e.txt
$
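
Square brackets behave the same way when matching files do exist. A quick sketch using the files we just created:

$ ls file[46].txt
file4.txt   file6.txt
$ ls file_[a-c].txt
file_a.txt   file_b.txt   file_c.txt
$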

Finally, it is important to understand what happens if a wildcard expression doesn't match any files. In this case, the wildcard characters are interpreted literally. For example, if we were to enter ls file[AB].txt, the system will first try to find fileA.txt or fileB.txt. If it found either of those, then the wildcard expression would be expanded appropriately, but when it fails to find either of them, the expression remains unevaluated as file[AB].txt, and is then passed to ls. At this point, ls will try to find a single file that is actually named file[AB].txt. We can see that this is the case if we try ls file[AB].txt:

$ ls file[AB].txt
ls: file[AB].txt: No such file or directory
$

The system tells us that file[AB].txt doesn't exist since this was the file it was looking for after the wildcard expression found no matches. Still not convinced? Let's actually create a file named file[AB].txt. To do this, we need some way of telling the computer that we actually want the [ and ] characters to be in the filename, and that we don't mean them as wildcards.

Escaping Special Characters

Wildcard characters, like *, [], and {} are all special characters in bash. Another example of a special character which we have already encountered, and that is not a wildcard character, is the forward slash /, which is used to delimit directory levels. It is possible to escape special characters, i.e. turn off their special meanings. We can escape a single character by preceding it with a backslash, \. This tells the shell to interpret that character literally. For example, writing \/ tells bash to interpret the forward slash literally. We can escape more than one character by enclosing a set of characters in either single or double quotes. Single quotes preserve the literal value of each character enclosed. Double quotes preserve the literal value of each character enclosed with the exception of $, `, and \, which all retain their special meanings. Therefore, inside double quotes, backslashes \ preceding any of these three special characters will be removed. Otherwise, the backslash will remain. For the example above of creating a file named file[AB].txt, one can do any of the following: touch "file[AB].txt", touch 'file[AB].txt', touch file\[AB\].txt, or touch "file\[AB\].txt".

Now we can create the file file[AB].txt:

$ touch "file[AB].txt"
$ ls
file4.txt   file7.txt      file_a.txt   file_c.txt   file_e.txt
file6.txt   file[AB].txt   file_b.txt   file_d.txt
$

If we do ls file[AB].txt, it will find the file we've just created:

$ ls file[AB].txt
file[AB].txt
$

But if we create a file named fileA.txt, this will be found first by the wildcard expansion, since it first looks for fileA.txt and fileB.txt:

$ touch fileA.txt
$ ls file[AB].txt
fileA.txt
$

If we really do want to look for something called file[AB].txt, and not fileA.txt or fileB.txt, then we have to escape the special characters "[" and "]". We can do this:

$ ls file\[AB\].txt
file[AB].txt
$

or this:

$ ls "file[AB].txt"
file[AB].txt
$

Single quotes would work as well in this case.

Summary

  • Use wildcards to match filenames.
  • * is a wildcard pattern that matches zero or more characters in a pathname.
  • ? is a wildcard pattern that matches any single character.
  • [] and {} allow you to match specified sets of characters in different ways.
  • Wildcards are useful when performing actions on more than one file at a time.
  • The shell matches wildcards before running commands.
  • We can use wildcard patterns for nearly any command (cp, mv, rm, etc.).
  • Globbing is the process through which wildcard patterns are matched to filenames.
  • You can use multiple wildcard expressions in a single glob.

Challenges

  1. In the lesson we started in an empty directory and created several files using the command:

    $ touch file{4,6,7}.txt file_{a..e}.txt
    $
    

    What files, if any, would have been created if we had instead used square brackets? I.e.

    $ touch file[467].txt file_[a-e].txt
    $
    
  2. We also discussed the effect of running ls file[AB].txt when neither the file fileA.txt, nor the file fileB.txt, nor the file file[AB].txt existed. What would be the effect of running ls file{A,B}.txt under these circumstances?

  3. Recall the discussion of escaping characters using single quotes, double quotes, and the backslash.

    What would the output of ls be if we issued the following bash commands?

    $ touch "apple_file\ orange_file"
    $ touch "apple_file orange_file"
    $
    
  4. How can we create the same files as in the previous problem without any quotes? Hint: This requires some careful thought about how to apply the backslash notation. Would single quotes have produced different output files if they were used instead of double quotes in the previous problem?

Pipes and Filters

Objectives

  • Redirect a command's output to a file.
  • Process a file instead of keyboard input using redirection.
  • Construct command pipelines with two or more stages.
  • Explain what usually happens if a program or pipeline isn't given any input to process.
  • Explain Unix's "small pieces, loosely joined" philosophy.

Duration: 20-30 minutes.

Lesson

Now that we know a few basic commands and have learned how to use wildcards, we can finally look at the shell's most powerful feature: the ease with which it lets you combine existing programs in new ways. We'll start with a directory called molecules that contains six files describing some simple organic molecules. The .pdb extension indicates that these files are in Protein Data Bank format, a simple text format that specifies the type and position of each atom in the molecule.

$ ls molecules
cubane.pdb    ethane.pdb    methane.pdb
octane.pdb    pentane.pdb   propane.pdb
$

Let's go into that directory with cd and run the command wc *.pdb. wc is the "word count" command; it counts the number of lines, words, and characters in files:

$ cd molecules
$ wc *.pdb
  20  156 1158 cubane.pdb
  12   84  622 ethane.pdb
   9   57  422 methane.pdb
  30  246 1828 octane.pdb
  21  165 1226 pentane.pdb
  15  111  825 propane.pdb
 107  819 6081 total
$

If we run wc -l instead of just wc, the output shows only the number of lines per file:

$ wc -l *.pdb
  20  cubane.pdb
  12  ethane.pdb
   9  methane.pdb
  30  octane.pdb
  21  pentane.pdb
  15  propane.pdb
 107  total
$

We can also use -w to get only the number of words, or -c to get only the number of characters.
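
For example, here is a quick sketch using the counts shown above (the exact column spacing may differ):

$ wc -c methane.pdb
422 methane.pdb
$ wc -w methane.pdb
57 methane.pdb
$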

Now, which of these files is shortest? It's an easy question to answer when there are only six files, but what if there were 6000? That's the kind of job we want a computer to do.

Our first step toward a solution is to run the command:

$ wc -l *.pdb > lengths

The > tells the shell to redirect the command's output to a file instead of printing it to the screen. The shell will create the file if it doesn't exist, or overwrite the contents of that file if it does. (This is why there is no screen output: everything that wc would have printed has gone into the file lengths instead.)

ls lengths confirms that the file exists:

$ ls lengths
lengths
$

We can now send the content of lengths to the screen using cat lengths. cat stands for "concatenate": it prints the contents of files one after another. In this case, there's only one file, so cat just shows us what it contains:

$ cat lengths
  20  cubane.pdb
  12  ethane.pdb
   9  methane.pdb
  30  octane.pdb
  21  pentane.pdb
  15  propane.pdb
 107  total
$

Now let's use the sort command to sort its contents. This does not change the file. Instead, it sends the sorted result to the screen:

$ sort lengths
  9  methane.pdb
 12  ethane.pdb
 15  propane.pdb
 20  cubane.pdb
 21  pentane.pdb
 30  octane.pdb
107  total
$
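
A side note: sort compares lines as text, so the numbers above only come out in the right order here because wc lined them up with leading spaces. If we want a guaranteed numeric sort, we can add the -n flag; for this particular file the result is the same:

$ sort -n lengths
  9  methane.pdb
 12  ethane.pdb
 15  propane.pdb
 20  cubane.pdb
 21  pentane.pdb
 30  octane.pdb
107  total
$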

We can put the sorted list of lines in another temporary file called sorted-lengths by putting > sorted-lengths after the command, just as we used > lengths to put the output of wc into lengths. Once we've done that, we can run another command called head to get the first few lines in sorted-lengths:

$ sort lengths > sorted-lengths
$ head -1 sorted-lengths
  9  methane.pdb
$

Giving head the parameter -1 tells it that we only want the first line of the file; -20 would get the first 20, and so on. Since sorted-lengths contains the lengths of our files ordered from least to greatest, the output of head must be the file with the fewest lines.

If you think this is confusing, you're in good company: even once you understand what wc, sort, and head do, all those intermediate files make it hard to follow what's going on. How can we make it easier to understand?

Let's start by getting rid of the sorted-lengths file by running sort and head together:

$ sort lengths | head -1
  9  methane.pdb
$

The vertical bar between the two commands is called a pipe. It tells the shell that we want to use the output of the command on the left as the input to the command on the right. The computer might create a temporary file if it needs to, or copy data from one program to the other in memory, or something else entirely: we don't have to know or care.

Well, if we don't need to create the temporary file sorted-lengths, can we get rid of the lengths file too? The answer is "yes": we can use another pipe to send the output of wc directly to sort, which then sends its output to head:

$ wc -l *.pdb | sort | head -1
  9  methane.pdb
$

This is exactly like a mathematician nesting functions like sin(πx)² and saying "the square of the sine of x times π": in our case, the calculation is "head of sort of word count of *.pdb".

Inside Pipes

Here's what actually happens behind the scenes when we create a pipe. When a computer runs a program—any program—it creates a process in memory to hold the program's software and its current state. Every process has an input channel called standard input. (By this point, you may be surprised that the name is so memorable, but don't worry: most Unix programmers call it stdin.) Every process also has a default output channel called standard output, or stdout (Figure 11).

A Process with Standard Input and Output
Figure 11: A Process with Standard Input and Output

The shell is just another program, and runs in a process like any other. Under normal circumstances, whatever we type on the keyboard is sent to the shell on its standard input, and whatever it produces on standard output is displayed on our screen (Figure 12):

The Shell as a Process
Figure 12: The Shell as a Process

When we run a program, the shell creates a new process. It then temporarily sends whatever we type on our keyboard to that process's standard input, and whatever the process sends to standard output to the screen (Figure 13):

Running a Process
Figure 13: Running a Process

Here's what happens when we run wc -l *.pdb > lengths. The shell starts by telling the computer to create a new process to run the wc program. Since we've provided some filenames as parameters, wc reads from them instead of from standard input. And since we've used > to redirect output to a file, the shell connects the process's standard output to that file (Figure 14).

Running One Program with Redirection
Figure 14: Running One Program with Redirection

If we run wc -l *.pdb | sort instead, the shell creates two processes, one for each command in the pipe, so that wc and sort run simultaneously. The standard output of wc is fed directly to the standard input of sort; since there's no redirection with >, sort's output goes to the screen (Figure 15):

Running Two Programs in a Pipe
Figure 15: Running Two Programs in a Pipe

And if we run wc -l *.pdb | sort | head -1, we get the three processes shown in Figure 16 with data flowing from the files, through wc to sort, and from sort through head to the screen.

Running the Full Pipeline
Figure 16: Running the Full Pipeline

This simple idea is why Unix has been so successful. Instead of creating enormous programs that try to do many different things, Unix programmers focus on creating lots of simple tools that each do one job well, and work well with each other. Ten such tools can be combined in 100 ways, and that's only looking at pairings: when we start to look at pipes with multiple stages, the possibilities are almost uncountable.

This programming model is called pipes and filters. We've already seen pipes; a filter is a program that transforms a stream of input into a stream of output. Almost all of the standard Unix tools can work this way: unless told to do otherwise, they read from standard input, do something with what they've read, and write to standard output.

The key is that any program that reads lines of text from standard input, and writes lines of text to standard output, can be combined with every other program that behaves this way as well. You can and should write your programs this way, so that you and other people can put those programs into pipes to multiply their power.
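
As a small illustration of how composable these filters are, here is a sketch that finds the file with the second-fewest lines just by adding one more stage to the pipeline we already built (tail is head's counterpart: it shows the last few lines of its input):

$ wc -l *.pdb | sort | head -2 | tail -1
  12  ethane.pdb
$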

Redirecting Input

As well as using > to redirect a program's output, we can use < to redirect its input, i.e., to read from a file instead of from standard input. For example, instead of writing wc ammonia.pdb, we could write wc < ammonia.pdb. In the first case, wc gets a command line parameter telling it what file to open. In the second, wc doesn't have any command line parameters, so it reads from standard input, but we have told the shell to send the contents of ammonia.pdb to wc's standard input.
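
The visible difference is small but telling: when wc reads from standard input, it has no filename to report. A quick sketch using one of the files from the molecules directory (the exact spacing of the counts may vary):

$ wc methane.pdb
  9  57 422 methane.pdb
$ wc < methane.pdb
  9  57 422
$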

Nelle's Pipeline: Checking Files

Nelle has run her samples through the assay machines and created 1520 files in the north-pacific-gyre/2012-07-03 directory described earlier. As a quick sanity check, she types:

$ cd north-pacific-gyre/2012-07-03
$ wc -l *.txt

The output is 1520 lines that look like this:

 300 NENE01729A.txt
 300 NENE01729B.txt
 300 NENE01736A.txt
 300 NENE01751A.txt
 300 NENE01751B.txt
 300 NENE01812A.txt
 ... ...

Now she types this:

$ wc -l *.txt | sort | head -5
 240 NENE02018B.txt
 300 NENE01729A.txt
 300 NENE01729B.txt
 300 NENE01736A.txt
 300 NENE01751A.txt

Whoops: one of the files is 60 lines shorter than the others. When she goes back and checks it, she sees that she did that assay at 8:00 on a Monday morning—someone was probably in using the machine on the weekend, and she forgot to reset it. Before re-running that sample, she checks to see if any files have too much data:

$ wc -l *.txt | sort | tail -5
 300 NENE02040A.txt
 300 NENE02040B.txt
 300 NENE02040Z.txt
 300 NENE02043A.txt
 300 NENE02043B.txt

Those numbers look good—but what's that 'Z' doing there in the third-to-last line? All of her samples should be marked 'A' or 'B'; by convention, her lab uses 'Z' to indicate samples with missing information. To find others like it, she does this:

$ ls *Z.txt
NENE01971Z.txt    NENE02040Z.txt

Sure enough, when she checks the log on her laptop, there's no depth recorded for either of those samples. Since it's too late to get the information any other way, she must exclude those two files from her analysis. She could just delete them using rm, but there are actually some analyses she might do later where depth doesn't matter, so instead, she'll just be careful later on to select files using the wildcard expression *[AB].txt. As always, the '*' matches any number of characters, and recall from the previous section that [AB] matches either an 'A' or a 'B', so this matches all the valid data files she has.
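
For example, she can check that this pattern selects the files she wants and skips the two 'Z' files with something like this (output abbreviated):

$ ls *[AB].txt
NENE01729A.txt  NENE01729B.txt  NENE01736A.txt  ...  NENE02043A.txt  NENE02043B.txt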

Summary

  • command > file redirects a command's output to a file.
  • first | second is a pipeline: the output of the first command is used as the input to the second.
  • The best way to use the shell is to use pipes to combine simple single-purpose programs (filters).
  • cat displays the contents of its inputs.
  • head displays the first few lines of its input.
  • sort sorts its inputs.
  • tail displays the last few lines of its input.
  • wc counts lines, words, and characters in its inputs.

Challenges

  1. If we run sort (without the -n flag) on each of the two files shown below, we get the output shown after each one.

    The first file contains:

    1
    10
    2
    19
    22
    6

    and sorting it produces:

    1
    10
    19
    2
    22
    6

    The second file contains the same numbers, but each single-digit number has a leading space:

     1
    10
     2
    19
    22
     6

    and sorting it produces:

     1
     2
     6
    10
    19
    22

    Explain why we get different answers for the two files.

  2. What is the difference between:

    wc -l < *.dat
    

    and:

    wc -l *.dat
    
  3. The command uniq removes adjacent duplicated lines from its input. For example, if a file salmon.txt contains:

    coho
    coho
    steelhead
    coho
    steelhead
    steelhead
    

    then uniq salmon.txt produces:

    coho
    steelhead
    coho
    steelhead
    

    Why do you think uniq only removes adjacent duplicated lines? (Hint: think about very large data sets.) What other command could you combine with it in a pipe to remove all duplicated lines?

  4. A file called animals.txt contains the following data:

    2012-11-05,deer
    2012-11-05,rabbit
    2012-11-05,raccoon
    2012-11-06,rabbit
    2012-11-06,deer
    2012-11-06,fox
    2012-11-07,rabbit
    2012-11-07,bear
    

    Fill in the table showing what lines of text pass through each pipe in the pipeline below.

    cat animals.txt | head -5 | tail -3 | sort -r > final.txt

  5. The command:

    $ cut -d , -f 2 animals.txt
    

    produces the following output:

    deer
    rabbit
    raccoon
    rabbit
    deer
    fox
    rabbit
    bear
    

    What other command(s) could be added to this in a pipeline to find out what animals have been seen (without any duplicates in the animals' names)?

Loops

Objectives

  • Write a loop that applies one or more commands separately to each file in a set of files.
  • Trace the values taken on by a loop variable during execution of the loop.
  • Explain the difference between a variable's name and its value.
  • Explain why spaces and some punctuation characters shouldn't be used in files' names.
  • Demonstrate how to see what commands have recently been executed.
  • Re-run recently executed commands without retyping them.

Duration: 15-20 minutes.

Lesson

Wildcards and tabs are one way to save on typing. Another is to tell the shell to do something over and over again. Suppose we have several hundred genome data files in a directory with names like basilisk.dat, unicorn.dat, and so on. When new files arrive, we'd like to rename the existing ones to original-basilisk.dat, original-unicorn.dat, etc. We can't use:

mv *.dat original-*.dat

because original-*.dat doesn't match any existing files, so the shell can't expand it into the names we want. Depending on the shell's settings, the command either fails outright or (in the two-file case) collapses to:

mv basilisk.dat unicorn.dat

which wouldn't back up our files: it would replace the content of unicorn.dat with whatever's in basilisk.dat.

Instead, we can use a loop to do some operation once for each thing in a list. Here's a simple example that displays the first three lines of each file in turn:

$ for filename in basilisk.dat unicorn.dat
> do
>    head -3 $filename
> done
COMMON NAME: basilisk
CLASSIFICATION: basiliscus vulgaris
UPDATED: 1745-05-02
COMMON NAME: unicorn
CLASSIFICATION: equus monoceros
UPDATED: 1738-11-24

When the shell sees the keyword for, it knows it is supposed to repeat a command (or group of commands) once for each thing in a list. In this case, the list is the two filenames. Each time through the loop, the name of the thing currently being operated on is assigned to the variable filename. Inside the loop, we get the variable's value by putting $ in front of it (this is known as shell parameter expansion), so the first time through the loop, $filename is basilisk.dat, and the second time, unicorn.dat. Finally, the command that's actually being run is our old friend head, so this loop prints out the first three lines of each data file in turn.

Follow the Prompt

You may have noticed that the shell prompt changed from $ to > and back again as we were typing in our loop. The second prompt, >, is different to remind us that we haven't finished typing a complete command yet.
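
If we prefer, we can avoid the second prompt entirely by typing the whole loop on a single line, separating the pieces with semicolons:

$ for filename in basilisk.dat unicorn.dat; do head -3 $filename; done

We will see the shell itself use this one-line form later, when we recall a loop from our command history.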

What's In a Name?

We have called the variable in this loop filename in order to make its purpose clearer to human readers. The shell itself doesn't care what the variable is called (though when we use a variable's value, it is good practice to write ${filename} rather than $filename, so that the shell can tell exactly where the variable's name ends; there is a short example at the end of this note); if we wrote this loop as:

for x in basilisk.dat unicorn.dat
do
    head -3 $x
done

or:

for temperature in basilisk.dat unicorn.dat
do
    head -3 $temperature
done

it would work exactly the same way. Don't do this. Programs are only useful if people can understand them, so using meaningless names (like x) or misleading names (like temperature) increases the likelihood of the program being wrong.
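
Here is the ${filename} point from above in action. This is a made-up example, not part of the renaming task: we want to build a name ending in _v2, and since the underscore is a legal character in variable names, the shell needs the braces to know where filename stops.

for filename in basilisk.dat unicorn.dat
do
    echo "backing up to $filename_v2"     # wrong: the shell looks up a variable called filename_v2
    echo "backing up to ${filename}_v2"   # right: the braces mark where the variable name ends
done

The first echo prints nothing after "backing up to ", because no variable called filename_v2 has ever been set; the second prints basilisk.dat_v2 and then unicorn.dat_v2.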

Here's a slightly more complicated loop:

for filename in *.dat
do
    echo $filename
    head -100 $filename | tail -20
done

The shell starts by expanding *.dat to create the list of files it will process. The loop body then executes two commands for each of those files. The first, echo, just prints its command-line parameters to standard output. For example:

echo hello there

prints:

hello there

In this case, since the shell expands $filename to be the name of a file, echo $filename just prints the name of the file. Note that we can't write this as:

for filename in *.dat
do
    $filename
    head -100 $filename | tail -20
done

because then the first time through the loop, when $filename expanded to basilisk.dat, the shell would try to run basilisk.dat as a program. Finally, the head and tail combination selects lines 81-100 from whatever file is being processed.

Spaces in Names

Filename expansion in loops is another reason you should not use spaces in filenames. Suppose our data files are named:

basilisk.dat
red dragon.dat
unicorn.dat

If we try to process them using:

for filename in *.dat
do
    echo $filename
    head -100 $filename | tail -20
done

then *.dat will expand to the three names basilisk.dat, red dragon.dat, and unicorn.dat, and the loop variable filename will take each of those values in turn (filename expansion keeps each matching name together as a single word, spaces and all), which is exactly what we want. The trouble starts inside the loop: because $filename isn't quoted, the shell splits its value on spaces after expanding it, so when filename is red dragon.dat the second command actually run is:

head -100 red dragon.dat | tail -20

head and tail are handed two filenames, red and dragon.dat, neither of which exists, and the file red dragon.dat never gets processed at all. There are ways to get around this, the simplest of which is to put quote marks around "$filename", but the safest thing is to use dashes, underscores, or some other printable character instead of spaces in your file names.
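
If the spaces can't be avoided, the quoted version of the loop looks like this (just a sketch; the rest of this chapter sticks to space-free names):

for filename in *.dat
do
    echo "$filename"
    head -100 "$filename" | tail -20      # the quotes keep "red dragon.dat" together as one parameter
done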

Going back to our original file renaming problem, we can solve it using this loop:

for filename in *.dat
do
    mv $filename original-$filename
done

This loop runs the mv command once for each filename. The first time, when $filename expands to basilisk.dat, the shell executes:

mv basilisk.dat original-basilisk.dat

The second time, the command is:

mv unicorn.dat original-unicorn.dat

Measure Twice, Run Once

A loop is a way to do many things at once—or to make many mistakes at once if it does the wrong thing. One way to check what a loop would do is to echo the commands it would run instead of actually running them. For example, we could write our file renaming loop like this:

for filename in *.dat
do
    echo mv $filename original-$filename
done

Instead of running mv, this loop runs echo, which prints out:

mv basilisk.dat original-basilisk.dat
mv unicorn.dat original-unicorn.dat

without actually running those commands. We can then use up-arrow to redisplay the loop, back-arrow to get to the word echo, delete it, and then press "enter" to run the loop with the actual mv commands. This isn't foolproof, but it's a handy way to see what's going to happen when you're still learning how loops work.

So far we have been renaming files of the form someanimal.dat to original-someanimal.dat. But, what if instead of original-someanimal.dat we want someanimal.dat.original or someanimal-original.dat? For someanimal.dat.original we can do

for filename in *.dat
do
    mv $filename $filename.original
done

But for someanimal-original.dat we will need to go a little deeper into shell parameter expansion and use its find-and-replace feature. The syntax is ${variable/pattern/replacement}, which expands to the variable's value with the longest match of pattern replaced by replacement. We can achieve this task using

for filename in *.dat
do
    mv $filename ${filename/.dat/-original.dat}
done
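
As with the renaming loop earlier, we can check what this will do by putting echo in front of mv before running it for real:

for filename in *.dat
do
    echo mv $filename ${filename/.dat/-original.dat}
done

For our two example files, this prints:

mv basilisk.dat basilisk-original.dat
mv unicorn.dat unicorn-original.dat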

Nelle's Pipeline: Processing Files

Nelle is now ready to process her data files. Since she's still learning how to use the shell, she decides to build up the required commands in stages. Her first step is to make sure that she can select the right files—remember, these are ones whose names end in 'A' or 'B', rather than 'Z':

$ cd north-pacific-gyre/2012-07-03
$ for datafile in *[AB].txt
> do
>     echo $datafile
> done
NENE01729A.txt
NENE01729B.txt
NENE01736A.txt
...
NENE02043A.txt
NENE02043B.txt
$

Her next step is to figure out what to call the files that the goostats analysis program will create. Prefixing each input file's name with "stats-" seems simple, so she modifies her loop to do that:

$ for datafile in *[AB].txt
> do
>     echo $datafile stats-$datafile
> done
NENE01729A.txt stats-NENE01729A.txt
NENE01729B.txt stats-NENE01729B.txt
NENE01736A.txt stats-NENE01736A.txt
...
NENE02043A.txt stats-NENE02043A.txt
NENE02043B.txt stats-NENE02043B.txt
$

She hasn't actually run goostats yet, but now she's sure she can select the right files and generate the right output filenames.

Typing in commands over and over again is becoming tedious, though, and Nelle is worried about making mistakes, so instead of re-entering her loop, she presses the up arrow. In response, Bash redisplays the whole loop on one line (using semi-colons to separate the pieces):

$ for datafile in *[AB].txt; do echo $datafile stats-$datafile; done

Using the left arrow key, Nelle backs up and changes the command echo to goostats:

$ for datafile in *[AB].txt; do goostats $datafile stats-$datafile; done

When she presses enter, Bash runs the modified command. However, nothing appears to happen—there is no output. After a moment, Nelle realizes that since her script doesn't print anything to the screen any longer, she has no idea whether it is running, much less how quickly. She kills the job by typing Control-C, uses up-arrow to repeat the command, and edits it to read:

$ for datafile in *[AB].txt; do echo $datafile; goostats $datafile stats-$datafile; done

When she runs her program now, it produces one line of output every five seconds or so:

NENE01729A.txt
NENE01729B.txt
NENE01736A.txt
...
$

1518 times 5 seconds, divided by 60, tells her that her script will take about two hours to run. As a final check, she opens another terminal window, goes into north-pacific-gyre/2012-07-03, and uses cat stats-NENE01729B.txt to examine one of the output files. It looks good, so she decides to get some coffee and catch up on her reading.

Those Who Know History Can Choose to Repeat It

Another way to repeat previous work is to use the history command to get a list of the last few hundred commands that have been executed, and then to use !123 (where "123" is replaced by the command number) to repeat one of those commands. For example, if Nelle types this:

$ history | tail -5
  456  ls -l NENE0*.txt
  457  rm stats-NENE01729B.txt.txt
  458  goostats NENE01729B.txt stats-NENE01729B.txt
  459  ls -l NENE0*.txt
  460  history

then she can re-run goostats on NENE01729B.txt simply by typing !458.

Summary

  • Use a for loop to repeat commands once for every thing in a list.
  • Every for loop needs a variable to refer to the current "thing".
  • Use $name to expand a variable (i.e., get its value).
  • Do not use spaces, quotes, or wildcard characters such as '*' or '?' in filenames, as it complicates variable expansion.
  • Give files consistent names that are easy to match with wildcard patterns to make it easy to select them for looping.
  • Use the up-arrow key to scroll up through previous commands to edit and repeat them.
  • Use history to display recent commands, and !number to repeat a command by number.

Challenges

  1. Suppose that ls initially displays:

    fructose.dat    glucose.dat   sucrose.dat
    

    What is the output of:

    for datafile in *.dat
    do
        ls *.dat
    done
    

    In the same directory, what is the effect of this loop?

    for sugar in *.dat
    do
        echo $sugar
        cat $sugar > xylose.dat
    done
    
    1. Prints fructose.dat, glucose.dat, and sucrose.dat, and copies sucrose.dat to create xylose.dat.
    2. Prints fructose.dat, glucose.dat, and sucrose.dat, and concatenates all three files to create xylose.dat.
    3. Prints fructose.dat, glucose.dat, sucrose.dat, and xylose.dat, and copies sucrose.dat to create xylose.dat.
    4. None of the above.
  2. The expr command does simple arithmetic using command-line parameters:

    $ expr 3 + 5
    8
    $ expr 30 / 5 - 2
    4
    

    Given this, what is the output of:

    for left in 2 3
    do
        for right in $left
        do
            expr $left + $right
        done
    done
    
  3. Describe in words what the following loop does.

    for how in frog11 prcb redig
    do
        $how -limit 0.01 NENE01729B.txt
    done
    
  4. The loop:

    for num in {1..3}
    do
        echo $num
    done
    

    prints:

    1
    2
    3
    

    However, the loop:

    for num in {0.1..0.3}
    do
        echo $num
    done
    

    prints:

    {0.1..0.3}
    

    Write a loop that prints:

    0.1
    0.2
    0.3
    

Shell Scripts

Objectives

  • Write a shell script that runs a command or series of commands for a fixed set of files.
  • Run a shell script from the command line.
  • Write a shell script that operates on a set of files defined by the user on the command line.
  • Create pipelines that include user-written shell scripts.

Duration: 15 minutes.

Lesson

We are finally ready to see what makes the shell such a powerful programming environment. We are going to take the commands we repeat frequently and save them in files so that we can re-run all those operations again later by typing a single command. For historical reasons, a bunch of commands saved in a file is usually called a shell script, but make no mistake: these are actually small programs.

Let's start by putting the following line in the file middle.sh:

head -20 cholesterol.pdb | tail -5

This is a variation on the pipe we constructed earlier: it selects lines 16-20 of the file cholesterol.pdb. Remember, we are not running it as a command just yet: we are putting the commands in a file.

Text vs. Whatever

We usually call programs like Microsoft Word or LibreOffice Writer "text editors", though a more accurate name is word processors, and we need to be a bit more careful when it comes to programming. By default, Microsoft Word uses .doc (and LibreOffice Writer uses .odt) files to store not only text, but also formatting information about fonts, headings, and so on. This extra information isn't stored as characters, and doesn't mean anything to the shell interpreter: it expects input files to contain nothing but the letters, digits, and punctuation on a standard computer keyboard. When editing programs, therefore, you must either use a plain text editor, or be careful to save files as plain text.

Once we have saved the file, we can ask the shell to execute the commands it contains. Our shell is called bash, so we run the following command:

$ bash middle.sh
ATOM     14  C           1      -1.463  -0.666   1.001  1.00  0.00
ATOM     15  C           1       0.762  -0.929   0.295  1.00  0.00
ATOM     16  C           1       0.771  -0.937   1.840  1.00  0.00
ATOM     17  C           1      -0.664  -0.610   2.293  1.00  0.00
ATOM     18  C           1      -4.705   2.108  -0.396  1.00  0.00

Sure enough, our script's output is exactly what we would get if we ran that pipeline directly.

Running Scripts Without Calling Bash

We actually don't need to call Bash explicitly when we want to run a shell script. Instead, we can change the permissions on the shell script so that the operating system knows it is runnable, then put a special line at the start to tell the OS what program to use to run it. Please see the appendix for an explanation of how to do this.
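
The details are in the appendix, but the outline looks roughly like this: add a first line telling the operating system to use bash, mark the file as executable, and then run it directly.

$ cat middle.sh
#!/bin/bash
head -20 cholesterol.pdb | tail -5
$ chmod +x middle.sh
$ ./middle.sh

The output is the same five ATOM lines as before.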

What if we want to select lines from an arbitrary file? We could edit middle.sh each time to change the filename, but that would probably take longer than just retyping the command. Instead, let's edit middle.sh and replace cholesterol.pdb with a special variable called $1:

$ cat middle.sh
head -20 $1 | tail -5

Inside a shell script, $1 means "the first filename (or other parameter) on the command line". We can now run our script like this:

$ bash middle.sh cholesterol.pdb
ATOM     14  C           1      -1.463  -0.666   1.001  1.00  0.00
ATOM     15  C           1       0.762  -0.929   0.295  1.00  0.00
ATOM     16  C           1       0.771  -0.937   1.840  1.00  0.00
ATOM     17  C           1      -0.664  -0.610   2.293  1.00  0.00
ATOM     18  C           1      -4.705   2.108  -0.396  1.00  0.00

or on a different file like this:

$ bash middle.sh vitamin_a.pdb
ATOM     14  C           1       1.788  -0.987  -0.861
ATOM     15  C           1       2.994  -0.265  -0.829
ATOM     16  C           1       4.237  -0.901  -1.024
ATOM     17  C           1       5.406  -0.117  -1.087
ATOM     18  C           1      -0.696  -2.628  -0.641

We still need to edit middle.sh each time we want to adjust the range of lines, though. Let's fix that by using the special variables $2 and $3:

$ cat middle.sh
head $2 $1 | tail $3
$ bash middle.sh vitamin_a.pdb -20 -5
ATOM     14  C           1       1.788  -0.987  -0.861
ATOM     15  C           1       2.994  -0.265  -0.829
ATOM     16  C           1       4.237  -0.901  -1.024
ATOM     17  C           1       5.406  -0.117  -1.087
ATOM     18  C           1      -0.696  -2.628  -0.641

What if we want to process many files in a single pipeline? For example, if we want to sort our PDB files by length, we would type:

$ wc -l *.pdb | sort -n

because wc -l lists the number of lines in the files, and sort -n sorts things numerically. We could put this in a file, but then it would only ever sort a list of PDB files in the current directory. If we want to be able to get a sorted list of other kinds of files, possibly from many sub-directories, we need a way to get all those names into the script. We can't use $1, $2, and so on because we don't know how many files there are. Instead, we use the special variable $*, which means, "All of the command-line parameters to the shell script." Here's an example:

$ cat sorted.sh
wc -l $* | sort -n
$ bash sorted.sh *.dat backup/*.dat
      29 chloratin.dat
      89 backup/chloratin.dat
      91 sphagnoi.dat
     156 sphag2.dat
     172 backup/sphag-merged.dat
     182 girmanis.dat

Why Isn't It Doing Anything?

What happens if a script is supposed to process a bunch of files, but we don't give it any filenames? For example, what if we type:

$ bash sorted.sh

but don't say *.dat (or anything else)? In this case, $* expands to nothing at all, so the pipeline inside the script is effectively:

wc -l | sort -n

Since it doesn't have any filenames, wc assumes it is supposed to process standard input, so it just sits there and waits for us to give it some data interactively. From the outside, though, all we see is the script sitting there, apparently doing nothing. (If this happens, typing Control-C stops the program, just as it stopped Nelle's silent loop earlier.)

But now consider a script called distinct.sh:

sort $* | uniq

Let's run it as part of a pipeline without any filenames:

$ head -5 *.dat | bash distinct.sh

This is equivalent to:

$ head -5 *.dat | sort | uniq

which is actually what we want.

We have two more things to do before we're finished with our simple shell scripts. If you look at a script like:

wc -l $* | sort -n

you can probably puzzle out what it does. On the other hand, if you look at this script:

# List files sorted by number of lines.
wc -l $* | sort -n

you don't have to puzzle it out—the comment at the top of the script tells you what it does. A line or two of documentation like this make it much easier for other people (including your future self) to re-use your work. The only caveat is that each time you modify the script, you should check that the comment is still accurate: an explanation that sends the reader in the wrong direction is worse than none at all.

Second, suppose we have just run a series of commands that did something useful—for example, that created a graph we'd like to use in a paper. We'd like to be able to re-create the graph later if we need to, so we want to save the commands in a file. Instead of typing them in again (and potentially getting them wrong), we can do this:

$ history | tail -4 > redo-figure-3.sh

The file redo-figure-3.sh now contains:

 297 goostats -r NENE01729B.txt stats-NENE01729B.txt
 298 goodiff stats-NENE01729B.txt /data/validated/01729.txt > 01729-differences.txt
 299 cut -d ',' -f 2-3 01729-differences.txt > 01729-time-series.txt
 300 ygraph --format scatter --color bw --borders none 01729-time-series.txt figure-3.png

After a moment's work in an editor to remove the serial numbers on the commands, we have a completely accurate record of how we created that figure.

Unnumbering

Nelle could also use colrm (short for "column removal") to remove the serial numbers on her previous commands.

In practice, most people develop shell scripts by running commands at the shell prompt a few times to make sure they're doing something useful, and doing it correctly, then saving them in a file for re-use. This style of work allows people to recycle what they discover about their data and their workflow with just a few extra keystrokes.

Nelle's Pipeline: Creating a Script

An off-hand comment from her supervisor has made Nelle realize that she should have provided a couple of extra parameters to goostats when she processed her files. This might have been a disaster if she had done all the analysis by hand, but thanks to for loops, it will only take a couple of hours to re-do.

Experience has taught her, though, that if something needs to be done twice, it will probably need to be done a third or fourth time as well. She runs the editor and writes the following:

# Calculate reduced stats for data files at J = 100 c/bp.
for datafile in $*
do
    echo $datafile
    goostats -J 100 -r $datafile stats-$datafile
done

(The parameters -J 100 and -r are the ones her supervisor said she should have used.) She saves this in a file called do-stats.sh, so that she can now re-do the first stage of her analysis by typing:

$ bash do-stats.sh *[AB].txt

She can also do this:

$ bash do-stats.sh *[AB].txt | wc -l

so that the output is just the number of files processed, rather than the names of the files that were processed.

One thing to note about Nelle's script is her choice to let the person running it decide what files to process. She could have written the script as:

# Calculate reduced stats for Site A and Site B data files at J = 100 c/bp.
for datafile in *[AB].txt
do
    echo $datafile
    goostats -J 100 -r $datafile stats-$datafile
done

The advantage is that this always selects the right files: she doesn't have to remember to exclude the 'Z' files. The disadvantage is that it always selects just those files—she can't run it on all files (including the 'Z' files), or on the 'G' or 'H' files her colleagues in Antarctica are producing, without editing the script. If she wanted to be more adventurous, she could modify her script to check for command-line parameters, and use *[AB].txt if none were provided. Of course, this introduces another tradeoff between flexibility and complexity; we'll explore this later.
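
For the curious, here is a sketch of what that check might look like, using the special variable $#, which holds the number of command-line parameters:

# Calculate reduced stats for data files at J = 100 c/bp.
# If no filenames were given on the command line, fall back to the A and B files.
if [ $# -eq 0 ]
then
    set -- *[AB].txt    # make the A and B files the command-line parameters
fi
for datafile in $*
do
    echo $datafile
    goostats -J 100 -r $datafile stats-$datafile
done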

Summary

  • Save commands in files (usually called shell scripts) for re-use.
  • Use bash filename to run saved commands.
  • $* refers to all of a shell script's command-line parameters.
  • $1, $2, etc., refer to specified command-line parameters.
  • Letting users decide what files to process is more flexible and more consistent with built-in Unix commands.

Challenges

  1. Leah has several hundred data files, each of which is formatted like this:

    2013-11-05,deer,5
    2013-11-05,rabbit,22
    2013-11-05,raccoon,7
    2013-11-06,rabbit,19
    2013-11-06,deer,2
    2013-11-06,fox,1
    2013-11-07,rabbit,18
    2013-11-07,bear,1
    

    Write a shell script called species.sh that takes any number of filenames as command-line parameters, and uses cut, sort, and uniq to print a list of the unique species appearing in each of those files separately.

  2. Write a shell script called longest.sh that takes the name of a directory and a filename extension as its parameters, and prints out the name of the most recently modified file in that directory with that extension. For example:

    $ bash longest.sh /tmp/data pdb
    

    would print the name of the PDB file in /tmp/data that has been changed most recently.

  3. If you run the command:

    history | tail -5 > recent.sh
    

    the last command in the file is the history command itself, i.e., the shell has added history to the command log before actually running it. In fact, the shell always adds commands to the log before running them. Why do you think it does this?

  4. Joel's data directory contains three files: fructose.dat, glucose.dat, and sucrose.dat. Explain what each of the following shell scripts does when run as bash scriptname *.dat.

    1.
    echo *.*
    
    2.
    for filename in $1 $2 $3
    do
        cat $filename
    done
    
    3.
    echo $*.dat
    

Finding Things

Objectives

  • Use grep to select lines from text files that match simple patterns.
  • Use find to find files whose names match simple patterns.
  • Use the output of one command as the command-line parameters to another command.
  • Explain what is meant by "text" and "binary" files, and why many common tools don't handle the latter well.

Duration: 15 minutes.

Lesson

Google vs. Grep
Figure 17: Google vs. Grep

You can often guess someone's age by listening to how people talk about search (Figure 17). Just as young people use "Google" as a verb, crusty old Unix programmers use "grep". The word is a contraction of "global/regular expression/print", a common sequence of operations in early Unix text editors. It is also the name of a very useful command-line program.

grep finds and prints lines in files that match a pattern. For our examples, we will use a file that contains three haikus taken from a 1998 competition in Salon magazine:

The Tao that is seen
Is not the true Tao, until
You bring fresh toner.

With searching comes loss
and the presence of absence:
"My Thesis" not found.

Yesterday it worked
Today it is not working
Software is like that.

Let's find lines that contain the word "not":

$ grep not haiku.txt
Is not the true Tao, until
"My Thesis" not found.
Today it is not working
$

Here, not is the pattern we're searching for. It's pretty simple: every alphanumeric character matches against itself. After the pattern comes the name or names of the files we're searching in. The output is the three lines in the file that contain the letters "not".

Let's try a different pattern: "day".

$ grep day haiku.txt
Yesterday it worked
Today it is not working
$

This time, the output is lines containing the words "Yesterday" and "Today", which both have the letters "day". If we give grep the -w flag, it restricts matches to word boundaries, so that only lines with the word "day" will be printed:

$ grep -w day haiku.txt
$

In this case, there aren't any, so grep's output is empty.

Another useful option is -n, which numbers the lines that match:

$ grep -n it haiku.txt
5:With searching comes loss
9:Yesterday it worked
10:Today it is not working
$

Here, we can see that lines 5, 9, and 10 contain the letters "it".

As with other Unix commands, we can combine flags. For example, since -i makes matching case-insensitive, and -v inverts the match, using them both only prints lines that don't match the pattern in any mix of upper and lower case:

$ grep -i -v the haiku.txt
You bring fresh toner.

With searching comes loss

Yesterday it worked
Today it is not working
Software is like that.
$

grep has lots of other options. To find out what they are, we can type man grep. man is the Unix "manual" command. It prints a description of a command and its options, and (if you're lucky) provides a few examples of how to use it:

$ man grep
GREP(1)                                                                                              GREP(1)

NAME
       grep, egrep, fgrep - print lines matching a pattern

SYNOPSIS
       grep [OPTIONS] PATTERN [FILE...]
       grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...]

DESCRIPTION
       grep  searches the named input FILEs (or standard input if no files are named, or if a single hyphen-
       minus (-) is given as file name) for lines containing a match to the given PATTERN.  By default, grep
       prints the matching lines.
       …        …        …

OPTIONS
   Generic Program Information
       --help Print  a  usage  message  briefly summarizing these command-line options and the bug-reporting
              address, then exit.

       -V, --version
              Print the version number of grep to the standard output stream.  This version number should be
              included in all bug reports (see below).

   Matcher Selection
       -E, --extended-regexp
              Interpret  PATTERN  as  an  extended regular expression (ERE, see below).  (-E is specified by
              POSIX.)

       -F, --fixed-strings
              Interpret PATTERN as a list of fixed strings, separated by newlines, any of  which  is  to  be
              matched.  (-F is specified by POSIX.)
    …        …        …

Wildcards

grep's real power doesn't come from its options, though; it comes from the fact that patterns can include wildcards. (The technical name for these is regular expressions, which is what the "re" in "grep" stands for.) Regular expressions are complex enough that we devoted an entire section of the website to them; if you want to do complex searches, please check it out. As a taster, we can find lines that have an 'o' in the second position like this:

$ grep -E '^.o' haiku.txt
You bring fresh toner.
Today it is not working
Software is like that.

We use the -E flag and put the pattern in quotes to prevent the shell from trying to interpret it. (If the pattern contained a '*', for example, the shell would try to expand it before running grep.) The '^' in the pattern anchors the match to the start of the line. The '.' matches a single character (just like '?' in the shell), while the 'o' matches an actual 'o'.
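
As one more taster, '$' anchors a pattern to the end of a line, and a backslash stops the '.' from matching any character, so we can find the lines that end with a period:

$ grep -E '\.$' haiku.txt
You bring fresh toner.
"My Thesis" not found.
Software is like that.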

While grep finds lines in files, the find command finds files themselves. Again, it has a lot of options; to show how the simplest ones work, we'll use the directory tree in Figure 18:

Sample Files and Directories
Figure 18: Sample Files and Directories

Vlad's home directory contains one file called notes.txt and three subdirectories: thesis (which is sadly empty), data (which contains two files, first.txt and second.txt), and tools (which contains the programs format and stats, and an empty subdirectory called old).

For our first command, let's run find . -type d. . is the directory where we want our search to start; -type d means "things that are directories". Sure enough, find's output is the names of the five directories in our little tree (including ., the current working directory):

$ find . -type d
./
./data
./thesis
./tools
./tools/old
$

If we change -type d to -type f, we get a listing of all the files instead:

$ find . -type f
./data/first.txt
./data/second.txt
./notes.txt
./tools/format
./tools/stats
$

find automatically goes into subdirectories, their subdirectories, and so on to find everything that matches the pattern we've given it. If we don't want it to, we can use -maxdepth to restrict the depth of search:

$ find . -maxdepth 1 -type f
./notes.txt
$

The opposite of -maxdepth is -mindepth, which tells find to only report things that are at or below a certain depth. -mindepth 2 therefore finds all the files that are two or more levels below us:

$ find . -mindepth 2 -type f
./data/first.txt
./data/second.txt
./tools/format
./tools/stats
$

Another option is -empty, which restricts matches to empty files and directories:

$ find . -empty
./thesis
./tools/old
$

Now let's try matching by name:

$ find . -name *.txt
./notes.txt
$

We expected it to find all the text files, but it only prints out ./notes.txt. The problem is that the shell expands wildcard characters like * before commands run. Since *.txt in the current directory expands to notes.txt, the command we actually ran was:

$ find . -name notes.txt

find did what we asked; we just asked for the wrong thing.

To get what we want, let's do what we did with grep: put *.txt in single quotes to prevent the shell from expanding the * wildcard. This way, find actually gets the pattern *.txt, not the expanded filename notes.txt:

$ find . -name '*.txt'
./data/first.txt
./data/second.txt
./notes.txt
$

As we said earlier, the command line's power lies in combining tools. We've seen how to do that with pipes; let's look at another technique. As we just saw, find . -name '*.txt' gives us a list of all text files in or below the current directory. How can we combine that with wc -l to count the lines in all those files?

One way is to put the find command inside $():

$ wc -l $(find . -name '*.txt')
  70  ./data/first.txt
 420  ./data/second.txt
  30  ./notes.txt
 520  total
$

When the shell executes this command, the first thing it does is run whatever is inside the $(). It then replaces the $() expression with that command's output. Since the output of find is the three filenames ./data/first.txt, ./data/second.txt, and ./notes.txt, the shell constructs the command:

$ wc -l ./data/first.txt ./data/second.txt ./notes.txt

which is what we wanted. This expansion is exactly what the shell does when it expands wildcards like * and ?, but lets us use any command we want as our own "wildcard".
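
And because the result is an ordinary command, we can keep building on it; for example, reusing the counts from above, we can pipe them through sort -n as before:

$ wc -l $(find . -name '*.txt') | sort -n
  30  ./notes.txt
  70  ./data/first.txt
 420  ./data/second.txt
 520  total
$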

It's very common to use find and grep together. The first finds files that match a pattern; the second looks for lines inside those files that match another pattern. Here, for example, we can find PDB files that contain iron atoms by looking for the string "FE" in all the .pdb files below the current directory:

$ grep FE $(find . -name '*.pdb')
./human/heme.pdb:ATOM  25  FE  1  -0.924  0.535  -0.518
$

Binary Files

We have focused exclusively on finding things in text files. What if your data is stored as images, in databases, or in some other format? One option would be to extend tools like grep to handle those formats. This hasn't happened, and probably won't, because there are too many formats to support.

The second option is to convert the data to text, or extract the text-ish bits from the data. This is probably the most common approach, since it only requires people to build one tool per data format (to extract information). On the one hand, it makes simple things easy to do. On the other hand, complex things are usually impossible. For example, it's easy enough to write a program that will extract X and Y dimensions from image files for grep to play with, but how would you write something to find values in a spreadsheet whose cells contained formulas?

The third choice is to recognize that the shell and text processing have their limits, and to use a programming language such as Python instead. When the time comes to do this, don't be too hard on the shell: many modern programming languages, Python included, have borrowed a lot of ideas from it, and imitation is also the sincerest form of praise.

Nelle's Pipeline: The Second Stage

Nelle now has a directory called north-pacific-gyre/2012-07-03 containing 1518 data files, and needs to compare each one against all of the others to find the hundred pairs with the highest pairwise scores. Armed with what she has learned so far, she writes the following script:

# Compare all pairs of files.
for left in $*
do
    for right in $*
    do
        echo $left $right $(goodiff $left $right)
    done
done

The outermost loop assigns the name of each file to the variable left in turn. The inner loop does the same thing for the variable right each time the outer loop executes, so inside the inner loop, left and right are given each pair of filenames.

Each time it runs the command inside the inner loop, the shell starts by running goodiff on the two files in order to expand the $() expression. Once it's done that, it passes the output, along with the names of the files, to echo. Thus, if Nelle saves this script as pairwise.sh and runs it as:

$ bash pairwise.sh stats-*.txt

the shell runs:

echo stats-NENE01729A.txt stats-NENE01729A.txt $(goodiff stats-NENE01729A.txt stats-NENE01729A.txt)
echo stats-NENE01729A.txt stats-NENE01729B.txt $(goodiff stats-NENE01729A.txt stats-NENE01729B.txt)
echo stats-NENE01729A.txt stats-NENE01736A.txt $(goodiff stats-NENE01729A.txt stats-NENE01736A.txt)
...

which turns into:

echo stats-NENE01729A.txt stats-NENE01729A.txt files are identical
echo stats-NENE01729A.txt stats-NENE01729B.txt 0.97182
echo stats-NENE01729A.txt stats-NENE01736A.txt 0.45223
...

which in turn prints:

stats-NENE01729A.txt stats-NENE01729A.txt files are identical
stats-NENE01729A.txt stats-NENE01729B.txt 0.97182
stats-NENE01729A.txt stats-NENE01736A.txt 0.45223
...

That's a good start, but Nelle can do better. First, she notices that when the two input files are the same, the output is the words "files are identical" instead of a numerical score. She can remove these lines like this:

$ bash pairwise.sh stats-*.txt | grep -v 'files are identical'

or put the call to grep inside the shell script (which will be less error-prone):

for left in $*
do
    for right in $*
    do
        echo $left $right $(goodiff $left $right)
    done
done | grep -v 'files are identical'

This works because the entire for...done loop counts as a single command in Bash. If Nelle wanted to make this clearer, she could put parentheses around the loop:

(for left in $*
do
    for right in $*
    do
        echo $left $right $(goodiff $left $right)
    done
done) | grep -v 'files are identical'

The last thing Nelle needs to do before writing up is find the 100 best pairwise matches. She has seen this before: sort the lines numerically, then use head to select the top lines. However, the numbers she wants to sort by are at the end of each line, rather than at the beginning. Reading the output of man sort tells her that the -k flag will let her specify which column of input she wants to use as a sort key, but the syntax looks a little complicated. Instead, she moves the score to the front of each line:

(for left in $*
do
    for right in $*
    do
        echo $(goodiff $left $right) $left $right
    done
done) | grep -v 'files are identical'

and then adds two more commands to the pipeline:

(for left in $*
do
    for right in $*
    do
        echo $(goodiff $left $right) $left $right
    done
done) | grep -v 'files are identical' | sort -n -r | head -100

Since this is hard to read, she uses \ to tell the shell that she's continuing commands on the next line:

(for left in $*
do
    for right in $*
    do
        echo $(goodiff $left $right) $left $right
    done
done) \
| grep -v 'files are identical' \
| sort -n -r \
| head -100

She then runs:

$ bash pairwise.sh stats-*.txt > top100.txt

and heads off to a seminar, confident that by the time she comes back tomorrow, top100.txt will contain the results she needs for her paper.

Summary

  • Everything is stored as bytes, but the bytes in binary files do not represent characters.
  • Use nested loops to run commands for every combination of two lists of things.
  • Use \ to break one logical line into several physical lines.
  • Use parentheses () to keep things combined.
  • Use $(command) to insert a command's output in place.
  • find finds files with specific properties that match patterns.
  • grep selects lines in files that match patterns.
  • man command displays the manual page for a given command.

Challenges

  1. The middle.sh script's numerical parameters are the number of lines to take from the head of the file, and the number of lines to take from the tail of that range. It would be more intuitive if the parameters were the index of the first line to keep and the number of lines, e.g., if:

    $ bash middle.sh somefile 50 5
    

    selected lines 50-54 instead of lines 46-50. Use $() and the expr command to do this.

  2. Write a short explanatory comment for the following shell script:

    find . -name '*.dat' -print | wc -l | sort -n
    
  3. The -v flag to grep inverts pattern matching, so that only lines which do not match the pattern are printed. Given that, which of the following commands will find all files in /data whose names end in ose.dat (e.g., sucrose.dat or maltose.dat), but do not contain the word temp?

    1.
    find /data -name '*.dat' -print | grep ose | grep -v temp
    
    2.
    find /data -name ose.dat -print | grep -v temp
    
    3.
    grep -v temp $(find /data -name '*ose.dat' -print)
    
    4. None of the above.
  4. Nelle has saved the IDs of some suspicious observations in a file called maybe-faked.txt:

    NENE01909C
    NEMA04220A
    NEMA04301A
    

    Which of the following scripts will search all of the .txt files whose names begin with NE for references to these records?

    1.
    for pat in maybe-faked.txt
    do
        find . -name 'NE*.txt' -print | grep $pat
    done
    
    2.
    for pat in $(cat maybe-faked.txt)
    do
        grep $pat $(find . -name 'NE*.txt' -print)
    done
    
    3.
    for pat in $(cat maybe-faked.txt)
    do
        find . -name 'NE*.txt' -print | grep $pat
    done
    
    4. None of the above.

Summary

The Unix shell is older than most of the people who use it. It has survived so long because it is one of the most productive programming environments ever created—maybe even the most productive. Its syntax may be cryptic, but people who have mastered it can experiment with different commands interactively, then use what they have learned to automate their work. Graphical user interfaces may be better at the first, but the shell is still unbeaten at the second. And as Alfred North Whitehead wrote in 1911, "Civilization advances by extending the number of important operations which we can perform without thinking about them."