Bash Golf Part 2

Published at 2022-01-01T23:36:15+00:00; Updated at 2022-01-05

This is the second post in my Bash Golf series, a collection of random Bash tips, tricks and weirdnesses I came across. It consists of smaller articles I wrote in an older (German-language) blog, which I translated and refreshed with some new content.

2021-11-29 Bash Golf Part 1

2022-01-01 Bash Golf Part 2 (You are currently reading this)

2023-12-10 Bash Golf Part 3

2025-09-14 Bash Golf Part 4

Table of Contents

Redirection

HERE

RANDOM

set -x and set -e and pipefail

Redirection

Let's have a closer look at Bash redirection. As you might already know, there are three standard file descriptors:

* 0 - stdin (standard input)
* 1 - stdout (standard output)
* 2 - stderr (standard error)

These are most certainly the ones you are using on a regular basis. "/proc/self/fd" lists all file descriptors open in the current process (in this case: the current Bash shell itself):
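For example:

```shell
# List the open file descriptors. Strictly speaking, "self" here is the
# "ls" process, but it inherits 0, 1 and 2 from the shell that started it.
ls -l /proc/self/fd
```

In an interactive shell, the descriptors 0, 1 and 2 will typically all point at the same terminal (tty) device.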

The following examples demonstrate two different ways to accomplish the same thing. The difference is that the first command prints "Foo" directly to stdout, while the second command explicitly redirects stdout to its own stdout file descriptor:
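Along these lines:

```shell
echo Foo                    # writes "Foo" to stdout directly
echo Foo > /proc/self/fd/1  # redirects stdout to its own stdout descriptor
```

Both commands print "Foo".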

Update: A reader pointed out that the redirection should actually go to `/proc/self/fd/1` and not `0`. But apparently, either way works for this particular example. Do you know why?

Other useful redirections are:
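A few common ones, sketched with made-up file names:

```shell
echo Foo >  out.txt    # stdout to a file (truncating it first)
echo Foo >> out.txt    # stdout to a file (appending)
ls /nope 2> err.txt    # stderr to a file
ls /nope &> all.txt    # stdout and stderr to a file (Bash shorthand)
wc -l    <  out.txt    # stdin from a file
ls /nope 2>&1 | wc -l  # stderr into the pipe, via stdout
```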

It is, however, not possible to redirect multiple times within the same command. E.g. the following won't work: you would expect stdout to be redirected to stderr and then stderr to be redirected to /dev/null. But as the example shows, Foo is still printed out:
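Like this:

```shell
echo Foo 1>&2 2>/dev/null
# "Foo" is still printed: "1>&2" duplicates stdout to where stderr pointed
# at that moment (the terminal), and only afterwards is stderr itself
# pointed at /dev/null.
```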

Update: A reader sent me an email and pointed out that the order of the redirections is important.

As you can see, the following will not print out anything:
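With the redirections swapped around:

```shell
echo Foo 2>/dev/null 1>&2
# Prints nothing: stderr is pointed at /dev/null first, and then stdout
# is duplicated to stderr, i.e. also to /dev/null.
```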

A good description (also pointed out by the reader) can be found here:

Order of redirection

Ok, back to the original blog post. You can also use grouping here (neither of these commands will print out anything to stdout):
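For example:

```shell
( echo Foo 1>&2 ) 2>/dev/null    # subshell group
{ echo Foo 1>&2; } 2>/dev/null   # brace group (runs in the current shell)
# In both cases the group's stderr is redirected to /dev/null first, so
# the inner "1>&2" sends stdout there as well.
```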

A handy way to list all open file descriptors is to use the "lsof" command (that's not a Bash built-in), whereas $$ is the process id (pid) of the current shell process:
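Like this:

```shell
lsof -p $$   # "$$" expands to the PID of the current shell process
```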

Let's create our own descriptor "3" for redirection to a file named "foo":
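This can be done with "exec":

```shell
exec 3> foo        # open fd 3 for writing to the file "foo"
echo Hello >&3     # write to fd 3
exec 3>&-          # close fd 3 again
cat foo            # → Hello
```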

You can also override the default file descriptors, as the following example script demonstrates:
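The script could look like this (a sketch; the log file names are my own choice):

```shell
#!/usr/bin/env bash
# Override the default descriptors 1 (stdout) and 2 (stderr) for the
# remainder of the script:
exec 1>stdout.log 2>stderr.log
echo "This goes to stdout.log"
echo "This goes to stderr.log" >&2
```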

Let's execute it:
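Assuming a script like the one described (recreated here in a temp directory so the example is self-contained; the file names are my own choice):

```shell
cd "$(mktemp -d)"
cat > fd-demo.sh <<'SCRIPT'
exec 1>stdout.log 2>stderr.log
echo "This goes to stdout.log"
echo "This goes to stderr.log" >&2
SCRIPT
bash fd-demo.sh    # prints nothing to the terminal
cat stdout.log     # → This goes to stdout.log
cat stderr.log     # → This goes to stderr.log
```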

HERE

I have mentioned HERE-documents and HERE-strings already in this post. Let's do some more examples. The following "cat" receives a multi-line string from stdin. In this case, the input multi-line string is a HERE-document. As you can see, it also interpolates variables (in this case the output of "date" running in a subshell).
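For example (the exact wording of the text is my own):

```shell
cat <<EOF
Hello World!
The current date is: $(date)
Bye.
EOF
```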

You can also write it the following way, but that's less readable (it's good for an obfuscation contest):
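One possible less-readable variant (my guess at the original example) puts the redirection before the command:

```shell
<<EOF cat
Hello World!
The current date is: $(date)
Bye.
EOF
```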

Besides the HERE-document, there is also the so-called HERE-string. Instead of...
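...piping a string into a command like this:

```shell
echo "foo" | grep o   # prints "foo", as "o" matches
```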

...you can use a HERE-string like that:
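The HERE-string feeds the string to stdin directly:

```shell
grep o <<< "foo"   # prints "foo", as "o" matches
```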

Or even shorter, you can do:
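For example by dropping the quotes and the whitespace (my reconstruction of the original example):

```shell
grep o <<<foo   # prints "foo"
```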

You can also use a Bash regex to accomplish the same thing, but the points of the examples so far were to demonstrate HERE-{documents,strings} and not Bash regular expressions:
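A sketch with the "=~" operator:

```shell
[[ foo =~ o ]] && echo foo   # prints "foo", as the regex "o" matches
```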

You can also use it with "read":
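For example:

```shell
read -r a b c <<< "1 2 3"
echo "$b"   # → 2
```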

The following is good for an obfuscation contest too:
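For instance, with the HERE-string in front of "read" (my guess at the original example):

```shell
<<<"1 2 3" read -r a b c; echo "$c"   # → 3
```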

RANDOM

"RANDOM" is a special built-in variable containing a different pseudo-random number each time it is read.
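For example:

```shell
echo $RANDOM   # a pseudo-random integer between 0 and 32767
echo $RANDOM   # most likely a different one
```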

That's very useful if you want to randomly delay the execution of your scripts when you run them on many servers concurrently, just to spread the server load (which might be caused by the script runs) better.

Let's say you want to introduce a random delay of up to 1 minute. You can accomplish it with:
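The modulo operator caps the value:

```shell
sleep $(( RANDOM % 60 ))   # sleep between 0 and 59 seconds
```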

set -x and set -e and pipefail

In my opinion, -x, -e and pipefail are the most useful Bash options. Let's have a look at them one after another.

-x

-x prints commands and their arguments as they are executed. This helps to develop and debug your Bash code:
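For example (the variable name is made up):

```shell
set -x                # from now on, print every command before executing it
greeting="Hello"
echo "$greeting"      # the trace line "+ echo Hello" goes to stderr
set +x                # switch tracing off again
```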

You can also set it when calling an external script without modifying the script itself:
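For instance (the script path is made up for the example):

```shell
echo 'echo "Hello World"' > /tmp/hello.sh
bash -x /tmp/hello.sh
# + echo 'Hello World'
# Hello World
```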

Let's do that on one of the example scripts we covered earlier:
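Here I recreate the fd-override example script in a temp directory so the demo is self-contained (file names are my own choice). Note that once the script redirects fd 2, the remaining trace lines end up in stderr.log, because "set -x" writes its trace to stderr:

```shell
cd "$(mktemp -d)"
cat > fd-demo.sh <<'SCRIPT'
exec 1>stdout.log 2>stderr.log
echo "This goes to stdout.log"
echo "This goes to stderr.log" >&2
SCRIPT
bash -x fd-demo.sh
cat stderr.log   # contains the trace lines plus the stderr message
```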

-e

This is a very important option you want to use when you are paranoid. By that I mean: you should always "set -e" in your scripts when you need to make absolutely sure that your script runs successfully (i.e. that no command exits with an unexpected status code).

Ok, let's dig deeper:

As you can see in the following example, Bash terminates after the execution of "grep", as "foo" does not match "bar". Therefore, grep exits with 1 (unsuccessfully) and the shell aborts. As a result, "bar" will not be printed out anymore:

Whereas the outcome changes when the regex matches:
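With a matching pattern:

```shell
set -e
grep foo <<< foo   # prints "foo" and exits with 0
echo bar           # → bar
```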

So does that mean that grep will always make the shell terminate whenever its exit code isn't 0? That would render "set -e" quite unusable. And there are plenty of other commands where an exit status other than 0 should not terminate the whole script abruptly. Usually, what you want is to branch your code based on the outcome (exit code) of a command:

...but the example above won't reach any of the branches and won't print out anything, as the script terminates right after grep.

The proper solution is to use grep as an expression in a conditional (e.g. in an if-else statement):
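Commands used as the condition of an "if" do not trigger "set -e", so this works:

```shell
set -e
if grep -q foo <<< bar; then   # grep may "fail" here without killing the script
    echo "Found a match"
else
    echo "No match found"      # → No match found
fi
```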

You can also temporarily undo "set -e" if there is no other way:
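A sketch with a function using "shift" (function and variable names are my own; without the "set +e", the bare "foo" call would terminate the script):

```shell
#!/usr/bin/env bash
set -e

foo () {
    local first=$1
    shift               # fails with a non-zero status if no argument was given
    echo "first: $first"
}

foo bar     # fine: prints "first: bar"
set +e      # temporarily undo "set -e"
foo         # "shift" fails here, but the script carries on
set -e      # re-enable it
echo "done"
```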

Why does calling "foo" with no arguments make the script terminate? Because no argument was given, the argument list $@ is empty, so "shift" has nothing to shift and fails with a non-zero status.

Why would you want to use "shift" after function-local variable assignments? Have a look at my personal Bash coding style guide for an explanation :-):

./2021-05-16-personal-bash-coding-style-guide.gmi

pipefail

With the pipefail option set, the exit code of a pipe is no longer determined solely by the last command of the pipe; if any command in the pipe fails, the whole pipe fails:

The following greps for paul in passwd and converts all lowercase letters to uppercase letters. The exit code of the pipe is 0, as the last command of the pipe (converting from lowercase to uppercase) succeeded:
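For example (assuming a user "paul" exists, as on the author's machine):

```shell
grep paul /etc/passwd | tr a-z A-Z
echo $?   # → 0, the exit code of the last command ("tr") in the pipe
```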

Let's look at another example, where "TheRock" doesn't exist in the passwd file. However, the pipe's exit status is still 0 (success). This is because the last command ("tr" in this case) still succeeded. It just didn't get any input on stdin to process:
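Like this:

```shell
grep TheRock /etc/passwd | tr a-z A-Z   # no such user: grep exits with 1
echo $?   # → 0, because "tr" (the last command in the pipe) still succeeded
```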

To change this behaviour, pipefail can be used. Now, the pipe's exit status is 1 (fail), because the pipe contains at least one command (in this case grep) which exited with status 1:
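With the option enabled:

```shell
set -o pipefail
grep TheRock /etc/passwd | tr a-z A-Z
echo $?   # → 1, because "grep" in the pipe exited with 1
```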

E-Mail your comments to `paul@nospam.buetow.org` :-)

Other related posts are:

2025-09-14 Bash Golf Part 4

2023-12-10 Bash Golf Part 3

2022-01-01 Bash Golf Part 2 (You are currently reading this)

2021-11-29 Bash Golf Part 1

2021-06-05 Gemtexter - One Bash script to rule it all

2021-05-16 Personal Bash coding style guide

Back to the main site