The Linux Command Line---Errors And Signals And Traps (Oh My!) - Part 2

The Linux Command Line by William Shotts


Errors are not the only way that a script can terminate unexpectedly. You also have to be concerned with signals. Consider the following program:
#!/bin/bash

echo "this script will endlessly loop until you stop it"
while true; do
 : # Do nothing
done
After you launch this script, it will appear to hang. Actually, like most programs that appear to hang, it is really stuck inside a loop. In this case, it is waiting for the true command to return a non-zero exit status, which it never does. Once started, the script will continue until bash receives a signal that stops it. You can send such a signal by typing Ctrl-c, which sends the signal called SIGINT (short for SIGnal INTerrupt).
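By the way, Ctrl-c is just a convenient way to deliver SIGINT from the keyboard; the same signal can be sent from another terminal with the kill command. A small sketch, assuming the looping script's process id is 1234 (a made-up number; use ps to find the real one):

# Send SIGINT to process 1234
kill -SIGINT 1234

# The same signal, specified by its number (SIGINT is signal 2)
kill -2 1234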

Cleaning Up After Yourself

Okay, so a signal can come along and make your script terminate. Why does this matter? In many cases you can safely ignore signals, but in some cases it will matter a great deal.
Let's take a look at another script:
#!/bin/bash

# Program to print a text file with headers and footers

TEMP_FILE=/tmp/printfile.txt

pr $1 > $TEMP_FILE

echo -n "Print file? [y/n]: "
read
if [ "$REPLY" = "y" ]; then
 lpr $TEMP_FILE
fi
This script processes a text file specified on the command line with the pr command and stores the result in a temporary file. Next, it asks the user if they want to print the file. If the user types "y", then the temporary file is passed to the lpr program for printing (you may substitute less for lpr if you don't actually have a printer attached to your system.)
Now, I admit this script has a lot of design problems. While it needs a file name passed on the command line, it doesn't check that it got one, and it doesn't check that the file actually exists. But the problem I want to focus on here is the fact that when the script terminates, it leaves behind the temporary file.
Good practice would dictate that we delete the temporary file $TEMP_FILE when the script terminates. This is easily accomplished by adding the following to the end of the script:
rm $TEMP_FILE
This would seem to solve the problem, but what happens if the user types Ctrl-c when the "Print file? [y/n]:" prompt appears? The script will terminate at the read command and the rm command will never be executed. Clearly, we need a way to respond to signals such as SIGINT when the Ctrl-c key is typed.
Fortunately, bash provides a method to perform commands if and when signals are received.

trap

The trap command allows you to execute a command when a signal is received by your script. It works like this:
trap arg signals
"signals" is a list of signals to intercept and "arg" is a command to execute when one of the signals is received. For our printing script, we might handle the signal problem this way:
#!/bin/bash

# Program to print a text file with headers and footers

TEMP_FILE=/tmp/printfile.txt

trap "rm $TEMP_FILE; exit" SIGHUP SIGINT SIGTERM

pr $1 > $TEMP_FILE

echo -n "Print file? [y/n]: "
read
if [ "$REPLY" = "y" ]; then
 lpr $TEMP_FILE
fi
rm $TEMP_FILE
Here we have added a trap command that will execute "rm $TEMP_FILE; exit" if any of the listed signals is received. The three signals listed are the most common ones you will encounter, but there are many more that can be specified. For a complete list, type "trap -l". In addition to listing the signals by name, you may alternatively specify them by number.
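For example, since SIGINT is signal number 2, the following two trap commands are equivalent (a small illustration, not a change to the printing script):

trap "rm $TEMP_FILE; exit" SIGINT
trap "rm $TEMP_FILE; exit" 2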

A clean_up Function

While the trap command has solved the problem, we can see that it has some limitations. Most importantly, it will only accept a single string containing the command to be performed when the signal is received. You could get clever and use ";" and put multiple commands in the string to get more complex behavior, but frankly, it's ugly. A better way would be to create a function that is called when you want to perform any actions at the end of your script. In my scripts, I call this function clean_up.
#!/bin/bash

# Program to print a text file with headers and footers

TEMP_FILE=/tmp/printfile.txt

clean_up() {

 # Perform program exit housekeeping
 rm $TEMP_FILE
 exit
}

trap clean_up SIGHUP SIGINT SIGTERM

pr $1 > $TEMP_FILE

echo -n "Print file? [y/n]: "
read
if [ "$REPLY" = "y" ]; then
 lpr $TEMP_FILE
fi
clean_up
The use of a clean_up function is a good idea for your error handling routines too. After all, when your program terminates (for whatever reason), you should clean up after yourself. Here is the finished version of our program with improved error and signal handling:
#!/bin/bash

# Program to print a text file with headers and footers

# Usage: printfile file

# Create a temporary file name that gives preference
# to the user's local tmp directory and has a name
# that is resistant to "temp race attacks"

if [ -d "~/tmp" ]; then
 TEMP_DIR=~/tmp
else
 TEMP_DIR=/tmp
fi
TEMP_FILE=$TEMP_DIR/printfile.$$.$RANDOM
PROGNAME=$(basename $0)

usage() {

 # Display usage message on standard error
 echo "Usage: $PROGNAME file" 1>&2
}

clean_up() {

 # Perform program exit housekeeping
 # Optionally accepts an exit status
 rm -f $TEMP_FILE
 exit $1
}

error_exit() {

 # Display error message and exit
 echo "${PROGNAME}: ${1:-"Unknown Error"}" 1>&2
 clean_up 1
}

trap clean_up SIGHUP SIGINT SIGTERM

if [ $# != "1" ]; then
 usage
 error_exit "one file to print must be specified"
fi
if [ ! -f "$1" ]; then
 error_exit "file $1 cannot be read"
fi

pr $1 > $TEMP_FILE || error_exit "cannot format file"

echo -n "Print file? [y/n]: "
read
if [ "$REPLY" = "y" ]; then
 lpr $TEMP_FILE || error_exit "cannot print file"
fi
clean_up

Creating Safe Temporary Files

In the program above, there are a number of steps taken to help secure the temporary file used by this script. It is a Unix tradition to use a directory called /tmp to place temporary files used by programs. Everyone may write files into this directory. This naturally leads to some security concerns. If possible, avoid writing files in the /tmp directory. The preferred technique is to write them in a local directory such as ~/tmp (a tmp subdirectory in the user's home directory). If you must write files in /tmp, you must take steps to make sure the file names are not predictable. Predictable file names allow an attacker to create symbolic links to other files that the attacker wants you to overwrite.
A good file name will help you figure out what wrote the file, but will not be entirely predictable. In the script above, the following line of code created the temporary file $TEMP_FILE:
TEMP_FILE=$TEMP_DIR/printfile.$$.$RANDOM
The $TEMP_DIR variable contains either /tmp or ~/tmp depending on the availability of the directory. It is common practice to embed the name of the program into the file name. We have done that with the string "printfile". Next, we use the $$ shell variable to embed the process id (pid) of the program. This further helps identify what process is responsible for the file. Surprisingly, the process id alone is not unpredictable enough to make the file safe, so we add the $RANDOM shell variable to append a random number to the file name. With this technique, we create a file name that is both easily identifiable and unpredictable.
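On many systems you can also let the mktemp command build the unpredictable part of the name (and create the file) for you. A minimal sketch of how the assignment might look using mktemp, where the trailing X characters are replaced with random characters:

# Create a unique, unpredictable temporary file in $TEMP_DIR
TEMP_FILE=$(mktemp "$TEMP_DIR/printfile.$$.XXXXXX") || error_exit "cannot create temporary file"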

There You Have It

This concludes the LinuxCommand.org tutorials. I sincerely hope you found them both useful and enjoyable. If you did, continue your command line adventure by downloading my book.

The Linux Command Line---Errors And Signals And Traps (Oh My!) - Part 1

The Linux Command Line by William Shotts


In this lesson, we're going to look at handling errors during the execution of your scripts.
The difference between a good program and a poor one is often measured in terms of the program's robustness. That is, the program's ability to handle situations in which something goes wrong.

Exit Status

As you recall from previous lessons, every well-written program returns an exit status when it finishes. If a program finishes successfully, the exit status will be zero. If the exit status is anything other than zero, then the program failed in some way.
It is very important to check the exit status of programs you call in your scripts. It is also important that your scripts return a meaningful exit status when they finish. I once had a Unix system administrator who wrote a script for a production system containing the following 2 lines of code:
# Example of a really bad idea

cd $some_directory
rm *
Why is this such a bad way of doing it? It's not, if nothing goes wrong. The two lines change the working directory to the name contained in $some_directory and delete the files in that directory. That's the intended behavior. But what happens if the directory named in $some_directory doesn't exist? In that case, the cd command will fail and the script executes the rm command on the current working directory. Not the intended behavior!
By the way, my hapless system administrator's script suffered this very failure and it destroyed a large portion of an important production system. Don't let this happen to you!
The problem with the script was that it did not check the exit status of the cd command before proceeding with the rm command.

Checking The Exit Status

There are several ways you can get and respond to the exit status of a program. First, you can examine the contents of the $? environment variable. $? will contain the exit status of the last command executed. You can see this work with the following:
[me] $ true; echo $?
0
[me] $ false; echo $?
1
The true and false commands are programs that do nothing except return an exit status of zero and one, respectively. Using them, we can see how the $? environment variable contains the exit status of the previous program.
So to check the exit status, we could write the script this way:
# Check the exit status

cd $some_directory
if [ "$?" = "0" ]; then
 rm *
else
 echo "Cannot change directory!" 1>&2
 exit 1
fi
In this version, we examine the exit status of the cd command and if it's not zero, we print an error message on standard error and terminate the script with an exit status of 1.
While this is a working solution to the problem, there are more clever methods that will save us some typing. The next approach we can try is to use the if statement directly, since it evaluates the exit status of commands it is given.
Using if, we could write it this way:
# A better way

if cd $some_directory; then
 rm *
else
 echo "Could not change directory! Aborting." 1>&2
 exit 1
fi
Here we check to see if the cd command is successful. Only then does rm get executed; otherwise an error message is output and the program exits with a code of 1, indicating that an error has occurred.

An Error Exit Function

Since we will be checking for errors often in our programs, it makes sense to write a function that will display error messages. This will save more typing and promote laziness.
# An error exit function

error_exit()
{
 echo "$1" 1>&2
 exit 1
}

# Using error_exit

if cd $some_directory; then
 rm *
else
 error_exit "Cannot change directory!  Aborting."
fi

AND And OR Lists

Finally, we can further simplify our script by using the AND and OR control operators. To explain how they work, I will quote from the bash man page:
"The control operators && and || denote AND lists and OR lists, respectively. An AND list has the form
command1 && command2
command2 is executed if, and only if, command1 returns an exit status of zero.
An OR list has the form
command1 || command2
command2 is executed if, and only if, command1 returns a non-zero exit status. The exit status of AND and OR lists is the exit status of the last command executed in the list."
Again, we can use the true and false commands to see this work:
[me] $ true || echo "echo executed"
[me] $ false || echo "echo executed"
echo executed
[me] $ true && echo "echo executed"
echo executed
[me] $ false && echo "echo executed"
[me] $
Using this technique, we can write an even simpler version:
# Simplest of all

cd $some_directory || error_exit "Cannot change directory! Aborting"
rm *
If an exit is not required in case of error, then you can even do this:
# Another way to do it if exiting is not desired

cd $some_directory && rm *
I want to point out that even with the defenses against errors we have introduced in our use of cd, this code is still vulnerable to a common programming error, namely, what happens if the name of the variable containing the name of the directory is misspelled? In that case, the shell will treat the variable as empty and the cd command will succeed, but it will change to the user's home directory, so beware!
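One way to guard against this particular mistake (a sketch, not part of the example above) is the ${parameter:?word} form of parameter expansion, which makes the script exit with an error message if the variable is unset or empty:

# Abort with an error if some_directory is unset or empty
cd "${some_directory:?directory name not set}" || error_exit "Cannot change directory! Aborting"
rm *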

Improving The Error Exit Function

There are a number of improvements that we can make to the error_exit function. I like to include the name of the program in the error message to make clear where the error is coming from. This becomes more important as your programs get more complex and you start having scripts launching other scripts, etc. Also, note the inclusion of the LINENO environment variable which will help you identify the exact line within your script where the error occurred.
#!/bin/bash

# A slicker error handling routine

# I put a variable in my scripts named PROGNAME which
# holds the name of the program being run.  You can get this
# value from the first item on the command line ($0).

PROGNAME=$(basename $0)

error_exit()
{

# ----------------------------------------------------------------
# Function for exit due to fatal program error
#  Accepts 1 argument:
#   string containing descriptive error message
# ----------------------------------------------------------------


 echo "${PROGNAME}: ${1:-"Unknown Error"}" 1>&2
 exit 1
}

# Example call of the error_exit function.  Note the inclusion
# of the LINENO environment variable.  It contains the current
# line number.

echo "Example of error with line number and message"
error_exit "$LINENO: An error has occurred."
The use of the curly braces within the error_exit function is an example of parameter expansion. You can surround a variable name with curly braces (as with ${PROGNAME}) if you need to be sure it is separated from surrounding text. Some people just put them around every variable out of habit; that usage is simply a matter of style. The second form, ${1:-"Unknown Error"}, means that if parameter 1 ($1) is undefined, substitute the string "Unknown Error" in its place. Using parameter expansion, it is possible to perform a number of useful string manipulations. You can read more about parameter expansion in the bash man page under the topic "EXPANSION".
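You can see the ${parameter:-word} form at work directly at the prompt:

[me] $ unset message
[me] $ echo "${message:-Unknown Error}"
Unknown Error
[me] $ message="Disk full"
[me] $ echo "${message:-Unknown Error}"
Disk full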

The Linux Command Line---Flow Control - Part 3

The Linux Command Line by William Shotts


Now that you have learned about positional parameters, it is time to cover the remaining flow control statement, for. Like while and until, for is used to construct loops. for works like this:
for variable in words; do
    commands
done
     
In essence, for assigns a word from the list of words to the specified variable, executes the commands, and repeats this over and over until all the words have been used up. Here is an example:
#!/bin/bash

for i in word1 word2 word3; do
    echo $i
done
     
In this example, the variable i is assigned the string "word1", then the statement echo $i is executed, then the variable i is assigned the string "word2", and the statement echo $i is executed, and so on, until all the words in the list of words have been assigned.
The interesting thing about for is the many ways you can construct the list of words. All kinds of expansions can be used. In the next example, we will construct the list of words using command substitution:
#!/bin/bash

count=0
for i in $(cat ~/.bash_profile); do
    count=$((count + 1))
    echo "Word $count ($i) contains $(echo -n $i | wc -c) characters"
done
Here we take the file .bash_profile and count the number of words in the file and the number of characters in each word.
So what's this got to do with positional parameters? Well, one of the features of for is that it can use the positional parameters as the list of words:
#!/bin/bash

for i in "$@"; do
    echo $i
done
The shell variable "$@" contains the list of command line arguments. This technique is often used to process a list of files on the command line. Here is another example:
#!/bin/bash

for filename in "$@"; do
    result=
    if [ -f "$filename" ]; then
        result="$filename is a regular file"
    else
        if [ -d "$filename" ]; then
            result="$filename is a directory"
        fi
    fi
    if [ -w "$filename" ]; then
        result="$result and it is writable"
    else
        result="$result and it is not writable"
    fi
    echo "$result"
done
Try this script. Give it a list of files or a wildcard like "*" to see it work.
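For example, if you saved the script above as file_info (a name chosen here just for illustration) and made it executable, a run in a directory containing the script itself and a subdirectory might produce output along these lines:

[me] $ ./file_info *
file_info is a regular file and it is writable
projects is a directory and it is writable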
Here is another example script. This one compares the files in two directories and lists which files in the first directory are missing from the second.
#!/bin/bash

# cmp_dir - program to compare two directories

# Check for required arguments
if [ $# -ne 2 ]; then
    echo "usage: $0 directory_1 directory_2" 1>&2
    exit 1
fi

# Make sure both arguments are directories
if [ ! -d $1 ]; then
    echo "$1 is not a directory!" 1>&2
    exit 1
fi

if [ ! -d $2 ]; then
    echo "$2 is not a directory!" 1>&2
    exit 1
fi

# Process each file in directory_1, comparing it to directory_2
missing=0
for filename in $1/*; do
    fn=$(basename "$filename")
    if [ -f "$filename" ]; then
        if [ ! -f "$2/$fn" ]; then
            echo "$fn is missing from $2"
            missing=$((missing + 1))
        fi
    fi
done
echo "$missing files missing"
Now on to the real work. We are going to improve the home_space function in our script to output more information. You will recall that our previous version looked like this:
home_space()
{
    # Only the superuser can get this information

    if [ "$(id -u)" = "0" ]; then
    echo "<h2>Home directory space by user</h2>"
    echo "<pre>"
    echo "Bytes Directory"
        du -s /home/* | sort -nr
    echo "</pre>"
    fi

}   # end of home_space
     
Here is the new version:
home_space()
{
    echo "<h2>Home directory space by user</h2>"
    echo "<pre>"
    format="%8s%10s%10s   %-s\n"
    printf "$format" "Dirs" "Files" "Blocks" "Directory"
    printf "$format" "----" "-----" "------" "---------"
    if [ $(id -u) = "0" ]; then
        dir_list="/home/*"
    else
        dir_list=$HOME
    fi
    for home_dir in $dir_list; do
        total_dirs=$(find $home_dir -type d | wc -l)
        total_files=$(find $home_dir -type f | wc -l)
        total_blocks=$(du -s $home_dir)
        printf "$format" $total_dirs $total_files $total_blocks
    done
    echo "</pre>"

}   # end of home_space
This improved version introduces a new command, printf, which is used to produce formatted output according to the contents of a format string. printf comes from the C programming language and has been implemented in many other programming languages including C++, Perl, awk, Java, PHP, and, of course, bash. You can read more about printf format strings in the printf man page.
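To get a feel for how the format string controls the output, you can experiment with printf at the prompt. With the %8s and %10s conversions used above, each string is right-justified in a field of the given width:

[me] $ printf "%8s%10s\n" "Dirs" "Files"
    Dirs     Files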
We also introduce the find command. find is used to search for files or directories that meet specific criteria. In the home_space function, we use find to list the directories and regular files in each home directory. Using the wc command, we count the number of files and directories found.
The really interesting thing about home_space is how we deal with the problem of superuser access. You will notice that we test for the superuser with id and, according to the outcome of the test, we assign different strings to the variable dir_list, which becomes the list of words for the for loop that follows. This way, if an ordinary user runs the script, only his/her home directory will be listed.
Another function that can use a for loop is our unfinished system_info function. We can build it like this:
system_info()
{
    # Find any release files in /etc

    if ls /etc/*release 1>/dev/null 2>&1; then
        echo "<h2>System release info</h2>"
        echo "<pre>"
        for i in /etc/*release; do

            # Since we can't be sure of the
            # length of the file, only
            # display the first line.

            head -n 1 $i
        done
        uname -orp
        echo "</pre>"
    fi

}   # end of system_info
In this function, we first determine if there are any release files to process. The release files contain the name of the vendor and the version of the distribution. They are located in the /etc directory. To detect them, we perform an ls command and throw away all of its output. We are only interested in the exit status. It will be true if any files are found.
Next, we output the HTML for this section of the page, since we now know that there are release files to process. To process the files, we start a for loop to act on each one. Inside the loop, we use the head command to return the first line of each file.
Finally, we use the uname command with the "o", "r", and "p" options to obtain some additional information from the system.

The Linux Command Line---Positional Parameters

The Linux Command Line by William Shotts


When we last left our script, it looked something like this:
#!/bin/bash

# sysinfo_page - A script to produce a system information HTML file

##### Constants

TITLE="System Information for $HOSTNAME"
RIGHT_NOW=$(date +"%x %r %Z")
TIME_STAMP="Updated on $RIGHT_NOW by $USER"

##### Functions

system_info()
{
    echo "<h2>System release info</h2>"
    echo "<p>Function not yet implemented</p>"

}   # end of system_info


show_uptime()
{
    echo "<h2>System uptime</h2>"
    echo "<pre>"
    uptime
    echo "</pre>"

}   # end of show_uptime


drive_space()
{
    echo "<h2>Filesystem space</h2>"
    echo "<pre>"
    df
    echo "</pre>"

}   # end of drive_space


home_space()
{
    # Only the superuser can get this information

    if [ "$(id -u)" = "0" ]; then
        echo "<h2>Home directory space by user</h2>"
        echo "<pre>"
        echo "Bytes Directory"
        du -s /home/* | sort -nr
        echo "</pre>"
    fi

}   # end of home_space



##### Main

cat <<- _EOF_
  <html>
  <head>
      <title>$TITLE</title>
  </head>
  <body>
      <h1>$TITLE</h1>
      <p>$TIME_STAMP</p>
      $(system_info)
      $(show_uptime)
      $(drive_space)
      $(home_space)
  </body>
  </html>
_EOF_

We have most things working, but there are several more features I want to add:
  1. I want to specify the name of the output file on the command line, as well as set a default output file name if no name is specified.
  2. I want to offer an interactive mode that will prompt for a file name and warn the user if the file exists and prompt the user to overwrite it.
  3. Naturally, we want to have a help option that will display a usage message.
All of these features involve using command line options and arguments. To handle options on the command line, we use a facility in the shell called positional parameters. Positional parameters are a series of special variables ($0 through $9) that contain the contents of the command line.
Let's imagine the following command line:
[me@linuxbox me]$ some_program word1 word2 word3
If some_program were a bash shell script, we could read each item on the command line because the positional parameters contain the following:
  • $0 would contain "some_program"
  • $1 would contain "word1"
  • $2 would contain "word2"
  • $3 would contain "word3"
Here is a script you can use to try this out:
#!/bin/bash

echo "Positional Parameters"
echo '$0 = ' $0
echo '$1 = ' $1
echo '$2 = ' $2
echo '$3 = ' $3
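If you save this script as posit_param (a name chosen here just for illustration), make it executable, and run it with three arguments, you should see something like this:

[me] $ ./posit_param word1 word2 word3
Positional Parameters
$0 =  ./posit_param
$1 =  word1
$2 =  word2
$3 =  word3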

Detecting Command Line Arguments

Often, you will want to check to see if you have arguments on which to act. There are a couple of ways to do this. First, you could simply check to see if $1 contains anything, like so:
#!/bin/bash

if [ "$1" != "" ]; then
    echo "Positional parameter 1 contains something"
else
    echo "Positional parameter 1 is empty"
fi

Second, the shell maintains a variable called $# that contains the number of items on the command line, not counting the name of the command ($0).
#!/bin/bash

if [ $# -gt 0 ]; then
    echo "Your command line contains $# arguments"
else
    echo "Your command line contains no arguments"
fi
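For example, assuming the script above is saved as check_args (again, a hypothetical name):

[me] $ ./check_args word1 word2 word3
Your command line contains 3 arguments
[me] $ ./check_args
Your command line contains no arguments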

Command Line Options

As we discussed before, many programs, particularly ones from the GNU Project, support both short and long command line options. For example, to display a help message for many of these programs, you may use either the "-h" option or the longer "--help" option. Long option names are typically preceded by a double dash. We will adopt this convention for our scripts.
Here is the code we will use to process our command line:
interactive=
filename=~/sysinfo_page.html

while [ "$1" != "" ]; do
    case $1 in
        -f | --file )           shift
                                filename=$1
                                ;;
        -i | --interactive )    interactive=1
                                ;;
        -h | --help )           usage
                                exit
                                ;;
        * )                     usage
                                exit 1
    esac
    shift
done

This code is a little tricky, so bear with me as I attempt to explain it.
The first two lines are pretty easy. We set the variable interactive to be empty. This will indicate that the interactive mode has not been requested. Then we set the variable filename to contain a default file name. If nothing else is specified on the command line, this file name will be used.
After these two variables are set, we have default settings, in case the user does not specify any options.
Next, we construct a while loop that will cycle through all the items on the command line and process each one with case. The case will detect each possible option and process it accordingly.
Now the tricky part. How does that loop work? It relies on the magic of shift.
shift is a shell builtin that operates on the positional parameters. Each time you invoke shift, it "shifts" all the positional parameters down by one. $2 becomes $1, $3 becomes $2, $4 becomes $3, and so on. Try this:
#!/bin/bash

echo "You start with $# positional parameters"

# Loop until all parameters are used up
while [ "$1" != "" ]; do
    echo "Parameter 1 equals $1"
    echo "You now have $# positional parameters"

    # Shift all the parameters down by one
    shift

done
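Running this script (saved here as shift_demo, a made-up name) with three arguments shows the parameters moving down with each shift. Note that the count is printed just before each shift takes effect:

[me] $ ./shift_demo a b c
You start with 3 positional parameters
Parameter 1 equals a
You now have 3 positional parameters
Parameter 1 equals b
You now have 2 positional parameters
Parameter 1 equals c
You now have 1 positional parameters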

Getting An Option's Argument

Our "-f" option requires a valid file name as an argument. We use shift again to get the next item from the command line and assign it to filename. Later we will have to check the content of filename to make sure it is valid.

Integrating The Command Line Processor Into The Script

We will have to move a few things around and add a usage function to get this new routine integrated into our script. We'll also add some test code to verify that the command line processor is working correctly. Our script now looks like this:
#!/bin/bash

# sysinfo_page - A script to produce a system information HTML file

##### Constants

TITLE="System Information for $HOSTNAME"
RIGHT_NOW=$(date +"%x %r %Z")
TIME_STAMP="Updated on $RIGHT_NOW by $USER"

##### Functions

system_info()
{
    echo "<h2>System release info</h2>"
    echo "<p>Function not yet implemented</p>"

}   # end of system_info


show_uptime()
{
    echo "<h2>System uptime</h2>"
    echo "<pre>"
    uptime
    echo "</pre>"

}   # end of show_uptime


drive_space()
{
    echo "<h2>Filesystem space</h2>"
    echo "<pre>"
    df
    echo "</pre>"

}   # end of drive_space


home_space()
{
    # Only the superuser can get this information

    if [ "$(id -u)" = "0" ]; then
        echo "<h2>Home directory space by user</h2>"
        echo "<pre>"
        echo "Bytes Directory"
        du -s /home/* | sort -nr
        echo "</pre>"
    fi

}   # end of home_space


write_page()
{
    cat <<- _EOF_
    <html>
        <head>
        <title>$TITLE</title>
        </head>
        <body>
        <h1>$TITLE</h1>
        <p>$TIME_STAMP</p>
        $(system_info)
        $(show_uptime)
        $(drive_space)
        $(home_space)
        </body>
    </html>
_EOF_

}

usage()
{
    echo "usage: sysinfo_page [[[-f file ] [-i]] | [-h]]"
}


##### Main

interactive=
filename=~/sysinfo_page.html

while [ "$1" != "" ]; do
    case $1 in
        -f | --file )           shift
                                filename=$1
                                ;;
        -i | --interactive )    interactive=1
                                ;;
        -h | --help )           usage
                                exit
                                ;;
        * )                     usage
                                exit 1
    esac
    shift
done


# Test code to verify command line processing

if [ "$interactive" = "1" ]; then
 echo "interactive is on"
else
 echo "interactive is off"
fi
echo "output file = $filename"


# Write page (comment out until testing is complete)

# write_page > $filename
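With the test code in place, you can try the command line processor from the shell. A hypothetical session, assuming your home directory is /home/me:

[me] $ ./sysinfo_page -f ~/test_page.html -i
interactive is on
output file = /home/me/test_page.html
[me] $ ./sysinfo_page
interactive is off
output file = /home/me/sysinfo_page.html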

Adding Interactive Mode

The interactive mode is implemented with the following code:
if [ "$interactive" = "1" ]; then

    response=

    echo -n "Enter name of output file [$filename] > "
    read response
    if [ -n "$response" ]; then
        filename=$response
    fi

    if [ -f $filename ]; then
        echo -n "Output file exists. Overwrite? (y/n) > "
        read response
        if [ "$response" != "y" ]; then
            echo "Exiting program."
            exit 1
        fi
    fi
fi

First, we check whether the interactive mode is on; if it is not, we don't have anything to do. Next, we ask the user for the file name. Notice the way the prompt is worded:
echo -n "Enter name of output file [$filename] > "

We display the current value of filename since, the way this routine is coded, if the user just presses the enter key, the default value of filename will be used. This is accomplished in the next lines where the value of response is checked. If response is not empty, then filename is assigned the value of response. Otherwise, filename is left unchanged, preserving its default value.
After we have the name of the output file, we check if it already exists. If it does, we prompt the user. If the user response is not "y," we give up and exit, otherwise we can proceed.
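Putting it together, a hypothetical interactive session (assuming a home directory of /home/me, an output file that already exists, and the earlier test code removed) might look like this:

[me] $ ./sysinfo_page -i
Enter name of output file [/home/me/sysinfo_page.html] >
Output file exists. Overwrite? (y/n) > n
Exiting program.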