Hi all
I've got a question regarding error handling in shell scripts. My background is mainly object-oriented programming languages, but for a year or so I've been doing more and more (bash) shell scripting (which I quite enjoy, by the way).
To handle errors in my scripts I often find myself doing something like:
<execute some command>
if [ $? -ne 0 ]; then
<handle error>
fi
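For a concrete (made-up) example of that pattern, creating a temp directory and checking $? right afterwards:

```shell
#!/bin/bash
# Hypothetical example of the pattern: run a command,
# then inspect $? immediately afterwards.
workdir=$(mktemp -d)           # <execute some command>
if [ $? -ne 0 ]; then
    echo "mktemp failed" >&2   # <handle error>
    exit 1
fi
echo "created $workdir"
```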
In many cases the <handle error> part is the same for all the commands executed, so you end up duplicating code. I therefore think it would be better to encapsulate it in a function, e.g.:
exec_cmd()
{
    $1    # the command to be executed is passed in as an argument
    if [ $? -ne 0 ]; then
        <handle error>
    fi
}
And execute commands by calling this function:
exec_cmd "<some command>"
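To make that concrete, here is a minimal runnable sketch; the commands and the error handling are just placeholders I made up:

```shell
#!/bin/bash
exec_cmd()
{
    $1                                   # run the command string
    if [ $? -ne 0 ]; then
        echo "ERROR: '$1' failed" >&2    # shared error handling
        exit 1
    fi
}

exec_cmd "mkdir -p /tmp/exec_cmd_demo"
exec_cmd "ls /tmp/exec_cmd_demo"
```

One caveat I've run into: passing the command as a single string means word splitting decides where the arguments are, so commands containing quotes or spaces break. Defining the function to take the command and its arguments separately (calling it as exec_cmd mkdir -p /tmp/demo and running "$@" inside the function) avoids that.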
Furthermore, the function can easily be enhanced to write the command, its output, and its return code to a log file:
exec_cmd()
{
    echo "" >> "$log_file"
    echo "---------------------------------------------------------" >> "$log_file"
    echo "$1" >> "$log_file"
    $1 >> "$log_file" 2>&1
    return_code=$?
    if [ $return_code -ne 0 ]; then
        echo "ERROR - $1 failed with $return_code" >&2
        exit 1
    fi
}
exec_cmd "<cmd1>"
exec_cmd "<cmd2>"
exec_cmd "<cmd3>"
...
I've found this quite handy for handling errors in my scripts. Writing the command, its output, and the return code to a log file also makes it quite easy to investigate problems with scripts that run in the background.
Now my question is:
How is that as a practice? Is it common? Or would it be considered bad, and if so, why? Does bash already have something like this built in? (E.g. there might be an exit-on-fail option; I also know that you can start scripts with an option that prints each command as it is executed.)
How do the experienced people here handle errors in their scripts?
Thanks and best wishes