If you are talking shell scripts, then you could have this structure:-
step=$1 # Read step as first parameter
step="${step:=1}" # Default to step one
until [ "$step" -gt 99 ]
do
case $step in
1) function_1 ;;
2) function_2 ;;
3) function_3 ;;
4) function_4 ;;
esac
step=$((step + 1))
done
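To make that pattern concrete, here is a self-contained sketch. The function bodies are placeholder echoes standing in for your real tasks, and the upper bound is 4 here since only four steps exist in the example:

```shell
#!/bin/sh
# Runnable sketch of the step/case restart pattern; the function
# bodies are placeholders standing in for real batch tasks.
function_1() { echo "running step 1"; }
function_2() { echo "running step 2"; }
function_3() { echo "running step 3"; }
function_4() { echo "running step 4"; }

step="${1:-1}"            # read step as first parameter, default to one
until [ "$step" -gt 4 ]
do
    case $step in
        1) function_1 ;;
        2) function_2 ;;
        3) function_3 ;;
        4) function_4 ;;
    esac
    step=$((step + 1))
done
```

Saved as, say, run_steps.sh, running `./run_steps.sh 3` would skip straight to step 3 and carry on from there.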
An alternative might be to set step and have an if ... then ... fi
around each chunk of code, like this:-
step=$1 # Read step as first parameter
step="${step:=1}" # Default to step one
if [ "$step" -le 1 ]
then
function_1
fi
if [ "$step" -le 2 ]
then
function_2
fi
:
:
They would sort of get over your request, but honestly this might just end up as one huge script trying to do too much in one go. You would be better off with a single script for each task you need to perform, keeping each one simple. Even better, write a utility script that takes an SQL deck as an argument and runs the appropriate code. You can then use a proper scheduler to set up dependencies, change the sequence of your code, or whatever, far more flexibly.
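A minimal sketch of such a utility might look like the following. The run_sql name and the sqlplus-style default in SQL_CMD are assumptions - substitute whatever client your site actually uses:

```shell
#!/bin/sh
# Hypothetical wrapper: runs a named SQL deck through a configurable
# client. SQL_CMD's default is a placeholder, not a recommendation.
SQL_CMD="${SQL_CMD:-sqlplus -S scott/tiger@db}"

run_sql()
{
    deck=$1
    if [ ! -r "$deck" ]
    then
        echo "Cannot read SQL deck: $deck" >&2
        return 2
    fi
    # Word splitting on $SQL_CMD is intentional (command plus flags)
    $SQL_CMD < "$deck"
}
```

A scheduler (or the loop further down) can then just call `run_sql nightly_load.sql` and act on the return code.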
Of course, there is a cost to a proper scheduler and you would likely have to justify it. There are many on the market, from the massively over-engineered ones that can run work on (just about) any platform, down to those that run on the local host only (although with a bit of ssh or rsh knowledge you can run processing elsewhere too).
At worst, I would at least suggest having a directory holding the scripts you want to run. At the start of the schedule, copy in each script, making sure each starts with a sequence number so that they naturally sort into the required order, for example:-
01_01_01_script_to_setup_something
01_02_01_doing_this_bit_next
01_03_01_and_then_this_thing
This would allow you to insert an ad-hoc job if you need to, say 01_02_05_extra-bit_this_time.
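Since a plain shell glob expands in lexical order, the sequence numbers give you the run order for free. This throwaway snippet (temporary directory, empty files mirroring the names above) just demonstrates the sort:

```shell
#!/bin/sh
# Demonstration that sequence-numbered names sort naturally; the
# files are created empty, and deliberately out of order.
run_dir=$(mktemp -d)
touch "$run_dir/01_02_01_doing_this_bit_next" \
      "$run_dir/01_01_01_script_to_setup_something" \
      "$run_dir/01_02_05_extra-bit_this_time" \
      "$run_dir/01_03_01_and_then_this_thing"

# A plain glob expands in lexical order, whatever order the
# files were created in:
ordered=$(cd "$run_dir" && printf '%s\n' *)
echo "$ordered"
rm -r "$run_dir"
```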
You then need a script that simply loops over the contents of the directory and (re)moves each file when its script completes error-free. You would, of course, have to document a recovery procedure in case of failure, enabling whoever is on call to copy back any completed steps that need to be re-run, say after a data restore and adjustment.
Your script to run the schedule could be pretty simple:-
for file in "$dir"/*
do
"$file"
RC=$?
if [ $RC -eq 0 ]
then
mv "$file" "$archive_dir/"
logger "Scheduled job complete: $file"
else
echo "Exiting $file with return code $RC"
logger "Scheduled job failure: $file"
fi
done
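To see a loop of this shape behave end to end, here is a self-contained dry run using temporary directories and two dummy jobs, one that succeeds and one that fails; logger is swapped for echo so it runs anywhere, and the directory names are throwaway assumptions:

```shell
#!/bin/sh
# Self-contained dry run of the schedule loop: the successful job
# should move to the archive, the failing one should stay behind.
dir=$(mktemp -d)
archive_dir=$(mktemp -d)

printf '#!/bin/sh\nexit 0\n' > "$dir/01_ok_job"
printf '#!/bin/sh\nexit 1\n' > "$dir/02_bad_job"
chmod +x "$dir"/*

for file in "$dir"/*
do
    RC=0
    "$file" || RC=$?          # capture the exit code without aborting
    if [ "$RC" -eq 0 ]
    then
        mv "$file" "$archive_dir/"
        echo "Scheduled job complete: $file"
    else
        echo "Exiting $file with return code $RC"
        echo "Scheduled job failure: $file"
    fi
done
```

Note the scripts must be executable (hence the chmod) for `"$file"` to run them directly.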
That said, it is a poor substitute for a proper scheduler. How complex a batch schedule do you think this might grow to? There are really many ways to do this, so you need to be certain of the logic of how it would flow, allowing for exceptions or ad-hoc processes, recovery steps, restart intervention, data corrections, etc. etc.
Overall, I'd really suggest a proper 3rd party scheduler. Search for schedulers from CA, BMC, UC4, Axway, or a myriad of other software companies out there. They generally all allow you to manipulate the batch, restart from a specific point, bypass steps, run tasks in parallel and various other things. There may even be a decent freeware one that I'm not aware of - perhaps sourceforge or similar have built something.
Some are pretty, some are text only and very basic, but it depends what you need and what you can justify.
Robin