The same code works perfectly with the Korn shell, but for some reason the execution is too slow and causes problems to the use I need it for. So I definitely need to use Bash or something equally fast...
The execution is slow because you're running dozens of externals in backticks all the time. You should use externals to process big batches of data -- not individual lines. Externals are fast on bulk data, but overkill for anything less -- like making 10 phone calls to say 10 words. Most of your script could probably be done with one execution of awk, if we knew the details.
If you show the input you get from sar and the output you want from the script, we can find a much more efficient way to do it, probably in nearly pure awk. awk can do much more than '{ print $1 }'.
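To illustrate, here's a rough sketch of the kind of single-pass aggregation awk can do. The two-column sample input is made up for the example -- it is not your actual sar output:

```shell
# Toy input stands in for real sar output; the device/value layout is an
# assumption for illustration only.
printf 'disk1 5.0\ndisk1 7.0\ndisk2 3.0\n' |
awk '{ sum[$1] += $2 }                              # per-device running totals
     END { for (d in sum) printf "%s %.1f\n", d, sum[d] }' |
sort
# prints:
# disk1 12.0
# disk2 3.0
```

One awk process replaces a whole loop of greps and backtick arithmetic.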
I don't see anything obviously wrong with your script, though there are many places in it where things are unquoted, and quoting may be important.
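For example (a minimal sketch; the variable and its contents are illustrative): unquoted expansions undergo word splitting and globbing, which can silently change your data.

```shell
line='Average   disk1   10.0'
echo $line      # unquoted: word splitting collapses the internal spacing
echo "$line"    # quoted: printed verbatim
# prints:
# Average disk1 10.0
# Average   disk1   10.0
```

The habit of quoting every expansion ("$var") costs nothing and avoids a whole class of bugs.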
The script is not slow for its content.
The "sar -d 30 4" statement will take 2 minutes (a sample interval of thirty seconds times four iterations). On my test the script took 2 minutes 3 seconds.
The process that fishes out the Average lines can be improved so that we only read $TEMP1 once.
If performance were really important, there would be no need for the workfile $TEMP1, and (as Corona688 notes) the maths can be done in awk.
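A sketch of that approach: pipe sar straight into one awk instead of writing to $TEMP1 and re-reading it. Here a here-document stands in for the real "sar -d 30 4" output, and the idea that r+w/s is field 5 comes from the thread -- check your platform's actual column layout:

```shell
# One pass, no temp file: select the Average lines and do the arithmetic
# inside awk. The input below is fake sar output for illustration.
awk '
    /^Average/ { rw += $5; n++ }           # accumulate the r+w/s column
    END { if (n) printf "%.2f\n", rw / n } # mean across devices
' <<'EOF'
12:00:00  device  %busy  avque  r+w/s  blks/s
Average   disk1   1      0.1    4.0    64
Average   disk2   2      0.2    6.0    96
EOF
# prints: 5.00
```

In the real script the here-document would be replaced by `sar -d 30 4 |` in front of the awk.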
The other issue with your script is possibly the design. Field 5 in "sar" is the number of read+write data transfers PER SECOND. Over a sample period of just 2 minutes you need a decent level of disc activity to get this figure above zero.
Finally the question I should have asked first:
Please explain the statement "execution is too slow and causes problems to the use I need it for".
(You are aware that Unix systems can be configured to accumulate "sar" statistics automatically, all day every day?)
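On many System V style systems that collection is driven by sadc entries in the sys crontab; the paths and intervals below are typical examples, not guaranteed for your platform:

```shell
# Typical sys crontab entries for automatic sar collection (illustrative):
# 0 * * * 0-6 /usr/lib/sa/sa1            # snapshot every hour, every day
# 20,40 8-17 * * 1-5 /usr/lib/sa/sa1     # extra samples during working hours

# Historic data can then be read back without waiting on a live sample:
# sar -d -f /var/adm/sa/sa$(date +%d)
```

That way your script only has to read already-collected figures instead of blocking for the whole sample interval.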
I noticed afterwards that the sar command was causing most of the delay, but then again I was under the impression that bash was a bit faster.
To answer "methyl"'s question: I will use this script with net-SNMP's extend (or exec, if you will) functionality. The idea is to graph the accumulated reads/writes on servers.
So... can someone propose a sensible interval? The servers that will be queried are, I assure you, very busy.
Currently I have it set as
sar -d 5 4
This also helps keep my snmpwalks from timing out.
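For reference, the net-SNMP side is a one-line snmpd.conf entry; the label "diskio" and the script path below are hypothetical:

```shell
# snmpd.conf fragment (illustrative label and path):
# extend diskio /usr/local/bin/disk_rw.sh
#
# The script's output then appears in NET-SNMP-EXTEND-MIB and can be
# fetched with, e.g.:
# snmpwalk -v2c -c public myhost NET-SNMP-EXTEND-MIB::nsExtendOutputFull
```

Note that snmpd runs the extend script on demand, so a script that blocks for the full "sar -d 5 4" sample period (20 seconds) is exactly what makes walks time out -- reading pre-collected sar data avoids that.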
For the sake of learning: can someone explain why the scope of the variable "CONT" is not global...?
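The usual culprit: in bash, every stage of a pipeline runs in a subshell, so a variable assigned inside `cmd | while read ...` is lost when the loop finishes. A minimal sketch -- the variable name mirrors the thread, but whether your script does exactly this is an assumption:

```shell
CONT=0
printf '1\n2\n3\n' | while read -r n; do
    CONT=$((CONT + n))                 # this runs in a subshell
done
echo "$CONT"                           # prints 0: the parent shell never saw it

CONT=0
while read -r n; do                    # redirection keeps the loop in the
    CONT=$((CONT + n))                 # current shell, so CONT survives
done <<'EOF'
1
2
3
EOF
echo "$CONT"                           # prints 6
```

In bash you can also feed the loop with process substitution, `done < <(some_command)`, to keep the loop in the current shell while still reading from a command.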