Functions in a shell script

I have a very large script with the following structure:

functionA () {

....
...
....

}

Results=$(functionA)

The code inside functionA is huge, so by the time the script reaches the "Results=" line, several seconds have already passed. The script is about 15 MB in size.

How can I reorganize this so the script runs faster?

I didn't write this script, but I have to make it run faster one way or another. Any ideas?

I tried this:

....
...
.... | while read functionA
do
...
...
done

Notice I got rid of the function and just let the code run inline. The idea is that instead of the script having to read past the huge functionA definition before doing any work, the work starts as soon as the script is kicked off.
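To illustrate the difference with a toy stand-in for functionA (the real one is obviously far bigger): the first form waits for all the output before assigning it, while the second handles each line the moment it is printed:

```shell
#!/bin/sh
# functionA here is a tiny stand-in; imagine each iteration takes seconds.
functionA () {
    for n in 1 2 3; do
        echo "result $n"
    done
}

# Buffered: $(...) waits for functionA to finish before Results is set.
Results=$(functionA)
echo "$Results"

# Streamed: each line is handled as soon as functionA prints it, so the
# rest of the pipeline starts working immediately.
functionA | while IFS= read -r line; do
    echo "got: $line"
done
```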

I think a 15 MB shell script is going to be slow no matter what. So if speeding things up is important, the best approach is, unfortunately, to redesign the script and break it into separate pieces. If you do that in a way that makes sense, by creating separate scripts that perform logical sub-tasks, then only the parts that need to run at any given time will be executed. It will probably also be easier for you, and whoever inherits it next, to maintain.
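As a minimal sketch of what I mean (the task names and sub-script contents are invented, and the demo generates the sub-scripts on the fly just so it runs; in real use they would be hand-written files next to the dispatcher):

```shell
#!/bin/sh
# Sketch of splitting one monolithic script into per-task files.
dir=$(mktemp -d)
printf '%s\n' '#!/bin/sh' 'echo "running backup"' > "$dir/backup.sh"
printf '%s\n' '#!/bin/sh' 'echo "running report"' > "$dir/report.sh"
chmod +x "$dir/backup.sh" "$dir/report.sh"

task=${1:-report}                 # pick the sub-task to run
case $task in
    # Only the one file actually requested is read and parsed.
    backup|report) out=$("$dir/$task.sh") ;;
    *) echo "usage: $0 {backup|report}" >&2; exit 1 ;;
esac
echo "$out"
rm -rf "$dir"
```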

If you really have ONE shell function that's around 15 MB, though, that redesign is likely to be painful.

And if the entire thing has to run each time, then breaking it up won't help much either. In that case, I think converting to a compiled language (C), or at least a pre-parsed one (Python), would be your best bet.


If you are using bash, and if there's no bash-specific code, try dash or ksh93. Bash isn't an efficient shell.
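A quick, rough way to compare shells is to time the same script under each one. Something like this (the busy-loop demo script is just a stand-in for your real script; substitute its path):

```shell
#!/bin/sh
# Generate a small CPU-bound demo script so the comparison is runnable;
# in practice you would time your actual 15 MB script instead.
cat > /tmp/demo.sh <<'EOF'
i=0
while [ "$i" -lt 10000 ]; do
    i=$((i + 1))
done
echo "$i"
EOF

# Time the same script under each interpreter that is installed.
for interp in bash dash ksh93; do
    if command -v "$interp" >/dev/null 2>&1; then
        echo "== $interp =="
        time "$interp" /tmp/demo.sh
    fi
done
```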

We can't possibly make any targeted recommendations without knowing something about the code beyond the fact that it's a large script.

Regards,
Alister


What sort of processing are you doing? If there are lots of files being read and re-read, that will cost time. If there are lots of calls to external programs (grep, cut, tr, sed, awk, etc.) in loops, that will cost time too.

For example, if you are reading a 10,000-line input file and, for each line, using grep to scan another large file or set of files, then there may be a better way to rework the logic to avoid the repetitive calls.
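As a rough sketch of that rework (the file names and data here are invented): instead of launching one grep per input line, feed all the keys to a single grep or awk pass:

```shell
#!/bin/sh
# Demo data: keys.txt holds lookup keys, data.txt holds records.
printf '%s\n' apple cherry > keys.txt
printf '%s\n' 'apple 1' 'banana 2' 'cherry 3' 'date 4' > data.txt

# Slow pattern: one grep process per key; at 10,000 keys that is
# 10,000 fork/execs, which dominates the runtime.
slow=$(while IFS= read -r key; do
    grep "^$key " data.txt
done < keys.txt)

# Faster: a single grep reads every key at once (-F: fixed strings)...
fast=$(grep -F -f keys.txt data.txt)

# ...or one awk pass, if the match must be anchored to the first field.
awked=$(awk 'NR==FNR { keys[$1]; next } $1 in keys' keys.txt data.txt)

printf '%s\n' "$fast"
rm -f keys.txt data.txt
```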

Can you give us a logical description of what it does? Roughly how many lines does your function have? (excluding comments and blanks if you can)

Can you put some tracing in to see where it is spending most of its time?
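For tracing, if the script runs under bash, you can timestamp the xtrace output via PS4 -- something like this sketch (SECONDS and LINENO are bash built-ins; other shells may expand them as empty):

```shell
#!/bin/bash
# Timestamped trace: prefix each traced command with the elapsed seconds
# and line number, so the slow spots stand out in the trace file.
exec 2>trace.log          # send the trace to a file instead of the screen
PS4='+ ${SECONDS}s line ${LINENO}: '
set -x

sleep 1                   # stand-in for a slow step in the real script
echo "done"

set +x
```

Afterwards, scan trace.log for the lines where the elapsed-seconds prefix jumps.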

Robin

15 MB shell script :eek:
How many lines is that, if I may ask?

It's time for a change.
Either reprogram it or start over from scratch, and consider other tools for the job, like databases :rolleyes: