performance issue using gzcat, awk and sort

hi all,
I managed to write a script that gathers a few files and sorts them.

here it is:

#!/usr/bin/ksh


ls *mainFile* |cut -c20-21 | sort > temp

set -A line_array
i=0
file_name='temp'

while read file_line
do
 line_array[i]=${file_line}
 let i=${i}+1
  


# mainFile
gzcat *mainFile-dsa${file_line}* | awk '
BEGIN { FS = "," }
# use field 1 as the key; carry the previous key forward when it is empty
{ if ($1 != "") { mykey = $1 } else { mykey = prev } }
{ print mykey ",1," NR "," $0; prev = mykey }
' > final
# line
gzcat *line-dsa${file_line}* | awk '
BEGIN { FS = "," }
{ if ($1 != "") { mykey = $1 } else { mykey = prev } }
{ print mykey ",2," NR "," $0; prev = mykey }
' >> final
# ss
gzcat *ss-dsa${file_line}* | awk '   
BEGIN { FS = "," } ; 
    {print $1",3,"NR","$0;} 
' >> final
#bsginfo
gzcat *bsginfo-dsa${file_line}* | awk '   
BEGIN { FS = "," } ; 
    {print $1",4,"NR","$0;} 
' >> final
#gprs
gzcat *gprs-dsa${file_line}* | awk '
BEGIN { FS = "," }
{ if ($1 != "") { mykey = $1 } else { mykey = prev } }
{ print mykey ",5," NR "," $0; prev = mykey }
function isnum(n) { return n ~ /^[0-9]+$/ }
' >> final
#odbdata
gzcat *odbdata-dsa${file_line}* | awk '   
BEGIN { FS = "," } ; 
    {print $1",6,"NR","$0;} 
' >> final

ls *mainFile* | cut -c1-8 | sort | read data

#sort -t "," +0 -2 -n final > final2
sort  -t ',' +0 -1n +1 -2n +2 -3n  final > final2 
#sort final > final2
rm  final
rm  temp
gzip final2
mv final2.gz ${data}-final-dsa${file_line}.csv.gz


done < ${file_name}

my problems:

  • when the number of lines in a file exceeds a few million, "NR" comes out in scientific notation instead of a normal number, so I can no longer sort on it and guarantee the line order;
  • the server has a very heavy I/O load, so I would like to do the whole process in memory (there are idle processors and free memory available);
  • can I feed the output of the several gzcat commands into just one awk script, or is that not possible?
  • can I use a pipe to send each result on to the next step without writing the "final" file?
  • when it gets to the sort instruction, I/O usage goes from 30% to 100% while memory usage stays the same; why?

Can someone help me out with any of these questions?
It is getting really hard for a newbie like me to find a solution to my problems: a system that should finish its operations in one day is taking 5 days, and I'm trying to find solutions in areas I really don't understand yet.

Best regards,
Ricardo Tomás

That's really pesky. You can avoid the scientific format with printf but if the line numbers exceed the capacity of the data type used internally by awk for integers, the output will be bogus.

borkstation$ awk 'END { print 123456789123456 }' /dev/null
1.23457e+14
borkstation$ awk 'END { printf "%i\n", 123456789123456 }' /dev/null
2147483647
borkstation$ perl -le 'print 123456789123456'
123456789123456

So the only workaround I can suggest is to switch to Perl in order to solve this. There is a script a2p in the Perl distribution which can convert awk scripts to Perl scripts, although I hear it's not perfect.
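For the line numbers in particular, perl's $. variable holds the current input line number and always prints as a plain integer, so one of your streams might translate to something like this (a rough sketch, untested against your data; the key-carrying logic mirrors the awk above):

gzcat *mainFile-dsa${file_line}* | perl -ne '
    chomp;
    @f = split /,/;                          # comma-separated input, like FS="," in awk
    $key = ($f[0] ne "") ? $f[0] : $prev;    # carry the previous key when field 1 is empty
    print "$key,1,$.,$_\n";                  # $. is the line number, printed as a plain integer
    $prev = $key;
' > final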

I'm sorry, I can't find a question in that. Can you rephrase?

The scripts seem to be different for each file, so it seems a bit dubious. Certainly you could try to refactor the code to reduce duplication. It seems hard to write an awk script which could decide which fields to select purely based on the looks of the input (remember, file names are not visible when you receive data from a pipe), but if you know how to do that, by all means give it a try. Perhaps you could marshal the output from gzcat into a form where you can also include headers with information about which field numbers to use, or something. (Think XML format, although you don't have to use the specifics of XML, of course. Something simple like a prefix on each line which says which fields to look at is probably a lot easier to code and understand.)
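A rough sketch of that tagging idea, with made-up tags and only two of your streams:

( gzcat *mainFile-dsa${file_line}* | sed 's/^/MAIN,/'
  gzcat *line-dsa${file_line}*     | sed 's/^/LINE,/'
) | awk '
BEGIN { FS = "," }
$1 == "MAIN" { print "mainFile record: " $0 }
$1 == "LINE" { print "line record: "     $0 }
'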

Group the commands into a subshell and pipe the output from that shell to sort.

( awk one; awk too; awk some more ) | sort
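Applied to your script, that could look something like this (a sketch only: it keeps just the simple print blocks for brevity and assumes ${data} has already been computed):

(
  gzcat *mainFile-dsa${file_line}* | awk -F, '{ print $1 ",1," NR "," $0 }'
  gzcat *ss-dsa${file_line}*       | awk -F, '{ print $1 ",3," NR "," $0 }'
  gzcat *odbdata-dsa${file_line}*  | awk -F, '{ print $1 ",6," NR "," $0 }'
) | sort -t ',' +0 -1n +1 -2n +2 -3n | gzip > ${data}-final-dsa${file_line}.csv.gz

Each awk in the list still numbers its own stream from 1, since they are separate processes, so the NR values behave as before.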

sort uses temporary files if the inputs are big.

I usually find printf "%.f\n",variablename does the trick in awk.
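For example, this should print the full 123456789123456 instead of 2147483647 or 1.23457e+14, because %.f formats awk's floating-point value with no decimal places (exact for integers up to 2^53):

awk 'END { printf "%.f\n", 123456789123456 }' /dev/null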

sort usually has a command-line option to change the amount of memory it will allocate... usually the default is quite small, so you may see some benefit by increasing it. You can also sometimes control where it will store temporary files, so you may be able to specify some faster disks, or some that do not contain the original data so that they are not competing with each other. See man sort for details...
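With GNU sort, for example, -S sets the memory buffer and -T redirects the temporary files (other implementations differ, e.g. Solaris sort uses -y for memory, so check your man page; the size and path below are just placeholders):

sort -t ',' -k1,1n -k2,2n -k3,3n -S 1G -T /some/other/disk/tmp final > final2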

Neat, printf with %.f works for me, didn't know that one, thanks!

hi,
I'm already making some changes.
I'm running a test now with printf and trying to figure out how much memory to give sort (1, 2 or 3 GB :P)

Originally Posted by naoseionome
- the server has a very heavy I/O load, so I would like to do the whole process in memory (there are idle processors and free memory available).

I wanted to say that the hard disk is working at its maximum, but there is memory and processor time available! I will start giving sort a bit of the available memory.

I'm planning on gunzipping the files at the beginning. That way I can feed all the files into the same awk script and just need a rule for each file type, like: FILENAME ~ /line/ { line code }. This way I can pipe the result straight to sort instead of writing the "final" file.
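Something like this is what I have in mind (not tested yet, and I only show three of the six file types here):

gunzip *dsa${file_line}*.gz

awk '
BEGIN { FS = "," }
FILENAME ~ /mainFile/ { print $1 ",1," FNR "," $0; next }
FILENAME ~ /ss-dsa/   { print $1 ",3," FNR "," $0; next }
FILENAME ~ /odbdata/  { print $1 ",6," FNR "," $0; next }
' *mainFile-dsa${file_line}* *ss-dsa${file_line}* *odbdata-dsa${file_line}* |
sort -t ',' +0 -1n +1 -2n +2 -3n > final2

FNR restarts at 1 for every input file, so it should match the per-file NR values from the old version.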

thanks for the help.

Best regards,
Ricardo Tomás

Some architectures/OSes allow only 2 GB of memory per process. Keep that in mind.

Also, if your system starts swapping, you'll lose the memory advantage.

You can also sort within awk: load the values into an associative array and then loop over it to print them back out in order. It uses more memory, but less CPU time.
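With gawk 4.0 or later that could look something like the sketch below; it is a toy version that assumes the first comma-separated field is a unique numeric key, which is probably not true for this data:

gawk '
BEGIN { FS = "," }
{ data[$1 + 0] = $0 }                        # index each line by its numeric key
END {
    PROCINFO["sorted_in"] = "@ind_num_asc"   # gawk only: scan indices in ascending numeric order
    for (key in data) print data[key]
}
' final > final2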
In that case, using an external sort would be better.

Sounds like a good plan.