compress files as they are created by a dbexport

Hi,

We work on an IBM machine with an Informix database, and we need to run a dbexport, which creates one flat file per table.

We are on AIX 5.3, using ksh.

We need to run the dbexport and the compression (gzip) at the same time.

I wrote a shell script, but I'm sure there is a better way to do it:

#!/usr/bin/ksh

comp()
{
# wait until the dbexport directory has been created with at least one file in it

while [ ! -d /dbex_dir_path/dbexport_dir ] || [ -z "$(ls -A /dbex_dir_path/dbexport_dir 2>/dev/null)" ]
do
sleep 3
done


# compress files while the dbexport is still running

while [ "$(cat /my_pgm_path/finished)" != 1 ]
do

# compare the oldest non-empty, uncompressed file with the newest one (probably the file currently being written)
# this check prevents compressing the file that is still being created
if [[ $(ls -lt /dbex_dir_path/dbexport_dir | awk '$5!=0{print $9}' | grep -v '\.gz$' | tail -1) != $(ls -ltr /dbex_dir_path/dbexport_dir | awk '$5!=0{print $9}' | grep -v '\.gz$' | tail -1) ]]

then

# compress the oldest non-empty, uncompressed file
gzip /dbex_dir_path/dbexport_dir/$(ls -lt /dbex_dir_path/dbexport_dir | awk '$5!=0{print $9}' | grep -v '\.gz$' | tail -1)


fi

# pause so the loop doesn't spin at full CPU between checks
sleep 3
done

# the dbexport is over, so compress all the remaining files
for fich in $(ls -l /dbex_dir_path/dbexport_dir | awk '$5!=0{print $9}' | grep -v '\.gz$')
do
# compress
gzip /dbex_dir_path/dbexport_dir/$fich
done
echo "compression finished"

}


# I couldn't find a way to share a variable with the background process, so I use a file to flag whether the dbexport has finished yet
echo "0" > /my_pgm_path/finished

# remove any leftover export directory from a previous run
[ ! -d /dbex_dir_path/dbexport_dir ] || rm -rf /dbex_dir_path/dbexport_dir


# launch the compression function in the background
comp &


# dbexport
<dbexport-process>

# the dbexport is finished, so change the "0" to "1" in the flag file
echo "1" > /my_pgm_path/finished


I would like, if possible, to run 4 compressions in parallel at once, but I can't figure out how to do it.

There is more relative value in tools that import than in those that export. If you select delimited text output, you might find it easy to compress on the fly, reducing latency/end-to-end time. Not sure if dbexport would write to a named pipe with the gzip on the other side. Maybe use ksh/bash '>(...)' process substitution to make the named pipes on the fly. No sense writing the uncompressed data to disk!
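
Here is a minimal sketch of that named-pipe idea, assuming dbexport will write into a pre-created FIFO sitting where its flat file would normally go -- which is exactly the part I am not sure of, so test it on one small table first. The directory and file name (/dbex_dir_path/dbexport_dir, mytable.unl) are placeholders, not real dbexport output names.

#!/usr/bin/ksh

EXPDIR=/dbex_dir_path/dbexport_dir        # placeholder export directory
mkdir -p "$EXPDIR"

# put a FIFO where the flat file for one table would normally appear
mkfifo "$EXPDIR/mytable.unl"

# gzip sits on the read end and writes the compressed copy,
# so the uncompressed data never touches the disk
gzip -c < "$EXPDIR/mytable.unl" > "$EXPDIR/mytable.unl.gz" &

# run the export; if it opens the FIFO like an ordinary file,
# its output for that table streams straight into gzip
# <dbexport-process>

# let the background gzip drain the pipe and finish
wait

rm "$EXPDIR/mytable.unl"                  # remove the FIFO afterwards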

You could use 'parallel' to get N parallel runs. A more primitive shell trick is to have one process write the table names into a pipe feeding a parenthesized group in which 4 background subshells read names one at a time (with 'line' or the shell's read) and export/gzip them. Send the names biggest first: the biggest one may take more time than all the others!
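
And a rough sketch of that primitive shell trick, here only gzipping files a dbexport has already written (a per-table export could go in the same worker loop). The export directory is the placeholder path from the script above, the worker count of 4 comes from the question, and I use the shell's read builtin rather than 'line'. Several readers sharing one pipe works in practice because the shell pulls the pipe a byte at a time, but treat it as a hack, not a guarantee.

#!/usr/bin/ksh

EXPDIR=/dbex_dir_path/dbexport_dir        # placeholder export directory

# list the non-empty, uncompressed files, biggest first,
# so the longest job starts as early as possible
ls -l "$EXPDIR" | awk '$5 > 0 {print $5, $9}' | grep -v '\.gz$' | sort -rn | awk '{print $2}' |
(
    # four workers share the same stdin (the pipe of names);
    # each pulls one name, gzips it, then comes back for the next
    for i in 1 2 3 4
    do
        while read f
        do
            gzip "$EXPDIR/$f"
        done &
    done

    # block until all four workers have drained the list
    wait
)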