I have the following scripts and just want to run them in parallel. The main aim is to minimise the overall execution time of the script. At the moment out.txt has 1 lakh (100,000) records.
script1 (split.sh)
split -l 1000 out.txt splitout
ls splitout* > filelist.txt
while read fileln
do
    ./parll.sh "$fileln" >> parll.nohup &
done < filelist.txt
script2 (parll.sh)
while read line
do
    grep -h "$line" filename.txt >> nohup.out
done < "$1"
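The biggest cost in parll.sh is running grep once per record, which rescans filename.txt for every line. grep can instead read all the patterns of a chunk from a file with `-f` and match them in a single pass; `-F` additionally treats them as fixed strings rather than regexes. A minimal sketch of that idea, using small illustrative sample files in place of the real chunk and filename.txt:

```shell
# chunk.txt stands in for one split file of patterns (one per line);
# filename.txt is the file being searched. Sample data is illustrative.
printf 'alpha\nbeta\n' > chunk.txt
printf 'alpha 1\ngamma 2\nbeta 3\n' > filename.txt

# One grep call matches every pattern in the chunk in a single scan:
# -F = fixed strings (no regex interpretation), -f = patterns from file.
grep -F -f chunk.txt filename.txt > nohup.out
```

With this change, parll.sh shrinks to a single grep invocation per chunk instead of 1000, which usually dwarfs any gain from running the chunks in parallel.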
The main aim is to reduce the grep time against filename.txt, so I am splitting the large file, passing the split files one by one, and running the greps in parallel. It still takes more than 3.5 hours for 1 lakh records alone, and I have more than 5 lakh records. So can anybody help me reduce the time considerably?
So I am using split to break the large file into small files of data (file1), and for each line of that data I am grepping that line from file2 and writing the matches to an output file. To finish quickly I am using nohup to run these split files in parallel.
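If the per-chunk grep still needs parallelising, the split-and-dispatch flow above can be kept, but with each background job writing to its own output file so concurrent appends cannot interleave, and a `wait` so the script does not exit before the jobs finish. A sketch under those assumptions, with tiny sample files standing in for the real out.txt and search file:

```shell
# Illustrative stand-ins: out.txt holds the patterns to look up,
# big.txt is the large file being searched.
printf 'id1\nid2\nid3\nid4\n' > out.txt
printf 'id1 rec\nid3 rec\nid9 rec\n' > big.txt

split -l 2 out.txt splitout          # 2 patterns per chunk for the demo
for f in splitout??; do
    # One grep per chunk, run in the background; per-chunk output
    # files avoid interleaved writes from parallel jobs.
    grep -F -f "$f" big.txt > "$f.out" &
done
wait                                  # block until all greps finish
cat splitout??.out > results.txt      # merge the per-chunk results
```

In practice the number of chunks should be kept near the number of CPUs; hundreds of background greps all scanning the same file mostly fight each other for I/O.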
It would be my pleasure to solve your problem. However, just let me know the exact string you want to search for, so that I can rewrite it for faster processing.

I just want this in simple grep syntax, i.e.: # grep "String" <filename>
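Even in that simple form, two details matter once the search strings come from real data: the pattern should be quoted so embedded spaces survive the shell, and `-F` makes regex metacharacters in the data (like `.`) match literally. A short illustration with a made-up names.txt:

```shell
# Illustrative sample data.
printf 'John Smith,42\nJane Doe,37\nA.B,7\n' > names.txt

grep "John Smith" names.txt > hit1.txt   # quoted: one two-word pattern
grep -F "A.B" names.txt > hit2.txt       # -F: "." is a literal dot, not "any char"
```

Without the quotes, grep would receive `John` as the pattern and `Smith` as a filename; without `-F`, the pattern `A.B` would also match strings like `AxB`.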