Sample data.file:
0,mfrh_green_screen,1454687485,383934,/PROD/G/cicsmrch/sys/unikixmain.log,37M,mfrh_green_screen,28961345,0,382962--383934
0,mfrh_green_screen,1454687785,386190,/PROD/G/cicsmrch/sys/unikixmain.log,37M,mfrh_green_screen,29139568,0,383934--386190
0,mfrh_green_screen,1452858644,-684,/PROD/G/cicsmrch/sys/unikixmain.log,111M,mfrh_green_screen,732502,732502,,111849151,0,731818
0,mfrh_green_screen,1452858944,-888,/PROD/G/cicsmrch/sys/unikixmain.log,111M,mfrh_green_screen,732707,732707,,111918753,0,731819
Code I'm running against this file:
VALFOUND=1454687485
SEARCHPATT='Thu Feb 04'
awk "/,${VALFOUND},/,0" data.file |
  gawk -F, '{A=strftime("%a %b %d %T %Y,%s",$3);{Q=1};if((Q)&&(NF == 13)){split($4, B,"-");print B[2] "-" $3 "_0""-" $4"----"A} else if ((Q)&&(NF == 10)) {split($NF, B,"--");print B[2]-B[1] "-" $3 "_" $10"----"A}}' |
  egrep "${SEARCHPATT}" |
  awk -F"----" '{print $1}'
data.file is about 7 MB in size and can grow considerably larger than that. When I run the above command on it, it takes about 6 seconds to complete. Any way to bring that number down?
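One thing I was wondering about is whether collapsing the four processes into a single gawk pass would help, since the current pipeline pushes the same text through three extra pipes. Here is a rough sketch of what I mean (the val and patt variables are just VALFOUND and SEARCHPATT passed in with -v; I'm assuming the epoch value only ever appears between commas as in the sample, and that the date pattern only needs to match the formatted timestamp, not the rest of the line):

VALFOUND=1454687485
SEARCHPATT='Thu Feb 04'
gawk -F, -v val="$VALFOUND" -v patt="$SEARCHPATT" '
  !found && index($0, "," val ",") { found = 1 }   # same start point as the awk range pattern
  !found { next }                                  # skip everything before that line
  {
    ts = strftime("%a %b %d %T %Y", $3)            # formatted timestamp, only needed for the filter
    if (ts !~ patt) next                           # replaces the egrep stage
    if (NF == 13)      { split($4, B, "-");  print B[2] "-" $3 "_0-" $4 }
    else if (NF == 10) { split($NF, B, "--"); print B[2]-B[1] "-" $3 "_" $10 }
  }
' data.file

Would something along those lines be the right direction, or is there a better approach?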