Measurement file parsing

I have an application performance measurement file with one thousand lines. Each line has some text indicating the type of measurement, with the last field containing the measured value. Each line of the file holds a unique measurement. I am interested in extracting only about 100 of those measurements and putting them in a comma-delimited file. I have written a ksh script with multiple grep commands, each grep searching for a unique pattern. This method opens the same file over and over again, so I am wondering if there is a better scripting method, or something other than a ksh script, that would eliminate the multiple greps.

Example of the input file:

some text indicating type of measure1 is here value
some other zdfg value
string indicating yet another abdf value
....
...
...
last line with cpu_use value

Here is what I have done to extract only the measures that I am interested in:

Measure1=`grep measure1 file | awk '{print $NF}'`
zdfg=`grep zdfg file | awk '{print $NF}'`
cpu_use=`grep cpu_use file | awk '{print $NF}'`

and so on

echo $Measure1,$zdfg,$cpu_use

This gives me the following output:

value,value,value,value....

I am sure there is a better and more efficient way. Please suggest.

You can put more than one pattern-action statement in a single awk program. In fact, you can do the entire thing in awk.

$ cat zaf.awk
BEGIN { OFS="," }

/measure1/      { MEASURE1=$NF }
/zdfg/          { ZDFG=$NF     }
/cpu_use/       { CPUUSE=$NF   }

END { print MEASURE1, ZDFG, CPUUSE }
$ awk -f zaf.awk datafile
value,value,value
$
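
To end up with the comma-delimited file the question asks for, just redirect the output (measurements.csv is only an example name):

$ awk -f zaf.awk datafile > measurements.csv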

Just awk.

awk '
/measure1/ { a["measure1"] = $NF }   # save the last field of each matching line,
/zdfg/     { a["zdfg"]     = $NF }   # keyed by the pattern it matched
/cpu_use/  { a["cpu_use"]  = $NF }
END { printf("%s,%s,%s\n", a["measure1"], a["zdfg"], a["cpu_use"]) }' file

for a start..
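
With roughly 100 measurements, spelling out 100 patterns and 100 printf fields gets tedious. A sketch of one way to scale the array approach, assuming the search strings are single-word fixed strings kept one per line in a separate list (patterns.txt is just an illustrative name):

awk '
NR == FNR { want[$1]; order[++n] = $1; next }   # first file: one search string per line
{
    # fixed-string match of every wanted string against the current line
    for (p in want)
        if (index($0, p)) a[p] = $NF
}
END {
    # print the values comma-delimited, in the order the strings were listed
    sep = ""
    for (i = 1; i <= n; i++) { printf "%s%s", sep, a[order[i]]; sep = "," }
    printf "\n"
}' patterns.txt file

This still reads the data file once, and adding a measurement means adding one line to patterns.txt instead of editing the script.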
