Hi All,
I'm writing a script that goes through some CSV files (output from sar) to gather statistics on system performance. This is the code I have so far:
awk -F"\",\"" 'NR != 1 {
    per[$10]++
    sum += $10
}
END {
    print ARGV[ARGIND]    # current file name (ARGIND is gawk-specific)
    for (i in per)
        print i " : " per[i]
    print "Total : " sum
    print "Lines : " (NR - 1)
}' $FILES
The input files have a header line, which accounts for the "NR != 1" and the "(NR - 1)".
What I need to do next is normalise the data by working out the 90th and 95th percentiles (I haven't quite got to the maths part yet; I'm going to try to calculate the cumulative frequency first). So I believe the next step is to sort my array, but I don't know how many elements are stored in it. Can anyone tell me how to do this?
Some example output here (from just one input file):
snsax-psmpw-5-2007-01-07_MemorySwapUtilisation.csv
9.63 : 1
9.64 : 20
9.6 : 1
9.65 : 29
9.66 : 17
9.87 : 11
9.78 : 1
9.89 : 1
9.61 : 38
9.62 : 24
Total : 1380.4
Lines : 143
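To show where I'm headed, here's a rough sketch in portable awk (standalone, reading one value per line rather than the real sar CSV, so the data and field layout here are just placeholders). It counts the distinct values as they arrive, sorts them numerically with a simple insertion sort (so it doesn't need gawk's asorti), and prints the cumulative percentage; the 90th/95th percentile would then be the first value whose cumulative percentage reaches 90%/95% (nearest-rank method):

```shell
printf '9.63\n9.64\n9.64\n9.61\n' | awk '
{
    if (!($1 in per)) keys[++n] = $1   # collect each distinct value once; n = element count
    per[$1]++                          # frequency of each value
    total++                            # total number of samples
}
END {
    # insertion sort of the n keys, numeric ascending (the "+ 0" forces numeric comparison)
    for (i = 2; i <= n; i++) {
        v = keys[i]
        for (j = i - 1; j >= 1 && keys[j] + 0 > v + 0; j--)
            keys[j + 1] = keys[j]
        keys[j + 1] = v
    }
    # walk the sorted values, accumulating frequency into a cumulative percentage
    cum = 0
    for (i = 1; i <= n; i++) {
        cum += per[keys[i]]
        printf "%s : %d (cumulative %.1f%%)\n", keys[i], per[keys[i]], 100 * cum / total
    }
}'
# Expected output for the sample input:
# 9.61 : 1 (cumulative 25.0%)
# 9.63 : 1 (cumulative 50.0%)
# 9.64 : 2 (cumulative 100.0%)
```

The `keys[++n] = $1` line is the bit that answers my own element-count worry: n ends up holding the number of distinct values, so the sort and the cumulative loop both know how far to run.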
Thanks in advance