Hi all,
Say I have a range like 0-1000 and I need to split the lines that fall within each fixed sub-range into different files. I can achieve this manually, but it is not scalable if the range increases.
E.g.:
cat file1.txt
Response time 2 ms
Response time 15 ms
Response time 101 ms
Response time 279 ms
etc.
What I currently do is create an array of grep patterns and then loop over it:
bucketLimits=(
# 100 <> 150, 150 <> 200, 200 <> 250, 250 <> 300, 300 <> 350, 350 <> 400, 400 <> 450, 450 <> 500
'[1][0-4][0-9]' '[1][5-9][0-9]' '[2][0-4][0-9]' '[2][5-9][0-9]' '[3][0-4][0-9]' '[3][5-9][0-9]' '[4][0-4][0-9]' '[4][5-9][0-9]'
# 500 <> 550, 550 <> 600, 600 <> 650, 650 <> 700, 700 <> 750, 750 <> 800, 800 <> 850, 850 <> 900, 900 <> 950, 950 <> 1000
'[5][0-4][0-9]' '[5][5-9][0-9]' '[6][0-4][0-9]' '[6][5-9][0-9]' '[7][0-4][0-9]' '[7][5-9][0-9]' '[8][0-4][0-9]' '[8][5-9][0-9]' '[9][0-4][0-9]' '[9][5-9][0-9]'
)
for bucketLimit in "${bucketLimits[@]}"
do
    # quote the expansion so the shell does not glob-expand the bracket patterns
    result=$(grep "Response" file1.txt | grep -oE "time ${bucketLimit} ms" | wc -l)
    finalResult="$finalResult,$result"
done
echo "$finalResult" >> ./stats_results.csv
Any idea how I can auto-generate the bucketLimits array by giving it the sub-range size? It could be a range of 10, or 50 as it is now.
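I was imagining something along these lines (a rough sketch with my own variable names; it assumes the bucket size is a multiple of 10 that divides 100 evenly, so a bucket never crosses a hundreds boundary), but I'm not sure it's the right approach:

size=50          # sub-range width, e.g. 10, 20, 50
lo=100 hi=1000   # overall range the patterns should cover

bucketLimits=()
for (( s = lo; s < hi; s += size )); do
    d=$(( s / 100 ))               # fixed hundreds digit of this bucket
    t0=$(( (s % 100) / 10 ))       # first tens digit in this bucket
    t1=$(( t0 + size / 10 - 1 ))   # last tens digit in this bucket
    bucketLimits+=( "[$d][$t0-$t1][0-9]" )
done
printf '%s\n' "${bucketLimits[@]}"

For size=50 this reproduces the patterns above ([1][0-4][0-9], [1][5-9][0-9], ...); for size=10 it yields things like [1][0-0][0-9], which works but looks clumsy. Is there a cleaner way?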
Thx!