FNR>1 {
  if (!($1 in amin))
    amin[$1] = amax[$1] = $2
  else {
    if ($2 < amin[$1]) amin[$1] = $2
    if ($2 > amax[$1]) amax[$1] = $2
  }
}
END {
  for (i in amin)
    print i, amin[i], amax[i]
}
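For reference, here is a self-contained, runnable version of this approach. Note that the END block must index the arrays per group, i.e. amin[i] and amax[i]. The file name data.txt and the comma separators are assumptions based on the sample data later in the thread.

```shell
# Sketch: per-group min/max with awk; data.txt and the comma
# separator are assumptions based on the thread's sample data.
printf '%s\n' 'Operation,Time-Taken' \
              'operation1,83621' \
              'operation1,12321' \
              'operation2,72321' > data.txt

awk 'FNR>1 {
  if (!($1 in amin))
    amin[$1] = amax[$1] = $2      # First value seen is both min and max
  else {
    if ($2 < amin[$1]) amin[$1] = $2
    if ($2 > amax[$1]) amax[$1] = $2
  }
}
END {
  for (i in amin)
    print i, amin[i], amax[i]     # Index each array by the group name
}' FS=, OFS=, data.txt
```

The order of the for (i in amin) traversal is unspecified, so the two output lines may appear in either order.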
---------- Post updated at 05:55 PM ---------- Previous update was at 05:27 PM ----------
Hi Scrutinizer, thanks for the solution. The problem is I am not very well versed with awk; it would be really helpful if you could explain what's happening here.
awk ' # There are two associative arrays, which use the first field as the index: L(ow) and H(igh)
NR==1 {
next # Skip header
}
!($1 in L) { # For every line, if no array element exist for the first field in array L,
L[$1]=$2 # then create one with the second field as value. This becomes the Lowest number
}
$2<L[$1] { # If the second field is lower than the array element in L for the first field,
L[$1]=$2 # then replace it with the second field as the new lowest number
}
$2>=H[$1] { # If the second field is higher or equal to the array element in H for the first field,
H[$1]=$2 # Then replace it with the second field as the new highest number
}
END { # After the input file has been read
for(i in L) # For all elements in array L (which are the unique field values)
print i, L[i], H[i] # Print the element name (the "operation") followed by the low and high values
}
' FS=, OFS=, file # Specify a comma as both input and output field separators and specify the file name
Note that the order is not guaranteed, so you might want to pipe the output through sort
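Concretely, piping the output through sort might look like this. The file name data.txt is an assumption, and -t, tells sort to split fields on the same comma used as the output separator:

```shell
# Sketch: the L/H approach, with output sorted by group name.
# data.txt and the comma delimiter are assumptions from the sample data.
printf '%s\n' 'Operation,Time-Taken' \
              'operation2,72321' \
              'operation1,83621' \
              'operation1,12321' > data.txt

awk '
NR==1 { next }                  # Skip header
!($1 in L) { L[$1] = $2 }       # First value seen becomes the low
$2 <  L[$1] { L[$1] = $2 }      # New low
$2 >= H[$1] { H[$1] = $2 }      # New (or first) high; unset H compares as 0
END { for (i in L) print i, L[i], H[i] }
' FS=, OFS=, data.txt | sort -t, -k1,1
```

With the sample lines above this prints operation1,12321,83621 followed by operation2,72321,72321, now in a deterministic order.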
This also sorts the header, putting it on an undefined line number. So the shell logic for skipping the header, which assumes it to be on the first line, will not function as intended (it happens to work with the sample).
An alternative would be to use: tail -n +2 data.txt | sort -t, -k1,1 -k2,2n
(my approach in post #2 also did not take the header into account, it also happens to work but the order was undefined, so I made a change so that it gets skipped)
An alternative would be to do the splitting of values while reading the line:
while IFS=, read operation value
then the variable expansions would not need to be used.
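A minimal sketch of that idea, tracking a single min/max pair. The file name data.txt, the header line, and the single-group input are assumptions for illustration:

```shell
# Sketch: IFS=, makes read split each line on commas as it is read,
# so ${line##*,} style expansions are not needed.
# data.txt, the header, and a single group are illustrative assumptions.
printf '%s\n' 'Operation,Time-Taken' \
              'operation1,83621' \
              'operation1,12321' > data.txt

{
  read -r _header                      # discard the header line
  min='' max=''
  while IFS=, read -r operation value; do
    if [ -z "$min" ] || [ "$value" -lt "$min" ]; then min=$value; fi
    if [ -z "$max" ] || [ "$value" -gt "$max" ]; then max=$value; fi
  done
  printf '%s,%s\n' "$min" "$max"
} < data.txt
```

Setting IFS only on the read command keeps the change local to that one invocation, so the rest of the script's word splitting is unaffected.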
I thought the header was just some line of text that mukulverma2408 included to describe the fields. Given the misspelling, I deleted it from my first response.
However, if it is intended that such data files have headers, then one can invoke datamash to handle those:
#!/usr/bin/env bash
# @(#) s2 Demonstrate statistics for grouped data, datamash.
# Utility functions: print-as-echo, print-line-with-visual-space, debug.
# export PATH="/usr/local/bin:/usr/bin:/bin"
LC_ALL=C ; LANG=C ; export LC_ALL LANG
pe() { for _i;do printf "%s" "$_i";done; printf "\n"; }
pl() { pe;pe "-----" ;pe "$*"; }
em() { pe "$*" >&2 ; }
db() { ( printf " db, ";for _i;do printf "%s" "$_i";done;printf "\n" ) >&2 ; }
db() { : ; }
C=$HOME/bin/context && [ -f $C ] && $C datamash dixf
FILE=${1-data2}
E=expected-output.txt
pl " Input data file $FILE:"
cat $FILE
pl " Expected output:"
cat $E
pl " Results:"
datamash -t',' --header-in --sort --group=1 min 2 max 2 < $FILE |
tee f1
pl " Verify results if possible:"
C=$HOME/bin/pass-fail
[ -f $C ] && $C || ( pe; pe " Results cannot be verified." ) >&2
pl " Results for producing header on output:"
# datamash -t',' --header-in --header-out --sort --group=1 min 2 # max 2 < $FILE
datamash -t',' -H --sort --group=1 min 2 max 2 < $FILE
pl " Details for datamash:"
dixf datamash
exit 0
producing:
$ ./s2
Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution : Debian 8.7 (jessie)
bash GNU bash 4.3.30
datamash (GNU datamash) 1.0.6
dixf (local) 1.42
-----
Input data file data2:
Opeation,Time-Taken
operation1,83621
operation2,72321
operation3,13288
operation2,12312
operation1,12321
operation2,45455
operation2,42543
operation1,87934
operation4,94865
operation5,27383
operation6,322
operation6,93483
operation7,3223
-----
Expected output:
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223
-----
Results:
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223
-----
Verify results if possible:
-----
Comparison of 7 created lines with 7 lines of desired results:
Succeeded -- files (computed) f1 and (standard) expected-output.txt have same content.
-----
Results for producing header on output:
GroupBy(Opeation),min(Time-Taken),max(Time-Taken)
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223
-----
Details for datamash :
datamash command-line calculations (man)
Path : /usr/bin/datamash
Version : 1.0.6
Type : ELF 64-bit LSB executable, x86-64, version 1 (SYSV ...)
Help : probably available with -h,--help
Repo : Debian 8.7 (jessie)
Home : http://www.gnu.org/software/datamash
Thanks a lot for your comments, Scrutinizer. Those are all very valid points.
While writing the script, the biggest source of my consternation was the repetition of this construct
${line##*,}
in lines 19 and 21.
I would've loved to do this:
1) Extract the 2nd token from the line, assign it to a temp variable, and check if $temp is greater than $min.
2) If the comparison is true, i.e. $temp is greater than $min, then assign $temp to max.
Some languages like C, Perl and (I think) Python allow this.
Kind of like this idiom in K&R, which accomplishes three things: fetch, assignment and comparison.
while ((c = getchar()) != EOF) {
Your suggestion of using IFS along with the read function: