Emulate group-by in shell script

Hello All,

I saw this problem on one of the forums and solved it using GROUP BY in Oracle SQL, but I am curious how to implement it using a shell script:

There is a file containing a number of operations:

Opeation,Time-Taken
operation1,83621
operation2,72321
operation3,13288
operation2,12312
operation1,12321
operation2,45455
operation2,42543
operation1,87934
operation4,94865
operation5,27383
operation6,322
operation6,93483
operation7,3223

My task is to find the min and max time taken by each operation. Expected output:

operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

I have tried solving it using a combination of sort and uniq, but was not successful:

sort -nk 2 | uniq

Any help is highly appreciated.

Hi, try:

awk '
  NR==1 {
    next
  }

  !($1 in L) {
    L[$1]=$2
  }

  $2<L[$1] {
    L[$1]=$2
  } 

  $2>=H[$1] {
    H[$1]=$2
  }

  END {
    for(i in L) print i, L[i], H[i]
  }
' FS=, OFS=,  file

awk -F, -f muk.awk myFile OFS=, where muk.awk is:

FNR>1 {
   if (!($1 in amin))
       amin[$1]=amax[$1]=$2
   else {
      if($2<amin[$1]) amin[$1]=$2
      if($2>amax[$1]) amax[$1]=$2
   }
}
END {
  for (i in amin)
    print i, amin[i], amax[i]
}
perl -nalF',' -e '
 print "$F[0],MIN,MAX" and next if $. == 1;
 push @{$o{$F[0]}}, $F[1];
 END{ for( sort keys %o ){ @so = sort { $a <=> $b } @{ $o{$_} };
      print "$_,$so[0],$so[-1]" }
 }' mukulverma2408.file

Output:

Opeation,MIN,MAX
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

Hi.

As with the other non-Oracle solutions such as awk and perl, here is datamash:

#!/usr/bin/env bash

# @(#) s1       Demonstrate statistics for grouped data, datamash.

# Utility functions: print-as-echo, print-line-with-visual-space, debug.
# export PATH="/usr/local/bin:/usr/bin:/bin"
LC_ALL=C ; LANG=C ; export LC_ALL LANG
pe() { for _i;do printf "%s" "$_i";done; printf "\n"; }
pl() { pe;pe "-----" ;pe "$*"; }
em() { pe "$*" >&2 ; }
db() { ( printf " db, ";for _i;do printf "%s" "$_i";done;printf "\n" ) >&2 ; }
db() { : ; }
C=$HOME/bin/context && [ -f $C ] && $C datamash

FILE=${1-data1}
E=expected-output.txt

pl " Input data file $FILE:"
cat $FILE

pl " Expected output:"
cat $E

pl " Results:"
datamash -t',' --sort --group=1 min 2 max 2 < $FILE |
tee f1

pl " Verify results if possible:"
C=$HOME/bin/pass-fail
[ -f $C ] && $C || ( pe; pe " Results cannot be verified." ) >&2

pl " Details for datamash:"
dixf datamash

exit 0

producing:

$ ./s1

Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution        : Debian 8.7 (jessie) 
bash GNU bash 4.3.30
datamash (GNU datamash) 1.0.6

-----
 Input data file data1:
operation1,83621
operation2,72321
operation3,13288
operation2,12312
operation1,12321
operation2,45455
operation2,42543
operation1,87934
operation4,94865
operation5,27383
operation6,322
operation6,93483
operation7,3223

-----
 Expected output:
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

-----
 Results:
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

-----
 Verify results if possible:

-----
 Comparison of 7 created lines with 7 lines of desired results:
 Succeeded -- files (computed) f1 and (standard) expected-output.txt have same content.

-----
 Details for datamash:
datamash        command-line calculations (man)
Path    : /usr/bin/datamash
Version : 1.0.6
Type    : ELF 64-bit LSB executable, x86-64, version 1 (SYSV ...)
Help    : probably available with -h,--help
Repo    : Debian 8.7 (jessie) 
Home    : http://www.gnu.org/software/datamash

Best wishes ... cheers, drl


I have actually written a whole article about exactly this:

I hope this helps.

bakunin


Thanks everyone, that is really helpful :)


Hi Scrutinizer, thanks for the solution. The problem is that I am not very well versed in awk; it would be really helpful if you could explain what's happening here.

Sure:

awk '                     # Two associative arrays, L(ow) and H(igh), use the first field as the index
  NR==1 {                  
    next                  # Skip header
  }

  !($1 in L) {            # For every line, if no array element exists for the first field in array L,
    L[$1]=$2              # then create one with the second field as its value. This becomes the lowest number
  }

  $2<L[$1] {              # If the second field is lower than the array element in L for the first field, 
    L[$1]=$2              # then replace it with the second field as the new lowest number
  } 

  $2>=H[$1] {             # If the second field is higher than or equal to the array element in H for the first field,
    H[$1]=$2              # then replace it with the second field as the new highest number
  }

  END {                   # After the input file has been read
    for(i in L)           # For all elements in array L (which are the unique field values)
      print i, L[i], H[i] # Print the element name (the "operation") followed by the low and high values
  }
' FS=, OFS=,  file        # Specify a comma as both input and output field separators and specify the file name

Note that the order is not guaranteed, so you might want to pipe the output through sort.
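
For example, to keep the groups in order on the operation name ( '...' stands for the awk program above):

awk '...' FS=, OFS=, file | sort -t, -k1,1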

$
$ cat data.txt
Opeation,Time-Taken
operation1,83621
operation2,72321
operation3,13288
operation2,12312
operation1,12321
operation2,45455
operation2,42543
operation1,87934
operation4,94865
operation5,27383
operation6,322
operation6,93483
operation7,3223
$
$ cat -n group_data.sh
     1  #!/usr/bin/bash
     2  header="y"
     3  while read line
     4  do
     5      if [ ! -z "$header" ] ; then
     6          header=
     7          continue
     8      fi
     9      opr=${line%%,*}
    10      if [ -z "$prev_opr" ] ; then
    11          # set min, max if first data line
    12          min=${line##*,}
    13          max=$min
    14      elif [ "$opr" != "$prev_opr" ] ; then
    15          # if opr changes, print previous line and set min, max
    16          echo "$prev_opr,$min,$max"
    17          min=${line##*,}
    18          max=$min
    19      elif [ "${line##*,}" -gt "$min" ] ; then
    20          # if opr is same, set max if needed
    21          max=${line##*,}
    22      fi
    23      prev_opr=$opr
    24  done <<< "$(sort -t, -k1,1 -k2,2n data.txt)"
    25  echo "$prev_opr,$min,$max"
    26
$
$ ./group_data.sh
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223
$
$

--

This also sorts the header, putting it on an undefined line number. So the shell logic for skipping the header, which assumes it to be on the first line, will not function as intended (it happens to work with the sample).

An alternative would be to use: tail -n +2 data.txt | sort -t, -k1,1 -k2,2n

(My approach in post #2 also did not take the header into account; it also happens to work, but the order was undefined, so I made a change so that it gets skipped.)

An alternative would be to do the splitting of values while reading the line:

while IFS=, read operation value

Then the variable expansions would not need to be used.
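
A minimal sketch that combines both suggestions (assuming data.txt as in the posts above; the final echo sits inside the braces because the pipeline runs the loop in a subshell):

tail -n +2 data.txt | sort -t, -k1,1 -k2,2n | {
    while IFS=, read -r opr value
    do
        if [ "$opr" != "$prev_opr" ] ; then
            # operation changed: flush the previous group and reset min/max
            [ -n "$prev_opr" ] && echo "$prev_opr,$min,$max"
            min=$value
            max=$value
        elif [ "$value" -gt "$max" ] ; then
            # same operation: input is sorted, so only max can grow
            max=$value
        fi
        prev_opr=$opr
    done
    echo "$prev_opr,$min,$max"   # flush the last group
}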


Hi.

I thought the header was just a line of text that mukulverma2408 included to describe the fields. Given the misspelling, I deleted it from my first response.

However, if it is intended that such data files have headers, then one can invoke datamash to handle those:

#!/usr/bin/env bash

# @(#) s2       Demonstrate statistics for grouped data, datamash.

# Utility functions: print-as-echo, print-line-with-visual-space, debug.
# export PATH="/usr/local/bin:/usr/bin:/bin"
LC_ALL=C ; LANG=C ; export LC_ALL LANG
pe() { for _i;do printf "%s" "$_i";done; printf "\n"; }
pl() { pe;pe "-----" ;pe "$*"; }
em() { pe "$*" >&2 ; }
db() { ( printf " db, ";for _i;do printf "%s" "$_i";done;printf "\n" ) >&2 ; }
db() { : ; }
C=$HOME/bin/context && [ -f $C ] && $C datamash dixf

FILE=${1-data2}
E=expected-output.txt

pl " Input data file $FILE:"
cat $FILE

pl " Expected output:"
cat $E

pl " Results:"
datamash -t',' --header-in --sort --group=1 min 2 max 2 < $FILE |
tee f1

pl " Verify results if possible:"
C=$HOME/bin/pass-fail
[ -f $C ] && $C || ( pe; pe " Results cannot be verified." ) >&2

pl " Results for producing header on output:"
# datamash -t',' --header-in --header-out --sort --group=1 min 2 max 2 < $FILE
datamash -t',' -H --sort --group=1 min 2 max 2 < $FILE

pl " Details for datamash:"
dixf datamash

exit 0

producing:

$ ./s2

Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution        : Debian 8.7 (jessie) 
bash GNU bash 4.3.30
datamash (GNU datamash) 1.0.6
dixf (local) 1.42

-----
 Input data file data2:
Opeation,Time-Taken
operation1,83621
operation2,72321
operation3,13288
operation2,12312
operation1,12321
operation2,45455
operation2,42543
operation1,87934
operation4,94865
operation5,27383
operation6,322
operation6,93483
operation7,3223

-----
 Expected output:
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

-----
 Results:
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

-----
 Verify results if possible:

-----
 Comparison of 7 created lines with 7 lines of desired results:
 Succeeded -- files (computed) f1 and (standard) expected-output.txt have same content.

-----
 Results for producing header on output:
GroupBy(Opeation),min(Time-Taken),max(Time-Taken)
operation1,12321,87934
operation2,12312,72321
operation3,13288,13288
operation4,94865,94865
operation5,27383,27383
operation6,322,93483
operation7,3223,3223

-----
 Details for datamash:

datamash        command-line calculations (man)
Path    : /usr/bin/datamash
Version : 1.0.6
Type    : ELF 64-bit LSB executable, x86-64, version 1 (SYSV ...)
Help    : probably available with -h,--help
Repo    : Debian 8.7 (jessie) 
Home    : http://www.gnu.org/software/datamash

Best wishes ... cheers, drl

Thanks a lot for your comments, Scrutinizer. Those are all very valid points.

While writing the script, the biggest source of my consternation was the repetition of this construct

${line##*,}

in lines 19 and 21.
I would've loved to do this:
1) Extract the 2nd token from the line, assign it to a temp variable, and check if $temp is greater than $min.
2) If the comparison test is true, i.e. $temp is greater than $min, then assign $temp to max.

Some languages like C, Perl and (I think) Python allow this.
Kind of like this idiom in K&R, which accomplishes three things: fetch, assignment and comparison.

 while ((c = getchar()) != EOF) {
 

Your suggestion of using IFS along with read is a clean way of avoiding that repetition.