grep combines everything into one overall pattern to match in the file(s) and thus can only yield a single count. Either run four separate greps, one per keyword, or try
awk -vKEYS="Exception Fatal Error Exec" '
BEGIN {for (n=split(KEYS, T); n>0; n--) SRCH[T[n]]
}
{for (s in SRCH) if ($0 ~ s) CNT[s]++
}
END {for (c in CNT) print c, CNT[c]
}
' file
Fatal 1
Exec 1
Error 1
Exception 1
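The four-grep alternative mentioned above can be sketched like this (a minimal sketch; the heredoc recreates the data1 sample from later in the thread, since the exact input behind the awk output is not shown):

```shell
# Recreate the sample input (assumption: the data1 file shown further down)
cat > file <<'EOF'
Now is the time
This is an Exception
for all good men Exec
to come to the aid
Sparkle Farkle Error
of their country.
And now, this: Fatal
Exception, finally.
EOF

# One grep -c per keyword: four passes over the file,
# each counting the lines that contain that keyword
for kw in Exception Fatal Error Exec; do
    printf '%s %s\n' "$kw" "$(grep -c "$kw" file)"
done
# Exception 2
# Fatal 1
# Error 1
# Exec 1
```

Note that `grep -c` counts matching lines, not matches, so like the awk version it would undercount a line containing the same keyword twice.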
$ ./s1
Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution : Debian 8.7 (jessie)
bash GNU bash 4.3.30
grep (GNU grep) 2.20
sort (GNU coreutils) 8.23
uniq (GNU coreutils) 8.23
-----
Input data file data1:
Now is the time
This is an Exception
for all good men Exec
to come to the aid
Sparkle Farkle Error
of their country.
And now, this: Fatal
Exception, finally.
-----
Results:
1 Error
2 Exception
1 Exec
1 Fatal
See files f1, f2 for intermediate data. See man pages for details.
Perl solution for a log file whose line # 9 (contrived for the test) carries two keywords on the same line:
$
$ cat -n logfile.txt
1 this is line # 1
2 this is Exception line
3 this is line # 3
4 this is Fatal line
5 this is line # 5
6 this is Error line
7 this is Exec line
8 this is line # 8
9 this is Error line and Exception line
10 this is line # 10
11 this is Exception line
$
$ perl -lne '$h{$1}++ while(/(Exception|Fatal|Error|Exec)/g)}{print "$v $k" while(($k,$v)=each %h)' logfile.txt
1 Exec
2 Error
3 Exception
1 Fatal
$
$
The grep->sort->uniq pipeline also works on that input:
...
Input data file data2:
this is line # 1
this is Exception line
this is line # 3
this is Fatal line
this is line # 5
this is Error line
this is Exec line
this is line # 8
this is Error line and Exception line
this is line # 10
this is Exception line
-----
Results:
2 Error
3 Exception
1 Exec
1 Fatal
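The pipeline itself is elided above; one plausible form, assuming GNU grep's -o (print each match on its own line) and -E (extended regex alternation), which reproduces the counts shown:

```shell
# Recreate the data2 file from the post
cat > data2 <<'EOF'
this is line # 1
this is Exception line
this is line # 3
this is Fatal line
this is line # 5
this is Error line
this is Exec line
this is line # 8
this is Error line and Exception line
this is line # 10
this is Exception line
EOF

# -o emits one line per match (so line 9 yields both Error and
# Exception), sort groups identical keywords, uniq -c counts them
grep -oE 'Exception|Fatal|Error|Exec' data2 | sort | uniq -c
# counts: 2 Error, 3 Exception, 1 Exec, 1 Fatal
```

Unlike `grep -c`, this counts matches rather than matching lines, which is why line 9 is credited to both keywords.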