Conditional Looping In Files

I have a requirement where I need to read data from multiple files and count the rows that satisfy a condition, e.g.:
FILE1:

Col1 Col2 Col3
12 ab cd
15 de fg
25 gh tm

FILE2:

Col1 Col2 Col3
21 ab1 cd1
13 de1 fg1
25 gh1 tm1
---
---

FILE-N...

I need to find the count of rows where Col1 is between 10 and 20. The result should look like:
FILE1.txt 2
FILE2.txt 1
-----------
FILE-N.txt n

awk '$1>9 && $1<21 {a[FILENAME]++} END {for (i in a) print i, a[i]}' file1 file2 fileN
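For example, assuming the sample data above is saved as FILE1.txt and FILE2.txt (file names just for illustration), running the one-liner prints one line per file with its count, in no particular order:

awk '$1>9 && $1<21 {a[FILENAME]++} END {for (i in a) print i, a[i]}' FILE1.txt FILE2.txt
FILE1.txt 2
FILE2.txt 1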

Thanks for the response. It would be great if you could explain it a little. I am new to shell scripting.

awk reads all the files one by one and runs the commands on every line of every file.
$1>9 && $1 <21 matches when column #1 has a value greater than 9 and less than 21.
For every match, a[FILENAME]++ creates (or updates) an array a indexed by the current file name;
the ++ adds one to that file's counter for every hit.

After all the files are counted, the END {for (i in a) print i, a[i]} block loops over every index that was created (i in a) and prints it together with its value, i.e. the file name and its count.

PS: arrays take some time to understand, but reading and testing will help a lot.
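If it helps, here is a small standalone example of the same array idea (made-up fruit data, not from the thread): the array count is indexed by the value in column 1, and every occurrence adds one to that index's counter.

printf 'apple\napple\npear\n' | awk '{count[$1]++} END {for (k in count) print k, count[k]}'

This prints apple 2 and pear 1 (the order of for (k in count) is not guaranteed).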

awk is the precise solution, and easily adaptable to other conditions.
A quick and dirty solution is

egrep -c '^(1[0-9]|20)[[:blank:]]' FILE*.txt

but you cannot modify the printout format, and for a changing condition you'll have to develop a new regular expression.
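To illustrate (using 30-40 as a made-up example range): if the condition changed to Col1 between 30 and 40, the awk version only needs the two boundary numbers changed, while the egrep pattern has to be rebuilt from scratch.

awk '$1>29 && $1<41 {a[FILENAME]++} END {for (i in a) print i, a[i]}' FILE*.txt
egrep -c '^(3[0-9]|40)[[:blank:]]' FILE*.txt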

@MadeInGermany: this would count "1 abc def" as well, or "200 xyz sss" ...


Thanks, I have improved the match in my post.