Counting lines in multiple files

Hi,
I have couple of .txt files (say 50 files) in a folder.

For each file:
I need to get the number of lines in each file, then subtract 1 (I want to exclude the header).

Then sum the counts of all files and output the total sum.

Is there an efficient way to do this using shell programming or awk?

Please let me know.

LA

Use gawk, nawk or /usr/xpg4/bin/awk on Solaris:

awk 'END{print NR-(ARGC-1)}' *
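As a quick sanity check, here is a self-contained run of that command on two sample files (file names and contents invented for the demo):

```shell
# Create two sample files, each with a one-line header plus data lines.
tmp=$(mktemp -d)
printf 'header\na\nb\nc\n' > "$tmp/one.txt"   # 3 data lines
printf 'header\nx\ny\n'    > "$tmp/two.txt"   # 2 data lines

# NR is the cumulative line count over all input files; ARGC-1 is the
# number of file arguments, i.e. one header per file to subtract.
total=$(awk 'END{print NR-(ARGC-1)}' "$tmp"/*.txt)
echo "$total"   # prints 5
```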

Sorry I made a mistake in the above post.

Your reply was good, but it was not actually what I needed.

Can we modify the above awk command to get, for each file, the number of lines minus the header, and output this into another text file?

i.e. the output file will contain the number of lines - 1 for each file.

Please let me know.

LA

Something like:

awk '{a[FILENAME]++}END{for(i in a)print i,(a[i]-1)}' * > newfile

I modified Danmero's code slightly to also include totals:

awk '{a[FILENAME]++}END{for(i in a){r=a[i]-1;t+=r;print i,r}print "total "t}' *.txt
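To see the per-file report in action, here is a self-contained run with made-up files; note the count is read back as a[i]:

```shell
# Sample files with a one-line header each (names invented for the demo).
tmp=$(mktemp -d)
printf 'header\na\nb\nc\n' > "$tmp/one.txt"   # 3 data lines
printf 'header\nx\ny\n'    > "$tmp/two.txt"   # 2 data lines

# Count lines per file in array a, keyed by FILENAME; in END, print each
# file's count minus the header and accumulate the grand total in t.
# (for (i in a) visits files in unspecified order.)
report=$(cd "$tmp" && awk '{a[FILENAME]++}
    END{for(i in a){r=a[i]-1; t+=r; print i, r} print "total " t}' *.txt)
printf '%s\n' "$report"
```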

Alternative 1 using pipe:

grep -c '' *.txt|awk -F: '{$2-=1;t+=$2;print} END{print "Total "t}'

Alternative 2 using pipe:

wc -l *.txt|awk '$2=="total"{$1+=1-n}{$1-=1; n++}1'

And a variation of Scrutinizer's last one-liner:

wc -l * | perl -lne '@x=split; print /total/ ? $x[0]-$.+1 : $x[0]-1," ",$x[1]'

tyler_durden

And another one:

wc -l * | awk '(/total/&&$1-=NR-1)||--$1'

It doesn't handle 0-record files correctly :)

Or:

wc -l *|perl -pe's/(\d+)/$x=$1;m|total|?$x-($.-1):$1-1/e'

Another 0-record-challenged solution (thanks, radoulov):

wc -l *.txt |awk '/ total$/{$1+=2-NR}{$1--}1'

It does not choke on files that contain the word 'total', except files actually called total ;)
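For completeness, the same per-file report and grand total can be done in plain POSIX shell with no awk at all; a minimal sketch, with file names invented for the demo:

```shell
# Per-file line count minus one header line, plus a grand total,
# using only wc and shell arithmetic (POSIX sh).
tmp=$(mktemp -d)
printf 'header\na\nb\nc\n' > "$tmp/one.txt"   # 3 data lines
printf 'header\nx\ny\n'    > "$tmp/two.txt"   # 2 data lines

total=0
for f in "$tmp"/*.txt; do
    n=$(wc -l < "$f")       # redirect so wc prints only the number
    n=$((n - 1))            # drop the header line
    total=$((total + n))
    printf '%s %d\n' "${f##*/}" "$n"
done
printf 'total %d\n' "$total"   # prints: total 5
```

The `wc -l < "$f"` redirection avoids having to strip the file name from wc's output, and shell arithmetic tolerates the leading spaces some wc implementations emit.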