Please always tell us what shell and operating system you're using when you start a new thread. Don't assume that everyone who wants to help you has read all of your previous threads.
#!/bin/bash
tmpf="/tmp/$$.result"
trap 'rm -f "$tmpf"' EXIT
awk '
# Print the counts for the file just finished, then reset the counters.
function dump() {
	print linecount, distinct, lastfile
	linecount = distinct = 0
	split("", lines)	# portable idiom to empty the array
}
# awk has moved on to a new input file; report on the previous one.
FILENAME != lastfile {
	if (lastfile)
		dump()
	lastfile = FILENAME
}
{	linecount++
	if (lines[$0]++ == 0)	# first occurrence of this line in this file
		distinct++
}
END {	dump()	# report on the last file
}' * > "$tmpf"
echo 'Sorted by increasing number of lines in files:'
sort -n "$tmpf"
echo 'Sorted by increasing number of distinct lines in files:'
sort -k2,2n "$tmpf"
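For illustration, a hypothetical run (assuming the script is saved elsewhere as count_lines.sh and run from a directory containing only two invented data files; the names and counts below are made up) might produce output shaped like this, with each line giving total lines, distinct lines, and file name:
$ ./count_lines.sh
Sorted by increasing number of lines in files:
80 75 urls_b.txt
100 40 urls_a.txt
Sorted by increasing number of distinct lines in files:
100 40 urls_a.txt
80 75 urls_b.txt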
Note that this should work with any version of awk (but on Solaris systems, you'll need to use nawk or /usr/xpg4/bin/awk).
FYI - just tried this; it printed correct line counts, but the unique counts were off. I will check the others and update.
Thanks nezabudka!! This seems to work with gawk -- thanks also vgersh99 for pointing out gawk. I tried your different gawk version, but the counts are still off, as in your original solution -- maybe the uniq is not being done in the correct order?
Thanks Don Cragun -- this also works!
Thanks MadeInGermany -- the first gives the same unique count as vgersh99's, the second works for me. Maybe my perception of "unique" is incorrect.
I'm getting my unique count by:
sort filename | uniq | wc -l
The contents of my files are URLs if that makes a difference.
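As a side note, the same distinct count can also be obtained with a standard awk idiom (not taken from any of the posts in this thread), which prints each line only the first time it is seen:
awk '!seen[$0]++' filename | wc -l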
You might note that the suggestion in post #5 in this thread invokes awk (using only standard awk features) once and sort twice, producing both of the requested sorted outputs. Unlike some of the scripts in this thread, it doesn't need multiple invocations of sort or tr per file processed. And, the awk script processes one file at a time, keeping only the unique lines from that file in memory (rather than keeping the unique lines from all of the files being processed). When the files being processed contain tens of thousands of input lines and tens of thousands of lines from most of those files are unique, that can chew up a lot of system resources.
And, although most of us corrected the use of sort without the n flag when sorting numeric values, none of us said why we did that. (If you use sort without the n flag, the sort performed is an alphanumeric sort; not a numeric sort. So, for example, the string 9 is alphanumerically greater than the string 100000 because the leading digit 9 in the first string is greater than the leading digit 1 in the second string. When the n flag is given to sort, it performs a numeric sort instead of an alphanumeric sort for the key fields to which the flag is attached.)
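A quick way to see the difference for yourself (the printf input here is just a two-value sample):
printf '9\n100000\n' | sort       # alphanumeric: 100000 sorts before 9
printf '9\n100000\n' | sort -n    # numeric: 9 sorts before 100000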
If you change:
gawk '{l[$0]++} ENDFILE {for (i in l) {if (l[i]==1) u++;t+=l[i]} print t, u, FILENAME; delete l; u=t=0}' *
to:
gawk '{l[$0]++} ENDFILE {for (i in l) {u++;t+=l[i]} print t, u, FILENAME; delete l; u=t=0}' *
I think you'll get the results you want. (But, I don't have gawk installed on my system to verify that it works.)
Note that each subscript value represents a unique input line, so no test is needed to count the number of unique lines in a file. The test in the original code only counts a line as unique if it appears in the file exactly once.
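A small demonstration of the two interpretations, using a made-up three-line file:
printf 'a\na\nb\n' > /tmp/demo
sort /tmp/demo | uniq | wc -l      # distinct lines: prints 2
sort /tmp/demo | uniq -u | wc -l   # lines that appear exactly once: prints 1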