getting duplicates

How do I find duplicate values in a file containing data in columns, using a command or a script?

Please explain more clearly.

You could use the uniq command to avoid duplicates...

checking all columns against each other

awk '{ for(i=1; i<=NF; i++) {arr[$i]++} }
     END{ for(i in arr) { if (arr[i] > 1) {print i} } }' file
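To see what this does in practice, here is a minimal run against a hypothetical sample file (the file name and data are made up for illustration): every field on every line is counted, and any value seen more than once is printed at the end.

```shell
# create a small sample file (hypothetical data; "a" appears twice)
printf 'a b c\nd a e\nf g h\n' > /tmp/dups_sample.txt

# count every field across all rows, then print values seen more than once
dups=$(awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
            END { for (v in count) if (count[v] > 1) print v }' /tmp/dups_sample.txt)
echo "$dups"
# prints: a
```

Note that the values come out in awk's arbitrary array-iteration order; pipe through sort if you need them ordered.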

finding duplicates in a given column (here, column 4):

awk 'arr[$4]++' file | sort -u
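A quick demonstration of that one-liner, using a hypothetical sample file: `seen[$4]++` is false the first time a column-4 value appears and true on every later occurrence, so awk prints only the repeat lines; `sort -u` then removes any exact duplicate lines from the output.

```shell
# hypothetical sample: column 4 holds "A" three times and "B" once
printf '1 x y A\n2 p q B\n3 r s A\n4 t u A\n' > /tmp/col4_sample.txt

# print every line after the first occurrence of its column-4 value
dup_lines=$(awk 'seen[$4]++' /tmp/col4_sample.txt | sort -u)
echo "$dup_lines"

# variant: list just the duplicated column-4 values, once each
dup_vals=$(awk 'seen[$4]++ { print $4 }' /tmp/col4_sample.txt | sort -u)
echo "$dup_vals"
# prints: A
```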

Thank you Jim, for your reply..:b:

a small doubt..

awk '{ for(i=1; i<=NF; i++) {arr[$i]++} }
     END{ for(i in arr) { if (arr[i] > 1) {print i} } }' file

I am a newbie at using awk... I want to know what NF is in the above code.

It should be the number of columns in the file, if I am not wrong. :rolleyes:
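Close: NF is awk's built-in count of fields (columns) on the *current* line, not on the whole file, so it can differ from line to line. A one-line check:

```shell
# NF is evaluated per record: this line has 3 whitespace-separated fields
nf=$(echo "one two three" | awk '{ print NF }')
echo "$nf"
# prints: 3
```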