How to extract duplicate rows

I have searched the internet for ways to extract duplicate rows.
All I have found is how to extract the unique rows or how to eliminate duplicates.

How do I extract the duplicate rows from a flat file in Unix?
I'm using the Korn shell on HP-UX.

For example:
FlatFile.txt

123:456:678
123:456:678
123:456:876
345:457:987
345:457:987
345:123:745

The output should be
OutPutFile.txt

123:456:678
345:457:987

I appreciate your help in advance. Thanks

awk '
{ s[$0]++ }               # count occurrences of each whole line
END {
  for (i in s) {
    if (s[i] > 1) {       # print each line that occurred more than once
      print i
    }
  }
}' file
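
For example, run against the sample FlatFile.txt above, this prints the two duplicated rows (possibly in a different order, since awk does not guarantee the traversal order of a for-in loop):

$ awk '{s[$0]++} END {for (i in s) if (s[i] > 1) print i}' FlatFile.txt
123:456:678
345:457:987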

Regards


Or, of course, if sorting is not a problem:

sort filename | uniq -d
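
Here sort brings identical lines together and uniq -d prints one copy of each repeated line. On the sample FlatFile.txt:

$ sort FlatFile.txt | uniq -d
123:456:678
345:457:987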

Great, both scripts worked.

Thanks Franklin 52 and radoulov

This does not work when there are spaces between the fields?

For example:

1231080 5000104891 21592002082811037
1231080 5000104892 27492002082821037
1231080 5000104891 21592002082811037
1231080 5000104892 27492002082821037
934262 5000021182 27502002040110518
934262 5000021181 21552002040120518
934262 5000021182 27502002040110518
934262 5000021181 21552002040120518

What does not work when there are spaces? $0 in awk refers to the entire row, spaces and all.
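
As a quick check, running the same awk one-liner against the space-separated sample (saved here as SpacedFile.txt, a filename chosen just for this illustration) prints all four distinct lines, since each appears twice; again, the output order may vary:

$ awk '{s[$0]++} END {for (i in s) if (s[i] > 1) print i}' SpacedFile.txt
1231080 5000104891 21592002082811037
1231080 5000104892 27492002082821037
934262 5000021182 27502002040110518
934262 5000021181 21552002040120518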