Get duplicate rows from a CSV file

How can I get the duplicate rows from a file using Unix? For example, I have data like:

a,1
b,2
c,3
d,4
a,1
c,3
e,5

I want the output to be:

a,1
c,3

Try:

awk '++A[$0]==2' file
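
For what it's worth, ++A[$0]==2 counts how many times each complete line has been seen and becomes true exactly on the second occurrence, so every duplicated line is printed once, at its second appearance. A longhand equivalent of the same one-liner:

awk '{ count[$0]++; if (count[$0] == 2) print }' file   # print each line the 2nd time it is seen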

Hi.

Also for the data on file z5:

$ sort z5 | uniq -d
a,1
c,3

on a system like:

OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution        : Debian 8.8 (jessie) 
sort (GNU coreutils) 8.23
uniq (GNU coreutils) 8.23
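
A note on this approach: uniq -d prints one copy of each repeated line and relies on the input being sorted, which is why the sort in front is needed. If you want every occurrence of the duplicated lines instead, GNU uniq also has -D (--all-repeated), which for the sample data should give:

$ sort z5 | uniq -D
a,1
a,1
c,3
c,3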

Best wishes ... cheers, drl

@scrutinizer, thanks, it really worked. Could you please help me with the same case where I have to find the duplicate rows in a CSV even if column 2 is not equal? For example, if I have:

a,1,2
a,3,2
b,2,4

my output should be:

a,1,2
a,3,2

because the 2nd column values are not equal while the rest of the fields are the same.

Try something like:

awk -F, 'NR==FNR {A[$1,$3]++; next} A[$1,$3]>1'  file file

or

awk -F, 'NR==FNR {A[$1,$3]++; next} A[$1,$3]>1 && !B[$0]++' file file

if there can be multiple occurrences of the same record and they only need to be printed once.
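
In case the NR==FNR idiom is unfamiliar: NR is the global record counter and FNR resets for each input file, so NR==FNR is only true while awk reads the first copy of the file. That pass just counts the column-1/column-3 pairs; the second pass then prints the records whose pair occurred more than once. The first variant, spelled out with comments:

awk -F, '
    NR == FNR { A[$1,$3]++; next }   # pass 1: count (col 1, col 3) pairs
    A[$1,$3] > 1                     # pass 2: print rows whose pair repeats
' file file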

--
Note that the input file needs to be specified twice.
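
If reading the input twice is awkward (for instance when it comes from a pipe), a rough single-pass sketch is to buffer the lines in memory instead, assuming the file is small enough for that:

awk -F, '
    { cnt[$1,$3]++; line[NR] = $0; key[NR] = $1 SUBSEP $3 }   # count pairs, remember each line
    END { for (i = 1; i <= NR; i++) if (cnt[key[i]] > 1) print line[i] }
' file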