Extract and exclude rows based on duplicate values

Hello

I have a file like this:

> cat examplefile
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss
>

I can sort the file by the second field like this:

> sort -t\| +1 examplefile
abc|NN190486|qqq
def|NN190486|www
ghi|NN603762|eee
jkl|NN607265|rrr
mno|NN607265|ttt
pqr|NN613879|yyy
vwx|NN615002|iii
stu|NN615002|uuu
yzA|NN618555|ooo
EFG|NN628717|aaa
BCD|NN628717|ppp
HIJ|NN628717|sss
>

But how can I also extract *only* the records where field 2 is duplicated, and separately *only* those where field 2 isn't duplicated?

So I want to get these 2 separate outputs:

duplicated rows:

abc|NN190486|qqq
def|NN190486|www
jkl|NN607265|rrr
mno|NN607265|ttt
vwx|NN615002|iii
stu|NN615002|uuu
EFG|NN628717|aaa
BCD|NN628717|ppp
HIJ|NN628717|sss

non-duplicated rows:

ghi|NN603762|eee
pqr|NN613879|yyy
yzA|NN618555|ooo

I've seen some discussions where uniq -D is suggested, but that flag isn't available in my version of uniq (I'm working on Solaris). I suspect something clever can be done with awk to solve this, but I'm afraid that's beyond me.

Any advice appreciated.

Chris

Here you go -

# cat examplefile.txt
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss

## Duplicate
# awk 'BEGIN { FS="|" } { c[$2]++; l[$2,c[$2]]=$0 } END { for (i in c) { if (c[i] > 1) for (j = 1; j <= c[i]; j++) print l[i,j] } }' examplefile.txt
stu|NN615002|uuu
vwx|NN615002|iii
mno|NN607265|ttt
jkl|NN607265|rrr
BCD|NN628717|ppp
EFG|NN628717|aaa
HIJ|NN628717|sss
def|NN190486|www
abc|NN190486|qqq

## Non-Duplicate
# awk 'BEGIN { FS="|" } { c[$2]++; l[$2,c[$2]]=$0 } END { for (i in c) { if (c[i] <= 1) for (j = 1; j <= c[i]; j++) print l[i,j] } }' examplefile.txt
yzA|NN618555|ooo
ghi|NN603762|eee
pqr|NN613879|yyy
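
In case the one-liner logic isn't obvious, here is the same idea written out with comments. This is only a sketch; the array names c and l match the one-liners above, and it prints the duplicated rows (switch the test to c[i] == 1 for the non-duplicated ones):

awk 'BEGIN { FS = "|" }
{
        c[$2]++                 # how many times this field-2 value has been seen so far
        l[$2, c[$2]] = $0       # remember the whole line, keyed by value and occurrence number
}
END {
        for (i in c)
                if (c[i] > 1)                       # value occurs on more than one line
                        for (j = 1; j <= c[i]; j++)
                                print l[i, j]
}' examplefile.txt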

Hello CHoggarth,

Could you please try the following and let me know if it helps you.

awk -F"|" '{A[$2]=A[$2]?A[$2] ORS $0:$0;B[$2]++} END{for(i in B){if(B>1){print A > "more_than_one_file"} else {print A > "one_time_occurence_file"}};} END{system("echo more_than_one_file;sort -k1 more_than_one_file;echo;echo one_time_occurence_file;sort -k1 one_time_occurence_file")}'   Input_file

A non-one-liner solution for the same:

awk -F"|" '{
                A[$2]=A[$2]?A[$2] ORS $0:$0;
                B[$2]++
           }
                END{
                        for(i in B){
                                        if(B>1){
                                                        print A > "more_than_one_file"
                                                  }
                                        else      {
                                                        print A > "one_time_occurence_file"
                                                  }
                                   };
                   }
                END{
                        system("echo more_than_one_file;\
                                sort -k1 more_than_one_file;\
                                echo;\
                                echo one_time_occurence_file;\
                                sort -k1 one_time_occurence_file")
                   }
           '   Input_file
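
As a quick usage sketch (only the file names differ from the script above: examplefile stands in for Input_file, and the final system() step is dropped in favour of plain cat), on Solaris with nawk:

nawk -F"|" '{A[$2]=A[$2]?A[$2] ORS $0:$0;B[$2]++}
     END{for(i in B){if(B[i]>1){print A[i] > "more_than_one_file"}
                     else      {print A[i] > "one_time_occurence_file"}}}' examplefile
cat more_than_one_file        # rows whose field 2 occurs more than once, grouped together
cat one_time_occurence_file   # rows whose field 2 occurs exactly once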
 

Thanks,
R. Singh

If you'd rather create both files in one step and you don't need the output files to be sorted, the following reads your sample input file twice and produces two output files: duplicated will contain the lines whose field 2 value appears on more than one line, and non-duplicated will contain the lines whose field 2 value appears only once:

awk -F '|' '
NR == FNR {	# first pass: count how often each field-2 value occurs
	c[$2]++
	next
}
c[$2] == 1 {	# second pass: this field-2 value appeared only once
	print > "non-duplicated"
	next
}
{	# second pass: the field-2 value is duplicated
	print > "duplicated"
}' examplefile examplefile

If you need the output files to be in sorted order, sort the input file before running the above.

If you are running this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
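
For instance, a minimal sketch of that combination (the temporary file name is only an illustration), sorting on field 2 first and then running the two-pass script with the XPG4 awk:

sort -t'|' -k2,2 examplefile > examplefile.sorted
/usr/xpg4/bin/awk -F '|' '
NR == FNR { c[$2]++; next }
c[$2] == 1 { print > "non-duplicated"; next }
{ print > "duplicated" }
' examplefile.sorted examplefile.sorted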

Try also

sort -t"|" -k2,2 file | uniq -s4 -w8 -D
abc|NN190486|qqq
def|NN190486|www
jkl|NN607265|rrr
mno|NN607265|ttt
stu|NN615002|uuu
vwx|NN615002|iii
BCD|NN628717|ppp
EFG|NN628717|aaa
HIJ|NN628717|sss
sort -t"|" -k2,2 file | uniq -s4 -w8 -u
ghi|NN603762|eee
pqr|NN613879|yyy
yzA|NN618555|ooo
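
Note that this relies on the fixed layout of the sample data: -s4 skips the 3-character first field plus its delimiter, and -w8 compares only the 8-character ID in field 2. The -w and -D options aren't in the stock Solaris uniq (as the question notes for -D), but GNU uniq provides both, so this assumes a GNU uniq is installed. A sketch that writes both result files in one go under that assumption:

sort -t'|' -k2,2 examplefile | uniq -s4 -w8 -D > duplicated       # field-2 value occurs more than once
sort -t'|' -k2,2 examplefile | uniq -s4 -w8 -u > non-duplicated   # field-2 value occurs exactly once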

Many thanks for all the replies. I'm likely going with the solution from Mannu2525, which worked fine on my Solaris server once I changed awk to nawk.

I'll try to find time to try the others & let you all know how it goes.

I consider myself a reasonable shell scripter but these awk constructs are well beyond my abilities. Thanks again.

Chris