Delete duplicate records from a tilde-delimited file

Hi All,

I want to delete duplicate records from a tilde-delimited file. The criterion is the combination of the first two fields, which has to be unique. Below is a sample of the records in the input file:

1620000010338~2446694087~0~20061130220000~A00BCC1CT
1620000126196~2446694087~0~20061130220000~A00BCC1CT
1620000126196~2446694087~1~20061430220000~A00BCC1CT
1620000127475~2446694087~0~20061130220000~A00BCC1CT
1620000134743~2446694087~0~20061130220000~A00BCC1CT
1620000134743~2446694087~0~20060930220000~A00BCC1CT

Here we notice that records 3 and 6 are duplicates (they repeat the first two fields of records 2 and 5). Please let me know how to do this in a shell script.

Thanks in Advance

 awk '!x[$1,$2]++' FS="~" filename

Use nawk or /usr/xpg4/bin/awk on Solaris.

nawk -F'~' '!a[$1,$2]++' myFile
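For reference, the `!a[$1,$2]++` idiom keeps only the first line for each field-1/field-2 pair: the array element starts out 0 (false), so the negation is true the first time a key is seen, and the post-increment makes it false for every later occurrence. A quick sketch against the sample data above (the here-document is just for illustration; in practice you would pass the filename):

```shell
# Print each record only the first time its (field 1, field 2) pair appears.
awk -F'~' '!a[$1,$2]++' <<'EOF'
1620000010338~2446694087~0~20061130220000~A00BCC1CT
1620000126196~2446694087~0~20061130220000~A00BCC1CT
1620000126196~2446694087~1~20061430220000~A00BCC1CT
1620000127475~2446694087~0~20061130220000~A00BCC1CT
1620000134743~2446694087~0~20061130220000~A00BCC1CT
1620000134743~2446694087~0~20060930220000~A00BCC1CT
EOF
```

This prints records 1, 2, 4, and 5, dropping records 3 and 6, and preserves the input order.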

And another one (if sorted output is acceptable; note that with `-u` it is unspecified which record of each duplicate set survives):

sort -ut~ -k1,2 filename

In perl (note that iterating over hash keys does not preserve the input order, and the last duplicate wins; the fields are joined with a `~` in the key so that adjacent fields cannot collide, e.g. `12`+`345` vs. `123`+`45`):

perl -e ' while (<>) { chomp; my @arr = split(/~/); $fileHash{"$arr[0]~$arr[1]"} = $_ } foreach my $k ( keys %fileHash ) { print $fileHash{$k} . "\n" } ' filename

Thanks a lot, guys, for your replies; they helped me a lot.