Removal of Duplicate Entries from a File

I have a file that consists of 1000 entries, 500 of which are duplicates. I want to remove the first occurrence of each duplicate entry (i.e. the entire line) from the file.

An example of the file is shown below:
8244100010143276|MARISOL CARO||MORALES|HSD768|CARR 430 KM 1.7
8244100010143276|MARISOL CARO||MORALES|New512|CARR 430 KM 1.7
8244100010196084|CARMEN L||VELEZ|Internet128|BO
8244100010196084|CARMEN L||VELEZ|Internet128|BO

In the above example, I have to remove the first duplicate entry for each key, i.e. the first lines for 8244100010143276 and 8244100010196084.

Please help me resolve this issue.

Thanks in advance

Use nawk or /usr/xpg4/bin/awk on Solaris:

awk -F\| 'after[$1]++' infile
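
How it works: with | as the field separator, the pattern after[$1]++ uses the first field as an array key. It evaluates to 0 (false) the first time a key is seen, so that line is not printed, and to a non-zero (true) value on every later occurrence, so those lines are printed. Note that this also drops lines whose key appears only once; it matches your request only because every key in your sample occurs at least twice.

A minimal usage sketch, assuming you want to write the result to a new file and then replace the original (outfile is just a placeholder name):

nawk -F\| 'after[$1]++' infile > outfile    # keep only the 2nd and later occurrences of each key
mv outfile infile                           # assumption: overwrite the original file with the result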