Delete rows that have a duplicate value in a column

I'm trying to remove lines of data that contain duplicate data in a specific column.

For example:

apple 12345
apple 54321
apple 14234
orange 55656
orange 88989
orange 99898

I only want to see

apple 12345
orange 55656

How would I go about doing this?

#cat test.log
apple 12345
apple 54321
apple 14234
orange 55656
orange 88989
orange 99898

#awk 'a !~ $1; {a=$1}' test.log
apple 12345
orange 55656
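The one-liner above compares each line's first field to the field saved from the previous line, so it only removes duplicates that are adjacent in the file. Note that `a !~ $1` treats the field as a regular expression; a plain string comparison (`$1 != a`) is a safer sketch of the same idea:

```shell
# Keep the first line of each run of identical column-1 values.
# String comparison ($1 != a) avoids regex surprises from a !~ $1.
printf 'apple 12345\napple 54321\norange 55656\norange 88989\n' \
  | awk '$1 != a; {a=$1}'
# prints:
# apple 12345
# orange 55656
```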

The usual awk paradigm:

nawk '!a[$1]++' myFile

or

sort -u -k1,1 myFile
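Unlike the previous-line comparison, the `!a[$1]++` idiom records every column-1 value it has seen in an associative array, so it keeps the first occurrence even when duplicates are not adjacent. A quick demonstration on deliberately unsorted input:

```shell
# !a[$1]++ prints a line only the first time its column-1 value appears,
# regardless of where the duplicates sit in the file.
printf 'apple 12345\napple 54321\norange 55656\napple 14234\n' \
  | awk '!a[$1]++'
# prints:
# apple 12345
# orange 55656
```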

Hi Friends,

Can anybody change the above script, awk 'a !~ $1; {a=$1}' test.log, to keep the last repeated entry and delete all the previous duplicates?
For example, if the input file is

1 2 3 4
2 2 4 5

the column 2 fields repeat, so I want the output to be 2 2 4 5, not 1 2 3 4.
Thanks in advance..

Sort it in reverse order and use the same command:

sort -r -k2,2 filename | awk 'a !~ $2; {a=$2}'
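One caveat with the reverse-sort approach: `sort` is not guaranteed to be stable, so lines with equal keys may not stay in their original order. A sketch that reliably keeps the last occurrence is to reverse the file with `tac` (GNU coreutils; BSD systems use `tail -r` instead), keep the first occurrence, then reverse back:

```shell
# Keep the LAST line for each column-2 value: reverse the file,
# keep the first occurrence of each key, then restore the order.
printf '1 2 3 4\n2 2 4 5\n' | tac | awk '!a[$2]++' | tac
# prints:
# 2 2 4 5
```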

-Devaraj Takhellambam

Thanks Devaraj,

Actually, my application is slightly different, but your idea satisfied my needs with a slight modification to my raw data. Thank you very much.