Removing duplicates from log file?

I have a log file with posts looking like this:
[arrival time]-[message]-[id number]

Messages can be delivered by different systems at different times, so the same message may arrive more than once. The id number is what identifies duplicate messages. What I need is to strip the arrival time from each post, sort the posts by id number, remove the duplicates, and reattach the arrival time to each respective post.

Raw data:

200901211501-hello1-010
200901211504-hello1-010
200901211507-hello2-052
200901211512-hello2-052
200901211522-hello3-713
200901211544-hello4-220
200901211559-hello5-117
200901211612-hello5-117
200901211630-hello5-117

This is what I want to achieve (dupes removed):

200901211501-hello1-010
200901211507-hello2-052
200901211522-hello3-713
200901211544-hello4-220
200901211559-hello5-117

I've tried different combinations of sed and uniq, but I can't seem to get it right.
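
One of my attempts, roughly (a sketch, not my exact commands):

# strip the arrival time, then sort and remove duplicates
sed 's/^[0-9]*-//' file | sort | uniq

That does get rid of the duplicates, but the arrival times are gone by then and I can't see how to reattach them.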

Grateful for any help.

This awk one-liner should do it:

awk -F- '!a[$2$3]++' file > newfile

Use nawk or /usr/xpg4/bin/awk on Solaris if you get errors.
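
In case it helps to see what it does, here is the same logic written out with comments (a sketch; the field meanings are assumed from your sample data):

awk -F- '
    # With -F-, the fields are: $1 = arrival time, $2 = message, $3 = id
    # a[$2$3]++ counts how many times this message/id pair has been seen.
    # The ! makes the pattern true only on the first sighting (count 0),
    # and a true pattern with no action prints the whole line,
    # arrival time included.
    !a[$2$3]++
' file > newfile

Because the first occurrence wins, each id keeps its earliest arrival time, assuming the log is already in time order, as your sample is. If duplicates might carry differing message text, you could key on the id alone instead: !a[$3]++.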

Regards

Thanks a lot!