identifying duplicate entries

Hi all,
I have a large log file and was wondering if there is an easy way on a Solaris box to grep out duplicate entries based on the email address?

Sample log file:

2010-06-19,04:08:12,235632470,2010-06-18T00:00:00.000+12:00,zinny123@hotmail.com
2010-06-19,04:09:57,235632470,2010-06-18T00:00:00.000+12:00,zinny123@hotmail.com
2010-06-19,04:28:36,223906214,2010-06-18T00:00:00.000+12:00,zkml123@xtra.co.nz
2010-06-19,04:01:51,101427641,2010-06-18T00:00:00.000+12:00,zl2t890@orcon.net.nz
2010-06-19,04:03:40,101427641,2010-06-18T00:00:00.000+12:00,zl2890@orcon.net.nz

Thanks in advance.

To remove consecutive duplicates (lines whose email, the last comma-separated field, matches the previous line's):

awk -F, '$NF!=f{print}{f=$NF}' file
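The one-liner above only catches duplicates on adjacent lines. If the file isn't sorted, an awk associative array keyed on the email field handles duplicates anywhere in the file. A sketch, using the sample data from the thread written to a placeholder file `log.csv` (on Solaris you may need nawk or /usr/xpg4/bin/awk rather than the old /usr/bin/awk):

```shell
# Sample data from the thread, written to a temp file for demonstration
cat > log.csv <<'EOF'
2010-06-19,04:08:12,235632470,2010-06-18T00:00:00.000+12:00,zinny123@hotmail.com
2010-06-19,04:09:57,235632470,2010-06-18T00:00:00.000+12:00,zinny123@hotmail.com
2010-06-19,04:28:36,223906214,2010-06-18T00:00:00.000+12:00,zkml123@xtra.co.nz
2010-06-19,04:01:51,101427641,2010-06-18T00:00:00.000+12:00,zl2t890@orcon.net.nz
2010-06-19,04:03:40,101427641,2010-06-18T00:00:00.000+12:00,zl2890@orcon.net.nz
EOF

# Keep only the first line seen for each email (field 5), no sorting needed:
# seen[$NF]++ is 0 (false) the first time an address appears, so the line prints
awk -F, '!seen[$NF]++' log.csv

# Or list just the addresses that occur more than once:
awk -F, '{n[$NF]++} END {for (e in n) if (n[e] > 1) print e}' log.csv
```

Unlike sort-based approaches, this preserves the original line order, at the cost of holding one array entry per distinct address in memory.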

Hi

This keeps one line per email address (it sorts on field 5, so the original order is not preserved):

sort -u -t, -k5 file

Guru.