how can I delete duplicates in the log?

I have a log file and I am trying to run a script against it to search for key issues such as invalid users, errors, etc. In one part, I grep for "session closed" and get a lot of the same thing, i.e. the root username, etc. I want to remove the multiple root entries and just have it do a count, like wc -l.

These are the unidentified users:
auth could not identify password for [henrys]: 1 Time(s)
auth could not identify password for [henrys]: 1 Time(s)
auth could not identify password for [henrys]: 2 Time(s)
auth could not identify password for [henrys]: 1 Time(s)
Maybe a simple variable tip?
Or a for i loop?
Thanks, I'm just learning scripting.
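For reference, the part I mean is something along these lines (the file name is just a placeholder):

grep 'session closed' logfile

which prints the same root lines over and over; I would rather collapse those and just report a count, the way wc -l gives a single number.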

If the entries are exactly identical,
"cat logfile | sort -u" will work fine.

In your case:
"cat logfile | awk -F':' '{print $1}' | sort -u" should work.

Why exactly do you need to cat a file?

sort -u worked great. Thanks so much!

Why do I need to cat the file? I am trying to format a log so that the problems, such as errors, invalid users, illegal logins, authentication failures, etc., come out in the report. I would like to clean it up as well, so that you don't have 100 entries if a single user had issues. Ideally it would do a count first, then maybe state that the user tried to log in 10 times, for example. Thanks, V.

Follow the posted 'cat' link.
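In short, sort and awk can read a file named on the command line themselves, so the extra cat process buys you nothing; for instance

sort -u logfile

produces exactly the same output as "cat logfile | sort -u".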

awk -F':' '{print $1}' logfile | sort -u
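If you also want the counts you mentioned (how many times a user had trouble), one option is uniq -c instead of sort -u; a rough sketch, assuming the user name always appears in square brackets as in your samples:

grep 'could not identify password' logfile \
    | sed 's/.*\[\(.*\)\].*/\1/' \
    | sort | uniq -c | sort -rn

With your four sample lines that prints something like "4 henrys", i.e. each user followed by the number of matching lines.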

Thank you very much.