search for a string in a text file

I want to write a script to check for duplicate users.
For example, I have a text file with entries in the format of /etc/passwd:
alice:x:1008:555:William Williams:/home/bill:/bin/bash
bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash
james:x:1012:518:Tilly James:/home/bob:/bin/bash

I want to check whether there are any duplicate users and, if there are, output the offending lines to standard error. In the example above, since bob appears twice, the output would be something like:
Error duplicate user
bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash

Right now I have a while loop that reads each line and extracts each field into a variable using awk -F with ":" as the delimiter. After storing the username, I am not sure of the best way to check whether it already exists in the file.

Some parts of my code:
while read -r line; do
    user=$(echo "$line" | awk -F: '{print $1}')
    # count lines whose first field is exactly this user ($1 is the text file)
    count=$(grep -c "^$user:" "$1")
    if [ "$count" -eq 1 ]; then
        echo "Unique user"
    else
        echo "Not unique user"
        # then somehow grep those lines and output them to stderr
    fi
done < "$1"
The matching does not produce the right results. Any suggestions?

One approach is to turn each username into an anchored grep pattern, keep only the duplicated patterns, and use them to pull the matching lines back out of the file:

awk -F: '{print "^"$1":"}' /etc/passwd | uniq -d | grep -f - /etc/passwd

Perhaps it is best to include the field separator in the pattern, so that bob and bobette do not count as duplicates, and also to add a sort step, because uniq only detects adjacent matching lines:

awk -F: '{print "^"$1 FS}' /etc/passwd | sort | uniq -d | grep -f - /etc/passwd
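
Since the question asks for an error message plus the duplicate lines on standard error, a minimal wrapper around that pipeline could look like the sketch below (the file and dups variable names are mine, and the file name is assumed to arrive as the script's first argument):

file=$1
# collect every line whose username field occurs more than once
dups=$(awk -F: '{print "^"$1 FS}' "$file" | sort | uniq -d | grep -f - "$file")
if [ -n "$dups" ]; then
    echo "Error duplicate user" >&2
    echo "$dups" >&2
fi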

The same approach works for the userid field:

awk -F: '{print "^[^:]*:x:"$3":"}' /etc/passwd | sort | uniq -d | grep -f - /etc/passwd
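
On the sample data this prints nothing, since the four userids (1008, 1018, 1019, 1012) are all distinct.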

Or, combining both checks into a single awk that reads the file twice, counting on the first pass and printing on the second:

awk -F: 'NR==FNR{A[$1]++;B[$3]++;next}A[$1]>1||B[$3]>1' /etc/passwd /etc/passwd
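
For the sample file above, this prints just the two bob lines:

bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash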

Or read /etc/passwd only once, buffering the lines per user:

awk -F: '{a[$1]=(a[$1]=="")?$0:a[$1] ORS $0;b[$1]++} END{for (i in b) if (b[i]>1) print a[i]}' /etc/passwd
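
Here a[$1] accumulates every line seen for a username and b[$1] counts them; the END block then prints the buffered lines only for usernames that occurred more than once (for the sample data, again the two bob lines).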