Find duplicate files, not by name

I have a directory with images:

-rw-r--r-- 1 root root 26216 Mar 19 21:00 020109.210001.jpg
-rw-r--r-- 1 root root 21760 Mar 19 21:15 020109.211502.jpg
-rw-r--r-- 1 root root 23144 Mar 19 21:30 020109.213002.jpg
-rw-r--r-- 1 root root 31350 Mar 20 00:45 020109.004501.jpg
-rw-r--r-- 1 root root 31350 Mar 20 01:00 020109.010002.jpg
-rw-r--r-- 1 root root 31350 Mar 20 01:15 020109.011501.jpg
-rw-r--r-- 1 root root  8060 Mar 20 01:30 020109.013002.jpg
-rw-r--r-- 1 root root  8062 Mar 20 01:45 020109.014501.jpg

Some images are identical, but file names are different.

How can I write a script to find and delete the duplicates?

Is there a better way than this?

#!/bin/bash
DIR="/path/to/images"
echo "Starting:"
for file1 in "${DIR}"/*.jpg; do
        for file2 in "${DIR}"/*.jpg; do
                # skip self-comparison and files already removed
                if [ "$file1" != "$file2" ] && [ -f "$file1" ] && [ -f "$file2" ]; then
                        # diff -q prints "Files ... differ" when they differ,
                        # and nothing at all when they are identical
                        DIFF=$(diff -q "$file1" "$file2")
                        if [ "${DIFF%% *}" != "Files" ]; then
                                echo "Same: $file1 $file2"
                                echo "Remove: $file2"
                                rm "$file2"
                        fi
                fi
        done
done
echo "Done."

Use md5 or another checksum/hash; I just used cksum:

cksum  *.jpg | sort -n > filelist

Change the sort command if you use md5: the digests are hex strings, so a plain (lexical) sort groups them, rather than sort -n.
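For example, with md5sum from GNU coreutils the equivalent would be something like this (a sketch, not tested here):

md5sum *.jpg | sort > filelist

Note that md5sum prints only two fields (digest and filename), so the read loop in part 2 would use two variables instead of three.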

Files with identical checksums are (almost certainly) identical. Look the file over before you go on to part 2 below:

old=""
while read sum lines filename
do
      if [[ "$sum" != "$old" ]] ; then
            old="$sum"
            continue
      fi
      rm -f "$filename"
          
done < filelist
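cksum is only a 32-bit CRC, so in theory two different files could share a checksum. If you want to be extra careful, a variant of the same loop (my own sketch) remembers the kept file and verifies byte-for-byte with cmp before deleting:

old=""
keep=""
while read -r sum size filename
do
      if [[ "$sum" != "$old" ]] ; then
            old="$sum"
            keep="$filename"       # first file in this checksum group is kept
            continue
      fi
      # same CRC as the kept file: confirm with cmp before removing
      if cmp -s "$keep" "$filename" ; then
            rm -f "$filename"
      fi
done < filelist

cmp -s exits 0 only when the two files are byte-for-byte identical, so a CRC collision would simply be left alone.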

Nice... That's a lot faster; the directory has hundreds of files.

Thanks a bunch...