Script that detects duplicate files in a directory

I need help with a script that accepts one argument (a directory), goes through all the files under that directory, and prints a list of possible duplicate files. As its output, it prints zero or more lines, each containing a space-separated list of filenames. All the files listed on one line have the same MD5 hash, i.e. they are believed to be identical.
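Roughly, the idea I'm after is: hash every file, sort by hash, and report hashes that occur more than once. A one-pipeline sketch of that core step (assuming GNU md5sum and GNU uniq; the -w32 and --all-repeated options are GNU extensions), although it prints each duplicate on its own line rather than one group per line:

#!/bin/ksh
# Hash every file, sort by hash, keep only lines whose first 32
# characters (the MD5 digest) repeat; groups are separated by blank lines.
find "$1" -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate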

Other/optional requirement
If the -s switch is specified, the script should not print the list of duplicate files; instead, it should print the number of duplicates (for example, in the example above there are 4 duplicate copies of 3 files) and how much extra space the duplicates take up. (Note: this summary should only be displayed when the -s switch is present; if it is not, every line of output should list one set of duplicate files.)
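A sketch of how the -s summary might be computed (my assumptions: GNU md5sum, and GNU "stat -c%s" for the file size in bytes; on BSD/macOS that would be "stat -f%z" instead). Every copy beyond the first in a hash group counts as a duplicate, and its size counts as wasted space:

#!/bin/ksh
find "${1:-.}" -type f -exec md5sum {} + | sort |
while IFS= read -r line
do
    hash=${line%% *}                  # first field: the MD5 digest
    name=${line#*  }                  # everything after the two-space separator
    printf '%s %s\n' "$hash" "$(stat -c%s "$name")"
done |
awk '
    $1 == prev { dups++; wasted += $2 }   # another copy of a hash already seen
    { prev = $1 }
    END { print dups+0 " duplicate files, taking up " wasted+0 " extra bytes" }'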

Here is an example using crc32. You can use MD5 if you absolutely must, but this works just fine.

#!/bin/ksh

# Checksum every file under the directory given as $1 (default: current
# directory), sort so identical checksums land on adjacent lines, and let
# awk print each file whose checksum matches the previous line's.
find "${1:-.}" -type f | \
while IFS= read -r file
do
    printf '%s  %s\n' "$(crc32 "$file")" "$file"
done | sort | \
awk '{
      crc  = $1
      file = substr($0, index($0, "  ") + 2)    # keep spaces in filenames intact
      if (crc == old_crc) print old_file, file  # same checksum as previous entry
      old_crc  = crc
      old_file = file
     }'