Shell Script - find, recursively, all files that are duplicated

Hi. I have a problem that I can't seem to resolve. I need to create a script that recursively lists all files that share the same name.
For example, if a file with the same name exists in more than one directory, the script should list every file it finds, with all of its details. Could someone help me? Thanks in advance.

What have you tried so far, and how does your code misbehave?

I'm sorry, I'm still a noob at shell scripting. I've been experimenting with find, uniq and sort, but I can't really find the solution. :frowning:

Check if you have fdupes in your OS.

If you don't have fdupes, and you can write a Perl script, you can do it easily with a hash of arrays:

$hash{"filename"} => [ "absolute-path-of-file", "absolute-path-of-file" ... ]

Whichever filename ends up with more than one element in its array is a duplicate!
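In case it helps, here is a minimal, untested sketch of that hash-of-arrays idea (it assumes the search starts in the current directory): it walks the tree with File::Find, keys a hash on each basename, and prints any name that maps to more than one path.

#!/usr/bin/perl
# Minimal sketch: map each basename to every path where it occurs,
# then report the names that were seen more than once.
use strict;
use warnings;
use File::Find;
use File::Basename;
use Cwd 'abs_path';

my %hash;

# Walk the tree without chdir-ing, recording absolute paths per basename.
find({ no_chdir => 1, wanted => sub {
    return unless -f $_;
    push @{ $hash{ basename($_) } }, abs_path($_);
}}, '.');

# Any basename with more than one path in its array is a duplicate.
for my $name (sort keys %hash) {
    next unless @{ $hash{$name} } > 1;
    print "$name:\n";
    print "  $_\n" for @{ $hash{$name} };
}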

Inefficient, but it still only took 10 minutes for 60,000 files with 200 duplicates. Unsuitable for large numbers of duplicates.

#!/bin/ksh
# Pass 1: emit the basename of every file, then keep only the names
# that occur more than once.
find . -type f -print | while read F1
do
    basename "${F1}"
done | sort | uniq -d | while read F2
do
    # Pass 2: for each duplicated name, locate and list every occurrence.
    find . -type f -name "${F2}" -exec ls -lad '{}' \;
done