Finding duplicate files in two base directories

Hello All,
I have an assignment to complete by this Monday, and the problem statement is as follows:

Problem: Find duplicate files (especially .c and .cpp) across two project base directories, with the following requirements:
1. Should be extendable to search multiple base project directories
2. Must use STL containers
3. Should be portable between Linux and Windows
4. As an advanced search, it should also compare the contents of the files.

While surfing the net I came across Boost::Filesystem, which is portable across both operating systems.
Friends, please give me some input on this.
Thank you very much in advance.

Do not post classroom or homework problems in the main forums. Homework and coursework questions can only be posted in this forum under special homework rules.

Please review the rules, which you agreed to when you registered, if you have not already done so.

More than likely, posting homework in the main forums has resulted in a forum infraction. If you did not post homework, please explain what company you work for and the nature of the problem you are working on.

If you did post homework in the main forums, please review the guidelines for posting homework and repost.

Thank You.

The UNIX and Linux Forums.

Hello,
It is certainly not a classroom assignment. Sorry for describing it as one; I did that only to get around the strict restrictions that service-based companies place on their employees. I work as a Software Engineer at a service-based company, and this is a task assigned to me, but I am completely new to STL and Boost, hence I am asking for help here.

You can use SHA1 hashes to identify identical files. The script below finds and shows identical files from two different directories:

DIR1=${1}
DIR2=${2}
TMP1=$(mktemp)
TMP2=$(mktemp)
trap 'rm -f "$TMP1" "$TMP2"' EXIT HUP INT QUIT TERM

# Hash every .c/.h file under each base directory.
find "$DIR1" -type f -name "*.[ch]" -exec shasum {} + > "$TMP1"
find "$DIR2" -type f -name "*.[ch]" -exec shasum {} + > "$TMP2"

# List each hash that occurs more than once, then print its files.
cat "$TMP1" "$TMP2" | awk '{print $1}' | sort | uniq -c |
awk '$1 > 1 {print $2}' |
while read -r sha; do
        grep -h "$sha" "$TMP1" "$TMP2" | awk '{print $2}'
        echo
done
exit 0

I added the

echo

just to separate groups of identical files with an empty line, for visibility.

This is quick and dirty, with no error checking etc.; it is just to illustrate the idea.

Thanks, Migurus, for the prompt reply, but unfortunately I have to do this in C++.