Compare Fields from two text files using key columns

Hi All,

I have two files to compare. Each has 10 columns, with the first 4 columns together forming the key index. The remaining columns hold monetary values.

Using Perl, I want to read one file into a hash, check whether each key exists in file 2, compare the values in the remaining 6 columns, and report any differences found.

The files are comma-separated and do not have a header.

Here are the sample files:
File A:

Row1: abcd,abrd,fun,D000,$15,$236,$217,$200,$200,$200
Row2: dear,dare,tun,D000,$12.00405,$234.08976,$212.09876,$200,$200,$200

File B:

Row1: abcd,abrd,fun,D000,$12,$234,$212,$200,$200,$200
Row2: dear,dare,tun,D000,$12.00405,$234.08976,$212.09876,$200,$200,$200

Output:

Difference found for index abcd,abrd,fun,D000 for fields 5, 6 and 7

Any help would be appreciated. I was able to come up with a script in Bash, but I am not very comfortable with the concept of a hash in Perl, or with setting up the key index columns.

Thanks!

perl -F, -lane'
    # While the first file is being read, @ARGV still holds the second
    # file name: save its value columns keyed on the first four columns.
    $h{ join ",", @F[ 0 .. 3 ] } = [ @F[ 4 .. $#F ] ] and next
      if @ARGV;
    # Second file: look the key up and compare the value columns.
    $k = join ",", @F[ 0 .. 3 ];
    # Key present only in the second file: skip (or report, as needed).
    next unless exists $h{$k};
    if ( ( join ",", @F[ 4 .. $#F ] ) ne join ",", @{ $h{$k} } )
    {
        # 1-based numbers of the fields that differ (index 4 => field 5).
        @diff = map $_ + 1,
          grep { $F[$_] ne $h{$k}[ $_ - 4 ] } 4 .. $#F;
        print "Difference found for index $k for field(s) ", join ",", @diff;
    }' file[ab]
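
Since the OP mentioned not being comfortable with hashes yet, here is the same idea written out as a standalone, commented script. Treat it as a minimal sketch rather than a finished tool: the file names fileA and fileB and the %seen hash are just assumptions for illustration, so adjust them to match your setup.

#!/usr/bin/perl
use strict;
use warnings;

# Read file A into a hash: the key is the first four columns joined
# with commas, the value is an arrayref of the remaining six columns.
my %seen;
open my $fa, '<', 'fileA' or die "fileA: $!";
while (<$fa>) {
    chomp;
    my @f = split /,/;
    $seen{ join ',', @f[ 0 .. 3 ] } = [ @f[ 4 .. $#f ] ];
}
close $fa;

# Walk file B, look each key up in the hash, and compare field by field.
open my $fb, '<', 'fileB' or die "fileB: $!";
while (<$fb>) {
    chomp;
    my @f   = split /,/;
    my $key = join ',', @f[ 0 .. 3 ];

    if ( !exists $seen{$key} ) {
        print "Key $key not found in file A\n";
        next;
    }

    # Collect the 0-based indices (4..9) whose values differ, then
    # report them as 1-based field numbers (5..10).
    my @diff = grep { $f[$_] ne $seen{$key}[ $_ - 4 ] } 4 .. $#f;
    print "Difference found for index $key for field(s) ",
          join( ',', map { $_ + 1 } @diff ), "\n"
        if @diff;
}
close $fb;

Against the sample data above, this should print "Difference found for index abcd,abrd,fun,D000 for field(s) 5,6,7".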

Just curious: if you can achieve this within your comfort zone, why would you prefer to accomplish it in an area you're not so comfortable with, and possibly unable to support in the long term?

Perl is no doubt powerful, but it can be a bugbear to support if you rely on someone else's snippet without a firm grasp of what it's doing for you. It would also make extending the script that much more difficult. Are you sure you'd prefer this approach?

Silly me...homework, ha!