Ignore some lines with specific words from file comparison

Hi all,

I need help with this scenario. I have two files with multiple lines. I want to compare the two files while ignoring any line that contains one of the words Tran, Loc, Addr, or Charge. Also, if a line contains the word Credit, I want to tokenize it (i.e. take the string after the character "[") and compare only that substring.

File looks like

Record 1
Tran@1050e1f[
airbillNbr=1324576
origLocInfo=Loc@1c29ab2[
locId=923
state=FL
locCntry=US
postal=32817
locNbr=456
locCurr=CAD
lglEntity=E
]
destLocInfo=Loc@337838[
locId=298
state=FL
locCntry=US
postal=32845
locNbr=456
locCurr=CAD
lglEntity=E
]
shpDt=Tue Jan 08 00:00:00 EST 2008
shprAddrInfo=Addr@18558d2[
acctNbr=123456789
name=Peyton Manning
company=Giants
address1=Sports Nation
address2=
city=New York
state=NY
country=US
postal=76543
]
Charge@19c26f5[
code=305
crdtCard=Credit@15eb0a[creditCardTypeCode=M,creditCardExpDate=Sat Feb 28 00:00:00 EST 2009]

Any help is appreciated.
Thanks,
Jak

You could use 'grep -v word' or "nawk '$0 !~ /word/'" to filter the files, redirect the output into other files, and once the filtering is complete, compare those files.
For example, for the word Tran and file fl1 you would use the commands:
grep -v Tran fl1 > fltr_fl
or
nawk '$0 !~ /Tran/' fl1 > fltr_fl
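If you do not need a separate pass per word, grep can also drop all the ignore words in one go with repeated -e patterns (a sketch using the fl1 / fltr_fl names from above and a few sample lines from the record format):

```shell
# sample input (a fragment of the record format from the question)
cat > fl1 <<'EOF'
Tran@1050e1f[
airbillNbr=1324576
origLocInfo=Loc@1c29ab2[
locId=923
EOF

# drop every line containing any of the ignore words in a single pass;
# -v keeps only lines that match none of the -e patterns
grep -v -e Tran -e Loc -e Addr -e Charge fl1 > fltr_fl
```

Note that the match is case-sensitive, so "locId=923" survives while "origLocInfo=Loc@..." is dropped.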

For the "tokenize" part I would use something like:
nawk '/Credit/ { $0 = substr($0, index($0, "[") + 1) } { print }' fl1
(on lines containing Credit, this keeps only the text after the first '['; maybe someone else can refine it)

I would put the filtering commands in a script and, at the end, run diff on the filtered files.
For filtering out the lines with those words I would use a loop: for wrd in ...all words... ; do ... done
So, it would be this way:

 words="Tran Loc Addr Charge"
 for wrd in $words ; do
    nawk -v chk="$wrd" '$0 !~ chk' in_fl > tmp;
    cp tmp in_fl;
 done
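Putting the pieces together, a complete script might look like the sketch below. It is an assumption-laden example: it uses awk (nawk on Solaris), hard-codes the ignore words from the question, applies the "keep only the text after the first '['" rule to Credit lines, and exits with diff's status. The filter function name and temp file names are mine, not from the thread.

```shell
#!/bin/sh
# Compare two files while ignoring noise lines, per the rules in the thread.
# Usage: cmp_filtered.sh file1 file2

# Print FILE with the noise removed:
#   - drop any line containing Tran, Loc, Addr, or Charge
#   - on lines containing Credit, keep only the text after the first '['
filter() {
    awk '
        /Tran|Loc|Addr|Charge/ { next }                   # ignore these lines
        /Credit/ { $0 = substr($0, index($0, "[") + 1) }  # tokenize after "["
        { print }
    ' "$1"
}

# Filter both inputs into temp files, diff them, and clean up.
if [ $# -eq 2 ]; then
    tmp1="${TMPDIR:-/tmp}/fltr1.$$"
    tmp2="${TMPDIR:-/tmp}/fltr2.$$"
    filter "$1" > "$tmp1"
    filter "$2" > "$tmp2"
    diff "$tmp1" "$tmp2"
    rc=$?
    rm -f "$tmp1" "$tmp2"
    exit $rc
fi
```

With this approach, two records that differ only in the ignored lines or in the part of a Credit line before the '[' will compare equal.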

Thanks for your help!
Jak