Solved: finding the diff between two files

I want to compare two files and generate the three files below:

  1. new lines
  2. updated lines
  3. deleted lines

Are there any one-liners for each of them?

Note: lines are matched on field 1, and the fields are separated by '|'.

Example:
test1 (older file)

1|XXX
2|YYY
3|ZZZ

test2 (newer file)

1|XXX
2|FFF
5|DDD
6|TTT
new => 
5|DDD
6|TTT

updated=>
2|FFF

deleted=>
3|ZZZ

Thanks a lot in advance ...

---------- Post updated at 04:33 PM ---------- Previous update was at 04:24 PM ----------

The comm command solved the problem. Thanks a lot anyway, guys.

In order for us to tag this thread as "solved", it would be nice if you shared the resolution with the community...

$ cat 1.txt
1|XXX
2|YYY
3|ZZZ
$ cat 2.txt
1|XXX
2|FFF
5|DDD
6|TTT
awk -F'|' 'FNR==NR { buff[$1]=$0; next }                    # 1st file: remember each line, keyed on field 1
           {                                                # 2nd file:
              if (!($1 in buff)) { new[$1]=$0; next }       #   key not in the old file   -> new
              if (buff[$1] == $0) { delete buff[$1]; next } #   identical line            -> unchanged, forget it
              chng[$1]=$0; delete buff[$1]                  #   same key, different value -> updated
           }
           END {
              print "new==>"
              for (var in new) print new[var]

              print "\n\nupdated==>"
              for (var in chng) print chng[var]

              print "\n\ndeleted==>"                        # whatever is still in buff was deleted
              for (var in buff) print buff[var]
           }' 1.txt 2.txt

Result:

new==>
5|DDD
6|TTT


updated==>
2|FFF


deleted==>
3|ZZZ
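
Since the original post asked for three separate files rather than printed sections, here is a small variation of the same idea that writes each category straight to its own file (a sketch; new.txt, updated.txt and deleted.txt are just example names):

awk -F'|' 'FNR==NR { buff[$1]=$0; next }
           {
              if (!($1 in buff))  { print > "new.txt"; next }
              if (buff[$1] == $0) { delete buff[$1]; next }
              print > "updated.txt"; delete buff[$1]
           }
           END {
              for (key in buff) print buff[key] > "deleted.txt"
           }' 1.txt 2.txt

With the sample data this produces the same three results as the version above, just in files instead of on stdout.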

Use of the "comm" command solved the problem.
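
For the record, the exact comm commands were not posted, so the following is only a guess at how comm could be combined with sort, cut and awk to produce the same three outputs (the intermediate file names are made up):

# comm wants lexically sorted input
sort test1 > old.sorted
sort test2 > new.sorted

comm -13 old.sorted new.sorted > only_in_new    # lines only in the newer file  (added or changed)
comm -23 old.sorted new.sorted > only_in_old    # lines only in the older file  (deleted or changed)

# a key that shows up on both sides was updated; anything else was purely added or deleted
cut -d'|' -f1 only_in_new | sort > new.keys
cut -d'|' -f1 only_in_old | sort > old.keys
comm -12 new.keys old.keys > updated.keys

awk -F'|' 'FILENAME=="updated.keys" {upd[$1]; next}  ($1 in upd)' updated.keys only_in_new > updated.txt
awk -F'|' 'FILENAME=="updated.keys" {upd[$1]; next} !($1 in upd)' updated.keys only_in_new > new.txt
awk -F'|' 'FILENAME=="updated.keys" {upd[$1]; next} !($1 in upd)' updated.keys only_in_old > deleted.txt

With the test1/test2 data above this yields new.txt = 5|DDD and 6|TTT, updated.txt = 2|FFF, and deleted.txt = 3|ZZZ.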