Compare two files and remove all the contents of one file from another

Hi,

I have two files; the second file contains exactly the contents of the first file plus some additional records. I want to remove the matching lines from file2 and print only the extra lines that the first file does not have. I could use the unsophisticated command below (consider f1 and f2 to be the two files):

var=`cat f1`
grep -v "$var" f2

but I need a better solution: fast, reliable, and with low memory consumption.

I have found these two lines of code, but they do not work for files with very long lines:

fgrep -v -x -f f1 f2
awk 'NR==FNR {b[$0]; next} !($0 in b)' f1 f2
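As a sanity check, the awk approach can be exercised on two throw-away files (names and contents here are invented for the demo). The first pass loads every line of the smaller file into an array; the second pass prints only the lines of the larger file that are not in it:

```shell
# Two scratch files: f2.demo repeats f1.demo plus two extra records
printf 'abc\ndef\nghi\n'                       > f1.demo
printf 'abc\nextra one\ndef\nghi\nextra two\n' > f2.demo

# Pass 1 (NR==FNR): store every line of f1.demo as a key in array b.
# Pass 2: print each line of f2.demo that is not a key in b.
awk 'NR==FNR {b[$0]; next} !($0 in b)' f1.demo f2.demo
```

This preserves the original order of file2's extra lines and needs no sorting, at the cost of holding file1 in memory.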
> cat file70
abc
def
ghi
jkl
mno
pqr
stu
vwx
yz
123
456
789
0

> cat file71
abc
def
ghi
jkl
mno
pqr
stu
vwx
yz
bash ksh
123
456
789
0
unix.com

> diff file70 file71 | grep "^>" | cut -c3-
bash ksh
unix.com

Hi,

to print the lines that differ between two files, try:

comm -3 file1 file2

and for further information:

man comm
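One caveat worth adding: comm assumes both inputs are already sorted, so unsorted files need a sort pass first. A minimal sketch (file names invented for the demo; -13 suppresses columns 1 and 3, leaving only the lines unique to the second file):

```shell
# Demo files; f2.demo holds everything in f1.demo plus two extras
printf 'def\nabc\nghi\n'                         > f1.demo
printf 'ghi\nabc\nunix.com\ndef\nbash ksh\n'     > f2.demo

# comm needs sorted input, so sort each file to a temporary copy first
sort f1.demo > f1.sorted
sort f2.demo > f2.sorted

# Column 1 = only in file1, column 2 = only in file2, column 3 = both;
# -13 keeps only column 2: the extra records of the second file.
comm -13 f1.sorted f2.sorted
```

Note that the extras come out in sorted order, not in their original order in file2.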

Kind regards

Chris

Thanks to you all for the suggestions. But does anyone have awk or Perl code to do this task?

Also, the Perl one-liner below removes duplicate, non-consecutive lines based on the last field, without sorting. What should I change in this code so that it prints the unique lines of a file based on the entire line (the whole record) rather than just the last field?

perl -ane'print unless $_{$F[-1]}++'