I have two files; the second file contains exactly the same contents as the first file, plus some additional records. I want to remove the matching lines from file2 and print only the extra lines that the first file does not have. I could use the unsophisticated command below (f1 and f2 are the two files):
var=`cat f1`
grep -v "$var" f2
but I need a better solution: fast, reliable, and with low memory consumption.
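For reference, here is a tiny reproducible sketch of the task itself, using `grep` with `-F` (literal patterns) and `-x` (whole-line match) so long or regex-like lines are handled safely. The sample contents of f1/f2 are my own assumption for illustration:

```shell
# Sample data (assumption): f2 contains everything in f1 plus one extra record.
printf 'apple\nbanana\n' > f1
printf 'apple\nbanana\ncherry\n' > f2

# Print lines of f2 that do not exactly match any line of f1.
# -F: treat patterns as fixed strings, -x: match whole lines, -f f1: patterns from f1.
grep -v -x -F -f f1 f2
# → cherry
```

This avoids loading f1 into a shell variable, and `-F -x` prevents lines containing regex metacharacters from being misinterpreted.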
I have found these two lines of code, but they do not work for files with very long lines (note the file order: the set to subtract, f1, must come first):
fgrep -v -x -f f1 f2
awk 'NR==FNR {b[$0]; next} !($0 in b)' f1 f2
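When the awk hash table would use too much memory on very large files, one low-memory alternative is `comm`, which streams through both files line by line. The trade-off, and my assumption here, is that the files can be sorted first (which changes the output order):

```shell
# comm requires sorted input; sort both files with the same collation.
LC_ALL=C sort f1 > f1.sorted
LC_ALL=C sort f2 > f2.sorted

# -1 suppresses lines unique to f1, -3 suppresses lines common to both,
# leaving only the lines unique to f2.
comm -13 f1.sorted f2.sorted
```

`sort` spills to temporary files rather than holding everything in RAM, so this scales to files much larger than memory, unlike the awk array approach.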
Thanks to you all for the suggestions. Does anyone have awk or perl code to do this task?
Also, the perl code below removes duplicate, non-consecutive lines based on the last field, without sorting. What should I change in this code so that it prints unique lines of a file based on the entire line (the whole record) rather than just the last field?