Need UNIX commands to delete records from one file if the same record is present in another file

Just like join: if a record is present in both files, delete it from the first file (or delete that particular record) and write the unmatched records to a new file.

Tried with grep and a while loop, but no luck. Can anyone help with this?

Show what you have tried so far.

Please use a descriptive subject when posting a thread.

The UNIX and Linux Forums - Forum Rules

(11) Post questions with descriptive subjects. For example, do not post questions with subjects like "Help Me!", "Urgent!!" or "Doubt". Post subjects like "Execution Problems with Cron" or "Help with Backup Shell Script".

Tried the code below to get the unmatched records, but the issue is that the while loop returns the unmatched records on every iteration. I need a single output containing only the unmatched records.

while read a ; do
  grep -v "^${a}$" a.file;
done < b.file

a.file -- target file

b.file -- contains the records to be matched and removed
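To see why the loop above produces repeated output, here is a minimal reproduction with made-up sample data (the file contents are assumptions for illustration; only the names a.file and b.file come from the thread):

```shell
# Hypothetical sample data.
printf 'red\ngreen\nblue\n' > a.file    # target file
printf 'green\nblue\n'      > b.file    # records to remove

# Each iteration prints ALL of a.file minus only the CURRENT
# pattern, so with two patterns the file is emitted twice and
# every "deleted" record still shows up in another iteration.
while read a ; do
  grep -v "^${a}$" a.file
done < b.file
# iteration 1 (green): red, blue
# iteration 2 (blue):  red, green
```

The filtering results of the iterations are never combined, which is why a single consolidated output needs either in-place rewriting between iterations or a single grep call over all patterns.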

 
cp a.file a.file.orig                   # keep a backup before editing in place
while read a ; do
  grep -v "^${a}$" a.file > /tmp/tmp.out
  mv /tmp/tmp.out a.file                # rewrite a.file minus the matched record
done < b.file
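A quick check of this loop on made-up sample data (file contents are assumptions, not from the thread). Because a.file is rewritten after every iteration, each pattern's deletion is preserved for the next one:

```shell
# Hypothetical sample data.
printf 'red\ngreen\nblue\n' > a.file
printf 'green\nblue\n'      > b.file

cp a.file a.file.orig                   # backup of the original
while read a ; do
  grep -v "^${a}$" a.file > /tmp/tmp.out
  mv /tmp/tmp.out a.file                # carry the deletion into the next pass
done < b.file
# a.file now contains only: red
```

Note that the patterns from b.file are still treated as regular expressions here, so records containing characters like `.` or `*` could match more than intended.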

Got the expected result. Thanks a lot, kamaraj! :slight_smile:

How about:

grep -vxFf b.file a.file

Working fine, thanks!
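For anyone finding this later, a sketch of how that one-liner works, using made-up sample data (the file contents are assumptions for illustration):

```shell
# Hypothetical sample data.
printf 'red\ngreen\nblue\n' > a.file
printf 'green\nblue\n'      > b.file

# -f b.file : read the patterns from b.file, one per line
# -F        : treat patterns as fixed strings, not regexes
# -x        : a pattern must match the whole line
# -v        : invert the match -- keep only the unmatched records
grep -vxFf b.file a.file > new.file
cat new.file                            # prints: red
```

Unlike the while-loop approach, this reads a.file only once and leaves it untouched, and `-F` avoids surprises when records contain regex metacharacters.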