Hello
I have 2 files, e.g.
more file1 file2
::::::::::::::
file1
::::::::::::::
1 fromfile1
2 fromfile1
3 fromfile1
4 fromfile1
5 fromfile1
6 fromfile1
7 fromfile1
::::::::::::::
file2
::::::::::::::
3 fromfile2
5 fromfile2
I want to merge these, but where a key appears in both files I only want the record from the second file. So the result is:
1 fromfile1
2 fromfile1
3 fromfile2
4 fromfile1
5 fromfile2
6 fromfile1
7 fromfile1
Basically I'm merging the 2 files but omitting any records in file1 which also appear in file2, based on the key field.
I've started to cobble a script together (rough sketch below) which:
- makes a list of the key fields from file2
- loops round reading that list and uses grep -v to remove records with each key from file1
- then uses uniq -d to keep only the records which were duplicated (so I now have a copy of file1 with only records 1, 2, 4, 6 and 7)
- then concatenates this file with file2
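Something along these lines, where keys, stripped and file1.keep are just scratch file names I made up:

cut -d' ' -f1 file2 > keys            # list of key fields from file2

> stripped                            # collect the grep -v output for every key
while read key
do
    grep -v "^$key " file1 >> stripped
done < keys

sort stripped | uniq -d > file1.keep  # a record surviving both greps appears twice

sort file1.keep file2                 # merge the survivors with file2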
This only works if file2 has exactly 2 records though, because uniq -d keeps anything which appears two or more times.
This feels like something which should be simple, but I can't figure it out. I suspect I should be able to use join or maybe awk to achieve what I want, but I can't get there and can't find anything through Google.
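For what it's worth, the shape of thing I'm imagining is a single awk pass, something like this (just a sketch of the idea):

awk 'NR==FNR { seen[$1] = $0; next }     # read file2 first, index records by key
     $1 in seen { print seen[$1]; next } # key is also in file2: print that record instead
     { print }                           # otherwise keep the file1 record
' file2 file1

i.e. load file2 into an array keyed on the first field, then walk through file1 printing the file2 version of any record whose key matches.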
Can anyone suggest a more elegant solution than my approach? (And frankly one which works, because mine doesn't.)
Many thanks, Chris