Pandee
1
I'm trying to create a script.
There are 2 files - fileA.log & fileB.log
fileA.log has the below data :
aaaa
cccc
eeee
fileB.log has the below data :
cjahdskjah aaaa xyz
jhaskjdhas bbbb abc
ajdhjkh cccc abc
cjahdskjah dddd xyz
jhaskjdhas eeee abc
I need to read each line from fileA.log, check fileB.log for that string and delete the line from fileB.log, if that string exists
Finally the file fileB.log should have the below data :
jhaskjdhas bbbb abc
cjahdskjah dddd xyz
Below is the script that I've used :
File_A=/tmp/fileA.log
File_B=/tmp/fileB.log
while read line
do
sed -i "/\b$FileA\b/d" $File_B
done <$File_A
But unfortunately, it is deleting all the lines.
Please let me know what's wrong with my script.
Yoda
2
grep -vf fileA.log fileB.log
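For anyone trying this at a prompt, a quick self-contained check; the printf lines just recreate the sample data from the question:

```shell
# Recreate the two sample files from the question.
printf '%s\n' aaaa cccc eeee > fileA.log
printf '%s\n' 'cjahdskjah aaaa xyz' 'jhaskjdhas bbbb abc' \
              'ajdhjkh cccc abc' 'cjahdskjah dddd xyz' \
              'jhaskjdhas eeee abc' > fileB.log

# -f reads one pattern per line from fileA.log, -v inverts the match,
# so every fileB.log line containing one of those strings is dropped.
grep -vf fileA.log fileB.log
# prints:
# jhaskjdhas bbbb abc
# cjahdskjah dddd xyz
```

One grep call handles the whole job, instead of one pass over fileB.log per line of fileA.log.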
Hi,
You must use $line instead of $FileA in your sed line:
sed -i "/\b$line\b/d" $File_B
But that approach rewrites file2 once for every line of file1.
A better solution:
$ printf '/\\b%s\\b/d\n' $(<file1.txt) | sed -f - file2.txt
jhaskjdhas bbbb abc
cjahdskjah dddd xyz
or
$ sed -e 's#.*#/\\b&\\b/d#' file1.txt | sed -f - file2.txt
jhaskjdhas bbbb abc
cjahdskjah dddd xyz
Regards.
Should a line like cjahdskjah aaaaSTUFF xyz
be deleted from fileB.log in your example?
If not, modify Yoda's solution as follows:
grep -vwf fileA.log fileB.log
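A minimal sketch of the difference -w makes, reusing the thread's file names but reduced to the two relevant lines:

```shell
# One whole-word match and one substring-only match.
printf '%s\n' aaaa > fileA.log
printf '%s\n' 'cjahdskjah aaaa xyz' 'cjahdskjah aaaaSTUFF xyz' > fileB.log

# Without -w, "aaaa" also matches inside "aaaaSTUFF", so both lines are
# deleted ("|| true" because grep exits 1 when it prints nothing).
grep -vf fileA.log fileB.log || true

# With -w the pattern must match a whole word, so the aaaaSTUFF line survives.
grep -vwf fileA.log fileB.log
# prints:
# cjahdskjah aaaaSTUFF xyz
```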
Pandee
5
Thanks Folks!!!
With the input you all gave I got it working with the below script :
while read line
do
grep -v $line fileB.log > $TEMP_FILE
cat /dev/null > fileB.log
cat $TEMP_FILE > fileB.log
done < fileA.log
That's massively inefficient. There's no need to invoke grep once per line.
If the lines in fileA.log are not regular expressions, use fixed string matching.
The first cat command is pointless.
If there are backslashes in your data, read (without -r) will eat them. If there are trailing backslashes, lines will be joined.
Regards,
Alister
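Putting Alister's points together, one possible single-pass version: -F for fixed-string matching, -w for whole words, and a temp file (the fileB.tmp name is just an example). The printf lines recreate the sample data so the sketch is runnable as-is:

```shell
# Sample data from the thread.
printf '%s\n' aaaa cccc eeee > fileA.log
printf '%s\n' 'cjahdskjah aaaa xyz' 'jhaskjdhas bbbb abc' \
              'ajdhjkh cccc abc' 'cjahdskjah dddd xyz' \
              'jhaskjdhas eeee abc' > fileB.log

# One grep for the whole job: -F = fixed strings (no regex surprises),
# -w = whole-word match, -f = patterns from fileA.log, -v = invert.
# && means fileB.log is only replaced if grep succeeded.
grep -vFwf fileA.log fileB.log > fileB.tmp && mv fileB.tmp fileB.log

cat fileB.log
# prints:
# jhaskjdhas bbbb abc
# cjahdskjah dddd xyz
```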
Pandee
7
Thanks Alister. Have taken your inputs and made the changes as needed.
The dreaded <cr><lf> (Windows) vs. <lf> (*nix) vs. <cr> (classic Mac) strikes again!