I'm interested in finding all occurrences of the terms in file1 within file2; both are csv files. I can do this with a loop, but I'd like to know whether it can also be done with the help of xargs and grep. What I have tried:
cat file1 | xargs grep file2
The problem is that grep treats the terms from file1 as file names to open instead of using them as patterns.
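For what it's worth, xargs can be made to work by substituting each term into the pattern position with -I (a sketch on made-up sample files):

```shell
# Hypothetical sample data: terms in file1, csv rows in file2.
printf 'apple\nbanana\n' > file1
printf 'apple pie,1\nbanana split,2\ncherry tart,3\n' > file2

# -I{} substitutes each line of file1 into the pattern slot;
# -- guards against terms that begin with a dash.
xargs -I{} grep -- {} file2 < file1
# prints: apple pie,1
#         banana split,2
```

This runs one grep per term, though, so the -f option described in the reply below is both simpler and faster.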
If you want to read the search patterns from a file (file1 in your example), you must use the "-f" option.
"grep -F" is equivalent to "fgrep" (it treats the patterns as fixed strings rather than regexes), which is different from "grep -f" (read patterns from a file). You probably want to combine the two.
Thank you, I used the "-f" option, adding "-F" as well made it significantly faster.
---------- Post updated 10-12-11 at 12:27 AM ---------- Previous update was 10-11-11 at 04:43 AM ----------
I have another question related to this use of grep. If I instead want to list all items in file1 that do not exist in file2, by negating the match, how can that be done?
I tried the -v flag but it did not do this. Am I missing something here, or is it not possible with grep alone? I have used -v with grep before, but never combined with -f; that is new to me.
grep -v -F -f file1 file2
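That command inverts the match over the lines of file2, so it prints the file2 lines that contain none of the terms, not the unmatched terms themselves. One loop-free way to get the file1 items absent from file2 is to first extract which terms actually occur, then negate against that list (a sketch on mock files):

```shell
printf 'foo\nbar\nqux\n' > file1
printf 'foo,1\nbar,2\n' > file2

# -o prints each matched fixed string, i.e. the terms that DO occur.
grep -oFf file1 file2 | sort -u > matched

# -x: whole-line match, so only exact terms are filtered out.
grep -vxFf matched file1
# prints: qux
```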
EDIT:
I managed to get this working with the following script. Any comments or tips on easier ways to accomplish the same thing would be appreciated.
#!/bin/bash
# Usage: ./script file_to_search file_with_terms
# Prints every term from the second file that has no match in the first.
if [ "$#" -ne 2 ]; then
    echo "Missing arguments" >&2
    exit 1
fi

# IFS= and -r keep leading whitespace and backslashes in each term intact;
# -F matches the term as a fixed string, -q only tests for a match.
while IFS= read -r line; do
    if ! grep -qF -- "$line" "$1"; then
        echo "$line"
    fi
done < "$2"
Edit: I made two mock-up files with just random words in them, and it now works as intended with them. Very strange. Does @ have any special meaning to bash that could mess this up?
There is nothing more than one email address per line at this stage.
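@ has no special meaning to bash inside double quotes, nor to grep's basic regexes. With lists of email addresses, a more common culprit is Windows-style CRLF line endings in the csv: the trailing carriage return becomes part of each term, so nothing matches. A quick check and fix (cat -A is GNU coreutils; file names are illustrative):

```shell
# Simulate a csv saved with CRLF line endings.
printf 'user@example.com\r\n' > list.csv

# GNU cat -A makes the hidden carriage return visible as ^M before the $.
cat -A list.csv

# Strip the carriage returns, then replace the original file.
tr -d '\r' < list.csv > clean.csv && mv clean.csv list.csv
```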