Please help: need help searching for multiple strings in a file and removing them.

Please help. Here is my problem. I have 9,000 lines in file a and 500,000 lines in file b. For each line in file a, I need to search file b and remove that line. I am currently using the grep -v command and writing the output to a new file. However, because of the size of file b this takes an extremely long time, and I have 50 files similar to file b. Is there a simpler way to accomplish this? Here is a code snippet of what I have so far.

cat "$1" | while read LINE
do
    echo "$LINE"

    # drop every line of the big file that matches this line of file a
    grep -v "$LINE" fileName > OutputFile

    cp OutputFile fileName
done

You can use the -i option of sed (if your sed has it), e.g.
sed -i -e "/$LINE/d" fileName
or the similar perl -pi -e.

Note: the implementation of this feature may create temporary files implicitly.
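
For example, the original loop could be rewritten with in-place editing. This is a minimal sketch, assuming GNU sed's -i; note that each line of file a is treated as a regular expression here, so lines containing /, ., or other special characters would need escaping:

while read LINE
do
    # delete every line of fileName matching the current pattern, editing in place
    sed -i -e "/$LINE/d" fileName
done < "$1"

This avoids the grep/cp round trip, but it still rescans the large file once per pattern, so it will not be dramatically faster than the original approach.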

Perhaps use "fgrep -v -f", e.g....

$ head file[12]
==> file1 <==
aaa
bbb
ccc

==> file2 <==
111
aaa
222
ccc
333
bbb
444

$ fgrep -v -f file1 file2
111
222
333
444
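
Because fgrep -f reads all the patterns at once and matches them in a single pass over the input, this scales much better than looping over the 9,000 lines one at a time. A possible way to apply it to all 50 large files, as a sketch (filea and the big*.txt pattern are placeholder names):

for f in big*.txt
do
    # keep only the lines of $f that do not contain any line of filea
    fgrep -v -f filea "$f" > "$f.new" && mv "$f.new" "$f"
done

If only exact whole-line matches should be removed, adding -x (i.e. grep -F -x -v -f filea) restricts the match to complete lines.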