Removing lines from large files.. quickest method?

Hi

I have some files that can contain anything up to 100k lines - e.g. file100k.
I have another file called file5k, and I need to produce filec, which will contain everything in file100k minus whatever matches in file5k.

i.e.

file100k contains:
1FP
2FP
3FP

file5k contains:
2FP

so filec should end up containing 1FP and 3FP.

I would normally do a grep pattern search inside a for loop or something, so that I end up writing the entire contents of file100k to filec except for anything found in file5k.
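
For illustration, the kind of loop I mean is roughly this (a simplified sketch, not my exact script):

# simplified sketch: one grep pass over the big file per pattern in file5k
cp file100k filec
while read -r pattern; do
    grep -v "$pattern" filec > filec.tmp && mv filec.tmp filec
done < file5k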

The problem is that searching 100k entries against 5 thousand patterns takes some time with the normal Unix tools (it can take 10-15 minutes for one of these 100k files), and I am wondering if there is a way to do this faster - maybe with a Perl command or something.

Hope I am making sense... can you help out??

Play around using simple grep with -v and -f.
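
For example, assuming the entries in file5k are complete lines rather than partial patterns, something like this should do it in a single pass:

grep -v -x -F -f file5k file100k > filec

-f reads the patterns from file5k, -F treats them as fixed strings rather than regexes, -x matches whole lines only, and -v inverts the match so only the non-matching lines of file100k go into filec.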

Try sorting the files and use comm:

comm -23 file100k file5k
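
For example (comm expects both inputs sorted, and the output comes back in sorted order rather than the original order - the .sorted names here are just placeholders):

sort file100k > file100k.sorted
sort file5k > file5k.sorted
comm -23 file100k.sorted file5k.sorted > filec

The -23 suppresses the lines unique to file5k and the lines common to both files, leaving only the lines unique to file100k.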