Hi all,
I have a big file (about 6 million rows) and I have to delete the lines that match patterns stored in a small file (about 9,000 rows). I have tried this:
while read line
do
    # quote "$line" so patterns with spaces don't break the grep
    grep -v "$line" big_file > ok_file.tmp
    mv ok_file.tmp big_file
done < small_file
It works, but it is very slow.
How can I do the same thing in less time?
PS I tried sed -i, but it doesn't work on AIX.
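For reference, the loop above rereads the big file once per pattern (about 9,000 full passes). grep can read all the patterns at once with -f, so a single pass usually does the same job; a minimal sketch, assuming the small file holds one pattern per line:

```shell
# Single pass instead of one pass per pattern:
#   -v  print only the lines that do NOT match
#   -f  read every pattern from small_file at once
# Add -F as well if the small file contains fixed strings rather than regexes.
grep -v -f small_file big_file > ok_file.tmp && mv ok_file.tmp big_file
```

Like the original loop, this removes any big_file line that merely contains one of the patterns, not just exact whole-line matches.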
Thanks in advance
---------- Post updated at 03:03 PM ---------- Previous update was at 11:44 AM ----------
Just for information,
I resolved my problem with a Perl script......very fast (2 minutes instead of 2 hours).