Delete rows from big file

Hi all,
I have a big file (about 6 million rows) and I have to delete from it the rows that match entries stored in a small file (about 9,000 rows). I have tried this:

while read line
do
  # remove every line of big_file that matches the current pattern,
  # then replace big_file with the filtered copy
  grep -v "$line" big_file > ok_file.tmp
  mv ok_file.tmp big_file
done < small_file

It works, but it is very slow.
How can I do the same thing in less time?

PS: I tried sed -i, but it doesn't work on AIX.

Thanks in advance

---------- Post updated at 03:03 PM ---------- Previous update was at 11:44 AM ----------

Just for information:
I solved my problem with a Perl script, and it is very fast (2 minutes instead of 2 hours).

It would be nice if you showed your solution here so others can benefit from it too. That's the spirit of this forum: to share knowledge. Thanks.

Also: Please use code tags, thanks!
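In the meantime, for anyone who lands here with the same problem: the loop above is slow because it rescans the whole 6-million-row file once per pattern. A single-pass alternative that is usually much faster is to hand grep the entire pattern file at once with -f. A minimal sketch, assuming small_file holds fixed strings (one per line) rather than regular expressions:

# -F: treat the patterns as fixed strings, -v: keep only non-matching lines,
# -f small_file: read all ~9000 patterns in one pass
grep -F -v -f small_file big_file > ok_file
mv ok_file big_file

If the comparison should apply to a single field only, a hash lookup in awk or Perl (as in the script below) is the usual approach.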

Hi all,
below is the Perl script I used:

#!/usr/bin/perl
use strict;

# First pass: collect the account IDs (first pipe-separated field) of every
# row whose 13th field is 'Compensation'.
my %accounts;
open(FILE, '<', $ARGV[0]) or die "Cannot open $ARGV[0]: $!";
while (<FILE>) {
  chomp;
  my @line = split('\|', $_);
  if ($line[12] eq 'Compensation') {
    $accounts{$line[0]} = 1;
  }
}
close(FILE);

# Second pass: write out only the rows whose account ID was not collected above.
open(FILE, '<', $ARGV[0]) or die "Cannot open $ARGV[0]: $!";
open(OUT, '>', 'Report.CSV') or die "Cannot open Report.CSV: $!";
while (<FILE>) {
  my @line = split('\|', $_);
  if (! exists $accounts{$line[0]}) {
    print OUT $_;
  }
}
close(FILE);
close(OUT);
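For completeness, this is how the script is invoked; the script and input file names below are only placeholders:

# save the script as e.g. filter_compensation.pl, then run it against the
# pipe-delimited input file; the filtered rows are written to Report.CSV
perl filter_compensation.pl accounts.psv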