Perl: Large amount of data put into an array

This basic code works.

I have a very long list, almost 10,000 lines, that I am building into the array. Each line has either 2 or 3 fields, as shown in the code snippet. The array elements are static (for a few reasons that are out of scope of this question); the list has to be "built in".

It runs very well in a *nix environment but slows down in the Windows environment.

The question is: what can I do to speed things up?

@MYARRAY = ("field1 field2 field3", "filed1 field2" ........ 10000 lines);
foreach $eachline (@MYARRAY) {
   if ($eachline =~ /\b$ARGV[0]\b/) {
     print "$eachline\n";
   } 
}

Maybe eliminate the double handling by doing something akin to this in shell:

grep <pattern> <<EOF
field1 field2 field3-line1
field1 field2 field3-line2
...
EOF
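
In Perl itself, a minimal sketch of the same idea (assuming the list really must be built into the script) is to keep the lines in the __DATA__ section and grep the DATA handle directly, so no named array is built first; the field values here are placeholders:

#!/usr/bin/perl
use strict;
use warnings;

# Read the built-in list straight from the DATA handle and print
# only the lines that contain the first command-line argument.
print grep { /\b$ARGV[0]\b/ } <DATA>;

__DATA__
field1 field2 field3-line1
field1 field2 field3-line2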

With so many lines, you would normally read them from a file!
Then instead of

open (FH, "<file");
@MYARRAY = <FH>;
foreach $eachline (@MYARRAY) {
   chomp $eachline;
   if ($eachline =~ /\b$ARGV[0]\b/) {
     print "$eachline\n";
   }
}

You save memory with

open (FH, "<file");
foreach $eachline (<FH>) {
   chomp $eachline;
   if ($eachline =~ /\b$ARGV[0]\b/) {
     print "$eachline\n";
   }
}

and in this case even save the loop with

open (FH, "<file");
print grep (/\b$ARGV[0]\b/, <FH>);

Actually, the program would be significantly quicker if you read the file using a while loop. Here's a small test to prove it:

[user@host ~]$ seq 1 1000000 > file
[user@host ~]$ time perl -e 'open I, "< file"; for (<I>) {$i++}; close I; END { print "$i\n" }'
1000000

real    0m0.563s
user    0m0.546s
sys     0m0.046s
[user@host ~]$ time perl -e 'open I, "< file"; while (<I>) {$i++}; close I; END { print "$i\n" }'
1000000

real    0m0.156s
user    0m0.171s
sys     0m0.015s
[user@host ~]$

IMHO, though visually this is not a loop, technically it is, as grep "evaluates the block or expression for each element of list", as per perldoc.

All these snippets may be pretty memory-intensive: in every case the filehandle is being read in list context, so you end up consuming (and storing in memory) the entire file at once.
As balajesuri has already pointed out, reading the handle in scalar context would be much better.
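
A minimal sketch of that scalar-context version, assuming the data lives in a file named "file" as in the snippets above: the readline in the while condition fetches one line per iteration, so only the current line is held in memory.

open (my $fh, '<', 'file') or die "Cannot open file: $!";
while (my $eachline = <$fh>) {    # scalar context: one line at a time
   chomp $eachline;
   if ($eachline =~ /\b$ARGV[0]\b/) {
     print "$eachline\n";
   }
}
close ($fh);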

The only thing faster than the scalar loop is to memory-map the file in Perl so it becomes one big string in memory, and not by much, as many OSes read flat files via mmap() anyway (automatic buffering in RAM via virtual memory, using no swap).
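
As a sketch of that approach, assuming the File::Map module from CPAN is available (it is not part of core Perl), the file can be mapped into a scalar and scanned with a single /mg match; the filename 'file' is again a placeholder:

use strict;
use warnings;
use File::Map qw(map_file);

# Map the file into $map without copying it into Perl's heap;
# the OS pages it in on demand.
map_file my $map, 'file';

# Scan the mapped string line by line for the pattern.
while ($map =~ /^(.*\b$ARGV[0]\b.*)$/mg) {
    print "$1\n";
}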

It's a classic case of advanced tools doing expensive favors for you! But there is a base cost below which Perl-level code cannot go.

Compressing the file might speed the flow out of a pipe, as CPUs are so much faster than disks. The old compress is faster than gzip -1. However, if the file gets referenced often on the same host, it may already be cached in RAM.
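
A sketch of reading through a decompression pipe, assuming the data has been stored as file.gz (gzip here stands in for whatever decompressor is on the PATH); note that the list form of pipe open may not be available on Windows:

# Open a read pipe from the decompressor; the list form of open
# bypasses the shell entirely.
open (my $fh, '-|', 'gzip', '-dc', 'file.gz')
    or die "Cannot start gzip: $!";
while (my $eachline = <$fh>) {
    print $eachline if $eachline =~ /\b$ARGV[0]\b/;
}
close ($fh);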