speed test: 20,000+ file existence checks too slow

I need to make a very fast file existence checker that takes 20-50K filenames as input.

In the code below, ${file} is a file with a listing of 20,000+ filenames, and test_speed is the script. The results of "time test_speed try" are included as comments.

The normal "test -f" is much, much too slow when run as a system call inside awk or perl, yet a basic grep over 20,000+ lines is super fast. Why does doing a file existence test slow it down so much?

Yes, I am on try 55 and I still cannot get this thing to go faster. I think try 55 would be very fast, but I cannot actually pass a file listing of 20,000+ names into a for loop because I run out of memory. Does anyone have ideas on how to speed up a file existence check inside awk, perl, or shell?
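For context, the pattern my tries keep boiling down to looks roughly like this (a sketch; the filename is in field 10, and system() spawns a shell for every single line, which seems to be where the time goes):

awk '{ if (system("test -f \"" $10 "\"") == 0) print 1; else print 0 }' ${file}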

This would be fast if it actually worked.

How can I pipe into parameter $1?

awk '{print $10}' ${file} | if [ -f $1 ];then echo 1; else echo 0; fi

How can you pipe into an if statement?

Write it in C.

You can't, not like that: an if statement is not a loop, and [ -f $1 ] doesn't read standard input; $1 is a positional parameter of the script, not the piped data.
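What does work is reading the stream line by line. A minimal sketch of that idiom (it streams, so the whole 20,000-line listing is never held in memory, and since [ and echo are shell builtins, nothing extra is forked per file):

awk '{print $10}' ${file} | while IFS= read -r f; do
    if [ -f "$f" ]; then echo 1; else echo 0; fi
done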

Don't pipe "it" into an if statement; instead, create the shell script on the fly and pipe that:

awk '{print "if [ -f \"" $10 "\" ]; then echo 1; else echo 0; fi"}' ${file} | sh
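This is fast because only one sh process is started, and [ is a builtin in sh, so each check costs a stat() rather than a fork. One caveat: filenames containing double quotes or $ will break the generated script, and the $10 field split breaks on names with spaces in every variant here. If perl is available, the same idea works with no generated script at all (a sketch; -f is perl's built-in file test):

awk '{print $10}' ${file} | perl -lne 'print((-f $_) ? 1 : 0)'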