I don't think there would be any real difference here; there might be a very marginal improvement in the second case due to the number of comparisons you are doing, but I would expect that to be marginal. You are, after all, incrementing the same number of times in both cases and calling close() the same number of times.
Just curious, is this shutdown code or is it after forking that you are closing the open fds?
This is for closing the open fds after forking. I was told that you generally need to close only 64 fds instead of going up to RLIMIT.rlim_cur after forking. Any thoughts?
I would code your first snippet but compile with an optimizer. These days optimizers will unroll loops when unrolling is advantageous. That particular loop is not a great candidate for unrolling anyway. A better candidate would be:
for(i=0; i<100; i++) A[i]=0;
Most superscalar cpus can execute:
A[i]=0;
A[i+1]=0;
A[i+2]=0;
simultaneously. How deep it can go depends on the cpu and that's why leaving unrolling to an optimizer is a good idea. The optimizer should know the target cpu. But your case involved a system call which is different. You're only saving some loop overhead.
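To make the idea concrete, here is a sketch of what a hand-unrolled version of the zeroing loop above might look like, with a cleanup loop for the leftover iterations. This is illustrative only; in practice you would leave it to the optimizer (e.g. gcc -O2), and the function and array names here are made up for the example.

```c
#include <assert.h>

#define N 100

static int A[N];

/* Hand-unrolled zeroing loop: four independent stores per iteration,
   which a superscalar CPU can issue in parallel. */
static void zero_unrolled(int *a, int n)
{
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        a[i]     = 0;
        a[i + 1] = 0;
        a[i + 2] = 0;
        a[i + 3] = 0;
    }
    /* Cleanup loop for the remaining n % 4 elements. */
    for (; i < n; i++)
        a[i] = 0;
}
```

The cleanup loop is the part people tend to forget when unrolling by hand, and it is one more reason to let the compiler do it.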
Apparently, if you explicitly unroll a loop when it is not advantageous, most optimizers will not reroll the loop. At least this was the case circa 1998 when my copy of "High Performance Computing" was published. If you have that book, see chapter 8, "Loop Optimizations" and chapter 9, "Understanding Parallelism". This is still a great book and it's not just for Fortran programmers.
Anyway, if you are not in control of which fds might be open, you need to loop up to OPEN_MAX closing them. High fds might have been opened and then setrlimit() called to lower the max fd.
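As a sketch of that loop, assuming a POSIX system: close everything above stderr up to the current limit from sysconf(_SC_OPEN_MAX). The fallback value of 1024 when the limit is indeterminate is an arbitrary guess, and per the caveat above, this can still miss fds opened before setrlimit() lowered the limit.

```c
#include <unistd.h>

/* Sketch: close every fd above stderr (fd 2) after fork(). */
static void close_fds_after_fork(void)
{
    long max = sysconf(_SC_OPEN_MAX);
    if (max < 0)
        max = 1024;        /* limit indeterminate; arbitrary fallback */
    for (long fd = 3; fd < max; fd++)
        close(fd);         /* EBADF on never-opened fds is harmless */
}
```

Some platforms offer a cheaper alternative (e.g. closefrom() on the BSDs and Solaris, or iterating /proc/self/fd on Linux), which avoids the cost of calling close() on thousands of fds that were never open.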