Which is more expensive?

I have the following code snippets. Which one of these would be more expensive?

#1
        for (int fd = 0; fd < 1024; ++fd)
            close(fd);
#2
        for (int fd = 0; fd <= 128; fd += 8)
        {
            close(fd);
            close(fd+1);
            close(fd+2);
            close(fd+3);
            close(fd+4);
            close(fd+5);
            close(fd+6);
            close(fd+7);
        }

But isn't it the case that this will only close fds up to 135? :confused:

My very bad.

I actually meant a loop which runs 1024 times versus another loop which runs 128 times, closing 8 fds on each iteration.

        for (int fd = 0; fd < 1024; fd += 8)
        {
            close(fd);
            close(fd+1);
            close(fd+2);
            close(fd+3);
            close(fd+4);
            close(fd+5);
            close(fd+6);
            close(fd+7);
        }

I know the close call gets called 1024 times. But what about the looping part? Is there any benefit at all?

I don't think there would be any real difference here. There might be a very marginal improvement in the second case because you do fewer loop comparisons, but I would expect that to be negligible. You end up doing roughly the same amount of arithmetic either way and calling close() the same number of times, and the cost of those 1024 system calls dwarfs whatever you save on loop control.
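
If you really want to know, the honest answer is to measure it. Here is a rough timing sketch (my own illustration, not something from this thread; it starts at fd 3 so stdout stays usable for the printf, and the unrolled loop closes a few extra fds at the top, which is close enough for a rough comparison):

        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        static double now_sec(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            double t0 = now_sec();
            for (int fd = 3; fd < 1024; ++fd)       /* plain loop */
                close(fd);
            double t1 = now_sec();

            for (int fd = 3; fd < 1024; fd += 8) {  /* unrolled by hand */
                close(fd);      close(fd + 1);
                close(fd + 2);  close(fd + 3);
                close(fd + 4);  close(fd + 5);
                close(fd + 6);  close(fd + 7);
            }
            double t2 = now_sec();

            printf("plain: %f s, unrolled: %f s\n", t1 - t0, t2 - t1);
            return 0;
        }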

Just curious, is this shutdown code or is it after forking that you are closing the open fds?

This is for closing the open fds after forking. I was told that after forking you generally only need to close the first 64 fds instead of going all the way up to the rlim_cur value from getrlimit(RLIMIT_NOFILE). Any thoughts?

You need to close the file descriptors you need to close. :slight_smile:

The number 64 comes from the hard-coded file descriptor limit in some editions of UNIX.

There are alternatives...

  1. If the reason is that the program is going to call exec(), then the code that opens the file descriptors could set the close-on-exec flag (see the sketch after this list).

  2. Modular threaded code could use pthread_atfork() to set up a callback that closes a file descriptor if required (also sketched below).
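
For illustration, a rough sketch of both alternatives. The names set_cloexec(), log_fd, and close_log_fd_in_child() are mine, purely for the example:

        #include <fcntl.h>
        #include <pthread.h>
        #include <unistd.h>

        /* 1. Mark a descriptor close-on-exec so exec() closes it automatically.
         *    (Where available, open(path, O_RDONLY | O_CLOEXEC) sets it atomically.) */
        static int set_cloexec(int fd)
        {
            int flags = fcntl(fd, F_GETFD);
            if (flags == -1)
                return -1;
            return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
        }

        /* 2. pthread_atfork(): register a handler that runs in the child right
         *    after fork() and closes a descriptor the child should not keep. */
        static int log_fd = -1;

        static void close_log_fd_in_child(void)
        {
            if (log_fd != -1) {
                close(log_fd);
                log_fd = -1;
            }
        }

        static void install_fork_handler(void)
        {
            /* no prepare/parent handlers are needed here, so pass NULL for those */
            pthread_atfork(NULL, NULL, close_log_fd_in_child);
        }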

I would code your first snippet but compile with the optimizer turned on. These days optimizers will unroll loops when unrolling is advantageous. That particular loop is not a great candidate for unrolling anyway. A better candidate would be:

for(i=0; i<100; i++) A[i]=0;

Most superscalar CPUs can execute:
A[i] = 0;
A[i+1] = 0;
A[i+2] = 0;
simultaneously. How deep it can go depends on the CPU, and that's why leaving unrolling to an optimizer is a good idea: the optimizer should know the target CPU. But your case involves a system call, which is different. You're only saving some loop overhead.
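
To make that concrete, this is roughly the shape of the transformation an optimizer might apply to that loop (my own sketch; the by-4 factor and the zero_array() wrapper are just for illustration):

        /* Illustration: a by-4 unrolled version of "for (i = 0; i < n; i++) A[i] = 0;".
         * A decent optimizer can generate this itself, tuned to the target CPU. */
        static void zero_array(int *A, int n)
        {
            int i;
            for (i = 0; i + 3 < n; i += 4) {
                A[i]     = 0;   /* four independent stores that a superscalar */
                A[i + 1] = 0;   /* CPU can issue in parallel                  */
                A[i + 2] = 0;
                A[i + 3] = 0;
            }
            for (; i < n; i++)  /* cleanup loop for any leftover elements */
                A[i] = 0;
        }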

Apparently, if you explicitly unroll a loop when it is not advantageous, most optimizers will not reroll the loop. At least this was the case circa 1998 when my copy of "High Performance Computing" was published. If you have that book, see chapter 8, "Loop Optimizations" and chapter 9, "Understanding Parallelism". This is still a great book and it's not just for Fortran programmers.

Anyway, if you are not in control of which fds might be open, you need to loop up to OPEN_MAX closing them. High fds might have been opened and setrlimit() then called to lower the maximum fd, so the current soft limit is not a safe upper bound.
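
For what it's worth, a minimal sketch of that kind of cleanup loop after fork() (keeping fds 0-2 open and the 1024 fallback are my assumptions, not from this thread):

        #include <limits.h>
        #include <unistd.h>

        static void close_inherited_fds(void)
        {
        #ifdef OPEN_MAX
            long max_fd = OPEN_MAX;
        #else
            /* Note: sysconf(_SC_OPEN_MAX) usually reflects the current
             * RLIMIT_NOFILE soft limit, which is exactly the caveat above. */
            long max_fd = sysconf(_SC_OPEN_MAX);
            if (max_fd < 0)
                max_fd = 1024;          /* arbitrary fallback */
        #endif
            for (long fd = 3; fd < max_fd; ++fd)
                close(fd);              /* EBADF on never-opened fds is harmless */
        }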