File Descriptors + cron

Hi All,

This thread is meant to draw out more information from the experts on cron jobs and their associated file handles.

So, here is the question.

There is a constant 'n' that is the maximum number of file handles allotted to a process 'p'.

Will there be any difference in the maximum number of file handles allotted to the process 'p' depending on whether it is running as a foreground process or as a process spawned by the cron daemon?

If so, why is there a difference? In other words, what constraints are placed on a process spawned by the cron daemon compared with a foreground process kicked off from the terminal?

Thanks! :slight_smile:

Depending on the operating system, the number of file descriptors per process is either fixed in the kernel at compile time or configured with a tunable parameter.

As far as UNIX is concerned, what makes one process different from another really comes down to the following...

  1. does it have a controlling terminal attached?

  2. is its parent dead?

  3. is it dead itself? If so, it's a zombie: it has no memory, no file descriptors, and just a minimal entry in the process table.

There are other process-wide details such as priority, effective user, etc., but not much else distinguishes one process from another; even case 2 merely means its parent PID is replaced with 1.

The number of file descriptors is unlikely to change.

The exact particulars vary depending on the OS; I will use HP-UX as an example. The number of possible file descriptors is under the control of setrlimit(2). (A less powerful interface, ulimit(), is also available.) A process cannot have more fds than the "soft" limit. Using setrlimit(2), a process may raise or lower its soft limit, but it cannot raise the soft limit above the hard limit. A process can lower the hard limit; only a root process can raise it. Kernel parameters define the initial values of the hard and soft limits, and even root cannot raise the hard limit above its initial value. The kernel parameters:
maxfiles
maxfiles_lim
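The soft/hard split is visible from the shell, which exposes setrlimit(2) as the ulimit builtin. A quick sketch, assuming a bash-like shell (the value 64 is just an example):

```shell
# Show the current limits on open file descriptors
ulimit -Sn    # soft limit: the ceiling actually enforced
ulimit -Hn    # hard limit: the ceiling on the soft limit

# A process may lower its soft limit, or raise it again
# up to (but never beyond) the hard limit
ulimit -Sn 64
ulimit -Sn    # now reports 64
```

Changes made this way affect only the current shell and any children it spawns afterwards.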

I have cheated a little by picking HP-UX as my sample OS. HP-UX allows dynamic reconfiguration of the kernel, which only root can do. But a root process could, in theory, raise maxfiles_lim, then raise its own hard limit, and then lower maxfiles_lim again. Not all versions of Unix give a root process that much power.

I don't believe that cron fiddles with these limits.

By default, stdin, stdout, and stderr are file descriptors opened during process creation.
If the file limit is 16, for example, then the process has 13 file descriptors left to play with.
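You can see those three descriptors directly; here's a quick check, assuming a Linux system where each open descriptor shows up under /proc (on other systems, lsof -p $$ or pfiles serves the same purpose):

```shell
# fds 0 (stdin), 1 (stdout), and 2 (stderr) are open from the start
ls /proc/$$/fd
```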

In shell scripts, redirection uses up file descriptors. Once the line doing the redirection has completed, the file descriptor is closed, e.g.: ls * > myfile.txt. Redirecting a block of code, like a loop, holds a file descriptor open across many lines of code:

for file in /path/*
do
     cat "$file"
done > myfiles.txt
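A descriptor can also be held open explicitly across many commands with exec; a minimal sketch (the file name /tmp/out.txt is just an example):

```shell
# Open fd 3 for writing; it stays allocated until closed
exec 3> /tmp/out.txt
echo "first line"  >&3
echo "second line" >&3
exec 3>&-    # close fd 3, giving the descriptor back
```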

Cron jobs run without a tty; stdin is the script.

Jim, does that mean there won't be any difference in the number of file descriptors allotted between a foreground process and a cron job?

Thanks for the reply.

But my question is more about differences in the allocation of file descriptors to different kinds of processes (foreground from the terminal versus spawned by cron).

Cool! I had a nagging doubt about whether there could be any differences.

Cleared

Thanks to all !!

Even if a process has a controlling terminal, it still starts with only three file descriptors allocated.

If you are curious, you could write a program that dumps the fstat(2) info for file descriptors 0 through 63 (checking for errors, of course), then run that program from the terminal, under xterm, under nohup, and finally from cron.
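As a rough shell approximation of that program (the original suggestion is a C program calling fstat(2); this sketch assumes Linux's /proc/&lt;pid&gt;/fd instead):

```shell
# Report which descriptors 0..63 are open in this shell,
# and what each one points at
for fd in $(seq 0 63)
do
    if [ -e /proc/$$/fd/$fd ]; then
        echo "fd $fd -> $(readlink /proc/$$/fd/$fd)"
    fi
done
```

Comparing its output when run from a terminal, under nohup, and from cron shows whether the set of open descriptors actually differs.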