Usage of exit() inside a signal handler

Is it ok to use exit() inside a signal handler?
I catch SIGUSR1 in a signal handler and I try to close a file and then exit. The result is inconsistent. Sometimes the process exits, and sometimes it returns to the state it was in before the signal handler was invoked.

Perhaps exit is not legal in a handler function?

T.

Show what your SIGUSR1 signal handler function looks like.

If your signal handler is installed in main(), in the parent, before any fork(), and the children don't change that disposition, then the parent and all of its children will react according to the installed handler.

This is the signal handler function:

static void handle_sigusr1(int sig)
{
    if (DebugFile != NULL) {
        fwrite(DebugBuff.buff_ptr, DebugBuff.buff_len, 1, DebugFile);
        fprintf(DebugFile, "\n Shutting down process........\n");
        fclose(DebugFile);
    }

    exit(0);
}

Tuvia

The signal handler is installed after the fork. This is one of the first things the child processes do after they start. Should this be done before the fork?

Thanks, Tuvia

Install the SIGUSR1 handler in both the child and the parent. That explains why sometimes the process exits and sometimes it seems that nothing has happened: a process that never installed the handler falls back to the signal's default disposition. The default disposition of most signals is to terminate the process anyway, so putting exit() inside your handler is not good signal-handling practice. Normally a signal handler traps a signal and, when it returns, execution picks up from where it left off; that is, the process does not terminate.
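
For what it's worth, here is a rough sketch of installing the handler once, before the fork, so every process inherits it (the names are placeholders, not your code):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void handle_sigusr1(int sig)
{
    /* ... */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handle_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);   /* installed before fork() ... */

    pid_t pid = fork();              /* ... so parent and children inherit it */
    if (pid == 0) {
        /* child: the handler is already in place */
    }
    /* ... */
    return 0;
}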

You should not call standard C library functions like fwrite() in a signal handler. They are not async-signal-safe: if the signal arrives while the process is already inside one of them, the handler can re-enter the library and corrupt its internal state. Try write() instead.
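
For example, a handler restricted to async-signal-safe calls might look roughly like this. I am writing to stderr instead of your DebugFile purely for illustration, and using _exit(), which is async-signal-safe, unlike exit():

#include <unistd.h>

static void handle_sigusr1(int sig)
{
    static const char msg[] = "\n Shutting down process........\n";

    write(STDERR_FILENO, msg, sizeof(msg) - 1);  /* async-signal-safe */
    _exit(0);                                    /* does not flush stdio or run atexit() */
}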

Normally, a good program should have one exit point; that is the reason goto statements were eliminated from good programming practice. It's better to return from the signal handler to the calling code and exit from there.
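
A minimal sketch of that pattern; the flag name, the pause() loop and the printf() are illustrative, not taken from the original program:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigusr1 = 0;

static void handle_sigusr1(int sig)
{
    got_sigusr1 = 1;            /* the only work the handler does */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handle_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    while (!got_sigusr1)
        pause();                /* sleep until some handler runs */

    /* back in ordinary context: stdio and cleanup are safe here */
    printf("\n Shutting down process........\n");
    return 0;                   /* the single exit point */
}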

Suggestion with regard to the original question: I believe signal handlers are roughly intended to give you what amounts to a user-space interrupt service routine. Generally, in those we set variables, move things around, that sort of thing. Unusual results sometimes happen because your context can vary significantly. If, for example, you are in the main "scope" of your process, the context is quite different from when you are in a "wait" state, such as a blocking read on a socket or some other input source.

My first experience of programming in "kernel space" on a BSD kernel was an eye-opener; there are times when one must literally do everything necessary to count and minimize machine cycles. Rules regarding reuse and form can be secondary to the clear and absolute need for efficiency. Examples abound and would probably bore the reader, so I will spare you.

In any event, were I to install handlers for signals such as SIGINT, SIGHUP and so on, I would use the routine to set flags, in the unusual circumstance that I'm working with globals; sometimes I use one if I'm deliberately doing a "blocking read" from something, so that I know my slumber will not be forever. Non-blocking reads, of course, call for a different sort of logic.
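
To sketch the blocking-read case, assuming a got_sigusr1 flag as in the earlier example and a handler installed without SA_RESTART, so that read() really does return early with EINTR (the helper below is hypothetical):

#include <errno.h>
#include <signal.h>
#include <unistd.h>

extern volatile sig_atomic_t got_sigusr1;   /* set by the handler */

ssize_t read_or_shutdown(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;           /* data, or 0 for end of file */
        if (errno != EINTR)
            return -1;          /* a real error */
        if (got_sigusr1)
            return -1;          /* the handler fired: time to shut down */
        /* interrupted by some other signal: just retry the read */
    }
}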

I hope this helps.