signal handling question

Hello all,

I am starting to learn signal handling in Linux and have been trying out some simple programs that deal with SIGALRM. The code below sets a timer to count down; when the timer expires, a SIGALRM is generated. The handler for the signal just increments a variable called count. This repeats until the user hits 'q' on the keyboard. The code is shown below:

#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/time.h>
#include <stdlib.h>

void my_action(int);

volatile sig_atomic_t count = 0;	/* async-signal-safe counter type */

int main(int argc, char** argv)
{
	struct sigaction sigalrm_action;
	struct itimerval timer;
	
	timer.it_interval.tv_sec = 0;	//Deal only in usec
	timer.it_interval.tv_usec = 1000;
	timer.it_value.tv_sec = 0;	//Deal only in usec
	timer.it_value.tv_usec = 1000;	

	sigalrm_action.sa_handler  = my_action;	
	sigemptyset(&sigalrm_action.sa_mask);
	sigalrm_action.sa_flags = SA_RESTART;	/* restart getchar() after each signal */
	
	sigaction(SIGALRM, &sigalrm_action, 0);				

	printf("Hit any key to start, q to exit\n");	
	getchar();	
    	
	if(setitimer(ITIMER_REAL, &timer,NULL) != 0){
		perror("Error starting timer");
		exit(1);
	}    	
	int c;
	while ((c = getchar()) != 'q' && c != EOF)
		;
	printf("Bye bye\n");
	return 0;
}

void my_action(int signum)
{	
	(void)signum;	/* unused */
	count++;
	/* Note: printf() is not async-signal-safe; it is fine for a demo
	   but should be avoided in real signal handlers. */
	printf("Count is %d\n", count);	
}

The problem I am facing is this: when I set the timer for 1000000 usec it works fine (i.e. 1 sec). However, if I keep reducing the interval to 100000, 10000, 1000 usec etc., the timing seems to be too slow. The count variable is not being incremented as fast as it should be. Why is this? I have a hunch I am making some silly mistake here, but I am not sure what it is.

Any help would be greatly appreciated.

Hi,
Just a thought, but I might be totally wrong/off topic: I suspect it has to do with the Linux timer resolution, which is 10 ms (or is it not?), so no matter how small you set your interval, your code will only be run every 10 ms...

PS: Your query reminds me of a "nice surprise" I had when working with timers on Linux :smiley:

andryk is right - the alarm timer expiry (SIGALRM) will be delivered to the process, but only when the process has enough priority to be awakened. In other words, if you ask for a 10 ms sleep on a busy system, your alarm will fire after 10 ms, guaranteed, but the time when your process gets a turn at the CPU is NOT guaranteed, unless your process has elevated (realtime) priority. You have to wait for other processes to give up the CPU before you get it back and can process the signal. This becomes a problem on a busy system, or when the duration of the wait is less than a quantum slice and there is at least one other process that needs the CPU.

I really don't recommend it, but on a busy system you may have to renice your program to a very high priority to get the results you asked for.

I get what you guys mean.

I ran some test code using usleep and found some interesting results.
When I set usleep to 0.1 sec, the actual sleep time deviates from the set point (0.1 sec) by about +5%. The error jumps to about +40% when I set usleep to 0.01 sec, and to about +300%(!!!!) at 0.001 sec.

Looks like as the sleep interval shrinks, the relative error keeps increasing. This could be due to the system timer resolution mentioned above, as well as the priority-based scheduling employed by Linux.

Anyway thanks a lot guys.

Regarding the timer resolution, refer to time(7) in the man pages. It looks like the resolution has been configurable since kernel 2.6.13. A snippet:

"since kernel 2.6.13, the HZ value is a kernel configuration parameter and can be 100, 250 (the default) or 1000, yielding a jiffies value of, respectively, 0.01, 0.004, or 0.001 seconds."

Cool, thanks for sharing the info!

Interesting!

This kind of seems to maintain the integrity which is definitely needed.

I have a question - might be foolish.

So, when the process is sleeping it is in the 'sleep' state, and after SIGALRM it moves to the 'ready to run' state and gets a slot in one of the scheduler's run queues based on its priority.

So, now what happens to a process (A) when some information/signal is delivered to it while it is in the run queue but not actually running?

How does the kernel manage this? Just because some other process (B) is urging process (A) for a reply, process (A) can't jump ahead in its run queue and start running.

Exactly. It is just like when you are in line at a ticket window: your kid comes up and says we need one more ticket, but you cannot do anything about it until it is "your turn". The signal stays pending against the process until then.

And "jiffies" resolution may or may not help on a single-CPU box. The process still cannot process a signal until it is the currently running process.