The problem: I need to work with large arrays, and after one of my structures grew in size, my program started getting a segmentation fault.
My code where I allocate the arrays:
#include <stdio.h>
#include <stdlib.h>

/* R1, R2, MAX_R1 and MAX_R2 are defined elsewhere in the program */
static R1 *tarr;
static R2 *rarr;

int proc_init_mem(void)
{
    const int t_sz = sizeof(R1) * MAX_R1;
    const int r_sz = sizeof(R2) * MAX_R2;

    tarr = malloc(t_sz);
    rarr = malloc(r_sz);
    if (tarr == NULL || rarr == NULL)
        return(-1);

    printf("tarr sz: %i\n", t_sz);
    printf("rarr sz: %i\n", r_sz);
    return(0);
}
When I run the program, I get these printouts:
tarr sz: 11280000
rarr sz: 20200000
and then the program dies; according to the debugger, it is somewhere in fgetc (libc) while reading in config parameters. If I decrease MAX_R1 or MAX_R2, everything is fine.
It is not clear to me which resource limit I am hitting.
This is Ubuntu 8 with gcc 4.2.4; the program is mostly C, with some C++ libraries mixed in.
The rlimit parameters are as follows:
 #    soft       hard       description
 0    -1         -1         per-process CPU limit
 1    16777216   16777216   largest file created
 2    -1         -1         max size of data segment
 3    33546240   33546240   max size of stack segment
 4    -1         -1         largest core size
 5    -1         -1         largest resident set size (swapping related)
 6    8191       8191       number of processes
 7    1024       1024       number of open files
 8    32768      32768      locked-in memory address space
 9    -1         -1         address space limit
10    -1         -1         max file locks
11    8191       8191       max number of pending signals
12    819200     819200     max bytes per message queue
13    0          0          nice priority
14    0          0          max realtime priority
(I just ran getrlimit in a loop from 0 through 14 to produce this list and added the annotations by hand.)
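For reference, the loop looked roughly like this (a sketch from memory, not the exact code):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    int i;

    /* dump soft and hard limits for resources 0..14;
       RLIM_INFINITY prints as -1 when cast to long */
    for (i = 0; i <= 14; i++) {
        if (getrlimit(i, &rl) == 0)
            printf("%d - %ld %ld\n", i,
                   (long)rl.rlim_cur, (long)rl.rlim_max);
    }
    return(0);
}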
I tried running ulimit -s 32760 and then executing my program, but it did not help. BTW, 32760 was the biggest value ulimit would accept. I also tried running as root, hoping root would not be subject to the limits, but the same SIGSEGV happened.
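For what it's worth, 32760 KB is exactly the 33546240-byte hard stack limit from the table above, which I assume is why ulimit refused anything bigger. The in-process equivalent of that ulimit call would be roughly the following (just a sketch to show what I mean; the function name is illustrative and I have not wired this into the program):

#include <sys/resource.h>

/* raise the soft stack limit to 32760 KB, i.e. up to the hard limit */
static int raise_stack_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return(-1);
    rl.rlim_cur = 32760L * 1024L;    /* setrlimit takes bytes, unlike ulimit's KB */
    return setrlimit(RLIMIT_STACK, &rl);
}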
Does anyone know how to deal with this type of problem? Any insight will be appreciated.