Hello,
I am not that experienced with Linux, and I am currently facing some issues.
The application I'm working on uses hundreds of threads. To optimize memory usage, I am putting all my data inside a shared object (.so).
The steps for this are as follows:
- a C file (generated from an XML file) containing many arrays (const char c1[] = "string";, const char *arr_const[] = {"s1", "s2", ..., "sN"};, etc.)
- a shell script that compiles it into a shared object:
gcc -g -m64 -Wall -fPIC -c bigfile.c
gcc -m64 -shared -o <custom_new_name>.so bigfile.o
- some code (run on each thread) that opens the shared object and obtains references to my arrays:
void *handle;
handle = dlopen("<full_path_to_so>", RTLD_NOW);
char **local_var = (char **) dlsym(handle, "arr_const");
NOTE: the handles are not unloaded (no dlclose) until my application exits.
Using shared objects is mandatory for two reasons:
- optimized memory usage when launching thousands of threads
- the ability to create new shared objects from new XML files and add them to the application after it has been installed on client machines (with the help of text config files).
My problem is that the process described above does not work on my x86_64 CentOS 5.8 machine (dlopen returns NULL). The entire application is built with 64-bit settings.
On 32-bit CentOS, with the shared objects compiled using -m32 instead of -m64 and the application built with 32-bit settings, everything works fine.
Any ideas?
Thank you