Shared library with access to shared memory.

Hello.
I am new to this forum and I would like to ask for advice about low-level POSIX programming.

I have to implement a POSIX-compliant C shared library.
A file will contain some variables, and the shared library will have some functions that need those variables.
There is one special requirement: access to the variables should be as quick as possible, which means the variables should be loaded into memory and made available to the library.

I think I can create a program using IPC shared memory (the shmget family of functions) to create a shared memory region available to the processes using the library.
This program would read a file with the variables and write them into the shared memory.
As I have read, once you create the shared memory, the process can be stopped and the region is still available to other processes.
The program would also be re-run to apply changes in the configuration file, updating the shared memory.
What do you think about this solution? Is this feasible?

I have also read about mmap as a more modern solution. Would you recommend it?

Thank you very much for your comments.

Is this homework?

No, it isn't.
In fact, I am asking because my POSIX experience comes from homework some years ago. For that reason I would appreciate some feedback about IPC or other possible options.

It is a basic project, but it needs to be as fast and reliable as possible.
Also, it will run on Solaris 10, so the deployment could be difficult.

I'd use mmap rather than shm, since this would create an actual file that could be shared, and would be persistent across reboots. The contents of the file would effectively be memory.

How to do it depends on how you want to do it, there's no "create variable inside file" system call. You'd be building your own data structure and access methods. You might want to employ read-write locks for speed instead of global locks.

Thanks Corona for your opinion.

I have been reading about mmap, and I understood that you need a process running to keep the file in memory. I have been doing some testing, and with shmget I can share memory, and when the process stops the shared memory is still available. Am I right?

I like the idea of a process which makes the memory available the first time and does not need to keep running afterwards. This operation could also be delegated to the library itself the first time it is called. Is this possible with mmap?

Do not try to outsmart the operating system. Whether anything is in memory is up to it -- things will get paged out as needed. This includes shm segments too, so having it "purely in RAM" is an illusion. Things which are frequently used will stay in memory.

Besides, with mmap, you get a file. With shm, you don't -- reboot and it's gone forever.

mmap may actually be more efficient, since the kernel doesn't need to back it with swap, only with the file.

Do not try to outsmart the operating system. Whether anything is in-memory, be it code or data, is up to it. There are reasons for this. Just dumping everything into memory isn't always for the best.

On some systems you can tell mmap to preload the data, or you can mlock() the segment you want to keep in, but this is generally not necessary -- especially when the amount of data isn't huge. Frequently-used data will stay in memory.

If the amount of data is huge, you don't want it all in memory, only the frequently used bits which will actually fit, for efficiency reasons. Large database systems often rely on mmap and the system deciding intelligently which things belong in memory.