shared memory with linked list??

Is this possible? If so, please share how.

This is entirely possible.

  • Create shared memory region
  • Place the head of the linked list there at a known location (e.g. at the start of the shared region), being careful to make sure that both the linked list itself and the data referenced by it are in the shared memory, not on your heap/stack.

A bit more of a programmatic explanation would help!

"You're welcome"?

The thing (or rather, the combination of things) you're asking about is quite extensive and a bit niche. You'll have to do research for yourself on this one - don't expect anyone to hand the answers to you on a plate.

There are tutorials on the web for both subjects, so take a look. Search for "beej's guide to ipc" and you'll find some good info about shared memory, and it won't be hard to find examples of how to create linked lists. Combine these two (remembering that you can't use the linked list to "point" to data in your program's address space, you have to ship it all out to the shared memory region) and you're there.

Thank you

I do not think this scenario is possible on systems with protected, managed memory, like Linux or any other flavor of UNIX.

Recall that a shared memory segment is identified by a key of the key_t datatype and is operated on through syscalls.

You cannot perform memory operations on it the way you do on normal program variables (be they on the heap, the stack, or in .bss, .data, or .rodata),
like taking &my_variable.

Now a typical linked list node looks like:


struct myNode {
    int info;

    // ...

    struct myNode *nextNode;
};

Now, could anyone explain whether you can populate the pointer (nextNode) with an address inside shared memory? If yes, how? I want to learn. :slight_smile:

However, the same thing is very much possible on flat-memory systems like VxWorks. As food for thought, just think how.

@praveen, answer if you could.

That makes no sense - the underlying memory model is hidden from the application programmer, so it doesn't matter. All the application sees is a flat address space, whether you're accessing it through virtual memory or not.

So think how you would do it for VxWorks... the same would work on any Linux variant (probably). Either way, creating data structures in shared memory is possible.

Sorry, but that's just plain incorrect. Where on earth did you get that from? When you get shared memory, you get a pointer to it by calling shmat(2). Shared memory is just normal memory. Otherwise, what would be the point...?

You can do anything to shared memory you can do to normal memory in your r/w data section, including populating it with a linked list.

Just to be sure - John is correct.

In the case of read/write shared memory you can do anything to shared memory that you can do in process-private read/write memory. Period.

How else do you think an application like Oracle could use a single "SGA" (system global area) to serve hundreds of user client processes simultaneously?

:slight_smile: Really good discussions so far.

Somehow, I have not seen this idea actually materialize into feasible C code. I wonder how to populate the nextNode pointer of a linked list so that it can be accessed (for read/write) by multiple concurrent processes without getting a SIGSEGV (signal 11) :slight_smile: .

Regarding shmat(): if anyone thinks they can provide an example of a scalable linked list implementation that is fully shared across different concurrent processes (not threads), nothing would be better. Please go ahead and provide one.

I am, however, of the opinion that if more than one process uses the shared-memory-resident linked list, the pointers obtained in one process would not work in the next. That's because different processes might have the shared memory mapped at different places in their respective address spaces, as returned by their respective calls to shmat().

Apart from that (even if it were possible), I do not see any good reason why practical code would create such data structures within a kernel-managed area that has a cap on its maximum size (is that defined by SHMMAX?). That prohibits such implementations from being scalable, a must for any serious implementation.

Thanks anyway for all the opinions expressed, whatever it is :wink: .

I don't agree.

There's a fundamental trick to avoiding signal 11. Don't use memory you don't have. It's really just that simple.

As for making memory consistent across different processes, there are two ways.
1) Always map the memory at the same place. You can accomplish this with mmap's MAP_FIXED flag, or shmat's shmaddr parameter. This means a little advance planning to avoid busy areas, but it can work reasonably well.

2) Don't use pointers. Make the list a big table and use array indexes. Voila, your code no longer cares what area of memory your list gets put in.

#include <sys/mman.h>   /* mmap */
#include <unistd.h>     /* getpagesize */

#define LIST_END        (~(0UL))

typedef struct node
{
        unsigned long int next;
        unsigned long int prev;
        struct
        {
                char foo[64];
                int bar;
                double baz;
        } payload;
} node;

#define HEAD (shared_list[0])
...

/* MAP_SHARED is what makes the region visible to forked children;
   MAP_ANON alone is invalid on Linux. */
node *shared_list=mmap(NULL, getpagesize()*16, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANON, -1, 0);
HEAD.next=LIST_END;
HEAD.prev=LIST_END;

Of course it'll need to be mutexed somehow, or used with atomic operations etc.

I've seen a working implementation of memory sharing across separate computers accomplished by setting up a special memory area that always exists in the same place, overloading the new operator, copying back and forth with RPC-like network calls, and mutexing properly. And these objects were real C++ objects with virtual members! A linked list is trivial in comparison.

You should really read their manual pages sometime. shmat() can be induced to put things where you ask it to, not where it pleases.

What makes you think it uses kernel memory?

The compile time limit isn't fixed. Just echo a bigger value into /proc/sys/kernel/shmmax to raise it at runtime.

You can also map entire files and parts of files into memory with mmap, allowing a list even larger than physical memory to be used. This is often how large database implementations access their files. (as well as how nearly all program code is loaded.)

In computer science, a linked list is a data structure that consists of a sequence of data records such that in each record there is a field that contains a reference (i.e., a link) to the next record in the sequence.
Linked lists are among the simplest and most common data structures; they provide an easy implementation for several important abstract data structures, including stacks, queues, hash tables, symbolic expressions, and skip lists.
The principal benefit of a linked list over a conventional array is that the order of the linked items may be different from the order that the data items are stored in memory or on disk. For that reason, linked lists allow insertion and removal of nodes at any point in the list, with a constant number of operations.
On the other hand, linked lists by themselves do not allow random access to the data, or any form of efficient indexing. Thus, many basic operations - such as obtaining the last node of the list, or finding a node that contains a given datum, or locating the place where a new node should be inserted - may require scanning most of the list elements.
Linked lists can be implemented in most languages. Languages such as Lisp and Scheme have the data structure built in, along with operations to access the linked list. Procedural languages, such as C, or object-oriented languages, such as C++ and Java, typically rely on mutable references to create linked lists.