Newbie question about using a cluster for memory

Newbie question about clusters and memory.

Is there a way, using a cluster (or any other Linux feature/technology), to link up a bunch of PCs so that an app thinks it has more memory than is available on just one local machine?

For example, we have multiple surplus PCs with 512MB RAM (the maximum that hardware supports). Is there a way to cluster or link them up so that an app would think it has more than 512MB of RAM available, say 1GB or anything else?

I am fully aware that the speed in such a hypothetical situation would be less than ideal, but speed is not the primary concern at the moment. The primary concern is available memory.

I (think I) know how to link up machines so that disk space on several PCs is available to an app, giving the app more space than the local machine has on its own. But is there a way to do this with memory, using a cluster or any other Linux feature or technology?
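(For what it's worth, what I mean by linking up disk space is something along the lines of an NFS export/mount, roughly like this; the hostnames and paths are just made-up examples:)

```
# On the PC donating its disk (call it node1): add a line to /etc/exports
/srv/extra  appbox(rw,sync,no_subtree_check)
# then reload the export table (and make sure the NFS server is running):
exportfs -ra

# On the PC running the app (call it appbox): mount the remote directory
mount -t nfs node1:/srv/extra /mnt/extra
```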

Thanks for your replies (and don't beat me up too much if this is an insane question; I'm just a newbie :) ).

Check if Hadoop is what you are looking for. Maybe it's too heavy and your requirement is a simpler one.

Thanks Yogesh for the suggestion.

Hadoop does indeed look very powerful, and somewhat overwhelming. It seems to have the ability to handle larger memory requirements than are locally available, but looking at the docs, I think it is overkill.

Also, it requires writing the application to meet Hadoop's requirements, which won't fly in my situation.

The app I am dealing with is already written. What I get to start with is an executable, with no possibility of it being rewritten. So the environment needs to be standard Linux (there is a Windows version too, so that's a possibility I guess, but I am trying to avoid the Windows world...).

Maybe I made an incorrect assumption that a possible use of a cluster might solve the problem. Perhaps there is an easier way?

Is there some Linux feature that would allow local disk space to 'fake out' the application, so that the app sees that disk space as if it were memory (like maybe swap space)? That way, if I had say 1GB of free disk space, I could use it as memory for an application that needs 1GB even though the local machine only has 512MB of RAM?

(Is this possible? Am I deluding myself about this as a possibility?)

Thanks for any advice.

Why don't you buy a new machine with more memory? Otherwise you might want to look here:
Grid computing - Wikipedia, the free encyclopedia

GridCafe - Links

http://www.unix.com/links/high-performance-computing-15/

Assuming your machines have 100 Mbit NICs, you'll get a remote access bandwidth of about 10 MB/s (100 Mbit/s is 12.5 MB/s in theory, less after protocol overhead) and a latency of about 1 ms. That's not much better than using traditional swap space on a local hard drive (say 30 MB/s and 8 ms for a drive of that age) and still way below local memory (hundreds of MB/s and latencies in the microsecond range).
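If local swap turns out to be enough for you, a swap file is trivial to set up. A rough sketch (the path and size here are just examples):

```
# create a 1 GB file to use as swap (example path /swapfile)
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile       # swap should not be world-readable
mkswap /swapfile          # write the swap signature
swapon /swapfile          # enable it

swapon -s                 # verify it is active (or: free -m)
```

Add a line like "/swapfile none swap sw 0 0" to /etc/fstab if you want it enabled at boot.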

In the late 90's and early part of this decade, there was a set of kernel patches and supporting tools called "MOSIX", which transparently handled migrating processes (and their memory) from one cluster node to another. As Fabtagon noted, however, this kind of thing was severely hampered by network bandwidth. Further, unless you are using a 64-bit OS, the amount of memory a single process can use is very limited (about 3GB of user address space on 32-bit Linux). Most modern hardware can handle four times that much on a single board.
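If you want to check what your boxes can actually address, these standard commands will tell you (nothing here is specific to any distribution):

```
uname -m           # i686 means a 32-bit kernel, x86_64 a 64-bit one
getconf LONG_BIT   # word size userland was compiled for: 32 or 64
free -m            # physical RAM and swap currently configured, in MB
ulimit -v          # per-process virtual memory limit in kB ("unlimited" if none)
```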

lvs+keepalived+moosefs

If you give answers to such a complex question, you should go into more detail!

thanks...