Two-player game, forking & sockets

Hi.
I am just taking my first steps in Linux POSIX programming. I have read some tutorials on processes, signals and sockets (thanks Beej!), so I already have some basic knowledge.
I want to write a very basic game server. My idea is to have a main process that waits for new players. Once at least two clients are connected, it spawns one process per player and then starts doing some actions (not relevant now). The players (separate processes) would communicate in pairs through, hm... I don't know yet, certainly some IPC mechanism.

Given this very general approach (I am just a beginner ;)) I have a few questions:

  1. How do I wait properly for players, for example when there are no clients at all, or only one is connected and is waiting for an opponent?
  2. How do I handle three-way inter-process communication, i.e. between the main process, the player1 process and the player2 process? I am guessing I would have to use some semaphores.
  3. How do I handle such communication when the main process has to gather information about the paired clients' moves and wait for new clients at the same time? Can it all be done in one main process?
  4. Could you explain some basic algorithm for how such a program should be constructed? A skeleton, names of functions, which manuals I should study, and general hints so that I am not taking steps blindly.

Thank you in advance.

Unless they're all on the same machine, it'd pretty much have to be sockets.

Print 'please wait' and block on accept() waiting for more connections? If that's not what you mean please explain in more detail.
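To make that concrete, here's a minimal sketch of the "block on accept()" idea, pairing clients two at a time. The port number, the message text and the bare-bones error handling are all just assumptions for illustration.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);    // arbitrary port
        if(bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        listen(srv, 8);

        while(1)
        {
                // Block until the first player shows up and tell them to wait...
                int p1 = accept(srv, NULL, NULL);
                if(p1 < 0) continue;
                const char *msg = "please wait for an opponent\n";
                write(p1, msg, strlen(msg));

                // ...then block again until a second player arrives.
                int p2 = accept(srv, NULL, NULL);
                if(p2 < 0) { close(p1); continue; }

                // Here you would fork() a game process and hand it p1 and p2.
                close(p1);
                close(p2);
        }
}

A real server would also have to notice when the waiting player hangs up before an opponent arrives, but that's the basic shape of it.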

It also depends what information's being communicated in what manner.

Sure. You could do different things in different threads, for example.
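For instance, a bare-bones pthread sketch (the two thread functions are hypothetical placeholders, not anything from a real API; compile with -pthread):

#include <pthread.h>

// Placeholder: sit in accept() and hand new sockets to games.
void *acceptor_thread(void *arg)
{
        /* while(1) { accept(...); pair players; } */
        return NULL;
}

// Placeholder: collect finished-game results and store them.
void *collector_thread(void *arg)
{
        /* while(1) { read a game's state; record it; } */
        return NULL;
}

int main(void)
{
        pthread_t a, c;
        pthread_create(&a, NULL, acceptor_thread, NULL);
        pthread_create(&c, NULL, collector_thread, NULL);
        pthread_join(a, NULL);
        pthread_join(c, NULL);
        return 0;
}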

We don't have enough information to help you yet. Certainly there are many ways to get what you want.

You are right, I have not said much.
First I would like to say that this is only my starter program, which may or may not grow into something more clever :wink: I am familiar with object-oriented programming; now I want to get my teeth into some Unix-like stuff, so I would like to do some research here before I mess up the design.

To clarify my idea let me describe example scenario.
There is a server, at the beginning the main process only, listening for connections (SOCK_STREAM). There is also a client application, maybe even telnet or something very simple. It connects to the server.
The server spawns a new process for every client connection. If there are enough clients to make some pairs, I would like the server to pair them somehow (that is the tricky part for me now). Every process communicates through sockets with its own client somewhere and with the other process. The game is played in turns and its state is sent to the clients on every change. On the server side, the main process has to somehow collect the states of every game and hmm.. store them, not relevant now.
To give a better view of what I am going to do I have prepared a diagram (see attachment).
I hope that clears up my crazy ideas a bit more :wink:
Dotted lines mean read/write.

Okay.

You can create a shared memory segment with mmap. You can map in a file so as to keep memory stored on disk, or just map it in anonymously. It all just acts like memory either way.

// file-backed
int fd=open("filename", O_RDWR|O_CREAT, 0644);  // O_CREAT needs a mode argument
ftruncate(fd, length);                          // size the file before mapping it
char *mem=mmap(NULL, length, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

// not file-backed: anonymous memory still needs MAP_SHARED to be visible
// across fork(), and takes fd=-1
char *mem=mmap(NULL, length, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, -1, 0);

Semaphores will work fine to control it. pthread mutexes will work too (with the PTHREAD_PROCESS_SHARED attribute), at least on modern Linux, but semaphores would be more portable.
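If you go the pthread route, the mutex (like the semaphore) has to live inside the shared segment and carry the PTHREAD_PROCESS_SHARED attribute. A minimal sketch of that, with a made-up one-page segment and a counter just to have something to protect (compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        // One page of anonymous shared memory: the mutex first, then a counter.
        char *shared_mem = mmap(NULL, 4096, PROT_READ|PROT_WRITE,
                                MAP_SHARED|MAP_ANONYMOUS, -1, 0);
        pthread_mutex_t *mtx = (pthread_mutex_t *)shared_mem;
        int *counter = (int *)(shared_mem + sizeof(pthread_mutex_t));

        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED); // usable across fork()
        pthread_mutex_init(mtx, &attr);
        pthread_mutexattr_destroy(&attr);

        if(fork() == 0)         // child bumps the counter under the lock
        {
                pthread_mutex_lock(mtx);
                (*counter)++;
                pthread_mutex_unlock(mtx);
                _exit(0);
        }
        wait(NULL);
        printf("counter = %d\n", *counter);     // prints 1
        return 0;
}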

The server would do something like this very sketchy code

char *shared_mem;
sem_t *sem;     // lives inside the shared segment so every process sees the same one

void child_stuff(void)
{
        while(1)
        {
                sem_wait(sem);
                shared_mem[0]++;
                sem_post(sem);
        }
}

int main(void)
{
        shared_mem=mmap(...);

        // Carve the semaphore out of the front of the segment.  A process-shared
        // semaphore only works if it actually lives in shared memory; a plain
        // global would just get copied by fork().
        sem=(sem_t *)shared_mem;
        shared_mem += sizeof(sem_t);

        // Initialize shared semaphore to 1.  It can be locked once, further attempts
        // will block.
        sem_init(sem, 1, 1);
        while(1)
        {
                pid_t pid;
                int fd=accept(...);
                if(fd < 0) continue;

                pid=fork();
                if(pid == 0) // child code
                {
                        // A forked process gets a copy of everything the main process
                        // has open.  99% of it the child probably doesn't need, so
                        // close it.
                        //
                        // You can also mark descriptors close-on-exec with fcntl and
                        // FD_CLOEXEC, but that only takes effect if the child exec()s.
                        close_positively_everything_except_what_child_needs;
                        child_stuff();
                        exit(0); // DO NOT FORGET!  Don't want child doing parent stuff after child_stuff finishes for some reason
                }
                else if(pid > 0) // parent stuff
                        add_to_list_of_children(pid);
                else
                {
                        perror("fork failed for some reason");
                        break;
                }
        }
}

Thanks! Now I think that would give me some kickoff. I'll be back if any question arises.

---------- Post updated 07-06-11 at 08:41 PM ---------- Previous update was 06-06-11 at 09:06 PM ----------

As promised, I am back with questions :wink: I have read about different approaches and decided to use SysV shared memory for a start.
First problem:
Assuming that we don't know how many clients (c) will connect and how many games (floor(c/2)) will be started, the question is how to manage this shared memory segment.
My first idea was to create an array of pointers to game_state (the type, see below), which would be the argument for shmat() and would store information about all games.
But how do I dynamically expand this array when a new game starts without ruining the attachment? I mean that when we malloc a new, bigger chunk of memory and free the old one, the addresses change. To be honest I have a feeling that this idea is useless.

typedef struct {
    int player_ids[GCOUNT];    // array with player ids
    char board[BOARD_SIZE][BOARD_SIZE];    // 10x10 game board
    char moves_sequence[TILES_NUM];    // array containing information who did what move
    player_stat player_stats[GCOUNT];        // per-player structures with information about who has how many points
} game_state;

Second problem:
How do I implement reading and modifying the shared memory segment in turns? (player1-mainprocess-player2-mainprocess-player1-mainprocess-player2-....)

Any particular reason? It's the same thing in the end, just a lot more convoluted.

I don't understand what you mean by 'argument for shmat'. Do the games inherit shared memory from the parent or not? When you fork(), shared memory is inherited, so if the parent had it the children will start off with it and will not need to fool with shmat().

Pointers won't work right in shared memory. Imagine that process A allocates memory and shoves the pointer into the shared mem. Process B tries to use this pointer, but process B has a different heap than process A. So B tries to use memory it never malloc()ed and crashes.

You have to store the contents in the shared memory, never heap pointers. Even pointers to the shared memory might not work unless every process maps the memory in the exact same place. You could store an array index, if the array was in shared mem too.
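Something like this is the usual shape, with everything stored by value inside the segment (using your game_state typedef; MAX_GAMES and the num_games field are just invented for illustration):

#define MAX_GAMES 64

// No heap pointers anywhere -- the whole table lives inside the shared segment.
struct shared_area
{
        int num_games;                  // how many slots are currently in use
        game_state games[MAX_GAMES];    // the games themselves, stored by value
};

// struct shared_area *shared = shmat(...) or mmap(...);
// A child process is told its *index*; every process agrees on what
// &shared->games[i] refers to, which a raw pointer would not guarantee.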

What do you mean by 'ruining attachment'?

I know that when you mmap() in a segment, pages won't actually be allocated until you use them. Just make a big enough memory segment in the first place and the OS will dynamically allocate more pages as you use them. I'm not sure SysV has this same ease and simplicity.

How about, each player gets its own sem, so player 1 waits on sem 1, does its turn, posts on sem 2. Player 2 waits on sem 2, does its turn, and posts on sem 3. Player 3 waits on sem 3, does its turn, posts on sem 1.
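A minimal sketch of that ring, assuming the semaphores live in the shared segment and were sem_init()ed with pshared=1, exactly one of them starting at 1 and the rest at 0:

#include <semaphore.h>

// Each party calls this with its own index; 'next' is whoever moves after it.
void take_turns(sem_t *sems, int nparties, int me, void (*do_my_turn)(void))
{
        int next = (me + 1) % nparties;
        while(1)
        {
                sem_wait(&sems[me]);    // block until the previous party posts my semaphore
                do_my_turn();           // read/modify the shared game state
                sem_post(&sems[next]);  // wake the next party in the ring
        }
}

Whether a 'party' is a player process or the main process makes no difference to the ring; the main process can simply be one more stop in the cycle.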

That could be a ridiculous number of semaphores for a lot of clients, though. You could have an integer stored in shared memory of whose turn it is, and when a client is created they poll it until they're told it's their turn, then they sem_wait()/sem_post()/sem_wait()/sem_post()... I think the order would remain stable after that.

As for your game structure -- instead of a structure full of big arrays, how about a big array of structures?

I read on stackoverflow.com that it is more portable. I have no particular requirement, just curiosity; I am also reading about the POSIX ones.

Oh, I hadn't noticed that fact. Good to know.

Ok, I will remember that.

Do you mean that below code is wrong?

int shm;
key_t key;
game_state *gsbuf;

if((key = ftok(".",'s')) < 0) ERR("ftok"); // still don't know why here is 's', copied from some webpage
shm = makeshm(sizeof(game_state), key);    /* size of the structure, not of the pointer */
if((gsbuf = shmat(shm, NULL, 0)) == (void *) -1) ERR("shmat");

/* then some forking */

(...)

/* outside main */
int makeshm(size_t size, key_t key) {
    int shm;
    if((shm = shmget(key,size,IPC_CREAT|S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH))<0)
         ERR("Create shared memory");
    return shm;
}

I think that is my problem. What does 'big enough' mean? The server doesn't know how many clients it will have. The main point is how to organize this shared memory so it can grow as new games are started.

Good idea, thanks :slight_smile:

I am not sure I understand. Could you propose a sketch?

I'm pretty sure mmap is more fundamental. shm() will also work, but I prefer mmap, it's more flexible. I also like the idea of being able to map in massive things from disk without wasting untold amounts of memory; whereas the memory the system will allow you to shm() may be fairly limited, and eats swap.

Not wrong, no. It illustrates an important feature of shm, getting at anonymous memory from the outside. Just knowing the name can get you the memory, where mmap() would need to map an actual file to get at already-in-use segments a process didn't inherit. Remember though, a UNIX system is supposed to back all memory with disk one way or another. If you don't let it be on disk, it may just end up in swap.

Well, how large are your structures? Think in terms of available memory, not in numbers of clients. Many programs have that kind of configurable maximum: use this much memory but no more. mmap makes this simple because you can map in the maximum amount of memory without actually using it. The OS will just give you more when you use it, up to the max you requested.
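As a sketch (MAX_GAMES here is an invented compile-time cap, pick whatever your memory budget allows, and the struct is just a stand-in for yours): reserve the whole table up front and only count how many slots are in use; pages you never touch never get allocated.

#include <stdio.h>
#include <sys/mman.h>

#define MAX_GAMES 4096                  // invented cap, tune to your memory budget

struct game_state { int player_ids[2]; char board[10][10]; };

int main(void)
{
        size_t len = MAX_GAMES * sizeof(struct game_state);

        // The whole range is reserved now, but physical pages are only
        // allocated as they are first written, so an idle server costs
        // almost nothing.
        struct game_state *games = mmap(NULL, len, PROT_READ|PROT_WRITE,
                                        MAP_SHARED|MAP_ANONYMOUS, -1, 0);
        if(games == MAP_FAILED) { perror("mmap"); return 1; }

        games[0].player_ids[0] = 42;    // the first page materializes here
        printf("reserved %zu bytes for up to %d games\n", len, MAX_GAMES);
        return 0;
}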

Many programs have a configurable limit for that sort of upper max. I recall a chess program that took a configurable amount of space for its memory buffer. It couldn't run as a limited user until I fiddled with settings in /proc though, because shm only gives limited users 16 megs or so by default in Linux. With file-backed mmap, hundreds of megs is the practical limit for one segment on 32-bit; on 64-bit, the sky is the limit.

You only want it to grow to a point. You don't want it to eat beyond all available memory and grind your machine into swapdeath. Pick a fraction of total RAM maybe.

// one way
struct mystruct
{
        int a;
        char b;
        float c;
};

// Size chosen at runtime, nothing hard-wired into the type
struct mystruct *myarray=malloc(sizeof(struct mystruct)*5000);
// or
struct mystruct *myarray=mmap(...);
// or shmat, etc etc.

struct otherstruct
{
        int a[5000];
        char b[5000];
        float c[5000];
};

// Size stuck at 5000
struct otherstruct otherarray;

It could be that I misunderstood the intent of the structure you posted though. I have no idea how you plan to organize this memory yet.