pthread_mutex_lock in ANSI C vs. GCC atomic builtins

I have a program with 7-8 threads and lots of shared variables. These variables are not necessarily of primitive type (they may be an enum or a struct), and they may be read or written by different threads at the same time.

Now, my design is like this:

typedef unsigned short int UINT16;

struct STRUCT {
  UINT16 uint16;
  int INT;
};

enum ENUM { ... };

/* Shared Variables. */
pthread_mutex_t _L_LOCK = PTHREAD_MUTEX_INITIALIZER;
struct STRUCT Shared_VariableA;
enum ENUM Shared_VariableB;

UINT16 GetVA () {
    pthread_mutex_lock( &_L_LOCK );
        UINT16 TMP = Shared_VariableA.uint16;
    pthread_mutex_lock( &_L_LOCK );
    return TMP;
}

void SetVA ( UINT16 in ) {
     pthread_mutex_lock( &_L_LOCK );
         Shared_VariableA.uint16 = in;
     pthread_mutex_lock( &_L_LOCK );
 }

Is this enough to guarantee that only one thread can read/write the shared variables at a time?

How about the following method? Is it the same?
From the reference: Techie Stuff Atomic Operations

/* Initialization. */
atomic_t uint16 = ATOMIC_INIT( 0 );

UINT16 GetVA () {
    UINT16 TMP = (UINT16) atomic_read( &uint16 );  /* Truncate to UINT16; atomic_read() returns an int (32 bits). */
    return TMP;
}

void SetVA ( UINT16 in ) {
    atomic_set( &uint16, in );
}

Will GetVA() ever return an arbitrary value while SetVA() is being run by another thread?

Yes, the pthread solution more than sufficiently protects it. (A pthread_rwlock would let readers operate at the same time and only block them for writes.)
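For illustration, here is a minimal sketch of the same accessors built on a pthread_rwlock_t (the lock name VA_RWLOCK and the static initializer are my own additions, not taken from the original code):

#include <pthread.h>

typedef unsigned short int UINT16;

struct STRUCT {
  UINT16 uint16;
  int INT;
};

static struct STRUCT Shared_VariableA;
static pthread_rwlock_t VA_RWLOCK = PTHREAD_RWLOCK_INITIALIZER;

UINT16 GetVA () {
    UINT16 TMP;
    pthread_rwlock_rdlock( &VA_RWLOCK );   /* many readers may hold the lock at once */
    TMP = Shared_VariableA.uint16;
    pthread_rwlock_unlock( &VA_RWLOCK );
    return TMP;
}

void SetVA ( UINT16 in ) {
    pthread_rwlock_wrlock( &VA_RWLOCK );   /* a writer gets exclusive access */
    Shared_VariableA.uint16 = in;
    pthread_rwlock_unlock( &VA_RWLOCK );
}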

If the gcc atomic operations are truly atomic, GetVA() shouldn't ever return garbage.
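For what it's worth, the atomic_t / atomic_read() / atomic_set() API shown above looks like the Linux-kernel style interface. In ordinary user-space code built with GCC 4.7 or later, the rough equivalent would be the __atomic builtins. A minimal sketch, assuming a plain int as the backing store (the name Shared_uint16 is mine, not from the original post):

/* Sketch using GCC __atomic builtins (GCC >= 4.7). */
typedef unsigned short int UINT16;

static int Shared_uint16 = 0;   /* plain int used as backing store */

UINT16 GetVA () {
    /* Sequentially-consistent atomic load, truncated to UINT16 as before. */
    return (UINT16) __atomic_load_n( &Shared_uint16, __ATOMIC_SEQ_CST );
}

void SetVA ( UINT16 in ) {
    /* Sequentially-consistent atomic store. */
    __atomic_store_n( &Shared_uint16, (int) in, __ATOMIC_SEQ_CST );
}

With __ATOMIC_SEQ_CST these builtins also give you ordering (memory-barrier) guarantees, not just atomicity.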

I assume that this is a typo - the second pthread_mutex_lock() should be a pthread_mutex_unlock().

     pthread_mutex_lock( &_L_LOCK );
         Shared_VariableA.uint16 = in;
     pthread_mutex_lock( &_L_LOCK );
I have a question about a variant of this: if a shared variable may be read/written by several threads but every access is protected with pthread_mutex_lock()/pthread_mutex_unlock(), like this:
/* Shared Variables. */
struct STRUCT Shared_VariableA;
enum ENUM Shared_VariableB;

UINT16 GetVA () {
    pthread_mutex_lock( &_L_LOCK );
        UINT16 TMP = Shared_VariableA.uint16;
    pthread_mutex_unlock( &_L_LOCK );
    return TMP;
}

does the variable also need to be declared 'volatile'?
e.g.

volatile struct STRUCT Shared_VariableA;


I am sorry about the earlier typo; it should be:

     pthread_mutex_lock( &_L_LOCK );
         Shared_VariableA.uint16 = in;
     pthread_mutex_unlock( &_L_LOCK );

If properly bounded by mutexes (or other synchronization calls), it shouldn't need "volatile". Atomic operations might still need it.


It is possible to declare a variable as "volatile" even when it is not a primitive type, e.g.

struct STRUCT {
  volatile int S1;
  volatile char S2;
  volatile unsigned short int S3;
};

volatile struct STRUCT v1;
volatile enum ENUM v2;
volatile char V3;
volatile unsigned int V4;

v1.S1 = 1;
v1.S2 = 2;
v1.S3 = 3;

v2 = 4;
V3 = 5;
V4 = 6;

Are the above operations atomic?
Is there any case where "volatile" cannot guarantee an atomic operation?

"volatile" has nothing to do with atomicity; it just tells the compiler never to assume the variable still holds whatever value was last written to it (otherwise, the compiler might assume it never changes and hardcode the value into the instructions).

No variables (except the gcc atomic extensions) are guaranteed to be atomic, at all, ever. There are other problems besides atomicity, too -- on a multicore system, one core might still have a different copy of the memory in its cache, causing even "volatile" to fail. You need to inform the other cores that you're modifying values out from under them, which is what memory barriers are for. pthreads does memory barriers for you. I don't know if the gcc atomic ops do.
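To make the "volatile is not atomic" point concrete, here is a small stand-alone demo of my own (not from the original code): two threads increment a volatile counter with no lock, and because the read-modify-write is not atomic, the final count usually falls well short of 2000000.

#include <pthread.h>
#include <stdio.h>

volatile int counter = 0;

static void *worker ( void *arg ) {
    int i;
    (void) arg;
    for ( i = 0; i < 1000000; i++ )
        counter++;               /* load, add, store -- not atomic */
    return NULL;
}

int main ( void ) {
    pthread_t t1, t2;
    pthread_create( &t1, NULL, worker, NULL );
    pthread_create( &t2, NULL, worker, NULL );
    pthread_join( t1, NULL );
    pthread_join( t2, NULL );
    printf( "counter = %d (expected 2000000)\n", counter );
    return 0;
}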

Thanks. :-)

Hi!

Using pthreads synchronisation objects (mutexes, read-write locks, ...) does more than give you simple "atomic access". It also ensures proper memory visibility by issuing the appropriate memory barriers. Refer for instance to this article.

Cheers,
Loïc

If I have many shared variables (more than 100), what is the best solution?

Without knowing the problem you are trying to solve, my reply might be off track.

BUT reading "I have more than 100 variables to synchronize between threads" rings bells for me like "subtle, sporadic bugs, headache debugging and unmaintainable code"... Try to refactor your code so that you have as few shared variables as possible.

Loïc

The real problem is like this:
My program has about 10-15 threads, and they do not all run at the same time; it depends on which service is triggered. Each thread may maintain 5-6 shared variables, which means that while a thread is enabled, other threads may read/write these variables.
However, each shared variable may be an array or an array of structures, like

struct STRUCT_SAMPLE __variable_name__[ 100 ];

My current solution is to use a critical section: while one thread is reading/writing these variables, the other threads have to wait until the processing cycle is finished, because each thread may read shared variables from several other threads (not just one) and also write results back to other shared variables.

In such a case, is a critical section better than a mutex?

The structure of each thread is like this:

READ shared variables from other threads.
...
Process
...
    // Sub-processing
    READ shared variables from other threads.
    Process
    Write output to shared variables.
...
Write output to shared variables.


My current solution is like this:

Enter critical section

READ shared variables from other threads.
...
Process
...
    // Sub-processing
    READ shared variables from other threads.
    Process
    Write output to shared variables.
...
Write output to shared variables.

Leave critical section

Is there any better solution or structure for such a system? Thanks.

What's controlling the critical section, if not a mutex?

You could try implementing a reader-writer lock if the threads do a lot more reading than writing; otherwise, a mutex is about as good.

Good evening,

my answer might sound a bit harsh, but you still haven't described the original problem you want to solve. Rather, you're describing the technical solution you came up with to sort your problem out, which involves 15 threads and 5-6 shared variables per thread, where a variable might be an array of 100 or more elements. And this particular solution leads to a further problem, namely how to sanely synchronize this mess.

Of course, we could give you ideas on how to tackle the synchronization problem generated by your technical solution. But we might be even more effective in providing guidance on the initial problem you're trying to solve. For instance, do you want to serve several requests coming e.g. from different TCP clients concurrently? Or do you need to perform different processing steps on a piece of data in a pipeline fashion? ...

Do you understand my point?

Cheers,
Loïc

I am using Lamport's bakery algorithm to control all the threads, since I need freedom from starvation.
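For reference, a textbook sketch of the bakery lock for a fixed number of threads (the names are mine; it assumes sequentially consistent memory, so on a real multicore machine you would still need memory barriers, or simply the pthread primitives discussed above):

#define NTHREADS 15

volatile int entering[ NTHREADS ];   /* thread i is choosing its ticket   */
volatile int number[ NTHREADS ];     /* 0 = not interested, else a ticket */

void bakery_lock ( int i ) {
    int j, max = 0;

    entering[ i ] = 1;
    for ( j = 0; j < NTHREADS; j++ )         /* take a ticket one higher */
        if ( number[ j ] > max )             /* than any ticket in use   */
            max = number[ j ];
    number[ i ] = max + 1;
    entering[ i ] = 0;

    for ( j = 0; j < NTHREADS; j++ ) {
        while ( entering[ j ] )
            ;                                /* wait while j picks a ticket */
        while ( number[ j ] != 0 &&
                ( number[ j ] < number[ i ] ||
                  ( number[ j ] == number[ i ] && j < i ) ) )
            ;                                /* wait for smaller tickets    */
    }
}

void bakery_unlock ( int i ) {
    number[ i ] = 0;
}

The first-come-first-served ticket ordering is what gives the freedom from starvation.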


Hi,
I have TCP connections from clients, and the handlers also need to read/write some shared variables. But my main point is this:
Each thread or critical section should get roughly the same share of time, since the system is time-sensitive and has to control some hardware; if one critical section takes 100 ms, the others may be delayed (since each thread is basically packaged as one big critical section).

The basic requirement of my problem is: each critical section or read/write section should get more or less the same amount of time. That means starvation is very bad for me, so I use Lamport's bakery algorithm to control each critical section and prevent starvation.

I am now trying to use read/write locks instead of one critical section, since the critical section contains a lot of code and the time spent in each critical section is currently not fair.

But I think that if I cut a big critical section (in my case) into different pieces protected by read/write locks, the waiting time may increase and starvation may occur.
