I have a program with 7-8 threads and lots of shared variables; these variables may not be of a primitive type (they may be enums or structs), and they may be read/written by different threads at the same time.
Yes, the pthread solution more than sufficiently protects it. (A pthread_rwlock would let readers operate at the same time and only block them for writes.)
If the gcc atomic operations are truly atomic, getva shouldn't ever return garbage.
"volatile" has nothing to do with atomic, "volatile" just tells the compiler to never assume that variable is whatever it set it to last. (otherwise, it might assume it never changes and hardcode it in the instructions.)
No variables (except the gcc atomic extensions) are guaranteed to be atomic, at all, ever. There are other problems besides atomicity, too -- on a multicore system, one core might have a different copy of the memory still in its cache, causing even "volatile" to fail. You need to inform other cores that you're modifying values out from under them, which is what memory barriers are for. pthreads does memory barriers for you. I don't know if gcc atomic ops do.
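For what it's worth, the newer GCC `__atomic` builtins take an explicit memory-order argument, and a sequentially-consistent operation does include the necessary barriers. A small sketch, reusing the `getva` name from the question (the counter itself is illustrative):

```c
/* Requires GCC or Clang (__atomic builtins). */
static int counter = 0;   /* illustrative shared counter */

void bump(void)
{
    /* atomic read-modify-write; __ATOMIC_SEQ_CST implies a full barrier */
    __atomic_add_fetch(&counter, 1, __ATOMIC_SEQ_CST);
}

int getva(void)
{
    /* atomic load; sees all prior sequentially-consistent stores,
       so it never returns a torn or stale value */
    return __atomic_load_n(&counter, __ATOMIC_SEQ_CST);
}
```

Note these builtins only help for single machine words; a whole struct still needs a lock.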
Using a pthreads synchronisation object (like a mutex or a read-write lock) does more than just provide a simple "atomic access": it actually ensures proper memory visibility by using the appropriate memory barriers. Refer for instance to this article.
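To make the visibility point concrete: a mutex publishes a whole struct, not just a single machine word. A minimal sketch with made-up type and function names:

```c
#include <pthread.h>

struct sample { int a; double b; char name[16]; };  /* illustrative type */

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static struct sample shared;

void update(const struct sample *src)
{
    pthread_mutex_lock(&mtx);      /* lock acts as an acquire barrier */
    shared = *src;                 /* multi-word copy is safe under the lock */
    pthread_mutex_unlock(&mtx);    /* unlock publishes the change to all cores */
}

struct sample snapshot(void)
{
    pthread_mutex_lock(&mtx);
    struct sample copy = shared;   /* consistent copy of every field */
    pthread_mutex_unlock(&mtx);
    return copy;
}
```

No atomic builtin can do this for a multi-field struct; the lock is what makes the copy consistent.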
Without knowing the problem you are trying to solve, my reply might be off the track.
BUT reading "I have more than 100 variables to synchronize between threads" rings alarm bells for me like "subtle, sporadic bugs, debugging headaches and unmaintainable code"... Try to refactor your code to get as few shared variables as possible.
The real problem is like this:
My program has about 10-15 threads; they do not all run at the same time, depending on which service is triggered. Each thread may maintain 5-6 shared variables; that means, while a thread is enabled, other threads may read/write these variables.
However, each shared variable may be an array or an array of structures, like
struct STRUCT_SAMPLE variable_name[100];
My solution now is to use a critical section: when one thread is reading/writing these variables, the other threads have to wait until its processing cycle is finished, because each thread may read shared variables from other threads (not just one) and also write results back to other shared variables.
In such a case, is a critical section better than a mutex?
The structure of each thread is like this:
READ shared variables from other threads.
...
Process
...
// Sub-processing
READ shared variables from other threads.
Process
Write output to shared variables.
...
Write output to shared variables.
---------- Post updated at 03:45 PM ---------- Previous update was at 03:30 PM ----------
My current solution is like this:
Enter critical section
READ shared variables from other threads.
...
Process
...
// Sub-processing
READ shared variables from other threads.
Process
Write output to shared variables.
...
Write output to shared variables.
Leave critical section
Any better solution or structure for such a system? Thanks.
my answer might sound a bit harsh, but you still haven't described the original problem you want to solve. Rather, you're describing the technical solution you came up with to sort your problem out, which involves 15 threads and 5-6 shared variables per thread, where a variable might be an array of 100 or more elements. And this particular solution leads to a further problem, namely how to sanely synchronize this mess.
Of course, we could give you ideas on how to tackle the synchronization problem generated by your technical solution. But we might be even more helpful in providing guidance on the initial problem you're trying to solve. For instance, do you want to serve several requests coming e.g. from different TCP clients concurrently? Or do you need to perform different processing on a piece of data in a pipeline fashion? ...
I am using Lamport's bakery algorithm to control all the threads, since I need to avoid starvation.
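For readers who don't know it, here is a textbook sketch of Lamport's bakery lock for a fixed number of threads. Be aware that this classic version relies on `volatile` spinning and is not safe on modern multicore hardware without explicit memory barriers or C11 atomics; it is shown for the idea only:

```c
#include <stdbool.h>

#define N 8   /* number of competing threads (illustrative) */

static volatile bool choosing[N];
static volatile int  number[N];

static int max_number(void)
{
    int m = 0;
    for (int i = 0; i < N; i++)
        if (number[i] > m) m = number[i];
    return m;
}

void bakery_lock(int me)
{
    choosing[me] = true;
    number[me] = 1 + max_number();   /* take a ticket, like in a bakery */
    choosing[me] = false;
    for (int j = 0; j < N; j++) {
        while (choosing[j])
            ;                        /* wait while thread j picks its ticket */
        /* wait while j holds a smaller ticket (ties broken by thread id) */
        while (number[j] != 0 &&
               (number[j] < number[me] ||
                (number[j] == number[me] && j < me)))
            ;
    }
}

void bakery_unlock(int me)
{
    number[me] = 0;                  /* give the ticket back */
}
```

Because tickets are served in order, every waiting thread eventually gets in, which is the no-starvation property you're after.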
---------- Post updated at 10:13 AM ---------- Previous update was at 10:00 AM ----------
Hi,
I have TCP connections from clients, and the handler also needs to read/write some shared variables, but my main point is:
Each thread or critical section should take roughly the same amount of time, since the system is time-sensitive and has to control some hardware; if one critical section costs 100 ms, the others may be delayed (since I am using one big critical section wrapped in each thread).
The basic requirement of my problem is:
Each critical section or read/write section should take more or less the same time (that means starvation is very bad for me; therefore, I use Lamport's bakery algorithm to control each critical section and prevent starvation).
And then, I am trying to use read/write locks instead of a critical section, since the critical sections contain a lot of code and the time each one takes is no longer fair.
But I think, if I cut a big critical section (in my case) into different pieces protected by read/write locks, the waiting time may increase and starvation may occur.
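If the requirement is simply FIFO fairness (no thread ever starves) with less code than the bakery algorithm, one alternative worth considering is a ticket lock built on the GCC atomic builtins. This is a sketch under those assumptions, not a drop-in replacement for your design:

```c
/* Ticket lock: threads are served strictly in arrival order, so no
 * thread can starve. Requires GCC/Clang __atomic builtins. */
static unsigned next_ticket = 0;
static unsigned now_serving = 0;

void ticket_lock(void)
{
    /* atomically take the next ticket; returns the value before the add */
    unsigned my = __atomic_fetch_add(&next_ticket, 1u, __ATOMIC_SEQ_CST);
    while (__atomic_load_n(&now_serving, __ATOMIC_SEQ_CST) != my)
        ;   /* spin until our ticket number is called */
}

void ticket_unlock(void)
{
    __atomic_add_fetch(&now_serving, 1u, __ATOMIC_SEQ_CST);
}
```

Like the bakery algorithm it is first-come-first-served, but it needs only two counters instead of two arrays per thread; the trade-off is that it busy-waits, so it only suits short critical sections.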