Multiple processes writing to the same log file at the same time

If we have 3 processes writing to the same log file at the same time, like below, will the concurrent writes corrupt or lose data? Is this a safe way to keep a log for multiple processes?

p1 >> test.log &
p2 >> test.log &
p3 >> test.log &

Any suggestions?

Thanks

Can you detail what the processes p? are written in? What OS are they running on, and are they scripts, or do you have code?
For example, in a script, Linux has a "lockfile" command that could help in this instance.
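Something along these lines, say - a rough sketch, assuming procmail's "lockfile" utility is installed (the /tmp/test.log.lock name is just an example):

# Acquire the semaphore file (retry up to 5 times), append the entry,
# then release the lock by removing the file.
lockfile -r 5 /tmp/test.log.lock
echo "my log entry" >> test.log
rm -f /tmp/test.log.lock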

Thanks. My question is a general one, without a specific use case.

Maybe we can create an example where each process writes the numbers 1-10000 to the log, and run 3 processes at the same time, as sketched below.
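Something like this rough sketch, say (writer.sh is just a hypothetical name):

#!/bin/sh
# writer.sh - appends 10000 numbered lines to test.log,
# tagging each line with the writer id passed in $1
i=1
while [ $i -le 10000 ]; do
    echo "writer $1 line $i" >> test.log
    i=$((i + 1))
done

Then run three of them at once and check the result:

./writer.sh 1 & ./writer.sh 2 & ./writer.sh 3 &
wait
wc -l test.log                  # expect 30000 lines if nothing was lost
grep -c '^writer 1 ' test.log   # expect 10000 if no line was mangled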

As you point out, lockfile on Linux can lock the file for exclusive writing, so I assume that if we don't use lockfile or a similar command, the situation above will cause a race condition or corrupted data.

If I am wrong about the above, please let me know.

Thanks again for your comments.

The lock file concept is outdated - not maintainable or foolproof. Look into mutex/synchronization mechanisms instead.
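For example, on Linux you can get mutex-style exclusion between shell processes with flock(1), which wraps the flock() system call - a rough sketch, assuming the util-linux flock command is available (the lock file name is arbitrary):

# Take an exclusive advisory lock on fd 9 before touching the log;
# the lock is released automatically when the subshell exits.
(
    flock -x 9
    echo "my log entry" >> test.log
) 9> /tmp/test.log.lock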

The example will not cause a race condition; at worst it will corrupt the log file. However, if the processes write each log line with a single write statement (most do, either by using single "write" API calls or by using streamed file I/O, which buffers output), each log line is less than 4kb (the typical streamed I/O buffer size), and the log file is local, then most modern UNIX/Linux hosts will not intermingle bytes within separate log lines. This is only a rule of thumb, however, so I would not rely on it working 100% across different OSes.
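To stay on the right side of that rule of thumb in a script, compose the whole line first and append it with a single command, so the shell performs one short write on a descriptor opened in append mode - a sketch (the message text is just an example):

# Build the complete record first, then append it in one short write;
# ">>" opens test.log with O_APPEND, so the write lands at the end.
line="$(date) [$$] something happened"
echo "$line" >> test.log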

From your example in the first post it seems you are launching the processes from the shell as separate processes, and therefore you cannot use mutexes as matrixmadhan has suggested. Is this correct, or are you in control of the code and could implement threads (mutexes are very useful and would solve this issue)? Otherwise, I'm afraid I am in disagreement with matrixmadhan's suggestion that lock files are outdated - they are still a very useful tool that most daemons and other applications rely on (for example, the whole System V "init" script architecture on most UNIX/Linux implementations still uses this mechanism when creating "pid" files in /var/run, etc.); a rough sketch of that pattern follows.
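The pid-file pattern looks roughly like this (mydaemon.pid is a hypothetical name):

# Hypothetical pid-file pattern, similar to what init scripts use:
PIDFILE=/var/run/mydaemon.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "another instance is already running" >&2
    exit 1
fi
echo $$ > "$PIDFILE"
# ... do the real work here ...
rm -f "$PIDFILE"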

So I would say that if you are creating a simple application or script that calls multiple processes and isn't 100% mission-critical/bullet-proof, then "lockfile" would probably be a useful tool for you. If, however, you are developing a mission-critical service that has to guarantee data integrity 100%, then you should look into more advanced synchronization techniques and/or data repository services.

I hope this helps.