NFS - concurrent writes to the same file normal?

Hi all,

Sorry if I sound like a novice. I have always thought that for a network file system which can be shared, there would be some access restriction such that

when user A is writing/editing fileA, user B can view the same fileA but cannot write/edit it until user A has finished and saved fileA.

======================

Today, on Solaris 10, I encountered the following:

 
 ServerA:/shareddrive
 ServerB -> mount -F nfs serverA:/shareddrive /shareddrive
 ServerC -> mount -F nfs serverA:/shareddrive /shareddrive
  
 -- the shuser user has the same UID across all 3 servers.
 In Server A -> mkdir sharedfolder; chown shuser:shuser sharedfolder;
 In Server B -> touch filea; vi filea;  insert 'abcd' -- leaving the file open
 In Server C -> vi filea; insert '1234'  
  
 In server B -> :wq!
 In server C -> :wq!
  
 Result filea = 1234
 

====================

Is this behaviour perfectly normal under NFS?
Being able to edit the same file at the same time, with one write overwriting the other? :confused::eek::confused:

Regards,
Noob

Hi javanoob,

That would be correct; if the share is set rw, then you have to live with it, I'm afraid. Options for getting around it are limited; you could try using NLM (the Network Lock Manager), and there are some documents here.
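
For what it's worth, the locking NLM provides is POSIX advisory locking, which applications request through fcntl(2). Below is a minimal sketch of my own (not from any NLM document; the file name is made up) showing how two cooperating programs on different NFS clients could serialise their writes. Note that an advisory lock only helps if every writer asks for it, which vi does not.

 /* lock_demo.c - sketch of POSIX advisory locking with fcntl(2).
  * On an NFS mount the lock request is forwarded to the server's lock
  * manager (the NLM protocol for NFSv3; built into NFSv4), so clients
  * on different machines can coordinate, but only if all of them take
  * the lock. */
 #include <fcntl.h>
 #include <stdio.h>
 #include <unistd.h>

 int main(int argc, char **argv)
 {
     if (argc != 2) {
         fprintf(stderr, "usage: %s file\n", argv[0]);
         return 1;
     }

     int fd = open(argv[1], O_RDWR);
     if (fd == -1) {
         perror("open");
         return 1;
     }

     struct flock fl = { 0 };
     fl.l_type = F_WRLCK;      /* exclusive write lock            */
     fl.l_whence = SEEK_SET;
     fl.l_start = 0;
     fl.l_len = 0;             /* length 0 = lock to end of file  */

     /* F_SETLKW blocks until the lock is granted */
     if (fcntl(fd, F_SETLKW, &fl) == -1) {
         perror("fcntl(F_SETLKW)");
         return 1;
     }

     printf("lock held; press Enter to release\n");
     getchar();

     fl.l_type = F_UNLCK;      /* release the lock */
     fcntl(fd, F_SETLK, &fl);
     close(fd);
     return 0;
 }

Run one copy on ServerB and one on ServerC against the same file under the mount: the second instance blocks at F_SETLKW until the first releases the lock.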

Regards

Gull04

2 Likes

I always avoid NFS where I can for this sort of reason. It gets more complicated with every extra client that uses it. It can seem like overkill, but reading through NFS and writing back with an FTP/SFTP process seems to help. There is still the risk of who gets the last write, although that risk is present when using local files anyway.
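
One piece of that kind of write-back process you can make safe is the final swap into place. The sketch below is my own illustration with hypothetical file names (the FTP/SFTP transfer step is omitted): it writes the new copy to a temporary name and then rename(2)s it over the target, so readers never see a half-written file. It does not remove the last-writer-wins race mentioned above.

 /* writeback.c - sketch of the "write a new copy, then swap it in"
  * step of a write-back process.  rename(2) replaces the target
  * atomically, so readers see either the old file or the new one,
  * never a torn mixture. */
 #include <stdio.h>

 int main(void)
 {
     /* hypothetical names standing in for the shared file */
     const char *tmp = "filea.tmp", *target = "filea";

     FILE *f = fopen(tmp, "w");
     if (f == NULL) {
         perror("fopen");
         return 1;
     }
     fputs("new contents\n", f);
     fclose(f);

     if (rename(tmp, target) == -1) {  /* atomic swap into place */
         perror("rename");
         return 1;
     }
     return 0;
 }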

Robin

1 Like

Hi Javanoob,

There are much more robust solutions; consequently, they are more complex. If you are serious about needing the access and the file-level locking spread across multiple servers or shares, I would suggest that you use GFS. I have used it very successfully in the past, with upwards of 15 systems accessing the same disk areas.

However, although the systems all functioned as expected, there was an issue: when the utilisation of the filesystem got above 80%, performance fell away dramatically. These file systems were pretty big (almost 10 TB), so if there was an issue that required an "fsck", the file system was unavailable for a long time (60 hours in one case).

So you have to weigh up the requirements; it may be worth considering something like CVS or similar, where the file has to be checked out and then checked back in by one user at a time.

Regards

Gull04

1 Like

This behavior is actually perfectly normal whether NFS is used or not.

It is simply a consequence of the editor you used, here the legacy BSD vi, not doing any concurrent-editing avoidance.

Had you used vim, the editor would have prevented the situation, or at least warned you about it, regardless of whether the file system is local or remote and whether it supports file locking or not. vim creates a temporary swap file (.swp) when you open a file in read/write mode. Should this file already exist, it asks you about the issue and waits for your decision.
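
To make that mechanism concrete, here is a minimal sketch of the idea vim's swap file relies on (my own illustration; vim's actual swap-file handling is more elaborate, and the guard-file name here is hypothetical): atomically create a sidecar file and refuse to edit if it already exists.

 /* editor_guard.c - sketch of a vim-style swap-file guard (illustrative,
  * not vim's implementation).  O_CREAT|O_EXCL fails if the guard file
  * already exists; the create is atomic on local file systems and on
  * NFSv3 and later (older NFS clients only emulated O_EXCL, so the
  * check could race there). */
 #include <fcntl.h>
 #include <stdio.h>
 #include <unistd.h>

 int main(int argc, char **argv)
 {
     if (argc != 2) {
         fprintf(stderr, "usage: %s file\n", argv[0]);
         return 1;
     }

     char guard[4096];
     snprintf(guard, sizeof guard, ".%s.swp", argv[1]); /* hypothetical name */

     int fd = open(guard, O_CREAT | O_EXCL | O_WRONLY, 0600);
     if (fd == -1) {
         fprintf(stderr, "%s exists: someone else is editing %s\n",
                 guard, argv[1]);
         return 1;
     }

     printf("guard created; edit %s, press Enter when done\n", argv[1]);
     getchar();

     close(fd);
     unlink(guard);  /* release the guard when the edit session ends */
     return 0;
 }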

1 Like

Hi Jilliagre and all,

Thanks for your reply.

I had assumed NFS would have some auto-lock mechanism built in beneath the application layer (regardless of the application used), so that concurrent access is restricted.
But I guess I was wrong.

Regards,
Noob

Unix file systems do allow concurrent file access. This is a POSIX requirement. Forbidding such access with NFS would have prevented many applications from running properly.
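
A one-process sketch of that rule (my own illustration): two independent opens of the same file both succeed, both writes are accepted, and whichever write happens last is what the file ends up containing, exactly as in your vi experiment.

 /* concurrent_write.c - demonstrates that POSIX happily allows two
  * simultaneous writers to one file; the last write simply wins. */
 #include <fcntl.h>
 #include <stdio.h>
 #include <unistd.h>

 int main(void)
 {
     /* "filea" stands in for the shared file in the original experiment */
     int fd1 = open("filea", O_CREAT | O_TRUNC | O_WRONLY, 0644);
     int fd2 = open("filea", O_WRONLY);   /* second open also succeeds */
     if (fd1 == -1 || fd2 == -1) {
         perror("open");
         return 1;
     }

     write(fd1, "abcd\n", 5);  /* "ServerB" writes first ...   */
     write(fd2, "1234\n", 5);  /* ... "ServerC" overwrites it  */

     close(fd1);
     close(fd2);
     /* "cat filea" now shows 1234, mirroring the result in the thread */
     return 0;
 }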

Moreover, up to NFSv3, NFS is stateless, i.e. the NFS server doesn't record the fact that a file is open by a remote client, so file locking requires extra services (the NLM lock manager mentioned above) to be supported.

NFSv4 is stateful, so it allows mandatory locking, but that must be a client decision, and one that "vi" never makes anyway.

2 Likes