Replication using NFS.

Hi all,

I am going to implement a script which will use NFS to replicate data between two SCO Unix servers. It will find files with mtime -1 and copy the data across periodically; a rough sketch of the pass I have in mind follows the questions below. In this regard, my questions are:

Is this approach good and reliable enough with respect to server load, accuracy, and redundancy?

Also, is there any way to detect a changed file as it changes, without a long search using find's -mtime? That search walks the whole bulk of data every time it looks for files modified less than a day ago. If I have 100 GB of data (and it grows by the hour), each find run has more to scan and will only get slower over time.
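For context, here is roughly the pass I have in mind (the paths and the mount point are just placeholders; /repl is assumed to be an NFS mount of the destination server, and the script would run as root from cron). Note that even swapping -mtime -1 for a stamp file with -newer, as below, still has to walk the whole tree, which is the part I would like to avoid:

    #!/bin/sh
    # Periodic replication pass -- run from cron.
    # /repl is assumed to be an NFS mount of the destination server;
    # cpio needs to run as root to preserve ownership on the copies.
    SRC=/data
    DST=/repl/data
    STAMP=/var/tmp/repl.stamp

    # Stamp the start time first, so files modified while the copy
    # runs are caught on the next pass instead of being skipped.
    touch /var/tmp/repl.newstamp

    if [ -f "$STAMP" ]
    then
        # Copy only files changed since the last pass (an exact
        # window, unlike -mtime -1, but still a full tree walk).
        cd "$SRC" && find . -type f -newer "$STAMP" -print | cpio -pdmu "$DST"
    else
        # First pass: copy everything.
        cd "$SRC" && find . -type f -print | cpio -pdmu "$DST"
    fi

    mv /var/tmp/repl.newstamp "$STAMP"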

Thanks in advance,

Dexter

I have more questions than answers.
Are the source files spread across more than one file system?
Is the environment such that there is a relatively small number of large files involved,
or a large number of small files that are created by various means and rarely modified?
How many processes are there that can change a file?
Do you care if the permissions and ownership change?

Hello jgt,

Yes, I have three file systems/partitions (on the source server) which are used by three different departments when they run their application/processing software.
The environment is such that new files are generated daily (these are large files, sometimes few in number and sometimes many).

Typically there shouldn't be more than two processes simultaneously changing a file's data.
Permissions and ownership don't matter as such, but files with changed attributes should be copied as-is to the destination server.

Basically, I thought of having an exact replica of my source server.
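For the copy step itself I was thinking of cpio in pass mode, since as far as I know it keeps permissions and modification times, and ownership too when run as root, provided the destination's NFS export does not map root to an anonymous user. A quick way to verify that the attributes survive (the file name and mount point below are just examples):

    # Copy one test file across the NFS mount and compare attributes
    # on both sides; owner, group, mode and mtime should match.
    cd /data && echo ./testfile | cpio -pdmu /repl/data
    ls -l /data/testfile /repl/data/testfile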

regards,

Dexter