Copy data over a TB

Hi All,

We are unable to grow a UFS filesystem because its size would exceed 1 TB and it was not created with the -T option of newfs.

Hence we have decided to back up all the files to another filesystem and recreate this one using newfs with -T.
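
For reference, the recreate step would presumably be something like the following, using the same metadevice as in the ufsdump command below (adjust the device name if yours differs):

newfs -T /dev/md/rdsk/d251

The -T option sets the filesystem parameters so that it can later grow beyond 1 TB.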

Please recommend the most reliable way to copy around a TB of data.

We have thought of using ufsdump and ufsrestore as below.

 
ufsdump 0f - /dev/md/rdsk/d251 | (cd /NEW_FS; ufsrestore xf -)

Since ufsdump will write to standard output and ufsrestore will read from standard input, I am not sure whether the pipe will be able to handle around a TB of data.

The other option is cp -pr, but I am not sure whether links will be preserved that way.

Please advise.

The OS is Solaris 9 update 8.

Regards,
Vishal

Hi vishalaswani, there is a big difference in the copy performance of a filesystem depending on whether you have a lot of little files or a few big files. I think that in the first case you can use the command you describe, but in the second case you could perhaps try rsync instead; it will be faster. The command would be, for instance:

rsync -Havx ORIGINAL_FOLDER DESTINATION_FOLDER

Where DESTINATION_FOLDER can be a local folder or something like server:/folder. With these parameters you will get a mirror copy, including the links. You can use -e rsh to avoid the default ssh transport. Of course, if the folder structure is really big, you can launch a "foreach"-style loop with one rsync per folder, so you copy several folders at the same time, which cannot be done with ufsdump (see the sketch below). I copy around 4.5 TB every night from my main NFS server to slow-disk machines with a script based on this idea.
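
A rough sketch of that per-folder loop in plain sh (assuming the top-level directories under ORIGINAL_FOLDER are the split points; the paths and the degree of parallelism are placeholders to adapt to your setup):

for d in /ORIGINAL_FOLDER/*; do
    rsync -Havx "$d" /DESTINATION_FOLDER/ &   # hard links are only preserved within each rsync run
done
wait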

Also, you can use the same piped approach with tar instead of ufsdump; it is a little bit faster, but remember the E option if you have long file names.
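
For instance, a sketch of the tar-based copy (Solaris tar; E writes extended headers so long names survive, p restores permissions on extract; the paths are placeholders):

cd /ORIGINAL_FOLDER && tar cEf - . | (cd /NEW_FS; tar xpf -)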

Is the -T option available in Solaris 9? Maybe you can do an OS upgrade and use ZFS, which is easier to manage...