Hello everyone. I need some help copying a filesystem. The situation is this: I have an Oracle DB mounted on /u01 and need to copy it to /u02 with cpio. /u01 is 500 GB and /u02 is 300 GB; the space used on /u01 is 187 GB. This is running on Solaris 9 and both filesystems are UFS.
But when the copy finishes I end up with 207 GB on /u02! Why? Where did the extra 20 GB come from?
Any ideas on why this happens and how I can safely copy from /u01 to /u02? I have to be sure, since this is a production DB.
PS: /u01 has files that are 10 GB large... could that be the problem? Can cpio handle large files? (I thought it could! I've used it before to copy 2 GB files without problems...)
Sparse files may also change size when copied (blocks not stored in the input become stored blocks full of zeroes in the output), though the content remains identical.
Like Corona says, there might be sparse files involved, perhaps a temporary tablespace? In that case, dropping your temporary tablespace on the target filesystem and recreating it may turn it back into a sparse file...
When sparse files are copied by programs that do not take sparse files into account, they get turned into dense files on the target filesystem and take up much more disk space...
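You can see this effect in a couple of lines. This is a sketch using a small file and common GNU/Linux userland (the dd seek idiom works the same way on Solaris; cat is just one example of a program that writes the zeroes back out densely):

```shell
# Demonstrate a sparse file becoming dense when copied naively.
# Throwaway scratch directory; all paths here are examples.
tmp=$(mktemp -d)
# Create a 5 MB file that is entirely a hole: seek past 5 MB, write nothing.
dd if=/dev/zero of="$tmp/sparse" bs=1M seek=5 count=0 2>/dev/null
# cat reads the hole back as zeroes and writes them out, so the copy is dense.
cat "$tmp/sparse" > "$tmp/dense"
sparse_kb=$(du -k "$tmp/sparse" | cut -f1)
dense_kb=$(du -k "$tmp/dense" | cut -f1)
echo "sparse: ${sparse_kb} KB allocated, dense copy: ${dense_kb} KB allocated"
rm -rf "$tmp"
```

Both files compare identical byte-for-byte; only the allocated blocks differ.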
Copying sparse files is frequently a problem. The ability to replicate sparse files holes-and-all is often not merely system-specific but filesystem-specific too. I wrote a sparsecat utility that turns spans full of NULs into sparse holes, but it does it by brute force -- it doesn't know where the holes originally were, it just checks for sectors full of zeroes. It's not guaranteed in any sense either.
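For the record, GNU cp can do the same brute-force zero-scan (this is a GNU coreutils option, not something the stock Solaris 9 cp has, so treat it as an illustration of the technique):

```shell
# Re-sparsify a dense file of zeroes, sparsecat-style, with GNU cp.
tmp=$(mktemp -d)
# Build a dense 5 MB file full of zeroes.
dd if=/dev/zero of="$tmp/dense" bs=1M count=5 2>/dev/null
# --sparse=always scans for runs of zero bytes and writes holes instead --
# the same brute-force idea: it guesses the holes, it doesn't know them.
cp --sparse=always "$tmp/dense" "$tmp/resparsed"
dense_kb=$(du -k "$tmp/dense" | cut -f1)
resparse_kb=$(du -k "$tmp/resparsed" | cut -f1)
echo "dense: ${dense_kb} KB, re-sparsed: ${resparse_kb} KB"
rm -rf "$tmp"
```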
Instead of using cpio, which isn't sparse-file aware, use ufsdump/ufsrestore to back up your directories. Also make sure you back up stable data, by either locking the filesystem (lockfs) or creating a snapshot of it (fssnap).
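A sketch of what that could look like, shown as comments since it only runs on Solaris; the device names and backing-store path are examples, check fssnap's output for your actual snapshot device:

```shell
# Sketch only: Solaris 9 commands, shown as comments. Adjust names.
# 1. Create a stable snapshot of /u01 (backing store on another filesystem):
#      fssnap -F ufs -o bs=/var/tmp/snapstore /u01
#    fssnap prints the snapshot device it created, e.g. /dev/fssnap/0
# 2. Dump the snapshot and restore into /u02 in one pipe; ufsdump works at
#    the inode level and records holes, so sparse files stay sparse:
#      ufsdump 0f - /dev/rfssnap/0 | (cd /u02 && ufsrestore rf -)
# 3. Delete the snapshot once the copy is verified:
#      fssnap -d /u01
status=sketch
echo "sparse-safe copy sketch above ($status)"
```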
But... is the "df" command able to detect sparse files? It should detect them, shouldn't it? I am kind of confused here... if I create a 5 GB sparse file on a filesystem, let's say /u01, then
du -sh sparse-file
should say that the file is 5 GB... and if I copy the same file to another filesystem, /u02, and run
du -sh sparse-file
again, the result should be the same. Now, let's say on both filesystems, /u01 and /u02, I have only this one file; then df -h should report the same as du...
What I mean is: it doesn't matter that a sparse file is 10 GB big and only 1 GB of it is filled with useful data, the df, ls -lh, and du -sh commands all have to report that the file is 10 GB big... right?!
A 5 GB sparse file may take up only 500 MB, for instance. If you copy the file with a program that does not take sparse files into account, its copy will take up the full 5 GB on the target filesystem.
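You can check the two numbers yourself: ls -l (or wc -c) shows the logical size, while du shows the blocks actually allocated. A small-scale sketch of the same idea:

```shell
# Logical size vs. allocated blocks for a sparse file.
tmp=$(mktemp -d)
# 5 MB file that is all hole (seek past 5 MB, write nothing).
dd if=/dev/zero of="$tmp/sparse" bs=1M seek=5 count=0 2>/dev/null
size_bytes=$(wc -c < "$tmp/sparse")        # logical size, what ls -l shows
alloc_kb=$(du -k "$tmp/sparse" | cut -f1)  # blocks actually allocated
echo "logical: ${size_bytes} bytes, allocated: ${alloc_kb} KB"
rm -rf "$tmp"
```

So ls -l will insist the file is 5 MB while du reports almost nothing.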
I found the infamous 20 GB sparse file! :-) You were right, guys! Now I'll follow jlliagre's advice and use ufsdump/ufsrestore for this job... I'll give it a try.
du reports disk used, not file size. The two are never quite the same thing even at the best of times (e.g. a 1K file on a filesystem with 4K clusters takes a minimum of 4K of space), and the difference can be enormous for a sparse file, since arbitrary parts of the file aren't on disk at all.
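The rounding-up side is easy to confirm too (the exact block size is filesystem dependent; 4K is typical):

```shell
# du charges a tiny file a whole allocation block.
tmp=$(mktemp -d)
printf 'x' > "$tmp/tiny"                   # a 1-byte file
tiny_kb=$(du -k "$tmp/tiny" | cut -f1)     # typically 4 KB, never 1 byte
echo "1-byte file uses ${tiny_kb} KB on disk"
rm -rf "$tmp"
```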