2GB file size limit

Greetings,
I'm attempting to dump a filesystem from a RHEL5 Linux server to a VxFS filesystem on an HP-UX server. The VxFS filesystem is largefiles-enabled, and I've confirmed that I can copy/scp a file >2GB to it.

# fsadm -F vxfs /os_dumps
largefiles

# mkfs -F vxfs -m /dev/vg02/os_dumps
mkfs -F vxfs -o ninode=unlimited,bsize=1024,version=4,inosize=256,logsize=16384,largefiles /dev/vg02/os_dumps 2097152000

However, when I run the Linux dump utility, it fails 2GB into the dump:

DUMP: write: File too large
  DUMP: write error 2097170 blocks into volume 1: File too large
  DUMP: Do you want to rewrite this volume?: ("yes" or "no")   DUMP: write: File too large
  DUMP: write: File too large

I've confirmed that I can dump from the same RHEL5 server to another RHEL5 server (ext3 filesystem) without issue, so it doesn't seem to be a dump limitation. As I've mentioned, I've also confirmed that I can scp a large file (8GB) from the same Linux server to the VxFS filesystem without issue. There seems to be some issue between Linux dump and the HP-UX server/filesystem. This is really starting to drive me nuts. We were previously dumping (via the same script) to a Solaris server without issue; after re-pointing the script to HP-UX, we now have an issue.

Can someone please shed some light on this? I've tried various dump options and nothing seems to make a difference.

Thanks,

  • Bill

Perhaps if you said a bit more about your HP-UX system and OS version, I could start thinking about it...

Can you also post the dump command line, with the options you are using to dump this ext3 fs from RHEL to HP-UX?

Thanks for reaching out.

# uname -a
HP-UX corvette B.11.11 U 9000/800 1756503870 unlimited-user license

I'm primarily a Linux administrator and don't dabble much with HP-UX so if you need additional info, please let me know.

The HP-UX server is attached to EMC storage. Our Linux servers were previously backing up to a legacy Sun Solaris server, but we've run out of space there, so I'm trying to shift the scripts to back up to the HP server instead. I've created the logical volume and filesystem from scratch. As mentioned, everything seems to be working as expected with the exception of using dump from Linux to this filesystem. The Linux servers are using the dump options "0uf"; I've also tried "0auf" to no avail. Thanks again for reaching out.
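
(For reference, a rough sketch of the kind of invocation the script uses; the device and dump-file name below are placeholders, not the actual script values:)

# Hypothetical reconstruction of the remote dump: level 0 (0),
# update /etc/dumpdates (u), write to the named file (f).
# The host:path form makes dump drive rmt on the far side.
dump 0uf corvette:/os_dumps/blades/rhel5_root.dump /dev/sda1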

  • Bill

---------- Post updated at 12:11 PM ---------- Previous update was at 12:08 PM ----------

Keep in mind that the exact same script works flawlessly to both a Solaris server and another Linux server. As soon as I change one of the variables to point to the HP-UX server, it craps out after 2GB every time. The dump is over SSH. I've also tried RSH but got the same results. Thanks.
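
(For completeness, since I mentioned SSH vs. RSH: assuming the stock Linux dump package, the remote transport is chosen with the RSH environment variable, so the two runs only differ like this; device and file names are again placeholders:)

# dump defaults to rsh for host:path targets; pointing the RSH
# environment variable at ssh tunnels the same rmt session over SSH.
RSH=/usr/bin/ssh dump 0uf corvette:/os_dumps/blades/rhel5_root.dump /dev/sda1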

---------- Post updated at 12:20 PM ---------- Previous update was at 12:11 PM ----------

Proof that the filesystem in question does in fact support large files:
(I've also scp'd an 8GB file from the same Linux server to the filesystem)

[corvette]:/os_dumps/blades # dd if=/dev/zero of=8gb_file bs=8k count=1048576
1048576+0 records in
1048576+0 records out

[corvette]:/os_dumps/blades # ls -l
total 16809888
-rw-r-----   1 root       sys        8589934592 Nov  9 12:18 8gb_file

[corvette]:/os_dumps/blades # bdf .
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg02/os_dumps 2097152000 8485800 2072348504    0% /os_dumps

Can you give us the output of the following commands:

model
vgdisplay vg02
lvdisplay /dev/vg02/os_dumps

[corvette]:/os_dumps/blades # model
9000/800/rp8420

[corvette]:/os_dumps/blades # vgdisplay vg02
--- Volume groups ---
VG Name                     /dev/vg02
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      1      
Open LV                     1      
Max PV                      16     
Cur PV                      10     
Act PV                      10     
Max PE per PV               65535        
VGDA                        20  
PE Size (Mbytes)            128             
Total PE                    22981   
Alloc PE                    16000   
Free PE                     6981    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

[corvette]:/os_dumps/blades # lvdisplay /dev/vg02/os_dumps
--- Logical volumes ---
LV Name                     /dev/vg02/os_dumps
VG Name                     /dev/vg02
LV Permission               read/write   
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel     
LV Size (Mbytes)            2048000         
Current LE                  16000     
Allocated PE                16000       
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   on           
Allocation                  strict                    
IO Timeout (Seconds)        default

I took a while answering (people in my office...) and replied without having seen your last post...
Did you read the man page for dump on HP-UX? I remember it differs a little...

Well, dump is actually being initiated from the Linux side. I believe that rmt gets called on the remote side, but I'm not sure how that comes into play with the dump command. I did notice the following in the HP man page:

WARNINGS
      dump will not backup a file system containing large files.

But again, the dump is being initiated from the Linux side, and it supports backing up large files. I wonder if it's an rmt limitation on the HP side?
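
(One way to test that theory would be to take rmt out of the loop entirely and pipe the dump stream through ssh to a plain writer on the HP-UX side; device and file names are illustrative:)

# Bypass rmt: dump to stdout ("-"), ship the stream over ssh,
# and let dd on the HP-UX box write the file. If this run gets
# past 2GB, rmt is the likely culprit.
dump 0uf - /dev/sda1 | ssh corvette "dd of=/os_dumps/blades/rhel5_root.dump bs=64k"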

I wonder if it is not a 2GB limitation in the command itself (a déjà vu feeling...).
It reminds me of some similar situation with tar...
I will check...

http://www.peg.com/forums/dba/200406/msg00444.html

Hmmm... sorry, but I'm not sure I'm following you. Do you think I should disable largefiles on the filesystem and try again? Thanks.

Or try downloading/compiling the GNU utilities (dump); this was the case for many system commands, and for the same reason (open-systems compatibility) people started using GNU tar...
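
(For example, streaming GNU tar from the Linux side avoids both HP-UX dump and rmt entirely; a sketch with illustrative paths:)

# GNU tar on the Linux box builds the archive; the HP-UX side only
# has to cat the stream into the largefiles-enabled filesystem.
tar cf - /home | ssh corvette "cat > /os_dumps/blades/home.tar"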

I agree with vbe. The software has to be capable of writing a file larger than 2GB too. That post about Progress 8.3E was a classic example, but it was solvable by getting Progress to use multiple files, each of which was less than 2GB.
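
(The same workaround could apply here: keep every output file under 2GB by splitting the dump stream on the receiving side. A sketch, assuming the HP-UX split there accepts -b with an m suffix and "-" for stdin; device and file names are illustrative:)

# Split the incoming stream into 1GB pieces so no single file
# crosses 2GB; restore later with something like:
#   cat /os_dumps/blades/rhel5_root.dump.* | restore -rf -
dump 0uf - /dev/sda1 | ssh corvette "split -b 1024m - /os_dumps/blades/rhel5_root.dump."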