Cannot copy a single file larger than 1 TB

Good evening,

I have a problem copying a file larger than 1 TB, and it only happens with a single file. The error says the capacity is not enough, and even though I set my storage to 4 TB, the file is always rejected just as the copy finishes.

However, if I copy multiple/separate files with a combined size of 2 TB, it always succeeds.

The problem only occurs when I copy a single file to my Solaris 10 server.

I have already tried newfs -T on the drive, but it did not help.

Please advise.

Hi,
what does this give:

ulimit -a | grep file

And what are the mount options for your partition (is largefiles enabled)?

Regards.

bash-3.2# ulimit -a | grep file
core file size          (blocks, -c) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256

How are you copying the file to the system?

Is the copy from an internal drive?
Is the copy from an external drive?
Is the copy from a NFS mounted drive?
Is the copy via FTP?

What command are you using? cp?

What filesystem type is it?

Solaris 10 - SPARC or x86? 32-bit or 64-bit?

According to this article and Oracle, the maximum file size on Solaris 10 UFS is 1 TB.
Only ZFS is (almost) without limits.
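
If you want to double-check the limit on the target filesystem itself, something like this may work (assuming your getconf supports the FILESIZEBITS pathconf variable, which is POSIX but not guaranteed to be available on every build; the path is a placeholder):

# Ask the filesystem how many bits it supports for file offsets;
# the smaller this number, the smaller the maximum file size.
getconf FILESIZEBITS /path/of/mountpoint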

I copy using scp and rsync, but the problem is the same.
The copy goes from Solaris 11 to Solaris 10, or from CentOS to Solaris 10, and the problem is the same either way.

Please answer the other relevant questions - internal drive, external drive, how is it mounted.

It is an external drive on a NetApp (SAN), mounted on the Solaris server.

The problem is only with copying a single file; the file I am copying now is 2 TB in total, on 10 TB of storage.

Please provide the details from the df and mount commands:

df /path/of/mountpoint
mount -v | grep /path/of/mountpoint

My mounted disk:

bash-3.2# mount -v | grep /BACKUP_NEW2
/dev/dsk/c3t600A09803830464A423F4C6F32654D59d0s0 on /BACKUP_NEW2 type ufs read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1d83630 on Tue Jul 10 15:23:27 2018

bash-3.2# df /BACKUP_NEW2
/BACKUP_NEW2       (/dev/dsk/c3t600A09803830464A423F4C6F32654D59d0s0):11495069792 blocks 16224642 files

type ufs means the filesystem type is UFS, where files cannot be greater than 1 TB.
See my post #5.
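
Incidentally, raw capacity is not the issue. Assuming df is reporting free space in 512-byte blocks (the Solaris default), your mount point still has roughly 5.3 TiB free:

# Convert the df "blocks" figure (512-byte units) into GiB.
echo $((11495069792 * 512 / 1024 / 1024 / 1024))    # prints 5481, i.e. about 5.3 TiB free

It is the per-file 1 TB UFS limit that stops the copy.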

Maybe it is possible to compress the data when it is copied to the UFS disk?
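
For example, something along these lines could keep the file under the limit, assuming the data compresses well enough (host names and paths are placeholders):

# On the source host (Solaris 11 / CentOS), compress while streaming
# to the Solaris 10 box so the file lands on UFS already gzipped.
gzip -c /data/bigfile | ssh user@solaris10 'cat > /BACKUP_NEW2/bigfile.gz'

# Alternatively, split it into chunks below 1 TB (GNU split syntax) and
# copy the pieces; just remember they cannot be reassembled on the UFS side.
split -b 500G /data/bigfile /data/bigfile.part.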

Otherwise I see 4 options:

  1. Convert the disk to ZFS. Existing data is lost, so back up first!
  2. Get a new SAN disk, create ZFS on it, and copy the data (see the sketch below).
  3. Convert to NFS. Existing data is lost, so back up first! Make a NetApp volume, export it as NFS, and mount it as NFS.
  4. Get a new NFS share from NetApp, mount it, and copy the data (also sketched below).
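
A rough sketch of options 2 and 4, with placeholder device, pool, and share names (check the real LUN name with echo | format, and ask your storage team for the actual NFS export path):

# Option 2: build a ZFS pool on a new SAN LUN.
# Replace cXtYdZ with the real disk device reported by format.
zpool create backuppool cXtYdZ
zfs create backuppool/backup
zfs set mountpoint=/BACKUP_ZFS backuppool/backup

# Option 4: mount a new NFS share exported from the NetApp.
# "netapp" and /vol/backup_new are placeholders.
mkdir -p /BACKUP_NFS
mount -F nfs netapp:/vol/backup_new /BACKUP_NFS

Either way, the destination filesystem no longer has the 1 TB per-file restriction.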