Disk Space

Hi,

I am installing TAM-eb components on Solaris 10 and, unfortunately, I am running out of space. When I run df I come across a lot of directories. I would like to know whether there is any way to free some disk space.
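Before deleting anything, it helps to find out where the space is actually going. A minimal sketch (the /var path is just an example; point du at whatever tree df shows as full):

```shell
# Free space per mounted filesystem, in kilobytes
df -k

# Ten largest directory trees under /var, sizes in KB, smallest first
du -sk /var/* 2>/dev/null | sort -n | tail -10
```

From there, archive or delete the biggest offenders first.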

Archiving, compression and deletion are your choices. Some people mount or symlink in space from another device. I don't think Solaris has a compression option on any file system the way NTFS does. You could also mount in space from another machine.

Solaris 10 has a compression option on ZFS.
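On Solaris 10 it is a per-dataset property, set with a single command. A sketch, where `tank/export` is a hypothetical pool/dataset name:

```shell
# Enable on-the-fly compression (LZJB by default) for an existing dataset
zfs set compression=on tank/export

# Verify the setting
zfs get compression tank/export
```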

Can you compress dir subtrees on the fly?

It'd be nice if it would just compress quiescent stuff, quietly, in the background. CPUs are faster than disks!

After turning on compression, everything on that filesystem is compressed on the fly, including subdirectories. The only drawback is that data that was on the filesystem before compression was enabled stays uncompressed.

Oh, so it is at creation that files get marked for compressed storage?

Is there no way to qualify old files for compression other than copying them?
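Not built in, but rewriting a file after compression is enabled makes ZFS allocate fresh (compressed) blocks for it. A crude sketch with a hypothetical path; it loses no data but briefly doubles the file's space:

```shell
# Rewrite a file in place so its blocks are re-allocated, and thus compressed
f=/tank/export/oldfile        # hypothetical path to a pre-compression file
cp -p "$f" "$f.tmp" && mv "$f.tmp" "$f"
```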

What'd be really nice is if you could NFS mount a compressed ZFS and have the network data be compressed too, some sort of NFS-discovered feature for when ZFS software is found on both ends. There's always room at the top!

When you create a ZFS filesystem you can enable compression right away, so there won't be any "uncompressed" data. By default compression is disabled. And if you NFS share/mount a compressed filesystem, the on-disk data stays compressed (the NFS traffic itself is served decompressed) :wink:
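Setting the property at creation time, plus NFS sharing, might look like this (pool and dataset names are made up):

```shell
# Create a dataset with compression enabled from the start
zfs create -o compression=on tank/data

# Share it over NFS; clients read normal data while disk blocks stay compressed
zfs set sharenfs=on tank/data
```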


I always felt that compressing a filesystem reduces reliability; it's probably harder to fsck in case of problems. I'd archive and delete log files first, and perhaps uninstall some packages. Shame SCSI disks are so expensive.

ZFS is much more reliable than UFS, and enabling compression doesn't change that. It only slightly affects performance.

It usually does, but by improving it!
Enabling compression on ZFS increases overall I/O performance in most use cases.


On Windows, definitely.

ZFS neither provides nor needs fsck. You never need to fsck a ZFS file system; it is always consistent by design.

Nice ZFS summary: ZFS - The newest file system, explained in human terms

Compression just means:

  1. the data on disk is smaller
  2. the data is more random
  3. if the data is corrupted, a bigger part or even the remainder of the file is likely lost (but this is not 1962 tape or 1969 disk, and there are backups, RAID and mirroring!)
  4. Because it reads fewer disk pages to fill more pages of RAM, and CPUs are hugely faster than disks, serial read flow is faster. Write flow is a bit more CPU-intensive and might not beat raw disk, but ZFS buffers writes heavily, and who has only one CPU any more? LZJB should be much faster than gzip, though its output is perhaps twice as large.
  5. The next point is a bit speculative, as I could not find an exact description of ZFS compression blocking: random access requires reading each compressed block on disk from the beginning, and, for writes, recompressing the entire affected compressed block.
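To see how these trade-offs play out on real data, ZFS reports the achieved ratio per dataset, and the algorithm can be changed after the fact (LZJB is the default; gzip levels arrived in later Solaris 10 updates). The dataset name below is hypothetical:

```shell
# Check the achieved compression ratio for a dataset
zfs get compressratio tank/data

# Trade CPU for space: switch new writes to gzip (gzip-1 ... gzip-9 where supported)
zfs set compression=gzip tank/data
```

Only data written after the change uses the new algorithm; existing blocks keep whatever they were written with.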

I wonder if they ever considered deferring compression of each file until the data goes quiescent?