ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a RAID-Z2 pool over five HDDs and created a few ZFS filesystems on it.

Once I (unintentionally) managed to fill the pool completely with data, and (to my surprise) the filesystems stopped working: I could not read or delete any data, and after I unmounted the pool, I could not even mount it again.

I've heard that this is standard behavior for ZFS and that the correct way to avoid such problems in the future is never to use the full capacity of the pool.

Now I'm thinking about setting quotas on my filesystems (as described in the article "ZFS: Set or create a filesystem quota"), but I am wondering whether that is enough.

I have a tree hierarchy of filesystems on the pool, something like this (pool is the name of the zpool and also of its root filesystem):

    /pool
    /pool/svn
    /pool/home
    ...

Is it OK to just set a quota on "pool" (since a quota on the root filesystem should apply to all the sub-filesystems as well)? I mean, is this enough to prevent such an event from happening again? For instance, would it prevent me from taking a new filesystem snapshot should the quota be reached?
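For illustration, I was thinking of something like this (the 4T value is just an example, not my real pool size):

    # cap "pool" and everything under it at 4 TB
    zfs set quota=4T pool

    # verify what the sub-filesystems see
    zfs get -r quota pool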

How much space should I reserve, i.e., make unavailable (I read somewhere that it is good practice to use only about 80% of the pool's capacity)?
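One variant I have seen mentioned is to make part of the pool unavailable by putting a reservation on an otherwise empty filesystem, rather than capping the rest with a quota; the name and size here are made up:

    # create an empty filesystem whose only job is to hold back space
    zfs create pool/reserved
    zfs set reservation=200G pool/reserved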

Finally, is there a better or more suitable solution to my original problem than setting quotas on the filesystems?

Thank you very much for your advice.
Dusan

What Solaris and zpool versions are you using?

    cat /etc/release
    zpool upgrade

This issue was fixed a long time ago if I remember correctly.

Hi,

Post the output from:

    zpool upgrade -v

What I would say is that the 80% rule still holds true; I have experienced several problems due to zpools being over this threshold. Although the problem mostly shows up as high CPU utilisation, it is definitely real.
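An easy way to keep an eye on this is the CAP column:

    # CAP shows how full each pool is, as a percentage
    zpool list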

A possible mitigation would be to limit how much memory ZFS can use (its ARC cache), especially if the system is lightly used.
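On Solaris the usual way to do that is to cap the ARC in /etc/system and reboot; the 4 GB value below is purely an example for a lightly used box:

    * /etc/system: limit the ZFS ARC to 4 GB
    set zfs:zfs_arc_max = 0x100000000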

We are currently running pool version 29, and the patch level is:

    su02sa000> uname -a
    SunOS su02sa000 5.10 Generic_147440-13 sun4v sparc SUNW,SPARC-Enterprise-T5220

Regards

Dave

Hi,

thank you both for your replies!

Here are the outputs from my machine:

    uname -a
    SunOS nas1 5.11 snv_151a i86pc i386 i86pc Solaris

    cat /etc/release
    Oracle Solaris 11 Express snv_151a X86
    Copyright (c) 2010, Oracle and/or its affiliates.  All rights reserved.
    Assembled 04 November 2010

    zpool upgrade -v
    This system is currently running ZFS pool version 31.

    The following versions are supported:

    VER  DESCRIPTION
    ---  --------------------------------------------------------
     1   Initial ZFS version
     2   Ditto blocks (replicated metadata)
     3   Hot spares and double parity RAID-Z
     4   zpool history
     5   Compression using the gzip algorithm
     6   bootfs pool property
     7   Separate intent log devices
     8   Delegated administration
     9   refquota and refreservation properties
     10  Cache devices
     11  Improved scrub performance
     12  Snapshot properties
     13  snapused property
     14  passthrough-x aclinherit
     15  user/group space accounting
     16  stmf property support
     17  Triple-parity RAID-Z
     18  Snapshot user holds
     19  Log device removal
     20  Compression using zle (zero-length encoding)
     21  Deduplication
     22  Received properties
     23  Slim ZIL
     24  System attributes
     25  Improved scrub stats
     26  Improved snapshot deletion performance
     27  Improved snapshot creation performance
     28  Multiple vdev replacements
     29  RAID-Z/mirror hybrid allocator
     30  Encryption
     31  Improved 'zfs list' performance

Thank you for helping me,
Dusan