Zpool showing 100% full

Hi,
This is Solaris 10 on SPARC. For some reason one zpool is showing 100% full, even though there is nothing in it. dstr03-zone02 is a non-global zone running on the physical machine dstr03.

root@dstr03:/# df -h | grep -i zone02
zone02_app_pool        60G    31K   3.8G     1%    /zone02_app_pool
zone02_root_pool      8.6G    31K     0K   100%    /zone02_root_pool
zone02_app_pool/cad_apps    18G    14G   3.8G    79%    /zone/dstr03-zone02/cad/apps
zone02_app_pool/cad_envs    36G    21G    15G    58%    /zone/dstr03-zone02/cad/envs
zone02_app_pool/cad_users   2.0G   194M   1.8G    10%    /zone/dstr03-zone02/cad/users
zone02_root_pool/root   8.5G   1.4G   7.2G    16%    /zone/dstr03-zone02/root
root@dstr03:/# ls -ltr /zone02_root_pool
total 0
root@dstr03:/# du -sh /zone02_root_pool
   1K   /zone02_root_pool
root@dstr03:/# zfs get quota zone02_root_pool
NAME               PROPERTY  VALUE  SOURCE
zone02_root_pool  quota     none   default
root@dstr03:/# zfs get reservation zone02_root_pool
NAME               PROPERTY     VALUE   SOURCE
zone02_root_pool  reservation  none    default
root@dstr03:/#
root@dstr03:/# zfs list -o space | grep -i zone02
zone02_app_pool                    3.85G  56.0G         0     31K              0      56.0G
zone02_app_pool/cad_apps           3.84G  14.2G         0   14.2G              0          0
zone02_app_pool/cad_envs           15.2G  20.8G         0   20.8G              0          0
zone02_app_pool/cad_users          1.81G   194M         0    194M              0          0
zone02_root_pool                       0  8.55G         0     31K              0      8.55G
zone02_root_pool/root              7.19G  1.36G         0   1.36G              0          0
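
(Note: the grep strips the header line; with "zfs list -o space" the columns are NAME, AVAIL, USED, USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD.)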

How can I fix this?

Unmounting and mounting it again?

I have not tried that.
Can I unmount /zone02_root_pool without harming anything? I checked with fuser and it is not currently being used by any process.

Why not?

Then, if that solves it, we will try to explain what the possible cause was...

I tried, but no luck:

root@dstr03:/# umount /zone02_root_pool
root@dstr03:/# zfs mount -a
root@dstr03:/# df -h | grep -i zone02 | grep root
zone02_root_pool/root   8.5G   1.4G   7.2G    16%    /zone/dstr03-zone02/root
zone02_root_pool      8.6G    31K     0K   100%    /zone02_root_pool

All the other root pools on the server (for the other non-global zones) look fine: each shows 1% capacity in "df -h", while the "available" column varies.

Compare a df -k with a du -dsk on the filesystem /zone02_root_pool.
If the values are radically different, then you have a dangling filehandle (i.e. a process has an open handle on a file there, but there are no directory entries pointing at it).
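For example, a minimal sketch run from the global zone (dstr03):

df -k /zone02_root_pool      # blocks used/available as the filesystem reports them
du -dsk /zone02_root_pool    # blocks reachable through directory entries (-d keeps du on this filesystem)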

You can find the process in question via lsof (easy) or pfiles (/usr/bin/pfiles on Solaris 10; loop over all running processes) and look for open handles with no associated filename that are on the zpool in question. This will find you the process that's holding it, as well as the filehandle for it.
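A rough sketch of both approaches (untested; assumes lsof is installed and that you run it from the global zone; note that a deleted file shows no pathname in pfiles output, so you may have to match on the filesystem's dev: numbers by hand):

lsof /zone02_root_pool

for pid in $(ls /proc); do
    echo "=== PID $pid ==="
    pfiles $pid 2>/dev/null
done > /tmp/pfiles.out    # then search /tmp/pfiles.out for descriptors on the pool with no pathname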

If you can bounce the process, problem solved. If not, you can at least truncate the file by catting /dev/null over the top of /proc/<pid>/fd/<fd>.
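For instance (illustrative only; PID 1234 and descriptor 5 are made-up values standing in for whatever lsof/pfiles reports):

cat /dev/null > /proc/1234/fd/5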