Solaris patching issue with Live Upgrade

I have a Solaris 10 SPARC box with a ZFS file system, running two non-global zones. I am in the process of applying the Solaris Recommended patch cluster via Live Upgrade.
Though I have enough space in the root file system of both zones, every time I run installpatchset, it fails complaining about insufficient space (in the alternate BE). It seems the snapshots are taking too much space. I am not sure how to fix this issue.
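I am only guessing that the snapshots are to blame; a per-dataset space breakdown such as the one below should show it more directly (I believe the -o space column set needs Solaris 10 10/09 or later):

zfs list -o space -r zesbq01_root_pool zesbr01_root_pool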

root@oraprod_sap21:/# zoneadm list -icv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 oraprod_sap21-zesbr01 running    /zone/oraprod_sap21-zesbr01/root    native   shared
   3 oraprod_sap21-zesbq01 running    /zone/oraprod_sap21-zesbq01/root    native   shared
root@oraprod_sap21:/# df -h | grep -i root
rpool/ROOT/s10s_u9wos_14a   274G    11G   216G     5%    /
rpool/ROOT/s10s_u9wos_14a/var   274G    21G   216G     9%    /var
zesbq01_root_pool       17G    21K    53M     1%    /zesbq01_root_pool
zesbr01_root_pool       17G    21K    80M     1%    /zesbr01_root_pool
zesbq01_root_pool/root    17G   6.9G    10G    41%    /zone/oraprod_sap21-zesbq01/root
zesbr01_root_pool/zone    17G   6.2G    11G    37%    /zone/oraprod_sap21-zesbr01/root
root@oraprod_sap21:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
old_patch                  yes      yes    yes       no     -
19_march                   yes      no     no        yes    -
root@oraprod_sap21:/# cd /var/tmp/10_Recommended
root@oraprod_sap21:/var/tmp/10_Recommended# ./installpatchset -B 19_march --s10patchset
Setup
..................
 
Recommended OS Patchset Solaris 10 SPARC (2013.01.29)
Application of patches started : 2013.03.19 23:12:28
 
Application of patches finished : 2013.03.19 23:12:28
 
The following filesystems have available space less than the recommended limit
to safely continue installation of this patch set :
 /.alt.19_march/zone/oraprod_sap21-zesbq01/root-19_march (zesbq01_root_pool/root-19_march) : 54513kb available, 1260901kb recommended
 /.alt.19_march/zone/oraprod_sap21-zesbr01/root-19_march (zesbr01_root_pool/zone-19_march) : 80976kb available, 1252637kb recommended
The recommended limit is an estimated upper bound on the amount of space an
individual patch application operation may require to complete successfully.
Due to the way the recommended limit is estimated, it will always be greater
than the actual amount of space required, sometimes by a significant margin.
Note the recommended limit is neither the exact amount of free space required
to apply a patch, or the amount of free space to completely install the
bundle, these interpretations are incorrect.
If the operator wishes to continue installation of this patch set at their own
risk, space checking can be overridden by invoking this script with the
'--disable-space-check' option.
Install log files written :
  /.alt.19_march/var/sadm/install_data/s10s_rec_patchset_short_2013.03.19_23.12.28.log
  /.alt.19_march/var/sadm/install_data/s10s_rec_patchset_verbose_2013.03.19_23.12.28.log
root@oraprod_sap21:/var/tmp/10_Recommended# 
root@oraprod_sap21:/# cd /
root@oraprod_sap21:/# zfs list | grep -i 19_march
rpool/ROOT/19_march                      917M   216G  10.6G  /
rpool/ROOT/19_march/var                  483M   216G  21.8G  /var
rpool/ROOT/s10s_u9wos_14a@19_march      26.6M      -  10.7G  -
rpool/ROOT/s10s_u9wos_14a/var@19_march  46.9M      -  21.3G  -
zesbq01_root_pool/root@19_march         34.7M      -  6.87G  -
zesbq01_root_pool/root-19_march          301M  53.5M  6.70G  /zone/oraprod_sap21-zesbq01/root-19_march
zesbr01_root_pool/zone@19_march         33.1M      -  6.22G  -
zesbr01_root_pool/zone-19_march          275M  79.3M  6.29G  /zone/oraprod_sap21-zesbr01/root-19_march

Please suggest how to fix this issue.

What's the output from

zfs list -t all | egrep 'rpool|root_pool'

There's also a problem with using Live Upgrade if your zone names, and therefore ZFS pool names, are long enough to make the output columns from df (IIRC) run together. I seem to remember that one of the LU scripts uses df to parse file system information, and when the columns run together, LU fails badly.
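As a rough illustration (this is just a sanity check I'd run, not the actual LU parsing code): df -k on Solaris normally prints six whitespace-separated fields per data line, so any line with a different field count suggests a long dataset name has overflowed its column:

df -k | awk 'NR > 1 && NF != 6'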

Here is the output:

root@oraprod_sap21:/# zfs list -t all | egrep 'rpool|root_pool'
rpool                                   57.3G   216G    99K  /rpool
rpool/ROOT                              33.0G   216G    21K  legacy
rpool/ROOT/19_march                      917M   216G  10.6G  /
rpool/ROOT/19_march/var                  483M   216G  21.8G  /var
rpool/ROOT/s10s_u9wos_14a               32.1G   216G  10.7G  /
rpool/ROOT/s10s_u9wos_14a@19_march      53.8M      -  10.7G  -
rpool/ROOT/s10s_u9wos_14a/var           21.4G   216G  21.3G  /var
rpool/ROOT/s10s_u9wos_14a/var@19_march  85.1M      -  21.3G  -
rpool/dump                              11.9G   216G  11.9G  -
rpool/export                              44K   216G    23K  /export
rpool/export/home                         21K   216G    21K  /export/home
rpool/swap                              12.3G   229G    16K  -
zesbq01_root_pool                       17.3G  53.2M    21K  /zesbq01_root_pool
zesbq01_root_pool/root                  6.91G  10.1G  6.86G  /zone/oraprod_sap21-zesbq01/root
zesbq01_root_pool/root@19_march         55.0M      -  6.87G  -
zesbq01_root_pool/root-19_march          301M  53.2M  6.70G  /zone/oraprod_sap21-zesbq01/root-19_march
zesbr01_root_pool                       17.3G  79.3M    22K  /zesbr01_root_pool
zesbr01_root_pool/zone                  6.26G  10.7G  6.21G  /zone/oraprod_sap21-zesbr01/root
zesbr01_root_pool/zone@19_march         51.2M      -  6.22G  -
zesbr01_root_pool/zone-19_march          275M  79.3M  6.29G  /zone/oraprod_sap21-zesbr01/root-19_march

Where did the "...@19_march" snapshots come from? I don't seem to recall snapshots like that being created by "lucreate".

The zesbq01_root_pool seems to be only 17GB, which IMO is awfully small for a root pool. I'd be half tempted to just destroy that pool and start over.
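To confirm where those datasets came from, the ZFS origin property will show whether the -19_march filesystems are clones of those snapshots:

zfs get -r origin zesbq01_root_pool zesbr01_root_pool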

I had created the 19_march BE, and after that I ran lucreate again, which might have created them. I can remove them if they are causing the issue.

root@oraprod_sap21:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
old_patch                  yes      yes    yes       no     -
19_march                   yes      no     no        yes    -

Most of our zone roots are 8 GB, since their occupancy is also low. In this zone it is 17 GB, with 6.9 GB used, so there is still about 10 GB free. That should not be an issue. Could it have anything to do with a quota?
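I can check for quotas and reservations on the zone pools with something like this (standard zfs properties), in case one of those is what is pinching the alternate BE:

zfs get -r quota,refquota,reservation,refreservation zesbq01_root_pool zesbr01_root_pool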

root@oraprod_sap21:/# df -h | grep -i root
rpool/ROOT/s10s_u9wos_14a   274G    11G   216G     5%    /
rpool/ROOT/s10s_u9wos_14a/var   274G    21G   216G     9%    /var
zesbq01_root_pool       17G    21K    53M     1%    /zesbq01_root_pool
zesbr01_root_pool       17G    22K    79M     1%    /zesbr01_root_pool
zesbq01_root_pool/root    17G   6.9G    10G    41%    /zone/oraprod_sap21-zesbq01/root
zesbr01_root_pool/zone    17G   6.2G    11G    37%    /zone/oraprod_sap21-zesbr01/root

It looks like lucreate might have created another boot environment - or done something with the existing one. I'd clean out the pool and start over. A simple "lucreate -n name -p pool" should be all you need to do.

FWIW, I like putting new boot envs in a separate pool from the active boot env - it's slower and takes a lot more disk space, but you don't wind up with a maze of clones and snapshots.
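For example, something along these lines, assuming you have a second pool with room (the pool name here is just a placeholder):

lucreate -n 19_march -p spare_pool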

Do I need to do the following (clones first, since the snapshots cannot be destroyed while they have dependent clones)?

zfs destroy zesbq01_root_pool/root-19_march
zfs destroy zesbq01_root_pool/root@19_march
zfs destroy zesbr01_root_pool/zone-19_march
zfs destroy zesbr01_root_pool/zone@19_march
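Or would it be cleaner to remove the failed BE with ludelete, which (as far as I understand) also cleans up the snapshots and clones that lucreate made and keeps the LU configuration consistent?

ludelete 19_march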

I know that I'm a bit late to the party on this question, but this should allow you to perform the update without the error you showed in the original post.

./installpatchset -B 19_march --s10patchset --disable-space-check

Before you do this, be absolutely sure you have a good, current backup of the data, specifically on the two filesystems listed in the error message. It's a good idea to have a backup of everything before performing any updates, but especially in a situation like this.
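For example, a recursive snapshot of the two zone pools is a cheap first step (the snapshot name is just a placeholder), though a proper backup should also leave the box, e.g. by piping zfs send to another host or to a file on a different pool:

zfs snapshot -r zesbq01_root_pool@pre_patch
zfs snapshot -r zesbr01_root_pool@pre_patch
zfs send zesbq01_root_pool/root@pre_patch | gzip > /var/tmp/zesbq01_root_pre_patch.zfs.gz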

-Tom