Restoring to previous Boot Environment

Hi all,

I'm fairly new to Solaris and am just getting to grips with using LU (Live Upgrade) for OS patching purposes.

worcester#uname -a
SunOS worcester 5.10 Generic_144488-12 sun4v sparc SUNW,SPARC-Enterprise-T5220

I have successfully created and patched a new BE (boot environment) using the lucreate/luactivate commands.

On activating the patched BE I received the following instructions:

  worcester#luactivate patching20110606
  A Live Upgrade Sync operation will be performed on startup of boot environment <patching20110606>.
  
  **********************************************************************
  The target boot environment has been activated. It will be used when you reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands.
  You MUST USE either the init or the shutdown command when you reboot. If you do not use either init or shutdown, the system will not boot using the target BE.
  **********************************************************************
  
  In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot environment:
  
  1. Enter the PROM monitor (ok prompt).
  
  2. Boot the machine to Single User mode using a different boot device
  (like the Solaris Install CD or Network). Examples:
  
       At the PROM monitor (ok prompt):
       For boot to Solaris CD:  boot cdrom -s
       For boot to network:     boot net -s
  
  3. Mount the Current boot environment root slice to some directory (like /mnt). You can use the following commands in sequence to mount the BE:
  
       zpool import rpool
       zfs inherit -r mountpoint rpool/ROOT/patching
       zfs set mountpoint=<mountpointName> rpool/ROOT/patching
       zfs mount rpool/ROOT/patching
  
  4. Run <luactivate> utility with out any arguments from the Parent boot environment root slice, as shown below:
  
       <mountpointName>/sbin/luactivate
  
  5. luactivate, activates the previous working boot environment and
  indicates the result.
  
  6. Exit Single User mode and reboot the machine.
  
  **********************************************************************
  
  Modifying boot archive service
  Activation of boot environment <patching20110606> successful.
  

On rebooting the server, everything comes back up fine and my OS is now fully patched. This is a test server, and I want to get the recovery procedures documented before applying this to my live servers, so I tried rolling back to the previous (pre-patch) BE.

Following the instructions given by the luactivate command, I booted into single user mode and first ran

zpool import rpool

This produced a series of failures:

Cannot mount '/export': failed to create mountpoint
Cannot mount '/rpool': failed to create mountpoint

for all my mountpoints. Is this correct?

I ignored these errors, as I've read on other forums that they can be ignored (not sure how true this is).
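As an aside, on releases where zpool import supports the -N option, the mount attempts (and the resulting "failed to create mountpoint" noise) can be avoided by importing the pool without mounting any of its datasets; check the zpool man page on your release before relying on it:

     zpool import -N rpool

The errors themselves are generally harmless here: the miniroot cannot create the mountpoint directories for datasets like /export and /rpool, but the pool itself still imports.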

The next few steps appeared to complete successfully:

zfs inherit -r mountpoint rpool/ROOT/patching 
zfs set mountpoint=/mnt rpool/ROOT/patching
zfs mount rpool/ROOT/patching

The final step was to run

<mountpointName>/sbin/luactivate

This resulted in an error of

luactivate: ERROR: Live Upgrade not installed properly (/etc/default/lu not found).

Can anyone spot anything I'm missing, or offer any other assistance, please?

Any other output that is needed, please let me know.

thank you.

You did boot into single user mode from the network or DVD, correct?

Yes - I booted, from DVD, into single user mode using

boot cdrom -s

Hi, I'm facing the same issue. My release is Solaris 10 5/09 (s10s_u7wos_08). I'm not able to activate the old BE by booting through the network.

Well, I raised a call with Oracle, and their response pointed me at the boot -L workaround.

I'll be trying the boot -L command tomorrow and will let you know how I get on.

Just thought I'd give an update on this issue to say that I've successfully used the boot -L workaround to boot into the old BE.
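For anyone else hitting this, the boot -L workaround goes roughly like this (the BE names below are this thread's examples; substitute your own). At the ok prompt, boot -L lists the bootable datasets in the root pool and then prints the boot -Z command needed to boot the one you select:

     ok boot -L
     1 patching
     2 patching20110606
     Select environment to boot: [ 1 - 2 ]: 1
     To boot the selected entry, invoke:
     boot [<root-device>] -Z rpool/ROOT/patching

     ok boot -Z rpool/ROOT/patching

Once the old BE is up, luactivate can be run from it in the normal way to make it the active BE again.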

Oracle have been able to reproduce this behaviour in their lab. They said it is a known bug, tracked as:

Bug 6996301 : s10u9 zfs root PBE/ABE in same pool on-screen luactivate instructions don't work

and

Bug 6923286 : luactivate fails in single user mode during fallback activation

There's no official fix available currently.