Solaris not booting into the new BE after performing Live Upgrade.

After creating the new BE and activating it with the luactivate command, the OS still boots into the old BE.

The steps I followed are below.
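
(The lucreate step is not pasted here; since the new BE ends up in a second pool, rpool2, the creation command would have been something along these lines -- the exact options are an assumption on my part:)

     lucreate -n New_zfs -p rpool2    # assumed invocation; actual options may have differed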

bash-3.2#
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    yes       no     -
New_zfs                    yes      no     no        yes    -
bash-3.2#
bash-3.2# luactivate New_zfs
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <oldZFS>

Generating boot-sign for ABE <New_zfs>
Generating partition and slice information for ABE <New_zfs>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/s10x_u11wos_24a
     zfs set mountpoint=<mountpointName> rpool/ROOT/s10x_u11wos_24a
     zfs mount rpool/ROOT/s10x_u11wos_24a

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.
5. umount /mnt
6. zfs set mountpoint=/ rpool/ROOT/s10x_u11wos_24a
7. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <New_zfs> successful.
bash-3.2#
bash-3.2# init 6
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <New_zfs> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful


######### After Reboot ###########


bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    yes       no     -
New_zfs                    yes      no     no        yes    -

Please suggest.

Did you activate the new BE?
What's the output of lustatus before the reboot?
The new BE should be activated; from what I see, it's not active on reboot.
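
A quick way to double-check what will actually be booted, before and after the reboot (a rough sketch, assuming x86 Solaris 10 with a ZFS root and GRUB):

     lustatus            # "Active On Reboot" should show yes for New_zfs
     bootadm list-menu   # GRUB menu in use, its entries and the default entry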

You did not give a boot environment argument, i.e. New_zfs, to luactivate.
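
For reference, the usual invocation with an explicit BE name is simply:

     luactivate New_zfs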

@br1an : I forgot to include the lustatus output from before rebooting. It does show New_zfs with "yes" under Active On Reboot.

@fpmurphy : I activated the new BE with the luactivate New_zfs command. Is there anything I missed?

I tried the same steps again and got the same results. :frowning:

bash-3.2#
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    yes       no     -
New_zfs                    yes      no     no        yes    -
bash-3.2#
bash-3.2# luactivate New_zfs
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <oldZFS>

Generating boot-sign for ABE <New_zfs>
Generating partition and slice information for ABE <New_zfs>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/s10x_u11wos_24a
     zfs set mountpoint=<mountpointName> rpool/ROOT/s10x_u11wos_24a
     zfs mount rpool/ROOT/s10x_u11wos_24a

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.
5. umount /mnt
6. zfs set mountpoint=/ rpool/ROOT/s10x_u11wos_24a
7. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <New_zfs> successful.
bash-3.2#
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    no        no     -
New_zfs                    yes      no     yes       no     -
bash-3.2#
bash-3.2#
bash-3.2# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       7.34G  12.2G    43K  /rpool
rpool/ROOT                  5.28G  12.2G    31K  legacy
rpool/ROOT/s10x_u11wos_24a  5.28G  12.2G  5.28G  /
rpool/dump                  1.00G  12.2G  1.00G  -
rpool/export                  63K  12.2G    32K  /export
rpool/export/home             31K  12.2G    31K  /export/home
rpool/swap                  1.06G  12.3G  1.00G  -
rpool2                      7.35G  2.43G  41.5K  /rpool2
rpool2/ROOT                 5.29G  2.43G    31K  legacy
rpool2/ROOT/New_zfs         5.29G  2.43G  5.29G  /
rpool2/dump                 1.03G  3.46G    16K  -
rpool2/swap                 1.03G  3.46G    16K  -

FYI: my alternate BE (New_zfs) is in a different zpool (rpool2).
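
If it helps, I can compare which menu.lst GRUB is actually reading and whether its default entry points at New_zfs. A rough sketch of the checks I plan to run (paths assume the usual ZFS-root layout, where each root pool keeps its GRUB menu in <pool>/boot/grub/menu.lst):

     bootadm list-menu                 # location of the GRUB menu in use, its entries and the default
     cat /rpool/boot/grub/menu.lst     # menu on the original root pool
     cat /rpool2/boot/grub/menu.lst    # menu on the pool holding New_zfs, if one exists there
     zpool status rpool rpool2         # which disks back each pool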

Thanks