Live Upgrade Patching Error: Unable to write vtoc

I'm attempting to patch several servers using Live Upgrade.

Release: Oracle Solaris 10 8/11 s10x_u10wos_17b X86

The error I'm getting is in the log below:

# tail -15 /var/svc/log/rc6.log
Legacy init script "/etc/rc0.d/K50pppd" exited with return code 0.
Executing legacy init script "/etc/rc0.d/K52llc2".
Legacy init script "/etc/rc0.d/K52llc2" exited with return code 0.
Executing legacy init script "/etc/rc0.d/K62lu".
Live Upgrade: Deactivating current boot environment <sol10_U10x86>.
Live Upgrade: Executing Stop procedures for boot environment <sol10_U10x86>.
Live Upgrade: Current boot environment is <sol10_U10x86>.
Live Upgrade: New boot environment will be <patched_05102012>.
Live Upgrade: Activating boot environment <patched_05102012>.
ERROR: Live Upgrade: Unable to write the vtoc for boot environment
<patched_05102012> to device </dev/rdsk/c0t5000C50039FEF187d0s2>.
Partition 0 not aligned on cylinder boundary: "0 2 00 20352 286657920"
ERROR: Live Upgrade: Activation of boot environment <patched_05102012> FAILED.
Legacy init script "/etc/rc0.d/K62lu" exited with return code 0.
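
For the record, the box came back up on the old BE after the failed activation; lustatus shows which BE is active now and which is flagged to be active on reboot:

# lustatus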


There's a very big difference in root size between the current BE and the patched BE:

# lufslist sol10_U10_101512
               boot environment name: sol10_U10_101512

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap      21474836480 -                   -
rpool/ROOT/sol10_U10_101512 zfs        8365630464 /                   -
rpool/export            zfs          21292032 /export             -
rpool/export/home       zfs          21259264 /export/home        -
rpool                   zfs       32889929728 /rpool              -
# lufslist sol10_U10x86
               boot environment name: sol10_U10x86
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap      21474836480 -                   -
rpool/ROOT/sol10_U10x86 zfs          59540480 /                   -
rpool/export            zfs          21292032 /export             -
rpool/export/home       zfs          21259264 /export/home        -
rpool                   zfs       32889929728 /rpool              -
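
To sanity-check those numbers outside of lufslist, zfs list on the two root datasets (names taken from the listings above) shows used versus referenced space directly:

# zfs list -o name,used,refer rpool/ROOT/sol10_U10x86 rpool/ROOT/sol10_U10_101512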

I found a related thread on comp.unix.solaris: "Partition 0 not aligned on cylinder boundary".

Maybe the boundary issue inflated the size?

Awwgghh, I may have to re-install these servers... how did this happen?

Why? There are tools...
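
For example, prtvtoc(1M) and fmthard(1M) can save and rewrite the disk label, so in principle the slices could be re-cut on cylinder boundaries without a reinstall. Rough sketch only (the device is the one from your error message, the /var/tmp file names are just placeholders, and moving or shrinking the slice that holds rpool without a verified backup would be a very bad idea):

# prtvtoc /dev/rdsk/c0t5000C50039FEF187d0s2 > /var/tmp/vtoc.orig
  ... copy it, adjust the slice start/size columns so each slice starts and
      ends on a multiple of 16065 sectors, save as /var/tmp/vtoc.fixed ...
# fmthard -s /var/tmp/vtoc.fixed /dev/rdsk/c0t5000C50039FEF187d0s2

fmthard -s takes a file in the same format that prtvtoc prints.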

It seems that the disk layout is incorrect.

* /dev/rdsk/c0t5000C50039FEF187d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*   17847 cylinders
*   17845 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector 
*   286678272      1653 286679924
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00      20352 286657920 286678271
       2      5    00          0 286678272 286678271
       8      1    01          0     20352     20351

The prtvtoc shows that there are 16065 sectors per cylinder. Multiplying that by the 17845 accessible cylinders gives 16065 * 17845 = 286679925 sectors, and since sectors are numbered from 0 the last addressable sector works out to 286679924, which matches the end of the unallocated space above.

The disk is currently only partitioned out to 286678271 as the last sector, so those 1653 unallocated sectors at the end are the missing space, and that layout is what now appears to be causing my issue, or rather LU's issue. :wall:
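
That also explains the exact complaint in the log: the quoted slice 0 entry is "0 2 00 20352 286657920", and neither the start nor the size is a multiple of 16065 (20352 mod 16065 = 4287, and 286657920 mod 16065 = 10125), so slice 0 really doesn't start or end on a cylinder boundary. Quick check in ksh or bash:

# echo $((20352 % 16065)) $((286657920 % 16065))
4287 10125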

I guess I could make a flash archive of the system and use JET and the FLAR to rebuild it.
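
If it comes to that, a minimal flarcreate sketch (the archive name and the /flash staging area are just placeholders; the staging filesystem needs enough free space to hold the archive, and it is excluded here so the archive doesn't include its own output):

# flarcreate -n sol10u10_rebuild -c -x /flash /flash/sol10u10_rebuild.flar

and then hand that archive to JET for the rebuild.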

Can you fix it by trimming the partition size after a defrag?