Migrating ldom from one SAN to another SAN

Hi,

I have a Solaris 11 cdom (without an sdom) and there is one ldom running on it. There are 5 SAN disks presented to this cdom, which I have bound to the ldom as shown below. No zpool is created for those 5 disks on the cdom --

root@ldom001:~# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-cv-  UART    8     16G      0.2%  0.2%  176d 22h
guest-001   active     -n----  5000    16    36G      0.1%  0.1%  156d 12h
root@ldom001:~#
root@ldom001:~# ldm list-constraints guest-001 | grep vds
    guest-001-root guest-001-root@primary-vds0      0
    guest-001-u01 guest-001-u01@primary-vds0      2
    guest-001-u02 guest-001-u02@primary-vds0      3
    guest-001-u03 guest-001-u03@primary-vds0      4
    guest-001-u04 guest-001-u04@primary-vds0      5
    guest-001-u05 guest-001-u05@primary-vds0      6
root@ldom001:~#
root@ldom001:~# zpool list
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  1.09T   166G  946G  14%  1.00x  ONLINE  -
root@ldom001:~#

Now there is a requirement from the client that this ldom should have been created on encrypted storage, but the current SAN is an old one and is not encrypted. The only way forward is to migrate to a new SAN. I can present another 5 SAN devices of similar size from the new encrypted SAN, but I am trying to figure out the best way to migrate.
Here is the config from the guest ldom --

root@guest-001:~# echo |format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1d0 <COMPELNT-Compellent Vol-0704-100.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d2 <COMPELNT-Compellent Vol-0704-20.00GB>
          /virtual-devices@100/channel-devices@200/disk@2
       2. c1d3 <COMPELNT-Compellent Vol-0704-500.00GB>
          /virtual-devices@100/channel-devices@200/disk@3
       3. c1d4 <COMPELNT-Compellent Vol-0704-100.00GB>
          /virtual-devices@100/channel-devices@200/disk@4
       4. c1d5 <COMPELNT-Compellent Vol-0704-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@5
       5. c1d6 <COMPELNT-Compellent Vol-0704-500.00GB>
          /virtual-devices@100/channel-devices@200/disk@6
Specify disk (enter its number): Specify disk (enter its number):
root@guest-001:~# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool     99.5G  79.8G  19.7G  80%  1.00x  ONLINE  -
u01_pool  19.9G  4.62G  15.3G  23%  1.00x  ONLINE  -
u02_pool   496G   307G   189G  61%  1.00x  ONLINE  -
u03_pool  99.5G  52.8G  46.7G  53%  1.00x  ONLINE  -
u04_pool  49.8G  5.65M  49.7G   0%  1.00x  ONLINE  -
u05_pool   496G   367G   129G  73%  1.00x  ONLINE  -
root@guest-001:~#

One way I can think of is (rough command sketch after this list):

  • Present 5 SAN devices of similar size to the cdom and configure them as vdisks for the guest ldom.
  • Once I can scan and see them in the ldom, attach each new disk as a zpool mirror to the corresponding existing pool.
  • After resilvering is complete, detach the old disks from the mirrored pools.
  • Shut down the ldom and remove all 5 old devices with "ldm rm-vdisk".
  • Save the SP config and boot the ldom.

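To make the plan concrete, per disk I am thinking of something like the following (the LUN path c0txxxxxxd0s2, the vdisk name guest-001-u01-new and the guest device c1d7 are only placeholders, not the real devices):

On the cdom, export one new encrypted LUN and add it to the guest:

root@ldom001:~# ldm add-vdsdev /dev/dsk/c0txxxxxxd0s2 guest-001-u01-new@primary-vds0
root@ldom001:~# ldm add-vdisk guest-001-u01-new guest-001-u01-new@primary-vds0 guest-001

Inside the guest, attach the new vdisk as a mirror of the existing pool disk, and repeat for each pool:

root@guest-001:~# zpool attach u01_pool c1d2 c1d7
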
Please advise whether this should work as a high-level plan, or whether I am missing something.

Thanks

This should work as you described.

You actually do not need to shut down the LDOM; this can be done live.

When the mirroring is complete (be careful: you want to mirror with "zpool attach", not "zpool add"!), you just detach the old LUN from each zpool (breaking the mirror), and "ldm rm-vdisk" can remove the detached disk during ldom runtime, since it is no longer in use.

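For example, with the names from your output: first confirm in "zpool status" that the resilver has finished and the pool is healthy, then break the mirror and remove the old vdisk while the domain stays up ("ldm rm-vdsdev" afterwards just cleans up the now-unused backend on the vds):

root@guest-001:~# zpool status u01_pool
root@guest-001:~# zpool detach u01_pool c1d2

root@ldom001:~# ldm rm-vdisk guest-001-u01 guest-001
root@ldom001:~# ldm rm-vdsdev guest-001-u01@primary-vds0
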
If you attach a bigger LUN, then after detaching the smaller one you can also utilize the new size (once the pool only contains the bigger disk) with a simple command.

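One way to do that (likely the "simple command" meant here) is the ZFS autoexpand mechanism; assuming for example that u04_pool ended up on a larger LUN whose guest device is c1d9 (placeholder):

root@guest-001:~# zpool set autoexpand=on u04_pool
root@guest-001:~# zpool online -e u04_pool c1d9

After that, "zpool list" should show the larger size.
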
Of course saving the SP config is a must, as it will contain the changes you made.

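For example (the config name is arbitrary):

root@ldom001:~# ldm add-spconfig after-san-migration
root@ldom001:~# ldm list-spconfig

Keep in mind the SP only holds a limited number of saved configs, so you may need to remove an old one with "ldm rm-spconfig" first.
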
Tip: be sure to save configs regularly.
Personally, I had a small compressed ZFS filesystem on the hypervisor which held those configs daily, and used zfs send to an external machine via cron.
This took about 100 MB or so and held a year of daily configurations.
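
A rough sketch of that setup (the dataset, script path, backup hostname and domain name are just examples):

root@ldom001:~# zfs create -o compression=on rpool/ldomcfg
root@ldom001:~# cat /usr/local/bin/save_ldm_cfg.sh
#!/bin/sh
# dump the domain constraints daily and ship the dataset snapshot off-box
TODAY=`date +%Y%m%d`
/usr/sbin/ldm list-constraints -x guest-001 > /rpool/ldomcfg/guest-001-$TODAY.xml
/usr/sbin/zfs snapshot rpool/ldomcfg@$TODAY
/usr/sbin/zfs send rpool/ldomcfg@$TODAY | ssh backuphost "cat > /backup/ldomcfg-$TODAY.zfs"
root@ldom001:~# crontab -l | grep save_ldm
0 1 * * * /usr/local/bin/save_ldm_cfg.sh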

P.S. For the rpool in the LDOM, an extra step is required after the migration, e.g. setting the boot parameter to the new disk that now contains rpool; otherwise the LDOM will fail to boot because the disk in its configuration is missing.
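
E.g. if the new root vdisk was added under the name guest-001-root-new (a placeholder), the vdisk name normally shows up as a device alias in the guest OBP, so you can point the boot device at it either from inside the guest:

root@guest-001:~# eeprom boot-device=guest-001-root-new

or, while the domain is stopped, from the control domain:

root@ldom001:~# ldm set-var boot-device=guest-001-root-new guest-001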

Regards.
