Can I create a virtual disk from a zpool on Solaris 11.4 for OVM?

Hello,

I am here again, with another issue.

I am setting up a new Oracle VM environment (GUI). On the backend it is the LDoms concept, but the GUI seems to offer an easier interface.
For now we don't have any SFP modules, and these two SPARC S7 servers are not connected to storage, so I have to live with local disks only. Here is where the limitation comes in.
I can create one repository from one local disk and create VMs on it. That means the VMs will be sitting on a repository made of a single local disk. If I ever have to replace that local disk (due to hard/transport/bad errors), I will not be able to migrate these VMs to another repository/pool, because they live on that local disk. That would be a bad design.
I thought that even if I just created a zpool with two disks, I should be able to see that pool in OVM. But the Oracle VM Manager GUI does not recognize zpools, only disks. I created a case with Oracle asking for the best practice here, and they responded: "OVM is not designed to manage redundancy with local disks. That is why Oracle VM is capable to handle SAN Storage over the network or even fiber channel. Now from Solaris perspective, I think that you can manually assign zpool as virtual disk to the LDOMs guest, but you need to do research on this if this is possible"

I tried googling and didn't find any relevant note on whether I can create virtual disks from a zpool.

Another idea that came to mind was hardware RAID, so that the OS would see a single disk, but then I found this link - Hardware RAID Support -
SPARC and Netra SPARC S7-2 Series Servers Administration Guide
and it says these servers do not support it.

Any suggestions or ideas, please?

Thanks

Let's start with: how many disks do you have in that box, and what size?
Did the operating system get installed over entire devices (s2)?
Can you show zpool status rpool from that box?

You have 3 choices for disks to use in LDoms:

1.) Reinstall the hypervisor on a slice rather than the entire disk, partitioning it in the text installer and using a VTOC label.
Or: detach one of the mirror disks, repartition it with a slice 0, and attach it again. This requires that rpool has enough free space for such actions.
Then do the same with the other disk's s0 once the rebuild/resilver is complete - and be sure to check eeprom and the primary domain's boot device specification.
Reboot the server to confirm everything is OK.

After that, the 2 x slice 0 (s0) will make up the rpool zpool, and all other slices (except s2!) can be carved out of the free space via format and used in LDoms. A sketch of the detach/repartition/attach variant follows.
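Hedged sketch of that variant, with hypothetical device names (substitute your own cNtNdN, and size s0 so the existing rpool data fits):

# Detach the second mirror side, relabel it VTOC/SMI and create s0 in format,
# then re-attach just the slice and let it resilver
zpool detach rpool c0t5000XXXX00000002d0
format -e c0t5000XXXX00000002d0        # label -> SMI, partition -> s0
zpool attach rpool c0t5000XXXX00000001d0 c0t5000XXXX00000002d0s0
zpool status rpool                     # wait for the resilver to finish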

For example, say you have 2 x 300 GB disks, with a slice 0 (s0) of 100 GB on each. You install the operating system onto it, or do the repartition/attach/detach dance above.
You are left with s1 to s6 (VTOC label, s2 excluded as it represents the entire disk) to use with ldm add-vdsdev options=slice /dev/dsk/cNtNdNs1... on both disks,
followed by ldm add-vdisk id=<num> ... <ldom>.
Add, for instance, the s1 slices from both disks to the guest and create a mirrored rpool inside it, as sketched below.
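A minimal sketch of that, assuming a guest named ldg1 and a virtual disk service primary-vds0 already exist (device names are hypothetical):

# In the primary domain: export the s1 slice of each disk as a virtual disk
ldm add-vdsdev options=slice /dev/dsk/c0t5000XXXX00000001d0s1 vol1@primary-vds0
ldm add-vdsdev options=slice /dev/dsk/c0t5000XXXX00000002d0s1 vol2@primary-vds0
ldm add-vdisk id=1 vdisk1 vol1@primary-vds0 ldg1
ldm add-vdisk id=2 vdisk2 vol2@primary-vds0 ldg1

# Inside the guest the vdisks typically appear as c1d1/c1d2; mirror them
# at install time, or attach the second one to the guest's rpool later:
zpool attach rpool c1d1s0 c1d2s0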

When you get a SAN LUN, you will add that LUN to the LDom and make a three-way mirror of the existing pool.
After the resilver is complete, you detach and remove the 2 local disks (sketched below).
This would be mid-level in terms of ease of setup.
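Hedged sketch of that migration, continuing the hypothetical names above (the SAN device path is invented too):

# In the primary domain: export the SAN LUN to the guest
ldm add-vdsdev /dev/dsk/c0t600XXXX000000003d0s2 sanvol1@primary-vds0
ldm add-vdisk id=3 vdisk3 sanvol1@primary-vds0 ldg1

# Inside the guest: grow the mirror to three sides, wait for the resilver,
# then drop the two local-disk sides
zpool attach rpool c1d1s0 c1d3
zpool status rpool                 # wait until the resilver completes
zpool detach rpool c1d1s0
zpool detach rpool c1d2s0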

2.) Create a ZVOL in an existing zpool and add /dev/zvol/dsk/... to the LDom.
Same with the SAN LUN afterwards: you add it, mirror, and detach the old ZVOL from inside the LDom.
This has the worst performance, but is the easiest to set up on an existing configuration; see the sketch below.
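A minimal sketch, assuming rpool has the space, and with the volume and guest names (ldg1_disk0, ldg1) invented for illustration:

# Create a 200 GB zvol in the existing rpool and export it to the guest
zfs create -V 200g rpool/ldg1_disk0
ldm add-vdsdev /dev/zvol/dsk/rpool/ldg1_disk0 ldg1vol0@primary-vds0
ldm add-vdisk id=0 vdisk0 ldg1vol0@primary-vds0 ldg1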

3.) A combination of 1.) and SVM.
You mirror the two slice 1s (s1), holding all the remaining space, at the SVM level and create soft partitions, which you present to the LDom as entire devices (/dev/md/dsk/...).
Migration is done after the SAN LUN is available to the hypervisor, either via the meta* commands at the hypervisor level or via zpool attach/detach commands inside the LDom.
This is the most complicated way, and I would advise against it if you do not have SVM experience; a sketch follows anyway.
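Purely for illustration, a sketch of the SVM side with hypothetical device names (this assumes the SVM packages are available on your release and that small slices are set aside for the state database replicas):

# State database replicas (once), then mirror the two s1 slices
metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
metainit d11 1 1 c0t0d0s1
metainit d12 1 1 c0t1d0s1
metainit d10 -m d11
metattach d10 d12

# Carve a 100 GB soft partition and present it to the guest as a full disk
metainit d100 -p d10 100gb
ldm add-vdsdev /dev/md/dsk/d100 mdvol1@primary-vds0
ldm add-vdisk vdisk_md1 mdvol1@primary-vds0 ldg1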

No downtime is required for any operations above, unless, of course, you reinstall the hypervisor entirely.

Hope that helps
Regards
Peasant.

Thanks for the wonderful explanation. Here is my current setup.
There are 6 x 1.2 TB disks (1.09 TB usable, per the format output below). I installed Solaris 11.4 onto the entire first disk and then mirrored rpool with the second disk. That leaves me 4 spare disks.

root@ovmi-host1:/# echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t5000C500C1FD0833d0 <SEAGATE-ST1200IN9SUN1.2T-ORA6-1.09TB>
          /scsi_vhci/disk@g5000c500c1fd0833
          /dev/chassis/SYS/HDD0/disk
       1. c0t5000C500C1FD0337d0 <SEAGATE-ST1200IN9SUN1.2T-ORA6-1.09TB>
          /scsi_vhci/disk@g5000c500c1fd0337
          /dev/chassis/SYS/HDD1/disk
       2. c0t5000C500C1FD03ABd0 <SEAGATE-ST1200IN9SUN1.2T-ORA6-1.09TB>
          /scsi_vhci/disk@g5000c500c1fd03ab
          /dev/chassis/SYS/HDD2/disk
       3. c0t5000C500C1FD0C9Fd0 <SEAGATE-ST1200IN9SUN1.2T-ORA6-1.09TB>
          /scsi_vhci/disk@g5000c500c1fd0c9f
          /dev/chassis/SYS/HDD3/disk
       4. c0t5000C500C1FD0CA7d0 <SEAGATE-ST1200IN9SUN1.2T-ORA6-1.09TB>
          /scsi_vhci/disk@g5000c500c1fd0ca7
          /dev/chassis/SYS/HDD4/disk
       5. c0t5000C500C1FD03CFd0 <SEAGATE-ST1200IN9SUN1.2T-ORA6-1.09TB>
          /scsi_vhci/disk@g5000c500c1fd03cf
          /dev/chassis/SYS/HDD5/disk
       6. c1t0d0 <MICRON-eUSB DISK-1112-1.89GB>
          /pci@300/pci@1/pci@0/pci@2/usb@0/storage@1/disk@0,0
          /dev/chassis/SYS/MB/EUSB_DISK/disk
Specify disk (enter its number): Specify disk (enter its number):
root@ovmi-host1:/# zpool list
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  1.09T   175G  937G  15%  1.00x  ONLINE  -
root@ovmi-host1:/# zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: resilvered 171G in 15m50s with 0 errors on Mon Mar  2 16:24:24 2020

config:

        NAME                       STATE      READ WRITE CKSUM
        rpool                      ONLINE        0     0     0
          mirror-0                 ONLINE        0     0     0
            c0t5000C500C1FD0833d0  ONLINE        0     0     0
            c0t5000C500C1FD0337d0  ONLINE        0     0     0

errors: No known data errors
root@ovmi-host1:/#

When I log in to OVM Manager, select "create repository", select this server, and select "physical disks", it gives me the 4 disks below to select from. Here I can choose any disk and create a repository:

Name			Size (GiB)
35000c500c1fd03cf	1117.81
35000c500c1fd0ca7	1117.81
35000c500c1fd0c9f	1117.81
35000c500c1fd03ab	1117.81

My advice/suggestion is to use Oracle VM Server for SPARC directly, via the ldm and format commands from the shell on the hypervisor; a sketch for your 4 spare disks follows.
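For instance, you could build a mirrored data pool from the 4 spare disks and carve a ZVOL out of it per guest disk - a hedged sketch, with the pool/volume/guest names (vmpool, ldg1) invented for illustration:

# Mirrored pool from the 4 spare disks (two mirror pairs)
zpool create vmpool \
    mirror c0t5000C500C1FD03ABd0 c0t5000C500C1FD0C9Fd0 \
    mirror c0t5000C500C1FD0CA7d0 c0t5000C500C1FD03CFd0

# One zvol per guest disk, exported via the virtual disk service
zfs create -V 500g vmpool/ldg1_disk0
ldm add-vdsdev /dev/zvol/dsk/vmpool/ldg1_disk0 ldg1vol0@primary-vds0
ldm add-vdisk vdisk0 ldg1vol0@primary-vds0 ldg1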

I have not used OVM Manager or storage repositories, so I cannot be of much help beyond that.

Regards
Peasant.
