Help with Oracle VM on SPARC and LUN Multipathing

I plan to use a small SPARC Netra T4-1 as a VM server with a primary domain and 2 guest domains. The primary has MPIO configured (4 paths to SAN storage). When a LUN is exported and added to a guest domain, does the single disk server defined on the primary handle the multipathing for the guest or are 2 service domains required? Only have 4 cores so don't want to add a secondary service domain. Using OVM 3.2 and Solaris 10 1/13.

Multipathing is handled by the OS itself.
No need for additional root domains; the primary is enough on that type of server.

Additional root domains serve other purposes (mostly security) and are used on bigger machines.

You will need to configure the system to use native multipathing (MPxIO) on all FC controller ports:

stmsboot -D fp -e

After that, reboot the physical machine.
Confirm multipathing with mpathadm:

mpathadm list lu

It will show all the configured paths (2 or more, depending on the FC configuration).
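If you want more detail on a single LUN, you can also query it directly (the device name below is only a placeholder; use one from the list lu output):

mpathadm show lu /dev/rdsk/c0t<lun_wwn>d0s2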

Then you can run format on those LUNs and add them as vdsdevs and vdisks.
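For example, a rough sketch of the disk part (the device path, volume, service, and guest names are placeholders; adjust to your own):

# export the MPxIO device through the primary's virtual disk service
ldm add-vdsdev /dev/dsk/c0t<lun_wwn>d0s2 guest1-disk0@primary-vds0
# attach it to the guest domain as a virtual disk
ldm add-vdisk vdisk0 guest1-disk0@primary-vds0 guest1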

I would recommend running the latest Solaris OS (11.2.x) with newer firmware on the T4-1, since you can still create Solaris 10 guests (LDOMs) on a Solaris 11.2 hypervisor.

Hope that helps
Regards
Peasant.


Thanks Peasant. I'm quite familiar with FC HBAs and multipathing but have not used OVM before. I found the admin doc confusing since it only mentioned that a second service domain would be required.

Consider the part of this post around the middle, about domains - a copy/paste from my previous post:

As for bare-metal domains (primary, secondary), let me offer a short explanation of domains as I understand them...
For instance, say you have a SPARC T4-2 with two sockets, two 4-port network cards, and two 2-port FC cards.

You can create two hardware domains - primary and secondary - in which the actual I/O hardware is split between those two domains (each gets one PCI network card, one FC card, one CPU socket, and its memory).

Now you have one SPARC T4-2 that is effectively two machines, separated at the hardware level. All LDOMs created on the primary domain will use its resources (half of the CPU and PCI), and LDOMs on the secondary will use the other half.

Basically, if one socket fails due to a hardware fault, only the primary domain and the guest LDOMs on it will fail, while the secondary and its guest LDOMs will continue to run.
Those setups complicate things considerably and are done on machines that have more resources to spare for redundancy (like 4 cards or 4 sockets, 2 physical cards per domain, etc.).
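Just to illustrate, splitting the I/O like that would look roughly like this (a sketch only, not something you need on the T4-1; the bus alias, CPU, and memory sizes are examples, check the real bus names with ldm list-io):

ldm add-domain secondary
ldm add-vcpu 8 secondary
ldm add-memory 16G secondary
# move the second PCI root complex from the primary to the secondary
# (removing a bus from the primary needs a delayed reconfiguration and a reboot)
ldm remove-io pci_1 primary
ldm add-io pci_1 secondary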

For your setup, I guess you need (keep it simple - as per the scheme at the beginning):

One primary domain (bare metal)
One vsw created on top of the aggr0 interface in the primary domain.
One vnet interface added to each LDOM from the primary-vsw in the primary domain.
One VDS (virtual disk service) in the primary domain per guest LDOM (sneezy-vds@primary, otherguestldom-vds@primary, etc.) in which you add the disks for that LDOM (a rough sketch follows below).
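A minimal sketch of that layout with ldm, assuming aggr0 already exists in the primary and a guest called sneezy (all names and sizes are examples):

# services in the primary domain
ldm add-vsw net-dev=aggr0 primary-vsw primary
ldm add-vds sneezy-vds primary

# guest domain with one vnet from primary-vsw
ldm add-domain sneezy
ldm add-vcpu 8 sneezy
ldm add-memory 8G sneezy
ldm add-vnet vnet0 primary-vsw sneezy

# add the SAN LUNs to sneezy-vds and attach the vdisks as shown earlier, then
ldm bind sneezy
ldm start sneezy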

This is the entire topic:

Bear in mind that most of the network stuff discussed here has since been patched, and there are some additional options (like DLMP).

Hope that clears things up
Regards
Peasant.

Simple is better, in my case anyway. I have experience with the virtual server environment on AIX (PowerVM), but Solaris and OVM are quite different. Thanks again.

Is there some limitation that requires you to use Solaris 10? I would recommend Solaris 11.2, as the OVM/multipathing support is MUCH more mature.

And what are you using for your SAN storage device? If it's NetApp, I highly recommend acquiring the NetApp SAN Toolkit, specifically for the sanlun utility. It does wonders when trying to map LUNs to storage devices in a virtual environment.


We are a small shop... not a great deal of Solaris expertise. We have 2 Netra T4-1 servers running production apps, and the OVM Netra is intended to be DR for those apps. Just keeping things simple and keeping the setup similar across all 3 servers. We have a Cisco SAN with IBM DS3524 storage. Thanks for your input (I was on vacation, which is the reason for my delay in responding).