Different multipath software for RAC

Hi,
I have a RAC installation on 2 Solaris 11 nodes (with ASM).
The disks' multipathing is managed by EMC PowerPath on both nodes.
I want to migrate the multipath software to Solaris multipathing (MPxIO) without downtime.
My plan is to first migrate node1 to Solaris multipathing, then after some days migrate node2.
My question is: is it possible to run RAC's ASM disks with different multipath software on each node?
For some days node1 would run with Solaris multipathing and node2 with EMC PowerPath...
Thanks..

Hi vlkkck,

I'd be tempted to take the outage on both nodes of the RAC if I were you; there is always a risk of something unexpected when you "mix and match", so caution is the way forward here.

It has been many years since I used EMC PowerPath, and it was MPxIO that replaced it in my case.

Changing to MPxIO is very straightforward, with only minor differences in the stmsboot command.
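As a minimal sketch of what that looks like (the -D fp option scopes MPxIO to the fibre-channel controller ports; exact flags can vary between Solaris releases, so check the stmsboot(1M) man page on your build):

```shell
# Enable MPxIO on the fibre-channel controller ports.
# stmsboot updates the config and prompts for the reboot it needs.
stmsboot -D fp -e

# After the reboot, list the mapping from the old per-path device
# names to the new multipathed scsi_vhci names.
stmsboot -L
```

To back out, `stmsboot -D fp -d` disables MPxIO on the same ports (again with a reboot).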

If the migration is successful the disk naming will change; when viewed from format you will see the devices listed as below for the internal disks:

root@fvssphsun01:/export/home/erdr/logs# echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA0561F502Cd0 <HITACHI-H109060SESUN600G-A690-558.91GB>
          /scsi_vhci/disk@g5000cca0561f502c
          /dev/chassis/SYS/SASBP/HDD0/disk
       1. c0t5000CCA056200D00d0 <HITACHI-H109060SESUN600G-A690-558.91GB>
          /scsi_vhci/disk@g5000cca056200d00
          /dev/chassis/SYS/SASBP/HDD1/disk
       2. c0t5000CCA05620DAE4d0 <HITACHI-H109060SESUN600G-A606 cyl 64986 alt 2 hd 27 sec 668>  solaris
          /scsi_vhci/disk@g5000cca05620dae4
          /dev/chassis/SYS/SASBP/HDD4/disk
       3. c0t5000CCA0561F5014d0 <HITACHI-H109060SESUN600G-A690-558.91GB>
          /scsi_vhci/disk@g5000cca0561f5014
          /dev/chassis/SYS/SASBP/HDD5/disk

And for SAN disks you should see:

       4. c0t600507680C80827800000000000004A3d0 <IBM-2145-0000 cyl 51198 alt 2 hd 256 sec 128>
          /scsi_vhci/ssd@g600507680c80827800000000000004a3
       5. c0t600507680C808278000000000000049Ad0 <IBM-2145-0000 cyl 6398 alt 2 hd 256 sec 64>
          /scsi_vhci/ssd@g600507680c808278000000000000049a
       6. c0t600507680C808278000000000000049Bd0 <IBM-2145-0000 cyl 1278 alt 2 hd 256 sec 64>
          /scsi_vhci/ssd@g600507680c808278000000000000049b
       7. c0t600507680C808278000000000000053Ed0 <IBM-2145-0000 cyl 3838 alt 2 hd 256 sec 64>
          /scsi_vhci/ssd@g600507680c808278000000000000053e

As I said earlier, I would take the outage to do this work and do both nodes together. I know that there is likely to be push-back on that scenario, but balancing the potential risk seems like a sensible thing to do here; running two disk control subsystems increases the risk. In the event that there is some kind of clash between the pathing software, do you have a plan to resolve it, i.e. a set of snapshots to roll back or something similar?

Regards

Gull04

------ Post updated at 01:31 PM ------

Hi vlkkck,

OK, having had a bit of time to have a look and nab a DBA, there are a number of points to be aware of:

  • Oracle RAC is active/active, so both sides should use the same multipathing.
  • The system could contain a number of combinations of disk types, in addition to ASM.
  • The ASM disks may be tagged with the "emcpower" IDs.
  • Management disks may exist; traditionally these would be migrated to new disks.

All in all this could turn out to be a fairly major piece of work, so there would have to be a bit of research. If you are planning to remove PowerPath, there is a need to check how the ASM disks are identified.

Regards

Gull04

Well, I would start by installing (if my infrastructure allowed it) a separate Solaris 11 instance with a couple of LUNs, not related to the original RAC.
Present a couple of LUNs, create ASM or a simple filesystem over those disks, and then switch from EMC to native multipathing to see the effects.

If you can, clone/snapshot the entire monster using storage methods in a network-isolated environment to give it a shot.

In theory, ASM doesn't care which multipath software is used.
It's just a symlink to a disk slice if configured per the docs.

So, if you fail over and use only a single node, reconfigure the multipathing on the other box and create exactly the same symlinks to the devices that were the EMC devices before, and stuff should just magically work.

The disk label should stay the same, and the sliceN used in the ASM configuration should be the same, no matter which multipathing software is used.
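As a purely hypothetical sketch of that idea (the /asmdisks directory, symlink name, and device names below are made up for illustration, not taken from this system):

```shell
# Old symlink pointing at a PowerPath pseudo-device slice (hypothetical names).
ls -l /asmdisks/DATA_0001    # -> /dev/rdsk/emcpower4a

# Repoint it at the MPxIO device for the same LUN, keeping the same slice,
# so ASM opens the same on-disk data through the new pathing software.
ln -sf /dev/rdsk/c0t600507680C80827800000000000004A3d0s0 /asmdisks/DATA_0001
```

The key is that the symlink name and the slice stay identical; only the target device path changes.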

But to put a small disclaimer: of all the storage stuff, I have not worked with EMC or their multipath software.
From what I've been reading, PowerPath will create /dev/emcpower* or similar device files, on which it will intercept the IO and balance it across the appropriate disk devices below.

Using the commands provided by that software, list the mapping to the devices below.
Check what ASM has defined inside the configuration for disk matching (ASM_DISKSTRING).
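For example, a hedged sketch of both checks (powermt is PowerPath's own CLI; the ASM query assumes you can connect as SYSASM with the grid environment set):

```shell
# Map each emcpower pseudo-device to the native underlying paths it balances.
powermt display dev=all

# Ask the ASM instance which path pattern it scans, and what it found.
sqlplus -S / as sysasm <<'EOF'
show parameter asm_diskstring
select path, name from v$asm_disk;
EOF
</EOF>
```

If ASM_DISKSTRING matches /dev/rdsk/emcpower* explicitly, it will need updating as part of the migration.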

It looks fairly transparent to change, but I would try it first in a non-production environment...

Regards
Peasant.

Hi Peasant,

The PowerPath devices used by ASM are, as you point out,

/dev/rdsk/emcpower*a,b,c........

and may be identified by ASM as such. Removing the PowerPath devices and replacing them with MPxIO will create the disks as

/scsi_vhci/ssd@g600507680c80827800000000000004a3d0s*

which ASM may not recognise on startup.

Once ASM sees the disks it will sort itself out and will correctly identify all of them. I know this because two years ago I did extensive testing on AIX and Solaris where the disk IDs were deliberately mixed up. In all cases ASM sorted everything out, even when the physical disk IDs were swapped around on multiple disks, although the underlying multipathing always remained the same.
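That matches how ASM identifies disks: by the metadata written in the disk header, not by the OS device path. One hedged way to see this, if you have access to the grid home, is Oracle's kfed utility (the device name below is copied from the earlier format listing; the slice is an assumption):

```shell
# Dump the ASM disk header; the disk name, group name and header status
# live on the disk itself, regardless of whether the device is reached
# via an emcpower pseudo-device or a scsi_vhci path.
$ORACLE_HOME/bin/kfed read /dev/rdsk/c0t600507680C80827800000000000004A3d0s0 \
    | egrep 'dskname|grpname|hdrsts'
```

Running this against the devices before and after the multipathing change should show identical header contents.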

To accomplish this there was a manual intervention whilst bringing up ASM - the DBAs did that bit. I would completely agree with you that, if the environment allows, this should be tested; I would assume that there would be a Test/Dev setup of some sort to do this on.

Regards

Gull04

Yes, it's the same as volume groups.

Each has a unique identifier in header.
For instance on Linux or HP-UX, if you run vgscan, whatever the names are, the import will work as long as all the disks with the same VGID are present.

ASM works the same, once it can see the disks via ASM_DISKSTRING.
Using MPxIO, the path used in the software won't actually be /scsi_vhci/... but rather /dev/[r]dsk/c0t...
... the t component being a WWN-derived identifier for the LUN.

Check the correlation with ls -lrt /dev/rdsk/<some fc disk>
and by doing luxadm display /dev/rdsk/...s2.
The numbers will match.
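Something like the following (device name copied from the earlier format listing for illustration):

```shell
# The /dev/rdsk entry is a symlink into the /devices tree, which exposes
# the scsi_vhci node and the WWN embedded in the device name.
ls -l /dev/rdsk/c0t600507680C80827800000000000004A3d0s2

# luxadm then reports the vendor, WWNs and the state of each physical path
# behind the multipathed device.
luxadm display /dev/rdsk/c0t600507680C80827800000000000004A3d0s2
```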

This is what I do not like about Linux multipath and those default /dev/sd[abcdefg...] names.
But you can always edit the multipath configuration file to create mapper device files in a similar fashion :)
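On Linux that is a multipaths/alias stanza in /etc/multipath.conf; a minimal fragment (the WWID and alias below are made-up example values):

```
multipaths {
    multipath {
        # WWID as reported by "multipath -ll" (example value)
        wwid  3600507680c80827800000000000004a3
        # Friendly name created under /dev/mapper/
        alias oraasm_data01
    }
}
```

After reloading multipathd, the LUN appears as /dev/mapper/oraasm_data01 instead of an anonymous dm/sd name.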

Regards
Peasant

Thank you for all the answers..
I used Solaris multipathing for 5 years, and it's very easy to manage and activate with stmsboot. But I moved to another job, and my new system uses EMC PowerPath, which has some pros and cons.
As an old-fashioned admin I don't want to see hundreds of disks in the format output, but with EMC PowerPath I see (number of LUNs * number of paths) disks (in my case over 1500 disks :) ), which increases the complexity of the system.

So next week I will migrate one node of my test DB to Solaris multipathing and see what happens. I suspect the mixed multipath stacks will not work properly together. I will share my experiences with the migration.

Best,