We're trying out a SAN migration from HP EVA to EMC VMAX, and have run into a bit of an issue with PowerPath and ZFS.
The method we're currently using is to export the HP EVA LUNs from our Sun server, replicate them using a SAN-based method, and then present the new LUNs back to the Sun server and do a zpool import.
The problem is that when doing the zpool import, ZFS picks one of the 4 possible paths to the LUN instead of using the PowerPath pseudo device.
zpool status output:

  pool: tibcoapp
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        tibcoapp                  ONLINE       0     0     0
          c5t50000975F000258Dd20  ONLINE       0     0     0
/etc/powermt display dev=all

Pseudo name=emcpower29a
Symmetrix ID=0002987XXXXX
Logical device ID=1306
state=alive; policy=SymmOpt; queued-IOs=0
==============================================================================
--------------- Host ---------------    - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                 I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 pci@3,700000/SUNW,emlxs@0,1/fp@0,0   c3t50000975F0002589d20s0  FA 3gB  active  alive  0  0
3073 pci@3,700000/SUNW,emlxs@0,1/fp@0,0   c3t50000975F0002585d20s0  FA 2gB  active  alive  0  0
3077 pci@13,700000/SUNW,emlxs@0,1/fp@0,0  c5t50000975F000258Dd20s0  FA 4gB  active  alive  0  0
3077 pci@13,700000/SUNW,emlxs@0,1/fp@0,0  c5t50000975F0002581d20s0  FA 1gB  active  alive  0  0
Is there a way to force ZFS to use the pseudo device /dev/dsk/emcpower29a instead of c5t50000975F000258Dd20, which it is currently using?
I do know that adding a blank LUN from the EMC SAN to the existing zpool as a mirror would be a lot simpler, but unfortunately this is the method we have to use.
I've done this quite a few times on UFS, and I believe the same general procedure will work for ZFS as well. When you're creating the zpool, use 'zpool create your_pool_name /dev/dsk/emcpower29a' and not 'zpool create your_pool_name /dev/dsk/c5t50000975F0002581d20s0'.
Or, if you've already tried that, please post the command you used to create the zpool. That's probably the best place to start.
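To illustrate the suggestion above, something like the following (a sketch using the device names already posted in this thread; your paths may differ):

```shell
# Create the pool against the PowerPath pseudo device, not one of the
# native c#t#d# paths, so PowerPath can handle multipath failover.
zpool create tibcoapp /dev/dsk/emcpower29a

# Verify the pool lists the emcpower device rather than a native path.
zpool status tibcoapp
```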
Since then we've exported that zpool, done a SAN migration to duplicate the LUN onto the EMC array, presented the new LUN to the server, and run a
zpool import tibcoapp
which automatically mounted the zpool, but configured it using the c5t5 path rather than the EMC pseudo device. It seems to be functioning perfectly, other than the fact that we'll lose the pool if we lose that single c5t5 path.
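One workaround for this situation is to restrict zpool import's device search to a directory that contains only the pseudo device, so ZFS never sees the native paths. A rough sketch, assuming the emcpower nodes live under /dev/dsk as shown in the powermt output above:

```shell
# Export the pool so it can be re-imported cleanly.
zpool export tibcoapp

# Build a directory holding links to only the PowerPath pseudo device;
# 'zpool import -d' searches just the directory given to it.
mkdir /emcdev
ln -s /dev/dsk/emcpower29a /emcdev/emcpower29a

# Import the pool, searching only /emcdev, so the vdev is recorded
# against the pseudo device instead of a native c#t#d# path.
zpool import -d /emcdev tibcoapp
```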
Yes, I had a feeling we were over-complicating things. I will push for the zpool mirror method: attach the new LUN (referencing the EMC pseudo device), wait for the sync, then detach the old device.
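For reference, the attach/wait/detach sequence described above might look like this (device names taken from the outputs earlier in the thread; a sketch, not something tested against this array):

```shell
# Attach the EMC pseudo device as a mirror of the existing EVA vdev;
# ZFS starts resilvering the data onto it automatically.
zpool attach tibcoapp c5t50000975F000258Dd20 emcpower29a

# Watch until the resilver is reported as completed.
zpool status tibcoapp

# Once in sync, drop the old HP EVA side of the mirror.
zpool detach tibcoapp c5t50000975F000258Dd20
```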
Many thanks to everyone for their input so far. I've just got one final ZFS query, then I'll be done - promise!
I've followed peasant's advice utilising the ZFS attach/detach, and it works perfectly. I'm just not clear on how to mirror a zpool that stripes across 2 LUNs.
You should be able to create the mirror by attaching each of the two new disks to one of the two existing disks.
You can try this method on your server using zvols or files as the backing devices for a test zpool.
First create a pool with two zvols or files as backing devices, then use attach to add two more zvols/files as their mirrors.
These can be 200 MB files or zvols (no need for gigabytes) on existing free space anywhere on the system.
If everything is OK, you can run it on your real data.
Also, you can use virtualization on your desktop to check that things work before typing them on production systems.
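The test procedure above might look like this (a sketch assuming the Solaris mkfile utility is available; dd would work for creating the backing files too):

```shell
# Create four small backing files for a throwaway test pool.
mkdir -p /tmp/test
mkfile 200m /tmp/test/lun1 /tmp/test/lun2 /tmp/test/lun3 /tmp/test/lun4

# Build a pool striped across two file-backed "luns".
zpool create mypool /tmp/test/lun1 /tmp/test/lun2

# Attach a mirror side to each stripe member.
zpool attach mypool /tmp/test/lun1 /tmp/test/lun3
zpool attach mypool /tmp/test/lun2 /tmp/test/lun4

# Each top-level vdev should now show as a two-way mirror.
zpool status mypool

# Clean up when done.
zpool destroy mypool
rm -rf /tmp/test
```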
Wow, thanks for that - I had no idea you could use regular files as devices within ZFS. Using 'dd', I created 4 new file-backed luns to try out the procedure above, and it worked fine!
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 30 15:28:52 2013
config:

        NAME              STATE     READ WRITE CKSUM
        mypool            ONLINE       0     0     0
          /tmp/test/lun1  ONLINE       0     0     0
          /tmp/test/lun2  ONLINE       0     0     0  56K resilvered
I then attached my 2 remaining luns (lun1 -> lun3 and lun2 -> lun4), and ended up with the following: