EMC PowerPath device & ZFS query

Hi,

We're trying out a SAN migration from HP EVA to EMC VMAX, and have run into a bit of an issue with PowerPath and ZFS.

The method we're currently using is to export the HP EVA LUNs from our Sun server, replicate them using a SAN-based method, then present the new LUNs to the Sun server and do a zpool import.
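In command terms, the sequence is roughly this (pool name taken from the output below):

zpool export tibcoapp     # release the pool on the old EVA LUN
# ...replicate the LUN from EVA to VMAX using the SAN-based tool...
zpool import tibcoapp     # bring the pool back up from the newly presented EMC LUN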

The problem we have is that on import, ZFS chooses one of the four possible paths to the LUN, instead of using the PowerPath pseudo device.

zpool status output:

 pool: tibcoapp
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        tibcoapp                  ONLINE       0     0     0
          c5t50000975F000258Dd20  ONLINE       0     0     0



/etc/powermt display dev=all

Pseudo name=emcpower29a
Symmetrix ID=0002987XXXXX
Logical device ID=1306
state=alive; policy=SymmOpt; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
3073 pci@3,700000/SUNW,emlxs@0,1/fp@0,0 c3t50000975F0002589d20s0 FA  3gB  active   alive      0      0
3073 pci@3,700000/SUNW,emlxs@0,1/fp@0,0 c3t50000975F0002585d20s0 FA  2gB  active   alive      0      0
3077 pci@13,700000/SUNW,emlxs@0,1/fp@0,0 c5t50000975F000258Dd20s0 FA  4gB  active   alive      0      0
3077 pci@13,700000/SUNW,emlxs@0,1/fp@0,0 c5t50000975F0002581d20s0 FA  1gB  active   alive      0      0

Is there a way to force ZFS to use the pseudo device /dev/dsk/emcpower29a instead of c5t50000975F000258Dd20, which it is currently using?

I do know that adding a blank LUN from the EMC SAN to the existing zpool as a mirror would be a lot simpler, but unfortunately this method is the one we have to use.

Thanks in advance.

I've done this quite a few times on UFS, and I believe the same general procedure will work for ZFS as well. When you're creating the zpool, use the pseudo device as the parameter, i.e. 'zpool create your_pool_name /dev/dsk/emcpower29a' and not 'zpool create your_pool_name /dev/dsk/c5t50000975F0002581d20s0'.

Or, if you've already tried that, please post the output you used to create the zpool. That's probably the best place to start.
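If the pool already exists and has come up on a native path, one workaround that's often suggested (a sketch only; it assumes the pseudo device is visible as /dev/dsk/emcpower29a, and /tmp/emcdev is just a scratch directory) is to re-import while restricting the device search to a directory that holds nothing but the pseudo device:

mkdir /tmp/emcdev
ln -s /dev/dsk/emcpower29a /tmp/emcdev/emcpower29a   # expose only the pseudo device
zpool export tibcoapp
zpool import -d /tmp/emcdev tibcoapp                 # -d limits the device search to that directory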

The zpool was created using the ID from a different device (as it used to be hosted on the HP EVA SAN), so it must have been something along the lines of:

zpool create tibcoapp /dev/dsk/c6t6001438002A57F6C0000800006D80000d0

Now we've exported that zpool, done a SAN migration to duplicate the LUN onto the EMC array, presented the new LUN to the server and run a

zpool import tibcoapp

which automatically mounted the zpool, configuring it with the c5t5 path rather than the EMC pseudo device. It seems to be functioning perfectly, other than the fact that we'll lose the device if we lose that single c5t5 path.

Just use zpool mirror/attach/detach.

Attach a LUN of the same size or bigger from the EMC storage to the tibcoapp pool.
After the resilver is complete, detach the EVA disk.

No need for SAN storage replication techniques.
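Something along these lines, reusing the device names from the earlier posts (illustrative only):

zpool attach tibcoapp c6t6001438002A57F6C0000800006D80000d0 /dev/dsk/emcpower29a   # mirror onto the EMC pseudo device
zpool status tibcoapp                                                              # wait for the resilver to finish
zpool detach tibcoapp c6t6001438002A57F6C0000800006D80000d0                        # drop the old EVA LUN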

Hope that helps
Regards
Peasant.

Thanks for the advice, Peasant.

Yes, I had a feeling we were over-complicating things. I will push for the zpool mirror method: attach the new LUN (referencing the EMC pseudo device), wait for the resilver to complete, and detach the old device.

Cheers.

Many thanks to everyone for their input so far. I've just got one final ZFS query, then I'll be done - promise!

I've followed Peasant's advice using the ZFS attach/detach, and it works perfectly. I'm just not clear on how to mirror a zpool made of two striped LUNs:

  pool: dev_app
 state: ONLINE
 scrub: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        dev_app                                  ONLINE       0     0     0
          c6t6001438002A57F6C000080000C0A0000d0  ONLINE       0     0     0
          c6t6001438002A57F6C000080000C150000d0  ONLINE       0     0     0

On a normal single-LUN zpool, to attach a mirror device, I would issue a:

zpool attach <pool name> <existing device> <new device>

But how would I do that on the zpool above, with two striped LUNs? Googling suggests tackling each LUN in turn:

zpool attach dev_app c6t6001438002A57F6C000080000C0A0000d0 <new lun1>
zpool attach dev_app c6t6001438002A57F6C000080000C150000d0 <new lun2>

But I have no idea what kind of zpool that would create. Unfortunately I don't have a test system I can try this out on. Can anyone please advise?

Many thanks.

You should be able to create mirrors from the existing two disks onto two new disks using attach, one new disk attached to each existing disk.

You can try this method on your server using ZVOLs or files as the backing store for a test zpool.

First create a pool with two zvols or files as the backing devices, then use attach to add two more zvols/files, one mirrored onto each.
These can be 200 MB files or zvols (no need for gigabytes), carved out of existing free space anywhere on the system.
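Something like this sketch (pool and file names illustrative; mkfile is the Solaris-native way to create the files, dd works too):

mkfile 200m /tmp/test/lun1 /tmp/test/lun2 /tmp/test/lun3 /tmp/test/lun4   # four file-backed "luns"
zpool create mypool /tmp/test/lun1 /tmp/test/lun2    # two-device stripe, like dev_app
zpool attach mypool /tmp/test/lun1 /tmp/test/lun3    # mirror the first stripe member
zpool attach mypool /tmp/test/lun2 /tmp/test/lun4    # mirror the second stripe member
zpool status mypool                                  # should now show mirror-0 and mirror-1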

If everything is ok, you can run it on your real data.

Also, you can use virtualization on your desktop to check that things work before running the commands on production systems.

Hope that helps
Regards
Peasant.


Wow, thanks for that; I had no idea you could use regular files as devices within ZFS. Using 'dd', I created four new 'luns' to try out the procedure above, and it worked fine!

  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 30 15:28:52 2013
config:

        NAME              STATE     READ WRITE CKSUM
        mypool            ONLINE       0     0     0
          /tmp/test/lun1  ONLINE       0     0     0
          /tmp/test/lun2  ONLINE       0     0     0  56K resilvered

I attached my two remaining luns (lun1 -> lun3 and lun2 -> lun4), and ended up with the following:

        NAME                STATE     READ WRITE CKSUM
        mypool              ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            /tmp/test/lun1  ONLINE       0     0     0
            /tmp/test/lun3  ONLINE       0     0     0
          mirror-1          ONLINE       0     0     0
            /tmp/test/lun2  ONLINE       0     0     0
            /tmp/test/lun4  ONLINE       0     0     0  56K resilvered

I created some test files in /mypool, detached the original devices (lun1 & lun2), and it worked perfectly!
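For anyone following along, the detach step was just:

zpool detach mypool /tmp/test/lun1   # drop the original first-stripe member
zpool detach mypool /tmp/test/lun2   # drop the original second-stripe member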

Many, many thanks for your help. Brilliant!