SAN Migration

Hi all,
We are migrating our SAN storage from HSV360 to 3PAR. The system runs AIX 6.1 with HACMP.
Please let me know what the requirements are from the OS side and how the data is copied to the new disks.

Hi Elizabeth,

I'm going to guess that you mean an LPAR here, running AIX 6.1, and that it's part of a cluster. If you are actually moving to an HP 3PAR array, can you be a little more specific, please?

But that's about all I can guess at. How are the storage arrays attached? What other information can you give us?

Regards

Gull04

Do you see all the new disks on your AIX server? If so, you can:-

  • grow the volume group
  • migrate the logical volumes
  • remove the old disks from the volume group
  • remove the LUNs

Does this seem a sensible approach to you?
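Robin's four steps could be sketched as the command sequence below. The VG and hdisk names are placeholders, and every command is wrapped in echo so it runs as a harmless dry run; remove the echo to execute for real. Using migratepv is one way to do the "migrate the logical volumes" step.

```shell
# Dry-run sketch of the grow/migrate/shrink approach.
# VG and hdisk names are placeholders -- substitute your own.
VG=datavg
OLD=hdisk2     # existing HSV360 LUN
NEW=hdisk6     # new 3PAR LUN

echo extendvg "$VG" "$NEW"    # grow the volume group onto the new LUN
echo migratepv "$OLD" "$NEW"  # move the physical partitions off the old disk
echo reducevg "$VG" "$OLD"    # remove the emptied old disk from the VG
echo rmdev -dl "$OLD"         # delete the old hdisk device definition
```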

Robin

Hi Elizabeth,

It would also be possible to mirror the data over the two arrays - a solution I used when going from DS8300 to VMax.

Regards

Gull04

Well,
You need ODM definitions for the new storage; then you can extend the new disks into each VG, migrate the LVs, update the passive node, and lastly sync the cluster.

Hi Gull04

I am not sure how to check how they are attached. I am just in the learning phase. If you tell me what details you need specifically, I can provide them to you.

Hi Robin,

We haven't started the migration yet, hence the disks are not added.

I was wondering whether I need to upgrade the firmware or HBA driver level for this. Should I change the MPIO setup?
Also, the rootvg doesn't need to be touched, right? Those are SCSI disks.
So I suppose it's only the data VG that needs to be migrated.

So, since it's a cluster environment, do I need to make any changes in the HACMP configuration after the migration?

Thanks in advance!

So, if the SAN side believes that the disk LUNs are assigned to the AIX partition, run cfgmgr -S as root on AIX to discover the devices. Hopefully you will see the difference between lspv before and after: the existing disks will be marked as belonging to volume groups, while the new ones will be listed with volume group "none". They will likely be the higher-numbered hdisks in the display.
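As a concrete illustration of reading that lspv difference, here is a sketch that filters out the disks with no volume group from some sample lspv-style output (the hdisk names and PVIDs below are made up); on the real system you would pipe the output of lspv itself into the awk instead.

```shell
# Sample `lspv`-style output (fabricated names and PVIDs for illustration).
lspv_out='hdisk0          00c8ba12e9f0a111    rootvg          active
hdisk1          00c8ba12e9f0a222    datavg          active
hdisk6          none                None
hdisk7          none                None'

# Disks whose volume group column is "None" are the newly zoned LUNs.
new_disks=$(printf '%s\n' "$lspv_out" | awk '$3 == "None" {print $1}')
echo "$new_disks"
```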

If the disks are for an Oracle database running ASM, then it's possible that the disks as a whole may be assigned to Oracle, and that will make it trickier. Do we need to consider this?

Can you get downtime on the cluster for all this?

Regards,
Robin

Thank you very much.
Yes, I can get downtime for the cluster.


You do not even need downtime for the cluster, provided that your SAN disks allow for concurrent access (check with "lsvg <vgname>" whether the VG is "enhanced concurrent capable" and whether it is opened in concurrent mode).
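That check could look like the sketch below. The lsvg output here is a fabricated fragment for illustration; on a live node you would run "lsvg <vgname>" directly and grep its output.

```shell
# Fabricated fragment of `lsvg datavg` output, used in place of the
# real command so the sketch is self-contained.
lsvg_out='VG STATE:     active          PP SIZE:  128 megabyte(s)
Concurrent:   Enhanced-Capable        Auto-Concurrent: Disabled
VG Mode:      Concurrent'

if printf '%s\n' "$lsvg_out" | grep -q 'Enhanced-Capable'; then
  echo "enhanced concurrent capable"
fi
if printf '%s\n' "$lsvg_out" | grep -q '^VG Mode:.*Concurrent'; then
  echo "opened in concurrent mode"
fi
```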

If this is indeed the case:

  • attach the new disks to both nodes, run "cfgmgr" on both nodes to create the new devices. Check with "lspv".

  • add the new volumes to your VGs on the active node:

extendvg <volumegroup> <hdisk-device>
  • mirror the old disks to the new disks. Switch off automatic sync and do a "syncvg" afterwards, it is usually much faster. Use the "-P" switch to let it run in parallel:
mirrorvg -s <volumegroup> <hdisk-device>
syncvg -P <##> -v <volumegroup>
  • after the syncing is finished, do a "learning import" on the passive node. The passive node so far knows only the old disk to be in the VG, so use that old disk to reimport the VG definition:
importvg -L <volumegroup> <old hdisk-device>
  • on the active node unmirror the VG, removing the mirror on the old disk, then move that old disk out of the VG:
unmirrorvg <volumegroup> <old hdisk-device>
reducevg <volumegroup> <old hdisk-device>
  • again do a "learning import" on the passive node to sync the VG definition across cluster nodes, this time from the new disk:
importvg -L <volumegroup> <new hdisk-device>

Remove the zones from the fabric and delete the old hdisk devices. When running "cfgmgr" again, they should not be discovered any more.
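For that cleanup step, a dry-run sketch (hdisk names are placeholders; echo prints the commands instead of running them, so drop it to actually remove the devices on each node):

```shell
# Dry-run cleanup sketch: delete the old hdisk definitions on both
# nodes after the zones have been removed from the fabric.
for d in hdisk2 hdisk3; do
  echo rmdev -dl "$d"   # remove the device and its ODM definition
done
echo cfgmgr             # re-scan; the old disks should not reappear
echo lspv               # verify only the new disks remain
```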

I hope this helps
bakunin

Hi,

consider the 3PAR Implementation Guide for AIX.


Also check your environment on the HP SPOCK site:
HP Storage Single Point of Connectivity Knowledge - SPOCK

thank you