SAN Migration of HP-UX hosts

Hello gurus,

I am a SAN admin and not very familiar with HP-UX administration, so I need help with the system admin steps for a migration I have to do at a client site.

Environment: migrating from a CX4 to a VMAX using Open Replicator (OR) hot pull.

Here are the steps I have put together. HP-UX gurus, please correct/validate the steps on the system admin side:

Pre-Migration Steps:

  1. Remediate the server (patch levels, HBA firmware, and drivers per the target support matrix).
  2. Complete OR zoning and masking: zone the VMAX FAs to the CX SP ports, create a migration storage group on the CX frame, and add the FAs to the group (a rough naviseccli sketch follows).
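
For reference, a rough sketch of what the CX side could look like with naviseccli; the SP address, group name, initiator WWN, and SP/port numbers are placeholders, so verify the exact syntax against the Navisphere CLI documentation. The first command creates the migration storage group, the second registers one VMAX FA initiator against an SP port (repeat per FA and per port):

    naviseccli -h <spa_address> storagegroup -create -gname OR_Migration
    naviseccli -h <spa_address> storagegroup -setpath -gname OR_Migration -hbauid 50:00:09:7x:xx:xx:xx:xx -sp a -spport 0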

Migration Steps:

  1. Stop all applications/databases and perform a sanity reboot.

  2. Take backups of the following:
    df -k > /var/tmp/Migration/df-k.out
    cp /etc/fstab /var/tmp/Migration/fstab.premigration

    powermt display dev=all > /var/tmp/Migration/powermt.out
    ls -l /dev/dsk /dev/rdsk > /var/tmp/Migration/devfiles.backup

  3. Gather VG info: vgdisplay -v

  4. Mount points: bdf

  5. Make note of the VG group file major/minor numbers for later use:
    ls -l /dev/*/group

  6. Create map files for the VGs (preview mode):
    vgexport -pvs -m /tmp/vg00.map /dev/vg00
    vgexport -pvs -m /tmp/vg01.map /dev/vg01

  7. Unmount all the FSs:
    umount -a

  8. Deactivate the VGs:
    vgchange -a n /dev/vg00
    vgchange -a n /dev/vg01

  9. Export the VGs:
    vgexport /dev/vg00
    vgexport /dev/vg01

  10. Remove the old device files with rmsf (see the sketch after this list).

  11. Make sure all the device files are gone:
    ioscan -nkf -C disk

  12. Comment out all the SAN mount points in /etc/fstab.

  13. Shut down the server: shutdown -hy 0 (it stays down until the data copy has been started).
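
For step 10, a rough sketch of the rmsf usage; the hardware path and device file below are placeholders that would come from the saved ioscan output for the CX LUNs:

    ioscan -fnC disk
    rmsf -H 0/0/2/0/0.1.0.0.0.0.1
    rmsf -a /dev/dsk/c4t0d1
    ioscan -fnC disk

The first ioscan identifies each CX LUN's hardware path; rmsf -H removes the special files for one hardware path (rmsf -a does the same given a device file); the final ioscan confirms the CX entries are gone.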

a. Remove the masking of the host on the CX: move all of the server's LUNs from the host storage group to the migration storage group (a naviseccli sketch follows).
b. Remove the zoning between the CX and the host.
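
A rough naviseccli sketch of step a; the SP address, group names, and HLU/ALU numbers are placeholders for the real values in the environment:

    naviseccli -h <spa_address> storagegroup -removehlu -gname <Host_SG> -hlu 0
    naviseccli -h <spa_address> storagegroup -addhlu -gname OR_Migration -hlu 0 -alu 25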

  1. Mask the storage from the VMAX to the host (LUNs of similar or greater size should be used).

  2. Start the data copy using the OR hot-pull commands and monitor it with query (see the symrcopy sketch after this list).

  3. Once the copy has reached 15%, bring the server up.

  4. Scan for new devices: ioscan -fnC disk

  5. Create the device special files: insf -e -C disk

  6. Create the VG directories:
    mkdir /dev/vg00
    mkdir /dev/vg01

  7. Create device files named group in the new directories:
    mknod /dev/vg00/group c 64 0x010000
    mknod /dev/vg01/group c 64 0x020000

  8. Import all the VGs:
    vgimport -vs -m /tmp/vg00.map /dev/vg00
    vgimport -vs -m /tmp/vg01.map /dev/vg01

  9. Activate the VGs:
    vgchange -a y /dev/vg00
    vgchange -a y /dev/vg01

  10. Check the VGs:
    vgdisplay -v

  11. Mount all the FSs:
    mount -a
    bdf

  12. Reconfigure PowerPath
    powermt config
    powermt display dev=all

  13. Start Databases/Applications

  14. Once the migration copy is 100% complete, terminate the OR session.
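
For step 2 above, a rough Solutions Enabler sketch of the OR hot-pull session; the session name, Symmetrix ID, device number, and remote WWN in the pairs file are placeholders (check the symrcopy documentation for the exact pair-file format):

    symrcopy create -copy -pull -hot -name hpux_migr -file pairs.txt -nop
    symrcopy activate -name hpux_migr -file pairs.txt -nop
    symrcopy query -file pairs.txt
    symrcopy terminate -name hpux_migr -file pairs.txt -nop

Each line of pairs.txt names a VMAX control device and the CX LUN it pulls from, e.g. symdev=000194901234:0123 wwn=60060160a1b02e00c1d2e3f4a5b6c7d8. The query command is what shows the percent complete for steps 3 and 14, and terminate is only run once the copy reaches 100%.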

Please correct anything that needs to be modified.

I have had a quick look and found a few issues.
You can't deactivate vg00 because it is the root VG, and you can't import it as vg00 on another server either.
mknod /dev/vg00/group c 64 0x010000 => for vg00 the correct minor number is 0x000000; 0x010000 will work, but it is not recommended.
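
For example, on a typical system the group files look something like this (dates and exact layout will differ):

# ls -l /dev/*/group
crw-r--r--   1 root   sys   64 0x000000 Jan 10 10:00 /dev/vg00/group
crw-r--r--   1 root   sys   64 0x010000 Jan 10 10:00 /dev/vg01/group
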
I have done many SAN and disk migrations.
Feel free to ask any questions.

Cheers,


Thank you.


Joseph,

What would be the steps if the LUNs are not under volume groups, say if they are under ASM?

Thanks,

To Jps460,
I have never seen LUNs that are not under VGs, except raw devices for an Oracle DB.
I am just wondering what the purpose of keeping LUNs outside of VGs is.
If you show me the output of bdf, I might be able to give you an idea.
Cheers,

Hi,

I tried to attach the bdf and inq reports.
Please note that a few disks are under ASM.
I am trying to figure out the steps I need to take while cutting over to the new LUNs.

Thanks,

Are you talking about the /ora00 filesystem, which is on ASM?
If yes, vg01a holds the /ora00 filesystem, so you can export it the same way as the normal procedure below.

 
# umount /ora00
# vgexport -pvs -m /tmp/vg01a.map /dev/vg01a
# vgchange -a n vg01a

and then remap the LUNs to the new hardware and import vg01a (a sketch of the import side follows).
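
Roughly, the import side on the new LUNs would look like this; the minor number 0x0a0000 is a placeholder, so reuse the one recorded with ls -l /dev/vg01a/group before the export:

# mkdir /dev/vg01a
# mknod /dev/vg01a/group c 64 0x0a0000
# vgimport -vs -m /tmp/vg01a.map /dev/vg01a
# vgchange -a y vg01a
# mount /ora00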

Cheers,