Reuse a LUN

I have a LUN (from an HP VA7110 storage array) that is claimed on two servers, but is currently in use in one of the VGs on Server-1.
Now I want to shut down Server-1 and reuse that LUN on Server-2.

Server-1
Path-1 : /dev/rdsk/c4t0d1 
Path-2: /dev/rdsk/c6t0d1

Server-2
Path-1: /dev/rdsk/c5t0d1  
Path-2: /dev/rdsk/c4t0d1

I am new to HP-UX and am confused about the right steps.

  1. Do I need to format the LUN /dev/rdsk/c4t0d1 before reusing it in a new VG on Server-2?
  2. If it does need formatting, what command can I use safely?
  3. I also want the alternate path to be available in the new VG, so is the syntax below right?

On Server-2

# pvcreate -f /dev/rdsk/c5t0d1
# mkdir /dev/vg05
# mknod /dev/vg05/group c 64 0x050000 
# vgcreate -l 255 -p 32 -s 8 /dev/vg05   /dev/rdsk/c5t0d1   /dev/dsk/c4t0d1
# vgdisplay -v vg05

--Shirish Shukla

I suggest you do a

 pvdisplay -v /dev/dsk/c4t0d1 | more 

and see what is on it before going further... Why is the disk seen on both servers? Is this a cluster (or was it)?

No, the server is not in a cluster. I have cleanly exported the VG and de-provisioned Server-1.

Now I want to use the same LUN fresh on Server-2, to extend an existing volume group.

You will need to run the following on Server-1 (the server from which you want to remove the LUN):

  1. vgdisplay -v the volume group, and write down the LUNs/disks that are used in it.
  2. vgexport that volume group.
  3. pvremove the disks that were inside that volume group.
  4. Unpresent the disks on the storage side.
  5. Run ioscan -fnC disk and check for NO_HW.
  6. Run rmsf -H on the NO_HW devices.

You have now removed the LUN(s) from Server-1; a sketch of the whole sequence is shown below.
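
For illustration only, a minimal sketch of those six steps, assuming the vg05 and c4t0d1 names from your first post (the hardware path given to rmsf is made up; use the one that ioscan reports as NO_HW):

# vgdisplay -v vg05 | grep "PV Name"     # step 1: record the disks that belong to the VG
# vgexport /dev/vg05                     # step 2: remove the VG definition from this host
# pvremove /dev/rdsk/c4t0d1              # step 3: clear the LVM header, one path per disk
  (step 4: unpresent the LUN from Server-1 on the VA7110)
# ioscan -fnC disk                       # step 5: the old paths should now show NO_HW
# rmsf -H 0/0/2/0.0.0.1                  # step 6: remove the stale device files for that hardware path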

Present the LUNs to Server-2 and run ioscan -fnC disk.
If you are running 11i v2, you will need to run insf -e to create the device files; on 11i v3 they are created automatically after ioscan.
Verify the results with xpinfo or a similar tool.
Run pvcreate on the rdsk device files that were created (you don't need -f if you ran pvremove on Server-1); one path per LUN is enough, of course.
Use the LUN in a new or existing volume group.
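
For example, a minimal sketch of those steps on Server-2, assuming the c5t0d1 path from your first post (use whichever array tool matches your VA7110):

# ioscan -fnC disk              # discover the newly presented LUN
# insf -e                       # 11i v2 only; v3 creates the device files automatically
# xpinfo                        # or a similar array tool, to confirm the LUN and its paths
# pvcreate /dev/rdsk/c5t0d1     # one path per LUN is enough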

Note that if you are running 11i v2 and using multipathing, you will need to add both paths to your volume group via vgextend (existing VG) or vgcreate (new VG), as in the sketch below.
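
For instance, with the c5t0d1/c4t0d1 paths from your post (the VG names and minor number here are just examples):

# vgextend /dev/vg01 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1   # existing VG: primary path first, then alternate

< or, for a new VG >
# mkdir /dev/vg05
# mknod /dev/vg05/group c 64 0x050000
# vgcreate /dev/vg05 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1
# vgdisplay -v vg05                                    # the second path should appear as "Alternate Link"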

Hope that helps.
Regards
Peasant.

Thanks, Peasant.

Now I am getting the error below while importing the same VG on the cluster node.

# vgimport -p -v -s -m /tmp/vg01.map /dev/vg01
Beginning the import process on Volume Group "/dev/vg01".
Verification of unique LVM disk id on each disk in the volume group
/dev/vg01 failed.
Following are the sets of disks having identical LVM disk id
/dev/dsk/c5t0d2 /dev/dsk/c4t1d1 /dev/dsk/c5t1d1
/dev/dsk/c5t0d3 /dev/dsk/c4t1d2 /dev/dsk/c5t1d2


# vgimport -p -v -s -m /tmp/vg01.map /dev/vg01 /dev/dsk/c5t0d2 /dev/dsk/c5t0d3 /dev/dsk/c4t0d2 /dev/dsk/c4t0d3 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1
Beginning the import process on Volume Group "/dev/vg01".
Verification of unique LVM disk id on each disk in the volume group
/dev/vg01 failed.
Following are the sets of disks having identical LVM disk id
/dev/dsk/c5t0d2 /dev/dsk/c4t1d1 /dev/dsk/c5t1d1
/dev/dsk/c5t0d3 /dev/dsk/c4t1d2 /dev/dsk/c5t1d2

Do you have any clue why I am getting this, and how to resolve it?

This is because the new disks that you are adding have the same LVM ID as existing disks on the machine.

Are you sure this is not a Serviceguard cluster, or some storage-based cloning such as TrueCopy or Business Copy?
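
If you are not sure, a quick check for Serviceguard (assuming the cluster tools are installed and in the default path):

# cmviewcl -v          # lists the cluster, its nodes and packages if Serviceguard is configured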

If you are sure those are the disks you want to use...

vgexport /dev/vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x050000
vgchgid  /dev/rdsk/c5t0d2 /dev/rdsk/c5t0d3 /dev/rdsk/c4t0d2 /dev/rdsk/c4t0d3 /dev/rdsk/c5t0d1 /dev/rdsk/c4t0d1
vgimport /dev/vg01 /dev/dsk/c5t0d2 /dev/dsk/c5t0d3 /dev/dsk/c4t0d2 /dev/dsk/c4t0d3 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1

Notice the use of rdsk with vgchgid and dsk with vgimport.

You will not use -m or -s in this case.

Just please be sure about your operations and what you are trying to achieve, since this looks like a volume group migration from one host to another rather than reuse of existing LUNs.

Regards
Peasant.

Yes, there is a Serviceguard cluster, and each LUN (from the HP VA7110 storage array) has a TrueCopy and a Business Copy.

  • I have checked that a Business Copy was taken, but it is 0 MB in size (I am not sure why).

  • I have added a LUN on the primary cluster server and am able to see the extended space in the VG, and both paths.

< ON Server-1 >

# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c1t2d0
/dev/dsk/c2t2d0
/dev/vg01
/dev/dsk/c5t0d2
/dev/dsk/c5t0d3
/dev/dsk/c4t0d2
/dev/dsk/c4t0d3
/dev/dsk/c5t0d1
/dev/dsk/c4t0d1



# vgdisplay -v vg01
..
..
      --- Physical volumes ---
   PV Name                     /dev/dsk/c5t0d2
   PV Name                     /dev/dsk/c4t0d2  Alternate Link
   PV Status                   available
   Total PE                    5631
   Free PE                     0
   Autoswitch                  On

   PV Name                     /dev/dsk/c5t0d3
   PV Name                     /dev/dsk/c4t0d3  Alternate Link
   PV Status                   available
   Total PE                    5631
   Free PE                     1
   Autoswitch                  On

   PV Name                     /dev/dsk/c5t0d1
   PV Name                     /dev/dsk/c4t0d1  Alternate Link
   PV Status                   available
   Total PE                    5119
   Free PE                     1279
   Autoswitch                  On

Now I want the same to be reflected in vg01 on the secondary node.

Does vgchgid require any downtime on the primary cluster?

If you use TrueCopy or similar storage cloning methods...

You have a situation where the source VG and the destination VG have the same LVM ID (that is why vgimport fails), and the disks are presented to all nodes of the SG cluster.
Because of the identical LVM ID, HP-UX will not import the group, but will warn you that the same ID exists on multiple sets of disks.

That's where vgchgid comes into play, allowing you to have the 'same/cloned' volume group on any of the SG cluster members.

So you clone the disks via the storage method, then vgexport the cloned volume group, change the LVM ID with vgchgid, and import.

If you just want to extend the SG clustered volume group with new LUNs (your case), you simply extend it on the node where the package runs (as you did), export a preview of that volume group (-p -m -s switches, easy to remember :), and then on the secondary node do a vgexport of that group (plain and simple, no switches), mknod, and vgimport using the map file from the primary node; see the sketch below.

vgimport -m yourmapfile -N -s /dev/yourvolumegroup

No downtime is required for these operations.
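
A minimal sketch of both sides, assuming vg01, the c5/c4 paths from your vgdisplay output, and /tmp/vg01.map as the map file (the minor number shown is only an example; check it on the primary with ll /dev/vg01/group and reuse it):

< On the primary node, where the package runs >
# vgextend /dev/vg01 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1    # add the new LUN, both paths
# vgexport -p -v -s -m /tmp/vg01.map /dev/vg01           # preview only; writes the map file
  (copy /tmp/vg01.map to the secondary node)

< On the secondary node >
# vgexport /dev/vg01                                     # drop the stale definition, if present
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000                    # use the same minor number as on the primary
# vgimport -v -s -m /tmp/vg01.map /dev/vg01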

If you are adding additional LVOLs, you will need to modify the /etc/cmcluster/package/package.sh script (where package is the name of the package you are modifying).
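
For example, in a typical legacy package control script the new logical volume and its filesystem are added to the LV/FS arrays; the exact variable names and index depend on your template, and /dev/vg01/lvol4 and /newdata below are made-up examples:

LV[3]="/dev/vg01/lvol4"          # new logical volume
FS[3]="/newdata"                 # its mount point
FS_MOUNT_OPT[3]="-o rw"          # mount options for this filesystem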

If those LUNs were used on any of the cluster nodes before the above action (extending on the primary node), you will need to run pvremove for the specified disk(s) on every such node, to remove the old LVM information.

Hope that clears things up.
Regards
Peasant.

Thanks Peasant.

vgimport -m yourmapfile -N -s /dev/yourvolumegroup

I am getting the error below, saying that the -N option is not available.

#  vgimport -p  -m /tmp/vg01.2.map -N -s /dev/VGT
Usage: vgimport
        [-p]
        [-v]
        [-s]
        [-m MapFile]
        VolumeGroupName PhysicalVolumePath...
"N": Illegal option.
# uname -a
HP-UX pbup1s B.11.11 U 9000/800 112434670 unlimited-user license

# swlist -l product | grep -i lvm
  LVM                   B.11.11        LVM
  PHCO_29379            1.0            LVM commands cumulative patch
  PHKL_26743            1.0            LVM Cumulative Patch

As I read on various forums, I have to perform vgchgid, but I am not sure whether we can run vgchgid on all LUNs of an active VG, as this is a production server and I can't take any risk. :)

---------- Post updated at 08:54 AM ---------- Previous update was at 05:39 AM ----------

I have solved the issue. We can use the command below to import the map file:

  1. Verify the corresponding LUNs on the secondary server, and pass the discovered disk paths as arguments to vgimport, as below.
  2. Ensure that both the primary and alternate disk paths are passed, without the -s option.

# vgimport -v -m /tmp/vg01.map /dev/vgpsdp1vg01 /dev/dsk/c19t1d1 /dev/dsk/c19t1d2 /dev/dsk/c21t1d1 /dev/dsk/c21t1d2 /dev/dsk/c19t1d0 /dev/dsk/c21t1d0
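
For a quick sanity check afterwards, without activating the VG on this node (vgpsdp1vg01 is the VG name from the command above):

# strings /etc/lvmtab            # the VG and all six device paths should now be listed
# ll /dev/vgpsdp1vg01            # group file plus the lvol device files created from the map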

Thanks, all, for the valuable help!

Thanks,
Shirish Shukla