I have a LUN (from an HP StorageWorks VA7110) that is claimed on 2 servers, but it is in use in one of the volume groups on Server-1.
Now I want to shut down Server-1 and re-use that LUN on Server-2.
On server-1 (the server you want to remove the LUN from) you will need to:
Run vgdisplay -v on the group, and write down the LUNs/disks that are used in it.
Run vgexport on that volume group.
Run pvremove on the disks that were inside that volume group.
Unpresent the disks from the storage side.
Run ioscan -fnC disk, and check for NO_HW.
Run rmsf -H on the NO_HW devices.
You have now removed the LUN(s) from server-1.
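A minimal sketch of the removal steps above, assuming the volume group is vg01 and using hypothetical device names and hardware paths (substitute your own from the vgdisplay and ioscan output):

```
# vgdisplay -v vg01            # note the PV paths used by the VG
# vgchange -a n vg01           # deactivate the VG (unmount its filesystems first)
# vgexport vg01                # remove the VG definition from /etc/lvmtab
# pvremove /dev/rdsk/c5t0d1    # wipe LVM metadata; repeat per disk, raw device
# ioscan -fnC disk             # after unpresenting, the disks show NO_HW
# rmsf -H 0/2/1/0.1.0          # remove stale device files by hardware path
```

Note that pvremove works on the raw (rdsk) device, and the VG must be deactivated before vgexport will succeed.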
Present the LUNs to server-2 and run ioscan -fnC disk.
If you are running 11i v2, you will need to run insf -e to create the device files; on v3 they are created after the ioscan.
Verify the results with xpinfo or a similar tool.
Run pvcreate on the rdsk devices that were created (you don't need -f if you ran pvremove on server-1); one path per LUN is enough, of course.
Use the LUN in a new or an existing volume group.
Note: if you are running v2 and using multipath, you will need to add both paths to your volume group via vgextend (existing) or vgcreate (new).
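The server-2 side of the steps above can be sketched like this, again with hypothetical device names (c5t0d1 as the primary path, c4t0d1 as the alternate):

```
# ioscan -fnC disk                 # discover the newly presented LUN
# insf -e                          # 11i v2 only: create the device files
# pvcreate /dev/rdsk/c5t0d1        # raw device; one path per LUN is enough
# vgextend vg01 /dev/dsk/c5t0d1    # add the primary path to an existing VG
# vgextend vg01 /dev/dsk/c4t0d1    # v2 multipath: add the alternate link too
```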
Now I am getting the error below while importing the same VG on the cluster node:
# vgimport -p -v -s -m /tmp/vg01.map /dev/vg01
Beginning the import process on Volume Group "/dev/vg01".
Verification of unique LVM disk id on each disk in the volume group
/dev/vg01 failed.
Following are the sets of disks having identical LVM disk id
/dev/dsk/c5t0d2 /dev/dsk/c4t1d1 /dev/dsk/c5t1d1
/dev/dsk/c5t0d3 /dev/dsk/c4t1d2 /dev/dsk/c5t1d2
# vgimport -p -v -s -m /tmp/vg01.map /dev/vg01 /dev/dsk/c5t0d2 /dev/dsk/c5t0d3 /dev/dsk/c4t0d2 /dev/dsk/c4t0d3 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1
Beginning the import process on Volume Group "/dev/vg01".
Verification of unique LVM disk id on each disk in the volume group
/dev/vg01 failed.
Following are the sets of disks having identical LVM disk id
/dev/dsk/c5t0d2 /dev/dsk/c4t1d1 /dev/dsk/c5t1d1
/dev/dsk/c5t0d3 /dev/dsk/c4t1d2 /dev/dsk/c5t1d2
Do you have any clue why I am getting this, and how to resolve it?
Notice that you use rdsk devices with vgchgid and dsk devices with vgimport.
You will not use -m or -s in this case.
Just please be sure about your operations and what you are trying to achieve, since this looks like a volume group migration from one host to another rather than re-use of existing LUNs.
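That rdsk/dsk distinction can be sketched as follows (device names and the group-file minor number are hypothetical, taken from your listings for illustration): vgchgid stamps a new LVM id onto the raw devices, and the subsequent vgimport lists the block devices explicitly, with no -s and no -m:

```
# vgchgid /dev/rdsk/c5t0d2 /dev/rdsk/c5t0d3     # new LVM id, raw devices
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000           # minor number must be unique per VG
# vgimport /dev/vg01 /dev/dsk/c5t0d2 /dev/dsk/c5t0d3   # block devices, no -s/-m
```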
Yes, there is a Serviceguard cluster, and each LUN (from the HP StorageWorks VA7110) has a TrueCopy and a Business Copy.
I have checked that the Business Copy is taken, but it is 0 MB in size (I am not sure why).
I have added a LUN on the primary cluster server and am able to see the extended space in the VG, and both paths.
< ON Server-1 >
# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c1t2d0
/dev/dsk/c2t2d0
/dev/vg01
/dev/dsk/c5t0d2
/dev/dsk/c5t0d3
/dev/dsk/c4t0d2
/dev/dsk/c4t0d3
/dev/dsk/c5t0d1
/dev/dsk/c4t0d1
# vgdisplay -v vg01
..
..
--- Physical volumes ---
PV Name /dev/dsk/c5t0d2
PV Name /dev/dsk/c4t0d2 Alternate Link
PV Status available
Total PE 5631
Free PE 0
Autoswitch On
PV Name /dev/dsk/c5t0d3
PV Name /dev/dsk/c4t0d3 Alternate Link
PV Status available
Total PE 5631
Free PE 1
Autoswitch On
PV Name /dev/dsk/c5t0d1
PV Name /dev/dsk/c4t0d1 Alternate Link
PV Status available
Total PE 5119
Free PE 1279
Autoswitch On
Now I want the same to be updated in vg01 on the secondary node.
Does vgchgid require any downtime on the primary cluster?
If you use TrueCopy or a similar storage cloning method ...
You have a situation where the source VG and the destination VG have the same LVM id (that's why vgimport fails), and the disks are presented on all nodes of the SG cluster.
Because of the identical LVM id, HP-UX will not import, but will warn you that the same IDs exist on multiple sets of disks.
That's where vgchgid comes into play, allowing you to have the 'same'/cloned volume group on any of the SG cluster members.
So you clone the disks via the storage method, then vgexport the cloned volume group, change the LVM id, and import.
If you just want to extend the SG clustered volume group with new LUNs (your case), you just extend it on the node where the package is running (as you did), export a preview of that volume group (the -p -m -s switches, easy to remember :)), then on the secondary node do a vgexport of that group (plain and simple, no switches), mknod, and vgimport using the map file from the primary node.
If you are adding additional LVOLs, you will need to modify the /etc/cmcluster/package/package.sh script (where package is the name of the package you are modifying).
If those LUNs were used on any of the cluster nodes before the above action (extending on the primary node), you will need to run pvremove on every such node for the specified disk(s), to remove the old LVM information.
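The extend-and-resync workflow described above can be sketched like this, with vg01 and hypothetical device paths and minor number:

```
# --- On the primary node (where the package runs) ---
# vgextend vg01 /dev/dsk/c5t0d1 /dev/dsk/c4t0d1   # new LUN, both paths
# vgexport -p -v -s -m /tmp/vg01.map vg01         # preview only: writes the map,
                                                  # the VG itself stays intact

# --- On the secondary node ---
# vgexport vg01                                   # drop the stale VG definition
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000             # same minor number as before
# vgimport -v -s -m /tmp/vg01.map vg01            # -s scans the disks by VGID
```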
# As I read on various forums, I have to perform vgchgid, but I am not sure whether we can perform vgchgid on all the LUNs of an active VG, as this is a production server and I can't take any risk.
---------- Post updated at 08:54 AM ---------- Previous update was at 05:39 AM ----------
I have solved the issue. We can use the command below to import the map file:
1# Verify the corresponding LUNs on the secondary server, and pass the disks discovered on the data path as arguments to vgimport, as below.
2# Ensure you pass both the primary and the alternate disk paths, without the -s option.
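The final import, then, lists both the primary and alternate paths explicitly and drops -s (device names taken from the vgdisplay output above; the group-file minor number is an assumption):

```
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000   # minor number must be unique per VG
# vgimport -v -m /tmp/vg01.map /dev/vg01 \
    /dev/dsk/c5t0d2 /dev/dsk/c4t0d2 \
    /dev/dsk/c5t0d3 /dev/dsk/c4t0d3 \
    /dev/dsk/c5t0d1 /dev/dsk/c4t0d1     # primary + alternate links, no -s
```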