How to make an existing volume group "shared"?

We have a two-node cluster in which only the primary actually mounts the shared VGs at any given time. We recently added a volume group to the primary.

  • The disks in it are visible to both nodes, but the secondary does not know about the new VG.
  • The new VG is not a "shared volume group".
  • The new VG is in production.

I need to find a way to make the (existing/active) VG on the primary into a "shared VG" available to both nodes.

  1. Is there a way to do this without taking both nodes down?
  2. How do I make an existing VG a "shared VG"?

The procedures I'm finding seem to involve creating the VG and making it shared before actually mounting and using it on a node. Thanks in advance for any guidance you can provide.

Are you doing MPIO? Are these physical systems or LPARs? Is this HACMP? How do you open the volume group on the system that you say can see the disks?

Try this IBM Redbook and search for the string "concurrent".

I hope that this helps, but let me know if I'm miles off target and I will think again.

Robin
Liverpool/Blackburn
UK

AIX is not my "native language"; I do much more Solaris, so please forgive any terminology mistakes!

Our LUNs are served from an EMC CLARiiON SAN and are SATA. The cluster is HACMP, with no standalone management server. The OS is AIX 5.3 and the servers are POWER6 550s.

Here's the VG from the Primary/active node:

bash-2.05b# lsvg vg_arc2
VOLUME GROUP:       vg_arc2                  VG IDENTIFIER:  00cbd24200004c0000000135104bc85d
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      36460 (2333440 megabytes)
MAX LVs:            256                      FREE PPs:       0 (0 megabytes)
LVs:                4                        USED PPs:       36460 (2333440 megabytes)
OPEN LVs:           4                        QUORUM:         5 (Enabled)
TOTAL PVs:          8                        VG DESCRIPTORS: 8
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         8                        AUTO ON:        no
MAX PPs per VG:     65536                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
bash-2.05b# lsvg -p vg_arc2
vg_arc2:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdiskpower32      active            7197        0           00..00..00..00..00
hdiskpower33      active            7197        0           00..00..00..00..00
hdiskpower34      active            7197        0           00..00..00..00..00
hdiskpower35      active            7197        0           00..00..00..00..00
hdiskpower41      active            1918        0           00..00..00..00..00
hdiskpower42      active            1918        0           00..00..00..00..00
hdiskpower43      active            1918        0           00..00..00..00..00
hdiskpower44      active            1918        0           00..00..00..00..00
bash-2.05b# lsvg -l vg_arc2
vg_arc2:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
lv_arc9             jfs2       9115    9115    2    open/syncd    /archive9
lv_arc10            jfs2       9115    9115    2    open/syncd    /archive10
lv_arc11            jfs2       9115    9115    2    open/syncd    /archive11
lv_arc12            jfs2       9115    9115    2    open/syncd    /archive12
bash-2.05b# lspv |grep vg_arc2
hdiskpower32    00cbd2427b2ff68d                    vg_arc2         active
hdiskpower33    00cbd2427b35914d                    vg_arc2         active
hdiskpower34    00cbd2427b279500                    vg_arc2         active
hdiskpower35    00cbd2427b2b3c06                    vg_arc2         active
hdiskpower41    00cbd24203dcdec2                    vg_arc2         active
hdiskpower42    00cbd24203e5c4dc                    vg_arc2         active
hdiskpower43    00cbd24203df9ccd                    vg_arc2         active
hdiskpower44    00cbd24203e5eb16                    vg_arc2         active

Here are the same commands from the Secondary/inactive node:

bash-2.05b# lsvg vg_arc2
0516-306 : Unable to find volume group vg_arc2 in the Device
        Configuration Database.
bash-2.05b# lsvg -p vg_arc2
0516-306 : Unable to find volume group vg_arc2 in the Device
        Configuration Database.
bash-2.05b# lsvg -l vg_arc2
0516-306 : Unable to find volume group vg_arc2 in the Device
        Configuration Database.
bash-2.05b# lspv |grep vg_arc2
bash-2.05b# lspv | egrep 'hdiskpower3|hdiskpower4'
hdiskpower32    00cbd2427b2ff68d                    None
hdiskpower33    00cbd2427b35914d                    None
hdiskpower34    00cbd2427b279500                    None
hdiskpower35    00cbd2427b2b3c06                    None
hdiskpower41    none                                None
hdiskpower42    none                                None
hdiskpower43    none                                None
hdiskpower44    none                                None

The secondary node at least sees the PVIDs of four of the disks; the four it doesn't see PVIDs for were just added.
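(In case it matters: I'm guessing I can get the secondary to pick up those missing PVIDs without touching the primary. A rough sketch of what I have in mind, assuming the pv attribute behaves on the hdiskpower pseudo-devices the same way it does on plain hdisks:)

  # on the secondary, for each of the four newly added disks
  chdev -l hdiskpower41 -a pv=yes    # read the PVID already written on the disk into the ODM
  lspv | grep hdiskpower41           # should now show 00cbd24203dcdec2 instead of "none"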

"smit cl_admin" shows that this volume group is not shared at all, while various other VGs are shared:

#Volume Group            Resource Group          Node List
 hbvg                    <None>                  cmax1,cmax2
 vg_arc                  cmax_res                cmax1,cmax2
 vg_bkp                  cmax_res                cmax1,cmax2
 vg_cmax                 cmax_res                cmax1,cmax2
 vg_dba                  cmax_res                cmax1,cmax2

Thanks...

Have a look at the manual page for chvg. I think you are looking for the -c flag to make the volume group concurrent.
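(From memory, a quick way to see where you stand: a concurrent-capable VG shows Concurrent / VG Mode fields in the lsvg output, which your listing above does not have.)

  lsvg vg_arc2 | grep -i concurrent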

Robin

My understanding is that we don't use (or want) concurrent access, as we only have one node active at a time. The Redbook I'm looking at says:

"Shared logical volumes: While not explicitly configured as part of a resource group, each logical volume in a shared volume group will be available on a node when the resource group is online. These shared logical volumes can be configured to be accessible by one node at a time or concurrently by a number of nodes in case the volume group is part of a concurrent Resource Group."

We never need the volume groups to be accessible by a number of nodes concurrently, just by one node at a time. The problem now is that the volume group is in production but not shared, as indicated by what you see when you look at "smit cl_admin -> HACMP Logical Volume Management -> Shared Volume Groups". The volume group in question, vg_arc2, is not even listed there, so it is not yet a shared VG. Thanks again.

I think that you might need to make the disks visible (although varied off) on each node, so there is an update to make for the VG. If I can find the course notes I will dig in a bit more.

Robin

I seem to recall there might be a way... but I am thinking of old HACMP, where there might have been a way to take the ODM entries from the live system and update the ODM on the other system. Not sure.
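From what I remember it boiled down to an importvg on the inactive node. A very rough sketch of the old manual procedure, assuming you can get a window where the VG is varied off on the primary; please check it against the current HACMP docs before touching a production VG:

  # on the primary, during the outage window
  varyoffvg vg_arc2

  # on the secondary, once it can see the disks and their PVIDs
  importvg -y vg_arc2 hdiskpower32   # rebuild the ODM definition from the on-disk VGDA
  chvg -a n vg_arc2                  # no auto-varyon at boot; let the cluster decide who varies it on
  varyoffvg vg_arc2                  # importvg varies the VG on, so vary it off again

  # back on the primary
  varyonvg vg_arc2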

Ok. I had a thought - would this work?

Can I add new disks to the primary, create a new shared VG using the new disks, and then somehow migrate the VG contents to the new VG and get rid of the old VG (while the VG is active)?

Thanks.

Hello there,

After reading your post, I assume that you want the newly created VG to be shared. For that you need a couple of things: the reserve policy of each disk under that VG must be set to 'no', and the VG must be concurrent capable.

Now, you can add a new VG, LV, and filesystem to a cluster without bringing it down or causing a failover: use C-SPOC to add the VG to your cluster.

So, before creating the VG, run chdev -l hdiskN -a <reserve_attribute>=no on each disk (where the attribute is reserve_lock or reserve_policy, depending upon your storage).

Then use C-SPOC to create the new VG; it will create the VG and whatever else you want (like LVs and filesystems) and also sync the cluster, meaning the same information is updated on the passive node.
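For example, on PowerPath disks it would look something like the following (the attribute name here is a guess for your setup, so check the lsattr output first; if the disk is already open, the change may need chdev -P plus a reboot, or the VG varied off):

  lsattr -El hdiskpower32 | grep -i reserve    # see which reserve attribute your driver uses
  chdev -l hdiskpower32 -a reserve_lock=no     # PowerPath; MPIO hdisks use reserve_policy=no_reserve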

I do not have an HACMP system handy (HACMP through v5, aka PowerHA in v6, aka SystemMirror in v7), but since HACMP v5, volume groups are meant to be "enhanced concurrent".

What this means in "AIX speak" is that the VGDA is open on both sides, so that when the active side makes a change to the volume group, the inactive (passive) side can update its ODM with the data. This speeds up the takeover time when moving a resource group manually or during a failover; getting the VGDA (read: ODM) data current during a move was often 30-50% of the time needed for the move. "Enhanced" concurrent speeds this up.

You should be able to synchronize the resource groups. You may get a warning about the VG not being "enhanced concurrent", but that just means your move/failover will take longer.

To make it enhanced concurrent you will need to stop the application so the volume group can be varied off and then on again.
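Roughly, something like this on the node that currently owns the VG (the chvg flag is from memory, so verify it against the man page on your level; I believe the bos.clvm.enh fileset also has to be installed; you could equally bring the resource group offline and let HACMP handle the umounts and varyoff for you):

  # with the application stopped
  for fs in /archive9 /archive10 /archive11 /archive12; do umount $fs; done
  varyoffvg vg_arc2
  chvg -C vg_arc2                       # mark the VG enhanced concurrent capable
  varyonvg vg_arc2
  for fs in /archive9 /archive10 /archive11 /archive12; do mount $fs; done
  lsvg vg_arc2 | grep -i concurrent     # should now report it as Enhanced-Capable

Then synchronize the cluster so both nodes agree on the definition.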

P.S. When a disk is part of an enhanced concurrent VG, it can be used for a non-IP network for passing a heartbeat. This changed (I do not know the details) in SystemMirror (v7), as it uses CAA (Cluster Aware AIX) rather than RSCT for topology monitoring.

Hope this clarifies it enough for you, and the AIX speak was not too difficult :wink: