On normal (non-concurrent) VGs it is possible to extend a LUN on SAN storage, run chvg -g to rewrite the VGDA, and then use the disks' new size for LVM operations.
Is the same procedure possible on an HACMP cluster (two nodes in our case) with concurrent VGs in active/passive mode?
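For reference, the non-concurrent procedure looks roughly like this (a sketch; datavg and hdisk4 are example names, and the LUN is assumed to have already been grown on the storage side):

```sh
# rescan devices so AIX sees the new LUN size
cfgmgr

# rewrite the VGDA so LVM picks up the grown PV
chvg -g datavg

# verify the additional PPs are now visible
lspv hdisk4
```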
cheers funksen
Edit:
OK, here is what the man page says, but perhaps it is possible and the man page just was not updated. Has anyone tried it yet?
-g
    Will examine all the disks in the volume group to see if they have
    grown in size. If any disks have grown in size, attempt to add
    additional PPs to the PV. If necessary, will determine the proper
    1016 multiplier and conversion to big vg.

    Notes:
    1. The user might be required to execute varyoffvg and then
       varyonvg on the volume group for LVM to see the size change
       on the disks.
    2. There is no support for re-sizing while the volume group is
       activated in classic or enhanced concurrent mode.
    3. There is no support for re-sizing for the rootvg.
AFAIK growing the VG by adding new partitions to the VGDA while it is varied on is still at the pilot stage ;). If you want to extend an HACMP shared VG online, you can add disks to it with C-SPOC and then extend the LV/FS.
Another way, unfortunately offline, is to take down HACMP, extend the LUNs, run cfgmgr, varyonvg in normal mode, run chvg -g, and then start the cluster again; HACMP synchronises the new VGDA to the other nodes.
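Sketched as a command sequence (cluster and VG names are examples; the LUN extension itself happens on the storage side between stopping the cluster and running cfgmgr):

```sh
# stop cluster services on both nodes (e.g. via smitty clstop),
# then extend the LUNs on the SAN, then on one node:
cfgmgr                 # rediscover the grown disks
varyonvg datavg        # vary on in normal (non-concurrent) mode
chvg -g datavg         # rewrite the VGDA with the new disk size
varyoffvg datavg       # release the VG again
# restart cluster services; HACMP syncs the updated VGDA to the peer node
```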
Extending with new LUNs is the way we have been doing it for the last few years, but this leads to 100+ LUNs on bigger systems.
We run HACMP in a virtualized environment, with two VIO servers and MPIO on the LPAR.
Yes, that usually happens when the hdisk slices from the SAN are too small. I remember a datacenter where the SAN guys allocated the SAN disks in 8 GB chunks because "with smaller disks we are more flexible"... In other words: know your workload from the beginning. If your data doubles every 6 months or so, start with a few big disks/LUNs so that you neither hit the VG limits nor end up with hundreds of disks/LUNs (a number that might double when mirrored).
You are right, I asked about concurrent VGs; I just wanted to say that it's no problem on non-concurrent VGs.
We were told to use 60 GB LUNs, and a lot of them, due to better performance on our old DS4800 storage.
On our new DS8300 we tested the same LV striped across 2, 4, and 8 disks, and on only one disk.
There was almost no performance improvement with more LUNs; that's why I ordered LUNs of up to 260 GB for some systems, which makes things much easier.
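For anyone wanting to repeat such a test, a striped LV across several hdisks can be created roughly like this (a sketch; LV name, strip size, LP count and hdisk names are examples):

```sh
# create an LV of 100 LPs striped across two PVs with a 64 KB strip size
mklv -y testlv -t jfs2 -S 64K datavg 100 hdisk2 hdisk3
```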
And back to HACMP: I know it is possible to run HACMP with non-concurrent VGs, and in case of a takeover the VG is varied offline on the primary node and online on the secondary node.
That takes a bit longer, but the whole LVM management is easier to handle.
Any experience with this?
Yes, that definitely sounds like a good LUN size for any DB and/or SAP installation these days.
In my experience this "bit longer" can make quite a big difference, especially with VGs that have lots of disks. Personally, I'd thus always prefer an ECM VG and live with the slightly more complicated handling. Another big advantage of ECM, IMHO, is that a takeover won't fail because of problems with releasing SCSI locks. (But that was with HACMP V4 and might not be an issue with V5.)