Increase LUN size in AIX with VIOS and HACMP

Hello!

I have this infrastructure:

  • 1 POWER7 with single VIOS on Site A.
  • 1 POWER6 with single VIOS on Site B.
  • 1 LPAR called NodeA as primary node for PowerHA 6.1 on Site A.
  • 1 LPAR called NodeB as secondary (cold) node for PowerHA 6.1 on Site B.
  • 1 Storage DS4700 on Site A.
  • 1 Storage DS4700 on Site B.
  • All VIOS versions are 2.2.0.13-FP24 SP-03.
  • All AIX versions are 6.1.6.5.
  • PowerHA version is 6.1 SP3.
  • Data VG is configured as LVM Cross Site, using one disk from each storage.
  • Both disks are configured with reserve_policy=no_reserve in all VIOS and LPARs.
  • The queue_depth attribute is the same (10) in all VIOS and LPARs for these disks too (checked as shown below).
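
A quick way to check both attributes on each LPAR, and on each VIOS after oem_setup_env (hdisk names differ per host, hdisk3 is just an example):

# lsattr -El hdisk3 -a reserve_policy -a queue_depth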

My problem is that I can't increase the size of my data VG.

I increased the LUN size, then ran cfgdev on both VIOS and cfgmgr on both LPARs. chvg -g datavg still returns: "0516-1382 chvg: Volume group is not changed. None of the disks in the volume group have grown in size."
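
In commands, that was (datavg as above):

On each VIOS, as padmin:
$ cfgdev

On each LPAR, as root:
# cfgmgr
# chvg -g datavg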

I've tried several ways to do this without luck.

Any suggestions?

Thanks!

Enzote

Are you using a concurrent VG for your cluster?
The system will not allow you to increase those while the VG is online.
But then you should get a different error...

Are you sure those disks were actually resized?
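
A quick way to check the size AIX actually sees, in MB, is from root (oem_setup_env first on a VIOS); the hdisk name is just an example:

# getconf DISK_SIZE /dev/hdisk1
# bootinfo -s hdisk1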

I would try exporting and importing the VG to reread the VGDA from the disks.
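
Something like this, with cluster services stopped and the VG varied off (the VG and hdisk names are placeholders):

# varyoffvg datavg
# exportvg datavg
# importvg -y datavg hdisk1
# varyonvg datavg

exportvg only removes the VG definition from the ODM; the data on disk is untouched, and importvg rereads the VGDA from any one of the VG's PVs.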

How are the disks mapped to the LPAR? Are they VSCSI or VFCHOST?

The disks are in concurrent mode. If I run lspv from one node I get:

# lspv
...
hdisk3 00f6317b414624c2 vg_data concurrent
hdisk4 00f6317b41466700 vg_data concurrent
...

I tried export/import.

The disks are mapped through VSCSI.
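
The mappings can be listed on each VIOS as padmin (vhost0 is the adapter on my side):

$ lsmap -vadapter vhost0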

I will run a test: unmapping the disks, running cfgdev on the VIOS, and then remapping them.

Thanks!

Enzote

---------- Post updated at 09:41 AM ---------- Previous update was at 08:12 AM ----------

Hello!

This procedure works:

1.- Stop PowerHA services on both nodes.

2.- In VIOS A:
$ rmdev -dev vtd_data_A
$ rmdev -dev vtd_data_B
$ oem_setup_env
# rmdev -dl hdisk1
# rmdev -dl hdisk2
# cfgmgr
# chdev -l hdisk1 -a reserve_policy=no_reserve
# chdev -l hdisk2 -a reserve_policy=no_reserve
# exit
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev vtd_data_A
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtd_data_B

3.- Repeat step 2 on VIOS B.

4.- On node A:
# rmdev -dl hdisk3
# rmdev -dl hdisk4
# cfgmgr
# chdev -l hdisk3 -a reserve_policy=no_reserve
# chdev -l hdisk4 -a reserve_policy=no_reserve
# chdev -l hdisk3 -a queue_depth=10
# chdev -l hdisk4 -a queue_depth=10
# varyonvg vg_data
0516-1434 varyonvg: Following physical volumes appear to be grown in size.
Run chvg command to activate the new space.
hdisk3 hdisk4
# chvg -g vg_data
# varyoffvg vg_data

5.- Repeat step 4 on node B, except the chvg -g step.
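
The result can be verified afterwards: lsvg reports a higher TOTAL PPs count and getconf the new size in MB:

# lsvg vg_data
# getconf DISK_SIZE /dev/hdisk3
# getconf DISK_SIZE /dev/hdisk4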

Thanks for the support!

Enzote

This is why I asked whether they were mapped as VSCSI, as the resized disks needed re-mapping.

We have almost the same configuration on some clusters; resizing LUNs (DS8300 storage) works without problems, I think even without cfgmgr on both the VIOS and the LPAR; at least it's not necessary on the VIO server.

As gito said, dynamic resizing of hdisks that belong to a concurrent VG activated in concurrent active or concurrent passive mode is not supported, so you need to at least bring the resource group down once.
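
With the VG online you can see the activation mode right away; chvg -g only works when it is not activated in concurrent mode:

# lsvg vg_data | grep -i concurrent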

I could only imagine the problem lying in the combination of your DS4xxx storage and the drivers on the VIO servers.

We have clusters with 80+ concurrent PVs; imagine remapping every single one (we have a script for this, but anyway...). That must be a bug; I would open an IBM ticket in your case.
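
For illustration, a rough sketch of what such a remapping loop can look like, run as root (oem_setup_env) on the VIOS: vhost0, the no_reserve attribute and the exact lsmap -field/-fmt output format are assumptions taken from this thread, so test it on a single vtd first.

#!/usr/bin/ksh
# Sketch: re-map every VTD on one vhost so the VIOS rereads the LUN sizes.
# Run with cluster services down; backing devices are assumed to be whole hdisks.
IOSCLI=/usr/ios/cli/ioscli

# save the current vtd / backing-device pairs before removing anything
$IOSCLI lsmap -vadapter vhost0 -field vtd backing -fmt : > /tmp/vtd.map

# unmap every VTD and remove its backing hdisk from the ODM
while IFS=: read vtd backing; do
    $IOSCLI rmvdev -vtd "$vtd"
    rmdev -dl "$backing"
done < /tmp/vtd.map

# rediscover the hdisks at their new size
cfgmgr

# restore the attributes and re-create the mappings
while IFS=: read vtd backing; do
    chdev -l "$backing" -a reserve_policy=no_reserve
    $IOSCLI mkvdev -vdev "$backing" -vadapter vhost0 -dev "$vtd"
done < /tmp/vtd.map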

According to APAR IZ80021, there is an error in the man page for chvg -g: since AIX 6.1 TL4 this is no longer a restriction.

All my AIX and VIOS systems are at TL6.
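
For the record, the level and the APAR can be checked like this (the oslevel output shown is just what TL6 SP5 looks like):

# oslevel -s
6100-06-05-1115
# instfix -ik IZ80021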

I presume this is a bug on the VIOS side.

I've already opened a ticket with IBM support for a problem with loss of LUN access when a DS controller failover occurs, but they don't have a solution yet.

Thanks!

Enzote

man chvg on AIX 7.1:

      -g
            Will examine all the disks in the volume group to see if they have grown in size. If any disks have grown in size attempt to add additional PPs to PV. If necessary will determine proper 1016 multiplier and conversion to big vg.
            Notes:
              1    The user might be required to execute varyoffvg and then varyonvg on the volume group for LVM to see the size change on the disks.
              2    There is no support for re-sizing while the volume group is activated in classic or enhanced concurrent mode.
              3    There is no support for re-sizing for the rootvg.

If this is a bug, then it has made its way even to AIX 7.1.
But I would call it a lazy man-page update policy rather than a bug :wink: