AIX extend VG error

Hi,

I have encountered an issue while extending a VG. Here is my command:

[root@xxx] / > extendvg vg1 hdisk39
0516-1162 extendvg: Warning, The Physical Partition Size of 128 requires the
        creation of 8192 partitions for hdisk39.  The limitation for volume group
        tsm_stg is 3048 physical partitions per physical volume.  Use chvg command
        with -t option to attempt to change the maximum Physical Partitions per
        Physical volume for this volume group.
0516-792 extendvg: Unable to extend volume group.

I have tried chvg -t, but:

[root@xxx] / > chvg -t 6 vg1
0516-1780 lchangevg: Volume group conversion is not possible since the factor value
        of 6 can accommodate at most 21 physical volumes.
0516-732 chvg: Unable to change volume group vg1.

[root@xxx] / > lsvg vg1
VOLUME GROUP:       vg1                  VG IDENTIFIER:  00f8337b00004c0000000146cc959f6b
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      46359 (5933952 megabytes)
MAX LVs:            512                      FREE PPs:       132 (16896 megabytes)
LVs:                6                        USED PPs:       46227 (5917056 megabytes)
OPEN LVs:           6                        QUORUM:         15 (Enabled)
TOTAL PVs:          29                       VG DESCRIPTORS: 29
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         29                       AUTO ON:        yes
MAX PPs per VG:     130048
MAX PPs per PV:     4064                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512

Any advice on whether I can change the "MAX PPs per PV" parameter?

I can see that another VG has a bigger number:

[root@xxx] / > lsvg vg2
VOLUME GROUP:       vg2                VG IDENTIFIER:  00f8337b00004c000000015451842ab7
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      40955 (5242240 megabytes)
MAX LVs:            512                      FREE PPs:       32454 (4154112 megabytes)
LVs:                2                        USED PPs:       8501 (1088128 megabytes)
OPEN LVs:           2                        QUORUM:         3 (Enabled)
TOTAL PVs:          5                        VG DESCRIPTORS: 5
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         5                        AUTO ON:        yes
MAX PPs per VG:     128016
MAX PPs per PV:     9144                     MAX PVs:        14
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512

Well, you have a LOT of disks in this VG: 29 right now, and you would like to put in the 30th. This is big. When you use the factor "6" as in your example, you can have only 21 PVs in the VG. Try a smaller factor, or omit the factor entirely to let chvg choose a value.
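
If I remember the big-VG arithmetic correctly (1016 x factor PPs per PV, and 128 / factor PVs per VG), the trade-off with your numbers looks like this:

factor 4:  4064 PPs per PV, 32 PVs   (your vg1 right now)
factor 6:  6096 PPs per PV, 21 PVs   (what you tried)
factor 9:  9144 PPs per PV, 14 PVs   (your vg2 above)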

Also - if you have the opportunity to do that - you should recreate that VG with a bigger PP size; 1 GB is not too big for a ~5 TB volume group. And if it is not defined as a "big VG", the PV limit is 32, which you are scarily close to.

--Trifo

Hi Trifo75,

If I use a smaller factor, I can have more PVs, but the max PPs per PV becomes smaller, which is not enough to add a 1 TB disk of around 8000 PPs.

What if I convert it to a scalable VG? Could I then add the disk regardless of its size and of the "MAX PPs per PV" parameter?

Let me first explain a bit of how the LVM historically evolved into what it is now. It will put things in perspective, so that when i answer your question the picture will be easier to understand.

The LVM was introduced with AIX 3, a very long time ago (the beginning of the nineties). Back then disks were measured in MB, and hence the limitations of the LVM weren't really limitations at all:

  • a PV (physical volume) can hold only 1016 PPs (physical partitions)
  • a VG (volume group) can contain only 32 PVs

When a VG is created, the PP size property is selected, and it cannot be changed later. The only way is to back up the VG, delete and recreate it with a different PP size, recreate the LVs/FSs and then restore the backup. Since disks grew bigger and bigger and the PP size cannot be changed later, PVs added to VGs soon hit the 1016-PP limit, and that could only be rectified by a (time-consuming) backup-and-restore.

The "solution" (actually less solution than rather "workaround") IBM came up with was to introduce the "factor". By rearranging the meaning of a few bits in the respective counters (obviously) they allowed for PVs to hold multiples of these 1019 PPs by - at the same time - reducing the maximum number of PVs in a VG. e.g. a factor of 2 means a single PV (disk) can hold up to 2x1019=2038 PPs but the VG would be limited to hold 16 PVs instead of 32. Analogous for different factors. Notice that will do nothing to increase the maximum amount of space a VG can hold because the increased size of one disk will be offset by the reduction in the number of possible disks.

So, finally, IBM came up with the "Big VG" and the "Scalable VG". The scalable VG did away with a lot of the limitations the older "classic" VG had. The downside is that VG operations take ever so slightly longer (seconds at most) because the layout is a bit more complex. Also, it is not possible to directly convert a classic VG into a scalable one, and the management data of a scalable VG takes a bit more space.

Short answer: you can't, so back up, recreate and restore. You should do that anyway, as i will explain further on:

Your PP size is ridiculously small (128MB) for a VG of this size (~5.9TB). The PP size should be the smallest amount by which you will ever want to increase or decrease a FS. Ask yourself if you will ever need to increase a FS by just 128MB; the most probable answer is "certainly not". Increase the size to something sensible, perhaps 1GB.
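
For scale: your lsvg above shows 46359 PPs at 128MB each; with a 1GB PP size the very same space would be roughly 46359 / 8 ~ 5795 PPs, comfortably below any limit.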

Second, you have 29 disks in this VG. Most probably many of these disks are very small compared to the size of the VG. I suppose most of these disks are LUNs anyway, therefore it makes sense to recreate them as at most 3-4 LUNs with the same total size and put the VG there. Future increases in size can then be done by increasing the size of these LUNs and re-reading the LUN size (see the chvg -g command), instead of adding new LUNs to the VG.
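
A minimal sketch of that procedure (the resizing itself is done on the storage box first):

[root@xxx] / > cfgmgr         # make the OS pick up the new LUN size
[root@xxx] / > chvg -g vg1    # make the LVM take over the grown PV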

For this to work you want to do away with the 1016-PPs-per-disk maximum, and hence it makes sense to create the new VG as a scalable VG, where a PV can hold a (practically) unlimited number of PPs. The slight penalty when managing scalable VGs compared to classic VGs is negligible.
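
A minimal sketch of the recreation (disk and VG names hypothetical):

[root@xxx] / > mkvg -S -s 1024 -y newvg hdisk40 hdisk41    # scalable VG with a 1GB PP size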

I hope this helps.

bakunin

Hi Bakunin,

Thanks for the reply, and sorry for the late response.

I still don't fully understand your point here. Yes, the disks are LUNs. What do you mean by recreating at most 3-4 of these disks? I guess you mean creating a LUN which spans 3-4 disks like this, then doing any increase on the storage side and reflecting the LUN growth with chvg -g?

I understand from your points that we cannot convert that VG (it's a big VG) to a scalable one, so as to avoid the "backup, recreate, restore".

But please see the following. On another server (to illustrate my thinking), below is a big VG; I can see it has the parameter "MAX PPs per PV":

[root@abc] / > lsvg vg_abc
VOLUME GROUP:       vg_abc                     VG IDENTIFIER:  00c1cc1400004c0000000128174b7c57
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      19188 (1228032 megabytes)
MAX LVs:            512                      FREE PPs:       3826 (244864 megabytes)
LVs:                10                       USED PPs:       15362 (983168 megabytes)
OPEN LVs:           7                        QUORUM:         7 (Enabled)
TOTAL PVs:          12                       VG DESCRIPTORS: 12
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         12                       AUTO ON:        yes
MAX PPs per VG:     130048
MAX PPs per PV:     2032                     MAX PVs:        64
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no

But on the scalable VG, the parameter "MAX PPs per PV" seems to have disappeared:

[root@abc] / > lsvg vg_pa3
VOLUME GROUP:       vg_pa3                    VG IDENTIFIER:  00fa6d1200004c0000000169ec4b083d
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1599 (204672 megabytes)
MAX LVs:            256                      FREE PPs:       1599 (204672 megabytes)
LVs:                0                        USED PPs:       0 (0 megabytes)
OPEN LVs:           0                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no

--> Based on this, I think that if we convert the big VG to a scalable VG, we can override that setting; in other words, we can add more disks of any size (without being limited by 2032 or 3048 PPs per PV).

What do you think about that?

Fair enough. I could perhaps have done a better job in explaining what i mean:

You cannot convert a classic or a big VG into a scalable VG. At least not in a sensible way, because you cannot change the PP size of a VG and you cannot change the disk layout. Therefore you need (or, if you like that wording better: i strongly suggest) to back up the old VG, recreate it anew and then restore it.

The thing is: you have right now 29 disks (a "LUN" *is* a disk, albeit a virtual one) or, rather, PVs in your VG. (I use the terms "disk", "LUN" and "PV" synonymously here, as a "LUN" from the storage POV is a "disk" - or, rather, a "hard disk device" - from the OS POV and a PV from the LVM's POV.) The maximum number of PVs is 32, and although this has changed in scalable VGs you still want a manageable number of disks in your system. I have administered systems with 1500 hdisks and it is a nightmare - avoid that at all costs. When you recreate the VG it therefore makes sense to create - instead of 29 LUNs totalling ~5.9TB - the smallest possible number of LUNs with the same total amount of storage and build the new VG there. Depending on whether your LVs are mirrored or not, whether your LUNs come from a single storage box or several (this is sometimes done for load-balancing reasons), etc., you want to end up with 3-4 (or maybe two or even one) LUNs for the whole new VG.

Most storage systems offer the possibility to increase LUNs in size. If your system does, you can use chvg -g to make the VG aware of such an increased LUN instead of adding new LUNs to the VG to accommodate increased storage demands over time. I suppose the 29 disks you have right now have grown over time by adding one or two disks at a time. You should stop that in the future and instead increase the size of the LUNs, so you won't hit the 32-disk barrier again.

Really? When you recreate the VG, note that there is a parameter "MAX PPs per VG" (or similar, i have no AIX system at hand to check) which you can set. Note that this is done in steps of 1024, so the value you set there is multiplied by 1024 to give the actual maximum. The maximum number you can enter is IIRC 2048, giving you 1024x2048 ~ 2 million PPs per VG.

I hope this helps.

bakunin

Hi Bakunin,
Like you said, we can increase a LUN's size on the storage side by adding one or more disks to the LUN.

I have one concern here: currently the max PPs per PV is limited to 2032, i.e. 2032 PPs x 128 MB = 260 GB. Because of this limitation, I cannot add a 1 TB PV, which would need more than 2032 PPs.

If we grow the LUN in size, for example to 500 GB in total, it will breach the 2032-PP limit. Is that still acceptable from the VG side?

Yes. Exactly.

I copy it here again. Note that the info below is taken from another machine, which has both a big VG and a scalable VG.
The big VG has two parameters, "MAX PPs per VG" and "MAX PPs per PV":

[root@abc] / > lsvg vg_abc
VOLUME GROUP:       vg_abc                     VG IDENTIFIER:  00c1cc1400004c0000000128174b7c57
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      19188 (1228032 megabytes)
MAX LVs:            512                      FREE PPs:       3826 (244864 megabytes)
LVs:                10                       USED PPs:       15362 (983168 megabytes)
OPEN LVs:           7                        QUORUM:         7 (Enabled)
TOTAL PVs:          12                       VG DESCRIPTORS: 12
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         12                       AUTO ON:        yes
MAX PPs per VG:     130048
MAX PPs per PV:     2032                     MAX PVs:        64
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no

But with the scalable VG we have only "MAX PPs per VG"; the "MAX PPs per PV" parameter has disappeared. The PP size still remains unchanged, like you said (it can never be changed without recreating).
This disappearance of "MAX PPs per PV", according to my thinking, removes the limit on PPs per PV; it means that we can add any PV regardless of its size/PPs, while the PP size stays unchanged.

[root@abc] / > lsvg vg_pa3
VOLUME GROUP:       vg_pa3                    VG IDENTIFIER:  00fa6d1200004c0000000169ec4b083d
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1599 (204672 megabytes)
MAX LVs:            256                      FREE PPs:       1599 (204672 megabytes)
LVs:                0                        USED PPs:       0 (0 megabytes)
OPEN LVs:           0                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no

OK, now i understand what you mean - sorry, my bad.

Yes, big VGs still have a limitation on how many PPs can reside on a single PV AND a restriction on how many PVs can be in a VG, although these limits are higher than in a classic VG. The scalable VG has only one upper limit, and that is the number of PPs overall, in the whole VG. There is no limit on how many PPs can be on a single PV any more. You can set the maximum number of PPs for the whole VG at the creation of the VG, and my suggestion is to set it to the maximum possible of ~2 million PPs, as i told you already.
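
A minimal sketch (disk name hypothetical; if i remember the flag correctly, mkvg's -P takes this maximum in units of 1024 PPs):

[root@xxx] / > mkvg -S -s 1024 -P 2048 -y newvg hdisk40    # scalable VG, 1GB PPs, max 2048x1024 ~ 2 million PPs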

Ahem, no. A "LUN" already consists of many disk snippets (that is done inside the storage box), but it is presented as a single homogeneous disk device to the OS. You can make such LUNs bigger (at least with certain storage systems) and this will - to the OS - look like the attached hard disk just grew in size. You do not "add another disk" or anything like that. To make use of such a grown disk, first run cfgmgr to make the OS aware of the new size, then run chvg -g to make the LVM aware.

This is why i told you to recreate the VG: a PP size of 128MB is just ridiculous. Do yourself a favour and, when you recreate the VG, select a PP size of 1GB or even 2GB. I mean, is there any chance you may want to change (grow or shrink) a filesystem on that VG in steps smaller than 1GB? Certainly not, i'd reckon. Therefore there is no reason to have such a small PP size at all.

Such a recreation of the VG (and the filesystems it contains) will give you the opportunity to rectify the filesystems too: if you still have JFS filesystems, recreate them as JFS2; if you still use external log volumes, change the new FSes to use inline logs.
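
For example, a new JFS2 FS with an inline log could be created like this (VG name, mount point and size hypothetical):

[root@xxx] / > crfs -v jfs2 -g newvg -m /somefs -a size=100G -a logname=INLINE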

I hope this helps.

bakunin

Hi Bakunin,

So we can convert it to a scalable VG and then add the 1 TB disk later?

Regarding the PP size: yes, it's ridiculous. It was created by the ex-admin, and now I have inherited all the legacy :rolleyes:

NO!! Again, for the umpteenth time: you SHOULD NOT - even if you can!! You should recreate the VG on as few LUNs as possible, create new filesystems, restore the contents of the old filesystems to the new VG, then delete the old VG and remount the FSes. If you add another "1TB disk later" you will have 30 disks in the VG (instead of the 29 that are already in it). This is unmanageable in the long run (and even in the short term). Instead of learning how to use a crutch more effectively you should consider learning how to walk.

Of course you can do whatever you want, but ask yourself: does it make sense to even ask if you are going to disregard what you are being told anyway?

I hope this helps.

bakunin

Hi Bakunin,

I understand your recommendation. My aim was to find out whether we can go this way technically :slight_smile:
For the current situation there is no need to add a new disk to that VG; we just needed to create a new VG, and that's all. The scalable VG with a 1 GB PP size has been created :slight_smile:

I have one more point: can you explain the IN BAND value and how its percentage is calculated? The docs say it is used to calculate a disk usage percentage, but somehow I can see 0% or 100% even though the disk already has data on it and still has free PPs.

[root@xxx] / > lslv -l fslv03
fslv03:/export/images
PV                COPIES        IN BAND       DISTRIBUTION
hdisk19           1599:000:000  20%           320:320:319:320:320
hdisk20           1599:000:000  20%           320:320:319:320:320
hdisk21           1599:000:000  20%           320:320:319:320:320
hdisk22           1599:000:000  20%           320:320:319:320:320
hdisk23           1599:000:000  20%           320:320:319:320:320
hdisk24           1599:000:000  20%           320:320:319:320:320
hdisk25           1599:000:000  20%           320:320:319:320:320
hdisk26           1599:000:000  20%           320:320:319:320:320
hdisk17           968:000:000   18%           320:178:319:151:000

[root@xxx] / > lspv -l hdisk21
hdisk21:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv03                1599    1599    320..320..319..320..320 /export/images

[root@xxx] / > lspv hdisk21
PHYSICAL VOLUME:    hdisk21                  VOLUME GROUP:     vg_3
PV IDENTIFIER:      00fa6d12ed215e81 VG IDENTIFIER     00c1cc1400004c0000000128174b7c57
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            64 megabyte(s)           LOGICAL VOLUMES:  1
TOTAL PPs:          1599 (102336 megabytes)  VG DESCRIPTORS:   1
FREE PPs:           0 (0 megabytes)          HOT SPARE:        no
USED PPs:           1599 (102336 megabytes)  MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  320..320..319..320..320
MIRROR POOL:        None