Upgrading rootvg disks on the fly.

I'm looking for a way to upgrade disks containing my rootvg volume group on the fly without a reboot.

Currently, rootvg contains two 74 GB drives in RAID 10. What I want to do is swap them out one by one with 146 GB drives, then expand the volume group. I've done this on a test system before: the new drives are recognized as having the larger capacity, but I can't extend the volume group to use the extra space. lsvg rootvg still shows the VG as 74 GB.

I've tried simply using 'chvg -g rootvg', but I get the following error...

# chvg -g rootvg                                                                     
0516-1382 chvg: Volume group is not changed. None of the disks in the                
        volume group have grown in size.                                             
0516-732 chvg: Unable to change volume group rootvg.                                 
#                                                                                    

I can't seem to find a way to extend the actual RAID volume past the original 74gb.
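For what it's worth, chvg -g can only succeed once AIX itself reports the disk (here, the RAID controller's logical disk) as larger. A quick sanity check, assuming the array shows up as hdisk0:

```shell
# Check the size AIX currently sees for the array (hdisk0 is an assumption).
bootinfo -s hdisk0              # reported size in MB
getconf DISK_SIZE /dev/hdisk0   # should agree with the above; if both still
                                # show ~74 GB, the RAID controller has not
                                # grown the array, and chvg -g has nothing to do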

Hi,

Please provide the output from lsvg -l rootvg and lsvg -p rootvg.

regards

I removed my post since you changed the whole subject... why didn't you say in the first place that you were on RAID 10?

Well, assuming that you have:-

  • hot-plug disks
  • spare slots to put new ones in that you can boot from
  • current disks are hdisk0 & hdisk1

If you insert a new disk and run cfgmgr -S does it discover it? Let's refer to these as hdisk100 and 101.

extendvg rootvg hdisk100 hdisk101

migratepv hdisk0 hdisk100
bosboot -ad hdisk100
bootlist -o -m normal hdisk1 hdisk100        # Just in case we lose the other disk at the wrong moment.

migratepv hdisk1 hdisk101
bosboot -ad hdisk101
bootlist -o -m normal hdisk100 hdisk101

reducevg rootvg hdisk0 hdisk1
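One caveat when moving to bigger disks (an assumption about the setup, not something shown so far in the thread): a VG defaults to a maximum of 1016 PPs per PV, so a 146 GB disk can exceed that limit at the current PP size and extendvg will refuse it. chvg -t can raise the factor first:

```shell
# Only needed if extendvg complains about the 1016 PPs-per-PV limit.
# The factor value (2) is an example; pick the smallest factor that
# accommodates the new disk at the VG's current PP size.
chvg -t 2 rootvg
extendvg rootvg hdisk100
```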

I hope that this helps

Robin
Liverpool/Blackburn
UK

---------- Post updated at 03:54 PM ---------- Previous update was at 03:52 PM ----------

Oh great, RAID disks. What do you see as rootvg from AIX then? If it's a single protected disk, then it's all handled by the RAID manager and it's not an AIX question.

It would have been nice to know this first.

I await the output from the request by -=XrAy=-

I'm going to try messing around with the migratepv command. I didn't think about doing that.

Q: how do you manage to do RAID with only 2 disks?

# lsvg -l rootvg                                                                     
rootvg:                                                                              
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT        
hd5                 boot       1       1       1    closed/syncd  N/A                
hd6                 paging     4       4       1    open/syncd    N/A                
hd8                 jfs2log    1       1       1    open/syncd    N/A                
hd4                 jfs2       3       3       1    open/syncd    /                  
hd2                 jfs2       21      21      1    open/syncd    /usr               
hd9var              jfs2       5       5       1    open/syncd    /var               
hd3                 jfs2       1       1       1    open/syncd    /tmp               
hd1                 jfs2       1       1       1    open/syncd    /home              
hd10opt             jfs2       1       1       1    open/syncd    /opt               
hd11admin           jfs2       1       1       1    open/syncd    /admin             
hd7                 sysdump    3       3       1    open/syncd    N/A                
fwdump              jfs2       1       1       1    open/syncd    /var/adm/ras/platform
livedump            jfs2       2       2       1    open/syncd    /var/adm/ras/livedump
paging00            paging     19      19      1    closed/syncd  N/A                
# lsvg -p rootvg                                                                     
rootvg:                                                                              
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION        
hdisk0            active            531         467         103..99..53..106..106    
#       
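As a side note, the capacities in that output are just PP counts times the PP size, so you can sanity-check what AIX thinks the disk holds. A sketch, with the PP size assumed to be 128 MB since it isn't shown here (read the real value from the "PP SIZE" field of lsvg rootvg); the sample line is copied from the output above, and on the live system you would pipe lsvg -p rootvg straight into the awk instead:

```shell
# Convert the PP counts from `lsvg -p rootvg` into megabytes.
# pp_size_mb=128 is an assumption; take the real value from `lsvg rootvg`.
pp_size_mb=128
sample='hdisk0            active            531         467         103..99..53..106..106'
printf '%s\n' "$sample" |
  awk -v pp="$pp_size_mb" '{ printf "%s: total %d MB, free %d MB\n", $1, $3*pp, $4*pp }'
```

With a 128 MB PP size that works out to roughly 66 GB total, which is about what a "74 GB" drive delivers after formatting and VG overhead.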

There won't be available drive slots on the systems I plan on doing this on. This is why I was hoping it would be as easy as swapping out the drives in the array one by one.

I see only one disk here, no RAID 10... I'm missing something...

Because if this is the case, what I submitted in the first place is valid... Start by adding ONE new disk to rootvg and get it accepted (you will surely have to modify the PP size for the group), then just do a mirrorvg... Once done, split the mirror and remove the old 74 GB disk - not physically: properly, using the appropriate commands - so you can add your second 146 GB disk to the VG and mirror again...
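The mirror-and-split approach described above could look something like the following sketch, where hdisk1 is a hypothetical name for the new 146 GB disk (adjust names and verify each step on your system):

```shell
extendvg rootvg hdisk1               # add the new disk to rootvg
mirrorvg rootvg hdisk1               # mirror every LV onto it (syncs by default)
bosboot -ad hdisk1                   # make the new disk bootable
bootlist -o -m normal hdisk1 hdisk0  # keep both disks bootable during the swap
unmirrorvg rootvg hdisk0             # drop the copies on the old 74 GB disk
reducevg rootvg hdisk0               # remove it from the VG
bootlist -o -m normal hdisk1         # boot only from the new disk
```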

Thanks. I'll give this a try today.

I apologize for forgetting to mention that this is a RAID setup.

So, if you have a disk failure, what do you see and where? You surely have something to alert you and manage the replacement.

When you replace a disk, do you get a notification when RAID is back at the required redundancy?

With 2 disks in a RAID set, I'd suggest that is RAID 1 (full disk mirror)

Robin

The disks are in RAID 1; AIX's RAID manager calls it RAID 10 for whatever reason. When a drive fails, we get notifications from diagela and/or the HMC.

OK, before we beat around the bush any further, can you answer the questions below:

  1. Are the disks internal SAS, or are they SAN disks?
    lsdev -Ccdisk

  2. If they are internal SAS disks, who created the RAID? You, as admin, using diag?

  3. If they are SAN disks, the responsibility belongs to the SAN admin and NOT you.

Provide the output of the commands below:

lspv
bootinfo -s <rootvg hdisk>
bootinfo -s <new empty hdisk>