Extend LVM root partition

Hi,

I have a Red Hat 5.9 server and need to extend the / partition. The server is running on a VMware host. The VMware team has only extended the existing disk, adding 100G. At the OS level the disk size is still the same 1TB. How do I get the OS to see the new size so I can extend it? /dev/sda is LVM. I did echo "- - -" > /sys/class/scsi_host/host0/scan . I think I need to rescan the disk and then pvresize. Please help.


Did the scan work? Does the new total disk size show up in fdisk -l and lsblk?
Please post the output of those two, and of the pvs command, so we can decide how to proceed.
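If the new size does not show up, something like this usually makes the kernel notice a grown disk (a rough sketch; the sda path is an assumption for your box):

# rescan every SCSI host for new devices
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
# re-read the size of the existing, already-known disk
echo 1 > /sys/block/sda/device/rescan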

The scan didn't work, and the lsblk command is not available in RHEL 5.9. Here's the fdisk -l output:
[screenshot: fdisk -l output]

Hmm, I think the scan has worked: there are more cylinders on sda than are used by the partitions sda1 and sda2.
My knowledge is a bit rusty. Perhaps you should run
parted -l
which displays the same information in better units.
Further, in an interactive parted you should be able to grow sda2. After that you can indeed run pvresize on sda2, then lvextend on the LV with the / filesystem, then grow the filesystem (a rough command sequence is sketched at the end of this post).
I don't understand why lsblk does not work, is there an error message?
Do
pvs
vgs
lvs
work, and what is the output?
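Roughly this, all in all (an untested sketch; VolGroup00/LogVol00 and the size are assumed examples, check your names with vgs, lvs and df -T):

# first grow /dev/sda2 in an interactive fdisk or parted, then:
pvresize /dev/sda2                            # let LVM see the grown partition
vgs                                           # VFree should now show the new space
lvextend -L +100G /dev/VolGroup00/LogVol00    # grow the root LV (size is an example)
resize2fs /dev/VolGroup00/LogVol00            # grow the ext3 filesystem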

lsblk gives "command not found"..
pvs, vgs and lvs output below.. Looks like the disk was updated?? They have just added 100G to get 150G free.
[screenshot: pvs output]
[screenshot: vgs output]
[screenshot: lvs output]

The end of /dev/sda2 is at cylinder 150122 and fdisk sees the disk as 187144 cylinders. So it looks to me like the disk may have been extended. (I think I see the very top of a prompt peeking up from the bottom of the image.)

I don't know how well resizing /dev/sda2, and the PV that lives on it, will go.

I've seen people add an additional partition (/dev/sdX3, say), turn that new partition into a PV, add said PV to the VG, grow the LV, and grow the FS.
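Something like this, roughly (a sketch only; /dev/sda3 and VolGroup00 are assumed names):

fdisk /dev/sda                   # create a new /dev/sda3 in the new free space, type 8e (Linux LVM)
partprobe /dev/sda               # or reboot, so the kernel sees the new partition
pvcreate /dev/sda3               # turn the new partition into a PV
vgextend VolGroup00 /dev/sda3    # add the PV to the root VG
# then lvextend the LV and grow the filesystem as usual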

I can't tell what type of partitioning scheme you're using. I don't know if it's DOS / BIOS or if it's GPT.

DOS / BIOS partitioning only supports four primary partitions. One of those primary partitions can be an extended partition. You can put many logical partitions into the extended partition.

This becomes very germane if you want to keep adding space to the end of the drive and growing the root VG. Once you add your 4th primary partition, you can't add any more. You're stuck unless you do a dance to re-arrange things. You are better off making /dev/sda4 an extended partition and creating a logical inside of it for growth. You can also start with /dev/sda3 now, so you don't forget about the problem down the road.

Once you go with the extended partition, you'd grow the extended partition to encompass the free space appended to the end of the drive, and then create a new logical partition in the free space between the last logical and the end of the extended partition. This type of growth can be done 10-20 times without much effort.
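As a sketch (partition numbers are assumptions; adjust to what fdisk actually shows):

fdisk /dev/sda
# n, e, 4   -> create /dev/sda4 as an extended partition spanning the free space
# n, l      -> create /dev/sda5 as a logical partition inside it
# t, 5, 8e  -> mark sda5 as Linux LVM
# w         -> write the table, then partprobe or reboot
# later: grow the extended sda4 (e.g. with parted) and add sda6, sda7, ... as new logicals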

Conversely if you don't use extended and logical partitions, you will get trapped at four primary partitions in a DOS / BIOS partition table.

Or just add a new LUN instead of extending the existing one in future, and issue up to 3 commands total :slight_smile:
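For a new LUN that shows up as, say, /dev/sdb (the device name and size are assumptions), roughly:

pvcreate /dev/sdb                             # make the new disk a PV
vgextend VolGroup00 /dev/sdb                  # add it to the root VG
lvextend -L +100G /dev/VolGroup00/LogVol00    # grow the LV
# newer LVM can also grow the filesystem in the same step (lvextend -r); on a box this old you'd still run resize2fs afterwards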

Yes, that can work.

However I strongly dislike spanning the root volume group across multiple disks. I view it as making the system more fragile as the OS is now dependent on more disks.

I'm perfectly fine with data volume groups spanning disks. But the root volume group is special and can render a system unbootable if one of the disks in the root volume group is missing.

Why create a new sda3 partition? (More metadata.)
Just grow the sda2 partition!
Start an interactive fdisk or parted for this.

Then update the PV on sda2 with
pvresize /dev/sda2
Check with pvs that the PV is grown.
Check with vgs that the new space is available in VolGroup00.

Then you can lvextend the desired LV to the available maximum (or less, leaving free space for a future lvextend). Check with
df -T
which LV holds the / filesystem.

The last step is to grow the filesystem on the just-extended LV.
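On RHEL 5 that root filesystem is typically ext3 (df -T shows the type); if so, resize2fs should be able to grow it while mounted, e.g. (the LV path is an assumed name):

resize2fs /dev/VolGroup00/LogVol00
df -h /    # confirm the new size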

For this I will be executing pvresize /dev/sda2 and verifying? No need for a restart? Okay, will do this.

It didn't work, so I restored my snapshot, then allocated the space to a new /dev/sda3 and extended the VG and LV... Thank you all for the help.

I started using LVM back when you couldn't grow an active PV.

If you can grow an active PV (after growing its container), cool!

#TIL

It really depends on the manner in which the new disk is used in the root VG.

If we just create a logical volume mystuff in the root VG with PEs from the new disk, nothing bad would happen, operating-system-wise, if we lost it.
The VG would complain and fstab would need to be modified, but the system would be far from unbootable, especially if the mount options for such a 'non-system LV' in the root VG are set to 'nofail', or automount is used for it.

Even if you used it for OS layout stuff, 'unbootable' would be far-fetched.

Broken, yes, but unbootable, not likely. It would really depend on the time passed, how many writes hit that specific filesystem, the separation of logical volumes (/opt, /, /var etc.) and the like.

But I do agree with the practice of keeping OS stuff in the OS VG, and having separate VG(s) for data.
In today's world of linked VM clones, deduplication etc., keeping the OS part separate can save you resources and trouble in the long run.
It also offers flexibility in other day-to-day operations, such as migrations to new disks (pvmove), or just attaching the disks to another box and doing a simple import.

The OS today should be recyclable; the only thing that matters is the data anyway, and even that... databases can be recreated from a primary, apps should be rebuilt from version control, etc.
But that is another topic :slight_smile:

Regards
Peasant.

I want to agree. However, I have concerns about whether the root VG itself would activate / vary on / vgchange -a (nomenclature?) if one of the underlying PVs is offline. I suspect there might be a way to get it to activate, but I highly doubt that standard init scripts will do it. You'd probably end up in recovery mode and need intervention.
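For reference, LVM does have an escape hatch there, though it is manual-intervention territory rather than something the stock init scripts do (a sketch; VolGroup00 is an assumed name):

vgchange -ay --partial VolGroup00    # activate the VG despite a missing PV
lvs -o +devices VolGroup00           # see which LVs have PEs on the missing PV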

VG activation aside....

I agree that if all of the root LV's PEs are within the surviving PV, then the LV and its file system should be accessible.

However I think that it's going to be too easy and too likely that someone will end up extending the root LV and FS such that PEs are on the missing PV.

It's going to be dependent on if all of the root LV's PEs are in the first PV or not.

I think that you will be very hard-pressed to find init scripts that will deal with both activating a VG that is missing a PV and using an LV that's missing PEs from the missing PV. -- This shouts, nay screams, expert mode to me. And I consider most init scripts to be decidedly non-expert mode.

Yep, where the PEs for the LVs live.

+10

Eh ... I get paid for the care and feeding of pets for a living.

I agree that having throw-away OSs is laudable. But I maintain that it's not always possible to do. What's more, I'll stipulate that it shouldn't always be the goal. I'll give you 80% of the time. But I'm keeping the other 20%.

find /sys|grep sda.*block

/sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/queue/physical_block_size

/sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/queue/logical_block_size

echo "- - -" > $(find /sys|grep /0:0:0:0/.*rescan)

The 0:0:0:0 is from the find /sys output above...

Check with fdisk -l /dev/sda whether the disk has the right size.

When /dev/sda has no partitions (see the fdisk output):

pvresize /dev/sda

lvextend ...

However, when /dev/sda has partitions it is different:

create a vmware snapshot

start fdisk

delete only the last partition WITHOUT leaving fdisk (sda2)

recreate the last partition again, starting at the same cylinder as before (it will be bigger; see the sketch after these steps)

save fdisk

reboot the server

remove snapshot when okay.

then the resize etc...
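The fdisk dialogue would look roughly like this (a sketch; partition number and LVM type are assumptions, and the new sda2 must start at the same cylinder as the old one):

fdisk /dev/sda
# p          -> print and note the current start cylinder of sda2
# d, 2       -> delete sda2 (the data on disk is untouched)
# n, p, 2    -> recreate sda2 with the same start cylinder and the default (larger) end
# t, 2, 8e   -> keep the partition type Linux LVM
# w          -> write the table, then reboot
pvresize /dev/sda2    # after the reboot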
