Move data from one PV to another inside the same volume group

I have a 4-node HP-UX cluster with Serviceguard, and I need to move an application's data to another physical disk. The application data is in /myapp/datafile (mount point), which is mounted from /dev/vg_myapp/lv_myapp_datafile.

mount point = /myapp/datafile
logical volume = /dev/vg_myapp/lv_myapp_datafile
volume group = /dev/vg_myapp
PV Name = /dev/disk/disk432

My goal is:

  1. to add a new physical volume (e.g. /dev/disk/disk555) to the volume group /dev/vg_myapp

  2. to move the existing data from /dev/disk/disk432 to /dev/disk/disk555

  3. to remove the old /dev/disk/disk432

This way my application should see no changes, because the changes are applied at a lower level, i.e. the volume group and the physical device.
What do you think? Is it necessary to stop the application to do this job?

Can you suggest which commands I need for these three steps, please?

Welcome!

With a licensed Veritas Volume Manager and a VxFS filesystem it should be possible to migrate data from any location to any other location, and even to change the structure (e.g. the RAID level).

But I guess you have the standard one (also called Veritas Volume Manager Lite, part of HP-UX).

Within one volume group it should be possible to mirror the data and then reduce the mirror so that only the new disk is kept. See the chapter Using LVM Mirroring in the HP-UX LVM documentation.

(Yours is only one logical volume and one disk. Step 4 of that procedure starts the mirroring and will take a while.)
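If you go that route, the command sequence would look roughly like this (a minimal sketch, assuming MirrorDisk/UX is installed and the new disk is already visible on the node; device names are taken from your post):

pvcreate /dev/rdisk/disk555                                       # initialize the new disk (raw device path)
vgextend /dev/vg_myapp /dev/disk/disk555                          # add it to the volume group
lvextend -m 1 /dev/vg_myapp/lv_myapp_datafile /dev/disk/disk555   # mirror the LV onto the new disk; copies every extent, takes a while
lvreduce -m 0 /dev/vg_myapp/lv_myapp_datafile /dev/disk/disk432   # once in sync, drop the copy on the old disk
vgreduce /dev/vg_myapp /dev/disk/disk432                          # remove the old disk from the volume group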
But this will not increase the filesystem size; that is a separate task and depends on the filesystem type. Check your filesystem type in /etc/fstab or with the command
mount | grep /dev/vg_myapp
(Perhaps fsadm -F fstype ... is the front end for all filesystem types; I have no experience with it.)
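For what it's worth, an online grow on VxFS would look roughly like this (a sketch only: it needs the OnlineJFS product, and the sizes are made-up examples; grow the LV first, then the filesystem):

lvextend -L 2048 /dev/vg_myapp/lv_myapp_datafile   # grow the LV to 2048 MB
fsadm -F vxfs -b 2097152 /myapp/datafile           # grow the vxfs filesystem to match (size in 1 KB blocks)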


This should also be doable using pvmove on HP-UX systems:
vgextend the volume group with a disk of the same size as or larger than the existing one.
Do a pvmove and wait.
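In its simplest form that is (a sketch; run it on the node that has the volume group activated):

vgextend /dev/vg_myapp /dev/disk/disk555     # add the new disk to the volume group
pvmove /dev/disk/disk432 /dev/disk/disk555   # migrate all extents off the old disk; this can take a while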

Be sure to add the new disk to all cluster members and follow the Serviceguard HP-UX docs to the letter.
An informative example:
https://support.hpe.com/hpesc/public/docDisplay?docId=c01053585&docLocale=en_US

Of course, you will not be extending or creating everything mentioned there; it is just to give you an overview of how it is done.

Once all the nodes see the new disk and the volume group has been re-imported with the new definition, you are free to do the pvmove on the active node (the one holding the lock for the volume group).
After that, remove the old disk (vgreduce) and do the same update of the VG info on all nodes (via vgimport, to update /etc/lvmtab).
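The VG info update would go roughly like this, assuming the usual map-file procedure (check the exact flags against the Serviceguard docs for your version; the minor number below is only an example):

# On the active node: write a map file that includes the new disk (-p previews, nothing is removed)
vgexport -p -s -m /tmp/vg_myapp.map /dev/vg_myapp
# Copy /tmp/vg_myapp.map to each standby node, then on each of them:
vgexport /dev/vg_myapp                             # drop the stale VG definition
mkdir /dev/vg_myapp                                # recreate the group file; the minor number must
mknod /dev/vg_myapp/group c 64 0x010000            #   match across nodes (0x01 here is an example)
vgimport -s -m /tmp/vg_myapp.map /dev/vg_myapp     # re-import with the new layout; this updates /etc/lvmtab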

It's not a trivial task; it requires an understanding of Serviceguard cluster operations, of LVM in general, and of how the two interoperate. But it can be done without downtime if executed precisely.
The whole point of LVM is to allow such operations without any downtime, even in clustered environments.

Confirming the configuration here is key (read the Serviceguard docs for your operating system and SG version!), since you do not want a failover to fail months after you did something wrong on the 'live' node :wink:

Regards
Peasant.
