Mirrorvg to multiple different disks

Hi All,
I have a VG that consists of 2 physical disks (PVs). To migrate the VG onto 2 new disks (presented from a different SAN storage), I want to use the mirrorvg command, but I am not sure the command I use is correct.

 hdisk1  appvg
 hdisk2  appvg
 hdisk3  none    
 hdisk4  none

  mirrorvg appvg hdisk3 hdisk4 

Could someone assist me in verifying this? Logically it should automatically distribute the contents of the source disks to the target disks accordingly.

Thanks

Hi All,
I have a DB2 DPF database running active/active using HACMP.
I would like to migrate the volumes onto different hdisks using mirrorvg, but I understand that HACMP requires the use of its own mirroring method rather than the plain AIX mirrorvg command.

Could someone share the steps to perform a mirrorvg under HACMP?

Thanks.

Close, but not quite:

First, use extendvg <VG> <hdisk> to add the disk(s) to the VG. You can only mirror to disks already present in the VG.
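
For example, with the disk names from your post, the first step would be:

extendvg appvg hdisk3 hdisk4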

Second: use the command you suggested. Note, however, that you have to watch out for some gotchas:

You may need to change the quorum of the VG. The quorum is the number of PVs that need to be available for a varyonvg to succeed; with everything mirrored across two disks you usually do not want the loss of a single disk to take the whole VG offline.

Additional restrictions apply if the VG is part of a resource group in an HA cluster, but I suggest you do NOT try to configure such a cluster if you have no firm grasp of AIX concepts. Do yourself a favor and hire an expert for such a task or, if you have enough time for this, first learn the AIX/HACMP concepts to the point where you are familiar with them. It's like driving a car: in principle not too complicated, but try to do it without training and you are likely to cause some damage. There are a lot of valuable resources for this (I suggest reading the freely available IBM Redbooks, which can be downloaded in PDF format), but you need to get familiar with these first; everything else is asking for trouble.
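
A minimal sketch of the quorum change, assuming you decide the VG should stay online even with only one of the two mirror sides available (check this against your own availability requirements first):

chvg -Qn appvg            # disable quorum checking for appvg
lsvg appvg                # the QUORUM field shows the current setting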

Basically, the LVM works like this: you have "physical volumes" (PVs), which are logical disks. Each PV is a member of one and only one VG. Upon adding a disk to the VG (by which it becomes a PV), it is chopped into small pieces (PPs, "physical partitions"), and these pieces can be added to "logical volumes" (LVs). Onto LVs you can put filesystems or all sorts of other things: swap devices, JFS log devices, ... .

In principle these LVs consist of "logical partitions" (LPs), which are the same size as PPs. Normally every LP is represented by exactly one PP. In case of mirroring you have a 1:2 (or even 1:3) ratio and every LP is represented by 2 (or 3) different PPs, each with the same content. Mirroring is done on LV level, and you can have mirrored LVs alongside unmirrored ones within the same VG. The mirrorvg command just executes the mirror process for each LV automatically.
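
To illustrate: under the hood, mirrorvg essentially runs mklvcopy for every LV in the VG and then synchronizes the copies. A sketch with a hypothetical LV name lv00:

mklvcopy lv00 2 hdisk3    # raise the number of copies of every LP of lv00 to 2, placing the new copies on hdisk3
syncvg -v appvg           # synchronize the stale (not yet copied) partitions of the whole VG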

Also in principle, PPs are anonymous and you do not have to care which disk (PV) they come from. However, the selection process for allocation as well as for mirroring can be influenced (e.g. you can forbid all copies of an LP from ending up on the same disk) via the LV properties: the "intra-policy" and "inter-policy" (quite a poor choice of naming) plus the strictness setting.
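
You can inspect and change these policies per LV. Again a sketch with the hypothetical LV name lv00:

lslv lv00                 # shows INTER-POLICY, INTRA-POLICY and "EACH LP COPY ON A SEPARATE PV ?"
chlv -s y lv00            # strict allocation: forbid two copies of the same LP on one PV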

Now, this was a (very, very short) introduction to LVM concepts, but it probably left more questions than it provided answers. For the LVM, too, there is a Redbook which I suggest you read.

I hope this helps.

bakunin

Hello, to answer the question in your first post:
Make sure the attributes of the new disks are set the same as those of the existing PVs.
First you need to extend the VG:

extendvg appvg hdisk3 hdisk4
mirrorvg appvg hdisk3 hdisk4  (you can also use the -S flag to sync in the background)
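
For the disk attributes mentioned above, a quick comparison could look like this (a sketch; queue_depth and reserve_policy are just two commonly checked attributes, adjust to your disk driver):

lsattr -El hdisk1 -a queue_depth -a reserve_policy    # existing PV
lsattr -El hdisk3 -a queue_depth -a reserve_policy    # new disk, should match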

Answer to the 2nd post:
To the best of my knowledge HACMP is active/passive. I have never worked with PowerHA SystemMirror 7.1 and am not sure if it has an active/active feature.

If I am not mistaken you will find it in the C-SPOC feature within HACMP; currently I have no cluster(s) at hand to point to exactly where you can do so.

Moreover, I don't remember ever doing a mirrorvg from HACMP.

I hope this helps.

HACMP (all versions, regardless of their name) has "rotating" and "cascading" resource groups.

"rotating" is active/passive: NodeA has RgA, NodeB is passive, you can switch this state, but the other node will be passive in this case.

"cascading" is active/active: NodeA has RgA, NodeB has RgB, if NodeB fails, nothing happens, if NodeA fails then NodeB will take over RgA and either shut down RgB or not.

The latter setup is used, for instance, in SAP environments, where you run production on one system and test/development on the other. Prod is HA, Test/Dev is not. Of course all this is possible with more than two nodes too, adding more complexity.

I hope this helps.

bakunin

About the first post: AIX gurus out there, could migratepv be an alternative (since the thread owner is talking about migrating PVs...)? (Just adding some salt...) Then I suppose the next step would be to remove the old PVs...

Yes, migratepv would work, but it is easier to do the mirroring in one run and then remove the mirror in a second run. Doing it with migratepv is more hassle.

You are right about removing the old PVs afterwards if you want a migration. When I wrote my first answer I got somewhat carried away and forgot the last part:

After mirroring the whole volume group (check with lsvg -l <VG>; all LVs have to be in status "syncd", and depending on sizes and SAN speed this may take a while), use unmirrorvg to remove the copies residing on the old disks.

Lastly, when this is done and the disks are indeed free, use reducevg to remove the old disks from the VG.

Finally you might use rmdev -dl <hdiskN> to remove the disk devices, detach the LUNs from the system, and then run cfgmgr again to update the configuration.
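
Putting the whole migration-by-mirroring together, a sketch using the disk names from the first post (hdisk1/hdisk2 old, hdisk3/hdisk4 new):

extendvg appvg hdisk3 hdisk4      # add the new disks to the VG
mirrorvg appvg hdisk3 hdisk4      # create a second copy of every LV on the new disks
lsvg -l appvg                     # wait until every LV shows LV STATE "syncd"
unmirrorvg appvg hdisk1 hdisk2    # drop the copies residing on the old disks
reducevg appvg hdisk1 hdisk2      # remove the now-empty old disks from the VG
rmdev -dl hdisk1                  # delete the disk devices from the ODM
rmdev -dl hdisk2
cfgmgr                            # after detaching the LUNs, rescan the configuration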

I hope this helps.

bakunin

Thanks for the reply. Can I conclude that if I mirrorvg to one disk I do not need to perform extendvg, and that it is only required for more than one disk?

No, quite the opposite: you can use the "mirrorvg" command only on disks you added with "extendvg" before.

I hope this helps.

bakunin

There is no need to mirror, sync, then un-mirror. You can:

extendvg appvg hdisk3 hdisk4
migratepv hdisk1 hdisk3
migratepv hdisk2 hdisk4
reducevg appvg hdisk1 hdisk2

The migration phases will take a long time (about the same as a mirror and sync), but the procedure is less prone to error. I'm assuming that the target disks are at least the same size as the source disks.
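
Between the migratepv and reducevg steps you can double-check that the old disks no longer hold any partitions (a sketch):

lspv -l hdisk1                    # should list no logical volumes after the migration
lspv -l hdisk2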

You may want to consider stopping the database and HA too, but test this elsewhere and see if you need to take the services down. It might be fine, as the OS maintains access to the LVs even while updates are happening. I'm just very cautious.

I hope that this helps.

Robin
Liverpool/Blackburn
UK

Here are the steps to perform HACMP mirroring.

To mirror in HACMP
------------------
smitty hacmp

System Management (C-SPOC)

  HACMP Logical Volume Management

    Shared Volume Groups

      Mirror a Shared Volume Group

        Select a volume group to mirror

          ** Select the disks to mirror onto

After this you need to issue the command to sync:

System Management (C-SPOC)

  HACMP Logical Volume Management

    Synchronize Shared LVM Mirrors

      Synchronize by Volume Group


Do take note: only select and run this for VGs that are on the active node.

You can copy out the script with F6.

This will work only with ECC (enhanced concurrent capable) VGs. Shared VGs need RW access on one node (the active one) and RO access on all the others. You can have that only with SCSI-3 (SCSI-2 would use disk reservations) and fast disk takeover, which means ECC. There are some storage systems, though (for instance the EMC CLARiiON), which will not allow concurrent access to mirrors.

In such cases you have to mirror the volume group on the active node with classic AIX means, switch the primary mirror over to the other node during a downtime, do a "learning import" (importvg -L) and then restart the cluster.
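
A sketch of the learning import on the other node, assuming the VG and disk names from the first post (hdisk3 being one of the VG's disks visible on that node):

importvg -L appvg hdisk3          # re-read the VGDA from disk and update this node's ODM copy of appvg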

I hope this helps.

bakunin