As promised, here is the second part:
First, there are some commands to add disks to and remove disks from an existing VG. You should know these:
extendvg <VGname> <hdisk> # adds the disk to the VG
reducevg <VGname> <hdisk> # removes the disk from the VG
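A hypothetical session might look like this (the disk and VG names are made up; note that reducevg only succeeds once no LPs of any LV remain on the disk, so a disk in use has to be emptied first, e.g. with migratepv):

```shell
# add a freshly discovered disk to the (hypothetical) VG "datavg"
extendvg datavg hdisk4

# before removing a disk, move all allocated PPs off it
migratepv hdisk3 hdisk4

# now the disk is empty and can be removed from the VG
reducevg datavg hdisk3
```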
Second layer: Logical Volumes (LVs)
After having created a VG you can start to create logical volumes within it. Notice that an LV is NOT a filesystem - a logical volume is just some space onto which you can put a filesystem. Another option is to use a logical volume as a "raw device" - some databases do this because bypassing the filesystem layer gives a (nowadays minuscule) performance advantage. It is also possible to put a swap space onto an LV.
Also, when creating LVs you are making some decisions which cannot be reversed later - in such a case you'd have to recreate the LV with different attributes and move the data. So, again, plan thoroughly, then revise your plans, then plan again and only then implement them.
First, like with VGs, I'd suggest a naming convention and being consistent with it. Personally I always name LVs with "lv" at the end (like the VGs have "vg" at the end) plus some hint about the usage of the LV. You can have LVs named automatically when you create them, but I suggest NOT doing that. Ending up with LVs named "lv00"-"lv127" makes it easy to lose orientation, and once you have deleted the wrong one because you confused "lv87" with "lv86" you are in deep kimchi.
Now, after so many warnings, what is an LV actually? It is a collection of so-called Logical Partitions (LPs). A Logical Partition is like a PP: it has the same size, and it can be represented by 1, 2 or 3 PPs. That means you can even have mirrored data by means of the LVM itself.
Because you can alter the number of PPs representing an LP later (even while the filesystem is online and in use), you can move LVs across disks. Suppose you have an LV residing on PPs coming from only one disk. Now you have introduced a new disk and want to replace the old one with it, so you need to move the data to the new disk somehow: you create a mirror with a second set of PPs from the new disk to represent each LP, wait until everything is synchronised, then remove the mirror copy consisting of the PPs from the old disk. Everything now is on the new disk without even so much as a downtime.
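Sketched as commands (all disk and LV names are hypothetical; to my knowledge these are the relevant commands, but check the man pages before using them on a production system):

```shell
# create a second copy of every LP of "myapplv" on the new disk hdisk5
mklvcopy myapplv 2 hdisk5

# synchronise the new copies (this can run while the FS is mounted)
syncvg -l myapplv

# once synchronised, drop the copies residing on the old disk hdisk2
rmlvcopy myapplv 1 hdisk2
```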
It is possible to completely control which PP represents which LP (via a so-called "map file"), but this is rarely done. Usually you rely on the automatisms built into the LVM and only make high-level requests about the layout of the mapping. You can request that each copy (if you have more than one PP representing each LP) resides on a separate disk, which makes sense because if you have mirrored data you don't want the mirrors to end up on the same device. This is called the "strictness" attribute.
You can also request the PPs to come from as many disks as possible (this is to engage as many disks as possible so that the load on each disk levels out) or as few as possible (this will make the layout less complicated and more easily maintainable but will come with a slight performance penalty). This is - rather unintuitively - called "inter-policy". Request "maximum" to spread the LV over as many disks as possible, "minimum" to use as few disks as possible.
You can also control where on the disk the LV is placed. The fastest part of a disk is the center, and it gets slower towards the edges because the read/write heads have the longest way to travel there. This is called the "intra-policy" and is not to be confused with the "inter-policy" from above.
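As a sketch of how these policies map to mklv options (the LV and VG names are made up; if memory serves, -s sets the strictness, -e the inter-policy and -a the intra-policy - see the man page for the authoritative list):

```shell
# create "myapplv" in "datavg": 10 LPs, 2 copies per LP,
# strict allocation (-s y: copies on separate disks),
# spread over as many disks as possible (-e x),
# placed at the center of the disks (-a c)
mklv -y myapplv -c 2 -s y -e x -a c datavg 10
```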
Notice that all these performance considerations can be skipped if you deal with LUNs from a storage box. All of the above applies only to real, physical hard disks consisting of rotating magnetic platters. It will also not apply to flash disks and the like.
Here are the most important commands for dealing with LVs:
lsvg -l <VGname> # list all LVs in the VG and their status
lslv <LVname> # show the attributes of an LV
lslv -m <LVname> # show the exact mapping of PPs to LPs in an LV
mklv # create an LV. Has a lot of options, see the man page
rmlv # remove an LV, the LV has to be closed
chlv # changes attributes of an LV, also see the man page
Third layer: filesystems
At last we come to the filesystems: you create them by basically formatting an LV, which turns it into an FS. Notice that an FS resides on an LV, but the two are different things - or rather different logical layers. It doesn't help to clarify matters that the command to create an FS (crfs) will create an LV automatically if there is none, and the command to remove an FS (rmfs) will automatically remove the underlying LV unless invoked with special options. Still, LVs and FSs are NOT the same.
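A sketch of both directions (the VG name, mount point and size are made up; the exact syntax of the size attribute can vary between AIX versions, so consult the man page):

```shell
# create a JFS2 filesystem in "datavg"; the underlying LV is
# created automatically, the log is kept inline within the FS
crfs -v jfs2 -g datavg -m /opt/myapp -a size=1G -a logname=INLINE
mount /opt/myapp

# removing the FS; note that by default this also removes the LV
umount /opt/myapp
rmfs /opt/myapp
```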
When creating FSs you don't have to consider as much as you used to: disk space is cheap and plentiful today, and you need not concern yourself at all over the waste of a few KB. Fine-tuning the number of inodes and similar things is rarely done any more because saving a few MB of space will not affect anything.
What you need to take into account, though, is whether you work in a cluster environment or not: if so, you need to make sure the information about LVs, FSs etc. is propagated consistently throughout the cluster nodes. You either do that with "learning imports" (see the importvg command) or by using the cluster commands instead of the normal commands to create or manage VGs, LVs and FSs.
When creating FSs in a cluster make sure they are NOT mounted automatically at boot time! The cluster software itself will activate them when a VG is activated on a certain node.
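If memory serves, this is controlled with the -A flag of crfs at creation time, or with the "mount" attribute via chfs for an existing FS (names are hypothetical again):

```shell
# create the FS with automatic mounting at boot disabled
crfs -v jfs2 -g datavg -m /opt/myapp -A no -a size=1G

# or switch automatic mounting off for an existing FS
chfs -a mount=false /opt/myapp
```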
Another thing you want to take into consideration is the placement of the log volume: AIX uses a journaled file system, and the JFS log has to be placed somewhere. Historically there was a special log LV for that in each VG, but nowadays with JFS2 it is better to use "inline logs", which set aside a part of the FS itself for the same purpose.
A feature you also want to use is "mount groups": you create them for groups of FSs, or, respectively, make FSs part of a mount group. Mount groups can be used to mount or unmount groups of FSs together, and they are a very practical way of making sure all FSs related to each other (like all FSs of a certain application) are mounted or unmounted together. This saves a lot of unnecessary work and headache when managing a system. If you forgot to put an FS into a mount group, just edit the file /etc/filesystems, which contains information about all the filesystems in stanza format. Here is an example:
/opt/myapp/bin:
        dev   = /dev/myappbinlv
        vol   = "binlv"
        mount = true
        check = true
        log   = INLINE
Add the line "type = <groupname>" to such a stanza to add a filesystem to a mount group, like this:
/opt/myapp/bin:
        dev   = /dev/myappbinlv
        vol   = "binlv"
        mount = true
        check = true
        log   = INLINE
        type  = myapp
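With such a stanza in place, the whole group can be mounted or unmounted in one go (group name "myapp" as in the example):

```shell
# mount every FS whose stanza says "type = myapp"
mount -t myapp

# and unmount them all again
umount -t myapp
```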
If you have questions please ask.
I hope this helps.
bakunin