New IBM POWER8 (S822) and Storwize V3700 SAN, best practices for production setup/config?

Hello,

Got an IBM POWER8 box (S822) that I am configuring to replace our existing IBM machine.

Wanted to touch base with the expert community here to ensure I don't miss anything critical in my setup/config of AIX.

Did a fresh AIX 7.1 install on the internal SCSI hdisks, mirrored the rootvg and made sure it can boot from both hdisks in case one fails, following this guide.
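
For reference, the sequence boils down to something like this (assuming the two internal disks are hdisk0 and hdisk1 - adjust to your own device names):

# extendvg rootvg hdisk1            # add the second disk to rootvg
# mirrorvg rootvg hdisk1            # mirror all LVs onto the second disk
# bosboot -ad /dev/hdisk0           # rebuild the boot image on both disks
# bosboot -ad /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1  # allow booting from either disk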

The AIX oslevel is 7100-03, going to check for any updates.

What about firmware/microcode? How does one go about updating that for the head unit?

Anything else I should consider doing before I start adding my SAN FC LUNs and moving over our application?

This is all fine, but you should be aware that set up this way you will never have the option of LPM (Live Partition Mobility). LPM means the ability to move an LPAR to another hardware box ("managed system" in IBM-speak) on the same HMC without even stopping it. For this to work you (obviously) must not have any non-virtualised components in the LPAR: disks, adapters, etc.

If you want LPM you need to create one (better: two) VIOS LPARs, give these all the physical adapters, then create virtual adapters and hand those out to the other LPARs. Instead of physical SCSI disks you usually create LUNs on the SAN, connect them to the VIOS and give them out to the LPARs as virtual SCSI (vscsi) disks. Usually this is done for boot disks; for data disks you either do the same or use FC-connected ("NPIV") disks. When you move an LPAR via LPM the vscsi disks are moved over to the VIOS on the target managed system in the process.
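
To give you an idea of what the vscsi mapping looks like on the VIOS side, here is a minimal sketch (the names hdisk5, vhost0 and lpar1_rootvg are placeholders - yours will differ):

$ lsdev -type adapter | grep vhost                        # list the virtual SCSI server adapters
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev lpar1_rootvg  # map the SAN LUN to a client LPAR
$ lsmap -vadapter vhost0                                  # verify the mapping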

That is OK. In fact some applications will prescribe exact versions anyway, so as long as your version is supported (which is the case with 7.1) you are on the safe side.

The POWER8 is new hardware, so expect the microcode to change quite often in the near future. In general you only update when you must, not when you can. It is good practice to install the very latest revision before the system goes productive, because this way you might avoid the odd downtime. Apart from that: wait until you have a support case and support advises you to update the microcode, or a software update makes it necessary. Only then install the - at that time - latest microcode. As long as you haven't got a problem related to it: leave it alone.
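
To see which level you are currently at, something like this should work on AIX:

# lsmcode -c    # show the system firmware level non-interactively
# lsmcode -A    # show the microcode levels of all supported devices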

Do you have a NIM server? If so, make sure it is at the absolute latest AIX version there is, because it can only serve systems at the same or a lower version than itself.
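
A quick way to compare, run on both the NIM master and the client:

# oslevel -s    # shows the full service pack level, e.g. 7100-03-xx-xxxx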

I hope this helps.

bakunin

Thanks, this helps. :) I will make sure the microcode/firmware is at the latest available before I move this into production.

This is a standalone box, so no LPM.


Hey bakunin,

Here is a question for you on PVs/VGs regarding my new system setup...

I have carved a 350GB LUN from my Storwize SAN and presented it to the POWER8 box via FC.

The PV hdisk2 has the following attributes (hdisk2 is the LUN I carved and presented):

# lspv hdisk2
PHYSICAL VOLUME:    hdisk2                   VOLUME GROUP:     vg_usr2
PV IDENTIFIER:      00f9af9427d70816 VG IDENTIFIER     00f9af9400004c000000014b28233d7d
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            512 megabyte(s)          LOGICAL VOLUMES:  1
TOTAL PPs:          699 (357888 megabytes)   VG DESCRIPTORS:   2
FREE PPs:           0 (0 megabytes)          HOT SPARE:        no
USED PPs:           699 (357888 megabytes)   MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  140..140..139..140..140
MIRROR POOL:        None

After some googling I understand that a normal VG is limited to 32512 physical partitions (32 physical volumes each with 1016 partitions) and 256 logical volumes.

Now I am already confused; hope you can demystify some of this...

My "hdisk2" has 699 total physical partitions (Total PPs?) based on PP Size being 512mb chunks. So this is 699 out of 32,512 or 1016?

If I changed hdisk2 to have a PP size of 256MB chunks, the Total PPs would be 1398, correct? Are there any performance benefits to making the PP size smaller vs. larger? I am trying to decide what I want my PP size to be for the PVs.

In the end I will have four enhanced JFS2 filesystems, allocated 350GB, 1TB, 1TB, and 2.45TB respectively.

You should open different threads for different questions. This here deals with LVM concepts and has nothing to do with your previous question. Let us see where this takes us; maybe I will split the thread in two for the different topics. OK, so much for organisational stuff, here is the LVM 101:

The following will be about "classic" VGs, there are also "big" and "scalable" VGs, which lift several of the restraints. Still, the basic concepts remain the same.

We start with raw disks. "Disk" here is anything with an hdisk device entry: physical disks, RAID sets, LUNs, whatever. One or more such disks (up to 32) form a "Volume Group". Every disk can be a member of exactly one VG. When the disk is assigned to a VG it becomes a "PV" (physical volume) and is formatted for use by the VG. Some information about the VG is written to it: the VGDA (volume group descriptor area) and the PVID, a unique serial number by which the LVM can recognize the disk even if it gets a different hdisk device during a reboot.

The disk is also sliced into logical chunks: the PPs (physical partitions). How big such a physical partition is, is up to you, but it cannot be changed once it is set. To change it you would have to back up your data, delete the VG (and all data in it), recreate it and restore the data.
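
For illustration, this is how you would set the PP size when creating a VG (a sketch - the VG name vg_data is just an example):

# mkvg -y vg_data -s 512 hdisk2    # -s sets the PP size in MB, fixed for the life of the VG
# lsvg vg_data                     # shows PP SIZE, TOTAL PPs, MAX PVs, etc.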

PPs are the smallest unit of disk space the LVM can deal with. On any single PV there can be up to 1016 PPs. This means that a small PP size will limit the size of the disks you can put into your VG. Roughly, the size of a single PP in MB is the maximum size in GB a disk can be: a 512MB PP size means disks of up to ~512GB. Because a VG can hold only up to 32 PVs, this means that with a PP size of 512MB your VG can grow to ~16TB and not more.
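
To make the arithmetic concrete with your numbers:

1016 PPs x 512 MB = 520192 MB  (~508 GB)   largest disk a single PV can be
32 x 1016 PPs x 512 MB         (~15.9 TB)  largest possible VG
357888 MB / 512 MB = 699 PPs               your hdisk2, well below the 1016-per-PV limit

So to answer your question: the 699 PPs count against the 1016-per-PV limit, and against the 32512 total for the whole VG.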

It is therefore wise to plan how much data the VG is going to hold in the near and not-so-near future, because the above process - backup, delete, recreate, then restore - is time-consuming.

Interlude: the "factor" is often mentioned as a remedy. Yes, it might help you in certain situations; no, it won't lift the size limit of the VG. How come? Because many VGs were planned badly, admins tried to put disks into their VGs which were too big to fit, because they had room for more than the 1016 PPs. IBM invented a workaround: you can introduce a "factor" (the command is "chvg -t <factor>") so that a multiple of 1016 PPs can reside on a PV. On the downside this reduces the number of PVs the VG can hold: with a factor of 2 a single PV can hold 2032 (2x1016) PPs (so you can put in a bigger disk), but the VG can only hold 16 (32/2) PVs. With a factor of 4 a single PV can hold 4064 PPs, but the VG is reduced to 8 possible PVs.
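
For illustration (again with the hypothetical VG name from above):

# chvg -t 2 vg_data    # up to 2032 PPs per PV now, but only 16 PVs max
# lsvg vg_data         # the MAX PVs field reflects the new limit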

This is why you should make your PP size rather on the big side than too small. Performance-wise this changes nothing. The only downside is that you will waste some space, because you have to deal with bigger chunks and a logical volume (LV) has to consist of at least one PP. Also the log LV will be one PP, regardless of what the PP size is.

After creating a VG you can create logical volumes in it. Notice that LVs are raw disk space, not filesystems. You can create an FS on an LV, but you do not have to. You can use the LV for all sorts of other things: swap space, raw devices, log LVs, etc. I will leave out the options to mirror or stripe the LV here; look them up in the documentation.
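
A sketch of creating an LV (name and size are just examples; with 512MB PPs, 100 PPs = 50GB):

# mklv -y lv_data -t jfs2 vg_data 100    # 100 logical partitions (= 100 PPs when unmirrored), typed for a later JFS2 FS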

Once you are done with the LV you can create an FS on it. Notice that there are two options for JFS2 logs: a dedicated log LV (which is created automatically when you create the first FS without an inline log) or inline logs. Inline logs are somewhat faster in most circumstances, so they are preferable for DB FSes. Speaking of DBs: stay away from raw devices, even if the DBA wants them! You get a microscopic gain in performance in exchange for a lot of trouble and a loss of manageability.
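
Creating the FS on that LV with an inline log would look roughly like this (example names again):

# crfs -v jfs2 -d lv_data -m /data -a logname=INLINE -A yes   # JFS2 with inline log, mounted at boot
# mount /data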

Notice that inline logs are 0.5% of the FS's size by default. When you increase the size of the FS/LV, the log size is increased too unless you explicitly state that it shouldn't be. This is a slight waste of disk space (inline log sizes above 64MB are pointless), but the amounts involved are just not worth any effort. Only tinker with this if you are really, really tight on space.
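
If you ever do want to cap it, something like this should do it (the logsize value is given in MB; treat this as a sketch):

# chfs -a logsize=64 /data    # pin the inline log at 64MB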

I hope this helps.

bakunin