Adding a disk to my LPAR

Hi all, I entered the AIX environment 4 months ago, with prior experience in Linux.
What I am facing is that I am unable to do any sort of R&D with AIX on my own, like
installation, creating VGs, managing networks, the VIOS, storage, LPARs.

So we have a setup here where almost everything is in a live production environment,
with 2 VIOS servers, storage and an HMC.

I got one LPAR for R&D, but it has only one hdisk, say "hdisk0", as rootvg.

I would like to have another hdisk, say hdisk1, to make a "datavg".

Now the problem is: how can I add that disk to my LPAR?

hmm, your name suggests you know VMS too, LOL.

There is no direct answer to this, because it will depend on many details of your setup (which you didn't describe in detail so far). I will try to give you an overview and you may want to ask for details as we go along.

It is possible to put physical disks into POWER systems and use these, but this is commonly only done for VIOS systems. All the other LPARs usually get their disks from some external (SAN) storage. I will describe this here.

There are two common ways to attach external disks to an LPAR: vSCSI and NPIV. Both involve the VIOS and the most common setup is to use both: vSCSI for the boot disks (that is: rootvgs) of the LPARs and NPIV for the data/application disks.

vSCSI is the simplest method: first, a virtual SCSI adapter is created on the VIOS and attached to the LPAR. Then it is possible to create virtual SCSI disks from SAN disks attached to the VIOS and map these virtual SCSI disks to a certain LPAR via a certain adapter. When the LPAR boots it sees this virtual disk as a SCSI disk attached to a SCSI adapter.
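For illustration only: seen from inside the LPAR, such a disk and its adapter typically show up like this (hypothetical output, your device numbers will differ):

# lsdev -Cc adapter | grep vscsi
vscsi0  Available  Virtual SCSI Client Adapter
# lsdev -Cc disk
hdisk0  Available  Virtual SCSI Disk Drive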

The advantage of doing it this way is that you can do everything you would do with a real SCSI disk, including booting from it, from the LPAR. Because the gear the LPAR uses is in fact virtualised, you can still use LPM (Live Partition Mobility): the VIOS of the source and target machines (= "Managed Systems") will move the virtualised adapters and disks around so that on the target MS you have the LPAR running the same way you had it on the source MS. On the downside, the VIOS is relatively heavily involved in the handling and utilisation of the vSCSI disks, and when you have a lot of traffic on vSCSI disks the VIOS needs an increasing amount of (processor and memory) resources.

This is why disks you do not need to boot from ("data disks") are commonly not vSCSI but NPIV: the VIOS only creates a virtual FC adapter (the VIOS has the physical FC adapter attached) and exports that to the LPAR, which in turn uses it to connect to the SAN LUNs directly. The LPAR needs to use the respective FC driver (multipath for IBM storage, powermt for EMC, ...) to access the LUNs. The zoning gets a bit more complicated too, because the LUNs need to be zoned to the LPAR now. On the upside, the VIOS is less involved and some limitations of SCSI do not apply.
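To make this a bit more concrete, here is a rough sketch of the VIOS side of NPIV, run as padmin (vfchost0 and fcs0 are placeholder names, not necessarily your devices):

$ lsnports
$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -all -npiv

lsnports shows which physical FC ports are NPIV-capable, vfcmap maps the virtual FC adapter of the LPAR to one of them, and lsmap -all -npiv lets you verify the mapping. The WWPNs that get zoned on the SAN switches belong to the virtual adapter of the LPAR, not to the physical port of the VIOS.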

There are a few Redbooks from IBM about their virtualisation techniques which I suggest you download and read.

I hope this helps.

bakunin


@bakunin

Thanks for this brief and prompt reply. I was trying to summarise and understand what you described, but was unable to, since I have no experience in this field.

If you can tell me what other info I can provide, please do, so that I can get my resolution and perform some R&D.


All I have is a rack with:
two p-Series servers, a 710 and a 750,
a V7000 storage system with controller and expansions,
SAN switches and an HMC. These are the physical components.

Now when I log in to the HMC I see the two servers as mentioned:
one server, the 710, is used as a backup server,
and the other one has 25 LPARs :open_mouth: of which 22 LPARs are LIVE!!
I managed to get one LPAR; when I logged in and issued the lspv command I saw one hdisk0 as rootvg.
So I want to add another hdisk to that LPAR and make it datavg.

Note: among the 25 LPARs, two are listed as VIO servers and the remaining 23 are AIX LPARs.
Will this info help to resolve my queries?

OK. I think what you need first is to understand how virtualisation in general, and virtualisation the IBM way in particular, works. Only then can we take care of the specifics. Let's start:

Take a look at your PC: you have a mainboard inside, to which a SCSI adapter is attached. To this SCSI adapter a disk is attached (or several of them, maybe other devices like CD-ROM drives, etc.). Further, there is some amount of memory and one or more processors.

Let us suppose for a moment that we want to virtualise this system so that we run several logical systems off this hardware. We could divide the memory up between the logical systems, and we could also divide the processors between these systems. It would even be possible to make these shares dynamic, so that we take resources away from a system which doesn't need them at the time and give them to another which does, then reverse the process as the loads change.

This works well, because processors and memory are "anonymous" resources. They have no content of their own and can therefore be easily attached and reattached to a system. This is not the case with disks, because disks have a certain content. You cannot add "more disk" or take away "some disk" like you could add or remove some GB of memory. This is why virtualisation needs to treat disks differently than other resources.

For this, storage devices were invented. You no longer have physical disks locally installed in your computer (which is already virtualised anyway), but you have a specialised system - the "storage" - which is basically a lot of disks with some logic on top to create virtualised disks of arbitrary size and attach them to external systems. Examples of such storage boxes are the DS8000 from IBM and the VMAX from EMC (both enterprise-level systems), the IBM V7000 and EMC's VNX (the respective midrange category), and similar systems.

In most cases you have several hardware boxes, each with several virtualised systems needing disks, and one or more storage boxes providing these disks. To organise all this it is common not to attach the storage directly to such a system, but to connect all involved parties to a common network via specialised switches, utilising special broadband communication paths - the "Storage Area Network" or "SAN". SANs usually run on fibre optic cables and use FC (Fibre Channel) communication. Just as Cisco is the industry standard for network switches, Brocade is the industry standard for FC switches.

Note that this network connecting disks to their respective hosts needs to be a lot more reliable than a normal network: in a normal network, if a packet is lost it is simply retransmitted - IP has a lot of error checking and correcting built in. When a packet in a SAN is lost, it results in a disk read/write error. This is why communication paths are usually redundant, with two or even more parallel connections just in case, and why the whole system is often called a "fabric" rather than a "network".

Further, with so many logical disks (sometimes thousands of them) and systems involved there needs to be some sort of security so that each host only sees the disks it is supposed to use: this is usually done on the switches of the SAN network and the process is called "zoning". A "zone" basically states that only adapter X is allowed to see disk Y. Because adapter X is a virtual adapter attached to system Z only this system can see the disk. The identification works by a system similar to the MAC-addresses of a network: WWNNs (World Wide Node Name) and WWPNs (World Wide Port Name). Read the Wikipedia article about FC for more details.
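If you need to look these identifiers up on an AIX LPAR, something like the following should do it (fcs0 is simply the first FC adapter, repeat for fcs1 and so on; the address shown is made up):

# lscfg -vl fcs0 | grep "Network Address"
        Network Address.............C0507601A2B40010

The 16-digit value is the WWPN of that adapter and is what the SAN/storage people need for the zoning.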

So, basically, to get a disk to your system you need to create one on the storage box, get it zoned to your system, then start using it. I know that, while this is true, it is like explaining how to fly a plane with "get into the plane, take a seat in the pilot's seat and start flying" - way too general. It was the best I could come up with right now, though. You need to understand some vital basics before you can even ask the right questions.
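Just to make that last step slightly less abstract: once a new LUN has been created and zoned to your LPAR, the "start using it" part from inside the LPAR is, as a minimal sketch, nothing more than a device scan:

# cfgmgr
# lspv

A new hdisk without a volume group assignment should then show up in the lspv output.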

I hope this helps.

bakunin

Something that might help is a listing of the virtual devices - it does not need to be all of them.

So, the command you run (for yourself) as padmin on VIOS is:

$ lsdev -virtual

From the output I would like to know if you see any vfchost devices. If you do, that implies you may be using NPIV for your storage. The V7000 can certainly support this.

I assume you will also have some vhost devices.
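For orientation, the interesting part of that listing looks roughly like this (a made-up sketch, your numbering and counts will differ):

vhost0     Available   Virtual SCSI Server Adapter
vhost1     Available   Virtual SCSI Server Adapter
vfchost0   Available   Virtual FC Server Adapter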

Using the command

$ lsmap -all

look through the output to see whether any hdisks or logical volumes are included. If there are, then you are (also) using vSCSI for storage.
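A (hypothetical) vSCSI entry in that output would look something like the following; the "Backing device" line tells you whether an hdisk or a logical volume is exported to the client (all names and locations here are made up):

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8233.E8B.XXXXXXX-V1-C11                     0x00000003

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk4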

To add a disk to a partition using NPIV you need to find its WWPN and zone a new LUN to it, and then run 'cfgmgr' in the partition and the partition should see it. No change is needed in the VIOS, HMC, etc.

If you know you are not using NPIV then you will need to find a free disk on a VIOS and 'attach' that to the correct vhost adapter.

Assuming hdisk93 is free and the RnD partition has vhost7 assigned to it, the command is:
$ mkvdev -vdev hdisk93 -vadapter vhost7

Now run cfgmgr in the client and the disk should appear as hdisk1.
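Once hdisk1 is visible in the client, creating the datavg you asked about is ordinary AIX LVM work, for example:

# lspv
# mkvg -y datavg hdisk1
# lsvg -l datavg

lspv confirms the disk is there and unassigned, mkvg creates the volume group, and lsvg -l lists its (still empty) logical volumes.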

If you do not have a free hdisk on the VIOS, then you zone an additional disk to the VIOS (as root, run cfgmgr on the VIOS; I continue to forget the padmin equivalent) and then do the steps above (Assume ... hdisk93...).
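If I remember it right, the padmin equivalent should be cfgdev, so that path would look roughly like this (hdisk93 and vhost7 again being the example names from above):

$ cfgdev
$ lspv
$ mkvdev -vdev hdisk93 -vadapter vhost7

cfgdev scans for the newly zoned LUN, lspv shows it as a free disk, and mkvdev maps it to the client's adapter as before.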

Hope this helps!