VIOS Configuration Question

Hi Guys,

This is a general question relating to the building of VIO servers;

It has been a long time since I have done any serious AIX work, AIX 4.3 was new and I was for the most part working with 3.2.2 on older RS6000 kit.

So to the current day: I'm about to take delivery of some new tin which I have to plan, install and configure. The first batch of tin will comprise:

  • Three Power 8 S824's
  • One HMC
  • One SVC
  • One EMC VNX5800 Disk Array

The S824's are customer install and will each come with 4 x dual-port 16Gb HBAs and 4 x quad-port 10Gb NICs, along with the quad-port 1Gb copper NIC.

So in simple terms my question is: when building the VIOs, can I assign LAN and SAN resources on a port-by-port basis, or do I have to assign them on a card-by-card basis? I.e. can I assign a single port from a card, or do I have to assign the whole card?

Currently each server will have a pair of VIOS and a NIM server as standard - well, that's the plan.

Regards

Gull04

Hi gull04,

it's sadly been some time since I actively worked with these, beside some courses, but here we go (I hope the other guys will correct me if there is an error in my explanation):

Basically each port will show up as a single interface; a 4-port NIC assigned to a VIOS will show as ent0, ent1, ent2 and ent3.

Today you usually want to bind 2 physical adapters together with a main and a backup adapter, which is called an EtherChannel. You can also bundle adapters together (Link Aggregation) for performance reasons, but you can only have 1 backup adapter, and that should be, as said, on another physical adapter.
So a lot depends on your layout and how many networks you want to have connected to your systems.

Many setups I had to deal with looked like this:


          ADMIN-LAN

        SWITCH    SWITCH
           |        |
           |        |
           |        |
           |        |
          ent0     ent1
           | main   | backup
           +--------+
                |
                |
                ent2
                | EtherChannel
                +
                |
                ent3 bridged vNIC (VLAN11)
                |
                =
                |
                ent4
                | Shared Ethernet Adapter (SEA)
                | PortVLAN ID 11
                |
                |
                |
                |
                +------------------+-------------------+
                |                  |                   |
                |                  |                   |
                |                  |                   |
               LPAR A             LPAR B              LPAR C
               ent0               ent0                ent0
                  virtual NIC        virtual NIC         virtual NIC
                  PVID 11            PVID 11             PVID 11

You will also want a HASEA (High Available Shared Ethernet Adapter), so that in case of failure of the whole EtherChannel, the 2nd VIOS can take over - but I leave this out for now.

The EtherChannel is built with smitty etherchannel on the VIOS; don't forget that the adapters you put together must have no IP address configured. The IP address will be assigned later on the newly generated SEA!
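As an alternative to smitty, the same EtherChannel can be built from the VIOS restricted shell (padmin). A minimal sketch, assuming ent0 is the main and ent1 the backup adapter - your device names will differ:

```shell
# On the VIOS as padmin: create an EtherChannel with ent0 as main
# and ent1 as backup adapter; the result shows up as a new entX device.
mkvdev -lnagg ent0 -attr backup_adapter=ent1

# Check the attributes of the new adapter (here assumed to be ent2):
lsdev -dev ent2 -attr
```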

On the HMC you create a virtual Ethernet adapter with a Port VLAN ID (PVID, not the Physical Volume Identifier ;)) and check the box that makes it the adapter with contact to the outside physical LAN.
You also create the other virtual Ethernet adapters that will be assigned to each LPAR via the HMC, but they don't get the checkbox, as they have no connection to the outside.

The SEA is created on the VIOS command line by "fusing" the virtual Ethernet adapter with the checked box for outside communication together with the EtherChannel adapter you created earlier from the 2 physical adapters.
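On the VIOS CLI this "fusing" is done with mkvdev -sea. A hedged sketch, assuming ent2 is the EtherChannel and ent3 the HMC-created virtual trunk adapter with PVID 11 (as in the drawing above):

```shell
# Create the SEA on top of the EtherChannel (ent2), bridging the
# virtual trunk adapter ent3 with Port VLAN ID 11:
mkvdev -sea ent2 -vadapter ent3 -default ent3 -defaultid 11

# List the network mappings to check the result:
lsmap -all -net
```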

The assignment of which virtual adapter in an LPAR corresponds to which SEA is defined solely by the PVID they share; in the "painting" above I took PVID 11 as an example.

Next to this setup for the ADMIN LAN, you will probably have such a construct for a PROD LAN and a BACKUP LAN (network backup), but I was too lazy to "paint" them next to this :slight_smile:

With 4x4 ports per box, you have different options now. If you have a very big load on the NICs, you can bundle ent0, ent1, ent2 and ent3 together in a Link Aggregation, so that all 4 form the "main" adapter in the EtherChannel and you get a lot of possible throughput. You will then have to take a port of another physical adapter as the backup adapter for your EtherChannel - though what to do with the remaining 3 ports on that physical adapter..?
If you don't need aggregated ports, then you could maybe set up something like:

        physAdapter 1   physAdapter 2   Etherchannel

Port A  ent0            ent4            ent0 + ent4 = ent8
Port B  ent1            ent5            ent1 + ent5 = ent9
Port C  ent2            ent6            ent2 + ent6 = ent10
Port D  ent3            ent7            ent3 + ent7 = ent11
                                         |      |
                                        main + backup

Maybe someone else has a better idea how to assign your many ports, but it all depends on what you want/have to achieve.

For SAN, you will have to decide if you go for vSCSI or NPIV.
vSCSI will let you provide LVs, VGs or PVs from the VIOS to the LPARs. The LPAR will see them as normal hdisks.
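A vSCSI mapping is done on the VIOS once the vhost adapters exist; a sketch, assuming hdisk2 should go to the LPAR behind vhost0 (all names are examples):

```shell
# Map a whole PV (hdisk2) to the client LPAR connected to vhost0;
# -dev just gives the mapping a recognizable name.
mkvdev -vdev hdisk2 -vadapter vhost0 -dev lpara_rootvg

# Show all vSCSI mappings on this VIOS:
lsmap -all
```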

NPIV is N_Port ID Virtualization and virtualizes an FC adapter, which can then be assigned to the different LPARs. Doing this will create 2 new WWPNs (1 of these is for LPM, Live Partition Mobility), to which the LUNs from your storage will be zoned as usual.
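The NPIV side on the VIOS is mostly just mapping virtual FC host adapters to physical ports. A sketch, assuming vfchost0 and the physical port fcs0 (check your own names with lsnports):

```shell
# List physical FC ports that are NPIV capable:
lsnports

# Map the virtual FC adapter vfchost0 (created via the HMC) to the
# physical port fcs0; the client LPAR then gets its own WWPNs:
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the NPIV mappings:
lsmap -all -npiv
```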

I personally like NPIV more, because the administration of the LUN/disk-to-LPAR mappings that comes with vSCSI is much more tedious to handle than the zonings and the NPIV WWPN assignments to the LPARs. The downside is that you have to do driver updates on each LPAR, whereas with vSCSI the drivers are updated on the involved VIOS.

This as a start and quick overview for your considerations.
The HASEAs, which you will want to have for sure too, need some more details when configuring the SEA, but I assume you will have to make your layout first.
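For reference, the SEA failover setup differs from the plain SEA mainly in two extra attributes. A sketch, assuming ent5 is a dedicated control-channel virtual adapter (your PVIDs and device names will differ):

```shell
# On each of the two VIOS, create the SEA with failover enabled:
# ha_mode=auto turns on SEA failover, ctl_chan names the control
# channel adapter the two VIOS use to negotiate who is primary.
mkvdev -sea ent2 -vadapter ent3 -default ent3 -defaultid 11 \
    -attr ha_mode=auto ctl_chan=ent5
```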


Hi zaxxon,

Thanks very much for the info. I had read a sizable chunk of the VIOS configuration and implementation manual and had worked some of the above out, but there are some really handy tips in there. For the SAN connectivity it will be NPIV through the SVC for onward attachment of the LUNs.

The network configs will be pretty complex here, as there are numerous VLANs. At the moment I have just started the Low Level Design - so just a couple of pictures for now (one attached).

Anyway, thanks for the most welcome reply.

Regards

Gull04

Hi gull04,

you're absolutely welcome. It also makes me recapitulate these very interesting things again :slight_smile:

I don't want to be a nitpicker, but each VIOS is its own OS/LPAR. VIOS #2 will start counting devices from ent0 again.
You wrote you have 1 HMC - quite dangerous if it fails. The LPARs and VIOS will not complain and will keep running while the HMC is down, but you can't do anything on the Managed Systems during that time.
Also remember that each HMC has a dhcpd which provides internal IP addresses for the service processors (SPs). If you have more than one HMC, they must not see each other. They are attached to a NIC per Managed System, often quite simply via a plain network hub.

I do not completely understand the layout, tbh.

Does the same color mean that they are in the same EtherChannel? For instance the yellow:

VIOS #1: ent0, ent1
VIOS #2: ent16

Which of the 3 is the main, and which the backup? Are 2 of them supposed to form an aggregation as the main adapter, so you get a higher bandwidth?

In terms of hardware failure, it would not make sense to use 1 port as main and 1 as backup when they are on the same physical adapter.
This might work if only 1 port fails, but usually the whole adapter says goodbye, and then there would be nothing left.

Not sure if intended, but you cannot form an EtherChannel from adapters of VIOS #1 and VIOS #2. You can only create an EtherChannel with adapters from the same OS/LPAR/VIOS.
Later, on the upper layers, you can hand traffic over to a vNIC on another VIOS with the HASEA.

For clarity - an EtherChannel is good for 2 things:

  • To have a backup NIC available when one of the physical NICs fails.
  • To get a higher throughput by aggregating physical adapters as the "main" adapter in an EtherChannel. The backup adapter can only ever be 1 physical NIC.

If the whole EtherChannel fails, i.e. the main and backup physical adapters in it, or the whole VIOS goes down - then the HASEA comes into action. It hands the traffic over to VIOS #2, which is hopefully still up and running :wink:

If you describe the requirements regarding hardware failure, the possible aggregation of adapters for higher throughput, and how many networks are needed, we can maybe assist with the design.
You wrote you have 4 x 4-port NICs per Managed System - 2 of them are not used in the plan yet; maybe we can do something with these to make it more failsafe.

Cheers
zaxxon

As far as I know you have to assign the whole card, unless you use the new SR-IOV feature.

To assist, I added our current Power8 dual-VIO setup. Maybe this helps with planning your infrastructure.

Regards

PS
Assigning an IP directly to an SEA results in crashes and misbehavior of the VIO server in our Power8 (S824) environment. On Power7 there are no such problems. So we added dedicated (VLAN-tagged) virtual NICs to assign the VIO server IPs.
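In that setup the VIOS IP then goes onto the interface of the dedicated virtual adapter rather than onto the SEA. A sketch, assuming the extra virtual NIC shows up as ent4/en4 (hostname and addresses are placeholders):

```shell
# Assign the VIOS management IP to the dedicated virtual adapter's
# interface (en4) instead of the SEA:
mktcpip -hostname vios1 -inetaddr 10.1.11.10 -interface en4 \
    -netmask 255.255.255.0 -gateway 10.1.11.1
```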


Hi XrAy :smiley:

Just to make sure I understand the plan you attached:

  • The bundle of EtherChannels and SEAs is just shown abstractly, I assume?
  • The "abstract?" SEA has a PVID of 2999?
  • If there is no abstract bundle, do the vNICs just have more than 1 VLAN ID tagged, and are they all bundled via LACP into one big one?

Hi zaxxon,

  • this is one big LACP-Channel/SEA for all VLANs
  • this SEA must have a PVID which is not used in the real network and will never be addressed directly
  • maybe this link (sea with load sharing) will help :slight_smile:

Regards


Do you have SRIOV-capable LAN adapters?

Hi agent.kgb,

Yes, all the quad-port adapters (4 x 10Gb fiber, 1 x 1Gb copper) are SR-IOV capable, so I have decided to adopt the following model at a high level. I'll get down to the detailed planning when I start the build process. Currently each box will host around 20 LPARs, and unfortunately, due to some major firmware issues with the SVCs, we will not be using LPM to move stuff around - so a lot of planning here.

Currently the plan is to use 2 adapters per VIOS; this will give me 8 x 10Gb ports per VIOS, which will be configured as either one or two EtherChannels (most likely one) as per the diagram very kindly sent by xray.

So when I have the high level drawings complete I'll attach them to the post for information.

Regards

Gull04

If you have SR-IOV-capable adapters, don't make SEAs and follow advice and schemes made for standard Ethernet adapters. If you don't need LPM, configure SR-IOV VFs on the HMC and assign them directly to the LPARs. If you need HA between 2 adapters, make a NIB with a ping address. It takes less CPU and provides more network performance.
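For completeness, a NIB is just an AIX EtherChannel in network-interface-backup mode, configured inside the LPAR itself. A sketch, assuming two SR-IOV VFs appear in the LPAR as ent0 and ent1, with a reachable gateway as the ping target (all names and addresses are examples):

```shell
# Inside the LPAR: build a NIB from ent0 (main) and ent1 (backup);
# netaddr is the address that is pinged to detect a dead path.
mkdev -c adapter -s pseudo -t ibm_ech \
    -a adapter_names=ent0 -a backup_adapter=ent1 \
    -a netaddr=10.1.11.1
```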

Hi Folks,

Here is a quick diagram of what is planned - just for the 10GbE network. I have to draw separate diagrams for all the individual sub-systems.

@agent.kgb, sorry I didn't make myself clear in the previous mail. The full job is actually the replacement of two P770+ servers with three S824's, but due to problems with the existing SVCs we are unable to use LPM for the actual migrations. Once we have relocated the application workloads to the new tin, LPM will be used.

Regards

Gull04

770 to 824 migration? my condolences to your customer.

Once again - if you have SR-IOV capable adapters, don't make SEAs; you will get worse performance. Learn what SR-IOV, Virtual Functions and vNICs are.

You don't have to trust me, you can read e.g. this guy - A first look at SRIOV vNIC adapters | chmod666 AIX blog. Benoit (the guy) is the co-author of the PowerVC redbooks and manages quite a big infrastructure.

Unfortunately I can't share the information on SR-IOV adapter performance tests that I received from another guy, who ran the tests for IBM labs. I spoke to him last year, when IBM still hadn't announced the vNIC feature, and he convinced me that SR-IOV performance is better than that of an SEA. IBM uses the results to convince customers to buy SR-IOV adapters - but if you already bought them and paid the money, use them!
