Configure VIOS SEA with load sharing

I am trying to install a VIOS pair with a load-sharing SEA, following a recipe from IBM developerWorks. Without load sharing everything went fine and worked as expected, but somehow I am a bit lost, and my first tries with "ha_mode=sharing" didn't work at all.

Here is the situation:

I have one VIOS pair, and in each VIOS I have one SEA constructed from two physical Ethernet adapters (10 G Ethernet). The adapters (ent0 and ent1) as well as the SEA (ent2) have been tested and work well, as does the control channel.

From the "outside" I have several VLAN IDs; "internally" I have only one (which might change, as we are building the machine right now).

What I tried to achieve was to have the SEA of the second VIOS share the load instead of sitting idle as standby. I wanted to base the load sharing on the external VLAN IDs (VIOS 1 serving external VLANs A, C, E, ...; VIOS 2 serving external VLANs B, D, F, ...). For this I created one virtual adapter for each external VLAN on the SEA and tried to switch on load sharing, but got no communication.

I suspect that I somehow got the VLAN layout wrong and confused internal and external VLANs in my HMC definitions. Maybe someone has already done this and can shed some light on it. A step-by-step how-to would be appreciated.

Another question: how do I create additional internal VLANs and add them to this setup? Right now there is no need for additional VLANs, but this requirement might well change over time.

Thanks for your help.

bakunin

While I am sorry I have no solution at hand, could you provide a drawing of your layout? Often, when you draw the layout and compare it with what has been set on the VIOS or via the HMC, a small error or overlooked detail already catches your eye.

I have never tried such a configuration. Since we migrate almost everything to blades now, and they have just one VIO server each, it has been a long time since I configured an SEA over two VIOS.

Maybe the problem comes from the EtherChannel? You wrote you use two 10 Gbit interfaces on each VIOS.
Do you use an active-backup configuration, round-robin (active-active), or LACP? I had big problems with 10 Gbit LACP, so we had to switch to active-backup.

You could try to remove the EtherChannel and create the SEA with one physical adapter, just for troubleshooting.

If you don't use EtherChannel at all, forget this post. :wink:
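In case you do, a minimal troubleshooting sketch; all device names here are assumptions (ent10 the SEA, ent9 the aggregation, ent7 one free physical port, ent0 the virtual trunk adapter):

```
# rmdev -dev ent10
# rmdev -dev ent9
# mkvdev -sea ent7 -vadapter ent0 -default ent0 -defaultid 1
```

If the SEA works on the bare physical port, the problem is in the aggregation (mode, hash, or switch side), not in the SEA or VLAN setup.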

Just coming back from work: we had success and figured it out. It works.

I will post the setup with a step-by-step guide tomorrow, when I get back to the office (and to the documentation). For now I just wanted to announce the success before falling into bed, so stay tuned.

For reference: I have a couple of p795s to work with (yes: yummy ;-)), and in each one I configure one VIOS pair. Most of the LPARs we migrate onto these frames will be HACMP nodes (so that one node resides in the left frame and one in the right), but we still want partition mobility, so we at least have a chance to free one of the frames, even if this means giving up redundancy temporarily.

bakunin


OK, as promised, here is the how-to. First off, it was basically an occurrence of PEBKAC on my side, but once I straightened this out it was fairly straightforward. You might want to read the following documents for reference:

How to Setup SEA Failover on DUAL VIO servers

Shared Ethernet Adapter (SEA) Failover with Load Balancing

Tips for implementing PowerHA in a virtual I/O environment

  1. Build EtherChannel
    =============
# lsdev -type adapter | grep '10 Gigabit Ethernet Adapter'
ent7             Available   10 Gigabit Ethernet Adapter (ct3)
ent8             Available   10 Gigabit Ethernet Adapter (ct3)

# chdev -dev ent7 -attr flow_ctrl=yes large_receive=yes large_send=yes
ent7 changed
# chdev -dev ent8 -attr flow_ctrl=yes large_receive=yes large_send=yes
ent8 changed

# mkvdev -lnagg ent7,ent8 -attr mode=8023ad hash_mode=src_dst_port
ent9 Available
en9
et9
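Before building the SEA on top, it can be worth checking that the 802.3ad negotiation actually succeeded; a sketch (ent9 being the aggregation device created above, and the exact output strings varying by adapter and driver):

```
# entstat -all ent9 | grep -i -E 'aggregat|802.3ad'
```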
  2. Build SEA on the EtherChannel Device
    ========================

This is the critical part, where I failed before: for every external VLAN there has to be a distinct internal VLAN to match. We have 5 external VLANs here, and for every VLAN one virtual adapter is defined. The adapters are created with trunk priority 1 on the "primary" VIOS and 2 on the "secondary" VIOS; every other property stays identical. The control channel adapter has only an internal VLAN (ID 99) because it is used internally only.

# lsdev -slots | grep ent
U9119.FHB.841DC07-V1-C2      Virtual I/O Slot  ent0
U9119.FHB.841DC07-V1-C3      Virtual I/O Slot  ent1
U9119.FHB.841DC07-V1-C4      Virtual I/O Slot  ent2
U9119.FHB.841DC07-V1-C5      Virtual I/O Slot  ent3
U9119.FHB.841DC07-V1-C6      Virtual I/O Slot  ent4
U9119.FHB.841DC07-V1-C7      Virtual I/O Slot  ent5
U9119.FHB.841DC07-V1-C8      Virtual I/O Slot  ent6

# for i in ent0 ent1 ent2 ent3 ent4 ent5 ent6 ; do
     echo $i ; entstat -all $i | grep -E 'Port VLAN ID|VLAN Tag IDs'
done

ent0
Port VLAN ID:     1
VLAN Tag IDs:   <first external VLAN-ID>
ent1
Port VLAN ID:     2
VLAN Tag IDs:   <second external VLAN-ID>
ent2
Port VLAN ID:     3
VLAN Tag IDs:   <third external VLAN-ID>
ent3
Port VLAN ID:     4
VLAN Tag IDs:   <fourth external VLAN-ID>
ent4
Port VLAN ID:     5
VLAN Tag IDs:   <fifth external VLAN-ID>
ent5
Port VLAN ID:    99
VLAN Tag IDs:  None
ent6
Port VLAN ID:   <external VLAN-ID>
VLAN Tag IDs:  None

# mkvdev -sea ent9 -vadapter ent0,ent1,ent2,ent3,ent4 \
         -default ent0 -defaultid 1 \
         -attr ha_mode=sharing accounting=enabled largesend=1 \
                large_receive=yes ctl_chan=ent5
ent10 Available
en10
et10
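Whether load sharing is really active can be checked on both VIOS afterwards; with ha_mode=sharing, one SEA should report state PRIMARY_SH and the other BACKUP_SH (a sketch, ent10 being the SEA just created):

```
# entstat -all ent10 | grep -i 'state'
```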

Note that "load balancing" is not done, at least not in the classical sense. In fact, this setup serves the first (third, fifth, ...) VLAN over the first route (first SEA) and the second (fourth, sixth, ...) over the second. It is therefore advisable to sort the VLANs by expected traffic before defining the adapters, to balance the load as well as possible.
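To illustrate the resulting split: with made-up VLAN IDs sorted by expected traffic (heaviest first), the assignment simply alternates between the two SEAs. A plain-shell sketch (VLAN IDs are hypothetical):

```shell
#!/bin/sh
# Hypothetical external VLAN IDs, sorted by expected traffic (heaviest first)
vlans="110 120 130 140 150"

i=0
for v in $vlans; do
    i=$((i + 1))
    if [ $((i % 2)) -eq 1 ]; then
        echo "VLAN $v -> SEA on VIOS 1"   # 1st, 3rd, 5th, ...
    else
        echo "VLAN $v -> SEA on VIOS 2"   # 2nd, 4th, ...
    fi
done
```

This way the heaviest and second-heaviest VLANs end up on different VIOS, and so on down the list.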

I hope this helps.

bakunin


Hi,

Thats gr8, but what you have configured at vio client lpar.

regards,

vjm

I haven't configured the clients yet, but I will follow up with a sample client configuration when I get to create them, if that is of interest.
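In the meantime, a rough sketch of what the client side typically looks like (not yet verified in this setup; hostname, addresses, and interface names below are placeholders): on the HMC the client LPAR gets a plain virtual Ethernet adapter in the desired VLAN (no trunk flag, no priority), which shows up as ent0/en0 in the LPAR and is configured like any physical interface:

```
# mktcpip -h client01 -a 10.1.2.3 -m 255.255.255.0 -i en0 -g 10.1.2.1
```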

En passant, I'd like to ask you to refrain from using "gr8" or similar "leet speak" abbreviations when writing here. The people here don't mind the effort of helping others and sharing their findings, but if they can be brought to write whole articles, it is only fair to ask others to make the effort of writing whole words instead of saving two characters by writing "gr8" instead of "great". Thank you for your consideration.

bakunin