Multiple Zones on Dual NIC Host

Greetings Forumers!

I am running into an issue with multiple zones on an M5000 with two NICs. The NICs are on separate VLANs. The zones use the two NICs to communicate with other systems, but when a zone needs to talk to a zone on the same system that sits on the other NIC, the application fails. The network guys here indicate that no packets ever leave the originating NIC. And when I run a traceroute from one zone (on NIC#1) to another zone (on NIC#2), the traceroute takes one hop and never touches the default gateway.

Here's some diag:

root@globalzone# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
...
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone zone2
        inet 127.0.0.1 netmask ff000000
...
lo0:5: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone zone1
        inet 127.0.0.1 netmask ff000000
nxge0: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 10.10.19.140 netmask ffffffe0 broadcast 10.10.19.159
        ether 0:21:28:8b:e1:70
...
nxge0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone zone2
        inet 10.10.19.132 netmask ffffffe0 broadcast 10.10.19.159
...
nxge4: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 3
        inet 10.10.19.114 netmask ffffffe0 broadcast 10.10.19.127
        ether 0:21:28:8b:e2:50
...
nxge4:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        zone zone1
        inet 10.10.19.104 netmask ffffffe0 broadcast 10.10.19.127
root@globalzone# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.10.19.126        UG        1      22619
default              10.10.19.158        UG        1      88877 nxge0
default              10.10.19.126        UG        1      35422 nxge4
10.1.1.2             10.1.1.3             UH        1          1 sppp0
10.10.19.96          10.10.19.114        U         1         57 nxge4
10.10.19.128         10.10.19.140        U         1          5 nxge0
224.0.0.0            10.10.19.114        U         1          0 nxge4
127.0.0.1            127.0.0.1            UH       12       4520 lo0

Here's the traceroute:

root@zone1# traceroute zone2
traceroute to zone2 (10.10.19.132), 30 hops max, 40 byte packets
 1  zone2.mydomain.com (10.10.19.132)  0.135 ms  0.040 ms  0.035 ms
root@zone1# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.10.19.126        UG        1      22619
default              10.10.19.126        UG        1      35422 nxge4
10.10.19.96          10.10.19.104        U         1         16 nxge4:2
224.0.0.0            10.10.19.104        U         1          0 nxge4:2
127.0.0.1            127.0.0.1            UH        3         46 lo0:5

Notice the packets don't go through the default gateway.

This is a problem because the application on zone1 will not start due to a communication issue.

I'm trying to get the zone to send the packets to the default g/w so the app comes up.

Any assistance is greatly appreciated in advance!

Well, if the system sees there is no need to leave the host, there is no need for any gateway. Does localhost connect?

Ping and traceroute are special cases with Solaris zones: ICMP traffic between zones is permitted, so a working ping doesn't necessarily mean the application's traffic behaves the same way.
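If you want to see what the application itself experiences, test its actual TCP port rather than ICMP. A rough sketch, assuming the app listens on TCP (the 8080 below is only a placeholder for whatever port your application really uses):

root@zone2# netstat -an | grep LISTEN    # confirm the port the app is listening on in zone2
root@zone1# telnet zone2 8080            # 8080 is a placeholder; substitute the real port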

Is there any way I can tell the zones to use the default gateway for all communication, and not to communicate directly with the other zones inside the global zone?

If you want zones to communicate through an external gateway, use exclusive-IP zones, not shared-IP ones. With shared IP, which is what you are using, there is a single IP stack shared by all zones, so a packet whose destination address is local to the host never leaves the server. This is by design and by standard.
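For reference, here is a rough sketch of what converting a zone to exclusive IP looks like. The data link name nxge5 is only an example: an exclusive-IP zone needs a data link dedicated to it (not plumbed in the global zone), and the IP address, netmask and default route are then configured inside the zone itself rather than in zonecfg.

root@globalzone# zonecfg -z zone1
zonecfg:zone1> remove net                # drop the existing shared-IP net resource
zonecfg:zone1> set ip-type=exclusive
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=nxge5    # a data link dedicated to the zone (example name)
zonecfg:zone1:net> end
zonecfg:zone1> commit
zonecfg:zone1> exit
root@globalzone# zoneadm -z zone1 reboot

Inside the zone you would then set up /etc/hostname.nxge5, /etc/defaultrouter and so on, just as on a standalone host.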

I just read this in the Solaris Containers Technology Architecture Guide May 2006 page 20:

I'll try setting one zone to exclusive IP and test.

The white paper you quote predates exclusive-IP zones, so it isn't very helpful. As an update to what I previously wrote: if you want to stay with shared-IP zones and are using a recent enough Solaris release, you might also use the defrouter zone configuration parameter to overcome the previously mentioned restriction. See "Using zonecfg defrouter with shared-IP zones - What the krowteN?" for details.

Both zones share localhost. Does that work, and can it be used?

The localhost interfaces are strictly isolated between zones.

We use the "default router" parameter in all our zones. But the packets still do not leave the global zone because of the "shared" ip-type parameter. I have not tried the "exclusive" ip zone implementation yet. Here is one of our zone configs:

root@globalzone# zonecfg -z localzone export
create -b
set zonepath=/ZONES/localzone
set autoboot=false
set ip-type=shared
add net
set address=10.10.19.132/27
set physical=nxge0
set defrouter=10.10.19.158
end

Jlliagre - I have read this blog, which discusses inter-zone traffic isolation, and I plan to test this before implementing exclusive-IP zones.

Also, all the zones are built on the Solaris 10 9/10 (Update 9) release.
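For the test, my plan is to snoop the physical NIC from the global zone while zone1 talks to zone2; if shared IP really loops the traffic internally, nothing should show up on the wire. Something like this (addresses and interface taken from the output above):

root@globalzone# snoop -d nxge4 host 10.10.19.132    # watch the wire for traffic to zone2
root@zone1# ping 10.10.19.132                        # run from zone1 in another terminal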