Interface goes down after 2 minutes

Hi all,

I am using an F5 LTM Load balancer VM. Its network config is as follows:

eth0      Link encap:Ethernet  HWaddr 00:50:56:01:01:FA  
          inet addr:192.168.2.104  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe01:1fa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7134 errors:0 dropped:0 overruns:0 frame:0
          TX packets:70 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:431545 (421.4 KiB)  TX bytes:9888 (9.6 KiB)

eth1      Link encap:Ethernet  HWaddr 00:50:56:01:01:FB  
          inet6 addr: fe80::250:56ff:fe01:1fb/64 Scope:Link
          UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:479497 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:38422565 (36.6 MiB)  TX bytes:810 (810.0 b)

eth2      Link encap:Ethernet  HWaddr 00:50:56:01:01:F8  
          inet6 addr: fe80::250:56ff:fe01:1f8/64 Scope:Link
          UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:457477 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:34193855 (32.6 MiB)  TX bytes:468 (468.0 b)

eth3      Link encap:Ethernet  HWaddr 00:50:56:01:01:FC  
          inet addr:142.133.174.246  Bcast:142.133.175.255  Mask:255.255.254.0
          inet6 addr: fe80::250:56ff:fe01:1fc/64 Scope:Link
          UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:459573 errors:0 dropped:0 overruns:0 frame:0
          TX packets:305 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:34535062 (32.9 MiB)  TX bytes:30038 (29.3 KiB)

eth4      Link encap:Ethernet  HWaddr 00:50:56:01:01:F9  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.255.255.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:34913 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34913 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2886349 (2.7 MiB)  TX bytes:2886349 (2.7 MiB)

lo:1      Link encap:Local Loopback  
          inet addr:127.2.0.2  Mask:255.255.255.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

mgmt_bp   Link encap:IPIP Tunnel  HWaddr   
          inet addr:127.3.0.0  Mask:255.255.255.255
          UP RUNNING NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:8 dropped:0 overruns:0 carrier:8
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

tmm0      Link encap:Ethernet  HWaddr 00:98:76:54:32:10  
          inet addr:127.1.1.1  Bcast:127.1.1.255  Mask:255.255.255.0
          inet6 addr: fe80::298:76ff:fe54:3210/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9198  Metric:1
          RX packets:4318 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4297 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:255977 (249.9 KiB)  TX bytes:710130 (693.4 KiB)

It is connected to other RHEL VMs over two networks, 192.168.2.x and 142.133.174.x.
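
In case the F5-side configuration matters, these are the tmsh listing commands I can run to dump it (assuming tmsh is available on this BIG-IP version; output not pasted here):

tmsh list net vlan        # VLANs and the interfaces assigned to them
tmsh list net self        # self IPs (142.133.174.246 should show up here)
tmsh list net route       # configured static/default routes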

After I boot the F5 VM, I can ping the default gateway of the 142.133.174.x network for about two minutes, after which the ping fails:

[root@localhost:Offline:Standalone] config # ping 142.133.174.1
PING 142.133.174.1 (142.133.174.1) 56(84) bytes of data.
From 142.133.174.246 icmp_seq=2 Destination Host Unreachable
From 142.133.174.246 icmp_seq=3 Destination Host Unreachable
From 142.133.174.246 icmp_seq=4 Destination Host Unreachable

From another VM on the 142.133.174.x network, I also cannot ping the F5 VM.

After I reboot the F5 VM, the ping starts succeeding again.

I have no idea why the 142.133.174.x network or the eth3 interface is behaving this way.
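
To narrow it down, my plan is to leave a timestamped ping and an ARP/ICMP capture running across the two-minute window, roughly like this (assuming tcpdump is available in the F5 bash shell and this ping build supports -D):

ping -D 142.133.174.1 | tee /var/tmp/ping-gw.log    # -D timestamps each reply, so the exact failure moment is visible
tcpdump -ni eth3 -e arp or icmp                     # in a second shell: watch who answers (or stops answering) ARP on eth3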

Any ideas?

[root@localhost:Offline:Standalone] config # ip route
127.1.1.0/24 dev tmm0  proto kernel  scope link  src 127.1.1.1 
127.3.0.0/24 dev mgmt_bp  scope link 
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.104 
142.133.174.0/23 dev eth3  proto kernel  scope link  src 142.133.174.246 
default via 142.133.174.1 dev eth3 
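
The routing table looks fine to me, so the next thing I plan to check while the ping is failing is the ARP/neighbour entry for the gateway on eth3 (standard Linux tools; arping may or may not be installed on this build):

ip neigh show dev eth3                     # is the entry for 142.133.174.1 REACHABLE, STALE or FAILED?
arping -I eth3 -c 3 142.133.174.1          # does the gateway answer ARP requests at all?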

dmesg output:

loop: module loaded
ip_tables: (C) 2000-2006 Netfilter Core Team
nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
warning: `mcstransd' uses 32-bit capabilities (legacy support in use)
eth0: intr type 3, mode 0, 3 vectors allocated
eth0: NIC Link is Up 10000 Mbps
eth1: intr type 3, mode 0, 3 vectors allocated
eth1: NIC Link is Up 10000 Mbps
eth2: intr type 3, mode 0, 3 vectors allocated
eth2: NIC Link is Up 10000 Mbps
eth3: intr type 3, mode 0, 3 vectors allocated
eth3: NIC Link is Up 10000 Mbps
ADDRCONF(NETDEV_UP): eth4: link is not ready
IPv4 over IPv4 tunneling driver
tunl0: Disabled Privacy Extensions
mgmt_bp: Disabled Privacy Extensions
eth0: no IPv6 routers present
eth1: no IPv6 routers present
eth2: no IPv6 routers present
eth3: no IPv6 routers present
SELinux: initialized (dev hda1, type vfat), uses genfs_contexts
vnic: control interface intialized
SELinux: initialized (dev hda1, type vfat), uses genfs_contexts
vnic: register instance 0
[built:14:45:06][unic] handle_unic_mmap(674): size=327680, start=47530241994752, end=47530242322432, type=0
[built:14:45:06][unic] handle_unic_mmap(674): size=327680, start=47530242322432, end=47530242650112, type=128
device eth3 entered promiscuous mode
[built:14:45:06][unic] unic_cmp_addunic(1301): TMM CMP index 0, device eth3
[built:14:45:06][unic] handle_unic_mmap(674): size=327680, start=47530242650112, end=47530242977792, type=0
[built:14:45:06][unic] handle_unic_mmap(674): size=327680, start=47530242977792, end=47530243305472, type=128
device eth1 entered promiscuous mode
[built:14:45:06][unic] unic_cmp_addunic(1301): TMM CMP index 0, device eth1
[built:14:45:06][unic] handle_unic_mmap(674): size=327680, start=47530243305472, end=47530243633152, type=0
[built:14:45:06][unic] handle_unic_mmap(674): size=327680, start=47530243633152, end=47530243960832, type=128
device eth2 entered promiscuous mode
[built:14:45:06][unic] unic_cmp_addunic(1301): TMM CMP index 0, device eth2

What interface is this? Something like a packet sniffer is messing with it.

It's just a simple interface used to access the node. I don't think any sniffing is going on.

Something is, though. That's what "promiscuous mode" means: the interface accepts packets addressed to any destination, not just its own MAC address. Tools like Wireshark and tcpdump put an interface into that mode to monitor traffic.
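
You can see which interfaces are currently in that mode straight from the flags, no extra tools needed:

ip link | grep PROMISC                     # PROMISC appears in the flags of affected interfaces
dmesg | grep -i 'promiscuous mode'         # the kernel logs every enter/leave, as in the dmesg you pasted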

What programs/services are you running? You need to find out what is putting the interface into that mode; whatever it is, I suspect it is what is breaking your connection.
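
A quick way to check is to list open packet-capture (AF_PACKET) sockets and the processes that own them, for example (whether ss supports -0/--packet depends on the iproute2 version, so treat this as a sketch):

ss -0 -p                                   # packet sockets with the owning process, if supported
cat /proc/net/packet                       # the kernel's own list of packet sockets (interface index + socket inode)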