Common causes of DUP! packets are network mask or gateway misconfiguration.
You may have NETWORK=192.168.30.0 in /etc/sysconfig/network-scripts/ifcfg-eth0 but a gateway that doesn't follow your network's addressing convention (e.g. your gateway is 192.168.30.2 instead of .1 or .254). Change the gateway IP and try again.
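As a sketch of that fix, here is the edit demonstrated on a local copy of the config file (the file contents and the .1 gateway are assumptions based on the routing table shown below; adapt them to your network):

```shell
# Work on a local copy first; the real file is /etc/sysconfig/network-scripts/ifcfg-eth0
cat > ifcfg-eth0 <<'EOF'
DEVICE=eth0
NETWORK=192.168.30.0
GATEWAY=192.168.30.2
EOF

# Point GATEWAY at the router's actual address (here .1, per the routing table)
sed -i 's/^GATEWAY=.*/GATEWAY=192.168.30.1/' ifcfg-eth0
grep '^GATEWAY=' ifcfg-eth0

# Then copy the file back into place and run: service network restart
```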
# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.30.0    *               255.255.255.0   U         0 0          0 eth0
169.254.0.0     *               255.255.0.0     U         0 0          0 eth0
default         192.168.30.1    0.0.0.0         UG        0 0          0 eth0
#
# ping 192.168.20.10
PING 192.168.20.10 (192.168.20.10) 56(84) bytes of data.
64 bytes from 192.168.20.10: icmp_seq=1 ttl=254 time=4.03 ms
64 bytes from 192.168.20.10: icmp_seq=1 ttl=254 time=4.03 ms (DUP!)
64 bytes from 192.168.20.10: icmp_seq=2 ttl=254 time=0.734 ms
64 bytes from 192.168.20.10: icmp_seq=2 ttl=254 time=0.735 ms (DUP!)
That's odd. Are you sure you have only one gateway, at 192.168.30.1? Unplug any other system on your network that acts as a router/firewall, such as 192.168.30.2, and test again.
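One way to check for a second device answering for the gateway address is to arping it and count the distinct MAC addresses that reply. The live command is shown in a comment (it needs root and a real network); the captured output below is made-up sample data for illustration:

```shell
# Live command (requires root): arping -c 3 -I eth0 192.168.30.1 | tee arping.log
# Sample captured output (MAC addresses are invented):
cat > arping.log <<'EOF'
Unicast reply from 192.168.30.1 [00:11:22:33:44:55]  0.812ms
Unicast reply from 192.168.30.1 [66:77:88:99:AA:BB]  1.204ms
Unicast reply from 192.168.30.1 [00:11:22:33:44:55]  0.799ms
EOF

# Count distinct responder MACs: 1 is healthy; 2 means two devices
# are both answering ARP for the gateway IP, which produces DUP! replies.
grep -o '\[[0-9A-Fa-f:]*\]' arping.log | sort -u | wc -l
```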
Duplicate ping error with network bonding driver in Linux
Starting with version 3.0.2, the bonding driver has logic to suppress duplicate packets, which should largely eliminate this problem. The following description is kept for reference.
It is not uncommon to observe a short burst of duplicated traffic when the bonding device is first used, or after it has been idle for some period of time. This is most easily observed by issuing a "ping" to some other host on the network, and noticing that the output from ping flags duplicates (typically one per slave).
For example, on a bond in active-backup mode with two slaves all connected to one switch, the output may appear as follows:
# ping -n 10.10.0.2
PING 10.10.0.2 (10.10.0.2) from 10.10.0.30: 56(84) bytes of data.
64 bytes from 10.10.0.2: icmp_seq=1 ttl=64 time=13.7 ms
64 bytes from 10.10.0.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.10.0.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.10.0.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.10.0.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.10.0.2: icmp_seq=2 ttl=64 time=0.216 ms
64 bytes from 10.10.0.2: icmp_seq=3 ttl=64 time=0.267 ms
64 bytes from 10.10.0.2: icmp_seq=4 ttl=64 time=0.222 ms
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v2.6.5
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:74:56:4e:98
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:74:56:4e:9a
# uname -a
Linux myserver01 2.6.5-7.282-bigsmp #1 SMP UTC 2006 i686 i686 i386 GNU/Linux
This is not due to an error in the bonding driver, rather, it is a side effect of how many switches update their MAC forwarding tables. Initially, the switch does not associate the MAC address in the packet with a particular switch port, and so it may send the traffic to all ports until its MAC forwarding table is updated. Since the interfaces attached to the bond may occupy multiple ports on a single switch, when the switch (temporarily) floods the traffic to all ports, the bond device receives multiple copies of the same packet (one per slave device).
The duplicated packet behavior is switch dependent: some switches exhibit this, and some do not. On switches that display this behavior, it can be induced by clearing the MAC forwarding table (on most Cisco switches, the privileged command "clear mac address-table dynamic" will accomplish this).
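To confirm that the duplicates really are switch flooding rather than a driver bug, you can capture ICMP on each slave and check whether the same sequence number arrives on both. The live tcpdump commands are in a comment; the merged capture below is illustrative sample data using the addresses from the example above:

```shell
# Live commands (root), one per slave: tcpdump -ni eth0 icmp / tcpdump -ni eth1 icmp
# Sample merged capture (invented, modeled on the ping example above):
cat > capture.log <<'EOF'
eth0 IP 10.10.0.2 > 10.10.0.30: ICMP echo reply, id 1, seq 1
eth1 IP 10.10.0.2 > 10.10.0.30: ICMP echo reply, id 1, seq 1
eth0 IP 10.10.0.2 > 10.10.0.30: ICMP echo reply, id 1, seq 2
EOF

# Tally replies per sequence number: seq 1 arriving on both slaves
# means the switch flooded the frame to both ports.
awk '{print $NF}' capture.log | sort | uniq -c
```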