iptables latency evaluation

Hello guys,

I'm currently working on my master's thesis, whose subject is the evaluation of a virtual firewall in a cloud environment. To do so, I set up my own cloud using OpenNebula (as the frontend) and Xen (as a node) on two different machines. The Xen machine is my virtual firewall, thanks to iptables.

I am running a number of different performance tests against the Xen machine to evaluate the performance of iptables. One of these tests is the latency introduced by packet processing in iptables, and this is where I'm having trouble.

Here are the different ideas I've had so far, and their problems:

  • ICMP Timestamp pinging. An ICMP Timestamp reply contains three timestamps: the originate timestamp, which is the time the sender last touched the message; the receive timestamp, which is the time the receiver first touched the message; and the transmit timestamp, which is the time the receiver last touched the message before sending it back. By subtracting the receive timestamp from the transmit timestamp, we get the processing latency of the packet (see the sketch after this list). The problem is that the timestamps are in milliseconds, which is not precise enough, since the latency (at least when very few rules are active in iptables) is below 1 ms.
  • Normal ping, run twice: once with the firewall on and once with it off. The processing time is the difference between these two measurements, divided by 2 (because the overhead is paid on both legs of the round trip). A little more precise, since ping reports microseconds, but still not enough (nanoseconds would be good), and I fear all this calculation adds too much approximation anyway...
  • Wireshark timestamp calculation: sucks totally, since Wireshark captures the packets before they ever enter iptables.
  • Normal ping, run once, reporting the round-trip latency. I won't get the processing latency itself, but I can still graph the effect of the rule count and the throughput level on the overall latency of a connection going through the firewall. That's my "best" plan so far, but it sucks because it strays from the original idea, which is to measure the firewall latency only.
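For the ICMP Timestamp idea, a minimal sketch of what I mean, assuming hping3 is available on the test client and using 10.0.0.2 as a stand-in for the firewall's address (both are placeholders on my side):

    # Send ICMP Timestamp Requests (type 13) to the firewall and read the
    # Originate/Receive/Transmit timestamps from the replies. Processing time
    # is roughly Transmit minus Receive, but the fields are milliseconds
    # since midnight UTC, so sub-millisecond latency is invisible here.
    hping3 --icmp --icmp-ts -c 10 10.0.0.2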

Do you guys have any comments on my ideas, or even better, a solution to accurately measure firewall latency?

Cheers,

Clement

You might have to do it statistically. Take the same reading hundreds or thousands of times, determine confidence intervals, etc.
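As a rough sketch of that, assuming a target behind (or on) the firewall answers ping at 10.0.0.2 and that 1000 samples are enough (both placeholders):

    #!/bin/sh
    # Collect many RTT samples through the firewall and compute the mean,
    # standard deviation and an approximate 95% confidence interval.
    TARGET=10.0.0.2     # placeholder address
    SAMPLES=1000

    ping -c "$SAMPLES" -i 0.2 "$TARGET" \
      | sed -n 's/.*time=\([0-9.]*\) ms.*/\1/p' \
      | awk '{ sum += $1; sumsq += $1 * $1; n++ }
             END {
               mean = sum / n
               sd   = sqrt((sumsq - sum * sum / n) / (n - 1))
               ci   = 1.96 * sd / sqrt(n)   # normal approximation
               printf "n=%d  mean=%.3f ms  sd=%.3f ms  95%% CI +/- %.3f ms\n",
                      n, mean, sd, ci
             }'

Run it once with the ruleset loaded and once with an empty ruleset; if the confidence intervals of the two means don't overlap, the difference between them is a defensible estimate of the rule-processing cost.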


I think there is a LOG target for iptables which might record a timestamp, but probably not accurately enough, and it would add delay of its own.
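If you want to try it, something along these lines; the chain, match and prefixes are my guesses rather than anything from your setup:

    # Log ICMP packets at the top and at the bottom of the FORWARD chain; the
    # gap between the two kernel-log timestamps approximates the time spent
    # traversing the rules in between (assuming nothing in between terminates
    # the packet). The LOG rules themselves add overhead, and the resolution
    # is only as good as the kernel log timestamps.
    iptables -I FORWARD 1 -p icmp -j LOG --log-prefix "fw-pre: "
    iptables -A FORWARD   -p icmp -j LOG --log-prefix "fw-post: "
    dmesg | grep fw-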

Would the following benchmark tools do it: netpipe, netperf, lmbench? I have seen some benchmarks between pf and iptables done with those tools (pf won :) ).
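A request/response run with netperf would look roughly like this; the address is a placeholder and this is just the general idea, not your exact setup:

    # On the machine behind the firewall:
    netserver

    # On the client, a 30-second UDP request/response test through the
    # firewall. netperf reports a transaction rate (round trips per second),
    # so the mean round-trip latency is approximately 1 / rate.
    netperf -H 10.0.0.3 -t UDP_RR -l 30

Repeating the run with different numbers of iptables rules and comparing the derived latencies would isolate the rule-processing cost, with the same statistical caveats as above.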