Testing methodologies

Characterizing the performance of a firewall, as is usual with benchmarking, is not an easy task: you must account for different workloads and different possible configurations that, in turn, put very different loads on the device. The various corners of network performance can be described by measuring:

  • small & large packets throughput with simple stateful inspection
  • small & large packets throughput with VPN encryption (3DES and AES, with SHA1 and DH/PFS Group2)
  • small & large packets throughput with advanced filter enabled (eg: layer7 pattern matching, UTM, ecc.)
  • connections creation and management speed
  • added latency for packets traversing the device

To correctly measure such things, I developed a simple script-driven benchmark that uses the following programs (sample invocations are sketched after the list):

  • Netperf and wget, to test large-packet and streaming throughput
  • Ping, to test large / fragmented packet handling and latency
  • Apache Benchmark (ab), to test connection creation and management speed
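
For reference, here is a minimal sketch of the kind of invocations the benchmark script issues; the target address 192.168.0.100 and the file largefile.bin are placeholders, and the exact options I used may differ:

    # bulk TCP throughput, 60-second run (large packets)
    netperf -H 192.168.0.100 -t TCP_STREAM -l 60

    # HTTP streaming throughput, discarding the downloaded data
    wget -O /dev/null http://192.168.0.100/largefile.bin

    # latency with small packets, then with payloads large enough to be fragmented
    ping -c 100 -s 56 192.168.0.100
    ping -c 100 -s 4000 192.168.0.100

    # connection creation / management speed: 10000 requests, 50 concurrent
    ab -n 10000 -c 50 http://192.168.0.100/index.html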

To clearly show how different conditions can lead to very different results, I ran the tests with the following configurations:

  1. a very simple, default firewall config with two access rules
  2. a slightly more complex setup, with 10 more firewall rules added before the previous simple accept rule (to quantify the rule-traversal penalty)
  3. a simple configuration similar to #1, but with UTM services started but not enabled on any policy
  4. a simple configuration similar to #1, but with UTM services started and enabled on outgoing policies
  5. a simple configuration similar to #1, but with both client and server machines behind an IPsec 3DES and AES128 VPN between a CR25ia and an Openswan machine (to measure VPN throughput and latency penalty)
  6. a simple configuration similar to #3, but with both client and server machines behind an IPsec 3DES and AES128 VPN between a CR25ia and an Openswan machine (to measure VPN throughput and latency penalty with started UTM services)
  7. a simple configuration similar to #4, but with both client and server machines behind an IPsec 3DES and AES128 VPN between a CR25ia and an Openswan machine (to measure VPN throughput and latency penalty with full UTM applied to the VPN rules)

Please note that, while I usually measure VPN performance using two identical firewalls, this time I had only a single CR25ia. In order to terminate the VPN tunnel, I had to use Openswan on the target machine. Considering that the Openswan machine had a capable CPU, the measured VPN throughput / latency can be marginally better than when using a pair of Cyberoam firewalls.
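
To give an idea of the tunnel setup, here is a minimal sketch of the Openswan side, written as an ipsec.conf conn block; the addresses and subnets are hypothetical, and the exact proposals I used may have differed slightly. For the AES128 runs, aes128-sha1 simply replaces 3des-sha1:

    conn cr25ia-lab
        authby=secret
        left=203.0.113.2                  # Openswan gateway (hypothetical address)
        leftsubnet=192.168.2.0/24         # server-side LAN (hypothetical subnet)
        right=203.0.113.1                 # CR25ia WAN interface (hypothetical address)
        rightsubnet=192.168.1.0/24        # client-side LAN (hypothetical subnet)
        ike=3des-sha1;modp1024            # phase 1: 3DES, SHA1, DH group 2
        phase2alg=3des-sha1;modp1024      # phase 2: 3DES, SHA1, PFS group 2
        pfs=yes
        auto=start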

The UTM configuration was the following:

  • the content filter was configured to block violent and questionable sites;
  • the antivirus was enabled to scan all supported protocols (HTTP, FTP, SMTP, POP3, IMAP, IM);
  • intrusion prevention was enabled;
  • the application firewall was enabled and configured to block P2P applications.

A note on the UTM-enabled tests: generally speaking, the tests with started, but not policy-enabled, UTM services should be redundant, as the firewall does not inspect traffic unless specifically stated in the relevant policy. However, I realized that many UTM devices use different scan paths when UTM services are enabled (e.g. to correctly scan fragmented packets). So, in order to present the most useful information, I am going to report the performance of both the UTM-started-but-not-enabled and the fully-enabled-UTM modes.
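
As a concrete example, the fragmented-packet path can be exercised with an oversized ICMP payload (the target address is a placeholder); running the same command under both modes makes any scan-path difference visible in the round-trip times:

    # a 4000-byte payload spans three fragments on a 1500-byte MTU link;
    # the last output line reports min/avg/max/mdev round-trip times
    ping -c 100 -s 4000 192.168.0.100 | tail -1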

All tests and benchmarks were run between two laptops: a Dell E6410 client machine (Core i5-520M, 4 GB RAM and Gigabit Ethernet) running Lubuntu 12.04 amd64, and a Dell D620 server machine (Core2 T7200, 4 GB RAM and Gigabit Ethernet) running CentOS 6.2 amd64.

The entire small test network ran at Gigabit speed.

CR25ia's firmware version was 10.01.2.158.