Testing methodologies
Characterizing a firewall's performance, as is usual with benchmarking, is not an easy task: you must account for different workloads and different possible configurations which, in turn, place very different loads on the device. The various corners of network performance can be described by measuring quite different things, namely:
- small- and large-packet throughput with simple stateful inspection
- small- and large-packet throughput with VPN encryption (3DES and AES, with SHA1 and DH/PFS Group 2)
- small- and large-packet throughput with advanced filters enabled (e.g. layer-7 pattern matching, UTM, etc.)
- connection creation and management speed
- the added latency for packets traversing the device
To measure these things correctly, I developed a simple script-driven benchmark (sketched after the list below) that uses the following programs:
- Netperf (ver. 2.4.5) and wget, to test large-packet and streaming throughput
- Ping, to test large and fragmented packet handling and latency
- Apache Benchmark (ab), to test connection creation and management speed
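The following sketch shows the kind of commands the benchmark script runs; the server address, file names, request counts and packet sizes are illustrative placeholders, not the exact values used in the tests.

    #!/bin/bash
    # Minimal sketch of the benchmark driver (illustrative values only).
    SERVER=192.168.1.10               # hypothetical server behind the TZ100

    # Large-packet / streaming throughput: netperf TCP stream + HTTP download
    netperf -H $SERVER -t TCP_STREAM -l 30
    wget -O /dev/null http://$SERVER/bigfile.bin

    # Small-packet throughput: cap the send size at 64 bytes
    netperf -H $SERVER -t TCP_STREAM -l 30 -- -m 64

    # Latency and fragmented-packet handling with differently sized pings
    ping -c 100 -s 56   $SERVER      # baseline latency
    ping -c 100 -s 1472 $SERVER      # largest unfragmented payload on Ethernet
    ping -c 100 -s 4000 $SERVER      # forces IP fragmentation

    # Connection creation/teardown rate: many short HTTP requests
    ab -n 10000 -c 32 http://$SERVER/index.html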
To clearly show how different conditions can lead to very different results, I ran the tests with the following configurations:
- a very simple firewall config with a single accept rule
- a slightly more complex setup, with 10 more firewall rules added before the previous accept rule, so we can quantify the penalty the firewall engine pays for traversing the rule list (a rough Linux analogue of this idea is sketched after this list)
- a simple configuration similar to #1, but with UTM enabled at common settings (incoming traffic inspection only)
- a simple configuration similar to #1, but with UTM enabled at maximum settings (incoming and outgoing traffic inspection)
- a simple configuration similar to #3, but with both client and server machines behind an IPsec 3DES and AES128 VPN between two TZ100 units (to measure VPN throughput and latency penalty)
- a simple configuration similar to #4, but with both client and server machines behind an IPsec 3DES and AES128 VPN between two TZ100 units (to measure VPN throughput and latency penalty)
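The TZ100 itself is configured through its web GUI, but the rule-traversal idea behind configuration #2 is easy to picture with an iptables analogue on Linux: a handful of never-matching rules that the engine must evaluate before it reaches the final accept. The subnets below are made up for illustration.

    # Rough Linux/iptables analogue of configuration #2
    # Ten rules that never match, forcing the engine to traverse them...
    for i in $(seq 1 10); do
        iptables -A FORWARD -s 10.99.$i.0/24 -j DROP
    done
    # ...before the single accept rule of configuration #1
    iptables -A FORWARD -j ACCEPT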
The UTM configuration was as follows:
- the content filter was configured to block violent and questionable sites;
- antivirus was enabled and configured to scan not only all recognized protocols (HTTP, FTP, IMAP, SMTP, POP3 and CIFS) but also generic TCP streams;
- the CloudAV database was enabled;
- intrusion prevention was enabled and configured to detect all attacks but to block only medium- and high-priority ones;
- application detection was enabled and configured to block P2P applications;
- antispyware was enabled and configured to detect all spyware but to block only medium- and high-priority threats.
A note on the UTM-enabled tests: generally, you want to use incoming-only inspection to save CPU time while still inspecting all traffic routed to your internal LAN, so this is the setting you should care about most when reading the following results. On the other hand, the “max-UTM” results (incoming and outgoing traffic inspection) are quite interesting as well, since this configuration stresses the machine as much as possible.
All tests and benchmarks were run between two laptops: a Dell E6410 client (Core i5-520M, 4 GB RAM and Gigabit Ethernet) running Debian 6 amd64, and a Dell D620 server (Core2 T7200, 4 GB RAM and Gigabit Ethernet) running CentOS 6.2 amd64. While the switch, server and client machines were Gigabit-ready, the TZ100 features only 10/100 ports, so the entire small network was running at 100 Mbit/s.
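If you want to verify what the link actually negotiated, a quick check on the Linux side looks like this (the interface name eth0 is just an assumption):

    # Show the negotiated speed of the NIC facing the TZ100
    ethtool eth0 | grep Speed
    # expected output on this network:  Speed: 100Mb/s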
The TZ100's firmware version was 5.8.1.5-46o.