While important, throughput is not the only metric that matters: latency can be equally, or even more, crucial. Think, for example, of a branch office using its Internet line to carry VoIP traffic: high latency can lead to metallic voice and other sound artifacts.
So, how does today's hero perform in the latency tests? Let's see:
A variable-sized ping flood (100,000 packets sent and received) shows no increase in latency when going from 2 to 12 rules. On the other hand, UTM understandably drives latency slightly up; interestingly, this also happens in the “UTM services started but not enabled” scenario, albeit with a very low impact.
VPN has a higher impact, and obviously enabling both UTM and VPN has the greatest influence. The problem is that in the worst case (fragmented packets plus a 3DES VPN) latency grows severalfold, with a mean of 24.4 ms. Moreover, the mean and maximum deviations are very high, at 19.3 ms and 93.4 ms respectively. Bottom line: if you plan to run UTM inside a VPN, avoid the 3DES encryption scheme; in AES mode you will see vastly lower latencies.
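For reference, here is a minimal sketch of how ping-style latency statistics can be derived from raw RTT samples. The `rtts` values below are illustrative placeholders, not the measurements from this review, and the `mdev` formula shown is the one commonly attributed to the iputils `ping` tool (the population standard deviation of the samples):

```python
import math

def ping_stats(rtts):
    """Compute min/avg/max/mdev (all in ms) from a list of RTT samples."""
    avg = sum(rtts) / len(rtts)
    # iputils-style mdev: sqrt(mean of the squares minus square of the mean)
    mdev = math.sqrt(sum(r * r for r in rtts) / len(rtts) - avg * avg)
    return min(rtts), avg, max(rtts), mdev

# Illustrative samples only, not the review's dataset
samples = [4.8, 5.2, 24.4, 61.0, 12.3]
rtt_min, rtt_avg, rtt_max, rtt_mdev = ping_stats(samples)
print(f"rtt min/avg/max/mdev = {rtt_min}/{rtt_avg:.1f}/{rtt_max}/{rtt_mdev:.1f} ms")
```

A high mdev relative to the mean, as in the 3DES worst case above, indicates jittery rather than merely slow round trips, which is what hurts VoIP most.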
Another view of the same dataset can be obtained by considering the total time needed to ping the remote host. Keep in mind that this measurement includes the client-side overhead needed to generate the packets and the Ethernet-specific recovery time (interframe gap) between two consecutive packets:
Total-time results are (inevitably) perfectly in line with the RTT analysis.
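That relationship follows directly from the measurement model: the total is simply the sum of the individual RTTs plus a fixed per-packet client-side cost (packet generation plus the interframe gap). The numbers below are made-up placeholders for illustration, not figures from this test:

```python
def total_ping_time_ms(rtts, per_packet_overhead_ms):
    """Total test duration: sum of RTTs plus a fixed per-packet client cost."""
    return sum(rtts) + len(rtts) * per_packet_overhead_ms

# Placeholder numbers only: 1,000 packets at 5 ms each,
# with a hypothetical 0.25 ms of client-side overhead per packet
rtts = [5.0] * 1000
print(total_ping_time_ms(rtts, 0.25))  # prints 5250.0
```

Since the overhead term is constant across scenarios, any change in total time tracks the change in mean RTT, which is why the two views of the dataset must agree.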