While important, throughput is not the only metric that matters: latency can be equally, if not more, crucial. Think, for example, of a branch office using its Internet line to carry VoIP traffic: high latency can lead to metallic voice and other audio artifacts.
So, how does today's hero perform in latency tests?
A variable-sized ping flood (100,000 packets sent/received) shows no increase in latency when going from 1 to 11 rules. On the other hand, UTM understandably drives latency slightly up, adding 0.3 ms, 0.7 ms and 1.5 ms for small, big and fragmented packets respectively. VPN has a similar impact, and obviously enabling both UTM and VPN has the greatest influence. Anyway, even in the worst case (fragmented packets, UTM + VPN) latency increases by at most 3.5 ms: an excellent result, especially considering the very low (< 2 ms) mean deviation shown in this heaviest mode.
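For readers who want to reproduce this kind of test, the per-run statistics come straight from the summary line that iputils `ping` prints. A minimal parsing sketch follows; the host, packet sizes and the sample summary string are illustrative placeholders, not the actual setup or measured output used here.

```python
import re

def parse_rtt(summary: str) -> dict:
    """Extract min/avg/max/mdev (in ms) from an iputils ping rtt summary line."""
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms",
        summary,
    )
    if not m:
        raise ValueError("no rtt summary found in ping output")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

# Illustrative summary line, as produced by a flood run such as
#   ping -f -q -c 100000 -s 56 192.0.2.1
# (flood mode requires root; 192.0.2.1 is a placeholder test address):
sample = "rtt min/avg/max/mdev = 0.312/0.384/9.816/0.171 ms"
stats = parse_rtt(sample)
print(stats["avg"], stats["mdev"])
```

Varying the `-s` payload size (e.g. a small payload, one near the MTU, and one large enough to force fragmentation) gives the small/big/fragmented cases discussed above.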
Another view of the same dataset can be obtained by considering the total time needed to ping the remote host. Keep in mind that this measurement includes the client-side overhead needed to generate the packets:
Total time results are (inevitably) perfectly in line with the RTT analysis.
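The total-time figure can be read from the footer line that `ping` prints at the end of a run. A hypothetical extraction helper, with an illustrative (not measured) footer string:

```python
import re

def total_time_ms(footer: str) -> int:
    """Extract the total elapsed time (ms) from iputils ping's footer line."""
    m = re.search(r"time (\d+)ms", footer)
    if not m:
        raise ValueError("no total time found in ping output")
    return int(m.group(1))

# Illustrative footer from a 100,000-packet flood; this elapsed time also
# includes the client-side overhead of generating the packets:
line = "100000 packets transmitted, 100000 received, 0% packet loss, time 42015ms"
print(total_time_ms(line))
```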