Security product testing is nontrivial. I already knew this from my years on the vendor side of the aisle, but in my six months as CTO at NSS Labs, I’ve learned that it’s even more challenging and nuanced than I could have imagined.
Because it has to be. And here’s why.
We recently completed our Next Generation Firewall (NGFW) 7.0 Group Test. While the NGFW market is mature, it continues to expand, and NSS’ many years of experience testing these products allow me to confidently explain why such testing remains critical.
First, testing is a way to assess not only whether a vendor definitively understands the domains in which it develops and sells, but also whether it is firmly committed to delivering top-notch capabilities for its customers. The security world is a diverse one, full of eccentric personalities that are wonderful to work with and, let’s be honest, a few that are more challenging. I myself am a mix of Sheldon Cooper and Paul Bunyan—or so I’ve been told. While the different personalities can make for some interesting conversations on a wide range of topics, especially during testing, the protection of the end customer is always the point of reference—or should be—regardless of the personalities involved.
Additionally, testing helps challenge assumptions around mature technologies that are pervasively deployed, have been through several refresh cycles, and are table stakes for effective security. If you’d asked me six months ago, I would’ve said there couldn’t be too many surprises when testing these established products. I was unequivocally wrong.
Security is a very different domain from IT, something of which I am reminded constantly, and this latest round of testing is an example of why I chose this industry as my profession. In our latest NGFW group test, we introduced 26 new evasion test cases, each focused on HTTP (the new TCP), and the results presented us with a bit of a challenge (i.e., testing is hard). Of the 11 products we tested, only two appropriately handled all evasions, while the rest failed to handle between one and 11 of the evasions tested.
The Sheldon Cooper in me, or the pedant, says one evasion is enough to bypass security and thus the product should be placed in the Caution category. However, the Paul Bunyan in me, or the pragmatist, recognizes that there is a huge difference between missing one evasion and missing 10. I also recognize that not all evasions are created equal and that a UTF-32 encoding evasion is not nearly as impactful as a TCP segmentation evasion. This brings us back to the challenges I referred to earlier: How do we appropriately represent the differences between the evasions we test against? How do we ensure customers are protected as quickly as possible? How do we make sure customers are not needlessly put at increased risk? I’m proud of our team for answering these challenges.
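To make that distinction concrete, here is a minimal, hypothetical Python sketch. This is not our test harness, and the payloads and segment sizes are invented for illustration; it simply shows why the two evasion classes differ in impact. A charset trick such as UTF-32 encoding only defeats an engine that fails to decode one particular content encoding, while TCP segmentation attacks the stream reassembly that every inspection decision on any TCP traffic depends on.

```python
# Hypothetical illustration only -- not NSS test harness code.

def utf32_encode_body(http_body: str) -> bytes:
    """Charset-encoding evasion: serve the same content as UTF-32.
    An engine that only decodes ASCII/UTF-8 may fail to match its
    signatures against these bytes, though a browser that honors the
    declared charset still renders the page."""
    return http_body.encode("utf-32-le")

def tcp_segment(payload: bytes, mss: int = 4) -> list[bytes]:
    """TCP segmentation evasion: split one payload across many tiny
    segments. A device that inspects segments individually instead of
    reassembling the stream can miss a signature that straddles a
    segment boundary -- and this applies to *all* TCP traffic, which
    is why it is far more impactful than a charset trick."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

if __name__ == "__main__":
    body = "<script>exploit()</script>"   # stand-in for malicious content
    print(utf32_encode_body(body)[:16])   # bytes a naive ASCII matcher won't recognize
    for seg in tcp_segment(body.encode()):
        print(seg)                        # the string "exploit" never arrives whole
```

With a four-byte segment size, the stand-in signature string “exploit” never appears intact in any single segment, which is exactly why a product that does not reassemble the stream before inspection can be bypassed wholesale.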
The group test results were surprising, and in many ways unfortunate. However, for me, what was more surprising, and ultimately heartening, was the competence and professionalism of each vendor. I was impressed at the speed with which some of them responded to these missed evasions and produced workable fixes for their customers. It was reassuring to see their genuine desire to make sure they ultimately got things right and to find that they were readily available to work through the details. This has not been my experience with all security domains.
Even with this great commitment, the unfortunate reality is that not all vendors were able to resolve all issues within days, and even if they could have done so, their customers were not likely to have deployed the fixes by publication. As a result, we decided to delay publication of the details for the 26 net new evasion test cases for 90 days, since it is difficult to replicate the evasions until we publish these details. In this case, obscurity works.
We faced a similar challenge for products that failed previously published evasion test cases, which are well known, can be quickly and easily replicated, and can’t be effectively obscured. We chose to delay publication of the test reports for these products for 30 days, or until the evasion cases are resolved. While this is not a perfect solution, it is the best available course of action.
The test and the discovered evasions certainly presented their challenges, but I’d like to talk about the importance of quick responses in adverse conditions. Short of not having an evasion, there is no replacement for responsiveness. After analyzing the evidence NSS provided, the following vendors moved quickly, professionally, and without excessive pedantry to resolve the identified evasions. I’ve placed them in the order of their responsiveness, although there isn’t much difference between them overall.
Please note that NSS has not fully validated any of the following updates for other potential impact to security, performance, and/or stability, and enterprises should fully qualify all hotfixes before deployment in their environments. I look forward to full verification of these fixes in the coming weeks.
Palo Alto Networks produced an update to Application and Threat Content (version 699). This update was applied and NSS verified that all tested evasion techniques were subsequently blocked.
Check Point Software provided a hotfix for software release R77.20, and NSS verified that after this hotfix, all tested evasion techniques were blocked.
Barracuda Networks provided a hotfix (“Hotfix 828 – Firewall”), and NSS verified that after this hotfix, all tested evasion techniques were blocked.
SonicWall recommended an update to the Security Services signature databases. An update was performed (Signature Database Timestamps: Gateway Anti-Virus – UTC 05/17/2017 16:31:39.000; Intrusion Prevention – UTC 05/17/2017 19:29:30.000; Anti-Spyware – UTC 05/17/2017 19:28:09.000), and NSS verified that after this update, all tested evasion techniques were blocked.
Finally, I’d like to talk about the future. We have several tests publishing between now and the end of the year. Some of these tests are in mature markets such as next generation intrusion prevention systems (NGIPS), while others will focus on emerging markets such as breach prevention systems (BPS). We are also exploring how to effectively test security technologies that are delivered via the cloud, and I hope to begin formalizing that soon. I hope that the positive engagement I’ve had with NGFW vendors is repeated in these tests, but I won’t be entirely surprised if I encounter at least one or two interesting personalities along the way. This is security, after all.
Whether you are a vendor, an enterprise, an analyst, a researcher, or a hobbyist, please don’t be afraid to reach out and ask questions, challenge perspectives, express needs, or simply say whether you agree. While I can’t say that we will always see eye to eye on perspectives or outcomes, I can say with absolute certainty that I will give all perspectives due consideration and apply the right amount of diligence to each concern in order to reach fair conclusions. The goal is, and will continue to be, to represent the interests of customers and to ensure they receive the protections and the value they deserve from their security technologies.