Manual vs. Automated Testing
I’ve always been a firm believer in incorporating manual testing into any security assessment; after all, a human is the best judge of application output, and the best equipped to truly understand how an application is supposed to function. So after attending Darren Challey’s (GE) presentation at the 2009 OWASP AppSec conference, I was encouraged that someone had actually measured the value of manual testing, and the numbers justified my belief. According to Darren, no single application assessment or code review product could find more than about 35% of the total vulnerabilities GE could find with a manual process. That alone should encourage anyone serious about eradicating vulnerabilities in their applications to step it up a notch; I would not want to be the person certifying an application for public consumption with only about a third of its security issues found!
To understand why manual testing is so critical, let’s break down some of the reasons why assessment tools have limitations. For network scanners, findings are largely inferred from remote OS and application fingerprints; accuracy decreases if that fingerprint is inaccurate or deliberately masked. Application scanners must interpret application output; if an application uses custom messaging, what is the scanner supposed to conclude? Code review products will never accurately interpret code comments, identify custom backdoors, or follow application functionality that appears orphaned. Keep in mind, too, that an assessment product only reports on something if the vendor has written a check or signature for it; think about how many vulnerability signature authors exist compared to the number of hackers identifying new exploits.
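To make the custom-messaging problem concrete, here is a minimal sketch (hypothetical code, not taken from any real application or product) of a handler that collapses every database error into the same friendly message. A scanner fuzzing this endpoint for SQL injection never sees the database error string its signature expects, so it reports nothing, while a manual tester would notice that an injected payload still changes the application’s behavior.

```python
import sqlite3

def lookup_user(user_id: str) -> str:
    """Vulnerable lookup: user_id is concatenated straight into the SQL."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
    conn.execute("INSERT INTO users VALUES ('1', 'alice')")
    try:
        # Injection point: a payload like "1' OR '1'='1" rewrites the query.
        row = conn.execute(
            "SELECT name FROM users WHERE id = '" + user_id + "'"
        ).fetchone()
        return row[0] if row else "No such user."
    except sqlite3.Error:
        # Custom messaging: every error collapses into one generic page,
        # so there is no SQL error signature for a scanner to match on,
        # yet the injection flaw is still there.
        return "Something went wrong. Please try again later."

if __name__ == "__main__":
    print(lookup_user("1"))             # normal request -> "alice"
    print(lookup_user("1' OR '1'='1"))  # injection succeeds silently
    print(lookup_user("1'"))            # broken syntax -> generic page
```

A human comparing those three responses spots the pattern immediately; a signature-driven tool looking for a database error string sees nothing to report.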
Automated testing has a very important role in security assessments: these tools identify a large swath of mainstream issues efficiently, and manual testing can be expensive and time consuming. However, fixing vulnerabilities after an application or system is in production is even more expensive. According to the Systems Sciences Institute at IBM, a bug fixed in production or maintenance costs roughly 100x more than one fixed at design time, and the cost of a breach rises every year. Adding comprehensive manual testing to your assessment criteria has a real ROI and, more importantly, could improve your detection accuracy by 60% or more: if automated tools top out around 35% coverage, manual testing accounts for most of the remainder.
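As a back-of-the-envelope illustration of that claim (the 100x multiplier is the IBM figure above; the dollar amounts and vulnerability counts are made-up assumptions, not data from any study):

```python
# Hypothetical numbers to illustrate the ROI argument, not real data.
DESIGN_FIX_COST = 500                  # assumed cost of a design-time fix
PROD_FIX_COST = DESIGN_FIX_COST * 100  # IBM: ~100x once in production

TOTAL_VULNS = 100
FOUND_BY_TOOLS = 35     # the ~35% ceiling for a single automated product
FOUND_WITH_MANUAL = 95  # assumed coverage once manual testing is added

extra_found = FOUND_WITH_MANUAL - FOUND_BY_TOOLS  # 60 more, i.e. "60%+"
avoided_cost = extra_found * (PROD_FIX_COST - DESIGN_FIX_COST)
print(f"Extra vulnerabilities caught before production: {extra_found}")
print(f"Hypothetical avoided cost: ${avoided_cost:,}")
```

Even with conservative assumptions, the avoided production-fix cost dwarfs the price of the manual assessment itself.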