[WEB SECURITY] stats on how web app vulns are identified

Jeremiah Grossman jeremiah at whitehatsec.com
Tue Jun 7 13:34:16 EDT 2005

There are many security consultants and service providers, including 
myself, who perform black-box web application vulnerability 
assessments. In order to speed the identification of vulnerabilities, 
people use a variety of open-source/commercial scanners and proxy 
utilities.  In my field experience, I've tested websites where it's 
possible to find all vulnerabilities with a scanner (because manual 
testing revealed nothing else);  websites where every vulnerability 
needed to be found by hand (because the scanner reported zero); and 
websites where the tester and the scanner each found different 
vulnerabilities. I'm sure others on the list have experienced similar 
results.

What I haven't seen discussed in the industry, probably due to lack of 
hard data, is what the statistical breakdown looks like. For example, 
if we analyze assessment results on a website-by-website basis, how are 
vulnerabilities typically identified? What does the average website 
require as a testing methodology? I'd like to present our data 
(WhiteHat Security) in hopes that others will share their 
data/thoughts/experiences on the subject as well.

Based on the last 100 websites that WhiteHat Security has assessed 
(using the WASC Threat Classification as a baseline), below are the 
statistical results, using both automated scanning and human testing:

In 36% of websites, humans identified zero vulnerabilities beyond the 
scanner's findings.
In 17% of websites, humans identified all vulnerabilities and the 
scanner identified zero.
In 47% of websites, the experts and the scanner were complementary, 
identifying different vulnerabilities.
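
For anyone who wants to run the same breakdown on their own assessment 
data, here's a rough sketch of the per-website classification in 
Python. The data, site names, and category labels are made up for 
illustration; this isn't our actual tooling:

    # Classify each assessed site by whether the scanner, the human
    # tester, or both found its vulnerabilities.
    # Hypothetical data: each site maps to the sets of vulnerability
    # IDs found by the scanner and by the human tester.
    assessments = {
        "site-a": {"scanner": {"XSS-1", "SQLI-2"}, "human": {"XSS-1"}},
        "site-b": {"scanner": set(), "human": {"LOGIC-1"}},
        "site-c": {"scanner": {"XSS-3"}, "human": {"XSS-3", "AUTH-4"}},
    }

    counts = {"scanner found all": 0, "humans found all": 0,
              "complementary": 0}

    for results in assessments.values():
        human_only = results["human"] - results["scanner"]
        if not human_only:
            counts["scanner found all"] += 1  # humans added nothing new
        elif not results["scanner"]:
            counts["humans found all"] += 1   # scanner reported zero
        else:
            counts["complementary"] += 1      # each found different bugs

    total = len(assessments)
    for category, n in counts.items():
        print(f"{category}: {n / total:.0%} of websites")

Running it over a real result set (one scanner finding list and one 
human finding list per site) would reproduce the three-way split above.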


Jeremiah Grossman
