[WEB SECURITY] stats on how web app vulns are identified

Minga minga at minga.com
Fri Jun 10 10:47:25 EDT 2005

> Based on the last 100 websites that WhiteHat Security has assessed 
> (using the WASC Threat Classification as a baseline), below are the 
> statistical results, using both automated scanning and human testing:
> In 36% of websites, humans identified zero vulnerabilities beyond the 
> scanner.
> In 17% of websites, humans identified all vulnerabilities and scanner 
> identified zero.
> In 47% of websites, the experts and the scanner were complementary, 
> identifying different vulnerabilities.

I don't have the raw data in front of me, but we have completely stopped
relying on large scanner utilities to perform web application tests.
(Hell, we never even started.)

If you only have 40 hours to perform a test, and you have to spend 8 of
those poring over hundreds of false positives to get a single finding, it's
just not worth it.

We have migrated to using tools only for:
1) Web SERVER evaluation (nmap, cgiscan, Nikto)
2) SSL decryption/encryption (stunnel)
3) Proxy utilities (homegrown in Perl, a precursor to Achilles)
4) A utility that sends garbage data to variables, much like all
other web-app utilities, but this one's output is CSV. The five main
areas we look for are: a) overflow, b) XSS, c) garbage data, d) numbers
(tries all sorts of different numbers), e) SQL injection.
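For illustration, a minimal sketch of what such a garbage-data utility could look like. This is Python for brevity (the real tooling described above is Perl and not public); the payload sets, `fuzz_param`, `send`, and `demo_send` are all made-up names for this sketch, not the actual tool:

```python
import csv
import io

# Hypothetical payload sets for the five checks named above.
PAYLOADS = {
    "overflow": ["A" * 1024, "A" * 65536],
    "xss": ["<script>alert(1)</script>", "\"><img src=x onerror=alert(1)>"],
    "garbage": ["%00%ff%fe", "\x00\x01\x02"],
    "numbers": ["-1", "0", "2147483648", "9999999999"],
    "sqli": ["'", "' OR '1'='1", "1;--"],
}

def fuzz_param(param_name, send):
    """Send every payload against one parameter; return CSV text of results.

    `send` is a caller-supplied callable (hypothetical) that submits the
    request and returns (http_status, response_body).
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["param", "category", "payload", "status", "suspicious"])
    for category, payloads in PAYLOADS.items():
        for payload in payloads:
            status, body = send(param_name, payload)
            # Flag verbose errors for later confirmation by hand.
            suspicious = status >= 500 or "ODBC" in body or "stack trace" in body.lower()
            writer.writerow([param_name, category, payload, status, suspicious])
    return out.getvalue()

# Stub transport for the example: pretends the app 500s on a single quote.
def demo_send(name, value):
    if "'" in value:
        return 500, "ODBC error in query"
    return 200, "ok"

print(fuzz_param("user_id", demo_send))
```

The CSV output is the point: every "suspicious" row goes to a human for confirmation, never straight into a report.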

Anything "good" that tool #4 finds is then confirmed by hand, and the
REAL scope of its risk is analyzed by a human.

Tool #4 is great at finding verbose error messages. But that's a medium
risk - what CAUSED the verbose error message is possibly the higher risk.

I would honestly say that:
-	in 0% of the websites we test did humans identify zero
	vulnerabilities beyond the scanner.

More important to our customers/clients, automated tools have NO way to
discuss the real RISK of discovered holes. Any time a test relies on
"tool output" only, you are essentially ignoring:

1) Combined findings. Two medium-risk findings on the same page could
together be a high risk (for example, combine verbose error messages with
poor server-side filtering of user data).

2) Data-based logic bugs. Try certain requests as one user (user A),
keep all variables the same, then try the same request as user B. Whose
data will user B see? User B's or user A's?
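A sketch of that two-user replay check. The `fetch` callable, `check_cross_user_access`, and the toy backend are hypothetical names invented for this example:

```python
def check_cross_user_access(fetch, url, params, session_b, marker_a):
    """Replay user A's exact request under user B's session.

    `fetch` is a caller-supplied callable (hypothetical) that performs the
    request and returns the response body. `marker_a` is a string that only
    appears in user A's data (e.g. A's account number). If it shows up in
    B's response, B is seeing A's data.
    """
    body_b = fetch(session_b, url, params)
    return marker_a in body_b  # True => access-control failure

# Toy backend that keys data off the request parameter, not the session:
RECORDS = {"1001": "acct-of-alice", "1002": "acct-of-bob"}
def bad_fetch(session, url, params):
    return RECORDS[params["record_id"]]   # ignores who is asking!

leak = check_cross_user_access(bad_fetch, "/statement",
                               {"record_id": "1001"},
                               session_b="bob",
                               marker_a="acct-of-alice")
print(leak)  # True: user B retrieved user A's record
```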

3) Permission-based logic bugs. If user A can see menu pages 1, 2, 3,
and 4, but user B is only allowed to see menu pages 1, 2, and 3, can
user B get to menu page 4? Once there, can they perform the actions on
the page? Can regular users get to the administrative pages?

If so, that is a HUGE risk that a tool would never catch. If someone
develops a tool that is "multiuser" aware, I will be very impressed.
It *IS* doable.
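A multiuser-aware check really is doable. Here is a sketch of forced browsing against a permission matrix; the `ALLOWED` table, `forced_browse`, and the `get` callable are toy names invented for this example:

```python
# Which pages each test account is *supposed* to reach (from the app spec):
ALLOWED = {
    "userA": {"/menu1", "/menu2", "/menu3", "/menu4"},
    "userB": {"/menu1", "/menu2", "/menu3"},
}

def forced_browse(get, user, candidate_pages):
    """Return pages `user` can actually retrieve but is not entitled to.

    `get` is a caller-supplied callable (hypothetical) returning the HTTP
    status for a request made with `user`'s session.
    """
    return sorted(p for p in candidate_pages
                  if p not in ALLOWED[user] and get(user, p) == 200)

# Toy server that forgets to check permissions once you are logged in:
def leaky_get(user, page):
    return 200  # serves every page to every authenticated user

print(forced_browse(leaky_get, "userB", ALLOWED["userA"]))
```

Requesting user A's full page set under user B's session, and diffing the result against what B is entitled to, is exactly the kind of cross-account awareness current scanners lack.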

4) Best practices. Examples:
	a) Login error messages that allow login brute forcing (a
	different error message for "correct login/bad password" than for
	"incorrect login")
	b) Cookie settings: "Secure" flag? Path attribute?
	c) Filenames that reveal information
	d) Poor pathnames
	e) Variable names (do they reveal too much? e.g. EncryptedCredentials=....)
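Some of these best-practice checks can be partially automated. A sketch of check (b), auditing a Set-Cookie header; the header string and `audit_cookie` name are made up for this example:

```python
def audit_cookie(set_cookie_header):
    """Flag common weaknesses in a single Set-Cookie header value."""
    attrs = [a.strip().lower() for a in set_cookie_header.split(";")]
    findings = []
    if "secure" not in attrs:
        # Without Secure, the browser will also send the cookie over HTTP.
        findings.append("missing Secure flag (cookie sent over HTTP)")
    if "path=/" in attrs:
        findings.append("Path=/ scopes the cookie to the whole site")
    return findings

print(audit_cookie("SESSIONID=abc123; Path=/; HttpOnly"))
```

Even here, a human still decides the real risk: a site-wide session cookie may be fine for a one-application host and serious on a shared one.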


The Web Security Mailing List
