[WEB SECURITY] Artificial Intelligence vs. Human Intelligence on finite amounts of possible outcomes

Erik Peterson EPeterson at Veracode.com
Tue Feb 1 21:59:32 EST 2011


The real problem is that this is a deep dark hole.

Automated Web application testing would be easy(tm) if it weren't for the
edge cases, which are legion. Oh, and because web technology reinvents
itself every 6 months, anything you create that worked 6 months ago will
break sooner or later, so the cycle of edge cases is, for all practical
purposes, infinite.

Humans handle edge cases almost subconsciously; scanners, on the other
hand, tend to go sideways. But like MZ says, the (better) scanners do a
lot of pre-, mid- and post-processing to handle as much as they can
gracefully. WebInspect and AppScan (just as examples) both do
pre-processing along with some black magic around 404 checking, site
availability, response times and login/logout detection, to name a few
things, with some more interesting post-processing tricks coming in the
future, I'm sure.
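
To make that concrete, here is a rough sketch (Python, purely
illustrative, not how any of those products actually implement it) of the
soft-404 baselining idea: request a few URLs that cannot exist,
fingerprint the responses, and use those fingerprints later to tell real
pages from custom error pages. Every name and threshold below is made up
for the example.

    import hashlib
    import random
    import re
    import string
    import urllib.error
    import urllib.request

    def normalize(body):
        # Strip volatile bits (digits, whitespace runs) so near-identical
        # error pages hash to the same signature.
        text = body.decode("utf-8", errors="replace").lower()
        text = re.sub(r"\d+", "", text)
        text = re.sub(r"\s+", " ", text)
        return hashlib.sha1(text.encode()).hexdigest()

    def probe_404_signatures(base_url, probes=5):
        # Request a handful of URLs that should not exist and collect the
        # response signatures; these become the "not found" baseline.
        sigs = set()
        for _ in range(probes):
            junk = "".join(random.choices(string.ascii_lowercase, k=16))
            try:
                with urllib.request.urlopen(base_url + "/" + junk) as resp:
                    sigs.add(normalize(resp.read()))
            except urllib.error.HTTPError as err:
                # A real 404 status still gives us a body to fingerprint.
                sigs.add(normalize(err.read()))
        return sigs

    def looks_like_soft_404(body, baseline):
        # A "200 OK" page whose fingerprint matches the baseline is almost
        # certainly a custom error page, not real content.
        return normalize(body) in baseline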

But again, the problem is that this is a bottomless pit unless you draw
the line somewhere. You will never run out of edge cases to deal with, and
even if you made it modular, at what point is it smarter to just use your
brain and notice the scan has gone wrong vs. trying to write code to deal
with every edge case?
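
For what it's worth, the kind of code you do end up writing looks
something like the sketch below (again Python, with the Finding type and
the threshold invented for the example): a post-scan pass that collapses
duplicate findings and flags a run where nearly every finding carries the
same response signature, roughly the cumulative 404 check MZ describes
below.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        check: str       # e.g. "xss", "sqli"
        url: str         # URL the finding was reported against
        signature: str   # normalized response fingerprint (see earlier sketch)

    def postprocess(findings, max_uniform_ratio=0.8):
        # 1. Collapse exact duplicates: the same check firing on the same
        #    page with the same response fingerprint.
        unique = list({(f.check, f.url, f.signature): f for f in findings}.values())

        # 2. Cumulative sanity check: if nearly every finding shares one
        #    response signature, we were probably talking to a login redirect
        #    or an error page the whole time and the scan should be redone.
        if unique:
            sig, count = Counter(f.signature for f in unique).most_common(1)[0]
            if count / len(unique) > max_uniform_ratio:
                print("warning: %d/%d findings share signature %s..., "
                      "scan likely went sideways" % (count, len(unique), sig[:12]))
        return unique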

Unless, of course, you are trying to build a tool that will be used by
people who know nothing about web applications or web security, to which I
say "Good luck with that" :)

++EJP


On 2/1/11 7:35 PM, "Tasos Laskos" <tasos.laskos at gmail.com> wrote:

>>> C'mon man, don't keep it to yourself, share the examples.
>> Well, they're pretty trivial, but for example, skipfish does a
>> postprocessing round to eliminate duplicates and other loops; and
>> generally looks at a variety of information collected in earlier
>> checks to make decisions later on (e.g., the outcome of 404 probes,
>> individual and cumulative - if there are too many 404 signatures,
>> something has obviously gone wrong; etc.). None of this is special,
>> and it does not prevent it from being extremely dumb at times, but
>> it's probably unfair to say that absolutely no high-level
>> meta-analysis is being done.
>>
>> /mz
>>
>Of course not, I didn't mean to say that all scanners just blindly spew
>out their findings, but they seem to stick with the bare minimum, just
>enough for their results to make sense and reduce some noise.
>(Myself included, obviously.)
>
>Like you said, there's much room for improvement and we need to start
>somewhere.
>
