[WEB SECURITY] Artificial Intelligence vs. Human Intelligence on a finite number of possible outcomes

Tasos Laskos tasos.laskos at gmail.com
Tue Feb 1 13:49:35 EST 2011


Hi guys,

This isn't a paper, a benchmark, or an implemented feature; it's 
something that just hit me and I'd appreciate some input.

It came to me while auditing a very slow server that produced a lot of 
false positives on blind attacks relying on time delays.
At some point the server just died, and every module using that attack 
type concluded that its payloads had been executed successfully, because 
the requests timed out.
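
For context, the naive check such modules effectively perform looks 
something like this Python sketch (my own simplification, not any 
tool's actual code; the names are made up):

    import time
    import urllib.request

    def looks_vulnerable(url, payload, delay=5.0, timeout=10.0):
        """Naive blind timing check: inject a payload that should make
        a vulnerable server sleep for `delay` seconds, then decide
        based on how long the response took.

        The flaw: if the server is dying and the request merely times
        out, the elapsed time also exceeds `delay`, so a dead server
        looks exactly like a successful injection.
        """
        start = time.monotonic()
        try:
            urllib.request.urlopen(url + payload, timeout=timeout)
        except Exception:
            pass  # a timeout still falls through to the time check
        return time.monotonic() - start >= delay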

However, don't focus on this particular situation (I already know the 
solution); it was merely the trigger for my question/suggestion/RFC, 
which I think will make for an interesting conversation.

Lots of people on this list would like to see our tools implement some 
sort of AI (I know for a fact that at least Michal does) to make 
educated guesses/decisions about a scan's results and adjust the report 
accordingly.
Training an expert system would take a lot of effort and time though, 
and until convergence is reached, false results will be reported as 
legitimate (SaaS solutions aside).

I'd like to note at this point that I'm a strong proponent of fixing 
the root of a problem rather than adding filtering layers on top of it, 
but let's set that aside for argument's sake as well.

Premises:
  * We have a finite number of entities that assert the existence of 
issues -- we'll call these "modules".
  * We have a finite number of outcomes for each module; usually a 
binary result (either true/vulnerable or false/safe), but in some cases 
with a twist regarding the certainty of a result (e.g. a notice that an 
issue may require manual verification) -- see the sketch after this list.
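
To make the second premise concrete, here's a minimal Python sketch; 
the names (Outcome, ModuleResult, requires_verification) are mine, not 
taken from any particular scanner:

    from dataclasses import dataclass
    from enum import Enum

    class Outcome(Enum):
        """The finite set of results a module can report."""
        VULNERABLE = "vulnerable"  # true  -> issue asserted
        SAFE = "safe"              # false -> no issue found

    @dataclass
    class ModuleResult:
        module_name: str
        outcome: Outcome
        # The "twist": some results carry a caveat that the finding
        # may need a human to confirm it.
        requires_verification: bool = False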

And here comes my point:
     Do we really need AI? Wouldn't simple rules that check for unusual 
results and give appropriate notice suffice, and be a better, more 
efficient way to handle this?

A possible implementation I have in mind is to pre-tag each module when 
it's added to the system.
The tags would specify key elements of a module's behavior and would 
later be used in the rule-based decision-making process; a rough 
illustration follows.
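
Something along these lines (hypothetical module names and registry 
layout; this isn't any existing scanner's API):

    # A hypothetical registry: each module is tagged with key
    # behavioral traits at the time it's added to the system.
    MODULE_TAGS = {
        "blind_sqli_timing":   {"timing_attack", "blind"},
        "blind_os_cmd_timing": {"timing_attack", "blind"},
        "xss_reflected":       {"injection"},
    }

    def tags_for(module_name):
        """Look up the behavioral tags a module was registered with."""
        return MODULE_TAGS.get(module_name, set())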

For instance, in the example I mentioned at the beginning of this 
e-mail, the system would check how many of the results carry the 
"timing_attack" tag and, if that number exceeded a preset threshold, 
remove those results from the scan report or flag them accordingly.
It could also take environment statistics (like average response times) 
into account to make a more well-rounded decision; a naive version of 
such a rule is sketched below.
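
(The threshold, tag names, result layout and response-time cutoff here 
are all made-up values for illustration.)

    def flag_suspicious_timing_results(results, avg_response_time,
                                       threshold=5, slow_cutoff=10.0):
        """Flag timing-attack findings when they look like noise.

        results           -- list of dicts such as
                             {"module": "blind_sqli_timing",
                              "tags": {"timing_attack"},
                              "flagged": False}
        avg_response_time -- mean server response time in seconds,
                             gathered during the scan (an environment
                             statistic)
        """
        timing = [r for r in results if "timing_attack" in r["tags"]]

        # Rule: too many timing-based positives at once, or a server
        # so slow that added delays are meaningless, smells like a
        # dying server rather than a pile of real vulnerabilities.
        if len(timing) > threshold or avg_response_time > slow_cutoff:
            for r in timing:
                r["flagged"] = True  # or drop them from the report
        return results

The appeal, compared to a trained classifier, is that a rule like this 
is cheap, deterministic and easy to audit.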

What do you guys think?

Cheers,
Tasos L.

PS. I guess this could be perceived as a pre-trained expert system, but 
it isn't really one.



