[WEB SECURITY] Artificial Intelligence vs. Human Intelligence on finite amounts of possible outcomes
tasos.laskos at gmail.com
Tue Feb 1 16:53:40 EST 2011
My bad, the statement I had read about your thoughts on AI wasn't
accompanied by such an elaborate explanation.
(Anyways, let's forget the term AI for now and focus on my proposed system.)
My point was the interpretation of results, as I mentioned in my previous
reply (list latency can play tricks on us).
In very simple terms, you as a person can draw/extrapolate conclusions
based on a combination of factors from the results of a scan.
* if a specific page takes an abnormally long time to respond then it
must be doing some heavy-duty processing (and thus can be a perfect
candidate for closer inspection)
* if a lot of requests from some point on timed out then the server died
These are very simple examples, and anyone worth their salt will be able
to come to the same conclusions on their own.
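The two heuristics above could be sketched roughly like this (all names, data shapes, and thresholds here are illustrative assumptions, not taken from any existing scanner):

```python
# Sketch of the two post-scan heuristics described above.
# `responses` is assumed to be a list of (url, elapsed_seconds, timed_out)
# tuples collected during the scan; the threshold is arbitrary.

SLOW_THRESHOLD = 10.0  # seconds; "abnormally long" response time

def flag_heavy_processing(responses):
    """Pages that took abnormally long to respond are likely doing
    heavy server-side processing, so flag them for closer inspection."""
    return [url for url, elapsed, timed_out in responses
            if not timed_out and elapsed > SLOW_THRESHOLD]

def server_died_at(responses):
    """If every request from some point onward timed out, the server
    probably died at that point; return that index, or None."""
    for i, (_, _, timed_out) in enumerate(responses):
        if timed_out and all(t for _, _, t in responses[i:]):
            return i
    return None
```

Both functions only look at data the scan already produced, which is exactly the "very straightforward stuff" the rules are meant to check.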
However, the reason scanners exist is to make relatively simple
scenarios easy and quick to find,
so why not incorporate our insights and conclusions in there too?
Each module, when run, does not yet have a wide view of the whole
process because the process itself is still in progress.
Separate entities that are called after the scan has finished and given
the whole picture will be able to draw well rounded conclusions
based on a combination of factors.
And since they will be written by people they will be as good as the
person who wrote them.
These entities will just check for the existence (or absence) of certain
data, very straightforward stuff, and can be developed and incorporated
into the whole system on a case-by-case basis.
1. I run a scan and see weird behavior
2. I look into the situation
3. I find that the root of that behavior is something very valuable to
my security assessment
4. I create a "rule" that will check for the same behavior during future
scans and replay the knowledge I had previously gained
5. I send my rule back to the project for everyone to use
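One rough sketch of how such contributed rules might be wired up (every name here, `ScanReport`, `Rule`, `run_rules`, is hypothetical, this is just one possible shape for the plugin-style system described above):

```python
# Hypothetical sketch of the proposed architecture: rules written after a
# manual investigation, run once the scan has finished and given the
# whole picture.

class ScanReport:
    """Everything the scan produced: identified issues plus raw data."""
    def __init__(self, issues, responses):
        self.issues = issues          # what the scan modules found
        self.responses = responses    # raw data kept for meta-analysis

class Rule:
    """Base class; each contributed rule checks for the presence (or
    absence) of some pattern in the finished scan's data."""
    def applies(self, report):
        raise NotImplementedError
    def conclusion(self):
        raise NotImplementedError

class AllTimeoutsRule(Rule):
    """Encodes the 'server died' insight from step 3 as a reusable rule
    (step 4); the trailing-5 window is an arbitrary illustrative choice."""
    def applies(self, report):
        tail = report.responses[-5:]
        return bool(tail) and all(r["timed_out"] for r in tail)
    def conclusion(self):
        return "Server appears to have died mid-scan; results may be partial."

def run_rules(report, rules):
    """Run every registered rule over the full report, after the scan."""
    return [rule.conclusion() for rule in rules if rule.applies(report)]
```

Because each rule only sees the finished report, it gets the wide view that individual modules lack while the scan is still in progress, and sharing a rule (step 5) is just sharing one small class.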
Up to now we've only been concerned with identifying issues and (to my
knowledge) have neglected to include interpretation of combinations of
factors.
Does that make any sense?
>> Lots of people in this list would like to see our tools implement some sort
>> of AI (I know for a fact that at least Michal does)
> To clarify, I don't think that the current "AI toolset" (ANN, genetic
> algorithms, expert systems) is going to make a substantial difference.
> These tools simply offer you a glorified framework for brute-forcing
> quasi-optimal decision algorithms in some not-too-complicated cases.
> One time, they may arrive at results better than what would be
> possible with, ahem, a man-made algorithm; other times, they work just
> as well or worse, and just introduce a layer of indirection.
> There's a cost to that layer, too: when your "dumb" scanner
> incorrectly labels a particular response as XSRF, you just tweak
> several lines of code. If the same determination is made by a complex
> ANN with hundreds of inputs, there is no simple fix. You may retrain
> it with new data, which may or may not help; and even if it helps, it
> may decrease performance in other areas. Then you have to change the
> topology or inputs, or the learning algorithm... and nothing of this
> guarantees success.
> Web scanners do lack certain cognitive abilities of humans, which
> makes these tools fairly sucky - but I don't think we know how to
> approximate these abilities with a computer yet; they're mostly
> related to language comprehension and abstract reasoning.
>> Do we really need AI?
> The term is sort of meaningless. Scanners will require a lot of human
> assistance until they can perform certain analytic tasks that
> computers currently suck at; calling it "AI" is probably just a