[WEB SECURITY] Artificial Intelligence vs. Human Intelligence on finite amounts of possible outcomes

Tasos Laskos tasos.laskos at gmail.com
Tue Feb 1 15:41:11 EST 2011


Heh.. your reaction makes me think that I've overstated what I'm proposing.

It's not so outlandish; each decision-making module will only use the 
data relevant to its case.
It won't be like a huge brain making connections, reaching decisions and 
writing natural language out of thin air.

In the cookie example I mentioned, it would simply check whether cookie 
vulnerabilities of the same type are present across a large number of pages.

To the user it would seem like black magic, but it's really a simple 
process and a cool way to document your knowledge and have it played 
back whenever such a situation occurs again.
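
Roughly, I'm picturing something along these lines -- a quick Python 
sketch, with the issue/tag structures made up purely for illustration 
(nothing Arachni- or w3af-specific):

from collections import defaultdict

def cookie_uniformity_rule(issues, total_pages, threshold=0.8):
    # If cookie issues of the same type show up on most pages, suggest a
    # centralized fix instead of reporting each page separately.
    pages_per_type = defaultdict(set)
    for issue in issues:
        if 'cookie' in issue['tags']:
            pages_per_type[issue['type']].add(issue['page'])

    advisories = []
    for issue_type, pages in pages_per_type.items():
        if len(pages) / float(total_pages) >= threshold:
            advisories.append(
                "The web application's cookies are uniformly vulnerable to "
                "%s across the web application; consider adding a "
                "centralized point of sanitization." % issue_type)
    return advisories

# Each decision-making module would be a small rule like this one, fed only
# the data relevant to its case and run once the scan has finished.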

To answer your last question, I don't really know of any products like that.
I think I've read that the WHS Sentinel uses an expert system, but I'm not 
sure why, how, or whether it amounts to anything.
(Arian, I think that's your cue.)

Cheers,
Tasos L.
> Tasos,
>
> On Tue, Feb 1, 2011 at 4:46 PM, Tasos Laskos <tasos.laskos at gmail.com> wrote:
>> On 01/02/11 19:20, Andres Riancho wrote:
>>> Tasos,
>>> [...]
>>>> I'd like to note at this point that I'm a strong proponent of fixing the
>>>> root of the problem instead of adding filtering layers on top of it, but
>>>> let's ignore this for argument's sake as well.
>>>>
>>>> Premises:
>>>>   * We have a finite number of entities that assert the existence of
>>>> issues -- we'll call these "modules".
>>>>   * We have a finite number of outcomes for each of the modules; usually a
>>>> binary result (either true/vulnerable or false/safe),
>>>>     but in some cases with a twist about the certainty of a result (e.g. a
>>>> notice that an issue may require manual verification).
>>>>
>>>> And here comes my point:
>>>>     Do we really need AI? Wouldn't simple rules that check for unusual
>>>> results and give appropriate notice suffice and be a better and more
>>>> efficient way to handle this?
>>> In some cases you need AI, or need lots of signatures (either works).
>>> For example, if you're trying to find SQL injections based on error
>>> messages, a module that has 10 signatures is worse than one that has
>>> 100. But I'm sure that the module with 100 signatures doesn't cover
>>> all possible DBMS errors. On the other hand, a web application
>>> penetration tester who sees a rendered HTML response can identify a
>>> SQL error even if it's not something he has seen in the past (it's not
>>> in the expert's signature DB). That's where AI might be handy.
>>>
>> That's not exactly what I meant.
>> My thoughts were mostly about interpreting the results after the scan.
> Oh, that's even harder to do!
>
>> Something akin to adding the following to the report:
>> ---------------------
>> Judging by the scan results and request timeouts, the site seems to have
>> been stressed to its limits.
>> This shouldn't have happened, and it means that you are quite easily
>> susceptible to a DoS attack.
>> ---------------------
>>
>> Or:
>> ---------------------
>> The web application's cookies are uniformly vulnerable across the web
>> application.
>> Consider adding a centralized point of sanitization.
>> ---------------------
> I would love a tool that does that in a 99% accurate way. Maybe it
> could also write the reports and have dinner with my mother-in-law too?
> :)
>
> The potential number of outcomes from combining many outputs can be
> very high. And what about combinations like: "The web application's
> cookies seem to be uniformly vulnerable across the web application,
> but a timeout occurred and the process had to stop"?
>
>> I know that this comes close to taking the user by the hand (which I've
>> never really liked), but I really think that such a system could work and
>> save us time while we're performing a pentest by incorporating an expert's
>> maverick experience, insights and interpretation into an otherwise
>> soulless process.
> For now, at least on w3af's side, we're focusing on reducing the
> number of false positives. That's a hard process, and as part of it we're
> also thinking about letting the user "train" the "expert system" by
> clicking in the GUI and reporting what they think is a false positive.
> Then we'll process that information and adjust our algorithms. I think
> that's the closest we're getting to AI and training.
>
>> Something far superior to any AI.
>>
>>>> A possible implementation I have in mind is to pre-tag a module when it's
>>>> added to the system.
>>>> The tags would specify key elements of a module's behavior and would later
>>>> be used in the rule-based decision-making process.
>>>>
>>>> For instance, in the example I mentioned at the beginning of this e-mail,
>>>> the system would check how many of the results carry the "timing_attack"
>>>> tag, and if that number was above a preset threshold it would remove those
>>>> results from the scan report or flag them accordingly.
>>>> It could also take environment statistics into account (average response
>>>> times etc.) to make a more well-rounded decision.
>>> That makes sense... somehow... but I would rather fix the cause of the
>>> timing attack bug.
>> That's why I said not to focus on that particular scenario; I'm not talking
>> about avoiding false positives or (just) improving the accuracy of modules,
>> but about using that information to our advantage.
>>>> What do you guys think?
>>> AI for web application scanning has been on my mind since I started
>>> with w3af, but I really haven't found a problem for which I would say:
>>> "The best / fastest / easiest-to-develop way to solve this is AI". Maybe
>>> if we hit our heads hard enough, we can find something where AI is
>>> applied and then state: "w3af/arachni, the only web app scanners with
>>> AI"? :)
>>>
>> Same here, and I'd rather avoid it too; that's why I presented this thought
>> of mine as a more fitting alternative for such situations.
>> I usually try to avoid unnecessary complexity like the plague.
> Do you know if any of the commercial offerings have some kind of AI?
>
> Regards,
>
>>>> Cheers,
>>>> Tasos L.
>>>>
>>>> PS. I guess this could be perceived as a pre-trained expert system, but
>>>> not really.
>>>>




