websecurity@lists.webappsec.org

The Web Security Mailing List


which is the best web application vulnerability scanner

neza0x@gmail.com
Wed, May 4, 2011 6:44 PM

I think you are a Burp sales guy! Just kidding. I said WebInspect as an example; I do not care, since black-box testing is just the first phase of these efforts. BTW, AMP is not required to run it.

You mentioned Burp (or any other one, I do not care) can "consolidate" reports:
Consolidate = time = money

Last email from my side about this topic.
Sent via BlackBerry from Danux Network

-----Original Message-----
From: Andre Gironda <andreg@gmail.com>
Date: Wed, 4 May 2011 11:26:50
To: <websecurity@webappsec.org>
Cc: Ryan Dewhurst <ryandewhurst@gmail.com>; <neza0x@gmail.com>
Subject: Re: [WEB SECURITY] which is the best web application vulnerability scanner

On Wed, May 4, 2011 at 10:40 AM, <neza0x@gmail.com> wrote:

"Use As many as possible scanners" mmmmm??? Technically could be, but in the real Corporate world, you only have some days to test and validate, so, more scanners, more time to run/validate and multiple different reports too generate. Without any consistency.

This is not true at all. There are many ways to aggregate/consolidate
scans and scan data. Burp Suite Professional, along with Buby,
provides this capability in their combined API through the "send to
[rprogram]" interface. It's certainly possible to generate a single
report -- and the Dradis Framework provides hooks into HTML, MS Word,
PDF generation, MediaWiki, et al. This would only serve to make the
data MORE consistent and valuable.
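
To make "consolidate" concrete, here is a minimal sketch in Python -- not the Buby or Dradis API, and the XML element names and report paths are hypothetical placeholders -- that merges several scanner XML exports into one deduplicated CSV:

# Minimal consolidation sketch, independent of any one tool's API.
# Assumes each scanner can export findings to XML; the <issue>, type,
# url, parameter and severity names below are placeholders -- adjust
# them to whatever schema your scanners actually emit.
import csv
import glob
import xml.etree.ElementTree as ET

def load_findings(path):
    """Yield (type, url, parameter, severity) tuples from one scanner export."""
    root = ET.parse(path).getroot()
    for issue in root.iter("issue"):          # hypothetical element name
        yield (
            issue.findtext("type", ""),
            issue.findtext("url", ""),
            issue.findtext("parameter", ""),
            issue.findtext("severity", ""),
        )

def consolidate(pattern="reports/*.xml", out="consolidated.csv"):
    seen = {}
    for path in glob.glob(pattern):
        for finding in load_findings(path):
            # Deduplicate on (type, url, parameter) so re-scans and multiple
            # tools collapse into a single row per unique issue.
            seen.setdefault(finding[:3], finding)
    with open(out, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["type", "url", "parameter", "severity"])
        writer.writerows(sorted(seen.values()))

if __name__ == "__main__":
    consolidate()

The point is only that merging scan output into one consistent report is a small amount of glue code; Dradis or Buby handle the same job inside their own frameworks.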

> What if you find 10 vulns in WebInspect, 5 in AppScan, and 3 in Acunetix?
> When the developer says, please re-scan! Crap!!!

You obviously need to work better with app developers if they are
directing this language towards you.

> My recommendation is to use a scanner that allows macro creation, so that you can reach sections that the scanner itself cannot.
> Think of a process with 4 steps where the first one is to enter an invoice number or credit card; the scanner will never guess it and therefore will not be able to touch the next steps/sections/URLs/parameters.
> What about gotcha-enabled apps??
>
> WebInspect supports macros; Acunetix (at least in the version I used years ago) only supports login macros.

The manual modes in WebInspect and Netsparker mimic what is possible
with Burp Suite Professional's scanner (including and especially the
passive scanner). However, free tools such as W3AF, Casaba Watcher,
and Watobo also do this. You could always use a browser macro
language/framework (e.g. iMacros, Selenium IDE, Selenium-RC/WebDriver,
Geb, Watir/WatiN/Watij, etc.) to drive the browser through a passive,
local web proxy. This may give better results depending on the target
app. It even works well with Tamper Data, since this add-on maintains a
list of URIs, parameters, and headers in addition to timestamps (all
available as XML). I'm sure you will find that this methodology is
better than using an app scanner. It's not as scalable today, but it
is more viable in the long term. Have you seen the hardware
requirements for WI/AMP, though? Talk about scalability issues...
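
As a concrete illustration of driving a browser through a passive, local proxy, here is a minimal Selenium WebDriver sketch (Python bindings). The proxy address, field names, and URLs are assumptions, not taken from any real deployment -- the idea is that the scripted browser walks the multi-step flow (login, known invoice number) so the proxy sees every request:

# Minimal sketch: drive a browser through a local passive proxy (e.g.
# Burp or Watobo listening on 127.0.0.1:8080) so the proxy records every
# authenticated, multi-step request the scripted browser makes.
# Proxy address, credentials, element names and URLs are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.FirefoxOptions()
options.set_preference("network.proxy.type", 1)          # manual proxy config
options.set_preference("network.proxy.http", "127.0.0.1")
options.set_preference("network.proxy.http_port", 8080)
options.set_preference("network.proxy.ssl", "127.0.0.1")
options.set_preference("network.proxy.ssl_port", 8080)

driver = webdriver.Firefox(options=options)
try:
    # Walk the multi-step flow a crawler cannot guess, e.g. a known invoice number.
    driver.get("https://app.example.com/login")
    driver.find_element(By.NAME, "username").send_keys("tester")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.NAME, "login").click()

    driver.get("https://app.example.com/invoices")
    driver.find_element(By.NAME, "invoice_number").send_keys("INV-12345")
    driver.find_element(By.NAME, "lookup").click()
finally:
    driver.quit()

Everything the script touches flows through the proxy, where the passive checks (or a later active scan of the recorded requests) can pick it up.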

Michal Zalewski
Wed, May 4, 2011 7:55 PM
> 1. In corporate environments you cannot just download any tool (especially freeware ones) and run it; those need to be approved tools, or at least it should be that way. I cannot imagine a company allowing its users to download/run anything they want.

If corporate "security" policies prevent the actual security team from
leveraging security testing tools, then... you probably have a problem
more significant than selecting the right tool ;-)

> 3. Scanners run in a corporate environment must be allowed by the IDS/IPS, WAF, and so on to go through and reach the target. As you know, every scanner has an HTTP header that identifies it on the network; with your approach, the networking team will need to allow several different scanners on the network, and by the way, those could also be the ones malicious guys use.

Ditto if you whitelist access to your systems based on HTTP header layout ;-)

/mz
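
To illustrate the point about header-based whitelisting: any client can present whatever identifying headers it likes, so an allow-list keyed on scanner headers proves nothing about who is actually scanning. A trivial sketch (the URL and header values are made up):

# Trivial sketch of why whitelisting on HTTP header layout is weak: any
# client can claim to be an "approved" scanner (or a plain browser).
# URL and header values here are made-up examples.
import requests

headers = {
    # Pretend to be whatever the WAF/IDS has been told to allow through.
    "User-Agent": "ApprovedCorpScanner/1.0",
    "X-Scanner": "approved",
}
resp = requests.get("https://app.example.com/", headers=headers, timeout=10)
print(resp.status_code, len(resp.content))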

Arian J. Evans
Wed, May 4, 2011 8:35 PM

"Nothing is stopping you two firing two scanners at the same time"

As someone who has tried this many, many times I can tell you with
conviction it just doesn't work. Anyone who has tried this with any
meaningful scanner configuration knows this won't work for obvious
reasons.

The most obvious reason this "run multiple scanners in parallel"
doesn't work is that their test injections will stomp all over each
other and also wrangle responses. Especially in persistent fields.

You will get both false positives and false negatives.

Then we get to scanner state and timeout issues, and threading issues.
But I will stop my list there unless you want me to go on.

I simply share this wisdom to help any new folks on this list avoid
the headache that will ensue should they download and fire up 2-4
scanners in parallel on websites with lots of persistent data inputs.

For unauth brochureware, sure, have at it, at least until the app falls over.


Arian Evans
Software Security Scanner Singularities

On Wed, May 4, 2011 at 12:55 PM, Michal Zalewski <lcamtuf@coredump.cx> wrote:

>> 1. In corporate environments you cannot just download any tool (especially freeware ones) and run it; those need to be approved tools, or at least it should be that way. I cannot imagine a company allowing its users to download/run anything they want.
>
> If corporate "security" policies prevent the actual security team from
> leveraging security testing tools, then... you probably have a problem
> more significant than selecting the right tool ;-)
>
>> 3. Scanners run in a corporate environment must be allowed by the IDS/IPS, WAF, and so on to go through and reach the target. As you know, every scanner has an HTTP header that identifies it on the network; with your approach, the networking team will need to allow several different scanners on the network, and by the way, those could also be the ones malicious guys use.
>
> Ditto if you whitelist access to your systems based on HTTP header layout ;-)
>
> /mz



"Nothing is stopping you two firing two scanners at the same time" As someone who has tried this many, many times I can tell you with conviction it just doesn't work. Anyone who has tried this with any meaningful scanner configuration knows this won't work for obvious reasons. The most obvious reason this "run multiple scanners in parallel" doesn't work is that their test injections will stomp all over each other and also wrangle responses. Especially in persistent fields. You will get both false positives and false negatives. Then we get to scanner state and timeout issues, and threading issues. But I will stop my list there unless you want me to go on. I simply share this wisdom to help any new folks on this list avoid the headache that will ensure should they download and fire up 2-4 scanners in parallel on websites with lots of persistent data inputs. For unauth brochureware, sure, have at it, at least until the app falls over. --- Arian Evans Software Security Scanner Singularities On Wed, May 4, 2011 at 12:55 PM, Michal Zalewski <lcamtuf@coredump.cx> wrote: >> 1. In corporate environments you cannot only download any tool (specially freeware ones) and run it, those need to be approved tools or at least it should be that away, I cannot imagine a Company allowing its users to download/run anything they want. > > If corporate "security" policies prevent the actual security team from > leveraging security testing tools, then... you probably have a problem > more significant than selecting the right tool ;-) > >> 3. Scanners run in a Corporate environment must be allowed by IDS/IPS, WAF, so on to go through and reach the target, as you know, every scanner has a http header that identifies it with the Network, with your approach, the Networking Team, will need to allow different scanners in the network, by the way, also those could be the ones from malicious guys. > > Ditto if you whitelist access to your systems based on HTTP header layout ;-) > > /mz > > _______________________________________________ > The Web Security Mailing List > > WebSecurity RSS Feed > http://www.webappsec.org/rss/websecurity.rss > > Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA > > WASC on Twitter > http://twitter.com/wascupdates > > websecurity@lists.webappsec.org > http://lists.webappsec.org/mailman/listinfo/websecurity_lists.webappsec.org >
Ryan Dewhurst
Wed, May 4, 2011 9:44 PM

I've never had any problems running parallel scans; maybe I have just been
lucky. By 'parallel' I mean running two scanners at the same time against
the same application, not necessarily running the scanners from the same
machine.

Not disagreeing here, just generally interested...

Shouldn't every injection payload have a unique identifier associated with
it, making it distinguishable between scanners?!

What do you mean by 'wrangle responses'? Every HTTP request/response goes
back to the scanner that initiated the TCP connection.

What do you mean by 'persistent fields'? If you're thinking of persistent
XSS-type injection, again, each payload should be uniquely identifiable to
the scanner that sent it. Example: <script>alert(Scanner1)</script>,
<script>alert(Scanner2)</script>
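
A minimal sketch of that idea -- the marker format and payload template are illustrative only, not what any particular scanner emits:

# Minimal sketch of per-scanner payload tagging: embed a unique marker in
# each injection so a reflected/stored hit can be traced back to the scanner
# (and request) that produced it. Marker format and templates are illustrative.
import uuid

def tagged_payloads(scanner_name, count=3):
    """Generate XSS probe strings carrying a scanner-specific unique marker."""
    payloads = []
    for _ in range(count):
        marker = f"{scanner_name}-{uuid.uuid4().hex[:8]}"
        payloads.append(f"<script>alert('{marker}')</script>")
    return payloads

if __name__ == "__main__":
    for p in tagged_payloads("Scanner1"):
        print(p)
    for p in tagged_payloads("Scanner2"):
        print(p)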

State should not be a problem?! Give each scanner a different user.

Threading?! On the client?!

My original 'run multiple scanners' comment meant running Nikto alongside
DirBuster, for example, not Netsparker and Acunetix. But even then I still
don't see that much of a problem.

To be honest, I need to do some research in order to verify my assumptions,
but my initial thoughts are that there should be no problems.

Ryan Dewhurst

blog www.ethicalhack3r.co.uk
projects www.dvwa.co.uk | www.webwordcount.com
twitter www.twitter.com/ethicalhack3r

On Wed, May 4, 2011 at 9:35 PM, Arian J. Evans
<arian.evans@anachronic.com> wrote:

"Nothing is stopping you two firing two scanners at the same time"

As someone who has tried this many, many times I can tell you with
conviction it just doesn't work. Anyone who has tried this with any
meaningful scanner configuration knows this won't work for obvious
reasons.

The most obvious reason this "run multiple scanners in parallel"
doesn't work is that their test injections will stomp all over each
other and also wrangle responses. Especially in persistent fields.

You will get both false positives and false negatives.

Then we get to scanner state and timeout issues, and threading issues.
But I will stop my list there unless you want me to go on.

I simply share this wisdom to help any new folks on this list avoid
the headache that will ensure should they download and fire up 2-4
scanners in parallel on websites with lots of persistent data inputs.

For unauth brochureware, sure, have at it, at least until the app falls
over.


Arian Evans
Software Security Scanner Singularities

On Wed, May 4, 2011 at 12:55 PM, Michal Zalewski lcamtuf@coredump.cx
wrote:

  1. In corporate environments you cannot only download any tool

(specially freeware ones) and run it, those need to be approved tools or at
least it should be that away, I cannot imagine a Company allowing its users
to download/run anything they want.

If corporate "security" policies prevent the actual security team from
leveraging security testing tools, then... you probably have a problem
more significant than selecting the right tool ;-)

  1. Scanners run in a Corporate environment must be allowed by IDS/IPS,

WAF, so on to go through and reach the target, as you know, every scanner
has a http header that identifies it with the Network, with your approach,
the Networking Team, will need to allow different scanners in the network,
by the way, also those could be the ones from malicious guys.

Ditto if you whitelist access to your systems based on HTTP header layout

;-)

Andre Gironda
Wed, May 4, 2011 9:58 PM

On Wed, May 4, 2011 at 2:44 PM, Ryan Dewhurst <ryandewhurst@gmail.com> wrote:

> My original 'run multiple scanners' comment meant running Nikto alongside
> DirBuster, for example, not Netsparker and Acunetix. But even then I still
> don't see that much of a problem.
>
> To be honest, I need to do some research in order to verify my assumptions,
> but my initial thoughts are that there should be no problems.

An alternative to running multiple tools in parallel (even from
different hosts) is to run them in serial, overnight, in Windows
Scheduled Tasks or Linux crontab (or similar subsystem). That way, you
can wake up to results.

My favorite way to implement this is with the W3AF emailReport plugin
-- http://blog.oxdef.info/2011/03/scheduled-scans-with-w3af.html

Burp Suite Professional and many other commercial tools also have
scheduled task functionality built-in.
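
For anyone who prefers to script this directly, here is a minimal sketch of the serial, overnight approach in Python. The tool commands and target are placeholders; schedule the script with cron or Windows Task Scheduler:

# Minimal sketch of running scanners in serial rather than in parallel:
# each tool finishes before the next starts, and output is logged per tool.
# The commands and target below are placeholders -- substitute your own.
# Schedule this script via cron / Windows Task Scheduler to run overnight.
import datetime
import subprocess

TARGET = "https://app.example.com/"   # assumption: authorized test target

SCANS = [
    ["nikto", "-h", TARGET],
    ["dirb", TARGET],
    # add further tools here, e.g. a W3AF profile run
]

def run_serial(scans):
    for cmd in scans:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        logfile = f"{cmd[0]}-{stamp}.log"
        with open(logfile, "w") as log:
            # Run one scanner at a time so their test traffic never overlaps.
            subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, check=False)

if __name__ == "__main__":
    run_serial(SCANS)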

Pete Herzog
Thu, May 5, 2011 10:33 AM

On 5/4/2011 10:35 PM, Arian J. Evans wrote:

"Nothing is stopping you two firing two scanners at the same time"

As someone who has tried this many, many times I can tell you with
conviction it just doesn't work. Anyone who has tried this with any

From the many, many testing scenarios we studied through Hacker
Highschool and OPST exams, we found that when testing in parallel,
where tools sent packets and waited for a reply, the main problem was
packets lost at the host. That means we could track them back to the
sending host, where they never made it to the reporting function of
the host. This also occurred where a sniffer was run in parallel to the
scanner on the same host. We suspect that collisions in the listening
portion of the tools were causing the problems. We found this got
worse the more layers of abstraction a result had to go through to get
from the Ethernet card to the tool GUI. Therefore, command-line tools
installed directly on Linux (not in VMware) AND not running a GUI had
the fewest losses when run in parallel. However, even in that
scenario, there was still a very small number of losses when run in
parallel.

We actually convey this to our trainers to tell their students, just
so as to avoid possible problems with accuracy.

Sincerely,
-pete.

--
Pete Herzog - Managing Director - pete@isecom.org
ISECOM - Institute for Security and Open Methodologies
www.isecom.org - www.osstmm.org
www.hackerhighschool.org - www.badpeopleproject.org
