websecurity@lists.webappsec.org

The Web Security Mailing List

case studies around the holistic approach to application security testing

CW
Chris Weber
Sun, Jul 24, 2011 7:21 PM

In practice, organizations today are combining static source code
analysis, Web VA scanning, and penetration testing to find bugs and
vulnerabilities in their products. Each of these can complement or
supplement the others. Are you aware of any studies that have focused on
the relationship between these three approaches? For example, in terms
of results, is there a case study showing that static analysis found 30%
of the security bugs, Web scanning found 40%, and pen testing found 20%?

A view into the overlap and relevance of the findings is also what I'm
hoping to see: for example, how many findings were identified by each
approach, how many were found by only one approach, and did pen testing
find one critical, devastating bug whose value exceeded the total found
by the other approaches? That last example may seem more subjective than
the rest, but some organizations do have ways to assign a monetary cost
to a bug or vulnerability.
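
To make the kind of breakdown I mean concrete (the findings and
identifiers below are entirely made up, just to show the arithmetic),
once each approach's results are normalized to a common identifier the
overlap could be tallied along these lines:

# Hypothetical findings per approach, normalized to "weakness@location"
# identifiers. All of the data here is invented purely to illustrate the math.
findings_by_approach = {
    "static analysis": {"SQLi@/login", "XSS@/search", "HardcodedKey@config"},
    "web scanning":    {"SQLi@/login", "XSS@/search", "XSS@/profile"},
    "pen testing":     {"SQLi@/login", "AuthBypass@/admin"},
}

all_findings = set().union(*findings_by_approach.values())

for name, findings in findings_by_approach.items():
    # Everything reported by the other approaches combined.
    others = set().union(*(f for n, f in findings_by_approach.items() if n != name))
    unique = findings - others
    share = len(findings) / len(all_findings)
    print(f"{name}: {len(findings)} findings ({share:.0%} of the total), "
          f"{len(unique)} found by this approach alone: {sorted(unique)}")

Putting a monetary value on each finding would then just turn those
counts into weighted sums.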

thanks,
CWeber

AG
Andre Gironda
Mon, Jul 25, 2011 4:50 PM

On Sun, Jul 24, 2011 at 12:21 PM, Chris Weber <chris@lookout.net> wrote:

> In practice, organizations today are combining static source code analysis,
> Web VA scanning, and penetration testing to find bugs and vulnerabilities
> in their products. Each of these can complement or supplement the others.
> Are you aware of any studies that have focused on the relationship between
> these three approaches? For example, in terms of results, is there a case
> study showing that static analysis found 30% of the security bugs, Web
> scanning found 40%, and pen testing found 20%?

http://www.cigital.com/justiceleague/2011/05/03/when-all-you-have-is-a-hammer/
Manual pen testing 21%, manual source code review (SCR) 21%, dynamic/static tools 12%, etc.

From my perspective -- the tools, without an expert behind them, can
never be trusted to provide any value. The above statistics must
involve an expert.

There is no way to accurately benchmark web application security
scanners, but the best benchmarks I've seen are
http://wivet.googlecode.com (for coverage) and
http://wavsep.googlecode.com (for fault-injection attack capability and
analysis capability). No tool received 100% on coverage (94% was the
highest in wivet), and no single tool scored 100% in both the SQLi and
RXSS-GET categories. All of the commercial tools performed well below
70% in any given category, compared to the open-source tools.
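
To be concrete about what those percentages mean (the categories and
counts below are invented, not any real tool's results), a wavsep-style
score is essentially a per-category detection rate plus a
false-positive rate against the trap cases, something like:

# Hypothetical wavsep-style tally for one scanner. Each category has a set
# of truly vulnerable test cases plus "trap" cases that should NOT be
# flagged. All numbers are invented for illustration only.
results = {
    # category: (true cases detected, total true cases,
    #            trap cases flagged, total trap cases)
    "SQLi":     (120, 136, 1, 10),
    "RXSS-GET": (55, 66, 0, 7),
}

for category, (detected, total_true, flagged, total_traps) in results.items():
    detection_rate = detected / total_true
    false_positive_rate = flagged / total_traps
    print(f"{category}: detection {detection_rate:.0%}, "
          f"false positives {false_positive_rate:.0%}")

wivet's coverage number is the same idea, with reached links and inputs
in place of detected vulnerabilities.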

The best benchmarking of static analysis tools was from the NSA CSA
Project. It showed that open-source tools are, on average, better for
security-focused static analysis results. It also showed that in any
given software weakness category, no tool exceeded a positive rate of
1%, except perhaps in the file handling categories, where some tools
achieved positive rates as high as 11%.

> A view into the overlap and relevance of the findings is also what I'm
> hoping to see: for example, how many findings were identified by each
> approach, how many were found by only one approach, and did pen testing
> find one critical, devastating bug whose value exceeded the total found
> by the other approaches? That last example may seem more subjective than
> the rest, but some organizations do have ways to assign a monetary cost
> to a bug or vulnerability.

The only breakdown I've seen like this is from page 18 of this PDF:
http://www.isecpartners.com/files/CodeScanning.pdf
However, it's a bit outdated and arguably presupposes a few bad or
outdated practices.

You should be careful when comparing tools to people. Recall what I
said above -- the tools do not speak for themselves. An expert must
drive (i.e., influence the direction from start to finish, with
potential hiccups in between) each tool and interpret the results.

Another personal opinion of mine is that build-assisted (with source),
manual penetration testing using pair-testers is the way to go. The
level of expertise should put at least one person at the GSSP-Java
level and another at the OSCP level. Note that I don't necessarily
support or endorse those certifications or training programs, just the
concepts behind their curricula.

Cheers,
Andre
