websecurity@lists.webappsec.org

The Web Security Mailing List


Experience with using HTTP proxy tools for QA testers

Rohit Sethi
Fri, Feb 25, 2011 5:29 PM

Does anyone have experience rolling out an HTTP proxy tool for QA testers?
What proxy tools have you seen successfully used by QA? We're looking for
free tools in particular. While many security testers are comfortable with
Burp Suite, WebScarab, Fiddler, etc., some (but certainly not all) QA shops
are looking for simpler-to-use tools with fewer features that work with IE
7+.

Thanks,

Rohit Sethi
Security Compass
http://www.securitycompass.com
twitter: rksethi

psiinon
Sat, Feb 26, 2011 10:08 AM

Hi Rohit,

Not too surprisingly, I'd recommend the OWASP Zed Attack Proxy:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project (I'm the
project lead).
It is specifically designed for people with a wide range of security
experience, which makes it ideal for developers and functional testers
who are new to penetration testing.
It's free, open source, and cross-platform, and it is a fork of the
open-source Paros Proxy.
Ease of use is a priority, and it comes with extensive help pages, both
included with the tool and online.
I run courses at the company I work for teaching pen-testing techniques
to QA testers, and on those courses ZAP has proved to be very effective.

Let me know if you would like any more info about it, or any advice and
guidance on using it in training courses.
And if you have any suggestions for how we could make ZAP more suitable,
please let me know - I want ZAP to be the most effective tool for QA
testers.

Many thanks,

Psiinon


Rohit Sethi
Sat, Feb 26, 2011 4:06 PM

Thanks Andre and Psiinon. I hadn't actually looked at ZAP before; I'll
test it out with some QA testers and see how it goes.

Thanks,

Rohit


--
Sent from my mobile device

Rohit Sethi
Security Compass
http://www.securitycompass.com
twitter: rksethi

Andre Gironda
Mon, Feb 28, 2011 5:02 AM

On Sat, Feb 26, 2011 at 9:06 AM, Rohit Sethi <rklists@gmail.com> wrote:

Thanks Andre and Psiinon. I hadn't actually looked at ZAP before; I'll
test it out with some QA testers and see how it goes.

As you can see from --
http://www.gartner.com/technology/media-products/reprints/microfocus/vol4/article1/article1.html

The top QA or "dev-test" tools are:

  1. HP QTP
  2. IBM Rational Functional Tester
  3. SmarteSoft (looks strong in the energy and healthcare verticals)
  4. SmartBear Software TestComplete
  5. Micro Focus SilkTest
  6. Microsoft Coded UI Builder in Visual Studio 2010 Ultimate

I was actually surprised to see SmarteSoft and TMap on that list. In
my view of the world, Atlassian is pretty strong at building a
Selenium-integrated ALM. I am also familiar with Parasoft and
Crosscheck Networks, but they are niche players in WS and SOA testing
(Parasoft is also stronger overseas, such as in the UK).

Only HP and IBM integrate QAInspect and AppScan through HP Software QC
and IBM Rational QM, but those are extremely low-quality tools for
application assessments. It would be better to integrate Burp Pro or
Fiddler, or perhaps customize existing tools, especially
JUnit+JUnitTester+Selenium+HTMLUnit+Cactus, or perhaps TestComplete or
the VS Coded UI Builder. You would certainly want to do table-driven
or data-driven development of test cases, which would require light
XML development knowledge, e.g. familiarity with xmllint or xmlstarlet
(there are many books, e.g. on programming REST, that include deep
coverage of modern XML libraries and tools).
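
As a rough sketch of what that integration can look like -- assuming
Selenium 2/WebDriver, JUnit 4, and an intercepting proxy (Burp, Fiddler,
or ZAP) listening on 127.0.0.1:8080, all of which are placeholders
rather than a prescribed setup -- an existing functional test can simply
be pointed through the proxy:

    // Sketch only: route an existing Selenium functional test through a
    // local intercepting proxy so every request lands in the proxy history.
    import org.junit.Test;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.remote.CapabilityType;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class ProxiedFunctionalTest {

        @Test
        public void loginPageLoadsThroughProxy() {
            // The proxy address is an assumption; use whatever Burp,
            // Fiddler, or ZAP is configured to listen on.
            Proxy proxy = new Proxy();
            proxy.setHttpProxy("127.0.0.1:8080");
            proxy.setSslProxy("127.0.0.1:8080");

            DesiredCapabilities caps = DesiredCapabilities.firefox();
            caps.setCapability(CapabilityType.PROXY, proxy);

            WebDriver driver = new FirefoxDriver(caps);
            try {
                // Hypothetical application URL -- replace with the app under test.
                driver.get("http://app.example.com/login");
                // ...the usual functional assertions go here...
            } finally {
                driver.quit();
            }
        }
    }

Every request the functional suite makes then lands in the proxy
history, where it can be reviewed, modified, and replayed.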

QA integration of security testing needs careful planning. I would
suggest that heavy QC or QM environments fold security testing into
exploratory testing programs and processes before they add tool
integration points (e.g. QAInspect/AppScan/Fortify SCA) or security
gates. I believe it is best to leave QA out of it except where you can
rely on their existing tools -- i.e. put QTP or JUnit through Burp Pro
or Fiddler. Then modify requests and replay them as a penetration
tester would, and go as far down the rabbit hole as you can -- exploit
XSS and SQLi for sure (and perform attacks on authn/authz). An overall
appsec risk management program should really be managed primarily
through systems like Cigital ESP -- http://www.cigital.com/solutions/esp --
and HoneyApps Conduit -- http://www.honeyapps.com -- (or Veracode
Analytics if you want both static and runtime).
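
To illustrate the "modify and replay" step, here is a bare-bones sketch
(hypothetical URL, parameter name, and session cookie) that re-sends a
captured request with a single-quote probe and flags verbose database
errors -- a crude signal meant to be handed to someone with appsec
experience, not treated as a finding:

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class ReplayTamperedRequest {
        public static void main(String[] args) throws Exception {
            // Replay a request captured in the proxy, with a classic
            // single-quote probe appended to one parameter. Everything
            // below is a placeholder.
            String probe = URLEncoder.encode("laptops'", "UTF-8");
            URL url = new URL("http://app.example.com/search?q=" + probe);

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Cookie", "JSESSIONID=PASTE_CAPTURED_SESSION_HERE");

            int status = conn.getResponseCode();
            InputStream in = status >= 400 ? conn.getErrorStream() : conn.getInputStream();

            StringBuilder body = new StringBuilder();
            if (in != null) {
                BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line).append('\n');
                }
                reader.close();
            }

            // Crude heuristic only: a 500 or a leaked database error message
            // means the request deserves a proper look, not that QA "found SQLi".
            String lower = body.toString().toLowerCase();
            boolean suspicious = status == 500
                    || lower.contains("sql syntax")
                    || lower.contains("sqlexception");
            System.out.println("HTTP " + status + ", suspicious: " + suspicious);
        }
    }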

Cheers,
Andre

Rohit Sethi
Mon, Feb 28, 2011 9:00 PM

I think relying solely on QA for security testing is a slippery slope. We've
heard interest from our clients about reducing the number of vulnerabilities
that get caught in pen testing, rather than using pen testing as the only
testing method. Rather than making QA comprehensive, we're trying to see
which low-hanging fruit can be caught with minimal effort - both by those
who use automated run-time assessment scanners and those who do not.

Interestingly, we're not seeing many of our clients make use of QA
automation tools. While developers may build extensive integration test
suites in JUnit/NUnit, we've found less evidence of QTP / Selenium being
used in enterprise QA. What have your experiences been? Clearly automation
is important, but if it requires training in a new tool then it's less
likely to be adopted.


--
Rohit Sethi
Security Compass
http://www.securitycompass.com
twitter: rksethi

psiinon
Mon, Feb 28, 2011 9:30 PM

As I try to make clear in the preso here
(http://www.owasp.org/images/c/c8/Conference_Style_slides_for_ZAP.ppt),
I definitely don't think we should rely on QA alone for security testing.
There is no silver bullet, and this isn't it ;)
However, professional pen testing is expensive, and often gets done late
in the product cycle.
If you can catch even a few security vulnerabilities earlier on, it
will save time, effort and therefore money.
It's not cost-effective to pay pen testers to find the simple issues -
pick those up in QA and then get value for money from the
professionals by clearing the way for them to look for the hard ones
:)

Psiinon


Andre Gironda
Mon, Feb 28, 2011 10:59 PM

On Mon, Feb 28, 2011 at 2:00 PM, Rohit Sethi <rklists@gmail.com> wrote:

I think relying solely on QA for security testing is a slippery slope. We've

I don't think I've ever seen anyone propose that, but I'd fight it ;>

heard interest from our clients about reducing the number of vulnerabilities
that get caught in pen testing, rather than using pen testing as the only
testing method. Rather than making QA comprehensive, we're trying to see
which low-hanging fruit can be caught with minimal effort - both by those
who use automated run-time assessment scanners and those who do not.

More on this below in psiinon's comments.

Interestingly, we're not seeing many of our clients make use of QA
automation tools. While developers may build extensive integration test
suites in JUnit/NUnit, we've found less evidence of QTP / Selenium being
used in enterprise QA. What have your experiences been? Clearly automation
is important, but if it requires training in a new tool then it's less
likely to be adopted.

I think it's more common to see testing automation in place where
there is also a focus on application lifecycle management. If a CxO or
R&D director/manager has focused on hygiene to solve some systemic
project issues, then you'll probably find testing automation.

Because many development projects, including enterprise ones, involve
only one developer, you will only find JUnit/NUnit where the app
developer has done that work themselves (usually because management gave
them the time to focus on it), and usually because they have had past
experience with unit testing frameworks. QTP/Selenium-style test
automation, as you suggest, is even rarer to find -- possibly because
app developers have no experience with or exposure to these tools at all.
I haven't seen training budgets for test automation frameworks, and we
do not see these classes being offered -- especially not at the same
pace as app development or even app penetration testing.

When I see application developers get to a major iteration for a
project/product that is absolutely going to be released -- and they
know that the app is going to be around for 5+ years or decades...
then I always question them: Why haven't you put more rigor into unit
and component test automation when you know that somebody (even if
it's not you) is going to be refactoring this project/product?

An even more important question is about keeping the test automation
up to date. I have seen many projects implement JUnit with either
HtmlUnit or Selenium, only to abandon the test automation later,
leaving it several revisions out of date compared to the latest
iteration of the system build. When performing continuous integration,
particularly nightly builds, all build code and test code should
remain in working order on the build server(s).

Rohit -- one interesting point here is that you and your company may
have had experience with many development shops that lack hygiene in
both test automation and appsec, because the two concepts are
intrinsically linked. It's a lot less likely that you'll walk into a
dev shop that recently had an appsec-related incident (or proactively
hires consultants because it is worried) and find that the
documentation and test automation are world-class.

On Mon, Feb 28, 2011 at 2:30 PM, psiinon <psiinon@gmail.com> wrote:

As I try to make clear in the preso here
(http://www.owasp.org/images/c/c8/Conference_Style_slides_for_ZAP.ppt),
I definitely don't think we should rely on QA alone for security testing.
There is no silver bullet, and this isn't it ;)
However, professional pen testing is expensive, and often gets done late
in the product cycle.

Why is penetration testing performed late in the product cycle? Why
not put pen testers and app developers together during a
project/product's first iteration? If there is a "system" (anything
past the point of wireframes, including deployable instances or even
mocks) to test, I say test it when it's considered a system, not when
it's considered "ready" by some other standard. An afternoon spent at
an early iteration would easily find and fix more issues than a more
formal pen test right before an app is flagged for push into
production. Better still, continuous deployment could allow
pen testers and app developers to work together during blue
deployments (where the code push/release is active for some users)
instead of green deployments (active for everyone) if a project/product
is already in production and this new iteration is actually a new code
release.

If you can catch even a few security vulnerabilities earlier on, it
will save time, effort and therefore money.
It's not cost-effective to pay pen testers to find the simple issues -
pick those up in QA and then get value for money from the
professionals by clearing the way for them to look for the hard ones

I don't agree with this. Training dev-testers or quality testers to
understand HTTP/TLS requests/responses and other nuances (for appsec!)
is too difficult. It takes time, research, dedication, and focus to
become fluent with web application vulnerability testing, exploitation,
and risk analysis. I would rather that quality testers understand the
architectural appsec issues facing in-development apps: e.g. they should
know that Classic ASP and PHP are ripe for problems compared to a
standardized ESAPI for Java -- and grab an appsec expert to discuss the
ramifications coming down the pipeline. In other words, QA should be
trained as whistleblowers who get the experts involved. They can easily
be rewarded as such -- just give bonuses to any QA teams or individuals
that report to the InfoSec team any risky applications that handle
sensitive data.

Another thing here: There are no easy problems. You don't see
eradication of SQLi or XSS anywhere. QA is not going to add any value,
even when armed with the "right" tools (note: I don't believe any
commercial appsec tools today work in the ways that they are
marketed). You can take a QA person and turn him or her into an app
penetration-tester, but again -- this takes time, research,
dedication, and focus on appsec and only appsec. Bring this person
onto an appsec team instead of leaving them behind in QA. Best of all
-- they probably already know the app developers and have a better
working relationship with them than the appsec team does. Now put them
in between QA and development -- they should be on the development
team as a dev-tester, not at a quality/security gate.

In many ways, forcing or helping QA to add in some low-hanging fruit
vulnerability-finding techniques only placates the notion that appsec
is not being taken seriously.

On a completely separate point, I'd say that appsec teams and app
penetration testers can learn more from test automation experts and
quality testers than vice versa. I have never seen or heard of an app
pen tester who uses equivalence partitioning wisely, or
combinatorial/pairwise testing, exploratory testing, or
domain-specific knowledge. At the very least, they do not call these
concepts out directly or correctly, or identify their own "ad-hoc"
techniques (even when similar, and even when formalized). If app
pen testers aren't using the correct language, then who are they to
say that they know anything about testing software at all?

-Andre

Rohit Sethi
Mon, Feb 28, 2011 11:38 PM

Thanks Andre - interesting perspective. With respect to QA automation,
you might be right about the correlation between security and QA hygiene.
Some of our clients are actually quite mature with respect to application
security process (when compared to others in their respective industries);
generally their QA processes are less consistent between groups.

Regarding pen testing in early iterations, this would work well, but the
problem is often scalability. One reason dynamic testing tools are so
popular is that many organizations are unable or unwilling to have enough
penetration testers to get involved with every application at every
iteration (or early on in testing in waterfall shops).

I think there are some cases of low-hanging fruit: for example, session
fixation, authentication throttling / lockout, forced browsing, limited
CSRF testing (assuming they can leverage domain knowledge), picking up on
verbose error messages, etc. Even some basic parameter manipulation &
horizontal privilege escalation may be possible with clear guidance on an
HTTP proxy tool. I agree that most of these could be picked up by
penetration testers more efficiently and accurately. I'm currently working
with some QA shops now, so I'll let you know how effective it ends up being.
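
As a concrete example of the kind of guidance I have in mind, here is a
minimal sketch (JUnit 4 and plain HttpURLConnection; the endpoint,
account id, and session cookie are hypothetical) of a horizontal
privilege escalation / forced browsing check that a QA tester could run:

    import static org.junit.Assert.assertTrue;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    public class HorizontalPrivilegeTest {

        // Session cookie captured for user A (e.g. via the proxy tool).
        private static final String USER_A_SESSION = "JSESSIONID=PASTE_USER_A_SESSION_HERE";

        @Test
        public void userACannotViewAnotherUsersAccount() throws Exception {
            // User A is logged in but requests an account id belonging to user B.
            URL url = new URL("http://app.example.com/account/1002");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(false);
            conn.setRequestProperty("Cookie", USER_A_SESSION);

            int status = conn.getResponseCode();
            // Anything other than an explicit denial (403) or a bounce to the
            // login page (302) should be flagged for the security team.
            assertTrue("Expected 403 or 302 but got " + status,
                    status == 403 || status == 302);
        }
    }

Nothing sophisticated -- the point is that it reuses the JUnit skills
many QA teams already have.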

Seasoned testers can certainly teach us a lot. I'm learning a lot from them
right now and I suspect many other security people could as well. Any ideas
on ways to improve the knowledge sharing from QA to security more generally?

Cheers,

Rohit


--
Rohit Sethi
Security Compass
http://www.securitycompass.com
twitter: rksethi

AG
Andre Gironda
Tue, Mar 1, 2011 12:28 AM

On Mon, Feb 28, 2011 at 4:38 PM, Rohit Sethi rklists@gmail.com wrote:

> Thanks Andre - interesting perspective. With respect to QA automation, you
> might be right about the correlation between security and QA hygiene. Some

How do you propose that we get more information about the differences?
This is an interesting area of study for us.

> of the clients are actually quite mature with respect to application
> security process (when compared to others in their respective industries);
> generally their QA processes are less consistent between groups.

Repeat clients or new clients? :> How long on average do they take to
go from immature to mature? How many never go from immature to mature?

> Regarding pen testing in early iterations, this would work well but the
> problem is often scalability. One reason dynamic testing tools are so
> popular is because many organizations are unable/unwilling to have enough
> penetration testers to get involved with every application at every
> iteration (or early on in testing in waterfall shops).

I believe that runtime appsec testing is best when combined with
someone immediately empowered to rewrite source code. Certainly this
cannot be done on every project.

For the projects that haven't yet seen this collaboration, it would
make most sense to derive a positive model for appsec controls based
on a risk management gap analysis, where existing controls are
documented (and risks merely accepted) and planned controls are sent
to project management (such as a PMO), where risks can be fully
mitigated.

Again, I believe that risks can be best mitigated by planting two warm
bodies together -- one that has the expertise to find security-related
bugs (a very specialized skill), and one who has the experience to fix
their root cause (without interfering too much with other project
requirements -- usually a balancing act). Many years ago I figured
this could be one role: the security bugfixer. With more experience
and exposure to these issues, though, I am beginning to see that the
roles must remain separate while working more and more co-operatively
using formalized collaboration techniques.

> I think there are some cases of low hanging fruit. For example session
> fixation, authentication throttling / lockout, forcible browsing, limited
> CSRF testing (assuming they can leverage domain knowledge), picking up on
> verbose error messages, etc. Even some basic parameter manipulation &
> horizontal privilege escalation may be possible with clear guidance on an
> HTTP proxy tool. I agree that most of these could be picked up by
> penetration testers more efficiently and accurately. I'm currently working
> with some QA shops now, so I'll let you know how effective it ends up being.

In some of these suggested cases, system administrators (or devops)
are seemingly just as capable. However, now it becomes a matter of who
is responsible/accountable. I believe that QA and sysadmin people are
not equipped with the risk assessment background necessary to fulfill
these roles appropriately, and that information security professionals
are (or should be, because it's on their professional roadmap at the
very least).

However, in rare cases with some verticals (e.g. ones dealing with
financial information exchange such as stock exchanges) the quality
testing and system administration personnel are certainly capable of
dealing with risk assessment because it is part of their job. I'm not
going to rule it out, but this is very situational and one-off.

OTOH, we as application security professionals could certainly use all
of the help we can get. This is why ad-hoc leadership in the QA and
IT/Ops teams is necessary during crisis management, or during
quarterly/yearly reviews. It's more of an outreach program,
spear-headed by an information security management team or risk
management team. There will always be key stakeholders, but they
shouldn't be relied upon during day-to-day or week-to-week
results-oriented project reporting.

> Seasoned testers can certainly teach us a lot. I'm learning a lot from them
> right now and I suspect many other security people could as well. Any ideas
> on ways to improve the knowledge sharing from QA to security more generally?

I think we should leverage their tools. That's a great start, right?
For example, take a dev-tester's pimped-out Eclipse test automation
suite that they developed using JUnit and HtmlUnit and then proxy it
(no intercept) with Burp Pro or Fiddler, saving a Burp session file or
Fiddler SAZ file.
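
To make that concrete, here is a minimal sketch of what such a proxied
test can look like. This is an illustration only, not anyone's actual
suite: it assumes HtmlUnit's proxy-aware WebClient constructor, JUnit 4,
a local intercepting proxy (ZAP, Burp, or Fiddler) listening on
127.0.0.1:8080, and a hypothetical app at testapp.example.com -- adjust
hosts, ports, and versions for your own environment:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class ProxiedLoginTest {

    @Test
    public void loginPageLoadsThroughProxy() throws Exception {
        // Route all HtmlUnit traffic through the local intercepting proxy
        // so every request/response the functional test makes is captured.
        WebClient webClient = new WebClient(BrowserVersion.FIREFOX_3_6,
                "127.0.0.1", 8080);
        try {
            // Hypothetical URL for the application under test.
            HtmlPage page = webClient.getPage("http://testapp.example.com/login");
            assertTrue(page.asText().contains("Login"));
        } finally {
            webClient.closeAllWindows();
        }
    }
}

Run the suite as you normally would from Eclipse or the build server;
the saved Burp session or Fiddler SAZ file then gives the appsec folks a
replayable record of exactly what the functional tests exercised.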

We should also share documentation. I develop domain models and
data/execution flow models in my head or sometimes model them in
Visio, software architect, or mindmapping apps. Quality testers would
probably like to see this stuff, and we would probably like to see
theirs. Looking at the whole of developers, testers, and app
pen-testers -- developers tend to do the least amount of work on
documentation and reporting -- but this may vary depending on a lot of
factors. We need to learn how to best leverage our existing
intellectual leadership in the software quality/security space.

I've seen HP QC and IBM QM based ALM implementations, but perhaps not
as often as Atlassian ALM. Most dev shops, quality testers, and IT/Ops
staff tend to document their ALM using Confluence. Less often, you'll
see Sharepoint or MediaWiki (or another similar
collaborative tool). I just had a great idea -- I'd like to see more
information security management and risk management practices put
their information in Confluence (if that's what the appdevs, QA, or
IT/Ops staff use) so it's all in one place. A lot of information
security management teams are too entrenched in the business-oriented
backoffice infrastructure (e.g. the company-approved Sharepoint
instance instead of the company-approved Confluence instance) and lack
any presence in the technology / application development
infrastructure. You'll see an appsec portal that is totally
disconnected and largely ignored by app development teams, even if the
CxO(s) or auditors know about it.

-Andre
