wasc-satec@lists.webappsec.org

WASC Static Analysis Tool Evaluation Criteria


Static Analysis Tools Evaluation Criteria

Shah, Paras (HP Software - Fortify ASC)
Mon, Jul 11, 2011 4:56 PM

I should have been more clear; what I meant to say was that it is dangerous to assume that ONLY developers will use this tool. I agree that there will be a mix of roles among the users, and that each role will have a different set of tasks to complete. The technology should support all role types.

Paras Shah
Fortify ASC
District Manager, Canada
Fortify Software, an HP Company

+1 408 836 7216 / Mobile
+1 866 234 1609/ Fax
paras@hp.com / Email

Please consider the environment before printing this email.

From: wasc-satec-bounces@lists.webappsec.org [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Alen Zukich
Sent: Monday, July 11, 2011 12:18 PM
To: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

I think we would assume that both developers and security auditors will use this tool.  So a tool should support both, which I think most vendors do to a certain extent.

IMHO it is very important that developers own their security problems.  This does matter because they need to fix issues right while they are coding and while it is fresh in their minds.  Don't check in code with security vulnerabilities.

alen

From: wasc-satec-bounces@lists.webappsec.org [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Shah, Paras (HP Software - Fortify ASC)
Sent: July-11-11 11:25 AM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

I think it would be dangerous to assume that developers will be using the technology. It is a viable and likely scenario that a security auditor will be the only one using the technology and he/she will simply pass the results to development.

Secondly, an org can have 10k developers or 10 developers. In either case, not every single developer is scanning code. Let's take an extreme example. Say an org has 10k developers and they are all working on the same application. Why would 10k people scan the same app 10k times? That responsibility is given to one person (and maybe a backup), the scan is done once, and the results are shared in an efficient manner. In this example, saving 30 minutes or even 30 hours is inconsequential. What is important, however, is that the technology can support efficient distribution of results and collaborative remediation.

The example is extreme, but you get my point - scan time does not scale with the number of developers. And an efficient program will move the scans to a central build environment and will probably do them during off-hours on really beefy machines.

Same thing with updates and the like. If an org has a funded mandate and the fortitude to get a software security program deployed, I do not think scan time, install time, or time to update is going to be a problem for them.

Paras Shah
Fortify ASC
District Manager, Canada
Fortify Software, an HP Company

+1 408 836 7216 / Mobile
+1 866 234 1609/ Fax
paras@hp.com / Email

Please consider the environment before printing this email.

From: Sherif Koussa [mailto:sherif.koussa@gmail.com]
Sent: Monday, July 11, 2011 10:55 AM
To: Shah, Paras (HP Software - Fortify ASC)
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

If a company has 10K developers and one tool saves 30 minutes versus another tool, I think that company would really care about the 5,000 hours of lost productivity. Of course, this is given that the two tools are equivalent in everything else. Factoring in signature upgrades, product updates, etc., this will really add up. Given that security is usually not the main goal of software development teams, it becomes a big deal if they have to spend a considerable amount of time installing a tool, updating a tool, or figuring out the UI of a tool.

I really don't want to get hung up on one issue in the document, and I like the approach Paras is taking of asking which tool best supports achieving better software security. But at the end of the day, developers are the ones who are going to use it, and if developers find the tool hard to use, they will not use it, so the bigger goal of achieving better software security will not be met.

What I am trying to say is yes, there are criteria more important than others, but both have to be taken into consideration in my opinion in order for the tool to achieve its goals.

Regards,
Sherif
On Mon, Jul 11, 2011 at 8:58 AM, Shah, Paras (HP Software - Fortify ASC) <paras.shah@hp.com> wrote:
I am generally in agreement with Romain and Robert. My opinion and experience have taught me that organizations typically procure a static analysis technology to improve their software/application security profile/maturity, but the technology itself is the least important factor in achieving the desired outcome.

Far too much time is spent on "in the weeds" technical evaluation when the most important factors in actually achieving the goal of better software security are building the right processes/workflows, providing people with the right training to complete their new tasks, and making sure there are documented standards so people know exactly what they should and should not be doing.

With this philosophy in mind, it is my opinion that evaluation criteria should focus on how the technology best supports achieving better software security by aligning with a larger program. For example:
*Which technology has the broadest language support, to cover real-world development environments?
*Which technology most efficiently integrates with multiple build technologies and facilitates scanning automation?
*How does the technology integrate with defect tracking systems and other application lifecycle management systems, to align with existing development processes?
*Which technology offers interfaces and content that provide developers and auditors with the most context-relevant information for their specific roles?

Asking questions about scan time and installation seems completely irrelevant. I think the IEEE doc (Static Analysis Deployment Pitfalls) has a lot of salient information. Pitfalls 7, 8 and 10 are particularly interesting and align with my points above.

Paras Shah

+1 408 836 7216 / Mobile

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Romain Gaucher
Sent: Friday, July 08, 2011 5:02 PM
To: Robert A.
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Just to pile on a few more references, the following links are pretty interesting:

- http://web.me.com/flashsheridan/Static_Analysis_Deployment_Pitfalls.pdf
- http://www.informit.com/articles/article.aspx?p=1680863 (Disclaimer, I work for Cigital)

Romain

On Fri, Jul 8, 2011 at 4:46 PM, Robert A. <robert@webappsec.org> wrote:

Can we step back a little and say what we want to accomplish with
this evaluation criteria?

To me, this should be developed to raise awareness of how to choose a source code analysis tool for security, to make sure that people understand that there is no good answer (or, in that context, no perfect tool), and to help them highlight the strong points as well as the weaknesses of the tools.

Totally agreed. I think that people have a very bad understanding of what these tools are capable of, and what they aren't. I'm not talking biz logic issues either.

For me I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and if I can
use it within my own environment. This would help me (and others) get
a better handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to install a tool. This is subjective data, and I believe it is not very useful. Talking about the number of false positives, or the "coverage", is also very misleading, and I believe we should move away from these "criteria".

I feel the same way. To me, people shouldn't be selecting a tool based on installation time; they should be basing it on capability mapped to their needs. If they believe installation is a major requirement, well then there are bigger issues :)

- Robert
Andre Gironda
Tue, Jul 12, 2011 3:16 PM

On Mon, Jul 11, 2011 at 9:18 AM, Alen Zukich <alen.zukich@klocwork.com> wrote:

IMHO it is very important that developers own their security problems.  This
does matter because they need to fix issues right while they are coding and
while it is fresh in their minds.  Don’t check in code with security
vulnerabilities.

SAST tools don't typically run when a developer checks in code. They run in a nightly build -- and only when the code is ready to build.

Additional problems here depend on how the developer(s) construct their code (e.g. tests first, et al.) and on when the code has enough functionality to contain the issues a SAST will find (e.g. post-wireframes, post-early-iterations, and closer to a full system test). If it's run too early, the developers and everyone else involved are going to waste a lot of time.

Style checkers are a more likely candidate to be run at check-in time.
They can be customized to a custom coding standard (although this is
weak customization compared to what is possible during a build), but
this won't have obvious security implications.

Some SAST can operate without a build (on merely syntactically correct
code) and I have already made this point on this list so there is no
need to re-hash what I've already said.

It's important that we, as a group, try to figure out language that
can describe the implementation avenues of both standard and
non-standard SAST rollouts. These should be based on teaching the
SATEC reader how to reverse engineer the build/non-build requirements
of any given SAST.

However, I don't like to think of SAST as a mandatory, Enterprise-wide
tool, but merely a tool that one can use if one wants to use it. It
should be up to the managers to decide whether each individual who
wants a SAST gets one or not (usually based on cost-effectiveness and
benefit-effectiveness, although it would be good to combine this
decision-making with some qualitative or personality influenced
information as well, especially for proactive quality reasons -- let
alone security).

To imagine SAST only as an information (or app, or data) security control would be a huge decision-making mistake.

Cheers,
Andre

Robert A.
Tue, Jul 12, 2011 4:11 PM

Additional problems here depend on how the developer(s) construct their code (e.g. tests first, et al.) and on when the code has enough functionality to contain the issues a SAST will find (e.g. post-wireframes, post-early-iterations, and closer to a full system test). If it's run too early, the developers and everyone else involved are going to waste a lot of time.

Is there anything you think could be included to describe this while still keeping the criteria focus? I'm wondering if this is something that could be included in the document directly, in a follow-up article, or in a supplemental document.

It's important that we, as a group, try to figure out language that
can describe the implementation avenues of both standard and
non-standard SAST rollouts. These should be based on teaching the
SATEC reader how to reverse engineer the build/non-build requirements
of any given SAST.

I think this is a good point and I agree that rollout models are something
that a user would like to understand.

However, I don't like to think of SAST as a mandatory, Enterprise-wide
tool, but merely a tool that one can use if one wants to use it. It
should be up to the managers to decide whether each individual who
wants a SAST gets one or not (usually based on cost-effectiveness and
benefit-effectiveness, although it would be good to combine this
decision-making with some qualitative or personality influenced
information as well, especially for proactive quality reasons -- let
alone security).

Agreed. I don't think that this document should 'dictate' that you SHOULD
or MUST use SAST, merely that if you are looking at running it here are
some things you should be aware of.

Regards,

McGovern, James
Tue, Jul 12, 2011 6:38 PM
  1. Can a scan be kicked off via an Ant task?
  2. Does the scan provide error return codes that could be used to automate subsequent steps? (See the sketch below.)
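
For illustration, here is a minimal sketch of that second point: a build step that invokes a scanner and propagates its exit code so that later steps can be gated on it. This is only a sketch, written in Python rather than as an Ant task, and the "sast-scan" command, its flags, and its exit-code convention are hypothetical stand-ins for whatever a given vendor actually documents.

import subprocess
import sys

# Hypothetical scanner command and flags; substitute the vendor's actual
# CLI and its documented exit-code convention.
SCAN_CMD = ["sast-scan", "--project", "webapp", "--fail-on", "high"]

def run_security_scan() -> int:
    """Run the scanner and hand its exit code back to the build."""
    result = subprocess.run(SCAN_CMD)
    return result.returncode

if __name__ == "__main__":
    code = run_security_scan()
    if code != 0:
        # A non-zero exit code lets the build tool (e.g. Ant's <exec> with
        # failonerror, or a CI job) fail this step and block the steps after it.
        print("Security scan failed with exit code %d" % code, file=sys.stderr)
    sys.exit(code)

In an Ant build, the same effect would come from wrapping the scanner invocation in an <exec> target that downstream targets depend on.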

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Robert A.
Sent: Tuesday, July 12, 2011 12:11 PM
To: Andre Gironda
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Additional problems here depend on how the developer(s) construct their code (e.g. tests first, et al.) and on when the code has enough functionality to contain the issues a SAST will find (e.g. post-wireframes, post-early-iterations, and closer to a full system test). If it's run too early, the developers and everyone else involved are going to waste a lot of time.

Is there anything that you think could be included to describe this while
still having the criteria focus?  I'm wondering if this is something that
could be included in the direct document, a followup article, or supplemental
document.

It's important that we, as a group, try to figure out language that
can describe the implementation avenues of both standard and
non-standard SAST rollouts. These should be based on teaching the
SATEC reader how to reverse engineer the build/non-build requirements
of any given SAST.

I think this is a good point and I agree that rollout models are something
that a user would like to understand.

However, I don't like to think of SAST as a mandatory, Enterprise-wide
tool, but merely a tool that one can use if one wants to use it. It
should be up to the managers to decide whether each individual who
wants a SAST gets one or not (usually based on cost-effectiveness and
benefit-effectiveness, although it would be good to combine this
decision-making with some qualitative or personality influenced
information as well, especially for proactive quality reasons -- let
alone security).

Agreed. I don't think that this document should 'dictate' that you SHOULD
or MUST use SAST, merely that if you are looking at running it here are
some things you should be aware of.

Regards,



Guido Pederzini
Tue, Jul 12, 2011 9:26 PM

Hi, I've read all the interesting posts.
I think it's important that a specific tool can help developers write secure
code, so I think it should be self-explanatory, or very simple to use.

Otherwise it will be very difficult to get a development team to use it,
because the team will see the tool as an impediment to releasing features and
will abandon it.

Another question is: should this tool run on every check-in?
And if so, if a security check fails, does it invalidate the whole release?

Should a failed security check be treated as a red unit test or a red
regression test?

If so, 10k developers each checking in a different line of code could
potentially raise 10 security bugs (1 per 1,000 lines of code is an estimate
for very well written code, so very, very good developers).
In that case those 10 security bugs could keep growing with every check-in
during the day, and if we wait for the nightly build it could be very late,
and we could spend the whole next day fixing security bugs.

In my opinion a static analysis tool should be run at compile time, and every
developer should be expected to fix security bugs, with tool instructions or
some simple guidelines provided by the more expert security people on the team.

Regards, and sorry for my bad English

Guido

2011/7/12 McGovern, James <james.mcgovern@hp.com>

  1. Can a Scan be kicked off via ANT task
  2. Does the Scan provide error return codes that could be used to automate
    subsequent steps

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org [mailto:
wasc-satec-bounces@lists.webappsec.org] On Behalf Of Robert A.
Sent: Tuesday, July 12, 2011 12:11 PM
To: Andre Gironda
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Additional problems here depend on how the developer(s) construct their code (e.g. tests first, et al.) and on when the code has enough functionality to contain the issues a SAST will find (e.g. post-wireframes, post-early-iterations, and closer to a full system test). If it's run too early, the developers and everyone else involved are going to waste a lot of time.

Is there anything that you think could be included to describe this while
still having the criteria focus?  I'm wondering if this is something that
could be included in the direct document, a followup article, or
supplemental
document.

It's important that we, as a group, try to figure out language that
can describe the implementation avenues of both standard and
non-standard SAST rollouts. These should be based on teaching the
SATEC reader how to reverse engineer the build/non-build requirements
of any given SAST.

I think this is a good point and I agree that rollout models are something
that a user would like to understand.

However, I don't like to think of SAST as a mandatory, Enterprise-wide
tool, but merely a tool that one can use if one wants to use it. It
should be up to the managers to decide whether each individual who
wants a SAST gets one or not (usually based on cost-effectiveness and
benefit-effectiveness, although it would be good to combine this
decision-making with some qualitative or personality influenced
information as well, especially for proactive quality reasons -- let
alone security).

Agreed. I don't think that this document should 'dictate' that you SHOULD
or MUST use SAST, merely that if you are looking at running it here are
some things you should be aware of.

Regards,



Andre Gironda
Tue, Jul 12, 2011 10:17 PM

On Tue, Jul 12, 2011 at 2:26 PM, Guido Pederzini <guido.pederzini@gmail.com> wrote:

Another question is: should this tool run on every check-in?
And if so, if a security check fails, does it invalidate the whole release?

Just to re-iterate -- I'm not aware of any SAST that can work on each check-in.

There is at least one SAST that does not require a build (i.e. it
works on syntactically correct source code instead of bytecode and/or
build integration process), but it's unpopular and still wouldn't work
for each check-in.

Has anyone on this list ever used a SAST before?

Should a failed security check be treated as a red unit test or a red
regression test?

The only periodical source that mentions unit tests for security
purposes is "Security on Rails". No commercial SAST supports this,
however, and no popular commercial SAST supports Ruby or Rails that
I'm aware of.

The earliest literature on security-focused unit tests was written by
Corsaire, but I've never seen these methods adopted in the JEE/.NET
worlds outside of the OWASP O2 Project from Dinis Cruz.

The concepts of Agile/xP test-first development are often replaced by
Prototyping (a general description for iterative development, such as
Scrum sprints, where an iteration demo is produced at the end of a 2-4
week period and constantly refactored using several methods but only
sometimes test-first development). Other times, they are replaced by
Modeling, although this is less common in Enterprises. In the case of
Modeling, some SAST do support Z and other formal method notation
languages. We should probably discuss support of formal methods in
tools, as well as their general approach to modeling, prototyping, and
test-first development.

However, SAST are predominantly used in Secure Code Review (SCR), where an auditor/assessor, usually external to the application developer(s), is deeply connected to its internals and capabilities. If you don't believe me, check out the Gartner, Forrester, and The 451 Group analyst reports.

Code review and desk-checks are commonly difficult to integrate into
Agile software lifecycles. According to McConnell, they also provide
less value than test-first development, modeling, and prototyping in
terms of bug finding/stomping.

If you look at SAST solutions (e.g. Cigital ESP, Veracode, Fortify
On-Demand), you'll see that they are somewhere in-between SCR and
Prototyping -- where one or two pre-production builds are shipped
after iteration demos and refactoring are nearly completed. It's
difficult to integrate this into Green-Blue Continuous Deployments
because of the constant refactoring, "always-on" production
requirements, and large size/agility of new builds. For this sort of
code churn, it is likely that application developer(s) will be heavily
tuning SAST to meet their specific time and resource requirements.

According to recent 451 Group analyst reports, secure application
development platforms such as SD Elements or Security Innovation Team
Mentor could be more useful than SAST when used by application
developer(s) who do not have a primary (or even secondary) role
involved in application security. I.e. "regular" app developers.

Cheers,
Andre

Sherif Koussa
Wed, Jul 13, 2011 1:00 AM

On Tue, Jul 12, 2011 at 6:17 PM, Andre Gironda <andreg@gmail.com> wrote:

On Tue, Jul 12, 2011 at 2:26 PM, Guido Pederzini <guido.pederzini@gmail.com> wrote:

Another question is: should this tool run on every check-in?
And if so, if a security check fails, does it invalidate the whole release?

Just to re-iterate -- I'm not aware of any SAST that can work on each
check-in.

Not out of the box, but I believe there are ways to tweak a couple of tools to run on check-in. For example, some repositories can trigger a build on check-in, and the tool can be part of that build, maybe using Ant or something similar.
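
As a rough illustration of that idea, here is a minimal post-commit hook sketch, assuming a hypothetical "ant scan" build target that wraps the SAST run; in a real setup the hook would more likely just notify a CI server, which then runs the build and the scan.

#!/usr/bin/env python3
"""Minimal post-commit hook sketch: kick off the build target that wraps
the scanner whenever code is checked in. The 'ant scan' target is a
hypothetical example; most environments delegate this to a CI server."""
import subprocess
import sys

def main() -> int:
    # Run the build target that includes the static analysis step.
    # A post-commit hook cannot reject the commit, so this is advisory:
    # it just surfaces the result to whoever checked in.
    result = subprocess.run(["ant", "scan"])
    if result.returncode != 0:
        print("Post-commit scan reported findings; see the build output.",
              file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())

A pre-commit hook could block the check-in instead, but for scans that take more than a few seconds that tends to be unacceptable to developers.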

There is at least one SAST that does not require a build (i.e. it
works on syntactically correct source code instead of bytecode and/or
build integration process), but it's unpopular and still wouldn't work
for each check-in.

Has anyone on this list ever used a SAST before?

I believe most of the participants on this list have used a SAST before.

Should a failed security check be treated as a red unit test or a red
regression test?

The only periodical source that mentions unit tests for security
purposes is "Security on Rails". No commercial SAST supports this,
however, and no popular commercial SAST supports Ruby or Rails that
I'm aware of.

Agreed

The earliest literature on security-focused unit tests was written by
Corsaire, but I've never seen these methods adopted in the JEE/.NET
worlds outside of the OWASP O2 Project from Dinis Cruz.

The concepts of Agile/xP test-first development are often replaced by
Prototyping (a general description for iterative development, such as
Scrum sprints, where an iteration demo is produced at the end of a 2-4
week period and constantly refactored using several methods but only
sometimes test-first development). Other times, they are replaced by
Modeling, although this is less common in Enterprises. In the case of
Modeling, some SAST do support Z and other formal method notation
languages. We should probably discuss support of formal methods in
tools, as well as their general approach to modeling, prototyping, and
test-first development.

I would love to believe that most software organizations are using some sort of methodology (XP, Agile, etc.), but I have seen a LOT of organizations that don't really use anything at all.
However, I am not sure we should discuss development methodologies in the document.
My rationale is that software developers are smart enough to tweak the tool for their own use, and I think listing several software development methodologies would suggest that the tool could be used only within those methodologies.

However, SAST are predominantly used in Secure Code Review (SCR) where
an auditor/assessor, usually external to the application developer(s),
are deeply connected to its internals and capabilities. If you don't
believe me, check out the Gartner, Forrester, and The 451 Group
analyst reports.

Absolutely true. However, I like to believe that this can change and tools
could be leveraged
within software development teams.

Code review and desk-checks are commonly difficult to integrate into
Agile software lifecycles. According to McConnell, they also provide
less value than test-first development, modeling, and prototyping in
terms of bug finding/stomping.

Here is what 10 years of software development have taught me: software developers like to break rules, get creative, and do things outside the box. If they believe a tool could help them, they will use it in ways that we can't really predict, which is bad news for the bad guys. Our role is to guide this creativity.

If you look at SAST solutions (e.g. Cigital ESP, Veracode, Fortify
On-Demand), you'll see that they are somewhere in-between SCR and
Prototyping -- where one or two pre-production builds are shipped
after iteration demos and refactoring are nearly completed. It's
difficult to integrate this into Green-Blue Continuous Deployments
because of the constant refactoring, "always-on" production
requirements, and large size/agility of new builds. For this sort of
code churn, it is likely that application developer(s) will be heavily
tuning SAST to meet their specific time and resource requirements.

According to recent 451 Group analyst reports, secure application
development platforms such as SD Elements or Security Innovation Team
Mentor could be more useful than SAST when used by application
developer(s) who do not have a primary (or even secondary) role
involved in application security. I.e. "regular" app developers.

Cheers,
Andre



Alen Zukich
Wed, Jul 13, 2011 1:53 AM

Just to re-iterate -- I'm not aware of any SAST that can work on each
check-in.

Not out of the box, but I believe there are ways to tweak a couple of
tools to run on check-in.
For example, some repositories would trigger a build on check-in and the
tool will be part of the build,
maybe using ANT or something similar

[alen] I can't speak for all vendors but you can absolutely automate
tools to run on check-in.  It typically involves some tweaking depending
on environment.

From: wasc-satec-bounces@lists.webappsec.org
[mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Sherif
Koussa
Sent: July-12-11 9:01 PM
To: Andre Gironda
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

On Tue, Jul 12, 2011 at 6:17 PM, Andre Gironda andreg@gmail.com wrote:

On Tue, Jul 12, 2011 at 2:26 PM, Guido Pederzini
guido.pederzini@gmail.com wrote:

Another question is: should this tool runs on every checkin?
And if so..if a secuirty check fails, it invalidate the whole release?

Just to re-iterate -- I'm not aware of any SAST that can work on each
check-in.

Not out of the box, but I believe there are ways to tweak a couple of
tools to run on check-in.
For example, some repositories would trigger a build on check-in and the
tool will be part of the build,
maybe using ANT or something similar

There is at least one SAST that does not require a build (i.e.

it
works on syntactically correct source code instead of bytecode
and/or
build integration process), but it's unpopular and still
wouldn't work
for each check-in.

Has anyone on this list ever used a SAST before?

I believe most of the participants on the list used a SAST before on the
list.

Should be a failed security check treated as a red unity test

o red

regression test?

The only periodical source that mentions unit tests for security
purposes is "Security on Rails". No commercial SAST supports

this,
however, and no popular commercial SAST supports Ruby or Rails
that
I'm aware of.

Agreed

The earliest literature on security-focused unit tests was

written by
Corsaire, but I've never seen these methods adopted in the
JEE/.NET
worlds outside of the OWASP O2 Project from Dinis Cruz.

The concepts of Agile/xP test-first development are often

replaced by
Prototyping (a general description for iterative development,
such as
Scrum sprints, where an iteration demo is produced at the end of
a 2-4
week period and constantly refactored using several methods but
only
sometimes test-first development). Other times, they are
replaced by
Modeling, although this is less common in Enterprises. In the
case of
Modeling, some SAST do support Z and other formal method
notation
languages. We should probably discuss support of formal methods
in
tools, as well as their general approach to modeling,
prototyping, and
test-first development.

I would love to believe that most software organizations are using some sort of methodology (XP, Agile, etc.), but I have seen a LOT of organizations that don't really use anything at all. However, I am not sure we should discuss development methodologies in the document. My rationale is that software developers are smart enough to tweak the tool for their own use, and I think listing several software development methodologies would suggest that the tool could be used only within those methodologies.

However, SAST are predominantly used in Secure Code Review (SCR), where an auditor/assessor, usually external to the application developer(s), is deeply connected to its internals and capabilities. If you don't believe me, check out the Gartner, Forrester, and The 451 Group analyst reports.

Absolutely true. However, I like to believe that this can change and tools could be leveraged within software development teams.

Code review and desk-checks are commonly difficult to integrate into Agile software lifecycles. According to McConnell, they also provide less value than test-first development, modeling, and prototyping in terms of bug finding/stomping.

Here is what 10 years of software development taught me: software developers like to break rules, get creative, and do things out of the box. If they believe the tool could help them, they will use it in ways that we can't really predict, which is bad news for the bad guys. Our role is to guide this creativity.

If you look at SAST solutions (e.g. Cigital ESP, Veracode, Fortify On-Demand), you'll see that they are somewhere in-between SCR and Prototyping -- where one or two pre-production builds are shipped after iteration demos and refactoring are nearly completed. It's difficult to integrate this into Green-Blue Continuous Deployments because of the constant refactoring, "always-on" production requirements, and large size/agility of new builds. For this sort of code churn, it is likely that application developer(s) will be heavily tuning SAST to meet their specific time and resource requirements.

According to recent 451 Group analyst reports, secure application development platforms such as SD Elements or Security Innovation Team Mentor could be more useful than SAST when used by application developer(s) who do not have a primary (or even secondary) role involved in application security, i.e. "regular" app developers.

Cheers,
Andre


_______________________________________________
wasc-satec mailing list
wasc-satec@lists.webappsec.org

http://lists.webappsec.org/mailman/listinfo/wasc-satec_lists.webappsec.org
