wasc-satec@lists.webappsec.org

WASC Static Analysis Tool Evaluation Criteria

Static Analysis Tools Evaluation Criteria

Sherif Koussa
Fri, Jul 8, 2011 7:36 PM

Hi All,

I have put down a draft structure for the criteria document to start things off.
Once we agree on the structure of the document, we can proceed to the next
step of filling in the criteria.

http://projects.webappsec.org/w/page/42093482/Static-Analysis-Tool-Evaluation-Criteria-Working

Regards,
Sherif

Romain Gaucher
Fri, Jul 8, 2011 8:29 PM

Can we step back a little and say what we want to accomplish with these
evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure that
people understand that there is no good answer (or, in this context,
no perfect tool), and to help them highlight the strong points as well
as the weaknesses of the tools. (Off-topic: "static analysis tools" is
really generic, by the way; we should define what it means in the document,
and what the scope of the evaluation criteria is.)

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

Anyhow, here is an example of criteria from NIST for source code analysis:
http://samate.nist.gov/docs/source_code_security_analysis_spec_SP500-268_v1.1.pdf
I believe that this document could be a starting point for understanding how
we should approach the problem...

Romain

On Fri, Jul 8, 2011 at 3:36 PM, Sherif Koussa sherif.koussa@gmail.com wrote:

Hi All,

I have put down a draft structure for the criteria document to start things off.
Once we agree on the structure of the document, we can proceed to the next
step of filling in the criteria.

http://projects.webappsec.org/w/page/42093482/Static-Analysis-Tool-Evaluation-Criteria-Working

Regards,
Sherif


wasc-satec mailing list
wasc-satec@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/wasc-satec_lists.webappsec.org

Robert A.
Fri, Jul 8, 2011 8:46 PM

Can we step back a little and say what we want to accomplish with these
evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure that
people understand that there is no good answer (or, in this context,
no perfect tool), and to help them highlight the strong points as well
as the weaknesses of the tools.

Totally agreed. I think that people have a very poor understanding of what
these tools are capable of and what they aren't. I'm not talking about biz
logic issues either.

For me, I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and whether I can use
it within my own environment. This would help me (and others) get a better
handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

I feel the same way. To me, people shouldn't be evaluating or selecting a tool
based on installation time; they should be basing it on capability mapped
to their needs. If they believe installation is a major requirement, well,
then there are bigger issues :)

- Robert

Romain Gaucher
Fri, Jul 8, 2011 9:01 PM

Just to pile on a few more references, the following links are pretty interesting:

- http://web.me.com/flashsheridan/Static_Analysis_Deployment_Pitfalls.pdf
- http://www.informit.com/articles/article.aspx?p=1680863
(Disclaimer, I work for Cigital)

Romain

On Fri, Jul 8, 2011 at 4:46 PM, Robert A. robert@webappsec.org wrote:

Can we step back a little and say what we want to accomplish with these
evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure that
people understand that there is no good answer (or, in this context,
no perfect tool), and to help them highlight the strong points as well
as the weaknesses of the tools.

Totally agreed. I think that people have a very poor understanding of what
these tools are capable of and what they aren't. I'm not talking about biz
logic issues either.

For me, I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and whether I can use
it within my own environment. This would help me (and others) get a better
handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

I feel the same way. To me, people shouldn't be evaluating or selecting a tool
based on installation time; they should be basing it on capability mapped
to their needs. If they believe installation is a major requirement, well,
then there are bigger issues :)

- Robert

Sherif Koussa
Sat, Jul 9, 2011 9:51 PM

On Fri, Jul 8, 2011 at 4:29 PM, Romain Gaucher romain@webappsec.org wrote:

Can we step back a little and say what we want to accomplish with these
evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure that
people understand that there is no good answer (or, in this context,
no perfect tool), and to help them highlight the strong points as well
as the weaknesses of the tools.

Agreed 100%

(Off-topic: "static analysis tools" is
really generic, by the way; we should define what it means in the document,
and what the scope of the evaluation criteria is.)

Excellent point

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

Why don't we start with "what matters"? And, more importantly, who are we
targeting with this document?

Anyhow, here is an example of criteria from NIST for source code
analysis:

http://samate.nist.gov/docs/source_code_security_analysis_spec_SP500-268_v1.1.pdf
I believe that this document could be a starting point for understanding how
we should approach the problem...

Romain

On Fri, Jul 8, 2011 at 3:36 PM, Sherif Koussa sherif.koussa@gmail.com
wrote:

Hi All,

I have put down a draft structure for the criteria document to start things off.
Once we agree on the structure of the document, we can proceed to the next
step of filling in the criteria.

http://projects.webappsec.org/w/page/42093482/Static-Analysis-Tool-Evaluation-Criteria-Working

Regards,
Sherif


wasc-satec mailing list
wasc-satec@lists.webappsec.org

Shah, Paras (HP Software - Fortify ASC)
Mon, Jul 11, 2011 12:58 PM

I am generally in agreement with Romain and Robert. My opinion and experience have taught me that organizations typically procure a static analysis technology to improve their software/application security profile/maturity, but the technology itself is the least important factor in achieving the desired outcome.

Far too much time is spent on "in the weeds" technical evaluation, when the most important factor in actually achieving the goal of better software security is building the right processes/workflows, providing people with the right training to complete their new tasks, and making sure there are documented standards so people know exactly what they should and should not be doing.

With this philosophy in mind, it is my opinion that evaluation criteria should focus on how well the technology supports achieving better software security by aligning with a larger program. For example:
* Which technology has the broadest language support, to cover real-world development environments?
* Which technology most efficiently integrates with multiple build technologies and facilitates scanning automation?
* How does the technology integrate with defect tracking systems and other application lifecycle management systems, to align with existing development processes?
* Which technology offers interfaces and content that provide developers and auditors with the most context-relevant information for their specific roles?
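
To make the automation criterion concrete, here is a minimal sketch of the kind of headless, scriptable invocation an evaluator might look for. The `sast-scan` command and its flags are hypothetical stand-ins, not any vendor's actual interface:

#!/usr/bin/env python3
# Sketch: driving a SAST tool headlessly from a build script.
# The `sast-scan` CLI is hypothetical; the point is that a tool
# scoring well on this criterion should be automatable like this,
# with machine-readable output a build server can consume.
import subprocess
import sys

def run_scan(source_dir: str, report_path: str) -> int:
    """Invoke the (hypothetical) scanner; return its exit code."""
    result = subprocess.run(
        ["sast-scan", "--source", source_dir,
         "--format", "xml", "--output", report_path],
    )
    # A CI-friendly tool signals findings via its exit code, so the
    # build server can fail the build without parsing the report.
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_scan("src/", "sast-report.xml"))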

Asking questions about scan time and installation seems completely irrelevant. I think the IEEE doc (Static Analysis Deployment Pitfalls) has a lot of salient information. Pitfalls 7, 8 and 10 are particularly interesting and align with my points above.

Paras Shah
 
+1 408 836 7216 / Mobile
 

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Romain Gaucher
Sent: Friday, July 08, 2011 5:02 PM
To: Robert A.
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Just to pile on a few more references, the following links are pretty interesting:

- http://web.me.com/flashsheridan/Static_Analysis_Deployment_Pitfalls.pdf
- http://www.informit.com/articles/article.aspx?p=1680863
(Disclaimer, I work for Cigital)

Romain

On Fri, Jul 8, 2011 at 4:46 PM, Robert A. robert@webappsec.org wrote:

Can we step back a little and say what we want to accomplish with
these evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure
that people understand that there is no good answer (or, in this
context, no perfect tool), and to help them highlight the strong
points as well as the weaknesses of the tools.

Totally agreed. I think that people have a very poor understanding of
what these tools are capable of and what they aren't. I'm not talking
about biz logic issues either.

For me, I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and whether I can
use it within my own environment. This would help me (and others) get
a better handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

I feel the same way. To me, people shouldn't be evaluating or selecting a
tool based on installation time; they should be basing it on
capability mapped to their needs. If they believe installation is a
major requirement, well, then there are bigger issues :)

- Robert

Sherif Koussa
Mon, Jul 11, 2011 2:55 PM

If a company has 10K developers and one tool saves 30 minutes vs. another tool,
I think that company would really care about the 5,000 hours of lost
productivity. Of course, this is given that the two tools are equivalent
in everything else. Factoring in signature upgrades, product updates, etc.,
this will really add up. Given that security is usually not the main goal of
software development teams, it becomes a big deal if teams have to spend a
considerable amount of time installing a tool, updating a tool, or figuring
out its UI.
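
For the record, the arithmetic behind that figure, under the stated assumption
that every one of the 10K developers bears the extra 30 minutes:

# Back-of-the-envelope cost of a 30-minute-per-developer difference,
# assuming (as above) that every developer installs and runs the tool.
developers = 10_000
extra_minutes_each = 30
lost_hours = developers * extra_minutes_each / 60
print(f"{lost_hours:,.0f} hours of lost productivity")  # -> 5,000 hours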

I really don't want to get hung up on one issue in the document, and I like
Paras's approach of asking which tool best supports achieving better
software security. But at the end of the day, developers are the ones who
are going to use the tool, and if they find it hard to use, they will
not use it, so the bigger goal of achieving better software security will *
not* be met.

What I am trying to say is: yes, some criteria are more important than
others, but in my opinion both kinds have to be taken into consideration in
order for the tool to achieve its goals.

Regards,
Sherif

On Mon, Jul 11, 2011 at 8:58 AM, Shah, Paras (HP Software - Fortify ASC) <
paras.shah@hp.com> wrote:

I am generally in agreement with Romain and Robert. My opinion and
experience have taught me that organizations typically procure a static
analysis technology to improve their software/application security
profile/maturity, but the technology itself is the least important factor in
achieving the desired outcome.

Far too much time is spent on "in the weeds" technical evaluation, when the
most important factor in actually achieving the goal of better software
security is building the right processes/workflows, providing people with
the right training to complete their new tasks, and making sure there are
documented standards so people know exactly what they should and should not
be doing.

With this philosophy in mind, it is my opinion that evaluation criteria
should focus on how well the technology supports achieving better software
security by aligning with a larger program. For example:
* Which technology has the broadest language support, to cover real-world
development environments?
* Which technology most efficiently integrates with multiple build
technologies and facilitates scanning automation?
* How does the technology integrate with defect tracking systems and other
application lifecycle management systems, to align with existing development
processes?
* Which technology offers interfaces and content that provide developers
and auditors with the most context-relevant information for their specific
roles?

Asking questions about scan time and installation seems completely
irrelevant. I think the IEEE doc (Static Analysis Deployment Pitfalls) has a
lot of salient information. Pitfalls 7, 8 and 10 are particularly
interesting and align with my points above.

Paras Shah

+1 408 836 7216 / Mobile

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org [mailto:
wasc-satec-bounces@lists.webappsec.org] On Behalf Of Romain Gaucher
Sent: Friday, July 08, 2011 5:02 PM
To: Robert A.
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Just to pile on a few more references, the following links are pretty
interesting:

- http://web.me.com/flashsheridan/Static_Analysis_Deployment_Pitfalls.pdf
- http://www.informit.com/articles/article.aspx?p=1680863
(Disclaimer, I work for Cigital)

Romain

On Fri, Jul 8, 2011 at 4:46 PM, Robert A. robert@webappsec.org wrote:

Can we step back a little and say what we want to accomplish with
these evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure
that people understand that there is no good answer (or, in this
context, no perfect tool), and to help them highlight the strong
points as well as the weaknesses of the tools.

Totally agreed. I think that people have a very poor understanding of
what these tools are capable of and what they aren't. I'm not talking
about biz logic issues either.

For me, I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and whether I can
use it within my own environment. This would help me (and others) get
a better handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

I feel the same way. To me, people shouldn't be evaluating or selecting a
tool based on installation time; they should be basing it on
capability mapped to their needs. If they believe installation is a
major requirement, well, then there are bigger issues :)

- Robert

Shah, Paras (HP Software - Fortify ASC)
Mon, Jul 11, 2011 3:24 PM

I think it would be dangerous to assume that developers will be using the technology. It is a viable and likely scenario that a security auditor will be the only one using the technology, and he/she will simply pass results to development.

Secondly, an org can have 10k developers or 10 developers. In either case, not every single developer is scanning code. Let's take an extreme example. Say an org has 10k developers and they are all working on the same application. Why would 10k people scan the same app 10k times? That responsibility is given to one person (and maybe a backup), the scan is done once, and the results are shared in an efficient manner. In this example saving 30 minutes or even 30 hours is inconsequential. What is important, however, is that the technology can support efficient distribution of results and collaborative remediation.

The example is extreme, but you get my point - scan time does not scale with the number of developers. And an efficient program will move the scans to a central build environment and will probably do them during off-hours on really beefy machines.
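
A minimal sketch of that centralized model; the `sast-scan` command is hypothetical, and the scheduling is left to whatever nightly trigger the build server provides:

#!/usr/bin/env python3
# Sketch: one scheduled job scans the shared codebase centrally and
# publishes the report, instead of every developer scanning locally.
# The `sast-scan` CLI and the paths below are hypothetical.
import shutil
import subprocess

CHECKOUT = "/builds/myapp"       # central checkout the build server updates
REPORT_SHARE = "/reports/myapp"  # where developers and auditors read results

def nightly_scan() -> None:
    # Scan the whole application once, on the (beefy) build machine.
    subprocess.run(["sast-scan", "--source", CHECKOUT, "--output", "scan.xml"])
    # Publish the single result set for the whole team to triage.
    shutil.copy("scan.xml", REPORT_SHARE)

if __name__ == "__main__":
    nightly_scan()  # run off-hours via the build server's nightly trigger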

Same thing with updates and the like. If an org has a funded mandate and the fortitude to get a software security program deployed, I do not think scan time, install time, or time to update is going to be a problem for them.

Paras Shah
Fortify ASC
District Manager, Canada
Fortify Software, an HP Company

+1 408 836 7216 / Mobile
+1 866 234 1609/ Fax
paras@hp.com / Email

Please consider the environment before printing this email.

From: Sherif Koussa [mailto:sherif.koussa@gmail.com]
Sent: Monday, July 11, 2011 10:55 AM
To: Shah, Paras (HP Software - Fortify ASC)
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

If a company has 10K developers and one tool saves 30 minutes vs. another tool, I think that company would really care about the 5,000 hours of lost productivity. Of course, this is given that the two tools are equivalent in everything else. Factoring in signature upgrades, product updates, etc., this will really add up. Given that security is usually not the main goal of software development teams, it becomes a big deal if teams have to spend a considerable amount of time installing a tool, updating a tool, or figuring out its UI.

I really don't want to get hung up on one issue in the document, and I like Paras's approach of asking which tool best supports achieving better software security. But at the end of the day, developers are the ones who are going to use the tool, and if they find it hard to use, they will not use it, so the bigger goal of achieving better software security will not be met.

What I am trying to say is: yes, some criteria are more important than others, but in my opinion both kinds have to be taken into consideration in order for the tool to achieve its goals.

Regards,
Sherif

On Mon, Jul 11, 2011 at 8:58 AM, Shah, Paras (HP Software - Fortify ASC) <paras.shah@hp.com> wrote:
I am generally in agreement with Romain and Robert. My opinion and experience have taught me that organizations typically procure a static analysis technology to improve their software/application security profile/maturity, but the technology itself is the least important factor in achieving the desired outcome.

Far too much time is spent on "in the weeds" technical evaluation, when the most important factor in actually achieving the goal of better software security is building the right processes/workflows, providing people with the right training to complete their new tasks, and making sure there are documented standards so people know exactly what they should and should not be doing.

With this philosophy in mind, it is my opinion that evaluation criteria should focus on how well the technology supports achieving better software security by aligning with a larger program. For example:
* Which technology has the broadest language support, to cover real-world development environments?
* Which technology most efficiently integrates with multiple build technologies and facilitates scanning automation?
* How does the technology integrate with defect tracking systems and other application lifecycle management systems, to align with existing development processes?
* Which technology offers interfaces and content that provide developers and auditors with the most context-relevant information for their specific roles?

Asking questions about scan time and installation seems completely irrelevant. I think the IEEE doc (Static Analysis Deployment Pitfalls) has a lot of salient information. Pitfalls 7, 8 and 10 are particularly interesting and align with my points above.

Paras Shah

+1 408 836 7216 / Mobile

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Romain Gaucher
Sent: Friday, July 08, 2011 5:02 PM
To: Robert A.
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Just to pile on a few more references, the following links are pretty interesting:

- http://web.me.com/flashsheridan/Static_Analysis_Deployment_Pitfalls.pdf
- http://www.informit.com/articles/article.aspx?p=1680863
(Disclaimer, I work for Cigital)

Romain

On Fri, Jul 8, 2011 at 4:46 PM, Robert A. <robert@webappsec.org> wrote:

Can we step back a little and say what we want to accomplish with
these evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure
that people understand that there is no good answer (or, in this
context, no perfect tool), and to help them highlight the strong
points as well as the weaknesses of the tools.

Totally agreed. I think that people have a very poor understanding of
what these tools are capable of and what they aren't. I'm not talking
about biz logic issues either.

For me, I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and whether I can
use it within my own environment. This would help me (and others) get
a better handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

I feel the same way. To me, people shouldn't be evaluating or selecting a
tool based on installation time; they should be basing it on
capability mapped to their needs. If they believe installation is a
major requirement, well, then there are bigger issues :)

- Robert

Alen Zukich
Mon, Jul 11, 2011 4:18 PM

I think we should assume that both developers and security auditors will
use these tools. So a tool should support both, which I think most
vendors do to a certain extent.

IMHO it is very important that developers own their security problems.
This matters because they need to fix issues right while they are
coding, while the code is fresh in their minds. Don't check in code with
security vulnerabilities.
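
As one sketch of that "fix it before check-in" workflow, a git pre-commit hook could run the scanner over just the staged files; the `sast-scan` command is hypothetical, standing in for whatever incremental interface the chosen tool offers:

#!/usr/bin/env python3
# Sketch of a git pre-commit hook enforcing "don't check in code with
# security vulnerabilities". The `sast-scan` CLI is hypothetical.
import subprocess
import sys

def staged_files() -> list[str]:
    """List files staged for this commit (added/copied/modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    files = staged_files()
    if not files:
        return 0
    # Scan only the staged files so the hook stays fast enough to keep.
    result = subprocess.run(["sast-scan", "--files", *files])
    if result.returncode != 0:
        print("Commit blocked: the scanner reported findings.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())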

alen

From: wasc-satec-bounces@lists.webappsec.org
[mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Shah, Paras
(HP Software - Fortify ASC)
Sent: July-11-11 11:25 AM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

I think it would be dangerous to assume that developers will be using
the technology. It is a viable and likely scenario that a security
auditor will be the only one using the technology, and he/she will simply
pass results to development.

Secondly, an org can have 10k developers or 10 developers. In either
case, not every single developer is scanning code. Let's take an extreme
example. Say an org has 10k developers and they are all working on the
same application. Why would 10k people scan the same app 10k times? That
responsibility is given to one person (and maybe a backup), the scan
is done once, and the results are shared in an efficient manner. In this
example saving 30 minutes or even 30 hours is inconsequential. What is
important, however, is that the technology can support efficient
distribution of results and collaborative remediation.

The example is extreme, but you get my point - scan time does not scale
with the number of developers. And an efficient program will move the
scans to a central build environment and will probably do them during
off-hours on really beefy machines.

Same thing with updates and the like. If an org has a funded mandate and
the fortitude to get a software security program deployed, I do not
think scan time, install time, or time to update is going to be a problem
for them.

Paras Shah

Fortify ASC

District Manager, Canada

Fortify Software, an HP Company

+1 408 836 7216 / Mobile

+1 866 234 1609/ Fax

paras@hp.com / Email

Please consider the environment before printing this email.

From: Sherif Koussa [mailto:sherif.koussa@gmail.com]
Sent: Monday, July 11, 2011 10:55 AM
To: Shah, Paras (HP Software - Fortify ASC)
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

If a company has 10K developers and one tool saves 30 minutes vs. another
tool, I think that company would really care about the 5,000 hours of lost
productivity. Of course, this is given that the two tools are
equivalent in everything else. Factoring in signature upgrades,
product updates, etc., this will really add up. Given that security is
usually not the main goal of software development teams, it becomes a
big deal if teams have to spend a considerable amount of time installing a
tool, updating a tool, or figuring out its UI.

I really don't want to get hung up on one issue in the document, and I like
Paras's approach of asking which tool best supports achieving
better software security. But at the end of the day, developers are the
ones who are going to use the tool, and if they find it hard to
use, they will not use it, so the bigger goal of achieving better
software security will not be met.

What I am trying to say is: yes, some criteria are more important than
others, but in my opinion both kinds have to be taken into consideration in
order for the tool to achieve its goals.

Regards,
Sherif

On Mon, Jul 11, 2011 at 8:58 AM, Shah, Paras (HP Software - Fortify ASC)
paras.shah@hp.com wrote:

I am generally in agreement with Romain and Robert. My opinion and
experience have taught me that organizations typically procure a static
analysis technology to improve their software/application security
profile/maturity, but the technology itself is the least important
factor in achieving the desired outcome.

Far too much time is spent on "in the weeds" technical evaluation, when
the most important factor in actually achieving the goal of better
software security is building the right processes/workflows, providing
people with the right training to complete their new tasks, and making
sure there are documented standards so people know exactly what they
should and should not be doing.

With this philosophy in mind, it is my opinion that evaluation criteria
should focus on how well the technology supports achieving better
software security by aligning with a larger program. For example:
* Which technology has the broadest language support, to cover real-world
development environments?
* Which technology most efficiently integrates with multiple build
technologies and facilitates scanning automation?
* How does the technology integrate with defect tracking systems and
other application lifecycle management systems, to align with existing
development processes?
* Which technology offers interfaces and content that provide developers
and auditors with the most context-relevant information for their
specific roles?

Asking questions about scan time and installation seems completely
irrelevant. I think the IEEE doc (Static Analysis Deployment Pitfalls)
has a lot of salient information. Pitfalls 7, 8 and 10 are particularly
interesting and align with my points above.

Paras Shah

+1 408 836 7216 / Mobile

-----Original Message-----
From: wasc-satec-bounces@lists.webappsec.org
[mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of Romain
Gaucher
Sent: Friday, July 08, 2011 5:02 PM
To: Robert A.
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] Static Analysis Tools Evaluation Criteria

Just to pile on a few more references, the following links are pretty
interesting:

- http://web.me.com/flashsheridan/Static_Analysis_Deployment_Pitfalls.pdf
- http://www.informit.com/articles/article.aspx?p=1680863
(Disclaimer, I work for Cigital)

Romain

On Fri, Jul 8, 2011 at 4:46 PM, Robert A. robert@webappsec.org wrote:

Can we step back a little and say what we want to accomplish with
these evaluation criteria?

To me, this should be developed to raise awareness of how to
choose a source code analysis tool for security, to make sure
that people understand that there is no good answer (or, in this
context, no perfect tool), and to help them highlight the strong
points as well as the weaknesses of the tools.

Totally agreed. I think that people have a very poor understanding of
what these tools are capable of and what they aren't. I'm not talking
about biz logic issues either.

For me, I would like to understand what SAST has to offer, what it is
consistently good at, what it is inconsistently good at, and whether I can
use it within my own environment. This would help me (and others) get
a better handle on coverage and dependability for certain things.

For example, we cannot start talking about the time it takes to
install a tool. This is subjective data, and I believe it is not
very useful. Talking about the number of false positives, or the
"coverage", is also very misleading, and I believe we should move away
from these "criteria".

I feel the same way. To me, people shouldn't be evaluating or selecting a
tool based on installation time; they should be basing it on
capability mapped to their needs. If they believe installation is a
major requirement, well, then there are bigger issues :)

- Robert
RA
Robert A.
Mon, Jul 11, 2011 4:24 PM

> I think it would be dangerous to assume that developers will be using the technology. It is a viable and likely scenario that
> a security auditor will be the only one using the technology and he/she will simply pass results to development.

It is dangerous not to assume this. In my organization we use a SAST tool whose adoption was driven by infosec, and it is
*primarily* used by developers.

> Secondly, an org can have 10k developers or 10 developers. In either case, every single developer is not scanning code.
> Let's take an extreme example. Say an org has 10k developers and they are all working on the same application. Why would 10k
> people scan the same app 10k times? That responsibility is given to one person (and maybe a backup), the scan is done once,
> and results are shared in an efficient manner. In this example saving 30 minutes or even 30 hours is inconsequential. What is
> important, however, is that the technology can support efficient distribution of results and collaborative remediation.

I don't think that this project is going to directly cover 'usage guidance/models', only technical capabilities.
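
That said, the "scan centrally, distribute the results" model is worth making concrete. Below is a minimal Python sketch of the idea; the sast-scan CLI and the tracker endpoint are hypothetical placeholders for whatever scanner and defect-tracking system an organization actually runs, not any real product's interface.

#!/usr/bin/env python3
"""Illustrative sketch only: one central, scheduled scan whose findings
are pushed to a shared tracker. `sast-scan` and the tracker URL are
hypothetical placeholders, not a real tool's interface."""
import json
import subprocess
import urllib.request

# Hypothetical scanner CLI, run once against the shared build tree,
# typically off-hours by the CI system rather than by each developer.
SCANNER_CMD = ["sast-scan", "--project", "/builds/app", "--output", "findings.json"]
TRACKER_URL = "https://tracker.example.com/api/issues"  # hypothetical endpoint


def run_scan() -> list[dict]:
    """Run the single central scan and load its findings."""
    subprocess.run(SCANNER_CMD, check=True)
    with open("findings.json") as fh:
        return json.load(fh)


def file_issue(finding: dict) -> None:
    """Route one finding into the team's existing defect workflow."""
    req = urllib.request.Request(
        TRACKER_URL,
        data=json.dumps(finding).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()


if __name__ == "__main__":
    for finding in run_scan():
        file_issue(finding)

The design point matches the argument above: the scan cost is paid once per build, while what actually scales with team size is how findings are routed, triaged, and remediated.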

> The example is extreme, but you get my point - scan time does not scale with the number of developers. And an efficient program
> will move the scans to a central build environment and will probably do them during off-hours on really beefy machines.

Again, I don't think 'scan time' is something that should be measured, as it is dependent on the following (the sketch after this list illustrates the problem):

  • Accuracy of the SAST tool
  • Proper setup by the SAST user
  • Hardware limitations (the machine could be busy, slow, old, etc.)
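
To make that concrete, here is a small Python sketch of why a raw scan-time number is hard to compare across tools: the wall-clock duration conflates all three factors above, so an honest measurement would at least have to record them alongside it. The sast-scan command line is again a hypothetical placeholder.

"""Illustrative sketch only: timing a scan while recording the context
that confounds the number. `sast-scan` is a hypothetical placeholder."""
import platform
import subprocess
import time


def timed_scan(cmd: list[str]) -> dict:
    """Run a scan command and report its duration with context."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return {
        "elapsed_seconds": time.monotonic() - start,
        # Hardware context: the same tool looks slower on a busy or old box.
        "machine": platform.machine(),
        "processor": platform.processor(),
        # Setup context: user-chosen flags trade analysis depth for speed,
        # and a deeper (more accurate) analysis legitimately takes longer.
        "command": " ".join(cmd),
    }


if __name__ == "__main__":
    print(timed_scan(["sast-scan", "--project", "/builds/app"]))

Comparing two tools on elapsed_seconds alone throws away everything else in that record, which is exactly the objection.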

Regards,

- Robert Auger
http://www.webappsec.org/
http://www.qasec.com/
http://www.cgisecurity.com/