Hi, here are some comments from my side
3.2 IDE integration support
[gueb] The vendor should provide the minimum requirements to run the
tools in the IDE. In some cases, those requirements will have an impact
on the choice of deployment configuration, or on the need to upgrade
existing computers to support the vendor's tool.
4.1 Frequency of signature update
[gueb] Signature download model: depending on the size of the
signature package, the download configuration could have an impact on
a large-scale deployment.
5.1 Support for Role-based Reports
[gueb] Ability to attach a finding to a developer, to increase the
effectiveness of awareness efforts.
On Wed, Nov 14, 2012 at 9:35 PM, Sherif Koussa sherif.koussa@gmail.com wrote:
Great feedback everyone, keep it coming :)
Regards,
Sherif
On Tue, Nov 13, 2012 at 2:12 PM, Alec Shcherbakov
alec.shcherbakov@astechconsulting.com wrote:
Also, the font size used for the content text is too small. For
consistency and easier reading I would use the same font size as the other
projects have used before, e.g.
http://projects.webappsec.org/w/page/13246985/Web%20Application%20Firewall%20Evaluation%20Criteria
Alec Shcherbakov
From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf
Of McGovern, James
Sent: Sunday, November 11, 2012 1:33 PM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] SATEC Draft is Ready
2.2 minor: font size changes throughout doc
3.4 Scan configuration capabilities: this includes:
Search for “Ability to mark findings as false positives, and remove them
from the report”
I think we left out the ability to classify an “app” as, e.g.,
mission-critical, financial, internet-facing, who-cares, etc. More of a
user-defined taxonomy.
From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf
Of Sherif Koussa
Sent: Friday, November 09, 2012 9:19 PM
To: wasc-satec@lists.webappsec.org
Subject: [WASC-SATEC] SATEC Draft is Ready
All,
Finally we have a draft ready. Before discussing next steps, I would like
to summarize what has been done during the last few months:
Summary of the last 9-10 Months:
We agreed as a community on a set of categories and sub-categories that
represent the most important aspects of choosing a static code analysis
tool.
The most essential lesson we learned during that phase is that we should
stay away from "relative" and "qualitative" criteria (e.g. number of false
positives, CPU usage, etc.) because they just do not give evaluators a
deterministic way to evaluate the tool.
I sent out a call for contributors who would like to author or review
content.
Each author's work passed through 2-4 rounds of review.
Finally, I took all the work and merged it together into one document
(partially here
http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)
Since the document was authored by more than one person, I had to
revise it more than once in order to come up with a consistent
and homogeneous document.
Please note:
There were some areas I had to trim down because they were too
detailed, while there were other areas I had to flesh out a bit because
they were too thin.
I had to merge a couple of criteria because, after merging the whole
document, they didn't stand up as a category or sub-category on their own
(e.g. Mobile Frameworks).
Most of the changes were made so that the document would read as consistent
and homogeneous as a whole. If you wrote or reviewed a criterion and you
think it is totally different from what it is today, please contact me
directly.
What Now? Your feedback is much NEEDED
It is VERY important that:
You review the document and make sure that it is accurate, contains no
misleading information, is not biased toward a certain product, and is
free of grammar, spelling, and ambiguity issues.
If you were an author and you used any references, it is very important
that you send them to me.
Timeline:
We have 14 days, until November 23rd, to get all feedback in. On November 26th
we have to start rolling the document out for general availability.
The Draft:
You can find the draft here: http://projects.webappsec.org/w/page/60671848/SATEC%20Second%20Draft
Looking forward to your feedback.
Regards,
Sherif
May I challenge 5.1? I would think that attaching a finding to a developer is a function of a defect tracking system, not of static analysis. If the goal were to send findings back to the last developer who touched the code, that would require deeper integration with a version control system.
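To make that concrete, here is a rough sketch (not from the draft, names and paths made up) of the kind of version control lookup this would take, assuming the scanned source lives in a local git checkout:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Sketch: map a finding (file + line) to the developer who last touched
    // that line, via "git blame". File and line are hypothetical placeholders
    // for a real finding emitted by the tool.
    public class BlameLookup {
        public static void main(String[] args) throws Exception {
            String file = "src/main/java/example/LoginController.java"; // hypothetical
            int line = 42;                                              // hypothetical

            Process p = new ProcessBuilder(
                    "git", "blame", "--porcelain", "-L", line + "," + line, file)
                    .redirectErrorStream(true)
                    .start();

            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String out;
                while ((out = r.readLine()) != null) {
                    // Porcelain output includes "author" and "author-mail" lines
                    // for the commit that last modified the requested line.
                    if (out.startsWith("author ") || out.startsWith("author-mail ")) {
                        System.out.println(out);
                    }
                }
            }
            p.waitFor();
        }
    }

Even this toy version shows the work sits on the defect tracking / version control side rather than in the analyzer itself.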
On Wed, Nov 21, 2012 at 12:31:12PM +0000, McGovern, James wrote:
May I challenge 5.1? I would think that attaching a finding to a developer is a function of a defect tracking system, not of static analysis. If the goal were to send findings back to the last developer who touched the code, that would require deeper integration with a version control system.
I agree.
Fixed.
On Mon, Nov 12, 2012 at 11:32 PM, Alen Zukich alen.zukich@klocwork.com wrote:
Still going through this but wanted to point out one thing. I noticed in
a couple of spots it says either “SANS 25” or “SANS Top 20”. Should it be
“CWE/SANS Top 25” (http://www.sans.org/top25-software-errors/)?
Technically speaking, there is a SANS Top 20, which is quite old. Just
trying to understand what is meant by both references. Note: links would
be nice as well.
Alen
Benoit,
Thanks for your comments. Please find my replies inline
Regards,
Sherif
On Tue, Nov 20, 2012 at 9:10 PM, gueb gueb@owasp.org wrote:
Hi, here are some comments from my side
3.2 IDE integration support
[gueb] The vendor should provide the minimum requirements to run the
tools in the IDE. In some cases, those requirements will have an impact
on the choice of deployment configuration, or on the need to upgrade
existing computers to support the vendor's tool.
Sherif: Good point. I think this should be covered in 1.1 Installation
Support. I have added a clarification to 1.1.
4.1 Frequency of signature update
[gueb] Signature download model: depending on the size of the
signature package, the download configuration could have an impact on
a large-scale deployment.
Sherif: May I challenge this point? What difference does it make if the
size of the update is 1 KB vs. 1 MB? In addition, the vendor will not be able
to tell the size of their updates 18 months down the road.
5.1 Support for Role-based Reports
[gueb] Ability to attach a finding to a developer, to increase the
effectiveness of awareness efforts.
Sherif: I think this is going to hurt the organization more than not
having it, as they would have to keep two lists of developers: one in the
bug tracking system and one in the tool.
Alec,
Looks like we are using the same font size, although I agree it looks
smaller. I will try to dig more into the issue.
Sherif
All,
6 more days to get your feedback in.
Regards,
Sherif
In index A, "A list of the frameworks and libraries used in the
organization." is mentioned. Does this refer to an external document?
I would suggest giving categories of frameworks/libraries and examples for
different languages. This would give readers a precise guideline.
Support for the frameworks/APIs used is a crucial part.
--
Philippe Arteau
Everyone,
I had a quick look at the SATEC and commented on some sections. Sorry to
attach the comments like this, but email just isn't great for that
(*RG:* marks the start of my comments, and I tried to use rich-text
formatting to set them apart from the original text). I left only the
parts of the SATEC I commented on.
One piece of general feedback is that the SATEC is really oriented toward a
very small subset of the static analysis tools available and mostly talks
about web applications. I believe an effort should be made to make the
SATEC more generic.
Also, an area that isn't touched upon is the usability of the tool: the
interface it provides, the workflow, integration with bug management
systems, etc.
In addition, it would be good to provide links to other entities that have
published evaluation criteria for static analysis tools. I find it kind of
sad that NIST SAMATE is not mentioned even once; I'm really wondering
whether the authors looked at those specs
(http://samate.nist.gov/index.php/Source_Code_Security_Analysis.html) before
writing this document. NIST also gets test suites from another government
entity (not sure if I can say who) to test for coverage. That might be
interesting to point to as well.
In case it's unknown to some, I work for a tool vendor, so you could
consider my take on this tainted :), but I tried to stay factual.
Also, there are several typos in the document; a copy/paste into Word should
catch them.
Cheers,
Romain
1. Platform Support:
*Static code analysis tools represent a significant investment for
software organizations looking to automate parts of their software security
testing and quality assurance processes. Not only do they represent a
monetary investment, they also demand time and effort from staff members
to set up, operate, and maintain the tool, in addition to checking and
acting upon the results it produces. Understanding the ideal
deployment environment for the tool will maximize the return on investment
and avoid unplanned hardware purchase costs. The following factors are
essential to understanding the tool's capabilities, ensuring proper
utilization of the tool and maximizing return on investment (ROI).*
1.2 Scalability Support:
Vendors provide various deployment options for their tools. A clear
description of the different deployment options must be provided by the
vendor to maximize the tool's usage. In addition, the vendor must specify
the optimal operating conditions. At a minimum the vendor must specify:
*RG: Why isn't parallelism mentioned? To achieve speed, if you can run the
different checks on many threads it is much faster; hence the advantage of
multi-core/multi-CPU machines. There are multiple things to be considered
here.*
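For what it's worth, a minimal sketch of the per-file case (my own illustration, not any vendor's design), just to show what "scales with cores" means in practice:

    import java.nio.file.Path;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Sketch: run independent per-file checks across all available cores.
    // analyzeFile() is a placeholder for whatever checks a tool would run.
    public class ParallelScan {
        static void analyzeFile(Path file) {
            System.out.println("analyzed " + file); // parse + run enabled checks here
        }

        public static void main(String[] args) throws InterruptedException {
            List<Path> files = List.of(
                    Path.of("src/A.java"), Path.of("src/B.java"), Path.of("src/C.java"));

            ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            for (Path f : files) {
                pool.submit(() -> analyzeFile(f));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            // Whole-program (cross-file) analyses cannot be split this naively,
            // which is why the vendor should state what scales with cores and
            // what does not.
        }
    }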
2. Technology Support:
Most organizations use more than one programming language within
their application portfolio. In addition, more software frameworks are
becoming mature enough for development teams to leverage across the board,
on top of scores of 3rd-party libraries and technologies, both server- and
client-side. Once these technologies, frameworks and libraries are
integrated into an application, they become part of it, and the
application inherits any vulnerability within these components. It is vital
for the static code analysis tool to be able to understand and analyse not
only the application, but also the libraries, frameworks and technologies
supporting it.
RG: There is a big misconception here, I believe. You don't want to analyze
the framework, but how the application uses the framework. It would be
ridiculous to scan the frameworks every time an application uses them, but
if the static analysis does not understand the important frameworks (control,
data, and view) then it will miss most of the behavior of the application.
That may be fine for some quality analyses, but security checks are usually
more global and require such understanding.
2.1 Standard Languages Support:
Most tools support more than one programming language. However, an
organization looking to purchase a static code analysis tool should make an
inventory of all the programming languages used inside the organization, as
well as in any third-party applications that will be scanned. After
shortlisting all the programming languages, the organization should compare
the list against the tool’s supported programming languages.
RG: Languages and versions. If you use C++11 a lot in one app, make sure
that the frontend of the analysis tool will understand it. Also,
applications such as web apps use several languages in the same app (SQL,
Java, JavaScript and HTML is a very simple stack, for example); does the
tool understand all of these languages, and is it able to track the behavior
of the program when it passes data or calls into another language? Example:
stored procedures. Does it understand where the data is actually coming from?
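As a small illustration of the stored-procedure case (my own hypothetical fragment, assuming plain JDBC and a procedure name I made up): a tool that parses only the Java side has to know the value came from the request, and that whatever happens next happens in SQL, outside the code it parsed:

    import java.sql.CallableStatement;
    import java.sql.Connection;

    // Sketch: untrusted input crossing from Java into SQL via a stored procedure.
    // The procedure name and parameter are hypothetical; the point is that the
    // analysis must follow the tainted value across the language boundary.
    public class OrderLookup {
        public void findOrders(Connection db, String customerId) throws Exception {
            // customerId is assumed to come from an HTTP request parameter.
            // Binding it as a parameter is safe on the Java side, but if
            // sp_find_orders builds dynamic SQL from it internally, only a tool
            // that also understands the SQL side will see the injection.
            try (CallableStatement call = db.prepareCall("{call sp_find_orders(?)}")) {
                call.setString(1, customerId);
                call.execute();
            }
        }
    }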
*2.2 Frameworks Support:*
Once an application is built on top of a framework, the application
inherits any vulnerability in that framework. In addition, depending on how
the application leverages a framework or a library, it can add new attack
vectors. It is very important for the tool to be able to trace
tainted data through the framework as well as the custom modules built on
top of it. At large, frameworks and libraries can be classified into two
types:
The tool should understand the relationship between the application and
the frameworks/libraries. Ideally, the tool would also be able to follow
tainted data between different frameworks/libraries.
RG: There is a lot to be said about frameworks. I don't especially like the
separation between server-side, mobile, and client-side. From a static
analysis point of view that doesn't matter much; those are all programs.
Frameworks have interesting properties and different features. Some manage
the data and the database (ORM, etc.), some are responsible for the flow of
the application (Spring MVC, Struts, .NET MVC), and some render the view
(Jasper Reports, ASP, FreeMarker, ASP.NET pages, Jinja, etc.). This is, to
me, the important part: understanding what the framework is doing to the
application.
Framework support should be tested and well defined by the tool vendor:
does it understand configuration files, etc.? Which features of the
framework does it not understand?
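To make the control/data/view point concrete, a small Spring-MVC-style fragment (my own example, assuming Spring MVC on the classpath): nothing in this class calls it, binds its parameter, or renders its view; the framework does all of that, so a tool that does not model those semantics sees neither the untrusted source nor the view sink:

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    // Sketch: the framework routes the HTTP request here, binds "q" from the
    // query string, and renders the returned view name with the model attributes.
    @Controller
    public class SearchController {
        @GetMapping("/search")
        public String search(@RequestParam("q") String query, Model model) {
            model.addAttribute("query", query); // may be echoed unescaped by the view
            return "searchResults";             // template selected by the framework
        }
    }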
*2.3 Industry Standards Aided Analysis:*
The tool should be able to provide analysis that is tailored towards one
of the industry-standard weakness classifications, e.g. OWASP Top 10,
CWE/SANS Top 25, WASC Threat Classification, etc. This is a desirable
feature for many reasons. For example, for an organization that has just
started its application security program, a full standard scan might prove
overwhelming, especially with an extensive portfolio of applications.
Focusing on a specific industry standard in this case would be a good place
to start for that particular organization.
*In terms of "aided analysis", I believe it's more important to talk about
the ability to enable or disable specific checks based on what the
security/development team needs.*
3. Scan, Command and Control Support
The scan, command and control capabilities of static code analysis tools
have a significant influence on the user’s ability to get the most out of
the tool. This affects both the speed and the effectiveness of processing
findings and remediating them.
3.3 Customization:
The tool usually comes with a set of signatures; this set is what the tool
follows to uncover the different weaknesses in the source code. A static
code analysis tool should offer a way to extend these signatures in order
to customize its ability to detect new weaknesses, alter the way it detects
weaknesses, or stop it from detecting a specific pattern. The tool should
allow users to:
RG: Can we make this a bit more generic? Signatures or rules are just one
way of accomplishing customization. I can think of a few directions (see
the sketch after this list):
- Ability to enable/disable/modify the understanding of frameworks: either
create custom rules, checkers, or generic framework definitions ("this
construct means this")
- Ability to create new checkers, to detect new or customized types of issues
- Ability to override the core knowledge of the tool
- Ability to override the core remediation advice
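For illustration only, here is a sketch of the kinds of statements such a mechanism would let users make; the tiny API below is defined inline and is entirely hypothetical, not any vendor's interface, and every method name in it is made up:

    import java.util.List;

    // Hypothetical customization sketch: the record types are defined here so
    // nothing is mistaken for a real vendor API; they only illustrate the kinds
    // of overrides and extensions a tool could expose.
    public class CustomRules {
        record TaintSource(String method) {}
        record TaintSink(String method, String weaknessId) {}
        record Suppression(String checkerId, String pathPattern) {}

        public static void main(String[] args) {
            List<Object> rules = List.of(
                    // "Our in-house wrapper returns untrusted data" (extends core knowledge).
                    new TaintSource("com.example.web.RequestUtil.param"),
                    // "Anything reaching our legacy query helper is a SQL injection sink."
                    new TaintSink("com.example.db.LegacyDao.rawQuery", "CWE-89"),
                    // "Do not report hard-coded credentials under the test tree" (override).
                    new Suppression("hardcoded-credentials", "src/test/**"));
            rules.forEach(System.out::println);
        }
    }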
3.4 Scan configuration capabilities: this includes:
*RG: How about the ability to support new compilers? *
3.5 Testing Capabilities:
Scanning an application for weaknesses is the single most important
functionality of the tool. It is essential for the tool to be able to
understand, accurately identify and report the following attacks and
security weaknesses.
RG: Okay for webapps, but what about the rest? Also, some of these are very
generic… "information leakage": what does it mean to "accurately identify
and report" this? Note that this is not a solvable problem with static
analysis techniques. Also, a static analysis tool cannot report "attacks",
since it doesn't have enough information about the runtime.
Generally, the testing capabilities should be a very large section, and the
focus should be "how well are these covered?". Several open-source tools
have large testing capabilities but will generate tons of false positives.
Accuracy is important, and there is no real way to test for it other than
to actually run the tool on one of your own applications and see what it
finds.
*4. Product Signature Update*
Product signatures are what the static code analysis tool uses to identify
security weaknesses. When choosing a static analysis tool, one should take
the following into consideration:
RG: Can we move away from "signatures"? This is really biased towards some
tools and some kinds of analysis. If you take FindBugs or Clang, they don't
use signatures but checkers. We could talk about core
knowledge/checks/checkers, as I believe that is more generic.
6. Triage and Remediation Support
A crucial factor in a static code analysis tool is the support provided for
the triage process, along with the accuracy and effectiveness of the
remediation advice. This is vital to the speed with which a finding is
assessed and remediated by the development team.
RG: This section talks about file formats and findings, but not about
triage and remediation support. Triage support means: can I mark this as a
false positive? Remediation support means: does the tool provide remediation
advice, is it accurate or generic, and can it be customized?
*6.1 Finding Meta-Data: *
The information provided together with a finding, at a minimum the tool
should provide the following with each finding:*
*RG: s/recommendation/remediation. Taint analysis is only one type of
analysis; what about the rest? It's all about evidence, such as flow
evidence and the conditions that made the checker/tool think this was an
issue. There is no standard format for reporting these defects, but the
tool should report as much information as it can about each defect.*
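Since, as noted, there is no standard format, here is just one illustrative shape (mine, with field names and values invented) of a finding record that carries location, evidence, and remediation together:

    import java.util.List;

    // Illustrative only: one plausible shape for per-finding metadata, with
    // flow evidence explaining why the checker reported it.
    public class FindingExample {
        record TraceStep(String file, int line, String note) {}
        record Finding(String checkerId, String weaknessId, String severity,
                       String file, int line,
                       List<TraceStep> evidence,
                       String remediation) {}

        public static void main(String[] args) {
            Finding f = new Finding(
                    "sql-injection", "CWE-89", "High",
                    "src/main/java/example/LegacyDao.java", 118,
                    List.of(new TraceStep("SearchController.java", 27, "untrusted request parameter"),
                            new TraceStep("LegacyDao.java", 118, "concatenated into SQL string")),
                    "Use a parameterized query instead of string concatenation.");
            System.out.println(f);
        }
    }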
6.2 Assessment File Management:
Assessment file management saves triage time immensely when scanning
larger applications or when a rescan is performed on an application. At a
minimum the tool should provide the following:
*RG: This is also specific to some tools. Not all tools generate
"assessment files", so this is mostly irrelevant. *
*7. Enterprise Level Support*
When choosing a static analysis tool for the enterprise, an important
consideration is support for integration with various systems at the
enterprise level. These systems include bug tracking systems, systems for
reporting on the risk posture of various applications, and systems that
mine the data to evaluate trending patterns.
7.2 Data Mining Capabilities Reports:
It is an important goal of any security team to be able to understand the
security trends of an organization’s applications. To meet this goal,
static analysis tools should provide the user with the ability to mine the
vulnerability data, present trends and build intelligence from it.
*RG: Shouldn't we talk more about the ability to define customized mining
capabilities and trend generation?*
Romain
Btw, have we tried to reach out to tool vendors/makers to get their input
on this document? I think it's fairly important, and I'm not sure who's
working for whom here...