wasc-satec@lists.webappsec.org

WASC Static Analysis Tool Evaluation Criteria


SATEC Draft is Ready

gueb
Wed, Nov 21, 2012 2:10 AM

Hi, here are some comments from my side:

3.2 IDE integration support
[gueb] The vendor should provide the minimum requirements for running the
tool in the IDE. In some cases, those requirements will have an impact
on the choice of deployment configuration, or on the need to upgrade
existing workstations to support a given vendor.

4.1 Frequency of signature update
[gueb] Signature download model: depending on the size of the
signature package, the download configuration could have an impact on
a large-scale deployment.

5.1 Support for Role-based Reports
[gueb] Ability to attach a finding to a developer, to increase the
effectiveness of awareness.

On Wed, Nov 14, 2012 at 9:35 PM, Sherif Koussa sherif.koussa@gmail.com wrote:

Great feedback everyone, keep it coming :)

Regards,
Sherif

On Tue, Nov 13, 2012 at 2:12 PM, Alec Shcherbakov
alec.shcherbakov@astechconsulting.com wrote:

Also, the font size used for the content text is too small. For
consistency and easier reading I would use the same font size as the other
projects have used before, e.g.
http://projects.webappsec.org/w/page/13246985/Web%20Application%20Firewall%20Evaluation%20Criteria

Alec Shcherbakov


From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf
Of McGovern, James
Sent: Sunday, November 11, 2012 1:33 PM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] SATEC Draft is Ready

2.2 minor: font size changes throughout doc

3.4 Scan configuration capabilities: this includes:

Search for “Ability to mark findings as false positives, and remove them
from the report”

Think we left out the ability to classify an “app” such as
mission-critical, financial, internet-facing, who cares, etc. More of a
user-defined taxonomy

From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf
Of Sherif Koussa
Sent: Friday, November 09, 2012 9:19 PM
To: wasc-satec@lists.webappsec.org
Subject: [WASC-SATEC] SATEC Draft is Ready

All,

Finally we have a draft ready. Before discussing next steps, I would like
to summarize what has been done during the last few months:

Summary of the last 9-10 Months:

  • We agreed as a community on a set of categories and sub-categories that
    represent the most important aspects of choosing a static code analysis
    tool.

  • The most essential lesson we learned during that phase is that we should
    stay away from "relative" and "qualitative" criteria (e.g. number of false
    positives, CPU usage, etc.) because they do not give evaluators a
    deterministic way to evaluate the tool.

  • I sent out a call for contributors who would like to author or review
    content.

  • Each author's work passed through 2-4 rounds of review.

  • Finally, I took all the work and merged it together into one document
    (partially here
    http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)

  • Since the document was authored by more than one person, I had to
    revise it more than once in order to come up with a consistent
    and homogeneous document.

Please Notice:

  • There were some areas where I had to trim down because they were too
    detailed while there were other areas that I had to flesh out a bit since
    they were too thin.

  • I had to merge a couple of criteria because after merging the whole
    document, they didn't stand up as a category or a sub-category on their own
    (e.g. Mobile Frameworks).

  • Most of the changes were done so that the document would look consistent
    and homogeneous as a whole. If you wrote or reviewed a criterion and you
    think it is totally different from what it is today, please contact me
    directly.

What Now? Your feedback is much NEEDED

It is VERY important that:

  1. You review the document and make sure that it is accurate/contains no
    misleading information/is not biased to a certain product.

  2. The document is free of grammar/spelling/ambiguity issues.

  3. If you were an author and you used any references, it is very important
    that you send them to me.

Timeline:

We have 14 days till November 23rd to get all feedback. On November 26th
we have to start rolling out the document for general availability.

The Draft:

You can find the draft here

Looking forward to your feedback.

Regards,

Sherif

McGovern, James
Wed, Nov 21, 2012 12:31 PM

May I challenge 5.1? I would think that attaching a finding to a developer is a function of a defect tracking system, not of static analysis. If the goal were to send findings back to the last developer who touched the code, that would require deeper integration with a version control system.

-----Original Message-----
From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf Of gueb
Sent: Tuesday, November 20, 2012 9:10 PM
To: Sherif Koussa
Cc: wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] SATEC Draft is Ready

Hi, here are some comments from my side:

3.2 IDE integration support
[gueb] The vendor should provide the minimum requirements for running the tool in the IDE. In some cases, those requirements will have an impact on the choice of deployment configuration, or on the need to upgrade existing workstations to support a given vendor.

4.1 Frequency of signature update
[gueb] Signature download model: depending on the size of the signature package, the download configuration could have an impact on a large-scale deployment.

5.1 Support for Role-based Reports
[gueb] Ability to attach a finding to a developer, to increase the effectiveness of awareness.

On Wed, Nov 14, 2012 at 9:35 PM, Sherif Koussa sherif.koussa@gmail.com wrote:

Great feedback everyone, keep it coming :)

Regards,
Sherif

On Tue, Nov 13, 2012 at 2:12 PM, Alec Shcherbakov
alec.shcherbakov@astechconsulting.com wrote:

Also, the font size used for the content text is too small. For
consistency and easier reading I would use the same font size as the
other projects have used before, e.g.
http://projects.webappsec.org/w/page/13246985/Web%20Application%20Firewall%20Evaluation%20Criteria

Alec Shcherbakov


From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On
Behalf Of McGovern, James
Sent: Sunday, November 11, 2012 1:33 PM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] SATEC Draft is Ready

2.2 minor: font size changes throughout doc

3.4 Scan configuration capabilities: this includes:

Search for “Ability to mark findings as false positives, and remove
them from the report”

Think we left out the ability to classify an “app” such as
mission-critical, financial, internet-facing, who cares, etc. More of
a user-defined taxonomy

From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On
Behalf Of Sherif Koussa
Sent: Friday, November 09, 2012 9:19 PM
To: wasc-satec@lists.webappsec.org
Subject: [WASC-SATEC] SATEC Draft is Ready

All,

Finally we have a draft ready. Before discussing next steps, I would
like to summarize what has been done during the last few months:

Summary of the last 9-10 Months:

  • We agreed as a community on a set of categories and sub-categories
    that represent the most important aspects of choosing a static code
    analysis tool.

  • The most essential lesson we learned during that phase is that we
    should stay away from "relative" and "qualitative" criteria (e.g.
    number of false positives, CPU usage, etc.) because they do not give
    evaluators a deterministic way to evaluate the tool.

  • I sent out a call for contributors who would like to author or
    review content.

  • Each author's work passed through 2-4 rounds of review.

  • Finally, I took all the work and merged it together into one
    document (partially here
    http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)

  • Since the document was authored by more than one person, I had to
    revise it more than once in order to come up with a
    consistent and homogeneous document.

Please Notice:

  • There were some areas where I had to trim down because they were
    too detailed while there were other areas that I had to flesh out a
    bit since they were too thin.

  • I had to merge a couple of criteria because after merging the whole
    document, they didn't stand up as a category or a sub-category on
    their own (e.g. Mobile Frameworks).

  • Most of the changes were done so that the document would look
    consistent and homogeneous as a whole. If you wrote or reviewed a
    criterion and you think it is totally different from what it is today,
    please contact me directly.

What Now? Your feedback is much NEEDED

It is VERY important that:

  1. You review the document and make sure that it is accurate/contains
    no misleading information/is not biased to a certain product.

  2. The document is free of grammar/spelling/ambiguity issues.

  3. If you were an author and you used any references, it is very
    important that you send them to me.

Timeline:

We have 14 days till November 23rd to get all feedback. On November
26th we have to start rolling out the document for general availability.

The Draft:

You can find the draft here

Looking forward to your feedback.

Regards,

Sherif

Henri Salo
Wed, Nov 21, 2012 1:06 PM

On Wed, Nov 21, 2012 at 12:31:12PM +0000, McGovern, James wrote:

May I challenge 5.1? I would think that attaching a finding to a developer is a function of a defect tracking system, not of static analysis. If the goal were to send findings back to the last developer who touched the code, that would require deeper integration with a version control system.

I agree.

- Henri Salo
Sherif Koussa
Thu, Nov 22, 2012 2:57 AM

Fixed.

On Mon, Nov 12, 2012 at 11:32 PM, Alen Zukich alen.zukich@klocwork.com wrote:

Still going through this but wanted to point out one thing. I noticed in
a couple of spots it says either “SANS 25” or “SANS Top 20”. Should it be
“CWE/SANS Top 25” (http://www.sans.org/top25-software-errors/)?


Technically speaking there is a SANS Top 20 which is quite old. Just
trying to understand what is meant by both references. Note: links would
be nice as well.


Alen




From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On
Behalf Of Sherif Koussa
Sent: November-09-12 9:19 PM

To: wasc-satec@lists.webappsec.org
Subject: [WASC-SATEC] SATEC Draft is Ready

All,

Finally we have a draft ready. Before discussing next steps, I would like
to summarize what has been done during the last few months:

Summary of the last 9-10 Months:

  • We agreed as a community on a set of categories and sub-categories that
    represent the most important aspects of choosing a static code analysis
    tool.

  • The most essential lesson we learned during that phase is that we should
    stay away from "relative" and "qualitative" criteria (e.g. number of false
    positives, CPU usage, etc.) because they do not give evaluators a
    deterministic way to evaluate the tool.

  • I sent out a call for contributors who would like to author or review
    content.

  • Each author's work passed through 2-4 rounds of review.

  • Finally, I took all the work and merged it together into one document
    (partially here
    http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)

  • Since the document was authored by more than one person, I had to
    revise it more than once in order to come up with a consistent
    and homogeneous document.

Please Notice:

  • There were some areas where I had to trim down because they were too
    detailed while there were other areas that I had to flesh out a bit since
    they were too thin.

  • I had to merge a couple of criteria because after merging the whole
    document, they didn't stand up as a category or a sub-category on their own
    (e.g. Mobile Frameworks).

  • Most of the changes were done so that the document would look consistent
    and homogeneous as a whole. If you wrote or reviewed a criterion and you
    think it is totally different from what it is today, please contact me
    directly.

What Now? Your feedback is much NEEDED

It is VERY important that:

  1. You review the document and make sure that it is accurate/contains no
    misleading information/is not biased to a certain product.

  2. The document is free of grammar/spelling/ambiguity issues.

  3. If you were an author and you used any references, it is very important
    that you send them to me.

Timeline:

We have 14 days till November 23rd to get all feedback. On November
26th we have to start rolling out the document for general availability.

The Draft:

You can find the draft here:
http://projects.webappsec.org/w/page/60671848/SATEC%20Second%20Draft

Looking forward to your feedback.

Regards,

Sherif

Sherif Koussa
Thu, Nov 22, 2012 3:04 AM

Benoit,

Thanks for your comments. Please find my replies inline

Regards,
Sherif

On Tue, Nov 20, 2012 at 9:10 PM, gueb gueb@owasp.org wrote:

Hi, here are some comments from my side:

3.2 IDE integration support
[gueb] The vendor should provide the minimum requirements for running the
tool in the IDE. In some cases, those requirements will have an impact
on the choice of deployment configuration, or on the need to upgrade
existing workstations to support a given vendor.

Sherif: Good point. I think this should be covered in 1.1 Installation
Support. I added a clarification to 1.1.

4.1 Frequency of signature update
[gueb] Signature download model: depending on the size of the
signature package, the download configuration could have an impact on
a large-scale deployment.

Sherif: May I challenge this point: what difference will it make if the
size of the update is 1 KB vs. 1 MB? In addition, the vendor will not be able
to predict the size of their updates 18 months down the road.

5.1 Support for Role-based Reports
[gueb] Ability to attach a finding to a developer, to increase the
effectiveness of awareness.

Sherif: I think this is going to hurt the organization more than not
having it, as they would have to keep two lists of developers, one in the
bug tracker and one in the tool.

On Wed, Nov 14, 2012 at 9:35 PM, Sherif Koussa sherif.koussa@gmail.com
wrote:

Great feedback everyone, keep it coming :)

Regards,
Sherif

On Tue, Nov 13, 2012 at 2:12 PM, Alec Shcherbakov
alec.shcherbakov@astechconsulting.com wrote:

Also, the font size used for the content text is too small. For
consistency and easier reading I would use the same font size as the other
projects have used before, e.g.
http://projects.webappsec.org/w/page/13246985/Web%20Application%20Firewall%20Evaluation%20Criteria

Alec Shcherbakov

From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf
Of McGovern, James
Sent: Sunday, November 11, 2012 1:33 PM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] SATEC Draft is Ready

2.2 minor: font size changes throughout doc

3.4 Scan configuration capabilities: this includes:

Search for “Ability to mark findings as false positives, and remove them
from the report”

Think we left out the ability to classify an “app” such as
mission-critical, financial, internet-facing, who cares, etc. More of a
user-defined taxonomy

From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On Behalf
Of Sherif Koussa
Sent: Friday, November 09, 2012 9:19 PM
To: wasc-satec@lists.webappsec.org
Subject: [WASC-SATEC] SATEC Draft is Ready

All,

Finally we have a draft ready. Before discussing next steps, I would like
to summarize what has been done during the last few months:

Summary of the last 9-10 Months:

  • We agreed as a community on a set of categories and sub-categories that
    represent the most important aspects of choosing a static code analysis
    tool.

  • The most essential lesson we learned during that phase is that we should
    stay away from "relative" and "qualitative" criteria (e.g. number of false
    positives, CPU usage, etc.) because they do not give evaluators a
    deterministic way to evaluate the tool.

  • I sent out a call for contributors who would like to author or review
    content.

  • Each author's work passed through 2-4 rounds of review.

  • Finally, I took all the work and merged it together into one document
    (partially here
    http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)

  • Since the document was authored by more than one person, I had to
    revise it more than once in order to come up with a consistent
    and homogeneous document.

Please Notice:

  • There were some areas where I had to trim down because they were too
    detailed while there were other areas that I had to flesh out a bit since
    they were too thin.

  • I had to merge a couple of criteria because after merging the whole
    document, they didn't stand up as a category or a sub-category on their own
    (e.g. Mobile Frameworks).

  • Most of the changes were done so that the document would look consistent
    and homogeneous as a whole. If you wrote or reviewed a criterion and you
    think it is totally different from what it is today, please contact me
    directly.

What Now? Your feedback is much NEEDED

It is VERY important that:

  1. You review the document and make sure that it is accurate/contains no
    misleading information/is not biased to a certain product.

  2. The document is free of grammar/spelling/ambiguity issues.

  3. If you were an author and you used any references, it is very important
    that you send them to me.

Timeline:

We have 14 days till November 23rd to get all feedback. On November 26th
we have to start rolling out the document for general availability.

The Draft:

You can find the draft here

Looking forward to your feedback.

Regards,

Sherif



Sherif Koussa
Thu, Nov 22, 2012 3:07 AM

Alec,

Looks like we are using the same font size, although I agree it looks
smaller. I will try to dig more into the issue.

Sherif

On Tue, Nov 13, 2012 at 2:12 PM, Alec Shcherbakov <
alec.shcherbakov@astechconsulting.com> wrote:

Also, the font size used for the content text is too small. For
consistency and easier reading I would use the same font size as the other
projects have used before, e.g.
http://projects.webappsec.org/w/page/13246985/Web%20Application%20Firewall%20Evaluation%20Criteria

Alec Shcherbakov


From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org] On
Behalf Of McGovern, James
Sent: Sunday, November 11, 2012 1:33 PM
To: Sherif Koussa; wasc-satec@lists.webappsec.org
Subject: Re: [WASC-SATEC] SATEC Draft is Ready

2.2 minor: font size changes throughout doc

3.4 Scan configuration capabilities: this includes:

Search for “Ability to mark findings as false positives, and remove them
from the report”

Think we left out the ability to classify an “app” such as
mission-critical, financial, internet-facing, who cares, etc. More of a
user-defined taxonomy

From: wasc-satec [mailto:wasc-satec-bounces@lists.webappsec.org]
On Behalf Of Sherif Koussa
Sent: Friday, November 09, 2012 9:19 PM
To: wasc-satec@lists.webappsec.org
Subject: [WASC-SATEC] SATEC Draft is Ready

All,

Finally we have a draft ready. Before discussing next steps, I would like
to summarize what has been done during the last few months:

Summary of the last 9-10 Months:

  • We agreed as a community on a set of categories and sub-categories that
    represent the most important aspects of choosing a static code analysis
    tool.

  • The most essential lesson we learned during that phase is that we should
    stay away from "relative" and "qualitative" criteria (e.g. number of false
    positives, CPU usage, etc.) because they do not give evaluators a
    deterministic way to evaluate the tool.

  • I sent out a call for contributors who would like to author or review
    content.

  • Each author's work passed through 2-4 rounds of review.

  • Finally, I took all the work and merged it together into one document
    (partially here
    http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)

  • Since the document was authored by more than one person, I had to
    revise it more than once in order to come up with a consistent
    and homogeneous document.

Please Notice:

  • There were some areas where I had to trim down because they were too
    detailed while there were other areas that I had to flesh out a bit since
    they were too thin.

  • I had to merge a couple of criteria because after merging the whole
    document, they didn't stand up as a category or a sub-category on their own
    (e.g. Mobile Frameworks).

  • Most of the changes were done so that the document would look consistent
    and homogeneous as a whole. If you wrote or reviewed a criterion and you
    think it is totally different from what it is today, please contact me
    directly.

What Now? Your feedback is much NEEDED

It is VERY important that:

  1. You review the document and make sure that it is accurate/contains no
    misleading information/is not biased to a certain product.

  2. The document is free of grammar/spelling/ambiguity issues.

  3. If you were an author and you used any references, it is very
    important that you send them to me.

Timeline:

We have 14 days till November 23rd to get all feedback. On November
26th we have to start rolling out the document for general availability.

The Draft:

You can find the draft here:
http://projects.webappsec.org/w/page/60671848/SATEC%20Second%20Draft

Looking forward to your feedback.

Regards,

Sherif

Sherif Koussa
Thu, Nov 22, 2012 3:35 AM

All,

6 more days to get your feedback in.

Regards,
Sherif

On Fri, Nov 9, 2012 at 9:18 PM, Sherif Koussa sherif.koussa@gmail.com wrote:

All,

Finally we have a draft ready. Before discussing next steps, I would like
to summarize what has been done during the last few months:

Summary of the last 9-10 Months:

  • We agreed as a community on a set of categories and sub-categories that
    represent the most important aspects of choosing a static code analysis
    tool.
  • The most essential lesson we learned during that phase is that we should
    stay away from "relative" and "qualitative" criteria (e.g. number of false
    positives, CPU usage, etc.) because they do not give evaluators a
    deterministic way to evaluate the tool.
  • I sent out a call for contributors who would like to author or review
    content.
  • Each author's work passed through 2-4 rounds of review.
  • Finally, I took all the work and merged it together into one document
    (partially here
    http://projects.webappsec.org/w/page/55204553/SATEC%20First%20Draft)
  • Since the document was authored by more than one person, I had to
    revise it more than once in order to come up with a consistent
    and homogeneous document.

Please Notice:

  • There were some areas where I had to trim down because they were too
    detailed while there were other areas that I had to flesh out a bit since
    they were too thin.
  • I had to merge a couple of criteria because after merging the whole
    document, they didn't stand up as a category or a sub-category on their own
    (e.g. Mobile Frameworks).
  • Most of the changes were done so that the document would look consistent
    and homogeneous as a whole. If you wrote or reviewed a criterion and you
    think it is totally different from what it is today, please contact me
    directly.

What Now? Your feedback is much NEEDED
It is VERY important that:

  1. You review the document and make sure that it is accurate/contains no
    misleading information/is not biased to a certain product.
  2. The document is free of grammar/spelling/ambiguity issues.
  3. If you were an author and you used any references, it is very
    important that you send them to me.

Timeline:
We have 14 days till November 23rd to get all feedback. On November
26th we have to start rolling out the document for general availability.

The Draft:
You can find the draft here:
http://projects.webappsec.org/w/page/60671848/SATEC%20Second%20Draft
Looking forward to your feedback.

Regards,
Sherif

Philippe Arteau
Thu, Nov 22, 2012 4:58 AM

In index A, "A list of the frameworks and libraries used in the
organization." is mentioned. Does it refer to an external document?

I would suggest giving categories of frameworks/libraries and examples for
different languages. This would give readers a precise guideline.
Support for the frameworks/APIs in use is a crucial part.

--
Philippe Arteau

Romain Gaucher
Fri, Nov 23, 2012 7:44 PM

Everyone,
I had a quick look at the SATEC and commented on some sections. Sorry to
attach the comments like this, but emails just aren't great for that
(*RG:* marks the start of my comments, and I tried to use rich-text
formatting to set them apart from the text). I left only the parts of the
SATEC I commented on.
One general piece of feedback is that the SATEC is really oriented toward a
very small subset of the static analysis tools available and mostly talks
about web applications. I believe the effort should be made to make this
SATEC more generic.

Also, an area that's not touched upon is the usability of the tool: the
interface it provides, workflow, integration with bug management systems,
etc.

In addition, it would be good to provide links to other entities that have
written evaluation criteria for static analysis tools. I find it kind of sad
that NIST SAMATE is not even mentioned once; I'm really wondering whether
the authors looked at these specs
(http://samate.nist.gov/index.php/Source_Code_Security_Analysis.html) before
writing this document. NIST also gets test suites from another government
entity (not sure if I can say who) to test for coverage. This might be
interesting to point to as well.

If it's unknown to some, I'm working for a tool vendor and you could
therefore consider my take on this tainted :), but I tried to stay factual.

Also, there are several typos in the document; copy/paste into Word should
fix this.

Cheers,
Romain

1. Platform Support:

*Static code analysis tools represent a significant investment by
software organizations looking to automate parts of their software security
testing and quality assurance processes. Not only do they represent a
monetary investment, but they also demand time and effort from staff members
to set up, operate, and maintain the tool, in addition to checking and
acting upon the results produced by the tool. Understanding the ideal
deployment environment for the tool will maximize the return on investment
and avoid unplanned hardware purchase costs. The following factors are
essential to understanding the tool's capabilities, ensuring proper
utilization of the tool, and maximizing the return on investment (ROI).*

1.2 Scalability Support:

Vendors provide various deployment options for their tools. A clear
description of the different deployment options must be provided by the
vendor to maximize the tool's usage. In addition, the vendor must specify
the optimal operating conditions. At a minimum, the vendor must specify:

  • The type of deployment: server-side vs. client-side, as this might
    incur a hardware purchase.
  • Ability to chain several machines together to achieve greater scan speed.
  • Ability to run multiple scans simultaneously.

*RG: Why isn't parallelism mentioned? To achieve speed, if you can have
many threads running the different checks, the analysis is much faster;
hence the advantage of multi-core/multi-CPU machines. There are multiple
things to be considered (see the sketch below):

  • Ability to analyze multiple applications on one or multiple machines
  • Ability to speed up the analysis of one application*
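
As a minimal sketch of the first bullet (the "scan-cli" command and its
arguments are hypothetical placeholders, not any particular vendor's
interface), an evaluator could check whether several applications can be
analyzed concurrently on a single machine:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical harness: launch one scan per application and let the tool
// use the available cores. "scan-cli" is a made-up placeholder CLI.
public class ParallelScans {
    public static void main(String[] args) throws Exception {
        List<String> apps = List.of("app-billing", "app-portal", "app-api");
        ExecutorService pool = Executors.newFixedThreadPool(apps.size());
        for (String app : apps) {
            pool.submit(() -> {
                try {
                    new ProcessBuilder("scan-cli", "--project", app)
                            .inheritIO()
                            .start()
                            .waitFor();   // one scan per application, run concurrently
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.HOURS);
    }
}

Whether this actually scales then depends on the tool: per-scan memory use,
licensing of concurrent scans, and whether a single scan can itself use
multiple cores.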

2. Technology Support:

Most organizations leverage more than one programming language within
their application portfolio. In addition, more software frameworks are
becoming mature enough for development teams to adopt and use across the
board, along with scores of third-party libraries and technologies, both
server- and client-side. Once these technologies, frameworks and libraries
are integrated into an application, they become part of it and the
application inherits any vulnerability within these components. It is vital
for the static code analysis tool to be able to understand and analyse, not
only the application, but the libraries, frameworks and technologies
supporting the application.

RG: There is a big misconception here, I believe. You don't want to analyze
the framework, but how the application uses the framework. It would be
ridiculous to scan the frameworks every time an application uses them, but
if the static analysis does not understand the important frameworks (control,
data, and view) then it will miss most of the behavior of the application.
This is fine for some quality analysis, but security checks are usually
more global and require such understanding.
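
To make the point concrete, here is a minimal sketch (Spring MVC is used
purely as an illustration; the controller and parameter names are
hypothetical): unless the analyzer models the framework's routing and data
binding, it never learns that "name" below is attacker-controlled, and it
misses the reflected output that follows.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class GreetingController {

    // The framework, not application code, routes the HTTP request here and
    // binds the query parameter to "name". A tool that does not model
    // @RequestMapping/@RequestParam sees a method that is never called and a
    // parameter that is never tainted.
    @RequestMapping("/greet")
    @ResponseBody
    public String greet(@RequestParam("name") String name) {
        return "<h1>Hello " + name + "</h1>";   // reflected into the HTTP response unescaped
    }
}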

2.1 Standard Languages Support:

Most of the tools support more than one programming language. However, an
organization looking to purchase a static code analysis tool should make an
inventory of all the programming languages used inside the organization, as
well as in any third-party applications that will be scanned. After
shortlisting all the programming languages, an organization should compare
the list against the tool's supported list of programming languages.

RG: Languages and versions. If you use C++11 a lot in one app, make sure
that the frontend of the analysis tool will understand it. Also,
applications such as web apps use several languages in the same app (SQL,
Java, JavaScript and HTML is a very simple stack, for example); does the
tool understand all of these languages, and is it able to track the
behavior of the program when it passes data or calls into another language?
Example: stored procedures. Does the tool understand where the data is
actually coming from?
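
As a minimal sketch of the cross-language point (the lookup method and the
GET_BALANCE procedure are hypothetical), user input enters in Java, looks
safely bound on the Java side, and only becomes dangerous inside the SQL
that the stored procedure builds:

    // Hypothetical code: tainted data crosses from Java into SQL.
    import java.sql.*;

    public class BalanceLookup {
        // 'account' comes straight from an HTTP parameter (taint source).
        public static double lookup(Connection conn, String account) throws SQLException {
            // Parameter binding looks safe on the Java side...
            try (CallableStatement cs = conn.prepareCall("{call GET_BALANCE(?)}")) {
                cs.setString(1, account);
                try (ResultSet rs = cs.executeQuery()) {
                    return rs.next() ? rs.getDouble(1) : 0.0;
                }
            }
        }
    }
    // ...but if GET_BALANCE builds dynamic SQL from its argument, e.g.
    //   SET @q = CONCAT('SELECT balance FROM accounts WHERE id = ''', acct, '''');
    //   PREPARE stmt FROM @q; EXECUTE stmt;
    // the injection is only visible to a tool that follows the data into the
    // stored procedure.

A tool that stops at the Java/SQL boundary reports nothing here, which is
exactly the kind of gap I mean.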

2.2 Frameworks Support:

Once an application is built on top of a framework, the application
inherits any vulnerability in that framework. In addition, depending on how
the application leverages a framework or a library, it can add new attack
vectors. It is very important for the tool to be able to trace tainted data
through the framework as well as the custom modules built on top of it.
Broadly, frameworks and libraries can be classified into three types:

  • Server-side Frameworks: which are the frameworks/libraries that
    reside on the server, e.g. Spring, Struts, Rails, .NET etc.
  • Mobile Frameworks: which are the frameworks that are used on mobile
    devices, e.g. Android, iOS, Windows Mobile etc.
  • Client-side Frameworks: which are the frameworks/libraries that
    reside on browsers, e.g. JQuery, Prototype, etc.

The tool should understand the relationship between the application and
the frameworks/libraries. Ideally, the tool would also be able to follow
tainted data between different frameworks/libraries.

RG: There is a lot to be said on frameworks. I don't especially like the
separation between server-side, mobile, and client-side. From a static
analysis point of view, that doesn't matter so much; those are all
programs. Frameworks have interesting properties and different features.
Some will manage the data and database (ORM, etc.), some will be
responsible for the flow in the application (Spring MVC, Struts, .NET MVC),
and some will render the view (Jasper Reports, ASP, FreeMarker, ASP.NET
pages, Jinja, etc.). This is, to me, the important part: understanding what
the framework is doing to the application.

The support of a framework should be tested and well defined by the tool
vendor: does it understand configuration files, etc.? Which features of the
framework doesn't it understand?
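
As a small illustration (the controller is hypothetical; the annotations
are standard Spring MVC), nothing in the code below calls
request.getParameter() or a rendering API directly, so unless the tool
models what the framework does with @RequestParam and the returned view
name, it sees neither a source nor a sink:

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    @Controller
    public class SearchController {
        @GetMapping("/search")
        public String search(@RequestParam("q") String query, Model model) {
            // 'query' is attacker-controlled, but only the framework knows that:
            // the binding from the HTTP request happens outside this code.
            model.addAttribute("query", query);
            // The framework resolves "results" to a template; if that template
            // echoes the query without output encoding, this is reflected XSS.
            return "results";
        }
    }

This is the control/data/view understanding mentioned above, made concrete.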

2.3 Industry Standards Aided Analysis:

The tool should be able to provide analysis that is tailored towards one
of the industry-standard weaknesses classifications, e.g. OWASP Top 10,
CWE/SANS Top 25, WASC Threat Classification, etc. This is a desirable
feature for many reasons. For example, for an organization that has just
started its application security program, a full standard scan might prove
overwhelming, especially with an extensive portfolio of applications.
Focusing on a specific industry standard in this case would be a good place
to start for that particular organization.

RG: OWASP and WASC aren't weaknesses classifications. I would prefer to see
the emphasis on CWE, since this is really the only weaknesses
classification out there.

In terms of "aided analysis", I believe it's more important to talk about
the ability to enable or disable some checks based on what the
security/development team needs.

3. Scan, Command and Control Support

The scan, command and control capabilities of static code analysis tools
have a significant influence on the user's ability to get the most out of
the tool. This affects both the speed and effectiveness of processing
findings and remediating them.

3.3 Customization:

The tool usually comes with a set of signatures that it follows to uncover
the different weaknesses in the source code. Static code analysis tools
should offer a way to extend these signatures in order to customize the
tool's capabilities for detecting new weaknesses, alter the way the tool
detects weaknesses, or stop the tool from detecting a specific pattern. The
tool should allow users to:

  • Users should be able to add/delete/modify core signatures: Core
    signatures are the signatures that come bundled with the tool. False
    positives are one of the inherent flaws of static code analysis tools in
    general. One way to minimize this problem is to optimize the tool's core
    signatures, e.g. mark a certain source as safe input.
  • Users should be able to author custom signatures: This feature is
    invaluable for maximizing the tool's benefits. For example, a custom
    signature might be needed to "educate" the tool about the existence of a
    custom cleansing module, so that it starts flagging lines that do not use
    that module or stops flagging lines that do use it.

RG: Can we make this a bit more generic? Signatures or rules are just one
way of accomplishing customization. I can think of a few directions:

- Ability to enable/disable/modify the understanding of frameworks: either
create custom rules, checkers, or generic framework definitions ("this
construct means this stuff")

- Ability to create new checkers, i.e. detect new/customized types of issues

- Ability to override the core knowledge of the tool

- Ability to override the core remediation advice
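
For instance (the cleansing class is hypothetical, and the rule syntax is
purely illustrative, not any tool's actual format), the kind of knowledge a
team usually needs to inject is "this in-house method is a sanitizer":

    // In-house cleansing module the tool knows nothing about out of the box.
    public final class Cleanser {
        // Strips everything except alphanumerics; the team treats its output
        // as safe for SQL and HTML contexts.
        public static String scrub(String input) {
            return input == null ? "" : input.replaceAll("[^A-Za-z0-9]", "");
        }
    }

    // Illustrative custom rule telling the analyzer about it:
    //   sanitizer: com.example.Cleanser.scrub(java.lang.String)
    //   removes-taint: SQL_INJECTION, XSS

With that in place, the tool can stop flagging flows that pass through
Cleanser.scrub() and keep flagging the ones that bypass it.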

3.4 Scan configuration capabilities: this includes:

  • Ability to schedule scans: scheduled scans are often a mandatory
    feature. Scans are often scheduled after nightly builds; at other times
    they are scheduled when CPU usage is at its minimum. Therefore, it is
    important for the user to be able to schedule the scan to run at a
    particular time.
  • Ability to view real-time status of running scans: some scans can take
    hours to finish, so it is beneficial and desirable for a user to be able
    to see the scan's progress and the weaknesses found thus far.
  • Ability to save configurations and re-use them as configuration
    templates: Often a significant amount of time and effort is involved in
    optimally configuring a static code analyser for a particular application.
    A tool should provide the user with the ability to save a scan's
    configuration so that it can be re-used for later scans.
  • Ability to run multiple scans simultaneously: Organizations that have
    many applications to scan will find the ability to run simultaneous scans
    to be a desirable feature.
  • Ability to support multiple users: this is important for organizations
    that plan to roll out the tool to developers, or that plan to scan large
    applications requiring more than one engineer to assess at the same time.

RG: How about the ability to support new compilers?

3.5 Testing Capabilities:

Scanning an application for weaknesses is the single most important
functionality of the tool. It is essential for the tool to be able to
understand, accurately identify and report the following attacks and
security weaknesses (a short code sketch of two of them follows the list).

  • Abuse of Functionality
  • Application Misconfiguration
  • Auto-complete Not Disabled on Password Parameters
  • Buffer Overflow
  • Credential/Session Prediction
  • Cross-site Scripting
  • Cross-site Request Forgery
  • Denial of Service
  • Insecure Cryptography
  • Format String
  • HTTP Response Splitting
  • Improper Input Handling
  • Improper Output Encoding
  • Information Leakage
  • Insufficient Authentication
  • Insufficient Authorization
  • Insufficient Session Expiration
  • Integer Overflows
  • LDAP Injection
  • Mail Command Injection
  • Null Byte Injection
  • OS Command Injection
  • Path Traversal
  • Remote File Inclusion
  • Session Fixation
  • SQL Injection
  • URL Redirection Abuse
  • XPATH Injection
  • XML External Entities
  • XML Entity Expansion
  • XQuery Injection
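
To make two of these concrete, here is a minimal sketch (a hypothetical
servlet using the standard servlet and JDBC APIs; the JDBC URL is a
placeholder) containing both a SQL Injection and a reflected Cross-site
Scripting issue:

    import java.io.IOException;
    import java.sql.*;
    import javax.servlet.http.*;

    public class ProductServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String id = req.getParameter("id");  // taint source: HTTP parameter
            try (Connection c = DriverManager.getConnection("jdbc:example"); // placeholder URL
                 Statement st = c.createStatement()) {
                // SQL Injection: tainted 'id' concatenated into the query.
                ResultSet rs = st.executeQuery(
                        "SELECT name FROM products WHERE id = " + id);
                if (rs.next()) {
                    // Cross-site Scripting: tainted 'id' echoed without encoding.
                    resp.getWriter().println("<p>Results for " + id + "</p>");
                }
            } catch (SQLException e) {
                resp.sendError(500);
            }
        }
    }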

RG: Okay for webapps, what about the rest? Also, some are very
generic… "information leakage": what does it mean to "accurately identify
and report" this? Note that this is not a solvable problem with
static analysis techniques. Also, a static analysis tool cannot report
"attacks" since it doesn't have enough information about the runtime.

Generally, the testing capability should be a very large section and the
focus should be "how well are these covered?". Several open-source tools
have a large testing capability but will generate tons of false positives.
Accuracy is important, and there is no real way to test for it but to
actually use the tool on one of your applications and see what it finds.

4. Product Signature Update

Product signatures are what the static code analysis tool uses to identify
security weaknesses. When making a choice of a static analysis tool, one
should take the following into consideration:

RG: Can we move away from "signature"? I mean, this is really biased
towards some tools and some kinds of analysis. If you take FindBugs or
Clang, they don't use signatures but checkers. We can talk about
core knowledge/checks/checkers, as I believe this is more generic.

6. Triage and Remediation Support

A crucial factor in a static code analysis tool is the support provided in
the triage process and the accuracy and effectiveness of the remediation
advice. This is vital to the speed with which findings are assessed and
remediated by the development team.

RG: This section is talking about formats of files and findings, but not
about triage and remediation support. Triage support means: can I say that
this is a FP? Remediation support means: does the tool provide remediation
advice, is it accurate or generic, and can it be customized?

6.1 Finding Meta-Data:

  • The information provided together with a finding; at a minimum the tool
    should provide the following with each finding:

    • Finding Severity: the severity of the finding, with a way to change it
      if required.
    • Summary: explanation of the finding and the risk it poses if exploited.
    • Location: the code file and the line of code where the finding is
      located.
    • Taint Analysis: the flow of the tainted data until it reaches the cited
      finding location.
    • Recommendation advice: customized recommendation advice with details
      pertaining to the current finding, ideally with code examples written in
      the application's programming language.

RG: s/recommendation/remediation. Taint analysis is only one type of
analysis; how about the rest? It's all about evidence, such as flow
evidence and the conditions why the checker/tool thought it was an issue.
There is no standard format to report these defects, but the tool should
report as much information as it can on the defect.
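
As a rough sketch of what "as much information as it can" could look like
per finding (the field names are illustrative, not any tool's export
format):

    import java.util.List;

    // Illustrative only: one possible shape for a finding record.
    public record Finding(
            String id,                  // stable identifier, useful for diffing scans
            String severity,            // e.g. High / Medium / Low, user-adjustable
            String summary,             // what the weakness is and its risk if exploited
            String file,                // source file containing the finding
            int line,                   // line of the sink / defect
            List<String> evidenceTrace, // source-to-sink steps or other evidence
            String remediation) {       // advice, ideally with language-specific examples
    }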

6.2 Assessment File Management:

Assessment file management saves triage time immensely when scanning
larger applications or when a rescan is performed on an application. At a
minimum the tool should provide the following:

  • The ability to merge two assessment files
  • The ability to diff two assessment files (see the sketch after this
    list)
  • The ability to build incrementally on the application's previous
    assessment file.
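
Whatever the storage format, the diff operation itself is simple to
describe. A minimal sketch (the fingerprinting is hypothetical and
deliberately naive, and it reuses the illustrative Finding record sketched
above):

    import java.util.*;
    import java.util.stream.Collectors;

    public class AssessmentDiff {
        // A finding is "the same" across two scans if its fingerprint matches;
        // real tools use more robust fingerprints that survive line shifts.
        private static String fingerprint(Finding f) {
            return f.file() + "#" + f.line() + "#" + f.summary();
        }

        // Findings present in the new scan but not in the old one.
        public static List<Finding> newFindings(List<Finding> oldScan,
                                                List<Finding> newScan) {
            Set<String> known = oldScan.stream()
                    .map(AssessmentDiff::fingerprint)
                    .collect(Collectors.toSet());
            return newScan.stream()
                    .filter(f -> !known.contains(fingerprint(f)))
                    .collect(Collectors.toList());
        }
    }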

RG: This is also specific to some tools. Not all tools generate
"assessment files", so this is mostly irrelevant.

7. Enterprise Level Support

When making a choice on a static analysis tool in the enterprise, an
important consideration is support for integration with various systems at
the enterprise level. These systems include bug tracking systems, systems
for reporting on the risk posture of various applications, and systems that
mine the data for evaluating trending patterns.

7.2 Data Mining Capabilities Reports:

It is an important goal of any security team to be able to understand the
security trends of an organization’s applications. To meet this goal,
static analysis tools should provide the user with the ability to mine the
vulnerability data, present trends and build intelligence from it.

RG: Shouldn't we talk more about the ability to define customized mining
capabilities and trends generation?

Romain

On Wed, Nov 21, 2012 at 8:58 PM, Philippe Arteau
philippe.arteau@gmail.com wrote:

In index A: "A list of the frameworks and libraries used in the
organization." is mentioned. Does it refer to an external document?

I would suggest giving categories of frameworks/libraries and examples
for different languages. This would give a precise guideline to the
readers. The support for the frameworks/APIs used is a crucial part.

--
Philippe Arteau


wasc-satec mailing list
wasc-satec@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/wasc-satec_lists.webappsec.org

RG
Romain Gaucher
Fri, Nov 23, 2012 7:45 PM

Btw, have we tried to reach out to tool vendors/makers to take their input
on this document? I think it's fairly important, and I'm not sure who's
working for who here...
