wasc-satec@lists.webappsec.org

WASC Static Analysis Tool Evaluation Criteria


Phase II: Are you an author or reviewer?

Sherif Koussa
Mon, Jan 9, 2012 9:39 PM

Hi All,

So we have been working for about 4-5 months now, trying to figure out what
matters most to software companies looking to acquire a Static Code Analysis
tool. I think we now have a very good set of criteria, vetted several times
and captured in the form of categories and sub-categories (headers and
sub-headers, mainly) on the Wiki page here:
http://projects.webappsec.org/w/page/42093482/Static%20Analysis%20Tool%20Evaluation%20Criteria%20Working

Now that we have the categories and sub-categories locked down, we need to
start the next phase: fleshing them out and explaining what each of them
means. If you need an example, please visit the WASSEC project
http://projects.webappsec.org/w/page/13246986/Web%20Application%20Security%20Scanner%20Evaluation%20Criteria
to get a sense of what the finished criteria will look like.

Now we need authors, who will actually flesh out (write and explain) the
categories and sub-categories, and reviewers, who will review the authors'
work and suggest modifications.

If you have cycles over the next two months, please reply to this email with
either "Author" or "Reviewer" to indicate the role you would like to play in
the next period.

Ideally, we would like to keep the workload per contributor to less than 2
hours a week for the next two months. We should be able to achieve this
considering that we have almost 40 people on this mailing list.

Please let me know if you have any comments, suggestions or questions.

Regards,
Sherif

McGovern, James
Mon, Jan 9, 2012 10:04 PM

Count me in as a reviewer. In the meantime, I have the following questions/thoughts:

  •     How should we separate out static analysis tools that do security vs. the ones that do quality? They produce different metrics, etc.

  •     When I was at The Hartford, we had a big focus on reporting. This included an understanding not just of the code characteristics, but also of which departments, divisions and developers wrote the best vs. worst code, etc.

  •     We also wanted a richer classification than just a grouping of "projects". For example, if ten applications used Struts then we wanted to understand cross-cutting concerns.

  •     We also cared about integration. For example, could we prevent Cognizant developers from seeing how suboptimal the results were from code written by BLANK.

  •     At times, we wanted to export report data (you know, the habit of doing interesting pivots in Excel).

  •     Bug tracking shouldn't assume one repository, so this needs to work in a federated manner.

  •     Could I access the reports without requiring yet another username/password, and instead consume enterprise identity?

  •     I really hate having to install an application desktop by desktop and would rather incorporate this into a desktop build. Some vendors' license tracking became an impediment.

  •     Does it make sense for every project that uses Spring to scan Spring, or could I somehow "include" other scan results?


Sherif Koussa
Tue, Jan 10, 2012 2:16 AM

Hi James,

Thanks for your feedback, please find my replies below

Regards,
Sherif

On Mon, Jan 9, 2012 at 5:04 PM, McGovern, James <james.mcgovern@hp.com> wrote:

Count me in as a reviewer. In the meantime, I have the following
questions/thoughts:


  •     How should we separate out static analysis tools that do security vs. the ones that do quality? They produce different metrics, etc.

Sherif: This project is focused on security tools only. Thinking about it,
though, I am not sure we have reflected that clearly enough in our
documentation efforts so far.


  •     When I was at The Hartford, we had a big focus on reporting. This included an understanding not just of the code characteristics, but also of which departments, divisions and developers wrote the best vs. worst code, etc.

Sherif: Reporting is indeed a big chunk of a tool's value, but I am not sure
we should direct companies to pay too much attention to it. The four
sub-categories that currently exist under reporting cover, I would imagine,
more than 95% of companies' reporting needs. The risk of adding more
direction under reporting is that the criteria are flat: something like
"3.5 Support for editing core rules" would be as important as any other
criterion in the document, since there is no weight associated with each
criterion, at least for the version we are currently working on.


  •     We also wanted a richer classification than just a grouping of "projects". For example, if ten applications used Struts then we wanted to understand cross-cutting concerns.

Sherif: I am not following; can you elaborate?


  •     We also cared about integration. For example, could we prevent Cognizant developers from seeing how suboptimal the results were from code written by BLANK.

Sherif: That's indeed an interesting point. So you have an application
written by multiple third-party software vendors, you scan the code, and
you want a way to show each vendor the findings in their own code rather
than in the code written by the other vendors?
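
(As a purely hypothetical illustration of that scenario: the findings list,
file paths and vendor names below are invented, and no specific tool's API
is implied. The per-vendor view amounts to filtering findings by code
ownership.)

    # Hypothetical sketch: show each third-party vendor only the findings
    # located in the code paths that vendor owns. All names are invented.
    findings = [
        {"rule": "SQL Injection", "file": "src/billing/InvoiceDao.java", "line": 42},
        {"rule": "Cross-Site Scripting", "file": "src/portal/SearchController.java", "line": 87},
    ]

    # Ownership map: path prefix -> vendor responsible for that module.
    ownership = {
        "src/billing/": "VendorA",
        "src/portal/": "VendorB",
    }

    def vendor_for(path):
        """Return the vendor owning the module that contains this file, if any."""
        for prefix, vendor in ownership.items():
            if path.startswith(prefix):
                return vendor
        return None

    def findings_for(vendor):
        """Filter the full result set down to the findings in that vendor's code."""
        return [f for f in findings if vendor_for(f["file"]) == vendor]

    print(findings_for("VendorA"))  # only the billing finding is shown to VendorA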


  •     At times, we wanted to export report data (you know, the habit of doing interesting pivots in Excel).

Sherif: One thing holding the SCA industry back right now is the inability
to port results from one tool to another, or to agree on a common format for
exporting findings. None of the SCA tools has this today, so I am not sure
there would be value in adding it as a criterion if none supports it.
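
(Purely as a hypothetical illustration, since no current SCA tool offers
this: a tool-neutral export could be as simple as an agreed set of fields
serialized to a common text format. The field names below are invented and
do not correspond to any existing tool's schema.)

    import json

    # Hypothetical, tool-neutral finding record with invented field names.
    finding = {
        "id": "F-0001",
        "weakness": "CWE-89",
        "severity": "High",
        "file": "src/billing/InvoiceDao.java",
        "line": 42,
        "message": "Unsanitized input used in a SQL statement",
        "tool": "ExampleScanner 1.0",
    }

    # Serializing to a shared format is what makes results portable: another
    # tool, or a spreadsheet pivot table, only has to agree on the field names.
    print(json.dumps([finding], indent=2))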


  •     Bug tracking shouldn't assume one repository, so this needs to work in a federated manner.

Sherif: I see your point; how would you change the current sub-category,
though?


  •     Could I access the reports without requiring yet another username/password, and instead consume enterprise identity?

Sherif: I am not sure that belongs in our criteria, though. I would imagine
another company wanting to restrict access to the reports.


  •     I really hate having to install an application desktop by desktop and would rather incorporate this into a desktop build. Some vendors' license tracking became an impediment.

Sherif: Licensing is indeed an important issue. The two main players in this
industry have changed their licensing schemes several times during the last
12 months, so I am not sure this is something we should dive into. Thoughts?


  •     Does it make sense for every project that uses Spring to scan Spring, or could I somehow "include" other scan results?

Sherif: That's an interesting point, but what is the value of including
other scan results? I am just thinking out loud here. It would be really
useful for per-LOC licensing schemes, but most of the vendors have ditched
LOC licensing, so that is not a driver anymore, and saving time is probably
not a big concern. Do you see any other benefits to this?




Sherif Koussa
Tue, Jan 10, 2012 1:13 PM

Great response so far: we have filled all the reviewer spots in less than
24 hours, and we have only 3 author spots left to fill.

Regards,
Sherif


McGovern, James
Tue, Jan 10, 2012 5:14 PM
  1.   In terms of results portability, is it more about having an industry-standard format, or more about the specification of the existing format being published so it can be consumed via tools such as Dinis Cruz's most wonderful O2 platform?

  2.   Licensing can either be an enabler or an impediment to rolling out the tools to the developers that need them. I would like to see this rated high.

  3.   We need a way to "classify" projects. This requires the capture of more "metadata". The traditional view of static analysis is to focus on "MY PROJECT", but think about the scenario of a large enterprise that has thousands of "MY Projects". (See the sketch after this list.)

  4.   Licensing should be separated out into: who can run a scan, who can read/create reports, and who can bring up results in the IDE so that they can remediate code.
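
(A hypothetical sketch of the classification idea in point 3; all project
names, divisions and fields below are invented. Classification amounts to
attaching metadata to each scanned project so findings can be sliced across
the whole portfolio, e.g. "every application that uses Struts".)

    # Hypothetical project metadata records; every value here is invented.
    projects = [
        {"name": "claims-portal", "division": "Insurance", "frameworks": ["Struts"]},
        {"name": "quote-engine",  "division": "Insurance", "frameworks": ["Spring"]},
        {"name": "hr-intranet",   "division": "Corporate", "frameworks": ["Struts", "Spring"]},
    ]

    # Cross-cutting view: every project using a given framework, regardless
    # of which team or division owns it.
    struts_projects = [p["name"] for p in projects if "Struts" in p["frameworks"]]
    print(struts_projects)  # ['claims-portal', 'hr-intranet']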
    


Sherif Koussa
Fri, Jan 27, 2012 3:42 PM

Thanks to those who replied; here is the final list:

Authors:

  • Benoit Guerette
  • Sneha Phadke
  • Alec Shcherbakov
  • James McGovern

Reviewers:

  • Aaron Weaver
  • Jojo Maalouf (standby)
  • Henri Salo
  • Mushtaq Ahmed

Here is a suggested way of getting this done:

1- Each author will be assigned two reviewers: the author sends their work to
reviewer 1, and reviewer 1 sends it on to reviewer 2 when done.
2- Each author will be assigned between 1 and 3 criteria each week.
3- The author fleshes out the assigned criteria and sends them to reviewer 1.
4- Each reviewer will have two sets of criteria to review per week.
(This assumes that reviewing takes less time than writing, but that
assumption needs to be tested.)

For example:

Author 1 will have Reviewer 1 and Reviewer 2 review his work, in that order.
Author 2 will have Reviewer 2 and Reviewer 1 review his work, in that order.

Authors will get their assigned criteria shortly. Please don't hesitate to
send any comments or suggestions.

Regards,
Sherif
