wasc-satec@lists.webappsec.org

WASC Static Analysis Tool Evaluation Criteria


Do we need two lists?

SK
Sherif Koussa
Thu, Sep 29, 2011 6:29 PM

Hi All,

The inspiration behind this idea came from the other thread on the direction
and comments initiated by Romain and followed up by Ale, Benoit and Herman.
However, this email is NOT intended to discuss that. This email is to get
your opinion on whether we need two lists instead of one.

One List For Vendors to Fill:

This would include all the factual criteria, such as: which languages do you
support? Which operating systems? 32-bit or 64-bit? Nice and simple, with no
"subjective" criteria.

One List For the Evaluation Team to Fill:

This would include what we think is still important but is on the subjective
side of things. For example: what skills are necessary to run the tool? The
vendor might say "none or minimal," while the evaluator, after actually trying
the tool, might have a different opinion. The same goes for things like the
number of false positives: while it is very dependent on the environment, the
language, the application being scanned, and probably a dozen other factors,
an evaluator who tries the tools would pretty much be able to compare apples
to apples, since they would probably be trying the different tools in the
same environments on the same applications.

So the bottom line is: there are criteria that are facts with no grey areas,
and these are up to the vendors to fill in; and there are criteria that are
either subjective or not important to everyone, and these are up to the
evaluators to fill in.

I think using two lists instead of one would provide more value to the
evaluators of SCA tools, streamline the process, and provide the best of both
worlds.

Do you guys think this makes sense?

Regards,
Sherif

BG
Benoit Guerette (OWASP)
Fri, Sep 30, 2011 12:20 PM

I agree with the idea, but the more I think about it, the harder it seems to
reach a consensus for a vote.

Criteria such as quality of the UI, currently in the list, have a place in
the decision.

But subjective criteria are subjective to each business. For example, pricing
is major for a small business, while "cross-selling opportunities" may be
more important for others.

There are about 5-6 criteria that are subjective in the current list we made.

So honestly, I would like to know what other members think about this.

On 9/29/11, Sherif Koussa <sherif.koussa@gmail.com> wrote:

Hi All,

The inspiration behind this idea came from the other thread on the direction
and comments initiated by Romain and followed up by Ale, Benoit and Herman.
However, this email is NOT intended to discuss that. This email is to get
your opinion on whether we need two lists instead of one.

One List For Vendors to Fill:

This would include all the factual criteria, such as: which languages do you
support? Which operating systems? 32-bit or 64-bit? Nice and simple, with no
"subjective" criteria.

One List For the Evaluation Team to Fill:

This would include what we think is still important but is on the subjective
side of things. For example: what skills are necessary to run the tool? The
vendor might say "none or minimal," while the evaluator, after actually trying
the tool, might have a different opinion. The same goes for things like the
number of false positives: while it is very dependent on the environment, the
language, the application being scanned, and probably a dozen other factors,
an evaluator who tries the tools would pretty much be able to compare apples
to apples, since they would probably be trying the different tools in the
same environments on the same applications.

So the bottom line is: there are criteria that are facts with no grey areas,
and these are up to the vendors to fill in; and there are criteria that are
either subjective or not important to everyone, and these are up to the
evaluators to fill in.

I think using two lists instead of one would provide more value to the
evaluators of SCA tools, streamline the process, and provide the best of both
worlds.

Do you guys think this makes sense?

Regards,
Sherif

--
Sent from my mobile device

SK
Sherif Koussa
Fri, Sep 30, 2011 7:32 PM

I just want to add that the goal of the two sets of criteria is NOT to keep
the subjective/hard-to-measure criteria for their own sake. The goal is simply:
"Hey evaluator, we think that some criteria are important; they might just
be difficult for us or the vendor to quantify, so we are leaving them up to
you (the evaluator) to decide. We just wanted to let you know that they
might be of interest to you."

Regards,
Sherif

On Fri, Sep 30, 2011 at 8:20 AM, Benoit Guerette (OWASP) <gueb@owasp.org> wrote:

I agree with the idea, but the more I think about it, the harder it seems to
reach a consensus for a vote.

Criteria such as quality of the UI, currently in the list, have a place in
the decision.

But subjective criteria are subjective to each business. For example, pricing
is major for a small business, while "cross-selling opportunities" may be
more important for others.

There are about 5-6 criteria that are subjective in the current list we made.

So honestly, I would like to know what other members think about this.

On 9/29/11, Sherif Koussa <sherif.koussa@gmail.com> wrote:

Hi All,

The inspiration behind this idea came from the other thread on the direction
and comments initiated by Romain and followed up by Ale, Benoit and Herman.
However, this email is NOT intended to discuss that. This email is to get
your opinion on whether we need two lists instead of one.

One List For Vendors to Fill:

This would include all the factual criteria, such as: which languages do you
support? Which operating systems? 32-bit or 64-bit? Nice and simple, with no
"subjective" criteria.

One List For the Evaluation Team to Fill:

This would include what we think is still important but is on the subjective
side of things. For example: what skills are necessary to run the tool? The
vendor might say "none or minimal," while the evaluator, after actually trying
the tool, might have a different opinion. The same goes for things like the
number of false positives: while it is very dependent on the environment, the
language, the application being scanned, and probably a dozen other factors,
an evaluator who tries the tools would pretty much be able to compare apples
to apples, since they would probably be trying the different tools in the
same environments on the same applications.

So the bottom line is: there are criteria that are facts with no grey areas,
and these are up to the vendors to fill in; and there are criteria that are
either subjective or not important to everyone, and these are up to the
evaluators to fill in.

I think using two lists instead of one would provide more value to the
evaluators of SCA tools, streamline the process, and provide the best of both
worlds.

Do you guys think this makes sense?

Regards,
Sherif

--
Sent from my mobile device

BG
Benoit Guerette (OWASP)
Thu, Oct 6, 2011 1:26 AM

Here is part of my experience with one list versus two.

In our RFI, we included most of the current WASC-SATEC criteria, without the
'quality' ones. The RFI answers count for x% of the decision.

After the vendors filled in the RFI, we asked for a live demo and gave the
attendees an evaluation sheet. This sheet includes all the quality,
simplicity, feeling, etc. criteria. It counts for x% of the decision.

Pricing and other criteria are covered for x% of the decision, but they are
not related to IT, so I guess they are not valuable for this project.

So on our side, we have 2 lists.

SK
Sherif Koussa
Sat, Oct 15, 2011 5:42 PM

All,

For the sake of keeping this as simple as possible, we will stick to one
list for now. In addition, I will be proposing a new version of the
categories based on all the comments collected within the last few months.
Stay tuned.

Regards,
Sherif

On Thu, Sep 29, 2011 at 2:29 PM, Sherif Koussa <sherif.koussa@gmail.com> wrote:

Hi All,

The inspiration behind this idea came from the other thread on the
direction and comments initiated by Romain and followed up by Ale, Benoit
and Herman. However, this email is NOT intended to discuss that. This email
is to get your opinion on whether we need two lists instead of one.

One List For Vendors to Fill:

This would include all the factual criteria, such as: which languages do you
support? Which operating systems? 32-bit or 64-bit? Nice and simple, with no
"subjective" criteria.

One List For the Evaluation Team to Fill:

This would include what we think is still important but is on the subjective
side of things. For example: what skills are necessary to run the tool? The
vendor might say "none or minimal," while the evaluator, after actually trying
the tool, might have a different opinion. The same goes for things like the
number of false positives: while it is very dependent on the environment, the
language, the application being scanned, and probably a dozen other factors,
an evaluator who tries the tools would pretty much be able to compare apples
to apples, since they would probably be trying the different tools in the
same environments on the same applications.

So the bottom line is: there are criteria that are facts with no grey areas,
and these are up to the vendors to fill in; and there are criteria that are
either subjective or not important to everyone, and these are up to the
evaluators to fill in.

I think using two lists instead of one would provide more value to the
evaluators of SCA tools, streamline the process, and provide the best of both
worlds.

Do you guys think this makes sense?

Regards,
Sherif
