Christian Heinrich christian.heinrich at cmlh.id.au
Fri Oct 9 21:10:10 EDT 2015


On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org> wrote:
> Hi Christian, thanks for your question. One of the things I'd like to
> get away from in a future version of WAFEC, not sure if it will make the
> next release, is the yes/no responses. Ideally I'd like to identify a
> scoring mechanism, say on a scale of 1-10 how a product meets a specific
> criterion. So 2 WAF vendors may both have signatures for XSS, but one may be
> much better at detecting evasion attempts, maybe one doesn't normalize, etc.
> or 2 vendors may have mechanisms for mitigating that vary in effectiveness.
> Otherwise with a binary approach for core capabilities, you are correct
> there will likely be very little deviation from many WAF vendors until we
> start hitting the extrinsic criteria. There's likely a ton of research and
> testing as well as some tool development efforts to support more granular
> evaluations such as this, which is why I think the next release may be too
> soon for this.

I disagree: binary yes/no answers are absolute and objective, while
assigning a score of 1-10 is subjective; see OWASP's distrust of its own
benchmark, i.e. http://lists.owasp.org/pipermail/owasp-leaders/2015-September/015120.html

Using your example, three yes/no questions would be addressed in order
from specific to general:
1. XSS normalize
2. XSS evasion
3. XSS signature
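As a minimal sketch of what such ordered binary criteria could look like
in practice (the criterion names mirror the XSS example above; the data
structure and function names are illustrative assumptions, not part of
any WAFEC release):

```python
# Hypothetical sketch: recording ordered yes/no WAF criteria.
# Criterion names follow the XSS example above; everything else
# is an illustrative assumption.

XSS_CRITERIA = [          # ordered from specific to general
    "XSS normalize",
    "XSS evasion",
    "XSS signature",
]

def evaluate(responses):
    """Count the criteria a vendor answered yes to.

    responses maps criterion name -> bool; a missing answer counts as no.
    """
    return sum(1 for c in XSS_CRITERIA if responses.get(c, False))

vendor_a = {"XSS signature": True, "XSS evasion": True, "XSS normalize": False}
vendor_b = {"XSS signature": True}

print(evaluate(vendor_a))  # 2
print(evaluate(vendor_b))  # 1
```

The point of the ordering is that each answer stays an objective
yes/no, while the specific-to-general sequence still surfaces the
difference between two vendors who both tick the general "signature"
box.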

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org> wrote:
> The additional qualifiers you mention could be covered by that scaled
> approach. Some of these qualifiers will also be called out in a separate set
> of extrinsic criteria. For instance, ease of use might be a category on its
> own that includes criteria for specific items related to ease of use such as
> GUI functionality as well as criteria that allows the evaluator to judge how
> intuitive the interface is, tuning violations and policies, how difficult it
> is to generate or customize reports, etc. Additional criteria like architectural
> change categories to clearly spell out where the solution requires things
> like additional ports to be opened, changing IP schemes, OS level changes
> like agent installation or authentication changes, etc. Other extrinsic
> criteria such as user communities, vendor training and documentation,
> product certification, sales process and more could be evaluated in the same
> fashion, but I don't intend to go beyond technical criteria, feature sets
> and usage for the purposes of WAFEC documentation. I'm not sure I'm prepared
> to provide guidance in WAFEC documentation, for example, identifying how
> vendors should be working with customers. Some of these extrinsic categories
> may never find their way into the core WAFEC document, but might still be
> included in a response matrix.

I disagree; these should be excluded since this would reinvent the
wheel already established by Gartner, for instance.

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org> wrote:
> The biggest problem with those kinds of evaluations is they tend to be very
> subjective and don't align well to a mature and repeatable process with
> multiple end-users comparing results. When I've created similar matrixes in
> the past, (namely for vulnerability management and SIEM products) I've
> always done something similar here and admittedly done a poor job of clearly
> identifying what is the difference between an 8 and a 6 as I went with a gut
> feel and have typically been the only user (except when a certain VM vendor
> got a copy of my matrix I used for a bake-off and thought it would be a
> great sales tool). That's a maturity consideration that will be planned for
> before we include that capability in a future Response Matrix.

I established the relationships with ICSA and Gartner, and these
evaluations should be driven by them [and NSS], not WAFEC, as this is
detrimental to our independence.

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org> wrote:
> One thing I've done in the past that I intend to bring over, however, is the
> concept of weighting. For instance I've typically used a weight of 1-5 so
> that I could assign different weights to criteria. For instance, maybe you
> don't care as much about how impactful the deployment will be so you assign
> those criteria a weight of 1, while the ability to have robust support for
> policy tuning might be a 5 and associated scoring would have a much greater
> impact on overall score.  This way, once a WAF has been evaluated, even if
> the requirements change the scores can be easily recalculated based on the
> new set of weighted requirements.

Based on my experience with CVSSv2 (and addressed in CVSSv3),
weighting skews results in favour of the vendor rather than the
consumer, so I disagree with this too.
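For reference, the recalculation mechanism quoted above could be
sketched roughly as follows; per-criterion scores are fixed once a WAF
is evaluated, and only the weights change. All names and numbers here
are illustrative assumptions, not WAFEC criteria:

```python
# Hypothetical sketch of the weighting idea quoted above: criterion
# scores (1-10) stay fixed after evaluation, weights (1-5) can be
# changed and the total recalculated. Names are illustrative only.

def weighted_total(scores, weights):
    """Sum of each criterion's score multiplied by its weight.

    A criterion with no explicit weight defaults to weight 1.
    """
    return sum(scores[c] * weights.get(c, 1) for c in scores)

scores = {"deployment impact": 7, "policy tuning": 9}

# Initial requirements: deployment impact barely matters.
w1 = {"deployment impact": 1, "policy tuning": 5}
# Requirements change: deployment impact now weighted higher.
w2 = {"deployment impact": 3, "policy tuning": 5}

print(weighted_total(scores, w1))  # 7*1 + 9*5 = 52
print(weighted_total(scores, w2))  # 7*3 + 9*5 = 66
```

This also illustrates my objection: the same fixed scores can be
pushed toward a different winner purely by whoever chooses the
weights.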

Christian Heinrich

