[WASC-WAFEC] AWS WAF

Tony Turner tony at sentinel24.com
Fri Oct 9 21:57:45 EDT 2015


My responses are inline below, Christian; I don't necessarily agree with
your assessment.

On Fri, Oct 9, 2015 at 9:10 PM, Christian Heinrich <christian.heinrich at cmlh.id.au> wrote:

> Tony,
>
> On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org>
> wrote:
> > Hi Christian, thanks for your question. One of the things I'd like to
> > get away from in a future version of WAFEC (I'm not sure it will make
> > the next release) is the yes/no responses. Ideally I'd like to identify
> > a scoring mechanism, say a scale of 1-10 for how well a product meets a
> > specific criterion. Two WAF vendors may both have signatures for XSS,
> > but one may be much better at detecting evasion attempts, one may not
> > normalize, etc., or two vendors may have mitigation mechanisms that
> > vary in effectiveness. Otherwise, with a binary approach to core
> > capabilities, you are correct that there will likely be very little
> > deviation among WAF vendors until we start hitting the extrinsic
> > criteria. There's likely a ton of research and testing, as well as some
> > tool development effort, needed to support more granular evaluations
> > such as this, which is why I think the next release may be too soon.
>
> I disagree; binary or yes/no is absolute and objective, while assigning
> a score of 1-10 is subjective, as shown by the OWASP distrust of its
> own benchmark, i.e.
> http://lists.owasp.org/pipermail/owasp-leaders/2015-September/015120.html
>
>
This distrust has to do with a lack of transparency and the use of a
benchmark by a commercial entity before it was a mature framework. Don't
confuse the issues with your bias against OWASP by making inflammatory
statements out of context. I'm very aware of those issues and do not
intend to repeat those mistakes.

Secondly, yes/no answers are very easy to game unless we get very, very
specific with the questions, far more specific than the previous version
of the Response Matrix. For example, "Does the WAF support signature-based
detection?" is a terrible binary question. There are varying degrees of
how comprehensive the default signatures are, how easy it is to create
new signatures or modify existing ones, differences in regex quality that
can impact performance, denial-of-service conditions caused by overly
greedy regexes, etc.
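
To illustrate that last point, here is a minimal sketch in Python (the
signature pattern and payload are invented for illustration, not taken
from any vendor) of how an overly greedy regex becomes a
denial-of-service condition:

    import re
    import time

    # Hypothetical signature with a nested quantifier: ([a-z]+)+ backtracks
    # exponentially on inputs that almost match, the classic regex
    # denial-of-service (ReDoS) pattern.
    greedy_signature = re.compile(r"([a-z]+)+;")

    payload = "a" * 24 + "!"  # almost matches; the required ';' never appears

    start = time.monotonic()
    result = greedy_signature.match(payload)
    print(f"matched={bool(result)}, took {time.monotonic() - start:.2f}s")
    # Each additional 'a' roughly doubles the runtime, so a crafted request
    # can pin a WAF's CPU without ever triggering the signature.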

I do not intend to provide a graduated scoring mechanism unless those
scores can be clearly defined. In some cases it may make sense to use a
binary value, but in others there are varying degrees that should be
measured. I'm willing to discuss an expansion of those criteria in the
Response Matrix, but I also want to be mindful that there may be
circumstances we have not accounted for, and I do understand your
reluctance to move to a model that has a chance of becoming more
subjective. The only way a scale works is if there are very clear
definitions of what the variances in score mean.
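
For example, something like this minimal sketch (Python; the criterion
levels are invented for illustration and are not proposed WAFEC language)
is what I mean by clearly defined:

    # Hypothetical rubric for a single criterion. Every score is anchored
    # to an observable, testable behaviour rather than an evaluator's gut
    # feel, so two evaluators should arrive at the same number.
    XSS_DETECTION_RUBRIC = {
        0: "No XSS detection capability",
        1: "Detects only verbatim, well-known payloads",
        2: "Detects common payloads after basic normalization (case, URL encoding)",
        3: "Detects payloads across multiple encodings and parameter locations",
        4: "Detects common evasion techniques (mixed encodings, fragmentation)",
        5: "All of the above, verified against a documented public test corpus",
    }

    def record_score(rubric: dict[int, str], level: int) -> str:
        # A score is only valid if it maps to a defined rubric level.
        if level not in rubric:
            raise ValueError(f"no defined meaning for score {level}")
        return rubric[level]

    print(record_score(XSS_DETECTION_RUBRIC, 3))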



> Using your example, there would be three yes/no questions, addressed
> in order from specific to general:
> 1. XSS normalize
> 2. XSS evasion
> 3. XSS signature
>
> On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org>
> wrote:
> > The additional qualifiers you mention could be covered by that scaled
> > approach. Some of these qualifiers will also be called out in a
> > separate set of extrinsic criteria. For instance, ease of use might be
> > a category of its own that includes criteria for specific items
> > related to ease of use, such as GUI functionality, as well as criteria
> > that allow the evaluator to judge how intuitive the interface is, how
> > easy it is to tune violations and policies, how difficult it is to
> > generate or customize reports, etc. Additional criteria, like
> > architectural change categories, would clearly spell out where the
> > solution requires things like additional ports to be opened, changes
> > to IP schemes, or OS-level changes such as agent installation or
> > authentication changes. Other extrinsic criteria such as user
> > communities, vendor training and documentation, product certification,
> > sales process and more could be evaluated in the same fashion, but I
> > don't intend to go beyond technical criteria, feature sets and usage
> > for the purposes of WAFEC documentation. I'm not sure I'm prepared to
> > provide guidance in WAFEC documentation on, for example, how vendors
> > should be working with customers. Some of these extrinsic categories
> > may never find their way into the core WAFEC document, but might still
> > be included in a response matrix.
>
> I disagree; these should be excluded since this is reinventing the
> wheel already established by Gartner and, for instance,
> https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/#//apple_ref/doc/uid/TP40006556-CH66-SW1
>
>
I'm willing to debate this issue, and we arguably have to draw the line
somewhere with regard to which extrinsic features should be included.
That being said, it's still valuable for consumers to include a mechanism
whereby they can draw their own conclusions here. Don't get too caught up
on a single extrinsic criterion until we have identified a comprehensive
list of criteria for inclusion. There will be plenty of debate once that
happens, I assure you.


> On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org>
> wrote:
> > The biggest problem with those kinds of evaluations is that they tend
> > to be very subjective and don't align well with a mature, repeatable
> > process in which multiple end users compare results. When I've created
> > similar matrices in the past (namely for vulnerability management and
> > SIEM products), I've always done something similar here and admittedly
> > done a poor job of clearly defining the difference between an 8 and a
> > 6, as I went with gut feel and have typically been the only user
> > (except when a certain VM vendor got a copy of a matrix I used for a
> > bake-off and thought it would be a great sales tool). That's a
> > maturity consideration that will be planned for before we include that
> > capability in a future Response Matrix.
>
> I established the relationships with ICSA, i.e.
> http://lists.webappsec.org/pipermail/wasc-wafec_lists.webappsec.org/2014-June/000286.html
> and Gartner, i.e.
> http://lists.webappsec.org/pipermail/wasc-wafec_lists.webappsec.org/2014-July/000293.html
> and evaluations should be driven by them [and NSS], not WAFEC, as doing
> them ourselves is detrimental to our independence.
>
>
I'm not quite sure what issue you are having here; please elaborate. I do
not intend for WAFEC to perform evaluations. It's simply a framework and
set of tools for vendors, independent testing entities and consumers to
conduct their own evaluations. It should be designed for consistency and
flexibility, mapping as closely as possible to the unique scenario it is
being used to evaluate.


> On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner at owasp.org>
> wrote:
> > One thing I've done in the past that I intend to bring over, however,
> > is the concept of weighting. For instance, I've typically used a
> > weight of 1-5 so that I could assign different weights to criteria.
> > Maybe you don't care as much about how impactful the deployment will
> > be, so you assign those criteria a weight of 1, while the ability to
> > have robust support for policy tuning might be a 5, and its associated
> > scoring would have a much greater impact on the overall score. This
> > way, once a WAF has been evaluated, even if the requirements change,
> > the scores can easily be recalculated based on the new set of weighted
> > requirements.
>
> Based on my experience with CVSSv2 (and addressed in CVSSv3),
> weighting skews the result in favour of the vendor and not the
> consumer, so I disagree with this too.
>
>
Lack of weighting makes all criteria equally important. For the purposes
of a generic evaluation that's fine, but it does not map to how consumers
need to use evaluation tools. Not all implementations are the same, not
all requirements are the same, and not all consumers will weigh criteria
exactly the same way. I'm sorry, but I strongly disagree with you here.
I'm not talking about setting the weights ourselves; I'm talking about
providing the flexibility for the user of the tool to set their own
weights. Please provide concrete examples of how you find this to be
problematic.
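
To be concrete about what I mean, here is a minimal sketch in Python
(criteria names, scores and weights are all invented for illustration):

    def weighted_score(scores: dict[str, int], weights: dict[str, int]) -> float:
        # Weighted average of 0-10 criterion scores using 1-5 user weights.
        total_weight = sum(weights[c] for c in scores)
        return sum(scores[c] * weights[c] for c in scores) / total_weight

    # One WAF, evaluated once.
    waf_scores = {"deployment_impact": 8, "policy_tuning": 4, "reporting": 6}

    # Consumer A barely cares about deployment impact; tuning is critical.
    consumer_a = {"deployment_impact": 1, "policy_tuning": 5, "reporting": 3}
    # Consumer B is the opposite: a disruptive deployment is a deal-breaker.
    consumer_b = {"deployment_impact": 5, "policy_tuning": 2, "reporting": 3}

    print(f"A: {weighted_score(waf_scores, consumer_a):.1f}")  # 5.1
    print(f"B: {weighted_score(waf_scores, consumer_b):.1f}")  # 6.6

Same vendor scores, two different consumers, two different outcomes, and
no re-testing was needed. That is the flexibility I'm after.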



>
> --
> Regards,
> Christian Heinrich
>
> http://cmlh.id.au/contact
>