wasc-wafec@lists.webappsec.org

WASC Web Application Firewall Evaluation Criteria Project Mailing List


Re: [WASC-WAFEC] AWS WAF

Christian Strache
Fri, Oct 9, 2015 7:15 AM

Dear Tony,

your explanations sound reasonable.
But once you have created your requirements catalogue and try to compare
different products, many 'big players' (i.e. Gartner-listed WAFs) offer
the same set of features.
So using WAFEC to find a suitable product for a customer with few
requirements, for example, often ends in a draw.
I guess if you try to find out which features AWS WAF offers
(blacklisting, whitelisting, signatures, regular updates, and so on),
AWS WAF will fulfill those criteria. The quality and the customization
options, however, may differ from the ones you get with other 'on
premise' products.

Do you think it may be possible to add a criterion to WAFEC to
distinguish between:

  • yes, the product offers feature XYZ
    and
  • yes, the product offers feature XYZ via a GUI/requires manual adaptation
    or
  • yes, the product offers feature XYZ but only allows restricted adaptation
    requiring operating-system-level changes.

Kind regards,
Christian

From: wasc-wafec [mailto:wasc-wafec-bounces@lists.webappsec.org] On
Behalf Of Tony Turner
Sent: Thursday, October 8, 2015, 21:20
To: Anton Chuvakin <anton@chuvakin.org>
Cc: wasc-wafec@lists.webappsec.org
Subject: Re: [WASC-WAFEC] AWS WAF

Anton, A WAF is just a tool, it's all in how it's used. How can the tool
be extended through additional extrinsic features? This is where the top
players in the space (no I won't name names, we are all friends here)
really start to shine and where they differentiate themselves from the
competition. Ease of use, centralized management, threat feeds, load
balancing, etc. Does that mean if you don't have those things you can't
use the tools effectively? Does it make it an inferior product? What if
I don't need an ADC-based WAF because I'm happy with my existing load
balancer? Does that mean my WAF can't still perform its core functions
to detect and mitigate web app attacks?

I do understand the direction you are going in here, and yes this will
likely be misused by compliance-minded entities. I think it's important
to remember that any two WAF consumers may have vastly different
requirements. The objective here is to define the core of what a WAF is
and should be capable of, but what exactly are we talking about here?
Many (but not all) of these extrinsic criteria are dependent upon
deployment architectures. We aren't always comparing reverse proxy to
reverse proxy here, and in fact the customer may not even want to
architect the solution that way. This makes WAF a more complex product
to develop criteria for because implementations are frequently not very
cookie-cutter. Then when you start talking about framework filters and
RASP, the conversation gets even more confusing for a lot of people.

I think when you take the typical Gartner-centric view of a solution
space you are looking at a product category very generically. How can we
evaluate vendors in a way that fits as many scenarios as possible with
the greatest amount of options for consumers? That's great, and has
substantial value. But that doesn't always map to real world requirements.

Instead I think it makes sense to create a core framework for WAF
criteria and provide a modular approach we can leverage when we develop
the WAFEC Response Matrix that identifies those extrinsic criteria only
when they are mapped to actual customer requirements. The criteria will
be broken up into categories such that if you need SSL offload, or a
cloud deployment we will have a set of criteria that map to that
requirement. If you don't, why would you care about those capabilities?

Lastly, there are really two audiences for WAFEC. First are the vendors
that make WAFs and organizations like Gartner seeking to evaluate the
product space. Second are the consumers, who likely have very specific
needs. They may not care about cloud deployments or DLP capabilities or
how friendly the support staff are that they never need to call or
<insert extrinsic capability>. I honestly think with this approach you
can be as inclusive or exclusive as you want to when utilizing the
framework for evaluation. So if you are a vendor or Gartner, or you have
not identified all your requirements and you want to see who has the
shiniest toys, you just check all the boxes. If you have more selective
requirements, you can just build your menu and evaluate based on those
criteria.

-Tony

On Thu, Oct 8, 2015 at 2:37 PM, Anton Chuvakin
<anton@chuvakin.org> wrote:

So, assuming this WAF would be "a basic WAF", do you guys think that a
basic WAF is actually useful for anything apart from compliance? When
was the last time somebody used a generic SQLi (like so
https://xkcd.com/327/) to attack an app?
#just_wondering

On Wed, Oct 7, 2015 at 9:23 PM, Christian Heinrich
<christian.heinrich@cmlh.id.au> wrote:

Tony,

http://www.slideshare.net/AmazonWebServices/sec323-new-securing-web-applications-with-aws-waf
are the slides of the WAF product from Amazon Web Services.

I would assume that the AWS partner/vendor integration is via
http://docs.aws.amazon.com/waf/latest/APIReference/Welcome.html
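
For instance, something like the following boto3 sketch (untested, and
it is my assumption that the SDK's "waf" client tracks that API):

    # Untested sketch: assumes boto3 exposes the AWS WAF API as a "waf" client.
    import boto3

    waf = boto3.client("waf")

    # Simplest read-only call: enumerate the existing WebACLs.
    for acl in waf.list_web_acls(Limit=10).get("WebACLs", []):
        print(acl["WebACLId"], acl["Name"])

    # Every write call (creating rules, conditions, ACLs) requires a change
    # token, which is presumably what partner tooling would automate.
    token = waf.get_change_token()["ChangeToken"]
    print("change token:", token)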

--
Regards,
Christian Heinrich

http://cmlh.id.au/contact



--

Dr. Anton Chuvakin
Site: http://www.chuvakin.org
Twitter: @anton_chuvakin (https://twitter.com/anton_chuvakin)
Work: http://www.linkedin.com/in/chuvakin

--

Tony Turner
OWASP Orlando Chapter Founder/Co-Leader

WAFEC Project Leader

STING Game Project Leader
tony.turner@owasp.org

https://www.owasp.org/index.php/Orlando

Tony Turner
Fri, Oct 9, 2015 12:20 PM

Hi Christian, thanks for your question. One of the things I'd like to
get away from in a future version of WAFEC (not sure if it will make the
next release) is the yes/no responses. Ideally I'd like to identify a
scoring mechanism, say rating on a scale of 1-10 how well a product
meets a specific criterion. So two WAF vendors may both have signatures
for XSS, but one may be much better at detecting evasion attempts, maybe
one doesn't normalize, etc., or two vendors may have mitigation
mechanisms that vary in effectiveness. Otherwise, with a binary approach
for core capabilities, you are correct: there will likely be very little
deviation among WAF vendors until we start hitting the extrinsic
criteria. There's likely a ton of research and testing, as well as some
tool development effort, needed to support more granular evaluations
like this, which is why I think the next release may be too soon.

The additional qualifiers you mention could be covered by that scaled
approach. Some of these qualifiers will also be called out in a separate
set of extrinsic criteria. For instance, ease of use might be a category
of its own that includes criteria for specific items (such as GUI
functionality) as well as criteria that allow the evaluator to judge how
intuitive the interface is, how violations and policies are tuned, and
how difficult it is to generate or customize reports. There could also
be additional criteria, like architectural-change categories, to clearly
spell out where the solution requires things such as additional ports to
be opened, changes to IP schemes, OS-level changes like agent
installation, or authentication changes. Other extrinsic criteria such
as user communities, vendor training and documentation, product
certification, sales process and more could be evaluated in the same
fashion, but I don't intend to go beyond technical criteria, feature
sets and usage for the purposes of the WAFEC documentation. I'm not sure
I'm prepared to provide guidance in the WAFEC documentation on, for
example, how vendors should be working with customers. Some of these
extrinsic categories may never find their way into the core WAFEC
document, but might still be included in a response matrix.

The biggest problem with those kinds of evaluations is that they tend to
be very subjective and don't align well with a mature, repeatable
process in which multiple end users compare results. When I've created
similar matrices in the past (namely for vulnerability management and
SIEM products), I've always done something like this and admittedly done
a poor job of clearly identifying the difference between an 8 and a 6,
as I went with gut feel and have typically been the only user (except
when a certain VM vendor got a copy of a matrix I used for a bake-off
and thought it would be a great sales tool). That's a maturity
consideration that will be planned for before we include that capability
in a future Response Matrix.

One thing I've done in the past that I do intend to bring over, however,
is the concept of weighting. I've typically used a weight of 1-5 so that
I could assign different weights to criteria. For instance, maybe you
don't care much about how impactful the deployment will be, so you
assign those criteria a weight of 1, while robust support for policy
tuning might be a 5, and its scoring would have a much greater impact on
the overall score. This way, once a WAF has been evaluated, even if the
requirements change, the scores can easily be recalculated based on the
new set of weighted requirements.
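
A rough sketch of what I mean (the criteria names and numbers below are
made-up placeholders, not real evaluation data):

    # Sketch of the weighting idea; criteria and values are placeholders.

    def weighted_score(scores, weights, scale_max=10):
        """Combine per-criterion scores (1-10) using per-criterion weights (1-5).

        Returns a percentage of the maximum possible weighted score, so the
        results stay comparable when the weights change.
        """
        total = sum(score * weights.get(name, 1) for name, score in scores.items())
        best = sum(scale_max * weights.get(name, 1) for name in scores)
        return 100.0 * total / best

    # One vendor's evaluation, scored once...
    scores = {"policy_tuning": 9, "deployment_impact": 4, "reporting": 7}

    # ...then recalculated under two different requirement profiles.
    tuning_heavy = {"policy_tuning": 5, "deployment_impact": 1, "reporting": 3}
    impact_heavy = {"policy_tuning": 2, "deployment_impact": 5, "reporting": 1}

    print(weighted_score(scores, tuning_heavy))  # ~77.8
    print(weighted_score(scores, impact_heavy))  # ~56.2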

On Fri, Oct 9, 2015 at 3:15 AM, Christian Strache <cs@strache-it.de> wrote:

Dear Tony,

your explanations sound reasonable.
But once you have created your requirements catalogue and try to compare
different products, many 'big players' (i.e. Gartner-listed WAFs) offer
the same set of features.
So using WAFEC to find a suitable product for a customer with few
requirements, for example, often ends in a draw.
I guess if you try to find out which features AWS WAF offers
(blacklisting, whitelisting, signatures, regular updates, and so on),
AWS WAF will fulfill those criteria. The quality and the customization
options, however, may differ from the ones you get with other 'on
premise' products.

Do you think it may be possible to add a criterion to WAFEC to
distinguish between:

  • yes, the product offers feature XYZ
    and
  • yes, the product offers feature XYZ via a GUI/requires manual adaptation
    or
  • yes, the product offers feature XYZ but only allows restricted adaptation
    requiring operating-system-level changes.

Kind regards,
Christian

From: wasc-wafec [mailto:wasc-wafec-bounces@lists.webappsec.org] On
Behalf Of Tony Turner
Sent: Thursday, October 8, 2015, 21:20
To: Anton Chuvakin <anton@chuvakin.org>
Cc: wasc-wafec@lists.webappsec.org
Subject: Re: [WASC-WAFEC] AWS WAF

Anton, A WAF is just a tool, it's all in how it's used. How can the tool
be extended through additional extrinsic features? This is where the top
players in the space (no I won't name names, we are all friends here)
really start to shine and where they differentiate themselves from the
competition. Ease of use, centralized management, threat feeds, load
balancing, etc. Does that mean if you don't have those things you can't use
the tools effectively? Does it make it an inferior product? What if I don't
need an ADC-based WAF because I'm happy with my existing load balancer?
Does that mean my WAF can't still perform its core functions to detect and
mitigate web app attacks?

I do understand the direction you are going in here, and yes this will
likely be misused by compliance-minded entities. I think it's important to
remember that any two WAF consumers may have vastly different requirements.
The objective here is to define the core of what a WAF is and should be
capable of, but what exactly are we talking about here? Many (but not all)
of these extrinsic criteria are dependent upon deployment architectures. We
aren't always comparing reverse proxy to reverse proxy here, and in fact
the customer may not even want to architect the solution that way. This
makes WAF a more complex product to develop criteria for because
implementations are frequently not very cookie-cutter. Then when you start
talking about framework filters and RASP, the conversation gets even more
confusing for a lot of people.

I think when you take the typical Gartner-centric view of a solution space
you are looking at a product category very generically. How can we evaluate
vendors in a way that fits as many scenarios as possible with the greatest
amount of options for consumers? That's great, and has substantial value.
But that doesn't always map to real world requirements.

Instead I think it makes sense to create a core framework for WAF criteria
and provide a modular approach we can leverage when we develop the WAFEC
Response Matrix that identifies those extrinsic criteria only when they are
mapped to actual customer requirements. The criteria will be broken up into
categories such that if you need SSL offload, or a cloud deployment we will
have a set of criteria that map to that requirement. If you don't, why
would you care about those capabilities?

Lastly, there are really two audiences for WAFEC. First are the vendors
that make WAFs and organizations like Gartner seeking to evaluate the
product space. Second are the consumers, who likely have very specific needs.
They may not care about cloud deployments or DLP capabilities or how
friendly the support staff are that they never need to call or <insert extrinsic capability>. I honestly think with this approach you can be as
inclusive or exclusive as you want to when utilizing the framework for
evaluation. So if you are a vendor or Gartner, or you have not identified
all your requirements and you want to see who has the shiniest toys, you
just check all the boxes. If you have more selective requirements, you can
just build your menu and evaluate based on those criteria.

-Tony

On Thu, Oct 8, 2015 at 2:37 PM, Anton Chuvakin
<anton@chuvakin.org> wrote:

So, assuming this WAF would be "a basic WAF", do you guys think that a
basic WAF is actually useful for anything apart from compliance? When was
the last time somebody used a generic SQLi (like so
https://xkcd.com/327/) to attack an app? #just_wondering

On Wed, Oct 7, 2015 at 9:23 PM, Christian Heinrich
<christian.heinrich@cmlh.id.au> wrote:

Tony,

http://www.slideshare.net/AmazonWebServices/sec323-new-securing-web-applications-with-aws-waf
are the slides of the WAF product from Amazon Web Services.

I would assume that the AWS partner/vendor integration is via
http://docs.aws.amazon.com/waf/latest/APIReference/Welcome.html

--
Regards,
Christian Heinrich

http://cmlh.id.au/contact



--

Dr. Anton Chuvakin
Site: http://www.chuvakin.org
Twitter: @anton_chuvakin (https://twitter.com/anton_chuvakin)
Work: http://www.linkedin.com/in/chuvakin

--

Tony Turner
OWASP Orlando Chapter Founder/Co-Leader

WAFEC Project Leader

STING Game Project Leader
tony.turner@owasp.org
https://www.owasp.org/index.php/Orlando

--
Tony Turner
OWASP Orlando Chapter Founder/Co-Leader
WAFEC Project Leader
STING Game Project Leader
tony.turner@owasp.org
https://www.owasp.org/index.php/Orlando

Christian Heinrich
Sat, Oct 10, 2015 1:10 AM

Tony,

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org> wrote:

Hi Christian, thanks for your question. One of the things I'd like to
get away from in a future version of WAFEC (not sure if it will make the
next release) is the yes/no responses. Ideally I'd like to identify a
scoring mechanism, say rating on a scale of 1-10 how well a product
meets a specific criterion. So two WAF vendors may both have signatures
for XSS, but one may be much better at detecting evasion attempts, maybe
one doesn't normalize, etc., or two vendors may have mitigation
mechanisms that vary in effectiveness. Otherwise, with a binary approach
for core capabilities, you are correct: there will likely be very little
deviation among WAF vendors until we start hitting the extrinsic
criteria. There's likely a ton of research and testing, as well as some
tool development effort, needed to support more granular evaluations
like this, which is why I think the next release may be too soon.

I disagree: binary (yes/no) answers are absolute and objective, while
assigning a score of 1-10 is subjective, as shown by OWASP's distrust of
its own benchmark, i.e.
http://lists.owasp.org/pipermail/owasp-leaders/2015-September/015120.html

Using your example, there would be three yes/no questions, addressed in
order from specific to general (a sketch follows the list):

  1. XSS normalize
  2. XSS evasion
  3. XSS signature
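
A minimal sketch of that decomposition (the identifiers and wording of
the checks are placeholders only):

    # Sketch of the ordered yes/no decomposition; every response is strictly
    # binary, and unanswered checks default to "no".
    XSS_CHECKS = [
        ("xss_normalize", "Does the WAF normalize input before matching?"),
        ("xss_evasion", "Does the WAF detect common XSS evasion encodings?"),
        ("xss_signature", "Does the WAF ship XSS signatures at all?"),
    ]

    def evaluate(answers):
        return {check: bool(answers.get(check, False)) for check, _ in XSS_CHECKS}

    print(evaluate({"xss_signature": True, "xss_evasion": False}))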

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org> wrote:

The additional qualifiers you mention could be covered by that scaled
approach. Some of these qualifiers will also be called out in a separate
set of extrinsic criteria. For instance, ease of use might be a category
of its own that includes criteria for specific items (such as GUI
functionality) as well as criteria that allow the evaluator to judge how
intuitive the interface is, how violations and policies are tuned, and
how difficult it is to generate or customize reports. There could also
be additional criteria, like architectural-change categories, to clearly
spell out where the solution requires things such as additional ports to
be opened, changes to IP schemes, OS-level changes like agent
installation, or authentication changes. Other extrinsic criteria such
as user communities, vendor training and documentation, product
certification, sales process and more could be evaluated in the same
fashion, but I don't intend to go beyond technical criteria, feature
sets and usage for the purposes of the WAFEC documentation. I'm not sure
I'm prepared to provide guidance in the WAFEC documentation on, for
example, how vendors should be working with customers. Some of these
extrinsic categories may never find their way into the core WAFEC
document, but might still be included in a response matrix.

I disagree; these should be excluded, since this is reinventing a wheel
already established by Gartner and, for instance, by
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/#//apple_ref/doc/uid/TP40006556-CH66-SW1

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org> wrote:

The biggest problem with those kinds of evaluations is that they tend to
be very subjective and don't align well with a mature, repeatable
process in which multiple end users compare results. When I've created
similar matrices in the past (namely for vulnerability management and
SIEM products), I've always done something like this and admittedly done
a poor job of clearly identifying the difference between an 8 and a 6,
as I went with gut feel and have typically been the only user (except
when a certain VM vendor got a copy of a matrix I used for a bake-off
and thought it would be a great sales tool). That's a maturity
consideration that will be planned for before we include that capability
in a future Response Matrix.

I established the relationships with ICSA, i.e.
http://lists.webappsec.org/pipermail/wasc-wafec_lists.webappsec.org/2014-June/000286.html
and Gartner, i.e.
http://lists.webappsec.org/pipermail/wasc-wafec_lists.webappsec.org/2014-July/000293.html
and these evaluations should be driven by them [and NSS], not by WAFEC,
as this is detrimental to our independence.

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org> wrote:

One thing I've done in the past that I do intend to bring over, however,
is the concept of weighting. I've typically used a weight of 1-5 so that
I could assign different weights to criteria. For instance, maybe you
don't care much about how impactful the deployment will be, so you
assign those criteria a weight of 1, while robust support for policy
tuning might be a 5, and its scoring would have a much greater impact on
the overall score. This way, once a WAF has been evaluated, even if the
requirements change, the scores can easily be recalculated based on the
new set of weighted requirements.

Based on my experience with CVSSv2 (and addressed in CVSSv3), weighting
skews the result in favour of the vendor and not the consumer, so I
disagree with this too.

--
Regards,
Christian Heinrich

http://cmlh.id.au/contact

Tony Turner
Sat, Oct 10, 2015 1:57 AM

My responses are inline below, Christian; I don't necessarily agree with
your assessment.

On Fri, Oct 9, 2015 at 9:10 PM, Christian Heinrich
<christian.heinrich@cmlh.id.au> wrote:

Tony,

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org>
wrote:

Hi Christian, thanks for your question. One of the things I'd like to
get away from in a future version of WAFEC (not sure if it will make the
next release) is the yes/no responses. Ideally I'd like to identify a
scoring mechanism, say rating on a scale of 1-10 how well a product
meets a specific criterion. So two WAF vendors may both have signatures
for XSS, but one may be much better at detecting evasion attempts, maybe
one doesn't normalize, etc., or two vendors may have mitigation
mechanisms that vary in effectiveness. Otherwise, with a binary approach
for core capabilities, you are correct: there will likely be very little
deviation among WAF vendors until we start hitting the extrinsic
criteria. There's likely a ton of research and testing, as well as some
tool development effort, needed to support more granular evaluations
like this, which is why I think the next release may be too soon.

I disagree: binary (yes/no) answers are absolute and objective, while
assigning a score of 1-10 is subjective, as shown by OWASP's distrust of
its own benchmark, i.e.
http://lists.owasp.org/pipermail/owasp-leaders/2015-September/015120.html

This distrust has to do with a lack of transparency and the use of the
benchmark by a commercial entity before it was a mature framework. Don't
confuse the issues with your bias against OWASP by making inflammatory
statements out of context. I'm very aware of the issues and do not
intend to repeat those mistakes.

Secondly, yes/no answers are very easy to game unless we get very, very
specific with the questions, far more specific than the previous version
of the Response Matrix. For example, "Does the WAF support
signature-based detection?" is a terrible binary question. There may be
varying degrees of how comprehensive the default signatures are, how
easy it is to create new signatures or modify existing ones, differences
in regexes that can impact performance, denial-of-service conditions
from overly greedy regexes, etc.
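
To illustrate that last point with a toy example (the classic
nested-quantifier shape, not taken from any vendor's signature set):

    # Toy catastrophic-backtracking demo: a "signature" written with a nested
    # quantifier stalls on an almost-matching input.
    import re
    import time

    evil = re.compile(r"^(a+)+$")  # nested quantifier, the classic ReDoS shape
    payload = "a" * 24 + "!"       # trailing "!" forces exponential backtracking

    start = time.time()
    evil.match(payload)            # runtime roughly doubles with each extra "a"
    print(f"{time.time() - start:.2f}s for a {len(payload)}-char input")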

I do not intend to provide a graduated scoring mechanism unless those
scores can be clearly defined. In some cases it may make sense to use a
binary value, but in others there are varying degrees that should be
measured. I'm willing to discuss an expansion of those criteria in the
Response Matrix, but also want to be mindful of the fact that there may be
circumstances we have not accounted for, and I do understand your reluctance
to move to a model that has a chance to become more subjective. The only
way a scale works is if there are very clear definitions of what the
variances in score mean.

Using your example, there would be three yes/no questions, addressed in
order from specific to general:

  1. XSS normalize
  2. XSS evasion
  3. XSS signature

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org>
wrote:

The additional qualifiers you mention could be covered by that scaled
approach. Some of these qualifiers will also be called out in a separate
set of extrinsic criteria. For instance, ease of use might be a category
of its own that includes criteria for specific items (such as GUI
functionality) as well as criteria that allow the evaluator to judge how
intuitive the interface is, how violations and policies are tuned, and
how difficult it is to generate or customize reports. There could also
be additional criteria, like architectural-change categories, to clearly
spell out where the solution requires things such as additional ports to
be opened, changes to IP schemes, OS-level changes like agent
installation, or authentication changes. Other extrinsic criteria such
as user communities, vendor training and documentation, product
certification, sales process and more could be evaluated in the same
fashion, but I don't intend to go beyond technical criteria, feature
sets and usage for the purposes of the WAFEC documentation. I'm not sure
I'm prepared to provide guidance in the WAFEC documentation on, for
example, how vendors should be working with customers. Some of these
extrinsic categories may never find their way into the core WAFEC
document, but might still be included in a response matrix.

I disagree; these should be excluded, since this is reinventing a wheel
already established by Gartner and, for instance, by
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/#//apple_ref/doc/uid/TP40006556-CH66-SW1

I'm willing to debate this issue, and we arguably have to draw the line
somewhere with regard to which extrinsic features should be included.
That being said, it's still valuable for consumers to include a
mechanism whereby they can draw their own conclusions here. Don't get
too caught up on a single extrinsic criterion until we have identified a
comprehensive list of criteria for inclusion. There will be plenty of
debate once that happens, I assure you.

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org>
wrote:

The biggest problem with those kinds of evaluations is that they tend to
be very subjective and don't align well with a mature, repeatable
process in which multiple end users compare results. When I've created
similar matrices in the past (namely for vulnerability management and
SIEM products), I've always done something like this and admittedly done
a poor job of clearly identifying the difference between an 8 and a 6,
as I went with gut feel and have typically been the only user (except
when a certain VM vendor got a copy of a matrix I used for a bake-off
and thought it would be a great sales tool). That's a maturity
consideration that will be planned for before we include that capability
in a future Response Matrix.

I established the relationships with ICSA, i.e.
http://lists.webappsec.org/pipermail/wasc-wafec_lists.webappsec.org/2014-June/000286.html
and Gartner, i.e.
http://lists.webappsec.org/pipermail/wasc-wafec_lists.webappsec.org/2014-July/000293.html
and these evaluations should be driven by them [and NSS], not by WAFEC,
as this is detrimental to our independence.

I'm not quite sure what issue you are having here. Please elaborate. I
do not intend for WAFEC to perform evaluations. It's simply a framework
and set of tools for vendors, independent testing entities and consumers
to conduct their own evaluations. It should be designed for consistency
and flexibility, to map as closely as possible to the unique scenario it
is being used to evaluate.

On Fri, Oct 9, 2015 at 11:20 PM, Tony Turner <tony.turner@owasp.org> wrote:

One thing I've done in the past that I do intend to bring over, however,
is the concept of weighting. I've typically used a weight of 1-5 so that
I could assign different weights to criteria. For instance, maybe you
don't care much about how impactful the deployment will be, so you
assign those criteria a weight of 1, while robust support for policy
tuning might be a 5, and its scoring would have a much greater impact on
the overall score. This way, once a WAF has been evaluated, even if the
requirements change, the scores can easily be recalculated based on the
new set of weighted requirements.

Based on my experience with CVSSv2 (and addressed in CVSSv3), weighting
skews the result in favour of the vendor and not the consumer, so I
disagree with this too.

Lack of weighting makes all criteria equally important. For the purpose
of a generic evaluation that's fine, but it does not map to how
consumers need to use evaluation tools. Not all implementations are the
same, not all requirements are the same, and not all consumers will
consider criteria with the exact same weight. I'm sorry, but I strongly
disagree with you here. I'm not talking about setting weights; I'm
talking about providing the flexibility for the user of the tool to set
their own weights. Please provide concrete examples of how you find this
to be problematic.

CH
Christian Heinrich
Sat, Oct 10, 2015 3:06 AM

Tony,

On Sat, Oct 10, 2015 at 12:57 PM, Tony Turner tony@sentinel24.com wrote:

My responses inline below Christian, I don't necessarily agree with your
assessment
This distrust has to do with lack of transparency and use of a benchmark by
a commercial entity before it is a mature framework. Don't confuse the
issues with your bias against OWASP by making inflammatory statements out of
context. I'm very aware of the issues and do not intend to repeat those
mistakes.

I have not been involved in the recent public discussion at all about
the OWASP Benchmark Project so I am unaware how I could have
influenced this incident due to a perceived bias?

However, we did discuss the paid inclusion of A9 of the OWASP Top Ten
2013 release and its detrimental effect on your proposed CFV at BlackHat
USA 2015, and this same vendor has been caught again not even two months
after our discussion.

On Sat, Oct 10, 2015 at 12:57 PM, Tony Turner tony@sentinel24.com wrote:

I'm not quite sure what issue you are having here. Please elaborate. I do
not intend for WAFEC to perform evaluations. It's simply a framework and set
of tools for vendors, independent testing entities and consumers to conduct
their own evaluations. It should be designed for consistency and flexibility
to map as closely as possible to the unique scenario it is being used to
evaluate.

I was willing to give your CFV proposal the benefit of the doubt
(hence I made no comment in the published minutes), and while I
understand these issues are outside of your control, my recommendation
is to defer the CFV until OWASP has published its determination against
the vendor.

On Sat, Oct 10, 2015 at 12:57 PM, Tony Turner tony@sentinel24.com wrote:

Lack of weighting makes all criteria equally important. For the purpose of a
generic evaluation, that's fine but that does not map to how consumers need
to use evaluation tools. Not all implementations are the same, not all
requirements are the same and not all consumers will consider criteria with
the exact same weight. I'm sorry but I strongly disagree with you here. I'm
not talking about setting weights, I'm talking about providing the
flexibility for the user of the tool to set their own weights. Please
provide concrete examples of how you find this to be problematic.
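
[A minimal Python sketch of the recalculation described in the quoted
paragraph above; the criteria names, scores and weights are hypothetical
examples, not WAFEC-defined values:]

    # Evaluation results per criterion; binary here, but any scale works.
    criteria_scores = {
        "deployment_impact": 1,
        "policy_tuning": 1,
        "signature_updates": 0,
    }

    # Consumer-assigned importance on the 1-5 scale mentioned above.
    weights = {
        "deployment_impact": 1,
        "policy_tuning": 5,
        "signature_updates": 3,
    }

    def weighted_score(scores, weights):
        """Sum of score * weight for every criterion."""
        return sum(scores[c] * weights[c] for c in scores)

    print(weighted_score(criteria_scores, weights))        # 6

    # When requirements change, only the weights change; the evaluation
    # itself is reused and the score is simply recalculated.
    new_weights = dict(weights, deployment_impact=5)
    print(weighted_score(criteria_scores, new_weights))    # 10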

I have no issue with the end user making their own independent
decision [on weighting], but I disagree that WAFEC should propose a
weighting scheme that could steer the end user into a decision that
proves wrong in hindsight.

--
Regards,
Christian Heinrich

http://cmlh.id.au/contact

CF
Christian Folini
Sat, Oct 10, 2015 5:42 AM

Hi there,

On Fri, Oct 09, 2015 at 09:57:45PM -0400, Tony Turner wrote:

Secondly, yes/no answers are very easy to game unless we get very very
specific with the questions, far more specific than the previous version of
the Response Matrix. For example "Does the WAF support signature based
detection?" is a terrible binary question. There may be varying degrees of
how comprehensive default signatures are, how easy to create new, modify
existing, differences in regex that can impact performance, denial of
service conditions for overly greedy regex, etc.

I do not intend to provide a graduated scoring mechanism unless those
scores can be clearly defined.

This is extremely hard. I think you would be better off improving the
binary questions and breaking them down to be more specific than
defining the varying degrees that would have to be measured.

In a competitive and somewhat blurry environment such as WAFs, you seem
to be opening the door to all sorts of meta-discussions.
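
[A minimal sketch of what that breakdown could look like in Python, using
the signature example quoted above; the sub-questions are hypothetical,
not an agreed WAFEC list:]

    # One coarse question ("Does the WAF support signature based
    # detection?") broken down into specific, verifiable yes/no answers.
    signature_subquestions = {
        "ships with default signatures for common attack classes": True,
        "allows creating new signatures": True,
        "allows modifying existing signatures": False,
        "normalizes input before signature matching": True,
        "guards against overly greedy regex (ReDoS)": False,
    }

    # Each answer stays binary; the tally gives a graduated picture
    # without a subjective 1-10 judgment.
    yes_count = sum(signature_subquestions.values())
    print(f"{yes_count}/{len(signature_subquestions)} sub-criteria met")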

Ahoj,

Christian Folini

--
If liberty means anything at all, it means the right to tell people
what they do not want to hear.
-- George Orwell

TT
Tony Turner
Sat, Oct 10, 2015 6:10 PM

Thanks Christian(s). It occurs to me that there is little difference
between a set of binary questions that produce a score and a graduated
score that is produced by using the well defined criteria I mentioned. I'm
fine with keeping it binary, and expanding the criteria as it makes sense
to do so.
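
[A minimal Python sketch of that equivalence, combining binary answers
with the consumer-set weights discussed earlier in the thread; all names
and values are hypothetical:]

    # Binary answers per expanded criterion (1 = yes, 0 = no).
    binary_answers = {
        "default_signatures": 1,
        "custom_signatures": 1,
        "input_normalization": 0,
        "redos_protection": 1,
    }

    # Consumer-set weights; summing weighted binary answers yields a
    # graduated score without any subjective per-criterion grading.
    user_weights = {
        "default_signatures": 3,
        "custom_signatures": 5,
        "input_normalization": 4,
        "redos_protection": 2,
    }

    score = sum(binary_answers[c] * user_weights[c] for c in binary_answers)
    maximum = sum(user_weights.values())
    print(f"graduated score: {score}/{maximum}")   # 10/14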

That being said, I'm very interested to hear from any metrics nerds on the
list as we could use a good statistician on the core team.

-Tony
On Oct 10, 2015 1:42 AM, "Christian Folini" <
christian.folini@time-machine.ch> wrote:

Hi there,

On Fri, Oct 09, 2015 at 09:57:45PM -0400, Tony Turner wrote:

Secondly, yes/no answers are very easy to game unless we get very very
specific with the questions, far more specific than the previous version
of the Response Matrix. For example "Does the WAF support signature based
detection?" is a terrible binary question. There may be varying degrees
of how comprehensive default signatures are, how easy to create new,
modify existing, differences in regex that can impact performance, denial
of service conditions for overly greedy regex, etc.

I do not intend to provide a graduated scoring mechanism unless those
scores can be clearly defined.

This is extremely hard. I think you would be better off improving the
binary questions and breaking them down to be more specific than
defining the varying degrees that would have to be measured.

In a competitive and somewhat blurry environment such as WAFs, you seem
to be opening the door to all sorts of meta-discussions.

Ahoj,

Christian Folini

--
If liberty means anything at all, it means the right to tell people
what they do not want to hear.
-- George Orwell


wasc-wafec mailing list
wasc-wafec@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/wasc-wafec_lists.webappsec.org

Thanks Christian(s). It occurs to me that there is little difference between a set of binary questions that produce a score and a graduated score that is produced by using the well defined criteria I mentioned. I'm fine with keeping it binary, and expanding the criteria as it makes sense to do so. That being said, I'm very interested to hear from any metrics nerds on the list as we could use a good statistician on the core team. -Tony On Oct 10, 2015 1:42 AM, "Christian Folini" < christian.folini@time-machine.ch> wrote: > Hi there, > > On Fri, Oct 09, 2015 at 09:57:45PM -0400, Tony Turner wrote: > > Secondly, yes/no answers are very easy to game unless we get very very > > specific with the questions, far more specific than the previous version > of > > the Response Matrix. For example "Does the WAF support signature based > > detection?" is a terrible binary question. There may be varying degrees > of > > how comprehensive default signatures are, how easy to create new, modify > > existing, differences in regex that can impact performance, denial of > > service conditions for overly greedy regex, etc. > > > > I do not intend to provide a graduated scoring mechanism unless those > > scores can be clearly defined. > > This is extremely hard. I think you would be better off by > improving the binary questions / breaking them down to be more > specific than by defining the varying degrees which should be measured. > > In a competitive and somewhat blurry environment as WAFs, you seem > to be opening the door for all sorts of meta-discussions. > > Ahoj, > > Christian Folini > > > -- > If liberty means anything at all, it means the right to tell people > what they do not want to hear. > -- George Orwell > > _______________________________________________ > wasc-wafec mailing list > wasc-wafec@lists.webappsec.org > http://lists.webappsec.org/mailman/listinfo/wasc-wafec_lists.webappsec.org >
TT
Tony Turner
Sat, Oct 10, 2015 6:16 PM

I do not intend to delay the CFV, as I do not feel there is any relevance.
This is not about OWASP, and the outcome of that project has nothing to do
with WAFEC. We have taken steps to ensure transparency and mitigate
conflicts of interest for vendor contributors, and I am confident this will
not be an issue here. In fact, preventing additional vendors from joining
would have the opposite effect, favoring the established vendor
participants.

-Tony
On Oct 9, 2015 11:06 PM, "Christian Heinrich" christian.heinrich@cmlh.id.au
wrote:

Tony,

On Sat, Oct 10, 2015 at 12:57 PM, Tony Turner tony@sentinel24.com wrote:

My responses inline below Christian, I don't necessarily agree with your
assessment
This distrust has to do with lack of transparency and use of a benchmark
by a commercial entity before it is a mature framework. Don't confuse the
issues with your bias against OWASP by making inflammatory statements out
of context. I'm very aware of the issues and do not intend to repeat those
mistakes.

I have not been involved in the recent public discussion at all about
the OWASP Benchmark Project so I am unaware how I could have
influenced this incident due to a perceived bias?

However, we did discuss the paid inclusion of A9 of the OWASP Top Ten
2013 release and its detrimental effect on your proposed CFV at BlackHat
USA 2015, and this same vendor has been caught again not even two months
after our discussion.

On Sat, Oct 10, 2015 at 12:57 PM, Tony Turner tony@sentinel24.com wrote:

I'm not quite sure what issue you are having here. Please elaborate. I do
not intend for WAFEC to perform evaluations. It's simply a framework and
set of tools for vendors, independent testing entities and consumers to
conduct their own evaluations. It should be designed for consistency and
flexibility to map as closely as possible to the unique scenario it is
being used to evaluate.

I was willing to give your CFV proposal the benefit of the doubt
(hence I made no comment in the published minutes), and while I
understand these issues are outside of your control, my recommendation
is to defer the CFV until OWASP has published its determination against
the vendor.

On Sat, Oct 10, 2015 at 12:57 PM, Tony Turner tony@sentinel24.com wrote:

Lack of weighting makes all criteria equally important. For the purpose
of a generic evaluation, that's fine but that does not map to how
consumers need to use evaluation tools. Not all implementations are the
same, not all requirements are the same and not all consumers will
consider criteria with the exact same weight. I'm sorry but I strongly
disagree with you here. I'm not talking about setting weights, I'm
talking about providing the flexibility for the user of the tool to set
their own weights. Please provide concrete examples of how you find this
to be problematic.

I have no issue with the end user making their own independent
decision [on weighting], but I disagree that WAFEC should propose a
weighting scheme that could steer the end user into a decision that
proves wrong in hindsight.

--
Regards,
Christian Heinrich

http://cmlh.id.au/contact

CH
Christian Heinrich
Sun, Oct 11, 2015 3:36 AM

Tony,

On Sun, Oct 11, 2015 at 5:10 AM, Tony Turner tony.turner@owasp.org wrote:

Thanks Christian(s). It occurs to me that there is little difference between
a set of binary questions that produce a score and a graduated score that is
produced by using the well defined criteria I mentioned. I'm fine with
keeping it binary, and expanding the criteria as it makes sense to do so.

That being said, I'm very interested to hear from any metrics nerds on the
list as we could use a good statistician on the core team.

Your proposal might be easier to endorse if you convert from the
binary questions to weighted criteria once these [binary questions]
are finalised for v2, and propose that conversion for v3 together with
the request for a statistician.

--
Regards,
Christian Heinrich

http://cmlh.id.au/contact

CF
Christian Folini
Sun, Oct 11, 2015 6:43 AM

On Sat, Oct 10, 2015 at 02:10:50PM -0400, Tony Turner wrote:

Thanks Christian(s). It occurs to me that there is little difference
between a set of binary questions that produce a score and a graduated
score that is produced by using the well defined criteria I mentioned. I'm
fine with keeping it binary, and expanding the criteria as it makes sense
to do so.

Yes, that makes a lot of sense. The result would be the same.

That being said, I'm very interested to hear from any metrics nerds on the
list as we could use a good statistician on the core team.

I'm sure you could.

Good to see this project moving forward again.

Best,

Christian Folini

-Tony
On Oct 10, 2015 1:42 AM, "Christian Folini" <
christian.folini@time-machine.ch> wrote:

Hi there,

On Fri, Oct 09, 2015 at 09:57:45PM -0400, Tony Turner wrote:

Secondly, yes/no answers are very easy to game unless we get very very
specific with the questions, far more specific than the previous version
of the Response Matrix. For example "Does the WAF support signature based
detection?" is a terrible binary question. There may be varying degrees
of how comprehensive default signatures are, how easy to create new,
modify existing, differences in regex that can impact performance, denial
of service conditions for overly greedy regex, etc.

I do not intend to provide a graduated scoring mechanism unless those
scores can be clearly defined.

This is extremely hard. I think you would be better off improving the
binary questions and breaking them down to be more specific than
defining the varying degrees that would have to be measured.

In a competitive and somewhat blurry environment such as WAFs, you seem
to be opening the door to all sorts of meta-discussions.

Ahoj,

Christian Folini

--
If liberty means anything at all, it means the right to tell people
what they do not want to hear.
-- George Orwell


wasc-wafec mailing list
wasc-wafec@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/wasc-wafec_lists.webappsec.org

--
Christian Folini
Ringstrasse 2
CH-3639 Kiesen
+41 (0)31 301 60 71 (H)
+41 (0)79 220 23 76 (M)
mailto:christian.folini@netnea.com (Business)
mailto:christian.folini@time-machine.ch (Private)
http://www.christian-folini.ch
