[WEB SECURITY] CSRF protection: What are the benefits of using the Synchronizer Token Pattern if your application is not vulnerable to XSS and using HTTPS only?
Arian J. Evans
arian.evans at anachronic.com
Sat Apr 23 20:40:16 EDT 2011
To Richard's original question: it is worth considering that,
statistically speaking, few web applications are capable of maintaining
an XSS-free state over time, especially as they evolve into the
Web 2.0 world. This is a great argument for increased software
security efforts in the SDLC. Now we need SAST to actually provide
value when analyzing JS/ActionScript/Web 2.0 security.
Even the few organizations with mature SAST SDLC programs
do a poor job testing off-domain code and widgets that are sourced
externally and run in-domain, from what we see. I think I've seen only
one organization with a solid SDLC program for dealing with client-side
code and external Web 2.0 constructs used in their applications.
Agreeing with Jim's comments - two additional considerations:
1) Roughly 10-15% of the webapps we see are vulnerable to some form
of HTTP response splitting (HTTP/RS) type attack. XSS++ if you will. So
the HttpOnly flag doesn't protect the cookies from an attacker who can
inject into HTTP headers.
2) Collisions: In implementing double-cookie submit per-server, you do
increase the chance of collisions. Now, to be fair, I've only seen this
a few times in my life. We are testing about 3,500 apps today at
WhiteHat, so it's definitely an edge case.
One particularly nasty collision scenario: the app would batch-generate
a large file of confidential information. That file download would have
the CSRF token from the report request (the cookie value) bound to it as
the "unique" part of the file name, and that value also controlled
authorization to access the file.
This worked great until the system grew to hundreds of thousands of
concurrent users. Token collisions not only opened up a large attack
vector, but a few times a month, under heavy load, someone would wind up
getting someone else's file....
And the reverse collision case also existed: legitimate users would get
booted from the system when attempting to use a 'collided' token on a
valid, token-protected resource they had requested. Per-server
stickiness would definitely mitigate this to a degree, but the problem
in this case was also a large design flaw:
Some high-sensitivity functions would store the token in a shared DB
instance behind the load-balanced server farms, so you'd get collisions
between, for example, a Server #9 sticky user and a Server #3 sticky
user (in functions that kept the token's state in the shared DB for
"increased security").
The token itself was fairly high entropy, but not enough for the number
of unique token generators (unsynchronized servers) versus the
concurrent users and the volume of requests with unique tokens. This is
one of those dastardly design flaws that are so hard to find without
sophisticated multi-user DAST testing, multi-user pen testing, or design
reviews. In fact, I can even see where a design review early in the SDLC
would miss the multi-server implementation implications. A
production-system threat model might be a better way to catch these than
an early design review, in addition to multi-user DAST testing.
Today you should be able to generate high-entropy tokens without
too much computational overhead, which mitigates the collision scenario.
That said, it is worth doing some quick calculations on token entropy
versus your concurrent user base and average request volume if you are
generating tokens per server (and especially per request) in a large
LB farm.
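As a rough way to run those calculations, the birthday bound gives the
odds that any two independently generated random tokens collide. A
minimal sketch (the bit sizes and token volumes below are hypothetical
illustrations, not numbers from this thread):

```python
import math

def collision_probability(token_bits: int, tokens_issued: int) -> float:
    """Birthday-bound approximation: probability that at least two of
    `tokens_issued` independent, uniformly random `token_bits`-bit
    tokens collide, p ~= 1 - exp(-n^2 / 2^(b+1))."""
    space = 2.0 ** token_bits
    return 1.0 - math.exp(-(tokens_issued ** 2) / (2.0 * space))

# Hypothetical load: one million tokens in flight across the farm.
p64 = collision_probability(64, 1_000_000)    # small but nonzero
p128 = collision_probability(128, 1_000_000)  # negligible
```

The point of the comparison: at the same request volume, moving from a
64-bit to a 128-bit token drives the collision probability from "rare
but observable at scale" to effectively zero.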
Software Security Statistician
On Sat, Apr 23, 2011 at 3:56 PM, James Manico <jim at manico.net> wrote:
> I think this (double-cookie submit) is a weak defensive choice since
> it requires a browser's same-origin policy to be perfect, and
> history says otherwise. I feel that a cryptographic nonce, either
> per-session or per-request, is a more robust defense.
> Admittedly, supporting tokens on a per-request basis does require
> storing a queue of tokens - which can be tricky to get right for a
> number of reasons. I think one token per session is a reasonable
> tradeoff for a framework.
> If storing nonces is an issue, which I've seen in a few SSO
> environments, then a "stateless nonce" (i.e., a hash of the
> session ID) is a solid "second choice" for CSRF protection.
> The double-cookie submit always seemed like the least secure approach
> to me, but many disagree...
> I'm very glad that Django cares about this topic, thank you Paul.
> Jim Manico
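Jim's "stateless nonce" above can be sketched roughly as follows. Note
one assumption beyond his wording: this uses an HMAC with a server-side
key rather than a bare hash of the session ID, so the token cannot be
recomputed by anyone who merely learns the session ID. Function names
and the key-loading step are illustrative only:

```python
import hmac
import hashlib
import secrets

# Hypothetical server-side key; in practice loaded from configuration
# and shared by all servers in the farm (which is what makes the
# scheme stateless and LB-friendly).
SERVER_SECRET = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    """Derive a stateless CSRF token from the session ID; no
    server-side token table is needed."""
    return hmac.new(SERVER_SECRET, session_id.encode(),
                    hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, submitted: str) -> bool:
    """Recompute and compare in constant time."""
    return hmac.compare_digest(csrf_token(session_id), submitted)
```

Because every server derives the same token from the same session ID
and shared key, this also sidesteps Richard's session-replication race.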
> On Apr 20, 2011, at 1:00 PM, Paul McMillan <paul at mcmillan.ws> wrote:
>> In the Django web framework, we concluded that the cost of doing a
>> server side verification was too high for precisely these reasons.
>> Instead, we use a mostly client side CSRF solution.
>> -We only accept POST requests for actions that change application state.
>> -Each form we render includes a hidden csrftoken field
>> -We set a matching csrftoken cookie
>> -Server-side, we compare the posted value to the value sent as a cookie
>> This works because an attacker can't set a cookie in a user's browser
>> for my domain.
>> This lets us do CSRF independent of the session, and prevents us from
>> storing large token tables for requests that may never happen.
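Paul's double-submit check above might look roughly like this outside of
a framework. This is an illustrative sketch under the scheme he
describes, not Django's actual implementation; the handler signature and
field names are hypothetical:

```python
import hmac
import secrets

def issue_token() -> str:
    """Token rendered into the hidden csrftoken form field and set as
    the matching csrftoken cookie when a form is served."""
    return secrets.token_hex(16)

def check_csrf(method: str, cookies: dict, form: dict) -> bool:
    """Accept a state-changing POST only if the hidden form field
    matches the cookie. An attacker's cross-site form can neither read
    nor set this domain's cookie, so the two values won't match."""
    if method != "POST":
        return True  # only POSTs change state in this scheme
    cookie_val = cookies.get("csrftoken", "")
    form_val = form.get("csrftoken", "")
    return bool(cookie_val) and hmac.compare_digest(cookie_val, form_val)
```

Since the server only compares the two submitted values, no token table
is kept, which is exactly the storage cost Paul is avoiding.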
>> On Wed, Apr 20, 2011 at 1:50 AM, Richard Hauswald
>> <richard.hauswald at googlemail.com> wrote:
>>> I'm playing around with different AJAX based web technologies in a
>>> spare time project. I managed to implement the Synchronizer Token
>>> Pattern to fully comply to the OWASP recommendation.
>>> Now I'm on my way playing around with load balancing. I managed to
>>> implement a "sticky" variant where the user is bound to a particular
>>> server instance for the lifetime of the session. But if I try to
>>> balance each request to a different machine, I run into random errors
>>> when doing heavy stress testing.
>>> I isolated the problem to the following: the session distribution
>>> between the server instances is sometimes not fast enough to
>>> synchronize the new token stored in the session. This leads to false
>>> positives in the anti-CSRF token Filter/Interceptor.
>>> This could be easily fixed by using a session-wide anti-CSRF token
>>> which is not regenerated with every request. But this violates the
>>> OWASP recommendation :-( I googled and thought a lot about the
>>> following question: What are the benefits of using the Synchronizer
>>> Token Pattern if your application is not vulnerable to XSS and is
>>> using HTTPS only?
>>> My conclusion is that if one is using HTTPS and a web application
>>> which is not vulnerable to XSS attacks, there is no benefit to
>>> regenerating the anti-CSRF token with each request compared to a
>>> session-wide token. Is this conclusion correct?
>>> Best Regards,
>>> Richard Hauswald
>>> Blog: http://tnfstacc.blogspot.com/
>>> LinkedIn: http://www.linkedin.com/in/richardhauswald
>>> Xing: http://www.xing.com/profile/Richard_Hauswald
>>> The Web Security Mailing List
>>> WebSecurity RSS Feed
>>> Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA
>>> WASC on Twitter
>>> websecurity at lists.webappsec.org