Hello,
I'm playing around with different AJAX-based web technologies in a
spare-time project. I managed to implement the Synchronizer Token
Pattern to fully comply with the OWASP recommendation.
Now I'm experimenting with load balancing. I managed to implement a
"sticky" variant where the user is bound to a particular server
instance for the lifetime of the session. But when I try to balance
each request to a different machine, I run into random errors under
heavy stress testing.
I isolated the problem to the following: session replication between
the server instances is sometimes not fast enough to synchronize the
new token stored in the session. This leads to false positives in the
anti-CSRF token filter/interceptor.
This could easily be fixed by using a session-wide anti-CSRF token
which is not regenerated with every request. But that violates the
OWASP recommendation :-( I googled and thought a lot about the
question:
What are the benefits of using the Synchronizer Token Pattern if your
application is not vulnerable to XSS and uses HTTPS only?
My conclusion is that if one is using HTTPS and a web application
which is not vulnerable to XSS attacks, there is no benefit in
regenerating the anti-CSRF token with each request compared to a
session-wide token. Is this conclusion correct?
Best Regards,
Richard
--
Richard Hauswald
Blog: http://tnfstacc.blogspot.com/
LinkedIn: http://www.linkedin.com/in/richardhauswald
Xing: http://www.xing.com/profile/Richard_Hauswald
In the Django web framework, we concluded that the cost of doing
server-side verification was too high for precisely these reasons.
Instead, we use a mostly client-side CSRF solution.
- We only accept POST requests for actions that change application state.
- Each form we render includes a hidden csrftoken field.
- We set a matching csrftoken cookie.
- Server-side, we compare the posted value to the value sent as a cookie.
This works because an attacker can't set a cookie in a user's browser
for my domain.
This lets us do CSRF independent of the session, and prevents us from
storing large token tables for requests that may never happen.
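A rough sketch of the double-submit check Paul describes (the function names and the dict-based request representation are illustrative, not Django's actual implementation):

```python
import hmac
import secrets

def issue_csrf_token():
    """Generate a random token; the application sets this value both as
    the csrftoken cookie and as the hidden csrftoken form field."""
    return secrets.token_hex(16)

def check_csrf(cookies, form):
    """Double-submit check: the posted hidden field must match the cookie.
    A cross-site attacker can set neither, so a forged POST fails."""
    cookie_token = cookies.get("csrftoken", "")
    form_token = form.get("csrftoken", "")
    if not cookie_token or not form_token:
        return False
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(cookie_token, form_token)
```

Note that nothing here touches server-side session state, which is exactly why the scheme survives request-by-request load balancing.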
-Paul
On Wed, Apr 20, 2011 at 1:50 AM, Richard Hauswald
richard.hauswald@googlemail.com wrote:
The Web Security Mailing List
WebSecurity RSS Feed
http://www.webappsec.org/rss/websecurity.rss
Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA
WASC on Twitter
http://twitter.com/wascupdates
websecurity@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/websecurity_lists.webappsec.org
The technique Paul presents is popularly called 'Double Submit' and is a nice CSRF protection that doesn't require server-side state.
Adding the token as a hidden field works well for forms-based sites and Web 1.0, but it might not be a perfect match for Ajax sites and single-page apps. In the latter case I recommend letting JavaScript read the CSRF token cookie and add it as a request parameter for both GET and POST. This can be done as a tweak to your framework's data resource proxy or as a wrapper around your Ajax calling function. The CSRF token cookie naturally can't be HttpOnly, so don't consider using the session ID for this purpose.
On the server you can either have a filter on your resources (meaning in the app), or have an intelligent web front end handle it and strip the cookie and parameter off. The filter checks that the cookie and the parameter match and that they are of the correct format, for instance 16 hex chars.
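The filter John describes might look roughly like this (the 16-hex-char format and the function name are illustrative choices, not a prescribed API):

```python
import hmac
import re

# Token format check: exactly 16 lowercase hex characters.
HEX16 = re.compile(r"^[0-9a-f]{16}$")

def csrf_filter(cookie_value, param_value):
    """Double-submit filter: the token arrives both as a cookie and as a
    request parameter (added by client-side JavaScript). Reject unless
    both are present, well-formed, and equal."""
    if cookie_value is None or param_value is None:
        return False
    if not (HEX16.match(cookie_value) and HEX16.match(param_value)):
        return False
    # Constant-time comparison.
    return hmac.compare_digest(cookie_value, param_value)
```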
Anyone else who has tweaks or ideas on how to enhance Double Submit? I'd love to hear about it!
Regards, John
On 20 Apr 2011, at 21:07, Paul McMillan paul@mcmillan.ws wrote:
Anyone else who has tweaks or ideas on how to enhance Double Submit? Love to hear about it!
I'm sure this has been mentioned somewhere at some point, but why not
just use an HMAC instead? Doesn't require synchronization and you
don't need to bother with cookies (blech).
(Note that Paul's comment about other people not being able to set
cookies in your domain may or may not be true depending on how old the
browser is. I'd have to dig to determine the current state of that
particular cookie brokenness, but [1] is a great resource to start
understanding just how buggy implementations likely still are.)
This is one possible way to use an HMAC:
A. Upon user session creation, store a random secret key in the
server-side session state. Do not bother changing this again for this
session.
B. On every form, in a hidden form field (POST body) include your CSRF
token which is constructed as:
csrftoken = timestamp || HMAC(key, timestamp)
C. When receiving each POST, just verify that the timestamp is not too
old and that the HMAC matches. This handles asynchronous requests
perfectly fine for the whole session.
I'm sure this could be improved, but that's the gist of it.
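Steps A-C above can be sketched in Python like so (a "." separator stands in for the "||" concatenation, and the one-hour max_age is an arbitrary choice):

```python
import hashlib
import hmac
import time

def make_token(key, now=None):
    """Steps A-B: csrftoken = timestamp || HMAC(key, timestamp), where
    key is the random per-session secret stored server-side."""
    ts = str(int(now if now is not None else time.time()))
    mac = hmac.new(key, ts.encode(), hashlib.sha256).hexdigest()
    return ts + "." + mac

def verify_token(key, token, max_age=3600, now=None):
    """Step C: accept if the timestamp is fresh and the HMAC matches.
    No per-request token table is needed, only the session key."""
    try:
        ts, mac = token.split(".", 1)
    except ValueError:
        return False
    if not ts.isdigit():
        return False
    current = now if now is not None else time.time()
    if current - int(ts) > max_age:
        return False
    expected = hmac.new(key, ts.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison.
    return hmac.compare_digest(mac, expected)
```

Because verification only needs the per-session key, any server in the farm can validate a token minted by any other, with no token synchronization at all.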
tim
I think this (double-cookie submit) is a weak defensive choice since
it requires a browser's same-origin policy to be perfect, and
history says otherwise. I feel that a cryptographic nonce, either
per-session or per-request, is a more robust defense.
Admittedly, supporting tokens on a per-request basis does require
storing a queue of tokens - which can be tricky to get right for a
number of reasons. I think one token per session is a reasonable
tradeoff for a framework.
If storing nonces is an issue, which I've seen in a few SSO
environments, then a "stateless nonce" (i.e., a hash of the
session ID) is a solid second choice for CSRF protection.
The double-cookie submit always seemed like the least secure approach
to me, but many disagree...
I'm very glad that Django cares about this topic, thank you Paul.
Jim Manico
I think this (double-cookie submit) is a weak defensive choice since
it requires a browser's same-origin policy to be perfect, and
history says otherwise. I feel that a cryptographic nonce, either
per-session or per-request, is a more robust defense.
If you mean something implemented as:
Put something random in a cookie named XSRF_CHECK_COOKIE,
Copy that cookie over to a form field named xsrf_check_field,
Upon receiving the form, check that XSRF_CHECK_COOKIE == xsrf_check_field,
...then this is completely broken for any application that wants to
use HTTPS and withstand active attackers on public wifi and the like.
This should be evident here:
http://lcamtuf.blogspot.com/2010/10/http-cookies-or-how-not-to-design.html
It can also cause problems for domains that host multiple web
applications compartmentalized on a host-level, because
fuzzy-bunnies.example.com can then compromise the XSRF token of
payments.example.com, even if payments.example.com uses a completely
separate login cookie and such.
So, there are some basic uses where this approach can be recommended,
but it's not a good habit in general.
/mz
[Oh, and if Django is doing that, it doesn't sound too great.]
To Richard's original question: it is worth considering that,
statistically speaking, few web applications are capable of
maintaining an XSS-free state over time, especially as they evolve
into the Web 2.0 world. This is a great argument for increased
software security efforts in the SDLC. Now we need SAST to actually
provide value analyzing JS/ActionScript/Web 2.0 security.
Even the few organizations with mature SAST SDLC programs do a poor
job testing off-domain code and widgets sourced in and run in-domain,
from what we see. I think I've seen only one organization with a
solid SDLC program for dealing with client-side code and external
Web 2.0 constructs used in their applications.
Agreeing with Jim's comments - two additional considerations:
Roughly 10-15% of the webapps we see are vulnerable to some form of
HTTP response splitting (HTTP/RS) attack - XSS++ if you will. So the
HttpOnly flag doesn't protect the cookies from an attacker who can
inject into HTTP headers.
Collisions: implementing double-cookie submit per-server does
increase the chance of collisions. To be fair, I've only seen this a
few times in my life. We are testing about 3,500 apps today at
WhiteHat, so it's definitely an edge case.
One particularly nasty collision scenario: the app would batch-generate
a large file of confidential information. The file download would have
the CSRF token from the report request (the cookie value) bound to it
as the "unique" part of the file name, and that value also controlled
access authorization to the file.
This worked great until the system grew into the hundreds of thousands
of concurrent users; token collisions not only opened up a large
attack vector, but a few times a month someone would wind up getting
someone else's file under heavy load.
And the reverse collision case also existed: legitimate users would be
booted from the system when attempting to use a 'collided' token on
any valid resource requiring a token. Per-server stickiness would
definitely mitigate this to a degree, but the problem in this case was
also a large design flaw:
Some high-sensitivity functions would store the token in a shared DB
instance behind the load-balanced server farms, so you'd get
collisions between a Server #9 sticky user and a Server #3 sticky
user, for example (in functions that kept token state in the shared
DB for "increased security").
The token itself was fairly high-entropy, but not high enough for the
number of unique token generators (unsynchronized servers) versus the
number of concurrent users and the volume of requests with unique
tokens. This is one of those dastardly design flaws that are so hard
to find without sophisticated multi-user DAST testing, multi-user pen
testing, or design reviews. In fact, I can even see a design review
early in the SDLC missing the multi-server implementation
implications. A production-system threat model might be a better way
to catch these than an early design review, in addition to multi-user
DAST testing.
Today you should be able to generate high-entropy tokens without too
much computational overhead, which mitigates the collision scenario.
That said, it's worth doing some quick calculations on token entropy
versus your concurrent user base and average request volume if you
are generating tokens per-server (and especially per-request) in a
large load-balanced farm.
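The quick calculation Arian suggests is a standard birthday-bound estimate; a small helper (an approximation only, assuming tokens are drawn uniformly and independently):

```python
import math

def collision_probability(n_tokens, entropy_bits):
    """Birthday-bound approximation: chance that at least two of
    n_tokens uniform random tokens from a 2**entropy_bits space
    collide, p ~= 1 - exp(-n^2 / (2 * 2**bits))."""
    space = 2.0 ** entropy_bits
    return 1.0 - math.exp(-(n_tokens ** 2) / (2.0 * space))
```

For instance, 100,000 outstanding 32-bit tokens already collide with probability around 0.7, while at 128 bits the probability stays negligible at any realistic request volume.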
Arian Evans
Software Security Statistician
On Sat, Apr 23, 2011 at 3:56 PM, James Manico jim@manico.net wrote:
On Sat, Apr 23, 2011 at 4:07 PM, Michal Zalewski lcamtuf@coredump.cx wrote:
It can also cause problems for domains that host multiple web
applications compartmentalized on a host-level, because
fuzzy-bunnies.example.com can then compromise the XSRF token of
payments.example.com, even if payments.example.com uses a completely
separate login cookie and such.
Great example, Michal.
Across domain-name-compartmentalized apps you have a conceptually
similar problem to the multi-server token collision example I just
posted. Not only could the token be legitimately compromised or
reused as you noted, but tokens can also blindly collide in the same
scenarios if the generators are not synchronized (which we have
observed in multi-node environments).
Arian Evans
Software Security Sophist