websecurity@lists.webappsec.org

The Web Security Mailing List


How are you tackling CSRF?

TP
Thomas Ptacek
Sat, Apr 23, 2011 10:10 PM

He's not talking about referrer headers; he's talking about subsetting the inventory of forms down to those that actually require CSRF protection.

On Apr 23, 2011, at 4:23 PM, James Manico wrote:

Hey Steve,

In an intranet environment where all browser/network settings are
controlled, HTTP Referer header verification (what you are suggesting
below) can be effective as a *partial* CSRF defense to be used in
addition to cryptographic nonces.

But when your website has Internet-facing customers/consumers, you can
no longer rely on Referer headers. Some organizations strip them
outbound to prevent data leakage.
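For illustration only, a rough Python sketch of that layered check (ALLOWED_HOSTS
and the request/session accessors are made-up names, not any particular framework):

from urllib.parse import urlparse

ALLOWED_HOSTS = {"intranet.example.com"}   # assumed deployment-specific value

def referer_looks_ok(request):
    referer = request.headers.get("Referer")
    if not referer:
        # Proxies and privacy tools may strip Referer outbound, so on
        # Internet-facing sites an absent header proves nothing.
        return False
    return urlparse(referer).hostname in ALLOWED_HOSTS

def csrf_check(request, session):
    # The cryptographic nonce stays the primary defense; the Referer
    # check is only a secondary, partial signal.
    token_ok = request.form.get("csrf_token") == session.get("csrf_token")
    return token_ok and referer_looks_ok(request)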

Jim Manico

On Apr 23, 2011, at 9:33 AM, "Steven M. Christey"
<coley@rcf-smtp.mitre.org> wrote:

Disclaimer: I'm mostly ignorant about automated detection of CSRF.

Just a random thought.  Has anybody investigated filtering/prioritizing forms based on how many pages invoke those forms?  I would guess that some critical state-changing forms would only be accessible from a single page, whereas (e.g.) a search or login function might be accessible from many.

In a CMS scenario for example, there might be lots of pages that link to a "create a new page" form, but only one page that points to the form "commit the new page content you just filled in."

This might not serve as proof that a form should have CSRF protection, but it might be one way of sorting the potential false-positives.
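
A hypothetical sketch of that sorting idea (assuming crawl output that maps each
page URL to the form action URLs found on it; all names here are illustrative):

from collections import Counter

def rank_forms_by_exposure(crawl_results):
    """Return (form_action, page_count) pairs, least-linked forms first."""
    counts = Counter()
    for page_url, form_actions in crawl_results.items():
        for action in set(form_actions):   # count each form once per page
            counts[action] += 1
    # Forms reachable from a single page are more likely to be the critical,
    # state-changing ones worth prioritizing for manual CSRF review.
    return sorted(counts.items(), key=lambda item: item[1])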

- Steve

The Web Security Mailing List

WebSecurity RSS Feed
http://www.webappsec.org/rss/websecurity.rss

Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA

WASC on Twitter
http://twitter.com/wascupdates

websecurity@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/websecurity_lists.webappsec.org


Thomas Ptacek // matasano security // founder, product manager
reach me direct: 888-677-0666 x7805

"The truth will set you free. But not until it is finished with you."

AJ
Arian J. Evans
Sun, Apr 24, 2011 12:00 AM

Tasos - absolutely fair response. I suspect the customer types and use-cases
for our platforms are entirely different. You probably get the per-app focused
pen-tester, and reporting "highly probable, high-value" targets for hand
review is indeed valuable to them.

At WhiteHat we do the same thing internally with Sentinel. The most common
use-case for Sentinel (scanning applications all year long, all day long or
nightly) requires a similar approach internally to what you describe below.
Incidentally, this is also how we catch new code, new forms, and interesting new
implementations and patterns that require us to go in and tweak Sentinel
or write new custom tests.

The difference is that we validate all the results for our customers, as you noted.
Our customer typically has one person responsible for dealing with the results
of dozens of applications; that ratio is usually 1:10 to 1:50
security_person:app. That means they can be dealing with hundreds to thousands
of "vulnerabilities", so if they have to chase 'potentials' or 'false positives'
they can't get anything done internally.

Additionally - in use-cases where Sentinel is wired deeply into the SDLC,
it is often integrated directly with bug tracking systems so developers can
interact with Sentinel results, unit-testing, and re-testing. Developers don't
have time to deal with potential issues, and if you feed them too many
findings they cannot validate as useful, they will start to rebel against the
security analysis technology being used.

This is what I was referring to when talking about how scanners that report
all replayable forms as 'CSRF' get heavily into false-positive territory, which
doesn't scale well as a broad-enterprise web app testing strategy.

Again - very different use-cases for Sentinel vs. Arachni I suspect.

PS - It is exciting how far you've come with Arachni so quickly. Great work.


Arian Evans
Software Security Statistician

On Sat, Apr 23, 2011 at 1:51 AM, Tasos Laskos <tasos.laskos@gmail.com> wrote:

Huh... I probably missed it in the article.

There are 2 difficulties I've spotted with it:
  o How do you compare forms? i.e. What are the criteria for 2 forms being equal?
  o Identifying CSRF tokens -- not all of them are going to be of the same format.

The first one is not hard to solve: create an ID string made from the
concatenated input names and the action URL.
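
A minimal sketch of one way to build such an ID (illustrative Python, not
Arachni's actual code):

import hashlib

def form_id(action_url, input_names):
    # Sort the input names so field order doesn't change the fingerprint.
    key = action_url + "|" + "|".join(sorted(input_names))
    return hashlib.sha1(key.encode("utf-8")).hexdigest()

# Two forms are then "equal" when their IDs match, e.g.:
# form_id("/transfer", ["amount", "to"]) == form_id("/transfer", ["to", "amount"])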

The second one is harder. Right now Arachni checks for CSRF tokens the same way
that Skipfish does: it looks for strings that look like base10/16/32/64.
Although these are the most common formats, they're not the only ones.
The only false positives I've seen are caused by this -- they are quite rare,
but they exist.
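
Roughly along these lines (an approximation of that heuristic with made-up
length thresholds, not Skipfish's or Arachni's actual patterns):

import re

TOKEN_PATTERNS = [
    re.compile(r"^[0-9]{8,}$"),                    # base10
    re.compile(r"^[0-9a-fA-F]{16,}$"),             # base16
    re.compile(r"^[A-Z2-7]{16,}=*$"),              # base32
    re.compile(r"^[A-Za-z0-9+/_-]{16,}={0,2}$"),   # base64 / url-safe base64
]

def looks_like_csrf_token(value):
    return any(p.match(value) for p in TOKEN_PATTERNS)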

So, all in all, yes, it works for me. If great minds do think alike then, short
of doing what you guys do, I doubt that we're going to figure out a better
*fully* automated way.

Maybe a DB with CSRF token signatures could help but we'll see...

I've accepted, in my old age, the fact that we've kind of hit a barrier
with our usual techniques, so I'm moving into baselines/meta-analysis
as a filtering mechanism.

Something akin to a subsystem saying:
-----------
Yes, huh... so these are the scan results... Wait, what?
There are 60 forms total and all of them are vulnerable? I seriously doubt that.
Well, report them, but put them in a special "These may be false positives" section.
-----------
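
As a sketch, assuming you know the total number of forms and have the list of
flagged findings (the 90% threshold is an arbitrary illustration):

def triage_csrf_findings(findings, total_forms, threshold=0.9):
    # If an implausibly large share of forms is flagged, demote everything to a
    # "possible false positives" section instead of reporting it as confirmed.
    if total_forms and len(findings) / total_forms >= threshold:
        return {"confirmed": [], "possible_false_positives": list(findings)}
    return {"confirmed": list(findings), "possible_false_positives": []}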

We've previously discussed my thoughts on the last subject but I've also
documented them here:
http://trainofthought.segfault.gr/2011/02/03/high-level-meta-analysis-in-webappsec-scanners/

I doubt that you business guys would like this approach, since you have
qualified people between the webapp and the scanner whose job is to interpret
the results, so you don't need it.

But in a system like Arachni I believe that even probable false positives
are worth reporting because they too have something to offer.
Since false positives appear mostly due to webapp or server quirks it's
worth looking into them during a pen test.

This way laypeople are happy that the pretty report at the end of the scan
isn't full of noise and
hacker-folk are happy that they are given the chance to dig deeper and see
why the webapp behaved in a way that produced false positives.

However, you guys have many more customers than the size of my user-base (x10
probably), so I imagine that you've seen a *lot* of edge cases which justify
your design decisions. What I mean is that it all comes down to pushing our
egos aside and implementing whatever gets the job done.

PS. If I'm being perfectly honest, only one person has reported a CSRF false
positive (https://github.com/Zapotek/arachni/issues/14) and, even though it
technically was an FP, it was a good thing overall that it happened.

On 04/23/2011 04:19 AM, Arian J. Evans wrote:

Tasos - thank you for explaining how you test for this!

We actually cover this testing paradigm in the article. We find it to be
littered with so many false positives that business owners wind up
ignoring the overall results, as we discuss in the article. There are
other drawbacks as well from what we have observed.

What have you found in terms of response to your results so far?


Arian Evans
Sybarite Software Security Scanning Savant

On Fri, Apr 22, 2011 at 1:01 PM, Tasos Laskos <tasos.laskos@gmail.com> wrote:

Hi,

When it comes to automated identification I[1] look for forms that only
appear *with* the set cookies and ignore the rest.
It's a fair bet to assume that those forms will be tightly coupled with the
current user/session and thus affect business logic in one way or another.

Then I check whether they contain any CSRF tokens[2]; if they don't, they are
logged and reported.
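
A simplified sketch of that filter (the PDF linked below describes the actual
4-pass reverse-diff analysis; here the forms_* arguments are assumed maps of
form ID to {input name: value}, and has_token is whatever token heuristic you use):

def csrf_candidates(forms_without_cookies, forms_with_cookies, has_token):
    # Keep only forms that show up when the session cookies are set...
    session_bound = {fid: inputs for fid, inputs in forms_with_cookies.items()
                     if fid not in forms_without_cookies}
    # ...and report those whose inputs carry no recognizable anti-CSRF token.
    return [fid for fid, inputs in session_bound.items()
            if not any(has_token(value) for value in inputs.values())]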

This provides a more detailed breakdown:

http://trainofthought.segfault.gr/wp-content/uploads/2010/10/Automated-detection-of-CSRF-worthy-HTML-forms-through-4-pass-reverse-Diff-analysis.pdf

Cheers,
Tasos L.

[1] When I say "I" I mean Arachni.
[2] Unrecognized token formats are a weakness of this approach -- you can't
anticipate everything.

On 04/22/2011 07:30 PM, Jeremiah Grossman wrote:

Hi All,

Over the last year I've been noticing increased interest in and
awareness of Cross-Site Request Forgery (CSRF). A welcome change, as for most
of the last decade few considered CSRF a vulnerability at all, but rather an
artifact of the way the web was designed. But, as normally happens,
the bad guys have been showing us how damaging CSRF can really be.

To help bring more clarity, we've recently published a detailed blog post
describing how our testing methodology approaches CSRF. What we're
interested in is how other pen-testers and developers are tackling the issue,
because automated detection is currently of limited help.

WhiteHat Security’s Approach to Detecting Cross-Site Request Forgery
(CSRF)

https://blog.whitehatsec.com/whitehat-security%E2%80%99s-approach-to-detecting-cross-site-request-forgery-csrf/

FYI: Several weeks ago we launched our new blog, where I'll be diverting
all my web security material. We've been piling up new content:
https://blog.whitehatsec.com/

Regards,

Jeremiah Grossman
Chief Technology Officer
WhiteHat Security, Inc.
http://www.whitehatsec.com/



RP
Rohit Pitke
Mon, Apr 25, 2011 8:06 AM

Steve,

You are basically assuming that there is XSS in some of the requests. If there
is no XSS, then even if you keep the value of your CSRF token the same as the
cookie, it won't matter.
Also, if we correctly implement CSRF protection on each and every page (including
GET), that would automatically mitigate XSS too, as a request carrying an XSS
string won't be accepted on the server side (provided CSRF token validation is
done strictly).

Thoughts?

Thanks,
Rohit


From: Paul McMillan <paul@mcmillan.ws>
To: Steven M. Christey <coley@rcf-smtp.mitre.org>
Cc: web security <websecurity@webappsec.org>
Sent: Sun, April 24, 2011 3:28:15 AM
Subject: Re: [WEB SECURITY] How are you tackling CSRF?

Rohit,

Good point about CSRF tokens that are present but not actually validated.

From your remediation step list, it sounds like you're using your
session token as the CSRF token. This is a really bad idea. Your
session cookie should be set to HTTP-only, to prevent it from being
stolen or misused by JavaScript in the event of an XSS bug. If you
use the same value in your form, JavaScript can access it, and
malicious attackers may be able to use that information to steal your
users' sessions.

If you hash your session cookie value and use that for your token, you
will (mostly) mitigate this problem.
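
A minimal sketch of that suggestion, using a keyed hash (HMAC) of the session ID
with an assumed server-side secret, so the HTTP-only cookie value itself never
appears in the page:

import hmac, hashlib

SECRET_KEY = b"server-side-secret"   # assumption: stored securely in practice

def csrf_token_for(session_id):
    return hmac.new(SECRET_KEY, session_id.encode("utf-8"), hashlib.sha256).hexdigest()

def csrf_token_valid(session_id, submitted_token):
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(csrf_token_for(session_id), submitted_token)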

Steve,

In general, ALL forms should have CSRF protection. Things like search
don't appear to be important at first, until you imagine how easily a
non-CSRF search could be used to cause a DoS on your site. Search is
usually an expensive operation. Don't make it easier for attackers.

-Paul



SS
Sebastian Schinzel
Tue, Apr 26, 2011 7:36 AM

Hi Rohit,

On Apr 25, 2011, at 10:06 AM, Rohit Pitke wrote:

You are basically assuming that there is XSS in some of the requests. If there is no XSS, then even if you keep the value of your CSRF token the same as the cookie, it won't matter.
Also, if we correctly implement CSRF protection on each and every page (including GET), that would automatically mitigate XSS too, as a request carrying an XSS string won't be accepted on the server side (provided CSRF token validation is done strictly).

Thoughts?

It would only mitigate reflected XSS, but not persistent XSS.

In general, I would refrain from telling the developers that CSRF tokens also
mitigate reflected XSS. I fear that the developers could accept this as a
"best practice to fix XSS" with all the negative implications.

"Fixing" reflected XSS with CSRF tokens leads to a tightly coupled security
system, because as soon as CSRF protection fails, you have a much bigger
problem with reflected XSS.

That said, CSRF tokens may be a temporary fix for reflected XSS that buys you
time to actually fix the reflected XSS with proper output encoding.
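
For an HTML body context, that proper fix is contextual output encoding; a
minimal illustration (real applications should lean on framework auto-escaping):

import html

def render_greeting(user_supplied_name):
    # Escape before interpolating untrusted input into HTML.
    return "<p>Hello, {}!</p>".format(html.escape(user_supplied_name))

# render_greeting("<script>alert(1)</script>")
# -> "<p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>"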

Cheers,
Sebastian

RP
Rohit Pitke
Tue, Apr 26, 2011 11:24 AM

Yes, I do not intend to say that a CSRF fix would provide XSS protection. I was
trying to say that if you have extremely strong CSRF protection on all pages
post-login, it would become *difficult* for an attacker to exploit XSS. This is,
of course, no excuse for not implementing a strong XSS fix.

Rohit


From: Sebastian Schinzel <ssc@seecurity.org>
To: Rohit Pitke <rohirp92@yahoo.com>
Cc: web security <websecurity@webappsec.org>
Sent: Tue, April 26, 2011 1:06:20 PM
Subject: Re: [WEB SECURITY] How are you tackling CSRF?

Hi Rohit,

On Apr 25, 2011, at 10:06 AM, Rohit Pitke wrote:

You are basically assuming that there is XSS in some of the requests. If there
is no XSS, then even if you keep the value of your CSRF token the same as the
cookie, it won't matter.
Also, if we correctly implement CSRF protection on each and every page (including
GET), that would automatically mitigate XSS too, as a request carrying an XSS
string won't be accepted on the server side (provided CSRF token validation is
done strictly).

Thoughts?

It would only mitigate reflected XSS, but not persistent XSS.

In general, I would refrain from telling the developers that CSRF tokens also
mitigate reflected XSS. I fear that the developers could accept this as a
"best practice to fix XSS" with all the negative implications.

"Fixing" reflected XSS with CSRF tokens leads to a tightly coupled security
system, because as soon as CSRF protection fails, you have a much bigger
problem with reflected XSS.

That said, CSRF tokens may be a temporary fix for reflected XSS that buys you
time to actually fix the reflected XSS with proper output encoding.

Cheers,
Sebastian
