[WEB SECURITY] How are you tackling CSRF?
Arian J. Evans
arian.evans at anachronic.com
Sat Apr 23 20:00:31 EDT 2011
Tasos - absolutely fair response. I suspect our customer types and use-cases
for our platforms are entirely different. You probably get the per-app focused
pen-tester, and reporting "highly probable, high-value" targets for them to
review by hand is indeed valuable.
At WhiteHat we do the same thing internally with Sentinel. The most common
use-case for Sentinel (scanning applications all year long, all day long or
nightly) requires a similar approach to what you describe below.
Incidentally, this is also how we catch new code, new forms, and interesting new
implementations and patterns that require us to go in and tweak Sentinel
or write new custom tests.
The difference is that we validate all the results for our customers, as you noted.
Our customer typically has one person responsible for dealing with the results
of dozens of applications; that ratio is usually 1:10 to 1:50.
Which means they can be dealing with hundreds to thousands of "vulnerabilities",
so they don't have time to deal with 'potentials' or false positives --
otherwise they can't get anything done internally.
Additionally - in use-cases where Sentinel is wired deeply into the SDLC,
it is often integrated directly with bug tracking systems so developers can
interact with Sentinel results, unit-testing, and re-testing. Developers don't
have time to deal with potential issues, and if you feed them too many
findings they cannot validate as useful, they will start to rebel against the
security analysis technology being used.
This is what I was referring to when talking about how scanners that report
all replayable forms as 'CSRF' get heavily into false-positive territory, which
doesn't scale well as a broad-enterprise web app testing strategy.
Again - very different use-cases for Sentinel vs. Arachni I suspect.
PS - It is exciting how far you've come with Arachni so quickly. Great work.
Arian Evans
Software Security Statistician
On Sat, Apr 23, 2011 at 1:51 AM, Tasos Laskos <tasos.laskos at gmail.com> wrote:
> Huh... I probably missed it in the article.
> There are 2 difficulties I've spotted with it:
> o How do you compare forms? i.e. What are the criteria for 2 forms being
> considered the same?
> o Identifying CSRF tokens -- not all of them are going to be of the same
> format.
> The first one is not hard to solve: create an ID string made from the
> concatenated input names and the action URL.
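The ID-string scheme Tasos describes can be sketched roughly like this (Python used purely for illustration; `make_form_id` is a hypothetical helper name, not Arachni code):

```python
def make_form_id(action_url, input_names):
    """Build a stable ID from the action URL plus concatenated input names.

    Sorting the names first means the ID does not change when the inputs
    appear in a different order in the HTML.
    """
    return action_url + "::" + ",".join(sorted(input_names))

# Two renderings of the same login form produce the same ID:
a = make_form_id("/login", ["user", "pass", "csrf_token"])
b = make_form_id("/login", ["csrf_token", "user", "pass"])
assert a == b
# A different form (different action or inputs) gets a different ID:
assert a != make_form_id("/search", ["q"])
```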
> The second one is harder. Right now Arachni checks for CSRF tokens the same
> way that Skipfish does; it looks for strings that look like base10/16/32/64
> values. Although these are the most common formats, they're not the only ones.
> The only false positives I've seen are caused by this -- they are quite rare
> but they exist.
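A rough sketch of the base10/16/32/64 heuristic described above (the patterns and length thresholds here are illustrative assumptions, not the actual Skipfish or Arachni rules):

```python
import re

# Treat a value as a likely anti-CSRF token if it is a sufficiently long
# string in a base10/16/32/64-style alphabet. Thresholds are illustrative.
TOKEN_PATTERNS = [
    re.compile(r"^\d{8,}$"),                      # base10
    re.compile(r"^[0-9a-fA-F]{16,}$"),            # base16 (hex)
    re.compile(r"^[A-Z2-7]{16,}=*$"),             # base32
    re.compile(r"^[A-Za-z0-9+/_-]{16,}={0,2}$"),  # base64 / URL-safe base64
]

def looks_like_csrf_token(value):
    return any(p.match(value) for p in TOKEN_PATTERNS)
```

Note that any sufficiently long alphanumeric value matches the base64 pattern, which is exactly the kind of edge case behind the rare false positives mentioned above.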
> So, all in all, yes, it works for me. If great minds do think alike then,
> short of doing what you guys do, I doubt that we're going to
> figure out a better *fully* automated way.
> Maybe a DB with CSRF token signatures could help, but we'll see...
> I've accepted, in my old age, the fact that we've kind of hit a barrier
> with our usual techniques, so I'm moving into baselines/meta-analysis
> as a filtering mechanism.
> Something akin to a subsystem saying:
> Yes, huh... so these are the scan results... Wait, what?
> There are 60 forms total and all of them are vulnerable? I seriously doubt that.
> Well, report them, but put them in a special "These may be false positives"
> category.
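The baseline/meta-analysis idea could look something like this in outline (the 90% threshold and the `triage` function are hypothetical, purely to illustrate the filtering step):

```python
def triage(findings, total_forms, threshold=0.9):
    """Demote results to a 'possible false positives' bucket when an
    implausibly high fraction of all forms on the site is flagged."""
    if total_forms and len(findings) / total_forms >= threshold:
        return {"confirmed": [], "possible_false_positives": list(findings)}
    return {"confirmed": list(findings), "possible_false_positives": []}

# All 60 forms flagged? Almost certainly noise -- demote rather than drop.
report = triage(findings=list(range(60)), total_forms=60)
assert report["confirmed"] == []
assert len(report["possible_false_positives"]) == 60

# Only a handful flagged? Treat them as regular findings.
report = triage(findings=[1, 2, 3], total_forms=60)
assert report["confirmed"] == [1, 2, 3]
```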
> We've previously discussed my thoughts on the last subject but I've also
> documented them here:
> I doubt that you business guys would like this approach, since you have
> qualified people between
> the webapp and the scanner whose job is to interpret the results, so you
> don't need it.
> But in a system like Arachni I believe that even probable false positives
> are worth reporting because they too have something to offer.
> Since false positives appear mostly due to webapp or server quirks it's
> worth looking into them during a pen test.
> This way laypeople are happy that the pretty report at the end of the scan
> isn't full of noise and
> hacker-folk are happy that they are given the chance to dig deeper and see
> why the webapp behaved in a way that produced false positives.
> However, you guys have a much bigger customer base than mine (x10
> probably), so I imagine that you've seen
> a *lot* of edge cases which justify your design decisions.
> What I mean is that it all comes down to pushing our egos aside and
> implementing whatever gets the job done.
> PS. If I'm being perfectly honest, only one person has reported a CSRF false
> positive (https://github.com/Zapotek/arachni/issues/14) and,
> even though it technically was an FP, it was a good thing overall that it
> was reported.
> On 04/23/2011 04:19 AM, Arian J. Evans wrote:
>> Tasos - thank you for explaining how you test for this!
>> We actually cover this testing paradigm in the article. We find it to be
>> littered with so many false positives that business owners wind up
>> ignoring the overall results, as we discuss in the article. There are
>> other drawbacks as well from what we have observed.
>> What have you found in terms of response to your results so far?
>> Arian Evans
>> Sybarite Software Security Scanning Savant
>> On Fri, Apr 22, 2011 at 1:01 PM, Tasos Laskos <tasos.laskos at gmail.com> wrote:
>>> When it comes to automated identification, I look for forms that only
>>> appear *with* the session cookies set and ignore the rest.
>>> It's a fair bet to assume that those forms will be tightly coupled with the
>>> current user/session and thus affect business logic in one way or another.
>>> Then I check whether they contain any CSRF tokens; if they don't, they are
>>> logged and reported.
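The two-crawl detection flow Tasos describes might be sketched as follows (the data shapes and the `find_csrf_candidates` name are assumptions for illustration; Arachni's actual implementation differs):

```python
def find_csrf_candidates(anon_forms, session_forms, looks_like_token):
    """Flag session-coupled forms that carry no recognisable anti-CSRF token.

    anon_forms / session_forms: dicts mapping a form ID to a dict of
    {input name: input value}, from crawls without and with session cookies.
    looks_like_token: predicate deciding whether a value looks token-like.
    """
    # Forms that only show up when the session cookies are set are likely
    # tied to the current user, so those are the interesting ones.
    session_only = set(session_forms) - set(anon_forms)
    flagged = []
    for form_id in sorted(session_only):
        values = session_forms[form_id].values()
        if not any(looks_like_token(v) for v in values):
            flagged.append(form_id)  # no token-like input => CSRF candidate
    return flagged

# Toy example: the transfer form is session-only and has no token value.
anon = {"/search": {"q": ""}}
sess = {
    "/search":   {"q": ""},
    "/transfer": {"amount": "", "to": ""},
    "/settings": {"email": "", "tok": "a" * 32},
}
assert find_csrf_candidates(anon, sess, lambda v: len(v) >= 16) == ["/transfer"]
```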
>>> This provides a more detailed breakdown:
>>> Tasos L.
>>>  When I say "I" I mean Arachni.
>>>  Unrecognized token formats are a weakness of this approach -- you can't
>>> anticipate everything.
>>> On 04/22/2011 07:30 PM, Jeremiah Grossman wrote:
>>>> Hi All,
>>>> Over the last year I've been noticing increased interest in and
>>>> awareness of Cross-Site Request Forgery (CSRF). A welcome change, as for most
>>>> of the last decade few considered CSRF a vulnerability at all, but rather an
>>>> artifact of the way the web was designed. But, as it normally goes,
>>>> the bad guys have been showing us how damaging CSRF can really be.
>>>> To help bring more clarity, we've recently published a detailed blog post
>>>> describing how our testing methodology approaches CSRF. What we're
>>>> interested in is how other pen-testers and developers are tackling the
>>>> issue, because automated detection is currently of limited help.
>>>> WhiteHat Security’s Approach to Detecting Cross-Site Request Forgery
>>>> FYI: Several weeks ago we launched our new blog, where I'll be diverting
>>>> all my web security material. We've been piling up new content:
>>>> Jeremiah Grossman
>>>> Chief Technology Officer
>>>> WhiteHat Security, Inc.
>>>> The Web Security Mailing List
>>>> WebSecurity RSS Feed
>>>> Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA
>>>> WASC on Twitter
>>>> websecurity at lists.webappsec.org