[WEB SECURITY] RE: XSS-Phishing on Financial Sites (Tip of the iceberg)

RSnake rsnake at shocking.com
Mon Jun 26 13:46:35 EDT 2006


> I think there are two slightly different problems we are thinking
> about, which is why the proposal I made looks fairly different from
> the content-restrictions and site keys proposal.

 	That could easily be; sorry if I misunderstood the point.  This
is a pretty complicated topic.

> Site keys and the firefox content restriction proposal seem to be
> targeted for sites that are knowingly allowing end users to publish
> limited HTML content.  They have an HTML cleaner in place, but they
> need some method of handling cases where the HTML cleaner fails.
> That's one use case for an XSS-killer proposal.

 	Yes, that's exactly right... honestly, it was intended for super
large websites (I was working for one when I first thought of the
idea), not for small websites.  The big websites typically had a harder
problem with this, though that's starting to change a little with blog
spam.  These large sites have to host some amount of user-generated
content (think MySpace), and rather than forcing users into a bleak
text-only world, the business has to allow some HTML.  Without it,
MySpace would not be used, I promise you, so killing all HTML is not a
viable business option.
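
 	To make that concrete, here is a toy sketch (Python, purely
illustrative -- not anything MySpace or anyone else actually runs) of
the allowlist-style cleaner these sites end up needing: keep only the
handful of tags you've explicitly blessed, drop all attributes, and
escape everything else.

# Toy allowlist-based HTML cleaner.  Real cleaners also have to deal
# with attributes, CSS, encodings, broken markup, and so on.
from html.parser import HTMLParser
from html import escape

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}  # no <script>, no <img>

class Cleaner(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append("<%s>" % tag)      # drop all attributes
    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append("</%s>" % tag)
    def handle_data(self, data):
        self.out.append(escape(data))          # everything else becomes text

def clean(fragment):
    c = Cleaner()
    c.feed(fragment)
    c.close()
    return "".join(c.out)

print(clean("<b>hi</b><script>alert(document.cookie)</script>"))
# -> <b>hi</b>alert(document.cookie)

The point being that the moment the business says "allow some HTML,"
you own a cleaner like this, and every corner case it misses is an XSS
hole.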

> The other use case is for sites that are just plain broken, where end
> users are not supposed to be able to publish any HTML tags.  Site keys
> and content restrictions are massive overkill for a site like that.  A
> web developer with an ounce of clue can fix the problem without
> relying on any special browser extensions.  But many sites are coded
> up by developers without that ounce of security clue, and so reflected
> XSS is very common.  The situation for CSRF is similar; it's not that
> hard to prevent it once you know the problem exists.  The problem is
> that most developers don't think about it.  (This is why I don't think
> content restrictions will help much with reflected XSS: if you know
> enough to use content restrictions on a page, you've probably already
> dealt with most of the reflected XSS issues.)

 	Probably, but definitely not always.  Google, Yahoo, MySpace...
all of which have dealt with it and continue to be plagued by it.  It's
hard for big companies with lots of developers to deal with these
problems perfectly on the first try.
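
 	For anyone following along at home, the "ounce of clue" fix being
talked about above is usually just output encoding.  A minimal sketch of
the broken pattern and the fix (Python/WSGI here, purely illustrative):

# Minimal reflected-XSS demo: a page that echoes a query parameter back
# into its HTML.  The fix is to escape the input before it hits markup.
from html import escape
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def app(environ, start_response):
    q = parse_qs(environ.get("QUERY_STRING", "")).get("q", [""])[0]
    # Vulnerable: body = "<p>You searched for: %s</p>" % q
    body = "<p>You searched for: %s</p>" % escape(q)   # fixed
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()

# A request like /?q=<script>...</script> now renders as inert text
# instead of executing.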

> So the proposal I wrote up was trying to be a solution for
> administrators of sites, who think there might be XSS or CSRF
> somewhere, but don't have the time/money/skills/access to find and fix
> all of the places where XSS and CSRF exist on their site.  They could
> design a CSL policy to limit their exposure.

 	That makes more sense in that context, sure.
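
 	One thing worth spelling out, since the thread hasn't pinned down
a concrete CSL syntax: purely as a strawman (the policy structure,
matching rules, and names below are invented for illustration, not
taken from the actual proposal), the browser-side check might look
conceptually like this.

# Strawman of a browser-side CSL check: given the origin of the page
# that produced the link and the target URL on the protected site, is
# the request allowed?  First matching rule wins (dict order).
from fnmatch import fnmatch
from urllib.parse import urlparse

CSL_POLICY = {                     # hypothetical policy for http://mysite
    "/csrf.php": ["http://mysite"],                    # same-site links only
    "/search*":  ["*"],                                # anyone may link here
    "/*":        ["http://mysite", "http://partner.example"],
}

def link_allowed(referrer_origin, target_url):
    path = urlparse(target_url).path or "/"
    for path_pattern, origins in CSL_POLICY.items():
        if fnmatch(path, path_pattern):
            return any(fnmatch(referrer_origin, o) for o in origins)
    return True                    # no rule: default allow (also a policy choice)

print(link_allowed("http://malicious", "http://mysite/csrf.php?dosomethingbad"))  # False
print(link_allowed("http://mysite", "http://mysite/csrf.php?dosomethingbad"))     # True

The wildcards are doing a lot of work there, which is exactly where my
scalability worry below comes from.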

>> Firstly, the policies will vary from page to page and on some sites that
>> could mean millions of pages.
>
> Yeah, scalability could be an issue.  Wildcards are definitely an
> option for some site designs, but not for others.  Maybe as a test of
> the approach I'll pick a few sites and see how hard it is to design a
> CSL policy for them.

 	Pick Yahoo and Google if you want to get a feel for how big
sites would have an issue with your proposal.  Their sites live on
multiple domains and carry user content (in their caches, on personal
sites like googlepages.com, and in aggregated newsfeeds, among dozens of
other things).  Their site architecture is incredibly complex, so a
single policy file would be extraordinarily difficult to deal with.

>> Also, avoiding CSRF is very difficult… because many websites want to
>> allow linking to remote images.  If you link to a remote image, that
>> can actually be an HTTP 301 redirect back to a function on your site.
>
> No problem, unless I'm misunderstanding the attack.  The CSL policy
> would let you prohibit links from the remote image back to your site.
>
> 1) attacker uploads link to http://malicious/evil.jpg
> 2) victim's browser follows link to http://malicious/evil.jpg
> 3) evil.jpg redirects back to http://mysite/csrf.php?dosomethingbad
> 4) victim's browser checks the CSL policy for http://mysite, and
> discovers that http://malicious is not supposed to link to
> http://mysite/csrf.php
> 5) victim's browser avoids sending the request to csrf.php and sends
> the user to the front page of the web site instead.

 	Are you assuming that the parent DOM controls what the sub-page
can and cannot do in relation to it?  That could cause problems
depending on what you intend to do with it.  For instance, iframes
should not be controlled by the parent page (Microsoft didn't agree with
that philosophy with security=restricted, as we know, but still).  You
could argue, and I might agree, that iframes are different from images,
but they have the same effect if you are talking about 301s or
JavaScript embedded in the iframe.
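
 	Just to show how cheap the redirect trick in steps 1-3 above is,
the whole "image" on the attacker's side can be a few lines (the
hostnames are the placeholders from the example, not real sites).  The
browser follows the 301 with the victim's cookies attached, which is
the whole point of CSRF.

# The attacker's "image": anything fetching http://malicious/evil.jpg
# gets bounced to the CSRF URL on the victim site.  Placeholder hosts
# from the example above.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/evil.jpg":
            self.send_response(301)
            self.send_header("Location", "http://mysite/csrf.php?dosomethingbad")
            self.end_headers()
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()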

>> Another major problem for this security model is things like Akamai.
>> When you are running a very large website and you want to use a
>> content caching service to optimize your website, you inherently need
>> to link to other domains with your JavaScript.
>
> I'm not seeing why this would be an issue.  Couldn't the CSL policy
> just allow links from the Akamai domain back to your domain?  Or does
> that make the CSL policy too permissive to be effective?

 	The latter.  Akamai is not a fantastic example since they don't
have a single domain for each of their customers.  But think about your
small customers.  I post a blog and want to embed a YouTube movie.  It
would (or should, anyway) break because you have not opened your site to
youtube.com.  Opening it has obviously bad implications, and closing it
means I can't embed the movie remotely.  Tough choice!

> Again, I'm not really seeing the problem.  CSL policy affects what
> sites are allowed to link TO your site, not vice versa.  Your scripts
> will be able to link to anywhere... provided the targeted site's CSL
> policy lets that happen.  For this case, I don't think DOM based XSS
> is all that different from your garden variety XSS.

 	Maybe I am misunderstanding.  The CSL would only be able to say
which pages could link to YOUR pages?  So I would have to know that
http://www.google.com/search?q=RSnake is allowed to link to me, or I'd
have to know that http://ha.ckers.org/ (the links on the page) is
allowed to link to me?  The permutations on a big site, or even a
medium-sized site, would be far too large to document properly, would
be out of date the day you created it, and would therefore have to be
so flexible that it wouldn't stop a big chunk of what you needed to
stop.  Also, as a side note, lots of sites have redirects built into
their login scripts, so if I can link to your login page, once you
finish logging in you'll be directed to the function I wanted to send
you to anyway.  It's still an interesting proposal for small sites,
don't get me wrong; I'm just thinking about how many permutations I'd
have to make for even a site my size, with hundreds of pages and dozens
of links on each page.
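
 	The login-redirect point deserves a concrete example, because it
quietly defeats any "who may link to what" scheme: if the login page is
linkable and honours a redirect parameter, then a link to the login
page is as good as a link to anything behind it.  A sketch of the
common pattern (Python/WSGI; the parameter name "next" and the URLs are
made up):

# Common login-then-redirect pattern.  After a successful login the
# user is bounced wherever "next" points, so a link to
#   /login?next=/csrf.php%3Fdosomethingbad
# reaches csrf.php even if a policy only lets outsiders link to /login.
from urllib.parse import parse_qs

def credentials_ok(environ):
    return True        # stand-in; a real app checks the posted credentials

def login_app(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    destination = params.get("next", ["/home"])[0]
    if credentials_ok(environ):
        start_response("302 Found", [("Location", destination)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<form method=post>...login form...</form>"]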

 	One last comment: the size of these CSL pages has to stay
relatively small for our poor bandwidth-constrained modem users, or it
will never get adopted globally.

-RSnake
http://ha.ckers.org/
http://ha.ckers.org/xss.html
http://ha.ckers.org/blog/feed/