[WEB SECURITY] Web security improvement ideas

Gervase Markham gerv at gerv.net
Thu May 26 10:20:45 EDT 2005


Ivan Ristic wrote:
> A thick red line around the browser window? It doesn't matter as long
> as it's clear the user is somewhere "safe".

Do you think a thick red line around the browser window would make it clear?

I'm not being silly - this is a hard UI problem.

>>I'm not sure going through a URI is necessary. My "Content Restrictions"
>>proposal, which you may know of, implements this idea directly in the
>>header: http://www.gerv.net/security/content-restrictions/
> 
> The amount of information we wish to convey may be or become too large
> for a header. That's why I prefer a simple link to a file. Plus I can
> format the file any way I like and make it easy to read and
> understand.

But you need an extra HTTP request, which would probably have to
complete before parsing even starts. That could add a half-second
delay to page load, which is really significant. Plus (unless you use
either an XML format or a very simple parser) you need an extra parser
for your nice readable format.
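To make the contrast concrete, here's a sketch of the two approaches
in Python. The policy syntax is purely illustrative (see the draft for
the real grammar), as is the use of a Link header for the descriptor:

    # Header approach: the policy arrives on the same response as the
    # page, so the browser can enforce it with no extra round-trip.
    headers_a = [("Content-Restrictions", "script=none; frames=none")]

    # Linked-descriptor approach: the page merely points at a policy
    # file, which the browser must fetch and parse before it can
    # safely start rendering; that's the extra latency I mean.
    headers_b = [("Link", '</security-policy.txt>; rel="security-policy"')]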

What information do you plan to convey that would be too verbose for a 
header?

>>It is backwardly-compatible with existing user agents.
> 
> So is a link to a descriptor.

Indeed; I was merely noting that it was backwardly-compatible, not 
saying that making it a header vs. a link changed this.

>>These three things together present a problem. Most SSL certs expire
>>after a year. If the browser is to refuse to recognise expired certs,
>>how can there be any upgrade process for the browser's cert cache?
> 
> Because the browser would first encounter the current, valid,
> certificate. It can then tell the server "prove to me you possess this
> older certificate".

By extensions to which protocol?

> If the server cannot do that the trust would have
> to be established again in some other way.

What other way?

>>Also, what
>>happens if a person returns to a web app (say) 3 years later? Is the
>>webserver to continue to serve copies of all its previous certs for ever?
> 
> Do you see a practical problem with that?

Well, a hard disk crash on the server could mean that none of your 
customers ever trust you again. There are also serious interactions 
with the certificate revocation model.

>>What problem are you trying to solve with these suggestions?
> 
> If a user goes to a web site that looks like the real thing but it is
> not I want the browser to be able to either detect that (if the
> address is the same) 

If the address is the same but you are in the wrong place, SSL has 
failed completely. An SSL cert ensures that if your browser says 
"www.foo.com", then that's where the content came from.

> or tell me that I've arrived at a completely new
> destination. 

If you are at a different destination, the browser should make it clear 
where you are. See Firefox's domain indicator for a step in the right 
direction.

> The authentication methods in use today generally focus on the client
> demonstrating (to the server) it knows the password. I am proposing a
> transition to an authentication method that supports mutual
> authentication, i.e. requires the server to demonstrate it knows the
> password too (to the client). This idea is already part of Digest auth
> but I don't think it was implemented anywhere.

OK - thanks.

> Not quite, there is another issue. If the server can prove to you it
> knows the password then you also know you are at the *correct*
> location, which is not quite the same. E.g. imagine a web site
> pretending to be your bank's web site. A login form appears, you enter
> your password, and the web site behaves as if the authentication was
> successful. So at this point you believe you are at the right place,
> but the web site can try to trick you into disclosing your private
> information in some other way.

OK. So if you have SSL and Digest Auth with server auth, doesn't that 
solve the problem? Why do you need some sort of additional 
browser-integrated forms-based authentication?
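(For reference: RFC 2617's mutual authentication has the server return
an "rspauth" digest in the Authentication-Info header, computed like
the client's response digest but with an empty method in A2, which it
can only do if it knows the password hash. A rough sketch in Python,
assuming qop=auth, names illustrative:)

    import hashlib

    def md5_hex(s):
        return hashlib.md5(s.encode()).hexdigest()

    def digest_pair(user, realm, password, method, uri, nonce, nc, cnonce):
        # HA1 is the shared secret material (RFC 2617).
        ha1 = md5_hex("%s:%s:%s" % (user, realm, password))
        # Client proof: A2 includes the request method.
        response = md5_hex("%s:%s:%s:%s:auth:%s" % (
            ha1, nonce, nc, cnonce, md5_hex("%s:%s" % (method, uri))))
        # Server proof (rspauth): A2 has an *empty* method, so only a
        # party that knows HA1 can produce it.
        rspauth = md5_hex("%s:%s:%s:%s:auth:%s" % (
            ha1, nonce, nc, cnonce, md5_hex(":%s" % uri)))
        return response, rspauth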

>>>* Design a mechanism for explicit log-out. And a mechanism for session timeout
>>>  (in the browser). Delete all session information after it is terminated.
>>
>>Surely single-session cookies combined with an in-web-UI logout link
>>achieve this?
> 
> It can be achieved, but as a user I don't know whether it will be
> done. Think about it simply as another layer of protection. If the
> browser purges my credentials and the session information from memory
> I should be safe no matter whether the application is implemented
> correctly or not.

A great deal of security depends on correct implementation of the 
application. Why should the browser cover for the app in this particular 
instance?
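(By "single-session cookies plus a logout link" I mean simply this
pattern; a sketch, with illustrative names:)

    import secrets

    sessions = {}  # server-side store: token -> session data

    def login(user):
        token = secrets.token_hex(16)
        sessions[token] = {"user": user}
        # Omitting Expires/Max-Age makes this a single-session cookie:
        # the browser discards it when it shuts down.
        return "Set-Cookie: sid=%s; secure; path=/" % token

    def logout(token):
        # The in-web-UI logout link lands here: destroy the server-side
        # session *and* tell the browser to drop the cookie at once.
        sessions.pop(token, None)
        return "Set-Cookie: sid=deleted; max-age=0; path=/"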

>>>* Make SSL mandatory.
>>>
>>>* No information must go out of a web application (e.g. external links must
>>>  not be followed, no requests from the client-side code).
>>
>>This would rather break e.g. webmail, wouldn't it? What happens if
>>someone sends me an HTML email with links in?
> 
> Hmm, good point. It's a problem that needs to be resolved somehow. It
> would certainly break embedded images and I think that's a good thing.

I think you will have a serious problem in the real world with this 
restriction. If you are trying to mitigate cross-site scripting, there 
are other, better and finer-grained ways to do it. Again, see 
Content-Restrictions for an example.

> You are right, sorry about that. I meant to say Cross-Site Request
> Forgery must not be allowed. E.g. if you are logged in at your bank's
> secure web application you don't want a rogue web site you happen to
> be visiting at the same time (using your browser, in another window)
> to initiate a bank transfer on your behalf. Some call this session
> riding, and there are other names for it.

Yes. IMO, it's up to web application implementations to protect against 
this using form keys. They'd have to do that anyway, for user agents 
which didn't support your ideas. So, in fact, trying to fix this problem 
for them just reduces the pressure on them to fix it themselves.
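By "form keys" I mean the usual hidden-field token scheme; a sketch
(key handling and names illustrative):

    import hashlib, hmac, secrets

    SECRET = secrets.token_bytes(32)  # per-deployment key (illustrative)

    def form_key(session_id):
        # Bind the key to the session. A rogue site can make the
        # browser *send* a request, but it cannot read this value to
        # include it in the submission.
        return hmac.new(SECRET, session_id.encode(),
                        hashlib.sha256).hexdigest()

    def check_form_key(session_id, submitted):
        # Reject any state-changing POST whose hidden field does not
        # carry the key that was issued with the form.
        return hmac.compare_digest(form_key(session_id), submitted)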

> To solve the CSRF problem we can't simply reject all links into a
> secure web application. The idea is to have the descriptor list the
> URIs within the application that can be linked to freely.

I see all sorts of problems here:

- Sites should not be able to restrict where a user's browser can go
(users won't like it, and it could be abused).
- To get the descriptor, you need to make at least one request of the 
web application; that could be the one which does the unwanted action.
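To make the second point concrete (descriptor format invented purely
for illustration):

    # Hypothetical descriptor contents: the URIs within the
    # application that may be linked to freely from outside.
    LINKABLE = {"/", "/login", "/help"}

    def allow_cross_site_link(path):
        return path in LINKABLE

    # The catch: the browser only learns LINKABLE by fetching the
    # descriptor, which is itself a request to the application. If
    # that first request can have side effects, the policy arrives
    # too late.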

>>>* Separate cookies from session tokens, produce a new state maintenance
>>>  RFC that is non-ambiguous. Session tokens are not to be accessed by
>>>  client-side code. They mustn't be visible to the end user either. Make session
>>>  tokens worthless by separating authentication from session management (e.g.
>>>  require authentication to take place for every request).
>>
>>Content-Restrictions also solves this problem.
> 
> That can be debated. 

Well, it definitely prevents client-side code from accessing cookies. 
BTW, there's no way you can make cookies or any session token invisible 
to the end user; again, it's a privacy issue.
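(For comparison, the existing HttpOnly cookie attribute already gives
you the first half of that: script can't read the cookie, but the user
can still inspect it in the browser's cookie manager.)

    # HttpOnly (a Microsoft extension) keeps the cookie out of reach
    # of document.cookie; it does not, and cannot, hide it from the
    # end user.
    set_cookie = "Set-Cookie: sid=abc123; secure; HttpOnly; path=/"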

> But since I didn't present my ideas in opposition to your
> Content-Restrictions proposal, let's just stick to the main topic of
> this thread.

Well, I don't think they are in opposition. I'm reading your proposals 
to look for problems my spec doesn't yet solve but could :-)

Gerv

