[WEB SECURITY] Web security improvement ideas

Guy Cimo guy at cecorp.com
Fri May 27 15:19:53 EDT 2005




Proxy Clients ensure that real clients submit valid requests

If you could rely on client systems to execute HTML [including script]
exactly as written, and you could enhance browsers any way you want without
having to update client machines, then ??????????

A security software guru at the RSA conference in San Francisco impatiently
listened to me describing CEWALL web application firewall. After some
frustrating moments and more time than the guru wanted to spend on this
subject [“I don’t have time for this unless you can show me how CEWALL will
help me sell my product”], the guru enlightened me on how I might make
CEWALL easier to understand. Whereas I had been referring to CEWALL as a
reverse proxy server [this is true], I had failed to convey that it is
also, and more importantly, a proxy client.
If you proxy the client with a controlled system that accepts client input
[selections and data entry], then that input is subjected to the policies
embedded in the HTML, as executed by the controlled system's browser. In
other words, it doesn't matter what the client browser does or doesn't do.
It is the controlled system's browser that executes the HTML policies, all
of them, including script.

This approach lets the controlled system use standard or even highly
customized browsers that the client cannot modify or bypass. There is no
need to develop standards, convince browser vendors to rewrite and maintain
their browsers accordingly, or cause every client in the world to update
its system.
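To make the proxy-client idea concrete, here is a minimal sketch [not CEWALL's actual implementation; the class and field names are invented for illustration]. The proxy records the input policies embedded in the HTML it served (allowed select options, maxlength limits) and re-checks every client submission against them, so a tampered or scripted client browser cannot bypass them:

```python
# Hypothetical sketch: the proxy remembers the constraints in the HTML it
# served and validates submissions against them server-side.
from html.parser import HTMLParser

class FormPolicy(HTMLParser):
    """Extracts input constraints from the HTML the proxy served."""
    def __init__(self):
        super().__init__()
        self.maxlen = {}    # input name -> maxlength limit
        self.options = {}   # select name -> set of allowed values
        self._select = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and "name" in a and "maxlength" in a:
            self.maxlen[a["name"]] = int(a["maxlength"])
        elif tag == "select" and "name" in a:
            self._select = a["name"]
            self.options[self._select] = set()
        elif tag == "option" and self._select and "value" in a:
            self.options[self._select].add(a["value"])

    def handle_endtag(self, tag):
        if tag == "select":
            self._select = None

    def validate(self, submission):
        """True only if every field obeys the served policy."""
        for name, value in submission.items():
            if name in self.maxlen and len(value) > self.maxlen[name]:
                return False
            if name in self.options and value not in self.options[name]:
                return False
        return True

policy = FormPolicy()
policy.feed('<input name="zip" maxlength="5">'
            '<select name="plan"><option value="basic">Basic</option>'
            '<option value="pro">Pro</option></select>')
print(policy.validate({"zip": "94107", "plan": "pro"}))    # True
print(policy.validate({"zip": "94107", "plan": "admin"}))  # False
```

The point is that the checks run on the controlled system, regardless of what the real client's browser did or did not enforce.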

Guy Cimo

-----Original Message-----
From: Ivan Ristic [mailto:ivan.ristic at gmail.com]
Sent: Friday, May 27, 2005 3:35 AM
To: Gervase Markham
Cc: websecurity at webappsec.org
Subject: Re: [WEB SECURITY] Web security improvement ideas


On 5/27/05, Gervase Markham <gerv at gerv.net> wrote:
> Ivan Ristic wrote:
> >>But you need an extra HTTP request which must be completed probably even
> >>before parsing starts. This could add a half-second delay to page load,
> >>which is really significant. Plus (unless you use either an XML format
> >>or a very simple parser) you need an extra parser for your nice readable
> >>format.
> >
> > I am sure the descriptor can be cached for the duration of the
> > session. XML format is fine by me.
>
> The descriptor is unlikely to be the same for every page in the web
> application - at least, if you want the best protection against
> cross-site scripting.

Maintaining a per-page descriptor is just not practical. No one will
want to do it. If there really is a need for per-page configuration it
can be a part of the same descriptor.
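One way the single cached descriptor with per-page configuration might look [the XML element and attribute names here are invented for illustration, not a proposed standard]: site-wide defaults plus optional per-page overrides, fetched once and cached for the session:

```python
# Hypothetical single security descriptor: one site-wide default,
# per-page overrides folded into the same file.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<security-descriptor>
  <default allow-inline-script="no" allow-forms="yes"/>
  <page path="/reports" allow-inline-script="yes"/>
</security-descriptor>
"""

def policy_for(path, xml_text):
    """Merge the default policy with any override for this page."""
    root = ET.fromstring(xml_text)
    policy = dict(root.find("default").attrib)
    for page in root.findall("page"):
        if page.get("path") == path:
            override = dict(page.attrib)
            override.pop("path")
            policy.update(override)
    return policy

print(policy_for("/reports", DESCRIPTOR))  # inline script allowed here
print(policy_for("/login", DESCRIPTOR))    # site-wide default applies
```

Maintenance stays at the site level, and the rare page that needs different rules gets one extra line rather than its own descriptor.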


> >>If the address is the same but you are in the wrong place, SSL has
> >>failed completely. An SSL cert ensures that if your browser says
> >>"www.foo.com", then that's where the content came from.
> >
> > That's exactly my point. I want another layer of security for cases
> > when SSL has failed completely. Are you arguing it cannot happen?
>
> Has it ever happened?

I was thinking more along the lines of someone getting a valid
certificate for a domain name that does not belong to them, e.g. using
a bit of social engineering. It may have happened; it sounds plausible
to me.


> What happens if your new layer of security fails completely? Surely we
> need a third layer for that case?

In general, yes. We need multiple layered protective measures to avoid
having a single point of failure.

> There is no way you can avoid having a user take some action (even if
> it's just moving their eyes) to make sure they are in the right place,
> because the definition of "right place" is solely in the user's brain.
> The browser cannot accurately determine it.

It can if you define "right place" as "the same place I visited every
single time before".
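That definition can be sketched as trust-on-first-use pinning [a minimal illustration, not a concrete proposal; the function and variable names are invented]: remember the server certificate's fingerprint on the first visit and refuse to proceed if it later changes, even when the new certificate would otherwise validate:

```python
# Trust-on-first-use sketch of "the same place I visited every single
# time before": pin the certificate fingerprint per hostname.
import hashlib

def check_pin(host, cert_der, pins):
    """pins maps hostname -> previously seen SHA-256 fingerprint."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        pins[host] = fp          # first visit: remember it
        return True
    return pins[host] == fp      # later visits: must match

pins = {}
first = b"fake DER certificate bytes"
print(check_pin("bank.example", first, pins))         # True (first use)
print(check_pin("bank.example", first, pins))         # True (unchanged)
print(check_pin("bank.example", b"different", pins))  # False
```

A browser applying this check catches the "valid certificate for someone else's domain" case above, because the attacker's certificate will not match the pin.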


> >>OK. So if you have SSL and Digest Auth with server auth, doesn't that
> >>solve the problem? Why do you need some sort of additional
> >>browser-integrated forms-based authentication?
> >
> > It doesn't solve the problem. For the scheme to work
> > non-password-leaking authentication scheme must be mandatory.
> > Otherwise, phishers will just use what suits them (e.g. form-based
> > auth) and keep collecting the passwords.
>
> Except that if I've been logging into my bank using my browser's
> Digest-Auth UI for months, and suddenly I get asked to type my password
> into a web page instead, I should be suspicious. If I'm not, I'm
> probably the sort of person who types my CC number into any web form
> that asks for it anyway.

You seem to be judging things by what you would do, but that's not
realistic. We need these additional mechanisms to protect the
innocent, not skilled computer professionals. Besides, I want to be
able to log in to my bank's web site without having to go through a
five-page checklist just to determine I am not being attacked in
some way.
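For reference, the non-password-leaking scheme discussed above, HTTP Digest authentication (RFC 2617, shown here without the optional qop extensions), can be sketched as follows. The client sends only an MD5 hash bound to a server-chosen nonce, so a phishing site that captures the response cannot recover the password or replay the response against a server that issued a different nonce:

```python
# RFC 2617 Digest response (no qop): the password never crosses the
# wire, only a hash tied to this server's nonce.
import hashlib

def md5hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    ha1 = md5hex(f"{user}:{realm}:{password}")  # secret part
    ha2 = md5hex(f"{method}:{uri}")             # request part
    return md5hex(f"{ha1}:{nonce}:{ha2}")

# Two different server nonces yield unrelated responses:
r1 = digest_response("alice", "bank", "s3cret", "GET", "/", "nonce-1")
r2 = digest_response("alice", "bank", "s3cret", "GET", "/", "nonce-2")
print(r1 != r2)  # True
```

This is why the scheme only helps if it is mandatory: a phisher who can fall back to a plain HTML form gets the cleartext password anyway.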

--
Ivan Ristic
Apache Security (O'Reilly) - http://www.apachesecurity.net
Open source web application firewall - http://www.modsecurity.org

---------------------------------------------------------------------
The Web Security Mailing List
http://www.webappsec.org/lists/websecurity/

The Web Security Mailing List Archives
http://www.webappsec.org/lists/websecurity/archive/


