websecurity@lists.webappsec.org

The Web Security Mailing List


A technique for bypassing request header restriction of XMLHttpRequest

Steven M. Christey
Fri, Jan 6, 2012 6:14 PM

On Fri, 6 Jan 2012, Hill, Brad wrote:

> I would say the opposite.  Given a documented, common practice that,
> however hacky, has been established for nearly 20 years, there is every
> reason for a new technology (or at least, relatively new) not to
> introduce vulnerabilities to a huge and established ecosystem.

(disclaimer: I'm late to this discussion and haven't seen all the posts.)

The industry does this all the time, for example by accounting for
non-standard browser behavior with XSS defenses (as reflected in the XSS
cheat sheet, HTML5 Security Cheatsheet, etc).  Not that this has been
wildly successful given all the XSS variants that keep cropping up that
only apply to 1 or 2 browsers, but it seems like a reasonable approach
short of normalizing inputs/outputs in a predictable fashion
(non-standard), or modifying the standards to design out these kinds of
problems in the first place (unrealistic, infeasible, and maybe
impossible).  Anti-virus products, IDS, mail scanners, and other products
also have to wrestle with the same issue of accounting for
popular-but-non-standard behaviors in order to work properly.  Should they
be responsible for this kind of protection?  Philosophically speaking, no;
but operationally speaking, they have no choice.

- Steve

> I'm not saying XHR should actually coerce non-alphanumeric characters
> down when sending headers - only that the security checks for banned
> headers should do this for purposes of pre-flight comparison.  There is
> an obligation not to introduce new vulnerabilities into existing
> systems, and the "cost" to long-term compatibility with HTTP is very
> small: the blacklist becomes a set of simple regexes instead of string
> literals.
>
> Brad Hill
> Co-chair, WebAppSec WG



Tim
Fri, Jan 6, 2012 6:34 PM

Hi Steve,

> (disclaimer: I'm late to this discussion and haven't seen all the posts.)
>
> The industry does this all the time, for example by accounting for
> non-standard browser behavior with XSS defenses (as reflected in the
> XSS cheat sheet, HTML5 Security Cheatsheet, etc).  Not that this has
> been wildly successful given all the XSS variants that keep cropping
> up that only apply to 1 or 2 browsers, but it seems like a
> reasonable approach short of normalizing inputs/outputs in a
> predictable fashion (non-standard), or modifying the standards to
> design out these kinds of problems in the first place (unrealistic,
> infeasible, and maybe impossible).  Anti-virus products, IDS, mail
> scanners, and other products also have to wrestle with the same
> issue of accounting for popular-but-non-standard behaviors in order
> to work properly.  Should they be responsible for this kind of
> protection?  Philosophically speaking, no; but operationally
> speaking, they have no choice.

I appreciate what you're saying, and yes, many de facto hacks exist to
protect people.  The more I think about it, the more I think that
nearly all web frameworks probably do this header namespace squashing.
The squashing clearly creates security problems, as we have discussed.

Let us balance that, then, against the security downsides of making a
change to web frameworks.  If today, "X^MyHeader" and "X*MyHeader" are
treated as equivalent, what is the harm in not treating them as
equivalent in the future?  The only header an application can search
for is "X_MYHEADER".  If we make such a change, they're still going to
be looking up "X_MYHEADER".  Sure, applications sending that header
would need to be more careful about what format of the header they are
sending (such as sending "X_MyHeader" in all cases), but that's not a
security issue, just a backward compatibility one.  How would the
application be harmed in a security sense?
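
For illustration, here is a minimal sketch of the squashing under
discussion, assuming a framework converts every non-alphanumeric
character in a header name to "_" before building the HTTP_* key;
the helper name is made up for the example:

   import re

   def squashed_env_name(header_name):
       # Hypothetical framework behavior: any character that is not a
       # letter or digit becomes "_", then "HTTP_" is prepended.
       return "HTTP_" + re.sub(r"[^A-Za-z0-9]", "_", header_name).upper()

   for name in ("X-MyHeader", "X_MyHeader", "X.MyHeader", "X^MyHeader", "X*MyHeader"):
       print("%-11s -> %s" % (name, squashed_env_name(name)))

   # Every spelling collapses to HTTP_X_MYHEADER, so an application that
   # looks up that one key cannot tell which variant the client sent.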

Brad's suggested approach would be to start restricting the types of
headers that can be sent to begin with.  So if XMLHttpRequest simply
blocked the less common variants of these headers (essentially
rejecting "Accept[!#$%&'*+-._^`|~]Encoding", and so on), then great,
we've prevented these headers from working their way into these
requests at the web layer.  But as I've pointed out, there are other
situations where this approach would need to be taken.  Essentially,
to enforce it, all servers would need to start rejecting the uncommon
characters.  This amounts to changing the HTTP spec.  From a security
perspective, this clearly prevents a lot of problems, but changing
HTTP header names from a permissive set of characters to a restrictive
set is difficult.  Any legacy servers will continue to be "vulnerable"
in a sense, basically forever.
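
For concreteness, a minimal sketch of the pre-flight comparison idea,
matching banned header names as patterns rather than string literals,
assuming any run of non-alphanumeric characters is normalized before
comparing; the forbidden list below is illustrative, not the actual
XHR specification's list:

   import re

   # Illustrative subset of banned header names; the real list is longer.
   FORBIDDEN = ("Accept-Charset", "Accept-Encoding", "Transfer-Encoding", "User-Agent")

   def normalize(name):
       # Treat any run of non-alphanumeric characters as equivalent to "-".
       return re.sub(r"[^A-Za-z0-9]+", "-", name).lower()

   FORBIDDEN_NORMALIZED = set(normalize(n) for n in FORBIDDEN)

   def is_banned(header_name):
       return normalize(header_name) in FORBIDDEN_NORMALIZED

   for h in ("X-Requested-With", "User-Agent", "User.Agent", "Transfer*Encoding"):
       print("%s: %s" % (h, is_banned(h)))  # only X-Requested-With is allowed

Nothing about the header actually sent changes; only the check becomes
separator-insensitive.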

Both changes are very difficult from an implementation perspective,
but if there are no security downsides to fixing web frameworks
(allowing most/all HTTP headers), then that seems like the better
approach to me, not to mention easier.

tim

Hill, Brad
Fri, Jan 6, 2012 7:31 PM

Looks like this is a quite old issue, dating to 2007 with Flash:

http://kuza55.blogspot.com/2007/07/exploiting-reflected-xss.html

Adobe patched it, but the W3C editors for XHR won't:

http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0024.html

-----Original Message-----
From: websecurity-bounces@lists.webappsec.org [mailto:websecurity-
bounces@lists.webappsec.org] On Behalf Of Hill, Brad
Sent: Friday, January 06, 2012 9:19 AM
To: Tim
Cc: websecurity@lists.webappsec.org
Subject: Re: [WEB SECURITY] A technique for bypassing request header
restriction of XMLHttpRequest

> I think given the clearly defined HTTP RFC, we're really looking at
> age-old insecure practices in interpreting headers on the web server.
> There's no reason that headers with permitted special characters
> shouldn't be accessible through an appropriate API.  No good reason to
> continue squashing and polluting the namespace.

[Hill, Brad] I would say the opposite.  Given a documented, common practice
that, however hacky, has been established for nearly 20 years, there is every
reason for a new technology (or at least, relatively new) not to introduce
vulnerabilities to a huge and established ecosystem.

I'm not saying XHR should actually coerce non-alphanumeric characters
down when sending headers - only that the security checks for banned
headers should do this for purposes of pre-flight comparison.  There is an
obligation not to introduce new vulnerabilities into existing systems, and the
"cost" to long-term compatibility with HTTP is very small: the blacklist
becomes a set of simple regexes instead of string literals.

Brad Hill
Co-chair, WebAppSec WG



super evr
Mon, Jan 9, 2012 7:46 PM

In my testing, Apache did not handle all "bypassed" headers the same
way. For example, it was not possible to submit a modified
"Transfer-Encoding" header (eg. Transfer.Encoding; Transfer*Encoding;
Transfer Encoding), and have Apache handle the request with the
specified encoding.

The environment variable HTTP_TRANSFER_ENCODING is still set, but the
actual request isn't treated with the provided encoding.

If you could get the browser to send a Transfer*Encoding header, and
if Apache treats it like normal, then you could possibly abuse it for
Request Splitting/Smuggling.
http://www.mindedsecurity.com/MSA01240108.html

On Thu, Jan 5, 2012 at 4:13 AM, Kousuke Ebihara <kousuke@co3k.org> wrote:

Hi,

Do you know that Apache HTTP Server and Lighttpd replace non-alphanumeric characters with underscores in the names of environment variables?

This might be useful to bypass restrictions of XMLHttpRequest.

Here is a simple CGI script to test server behavior::

   #!/usr/bin/env python
   # -*- coding: UTF-8 -*-

   import os

   print "Content-Type: text/plain\n";

   for k, v in sorted(os.environ.items()):
        print "%s: %s" % (k, v)

And execute this script via Apache::

   $ telnet localhost 80
   GET /~co3k/envs.cgi.py HTTP/1.0
   X-Normal: Hello
   X_Under: Hello
   X.Dot: Hello

   HTTP/1.1 200 OK
   Date: Wed, 23 Nov 2011 10:30:53 GMT
   Server: Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch
   Connection: close
   Content-Type: text/plain

   HTTP_X_DOT: Hello
   HTTP_X_NORMAL: Hello
   HTTP_X_UNDER: Hello

Then, via Lighttpd::

   $ telnet localhost 8037
   GET /envs.cgi.py HTTP/1.0
   X-Normal: Hello
   X_Under: Hello
   X.Dot: Hello

   HTTP/1.0 200 OK
   Content-Type: text/plain
   Connection: close
   Date: Wed, 23 Nov 2011 10:43:12 GMT
   Server: lighttpd/1.4.28

   HTTP_X_DOT: Hello
   HTTP_X_NORMAL: Hello
   HTTP_X_UNDER: Hello

But the case of Nginx::

   $ telnet localhost 8080
   GET /env/ HTTP/1.0
   X-Normal: Hello
   X_Under: Hello
   X.Dot: Hello

   HTTP/1.1 200 OK
   Server: nginx/1.0.9
   Date: Wed, 23 Nov 2011 10:57:07 GMT
   Content-Type: text/plain
   Connection: close

   HTTP_X_NORMAL: Hello

Well, as you know, some XMLHttpRequest implementations deny sending some request headers via XMLHttpRequest.

(See also: http://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_XMLHttpRequest)

You can't send Accept-Charset, Accept-Encoding, User-Agent, etc. via Firefox's XMLHttpRequest, but you can send Accept_Charset, Accept.Encoding, User*Agent, etc. A CGI script may trust the User*Agent header value via the "HTTP_USER_AGENT" environment variable.

I've found a vulnerability related to Japanese mobile phones by using this technique, but that vulnerability is caused by an unusual custom of the Japanese mobile world.

So I want to know about more universal threats enabled by this technique. Do you have any ideas?

Thanks,

--
Kousuke Ebihara <kousuke@co3k.org>
http://co3k.org/



Hill, Brad
Thu, Feb 2, 2012 1:19 AM

Ack.  I am learning PHP today to write some test cases and I find that PHP also uses the "-" to "_" conversion before exposing HTTP headers, so this server vulnerability is much more widespread than just ancient CGIs.  :(

-----Original Message-----
From: websecurity-bounces@lists.webappsec.org [mailto:websecurity-
bounces@lists.webappsec.org] On Behalf Of Hill, Brad
Sent: Friday, January 06, 2012 9:19 AM
To: Tim
Cc: websecurity@lists.webappsec.org
Subject: Re: [WEB SECURITY] A technique for bypassing request header
restriction of XMLHttpRequest

> I think given the clearly defined HTTP RFC, we're really looking at
> age-old insecure practices in interpreting headers on the web server.
> There's no reason that headers with permitted special characters
> shouldn't be accessible through an appropriate API.  No good reason to
> continue squashing and polluting the namespace.

[Hill, Brad] I would say the opposite.  Given a documented, common practice
that, however hacky, has been established for nearly 20 years, there is every
reason for a new technology (or at least, relatively new) not to introduce
vulnerabilities to a huge and established ecosystem.

I'm not saying XHR should actually coerce non-alphanumeric characters
down when sending headers - only that the security checks for banned
headers should do this for purposes of pre-flight comparison.  There is an
obligation not to introduce new vulnerabilities into existing systems, and the
"cost" to long-term compatibility with HTTP is very small: the blacklist
becomes a set of simple regexes instead of string literals.

Brad Hill
Co-chair, WebAppSec WG



Tim
Thu, Feb 2, 2012 8:22 PM

> Ack.  I am learning PHP today to write some test cases and I find that PHP also uses the "-" to "_" conversion before exposing HTTP headers, so this server vulnerability is much more widespread than just ancient CGIs.  :(

Yeah, after our conversation a while back, I looked into it a bit more
and realized that just about every web framework does this foolish
transliteration of characters to _.  Almost every special character is
converted like this.  The CGI spec only requires that "-" be
transliterated to "_" and doesn't require anything else to be.

Of course those other frameworks don't have to adhere to CGI, but even
if their developers wanted to claim "oh, we're just being backward
compatible with CGI", they don't have a sound argument.

tim

Robert A.
Thu, Feb 2, 2012 8:27 PM

In your investigation did you find an instance of a framework converting single header names
such as _Referer or _Host?

Regards,

- Robert

On Thu, 2 Feb 2012, Tim wrote:

>> Ack.  I am learning PHP today to write some test cases and I find that PHP also uses the "-" to "_" conversion before exposing HTTP headers, so this server vulnerability is much more widespread than just ancient CGIs.  :(
>
> Yeah, after our conversation a while back, I looked into it a bit more
> and realized that just about every web framework does this foolish
> transliteration of characters to _.  Almost every special character is
> converted like this.  The CGI spec only requires that "-" be
> transliterated to "_" and doesn't require anything else to be.
>
> Of course those other frameworks don't have to adhere to CGI, but even
> if their developers wanted to claim "oh, we're just being backward
> compatible with CGI", they don't have a sound argument.
>
> tim


