websecurity@lists.webappsec.org

The Web Security Mailing List


Re: [WEB SECURITY] NetSec Breaking Apps Better Than AppSec

Arian J. Evans
Sat, Jul 9, 2011 12:12 AM

Fair enough. I agree with that point. I had a different quibble.

When helping customers deal with thousands to tens of thousands of
vulnerabilities, I do tend to lump them into an equivalency bucket
called "maybe you should get to these things later, if ever", in any
cases where there is not immediate indication of
exploitability/confidentiality-compromise.

For sake of brevity we call this bucket "Best Practices" and let
people suppress them to keep their signal-to-noise manageable.

Now that we have run this into the ground,


Arian Evans
Software Security Referee

On Fri, Jul 8, 2011 at 6:32 PM, Thomas Ptacek thomas@matasano.com wrote:

I agree that SQLI is much more important than the Secure flag. I was prompted to comment only because of the equivalence implied between Secure and HttpOnly.

On Jul 8, 2011, at 6:20 PM, Arian J. Evans wrote:

I can see #1 being legitimately debatable. I do not see how #3-#5 are
debatable. I would like more detail if you find I am in error.

#3) What does =secure provide, inside a valid SSL tunnel, besides
protection against broken browsers or mixed domain content?

#4) Let me clarify - most retail and some other types of businesses
will tolerate !=secure before they will tolerate functional outage.
Do you disagree?

(insert early WAF shelfware stories here)

#5) If you have legitimate, non-confidential cookie usage (like most
web apps) and you have legitimate, non-transport encrypted traffic
(like many if not most web apps) then you will break things
arbitrarily slapping =secure on them, as I have seen both recommended
and done.

I am guessing if we disagree you are thinking of something
situation-specific e.g.- sensitive financial apps with a singular
session and/or auth cookie.

I still humbly submit that most organizations have thousands of XSS
and SQLi (amongst other types of directly exploitable syntax-attack
exposures) and until you help them fix those you're wasting their time
with this. And, of course, for many apps and cookies there's no reason
to care about =secure.

But I digress,


Arian Evans
Software Security Stuff

On Fri, Jul 8, 2011 at 5:57 PM, Thomas Ptacek thomas@matasano.com wrote:

The savvy reader will note that while #2 makes a valid point, #1, #3, #4, #5, and everything after it do something other than make valid points.

On Jul 8, 2011, at 5:49 PM, Arian J. Evans wrote:

On Fri, Jul 8, 2011 at 5:00 PM, Thomas Ptacek thomas@matasano.com wrote:

Unlike HttpOnly, missing the "Secure" flag on cookies in SSL apps is not a minor problem. You almost might as well not do HTTPS at all if you're not going to get that detail right.

Well, I respectfully disagree Tom.

  1. First - nobody gets Sony'd from cookie !=secure. If you go through
    the WASC and CWE attack and weakness nodes I can think of about 40
    that could get you Lulzed; I'd call those major, and stacking this up
    is minor in comparison.

  2. Secondly - very few cookies (of the total number of cookies set on
    the Internets) contain sensitive info and require confidentiality.
    (and thus =secure)

  3. Thirdishly - if you are using SSL, then this is a defense-in-depth
    best practice, and only provides protection against broken browsers or
    mixed-content mistakes.

  4. Fourthedly - Most businesses will tolerate cookie leakage before a
    broken application, because of -

  5. Fifthously - Slapping =secure on cookies randomly will break
    legitimate features of many applications, whether or not there is any
    security benefit/attack surface reduction

I know security consultants love fluffy report filler. If you charge
your clients premium prices for one-time pen-tests - then customers
tend to feel like they pay by the pound for report size, so you stuff
everything you can in there and make it all sound as terribly
important as possible. However IMHO this is not very useful at moving
the software security bar in large enterprises, and definitely doesn't
scale...at all.

I also think the elitist security attitude towards software
development, e.g. "if you ain't gonna get these little bits right you
might as well pack up and go home" (which sounds like "you're an idiot,
developer"), is misguided, and may be a reason why some security folks
have a tough time getting traction in organizations around solving
software security. That's one of the reasons Jeremiah and I make a
point of going to developer conferences. It's smart to stay close to
the people living with the problem you're trying to solve.

$0.02,


Arian Evans
Software Security Solutions Chef

On Jul 8, 2011, at 3:42 PM, Arian J. Evans wrote:

The article has a stupid premise (I think intended for trolling) but a
valid point.

The real point here is that a big chunk of the domain of appsec does
not define "vulnerability" in the same way network penetration testing
does. Most appsec tools/vendors paint with a broader brush of defects.
And, because there are so many, they rarely focus on penetration of a
single one. Additionally, penetration does not directly help with
remediation, so many choose to spend their time elsewhere.

So many appsec tools are noisy, and generate "findings" that are
basically defects that may or may not have any security implications.
Like HTTPONLY and =SECURE bits set on cookies. At WhiteHat we call
these "best practices": they may or may not impact the security
posture of your application (and they may have other implications,
positive and negative).
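For readers unfamiliar with these attributes, here is a minimal sketch, using Python's standard http.cookies module, of what the HttpOnly and Secure flags look like on a Set-Cookie header (the cookie name and value are purely illustrative):

```python
from http.cookies import SimpleCookie

# Build a cookie and mark it HttpOnly (not readable from JavaScript)
# and Secure (only sent over HTTPS connections).
cookie = SimpleCookie()
cookie["sid"] = "abc123"          # hypothetical session identifier
cookie["sid"]["httponly"] = True  # mitigates cookie theft via XSS
cookie["sid"]["secure"] = True    # prevents leakage over plain HTTP

header = cookie.output()
print(header)  # Set-Cookie: sid=abc123; HttpOnly; Secure
```

Whether flipping these two bits matters for a given cookie is exactly the question being argued in this thread.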

If it isn't exploitable, or leaking confidential information - I have
trouble calling it a vulnerability. Many appsec vendors try to work
around their inherent weakness here by having some form of a
likelihood or "confidence" score associated with sub-par findings.
This weakness aside, we have to start somewhere and one-time network
pentests aren't cutting it.

People focused exclusively on penetration may really know a few new
SQLi exploitation tricks well. But for every exploitable SQLi they
find in an enterprise they probably only miss 1,000 more SQLi in 500
forms they didn't touch....


Arian Evans

On Fri, Jul 8, 2011 at 1:59 AM, Rob Fuller jd.mubix@gmail.com wrote:

So this is an opinion/poll/question piece by cktricky. By posting to
both the websecurity and pentest list hopefully there will be a good
discussion on all sides:

http://www.novainfosecportal.com/2011/07/07/netsec-breaking-apps-better-than-appsec/

--
Rob Fuller | Mubix
Certified Checkbox Unchecker
Room362.com | Hak5.org


This list is sponsored by: Information Assurance Certification Review Board

Prove to peers and potential employers without a doubt that you can actually do a proper penetration test. IACRB CPT and CEPT certs require a full practical examination in order to become certified.

http://www.iacertification.org


Thomas Ptacek // matasano security // founder, product manager
reach me direct: 888-677-0666 x7805

"The truth will set you free. But not until it is finished with you."



Tim
Sat, Jul 9, 2011 1:13 AM

I agree that SQLI is much more important than the Secure flag. I was prompted to comment only because of the equivalence implied between Secure and HttpOnly.

I second Thomas here.  There are bigger fish to fry, but this one
isn't just a best practice.

For those who are following this thread and are wondering what the
MitM attack is that Thomas and I are concerned about, consider this:

Assumptions:
A. Web application uses only HTTPS, uses cookies for session
management, fails to set secure flag. (Seems secure, right?)

B. One or more application users access the web application over an
insecure network (e.g. the Internet).

C. At any point before or during a user's session, they also access
one or more HTTP resources.

Attack:

  1. Attacker obtains privileged network access between a user and the
    site.

  2. Once any HTTP page is observed, attacker injects an http://.../
    link which points to the (vulnerable) web application.  This could
    happen either via an IMG tag, or an HTTP redirect, a script on a
    timer, or any number of other ways.

  3. User's browser attempts to access the attacker's HTTP URL, provided
    by attacker.

  4. Port 80 isn't open on the site, but the attacker simply MitMs this
    and allows them to connect.  User's browser sends session cookie to
    attacker.
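The scenario above hinges on one rule of cookie scoping: unless the Secure attribute is set, a cookie set over HTTPS is also attached to plain-HTTP requests for the same host. A toy model of that decision (illustrative only, not a real browser implementation):

```python
def browser_sends_cookie(cookie_secure: bool, request_scheme: str) -> bool:
    """Toy model: a Secure cookie is only sent over HTTPS;
    a non-Secure cookie is sent over both HTTP and HTTPS."""
    if cookie_secure:
        return request_scheme == "https"
    return True

# The session cookie was set over HTTPS but without the Secure flag...
assert browser_sends_cookie(cookie_secure=False, request_scheme="http")
# ...so the forced http:// request in step 3 hands it to the attacker.

# With the Secure flag set, the same forced request leaks nothing:
assert not browser_sends_cookie(cookie_secure=True, request_scheme="http")
```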

Anticipated response arguments:

R. "That sounds really complicated, doubt it would work."

A. sslstrip is just as complicated.  This is just one extra
redirect/injection.

R. "You're assuming users will access an HTTP site as well."

A. Yup. Pretty common case.  And what browser today doesn't try to
phone home for updates of one sort or another?  How many
hotel/airport/other free wireless networks don't have an HTTP
landing page when you first access them?

R. "Gaining 'privileged network access' is hard."

A. Nope.  It's quite easy on any internal network or public wireless
networks (even with PSK encryption no less!).

R. "Well sure, anything is possible if you can do a MitM attack."

A. What's the point in using SSL again?  Yes, there are many ways to
break SSL protections, and this is just one of them. However,
preventing MitM is a core design goal of any secure protocol.

(Do you think I'm sick of hearing these arguments?  Yeah.)

A similar attack is possible if the secure flag is used, but the
application assigns session cookies before the user is authenticated
(session fixation).  We can thank Mitja for pointing that out.

Session cookies still suck for authentication.
We need a better alternative.

tim

John Wilander
Sat, Jul 9, 2011 1:48 AM

Firstly, I'm an application security guy and I don't pentest. I develop software in which security is important. Thus, there's no equivalence between appsec and breaking stuff.

Now, why is that important? Well, cktricky might want to poke the appsec community but is in fact poking pentesters. I'm just guessing the same goes for the netsec guys. IMO, the difference between netsec and appsec lies in code - custom written code to fulfill business requirements.

An appsecer asks herself whether the code is fulfilling the requirements in a fairly secure manner. If not she will either fix the system herself (write code and tests) or communicate with the developers on a technical level how to fix the system (discuss code and tests).

Is cktricky saying netsecers do that better?

Secondly ...

On 9 jul 2011, at 00:49, "Arian J. Evans" arian.evans@anachronic.com wrote:

On Fri, Jul 8, 2011 at 5:00 PM, Thomas Ptacek thomas@matasano.com wrote:
I also think the elitist security attitude towards software
development, e.g. "if you ain't gonna get these little bits right you
might as well pack up and go home" (which sounds like "you're an idiot,
developer"), is misguided, and may be a reason why some security folks
have a tough time getting traction in organizations around solving
software security. That's one of the reasons Jeremiah and I make a
point of going to developer conferences. It's smart to stay close to
the people living with the problem you're trying to solve.

... word.

Finally, the lack of appsec tools for developers is a problem.

Regards, John Wilander

Arian J. Evans
Sat, Jul 9, 2011 1:51 AM

Let's see if we can whip this horse to death!

Okay, here is what I typed that tipped off all these on- and offline
secure-bit emails:

"So many appsec tools are noisy, and generate "findings" that are
basically defects that may or may not have any security implications.
Like HTTPONLY and =SECURE bits set on cookies. At WhiteHat we call
these "best practices": they may or may not impact the security
posture of your application (and they may have other implications,
positive and negative)."

Note that I did not say "session cookies", "authorization cookies", or
"cookies that control access to confidential data and can be compromised".
This was deliberate but unfortunately not explicit, or I would have
saved an hour today.

I said "cookies" and I accurately stated "this may or may not impact
the security posture of your application". Personalization cookies.
Urchin cookies. Non session or Auth/Z cookies. If it cannot be directly
exploited it is a "best practice" or "defense in depth" in my book.

So, we are all mostly agreeing. :) The disagreement is one of
degree, not of kind.

The problem I was contrasting with the "network pentester vs. appsec
scanner jockey" article is that appsec "defects" and "recommended
practices" are often ranked as equal to "exploitable vulnerabilities".

Q. How many scanners report !=secure on cookies that do not need it?

A. Many

Q. How often do clueless scanner jockeys report !=secure on cookies
that do not need it?

A. All the time, based on reports customers show us and questions we get.

On a single application basis, for cookies that need it, setting =secure
can reduce attack surface if not mitigate the exploit scenario described.
A fine practice and one I do not discourage.

In the average enterprise with hundreds to thousands of existing web
apps with access to interesting data, it is low on my list of things to do,
if I am prioritizing my list of things to do around "try not to get hacked".

Finally - an apology to the author: I genuinely thought the title and premise
were a troll, but thought there was a gem of a valid point in the proof.
However, that's no excuse for me calling it stupid. To be fair, plenty of
things I've written are quite stupid, I just work to hide it well,


Arian Evans
Software Security Sophistry

On Fri, Jul 8, 2011 at 8:13 PM, Tim tim-security@sentinelchicken.org wrote:

I agree that SQLI is much more important than the Secure flag. I was prompted to comment only because of the equivalence implied between Secure and HttpOnly.

I second Thomas here.  There are bigger fish to fry, but this one
isn't just a best practice.

For those who are following this thread and are wondering what the
MitM attack is that Thomas and I are concerned about, consider this:

Assumptions:
A. Web application uses only HTTPS, uses cookies for session
  management, fails to set secure flag. (Seems secure, right?)

B. One or more application users access the web application over an
  insecure network (e.g. the Internet).

C. At any point before or during a user's session, they also access
  one or more HTTP resources.

Attack:

  1. Attacker obtains privileged network access between a user and the
      site.

  2. Once any HTTP page is observed, attacker injects an http://.../
      link which points to the (vulnerable) web application.  This could
      happen either via an IMG tag, or an HTTP redirect, a script on a
      timer, or any number of other ways.

  3. User's browser attempts to access the attacker's HTTP URL, provided
      by attacker.

  4. Port 80 isn't open on the site, but the attacker simply MitMs this
      and allows them to connect.  User's browser sends session cookie to
      attacker.

Anticipated response arguments:

R. "That sounds really complicated, doubt it would work."

A. sslstrip is just as complicated.  This is just one extra
  redirect/injection.

R. "You're assuming users will access an HTTP site as well."

A. Yup. Pretty common case.  And what browser today doesn't try to
  phone home for updates of one sort or another?  How many
  hotel/airport/other free wireless networks don't have an HTTP
  landing page when you first access them?

R. "Gaining 'privileged network access' is hard."

A. Nope.  It's quite easy on any internal network or public wireless
  networks (even with PSK encryption no less!).

R. "Well sure, anything is possible if you can do a MitM attack."

A. What's the point in using SSL again?  Yes, there are many ways to
  break SSL protections, and this is just one of them. However,
  preventing MitM is a core design goal of any secure protocol.

(Do you think I'm sick of hearing these arguments?  Yeah.)

A similar attack is possible if the secure flag is used, but the
application assigns session cookies before the user is authenticated
(session fixation).  We can thank Mitja for pointing that out.

Session cookies still suck for authentication.
We need a better alternative.

tim

Let's see if we can whip this horse to death! Okay, here is what I typeled that tippled all these on and offline secure-bit emails: "So many appsec tools are noisy, and generate "findings" that are basically defects that may or may not have any security implications. Like HTTPONLY and =SECURE bits set on cookies. At WhiteHat we call these "best practices": they may or may not impact the security posture of your application (and they may have other implications, positive and negative)." Note that I did not say "session cookies", "authorization cookies", or "cookies that control access to confidential data and can be compromised". This was deliberate but unfortunately not explicit, or I would have saved an hour today. I said "cookies" and I accurate stated "this may or may not impact the security posture of your application". Personalization cookies. Urchin cookies. Non session or Auth/Z cookies. If it cannot be directly exploited it is a "best practice" or "defense in depth" in my book. So, we are all *mostly* agreeing. :) The disagreement is one of degree, not of kind. The problem I was contrasting to the "network pentester vs. appsec scanner jockey" article is the appsec "defects" and "recommended practices" often ranked equal to as "exploitable vulnerabilities" Q. How many scanners report !=secure on cookies that do not need it? A. Many Q. How often do clueless scanner jockeys report !=secure on cookies that do not need it? A. All the time, from reports customers show us, and questions we get On a single application basis, for cookies that need it, setting =secure can reduce attack surface if not mitigate the exploit scenario described. A fine practice and one I do not discourage. In the average enterprise with hundreds to thousands of existing web apps with access to interesting data, it is low on my list of things to do, if I am prioritizing my list of things to do around "try not to get hacked". 
Finally - apology to the author; I genuinely thought the title and
premise were a troll, but thought there was a gem of a valid point in
the proof. However, that's no excuse for me calling it stupid. To be
fair, plenty of things I've written are quite stupid, I just work to
hide it well.

---
Arian Evans
Software Security Sophistry

On Fri, Jul 8, 2011 at 8:13 PM, Tim <tim-security@sentinelchicken.org> wrote:

>> I agree that SQLI is much more important than the Secure flag. I was
>> prompted to comment only because of the equivalence implied between
>> Secure and HttpOnly.
>
> I second Thomas here.  There are bigger fish to fry, but this one
> isn't just a best practice.
>
> For those who are following this thread and are wondering what the
> MitM attack is that Thomas and I are concerned about, consider this:
>
> Assumptions:
> A. Web application uses only HTTPS, uses cookies for session
>    management, fails to set secure flag. (Seems secure, right?)
>
> B. One or more application users access the web application over an
>    insecure network (e.g. the Internet).
>
> C. At any point before or during a user's session, they also access
>    one or more HTTP resources.
>
> Attack:
>
> 1. Attacker obtains privileged network access between a user and the
>    site.
>
> 2. Once any HTTP page is observed, attacker injects an http://.../
>    link which points to the (vulnerable) web application.  This could
>    happen via an IMG tag, an HTTP redirect, a script on a timer, or
>    any number of other ways.
>
> 3. User's browser attempts to access the attacker's HTTP URL.
>
> 4. Port 80 isn't open on the site, but the attacker simply MitMs this
>    and allows the browser to connect.  User's browser sends session
>    cookie to attacker.
>
> Anticipated response arguments:
>
> R. "That sounds really complicated, doubt it would work."
>
> A. sslstrip is just as complicated.  This is just one extra
>    redirect/injection.
>
> R. "You're assuming users will access an HTTP site as well."
>
> A. Yup. Pretty common case.  And what browser today doesn't try to
>    phone home for updates of one sort or another?  How many
>    hotel/airport/other free wireless networks don't have an HTTP
>    landing page when you first access them?
>
> R. "Gaining 'privileged network access' is hard."
>
> A. Nope.  It's quite easy on any internal network or public wireless
>    networks (even with PSK encryption, no less!).
>
> R. "Well sure, anything is possible if you can do a MitM attack."
>
> A. What's the point in using SSL again?  Yes, there are many ways to
>    break SSL protections, and this is just one of them. However,
>    preventing MitM is a core design goal of any secure protocol.
>
> (Do you think I'm sick of hearing these arguments?  Yeah.)
>
> A similar attack is possible if the secure flag is used, but the
> application assigns session cookies before the user is authenticated
> (session fixation).  We can thank Mitja for pointing that out.
>
> Session cookies still suck for authentication.
> We need a better alternative.
>
> tim
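Tim's step 4 works because a browser attaches any cookie that lacks the Secure flag to a plain-HTTP request for a matching domain. A minimal sketch of that behavior using Python's standard-library cookie jar (the hostname and cookie value are illustrative, not from the thread):

```python
import urllib.request
from http.cookiejar import Cookie, CookieJar

def make_session_cookie(secure):
    # Build a session cookie for www.mybank.com; the only thing we
    # vary is the Secure flag.
    return Cookie(
        version=0, name="session", value="abc123",
        port=None, port_specified=False,
        domain="www.mybank.com", domain_specified=True,
        domain_initial_dot=False,
        path="/", path_specified=True,
        secure=secure, expires=None, discard=True,
        comment=None, comment_url=None, rest={},
    )

def cookie_header_for(url, secure):
    # Returns the Cookie header a cookie-jar-driven client would
    # attach to a request for the given URL, or None if withheld.
    jar = CookieJar()
    jar.set_cookie(make_session_cookie(secure))
    req = urllib.request.Request(url)
    jar.add_cookie_header(req)
    return req.get_header("Cookie")

# Without the Secure flag, the forced http:// request (step 4 above)
# leaks the session cookie to the man in the middle:
print(cookie_header_for("http://www.mybank.com/", secure=False))

# With the Secure flag, the cookie is withheld on plain HTTP but still
# sent over HTTPS:
print(cookie_header_for("http://www.mybank.com/", secure=True))
print(cookie_header_for("https://www.mybank.com/", secure=True))
```

This is only a model of browser behavior, but the policy is the same one real browsers apply: the Secure attribute gates the transport a cookie may ride on.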
MZ
Michal Zalewski
Sat, Jul 9, 2011 3:57 AM
> 3) Thirdishly - if you are using SSL, then this is a defense-in-depth
> best practice, and only provides protection for broken browsers or
> mixed-content mistakes.

In the spirit of nit-picking, this is not really true...
https://www.mybank.com does not need to have a mixed-content mistake to
leak its non-secure cookie.

It suffices for the user's browser to navigate to any other HTTP
site (say, http://www.facebook.com) within the period of validity of
the insecure www.mybank.com cookie. When this happens, any active
attacker may inject an invisible frame onto www.facebook.com, pointing
to http://www.mybank.com, prompting the browser to attempt an insecure
connection, and to leak that bank's authentication cookie.

It's an easy attack, and a very practical one... and really, almost
nobody uses a separate browser profile, and almost nobody abstains from
having any other tabs / windows open while logging into a bank.

That said, "secure" cookies are hardly a panacea; they can still be
collided with by non-HTTPS content, which makes it unexpectedly tricky
to use them right. HSTS limits the exposure (but even that can be
subverted by domain-scoped cookies in some cases), but the only good
way to fix this mess would be proper origin-scoped cookies (a la Adam
Barth's Cake header; or localStorage, except that the latter is still
horribly insecure in some popular browsers).

/mz
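Michal's "collided with by non-HTTPS content" caveat is worth spelling out: the Secure attribute restricts where a cookie is *sent*, not who may *set* it, so an active attacker answering a forced http:// request can overwrite the legitimate HTTPS cookie. A sketch with Python's stdlib cookie jar (the `FakeResponse` shim, hostnames, and values are all illustrative):

```python
import urllib.request
from email.message import Message
from http.cookiejar import CookieJar

class FakeResponse:
    """Minimal stand-in for the response object that
    CookieJar.extract_cookies expects."""
    def __init__(self, set_cookie_value):
        self._msg = Message()
        self._msg["Set-Cookie"] = set_cookie_value
    def info(self):
        return self._msg

jar = CookieJar()

# 1. The real site sets a Secure session cookie over HTTPS:
https_req = urllib.request.Request("https://www.mybank.com/")
jar.extract_cookies(FakeResponse("session=abc123; Path=/; Secure"), https_req)

# 2. A MitM answering a forced http:// request overwrites it -- no
#    Secure flag needed; the jar accepts the new cookie for the same
#    (domain, path, name) slot:
http_req = urllib.request.Request("http://www.mybank.com/")
jar.extract_cookies(FakeResponse("session=evil; Path=/"), http_req)

# 3. The poisoned value now rides along on subsequent HTTPS requests:
victim_req = urllib.request.Request("https://www.mybank.com/")
jar.add_cookie_header(victim_req)
print(victim_req.get_header("Cookie"))
```

This is the session-fixation-flavored collision Michal alludes to; it is one reason he argues Secure cookies alone are not a panacea.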

AJ
Arian J. Evans
Sat, Jul 9, 2011 8:25 AM

Ah, good point and scenario. That is a trivial way to
force a "fake mixed-content" connection.

-ae

On Fri, Jul 8, 2011 at 10:57 PM, Michal Zalewski lcamtuf@coredump.cx wrote:

>> 3) Thirdishly - if you are using SSL, then this is a defense-in-depth
>> best practice, and only provides protection for broken browsers or
>> mixed-content mistakes.
>
> In the spirit of nit-picking, this is not really true...
> https://www.mybank.com does not need to have a mixed-content mistake to
> leak its non-secure cookie.
>
> It suffices for the user's browser to navigate to *any* other HTTP
> site (say, http://www.facebook.com) within the period of validity of
> the insecure www.mybank.com cookie. When this happens, any active
> attacker may inject an invisible frame onto www.facebook.com, pointing
> to http://www.mybank.com, prompting the browser to attempt an insecure
> connection, and to leak that bank's authentication cookie.
>
> It's an easy attack, and a very practical one... and really, almost
> nobody uses a separate browser profile, and almost nobody abstains
> from having any other tabs / windows open while logging into a bank.
>
> That said, "secure" cookies are hardly a panacea; they can still be
> collided with by non-HTTPS content, which makes it unexpectedly tricky
> to use them right. HSTS limits the exposure (but even that can be
> subverted by domain-scoped cookies in some cases), but the only good
> way to fix this mess would be proper origin-scoped cookies (a la Adam
> Barth's Cake header; or localStorage, except that the latter is still
> horribly insecure in some popular browsers).
>
> /mz

T
Tim
Sat, Jul 9, 2011 4:52 PM

> Let's see if we can whip this horse to death!

I actually was trying to end the discussion there and make sure the
uninitiated understand the underlying issue.

> Note that I did not say "session cookies", "authorization cookies", or
> "cookies that control access to confidential data and can be compromised".
> This was deliberate but unfortunately not explicit, or I would have
> saved an hour today.

That's fine.  I wasn't trying to put words in your mouth or imply
extra meaning.  I merely wanted to make sure the importance of the
secure flag is understood by the broader audience.

> I said "cookies" and I accurately stated "this may or may not impact
> the security posture of your application". Personalization cookies.
> Urchin cookies. Non-session or Auth/Z cookies. If it cannot be directly
> exploited it is a "best practice" or "defense in depth" in my book.
>
> So, we are all *mostly* agreeing. :) The disagreement is one of
> degree, not of kind.

Well, sure, if you don't care about defending against privileged
network access, then it doesn't matter.

On the other hand, if the application:
A. Uses SSL/TLS

B. Is unfortunately using cookies for authenticated user session
management

Then the secure flag is required in all cases.  Otherwise why use HTTPS?
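For readers wondering what "setting the secure flag" looks like on the wire, here is a minimal sketch of the resulting Set-Cookie header using Python's stdlib (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["path"] = "/"
cookie["session"]["secure"] = True    # never sent over plain HTTP
cookie["session"]["httponly"] = True  # invisible to document.cookie

# Render the header a server would emit:
header = cookie.output(header="Set-Cookie:")
print(header)
```

The single `Secure` token in the emitted header is the entire fix Tim is describing for cookies used on HTTPS-only applications.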

> The problem I was contrasting to the "network pentester vs. appsec
> scanner jockey" article is that appsec "defects" and "recommended
> practices" are often ranked equal to "exploitable vulnerabilities".
>
> Q. How many scanners report !=secure on cookies that do not need it?
>
> A. Many
>
> Q. How often do clueless scanner jockeys report !=secure on cookies
> that do not need it?
>
> A. All the time, from reports customers show us, and questions we get

Well sure, but a scanner jockey that doesn't check their results
against reality isn't worth much.  All scanners produce false
positives in a variety of ways due to a lack of context.

It is, however, unfortunate that we use an HTTP construct (cookies) in
security contexts and non-security contexts, such that to be secure,
server administrators have to take extra steps that aren't always
obvious (when and when not to apply the secure flag).  Security must
be simple to deploy.

> On a single application basis, for cookies that need it, setting =secure
> can reduce attack surface if not mitigate the exploit scenario described.
> A fine practice and one I do not discourage.

Well, I wouldn't say it quite that way.  It is about providing
communications security or not providing it.  It only appears to be a
"mitigation" on the surface due to the poor security design (or
general lack of design) of the web.

tim

T
Tim
Sat, Jul 9, 2011 5:06 PM

> ... but the only good
> way to fix this mess would be proper origin-scoped cookies (a la Adam
> Barth's Cake header; or localStorage, except that the latter is still
> horribly insecure in some popular browsers).

Well...  How about we use a real authentication protocol for users,
and leave cookies for the non-security use cases?  Giving web
developers control over individual bits in messages used for
authentication is always going to be a recipe for disaster.

I would be thrilled if some of you heavy-weights joined us in the
discussions:
https://www.ietf.org/mailman/listinfo/http-auth

tim
