websecurity@lists.webappsec.org

The Web Security Mailing List


Great article outlining a core issue with many in the security community

Steven M. Christey
Mon, Feb 14, 2011 2:23 PM

On Mon, 14 Feb 2011, Ory Segal wrote:

What we do need to ask ourselves is - if nobody is prioritizing security
as a critical software requirement - what are we doing wrong here???

The news isn't all bleak.  But, I think there are at least 3 main issues
at play right now:

  1. As already mentioned, there is not enough engagement or education with
    the developer community. I agree that reaching out more to developers is a
    very important strategy.

  2. We are not effectively translating security into actual impact on the
    business or mission.

  3. Customers don't know how to ask for "more security."

To expand on point 3 a bit...

I believe that in the past, there wasn't enough consumer interest in
security to make it affect a software vendor's bottom line.  At this stage
though, there are enough large organizations who care about security to
consider putting it into contract language.

Now the problem is shifting to the question: how do you know how secure a
piece of software really is?  Consumers need help in knowing what to ask
for and how to ask for it.  "New" efforts like CWSS and Jeff Williams'
security facts label, combined with ongoing work such as DHS' assurance
cases, OWASP's secure contract annex, BSIMM, OWASP Top Ten, CWE Top 25,
etc. are all laying the groundwork that should ultimately make it easier
for customers to ask for more security.

There will probably be a bit of churn and controversy as assurance metrics
are developed, since we still don't have a quantitatively-defensible way
to say how well various techniques work for which security concerns,
especially while there are questionable rates of false positives and false
negatives.

My belief is that once software-assurance measurement starts to become
"usable" to consumers, there will be an increasing push toward security as
a requirement (or at least, a desired feature), which will then ultimately
impact the day-to-day developer.  We're getting there, though.

- Steve
Steven M. Christey
Mon, Feb 14, 2011 2:47 PM

On Sun, 13 Feb 2011, Michal Zalewski wrote:

I'm not sure we're "losing" any more than ten years ago - there is
more PR and community exposure, but perhaps that's it? But we might be
fighting the wrong battle to begin with.

I believe that software by many popular vendors/services is, in general,
more "objectively" secure than it was 10 years ago - in terms of having
fewer vulnerabilities, and a much smaller percentage of "obvious"
vulnerabilities that (generally) require more human effort to find.  Bug
bounties and Pwn2Own are symptoms of that.  But, maybe the day-to-day
threat has changed much more rapidly, and the attack surface is much
greater than it was 10 years ago.  So, the "operational" security may have
declined in that time.

Admittedly, we still regularly see brand-new software classes start off
with the same old security issues.  "Secure-by-design" has a longer way to
go than avoiding implementation bugs.

- Steve
Mike Duncan
Mon, Feb 14, 2011 3:15 PM

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Having been a developer for 13 years prior to working now in AppSec, I
couldn't have said it better.

Businesses (and even .gov), especially now, are trying to grow more every
year than the last -- so the demands from higher management (often
unfounded) are going to be about that growth goal rather than about doing
it right -- the overall security goal. Doing it right means longer
development and assessment times, and that just does not seem to fit many
business models right now.

This is ultimately the very typical argument of Security vs. Business
Goals. Thanks for pointing that out -- because it definitely needs to
be noted.

Mike Duncan
ISSO, Application Security Specialist
Government Contractor with STG, Inc.
NOAA :: National Climatic Data Center

On 02/13/2011 06:56 PM, steve jensen wrote:

Having been a software developer for almost 10 years and then
transitioning into security full-time (I rode the developer/security
fence for 4-5 years), I've seen both sides of the fence first hand and
if there is any one entity to blame, it isn't the developers. The blame
should be placed on the organization/business, rather than the
developers themselves. If companies placed higher importance on
security, mandated security as part of the SDLC and ensured their
developers received proper training on how to write secure apps, then we
wouldn't have this "us vs. them" mentality. Ultimately, the first
priority for software development is functionality. If it doesn't work,
you can't ship it. If there are issues with it later, you just patch it.
That's been the development mentality of the past and will continue to
be until the overall business mindset changes.

From: robert@webappsec.org
To: tasos.laskos@gmail.com
Date: Sun, 13 Feb 2011 19:08:09 -0500
CC: websecurity@webappsec.org
Subject: Re: [WEB SECURITY] Great article outlining a core issue with many

I don't think that a guy saying "Developers don't know shit about
security" (blaming developers) should be taken seriously by security
specialists and developers alike.
That goes for most generalizations I suppose (see, I side stepped that
land-mine ;) ).

While we agree, I tend to see on average 2-3 people per conference
saying exactly this, some of them presenters. Of the people I've heard
saying this, all worked for either a consulting company or a vendor
and were not actually in a role in a company addressing issues.

Regards,
- Robert


The Web Security Mailing List

WebSecurity RSS Feed
http://www.webappsec.org/rss/websecurity.rss

Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA

WASC on Twitter
http://twitter.com/wascupdates

websecurity@lists.webappsec.org

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.15 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk1ZRyAACgkQnvIkv6fg9ha0OQCgk/Jl5DKtGO6ZhP5v9ZqdaA3+
1ooAnRv+Wer6pp0MjpfGwJ6GeHZ2HQ9g
=xQ21
-----END PGP SIGNATURE-----

Paul Schmehl
Mon, Feb 14, 2011 4:33 PM

--On February 14, 2011 9:21:56 AM +0200 Ory Segal <SEGALORY@il.ibm.com>
wrote:

What we do need to ask ourselves is - if nobody is prioritizing security
as a critical software requirement - what are we doing wrong here???

This is a human nature issue.  You can't solve human nature issues with
technology.

Most humans will choose the path of least resistance when confronted with
decision paths.  Look how long it took to get safety features into
automobiles.  Things we now take for granted (crush zones, air bags, safety
belts) were considered extraneous at first, then luxuries, until the death
toll rose to a level that people began demanding action.  Even now some
people will opt not to use a seat belt even though the evidence for doing
so is overwhelming and the cost of having them is built in to the product.

Software is no different.  It's a human endeavor.  Until the perceived cost
of not having security built in exceeds some comfort level (and who knows
what that might be?) not much will change.  There will be leaders and
innovators who are out front working for change, and they will be able to
sell their products to the security conscious buyers, but it will not be a
commodity until enough "bad" happens to force the "good".

Telling someone there are security holes in their product doesn't mean they
will fix them.  Until those holes incur a cost to them that they perceive
is higher than the cost of fixing them, they're not going to fix them
unless altruism comes into play.  It seldom does.

--
Paul Schmehl, Senior Infosec Analyst
As if it wasn't already obvious, my opinions
are my own and not those of my employer.


"It is as useless to argue with those who have
renounced the use of reason as to administer
medication to the dead." Thomas Jefferson
"There are some ideas so wrong that only a very
intelligent person could believe in them." George Orwell

Vance, Michael
Mon, Feb 14, 2011 6:19 PM

Where do you draw the line at what developers should know and do automatically and what they should only do if there is a specific, written requirement? Where do things get too technical for product owners and stakeholders to define them? We don't expect there to be a written requirement to use a specific data type or array structure; why should there have to be a specific requirement to sanitize input using a whitelist? Shouldn't that be automatic, part of "good coding practices?"
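For what it's worth, whitelist sanitization is cheap to express in code. A minimal sketch of the idea (the field name and allowed pattern here are purely illustrative, not from any particular application):

```python
import re

# Whitelist validation: accept only input that matches an explicitly
# allowed pattern, and reject everything else -- as opposed to a
# blacklist, which tries to strip out known-bad characters and misses
# whatever the attacker thought of that we didn't.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value: str) -> str:
    """Return the value unchanged if it matches the whitelist, else raise."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

The point is that this costs a few lines per input field; whether anyone writes those lines without a requirement telling them to is exactly the question above.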

The problem is getting owners and stakeholders to think in terms of abuse and misuse cases.  Their requirements are always based on the "happy path." "If the user provides the input we are expecting, this is how the application should behave." They define edits and exception paths in terms of business logic, but never in terms of technical capabilities. They are concerned that a customer may try to withdraw more money from their account than the available balance, not that they may try to withdraw more money than the input buffer is designed to hold. That last type of requirement is up to the developer in many shops.

Even when developers put the question to non-technical stakeholders, it often perplexes them: "If the program receives input that it is not designed to handle, how should I have the program handle it?"  Seems a little circular or oxymoronic, doesn't it? You're asking for a design specification for something that is not in the design.

I've sat in requirements sessions and brought up abuse cases where the response from the business is, "What are the chances that someone will do that?" The answer, as we on this list know, is that the chance is low, but when that one determined, skilled individual turns his attention to your application, you still need to be ready for him. The chances that a burglar is going to try to walk in your front door on any given day is pretty low, too, but we all still lock our front doors.

We need to get rid of the antagonistic Us vs. Them attitude between Security and AppDev, and we need to start by a) stopping accusing the other of not knowing s**t about the other's discipline and b) admitting that we don't know s**t about the other's discipline. Only then will we actually start to listen to and learn from each other.

-Michael

From: websecurity-bounces@lists.webappsec.org [mailto:websecurity-bounces@lists.webappsec.org] On Behalf Of Ory Segal
Sent: Monday, February 14, 2011 2:22 AM
To: robert@webappsec.org
Cc: websecurity@lists.webappsec.org; websecurity-bounces@lists.webappsec.org
Subject: Re: [WEB SECURITY] Great article outlining a core issue with many in the security community

Hi,

Developers shouldn't be blamed for not writing secure applications - it's usually the fault of product owners and stakeholders that don't define (and prioritize) security as a critical requirement for a software project.

You don't expect developers to build a pretty and usable user interface, you also don't expect them to define the flow and logic of your application. That's why product owners and stakeholders have to define product requirements, use cases, users, scenarios, etc.

Developers develop code, which should adhere to the requirements of the project.

As long as security won't be a 1st class citizen in the world of software requirements, I suspect we won't see software that is secure by design.

Having security requirements also means that product owners, developers and QA teams can verify that the requirements are met. They can measure their success, and understand how to get better. Anything less than this is simply a waste of time, i.e. bolting security on the project in hindsight.

What we do need to ask ourselves is - if nobody is prioritizing security as a critical software requirement - what are we doing wrong here???

-Ory

Ory Segal
Security Products Architect
AppScan Product Manager
Rational, Application Security
IBM Corporation
Tel: +972-9-962-9836
Mobile: +972-54-773-9359
e-mail: segalory@il.ibm.com

From: robert@webappsec.org
To: websecurity@lists.webappsec.org
Date: 14/02/2011 12:36 AM
Subject: [WEB SECURITY] Great article outlining a core issue with many in the security community
Sent by: websecurity-bounces@lists.webappsec.org


I saw this posted via twitter and thought it was worth mentioning here. While the example specifies owasp, I am not posting this link to slam
them in particular. I think that the point applies to MANY folks in the security industry.

Security Vs Developers
http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html

Regards,
- Robert Auger
WASC Co Founder/Moderator of The Web Security Mailing List



Paul Schmehl
Tue, Feb 15, 2011 3:36 PM

--On February 14, 2011 1:19:41 PM -0500 "Vance, Michael"
<Michael.Vance@salliemae.com> wrote:

Where do you draw the line at what developers should know and do
automatically and what they should only do if there is a specific,
written requirement? Where do things get too technical for product owners
and stakeholders to define them? We don’t expect there to be a written
requirement to use a specific data type or array structure; why should
there have to be a specific requirement to sanitize input using a
whitelist? Shouldn’t that be automatic, part of “good coding
practices?”

Absolutely....except...when it breaks the application in unexpected ways,
then the developer has to explain why he chose the path of input
verification over functionality when that wasn't in the specs he was given.

Recently we were asked to open some ports on our firewall so some folks
could engage in online multiplayer games.  When we asked why that was
needed, the answer we got from the game vendor was, "Other universities
have done it, and the game works."

Mind you, they want these ports opened to literally thousands of IP
addresses.  (They gave us a list of 12 networks in the /21 to /18 size.)

We said no.  You tell us what isn't working, and we'll find a way to get it
working without exposing ourselves to excessive risk.  Oh, and BTW, we can
telnet to every port they've asked for and get a banner, so the problem
isn't the ports they told us about anyway.  It's probably some port they
want to open without first getting a request from a client.
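The banner check described above can be scripted rather than done by hand with telnet; a rough sketch (host, timeout, and buffer size are arbitrary choices, not part of the original story):

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to host:port and return whatever the service sends first.

    An exception here means the port is closed or filtered; an empty
    string means the port is open but the service is waiting for the
    client to speak first.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""
```

An open port that answers with a banner, as in the case above, is decent evidence the firewall isn't the thing blocking the application.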

Ease of use will always trump security until someone puts their foot down
and says no.  When they do, they must be backed up by both facts and their
administration.

Making things work as expected in a secure environment is a non-trivial
exercise.  It requires a much clearer understanding of TCP/IP, networking,
routing, applications and protocols than many people have a grasp of.  It
also requires experience with the right tools used in the right ways to
track down actual causes and suggest solutions.

That's why it isn't often done.  It's hard.  Opening the firewall is easy.
The requestor is satisfied immediately, and there isn't that incredible
pressure to relent _just_this_one_time so someone can get done what they
want to do.  (It works at home.  Why doesn't it work here?)

These aren't easy problems to solve.  If they were, they would have been
solved already.  Arguing about whose fault it is is pointless and
non-productive.  Finding a way to blend security into technology until it
becomes an integral part is much more productive.

--
Paul Schmehl, Senior Infosec Analyst
As if it wasn't already obvious, my opinions
are my own and not those of my employer.


"It is as useless to argue with those who have
renounced the use of reason as to administer
medication to the dead." Thomas Jefferson
"There are some ideas so wrong that only a very
intelligent person could believe in them." George Orwell

MS
Milton Smith
Thu, Feb 17, 2011 7:58 PM

I know there are a ton of replies but I'm going to get my 2 cents as
well...

It's likely the priority list would be different if John polled customers
<grin>.  Most large software customers with significant resources do not
take the claims of vendors in areas of security and performance at face
value.  These customers conduct their own security and performance
assessments.  Depending upon the results they may defer purchases if the
product does not, or cannot, meet their expectations.

The real challenge for security is visibility.  A product that is highly
secure presents the same user interface as one that is not.  It takes
significant resources and expertise to establish the security posture for
a prospective product -- to make the invisible, visible.  Such resources
are out of reach for most small to medium businesses and individuals.  All
product features being equal, it's likely most will pay a little more
money for a product that is also secure.  After all, security does cost
money.

David Rice had an awesome presentation at OWASP AppSec 2010 in Irvine,
California.  He draws an analogy between pollution and security and how
the world's tolerance for poor security is changing.  Security
professionals are the "tree huggers" of cyber space - I like that.  There
might be a YouTube version out there.

(ok, 3 cents)

Regards,
Milton

On 2/13/11 3:27 PM, "robert@webappsec.org" <robert@webappsec.org> wrote:

I saw this posted via twitter and thought it was worth mentioning here.
While the example specifies owasp, I am not posting this link to slam
them in particular. I think that the point applies to MANY folks in the
security industry.

Security Vs Developers
http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html

Regards,
- Robert Auger
WASC Co Founder/Moderator of The Web Security Mailing List
http://www.qasec.com/
http://www.webappsec.org/

The Web Security Mailing List

WebSecurity RSS Feed
http://www.webappsec.org/rss/websecurity.rss

Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA

WASC on Twitter
http://twitter.com/wascupdates

websecurity@lists.webappsec.org
http://lists.webappsec.org/mailman/listinfo/websecurity_lists.webappsec.org

The information contained in this message may be legally privileged and
confidential.  It is intended to be read only by the individual or entity
to whom it is addressed or by their designee. If the reader of this
message is not the intended recipient, you are on notice that any
distribution of this message, in any form, is strictly prohibited.  If
you have received this message in error, please immediately notify the
sender and/or SuccessFactors, Inc. by telephone at (650) 645-2000 and
delete or destroy any copy of this message.

JW
John Wilander
Tue, Feb 22, 2011 2:06 PM

Hi Robert and WebSecurity!

Thanks for all the comments on the Security People vs Developers article.
I've read a lot of interesting perspectives on this list and the blog post
got 1200+ readers. Now we have to move forward.

Therefore we have started up the Developer Outreach Initiative. You can read
about my two proposed outreach projects here (and please comment!):
http://appsandsecurity.blogspot.com/2011/02/developer-outreach-initiative.html

Abbreviated:

Proposed Solution – Security Itches
My first proposition is this: Instead of pushing coding guidelines and
security tools onto developers I think we should start by asking them "What
are your security itches?". Whatever we get back will be our starting point.
Maybe we'll pick ten itches and publish good solutions.
What if they have the wrong itches? Well, the goal of the outreach is
1) to find out what developers think, and 2) to address their itches to
build some well-needed credibility. Before we have credibility we cannot
push coding guidelines. And if developers think SSL certs are their
primary problem, that is important.

Proposed Solution – Open Test Data
Security people tell developers to "do input validation". Input validation
is no news to developers. The problem is defining the data model and testing
the input validation. We can do something important here – building
opentestdata.org. I own the domain and dream about the following beautiful
community effort:
You go to the site and can either "submit test data" or "download test
data". On the submission page you can anonymously enter, e.g., a Portuguese
postal address, an Indian human name, a Swedish postal/zip code ... or 100
SQL injection strings. The effort is almost zero.
On the download page you choose your format and download in context. "We
have European customers so we want European human names, postal addresses,
and phone numbers". Developers will love it. And that's where we can start
promoting security testing!
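[Editor's note: John's point that the hard part is "defining the data model" can be made concrete with a small sketch: whitelist validators for locale-specific fields, driven by exactly the kind of sample data opentestdata.org would host. The regex patterns below are illustrative assumptions, not complete national postal-code formats.]

```python
import re

# Hypothetical whitelist patterns per country code; a real test-data
# repository would supply far richer, vetted pattern sets.
POSTAL_CODE = {
    "SE": re.compile(r"\d{3} ?\d{2}"),   # Swedish: "114 55" or "11455"
    "PT": re.compile(r"\d{4}-\d{3}"),    # Portuguese: "1000-205"
}

def valid_postal_code(value, country):
    """Whitelist check: accept only values that fully match the
    pattern registered for the given country code."""
    pattern = POSTAL_CODE.get(country)
    return bool(pattern and pattern.fullmatch(value.strip()))
```

The point of driving this with shared test data is that the same downloaded set exercises both directions: legitimate Portuguese and Swedish values must pass, and the SQL injection strings from the same repository must fail.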

Kind regards, John Wilander

2011/2/14 robert@webappsec.org

I saw this posted via twitter and thought it was worth mentioning here.
While the example specifies owasp, I am not posting this link to slam
them in particular. I think that the point applies to MANY folks in the
security industry.

Security Vs Developers

http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html

--
John Wilander, https://twitter.com/johnwilander
Chapter co-leader OWASP Sweden, http://owaspsweden.blogspot.com
Co-organizer Global Summit, http://www.owasp.org/index.php/Summit_2011
Conf Comm, http://www.owasp.org/index.php/Global_Conferences_Committee
AM
Adam Muntner
Tue, Feb 22, 2011 5:53 PM

This has been my main research project for the last couple years, with a
security focus.

The repo is at HTTP://fuzzdb.googlecode.com

There have been a lot of quality submissions already, and it could always
use more!

Adam
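[Editor's note: a sketch of how a fuzzdb-style wordlist gets used in practice. The attack strings are inlined here so the example is self-contained; in real use they would be loaded from one of the repo's wordlist files. The validator is a hypothetical whitelist check, not part of fuzzdb.]

```python
import re

def survivors(validator, attack_strings):
    """Return the attack strings the validator wrongly accepts."""
    return [s for s in attack_strings if validator(s)]

# Stand-ins for a fuzzdb SQL-injection wordlist file.
SQLI_SAMPLES = [
    "' OR '1'='1",
    "1; DROP TABLE users--",
    "admin'--",
]

def username_ok(value):
    """Whitelist check: 3-32 word characters only."""
    return re.fullmatch(r"[A-Za-z0-9_]{3,32}", value) is not None
```

An empty survivors list for a given wordlist is a cheap regression check a developer can wire into an ordinary test suite, which is the kind of low-friction security testing the thread is asking for.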

On Feb 22, 2011 12:28 PM, robert@webappsec.org wrote:

Proposed Solution – Open Test Data
Security people tell developers to "do input validation". Input validation
is no news to developers. The problem is defining the data model and testing
the input validation. We can do something important here – building
opentestdata.org. I own the domain and dream about the following beautiful
community effort:
You go to the site and can either "submit test data" or "download test
data". On the submission page you can anonymously enter a e.g. Portuguese
postal address, an Indian human name, a Swedish postal/zip code ... or 100
SQL injection strings. The effort is almost zero.
On the download page you choose your format and download in context. "We
have European customers so we want European human names, postal addresses,
and phone numbers". Developers will love it. And that's where we can start
promoting security testing!

I tried to start something similar at www.qasec.com a couple years ago but
ended up removing
it as I couldn't dedicate the attention that it deserved. I think that
having a repository
of qa test cases (with and without a security focus) is something that is
sorely needed and would
go a long way. As is the case with any open project finding a
dedicated/qualified leader is the most
difficult aspect. If you're saying that you'll lead this effort then I'd be
willing to contribute some
sample plans.

Regards,



R
robert@webappsec.org
Tue, Feb 22, 2011 6:20 PM

Proposed Solution – Open Test Data
Security people tell developers to "do input validation". Input validation
is no news to developers. The problem is defining the data model and testing
the input validation. We can do something important here – building
opentestdata.org. I own the domain and dream about the following beautiful
community effort:
You go to the site and can either "submit test data" or "download test
data". On the submission page you can anonymously enter, e.g., a Portuguese
postal address, an Indian human name, a Swedish postal/zip code ... or 100
SQL injection strings. The effort is almost zero.
On the download page you choose your format and download in context. "We
have European customers so we want European human names, postal addresses,
and phone numbers". Developers will love it. And that's where we can start
promoting security testing!

I tried to start something similar at www.qasec.com a couple years ago but ended up removing
it as I couldn't dedicate the attention that it deserved. I think that having a repository
of qa test cases (with and without a security focus) is something that is sorely needed and would
go a long way. As is the case with any open project finding a dedicated/qualified leader is the most
difficult aspect. If you're saying that you'll lead this effort then I'd be willing to contribute some
sample plans.

Regards,

- Robert
http://www.cgisecurity.com/
http://www.webappsec.org/
http://www.qasec.com/