websecurity@lists.webappsec.org

The Web Security Mailing List


Re: [WEB SECURITY] Great article outlining a core issue with many in the security community

Ory Segal
Mon, Feb 14, 2011 6:51 PM

"Vance, Michael" Michael.Vance@salliemae.com wrote on 14/02/2011
08:19:41 PM:

From: "Vance, Michael" Michael.Vance@salliemae.com
To: "websecurity@lists.webappsec.org" websecurity@lists.webappsec.org
Cc: Ory Segal/Haifa/IBM@IBMIL
Date: 14/02/2011 08:20 PM
Subject: RE: [WEB SECURITY] Great article outlining a core issue
with many in the security community

> Where do you draw the line at what developers should know and do
> automatically and what they should only do if there is a specific,
> written requirement? Where do things get too technical for product
> owners and stakeholders to define them? We don't expect there to be
> a written requirement to use a specific data type or array
> structure; why should there have to be a specific requirement to
> sanitize input using a whitelist? Shouldn't that be automatic, part
> of "good coding practices"?

IMHO -
That's the wrong kind of requirement to list there. If you define the
users of the system and the use cases, you can derive the rest (the abuse
cases). For example, user group A should have access to functions f1, f2
and f3, while user group B shouldn't have access to f1 or f2 but should
have access to f3, and so on. From these use cases you can define the
abuse cases at a high level - for example, user group A shouldn't be able
to access information belonging to user group B, etc.
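To make the f1/f2/f3 example concrete, here is a minimal deny-by-default sketch in Java; the class, enum and group names are hypothetical and only mirror the use/abuse cases above.

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Hypothetical names throughout; this only illustrates the use/abuse cases above.
public class FunctionAccessPolicy {

    enum UserGroup { GROUP_A, GROUP_B }
    enum AppFunction { F1, F2, F3 }

    // Explicit allow-list per group; anything not listed is denied.
    private static final Map<UserGroup, Set<AppFunction>> ALLOWED = Map.of(
            UserGroup.GROUP_A, EnumSet.of(AppFunction.F1, AppFunction.F2, AppFunction.F3),
            UserGroup.GROUP_B, EnumSet.of(AppFunction.F3));

    // Deny by default: unknown groups or unlisted functions are rejected.
    static boolean isAllowed(UserGroup group, AppFunction function) {
        return ALLOWED.getOrDefault(group, EnumSet.noneOf(AppFunction.class))
                      .contains(function);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed(UserGroup.GROUP_B, AppFunction.F3)); // true  (use case)
        System.out.println(isAllowed(UserGroup.GROUP_B, AppFunction.F1)); // false (abuse case)
    }
}
```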

Your architect (or security champion) should translate these use-cases &
abuse cases into more technical requirements, such as - "the system should
protect from horizontal privilege escalation", or "All attempts to
read/write from the database should be validated against SQL Injection to
avoid data corruption or leakage", etc.

These high-level requirements can then be fulfilled by your developers,
using the tools they have - secure coding best practices.

You shouldn't have your stakeholders state "make the system unbreakable"
and then expect your developers to solve the problem. That's a recipe for
disaster. The more granular your requirements and design are, the easier
they will be to implement and to produce metrics for.

(But I'm stating the obvious now.)

> The problem is getting owners and stakeholders to think in terms of
> abuse and misuse cases. Their requirements are always based on the
> "happy path": "If the user provides the input we are expecting, this
> is how the application should behave." They define edits and
> exception paths in terms of business logic, but never in terms of
> technical capabilities. They are concerned that a customer may try
> to withdraw more money from their account than the available
> balance, not that they may try to withdraw more money than the input
> buffer is designed to hold. That last type of requirement is up to
> the developer in many shops.
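As an illustration of that last distinction, a minimal Java sketch might keep the business edit ("no more than the available balance") and the technical edits ("no more than the input is designed to hold") as separate, explicit checks; the names and limits below are made up.

```java
import java.math.BigDecimal;

// Hypothetical sketch of the withdrawal example: the business rule and the
// technical limits are two different kinds of requirement. All names and
// limits here are illustrative.
public class WithdrawalValidator {

    private static final BigDecimal MAX_SINGLE_WITHDRAWAL = new BigDecimal("10000.00");

    static BigDecimal validate(String rawAmount, BigDecimal availableBalance) {
        final BigDecimal amount;
        try {
            // Technical edit: the field must parse as a number at all.
            amount = new BigDecimal(rawAmount.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Amount is not a valid number");
        }
        // Technical edits: bounds and precision the system is designed to handle.
        if (amount.signum() <= 0 || amount.scale() > 2
                || amount.compareTo(MAX_SINGLE_WITHDRAWAL) > 0) {
            throw new IllegalArgumentException("Amount outside supported range");
        }
        // Business edit: the "happy path" rule the stakeholders did write down.
        if (amount.compareTo(availableBalance) > 0) {
            throw new IllegalArgumentException("Amount exceeds available balance");
        }
        return amount;
    }
}
```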

> Even when developers put the question to non-technical stakeholders,
> it often perplexes them: "If the program receives input that it is
> not designed to handle, how should I have the program handle it?"
> Seems a little circular or oxymoronic, doesn't it? You're asking for
> a design specification for something that is not in the design.

That's because it is exactly like asking the stakeholders whether you
should use a while loop or a for loop - that's outside their domain. They
should define the security requirements, and the developers should fulfill
them. If the requirements are well defined, everyone will know what to do.

> I've sat in requirements sessions and brought up abuse cases where
> the response from the business is, "What are the chances that
> someone will do that?" The answer, as we on this list know, is that
> the chance is low, but when that one determined, skilled individual
> turns his attention to your application, you still need to be ready
> for him. The chances that a burglar is going to try to walk in your
> front door on any given day are pretty low, too, but we all still
> lock our front doors.

That's why you do a risk assessment / threat model, etc. You should come
to the meeting prepared to answer technical questions about the relevance
of each threat or risk, so that the stakeholders will be able to
prioritize the solutions.

> We need to get rid of the antagonistic Us vs. Them attitude between
> Security and AppDev, and we need to start by a) stopping accusing
> the other of not knowing s**t about the other's discipline and b)
> admitting that we don't know s**t about the other's discipline. Only
> then will we actually start to listen to and learn from each other.

I totally agree.

> -Michael

From: websecurity-bounces@lists.webappsec.org [mailto:websecurity-bounces@lists.webappsec.org] On Behalf Of Ory Segal
Sent: Monday, February 14, 2011 2:22 AM
To: robert@webappsec.org
Cc: websecurity@lists.webappsec.org; websecurity-bounces@lists.webappsec.org
Subject: Re: [WEB SECURITY] Great article outlining a core issue
with many in the security community

Hi,

Developers shouldn't be blamed for not writing secure applications -
it's usually the fault of product owners and stakeholders who don't
define (and prioritize) security as a critical requirement for a
software project.

You don't expect developers to build a pretty and usable user
interface, and you don't expect them to define the flow and logic
of your application. That's why product owners and stakeholders have
to define product requirements, use cases, users, scenarios, etc.

Developers develop code, which should adhere to the requirements of
the project.

As long as security isn't a first-class citizen in the world of
software requirements, I suspect we won't see software that is
secure by design.

Having security requirements also means that product owners,
developers and QA teams can verify that the requirements are met.
They can measure their success and understand how to get better.
Anything less than this is simply a waste of time, i.e. bolting
security onto the project as an afterthought.
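For instance, once "user A must not read user B's records" is a written requirement, QA can turn it into an automated check. A rough Java sketch follows; the endpoint, token handling and expected status codes are assumptions, not any real application's API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical verification check: authenticate as user A and try to read
// a record that belongs to user B. URL and token are placeholders.
public class HorizontalAccessCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String userAToken = "token-for-user-a"; // placeholder credential

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://app.example.com/accounts/USER_B_ID/statement"))
                .header("Authorization", "Bearer " + userAToken)
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The requirement is met only if the server refuses the request.
        if (response.statusCode() == 403 || response.statusCode() == 404) {
            System.out.println("PASS: horizontal access was denied");
        } else {
            System.out.println("FAIL: expected 403/404, got " + response.statusCode());
        }
    }
}
```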

What we do need to ask ourselves is - if nobody is prioritizing
security as a critical software requirement - what are we doing wrong here?

-Ory

Ory Segal
Security Products Architect
AppScan Product Manager
Rational, Application Security
IBM Corporation
Tel: +972-9-962-9836
Mobile: +972-54-773-9359
e-mail: segalory@il.ibm.com

From:        robert@webappsec.org
To:        websecurity@lists.webappsec.org
Date:        14/02/2011 12:36 AM
Subject:        [WEB SECURITY] Great article outlining a core issue with many in the security community
Sent by:        websecurity-bounces@lists.webappsec.org

I saw this posted via Twitter and thought it was worth mentioning
here. While the example specifies OWASP, I am not posting this link to
slam them in particular. I think that the point applies to MANY folks in
the security industry.

Security Vs Developers
http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html

Regards,
- Robert Auger


The Web Security Mailing List

WebSecurity RSS Feed
http://www.webappsec.org/rss/websecurity.rss

Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA

WASC on Twitter
http://twitter.com/wascupdates

websecurity@lists.webappsec.org


"Vance, Michael" <Michael.Vance@salliemae.com> wrote on 14/02/2011 08:19:41 PM: > From: "Vance, Michael" <Michael.Vance@salliemae.com> > To: "websecurity@lists.webappsec.org" <websecurity@lists.webappsec.org> > Cc: Ory Segal/Haifa/IBM@IBMIL > Date: 14/02/2011 08:20 PM > Subject: RE: [WEB SECURITY] Great article outlining a core issue > with many in the security community > > Where do you draw the line at what developers should know and do > automatically and what they should only do if there is a specific, > written requirement? Where do things get too technical for product > owners and stakeholders to define them? We don?t expect there to be > a written requirement to use a specific data type or array > structure; why should there have to be a specific requirement to > sanitize input using a whitelist? Shouldn?t that be automatic, part > of ?good coding practices?? *IMHO* - That's the wrong kind of requirement you listed there. If you define the users of the system, and the use cases, you can derive the rest (abuse cases). For example, user group A should access functions f1,f2,f3, user group B, shouldn't have access to f1,f2 but access f3, etc. From these use-cases, you can define the abuse cases, in high level - for example, user group A shouldn't be able to access information belonging to user group B, etc. Your architect (or security champion) should translate these use-cases & abuse cases into more technical requirements, such as - "the system should protect from horizontal privilege escalation", or "All attempts to read/write from the database should be validated against SQL Injection to avoid data corruption or leakage", etc. These high level requirements, can then be fulfilled by your developers, using the tools they have - secure coding best practices. You shouldn't have your stakeholders state: "Make the system unbreakable", and then expect your developers to solve the problem. That's a recipe for disaster. The more granular your requirements and design are, the easier it will be to implement and produce metrics for. (But, I'm stating the obvious now) > > The problem is getting owners and stakeholders to think in terms of > abuse and misuse cases. Their requirements are always based on the > ?happy path.? ?If the user provides the input we are expecting, this > is how the application should behave.? They define edits and > exception paths in terms of business logic, but never in terms of > technical capabilities. They are concerned that a customer may try > to withdraw more money from their account than the available > balance, not that they may try to withdraw more money than the input > buffer is designed to hold. That last type of requirement is up to > the developer in many shops. > > Even when developers put the question to non-technical stakeholders, > it often perplexes them: ?If the program receives input that it is > not designed to handle, how should I have the program handle it?? > Seems a little circular or oxymoronic, doesn?t it? You?re asking for > a design specification for something that is not in the design. That's because it is exactly like asking the stakeholders - should I use a While or a For loop. That's outside their domain. They should define the security requirements, and the developer should fulfill them. If the requirements are well defined, everyone will know what to do. > > I?ve sat in requirements sessions and brought up abuse cases where > the response from the business is, ?What are the chances that > someone will do that?? 
The answer, as we on this list know, is that > the chance is low, but when that one determined, skilled individual > turns his attention to your application, you still need to be ready > for him. The chances that a burglar is going to try to walk in your > front door on any given day is pretty low, too, but we all still > lock our front doors. > That's why you do a risk assessment / threat model, etc. You should come to the meeting prepared to answer technical questions about the relevance of each threat or risk, so that the stakeholders will be able to prioritize the solutions. > We need to get rid of the antagonistic Us vs. Them attitude between > Security and AppDev, and we need to start by a) stopping accusing > the other of not knowing s**t about the other?s discipline and b) > admitting that we don?t know s**t about the other?s discipline. Only > then will we actually start to listen to and learn from each other. I totally agree. > > -Michael > > From: websecurity-bounces@lists.webappsec.org [mailto:websecurity- > bounces@lists.webappsec.org] On Behalf Of Ory Segal > Sent: Monday, February 14, 2011 2:22 AM > To: robert@webappsec.org > Cc: websecurity@lists.webappsec.org; websecurity-bounces@lists.webappsec.org > Subject: Re: [WEB SECURITY] Great article outlining a core issue > with many in the security community > > Hi, > > Developers shouldn't be blamed for not writing secure applications - > it's usually the fault of product owners and stakeholders that don't > define (and prioritize) security as a critical requirement for a > software project. > > You don't expect developers to build a pretty and usable user > interface, you also don't expect them to define the flow and logic > of your application. That's why product owners and stakeholders have > to define product requirements, use cases, users, scenarios, etc. > > Developers develop code, which should adhere to the requirements of > the project. > > As long as security won't be a 1st class citizen in the world of > software requirements, I suspect we won't see software that is > secure by design. > > Having security requirements also means that product owners, > developers and QA teams can verify that the requirements are met. > They can measure their success, and understand how to get better. > Anything less than this is simply a waste of time, i.e. bolting > security on the project in hindsight. > > What we do need to ask ourselves is - if nobody is prioritizing > security as a critical software requirement - what are we doing wrong here??? > > -Ory > ------------------------------------------------------------- > Ory Segal > Security Products Architect > AppScan Product Manager > Rational, Application Security > IBM Corporation > Tel: +972-9-962-9836 > Mobile: +972-54-773-9359 > e-mail: segalory@il.ibm.com > [image removed] > > > > From: robert@webappsec.org > To: websecurity@lists.webappsec.org > Date: 14/02/2011 12:36 AM > Subject: [WEB SECURITY] Great article outlining a core issue > with many in the security community > Sent by: websecurity-bounces@lists.webappsec.org > > > > > I saw this posted via twitter and thought it was worth mentioning > here. While the example specifies owasp, I am not posting this link to slam > them in particular. I think that the point applies to MANY folks in > the security industry. 
> > Security Vs Developers > http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html > > Regards, > - Robert Auger > WASC Co Founder/Moderator of The Web Security Mailing List > http://www.qasec.com/ > http://www.webappsec.org/ > > > _______________________________________________ > The Web Security Mailing List > > WebSecurity RSS Feed > http://www.webappsec.org/rss/websecurity.rss > > Join WASC on LinkedIn http://www.linkedin.com/e/gis/83336/4B20E4374DBA > > WASC on Twitter > http://twitter.com/wascupdates > > websecurity@lists.webappsec.org > http://lists.webappsec.org/mailman/listinfo/websecurity_lists.webappsec.org > > > This E-Mail has been scanned for viruses.
James Manico
Tue, Feb 15, 2011 6:22 AM

> Your architect (or security champion) should translate these use-cases &
> abuse cases into more technical requirements, such as - "the system should
> protect from horizontal privilege escalation", or "All attempts to
> read/write from the database should be validated against SQL Injection to
> avoid data corruption or leakage", etc.

This is not prescriptive enough, IMO. The following requirement:

> the system should protect from horizontal privilege escalation

...only leaves a developer scratching their head. I would enhance this to
require something along the lines of:

1. Use a centralized, data-contextual access control methodology which
   restricts users from modifying or viewing data of users with the same role.

2. Use a centralized authority and a deny-by-default mechanism so that
   new features must be configured before being exposed.

3. Hard-code ACTIVITIES in code instead of ROLES so that policy can be
   changed in real time without requiring code changes and re-deployment.
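A rough Java sketch of how those three points might fit together in one centralized, deny-by-default check; every class and method name here is hypothetical rather than taken from any particular framework.

```java
// Hypothetical sketch: a single centralized check, keyed on activities
// rather than roles, deny-by-default, and aware of who owns the data.
public class AccessController {

    public enum Activity { VIEW_STATEMENT, EDIT_PROFILE, APPROVE_LOAN }

    public interface OwnedResource {
        String ownerId();   // the user the data belongs to
    }

    public interface PolicyStore {
        // Central authority: returns true only if policy explicitly grants it.
        boolean isPermitted(String userId, Activity activity);
    }

    private final PolicyStore policy;

    public AccessController(PolicyStore policy) {
        this.policy = policy;
    }

    /** Deny by default; throws if the user may not perform the activity on this data. */
    public void assertAccess(String userId, Activity activity, OwnedResource resource) {
        // Point 2: central, deny-by-default - unconfigured activities simply fail here.
        if (!policy.isPermitted(userId, activity)) {
            throw new SecurityException("Activity not permitted: " + activity);
        }
        // Point 1: data-contextual - having the same role is not enough, the data must be yours.
        if (!resource.ownerId().equals(userId)) {
            throw new SecurityException("Resource belongs to another user");
        }
    }
}
```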

"All attempts to read/write from the database should be validated against

SQL Injection to avoid data corruption or leakage", etc.

This is not an accurate requirement (and is quite dangerous, actually). I
would require something along the lines of:

1. Use parameterized queries and data binding to prevent SQL injection
   (both in code and within stored procedures).

2. When parameterized queries hurt performance, use escaping of each
   individual untrusted variable.
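For example, requirement 1 with plain JDBC might look like the sketch below; the table and column names are made up.

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical table/column names; the point is that the untrusted value
// is bound as data and never concatenated into the SQL text.
public class AccountLookup {

    static BigDecimal balanceFor(Connection conn, String accountNumber) throws SQLException {
        String sql = "SELECT balance FROM accounts WHERE account_number = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, accountNumber); // bound parameter, not string concatenation
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getBigDecimal("balance") : BigDecimal.ZERO;
            }
        }
    }

    // The anti-pattern the requirement is meant to rule out:
    //   "SELECT balance FROM accounts WHERE account_number = '" + accountNumber + "'"
}
```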

> These high-level requirements can then be fulfilled by your developers,
> using the tools they have - secure coding best practices.

But this is part of the problem. Requirements from security folks are often
inaccurate or "high level". We need to get clear and prescriptive. I think
the best bet is collaboration between a security architect and a
developer-centric architect to build prescriptive requirements.

- Jim

