[WEB SECURITY] code review techniques for when you don't trust your developers or testers

Bill Pennington bill.pennington at whitehatsec.com
Thu Aug 13 19:49:49 EDT 2009


Answers inline...

On Aug 13, 2009, at 4:42 PM, Jim Manico wrote:

>> I would love to see data on how often malicious code actually gets
>> planted by developers. I would be somewhat shocked if that were a
>> bigger problem than a coder introducing SQLi for the 900th time.
>
> One of the greatest failings of modern risk analysis is to ignore
> incredibly low-likelihood but high-impact risks.
>
> An insider senior coder who is evil can take you down, easily.
>
> - Jim

Totally agree on the total disaster a single event can cause, but so can
a direct hit from an asteroid, a wormhole opening under the office, or a
horde of ogres...

I have seen developers go bad; I know it happens from time to time. But
a failure of modern risk analysis is spending a lot of time, effort, and
money protecting against rare, high-impact events while ignoring the
common, everyday ones.

Can we not get data on the frequency of occurrence, and should that data
not drive the time, effort, and money we put toward solving the problem?
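
As a back-of-the-envelope illustration of why frequency belongs in the
math, the sketch below compares expected annual loss for a common,
moderate-impact problem against a rare, catastrophic one. The class name
and all figures are invented placeholders, not real incident data.

    // Toy expected-annual-loss comparison. All figures are invented
    // placeholders for illustration only, not measured frequency data.
    public class RiskWeighting {

        // Expected annual loss = events per year x cost per event
        static double expectedAnnualLoss(double eventsPerYear, double costPerEvent) {
            return eventsPerYear * costPerEvent;
        }

        public static void main(String[] args) {
            double sqli    = expectedAnnualLoss(4.0,     50000.0);   // common, moderate impact
            double insider = expectedAnnualLoss(0.005, 5000000.0);   // rare, catastrophic impact
            System.out.printf("SQLi:         $%,.0f per year%n", sqli);
            System.out.printf("Evil insider: $%,.0f per year%n", insider);
        }
    }

Plug in whatever frequencies and costs you believe; the point is that the
spend should follow the product of the two, not the impact alone.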

>
> ----- Original Message ----- From: "Bill Pennington" <bill.pennington at whitehatsec.com>
> To: "Steven M. Christey" <coley at linus.mitre.org>
> Cc: "Jim Manico" <jim at manico.net>; <kuznetso at alum.mit.edu>; "Mat  
> Caughron" <mat at phpconsulting.com>; <websecurity at webappsec.org>;  
> "Hoffman, Billy" <billy.hoffman at hp.com>
> Sent: Thursday, August 13, 2009 1:01 PM
> Subject: Re: [WEB SECURITY] code review techniques for when you don't trust your developers or testers
>
>
>> The only two insider-evil-coder forensics investigations I ever did
>> were of the business-logic type. Both were so subtle that you would
>> be highly unlikely to detect them. The only reason they were detected
>> is that the bad guys had made verbal comments to others, which led to
>> a hand review of every check-in these people had done. Even then the
>> reviewers did not catch it until four different developers had looked
>> at it. I have not seen a case where people are just randomly making
>> network connections, but I will readily admit my data set is small.
>>
>> I would love to see data on how often malicious code actually gets
>> planted by developers. I would be somewhat shocked if that were a
>> bigger problem than a coder introducing SQLi for the 900th time.
>>
>>
>> ---
>> Bill Pennington
>> SVP Services
>> WhiteHat Security Inc.
>> http://www.whitehatsec.com
>>
>> On Aug 13, 2009, at 3:14 PM, Steven M. Christey wrote:
>>
>>>
>>> On Thu, 13 Aug 2009, Jim Manico wrote:
>>>
>>>> This is a pain to get right, even more of a pain to maintain -
>>>> especially since the tools to manage Java policy are poor, at best.
>>>> But I think it's our best defense in protecting an enterprise
>>>> against the insider evil coder.
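
(A rough sketch of the defense Jim is describing: with a security manager
installed and no SocketPermission granted to application code, an
unexpected outbound connection fails inside the JVM. The host and port
below are made-up illustration values, and a real deployment would use a
tuned policy file rather than the default policy.)

    import java.net.Socket;

    // Sketch of a policy-based control: the default security manager
    // grants application code no outbound SocketPermission, so a planted
    // call-home attempt dies with a SecurityException.
    public class PolicyDemo {
        public static void main(String[] args) {
            System.setSecurityManager(new SecurityManager()); // default policy
            try {
                new Socket("203.0.113.9", 4444); // what a planted backdoor might try
                System.out.println("connection allowed (policy too permissive)");
            } catch (SecurityException e) {
                System.out.println("blocked by policy: " + e.getMessage());
            } catch (java.io.IOException e) {
                System.out.println("connect failed: " + e);
            }
        }
    }
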
>>>
>>> There may be techniques for finding the "insider evil coder" doing
>>> things that don't make sense, like executing an external program and
>>> sending results across a hard-coded network connection - Veracode
>>> did a paper on back door detection about a year ago. To get any
>>> depth at all, though, you'd need a well-developed model of what the
>>> application is supposed to do, especially with respect to its
>>> interfaces with the OS, downstream components, etc.
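
(As a crude sketch of that kind of "doesn't make sense" check, nothing
close to the Veracode work, the scan below just flags source lines that
launch external programs or embed a literal IP address. The two regexes
are illustrative only and will produce false positives without the model
of intended behavior described above.)

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.regex.Pattern;

    // Walks a source tree and prints lines that exec external programs
    // or contain a hard-coded IPv4 address.
    public class SuspiciousPatternScan {
        private static final Pattern EXEC =
                Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec|ProcessBuilder");
        private static final Pattern HARDCODED_IP =
                Pattern.compile("\\b\\d{1,3}(\\.\\d{1,3}){3}\\b");

        public static void main(String[] args) throws IOException {
            scan(new File(args.length > 0 ? args[0] : "."));
        }

        private static void scan(File f) throws IOException {
            if (f.isDirectory()) {
                File[] children = f.listFiles();
                if (children == null) return;
                for (File child : children) scan(child);
            } else if (f.getName().endsWith(".java")) {
                BufferedReader in = new BufferedReader(new FileReader(f));
                String line;
                int lineNo = 0;
                while ((line = in.readLine()) != null) {
                    lineNo++;
                    if (EXEC.matcher(line).find() || HARDCODED_IP.matcher(line).find()) {
                        System.out.println(f.getPath() + ":" + lineNo + ": " + line.trim());
                    }
                }
                in.close();
            }
        }
    }
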
>>>
>>> That still won't address intentionally introduced "business logic"
>>> issues (with a narrower use of the term than perhaps Jeremiah's), or
>>> the use of legitimate interfaces to create covert channels. In these
>>> cases, detection requires full human understanding of what the code
>>> is supposed to be doing, combined with *simultaneous* expert
>>> understanding of the domain. Consider an odd divide-by-zero error
>>> that can only occur in a certain time zone, on one day of a 30-year
>>> cycle, with several other factors at play. If you could do that type
>>> of analysis, then you would probably also have the ability to detect
>>> every bug and produce a bug-free system. The theorists throw around
>>> the "undecidable" term a lot when it comes to proving that code
>>> doesn't have any bugs, and the evil-developer problem may be an
>>> alternate expression of that.
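
(For what it's worth, a trigger of that shape can be tiny. The toy below
only divides by zero on one calendar date, in one time zone, in one year
out of every thirty; the zone, the date, and the cycle are invented for
illustration, and a real planted bug would hide the condition inside
plausible-looking arithmetic rather than an explicit test.)

    import java.util.Calendar;
    import java.util.TimeZone;

    // Toy date-conditioned divide-by-zero: the divisor reaches zero only
    // on Feb 28, in the Chatham Islands zone, in years where YEAR % 30 == 7.
    public class RareTrigger {
        static long perPeriodCharge(long totalCents) {
            Calendar now = Calendar.getInstance(TimeZone.getTimeZone("Pacific/Chatham"));
            int periods = 12;
            if (now.get(Calendar.MONTH) == Calendar.FEBRUARY
                    && now.get(Calendar.DAY_OF_MONTH) == 28
                    && now.get(Calendar.YEAR) % 30 == 7) {
                periods = 0; // fires on one day per 30-year cycle
            }
            return totalCents / periods; // ArithmeticException on that one day
        }

        public static void main(String[] args) {
            System.out.println(perPeriodCharge(120000L));
        }
    }
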
>>>
>>> In other words, I think it's impossible to prove that modern
>>> web-based applications are correct from a security perspective (even
>>> with respect to only one layer of source code, and ignoring the
>>> environment), and this is even more impossible if you care about
>>> business-logic correctness on top of the injection side of the
>>> spectrum.
>>>
>>> - Steve
>>>
>>
>


----------------------------------------------------------------------------
Join us on IRC: irc.freenode.net #webappsec

Have a question? Search The Web Security Mailing List Archives: 
http://www.webappsec.org/lists/websecurity/archive/

Subscribe via RSS: 
http://www.webappsec.org/rss/websecurity.rss [RSS Feed]

Join WASC on LinkedIn
http://www.linkedin.com/e/gis/83336/4B20E4374DBA


