[WEB SECURITY] code review techniques for when you don't trust your developers or testers

Bil Corry bil at corry.biz
Fri Aug 14 00:45:21 EDT 2009

travis at subspacefield.org wrote on 8/13/2009 5:59 PM: 
> On Thu, Aug 13, 2009 at 06:14:45PM -0400, Steven M. Christey wrote:
>> If you could do
>> that type of analysis, then you would also probably have the ability to
>> detect and produce a bug-free system.  The theorists throw around the
>> "undecidable" term a lot when it comes to proving that code doesn't have
>> any bugs, and the evil-developer problem may be an alternate expression of
>> that.
> While automated "understanding" of what an arbitrary program does is
> essentially the halting problem, it is not the case that a given
> program is impossible to analyze.  They just have to be written to be
> amenable to analysis (much easier said than done).

Even so, an "evil developer" is likely to code in a way that subverts analysis.  Consider acts of omission, for example -- code that doesn't exist cannot be analyzed.  An organization concerned about developer threats should consider a defense-in-depth approach: compartmentalize source code development (e.g. the developer who codes a financial transaction function is not the same developer who codes the logging of it); deploy an edge network device that attempts to recognize company-sensitive data traveling across it; or, perhaps most effective, develop ways to detect the *outcome* of an insider attack -- odd traffic, $0.75 accounting errors (a la Cuckoo's Egg), errors in the logs, etc.  The higher the probability that the attacker will eventually get caught, the less likely they are to attempt it.
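To make the "detect the outcome" idea concrete, here's a minimal sketch of one way to flag the Cuckoo's-Egg-style accounting errors mentioned above: reconcile per-account balances and look for the *same* small discrepancy recurring across accounts, which is more indicative of deliberate skimming than one-off rounding noise.  Everything here (function name, thresholds, data shape) is an illustrative assumption, not a description of any real system:

```python
# Hypothetical sketch: flag small, repeating accounting discrepancies.
# Thresholds (max_abs, min_repeats) are illustrative assumptions.
from collections import Counter
from decimal import Decimal

def find_suspicious_discrepancies(expected, actual,
                                  max_abs=Decimal("1.00"), min_repeats=3):
    """Compare per-account expected vs. recorded balances and flag
    small discrepancies that repeat with identical magnitude -- a
    pattern typical of 'salami slicing' insider fraud."""
    diffs = Counter()
    for account, exp in expected.items():
        delta = actual.get(account, Decimal("0")) - exp
        if delta != 0 and abs(delta) <= max_abs:
            diffs[delta] += 1
    # The same small delta across many accounts is far more suspicious
    # than scattered one-off rounding noise, so require repetition.
    return {delta: n for delta, n in diffs.items() if n >= min_repeats}

# Toy data: ten accounts, three of them each short by the same $0.75.
expected = {f"acct{i}": Decimal("100.00") for i in range(10)}
actual = dict(expected)
for i in (1, 4, 7):
    actual[f"acct{i}"] -= Decimal("0.75")

print(find_suspicious_discrepancies(expected, actual))
# flags Decimal('-0.75') occurring 3 times
```

The point isn't the specific check -- it's that outcome detection runs outside the developer's code path, so an insider who controls the application source still has to evade an independent reconciliation they (ideally) can't touch.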

- Bil

Join us on IRC: irc.freenode.net #webappsec

Subscribe via RSS: 
http://www.webappsec.org/rss/websecurity.rss [RSS Feed]
