[WEB SECURITY] Application Security Hacking Videos

Paul Schmehl pauls at utdallas.edu
Thu Jun 1 17:56:35 EDT 2006


Ivan Ristic wrote:
> On 6/1/06, Paul Schmehl <pauls at utdallas.edu> wrote:
>>
>> BTW, there are security companies that I will not even consider
>> purchasing products from simply because they have had remote exploit
>> vulnerabilities in their code.  I can assure you I'm not alone.  As more
>> of us practitioners begin to cull the poorly programmed applications
>> from our purchase mix, we will weed out the bad programmers ourselves.
> 
> I think it's more important to observe how companies deal with such
> issues. Security problems are a fact of life, mostly because there is
> no way to write code that is 100% guaranteed to be free of faults.
> 
I understand what you're saying, but as a consumer of the products, I 
think differently.  If a security company is programming buffer 
overflows into its own code, then its developers probably don't 
understand buffer overflows very well.  How can I trust them, then, to 
protect me against buffer overflows in my applications?  Or even to 
detect them?
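
Just to be concrete about the kind of mistake I mean, here's a contrived 
fragment (made up for illustration - it isn't from anyone's product):

/* Contrived fragment, made up for illustration - not from any vendor's code. */
#include <string.h>

void handle_request(const char *input)   /* input is attacker-controlled */
{
    char buf[64];

    /* No length check: anything over 63 bytes runs past the end of buf
       and starts overwriting whatever sits next to it on the stack. */
    strcpy(buf, input);

    /* The bounded version is one line away, which is exactly my point -
       the fix is trivial *if* you understand the problem:
       snprintf(buf, sizeof(buf), "%s", input); */
}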

Sometimes the choices are very simple.  For example, I don't use 
sendmail anywhere.  It's had a multitude of problems, indicating to me 
that the people who code it don't really understand what they're doing 
*or* the code is too complex for *anyone* to fully understand it.  I use 
postfix.  Can you recall a security problem with postfix?

So, when I install a new OS, sendmail comes off, postfix goes on.  Now I 
have one less thing to worry about.

Same reason I don't use Windows Servers in public IP space.  Not 
trustworthy (I use that word deliberately.)

I know programmers aren't perfect.  Nobody is.  But, if you're a 
security vendor, you'd better have enough quality control processes in 
place to catch the problems before you ship them out the door.  Because, 
if they get out the door and become known, I ain't buyin' your products 
any more.  I can't trust you.  I don't really care *why* I can't trust 
you.  It may be you don't care.  It may be your processes aren't 
thorough enough.  It may be your people aren't talented enough.  It may 
be you just don't understand the problem well enough.  But I don't care. 
All I know is, I can't trust you.
>
>> Yes, we need much better training.  Yes, we need much better awareness
>> of the complexities of attack vectors.  But until programmers and
>> leadership in software companies take the bull by the horns and start
>> addressing the problem, we will continue to see point solutions that
>> hide the ugly warts.
> 
> For one reason or another I don't think we can ever expect the average
> programmer to understand all the security issues. That's why, IMHO, it
> is essential to move to (and design) programming languages and
> platforms that are not vulnerable to buffer overflows and, in general,
> make it very difficult or impossible to write insecure code.
> 
> The people in charge of major programming platforms need to take
> responsibility and make the (programming) world a more secure space. I
> am not saying that would solve all our problems, but I think it would
> solve most of the ones we are dealing with on a daily basis.
> 
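The sort of thing Ivan is describing, as I read it, is a platform where 
an unchecked strcpy into a bare array isn't even expressible - every 
buffer carries its length with it.  A rough sketch, still in C and with 
a made-up helper type, just to show the idea:

/* Hypothetical "bounded buffer" type - a made-up helper, not a real
   library.  The idea: the platform only hands out buffers that carry
   their own length, so an unchecked copy into a bare array can't
   be written at all. */
#include <stddef.h>

struct bounded_buf {
    char   data[64];
    size_t len;
};

/* Copying into it can only truncate, never overrun. */
static void bb_set(struct bounded_buf *b, const char *src)
{
    size_t i = 0;

    while (src[i] != '\0' && i < sizeof(b->data) - 1) {
        b->data[i] = src[i];
        i++;
    }
    b->data[i] = '\0';
    b->len = i;
}
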
You can't solve people problems with technology.  You just can't.  If 
you could, we could throw enough boxes up on the network that we'd never 
have to worry about a break-in.  But when a user uses "password" as the 
password for the root account, technology isn't going to save you. 
Policy and processes are.

The same is true for a development firm.  When programmers keep coding 
buffer overflows, technology isn't going to save you.  (Remember when 
Microsoft announced they had "eliminated buffer overflows in Windows XP" 
at their New York launch?  They bought a $10,000,000 tool that was 
supposed to go through the code and find them all.  Less than a month 
later eEye found the UPnP overflow - the most devastating single hole 
ever found in a Windows product.)

Nobody has yet invented the mousetrap that will catch all mice, and 
nobody ever will.  The slickest, most "airtight" programming 
language will have unforeseen weaknesses that *somebody* smart will 
figure out.

It has ever been thus.

You fix the people, or you'll never fix the problem.

-- 
Paul Schmehl (pauls at utdallas.edu)
Adjunct Information Security Officer
The University of Texas at Dallas
http://www.utdallas.edu/ir/security/