[WEB SECURITY] 2009 Top 25 Programming Errors

Arian J. Evans arian.evans at anachronic.com
Thu Jan 15 21:13:11 EST 2009



On Thu, Jan 15, 2009 at 1:00 PM, Steven M. Christey
<coley at linus.mitre.org> wrote:
>
> On Thu, 15 Jan 2009, Arian J. Evans wrote:
>
> > I believe the language and the misguided "remediation cost" sections
> > of the Top 25 do just that.
>
> The remediation costs were an attempt to help people sub-prioritize within
> the list - which things could they knock off right now.  The potential
> costs are at least defined, limited as they may be.  I can't find them in
> the OWASP Top Ten or the WASC Classification.

I agree that a notion of remediation cost is useful. As I said,
though, I think the current implementation is more harmful than
helpful, which is why I think it would be better removed than
poorly implemented.

This is because I think the cost is unrealistically low in about
half of the places, and another 25% of the entries are entirely
context-dependent and not worth guessing at.

> > I do not see this current Top 25 document being data-centric at all.
> > What data was used in the creation process? Or was it simply the biased
> > sample of a Mitre-mailing-list democracy? (not criticizing you here,
> > just asking)
>
> The Top 25 FAQ covers this (not to mention other concerns already
> mentioned on this list):
>
>  Why don't you use hard statistics to back up your claims?

I do when I deliver vulnerability data on web software. I believe
I have the largest bucket of black box defect statistics on web
software that exists.

There have been many scientific studies done on human
communication and the effects of persuasion. This is where,
for example, NLP came from, though the science behind
many of these studies is questionable. I do find that my
own anecdotal experience mirrors these types of studies
and the recommendations of other seasoned professionals:

http://www.asktog.com/columns/047HowToWriteAReport.html

That is an excellent, basic example.


>  The appropriate statistics simply aren't publicly available. The
>  publicly available statistics are either too high-level or not
>  comprehensive enough. And none of them are comprehensive across all
>  software types and environments.

Understood. I was asking mostly rhetorically. I was unsure whether
you had used some existing CVE data for speculative stat generation.


>  For example, in the CVE vulnerability trends report for 2006, 25% of all
>  reported CVE's either had insufficient information, or could not be
>  characterized using CVE's 37 flaw type categories. That same report only
>  covers publicly-reported vulnerabilities, so the types of issues may be
>  distinct from what security consultants find (and they're often
>  prohibited from disclosing their findings).

Understood.


>  Finally, some of the Top 25 captures the primary weaknesses (root
>  causes)  of vulnerabilities - such as CWE-20, CWE-116, and CWE-73.
>  However, these root causes are rarely reported or tracked.

Understood, and they are not mathematically equal. In most cases a
syntax weakness has a one-to-many mapping to *both* attack vectors
and vulnerabilities (unique exploitable instantiations of that
weakness). I think we could still sort and represent this fairly
using a signed-rank system like Wilcoxon, or whatever the currently
fashionable ranking methods in academia are.
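
To make the ranking idea concrete, here is a minimal sketch (Python,
with invented counts) of how one might compare two sources of
weakness-prevalence data with a Wilcoxon signed-rank test. The
weakness classes, the counts, and the use of scipy are all my own
assumptions for illustration, not anyone's real data:

# Hypothetical sketch: compare weakness prevalence from two sources
# (e.g. black-box findings vs. publicly reported CVEs) using the
# Wilcoxon signed-rank test. Every count below is invented.
from scipy.stats import wilcoxon

# Paired observations: instances per weakness class in each source.
blackbox = {"CWE-79": 412, "CWE-89": 167, "CWE-20": 98,
            "CWE-116": 55, "CWE-73": 23}
cve_data = {"CWE-79": 350, "CWE-89": 190, "CWE-20": 40,
            "CWE-116": 12, "CWE-73": 30}

classes = sorted(blackbox)            # keep the pairs aligned
x = [blackbox[c] for c in classes]
y = [cve_data[c] for c in classes]

# Tests whether the paired differences are symmetric around zero,
# i.e. whether the two sources disagree systematically.
stat, p_value = wilcoxon(x, y)
print(f"W={stat}, p={p_value:.3f}")

The point is only that, given paired counts, an ordinal test
sidesteps the fact that weakness classes are not "mathematically
equal"; it says nothing about where such counts would come from.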

The hard part, I understand, is getting the data. And no one really
has the data, or at least no one admits to having it, which means
that the majority of pundits are, at best, anecdotal and
unscientific about their agendas and ideas.

Just making that point really clear.


> In other words - we could have used CVE stats just like was done for the
> 2007 OWASP Top Ten, but we don't think they're adequate for covering the
> actual mistakes that programmers make.  I'm fairly sure that the 2009
> OWASP Top Ten effort is going to try alternate tactics.

Yes. The OWASP Top 10 has improved, but I have always disliked its
mixing of attack vectors with weakness patterns.


> > The unique kind of thing you are addressing with this new standard is
> > that it this effort is dabbling in business case. Previous SANS lists
> > did not. This is a different kind of issue than server configs and IIS
> > patches.
>
> Agree.  I view it as a strength of the Top 25 that it even attempts to
> define a threat model of the skilled, determined attacker (Appendix B),
> which helps with prioritization.  The OWASP Top Ten and WASC don't seem to
> do this - there's some threat that's implicit in the selection of which
> issues are important.


How does this help with prioritization?



> > I want something simple and effective that is controlled by people in
> > the actual hands-on web security community.
>
> It should be noted that the Top 25 is intended to cover far more than web
> apps, which may be where some of the disconnect comes from.


I understand, but it *will* become the web app version. Unless you
make and maintain a distinct "web app" version, this list is going
to become the web app security standard, and web apps are where the
vast majority of modern software is moving.


> > I was very frustrated by lack of taxonomy and hierarchy confusion
> > between uses of Risk, Threat, Attack, Weakness, and Vulnerability, and
> > your efforts here are long overdue.
>
> BTW I still don't have a clear mindset for all of these either :-/

If you want to discuss offline, I have a categorization system for
these that I have been working on for a long time (in fact, I am
pretty sure I stole some of it from you). Unfortunately it has some
slippery slopes in it. I think you and I have discussed those
slippery issues before, and IIRC we agree on them.

That said, I have no "Failures" and "Impropers" in it. I think that
is unnecessary pomp and bombast. And this is coming from someone who
is pompous and bombastic.
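
For what it is worth, here is a minimal sketch of the kind of
hierarchy I mean, in Python. The class and field names are my own
illustrative assumptions, not the system referenced above: a
weakness is a pattern in the code, a vulnerability is a specific
exploitable instantiation of it, and an attack is what exercises it.

# Hypothetical sketch of one way to keep Threat/Attack/Weakness/
# Vulnerability distinct; names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Weakness:              # a pattern in the code, e.g. CWE-79
    cwe_id: str
    description: str

@dataclass
class Vulnerability:         # a unique exploitable instantiation
    weakness: Weakness
    location: str            # e.g. parameter, URL, or file/line

@dataclass
class Attack:                # a concrete attempt against vulns
    vector: str              # e.g. a crafted HTTP parameter
    targets: List[Vulnerability] = field(default_factory=list)

@dataclass
class Threat:                # an agent that might mount attacks
    agent: str
    capability: str

# One weakness maps to many vulnerabilities, and many vectors can
# reach the same vulnerability -- the one-to-many point above.
xss = Weakness("CWE-79", "improper neutralization of web output")
vulns = [Vulnerability(xss, loc) for loc in ("search?q=", "comment")]
attack = Attack(vector="crafted 'q' parameter", targets=[vulns[0]])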


> > I think that this thing needs to be approached and treated as a minimum
> > standard of due care (in testing, building, measuring) because that is
> > how it's going to be used.
>
> We are trying to promote it as such, but that's not the message that's
> being heard, as you and others are pointing out.


Come on. You have to know how this is going to be used.

I will have to build a "SANS/CWE Top 25 Report". I will have to
respond to "SANS/CWE Top 25 RFPs". I will have to agree to test
managed code running within clearly defined boundaries for "buffer
overflows", etc.


>
> > Security people shoot themselves in the foot by telling business owners
> > and developers that they have failed, and that it won't cost much to
> > fix.
>
> I see your point here and agree it's a larger problem.  I don't think
> developers are receiving it this way, however.

I would like to see a study on that. My overall experience is that
the majority of developers do not respond well to criticism, and
that the more important stakeholders (business owners, dev
executives) find that approach amateurish and childish. And those
are the people you have to communicate with to bring about change.

> > I do not want to use your document and language for communicating with
> > my clients, but the current initiative guarantees that we all will be
> > using it as a standard.
>
> A thought-provoking comment.  But, frankly, I have faith in the consulting
> community to educate their customers about why just caring about the Top
> 25 is naive.

I am not a religious man when it comes to security consultants.

But this is not the point. The Top 25 should stand on its own two
legs if it is competent.

You will have products and reports bound to this. The SANS/CWE
Top 25 is likely to become the language and feature set that
products are required to support, from WAFs to scanners to
SIM/SEMs.

Just like OWASP is today.

Just like the BITS/Roundtable appsec garbage was before OWASP
(in the financial services sector).

Just like the NCUA and later regulatory bodies took the worst
possible verbiage directly from Graff and van Wyk's Secure Coding
book for their guidelines (partially my fault).

Long and short -- I think we agree on everything but the contents
and use of this Top 25 list.

I have already stated my concerns, and I have received no new
evidence to make me reconsider them.

I do hope that as the document matures the language and writing
mature with it, and that the very important "Remediation Cost"
bucket is grounded in reality and then properly filled.

In the meantime I will focus on WASC and OWASP materials
for communications with my clients.

btw// SC-L is a great list for appsec academia and smart minds,
but this is the largest list if you want to reach people both in
the webappsec community and out in the field "getting things done".

Actually, this is how I usually candidly describe them:

SC-L: high signal-to-noise, errs on the side of pundits and academia
OWASP: fairly high quality, tends to be biased towards source-code reviews
WASC: more people new to the industry, a great place to find/answer
newbie questions; tends to have a bias towards black-box testing
and widgets

I like them all for different reasons, but all three are
communities affected by the SANS/CWE Top 25, so it would be good to
open up the RFC to all three places.

Cheers,

--
Arian Evans

Anti-Gun/UN people: you should weep for
Mumbai. Your actions leave defenseless dead.

"Among the many misdeeds of the British
rule in India, history will look upon the Act
depriving a whole nation of arms, as the
blackest." -- Mahatma Gandhi
