[WEB SECURITY] RE: [Webappsec] Tacking A Difficult Problem - Solutions
bob.fish at hotmail.com
Fri Apr 20 12:22:16 EDT 2007
I have had similar experiences with the scanners you mention. I know that you can get several "unique" vulnerabilities from a single root cause.
You give some good advice, but I would be careful about taking half measures that pass PCI while leaving vulnerabilities in the site. Since this is for PCI we are dealing with people's credit card numbers, and it looks like a significant amount of work will have to be done anyway. The one thing I would add: in addition to turning on ValidateRequest in the web.config for IIS, also incorporate the Anti-XSS Libraries for encoding output. They are far more robust than ValidateRequest alone. Here is a link to the Anti-XSS Libraries site: http://msdn2.microsoft.com/en-us/security/aa973814.aspx. I will be giving a free webcast on implementing these libraries next Tuesday; here is a link to the registration for that talk: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032333138&Culture=en-US.
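To illustrate the output-encoding idea: encode untrusted data at the moment it is written into the page, so the browser renders it as text instead of markup. A minimal sketch in Python (this is an illustration of the concept only, not the Anti-XSS library API; `render_greeting` is a made-up example function):

```python
# Output encoding sketch: untrusted input is HTML-encoded at the point it
# is written into the page, so injected markup is rendered inert.
import html

def render_greeting(user_supplied_name: str) -> str:
    # Encode at output time; the browser sees text, not executable markup.
    safe = html.escape(user_supplied_name, quote=True)
    return "<p>Hello, {}!</p>".format(safe)

print(render_greeting("<script>alert(1)</script>"))
# prints <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```

The point is that encoding happens where data leaves your app, regardless of what input filtering did or did not catch upstream.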
Bubba - good luck on passing your PCI audit.
Date: Fri, 20 Apr 2007 00:21:21 -0700
From: arian.evans at anachronic.com
To: bubbagump123 at gmail.com; webappsec at lists.owasp.org; websecurity at webappsec.org; webappsec at securityfocus.com
Subject: Re: [Webappsec] Tacking A Difficult Problem - Solutions
Hello Bubba -- Interesting thread. Sounds a bit ridiculous. Here are some "quick fixes" for your issues. Let's ping the WASC lists; some smart minds there probably have better ideas than I do:
I've never seen 1000+ *real*, *unique* XSS. I mean, I saw 18,000 once from one of the "big three" scanners, but that's because of default <scare> mode scanner vuln grouping, duplicate issues, and false positives that were tricky to validate. But we all know the scanner industry. I'll address that in a minute. For now some *solution* meat:
1. WAFs (web application firewalls)
2. IIS/web server validation modules
3. Fix your code.
4. Are you interpreting/using the scanner correctly?

1. WAFs. Web application firewalls. It works like this:
+ Apache + mod_security + mod_proxy == done. You could do this all ON your existing IIS server right now, today.
Certainly review the performance impact, but you could block all the usual ASP Classic weakness suspects from exploitation, or at least make WebInspect or AppScan incapable of detecting them. Green report == PCI OK.
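A setup along these lines might look like the following (ModSecurity 1.x-era directives; treat this as a sketch of the shape, not a tested policy -- the filter patterns and proxy target are placeholders to tune for your app):

```apache
# Apache fronting IIS as a reverse proxy
ProxyPass        / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/

# ModSecurity 1.x blacklist filtering (illustrative patterns only)
SecFilterEngine On
SecFilterDefaultAction "deny,log,status:403"
SecFilter "<script"
SecFilter "javascript:"
```

Blacklist patterns like these are a stopgap (see the caveat in 2.6 below), but they are exactly the kind of thing that turns a scanner's report green quickly.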
2. IIS request validation module:
+ IIS + .NET + Custom HTTP Request module == done. (.NET's version of an IIS ISAPI filter)
A long time ago there was a directory traversal that, if properly hex encoded, ran all over .NET (it was posted near-0day on Bugtraq). The first fix from MS, while a performance kludge, is a pretty simple one and should work for you as well. It works like this:
2.1 Install .NET
2.2 Write an HTTP request validation module (quick, easy, painless)
2.3 Install it on your server and trap/process all requests, looking for dangerous strings. Block them.
2.4 Here's the KB on the HTTP Req module for canonicalizing URLs: http://support.microsoft.com/kb/887289
2.5 You want to do the same, but wire it up to System.Web.HttpRequest.ValidateInput (.NET 1.1, and IIRC the same namespace in 2.0)
Scott Hanselman did something like this for Corillian; you can search his blog for more ideas: http://www.hanselman.com/blog/
2.6 The .NET request validator is basically a really big blacklist. The blacklist game (e.g.-you are smarter than the black hats) is a dangerous game to play, but hey, it's MS. And it gets you under your 8 week deadline, like next week.
For PCI you should be more than good.
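The gist of such a request-validation module, sketched in Python for illustration (a real one would be a .NET HttpModule hooked into the IIS request pipeline; the pattern list and function names here are invented for the sketch):

```python
import re

# Illustrative blacklist in the spirit of .NET request validation.
# DANGEROUS_PATTERNS and is_request_allowed are made-up names for this sketch.
DANGEROUS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),    # script tag injection
    re.compile(r"on\w+\s*=", re.IGNORECASE),     # inline event handlers
    re.compile(r"javascript\s*:", re.IGNORECASE),
]

def is_request_allowed(params: dict) -> bool:
    """Return False if any parameter value matches a blacklisted pattern."""
    for value in params.values():
        for pattern in DANGEROUS_PATTERNS:
            if pattern.search(value):
                return False
    return True

print(is_request_allowed({"q": "harmless search"}))            # prints True
print(is_request_allowed({"q": "<script>alert(1)</script>"}))  # prints False
```

A real module would trap every request before page code runs and return a 4xx on a match -- which is also why the blacklist caveat in 2.6 applies: anything not on the list sails through.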
3. Fixing the code, Baby Steps:
Migrate --> ASP.NET
3.1 Convert your ASP pages to .aspx to use the new ASPX processing engine.
3.2 Leave them as is: convert with Visual Studio by renaming, try to recompile, and debug the few issues you'll have.
3.3 I would think 200 pages could be converted in a few days, tops -- maybe even a day.
3.4 Now you can use the new request validators via page directives:
The .NET request validators will block:
+ Most of what the XSS scanners can throw at them (minus false positives for encoding types like UTF-7 or full-width ASCII, since the scanners don't seem aware of server-set encoding types). There are a few things that will slip by the .NET request validators, but I haven't seen the scanners test for them (correctly).
+ You could also trap your SQL Injection with the request.validators and the I/O controls (the latter would take more coding).
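Concretely, the directive knobs look like this (standard ASP.NET settings; in .NET 1.1 request validation is on by default, so the main thing is not to turn it off):

```xml
<!-- Per page: at the top of the .aspx file -->
<%@ Page Language="C#" ValidateRequest="true" %>

<!-- Site-wide: in web.config -->
<configuration>
  <system.web>
    <pages validateRequest="true" />
  </system.web>
</configuration>
```

With that in place, requests containing suspicious markup get rejected with an HttpRequestValidationException before your page code ever sees them.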
4. Is this AppScan? I haven't seen WebInspect 7, but no other scanner (than AppScan) is as aggressive about finding HTTP Response Splitting. AppScan is also pretty infamous for breaking out findings [over-]aggressively. They used to list every successful *test* for every name-value pair for every protocol for every HTTP verb as a unique vuln. Bleh. They seem to collapse slightly better now, but still far from ideal.
(Sorry WF guys, your checks are getting *a lot* better, but you know I think your groupings still suck. None of you [scanner vendors] can find the fairly trivial XSS (4 attack vectors, to be precise) in my own website. >= 3 years. That's lame.)
Not sure how one could verify 1700 vulns for legitimacy, but I would suggest if you are using AS that you:
4.1. Turn off the HTTP Response Splitting check. Explain to your PCI auditor that you have no intermediary proxies (do you, eh?). Ask them how they intend to get the victim browser to make 2 HTTP requests w/out client side code execution. Yes, we call that XSS or getting the victim browser to run malicious code from your malware site. So this is a non-issue. Meaning: HTTP Response Splitting doesn't matter. That's why no one exploits it. Probably why no one understands it.
Sure you can split the response. But what exactly are you going to do with the second one?
If you can split the response, get the victim browser to make the 2nd request and get the browser to chomp on the split response, then you are already XSSing or CSRFing or SessionFixating or SessionHijacking etc.
4.2. Check those pesky XSS. AppScan will flag every name-value pair, then modify some nv-pairs in the request, retest, and flag it again, then change the HTTP verb from POST to GET and flag it again, [...] wash, rinse, repeat. If you have 1000 unique XSS vulns, then you are saying your app has over 1000 unique name-value pairs reflected/persisted in OUTPUT?! Or stored in unique db tables (e.g. forum entries)? Over 1000 places of unique business logic? (Though ASP spaghetti code can lend itself to this madness, and lacks global places to encode output, the math still eludes me: 200 pages * 5 unique nv-pairs == 1000 unique nv-pair XSS, hmm...)
I suspect that you can divide by 10 here. At least.
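A quick way to sanity-check the count is to collapse findings to unique (page, parameter) pairs, since each pair is one root cause no matter how many verb/encoding variants the scanner flagged. A Python sketch with made-up findings (field names are illustrative, not any scanner's export format):

```python
# Collapse scanner findings to unique root causes: one (page, parameter)
# pair is one fix, regardless of how many request variants were flagged.
findings = [
    {"page": "/search.asp", "param": "q",    "verb": "GET"},
    {"page": "/search.asp", "param": "q",    "verb": "POST"},  # same root cause
    {"page": "/login.asp",  "param": "user", "verb": "POST"},
    {"page": "/login.asp",  "param": "user", "verb": "GET"},   # same root cause
    {"page": "/login.asp",  "param": "pass", "verb": "POST"},
]

root_causes = {(f["page"], f["param"]) for f in findings}
print(len(findings), "raw findings ->", len(root_causes), "unique root causes")
# prints 5 raw findings -> 3 unique root causes
```

Run that over a real 1700-finding export and I'd expect the divide-by-10 estimate above to look conservative.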
4.3. Not sure about the SQL injection. If you have 200 pages, I can see 200 SQLi in a "classic" ASP Classic app. If it is AppScan, probably 30% or more of that is noise. However, you still need some filtering in the meantime, so see above.
Oh yeah, mod_security/mod_proxy would also let you get rid of your MSSQL OLE DB error messages, and probably reduce scanner detection of exploitable XSS by 80%. Same with writing a custom HTTP module for IIS or .NET.
That's my story.
I plan to fire up the blog next week. Need to return to productive contribution to the community. Or something like that.
For those of you (undoubtedly) dying to know where I've been for the last year: I burned out on scanner benchmarking, scrapped the OWASP tools project, escaped Kansas (www.venganza.org ), settled down, got married, and had a few kids.
Yes, of course I am kidding,
--
Arian J. Evans
solipsistic software security sophist
"I spend most of my money on motorcycles, martinis, and mistresses. The rest of it I squander."
On 4/19/07, Bubba Gump <bubbagump123 at gmail.com> wrote: