[WEB SECURITY] Re: Tackling A Difficult Problem - Solutions

Arian J. Evans arian.evans at anachronic.com
Fri Apr 20 03:21:21 EDT 2007

Hello Bubba -- Interesting thread. Sounds a bit ridiculous. Here are some
"quick fixes" for your issues. Let's ping the WASC minds; some smart folks
there probably have better ideas than I do:

I've never seen 1000+ *real*, *unique* XSS. I mean, I saw 18,000 once from
one of the "big three" scanners, but that's because of default <scare> mode
scanner vuln grouping, duplicate issues, and false positives that were
tricky to validate. But we all know the scanner industry. I'll address that
in a minute. For now, some *solution* meat:

1. WAFs.
2. IIS/Web Server validation modules
3. Fix your code.
4. Are you interpreting/using the scanner correctly?

1. WAFs. Web app firewalls. They work like this:

+ Apache + mod_security + mod_proxy == done. You could stand this up in
front of your existing IIS server right now, today.

Certainly review performance, but you could block all the usual ASP Classic
weakness suspects from exploitation, or at least make WI or AS incapable of
detecting them. Green report == PCI OK.
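A minimal sketch of what that Apache front end might look like (mod_security
1.x syntax; the backend address and the patterns are illustrative
assumptions, not a tuned ruleset):

```apache
# Reverse-proxy everything to the existing IIS box (address is a placeholder)
ProxyPass / http://10.0.0.5:80/
ProxyPassReverse / http://10.0.0.5:80/

SecFilterEngine On
SecFilterScanPOST On

# Reject the obvious XSS probes scanners fire at ASP Classic apps
SecFilterSelective ARGS "<[[:space:]]*script" "deny,status:403"

# Reject classic SQL injection probes (tune these to your app!)
SecFilterSelective ARGS "(%27|')[[:space:]]*(--|;|union)" "deny,status:403"
```

Crude, yes, but it is deployable in an afternoon and touches zero lines of
application code.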

2. IIS request validation module:

+ IIS + .NET + Custom HTTP Request module == done. (.NET's version of an IIS
ISAPI filter)

A long time ago there was a directory traversal that, if properly hex
encoded, ran all over .NET (it was near-0day posted on BT). The first fix
from MS, while a performance kludge, is a pretty simple one and should work
for you as well. It works like this:

2.1 Install .NET
2.2 Write an HTTP request validation module (quick, easy, painless)
2.3 Install it on your server and trap/process all requests, looking for
dangerous strings. Block them.
2.4 Microsoft published a KB article on an HTTP module for canonicalizing
URLs; the approach is the same here.
2.5 You want to do the same, but wire it up to:
System.Web.HttpRequest.ValidateInput (.NET 1.1, and IIRC the same namespace in 2.0)
Scott Hanselman did something like this for Corillian; you can search his
blog for more ideas: http://www.hanselman.com/blog/

2.6 The .NET request validator is basically a really big blacklist. The
blacklist game (i.e., betting you are smarter than the black hats) is a
dangerous game to play, but hey, it's MS. And it gets you under your 8-week
deadline, like next week.
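Steps 2.1-2.3 boil down to a blacklist filter sitting in front of every
request. Just to make the pattern concrete, here it is sketched as WSGI
middleware in Python (purely illustrative -- the class name and patterns are
mine; the real thing would be the .NET HTTP module described above):

```python
import re

# Crude blacklist in the spirit of the .NET request validator.
# These patterns are illustrative, not Microsoft's actual list.
DANGEROUS = re.compile(r"<\s*script|javascript:|('|%27)\s*(--|;)", re.IGNORECASE)

class BlacklistMiddleware:
    """Reject any request whose query string matches a dangerous pattern."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if DANGEROUS.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)
```

Same idea, same caveat: a blacklist buys you time, not correctness.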

For PCI you should be more than good.

3. Fixing the code, Baby Steps:

Migrate --> ASP.NET

3.1 Convert your ASP pages to .aspx to use the new ASP.NET processing engine.
3.2 Leave them as-is: convert with Visual Studio by renaming, try to
recompile, and debug the few issues you'll have.
3.3 I would think 200 pages could be converted in a few days, a week tops.
3.4 Now you can use the new request validators via page directives:
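For example (ValidateRequest is the real ASP.NET 1.1 directive attribute,
and it actually defaults to on in 1.1):

```aspx
<%@ Page Language="C#" ValidateRequest="true" %>
```

or globally in web.config:

```xml
<configuration>
  <system.web>
    <pages validateRequest="true" />
  </system.web>
</configuration>
```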

The .NET request validators will block:

+ Most of the XSS the scanners can throw at them (minus false positives for
encoding types like UTF-7 or full-width ASCII, since the scanners don't seem
aware of server-set encoding types). There are a few things that will slip
by the .NET request validators, but I haven't seen the scanners test for them.

+ You could also trap your SQL injection with the request validators and the
I/O controls (the latter would take more coding).
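When you do get to fixing the dynamic SQL for real, the pattern is
parameterized queries. Sketched here in Python with sqlite3 just to show the
shape (your converted ASP.NET pages would do the same thing with ADO.NET
parameters; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(name):
    # The driver binds the value; attacker input can't change the SQL shape.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [(1, 'alice')]
# A classic injection string is just treated as a non-matching literal:
print(find_user("' OR '1'='1"))  # []
```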

4. Is this Appscan? I haven't seen WI 7, but no other scanner (than AS) is
as aggressive about finding HTTP Response Splitting. Appscan is also pretty
infamous for breaking out findings [over] aggressively. They used to list
every successful *test* for every nv-pair for every protocol for every HTTP
Verb as a unique vuln. Bleh. They seem to collapse slightly better now, but
still far from ideal.

(Sorry WF guys, your checks are getting *a lot* better, but you know I think
your groupings still suck. None of you [scanners] can find the fairly
trivial XSS (4 attack vectors, to be precise) in my own website. >= 3 years.
That's lame.)

Not sure how one could verify 1700 vulns for legitimacy, but if you are
using AS I would suggest that you:

4.1. Turn off the HTTP Response Splitting check. Explain to your PCI auditor
that you have no intermediary proxies (do you, eh?). Ask them how they
intend to get the victim browser to make 2 HTTP requests w/out client-side
code execution. (Yes, we call that XSS: getting the victim browser to run
malicious code from your malware site.) So this is a non-issue. Meaning:
HTTP Response Splitting doesn't matter in practice. That's why no one
exploits it. Probably why no one understands it.

Sure you can split the response. But what exactly are you going to do with
the second one?

If you can split the response, get the victim browser to make the 2nd
request and get the browser to chomp on the split response, then you are
already XSSing or CSRFing or SessionFixating or SessionHijacking etc.
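For anyone fuzzy on the mechanics: "splitting" is just CRLF injection into a
header value. A toy Python sketch (the path and payload are invented)
showing why one unvalidated redirect parameter turns one response into two:

```python
# Attacker-controlled redirect target with embedded CRLFs (invented payload)
user_supplied = (
    "/home\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
    "HTTP/1.1 200 OK\r\n"
    "Content-Length: 29\r\n"
    "\r\n"
    "<script>alert('xss')</script>"
)

# A naive server drops the value straight into a Location header:
raw = "HTTP/1.1 302 Found\r\nLocation: " + user_supplied + "\r\n\r\n"

# The stream now contains two complete responses:
print(raw.count("HTTP/1.1"))  # 2
```

Which is exactly the point above: by the time the victim browser is chewing
on that second response, you already had script execution anyway.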

4.2. Check those pesky XSS. AS will flag every name-value pair, then it will
modify some nv-pairs in the request, retest, and flag it again, then it will
change the HTTP verb from POST to GET and flag it again, [...] wash, rinse,
repeat. If you have 1000 unique XSS vulns, then you are saying your app has
over 1000 unique name-value pairs reflected/persisted in output?! Or stored
in unique db tables (e.g., forum entries)? Over 1000 places of unique
business logic? (Though ASP spaghetti code can lend itself to this madness,
and lacks global places to encode output, the math still eludes me: 200
pages * 5 unique nv-pairs == 1000 unique nv-pair XSS, hmm...)
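And on that missing "global place to encode output": the long-term XSS fix
is one escape call at render time (Server.HTMLEncode in ASP land). The idea,
in Python, with a made-up helper name:

```python
from html import escape

def render_comment(user_input):
    # Escape once, at output time, instead of per-page input filtering.
    return "<p>" + escape(user_input, quote=True) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```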

I suspect that you can divide by 10 here. At least.

4.3. Not sure about SQL injection. If you have 200 pages, I can see 200 SQLi
in a "classic" ASP Classic app. If it is Appscan, probably 30% or more of
that is noise. However, you still need some filtering in the meantime, so
see above.

Oh yeah, mod_security/mod_proxy would also let you get rid of your MSSQL OLE
DB error messages, and probably reduce scanner detection of exploitable XSS
by 80%. Same with writing a custom HTTP module for IIS or .NET.

That's my story.

I plan to fire up the blog next week. Need to return to productive
contribution to the community. Or something like that.

For those of you (undoubtedly) dying to know where I've been for the
last year: I burned out on scanner benchmarking, scrapped the OWASP tools
project, escaped Kansas (www.venganza.org), settled down, got married, and
had a few kids.

Yes, of course I am kidding,

Arian J. Evans
solipsistic software security sophist

"I spend most of my money on motorcycles, martinis, and mistresses. The rest
of it I squander."

On 4/19/07, Bubba Gump <bubbagump123 at gmail.com> wrote:
> I recently ran a web application vulnerability scanner against one of the
> websites that I am in charge of securing.  I was shocked to find the
> following results:
> 1000+ unique Cross Site Scripting vulnerabilities
> 300+ unique SQL Injection vulnerabilities
> 400+ unique HTTP Response Splitting vulnerabilities
> All of these issues were valid, not false positives.
> This particular website consists of more than 200 ASP pages running on IIS
> 5.
> Upon further investigation, I found that this website does not have any
> type of centralized input or output validation, or database access
> component.  Every page has its own code for processing input and making
> database queries, all using dynamic SQL.  The only input validation for the
> entire website is Javascript based.
> This website is being audited for PCI compliance.  The auditors feel that
> all of these issues need to be fixed in order for the website to be
> compliant with PCI standards.  Our final audit is in 8 weeks, and at that
> time the auditors want to see another set of scan results showing a clean
> scan with all of the above vulnerabilities fixed.
> One option we have is to quickly assemble a team of developers that is
> totally dedicated to fixing all of these issues.  Each of our 200+ web pages
> will need substantial coding changes to add all of the necessary input
> validation, output encoding, and conversion of the dynamic SQL to either
> parameterized queries or stored procedures.  This will also involve lots of
> regression testing to ensure that we don't break the website in the process
> of fixing all of these vulnerabilities.
> Do we have any other good options to get to a clean scan in such a short
> timeframe?  Is there any type of global solution that could be applied at
> either the web server or network level that would mitigate all or most of
> these issues, without requiring a massive programming effort?
> This is the most challenging web application security issue I have faced.
> I am very interested to know if any of you have faced a similar problem and
> how you have tackled it.
> Thanks in advance for your help on this.
> _______________________________________________
> Webappsec mailing list
> Webappsec at lists.owasp.org
> http://lists.owasp.org/mailman/listinfo/webappsec