[WEB SECURITY] my website captcha broken??
bil at corry.biz
Sun Feb 1 09:36:33 EST 2009
Gunter Ollmann wrote on 1/31/2009 5:06 PM:
> Unfortunately, I don't believe you're going to be solving anything by
> increasing the sophistication of the CAPTCHA itself. The technology has
> already been beaten in so many different ways.
Indeed, if the only thing standing between an attacker and monetizing your site is a CAPTCHA, you should know you've just signed up for a lifetime subscription to cat-and-mouse. If the CAPTCHA is weak or poorly implemented, it will be broken. And if the CAPTCHA isn't breakable, then human mules will be used for as long as the cost of doing so is less than the profit generated.
To give a sense of the extreme lengths an attacker will go to in circumventing anti-automation technology, here's an interesting story about Craigslist:
Craigslist is fighting back. Its latest gimmick is phone verification. Posting in some categories now requires a callback phone call, with a password sent to the user either by voice or as an SMS message. Only one account is allowed per phone number. Spammers reacted by using VoIP numbers. Craigslist blocked those. Spammers tried using number-portability services like Grand Central and Tossable Digits. Craigslist blocked those. Spammers tried using their own free ringtone sites to get many users to accept the Craigslist verification call, then type in the password from the voice message. Craigslist hasn't countered that trick yet.
> Instead, you'll have to be smarter about how you rate limit access to the
> SMS sending - probably based upon a mix of IP address and some other
> login/verification identifier.
The latest solutions center on behavior analysis, such as tracking the speed and number of requests in a given window of time, watching for sequential request patterns (?id=1, ?id=2, ?id=3), and so on. One company that is offering a solution based on behavior analysis is Pramana:
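To make the idea concrete, here's a toy sketch (not anything Pramana or anyone else actually ships) of the two behavioral checks mentioned above: a sliding-window rate limit keyed on an (IP, login) pair, and detection of sequential id-walking. All the names and thresholds (MAX_REQUESTS, SEQ_RUN_LIMIT, etc.) are invented for illustration; a real deployment would tune them empirically and persist state somewhere other than process memory.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds -- purely illustrative values.
MAX_REQUESTS = 5        # max requests allowed per sliding window
WINDOW_SECONDS = 60.0   # length of the sliding window
SEQ_RUN_LIMIT = 4       # flag after this many strictly sequential ids

class BehaviorTracker:
    """Toy behavior analysis: per-(ip, login) rate limiting plus
    detection of enumeration patterns like ?id=1, ?id=2, ?id=3."""

    def __init__(self):
        self.requests = defaultdict(deque)  # (ip, login) -> request timestamps
        self.last_id = {}                   # (ip, login) -> (last id, run length)

    def allow(self, ip, login, resource_id, now=None):
        now = time.time() if now is None else now
        key = (ip, login)

        # Sliding-window rate limit: discard timestamps older than the window.
        window = self.requests[key]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS:
            return False
        window.append(now)

        # Pattern check: count how long the run of consecutive ids is.
        prev_id, run = self.last_id.get(key, (None, 1))
        run = run + 1 if prev_id is not None and resource_id == prev_id + 1 else 1
        self.last_id[key] = (resource_id, run)
        return run < SEQ_RUN_LIMIT
```

A client walking ids in order gets cut off after a few requests, while the same volume of non-sequential requests passes until the rate limit itself trips. Of course, as noted below, a bot that randomizes its ids and paces its requests slips right past both checks.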
The company Sehgal founded a year ago, Pramana, takes a different approach. Instead of submitting users to a test, the Atlanta-based company's technology plugs into Web sites and invisibly analyzes users' online behavior to determine who's a human and who's a bot. "We don't demand that users prove they're human," Sehgal says. "We simply watch them and decide for ourselves."
Even that, I suspect, will eventually fail as bots are rewritten to behave more like humans. Consider the 'flirting bots' -- if a bot can be written to fool a human, it can probably be written to fool your site:
The artificial intelligence of CyberLover's automated chats is good enough that victims have a tough time distinguishing the "bot" from a real potential suitor, PC Tools said. The software can work quickly too, establishing up to 10 relationships in 30 minutes, PC Tools said. It compiles a report on every person it meets complete with name, contact information, and photos.
The game of cat-and-mouse continues...