Sorry to be reposting this from a forum topic, but it's been a week and I'm kinda stuck. Originally I said (quoting):

Anyone have any success in stopping one of these CAPTCHA-porn attacks?

The site I administer must have got on one of those sites where people can complete my CAPTCHAs to get paid or see porn. I've been reading about these all day, since it's the first I've heard of them, having never been nabbed by one in the past few years.

Spam user accounts are being created at a rate of about 30 per hour. Then each account manages to create about 5-10 spam blog posts per hour. This site is already using a free Mollom account as a spam and quality control service, but Mollom isn't blocking the user creation very well, and there are real people behind the keys anyway.

The IP addresses are from all over, like India, Estonia, France, the Philippines, Brunei Darussalam, and the US, so it doesn't make any sense to block specific geographic IP ranges. The spam email addresses being used follow no pattern I can predict, so I can't successfully block them that way either.

Manually moderating user submissions doesn't make any sense for this community website, since the admins would never find the real users amid the hundreds of CAPTCHA-porn user accounts. For now, I have stopped all public registration on the site. I had registration switched off for three days; when I set it back to our normal public registration, the waves of spam picked right up where they left off.

CAPTCHA.net seems to think it's a pretty small threat, but that doesn't make it any less annoying to admins trying to keep the site content at a high quality:

It is sometimes rumored that spammers are using pornographic sites to solve CAPTCHAs: the CAPTCHA images are sent to a porn site, and the porn site users are asked to solve the CAPTCHA before being able to see a pornographic image. This is not a security concern for CAPTCHAs. While it might be the case that some spammers use porn sites to attack CAPTCHAs, the amount of damage this can inflict is tiny (so tiny that we haven't even noticed a dent!). Whereas it is trivial to write a bot that abuses an unprotected site millions of times a day, redirecting CAPTCHAs to be solved by humans viewing pornography would only allow spammers to abuse systems a few thousand times per day. The economics of this attack just don't add up: every time a porn site shows a CAPTCHA before a porn image, they risk losing a customer to another site that doesn't do this.

That's fine insight, but I'm still being porn attacked. Anyone have any tactic I can deploy to stop this madness?

Files: 
Comment  File          Size      Author
#12      ga_spike.png  18.25 KB  seaneffel

Comments

Since first posting this on the forums a week ago, I set user registration to admin-only. A week later, after setting registration back to public, I'm seeing twice as much spam account activity.

I understand that this is not strictly a Mollom issue, but maybe Mollom would like to look at this scenario and we can help each other make it better. I can give admin access to you Mollom folks; I've been trusting you for a year or more already through the use of your module.

Hmmm. Interesting.

My first suggestion is to configure your group permissions in such a way that when a user joins, he can't do anything on the site; his account is just there.

After that, you will have to come up with a way to make this person include some personal information that he can't easily fake. Maybe install Ubercart and require him to enter credit card info to gain additional access. The cost of the access could be $0.00, but he'd have to enter his credit card info. At that point, you have his name and address.

Just a thought.

Another way, since I doubt that these spammers will actually read their emails, is to send an email after they first log in, requiring them to do another step to gain full access. Click on a link and answer a series of questions, for example.

I was thinking something along the lines of preventing the CAPTCHA image from being lifted onto another site. I can't ask people to pull out a credit card to get an account, even if the payment is zero.

I understand.

Your problem is interesting, and the captcha people should consider working this out. Personally, it hasn't happened to me... yet. I'm hoping that a solution is in place before it happens.

Status:Active» Postponed (maintainer needs more info)

Trying to prevent images from being lifted is a never-ending struggle that isn't worth fighting. I'm considering adding a 'domain' to the initial image request and verifying the requesting site whenever an image is loaded, but I'm not sure it'd be foolproof. If the spammers are literally taking a screenshot of the image, there's really not much that can be done. :/
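To make that "verify the requester" idea concrete, here is a minimal sketch (not actual Drupal or CAPTCHA module code; all names and the secret are made up) of binding each CAPTCHA challenge to the session that requested it with an HMAC-signed, short-lived token. It wouldn't stop a fast relay of the solved answer, but a short expiry narrows the window the farmed-out solver has:

```python
import hashlib
import hmac
import time

# Hypothetical server-side secret; on a real site this lives in config, not code.
SECRET = b"replace-with-a-real-secret"

def issue_captcha_token(session_id, now=None):
    """Bind a CAPTCHA challenge to the session that requested it."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, ("%s:%s" % (session_id, ts)).encode(),
                   hashlib.sha256).hexdigest()
    return "%s:%s" % (ts, sig)

def verify_captcha_token(session_id, token, max_age=120, now=None):
    """Reject answers that come from a different session or arrive too late."""
    try:
        ts, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, ("%s:%s" % (session_id, ts)).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return ((now if now is not None else time.time()) - int(ts)) <= max_age
```

A screenshot of the image still leaks the puzzle, but the answer only validates when submitted with the original session's token, inside the expiry window.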

Maybe it's time to rethink using CAPTCHA and require something Java-based. I'm not speaking of you specifically, but of the web as a whole.

Have you ever seen those annoying little advertisement windows that move across the screen when you're trying to read a news web page? You have to click the "close button" to get rid of it. Maybe something along those lines. "Click on the green dot to verify that you are a human." The dot would then appear at a random point on the screen or even move slowly around the screen. By the time the porn site has a screenshot, the dot has already moved.

Status:Postponed (maintainer needs more info)» Active

I haven't even seen the site that is causing this harm - I don't have the chops to figure that out. But I would like to work with you guys to figure out how to properly protect this site. Especially so if it means we can strengthen spam protection for everyone involved. What can I do to provide more access to solve this problem?

@ericw If it turns out that the images are being lifted by screenshot, then creating animated GIFs that scroll and obscure parts of the image at all times is a reasonable idea, provided accessibility isn't a major concern.

Good points, seaneffel. I like the GIF idea.

Unfortunately, it's a never-ending battle. As soon as you make a new system for captcha, someone else will find a way to break it. It's best to keep trying and never give up.

Now as I understand it, your problem is this:

1. Evil spammers are creating accounts on your site.
2. When they try to post something, they encounter a captcha.
3. They're using captcha-porn techniques to allow them to post spam messages, NOT to activate an account.

Is all of that correct?

I don't know if email verification before posting is an option for you, but it seems it would work well. However, I can see how this could be defeated by a program that "reads" the email and follows any links inside it, thus posting the message.

Again, I think something java-based would work in this instance, requiring the user to click on a certain point of the screen. I know that may not be something you have the ability to do (God knows I don't!), but it's a logical solution, IMHO.
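One hedged way to harden the email step against that kind of link-following bot is to require a POSTed form, not a bare link click, to complete activation. A rough sketch (the in-memory store and names are made up; a real site would use its database):

```python
import secrets
import time

TOKEN_TTL = 24 * 3600  # seconds a confirmation token stays valid

# Hypothetical in-memory store: token -> (account_id, issued_at)
_pending = {}

def issue_confirmation(account_id, now=None):
    """Create a one-time activation token to email to the new account."""
    token = secrets.token_urlsafe(16)
    _pending[token] = (account_id, now if now is not None else time.time())
    return token

def confirm(token, method, now=None):
    """Activate only on a POSTed form. A bot that merely follows every
    link in the email issues GETs and never completes activation."""
    if method != "POST":
        return None
    entry = _pending.pop(token, None)  # one-time use
    if entry is None:
        return None
    account_id, issued = entry
    if (now if now is not None else time.time()) - issued > TOKEN_TTL:
        return None
    return account_id
```

Of course, this only raises the bar for automated followers; a human spammer can still click the button.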

I am using email verification on all user accounts. I even get email from the spammers asking why they can't log in to the site. They read like this:

"No can logon. Must to help, please."

And since it looks like a real person has contacted the admin asking for help, I assume they are legit users and approve their accounts.

Then that account goes apesh*t and clobbers the site with low mortgage loan ads.

So with email verification, captcha-protected user registration forms, and some access rules, the spammers are still coming through because they are human - probably being paid to complete spam posting.

If they're using real people behind it, I doubt there's much you can do besides IP blocking. Real people can circumvent a *LOT* of things.

The IPs are coming from all over the world. Estonia, Malaysia, and Peru were the most recent. Is there some way to trace back the origination point of the individuals creating accounts? I'm sure that they are getting linkage to our site from some other service - how do I find that service and block it from referring people to us?

Status  File size
new     18.25 KB

So Google Analytics is showing me that on Nov 3 I had a spike in traffic to mysite.com/user; see the attachment. It went from about three visits per day to ninety-something. The source of the traffic is marked as "direct," but when I break that number down further I can see that the source countries are all over the world.

I would like to see what happens if I alias the mysite.com/user location with the Path module. But I would like to block use from the original URL, which I don't think Path does. I would also like to be sure that I don't lock my users or my admin account out of the site. Is this possible?

Though mine is not a porn attack (it seems to be random business promotions as far as I can tell), I've tried all combinations of user settings, CAPTCHA/reCAPTCHA, the suggestions above, and now Mollom, with no luck stopping spammers from creating accounts.

This thread is interesting, giving me some ideas how they are getting around the spam blockers, so I'm subscribing in hopes someone has a solution to suggest!

Akismet would be a great module to port over to Drupal. That would help.

Oops! Just checked. Akismet IS a Drupal module.

CAPTCHAS and email verification are primarily designed to prevent automated spam attacks by requiring a user to prove to the site that they are real humans. So, real humans can thwart CAPTCHAs and spam blockers without any problems.

I posed this question at a local Drupal meetup here in Boston and got a couple of ideas for preventing real people from creating accounts and spamming. The objective of most of these methods is to create enough of a barrier to ward off human spammers without warding off our intended community.

  • Charge for accounts. Even a fee of 1 dollar would prevent all of the spammers from engaging the site. The problem with this is that our desired users would probably pass us over too.
  • Slow down account creation. Since the human spammer is likely to be paid per post, it could mess with their business model to slow down the rate at which they can even create an account. If all users had to wait 1 hour before being granted user access, the spammers would have already moved on while the intended community chomped at the bit to post real stuff. This is likely to piss off our real community, and there is no telling how many of those spam users will come back in an hour.
  • Community moderation. Community members could flag inappropriate posts, and at some threshold those posts are unpublished or deleted. The trouble in my case is that the spammers utterly swarm the site, creating more spam content than real content by about tenfold. This couldn't work for me, but it could on sites with larger user bases.
  • Proof of investment. This was the most interesting idea I heard and the one I'm most willing to try out. Basically, create a method to force users to prove their investment in the community itself by answering common questions that prove they have certain knowledge already. In my case, this is a geographic community and the questions might be something like "name a university in this city" or "how large is this city". Picking the right questions might help cut back on human spammers because they have little to no interest in the geographic community. Further, it may be too time-consuming for a spammer to look up the correct answers to get access. Other user bases would have different questions to ask to make users prove their investment in the specific community housed at that site.
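The proof-of-investment idea boils down to a challenge bank with forgiving answer matching. A minimal sketch (the questions and accepted answers are made-up placeholders; a real site would keep them in configuration, not code):

```python
import unicodedata

# Hypothetical challenge bank for a geographic community site.
CHALLENGES = {
    "Name a university in this city.": {"state university", "city college"},
    "What river runs through downtown?": {"green river"},
}

def _normalize(text):
    """Casefold, trim, and collapse whitespace so minor typing
    differences don't reject a legitimate local's answer."""
    text = unicodedata.normalize("NFKC", text).casefold().strip()
    return " ".join(text.split())

def check_answer(question, answer):
    """True if the answer matches any accepted answer for the question."""
    accepted = CHALLENGES.get(question, set())
    return _normalize(answer) in {_normalize(a) for a in accepted}
```

Rotating questions per registration attempt would also keep a spam operation from simply memorizing one answer.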

Dries Buytaert of Mollom was also at that local meetup and suggested there were updated Mollom APIs to noodle with, which I am willing to try out to see if they capture spam activity any better. Mollom has been blocking nearly all of the comment spam we used to have, but I realize that spam prevention is practically an arms race so new APIs will only be useful for a while until the spammers develop a better nuke.

Very true, seaneffel. Spam of any form is a never-ending battle. Hopefully Mollom will find a way to deal with it.

Have you tried Antispam? I must admit that I have not and was curious if it addressed the issue any better.

Subscribing.

I am seeing exactly the same behavior on my site as the original poster. Mollom is doing a great job catching the comment spam (huzzah!), but now (apparently human) users are creating accounts and filling in their profiles with spam. I have had to turn my site to "administrator approves new members" which is a drag and a time suck.

I did already have a community-related question in the profile (aka proof of investment), which is helping me filter out spammers, but every once in a while they still get through. Though I've noticed that now that the site is admin-approval for new accounts, spammer traffic is way down. I'm sure the moment that I turned my settings back, they'll be back.

Also, the spammers are getting smarter and have gone from creating accounts with spam in the profile to creating accounts with blank profiles and coming back later to spam. They're also starting to pick community-based names.

This is the serious yuck, and I hope we can find a fix for this.

As another data point: one of the "spammy" services being advertised was a spam-links generator, http://www.submitqueen.com, a site in India that pays humans to break CAPTCHAs.

P.S. I would be happy to beta-test any new APIs for this on my site, if you need a test case with a site under attack.

And here's one last thought (because obviously this has been on my mind a lot.)

What about the idea of creating a robust user profile? One that had several fields the submitter had to fill out and that were all required. It would annoy legitimate users a titch (once) and you could mitigate that with funny text and making it a journey of "let's learn all about you" and essentially exhaust the spammers and make your site too much trouble to deal with. It might only take 3-4 fields to make your site too much trouble for a human and too much a special case for a bot to handle.

I may just give that a go and report back.

Title:CAPTCHA-porn attack, how can I stop it?» Fighting human-generated (not automated) spam attacks

Choosing different issue title to more accurately reflect the topic.

Also, it's not wholly a Mollom issue but I'll leave the issue in the queue until someone points out a better home for it.

The robust profiles are a decent idea, but this is already in place on our site and it's not slowing them down. We have 6-10 fields for city, state, organization, interests, bio, etc, and about four of them are required.

I might try making ALL those fields required and setting those profile fields to reject HTML whole hog. This will probably reduce spam account creation, but based on observations of the content being entered into the profile fields, it won't stop them.
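A hedged sketch of that "require everything and reject HTML" check (plain Python with hypothetical field names; a real Drupal site would hook this into form validation instead):

```python
import re

TAG_RE = re.compile(r"<[^>]+>")                    # any HTML-looking tag
URL_RE = re.compile(r"https?://|www\.", re.IGNORECASE)  # bare links

def profile_field_errors(fields):
    """Collect validation errors: every field must be filled, and none
    may contain markup or links (spam profiles usually have both)."""
    errors = []
    for name, value in fields.items():
        if not value or not value.strip():
            errors.append("%s: required" % name)
        elif TAG_RE.search(value) or URL_RE.search(value):
            errors.append("%s: markup/links not allowed" % name)
    return errors
```

This obviously won't stop a human typing plain-text spam, but it strips the link payload that makes the spam worth posting.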

hey, sean,

What if there was a time limit on filling in the registration with captcha? Say they have 30 seconds or something, otherwise the page keeps refreshing and they have to keep filling it in. After three tries, they get knocked out for 24 hours before they can try again.

That would eliminate the ability to do a print screen, get the answer, AND fill it all in... perhaps?
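That timed-form idea could look something like this: a standalone sketch with made-up thresholds and an in-memory strike counter (a real site would persist the counter and key it on more than the IP):

```python
import time

WINDOW = 30          # seconds allowed to complete the form (sketch value)
MAX_TRIES = 3        # slow attempts before a knockout
LOCKOUT = 24 * 3600  # 24-hour lockout after repeated failures

# Hypothetical store: ip -> [fail_count, locked_until]
_failures = {}

def submit_registration(ip, form_issued_at, now=None):
    """Accept only forms completed within WINDOW seconds; after
    MAX_TRIES slow attempts, lock the IP out for LOCKOUT seconds."""
    now = now if now is not None else time.time()
    count, locked_until = _failures.get(ip, [0, 0.0])
    if now < locked_until:
        return "locked_out"
    if now - form_issued_at > WINDOW:
        count += 1
        if count >= MAX_TRIES:
            _failures[ip] = [0, now + LOCKOUT]
            return "locked_out"
        _failures[ip] = [count, 0.0]
        return "expired_retry"
    _failures.pop(ip, None)  # a successful submission clears the strikes
    return "accepted"
```

The window has to be generous enough for slow typists and assistive technology, which is the usual tension with timing-based defenses.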

I have the same issue; even with CAPTCHA on registration, there are still like 20 spammers able to register every day, which is kinda annoying.

A CAPTCHA will not prevent real people from registering and spamming the site, which is the essence of this problem. I still haven't found a solution that works, but I'll report back if I do.

Just got blitzed starting a few days ago. Set user accounts to be approved by admin for now. Subscribing to see if I can learn a way out of this. Thanks in advance.

Version:6.x-1.9» 6.x-1.x-dev
Category:support» feature
Priority:Critical» Normal

A good IP block would be http:BL from Project Honeypot, so you can look up specific IPs that have been used for abuse, block them, and save the results.

Version:6.x-1.x-dev» 7.x-2.x-dev
Status:Active» Fixed

Please note that Mollom additionally leverages various external data sources, including Project Honeypot, in its spam classifications already. Also, Mollom contributes the IP address related information it gathers back to the Project Honeypot community.

However, IP-address-based reputation information always has to be taken with a grain of salt, and used and maintained properly. With most ISPs and in most countries around the world, IP addresses change every 24 hours, so IP-based blocking usually works for a very short time frame only. An IP-based blocking system has to be capable of storing and maintaining millions of IP address records and adjusting their reputations over short time frames. This functionality is part of Mollom's features.
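The "reputations must decay" point is the key difference from a naive blocklist. A minimal sketch of an expiring blocklist (illustrative only; this is not how Mollom is implemented):

```python
import time

class ExpiringBlocklist:
    """IP reputations decay: with most ISPs an address is reassigned
    within roughly 24 hours, so entries expire instead of living forever."""

    def __init__(self, ttl=24 * 3600):
        self.ttl = ttl
        self._entries = {}  # ip -> time the block was recorded

    def block(self, ip, now=None):
        self._entries[ip] = now if now is not None else time.time()

    def is_blocked(self, ip, now=None):
        now = now if now is not None else time.time()
        blocked_at = self._entries.get(ip)
        if blocked_at is None:
            return False
        if now - blocked_at > self.ttl:
            del self._entries[ip]  # stale reputation: let it expire
            return False
        return True
```

At the scale sun describes (millions of records), the same TTL idea would live in a proper datastore rather than a Python dict, but the expiry logic is the point.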

With regard to human-generated spam, manual adjustments like IP blocking do not really help. However, Mollom's (pretty advanced) content classifiers are able to detect patterns in posts of humans, and the machine learning experts at Mollom are constantly working on improving the content classifiers to detect and combat new patterns.

In short, human-generated spam is not something you want to combat on the client side via one-off adjustments (which are prone to errors). Instead, such posts require advanced math, science, and swarm knowledge to tell the difference between a "human hammer" and a "human spammer". That's what Mollom delivers and aims to deliver.

If you experience too many spam posts on your sites that originate from humans, please get in touch with Mollom Support. Make sure to provide the session/content IDs of posts that slipped through.

Thanks for your feedback!
sun

Status:Fixed» Closed (fixed)
Issue tags:-captcha

Automatically closed -- issue fixed for 2 weeks with no activity.