#2384 - Anti-spam heuristics
| Identifier | #2384 |
|---|---|
| Issue type | Feature request or suggestion |
| Title | Anti-spam heuristics |
| Status | Completed |
| Tags | Type: Spam (custom) |
| Handling member | Chris Graham |
| Addon | core |
| Description | There are a number of factors we can use to detect an increased likelihood of spam:
1) Posting speed (comparing when the CSRF token was generated to when the form was posted)
2) Closeness to having joined (people may join to bypass the CAPTCHA or to gain extra features)
3) Posting links
4) Posting frequency
5) Posting repeat content
6) Using particular keywords ("cialis", ...)
7) Using particular coding ("Times New Roman" implies a paste; "<font face=" implies a paste)
8) Use of invalid coding from other software ("[link", ...)
9) Use of paste as opposed to typing
10) Presence of JavaScript (particular calculations could be done and submitted with the form, to prove a real working JavaScript engine was present; perhaps something computationally costly like factorisation; also detection of mouse and/or keyboard use, as a human would produce)
11) Triggering of the spam blackhole in a form
12) Particular user-agent substrings ("bot", "perl", ...)
13) Missing HTTP headers a real browser will always send: Accept, User-Agent, Cookie, Accept-Language, Accept-Encoding
14) Hits from particular countries (fully configurable)
We can detect these factors and make them configurable to bump up the spam certainty rating for a request. It would be cumulative: each factor would add to an overall spam rating, and that overall rating would be subject to the approve/block/ban thresholds that already exist (see the scoring sketch after this table). Our LAME_SPAM_HACK hack-attack signal can be removed and its code integrated into this new system. It would all be configurable: all the time factors and all the different spam-certainty increments (including configuration per detected spammy keyword). |
| Steps to reproduce | |
| Additional information | Here's some simple temporary code in use on our own sites in an unofficial capacity, a small subset of what this final system would do:
require_code('antispam');
$hours_like_guest = 2;
$post = post_param('post', '');
// Treat guests, and members who joined within the last $hours_like_guest hours, with suspicion
$is_new_or_guest = is_guest() || ($GLOBALS['FORUM_DRIVER']->get_member_join_timestamp(get_member()) > time() - 60 * 60 * $hours_like_guest);
// Flag any post from them that contains links (HTML or Comcode)
$has_links = (strpos($post, '<a ') !== false) || (strpos($post, '[url') !== false);
if ($is_new_or_guest && $has_links) {
    handle_perceived_spammer_by_confidence(get_ip_address(), floatval(get_option('spam_approval_threshold')) / 100.0, 'internal checks', false);
} |
| Related to | |
| Funded? | No |
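To make the cumulative model concrete, here is a minimal sketch of how a few of the factors could combine into one overall rating. The `calculate_spam_certainty()` helper, the increment values, and `$form_generated_at` are hypothetical illustrations; only `post_param()`, `get_option()`, `get_ip_address()` and `handle_perceived_spammer_by_confidence()` come from the snippet in this issue.

```php
<?php
// Hypothetical sketch of cumulative spam-certainty scoring. Each detected
// factor adds a configurable increment; the total is then compared against
// the existing approve/block/ban thresholds.

function calculate_spam_certainty(string $post, array $headers, int $form_generated_at): float
{
    $certainty = 0.0;

    // Factor 1: posting speed (form posted implausibly soon after the
    // CSRF token was generated).
    if (time() - $form_generated_at < 5) {
        $certainty += 0.3;
    }

    // Factor 6: particular keywords (increment configurable per keyword).
    foreach (['cialis' => 0.4] as $keyword => $increment) {
        if (stripos($post, $keyword) !== false) {
            $certainty += $increment;
        }
    }

    // Factor 7: coding that implies a paste from other software.
    if (stripos($post, '<font face=') !== false) {
        $certainty += 0.2;
    }

    // Factor 13: HTTP headers a real browser will always send.
    // (Simplified: real code would normalise header-name case.)
    foreach (['Accept', 'User-Agent', 'Accept-Language', 'Accept-Encoding'] as $header) {
        if (!isset($headers[$header])) {
            $certainty += 0.25;
        }
    }

    return min(1.0, $certainty); // Cap at full certainty
}

$post = post_param('post', '');
$form_generated_at = 0; // Hypothetical: would be carried alongside the CSRF token
$certainty = calculate_spam_certainty($post, getallheaders(), $form_generated_at);
if ($certainty >= floatval(get_option('spam_approval_threshold')) / 100.0) {
    handle_perceived_spammer_by_confidence(get_ip_address(), $certainty, 'heuristics', false);
}
```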


Comments
But agreed: technically, virtually any form of content can be submitted by guests, if permissions allow for it. Therefore, there needs to be a single pipeline that all content passes through.
I don't really agree with much of the discussion; it's tangential to this issue and more related to #2057, #2374, and #375, which will be considered separately.
The main issue discussed seems to be how we can do posting-frequency detection for guests, as all guest postings are combined under a single ID. However, I think there's no real issue, because guests get the CAPTCHA, or we'd generally limit guest posting access (who'd want guests submitting news, for example?). So we can implement posting-frequency detection for non-guests only (a rough sketch follows) and still have a whole diverse set of other techniques that do work on guests (CAPTCHA, but also all the other heuristics). We couldn't really track guests anyway; people could use Tor, and so have rotating IPs and session IDs.
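A rough sketch of that members-only frequency check, assuming a generic PDO connection and a hypothetical `posts` table; the real implementation would draw on the CMA meta-data discussed later in this thread:

```php
<?php
// Sketch: posting-frequency check for logged-in members only (guests are
// covered by CAPTCHA instead). Table and column names are hypothetical.

function posting_frequency_increment(PDO $db, int $member_id): float
{
    // Count posts by this member in the last 10 minutes
    $stmt = $db->prepare(
        'SELECT COUNT(*) FROM posts WHERE member_id = :member AND created_at > :since'
    );
    $stmt->execute([':member' => $member_id, ':since' => time() - 600]);
    $recent = (int) $stmt->fetchColumn();

    // Each post beyond a configurable allowance bumps the certainty rating
    $allowance = 3;
    return ($recent > $allowance) ? 0.1 * ($recent - $allowance) : 0.0;
}
```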
Duplicate-content detection can work on the guest ID with no issue, because different guests are not legitimately going to post the same content (see the sketch below).
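For illustration, a minimal sketch of how duplicate detection could work even against the shared guest ID, assuming a hypothetical store of recent content hashes:

```php
<?php
// Sketch: duplicate-content detection. Works even on the shared guest ID,
// since distinct guests have no legitimate reason to post identical content.
// The seen-hashes store is hypothetical (shown here as a flat array).

function is_repeat_content(string $post, array &$recent_hashes): bool
{
    // Normalise before hashing so trivial whitespace changes don't evade the check
    $hash = sha1(strtolower(preg_replace('/\s+/', ' ', trim($post))));

    if (in_array($hash, $recent_hashes, true)) {
        return true; // Same content was posted recently
    }
    $recent_hashes[] = $hash;
    return false;
}
```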
We do need to make sure the heuristics work effectively for contact forms, though.
That isn't really so necessary. I've implemented a system that can query the meta-data provided in the CMA hooks, over a time range, for a particular submitter ID (rough sketch below). That's simpler and better than trying to do it through reporting, because it works without any reporting needing to happen.
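A rough sketch of that idea, with a hypothetical hook interface standing in for the real CMA hooks:

```php
<?php
// Sketch: counting recent submissions by one submitter across all content
// types via content-meta-aware (CMA) hooks. The interface shown here is
// hypothetical; the real hooks expose similar submitter/date meta-data.

interface ContentMetaHook
{
    /** Return submission timestamps for a submitter within [$from, $to]. */
    public function get_submission_times(int $submitter_id, int $from, int $to): array;
}

function count_recent_submissions(array $hooks, int $submitter_id, int $window): int
{
    $total = 0;
    $now = time();
    foreach ($hooks as $hook) {
        $total += count($hook->get_submission_times($submitter_id, $now - $window, $now));
    }
    return $total;
}
```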
https://www.w3.org/TR/turingtest/
The TL;DR is that we now do everything we can that isn't awful in some way, but it's still a good reference.