ChatGPT spam
Posted
#7589
(In Topic #1907)
We have been low-key plagued by spam for a long time. I want to thank {{Adam}} for doing a great job dealing with it, especially while I have been gone. However, in the past, spam has at least been easier to detect manually: obvious spammy links, and text that has little bearing on anything or is directly copied and pasted.
ChatGPT-like spam changes that.
I wonder what the solution is. Here's what we probably cannot do:
- Invest a lot of thought into each post to work out who the poster is and why they posted.
- Just reflexively delete or ban things if we're not reasonably sure.
- Put everything behind an algorithm, so you have to establish a presence before people see your topics and posts.
- Auto-detect what is and isn't the output of an LLM (large language model).
Here's what we probably can do, at least at some point:
- Identify patterns of behavior over time: people who seem to be repeatedly posting content that probably isn't what a real human would post.
- Clamp down heavily on link spam in people's profiles.
- Auto-moderation with mass-flagging.
Would be interested in people's thoughts. Bear in mind that any non-trivial software solutions would require people to step up as contributors.
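To make the mass-flagging idea concrete, here's a minimal sketch of how auto-moderation could work: hide a post automatically once enough distinct members flag it, pending moderator review. The `AutoModerator` class, the threshold value, and all names here are hypothetical illustrations, not an existing forum API.

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # assumed tunable value, not a real setting

class AutoModerator:
    """Hypothetical sketch: auto-hide posts that cross a flag threshold."""

    def __init__(self, threshold=FLAG_THRESHOLD):
        self.threshold = threshold
        self.flags = defaultdict(set)  # post_id -> set of flagging user ids
        self.hidden = set()            # post ids hidden pending moderator review

    def flag(self, post_id, user_id):
        """Record a flag; hide the post once enough distinct users flag it.

        Counting distinct users (a set, not a counter) means one person
        can't mass-flag a post into hiding by themselves.
        """
        self.flags[post_id].add(user_id)
        if len(self.flags[post_id]) >= self.threshold:
            self.hidden.add(post_id)
        return post_id in self.hidden

mod = AutoModerator()
mod.flag("post-42", "alice")
mod.flag("post-42", "bob")
print(mod.flag("post-42", "carol"))  # third distinct flag crosses the threshold
```

A real implementation would also need rate limits and trust weighting on flaggers, otherwise the flagging mechanism itself becomes an abuse vector.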
Posted
Way back in the ocP days, I spent a large amount of time analyzing spammers through every step from the time they created their account to when they made their first forum post. The one thing that stood out to me the most was the names they used. It was very easy to spot a spammer, even up until today, before they even made their first post (maybe a 4/4.5 out of 5). The name/handle of the spammer that I believe triggered this thread, without me writing its name (they haven't been fully banned yet), was a little less noticeable (maybe a 2.5/3 out of 5).
I think as time goes on, spammers using AI will become more difficult to spot before the spam starts. But at the same time, I'm also wondering if posts (like this spammer's) become easier to spot because they appear "too" perfect.
I think human intervention (eyes and hands) will always have to be a part of spam detection. How big a part?
