435

TL;DR: We did it, so... yes.


What is this?

Charcoal is the organization behind the SmokeDetector bot and other nice things. The bot scans new and edited posts across the entire network for spam and reports suspect posts to various chatrooms where people can act on them. If a post has been created or edited anywhere on the network, we've probably seen it. The bot draws on our knowledge of how spammers work and what they have previously posted to build common patterns and rules for detecting spam in new and updated posts. You've likely seen SmokeDetector if you visit chatrooms such as Tavern on the Meta, Charcoal HQ, SO Close Vote Reviewers, and others across the network. Over time, the bot has become very accurate.

Now we are leveraging the years of data and accuracy to automatically cast spam flags. With approximately 58,000 posts to draw from and over 46,000 true positives, we have a vast trove of data to utilize.

What problem does this address?

To put it simply, spam. Stack Exchange is one of the most popular networks of websites on the Internet, and all of it gets spammed at some point. Our statistics show that we see about 100 spam posts per day that get past the system filters.

A decent chunk of this isn't the type you'd want to see at work (or at all). The faster we can get this off the home page, the better for all involved. Unfortunately, it's not unheard of for spam to last several hours, even on the larger sites such as Graphic Design.

Over the past three years, efforts with Smokey have significantly cut the time it takes for spam to be deleted. This project is an extension of that, and it's now well within reach to delete spam within seconds of it being posted.

What are we doing?

For over three years, SmokeDetector has reported potential spam across the Stack Exchange network so that users can flag the posts as appropriate. Users tell the bot whether each detection was correct (we call this "feedback"). This feedback is stored in our web dashboard, metasmoke (code). Over time, we've used it to evaluate our detection patterns ("reasons") and improve our accuracy. Several of our reasons are over 99.9% accurate.
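
To illustrate how that evaluation works, per-reason accuracy falls straight out of the stored feedback. This is a minimal sketch with our own invented names, not metasmoke's actual code:

    from collections import defaultdict

    def reason_accuracies(feedback):
        """feedback: (reason_name, is_true_positive) pairs built from user responses."""
        hits, totals = defaultdict(int), defaultdict(int)
        for reason, is_tp in feedback:
            totals[reason] += 1
            hits[reason] += is_tp
        # Fraction of each reason's reports that users confirmed as spam.
        return {reason: hits[reason] / totals[reason] for reason in totals}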

Early last year, after getting a baseline accuracy figure from jmac (thank you!), we realized we could use the system to automatically cast spam flags. On Stack Overflow, users flagging spam posts are currently 85.7% accurate; across the rest of the network, they are 95.4% accurate. We determined we could beat those numbers and eliminate spam from Stack Overflow and the rest of the network even faster.

Without going into too much detail (if you really want it, it's available on our website), we leverage the accuracy of each existing reason to compute a weight indicating how certain the system is that a post is spam. If this value exceeds a specific threshold, the system casts up to three spam flags on the post, using a number of different users' accounts and the Stack Exchange API. Via metasmoke, users can enable their accounts to be used for flagging spam (you can too, if you've made it this far). When a post exceeds the threshold set by an individual user, accounts are randomly selected from the pool of enabled users to cast a single flag each, up to a maximum of three per post, so that we never unilaterally nuke anything. (For the same reason, accounts with moderator privileges on a site aren't selected to cast automatic spam flags there, and only one flag is cast on sites with a deletion threshold of three flags.)
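
Roughly, and with invented names throughout (the real logic lives in metasmoke, linked above), the flow looks like this:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        spam_flag_threshold: int = 6   # flags needed to spam-nuke a post

    @dataclass
    class User:
        name: str
        min_weight: int                # personal autoflagging threshold
        moderated_sites: set = field(default_factory=set)

    @dataclass
    class Post:
        url: str
        site: Site
        reason_weights: list           # weights of the reasons that matched

    def cast_autoflags(post, users, flag=print):
        weight = sum(post.reason_weights)   # how certain we are this is spam
        # Skip moderators on this site, and users whose threshold isn't met.
        pool = [u for u in users
                if weight >= u.min_weight
                and post.site.name not in u.moderated_sites]
        # Sites that delete at 3 spam flags get only 1 autoflag, so a human
        # is always involved in finishing off a post.
        max_flags = 1 if post.site.spam_flag_threshold <= 3 else 3
        for user in random.sample(pool, min(max_flags, len(pool))):
            flag(f"{user.name} flags {post.url}")   # real system: SE API call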

What are our safety checks?

We designed the entire system with accuracy and sanity checks in mind. Our design collaborations are available for your browsing pleasure (RFC 1, RFC 2, and RFC 3). The major things that make this system safe and sane are:

  • We give users a choice as to how accurate they want their automatic flags to be. Before casting any flags, we check that the preferences the user has set result in a spam detection accuracy of over 99.5%¹ across a sample of at least 1,000 posts (a sketch of this check follows the list). Remember, the current accuracy of humans is 85.7% on SO and 95.4% network-wide.
  • We do not unilaterally spam nuke a post, regardless of how sure we are it is spam. This means that a human must be involved to finish off a post, even on the few sites with lower spam thresholds.
  • We've designed the system to be tolerant of faults - if there's a malfunction anywhere in the system, any user with access to SmokeDetector, including all network moderators, can immediately halt all automatic flagging. If that happens, a system administrator must step in to re-enable flags.
  • We've discussed this with a community manager and have their blessing on the project.
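
Here is the accuracy check from the first bullet, sketched under our own assumptions (the real check runs inside metasmoke against its feedback database; all names here are ours):

    ACCURACY_FLOOR = 0.995   # raised to 0.9975 on 2018-03-05; see footnote 1
    MIN_SAMPLE = 1000

    def preferences_are_safe(history, min_weight):
        """history: (weight, was_spam) pairs for past reports. A user's
        threshold is usable only if at least MIN_SAMPLE historical posts
        meet it, and more than 99.5% of those were true positives."""
        sample = [was_spam for weight, was_spam in history
                  if weight >= min_weight]
        if len(sample) < MIN_SAMPLE:
            return False
        return sum(sample) / len(sample) > ACCURACY_FLOOR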

Results

We have been casting an average of 60-70 automatic flags per day for over two months, for a total of just over 6,000 flags network-wide, cast by 22 different users. In that time, we've had four false positives. We would like to retract such flags automatically, but the API doesn't currently allow it, so we've created a feature request to retract flags via the API. In the meantime, the flags are either manually retracted by the user or declined by a moderator.

Weights and Accuracy

The graph above plots reason weight against overall report volume and accuracy. As minimum weight increases, accuracy (yellow line, right-hand Y-axis) increases. On the left-hand scale, the green line shows the total number of reports (possible spam posts), and the blue line the number of true positives, as verified by user feedback.

Automatic Flags per Day

This graph shows the number of posts we've automatically flagged per day over the last month. The jump on February 15th is due to increasing the number of automatic flags from one per post to three per post. You can see a live version of this graph on metasmoke's autoflagging page.

Spam Hours

Spam arrives on Stack Exchange in waves, and it is easy to see the times of day when most spam reports come in. The hours above are in UTC. The busiest period is the eight-hour block between 4 am and noon, which we have affectionately named "spam hour" in the chat room.
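
The binning behind this graph is simple enough to sketch (placeholder names; metasmoke stores the real report timestamps):

    from collections import Counter
    from datetime import timezone

    def reports_per_utc_hour(report_times):
        """Count reports falling in each UTC hour of the day."""
        return Counter(t.astimezone(timezone.utc).hour for t in report_times)

    def spam_hour_share(report_times):
        """Fraction of all reports arriving in the 04:00-11:59 UTC block."""
        by_hour = reports_per_utc_hour(report_times)
        return sum(by_hour[h] for h in range(4, 12)) / sum(by_hour.values())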

Average Time to Deletion

Our goal is to delete spam quickly and accurately. The graph shows the time it takes for a reported spam post to be removed from the network, with three trend lines showing the averages. The first, red, section covers the period when we were simply reporting posts to chatrooms and all flags had to come from users. The time it took to remove spam was fairly constant during this period, averaging just over five minutes per post.

The green trend line covers the period when we were issuing a single automatic flag. At implementation, this cut a full minute from the time to deletion; after a month, we'd eliminated two full minutes compared to no automatic flags.

The last section, in orange, began when we rolled out three automatic flags to most sites. That happened only last week, but it has already dramatically improved the time to deletion: we are now seeing between one and two minutes.

As mentioned above, spam arrives in waves. The dashed and dotted lines on the graph show the average deletion time during two different parts of the day: the dashed lines cover the period between 4 am and noon UTC, and the dotted lines the rest of the 24-hour period. Interestingly, before we cast any automatic flags, time to deletion was higher during spam hour than outside it; spam was removed faster outside spam hour. That reversed when we started issuing a single auto-flag: spam-hour time to deletion dropped slightly below the average. Comparing the two periods, though, non-spam-hour deletion times at the end of the no-flagging period and at the end of the single-flag period are roughly the same.
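
For the curious, the dashed/dotted averages can be reproduced along these lines (a sketch; the names and data shapes are our assumptions, not metasmoke's schema):

    from datetime import timezone

    def mean_deletion_minutes(reports, spam_hours=range(4, 12)):
        """reports: (posted_at, deleted_at) UTC datetime pairs. Returns the
        average minutes to deletion inside and outside 'spam hour'."""
        inside, outside = [], []
        for posted_at, deleted_at in reports:
            minutes = (deleted_at - posted_at).total_seconds() / 60
            hour = posted_at.astimezone(timezone.utc).hour
            (inside if hour in spam_hours else outside).append(minutes)
        average = lambda xs: sum(xs) / len(xs) if xs else None
        return average(inside), average(outside)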

We'll update these graphs in a few weeks to better show the trend we are seeing with three automatic flags.

Discussion

We are confident in SmokeDetector and the three years of history it has. We've had many talented developers assist us over the years and many more users have provided feedback to improve our detection rules. Let us know what you want us to elaborate on, features you're wondering about or would like to see added, or things we might have missed in the process or the tooling. Take a look at the feature we'd really like Stack Exchange to consider so that we can further improve this system (and some of the other community built systems). We'll have Charcoal members hanging around and answering your questions. Alternatively, feel free to drop into Charcoal HQ and have a chat.


¹ As of 2018-03-05, the accuracy threshold is 99.75%, instead of 99.5%.

  • 47
    /me is leaving a comment here so I'm pingable. I'm one of those elusive "system administrators" this talks about.
    – ArtOfCode
    Commented Feb 20, 2017 at 15:24
  • 39
    Good job everyone! Looks amazing. Smokey itself is already fantastic, and the automated flagging looks neat! I hope that the proposed change to the API makes it sooner than later.
    – Seth
    Commented Feb 20, 2017 at 15:33
  • 5
    "An interesting thing this graph shows is that time to deletion during spam hour was higher when we didn't cast any automatic flags. It was removed faster outside of spam hour." - Guessing this correlates with the time zones that moderators tend to be active, which is something we've seen before when it comes to how long spam lives when flagged. Commented Feb 20, 2017 at 15:56
  • 66
    Charcoal team: excellent work! Thank you for all the effort you've put into this (and will continue to put in). This is freaking awesome. Commented Feb 20, 2017 at 20:35
  • 7
    And I'm very happy to see a public post about it here!
    – Jason C
    Commented Feb 21, 2017 at 2:48
  • 14
    @user3791372, As a general rule, spammers are lazy. Very few spammers will read this or dig into it more. The few that do are the ones that were already actively working to avoid detection anyway.
    – Andy
    Commented Feb 22, 2017 at 17:58
  • 54
    The 3rd graph is not a hat. That's a boa constrictor digesting an elephant.
    – Largato
    Commented Feb 22, 2017 at 23:30
  • 67
    This would be cool except for the fact that I JUST LOST 20 LBS IN LESS THAN 2 WEEKS WITH THIS NEW DIET! Click HERE to learn more! Commented Feb 23, 2017 at 6:14
  • 20
    @billynoah -1, your spam is too grammatically correct Commented Feb 23, 2017 at 7:04
  • 6
    @fedorqui actually, we didn't think it through that much; we just wanted to halve the number of spam flags required :) However, what you've linked there does reinforce our decision to go with 3 flags Commented Feb 23, 2017 at 9:48
  • 17
    Great system & write-up. One question: if SE are on board with this, why do you need real users' accounts to flag things? Couldn't they give Smokey an unrestricted account, or, if that's problematic, a few hundred designated accounts? That seems safer, as it's then clear what's done by the bot vs. a human, and it avoids any risk of future misuse of this privilege (not that you would, but when talking of spam and security that option should be taken into account).
    – JohnLBevan
    Commented Feb 23, 2017 at 11:11
  • 6
    @JohnLBevan, Everything we do is done via the API. If there is a major problem, SE has the ability to see what is done using our application key. As for the unrestricted access, that seems more dangerous, because someone needs to be able to use those credentials. Since Smokey is run by community members and not SE itself (unlike the Community user), that would mean a user having moderator (or higher) level access. We've been careful to build the system so that users with diamonds don't have the ability to autoflag spam on their own sites. We want to keep a human in the loop.
    – Andy
    Commented Feb 23, 2017 at 13:45
  • 6
    @JohnLBevan The maintenance of such a network would be challenging. We'd need multiple accounts for every site on the network (plus managing that every time a public beta launched) plus the required reputation needed to flag on each site. We'd spend more time managing the accounts than we would fighting spam.
    – Andy
    Commented Feb 23, 2017 at 14:12
  • 5
    @EJP Because the whole purpose of review audits is to test whether you are paying attention ... and in any case there is no link between review audits and Charcoal ... Commented Feb 24, 2017 at 15:40
  • 5
    @SteveBennett Failing SE giving us special treatment in this regard, that's all we can do. Charcoal is a community effort, not affiliated with SE, so we largely have to operate within the boundaries of normal users. We're also not "appropriating" human accounts in order to act; the users explicitly give us their permission to do so.
    – Magisch
    Commented Feb 27, 2017 at 8:56

9 Answers

114

Stack Exchange has its own spam detection and prevention system. If I understand its design goal correctly, it prevents spam from even being posted. What SmokeDetector finds are basically the posts that got past those tests.

Two questions:

  • Is there any other feedback loop from SmokeDetector to that system, except posts being flagged as Spam? If not, any plans?
  • Are there statistics available that show that SpamRam got better at keeping spam out thanks to the successful efforts of SmokeDetector and its human slaves?
  • 4
    SpamRam here is... SE's own spam detection/blocker?
    – TylerH
    Commented Feb 20, 2017 at 16:00
  • 6
    @TylerH that is what I picked up on how it is called, yes
    – rene Mod
    Commented Feb 20, 2017 at 16:01
  • 18
    (1) Yes. Possibly. We've had discussions with Stack Exchange staff about directly integrating the Smokey system with SE, and we intend to have some more.
    – ArtOfCode
    Commented Feb 20, 2017 at 16:04
  • 15
    (2) Currently, no - but SpamRam only works with IPs, not with post text like Smokey does.
    – ArtOfCode
    Commented Feb 20, 2017 at 16:05
  • 8
    There isn't any other feedback from SmokeDetector to SpamRam specifically. Removing a post via spam flags does feed it though, so indirectly, this is helping. There have been tentative discussions on if it'd be possible to integrate all/part of smoke detector.
    – Andy
    Commented Feb 20, 2017 at 16:05
  • To expand on AOC's last comment: "SpamRam works with IP reputation and feedback"
    – Braiam
    Commented Feb 20, 2017 at 16:24
  • 89
    SmokeDetector is awesome! A few incredible community members got together and created it independently of Stack Overflow's internal spam-fighting efforts. It so happens that building some connections between our systems and Smokey is something I've been (very slowly) thinking about exploring, as ArtOfCode mentioned. No solid plans yet though. As for SpamRam, we don't really talk about it much publicly. True, the odds of spammers coming here to look up info on us are low, but if one does, that's maybe the one we'd actually need to be worried about.
    – Pops Staff
    Commented Feb 20, 2017 at 16:27
  • 2
    A spam detection system that can be bypassed by knowing how it works is fundamentally broken. It's like the school tests that use a corpus of questions that students have to answer, and can be bypassed by mere memorizing of a sufficient number of questions. We have known forever that this doesn't work without placing silly restrictions like people being unable to get their tests back (argh!) or the inability to discuss how a spam filter is designed. The design of a test, or a spam filter, must start with the assumption that it's all public and available for anyone. Then it'll work by design. Commented Feb 28, 2017 at 13:27
  • 1
    @KubaOber In theory, knowing our filters could let people bypass them. In practice, spammers aren't that smart and we can react and add new filters as necessary.
    – Magisch
    Commented Mar 1, 2017 at 6:31
  • @Magisch At the moment, the filters are nowhere near state of the art. It's mostly a bunch of regexes from what I see. This is not scalable and can't be but a temporary solution. Commented Mar 1, 2017 at 13:45
  • 1
    @KubaOber Our entire project is based on pattern detection. Other than adding more patterns and more clever patterns, there isn't really much else in scope here. We're not machine learning experts, unfortunately. If you have some suggestions or want to discuss this further, drop into Charcoal HQ in chat
    – Magisch
    Commented Mar 1, 2017 at 13:47
  • It's been scaling for three years now just fine - and the main bottleneck is human fatigue doing the actual flagging anyways. Commented Mar 1, 2017 at 13:49
  • @KubaOber Nope, no state of the art stuff here. But that doesn't matter - it works, and has worked for the past three years, and I don't see any reason why it won't continue to work.
    – ArtOfCode
    Commented Mar 1, 2017 at 13:55
  • Of course, it is a nice pragmatic approach. Right now the humans do the explicit work of pattern updates. To reduce the tedium, humans could do the work of flagging spam, and the machine can figure the patterns much better than us. I'm no machine learning expert either, though, so I'm not much help - I mostly consume libraries other people write :) Commented Mar 1, 2017 at 14:04
  • 2
    In Meta, questions are answers and answers are questions!
    – David
    Commented Sep 4, 2019 at 9:41
72

We determined we can beat those numbers and eliminate spam from Stack Overflow and the rest of the network even faster.

(Emphasis mine)

What, if any, work have you done to ensure the robustness of SmokeDetector (SD) across different sites in the network, given that they have broadly different scopes and topics? For example, you've finely tuned SD to detect when something is spammy on Stack Overflow, but how dependent on "sharing links that have nothing to do with programming" is SD's codebase?

Is it a matter of flipping a few switches and adding half a dozen phrases to an array in order for it to work on Biology.SE, where things like medicine names might be mentioned regularly, or Aviation.SE, where airlines might get mentioned frequently? (I picked those two because airline tickets and pills are two common spam topics) Or will it require a non-trivial customization per site?
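
To make this concrete, here is the kind of site-gated rule I imagine (a hypothetical sketch in SmokeDetector's language; the pattern, site list, and function name are all invented, not actual Charcoal code):

    import re

    # Invented example: pharma keywords are spam signals on most sites...
    PHARMA_PATTERN = re.compile(r"\b(viagra|cialis|weight.?loss pills?)\b", re.I)
    # ...but legitimate topics on sites like these, so the rule skips them.
    EXEMPT_SITES = {"biology.stackexchange.com", "aviation.stackexchange.com"}

    def pharma_spam(post_body, site):
        """True if the post matches the pattern and the site isn't exempt."""
        return site not in EXEMPT_SITES and bool(PHARMA_PATTERN.search(post_body))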

  • 69
    We've run across the entire network since its inception - all of the accuracy numbers you see in the above post are network wide. Some reasons are tuned for specific sites, some are disabled on some sites. It's a fun balancing game, but we've gotten pretty good at it.
    – Undo
    Commented Feb 20, 2017 at 15:41
  • 14
    For example, here is some code which checks for health-related spam, but it works only on some sites of the network which are often targeted. And here another 'filter' which is active on all but a few sites which are likely to yield many false positives.
    – Glorfindel Mod
    Commented Feb 20, 2017 at 15:44
  • @Undo Thanks, that wasn't clear after reading; the post only mentions Stack Overflow specifically when talking about SD's flagging behavior.
    – TylerH
    Commented Feb 20, 2017 at 15:44
  • @Undo And to focus on that topic a bit, do you have numbers per site? I'm curious if there are any sites with 100% accuracy, and also curious what the site w/ the lowest accuracy is.
    – TylerH
    Commented Feb 20, 2017 at 15:51
  • 10
    Ask Patents is probably the worst site, with currently only 64% accuracy. But remember that those posts generally won't be autoflagged, only when they reach a certain threshold.
    – Glorfindel Mod
    Commented Feb 20, 2017 at 15:53
  • 16
    But AP is just... weird, so that's not exactly surprising.
    – ArtOfCode
    Commented Feb 20, 2017 at 15:54
  • @Glorfindel "but remember" Where might I have seen the threshold before if I am to remember it? Are you talking about each user's individual threshold? If that's the case, does that mean users set their own threshold preference before the bot can flag as them? If so, what if there is a user who sets their threshold to, say, 60% while everyone else sets theirs higher? Are the settings published? It wouldn't be random in that case... SD would always use the 60% account and two others.
    – TylerH
    Commented Feb 20, 2017 at 15:59
  • 11
    @TylerH sorry, I should have elaborated. My link shows all posts reported by SmokeDetector, often detected for just a single reason. Autoflags will only be cast if a post is detected for multiple reasons, and they need to be 'effective' reasons, too. You can't set a threshold resulting in lower than 99.5% accuracy.
    – Glorfindel Mod
    Commented Feb 20, 2017 at 16:02
  • It looks a bit tricky to apply. As in, I'm assuming I'd have to install Linux first? And then run this in the background on my PC? Commented Feb 23, 2017 at 6:30
  • 6
    @SirAdelaide You don't need to do anything; we (Charcoal) host the bot (see here for its current location) and metasmoke (which does all the flagging). All you need to do is sign up and allow us to use your account for flagging; we then use the SE API to flag the posts. But yes, the bot does run on a form of Linux/Mac, due to compatibility issues with bash and git, which we use extensively. Feel free to drop into Charcoal HQ if you have any more questions Commented Feb 23, 2017 at 6:52
  • @Undo how does that balancing work with newly-created beta sites? Commented Feb 27, 2017 at 20:10
  • 3
    @NathanMerrill In practice, newly created beta sites have extremely low traffic anyway. Since our regexes are balanced for ~160 sites already, new ones usually don't fall much outside of what we've already seen. Usually, the only times we need to tune explicitly are for health-focused sites. We catch a lot of skin care spammers across the network, but the nature of those patterns see high false positive rates on health sites. It's always caught quickly and dealt with in a thirty-second-deploy cycle or two.
    – Undo
    Commented Feb 27, 2017 at 20:19
23

While Charcoal HQ and your GitHub and website have been publicly accessible in the past, posts like these will increase your visibility across the Stack Exchange network and maybe even reach the top search results in Google. While most of the spammers seem quite dumb (it seems they can't even write correct English sentences), aren't you afraid that this will lead to the more crafty spammers discovering ways to escape detection by SmokeDetector, for example by including their spam links in comments (to their own posts)?

  • 5
    Don't give them any ideas :P (jk). That's actually an interesting proposal.
    – ɥʇǝS
    Commented Feb 20, 2017 at 20:01
  • 7
    Spammers aren't usually that clever. We do see some people attempting to spam in comments, but not that many. We also see spammers occasionally posting apparently OK answers and then editing in spam later. Pretty much all of these attempts fail as people notice and flag them.
    – ChrisF Mod
    Commented Feb 20, 2017 at 20:02
  • 45
    No, I'm not concerned. Very few spammers will read this or look at the website or source code. The few that do are the ones that were already actively working to avoid detection anyway.
    – Andy
    Commented Feb 20, 2017 at 20:03
  • 3
    Also note that the Charcoal sites will not have their SEO pushed up from this - SE specifically makes that not happen.
    – Mithical
    Commented Feb 20, 2017 at 20:34
  • 79
    General rule: spammers are dumb. You can count on them to be dumb. Being intelligent takes time, which could be spent posting more spam.
    – ArtOfCode
    Commented Feb 20, 2017 at 20:35
  • 3
    Remember that comments are usually not indexed by Google, so spammers won't gain much from it
    – Ferrybig
    Commented Feb 20, 2017 at 21:28
  • 9
    We've seen cases where the spam links were put in comments instead of the question body. It wasn't very successful. If we can keep raising the bar for spammers, to the point where they have to make meaningful contributions... Mission accomplished.
    – user307833
    Commented Feb 22, 2017 at 14:39
  • 1
    @Mego, be careful what you wish for. Commented Feb 22, 2017 at 20:36
  • This seems like it might be an interesting challenge to implement. Commented Feb 23, 2017 at 2:16
  • 19
    Regarding 'correct English sentences' - there's a theory that spammers/scammers are using mistakes deliberately in order to turn away anybody who is unlikely to be gullible enough to follow through with it.
    – Robotnik
    Commented Feb 23, 2017 at 6:15
  • 5
    @ArtOfCode, there are exceptions, though. I've dealt with a spammer who obviously read the spamassassin-users mailing list. I posted rules for blocking his spam; he shut down for a day or so, and came back with modified spam that didn't hit those rules. Commented Feb 23, 2017 at 16:06
  • 2
    @andybalholm Sure, there are always exceptions. But the vast majority of spammers here are dumb - we seem to cultivate an especially dumb breed of spammer, actually.
    – ArtOfCode
    Commented Feb 23, 2017 at 16:49
  • en.wikipedia.org/wiki/Wikipedia:BEANS
    – Nemo
    Commented Feb 24, 2017 at 11:07
    I don't think most spammers will do this, due to the law of diminishing returns.
    – Klik
    Commented Feb 27, 2017 at 23:27
  • 1
    @FerryBig ..., => Oops...!!: google.com/…
    – chivracq
    Commented May 7, 2021 at 2:22
18

I understand the question was rhetorical, but let me answer anyway.

The English Wikipedia has had such a machine for a while, mostly ClueBotNG, which follows some rules and a bit of learning. Some summaries are available at

For more Wikimedia wikis, a similar but more general system has been active since 2015, focused on providing editors with the best guesses machine learning can make about the productivity of a contribution: Artificial intelligence service "ORES" gives Wikipedians X-ray specs to see through bad edits.

12

That's genuinely terrific; congrats to those involved.

Two quick questions (I hope this is directed at the right folks):

  1. How does it compare to Gmail, just very roughly, in filtering effectiveness?

  2. Can I now go back to PhysicsSE and say we have a possible way, sometime in the future, to filter homework questions (which are worse than spam, in some opinions)?

Apologies if I missed these questions in the previous responses. Just tell me that, and I will have a mooch myself through this post.

  • 12
    I'm not sure it can be compared to Gmail, really - SmokeDetector is tailored so specifically to the stuff we get on SE that I'm not sure direct comparison is possible/useful. That said, we see a very large percentage of any spam that gets past the SE-native filters.
    – ArtOfCode
    Commented Feb 22, 2017 at 23:12
  • 7
    As for (2), no. SmokeDetector is tailored to spam, and its method of detection (regex) is not easy to adapt to other purposes - we have enough spam tests to reach the top of the One World Trade Center, and recreating that for homework would take far too long to be useful. It's also out of scope for the main Smokey project, though anyone is of course welcome to fork it for their own use.
    – ArtOfCode
    Commented Feb 22, 2017 at 23:13
    Thanks, I use regex in PHP and it can be hit and (usually) miss. All the best
    – StudyStudy
    Commented Feb 22, 2017 at 23:19
  • 1
    No worries, happy to answer questions. If there are enough samples of stuff, anything can be identified eventually. What I wonder about homework questions is the variety - homework doesn't seem like something that would be the same (or extremely similar) every time, unlike spam.
    – ArtOfCode
    Commented Feb 22, 2017 at 23:20
    It's a continuous issue; you could be dissing/discouraging the next Einstein, but at the same time some users just ignore all warnings, which is annoying. I don't have an agenda one way or another, but the PSE community periodically goes through cycles of questions that appear just before exams. I think every possible procedure has been discussed, and discussed....
    – StudyStudy
    Commented Feb 22, 2017 at 23:26
  • Yep, it's not an easy one to deal with. I'm not sure there's much Smokey can do to help with it, but if you do ever come up with an automated solution I'd like to hear about it.
    – ArtOfCode
    Commented Feb 22, 2017 at 23:27
  • 8
    SmokeDetector is a great platform for seeing everything that comes into a site - you could definitely fork it and strip out the unnecessary bits, then add whatever logic you'd use to detect these. But yes, it is out of scope for Charcoal.
    – Undo
    Commented Feb 23, 2017 at 0:59
  • 2
    There is a stackapp developed and running in sobotics, for detecting poor quality questions on Stack Overflow. You can certainly fork it, and make one for Physics.SE. Commented Feb 23, 2017 at 11:14
  • @BhargavRao thanks very much for that
    – StudyStudy
    Commented Feb 23, 2017 at 12:01
  • 2
    @AlternativeFacts I'm one of the developers of FireAlarm. If you are interested in this thing, and want to run it on Physics.SE, please visit this chat room where you can get further details. Commented Feb 23, 2017 at 14:00
  • IMO, homework isn't really comparable to spam. Spam is generally always negative, and posters of spam don't tend to be well-meaning. However, while there's definitely annoying and exasperating homework-question askers, there's also perfectly polite students who are coming to a useful resource to ask for help in explaining something they don't understand. SE websites are large collections of very knowledgeable people. If a particular user is abusing the site (i.e. frequently posting asking others to solve specific homework problems) then that should be handled on a case-by-case basis. Commented Feb 23, 2017 at 23:22
    @AbigailFox Hi, I do take your point, and I have a comment above that I hope reflects your comments. I love it when someone really has a go at trying a problem, as I can see myself (as a self-study person) in the same position, but not so much when the OP ignores all the rules and basically demands an answer; that gets my goat a bit. It's hardly ever a particular repeating user on PSE (in my experience), as they get the message the first time they post; it's rather lots of people who are often desperate as their exam is a day away. But I was just curious when I saw the new filter.
    – StudyStudy
    Commented Feb 23, 2017 at 23:33
  • @Countto10 Yes, that's for sure annoying. It's just awfully specific and difficult for a robot (I'd imagine) to discern. Either it's very clearly spam-like, or it falls in a grey area where a human notices that it's an obvious attempt to get a specific problem solved for the OP. I just think these should be human-handled and not auto flagged, because posters of these questions tend to be (at least marginally) more well-meaning than posters of true spam. Commented Feb 23, 2017 at 23:37
  • 1
    @ArtOfCode. Just for info, your next project bbc.com/news/technology-39063863
    – StudyStudy
    Commented Feb 24, 2017 at 13:13
  • 1
    @Countto10 There's a stackapp for that too! And strangely, the owner of that app requested for a perspectiveAPI key around 24 hrs before your comment. (Spooky stuff, right?) Commented Feb 27, 2017 at 15:33
11

Has there been any thought about a quarantine area?

Given the incredible accuracy you have reached, I am wondering if it would be worth switching tactics here: instead of posting by default and deleting later, would it make sense to check first and only post "immediately" if the check is OK, putting the dubious stuff in a quarantine area (a review queue?) where users with the privilege to vote could overturn the bot's decision if it is unfounded?

This way, detected spam would not even appear on the front-page (and be indexed by Google) ever, decreasing the benefits spammers gain from it further.

  • 5
    In theory this sounds like a good idea, until you look at the number of posts that are created across the network daily. With that volume, the review queue would be overwhelmed in a matter of hours, unfortunately.
    – Andy
    Commented Feb 23, 2017 at 13:38
  • 3
    @Andy: Wait, the OP says "We have been casting an average of 60-70 automatic flags per day for over two months", how would 60-70 posts per day overwhelm the queue in a matter of hours? Commented Feb 23, 2017 at 14:22
  • 4
    Assuming I understood your post correctly, those 60-70 flags only account for the bad posts that make it through SE's own filters. If we are going to quarantine stuff before posting so that it can be reviewed, we have to account for all the good posts too. Those would overwhelm the review queue. We see orders of magnitude more "good/OK" posts than we do spam. SmokeDetector isn't early enough in the process to prevent posts from being made. It would have to be integrated into the SE post process.
    – Andy
    Commented Feb 23, 2017 at 14:27
  • 2
    @Andy I don't think you're understanding their suggestion. From what I understand the idea is: run it against smokey before the post is sent to the site. If it fails, quarantine it. Otherwise post it. Other than requiring SE dev integration that doesn't sound like a bad idea. Certainly shouldn't fill up the review queues.
    – ɥʇǝS
    Commented Feb 23, 2017 at 16:04
  • 3
    Although at this point I'm not sure it's worth it since anything smokey detects is almost instantly deleted anyway.
    – ɥʇǝS
    Commented Feb 23, 2017 at 16:05
  • 2
    @Andy: I am not suggesting putting everything in quarantine. Citing myself: I am wondering if it would make sense instead to check first, and only post "immediately" if the check is OK, putting the dubious stuff in a quarantine area (a review queue?) => only the dubious stuff should go to quarantine, which is 60-70 posts per day according to the OP. Commented Feb 23, 2017 at 16:25
  • 2
    @ɥʇǝS: There's a window of a few minutes where those posts are polluting the front page AND get indexed. It would be more pleasant for users if they were never on the front page and less attractive for spammers if they were never indexed. Commented Feb 23, 2017 at 16:26
  • 1
    @MatthieuM. Actually, stuff smokey detects rarely lasts more than 2 minutes and since auto flagging came online I haven't seen it last even 1.
    – ɥʇǝS
    Commented Feb 23, 2017 at 16:27
  • 1
    I misunderstood then. Right now, it's not possible to prevent the posting from taking place. But, we've had some discussions with SE on how we can better integrate. I'll bring it up when we talk to them next.
    – Andy
    Commented Feb 23, 2017 at 16:29
  • 2
    @MatthieuM.: Since it's all nofollowed anyway I don't see any rational sense in which it really could be much less attractive to spammers. Commented Feb 23, 2017 at 18:10
  • 1
    @NathanTuggy: Good point, I guess they don't even realize it. Commented Feb 24, 2017 at 6:58
4

Do you think you'll incorporate more advanced machine learning (like neural networks) at any point?

  • Entirely possible. We've thrown around ideas before about doing some kind of machine learning.
    – Andy
    Commented Feb 25, 2017 at 2:36
  • 7
    We've tried various forms of machine learning, but due to the lack of our experience in that field we have found that regex-based searching is more effective. Commented Feb 25, 2017 at 4:25
  • 4
    I did write a Naive Bayes ML version of Smokey a while back, making use of our existing data for classification sets - but either I don't have enough experience with ML to do it right, or it just plain didn't work, because its accuracy was no better than just guessing.
    – ArtOfCode
    Commented Feb 25, 2017 at 22:49
4

Only one remark: What will you do when spammers train their bots to make automated constructive and helpful comments?

Otherwise, keep up the great work!

Seriously: directly, actively preventing spam from being posted in the first place (error: unable to post this, because of spam) might cause spammers to work around the system more quickly. One should assume that spammers feel less motivated to work around a prevention system when they still think their messages are getting delivered. Therefore, I like this pragmatic and successful approach!

  • 4
    This is actually a pretty core part of why SE is so spam-free. The spammers think they can post semi-freely; what they don't check back to see is that their post sometimes gets canned in <10s.
    – Magisch
    Commented Mar 2, 2017 at 16:35
-17

Why not push this a bit further? Would it not be even more transparent and effective?

As you have demonstrated (and I had no doubt), programs are more efficient than humans.

Currently you are using other users' flags to reduce the number of humans needed to nuke a post, with the objective of decreasing effort and time to deletion (which, as a benefit, makes spamming SE less attractive and reduces the flagging effort required of SE users).

While this is great, and again I have no doubt that the algorithms used are more precise and effective than any human, the problem of responsibility remains: four normal users stating "the bot flagged for me", and fewer normal users deciding whether the post should be nuked.

Push it further! It will be more transparent and effective.

What I suggest is to use moderator accounts on the different sites to directly spam-delete the posts.

If these moderators trust the algorithms and the statistics (as we do), let the system use their accounts. The result will be:

  1. Clear responsibility for who deleted the post.

  2. The ability for whoever was responsible (the moderator) to restore the post and reputation if something goes wrong.

  3. Increased efficiency of the spam-blocking system. If we and the moderators trust it, let's delete these posts immediately, without using "socket" users.

You would only need volunteer moderators on the major SE sites, and since many moderators from different SE sites are already involved in this, I think that will not be a problem.

These moderators would need to agree to the use of their accounts and be ready to check what has been deleted. If any harm is done, they have the privileges to restore the situation.

  • 16
    Moderators are ATM not allowed to autoflag on their sites because we never want to unilaterally nuke something. We want all of Charcoal to see if it needs action - having a mod nuke would negate that.
    – Mithical
    Commented Feb 20, 2017 at 20:28
  • 29
    That's a step we're not willing to take yet. Sending flags from mortals allows us to send signal to the system; sending flags from a moderator account would be a whole new step. If we were to do that, it'd be extremely limited in scope and with a lot of staff consultation.
    – Undo
    Commented Feb 20, 2017 at 20:28
  • 11
    @PetterFriberg Main difference is that using a moderator account could nuke a post within a few seconds of it being posted, bringing a 100-rep penalty and SpamRam fun - all without human eyes ever needing to be set upon it.
    – Undo
    Commented Feb 20, 2017 at 20:36
  • 11
    @PetterFriberg We may expand the system, but we're not going to expand from 3 flags to moderator flags - it'll be staged. If 3 flags goes well, maybe we move to 4. If that goes well, maybe 5, and maybe on to 6. We'll be talking to Stack Exchange staff throughout that process, and moderator flags aren't even something we'd consider until we're ready to use 6 flags anyway.
    – ArtOfCode
    Commented Feb 20, 2017 at 20:36
  • 42
    As a moderator, I feel that it is inappropriate to give someone else (even an automated system) access to my moderator privileges to take any sort of action on my behalf. As a moderator, my flags are immediate with few checks. Commented Feb 20, 2017 at 21:35
  • 28
    I don't think giving access to the moderator account to a bot is a good idea. It might even be a violation of the moderator agreement, it certainly is if non-moderators have access to the bot. Moderator accounts have access to PII, if a bot were to cast binding flags, it should happen via an SE-provided API that doesn't require giving out the full access a real moderator has. Commented Feb 20, 2017 at 21:42
  • 2
    @PetterFriberg no, we will never ask to give Smokey employee access, nor will SE ever let us. It would effectively allow anyone with access to the account (including non-mods such as me) to use employee-only tools (nuke every question ever created? why not?) Commented Feb 24, 2017 at 7:28
  • 2
    Letting regular users do this is already irresponsible. Setting accuracy aside, it misrepresents what the user is doing and who or what is doing the flagging. Another consideration: Flag weight still exists IIRC, just behind-the-scenes -- this allows the user to use a bot to inflate their own trustworthiness in the eyes of the system, giving greater weight to their own manual flags.
    – user154510
    Commented Feb 24, 2017 at 23:00
  • 2
    @MatthewRead The first step in getting something integrated with SE is to prove it can be done without their systems.
    – ɥʇǝS
    Commented Feb 25, 2017 at 0:12
  • 1
    @ɥʇǝS I doubt that. Regardless, the first step backward is abusing their systems and user account access. I can't and won't speak for SE, they might totally be OK with this, but it conflicts with everything I know as a mod.
    – user154510
    Commented Feb 25, 2017 at 0:14
  • 4
    @MatthewRead A number of Charcoal people are mods, myself included. We've had a chat about this with a CM over several months as it's been moving towards implementation, and have been given permission to do this.
    – ArtOfCode
    Commented Feb 25, 2017 at 0:16
  • 6
    We are also moving towards tighter integration with SE, as Pops said in a comment above somewhere, but that takes both time and developer effort on their part, which is spread thin right now. Both of these options have advantages and disadvantages, but we believe the benefits of getting spam deleted faster outweigh the negatives of this system.
    – ArtOfCode
    Commented Feb 25, 2017 at 0:18
  • 4
    @MatthewRead I'm actually a little confused as to the nature of your general objection, as SpamRam is essentially already a fully automated post nuking system that you are presumably OK with. Smokey merely expands the spam filter with some extra rules, and as a bonus still requires human confirmation unlike the usual spam filter. If Smokey's ruleset were simply integrated into the existing fully automated system it does not seem like you would have the same objection, rather, you would likely appreciate SE improving their existing bot that unilaterally nukes posts (i.e. their spam filters).
    – Jason C
    Commented Feb 25, 2017 at 0:38
  • 3
    @MatthewRead To address "this allows the user to use a bot to inflate their own trustworthiness in the eyes of the system, giving greater weight to their own manual flags": There are more than 100 people signed up right now. We issue ~230 flags per day. It's load balanced (randomly distributed, actually) across those 100 users, then across a dozen or so high-spam sites. (230/100)/12 is a very small number. It's not going to win you an election.
    – Undo
    Commented Feb 25, 2017 at 0:48
  • 1
    I don't see how my comments could possibly be interpreted as being against automation. I am against abusing user accounts for this. It's true that I would have no objection with it being integrated into the system. "It's not going to win you an election" is a straw man.
    – user154510
    Commented Feb 28, 2017 at 18:52
