Stack Overflow recently announced that moderators are no longer allowed to take action against content that was produced using large language model based text generators (e.g. ChatGPT). In my experience, such posts sound authoritative and are well-written, but contain anything from vacuous restatements of the question to slightly misleading advice to outright bogus information; a recent example hallucinated RPC commands that don’t exist.
After investing many hours a week for over ten years, the quality of posts on this site is dear to my heart. While we previously saw some astroturfing and lots of spam, I am concerned that the cost of generating posts with LLMs is negligible, while imposing a significant vetting cost on our users. It is incomprehensible to me that Stack Overflow is essentially giving carte blanche to any user to flood our sites with mediocre to malicious content while taking away our tools to curb such behavior.
You can read more about the moderation strike in this open letter, and this meta topic.
Update: Andrew Chow and meshcollider have joined the strike.
As part of our strike, we will not participate in:
- Raising and handling flags.
- Closing or voting to close posts.
- Deleting or voting to delete posts.
- Working on tasks in the various review queues.