Discussion
Stop Sloppypasta
stabbles: I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe of the oracle; the latter is people tired of having to look something up for others.
verdverm: I would say LMAAFY is like LMGTFY, whereas sloppypasta is more like pasting a list of search results without vetting them. That is, there are two phases to this phenomenon: query and results.
uniq7: This article's proposal for stopping sloppypasta is to convince the people who do it to stop, but I am more interested in what someone who receives sloppypasta can do.

How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

I've never done that so far, because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.
madrox: I find that I don't have a lot of sympathy for people angry at this type of behavior. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high-quality content or discourse pre-AI.

I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.
valicord: > I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

Isn't it obvious? If I wanted to see an AI response to my question, I'd ask it myself (maybe I already did). If I'm asking humans, I want to see human responses. Responding with low-effort sloppypasta is the equivalent of leaving your trash in my driveway: just because I have my own trashcans there doesn't mean I want yours.
kace91: > How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Address the pattern rather than the person? General team reviews or the like. As long as it's not tech leadership pressing for it...
incognito124: Related: https://news.ycombinator.com/item?id=44617172
namnnumbr: 100%. This was inspired by, and quotes, "It's rude to show AI output to people". Thanks for linking the discussions!
OptionOfT: It's very weird how many people take the output of ChatGPT/Gemini/Claude as gospel and don't question it at all.

It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.

When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.

And it shows up the most with people who answer questions in domains they're not 100% familiar with.
madrox: > If I'm asking humans, I want to see human responses

I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y".

Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're being worthy of someone else's time and attention.

And if that's what people are seeking, Slack and social media are probably not the platforms for it (and, arguably, never were).
valicord: But it doesn't? I'm more than capable of using Google and ChatGPT myself. If I was looking for a machine-generated answer to my question, I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means either that the slop answer is not sufficient for some reason, or that I want to hear from actual humans who have subjective experiences that an LLM cannot.
simianwords: I've been thinking about this: what if AI runs autonomously and finds things to criticize that are factually incorrect?

It is easy to do on social media because the context is global, but in enterprises it is a bit harder.

Something like "flagged as very likely untrue by AI" is something I would really appreciate.

I see many posts and comments throughout the internet that could easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
chewbacha: When you must remind someone to “think” when using a technology, because the path of least resistance is to not think, it feels like the technology isn’t really helping.

They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.

They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.
rrr_oh_man: It's ironic, because the site has all the hallmarks of an LLM generated website.
namnnumbr: I acknowledge that those likely to copy-paste slop aren't likely to find this article themselves, but I built the page to be shared, or to guide discussions around etiquette, like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors.

I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, whether they iterated with the AI, and if/what/how they validated.

(The other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text.)
namnnumbr: Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and have written this rant to explain why it's rude, along with some guidelines for what to do instead.

sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.
ares623: I'm glad that the term "slop" really caught on. It's such a succinct way to describe the phenomenon, and at the same time it's so malleable. Sloppypasta, Microslop, Workslop, Ensloppification, etc.
Aeolun: I don’t mind this so much if they don’t know anything about the subject themselves. What bothers me is when they then paste it at domain experts as if it makes them qualified to talk.
Aeolun: Yes, I can replace the link to nohello in my automated responses now :)
paseante: The asymmetry is the core issue and it maps perfectly to a concept from economics: externalities. The sender gets 100% of the benefit (appears knowledgeable, responds quickly) and externalizes 100% of the cost (verification, parsing, filtering) to the recipient. It's pollution — you're dumping cognitive waste into someone else's attention.But I think the deeper problem is that sloppypasta is a symptom of something we haven't named yet: the collapse of the signal that someone has thought about something. Before LLMs, a long, detailed response in Slack implied the person had spent time thinking. Now it implies nothing — it could be 30 seconds of prompting. We've lost the ability to distinguish effort from output, and that breaks the social contract of professional communication.The fix isn't etiquette guides (the people who need them won't read them). It's cultural norms enforced through friction — the same way code review catches sloppy PRs. If your team starts routinely asking "did you verify this?" when someone pastes a wall of text, the behavior self-corrects fast.
namnnumbr: Oh, I 100% acknowledge the site itself was LLM generated. I'm not a web designer, so I needed a lot of help making a visually appealing site, even if that design language is at this point LLM trope.However, the essay and the guidelines were all human-written!
Terretta: Hits you in the first row of buttons with the classic gen-AI slop "Why It Matters".

So trace* through ninerealmlabs and ahgraber and sure enough:

> I used AI:
> - to help build this website
> - to help generate examples of sloppypasta based on my original guidance
> - to proofread and review the human-written copy to provide a critical review
> - to improve my arguments and ensure clarity

Kudos for being forthright.

---

* Turns out clicking "Open Source" at bottom right gets there faster!
rrr_oh_man: Credit to you for your candor!

I'm possibly too jaded / cynical already...