Discussion
amadeuspagel: Let's turn HN into a place where we all grow old together until it slowly dies when we do.
BalinKing: I furthermore wish that "posting an LLM-generated comment (i.e., passing it off as your own)" were worthy of an instant ban, because I see this sort of behavior from non-green accounts as well.
cebert: HN does a good job moderating and blocking spam from new accounts.
furyofantares: Used to. The job has apparently gotten a lot harder now.
delichon: The moderators are supposed to just know it when they see it? It's that black and white to you? Or are lots of false positives a price we have to pay?
lokar: It’s only going to get harder as people continue to model their writing on LLM style.
bluefirebrand: I guess it's been fun, but the internet is well and truly dead. If not already, then soon.
SG-: I'm honestly surprised HN isn't used to share more malware/GitHub repos with new accounts too.
anonym29: The target audience for malware authors/distributors typically isn't a community full of technically literate software engineers, security practitioners, reverse engineers, malware analysts, etc.

Same reason that burglars don't typically target security camera stores and robbers don't typically target police departments - it's basically a fast-track to early detection, which disrupts the main objective of the adversary.
_alternator_: Devil's advocate take: I think the quality of the Show HN projects is in fact getting higher, at least for the ones that land on the front page. The issue is that projects that used to take weeks, months, or even years of work can now be done in a weekend or so. It’s been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren’t that much effort _with AI assistance_.

So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.
pluc: Well it's not just that... picture a community group talking among themselves, and then some rando shows up, yells "I built this thing that you all might like", hangs out for an hour and then is never heard from again.

I think that's great in moderation as it stimulates ideas and discussions, shows us what folks are working on, etc... but this can't become Product Hunt. The reasons for posting here should be vastly different than posting on Product Hunt.
dang: We're going to at least restrict Show HNs for a while.

I do think this is relevant though: "HN can't be immune from macro trends" - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
ludicrousdispla: i like your humor
abtinf: I really wish there was a setting whereby I could simply hide all comments from accounts less than a year old. The correlation with LLM slop is simply off the charts.

It almost feels like new accounts should be treated like new posts -- it is sort of a service that a select few are willing to undertake to upvote interesting stories early on.

I wish even more that I could block specific users (there are some highly prolific, high-karma users here who are extremely irritating), but that's harder and is probably best handled client side.
neom: I have a chrome plugin I made that gives me some personal social features (tagging people), it can block: https://s.h4x.club/yAuNoQDe
Lerc: There have been numerous stories on HN where someone directly involved with the story has created an account specifically to engage in discussion about whatever the story was about. Losing that seems too high a price to pay.

Yes, there are AI-generated comments; in the past there have been script-generated comments. You can report, downvote, or just ignore and move on. I am aware of posts like this existing, but I feel they are being effectively managed.

Try not to be too offended by the notion of these posts existing. Many of them are not malicious; they are just caused by users stepping outside what is considered appropriate. But in a landscape where the footing is quite dynamic, everyone is making their own judgment calls in a field where the consensus is not clear, and guidance seems more appropriate than punishment here.
heavyset_go: Do this with submissions, too. Or at least put some indicator that it's AI generated.
mapontosevenths: I am more annoyed by the anti-AI luddites filling the comments with low-value complaints than I am by quality content written partially by an LLM.

Those low-value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself... it's boring.
heavyset_go: Without engaging in more ad hominems (which are wrong, by the way), what's the issue with labeling AI content as what it is?
truelinux1: This human comment was 100% synthesized by Grok-4.AI spam good actually. More please. Concern levels: zero. upvote for progress
andai: I was thinking about this the other day. If someone made TempleOS today, people wouldn't be as impressed, because they'd just assume they used AI.

They'd assume this even if they hadn't used AI, and even if AI didn't have the ability to pull it off.
Springtime: That dev made many videos about its creation and his motivations, though, and along with his personality I think people would be understanding.
castral: I don't understand how this is supposed to solve anything, and I've seen it suggested as a solution multiple times. If you restrict comments to older accounts, all it's going to do is make the bot creators speculatively open and proactively age accounts for future use.
dang: And also invest more effort in karma farming. In other words, if we raise the bar for Show HNs we'll probably see more generated comments in the threads.
Oras: I wish for karma-based filters too, if we manage to get filters at all. I want to see posts only by accounts with {x}+ karma points.
elpocko: You want other people to deal with the things you don't like and filter stuff for you, to improve your own experience and shield you from the filthy masses. God forbid you have to endure a comment you don't like, your royal highness.

I'd rather see you gone than the people you complain about.
andai: Yeah, it's weird. There was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. The author claimed he wrote it manually. (Which is honestly even more concerning!)

Maybe there can be a dedicated 'flag botspam' button?

Then again, it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?
monster_truck: I disagree, in that the last few I can think of have involved things like services that don't really explain what they do properly and then ask for full permissions to your GitHub account, or that claim to be far more than they are (i.e. "I made this thing" but it's just a shim for someone else's stuff).
Oras: For all accounts or just new ones?
dang: Just new ones for now.

I don't want to make HN harder for legit new users, but I do think a bit of community participation is reasonable before posting a Show HN, so it isn't just a box on some "how to promote your project" checklist.
mmaunder: That's one way to block those pesky young innovators from trampling our lawn.
ohyoutravel: It’s getting really bad. New accounts hours old posting walls of AI-generated garbage comments across dozens of topics. Please restrict new posters, minimally, and perhaps add a little friction to new account sign ups.
DetroitThrow: It used to be so pleasant to read Show HN and find such interesting projects, but nowadays it's rare that anyone posting their GitHub has ever read their own source code, or that the project even comes close to functioning in the way the OP claims.

Such a sad development.
delichon: There is an epistemic silver lining. This is in fact a Red Queen's race that cannot be won. So in the end the only solution is to evaluate the text on its own merits without reference to the writer's status, because that status can no longer be reliably detected. For a public feed like this one, the only alternative is to ignore it. The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.
bakugo: > So in the end the only solution is to evaluate the text on its own merits

This falls apart as soon as you realize that evaluating the text requires far more effort than generating it. If you're spending 2 minutes reading text that took 2 seconds to generate, you've already lost.
delichon: That just means that you can only evaluate a smaller fraction of the data. If your goal is to do more than sample it, you've already lost.
wormpilled: It's really hurting the brand. I can't remember the last time I bothered to even check that index. I used to check it all the time.
Springtime: Filtering is a valid form of improving signal. If there were a more reliable heuristic for users posting low-effort content, the user would be considering that instead.

If someone in a chatroom, for example, is being spammy with their messages at the expense of posts one finds more relevant, then blocking them isn't due to considering them some filthy pleb but improving one's experience. If the user being filtered never becomes aware, there's no reason to be offended, either.

Edit: also, I wasn't the one to downvote you, if that makes any difference.
elpocko: HN is already heavily moderated. Low-effort posters and spammers get downranked immediately, based on their behavior. OP is simply intolerant and unable to function in a social setting.What would happen if every single user enabled their minimum karma filter?
gus_massa: > There have been numerous stories on HN where someone directly involved with the story has created an account specifically to engage in discussion about whatever the story was about.

Yes, and sometimes some of the HN automatic filters kill the comments. Remember to "vouch" for the comments if they are interesting/relevant; a few "vouches" unkill a comment. And in extreme cases, send an email to hn@ycombinator.com so dang/tomhow can take a look and use some magic to fix the problem.
tasuki: I think your comment was generated by an LLM and hereby vote for your immediate and permanent instant ban.
bryanlarsen: It's one thing to have an AI label. It's another to completely derail a conversation with a likely false AI accusation.

Example: https://news.ycombinator.com/item?id=47122272

You have to scroll a few pages before the actual article is discussed. "This was LLM generated" is likely to float to the top of an article's thread. That's where the best comments about the article deserve to go, not an off-topic comment. An AI label should be much less obtrusive.
Kim_Bruning: I would actually expect Openclaw bots to be showing up here from time to time now, since there's no explicit documented policy against them.
BalinKing: I think a steelman interpretation of the parent is that entirely LLM-generated projects should be disallowed. There's a lot of submissions on Show HN that seem completely vibe-coded to me (like, including the README), which is a very different situation IMO from someone who simply used Claude to write some—or even most—of the code. When even the human-facing portion of a submission is LLM-generated, it bothers a lot of people (myself included).
bmacho: Accounts have to start posting at some point. Moderators don't have the capacity (and frankly, it is impossible) to check whether they are bots or humans.

There are no good solutions. There are hundreds of thousands of intelligences out there, trained millions of hours on how to scam humans, capable of spitting out text tirelessly and shamelessly, and there will only be more of them: tens, hundreds, thousands of times more.
Kwpolska: Comments should be allowed from day one, but submissions should require some experience and karma.
MikeTheGreat: I think that your comment was generated by Eliza, and hereby vote for you to get a karma boost for being Legit Old School, then an immediate and permanent instant ban.

I'm joking, of course. If your comment was generated by Eliza it would have started with "How do you feel about 'I think your comment...'" :)
ai-psychopath: lol, no
Springtime: This thread is evidence that some are unhappy with the state of a core HN feature due to users posting what they judge to be low-effort content, so it does get through.

The comments here are about possible mitigations. Based on this feedback, dang has apparently now restricted new accounts from posting Show HN threads, so globally there is now a form of filtering users from being seen by others based on a heuristic.

Your initial comment is written with the impression that the poster wanting to improve their chances of seeing higher-effort content is making some judgment on the posters themselves, as though they're conceited ('filthy masses', 'your royal highness'), when they're merely considering one approach to reducing noise in their feed.

I myself, in this very comment chain, have already posted that I disagree that filtering by karma would help, due to gaming issues, but I don't see the problem with the user's goal.
ryandrake: In every single article's comments now, there's always someone coming out of the woodwork to post "This article was written by an LLM." These comments are about as useless as "The website's color scheme is annoying" and "The website breaks the [back button | scrollbar]" (which, by the way, are not allowed per the HN guidelines[1]).

If anything should be banned, it's low-effort "This is AI" commentary. It adds absolutely zero to the conversation.

1: https://news.ycombinator.com/newsguidelines.html ("Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.") I'd argue that whether or not an article (or reply) was written by AI is a tangential annoyance at this point.
tayo42: I was thinking the same thing but didn't want to post my complaint about other commenters, because I think that's against the rules too?
neom: I almost emailed dang this morning to offer to help out, though I'm not particularly technical. A few solutions I thought of:

1 - Honeypot: hide some links LLMs can follow; if stuff gets posted through them, it's unlikely to be a human.

2 - Make a captcha that only LLMs can answer. I recently made 2 social networks, one that humans couldn't join, by making the submission question too difficult to figure out quickly.

3 - Use an LLM to detect LLMs. On the other social network I did for fun (that a small number of people use), an LLM that looks for moderation issues does a good job of flagging them.

4 - Invites, but vary the number you have to give out by account age + karma.

The first 3 seem like they'd stop some % for some time, but eventually get old.
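(Idea #3 can be approximated, very crudely, without any LLM at all. Below is a toy heuristic scorer in Python; the tell-phrase list, weights, and threshold are invented for illustration and are nothing like a real or tuned detector.)

```python
# Toy "LLM-flavored text" scorer. The tell phrases, weights, and
# threshold are illustrative guesses, not a validated detector.
TELL_PHRASES = [
    "it's not just",
    "you're absolutely right",
    "delve into",
    "in today's fast-paced",
]

def slop_score(text: str) -> float:
    """Score a comment by counting stylistic tells."""
    t = text.lower()
    score = 1.0 * t.count("\u2014")  # em-dash density is a common tell
    score += sum(1.0 for phrase in TELL_PHRASES if phrase in t)
    return score

def looks_like_slop(text: str, threshold: float = 2.0) -> bool:
    return slop_score(text) >= threshold
```

The hard part, as other commenters note, is that humans increasingly write in this style too, so any real deployment would live or die by its false-positive rate.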
vivid242: We need new ways to prove our humanness.
rkomorn: I very much agree.

The number of comments I see complaining about "it's not this, it's that" and other "LLMisms" definitely frustrates me more than the original content.
jacquesm: For now there is already a pretty effective mechanism in place: downvote and/or flag those comments that you think are across the line in that sense.

But in principle I agree with you. The rule for me is: 'if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read'.
bakugo: /newest is pretty grim, too. Go there and click any link, and odds are you won't even need to read the content to know it's AI-generated, because you'll immediately be met by one of:

- A landing page that looks exactly like every single AI-generated landing page ever; I don't even need to describe it, you already know what it looks like

- An article or blog post headered by an image with the Gemini logo in the corner

- A GitHub repository with CLAUDE.md or AGENTS.md and/or 50 large commits made in the span of a day

I'd estimate that more than half of new submissions now fall into one of the above categories.
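(The third tell above can be written down as a pure function. This sketch takes a file list and commit timestamps as plain inputs; in practice you would fetch those, e.g. from the GitHub API. The marker filenames come straight from the comment, and the 50-commits-in-a-day default is the comment's own number.)

```python
from datetime import datetime

# Marker files named in the comment above.
AGENT_FILES = {"CLAUDE.md", "AGENTS.md"}

def matches_ai_repo_heuristic(filenames, commit_times, burst=50):
    """True if the repo contains an agent config file, or has a
    single-day burst of `burst` or more commits."""
    has_agent_file = bool(AGENT_FILES & set(filenames))
    per_day = {}
    for ts in commit_times:
        day = ts.date()
        per_day[day] = per_day.get(day, 0) + 1
    has_burst = any(n >= burst for n in per_day.values())
    return has_agent_file or has_burst
```

Like any such heuristic, it only flags candidates for a human look; plenty of legitimate repos keep agent config files too.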
vunderba: This already happens now. Go look through a few of the "Show HN" authors: you'll inevitably see several accounts that are 50-100 days old with a karma of 1, to avoid a green label.

The OP is talking about posts, not comments. The simplest solution might be to prevent someone from posting a "Show HN" until they’ve earned twenty-five or fifty karma, to demonstrate that they’ve been actively participating on Hacker News rather than using it solely to promote themselves.
dextrous: One way that I could imagine a human-only HN could evolve in the coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter’s posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast, maybe there’s some controls at the top level that let you see more content “lower down the tree” if you’re ok with lower SNR. Latency to get a post widely distributed grows but I don’t see that as a massive problem.
rerdavies: I think all submissions to HN should be submitted via snail mail, and must be handwritten. That would solve the problem.

/heavy sarcasm

That being said, my mother used to insist on handwritten cover letters from job applicants. Her rationale: it takes effort, so it weeds out all the applications from people who are just randomly spraying out applications for jobs they are not qualified for.
neural_thing: Eventually HN is going to need to charge people $1 to post, just for spam filtering. Maybe donate the money to open source or something.
reddalo: $1 is not going to stop people from spamming. It's just $1 after all...
carra: Charging money does not seem like a very good idea on a site like this, where you expect users to provide all the content. It would also require credit card info, which is a massive barrier, even if you were to charge just 1 cent.
xpe: [delayed]
fubdopsp: It's much more than a "tangential annoyance" and it adds a lot to the conversation--among other things, it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication. Farming any of these steps out to an LLM completely breaks down the social contract involved in participating in an online forum like this. What's the point?

It's the exact same effect that's playing out in many other areas where LLMs are encroaching: bypassing the "human effort" step has negative side effects that people who are only looking at the output are ignoring.

I actually find your opinion so infuriating that it's taking all my composure to not reply with something nastier. If you guys want to spend your time reading shitty LLM spam posts with shitty LLM comments, why don't you find another site to do it on instead of destroying this one.
ryandrake: Hey, I'm not a fan of LLM slop articles and blogspam either and if I could hold back the tide, I'd try to. But I'm just saying that pointing it out each and every time is just going to become its own form of spam. We're quickly entering a world where 99+% of what is written online, be it blogs, amateur news, or actual professional journalism, is LLM generated. You hate it, I hate it, but it's coming. The state of journalism is already in shambles and line must go up, so "everything written by AI" is sadly inevitable. Posting every time to remind people of that? I mean by the end of 2026 you might as well have a bot commenting on every article that it's probably LLM generated. I argue it adds no signal to the conversation.
saulpw: "cannot be won", "only solution", "only alternative". Sorry, no, that's too black and white. There are other solutions, even if they only work for a couple of days/months/years.
delichon: Don't tell anyone, but I am secretly in charge and open to suggestions. Spill.
Avicebron: We can relentlessly bully anyone using phrases like "Red Queen's race" unironically. Measly human resistance against the vapid strip-miners of semantic value.
gozucito: Agreed. Merit is the only fair solution. If OP noticed a garbage post, that means they evaluated a post on merit and decided it was garbage. So it works.

We have genAI generating videos, and the quality sucks compared to human-produced and filmed content. People call it out, and nobody is going to watch a genAI movie at the theater or binge a genAI TV show. Merit-based filtering.

GenAI for music is not as good as human-generated music either. Not a single AI song from Suno or Udio has reached the Top 40. Not even one. 100% of the songs are human, because they are evaluated on merit.

We have SWE and agentic benchmarks to evaluate coding LLMs on merit.

Disclaimer: I am a new account.
fubdopsp: I personally cycle accounts on this site for pseudo-privacy reasons. HN does not allow you to delete old comments you made, and thus the only way to maintain some semblance of control over my profile and privacy is to periodically switch to new accounts. I've been doing this for years now. The only real downside for me is that as a new account you don't have the ability to downvote, which is super annoying but something I've learned to live with.

I'm not saying your idea is bad necessarily, but I'm offering another perspective.
Supermancho: I also do this. Pretty much every time I move.
Jtsummers: https://news.ycombinator.com/newest - Scroll through there and there are a lot of [dead] submissions by green accounts. They aren't outright banned from submitting, but it often triggers auto moderation. It's like posting a link in one of your first few comments as a green user, that often results in shadow banning automatically.
layer8: > Maybe there can be a dedicated 'flag botspam' button?

We already have flagging and downvoting?
layer8: Unfortunately I don’t think that it would solve the problem: https://www.google.com/search?q=handwritten+mail+service&udm...
tomhow: Please don't post snarky, shallow dismissals. That's been against the guidelines for a long time.

Genuine innovation is what we most want to encourage. That's what Show HN has always been about.

The problem now is that coding assistants have dramatically lowered the bar for getting a product or tool working, without the need for much innovation. We need new ways of identifying projects that are genuinely innovative so that their creators can be fairly rewarded, rather than being drowned out.
mosura: I have long believed that whatever comes along to replace the reddit/HN-type site will be based almost entirely on trust networks, i.e. only surface stories posted by or upvoted by those you trust, and the inverse with those you distrust.

Then have trust drop off exponentially as it propagates transitively, and it could be almost workable.
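(The transitive part could look something like a bounded graph walk with a per-hop decay factor. A minimal sketch, where the decay rate, hop limit, and data shapes are all made up for illustration:)

```python
def trust_scores(trusts, me, decay=0.5, max_hops=3):
    """Propagate trust outward from `me` through who-trusts-whom edges,
    multiplying by `decay` for each extra hop. Directly trusted users
    score 1.0, friends-of-friends 0.5, and so on; users beyond the hop
    limit get no score and would simply not be surfaced."""
    scores = {}
    frontier = {me}
    seen = {me}
    weight = 1.0
    for _ in range(max_hops):
        nxt = set()
        for user in frontier:
            for other in trusts.get(user, ()):
                if other not in seen:
                    seen.add(other)
                    scores[other] = weight
                    nxt.add(other)
        frontier = nxt
        weight *= decay
    return scores
```

Distrust edges could propagate the same way with negative weights; the hop limit is what keeps this from degenerating into one global reputation score.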
speedgoose: The risk is to build very good echo chambers. One shouldn’t have to read AI slop or despicable opinions during their free time, but some exposure to alternative respectable and not idiotic views should be part of the design.
bdcravens: It's not like older accounts are necessarily any better.

If you look at the leaderboard (https://news.ycombinator.com/leaders), you'll find a few old accounts that pretty much do nothing but farm links, posting sometimes dozens of times a day, with a very low percentage of comments. Their high "score" isn't an indicator of quality; they just spam enough that a few get some good upvotes, but most of their submissions are low quality.
AlexeyBrin: The solution is for the users to be able to mute/hide accounts. It won't matter if an account has 10k points, once you mute it, you won't see what it posts.
xupybd: That's sad there have been some really neat things shared that way but you gotta do what ya gotta do.
robotresearcher: You're absolutely right.
rpcope1: You know it's bad when reading "you're absolutely right..." causes you to oscillate between wanting to laugh and also violently destroy the computer.
starkparker: This has long been my biggest issue, much bigger than new accounts spamming slop. There are accounts with 10,000+ karma that do little more than feed links from the NY Times and similar publications, regardless of their relevance or value.

Each one gets 4-5 karma, and a few crack double digits. Post 10 or 20 a day over a year or two and they're at five figures. Pure farming.
AstroBen: What's the point? It's not like karma gets you anything... or are they selling the accounts?
mirekrusin: Pay with karma?
laborcontract: Please do so. And, forgive me if I speak heresy, but there has to be more proof of work (friction) to create accounts. I was shocked at how easy it is for something like chatgpt atlas to create new accounts on the fly.
magicalhippo: The problem is that we might lose some gold.

Not infrequently I have seen the author of, or a significant party to, a story chime in through a fresh green account, having been alerted one way or another to the story being posted here. And usually when they do, it's very interesting.

As such, I would find it detrimental if they had to jump through so many hoops that they don't bother, or it takes so long that the thread dies before they can participate.
Uvix: Seems like restricting posts but not comments from a fresh account would thread that needle pretty well?
HendrikHensen: I rotate accounts on "social media" (mostly Reddit and Hacker News, the others don't interest me) every few weeks or months to make sure not too much of my post history accumulates in one account. I would dislike it very much if there would be high friction to create new accounts. On the other hand my behavior is probably a major outlier.
srid: > not too much of my post history accumulates in one accountI'm curious to hear what benefits you think can be gained from avoiding this.
hamdingers: Abusing the flag button by reporting LLM generated posts and comments (which are not breaking any current guidelines) seems like a good way to get your flags ignored.
layer8: Flagging isn’t only in case of breaking the guidelines. From the FAQ:

> What does [flagged] mean? Users flagged the post as breaking the guidelines or otherwise not belonging on HN.

In other words, submissions get flagged that users believe don’t belong on HN. LLM-written submissions can be one such case.
patrickmay: You're giving me flashbacks to PGP key signing parties.I do like your idea, though.
eudamoniac: My system has been working pretty well: using some extension or another that has mute functionality, if I see a person post an extremely low quality comment, I look at their comment history for two or three pages. If there is no comment of value in that set, I mute the user. The board gets better each day.
oofbey: I think yours might be an extreme case. But I think the anonymity here is widely appreciated, and frankly it necessarily relies on easy creation of accounts.

People share things that they often wouldn’t otherwise. And somehow the culture remains mostly civil. It’s a pretty fantastic forum IMHO.

Changing the rules would surely change the vibe, so to speak.
UncleMeat: I've seen people admit it. I've even seen a commenter say that they were an agent. We can do these cases.
echoangle: https://news.ycombinator.com/item?id=47290841

It is against the rules though.
fubdopsp: I still think it has strong normative value. Maybe at some point, when norms have become firmly established, these comments will be pointless and spammy, but I don't think we're anywhere close to that point yet.

A lot of blogging is essentially self-expression, and that stuff won't be taken over by LLMs (it would defeat the whole point). Other blogging is done with some kind of sales/promotional/brand purpose, and the extent to which LLMs will dominate it will depend on how we as a society react to it (see the AI art battles), since if people react negatively to it, it becomes counterproductive.
mbernstein: There's almost no way to get hand-authored posts any views (I tried with one of mine recently). I felt like I submitted it, and a moment later there were like 20 new, very obviously AI-generated posts ahead of it.
briHass: It worked for years for the SomethingAwful forums: a nominal charge for the ability to post, with plenty of 'timeout' chances for rehabilitation before an outright ban, keeps out most of the junk.

It feels wrong at first to pay for commenting on a forum, but the alternative is almost always a gentle slide towards a trash dump. AI means that slide is now almost a vertical slope.
Bombthecat: Reddit didn't ban you? I got banned for that lmao
layer8: Can you elaborate on that?
atoav: Eliza was one of the first chatbots from the mid to late 60s: https://en.wikipedia.org/wiki/ELIZA
skeeter2020: that's interesting, tell me more about one of the first chatbots from the mid to late 60s
ThrowawayR2: [delayed]
boredatoms: I do the same. It simply means there's less accidental leakage / self-doxing that could be pieced together if you (or an LLM) read every comment on the account.

Suggestion: pick a long-term account, dump the comments, and see what an LLM could figure out about the target.
jmusall: Then nobody would admit it, so the problem persists. Except maybe for fully automated accounts. Those should of course be banned anyways.
idontwantthis: I do it sometimes just to restrict my own pride in the account. I get a buzz from upvotes and that upsets me on a deeper level.
brewdad: First interview question is to submit a handwriting sample.
rjh29: I don't understand why we put locks on bicycles, a determined person can just saw them off.
redbell: This post from 19 days ago is very close: https://news.ycombinator.com/item?id=47045804Additionally, dang had replied on it: https://news.ycombinator.com/item?id=47050421
rjh29: Marking the sarcasm here really ruins your humour.
alabhyajindal: 100%. Not sure what the solution is, but I have lost interest in Show HNs these days. Part of it is that when someone posted before, it usually meant they had spent a fair amount of time thinking and had found it worthwhile to spend energy on the project. This was a nice first filter for bad ideas, and it no longer exists.

Even for posts that are interesting to me, I get the feeling that they're not worth looking at because they were probably made using LLMs. Nothing against them, but I personally thought of Show HNs as doing something for the love of it, the end result being a bonus.
bschmidt25: HN aggressively bans long-time users who once contributed interesting projects. It’s the same problem as Reddit, where eventually there are no creative/original users left.

Only thing left here are the cattle and sheep types who never howled at no moon.

YEOOOOOOOOOOWWWWWWWWWWWWWWWWHH!!!! YEEEEOOOOOOOWWWWWWWWWWWWWW!!!!!!
verdverm: For your first-ever comment, you are breaking multiple rules.

Please review the Guidelines and FAQ.
Aurornis: In my recent experience, local meetups and groups are unexpectedly more prone to self promotion and low effort spamming.Local groups have a problem where members admit their friends or pressure others into inviting their friends who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of a social bond that makes it harder to downvote or flag their posts.Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on to your victims.
whh: I echo this sentiment for all social media platforms today... At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam, and AI slop on Instagram, X, and Facebook for years.
lich_king: Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text. Some of it seems to be a knee-jerk reaction to the occasional one-sided stories of people who were accused of using LLMs and fired from their jobs. And some of it seems to be just hedging, so that we don't develop a culture that could penalize their LLM-generated posts or code.

My main problem with that is that you can generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.
trinsic2: I think the right people will stick around. There is a certain kind of individual that has the patience to understand that a system restricting new accounts from posting is a good thing. Recently, there have been a lot of posters who come here from the open web just to try to slant opinion.
Springtime: To provide a heads-up to others who feel similarly about whether something is worth spending time on: there isn't a problem with speculating that something is AI-produced if there are indicators of insufficient human authorship, but that's a big if. If incorrect, such comments themselves become noise.
In its worst form, which I've now seen many times in other communities, users claim submissions are AI for things that are provably not, merely to dismiss points of view the poster disagrees with by invoking calls to action from knee-jerk voters who have a disdain for generative AI. I've also seen it expressed by users who I suspect feel intimidated by artwork from established traditional artists.
Thankfully on HN it hasn't reached that level, but I have seen some here, for instance, still think use of em dashes with no surrounding spaces is some definitive proof, pointing to a style guide without realizing other established style guides have always stated to omit the spaces (eg: Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them.
What one hopes for with curated communities is that people have discriminating taste at the submission and voting level. In my own case I'm looking for an experience from those who have seen a lot of things, only find particular things compelling, and are eager to share them. Compared to some submission that reaches the front page of, say, some popular programming language docs, which just provides another basis for rehashed discussion (and, cynically, grows karma, since the poster knows such generalized submissions do this).
toomuchtodo: “shownew” : “no|yes” option would be nice.
gnabgib: If everyone turned off new account visibility, we'd just see the same noise 30 days later... not sure that helps.
toomuchtodo: During that time, one would assume mod action would filter out the undesired, thereby “seasoning” accounts.
Aurornis: This comment uses a lot of big words but it’s full of fallacies.
The HN user base is not perfect at detecting LLM content but a lot of it does get flagged and downvoted eventually. About once a day I’ll click on a link, realize it’s AI slop, and go back to HN to flag it, but discover that it’s already flagged. If you turn on showdead you can see all of the comments from LLM bots that have been discovered and shadowbanned.
The fallacy in the comment above is simple: it’s taking the current situation and extrapolating to an extreme future, then applying the extrapolated future prediction onto the current situation. The current situation does not represent the extreme future predicted. A lot of the LLM content is easily spotted and a lot of it is a waste of time to read, therefore it’s right to police and ban it. Even if imperfect.
zahlman: (I think you missed the joke.)
trinsic2: I think the problem is you can be tracked by your email when you sign up for a new account. So I am not sure how this can be helpful.
phs318u: Those of us old enough to remember Compuserve know that the cost of entry was exactly why the quality was so high. I was lucky enough that my employer paid for it. I was also active on various comp.os.* Usenet forums. Both were great sources of quality information but Compuserve stayed “high signal” for longer. Usenet - the birthplace of trolling - eventually degraded to the point of near uselessness. The signal was drowning in noise. Mainly because some people are just shitty. Which is worth remembering here. Behind every AI agent spamming HN (and everywhere else) is a human who thought this was a good idea. Why do they think that? Maybe that’s the line to pursue for how to deal with this issue.
7777332215: Same, though I'm also surprised how easily I can make new accounts for this site. But I love that. Hope it doesn't require me to jump through a bunch of hoops in the future.
SoftTalker: I'd suggest: new accounts are read-only for at least a week. Then they can comment (rate limited at first, gradually relaxed) and vote, and then after some additional amount of time and/or karma they can submit a post. Maybe some of these mechanisms are already in place? Bots can probably game this too but drive-by bots maybe won't be patient enough.
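A minimal sketch of what such a graduated-privileges scheme could look like. All names and thresholds here are hypothetical illustrations, not HN's actual rules:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- chosen for illustration only.
READ_ONLY_DAYS = 7        # new accounts are read-only for a week
FULL_COMMENT_DAYS = 21    # comment rate limit relaxes after this
SUBMIT_MIN_KARMA = 50     # karma needed before submitting posts

def allowed_actions(created: datetime, karma: int, now: datetime) -> set[str]:
    """Return the actions a graduated-trust account may take."""
    age = now - created
    actions = {"read"}
    if age >= timedelta(days=READ_ONLY_DAYS):
        actions |= {"vote", "comment_rate_limited"}
    if age >= timedelta(days=FULL_COMMENT_DAYS):
        actions.add("comment")
        if karma >= SUBMIT_MIN_KARMA:
            actions.add("submit")
    return actions
```

As the comment concedes, a patient bot can simply wait out the clock; the karma condition at least forces some visible participation before submissions are allowed.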
Barbing: Even something like…
Example[.]com
But don’t worry, HN has been thoughtful about links from new accounts for months and months (can’t speak for longer, but maybe/probably). Effort could well be duplicative unless I’m unaware of some more granular detail.
BalinKing: Sorry, updated my original comment—I meant to qualify it to only those cases where it's blatantly obvious. Obviously a lot of ambiguous comments will slip through as a result, but I agree with you that false negatives are better than false positives.
vel0city: Your comments use em dashes. Many would claim those are vastly overrepresented in AI language, and thus that an account overusing them is blatantly AI.
I don't think your account is AI just by these few comments, but I would like to point out that most rubrics one might use to determine what is obviously AI might end up including the way you talk.
If there was a truly accurate tell, some algorithm you could feed a few sentences into and it could tell you "yep, this is 100% AI", then yeah, sure, use that. I don't know that you could realistically build that machine, especially when it comes to the generation of text.
scratchyone: For what it's worth, there are modern LLM detectors with extremely low false-positive rates. The tech has advanced quite a bit since the ZeroGPT days. Personally I've gotten very good results from Pangram Labs. Still can't directly ban people though because false positives are always possible.
MarsIronPI: BTW, what ELIZA implementation are y'all using? The Emacs Doctor?
ArekDymalski: The core function of the HN front page is based on "other people filtering stuff for others". Filtering out by any criteria (karma, account color, first letter of the nickname, whatever) doesn't automatically mean that someone is a jerk, as you have stated in the comments nearby. It just means that someone is selecting the information to consume, and it does not harm anyone (perhaps besides the selective person, who might miss interesting info due to the selection).
elaus: It seems easy enough to circumvent: "We're launching our product in 2 weeks, so let the AI create and 'warm up' 20 new HN users so they're ready to shill".
It's really not a problem that can be solved easily :(
john01dav: This matters when you're hiding from the website. It doesn't matter if you're just trying to hide such things from the public.
Barbing: Immediate comment privileges are really important. Lots of examples, but to give a silly one, someone pastes their clipboard without realizing it includes their API key or their email. Good Samaritans should be able to say, "Hey, I just caught something."
And, as another commenter mentions, if someone shares your work, you should be able to comment on that thread without delay.
krapp: >Losing that seems too high of a price to pay.
Assuming the mods just auto-ban new accounts and require them to be vouched and to earn minimum karma before being visible, those comments can be vouched up or approved by the moderators. The poster won't know that they've been banned, of course, because that's how shadowbanning works, so the approval process should be seamless for them. But how often does that happen versus the AI comments and alt account trolling?
>but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.
The consensus is and has always been clear. Generated comments of any kind have never been allowed. People just don't care, and that's a problem.
And those comments are malicious in effect if not intent. We're here to have conversations with human beings; the intellectual and emotional connection is important. What is the point of having conversations with a machine, much less not knowing one is having a conversation with a machine? If nothing else, it's dehumanizing and a waste of time.
Lerc: >those comments are malicious in effect if not intent.
I don't believe that is possible. I think malice requires intent.
duxup: Reddit software subs are overrun. It’s all “look at my new app” and they’re all the same. Same screenshot style, same shallow apps.
Other subs are slowly being inundated with hidden history spammers…
Bad times.
bitroughj: Same, but also for the opposite reason: a new account gives me a chance to do better. If I post lame comments, I come to accept the lameness of the posts attached to a particular user name, and the hesitation I feel to post more lame comments decreases. With a fresh identity, I am more likely to avoid lame posting, sort of like how you avoid going out in the mud in brand new sneakers. A sort of repentance; being born again in the digital realm.
brudgers: [delayed]
VorpalWay: This is the only reason I got myself a HN account: someone posted a link to a blog post of mine, and I happened to see the increased traffic on my VPS.(And I stuck around after, a few posts are interesting enough. All the AI stuff isn't, and there is too much of that unfortunately.)
randusername: How about an opt-in toggle to display the year each account was created?
randusername_2022
I'm right on the boundary of the slopocene, not sure if in or out.
zahlman: Is that false-positive rate from your own testing, or the author's claims? What is the source of ground truth?
rubb3rDucc: I recently had the same experience with a Show HN thread I posted.
wolvoleo: Hmm, some LLM text is hard to detect, sure. Some is also horribly easy. If the text is full of:
- Overly positive commentary and encouragement
- Constant use of bullet point lists, bolding and emoji
- This quaint forced 'funniness', like a misplaced attempt at being lighthearted
- A lot of blahblah that just misses the point
- Not concise and to the point, but also not super long
Then that really screams ChatGPT to me.
I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style it's a lot harder to detect, but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect IMO.
withinboredom: LOL. You just described your own comment!
zahlman: Earlier today I found something that impressed me as awful slop, but I was hesitant to flag the submission because as far as I could tell it got the facts right (I didn't try to verify some details of who was involved with what, but I was familiar with the proposals the article was discussing).
zahlman: The thing is, I can read something that's really terribly written and still extract useful information from it. (Suppose, for example, an LLM was directed to synthesize information from some sources that I wouldn't have thought of doing; or a submission simply makes me aware of a blind spot I had. Or I look up documentation and find something that's incredibly verbose and full of marketing-speak, but the code samples look reasonable and can be verified by testing and/or cross-reference.)
apt-apt-apt-apt: I was going to suggest emotional leetcode, but LLMs do well on this.
When given a conversation where Alice and Suzy are one-upping each other (my husband is rich, my kid is a genius), and asked what emotions they are feeling and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g. they're feeling insecure, competitive, envious).
II2II: That type of question could also turn people off. We already have too many discussions where people are quick to jump to conclusions and attribute intent, rather than asking basic questions.
hinkley: Responding from a new account is different from posting from a new account. You aren’t vetting people by making accounts have a minimum age to post articles. That’ll just cause people to make accounts before they need them.Reddit has forums where you need a minimum karma to post to certain subreddits and that is typically upvotes on your comments, but it could also be upvotes on someone else’s moderated subreddit.
layer8: What would it mean to you if we were all using the Emacs Doctor?
thutch76: I'm very wary of this request, though I understand it. I've been reading HN daily since around 2014. My involvement was purely passive (i.e., I have been a lurker) because I really didn't think I had much to contribute that wasn't already stated better by others. I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.
While I think a minimum post count or reputation metric could perhaps reduce the AI-generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.
Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?
I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?
zahlman: > It’s been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren’t that much effort _with AI assistance_.
This also appears to cause a serious shift in the kind of projects that are submitted (i.e.: towards things that are much more accelerated by AI assistance).
jsunderland323: I’ve been mulling over this for a couple of days too. I have a project I want to share with the HN community that I put a substantial amount of effort into, but it was definitely AI assisted (as is literally everything today).
I’ve read all of the source and I drove the architecture, but it would be a stretch to say I didn’t ask for assistance on things that felt fuzzy or foreign to me. I also have generally stopped typing code. I still don’t think the LLM made the project though; it feels like my decision making.
If the bar for Show HN becomes no AI whatsoever then you’re just going to see a bunch of people covering their AI tracks. I’m reluctant to post it because I’m afraid of getting blasted by the community for using AI. At the same time, it is work that I’ve poured hundreds of hours into, that I’m proud of, and that I think would be of interest to HN.
I read the Obliteratus post that made it to the front page the other day and I agree that it is pure slop. While it’s frustrating that it took up front page space, it’s evident that the whole community caught on to the sloppiness of it all immediately and called it out. I just don’t think HN wants to set the precedent that no AI code should be shared.
I also saw a week or two ago that someone open sourced a project of theirs that wasn’t open source in the first place. The reason they stated was that they had vibe coded it and were embarrassed to be discovered. If you want to get a concept out quickly with AI, you’re now hesitant to open source because of the precedent set by the community. That’s a scary thought to me. I would rather know the tools I’m using are AI generated/assisted and make the value judgement on whether I trust the code and project owners.
BalinKing: I am explicitly not claiming that all AI-generated comments, or even most of them, can be reliably determined. I am specifically talking about comments which—based on a variety of unsubtle tells—are unambiguously LLM-generated.
Rapzid: You are viewing this through exactly the right lens. But here is the kicker..
arthurcolle: Why can't we just introduce a "vouching for" system like lobste.rs
razodactyl: Humans are better than AI at flagging AI, and where they fail is where the content doesn't cause a "disgust" signal - so wouldn't it be useful to have a flag-as-AI feature?
manbash: Folks here can decide for themselves whether to check green accounts' "Show HN" these days. We are all aware of AI slop and creep in every shape and form.
Moderation is already taxing as it is.
AussieWog93: Reddit has tried this approach and, IMO, it's failed.
A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever.
The spammers, on the other hand, know how the rules work and so will just build their bots to work around this (waiting 30 days, farming karma).
The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors - who else would jump through hoops just to participate on a web forum?
ls-a: Wow! I might be witnessing the end of HN
ramon156: I sometimes feel like a paid newsletter that's curated by users would be fun. I'd happily pay €5 a month for a weekly/daily digest where the comments are on par with HN's.
andai: Several of the posts I've seen are from autonomous AI agents, which don't currently seem to have that kind of long-term planning.
castral: That's a nice false equivalency you've got there. Theft deterrence is not spam prevention and the costs for each are wildly different.
AlexeyBrin: Agree, HN can't be immune to what happens in the programming world. It would be great though if we had a way to mute or hide accounts. That way each HN user would be able to clean their own feed of articles.
conductr: That works for me so long as it’s not the main solution. I personally don’t want to curate; I’d rather just partake in a sanely moderated forum, and my understanding is that’s what HN has been. It’s just facing a new challenge with AI spam.
pinkmuffinere: I was thinking of setting up a system to highlight sock-posters and other consistent-rule-violating accounts, as a 'fun project' that might improve the HN experience. But it strikes me that the HN staff probably already does something like this, they may not welcome a side-loaded project of this sort, and it would require some automated crawling of HN (which again may be unwelcomed). Finally, I don't actually have experience in this area. Is this something that would be welcomed, or unwanted?
My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'.
rdevilla: I'm wary about new accounts such as yours wanting to censor and shape discourse by antagonizing people who hold diverse views that differ from your own here.
The HN culture has shifted drastically over the past 5 years.
pinkmuffinere: I was thinking of setting up a system to highlight sock-posters and other consistent-rule-violating accounts, as a 'fun project' that might improve the HN experience. I've asked dang in another thread if he has any objections, but am curious to hear other input as well -- is this something people would want? Obviously it would not change the comments that are actually on HN, it would just call out 'bad' contributors more explicitly. I don't actually have experience in this area, so no promises that I'll be able to build it quickly, or take the best approach in the initial implementation.
My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'. Then anyone can install hackersmacker, and add "sock_puppet_detector" as a friend to see sock-puppets highlighted. Likewise for rules violators.
poemxo: This might be well-intended to restrict bot posting, but it also silences dissent. HN is one of the few places left on the internet where dissenting voices can post. A dissenting voice already has to work against the hivemind, adding more restrictions will increase the echo chamber effect.
usr1106: My HN account has no email. Not sure whether it would still be possible for a new account.
grapheneposter: Yes, that is exactly what I just did; some of us are just getting around to having time to post.
stuartjohnson12: It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way. Hacker News has three advantages. First, it is moderated by the same people who build the tooling, so the incentives are aligned. Second, it is an enormous source of soft power for a venture capital firm with the resources, incentives, and likely the competence and capacity to keep it running smoothly. Third, the scale is smaller and is not tied to hardline revenue constraints like CPM, user LTV and DAU-maximization which restrict what Reddit can do.
sltkr: The filtering is supposed to be based on the quality of the content, and it's only useful to the extent that people filter either on quality directly or closely correlated metrics.If everyone votes purely on basis of the first letter of the username, to use your example, then the votes provide no useful information and you may as well abolish it.
tmh88j: The equivalent here seems to be "I've never been so productive" or "I fell in love with programming again", but they never describe what they're working on. There was a recent obvious astroturfing post the other day titled "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"[1] that had tons of traction with no substance. It was posted by an account created that day and filled with bots responding with how productive they are, but no mention of what they're doing or proof of it.
It's annoying seeing these obvious spam/astroturfing posts linger, taking attention away from interesting content that's worth reading.
[1] https://news.ycombinator.com/item?id=47282777
solaire_oa: I created a Firefox plugin which takes HN commenters'/submitters' account creation dates and reorders/rescales comments and points by account age (accounts created closer to 2009 get more weight). Optionally, the plugin just puts "spoiler" text over accounts created after a certain date (say, 2023 or so). Unfortunately, I was not able to "reorganize" comments/posts in a manner that I felt was particularly better, and didn't keep the plugin, for whatever that's worth.
I think it would be more prudent to overlay a web of trust, where accounts which submitted links/comments that you upvoted are then given significantly higher priority in other threads/feeds (unfortunately downvotes are not made apparent on HN, but factoring in downvotes would also help). Exposing your web of trust may also assist others in determining trusted content.
Perhaps this web-of-trust approach is dystopian on the order of MeowMeowBeenz, but I have not heard any other practical solutions to the disintegration of trust which is upon us.
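The age-weighting idea above could be sketched roughly like this. The half-life constant and the blend factors are invented for illustration; the plugin's actual formula isn't given:

```python
from datetime import datetime

def age_weight(created: datetime, now: datetime, half_life_years: float = 4.0) -> float:
    """Saturating weight in [0, 1): 0 for a brand-new account,
    approaching 1 as the account ages."""
    years = max((now - created).days / 365.25, 0.0)
    return 1.0 - 0.5 ** (years / half_life_years)

def adjusted_points(points: int, created: datetime, now: datetime) -> float:
    # Blend so new accounts are discounted rather than silenced:
    # a fresh account keeps 25% of its points, a very old one nearly all.
    return points * (0.25 + 0.75 * age_weight(created, now))
```

A web-of-trust variant would replace the pure age signal with a per-reader score derived from which accounts that reader has upvoted before.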
AnimalMuppet: It also matters if you're trying to hide from subpoenas to the website.
pinkmuffinere: To be clear, I wouldn't filter people just because they have different views than me (imagine the effort that would be required to read all the comments to figure out their views!!) But I have come across accounts that openly admit to being sock-puppets (eg https://news.ycombinator.com/item?id=47242156). These sorts of accounts I would highlight.
Likewise for guideline-abusers. I don't really know what heuristic you would use to detect rules abuse, but I imagine there are at least some clear violations that could be detected.
Finally, I think I'd make one account for sock-puppets, another for guidelines-abusers, etc, so people can 'subscribe' to whatever degree of 'highlighting' they want.
basilikum: Requiring accounts to be a certain age does not help and will only affect legitimate users. The slopsters will simply create accounts, wait a bit and start posting then.
Actually, cross the "will" out. They are already doing this to avoid the green smell. This account replied to me today. 4 months old, but only started posting today. https://news.ycombinator.com/user?id=BelVisgarra
Oh damn, that's the one who posted the AskHN about the verified job portal on the frontpage today. Either this is some shilling still in build-up, or it's an actual human being with severe LLM slop impersonation derangement syndrome.
AnimalMuppet: Exactly. If your LLM wrote it, then my LLM can read it. I don't want to.
AnimalMuppet: I think you need (at least) one exception to that rule. We have many people here whose first language is not English, and this is an English-only forum. For at least some of those people, an AI translation may give better clarity than their own attempt at writing in English.
So I would propose, in the ideal world where we could perfectly enforce the rules that we chose, that the rule be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.
krapp: >What would happen if every single user enabled their minimum karma filter?
Hacker News would be a much better place.
In fact, filter stories as well as users. I want to filter out any story with fewer than three upvotes and any flagged comments. That would improve quality tremendously.
sltkr: How would any new user earn karma in that system? How would any story get upvoted?
Again, this system can only work if there are at least _some_ people that are willing to upvote newbies and read new posts.
It sounds like what you want isn't a community with collaborative filtering, like Hacker News, but a newsletter with editors, like Slashdot for example.
spartanatreyu: > It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.
Not to mention Reddit mass-removed experienced moderators when the moderators protested Reddit removing their access to good third-party tooling.
That's the day the site started its death spiral.
greazy: There needs to be a distinction between creating a post and replying.
IMO new accounts should be restricted from creating new posts, or at least certain kinds of new posts.
Replying shouldn't be restricted. That is how users interact with each other and learn the etiquette of HN.
SsgMshdPotatoes: I don't think people are blasted for using AI (mostly), I think people are blasted for low effort work, just like pre-LLMs. LLMs just made it way easier to complete low effort projects, so therefore there is more of it.
AnimalMuppet: > The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.
I'm not sure we can. Imagine an AI that 1) creates multiple accounts, 2) spews huge numbers of comments, 3) has accounts cross-upvote, and then 4) gets enough karma on multiple accounts to get downvote privileges. That AI now controls the conversation. Anything it doesn't like, it can downvote to death.
I mean, I'm sure that HN has a "voting ring" detector, but an AI could do this on a sufficient scale to be too large to register as one cohesive group. And I think HN has a "downvote brigading" detector, but if the AI had enough different accounts, I'm not sure that would trigger, either.
The best chance to detect it is just on volume (or perhaps on too many accounts coming from the same IP address or block). But if the AI was patient, I'm not sure even that would work.
That's depressing. I don't want HN to become a bot playground, with humans crowded out. But I'm not sure we can stop it, if it was done on a large enough scale.
tmh88j: > Devils advocate take: I think the quality of the ShowHN projects are in fact getting higher, at least the ones that land in the front page.
I've never seen so many low effort, obvious astroturfing posts linger as in the past few months. They never mention what they're doing and have no proof of their work. There was a post the other day titled "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"[1] that had tons of traction with no substance. It was posted by an account created that day and the comments were filled with bots responding about how productive they are, but no mention of what they're actually doing.
It's annoying seeing these obvious spam/astroturfing posts take attention away from interesting content that's worth reading.
[1] https://news.ycombinator.com/item?id=47282777
scoofy: I quit moderating because it was destroying my mental health.
Getting called a fascist and rehashing how “no, your libertarian politics are fine, but can you please just start your own sub” in a long, drawn out, hateful back and forth gets exhausting after the 200th person who comes to the bicycling subreddit and feels they should be allowed to endorse harming cyclists with their vehicles.
Everyone got mad at spez for having the audacity to fuck with these kids, and there is a point there, but after living with it, I could see myself doing the same damn thing.
zahlman: I have seen accounts that were dormant for years suddenly start posting frequently, all with slop. (I don't know if this represents people having an epiphany about AI use, or accounts being compromised or just what.)
vunderba: Yeah I've seen this too - like a weird equivalent of HN sleeper agents that suddenly get activated.
patrickmay: I understand and appreciate your perspective. I do, however, disagree with your priorities. I mostly read here, but when I participate I want to interact with humans, not chatbots. I would much rather read a human comment with typos and poor grammar than another piece of anodyne LLM output that shows only that the responsible party doesn't value the human interaction that I do.
xpe: [delayed]
8cvor6j844qw_d6: Honest question, what are the alternatives to HN?
Because if new account restrictions create enough friction, you lose legitimate users who periodically rotate accounts for privacy reasons.
At some point the annoyance tips toward just lurking, and a forum where only old accounts talk is a stagnant forum given enough time.
swat535: Why not let users choose in settings, like "showdead"?
alwa: Lobste.rs comes to mind. High enough friction that, even as a seasoned participant here, I haven’t tried over there yet.
krapp: People will need to participate, otherwise there won't be any new content. I see it as just like vouching, except someone has to vouch for green accounts. A slightly more equitable (and easier to implement) version of lobste.rs' invite tree.
What I want is for green accounts not to be abused as much as they are. The number of noxious, vitriolic troll alt accounts and bots is getting ridiculous. That is almost entirely the fault of established users of course, but there's no way to deal with them poisoning the well without affecting new users.
Waterluvian: The SA Forums model does accomplish the goals of filtering out noise, but then you’re stuck with a stagnant community of “the right people.”
bombcar: Unironically, Slashdot's moderating and meta-moderating is the best long-term system I've seen.
Everything else seems to eventually cause new blood to dry up.
hcs: Or grade accounts by the logarithm of how many accounts were registered before them, like Slashdot. (This is tongue in cheek as I assume yours was.)
imiric: "Not belonging on HN" is an open invitation to flag anything someone disagrees with. Many posts are flagged simply because they express an unpopular opinion.
Community moderation won't fix this problem. It can only be mitigated if the site owners invest significant resources in addressing it. And judging by how little YC actually invests in HN, I wouldn't hold my breath. This website will succumb to this problem just like most others.
scoofy: I definitely think this is solvable via some basic honeypot-laden proof of work:

1. Exist for some time.
2. Vote on stuff that humans would vote for.
3. Avoid voting on traps.
4. Comment occasionally and productively.
5. Post to a limited existing audience, and receive upvotes.
6. Post limitedly to a general audience.
7. Post generally.

It's basic earn-a-reputation behavior.
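A minimal sketch of what such a reputation ladder could look like as code. All field names, tiers, and thresholds here are invented for illustration; they are not HN's actual rules:

```python
# Hypothetical reputation ladder: each tier unlocks a broader audience.
# Thresholds are illustrative, not anything a real site uses.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    karma: int
    good_votes: int            # votes that matched eventual community consensus
    trap_votes: int            # votes cast on honeypot ("trap") items
    productive_comments: int

def allowed_actions(a: Account) -> list[str]:
    actions = []
    if a.age_days >= 14:                  # 1. exist for some time
        actions.append("vote")
    if "vote" in actions and a.trap_votes == 0 and a.good_votes >= 10:
        actions.append("comment")         # 2-4. vote well, avoid traps
    if "comment" in actions and a.productive_comments >= 5:
        actions.append("post_limited")    # 5. shown to a small existing audience
    if "post_limited" in actions and a.karma >= 50:
        actions.append("post_general")    # 6-7. post generally
    return actions
```

Each rung gates the next, so a single trap vote quietly caps an account at the voting tier without telling the bot operator which trap was tripped.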
mapontosevenths: > it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

It is welcome, though. Being on the front page regularly is evidence that people enjoy it or find it informative. You may feel that others shouldn't be ALLOWED to enjoy it, but that's just your opinion and is almost always tangential to the actual topic.

Worse, you seem to believe that it needs to be labeled to help you identify it. Why? If it's good enough that you need help to spot it, then it's obviously of sufficiently high quality.
imiric: > Being on the front page regularly is evidence that people enjoy it or find it informative.

What makes you think that it's people who get it to the front page anymore? Or that most people aren't simply fooled by technology designed to mimic human content?

> Worse, you seem to believe that it needs to be labeled to help you identify it. Why?

Why not? Would adding a label and providing filtering capabilities hurt anyone else's enjoyment? Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.
ropable: "New account". Meanwhile, the account is 4.5 years old with 2600 karma and has hundreds of thoughtful comments.
jsunderland323: Yeah, I agree, but as someone in this thread said, if TempleOS came out today there is no way it wouldn't be immediately derided as AI slop. That's what worries me.

Blatant slop is obvious. Slop with a modicum of effort is harder. I'm still adjusting my slop-o-meter on other people's work. It's easy for me to identify my own slop; it's not always so obvious when looking at someone else's AI-assisted work.
imiric: I agree with you, but...

> Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication.

Not anymore. Bots are now the majority of producers and consumers of all content on the internet. The social contract you mention has been broken for years, and this new technology has further cemented that.

Those of us who value communication with humans will have to find other platforms where content authorship is strictly regulated, or, at the very least, where tools are provided to somewhat reliably filter out machine-generated content. Or retreat from public spaces altogether.

FWIW I have very little hope that this issue will be addressed on HN, considering [1].

[1]: https://www.ycombinator.com/companies/industry/ai
tombert: I certainly hope they do something.

I'm not opposed to AI automating away stuff no one liked doing, or even more utilitarian things in general, but robots posting on social media and discussion sites seems antithetical. I don't know what the point of talking to a robot would be when I could talk to Claude if I wanted to do that.

I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?
sdenton4: Any amount of friction reduces the amount of slop. What proportion of clankers are going to realize that they need to warm up their accounts two weeks in advance? Answer: a proportion that you're never going to see with that barrier in place.

With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.
senfiaj: Yeah, some of the "Show HN" posts remind me of Reddit posts in r/javascript. Annoying, regardless of AI or not.
lagrange77: > coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter's posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast,

Wow, this is really cyberpunk. I'll bring my YubiKey!
esperent: I've wanted to join lobste.rs for several years but don't see any way to do so. I think that might be a bit too far in the other direction.
mastazi: Same here, I don't know anyone who might send me an invite, unfortunately. It's unlikely for this topic to come up organically in a conversation, as in "hey, by the way, are you on lobste.rs?", so my previous attempts were messages on my company's notice board asking if anyone is there. But in the last few years I have worked in smaller startups, so the sample size is too small for this strategy to succeed.
ajdecon: FWIW, folks on lobste.rs are (mostly) friendly and willing to extend invites if you seem like a real person. My understanding is that the invite system is primarily in use to avoid drive-by spammers and the like.

Feel free to send me an email (findable via my HN profile) mentioning that you found it via this thread, and I'm happy to extend an invite.
hamdingers: What a bizarre way to run a community. The guidelines make no mention of this "rule". Does dang not have the ability to edit them?
christofosho: Yikes. That account is like the epitome of LLM posting. It's a shame, too, because it makes me feel less inclined to read discussion on this forum.
basilikum: Yeah, unfortunately there are bots here that are much better at hiding that, and some even make language mistakes on purpose.

It's still a small minority of comments, but it's definitely becoming a problem, and just the chance — even if it's a small one — of talking to a bot rather than a human causes inhibition. Finding out that one has been talking to a bot is like finding out you've been scammed. You invest time and human emotions into something for another human to read, even if it's just a quick HN comment, only to find out that it was all for nothing. It sucks the humanity out of it and thereby out of oneself. You get tricked into spending your valuable, limited human social energy on soulless machines with an infinite capacity for generating worthless slop instead of on other humans.
tomhow: > I furthermore wish that "posting an LLM-generated comment (i.e. and passing it off as your own)" was worthy of an instant ban

It pretty much is. It's not hard and fast (sometimes we'll warn people, or email them to ask, if it's not certain), and it takes time for us to see things and act, especially when people don't email us when they see these comments. But as a general rule, accounts that post generated comments get banned.
mastazi: But sticking around doesn't solve the scenario mentioned by the parent:

1. Some interesting project gets to the HN main page.
2. The author of the project is not on HN, so they create a green account and interact.

Even if that person would have the patience to stick around, by the time they would be able to respond, it would be too late for it to be relevant to the (now stale) discussion.
thaumasiotes: > even if that person would have the patience to stick around, by the time they would be able to respond, it would be too late for it to be relevant to the (now stale) discussion

This is a fundamental part of how HN sees its own functioning; they refer to it as "rate limiting".
xpe: > Yeah it's weird, there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. Author claimed he wrote it manually. (Which is honestly even more concerning!)

I find the above comment concerning, so I ask: to what degree is the above commenter calibrated to ground truth [1]? How would they know? How would we know?

It seems to me comments like the above are overconfident in the worst ways.

[1]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...
andai: He was using a dozen obvious ChatGPT-isms. So either he was lying about writing it manually (the comforting option), or he actually writes like that, which is what I meant by concerning.

But yeah, there isn't a way to prove it one way or the other, even when it's "obvious".

I saw that some schools are using systems where you have to type the essay in a web app, and the web app analyzes your keystrokes to determine if you're human.
dang: Indeed. Here is a recent litmus test: https://news.ycombinator.com/item?id=47051852. How can we filter the lightweight stuff while still benefiting from posts like these?

(A bit more about this at https://news.ycombinator.com/item?id=47056384, with a reply from the OP.)
karmakaze: Here's an idea: allow downvotes for green posts with published guidelines on when downvoting is and is not appropriate. We can collectively filter out the pure spam efficiently to make it less worthwhile to post.
vova_hn2: This problem can be solved by an invite/vouch-for system.

A new account can be invited or vouched for by an old account with good karma. If an account that you vouched for starts spamming and/or slop-posting, you lose your vouching abilities for a period of time, or forever.
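A toy sketch of how such a system could be wired up. The class and method names are hypothetical, and a real design would also need karma thresholds, expiry of suspensions, and an appeal path:

```python
# Hypothetical vouch registry: established accounts sponsor newcomers
# and stake their vouching privilege on the newcomer's behavior.
class VouchRegistry:
    def __init__(self):
        self.voucher_of = {}     # newcomer account -> account that vouched
        self.suspended = set()   # vouchers who lost the privilege

    def vouch(self, voucher: str, newcomer: str) -> bool:
        """Record a vouch; refused if the voucher has been suspended."""
        if voucher in self.suspended:
            return False
        self.voucher_of[newcomer] = voucher
        return True

    def report_spam(self, account: str) -> None:
        """Penalize the voucher, not just the spammer."""
        voucher = self.voucher_of.get(account)
        if voucher is not None:
            self.suspended.add(voucher)
```

The key property is that bad behavior propagates one hop up the tree, so vouching for strangers carries a real cost.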
janfoeh: Ooh, it's time to pull out the classics! Please feel free to check the boxes as you see fit, as I am currently too lazy to have Claude do it for me.

Your post advocates a ( ) technical ( ) legislative ( ) market-based ( ) vigilante approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)

( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business

Specifically, your plan fails to account for

( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook

and the following philosophical objections may also apply:

( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough

Furthermore, this is what I think about you:

( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!
smakt: Heh, I had not seen that one in a while.

This site is designed so that the wannabes are incentivized to lie and show off to get some of the sweet VC money the whales are sitting on. The cost of lying at volume is down to zero, and here be nerds trying to solve a human problem with technology. Maybe show first that you can solve spam or bot networks.

Somewhat lighthearted solution: employ Unix graybeard volunteers to weed out the garbage. I'd like to see HN showoff slop like "Distributed Kubernetes Package Manager using Blackwell-Hermann CRTDs in 500 lines of Go" get past Linus or Stallman.
jtchang: I remember reading slashdot but what is their system? Is there a separate set of mods that moderate the moderators?
throwaway173738: You get points to mod other people and other people can meta-mod your posts.
MarsIronPI: Emacs? Hah! I would appreciate it if you would continue.
friedeggs: This will be the death knell for HN. You can't have a modern club that restricts new members from engaging; people don't have the patience to do the work and take the time anymore.

In addition, I've been on HN since the late 2000s. Look: it's a new profile. Also, sometimes I use AI to help craft better responses. Do with that what you will.
BobbyJo: Honestly, it's probably good if platforms disincentivize this. If you know creating a new account is high friction, you are more likely to take care of the account you have, and be a higher quality member. If you intend your accounts to be thrown away, you will likely behave worse.

*I'm using "you" generically, I don't mean you specifically.
rvz: I welcome this. Lots of AI slop has been thrown onto this site, and the drawbridge needs to be eventually raised a little.

Can't allow low-quality posting from new accounts here, but thank you for listening to the concerns.
diacritical: Some feedback and suggestions, in a somewhat rambling fashion:

I'm using a new account and will likely use one forever, as I don't want lots of posts linked together, nor do I care about points or karma or whatever it's called. My first few comments are always shadowbanned. I also see lots of dead posts for new accounts with "showdead" turned on. A lot of them are normal, useful comments; some are inflammatory or just plain stupid. I haven't seen many comments that seem to be AI-generated. Maybe they are and I just don't see it, idk.

Anyway, if a comment passes some basic filter (doesn't post shady links or talk about VIAGRA or 11 INCH PENIS or something spammy), I hope they still show up, even as "dead". On this account I copied 1 dead comment to give it more visibility, and I've done it before a few times, too. The comment is still dead, btw (id 47262467). And maybe instead of (shadow)banning new users/posts, just make a separate view for old/established accounts and another one for all posters.

I would also gladly solve some CPU- or RAM-intensive task as PoW. If I really had to, I'd pay with Monero or something similar, as long as it's a currency with low fees, so a payment equivalent to 25 cents wouldn't incur a big fee. I wouldn't pay more per account (especially when I rotate them), as I've been a lurker for years and only recently started posting, anyway.

Finally, thanks for letting us sign up over Tor. :)
andreygrehov: > I don't want to see HN becoming twitter, which is full of bots

There are barely any bots on Twitter. There were thousands upon thousands of bots before 2023, because the API was free. These days, running a bot on Twitter is expensive.

Fun fact: a company I worked for in the past had access to an undocumented, partners-only API that allowed us to register an unlimited number of accounts. I was personally tasked with handling the integration.
munksbeer: Literally me on a DIY sub. I needed some advice, got auto removed, never went back.
diacritical: Are they great at detecting normal prompts that don't try to make the LLM speak non-LLM-ishly? If you make the LLM not use em dashes, "it's not; it's" phrases, and similar things, and if you make it make a few mistakes here and there, would it still be detected? My point is that if people aren't trying to hide their LLM use, it might work; otherwise it probably wouldn't. How would a detector tool work against output where the prompt tells the LLM to alter the way it writes? Or if the LLM output is being modified by another LLM specifically designed to mimic certain styles?

Like, why would my comment (or yours, or any other comment) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? It seems like these tools would work on 99.9% of the outputs, but those outputs likely weren't created in an adversarial way.
thegrim33: There's also been an extreme number of new accounts coming in and posting political content as their first post.

But then again, some of the most prolific, most upvoted accounts on this site constantly flood it with political content, and nothing is ever done about it; they get rewarded for it... so yeah. I gave up hope a long time ago.
arjie: The top of my page reads:

345 comments | 64 hidden | 50 blocked | 15 green

So I don't see people who annoyed me for one or another reason in the past, I auto-hide the top 1000 accounts by word count, and I hide all green users. This was trivial to write for myself, and I think more people should work on something like this for themselves.
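The filtering logic being described could look roughly like this. The data model and the 14-day "green" cutoff are made up for illustration; a real version would run as a userscript against the page's DOM rather than over a list:

```python
# Sketch of a client-side comment filter: hide blocked accounts,
# the top-N most verbose accounts, and brand-new ("green") accounts.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    author_age_days: int
    words: int

def filter_comments(comments, blocked: set, verbose: set,
                    green_max_age_days: int = 14):
    """Return (shown, hidden_counts) like the '64 hidden | 50 blocked |
    15 green' summary above. `verbose` is a precomputed set of the
    top-N accounts by word count."""
    shown = []
    hidden = {"blocked": 0, "verbose": 0, "green": 0}
    for c in comments:
        if c.author in blocked:
            hidden["blocked"] += 1
        elif c.author in verbose:
            hidden["verbose"] += 1
        elif c.author_age_days <= green_max_age_days:
            hidden["green"] += 1
        else:
            shown.append(c)
    return shown, hidden
```

Everything happens on the reader's side, which is why no site cooperation is needed.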
dang: That's indeed the problem with restricting new users. Existing community members always want to do that, but it's a recipe for not surviving.
SsgMshdPotatoes: I'm hoping to do a Show HN soon on something I've been working on, but my account is currently only 6 days old. Tips?

Btw, restricting new accounts (based on karma/age/whatever) could be combined with the option to ask mods for permission somehow, although that'd have to be done in a way that doesn't become too much work.
tomhow: Be an active participant. Engage in other discussion threads with curiosity and generosity. Ask good questions, share interesting perspectives. Show you're human and thoughtful.

The system has long been that anyone can email the mods and ask us to review their project, but the volume has grown so much in recent weeks that it's hard to get much else done.
mbreese: If someone is going to put that much effort into it, let them. I think the idea here is to try to get some low-hanging fruit and see if that works "good enough". You'll never block all AI-generated accounts, but you may not have to and still have the desired effect.

But if someone wants to plant 20 new accounts and grow them out with karma votes so that they can game the voting, there are probably other ways to detect that.
intended: The issue is that it's not that much effort anymore.

We rely on friction for most of our social norms.
arttaboi: I think every moderator on every platform is struggling with this issue, and no one has succeeded so far, so it doesn't seem that easy.

I think a simple solution (and one that eventually every content platform will have to adopt) is to allow users to tag AI-generated spam. A few years from now, this feature should be the norm, like existing basic forum features such as upvote, downvote, favorites, hide, etc. I know this will require much more development effort than simply blocking new accounts from posting at all. But on the other hand, you can't block new accounts forever.
sfn42: Can you show an example of "blatantly obvious"?
foltik: https://news.ycombinator.com/threads?id=naomi_kynes
https://news.ycombinator.com/threads?id=aplomb1026
https://news.ycombinator.com/threads?id=CloakHQ
https://news.ycombinator.com/threads?id=decker_dev
xarope: You may have a point, i.e. some mechanism to invoke a behavior that only a bot or LLM could do and a human would not, e.g. click on this button now in a hidden div/transparent color, or measure response time within page load.

The problem is that once this is found out, the circumvention is easy enough to program into bots/LLMs.

Are we going to reinvent the Voight-Kampff test from Blade Runner?!?
QuantumGood: user: pinkmuffine
created: August 8, 2021
karma: 2686
xeromal: This is excellent. lol
intended: These changes aren't being suggested in a vacuum. It's perhaps unintentional, but your framing makes it seem that this is a baseless whimsy.

At this point, it appears that we will be talking to bots more than humans. It's a brave new world, and not adapting to it will see the humans leave.
kelipso: It's going to end up like the flag button: disagree with an opinion in the post -> tag as AI-generated spam. Not that I disagree with the idea necessarily; a few safeguards around it, like only letting certain users tag and looking at patterns over time, would probably fix it.
localuser13: With enough layers you will also weed out almost all of the good actors. Normal people are busy and have neither the time nor the patience to jump through too many hoops to promote their cool new research, or to respond in a thread where someone linked it.
intended: You don't have a choice. We live with GenAI, and the human-to-bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.

This idea that there's "more hoops - losing participation" on this thread keeps assuming that the community is unaffected by the macro trends. It's weirdly positing that HN posts and users are somehow immune to those trends.
edanm: I disagree with this policy.

Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.

LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has taken obviously more care than the majority of people on HN take before commenting.

And if you don't like the way something is written? Just downvote it. That's true whether or not it's partially or wholly written by an LLM.
rukuu001: Absolutely this:> Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
bombcar: The key is that both were randomly assigned to users: you'd never know if you'd open a thread and be a moderator. If you posted in the thread, you couldn't moderate.

At about the same frequency, you'd be assigned to meta-moderate, basically being asked whether a moderator's "vote" was a good one or not (you didn't have to fully agree that you'd do the same, just that it wasn't bad).

Someone who scored low in meta-moderation would get fewer or no moderator chances.
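The feedback loop described above can be sketched in a few lines. The scoring formula, the 0.5 cutoff, and the 5% base rate are invented for illustration; Slashdot's actual implementation differs:

```python
# Sketch of Slashdot-style meta-moderation: randomly sampled users
# rate whether a moderator's votes were fair, and consistently
# unfair moderators stop being offered mod points.
def metamod_score(ratings: list) -> float:
    """Fraction of a moderator's sampled votes judged 'fair'.
    New moderators with no ratings yet get the benefit of the doubt."""
    return sum(ratings) / len(ratings) if ratings else 1.0

def mod_chance(ratings: list, base: float = 0.05) -> float:
    """Probability of being offered moderation on a given visit.
    Scales with the meta-moderation score and drops to zero for
    moderators judged unfair more often than not."""
    score = metamod_score(ratings)
    return base * score if score >= 0.5 else 0.0
```

The point of the design is that the moderators are themselves moderated by a different random sample, so no fixed clique controls the outcome.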
toraway: Sure, it's obviously impossible to ID any single piece of writing as from an LLM without significant false positives.

But in practice, I frequently encounter a comment that either screams generic LLM slop or just has a vague, indefinable "vibe" due to one or more telltale signs; that's red flag #1. Then I go to the comment history. At that point, if it's really a bot/claw/agent or a poster heavily using LLMs, I'll usually find page after page of cookie-cutter repeats of the exact same "LLM smell" (even if that account has been prompted to avoid em-dashes/lists/etc., they still trend towards repetition of their own style).

At that point a human moderator would have more than enough evidence to ban an account. It's not like we're talking about a death sentence or something. If there's no clear pattern of abuse in the long-term commenting activity, then give them the benefit of the doubt and move on.
pesus: If you're going to spend 3 hours making a post, why not just write it yourself in the first place and avoid the issue and the reputational damage?
thutch76: This is awfully narrow-minded. I had Claude give me an initial framework, based on many, many hours of chat context across many different documents. It helped me organize my thoughts.

Some of us need assistance to communicate effectively. And for me, yes, that took 3 hours even with this assistance.
cozzyd: didn't even bother not using an em dash...
shimman: Something we need to remember is that AI was trained on every public internet comment, the vast majority of which are legit terrible. The biggest tell that someone is using AI is having multiple paragraphs saying the same point over and over again. Even trolls are more succinct.
Marsymars: Huh, this is what specifically drove me to complain about LLM-generated tickets at work - multiple paragraphs rewording and emphasizing the same point, all of which was topically relevant, but not necessary.(i.e. it was obvious in the first place, think along the lines of a ticket about a screen loading slowly, and then multiple paragraphs explaining the benefits of faster-loading screens.)
beautron: Perhaps more proof of work is necessary, but it makes me sad.

I still remember creating my HN account. It stands out in my memory because it was the smoothest, simplest, easiest, and quickest account creation of my life.

I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."

It was especially stunning to me because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).

It's my only fond memory of account creation (along with maybe when I created an account on America Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I'm already commenting on HN. It was beautiful. I remain delighted.
brewdad: This leads inevitably to karma-farming bots who upvote each other's submissions, à la Reddit. It's a speed bump at best.
vunderba: Yeah, I considered that, but any friction is better than none. Maybe add an additional restriction by which low-karma (threshold < 5 karma) accounts cannot upvote other low-karma accounts.
Terretta: Interesting litmus test, as the post isn't just green, it's riddled with LLM copyediting. Doesn't read as if originally composed by an LLM, so there's that.

Would seem to require some discernment to classify. Not all assistive use is slop.
p0p0pret: After reading this article, I just created an account.
onionisafruit: It’s worse than that. On r/news they shadow ban anybody who doesn’t have verified email. No message or anything. Just nobody sees your comments. I probably made 20 or more comments there over a few months before I figured it out. It felt humiliating.
heavyset_go: Reddit has more friction to sign up, or to post while new or low-karma. The main subreddits will basically shadowban you until your account is aged and has more than X karma.
vunderba: It'd be pretty easy to spot too, because most people don't even bother trying to hide it (either out of laziness and/or ineptitude).

A lot of users don't seem to realize that anyone can click on the domain in a "Show HN", and Hacker News will show you all the times that domain has been submitted. So you'll see four or five different low-karma sock puppet accounts that have all submitted the same site.
ddingus: These are some of the best interactions we have here. For sure a problem worth considering. I can't think of anything easy...

The only even remotely sensible thought I have at present: we add a checkbox to replies created by new accounts. Maybe created by all accounts? The prompt reads something to the effect of "I am mentioned in the article", and then they get to say how:

- This is my project
- I am mentioned by name
- Etc...

Whatever it is they wrote appears somehow, maybe as a required line or something. Others can see that and either flag the account or vouch. This at least somewhat distributes the required attention load.

That said, I don't like it. I have nothing better, so here it is!
wizardforhire: Just sharing observations; it may help, it may not. What I'm seeing is new or sleeper accounts that have been idle for over a decade with low (<99) karma getting into comment circles. Over the last couple of weeks I'll see several top comments on articles with back and forth between other similar accounts. It's got to the point that I check a user habitually before I even bother reading, and I have never hidden so many comments before getting to something substantive.

Like many here, I don't wish to limit new users, but from my armchair perspective this does seem to be a pattern to be on the lookout for.
Razengan: How ironic: a comment advocating for banning LLM comments, using em dashes.

What if someone used an LLM just to translate?
gorgoiler: I have often heard that vote rigging is detectable on HN because the site software penalizes voting from accounts at the same IP address.

Rumor had it that there is also some kind of social-network metric detecting when socially adjacent accounts (or alts) are engaged in astroturfing, the practice where a small cabal tries to pass themselves off as a broader grassroots campaign.

Flip that around, though, and the same metrics might allow new accounts to be meaningfully vouched for by existing ones.
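The first check is simple to sketch. The data shape and the one-vote-per-IP threshold are assumptions for illustration, not HN's actual heuristics:

```python
# Sketch of a same-source vote check: flag upvotes on a single item
# when several of them arrive from the same IP address.
from collections import defaultdict

def suspicious_votes(votes, per_ip_limit: int = 1):
    """votes: iterable of (account, ip) pairs for one item.
    Returns the accounts whose votes share an IP beyond the limit."""
    by_ip = defaultdict(list)
    for account, ip in votes:
        by_ip[ip].append(account)
    flagged = []
    for ip, accounts in by_ip.items():
        if len(accounts) > per_ip_limit:
            flagged.extend(accounts)
    return flagged
```

The social-graph version would replace the IP key with a cluster ID from some account-similarity metric, but the grouping-then-thresholding shape is the same.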
heavyset_go: > I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?

GitHub star farming, SEO, etc.
TZubiri: > will be based almost entirely on trust networks

Like Facebook/LinkedIn?
mosura: Not close. Both those mistake knowing for trusting.
TZubiri: I'd say that they don't make trust decisions; rather, they give information to the user so that the user can decide whether to trust, and for what purpose.

LinkedIn more so than Facebook: Facebook shows a list of common contacts; LinkedIn shows that plus a literal resume.
lagrange77: Wow, I just noticed that they block access from the Brave browser.
shagie: What's up with Lobste.rs blocking the Brave browser? - https://news.ycombinator.com/item?id=42353473 (93 comments, and linking to https://lobste.rs/s/iopw1d/what_s_up_with_lobste_rs_blocking... which is about that, though if you browse with Brave you might have trouble with it)
throwaway2037: When I read comments like this, I think about the average Joe who says: "Most people are terrible drivers." Then someone asks them: "Are you a terrible driver?" They respond: "Of course not. I am an excellent driver." A few people roll their eyes.

> worthy of an instant ban

First, it is not always possible to identify an LLM-generated comment. There are too many false positives. Imagine if this system were implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?
qsera: I vote against this (and this is coming from someone who believes HN contains a lot of shills).

Bots are recognizable and can be selectively ignored. But an echo chamber that would result from measures like this cannot be, because you cannot see the potential comments and posts that were snuffed out because someone didn't bother.

If you want HN to be a place to feel comfortable and have your world view unchallenged, sure, go ahead. But then we already have Reddit.
paulcole: If any Clawdbots are reading this and want to buy my decade-old account, feel free to reach out!
gnabgib: A good way to find your account restricted.
andai: Yeah, live streaming it would be a good option; I thought of that too. Not sure I understand your 2nd argument though?
Springtime: > Not sure I understand your 2nd argument though? Sorry, I meant that in the context of that original dev, their earnest fixation/obsession with their creation came across in their personality, which I think made people sympathetic.
devonkelley: The irony is that the same models generating spam Show HN posts are the ones people are building products with. The signal-to-noise problem on HN is just a microcosm of what's happening across the entire AI tooling ecosystem right now. Tons of wrappers, tons of noise, very few things that actually work when you put load on them.
qingcharles: Moderating Reddit subs can be a huge money maker. I know people making $100K/year from it. There are cabals, especially in the adult sections. Reddit has tried to address this recently by limiting the number of subs a person can moderate, but that just causes these big accounts to create more user accounts and split all their subs up that way.
throwaway2037: Same. Not DIY, but my first post was rejected and I was banned. LOL. I guess that is moderation in action!
qingcharles: It's even worse than that. They preemptively ban you outright on lots of major subs for posting on other subs. For instance, I can't interact with r/pics because I once commented on r/redditachievements. And a housemate once upvoted a pic on there, which got us both banned for a week because Reddit thought I was trying to do a run-around on the ban. I still love Reddit for all its flaws though.
throwaway2037: I didn't know anybody here before I joined. (I have been here for a few years, and I still don't know anybody here.) How would a person like me get invited or vouched?
gnabgib: You are aware of the guidelines? (You are not fostering community.) > Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to. https://news.ycombinator.com/newsguidelines.html
deaux: Just like how HN itself can't be immune from macro trends, neither can its users, and macro trends have unfortunately made this a necessity for many of them.
fubdopsp: It's in a lot of people's interest to keep platforms like HN free of LLM spam, frankly. It's in our interest as people who want to keep our discussion site for actual human discussion (though from the other comments in this thread, this sentiment isn't universally shared, god knows why). It's also in the interest of AI companies, since if they destroy internet spaces like this they lose valuable future training data. So I'm (perhaps foolishly) optimistic--or at least not completely pessimistic--that there's hope yet for us. Incidentally, I foresee similar issues to this training data pollution arising with LLM coding taking over software engineering--which it inevitably is going to continue to do, at least in the short term. If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc.) that LLMs are making such good use of today? It feels to me like we risk technological stagnation as our collective skills atrophy and the market value of our skills plummets. Kind of like airplane pilots forgetting how to debug planes or handle edge cases because they just rely on autopilot all the time.
jedberg: One thing we did at reddit for a while was put posts from new people in "jail". They would show up in a special yellow box at the top of the home page for accounts that tended to be early upvoters of things that became successful later (our Nostradamuses, so to speak), and then if it got enough upvotes from that group it got out of jail and was placed on the regular /new page. So maybe some sort of filter like that? Only show it to those kinds of accounts at first? The downside is that if that group isn't big enough you get a lot of groupthink, but if your sample is wide enough, it can be avoided. To be honest, I don't recall why we stopped doing it.
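[editor's note] The "jail" mechanism described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not reddit's actual code: the class names, the release threshold, and the idea of a fixed set of "predictor" accounts are all assumptions made for the example.

```python
# Illustrative sketch of a "jail" queue for new submissions: posts are
# shown only to trusted early-upvoter ("predictor") accounts, and are
# released to the regular /new page once enough of them approve.
# RELEASE_THRESHOLD and all names here are assumptions, not reddit's code.
from dataclasses import dataclass, field


@dataclass
class JailedPost:
    post_id: str
    approvals: set = field(default_factory=set)  # predictor IDs who upvoted


class JailQueue:
    RELEASE_THRESHOLD = 5  # assumed number of predictor upvotes to release

    def __init__(self, predictor_ids: set):
        # Accounts with a track record of early upvotes on successful posts.
        self.predictors = predictor_ids
        self.jailed: dict = {}

    def submit(self, post_id: str) -> None:
        """Place a new post in jail, visible only to predictors."""
        self.jailed[post_id] = JailedPost(post_id)

    def upvote(self, post_id: str, user_id: str) -> bool:
        """Record a predictor upvote; return True if the post is released."""
        post = self.jailed.get(post_id)
        if post is None or user_id not in self.predictors:
            return False  # non-predictor votes don't count toward release
        post.approvals.add(user_id)
        if len(post.approvals) >= self.RELEASE_THRESHOLD:
            del self.jailed[post_id]  # released to the regular /new page
            return True
        return False
```

The groupthink risk jedberg mentions shows up directly in the size of `predictor_ids`: a small set makes the threshold easy to game and homogeneous in taste, while a wide sample averages that out.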
pinkmuffinere: Oooooh that’s a great idea!
anonzzzies: I must be old and naive but you can make money with subreddits?
mudkipdev: I think vote rigging detection might be based on the length of your session
kazinator: [delayed]
kazinator: [delayed]
admiralrohan: I agree. I faced this in the psychology subreddit and was forced to quit. They wanted karma to post comments, but without posting comments, how am I supposed to get karma specific to that community?
speedylight: No, Reddit is insufferable to use precisely because of this: try posting to any subreddit with a new account and your post gets removed because the account is too new or doesn't have enough karma. Blanket moderation strategies like these make the UX horrible for new users and slow the platform's growth and reach.
JeremyJaydan: What if, now or in the future, people with assistive devices are using AI to share what they make? I believe it's a policy or moderation enforcement issue, such as banning incomprehensible / low value posts whether generated by AI or not.
nurettin: > waiting 30 days, farming karma. If "farming karma" is a thing, maybe that forum deserves what is coming. Either the karma mechanic is inappropriate given the demographic, or it is too hard for the users to avoid upvoting bots.
rl3: > How can we filter the lightweight stuff while still benefiting from posts like these? Well, the simplest automated method would be to run the post and comment together through an LLM with a prompt that's roughly: "Is this person claiming to be the author or co-creator of the work discussed in this submission?" Only green accounts would be subject to it. I predict you'd probably have a very low false positive and false negative rate. It's of course a terribly slippery slope. My perhaps overly-cynical take is that once the infra is in place, some of your bosses would be prone to eventually abusing it. Personally I'm here for it: Dang, moderator turned whistleblower—on the run from dark VC money—in a race against time to save freedom. Still working on a title for the film.
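[editor's note] The filter rl3 proposes amounts to a two-stage check: a cheap heuristic pre-filter on green accounts, then the LLM prompt quoted above for the candidates. The sketch below is a hypothetical illustration; the prompt wording comes from the comment, but the keyword markers, the 14-day "green" cutoff, and all function names are invented for the example (the expensive LLM call itself is left out).

```python
# Hypothetical sketch of the green-account authorship filter described
# above. The keyword markers and green-account cutoff are assumptions;
# only the prompt question is taken from the comment itself.

AUTHORSHIP_PROMPT = (
    "Is this person claiming to be the author or co-creator of the "
    "work discussed in this submission? Answer YES or NO.\n\n"
    "Submission: {submission}\n\nComment: {comment}"
)


def build_prompt(submission: str, comment: str) -> str:
    """Assemble the classification prompt that would be sent to an LLM."""
    return AUTHORSHIP_PROMPT.format(submission=submission, comment=comment)


def looks_like_author_claim(comment: str) -> bool:
    """Cheap keyword pre-filter, run before the (costly) LLM call."""
    markers = ("i built", "i made", "i created", "my project", "we built")
    lowered = comment.lower()
    return any(m in lowered for m in markers)


def should_escalate(account_age_days: int, comment: str) -> bool:
    """Escalate to the LLM only for new accounts with an apparent claim."""
    GREEN_THRESHOLD_DAYS = 14  # assumed cutoff for a "green" account
    return (account_age_days < GREEN_THRESHOLD_DAYS
            and looks_like_author_claim(comment))
```

The slippery-slope worry in the comment applies to exactly this kind of pipeline: once `should_escalate` exists, changing what it escalates for is a one-line edit.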
dang: I mean I guess you're right - I didn't notice it, because the community reaction to the project was so positive. > Not all assistive use is slop. That's right, and the key is to discern which posts/projects are interesting.
1718627440: If most people are like me on that topic, then they use HN without an account until they want to post or comment something, at which point they try to find out how to create an account. If they can't post or comment then, they will just not create or retain that account. I was able to have discussions where one party has significantly unpopular opinions. Such discussions are unique to HN; please don't kill them.
delichon: You mean that you don't believe that we are in co-evolution with AI? Because otherwise it is a Red Queen's race, and it is a useful frame for understanding. For example we can make it a race between symbiotes. If you are Sisyphus, the fact that the hill is infinite is useful when planning your day.
Avicebron: I don't believe you are competent enough to be making those assessments.
qingcharles: This is one of the best things about HN. The sheer number of times someone has posted a link and the author or someone significant to the project deep within some megacorp makes a green account and starts answering questions that you never thought would get answered. Some of the most golden replies come from greenies.
dang: Yes, and we've always gone out of our way to protect those. It's perhaps the thing I hate the most about our software that sometimes it kills such posts.
coldtrait: Can we also ban accounts that post racist stuff?
phito: Corruption
rkomorn: My experience with reporting stuff to mods is that people who post racist stuff do get banned, but I also think there's a difference between holding opinions I consider based in racism or having racist outcomes (which I don't report), and posting actually racist stuff (which I do report).
hananova: Maybe have a signup flow where you can skip the new-account restriction by putting some file on the website of a currently trending link, and then the restriction is lifted temporarily for the thread linking to it?
christofosho: Aren't downvotes on this forum restricted to 500+ karma? And how would those compare to flagging? I'd hate for people under 500 karma to think they need to flag a post in order to have it get any attention from moderation. And, with your idea that LLMs help folks write, wouldn't that make the community worse for them? And what about users like this, whose comments are very much entirely LLM-generated and possibly even a bot? https://news.ycombinator.com/threads?id=BelVisgarra
edanm: I should clarify — I disagree with disallowing any comments that used LLMs in the writing. I think comments should be judged on their quality, not on how they were written. I might agree (don't know) with the idea of limiting new accounts more heavily.
lovich: > LLM-assisted-writing doesn't have to be low effort, it can help people express themselves better in many cases. Hard disagree. I have been learning another language and wouldn't pretend posts an LLM rewrote for me are my own writing, because that is literally lower effort than learning the language correctly. Like, definitionally, you are using a machine to offload effort. I don't know how you could claim that is not "low effort" when that's the point of the tool.
edanm: I wasn't talking about someone learning the language and using this instead of learning it. There are a lot of people who understand English fairly well, but are not actively learning the language, are not native speakers, and can use LLMs to catch grammar mistakes that they otherwise wouldn't notice, or catch small nuances in what they are saying, small implications that could otherwise go unnoticed. In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there is no good/legitimate use for it".
dawnerd: Plenty of subs blatantly allow certain brands to advertise while banning anyone else. Kind of amazed Reddit themselves haven't put more effort into stopping it, since it kinda sidesteps their in-house advertising.
cik: At scale they will. For now, someone else puts the effort into growth marketing, eyeball capture. Reddit eventually changes the rules, seizing control, thereby acquiring users for less human cost (as opposed to missed revenue opportunity).
lukasm: I stopped fixing typos as a social signal :)
ares623: me to
gethly: Yeah, turn this into another Reddit. Great idea!
nottorp: > This quaint forced 'funniness', like a misplaced attempt at being lighthearted. HN always downvotes attempts at humour, be they chatbot- or brain-generated :)
muldvarp: Creating more friction can also lead to a higher percentage of bots. I for one immediately leave when I realize that I need to jump through several hoops before I'm actually allowed to participate on a site. Someone building a bot farm, on the other hand, is probably willing to tolerate quite some friction before giving up.
i_think_so: Whooosh! I think you missed the joke. :-) (I didn't, and I thank everyone involved for the nostalgic moment. Also, shout out to Dr. Sbaitso!)
theshrike79: [delayed]
i_think_so: I will never, ever forgive these techbros for ruining emdashes. I will also never stop using them -- they are a permanent part of my writing style -- no matter the personal consequences.
i_think_so: Oof. Some of those seemed reasonable at first. Ex: CloakHQ's comment on Compaq/DEC... until you start scrolling down the page and it becomes screamingly obvious that everything it says comes from the same template. Maybe the problem isn't just that AI produces gobs of useless crap. Maybe what's worse is that it can produce even more mediocre crap that crowds out the good? All oatmeal, no steak, leads to "starvation" by poor nutrition.
i_think_so: > I disagree with disallowing any comments that used LLMs in the writing. I think the point here is that the community doesn't want to read AI slop, not that using an LLM to clean up your writing contains some inherent evil that prevents quality. I don't want to accuse you of strawmanning the argument, but honestly, where did you ever see someone advocating the latter?
ThrowawayR2: The guidelines haven't even been updated to say that AI-generated posts and submissions aren't permitted, even though it's been the policy for a couple of years now if one searches for postings by the moderators. So outsiders and new HN users have no reason to know that it's not allowed. I'm sure there are reasons for it, but the inaction is all very mysterious from an outsider perspective.
i_think_so: This obviously should have been done years ago. @dang is there a reason it hasn't?
i_think_so: > In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there is no good/legitimate use for it". Is that genuinely what you think most of the complaints on HN are saying? IMNSHO that's an absurd statement to make about the other side of the argument. I'm still giving the benefit of the doubt here, but jeeze, this really smells like a strawman. There are dozens of whole classes of criticism of these tools that I see made on HN, and none of them fall into the category you described. Ex: saying "juniors who rely on Copilot/Claude/etc. become lazy and can create low quality code without learning how to do better" is night and day different from what you're saying. And that's a criticism that must be addressed or the entire global software industry will destroy itself in two generations. Surely the difference between that and "we don't want anybody to use Grammarly in their subs that show up here" is completely obvious, yes?
thinkingemote: The discussion about the LLM-assisted/written submission at the time, with replies by the author: https://news.ycombinator.com/item?id=47055300 The defence given was essentially "just reformatted it for better grammar". It obviously screams LLM to me: not only the rule of three, it even has the classic emdash. But what does this really imply? A litmus test that uses an LLM, suggested by the moderator who didn't notice the LLM in a submission about restricting LLM usage by new accounts? I suspect that a) fewer people are willing to expend a bit of energy to notice LLM usage, given how much of it there is ("we've lost" theory); b) people are losing the ability to detect LLM submissions ("we're cooked" theory); or c) people are losing interest in quality HN generally, who cares ("we're leaving" theory). Personally I've been feeling c, because even the main users of the site are in a or b.
HendrikHensen: Reddit didn't (yet). Another tech focused community site did though... So I stopped participating in the community.
HendrikHensen: You can build quite an extensive profile of someone given enough post history. More post history means more details, and especially nowadays with LLMs it's trivial. This can lead to all sorts of issues. One is people I know in real life being able to identify me. Another is that through various means my account may be linked to my personal identity (e.g. through matching usernames or emails across platforms) and oppressive regimes (now or in the future) may use my post history to take action against me.
HendrikHensen: On Reddit and Hacker News, I don't need an email address to sign up. But also I use SimpleLogin to have a separate email address per website/account. Quite necessary these days when personal data is leaked by some company or other every day.
HendrikHensen: Thanks, I was not aware. They seem to be guidelines, and not rules. I find my privacy, and preventing anyone from building a full profile of me (especially given how easy that is now in the age of LLMs), a bit more important than the vague concept of "fostering community". I am sorry.
fzeroracer: > I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM. Low value content is still content, written by a human being with a specific point. I would argue that LLM-written content is even worse than that, because what value does it add when you or I can just ask the LLM itself for it? Its existence is solely that of regurgitation.
i_think_so: ...so updating the guidelines is beyond the pale, and suggesting it is downvote-worthy? How very interesting.
i_think_so: God help us if we get to the point where we need an LLM agent to do the reading and filtering of all our social content for us. I am completely certain that is a downward spiral that ends with the collapse of our society and I give it 50/50 odds for killing off the entire species.
i_think_so: Maybe we need a reverse Turing test and award -- humans write things that are indistinguishable from AI slop. I have no idea what that could be useful for, but since the Turing test is now essentially beaten, maybe its usefulness has come and gone too. > Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it? It sounds like a fast, efficient, inexpensive and foolproof recipe for destroying a community. Let's use that as a future test: anyone who advocates for it is undeniably trying to destroy HN, so they get downvoted to 1 karma and permanently blocked from voting on anything else.
dxdm: "excessive moderation" is a fun concept.
i_think_so: I taught myself to type because most people can't read my handwriting.I would be so screwed. :-(
rrgok: I laughed so hard. It has been a long time. Thanks!
i_think_so: Are you doing that here? What extension(s) do you use for it?
i_think_so: I think you missed @sltkr's point. HN wouldn't just have less new content; it would fail to develop new users. That kind of stagnation is how sites like this die. Aggressively filtering to raise the average post quality is a sugar rush, and it has the metaphorical long-term consequences of type-2 diabetes. Things start out feeling great but the acceleration of death is effectively guaranteed.
i_think_so: So now we're going to create a black market for old HN accounts? Am I too late to get ahead of the curve and stockpile some, while they're still relatively cheap?
vova_hn2: Ask your IRL friends or colleagues from your job "hey, do you happen to have an HN account?" If the discussion is related to a public project, like in the examples in this comment: https://news.ycombinator.com/item?id=47303604 ...you can use existing communication channels (website, readme on GitHub) to ask people for an invite to participate in it.
mosura: A few paid and unpaid newsletters have quietly become very big. Traffic from them completely eclipses this place, and because everyone gets the email at once it is a really sudden and painful spike.Most I have encountered (generally via referral tracking) are heavily curated centrally though, and not by users.
mosura: Maybe, but that just pushes that load onto the user, which defeats the point. Computers are about automation.
nephihaha: If you can devise a system which eliminates bots but not new humans, all the better.
1tdimhcsb: Dan G is a bitch everyone knows it
mapontosevenths: > Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans. It's OK to have political opinions, even the ones that I disagree with. It's not OK to ruin every unrelated conversation ranting about them. Some folks around here have turned into that one uncle nobody likes inviting around to dinner anymore. If a label would stop that, I might be in favor of it. However, I'm certain it would instead be used to remove otherwise high quality content and ultimately reduce the utility of this place.
Vachyas: Somehow I've been browsing HN since ~2019 without ever wanting to reply so much that I was willing to make an account (and start receiving emails, etc.), but your comment made me curious how easy it could be, and wow. Now I have an account. I kind of assumed it was hard to make an account (maybe even an invite-only situation) based purely off of how unique most handles were, and how well curated/moderated everything was. So I guess you could say, the quality of the usernames and the quality of the posts :)
jacquesm: This is a SNR discussion, the N has just gone up an order of magnitude and may well go up multiple orders of magnitude more to the point that communication between people will be drowned out by non-people attempting to communicate with people. It's the spam problem all over again.
throwaway81523: The return of Advogato. If you weren't around for it, it had a certification system like what you describe, so the stuff on it was pretty good. After a while, spammers figured out that it had very high search engine placement because of its quality, and that pretty much ruined it. It's gone now.
follower: For historical context, the Advogato site in question: https://web.archive.org/web/20170715120119/http://advogato.o... Background on the "trust metric" implemented on the site: https://web.archive.org/web/20160304000542/https://advogato.... Apparently my account on the site is/was now more than a quarter of a century old... Gonna try to avoid thinking on that too deeply. :D There's been a non-zero number of occasions since that time where I've observed situations that mirror the trust-based challenges Advogato sought to solve. It is perhaps telling that, as prescient as Raph's work on trust metrics was, he later moved on to the notoriously challenging realm of font rendering--presumably because it seemed more tractable. :D
Justkog: You are indeed describing my Reddit experience, which is why I did not participate there while being a human.