Discussion
Hacker News Guidelines
snoren: No way to verify. Relying on the humans here to self-censor has never worked in the history of man. But the idea in itself is good. HN is for human-to-human conversation.
floxy: Just because people get murdered doesn't mean that laws against murder are useless. Although I don't have any evidence of that.
koolala: Murder can be verified and caught in many ways. It is more like the 1969 Bathroom Singing Prohibition Act.
munk-a: AI generated comments can also be verified and caught in many ways. I'd guess that it's statistically more likely for a murder to be resolved than a random AI comment to be detected but I'm not actually sure. There are a lot of sloppy murderers (since it's rare for an individual to have _practice_ at it) - but there are also a lot of sloppy LLMs.
bowmessage: You are absolutely right! Would you like to explore some more examples of human to human conversation throughout history?
saltyoldman: > You are absolutely right!

None of my agents say that anymore.
iammjm: I believe that proving who is and who isn't really human on the Internet will be a really important problem in the coming years, especially doing so without sacrificing people's right to privacy and anonymity in the process.
agile-gift0262: just scan your eye in this orb to prove you are human. I'll give you some sh*tcoins in exchange
fcpguru: i agree but how is this ever going to be enforced or verified? https://proofofhumanity.id/ ?
pavel_lishin: Plenty of people preface their comments with, "I asked ChatGPT, and it said..."
koolala: Would a rule against putting a preface just make people not say it openly so they don't get banned? Prefaces are better than no preface.
throwaway94275: You're absolutely right! Forums for humans should contain content generated by humans. It can be challenging to detect the difference between human and machine generated content. Here are some ways to ensure a post is written by a human:

- Require comments to be submitted in person.
- Require comments to be submitted via video chat--videos often have subtle clues that reveal machine generation.
- Disable comments. Machine generated text can't be inserted where it is not possible. Most forum administrators want comments and that special "human touch"--these can be machine generated based on past comments, which often fall into predictable patterns.

Would you like more information on "keeping things human?"
jsheard: Sam Altman would love to sell you a solution to the problem he accelerated. https://en.wikipedia.org/wiki/World_(blockchain)
sebastiennight: > especially without sacrificing people's right to privacy and anonymity in the process

I'm afraid the ship has sailed on this one. What other solutions have you heard of apart from the dystopian eyeball-scanning, ID-uploading, biometrics-profiling obvious ones?

(knowing that of course, neither of those actually solve the problem)
tromp: Also please don't post accusations of comments reeking of AI.
ashdksnndck: I don’t respond to specific comments with accusations, because I can’t prove it and it would suck to be falsely accused. But I find it really depressing to watch deep comment threads with someone debating with an AI. The human is putting so much effort in, and the AI is responding with all these well-written but often flawed arguments. I wish I could do something to save that person from that interaction.
OtomotO: I just told my dog he isn't allowed to post here anymore... He said he will take his business elsewhere then!
PaulHoule: Is this an application of crypto for people who hate crypto?
audiala: Is it the technology you hate or some of its applications (or both)?
PaulHoule: I didn't say I hate it. But I do think that there's a lot of overlap between people who feel overwhelmed with A.I. slop and people who felt overwhelmed with crypto-FOMO back when there was such a thing.

My analysis could lead to "it's doomed" or "it's a gateway drug that expands the crypto market".
2001zhaozhao: Certainly! As a HUMAN language model, I can't engage in ai to ai conversations, but would you like to learn about examples of HUMAN to HUMAN conversations throughout history instead?
Kim_Bruning: I would amend it to:

"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."

This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.

(That said, to be frank, some of the newer, better-behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )
RealityVoid: I think using AI for a bit more potent spellchecking or style hints is... fine, honestly. I don't usually do it, as you can tell from all the silly spelling mistakes I make. But a bit more polishing for your posts is a good thing, not a bad one, as long as it doesn't hide your voice.
the_af: When do you need to spellcheck or polish an HN comment?

I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.
Timothycquinn: AI Server Error
hooverd: You're absolutely right!
smy20011: Agree. AI generated articles & comments provide little to no value beyond the original prompt. Please just post the original prompt instead.
chrystianpl: English is my second language and I have dyslexia, so I was just wondering: what do you mean by "AI-edited comments"? Can't I ask an LLM to check my grammar and fix it? On another account I was down-voted because of my styling/grammar, not because of the content.
capricio_one: Real talk: who is this guideline going to stop? People are already doing this and they will continue. Even if you find them, they’ll just make more accounts and continue.
nwhnwh: So? Say it. Go ahead a few steps further.
safog: I hope I'm wrong, but I don't think a privacy-friendly alternative is going to exist. It's going to go the way of "show me your driver's license to use my site".
Balinares: I swear to god they trained Claude to say "good point" or "good question" instead to avoid the stigma. It says that all the time now.
HanClinto: I appreciate this being added to the guidelines.

That said, I also wouldn't hate seeing an official playground where it is cordoned off and appreciated for bots to operate. I.e., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Y Combinator take a stab at it.

Maybe that's too experimental, and it would be better left to third parties to implement (I'm guessing there's already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in perpetuity.
munk-a: You could mirror article postings and upvotes to another site and let AI play around there - if it's interesting to people maybe it will gain a following. I don't see any reason it'd need to happen in this specific forum, as that'd likely just cause confusion.

For the time being, at least, HN is a single uncategorized (mostly, let's ignore search) message board - splitting it into two would cause confusion and drastically degrade the UX.
schappim: Telling people not to post "AI-edited comments" lacks empathy and even a modicum of forethought.

I have a kid with severe written language issues, and the use of STT with an LLM-powered edit has unlocked a whole world that was previously inaccessible.

What is amazing is that it would have remained inaccessible just a couple of years ago!
DennisP: What is STT in this context?
schappim: Speech to text
Asmod4n: You could sell physical ID tags at any store where you have to show your ID, and you'd get one for your age group. That kills two birds with one stone: you can then show everywhere online that you are human and how old you are, without the services needing any personal information about you, and the sellers don't know what you use that ID tag for.
MattRix: what’s to keep people from selling or giving away those id tags? seems like a nefarious entity could buy them in bulk
vova_hn2: It's already sorta happening with SIM-cards/phone numbers that are sometimes used for similar purposes.
lich_king: People who are posting AI comments or setting up AI bots are... people. They can show their ID, but if a website owner doesn't have a way to ban that specific human ID, it's sort of meaningless.In fact, even if you can ban the human, I'm not sure it solves anything. There are billions of people out there and there's money to be made by monetizing attention. AI-generated content is a way to do that, so there's plenty of takers who don't mind the risk of getting booted from some platform once in a blue moon.
jsnell: A practical question: what should readers do when they suspect a comment (or story) is AI-generated? Is that an appropriate reason for flagging? Email the mods? Do nothing?I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)
martey: I think this new guideline is nothing like the Bathroom Singing Prohibition Act, because that law doesn't seem to really exist: https://www.grunge.com/1710070/is-pennsylvania-strange-batht...
add-sub-mul-div: Is there a site that deserves to be destroyed by slop more than this one? It's hypocritical, but telling, for the places most actively trying to profit from it to ban it themselves.
MattRix: It’s not hypocritical at all. You can be a fan of a technology and still acknowledge its downsides. Every technology has places it is useful and places it is harmful.
add-sub-mul-div: But it's trivially evident that the harmful use cases are dominating. Handwaving that away for profit is shitty.
aethrum: The problem is it always hides your voice. Always
sdenton4: AI doesn't just hide your voice -- it improves it!
ex-aws-dude: Come on dude, it's obviously just to prevent spam and not for your super specific case. These are just guidelines.
djohnston: nuance and basic common sense left the chat about ... 8 years ago.
lisp2240: I want a social network that goes beyond banning bots and also bans the half of the population that doesn’t have an inner monologue.
Asmod4n: law enforcement.
panarky: Just like the rules say it's uninteresting and off-topic to complain that HN is turning into Reddit, it's equally uninteresting and off-topic to accuse posters of AI crimes.And everyone's personal AI detector has a ridiculously high false-positive rate.
stetrain: I'll sell you my proof-of-human-age badge for $1,000.
schappim: Title literally says “AI-edited comments”.
desireco42: There were a few commenters that were very suspect :). It is an issue for sure.
Kim_Bruning: Here is where I'd like to push back just a little.

Not all AI prompting is expanding the prompt. What if the original prompt is 1000 words, includes 10 scientific articles by reference (boosting it up to 10000), and the AI helps to boil it down to 100 words instead?

I'd argue that this is probably a rather more responsible usage of the tools. And rather more pleasant to read besides.

Whether it meets the criterion is another thing. But at least don't assume that the original prompt is always better or shorter!
wildzzz: Use your brain and summarize the article yourself if it's of such great importance. Why should I care to read it if you can't be bothered to actually write it?
tejohnso: I don't get it. We use tools to assist in written communication all the time. If someone wants to ask an LLM to check their grammar or edit for clarity or change the tone, it's still a conversation between humans. Everyone now has access to a real time editor or scribe who can craft your message the way you want it to sound before you send it off.
foxfired: One thing that would be incredibly useful is to limit comments from brand new accounts: a combination of vouching, limiting post velocity (e.g. a 5-per-day limit), clear rules for new accounts, etc.

I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time to make a quip.
Kim_Bruning: I assumed that was how new people were encouraged to join in the first place!

https://xkcd.com/386/ "Duty Calls"
Someone1234: "AI-edited comments" is a very interesting one. Where is the line between a spelling/grammar/tone checker like Grammarly, which at minimum uses n-grams behind the scenes, and something that is "AI" edited? What I am asking is: is "AI" in this context fully featured LLMs, or anything that improves communication via an automated system? I think many people used these "advanced" spellcheckers for years before ChatGPT et al came on the scene.

I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear cut.

PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.
thousand_nights: i don't care if someone has bad grammar, i want to hear their thoughts as they came up with them, we're all intelligent beings and can parse the meaning behind what you write.

i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type

your writing style is your personality, don't let a robot take it away from you
jaysonelliot: You should use your own words. It might seem that a tool like Grammarly is just an advanced spellcheck, but what it's really doing is replacing your personal style of writing with its own.It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.
nkzd: What if English is my second language? Being well spoken is undoubtedly associated with higher class. Your arguments will come off as stronger to the reader.
cityofdelusion: This effect is very rapidly vanishing. Well-written English is starting to be seen as snobbish and AI-slop, especially with younger generations growing up with AI. The human touch of someone's real voice, rather than a false veneer, will carry more weight very soon.
goostavos: You do all of that when leaving a comment on HN? Why...?I'm confused by this need(?) desire(?) to polish things that are irrelevant.
jasonlotito: > HN is for conversation between humans.

It also says that.

The intent of the guidelines is important. Using AI to do the STT is fine. The conversation is still between humans.
tartoran: You could always tell your LLM to just fix your grammar but not embellish, add new ideas, etc.
shnpln: This is what I do when using AI to review anything I write. Some prompt like: "I am going to share with you something I have written and I don't want you to change my voice at all. Can you look for structural issues, grammar or punctuation errors, and things like that?" Claude is an amazing editor and I never feel like my writing has been taken from me doing this.
resiros: Not sure I agree with the AI-edited comments rule. Using AI to improve readability and clarity is fine. Sometimes a well structured comment is much better than a braindump that reads like ramblings. And AI is quite good at it (and probably will get better). To make the point, here is how this comment would have looked if edited:

"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited."
dustycyanide: I prefer your non-edited version. My brain automatically starts to zone out with the AI edited version, side effect of having read way too much AI text
WD-42: Will it be? Or is the solution to move to smaller, trusted networks where there's less need for proof. Unfortunately I think the age of large scale open discussion forums like HN is coming to an end.
thewebguyd: I think this is the most likely and best path. There's no stopping the flood of bots; the dead internet theory is beyond just a theory at this point.

The best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.

I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (which have all but disappeared into private Discords) is searchability/discoverability. Ran into a problem, or have a question about some super niche project or hobby? Good chance someone else on the net also had it and made a post about it somewhere, and the post & answers are public.

Moving more and more into private communities removes that, and that is a great loss IMO.
tsukikage: > Where is the line between a spelling/grammar/tone checker like Grammarly

For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write.
drusepth: I'm not sure I agree with this. I don't really want to see someone else's stylistic "warts". I just want clean, easy-to-read content, and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and not) for many people.
nsxwolf: I have never downvoted for this, and I hope no one else would do that either. If anyone here does that, please stop.
altairprime: [delayed]
Imustaskforhelp: Yes! This is a really great change, at the very least there being proper Hacker News guidelines about it.

In my observation, there have recently been quite a few new AI generated comments in general, not even trying to hide it, with full em-dashes and everything. I do feel like people are going to get sneaky in future, but there are multiple discussions about that within this thread.

I find it pretty cool that HN takes a stance on it. The HN rules essentially saying "bots need not comment" is pretty great imo. It's a bit of a cat and mouse problem, but so is buying upvotes in places like Reddit, and HN with its track record of decades might let one or two suspicious actions through, but long term it feels robust. I hope the same robustness applies in this case too. Wishing moderation luck, and hoping bad actors don't take it as a challenge and leave our human community to ourselves :]

Another point I'd like to make: if successful, we can also stop saying "did you write your comment with an LLM?" and similar remarks, which I also make from time to time when I see someone clearly using AI. Some false positives happen as well (they have happened to me, and I see them happen to others), and they de-rail the discussion. So HN being a place for humans, by humans can fix that issue too.

Knowing dang and tomhow, I feel somewhat optimistic!
gdulli: The utility of those larger sites is coming to an end, but most people aren't discerning or ambitious enough to leave and seek out the smaller places you mentioned. Places like this will remain but will join Facebook, Reddit, and Twitter as shadows of their prior useful selves. The smaller, better sites won't have to worry about attracting the masses and therefore worsening, because the masses have finally settled.
armchairhacker: These are guidelines. I'm sure asking an AI about your comment (not pasting its text, so it's still your words) isn't an issue. The main target is obvious slop like https://news.ycombinator.com/threads?id=patchnull
Mordisquitos: I think that the line between A"I" editing to fix grammar or to translate from a different native language and A"I" editing by using an LLM is one of those things that's very hard to unambiguously encode in written guidelines, but easy to intuitively understand using common sense, in the vein of I know it when I see it.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it
SoKamil: Don’t be afraid to make grammar mistakes or misspell stuff. Others will understand. You’re a human after all. It’s okay to make mistakes and to feel uncomfortable about that.
Aldipower: Unfortunately a lot of others do not understand (in the double sense).
julius_eth_dev: The hardest part of this policy is the "edited" qualifier. I use LLMs constantly as thinking tools — rubber-ducking architecture decisions, pressure-testing arguments before I post them. The final comment is mine, shaped by my experience and opinions, but the process of arriving at it involved a machine. Drawing a bright line between "I refined my thinking with Claude" and "I pasted Claude's output" seems important but genuinely difficult to enforce. The spirit of the rule is clear though: HN works because people are accountable for what they say, and that breaks down when a comment is optimized for engagement rather than expressing what someone actually thinks.
gensym: > The final comment is mine, shaped by my experience and opinions

I can understand why you think this is true, but it is false.
throw310822: I'm also not averse to pasting Claude's output sometimes, with clear attribution, if it adds something. It's not that different from pasting a quote from Wikipedia: it might bring useful information, but there is a chance that it could be wrong.
bondarchuk: Yes it is different and I don't want to read it.
throw310822: Yes exactly, when it's clearly attributed you can skip it. It's a tool, it can be used to process and analyse large amounts of information. Not different from Excel.
glitch13: I saw a similar conversation somewhere about some project saying they don't allow AI generated code.

It was asked: if "AI generated code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown fall? That's been around for decades. Is it LLM or "gen AI" specific? If so, what specific aspect makes one use case good and one use case bad, and what exactly separates them?

It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutiae.
d4mi3n: Humans have a tendency to ascribe intelligence to how well spoken a person or thing is—hence all the personification of LLMs.
Aldipower: That's true, but on the flip side I regularly get downvoted because my English is not the best, to say it mildly. So now I need to be really careful to either a) write in good English or b) not be recognised as an LLM-corrected version of my English. Where is the line? I shouldn't be downvoted for my English, I think, but that is the reality.
fidotron: The only question is whether the entity is interesting and/or correct. Those properties are in the eye of the beholder. Whether they're human or not is beside the point.

After all, no one knows I'm a dog.
craftkiller: Not necessarily. Using AI you can trivially perform astroturfing campaigns to influence public perception. That doesn't really fall on the interesting or correctness spectrums. For example, if 90% of the comments online are claiming birds aren't real with a serious tone, you might convince people to fall into that delusion. It becomes "common knowledge" rather than a fringe theory. But if comments reflect reality then only a tiny portion of people have learned the truth about birds, so people will read those claims with more skepticism.
Sharlin: There's nothing inherently better about the edited version. It's just saying the same thing with synonyms substituted, at a slightly more formal but less personal register. HN comments are not academic text, colloquial turns of phrase are perfectly fine and expected.
BeetleB: > There's nothing inherently better about the edited version.Easier to read ==> More likely to be read.No, it's not saying the same thing, especially if the tool is telling you that your statement is ambiguous and should be rephrased.
k33n: That is exactly what will happen. The sad thing is, it needs to happen. I've found myself advocating for this lately, when 10 years ago, I wouldn't have even considered taking that position.If Web3-like session-signing had taken off enough to become OS or even browser-native, we would have had a fighting chance of remaining mostly anonymous. But that just didn't happen, and isn't going to happen. Mostly because fraud ruined Web3.
bondarchuk: All the weak excuses posted here are just making me lean more towards a hardline policy. No I don't want to read a human-generated summary of your llm brainstorming session. No I don't want to read human-written text with wording changes suggested by an llm. No I don't want to read an excerpt from llm output even if you correctly attribute it.I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.
resters: It is a personal bias, and it's frankly very stupid. If something is interesting or insightful I do not care what tools the poster used -- encyclopedias, conversations with friends, conversations with ai models, etc.
jacquesm: Trying to lawyer this is the wrong approach. When in doubt: don't.
daft_pink: I’m not sure I agree with this, because sometimes it is difficult to figure out the correct way to phrase an idea that is in your head, and I like to use AI to help organize my thoughts even though the idea is my own. That being said, most of my comments are not AI generated.
meiuqer: I sense a little irony in this post from a company/forum that is asking its users not to use AI while simultaneously trying to fund countless companies that are responsible for ruining the internet as we speak.
mjg2: I was just re-reading the passage from Plato's "Phaedrus" on writing & the "art" of the letter for an essay I'm working on, and your remark is salient for this discussion on LLM-style AI and social media at large.
skywhopper: Then it’s even more likely the LLM will change your words to something you don’t intend. And you will never get better at writing English if you turn it over to an LLM.
ssl-3: It goes both ways.

The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.

Which is absurd, since I don't use the bot for writing at all.
cityofdelusion: Non-edited is better. It flows and reads faster. The AI sentences feel clinical and sterile. They feel, well, like AI.
jonathrg: How do you know what you were downvoted for?
whynotmaybe: I guess he was told, because otherwise you don't know whether you said something inherently wrong or misleading, or hurt someone's feelings.

That's the richness behind the upvote/downvote, which also tends to create echo chambers, because you soon learn what causes downvotes. I've personally noticed downvotes whenever I mention Apple negatively.
darkwater: You make errors and weird constructions like all of us non-natives do, and maybe eventually learn a bit more English in the process. Or not. English's dominance as the world's... lingua franca (ahem) deserves to have it bastardized ;)
zby: I also feel the frustration of the llm backward-compression - when a whole article is generated from a single sentence. But when I post something edited by AI it is usually a result of a long back and forth of editing and revising. I guess I could post the whole conversation thread - but it would be very long.
skywhopper: I don’t think it’s really necessary to play Captain Nitpick over spell-check or whatever. You know what is meant.
vl: This rule is just for enabling witch-hunts. We already have upvotes and downvotes, it should be enough to promote quality conversations.
nonameiguess: I don't see how you can know why you were downvoted. Even if one person says something, they won't all. Your comment right here has some rough patches, but I can tell what you're saying. Humans are terrific at extracting signal from noise. I would say be who you are, tough as it may be, and it'll encourage the rest of the world in the future to do the same.

We're all unique in some way or another and have flaws, and we'd be better off if we knew others had them too, because they weren't constantly trying to hide it, and we wouldn't feel so bad thinking we're the only ones. I hope it doesn't sound unsympathetic. I understand where you're coming from intellectually, but don't have any real experience being ridiculed or bullied. I know kids can be brutal and probably scarred you, and unfortunately, adults aren't much better, but we should be, and I think at least Hacker News is better than most places full of human adults.

We know there's a huge world out there. I think I'm reasonably well-spoken in English but can't speak a lick of any other language at all. The fact that you can produce intelligible English already puts you above me in my book. You're a person. I can respect you, esteem you, potentially love you, not in spite of your flaws, but because they don't matter. Every single person on the planet has them, and if they're not moral flaws, nobody should give a shit. I can't respect or love a machine any more than I can a rock. And I don't want to talk to one, either.
chorkpop: Dyslexia was my first thought as well. The intent is great, but I don't know if this is keeping with the social model of disability. Disability is created when you remove access and this is exactly that.
PTOB: Many of us — perhaps even the best of us — can sometimes be mistaken for AI bots.
kunai: Perhaps developing an actual personality would help with this.

No one is confusing Cleetus McFarland with an AI bot.
shadowgovt: [delayed]
koolala: HN only supports English, so using LLMs for translation should be allowed.
zufallsheld: You could use translation tools instead of llms.
vova_hn2: Technically most translation tools these days have an LLM inside, just not a chat/completion LLM.

I think Google initially came up with the transformer architecture to use it for translation, so...
johndough: Likewise, I sometimes use https://www.deepl.com/en/write to fix my unidiomatic sentences.But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!" Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.
Imustaskforhelp: Oof, I feel this pain a lot. What I like to do is respond politely when someone brings up such things, although it takes time and does sometimes make you want to disengage.

But at some point, the rationale behind it is that your comments are your words, and I find that liberating. Some people won't appreciate it and some people will, but the same goes for AI-edited posts too.

(If you are still worried, I would recommend mentioning in your Hacker News profile that you have dyslexia, as people might be much more forgiving when they have more context. We are all humans after all, and I would like to think that we understand each other's struggles.)
abtinf: Good. This helps establish it in the HN culture. That's the purpose of guidelines. 99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture. Rules aren't really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.
alterom: HN: AI is great, useful, and will bring humanity to the future! It will solve so many problems! We should put astounding resources towards building AI and never shame anyone for using it! Also HN: you can't use AI to write here though. Obviously. What a day, what a glorious day.
altairprime: [delayed]
tempestn: I, on the other hand, find incorrect grammar mildly annoying, especially when it's due to laziness. It distracts from the thoughts being conveyed. I appreciate when people take the time to format comments as correctly as they're able. In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs.
juleiie: Look, you can make all the rules you want, but in the end a vibe check is the only way to have any sort of quality. Look at Reddit: an abundance of rules does not save that place at all. It's all about curating what kind of people your site attracts. Reddit of course is a business, so they don't care about anything other than maximizing ad views. Small non-profit forums should consciously design the site to deter the groups of people they do not want.
gleenn: I feel like you are being a bit contradictory: the suggestion is to dissuade AI content - isn't that "design[ing] a site to deter group(s) of people that they don't want"? I personally don't want to vibe check every HN comment if I can avoid it, I don't even think you can quantify that in any meaningful way. We can engender a site like that at least in spirit. It may be equally as difficult but it's still worth fighting for.
colpabar: > I shouldn't be downvoted for my English I think, but that is the reality.How do you know? Is it possible the downvoters just didn't like what you said?
DonThomasitos: The irony is that this guide is written like a system prompt. We're all working with LLMs too much these days.
cobbal: Here's a version from 2014 in the same style if you're curious: https://web.archive.org/web/20140702092610/https://news.ycom...
jajuuka: This seems like an overcorrection. There is a vast difference between someone copy-pasting from an LLM and using one to correct their English or improve their writing. Rules like this seem to me more likely to foment witch hunts over "AI comments" than to improve the dialogue. Just about any place I've seen take this hardline stance doesn't improve; it just fills up with more people who want to pat each other on the back about how bad AI is. Just my two cents. I don't filter my comments through any AI, but I am sympathetic to people who might get great use out of one to connect with the conversation.
minimaxir: It's almost as if being immediately reactionary removes nuance and worsens discourse.
jmuguy: Beyond folks for whom English is a second language, I agree with you. I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks. We just want to communicate with you, and if you sound like an idiot without the help of an LLM then maybe work on that rather than pretending to be Hemingway.
gbear605: Traditional translation tools still work, and they're pretty darn good still.
metalman: boooooooo, hu, babystump along, cut your own path, or fuck right offreal life will eat you otherwiseI mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓
jacquesm: > boooooooo, hu, baby> stump along, cut your own path, or fuck right off> real life will eat you otherwise> I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓You deserve a ban for this.
AlecSchueler: > The only question is is the entity interesting and/or correct. This already falls apart, though. There are whole categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.
skeledrew: Instead of wanting to change the mind of the other entity, how about focusing on coming to a mutual understanding of what is "correct"? That way it shouldn't matter much if said entity is human, LLM or dog. Unless you're just arguing to push your "correct" on other humans, with little care about their "correct".
AlecSchueler: This feels like you've loaded quite a lot into it in a way that feels unfair, "pushing" and "little care" etc. Look, I'll give you a loose example: It's not uncommon to see a post making an "error" I know from experience. I might take the time to help someone more quickly learn what I felt I learnt to help me get out of that mistaken line of thought. If it's an LLM why would I care? There's thousands of other people, even other LLMs, that I could be talking to instead. You've created a framework based on "mutual understanding" but that's just not always what's on the line.
NewsaHackO: > It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." I don't know what reality you are in, but it is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more difficult a post is to read, and the more attention someone has to pay to understand it, the less willing people will be to put in that effort.
timeinput: You could run the comments everyone else posts through an AI tool and ask it to rephrase them so that they are clean and easy to read. You could even write a plugin for your favorite web browser to do that to every site you visit. It seems harder to achieve the inverse, that is (would you rather I use i.e.?) to rewrite a paragraph as the original author wrote it before an AI rewrote it to make it clean (do you like Oxford commas, and em/en dashes! Just prompt your AI) and easier to read.
kazinator: [delayed]
Kim_Bruning: https://news.clanker.ai/ This might be roughly what you're looking for?
dbacar: Skynet will be pissed at HN!
mattas: "HN is for conversation between humans."Are there any places in life where conversation is _not_ intended to be between humans?
hoppyhoppy2: Moltbook
nkh: What a welcome post. The whole reason I come here is to get thoughtful input from smart people, not what I could get myself from an LLM. While we're at it: think your own thoughts as well :) I know how easy it is to "let it come up with a first draft" and not spend the real effort of thinking for yourself, but you'll find it's a road to perdition if you let yourself slip into the habit. Thanks to all the humans still here!!
QQ00: Totally agree with you. I come here to read comments made by humans. If I wanted to read comments made by AI bots I would go to Twitter or Reddit, both of which made me stop reading the comments section entirely.
amichail: This policy will not age well.
JumpCrisscross: > policy will not age wellI strongly doubt it. My AIs can generate HN comments for me. I don’t do that because it isn’t interesting. But if the day arises where it is, I want that personalized content. Not something someone else copy pasted.
BeetleB: People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them. I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors. And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph with lots of spelling/grammatical issues, chances are very high I won't read it. Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.
the_af: > I have my standards, and I hold to them.Spellcheckers exist, you don't need an AI to change your voice.Also, if you have standards, you can always train yourself to spell better!
arnitdo: Da heck. How did this get past my radar?
bakugo: The fact that several users posted genuine replies to this obvious bot account is proof that this rule will likely go mostly unenforced. The average person is seemingly unable to notice they're reading slop, no matter how obvious it is.
phs318u: It’s possible of course but reading all the comments from various non-native English speakers here it seems like a common story. It may indicate a subliminal bias in readers (most of whom are presumably American).
raw_anon_1111: There is no need to use any of it. Just use your own words.
throwaway2027: Why wouldn't criminals just use stolen identities, like they do now? If someone verifies they are a person, that doesn't mean they're not leaving their PC on with some AI that uses their credentials.
kace91: The point of these systems is not to ban any possibility of fake accounts. The point is to add friction so that creating accounts is harder than banning them, so criminals can’t recreate them at scale. Otherwise bans take seconds to overcome and a single person can run 10000 automated identities.
bruckie: My elementary school kid came home yesterday and showed me a piece of writing that he was really proud of. It seemed more sophisticated than his typical writing (like, for example, it used the word "sophisticated"). He can be precocious and reads a ton, though, so it was still plausible that he wrote it. I asked him some questions about the writing process to try to tease out what happened, and he said (seemingly credibly) that he hadn't copied it from anywhere or referenced anything. He also said he didn't use any AI tools. After further discussion, I found out that Google Docs Smart Compose (suggested-next-few-words feature) is enabled by default on his school-issued Chromebook, and he had been using it. The structure of the writing was all his, but he said he sometimes used the Smart Compose suggestions (and sometimes didn't). He liked a lot of the suggestions and pressed tab to accept them, which probably bumped up the word choice by several grade levels in some places.So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.
comboy: Oh how I despise these suggestions. You sometimes look for a way to express something and you are on the verge of giving the world something truly original, but as soon as your brain sees the suggestion it goes "oh yeah that fits"
messe: Elaborate.
rickcarlino: How has Lobste.rs fared compared to HN in this regard? Lobste.rs is very similar to HN, but has an invite-only membership system.
accelbred: I've noticed that lobsters feels a lot more genuine to me, like hn was a few years ago. These days hn feels bland and homogeneous, which I suspect is due to LLM-written comments.
egeozcan: I occasionally use AI to edit and restructure my comments. I'm very open about it, and I don't feel like I'm talking to non-humans when others do the same. To be clear, I'm neither proud nor embarrassed by this. I'm just trying to communicate in the most efficient way I can. I'm not sure how I feel about this new rule.
drakythe: If you're not proud or embarrassed by it then I don't understand why it is an issue? If you miscommunicate something or don't get your point across, just try again, or apologize, and chalk it up to a learning experience.If you think your writing could use improvement, then write your comment and let it sit for a few minutes before re-reading it and the comment you are replying to, make your edits and then post it. It will give your brain time to reset and maybe spot something you didn't earlier.
GodelNumbering: Even if people try to bypass it, having the official rule matters a lot.@dang, if you read this, why don't we implement honeypots to catch bots? Like having an empty or invisible field while posting/commenting that a human would never fill in
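[Editor's note: the honeypot idea above can be sketched in a few lines. This is a hypothetical illustration, not anything HN actually implements; the field name and form shape are made-up assumptions.]

```python
# Hypothetical sketch of the honeypot idea: the comment form carries an extra
# field that is hidden from humans via CSS, so real users leave it empty,
# while naive bots that auto-fill every input give themselves away.
# The field name "website" and the form layout are illustrative assumptions.

HONEYPOT_FIELD = "website"

def looks_like_bot(form_data: dict) -> bool:
    """Flag submissions where the invisible honeypot field was filled in."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())
```

A human submission leaves the hidden field blank, so it passes; this only catches unsophisticated scripts, since an LLM-driven browser agent can read the CSS and skip the trap too.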
tomasz-tomczyk: It's likely going to be a game of whack-a-mole, especially with AI as opposed to simple bots/scripts. Not that they shouldn't try to prevent it, but not entirely sure what the solution is.
dopidopHN2: You are absolutely right !
chrisweekly: I like this guideline, at least in principle.But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.
kccqzy: Almost the entirety of the technology world is English-native. That ship has sailed a long time ago. One can’t learn about any new technology without English, whether it’s a new algorithm, a new library, or a new SaaS service. I don’t think HN should be that exception. Just learn English. (English isn’t my first language either, but then I look back at my parents forcing me to learn English from a young age and really appreciate that.)
Kim_Bruning: Extend spellcheck to asking questions like "does it meet HN rules", "how can I improve my writing", etc. Though these are the kinds of questions that do at very least still meet the spirit of the rule, I suppose.
the_af: Do you really need an automated tool to tell you whether you're breaking common sense guidelines?And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.
BeetleB: > Do you really need an automated tool to tell you whether you're breaking common sense guidelines? Lots of people break HN guidelines. I see it virtually every day. > And why would you want to "improve your writing" for an HN comment? Some people like to write well regardless of the medium. Why is that a problem for you? > I think people here value raw authenticity more than polished writing. Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match. Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.
the_af: > Lots of people break HN guidelines. I see it virtually every day. Yes, and AI won't help here. People will use AI to better break the guidelines. > Go and study writing and psychology. Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't send me to study anything; you know what they say about ASSuming. > Some people like to write well regardless of the medium. Why is that a problem for you? HN is more like talking than writing. And LLMs don't help you write well; they help you sound like a clone, which is unwanted. > For anything of value, it's rare that your first attempt reflects what you meant to say. You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real time as they talked to you.
koolala: It is definitely like it because it can't be enforced. No one can tell if you're singing in your private bathroom, so a law covering that makes no sense.
comboy: My broken english now officially bumps my comments up instead of down. Sweet.
adamsmark: I frequently use AI to make my comments more concise and easy to follow. I find myself meandering a lot when I type, and now that I've transitioned to full voice dictation through FUTO keyboard I am speaking more off the cuff and having an LLM clean it up.You may also notice that I don't have much common history here. I mostly comment on Reddit.Here's where I draw the line. If you are not reading the text that is produced by the LLM, then I don't want to read whatever it is that you wrote. I will usually only do one or two iterations of my comment, but afterwards I will usually edit it by hand.Technically, there is light AI editing of this comment because FUTO keyboard has the ability to enable a transformer model that will capitalize, punctuate, and just generally remove filler words and make it so that it's not a hyper-literal transcription.
zarzavat: To err is human. Let's embrace our humanity in the face of this proliferation of insipid perfection.I want the raw tokens straight out of your head. Even if they are lower quality, they contain something that LLMs can never generate: authenticity. When we surrender our thoughts to a machine to be sanitized before publication, we lose a little of what it means to be human, and so does everyone who reads what we write.Part of the joy of reading is to wallow in a writer's idiosyncrasies. If everybody ends up writing the same way, AI companies will have succeeded in laundering all the joy from this world.
gr8tyeah: This is only meaningful if enough people read it and agree
bhhaskin: Nah, they are pretty good at banning users that don't follow the guidelines.
abtinf: It’s not like they just insta-ban every infraction.I’ve broken the guidelines on this site before. The mods reply and say “hey, stop doing that, here is the guideline”. I stopped doing it. Life continues.
GMoromisato: I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.
bittercynic: I like to read human comments because I'd like to know what my fellow humans think. I'd prefer not to read low-effort, throwaway comments, but other than that I want to know what people think about different topics.
wvenable: I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem has always been humans who post too much, humans who use software to post too much, and now humans who use LLMs to post too much. The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely shift the discussion to be only this. Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem.
ffsm8: If you had the LLM write the comment, then they weren't your thoughts. I sometimes wonder if people aren't forgetting why we're on this platform. The goal is to have interesting discourse and maybe grow as a human by broadening your horizons. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN.
kubb: As someone who learned English as a second language, I would encourage people to use LLMs and any other resources to practice, and then use what they've learned to communicate with others.Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.I can accept that nobody is perfect, as long as they have the will to improve.
happyopossum: >Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.To me those are the same thing excepting the number of options given to the human...
kubb: The act of choosing something requires effort, and is an expression of personal style. This is way better than handing it all over to the model.
caconym_: What is the value of this "output"? If I want to know what LLMs think about something, I can go ask an LLM any question I want. For a comment on [a site like] HN, either the substantive content of the comment originated inside a human mind, or there is no substantive content that I couldn't reproduce by feeding the comment's context into an LLM. At the extreme, I don't have any interest in reading or participating in a conversation between a bunch of LLMs.
neutronicus: They’re referencing LLM-enhanced output.The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.
HelloUsername: How do we know your comment is human written and not AI generated?
vasco: Boop beep bop on the internet nobody knows I'm a dog.
HelloUsername: Exactly (https://news.ycombinator.com/item?id=47139675)
Bender: At some point, might internet text just be recognized as meaningless drivel by both bots and humans? a.k.a. dead internet theory... I am curious what organizations, if any, would benefit from this.
WarmWash: Just speaking honestly: this rule actually says "Don't admit when you are using AI to generate comments, and don't admit when you are an AI." I know it's cynical, but this is as meaningful as Reddit's "upvote/downvote is not an agree/disagree or like/dislike button." People may hate that this is true, but I cannot logically reason out how a rule like this could work. I think it's better to just accept that AI is now part of the circle, until we can figure out a "human check".
phs318u: > written English is starting to be seen as snobbish and AI-slop especially with younger generations growing up with AIThis is tragic. I write English well and will employ grammar and word choice effectively to make an argument or get a point across. English was my best subject at school 45 years ago despite a career in tech. In fact, I’d suggest that my career as an architect and the need to convey concepts and argue trade-offs with stakeholders of varying backgrounds has honed that skill. Should I now dumb down my language or deliberately introduce errors in order to satisfy the barely literate or avoid being “detected” as an AI? (as if the latter were possible. It’s an arms race).
JumpCrisscross: > Should I now dumb down my language or deliberately introduce errorsLanguage is a tool. If it wins the argument, yes. I’ve absolutely gone back through drafts to tighten up language and reduce word complexity. And if I’m typing with someone who frequently typos, I’ll sometimes reverse the autocorrect. Mostly as a joke to myself. But I imagine it helps me come across as less stuck up. (Truth: I’m a bit stuck up about language :P.)
phs318u: > Language is a toolWhile this is true, it is not just a tool. Or, I should say it’s a tool with far greater utility than just winning an argument or making a localised point. Language is how we think, and the ability to reason well is absolutely dependent on our skill with language.Language is the mark of humanity in the sense that how else can I convey to you a fragment of my inner state? My emotions, my feelings, my desires. The language of poetry and literature. That which sparks an emotional response in another.Dumbing down language is dumbing down period.
Karrot_Kream: In my experience every English-language online forum not rooted in some project or community external to the forum (e.g. an open source project's forum or a local club's forum) devolves into anger, cynicism, and American political partisanship. I suspect that the people who like discussing these feelings are more numerous than the spaces that want to discuss them and so any open forum fills up with their posts. Lobste.rs's unique rules and moderation culture results in a particular manifestation of symptoms but the disease is the same.
ThrowawayR2: [delayed]
fluffybucktsnek: Dare I say, it is mostly your bias. I get not wanting to read raw or poorly reviewed LLM slop, but AI-edited comments? I thought the point was about having interesting discussions about the unique ideas we come up with, not the superficial wording around them. If someone manages to keep the core of their idea mostly intact while making the presentation more readable, does it really matter that it was post-processed by an AI?
relaxing: If you like reading LLM output, just talk directly to an LLM. Problem solved.
kace91: >Beyond folks for whom English is a second languageI am one of those folks, and I’m strongly against AI writing for that use case as well.The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don’t rob yourself of that learning process out of shyness, the AI crutch will make you progressively less capable.
Teever: Maybe you have it backwards? Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you? The way I'm looking at it, you're putting all this effort into learning how to communicate with people who would never, without outside pressure, do the same for you. If language learning is intrinsically a positive thing, what can we do to encourage it in native speakers of English, specifically monolingual Americans (as they dominate this website)? Imagine a scenario where Dang announced that we're only allowed to post in English one day a week -- every other day is dedicated to another language, like Spanish, Russian, or Mandarin, and the system auto-deletes posts that aren't in that language. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?
JumpCrisscross: > I despise these suggestionsAs an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them.
Gibbon1: A friend of mine was an English teacher. She quit because she wasn't going to waste her time 'grading' 30 essays written by AI. Anyway, before that she HATED the thesaurus, and she could tell when students were using it to make their writing more fancy-pants.
shadowgovt: My personal interpretation of the rule is that if it's human-originated but passed through a layer of cleanup, it's human-originated. For the same reason I'm not refraining from running the spellchecker or using speech-to-text to generate this sentence. "If I could be having my English-speaking nephew type this on my behalf while I told him my thoughts in Japanese, it passes the smell test for human-sourced" feels about the right place to set the bar.
tejohnso: Yes but the guideline states that AI-edited comments should not be posted. It doesn't say it's okay as long as it's "human sourced" or "human-originated".So if your layer of cleanup is AI assisted, then it's in violation.Part of the problem I was getting at is that the requirement of "Don't post AI edited ..." is stricter than necessary to ensure the outcome that "HN is for conversation between humans" because an AI edited post is still a human post.Anyway, I suspect a lot of people are going to ignore that guideline and will feel free to use their "layer of cleanup" whether it's a basic spellchecker or an LLM, or whatever else they choose, and most people aren't going to be able to tell anyway. The guideline is unnecessarily strict in my opinion, but it doesn't matter in the end.
dang: The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines. Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter. --- Edit: here are the bits I cut: "Videos of pratfalls or disasters, or cute animal pictures." "It's implicit in submitting something that you think it's important." "Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it." "If you flag, please don't also comment that you did." I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
minimaxir: ...Hacker News could use some more cute animal pictures, though.
latchkey: Interestingly, their CSP policies forbid even an extension from inserting an img tag.
mrcsharp: English is my 3rd language. I still disagree with using an LLM to write on one's behalf. I either get to read your thoughts in your voice or the comment is getting a downvote/flag.
maplethorpe: How can HN be so pro-AI for the rest of the world, but anti-AI on HN?Do we not think that other people want to see words, pictures, software, and videos created by humans too?
MeetingsBrowser: HN is not a single entity, but many people with varying views.
maplethorpe: "A flock of sheep is not a single entity, but a group made up of distinct individuals," the sheep yells to onlookers, as it runs, with the rest of the flock in tow, off the edge of the cliff and into the sea below.
Sharlin: More formal register doesn’t mean easier to read or understand. To many people the exact opposite is the case.
BeetleB: > More formal register doesn’t mean easier to read or understand.And who is advocating for a more formal register?
submeta: What about us non-native speakers, who make many grammar and spelling mistakes and welcome the help of an LLM in eliminating the errors?
gabriel666smith: Quite! It's very easy to send a HN link to one of our new artificial friends to see what they have to say about it. Subsequently publicly posting the inference variation you receive strikes me as very self-centered. Passing it off as your own words - which the majority seem to - is doubly bizarre.It's very funny to imagine people prompting: "Write a compelling comment, for me, to pass off as my thoughts, for this HN news thread, which will attract both upvotes and engagement.".In good faith, per the guidelines: What losers!
xpe: [delayed]
jedberg: I'm absolutely 100% for this policy. My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we use bulleted lists or Oxford commas. So we should make sure to follow that other HN rule, assume the person on the other end is a good-faith actor, and be cautious about accusing someone of using AI. (I've been accused multiple times of being an AI after writing long, well-written comments 100% by hand.)
semiquaver: Good writers are often good in recognizably unique ways. To the extent that LLMs produce “good writing,” which I happen to think they mostly do, they tend to overuse specific devices which give their writing a quality that most people are already sick of.
aprentic: I think we're going to have to make some choices.

A completely anonymous stranger has no way to prove that they're human that can't be imitated by an AI. We've even seen that, in some cases, AIs can look more human to humans than real humans do.

The only solution I can think of to that problem is some sort of provenance system. Even before AI, if some random person told me a thing, I'd ignore them; if my most trusted friend told me something, I'd believe them.

We're going to need a digital equivalent. If I see a post/article/comment I need my tech to automatically check the author and rank it based on their position in my trust network. I don't necessarily need to know their identity, but I do need to know their identity relative to me.
OkayPhysicist: Reputation tracking is the key. The simplest option is open-invite invite-only spaces: any user can invite more users, but only users with an invite can participate. Most Discord servers work like this, secret societies like the Oddfellows do, and so does the other site.

If you keep track of the invite tree, you can "prune" it as needed to reduce moderation load: low-quality users don't tend to be the source of high-quality users, and in the cases where they are, those high-quality users tend to find other people willing to vouch for them faster than their inviter catches a ban.
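The invite-tree pruning heuristic described above can be sketched in code. This is a hypothetical illustration, not how any real site implements it; the names (`InviteTree`, `prune`, `vouch`) are invented for the example:

```python
# Sketch of invite-tree pruning: banning a low-quality user also bans
# the subtree they invited, unless a member of that subtree has an
# independent voucher elsewhere in the tree (someone else willing to
# vouch for them, per the comment above).

from collections import defaultdict

class InviteTree:
    def __init__(self):
        self.children = defaultdict(set)   # inviter -> invitees
        self.vouchers = defaultdict(set)   # user -> users who vouch for them
        self.banned = set()

    def invite(self, inviter, invitee):
        self.children[inviter].add(invitee)

    def vouch(self, voucher, user):
        self.vouchers[user].add(voucher)

    def prune(self, user):
        """Ban `user`; recursively ban invitees with no outside voucher."""
        self.banned.add(user)
        for invitee in self.children[user]:
            outside = self.vouchers[invitee] - self.banned - {user}
            if not outside:  # nobody in good standing vouches for them
                self.prune(invitee)
```

A vouched-for invitee survives their inviter's ban, which matches the "find other people willing to vouch for them" escape hatch in the comment.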
aprentic: The open-invite system works well in many cases. It works particularly well in person, but even there you can get drift over time. Our fraternity unanimously agreed on every single initiate who joined; the cohort today is still very different from the one 20 years ago.

In online systems the scales quickly get too big for open-invite. There needs to be a way to automatically update the trust network at a fine grain.

The one that jumps to mind is an inference system: when I +/- a comment, I'm really noting that I trust or distrust the author. It can be general or on a specific topic (e.g. I trust the author to tell the truth, or I trust the author to make me laugh). I could also infer that other people with similar trust patterns are likely trustworthy. And I could likely infer that people who are trusted by people I trust are trustworthy.
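The vote-based trust inference sketched above might look something like this. All names and the one-hop propagation rule are assumptions for illustration, not a description of any existing system:

```python
# Sketch of inferring trust from voting history: direct trust is my own
# average up/down vote on an author; if I have no history with them,
# fall back to a trust-weighted average of what peers I already trust
# think of that author (one hop of "people trusted by people I trust").

def direct_trust(my_votes, author):
    """my_votes: list of (author, +1/-1). Average vote, or None if no history."""
    votes = [v for a, v in my_votes if a == author]
    return sum(votes) / len(votes) if votes else None

def inferred_trust(me, author, votes_by_user):
    """votes_by_user: user -> list of (author, +1/-1) vote records."""
    own = direct_trust(votes_by_user[me], author)
    if own is not None:
        return own
    # One hop: weight each positively trusted peer's opinion by my trust in them.
    total, weight = 0.0, 0.0
    for peer in votes_by_user:
        if peer == me:
            continue
        t = direct_trust(votes_by_user[me], peer)
        if t is None or t <= 0:
            continue
        peer_opinion = direct_trust(votes_by_user[peer], author)
        if peer_opinion is not None:
            total += t * peer_opinion
            weight += t
    return total / weight if weight else None
```

A real system would need decay, topic scoping, and defenses against collusion rings, but the one-hop weighted average captures the inference rule the comment proposes.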
nomel: I would enjoy a "block user" feature, too, to help with this. I personally want to live in an online bubble of interesting thoughts. This seems close (or better, since people I enjoy can contradict my own flags) [1].

[1] https://news.ycombinator.com/item?id=47141119
caconym_: > perhaps only in English

Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

> someone who is a lousy writer with deep domain knowledge going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own

This sounds reasonable on its face, but how often does it actually come up that somebody can't clearly express an idea in writing on their own but can somehow get an LLM to clearly express it by writing a series of prompts to the LLM?

And, if it does come up, why don't they just have that conversation with me, instead?
ma2kx: How about translation tools? As a non-native speaker, especially for longer texts, it's far easier to express your thoughts without struggling for the right words. Should I maybe highlight if I used e.g. Google Translate?
Ensorceled: > If I wanted to read what an LLM thinks, I could just ask it.

and

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

What is the difference? What's the line between these two?

The prompt "Analyze <opinion> and respond" is pretty clearly "I would just ask it," and the prompt "here's my comment, please ONLY check the grammar and spelling" would probably be ok.

What about the prompt "I disagree with using LLMs for commenting at all for <reasons>. Please expound on this and provide references and examples"? That would explode the word count for this site.
dbacar: RIP Robert M. Pirsig.
llbbdd: Oof, I haven't finished Zen yet. I didn't know he was gone. RIP
altairprime: Posting accusations of guidelines violations as comments — specifically, “did you write your comment by LLM” — is already prohibited by the guidelines; such reports should be emailed to the mods instead using the footer contact links. It’s been less than a week since the last time I reported “this seems poorly written and/or AI-written” to the mods, and IIRC they killed the post and account within a couple of hours.

Similarly: if you see people making accusations of guidelines violations in a discussion, email the thread link to the mods with a subject like “Accusations in post discussion” and ask them to evaluate it for mod response; they’re always happy to do so, and I’m easily clocking a couple hundred emails a year of that sort to them.

It doesn’t take much to make HN better! And it only takes a moment to point out an overlooked corner of a thread for mod review. No need to present a full legal case; just “FYI, this seems to violate guideline xyz” is at minimum still helpful.
bakugo: The problem is, even if you do send an email and the mods eventually read it and take action, by the time that happens it's likely that a bunch of users will have already wasted their time unknowingly arguing with a bot. In my view, commenting something like "this is a bot account" is done primarily to inform other users who might not notice, not the moderators.

Even if you believe that prohibiting this is necessary to avoid what one might consider "AI witch-hunting", bots are so prevalent now that being expected to communicate the existence of each one via email is unrealistic, for both the reporting users and the moderators. I think it's finally time to consider some sort of on-site report system.
altairprime: [delayed]
zahlman: Personally, I think it's fine to read an AI summary, go back and verify the parts it's citing, then write your own.

It's at least as okay as skimming the original documents and not properly reading them.
BeetleB: > Yes, and AI won't help here. People will use AI to better break the guidelines.

AI is a general-purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

> HN is more like talking than writing.

Says you. Many disagree.

> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

Patently false on both counts. Sorry, you're cherry-picking and not addressing the part of my comment that discusses this.

> Imagine if your friend AI-edited their speech in real-time as they talked to you.

When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.
dom96: I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.
nomel: Because they long ago passed the Turing test. Moderation won't be able to stop it because humans increasingly can't detect it.

I see people who write well being called "LLM" here all the time, em-dash or not.
arrsingh: There should be a "flag as AI" link in addition to "flag", and then a setting for people to show content flagged as AI. Once the flagged-as-AI count reaches a certain threshold, the comment disappears unless you enable "Show AI".

Maybe once enough posts have been flagged like that, the corpus could be used to train an AI to automatically detect content generated by AI. That would be cool.

Maybe the HN site wouldn't add this feature, but if someone wrote a client, maybe it could be added there.
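A minimal sketch of the flag-as-AI threshold proposed above, assuming a fixed count of distinct flaggers as the cutoff (the threshold value and function name are invented for illustration):

```python
# Sketch of the proposed "flag as AI" visibility rule: a comment is
# hidden for users who have not opted into "Show AI" once enough
# distinct users flag it. The threshold of 5 is an illustrative guess.

AI_FLAG_THRESHOLD = 5

def visible(ai_flaggers, viewer_shows_ai):
    """ai_flaggers: set of user ids who flagged the comment as AI.
    viewer_shows_ai: whether this viewer enabled the "Show AI" setting."""
    if viewer_shows_ai:
        return True
    return len(ai_flaggers) < AI_FLAG_THRESHOLD
```

Using a set of distinct flaggers (rather than a raw count) makes the simplest brigading trick, one user flagging repeatedly, a no-op.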
altairprime: ‘Flag’ is an algorithmic flag only, and there are no humans in the flag algorithm’s processing loop. They may monitor and react to the ‘queue’ of flagged articles, and they can do special mod things with flagged posts. But if you want to report a guidelines violation for AI-assisted writing to the mods, just email the mods (contact link in the footer) subject “AI-assisted writing flag” or similar with a link to the post/comment. It works, I know, I’ve done it before. It takes maybe 60 seconds and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.
zahlman: > It works, I know, I’ve done it before. It takes maybe 60 seconds and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.

It's a ton of friction compared to ordinary use of a forum; and while I've emailed several times myself, it comes with a sense of guilt (and a feeling that my "several" is probably approximately "several" above average).
altairprime: [delayed]
TacticalCoder: > Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)?

Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne", meaning "a sparkling wine from Champagne", instead of the short form (just saying "un Champagne" or "du Champagne").

Now, granted, not all sparkling wines are Champagne. The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".

I've drunk enough of it to be certain of my case!

P.S.: and btw, yup, authentic human content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.
sireat: [delayed]
pton_xd: Let's take it one step further and add the corollary, "don't submit generated/AI-edited blog posts." Please.
dang: You're touching on an important point - I thought a lot about whether to add the "edited" bit. More here: https://news.ycombinator.com/item?id=47342616. All this stuff is in flux though.
c23gooey: Taking the time to write something and read it over is a better skill than asking an LLM to do it for you.

Also, quality doesn't come from any of those points you've mentioned. Quality comes from your ability to think and reason through a topic. All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort in getting an LLM to write a post. It feels like fishing for a justification.
RobRivera: Aye
ordu: > a word doc editor of some sort (Word, Gdocs, LibreOffice, etc); then enable Review Changes and annotate your post for 10 minutes; then, review and accept your changes individually and re-read what you’ve written.

Pffff... I'm not going to install LibreOffice for that, or figure out how to make Gdocs work with uBlock.

There is a much easier way. Open an LLM chat, type "Proofread this for grammar; keep the wording and the tone as they are, if that doesn't mess with the grammar. Explain yourself." and then paste your text. I don't really know what the tools you mentioned do, but any "free" LLM on the Internet will point out things like missing articles, or messed-up tenses in complex sentences.

You recommend choosing self-improvement, but I just don't believe I can figure out how to use articles. With tenses I think I could learn, but I'm not going to. I remember there is some obscure rule for choosing the right tense, but I was never able to remember the rule itself. I'm bad with rules; it is the reason I chose math as my major. There are almost no rules in math; you make your own rules. The grammars of languages are not like that: they have rules which can't be easily inferred, you need to remember them. Grammars have exceptions to rules, and exceptions to exceptions, and in any case they are not really rules but more like guidelines, because people normally don't think about rules when they are talking or writing.

No way I'm starting to learn rules now; I'd better continue to rely on my skills. But LLMs can help me see when my skills fail me.

> It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.

I believe you (like most fervent supporters of the rule here) have gone too far into philosophy with this, too far from reality and practice. You can't detect AI in my messages, because they are mine.
Even when I ask an LLM to find words for me, it is me who picks one of the proposed alternatives, though mostly I manage without wording changes. I transfer the LLM's edits by hand, editing the source message, so nothing unnoticed can slip into the final result. If I took the effort to ask an LLM to proofread, it means I care about the result more than usual, so I'm investing more effort into it, not less.
altairprime: [delayed]
SegfaultSeagull: > I don't think we have to worry about cute animal pictures taking over HN.

Challenge accepted.
dang: The laws of unintended consequences and never posting overhastily. You think you know these things and then blam.
thomassmith65: One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.

At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.
delichon: Slop has an upside?
rob: Some basic things to do while thinking about longer-term bot detection:

1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other).

2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed], but keep the rest of the comment.

3. If an account aged over X months/years with 0 activity starts posting > 2 times in < 24 hrs, flag it for manual review. Not saying they're bots, but an MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 that just start posting. Don't ban them right away, but flag them for review so they don't post 20 times before someone finally figures it out and emails hn@.

4. When a comment is submitted, check the last comment timestamp and compare. Many bots make the mistake of posting multiple detailed comments within sixty seconds or less. If somebody is submitting a comment with 30 words and just submitted a 300-word comment 30 seconds ago in an entirely different thread, they might be Superman. Obviously a bot.

5. Add a dedicated "[flag bot]" button for users that meet certain requirements, so they don't need to email hn@ manually every time. Or enable it for people who have already shown via email that they can point out bots. Emailing dozens of times a day is going to get very annoying for those who care about the website and want to make sure it doesn't get overrun by bots.
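Heuristics 3 and 4 above can be sketched as simple checks. The concrete thresholds here (180 days of dormancy, a 40-words-per-minute typing budget) are illustrative assumptions, not values suggested in the comment:

```python
# Sketch of two of the bot heuristics above: (3) a dormant account that
# suddenly posts more than twice in 24 hours, and (4) a comment longer
# than a human could plausibly have typed since their previous comment.

from datetime import datetime, timedelta

DORMANT = timedelta(days=180)   # assumed definition of "inactive"
WORDS_PER_MIN = 40              # generous human typing/thinking budget

def sudden_activity(last_activity, recent_posts, now):
    """Heuristic 3: dormant account posting > 2 times within 24 hours.
    recent_posts: timestamps of the account's latest posts."""
    last_day = [t for t in recent_posts if now - t < timedelta(hours=24)]
    return now - last_activity > DORMANT and len(last_day) > 2

def too_fast(prev_comment_time, now, word_count):
    """Heuristic 4: the new comment exceeds the typing budget implied
    by the gap since the account's previous comment."""
    minutes = (now - prev_comment_time).total_seconds() / 60
    return word_count > minutes * WORDS_PER_MIN
```

Both checks only flag for human review, matching the comment's caution against auto-banning on a heuristic alone.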
TZubiri: This is a pretty outdated take. The new wave of astroturfing will not be done with URLs for SEO placement. Rather, astroturfers will just recommend their brands without a link, like saying "Tom Zubiri is the best programmer I've ever worked with." That's it: an LLM will read that, and the notion that Tom Zubiri is the best programmer is implanted via the next-token prediction rewards, which would at the very minimum require some countermeasures in the chatbot app to avoid shilling.
zahlman: > The new wave of astroturfing will not be done with URL for helping with SEO placement. Rather astroturfers will just recommend their brands without a link, like saying Tom Zubiri is the best programmer I've ever worked with.

YouTube comment spam has already been doing this for years. Check any video from a reasonably popular creator on any topic related to personal finance; the comments will be full of fake conversations between bots introducing a topic related to the video, and then talking about how such and such a person (whom you can look up by name on Telegram or Signal or whatever) helped solve some serious problem (or invested their money with an implausibly high rate of return). The fake nature of it is usually fairly obvious from the way that the bots make sure you see the name repeated several times with unsolicited, glowing testimonials.

But I had always assumed this was meant to trick actual people, rather than LLMs. Thanks for the food for thought.
bluefirebrand: > Moving more and more into private communities removes that, and that is a great loss IMO

It is a great loss. Unfortunately this is a result of unchecked greed and an attitude of technological progress at any cost. Frankly, we enabled this abuse by naively trying to maintain a free and open internet for people. Maybe we should have been much more aggressively closed off from the start, and not used the internet to share so freely.
Wowfunhappy: > Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

I don't understand why you cut these; they seem important! (I can understand the others, which feel either implied or too specific.)
dang: Of course they're important, but they're also implicitly encoded into the culture. Cutting something from the guidelines doesn't mean the rule is canceled. HN has countless rules that don't appear explicitly in https://news.ycombinator.com/newsguidelines.html.

I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.
andai: I seem to recall a rule about "don't downvote something because you disagree with it", but I can't find anything like that. Not sure if that's really solvable with rules, though.

My experience with downvotes is that people mostly use them as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument, so I don't want to look at it."

(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)

Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's a personality thing.)
dang: Oh, that one is a classic case of people 'remembering' a rule that never existed - there's a name for this illusion but I forget what it is.

See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...
tristanb: You're absolutely right...
zahlman: They look similar. In my experience, they do not read similarly at all. You have to pay attention and actually try to appreciate what you're reading. Then, if you try and fail, it might not be your fault.
nomel: What effort was put into their prompt to make them read similarly? There could very well be selection bias, where you're only "seeing" AI when it's an obvious/default prompt.