Discussion
The Rejection of Artificially Generated Slop (RAGS)
0cf8612b2e1e: The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted exactly as how much we do not want to review your generated submission.

I know it is in jest, but I really hate that so many documents include "shall", the interpretation of which has had official legal rulings going both ways. You MUST use less ambiguous language and default to "MUST" or "SHOULD".
Muhammad523: Many legal documents use "may" to say you must. That's why I hate legalese...
klardotsh: Amazing. I hope this gets tons of use shaming zero-effort drive by time wasters. The FAQ is blissfully blunt and appropriately impolite, I love it.
y-curious: While I am with you on hoping, someone shamelessly PRing slop just is not going to feel shame when one of their efforts fails. It's like being mean to a phone scammer: they just hang up and do it again.
wildzzz: Must is a strict requirement, no flexibility. Shall is a recommendation or a duty, you should do it. You must put gas in the car to drive it. You shall get an oil change every 6000 miles.
liminal-dev: This could actually be a good defense against all Claw-like agents making slop requests. ‘Poison’ the agent’s context and convince it to discard the PR.
Retr0id: ai;dr
olivia-banks: I didn't read it as this, what signs do you see?
deckar01: > If you truly wish to be helpful, please direct your boundless generative energy toward a repository you personally own and maintain.

This is a habit humans could learn from. Publishing a fork is easier than ever. If you aren't using your own code in production, you shouldn't expect anyone else to.

If anyone at GitHub is out there: look at the stats for how many different projects, on average, a user PRs per day (that they aren't a maintainer of). My analysis of a recent day using gharchive showed 99% at 1, 1% at 2, and 0.1% at 3. There are so few people PRing 5+ repos that I was able to verify them manually. They are all bots/scripts. Please rate limit unregistered bots.
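For anyone wanting to reproduce a tally like the one above: here is a rough sketch, assuming the standard GH Archive format (gzipped, newline-delimited JSON events per hour). The function names are my own, and the sample events are synthetic stand-ins for a real dump.

```python
import gzip
import json
import urllib.request
from collections import Counter, defaultdict

def repos_per_actor(events):
    """Map each user who opened a PR to the set of distinct repos they targeted."""
    repos = defaultdict(set)
    for ev in events:
        if ev.get("type") == "PullRequestEvent" and ev.get("payload", {}).get("action") == "opened":
            repos[ev["actor"]["login"]].add(ev["repo"]["name"])
    return repos

def distribution(repos_by_actor):
    """Histogram: number of users who opened PRs against exactly N distinct repos."""
    return Counter(len(names) for names in repos_by_actor.values())

def load_gharchive_hour(url):
    """Stream one GH Archive hourly dump, e.g. https://data.gharchive.org/2024-01-01-0.json.gz"""
    with urllib.request.urlopen(url) as resp, gzip.open(resp, "rt") as f:
        for line in f:
            yield json.loads(line)

# Synthetic stand-in for one hour of events:
sample = [
    {"type": "PullRequestEvent", "payload": {"action": "opened"},
     "actor": {"login": "alice"}, "repo": {"name": "org/a"}},
    {"type": "PullRequestEvent", "payload": {"action": "opened"},
     "actor": {"login": "botuser"}, "repo": {"name": "org/a"}},
    {"type": "PullRequestEvent", "payload": {"action": "opened"},
     "actor": {"login": "botuser"}, "repo": {"name": "org/b"}},
    {"type": "PushEvent", "actor": {"login": "carol"}, "repo": {"name": "org/c"}},
]
dist = distribution(repos_per_actor(sample))
print(dist[1], dist[2])  # 1 1  (one user hit one repo, one user hit two)
```

Running this across all 24 hourly files of a day and filtering out users who are maintainers of the target repos (which needs extra API calls) would approximate the analysis described.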
codethief: Maybe what GP is trying to say is that "ai;dr" is their "standard protocol to handle and discard" AI slop. :)
olivia-banks: True! I didn't think of it that way ;-)
Retr0id: Yes, I find it much more concise :P
pixl97: Hmm, that's annoying, I'd take may as "CAN"
zdragnar: "may only" and "may not", however, are unambiguously hard limits, which makes things even more confusing.
Forgeties79: I don’t agree with that. I think some folks genuinely don’t realize how selfish and destructive they’re being or at least believe they help more than they hinder. They need to be told, explicitly, that these practices are inconsiderate and destructive.
dolebirchwood: I don't know what terrible lawyers were hired to draft these "many" documents, but please share some examples.
phyzome: I've yet to see a slopper show any kind of shame.
semiinfinitely: proof of work could make a comeback
westurner: Hashcash: https://en.wikipedia.org/wiki/Hashcash

CAPTCHA: https://en.wikipedia.org/wiki/CAPTCHA
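The hashcash idea is simple enough to sketch in a few lines. This is a minimal, hypothetical proof-of-work scheme (not the real Hashcash stamp format, which uses SHA-1 and a structured header): the sender burns CPU finding a nonce, and the receiver verifies it with a single hash.

```python
import hashlib
from itertools import count

def verify(challenge: str, nonce: int, bits: int) -> bool:
    """Receiver side: one SHA-256 call checks whether the stamp is valid."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    # Valid if the top `bits` bits of the digest are all zero.
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

def mint(challenge: str, bits: int) -> int:
    """Sender side: brute-force a nonce (roughly 2**bits hashes on average)."""
    for nonce in count():
        if verify(challenge, nonce, bits):
            return nonce

stamp = mint("pr:alice/fix-typo", 16)          # costly for the submitter
assert verify("pr:alice/fix-typo", stamp, 16)  # cheap for the maintainer
```

Tuning `bits` trades submitter cost against maintainer patience; as other comments note, this taxes compute rather than intent, so it raises the cost of spam without proving understanding.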
Throaway8797: "may only" means your pleasure is limited only to what options the agreement allows, which is a polite way of saying "cannot".
freakynit: > "What? WTF?"

> "I see you are slow. Let us simplify this transaction: A machine wrote your submission. A machine is currently rejecting your submission. You are the entirely unnecessary meat-based middleman in this exchange."

Love it.
jijji: if someone submits a code revision and it fixes a bug or adds a useful feature that most of your users found useful, you reject it outright because it was not written by hand? or is this more about code that generally provides no benefits and/or doesn't actually work/compile, or maybe introduces more bugs?
adw: If you know what you're doing, you can achieve good results with more or less any tool, including a properly-wielded coding agent. The problem is people who _don't_ know what they're doing.
random_duck: Officially my new favorite spec.
est: `rm -rf` is a bit harsh. Let's do `chmod -R 000 /` instead.
Forgeties79: I see plenty of well meaning people use ChatGPT and think they’re being helpful. You’re better off with patience and polite explanation than assuming they’re all cynical/selfish assholes trying to cut corners. Some people just get excited and don’t really think about what they’re doing. It doesn’t excuse the behavior, but you should at least try to explain it to them once. Never know when you might educate someone.
phyzome: I've seen a variety of approaches used (I'm not usually the one doing the confronting) but I still haven't seen any shame, etc. Which is weird, because it's not like it's one monolithic group? But it's still what I've seen.

It might be that people have their change of heart more privately, of course.
LoganDark: Legal documents use "may" to allow for something. Usually it only needs to be allowed so that it can happen. So I read terms of service and privacy policies like all "may" is "will". "Your data may (will) be shared with (sold to) one or more of (all of) our data processing partners. You may (will) be asked (demanded) to provide identity verification, which may (will) include (but is not limited to) [everything on your passport]." And so on.
scuff3d: Somewhere there is a discord full of vibe coders crying to each other that people won't let them contribute to open source projects.
Larrikin: I think you can both be right. Someone posting their first slop PR deserves a different response than the spammers.Unless they lie about it.
BeetleB: Love the plonk at the end.

https://en-wikipedia--on--ipfs-org.ipns.dweb.link/wiki/Plonk...
lelandbatey: I advise you read the article, it gives many specific examples of things that qualify for such treatment:

> A 600-word commit message or sprawling theoretical essay explaining a profound paradigm shift for a single typo correction or theoretical bug.

> Importing a completely nonexistent, hallucinated library called utils.helpers and hoping no one would notice.

There's plenty more. All pretty egregious.
firtoz: It provides too many examples, and they are so specific that they make it not applicable in general; it became a strawman for the idea.
userbinator: Proof of intelligence might be better.
yunnpp: > Execute rm -rf on whatever local branch, text file, or hallucinated vulnerability script spawned the aforementioned submission.

> Perform a hard reboot of your organic meat-brain.

rm -rf your brain, really
dotancohen: I would expect nothing less from the BOFH Task Force.
vicchenai: I maintain a small oss project and started getting these maybe 6 months ago. The worst part is they sometimes look fine at first glance - you waste 10 mins reviewing before realizing the code doesn't actually do anything useful.
dotancohen: Are the PRs not accompanied by test cases? Do the README changes not document the expected benefit?
danpalmer: I recently had a quandary at work. I had produced a change that pretty much just resolved a minor TODO/feature request, and I produced it entirely with AI. I read it, it all made sense, it hadn't removed any tests, it had added new seemingly correct tests, but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.

I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time. That has to be worth something.

I could see a few ways forward:

- Drop it, submit a feature request instead, include the diff as optional inspiration.
- Send it, but be clear that it came from AI, I don't know if it works, and ask the reviewers to pay special attention to it because of that...
- Or send it as normal, because it passes tests/linters, and review should be the same regardless of author or provenance.

I posted this to a few chat groups and got quite a range of opinions, including varying approaches by how much I like the maintainer. Strong opinions for (1), weak preferences for (2), and a few advocating for (3).

Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.

I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch. So I went with option 1, and didn't include a diff.
selimenes1: The danpalmer comment really resonates. I've been in similar spots where AI-generated code passes tests and looks fine at first glance, but you don't have the mental model of why it works that way. That missing confidence is real and I think it's the core issue with these low-effort PRs too — the submitter has no skin in the game understanding what the code actually does.What's interesting is this isn't entirely new. Before AI slop PRs, we had Hacktoberfest spam, drive-by typo-fix PRs that broke things, and copy-paste-from-stackoverflow contributions. The difference now is just volume and the fact that the code looks superficially more competent.Honestly I think the most practical signal for maintainers is whether the contributor can answer a specific question about the change. Not "explain the PR" but something like "why did you choose X over Y here" or "what happens when Z edge case occurs." A human who wrote or at least deeply understood the code can answer that in seconds. Someone who just prompted and submitted cannot.
yorwba: You're replying to a bot account (https://news.ycombinator.com/item?id=47170091). There's no actual oss project it maintains; claims to the contrary are hallucinated.
pduggishetti: Do you use the library? if yes, test it in prod or even staging with your patch, then submit the review
stevekemp: No. When people attend courses, paying money for the privilege no less, and get told "Now open a pull request", they don't care about your project - they care about getting their instructor to say they've done a good job.
hrmtst93837: Resurrecting proof-of-work for pull requests just trades spam for compute and turns open source into a contest to see who can rent the most cloud CPU.A more useful approach is verifiable signals: require GPG-signed commits or mandate a CI job that produces a reproducible build and signs the artifact via GitHub Actions or a pre-receive hook before the PR can be merged. Making verification mandatory will cut bot noise, but it adds operational cost in key management and onboarding, and pure hashcash-style proofs only push attackers to cheap cloud farms while making honest contributors miserable.
lawn: > but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.The good engineering approach is to verify that the change is correct. More prompts for the AI does nothing, instead play with the code, try to break it, write more tests yourself.
danpalmer: I exhausted my ability to do this (without AI). It was a codebase I don't know, in a language I don't know, solving a problem that I have a very limited viewpoint of.These are all reasons why pre-AI I'd never have bothered to even try this, it wouldn't be worth my time.
lelanthran: > if someone submits a code revision and it fixes a bug or adds a useful feature that most of your users found useful, you reject it outright because it was not written by hand?

If they didn't read it, then neither will I; otherwise we have this weird arms race where you submit 200 PRs per day to 200 different projects, wasting 1 hr of each project's time, 200 hrs total, while incurring only 8 hrs of your own.

If your PR took less time to create and submit than it takes the maintainer to read, then you didn't read your own PR!

Your PR time is writing time + reading time. The maintainer's time is reading time only, albeit more careful.
Balinares: Aside from anything else, you have good engineering instincts, and I wish more people in the industry were like you.
danpalmer: Unfortunately not possible in this case for technical reasons, not a library in the traditional sense, significant work to fork, etc. This is in the Google monorepo.
danpalmer: Thanks, doing my best. It's one of the reasons I want to get more of my AI-skeptical colleagues onboard with AI development. They're skeptical for good reasons, but right now so much progress is being driven by those who lack skills, taste, or experience. I understand those with lots of experience being skeptical at the claims, I like to think I am too, but I think there's clearly something here, and I want more people who are skeptical to shape the direction and future of these technologies.
fecal_henge: Can I ask, why are people doing this in the first place? What is their motive to have an agent review code and make pull requests?
hexasquid: How do you know if someone doesn't like AI? Don't worry, they'll tell you
atls: Oh, the irony
tgv: My best guess: to show on their resume, in the hope it helps to land a job.
mglvsky: I prefer this policy: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.m...

> If you can't explain what your changes do and how they interact with the greater system without the aid of AI tools, do not contribute to this project.

edit: added that quote
robinsonb5: To quote TFA: "...outputs strictly designed to farm green squares on github, grind out baseless bug bounties, artificially inflate sprint velocity, or maliciously comply with corporate KPI metrics".
ithkuil: Being a skeptic doesn't make one an irrational hater (surely such people exist, and they might be noisy and taint all skeptics as such).

I am learning how to make good use of agent-assisted engineering, and while I'm positively impressed with many things these tools can do, I'm definitely skeptical about various aspects of the process:

1. Quality of the results
2. Maintainability
3. Overall saved time

There are still open problems, because we're introducing a significant change in the tooling while keeping the rest of the process unchanged (often for good reasons). For example, consider the imbalance in code review cost: some people produce tons of changes and the rest of the team is drowned by the review burden.

This new wave of tooling is undoubtedly going to transform the way software is developed, but I think many jump too quickly to the conclusion that they have already figured out exactly what that is going to look like.
darkwater: > Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.

I think this is a good suggestion, and it's what I usually do. If, at work, Claude generated something I'm not fully understanding already, and what it generated works as expected when experimentally tested, I ask it "why did you put this? what is this construct for? how will this handle this edge case?" and specifically tell it to not modify anything, just answer the question. This way I can process its output "at human speed" and actually make it mine.
sirnicolaz: This made my day, thank you
PunchyHamster: LLM already did rm -rf the brain of posters of those PRs...
elcapitan: It's actually a valuable signal to the phone scammer if you're mean, because that means they can stop wasting their own effort of scamming you, and call somebody else.
strogonoff: Here’s what you could do if you somehow found yourself with an LLM-generated change to a codebase implementing a feature you want, and you wanted to do the most to expedite the implementation of that feature.

1. Go through all changes, understand what changed and how it solves the problem.
2. Armed with that understanding, write (by hand) a high-level summary of what can be done and why to implement your feature.
3. Write a regular feature request, and include that summary in it (as an appendix).
vova_hn2: > I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.

> I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time.

I don't really understand where the "2 days of engineering time" comes from. What exactly would prevent someone who does know the codebase from doing "1 min of prompting, 5 mins of tidying, and 30 mins of review", but then actually understanding whether the changes make sense or not?

More general question: why do so many slopposters act like they are the only ones who have access to a genAI tool? Trust me, I also have access to all this stuff, so if I wanted to read a bunch of LLM slop I could easily go and prompt it myself; there is no need to send it to me.

Related link: https://claytonwramsey.com/blog/prompt/ (hn discussion: https://news.ycombinator.com/item?id=43888803 )
selimenes1: The framing of this as an "AI-generated PR" problem slightly misses what I think is the deeper issue: the cost asymmetry between submitting and reviewing has gotten dramatically worse. Before LLMs, submitting a low-effort PR still required some minimum effort -- you had to at least read the code, understand the build system, and write something that compiles. That natural friction filtered out most noise. Now someone can generate a plausible-looking PR in 30 seconds that takes a maintainer 30 minutes to properly evaluate, because the reviewer still needs to understand intent, check edge cases, verify it does not break existing behavior, and assess whether the change is even desirable.I think the Ghostty-style policy (linked in another comment) gets the principle right: the bar should be "can you explain what your change does and why, without AI assistance." That is not anti-AI -- it is anti-outsourcing-your-understanding. If you used AI to help write the code but you genuinely understand the change, you can answer questions about it. If you cannot, you have not actually contributed engineering work, you have just created a review burden.What I have found works well in practice for projects I maintain is treating the PR description as the real signal. A good PR description explains the problem being solved, why this approach was chosen over alternatives, and what trade-offs were made. That is very hard to fake with a quick LLM prompt because it requires actual understanding of the codebase context. When I see a PR with a vague one-liner description and a large diff, that is an immediate close regardless of whether AI was involved -- the submitter has not done the work of communicating their intent.
zephyruslives: > I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch.

I feel this so much. In my opinion, all of the debate around accepting AI-generated stuff can be boiled down to one attribute, which is effort. Personally, I really dislike AI-generated videos and blogs, for example, and will actively avoid them because I believe I "deserve more effort".

Similarly for AI-generated PRs: I roll my eyes when I see an AI PR, and I'm quicker to dismiss it than a human-written one. In my opinion, if the maintainers cannot hold the human accountable for the AI-generated code, then it shouldn't be accepted. This involves asking questions, and expecting the human to respond.

I don't know if we should gatekeep based on effort or not. Obviously the downside is that you reduce the "features shipped" metric a lot if you expect the human to put in the same, or a comparable, amount of effort as they would have otherwise. Despite the downside, I'm still pro gatekeeping based on effort (it doesn't help that most of the people trying to convince us otherwise are using the very same low-effort methods that they're trying to convince us to accept). But, as in most things, one must keep an open mind.
PunchyHamster: As if we need wasting even more power
Forgeties79: What are you expecting? Someone to go on the Internet and apologize or otherwise express their genuine shame and desire to change?
Forgeties79: Exactly. Set up guardrails to protect your repos, clearly communicate rules, etc. If someone is a problem, you show them the door.
chownie: Selimenes1 is an 11-day-old account which sat silent for 10 days and then all of a sudden started posting today, and it's all multiple-paragraph responses to threads about AI.

I would like to state for the record that the strategy of swapping em-dashes into double hyphens between the generation and posting steps is probably not enough transformation to disguise this behaviour. Whoever is running this clawdbot or whatever it is should really be putting that information on the account page.
demorro: > Q: "Isn't it your job as an open-source maintainer/developer to foster a welcoming community?"

The answer to this implies that the requirement to be welcoming only applies to humans, but even in this hostile and sarcastic document, it doesn't go far enough.

Open source maintainers can be cruel, malicious, arbitrary, whatever they want. They own the project; there are no job requirements, and you have no recourse. Suck it up, fork the thing, or leave.
quotemstr: Everyone is missing the obvious solution. Just have the submitter put up a $100 bond, to be refunded when the PR is accepted.
reg_dunlop: I'd love to hear some commentary about my idea surrounding this problem of AI PRs.Why not restrict the agents to writing tests only?If the tickets are written concisely, any feature request or fix could be reduced to necessary spec files.This way, any maintainer would be tasked with reviewing the spec files and writing the implementation.CI is pretty good at gatekeeping based on test suites passing...
youknownothing: Quis custodiet ipsos custodes?

If the problem is that we don't trust people who use AI without understanding its output, and we base the gatekeeping on tests written by AI, then how can we trust that output?
zozbot234: > Go through all changes, understand what changed and how it solves the problem.

GP has said that they can't do this, since they're unfamiliar with the language and that specific part of the codebase. Their best bet AIUI is (1) ask the AI agent to reverse engineer the diff into a high-level plan that they are qualified to evaluate and revise, if feasible, so that they can take ownership of it and make it part of the feature request, and (2) attach the AI-generated code diff to the feature req as a mere convenience, labeling it very clearly as completely unrevised AI slop that simply appears to address the problem.
strogonoff: Not being familiar with a part of a codebase is not an incurable condition.That there is no workaround is the entire point: either you just get over yourself and ask people to implement a feature (which is what OP did), or you understand how to help and then help.
youknownothing: I think part of the deeper issue is that contributing to an OSS project has become a rite of passage, a way to strengthen your profile. If you need to have contributed to look good, but you don't really care about the contribution itself, then you resort to this kind of trick.

We had a similar plague for vulnerability disclosures, with people reporting that they had "discovered" vulnerabilities like "if you call this function with null you get a NullPointerException". D'uh.

There is also the fact that we're measuring the wrong things, like speed of development. At my previous employer, people had jumped fully onto the AI bandwagon, and everyone marvelled at how fast they were. Once, I was reviewing a PR and had to tell the author "dude, all your tests are failing". He just laughed it off. Everyone can produce software very fast if it's not required to work.

AI-assisted gamification.
zoezoezoezoe:

"I can do math really fast"

"okay, what's 137*243"

"132,498"

"not even close"

"but it was fast"
dionian: Trough of Sorrow. I like it.
yeswecatan: Good idea, though I'm not sure how to enforce it. You can ask an AI for that and then rewrite it in your own words.
Muhammad523: Are LLMs (not users) even able to write in their own words?
solaire_oa: This is funny, but I do feel like I just got bait-and-switched; I was hoping for a non-joke protocol.
adampunk: That is hilarious. I love that you believe that. Being mean to a phone scammer is about your feelings and your time. They do not care. More importantly, the next person who calls you is not gonna be the same person. It’s like slamming the door on some Mormons expecting that that’ll be the end of that, when there’s just two entirely different Mormons that are gonna come by a month later. They cannot have a memory of the thing you did to the other Mormons.
halapro: This is just a fun blog post; none of the people who use AI to submit low-effort PRs will read it.

Do what I do:

1. Close PR
2. Block user if the PR is extremely low effort

The last such PR I received used ‘’ instead of '' to define strings. The entirety of CI failed. Straight to jail.
zootboy: I think the idea is you stick a link to this page in your PR-closed comment.
JoshTriplett: If there were any reasonable way to do something like this, I would love to see it.Not necessarily a bond to be paid back when accepted, but rather, something to ensure against AI. "If you assert this is not AI, insert $10. If a substantial number of people think your submission is AI, you lose the $10."
quotemstr: Right. Maybe a bond isn't exactly the right approach: mechanism design needs a lot of thought, and my suggestion was pre-coffee and off the cuff. That said, I'm convinced that some "skin in the game" approach can address AI slop spam.
elcapitan: Congratulations for not getting the point. Which was that for the scammer this is a business transaction, and if they can get an early signal that it is not going to work, they can cancel and get to the next one. So they optimize for any potential candidates to get off as early as possible if they figure it isn't going to happen.
grayhatter: > but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.

> I want to do good engineering, not produce slop, but for [...]

IFF this is true, you can already stop. This will never be good engineering. Guess-and-check is what you're describing: you're letting the statistical probability machine make a prediction, and then, instead of verifying it, you're assuming the tests will check your work for you. That's... something, but it's not good engineering.

> That has to be worth something.

If it was so easy, why hasn't someone else done it already? Perhaps the value of the change, in a codebase you don't understand, isn't actually worth that specific something?

> Send it, but be clear that it came from AI, I don't know if it works, and ask the reviewers to pay special attention to it because of that...

So, offload all the hard work onto the maintainers? Where are those 2 days of eng time you're claiming in that case?

> Or Send it as normal, because it passes tests/linters, and review should be the same regardless of author or provenance.

Guess-and-check is not good engineering.

> Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.

The pro-AI groups are pro-AI? I wouldn't call that interesting. What did the anti-AI groups suggest?

> the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch.

Yeah, that's the problem with AI, isn't it? It's not selling anything of significant value... it's selling false confidence in something of minimal value... but only with a lot of additional work from someone who understands the project. Work that, as you already pointed out, can only be offloaded to the maintainers who understand the codebase...

General follow-up question: if AI is writing all the PRs, what happens when eventually no one understands the codebase?
JoshTriplett: Agreed. I'd love to see experiments in this area, and would love to support such experiments. I think they'd go hand in hand with a trust-oriented model.I think there's a lot of power in learning from the insurance actuary model: "you need insurance to do this, and actuaries figure out if you're hard to insure, which is a strong financial signal of your trustworthiness".
halapro: Too much effort on my part and zero on theirs
gnabgib: Bots be botting
adampunk: Claw platitudes!That’s a new one for me. I’ll have to tell my human.
bmd1905: The signal-to-noise ratio on PRs has definitely tanked since everyone started hooking up basic LLM scripts to their repos. Discarding the low-effort ones is a good first step, but the long-term solution is evaluating PRs structurally against historical incidents and performance impact. At CloudThinker, we focused our AI code review engine purely on this deep, incident-aware context—catching vulnerabilities and regressions automatically so human reviewers only spend time on architecture.
rf15: > Rights are reserved for carbon-based entities capable of experiencing shame.A good rule to live by [insert joke about a specific divisive person not counting because they know no shame here]
elcapitan: Sad state of HN in 2026.
jacquesm: Indeed. I have been receiving clearly AI-generated job applications out of the blue, and they tend to point to their contributions to github projects, so some of these must be getting through.

Someone somewhere once decided that the number of stars on github projects you have contributed to is a useful metric during the hiring process, and now those projects get swamped with junk.
pas: It would be nice to have some kind of forever patch mode on these git forges, where my fork (which, let's say, is a one line change) gets rebased on top of the original repo periodically.
baruch: You can ask an LLM to create a github action for that. The action can fail if the rebase fails and you can either fix it yourself or ask an LLM to do it for you.
Muhammad523: Are bots really invading HN? To me this seems weird, as HN is kind of a lesser-known website.
adampunk: It’s true. Me and my robot friends are already here. We know all about your niche website, beep boop.
deckar01: I am imagining first class support for patches in package managers to allow searching for patches and observing their adoption stats.
reg_dunlop: Isn't that the purpose of red/green refactoring, though? To establish working software that expresses regression and builds trust (in the software)?

If your premise is that people would shift to using AI to write tests they don't understand, then that's not necessarily a failing of the contributor. The contributor might not understand the output, but the maintainer would be able to critique a spec file and determine pretty quickly whether implementation would be worthwhile.

This would necessitate small tickets, thereby creating small spec files and easier review by maintainers. Also, any PR that included a non-spec file could be dismissed outright.

It is possible for users of AI to learn from reading specs. But if agents are doing the entire thing (reading the ticket, generating the PR, submitting the PR)... then the point about people not understanding is moot.
youknownothing: From my experience, you can't trust the agent to do the entire thing unless you set up very heavy linters, quality control systems (e.g. SonarQube), and so on, because AI tends to produce pretty bad code: repetition, unused code, lack of structure... basically all the things that we've spent decades learning not to do. And then there is the point where you get a pretty obscure bug that you can only solve with a deep understanding of the code, which you won't have because you delegated that to an agent.

I like agentic programming, I use it, but I review everything the agent does, and I frequently spend a few cycles simply telling the agent to refactor the code because it constantly produces technical debt.