Discussion
Appendix: Additional Results
love2read: The paper seems to fail to mention that clearly documented AI-generated PRs (especially autonomously created ones) tend to face a much higher bar for acceptance, hinging on the reviewer's attitude toward AI. With this in mind, I submit that all of the SWE-bench-passing PRs over a certain line-count threshold would not be merged if they were clearly noted as autonomous AI contributions.
refulgentis: Well, no: one of the first things it says is that reviewers were blind to human vs. AI.
yorwba: The comment you're replying to is talking about a hypothetical scenario. In any case, the blinding didn't stop Reviewer #2 from calling out obvious AI slop. (Figure 5)
collabs: I feel like I don't have the context for this conversation, but if slop is obviously slop, we should block it. If you look at the comment, it just says what the code following it does. It doesn't matter whether a human or a machine wrote it; it is useless. It is actually worse than useless, because if someone needs to change the code, they now need to change two things. In that sense, you have doubled the work for anyone who touches the code after you, and for what benefit?
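To make that concrete, here is a hypothetical snippet (mine, not taken from any PR in the study) showing the kind of comment being criticized:

    # increase the retry counter by one
    retry_count += 1

If the line later becomes `retry_count += backoff_step`, the comment silently becomes wrong unless someone edits it too, which is exactly the doubled maintenance cost described above.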
nubg: > mid-2024 agents
Is this a post about AI archeology?
zozbot234: The point is that AI models do these kinds of things all the time. They're not really all that smart or intelligent, they just replicate patterns or boilerplate and then iterate until it sort of appears to work properly.
languid-photic: makes sense! we wrote something yesterday about the weaknesses of test-based evals like SWE-bench [1]. they are definitely useful, but they miss the things that are hard to encode in tests: spec/intent alignment, scope creep, adherence to codebase patterns, team preferences (risk tolerance, etc.). and those factors are really important, which means test-based evals should be treated as weak/directional priors rather than as definitive measures of real-world usefulness.
[1] https://voratiq.com/blog/test-evals-are-not-enough/
p1necone: They might have tried, but this would be pretty hard to achieve in practice, especially for the older/worse models. For changes that do more than alter a couple of lines, LLM output can be very obvious. Stripping all comments from the changeset might go a long way toward making it more blind, but then you're missing context that you kind of need to review the code properly. A rough sketch of that stripping idea is below.
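A minimal sketch of comment-stripping for blinding, assuming Python sources with `#` comments; the function name `strip_diff_comments` and the naive regex are my own, and a robust version would use the tokenize module on full files to avoid mangling `#` characters inside string literals:

    import re
    import sys

    def strip_diff_comments(diff_text: str) -> str:
        """Remove '#' comments from the added lines of a unified diff."""
        out = []
        for line in diff_text.splitlines():
            # Only touch added lines; '+++' is a file header, not a change.
            if line.startswith("+") and not line.startswith("+++"):
                line = re.sub(r"\s*#.*$", "", line)
                if line == "+":  # the line was a pure comment; drop it
                    continue
            out.append(line)
        return "\n".join(out)

    if __name__ == "__main__":
        # Usage: python strip_diff_comments.py < changeset.diff
        sys.stdout.write(strip_diff_comments(sys.stdin.read()))

Even with something like this, blank-line rhythm, naming conventions, and over-defensive error handling can still give the model away.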