Discussion
Computer Science > Software Engineering
verdverm: Really long-term task benchmark showing significant improvements in very recent models, while also showing really bad regression rates across the board.
challengerVIE: To me using agents daily, the long term vision with maintainability in mind really makes the difference between us humans and agents, I like the idea. However evaluating long term maintainability over an average of just 500 loc changes does not sound like long term maintainability being measured here
KronisLV: > The benchmark comprises 100 tasks, each corresponding on average to an evolution history spanning 233 days and 71 consecutive commits in a real-world code repository.This seems like a really cool thing to benchmark! Technically it'd be possible to take GitHub repos that the AI orgs probably already have, cross-reference the code against the issues and regressions, and train/validate on that.The dataset would need to be way bigger to get close to the likes of SWE-bench: https://www.swebench.com/original.html"Vibe coded stuff gets hard to maintain and will end up buggy." Yeah, so make models that deal with that better, optimize for maintainability and consistency.Cool to see Claude doing decently though!
woadwarrior01: > Cool to see Claude doing decently though!The scales do seem to be tipped in its favor (cf: my other comment in this thread).
woadwarrior01: Interesting benchmark.I can't help but notice that they're benchmarking Opus 4.6 (Anthropic's latest and greatest model) against GPT-5.2 (which is three generations behind OpenAI's latest coding models: GPT-5.2-Codex, GPT-5.3-Codex and the latest GPT-5.4).
aurareturn: As far as I know, OpenAI did not release 5.3 Codex in their API. You can only use it with Codex CLI or app.
mentalgear: Claude wins by a large margin* Claude Opus 4.6 : 0.71* Claude Opus 4.5 : 0.51* KIMI-K2.5 : 0.37* GLM-5 : 0.36* GPT-5.2 : 0.23Note: later GPT versions seem to be only available within openAi's proprietary codex cli, so can't be tested.---Of course, the interesting follow-up question is: How well perform these models with added agent tooling ("harness") ?Maybe someone has tokens to burn and can run a matrix of agent tools over the top models and provide the results?
50lo: It’d be interesting to see this compared against a human baseline — e.g., a competent engineer with a fixed time budget on the same tasks.
re-thc: 5.2 and 5.2 Codex is arguably the same gen.
PunchyHamster: I'm sure with benchmarks like these future LLMs will be optimized to hide regressions by "fixing" test framework too
baalimago: It's there, you just need to use it with the responses API. Set model field to 'gpt-5.3-codex'
mike_hearn: It's the other way around - Claude Code is the proprietary one. Codex CLI is open source:https://github.com/openai/codexYou can definitely access the latest models via the API. That's how Codex CLI works.
baalimago: Replace "Agent" with "Employee" and apply the same algorithm. Evaluate employee efficiency. Profit?
gizmodo59: Unfortunately the paper doesn’t include gpt 5.3 which was released around the same time as opus 4.6 and also gpt 5.4 few days back. Both are available via api https://developers.openai.com/api/docs/models/gpt-5.3-codexAnd further
KronisLV: I'd unironically (and privately) want to do that with the code of both myself and those around me - to maybe see who I should listen more to, as well as who maybe less (ideally down to the feature level), because everyone has opinions, sometimes loud ones, but some approaches lead to a lot of churn and issues over the years.
jbergqvist: Would have loved to see a more detailed breakdown of performance by task type. The commit metadata is right there, seems straightforward to tag commits as feature vs refactor vs bug fix vs API change and report per-category numbers.
jsemrau: I reached the same conclusion. I tried using both for my personal investment ambient using agent-pair programming to build and agentic intelligence layer for stocks and the difference between the 2 models if astounding.
coder_decoder: These benchmarks measure something real. But they miss what actually kills you in day-to-day review.SWE-CI tests whether an agent can babysit a CI pipeline - fix failing tests, resolve merge conflicts, keep the build green. Fine. But that's not code review. Code review is when someone uses raw JWT instead of your custom auth middleware. Or when the architecture slowly drifts and nobody notices until it's too late. Or when something isn't technically a bug but opens up attack surface. None of that shows up in benchmarks.I've been running Claude and GPT side-by-side on the same PRs for months now. They genuinely have different strengths. Claude picks up deeper architectural stuff and gets project context better. GPT is faster at surface-level security flags. Gemini handles huge diffs without choking. But none of them - zero - consistently catches "we don't do things that way here." drewbitt mentioned the same thing in another thread this week.And honestly? Model quality isn't even the bottleneck anymore. The real problem is that every AI review tool auto-publishes everything. Developers learn to ignore 60% of comments because they're noise. The gap moved from "can AI find bugs" to "can humans trust the output."
woeirua: Uh, Opus 4.6 avoids introducing regressions 75% of the time?
calvinmorrison: I've been.... and they genuinely... And honestly? the real x is that. it went from X to Y.dang permaban this AI slop please
jlebensold: I've been building a similar loop with jetty.io for the last few months exclusively focused on data science workflows. I think that there's a lot of hill-climbing that can be accomplished by having a clear runbook.
andai: >if tested via the codex cli "harness" it wouldn't be a pure model-to-model comparison any more.Well that's already not a very fair comparison, we've known for years (one of the early-ish LLM papers, maybe someone knows which one) that prompting makes an enormous difference on agent performance, and most strikingly, the same prompt that massively boosts performance on one model, can massively reduce performance on another.So you already need to fine-tune the prompts for the model, if you want anything approaching best results.Now what's really amusing is that if you run models without their official harness, they can actually do way better on some benchmarks! [0] e.g. On Terminal Bench 2, Claude Opus 4.6 goes from #33 (Claude Code) to #5 (custom harness). Similar results for Codex.Now, this is "for this one very specific benchmark", but I still thought it was funny, since you'd expect "the harness made by the same company" to be the best for all tasks, but that's clearly not the case. (For specific tasks, it's actually quite trivial to outperform a general purpose harness.)[0] https://www.tbench.ai/leaderboard/terminal-bench/2.0
climike: We are working on supporting agent harnesses @ www.cliwatch.com, so both 1. LLM model as well 2. LLM model + harness performance can be evaluated against your software/CLI. We also support building evals against your doc suite. End result is that you’ll feel more comfortable shipping CLIs that work for your agentic users!:)
jasonjmcghee: Sure... But one is fine-tuned for what they are testing and one not.
jasonjmcghee: > model vendors know best on giving the best harnessThis was only true for Claude Code for a while. Codex was poor and Gemini was unusable.Since then Codex has gotten quite good.
verdverm: So 1/4 times it does not introduce a regression. That's still pretty bad imo. If 1/4 commits introduced regressions, what would your team do?
pizlonator: > and if tested via the codex cli "harness" it wouldn't be a pure model-to-model comparison any more.But the interesting comparison when evaluating coding agent capabilities is to evaluate the offerings given to users.So this means comparing Claude Code to Codex to whatever CLI tools Kimi, GLM, and others give you.And it might mean throwing Cursor, OpenCode, Amp, Pi, mini-swe-agent, etc into the mix
pizlonator: gpt-5.3 was not accessible via API, at least for meBut it was in codex
notduncansmith: You overestimate many teams I think.
agent5ravi: The resolve rate numbers are interesting but I keep coming back to the regression question. In my experience doing code review on a real codebase, the hard part of maintenance is not fixing the thing that broke. It is understanding whether your fix preserves the invariants the original author had in mind but did not write down.A benchmark that checks CI pass/fail captures the first part. It cannot capture the second. An agent that makes CI green by weakening an assertion or bypassing a check will score well here but create a time bomb.The monorepo point from yuyuqueen hits this. When the agent can see the full dependency graph, it is less likely to fix something locally while breaking a downstream assumption. The biggest maintenance failures I have seen are not wrong logic. They are fixes that are locally correct but violate an unwritten contract between components.
westurner: > It is understanding whether your fix preserves the invariants the original author had in mind but did not write down.This may also be the limit to the quality of an automated port to another language. What isn't encoded as automated tests or manual test procedure cannot be verified.So often I'm amazed what it's possible to accomplish from a prompt that's certainly insufficient with insufficient context. "It should have been necessary to specify more context there," or "I would have thought that it wasn't possible to do that without reading in more context than just one source code file," and then a few prompts later, "there's where we failed for trying to skimp on context"To prevent architectural rework as a human developer requires substantial ahead of time codebase review.Are AGENTS.md files the best place to summarize more comprehensive codebase review and useful dense context like guidelines for architectural components in order to avoid rework?
rurban: The zero regression rate graph at the end is exactly my experience. Only Opus is useful right now, the rest are juniors.
rekornode: CI pass/fail captures regression, but there's a layer beneath it that benchmarks can't touch: what exactly did the agent submit to each external API, and can you prove it after the fact? In the benchmark context this doesn't matter everything runs locally. In production it does. The agent calls a third-party service at 2am, the service claims it returned an error, your agent retried and billed you twice. Your logs say one thing, their logs say another. The integrity problem isn't just "did the code work" it's "what was the exact request/response pair, timestamped, by whom, provably." CI solves the first. Something else has to solve the second.