Discussion
From 0% to 36% on Day 1 of ARC-AGI-3
lairv: Note that this uses a harness, so it doesn't qualify for the official ARC-AGI-3 leaderboard. According to the authors, the harness isn't ARC-AGI specific, though: https://x.com/agenticasdk/status/2037335806264971461
falcor84: I for one think that harness development is perhaps the most interesting part at the moment and would love to have an alternative leaderboard with harnesses.
sanxiyn: There is. The official leaderboard is without harnesses, and the community leaderboard is with harnesses. Read the ARC-AGI-3 technical paper for details.
falcor84: I went through the technical paper again, and while they explain why they decided against harnesses, I disagree with them - my take is that if harnesses are overfitting, then they should be penalized on the hidden test set. Anyway, searching both ARC-AGI's paper and website, and directly on Kaggle, I failed to find a with-harness leaderboard; can you please give the link?
sanxiyn: Here it is: https://arcprize.org/leaderboard/community
krackers: > this uses a harness
This seems like an arbitrary restriction. Tool-use requires a harness, and their whitepaper never defines exactly what counts as a valid one.
modeless: On the public set of 25 problems. These are intended for development and testing, not evaluation. There are 110 private problems for actual evaluation purposes, and the ARC-AGI-3 paper says "the public set is materially easier than the private set".
SchemaLoad: Benchmarks on public tests are too easy to game. The model owners can just incorporate the answers into the dataset. Only the private problems actually matter.
osti: Don't the chat versions of ChatGPT and Gemini also have interleaved tool calls? Do those also count as having harnesses?
sanxiyn: In this case the code is public, so you can see they aren't cheating in that sense.
SchemaLoad: Once the model has seen the questions and answers in the training stage, the questions are worthless. Only a test using previously unseen questions has merit.
steve_adams_86: I'm so into harness development right now. Once it clicked that harnesses can bring more safety and determinism to LLMs, I started to wonder where I'd need that and why (vs MCP or just throwing Claude Code at everything), and my brain gears have been turning endlessly since then. I'd love to see more of what people do with them. My use cases are admittedly lame and boring, but it's such a fun paradigm to think and develop around.
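For illustration, here is a minimal sketch of the kind of harness loop described above (call_llm and the action set are hypothetical stand-ins, not from any project mentioned in this thread; the point is that the model only proposes, while the harness deterministically validates before anything executes):

    import json

    ALLOWED_ACTIONS = {"up", "down", "left", "right", "noop"}  # hypothetical action space

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any provider SDK call."""
        raise NotImplementedError("plug in your provider's completion call here")

    def harness_step(observation: str) -> str:
        # One harness iteration: prompt the model, then validate its
        # output before anything reaches the environment.
        raw = call_llm(
            'You control a grid game. Reply with JSON like {"action": "up"}.\n'
            f"Current state:\n{observation}"
        )
        try:
            action = json.loads(raw).get("action", "noop")
        except (json.JSONDecodeError, AttributeError):
            action = "noop"  # malformed output is rejected, never executed
        # Safety and determinism live here, not in the model:
        return action if action in ALLOWED_ACTIONS else "noop"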
lambda: They aren't training new models for this. This is an agent harness for Opus 4.6.
measurablefunc: All traffic is monitored; all signal sources are eventually incorporated into the training set in one way or another. The person you're responding to is correct: even a single API call to any AI provider is sufficient to discount future results from the same provider.