Discussion
EsoLang-Bench
deklesen: Mhh... my hunch is that part of this is that all Python keywords are 1 token, I assume. And for those very weird languages, tokenization might make it harder to reason over those tokens. Would love to see how the benchmark results change if the esoteric languages are tweaked a bit to have 1-token keywords only.
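A toy illustration of the hunch: with a greedy longest-match tokenizer over a small hypothetical vocabulary (invented here for illustration; real BPE vocabularies differ), Python keywords survive as single tokens while esolang operator soup shatters into one token per character.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary.
    Characters not covered by the vocabulary fall back to single tokens."""
    tokens, i = [], 0
    max_len = max(len(t) for t in vocab)
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if piece in vocab or length == 1:
                tokens.append(piece)
                i += length
                break
    return tokens

# Hypothetical vocabulary: common Python keywords are merged into
# single tokens, but Brainfuck-style punctuation is not.
vocab = {"while", "def", "return", "if", "else", " ", ":"}

print(greedy_tokenize("while if else", vocab))  # keywords stay whole
print(greedy_tokenize("+[>+<-]", vocab))        # one token per symbol
```

Under this toy model the Python snippet tokenizes to 5 pieces (keywords and spaces) while the 7-character Brainfuck snippet becomes 7 separate tokens, which is the kind of fragmentation the comment is speculating about.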
bwestergard: I'm shocked to see how poorly these models, which I find useful day to day, do in solving virtually any of the problems in Unlambda. Before looking at the results my guess was that scores would be higher for Unlambda than any of the others, because humans who learn Scheme don't find it all that hard to learn about the lambda calculus and combinatory logic. But the model that did the best, Qwen-235B, got virtually every problem wrong.
__alexs: They are also weirdly bad at Brainfuck, which is basically just a subset of C.
chychiu: Considering that Brainfuck only has 8 characters and models are scoring at 6.2%, I don't think tokenization is the issue.
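For reference, the entire Brainfuck instruction set is eight commands, each corresponding to a simple C statement (`>` is `ptr++`, `+` is `(*ptr)++`, `[`/`]` are `while (*ptr) { ... }`), which is the sense in which it is "a subset of C". A minimal interpreter sketch:

```python
def brainfuck(code, inp=""):
    """Minimal Brainfuck interpreter: 8 commands, a 30,000-cell byte tape."""
    tape, ptr, out, inp = [0] * 30000, 0, [], list(inp)
    jumps, stack = {}, []
    for i, c in enumerate(code):                 # pre-match brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == ">":   ptr += 1                               # ptr++
        elif c == "<": ptr -= 1                               # ptr--
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256      # (*ptr)++
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256      # (*ptr)--
        elif c == ".": out.append(chr(tape[ptr]))             # putchar(*ptr)
        elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0  # *ptr = getchar()
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]      # while (*ptr) {
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]      # }
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65 = ASCII "A"
print(brainfuck("++++++++[>++++++++<-]>+."))  # prints "A"
```

The whole semantics fits in one screen, yet writing correct Brainfuck still requires tracking tape state and loop invariants by hand, which is arguably the reasoning skill the benchmark is probing.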
orthoxerox: > Frontier models score ~90% on Python but only 3.8% on esoteric languages, exposing how current code generation relies on training data memorization rather than genuine programming reasoning.

I would probably score about the same, does this prove I also rely on training data memorization rather than genuine programming reasoning? Or does this simply show that esolangs are hard to reason in by design? A more honest approach would use a "real", but relatively unpopular, language. Make them use CoffeeScript or Ada or PL/I or Odin or that other systems programming language that that very opinionated guy is implementing on top of QBE.
iloveoof: Try MUMPS, widely used but with little training data online. Probably less than for some esolangs.