Discussion
I built a programming language using Claude Code
tines: Next you can let Claude play your video games for you as well. Gads we are a voyeuristic society aren’t we.
amelius: The AI age is calling for a language that is append-only, so we can write in a literate programming style and mix prompts with AI output, in a linear way.
andsoitis: > While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code. Programming languages are, after all, the interface that a human uses to give instructions to a computer. If you’re not writing or reading it, the language, by definition, doesn’t matter.
johnbender: In principle (and we hope in practice) the person is still responsible for the consequences of running the code, so it remains important that they can read and understand what has been generated.
johnfn: > If you’re not writing or reading it, the language, by definition doesn’t matter.

By what definition? It still matters whether I write my app in Rust vs., say, Python, because the Rust version still has better performance characteristics.
craigmcnamara: Now anyone can be a Larry Wall, and I'm not sure that's a good thing.
_aavaa_: I don’t agree with the idea that programming languages don’t have an impact on an LLM’s ability to write code. If anything, I imagine that, all else being equal, a language where the compiler enforces multiple levels of correctness would help the AI get to a goal faster.
jetbalsa: That is why TypeScript is the main one used by most people vibe coding. The LLMs do like to work around the type engine in it sometimes, but strong typing and linting can help a ton.
jaggederest: I think we're going to see a lot more of this. I've done a similar thing, hosting a toy language on Haskell, and it was remarkably easy to get something useful and usable in basically a weekend. If you keep the surface area small enough, you can now make a fully fledged, compiled language for basically every purpose you'd like, and coevolve the language, the code, and the compiler.
kerkeslager: > While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which are basically just unit tests by a different name). This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken. These "guardrails" are made of silly putty.
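To make that failure mode concrete, here is a minimal sketch (all names hypothetical): a broken tokenizer passes its generated unit test because the test only exercises the happy path.

```python
# Hypothetical broken tokenizer for a toy language: it splits on
# whitespace only, so "x=1+2" comes back as a single token.
def tokenize(source: str) -> list[str]:
    return source.split()

# A weak, generated-looking unit test. It passes, because its sample
# input happens to put spaces around every token.
def test_tokenize():
    assert tokenize("let x = 1") == ["let", "x", "=", "1"]

test_tokenize()
print(tokenize("x=1+2"))  # ['x=1+2'] - the bug the test never sees
```

The test and the code can be wrong in compatible ways, which is exactly why passing tests alone are a weak guardrail.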
andyfilms1: I've been wondering if a diffusion model could just generate software as binary that could be fed directly into memory.
entropie: Yeah, what could go wrong.
spelunker: Like everything generated by LLMs though, it is built on the shoulders of giants - what will happen to software if no one is creating new programming languages anymore? Does that matter?
marginalia_nu: Yeah, it's a rewarding project. Getting a language that kinda works is surprisingly accessible. Though we must be mindful that this is still the "draw some circles" panel. Producing the rest of the famous owl is, as always, the hard bit.
ramon156: AI-written code with a human-written blog post, that's a big step up. That said, it's a lot of words to say not a lot of things. Still a cool post, though!
ivanjermakov: > with a human-written blog post

I believe we're at a point where it's not possible to accurately decide whether text is completely written by a human, by a computer, or something in between.
wavemode: We're definitely not at that point. If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup, and/or prompt he used for a million dollars, since it's clearly far beyond the state of the art in terms of natural-sounding tone.
marssaxman: The constraints enforced in the language still matter. A language which offers certain correctness guarantees may still be the most efficient way to build a particular piece of software, even when it's a machine writing the code.

There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
danielvaughn: In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.
idiotsecant: I think I remember seeing research right here on HN that terse languages don't actually help all that much
craigmart: You can make an LLM sound very natural if you simply ask for it and provide enough text in the tone you'd like it to reproduce. Otherwise, it's obvious that an LLM with no additional context will try to stick to the tone the company aligned it to produce.
raincole: > every AI coding bot will learn your new language

If there are millions of lines on GitHub in your language. Otherwise, the 'teaching AI to write your language' part will occupy so much context that it's far less efficient than just using TypeScript.
iberator: Nope. You didn't write it. You plagiarized it. AI is bad
voxleone: In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.

Roughly: machine code → assembly → C → high-level languages → frameworks → visual tools → LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.

One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
aleksiy123: On the topic of LLMs not doing well with UI and visuals: I've been trying a new approach I call CLI-first.

Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first. It can directly call the CLI tool, and you can iterate on the design of whatever you are building quickly. You can get it to walk through the flows and journeys using the CLI prototype and iterate on it quickly. Your command structure will very roughly map to your resources or pages.

Once you are satisfied with the capability of the CLI tool (which may actually be enough, or just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend. All the while you can still tell it to use the CLI to test through the flows and journeys, against real tasks that you have, and iterate on it.

I did this recently for pulling some of my personal financial data and reporting it, and now I'm doing this for another TTS automation I've wanted for a while.
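A sketch of what that CLI-first skeleton might look like (all names, subcommands, and flags here are hypothetical), where each subcommand roughly maps to a future page or resource:

```python
import argparse
import json

# Hypothetical CLI-first prototype: subcommands stand in for the
# pages/resources a future UI would expose, so an LLM (or a human)
# can exercise the whole flow from the terminal first.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="finreport")
    sub = parser.add_subparsers(dest="command", required=True)

    pull = sub.add_parser("pull", help="fetch raw transaction data")
    pull.add_argument("--account", required=True)

    report = sub.add_parser("report", help="summarize by category")
    report.add_argument("--month", default="2024-01")

    return parser

def run(argv: list[str]) -> dict:
    # Returning plain dicts keeps each flow easy to script and test.
    args = build_parser().parse_args(argv)
    if args.command == "pull":
        return {"action": "pull", "account": args.account}
    return {"action": "report", "month": args.month}

if __name__ == "__main__":
    print(json.dumps(run(["report", "--month", "2024-02"])))
```

Because every flow is just an argv list, the LLM can "walk the journeys" by invoking the tool, and the same commands later become the API surface.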
shevy-java: That was step #1.Step #2 is: get real people to use it!
Insanity: That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords, it's in the naming of things like variables, classes, etc.

There are languages that are already pretty sparse with keywords. E.g., in Go you can write 'func add(a, b int) int', with no need to declare it public, static, etc. So combining a less verbose language with 'codegolfing' the variable names might be enough.
gf000: Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.
grumpyprole: Does this really test Claude in a useful way? Is building a highly derivative programming language a useful use case? Claude has probably indexed all existing implementations of imperative dynamic languages and is basically spewing slop based on that vibe. Rather than super flexible, super unsafe languages, we need languages with guardrails, restrictions and expressive types, now more than ever. Maybe LLMs could help with that? I'm not sure, it would certainly need guidance from a human expert at every step.
quotemstr: Those constraints can be enforced by a library too. Even humans sometimes make a whole new language for something that could be a function library. If you want strong correctness guarantees, check the structure of the library calls.

Programming languages function in large part as inductive biases for humans. They expose certain domain symmetries and guide the programmer towards certain patterns. They do the same for LLMs, but with current AI tech, unless you're standing up your own RL pipeline, you're not going to be able to get it to grok your new language as well as an existing one. Your chances are better asking it to understand a library.
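A toy sketch of that idea (hypothetical API): an ordering constraint like "open before read", which a new language might encode in its type system, can instead be checked by a small library at the call-structure level.

```python
# Hypothetical resource library that enforces a call-ordering
# constraint at runtime instead of via a new language: calling
# read() before open() is an illegal sequence and fails loudly.
class Resource:
    def __init__(self) -> None:
        self._opened = False

    def open(self) -> "Resource":
        self._opened = True
        return self

    def read(self) -> str:
        if not self._opened:
            raise RuntimeError("illegal call sequence: read() before open()")
        return "data"

print(Resource().open().read())  # the legal sequence works
```

An LLM that already knows Python can use this guardrail immediately, with no new grammar to learn.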
zahirbmirza: "Just one more prompt..." I can relate. Who else has been affected by this?
onlyrealcuzzo: > Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:

1) It maximizes local reasoning and minimizes global complexity.

2) It makes the vast majority of bugs / illegal states impossible to represent.

3) It makes writing correct, concurrent code as maximally expressive as possible (where LLMs excel).

4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level).

The idea is that it should be as easy as possible for an LLM to write (especially to convert other languages to), and as easy as possible for you to understand, while being almost as fast as absolutely perfect C code. By virtue of the design of the language, at the human review phase you have minimal concerns about hidden gotcha bugs.
idiotsecant: How does a programming language prevent the vast majority of bugs? I feel like we would all be using that language!
gf000: I agree with your questioning of it being capable of preventing bugs, but your second point is quite likely false -- we have developed a bunch of very useful abstractions in "research" languages 50 years ago, only to re-discover them today (no null, algebraic data types, pattern matching, etc).
abraxas: I agree with the sentiment but want to point out that the biggest driver behind UML was the enrichment of Rational Software and its founders. I doubt anyone ever succeeded in implementing anything useful with Rational Rose, but the Rational guys did have a phenomenal exit, and that's probably the biggest success story of UML.

I'm being slightly facetious, of course; I still use sequence diagrams and find them useful. The rest of its legacy, though, not so much.
thomasmg: I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.