Discussion
A bug on the dark side of the Moon
josephg: Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.
embedding-shape: Any specific sections that stick out? Juxt has had really great articles in the past, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted to. While I haven't completely read this article yet, it'd surprise me if they just let LLMs write articles for them today.
ModernMech: I'm starting to develop a physiological response when I recognize AI prose. Just an overwhelming frustration, as if I'm hearing nails on a chalkboard silently inside my head.
voodooEntity: I feel ya... and I have to admit, in the past I tried it for one article on my own blog, thinking it might help me express myself. But when I read that post now, I don't even like it myself; it's just not my tone. So I decided not to use any LLM for blogging again, and even though it takes a lot more time without one (I'm not a very motivated writer), I prefer to release something that I wrote myself rather than some LLM stuff that I wouldn't read myself.
croemer: Here's one tell-tale of many: "No alarm, no program light."

Another one: "Two instructions are missing: [...] Four bytes."

One more: "The defensive coding hid the problem, but it didn’t eliminate it."
gcr: For what it’s worth, Pangram thinks this article is fully human-written: https://www.pangram.com/history/f5f68ce9-70ac-4c2b-b0c3-0ca8...
xmcqdpt2: Then pangram isn't very good, because that article is full of Claude-isms.
tapoxi: This is my exact writing style - I'm screwed.
jwpapi: Has someone verified this was an actual bug?

One of AI's strengths is definitely exploration, e.g. in finding bugs, but it still has a high false positive rate. Depending on the context, that matters or it won't. Also, one has to be aware that there are a lot of bugs that AI won't find but humans would.

I don't have the expertise to verify this bug actually happened, but I'm curious.
NiloCK: This is the top reply on literally every HN post now and we should discourage it. It is:

- sneering
- a shallow dismissal (please address the content)
- curmudgeonly
- a tangential annoyance

All things discouraged in the site guidelines. [1] Downvoting is the tool for items that you think don't belong on the front page. We don't need the same comment on every single article.

[1] - https://news.ycombinator.com/newsguidelines.html
cameronh90: It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely.

What's making it even more difficult to tell now is that people who use AI a lot seem to be actively picking up some of its vocabulary and writing-style quirks.
monooso: You have no evidence that it was.
monooso: That's just writing. I frequently write like that.

This insistence that certain stylistic patterns are "tell-tale" signs that an article was written by AI makes no sense, particularly when you consider that whatever stylistic tics an LLM may possess are a result of it being trained on human writing.
gcr: See also: “I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me” by Marcus Olang', https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...

For what it’s worth, Pangram reports that Marcus’ article is 100% LLM-written: https://www.pangram.com/history/640288b9-e16b-4f76-a730-8000...
croemer: In theory, it wouldn't be too hard to settle the question of whether he used ChatGPT to write it: get Olang' to write a few paragraphs by hand, then have people judge blindly whether it's the same style as the article, i.e. which one sounds more like ChatGPT.
DiffTheEnder: Is it possible at all for a tool to know with high confidence whether something is AI-written? LLMs can be tuned/instructed to write in an infinite number of styles. I don't understand how these tools exist.
yodon: This is so insightfully and powerfully written I had literal chills running down my spine by the end.What a horrible world we live in where the author of great writing like this has to sit and be accused of "being AI slop" simply because they use grammar and rhetoric well.
dotancohen: I was completely riveted the whole read. The description of Collins' dilemma is the first time I've seen an actual real world scenario described that might cause him to return to Earth alone.If an LLM wrote that, then I no longer oppose LLM art.
croemer: I doubt you write like that. Where can I find your writing other than your comments which don't read like that?
gcr: The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching editors to make their first Wikipedia edits: https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe...

They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.

I personally think it’s an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.
embedding-shape: The times I've written articles, they have gone through multiple rounds of review (by humans) with countless edits each time before being published. I wonder if I'd pass that test in those cases. My initial drafts, with my scattered thoughts, are usually very different from the published end result, even without involving multiple reviewers and editors.
masklinn: > Downvoting is the tool for items that you think don't belong on the front page.

You can’t downvote submissions. That’s literally not a feature of the site. You can only flag submissions, if you have more than 31 karma.
riverforest: Software that ran on 4KB of memory and got humans to the moon still has undiscovered bugs in it. That says something about the complexity hiding in even the smallest codebases.
embedding-shape: > because that article is full of Claude-isms

Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who helped write human texts are suddenly accused of plagiarizing LLMs" thing, but it seems backwards so far, and like a low-quality criticism.
snapcaster: Real talk. You're not just making a good point -- you're questioning the dominant paradigm
timdiggerm: So you're saying Pangram isn't worth much?
jnwatson: Horrible
MeteorMarc: Are there any consequences for the Artemis 2 mission (ironic)?
whiplash451: My guess is that in such low-memory regimes, program length is only loosely correlated with bug rate.

If anything, if you try to cram a ton of complexity into a few KB of memory, the likelihood of introducing bugs becomes very high.
throwaway27448: It's not even clear if AI was used to find the bug: they mention modeling the software with an "AI-native" language, whatever that means. What is also not clear is how they found themselves modeling the gyro software of the Apollo code to begin with.

But I do think their explanation of the lock acquisition and the failure scenario is quite clear and compelling.
croemer: These are just some of the good examples I found. My hunch that this is substantially LLM-generated is based on more than that.

In my head it's like a Bayesian classifier: you look at all the sentences and judge whether each is more or less likely to be LLM- vs human-generated. Then you add prior information, like the fact that the author did the research using Claude, which increases the likelihood that they also used Claude for writing.

Maybe your detector just isn't that sensitive (yet), or maybe I'm wrong, but I have pretty high confidence that at least 10% of the sentences were LLM-generated.

Yes, the stylistic patterns exist in human writing, but RLHF has increased their frequency. Also, LLM writing has a certain monotonicity that human writing often lacks. Which is not surprising: the machine generates more or less the most likely text in an algorithmic manner. Humans don't. They write a few sentences, then get a coffee, sleep, write a few more. That creates more variety than an LLM can.

Fun exercise: https://en.wikipedia.org/wiki/Wikipedia:AI_or_not_quiz
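The Bayesian-classifier framing above can be sketched as a toy calculation. The cues and likelihood ratios below are invented for illustration, not measured from any real detector:

```python
import math

# Hypothetical per-cue likelihood ratios: how much more likely each
# stylistic cue is under "LLM-written" than under "human-written".
# The cues and numbers are illustrative only.
CUE_LIKELIHOOD_RATIOS = {
    "it's not x, it's y": 3.0,
    "em dash": 1.5,
    "rhythmic triplet": 2.0,
}

def posterior_llm_probability(cues_found, prior_llm=0.5):
    """Naive Bayes-style update: start from a prior probability that the
    text is LLM-written, multiply in one likelihood ratio per observed
    cue (added in log-odds space), and convert back to a probability."""
    log_odds = math.log(prior_llm / (1 - prior_llm))
    for cue in cues_found:
        log_odds += math.log(CUE_LIKELIHOOD_RATIOS.get(cue, 1.0))
    odds = math.exp(log_odds)
    return odds / (1 + odds)
```

With a neutral 0.5 prior and the two cues "it's not x, it's y" and "em dash", the odds become 1 × 3.0 × 1.5 = 4.5, i.e. a posterior of about 0.82.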
monooso: Here's an alternative way of thinking about this...

Someone probably expended a lot of time and effort planning, thinking about, and writing an interesting article, and then you stroll by and casually accuse them of being a bone-idle cheat, with no supporting evidence other than your "sensitive detector" and a bunch of hand-wavy nonsense that adds up to naught.
kenjackson: While I agree with the sentiment, using AI to write the final draft of the article isn’t cheating. People may not like it, but it’s more a stylistic preference.
jll29: > It's not even clear if AI was used to find the bug: they mention modeling the software with an "ai native" language, whatever that means.

Could the "AI native language" they used be Apache Drools? The "when" syntax reminded me of it: https://kie.apache.org/docs/10.0.x/drools/drools/language-re...

(Apache Drools is an open-source rule language and interpreter for declaratively formulating and executing rule-based specifications; it integrates easily with Java code.)
xmcqdpt2: I'm sure some human writers would write:

> The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.

> The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.

> *Tests verify the code as written; a behavioural specification asks what the code is for.*

However, this is a blog post about using Claude for XYZ, from an AI company whose tagline is "AI-assisted engineering that unlocks your organization's potential".

Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.
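The class of defect the quoted specification targets, a cleanup routine that handles every resource it was designed for while one path through the mode switch still leaves a flag set, can be shown with a toy sketch. The names and logic here are invented for illustration; this is not the actual AGC code:

```python
# Toy illustration (not the actual AGC code) of the bug class described
# above: the normal path cleans up correctly, but one path through the
# mode switch forgets to clear the flag.
LGYRO_BUSY = "lgyro_busy"

def switch_imu_mode(state, new_mode, abort=False):
    """Switch IMU modes; the abort path forgets to clear LGYRO_BUSY."""
    if abort:
        state["mode"] = "standby"
        return state           # BUG: LGYRO_BUSY is never cleared here
    state[LGYRO_BUSY] = False  # normal path cleans up correctly
    state["mode"] = new_mode
    return state
```

A reviewer reading only the cleanup code sees every flag handled; only by starting from the flag and enumerating all paths, as the quoted specification does, is the abort path caught.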
rudhdb773b: Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI-generated content.

It seems like almost every discussion has at least one person complaining about "AI slop" in either the original post or the comments.
Gigachad: HN has gotten to the point where it’s not even worth clicking the link, because of course it’s AI slop.

There is some real content in the haystack, but we almost need some kind of curator to find and display it, rather than a vote system where most people vote on the title alone.
brookst: If you’re looking for a place that surfaces only human-written content regardless of whether it’s interesting, rather than interesting content regardless of how it was written, HN is not the place.There might be a market for your alternative though. Should be easy enough to build with Claude Code.
ChicagoBoy11: For anyone who liked this, I highly suggest you take a look at the CuriousMarc YouTube channel, where he chronicles lots of efforts to preserve and understand several parts of the Apollo AGC, with a team of really technically competent and passionate collaborators.

One of the more interesting things they have been working on is a potential re-interpretation of the infamous 1202 alarm. As of this writing, it is popularly described as something related to nonsensical readings from a sensor which could be (and were) safely ignored in the actual moon landing. However, if I remember correctly, some of their investigation revealed that there were actually many conditions under which that error would have been extremely critical and would likely have doomed the astronauts. It is super fascinating.
deepsun: And that's why it's harder (or easier?) to make the same landing again: we're taking far fewer chances. Today we know of way more failure modes than we did back then.
pooloo: Yet here we are, compounding the issues by adding more and more layers to these systems... The higher-level the systems become, the more security risks we take.
xmcqdpt2: To start, this is more or less an advertising piece for their product. It's pretty clear that they want to sell you Allium. And that's fine! They're allowed! But even if it was written by a human, they were compensated for it. They didn't expend lots of effort and thinking; it's their job.

More importantly, it's an article about using Claude from a company built around using Claude. I think on balance it's very likely that they would use Claude to write their technical blog posts.
monooso: > They didn't expend lots of effort and thinking, it's their job.

Your job doesn't require you to think or expend effort?
embedding-shape: > Do you really think they spent the time required to actually write a good article by hand?

Given that I've been familiar with Juxt for a long time, have used plenty of their Clojure libraries in the past, and hung out with people from Juxt even before LLMs were a thing: yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.

Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade if not more.
bakugo: If the content was interesting, the author would've written about it himself.

By asking AI to write the article for you, you're asserting that the subject matter is not interesting enough to be worth your time to write, so why would it be worth my time to read?
buredoranna: Still my all-time favorite snippet of code:

    TC      BANKCALL        # TEMPORARY, I HOPE HOPE HOPE
    CADR    STOPRATE        # TEMPORARY, I HOPE HOPE HOPE
    TC      DOWNFLAG        # PERMIT X-AXIS OVERRIDE

https://github.com/chrislgarry/Apollo-11/blob/master/Luminar...
TruffleLabs: This is just writing; terse maybe and maybe not grammatically correct, but people write like that.
croemer: It's not just terseness, it's the rhythm, and the "it's not x, it's y" construction.

In fact, the latter is the opposite of terseness. LLMs love to tell you what things are not, way more than people do. See https://www.blakestockton.com/dont-write-like-ai-1-101-negat...

(The irony that I started with "it's not just" isn't lost on me.)
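The "it's not X, it's Y" construction discussed here is mechanical enough to count with a single regular expression. This is a rough, illustrative pattern, not a real detector; actual classifiers would need far more than one cue:

```python
import re

# Rough, illustrative regex for the "it's not X, it's Y" construction.
# Matches "it's/it is not [just] <X>, it's/it is <Y>" case-insensitively.
NOT_X_ITS_Y = re.compile(
    r"\b(?:it's|it is)\s+not\s+(?:just\s+)?[^,.;]{1,60},\s*(?:it's|it is)\b",
    re.IGNORECASE,
)

def count_negation_pattern(text):
    """Count occurrences of the 'it's not X, it's Y' construction."""
    return len(NOT_X_ITS_Y.findall(text))
```

For example, `count_negation_pattern("It's not just terseness, it's the rhythm.")` returns 1, while plain declarative prose returns 0.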
wk_end: > (The irony that I started with "it's not just" isn't lost on me)

But an LLM wouldn't write "It's not just X, it's the X and Y". No disrespect to your writing intended, but adding that extra clause adds just the slightest bit of natural slack to the flow of the sentence, whereas everything LLMs generate comes out like marketing copy that's trying to be as punchy and cloying as possible at all times.
360MustangScope: I hate that I can’t write em dashes freely anymore without people accusing the writing of being AI-generated. Even though they are perfect for writing down thoughts and notes.
croemer: I have nothing against em dashes. As long as your writing is human, experienced readers will be able to tell it's human. Only less experienced ones will use all or nothing rules. Em dashes just increase the likelihood that the text was LLM generated. They aren't proof.
brookst: That nuance is lost on the majority of anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.“An em dash… they’re a witch!”… “it’s not just X, it’s Y… they’re a witch!”
andersonpico: > anti-AI folks who've learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.

That's a strawman alright; all the comments complaining about how they can't use their writing style without being ganged up on have positive karma from my angle, so I'm not sure the "positive social reactions" really align with your imagination. Or does it only count when it aligns with your persecution complex?
Qwuke: > It's not even clear if AI was used to find the bug

It's not even clear you read the article.
caminante: Even worse, the other child comments are speculating (and didn't RTFA either) when the answer is clear in the article:

> We found this defect by distilling a behavioural specification of the IMU subsystem using Allium, an AI-native behavioural specification language.