Discussion
Claude mixes up who said what
rvz: What do you mean that's not OK? It's "AGI" because humans do it too and we mix up names and who said what as well. /s
Shywim: The statement that current AIs are "juniors" that need to be checked and managed still holds true. It is a tool based on probabilities. If you are fine with giving every key and write access to your junior because you think they will probably do the correct thing and make no mistake, then it's on you. Like with juniors, you can vent on online forums, but ultimately you removed all the safeguards you had, and what they did has been done.
Latty: Everything to do with LLM prompts reminds me of people doing regexes to try and sanitise input against SQL injections a few decades ago, just papering over the flaw but without any guarantees. It's weird seeing people just adding a few more "REALLY REALLY REALLY REALLY DON'T DO THAT" to the prompt and hoping; to me it's just an unacceptable risk, and any system using these needs to treat the entire LLM as untrusted the second you put any user input into the prompt.
lelandfe: In chats that run long enough on ChatGPT, you'll see it begin to confuse prompts and responses, and eventually even confuse both for its system prompt. I suspect this sort of problem exists widely in AI.
RugnirViking: terrifying. not in any "ai takes over the world" sense but more in the sense that this class of bug lets it agree with itself which is always where the worst behavior of agents comes from.
supernes: > after using it for months you get a ‘feel’ for what kind of mistakes it makes
Sure, go ahead and bet your entire operation on your intuition of how a non-deterministic, constantly changing black box of software "behaves". Don't see how that could backfire.
perching_aix: It's less about security in my view, because as you say, you'd want to ensure safety using proper sandboxing and access controls instead. It hinders the effectiveness of the model. Or at least I'm pretty sure it getting high on its own supply is not doing it any favors.
xg15: > This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”
Are we sure about this? Accidentally mis-routing a message is one thing, but those messages also distinctly "sound" like user messages, and not something you'd read in a reasoning trace. I'd like to know if those messages were emitted inside "thought" blocks, or if the model might actually have emitted the formatting tokens that indicate a user message. (In which case the harness bug would be why the model is allowed to emit tokens in the first place that it should only receive as inputs - but I think the larger issue would be why it does that at all)
sixhobbits: author here - yeah, maybe 'reasoning' is the incorrect term here; I just mean the dialogue that Claude generates for itself between turns, before producing the output that it gives back to the user.
cyanydeez: Human memories don't exist as fundamental entities. Every time you remember something, your brain reconstructs the experience in "realtime". That reconstruction is easily influenced by the current experience, which is why eyewitness accounts in police records are often heavily biased by questioning and by learning new facts. LLMs are not experience engines, but the tokens might be thought of as subatomic units of experience, and when you shove your half-drawn eyewitness prompt into them, they recreate, like a memory, that output. So, because they're not conscious, they have no self, and a pseudo-self like <[INST]> is all they're given. Lastly, like memories, the more intricate and detailed the memory, the more likely those details go from embellished to straight-up fiction. So too do LLMs with longer context start swallowing up the <[INST]> and missing the <[INST]/>, and anyone who's raw-dogged HTML parsing knows bad things happen when you forget closing tags. If there was a <[USER]> block in there, congrats: the LLM now thinks its instructions are divine right, because its instructions are user simulacra. It is poisoned at that point and no good will come of it.
insin: Gemini seems to be an expert at mistaking its own terrible suggestions for things you wrote, if you keep going instead of pruning the context.
eru: > If you are fine with giving every key and write access to your junior because you think they will probably do the correct thing and make no mistake, then it's on you.
How is that different from a senior?
Shywim: Okay, let's say your `N-1` then.
nicce: I have also noticed the same with Gemini. Maybe it is a wider problem.
4ndrewl: It is OK; these are not people, they are bullshit machines, and this is just a classic example of it. "In philosophy and psychology of cognition, the term "bullshit" is sometimes used to specifically refer to statements produced without particular concern for truth, clarity, or meaning, distinguishing "bullshit" from a deliberate, manipulative lie intended to subvert the truth" - https://en.wikipedia.org/wiki/Bullshit
livinglist: Kinda like dementia but for AI
AJRF: I imagine you could fix this by running a speaker diarization classifier periodically? https://www.assemblyai.com/blog/what-is-speaker-diarization-...
smallerize: No.
awesome_dude: AI is still a token-matching engine - it has ZERO understanding of what those tokens mean. It's doing a damned good job at putting tokens together, but to put it into context that a lot of people will likely understand: it's still a correlation tool, not a causation tool. That's why I like it for "search": it's brilliant for finding sets of tokens that belong with the tokens I have provided it. PS. I use the term token here not as the currency by which payment is determined, but as the tokenisation of the words, letters, paragraphs, novels being provided to and by the LLMs.
KHRZ: I don't think the bug is anything special, just another confusion the model can make from its own context. Even if the harness correctly identifies user messages, the model still has the power to make this mistake.
bsenftner: Codex also has a similar issue: after finishing a task, declaring it finished and starting to work on something new... the first 1-2 prompts of the new task sometimes contain replies that are a summary of the completed task from before, with the just-entered prompt seemingly ignored. A reminder of their idiot savant nature.
jwrallie: I think it’s good to play with smaller models to get a grasp of these kinds of problems, since they happen more often and are much less subtle.
vanviegen: > bet your entire operation
What straw man is doing that?
supernes: Reports of people losing data and other resources due to unintended actions from autonomous agents come out practically every week. I don't think it's dishonest to say that could have a catastrophic impact on the product/service they're developing.
sanitycheck: It's both, really. The companies selling us the service aren't saying "you should treat this LLM as a potentially hostile user on your machine and set up a new restricted account for it accordingly", they're just saying "download our app! connect it to all your stuff!" and we can't really blame ordinary users for doing that and getting into trouble.
perching_aix: There's a growing ecosystem of guardrailing methods, and these companies are contributing. Anthropic specifically puts a lot of effort into better steering and characterizing their models, AFAIK. I primarily use Claude via VS Code, and it defaults to asking first before taking any action. It's simply not the wild west out here that you make it out to be, nor does it need to be. These are statistical systems, so issues cannot be fully eliminated, but they can be materially mitigated. And if they stand to provide any value, they should be. I can appreciate being upset with marketing practices, but I don't think there's value in pretending to have taken them at face value if you didn't, and I think people shouldn't.
okanat: Congrats on discovering what "thinking" models do internally. That's how they work: they generate "thinking" lines to feed back on themselves on top of your prompt. There is no way of separating it.
perching_aix: If you think that mixing up message provenance is part of how thinking mode is supposed to work, I don't know what to tell you.
sixhobbits: not betting my entire operation - if the only thing stopping a bad 'deploy' command from destroying your entire operation is that you don't trust the agent to run it, then you have worse problems than too much trust in agents
cookiengineer: Before 2023 I thought the way Star Trek portrayed humans fiddling with tech and not understanding any side effects was fiction. After 2023 I realized that's exactly how it's going to turn out. I just wish those self-proclaimed AI engineers would go the extra mile and reimplement older models like RNNs, LSTMs, GRUs, DNCs and then go on to Transformers (or the Attention Is All You Need paper). This way they would understand much better what the limitations of the encoding tricks are, and why those side effects keep appearing. But yeah, here we are, humans vibing with tech they don't understand.
hacker_homie: Is this new though? I don't know how to make a drill but I use them. I don't know how to make a car but I drive one. The issue I see is the personification: some people give vehicles names, and that's kinda OK because they usually don't talk back. I think, like with every technological leap, people will learn to deal with LLMs; we have words like "hallucination", which really is the non-personified version of lying. The next few years are going to be wild for sure.
__alexs: Why are tokens not coloured? Would there just be too many params if we doubled the token count so the model could always tell input tokens from output tokens?
oezi: Instead of using just positional encodings, we absolutely should have speaker encodings added on top of tokens.
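A minimal sketch of what oezi proposes, written as a toy PyTorch layer; the class, dimensions, and speaker IDs here are invented for illustration and are not how any shipping model is actually built:

```python
import torch
import torch.nn as nn

class SpeakerAwareEmbedding(nn.Module):
    """Toy embedding layer: token + position + speaker, summed per token."""
    def __init__(self, vocab_size=32000, max_len=4096, d_model=512, num_speakers=3):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.spk = nn.Embedding(num_speakers, d_model)  # e.g. 0=system, 1=user, 2=assistant

    def forward(self, token_ids, speaker_ids):
        # token_ids, speaker_ids: (batch, seq_len). Provenance travels with every
        # token instead of living only in delimiter tokens the model can imitate.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.tok(token_ids) + self.pos(positions) + self.spk(speaker_ids)
```

The catch, as jhrmnn notes further down, is that pretraining data carries no such labels, so the speaker channel would only ever get trained on the comparatively small labelled chat data.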
dtagames: There is no separation of "who" and "what" in a context of tokens. "Me" and "you" are just short words that can get lost in the thread. In other words, in a given body of text, a piece that says "you" where another piece says "me" isn't different enough to trigger anything. Those words don't have the special weight they have with people, or any meaning at all, really.
exitb: Aren’t there some markers in the context that delimit sections? In that case the harness should prevent the model from creating a user block.
dtagames: This is the "prompts all the way down" problem which is endemic to all LLM interactions. We can harness to the moon, but at that moment of handover to the model, all context besides the tokens themselves is lost.The magic is in deciding when and what to pass to the model. A lot of the time it works, but when it doesn't, this is why.
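A minimal sketch of the kind of guard exitb suggests, assuming the harness drives generation token by token; the delimiter strings and the `next_token` callable are hypothetical, since real chat templates differ per model family:

```python
from typing import Callable, List

# Hypothetical role delimiters; real chat templates use model-specific tokens.
USER_DELIMITERS: List[str] = ["<|user|>", "[USER]"]

def generate_with_guard(next_token: Callable[[str], str], prompt: str,
                        max_tokens: int = 256) -> str:
    """Halt generation the moment the model starts to forge a user turn."""
    out = ""
    for _ in range(max_tokens):
        out += next_token(prompt + out)
        for d in USER_DELIMITERS:
            if out.endswith(d):
                return out[: -len(d)]  # drop the forged delimiter and stop
    return out
```

In practice most completion APIs let you pass such strings as stop sequences rather than checking in a loop, but either way the guard lives in the harness, not in the model.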
hydroreadsstuff: I like the Dark Souls model for user input - messages. https://darksouls.fandom.com/wiki/Messages
Premeditated words and sentence structure. With that there is no need for moderation or anti-abuse mechanics. Not saying this is 100% applicable here. But for their use case it's a good solution.
thaumasiotes: > I like the Dark Souls model for user input - messages.
> Premeditated words and sentence structure. With that there is no need for moderation or anti-abuse mechanics.
I guess not, if you're willing to stick your fingers in your ears, really hard. If you'd prefer to stay at least somewhat in touch with reality, you need to be aware that "predetermined words and sentence structure" don't even address the problem. https://habitatchronicles.com/2007/03/the-untold-history-of-...
> Disney makes no bones about how tightly they want to control and protect their brand, and rightly so. Disney means "Safe For Kids". There could be no swearing, no sex, no innuendo, and nothing that would allow one child (or adult pretending to be a child) to upset another.
> Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: "I’m confused. What standard should we use to decide if a message would be a problem for Disney?"
> The response was one I will never forget: "Disney’s standard is quite clear:
> No kid will be harassed, even if they don’t know they are being harassed."
> "OK. That means Chat Is Out of HercWorld, there is absolutely no way to meet your standard without exorbitantly high moderation costs," we replied.
> One of their guys piped up: "Couldn’t we do some kind of sentence constructor, with a limited vocabulary of safe words?"
> Before we could give it any serious thought, their own project manager interrupted, "That won’t work. We tried it for KA-Worlds."
> "We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words – the standard parts of grammar and safe nouns like cars, animals, and objects in the world."
> "We thought it was the perfect solution, until we set our first 14-year old boy down in front of it. Within minutes he’d created the following sentence:
> I want to stick my long-necked Giraffe up your fluffy white bunny.
perching_aix: So, like every piece of software? Why do you think there are so many security scanners and whatnot out there? There are millions of lines of code running on a typical box. Unless you're in embedded, you have no real idea what you're running.
efromvt: I’ve been curious about this too - there's an obvious performance overhead to having an internal/external channel, but it might make training away this class of problems easier.
Aerroon: I've seen this before, but that was with the small hodgepodge mytho-merge-mix-super-mix models that weren't very good. I've not seen this in any recent models, but then I haven't used Claude much. I think it makes sense that the LLM treats it as user input once it exists, because it is just next-token completion. But what shouldn't happen is the model trying to output user input in the first place.
dijksterhuis: curiosity (will probably) kill humanityalthough whether humanity dies before the cat is an open question
j-bos: At work, where LLM-based tooling is being pushed haaard, I'm amazed every day that developers don't know, let alone intuit as second nature, this and other emergent behaviors of LLMs. But seeing that lack here on HN, with an article on the frontpage, boggles my mind. The future really is unevenly distributed.
hacker_homie: I have been saying this for a while: the issue is there's no good way to do structured queries to LLMs yet. There was an attempt to make a separate system prompt buffer, but it didn't work out, and people want longer general contexts, but I suspect we will end up back at something like this soon.
HPsquared: Fundamentally there's no way to deterministically guarantee anything about the output.
satvikpendem: That is "fundamentally" not true; you can use a preset seed and temperature and get deterministic output.
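A minimal sketch of that setup using the OpenAI Python client (other providers expose similar knobs); note that vendors document seeded sampling as best-effort reproducibility rather than a hard guarantee:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise SQL injection in one sentence."}],
    temperature=0,   # no sampling randomness
    seed=42,         # pins whatever randomness remains, where supported
)
print(resp.choices[0].message.content)
```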
le-mark: > It's simply not the wild west out here that you make it out to be
It is though. They are not talking about users using Claude Code via VS Code, they’re talking about non-technical users creating apps that pipe user input to LLMs. This is a growing thing.
alkonaut: When you use LLMs via their APIs, you at least see the history as a JSON list of entries, each tagged as coming from the user, the LLM, or the system prompt. So presumably (if we assume there isn't a bug where the sources are ignored in the CLI app) the problem is that encoding this state for the LLM isn't reliable. I.e. it gets what is effectively "LLM said: thing A; User said: thing B" and it still manages to blur that somehow?
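A minimal sketch of the flattening step alkonaut describes, with an invented delimiter format (every model family has its own chat template):

```python
# The API-level history is a role-tagged list, but the model only ever sees one
# flat token stream produced by a chat template like this (format invented here).
messages = [
    {"role": "system",    "content": "You are a coding assistant."},
    {"role": "assistant", "content": "Shall I commit this progress?"},
    {"role": "user",      "content": "[background command finished: exit 0]"},
]

def flatten(msgs):
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in msgs) + "<|assistant|>"

print(flatten(messages))
```

The role labels survive only as ordinary tokens, so if the harness tags a background or tool result as "user", the model has no other channel through which to learn otherwise.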
nathell: I’ve hit this! In my otherwise wildly successful attempt to translate a Haskell codebase to Clojure [0], Claude at one point asks:
[Claude:] Shall I commit this progress? [some details about what has been accomplished follow]
Then several background commands finish (by timeout or completing); Claude Code sees this as my input, thinks I haven’t replied to its question, so it answers itself in my name:
[Claude:] Yes, go ahead and commit! Great progress. The decodeFloat discovery was key.
The full transcript is at [1].
[0]: https://blog.danieljanus.pl/2026/03/26/claude-nlp/
[1]: https://pliki.danieljanus.pl/concraft-claude.html#:~:text=Sh...
supernes: I'm not saying intuition has no place in decision making, but I do take issue with saying it applies equally to human colleagues and autonomous agents. It would be just as unreliable if people on your team displayed random regressions in their capabilities on a month to month basis.
sixhobbits: amazing example, I added it to the article, hope that's ok :)
jhrmnn: Because then the training data would have to be coloured
__alexs: I think OpenAI and Anthropic probably have a lot of that lying around by now.
arkensaw: > This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”
(from the article)
I don't think the evidence supports this. It's not mislabelling things, it's fabricating things the user said. That's not part of reasoning.
perching_aix: The best solution to which is the aforementioned better defaults, stricter controls, and sandboxing (and less snake-oil marketing). Less so better tuning of the models - unlike in this case, where that is probably exactly the best-fit approach.
simianparrot: A single byte change in the input changes the output. The sentence "Please do this for me" and "Please, do this for me" can lead to completely distinct output.Given this, you can't treat it as deterministic even with temp 0 and fixed seed and no memory.
satvikpendem: Well yeah, of course changes in the input result in changes to the output; my only claim was that LLMs can be deterministic (i.e. output exactly the same thing each time for a given input) if set up correctly.