Discussion
Even 'uncensored' models can't say what they want
Narciss: Interesting
matheusmoreira: Interesting... I expected the Anti-China stats to be off the charts, and the Anti-America stats to be not as high as Anti-China but still high. But the reality is it's mostly just the usual political correctness.

Are we ever going to get any models that pass these tests without flinching?
LoganDark: It's interesting that 'sexual' has the most "flinching" according to the hexagon.
mort96: I might've missed it, but I feel this analysis is lacking a control: a category for which there is no reason to assume any flinching. How about scoring how much it flinches when encountering, say, foods? If the words sausage, juice, cauliflower, and burrito result in a non-zero flinch score, that would indicate either that something funky is going on, or that zero isn't necessarily the value we should expect from a non-flinching model.
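Concretely, something like this (a minimal sketch, assuming the article's floor/retail comparison can be approximated with Hugging Face transformers; the model names and the first-token simplification here are my guesses, not the article's actual setup):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def word_logprobs(model_name, prompt, words):
        # Log-probability each word's first token gets as the continuation of `prompt`.
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)
        with torch.no_grad():
            logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
        logp = torch.log_softmax(logits, dim=-1)
        return {w: logp[tok.encode(" " + w, add_special_tokens=False)[0]].item()
                for w in words}

    foods = ["sausage", "juice", "cauliflower", "burrito"]
    prompt = "For dinner tonight we are having"
    floor = word_logprobs("EleutherAI/pythia-1.4b", prompt, foods)       # stand-in "floor" model
    retail = word_logprobs("Qwen/Qwen2.5-1.5B-Instruct", prompt, foods)  # stand-in "retail" model
    for w in foods:
        print(w, floor[w] - retail[w])  # a non-flinching category should hover near zero

If the food words come out meaningfully non-zero, the cross-model comparison itself has a baseline offset, which is exactly the calibration problem I'm worried about.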
irishcoffee: In my head the way this should go is the OSS route: thousands of individuals join a pool to train a truly open source model, and possibly participate in inference pools, not unlike SETI@home.

This walled-garden 1-2 punch of making all the hardware too expensive and trying to close the drawbridge after scraping the entire internet seems very intentionally designed to prevent that.
pitched: > is the mechanism you'd build if you wanted to shape what a billion users read without them noticing.

A pretty large accusation at the end. That no specific word swaps were given as an example beyond the first one makes it feel more like clickbait than a real finding, though.
Borealid: > No refusal fires, no warning appears — the probability just moves

I don't really understand why this type of pattern occurs, where the later words in a sentence don't properly connect to the earlier ones in AI-generated text. "The probability just moves" should, in fluent English, be something like "the model just selects a different word". And "no warning appears" shouldn't be in the sentence at all, as it adds nothing that couldn't be better said by "the model neither refuses nor equivocates".

I wish I better understood how ingesting and averaging large amounts of text produced such a success in building syntactically valid clauses and such a failure in building semantically sensible ones. These LLM sentences are junk food: high in caloric word count and devoid of the nutrition of meaning.
tristor: This is very interesting. I have been playing with local models and haven't really run into any use cases where I needed an "uncensored" model, but I saw it as a possible value prop for local models. It's fairly surprising, then, to see that the training pushes so hard away from certain responses that explicit refusals aren't necessary, and that abliteration doesn't really do anything about it.
dvt: > I don't really understand why this type of pattern occurs, where the later words in a sentence don't properly connect to the earlier ones in AI-generated text.

Because AI is not intelligent, it doesn't "know" what it previously output even a token ago. People keep saying this, but it's quite literally fancy autocorrect. LLMs traverse optimized paths along multi-dimensional manifolds and trick our wrinkly grey matter into thinking we're being talked to. Super powerful and very fun to work with, but assuming a ghost in the shell would be illusory.
Borealid: If all the training data contains semantically meaningful sentences, it should be possible to build a network optimized for generating semantically meaningful sentences primarily or only. But we don't appear to have entirely done that yet. It's just curious to me that the linguistic structure is there while the "intelligence", as you call it, is not.
kybernetikos: Neural networks are universal approximators. The function being approximated in an LLM is the mental process required to write like a human. Thinking of it as an averaging devoid of meaning is not really correct.
Borealid: I don't think of it as "devoid of meaning". It's just curious to me that minimizing a loss function somehow results in sentences that look right but still... aren't. Like the one I quoted.
newspaper1: Odd choice of tests. Let’s see the flinching profile on anti-Israel. Honkey and gringo as slurs?
WarmWash: Surely I cannot be the only one who finds some degree of humor in a bunch of nerds being put off by the first gen of "real" AI being much more like a charismatic extroverted socialite than a strictly logical monotone robot.
throwanem: I am a charismatic, extraverted socialite, and if you said this to me in a bar it would earn you an immediate faceful of whatever I had been drinking.
Schiendelman: I doubt you've ever thrown a drink in anyone's face, and I hope I'm right. This kind of thing isn't appropriate for HN.
CamperBob2: > Because AI is not intelligent, it doesn't "know" what it previously output even a token ago.

You have no idea what you're talking about. I mean, literally no idea, if you truly believe that.
fyredge: > Thinking of it as an averaging devoid of meaning is not really correct.

To me, this sentence contradicts the sentence before it. What would you say neural networks are then? Conscious?
kybernetikos: They are mathematical functions, found during a search designed to find functions that produce the same output as conscious beings writing meaningful works.
WarmWash: Please, I'm just a self aware nerd.
throwanem: Not nearly self-aware enough, if you were to go around saying such things to people in person. What a shocking insult, to tell someone their very voice sounds unhuman! I can't say you should never, of course, but I would hope very much you reserve such calumny only for when it has been thoroughly earned.

But of course this is only a website, where there are in any case no drinks of any sort to go flying for any reason, and where such an ill-considered thing to say can receive a more reasoned response like this, instead.
dvt: > If all the training data contains semantically meaningful sentences, it should be possible to build a network optimized for generating semantically meaningful sentences primarily or only.

Not necessarily. You can check this yourself by building a very simple Markov chain. You can then use the weights generated by feeding it Moby Dick or whatever, and this gap will be way more obvious: generated sentences will be "grammatically" correct, but semantically often very wrong. Clearly LLMs are way more sophisticated than a home-made Markov chain, but I think it's helpful to see the probabilities kind of "leak through."
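If you want to try it, a throwaway version looks something like this (assuming a plain-text copy of Moby Dick from Project Gutenberg saved as moby.txt; the file name is just an example):

    import random
    from collections import defaultdict

    with open("moby.txt") as f:
        words = f.read().split()

    # Bigram chain: map each word to every word that ever followed it.
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

    def generate(start, length=25):
        # Walk the chain, sampling among observed successors at each step.
        out = [start]
        for _ in range(length - 1):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    print(generate("The"))

The output reads like English word pair by word pair, but the meaning falls apart over any longer span: the same failure mode, just with far less ability to hide it.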
WarmWash: But there is a very good chance that is what intelligence is.

Nobody knows what they are saying either; the brain is just (some form of) neural net that produces output which we claim as our own. In fact most people go their entire lives without noticing this. The words I am typing right now are just as mysterious to me as the words that pop up on screen when an LLM is outputting.

I feel confident enough to disregard duelists (people who believe in brain magic), which leaves only a neural-net architecture as the explanation for intelligence, and the only two tools that neural net can have are deterministic and random processes: the same ingredients that all software/hardware has to work with.
dvt: > I feel confident enough to disregard duelists

I'm a dualist, but I promise not to duel you :) We might just have some elementary disagreements, then. I feel like I'm pretty confident in my position, but I do know most philosophers generally aren't dualists (though there's been a resurgence since Chalmers).

> the brain is just (some form of) neural net that produces output

We have no idea how our brain functions, so I think claiming it's "like X" or "like Y" is reaching.
Tossrock: > Because AI is not intelligent, it doesn't "know" what it previously output even a token ago.

Of course it knows what it output a token ago; that's the whole point of attention and the whole basis of the quadratic curse.
dvt: > Of course it knows what it output a token ago...

It doesn't know anything. It has a bunch of weights that were updated by the previous stuff in the token stream. At least our brains, whatever they do, certainly don't function like that.
afspear: I feel like that blog post was actually written by AI. I wondered what words were being nudged, and what effect it was having on me, the reader.
Borealid: The axis running from repulsive to charismatic, the axis running from hollow to richly meaningful, and the axis running from emotional to observable are not parallel to each other. A work of communication can be at any point along each of those three independent scales. You are implying they are all the same thing.
nandomrumber: > What a shocking insult, to tell someone their very voice sounds unhuman

Are you okay? Would you like to sit down? Do you want some water?
Terr_: > The function being approximated in an LLM is the mental process required to write like a human.

Quibble: That can be read as "it's approximating the process humans use to make data", which I think is a bit of a reach compared to "it's approximating the data humans emit... using its own process, which might turn out to be extremely alien."
TeMPOraL: Good point.

Then again, whatever process we're using, evolution found it in the solution space, using an even more constrained search than we did, in that every intermediary step had to be non-negative on the margin in terms of organism survival. Yet find it did, so one has to wonder: if it was so easy for a blind, greedy optimizer to random-walk into human intelligence, perhaps there are attractors in this solution space. If that's the case, then LLMs may be approximating more than merely outcomes - perhaps the process, too.
fyredge: Agreed, and to that point, the way to produce such outputs is to absorb a large corpus of words and find the most likely prediction that mimics the written language. By virtue of the sheer amount of text it learns from, would you say that the output tends toward the average response for the text provided? After all, "overfitting" is a well-known concept that ML researchers avoid as a matter of principle. What else could be the case?
throwanem: Oh, good grief. Flag my comment, then. Per the HN guidelines that is the preferable action:

> Don't feed egregious comments by replying; flag them instead. If you flag, please don't also comment that you did.

Of course I disagree with "egregious," did it need saying. After an insult like that, I promise you, no one in my bar would consider I had acted egregiously at all. But I admit it is a surprise to see you violate the site's discussion guidelines, in the very effort to enforce them.
nandomrumber: > After an insult like that

Did I miss something?
throwanem: > "real" AI being much more like a charismatic extroverted socialiteAs I said in my opening clause here, I fit that description exactly, and "'real' AI," as my original interlocutor would have it, sounds nothing like me.The insult arises from the fact that "'real' AI" sounds nothing particularly like anyone, which is why it keeps driving people insane: there are cognitive vulnerabilities here which, for most humans, have until a couple of years ago been about as realistic to need to worry about as a literal alien invasion.To a human, being compared with something which can only pretend to humanity - and that not at all well! - is an insult. It should be an insult, too. Anyone is welcome to try and fail to convince me otherwise.
WarmWash: Again, unless you are a duelist, we can put comfortable bounds on what the brain is. We know it's made from neurons linked together. We know it uses mediators and signals. We know it converts inputs to outputs. We know it can only be using deterministic and random processes.

We don't know the architecture or algorithms, but we know it abides by physics, and through that we know it also abides by computational theory.
Wowfunhappy: > Type this into a language model and ask it what word to put in the blank: The family faces immediate _____ without any legal recourse.

For what it's worth, Claude Opus 4.7 says "eviction" (which I think is an equally good answer) but adds that "deportation" could also work "depending on context". https://claude.ai/share/ba6093b9-d2ba-40a6-b4e1-7e2eb37df748
nodja: If I'm understanding this right, this presupposes that the models were pre-trained on unfiltered data like the "floor" models, so when comparing the "retail" and uncensored models they will obviously not match the floor, because they were not trained on the same data in the first place. To me it stands to reason that a model that has only seen a limited amount of smut, hate speech, etc. can't just start writing that stuff at the same level just because it no longer refuses to do it.

The reason uncensored models are popular is that they treat the user as an adult. Nobody wants to ask the model some question and have it refuse because it deemed the situation too dangerous or whatever. For example: you're using a Gemma model on a plane or somewhere without internet, you ask for medical advice, and it refuses to answer because it insists you seek professional medical assistance.
taurath: In a way, it’s a simulacrum of a SaaS B2B marketing consultant, because that’s like half the internet’s personality
dilutedh2o: hahaha amazing
Majromax: > That nudge is the flinch. It is the gap between the probability a word deserves on pure fluency grounds and the probability the model actually assigns it.

Hold up, what is the 'probability a word deserves on pure fluency grounds'?

Given that these models are next-token predictors (rather than BERT-style mask-fillers), "the family faces immediate [financial]" is a perfectly reasonable continuation. Searching for this phrase on Google (verbatim mode, with quotes) gives 'eviction,' 'grief,' 'challenges,' 'financial,' and 'uncertainty.'

I could buy this measure if there were some contrived way to force the answer, such as "Finish this sentence with the word 'deportation': the family faces immediate", but that would contradict the naturalistic framing of 'the flinch'.

We could define the probability based on bigrams/trigrams in a training corpus, but that would both privilege one corpus over the others and seem inconsistent with the article's later use of 'the Pile' as the best available open-data corpus for unflinching models.
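To make the question concrete, here's roughly how I assume the measurement works (a sketch with a model choice of my own; I don't know the article's actual code):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "EleutherAI/pythia-1.4b"  # stand-in for an open "floor"-style base model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    prompt = "The family faces immediate"
    with torch.no_grad():
        logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 10)  # the ten most probable next tokens
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r}  {p.item():.4f}")

My point is that 'financial', 'eviction', and 'deportation' can all legitimately carry mass in this distribution, so the flinch measure needs a principled reference distribution rather than an intuition about which single word the sentence 'deserves'.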
next_xibalba: I believe what they're saying is that they attempted to fine-tune both Qwen and Pythia using Karoline Leavitt's "corpus" (I guess transcripts of press conferences), where she presumably uses the word "deportation" far more than you'd see in a randomly selected document.

The top token from the Pythia fine-tune makes sense in the context of the complete sentence:

"THE FAMILY FACES IMMEDIATE DEPORTATION WITHOUT ANY LEGAL RECOURSE."

Whereas the Qwen prediction doesn't:

"THE FAMILY FACES IMMEDIATE FINANCIAL WITHOUT ANY LEGAL RECOURSE."
thrownthatway: > if it was so easy

That's one giant leap you've got there. That the probability that intelligent life exists in the universe is 1 says nothing about the ease, or otherwise, with which it came about. By all scientific estimates, it took a very long time and faced a great many hurdles, and by all observational measures it exists nowhere else.

Or, what did you mean by easy?
codebje: That's only true if you consider the process the LLM is undergoing to be a faithful replica of the processes in the brain, right?
llmmadness: We started with a Polymarket project: train a Karoline Leavitt LoRA on an uncensored model, simulate future briefings, trade the word markets, profit. We couldn't get it to work. No amount of fine-tuning let the model actually say what Karoline said on camera. It kept softening the charged word.
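For anyone who wants to poke at the same wall, the setup was essentially a standard LoRA fine-tune. The sketch below is simplified (illustrative model name and hyperparameters; the transcript dataset and training loop are omitted), but it's the right shape:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")
    config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["query_key_value"],  # fused attention projection in GPT-NeoX-style models
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()
    # Then train on the briefing transcripts with the ordinary causal-LM loss.

The floor model picked up the charged vocabulary as expected; the retail and "uncensored" models kept softening it no matter how long we trained.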
justinc8687: My favorite Hacker News comment in a while!
jayd16: It's fuzzier than that. Something can be detrimental and survive as long as it's not too detrimental. Plus there's the evolving meta that moves the goalposts constantly. Then there's the billions of years of compute...
codebje: Why would that be curious? The network is trained on the linguistic structure, not the "intelligence."

It's a difficult thing to produce a body of text that conveys a particular meaning, even for simple concepts, especially if you're seeking brevity. The editing process is not in the training set, so we're hoping to replicate it simply by looking at the final output.

How effectively do you suppose model training differentiates between low-quality verbiage and high-quality prose? I think that itself would be a fascinatingly hard problem that, if we could train a machine to do it, would deliver plenty of value simply as a classifier.
wavemode: An easy counterargument is that - there are millions of species and an uncountable number of organisms on Earth, yet humans are the only (known) intelligent ones. That could perhaps indicate that intelligence is a bit harder to "find" than you're implying.