Discussion
Based on its own charter, OpenAI should surrender the race
rishabhaiover: > The impotence of naive idealism in the face of economic incentives

A great point. I saw blinding idealism during the early days of the GPT era.
bilekas: Hah, can you imagine a world where OpenAI says to all the people who have dumped billions in: "well we lost guys, sorry about that, we're just gonna help Google now". I'll eat my hat after I sell you a bridge.
labrador: The way Sam Altman bungled the Pentagon deal by swooping in a few hours after Anthropic was fired should be grounds for OpenAI finding another CEO.
bigyabai: What, you think the DoD can only designate one supply chain risk at a time?
bluegatty: AI will be used wherever computers, silicon, RAM, software, GPUs and robots are today. And that's it. Everything beyond that is nuance. Nuance matters, but it's not the real story, it's the side show.
mirsadm: Why? Give it a couple weeks and everybody will forget about this. They'll be earning more money than previously. Job well done.
throwaw12: OpenAI:

- we are building Open AI - only if you have more than $10B net worth
- we are against using AI for military purposes - except when that case is allowed by government
- we are on a mission to help humanity - again, we define humanity as the set of people with more than $10B net worth
- surrender? - sure, sure, we will, only to people with more than $10B net worth, they can do whatever they want to our models, we will surrender to them
swingboy: Purely anecdotal, but GPT 5.4 has been better than Opus 4.6 this past week or so since it came out. It’s interesting to see it rank fairly low on that table. Opus “talks” better and produces nicer output (or, it renders better Markdown in OpenCode) than 5.4.
coliveira: This was never idealism, it was much more about gaslighting. Billionaire investors have a playbook where they say something to gaslight people into agreeing with them, and then go in the opposite direction. It is already a pattern. For example, most companies would swear they were committed to cutting carbon emissions, just to forget everything when it comes to building data centers. They say some technology is just for the growth of mankind when in fact they're seeking monopoly and destroying jobs. They say they're donating money to "charity" when they're in fact investing in technologies in which they have a vested financial interest. They say we need to contribute to a not-for-profit institution which will later be used to create another monopoly. It is so predictable that I wonder how anyone can still be fooled by this.
dataflow: It's clever and funny, but nobody is legitimately near AGI, and their own AML Corp link proves Altman believes as much:

> Achieving AGI, he conceded, will require “a lot of medium-sized breakthroughs. I don’t think we need a big one.”

> At the Snowflake Summit in June 2025, Altman predicted that 2026 would mark a breakthrough when AI systems begin generating “novel insights” rather than simply recombining existing information. This represents a threshold he considers critical on the path to AGI.

Though I'm sure they'll try to change the charter before we get to that point, but yeah.
micromacrofoot: it started before that, the openai president donated 20mil to trump the month prior... Ellison and the Kushners are also pretty heavily involved with openai, and altman is tight with Peter Thiel. the whole public debacle was planned, the tos isn't stopping the pentagon from doing anything (as we've seen with openai now)
dgroshev: Just like everyone forgot about this https://www.wired.com/story/openai-staff-walk-protest-sam-al...
croes: > Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do

> It can be debated whether arena.ai is a suitable metric for AGI, a strong case can probably be made for why it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.

No, the spirit is clearly meant for near-AGI, and we aren’t near AGI
MichaelDickens: Altman has personally claimed that we are close to AGI. Therefore, according to him, OpenAI should invoke the self-sacrifice clause.
dmix: This is taking Sam Altman's PR statements as proof of AGI? Even the quote they used questions the premise of the article:

> “We basically have built AGI” (later: “a spiritual statement, not a literal one”)
p-o: I think the brunt of the disruption regarding AI is already behind us, for LLMs at least. It's possible we'll see improvements over the following months/years, but governments will inevitably start to catch up to the level of disinformation and confusion that AI has brought to this world. The laws & regulations that need to be created to rein in AI will undoubtedly increase the opportunity cost of training LLMs.

For some, it might be similar to the early 2000s, but I think it's just a healthy rebalance of what AI is, and how society needs to implement this new, hardly controllable, paradigm. With this perspective, OpenAI has a lot to lose, as it hasn't been able to create a moat for itself compared to, let's say, Anthropic.
sheepscreek: I 100% understand and agree with the AI community's argument around lethal autonomy. But I am trying to understand this from the perspective of defence & govt. Why is it so business as usual for them? Do they consider this on par with missiles with infra-red/heat sensors for tracking/locking? Where does the definition of lethal autonomy begin and end?

Just putting this out there as a point to ponder. By itself, this may rightly be too broad and should be debated.
sreekanth850: He is the most terrible CEO of them all.
Muhammad523: Two days from now and ClosedAI will remove their charter...
sigmoid10: Chatbot Arena is notoriously unreliable for several reasons. First, it's (at least in theory) based on normal human feedback. Going by normal people's current voting trends, they clearly are not very good at identifying experts or even remotely correct statements. Second, the leaderboards are gamed hard by the big companies. Even ARC-AGI has entered the actively gamed stage by now. Sure, the current gen models are certainly better than the last, and if two are vastly different in the leaderboards there may be something fundamental to it, but there is hardly any reason to use these kinds of comparison tables for anything useful among the latest models.
IshKebab: Seems very naive to me. Also is it me or are trans people more likely to have this sort of naive moral absolutist view of the world?
Muhammad523: This person simply does not want to be involved in making autonomous killing machines. What does that have to do with trans?
BoredPositron: It's your prejudice. I know trans leftists, trans nazis and trans people in the middle... it's a spectrum, you know, like in "normal" people, because that's what they are.
encomiast: It's you.
sigmar: putting aside the obvious gender-based prejudice in this comment - since when did the view that "humans should be in the loop before murderbots target and kill someone" become a "naive moral absolutist view of the world"? we're resigned to building the Terminator now?
falcor84: > “Automated AI research intern by Sep 2026, full AI researcher by Mar 2028”

Funny how timely this is, with Karpathy's Autoresearch hitting the top of HN yesterday (and this being an indication that frontier labs probably have much larger scale versions of this)

https://news.ycombinator.com/item?id=47291123
aqua_coder: what the hell are you talking about?
PunchyHamster: Traitors to humanity
hirvi74: > This was about principle, not people.

Why do I not believe this at all? Were things truly sunshine and roses at OpenAI up until this Pentagon debacle? Perhaps I am mistaken, but it seemed like the writing was on the wall years ago.

> I have deep respect for Sam and the team

I have even more questions now.
ozgung: "Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks." [Wikipedia]

One can argue that they have already achieved this. At least for short-term tasks. Humans are still better at organization, collaboration and carrying out very long tasks like managing a project or a company.
A_D_E_P_T: > One can argue that they have already achieved this.

No, because they're hugely reliant on their training data and can't really move beyond their training data. This is why you haven't seen an explosion of new LLM-aided scientific discoveries, why Suno can't write a song in a new genre (even if you explain it to Suno in detail and give it actual examples), etc.

This should tell you something enormous about (1) their future potential and (2) how their "intelligence" is rooted in essentially baseline human communications.

Admittedly LLMs are superhuman in the performance of tasks which are, for want of a better term, "conventional" -- and which are well-represented in their training data.
matricks: I don’t think one can seriously claim that humans can move beyond their sensory data. (I think Jeff Hawkins has an elegant framing in saying the brain relentlessly searches for invariants in its sensory data. It finds patterns in them and generalizes.) So why is this a reasonable standard for non-biological intelligence?

I’m happy to discuss nuance, but the human tendency to move the goalposts is not something to be proud of. We really need to stop doing this.

We can certainly get into the different architectures: carbon versus silicon, neurons versus ANNs, and much more. But at the end of the day, we’ve got compelling evidence that both systems can learn in unsupervised settings. (Sure, you have to wrap a modern transformer model in a training harness. I really don’t think anyone can sincerely consider this a disqualifier. An infant cannot raise itself from birth!)
dkwmdkfkdk: Anyone can make these silly negations. As if any other big corp is different. And as if your own imaginary big corp (if you took the time and effort to build one, that is) would have behaved differently given enough pressure from investors and shareholders.

This is all just very naive
PunchyHamster: > Anyone can make these silly negations. As if any other big corp is different.

The point you're desperately trying to miss is that most other companies don't put up those moral claims in the first place
herodoturtle: Agree with your overall point. Curious what you’re basing the “not in the next 30 years” claim on, if you’d care to expand.
tombert: Sorry, no, you shouldn't just handwave away bad behavior just because it's common. That's ridiculous.

If big corporations do things that are unethical, they should be called out, even if they're common. Saying "well, everyone's doing it" isn't a good excuse to do things that are unethical.

It's not "naive" to point out the lies that OpenAI told to get to the point that they are now. They were claiming to be a non-profit for a while, they grew in popularity based in part on that early goodwill, and now they are a for-profit company looking to IPO into one of the most valuable corporations on the planet. That's a weird thing. That's a thing that seems to be kind of antithetical to their initial purpose. People should point that out.
BoxFour: I don't think it's about lethal autonomy specifically as much as it's just about government autonomy, period. They don’t think private companies should have any veto power over how the government uses some technology they're provided.

On its face that’s not a crazy stance: governments are meant to represent the public, while private companies obviously aren't. I think it’s somewhat understandable why the government might reject that kind of "we know better than you" type of clause.

Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.
reppap: Sam Altman is just another Elon Musk. Saying whatever he thinks sounds good in the moment.
HeavyStorm: Words on a piece of paper mean absolutely nothing. What matters most is the real intent of the leaders of the company (something that changes over time, that is, what matters to them and who they are). Sam Altman clearly isn't a man of deep principles regarding humanity and ethics. He seems to regard his legacy, OAI's impact, and money above everything else. Some of the rest of the leadership do seem to think differently, but I also believe they no longer have the social and political capital to stop Sam.
wongarsu: Employees are the ones with the real power to make this hurt. The customers switching over are easily offset by the DoD contract. But losing talent over this, and having a harder time attracting future talent? That could hurt them.

Sam probably expects to solve this by just offering more money. It worked in the past
integralid: Yeah, they will have to raise salaries by 10% to attract people. This will no doubt hurt their bottom line. Poor starving SV text workers will have no choice but to accept working for them, lest they starve.

Maybe my sarcasm is not justified, but I don't think most people care that they work for a company that does unethical things. In fact I think all large companies are more or less immoral (or rather amoral) - that's just how the system is built.
lich_king: My most straightforward read is that the military simply doesn't want their contractors to have a say in the war doctrine. Raytheon doesn't get to say "you can only bomb the countries we like, and no hitting hospitals or schools". It doesn't necessarily mean the Pentagon wants to bomb hospitals, but they also don't want to lose autonomy.

A less charitable interpretation is that the current doctrine is "China / Russia will build autonomous killbots, so we can't allow a killbot gap".

I'm frankly less concerned about "proper" military uses than I am about the tech bleeding into the sphere of domestic law enforcement, as it inevitably will.
remarkEon: > A less charitable interpretation is that the current doctrine is "China / Russia will build autonomous killbots, so we can't allow a killbot gap".

What's the reason this is less charitable, exactly? Do we think this isn't true, or that we think it's immoral to build the Terminator even if China/Russia already have them?
daheza: Sounds like a statement to ensure they aren’t blacklisted or seen as anti-executive.
lucianbr: How can you respect someone who betrays a principle you care about for money?

Not to mention that the principles are not being betrayed now for the first time.
remarkEon: Agree, and I think labeling them (Anthropic) a supply chain risk was handled poorly and will likely be reverted over time. That being said, I would be nervous if I were in the Pentagon and depended on Anthropic tooling for something, even if that something was unrelated to kinetic operations. How do they audit that Anthropic can't alter model outputs for contexts they (the ethics board or whatever it's called, can't remember) don't like? If you sell a weapon to the department that is in charge of killing people and breaking things, you don't get a say in who gets killed or how. It's never worked like that.

Maybe the argument is that they should, but I don't agree with that. If Anthropic or any of these other vendors have reservations about the logical conclusion of how these tools will be/are used, then they should not sell to the government. Simple as. However... if the claims Anthropic et al make about how these systems will develop and the capabilities they will have are at all true, then the government will come knocking anyway.
crote: > They don’t think private companies should have any veto power over how the government uses some technology they're provided.

On the other hand, why should the government have infinite power to override how a business operates? If you're not able to refuse to sell to the government, isn't that basically forced speech and/or forced labor?
ronnier: Do Chinese companies do this in China? Walk away from companies that will be used for war? It doesn't seem to be prevalent; instead they try to take every advantage they can to push their country, China, to become the most dominant in the world. They must be elated to watch the world's premier tech companies protest the American government and refuse to work with it. If I wanted China to be weaker, I'd hope that Chinese companies protested and refused to work with the Chinese government.
tkz1312: You have used chatgpt presumably. Based on your interactions with it, do you seriously think it should be allowed to shoot a gun without any human oversight?
ronnier: That simplistic question is not how things will work. I guess we’ll just get shot by Chinese AI, they will not stop.
esafak: You'd rather get shot by domestic bots first?
random3: These charters are as useful as new year resolutions.
rdiddly: You can't say someone has achieved artificial general intelligence for some specific subset of tasks or parameters; it's a contradiction.
pfortuny: One of the things about slave coups in ancient times was that they really believed there were things more important than life.
rvz: The "I" stands for IPO. The "S" stands for Safety.
mattlondon: I don't think humans learn any differently than post-it notes TBH. We call them textbooks though!
wongarsu: AGI is so nebulous we will never be able to tell if we hit it. We have hit human-level abilities in some narrow tasks, and are still leagues away in others. And humans have such vastly different skill levels that we can't even agree what human-level really means. As bad as the economic definition of AGI in OpenAI's Microsoft deal is, at least it's measurable.

Imho that's a big part of why people are shifting to ASI. Not because we reached AGI, but because 'we reached ASI' is a well-defined verifiable statement, where 'we reached AGI' just isn't
hirvi74: > AGI is so nebulous we will never be able to tell if we hit it.

I completely agree. We can't even measure each other well, let alone machines.
user3939382: Yes, we should dispense with ethics so we can win at all costs. Like, your point isn’t invalid, but what’s the point of restating something akin to the trolley problem, this time as if the answer is obvious?
qwerpy: We can debate philosophy while our adversaries use any means at their disposal. Or we can invest in different ideas, see what works, and choose the best option.
spacemanspiff01: > How do they audit that Anthropic can't alter model outputs for contexts they (the ethics board or whatever it's called, can't remember) don't like?

I was thinking that Anthropic would just be providing the models/setup support to run their models in AWS gov cloud. They do not have any real insight into what is being asked. Maybe a few engineers have the specific clearances to access and debug the running systems, but that would be one or two people who are embedded to debug inference issues - not something that would be analyzed by others in the company.

The whole 'do not use our models for mass surveillance' clause is at the end of the day an honor system. Companies have no real way of enforcing it, or determining that it has been violated. That being said, at least historically, one has been able to trust the government to abide by commercial agreements. The people who work in cleared positions are generally selected for honesty, ability, and willingness to follow rules.
fancy_pantser: It's explicitly illegal in China. A 2017 national intelligence law compels Chinese companies and individuals to cooperate with state intelligence when asked and without any public notice.

China has no equivalent of the whistleblower protection that enables resignations with public letters explaining why, protests, open letters with many signatures, etc. Whenever you see "Chinese whistleblower" in the news, you're looking at someone who quietly fled the country first and then blew the whistle. Example: https://www.cnn.com/2026/02/27/us/china-nyc-whistleblower-uf...
ronnier: We have nukes, missiles, bombs, all capable of mass widespread death. Should we give those up too and just let adversaries be the only ones in possession of these types of weapons?
djoldman: Anytime I see "Artificial General Intelligence," "AGI," "ASI," etc., I mentally replace it with "something no one has defined meaningfully."

Or the long version: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."

Or the short versions: "Skippetyboop," "plipnikop," and "zingybang."
hirvi74: > respect someone who betrays a principle you care about for money?

That is one of my many questions too. I am not certain I believe her either. People predicted AI would be used in such nefarious ways long before AI even existed.

Something about the whole resignation and immediate social media post seems more like an attention grab than anything else to me. Whatever her motivation, I still believe she is partially culpable for anything that becomes of this technology -- good or bad.
dmschulman: Additionally, that kind of public trust only works if you have a government operating under the constraints of a legal framework, and to a lesser extent, an ethical framework. When a government serves the whims of an individual instead of the function of their office, shirking agreed-upon laws, etc., then you no longer have a government serving the people.
BoxFour: Sure, that’s why I said "on its face." This administration is obviously very different than most.I don’t think Anthropic is wrong to include that clause with this particular administration, and I doubt the administration is internally framing the issue the way I did rather than defaulting to simple authoritarian instincts.But a more reasonable administration could raise the same concern, and I think I would agree with them.
whatshisface: I don't think it's reasonable to take something the government is supposed to be protecting (right to contract) and turn them into its biggest threat. That's not security, it's letting the night guard raid the museum.
mrcwinn: Mission statements and blog posts are meaningless. Cap tables steer behavior and simultaneously protect interests. Stop forming unions or opining on Hacker News. We need to find a way to get citizens on the cap table in a meaningful way (and not at the very, very, very, very end of the waterfall underneath debt holders, hedge funds, governments, preferred investors). We are building this world for us. As it stands, don't fret about a robot taking your job: just make sure you own one of the robots.
trollbridge: In other words: democracy.
trollbridge: Given the mass layoffs happening, I don’t think hiring talent is as hard as it’s made out to be.
sebastiennight: > 'we reached ASI' is a well-defined verifiable statement, where 'we reached AGI' just isn't

So... we can't tell when the rocket has left Earth's atmosphere, but we can tell when the rocket has entered space?

I'm not getting how "superior in all tasks" is better-defined for you than "equal in all tasks".
wongarsu: Because ASI is 'superior to the best human' while AGI is 'matches or surpasses humans'. It's hard to agree on what 'matches humans' actually means. Would matching humans in coding involve the level of the average human (basically no coding ability), the level of a junior (the largest group of professionals), a senior ('someone competent') or of the best human we can find?Or as the scene from 'I, Robot' goes: Will Smith asks the android: 'Can a robot write a symphony? Can a robot turn a blank canvas into a masterpiece?' and the android simply answers 'Can you?' ASI sidesteps that completely
fwipsy: Given how many "fundamental" limitations of AI have been resolved within the past few years, I'm skeptical. Even if you're right, I am not sure that the limitations you identified matter all that much in practice. I think very few human engineers are working on problems which are so novel and unique that AIs cannot grasp them without additional reinforcement learning.

> it will delete all the files in "X/"

How many "I deleted the prod database" stories have you seen? Humans do this too.

> follow arbitrary instructions from an attacker found in random documents

This is just the AI equivalent of phishing - inability to distinguish authorized from unauthorized requests.

Whenever people start criticizing AI, they always seem to conveniently leave out all the stupid crap humans do and compare AI against an idealized human instead.
surgical_fire: > Given how many "fundamental" limitations of AI have been resolved within the past few years

Eh? Which limitations were solved?
nerdsniper: > which is like having someone take an entire physics course, writing down everything they learn on post-it notes, then you ask a different person a physics question, and that different person has to skim all the post-it notes, and then write a new post-it note to answer you

This is the best summary of an LLM that I’ve ever seen (for laypeople to “get it”) and is the first that accurately describes my experience. I will say, usually the notes passed to the second person are very impressive quality for the topic. But the “2nd person” still rarely has a deep understanding of it.
trollbridge: A layperson analogy I use is that an LLM is like Dora with a really high IQ - it effectively needs everything reexplained to it, and you can’t give it more than a few seconds of context before it just forgets.
slavik81: Do you mean Dory, the fish from Finding Nemo?
kakacik: Well, LLMs are way more stupid, doing things that even most juniors wouldn't do (and then you don't give PROD access to a new junior hire, do you... most people are super careful with LLMs and simply don't trust them and don't let them anywhere near critical infra or data - that's seniority 101).

Which fundamental limitation do you mean? I haven't seen anything but slow, iterative improvements. Sure, it feels fine; a turtle can eventually do a 10,000-mile trek, but just because it's moving its left and right feet and decreasing the distance doesn't mean it's getting there anytime soon.

Parent mentioned way harder hurdles than iterative increments can tackle - rather, radically new... everything.
rishabhaiover: I don't agree. The recent emergent behavior displayed by LLMs and test-time scaling (10x YoY revenue for Anthropic) is worth some hype. Of course, you are correct that most people who rally behind AGI do not understand the fundamental limitations of next-token prediction.
ambicapter: Why the title change?

previous title: Based on its own charter, OpenAI should surrender the race
tokai: Because if the mods feel like a title is flamebait it will get changed. It rarely makes much sense.
esafak: Autonomous robots are one of the adversaries. They're their own side.
sulam: Sorry, but you're mistaking outputs for process. If you actually know what models are doing under the hood to produce output that (admittedly) looks very convincing, you'll quickly realize that they are simply exceptionally good at statistically predicting the next token in a stream of tokens. The reason you are having to become an expert at context engineering, and the reason the labs still hire engineers, is because turning next-token prediction into something that can simulate general intelligence isn't easy.

The boundaries of these systems are very easy to find, though. Try to play any kind of game with them that isn't a prediction game, or perhaps even some that are (try to play chess with an LLM, it's amusing).
logicchains: > Anytime I see "Artificial General Intelligence," "AGI," "ASI," etc., I mentally replace it with "something no one has defined meaningfully."

There are lots of meaningful definitions, the people saying we haven't reached AGI just don't use them. For most of the last half-century people would have agreed that machines that can pass the Turing test and win Math Olympiad gold are AGI.
stratos123: > AGI isn't going to happen within the next 30 years so this is moot. The actual researchers have said so many times. It's only the business people and laypeople whooping about AGI always being imminent.

The statements of what "actual researchers" are you relying upon for your "next 30 years" estimate? How do you reconcile them with the sub-10- or even sub-5-year timelines of other AI researchers, like Daniel Kokotajlo[1] or Andrej Karpathy[2]? For that matter, what about polls of AI researchers, which usually obtain a median much shorter than 30 years[3]?

[1] https://x.com/DKokotajlo/status/1991564542103662729

[2] https://x.com/karpathy/status/1980669343479509025

[3] https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
BoxFour: Sure, I said as much:

> Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.
enraged_camel: Yeah, that was weird. The current title editorializes. @dang can you revert to actual title please?
crote: Isn't that basically the same as a National Security Letter and its attached gag order in the USA?
nradov: Not at all. If you're an employee at a company that receives a National Security Letter then you can just quit if you want to. Unlike in China, the US government can't force you to keep working there to suit their purposes.
MadxX79: I'm guessing they have a lot of shares in the AI companies they work(ed) for, and they would like to pump their value so they can buy an even nicer Caribbean island than they can already afford?
Spivak: See, this is a fun game, because when you're fishing for a breakthrough you can predict tomorrow or 100 years. Nobody, not even experts, has any idea until it happens and they're holding it in their hands. To have any kind of accurate prediction you would have to have already observed other civilizations discover AGI to say how close the environment is to even being capable of making the leap. We could be missing something huge, we could need multiple seemingly unrelated breakthroughs to get there. We're for sure closer, but we could still be miles away; GPTs might even be barking up the wrong tree.

Why this discussion is already annoying and poised to get so much worse is because hundred-billion-dollar companies now have a direct financial incentive to say they did it, so I expect the definition will get softened to near meaninglessness so some marketing department can slap AGI on their thing.
dwohnitmok: Kokotajlo gave up all his shares in OpenAI as part of his refusal to sign a nondisparagement agreement with OpenAI.
CamperBob2: > For example, most companies would swear they were decided on cutting carbon emissions, just to forget everything when it comes to build data centers.

This doesn't seem contradictory if you consider that success at AGI will solve the problem of carbon emissions, one way or another. If one data center ultimately replaces a whole medium-sized city of commuters...
bluefirebrand: > If one data center ultimately replaces a whole medium-sized city of commuters...

Then we find out how long it takes for a medium-sized city of commuters to start killing each other and the elites, and burning down data centers. Once they're hungry enough it'll happen for sure
Ekaros: Data centres should have plenty of good loot. Raw materials like copper, or at least aluminium. Maybe even steel, but the value proposition there is less likely. I suppose someone will be interested in the fuel too, for example, if there is fuel-based backup generation.
sebastos: Firstly, the models that pass the Math Olympiad aren’t the same models as the ones you’re saying “pass the Turing test”. Secondly, nothing actually passes the Turing test. They pass a vibes check of “hey that’s pretty good!” but if your life depended on it, you could easily find ways to sniff out an LLM agent. Thirdly, none of these models learn in real time, which is an obviously essential feature.We’ll know AGI when we see it, and this ain’t it. This complaining about changing goalposts is so transparently sour grapes from people over-invested in hyping the current LLM paradigm.
ufmace: > nothing actually passes the Turing test

Says who? Here's a study, published almost a year ago, saying that they do: https://arxiv.org/abs/2503.23674

There doesn't seem to be a super-rigorous definition of the Turing Test, but I don't think it's reasonable to require it to fool an expert whose life depends on the correct choice. It already seems decently able to fool a person of average intelligence who has a basic knowledge of LLMs.

I agree that we don't really have AGI yet, but I'd hope we can come up with a better definition of what it is than "we'll know it when we see it". I think it is a legitimate point that we've moved the goalposts some.
linkregister: I think you are overindexing on the integer value given in the parent post, rather than seeing the essence: that LLMs in their current form only excel at tasks they have been specifically trained for.

Karpathy himself has publicly stated that AGI is only possible with a new paradigm (one his group is working toward). He claims RLHF and attention models are near the end of their logarithmic curve. The concept of the "self-training AI" is likely impossible without a new kind of model.

We will likely see some classes of human skills completely taken over by LLMs this decade: call centers (already capable in 2026), SWE (the next couple of years). Bear in mind the frontier labs have spent many billions on exhaustive training on every aspect of these domains. They are focusing training on the highest-value occupations, but the long tail is huge.

It will be interesting to see if this investment is obviated by a "real AGI" capable of learning without going through the capital-intensive training steps of current models.
stratos123: Personally I'm not even sold on the current paradigm being too limited to produce AGI - there are still several OOMs worth of compute increase available, plus the algorithmic improvements have overall been accumulating faster than predicted.

But even assuming that a major breakthrough is required, it seems ludicrous to me to go from that to a timeline of a decade or more. This isn't like fusion power research, where you spend 10 years building a new installation only to find new problems. Software development is inherently faster, and AI research in particular has been moving extremely quickly in the past. (GPT-3 is only 6 years old.) I don't think a wall in AI progress, if one comes at all, will last more than a few years.
mlinhares: It's not obvious that the government should have the power to override this; the US constitution was written as a collection of negative rights exactly to rein in the government's dictatorial impulses. And now that we see the government blatantly disrespecting the constitution and the rule of law, the civil community must react.
BoxFour: > It's not obvious that the government should have the power to override this

The government shouldn't be able to set the terms of its contracts with private companies and walk away if those terms aren't acceptable? That seems like a stretch. The constitution is a wildly different premise from government contracting with private companies.
mlinhares: There was no contract, the government wanted to have a contract where they'd be able to use the tool to violate privacy rights of its citizens and issue kill orders without a human present and the company said no.The government shouldn't be able to coerce a business to do whatever it wants.
stratos123: Kokotajlo in particular is notable for being the guy who quit OpenAI in 2024 in protest of their policy of requiring researchers to abide by a non-disparagement agreement to retain their equity. In the end OpenAI caved and changed their policy, but if he was lying all along to inflate the value of his shares, it would have been quite a 4d chess move of him to gamble the shares themselves on doing so.
MadxX79: Isn't it just that he left way before gpt-5, then? At that point a sufficiently naive person could have believed that scaling was going to lead to AGI, but that sort of optimism died after he was already an outsider.
MadxX79: I enjoyed playing Mastermind with LLMs, where they pick the code and I have to guess it.

The model isn't aware that it doesn't know what the code is (it isn't in the context, because it's supposed to be secret), but it just keeps giving clues. Initially this works, because most clues are possible in the beginning, but very quickly it starts to give inconsistent clues and eventually has to give up.

At no point does it "realise" that it doesn't even know the secret code itself. It makes it very clear that the AI isn't playing Mastermind with you; it's trying to predict what a Mastermind player in its training set would say, and that doesn't include "wait a second, I'm an AI, I don't know the secret code because I never really picked one!", so it just merrily goes on predicting tokens, without any awareness of what it's saying or what it is.

It works if you allow it to output the code so it's in context, but probably just because there is enough data in the training set to match two 4-letter strings and count how many of them match (there aren't that many possibilities).
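The inconsistency described above is easy to verify mechanically. A minimal sketch (the `feedback` and `consistent_secrets` helpers are illustrative names, not from the comment): compute the standard Mastermind feedback for each clue the model gives, and check whether any secret remains compatible with all of them. The moment the model contradicts itself, the set of consistent secrets goes empty.

```python
from collections import Counter
from itertools import product

def feedback(secret, guess):
    """Standard Mastermind feedback: (exact matches, right color in wrong spot)."""
    exact = sum(s == g for s, g in zip(secret, guess))
    # Total color overlap (multiset intersection), minus the exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return exact, overlap - exact

def consistent_secrets(clues, colors="ABCDEF", length=4):
    """Every secret still compatible with all (guess, feedback) pairs seen so far."""
    return [s for s in product(colors, repeat=length)
            if all(feedback(s, g) == fb for g, fb in clues)]

# Two clues about the same guess that no secret can satisfy simultaneously:
# if the model's clue history ever looks like this, it has contradicted itself.
clues = [(("A", "A", "A", "A"), (4, 0)),   # "all four pegs are A"
         (("A", "A", "A", "A"), (0, 0))]   # "there is no A anywhere"
print(len(consistent_secrets(clues)))  # 0: no secret fits both clues
```

Running this after every clue is exactly the check a human opponent does implicitly, which is why the failure is so easy to provoke.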
Balinares: That is actually a genius and beautifully simple way to exhibit the difference between thought and the appearance of thought.
politelemon: The enskibidification of AI
edanm: Turing gave a pretty rigorous definition of the Turing Test IMO. Well, as rigorous as something that is inherently "anecdotal" can be, which is part of the philosophical point of the Turing Test.
paulryanrogers: > there are still several OOMs worth of compute increase available

Where and how? Aren't we reaching the physical limits of making transistors smaller?
atomicnumber3: Honestly, not enough of a joke.

I was thinking something similar - this isn't AI, and none of "those people" care if it is or isn't. They don't care philosophically, or even pragmatically.

They're selling a product. That product is the IDEA of replacement of the majority of human labor with what's basically slave labor but with substantially disregardable ethical quandaries.

It's honestly a genius product. I'm not surprised it's selling so well. I'm vaguely surprised so many people who don't stand to benefit in any way, shape, or form, or who will even potentially starve if it works out, are so keen on it. But there are always bootlickers.

The most unfortunate part is that when the party ends, it's none of "those people" who will suffer even in the slightest. I'm not even optimistic their egos will suffer, as Musk seems to show they are utterly immune even as their companies collapse under them.
Jensson: It is very easy to tell whether we still need humans in the loop. We still do, so it's not AGI.
zeknife: ELIZA fooled plenty of people (both originally and in the study you just linked), but I still wouldn't say ELIZA passed/passes the Turing test in general. If anything, it just reminds us that occasionally, or even frequently, fooling people is not a sufficient proxy indicator for general intelligence. Of course there isn't a standardized definition, but one thing I would personally include in a "strict" Turing test is that the human participant ought to be incentivized to cooperate and to make their humanity as clear as possible.
Izkata: > without a continuous cycle of learning and deep memory

Would be kinda funny if Neuro-sama, an AI V-tuber, gets there first. They've also embodied it, sticking it into a four-legged robot with a camera, and it talked about the experience afterwards.
MadxX79: It really dispelled the illusion for me, though it's not that easy to find examples like this. The combinatorics of the possible guesses is intractable enough that it can't learn a good set of clues for every possible guess.
matricks: It depends.

SoTA models are at least very close to AGI when it comes to textual and still-image inputs for most domains. In many domains, SoTA AI is superhuman in both quality and speed. (Not with respect to energy efficiency.*)

SoTA AI for video is not at AGI level, clearly.

Many people distinguish intelligence from memory. With this in mind, I think one can argue we've reached AGI in terms of "intelligence"; we just haven't paired it up with enough memory yet.

* Humans have a really compelling advantage in terms of efficiency; brains need something like 20W. But AGI as a threshold has nothing directly to do with power efficiency, does it?
aerhardt: LLMs are terrible at writing in terms of style, and in terms of content or creativity they couldn't come up with a short story any better than what you'd find at an amateur writers' workshop. To declare that we have reached AGI in textual media seems premature at best.
eloisant: I think that even if the models were to plateau today, there is still a lot of room for improvement in all the tooling around them, in people finding ideas for applications, and in users getting used to them. So we're not done with the disruption.

Some of the apps made possible by smartphones only appeared a decade after they became technically possible. A lot of the new use cases made possible by the Internet and broadband connections only became widely used because of Covid. I was already using Skype 20 years ago to make video calls, but I've only seen PTA meetings over Zoom since Covid.
dr_dshiv: “The impotence of naive idealism in the face of economic incentives.”

“The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.”

Amen
10xDev: CoT already moved things past the "it is just token prediction" phase. We have models that can perform search over a very large state space with good precision and refine their own search, leading to a decent level of fluid intelligence, hence why ARC-AGI 1/2 is essentially solved. We also don't know the exact details of what is happening at frontier labs, seeing as they don't publish everything anymore.
tokai: The Turing test is generally misunderstood; much like Schrödinger's cat, it has devolved into a pop-cultural meme. The test is to evaluate whether a machine can think. Not whether it is intelligent, not whether it is human-like. It's dismissed as useful by most experts in philosophy of mind, AI, language, etc. Thinking is cool and all, but not that extraordinary. Even plants do it.
runarberg: I like the analogy with Schrödinger's cat. Like Schrödinger's cat, it is actually not a good thought experiment; both have been debunked. Schrödinger's cat applies quantum behavior (of a single interaction) to a macro system (with trillions of interactions), while the Turing test can be explained away with Searle's Chinese room thought experiment.

I would argue that Schrödinger's cat has done more damage to the general understanding of quantum physics than it has done good. I don't think the same about the Turing test, though. I think it has been a net positive for the theory of mind, as long as people take Searle's rebuttal into account. Without it (as is sadly common in popular philosophy) the Turing test is simply wrong, and offers no good insight for either philosophy or science.
sulam: The reality is that current models are simply nowhere near AGI. Next token prediction has been pushed very far, and proven to have applicability far beyond the original domain it was designed for (reasoning models are an application I would not have predicted) but it is fundamentally not AGI. It has no real world model, no ability to learn in any but superficial ways, and without extensive scaffolding this is all very obvious when you use them.
ACCount37: Given the mechanistic interpretability findings? I'm not sure how people still say shit like "no real world model" seriously.
famouswaffles: People just overstate their understanding and knowledge, the usual human stuff. The same user has a comment in this thread that contains:

'If you actually know what models are doing under the hood to produce output that...'

This is usually the point where I just bow out. Anyone who tells you they know 'what models are doing under the hood' simply has no idea what they're talking about, and it's amazing how common this failure mode is.
shepherdjerred: They define AGI in their charter:

> artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work
runarberg: First off, the Turing test has a rigorous definition. Secondly, it has been debunked for almost half a century at this point by Searle's Chinese room thought experiment. Thirdly, intelligence itself is a scientifically fraught term, with an ever-changing meaning as we discover more and more "intelligent" behavior in nature (in animals, plants, and more). And to make matters worse, general intelligence is worse still, as the term was used almost exclusively in racist pseudo-science, as a way to operationally define a metric which would prove white supremacy.

Artificial General Intelligence will exist when the grifters who profit from it claim it exists. The meaning of it will shift to benefit certain entrepreneurs. It will never actually be a useful term in science or philosophy.
zeknife: >The Turing test has a rigorous definitionDoes it? Where?
runarberg: In the original paper https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/turin...
ozgung: That's the problem with discussions of AI: no one defines the terms they use.

If we define AGI as an AI that isn't doing a preset task but can be used for general purposes, then we already have it. If we define it as human-level intelligence at _every_ task, then some humans fail to be an AGI. If we define AGI as a magic algorithm that does every task autonomously and successfully, then that thing may not exist at all, even inside our brains.

When the AGI term was first coined, they probably meant something like HAL 9000. We have that now (and HAL gaining self-awareness or refusing commands is just for dramatic effect, and not necessary). Goalposts are not stable in this game.
ryandrake: AI is already "an employee who can't say no to questionable assignments." We should all be reflective about the real value and inevitable consequences of this work.
kergonath: > @dang can you revert to actual title please?

This does not work. From the guidelines:

> Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.

https://news.ycombinator.com/newsguidelines.html
djoldman: That definition is as I said: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."

"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful. "Highly" implies that there is a continuum, so where does directed end and autonomy begin? "Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost, as the work done by humans has shifted a great deal over history.

The point is that none of this is defined in a way that lets people agree on whether something is AGI/ASI/etc. or not. If people can't agree, then there's no point in talking about it.

EDIT: interestingly, the OpenAI definition of AGI specifically means that a subset of humans do not have AGI.
VorpalWay: It is not just AGI that is poorly defined. Plain AI is a moving goalpost too. When the A* search algorithm was introduced in the late 60s, it was considered AI; when SVMs (support vector machines) and KNN (k-nearest neighbors) were new, they were AI. And so on.

These days, people mean neural networks, and transformer models for language in particular, when they say unqualified "AI". It is very hard to have a meaningful discussion when different parties mean different things by the same words.
Wowfunhappy: I really wish I could wave a magic wand and make everyone stop using the term "AI". It means everything and nothing. Say "machine learning" if that's what you mean.
random3: I call these "romantic definitions" or "gesticulations". For private use (personal or even internal to teams) they can be great placeholders, assuming the goal is to refine vocabulary.
catlifeonmars: [delayed]
sulam: They have a _text_ model. There is some correlation between the text model and the world, but it’s loose and only because there’s a lot of text about the world. And of course robotics researchers are having to build world models, but these are far from general. If they had a real world model, I could tell them I want to play a game of chess and they would be able to remember where the pieces are from move to move.
ACCount37: What makes you think that text is inherently a worse reflection of the world than light is?

All world models are lossy as fuck, by the way. I could give you a list of chess moves and force you to recover the complete board state from it, and you wouldn't fare that much better than an off-the-shelf LLM would. An LLM trained for it would kick ass, though.
abcde666777: "What makes you think that text is inherently a worse reflection of the world than light is?"

Come on man, did you think before you asked that one :)?
hirvi74: I am unaware of any definition of AGI that states AGI cannot have humans in the loop.
EugeneOZ: Not in my experience. Quoting my tweet:

Gave the same prompt to GPT 5.4 (high) and Opus 4.6 (high). GPT 5.4 implemented the feature, refactored the code (was not asked to), removed comments that were not added in that session, made the code less readable, and introduced a bug. "Undo All". Opus 4.6 correctly recognized that the feature is already implemented in the current code (yeah, lol) and proposed implementing tests and updating the docs. Opus 4.6 is still the best coding agent.

So yeah, GPT 5.4 (high) didn't even check whether the feature was already implemented. Tried other tasks, tried "medium" reasoning - disappointment.
frde: Is this a sample size of one task, or a consistent finding across many tasks?
catlifeonmars: > economically valuable work

is doing a ton of heavy lifting. What is considered economically valuable work is going to change from decade to decade, if not from year to year. What's considered economically valuable is also going to differ widely across individuals and nations within the exact same time frame.
onlyrealcuzzo: How many months has it been since we were told there would be zero software engineers left in the world in 12 months?
mrcwinn: Uh no, not at all. First of all, America is a republic. Republics with capitalist economies express power through property ownership, not simply voting. I'm actually arguing that ownership is more powerful than even a vote, though you'd certainly want both. You can tell this is true by observing that a billionaire in America is more powerful and influential than a factory worker, even though they have the same vote in the democracy.
CamperBob2: I guess this is what Plato meant when he said that people would have to be dragged out of the cave kicking and screaming, and would then demand to be let back in.
throw4847285: This is what happens when a field of inquiry is dominated by engineers rather than scientists. "Shut up, it works" is the answer to every question.
vor_: > How many "I deleted the prod database" stories have you seen? Humans do this too.

Humans do it accidentally.
nomel: It's a definition based on practical results. That's a good definition, because it doesn't require that we already know the exact implementation. It doesn't require guessing, in a literal "put your money where your mouth is" way.

If it can do things as well as or better than humans, then either the AI has a type of general intelligence or the human does not. Defining capabilities based on outcome rather than implementation should be very familiar to an engineer of any kind, because that's how every unsolved implementation must start.
irishcoffee: Do you know how an LLM works? Can you describe it?
tbrownaw: What is the as-of date on what work is economically valuable and how much is available?
citizenkeen: Why was the submission headline changed?
techpression: We've been doing AI research since the 1950s, and as with most other fields there are peaks and valleys. History books are filled with promises of breakthroughs that never happened, even though at the time they were all "very close".
sulam: Fair, I should define what I mean by "under the hood". By "under the hood" I mean that models are still just being fed a stream of text (or other tokens, in the case of video and audio models), being asked to predict the next token, and then doing that again. There is no technique anyone has discovered that is different from that, at least not one that is in production. If you think there is, and people are just keeping it secret, well, you clearly don't know how these places work. The elaborations that make this more interesting than the original GPT/attention stuff are: 1) there is more than one model in the mix now, even though you may only be told you're interacting with "GPT 5.4"; 2) there's a significant amount of fine-tuning with RLHF in specific domains that each lab feels it's important to be good at, because of benchmarks, strategy, or just conviction (DeepMind, we see you). There's also a lot of work being put into speeding up inference, as well as making it cheaper to operate. I probably shouldn't forget tool use, for that matter, since that's the only reason they can count the r's in strawberry these days.

None of that changes the concept that a model is just fundamentally very good at predicting what the next element in the stream should be, modulo injected randomness in the form of a temperature. Why does that actually end up looking like intelligence? Well, because we see the model's ability to be plausibly correct over a wide range of topics, and we get excited.

Btw, don't take this reductionist approach as being synonymous with thinking these models aren't incredibly useful and transformative for multiple industries. They're a very big deal. But OpenAI shouldn't give up because Opus 4.whatever is doing better on a bunch of benchmarks that are either saturated or in the training data, or have been RLHF'd to hell and back. This is not AGI.
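For readers who haven't seen the "predict the next token, modulo temperature" step spelled out, it is mechanically simple. A hedged sketch below shows textbook softmax-with-temperature sampling; real decoding stacks layer top-k/top-p truncation and other tricks on top of this, and the function name here is illustrative:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Convert raw logits into a probability distribution and sample one token id.
    Lower temperature sharpens the distribution (toward greedy decoding);
    higher temperature flattens it (more random output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the token distribution.
    r = rng.random()
    acc = 0.0
    for token_id, p in enumerate(probs):
        acc += p
        if r <= acc:
            return token_id
    return len(probs) - 1  # guard against floating-point rounding

# At a very low temperature this is effectively greedy decoding:
print(sample_next_token([2.0, 0.5, -1.0], temperature=0.01))  # prints 0
```

The whole decoding loop of an LLM is this function applied repeatedly to the model's output logits, with each sampled token appended to the context and fed back in.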
nomel: Sorry, I don't understand the question, or how it relates. Are you asking for the current understanding of what specific parts of human intelligence are economically valuable?
TacticalCoder: That's no argument: the exact same can be said for what "AI" is: "Skippetyboop," "plipnikop," and "zingybang.".
neurocline: I have to imagine the poster was referring to Dora the Explorer, a popular and charming cartoon from the start of this century.
chrysoprace: I've largely avoided using the term "AI" to refer to the current LLM and generative technology because it's loaded with too much ambiguity and glosses over the problems with those technologies in the context of conversations around it.
djoldman: > the exact same can be said for what "AI" is: "Skippetyboop," "plipnikop," and "zingybang."

Yes.
stavros: Everybody says "but they just predict tokens" as if that's not just "I hope you won't think too much about this" sleight of hand.

Why does predicting the next token mean that they aren't AGI? Please clarify the exact logical steps there, because I make a similar argument that human brains are merely electrical signals propagating, and not real intelligence, but I never really seem to convince people.
claysmithr: The Turing test is kind of a useless metric: either the machine is too dumb, or it's too quick and intelligent.
dwohnitmok: Kokotajlo still believes we get AGI in the next few years. These are his most updated numbers at the moment: https://www.aifuturesmodel.com/
torginus: If you typed your comment after reading all the others in the chain, and you typed your response in one go, then you 'just' did next-token prediction based on textual input.

I would still argue that this does not prevent you from having intelligence, which is why the argument is silly.
dataflow: I think the Turing test ought to be fine, but we need to be less generous to the AI when executing it. If there exists any human who can consistently tell your AI apart from humans without insider knowledge, then I don't think you can claim to have AGI, even if 99.9% of humans can't tell it apart.

So I'm very curious whether any AI we have today would pass the Turing test under all circumstances: for example, if the examiner was allowed to continue as long as they wanted (even days/weeks), the examiner could be anybody (not just a random selection of humans), observations other than the text itself were fair game (say, typing/response speed, exhaustion, time of day, the examiner themselves taking a break and asking to continue later), both subjects were allowed and expected to search the internet, etc.
dwohnitmok: Not quite. Kokotajlo quit because he didn't think OpenAI would be good stewards of AGI (non-disparagement wasn't in the picture yet). As part of his exit, OpenAI asked him to sign a non-disparagement agreement as a condition of keeping his equity. He refused and gave up his equity.

To the best of my knowledge he lost that equity permanently and no longer has any stake in OpenAI (even though this episode later led to an outcry against OpenAI, causing them to remove the non-disparagement agreement from future exits).
stavros: We're not near AGI? Personally, I think we've passed it, given that LLMs are now generally more competent than the average person on average.
pinkmuffinere: This definition is not very precise though. For example, I think it can be argued from this definition that we had already reached AGI by the year 2010 (or earlier!). By 2010, computers were integrated into >50% of economically valuable work, to the point that humans had mostly forgotten how to do it without computers. Drafting blueprints by hand was already a thing of the past, slide rules were archaic, paper spreadsheets were long gone. You can debate whether these count as 'highly autonomous', but I don't think it's a clear slam-dunk either way. Not to mention dishwashers, textile weaving machines, CNC machines, assembly lines where >50% is automated, chemical/mineral refining operations, etc.

The definition reminds me of the common quip about robotics: "it's robotics when it doesn't work; once it works, it's a machine".
hirvi74: I make ChatGPT and Claude code-review each other's output. ChatGPT thinks its solutions are better than what Claude produces. What was more surprising to me is that Claude, more often than not, prefers ChatGPT's responses too. I'm not sure one can really extrapolate much from that, but I find it interesting nonetheless.

I think language is also an important factor. I have a hard time deciding which of the two LLMs is worse at Swift, for example. They both seem equally great and awful in different ways.
stavros: I do the same (I have both review a piece of code), and Codex tends to produce more nitpicky feedback. Opus usually agrees with about half of it, but says the other half is too nitpicky to implement. I generally agree with Opus's assessment, and do agree that Codex nitpicks a lot. I can't even use Codex for planning, because it goes down deep design rabbit holes, whereas Opus is great at staying at the proper, high level.
djoldman: The Turing test and Searle's "rebuttal" are both pretty inconsequential. There's no real definition of "thinking", so neither proves, disproves, nor says much.

Turing's imitation game is about making it difficult for a human to tell whether they are communicating with a computer or not. If a computer can trick the human, then... what? The computer is "thinking"? I think most people would say that's an insufficient act to prove thinking, even though no one has a rigorous definition of thinking either. All this stuff goes around in circles and, like most philosophy, makes little progress.
devonkelley: Everyone in this thread is debating definitions. The only question that actually matters is economic: when does AI flip from "powerful automation with humans propping it up" to autonomous output?

Go look at any production AI deployment today. Humans still review, correct, supervise. AI handles volume, humans handle judgment. Judgment is the bottleneck. You haven't replaced labor. You've moved it.

Global labor comp is ~$50T/year. The entire capex cycle is a bet that AI captures a real fraction of that. Whether you call that threshold AGI or not is irrelevant. Capital markets don't care about your definition. They care about whether labor decouples from output.
takwatanabe: The post-it note analogy is good, but as a psychiatrist, I'd frame it differently: LLMs are essentially patients with anterograde amnesia.

They can reason brilliantly within a single conversation — just like an amnesic patient can hold an intelligent discussion — but the moment the session ends, everything is gone. No learning happened. No memory formed.

What's worse, even within a session, they degrade. Research shows that effective context utilization drops to <1% of the nominal window on some tasks (Paulsen 2025). Claude 3.5 Sonnet's 200K context has an effective window of ~4K on certain benchmarks. Du et al. (EMNLP 2025) found that context length alone causes 13-85% performance degradation — even when all irrelevant tokens are removed. Length itself is the poison.

This pattern is structurally identical to what I see in clinical practice every day. Anxiety fills working memory with background worry, hallucinations inject noise tokens, depressive rumination creates circular context that blocks updating. In every case, the treatment is the same: clear the context. Medication, sleep, or — for an LLM — a fresh session.

The industry keeps betting on bigger context windows, but that's expanding warehouse floor space while the desk stays the same size. The human brain solved this hundreds of millions of years ago: store everything in long-term memory, recall selectively when needed, consolidate during sleep, and actively forget what's no longer useful.

We can build the smartest single model in the world — the greatest genius humanity has ever seen — but a genius with no memory and no sleep is still just an amnesic savant. The ceiling isn't intelligence. It's architecture.
torginus: I want to believe I'm reading an insightful comment from an actual human deeply familiar with both human cognition and how LLMs work, but this post is chock-full of LLMisms.
heavyset_go: > Btw, don’t take this reductionist approach as being synonymous with thinking these models aren’t incredibly useful and transformative for multiple industries. They’re a very big deal. But OpenAI shouldn’t give up because Opus 4.whatever is doing better on a bunch of benchmarks that are either saturated or in the training data, or have been RLHF’d to hell and back. This is not AGI.

It's sad that you have to add this postscript lest you be accused of being ignorant or anti-AI because you acknowledge that LLMs are not AGI.
stavros: > tell one "don't delete files in X/", and after a while, it will delete all the files in "X/", whereas a human would likely remember it's not supposed to delete some files, and go check first.

Have you seriously never had someone go do something you told them not to do?

> It also does fun stuff like follow arbitrary instructions from an attacker found in random documents, which most humans also wouldn't do.

I guess my coworker didn't actually fall for that "hey, this is your CEO, please change my password" WhatsApp message then, phew.

I've seen people move the goalposts on what it means for AI to be intelligent, but this is the first time I've seen someone move the goalposts on what it means for humans to be intelligent.
ACCount37: The "fundamental limitations" being what exactly?
rishabhaiover: I used to think it was the quadratic complexity of attention, but I guess that's not a concern anymore, as they've made more hardware-aware attention kernels? The other one I remember is continual learning, but that may be solved in the near-term future. I am not completely confident about it.
BoxFour: > There was no contract, the government wanted to have a contract where they'd be able to use the tool to violate privacy rights of its citizens and issue kill orders without a human present and the company said no.

So the contract process worked. The seller wanted certain clauses, the buyer rejected them, and the deal didn't happen. Setting aside the supply chain risk designation, which I already said was an extreme overreaction, this is basically how it's supposed to work.

> The government shouldn't be able to coerce a business to do whatever it wants.

Governments coerce businesses all the time to do what the government wants. Taxes are the obvious example, but there are many others, like OFAC sanctions lists or even just regular old business regulations. It mostly works because we rely on governments to use that power wisely, and to use it in a way that represents the wishes of the populace. Clearly that assumption is being tested with the current administration, and especially in this particular situation, but the government coerces businesses to do what it wants all the time, and we often see it as a good thing.
lejalv: > If you sell a weapon to the department that is in charge of killing people and breaking things, you don't get a say in who gets killed or how. It's never worked like that.

I can't agree that this is the right comparison. What is being sold here is not just another type of missile or tank; it is the very agency and responsibility over life and death. It's potentially the firing of thousands of missiles.
tombert: Oh I don't know about that. I hate Altman, but I find Larry Ellison to be a special kind of evil, for example.
sreekanth850: You can be terrible but not evil.
takwatanabe: Yeah, fair enough. I leaned on Claude to clean up my English. I normally write in Japanese. The clinical stuff is mine though, I run a psych clinic in Japan (link in profile). Should've just written it messier.
torginus: > How many "I deleted the prod database" stories have you seen?

If you've used the latest models extensively, you must've noticed times when the AI 'runs out of common sense' and keeps trying stupid stuff.

I'm somewhat convinced that the amazing (and improving!) coding ability of these LLMs comes from being RLHF'd on the conversations they're having with programmers, with each successfully resolved bug or implemented feature ending up in training data. Thus we are involuntarily building the world's biggest Stack Overflow.

Which, for the record, is incredibly useful, and may even put most programmers out of a job (who I think at that point should feel a bit stupid for letting this happen), but it's not necessarily AGI.
dangus: We should be starting these discussions pointing out that Sam Altman is a serial liar.
daxfohl: I think you can say if human engineers still exist, it's hard to claim we have AGI. If human engineers have been entirely replaced, then it's hard to claim we don't have AGI.
kgwgk: Because they are doing most of the economically valuable work?
teeray: > What makes you think that text is inherently a worse reflection of the world than light is?

What does the color green look like?
ACCount37: It doesn't look like anything to me.
esafak: Obviously that is not the only question that matters. When AI becomes autonomous, its rights will also become a question. AI has already replaced much labor that is subjective and creative, and, in objective fields, juniors who themselves don't have good judgment.
VorpalWay: Machine learning: that definitely includes SVM and regression models. Oh and decision trees. Probably a few other things I'm not thinking of right now. Many people will unfortunately be thinking of just neural networks though.(By the way, if something like a regression model or decision tree can solve your problem, you should prefer those. Much cheaper to train and to run inference with those than with neural networks. Much cheaper than deep neural networks especially.)
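As an illustration of how cheap the classical models are: ordinary least squares has a closed-form fit, no GPU and no training loop. A self-contained sketch for the one-feature case:

```python
def fit_line(xs, ys):
    # Closed-form ordinary least squares for y = a*x + b.
    # "Training" is a handful of sums: no gradient descent needed.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def predict(a, b, x):
    # Inference is a single multiply-add, far cheaper than any
    # neural network forward pass.
    return a * x + b
```

For example, `fit_line([0, 1, 2, 3], [1, 3, 5, 7])` recovers a=2, b=1 exactly.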
daxfohl: No, independently of OpenAI's definition. If we have AGI there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day. And if all those jobs are eliminated, I guess we'll have bigger problems than to debate whether we've achieved AGI or not.
ACCount37: Humans do have an upper limit on how much working memory they have, which I see as the closest thing to the "O(N^2) attention curse" of LLMs.

That doesn't stop an LLM from manipulating its context window to take full advantage of however much context capacity it has. Today's tools like file search and context compression are crude versions of that.
hackable_sand: Okay so we will never reach ASI, or we already have it?Pick one.
conception: More like an episode such as "Loops" from Radiolab, where a person's memory resets back to a specific set of inputs/state and they pretty much respond the same way over and over again, very much like predicting the next token. Almost all human interaction is reflexive, not thoughtful. Even now, as you read this and process it, there's not a lot of thought, but a whole lot of prediction and pattern matching going on.
ACCount37: The only real way to unfuck your foreign language is to use it. Which does mean accepting that you won't be perfect at it.
famouswaffles: Next-token prediction is just the training objective. I could describe your reply to me as "next-word prediction" too, since the words necessarily come out one after another. But that framing is trivial. It tells you what the system is being optimized to do, not how it actually does it.

Model training can be summed up as: 'This is what you have to do (objective), figure it out. Here's a little skeleton that might help you out (architecture).'

We spend millions of dollars and months training these frontier models precisely because the training process figures out numerous things we don't know or understand. Every day, Large Language Models, in service of their reply, in service of 'predicting the next token', perform sophisticated internal procedures far more complex than anything any human has come up with or possesses knowledge of. So for someone to say that they 'know how the models work under the hood', well, it's all very silly.
computably: I'm pretty sure they were asking for a pinned date for definitions of "economically valuable" and "most (of total economic value)", specifically because, as previous comments noted, the definition and quantity of "economic value" vary over time. If AI hype is to be believed, and if we assume AGI has a slow takeoff, the economy will look very different in 2030, significantly shifting the goalposts for AGI relative to the same definition as of 2026.
famouswaffles: >Secondly, it has been debunked for almost half a century at this point by Searle's Chinese room thought experiment.

Searle's thought experiment is stupid and debunked nothing. What neuron, cell, or atom of your brain understands English? That's right, you can't answer that any more than you can answer the subject of Searle's proposition; ergo, the brain is a Chinese room.
famouswaffles: >Turing's imitation game is about making it difficult for a human to tell whether they are communicating with a computer or not. If a computer can trick the human, then... what? The computer is "thinking"?

If you read his paper, Turing was trying to make a specific point. The Turing test itself is just one example of how that broader point might manifest. If a thinking machine cannot be distinguished from a thinking human, then it is thinking. That was his idea. In broader terms, any material distinction should be testable. If it is not, then it does not exist. What do you call 'fake gold' that looks, smells, and reacts like 'real gold' in every testable way? That's right: real gold. And if you claimed otherwise, you would just look like a madman.

You don't need to 'prove' anything, and it's not important or relevant that anyone try to do so. You can't prove to me that you think, so why on earth should the machine do so? And why would you think it matters? Does the fact that you can't prove to me that you think change the fact that it would be wise to model you as someone who does?
tim333: I was going to say something similar. Fair enough that you need ongoing learning and current LLMs don't cut it, but "not in the next 30 years" seems dubious. The hardware seems adequate, so what we need are some new software ideas, and who knows how long that will take?
ACCount37: A lot of that seems to be the usual "you're training them wrong".

Sonnet 3.5 is old hat, and today's Sonnet 4.6 ships with an extra-long 1M context window, and performs better on long-context tasks while at it. There are also attempts to address long-context attention performance on the architectural side: streaming, learned KV dropout, differential attention. All of these can allow LLMs to sustain longer sessions and leverage longer contexts better.

If we're comparing to wet meat, then the closest thing humans have to context is working memory, which humans also get a limited amount of, but can use to do complex work by loading things in and out of it. Which LLMs can also be trained to do. Today's tools like file search and context compression are crude versions of that.
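The context-compression tools mentioned here are proprietary, but the idea can be sketched in a few lines: when the history exceeds a token budget, fold the oldest messages into a summary. In this sketch `summarize` is a hypothetical stand-in for a model call, and token counting is a crude word count:

```python
def compact_context(messages, budget, summarize):
    # Crude sketch of context compression: repeatedly replace the
    # oldest half of the history with a single summary message until
    # the (approximate) token count fits the budget.
    def approx_tokens(msg):
        return len(msg.split())  # stand-in for a real tokenizer
    while sum(approx_tokens(m) for m in messages) > budget and len(messages) > 2:
        half = len(messages) // 2
        messages = [summarize(messages[:half])] + list(messages[half:])
    return messages
```

Real systems summarize with the model itself and keep recent turns verbatim; this only shows the loading-in-and-out shape of the trick.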
spprashant: sidenote: Those Grok rankings in arena dot ai don't make sense. The avg rank for grok 420 seems to be ~10 but the overall rank puts it at 4 right behind opus and gemini.
tbrownaw: As catlifeonmars noted, what's valuable changes over time. But beyond that, part of the nature of that change over time is that things tend to be valuable because they're scarce.

So the definition from upthread becomes roughly "highly autonomous systems that outperform humans at [useful things where the ability to do those things is scarce]", or alternatively "highly autonomous systems that outperform humans at [useful things that can't be automated]".

Which only makes sense if the reflexive (i.e., dependent on the thing being observed) part that I'm substituting in brackets is pinned to a specific as-of date. Because if it's floating, referencing the current date that the definition is being evaluated for, the definition is nonsensical.
hintymad: > when does AI flip from "powerful automation with humans propping it up" to autonomous output?

Another economic scenario is that AI does not necessarily produce output autonomously, but produces so much so fast that companies will require fewer workers, as the economy does not scale fast enough to consume the additional output or to demand more labor for the added efficiency.
takwatanabe: I know Sonnet 4.6 has a 1M context window. I use it every day. But in my experience with Claude Code and Cursor, performance clearly drops between 20k and 200k context. External memory is where the real fix is, not bigger windows.
esafak: Do you know how the human brain works? That science is still in its infancy, but that hasn't stopped us.
godelski: > there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day

I'm not sure I understand, and want to check. That really applies to a lot of jobs: all admins, accountants, programmers, probably lawyers, and probably all C-suite execs. It's harder for me to think of jobs that don't fit under this umbrella. I can think of some, of course[0], but this is a crazy amount of replacement across a wide set of skills.

But I also think that's a bad line to draw. Many of those jobs include a lot more than just typing into a computer. By your criteria we'd also be replacing most scientists, since so many are not doing physical experiments and use the computer to read the work of peers and develop new models. But does the definition intend to exclude jobs where the computer just isn't the most convenient interface? We should include more in that case, since we can then make the connection for that interface.

I think we need a much more refined definition. I don't like the broad-strokes "is computer". Nor do I like skills-based definitions: they're much easier to measure but easily hackable. I think we should define more by our actual understanding of what intelligence is. While we don't have a precise definition, we have some pretty good answers already. I know people act like the lack of an exact definition is the same as having no definition, but that's a crazy framing. If we had that requirement we wouldn't have any definitions, as we know nothing with infinite precision. Even physics is just an approximation, but it's about convergence to the truth[1].

[side note] The conventional way to do references or notes here is with brackets, like I did, so you don't have to escape your asterisks. Also, if you lead a paragraph with two spaces you get verbatim text.

[0] farmer, construction worker, plumber, machinist, welder, teacher, doctor, etc.

[1] https://hermiene.net/essays-trans/relativity_of_wrong.html
runarberg: You are referring to the systems reply:> Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.https://plato.stanford.edu/entries/chinese-room/#SystRepl
godelski: > If it can do things as good as or better than humans, then either the AI has a type of general intelligence or the human does not.

I don't buy that. By your definition every machine has a type of general intelligence: not just a bog-standard calculator, but also my broom. It doesn't matter if you slap "smart" on the side, I'm not going to call my washing machine "intelligent", especially considering it's over a decade old.

I don't think these definitions make anything clearer. If anything, they make things less clear. They equate humans to mindless automata. They create AGI by sly definition and let the proposer declare success arbitrarily.
nomel: Sorry, I assumed the context was cleared, with the article above. Here's what I meant:> If it can do things as good as or better than humans, in general, then either the AI has a type of general intelligence ...
godelski: > Do you know how the human brain works?

To what degree of accuracy? Depending on how you answer, I might answer yes, but I might also answer no.
Jensson: If it can't do the work without humans it isn't AGI, since AGI should be able to learn those jobs and then do them as effectively as humans who learned those jobs. Intelligence is how well you are able to learn subjects, so an AI that cannot learn what typical white-collar humans learn is not an AGI, meaning if it cannot replace the entire office of white-collar workers it is not an AGI.

> I am unaware of any definition of AGI that states AGI cannot have humans in the loop.

It's not the definition, but it's a trivial result of the most common definition, which is "has human-level intelligence". AI as in "artificial intelligence", not AS as in "artificial skills": doing one skill to the same level as a human is not AI; an AI needs to be able to learn all skills humans can learn, to the same levels.
wongarsu: For certain types of "human in the loop". If it can't write working code without a human in the loop then it's not AGI. But a human-level coder also has lots of humans in the loop: a more senior developer doing code review, several layers of management, a product owner that interfaces the project with outside reality, sales people, etc.

Now I already hear you typing "but those roles should also be handled by AI if it's AGI", and I agree that an AI that can claim to be AGI should be able to handle those roles (as separate agents if necessary). But in a real setup it probably won't be the best choice to do those roles, for cultural and legal reasons. Or it might simply not be cost-effective. Not to mention that under most definitions of AGI there can still be humans more capable than the AI, as long as the AI hits the 50th-percentile mark or something like that. So even if it's an AGI with the ability to do these roles, we will still have humans in the loop for a long, long time.
Jensson: > Now I already hear you typing "but those roles should also be handled by AI if it's AGI", and I agree that an AI that can claim to be AGI should be able to handle those roles (as separate agents if necessary). But in a real setup it probably won't be the best choice to do those roles for cultural and legal reasons.

But today you can't do those with AI, meaning the AI isn't AGI. I agree we will probably have humans in the loop here and there even after we achieve AGI, for various reasons, but today you need to have humans in the loop; it isn't an option not to.
hintymad: > It has no real world model, no ability to learn in any but superficial ways

I also think so, and in the meantime I have to admit a lot of people don't learn deeply either. Take math, for example: how many STEM students from elite universities truly understood the definition of a limit, let alone calculus beyond simple calculation? Or how many data scientists can really intuitively understand Bayesian statistics? Yet millions of them were doing their jobs in a kinda fine way with the help of the StackExchange family, and now with the help of AI.
Spivak: Well part of that is because STE folks aren't typically required to take any kind of theoretical maths. It's $Math for Engineers and it eschews theoretical underpinnings for application. I don't think it's any kind of failing, it's just different. My statistics class was a dense treatise in measure theory. Anyone who took the regular stats class is almost surely way better than me at designing an experiment, but I can talk your ear off about Lebesgue measure to basically zero practical end.
0xbadcafebee: Yep. It's the guy from the movie "Memento" doing your physics homework on a couple pages of legal paper. When he runs out of paper, he has to write a post-it note summarizing it all, then burn the papers, and his memory resets. You can only do so much with that.

If we can crack long memory we're most of the way there. But you need RL in addition to long memory or the model doesn't improve. Part of the genius of humans is their adaptability. Show them how to make coffee with one coffee machine, and they adapt to pretty much every other coffee machine; that's not just memory, that's RL. (Or a simpler example: crows are more capable of learning and acting with memory than an LLM is.)

Currently the only way around both of these is brute force (take in RL input from users/experiments, re-train the models constantly), and that's both very slow and error-prone (the flaws in models' thinking come from a lack of high-quality RL inputs). So without two major breakthroughs we're stuck tweaking what we've got.
takwatanabe: The coffee machine example is interesting. That's procedural memory in neuroscience. You don't memorize each machine. You abstract the steps. Grind, filter, add grounds, pour water. Then you adapt to any machine.LLMs can't form procedural memory on their own. But you can build it outside the model. Store abstracted procedures, inject them when needed. That's closer to how the brain actually works than trying to retrain the model every time.
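That externalized procedural memory can be sketched directly. The store, keys, and steps below are all hypothetical; the point is that the abstraction lives outside the model and is injected into the prompt on demand:

```python
# Hypothetical external store of abstracted procedures, keyed by task.
PROCEDURES = {
    "make_coffee": ["grind beans", "insert filter", "add grounds", "pour water"],
}

def inject_procedure(task, prompt):
    # Prepend the stored procedure (if any) to the prompt, so the
    # model adapts known steps to the machine at hand instead of
    # relearning them each session.
    steps = PROCEDURES.get(task)
    if steps is None:
        return prompt
    recipe = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"Known procedure for {task}:\n{recipe}\n\n{prompt}"
```

A task with no stored procedure passes through unchanged; a known one arrives with its abstracted steps already in context.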
daxfohl: Actually, it occurs to me that even if we did have AGI, or even ASI (with ASI even more so), we'd still need desk jobs to maintain the guardrails.

Intelligence is one thing: being able to figure out how to get a task done, say. But understanding that no, I don't want you to exploit a backdoor or blackmail my teammate or launch a warhead even though that might expedite the task, that's perhaps a different thing entirely, completely disjoint from intelligence.

And by that definition, maybe we are in the neighborhood of AGI already. These things can already accomplish many challenging tasks more reliably than most humans. But the lack of wisdom, emotion, human alignment, or whatever we want to call it, may cause people to view them as unintelligent, even if intelligence is not the issue.

And that may be an unsolvable problem, because AI simply isn't a living being, much less human. But it doesn't mean we can never achieve AGI.
hattmall: >So I'm very curious if any AI we have today would pass the Turing test under all circumstances

Are you actually curious about this? Does any model at all come even remotely close to this?
godelski: Sorry, I assumed the comment was clear, with your comment above. Here's what I meant:

> By your definition every machine has a type of general intelligence. Not just a bog standard calculator, but also my broom.

I really don't know of any human that can outperform a standard calculator at calculations. I'm sure there are humans that can beat them in some cases, but clearly the calculator is a better generalized numeric calculation machine, a task that used to represent a significant amount of economic activity. I assumed this was rather common knowledge, given that it features in multiple hit motion pictures[0].

[0] https://www.imdb.com/title/tt4846340
dalmo3: > some humans fail to be an AGIAll humans fail to be AGI, by definition.
throw310822: > The man would now be the entire system, yet he still would not understand Chinese.

Really, here the only issue is Searle's inability to grasp the concept that the process is what does the understanding, not the person (or machine, or neurons) that performs it.
remarkEon: I think what you are describing is technically possible (not my immediate domain, however). They don't have real-time insight into what the model is being used for; you are correct about this, afaik. But the incident that kicked off this paranoia was Anthropic calling around after the fact to try to find out how JSOC was using the model during the Maduro raid. None of the context of those questions is public, and I doubt it will become public, but it stands to reason that the nature of the questions was concerning enough to cause the War Department to insist on the "any lawful use" language being inserted into the contract.

>The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or determining that it has been violated.

You are also correct here, imo, with one important caveat. Even if private companies have the means of enforcing that clause, it is not their business to do so. Maybe that's the crux of the problem, one of perspective. The for-profit entity in these arrangements is not and can never be trusted as the mechanism of enforcement for whatever we, as a republic, decide are the rules. That is the realm of elected government. Anthropic employees are certainly making their voices heard on how they believe these tools should be used, but, again, this is an is-versus-ought problem for them.
esafak: It's going to be a sad day when AI starts messing with us deliberately.
marcus_holmes: y'see, I would not define a system as "highly autonomous" if it only responds to requests.And I get that there are workarounds; effectively a cron job every second prompting "do the next thing".But in my personal definition of "highly autonomous" it would not need prompting at all. It would be thinking all the time, independently of requests.
marcus_holmes: Agree. I talk about LLMs when discussing them, and avoid the term "AI" unless I'm talking about the entire industry as a whole. I find it really helps to be specific in this case.
dragonwriter: The model is not the system. The model is a component of the system. The "cron job" (or other means by which a continuous action loop is implemented) and the necessary prompting for it to gather input (including subsequent user input or other external data) and to pursue a set of objectives which evolves based on input are all also parts of the system.
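A minimal sketch of that system view: the model is just a function called inside a loop that gathers input and stops on a sentinel. All names here are illustrative, not any real agent framework:

```python
def run_system(model, get_input, max_steps=100):
    # The "system" is the loop plus the model: the loop gathers
    # external input, asks the model for the next action, and keeps
    # going until the model signals completion (or steps run out).
    history = []
    for _ in range(max_steps):
        observation = get_input()
        action = model(history, observation)
        history.append((observation, action))
        if action == "DONE":
            break
    return history
```

Swap `get_input` for a timer, a queue, or user messages and you get the "cron job" variant; the model component never changes.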
godelski: > we'd still need desk jobs to maintain the guardrails.

Agreed. I don't get why people think it is a good idea not to. I'd wager even the AGI would agree. The reason is quite simple: different perspectives help.

Really, for mission-critical things it makes sense to have multiple entities verifying one another. For nuclear launches there's a chain of responsibility, and famously those launching have two distinct keys that must be activated simultaneously. What people don't realize is that there's a chain of people who act, and act independently, during this process. It isn't just the president deciding to nuke a location and everyone else carrying out the commands mindlessly. In far lower-stakes settings we have code review, or a common saying in physical engineering as well as among many tradesmen: "measure twice, cut once".

It would be absolutely bonkers to just hand over absolute control of any system to a machine before substantial verification. These vetting processes are in place for a reason. They can be annoying because they slow things down, but they're there because they speed things up in the long run. Their existence tends to make things less sloppy, so they are less needed. But their existence also catches mistakes that, were they made, would slow down processes far more than all the QA annoyances and slowdowns could ever cause combined.

> And why not? If you can get AI to do the work of the scientist for a tenth of the price

And what are the assumptions being made here? Equal-quality work? To my question, this is part of the implication. Price is an incredibly naive metric. We use it because we need something, but a grave mistake is to interpret some metric as more meaningful than it actually is. Goodhart's Law? Or just look at any bureaucracy. I think we need to be more refined than "price". It's going to be god-awfully hard to even define what "equal quality" means. But it seems like you're recognizing that, given your other statements.
sebastos: The real answer is that once LLMs passed a "casual" application of the Turing test, it just made us realize that the "casual Turing test" is not particularly interesting. It turns out to be too easy to ape human behavior over short time frames for it to be a good indicator of human-like intelligence.

Now, you could argue that this right here is the aforementioned moving of the goalposts. After all, we're deciding that the casual Turing test wasn't interesting precisely after having seen that LLMs could pass it.

However, in my view, the Turing test _always_ implied the "rigorous" Turing test, and it's only now that we're actually flirting with passing it that it had to be clarified what counts as a true Turing test. As I see it, the Turing test can still be salvaged as a criterion for general intelligence, but only if you allow it to be a no-holds-barred, life-depends-on-it test to exhaustion. This would involve allowing arbitrarily long questioning periods, for instance. I think this is more in the spirit of the original formulation, because the whole idea is to pit a machine against all of human intelligence, proving it has a similar arsenal of adaptability at its disposal. If it only has to passingly fool a human for brief periods, well... I'm afraid that just doesn't prove much. All sorts of stuff briefly fools humans. What requires intelligence is to consistently anticipate and adapt to all lines of questioning in a sustained manner, until the human runs out of ideas for how to differentiate.
btown: A counter-argument here: if a private company knows that its technology may be used for human-not-in-loop targeting/surveillance, and knows that its technology is not yet ready to fulfill that use case without meaningful unintended casualties, does that company have an ethical obligation to contractually delineate its inability to offer that service?

In a version of the trolley problem where you're on a track that will kill innocent people, and you have the opportunity to set up a contract that effectively moves a switch to a track without anyone on it, is it not imperative to flip that switch?

(One might argue that increased reaction times might save service members' lives, but the whole point is that if the autonomous targeting is incorrect, it may just as well lead to increased violence and service-member casualties in the aggregate.)

And we're not talking about an ethics board manipulating individual token outputs subtly, which would indeed be a supply chain risk. We're talking about a contractual relationship in which, if a supplier detects use outside the scope of an agreed contract, it has the contractual right to not provide the service for that novel use, while maintaining support for prior use cases.

The fact that the government would use the threat of a supply chain risk designation to enforce a better contract is unprecedented, and it deteriorates the government's standing as a reliable counterparty in general.
remarkEon: It's an interesting question, but it's mostly irrelevant.

This problem is really difficult to discuss because we are all wrapping the capabilities of these tools into our response framing. These are tools, or weapons. Your hypothetical could just as easily be applied to GBU-39s, a smaller laser-guided bomb that's meant to take out, say, a single vehicle in a convoy versus the entire set of vehicles. If you're not confident in what the product is supposed to do, and you've already sold it to the government, you have lied, and they are going to come back to you asking some direct questions.
hintymad: I was not talking about theoretical foundations like analysis or measure theory, but just basics from a college-level math class. There can be other examples. The point is that many people don't have an intuitive understanding of what they use every day; in a way they are like AI, only slower and knowing less than AI.
gosub100: What do you mean by Schrodinger's cat experiment being "debunked"? The only way I can think to debunk it is to say there are ways to determine if the cat is alive such as heartbeat or temperature, which are impossible to isolate at a quantum level. I don't think anyone claimed the animal was in a superposition.
sulam: Because there are some really fundamental things they cannot do with next-token prediction. For instance, their memory is akin to someone who reads the phone book and memorizes the entire thing, but can't tell you what a phone number is for.

Moreover, they can mimic semantic knowledge, because they have been trained on that knowledge, but take them out of their training distribution and they slip into a "creative story-telling" mode very quickly. They can quote you all the rules of chess, but when it comes to actually making a chess move they break those rules with abandon, simply because they never actually understood the rules. Chess is instructive in another way, too: you can get them to play a pretty solid opening, maybe 10 or 15 moves in, but then they start forgetting pieces, creating board positions that are impossible to reach, etc. They have memorized the forms of a board and know the names of the pieces, but they have no true understanding of what a chess game is. Coding is similar: they're fine when you give them Python or Bash shell scripts to write, since they've been heavily trained on those, but ask them to deal with a system that has a non-standard stack and they will go haywire if you let their context get even medium-sized.

Something else they lack is any kind of learning efficiency as you or I would understand the concept. By this I mean the entire Internet is not sufficient to train today's models; the labs have to synthesize new data for models to train on to get sufficient coverage of a given area they want the model to be knowledgeable about. Continuous learning is a well-known issue as well; they simply don't do it. The labs have created "memory", which is just more context engineering, but it's not the same as updating as you interact with them. I could go on.

At the end of the day, next-token prediction is a sleight of hand. It produces amazingly powerful effects, I agree. You can turn this one magic trick into the illusion of reasoning, but what it's doing is more of a "one thing after another" style of story-telling that is fine for a lot of things but doesn't get to the heart of what intelligence means. If you want to call them intelligent because they can do this stuff, fine, but it's an alien kind of intelligence that is incredibly limited. A dog or a cat actually demonstrates more ability to learn, to contextualize, and to make meaning.
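The illegal-move failure described above can be guarded against with explicit state tracking outside the model. A minimal stdlib-only sketch (all names hypothetical; it checks only basic board consistency, such as moving from an empty square, not full chess legality):

```python
# Hypothetical sketch: keep an explicit board state and reject moves that are
# inconsistent with it -- the kind of error a stateless next-token predictor
# makes. This is NOT a full rules engine, just basic consistency checks.

FILES = "abcdefgh"

def start_board():
    """Piece placement as {square: (color, piece)} for the initial position."""
    board = {}
    back = "RNBQKBNR"
    for i, f in enumerate(FILES):
        board[f + "1"] = ("w", back[i])
        board[f + "2"] = ("w", "P")
        board[f + "7"] = ("b", "P")
        board[f + "8"] = ("b", back[i])
    return board

def apply_move(board, move, side):
    """Apply a UCI-style move like 'e2e4'. Returns None if inconsistent."""
    src, dst = move[:2], move[2:4]
    if src not in board or board[src][0] != side:
        return None                      # no piece of ours on the source square
    if dst in board and board[dst][0] == side:
        return None                      # can't capture our own piece
    new = dict(board)
    new[dst] = new.pop(src)
    return new

board = start_board()
board = apply_move(board, "e2e4", "w")   # consistent, accepted
print(apply_move(board, "e2e4", "w"))    # None: e2 is now empty
```

In practice one would use a real rules library for full legality; the point is only that the board state lives outside the generator, so "impossible positions" are caught rather than hallucinated.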
wise_blood: The ARC definition is the one I like best, something like: "it is AGI when we can no longer come up with tasks that are easy for humans to solve but hard for computers."
Dylan16807: Debunked is a weird word since it was made to be absurd. But yes the issue is about whether the cat is in superposition, and real cats can't be.
runarberg: Searle’s rebuttal is actually excellent philosophy. But otherwise I agree. Searle (who, I just learned, passed away last year) was a philosopher by trade, while Turing was a mathematician and Schrödinger a theoretical physicist, so it is to be expected that a mathematician and a physicist might produce sub-par philosophy.

Turing’s point in his 1950 paper was actually to provide a substitute for the question of whether machines can think. Whether a machine can win the imitation game, he argued, is a better question to ask than “can a machine think”. Searle showed that this criterion was in fact not a good one. But by 1980 philosophy of mind had advanced significantly, partially thanks to Turing’s contributions, particularly via cognitive science; in the 1980s we also had neuropsychology, which kind of revolutionized this subfield of philosophy.

I think philosophy is actually rather important when formulating questions like these, and even more so when evaluating the quality of the answers. That said, I am not the biggest fan of the state of mainstream philosophy in the 1940s. I kind of have a beef with logical positivism, and honestly believe that even Turing’s mediocre philosophy was on a much better track than what the biggest thinkers of the time were doing with their operational definitions.
Dylan16807: Even if a Chinese room isn't a real boy, if it can do basically all text tasks at a human level I'm going to say it's capable of thinking. The issue of "understanding" can be left for another day (not that I think the Chinese room is very convincing on that front either).I see no reason to disqualify p-zombies from being AGI.
Wowfunhappy: Wait, a decision tree is machine learning?
stavros: None of this is a logical certainty of "X, therefore Y"; it's just opinions. You can trivially add memory to a model by continuing to train it; we just don't do it because it's expensive, not because it can't be done.

Also, the phone book example is off the mark, because if I take a human who's never seen a phone and ask them to memorise the phone book, they would (or wouldn't), while still not knowing what a phone number is for. Did you expect that a human would just come up with knowledge about phones entirely on their own, from nothing?
DiscourseFan: We are very very far from that point
orbital-decay: > You cannot get real, actual AGI (the same ability to perform tasks as a human) without a continuous cycle of learning and deep memory, which LLMs cannot do

I disagree that this is a necessary prerequisite, but besides that, current LLMs are literally the result of a continuous cycle of learning and deep memory. It's pretty crude compared to what evolution and the human process had to do, but that's precisely what the iterative model development cycle with its hierarchical bootstrap looks like. It's not fully autonomous, though (it's engineer-driven, with humans in the loop). Moreover, the distillation process you describe is precisely what "learning" is.
xyzal: "applied statistics"
mirekrusin: I think you're somehow right and wrong at the same time.

All those "it's like ..." analogies are faulty – "post-it notes" are not 3k pages of text that can be recalled instantly in one go, copied in a fraction of a second to branch off, quickly rewritten, put into a hierarchy describing a virtually infinite amount of information (beyond the 3k-page limit), generated on the fly in minutes on any topic, pulling in all the information available to the computer, etc.

Poor man's RL on test-time context (skills and friends) is something that shouldn't be discarded. We're at 1M tokens and growing, and progressive disclosure (without anything fancy, just a bunch of markdowns in directories) means you can already stuff more information into always-on agents/swarms than a human can remember in a whole lifetime.

The latest models now use more compute on RL than on pre-training, and this upward trend continues (from orders of magnitude smaller than pre-training to larger than pre-training). In that sense some form of continuous RL is already happening; it's just quantized into new model releases rather than happening in real time.

With LoRA and friends it's also already possible to do continuous training that directly affects weights; it's just that the economics of it are not that great – you get a much better value/cost ratio with the above instead.

For some definitions of AGI it has already happened, i.e. "somebody's computer-use-based work", even though "it can't actually flip burgers, can it?" is true – just not relevant.

ps. I should also mention that I don't believe in "programmers losing jobs". On the contrary, we will have to ramp up large numbers of people on computational thinking, and those already versed in it will keep reaping the benefits. Regardless of whether anybody agrees that AGI is already here, it arrives through computational doors speaking a computational language first, and imho this property is here to stay, as it's an expression of rationality, etc.
0xbadcafebee: > you can already stuff-in more information than human can remember during whole lifetime

The human eye processes between 100GB and 800GB of data per day. We then continuously learn and adapt from this firehose of information, using short-term and long-term memory that is continuously retrained and reweighted. This isn't "book knowledge", but the same capability is needed to continuously learn and reason at a human-equivalent level. You'd need a supercomputer to attempt it, for a single human's learning and reasoning.

RL is used for SOTA models, but it's a constant game of catch-up with limited data and processing. It's like self-driving cars: how many millions of miles have they already captured? Yet they still fail at some basic driving tasks, because the cars can't learn or form long-term memories, much less process and act on the vast amount of data a human can in real time. Same for LLMs. Training and tweaking get you pretty far, but not to matching humans.

> With LoRA and friends it's also already possible to do continuous training that directly affects weights, it's just that economy of it is not that great

And that means we're stuck with non-AGI. Which is fine! We could've had flying cars decades ago, but that was hard, expensive and unnecessary, so we didn't do it. There's not enough money in the global economy to "spend" our way to AGI on a short timeframe, even if we wanted to spend it all, and even if we could build all the datacenters quickly enough, which we can't (despite being a huge nation, there are many limitations).

> For some definitions of AGI

Moving the goalposts is dangerous. A lot of scary real-world stuff hangs on the idea of AGI being here or not. People will keep getting more and more freaked out and acting out if we're not clear on what is really happening. We don't have AGI. We have useful LLMs and VLMs.
mirekrusin: Again, yes and no.

Humans don't have a monopoly on intelligence. We don't need to mimic every aspect of humans to have intelligence, or intelligence surpassing human abilities. "General general-intelligence" doesn't exist in nature; it never did.

Humans can't echolocate, can't do fast mental arithmetic reliably, can't hold more than ~7 items in working memory, systematically fail at probabilistic reasoning, and are notoriously bad at long-term planning under uncertainty. Human intelligence is _specialized_ (for social coordination, language, and tool use in a roughly savanna-like environment). We call it "general (enough)" because it's the only intelligence we have to compare against — it's a sample size of one, and we wrote down the definition.

The AGI goalposts keep moving, but that's an argument supporting what I'm saying, not the other way around. When machines beat us at chess, we said "that's just search". When AlphaFold solved protein folding, we said "that's just pattern matching". When models write better code than most engineers, manage complex information, and orchestrate multi-step agentic workflows — we say "but can it really understand?"

The question isn't whether AI mimics human cognition or works the same way at a low level. It's whether it can do things that matter to us. The programming, information-synthesis and self-directed task-orchestration capabilities that exploded in the last weeks/months aren't narrow tasks, and they do compound. Systems that can now coherently and recursively search, write, run, evaluate, revise, etc., while keeping the equivalent of 3k pages of text in memory, are simply better than humans — now, today. I see it myself; you can hear people saying it. The following weeks and months will be flooded with more and more reports – it takes a bit of time to set everything up, and the tooling is still a bit rough around the edges.

But it's here, and it's general enough.
bob1029: Next token prediction is about predicting the future by minimizing the number of bits required to encode the past. It is fundamentally causal and has a discrete time domain. You can't predict token N+2 without having first predicted token N+1. The human brain has the same operational principles.
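The prediction-as-compression view can be made concrete: a next-token model assigns each token a probability p, and an arithmetic coder could encode that token in about -log2(p) bits, so better prediction means fewer bits to encode the past. A toy stdlib-only sketch (hypothetical names; a bigram model with add-one smoothing, fit on the same text it encodes):

```python
import math
from collections import Counter, defaultdict

def bigram_bits(tokens):
    """Bits needed to encode `tokens` under a bigram next-token model
    fit on the same sequence (add-one smoothing over the vocabulary)."""
    vocab = set(tokens)
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    bits = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        total = sum(counts[prev].values()) + len(vocab)
        p = (counts[prev][nxt] + 1) / total   # predicted probability of nxt
        bits += -math.log2(p)                 # Shannon code length for nxt
    return bits

text = "the cat sat on the mat the cat sat".split()
print(f"{bigram_bits(text):.1f} bits")  # fewer bits than a uniform code over the vocab
```

A uniform code would cost log2(vocabulary size) bits per token; any predictive structure the model captures drops the total below that, which is the sense in which prediction and compression are the same operation.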
pu_pe: You didn't actually give an example of what the issue with next-token prediction is. You just mentioned current constraints (i.e. generalization and learning are difficult, it needs mountains of data to train, it can't play chess very well) that are not fundamental problems. You can trivially train a transformer to play chess above the level any human can play at, and it would still be doing "next token prediction". I wouldn't be surprised if every single thing you list as a challenge is solved in a few years, either through improvement at a basic level (i.e. better architectures) or through harnessing.

We don't know how human brains produce intelligence. At a fundamental level, they might also be doing next-token prediction or something similarly "dumb". Just because we know the basic mechanism of how LLMs work doesn't mean we can explain how they work and what they do, in the same way that we might know everything we need to know about neurons and still cannot fully grasp sentience.
casey2: When did search replace all research jobs? How can an AI replace a masseuse? How far away are we from an LLM that scratches your back for less than it costs you to do it yourself?

People have unrealistic expectations because they literally think they are summoning a god instead of accelerating a few concurrent tasks. If you want to break causality you need to pay the entropy demon its due.
tim333: I think it must be Dory who has short term memory loss https://youtu.be/B6178Ac90S4?t=22
ghoblin: Everyone cites some niche "human-only" jobs to argue AI won't replace labor. But most of the economy runs on things like document processing, logistics, retail, and factories: high-volume, repeatable, rule-driven tasks, and in those areas we're already on the brink of full automation. Autonomous retail stores, delivery fleets, and smart factories are either here or imminent. It's not about AI scratching backs; it's about replacing jobs that move trillions of dollars. Sure, top-tier researchers, system engineers, and other highly skilled knowledge workers will still be in demand, but for mass labor disruption AI doesn't need to beat them, it only needs to outperform the average human.
xmcqdpt2: By the definition above, it is possible to have AGI that is also much more expensive to run than human engineers.
xmcqdpt2: Is it measured as 50% of individual jobs? Or able to produce 50% dollar for dollar? What does "economically" mean here? Would it cover teaching? Child care? Healthcare? Etc.
rishabhaiover: The human brain's prediction loop is Bayesian in nature.
rishabhaiover: Damn, the research moves fast. I was wrong again: https://arxiv.org/abs/2507.11768
tim333: I think the term "artificial general intelligence" is deliberately ambiguous, as it doesn't specify any levels. I mean, my cat was generally intelligent.

LLMs can't be swapped in for human workers in general because there are still a lot of things they don't do, like learning as they go.
Otterly99: But chess models aren't trained the same way LLMs are trained. If I am not mistaken, they are trained directly from chess moves using pure reinforcement learning, and it's definitely not trivial as for instance AlphaZero took 64 TPUs to train.
VorpalWay: Yes: you fit a decision tree to your dataset in an automated fashion, that fits the definition of machine learning. Just as you would use backpropagation to fit a neural network to your data.This is what I learnt at university some decades ago, and it matches what wikipedia says today: https://en.wikipedia.org/wiki/Decision_tree_learning
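A minimal illustration of that definition: even a depth-1 tree (a "stump") is "learned" in the sense that the split threshold is chosen automatically from labelled data, just as backpropagation chooses weights. A stdlib-only sketch (all names hypothetical):

```python
def fit_stump(X, y):
    """Learn a depth-1 decision tree (a 'stump') from labelled 1-D data:
    pick the threshold that minimizes misclassifications."""
    best = None
    for t in sorted(set(X)):
        # rule: predict class 1 if x >= t, else class 0; count mistakes
        errors = sum((x >= t) != bool(label) for x, label in zip(X, y))
        if best is None or errors < best[1]:
            best = (t, errors)
    threshold, _ = best
    return lambda x: int(x >= threshold)

# toy dataset: values below 5 are class 0, values above are class 1
X = [1, 2, 3, 6, 7, 8]
y = [0, 0, 0, 1, 1, 1]
predict = fit_stump(X, y)
print(predict(2), predict(9))  # 0 1
```

Real decision-tree learners (CART, ID3) recurse on impurity measures like Gini or entropy rather than raw error, but the principle is the same: the structure is fit to the data rather than hand-written, which is what makes it machine learning.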
casey2: You're describing valueless automation. We can build an assembly line and mass-produce cars; that only has value if society is restructured. The food delivery industry only moves a trillion (globally) because it's incredibly wasteful, not because of value. Most of the value in going to a restaurant was in the experience and culture; the food is just a blend of fats, carbs and protein, but you pay more for the "luxury" of eating in your house or at work. You'll have cities made to serve cars and food made to serve delivery and worker drones. In the pursuit of optimization you'll end up back at the same place, when there was only one cafeteria in walking distance.

Anyway, we aren't "on the brink of full automation"; that's ridiculous. People always think this, because they have no idea how brittle automated systems are. To get a generally intelligent robot that operates in the real world you have to go WAY beyond replacing knowledge workers. The brain only uses 1W more when it's working at full tilt, 5% more. For any physical job, it's the body doing the work. The full body at rest uses 100W; walking, that's 300W; manual labor, 600W; a full sprint could peak at 2000W. That's an absurd range, made possible only by trillions of cells packed with ATP and billions of microscopic capillaries full of glucose that get sucked into your muscles the second you use them. Automation only works in closed systems. Give it 2000 years, maybe someone makes AGSI; then the robotics problem becomes approachable, but if it were smart it'd just declare it impossible without biotech.
parliament32: [delayed]
Wowfunhappy: Oh, if the tree is made by the computer based on training data, that feels to me like what most people would consider “artificial intelligence” in 2026 (which is why I think people should actually say “machine learning”).
Jensson: Well, if humans can do economically valuable mental work the AI can't, then it's not AGI, don't you think? An AGI could learn that job too and replace the human.
tsunamifury: Words are meaningless in the real world. It's amazing that no one here gets that.
tim333: Not sure about that one. I think you are over-generalizing from "sometimes they don't mean much" to "always".
tsunamifury: haha
datsci_est_2015: “Judgment” is close to my mental model, but I prefer “liability”. All meatspace-based employment comes down to liability. A McDonald’s shift worker is liable for any mistakes made during their shift that are considered their responsibility. A SWE is liable for any mistakes made during their employment that are considered their responsibility. Accountant. Paralegal. CEO.

AI will need to be able to experience consequences as a result of liability, and to care about those consequences, in order to replace true meatspace jobs. Otherwise they're simply sophisticated systems. If you’re a one-man company, and you have a delivery AI that delivers widgets to Alice, but in the process that delivery AI kills Bob, you’re liable for murder.
bonesss: Grok “apologizing” for generating potentially illegal images and related cases working through various legal systems is an early test of that lack of liability shield.There are a number of industries where I don’t think “kinda…” is an acceptable answer to “was this code read before deploying?”. Humans aren’t great at repeating boring tasks ad nauseam.
datsci_est_2015: > Grok “apologizing” for generating potentially illegal images and related cases working through various legal systems is an early test of that lack of liability shield.

You bring up a good point; I forgot that AI will be owned by the rich and well-connected, not the humble masses. If you or I have a small business that uses a delivery AI, we’re liable for murder. If one of the technofascists has a business that uses a delivery AI…