Discussion
Yann LeCun Raises $1 Billion to Build AI That Understands the Physical World
jsnell: The premise is wrong, we are not seeing diminishing returns. By basically any metric that has a ratio scale, AI progress is accelerating, not slowing down.
0x3f: For example?
verdverm: Link does not work, goes into a loop at the verify-human check with some weird redirect. Looks like you appended the original URL to the end.
Sebguer: Probably related to the reasoning behind: https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-a...
Or you're using Cloudflare DNS.
lich_king: > The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters.
I'd never heard of the Wason selection task, looked it up, and could tell the right answer right away. But I can also tell you why: because I have some familiarity with formal logic and can, in your words, pattern-match the gotcha that "if x then y" is distinct from "if not x then not y" and "y if and only if x".
In contrast to you, this doesn't make me believe that people are bad at logic or don't really think. It tells me that people are unfamiliar with "gotcha" formalities introduced by logicians that don't match the everyday use of language. If you made a simple addition to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.
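For readers unfamiliar with the task: a minimal sketch of the logic, using the classic D/K/3/7 card version (the card labels are the standard illustration, not from the comment above). The rule is "if a card shows D on one side, it has 3 on the other", and the trap is that only cards whose hidden side could violate the rule are worth flipping.

```python
# Wason selection task, classic version: four cards, each with a letter on
# one side and a number on the other. Visible faces: D, K, 3, 7.
# Rule to test: "if a card has D on one side, it has 3 on the other."
cards = ["D", "K", "3", "7"]

def could_falsify(visible):
    """A card is worth flipping only if its hidden side could break the rule."""
    if visible == "D":   # hidden side might not be 3 -> could falsify
        return True
    if visible == "7":   # hidden side might be D -> could falsify
        return True
    return False         # K and 3 can never violate "D implies 3"

print([c for c in cards if could_falsify(c)])
```

Most people flip D and 3; the correct picks are D and 7, because "if D then 3" says nothing about what is behind a 3.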
koakuma-chan: They took money and haven't released anything. How are they doing?
levocardia: To be fair to SSI, they were very explicit about their plan: "we are going to take money and not release anything until we one-shot superintelligence." If you invested in that, you knew what you were getting yourself into!
sothatsit: RL on LLMs has changed things. LLMs are not stuck in continuation-predicting territory any more.
Models build up this big knowledge base by predicting continuations. But then the RL stage gives rewards for completing problems successfully. This requires learning and generalisation to do well, and indeed RL marked a turning point in LLM performance.
A year after RL was made to work, LLMs can now operate in agent harnesses over hundreds of tool calls to complete non-trivial tasks. They can recover from their own mistakes.
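The distinction between the two training signals can be sketched with toy numbers (illustrative only, not tied to any real model): pretraining scores every token against a reference continuation, while the RL stage scores whole sampled solutions by whether a checker accepts them.

```python
# Pretraining signal: average negative log-likelihood of each reference token.
token_logprobs = [-0.2, -1.1, -0.4]   # model's log p(token) at each position
pretrain_loss = -sum(token_logprobs) / len(token_logprobs)

# RL signal: sample complete solutions, reward only the ones a verifier
# (unit test, answer checker, etc.) accepts -- no per-token supervision.
def reward(solution):
    return 1.0 if solution == "42" else 0.0  # hypothetical verifier

samples = ["41", "42", "42"]
avg_reward = sum(reward(s) for s in samples) / len(samples)
print(pretrain_loss, avg_reward)
```

The point of the contrast: the pretraining loss is dense (a gradient at every token), while the RL reward is sparse and outcome-based, which is what pushes the model toward completing tasks rather than just continuing text plausibly.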
taint69: WE HAVE RAISED A BILLION DOLLARS
but you don’t even have a product/cape
brandonb: FWIW, the single blood draw is 6-8 vials -- so we're not claiming to get 100 biomarkers from a single drop. The point of that is mostly that it just takes one appointment / is convenient.
owlcompliance: I raised $1 to understand your physical world.
w4yai: Europe becoming really attractive right now!
volkk: oh nice, i actually used you guys for some labs a few months ago. Glad you're competing with function & superpower
chpatrick: Einstein was heavily inspired by Mach: https://en.wikipedia.org/wiki/Mach%27s_principle
chpatrick: https://news.ycombinator.com/item?id=46094037
ml-anon: Honestly, how do people who know so little have this much confidence to post here?
mvc: You must be new here
bethekidyouwant: Are you asking how many books a large language model would need to read to learn a new language if it was only trained on a different language? probably just the dictionary.
sylware: Where have you been in the last 2 decades?
ainch: It's unintuitive to me that architecture doesn't matter - deep learning models, for all their impressive capabilities, are still deficient compared to human learners as far as generalisation, online learning, representational simplicity and data efficiency are concerned.
Just because RNNs and Transformers both work with enormous datasets doesn't mean that architecture/algorithm is irrelevant; it just suggests that they share underlying primitives. But those primitives may not be the right ones for 'AGI'.
yalogin: This feels like a more justified investment, as it's trying to move the needle. Hope he succeeds.
leptons: LLMs produce slop far too often to say they are in any way better than cold fusion in terms of usable results. "AI" kind of is the cold fusion of tech. We've always been 5 or 10 years away from "AGI" and likely always will be.
levodelellis: I have no faith in anyone doing AI to accomplish anything (especially relative to how much money they spend) except John Carmack. People should be trying to throw money at him
mandeepj: Appreciate your work! Healthcare is a regulated industry. Everything (research, proposals, FDA submissions, compliance docs, accreditation standards, etc.) is documented and follows a process, and there's a lot of it. You can't sneak in anything unverified or unreliable. Why does healthcare need a JEPA/world model?
brandonb: Regulation is quickly catching up to modern AI techniques; for the most part, the approach is to verify outputs rather than process. For example, Utah's pilot to let AI prescribe medications has doctors check the first N prescriptions of each medication. Medicare is starting to pay for AI-enabled care, but is tying payment to whether objective biomarkers like cholesterol or blood pressure actually got better.