Discussion
Billion-Parameter Theories
dakiol: > You could capture the behavior of every falling object on Earth in three variables and describe the relationship between matter and energy in five characters.

What we can do is approximate. Newton had a good approximation of gravitation some time ago (force equals a constant times the product of two masses divided by distance squared; super readable indeed). But nowadays there's a better one that looks nothing like Newton's theory (Einstein's field equations, which are compact but nothing like Newton's). So what if, in a thousand years, we have a still better approximation of gravity, but it's encoded in millions of variables? (Perhaps in the form of a neural network of some futuristic AI model?)

My point is: whatever we know about the universe now doesn't necessarily mean it has "captured" the underlying essence of the universe. We approximate. Approximations are useful and handy and will move humanity forward, but let's not forget that "approximations != truth".
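The "super readable" approximation the comment quotes really does fit in a line or two. A minimal sketch in Python, using the standard CODATA value of G; the Earth mass and radius below are illustrative values, not from the thread:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11  # gravitational constant, N·m²/kg² (CODATA, rounded)

def gravitational_force(m1, m2, r):
    """Attractive force in newtons between point masses m1, m2 (kg)
    separated by distance r (m)."""
    return G * m1 * m2 / r**2

# Example: force on a 1 kg object at Earth's surface -- roughly 9.8 N,
# i.e. the familiar surface gravity.
earth_mass = 5.972e24   # kg
earth_radius = 6.371e6  # m
print(gravitational_force(earth_mass, 1.0, earth_radius))
```

Three inputs, one constant, one line of arithmetic: the compactness is the whole point of the contrast with a millions-of-parameters model.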
b450: Reminds me of the blog post about Waymo's "World Model". Training on real-world data results in a sufficiently rich model to start simulating novel scenarios that aren't in the training data (like the elephant wandering into the street), which in turn can feed back into training. One could imagine scientific inquiry working the same way.

It strikes me that many of these complex systems have indeterminate boundaries, and a fair amount of distortion might be baked into the choice of training data. Poverty (to take an example from this post) probably has causes at economic, psychological, ecological, physiological, historical, and political levels of description (commenters please note I didn't think too hard about this list). What data we feed into our models, and how those data are understood as operationalizations of the qualitative phenomena we care about, might matter.
gwerbin: This "world model" concept has been a big deal in AI research, particularly in work on LLMs.
ileonichwiesz: This might be an unkind reading, but to me this just sounds like an attempt to reinvent the very same kind of mysticism that it mentions in the first paragraph.

“No need to study the world around you and wonder about its rules, peasant - it’s far beyond your understanding! Only ~the gods~ computers can ever know the truth!”

I shudder to think about a future where people give up on working to understand complex systems because it’s hard and a machine can do it better, so why bother.
seanlinehan: Not the intention at all. The part about mechanistic interpretability was meant to gesture at how building such systems can provide a new toolkit for building further intuition and understanding.
seanlinehan: Agreed!
curuinor: Connectionist models have lots of theory by theoreticians explicitly pissed off about Chomsky's assertion that there is an inbuilt ability for language. Jay McClelland's office had a little corkboard thingy with Chomsky mockery on the side, for example. Putting forth even the implicature that the present direct descendants are intellectual descendants of Chomsky is like saying Protestants are intellectual descendants of Pope Leo X.
seanlinehan: Perhaps a failure of communication -- I was indeed attempting to say that Chomsky was wrong and his ideas were interesting, but more or less a dead end.
brunohaid: Very skeptical Adam Curtis hat on while reading this, but it is quite well written. Thanks & kudos!
lobofta: Might we ever distinguish what is complex and complicated? Probably not, but I guess the author argues that this gives us a way forward because we can try to distill large models.
js8: I disagree with the article. I think it is always possible to come up with reasonably small theories that capture most of the given phenomena. So in a sense, you don't need complex theories in the form of large NNs (models? functions? programs?), other than for more precise prediction.

For example - global warming. It's nice to have AOGCMs that have everything and the carbon sink in them. But if you want to understand, a two layer model of atmosphere with CO2 and water vapor feedback will do a decent job, and gives similar first-order predictions.

I also don't think poverty is a complex problem, but that's a minor point.
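For a sense of how small such a "small theory" can be: a minimal sketch of the textbook n-layer gray-atmosphere energy-balance model (a simpler cousin of the two-layer model the comment describes; the albedo here is illustrative, and there is no explicit CO2 or water-vapor feedback):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m²·K⁴)
SOLAR = 1361.0    # solar constant, W/m²
ALBEDO = 0.3      # planetary albedo (illustrative round number)

def surface_temp(n_layers):
    """Equilibrium surface temperature (K) for an n-layer gray atmosphere.

    Each fully absorbing layer re-radiates half its emission downward,
    so the surface flux grows linearly with layer count:
    T_s = T_e * (n + 1)**0.25, where T_e is the bare-planet
    emission temperature set by absorbed sunlight."""
    absorbed = SOLAR * (1 - ALBEDO) / 4      # globally averaged absorbed flux
    t_emit = (absorbed / SIGMA) ** 0.25      # ~255 K with no atmosphere
    return t_emit * (n_layers + 1) ** 0.25

print(round(surface_temp(0)))  # ~255 K: bare rock, no greenhouse
print(round(surface_temp(1)))  # ~303 K: one opaque layer warms the surface
```

A dozen lines already reproduce the first-order greenhouse effect, which is the comment's point about small models giving similar first-order predictions.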
pdonis: > I also don't think poverty is a complex problem, but that's a minor point.

I'm not sure it's a minor point. I don't think poverty is a "complex" problem either, as that term is used in the article, but that doesn't mean I think it fits into one of the other two categories in the article. I think it is in a fourth category that the article doesn't even consider.

For lack of a better term, I'll call that category "political". The key thing with this category of problems is that they are about fundamental conflicts of interest and values, and that's a different kind of problem from the kind the article talks about. We don't have poverty in the world because we lack accurate enough knowledge of how to create the wealth that brings people out of poverty. We have poverty in the world because there are people in positions of power all over the world who literally don't care about ending poverty, and who subvert attempts to do so--who make a living by stealing wealth instead of creating it, and don't care that that means making lots of other people poor.
xikrib: Let's gather authors of 15 different world languages together in a room and see if they can collaboratively write a short story. Surely their inability to do so will prove their inadequacy in their native language. /s

Simplicity brings us closer to truth — Occam's razor has underpinned the development of our species for centuries. It's enterprise, empire, and capital that feed off of complexity.

We're entering a period of human history where engineers and businesspeople drive academic discourse, rather than scientists or philosophers. The result is intellectual chicken scratch like this article.
niemandhier: He talks about the Santa Fe Institute and how they failed to carry their findings into the real world.

They did not. They showed that for certain problems one could not do more than figure out some invariants and scaling laws. Showing what is impossible is not failure.

For the rest: modern gene networks and lots of biological modelling are based on their work, as well as quite a few other things. That's also not failure.

I agree that modern AI is alchemy.
MarkusQ: Clarke's second law:

> When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Also see Minsky's "Perceptrons". The problem with almost all such proofs is that people (even those who know better) read them as "this can't be done" when in fact they tell you "it can't be done unless you break one of the following assumptions."

I agree that it's unfair to say they failed, but it's likewise unfair to say that their success was in telling us our limits rather than exploring what we need to do to get around the roadblocks.
suddenlybananas: > Jay McClelland's office had a little corkboard thingy with Chomsky mockery on the side, for example.

I've never understood why the idea of linguistic nativism is so upsetting to people.
bbor: Well that anecdote is referencing the Scruffies v. Neats war[1], within which the nativism debate was merely a somewhat-archaic undercurrent.

IMHO, a lot of the more specifically anti-nativist sentiments of today are based in linguistics itself rather than philosophy, CS, or CogSci, where again it is part of a broader (and much dumber) debate: whether linguistics is the empirical study of languages or the theoretical study of language itself. People get really nasty when they're told that they work in an offshoot field for some reason, which is why I blame them for the ever-too-common misunderstandings of Chomsky -- the most common being "Universal Grammar has been disproven because babies don't speak English in the womb".

If Chomsky weren't so obviously right, this would be a worrying development! Luckily I expect it to be little more than a footnote in history, so it's merely infuriating rather than depressing.

[1] Minsky, 1991: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
zkmon: > It's remarkable how much of reality turned out to be modelable by theories that fit in a few symbols.

The admiration for "remarkable" things puts humanity on a dangerous path that is disconnected from the real goals of human progress as a species. You don't need any of this compression of knowledge or truths. Folklore tales about celestial bodies are fine and good enough. The vulgar pursuit of knowledge is paving the way for the extinction of humans as biological creatures.
rbanffy: If we think of spacetime as some sort of cellular automaton, where each state of a given point is a function (with some randomness, because God likes to throw dice) of previous states of the surrounding points, then if the rules for generating a new state are extremely complex, there will be some significant overhead in dimensions we don't see, because the rules need to be somehow represented outside the observable reality. Another issue with this idea is that while the rules might be "outside", the parameters themselves have to be somehow encoded in the state of a cell, and can't propagate faster than light, i.e., one cell (an indivisible unit of space) per indivisible unit of time, which limits the number of parameters accessible to any given cell to the ones immediately surrounding it.

Disclaimer: I hope it's obvious, but I'm no physicist. This is just how I would build a universe.
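The locality constraint described above (each cell's next state depends only on its immediate neighbours, so influence spreads at most one cell per step) is exactly how a classical cellular automaton works. A toy 1-D sketch, using Wolfram's elementary rule numbering as a stand-in for whatever "extremely complex" rule one imagines:

```python
def step(cells, rule=110):
    """One update of a 1-D elementary cellular automaton (wrap-around edges).

    Each cell's next state is looked up from (left, center, right) --
    information can propagate at most one cell per step, the CA analogue
    of a light-speed limit. `rule` is the Wolfram rule number: bit k of
    `rule` gives the output for neighbourhood pattern k."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighbourhood as a 3-bit number
        out.append((rule >> idx) & 1)
    return out

# A single live cell; after t steps, activity is confined to within t cells of it.
state = [0] * 10 + [1] + [0] * 10
for _ in range(5):
    state = step(state)
```

Note how the rule number (here 110) lives entirely "outside" the grid, while each cell's state is all the information the grid itself carries -- which is the distinction the comment draws between the rules and the parameters.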