Discussion
Asimov Press
vivid242: I wasn’t aware of the map empire, thank you!

Taking away some complexity comes at a price, and for some people, it’s hard to see that it outweighs the practicality.
boulos: Please don't editorialize titles unless they're clearly clickbait.

"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading, "Current AI Training Risks Hypernormal Science".
cogman10: The article presumes that the models we have today could still be subject to a major paradigm shift.

Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability. The edges are essentially models of things impossible to test. In fact, relativity was only recently fully backed up with experimental data.
bananaflag: I find it funny how people are so concerned that AI cannot innovate, that AI coding agents only give the most bland solutions to any problem etc. when the next step in OpenAI's 5 stages to AGI is literally called "Innovators".
thegrim33: My two step plan is to go to sleep and then wake up the next day and be a billionaire. Surely because that's my stated next step that means when I wake up tomorrow I'll be rich.
tech_ken: My hot take is that mathematical and scientific 'soundness' is ultimately more of an aesthetic preference than an objective quality of reality. Good science makes sense to humans, and 'what makes sense' is ultimately what fits satisfyingly in your brain. There's nothing inherently wrong with an enormous epicycle model of reality from the perspective of the God of Math; so long as your formal system is consistent and expressive enough to represent everything then meh, it's a model. But the model that humans want to elevate to canonical status has far stricter requirements, and ultimately it's the one which the majority of sufficiently credentialed tastemakers decide is 'best'. Parsimony works well in physics where you have closed form expressions for all your stuff, but the biology cases are so much messier because it turns out that sometimes reality isn't parsimonious. All this to say that good science is a matter of taste, and while AI can gist the broad strokes of taste I've yet to see it take on the role of genuine tastemaker.
ArRENCEAI: What's more alarming isn't that AI is limited to existing domain data, it's that when people push it to deviate outside those known data points it confidently hallucinates nonsense.
tech_ken: I don't think paradigm shifts have to be 'better' in some march-toward-progress sense, they can be lateral or even regressive in that way and still lead to longer-horizon improvements.

I think also what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money is that there's more race to run.
cogman10: > I don't think paradigm shifts have to be 'better'

But they do. Paradigm shifts happen because the new paradigm explains the unexplained and, importantly, also covers the old model. If prior data is unexplained by a paradigm shift, the shift will never be adopted.

> Perhaps we're truly at the End of Science

Who said that? Just because the core of our current models seems pretty rock steady doesn't mean there's no more science. It simply means that we can mostly just expect refining rather than radical discovery.

There will be sub-paradigm shifts, but there's likely not going to be major "relativity" moments from here on out.