Discussion
maplethorpe: > What Moravec was describing was a difference in how skills are stored, not how complex they are. Physical skills are encoded in the body, almost impossible to put into words. But knowledge work, the analysis, the diagnosis, the strategy, the legal argument, is stored in text. Humans wrote it all down. Every framework, every protocol, every insight accumulated across every profession for centuries, captured in documents, papers, books, case files, and reports.

I don't think this is true. Text is a lossy form of communication. There's no way to get the sum of my knowledge from my brain over to your brain purely through text.

Also, anyone who has ever had to deal with incomplete documentation knows that humans did not, in fact, write it all down.
effable: The core idea of ASI arriving before AGI seems to be true: we have already seen that through chess programs, LLMs, etc.

However, what caught my eye, and what to me does reflect the lens through which the author sees the world (unless I am completely misunderstanding their point):

"Most of the world's important problems have never been modelled at the precision AI requires to act on them. Pollution, traffic, healthcare, taxation, public infrastructure, water distribution."

Pollution, traffic, healthcare, and public infrastructure, however, are not really problems that require "clever" solutions; rather, they are problems of political will, regulating industry, and moving to cleaner energy sources. For example, we have known about human-caused climate change for decades, and carbon emissions are just hitting their peak now.
bamboozled: There is a jarring assumption in this article, which is that LLMs are performing much, much better than they are. They are awesome tools, but they just aren't so great that I'd be replacing my accountant with anything like an LLM. Personally, as a software engineer, the more I use these tools, the more I realize I need to understand software better than I ever have before to actually be proficient with them. Maybe we're agreeing to some degree, because the author seems to think there will still be a need for certain skill sets even with AGI, but I think we're still in the figuring-shit-out phase.
mkdelta221: Fascinating article. Everyone knows they will be replaced by AI but nobody wants to talk about it.
pjmlp: Worse are the folks claiming how much more productive they are with AI tools, without understanding that it means companies will need fewer of us to do the same job. As in many scenarios, they always assume the victims will be someone else.
Atomic_Torrfisk: Based on what? Do you have data for that, or is it just a feeling, or what you want to be true?
Traubenfuchs: I really hope it won't take my job and I am very afraid, but: Why hasn't it happened yet? Why hasn't the job market imploded? What's missing? Why do my colleagues, my friends, and I still have our bullshit jobs? Why didn't my company's output explode through our unlimited Claude access? What about all the other companies?
guillego: There might be a really good conclusion in this article, but I had to give up halfway through. The LLM writing, chapter after chapter, is unbearable: full of short sentences leading into paragraphs that read like LinkedIn posts.

> AlphaFold solved protein structure prediction, a fifty-year problem, not in decades but in a fraction of the time traditional research would have required. Not by thinking like a biologist. By finding patterns at a scale no human could reach. That is a domain detonation. Not progress. A before-and-after. The same logic is now moving through radiology, legal research, financial analysis, drug discovery, software engineering.

If you have good ideas, good insights, and good stories, they deserve your own words. If you can't respect your own ideas enough to spend time writing them down and forming them into paragraphs and sentences, why should I respect them any more than you do?
roysting: The irony is that I think the author may have meant granularity, not precision. You could have the highest-precision model (not the AI kind) of any given topic or domain and still be not only totally inaccurate but categorically flawed, i.e., not even shooting at the right target.

From his statement, it seems what he is really saying is that the granularity of the data is insufficient for an AI model to accurately or precisely evaluate a problem and then presumably solve it, assuming a solution exists at all, let alone a human-acceptable one.

As I mentioned, you can have the most precisely modeled problem in the world and it won't make a difference if it's not accurate. There is a very uncomfortable reality starting to face us, at least in the West: all the little lies we were told and perpetuated, because we have been trained on them from birth, across generations now, are simply wrong and have polluted our minds to such a degree that many people could never accept it if AI told them they're wrong, and that everything they believe they know and have known all their life is wrong.

On top of that, it shatters people's narcissistic self-image of having been the good guy, because accepting that what AI tells them is actually the truth means accepting that they were abusive to those who were right all along, meaning they are actually the bad guy.

And if we good guys definitely know anything, it's that the majority is always right, because that is what we were taught is the democratic way. The majority is always right, and you always have to trust the minority that are experts, unless it's a majority of experts, then you have to trust them too, especially if they are beholden to the minority ruling class! Right? Right!
rembal: I love the water/ice metaphor, but the author tends to completely ignore the physical world. Take the cardiologist example: we all know what happened to the radiologist prediction. Or the example of defence (or war) becoming mostly a matter of having a better AI model: well, try to win without solid, distributed production capabilities, energy access, and safe supply chains, from a geographical disadvantage. Embodiment is coming, but it will require moving a lot of atoms. Also, even in text-heavy domains, a lot of knowledge is not written down, often on purpose (especially in legal), and that's the juicy part...