Discussion
Sam Altman
drcongo: Wait, so his keyboard has got a shift key?!
throwaway132448: Whatever happened to that key is a key part of his origin story, which I'm sure will be revealed in due course.
Jensson: > And maybe we don't want to build machines that are conscious in this sense. The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

This is where LLMs are currently going. Not really AGI, since they can't think like humans, but they can do a lot of things, and humans can train them on novel things.

Then human work shifts to figuring out new things while the AI solves all the old ones, which seems much more fun than most white-collar work today.
lebek: > Then human work is changed to figuring out new things and the AI solves all old things, that seems much more fun than most white collar work today.

But it's not fun to be figuring out new things all the time. Some amount of routine work is necessary to 1) exercise mastery (it feels good), and 2) recover energy. This is why a lot of people find agentic coding exhausting and less fun: you're basically always having to be creative (what's the next feature?) or solve the hardest 5% of issues the LLM can't handle.
alexyoung: Well he is called Sam _Alt_man, not Sam Shiftman
trilogic: Nailed it 12 years ago... damn it. So after all, Sam is not just talk and money. I just got humbled. This makes me reconsider my whole POV about Sam Altman.
jryio: > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

Man will do nothing and machine will do everything. That's a bleak world no one is preparing for.

How is that universal basic income scheme coming along?
climike: Resource allocation based on your hackernews upvotes? Thanks in advance folks ;)
Jensson: This was written before you had to add errors to your text so people could tell you aren't an LLM.
mpalmer: > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking

Steve Yegge said on some podcast recently that AI is going to have to come up with a more visual medium for communicating, because people don't want to read several paragraphs. He shared this uncritically, seemingly without judgement or disappointment. Yegge himself is a former Googler and by all accounts was an impressive person at one point, now best known as the person who vibe-birthed the inanity that is GasTown.

At work I'm seeing colleagues I once considered formidable completely turning off their brains and letting the bot drive, and wholly missing the mark on work quality. It's like a sickness, like COVID brain fog people don't even notice they have.

I see humans getting worse at reading, worse at writing, and worse at programming by themselves. We are getting dumber, people, and I fully believe Altman and friends are lying when they say they want it otherwise.
9dev: Yeah, maybe don't. He's a smart guy for sure, but that really doesn't redeem him from the awful qualities he undoubtedly has—insatiable greed, a compulsion to lie and manipulate, a special flavour of god complex, no moral compass at all, and more.
embedding-shape: > insatiable greed, a compulsion to lie and manipulate, a special flavour of god complex, no moral compass at all, and moreBesides these traits that every CEO/big-time investor seems to share, is there anything uniquely awful with Altman?
embedding-shape: > you're basically always having to be creative (what's the next feature?) or solve the hardest 5% of issues the LLM can't handle.

Maybe I'm wired differently, but this is fun to me, and "exercising mastery" by doing routine work is almost never fun; things stop being fun and feeling good once I've "mastered" them. I can't say I've ever "recovered energy" by doing routine work either; it seems to suck energy out of me faster than anything. To recover, I tend to rest and do anything but work. But again, maybe it's just weird wiring.
elevatortrim: That world is not necessarily bleak.

We currently have two broad mechanisms to equate people's value.

*Employees:*

Easy to replace = low salary = gets few resources

Hard to replace = high salary = gets many resources

*Entrepreneurs:*

Output consumed little = low pay = gets few resources

Output consumed a lot = high pay = gets many resources

(Resource consumption ignored.)

In a world where machines do everything, aspects of these change:

*Employees:*

Easy to replace = gets whatever resources (no one is hard to replace)

It is up to us to define whether 'whatever' is bleak or not. If we decide that resources need to be shared fairly, it could be heaven, not hell.

*Entrepreneurs:*

Resource consumption: whatever

It is up to us how much resource consumption we allow. If we decide that resource consumption needs to be sustainable, it could be heaven, not hell.
oytis: That's an expression of class thinking, IMO. People think of themselves as thinkers and creators, while those who do the labour they rely on, without ever getting too much into the details of it, are merely doers and can ideally be replaced. But it's really thinking and creativity all the way down if you try to learn to do things well.
Jensson: > But it's really thinking and creativity all the way down if you try to learn to do things well

Yes, everyone starts out creative. But we can all tell the difference between a worker who is still creative and learning and a worker who gave up creativity and is just doing his job. The first will still be useful in this AI age; the second will be replaced by AI learning what he already knows.
nik736: > (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct--they just search a gigantic solution space very quickly.)

Isn't that how LLMs are trained right now? Trying to predict the next word within a "gigantic solution space". Interesting.
ben_w: In one sense, all intelligence is a search in a gigantic solution space. But the difference is:

What Deep Blue did was (if the Wikipedia page is correct) alpha-beta pruning[0], where some humans came up with the function for what "better" and "worse" board states look like.

And what LLMs do (at least the end models) includes at least some steps where there's an AI trying to learn what human preferences are in the first place, in order to maximise the human evaluation scores. Some of those things are good, like "what's the right answer to the trolley problem?" and "which is the better poem?", but some are bad, such as "what answer best flatters the ego of the user without any regard for truth?"

The former is exactly like route-finding, in that you could treat travel time as your better-worse score and the moves as if they're on a map rather than a chess board.

The latter is like being dumped into a new video game with no UI, where all NPCs interact with you only in a language you don't know, such as North Sentinelese.

[0] https://en.wikipedia.org/wiki/Alpha–beta_pruning
Loughla: Lol, this does not fill me with hope.

If person A can become a squillionnaire by making sure that the employees of a company make as little as possible thanks to AI, that's what's going to happen. There is zero way "we" will decide resources need to be shared fairly.

If person A can amass more money and power, then resource consumption literally doesn't matter. There is no way "we" will be involved in that process at all.

Call me cynical, but human history has proven over and over again that whatever short-sighted, selfish option enriches a very few is what will happen, until there is finally violence.

I do not look forward to the AI wars that my children will be forced to fight in.
DeathArrow: In a sane world AI revolution would be driven by the likes of Andrew Ng, Andrej Karpathy, Yann LeCun and not by a brigade of Sam Altmans.
dannersy: I see a lot less thinking as a result of using LLMs as they are today and I don't see the providers building tools to promote a better way to use them. They are still way too sycophantic.
gjadi: “The doers are the major thinkers. The people that really create the things that change this industry are both the thinker and doer in one person.”

Steve Jobs

Now, who the doers are in the age of LLMs is another question.
9dev: Besides the fact that he's an especially awful specimen (the lying and manipulation alone made it to the news several times), I just don't think that a rather clear-sighted blog post from 12 years ago is a valid reason to change your views about Altman.
yobbo: > Isn't that how LLMs are trained right now

It's neither how computer chess works nor how LLMs are trained.

Computer chess uses various tricks to prune the search space of board states, where the search is guided by the "value" of each board state. Neural networks can be used to approximate this value (and probably were at the time), but there can also be hand-coded algorithms with learned statistics, or even lookup tables for games smaller than chess.

There's no search in LLM training.
trilogic: I am the last person on earth who would ever write a positive thing about Altman, but I can't lie either; a fact is a fact, there for everyone to see. Fair is fair.
mofeien: > > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking.

> This is where LLM is currently going.

This is not where LLMs are currently going. They are trained and benchmarked explicitly in all areas where humans produce economically and cognitively valuable work: STEM fields, computer use, robotics, etc.

Systems are already emerging where AI agents autonomously orchestrate subagents, which in turn all work towards a goal autonomously and only communicate with you from time to time to give you status updates.

Thinking that you as a slow human will be needed for much longer to fill some crucial role in this AI system that it cannot fill by itself, or to bring some crucial skill of creativity or thinking to the table that it cannot generate itself, is just wishful thinking. And to me personally, telling an AI to "do cool thing X" without having made any contribution beyond the initial prompt also feels very depressing, and seems like much less fun than actually feeling valued in what I do. I'm sorry for sounding harsh.
ugtr3: lol what a load of gibberish.
jack_pp: Well, was Jobs a "doer"? Did he get his hands dirty in the code? Or did he use his employees the way we would like to use LLMs?
aleph_minus_one: > Well was Jobs a "doer"?Jobs' talent was that he was an incredibly talented salesman.
ugtr3: Why do people write such nonsense?Jobs envisioned the iPad and iPhone. Did he do the physical work? No. But he created direction.Everyone around him at that time has commented on this. Are you going to claim they’re all lying?
jack_pp: > Yes, everyone starts out creative.Are there studies done on this or is this just wishful thinking?
jstummbillig: Is the novel idea behind this recurring critique that a CEO must be the chief scientist or that we uniquely hate Sam Altman over all other CEOs?
cheschire: You must have had limited exposure to uncreative types. You might be shocked to find there are people that can do nothing more than follow checklists.Sometimes it's a lack of capacity for novel thinking. Sometimes it's fear caused by past trauma. Or it can be age. Or an inability to overcome habits. The list goes on, but the point is that I've had to work with or supervise employees (even in IT!) that didn't have a creative bone in their body. It wasn't a lack of motivation, it was usually something on the list above.These people absolutely deserved the feeling of being useful, and those are the people I'm most concerned for in this new post-LLM world. The creative types will most likely be fine, but we have words to describe creativity as an acknowledgement that there can be an absence of creativity.
mmustapic: You are only thinking about people and creativity in the workplace. Creativity can be applied anywhere: cooking, taking a new route on your way somewhere, reading some random paragraphs in a book that spawn new thoughts, inventing a new game with a child, optimizing the way you paint the walls of your house, choosing the plants in your garden (and how you'll water them), doing a doodle, trying or buying a new outfit, typing this paragraph in response to your message (kinda LLM-y, maybe).
jack_pp: Sure and all the same, most people just don't have it.
virgildotcodes: I don't see how this doesn't equally apply to the pre-AI economy. The results there have been quite stark, with the "entrepreneurs" ending up far better off than the "employees".
Jensson: > I don't see how this doesn't equally apply to the pre-AI economy. The results there have been quite stark, with the "entrepreneurs" ending up far better off than the "employees".

This is wrong: in most cases the entrepreneur is worse off than the employees, since the entrepreneur spent all his savings on the project while the employees walk away with all the money they got from their salaries.

And even when it is fully funded by external investors, most of the time the founder just gets to keep his salary, since the company fails and becomes worthless.

The only time the entrepreneur is better off is when the company succeeds and becomes big, but that is rare; most of the time it is better to be an employee.
ugtr3: It depends on risk preferences.

Risk seekers should be entrepreneurs.

Risk-averse people probably should not.
Alan_Writer: "If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine."

Even if AI can't (yet) reach that level of creativity, it performs well while trying, at least for now. Who knows about the near future? So far, the roadmap is clear.

The AI push is causing major layoffs in the tech and crypto industries nowadays, but we have been receiving the message: "adapt or pay the consequences." Right now, even management positions are being replaced by software. It may sound rude, but it's also part of human nature and evolution. We have created these machines, and now we have to deal with them.

On the other hand, we (regular human beings) barely know how the brain really works, while AI has demonstrated that it can work very well in some roles (mostly operational, of course) and is turning indispensable. Even governments like Abu Dhabi's are pushing to run the emirate fully by AI.

So yeah, even if we don't like it, AI is silently replacing humans. The best you can do is learn how to leverage it and not be left behind.
Jensson: I have never met an uncreative kid, and studies show kids tend to be more open and creative. But I have to admit I haven't met and interacted with that many average kids, so there may be some who aren't creative, but the majority are.