Discussion
Unfinished Side Projects
Legend2440: AI discussion is a waste of time. It's just the same tired talking points argued by extremely opinionated people, across 800 comments per thread.
nancyminusone: Among non-programmers, you always hear about some fool who fell in love with an AI girlfriend or whatever, but you never hear about the people who opened ChatGPT once, tried some things with it, said to themselves "huh, that's kind of neat," and then lost interest a day or two later, having conceived of no further items to which AI could provide assistance.
MrLeap: I'm old enough to remember being fatigued with so many people talking about making "apps". Programs that run on a phone. Before that everyone was excited about blogging. Web 2.0, ugh. Before that we were excited about the wheel and the creation of fire. All capital drained into those ephemeral fancies. The cycles cycle on.
jimjimjim: It feels like the previous hype cycle of bitcoins, blockchains, and NFTs. People are trying to find uses for new technologies, but a lot of the conversations seem to come from people (at this point I guess it's still people?) trying to increase the hype. Maybe they are trying to be thought leaders, or maybe they are trying to boost some stock valuations.
mememememememo: Yes. Go to Mastodon. I accidentally stumbled on Mastodon last night (I knew about it of course but largely ignored it). Of the 100 or so posts, all were cool stuff. Only one was AI related, and it was more a researchy, geeky thing than the brainrot "I fired all my staff an hour ago. They were not happy. CRLF. CRLF. I have an agentic circus and I am the ringmaster of 666 agents. CRLF..." crap you get on LinkedIn.
augusto-moura: I think you meant Mastodon [1]
[1]: https://en.wikipedia.org/wiki/Mastodon_(social_network)
mememememememo: Thanks. Edited. For some reason I have a mental block spelling it!
neerajk: I'm just getting started ;)
1a527dd5: I don't think I'm quite bored. I'm exhausted/fatigued with the pace.
SunshineTheCat: I definitely get the comment about HN and seeing a billion posts about OpenClaw, Claude, or yet another post on an industry being disrupted by AI. Tack on to that the increasing amount of political stuff on here as well, and it just makes it less and less an interesting place to visit. Don't agree with the angry mob on the political stuff especially, and you get downvoted/flagged into oblivion. Just another echo chamber looking to have viewpoints confirmed, in yet another one of the disappearing places online that foster any level of intellectual curiosity.
jimmyjazz14: I think the advancements around models and such are still somewhat interesting, but it's all the hype around peripheral things like OpenClaw, agentic workflows, and other hyped-up AI-adjacent news that is getting pretty old.
PowerElectronix: AI is an ok tool.
Siah: Not me
guywithahat: I'm not sure if this is a joke, but the field is advancing so dramatically it's hard to stop talking about it. Every week at work I have to show a new AI feature to an executive, about how we can now write thousands of lines of code in minutes at a higher quality than the greatest engineers. This necessitates new tools and new purchases, as well as team and org shifts. If you're reading this and your life hasn't been thrown into disarray, you're likely just behind the times. There are a lot of people who are deep in tech who still don't understand what agents and LLMs can do.
smartmic: AI has become a commodity, for better or worse. And yes, we should treat it as such. Especially no more big ideas from (C-level) managers, please.
IshKebab: I wish there was an option to hide AI stories on HN, and AI-related repos on GitHub's trending page. You could use AI to do it! Fight fire with fire. I'm neutral on AI - so far it seems useful but flawed. But I don't want to hear about it constantly.
mervz: 100% this; GitHub is littered with a bunch of AI shit projects that pollute the trending page.
ekropotin: No, because what most people miss about AI is that… I'm just kidding. The LinkedIn feed became so unbearable that I had to install an extension to turn it off.
MisterTea: It is getting stuffy in the tech sector lately with all these AI postings, but it's still a very new and very disruptive technology. I also have to say that I don't use AI in my personal or professional life, simply because I haven't felt any need to use it.
bmau5: The debate around "AGI" is the thing that gets me. People just moving goalposts and arbitrarily applying their own standards makes for a lot of wheel spinning
somewhereoutth: My only hope is that it is such a disaster that it is effectively an extinction level event for this current technoscene (along the lines of the Permian–Triassic extinction event and others).Then we can get back to the unglamorous, boring, thankless task of delivering business value to paying clients, and the public discourse will no longer be polluted by our inane witterings.
ogou: Everything is fandom now. I grew up around people obsessed with Nascar and NFL. So much of the discourse sounds exactly the same. It beats listening to people talk about their dogs though.
vrganj: AI is fine. The hype is annoying. What's even worse though are the incredible amounts of money and energy that are being thrown at it, with no regard for the consequences, in times of record inequality and looming climate apocalypse.AI is the red herring that'll waste all our attention until it's too late.
dvt: > incredible amounts of ... energy
So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure [1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.
[1] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...
mondrian: Let's get back to filling the front page with Web3, DeFi, NFTs. Oh the good ol' days.
peterlk: Modern AI is a miracle. The math that makes it work is beautiful and really impressive. For example, if you wanted to map all knowledge on earth, how would you do it? AI answers that question by building a high-dimensional vector space of embeddings, and traversing that space moves you through a topology of basically every concept that humans have.
Or another thought: why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it's still much better than what you might expect from a Markov model of n-grams.
OpenClaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increases from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There's a lot of hype and grift and bullshit downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what's going on.
So no, I'm not bored of talking about AI. I'm not sure I ever will be. My suspicion is that those who are bored of it aren't digging deep enough. With that said, that will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.
olivia-banks: I actually hear about this fairly often. In quite a few of my college classes, there's a large focus on AI (even outside the computer science department). I find it surprising how many non-technical people don't even think to use it, or otherwise haven't interacted with it except when required.
cmollis: yes, so bored. yada yada.. i've been 'obsolete' for 36 years and counting.
themafia: You can call an engineer a "product manager" but that does not make them one.
jimjimjim: "higher quality than the greatest engineers". Right... And why do so many articles or comments take the general approach of "it's great, and if you don't think it is, it's because you don't understand it"?
Aerroon: I think the workflows can be really interesting to read about. The other week I read a Reddit post about how someone got Qwen3.5 35B-A3B to go from 22.2% on the 45 hard problems of SWE-bench Verified to 37.8% (Opus 4.6 gets 40%). All they essentially did was tell the LLM to test and verify whether the answer is correct, with a prompt like the following:
> "You just edited X. Before moving on, verify the change is correct: write a short inline python -c or a /tmp test script that exercises the changed code path, run it with bash, and confirm the output is as expected."
Now whether this is true, I don't know, but I think talking about this kind of stuff is cool!
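(The harness behind that Reddit post isn't public, but the loop it describes is easy to sketch. Below is a minimal illustration: `VERIFY_PROMPT` is the prompt text from the comment, and `run_check` is a hypothetical helper that runs whatever check script the model emits and treats exit code 0 as "verified".)

```python
import subprocess
import sys
import tempfile
import textwrap

# The self-verification prompt quoted in the comment, as a reusable template.
VERIFY_PROMPT = (
    "You just edited {path}. Before moving on, verify the change is correct: "
    "write a short inline python -c or a /tmp test script that exercises the "
    "changed code path, run it with bash, and confirm the output is as expected."
)

def run_check(script: str) -> bool:
    """Write a model-emitted check script to a temp file, run it,
    and treat a zero exit code as a passing verification."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(textwrap.dedent(script))
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.returncode == 0

# After each edit, the agent loop would send VERIFY_PROMPT and feed the
# model's check back through run_check; only a passing check lets it move on.
passed = run_check("assert sum([1, 2, 3]) == 6")
failed = run_check("assert 1 == 2")
```

The point of the trick is exactly this feedback signal: the model's claim about its own edit is replaced by an actual exit code.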
internet2000: Not really? It's kind of a big deal.
schaefer: > Not really? It's kind of a big deal.
Why on earth is the parent comment downvoted? The title of TFA asks a question. This statement directly answers that question. Seems very on-topic.
cdrnsf: It's ruined the sparkle emoji for everyone.
vincentabolarin: Management spins up something on Lovable and believes that building any software is as easy as typing a few prompts.It's worse when there's a colleague of yours encouraging that by using AI blindly, piling up technical debt just to move at the pace that Management expects after signing you all up on some AI tool.At the end of the day, everyone is talking about AI. For AI or against AI, it doesn't really matter.
jwilliams: It's the most transformative technology I've clocked in my lifetime (and that includes home computers and the Internet). Large organizations are making major decisions on the basis of it. Startups new and old will live and die by the shift it's creating (is SaaS dead? Well, investors will make it so). Mass engineering layoffs could be inevitable. Sure, "I vibe coded a thing" is getting pretty tired. The rest? If anything, we're not talking about it enough.
jonhuber: I think what's crazy is the desire to replicate current-day corporate structures. Look at this multi-agent Jira-story-reading bot that builds stuff because we let it churn overnight. Like the whole idea that you don't need that nonsense to build something amazing.
heavyset_go: See also this insanity: https://github.com/garrytan/gstack/
criddell: [delayed]
Sohcahtoa82: > AI is fine. The hype is annoying.
I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and have never updated their opinions since. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"
In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones who think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since models rely on searching for answers rather than on what was in their training data.
We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it in a digestible format than having to search by hand and comb through a dozen irrelevant results to find the one bit of information I need.
[0] For the purpose of this comment, this subset means detraction based on the quality of the product, not other criticisms like copyright/content-theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.
Garlef: “Everything has already been said, but not yet by everyone.” — Karl Valentin---Personally, I'm still very interested in the topic.But since the tech is moving very fast, the discussion is just very very unevenly distributed: There's lots of interesting things to say. But a lot of takes that were relevant 6 months ago are still being digested by most.
QubridAI: Honestly, a bit, but only because the hype cycle is louder than the genuinely interesting work.
beej71: To answer the OP's question, apparently not! :)
oulipo2: It's not fine at all. It's a capitalistic device, which in its current form is going to increase inequalities even more. We should fight against capitalism before it ruins our planet.
emp17344: No, it’s… fine. Useful in a limited capacity. Not the machine god, but not machine Satan either. The reality is kind of boring.
vablings: This summarizes mostly how I feel about it. It's a tool like any other tool we have advanced since the beginning of human civilization. Machine tools replaced blacksmiths. CNC machines replaced manual machines. Robots replaced CNC machine tenders. CAD replaced draftsmen (and also pushed that job onto engineers (grr)). P&P robots replaced human production lines. The steam train replaced the horse and cart. This is a tale as old as time itself.
kerblang: Those were deterministic rather than stochastic
xyzzy123: [deleted]
lpcvoid: AI is one of the causes that climate change is accelerating, which is another in a long list of reasons to hate it.
tonmoy: I'm not sure I follow. AI barely consumes energy compared to other industries; instead of focusing on the heavy hitters first, wasting time on the climate impact of AI doesn't seem useful.
elbasti: This is wrong. AI uses ~4% of the US grid, and projections are that it will grow to 10%+ in the next 6 years. And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
kgwxd: Oh great, we're at the stage of constantly talking about it, AND talking about how we're sick of talking about it. Now every article will be as long as before + a prefix paragraph explaining how they know we're all sick of talking about it, but...
lukev: This is bad in tech. But at least we are (relatively) well equipped to deal with it. My partner teaches at a small college. These people are absolutely lost, with administration totally sold on the idea that "AI is the future" while lacking any kind of coherent theory about how to apply it to pedagogy. Administrators are typically uncritically buying into the hype; professors are a mix of compliant and (understandably) completely belligerent toward the idea. Students are being told conflicting information -- in one class that "ChatGPT is cheating" and in the very next class that using AI is mandatory for a good grade. It's an absolute disaster.
chatmasta: The wild part is they’re having this reaction while using the most rigid and limited interfaces to the LLMs. Imagine when the capabilities of coding agents surface up to these professions. It’s already starting to happen with Claude Cowork. I swear if I see another presentation with that default theme…
iugtmkbdfil834: This. As annoying as all sorts of 'safety features' are, the sheer amount of effort that goes into further restricting them on the corporate wrapper side makes LLMs nigh unusable. How can those kids even begin to get an idea of what it can do, when it's severely locked down?
runarberg: 1% of worldwide energy expenditure is massive, incredible amounts of energy in fact.
01100011: It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well, we've accelerated a lot of our worst problems (global warming, big tech, wealth inequality, surveillance state, post-truth culture, etc.).
emp17344: AI enthusiasts love to misuse and abuse the goalpost metaphor. It’s practically always an attempt to silence opponents.
bmau5: It's easily abused by both sides of the debate because there's no strict, widely accepted definition. I find it tiring because it's a largely inconsequential benchmark anyway (outside of Microsoft-OpenAI contract disputes).
mr_bob_sacamano: I wish there were a filter on Hacker News to hide all AI related posts.
erikerikson: This is Hacker News. Somebody made that, and uses it, so they don't see this post to tell you about it, but it exists.
mr_bob_sacamano: a Kafkaesque loop
hirako2000: Before large models, things were starting to move to microVMs, lean hardware, Firecracker cloud platforms running thin containers. Then the AI buzz hit, and now we are building gigafactories. The "giga" stands for gigawatt usage, no lesser target.
paulsutter: How did HN become this kind of website?
iwontberude: I went to a conference and people were suggesting nationalizing AI companies so it's basically everywhere.
beej71: I don't think it's worthless. It can greatly speed up coding. And learning foreign languages. And many other things.But I do think humanity is worse off because of it. So I'm a detractor in that way. :)
matsemann: Yeah. I don't mind AI, but I'm waiting for it to stabilize and for a good workflow to become replicable for non-toy problems, one that will survive and evolve for a long time. I don't think I lose out much by not having 10 agents doing my work for me right now. In 6 months, or some years, or whatever, I can just learn the new way of doing it. It's just exhausting how much it changes month to month. Like the new frontend frameworks coming every week after 2010 sometime. I didn't jump on every single one; I waited until React was declared the winner and learned that, which worked well. Sure, someone who used it from day 1 had more experience, but one quickly catches up.
yladiz: This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?
erikerikson: https://news.ycombinator.com/item?id=35654401
jakelsaunders94: This is really interesting. I've been out of education for a long time, but I was wondering how they were dealing with the advent of AI. Are exams still a thing? Do people do coursework now that you can spew out competent sounding stuff in seconds?
_doctor_love: This might sound like snark, but I truly don't mean it that way.
I think what's interesting about AI, and why there's so much conversation, is that in order to be a good user of AI, you have to really understand software development. All the people I work with who are getting the most value out of using AI to deliver software are people who are already very high-skilled engineers, and the more years of real experience they have, the better.
I know some guys who were road warriors for many years: everything from racking and cabling servers, setting up infrastructure, and getting huge cloud deployments going, all the way to embedded software, video game backends, etc. These guys were already really good at automation, seeing the whole life cycle of software, and understanding all the pressure points. For them, AI is the ultimate power tool. They're just flying with it right now. (All of them are also aware that the AI vampire is very real.)
There's still a lot to learn, and the tools are still very, very early on, but the value is clear.
I think for quite a few people, engaging with AI is maybe the first time in their entire career they are having to engage with systems thinking in a very concrete and directed way. This is why so many software engineers are having an identity crisis: they've spent most of their career focusing on one very small section of the overall SDLC, while believing that was mostly all they needed to know.
So I think we're going to keep talking for quite a while, and the conversation will continue to be very unevenly distributed. Paradoxically, I'm not bored of it, because I'm learning so much listening to intelligent people share their learnings.
Lerc: Never tired of talking about AI. There are so many fascinating aspects to explore and papers delivering new ideas. It's a bit tiring keeping up with the new stuff, but talking about what we've found is one of the things that makes it easier to keep up.
I'm somewhat tired of seeing the same rehashed claims of future ability, non-ability, profit, loss. I actually like talking about the implications, future risks, and challenges of AI. I have made submissions on ways AI should be regulated to benefit society. The problem is the assumption of what is happening and what will happen. Too many people seem to enter the conversation feeling that the absence of doubt is the same thing as being informed. And especially people making claims based on premises they seem to believe will become true if they build big enough towers on them.
The number one thing that bothers me in all this is people assuming the contents of the minds of others. I find the pathologising of Sam Altman to be the most egregious form of this. It is one thing to disagree with someone's decisions, another thing to disagree with their stated opinions, but to decide upon a person's character based upon what you believe they are thinking in their private thoughts is simply projection. I know this is an opinion of little worth to many, but my impression of Sam Altman is just a person who has different perspectives from me. The capitalist tech world he lives in would inevitably shape different values from mine. What I have seen of him is consistent with a sincere expression of values. I can accept that a person might do something different from what I would, even the opposite of what I want, while believing that they are doing so for reasons that seem morally right to them.
This also happened with cryptocurrency. Crypto advocates believe that it is a good thing for the world. Too many consider those who believe that crypto could benefit society to be evil.
There is a difference between being wrong and being evil. No matter how certain you are, you can still be wrong; in fact, beyond a point I would say increased certainty indicates a higher likelihood of being wrong. So I'm happy to talk about AI. I have plenty to learn. I wonder whether others, if they went in with the goal of learning, would find it less tiring.
vrganj: If we wanna go full-on Marxist analysis, it is an attempt by the capitalist class to finally rid themselves of their dependence on labor and its pesky demands like sick leave and fair wages. Through that analysis, one can also explain why the managerial caste is so obsessed with it: it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC circles and their belief in AGI as some sort of capitalist savior figure. I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...
mattgreenrocks: Yeah. Oftentimes get crickets here when I talk along those lines. Can't tell if apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.
webdood90: > These people are absolutely lost, with administration totally sold on the idea that "AI is the future" ...Doesn't sound that different from my tech job
CrzyLngPwd: Yup. Bored of hearing about it, bored of reading about it. I love using these LLM tools, but honestly, it feels like every man and his dog has something to say about it, and is angling to make a quick buck or two from it. And the slop, oh my goodness, it's never-ending on every site and service.
emp17344: The parent comment is a pretty measured take. What’s your problem with it?
mhitza: Pretty funny boasting about accelerated results when his public contributions are only in two repositories (gstack itself and a Rails bundle with 14 commits). Endlessly grooming the agent reminds me of Gastown. Curious to see what he'll present, if anything, from his 700+ contributions in private repositories.
johnea: Yes! There are other interesting things in the world today, and HN is overwhelmed with pretend intelligence. Hype, detractors, ALL OF IT! Maybe a separate web page or RSS feed could be created that is dedicated to the subject...
surgical_fire: Which is why talk about AI datacenters typically involves energy supply constraints, and possibly the need to build power plants along with them. It is, of course, because it barely uses any energy.
xvector: Seeing this kind of populist misinformation/bikeshedding on HN is particularly disappointing.
lpcvoid: So then explain to me where I wrote misinformation?
d675: Absolutely. As an early/mid-level SDET/SRE, I can move so fast on prototyping good full apps now. That style of thinking is serving me well; even knowing about queues, basic infra, etc. is plenty to produce decent code. That said, I can see AI makes a ton of bad decisions. Ex: a React web app, a user-facing dashboard that queries one DB:
- the dashboard was making 300 queries per second, instead of collecting data once, parsing it, and setting a stale timer / refetching only on refresh
- implicit flows in the dashboard, rather than hard checks in the DB to prevent bypassing of payment status
- exposing tokens in the URL
- and a ton more I don't remember.
AI can help me catch them too. I built out a great full-stack app and now realize I can do far more complex workflows. About to tackle it.
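(The first fix described above, collect once and serve until a stale timer expires, is language-agnostic. Here is a minimal sketch; `StaleCache` and `query_db` are illustrative names, not the commenter's actual code.)

```python
import time

class StaleCache:
    """Fetch once and serve the cached result until it goes stale,
    instead of issuing one database query per render."""

    def __init__(self, fetch, ttl_seconds):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.value = None
        self.fetched_at = float("-inf")  # forces a fetch on first access

    def get(self):
        now = time.monotonic()
        if now - self.fetched_at > self.ttl:
            self.value = self.fetch()    # hit the database only when stale
            self.fetched_at = now
        return self.value

# Simulate 300 dashboard renders against a counting "database".
calls = 0
def query_db():
    global calls
    calls += 1
    return {"rows": 42}

cache = StaleCache(query_db, ttl_seconds=60)
for _ in range(300):
    data = cache.get()
```

With a 60-second TTL, the 300 renders in the loop trigger a single query instead of 300, which is the whole difference the comment is pointing at.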
mostertoaster: The EPA repealed its 2009 conclusion that greenhouse gases warm the Earth and endanger human health and well-being. So this is not a good reason to oppose AI. Now, the sheer energy it requires does mean we might want to go nuclear, though. Natural gas is nice, though, because it pollutes the air far less than coal. You might argue the EPA only repealed that because of political agendas, but the same argument could be made for why it was passed. A lot of people got very rich off the fear mongering from climate alarmists.
nisten: In 2-3 decades, 30% of the world population will be over 60 years old (~3 BILLION seniors). We don't have an economic model for it, nor does Gen Z want to all be personal support workers while paying rent. Nvidia only makes 6 million data center GPUs a year. Huawei makes 900k. We need 10 to 100x more to be able to automate enough just to hold civilization together. Amazon built datacenters with near-zero water use, but they used 35% more electricity overall. So the problem can be solved; however, we need to change out of the whole scarcity mentality if we're going to actually make the planet nice.
datsci_est_2015: What do LLMs replace, pray tell? It's more like moving from a screwdriver to a drill than replacing the carpenter altogether. Also note that there are inventions that may "replace" some part of a process but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.
jakelsaunders94: This is a really interesting take, and maybe shows that I haven't been thorough enough with my reading. My guess is that the deep technical articles are few and far between and the higher-level 'hot takes' are what fills the room. Do you have any recommendations for interesting places to start?
heavyset_go: > certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?)
I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".
Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.
For example, I will bet that non-straw people will take issue with this, and for good reasons:
> Hallucinations are exceptionally rare now
doug_durham: Nope. It remains the most dynamic and impactful area in software today. I'm sure it will fade into common practice over the next few years and become less talked about. I find it infinitely more interesting than yet another article talking about the wonders/horrors of the Rust borrow checker.
amelius: Of course talking about AI is boring.The analogy is someone from the 19th century talking about their slaves all day which is of course nonsense because they had other things to talk about.
pjc50: Could you provide an example of such a thing that is prevented?
gastonf: > If we get computers to think for us, we can solve a lot of our most pressing issuesIf AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?
thethirdone: Compare that to ~30% of all energy use for transportation. Electricity is roughly 40% of US energy use, so that's approximately 40% * 4% = 1.6%, vs 30%. I find your correction to be more wrong than the initial statement.
> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
Emissions in 2018 were ~5250M metric tons and in 2024 ~4750M. That is a reduction of 10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say AI using 10% of the grid would not completely wipe out the reduction. [0]
[0]: https://www.statista.com/statistics/183943/us-carbon-dioxide...
axi0m: The worst part in all that noise: ask your customers what they need; they will tell you "AI features". No matter what it is, or how it compares to more traditional approaches when it comes to solving their pains. These two letters have become an obsession.
s_u_d_o: Gosh, how I miss the old HN days… where one would actually code, read docs, and develop stuff and feel happy about it. Not write a prompt and watch a chatbox do all the work in a matter of seconds. It's like we're losing the meaning of building something… don't know how to explain it more. But yeah, it's tech! Nothing stays the same.
whattheheckheck: When industrialization was taking root, yes, the factory jobs indeed sucked AND it was the future. Two things can be true.
jakelsaunders94: Hey, I don't think this sounded like snark at all. Super grounded take.
> I think what's interesting about AI, and why there's so much conversation, is that in order to be a good user of AI, you have to really understand software development.
This I agree with completely. You can see it in the difference between a prompt where you know exactly what you want and one where things are a little woolly. A tool in the hands of a well-trained craftsperson is always better used.
> So I think we're going to keep talking for quite a while
Me neither, and to be clear I'm okay with that. This was mostly a rant at the lack of diversity of discourse.
teaearlgraycold: > Hallucinations are exceptionally rare now
The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people who are ignorant of how LLMs work or what they're used for.
I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of English-speaking computers perfect.
cyanydeez: Isn't that scary, though? A bunch of people are going to be forced to use a tool that keeps them ignorant, and they absolutely won't know if it's doing correct things, to the point that, as you retire, the next crop is going to be much less involved in knowing what's going on. It's what happened with the internet and computer usage. As Apple made it easier to get online with zero computer knowledge, suddenly we're electing people like Donald Trump.
gAI: Agreed, though I prefer "Fae Folk" to vampires.
arcxi: This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.In contrast, what harm do those detractors cause? They don't generate as much code per hour?
xvector: By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!

The "harm" (if you can call it that) is clear: detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced by our amazing QoL today compared to any historical lens).
arcxi: > detractors slow the pace of progress

Considering our climate, political, and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.
keybored: And this one will be different?

> At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.

Umm.

> I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.

It’s all positives. So what’s the problem? There isn’t a problem with AI. Of course. It’s just that the discourse around it is “boring”. And the managers are lame about it.

And what has the AI discourse been for the last few years? The same formula:

- AI is either good
- ... or it is the best thing to have happened to Me
- But I have feelings[1] or concerns about everything around AI, like the discourse, or the mania of people having two-hundred concurrent AI agents

It’s all just grease for the AI Inevitabilism bytemill.

[1] https://news.ycombinator.com/item?id=47487774

> … And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.

OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe.” Very self-aware.

And now 117 points and 53 comments in 23 minutes.
jakelsaunders94: Hey :)

> And this one will be different?

I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.

> Umm.

??

> It’s all positives. So what’s the problem?

The article is trying to say that these things are great, but the level of conversation leads to a lack of novelty.

> It’s just the discourse around it is “boring”. And the managers are lame about it.

Exactly.

> OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe” Very self-aware.

Is this sarcasm?
SpicyLemonZest: [delayed]
slfnflctd: > having conceived of no further items to which AI could provide assistance

For me, the issue isn't that I can't conceive of work AI could help with. It's that most of the work I currently need to be doing involves things AI is useless for.

I look forward to using it when I have an appropriate task. However, I don't actually have a lot of those, especially in my personal life. I suspect this is a fairly common experience.
delbronski: AI is starting to look like a net negative for humanity. I remember the early days of OpenAI. I was super excited about it. There was a new space to uncover and learn about. I was hopeful.

Now I have this love/hate relationship with it. Claude Code is amazing. I use it every day because it makes me so much more efficient at my job. But I also know that by using it I’m contributing to making my job redundant one day.

At the same time I see how many resources we are wasting on AI. And to what end? Does anybody really buy the BS that this will all make the world a better place one day? So many people we could shelter and feed, but instead we are spending it on trying to make your computer check and answer your emails for you. At what point do we just look up and ask... what is the damn purpose of all of this? I guess money.
xvector: > But I also know that by using it I’m contributing to making my job redundant one day.

I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities. You can do so much more now. We are more limited by our ideas at this point than anything else.

Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated"? Why is it not "sweet, now I can do bigger/better/more ambitious things"?

(You can go on about corporate culture as the cause, but I've worked at regular corporations and most of FAANG. Initiative is rewarded almost everywhere.)

> Does anybody really buy the BS that this will all make the world a better place one day?

Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?

There is a reason we aren't dying of dysentery at the ripe age of 45 on some peasant field after a winter day's worth of hard labor. The march of automation and technology has already "made the world a better place."
delbronski: And I’m shocked that anyone into tech can be so blind to the adverse effects the current tech industry is having on our world and our society.

We owe it to the world, as the experts, to be critical. The march of automation and technology has made the world a better place in some ways. I sure love modern medicine, but those drones flying over Ukraine and Russia sure don’t seem like they are making the world a better place. Nuclear bombs are not making the world a better place. Misinformation on social media is not making the world a better place.

Any belief you drink blindly will eventually find a way to harm you.
leontrolski: Yes, AI or no AI, tell me about something actually interesting that you're working on.

Currently it feels a bit like everyone is talking about what new editor they're using. I don't care about that type of developer tooling very much. AI isn't coming up with some exciting new database, type system, etc.

"Look at how I'm able to web dev x% faster" because of LLMs is boring.
overgard: I'm like 99% convinced that most of the AI conversation upvotes at this point are astroturfing. I just don't see the correlation between the sentiment I get from talking to people in the real world (mostly negative on AI) and what I see here.
solenoid0937: I think some companies are just behind the curve, so this sentiment seems bizarre to some.

At my big tech company, AI is in every conversation with everyone, every day. Becoming AI-native is a huge deal for us. Literally everyone is making AI usage a core part of their job, and it's been a big productivity accelerator.

Perhaps it's different where you work, so you don't see the sentiment.
geldedus: I am bored of Luddite people yelling at AI
trigvi: Not necessarily. Personally, I'm both in love with AI (likely to upvote a convo) and scared about the short/medium term societal changes its job displacement will bring.
brookst: AI is more likely to destroy capitalism than it is to increase inequality.

Ten years ago, what would it have cost you to build a Jira clone / competitor? Today one person can do it in a week, at least for the core tech.

In a year, only the very largest companies will pay for that kind of infrastructure tooling.

We’ve just started seeing the democratization of software, and the capitalists are terrified.
plagiarist: I just don't know how to explain that you won't be destroying capitalism with AI. You have a subscription.
keybored: > I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.

Is anybody else bored of talking about AI? I’m beyond bored.
LogicFailsMe: Spot on, I am having the time of my life with AI, more fun than I've had in decades. But I was in the top 10% of engineering, and top 1% of the bits of engineering I do best, so it's easy for me to use AI to explore more ideas than I could have possibly explored by hand. And if I get replaced, cool bro, my investments are in compute, and compute's just getting started IMO.
djeastm: Any thoughts on what the next generation of software devs is going to look like without as much manual experience?
_doctor_love: Honestly, I think it will look pretty much like this one. There’s a lot of manual experience that the current generation doesn’t have.

For example, I haven’t racked and cabled a server in over 15 years. That used to be a valuable skill.

I also used to know how to operate Cisco switches and routers (on the original IOS!). I haven't thought about CIDR and the difference between a /24 and a /30 since 2008. Class A IP addresses, how do those work? What subnet am I on? Is this thing running on a different VLAN? Irrelevant to me these days. Some people still know it! But not as many as in the past.

The late Dr. Richard Hamming observed that once upon a time, "a good man knew how to implement square root in machine code." If you didn't know how to do that, you weren't legit. These days nobody would make such a claim.

So some skills fade and others rise. And software has moved in predictable cycles for many decades at this point. We are still a very young field, but we do have some history.

So things will remain the same the more they change on that front.
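(Editor's aside: for anyone who never had to think about it, the /24 vs /30 distinction above is just exponent arithmetic - a /N prefix leaves 32 - N host bits. A quick illustrative sketch, not from the thread:)

```python
# A /N prefix leaves 32 - N bits for hosts, giving 2**(32 - N) addresses;
# two of those (network and broadcast) are not usable by hosts.
def cidr_sizes(prefix_len: int) -> tuple[int, int]:
    total = 2 ** (32 - prefix_len)
    usable = max(total - 2, 0)
    return total, usable

print(cidr_sizes(24))  # (256, 254): a typical LAN subnet
print(cidr_sizes(30))  # (4, 2): just enough for a point-to-point link
```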
jakelsaunders94: I've been meaning to try Mastodon for a long time (I was never really a Twitter user). As others have said elsewhere though, I'm not sure where to start. Did you just download the app and join mastodon.social?
mememememememo: I did much less. Just went to mastodon.social in my browser and read what is there. I think you can create an account from there. You can also choose another instance to read and create the account from.
steve_adams_86: I don't see the threat from AI as capitalist at all, but more so feudalist. I mean, if things go in the direction of the worst-case scenario. It seems like the power potential transcends the problems of capitalism entirely.But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.
plagiarist: Capitalism is feudalism but with raw generational wealth instead of generational wealth with divine right characteristics.
steve_adams_86: I see some overlap, but I think it's more complex than that. If we conflate the two so easily they lose meaning. Certainly, some people have that experience under capitalism. I think there are systemic failures which lead to life experiences that are probably not all that different from some people's experiences in feudal society, both at the top and bottom of the hierarchy.

The more I think about it, though, I'm not sure feudalism is the right analogy. Serfs had a purpose and were depended upon. In a society where AGI is in the hands of a few, it seems reasonable to believe that there wouldn't be a need for serfs at all. Labour would become utterly irrelevant. You'd have no lord to be bound to. You'd be unnecessary.

I imagine the transition there would be some brutal form of capitalism, but the destination would not be feudalism. I don't think we have a historical analog for that hypothetical destination.
mhitza: Big Data, The Cloud, Quantum Computing, Web 3.0, and maybe a few I've forgotten about.Only thing that stuck thus far is the cloud. Though not for infinite scalability and resiliency, cause that just dumps big invoices in your lap.
solenoid0937: Big Data absolutely became a thing.

The Cloud happened as well, as you've pointed out.

AI adoption is well past Quantum and Web 3. Comparing it to those two is nonsensical.
mhitza: It only is nonsensical if you create your own comparison dimension ("adoption") to construct your argument and call what I said nonsensical!

All those listed, and more, are part of the cycles that the parent comment mentioned and which I've continued.

Same thing with Agile. Mostly sprint-based waterfall; iterative development is not something I've ever seen in practice. Or people over processes, remember those ideas?

Big Data was another hype cycle where even smaller companies wanted a "piece of the action". I worked at the time in a sub-50-developer company, and the higher-ups were all about big data, when in fact our system was struggling with gigabytes of data due to frugality in hardware.

For a moment in time you couldn't spit in any direction without hitting a Domain-Driven Design talk. And now we disable safeguards and LLMs write a mix of garbled ideas from across all the laundered open source training data.

Too early to tell where AI will land, and whether it will bring down the economy with it, but the spending rate doesn't deliver equal results for all, and we will have to see after the dust settles.
cmrdporcupine: This.

I don't know how I'm burnt out from making this thing do work for me. But I am.
computerdork: AI Fatigue seems real: https://www.businessinsider.com/ai-fatigue-burnout-software-...
JoshTriplett: > It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues.

All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.
xvector: What are you talking about? AGI is practically a prerequisite for transhumanism, and, well, not dying.If you want to "keep existing" AGI happening is probably your only hope.
JoshTriplett: Aligned AGI, yes. Unaligned AGI is a fast way to die.If you want to keep existing, slow down, make sure AGI is aligned first, and go into cryo if necessary.If you don't want to keep existing, that doesn't mean you get to risk the rest of us.
jvanderbot: How do I answer this without spamming: yes, very much.

Everyone is in their own place adapting (or not) to AI. The disconnect b/w even folks on the same team is just crazy. At least it's gotten more concrete ("here's what works for me, what do you do?") vs catastrophizing about the jobpocalypse or "teh singularity", at least in day-to-day conversations.
peruvian: Yeah maybe some workplaces are starting to get more organized but in general there's teams with anti-LLM engineers still and some that have Claude Code running all day.
_doctor_love: Thanks friend! Appreciate it.

Agree, the diversity of the discourse is not great. There's a lot of "omg I just got started waaauw" articles out there, along with "we're all gonna die!" stuff. And then a few seams of very excellent insight.

Deep research at least helps with dowsing for the knowledge...
artur_makly: [delayed]
miltonlost: Keep marching that automation and technology to an acidified ocean. But hey, at least now we can code faster than we can review!
solenoid0937: AI won't be what acidifies our ocean, but AGI might save us from it.

Strangely enough, I don't see you calling for an end to the consumption of meat, which would have a far larger environmental impact while not slowing global progress at all.
palata: > AI won't be what acidifies our ocean

Tech is what got us where we are. AI allows us to use more energy to produce more of what is currently measurably killing us.

> but AGI might save us from it.

This is just faith. Some believe that prayers may save us.
solenoid0937: "AI energy usage" is a convenient scapegoat not backed by data.Many things are orders of magnitude bigger than AI in the energy usage problem that bring less comparable value.
htx80nerd: I'm bored of the everyday Claude spam. I've used Claude extensively and it was very sub par.
d675: what did you try to make? how?
johnbarron: Yes, all this talk about AI is extremely distracting... https://www.youtube.com/shorts/LDPDDS3HaGo
gojomo: You may be bored of AI, but because AI is not yet bored of us, turning away may be dangerous.
tartoran: AI is neither bored nor engaged with us. It's just a technology that we can use or abuse. I doubt it'll become conscious anytime soon, though our desire to invent God or deceive others will push us to invent many contraptions to make it appear conscious.
_doctor_love: Serious reply to this one: I truly don’t find it any more scary than what’s already taken place many times in human history.

We have hundreds and thousands of years of history showing humans committing atrocities against each other well before the advent of computers, or even the introduction of electricity. So while the tool may become so ubiquitous that there’s no option not to engage with it, I don’t think it really fundamentally alters the dynamics of human behavior. Some people are motivated by greed. Others are motivated by nobility. It really just comes down to which wolf they're feeding.

In terms of the tool keeping people ignorant, there’s a part I agree with and a part that I don’t. In terms of information dissemination, AI is probably the autocrat’s wet dream: finally being able to achieve real-time redefinition of reality. That’s pretty scary, and I’m not sure what to do about it.

On the other hand, people have always been free to not really learn their craft and to just sort of get by and make a living. That was true a thousand years ago, and it’s true today. There’s always somebody who can do a really high-quality job, but they’re very expensive, and then there's a vast population who will do a medium-to-terrible job for less money. You get what you pay for. There's a reason history is primarily written about people with power and wealth: they were the only ones with the means to do anything.

I don’t agree with the assertion about the internet and the election of someone like Donald Trump. Well before the internet existed, politicians were using communication mediums to influence things and get elected, whether it was the telegraph, the telephone, or the TV. JFK famously was the first TV president (notably, he didn't wear a hat).

These technologies simply give politicians more reach, and they may change the dynamics of how voters are persuaded.
But what’s true today was true three hundred years ago: there’s the face of power that you see publicly, and then there’s what really happens behind the scenes.
bluefirebrand: > Serious reply to this one: I truly don’t find it any more scary than what’s already taken place many times in human historySpoken like someone who thinks they are going to be insulated from the fallout
solenoid0937: It might be difficult to comprehend, but many of us are fine with the fallout because we understand the net benefit to humanity is going to be similar to the previous waves of automation.Sure, it might hurt me. I'm not selfish enough to put that over what will be an incredibly empowering development for our species.
peterlk: My favorites are the micrograd series by Andrej Karpathy on youtube [0], and “Why Deep Learning Works Unreasonably Well” [1]The greats on youtube are also worth watching: 3B1B, numberphile, etc.[0] https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9Gv... [1] https://youtu.be/qx7hirqgfuU?si=8zmrbazuvnz379gk
keybored: A post supposedly about being bored of talking about AI. But psych, it’s the same AI talking points. And psych, the top comment is the same sentiment about how the truly skilled will finally have their time to shine.

I don’t know if it’s the Universe delivering this farce or the emergent LLM Singularity.
pesus: I'm convinced that the majority of people hyping up AI don't actually interact with many people in real life, let alone people that aren't software engineers.
elbasti: > Compare that to ~30% of all energy use for transportation. So approximately 40%*4% = 1.6% vs 30%. I find your correction to be more wrong than the initial statement.

I don't follow. The comparison is 30% of energy use for transportation vs 4% for AI, and soon 30% for transportation vs 10% for AI.
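(Editor's aside: the disagreement here is just about which two numbers get compared. A neutral sketch of both readings, using only the figures quoted in the thread:)

```python
# Reading 1 (the quoted comment): AI is 40% of a 4% slice of total energy.
nested = 0.40 * 0.04               # 1.6% of all energy use
# Reading 2 (the reply): AI is 4% of total energy, heading toward 10%.
direct_now, direct_soon = 0.04, 0.10
transport = 0.30                   # ~30% of energy use for transportation

print(f"reading 1: {nested:.1%} vs {transport:.0%}")      # 1.6% vs 30%
print(f"reading 2: {direct_now:.0%} -> {direct_soon:.0%} vs {transport:.0%}")
```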
SyneRyder: Honestly, I think there's a big divide, and those of us who are using AI intensively might just be increasingly "going dark" and distancing ourselves from those "real world" people. It's becoming detrimental being around people who are so actively negative about AI. It feels like being around people who still insist the sun orbits the earth. Those people are actually happier believing what they believe, so why spend any more time trying to convince them they're wrong?

I spent 2024 on Mastodon and I absorbed their groupthink that AI was useless... I wish I could get that year of my life back. I wish I had that extra year's head start on AI compared to where I am now. So many of my coding frustrations that year might have been solved by using AI. I am reluctantly back on X - I hate what has been done to Twitter, but that's where so much of the useful information on using AI is being shared.

Well, back to it. Claude has been building another local MCP server tool for me in the background.
solenoid0937: > It feels like being around people who still insist the sun orbits the earth.

100% feeling this divide as well.

People who deny the benefit of AI in 2026... I can't even engage with them anymore. I just move on with my life. These people are simply not living in reality; it will catch up to them eventually (unfortunately).
scorpioxy: To me, it is very scary. I know people who have sort of "outsourced" their critical thinking to ChatGPT. So it's extra scary when I see it outside technical circles. They'll just believe whatever that generation of LLM tells them, because it says it so confidently, and never question or check the information. Maybe I'm naive, but I thought easier access to knowledge was supposed to make us more intelligent, not less.
_doctor_love: > how the truly skilled will finally have their time to shine.

That's not what I said. I said that those who are already shining are now shining even brighter. Give a great craftsman a new tool and he will find a way to apply it. If it is valueless, he will throw it away.

For what it's worth, your comment is also an HN trope: the disaffected low-effort armchair keyboard warrior.
keybored: Expressing a negative sentiment is a trope now?
rarisma: I love AI, think it's super useful, use Claude daily, and follow the industry closely, but I would love to go a day without hearing about it.
metalliqaz: By my understanding, the administrators at small colleges are among the least capable professionals one might find anywhere in the economy.