Discussion
chromehearts: Incredible website
feverzsj: More like Lunatic.
Mordisquitos: It can be both. There are two L's to pick from.
kombookcha: What a wonderful read.
GuestFAUniverse: And "lazy". Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly starts to comment on "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.
If I point that out with something like "include that yourself", it does a decent job.
That's so _L_azy.
Meneth: That's a lie.
wilg: LLMs are pretty cool technology and are useful for programming.
5o1ecist: A pointless opinion piece of low information density, perfect for an echo chamber of like-minded people.
emsign: If you check the code afterwards, that is. You do check the code yourself, don't you?
emsign: LLMs are cheaters because their goal isn't to produce good code but to please the human.
theshrike79: > This sort of protectionism is also seen in e.g. controlled-appellation foods like artisanal cheese or cured ham. These require not just traditional manufacturing methods and high-quality ingredients from farm to table, but also a specific geographic origin.
Maybe "Artisanal Coding" will be a thing in the future?
boxed: This geographic protection is extremely bogus in many cases, if not most cases, which imo undermines his argument.
anilgulecha: > If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.
Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.
emsign: > It's not a co-pilot, it's just on auto-pilot.
Love it. Calling it "Copilot" is in itself a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.
_flux: Well, initially it was a lot less capable. Someone might have described it as auto-complete on steroids.
Someone might call LLMs that today, except they've stepped up a bit from steroids.
simianwords: What the author and many others find hard to digest is that LLMs are surfacing the reality that most of our work is a small bit of novelty against boilerplate, redundant code.
Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question is: why hasn't the boilerplate been automated as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.
I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think every line is a creation from their soul.
emsign: Then MS is conveniently keeping the old name.
emsign: You are missing the author's point. He literally said no court should have rendered a judgement; that's the exact opposite of guilty until proven innocent. Guilty means a court has made a judgement.
He is proposing not to make a judgement at all. If the AI company CLAIMS something, they have to prove it, like they do in science. Any claim is treated as such: a claim. And it's true that LLMs by design cannot cite sources. Thus they cannot, by design, tell you if they made something up, if they just copied and pasted it, or if they somehow created something new. This ambiguity benefits the AI companies, and they are exploiting it to the maximum, going as far as illegally obtaining pirated intellectual property from an entity that is banned in many countries.
plasticeagle: Acko.net remains the best website on the internet.
js8: That's a problem with any self-improving tools, not just LLMs. Successful self-improvement leads to efficiency, which is just another name for laziness.
est: I won't call that forging, but commission.
BTW, you can make git commits with the AI as author and you as committer, which makes git blame easier.
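A minimal example; the AI author name/email here are just placeholders:

    # --author sets who wrote the change; the committer stays whatever
    # your user.name / user.email config says
    git commit --author="Claude <noreply@anthropic.com>" -m "Add input validation"

    # show both identities in the history
    git log --format='%an (author) / %cn (committer)'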
DavidPiper: > This stands in stark contrast to code, which generally doesn't suffer from re-use at all ...This is an absolute chef-kiss double-entendre.
malka1986: Hello, I am a single dev using an agent (Claude Code) on a solo project. I have accepted that reading 100% of the generated code is not possible. I am attempting to find methods that allow clean code to be generated nonetheless.
I am using extremely strict DDD architecture. Yes, it is totally overkill for a one-man project. Now I only have to be intimate with two parts of the code:
* the public facade of the modules, which also happens to be the place where authorization is checked;
* the orchestrators, where multiple modules are tied together.
If the inners of a module are a little sloppy (code duplication and the like), it is not really an issue, as these do not have an effect at a distance on the rest of the code.
I have to be on the lookout, though. It happens that the agent tries to break the boundaries between the modules, cheating its way through with stuff like direct SQL queries.
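A minimal sketch of what I mean by a module facade, in TypeScript; all names here are invented for the example:

    // billing/index.ts -- the only file other modules are allowed to import
    type User = { id: string; roles: string[] };

    function requireRole(user: User, role: string): void {
      // authorization is checked once, at the module boundary
      if (!user.roles.includes(role)) throw new Error("forbidden");
    }

    export function issueInvoice(user: User, orderId: string): string {
      requireRole(user, "billing:write");
      // whatever sloppiness lives below this line stays inside the module
      return createInvoiceInternal(orderId);
    }

    function createInvoiceInternal(orderId: string): string {
      return `invoice-for-${orderId}`; // stand-in for the generated internals
    }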
silon42: Abstraction isn't free: even if you had the correct abstraction and the tools to remove the parts you don't need for deployment, there is still the cost of understanding and compiling.
There is also the cost angle: somebody trying to sell an abstraction will try to monetize it, which means not everyone will want or be able to use it (or it will take forever or stay unfinished if it's open/free).
There's also the platform lock-in/competition aspect...
raincole: > It turns out vibe-coding an Electron app is still preferable to vibe-coding on multiple platforms and delivering a tailored experience for each.
Off-topic, but people have some weird obsession with native apps. What does "delivering a tailored experience for each platform" look like?
Blender is probably the most successful non-Electron, open-source multi-platform app we've ever had. It completely ignores each platform's native UI. VSCode is the most used editor among programmers [0], and it's literally Electron-based.
Is there even one (1) app that:
1. is as successful as Blender or VSCode, and
2. delivers a tailored experience for each platform, or at the very least uses the platform's native UI?
[0]: https://survey.stackoverflow.co/2025/ and it's not even close.
Back to the topic:
> Video games stand out as one market where consumers have pushed back effectively
No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares about how the code is written.
vladms: > Whether something is a forgery is innate in the object and the methods used to produce it. It doesn't matter if nobody else ever sees the forged painting, or if it only hangs in a private home. It's a forgery because it's not authentic.
On a philosophical level I do not get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. My labeling it "authentic" or not should not affect its artistic value.
For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or used an LLM; it is still bad (or good) code. Someone has to decide if the code fits the requirements, LLM or software developer, and that will not go away.
> but also a specific geographic origin. There's a good reason for this.
Yes, but the "good reason" is more probably people's desire to have monopolies and to not change. Same as with the paintings: if the cheese is 99% the same, I don't care if it was made in a particular region or not. Of course the region is happy because it means more revenue for them, but I am not sure it is good.
> To stop the machines from lying, they have to cite their sources properly.
I would be curious how this can be applied to a human. Should we also cite all the courses and articles we have read on a topic when we write code?
xg15: > An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful.
There are a lot of such artists who can do that after having seen Van Gogh's paintings. Only Van Gogh (as far as we know) painted them without having seen anything like it before; in other words, he had a new idea.
wilg: eyeroll
gck1: Enforce this with deterministic guardrails. Use the strictest linting config you possibly can, and even have it write custom, domain-specific linters for things that must not happen. Then you won't have to hand-hold it as much.
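As a hypothetical example of such a domain-specific check, matching the parent comment's "direct SQL queries" failure mode, a guardrail can be a few lines of TypeScript run in CI (the src/db/ convention is invented for the sketch):

    // check-boundaries.ts -- sketch of a domain-specific guardrail, run in CI.
    // Assumed convention: only files under src/db/ may contain raw SQL.
    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    const files = (readdirSync("src", { recursive: true }) as string[])
      .filter((f) => f.endsWith(".ts") && !f.startsWith("db/"));

    // Flag anything that looks like an inline SELECT statement.
    const offenders = files.filter((f) =>
      /\bselect\b[\s\S]*?\bfrom\b/i.test(readFileSync(join("src", f), "utf8"))
    );

    if (offenders.length > 0) {
      console.error("Raw SQL outside src/db/:", offenders);
      process.exit(1); // deterministic: the build fails, no hand-holding needed
    }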
wonnage: Even the mechanical skill of painting gets a lot harder without an example to look at. Most people can get pretty good at painting from example within a year or two but it’s a big leap to simply paint from memory, much less create something original.
nurettin: Question is: Which L? Or How many Ls?
eucyclos: I wrote a book a while back where I argued that coding involves choosing what to work on, writing it, and then debugging it, and that we tend to master these steps in reverse chronological order.It's weird to look at something that recent and think how dated it reads today. I also wrote about the Turing test as some major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize it
theshrike79: Also "AI" has been in gaming, especially mobile gaming, for a literal decade already.Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.And then they go around blaming EVERYTHING as AI.
trashymctrash: If you read the next couple of paragraphs, the author addresses this:
> That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.
I only quoted the first paragraph, but there is more.
azizam: Sounds a lot like this entire website!
EugeneOZ: I do, 100%, every line.
Daz912: are you too stupid to use it properly?
Otterly99: Art in general is a bit weird like that. The value of a piece is definitely not completely tied to its physical attributes, but to the story around it. The story is what creates its scarcity and generates the value.
It is similar for collectible items. If I had in my possession the original costume Michael Jackson wore in Thriller, I am sure I could sell it for thousands of dollars. I can also buy a copy for less than a hundred.
Same with luxury brands. Their price is not necessarily linked to their quality, but to the status they bring and the story they tell (i.e. wearing this transforms me into somebody important).
It can seem quite silly, but I think we are all doing it to some extent. While you said that a good forgery shouldn't affect one's opinion of the object (and I agree with you), what about AI-generated content? If I made a novel painting in the style of Van Gogh, you might find it beautiful. What if I told you I just prompted it rather than painted it? What if I just printed it? There are levels of involvement that we are all willing to accept differently.
tovej: An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.There are also tools to generate boilerplate that work much much better than LLMs. And they're deterministic.
vjerancrnjak: Libraries create boundaries, in most cases arbitrary ones, that then limit the way you can interact with code, creating more boilerplate to get what you want from the library.
Abstractions are the source of bloat. Without abstractions you can always reduce bloat; with them you can reduce bloat in your glue, but you can't reduce the glue.
It takes discipline to NOT create arbitrary function signatures and short-lived intermediate data structures or type definitions. That is the beginning of boilerplate. Many advances in removing boilerplate come from realizing that your 5 function calls and 10 intermediate data structures or type definitions essentially compute a thing you could do with 0 function calls, 0 custom datatypes, and fewer lines of code. The abstraction hides how simple the thing you want is.
The problem is that almost all open source code looks like the bloat described above, so LLMs have no idea how to write code without boilerplate. The only place where I've seen it work is in shaders, which are usually written to avoid the common pitfalls of abstraction.
LLMs are incapable of writing a big program in 1 function and 1 file that does what you want. Splitting the program into functions or even multiple files is a step you do after a lot of time, yet all open source looks nothing like that.
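A toy illustration of what I mean, with invented names:

    // The "abstracted" version: two arbitrary signatures and an
    // intermediate pass, each a little seed of boilerplate...
    type Row = { name: string; active: boolean };
    const rows: Row[] = [{ name: "ada", active: true }, { name: "bob", active: false }];

    function selectActive(input: Row[]): Row[] {
      return input.filter((r) => r.active);
    }
    function projectNames(input: Row[]): string[] {
      return input.map((r) => r.name);
    }
    const names = projectNames(selectActive(rows));

    // ...versus just computing the thing you actually want:
    const namesDirect = rows.filter((r) => r.active).map((r) => r.name);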
vladms: So, if we apply this to software, should we cite Dijkstra each time we use his graph algorithm?
Should we also say that being able to implement Dijkstra's algorithm is irrelevant because "you did not have the idea"?
It's great to credit people who have an idea first. I fail to see how using an idea is "bad" or "not worthy"; ideas should be spread and used, not locked up by the first one who had them (except maybe for some small time period).
Papazsazsa: This is actually quite an insightful comment into the mindset of the tech set vs. the many writers and artists whose only 'boilerplate redundant code' is the language itself, and a loose aggregate of ideas and philosophies.Probably the original sin here is that we started calling them programming languages instead of just 'computer code'.Also - most of your work is far more than mere novelty! There are intangibles like your intellectual labor and time.
wonnage: Boilerplate has been with us since the dawn of programming.
I still think "LLMs as fancy autocomplete" is the truth, and not even a dig. Autocomplete is great. It works best when there's one clear output desired (even if you don't know exactly what it is yet). Nobody is surprised when you type "cal" and California comes up in an address form; why should we be surprised when you describe a program and the code is returned?
Knowledge has the same problem as cosmology: the part we can observe doesn't seem to account for the vast majority of what we know is out there. Symbolic knowledge encompasses unfathomable multitudes and will eventually be solved by AI, but the "dark matter" of knowledge that can't easily be expressed in language or math is still out in the wild.
jesterswilde: Regarding art, what do you feel about museums? Why would you go see an original instead of simply looking at a jpg?
Even if you aren't in the group, there is clearly a group of people who appreciate seeing the original, the thing that modified our collective artistic trajectory.
Forgeries and master studies have a long history in art. Every classically trained artist worth their salt has a handful of forgeries under their belt. Remaking work that you enjoy helps you appreciate it further, understand the choices that were made, and get a better feel for how they wielded the medium. Though these forgeries are for learning, not intended to be pieces in their own right.
vladms: > Regarding art, what do you feel about museums? Why would you go see an original instead of simply looking at a jpg.
I go to a museum to see a curated collection with explanations, in a place that prevents distractions (I can't open a new tab), with people who might be interested in talking about what they see and feel. It's a social and personal experience on top of information gathering.
> there is clearly a group of people who appreciate seeing the original,
There are many people interested in many things. Do you want to say that "because some people think it is important, it must be important"? There have been many people with really weird and despicable ideas throughout history, and while I am neutral on this one, they definitely don't convince me just by their numbers.
> simply looking at a jpg.
Technically a jpg would not work, because it is lossy compression. But a png at the correct resolution might do the trick for some things (paintings that you view from afar), though not for others. Museums have many objects that would be hard to put in an image (statues, clothes, bones, tables, etc.). You definitely can't put https://en.wikipedia.org/wiki/Comedian_(artwork) in a jpg, but the discussion surrounding it touches on the topics discussed here.
lisper: > why hasn’t the boilerplate been automated as libraries or other abstractions?Cue the smug Lisp weenies.
GaryBluto: I think it says a lot about this opinion piece that the people agreeing with it are posting short comments saying "So true!" and "Great!" whilst the people criticizing it are writing paragraphs of well-spoken criticism.
PunchyHamster: No, that would limit our velocity, we can't check code, that eats into the LLM gains
delaminator: "I hate CGI video""So you hated the TV Series Ugly Betty then?""What? that's not CGI!"This video is 15 years oldhttps://www.youtube.com/watch?v=rDjorAhcnbY
auggierose: Yep, people not understanding the value of abstraction is exactly why LLM coded apps are going to be a shit show. You could use them to come up with better abstractions, but most will not.
heavyset_go: Books are just simple theses and themes with hundreds of pages of boilerplate
vntok: > An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.
This reads like a skill issue on your end, at least in part on the prompting side.
It does take time to reach a point where you can prompt an LLM well enough to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.
Jooror: I'm curious how you landed on "git gud; prompt better" and not "maybe the domain I work in is a better fit for LLM code". Or, to be a bit less generous: consider the possibility that the code you're generating is boilerplate, marshaling, and/or API calls; a facade of perceived complexity over something that's as complex as a filter-map or two.
gampleman: Actually I think this is one of the more tragic outcomes of the LLM revolution: it was already hard to get funding for ergonomic advances in programming before. Funding a new PL ecosystem or major library was no mean feat. Despite that, there were a number of promising advances that could have significantly raised the level of abstraction.
However, LLMs destroy this economic incentive utterly. It now seems most productive to code in fairly low-level TypeScript and let the machines spew tons of garbage code for you.
shinycode: No, I don't agree. Just because it's « boilerplate », that doesn't mean it's worthless or carries no novelty. There is « boilerplate » in building many things (houses, cars, etc.): to add real new stuff it's « always the same base », but you have to nail that base, and there is real value in it, with craft and deep knowledge and pride. Every project is different, and not everything can be made from a generic off-the-shelf product.
fzeroracer: > Yeah, exactly. And LLMs help developers save time from writing the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing
Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer: understanding when you're reinventing the wheel when there's a perfectly good wheel nearby.
lxgr: > I don't know how one can spin this as a bad thing.
People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.
> Spore is well acclaimed. Minecraft is literally the most sold game ever.
Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. renaissance artists didn't make every single brushstroke of their masterpieces themselves).
In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work, and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.
hwers: It's unfortunate that there's mode collapse around what the consensus "best way" to use these things is. It's too bad we didn't have a period where these things were great teachers but didn't attempt to write code, because in my opinion the ideal way to use them is not agents mass-producing sloppy, buggy, disorganized code, but to teach you things way faster than the old alternatives, to rubber-duck, and to occasionally write snippets of functions when your brain is too tired, or it's throwaway CLI code, or some API you're not familiar with.
utopiah: > to teach you things way faster than the old alternatives
I'm not sure if you ever had a teacher or instructor that you didn't trust, because they were a compulsive liar or had an addiction or some other issue. I didn't (at least not that I can remember), but I know I would be VERY on guard about it. I imagine I would consequently be quite stressed learning with them, even if they were brilliant, kind, etc.
It would feel a bit like walking on thin ice to get to a beautiful island. Sure, it's not infeasible, and if you somehow make it, it might be worth the risk, but honestly, wouldn't you prefer a slower boat?
simianwords: A lot of people are saying this.
richardjam73: Verbosity doesn't equate to correctness.
anilgulecha: Sure, no "court" should render it, but then:
> AI output should be treated like a forgery
Who's passing this judgement? The author? Civil society?
dntrshnthngjxct: If you do not plan out the architecture soundly, no amount of prompting will fix it. I know this because my "handmade" project, built for backward compatibility and with horrible architecture, keeps being badly fixed by the LLM, while the ones that rely on preemptive planning of features and architecture end up working right.
flohofwoe: The 'Handmade Network' is essentially this (in a good way though), and long before LLMs got good enough for code generation: a counter-philosophy to the soulless "enterprise software development" where a feature that could be implemented in 10 lines of code is wrapped in 1000 lines of "industry-best-practices" boilerplate.
Programming via LLMs is just the logical conclusion of this niche of industrial software development, which favours quantity over quality. It's basically just replacing the human bots who translate specs written by architecture astronauts into code without having to think on their own.
And good riddance to that type of software development; it should never have existed in the first place. Let the architecture astronauts go wild with LLMs implementing their ideas without having to bother human programmers who actually value their craft ;)
wormpilled: I think that's a different category, though. Those backgrounds are actual video recordings of real places, not 3D environments modeled from scratch. It looks 'real' because the background actually exists.
hwers: I agree, it can be incredibly frustrating at times. My rule is that if it "compiles" in my brain as an understood idea, then I accept it. I also push back a lot (sometimes it points out real errors in my thinking, sometimes it admits it hallucinated). Real humans hallucinate a lot as well, or confidently state subtly wrong ideas; it's a good habit anyway. It's basically the same approach as when presented with a "formula" for something in school: if I don't know how to derive/prove it, then I don't accept it as part of my memorized or accepted toolkit (and try to forget it). If it fits with the rest of my network of understood ideas, I do. It's annoying, but still more time-efficient than trawling through lecture slides with domain-specific language etc.
endymion-light: I feel like this is partially a skill issue. You can get direct, cited information from LLMs. There's a level of personal responsibility in over-using the tools and letting them feed you bad/false information, but if you try researching specific abstractions or newer documentation, most LLMs now correctly call and research the tools available, directly citing them.
I think you can build a very easy workflow that reinforces rather than replaces learning. I've used a citation flow to link and put into practice a ton of more advanced programming techniques that I found incredibly difficult to locate and research before AI.
I'd say the comparison is faulty; it's more akin to swimming to an island (no AI) vs using a boat. You control the speed and direction of the boat, which also means you have the responsibility of directing it to the correct location.
otabdeveloper4: Programmers aren't paid to code.
FORTRAN ("formula translation") was one of the first programming languages, and it was supposed to make coding obsolete. Scientists would now be able to just type in formulas and the computer would calculate the result, imagine that!
zorked: Is this claim historical? As in, it was actually made at the time?
otabdeveloper4: Which claim, exactly? That "coding will be made obsolete"?
Yes, it is. Literally every programming innovation claims to "make coding obsolete". I've seen a half dozen in my own lifetime.
endymion-light: This is a great example, actually.
To me, a function is a single sentence within a book. It may contribute to the larger picture, but that sentence can be reviewed, changed, switched around, killed by an editor.
Some programmers believe they're fantastic sentence writers. They brag about how good a sentence they write; their entire worldview has been built on being good sentence creators. Especially within enterprises, you may spend your entire life writing sentences without ever really understanding the whole book.
If your worldview has been built on sentence creation, and suddenly there's a sentence-creator AI, you're going to be deathly afraid of it replacing you as a sentence writer.
doodaddy: There’s a cold reality that we in this profession have yet to accept: nobody cares about our code. Nobody cares whether it’s pretty or clever or elegant. Sometimes, rarely, they care whether it’s maintainable.We are only craftsmen to ourselves and each other. To anyone else we are factory workers producing widgets to sell. Once we accept this then there is little surprise that the factory owners want us using a tool that makes production faster, cheaper. I imagine that watchmakers were similarly dismayed when the automatic lathe was invented and they saw their craft being automated into mediocrity. Like watchmakers we can still produce crafted machines of elegance for the customers who want them. But most customers are just going to want a quartz.
DonHopkins: Pretend Intelligence (PI) — Design Note & Tribute
A short design note and tribute to Richard Stallman (RMS) and St. IGNUcius for the term Pretend Intelligence (PI) and the ethic behind it: don't overclaim, don't over-trust, and don't let marketing launder accountability.
https://github.com/SimHacker/moollm/blob/main/designs/PRETEN...
1. What PI Is
Richard Stallman proposes the term Pretend Intelligence (PI) for what the industry calls "AI": systems that pretend to be intelligent and are marketed as worthy of trust. He uses it to push back on hype that asks people to trust these systems with their lives and control.
From his January 2026 talk at Georgia Tech (YouTube, event, LibreTech Collective): https://www.youtube.com/watch?v=YDxPJs1EPS4
> "So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them." — Richard Stallman, Georgia Tech, 2026-01-23. Source: YouTube (full talk) — "Dr. Richard Stallman @ Georgia Tech - 01-23-2026," Alex Jenkins, CC BY-ND 4.0; transcript in video description.
So PI is both a label (call it PI, not AI) and a stance: resist the campaign to make people trust and hand over control to systems and vendors that don't deserve that trust. In MOOLLM we use the same framing: we find models useful when we don't overclaim — advisory guidance, not a guarantee (see MOOAM.md §5.3).
[...]
Richard Stallman critiques AI, connected cars, smartphones, and DRM (slashdot.org): https://news.ycombinator.com/item?id=46757411
https://news.slashdot.org/story/26/01/25/1930244/richard-sta...
GNU: Words to Avoid: Artificial Intelligence: https://www.gnu.org/philosophy/words-to-avoid.html#Artificia... (currently not responding; archive.org link: https://web.archive.org/web/20260303004610/https://www.gnu.o...)
marginalia_nu: Most of the people doing the most rote and monotonous work were and are doing so in some of the least productive circumstances, with clear ways of increasing speed and productivity.
If development velocity were truly an important factor in these businesses, we'd have migrated away from that gang-of-four-ass Java 8 codebase, given these poor souls offices, or at least cubicles to reduce the noise, and we wouldn't make them spend 3 hours a day in ceremonial meetings.
The reason none of this happens is that even if these developers crank out code 10x faster, by the time it's made it past all the inertia and inefficiencies of the organization, the change is nearly imperceptible. Meanwhile, the bill for the new office and the 2-year refactoring effort are much more tangible.
Sharlin: Yep. It's ridiculous to talk about 10x or 5x or 2x anything in any but the smallest companies. All this talk about programmer velocity is micro-optimizing something that's not a bottleneck.
bdangubic: It is like knocking down a vending machine: you have to rock it back and forth a lot before it falls down.
Sharlin: > Yeah, exactly. And LLMs help developers save time from writing the same thing that has been done by other developers a thousand times.
Before LLMs we did already have a way to "save developers time from writing the same thing that has been done by other developers a thousand times", you know? An LLM doing the same thing for the 1001st time is not code reuse. Code reuse is code reuse.
zimpenfish: > Oblivion, one of the first high-profile games to use procedural terrain/landscape generation
I might be misremembering, but wasn't the Oblivion proc-gen entirely in the development process, not "live" in the game? Which means...
> "In the best case, a skilled artist uses automation to fill in mechanical rote work"
...is what Bethesda did, no?
utopiah: The analogy was about the unknown thinness of the ice, not just the fastest way to get there. It's specifically about the lack of reliability of the process.
qsera: > most of our work is a small bit of novelty against boilerplate, redundant code...
Care to share some examples that prove your point?
thendrill: Exactly. I will just copy-paste my comment from another thread, but it is still very relevant:
> Coding isn't creative, it isn't sexy, and almost nobody outside this bubble cares
Most of the world doesn't care about "good code." They care about "does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?"
Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they're invisible to users, to executives, and, let's be honest, to the dating market.
Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title "Senior Engineer at FAANG." In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they're perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn't the turn-on; the direct deposit is.
That's brutal to hear if you've spent years telling yourself that your intellectual passion is inherently admirable or sexy. It's not. Outside our tribe it's just a means to an end — same as accounting, law, or plumbing, just with a worse dress code and better catering.
So when AI starts eating the parts of the job we insisted were "creative" and "irreplaceable," the threat feels existential, because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry.
I'm not saying the work is meaningless or that system design and taste don't matter. I'm saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was.
endymion-light: Yes, I was disagreeing with the premise of the analogy: what would the slow boat be in this case? In my experience, going through software engineering before AI, you'd get lost on the ice, with nobody to really help you get out.
utopiah: If you get lost on the ice and you have someone who confidently tells you the path but is sometimes wrong, is it actually helpful?
PS: Sorry if the analogy is a bit wonky, but it's quite dear to me, as I do ice skating on frozen lakes and it's basically a life-or-death information "game" that I can relate to. It might not be a great analogy for others.
krige: > Spore is well acclaimed
And yet it also effectively ended Will Wright's career. Rave press reviews are not a good indicator of anything, really.
artisin: Hit songs are just simple four-chord loops stretched over three minutes of synthetic boilerplate.
Joel_Mckay: I know a few people now looking for a job after the game-industry cutbacks. The diffusion models are good enough now to replace some parametric and supporting artist work.
"AI" is isomorphic plagiarism, requiring existing artistic, scientific, and user/author works. "Lying" implies intent, and LLMs are not "thinking", despite marketing companies framing the statistical output that way.
Indeed, people are losing their support jobs in the short term, but from a sustainable corporate copyright policy standpoint, only the foolish feed ectoparasites eating their business IP. =3
dannersy: You're cherry-picking. The open-world games aren't as compelling anymore, since the novelty is wearing off. I can cherry-pick, too: for example, Starfield in all its grandeur is pretty boring.
And the users may not care about code directly, but they definitely do indirectly. The less optimized, more off-the-shelf solutions have seen a stark decrease in performance as the price of game development becoming more approachable.
LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.
mikkupikku: I think that's true, but something even more subtle is going on. The quality of the LLM output depends on how it was prompted in a way more profound than I think most people realize. If you prompt the LLM using jargon and lingo that indicate you are already well experienced with the domain, the LLM will role-play an experienced developer. If you prompt it like you're a clueless PHB who's never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit, but if not, the LLM will just slap together something that looks like it might work but isn't well thought out.
noemit: Many people don't know this, but the Luddites were right. I studied art history and this particular movement. One of the claims of the Luddites is that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to child).
I was able to feel wool scarves made in Europe in the Middle Ages (in museum storage, under the guidance of a curator). They are a fundamentally different product from what is produced in woolen mills. A handmade (in the old tradition) woolen scarf can be pulled through a ring, because it is so thin and fine. Not so for a modern mill-made scarf.
Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.
Weavers in Kashmir work at a similar level of quality, but their wool is different, their needs and techniques are different, so while we still have craftsmen who can produce wool by hand, most of the traditions and techniques are lost.
Is it a tragedy? I go back and forth. Obviously the heritage fabrics are phenomenal and luxurious. Part of me wishes that the tradition could have been maintained through a luxury sector.
Automation is never a 1:1 improvement. It's not just about the speed or the process. The process itself changes the product. I don't know where we will net out on software, and I do think the complaints are justified - but the Luddites were also justified. They were *Right*. Their whole argument was that the mills could not produce fabric of the same quality. But being right is not enough.
I'm already seeing vibe-coded internal tools at an org I consult at saving employees hundreds of hours a month, because a non-technical person was empowered to build their own solution. It was a mess, and I stepped in to help optimize it, but I only optimized it partially, making it faster. I let it be the spaghetti mess it was for the most part. Why? Because it was already making an impact. The product was succeeding. And it was a fundamentally different product from what internal tools were 10 years ago.
stanko: I think you are going to enjoy this talk by Jonathan Blow, Preventing the Collapse of Civilization: https://www.youtube.com/watch?v=ZSRHeXYDLko
endymion-light: Haha, it's a good analogy; I'm being a little bit argumentative for the sake of it, potentially.
I guess in my view, the main alternative you'd have had beforehand is just to drown.
For me, AI sits in a space where, if you know how to use it, it can tell you all the thin spots in the ice accurately. You can then verify those spots, but there's a level of personal responsibility in the verification.
I'd agree there are currently a ton of people using these tools to essentially just find the specific route, but I'd argue those people probably shouldn't be skating in the first place, and would've fallen through one way or the other.
mikkupikku: > Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.Is that even a counter point? Nobody in their right mind would ever claim that procedural generation is impossible to fuck up. The reason Minecraft/etc are good examples is because they prove procedural generation can work, not that it always works.
porridgeraisin: https://news.ycombinator.com/item?id=47260385
lxgr: True, I should have said "counterexample". Procedural generation is just another tool, in the end, and it can be used for great or mediocre results like any other.
porridgeraisin: Oh come on, you don't have to be condescending about function calls.
When you make a function f(a, b, c), it is reusable only if simply changing a, b, c is enough to give you the function you want. Options objects etc. _parameterise_ that function. It is useful only if the variability in reuse you desire is spanned by the parameters. This is syntactic reuse.
With LLMs, the parameterisation moves into semantic space. This makes code more reusable. A model trained on all of GitHub can reuse all that code regardless of whether it is syntactically reusable or not. This is semantic reuse, which is naturally much broader.
Surely you should realise this before writing a condescending comment.
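To make the distinction concrete, a toy sketch (names hypothetical):

    // Syntactic reuse: the only variability is what the author exposed
    // as parameters.
    async function retry<T>(
      fn: () => Promise<T>,
      attempts: number,
      delayMs: number
    ): Promise<T> {
      for (let i = 0; ; i++) {
        try {
          return await fn();
        } catch (err) {
          if (i >= attempts - 1) throw err;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
    }
    // Want exponential backoff, or retrying only on certain errors?
    // That variability is not spanned by (attempts, delayMs), so you
    // rewrite the function. A model trained on a thousand retry loops
    // can emit the variant you describe instead.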
lxgr: Yes, but I beg to differ on the "skilled" part. I find the result very jarring somehow; the scale of the world didn't seem right. (Probably because it was too realistic; part of the art of game terrain design is reconciling the inherently unrealistic scales.)
keyringlight: Another example is upscaled texture mods, which were a trend well before large language models took off. Mods to improve textures in a game are definitely not new, and that probably means including material from other sources, but the ability to automate/industrialize the process (and, presumably, plenty of available training material) meant there was a big wave of that mod category a few years back. My impression is that gamers will overlook a lot as long as it's 'free', or are at least very anti-business (even if the industry they enjoy relies upon it); the moment money is involved, they suddenly care a lot about the whole fabric being hand-made and need verification that everyone involved was handsomely rewarded.
Nursie: I accepted this years ago. In fact I go a step further - code is a liability.It's certainly intellectually stimulating to create it, but I've learned to take joy in discarding vast swathes of it when it's no longer required.
sph: I've had this talk in mind during the past two or three years of the AI boom, and it feels like rewatching a video from the 80s about the dangers of global warming: prescient, and perhaps a bit quaint in its optimism that somehow we won't make things even worse for ourselves.
Now we're way past the point of no return.
mikkupikku: > I'm curious how you landed on "git gud; prompt better" and not "maybe the domain I work in is a better fit for LLM code".
1. Personal experience: lazy prompting vs careful prompting.
2. They're coincidentally good at things I'm good at, and shit at things I don't understand.
3. Following from 2, when used by somebody who does understand a problem space which I do not, they easily succeed. That dog vibe-coding games succeeded in getting Claude to write games because his master knew a thing or two about it. I, on the other hand, have no game dev experience, almost no hobby experience with games even, so I struggle to get any game code that even remotely works.
thunderbong: Both the book and song analogies are incorrect. In the case of code, the users for whom the programs are written are not engaging with the statements of the code; they are interacting with the interfaces the programs provide.
This is not the same when it comes to books and music.
liampulles: I see a future where I program at work less, which is sad but c'est la vie. I think the challenge of the job will be heralding and managing my own context for larger codebases managed by smaller teams, and finding ways to allow for more experimental/less verified code in prod. And plenty of consulting work for companies which have vibe coded their business and who are left with a totally fucked data model (if not codebase).A Private (system) Investigator. :)
sph: > A fair question is: why hasn't the boilerplate been automated as libraries or other abstractions?
Because our ways of programming computers are still woefully inadequate and rudimentary. This is why we have tons of techniques for code reuse, yet we keep reinventing the wheel, because they break on contact with reality. In other fields we've had a lot of time to figure out basic patterns and components that can be endlessly reused.
One example that has bugged me for a decade: we've been in the Internet era for decades at this point, yet we spend a lot of time reinventing communication. An average programmer can't spend two days without having to deal with JSON serialization, or connectivity, or sending notifications about the state of a process. What about adding authentication and authorization? There is a whole cottage industry to simplify something that should be, by now, almost as basic as multiplying two integers. Isn't that utter madness? It is a miracle we can build complex systems at all when we have to focus on this minutiae in every single application.
Now we have intelligences that can create code, using the same inadequate language of grunts and groans we use ourselves day to day.
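A sketch of the minutiae I mean; the endpoint and data shapes are invented, but every codebase grows some variant of this glue:

    // The same serialize/validate/authenticate dance, re-typed in every project.
    type Job = { id: string; state: "queued" | "running" | "done" };

    async function fetchJob(id: string, token: string): Promise<Job> {
      const res = await fetch(`https://api.example.com/jobs/${id}`, {
        headers: { Authorization: `Bearer ${token}` }, // auth, again
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`); // connectivity, again
      const body: unknown = await res.json(); // JSON, again
      if (typeof body !== "object" || body === null || typeof (body as any).id !== "string") {
        throw new Error("bad payload"); // validation, again
      }
      return body as Job;
    }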
orthoxerox: What if my pasties cannot be singled out by any Karelian chef in a blind taste test? Does it matter how they were made?
rybosworld: When web search first arrived, the same thing happened. That is, some people didn't like using the tool because it wasn't finding what they wanted. This is still true for a lot of folks today, actually.It's less "git gud; prompt better", and more, "be able to explain (well) what you want as the output". If someone messages the IT guy and says "hey my computer is broken" - what sort of helpful information can the IT guy offer beyond "turn it on and off again"?
Kwpolska: > Engineers who know their craft can still smell the slop from miles away when reviewing it, despite the "advances" made. It comes in the form of overly repetitive code, unnecessary complexity, and a reluctance to really refactor anything at all, even when it's clearly stale and overdue.
I've seen reluctance to refactor even 10+ year-old garbage long before LLMs were first made available to the broader public.
Sharlin: I was talking about libraries, higher-level units of reuse than individual functions. And your "syntactic" vs "semantic" reuse makes zero sense. Functions are literally written and invoked for their semantics – what they make happen. "Syntactic reuse" would be macros if anything.You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.
mr_toad: > A fair question is: why hasn't the boilerplate been automated as libraries or other abstractions?
Sometimes it has. The amount of generated code that "select count(distinct id) from customers" would produce is huge.
fzeroracer: There are two important failures I see with this logic:First, I am not arguing for reusability. Reusability is one of the most common mistakes you can make as a software engineer because you are over-generalizing what you need before you need it. Code should be written for your specific use case, and only generalized as problems appear. But if you can recognize that your specific use case fits a known problem, then you can find the best way to solve that problem, faster.Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.Coding is a constant balance between understanding what you're solving for and what can solve it. Using LLMs takes the worst of both worlds, by offloading both your understanding of the problem and your understanding of the solution.
dncornholio: LLMs keep messing up even on a plain Laravel codebase..
foobarbecue: Hard agree. Before LLMs, if there was some bit of code needed across the industry, somebody would put the effort into writing a library and we'd all benefit. Now, instead of standardizing and working together we get a million slightly different incompatible piles of stochastic slop.
raincole: Because code reuse is hard. Like, really hard. If it weren't, we wouldn't be laughing at left-pad. If it weren't hard, we wouldn't have so many front-end JavaScript frameworks. If it weren't, Unreal wouldn't have its own GC and std-like implementation.
The whole history of programming-language development is an exploration of how to properly reuse code: are functions or objects the fundamental unit of reuse? Is diamond inheritance okay? Should a language have official package management? A build system? Should the std have network support?
Electron is the ultimate effort of code reuse: we reuse the tens of thousands of human-years invested in making a markup-based render engine that covers 99% of use cases. And everyone complains about it, the author of the OP article included.
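For reference, the left-pad that famously broke builds across npm was essentially this much code (paraphrased from memory in TypeScript):

    // Pads str on the left with ch until it is at least len characters long.
    function leftPad(str: string, len: number, ch: string = " "): string {
      while (str.length < len) {
        str = ch + str;
      }
      return str;
    }

    leftPad("5", 3, "0"); // "005"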
KellyCriterion: This should be completely crushed by Nano Banana models?
voidUpdate: > "Why hasn’t the boilerplate been automated as libraries or other abstractions?"Because a lot of programmers don't know how to copy-paste or make packages for themselves? We have boilerplate at my work, which comprises of some ready made packages that we can drop in, and tweak as needed, no LLMs required
simianwords: They don't go as deep as LLMs, which capture regularities at much more granular levels.
hermannj314: The only code anyone will be touching in a museum in 800 years will be the good code. I hope they don't talk about what great craftsmen we all were just because someone saw an original Fabrice Bellard at the Louvre.
Survivorship bias plays a role in glorifying the past.
mr_toad: > One of the claims of the Luddites is that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to child.)
Sounds like a tautology. If you deliberately hoard knowledge, of course it's going to be hard to obtain.
theshrike79: It's still 100% CGI compositing, and definitely not all of them are real places or real objects.
In that specific 15-year-old example they're mostly composited; you're right about that.
regentbowerbird: If you consider that only the product is relevant, and not how it is made, then no, it does not matter.
But the comment you replied to explicitly points out that the process is in fact relevant, as it is itself a cultural artifact. You're not replying to their main point.
porridgeraisin: I am not talking about using an LLM to make code reusable in the sense you're arguing against.
My point is that the very act of training an LLM on any corpus of code automatically makes all of that code reusable, in a much broader, semantic way rather than through syntax.
mr_toad: > Regarding art, what do you feel about museums? Why would you go see an original instead of simply looking at a jpg.
Generally you get a much better 'view' of the artwork in a museum. It's higher 'resolution', you can view it from multiple angles, etc.
There are some exceptions. You're probably going to get a better look at the Mona Lisa online than if you try to see it at the Louvre.
porridgeraisin: A library is a collection of data structures and functions. My argument still holds.
> "Syntactic reuse" would be macros
Well, sure. My point is that what can be reused is decided ahead of time and encoded in the syntax, whereas with LLMs it is not, and is encoded in the semantics.
> pragmatics
Didn't know what that is. Consider my post updated with the better terms.
bartread: > Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Yes, this is a wildly uneducated perspective.
Procedural generation has often been a key component of incredibly successful, even iconic games going back decades. Elite is the canonical example here, with its procedurally generated galaxies. Powermonger, from Bullfrog, likewise used fractal generation for its maps.
More recently, the prevalence of procedurally generated rogue-likes and Metroidvanias is another point against. Granted, people have got a bit bored of these now, but that's because there were so many of them, not because they were unsuccessful or "failed to deliver".
Jensson: > I guess in my view, the main alternative you'd have had beforehand is just to drown.
Before, most people who didn't know the ice didn't go out on it; today a lot of people who shouldn't be there go far out on the ice.
wolvesechoes: I am a bit tired of such discussions.
I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do that as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to suck even more wealth from those that have little to those that have a lot.
Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought the machine for, and that their owner expects a return on the investment.
orthoxerox: The main point is "It's good for the heritage and good for the customers."
How are the customers hurt if their pie has not been baked by a babushka in Petrozavodsk using the old original recipe, but by an anonymous migrant worker in a dark kitchen using an optimized recipe, if the end result is objectively the same? The packaging doesn't have to say who it was made by.
I also don't see the problem with the heritage. The comment I replied to already said anyone could call their pies Karelian, so there was no restriction that benefitted the residents of a specific region. I can see a PDO-like carveout that goes: "we want to preserve the traditional pie-making of Karelia, so we want this activity to remain economically viable. Therefore, only pies baked in Karelia can be sold as Karelian pies." But I don't see how Sysco baking the same pies and distributing them nationwide helps maintain the heritage.
spicyusername: > luddites were right
And yet in the 200 years since, human civilization has improved by every imaginable metric, in most cases by orders of magnitude.
I get that there are many things happening today that are frustrating or moving some element of human life in negative or ambiguous directions, but we really have to keep perspective on these things. Nearly every problem today is a problem with a solution.
The feelings of panic we have that things are going wrong are useful signals to help guide us towards implementing those solutions, but we really must avoid letting the doomerism take over. Just because we hear constant negative news doesn't mean things are bad.
nathias: I'm selling hand-crafted template code, if anyone is interested.
nikitau: Roguelikes/lites are one of the most popular genres of indie games nowadays, and one of the genre's main characteristics is randomization and procedural generation.
qsera: > Real humans hallucinate..
You seem to have a different understanding of what that means in the context of neural networks. Real humans will not make up a nonexistent API and implement a solution with it (unless they do it on purpose).
cure_42: This is just sad. If your passion for creating something you can be proud of is entirely propped up by imaginary sex appeal that not even most teenagers would believe exists, it's no surprise you'd arrive at such a cynical, pathetic conclusion.Your perspective is a path with only one logical end. That nothing you do or think or believe matters unless someone you're attracted to finds it attractive.That is not how I or most others live. We take pride in and derive satisfaction from our accomplishments without the need for external validation.Yeah, only I care whether the solution I found to a problem today was elegant, or whether my kitchen was pristine and well organized after I prepped for next week's lunches, but so what? I care and it injects more than enough meaning into my life to be worth it.
thendrill: Yeh cool story. But being passionate about a hobby is not gonna pay my bills...When I charge a customer for a solution they don't care about how elegant my code is. They just care if it works for solving their problem...
forinti: Your comment made me think of the Japanese. They have a highly industrialised society, but they also value greatly hand-made products from food and clothes to woodwork and houses.And they also like to emphasise how long it takes for someone to become a master at a given trade.
spacecadet: The author's logic only works for software engineers, and as I have said time and time again: software engineers have been automating people out of their passions for decades, and now it has come for yours... The lying here is LYING TO YOURSELVES.
spacecadet: Demand full automation. Demand universal basic income. Notice how the latter is nearly absent from the conversation.
Another distraction is AGI as a danger to humanity: the only danger is people...
Dumblydorr: Is it wildly uneducated to not know any of the games you mentioned? I didn't realize education covered lesser-known video games? Wouldn't a better example be No Man's Sky, if we're talking procedural gen and a game that eventually became good?
In any case, I agree that gamers by and large don't care to what extent the game creation was automated. They are happy to use automated enemies, automated allies, automated armies and pre-made cut scenes. Why would they stop short at automated code gen? I genuinely think 90% wouldn't mind if humans are still in the loop but the product overall is better.
whazor: I found that it normally takes one prompt early-on to go from 'vibe-coded spaghetti' to something having a decent architecture.
K0balt: I've been thinking a lot about this. I think that AI software automation tools are disproportionately more useful in greenfield work done by small or tiny organizations. By an order of magnitude, maybe 2 in some cases.
What that means is anyone's guess, but it seems like it should result in a Cambrian explosion of disruptive new companies, limited in scope by the idea space.
The thing about small teams is, with a few exceptions, the biggest challenges are typically funnels for users and product-market fit, overcoming and exploiting network effects, etc... so even in small orgs, if you make 30 percent of the problem 4x faster/smaller, the other 70 percent is untouched: total effort only drops to 77.5% of the original, and the untouched part is now roughly 90% of what remains.
This applies even more acutely in larger organizations... so for them, 99.99 percent of the problem remains.
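For reference, the arithmetic in that comment is essentially Amdahl's law; a minimal worked version using the comment's 30%/4x figures (my framing, not the commenter's):

```latex
% Amdahl's law: fraction of the original work remaining when a share p
% of the work is sped up by a factor s.
\[
  W_{\text{new}} = (1 - p) + \frac{p}{s}
\]
% With p = 0.3 and s = 4:
\[
  W_{\text{new}} = 0.7 + \frac{0.3}{4} = 0.775
\]
```

So a 4x speedup on 30% of the work saves only 22.5% overall, and the gain shrinks further as the automatable share shrinks.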
techpression: I'm a hard-core roguelike player (easily over a thousand hours at least across all the games I've played) but even so I can admit that they have nothing compared to a well crafted world like you'd find in From Software titles or Expedition 33, or classic Zelda games for that matter. Making a great world is an incredibly hard task though, and few studios have the capabilities to do so.
bombcar: Procedural generation underlies the most popular game of all time (Minecraft) and is foundational for numerous other games of a similar type - Dwarf Fortress, et al.
And it's used to great effect where you might not expect it (Stardew Valley mines).
What procedural generation does NOT work at is generating "story elements", though perhaps even that can fall; Dwarf Fortress already does decently enough given that the player will fill in the blanks.
optionalsquid: > And it's used to great effect where you might not expect it (Stardew Valley mines).
Apparently Stardew Valley's mines are not procedurally generated, but rather hand-crafted. Per their recent 10 year anniversary video, the developer did try to implement procedural generation for the mines, but ended up scrapping it:
https://www.stardewvalley.net/stardew-valley-10-year-anniver...
bombcar: They're quasi-generated with random elements and fixed elements - similarly to early Diablo procedural generation.
qsera: > does it work, is it fast enough..
Isn't the problem right now that vibe-coded software does not appear to meet these requirements?
imiric: You're right. Automation often trades quality for speed and quantity.
The difference between automating the creation of software and automating the creation of physical products is that software is everywhere. It is relied on for most tools and processes that keep our civilization alive. Cutting corners on that front, and deciding to entrust our collective future to tech bros and VC firms fiending for their next payout, seems like an incredibly dumb and risky proposition.
larodi: > No one cares about how the code is written.
I would go further: no one even cares how the architecture is done, unless you are the one fixing it or maintaining it. Sorry, no one.
We all know Apple did some great stuff with their code, but we care more about the awful work done on the UI, right? I mean, the UI seems to not be breaking in these new OSs, which is an amazing feature... for a game perhaps, and most likely the code is top notch. But we care about other things.
This is the reality, and the blind notion that so many people care about code is super untrue. Perhaps someone putting money on developers cares, but we have so many examples already of money put on implementations no matter what the code is. We can see everywhere funds thrown at obnoxious implementations, particularly in large enterprises, that are only sustained by the weird ecosystem of white-collar jobs that sustains this impression.
Very few people care about the code in total, and this can be observed very easily; perhaps it can even be proved that no other way around is possible.
Ensorceled: > Is it wildly uneducated to not know any of the games you mentioned? I didn't realize education covered lesser-known video games?
Yes. It is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you are unfamiliar with large parts of that field.
aosnsbbz: It was really eye-opening seeing they're able to eat raw eggs and (to maybe a lesser degree of safety) raw chicken, because their society requires high standards of cleanliness in food production. We are literal cattle over here in the States.
Though, given Amodei and Altman's behavior (along with the rest of the billionaire class), that shouldn't be a surprise to anyone.
salad-tycoon: Agreed. I think the good gained by wool mills - little Timmy being less likely to lose a leg to frostbite - is greater than the bad of my scarf not passing through a ring.
Long term though, I've always wondered if the Amish turn out to be the only survivors.
boesboes: try reading :)
lesam: If it's lasted 10 years and someone is still using it after all that time, that seems like a pretty good signal there's a lot of value in the 'garbage'?
I've seen a lot of 'fixes' for 10 year old 'garbage' that turned out to be regressions for important use cases that the author of the 'fix' wasn't aware of.
dostick: It seems like hallucinations and lying have increased over time; it's very different now from what it was 2 years ago. Is this because of training bias? Is there any research data on the dynamics over the past years?
mogoman: try scrolling up and down a few times on the logo and see what happens
firmretention: You didn't write this post. Your LLM did. Are you proud of yourself for copy and pasting "thoughts" that aren't yours?
qsera: > Me labeling it "authentic" or not should not affect its artistic value.
The problem with automated imitation generators is that they can produce thousands of paintings that imitate Van Gogh but do not have the same soul.
It is the same reason why these things cannot create genuinely funny jokes. They cannot assess the funniness of the jokes themselves. They cannot feel. It is easy to recognize the emptiness of a joke, but not so easy for a painting, or some other form of art.
BloondAndDoom: On the topic of procedural generation: roguelikes are all about it, new-generation Diablo-like games definitely have similar things, and so do well respected new games like Blue Prince. There has never been such a successful period for procedural generation in games as now, and all of these are pre-AI. AI-powered procedural generation is the wet dream of roguelike lovers.
endymion-light: Totally true - although I feel like that's been the case since the first coding bootcamps.
h2zizzle: Tbf, Spore's acclaim comes with the caveat that it completely failed to live up to years of pre-release hype. Much of the goodwill it's garnered since, which is reflected in review scores, only came after the storm of controversy over Spore not being "the ultimate simulator which would mark the 'end of history' for gaming" died down.
And you wouldn't really have any idea this was the case if you weren't there when it happened.
notepad0x90: The framing of an LLM's response as truth vs lie is in itself incorrect. In order to lie, one needs to understand what truth and objective reality are. Even with people: when a flat-earther tells you the earth is flat, they're not lying, they're just wrong. All LLM output is speculation. All speculation, by definition, has some probability of being incorrect.
---
We can go even deeper in a philosophical sense. If I made the audacious claim that 2 + 2 = 4, I may think it's true, but I'm still speculating that the objective reality I experience is the same one others also experience, and that my senses and mental faculties, and therefore the qualia making up my reality, are indeed intact, correct, and functional. So is there a degree of speculation when I make that claim?
Regardless, I am able to agree upon a shared reality with the rest of the world, and I also share a common understanding of truth and untruth. If I lied, it can only be caused by an intention to mislead others. For example, if I claimed to be the president of the United States, of course that would be incorrect (thankfully!), but since we all agree that no one reading this post would actually be misled into thinking I am the POTUS, it isn't a lie. Perhaps sarcasm, a failed attempt at humor, or just trolling. It is untruth, but it isn't a lie; no one was misled. You need intent (an LLM isn't capable of one), and that intent needs to be, at least in part, an intent to mislead.
wnevets: > Even with people, when a flat-earther tells you the earth is flat, they're not lying, they're just wrong.
At least some of them know they're wrong and are thus lying.
utopiah: > AI sits in a space where if you know how to use it, it can tell you all the thin spots of the ice accurately. You can then verify those spots, but there's a level of personal responsibility of verification.
Right, but AFAICT most people just venture over the ice and don't bother to check. In fact a lot of people venture there, do check once or twice, then check less and less frequently. The fact that you do it is great, but others seem a lot less careful, until cracks start to show and then it might be too late.
simmerup: I guess you didn't read the article?
wolvesechoes: Keep guessing
noemit: You're right in that we kept the best examples (as coding museums will do in the future), but the best of something is a benchmark. It is striking that modern automation, even hundreds of years later, can't touch what a skilled craftsman could do in the past.
With programming, we documented a lot of it, so it's unlikely to go the way of fine weaving. People will always be able to learn to think and be great programmers.
Maybe if the wool weavers had had the internet, they could have blogged, made YouTube videos, and cataloged their profession so it could last millennia.
topaz0: They're not saying the LLM is lying; they're saying the human user is lying by using the counterfeit as though it were the genuine artifact.
wepple: > It is there to reduce our agency, to make it easier to fire us, to put us in even more precarious position
Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
It's here, so I don't know where you're going with "I'm unhappy this is happening and someone should do something".
anonymous_sorry: Eggs in the UK are safe to eat raw (and I presume the EU as well).
cess11: What weirds me out is that it seems few US corporations care that they don't have copyright to their synthesised code, if the rumours regarding this are correct.
If you don't have the copyright, then you can't license or litigate it under the common rules of software. If someone 'steals' it you can at best go after them with some trade secret case, and I suspect this would be limited if you had already shared the code with them, e.g. because they helped you synthesise it.
luxuryballs: I dunno if counterfeit is the right word here, a lab grown diamond is still a diamond.
hiddevb: I don't think I agree with this take. I love procedural generation, and there is definitely a craft to it. Creating a process that generates a playable level or world is just very interesting to explore as an emergent system. I don't think LLMs will make these systems more interesting by default. Of course there are still things to explore in this new space.
It's similar to generative/plotter art compared to a Midjourney piece of slop. The craft that goes into creating the code for the plotter is what makes it interesting.
endymion-light: Very true - I won't dispute it!
I'd only argue that people were doing this before AI; slop development was just copy-pasting from the first Stack Overflow issue that matched the question rather than thinking. So I'd argue there's a part of it that is just personal responsibility with how these tools are used.
butILoveLife: lol at OP. This has upvotes?
Anyway, as I train people in LLMs/AI, I unapologetically will say "DON'T LISTEN TO IT, IT LIES!" and send commands like "Try again, try harder".
Almondsetat: Large? That's your opinion
simonw: Wait for the header animation to run and then hit the "play" button on the about page, it's very cool: https://acko.net/about
whywhywhywhy: Starfield is boring because of the bad writing, and they made a space exploration game where there are loading screens between the planet and space and you don't actually explore space. They fundamentally misunderstood what they were promising; it's the same as making a pirate game where you never steer the ship or drop anchor.
You can prove people are not bored with the concept, as new gamers still start playing Fallout: New Vegas or Skyrim today despite them being old and janky.
incomingpain: I find these criticisms to be similar to those low-context brain teasers. If a woman gets married at 25 and her kid is 25, how old is she?
This is what LLMs are dealing with. You don't tell them everything they need to know and they are left to fill in the gaps, which may, and often does, mean they lie. That's what agentic does differently: it'll go find the gaps before answering.
Agentic is AGI. You can hire many minimum wage workers who are generally intelligent who don't even go to that level.
lopis: If progress had been limited to solving people's problems, we would be fine.
> The feelings of panic
> It just means we have been hearing a lot of negative news.
This is part of the problem at hand, not just a footnote.
aeon_ai: This is such a comically bad take.
The use of loaded and pejorative language like "forgery" emphasizes that this is not a logical argument, but a moral one. The repeated comparisons to "true craft" reveal that the author would prefer that code be regarded like artisanal cheese.
Beyond the pretension, it's head-in-the-sand to imply that the technology hasn't progressed. It's just very clearly not true to anyone who is paying attention - longer tasks, better code, fewer errors. I'm somebody who actively despises the hype bullshit-machine that SV has turned into, but technology is an industry for pragmatists that can leverage what works. And LLMs do.
If you don't like the technology, you have every right to scream that from the mountaintops. As it stands, this just serves as no more than a rallying cry to the ignorant.
rglover: This is the most rational take. I'm a quality guy (Deming, Juran, etc.), but nothing about incorporating an LLM into my own work has lowered its quality. That isn't to say that I haven't encountered slop. The difference is that, self-identifying as a craftsman, I have the ability to decide whether or not something stays or goes on the scrap heap. It seems a lot of people are missing that point: just because you can churn out shit doesn't mean you have to (and sorry, sunk-cost bias re: tokens isn't an excuse - that's the cost of doing business). It's a choice. AI-assisted coding is a tremendous boon to productivity, if (and I'd argue only if) you treat it like a power tool and not a genie lamp.
No, you won't be rewarded magic beans for churning out crappy dashboards any more. But if you're serious about shipping quality, nothing is stopping you here. It's. A. Choice.
MagicMoonlight: Before mass production, the women of the household would be forced to spend every free moment of their day, outside of their other work, making fabric. Before mechanised farming, the men were forced to spend all day in the fields.
Never again.
theshrike79: Customers want a very specific thing; rules exist that say if you sell something called Specific Thing, it must be made a very specific way or you can't call it that. Even if you make something that tastes and looks exactly like the original, you still can't call it Specific Thing, because the process, which is an integral part of the product, wasn't followed.
Think of it like a trademark. You can't create some brown sugary stuff and sell it as Coca-Cola - even if it tastes EXACTLY like it does.
Nothing about this is about profit or economic viability; it's not even a small part of the equation. The purpose is to preserve cultural heritage and not dilute it with shitty imitations calling themselves something they are not.
Garlef: I think you could also argue that LLMs in coding are actually just a novel approach to code reuse: at the microscopic level, they excel at replicating known patterns in a new context. (Many small dependencies can be avoided by letting the LLM just re-implement the desired behavior; with tradeoffs, of course.)
The issue is orchestrating this local reuse into a coherent global codebase.
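As a toy illustration of that microscopic reuse (my example, not the commenter's): instead of pulling in a left-pad-style dependency, an LLM will happily inline the known pattern.

```python
# Hypothetical example: rather than adding a dependency for a trivial
# utility, an LLM can re-implement the well-known pattern inline.
def left_pad(text: str, width: int, fill: str = " ") -> str:
    """Pad `text` on the left with `fill` until it is `width` characters long."""
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    return text if len(text) >= width else fill * (width - len(text)) + text

assert left_pad("42", 5, "0") == "00042"
```

The tradeoff the comment alludes to is real: this duplicates what `str.rjust` already does, and the "coherent global codebase" problem shows up as five slightly different inlined copies of the same helper.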
tanjtanjtanj: While there are many roguelikes with procedural generation, I think the most popular ones do not use it. Slay the Spire, Risk of Rain 2, Hades 1/2, BoE, etc. are all handmade stages in a random order with randomized player powers, rather than procedurally generated.
delaminator: Computer Generated Imagery.
FpUser: >"well perhaps except the function vs object one"If this is what I think it is, I consider it very lopsided view, failure to recognize what model fits for what case and looking at everything from a hammer point of view
larsiusprime: Also RE: procgen, one of the hit games right now, Mewgenics, is doing super well and uses it extensively. Obviously it's old school procgen that makes use of tons of authored content, but it's still procgen.
mathgradthrow: localization? Why would you oppose LLMs doing localization?
JadeNB: In case they hallucinate? There's no point having content in a wide variety of languages if it's unpredictably different from the original-language content.
mexicocitinluez: Before LLMs, companies and people were forced to use one-size-fits-all solutions; now they can build custom, bespoke software that fits their needs.
See how it's a matter of what you're looking at?
not_the_fda: They don't care until the whole thing collapses in on itself from the technical debt. Then they have a surprised Pikachu face when it takes an insane amount of effort to add a simple feature.
SirMaster: > No one cares about how the code is written.
People definitely do care. Nobody wants vibe-coded buggy slop code for their game. They want well designed and optimized code that runs the game smoothly on reasonable hardware and without a bunch of bugs.
SkyBelow: > if the end result is objectively the same
The issue is how this will be handled in law. Can the law define this in a way that is not overly strict or overly permissive? The current attempt is effectively the law doing this, but with an overly strict approach to what counts as 'objectively the same', judging the process and not purely the outcome. Would it be possible to make the law's definition more permissive, to focus only on the product produced, without accidentally becoming overly permissive?
raincole: Germans even eat raw pork. Plus it has nothing to do with the parent comment.
BoredomIsFun: > Just because it's « boilerplate », that does not mean it's worthless
Of course it is not. It is needed, by definition.
> or doesn't carry novelty.
Of course it does not. Why would a piece of code that simply fills a large C structure with constants be innovative?
> Every project is different and not everything can be made from a generic out-of-shelf product
Tangential to the use of LLMs for boring boilerplate stuff.
simonask: This is magical thinking. LLMs are physically incapable of generating something "well thought out", because they are physically incapable of thinking.
mexicocitinluez: > Starfield in all its grandeur is pretty boring.
And yet No Man's Sky is massively popular.
> Any software engineer worth their salt knows intimately that more immediate results is usually at the expense of long term sustainability.
And any software engineer worth their salt realizes there are 100s if not 1000s of problems to be solved, and trying to paint a broad picture of development is naive. You have only seen 1% (at best) of the current software development field, and yet you're confidently saying that a tool that is being used by a large part of it isn't actually useful. You'd have to have a massive ego to be able to categorically tell thousands of other people that what they're doing is both wrong and not useful and that the things they are seeing aren't actually true.
runarberg: I'm not sure your logic is sound. It sounds like you are insisting on some nuance which simply isn't there. LLMs generate unmaintainable slop, which is extremely difficult to reason about, uses wrong abstractions, violates DRY, violates cohesion, etc.
The industry has known how to reuse code for two decades now (npm was released 16 years ago; pip 18 years ago). Using LLMs for code reuse is a step in the wrong direction, at least if you care about maintaining your code.
phyzix5761: At some point, if most people lose their jobs, you have no market to sell your services to. So either new jobs have to be created in order to keep the capitalism machine running, or you have to provide for the needs of every human being from whatever you're doing with your AI. Otherwise, a lot of hungry people revolt and you have violence against these businesses.
I think new jobs will be created, because AI is always limited by hardware and its current capabilities. Businesses, in order to compete, want to do things their competitors aren't currently doing. Those business needs always go beyond the current technological capabilities until the tech catches up, and then they lather, rinse, repeat.
llm_nerd: > the Luddites were right
The Luddites were right in the sense that the social order had changed in a negative way. In a careless way.
In the same way, we look at an America that has effectively put a plutocracy in absolute control of the country at the same time that there is going to be a massive devaluing of labour. Elon Musk likes to talk about the coming golden age of automation, but I hope Americans realize that unless they happen to be a billionaire, they will enjoy zero fruits of that advance. Quite the contrary: plump yourself up to be Soylent Green, because it turns out that giving a bunch of psychopaths/sociopaths absolute control of government isn't good for the average person.
> One of the claims of the Luddites is that quality would go down
Then people will choose the better quality items and it will be easy for them to compete? Right?
roesel: No one wants _buggy slop code_ for their game, but ultimately no one cares whether it has been hand crafted or vibe-coded.
As proof, ask yourself which of the following two options you would prefer:
1. buggy code that was hand-written
2. optimized code that was vibe-coded
I'll bet most people will choose 2.
SirMaster: I've never seen something as complex as a video game vibe-coded that was actually well optimized, especially when the person doing the prompting is not a software developer.
So I personally do care, and I am someone, so the answer is not no one.
idopmstuff: It's also worth noting that the "our" in that sentence is just SWEs, who are a pretty small group in the grand scheme of things. I recognize that's a lot of HN, but it still bears considering in terms of the broader impact outside of that group.
I'm a small business owner, and AI has drastically increased my agency. I can do so much more - I've built so many internal tools and automated so many processes that allow me to spend my time on things I care about (both within the business but also spending time with my kids).
It is, fortunately and unfortunately, the nature of a lot of technology to disempower some people while making lives better for others. The internet disempowered librarians.
layer8: The problems with leftpad are a problem with the NPM ecosystem, not with code reuse as such. There are other dependency ecosystems that don't have these problems.
amiga386: > Players only object against AI art assets. And only when they're painfully obvious.
Restaurant-goers only object to you spitting in their food if it's painfully obvious (i.e. they see you do it, or they taste it).
Players are buying your art. They are valuing it based on how you say you made it. They came down hard on asset-flipping shovelware before the rise of AI (where someone else made the art and you just shoved it together... and the combination didn't add up to much) and they come down hard on AI slop today, especially if you don't disclose it and you get caught.
adamtaylor_13: This argument falls apart on the very first bullet point. The author claims:
> If someone produces a painting in the style of Van Gogh, and passes it off as being made by Van Gogh, by putting his signature on it, that painting is a forgery.
Which is true. But the implication that follows is false. Van Gogh's artwork is valuable specifically because of his identity. I find much of his artwork particularly hideous. That's fine! Someone else finds value in it specifically because of who made it.
This metaphor doesn't appear to apply to code at all. The entire value of code is what it does, not who wrote it.
Honestly, I stopped reading after the first bullet point, because these types of arguments feel lazy and the attitude of the people writing these articles frequently comes across as holier-than-thou. You don't like LLMs? Great, don't use them. Using Van Gogh's paintbrush doesn't mean I'm making a forgery. I'm just painting, my friend.
uriahlight: Made with AI. Ironic, isn't it, considering the article?
uriahlight: Ironic article considering this website's 3D parallax effect with music and animation was all made with AI.
mikkupikku: If you haven't heard of the modern roguelike genre, you've probably been living under a rock; it seems like every other game these days at least calls itself such. Usually the resemblance to Rogue is so remote that it strains the meaning of the term, but procedural generation of levels is almost universal in this loosely defined genre.
wolvesechoes: > It's also worth noting that the "our" in that sentence is just SWEs
It isn't; it's just a matter of seeing ahead of the curve. Delegating stuff to AI and agents by necessity leads to atrophy of the skills that are being delegated. Using AI to write code leads to reduced capability to write code (among people). Using AI for decision-making reduces capability for making decisions. Using AI for math reduces capability for doing math. Using AI to formulate opinions reduces capability to formulate opinions. Using AI to write summaries reduces capability to summarize. And so on. And, by nature, less capability means less agency.
LetsGetTechnicl: > It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
Lmfao, LLMs can barely count rows in a spreadsheet accurately. This is just batshit crazy.
Ensorceled: Of course it is.
Almondsetat: Then it is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you cannot substantiate your claims.
vips7L: And yet many of us would prefer to be in a field instead of behind a laptop screen all day.
Jooror: Irrespective of the domain you specifically listed in 3 (game dev is, believe it or not, one of the "more complex" domains), you have completely missed the point.
> 2. They're coincidentally good at things I'm good at, and shit at things I don't understand.
This may well be! In a perfect world this would be balanced with the knowledge that maybe "the things you're good at" are objectively* easier than "things you don't understand". Speaking for myself, I'm proficient in many more easy things than hard things.
*inasmuch as anything can be "objectively" easier
aeon_ai: > you won't be rewarded magic beans for churning out crappy dashboards any more
The days of being treated like a wizard for making buttons and widgets hit an endpoint were good while they lasted.
pixl97: > all the inertia and inefficiencies of the organization
Honestly, you can probably use this as a means to measure the amount of regulation, graft, and corruption in an economy. In a wild-west free-for-all, code velocity would likely be very fast, with software popping up, changing rapidly, and some quickly disappearing. But in an economy that doesn't care what you make, who you pay or what laws you buy is far more important.
3371: Sharing my 2 cents. In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script player engine. The language spec and design is about 95% authored by me up to this point; I have the LLMs work on the 2nd layer (the implementation specs/guidelines) and the 3rd layer (the concrete C# implementation).
Since it's a new language, I consider it a somewhat novel task for LLMs (at least, not boilerplate stuff like an HTTP API or a CRUD service). I'd say these LLMs have been very helpful - you can tell they sometimes get confused and have trouble complying with the foreign language spec and design - but they are mostly smart enough to carry out the objectives, and they get better and better once the project is on track and has plenty of files/resources to read and reference.
And I'd also say "prompt better" is an important factor, just much more nuanced/complicated. I started with 0 experience with LLM agents and have learned a lot about how to tame them, and developed a protocol for collaborating with agents. This all comes from countless trial and error, but in the end it gets boiled down to "prompt better".
Jooror: I wonder if my intuition here is correct; I would posit that "PL implementation" is a far more popular and well-explored field than it seems. How many toy/small/labor-of-love langs make it to Show HN? How many more simply don't?
I've never personally caught the language implementation bug. I appreciate your perspective here.
falcor84: Which of the two Ls?
pixl97: > the only danger is people...
Simply put, no it is not. But on the reverse, the first danger with AI is people.
Over the longer term it will look like this: the rich 'win' the world by using AI to enslave the rest of mankind and claim ownership over everything. This will suck and a lot of us will die. The problem is this doesn't solve the greed that caused the problem in the first place. The world will still be limited in resources, which will end with the rich in a dick-measuring contest, and to win that contest they will put more and more power in AI as they connive and fight each other. Eventually the AI has enough power that it kills us all, intentionally or not.
We'll achieve nearly unlimited capability long before we solve the problem of unlimited greed, and that will spell our end.
nerdyadventurer: The economy is going to collapse with the war anyway. (https://www.youtube.com/watch?v=4Ql24Z8SIeE&t=247s)
3371: I totally agree, and I was fully aware of how commonly people make languages for fun when I replied. But I feel like the rationale still stands: considering LLMs' nature, common boilerplate tasks are easy because they can kind of just "decompress" from training data. But for a new language design, unless the language is almost identical to some other captured by the model, "decompression" would just fail.
raincole: > It's too bad we didn't have a period where these things were great teachers but didn't attempt to write code
The period is now. Just add "be a great teacher but don't attempt to write code" to the prompt.
(Yes, it's a teacher who gets things wrong from time to time. You still need to refer to the source and ground truth, just like when you're taught by a human teacher.)
f311a: No, expectations have shifted. In a lot of companies, managers expect you to use LLMs to produce more features faster.
slibhb: > What matters in the end is that this tech is not to empower a common person (although it could).
How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person". It lowers barriers!
A lot of the anti-AI sentiment on HN concerns people losing their jobs. I don't think this will happen: programmers who know what they're doing are going to be way, way more effective at using AIs to generate code than others. But even if it is true and we do see job losses in tech: are software devs really "in a precarious position"? Do they really qualify as "those that have little"? Seems like a fantasy to me. Computer programmers have done great over the past 30 years.
More broadly, anti-AI sentiment comes from people who dislike change. It's hard to argue someone out of that position. You're allowed to prefer stasis. But the world moves on, and I think it's best to remain optimistic, keep an open mind, and make the most of it.
nerdyadventurer: Problem is, these lies can be sophisticated, which can eat up hours of our time.
vips7L: > I don't think this will happen
Block just laid off 40% of their company citing AI.
pixl97: Standardization and regulation forced a lot of the physical industries to change as the industrial revolution progressed. Before that point, standards didn't really exist, especially over any large area, and technological progress suffered because of it. After that point, solutions became much closer to drag-and-drop than what they were before.
The question is: at what point of progress will this benefit the software industry?
xerox13ster: That's not the same procedural generation as GPT or diffusion, and you know it. It's not even in the same ballpark as Elite, NMS, Terraria, or Minecraft. The levels are all hand drawn, not generated by an algorithm, even if they're shuffled.
It's like the difference between sudoku/crosswords and Conway's Game of Life.
ModernMech: What happens when they decide it's a national security threat and an act of domestic terrorism to use AI to undermine commercial dependencies? We're all acting like AI isn't being invented within the context of and used by a fascist regime.
dec0dedab0de: No, it's simply untrue. Players only object against AI art assets. And only when they're painfully obvious. No one cares about how the code is written.
This reminded me of a conversation about AI I had with an artist last year. She was furious and cursing and saying how awful it is for stealing from artists, but then admitted she uses it for writing descriptions and marketing posts to sell her art.
raincole: Sinix even explicitly says that AI is an IP theft machine, but that it's okay to use AI to generate a 360 rotation video to market your 2D works [0]. To summarize the era we live in: my AI usage is justified, but all the other people are generating slop.
[0]: https://www.youtube.com/watch?v=z8fFM6kjZUk
[1]: Disclaimer: I deeply respect Sinix as an art educator. If it weren't for him I wouldn't have learnt digital painting.
Aurornis: You could eat raw eggs in most modern countries and be mostly fine. It's not as uncommon as you would think. There are many drink recipes with raw eggs as an ingredient. You just happened to be exposed to it in Japan.
Eating raw chicken is risky even in Japan. There are cultures that eat raw chicken, pork, and other meat products by choice, but it's always a risk. There are outbreaks of serious foodborne illness in Japan from raw chicken: https://pubmed.ncbi.nlm.nih.gov/18406474/
margalabargala: This is one of those things where people who don't know how to use tools think they're bad, like people who would write whole sentences into search engines in the 90s.
LLMs are bad at counting the number of rows in a spreadsheet. LLMs are great at "write a Python script that counts the number of rows in this spreadsheet".
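For concreteness, a minimal sketch of the kind of script that prompt would produce (the filename and header handling are placeholders, not from the comment):

```python
import csv

# Count the data rows in a CSV export of the spreadsheet.
# "data.csv" is a placeholder path; adjust it, and drop the
# header skip if the file has no header row.
with open("data.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader, None)                  # skip the header row, if present
    row_count = sum(1 for _ in reader)

print(f"{row_count} data rows")
```

The point being: the model writes and you run deterministic code, instead of asking the model to do the counting itself.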
fwlr: My apologies if this is a joke I'm not understanding, but as far as I can tell with the Wayback Machine, this animation predates not just coding/generative AI, but the Attention paper and the founding of OpenAI too:
https://web.archive.org/web/20150314221334/http://acko.net/
porridgeraisin: Oh sure the quality is extremely unreliable and I am not a fan of its style of coding either. Requires quite a bit of hand holding. I am just saying that LLM technology opens up another dimension of code reuse.
pixl97: > but does not have the same soul.
Define soul. How about a legal/scientific description that accurately covers all bases?
The funny jokes thing is funny too: if someone told you a joke and you thought it was funny, then they told you it was from an LLM, would it stop being funny?
slibhb: Tech companies have been laying off employees for a while now. I think it's mostly due to pandemic overhiring and higher interest rates but I suppose we'll see.
vips7L: I agree that AI was not the _actual_ reason; however, it did allow them to do massive layoffs without admitting they are doing poorly and taking a massive hit to their stock price.
oreally: > I would be curious how can this be applied to a human? Should we also cite all the courses, articles that we have read on a topic when we write code?
Yea, this is the kind of BS and counter-productiveness that irrational radicals try to push the crowd towards. The idea that one owns your observations of their work and can collect rent on it is absurd.
pixl97: The Right to Read is a great story showing just how greedy and stupid people would get if we allow them. Society and culture are large-scale theft. Imagine having to pay to learn about the idea of fire, or to use the alphabet. Simply put, humanity would never have progressed much farther than animals. And yet, as the complexity of our ideas increases, suddenly many humans start thinking that owning ideas, many forever, is just great and will not have any negative ramifications.
Izkata: > Spore is well acclaimed.
Its creature creator was, but as a game it was always mediocre to bad. They had to drop something like 90% of the features and extremely dumb down the stages to get it released.
It was also what introduced a lot of us to SecuROM DRM - it bricked my laptop in the middle of a semester.
tovej: I am prompting better. It doesn't help the LLM be more productive than me on a regular Tuesday. Sure, I can get the task done by delegating everything to an agentic workflow, but it just adds a bunch of useless overhead to my work.
I still need to know what the code does at the end of the day, so I can document it and reason about it. If I write the code myself, it's easy. If an LLM does it, it's a chore. And even without those concerns, the LLM is still slower than me. Unless it's trivial boilerplate, in which case other tools serve me better and cheaper.
I'll note that a compiler is one of the most well understood and implemented software projects, much of it open source, which means the LLM has a lot of prior art that it can copy.
bee_rider: It's a different type of thing, really. I like rogue-likes because they are a… pretty basic… story about my character, rather than a perfectly crafted story about somebody else's.
Even when I play a game like Expedition 33 or Elden Ring, my brain (for whatever reason) makes a solid split between the cutscene versions of the characters and the gameplay versions. I mean, in some games the gameplay character is a wandering murderer, while the cutscene characters have all sorts of moral compunctions about killing the big-bad. They are clearly different dudes.
dkersten: Almost every 3D game in the past 20 years uses procedural foliage generation (e.g. SpeedTree and similar). Many use procedural terrain painting. Many use tools like Houdini.
So procedural generation is extremely prevalent in most AAA games and has been for a long time.
tovej: That is not at all what I said; please read my post more carefully before speculating. I am talking about using LLMs in general, not for boilerplate specifically.
My point about boilerplate is that I have tools that solve this for me already, and do it in a more predictable way.
Marha01: It is magical thinking to claim that LLMs are definitely physically incapable of thinking. You don't know that. No one knows that, since such large neural networks are opaque black boxes that resist interpretation, and we don't really know how they function internally.
You are just repeating that because you read it somewhere else before. Like a stochastic parrot. Quite ironic. ;)
remich: Yeah and then when that library stops being maintained or gets taken over, everything breaks.
tovej: I can assure you I give LLMs all the information they need. Including hints to what kind of solution to use. They still fail.
qsera: That is why I said it is hard to spot a soulless painting.
> if someone told you a joke and you thought it was funny, then they told you it was from an LLM, would it stop being funny.
No. It wouldn't. But they can't generate funny jokes. I would be really happy if I am wrong, and it is possible to laugh all day by reading endless jokes from an LLM...
tptacek: This argument can be used, and has been used, about every innovation in automation since the dawn of the industrial revolution.
ducttapecrown: It is not the technology that sucks ever more money out of the populace, it's the people at the top!
Retr0id: > Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
With the notable exception of Minecraft terrain generation, which I think most would say was successful in what it set out to achieve.
tylervigen: There are tons of examples of this. Heck, even Tetris has procedural generation. I think this argument was a mistake.
visarga: Stochastic Parrots hits back! Another author thinks LLMs only reproduce. If that were so, we could have used `cp` for free.
kevstev: I am kind of lost on this whole scarf-through-a-ring thing as well. Isn't this just a function of the thickness of the scarf? My wife went through a scarf phase about a decade ago, and I am pretty sure a Pucci scarf could easily fit through a typical-sized ring meant to go on a finger.
It's entirely possible that old manufacturing methods produced things that are different, but I would be entirely surprised if they are entirely better overall. If the defining metric for scarves is how well they fit through rings, I am sure they would all be made so you could fit 3 through a ring, if people were willing to pony up for that. If you look at a lot of old clothes, they are generally a lot heavier, but I am not sure I would really want to wear them; they look quite uncomfortable. I also think it's wonderful that today you can get a set of clothes for a few hours of minimum-wage work, while in the past this was a major investment. You can also choose to pay thousands for a shirt if you wish, but from 10 feet away it's going to be hard to tell the difference.
AnotherGoodName: Games with AI art assets are some of the most popular right now in any case, Arc Raiders being a great example where some of the voice assets are AI generated.
Be careful of reading any viewpoint on the internet. Apparently no one used Facebook or Instagram and everyone boycotts anything with AI in it.
In reality I think you'd be foolish not to make use of the tools available. Arc Raiders did the right thing by completely ignoring those sorts of comments. There may be a market for 100% organic video games, but there's also a market for mainstream 'uses every AI tool available' type of games.
bluefirebrand: LLMs in coding are like code reuse in the same way your neighbor hotwiring the car you parked in your driveway is just borrowing it.
You didn't park your car in your driveway so anyone could take it to get groceries.
scuff3d: Literally the most useful LLMs have been to me is in dealing with the pile of corporate bullshit we have to put up with day to day.
For example, we have to plan 8 to 12 sprints in advance. Full acceptance criteria, story points, and slotted into a sprint with points balanced across the team. Of course this is utterly useless - anything past the second sprint is going to be wrong - but they want it done. LLMs got me through that in a few hours instead of a few days.
mikkupikku: I have definitely considered the possibility that I'm simply good at easy things and the LLM is good at easy things, and that hard things are hard for both of us. And there certainly must be some element of that going on, but I keep noticing that different people get different quality results for the same kind of problems, and it seems to line up with how good they themselves would be at that task. If you know the problem space well, you can describe the problem (and approaches to it) with a precision that people unfamiliar with the problem space will struggle with.
I think you can observe this in action by making vague requests, seeing how it does, then rolling back that work and making a more precise request using relevant jargon, and comparing the results. For example, I asked Claude to make a system that recommends files with similar tags. It gave me a recommender that just orders files by how many tags they had in common with the query file. This is the kind of solution that somebody may think up quickly, but it doesn't actually work great in practice. Then I reverted all of that and instead specified that it should use a vector space model with cosine similarity. It did pretty well, but there was something subtly off. That is however about the limit of my expertise in this direction, so I tabbed over to a session with ChatGPT and discussed the problem on a high level for about 20 minutes, then asked ChatGPT to write up a single terse, technically precise paragraph describing the problem. I told ChatGPT to use no bullet points and write no pseudocode, telling it the coding agent was already an expert in the codebase, so let it worry about the coding. I gave that paragraph to Claude and suddenly it clicked; it banged out a working solution without any drama. So I conclude the quality of the prompting determined the quality of the results.
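For concreteness, a minimal sketch of the vector-space approach the comment names: tag sets treated as binary vectors, ranked by cosine similarity. The file names and tags are invented, and a real recommender would typically weight tags (e.g. tf-idf) rather than use raw binary vectors.

```python
import math

def cosine_similarity(a: set[str], b: set[str]) -> float:
    """Cosine similarity of two tag sets treated as binary vectors:
    |A & B| / sqrt(|A| * |B|)."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# Hypothetical files and tags, just to show the ranking behaviour.
files = {
    "beach.jpg":  {"outdoor", "summer", "water"},
    "lake.jpg":   {"outdoor", "water", "forest"},
    "office.png": {"indoor", "work"},
}

query = files["beach.jpg"]
ranked = sorted(files, key=lambda name: cosine_similarity(query, files[name]), reverse=True)
print(ranked)  # beach.jpg first (identical), then lake.jpg, then office.png
```

Unlike raw common-tag counting, the sqrt term normalizes away file size, so heavily tagged files don't dominate every recommendation list.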
WarmWash: Everyone is in it for themselves. The world makes waaay more sense when you really internalize that. It doesn't necessarily mean people are selfish - large groups often have aligned interests - but when an individual's interest alignment changes, their group membership almost always changes too.
I'd bet she has a bunch of pirated content and anti-copyright remarks from the golden age of piracy as well.
tovej: That's not true. Most people are interested in fostering a community, even when it means sacrifice.
There _have_ however been studies showing that this attitude is prevalent in (neoclassical) economics students and others exposed to (neoclassical) economic thinking: https://www.sciencedirect.com/science/article/abs/pii/S22148...
It's very effective propaganda. And we have a good example of it here. (Not saying you're spreading it maliciously, but you are spreading it.)
moffkalast: Have you ever tried it? I did, and sure as hell wouldn't.
Nobody's really stopping you from studying agriculture and working in that proverbial field either.
recursive: There are many orders of magnitude more songs based on four chord loops than there are hit songs. Some people say it's easy to make a hit song. But there are a lot more people that want to do it than those that succeed. So I say no. Your take is reductive, and there is necessarily more to it.
recursive: You talking about the billion dollar model training efforts done in secret?
noemit: A full-size wool scarf cannot go through a ring. You are probably thinking of a silk scarf. I have a wool scarf next to me from Kashmir and it went down about 25 cm; the full scarf is a bit over a meter.
Looked up Pucci - looks like a designer that makes silk scarves. Totally different material that is thinner. Spinning wool very thin without breaking it, and weaving it to be fine without breaking, is the craft that was lost.
moffkalast: It has been automated as much as possible; boilerplate is the result of people being terrible at designing programming languages. The whole idea of increasingly higher-level ones was just that all along. Is there any point in writing a billion ADDC commands in assembly by hand when it takes one line in Python?
LLM-type systems are the final level of abstraction, lifting it up to literal natural language. Any dev with decent self-awareness would admit they were just copying shit from Stack Overflow half the time before LLMs anyway; high-level languages and libraries just streamline that process with canonical implementations.
The value we provide is turning "person with problem" -> "person with solution to said problem" with as few caveats as possible. A programmer is that arrow; we solve problems. The more code we have to write to solve a problem, the worse we are at our job.
spacecadet: This is entirely assumptions about a future that has not happened. I've worked in "AI" for 20 years, through 2 winters, and run an alignment shop and AIRT... The problem is people. People will use the problem as a scapegoat.
dannersy: No Man's Sky got better as they were more intentional with their content. The game has more substance, and a lot of that had to be added by hand. It is dropped in procedurally, but they had to touch it up, manually, to make it interesting. Let's not revise history.
I don't think it has anything to do with ego. There are studies on the topic of AI and productivity, and I assume we have a way to go before we can say anything concretely. Software workflows permeate the industry you're in. You're putting words in my mouth; I said nothing about what people are doing being wrong or not useful. I said the claim that generative AI is making engineers more productive is an unfounded one. What code you shit out isn't where the work starts or ends. Using expedient solutions and having to face potentially more work in the future isn't even a claim specific to software; I can make that claim about life.
You need to evaluate what you read rather than putting your own twist on what I've said.
alexpotato: This is why Sid Meier's Pirates! [0] remains such a great game. It was really a combination of mini-games:
- you got to steer a ship (or fleet of ships) around the Caribbean
- ship-to-ship combat
- fencing
- dancing (with the Governors' daughters)
- trading (from port to port, or with captured goods)
- side quests
Each time I played it with my oldest, it felt like a brand new game.
[0] https://en.wikipedia.org/wiki/Sid_Meier%27s_Pirates!
GeoAtreides: What I find hard to digest is not being able to pay rent, and dying of old age in a ditch in poverty.
tovej: As someone who has implemented a fair few DSLs: lexical and syntactic analysis is pretty much the same anywhere, and the structure of the lexer/parser does not really depend on the grammar of the language. And even semantic analysis is at least very similar in most PLs, even DSLs, assuming you're using concepts like variables and functions.
When it comes to codegen / interpreter runtimes, things start to diverge. But this also depends on the use case. More often than not a DSL is a one-to-one map to an existing language, with syntactic sugar on top.
I'm curious, what's the DSL you're working on?
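To make the "lexer structure doesn't depend on the grammar" point concrete, here is a minimal table-driven tokenizer sketch; the token set is invented for the example, and swapping the table is the only language-specific part.

```python
import re

# The shape of this table-driven lexer stays the same for almost any
# language; only the token table changes.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source: str):
    """Yield (kind, text) pairs, dropping whitespace."""
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())

print(list(tokenize("x = 40 + 2")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```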
3371: It's pretty much WIP, but if you are interested, here is the repo: https://github.com/No3371/zoh
The points you brought up are all valid. Lexers, parsers and general concepts are language-agnostic, yes, and I wasn't talking about how the implementation is different. When I said "you can tell they sometimes get confused and have trouble complying with the foreign language spec and design", I was thinking about the many times they just fail to write in my language even when provided with full language specs. LLMs don't "think", and boilerplate is easy for LLMs because highly similar syntax structures, even identical code, exist in their training data; they are kind of just copying stuff. But that doesn't work as well when they are tasked to write in an original language that is... too creative.
mexicocitinluez: You said:
> LLMs saving engineers and developers time is an unfounded claim
By whom exactly? If I say it saves me time, and another developer says the same, and so on, then it is categorically not unfounded. In fact, it's the opposite. You've completely missed the point if you don't understand how it comes across to tell other people that their own experience in such a large field is "unfounded" simply because it doesn't line up with your experience.
> we have a way to go before we can say anything concretely
No, YOU do. It's quite apparent to me how it can save time in the myriad of things I need to perform as a software developer (and have been doing).
angry_octet: A set of clothing used to cost a month's wages. Yearning for the pre-industrialised era is an unintended paean to aristocracy, whitewashed by fiction and movies to be clean and virtuous.
At the moment, a single line of production code costs hundreds of dollars. I'm not talking about the bedrock of technology, like compilers, MySQL, the Linux kernel, which represent hundreds of billions in value. I'm talking about the shitty code that powers Salesforce and ERP integration, Drupal modules, intranet customisation, insurance company call centre agent policy workflows, the thrice-cursed apps that ship with cheap Chinese Android phones, the putrid code to analyse our shopping loyalty card purchases and turn them into business insights.
All that code is shit, and it costs a fortune. Meanwhile regular people have no code. Even I run my life on almost no code; I have to use SaaS (like Gmail and Docs). If I want a financial analysis to be understood by my family, I don't code it in Python, I use Excel. I use whatever automation comes in my car. But once simple code becomes a process of thinking about what you want, rather than knowing esoterica like calling conventions, allocation lifetimes, etc., then we will have made custom software accessible to billions of people, people who are clever and industrious.
So stay in your cathedral and illuminate your manuscript if you like - there is a need for excellent code, and tooling like Lean that can define what correct means - but let the people eat.
miyoji: > I think it's mostly due to pandemic overhiring and higher interest rates
It's not because of pandemic overhiring; if that were true, the layoffs in 2021-2022 would have handled it. It's 2026. The people getting laid off (on average) haven't worked at these companies since before the pandemic; they got hired in ~2023 (average tenure at a tech company is ~3 years).
It's not because of AI either. Nobody is replacing jobs with AI; AI can't do anyone's job.
It's not because of interest rates. People hired like crazy when interest rates were this high in the aughts.
It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine. It's not true, but the downsides come later and the cash influx comes now, so they're doing it anyway.
glitch13: > It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine.
I agreed with you up to this point. Twitter largely operated in the red for its entire existence prior to his "restructuring" to make it leaner and profitable. In my opinion, Twitter went to shit when the incentive for creating engagement switched from gaining social capital to gaining... erm... actual capital. The laissez-faire attitude about allowing fairly terrible behavior on there gave it a PR black eye that probably didn't help either in the eyes of advertisers.
If I had to guess what happened with Block (and that's what we're all doing, guessing): a CEO's job is to make the line go up, and saying you introduced tools to increase productivity with half the staff (especially if you're overstaffed) seems to me a pretty easy way to do that. I saw someone on here refer to it as "Vibe CEOing", which I think is pretty on point. Again, just my opinion/guess.
idopmstuff: I'll grant that it does extend beyond SWEs, but whether AI atrophies skills is entirely up to the user.
I used to use a bookkeeper, but I got Claude a QuickBooks API key and have had it doing my books since then. I give it the same inputs and it generates all the various journal entries, etc. that I need. The difference between using it and my bookkeeper is that I can ask it all kinds of questions about why it's doing things and how bookkeeping conventions work. It's much better at explaining than my bookkeeper and also doesn't charge me by the hour to answer. I've learned more about bookkeeping in the past month than in my entire life prior: very much the opposite of skill atrophy.
Claude does a bunch of low-skill tasks in my business, like copying numbers from reports in different systems into a centralized Google Sheet. My muscle memory for running reports and pulling out the info I want has certainly atrophied, but who cares? It was a skill I used because I needed the outcome, not because the skill was useful.
You say that using AI reduces all these skills as though that's an unavoidable outcome over which people have no control, but it's not. You can mindlessly hand tasks off to AI, or you can engage with it as an expert and learn something. In many cases the former is fine. Before AI ever existed, you saw the same thing as people progressed in their careers. The investment banking analyst gets promoted a few times and suddenly her skill at making slide decks has atrophied, because she's delegating that to analysts. That's a desirable outcome, not a tragedy.
Less capability doesn't necessarily mean less agency. If you choose to delegate a task you don't want to do so you can focus on other things, then you are becoming less capable at that skill precisely because you are exercising agency.
Now, in fairness, I get that I am very lucky in that I have full control over when and how I use AI, while others are going to be forced to use it in order to keep up with peers. But that's the way technology has always been: people who decided they didn't want to move from a typewriter to a word processor couldn't keep up and got left behind. The world changes, and we're forced to adapt to it. You can't go back, but within the current technological paradigm there remains plenty of agency to be had.
oxag3n: Software engineers are anything but Luddites.
This labeling tactic has become pretty common and tries to build a narrative that software engineers are going away: artisan coders, craftsmen, and so on.
First and foremost, wool craftsmen are not engineers (which doesn't make their work less valuable).
Second, most software engineers, especially outside FAANG-like companies, don't engineer shit. My spouse worked at a large US telecom company, and employees with the "software engineer" title were doing mechanical tasks following scripts, like a daily system reload: run the script, verify status, open a ticket for a sub-contractor if anything is wrong, support the contractor via the ticket system until it's resolved. To be fair, two of my close family members work in FAANG and say the COVID over-hire created a similar landscape there too.
My point is, creating CRUD internal tools was not engineering to begin with; it was a craft, matching most features of a craft: small-scale, bespoke work, hands-on practice, tacit knowledge, apprenticeship-like learning (even if it's SO or a tutorial), iterative refinement, tool mastery, adaptation during the build.
Throaway1985123: Spore was not well-acclaimed precisely because it failed to live up to its promises as a world-builder. Only the 1st two stages were any good.
sowbug: This is just the TTP (Time to Penis) metric[1] all over again, isn't it?[1] https://knowyourmeme.com/sensitive/memes/time-to-penis-ttp
Ensorceled: Honest question: are you enjoying this? I looked at your comment history and you don't seem like a troll. What is going on right now?
rybosworld: So how do you reconcile your anecdotal experience with the accounts of public figures in the industry who we can all agree are at least pretty good engineers? I think that's important because, if we want to stay ~anonymous, neither you nor I can verify the reputation of the other (and therefore, the other's relative mastery of the "Craft").
Here are some well-known names who are now saying they regularly use LLMs for development. For many of these folks, that wasn't true 1-2 years ago:
- Donald Knuth: https://www-cs-faculty.stanford.edu/%7Eknuth/papers/claude-c...
- Linus Torvalds: https://arstechnica.com/ai/2026/01/hobby-github-repo-shows-l...
- John Carmack: https://x.com/ID_AA_Carmack/status/1909311174845329874
My point being: some random guy on the internet says LLMs have never been useful for them and only output garbage, vs. some of the best engineers in the field using the same tools and saying the exact opposite of what you are.
tovej: You are overstating those sources. That alone makes me doubt that you're engaging in this discussion in good faith.
I read them all, and in none of them do any of the three say that they "regularly use LLMs for development".
Carmack is speculating about how the technology will develop, and Carmack has a vested interest in AI, so I would not put any value on this as an "engineer's opinion".
Torvalds has vibe coded one visualizer for a hobby project. That's within what I might use to test out LLM output: simple, inconsequential, contained. There's no indication in that article that Linus is using LLMs for any serious development work.
Knuth is reporting on somebody else using LLMs for mathematical proofs. The domain of mathematical proofs is much more suitable for LLM work, because the LLM can be guided by checking the correctness of proofs. And Knuth himself only used the partial proof sent in by someone else as inspiration for a handcrafted proof.
I don't mind arguing this case with you, but please don't fabricate facts. That's dishonest.
bendmorris: > Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:
This is a huge overstatement that isn't supported by your own links.
- Donald Knuth: the link is him acknowledging someone else solved one of his open problems with Claude. Quote: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."
- Linus Torvalds: used it to write a tool in Python because "I know more about analog filters—and that’s not saying much—than I do about python" and he doesn't care to learn. He's using it as a copy-paste replacement, not to write the kernel.
- John Carmack: he's literally just opining on what he thinks will happen in the future.
pixl97: Dinosaurs lived for 100 million years, before they didn't. And walls between France and Germany were effective, until they weren't.
Hell, even the "people are the problem" framing doesn't work well for things like Moloch problems. Which people? The problem can no longer be pinned on any individual; it demands a super-organizational response. Once you have an issue that is abstracted from its base components, any agent capable of parsing the abstraction can be part of influencing it and becoming part of Moloch.
spacecadet: I'm familiar. Our group employs game theory in our research... In practice, if you are at the point of blame, yes, you have failed.
angry_octet: Rogue-like games use the simplest randomisation to generate the next room, and I burnt hundreds of hours in Mines of Moria before I forced myself to quit.
Now with an LLM I could have AD&D-like campaigns, with photorealistic renders of my character and the NPCs. I could give it the text of an AD&D campaign as a DM and have it generate walking and talking NPCs.
The art of those great fantasy artists is definitely being stolen in generated images, and application of VLMs should require payment into some sort of art funding pool. But modern artists could well profit by being the intermediary between user and VLM, crafting prompts, both visual and textual, to give a consistent look and feel to a game.
The essay author is smoking crack.
tadfisher: Artists want to create. They do not want to tweak prompts and click "Generate" repeatedly until the output matches their vision. I would find this maddening.
But this wouldn't make sense anyway. Game companies won't foot the bill for real-time renders of your character, let alone a world of generated NPCs. If/when costs are low enough, and players accept a recurring subscription to play games, then this could happen, sure. No way in hell will artists be available in real-time to keep the generated imagery consistent.
NeutralCrane: Vibe coding as we know it has only been a thing for the last 12-18 months. So by definition the vibe-coded games you have seen are the ones being rushed.
bigstrat2003: > No one wants _buggy slop code_ for their game, but ultimately no one cares whether it has been hand crafted or vibe-coded.
Right. And vibe coding is only capable of producing buggy slop code, therefore people won't want something which is vibe coded.
Throaway1985123: People are in it for themselves...when it comes to participating in our capitalist economic system. The 2nd part is often left unsaid.
WarmWash: Humans overwhelmingly sort themselves into groups that provide them with the best value prop. When an individual's circumstances change, which changes the value prop of the group, people overwhelmingly move to a new group. It's not a capitalist or socialist thing.
ozmodiar: I'm having trouble thinking of groups whose members were even able to leave, outside of modern capitalism. Through most of human history we've been stuck with our group or tribe. Heck, I see people stick with groups that are toxic to them just out of the sense of connection it gives.
bigstrat2003: Sure... but the slop that LLMs spit out isn't going to solve their problem, which is what they care about.
vor_: > If you actually read the words used in the Steam AI survey you'll know Steam has completely caved in for AI-gen code as well.
And if you actually read the article, you'd see it addressed that.
> Yeah, exactly. And LLMs help developers save time by not writing the same thing that has been done by other developers a thousand times.
Like a library?
cweagans: The parent comment didn't seem to say anything offensive. Why so hostile?
pixl97: God damnit eukaryotes, I told you this bullshit was going to have long term ramifications 2 billion years ago!
thunky: > Because code reuse is hard.
> humans are yet not smart enough to properly reuse code
All of these difficulties you outline exist because program designs cater to the human developer experience over the machine. If this weren't the case, Python wouldn't exist, because it's clearly not made with mechanical sympathy in mind. Dynamic languages as a whole would probably not exist.
It's because humans will never agree with each other, or with machines, and that's what has led us to the proliferation of complexity and lack of reuse we see everywhere.
astrange: You can see what Claude's goal is, and it's not that.
https://www.anthropic.com/constitution
> Anthropic wants Claude to be genuinely helpful to the people it works with or on behalf of, as well as to society, while avoiding actions that are unsafe, unethical, or deceptive.
spicyusername: Got any recommendations?
bitwize: > So allow me to drop a line that would shock a weathered San Franciscan more than open defecation on Market Street: it's perfectly okay not to use AI.No. AI use is best practice now; if you're not proficient in the tools you're not really an engineer. Shape up or ship out.
bluefirebrand: I would argue that chatbots still barely pass the Turing test. They have such obvious patterns and tells that humans have already picked up on them, and they can eventually sus out that they're talking to an LLM.
For instance, I heard recently about someone talking (verbally) with an AI-voiced customer support line. They were very convinced, so they asked the support agent to calculate the product of two large numbers, and it replied with the result instantly.
I would argue that fails the Chinese room.
eucyclos: Barely pass is still a step change though. Either you can be sure what's on the other end of the line or you can't, and I'd say that, while there are still tests that work sometimes, for a purely text based exchange there are none that will work at all times.
antonvs: The person you’re replying to has only posted two short comments in this thread.
The reason a few different people are arguing this point is because it is in fact wrong, or at least poorly expressed, to refer to someone’s unfamiliarity with some aspect of a field like the gaming market as “wildly uneducated.”
Ironically, the person using that phrase is demonstrating a lack of understanding of its common meaning, suggesting that they may be a better fit for the word “uneducated”. See e.g.: https://www.merriam-webster.com/dictionary/uneducated
> What is going on right now?
As Wittgenstein put it, we’re playing language games.
1vuio0pswjnm7: "It's undeniable there's a metric gigaton of hype surrounding the technology."
Why does the hype (cf. "LLMs") need to be defended?
For example, you're unlikely to see HN replies that go something like, "Yeah, the hype sucks, but..."
Instead, there will almost always be attempts to defend against any criticism of _the hype_.
A comment such as "I do not use LLMs", i.e., "I don't believe the hype", is likely to be challenged.
That's weird, IMHO.
array_key_first: Yes, because when it comes to most artistic expression, the process behind it is the product. These pastries have cultural weight, so their value is inseparable from it.
Think of it this way: nobody would eat a Karelian pastry and not care that it's Karelian. That's why they're eating it; otherwise they'd just eat some other pastry without the cultural weight.
It's the same thing with paintings and sculptures. A painting has value because someone thought it up and put in the time to make it. And you view it not for the colors, but for what they mean. Why did they choose to paint this? What was going through their heads? What is their perspective?
If you just shoot out a painting, it has no value, because the value isn't the painting. It's your take, your perspective, and the painting is a tiny window into that.
dannersy: Anecdotal evidence, how scientific of you. When I say it's unfounded, I'm saying it hasn't been proven with actual research and data. So when you ask "by whom?", that's exactly my point: it is unfounded. That's what that word means; no one has made a claim, backed by data, that AI is making significant waves in productivity. I don't think I've missed the point at all, but it seems I've hit an emotional nerve with you, so the conversation is over.
zombot: > So allow me to drop a line that would shock a weathered San Franciscan more than open defecation on Market Street: it's perfectly okay not to use AI.The commenters here all seem to be weathered San Franciscans. They all deflect and change the subject. Everyone is falling prey to the hype, no hope left.
zombot: Making the bullshit cheaper to inflict only means that soon you will be graced with even more of it.
zombot: As if that weren't enough, LLM coding "productivity" is measured in lines of code, of all things.
FuckButtons: Junior dev hiring is down 60%; that’s not just a post-pandemic correction.
FuckButtons: dying of old age, what luxury, I’m more worried about starving to death in my mid 50s.
ajewhere: Look, from the point of view of a person outside the US, you are all fascists, "democrats" and trumpists alike. Don't take this as trolling, but as a sincere opinion. (I don't care about your internal brawls, I care about what you do to others.)
kombookcha: It's unfortunate, but the marketing spin has bled into public discourse, to the point where people take these wild aspirational claims about LLMs for granted, as opposed to seeing them for what they are: sales pitches.
It doesn't help that there is now so much capital tied up in these products, whether directly or indirectly, that a lot of people and organizations find themselves unwilling to face the reality of what the products can and cannot do.
simianparrot: Let's not forget No Man's Sky here. Or Elite Dangerous' planet-scale procedural generation, which uses solar system properties to fuel the deterministic but procedural generation of tectonic plates, which in turn seed how a planet's surface is deterministically generated, even down to impact craters over millennia. The result is a universe of billions of consistent, deterministically generated, full-scale planets you can land on. Something you couldn't do without proc-gen, because there's not enough disk space to store it.
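To make the determinism concrete, here's a minimal sketch in Python (with made-up planet properties, not Elite's actual algorithm) of how a seed hierarchy lets you regenerate the same full-scale world on demand instead of storing it on disk:

    import hashlib
    import random

    def seed_for(*path):
        # Derive a stable child seed from a hierarchy of names,
        # e.g. ("galaxy-7", "system-42", "planet-3").
        digest = hashlib.sha256("/".join(map(str, path)).encode()).digest()
        return int.from_bytes(digest[:8], "big")

    def planet_surface(galaxy, system, planet):
        # Same inputs always yield the same planet; nothing is stored.
        rng = random.Random(seed_for(galaxy, system, planet))
        return {
            "radius_km": rng.uniform(2000, 9000),
            "tectonic_plates": rng.randint(4, 30),
            "impact_craters": [
                (rng.uniform(-90, 90), rng.uniform(-180, 180))
                for _ in range(rng.randint(0, 500))
            ],
        }

    # Regeneration is identical every time:
    assert planet_surface("g7", "s42", "p3") == planet_surface("g7", "s42", "p3")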
wepple: Batshit crazy? 3 years ago LLMs couldn’t solve 7x8. Now they’re building complex applications in one shot and solving previously unsolved math and science problems. Heck, one company built a (prototype but functional) web browser. And you say it’s crazy that in the future it’ll be able to build a mail app or OS?
ezst: JFYI, LLMs still can't solve 7x8, and quite possibly never will. A more rudimentary text processor shoves that into a calculator for consumption by the LLM. There's a lot going on behind the scenes to keep the illusion flying, and that lot is a patchwork of conventional CS techniques that has nothing to do with cutting-edge research.
To many interested in actual AI research, LLMs are known as the very flawed and limited technique they are, and the increasing disconnect between that and the current state of affairs, where they are front and center of every AI shop and carrying a big chunk of global GDP on their back, is annoying and borderline scary.
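For what it's worth, the "shoves that into a calculator" plumbing is typically just tool routing: the model emits a structured request and ordinary code does the arithmetic. A minimal sketch of the pattern in Python (the llm argument is a hypothetical stand-in, not any vendor's API):

    import ast
    import operator

    # Conventional code, not the model, does the arithmetic.
    OPS = {ast.Mult: operator.mul, ast.Add: operator.add,
           ast.Sub: operator.sub, ast.Div: operator.truediv}

    def calculator(expr):
        # Safely evaluate a simple arithmetic expression like "7*8".
        def walk(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    def answer(question, llm):
        # The model only routes: it replies "CALC: <expr>" or plain text.
        reply = llm(question)
        if reply.startswith("CALC:"):
            return calculator(reply.removeprefix("CALC:").strip())
        return reply

    print(answer("what is 7x8?", llm=lambda q: "CALC: 7*8"))  # -> 56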
mejutoco: I think txt2img and img2img are terms to find those uses.
bavell: And ComfyUI workflows. People have been doing this for a while now.
mejutoco: And stablediffusion-web-ui before that and others, yes.When googling, txt2img and img2img, or txt2video img2video etc. (for video) are useful terms, since they encapsulate the usage in a few terms. One could search img2video comfyui workflows, for example.I thought it would be useful for the conversation to provide these terms, not mentioned before in the thread.
angry_octet: Why would game companies be paying for rendering on my computer? My computer can fantasise player-specific details, in a palette created by game artists, and render them itself.
Game artists could indeed be working in real time in MMORPGs to tweak the world, impresarios of the shared experience. Paying for live, human-shaped performance art is a great way to keep human creativity central to the experience.
mexicocitinluez: Do I have to explain to another adult (presumably) what the word "unfounded" means? Are you purposely ignoring the hundreds of articles popping up on this site demonstrating the capabilities of these tools? Are they all lying?
classified: But you make more money selling subscriptions to `cp` stuff you stole from others.
ezst: > How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person".This must be sarcasm. This has to be.
classified: Both of them.
classified: > In order for vibe-coding to be acceptable and justifiable, they have to consider their own output disposable, highly uncreative, and not worthy of credit.
This seems to describe most commenters in this thread, seeing how the majority defend vibe-coding.
bunderbunder: It's also, for example, the studies finding that when companies adopt AI, employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
$20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose."
Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket.
That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience.
pigpop: > More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
Similar things happened with the adoption of computers in the workplace. Perhaps there's a case for banning all digital technology and hiring typists and other assistants to perform the work using typewriters and mechanical calculators? There would certainly be less multitasking when you have 8 hours' worth of documents to retype and file/mail. Perhaps there would be less overtime when your boss can see you have a high workload by the state of papers piled upon your desk. Or maybe we can solve these problems in a different way.
gabrieledarrigo: > You can hire many minimum wage workers who are generally intelligent who dont even go to that level.
That's pretty rude to say, at minimum.
incomingpain: Oh noes, an anonymous person on the internet thinks my anonymous account is rude. Whatever will I do?
Plus, this is a valid discussion. A minimum wage worker is generally intelligent: BGI, Biological General Intelligence.
Hiring them to do tasks that AGI does is a cost-benefit analysis. Whereas that BGI has sick days and may not even work. After all: minimum wage, minimum work.
dannersy: I think my point stands. Procedural generation is a tool that usually works best when it is supplementary. What makes New Vegas an amazing game is all the hand built narratives and intricate storylines. So yeah, I agree, Starfield is boring because of the story. But if the procedural vastness was interesting enough to not be boring, then we wouldn't be talking about this to begin with.
whywhywhywhy: Starfield wasn't procedural vastness, though. No Man's Sky is, but Starfield was handmade content, then a loading screen, then a minigame, then a loading screen, then a small procedural "instance"/"dungeon", not a vast seamless world to explore.
vips7L: I’m not sure you’re viewing this correctly. No one is claiming a counterfeit painting is not a painting. It’s just not a Rembrandt.
luxuryballs: are we talking about a painting that someone is trying to pass off as a Rembrandt?
nickcoffee: The human-in-the-loop framing gets undersold in these debates. From what I've seen, the people getting the most out of this stuff aren't replacing judgment, they're delegating the parts that didn't need judgment in the first place.
xerox13ster: Honestly, just white knighting for one of my favorite developers and biggest inspirations.
Someone lying about the pseudo-randomization of the hand-drawn efforts to make them seem entirely algorithmically generated rubs me really wrong, especially when that dev has publicly broadcast the reasoning behind the decision to eschew procedurally generated mines.
GreenWatermelon: Yeah, and for good reason. The invention of the light bulb meant factory owners could force workers into 16-hour shifts. The main beneficiaries of new tech were always the capital owners. Workers had to literally fight and die for our 8-hour workdays and 5-day workweeks.
This is still going on today: the massive gains from automation are being hoarded by the wealthy capital owners, while workers struggle to make ends meet.
tptacek: Ok but also I want to light my house with lightbulbs, not with animal-fat candles.
casey2: That's because they spent a couple hundred years with a society structured around low/no growth and enforced familial occupations. There isn't much to do in such a society other than refine your craft aesthetically and wait for the growth-structured countries to come knocking.
LLMs are yet another example of science and technology leading to growth, which means it's STUPID to restructure society into a low/no-growth model like the smelly European degrowthers want. On our current trajectory, the only time it makes sense to degrow is a few hundred years after we make a Dyson sphere, or when FTL travel becomes possible.
teaearlgraycold: Time to learn design, how to talk to customers, and how to discover unsolved problems. Used right LLMs should improve your software quality. Make stuff that matters that you can be proud of.
thesz: > how to discover unsolved problems
There are plenty of them, and LLMs will not help you much there, because unsolved problems are, by definition, out-of-distribution samples.
Neural networks are interpolators, not extrapolators.
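A toy illustration of that point, using a least-squares polynomial fit as a stand-in for a learned model (real networks differ, but the in-distribution vs. out-of-distribution contrast is the same):

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-3, 3, 200)   # the "training distribution"
    y_train = np.sin(x_train)

    # The fit only ever sees data inside [-3, 3].
    coeffs = np.polyfit(x_train, y_train, deg=7)

    # Inside the training range: close to sin(x) (interpolation).
    print(np.polyval(coeffs, 1.5), np.sin(1.5))
    # Far outside the range: wildly wrong (extrapolation).
    print(np.polyval(coeffs, 9.0), np.sin(9.0))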
Lumping6371: "Juan said his dad works at Nintendo"
Lumping6371: "Programmers are obsolete, but like, for real this time guys!"
Lumping6371: There's a lot of money injected into astroturfing and propaganda by big LLM, in my opinion. I'm unsure who it is, specifically, but it is undeniable it's there: from reddit posts saying "I haven't written code in 60 years" to "I have 30 YoE, FAANG experience, an impeccable track record, and I can't get a job... Drumroll It's due to AI!"
Even here it can get pretty negative in the comments, in waves. I've noticed that a lot of articles about LLMs making wild claims about their reach tend to get a lot of pro-LLM, supportive comments at first, and then later on, less generalizing, fear-mongering comments are added, arguing against LLMs or limiting their predictions of it.
I'm guessing a lot of people aren't necessarily technical and fully believe the big-LLM promises, making broad assumptions about our profession; that, or they're bots.
tovej: You're just strawmanning now. I've prompted for extremely well-specced, contained features, and the LLM has failed nonetheless.
In fact, the more details I give it about a specific problem, the more it seems to hallucinate. Presumably because it is further outside the training set.
dntrshnthngjxct: Because you need to consider the context window, and thus separate the prompts by task. Separating by task and planning things out is still your own work; no AI can replace that. Assuming you do that properly, AI-generating the code may save you up to 15% of your full work time. Please reread my comment: "If you do not plan out the architecture soundly". Planning includes breaking the task down and making multiple prompts.
Our job is to break problems down into simpler ones until they are easily solvable, and if a machine simplifies the last steps, it is fine.
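Sketched out, "separate the prompts by task" is just a loop over a plan, with a fresh context per subtask (the complete function here is a hypothetical client call, not any specific vendor's API):

    def implement_plan(subtasks, complete):
        # One small, well-specified prompt per subtask, instead of one
        # giant conversation that drowns the context window.
        results = []
        for task in subtasks:
            prompt = (
                "You are implementing one small piece of a larger plan.\n"
                f"Task: {task['description']}\n"
                f"Only these interfaces are relevant: {task['interfaces']}"
            )
            # Review each result before feeding in the next subtask.
            results.append(complete(prompt))
        return results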
bdangubic: > most software engineers, especially not in FAANG-like companies, don't engineer a shit.true for FAANG as well