Discussion
Good Taste the Only Real Moat Left
ibero: https://x.com/netcapgirl/status/2024140332963705342?s=46

evergreen.
gmaster1440: If you're properly bitter-lesson-pilled then why wouldn't better models continue to develop and improve taste and discernment when it comes to design, development, and just better thinking overall?
sparker72678: At least in part because some of Taste is fashion.
allears: And if anybody knows about good taste, it's techies, right?
furyofantares: Extremely ironic piece of slop.
echelon: No - at face value, our work has diminished value. The entire supply and demand economics of our careers is changing in the blink of an eye.

There are people trying to figure out what this means and where to create value. "Taste is the only moat" is one such hypothesis. "Senior engineers will be fine" is another.

Everything is super frothy right now and we're in for a wild 2026.
CharlieDigital: > One of the most useful things about AI is also one of the most humbling: it reveals how clear your own judgment actually is. If your critique stays vague, your taste is still underdeveloped. If your critique becomes precise, your judgment is stronger than the model output. You can then use the model well instead of being led by it.

Something I find that teams get wrong with agentic coding: they start by reverse engineering docs from an existing codebase.

This is a mistake.

Instead, the right train of thought is: "what would perfect code look like?" and then meticulously describe to the LLM what "perfect" is to shape every line that gets generated.

This exercise is hard for some folks to grasp because they've never thought much about what well-constructed code or architecture looks like; they have no "taste" and thus no ability to precisely dictate the framework for "perfect" (yes, there is some subjectivity that reflects taste).
sodapopcan: > Instead, the right train of thought is: "what would perfect code look like?" and then meticulously describe to the LLM what "perfect" is to shape every line that gets generated.

I think this goes against what a lot of developers want AI to be (not me, to be clear).
CharlieDigital: I'm looking at it from a team perspective. With the right docs, I can lift every developer of every skill level up to a minimum "floor" and influence every line of code that gets committed to move it closer to "perfect".

I'm not writing every prompt, so there is still some variation, but this approach has given us very high quality PRs with very minimal overhead by getting the initial generation passes as close to "perfect" as reasonably possible.
rvz: Well, nope. There are three real moats left in software: Distribution, Data (proprietary), and Iteration Speed. Very successful companies have all three: Stripe, Meta, Google, Amazon.
everyone: I don't buy the author's argument. Not much has changed, imo. Mediocre slop has always been the easiest thing to generate.
dinkleberg: Yeah I feel like we’re getting pranked here
dlev_pika: Rick Rubin said it best.https://youtu.be/jg1WUOxY6Cg?si=0ajVvgKnyuSz0e2Y
johnfn: It is profoundly ironic that this article is AI generated.
hackerman70000: This reads like cope. If taste were a real moat, designers and art directors would be the highest paid people in tech. They aren't. Execution speed, distribution, and capital are moats. Taste is a tiebreaker at best. The market consistently rewards "good enough, shipped fast" over "exquisite, shipped late".
DrewADesign: It’s not that straightforward. Art directors and designers get paid to visually communicate things the business wants to communicate— anything from brand vibes, to directing people to click on a “buy me” button, to the state of an interface. Most designers in tech companies aren’t even the ones that design things like branding — that’s done by specialists in extremely well-compensated studios, and corporate designers are stuck following their guidelines. Taste is nearly irrelevant to an interface designer, for example.
hackerman70000: Fair point. I was conflating taste with design, which is a different thing. But I think this actually strengthens the argument. If even the people whose entire job is visual judgment are mostly executing within constraints set by someone else, then "taste" as an individual moat is even weaker than the article claims. The leverage is in setting the constraints, not in having better judgment within them.
micromacrofoot: taste isn't a moat at all because it's so variable; in fact, this stuff will start dictating what taste is through broad proliferation. you already see it on facebook with all the ai generated meme sharing... taste is being eroded there
sodapopcan: Oh I agree with you, I'm just saying a lot of developers don't want to use it like that. AI has liberated them from the drudgery of reading and writing code and they won't accept that they should still be doing a bit of both, if not a lot of reading.
danielvaughn: Disagree with the overall argument. Human effort is still a moat. I've been spending the past couple of months creating a codebase that is almost entirely AI-generated. I've gotten way further than I would have otherwise at this pace, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.

There's some truth in there that judgement is as important as ever, though I'm not sure I'd call it taste. I'm finding that you have to have an extremely clear product vision, along with an extremely clear language used to describe that product, for AI to be used effectively. Know your terms, know how you want your features to be split up into modules, know what you want the interfaces of those modules to be.

Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.
taude: I feel like you're pretty strongly agreeing that taste is important: "I'm finding that you have to have an extremely clear product vision..."

A clear product vision that you're building the right thing in the right way involves a lot of taste to get right. Good PMs have this. Good engineers have this. Visionary leaders have this.

The execution of using AI to generate the code and other artifacts is a matter of skill. But without taste, you won't know that you're building the right thing, with the right features, in a way that will be delightful to use.

I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best. The founders don't see it yet. And like the article says, they're just setting themselves up for mediocrity. I think any really good PM would be able to improve all these apps I looked at almost immediately.
boshalfoshal: I think "taste" is definitely an overused meme at this point; it's like tech twitter discovered this word in 2024 and never stopped using it (same with "agency", "high leverage", etc).

Having read the article though, I think I see the author's argument. I think "taste" here in an engineering context basically just comes down to an innate feeling of what engineering or product directions are right or wrong. I think this is different from the type of "taste" most people here are talking about, though I'm sure product "taste" specifically is somewhat correlated with your overall "taste." Engineering "taste" seems more correlated with experience building systems and/or strong intuitions about the fundamentals. I think this is a little different from the totally subjective, "vibes based" taste that you might think of in the context of design or art.

Now where I disagree is that:

1. "taste" is a defensible moat
2. "taste" is "ai-proof" to some extent

"Taste" is only defensible to the extent that knowing what to do and cutting off the _right_ cruft is essential to moving faster. Moving faster and out-executing is the real "moat" there. And obviously any cognitive task, including something as nebulous as "taste," can in theory be done by a sufficiently good AI. Clarity of thought when communicating with AI is, imo, not "taste."

In general though, tech people are some of the least tasteful people, so it's always funny to see posts like this.
tayo42: > That is why so much AI-generated work feels familiar:

This was already a complaint people had before AI. Like when logos and landing pages all used to look the same. Or coffee shops all looking the same.
nayuki: Reminds me of PG's classic essay, "Taste for Makers" (2002): https://paulgraham.com/taste.html
eru: Is the joke that the guy is drinking bad coffee?
crystal_revenge: > ... for AI to be used effectively.

I'm continually fascinated by the huge differences in individual ability to produce successful results with AI. I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well, and have some irrational belief that they can get AI to brute force their way to a solution.

For me I don't even use the more powerful models (just Sonnet 4.6) and have yet to have a project not come out fairly successful in a short period of time. This includes graded live coding examples for interviews, so there is at least some objective measurement that these are functional.

Strangely, I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.
alfalfasprout: > Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.

This feels a bit like a strawman. How do you assess it to be bad software without being an engineer yourself? What constitutes successful for you?

If anything, AI tools have revealed that a lot of people have hubris about building software, with non-engineers believing they're creating successful work without realizing it's a facade of a solution that's a ticking time bomb.
aaaronic: It does amaze me when colleagues refuse to read what I (personally, deliberately) wrote (they ask AI to summarize), but then tell AI to write their response and it's absolutely bloated and full of misconceptions around my original document.If they aren't willing to read what I put effort into, why should I be expected to read the ill-conceived and verbose response? I really don't want to get into a match of my AI arguing with your AI, but that's what they've told me I should be doing...
roncesvalles: > AI and LLMs have changed one thing very quickly: competent output is now cheap.

Already wrong.
lostathome: I already disagree with the first line: competent output is not cheap, at least if defined as a final product.

- Just think about scientific research. Lots of data analysis results are not cheap to get.
- Even vibe coding is difficult: you need to think very hard about what you want.

What is cheaper now are some building blocks. We just have a new definition of building blocks. But putting the blocks together is still hard.
scared_together: It’s also possible this is the first iteration of the loop described in the “A practical loop for training taste” section. Which would be less of a “prank” and more of “using the HN audience to feed the machine”.

The loop (some points snipped for brevity):

> 1. Pick one high-leverage artifact from your week. A paragraph…
> 2. Generate 10 to 20 versions with an AI model.
> 3. For each version, write one sentence that starts with "fails because..."
> 4. Rewrite the strongest version with a hard constraint…
> 5. Ship the final version somewhere real and observe what happens.
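The quoted loop is concrete enough to scaffold. A minimal sketch in Python, where `generate_versions` is a hypothetical stub (not a real API) and the "fails because..." critiques are meant to be written by a human, not a model:

```python
# Sketch of the article's "practical loop for training taste" (steps 2-4).
# generate_versions is a hypothetical stand-in for a real model API call.

def generate_versions(artifact: str, n: int = 10) -> list[str]:
    # Stub: a real implementation would call an LLM n times.
    return [f"{artifact} (variant {i})" for i in range(1, n + 1)]

def critique(versions: list[str]) -> list[tuple[str, str]]:
    # Step 3: pair each version with a one-sentence critique that
    # must start with "fails because..." (written by a human).
    return [(v, "fails because ...") for v in versions]

def strongest(pairs: list[tuple[str, str]]) -> str:
    # Step 4 placeholder: a human picks the strongest version and
    # rewrites it under a hard constraint.
    return pairs[0][0]

pairs = critique(generate_versions("one high-leverage paragraph", 10))
assert len(pairs) == 10
assert all(c.startswith("fails because") for _, c in pairs)
```

The point of the structure is that the human judgment lives entirely in the critiques and the final rewrite; the model only supplies raw variants.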
andai: I think you're missing the point. Effort is a moat now because centaurs (human+AI) still beat AIs, but that gap gets smaller every year (and will ostensibly be closed).

The goal is to replicate human labor, and they're closing that gap. Once they do (maybe decades, but probably will happen), then only that "special something" will remain. Taste, vision... We shall all become Rick Rubins.

Until 2045, when they ship RubinGPT.
monknomo: do you need taste if you can massively parallel a/b test your way to something that is tasteful? say you take your datacenter of geniuses and have a rubin-loop supervising testing different directions. shouldn't that be close enough?
AlexCoventry: Try using a coding agent to write an efficient GPU kernel. I guess they might get good at it soon, but they definitely aren't there yet.
Yokohiii: He has taste. The LLM knows that and creates a tasteful article. /s
periodjet: It has been for a while. Hollywood and other outlets didn’t need AI tools to create abysmal slop.
abkolan: Looks like the comments on this article are too.
roncesvalles: Roncesvalles' law: Bad posts have bad comments.
jdpigeon: Seriously. Very Claude-y vibes from this post. I guess the value of human effort doesn’t extend to writing your own blog posts
osti: I had a very complex CUDA kernel and codex cli managed to improve the throughput 20x.
jatins: Title: "Good Taste the Only Real Moat Left"

Followed by an entire AI-generated fluff piece: https://www.pangram.com/history/347cd632-809c-4775-b457-d9bc...

Flagged
boshalfoshal: The joke is that "taste" usually implies you have some strong personal sense of self and style, but if you walked into tech offices in the bay area, everyone looks like that and acts/talks the same.

So it's ironic that these same people are talking about "taste" when they ostensibly have very little.
acuozzo: > AI and LLMs have changed one thing very quickly: competent output is now cheap.

If you're working on something not truly novel, sure.

If you're using LLMs to assist in e.g. Mathematics work on as-yet-unproven problems, then this is hardly the case.

Hell, if we just stick to the software domain: Gemini3-DeepThink, GPT-5.4pro, and Opus 4.6 perform pretty "meh" writing CUDA C++ code for Hopper & Blackwell.

And I'm not talking about poorly-spec'd problems. I'm talking about mapping straightforward mathematics in annotated WolframLanguage files to WGMMA with TMA.
liuliu: I am not sure you set it up right. Did you have a runnable WolframLanguage file so it can compare results? Did you give it H100 / H200 access to compile and then iterate?

My experience is that once you have these two, it does amazing kernel work (Codex-5.4).