Discussion
Laws of Software Engineering
WillAdams: Visual list of well-known aphorisms and so forth. A couple are well-described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout (https://www.goodreads.com/en/book/show/39996759-a-philosophy...), and of course Brooks's Law is from _The Mythical Man-Month_. Curious if folks have recommendations for less well-known books which cover these, other than the _Laws of Software Engineering_ book which the site is an advertisement for.
grahar64: Some of these laws are like Gravity, inevitable things you can fight but will always exist e.g. increasing complexity. Some of them are laws that if you break people will yell at you or at least respect you less, e.g. leave it cleaner than when you found it.
d--b: It's missing:

> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
ozgrakkurt: For anyone reading this: learn software engineering from people who do software engineering. Only read textbooks written by people who actually do things.
ghm2180: Any recommendations? I read Designing Data-Intensive Applications (DDIA), which was really good. But it is by Martin Kleppmann, who as I understand it is an academic. Reading PEPs is also nice, as it lets one understand the motivations and the "why should I care" behind feature X.
conartist6: Remember that these "laws" contain so many internal contradictions that when they're all listed out like this, you can just pick one that justifies whatever you want to justify. The hard part is knowing which law to break when, and why.
IshKebab: Calling these "laws" is a really really bad idea.
mojuba: > Get it working correctly first, then make it fast, then make it pretty.

Or develop the skill to make it correct, fast, and pretty in one or two passes.
Antibabelic: Software engineering is voodoo masquerading as science. Most of these "laws" are just things some guys said and people thought "sounds sensible". When will we have "laws" that have been extensively tested experimentally in controlled conditions, or "laws" that will have you in jail for violating them? Like "you WILL be held responsible for compromised user data"?
r0ze-at-hn: Love the detail sub-pages. Over 20 years I collected a little list of specific laws, or really observations (https://metamagic.substack.com/p/software-laws), and thought about turning each into a detailed blog post, but it has been more fun chatting with other engineers, showing them the page, and watching as they scan the list and inevitably tell me a great story. For example, I could do a full writeup on the math behind this one, but it is way more fun hearing the stories about trying and failing to get a second rewrite of code:

> 9. Most software will get at most one major rewrite in its lifetime.
tfrancisl: Remember, just because people repeated it enough times that it made this list does not mean it's true. There may be some truth in most of these, but none of them are "laws". They are aphorisms: punchy one-liners that try to distill something as complex as human interaction and software design.
threepts: I believe there should be one more law here, telling you to not believe this baloney and spend your money on Claude tokens.
AussieWog93: DRY is my pet example of this. I've seen CompSci guys especially (I'm from an EEE background; we have our own problems, but this ain't one of them) launch conceptual complexity into the stratosphere just to avoid writing two separate functions that do similar things.
AussieWog93: I recently had success with a problem I was having by basically doing the following:

- Write a correct, pretty implementation.
- Beat Claude Code with a stick for 20 minutes until it generated a fragile, unmaintainable mess that still happened to produce the same result, but in 300ms rather than 2500ms. (In this step, explicitly prompting it to test rather than just philosophize gets you really far.)
- Pull the concepts and time-saves from Claude's mess across into the pretty code.

Seriously, these new models are actually really good at reasoning about performance and knowing alternative solutions or libraries that you might have only just discovered yourself.
tmoertel: One that is missing is Ousterhout’s rule for decomposing complexity: complexity(system) = sum(complexity(component) * time_spent_working_in(component) for component in system). The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity.
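A minimal sketch of the weighted-complexity rule above (the component names and numbers here are invented purely for illustration):

```python
def system_complexity(components):
    """Total perceived complexity: each component's intrinsic complexity
    weighted by the fraction of time developers spend working in it."""
    return sum(c["complexity"] * c["time_fraction"] for c in components)

# A gnarly but well-encapsulated library you almost never touch
# contributes little, even though it is internally very complex.
components = [
    {"name": "app code",   "complexity": 3,  "time_fraction": 0.9},
    {"name": "stable lib", "complexity": 50, "time_fraction": 0.01},
]
print(system_complexity(components))  # ~3.2: the stable lib barely registers
```

Under this model, moving complexity into a component whose `time_fraction` approaches zero is, for practical purposes, the same as removing it.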
stingraycharles: That’s not some kind of law, though. And I’m also not sure whether it even makes sense, complexity is not a function of time spent working on something.
tmoertel: First, few of the laws on that site are actual laws in the physics or mathematics sense. They are more guiding principles.> complexity is not a function of time spent working on something.But the complexity you observe is a function of your exposure to that complexity.The notion of complexity exists to quantify the degree of struggle required to achieve some end. Ousterhout’s observation is that if you can move complexity into components far away from where you must do your work to achieve your ends, you no longer need to struggle with that complexity, and thus it effectively is not there anymore.
layer8: Physical laws vs human laws.
RivieraKid: Not a law, but a design principle that I've found to be one of the most useful and also little-known: structure code so that, in the ideal case, removing a piece of functionality is as simple as deleting a directory or file.
kijin: What's the smallest unit of functionality to which your principle applies?For example, each comment on HN has a line on top that contains buttons like "parent", "prev", "next", "flag", "favorite", etc. depending on context. Suppose I might one day want to remove the "flag" functionality. Should each button be its own file? What about the "comment header" template file that references each of those button templates?
sverhagen: Maybe the buttons shouldn't be their own files, but the backend functionality certainly could be. I don't do this, but I like the idea.
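One way the delete-a-file idea above is sometimes realized is a self-registration pattern, where each feature module registers itself at import time, so deleting the module removes the feature with no other edits. A hypothetical sketch (all names invented):

```python
# Central registry; it never needs to know which features exist.
ROUTES = {}

def route(path):
    """Decorator: register a handler under a URL path."""
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register

# Imagine this lives in features/flag.py -- deleting that one file
# would remove the "flag" feature entirely.
@route("/flag")
def flag_comment():
    return "flagged"

print("/flag" in ROUTES)  # True
```

The trade-off is that the set of features is no longer visible in one place; it is discovered at import time.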
horsawlarway: At least for your last point... ideally never.

Look, I understand your intent, and I also understand the frustration at the lack of care with which many companies have treated personal data. I get it; I'm also frustrated.

But (it's a big but): your suggestion is that we hold people legally responsible and culpable for losing a confrontation against another motivated, capable, and malicious party. That's a seriously, seriously different standard than holding someone responsible for something like not following best practices or good policy. It's the equivalent of killing your general when he loses a battle. And the problem is that sometimes even good generals lose battles, not because they weren't making an honest effort to win or were being careless, but because they were simply outmatched.

So, to be really blunt: your proposal basically says that any software company should be legally responsible for failing to match the resources of a nation-state that might want to compromise their data. That's not good policy, period.
Antibabelic: Incidents happen in the meat world too. Engineers follow established standards to prevent them to the best of their ability. If they don't, they are prosecuted. Nobody has ever suggested putting people in jail for Russia using magic to get access to your emails. However, in the real world, there is no magic. The other party "outmatches" you by exploiting typical flaws in software and hardware, or, far more often, in company employees. Software engineering needs to grow up, have real certification and standards bodies and start being rigorously regulated, unless you want to rely on blind hope that your "general" has been putting an "honest effort" and showing basic competence.
wduquette: And in addition, the time you spend making a component work properly is absolutely a function of its complexity. Once you get it right, package it up neatly with a clean interface and a nice box, and leave it alone. Where "getting it right" means getting it to a state where you can "leave it alone".
theandrewbailey: Modern SaaS: make it "pretty", then make it work, then make it "pretty" again in the next release. Make fast? Never.
Symmetry: On my laptop I have a yin-yang with DRY and YAGNI replacing the dots.
bronlund: Pure gold :) I'm missing one though; "You can never underestimate an end user".
James_K: I feel that Postel's law probably holds up the worst out of these. While being liberal with the data you accept can seem good for the functioning of your own application, the broader social effect is negative. It promotes misconceptions about the standard into informal standards of their own to which new apps may be forced to conform. Ultimately being strict with the input data allowed can turn out better in the long run, not to mention be more secure.
Sergey777: A lot of these “laws” seem obvious individually, but what’s interesting is how often we still ignore them in practice. Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations. I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.
devgoncalo: The Pragmatic Programmer https://www.amazon.com/Pragmatic-Programmer-Journeyman-Maste...
fenomas: Nice to have these all collected neatly and shareable. For the amusement of HN, let me add one I've become known for at my current work, for saying to juniors who are overly worried about DRY:

> Copy-paste is free; abstractions are expensive.
ndr: Same vibe, different angle:> 11. Abstractions don’t remove complexity. They move it to the day you’re on call.Source: https://addyosmani.com/blog/21-lessons/
dassh: Calling them 'laws' is always a bit of a stretch. They are more like useful heuristics. The real engineering part is knowing exactly when to break them.
pydry: DRY is misunderstood. It's definitely a fundamental aspect of code quality; it's just one of about 4, and maximizing it to the exclusion of the others is where things go wrong. Usually it comes at the expense of loose coupling (which is equally fundamental). The goal ought to be to aim for a local optimum across all of these qualities.

Some people just want to toss DRY away entirely, or be uselessly vague about when to apply it ("use it when it makes sense"), and that's not really much better than being a DRY fundamentalist.
layer8: DRY is misnamed. I prefer stating it as SPOT — Single Point Of Truth. Another way to state it is this: If, when one instance changes in the future, the other instance should change identically, then make it a single instance. That’s really the only DRY criterion.
mcv: That's how I understood it. If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.
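The SPOT criterion above ("if one instance changes, the other should change identically, make it a single instance") can be sketched with derived constants. The names and numbers here are hypothetical:

```python
# Violates SPOT: the "three timeouts" relationship is duplicated knowledge.
# If REQUEST_TIMEOUT_S changes, someone must remember to update RETRY_BUDGET_S.
REQUEST_TIMEOUT_S = 30
RETRY_BUDGET_S = 90          # secretly means "3 * REQUEST_TIMEOUT_S"

# Single point of truth: the relationship is stated exactly once,
# so the derived value can never drift out of sync.
TIMEOUT_S = 30
RETRIES = 3
BUDGET_S = TIMEOUT_S * RETRIES

print(BUDGET_S)  # 90
```

Note the criterion is about values that must change *together*; two constants that merely happen to be equal today are not a DRY violation.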
Aaargh20318: I’m missing Curly’s Law: https://blog.codinghorror.com/curlys-law-do-one-thing/“A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”
ipnon: I usually invoke this by naming with POSIWID.
jimmypk: Postel's Law vs. Hyrum's Law is the canonical example. Postel says be liberal in what you accept — but Hyrum's Law says every observable behavior of your API will eventually be depended on by someone. So if you're lenient about accepting malformed input and silently correcting it, you create a user base that depends on that lenient behavior. Tightening it later is a breaking change even if it was never documented. Being liberal is how you get the Hyrum surface area.The resolution I've landed on: be strict in what you accept at boundaries you control (internal APIs, config parsing) and liberal only at external boundaries where you can't enforce client upgrades. But that heuristic requires knowing which category you're in, which is often the hard part.
throwaway173738: I look at Postel’s law more as advice on how to parse input. At some point you’re going to have to upgrade a client or a server to add a new field. If you’ve been strict, then you’ve created a big coordination problem, because the new field is a breaking change. But if you’re liberal, then your systems ignore components of the input that they don’t recognize. And that lets you avoid a fully coordinated update.
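The strict-vs-liberal trade-off discussed above can be sketched with two toy parsers; the field names are invented for illustration:

```python
KNOWN_FIELDS = {"host", "port"}

def parse_strict(msg: dict) -> dict:
    """Reject unknown fields: suited to boundaries you control,
    since clients never get to depend on lenient behavior."""
    unknown = set(msg) - KNOWN_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {k: msg[k] for k in KNOWN_FIELDS if k in msg}

def parse_liberal(msg: dict) -> dict:
    """Ignore unknown fields: lets an old server accept messages from a
    newer client that added a field, avoiding a coordinated upgrade."""
    return {k: msg[k] for k in KNOWN_FIELDS if k in msg}

msg = {"host": "example.com", "port": 8080, "new_field": True}
parse_liberal(msg)   # succeeds, silently dropping new_field
# parse_strict(msg)  # would raise ValueError
```

The Hyrum's-Law catch: once clients observe that `new_field` is silently tolerated, switching to `parse_strict` is a breaking change even though the tolerance was never documented.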
busfahrer: I think I remember a Carmack tweet where he mentioned in most cases he only considers it once he reaches three duplicates
whattheheckheck: Why 3? What is this, baseball?

Take the Five Rings approach. The purpose of the blade is to cut down your opponent. The purpose of software is to provide value to the customer. It's the only thing that matters.

You can also philosophize about why people with blades needed to cut down their opponents, along with why we have to provide value to the customer, but that's beyond the scope of this comment.
ta20240528: "The purpose of software is to provide value to the customer."

Partially correct. The purpose of your software, to its owners, is also to provide future value to customers, competitively. What we have learnt is that software needs to be engineered: designed and structured.
andreygrehov: `Copy as markdown` please.
ebonnafoux: There is a small typo in the Ninety-Ninety Rule:

> The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

It should be 90% code / 10% time, and 10% code / 90% time.
Edman274: It sounds like you are unfamiliar with the idea that software engineering efforts can be underestimated at the outset. The humorous observation here is that the total is 180 percent, which means it took longer than expected, which is very common.
ebonnafoux: Oh OK, that is something I learned today.
Kinrany: SOLID being included immediately makes me have zero expectation of the list being curated by someone with good taste.
jcgrillo: > any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their dataNo. Not the company, holding companies responsible doesn't do much. The engineer who signed off on the system needs to be held personally liable for its safety. If you're a licensed civil engineer and you sign off on a bridge that collapses, you're liable. That's how the real world works, it should be the same for software.
horsawlarway: Define "safety".
jcgrillo: Obviously if someone dies or is injured a safety violation has occurred. But other examples include things like data protection failures--if for example your system violates GDPR or similar constraints it is unsafe. If your system accidentally breaks tenancy constraints (sends one user's data to another user) it is unsafe. If your system allows a user to escalate privileges it is unsafe.These kinds of failures are not inevitable. We can build sociotechnical systems and practices that prevent them, but until we're held liable--until there's sufficient selection pressure to erode the "move fast and break shit" culture--we'll continue to act negligently.
horsawlarway: None of those are what OP proposed. Frankly, we also cover many of these practices just fine. What do you think SOC 2 type 2 and ISO 27001 are?It seems like your issue is that we don't hold all companies to those standards. But I'm personally ok with that. In the same way I don't think residential homes should be following commercial construction standards.
mosburger: I said this elsewhere in the comments, but I think there's a fundamental tension that sometimes shows up between DRY and "a function/class should only do one thing". E.g., there might be two places in your code that do almost identical things, so there's a temptation to say "I know! I'll make a common function; I'll just need to add a flag/extra argument...", and if you keep doing that you end up with messy "DRY" functions full of conditional logic that try to do too much.

Yeah, there are ways to avoid this, and you need to strike balances, but sometimes you have to be careful and resist the temptation to DRY everything up, 'cuz you might just make it brittler (pun intended).
sigma5: I would also add Little's law for throughput calculation: https://en.wikipedia.org/wiki/Little%27s_law
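Little's law states L = λW: the average number of items in a system equals the arrival rate times the average time each item spends in the system. A quick back-of-the-envelope sketch with hypothetical numbers:

```python
# Little's law: L = lambda * W
arrival_rate = 100.0   # requests per second entering the system
latency_s = 0.25       # average time each request spends in the system

# Average number of requests in flight at any moment.
in_flight = arrival_rate * latency_s
print(in_flight)  # 25.0

# Rearranged: a service capped at 25 concurrent requests with 0.25s
# latency can sustain at most 25 / 0.25 = 100 req/s of throughput.
max_throughput = in_flight / latency_s
print(max_throughput)  # 100.0
```

The law holds for any stable system regardless of arrival distribution, which is what makes it so handy for capacity planning.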
GuB-42: > Premature optimization is the root of all evil.

There are few principles of software engineering that I hate more than this one, though SOLID is close.

It is important to understand that it comes from a 1974 paper; computing was very different back then, and so was the idea of optimization. Back then, optimizing meant writing assembly code and counting cycles. That is still done today in very specific applications, but today performance is mostly about architectural choices, and those have to be given consideration right from the start. In 1974, these architectural choices weren't choices; the hardware didn't let you do it differently.

Focusing on the "critical 3%" (which implies profiling) is still good advice, but it will mostly help you fix "performance bugs": an accidentally quadratic algorithm, stuff that is done in a loop but doesn't need to be, etc. Once you have dealt with those, that's when you notice you spend 90% of the time in abstractions and it is too late to change them, so you add caching, parallelism, etc., making your code more complicated and still slower than if you had thought about performance from the start.

Today, late optimization is just as bad as premature optimization, if not more so.
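The "accidentally quadratic" performance bug mentioned above is the classic case where the fix is a data-structure choice, not micro-optimization. A minimal illustrative sketch:

```python
def dedupe_quadratic(items):
    """Preserve order while removing duplicates -- but list membership
    is O(n) per check, so the whole loop is O(n^2)."""
    seen, out = [], []
    for x in items:
        if x not in seen:      # linear scan of `seen` every iteration
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    """Same result, but set membership is O(1) on average,
    making the loop O(n) overall."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = [1, 2, 2, 3, 1]
assert dedupe_quadratic(data) == dedupe_linear(data) == [1, 2, 3]
```

Both versions are about the same length and readability; the fast one is not "optimized", it simply uses the right structure, which is arguably the point Knuth's book spends thousands of pages making.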
cogman10: Completely agreed here [1].

And as I point out there, what Knuth was talking about in terms of optimization was things like loop unrolling and function inlining, not picking the right data structure or algorithm for the problem. I mean, FFS, his entire book was about exploring and picking the right data structures and algorithms for problems.

[1] https://news.ycombinator.com/item?id=47849194
ghosty141: What's the problem with SOLID? It's very very rare that I see a case where going against SOLID leads to better design.
heap_perms: That's interesting, what makes you think that? Not long ago, I was working on my degree in Computer Science (Software Engineering), and we were heavily drilled on this principle. Even then, I found it amusing how all the professors were huge fanboys of SOLID. It was very dogmatic.
_dain_: I have a lot of issues with this one: https://lawsofsoftwareengineering.com/laws/premature-optimiz...

It leaves out this part from Knuth:

> The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.

Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.

Moreover:

> Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.

I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems. For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed-systems problems, cache problems, and lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.

Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliché.
cogman10: > I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.

Just a little historical context will tell you what Knuth was talking about. Compilers in Knuth's era were extremely dumb. You didn't get things like automatic method inlining or loop unrolling; you had to do that stuff by hand. And yes, it would give you faster code, but it also made that code uglier.

The modern equivalent would be seeing code working with floating point and jumping to SIMD intrinsics or inline assembly because the compiler did a bad job (or you presume it did) with the floating-point math. That is such a rare case that I find the premature-optimization quote to be wrong almost every time it is deployed. It always seems to be an excuse to deploy linear searches and to avoid using (or learning?) language data structures which solve problems very cleanly in less code and much less time (and sometimes with less memory).