
When I was in school in the '70s (that's NINETEEN seventies), there was this book called The Psychology of Computer Programming. This predates the microcomputer era as we know it. Punched cards were still common when the book was written.

A computer was to control a new assembly line for a car company. They couldn't get the software to work. They called in an outside insultant. The outsider developed a program that worked. (It was more complex.) The book was about the psychology part: The original programmer asks, "How fast does YOUR program process a punched card?" Answer: "About one card per second." "Ah!" said the original programmer, "but MY program processes ten cards per second!"

The outsider said, "Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.

Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.



Your example is a case of premature optimization. That is not what the author is concerned with.

The problem is not the programs that obviously do not work or that break in a very visible fashion. Programs whose deficiencies are known can be fixed or worked around.

The real problem is programs that appear to work correctly but aren't.

To say it with the words of Tony Hoare:

    There are two ways of constructing a software design:
    One way is to make it so simple that there are obviously
    no deficiencies, and the other way is to make it so
    complicated that there are no obvious deficiencies. The
    first method is far more difficult. It demands the same
    skill, devotion, insight, and even inspiration as the
    discovery of the simple physical laws which underlie the
    complex phenomena of nature.
Source: 1980 Turing Award Lecture; Communications of the ACM 24 (2), (February 1981): pp. 75-83.


That quote actually refutes the OP by reinforcing that correctness is more important than simplicity. Achieving correctness is the whole point of making things simple, after all. To put simplicity before correctness would be missing the forest for the trees.


In terms of designing solutions, I would say that "correctness" is relative to the problem statement at hand. It's also a matter of degree, not an absolute. A solution may be correct and incorrect at the same time, depending on the context. Given that, I would prefer simplicity over correctness, to allow for ease of optimization.


Correctness is achieved by simplicity.


    For every problem there is a solution that is simple, neat - and wrong.


Indeed - for example, you can make things look simple by leaving necessary parts out.

I think, however, that the more important parts of this quote are the words 'problem' and 'solution'. Until you have an understanding of the problem that is correct, it is unlikely that you will come to a solution at all. Avoiding the introduction of gratuitous complexity is not necessary for reaching that understanding, but it sure helps.


Clearly, that solution isn't simple enough.


> Correctness is achieved by simplicity.

That's... literally what I just said? "Achieving correctness is the whole point of making things simple, after all."


That's... literally what the author said.

    If your solution is not simple, it will not be correct or fast.
Correctness may be the end-goal. But correctness is absolute. So it is a bad performance indicator to set as the goal. Yes, we can track bugs. But the absence of open bugs is no guarantee for correctness.

I can never say "We are 5% more correct than last week. Keep up the good work!"

Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.


> That's... literally what the author said.

Excellent, so we both agree with the author that correctness is the ultimate point and that simplicity is just a useful tool for achieving correctness. :)

> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.

How does one purport to measure simplicity?


I was considering ASMs (abstract state machines) for it:

http://pages.di.unipi.it/boerger/Papers/Methodology/BcsFacs0...

My thinking was like this. The complexity of software is synonymous with saying we don't know what it will do on given inputs. As complexity goes up, it gets more unpredictable, because of the input ranges, branching, feedback loops, etc. So a decent measure of complexity might come from simplifying all of that down to the purest form we can still measure.

ASMs are a fundamental model of computation, basically representing states, transitions, and the conditionals that make them happen. So those counts, for individual ASMs and combinations of them, might be a good indicator of the complexity of an algorithm. And note that they can model both imperative and functional programming.
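
Something like this toy sketch is what I had in mind (the representation and the scoring are my own assumptions, not anything taken from the paper):

    # Hypothetical sketch: score a toy abstract-state-machine description by
    # counting states, transitions, and distinct guard conditions. The
    # representation and the weighting here are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        """One guarded update rule: if `guard` holds, move from `src` to `dst`."""
        src: str
        dst: str
        guard: str  # textual condition, e.g. "card_present"

    @dataclass
    class ASM:
        states: set = field(default_factory=set)
        rules: list = field(default_factory=list)

        def complexity(self) -> int:
            # One point per state, per transition, and per distinct guard:
            # more reachable configurations means less predictable behavior.
            guards = {r.guard for r in self.rules}
            return len(self.states) + len(self.rules) + len(guards)

    card_reader = ASM(
        states={"idle", "reading", "error"},
        rules=[
            Rule("idle", "reading", "card_present"),
            Rule("reading", "idle", "card_done"),
            Rule("reading", "error", "misfeed"),
        ],
    )
    print(card_reader.complexity())  # 3 states + 3 rules + 3 guards = 9

Combining machines would just add their counts. Crude, but it gives you a number you can watch per change.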

What do you think of that idea?


> ...reinforcing that correctness is more important than simplicity

It's the other way around. Correctness is obviously the goal (and likely performance too, depending on your use case), but the way to achieve it is through simplicity. So simplicity should be prioritized - as it allows you to ensure correctness.


I'm glad that we can agree that correctness is the goal, though I still take umbrage to the blog post's title, thesis, and conclusion. :P


By that logic, "fast" goes before "correct"; you can't print the answer quickly if you don't have the answer, after all.

> if your solution is not simple, it will not be correct or fast.

The point of the article is that "simple" is a prerequisite of "correct" (and "fast").


We reached the maximum thread depth.

>> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.

>How does one purport to measure simplicity?

There's 40 years of research into that. And loads of tools to support dev teams.

You can start here: https://en.wikipedia.org/wiki/Cyclomatic_complexity

Also related are costing models: https://en.wikipedia.org/wiki/COCOMO
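
As a very rough sketch of the cyclomatic-complexity idea (a toy counting scheme of my own, not how the production tools do it):

    import ast

    # Node types treated as decision points that add an extra path.
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        tree = ast.parse(source)
        complexity = 1  # one path through straight-line code
        for node in ast.walk(tree):
            if isinstance(node, ast.BoolOp):
                # "a and b or c" adds one branch per extra operand
                complexity += len(node.values) - 1
            elif isinstance(node, DECISION_NODES):
                complexity += 1
        return complexity

    print(cyclomatic_complexity("""
    def classify(card):
        if card is None:
            return "empty"
        for field in card:
            if field.strip() and not field.isdigit():
                return "text"
        return "numeric"
    """))  # -> 5 with this counting scheme

Real tools (radon, lizard, SonarQube, and friends) are more careful about what counts as a branch, but the point stands: it's a number you can compute for every change.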


Derek Jones argues McCabe complexity and COCOMO are scientifically unsupported, with little bandwagons pushing them for reasons of fame and/or funding:

http://shape-of-code.coding-guidelines.com/2018/03/27/mccabe...

http://shape-of-code.coding-guidelines.com/2016/05/19/cocomo...


We also have 40 years of research into improving program correctness, e.g. static analysis, test suites (unit, integration, etc.), fuzzing/mutation testing, and the benefits of code review. The idea that simplicity (which I'm pretty sure that nobody in here is using to specifically mean "the lack of cyclomatic complexity") can be measurably improved but that correctness cannot is incorrect.


> The idea that simplicity (which I'm pretty sure that nobody in here is using to specifically mean "the lack of cyclomatic complexity") can be measurably improved but that correctness cannot is incorrect.

Have you seen a program that comes with a formal proof of correctness? I have. And boy, they are really simple.

The end result can be complicated. But the program is broken up into small, simple, easy-to-understand pieces that are then composed.

http://spinroot.com

https://frama-c.com
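
Not a proof, of course, but here is a toy sketch (my own example, nothing to do with SPIN or Frama-C) of that "small pieces, then composed" shape:

    # Each function is small enough that its contract can be checked,
    # or even proved, on its own; the composition inherits the guarantees.
    def mean(xs: list) -> float:
        """Requires: xs is non-empty."""
        assert len(xs) > 0
        return sum(xs) / len(xs)

    def clamp(x: float, lo: float, hi: float) -> float:
        """Requires lo <= hi; ensures lo <= result <= hi."""
        assert lo <= hi
        return max(lo, min(x, hi))

    def normalized_score(samples: list) -> float:
        # Always in [0, 1] for non-empty input, by composing the two contracts.
        return clamp(mean(samples), 0.0, 1.0)

    print(normalized_score([0.2, 0.4, 1.5]))  # 0.7

Each piece is trivial to reason about; the interesting property falls out of the composition.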


I think maybe you mistakenly assumed that response was in opposition to your comment; I read it as a simplification and restatement of what you said.


Yes but I think OP is saying that, paradoxically, prioritizing correctness over simplicity actually makes correctness more elusive than if simplicity were prioritized.


No, that's just the easiest path to it if your only tool is an unaided human brain.


That doesn’t mean simplicity is more important than correctness. The simplest program ever is an empty file, and it doesn’t solve any problem.


Depending on interpretation of terms, I'd agree with either simplicity or correctness first. To disambiguate, I would say:

  Working, simple, correct, optimized.


Would deffo agree to this.

My approach is usually to send out a PR as soon as I can to a group of reviewers/users, and it goes in the following stages.

1) POC - proof of concept. It does 90% of things; some parts are ugly and messy, but it validates a hypothesis. I want to stage this and get it in front of some alpha internal users as soon as I can. First-pass reviewers give a thumbs-up on the plan of attack. Lots of sub-TODOs are listed in the PR. The goal is to discover edge cases and unknown unknowns.

2) Simple - Go through the PR and refactor any existing/new code so it's readable and DRY. If reviewers don't understand the "why" of some code, a comment is left. Now 90% of scenarios are covered; some edge cases may not work, but the edge cases are known. The code is simple and at the right layer of abstraction.

3) Correct, Testable - Edge cases are covered, tests are written, internal users have validated that the feature is working as expected.

4) Polish - if it’s slow, then slow internals are swapped out for fast parts. Tests would mostly work as is. Same with UI, css fixes to make it elegant and pretty.

Sometimes the process is a day, sometimes it’s a week.


> Your example is a case of premature optimization. That is not what the author is concerned with.

I think he is. Premature optimisation is putting the order: fast, simple, correct.

So although the author doesn't explicitly state it, premature optimisation is something that would be avoided if you followed his advice.


>> They called in an outside insultant.

This is either a great typo, or a hilarious moniker I have somehow missed (almost 40 years in the business). Either way, it's worth recognizing.

Equal parts hilarious and accurate as "/in/con/sultants" are often brought in to play the part of the court jester -- they can speak the hard truths no-one else could, and survive.

>>"Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.

I think I wrote a device driver like that, more than once. :( Fast as hell, to the point of outstripping the bitrate of the device it talked to, and about as useful as a sailboat on the moon.


There's a great Dilbert where Dogbert wants to both con and insult someone. So he goes to consult for Dilbert's PHB.


> If the program doesn't have to work, I could make it read 100 cards per second.

> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Correctness isn't binary. Roughly no software today is 100% correct, but for most purposes you'd still pick the current version over a highly complex, slower, more-correct version.

Simplicity can save you a lot of cost as you edit the software, which helps you make it correct sooner. Simplicity and correctness go very well together.


Simplicity (however it is vaguely defined) is desirable, but at the end of the day it is a vehicle for correctness, and hence necessarily subordinate to it. Correctness is the destination of any piece of software (ultimately the goal of any piece of software is to work), and simplicity is just one route to it.


> Correctness is the destination of any piece of software

"Good enough" is the destination of any piece of software. Sometimes that means correct, but more often it means "oh yeah, sometimes it starts acting funny, just restart it when that happens"


Seems a bit like the "worse is better" philosophy:

    It is slightly better to be simple than correct.
See https://en.m.wikipedia.org/wiki/Worse_is_better


Agree. And "good enough" depends on your use case.


It never means "correct". Not to mention that 100% correctness is even impossible.


> Not to mention that 100% correctness is even impossible.

In which case please consider that everyone here is using "correctness" to mean "correctness that is achievable by reasonable human effort". :P It's easy to win any argument by taking one side to its logical extreme and asserting that it is therefore impossible, but that doesn't create a useful discussion. By the same logic we could assert that 100% simplicity is impossible, but that would be just as silly.


They said "destination" being correct, with me interpreting "destination" in the sense of "goal." My point was that some software has the goal of being 100% correct, but most software does not.


It depends on the severity of a bug. If it's very severe, you'll favor the complex but more-correct solution. Otherwise, you'll favor the simple but more-often-wrong solution - because it's easier to fix and get progress.

I use more-* phrases because it's always in a relation. Even NASA can't claim to have 0 bugs although people die if they fail.

bit OT: There's a great article about NASA programming: https://www.fastcompany.com/28121/they-write-right-stuff


Why do you equate working software with correctness? Software that works is never correct by any definition of correctness, because working software is a system that exists in the real world and therefore can never have a specification that fully captures it.


> Why do you equate working software with correctness?

Because the original author neglected to provide an adequate definition of correctness, thereby inspiring an epic HN flamewar as people now must run around endlessly debating semantics. :P


Sometimes people say things that might not be literally true or even "true in spirit", not because they are lying liars who love to lie, but because relating an exaggeration or a caricature or some other sort of not-totally-true thing will have a better effect on their audience than the strict truth would have. As we're now up to 17 comments you've made here emphasizing the skepticism you have for TFA's message, it seems that you value the "correct" more than the "simple". It could be that you are in the intended audience for TFA...


If correctness is some kind of continuum rather than a binary choice, then pick whatever trade offs, cost, and other factors you want.

Plenty of times correctness is binary. In some cases it would be: passes all tests. Or: meets all requirements. Even if it could be "more" correct (or "more" simple), those aren't part of the tests/requirements.


I always thought correctness begins when the result of your work does what it's supposed to do.

Maybe it's supposed to move from A to B, maybe it should do it in under x seconds, maybe it should go via Y, maybe it has to be easily understood by a 6-year-old, etc.

But I can't really imagine something that has simplicity as the only requirement ("nothing" is the simplest thing, so that requirement would always be met with no action). So as long as the other requirements are met, simplicity is usually the nice-to-have "add-on". And you can have correct and simple, or correct and complex. But correct (does the job) trumps simple. And the world is full of examples that prove this point.

I think the author meant "simple should be part of good design" but couldn't properly convey the message. He focused on making the message simple and ignored the fact that it's not correct.


I’ve noticed a pattern where the simplest solution DOES accomplish the goal, but isn’t what a user might consider the “shortest path”. How do you count a workaround where it technically can accomplish the end result, but requires a minor annoyance? What about a major annoyance?

What about a process so painful nobody has even thought of it?


It's nigh impossible to solve any complex issue with the "simplest solution" from the first try. This means that when you're faced with a complex issue you will postpone the fix because it's not the simplest.

And you never know if it can be done in an even simpler fashion later.


This seems to mix correctness and completeness.

A good program does only the correct thing in a particular area. It is known to be reliable in that area, sometimes even formally proved to be so.

Outside that area, an ideal program refuses to work, because it detects that it cannot obtain the correct result. This is normally called "error handling".

There's also some gray area where a program may fail to reliably detect whether it can produce the correct result, given the inputs / environment. A reasonably good program would warn the user about that, though.

A "garbage in, garbage out" program is only acceptable in very narrow set of circumstances (e.g. in cryptography).


Agree. In many (most?) cases there is no formal, verifiable correctness proof. And then you are way better off with the simpler solution once feedback from the real world arrives.


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Seconded. I'm highly confused at how many upvotes the OP has gotten in such a short time despite appearing to say that implementation details matter more than program output. A beautiful machine that doesn't work is, at best, a statue. I'm all for the existence of pretty things that do not need to demonstrate inherent practicality, but most people are not printing out source code for use as wallpaper.


To defend the idea, I think it starts with the assumption that software is often a moving target so "correctness" is at best a temporary state. If you had to use a codebase at any point of time you would obviously want the correct one, but if you look at the lifespan of software it would be better to have the simpler code. Simpler is (usually) easier to fix, easier to extend and easier to learn.

I think the author made this a little inflammatory to get people to think about it in these terms.


Easier to fix, yes. Tends to get more complex in nasty and ugly ways.

Easier to extend, almost never. Proper design for extensibility has an extra bit of complexity over the most obvious approach. Simplistic implementations tend to be tossed away and are good for unscalable prototypes.

Easier to learn, definitely not. The simplest code comes from deep understanding of the problem domain and algorithms. It is almost exactly like achieving brevity in writing without losing the point. It is easy to end up with simplistic instead of simple. There is that famous quote by Dijkstra which I'd rather not butcher from memory.


I think the core consideration is that software isn't static, and a machine that is held together with chewing-gum and silly string can produce the correct output and be a terrible machine at the same time.

What happens when it breaks? What happens when you need to produce doodads as well as gizmos, or a different size gizmo is desired? Who wants to reach inside the silly string and hope for the best?

I'm reminded of that old saying that even a broken clock is right twice a day; an overly complicated piece of software that produces the correct output is only coincidentally correct. Which I think is the point of the article.


That was my first thought as well, but then I realized that by correctness the author means "no bugs", which is quite a bit more ambitious than just making it "work".

I think the author implicitly assumes the software basically works right from the beginning of the article.


If that's the case then the author is attacking a straw man, because nobody (besides Dijkstra) is suggesting that we rewrite all the software in the world in Coq in order to 100% eliminate bugs at the cost of simplicity.


That's not what Coq would do, and a misrepresentation of Dijkstra's position. We certainly could use tools like TLA+ to assist us with existing code.

Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.


I apologize for using Coq specifically; I just needed a scapegoat for formal verification that people might have actually ever heard of. :P I'm happy to debate definitions, which the author of the OP has regrettably omitted (and the contentious definition here is probably the OP's notion of correctness, rather than their notion of simplicity).


You'd be surprised how simple a well written proof can be compared to a program implementing an algorithm to do the same.

That said, Coq itself is not the best vehicle for this. There are nicer higher-order logic languages.


To be honest, I'm fine skipping it. I don't understand why this article is so upvoted anyways.


> Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.

Agreed, see Rich Hickey's "Simplicity Matters" presentation on the difference [0].

Simple-Complex vs Easy-Hard

[0]: https://www.youtube.com/watch?v=rI8tNMsozo0


I agree. What he means is, it should work first, as simply as possible. Then you worry about correctness; correctness here is not referring to working/not working. Correctness means 'how SHOULD this work?' or 'How should this logic or code be written to be most efficient or effective?'

Third is performance.

1. Write a working piece of software that does the job.

2. Refactor to make the working piece of software do the job more efficiently and elegantly.

3. Refactor to make the working piece of software do the job as fast as possible.


Seen this several times when someone refactors. The code is much simpler and easier to read, but does not actually work for several important test cases anymore.

I've never thought of simplicity adding upfront cost. That's probably true, but also true that it pays dividends later on in the project.


If it no longer works then I don't consider that to be "refactoring" but "rewriting".

I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.

And think of the word "factoring" as in high school algebra, or rather "factoring out" something.

Say I have a dozen instances of the same calculation. How about we refactor it into a function and replace all the instances with a function call?
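
For example (a made-up snippet, just to show the kind of behavior-preserving transformation I mean):

    # Before: the same calculation repeated inline wherever it is needed.
    cards_read, elapsed_seconds = 600, 60.0
    report_rate = cards_read / max(elapsed_seconds, 1.0)
    alert_rate = cards_read / max(elapsed_seconds, 1.0)

    # After: factor the calculation out once and replace every instance
    # with a call. Behavior is unchanged; there is now exactly one place
    # to read (and, if needed, fix) the formula.
    def cards_per_second(cards: int, seconds: float) -> float:
        return cards / max(seconds, 1.0)

    report_rate = cards_per_second(cards_read, elapsed_seconds)
    alert_rate = cards_per_second(cards_read, elapsed_seconds)
    print(report_rate, alert_rate)  # 10.0 10.0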


> I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.

This kind of transformation is precisely what the person who coined the term meant: Taking code which works and turning it into easier-to-read code which works precisely as well, because refactoring never introduces a change in behavior.

To quote Martin Fowler and Kent Beck:

> A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior… It is a disciplined way to clean up code that minimizes the chances of introducing bugs.

[snip]

Not a direct quote this time:

> Fixing any bugs that you find along the way is not refactoring. Optimization is not refactoring. Tightening up error handling and adding defensive code is not refactoring. Making the code more testable is not refactoring – although this may happen as the result of refactoring. All of these are good things to do. But they aren’t refactoring.

https://dzone.com/articles/what-refactoring-and-what-it-0


Simplicity is ALWAYS something desirable to achieve. Correctness comes first.

As code is originally written, people are (or should be) using the most "obviously" simple approach.

A breakthrough in simplicity is often the result of additional thinking and hard work (and cost).


Maybe the test cases / requirements are "wrong"? I think simplicity is the ultimate test that you found a good problem!


How sure are you that a given program is bug free? I feel that only very rarely would I ever assert 100%. In fact, I would generally assert with 100% confidence that there is some overlooked edge case. How many users that bug may affect... well I would generally give that a small percentage, but it still doesn't hit the boolean state of correct.

So correctness is generally never satisfied in my mind. At any given moment, the programs I am working on are in some way broken in my mind. Even if the other programmers thought that correctness was priority number 1, I will never consider the program correct. I will always suspect there is some snake in the grasses.

I suppose you could feel the same way about simplicity. I think the most charitable stance would be to give them the same level of importance. Overly complex code cannot easily be proven to be correct amid changing business requirements. Easily testable, complex code with a full functional test suite is at least simple in one sense. Patently incorrect code is hardly valuable regardless of how easily one can understand its function.


None of them are absolutes. Just as we do not expect that "simple" before "fast" means "the code must be 100% as simple as it could possibly be before we begin even thinking about speed", we do not mean "the code must be 100% correct in every possible way before we even start thinking about simpleness"

It is relative preferences, more about what takes precedence over what than an absolute measure. Nothing is ever perfectly correct, nor perfectly simple nor perfectly fast.


I cannot be sure the code was bug-free. It was an anecdote in a book, the focus of which was more about the psychology of those who wrote the code. But it worked, and the first program did not work. The non-working code's author took pride in the speed of his code.


Don't get me wrong, I think it's an excellent anecdote. I just shy away from a focus on correctness, since in my experience people who prioritize correctness above all else usually make a shambles. I feel that people who prioritize simplicity still understand that it needs to work more or less correctly.


Yes, correctness absolutely comes first.

One way to achieve greater simplicity is to negotiate for fewer/simpler requirements for the first revision. There's often a core set of functionality that can be implemented correctly in a simpler way, and that gets the work done. Once that's in place it's interesting to see how often people lose interest in what were "hard" requirements before. It's also common that new asks have little to no resemblance to those unimplemented features, and are instead things that they found out they needed after using the new system.


The thing is, correctness is often a transient property. Requirements are frequently changing or evolving. What's correct on a Tuesday may no longer be correct by Friday. Under these conditions it's important that the software be amenable to change. It's for that reason I believe simplicity is more important than correctness.


Simplicity is also a transient property.

> Under these conditions it's important that the software be amenable to change.

At the same time, under all conditions, it is important that the software actually works (i.e. correctness), which is why it's more important than simplicity. Irate users who come to us telling us that our program doesn't work will find little comfort as we regale them with how simple it is.

First, make it correct. Then, make it simple. If requirements change what correctness means, then make it correct again, then make it simple again.


I encountered a similar argument in Clean Architecture by Robert Martin of correctness vs maintainability, where he argues for maintainability over correctness. The argument goes that if you had to choose between code that did the wrong thing but was easy to make do the right thing, and code that does the right thing but is hard to change, you should always pick the former.

He also talks more abstractly about the value of software (as opposed to hardware for instance) being primarily in its "soft"-ness, or ease of changing.

Ultimately this comes from his point of view as an architect, who fights more for system design than say, a PM might for user features. I've encountered the opposite school of thought that says: MVP to deliver features, refactor/rewrite later. I think the strategy to use will depend on the project and team (budget, certainty, tolerance for failure, etc)


"A program that produces incorrect results twice as fast is infinitely slower."

- John Ousterhout


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

It is true mainly for one-time contracts where you actually might not care about simplicity at all. Enough is enough.

However, in the case of iterative projects keeping complexity under control has much higher priority including top priority for very big projects. Complexity and high entropy can easily kill everything.


I don't think this is the form of correctness discussed here. I believe this is more the Correct as discussed in the famous "Worse is better" article.


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness

This may be a matter of definitions. It may be worthwhile to distinguish between general correctness and full correctness, i.e. as close to 100% provable correctness as you can get. That way we can dismiss clearly degenerate cases (you can always write a one-statement no-op program that will be simple but do nothing).

General correctness is what I want in most cases. Example: voice dictation. It requires a final read & polish, but errors are infrequent enough to save me a lot of time. Full correctness is usually requested for jet avionics, nuke power plant control, etc.

With that addition one should optimize for general correctness and simplicity as a first goal, full correctness and performance as a very distant second.

When I write software (or build systems), what I end up with is usually significantly different from what I started with; not externally, but under the hood. Keeping designs simple (on large teams, being almost militant about it) helps large systems morph as they go from a proof of concept into an actual thing. My 2c.


> It may be worthwhile to distinguish between general correctness and full correctness, i.e. as close to 100% provable correctness as you can get.

Which is the root of the endless back-and-forth in this thread: a program has to do what it says on the tin ("general correctness") before anything else, and then probably be as simple and as "fully correct" as possible. But it's easier to posit a distinction between general and full correctness than to actually find exactly where the dividing line between the two lies. A blog post to discuss such a dividing line might have been valuable, but the one we've got here unfortunately just handwaves away all the hard questions.


There is no line between the two. It's something that depends on how much effort and time is put into this, what methods were used, etc. But, the world doesn't actually care about this specific property, as it has no inherent value. Instead we have various levels of assurance of more practical properties, like safety, but not correctness.


I assure you the world cares whether your algorithm is generally correct, passing unit and integration tests, etc. This is programming basics.


You cannot have correctness without simplicity.

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system." - John Gail


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Yes, but what is "correctness"? It's not usually so binary. Get to "good enough" and move on to the next thing.


This book, "The Psychology of Computer Programming" is by Gerald Weinberg, an author that really explores the design and complexity of systems. I recommend his other books, esp, "On the Design of Stable Systems : A Companion Volume to an Introduction to General Systems Thinking"

http://www.ppig.org/library/book/psychology-computer-program...


> Simplicity always comes after correctness.

Strong disagreement here. A program that isn't kept simple will stop being correct, fast, or any desirable quality over time.


Nobody's saying that simplicity is unimportant, but if the failure mode of a loss of simplicity is that the program is no longer correct, then it inherently suggests that correctness is the primary metric to strive for. :P


Ability to change ("simplicity") is the key metric that allows you to maintain, or further, a desirable invariant. E.g. in B2C, "correctness" may be less valued than another trait. Do you prefer to know something, or to be able to learn fast?


Works today, but tomorrow the complex solution does not do what is needed and presents a barrier to delivering what is needed now. In a static environment you would be right, but static environments are vanishingly rare for software, almost by definition (because work stops after one iteration!).


What is this comment trying to say? That a simple program that doesn't work today is better than a complex program that works today, but that might not work in some nebulous future? Not all complexity is reducible. The point of software is to work correctly, not to satisfy the author's aesthetic notions, which is what most of the modern hype over simplicity boils down to.


I think you missed the parent comment’s point, which is that a highly complex implementation might have problems very similar to overfitting in statistics. Simplicity in some sense means “room to expand to handle future unseen cases.” If an implementation is very complex, chances are it has some assumptions baked in somewhere and when it hits the wrong corner case or a new requirement is added, it manifests not as some mere refactoring annoyance, but as a complete meltdown where the system is revealed to be incapable entirely, and has to undergo major delays due to huge refactoring that can lead to ripple effect problems in other parts of the system.

In that sense, simplicity is like insurance against the future, and so at any given moment you don’t solely care about the system’s total correctness or performance right now but also you care about some diversification benefit of investing in simplicity too.

Very much like how you don’t choose stocks based solely on what will have the highest expected return right now, but instead you also incorporate some notion of risk management when optimizing.


What I am saying is that a ball of mud that passes all tests is worse than something clear that fails corner cases (genuine corner cases) because sorting that out can be done. Whereas the ball of mud will definitely fail in the future and when it does nothing will help you apart from a complete rebuild. "It passes the tests" simply doesn't cut it.


This makes me think of the Donald Knuth quote, "Beware of bugs in the above code; I have only proved it correct, not tried it."

More info here: https://en.wikiquote.org/wiki/Donald_Knuth


Which is why we now have automated theorem provers that can refine proofs into programs.


+1. Simplicity won't work if it isn't correct.

Simple correctness is the best way to help beginners use software to get faster results. Fast isn't all about computation - it's about taking no more of the user's time than reasonably necessary.


Gerald Weinberg. A classic; it inspired much of DeMarco and Lister's Peopleware.

https://leanpub.com/thepsychologyofcomputerprogramming


Does the time it takes to write come into play here at all?

I’m a novice of sorts. Thanks.


And since "fast enough" is a part of "correct", the order should really be "correct, fast enough, simple".


How do you know it was more complex? From what I read it was slower, which is different.


It has been too long. The book described the approaches that both programmers used, and I simply no longer remember those details. As I recall, the working program, when explained, gave you the "Ah ha!" experience, and thus was simple enough. The focus of the entire book was more about psychology aspects. One chapter was about how programmers come to feel "ownership" of code.

Another thing: what was an entire program back then, is sometimes a mere function, or maybe a class or code library today.


If it's not simple, it might be incorrect and you'd never know until it bites you.


I was going to say the same.

The conclusion I came to personally was always

Accuracy > Maintainability > Performance

in that order


> Correctness comes first.

I agree with this.

Interestingly, the post is very simple, and not correct. I prefer posts which are slightly more complex but correct, but those don't get as many upvotes.


> Correctness comes first.

Not always. Have you ever used a SNES emulator? There is one emulator that is more correct than all others combined - it's called BSNES and it's the most true to the original SNES hardware of all the available emulators. Yet it is horrifically memory/cpu hungry - that correctness comes at a huge cost.

So no, correctness does not always come first, especially if you value other things like user experience.


Your definition of correctness is wrong in this case. If the purpose is to emulate the hardware as accurately as possible, BSNES wins. If the purpose is to make as many games as possible enjoyable for as many people as possible on the lowest common denominator hardware available today, BSNES loses.

There's no clinical definition of correctness here. Intent matters.


Correctness does come first, or else you can't play games the way they're intended to be played, but the way BSNES does it is wrong.

I believe that it does so through attempting to mimic the working circuit logic and chips, the physical hardware, within code alone, hence its requiring a powerful computer. This is an incredibly unoptimized way of doing it, especially since it's formed out of incorrect assumptions on what "accurate emulation" is.

It's the effects that we want, not the logic. If you're going to emulate something that, through common sense, shouldn't even require that much power, you're doing it wrong.

The saying goes, "keep it simple, stupid!" To overcomplicate things, like the programmer of BSNES did, results in unwieldy and unoptimized code.

Even Nintendo doesn't use this tactic with their official emulators. Yeah, sure, they're known to be inaccurate at times, but that's only because Nintendo isn't aiming to build a general emulator to handle all case scenarios. Besides, many of the inaccuracies, as far as I could understand, deal with undefined behaviors of the system, something only things like glitches and bugs ever take advantage of.


The follow on from that is “performance is a feature”. If the emulator is supposed to emulate a fun, playable, game, then perf would be a required feature :-)


BSNES trades off performance for emulation accuracy. Other emulators trade off emulation accuracy for performance. No widely-used emulator that I know of has any care for simplicity at all (all of them are chock full of one-off special cases to benefit specific games). This has little to do with the screed in the OP, especially given how little the OP appears to value performance.


Correctness then is a trade off against other factors. Also it seems correctness in this case is a continuum rather than a binary choice. And you would prefer to trade other important factors for "true" correctness.

But I'll assume that you want the software that calculates your paycheck to be correct.


> So no, correctness does not always come first, especially if you value other things like user experience.

I think you're using a different definition of "correctness" than most other people in this thread. Which is understandable; a lot of folks are using different senses of it. What matters for the definition of correctness of an emulator is not, "Does this perfectly and indistinguishably replicate the hardware?" What matters is, "Can this emulate the cart I want to play right now with a good experience?" and perhaps, "Will this allow a malware maker to own my entire computer if they run a cleverly crafted fake cart file?"



