Conventional wisdom as an anti-pattern (bower.sh)
53 points by qudat on Feb 27, 2023 | 89 comments



I don't think conventional wisdom is an anti-pattern, exactly.

Better to think of it this way:

It's a starting point, not an ending point.

E.g., for DRY, if you notice yourself copying and pasting the same code around you might spend a few seconds thinking about whether there's some commonality worth capturing or abstraction worth building right now or not. The answer could be yes or no (for a variety of reasons), but it costs very little to reflect on it.

(I'm not a fan of DRY though, as it's generally presented. Whether something is a good abstraction point or not doesn't really directly depend on how many times you're writing that code. WET, as presented here is even worse in this regard. If you're copying code around there's probably some useful abstraction to make, but simply barfing out a utility function every time probably leads to a lot of bad abstractions. Code is expensive -- to write, test and maintain, so it's good to minimize it. But abstractions are expensive too, since they directly affect the code written in terms of them. Bad abstractions don't end up making anything better, and probably make them worse since they will force you into a form of spaghetti code you might otherwise have avoided.)


> E.g., for DRY, if you notice yourself copying and pasting the same code around you might spend a few seconds thinking about whether there's some commonality worth capturing or abstraction worth building right now or not.

DRY doesn't stop the kind of people who blindly copypasta, anyway.

To me, DRY is more about avoiding desync. I should not define a constant in two different places--that will desync into a bug. I should not define a data structure in two different places--that will desync into a bug.

A function, on the other hand, might actually want to desync after copypasta.
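
To make the constant case concrete, here's a minimal sketch (hypothetical names; Python just for illustration):

    # Duplicated "truth": two copies that must be edited in lockstep.
    RETRY_LIMIT_BILLING = 3   # bumped to 5 during an incident...
    RETRY_LIMIT_EMAILS = 3    # ...while this copy was forgotten

    # Single definition: every caller shares the one constant,
    # so there is nothing to desync.
    RETRY_LIMIT = 3

    def charge(attempts: int) -> bool:
        return attempts < RETRY_LIMIT

    def send_email(attempts: int) -> bool:
        return attempts < RETRY_LIMIT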


Pretty sure this is what DRY is supposed to be. Should be renamed really. If something needs to stay in sync it should be defined once.

Things that look sort of similar don't necessarily need to be defined only once, because it's okay for them to diverge or desync, and they probably eventually will.


> Should be renamed really.

There’s SPOT = Single Point Of Truth [0], which fits better for this. When multiple pieces of code are to be kept in sync, it’s because they’re based on a shared truth. And that truth should be defined in a single place, and not have multiple copies that need to be kept in sync.

[0] https://wiki.c2.com/?SinglePointOfTruth


Problem is that the very definition of a cache is that you have the same thing in two (or more) places, which have to stay in sync. So blind adherence to DRY amounts to never using the technique of caching.
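
To illustrate (a toy sketch with hypothetical names): a cache is the same value deliberately held in two places, with the sync work made explicit.

    # The "truth" and its deliberate copy.
    database = {"user:1": "Alice"}
    cache: dict[str, str] = {}

    def get(key: str) -> str:
        if key not in cache:
            cache[key] = database[key]  # second copy, by design
        return cache[key]

    def update(key: str, value: str) -> None:
        database[key] = value
        cache.pop(key, None)  # keeping the copies in sync is our job now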


It's more of a general guideline. In practice we break it all the time, because it's not always possible, or because we want to trade it off for something else (performance, in your case).

Though generally, I think DRY in the code-style sense and DRY in the data (source of truth) sense are related but different things.

However, even at Google there is an if_change_this_change_that lint rule to keep code changes between different files in sync, meaning there are cases where you will have to define the same thing twice.
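
For anyone curious, such annotations look roughly like this (the exact syntax varies by toolchain; this follows the publicly visible LINT.IfChange convention, and the paths are hypothetical):

    # server/constants.py
    # LINT.IfChange
    MAX_PAYLOAD_BYTES = 4096
    # LINT.ThenChange(//client/constants.js)

The lint then flags a change to the guarded block when the referenced file doesn't change with it.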


If it's a general guideline, it reasonably extends to cover copy-paste code scenarios too.


Data structures have traps like this too: two of them have the same properties, so they look like something to DRY up. But add context: one is used to exchange data with the DB, and the other to exchange data with the frontend. Now we want these to desync, because there are valid reasons for them to diverge. At the start of the project they seem to have the same properties, but they are used in different contexts.
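
A sketch of that trap (hypothetical fields): the two records start out looking identical, but keeping them as separate definitions lets them diverge safely.

    from dataclasses import dataclass

    @dataclass
    class UserRow:          # the shape we exchange with the DB
        id: int
        email: str
        password_hash: str  # must never leak past this boundary

    @dataclass
    class UserView:         # the shape we exchange with the frontend
        id: int
        email: str
        display_name: str   # diverged later: a frontend-only field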


I feel there are two separate things going on:

First, the question of deduplication, a/k/a normalization. Deduplication is cheap, and it's "a two-way door, not a one-way turnstile": it's cheap to re-duplicate when you find that one of the users of your de-duplicated function or whatever needs new behaviour.

Second, there is the question of the right versus wrong abstraction: If two different things use the same thing, that doesn't necessarily mean that they share the same abstraction conceptually. But of course, sometimes the duplication is a hint that there is some semantic commonality, and when you get this right, it's glorious, but if instead of de-duplicating we actually build the wrong abstraction, it can be extremely expensive to fix. That's more of a "one-way turnstile" problem.

To me, when I see code that could be "DRY'd up," I have to ask myself whether it represents an opportunity to write a better abstraction, but I treat that decision conservatively, because of the cost of getting it wrong. In many cases, I choose to de-duplicate the code without creating what might be "the wrong abstraction," and it is often less expensive and lower risk to start there and only later consider abstractioneering.
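
A tiny sketch of what I mean by de-duplicating conservatively (hypothetical names): a plain private helper is the two-way door. If one caller later needs different behaviour, you just inline it back.

    def _normalize_amount(cents: int) -> float:
        return round(cents / 100, 2)

    def invoice_total(line_items: list[int]) -> float:
        return _normalize_amount(sum(line_items))

    def refund_total(refunds: list[int]) -> float:
        return _normalize_amount(sum(refunds))

By contrast, making invoices and refunds share a class hierarchy because they both sum cents would be abstractioneering: a one-way turnstile that every caller then depends on.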

p.s. The tension between de-duplication and abstractioneering resembles the tension between is-a and has-a, often expressed by classical OO programmers as "inheritance versus composition." In those terms, I'm really describing my inclination as "Prefer composition to inheritance by default."

p.p.s. Sandi Metz asserts that de-duplication is creating a new abstraction: https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction. I feel that there is a sliding scale from "extract method" to "These two things share a superclass with a private message," but read her thoughts and come to your own conclusion.


Manual testing is for chumps (the US government is a good source of chumps). It's a great way to stretch a project out and balloon the budget, because you need a separate team (maybe even multiple teams) to do in a few weeks or months what could have been done in a few hours or days by a couple of computers. Why anyone would prefer manual testing for anything is beyond me; it's a great source of errors, because it becomes your big time crunch in large projects. You end up cutting out tests because "that one never fails" (until you cut it) and rushing through the tests, handwaving issues with "I'm sure we just fat-fingered it".

Automated testing takes less time, fewer people, less hardware (normally, but not always), and can be kicked off by anyone in the project team (if you're sane about your project management) at any time. So you can run your suite today and know (worst case on huge projects) by Friday or next week that it's actually passing or failing. If you stick with manual testing, you won't know about failures for weeks, months, and sometimes years. Genius if you're going for government money, moronic if you're spending your own.


> Automated testing takes less time

Hmm, this has not been my experience, but I spend a lot of time in the UI. Writing UI tests is a massive pain: the functions are impure, naturally deal with side effects, and involve robo-interacting with a fake browser. The amount of code required to simply scaffold a fake browser that works reasonably is a monumental feat, and that's before you start writing a single test.

Further, it's not enough to measure how long it takes to write a test but also to maintain them. That's the time creep that can be absolutely brutal.
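
For flavour: even with a mature off-the-shelf tool (Playwright here, just as an example; the URL and selectors are made up), most of a UI test is orchestration rather than assertion, and every markup change can break it:

    from playwright.sync_api import sync_playwright

    def test_login_flow():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://app.example.test/login")
            page.fill("#email", "user@example.test")
            page.fill("#password", "hunter2")
            page.click("button[type=submit]")
            assert page.locator(".welcome-banner").is_visible()
            browser.close()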


We do a lot of "mock 15 different things so that you can unit test this one thing." I too think it's a drag, because it tends to ossify things: not only do I have to change the code, I have to change all the unit tests and mocks. And it just doesn't seem to catch a lot of stuff, or at least a lot of bugs slip through.

I wish we had spent that time on better anomaly detection and defensive coding and stuff like blue green deployments. Users are the best testers and it's not really a big deal if 2% of your users see a bug for 10 minutes.

Maybe different if people will die or if money will be lost. But for general business or consumer stuff I think it's fine.


> Users are the best testers and it's not really a big deal if 2% of your users see a bug for 10 minutes.

This is the product engineer mindset. Your job is to deliver value, and you use whatever tools you deem appropriate and most efficient.

But most engineers tend to be disconnected from value (both by their own choice, and by organization structure). When you don't know what the value of what you're producing is, you start clinging to other signals, most of which are actually noise.


> Why anyone would prefer manual testing for anything is beyond me,

Formal software testing is great, particularly if you're working on a hard problem with clear goals like building some hypothetical API to do some financial calculations or transactions where an error might cost tens of thousands of dollars or more.

But there are some cases where literal manual testing is the best business decision, even if it's not the best software engineering decision.

To give one example: have you ever worked in an agency environment?

What do clients do when you hand them off any scale of project? Some people within their organization will then go over every inch of the software and see if it works as they expected. They will literally never ask once what your test coverage is. But they will notice if some test data they input themselves doesn't give the right answer, or if some minor UI element they never really thought through doesn't seem to work the way they like.

Writing tests in this environment is 99% intellectual masturbation.


> But they will notice if some test data they input themselves doesn't give the right answer, or if some minor UI element they never really thought through doesn't seem to work the way they like.

Those people are not going to stop at one, so you would be foolish not to write a regression test for the behavior they wanted.

When they find the next thing they think is broken, and you try to fix it, you could regress your earlier fix for the first item.

In reacting to the user reports, you may be breaking other things that are not tested, but at least you pin down the behavior that the users are reporting.

UI element not working right in some way could be a genuine "pass" for testing. I mean, writing some monkey test that feeds events into a UI to check that it's in exactly some expected state could be a big waste of time.

Probably if you were writing a reusable widget framework, you might want that sort of testing, because you could make an inadvertent change which makes every instance of some widget behave differently in downstream applications.

If you have some complex behavior in your UI that doesn't come from the underlying widgets, and has to be right in certain ways, then that could be worth testing.


Your comment was dead when I saw it so I vouched for it.

> But there are some cases where literal manual testing is the best business decision

I will agree, manual testing can be the best business decision. Especially if it's someone else's money and time you're spending on it rather than your own. Or if your system is trivial or short-lived.

I don't work on short-lived systems, and manual testing has repeatedly been a major hurdle to improvement overall when dealing with non-trivial, long-lived systems in my experience. Manual testing either misses too much, or takes too much time if it's comprehensive. And even if it is comprehensive, or perhaps especially if it's comprehensive, manual testing gives you many false positives and negatives because it is error prone.


There are many different worlds out there.

I think most of the developers on HN have predominantly worked on long-lived products, where a culture of heavily testing things makes perfect sense and the idea of doing anything else seems strange. If you've got a customer base on some SaaS that depends on things working as expected, then yeah, having lots of tests to make sure that your system is consistent and that changes to various edge cases spread over the course of years don't break subtle things is vital.

So there's probably a sort of developer cultural disconnect when somebody who has often built urgently needed, shorter-life, bespoke projects points out that there are types of projects and environments where software testing brings a much different value proposition. In some cases, you're not building something that's intended to last for years: maybe you're building software that's intended to be used in a booth demonstration for 3 days at one industry conference, for example. In some cases, the speed of development and an absolute lowest cost take priority over perfect software engineering principles, or otherwise something couldn't even be built.


When Barry from HR doesn't like the way a page works, that's a feature request, or maybe it's a bug.

When a bit of errant code refunds the customer $10000 instead of $100, that's a BUG (ALL CAPS).

The two are not mutually exclusive. You need human critics looking at what you create, but you also need objective tests, and automation is very good at objective tests. You might even ask Barry to look at your creation earlier, further reducing the need to inject more humans than you already have.


Just throwing out the opposite view: I saw money burned on test automation because a manager needed to check the box. They got a guy who set some automation up in 6 months and then left. The guy did not know the project, and the project was evolving at a rapid pace, so none of the GUI automation was working a couple of weeks after he left - actually, a lot of stuff stopped working while he was still on it, because the project was moving under his feet.


> The guy did not know the project

That's the bigger issue. Why would you put someone in charge of testing something who doesn't know it? That's brainless. Your manager was not very competent, sorry you experienced that.


The "convention over configuration" can be regarded as self-referential. These guidelines are often good default choices in situation where you don't have a strong reason to go either way. You wouldn't write a program which is configurable between those choices (e.g. exhibits more code repetition or less based on a run-time setting): so you go with a good default convention. If you can avoid repeating yourself, that's usually good; machines should do the repetitive work rather than people. If you know that two or more repetitions of something are only initially that way and soon going to diverge, then might as well fork those copies now. Or maybe do allow yourself to repeat yourself, but via macro. Some compiler optimizations violate DRY by design: function inlining, and loop unrolling. It's invisible to humans who aren't disassembling the output, or measuring code size changes.


I'm not sure the opposite of "configuration" should be called "convention" - the worst abuses I've seen of punting to user configuration have been ones where the best solution was only determinable at runtime. (e.g. compare user-configured fixed window sizes in the doomed ISO protocol stack with dynamic window control in TCP)

Typically the user doesn't know best - they copied a config that worked for someone else, years ago on a different machine and workload, and don't know what any of the parameters actually mean. In the worst case (sendmail?) you have O(0) people who actually know how to use the configuration language, and 10 competing higher-level config generators.


Right. They're razors, not commandments. Given a choice between two reasonable alternatives, it might be wise to pick the one that's {unrepetitive|simple|explicit|extensible|testable}.


// Tests waste a lot of time

Back when I was a developer, I had a manager who was super religious about unit tests (this seemed incongruous to me: I was a senior lead at the time, and my vibe was that such tests weren't what the project really needed most).

The basic "problem" I have with tests is that - at least maybe in my experience - things tend to go wrong in cases that you haven't thought about so your tests don't cover them anyway.

Like, what's the point of asserting that add(2,2) will yield 4 if you've already been thinking about that? And then, if you've already thought about more complicated cases (eg negative numbers, cases that overflow) then your code already very likely handles them. And if you hadn't thought about them, your tests won't cover them either.

I get it, it's cool to be able to run a test suite, especially after refactoring something, and be like "yup, add(2,2) is still 4!", but I'm not sure how much actual value vs cost/annoyance I've seen from it.


I attribute some of the greatest successes of my career in part to having good unit test coverage. I have seen no other pragmatic way to solve the problem that, in the worst case, a minor software update can be O(N) expensive to make, where N is the size or complexity of your program. That little update has a habit of breaking things in code paths you'd never expect. With a good suite of unit tests, you can validate that your program likely still functions, and, far more importantly, you can do the validation quickly, which results in fast feedback loops and iteration times.

Fast feedback loops are essential to high developer productivity. When developers are being slow, in my experience it usually is related to the system they are working on having slow feedback cycles.

> The basic "problem" I have with tests is that - at least maybe in my experience - things tend to go wrong in cases that you haven't thought about so your tests don't cover them anyway.

You are right in that unit testing can't make up for bad engineering. One advantage of investing time in it, though, is that it gives you potentially more opportunity to think through those cases. And, as a consolation prize, you write unit tests for the bugs that make it through, and now you have a great regression suite.

Furthermore, a major issue that often comes up when unit tests are omitted or not taken seriously is that tests can't easily be added after the fact. A system has to be designed with unit testing in mind, or else you end up with a bunch of untestable code. I believe that many legacy codebases remain without tests because the act of refactoring them enough to support any testing at all would be both time-consuming and high risk in terms of potential breakage.


I agree 100%, but just want to add my2c.

Unit tests are in general never wasted, in the sense that they trap regressions before release. I don't think they necessarily are good to validate work though, or at least, they do so poorly. It's easy to write lots of tests which don't cover the areas your program will bork on in production, which I think is the GP's point.

The value of tests compounds over time. The longer the software is in production and maintained, the more work the unit tests have been doing. It could be a paltry 20% code coverage set of tests, but nonetheless they're fighting the good fight. The value of a test suite is expressed in its integral over time. A few tests can do a lot of work; on the flip side, a lot of tests are useless when they're continuously thrown away due to pivots in business logic.


> I don't think they necessarily are good to validate work though, or at least, they do so poorly.

For me, I've found they actually do help significantly to catch bugs even for my initial PRs. To know your code works, you either have to run it manually or else write a unit test, and you must do so for all relevant code paths. How much time do you save by avoiding the unit test, if any at all, and how long before you recoup that time via your integral? In some projects, I think the time saved is close to zero, whereas the benefits accrue almost immediately.

You are right that there is a tradeoff here, and I have been part of a project where I wrote a bunch of tests for code that ended up being a throwaway prototype. You could argue that I wasted time and resources there. On the other hand, I've seen more prototypes get shipped to production than scrapped, so maybe the calculated risk is worth it even in scrappier contexts.


Testing is important, because there are many ways things can go wrong.

Automated testing is critically important, because "manual testing" is often another phrase for "testing we plan to do but won't do often if ever".

Unit testing may or may not be the best approach. Unit tests are fast, and they make it easy to answer very specific questions, but they're tied to the specific software design. Since computers have become much faster, I generally recommend that people use more integration testing, because integration tests reveal when the components fail to work together (for whatever reason), and their added CPU effort is typically no big deal.

That said, I agree that testing is only useful if it's plausible it would detect a problem. But developers are really good at asserting something "can't happen" and then it does :-).
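
A toy illustration of the distinction (hypothetical functions): the unit test pins one piece in isolation, while the integration test catches mismatched hand-offs between pieces.

    def parse(raw: str) -> dict:
        key, _, value = raw.partition("=")
        return {key: value}

    def store(db: dict, record: dict) -> None:
        db.update(record)

    def test_parse_unit():
        assert parse("a=1") == {"a": "1"}

    def test_parse_then_store_integration():
        db: dict = {}
        store(db, parse("a=1"))
        assert db == {"a": "1"}  # fails if the pieces disagree on shape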


I'm not really a developer here, more of the wedge between the customer and developer when things go wrong, but don't these tests cover the continuous updating of the application as specifications change? Especially as multiple people are working on the code and may have different ideas of what's going on. May not catch everything, but can trigger a warning in many cases.


Exactly this. Also in codebases where the authors seemed opposed to writing good comments, tests are extremely useful in documenting expected behavior in edge cases, how an optional argument is supposed to be dealt with when missing, etc.

Not to mention that people are often a lot less rigorous with "manual testing" than they should be. They don't think of scenarios, they skip scenarios, because nobody's checking on them. While when you write tests, you're creating a permanent record where you can be held accountable if you miss something.


> They don't think of scenarios, they skip scenarios,

Bane of my life.

Me: "Hey, this big customer we're not going to tell to change their behaviors and pays us a lot of money is using this product in this particular way. We should put in a functional test case to ensure that we don't mess this up"

Dev/QA: "Eh, no"

This generally leads to a support call half a year later with the customer panicking, support groaning, and the developers panicking because they 'could have never predicted the customer using the application in this manner'.


> in codebases where the authors seemed opposed to writing good comments, tests are extremely useful in documenting expected behavior in edge cases

In other words, tests are useful in teams with awful engineering culture. I tend to agree; in my experience they are almost synonymous.

If you treat engineering culture as weather (i.e. something you have no control over), then tests are a good tool to address your problems.

> you're creating a permanent record where you can be held accountable if you miss something

Bureaucracy is another word for it. You know who excels at it? Government. And you know what government is really bad at? Running business and innovating.


Unit tests are comments that are forced to stay up to date and attached to the code they are for.


Ah - I can see those kinds of tests as useful. I guess they are something like "integration tests" or something like that which tests the user-visible outputs of the system. My comment was about very low-level unit testing, where it's literally every function of code must be covered, isolated from other parts of code.


Tests can waste time because developers treat the test suite as its own application that needs to be managed.

Stop. Doing. This.

With some exceptions, tests should NOT be DRY. They should be dumb, explicit, repetitive, conventional, and as independent as possible. This allows tests to be more quickly understood by any given developer and changed without breaking other tests for stupid reasons unrelated to the actual application.

So much of my time has been wasted at every job I've had because developers believed that the tests should be really clever and use a bunch of magic and shared logic to "save time", which in reality becomes difficult to manage and understand. Most tests are crap for that reason alone.
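
What I mean, in miniature (hypothetical function under test): the explicit version repeats itself, but each case reads top to bottom with no shared machinery to untangle.

    def discount(price_cents: int, member: bool) -> int:
        return price_cents * 90 // 100 if member else price_cents

    # Dumb and repetitive, and that's the point:
    def test_member_gets_ten_percent_off():
        assert discount(10000, member=True) == 9000

    def test_non_member_pays_full_price():
        assert discount(10000, member=False) == 10000

The "clever" alternative -- a loop over a shared fixture table behind three helpers -- saves a few lines and costs every future reader the context.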


On the other hand, I've seen test suites where each test consisted of 98% boilerplate that got copy-pasted every single time. Depending on the test setup, they would mix and match hundreds of lines from other tests, which sometimes introduced subtle, hard-to-spot errors. There wasn't much magic, and it was a nightmare to read and maintain.


The system using the libraries should exercise those libraries sufficiently that integration tests catch those kinds of errors.

If you are allowed only one test, it should be an integration test that runs in under XX seconds so you can maintain flow and correctness.

If a system is under constant churn, test at the interface boundaries; unit tests are a further burden during refactoring.

Unit tests ensure that the internals function in a certain manner; external users shouldn't be concerned with internals. Unit tests are extremely useful in the core of a system like an evaluator, a rules engine, timezone, geospatial, computational geometry, etc. Your type system should capture many of the invariants in your system.

I know what you are trying to convey, but the example `add(2,2)` encourages conversational confusion. If you are verifying hardware, that is a perfectly valid test, for a high level codebase, it is most likely a worthless test.




> what's the point of asserting that add(2,2) will yield 4 if you've already been thinking about that

There have been plenty of times when I have run a test like add(2,2) and I don't get 4. Usually it's a typo or some logic error rather than my "not thinking about it". Just like what was mentioned in the article, the alternative is manual testing.

One benefit of testing is that then you start thinking about edge cases you didn't think about before. My common thought process: "What other tests should I add in here? What are the boundary conditions that are possible? Oh, running this function with an empty list even though it generally should have something in it. Huh, I didn't even account for that in my code!"
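
That exact discovery, in code (a hypothetical function): the happy path passes, and writing the second test is what surfaces the unhandled case.

    import pytest

    def average(xs: list[float]) -> float:
        return sum(xs) / len(xs)  # blows up on []

    def test_average_happy_path():
        assert average([2.0, 4.0]) == 3.0

    def test_average_empty_list():
        # "Huh, I didn't account for that": decide here whether []
        # should raise, return 0.0, or return None, then encode it.
        with pytest.raises(ZeroDivisionError):
            average([])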


When you have testable code, because you started writing it in such a way, you might not have to write tests for all the failure modes into your test suite right away.

Then, when things go wrong in a way you hadn't thought of and your tests did not cover, you can easily add a test to cover that case.

One of the workflows in a testable code base is "find a bug" -> "write tests to cover the bug" -> "fix the bug" -> "tests passing" -> "your test suite now covers something that was an actual bug and should never be one again".
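
That workflow in miniature (a made-up bug): the report becomes a failing test, the fix makes it pass, and the suite pins the behaviour from then on.

    # Bug report: slugify("Hello  World") returned "hello--world".
    # The old implementation was title.lower().replace(" ", "-").

    def slugify(title: str) -> str:
        return "-".join(title.lower().split())  # split() collapses runs

    def test_regression_double_space_bug():
        assert slugify("Hello  World") == "hello-world"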


> And then, if you've already thought about more complicated cases (eg negative numbers, cases that overflow) then your code already very likely handles them. And if you hadn't thought about them, your tests won't cover them either.

Good tests check boundary conditions and edge cases. They check error conditions (like overflow), not just the happy path. And yes, they can be a pain to write, and they affect the way your code is written.

On the other hand, a test suite with good coverage can give you tremendous confidence to make sweeping changes. I have a side project, an art project, and part of that is a custom on-the-wire networking protocol. I have tests that check all the edge cases I could think of. Did I miss something? Very probably. On the other hand, I've written the protocol 3 times now: once in Python, once in C, and once in C++ (the C++ case was more like a giant refactor of the C code to make it more usable), and I'll probably write it again in Rust. Each time I had confidence I got it right because my tests passed.


> things tend to go wrong in cases that you haven't thought about so your tests don't cover them anyway

I think this is a symptom of writing all the tests at the same time or immediately after the implementation.

Adding test cases during implementation can be a good thing. You want to test cases that hit implementation edge cases, like when the length of the input is equal to, or one less or greater than, an internal buffer length or batch size. While you're coding the challenging parts of the implementation, these cases naturally pop out at you.

However, while you're focusing on the hard parts, the "easy" parts fade into the background. You're much more likely to cover them if you make a quick list of test cases before you get immersed in the implementation. My experience is that when you (I) write test cases before implementation, I tend to cover more types of invalid inputs, as well as other conceptual corner cases, such as "degenerate" cases: zero or empty inputs, cases that have zero or empty outputs, and cases where an output could be computed in a straightforward way, but shouldn't be, because it wouldn't be valid.

For example, one thing that has been a huge pet peeve for me my entire career is programmers treating empty inputs as invalid when there's a straightforward and correct way to handle them. After twenty years in the industry trying to get people to do this right, I've started to realize: people get this right if they talk about it before they start the implementation! If you sort an empty list, the result is an empty list, no problem. If you search for an alphanumeric character in an empty string, the result is that it's not found. But if they write the implementation before considering those cases, that's when they end up thinking it's okay to throw exceptions for them, because in their mind the implementation ("start with the first letter") becomes the specification.
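
Those degenerate cases, written down as tests before any implementation exists -- note that Python's built-ins already get both of them right:

    def test_sorting_an_empty_list_yields_an_empty_list():
        assert sorted([]) == []

    def test_searching_an_empty_string_reports_not_found():
        assert "".find("a") == -1  # "not found", not an exception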


> what's the point of asserting that add(2,2) will yield 4 if you've already been thinking about that

You are probably thinking about unit tests here, and yes, if a test exercises a single point that you placed there by design, it can only guarantee that somebody won't mess with it in the future.

On the other hand, I'm writing an interpreter in a side project, and the single most useful test there is literally an assert that eval("1 1 +") equals eval("2"). That is because it's not a unit test, and it touches a lot of different points. But to an outside observer, it looks exactly the same as a useless unit test.
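
A toy version of that kind of test (hypothetical evaluator, sketched in Python): it reads like a trivial unit test, yet one assert exercises tokenizing, literal parsing, and stack handling together.

    def rpn_eval(src: str) -> int:
        stack: list[int] = []
        for tok in src.split():
            if tok == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            else:
                stack.append(int(tok))
        return stack.pop()

    def test_addition_matches_literal():
        assert rpn_eval("1 1 +") == rpn_eval("2")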


Being able to automatically run your old tests to find regressions is super-super useful. I would argue that every bug you fix should turn into a test, since you're probably going to encounter it again.


Turn it upside down. Do the bare minimum of thinking and writing when setting up the initial test case, really just have it be scaffolding to run a single sunny scenario test case. Then let users (testers, other developers, automated systems or actual users - whatever hits your code) do any further discovery for you. Whenever they cause a bug, that's a new test case (or a series of new cases) to add to your initial single case.


"Tests waste a lot of time": that's certainly true for some teams. On the other hand, those tests have saved me when useful documentation was lacking, which is the case for virtually every closed-source software project. To be fair, sometimes these tests were also misguided by misunderstood requirements, providing false "documentation".


> We are a hammer and every problem is a nail that can be solved by the act of thinking. We fall into the trap of overthinking every scenario because we are paid to poke holes in systems and then patch them before it gets to the user.

I was thinking about this yesterday, but more in terms of how small teams (or solo devs like in my case) can out-perform larger teams just by not even considering every edge-case.

I've discovered these overlooked cases later and been kind of embarrassed, but also happy that I didn't catch them right away, since fixing them wouldn't have been worth it, and the urge to fix them would have been high.


It's easy to fall into the trap of introducing even more edge cases by handling some other edge cases.

The correct way to handle edge cases is to not have them in the design. Easier said than done. And sometimes the edge cases are in the spec. Sometimes they are justified, but often they are not. These should be eliminated before the spec is finalized.


When I find edge cases that are low-stakes and unlikely to occur, I usually just ignore them completely. If they do occur, and someone thinks I shouldn't have allowed that edge case to go unhandled then I have plausible deniability of "I didn't think of that edge case".

I guess I'm fortunate enough not to have the high "urge to fix them".


Exactly! I try to be intentional about what edge-cases I care about and maybe more importantly the ones I don't care about -- for now. I also try to be intentional about time-boxing how long I spend thinking about edge-cases. Again I think context is important here.

I think this overall mentality is the contrarian in me resisting the status quo.


Hubris is an amazingly effective tool that more managers and senior engineers could leverage from junior members.

There are obvious problems and pitfalls that one needs to look out for. But all too often companies and teams grow to a size where their main product no longer has a place for people to try otherwise bad ideas.


I find it useful to write a list of the tests you would write if you were writing tests. Then you have a list of all the things that can go wrong, instead of being caught off guard when something breaks.


Yeah, I admit I have wasted too much time solving edge cases that either never happen, or do happen but with an effect that is not a big enough deal to have been worth handling.


You cannot reduce software engineering to an algorithm[1], because if you could, you could (duh) write a program to automate it and not have to worry about it.

That's why the job is so hard: you have to think about the specific case and decide the best course of action. Sometimes (maybe often), DRY is the right solution; sometimes not. Same for all the other "rules" of software engineering.

I think the OP is just reinforcing this: don't go on autopilot--think!

[1] Obligatory disclaimer: Eventually AGI will reduce software engineering to an algorithm, but not today.


When it comes to "conventional" wisdom, I think there's simply a confusion between methods and principles. I'm sure many here have heard the quote by Harrington Emerson [0] that roughly goes:

   "Methods are many, but principles are few. Methods always change. Principles never do."
So I think that many things, like DRY and KISS, are really simply principles that were distilled after many attempted much and learned a lot. Things go sideways quickly when these are mistaken for methods.

Whenever some rule or method doesn't really seem to fit, I wonder if this is really more of a principle to help guide me to the most contextually appropriate method.

So, don't necessarily throw out DRY and similar, just treat them as guiding principles that will absolutely make sense in the right situations.

The above quote is the most commonly known form, and is often misattributed to Ralph Waldo Emerson. Here is the full original quote: [1]

    "As to methods there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble."
[0] https://en.wikiquote.org/wiki/Harrington_Emerson

[1] https://www.goodreads.com/quotes/346365-as-to-methods-there-...


When people use meaningful guiding principles dogmatically, the principles aren't the problem.

As a chef, I often adopted the "measure twice, cut once" principle from carpentry. Saved big batches of sauce or whatever several times during prep. Working the line on Saturday night, however, doing that would cost someone their job within half an hour.

Don't replace problem-solving with rules and always use the right tool for the job. If you realize you aren't, it's usually a lot easier to switch the tool than modify the job.


People are confusing wisdom and rules. It takes wisdom and experience to understand DRY, but that doesn't mean people should DRY in every possible case. Following a rule blindly isn't wise. The wisdom is in understanding why the rule matters, along with when and how it should be applied.


I've written so many automated tests, but how many have actually caught something important? A couple dozen?

Yeah, I'm glad TDD isn't the dominant religion any more. If you write a test and it never fails throughout the lifetime of the project, that's wasted time.


Just like if you wear a seatbelt all your life, but never have an accident, what a waste of effort. Not to mention $$$ spent on car insurance.

Programmers should peer into their crystal balls and predict which code is going to need to be maintained in ways that will break it and only write regression tests for that. For all the rest, just poke at it manually in your REPL and debugger and call it tested.

You heard it here.


Yes, I'll write a test for something that I know from experience is likely to give me trouble later on with regression problems

Writing a test when you find a bug and then fixing the bug is also quite handy sometimes

And I'll write tests upfront when there's well defined outcomes and I know the code is going to be tricky

But I'm not going to write tests for dumb stuff that I know won't break.


I want to push back on this in two ways.

First, it's my understanding (as someone who usually doesn't TDD and hasn't read/thought that deeply about it in a while) that the core of TDD (vs other testing doctrines) was to iterate by making a failing test, then making it pass. If that's correct, then if you're writing tests that never fail throughout the lifetime of the project then you're not doing TDD, though culture does weird things to practices and definitions, and I'm sure there are places that called any damned thing TDD when it had more hype.

Second, I think a test (or static check) that never fails can be useful if it lets the programmer pay less attention to making sure the program is correct in that particular way (freeing up bandwidth for other important considerations), and as a communication tool demonstrating that the system does in fact work the way it does. Whether those benefits are worth it depends on a bunch of features of the particular context, including how long it takes to write/maintain the test (although if it never fails then at least you probably don't need to update it) and how long it takes to run the test.


> the core of TDD was to iterate by making a failing test, then making it pass

Yes, and when I first heard that 15 years ago (or whenever it was) it sounded like genius, and I did it on a few projects.

It's a useful crutch, maybe, if you're doing something very unfamiliar.

But as a dogma that must always be followed? And having a test for every single 'specification' and edge case? No - screw that; you end up faffing around for ages setting up mocks and testing stuff that doesn't deserve it. And you end up with a massive library of tests that get on your nerves.

Also everything got completely bent out of shape for a few years because everything had to be injectable and testable.

https://dhh.dk/2014/tdd-is-dead-long-live-testing.html

http://web.archive.org/web/20180215225218/https://iansommerv...

http://blog.cleancoder.com/uncle-bob/2016/03/19/GivingUpOnTD...

https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste...


My point (in that bit) was only that your criticism was incompatible with it being the thing you labelled. I'm not pitching TDD or defending it more broadly.


Ok. But writing the test first means it will always fail once; in my mind that doesn't count. It's like pretending.


> But writing the test first means it will always fail once

Right, exactly.

> in my mind that doesn't count. It's like pretending.

¯\_(ツ)_/¯

It demonstrates that your test is actually testing something and that your code is having an impact on that something, which in most contexts is probably not high value but is not of zero value.

Pretending is when your team has agreed to TDD but you write the passing tests after the code just so no one can accuse you of not doing TDD when they weren't looking, which does seem to be what "we do TDD" sometimes turns into in practice... and even in that case, the tests may have communicative value that may be worth the weight of the test, your odds are just getting awfully low.


I think we are mostly in agreement, but I'm enjoying deconstructing all this.

When electricians are testing that a wire is dead, they will take their multimeter and test it on a live wire (to make sure the multimeter works), then test the dead wire, then test a live wire again (to make sure the multimeter is still working).

That seems worth it - because electrocution is worth avoiding.

Writing a test that tests a blank method with no code in it, watching the little light go red, then writing the code that you had in your head the whole time anyway, and then watching the little light go green, is just like superstition

“Do programmers have any specific superstitions?”

“Yeah, but we call them best practices.”

(via https://twitter.com/dbgrandi/status/508329463990734848)


For something as extreme as a blank method, it's unlikely to be useful, I agree. For something subtle in modifying existing code, "did that actually change the thing I thought" might be a question worth answering. I guess my best (somewhat Devil's advocate) argument for doing it all the time is that maybe it's cheap enough that doing it in the useless cases is less expensive than every time figuring out whether you should plus the cost of the false negatives.


> If that's correct, then if you're writing tests that never fail throughout the lifetime of the project then you're not doing TDD

Agreed.

> [tests that never fail can be useful] as a communication tool demonstrating that the system does in fact work the way it does.

Agreed as well. That's what makes the section I wrote on testing sting for me. I read about all the benefits and see how tests can be helpful as a form of documentation. I see all the upsides, but at the end of the day we still need to calculate whether automated tests are really worth it. The maintenance burden can be huge: a couple-line change confirmed with a quick manual test can take a couple of minutes, but fixing an integration test that is now failing can take an entire day or worse.

It's tough, but my main goal was to challenge the dogma of "always write automated tests."


> It's tough, but my main goal was to challenge the dogma of "always write automated tests."

It's perfectly fine to challenge dogma and ask people to figure out whether automated tests make sense for their use case. I think the push for automated tests came when there were so many who grossly underestimated the benefits of automated tests and dismissed them off-hand.

It's hard to see the benefits of automated testing: it's easy to see the time spent writing the tests, but hard to see the time saved by catching bugs before they happen. It's also easier to refactor when you have tests to confirm that your refactoring didn't break something.


Right, just because it has value doesn't mean it has enough value. My objection to the parent (not the article) was that it wound up dismissing certain kinds of value.


The last time I found TDD to be useful was when I was implementing some code for processing of data feeds according to a pretty well-defined specification. The spec translated to tests quite easily and then the tests very readily pointed out where the data did not match the spec 100%. Luckily, we were in a position to modify the data-generating side of things, too, to make sure everything was in-spec.

But most of the time, I'm dealing with user interfaces, service integrations, and data munging. I have "test benches" instead of unit tests for these. They're like "minimal reproductions", fully stand-alone programs that exercise the code in the "expected" way. The big issue is that the "expected way" could change at practically any minute, depending on business need. There's no "spec" other than, "are people's expectations for a usable, comfortable system being met?" We want them to be changeable at the drop of a hat. And those test benches help make those changes possible, by not having to work in the full UI to change the behavior of one control, one service, etc.


A problem with calling it wasted time is that it implies you could have recaptured that time in some other, more productive way. And I'd wager the odds are high that you could not have.


Why not?


There ultimately is no "reason." Time just isn't completely fungible between useful and wasteful, as much as we wish it could be.


So there's no difference between doing things that have a point and doing things that don't have a point?


After the fact, there can be. But we are terrible at telling one from the other ahead of time. More, we don't always have "meaningful" things to do at any given moment.

This is why we don't all work to type as fast as court reporters. It would not gain us much if any meaningful time.

More, you can give meaning to things. Puzzles and such.


I know what you mean

But if I were managing a bunch of programmers (thankfully, I don't have to do that), I would be thinking about their productivity and which activities contributed to it. TDD seems like a bell curve: having some can really help with productivity, regressions, and bugs, but too much and it just starts to be counterproductive.


Oh, agreed there. I would question folks that don't have build-time tests. I do push back against the unit/integration divide; build time versus deploy time makes more sense. Make sure to have both where you can, with an early preference for build time and the expectation that deploy time will dominate later.


I think TDD is a great method, but not dogma. Likewise with code coverage. Being prescriptive about one or the other just leads to angry people who get fed up with the principle and rebel.


Luv manual tests. With everything that ships, I bullet-point a list of tests that theoretically would be written. Then I test the most important ones by hand. I've never had to retest yet, but theoretically, if something was wrong I can start debugging by poking at these tests. If something broke at all, I could even automate that specific test.


You know what's not given enough credit? The fact that those so-called "patterns" are really just trendy ideas pretending to be "knowledge". It's like fashion - it goes in and out of style. Now we have all these blog posts saying "DRY considered harmful", but back in the day, everyone was all about "DRY considered good". It's just a trendy thing, man. And all this backlash against it? Yup, you guessed it - it's just another trend (because it's not cool to follow the rules anymore).

To be real, most of those ideas probably have some truth to them, but like always, it's about weighing the pros and cons. It's all about those tradeoffs.


There is a lot of truth to DRY, but it has been distorted by people becoming completely obsessed with it, as if it were some end to reach rather than just another property with its own tradeoffs.

And it's not just developers: even some companies are obsessed with removing "waste," sacrificing everything just to minimize any kind of seemingly duplicated effort without ever considering whether it's actually cheaper to do so.

When DRY is held up as a high ideal somewhere, how often do they also mention the tradeoffs, like tight coupling and premature abstraction?


Conventional Wisdom is just Convenient Assumption in experience. We need to regularly check our assumptions, lest we find ourselves operating at right angles to reality.


You should basically follow conventional wisdom if you are doing conventional things, and follow unconventional wisdom if you are doing unconventional things.


> If a test never fails, is it a good test? If I have to change a test every time I change the implementation, is it a good test? Writing good tests is really difficult.

If you don't like writing / maintaining tests or don't have the time, let the computer write them for you! [0][1]

[0] https://insta.rs/ [1] https://vitest.dev/guide/snapshot.html
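
The underlying idea, hand-rolled in a few lines (a sketch of the technique, not the insta or vitest API): the first run records the output, and later runs fail on any difference.

    import json, os

    def assert_matches_snapshot(name: str, value: object) -> None:
        path = f"snapshots/{name}.json"
        rendered = json.dumps(value, indent=2, sort_keys=True)
        if not os.path.exists(path):
            os.makedirs("snapshots", exist_ok=True)
            with open(path, "w") as f:
                f.write(rendered)  # first run: record the snapshot
        else:
            with open(path) as f:
                assert f.read() == rendered  # later runs: compare

Reviewing the recorded files into version control is what turns "let the computer write them" into an actual spec.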


I don’t write automated tests and none of my developers do, except the guy building smart contracts. And yet I am not sure I am doing the right thing. We only did it because we couldn’t afford the extra time and money to do it.

If a test never fails, is it a good test? If I have to change a test every time I change the implementation, is it a good test? Writing good tests is really difficult.

Yes — just like a good interface is not noticed, or how good security that prevents issues isn't noticed when things go well.


I had a hard time writing tests because nobody else around me did it. Nobody taught me how to implement good tests or implement testable code. I had to figure it out by myself through trial and error, and through reading articles and watching videos.

Ultimately, I found writing tests was immensely helpful for the work I was doing. I was writing the API backend for desktop/mobile apps to call. When one of the guys working on the client apps would claim my stuff wasn't working right, I demanded an example HTTP request that I could use to reproduce the problem.

Most of the time they never got back to me. Often the problem was with their (untested) code and they wanted to throw the problem over the wall for someone else to investigate. I had the confidence that my code was behaving well because of the tests. So when someone else came over and said "Your stuff is broken" I had the confidence to say "Prove it".

With that said, I recognize that there are cases that are lower stakes that may not require such thorough testing. Right now I have some APIs I maintain that I don't write tests for. These APIs are of the type where if there's a bug then the user of the API says "Hey, it's broken" and I say "Ok, I'll fix it in a couple days" and he says "Sure, let me know when it's done". This is quite different than the API that gets called hundreds of times a second and customers are screaming "I want my data now!"


Ahhhhh. This is what I come to HN for. Pseudointellectual contrarian thinkpieces by bored silicon valley people and the massive upvotes by other pseudointellectual contrarian bored silicon valley people debating the details of a ridiculous premise.


assuming conventional wisdom to be an anti-pattern is an exceedingly common anti-pattern





