Hacker News
When you combine two things that are close, but not the same (twitter.com/id_aa_carmack)
130 points by tosh on April 13, 2023 | 90 comments


I run into this a lot, and I find it hard to justify to reviewers why I decided to keep seemingly similar logic/functions separate rather than combining them. I usually don't have anything better than a "gut feeling" or "professional experience" that although they might seem the same, they're either actually slightly conceptually different, or likely to diverge in the future even if they're similar now.

Junior devs rush usually rush to combine things like that together, especially because combining like code is such a knee-jerk thing to point out in reviews, and they don't have the experience to push back against it (or to even see where to push back). In the future, the combined code gets ugly when requirements slightly shift and the code that was combined should no longer be combined, but it's never split back up; it's usually just made more complex.


This is an old debate and yet one that is difficult to think through. I share your experience of having trouble convincing someone else not to combine things too hastily. Partly it's a "my gut feel is different to yours" situation, but I often don't have the confidence that I could articulate my reasoning without indulging in a wordy lecture on minutiae.

Lately I have been fixating on the following line of thinking: the unit of deduplication--usually a function, but sometimes even bigger--is the same thing as the unit of abstraction. When you dedupe, you've also given birth to a new abstraction, and those don't come for free. Now there's a new thing in the world that you had to give a name to, and that somebody else might come along and re-use as well, perhaps not in a context where you originally intended. The new thing is now bearing the load of different concerns, and without anyone intending it, it now connects those concerns. The cost of deduplication isn't just the work of the initial refactor; it's the risk that those future connections will break something or make your system harder to understand.
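
To sketch that risk (names hypothetical): a helper born from deduplicating two call sites quietly becomes load-bearing for both callers' concerns.

  # Born from deduplicating two call sites:
  def normalize_label(text):
      return text.strip().lower()

  # Months later one caller needs truncation, and the shared name now
  # carries both concerns; every caller inherits the new parameter:
  def normalize_label(text, truncate=False):
      text = text.strip().lower()
      return text[:32] if truncate else text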

This reminds me of another famous Carmack pronouncement about the value of longer functions [1], which I think has some parallels here. In the same way we're taught to DRY up our code, we're taught to break up long functions. I sort of think of these two things as the same problem, because I view their costs as essentially the same: they risk proliferating new and imperfect abstractions where there weren't any before.

[1] http://number-none.com/blow/blog/programming/2014/09/26/carm...


I understand your sentiment, but this line:

    When you dedupe, you've also given birth to a new abstraction, and those don't come for free.
It feels like you can write the inverse with equivalent impact, e.g., code duplication doesn't come for free.


Sure, but the costs of code duplication are well known. We know it increases the maintenance burden, and it can lead to issues if you forget to update one copy of something, and so on.

So there can be an assumption that deduplicating only removes costs, when in fact it may also create new ones. That removing something can make other things more difficult isn't intuitive for everyone.

Like all things in programming, there is a balance of pros and cons to each approach. Knowing when to use which approach is part of the profession, and everybody gets it wrong sometimes. And the environment can change and invalidate the choice - then you're stuck with the hard decision of changing the abstraction or keeping it. That's a hard choice as well.

Nothing in coding comes for free, but sometimes it can look like it does.


What's the cost? A slightly bigger binary & codebase? It seems like it's close to free to me. Am I missing a cost? Or are these costs bigger than I'm assigning them?


> What's the cost?

The cost is exactly what is pointed out in the original tweet:

> a requirement to keep two separate things aligned through future changes is an “invisible constraint” that is quite likely to cause problems eventually

Code changes, and if those two identical or similar pieces of code are likely to change together, then whenever you change one you carry the cognitive load of also changing the other, or risk having them go out of sync.

Of course, when the two pieces of similar code aren't likely to be changed together, they should be kept separate.


For sure. Most of the time when I copy-paste code from one place to another, a change in one place doesn't imply a change in the other. I certainly have seen that happen though.


I dunno, call me skeptical about Carmack's text.

I agree very much with using pure functions wherever you can. In fact, I would argue writing a pure function should be the default approach. (See the "Functional Core, Imperative Shell" talk on DestroyAllSoftware.com.) Let the compiler handle the inlining and memory optimizations, and use const everywhere.

OTOH, Carmack doesn't even consider testing. Breaking up your code into multiple functions facilitates that a lot. On top of that, if your functions are pure, it is even easier to test them.
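
A minimal sketch of that combination (names hypothetical): a pure core that's trivial to test, wrapped in a thin imperative shell.

  # Functional core: pure, no I/O, easy to test in isolation.
  def apply_discount(total, tier):
      rate = {"gold": 0.10, "silver": 0.05}.get(tier, 0.0)
      return round(total * (1 - rate), 2)

  # Imperative shell: all side effects live out here.
  def main():
      total = float(input("Order total: "))
      print(apply_discount(total, input("Tier: ")))

  # Testing the core needs no mocks, fixtures, or setup:
  assert apply_discount(100.0, "gold") == 90.0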

He also doesn't consider the cost of reading & maintaining a piece of code that lives somewhere inside a big (multi-page) function. You have to keep track of all that function-global state. Side effects sprinkled all over the function are common. Ugghh.

> Besides awareness of the actual code being executed, inlining functions also has the benefit of not making it possible to call the function from other places. That sounds ridiculous, but there is a point to it. As a codebase grows over years of use, there will be lots of opportunities to take a shortcut and just call a function that does only the work you think needs to be done.

This, too, sounds a bit ridiculous today. Languages usually have access modifiers (public/private/…) or conventions to declare something "internal" to the class or module (e.g. `__foo` in Python). On top of that, you can always use something like ArchUnit to enforce your architectural rules and prevent usage of function X in module Y.
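
In Python, for instance, convention plus name mangling already does most of that (names hypothetical):

  # Module level: the leading underscore marks _parse_header as internal
  # by convention; linters and IDEs will flag outside use.
  def _parse_header(raw):
      return raw.split(":", 1)

  class Codec:
      # Double underscore is name-mangled to _Codec__pack, making casual
      # calls from outside the class awkward on purpose.
      def __pack(self, data):
          return bytes(data)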

Yes, correctly cutting your modules & scopes is never easy. But this is simply at the heart of the game of software development.

The arguments regarding latency & performance are certainly valid but it feels like a very, very specific case he discusses. It's difficult to generalize the conclusions he draws from it.


This is a good insight. One quibble I’d make is to point to the “into a loop or function” in Carmack’s post. When you’re consolidating some repeated code into a loop, the weight of the abstraction is lower than that of a function. Also, the problem of “future maintainer pulls the abstraction out of necessary context” is less likely.


Code reviews on refactors are so frustrating because you know what the next PR is going to look like, and it's going to change half the things people are complaining about in this one.

You could literally be a day away from making those two code paths very different.


There are some rules of thumb that I've come across in the literature and from personal experience on how one can tell when A and B are actually different and only superficially similar.

- If a hypothetical competitor implements A differently, will they by necessity also implement B differently in the same way? If not, it's likely A and B are actually different things.

- People in the organisation likely disagree (even if just gently) on some aspects of the design or requirements of both A and B. If you assign people to camps, and the camps don't overlap strongly when first considering A, then B, then it's likely A and B are actually different things.

- Can people in the organisation imagine a useful change to A that doesn't require a corresponding change to B? Then it's likely A and B are actually different things.

And the opposite:

- Does A support two entirely disparate sets of operations, where each consumer only ever uses operations from one set and never the other? Then maybe A is actually two different things, A and B (sketched below).

In arguing for keeping things separate in peer reviews, I have found test 3 to be the most easily convincing, i.e. giving a concrete example of a useful change to one that wouldn't necessarily result in a change to the other.
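
A sketch of that last test (names hypothetical): if no caller ever touches both halves of a class's interface, it's probably two things wearing one name.

  # One name, two disparate operation sets:
  class ReportService:
      def render_pdf(self, report): ...
      def render_html(self, report): ...
      def schedule(self, report, cron): ...
      def cancel(self, job_id): ...

  # If no component ever uses both sets, it's likely two things:
  class ReportRenderer:
      def render_pdf(self, report): ...
      def render_html(self, report): ...

  class ReportScheduler:
      def schedule(self, report, cron): ...
      def cancel(self, job_id): ...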

----

It's also worth considering that sometimes it's not a matter of A and B being entirely different or the same thing, but rather that they should both be built on a common abstraction C that we just haven't thought of yet (and may not for another few months until we see another example that would need it, that happens to trigger the right neural pathway to conjure C up in the mind).

----

That said, when in doubt, I still think it's better to err on the side of combining into one. It's far easier to split up a thing that has been combined too far than it is to faithfully co-evolve things that have been accidentally split up too far. (And then to deal with the consequences when they have drifted apart despite nobody meaning them to.)


> Junior devs rush usually rush to combine things like that together, ….

I know low-effort comments don't do well on HN, and this is one, but I just can't let this pass without sharing my intense happiness in reading this typo in a comment against DRY zealotry.

There must be something really subconsciously tempting about this kind of typo (https://twitter.com/BillHeyman/status/1646653124864606210):

> This is correct. It's often a very bad idea to create an abstraction without out having at least three instances that can use it.


I would go even further. Only create the abstraction when you have no other choice. If copying the code isn't causing you pain, don't touch it; it's working code. In the rare case you need to update more than one copy, you can grep a little bit for a while.

Taking a bunch of duplicated but "flat" code and creating an abstraction around it gets easier the more copies of the code you have because you have a more complete picture of what the right abstraction actually is.

Code is like clay, once it DRYs it stops being malleable.


It requires good familiarity with the code base or domain knowledge to know that one has to grep, and for how long. The abstraction triggers you to look in the right places.

That said, I am firmly in the camp that requires at least three instances.


Totally. A few copy/pastes describes the author. An abstraction describes the project, and you have to be so careful here to not break (or dramatically widen) the mental models.


How about something like this:

Is there a specific benefit you see in combining the code here? Combining these together introduces technical risk through increased coupling. Splitting the functionality has very little cost.


Combining helps when you later have to make changes to the common part, by making it impossible to forget that there is another place in the code you also need to change. Splitting also introduces technical risk.


Let the duplication live for a while. You may still make changes to the new code due to new insights and bugs.

Make a note or an issue to revisit after a few weeks.

Always refactor when a third or fourth identical case shows up.


A good rule of thumb is "if one of these were to change in the future, would it be expected that the other one would change in the same way?" (where change can mean a bugfix or can mean a change in required behaviour).


I've been on calls where non-technical (or just technical enough to be dangerous) people advocate combining things. Like you said, it can be hard to justify not doing it so it inevitably becomes the path forward.


When in that situation I tend to go ahead and factor out the similar code anyway. I almost never end up committing it, but it usually helps turn the "gut feeling" into a better understanding of the differences.


Whether or not to split is more a measure of whether these two concepts are likely to diverge down the road than of whether they share similarity today.


Yes, the way I phrase this when defending it in review is "these things have different reasons why they would change, despite being the same right now." Thanks Sandi Metz.


'Idiomatic' is another good word.

Some people are so addicted to DRY that they want to write helper functions for every 3 lines of code that appear together. Nobody else can figure out what the fuck their code does, but only 3 of them will tell them to their faces.


Sandi Metz is a great resource for working through some of the dogmas out there. She's fully bought into OO and design patterns so it's not a situation where an outsider is assailing your whole worldview, and she puts into words what my intuition is about these things super well.


This example may be overly simplistic for the point I want to make, but the most common sort of false-factoring I see is when this…

  def f(x):
    regular_thing(x)

  f(v)
…becomes this:

  def f(x):
    if condition(x):
      special_thing(x)
    regular_thing(x)

  f(v)  # in the old code
  f(w)  # in some new code

Whereas I posit that the correct way to handle the special thing is as follows:

  def f(x):
    regular_thing(x)

  f(v)
  if condition(w):
    special_thing(w)
  f(w)

When junior programmers insist on handling conditional execution paths further and further down the stack you end up with a plethora of optional flags being passed down through each function call.

I found one piece of code where a team had given up and was passing the same **kwargs glob down three or four levels of function calls, with each function doing special conditional steps based on what it saw in the kwargs - effectively a global set of flags controlling logic up and down the stack.
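
A condensed, hypothetical sketch of that shape - one **kwargs glob acting as a set of global flags consulted at every level:

  def handle_request(**kwargs):
      if kwargs.get("audit"):
          print("audit:", kwargs)
      process(**kwargs)

  def process(**kwargs):
      if kwargs.get("dry_run"):
          return
      save(**kwargs)

  def save(**kwargs):
      table = "archive" if kwargs.get("archive") else "live"
      print("writing to", table)

  handle_request(user="alice", dry_run=True, audit=True)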

It’s a great example of why long functions are a bad code smell.


Very much yes. Behavioural flags are the worst kind of abstraction and a strong signal that the wrong abstraction was chosen.

The reality is that while abstraction is critical for keeping complexity under control, it is also hard. Refactoring common-looking code will usually not lead to the correct abstraction.

As a recovering DRY addict, it is a lesson that took me a long time to learn.

edit: another common bad refactor/abstraction:

  def fun1(...):
      do_the_thing(...)

  def fun2(...):
      prepare_the_thing(...)
      fun1(...)

  def fun3(...):
      prepare_a_bit_more(...)
      fun2(...)

No other functions call fun1 and fun2. Often fun1 and fun2 are badly named because they are just steps in implementing the behaviour of fun3. Usually I inline these cases back, as it greatly simplifies understanding the code.


For people that want to have that style, I like to show them how to convert them into local functions if the language has that feature. At least that way they can't be accidentally misused (as easily... :)
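
In Python, for instance, that conversion might look like this (names hypothetical):

  def import_report(path):
      # Local helpers: impossible to call from elsewhere, so they can
      # stay honest about being mere steps of import_report.
      def read_lines():
          with open(path) as f:
              return f.readlines()

      def parse(lines):
          return [line.strip().split(",") for line in lines]

      return parse(read_lines())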


    def do(**kwargs):
        subdo1(**kwargs)
        ...
What could be more general?


I once worked with a dev who was thoroughly committed to DRY. He'd find a private method somewhere that sort of looked reusable, but also made lots of assumptions about how it was called. Then he'd move it to a public method without documenting all the limitations and try to share it with other code that made different assumptions.

The natural consequence is that both sets of assumptions would have to get baked in somehow.

He'd take similar progressions all over the place to "remove duplication". It wasn't unusual for not only the complexity but also the line count (not including tests) to increase.

If you pointed that out, he'd say, yeah but duplication is always bad!

Needless to say, I don't agree.


I heard a good guideline somewhere: if multiple pieces of code change at the same time for the same reason, they should be combined into one. If the separate pieces of code can change at different times for different reasons (they would likely diverge in different directions in the future), it's best to keep them in separate places so it doesn't become a maintenance nightmare.

Excessive DRY as how you described leads to complexity and complexity is bad for code maintainability. Properly applied DRY is supposed to reduce complexity.


That guideline would make a better tweet than the submission, IMO. It is a very good yardstick. Although even if you have to update two places together often, it might be worth keeping them separate for various reasons - such as when combining them would require a major refactor of dependencies.


If combining them would require a major refactor then create a unit test that fails if they diverge or leave a code comment. That gets you the benefits without the tedium.
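
A minimal sketch of such a divergence guard (names and fee logic hypothetical):

  # Fee logic deliberately duplicated in billing and reporting because
  # merging them would require a major refactor.
  def billing_fee(amount):
      return round(amount * 0.029 + 0.30, 2)

  def reporting_fee(amount):
      return round(amount * 0.029 + 0.30, 2)

  # Fails the moment the two copies drift apart:
  def test_fee_implementations_agree():
      for amount in (0.0, 9.99, 100.0, 12345.67):
          assert billing_fee(amount) == reporting_fee(amount)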


That sounds like a good rule of thumb to me.


One of my colleagues is a DRY zealot and it drives me nuts. It always feels like a brain-dead application of a first-pass philosophy that doesn't take any greater context or second-order effects into account.


"A foolish consistency is the hobgoblin of little minds" and all


This sort of thinking about something being 'always bad' or 'always good' is in itself always bad.

Not sure if that is a paradox or not; be that as it may, in my experience it is true.


First response is:

> "Don't Repeat Yourself" versus "Repeat Once or Twice"

This is a common misconception: the principle is not about code duplication but about conflicting logic.

From Wikipedia:

> The DRY principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system".

So if the insides of that loop are tightly coupled to specific business logic that you want to be 100% sure doesn't diverge, and it applies to multiple different types of data: go for it. Otherwise, DRY supports Carmack's point here.

> When the DRY principle is applied successfully, a modification of any single element of a system does not require a change in other logically unrelated elements.


> This is a common misconception: the principle is not about code duplication but about conflicting logic.

Ah, the good ol' "Nobody really implements DRY correctly; a true DRY follower does X" - the No True Scotsman fallacy. OOP advocates love to use this too.

Unfortunately, there comes a point when the Wikipedia definition really doesn't matter. What matters is what's advocated and the most widely used definition of DRY, which is simply "Don't Repeat Yourself", with little to no qualifications. And unfortunately, this usually leads to overzealous use of the principle and the abstraction of two separate pieces of logic into one shared piece of logic, when it really should be two separate pieces of logic.

So I think Carmack is spot on in his assessment of how DRY is used in the day to day, and why it can be a bad idea to apply it to every situation. As with every programming "principle", the DRY principle should be used judiciously and the benefits/drawbacks of applying it in any particular scenario should be analyzed accordingly.


I mean, yes, you could argue for No True Scotsman here, but it is a little different when the initial book or whatever that spurred the movement is the one being quoted.

Any movement distilled into only three words is gonna have a bad time, right?


> I mean, yes, you could argue for No True Scotsman here, but it is a little different when the initial book or whatever that spurred the movement is the one being quoted.

I honestly have no idea whether this makes any difference. I'll take SOLID as an example. I know I've debated several people about the drawbacks of SOLID and how I believe those outweigh any proposed benefits. And almost every time I do this, people will tell me that I just didn't use it correctly, even when I am basing my opinion off the original blogs etc that started the principle.

When that fails, I'll point to code that I've seen in production, in real-world codebases, and they'll just tell me those people are using it wrong - even though, no matter where I work, code that follows SOLID looks awfully similar to what I described. I think this still falls into the No True Scotsman fallacy.

I guess I'm basically saying: does it matter if the original authors of a principle had a certain ideal in mind when they coined the principle, if the colloquial use is vastly different? I think OOP is a perfect example of this, because most people mean something very different than what Alan Kay meant. However, if I argue against OOP, it's generally understood I'm not arguing with Alan Kay's idea of message passing, even though that was the original intent.

Edit: but overall I do agree with your sentiment haha. A 3 word principle is bound to have many interpretations. It just irks me when people try to call out an argument with a "technically it really means this..." when everybody understands that "sure while it technically means this everybody really ends up using it like this...".


I'm interested in what the drawbacks of SOLID are. Do you have any links or could you briefly outline some?


Sure! Here's a few:

http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-...

https://blog.cerebralab.com/Bimodal_programming_%E2%80%93_wh...

https://stackoverflow.com/questions/2997965/are-solid-princi...

And I have my own opinions of course, but to summarize: I usually see SOLID lead to extraordinarily overabstracted code bases that are impossible to debug or explore, because everything is an interface that's injected through some DI framework. Exploring these code bases is a nightmare, because every time you Ctrl+Click a function, you're taken to an interface. The best part: the interface only has one implementation 99 times out of 100. But boy does it mess up your flow when you're actively tracing and trying to understand a code path.

I would argue debugging + maintenance are the majority of a programmer's job. So any ideology that obfuscates your code and makes it increasingly difficult to maintain is not something I want people I code with to follow.


When I wrote my comment I was excited "I can help people understand a thing that was hard for me 10 years ago."

When I read your comment I feel attacked. I need people to hear that the original intent of DRY does not mean "no duplicate lines of code", and that such an application leads to the problem you're describing, while the definition I'm attempting to introduce does not.

> As with every programming "principle", the DRY principle should be used judiciously and the benefits/drawbacks of applying it in any particular scenario should be analyzed accordingly.

Of course. I agree 100% and didn't realize this was in question. Therefore I think this is a good point. I would have appreciated a "yes, and" comment instead of a "no, but" argument.


This is a 100% fair criticism and I'm sorry I took the tone I did with my response. After re-reading through your original comment and my response, my response seems unwarranted.

> Of course. I agree 100% and didn't realize this was in question. Therefore I think this is a good point. I would have appreciated a "yes, and" comment instead of a "no, but" argument.

This is perfect advice, and put very succinctly. Sorry again for the combative nature of my first response and I'll try to do better in the future framing any remarks like these as a "yes, and" comment :)


Wow. I’m surprised and delighted by your response. I feel heard and validated.

Thanks for being open minded and for going the extra mile of stating what you will do above and beyond my original ask.

Seriously, thanks!


I think the slogan is too simple.

To couple or not to couple is the question.

If you come to think of it, coupling and de-coupling pieces of information is central to software development.


What is a single element? If you've coupled two almost related concepts into a single function then you are still only changing one element.


almost all the time 2+2 is the same thing as 4

but if you're foursquare's copy editor, somebody is going to be _pissed_ when they see the tagline, "it's as simple as 4".


2+2square's copy editor would be livid with that!


For that reason I prefer the acronym SPOT = single point of truth.


DRY SPOT

love it!


This is a horse that’s been beaten to death. Avoid mixing up incidental duplication with harmful duplication. Don’t be afraid to refactor existing abstractions if things change. Rule of three and write everything twice are heuristics, not replacements for critical thinking. Almost always decompose or refactor instead of adding Boolean flags, unless it’s unimportant or trivial code. Consider restructuring your code or using a for loop to remove duplication instead of using a function. These things can only be marginally taught, they must be experienced through practice.

And with all art, there’s a good amount of subjectivity. Your reviewer will likely assume their taste is better than yours. Who can blame them?


This is an otherwise good comment that starts with an unnecessarily dismissive statement ("This is a horse that’s been beaten to death").

Even assuming this is very common advice (which I don't think it is), there are always new people learning to code and design software who'll find it useful. And beyond that, John Carmack is also allowed to be one of today's Ten Thousand (https://xkcd.com/1053/).


One rule that I think works well:

If DRYing the function would entail adding a conditional statement, don't. Even if the remaining 90% stays the same. That 90% is a good candidate to dedupe tho.


Exactly. Reuse through conditionals is a trap. “Look! I made a reusable thing!” No, you made a choke point. Dedupe what you can, then compose with the other 10% when you need it. No need to complect it, as Rich Hickey would describe it.
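
To sketch the composition (names and serializers hypothetical): dedupe the shared 90% and pass the varying 10% in, rather than flagging it inside:

  import csv, io, json

  # The shared 90% stays deduped; the varying step is passed in,
  # not selected by a boolean flag inside the shared code.
  def export(data, serialize):
      cleaned = [row for row in data if row]
      return serialize(cleaned)

  def to_json(rows):
      return json.dumps(rows)

  def to_csv(rows):
      buf = io.StringIO()
      csv.writer(buf).writerows(rows)
      return buf.getvalue()

  rows = [["a", 1], [], ["b", 2]]
  print(export(rows, to_json))
  print(export(rows, to_csv))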


I think composition is often the tool that’s needed when trying to hit two birds with one stone, but not reached for as often as it should be.

Although conditional statements are totally useful and essential, I find their overuse tends to make the logic look especially divorced from the data structures it operates on. Composition tends to accomplish the opposite - it reads with clearer intent and reveals more about what's similar or dissimilar in various parts of your data.

It’s totally possible to compose poorly too, of course. Plenty of ways to write awful code :)


> If DRYing the function would entail adding a conditional statement, don't. Even if the remaining 90% stays the same. That 90% is a good candidate to dedupe tho.

Carmack mentions this rule in slightly different terms (https://twitter.com/ID_AA_Carmack/status/1646638023499456517):

> It is often still a good trade if an abstraction may induce some extra work where not strictly necessary, but a bad trade when you start introducing conditional behavior in your abstraction.


Personally, I have been bitten by the opposite of this: code duplicated in many areas that are very slightly different, where changing some shared behaviour means you might forget to update the logic in all the places.

The problem is that when your code does not have a single mental model that can be learned by your team, you end up with lots of copy-pasting and lots of different patterns, and your codebase can rapidly become a pain to maintain.

I have been returning a lot to "Design Patterns" lately, and I think one of the underutilized patterns is the template method pattern. It can be used in places like this, where the algorithm is almost the same except for one or two parts that need to be extended.
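
A minimal Python sketch of the template method pattern (names hypothetical) - the skeleton lives in one place and subclasses vary a single step:

  class Exporter:
      def run(self, records):
          # The invariant skeleton of the algorithm:
          valid = [r for r in records if r]
          return self.render(valid)  # the one step subclasses vary

      def render(self, records):
          raise NotImplementedError

  class CsvExporter(Exporter):
      def render(self, records):
          return "\n".join(",".join(map(str, r)) for r in records)

  print(CsvExporter().run([["a", 1], [], ["b", 2]]))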


Yes -- on personal projects I manage to solve this by including comments like:

  /* IF YOU CHANGE THIS see if you also need to change bar() in otherfile.js
     with comment code OSIJDF */
  function foo() { ...
I try to put a comment like this in front of every instance, and smash the keyboard for a single random character sequence like `OSIJDF` above that I put in each instance and can do a quick find-all for, in case my original bar() got renamed or moved to a different file, or there's a third function I forgot to add in later but I copy-pasted the comment.

However this is very much a personal practice for cross-referencing that I'm not sure would be received very well as a mandatory practice for code reviews etc.

It does make me very much wish for a best-practices way to provide hyperlinks or cross-references between functions, to indicate that a piece of code should always be interpreted in the context of other bits of code that are not in a nearby obvious/contiguous spot. I've never seen anything like that, though.


I have looked at the template method pattern, but to me this just sounds like what you really want is composition.

  open class Importer {
    fun import() {
      // do common logic...
      val parsed = parse() // call specialized behaviour
    }
    open fun parse() { ... }
  }
Vs.

  class Importer(private val parser: Parser) {
    fun import() {
      // do common logic...
      val parsed = parser.parse() // call specialized behaviour
    }
  }
In my experience, using inheritance can quickly make things worse. (Go seems to agree, there is no inheritance in Go, only composition.)


This can lead to another kind of anti-pattern: excessive use of generic patterns can also complicate and frustrate code maintenance.

I have seen code-bases where these patterns are applied excessively. Generic implementations often take longer to read and understand, and are frequently much more difficult to modify. Adding inheritance can add effort to simple things like code-reviews (the abstract base class is not in the change), and the code cannot be read without jumping between files.

My point is not that the template or strategy patterns are bad, but that there is a price to pay. Just use them judiciously.


This feels like another wording of the motto from a great blog post by Sandi Metz:

"Prefer duplication over the wrong abstraction"

https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

I find myself reaching for this all the time after my first instinct is DRY. It makes me think hard about whether I should couple things or not.


This happens an awful lot with frontend components, which is the basic reason a lot of component systems end up failing.

It's really common for things that are merely similar to look, duck-type style, literally the same early on while they're basic.

But the similarity is just an intersection. As things get more advanced the non overlapping bits come in, conditionals get added, compromises are made, and things end up a mess.

Often all along there should have just been two copies of the markup / behaviour that just happened to be the same in the beginning. Then when things change you add, statically, exactly the extra things you need to either one.


https://en.m.wikipedia.org/wiki/Lumpers_and_splitters

"splits can be lumped more easily than lumps can be split"


I read this article and then started thinking about whether using this distinction makes you a lumper or a splitter and then felt very dizzy.


One distinction that can make an individual DRY decision easier is when the 2+ instances must always agree.

Having only one instance to change is usually easier than the other ways of trying to guarantee that every instance will always agree, now and through maintenance/evolution.

DRY decisions for things that don't have to always agree can be harder.


If I could go back in time and emphasize this a lot earlier in my career, I probably could have saved myself a lot of stress.

Code deduplication is for removing split sources of truth, not for playing code golf.

DRY is such an easily misused aphorism.


This is the best talk I've watched on this subject (a bit more generally):

The WET codebase

https://www.youtube.com/watch?v=17KCHwOwgms


Fun talk... In my mind, I've always seen Don't Repeat Yourself, and thought the opposite was Write Every Time.


I believe the key is not only looking at the code, but the semantics behind the code.

Two pieces of code might look alike at a structural level but have different variable names, etc., because they're addressing different needs that happen to have the same solution _right now_. You can have similar solutions to different problems; only when you confirm you're actually dealing with a _single_ problem disguised as two can you be confident in deduplicating solutions. As always, the trick is focusing on the problems instead of the solutions.


I had a little thought a while back about using names that express why you're doing something, in order to make it a little more obvious to the monkey when we're over-abstracting something that might just be incidentally similar.

https://t-ravis.com/post/doc/what_functions_and_why_function...


> But when things aren’t exactly the same, do it three times before trying to infer an abstraction.

I think of it as "You don't really know if it's duplicate functionality until you've done it a few times and it becomes obvious." When working on a larger team, a dupe-checking tool can help in locating repeated code because not everyone reviews every code change.



Coding for finance is about finding the right balance between don’t repeat yourself and don’t abstract too much.

Financial contracts tend to have both a lot in common and a lot of differences.

A forex option is vastly different from an equity option or a swaption, yet sometimes you need to treat those things just as options, and sometimes not at all.

The tradeoff is really hard.


This gets even worse in tests. People notice a handful of tests with similar structure (setup, teardown, assertions, etc.) and combine them - debugging the future failures of tests that now share a common abstraction becomes a pain and destroys confidence in testing.

That and mocking. Nothing makes a test less valuable than mocks everywhere.


This seems to be a perfect example of bikeshedding. It seems that improvements in LLMs are coming slowly - nothing to object to, but those with the money are expecting some results. Perhaps AGI is combining being a genius and being a silly man, and understanding that they are close but not the same.


>Many years ago, I would casually copy-paste-paste-paste-modify 4D vector statements, but now I almost never even do two related statements like that

I recall someone finding a bug in (I think) the Doom codebase in exactly such a place, where the third slight iteration in a row missed a one-character edit.


This DRY discussion is... um... repetitive? I've got nothing new to contribute. But what I really want to know is what "fovea" and "peripheral" mean here:

> Sorting out some similar code for “fovea” and “peripheral” reminded me that I had made a mistake here.


The fovea is an area roughly in the centre of the retina where your vision is at its sharpest.

Peripheral vision is the less accurate eyesight you have "around the edges" of your eyes.

I assume he's talking about foveated rendering: rendering things in more detail where the fovea is looking, so that region is sharper than the periphery.


Sounds like PowerShell - one of the few languages where the community has been shifting from code-golf one-liners to a focus on human readability and ease of understanding.


Dan Abramov, "The Wet Codebase": https://youtu.be/17KCHwOwgms


I think I agree with the main tweet. Classes that are close but not the same have, in my experience, always led to breaking SRP and ISP.


> But when things aren’t exactly the same, do it three times before trying to infer an abstraction.


Refactoring is the least interesting part of programming provided that what you’re programming is interesting.


Hmm. I love refactoring. Getting the initial something that works but is ugly is fun but can be frustrating, and leaves me wanting. Once I have it working, refactoring it to something elegant is a joy. It already works. That was the hard part. All I have to do after that is push things into piles and arrange things neatly, and the result can be very satisfying.


I have piles and piles of extremely interesting work to do in my job. I am a talented software engineer and do my level best to factor things correctly the first time around, but it’s hard to get things right all the time. Inevitably I’ll have to make a second (or a third) pass to organize things better in order to manage complexity. But compared to the extremely interesting work I do normally, pushing and arranging piles is typically pretty tedious. Well, if I haven’t slept well the night before and I’m lacking in creative energy, it’s OK I guess.


It’s what comes out of the process of pushing things into piles. Once functions are factored nicely, patterns emerge. You see different ways of combining things that make more sense. Abstractions that were muddy and confused become clear. You see where duplication exists and how to eliminate it. An elegant structure emerges. But to enjoy it requires creative energy. I suppose that to me, making code readable for others is inherently interesting work. I enjoy finding the clearest and most succinct way to express an idea. Maybe if you approach it that way, you could find the interesting aspect in it.


Monad way can weigh or wait.


why would you post a tech thing on twitter?



