
For me, it was the belief that being a good programmer meant writing clean code with enough abstraction and indirection to make it future proof.

Boy, was I wrong. Unless you’re doing the same thing you’ve done for years, you can’t tell the future. And when your unnecessary abstraction turns out to be wrong, that’s exactly why we end up talking about tech debt in the first place: nobody wants to touch it.

Unfortunately, I didn’t have anyone to tell me this for the longest time. It wasn’t until I had to fix a bug in something I’d written in the past and couldn’t figure out what I had even written.

Now I try to write dumb and simple (yet sensible) code until there’s a good reason for abstractions. I have nothing to prove at this point in my career.



Probably an unpopular/heretical sub-opinion: if given a chance to change the past, I’d rather NOT read books like TAOUP and other books on the same shelf. Or at least I wouldn’t take them so close to heart. Because instead of collecting my own experience and fitting it to my projects, I invested heavily in these patterns and rules and “gems” and built something in me that I now have to destroy with advanced therapy (not kidding). For the last few weeks I’ve said screw it (as a self-forced experiment, because I get anxious without structure, abstractions, etc.) and begun to write “just code” without any pre-principles, reaching for programming methodologies only as an extreme measure. I’ve honestly never felt better. Like walking new streets after you’ve been paralyzed for years. I write f--king code like I’m 15, it is easy and simple, and time to deploy / market / test ideas is several times less. My boss gets happily confused, not sure what’s left for next week; I can hear it in his voice. I still have huge respect for Fathers like ESR, but… just make sure this knowledge actually does you any good, okay?

I don’t think I’ll stop this experiment anytime soon. Maybe I’ll reassess everything in a year or so.


IMHO there is a lot of value in learning the rules and then breaking the ones that, in your judgement, do not apply to a given situation. A variation on Chesterton’s fence [1] or, if you’re more literarily inclined, the parable of the camel, the lion and the child [2].

[1] https://wiki.lesswrong.com/wiki/Chesterton%27s_Fence [2] http://nietzsche.holtof.com/Nietzsche_thus_spake_zarathustra...


The thing is: they are not even rules, though some people think they are, but merely potential tradeoffs, interesting in some situations and that's it.

And it is highly counterproductive when people start to cargo-cult them, and oh my god do they ever. I sometimes wonder whether the world would not have been a better place without, e.g., the GoF design patterns book, Clean Code, or SOLID.


The more I learned about "best practices", the less productive I became. I think it happens because I spend my mental energy on solving the problem in a way that fits those so-called best practices instead of solving it however I can in the most robust way possible.

It took me quite a bit of time to re-learn to write code that solves my problems and is inelegant enough that I can jump in and modify it as my needs change, without thinking about how to do it elegantly all over again.

I'm starting to think that the tools must fit the domain elegantly; only then can the solution built with those tools be elegant. Code gets hard to read and maintain when the way of thinking about a problem doesn't match the way the toolset works. Things get messy when you try to make your tools work in a way they were not designed to.

For example, there are domain-specific languages and frameworks for things like maths or physics or engineering that work the way a mathematician or physicist or engineer thinks about a problem. If you try to make code written with these elegant in the sense that it's optimised and nicely structured from a software developer's perspective, it will be a huge mess and very hard to understand from the mathematician's/physicist's/engineer's perspective.

Therefore, when working on something, I find that the most productive AND maintainable code is the code that matches my thought process, no matter how many sins are committed (like repeating myself or writing non-reusable code). Also, optimisation for the sake of optimisation is evil. Abstractions work well only when they are intended to match the mental model of the solution, and they are evil when they are made to optimise something (like making it re-usable for all kinds of situations).


I scare the absolute crap out of a lot of Junior developers when I tell them this.

I once said, jokingly, that "Actually, abstractions in code are bad" to a Junior and I'm pretty sure he considered quitting on the spot.

But I do think a lot of this stuff, designed to give everyone a common code philosophy, is actually just resulting in a lot of unnecessarily overcomplicated code.


In the course of my career I've learned a simple rule for processing programming wisdom. I don't follow any programming principle or practice unless:

- I have seen value from it firsthand, or

- I'm working alongside somebody who says they have seen value from it firsthand, or

- I've read something that has persuaded me of the likely benefit, and I'm curious to see if I can figure out how to realize it in a real project, as an experiment.

The obvious upside of this is that I don't get ripped off by useless bullshit. I can't tell you how much "best practices" programming "wisdom" I ignored that later disappeared without a trace, completely unmourned, like ashtrays on airplanes.

But it also saves you from doing good stuff wrong, or good stuff in the wrong context. A principle or practice may be amazingly effective, but if you don't understand it, you likely won't get much benefit from it. And very often if you don't understand something, it's because you've never seen a context in which it makes sense. If you try it out, you should try it as a conscious experiment, and drop it if you can't make it pay for itself. Don't keep doing it out of a sense of duty, or a feeling that "good programmers do this."

I'll never be entirely sure if something is 100% bullshit, or if I've never seen the right context for it, or if I'm just too stupid to understand it. Thanks to my first time working in an OO monolith in a dynamically typed language, I have finally, for the first time in more than twenty years in the industry, understood what some of the old OO design ideas are about. It's a shame that my initial exposure to those ideas was through many years of watching people misapply them to create unnecessary mess in Java services.

Maybe I'm wrong about a lot of other things. Maybe someday I'll be working on a project and a light bulb will go off in my head: "Oh my God, so THIS is what dependency injection is good for!" I can't know if that will happen, but I do know that I won't spend the rest of my career setting up dependency injection on every single project I work on just because other people regard it as a prudent and mature thing to do.


There's a level beyond that where you actually figure out how to write good abstractions. It's likely you thought you were making good abstractions and useful indirections, but you weren't, hence the problem. Concrete code with little indirection will be better than badly thought-out abstractions with unnecessary layers and indirections. That said, good ones, well designed and well thought out, are worth their salt and can be huge force multipliers for future runway.


I think the parent's point is that you can be a great engineer but also have limited knowledge of any given problem domain. What constitutes a good abstraction is driven in large part by that domain knowledge, rather than by your pure skill as an engineer.


There are also abstractions that are not domain related; I'd consider those more a part of product design. But software design can benefit from good abstractions at multiple levels, and it constantly does. Knowing how to design and use good abstractions is very hard, though, and bad ones, or bad use of them, will be worse than none.

For example, a schema is an abstraction. Choosing to have a strictly defined schema for your stored data is choosing to add a layer of abstraction. You could simply store things as JSON directly: serialize whatever object you have into JSON and be done with it.

Or you could choose to add a layer of validation and create a JSON Schema, then set things up so the concrete data is created using the abstract schema definition in a way that also automatically sets up validation of the data against that schema.
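A minimal sketch of the difference (the field names are just illustrative, and I'm leaving the actual validator library out of it): the "just serialize it" route stores whatever shape the object happens to have, while a declared schema pins down the expected shape so malformed documents can be rejected before they're stored.

    // Schemaless route: whatever shape the object had, that's what gets stored.
    String rawDocument = "{\"name\":\"Ada\",\"health\":100}";

    // Schema route: a declared JSON Schema (standard keywords, made-up fields)
    // that a validator can check every document against at write time.
    String documentSchema = """
        {
          "type": "object",
          "required": ["name", "health"],
          "properties": {
            "name":   { "type": "string" },
            "health": { "type": "integer", "minimum": 0 }
          }
        }
        """;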

Sometimes this is overboard and too complicated for whatever you're doing; sometimes it's an amazing addition to a code base that really simplifies things and makes you more productive.

Edit: I'll give another, simpler example as well, to show how abstractions are relevant at all levels.

Take a Player Class, where Player has a position in the world map.

You could go the concrete direct route:

    Player {
      String name;
      List<item> inventory;
      int health;
      int x; // x position in world
      int y; // y position in world
    }
Or abstract out Position:

    Player {
      String name;
      List<item> inventory;
      int health;
      Position position;
    }

    Position {
      int x;
      int y;
    }
This is more indirect and there's an extra abstraction, Position, but there are scenarios where it's much better like that, mostly if positions are often managed by other things or moved around and manipulated in similar ways, be it for Players or NPCs or Cars, etc.

And there's scenarios where it wouldn't benefit much.

The solution where position is just concrete ints on the existing Player class is more concrete and direct, but not always the best.

Now, a mistake I find mid-level engineers make is that they'll read a blog or book saying it's much better to abstract out Position like this, for x, y, z reasons. And then they'll do it for everything; they'll apply it to `name`, for example:

    Player {
      PlayerName name;
      ...
    }

    PlayerName {
      String name;
    }
Doing that will make your code base a nightmare. Future you and other engineers might hate it: why is everything abstracted like this? What's the point? What are the reasons behind it? What are the benefits?

You could conclude never to abstract anything ever again, and that would be better than the monstrous everything-over-abstracted-for-no-reason (and often badly implemented) mess, so it's an improvement. Later on you'll get even better and learn the nuances: when, why, and which abstractions, in just the right place, in just the right amount, in just the right way, and it'll be better still.


I can almost guarantee you that any codebase with a "Player" class that looks anything like your example is a very poor codebase. It shows me they just didn't know where to start, so they started by throwing everything in there. Abstractions are always about the consumer of the abstraction, not the implementer. No consumer needs everything in "Player", so it's a terrible abstraction, and it's not just a data type or service or implementation of something else ... it's a God Class that hasn't earned its keep.

The focus on building abstractions is misguided. You don't build an abstraction because you have stuff lying around that implements things -- you build an abstraction because you need it to do your job. That's the only valid reason to ever build an abstraction: you, as the consumer, need the abstraction to do (or to define) your own job. As a consequence of this, most abstractions should be defined before they're implemented. It really feels like most people miss the point on this one, and that's why we end up with bloated abstractions. They're not about what you have. They're about what you need.

That means you should actually have lots of abstractions (assuming you have lots of different needs throughout your code), and they should all be simple, small, and clear. It should be obvious how to implement them, and obvious what they're used for. They have to be: that's how they were built to begin with.
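A minimal sketch of what that looks like, reusing the Position idea from upthread (the interface and class names here are hypothetical): the abstraction is exactly what one consumer, say the movement code, needs to do its job, and nothing more.

    // Position as sketched upthread, here as a small immutable value.
    record Position(int x, int y) {}

    // Defined by the consumer's need: "something whose position I can read and update".
    interface Movable {
        Position position();
        void moveTo(Position p);
    }

    // The movement code depends only on this small abstraction; Player, Npc,
    // Car, etc. can implement it without the movement code knowing about them.
    class MovementSystem {
        void step(Movable m, int dx, int dy) {
            Position p = m.position();
            m.moveTo(new Position(p.x() + dx, p.y() + dy));
        }
    }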

(In fact, while we're at it, the focus on classes is misguided too. Why does everyone think you need to make classes that mirror common nouns in real life? Bad CS education?)

I could absolutely see "Position" (and, critically, everything in it) as something some service needs to do its job. In fact by simply looking at that class, I've learned a lot about how your game works: it's 2D (no Z) and probably tile-based (ints, not floats). We've made a decision: that's how position works in this game. How does movement work? Start that next -- it will use Position. Keep picking away at the edges, making useful decisions about the game, etc. Build abstractions only when you need them to answer that question: "how does X work in this game?" You will never get to the point where you build a "Player" class like that, which is why I can confidently say that a codebase with such a class must inevitably suck.


> I can almost guarantee you that any codebase with a "Player" class that looks anything like your example is a very poor codebase. It shows me they just didn't know where to start, so they started by throwing everything in there.

And then there is Unreal Engine 5's ACharacter class[0] :-P. I recommend checking the superclasses too.

[0] https://docs.unrealengine.com/5.0/en-US/API/Runtime/Engine/G...


First off, I'll say that popular frameworks optimize for being popular, which usually means they let inexperienced people make cool things quickly. This necessarily involves tradeoffs that end up being "walls" to more experienced coders. It's very very hard to let inexperienced people make cool things quickly without restricting power-coders. So "Unreal does it" doesn't necessarily mean it's the right choice for great code -- it only means it probably helps inexperienced coders make cool things quickly.

> Characters are Pawns [AI or human decision-maker] that have a mesh, collision, and built-in movement logic.

Indeed that's a combination of a lot of different responsibilities. Probably too many. Why built-in visuals (mesh) but not built-in audio? Why a mesh and not built-in particle effects? I'm guessing it's just because that is the combination that they found helps inexperienced coders make cool things quickly. I'd be really curious whether people who spend a lot of time tweaking their engine, or make games that are more complicated than just Another FPS, actually use that class much. I suspect they either don't, or they have several similar varieties of their own, which they sorta switch between as it makes sense and then go "Dammit, I wish we had made this an ACharacterTypeSeven, not an ACharacterTypeSix!!"

Of course there are times when you combine responsibilities together into larger objects, but the trick there is to always accept that this is just one projection, one perspective on the entity. If you start to think of that ACharacter object as the character, you'll have problems. It's an arbitrary boundary. When you come up with a cool idea to, say, have your character split in two parts with independent motion before merging back together a few seconds later, is that two ACharacter instances or one? You've duplicated some parts of it, but not others.

"But dude, YAGNI! Don't try to predict the future" you say, missing the point. I'm not saying restructure your code just in case someone wants to split characters in two later -- that's YAGNI. I'm saying throw what-ifs at your code to see if it holds together as a sensible concept right now. You future-proof your code by making sure its concepts are clean, independent, and composable, not by trying to predict the future. My character-splitting example is not an example of something we should plan for, but rather an example of why the concepts may not actually fit together that well. When I look at ACharacter, I don't see something that's composable -- I see something that's already composed for you, and if you want a different composition, it looks like a pain in the ass. That tradeoff makes sense if your main goal is to help inexperienced coders make cool things quickly, but it does not make sense for the codebase you rolled yourself.


The technique you've described of viewing simple object combinations as "projections" is an excellent one; composability in your system emerges from efficiently selecting and combining these projections into the desired combination. At different points in your system you just take different projections. Cross-cutting concerns are a breeze.

It actually all starts to feel like.... SQL! State is stored globally in a defined schema and queried as needed by the system.

But you can't do this if your compositions are preordained from on high by a rigid class hierarchy; the data is crystallized into the "blessed" projection and that's that. It's analogous to your SQL queries being constrained to static views only. No JOIN. No GROUP BY. No WHERE.
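One way this can look in code, as a minimal sketch (the component names, entity ids and the "nudge" system are all made up): each projection is just a table of components keyed by entity id, and each system joins only the tables it needs, much like a SQL query.

    import java.util.HashMap;
    import java.util.Map;

    class World {
        record Position(int x, int y) {}
        record Health(int hp) {}

        // State lives in per-component "tables" rather than in fat objects.
        final Map<Integer, Position> positions = new HashMap<>();
        final Map<Integer, Health> healths = new HashMap<>();

        // A movement system queries only the Position table...
        void nudgeEveryoneRight() {
            positions.replaceAll((id, p) -> new Position(p.x() + 1, p.y()));
        }

        // ...while a damage system would "join" positions and healths by id,
        // the way a SQL JOIN combines rows across tables.
    }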


> So "Unreal does it" doesn't necessarily mean it's the right choice for great code [...] "But dude, YAGNI! Don't try to predict the future" you say, missing the point.

Actually I'd say the opposite: "Unreal does it" indeed doesn't mean it's the right choice, but "Unreal does it" does prove that in practice this stuff doesn't matter. Unreal is a codebase going back decades, and yet it is as popular among developers as it ever was (some developers even throw away their own engines to switch to it).

So while these topics can be amusing to read, in reality they are bikeshedding of little more importance than using spaces vs tabs or where to put curly braces and how that affects diff tools.


I wholly agree with you and your commentary here is one of the most profound things I've read about software engineering in a long time. But, to play devil's advocate,

> I'm guessing it's just because that is the combination that they found helps inexperienced coders make cool things quickly

Is not "making cool things quickly" the essence of enterprise programming? Sure, you can make the cleanest, most perfectly abstracted code for yourself when the requirements are well-defined and unchanging, but that's not the environment you find in business. One might contend that such a combination is the optimum for enterprise programming/making cool things quickly.


> Is not "making cool things quickly" the essence of enterprise programming?

I think it's not. I'm not sure there's anything quick about enterprise programming. If you're cynical, the essence of enterprise programming is selling absurdly expensive software to clueless senior leadership that will never use it. If you're optimistic, the essence of enterprise programming is being a good data steward while elegantly handling the needs of a lot of different stakeholders and interfacing with a lot of different systems (some automated, some implemented only in brains).

In game programming, if you can't figure out a good way to get the camera to work in one particular level, you just scrap or redesign the level. In enterprise programming, if you can't figure out a way to import a particular Excel format, you could seriously harm the usefulness of your project or even lose a contract. You have to "get it done", and there are a lot of "its" to get done.

When I say "make cool things quickly", I mean that there are tradeoffs between having high velocity in the beginning (standard templates, pre-defined assets, content management systems, implement the whole thing in Salesforce) vs. maintaining that velocity through the lifecycle of a potentially very long project. I claim that one of the things that makes popular frameworks popular is because they tend to heavily prioritize the former over the latter. That is great for going from 0 code to shipped quickly, but it's the wrong choice for 5+ year projects like you see in enterprise.

In fact I think one of the (many) things that poisons modern enterprise programming is the emphasis on tools that get you going quickly, rather than tools that stay loyally by your side through the whole project lifecycle. MongoDB is quick and easy to set up, because you can just throw whatever JSON objects you want in there, without spending all that time worrying about "schema" (I do think people spend too long worrying about schema, but the answer is not to abandon it -- that's a whole other subject). But you still have a schema! It's just that now you don't have a dedicated tool to help you with it, and as your needs and data change, you're the one responsible for keeping it up to date. It seems very easy to get mismatched or out of date JSON objects in there and very hard to clean it up (although I'm no expert on JSON databases). Whereas SQL Server or PostgreSQL will support your changing schema very well throughout the whole project lifecycle. If you took over a 15-year-old project, would you rather it had been using Postgres or MongoDB that whole time? I know I'd prefer Postgres.


I don’t disagree with anything you’ve said, but just to the specific narrow example of UE’s ACharacter: there’s nothing stopping you using APawn as your “player” and composing the collision, mesh, movement and functionality as you please - in fact, I suspect the majority of people using Unreal Engine for anything other than toy projects would do just that. I think the “pre-composed” ACharacter exists mainly to help with quick prototyping.


> They're about what you need

I've no experience with game dev, but in other areas of development what you need is often not known ahead of time (which I believe the parent is trying to say). Operating under those conditions makes the Position abstraction somewhat arbitrary (until it's obvious it's needed by other parts of the system). Aggressive refactoring and robust testing are necessary when operating under these conditions.


> what you need is often not known ahead of time

Well that's kind of the point of software engineering, isn't it? Actually typing code is only a small part of software engineering, and if you don't know what you need yet, then you're probably not ready to write code. That doesn't mean you're not being a productive software engineer! It just means you're still working toward that point.

I should be more clear about what "what you need" actually means. Let's say I sit down to (A) prove the four-color theorem, which says that no more than 4 colors are needed to color in a 2D map with no two adjacent regions having the same color; or (B) write a function to color a map using as few colors as possible. Before I can start on the meat of it, I have to decide exactly what I mean by "map". Anything that (A) relies on my proof or (B) calls my function is going to need to turn its data into the kind of "map" I'm working with.

Oh, I know what a map is: it's an ArcGIS Pro 10.7 Geodatabase with a Polygon layer! Hand me one of those and I'll assign a 32-bit ARGB value to each polygon. What? You don't have ArcGIS Pro 10.7? Well, sucks to be you. Obviously it has to be that, since I need insert esoteric proprietary feature in order to color it.

Hmm, well, okay, maybe I don't need every feature of ArcGIS Pro 10.7 Geodatabase Polygon layers. In fact, I really just need a list of regions and how they're connected. Do I need literally every wiggle-waggle of every border between each region? Well, not really ... in fact all I really need to know is which ones touch, and which ones don't. In fact, it turns out what I need is a Planar Graph. It probably shouldn't have any loops (nodes connected to themselves) either. A loopless planar graph. If you hand me one of those, I can color it for you -- in fact, I'll just assign each node 1 through (up to) 4 to represent the colors, and you can do whatever you want with that information, rather than me picking the actual ARGB values for you.

The reason I associated it with a math proof is because it's more clear in that case that reducing the preconditions on the objects you accept increases the power of your proof. Proving something interesting about all multiples of 5 is less powerful than proving it about all integers, and less powerful than proving it about all elements of an Abelian Group, etc. Writing your code to work on any loopless planar graph is much more powerful than writing it to only work on <Random Complex Proprietary Format>.

And we settled on "loopless planar graph" not because we had a bunch of graphs lying around -- we probably didn't! We probably actually had a map representation we've used elsewhere. We settled on "loopless planar graph" because that is the minimal possible description of the objects we can run our code on. That's what I mean by "what we need to do our job". That is the birth of an abstraction.
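As a rough sketch of that abstraction in code (the interface is hypothetical, and the greedy colorer below is just a stand-in; a real four-coloring algorithm is considerably more involved): the colorer asks only for the regions and which ones touch, and any map representation can be adapted to it.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // The minimal description of what the colorer needs to do its job.
    interface LooplessPlanarGraph {
        Set<Integer> nodes();
        Set<Integer> neighborsOf(int node);
    }

    class MapColoring {
        // Assigns each node the smallest color index not used by a neighbor.
        static Map<Integer, Integer> color(LooplessPlanarGraph g) {
            Map<Integer, Integer> colorOf = new HashMap<>();
            for (int node : g.nodes()) {
                int c = 1;
                while (neighborHasColor(g, colorOf, node, c)) c++;
                colorOf.put(node, c);
            }
            return colorOf;
        }

        private static boolean neighborHasColor(LooplessPlanarGraph g,
                                                Map<Integer, Integer> colorOf,
                                                int node, int c) {
            for (int n : g.neighborsOf(node)) {
                if (colorOf.getOrDefault(n, 0) == c) return true;
            }
            return false;
        }
    }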


This is a great comment and should be read carefully by anyone browsing this comment section looking to learn more. Very well explained!!


But the midpoint of that would be to type-pun PlayerName, because while a PlayerName is a string, not all strings are player names, and it's good to be able to see in the code base what types are being passed around.

One of the mistakes I really hate seeing people make in typed languages is not using types for distinctly important data sets. A good example is when you have ciphertext and plaintext being passed around: at the application level you want to be really sure that you're accepting and using ciphertext in the parts that need it, even if both are technically valid string types.
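A minimal sketch of that idea (the type and method names are made up): wrap the two kinds of strings in distinct types, so the compiler, rather than a code review, catches a plaintext being handed to something that expects ciphertext.

    // Distinct types for data that happens to share a representation.
    record Plaintext(String value) {}
    record Ciphertext(String value) {}

    class MessageStore {
        // The signature documents and enforces what it accepts; passing a
        // Plaintext here is a compile error rather than a runtime surprise.
        void persist(Ciphertext message) {
            // write the already-encrypted value to storage (omitted in this sketch)
        }
    }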


> For example, a schema is an abstraction

Meh, I'd say a strictly defined schema, moving database consistency logic into the DB, etc., is an example of a bad abstraction in most cases I've seen it used. The idea sounded really good when I was a junior: you can have the data layer enforce integrity from all sources.

Except most applications are the exclusive owner of the DB and its schema; even in the microservice world it's one database per service. If I see other apps hooked up, it's passive readers/exporting/logging/etc.

SQL databases still don't play well with staying in sync with the repo (it requires specialized tools or extra care, which again usually means extra tools).

Database schema constraints are often crude and/or complex and don't scale well. It's common for people to avoid even rudimentary things like foreign keys because of what they can mean in terms of locking/ordering and write throughput. And as for using things like callbacks, good luck.


I agree that SQL vs. git is not a perfectly solved problem, but I would argue that NoSQL vs. git is an even harder one, where the state of the DATA does not necessarily match what your current code says: you need to remember/comment that some fields did not exist in past data, or run jobs to migrate the old data, etc. It is doable, but not obviously better than where SQL is at.


Dealing with breaking migrations is hard with or without types, but I agree that having a database schema catches this sooner and more reliably (analogous to having an API schema and catching breaking changes by diffing).

But from what I've seen using schema to enforce data consistency brings more problems than benefits.


A (json) schema is a specification, which is the opposite of an abstraction.


An abstraction is something that can be made concrete. For example in OOP, Classes and Interfaces are abstractions.

An abstraction has no material presence; in code, the concretions are the actual data in memory, in their exact places and precise arrangements and linkings, on a particular machine.

Source code is an abstract representation of a running program for example. Everything in source code form is an abstraction.

Generally an abstraction has gaps; on its own, it can't actually be run to produce the exact behavior you want.

A schema is an abstraction because a schema isn't executable into the concrete running behavior you're trying to implement. A schema, or a data specification (which is just another word for schema), is definitely an abstraction. It abstracts over the actual concrete data you will have at runtime.

Everything that is not in the final concrete running form it needs to be at runtime is an abstraction.

A specification abstracts the implementation away, it's an abstract idea of what you want, but does not specify the implementation.

Unfortunately the concept of abstraction is itself very abstract, but one thing a lot of people don't realize is how many things are actually abstractions.

Even a simple function is an abstraction. It can only run once actual values are provided for its arguments; it is but a template abstracting the idea of mapping inputs to outputs using some logic. You need an instance of it with actual values for its arguments to be able to run it. A function with concrete values provided as arguments is a concrete thing, but the function definition, i.e. the code for it, is simply an abstraction.

You can further abstract over abstractions: a function signature abstracts the implementation further away, to be filled in later.

And generally speaking, even a running program is simply an abstraction of reality, a simulation of something real; but this is where you start to enter product design: how best to abstract over the real-life use cases you're trying to represent, model and simulate in a program.

Programming Languages and other frameworks often provide you means of modeling abstractions, constructs that can help you define your own abstractions. Those tools will vary from language to language, in some OO language like I said you're given Classes, static types, Interfaces, inheritance hierarchies, etc. In some functional languages you'll be given abstract data types, functions, higher order functions, type classes, etc.

I could keep going on, but I find the concept of abstraction itself is often misunderstood.

P.S.: You can easily argue for a different definition; arguing semantics has no definite truth, since definitions of words are just axioms we choose. I believe it is more useful and beneficial to define abstraction as I just did, for being able to better reason about, and make judgement calls on, how exactly to structure and design software code. I'd encourage others to give it a try: attempt to rediscover abstractions as I just described, and in my experience you'll learn a lot, and you might become a better programmer out of it.

Just my 2 cents.


> actually figure out how to write good abstractions.

There's an element of no-true-Scotsman in this argument. Most codebases that I've seen have excessive amounts of unnecessary abstraction. It's rare to see a codebase that has too few abstractions. You can of course make the argument that "they just weren't creating the right abstractions", and that's not necessarily incorrect; it's just unhelpful as a piece of advice. You can take any methodology, no matter how bad it is, and claim that any apparent faults in the methodology are simply the result of people applying it incorrectly. "No true Scotsman would have created this abstraction."

Since the needle is currently pointing in one direction more often than the other, I think it's generally helpful to shell out advice that moves the needle in the other direction: advice such as "less abstractions is generally better".


There was a time when there were seldom any abstractions: people wrote in assembly code, as close to the concrete machine as you could be. It was painful and complicated, and doing anything was tedious, effortful and slow.

Abstractions were clearly needed. Higher level languages abstracting over the machine lower level details were needed.

Then there was a time when abstractions themselves were very simple: branching and looping were all just done with "goto". It was error prone and confusing, and it made working with other people's code bases difficult. Abstractions were clearly needed, something to abstract over the lower-level details of branching and looping and the memory management tied to them.

Fast forward to Java. Now we already started with quite a lot of abstraction, yet there were still times when things were more tedious than they needed to be; more abstraction was still needed. That led to the addition of interfaces, the development of frameworks like Spring, the creation of template languages like JSP, and the addition of code-generation tools like Lombok or API generation like OpenAPI.

Once again more abstraction became hugely beneficial, delivering real productivity boosts while still helping to make things clearer, not more obfuscated. It is true that at each layer it becomes harder to understand how all these abstractions reduce back to some concrete instance at the end of it all. But if you can trust them, you need not worry about that: a good abstraction lets you forget and ignore the complex details underneath it, freeing you to focus on higher-level concerns progressively closer to your real domain problem and further from the machine's concerns.

Finally, enterprise software reached a point where managing complexity got difficult, so people tried to promote the best practices they had learned, basically ways to fit in more abstractions in certain situations, which again benefited them greatly. There was a big push to advocate for "design patterns" and other judicious uses of abstraction.

Lots of people, often mid-level developers, me included at the time, went looking for advice. While we didn't understand why, what need drove this advice, or what use actually benefits from it, we took it to heart: SOLID principles, GRASP, YAGNI, inheritance, interfaces, composition. We took it all at face value and tried to apply our limited understanding of it everywhere we could, religiously and indiscriminately.

This frivolous misuse of abstractions yielded the over-engineered, obfuscated, puzzle-like code bases that plague a lot of enterprise software. Where the hell is the actual code doing the actual thing?

This had more senior engineers once again trying to push a new "best practice", a new commandment to make amends for the misunderstanding of the prior ones: "less abstractions is generally better". Or in other words: just use the abstractions more experienced people have already put in place, stick to your popular framework, follow its existing patterns, stick to simple usage of your programming language, and don't try to be smarter than you are, aka too clever.

This is great advice, and I'm absolutely in support of it; some developers aren't ready to hear the more nuanced version of it, and it might lead them down the wrong path again.

But my point is, good abstractions are really awesome, and by the definition of what makes them "good", they actually help rather than hurt. There are countless examples of good abstractions throughout the history of software development. There are even more minor abstractions that everyone implements on a daily basis without realizing it, where, once again, being better at them results in better code: simply choosing what a method will be and what its arguments and return value will be, or choosing where the data will live.

So my point is, in my opinion, a senior engineer is one who knows about the "generally" part of "less abstractions is generally better". A senior engineer knows when fewer abstractions are better and when more abstractions are better. Don't stunt your growth by once again being religious about a best practice and arbitrarily being against all abstractions just because the best practice said to avoid them.


> Now I try to write dumb and simple (yet sensible) code until there’s a good reason for abstractions.

Good abstraction is a lot harder than most think. We tout abstraction's benefits as a reason to abstract but fail to recognize bad abstraction and how it completely negates any would-be benefit.

Too often an abstraction (so called) requires a lot of research into how it's implemented. By the time you've figured out enough implementation details to use the abstraction, you could have implemented it yourself more quickly. It saves no time or effort, but you're stuck with it because existing code is hard to part with.

If you're not relieving your users of a shit ton of implementation details, then it's just not good abstraction. Code for someone who doesn't want, or doesn't have time, to learn the details. Often this will be your future self, despite your current belief that your mind is a steel trap and you would never forget how you did this stuff.


I blame this tendency to overabstract on the emphasis on top-down design / teaching methods. Beginners are taught to abstract whenever possible, and aren't taught when to stop. They don't see the reason behind it, and instead add abstractions dogmatically, dramatically increasing complexity in the process. When abstraction is used well it definitely decreases effort and increases flexibility, but all too often it's overused and results in "object-oriented obfuscation" instead.


An approach that has been working for me is to work bottom-up. Think about what basic functionality you need, and start implementing it. Once the pieces work properly, you can start composing and orchestrating them into larger units. That way you're less prone to building Babylonian towers of superfluous abstractions and indirections. Doing it in a way that's maintainable and extensible comes with experience.


I don't know... the far more common problem I see is unwillingness to abstract. Spaghetti is way more frequent than massive over-abstraction. Now the popular thing is the move toward a functional-like style, which leads to one stream of flow that is quite difficult to decipher.


> which leads to one stream of flow that is quite difficult to decipher.

By the definition of one stream of flow, this is literally easier to follow, lol. One stream of flow as opposed to what? Several streams that branch and intermingle? Spaghetti is several intermingling, branching streams, which is very hard to follow. Following one stream is easy: you just follow the stream /shrug


No, it is not easier to follow in practice. Second, the definition of "one stream of flow" is not "easier to follow"; it is "one stream of code". It forces you to keep all the ifs and fors in your head all the time and does not explain what they mean.

Yes, as opposed to named structures you can understand in isolation and then treat as units.
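A small sketch of the contrast, reusing the Player fields from upthread (the predicates and the report class are made up): the same filter written as one inline stream, versus as named units you can read and test in isolation.

    import java.util.List;

    class PartyReport {
        // One stream of flow: every condition has to be held in your head here.
        static List<String> survivorsInline(List<Player> players) {
            return players.stream()
                .filter(p -> p.health > 0 && !p.inventory.isEmpty())
                .map(p -> p.name)
                .toList();
        }

        // Named structures: each piece can be understood (and reused) on its own.
        static boolean isAlive(Player p) { return p.health > 0; }
        static boolean hasLoot(Player p) { return !p.inventory.isEmpty(); }

        static List<String> survivorsNamed(List<Player> players) {
            return players.stream()
                .filter(p -> isAlive(p) && hasLoot(p))
                .map(p -> p.name)
                .toList();
        }
    }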


> you can’t tell the future

This is so true. It's tricky because some things are worth thinking ahead about a little, but on average I've learned that it's far better to focus on making things easy to change than to make them directly accommodate future requirements (often at the cost of immense complexity)... If the code is so small and simple that you can easily rewrite it, then it is future proof and easy to understand and maintain: win win.

It's easy enough to understand this abstractly, but it takes practice to know when to plan ahead and when not to. A good oversimplification is: if you know it's a future requirement, it's worth thinking about, maybe even worth making space for in your architecture/API/data/whatever; if it's an unknown, just don't bother, and try to keep the code simple instead so that you can adapt.


Well said. We are having a similar "fight" where I work now. Way too much premature optimization. I think the one concept that takes a while to really understand is to focus on separation of concerns. It is far too easy to get encapsulation wrong. People (imo) generally forget to apply the SoC test that tells them where to draw the line. We also have a lot of parallel development where refactoring a View (screen) could cause rippling failures in other people's code. Too often they try to reuse everything instead of doing the important part: extracting the shared business logic into an XXManager and letting the Views just be dumb. Who cares if two Views developed in parallel end up too similar? Things might change in the future, and you don't have to worry about one View doing two jobs.


There are also abstractions that are so good they are invisible until someone tries to reinvent the wheel without them. They are then forced to confront some reality which is more complex than they thought.

What seems to be the common thread is: "you are not as good as you think you are".

Never enough humility.


I don't understand this distinction between futureproof vs simple code. I would have thought they are the same.


They might be the same, but often future-proof code is written with a future requirement in mind, with extra "hooks" added in to allow easy addition of that future requirement.

For example, you might add an ORM to abstract away the DB-specific SQL, even though you only run on one database type, because it allows you to switch databases in the future.
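A minimal sketch of that kind of "hook", using a hypothetical repository-style interface rather than a real ORM: callers depend on the interface, and the database-specific code lives behind one swappable implementation.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // A made-up domain type and repository interface; callers only ever see
    // this, never the SQL (or ORM) behind it.
    record Customer(String id, String name) {}

    interface CustomerRepository {
        Optional<Customer> findById(String id);
        void save(Customer customer);
    }

    // The store-specific code lives behind one implementation; switching
    // databases later means adding another implementation, not touching callers.
    class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<String, Customer> byId = new HashMap<>();

        public Optional<Customer> findById(String id) {
            return Optional.ofNullable(byId.get(id));
        }

        public void save(Customer customer) {
            byId.put(customer.id(), customer);
        }
    }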


This is exactly what I like about Golang.



