> A function with well-constrained inputs and outputs is easy to reason about.
It's quite easy to imagine a well factored codebase where all things are neatly separated. If you've written something a thousand times, like user authentication, then you can plan out exactly how you want to separate everything. But user authentication isn't where things get messy.
The messy stuff is where the real world concepts need to be transformed into code. Where just the concepts need to be whiteboarded and explained because they're unintuitive and confusing. Then these unintuitive and confusing concepts need to somehow be described to the computer.
Oh, and it needs to be fast. So not only do you need to model an unintuitive and confusing concept - you also need to write it in a convoluted way because, for various annoying reasons, that's what performs best on the computer.
Oh, and in 6 months the unintuitive and confusing concept needs to be completely changed into - surprise, surprise - a completely different but equally unintuitive and confusing concept.
Oh, and you can't rewrite everything because there isn't enough time or budget to do that. You have to minimally change the current unintuitive and confusing thing so that it works like the new unintuitive and confusing thing is supposed to work.
Oh, and the original author doesn't work here anymore so no one's here to explain the original code's intent.
> Oh, and the original author doesn't work here anymore so no one's here to explain the original code's intent.
To be fair, even if I still worked there I don't know that I'd be of much help 6 months later, other than an "oh yeah, I remember that had some weird business requirements".
So even if comments are flawlessly updated, they are not a silver bullet. Not everyone is good at explaining confusing concepts in plain English, so worst case you have confusing code and a comment that is 90% accurate but describes one detail in a way that doesn't really match what the code says. This will make you question whether you have understood what the code does, and it will take time and effort to convince yourself that the code is in fact deterministic and unsurprising.
(but most often the comment is just not updated, or is updated along with the code but without full understanding, which is what caused the bug that is the reason you are looking at the code in question)
I don't think comments need to be perfect to have value. My point was that if a certain piece of code is solving a particularly confusing problem in the domain, explaining it in a comment doesn't _necessarily_ mean the code will be less confusing to a future dev, if the current developer is not able to capture the issue in plain English. A future dev would be happier, I think, with more effort put into refactoring and making the code more readable and clear. When that fails, a "here be dragons" comment is valuable.
They can write a very long comment explaining why it is confusing them in X, Y, Z vague ways. Or even multilingual comments if they have better writing skills in another language.
And even if they don’t know themselves why they are confused, they can still describe how they are confused.
And that time spent writing a small paper in one's native language would be better spent trying to make the code speak for itself. Maybe get some help, pair up and tackle the complexity. And when everyone involved is like, "we can't make this any clearer and it's still confusing af", _then_ it's time to write that lengthy comment for future poor maintainers.
You can only do the "what" with clearer code. The "why" needs some documentation. Even if it is obvious what the strange conditionals do, someone needs to have written down that this particular code is there because of the special exemption from import tariffs on cigarettes under the trade agreement between Serbia and Tunisia that was valid between the years 1992 and 2007.
This is where a good comment really can help! And in these types of domains I would guess/hope that there exists some project master list to cross-reference, one that sends both developers and domain experts to the same source for "tariff-EU-92-0578", specifically the section 'exemptions'. So the comment is not just a whole paragraph copied in between a couple of /* */.
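To make that concrete, here's a minimal sketch of such a comment over the strange conditional; the type and field names are invented for illustration, and "tariff-EU-92-0578" is the cross-reference from above:

```typescript
// Hypothetical domain type, invented for this example.
interface Shipment {
  category: string;
  origin: string;      // ISO country code
  destination: string; // ISO country code
  year: number;
}

// Exempt per "tariff-EU-92-0578", section 'exemptions': cigarettes under
// the Serbia-Tunisia trade agreement were exempt from import tariffs
// between 1992 and 2007. Cross-reference the project master list for the
// full source document.
function isTariffExempt(s: Shipment): boolean {
  return (
    s.category === "cigarettes" &&
    s.origin === "RS" &&
    s.destination === "TN" &&
    s.year >= 1992 &&
    s.year <= 2007
  );
}
```

The comment carries the "why" and the pointer; the code carries the "what".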
Thing is, good documentation has to be part of the company's process. E.g. a QA engineer would have to be responsible for checking the documentation and certifying it. That costs money and time.
You can't expect developers, already working 60 hour weeks to meet impossible deadlines, to spend another 15 hours altruistically documenting their code.
How about old, out of date documentation that is actively misleading? Because that's mostly what I run into, and it's decidedly worse than no documentation.
Give me readable code over crappy documentation any day. In an ideal world the docs would be correct all of the time, apparently I don’t live in that world, and I’ve grown tired of listening to those who claim we just need to try harder.
Every line of documentation is a line of code and is a liability, as it will rot if not maintained. That's why you should write self-documenting code as much as possible, code that obviates the need for documentation. But unlike code, stale/wrong docs will not break tests.
Spending 15 hours documenting the code is something no leader should be asking of engineering. You should not need to do it. Go back and write better code: one that's clearer at a glance, easily readable, uses small functions written at a comparable level of abstraction, and uses clear, semantically meaningful names.
Before you write a line of documentation, you should ask yourself whether the weird thing you were about to document can be expressed directly in the name of the method or the variable instead. Only once you have exhausted all the options for expressing the concept in code, then, only then, are you allowed to add the line of documentation regarding it.
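For instance, a made-up before/after where a magic condition gets lifted into a name (the 90-day rule and all identifiers here are hypothetical):

```typescript
// Invented example: the 90-day rule and all names are hypothetical.
const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

interface User {
  lastOrderAt: number; // epoch millis
}

// Instead of:
//   // a user may be deleted if they have no orders in the last 90 days
//   if (user.lastOrderAt < Date.now() - NINETY_DAYS_MS) { ... }
// name the concept, so the comment becomes redundant:
function hasNoRecentOrders(user: User): boolean {
  return user.lastOrderAt < Date.now() - NINETY_DAYS_MS;
}
```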
> Only once you have exhausted all the options for expressing the concept in code, then, only then, are you allowed to add the line of documentation regarding it.
But that's what people are talking about when talking about comments. The assumption is that the code is organized and named well already.
The real world of complexity is way beyond the expressiveness of code, unless you want function names like:
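Something like this deliberately absurd, entirely invented monster:

```typescript
// Deliberately absurd, made up to illustrate the point:
function applyCigaretteImportTariffExemptionPerSerbiaTunisiaAgreementValidFrom1992Through2007UnlessSuperseded(
  shipment: unknown
): void {
  // ...
}
```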
Or you know, work the devs 40-hour weeks and make sure documentation is valued. Everything costs one way or another; it's all trade-off turtles all the way down.
An outdated comment is still a datapoint! Including if the comment was wrong when it was first written!
We live in a world with version history, repositories with change requests, communications… code comments are a part of that ecosystem.
A comment that is outright incorrect at inception still has value: it is at least an attempt by the writer to describe their internal understanding of things.
This. I have argued with plenty of developers on why comments are useful, and the counter arguments are always the same.
I believe it boils down to a lack of foresight. At some point in time, someone is going to revisit your code, and even just a small `// Sorry this is awful, we have to X but this was difficult because of Y` will go a long way.
While I (try to) have very fluid opinions in all aspects of programming, the usefulness of comments is not something I (think!) I'll ever budge on. :)
> // Sorry this is awful, we have to X but this was difficult because of Y
You don’t know how many times I’ve seen this with a cute little GitLens inline message of “Brian Smith, 10 years ago”. If Brian couldn’t figure it out 10 years ago, I’m not likely going to attempt it either, especially if it has been working for 10 years.
But knowing what Brian was considering at the time is useful, both for avoiding redoing that work and for realising that some constraints may have been lifted.
IMO the only thing you can assume is that the person who wrote the comment wasn't actively trying to deceive you. You should treat all documentation, comments, function names, commit messages etc with a healthy dose of scepticism because no one truly has a strong grip on reality.
Right, unlike code (which does what it does, even if that isn't what the writer meant) there's no real feedback loop for comments. Still worth internalizing the info based on that IMO.
"This does X" as a comment when it in fact does Y in condition Z means that the probability you are looking at a bug goes up a bit! Without the comment you might not be able to identify that Y is not intentional.
Maybe Y is intentional! In which case the comment that "this is intentional" is helpful. Perhaps the intentionality is also incorrect, and that's yet another data point!
Fairly rare for there to be negative value in comments.
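A tiny invented illustration of that kind of data point:

```typescript
// Invented example of a comment/code mismatch acting as a signal.

// Returns the average of the samples.  <-- the comment says X...
function average(samples: number[]): number {
  let sum = 0;
  for (let i = 1; i < samples.length; i++) { // ...but this does Y: skips samples[0]
    sum += samples[i];
  }
  return sum / samples.length;
}
// Is skipping the first element a bug, or intentional (say, a header row)?
// The mismatch with the comment is what tips you off to ask the question.
```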
It just occurred to me that perhaps this is where AI might prove useful. Functions could have some kind of annotation that triggers AI to analyze the function and explain it in plain language when you do something like hover over the function name in the IDE; or you could have a prompt where you interact with that piece of code and ask it questions. Obviously this would mean developer-written comments would be less likely to make it into the commit history, but it might be better than nothing, especially in older codebases where the original developer(s) are long gone. Maybe this already exists, but I'm too lazy to research that right now.
But then could you trust it not to hallucinate functionality that doesn't exist? Seems as risky as out-of-date comments, if not more
What I'd really like is an AI linter that notices if you've changed some functionality referenced in a comment without updating that comment. Then the worst-case scenario is that it doesn't notice, and we're back where we started.
Comments that explain the intent, rather than implementation, are the more useful kind. And when intent doesn't match the actual code, that's a good hint - it might be why the code doesn't work.
I mean it's easy to say silly things like this, but in reality most developers suck in one way or another.
In addition companies don't seem to give a shit about straightforward code, they want LOC per day and the cheapest price possible which leads to tons of crap code.
Each person has their own strengths, but a worthwhile team member should be able to meet minimum requirements of readability and comments. This can be enforced through team agreements and peer review.
Your second point is really the crux of business in a lot of ways. The balance of quality versus quantity. Cost versus value. Long-term versus short term gains. I’m sure there are situations where ruthlessly prioritizing short term profit through low cost code is indeed the optimal solution. For those of us who love to craft high-quality code, the trick is finding the companies where it is understood and agreed that long-term value from high-quality code is worth the upfront investment and, more importantly, where they have the cash to make that investment.
>I’m sure there are situations where ruthlessly prioritizing short term profit through low cost code is indeed the optimal solution
This is mostly how large publicly traded corps work; unless they are run by programmers who want great applications or are required by law, they tend to write a lot of crap.
>In addition companies don't seem to give a shit about straightforward code, they want LOC per day and the cheapest price possible which leads to tons of crap code.
Companies don't care about LOC, they care about solving problems. 30 LOC or 30k LOC doesn't matter much MOST of the time. They're just after a solution that puts the problem to rest.
If a delivery company has four different definitions of a customer's first order, and the resulting code has contents that are hard to parse, does the blame lie with the developer, or the requirements?
If the developer had time to do it, with him. Otherwise, with the company.
I'm sure there's some abysmal shit that's extremely hard to properly abstract. Usually, though, the dev just sucks or didn't have time to make the code not suck.
Business requirements deviate from code almost immediately. Serving several clients with customisations adds even more strain on the process. Eventually you want to map paragraphs of business requirements to code, and that is not a 1:1 mapping.
An aging codebase and ongoing operations make it even harder to stay consistent. Eventually people surrender.
Then in 3 months someone comes along and changes the code slightly, making the comment obsolete, but doesn't update the comment. Making everything worse, not better.
Issue trackers are much better, because then in git you can find the tickets attached to each change.
No ticket explaining why - no code change.
Why not in the repo? Because business people write tickets, not devs. Then tickets are passed to QA, who do read the code but also need that information.
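If a team really wants "no ticket, no code change", it can even be enforced mechanically. A minimal sketch as a commit-msg hook, assuming JIRA-style ticket IDs like PROJ-123 (adjust the pattern to your tracker):

```sh
#!/bin/sh
# .git/hooks/commit-msg: reject commits that don't reference a ticket.
# Assumes ticket IDs look like PROJ-123; adjust the pattern to your tracker.
if ! grep -qE '[A-Z]+-[0-9]+' "$1"; then
  echo "commit message must reference a ticket (e.g. PROJ-42)" >&2
  exit 1
fi
```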
So we just fire all the employees and hire better ones only because someone did not pay attention to the comment.
Of course it is an exaggeration. But in the same vein, people who think "others are failing at their jobs" should pick up and do all the work there is to be done, and see how long they last until they miss something or make a mistake.
The solution should be systemic, preventing people from failing, rather than expecting "someone to do their job properly".
Not having comments be one more thing that needs review reduces the workload on everyone involved.
Besides, the interfaces for PRs clearly mark what changed; they don't point out what hasn't been changed. So naturally people review what has changed. You still get the context of course, and can see a couple of lines above and below... But still, I blame the tool, not the people.
Code tends to be reused. When that happens, the Jira ticket is not likely to travel alongside the code. All 'older' Jira tickets become useless broken links; all you have in practice is the ticket name. The same usually happens with 'internal documentation' links as well.
Git blame often lies when a big merge was squashed. I mostly hit this in Perforce, so I might be wrong. Also, when code travels between version control servers, or between different version control software, it loses information as well.
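For what it's worth, git does ship some mitigations for blame rot; whether they help depends on how badly the history was mangled:

```sh
# Ignore whitespace and follow code moved/copied across files:
git blame -w -C -C -C path/to/file.c

# Skip known bulk-reformat commits listed in a file:
git blame --ignore-revs-file .git-blame-ignore-revs path/to/file.c

# Find when a snippet was introduced or removed anywhere in history:
git log -S 'some code snippet' --oneline
```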
I would say, from my practical experience in gamedev, the best comments I've seen are "TODO: implement me" and (unit) test code that still runs. The first clearly states that you have reached outside of what was planned; the second lets you inspect what the code was meant to do.
One of my favorite conventions is ‘TODO(username): some comment’. This lets attribution survive merges and commits and lets you search for all of someone’s comments using a grep.
// TODO: <the name of some ticket>: <what needs to happen here>
e.g.
// TODO: IOS-42: Vogon construction fleet will need names to be added to this poetry reading room struct
I've not felt my name is all that important for a TODO, as the ticket itself may be taken up by someone else… AFAICT they never have been, but they could have been.
Jira entries get wiped arbitrarily. Git blame may not lie, but it doesn't survive larger organizational "refactoring" around team or company mergers. Or refactoring code out into separate project/library. Hell, often enough it doesn't survive commits that rename bunch of files and move other stuff around.
Comments are decent but flawed. Being a type proponent I think the best strategy is lifting business requirements into the type system, encoding the invariants in a way that the compiler can check.
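A minimal sketch of what that can look like, with an invented domain: make the invariant impossible to bypass instead of describing it in a comment:

```typescript
// Instead of a comment saying "amount must be non-negative and currency set",
// encode the invariant in a type that can only be constructed validated.
type Currency = "EUR" | "USD";

class Money {
  private constructor(
    readonly amount: number,
    readonly currency: Currency
  ) {}

  static of(amount: number, currency: Currency): Money {
    if (!Number.isFinite(amount) || amount < 0) {
      throw new Error("amount must be a non-negative finite number");
    }
    return new Money(amount, currency);
  }
}

// Any function taking Money can now rely on the invariant;
// the compiler rules out passing a raw, unchecked number.
function applyDiscount(price: Money, percent: number): Money {
  return Money.of(price.amount * (1 - percent / 100), price.currency);
}
```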
Thank god we’re held to such low standards. Every time I’ve worked in a field like pharmaceuticals or manufacturing, the documentation burden felt overwhelming by comparison and a shrug six months later would never fly.
We are not engineers. We are craftsmen: instead of working with wood, we work with code. What most customers want is the equivalent of "I need a chair, it should look roughly like this."
If they want blueprints and documentation (e.g. maximum possible load and other limits), we can supply them (and do supply them, e.g. in pharma or medicine), but it will cost quite a lot more. By an order of magnitude. Most customers prefer a cobbled-up solution that is cheap and works. That's on them.
Edit: It is called waterfall. There is nothing inherently wrong with it, except customers didn't like the time it took to implement a change. And they want changes all the time.
Same difference. Both appellations invoke some sort of idealized professional standards and the conversation is about failing these standards not upholding them. We're clearly very short of deserving a title that carries any sort of professional pride in it. We are making a huge mess of the world building systems that hijack attention for profit and generate numerous opportunities for bad agents in the form of security shortfalls or opportunities to exploit people using machines and code.
If we had any sort of pride of craft or professional standards we wouldn't be pumping out the bug ridden mess that software's become and trying to figure out why in this conversation.
Hmm, thinking back, I think most companies I worked at (from small ones to the very large tech companies) had on average pretty good code and automated tests, pretty good processes, pretty good cultures, and pretty good architectures. Some were very weak in one aspect but made up for it in others. But maybe I got lucky?
> Both appellations invoke some sort of idealized professional standards
The key point of the comment was that engineers do have standards, both from professional bodies and often legislative ones. Craftsmen do not have such standards (most of them, at least where I am from). Joiners definitely don't.
Edit: I would also disagree with "pumping out bug ridden mess that software's become."
We are miles ahead in security of any other industry. Physical locks have been broken for decades and nobody cares. Windows are breakable by a rock or a hammer and nobody cares.
In terms of bugs, the rate is extraordinarily low as well. In pretty much any other industry it would be considered user error, e.g. "do not put mud as detergent into the washing machine".
The whole process is getting better each year. Version control wasn't common in the 2000s (I think Linux didn't use version control until 2002). CI/CD. Security analyzers. Memory-managed/safe languages. Automatic testing. Refactoring tools.
We somehow make hundreds of millions of lines of code work together. I seriously doubt there is any industry that can do that at our price point.
> We are miles ahead in security of any other industry. Physical locks have been broken for decades and nobody cares. Windows are breakable by a rock or a hammer and nobody cares.
That is not such a great analogy, in my opinion. If burglars could remotely break into many houses in parallel while being mostly non-trackable and staying in the safety of their own home, things would look differently on the doors and windows front.
The reason why car keys are using chips is because physical safety sucks so much in comparison with digital.
The fact is we are better at it because of the failure of the state to establish a safe environment. Generally, protection and a safe environment are among the reasons we pay taxes.
> The reason why car keys are using chips is because physical safety sucks so much in comparison with digital.
Not the reason. There is no safe lock, chip or not. You can only make your car more inconvenient to break into than the next one.
> The fact is we are better at it because of the failure of the state to establish a safe environment. Generally, protection and a safe environment are among the reasons we pay taxes.
Exactly backwards. The only real safety is being in a hi-sec zone protected by social convention and State retribution. The best existing lock in a place where bad actors have latitude won't protect you, and in a safe space you barely need locks at all.
OTOH, the level of documentation you get for free from source control would be a godsend in other contexts: the majority of the documentation you see in other processes is just to get an idea of what changed when and why.
Most software work in pharma and manufacturing is still CRUD, they just have cultures of rigorous documentation that permeates the industry even when it's low value. Documenting every little change made sense when I was programming the robotics for a genetic diagnostics pipeline, not so much when I had to write a one pager justifying a one line fix to the parser for the configuration format or updating some LIMS dependency to fix a vulnerability in an internal tool that's not even open to the internet.
Well, a hand watch or a chair cannot kill people, but the manufacturing documentation for them will be very precise.
Software development is not engineering because it is still a relatively young and immature field. There is a joke where a mathematician, a physicist, and an engineer are given a little red rubber ball and asked to find its volume. The mathematician measures the diameter and computes; the physicist immerses the ball in water and sees how much was displaced; and the engineer looks it up in his "Little Red Rubber Balls" reference.
Software development does not yet have anything that could even potentially grow into such a reference. If we decided to write it, we would not even know where to start. We have mathematicians who write computer science papers; physicists who test programs; standup comedians, philosophers, everyone. But not engineers.
The difference is that the code is the documentation and the design.
That is the problem: people don't understand that point.
The runtime, the running application, is the chair. The code is the design for how to make the "chair" run on a computer.
I'd say in software development we are years ahead when it comes to handling the complexity of documentation, with Git and CI/CD practices, code reviews, and QA coverage with unit testing of the designs and general testing.
So I do not agree that software development is an immature field. There are immature projects, and companies cut corners much more than on physical products because it is much easier to fix software later.
> Oh, and in 6 months the unintuitive and confusing concept needs to be completely changed into - surprise, surprise - a completely different but equally unintuitive and confusing concept.
But you have to keep the old way of working exactly the same, and the data can't change, but also needs to work in the new version as well. Actually show someone there's two modes, and offer to migrate their data to version 2? No way - that's confusing! Show different UI in different areas with the same data that behaves differently based on ... undisclosed-to-the-user criteria. That will be far less confusing.
In many problem spaces, software developers are only happy with interfaces made for software developers. This article, diving into the layers of complex logic we can reason about at once, perfectly demonstrates why. Developers 'get' that complexity, because it's our job, and think about GUIs as thin convenience wrappers for the program underneath.

To most users, the GUI is the software, and they consider applications like appliances for solving specific problems. You aren't using the refrigerator, you're getting food. You're cooking, not using the stove. The fewer things they have to do or think about to solve their problem to their satisfaction, the better. They don't give a flying fuck about how software does something, probably wouldn't bother figuring out how to adjust it if they could, and the longer it takes them to figure out how to apply their existing mental models and UI idioms to the screen they're looking at, the more frustrated they get.

Software developers know what's going on behind the scenes, so seeing all of the controls and adjustments and statuses and data helps developers orient themselves and figure out what they're doing. Seeing all that stuff is often a huge hindrance to users who just have a problem they need to solve, and who have a much more limited set of mental models and usage idioms to draw on when figuring out which of those buttons to press and parameters to adjust. That's the primary reason FOSS has so few non-technical users.
The problem comes in when people who aren't UI designers want to make something "look designed", so they start ripping stuff out and moving it around without understanding how it affects different types of users. I don't hear too many developers complain about the interface for iMessage, for example, despite it having a fraction of its controls visible at any given time, because it effectively solves their problem, and does so more easily than it would with a visible toggle for read receipts, SMS/iMessage, text size, etc. It doesn't merely look designed; it's designed for optimal usability.
Developers often see an interface that doesn't work well for a developer's usage style, assume that means it doesn't work well at all, and then complain about it among other developers, creating an echo chamber. Developers being frustrated with an interface is an important data point that shouldn't be ignored, but our perspectives and preferences aren't nearly as generalizable as some might think.
I'm not particularly bothered by non-developer UI. I'm bothered by the incessant application of mobile UI idioms to desktop programs (remember when all windows programs looked somewhat similar?), by UI churn with no purpose, by software that puts functionality five clicks deep for no reason other than to keep the ui 'minimal', by the use of unclear icons when there's room for text (worse, when it's one of the bare handful of things with a universally-understood icon and they decided to invent their own), by UIs that just plain don't present important information for fear of making things 'busy'. There's a lot to get mad about when it comes to modern UIs without needing to approach it from a software developer usage style perspective.
You're making a lot of assumptions about who's doing what, what problems they're trying to solve by doing it, and why. The discipline of UI design is figuring out how people can solve their problems easily and effectively. If you have advanced users that need to make five mouse clicks to perform an essential function, that's a bad design, and the chance of that being a UI design decision is just about zero. Same thing with icons. UI design, fundamentally, is a medium of communication: do you think it's more likely that a UI designer, a professional and likely educated interactivity communicator, chose those icons, or that a developer or project manager grabbed a sexy-looking UI mockup on Dribbble and tried to smash their use case into it?
Minimalism isn't a goal; it's a tool to make a better interface, and it can easily be overused. The people who think minimalism is a goal and will chop out essential features to make something "look designed" are almost always developers. Same thing with unclear icons. As someone with a design degree who's done UI design, but worked as a back-end developer for a decade before that, and as a UNIX admin off and on for a decade before that, I am very familiar with the technical perspective on design and its various echo-chamber-reinforced follies.
It's not like all UI designers are incredibly qualified, or never underestimate the importance of some particular function to some subset of users; and some people who hire designers don't realize that a graphic designer isn't a UI designer and shouldn't be expected to work as one. But 700 times out of 1000, it's a dev who said "this is too annoying to implement", or some project manager who dropped it from the timeline. Maybe 250 of those remaining times, the project manager says "we don't need designers for this next set of features, right? Dev can just make it look like the other parts of the project?"
Developers read an Edward Tufte book, think they're experts, and come up with all sorts of folk explanations about what's happening with a design and why people are doing it, then talk about it in venues like this with a million other developers agreeing with them. That does a whole lot more damage to UIs in the wild than bad design decisions made by designers.
You seem to think I'm attacking UI designers. I'm not. I think software would be a lot better with professional UI designers designing UIs.
edit: I am making a lot of assumptions. I'm assuming that most UIs aren't really designed, or are 'designed' from above with directions that are primarily concerned about aesthetics.
+1 to all this. And when did it become cool to have icons that provide no feedback they've been clicked, combined with no loading state? I'm always clicking stuff twice now because I'm not sure I even clicked it the first time.
> That’s the primary reason FOSS has so few non-technical users.
Yeah, citation needed. If your argument is that 'non-technical users' (whatever that is; being technical is not restricted to understanding computers and software deeply) don't use software that exposes a lot of data on its internals, as exemplified by FOSS having few 'non-technical users' (meaning people who are not software developers), then it's just false. There are entire fields where FOSS software is huge. GIS comes to mind.
Normally in this rant I specifically note that non-software technical people are still technical. For genuinely non-technical users, what are the most popular end-user-facing FOSS-developed applications? Firefox, Signal, Blender, Inkscape, maybe Krita… most of those are backed by foundations that pay designers, and in Mozilla's case actually do a ton of open usability research. I don't believe Inkscape does, but they do put a ton of effort into thinking about things from the user-workflow perspective, and they definitely do not present all of the functionality to the user at once.

Blender, at first, just made you memorize a shitload of shortcuts, but they've since done a ton of work figuring out what users need to see in which tasks in different workflows, and they have a ton of purpose-built views. For decades, GIMP treated design, workflow, and UI changes like any other feature, and they ended up with a cobbled-together, ham-fisted interface used almost exclusively by developers. You'll have a hard time finding a professional photographer who hasn't tried GIMP, and an even harder time finding one who still uses it, because of the confusing, unfocused interface.

When Mastodon stood a real chance of being what Bluesky is becoming, I was jumping up and down flailing my arms trying to get people to work on polishing the user flow and figure out how to communicate what newcomers needed to know concisely. Dismissal, dismissal, dismissal. "I taught my grandmother how federation works! They just need to read the documentation! Once they start using it they'll figure it out!" Well, they started using it, didn't have that gifted grandmother-teaching developer to explain it to them, and they almost all left immediately afterwards.
Just like human factors engineering, UI design is a unique discipline that many in the engineering field think they can intuit their way through. They’re wrong and if you look beyond technical people, it’s completely obvious.
I'm trying to learn acceptance: how not to get so angry at despicable UIs.
Although I admit I'm kinda failing. My minor successes have been by avoiding software: e.g. giving up programming (broken tools and broken targets were a major frustration) and getting rid of Windows.
IMO the fact that code tends to become hard over time in the real world, is even more reason to lower cognitive load. Because cognitive load is related to complexity. Things like inheritance make it far too easy to end up with spaghetti. So if it's not providing significant benefit, god damn don't do it in the first place (like the article mentions).
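As a rough sketch of the trade-off (all names invented): composition keeps the behavior assembled in one visible place, where deep inheritance scatters it across overrides:

```typescript
// Deep inheritance: to know what save() does you trace BaseRepo ->
// AuditedRepo -> CachedRepo, each overriding bits of the others.
// Composition keeps each concern separate and explicit:
interface Repo<T> {
  save(item: T): void;
}

class DbRepo<T> implements Repo<T> {
  save(item: T): void { /* write to the DB */ }
}

class AuditedRepo<T> implements Repo<T> {
  constructor(private inner: Repo<T>, private log: (m: string) => void) {}
  save(item: T): void {
    this.log(`saving ${JSON.stringify(item)}`);
    this.inner.save(item);
  }
}

// The full behavior is assembled, and readable, in one place:
const repo = new AuditedRepo(new DbRepo<string>(), console.log);
repo.save("hello");
```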
That depends on who thinks it's going to be a significant benefit. Far, far too many times I've had non-technical product managers yelling about some patch or feature or whatever with a "just get it done" attitude. Couple that with a junior engineering manager unwilling to push back and an equally junior dev team, and you'll end up with the nasty spaghetti code that only grows.
Sounds like a bunch of excellent excuses for why code is not typically well factored. But all of that just makes it more evident that well-factored code should still be the ideal to aim for.
>It's quite easy to imagine a well factored codebase where all things are neatly separated.
If one is always implementing new code bases that they keep well factored, they should count their blessings. I think being informed about cognitive load in code bases is still very important for all the times we aren't so blessed. I've inherited applications that use global scope, and they are a nightmare to reason through. Where possible I improve things and reduce the global scope, but that is not always an option, and it is only possible after I have reasoned enough about the global scope to feel I can isolate it. As such, letting others know of the costs helps both to keep this from happening and to convince stakeholders of the importance of fixing it after it has happened, and of accounting for the extra costs it causes until it is fixed.
>The messy stuff is where the real world concepts need to be transformed into code.
I also agree this can be a messy place, and on a new project it is messy even when the code is clean, because there is effectively a business logic/process code base you are inheriting and turning into an application. I think many of the lessons carry over well, as I have seen issues with global scope in business processes that cause many of the same problems as in code bases. When very different business processes converge into one before splitting again, there is often extra cognitive load in trying to combine them. A single instance really isn't bad, much like how a single global variable isn't bad, but it is an anti-pattern that gets used over and over again.
One helpful tool is working one's way up to the point of having enough political power, and having earned enough respect for one's designs, that suggestions to refactor business processes are taken into serious consideration (one also has to have enough business acumen to know when such a suggestion is reasonable).
>the original author doesn't work here anymore so no one's here to explain the original code's intent.
I fight for comments that tell me why a certain decision was made in the code. The code tells me what it is doing, and domain knowledge will explain most of why it does what's expected; but any time the code deviates from what one would normally expect in the domain, telling me why it deviated is very important for whoever is back here reading it 5+ years later, when no one is left from the original project. Some will suggest putting this in documentation, but I find that the only documentation with any chance of being maintained, or even kept, is the documentation built into the code.
The "why" is the hardest part. You are writing to a future version of most probably a different person with a different background. Writing all is as wrong as writing nothing. You have to anticipate the questions of the future. That takes experience and having been in different shoes, "on the receiving side" of such a comment. Typically developers brag what they did, not why, especially the ones who think they are good...
Not necessarily. There are a lot of domains where you're digitizing decades of cobbled together non-computer systems, such as law, administration, or accounting. There's a very good chance that no single human understands those systems either, and that trying to model them will inevitably end up with obscure code that no one will ever understand either. Especially as legislation and accounting practices accrete in the future, with special cases for every single decision.
Plus one to everything said. It's the everyday life of a "maintainer": picking the next battle, picking the best way to avoid sinking deeper, and defending the story that exactly "this" is the next refactoring project. All that while balancing the different factors you mention, just to keep believing it yourself, because there are countless paths.
We use it to automatically instrument code for tracing. Stuff like this is IMO the only acceptable use to reduce boilerplate, but it quickly becomes terrible if you don't pay attention.
It's also good for having default activities performed on an object or subsystem. For instance, by default, always having an object run security checks to make sure it has permission to perform the tasks it is asked to (I have seen this, and it sounds like a good idea at least). And also having some basic logging to show when you've entered and left function calls.
It's easy to forget to add these to a function, especially in a large codebase with lots of developers.
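A rough sketch of the idea as a hand-written TypeScript method decorator (invented; real setups typically hang this off build-time instrumentation rather than decorating by hand):

```typescript
// Requires "experimentalDecorators": true in tsconfig.
// Logs entry and exit for any decorated method, so individual
// functions can't forget to do it themselves.
function traced(
  _target: object,
  name: string,
  desc: PropertyDescriptor
): PropertyDescriptor {
  const original = desc.value;
  desc.value = function (this: unknown, ...args: unknown[]) {
    console.log(`enter ${name}`);
    try {
      return original.apply(this, args);
    } finally {
      console.log(`exit ${name}`);
    }
  };
  return desc;
}

class OrderService {
  @traced
  placeOrder(id: string): void {
    // business logic here
  }
}
```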
This puts things really well. I'll add that between the first whiteboarding session and the first working MVP there'll be plenty of stakeholders who change their minds, find new info, or ask for updates that may break the original plan.
I am so proud and happy when I can make a seemingly complicated change quickly, because the architecture was well designed and everything neatly separated.
Most of the time, though, it is exactly like you described. Or Randall Munroe's "Good Code" comic:
Almost too painful to be funny, when you know the pain is avoidable in theory.
Still, that should not be an excuse to be lazy and just write bad code by default. Developing the habit of making everything as clean, structured, and clear as possible always pays off. Especially when the code that was supposed to be a quick and dirty throwaway experiment somehow ends up being used, and 2 years later you suddenly need to debug it.
(I just experienced that joy)
I mean, really, nobody wants an app that is slow, hard to refactor, with confusing business logic, etc. Everyone wants the good properties.
So then you get into what you’re good at. Maybe you’re good at modeling business logic (even confusing ones!). Maybe you’re good at writing code that is easy to refactor.
Maybe you’re good at getting stuff right the first time. Maybe you’re good at quickly fixing issues.
You can lean into what you’re good at to get the most bang for your buck. But you probably still have some sort of minimum standards for the whole thing. Just gotta decide what that looks like.
> you also need to write it in a convoluted way because, for various annoying reasons, that's what performs best on the computer.
That has nothing to do with hardware. The various annoying reasons are not set in stone or laws of physics; they are merely the path dependency of decades of prioritizing shipping soon because money.