Call options are a better model than debt for cruddy code (without tests) because they capture the unpredictability of what we do. If I slap in a feature without cleaning up, then I get the benefit immediately: I collect the premium. If I never see that code again, then I'm ahead and, in retrospect, it would have been foolish to have spent time cleaning it up.
On the other hand, if a radical new feature comes in that I have to do, all those quick fixes suddenly become very expensive to work with. Examples I’ve seen are a big new client that requires a port to a different platform, or a new regulatory requirement that needs a new report. I get equivalent problems if there’s a failure I have to interpret and fix just before a deadline, or the team members turn over completely and no-one remembers the tacit knowledge that helps the code make sense. The market has moved away from where I thought it was going to be and my option has been called.
More like selling a put: you're hoping it expires out of the money, but if volatility goes up, it will be expensive. Upside is limited and short term, downside potentially unlimited and in the long term, so if your discount rate is high (due, perhaps, to low cash reserves) then the NPV of the trade still makes sense.
People who sell a few puts get away with it for a while, but make it your business and you better be too big to fail.
If everybody is selling puts and levering up, you better do it too or you'll find yourself out of a job. And if volatility increases, everybody is exposed at the same time so you won't get personally blamed anyway, and your career will be just fine.
I think you are underselling the upside, to be honest. History is littered with massive successes that are essentially technical debt, as software folks call it. AC/DC power is actually a fun case study in that regard.
But the success is not directly inherited from technical debt. Technical debt bought time that enabled the success to happen (maybe before the company ran out of money, or allowing it to launch months ahead of the competition), but the company would not have failed if you somehow built it "properly" and otherwise met those constraints.
In other words, you can rationally make the decision to take on technical debt if it allows you to meet those constraints. It isn't a black and white decision.
It seems obvious to say this, but I know plenty of founders and investors who have had success (survivorship bias - if they hadn't they wouldn't be investors/founders) with the technical-debt-and-grow method and therefore apply it to every problem.
Any way you formulate this, it is entirely possible that taking on technical debt is required for a largely unbounded success.
It may be a couple of steps removed, but it is still a direct line. Time to market is a vital metric for everyone. Anything that can help hit a tight time to market can add to the success of a company.
Can you reformulate it so that time to market was irrelevant for success? Only if you are comfortable with what is likely a bad model.
> it is entirely possible taking on technical debt is required for a largely unbounded success
> Time to market is a vital metric for everyone
I disagree with the first statement (I know plenty of exceptions, enough to statistically consider that it is false) and disagree with what you imply with the second. Google launched against 5-6 strong, established search companies. Copying the then-strategy of paid listings was not going to work, and their technology had to be ready before hitting the market. Theranos is an example where the technical debt did not pay off. Time to market was not as important as getting the tech to work.
I had a long meeting/debrief yesterday with a C-level from a freshly failed, large internet group. Their CEO and CTO's strategy to get out of the hole was to technical-debt-and-grow a dozen products in parallel in the hope that one stuck; while several did stick, they did not have the expertise or willingness to do the work to stop the products from breaking, and the company failed by running out of money (which, because technical debt had robbed the accountants of any visibility, surprised them all).
I consider their failure to be entirely due to the idea that technical debt is required for a large unbounded success, which was like a religion for the CEO and CTO. Had they gone "private equity" (cut headcount, focus on core business, invest time in cleaning up) they might have survived long enough to become profitable again.
Many business founders will not have the resources (reputation, money, or even just hustle, EQ and maturity and leadership) to get a good technical co-founder and will have to either take shortcuts, or reconsider their decision to start up without said resources.
You are shifting my claim from technical debt being a possibly necessary criterion to it being the sole criterion. Worse, you are shifting my claim to be that technical debt somehow guarantees success.
Neither of these is my assertion. I simply assert that time to market is a vital metric. If hitting a vital time to market means making some tradeoffs (what we now call technical debt), then so be it.
Take your examples: do you really think that Google did not make some tradeoffs? Having seen some of their products (I'm looking squarely at the pieces of crap that were my early phones from them...), I can confidently say they make rash decisions all of the time.
Now, there is a silly bit of doublespeak in the industry where we say "technical debt" when we really mean just poor work. The two get conflated all the time when they really should not be.
I wish this was true, but we can see that it is not.
Take a look at successful programming languages. Languages like PHP and Javascript are hugely successful despite being universally acknowledged to be terribly designed. In the open source world, despite having very poor reputations for code quality, projects like Wordpress and MySQL are more successful than supposedly well-written projects like Postgres.
One exception to the rule is Perl, which lost out to Python as the de facto scripting language over time (Python is supposedly better designed than Perl 5). But these are exceptions, not the rule.
The value of the option will rise with volatility. It is more apparent if you delta hedge leaving you with a net gamma position (i.e. hoping for quieter or more volatile markets depending on whether you are short or long gamma). Selling an option puts you in a short gamma position whether it is a put or a call.
The subtlety is that volatility falls slowly as prices rise AND rises much faster as prices fall. So (equity) options become much more valuable when things go badly. Check the S&P500 vs VIX and watch how the VIX explodes upwards with each market correction then slowly drops back as prices rise.
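To make the volatility point concrete, here is a minimal Black-Scholes sketch in C++ (hypothetical spot/strike/rate numbers, illustration only, not anything from the article). Whichever side of the trade you sold, the cost of buying the option back rises with volatility:

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Standard normal CDF via the complementary error function.
double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Black-Scholes price of a European call.
double call_price(double S, double K, double r, double sigma, double T) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

// Put via put-call parity: P = C - S + K * exp(-rT).
double put_price(double S, double K, double r, double sigma, double T) {
    return call_price(S, K, r, sigma, T) - S + K * std::exp(-r * T);
}

int main() {
    // Hypothetical at-the-money option: spot 100, strike 100, 1% rate, 1 year out.
    for (double vol : {0.10, 0.20, 0.40}) {
        std::printf("vol %.0f%%: call %.2f, put %.2f\n", vol * 100,
                    call_price(100, 100, 0.01, vol, 1.0),
                    put_price(100, 100, 0.01, vol, 1.0));
    }
}

Both prices climb as sigma climbs, which is the short-gamma pain described above: the writer's mark-to-market loss grows exactly when the market gets rough.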
The metaphor can only go so far. He does not specify the underlying in the article but the mechanics described imply the company's health. If the bet is correct, the company will do well and "volatility will fall" making the option less valuable. You could say the impact of the damage made by technical debt is less important because you have more resources to deal with it (this is how spaghetti ball codebases start, but in the medium term it holds).
On the other hand, if the company does badly (fundraise fails, product does not grow as expected) the relatively small payoff from technical debt is exponentially more expensive and might well wipe out the tech side or slow down development enough to allow the competition to win faster. This is why I say the damage is "potentially infinite" (where infinite is used colloquially rather than formally).
In particular, an unhealthy company with a lot of technical debt will rapidly see its top technical talent bleed away and will find it hard to hire (I'm not theorising - I've many times heard developers who interviewed at well known places tell me afterwards "I liked the pay, the people and the brand, but the codebase was a mess and life is only so long"). Thus the damage done rapidly becomes exponentially higher than the reward (traders will refer to "convexity" or non-linearity in the model, convexity being the degree of curvature), throwing you into a death spiral.
You can even fit in the volatility smile: as more experienced developers ("the market") have in aggregate picked up on the convexity of the technical debt trade, they are more reluctant to take it on the more improbable the payoff is - that is, they compensate for the kurtosis of the return distribution (which is why buying deep out of the money options all the time hoping for 2008 to happen is a bad strategy).
I recommend Nassim Taleb's Dynamic Hedging which despite the dry title explains these ideas quite intuitively.
Totally agree. Unfortunately, it's a market with low liquidity and opaque fundamentals, so black magic reigns supreme.
The greatest difficulty is trying to predict how the product will evolve, and thus how the software must also evolve. We should always be thinking about how the relevancy and requirements of each component of source code may change in the future: next month, in 6 months, 2 years from now, etc.
To better evaluate the risk of each call option, there needs to be a strong, coherent, and tangible vision of what the product should be. This vision should be internalized by the entire team. There also needs to be a healthy, forward thinking dialogue between designers, product owners, and engineers.
Honestly, this seems like a stretched comparison to me. How many managers understand options? And what is even the underlying for your call option? Why would it be a call option and not a put option?
That's precisely the point - the reason so many companies fall into the trap of technical debt is that they not only don't understand the risks of what they're getting into; they lack the language (and analogies) to describe the risks.
I think trying to explain this in the language of options isn't a great comparison; perhaps a clearer way to think about it might be in terms of "robustness" to possible future scenarios (see also Taleb's Antifragile).
There are many different ways to describe this problem, and it's very possible that using a call option as a metaphor is a better fit. Unfortunately, "technical debt" is the term that's currently in fashion, so you at least have to mention it when you're suggesting an alternative approach.
Incidentally, we're big fans of Martin Cronje's article[1] which suggests that "technical debt" is a broken metaphor, and he suggests using "depreciating assets" as an alternative.
We already use depreciation in software when we start writing off capital allocation over time. This new metaphor may be more confusing. I would rather stick with technical debt and give more context.
There is a problem with these metaphors: activities that work against technical debt are often dependent on the skill and experience of the programmers involved. You often get no long term benefit from doing them.
I write code in a specific fashion, which involves a lot of TDD and a lot of refactoring. I've been doing it a long time and have refined it to the point where I expect to get payback in less than 1 week. But it is naive to expect that someone else will use the same techniques and achieve the same benefits right away (I have been very, very naive in this respect at times :-( ).
For example, non-mocking TDD forces a particular set of design decisions to be made up front. It also forces the creation of specific API entry points. Mocking TDD allows you to defer design decisions and makes it cheap to explore large design spaces. It forces you to concentrate on your interfaces. The choice of which to use in various situations dramatically affects how well you can refactor. With non-mocking unit tests, you can get buried in the details of setting up contexts. With mocking unit tests, your tests can become very brittle. It also allows/encourages you to make some pretty horrifying design choices.
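To illustrate the contrast, here is a hand-rolled sketch (hypothetical names, no test framework, just assert): the mocking-style test pins down an interface and an interaction and is cheap to set up, while the non-mocking version would need the real collaborator and all of its context.

#include <cassert>
#include <string>
#include <vector>

// The interface that mocking-style TDD forces you to carve out early.
class Mailer {
public:
    virtual ~Mailer() = default;
    virtual void send(const std::string &to, const std::string &body) = 0;
};

class InvoiceService {
public:
    explicit InvoiceService(Mailer &mailer) : mailer_(mailer) {}
    void invoice(const std::string &customer, int cents) {
        mailer_.send(customer, "You owe " + std::to_string(cents) + " cents");
    }
private:
    Mailer &mailer_;
};

// Mocking style: record the interaction and assert against it. Cheap to set
// up, but the test now breaks whenever the interface changes shape.
class RecordingMailer : public Mailer {
public:
    std::vector<std::string> sent;
    void send(const std::string &to, const std::string &) override { sent.push_back(to); }
};

void test_invoice_sends_mail() {
    RecordingMailer mailer;
    InvoiceService service(mailer);
    service.invoice("alice@example.com", 1200);
    assert(mailer.sent.size() == 1 && mailer.sent[0] == "alice@example.com");
}

// The non-mocking version of this test would construct a real SMTP-backed
// Mailer here, which means standing up a server, config and credentials --
// the "setting up contexts" problem described above.

int main() {
    test_invoice_sends_mail();
}

Neither style is free: the recording mock couples the test to the interaction itself, which is exactly the brittleness mentioned above.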
Even when people do TDD and have lots of tests, they often make poor choices and end up with code that they can't (or don't want to) refactor. The thing that people often do not understand is that unit tests and/or TDD will usually not provide much of a payback (in terms of productivity) alone. It is the refactoring that leads to simple design, reduces complexity, reduces code duplication, makes it easy to reason about changes, etc. Unit tests (and TDD in general) simply give you a framework for making refactoring easier.
The problem, from my perspective, is not about judging risk of upfront payback versus long term payback. When done well, the activities around reducing technical debt pay off so quickly and so dramatically that there are almost no cases where they are inappropriate. The problem is that using the techniques poorly often leads to no payback at all! In fact, sometimes you inherit a project with a rat's nest of crazy tests and the first thing you do to improve your productivity is delete the tests -- on the principle that less code means less complexity that you need to deal with.
Although my own personal style involves TDD and refactoring, I believe that the same can be said for other styles of development that are geared toward reducing technical debt. In my career I've done BDUF (1000 page requirement and design docs FTW!), I've done rapid prototyping (build 20 to throw away!), and a few other techniques. They can all work, but IMHO require even more skill to do well.
This is one of the things I wish someone had told me at the beginning of my career. At the moment my feeling is that there are virtually no cases where tolerating/embracing technical debt will lead to even short term gains. However, the alternative is not straightforward. Young teams must expend considerable effort finding expertise to help guide them forward. More importantly, everybody on the team needs to be aligned to the style that will be used. How you achieve that alignment is yet another can of worms.
Accumulated technical debt makes the software project less robust / less able to respond effectively to future events.
There are many examples in life where failing to do maintenance / prepare for contingencies is a clear short/medium-term win, provided something unexpected does not occur.
Here's a question: if you have a team and an internal culture that managed to create a legacy code mess in as little as six months in their existing app, how is giving them a microservices architecture and Docker to play with going to help?
If you give that team that structure, it seems to me you're highly unlikely to end up with beautifully bounded contexts and highly likely to end up with even more of a mess than you started with.
That's an excellent question, to which I have most of an answer, if not 'the' answer.
Daylight is the best antiseptic.
It's not giving them a microservices architecture and Docker that solves problems. It's expecting them to use it that causes problems to be solved, or at least formally identified.
I do a lot of root cause analysis whether my boss asks for it or not. Human factors are almost always in play. A significant source of bugs that make it into late QA or even production are caused by wishful thinking about the scope and impact of code changes. Things that should have been red flags are brushed off because you can't prove that their code caused the problem, and as long as there is reasonable doubt about the source, some people won't look at code they already declared done.
When the code is developed and tested in an isolated system then the only source of state changes on that system were caused by the new code. It takes a real set of brass ones or an extremely dense skull to deflect concerns about problems seen on a system that only contains your code changes. People either shape up or get labeled as untrustworthy. The former result is preferable, but at least with the latter you get predictability out of the system.
I would argue it would be simpler to introduce test coverage tools that automatically call out bad test coverage during code review.
Microservices remind me of a particular C++ protocol implementation I wrote as a novice programmer. Since the protocol was structured, I had high level classes broken down into simpler classes. e.g.
class Reader {
public:
    virtual void read(Buffer *buf) = 0;
};

class TCharacter : public Reader {
    TString *name;
    TList<TStat> *stats;
    ...
};

class TStat : public Reader {
    TWord32 *remaining;
    TWord8 *percent;
};

class TWord32 : public Reader {
    uint32_t v;
};

class TWord8 : public Reader {
    uint8_t v;
};
And on, and on, and on, and on ...
I thought it was cute that, down to the simplest POD type, every type satisfied a well defined `Reader` interface. I hand wrote many constructors, destructors, read() implementations and several other interface definitions for each type.
Every argument for microservices has in some way reminded me of this code, with its well defined interfaces and overly verbose implementation.
My current strategy is to utilise the crap out of language features to safely achieve what I want in the minimal amount of code possible. In C++ this would be done by initialising the protocol structure classes straight out of the memory buffer, using #pragma pack as required. No error-prone constructors / destructors / read() required.
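A rough sketch of what I mean (hypothetical protocol fields, endianness handling elided): one packed POD per wire structure and a single memcpy, instead of a hand-written Reader per type.

#include <cstddef>
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct StatWire {
    uint32_t remaining;
    uint8_t  percent;
};

struct CharacterHeaderWire {
    uint16_t name_len;
    uint16_t stat_count;
};
#pragma pack(pop)

// Copy one packed struct out of the raw buffer; returns bytes consumed, or 0
// if the buffer is too short. memcpy sidesteps alignment/aliasing problems.
template <typename T>
std::size_t read_wire(const uint8_t *buf, std::size_t len, T &out) {
    if (len < sizeof(T)) return 0;
    std::memcpy(&out, buf, sizeof(T));
    return sizeof(T);
}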
Haskell and Scala are two languages I use a lot, and each have powerful type systems which I can use to prevent myself from doing something stupid. Programmable macros are great for removing boilerplate. Want less technical debt? Write less code.
In my own controversial opinion, if somebody can't work out how to modify my code because they can't work out how to correctly update the type definitions, that person has no business modifying my code. Problem solved! Interfaces are well defined and modifiable only by people who truly understand the code.
I love this Dijkstra quote: If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
> As a formal Software Engineer, I now consider him to be possibly the most enlightened engineer that's ever existed in the field.
On one hand, his quotes have the ring of wisdom. On the other, the man was famously not an engineer and never brought a software project to fruition, to my knowledge.
These aren't necessarily contradictory, but you should keep it in mind. Knuth is much more of an engineer in the sense it's used on HN.
Knuth is brilliant and wonderfully pragmatic, no doubt. I think the greatest thing Knuth really got into me is that code is meant for people, not computers.
As to Dijkstra: enlightenment doesn't necessarily mean productive. Some of the least successful people I've ever met can have absolutely sage-like advice. Dijkstra was able to visualize things others could not, and explain novel solutions to those problems. On top of that he had a deep understanding of the life of a programmer and the complexities faced day in and day out.
The kind of creativity and acceptance of uncertainty that leads to insight is often at odds with the narrow focus get-it-done tenacity associated with success. Some people have more of the former without the ability to switch to the latter.
Dijkstra worked as a programmer at the Mathematisch Centrum (mathematics centre) in the 1950s. He was responsible for most of the systems programming of three subsequent machines that were built and used there. His PhD dissertation (1959) was on the operating system he wrote for the Electrologica X1 computer that was being built by the first Dutch computer company, Electrologica.
The primary offering of microservices is decoupling and simplicity. They are language agnostic, communicating via HTTP/messaging/pattern matching. It's the opposite of verbose, traditional monolithic architectures that rely on a rich domain model. Microservices should be small enough to be owned by a single developer. With pattern matching, you can extend functionality by creating new microservices instead of modifying existing ones. It boils down to change management. For complex systems, the benefits microservices bring to the table make the overhead cost of maintaining these services worth it.
The problem which I don't see discussed nearly enough when people start drinking the microservices koolaid is: what does the interface between the microservices look like? If you can define a nice stable interface that changes much less frequently than the constituent services, then it's mostly a question of operational overhead. However, if the microservices comprise many cross-cutting business concerns, then the churn on the interfaces is a massive source of pain compared to (for example) running one process that loads multiple modules, where you can leverage all the power of the language to do basic wiring / type checking, ensuring the whole thing fits neatly together.
IMO, microservices or SOA or whatever you want to call it, is primarily a method of organizing large teams so they don't get hamstrung by ops coordination. In that case you absolutely need it, but for smaller teams the overhead will usually far outweigh the benefit. I'm glad the space is being explored so that we get better tooling and techniques to lower that overhead, but when the dust settles I expect many small teams will discover that we collectively overestimated the better-understood pain of monolithic architectures and underestimated the less-understood pain of microservices.
> Microservices should be small enough to be owned by a single developer
This seems bad for code review, and disastrous with turnover. Whoops, now we need to find a scheme engineer whose eccentricities are roughly equivalent to this last guy.
There are also ways to achieve microservice-like architectures without requiring multiple code bases. See Actors (scala akka). Streams are an extension which give automatic load balancing. And if you need it to run over the network, akka-remote has you covered.
I've not been happy with this kind of code ownership in the past. Those who want to misbehave have a nice comfortable place to hide out while appearing relevant, because they have a monopoly on an idea that they don't have the skills to be responsible for.
So then you have your coworker who wrote a wrapper around the code to sanitize all of the inputs and outputs and it's bigger than the actual code and once in a while it guesses wrong about ambiguous data.
I will say that it can be an advantage, but it's rare. It becomes an advantage when your primary language lacks the tools to do something well. One of my previous companies was a PHP shop and they used Python to chop up audio files, which would have been very cumbersome to do in PHP (few if any existing libraries). My current company implemented complex data pipelines in PHP, which really should have been done in a different language (python or scala would have been natural choices).
>The primary offering of microservices is decoupling and simplicity.
Microservice architecture is very much orthogonal to loose coupling. I've worked on several microservice architectures with tight coupling. The microservice aspect actually exacerbated the pain caused by the tight coupling because it added the risk overhead of serialization/deserialization and network failures, which simply don't exist in a 'monolith' context.
Microservices, when implemented, are usually either a reflection of Conway's law (the ID team in building B has its own service, as does the CMS team in Europe), or fashion-driven development (one team working on 17 different services... which can be nasty).
The previous poster is basically saying "they screwed it up last time in 6 months, they'll screw it up again".
Microservices don't help. If they couldn't properly design their code without micro-services, throwing docker and micro-services in the mix won't make it magically better. It'll add massive amounts of complexity right off the bat. They'll still be bad coders and bad architects. They still won't know how to lay out their code. Having to split up their code will probably make a bad situation even worse.
They'll put the wrong methods on the wrong microservices, and share some 'key' code that shouldn't be shared because the method's on the wrong microservice, or worse still, cut and paste code and have it decay at different rates. They'll create new services that should actually be on an existing service, and then gradually the code will duplicate, but with random subtle bugs.
The code debt won't disappear; it'll accelerate until they have a bunch of services they daren't touch, one of those services will become "the monolith", and eventually no-one's allowed to deploy to it or the whole thing will come falling down.
And all you're saying is "daylight".
What does "daylight" mean?
The very idea that micro-services will remain 'isolated' in a bad coding team is utterly deluded.
It's changing the culture around how things are developed and understood. The parent poster is saying that exposing the team to daylight, using something like a microservices architecture (or another "good practice"), can help push the team toward a better understanding of what a good codebase and stable state should look like.
Stop thinking about the specific buzzword hyped up trendy term you're stuck on here; start thinking about the psychological factors that caused the situation in the first place, the environmental pressures that resulted in poor decisions, and the ways you can teach the team—by doing—how to create better applications.
Also:
> They'll still be bad coders
Most often that's not the root cause. Most often the root cause is poor management and poor leadership causing otherwise decent programmers with decent instincts to make decisions against the best interest of themselves and the company. The most common classic debt generation situation is trading off long-term quality for short-term speed and functionality. That is not "bad coders" at work; it's bad managers.
I am not a manager, and though I agree that sometimes there can be issues with managers/leadership, I am really curious why you are defending coders. "Coders" that do not actually try to improve themselves and read up on good practices, and who need managers to impose those practices, are not really decent coders.
I'm not defending coders, I'm rejecting the individual mentality—that people are just good or bad by nature, and aren't in some (large) sense a product of their environment as well.
Attribution bias tells us that we tend to see behavior as more a product of individual traits than the system that produced it. The system is more responsible than we interpret in almost every case.
The 'daylight' line is me butchering a quote from Louis Brandeis about transparency, 'sunlight is said to be the best of disinfectants' commonly rendered as 'sunlight is the best disinfectant'. That's what I get for posting from mobile.
I'm neutral on microservices, pro Docker.
What I'm properly allergic to is arcane setups where it's easier for everyone to share a server than it is for anybody to set up a copy of the system that is theirs and theirs alone. In a big enough shop, running all of the microservices you need on your dev box might be possible while running the whole system isn't, because you run out of memory before everything loads.
The bad coders often continue to be bad coders because nobody can 'prove' that it's their fault and so they keep dazzling the managers with bullshit and implying that you are the one with the problem, not them. Isolated, repeatable systems means you have to look at how crazy your architecture is instead of ignoring it, and regarding this conversation, there's a paper trail backing up your version of the story.
When people don't know which solution is better, they tend to back the side that has more trustworthy people on it, where trustworthy is "doesn't make messes, or helps clean them up when they do".
If you make it clear who's the problem and the project management still doesn't intervene, take your skills elsewhere. You're quitting with cause and many managers will value your commitment to sanity.
Imagine a single monolithic jar file that contains all of the code that lives on any given Docker instance, and that decides which services and API endpoints to expose based on poking and prodding environment variables.
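A rough sketch of that idea (in C++ for illustration; the service names and the ENABLED_SERVICES variable are hypothetical): the same binary ships everywhere, and each instance mounts only the endpoints its environment asks for.

#include <cstdlib>
#include <functional>
#include <iostream>
#include <map>
#include <string>

int main() {
    // Everything is compiled into the one artifact...
    std::map<std::string, std::function<void()>> services = {
        {"billing", [] { std::cout << "mounting /billing\n"; }},
        {"catalog", [] { std::cout << "mounting /catalog\n"; }},
        {"reports", [] { std::cout << "mounting /reports\n"; }},
    };

    // ...but each deployment exposes only what its environment asks for,
    // e.g. ENABLED_SERVICES="billing,catalog" (hypothetical variable name).
    const char *enabled = std::getenv("ENABLED_SERVICES");
    const std::string list = enabled ? enabled : "";
    for (const auto &[name, mount] : services) {
        if (list.find(name) != std::string::npos) mount();
    }
    // ...then start the HTTP listener for whatever got mounted.
}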
I really didn't feel it was cryptic or pretentious. However, if you do, can you help describe what made it difficult for you to understand?
The daylight word worked great for me, since the idea is to have everyone be able to see what's happening when they introduce changes.
If there's just one staging / integration environment where people are building things and patching over one another, and your local environment maybe has deviated a lot from either the staging, production, or even a clean local env, then there's all kinds of unexpected problems that might happen.
Getting more things, including your environment, under version control and requiring people to deploy things via files that live in version control rather than a smattering of deployment or env setup commands that not all of the team has visibility into, helps the entire development process go more smoothly.
"Corgibytes often introduces new clients to Docker, making it much easier and faster to set up new developer environments." (emphasis mine)
She isn't advocating using Docker for the target environment; she is advocating using Docker to set up a standard development environment that allows the developers to be productive[1] in 10 minutes rather than 10 hours.
I've been in the situation where the development environment is a poorly documented, massively multi-step steaming pile. The result is I spent a lot of time chasing down those responsible for the steaming pile (the "rockstar" in the department) asking them how to get the development environment working. Typically they pull the keyboard away from me, type a bunch, and then say "there, it works." Which it does. For a while.
Rinse and repeat for every developer. Regularly.
[1] Well, at least building the system rather than staring at mysterious very broken error messages.
> Typically they pull the keyboard away from me, type a bunch, and then say "there, it works." Which it does. For a while.
This would happen if they distributed Docker environments, too. Your setup process would be to download a Dockerfile, crank up everything you need, discover it doesn't work, call the Rockstar over, have him or her explain that there's a few manual steps that they haven't yet added to the codebase (or to the wiki), and then you rinse and repeat that process, instead.
Oh wait, you need to configure your networking bridge differently ... oh wait, the DNS needs this little tweak ... oh wait, you still use this old version from your distro's repo? ... oh wait, you must give it this docker volume ...
I can tell you're frustrated from too much yak shaving. But once you become more experienced with systems in general, you'll feel much more comfortable around Docker.
Which is great if you're the person in charge of distributing it, but if the person in charge keeps giving out the "slightly broken" version, everyone will keep fixing it.
Or, and I'm not sure if this is worse, a few people will fix it in slightly different ways, and start distributing those, leading to a weird ecosystem of competing Dockerfiles none of which are quite correct.
Well, what I mean is that once you get your Dockerfile to the point where it works, it will work literally anywhere you've got Docker installed. That is one of Docker's core value propositions. It's very hard to get something "halfway working" in Docker, because you've either got a process running containerized, or you don't have anything at all. "Slightly broken" in the context of Docker isn't really something you run into very often.
Nobody will volunteer to help you fix an obscure bug that only happens reliably in your version. Because it's too painful for them to shelve their local configuration and grab yours. And once people have that hesitation, 'all hands on deck' emergency fixes become window dressing, because only a few people can actually participate in tracking down and triaging the bug. Everyone else is sitting around waiting to be needed for the two minutes that they can contribute.
And really, the fact that we're caught up on this at all is concerning. It was one small suggestion in a sea of rather intelligent discourse mostly on team environments, structures, and management. Not technology.
Isn't "setting up your environment" pretty much a one time cost, though? In my head at least, losing a couple days up front doesn't matter as much, it's the time after that the needs to be optimized. But maybe that's just my work place where people tend to stay on the same team for years...
When you automate "setting up your environment", you are able to switch between several of them, create new ones for forks or extra tests, and roll back to debug production issues.
Ignoring the one-time aspect, it's the standardization that is important. Docker environments enforce a uniformity that is beneficial for testing. No more "it works on my machine" excuses. Although the article mentioned the development environment specifically, there is recurring benefit in being able to power up a Docker VM for testing purposes.
While I agree this sort of thing can be frustrating when starting at a new company, I find that it is useful. If I can't get the system working on my dev machine, how can I be expected to debug the production site when there are problems?
On the other hand, tracing back a dockerfile that pulls in an obscure one-off base image, then repeating that all the way up the chain, you can find yourself left with a dozen Dockerfiles that you need to read in the right order to know what you have.
That's if you can even find the original Dockerfiles - I recently ran into a situation where a container image was in the registry, but the original Dockerfile was completely lost.
Docker/containers in general move the complexity to a different plane, but they only move it - they don't eliminate it. Care needs to be taken to avoid situations just like the one above - or any of a number of other pitfalls - or you end up with an environment that's far more complex to understand than a VM housing a monolithic dev environment.
Depending on what you're doing, the right decisions might only be visible in hindsight.
To this point, you might scaffold out a system to get something running and then incrementally / organically improve the system. I regard this as 'getting the system to the point where it can be dogfooded and/or criticized in a helpful way'.
At this point, despite what you might have running, it may be exceedingly clear that you should have used a different strategy. Maybe you like the overall results, but you need things to be implemented in a different way.
For example, I'm a C++ programmer. It's not always obvious at first that you need a specific set of virtual functions to make something work really nicely.
It's impossible to always know these sorts of things in advance.
I assume you are saying this is why refactoring is so important? If so I absolutely agree.
Designing good abstractions, like anything else, takes practice. Armchair critiques won't cut it. If one is unwilling to fix bad design once it's identified, they're not practicing good design; they're practicing cobbling something together on a rickety foundation. How's that going to help next time one writes something from scratch?
What if microservices are poorly architected such that they are actually coupled? It's worth thinking about such an outcome especially if microservices are being seen as a way of making dysfunctional teams functional.
You're right, I'm just saying that it's not always possible to see these problems in advance, and that it can be very difficult to try and get something done if you apply too many rules to a nascent system ( too many rules == design by committee ).
Yeah, that's the 'shiny new toy' fallacy. In every project there are points when it's clear that something wasn't designed right. Some people decide to get deeper into their mess and some people decide to refactor. It's a judgement call and not an easy one.
I don't think I'd want to work with microservices and Docker containers designed by the same people who messed up another codebase.
Considering management has a very large role in most messed up codebases I've seen, even if they fire 100% of the engineering team who coded it you probably are working with (for) the same people who messed up a codebase.
That's a pretty dangerous mindset, and one that seems to be pretty popular around here, that all failure is automatically the fault of management. Yes, management plays a role, that's almost tautological, but it takes two to tango, and at the end of the day, the dev team wrote the messed up code. Grandly messing up in just six months, even if encouraged by management, takes willing participation from the developers.
Also, I've seen plenty of developers who were perfectly capable of screwing up codebases without being induced to do so by management (I've also seen lots of inept management, don't get me wrong. But I wrote the bad code, they didn't.)
Yes, it takes two to tango, which is the correct answer. Any other answer tends to oversimplify the problem.
A lot of engineers also seem to evaluate other engineers by their worst work. This is a toxic attitude. You can even take an engineer performing terribly in a good environment, and put them in a different good environment, and they can perform exceptionally. I've seen it happen more than once.
As I understand it, it's a very common human failing called the Fundamental Attribution Error. I only have a passing familiarity with it, but it's very seductive.
It is always the manager's fault: they hired bad people and didn't train them. Or maybe they set up bad expectations (it doesn't matter whether the people are good or bad). Their job is to know how to hire the right people and get the best out of them.
Note, the engineers have an ethical responsibility to not be bad. However, "bad" is sometimes creating perfect code when something else is called for.
We're using 'management' as a stand-in for 'business requirements' here, at least in most of the cases I've seen that lead to the accumulation of 'debt'.
These things generally start with "We need feature X for customer Y in Z days." Engineers dive in, and then comments start popping up - usually variations on a theme: "This is a mess, but it works for now - we'll come back later and clean it up."
The unhealthy process that leads to that is a different discussion. But as engineers, our responsibility is to be aware that it's a reality, and work within those constraints.
Yes, there will be time/resource/budget limitations. No, we're not going to fix that overnight. But knowing it going in, we can take a closer look at some of those 'good enough for now' decisions, and do our best to not produce dog shit.
Sometimes dog shit can't be avoided, but if the pattern above is the normal way of doing business in a company, engineers do have the ability (and responsibility) to do the best they can within those constraints.
Often that means pausing and saying, "I know this is going to come back and bite us, what can we do now to make it better?"
Telling yourself that failures are someone else's fault (while, presumably, still taking credit for good work) is a great way to never learn from your mistakes.
What about when management (and financial realities) pushes relentlessly for new options, customizations and deployments? When all time not spent shipping new variants is spent developing features? What then? When is the code going to be refactored? In the developers' spare time?
What about it? Yes, that environment will obviously be less conducive to good code than otherwise. But bad coders will do (a lot!) worse than good coders.
Don't assume that all good code bases out there were written by teams with reasonable, well-defined and stable requirements, plenty of time and money and perfectly enlightened management. Very few projects are like that. Generally, I think you'll find they were written by good developers who kept their heads cool in the face of a range of challenges.
Absolutely, they will also largely not have had bosses that death marched them or changed requirements three times a day - as I said, management do obviously play a role.
Certainly not in spare time. It has to be part of the daily work. I compare it to running a restaurant. Cleaning up the kitchen every night takes time but it has to be done no matter the circumstances. You can't skip it or the health department will come after you. Unfortunately in programming you can get away with taking shortcuts for a long time.
Incidentally, good cooks are extremely tidy, they clean obsessively as they go - and they are under insane pressure, and don't wait for management to allocate time for cleaning.
They clean continuously, not primarily to make the end-of-shift cleaning easier, but because it allows them to execute faster, better and more consistently (which in that environment is a necessary condition for executing at all).
Management is a part of the team that messed up the codebase, not only the engineers. That's why I am always wary of managers bringing in new methods (microservices, scrum, whatever). They had a big part in previous failures and if they don't admit their own failures nothing will change.
> Considering management has a very large role in most messed up codebases I've seen, even if they fire 100% of the engineering team who coded it you probably are working with (for) the same people who messed up a codebase.
It seems to me you should stop working for managers who are writing code.
Even if they don't write code they can set unrealistic deadlines and actively not allow refactoring. My solution is usually not to tell anybody who doesn't need to know and just refactor but that can be a dangerous path too.
> Even if they don't write code they can set unrealistic deadlines and actively not allow refactoring.
I know it might sound glib, but if that's the case then you're allowing yourself as a programmer to be set up to fail and be left holding the bag when things go to shit.
Either manage up in those situations or find a new job.
I've definitely cut corners for deadlines in my life but everyone always understands what debt we're accruing and when we'd get stuck paying that debt. If your status quo is "Do shitty work" you really need to find a new gig.
Doesn't need to be an "and". Just getting the development/ops environments into a virtualized architecture like Docker can make a tremendous difference in a monolithic app. Quite often, the technical debt is due to configuration differences in manually managed environments - development, QA, and production can differ in small or large ways. I don't know how many times I've seen things like developing on Windows and deploying on Unix, or testing on a single server but deploying into a cluster. Docker (or Vagrant, or others) can fully automate provisioning and configuration, simplifying and enforcing consistency between environments.
Likewise, microservices from monoliths can be done without containers. Just pick a chunk of the monolith that can be pried loose, and start with that. This can solve all sorts of problems.
Agreed. Cleaning and correcting a code base is only part of the solution. Putting in place the policies and culture to make sure the code stays clean is the other part.
You're right to say that a microservices architecture and Docker won't solve the problem. That's not an accurate view of what Corgibytes advocates for. Microservices and Docker are tactical tools that can help many teams, but they don't work everywhere.
What does work is assisting that team to slowly clean up the mess that they've built[1] by focusing on incremental improvements.
[1]: And we find that almost all messes were created for very understandable reasons.
It's not even "technical debt" until someone thinks "technically ... we could have done this more easily in X".
Now it's a question of developer comfort, and a huge investment will be made to make sure everyone building the product is having the best experience doing so.
All the old code and knowledge is now debt because it can't come with us on this wonderful journey.
It's only debt if you are paying for it going forward.
The thought should be "things would be easier going forward if we'd done X".
If it does the job and doesn't create issues then you just purchased it without taking on debt. This is easier if it is isolated/loosely coupled.
There's no point going back to improve it unless it saves you time/money in the future. If you make a better one to get the 'best experience' you are throwing away time and money for a slightly better mousetrap.
It's ironic because we're trying to build software others will depend on, meanwhile the software that we depend on tries to avoid the kind of disruptive change that accompanies rewritten-with-X... that's why we can depend on it.
It's easier to just start over again with one small part. I've worked at two places with massive monolithic tech debt ridden codebases. It was nearly impossible to rewrite anything because everything touched everything. Now the place I currently work has micro services. Some are legacy tech debt that no one wants to touch. But if needed they could probably be rewritten in a couple of months. It's the difference between selling AAA bonds and junk bonds. You're probably far more likely to pay off those low interest AAA ones than the junk bonds.
The tradeoff though is that you can't get a handle on how expensive a particular call is because the domino effect of knock-on calls is very difficult to see. You can get a system that 'works' on test data but doesn't scale to production (from a user's perspective, this code doesn't work, even though the dev team will insist that it does)
Engineering is about trade offs. Microservices are a fairly recent innovation, and there are many kinks that need to be worked out. Another issue is the overhead cost of maintaining a large number of decoupled, independent services. It's plausible to have hundreds or thousands of microservices, and the tools to visualize/manage these systems haven't yet been created.
I'm often in the minority opinion so I'm used to it, but to me microservices aren't fundamentally different than the message passing systems which we've tried many times. But when something doesn't work we try the opposite of it instead of just less or more. Monolithic server got you down? MICROSERVICES. Microservices got you down? Put everything into one process space!
Uh, how about we put all of the related stuff together so we have 8 services instead of 800? Could we maybe try that? How about something really crazy, why don't we give Conway's Law a try and arrange our code around organizational boundaries or vice versa?
Yeah, no. Micro-services are neither recent, nor innovative.
Fun fact: a friend of mine consults for large pension funds in The Netherlands. One of the projects he was brought in on earlier this year, to give an outside perspective, was a gigantic monument to micro-services that had been built up over a two-year period by ~200 developers, at a cost of around 50 million euros.
After two months, every developer was fired, the project cancelled, and the entire 50 million euro cost written off.
But it had great micro-services, hundreds and hundreds of them!
They are to the kids who make web apps - most of this stuff they talk about on the internet is by kids who make web pages for start-ups. I used to rant and say similar things; I now see the value of peer reviewed material. Basically 90% of developers advocating things on the internet are not peers, so I look for how many projects they've developed successfully in the new gee-whiz tech before reading too much. Really, most of this stuff was originally thought of in the 70's and 80's; we just keep going in circles.
Tell me about it. The problem is moving parts. Whether you spread it out over several programs or not does not matter. If you manage to really decouple programs so they can do useful work independently of one another then that's great. Congratulations to you. But microservices can just as easily turn into one huge program separated into many small ones, none of which make any sense on its own, and now all you've done is make things harder for devops.
I heard a phrase a couple months ago that I believe sums this up. What you want is local reasoning. You need to be able to estimate the scope of consequences of a change so you can act accordingly. You also need to be able to look at a problem and estimate the scope of code that could be causing the problem.
We have a bunch of rules and guidelines that work toward that goal but if you don't know the destination you may never arrive.
One of the main causes of technical debt and bad code overall is breaking out of the conventions of the code base. Good code reviews can help this, and solid leads of course, but things can still slip through.
Like or hate microservices, one thing they do is maintain a clear, hard boundary at the interfaces. There is no "let me just reach into this class here, or call that there." It does work to keep people honest when the pressure is on.
Will it turn bad programmers into good ones? No, but I think the base assumption is that you're not necessarily dealing with bad programmers.
In my experience, diagnosing the tech-debt problem, prescribing the solution, and implementing the solution are not really the hard parts of this - the hard part is battling the culture that led to the problem, and wrangling that huge gulf between "seeming consensus" and "actual consensus" that the group is actually going to change direction by accepting the diagnosis, adopting the solution, and starting work.
Maybe an outside consultancy makes this easier since they are sort of imbued with a moral authority when coming in, but it could also mean that all the hard political work has been done beforehand in agreeing to hire them.
I agree. Very often I see this set forth as a problem with management, but it seems to me that other engineers are also a significant part of the problem. I try really, really hard to never be that engineer. If someone sees defects in something I wrote, regardless of the scope, I'm totally fine with whatever they do to make it better, up to and including totally replacing my work.
It's the whole system that forms the basis for the problem: management, engineers, PMs, Designers, everyone who touches the problem space. If you want to improve, you need to make sure each part of the system is aligned around the same goal. Sounds hokey, but it is the reality.
That means, as well, that it's both your responsibility and others' responsibility at the same time. Sure, you shouldn't be "that engineer," but you should also realize the kinds of things in the system that make someone become "that engineer," or lead them to be "that engineer" in some situations, or on the positive end, how certain engineers avoid it altogether. Then try to reproduce those factors and spread them.
The problem is never individual. It's never about a good coder or a bad coder or anything like that. There are always multiple internal and external factors.
That's a very insightful observation. Over my career, I would find myself doing the hard work of advocating for improving code and developer practices, and I would often encounter resistance from management. By starting a consultancy and narrowing the work that we do to just that kind of improvement, we're able to ensure that only the organizations that are at least somewhat open to the idea are the ones who approach us for help. Fighting that battle internally is a tough job.
I can't agree with this idea enough. Also, I just love refactoring.
Most of my side work is contributing to existing projects, but here are the stats for 3 school projects: 20,896 lines deleted for 30,342 written. 2 of the projects had fixed requirements from the beginning, though one was a more free-form independent study. Additionally, all 3 had only one additional author, and I did the majority of the line additions and deletions.
I'm pretty sure these numbers are very atypical for this sort of work---with fixed goals and deadlines and no more than a few months per project, I assume most people delete pretty little, and basically only during debugging.
Certainly this refactor-happy approach did cause some missed deadlines early on, and getting all the abstractions correct still means basic functionality often doesn't come together until the last minute. But I still think that the immense technical wealth does pay off, as I get more productive the longer I maintain the project---the abstractions anticipate the sorts of extensions I may later wish to add by laying a firm foundation. And the fact that the functionality does appear at the last second testifies to the increasing productivity---longer hours or last minute panic don't explain it.
I think most managers would be horrified by this approach, but if you're a real company planning on being around indefinitely, and not some startup just hanging together or a student, this method should be all the more fitting.
One of the things that I like to anecdotally measure is how long a similarly complex change takes. Over time, if it takes longer, then you're likely suffering from tech debt issues. If similar changes start getting done faster, then it sounds like you're starting to turn things around.
On the missed deadlines issue, that reminds me of a blog we posted a while ago about why we stopped estimating ongoing development on these types of code bases[1].
I also like to use the metaphor of a car that's stuck in the mud (or snow). The wheels are spinning really fast and the engine is working really hard, but the car isn't going anywhere. It's only when you start trying to dig it out that you start to see forward progress, and even then, the progress is slow at first.
>> did cause some missed deadlines early on, and getting all the abstractions correct still means basic functionality often doesn't come together until the last minute.
Yes, yes, yes.
>> I think most managers would be horrified this approach
"Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementor, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code."
Refactoring is a necessary outcome of poor design choices. It is obvious that in a perfect world with perfectly designed software, you would never refactor, right? While I applaud your continued effort at perfection, the goal should be less refactoring, not more. Not less refactoring because we don't care about quality, but less refactoring because we get more right from the start.
> Not less refactoring because we don't care about quality, but less refactoring because we get more right from the start.
Requirements always change and information is never perfect. You can't always get everything right from the start because you might not even know what you're trying to get right.
If you get malleability right from the start, you'd expect _more_ refactoring, since it's easier to do when you inevitably want to. If you're going to spend effort on trying to get things right from the start, that's where you should focus: minimize the things that are slow to change, maximize the things that make change less painful.
Concretely, this means spend less time code reviewing local variable names and curly brace style; spend more time reviewing public APIs, data format definitions, promised external semantics, etc. with an eye for versioning, deprecating, evolving, etc.
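As a tiny illustration of the kind of thing worth the review time (hypothetical API, sketched in C++): a public entry point that takes an options struct can grow new knobs without breaking existing callers, whereas adding a positional parameter is a breaking change you'll be stuck versioning.

#include <iostream>
#include <string>

struct ExportOptions {
    std::string format = "csv";
    bool include_headers = true;
    int max_rows = -1;   // added later; -1 means "no limit", old callers unaffected
};

// Public entry point: new options can be added without touching any call site.
bool export_report(const std::string &report_id, const ExportOptions &opts = {}) {
    std::cout << "exporting " << report_id << " as " << opts.format
              << " (max_rows=" << opts.max_rows << ")\n";
    return true;
}

int main() {
    export_report("q3-sales");        // old caller, still fine

    ExportOptions opts;
    opts.format = "json";             // newer caller opts in to the new behaviour
    export_report("q3-sales", opts);
}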
Disagree, the goal is to maximize net value. Getting more right from the start largely depends on your specific project and how well you can predict the future. In highly unpredictable environments, you're better off not trying to predict the future, because you'll likely be wrong, which will cost you more than if you didn't try in the first place.
I agree in principle, but here are two big (and heavily opinionated) real-world phenomena that push back:
1. As a die-hard functional programmer, I believe most everything out there is designed poorly: not because everybody is incompetent, but because anything built on poorly designed languages and libraries must itself necessarily be less well designed. Bad abstractions breed other bad abstractions! Writing near-ideal software usually means ripping up the foundations, so a lot of refactoring.
2. We learn good design best by refactoring. When writing new code, functionality and elegance are competing goals, but when refactoring, elegance can be tackled in isolation. Furthermore, refactoring is an opportunity to prove (to oneself and others) that the given abstractions indeed make the code better. Finally, it's hard to distill good design into principles beyond "don't repeat yourself" and "make bad things impossible", but refactoring offers an opportunity to try things and see what works.
I've found it next to impossible to get it through ( some ) people's thick heads that refactoring is not a sign that something was done wrong the first time around.
>> We learn good design best by refactoring.
Exactly. This is often when the true value of a design pattern becomes crystal clear in an applied ( rather than theoretical ) sense.
I haven't done any statically typed functional programming. I'm interested in some parts of functional programming, mostly as they relate to replacing raw logic with functions.
Well, C++ definitely rewards care and effort in a similar way. Haskell and Rust (to get specific) are good for me because they let me write much more air-tight abstractions, raising the upper limit on what my constant refactoring can yield. No more of that "all abstractions leak" bullshit!
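A tiny example of what I mean by air-tight (a hypothetical sketch in Rust; a Haskell newtype plus a smart constructor gives you the same thing): keep the field private so code outside the module can only build the value through a checked constructor, and then every refactor downstream gets to assume the invariant holds.

    mod names {
        /// A name that is guaranteed non-empty; the field is private, so the
        /// only way to construct one from outside this module is `new`.
        pub struct NonEmptyName(String);

        impl NonEmptyName {
            /// Returns None rather than ever letting an empty name exist.
            pub fn new(raw: &str) -> Option<NonEmptyName> {
                let trimmed = raw.trim();
                if trimmed.is_empty() {
                    None
                } else {
                    Some(NonEmptyName(trimmed.to_string()))
                }
            }

            pub fn as_str(&self) -> &str {
                &self.0
            }
        }
    }

    fn main() {
        // Callers can't bypass the check, so downstream code never has to
        // re-validate; that's the invariant later refactoring leans on.
        assert!(names::NonEmptyName::new("   ").is_none());
        let name = names::NonEmptyName::new("Ada").expect("non-empty");
        println!("hello, {}", name.as_str());
    }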
^ This. Instead of responding to productivity slowdowns by pointing the finger at clients for rushing us in the beginning, could we figure out how to respond better to rushing while it's going on? It really makes you think about our industry.
I find it amusing they got their inspiration from This Old House.
Between TOH and the head mechanic (also named Bob) at the bike shop I worked at in college, I get most of my work ethic from outside of software, and it shows. I had high hopes for the 'craft' movement in software but haven't seen much of it materialize.
I spent a lot of my formative years watching TOH, including the 'Pay the Piper' episode where two very nauseous homeowners found out their remodel went 2x the original budget. Part of it was scope creep, but there was always a hefty repair because the foundation was being destroyed by a tree or erosion, and a couple times the entire corner of the house (including corner beams) was rotting away from water damage caused by shoddy work on an earlier repair ten years prior.
I am forever looking for badly-sited trees and water infiltration in my code base, and I'm always agitating to throw out bad tools, get better ones, and learn to use them properly. Just like Bob and Bob taught me.
I think the "craft of software" is alive and well, it's just weirdly distributed: a few shops doing it throughout, lots of shops (and pretty much all big companies) doing it here or there: an individual coder, a small team, etc.
There are lots of other things like that. Take writing. We all write a bunch of emails (etc.) every day, but how many people try, out of a sense of craft and respect for the language, to write them well?
(Oh and I'm currently having a flat from 1914 renovated, which had additional work done in 1938 and 1958, so I definitely appreciate your analogy.)
I also like to make references to how if you're an electrician and the customer asks you to run a wire through a puddle or through the plumbing, you can tell them to politely go fuck themselves because it's not up to code.
In software we have no safety regulations and so it's up to you to win that standoff on your own, and most of us don't have the stomach for that confrontation, so we cave and agree to really stupid stuff all the time. And we know that even if we say no, they'll find some other developer who will say yes.
Another thing I really like about the work on This Old House is the care they take to make sure that new additions blend in with the structures that were there to begin with.
I really wanted to like this post. It's well written. With passion. Using a common metaphor (a house). And about the biggest elephant in the I.T. room today.
But I couldn't. Because it just skated on the surface of the iceberg.
OP is right. Technical debt is a huge problem that requires significant mind shift to address. And I can't find a single thing here that I'd disagree with.
There's a lot of academic theory and cheerleading here. And true, it's requisite to any further discussion. But what I would love to see are specific prescriptions:
How do you train to avoid the sins of the past?
How do you enhance process (peer review, QA)?
How do you conduct post-mortems to organize the attack?
How do you decide what to black box and what to rewrite?
How do you prioritize shoring up the foundation and keeping the lights on?
How do you find tools to automate the process?
How do you make technology decisions to set the new path?
And probably most of all:
How do you fix the broken data supporting all this bad code?
Because any treatment of code without diving deeply into the underlying data structure is too myopic to be of much value.
I've estimated that half of the bad legacy code I've ever encountered would never have even been written if the data were properly structured in the first place. This isn't a discussion of DBMS technology, but a comprehensive treatment of how the business operates and how its supporting data must be structured, regardless of technology. Fix the data and the code becomes a much more manageable problem to attack.
Of course, maybe OP just didn't get that far.
Nice article. I'd love to read the sequel, about 5 levels deeper.
This article was written for more of a CEO audience, which is why it's not as technical. You might like the podcast I did on Hanselminutes. It gets into more technical stuff than I did on First Round. http://hanselminutes.com/539/learning-to-love-legacy-code-wi...
Sounds like the word "debt" is being used to make their clients feel guilty, and the word "remodeling" is being introduced to make it sound as if they're doing something novel.
Doesn't technical debt just reflect operational priority? How is having old code a problem if it all works? Debating priority is not the same as automatically asserting it as a problem.
And isn't technical wealth building always an opportunity? Why must one be in debt? But every business has a laundry list of opportunities. Technical wealth doesn't need to be at the top always.
If the axiom is that being cutting edge is sacrosanct, then technical wealth would always be top priority, and technical debt would be an operational problem. But technical debt is just a function of the technical advancement in the field. No matter what you do, you'll always end up in debt if you sit around long enough, and decide to call what you have "debt".
If you can make what you have better with technology available today, do it. If not, leave it. And ignore whether a technology is new or old. Just confirm if it is proven and tested.
Technical debt isn't about old vs. new technology. It's about making technically questionable decisions for the sake of reducing development time and costs.
Right. Except that's just one of the many ways it's defined in the article. The article is a hot mess.
Even then, from a business perspective, reducing costs can be completely intentional and pragmatic. Meaning it can be profitable, so debt really is the wrong word. It's closer to sacrifice, or even frugality. You could go as far as to say it's technical efficiency, granted it does what is expected of it.
But if it breaks or starts causing problems, then you just have problems. At which point call it whatever you like.
Which is not how the term is being used in the article. But also, "technically questionable decisions for the sake of reducing development time and costs" does not automatically become debt. Technically questionable decisions happen regardless, so debt can be incurred without intending to save any money; and reducing development time and cost can be done without questionable decisions, so without incurring any debt.
The article uses Docker as an example of "remodeling" which could very well be a technically questionable decision, because their argument for it is that it saves a lot of money (and is not a technical argument, which is why it could be technically questionable).
So in a sense, they're actively promoting technical debt because they're choosing their tools based on cost. They should rename their method from "remodeling" to "debt restructuring" or "refinancing". Of course, the middleman always gets paid.
It becomes technical debt. Just like financial debt costs you the interest rate times the amount owed, technical debt costs you time and resources spent on refactoring and cleaning up the code, rather than on activities that directly increase revenue, say, adding new features.
One thing that makes technical debt particularly problematic is that you can incur it without knowing it. That's unlike financial debt, where it's impossible to “accidentally” borrow money. But just because technical debt is hard to measure doesn't mean it doesn't exist.
> reducing development time and cost can be done without questionable decisions so without incurring any debt.
In principle, yes. In practice, the most common way to reduce development costs, at least in the short term, is to incur technical debt.
> They should rename their method from "remodeling" to "debt restructuring" or "refinancing". Of course, the middleman always gets paid.
> One thing that makes technical debt particularly problematic is that you can incur in it without knowing it. Unlike the case with financial debt, where it's impossible to “accidentally” borrow money.
Debt doesn't occur only because of agreements to borrow, it can occur because you've incurred a liability by some other means. Which, yes, can occur accidentally.
Indeed, cutting development corners arguably risks incurring technical debt that manifests concretely later, in a process much more like the way cutting corners in physical construction risks negligence liability down the road, and much less like deliberate, planned debt financing versus paying cash upfront for construction.
The thing is that most projects fail for reasons other than technical debt. Not having clear requirements to begin with is one, and running out of money along the way is another big one. There's not much programmers can do about those, and burning time working on tech debt may actually hurt you on the second one.
Rather than thinking of it like a house you have to live in, I think of a healthy codebase more like a busy police precinct. It may be a bit messy, but that's just because there's always something more important to do than tidy up.
What you say is correct, but to add to the analogy: if the precinct is such a mess that the officers are constantly tripping over stuff, and criminals are going free because they are misplacing evidence, it is probably time to do some cleaning and organizing!
With respect to technical debt reflecting organizational problems:
I've always been surprised at how much internal resistance (management and other programmers) I've faced when trying to pay down or eliminate technical debt, regardless of whether it was my code or other people's code. ( For the record, I've written plenty of bad code in my day. )
In the past, when I was foolish enough to try to be deferential, I was given excuses like 'yeah, we don't want that kind of churn in version control' and 'we have to maintain compatibility', even when compatibility wasn't an issue at all. This, more than anything, taught me that it's better to seek forgiveness than permission.
Love the concept of building technical wealth. That's a great way to put it.
Yes, and systems that one might refer to as 'disruptive' often get traction because they don't have the same gravitational pull as the incumbent. Maybe magnetism is a better term because feature richness often correlates with complexity, and that can attract and repel people.
It was interesting. Too bad the company profiled doesn't really value the developers--if you look at their careers page, they pay 110k for lead devs, and 90k for senior devs: http://corgibytes.com/careers/ . I respect them for stating it up front and I know they are 100% remote, but those salaries seem a bit low to me.
> We noticed that men constantly asked for WAY more than women when applying, so we decided to take the salary negotiations off the table to make it easier for everyone.[0]
So I'm being punished because some people don't ask for what they're worth? I would consider applying for the senior position but it's less than I make now. This policy means I have to either 1) take a pay cut; 2) punch above my weight and apply for the next rung up the ladder; 3) not apply.
I get what they're trying to do but this does not seem to be a way to go about it while continuing to get the best applicants possible.
We noticed men consistently knew their market worth and women undervalued themselves, so we decided to use the hot button topic of diversity to make it seem like lowball offers are actually coming from a moral high ground.
As for the rates, we acknowledge publicly on our website that they're lower than average. However, our folks only work 40 hours per week. We're really diligent about that. So some folks are making more per hour than they did at their last jobs, which worked them 60+ hours per week.
It's also worth noting that we're bootstrapped. We don't have investors, which is both a pro and a con. One of the pros is that we have a lot of control over the culture so we make Corgibytes a place that works for the right people. One of the cons is that we don't have a deep pocket of investor money to dip into. We have to go with the market value for what people are willing to pay for our services and that determines what we can pay our staff. It ends up being less, but we try to be upfront about it so there's no surprise.
I don't think they are competitive outside the Bay area for the roles they are filling.
<Anecdote> A year and a half ago I was looking for (and found) a senior/lead remote position. I had four offers from such places as Ann Arbor, Boston, Austin, and D.C. The lowest offer was 20% more than what Corgibytes offers to Leads (plus non-trivial equity).
That is good to hear. I haven't been captive in over 10 years and I am located in a smaller town, so I am a little out of the loop on what senior dev salaries are in most places. Great to hear that remote senior dev positions are commanding that.
Different experience than misthop as I'm not remote, but these salaries are not competitive and I am in what 90% of HN would consider "rural" and I drive to an office every day. Now I do work for a software company so that may push the salaries up a bit higher but not by this much.
I know working remote is a plus and some of the copy they use implies you have an equipment stipend of some sort, but for most of the senior developers I know, CB's lead salary is at least a 10% haircut if not more. Not to mention you have to do 90 days as a contractor so even after you quit your job and work for them you may be looking for another job in three months anyway!
I mean you can make whatever argument you want about salary but requiring a 90-day contract period from people who aren't necessarily coming to you as contractors is a pretty shit way to get good people in the door.
I'm sure they can hire people for those salaries, but they're almost certainly not getting the best people. I know of multiple companies offering remote work with salaries that are $50k+ more than what Corgibytes offers. They might very well not be looking for the best though.
I do appreciate them being upfront though. I wish all employers would be so clear about their salary ranges.
It's especially low because they're basically hiring consultants. The going rate for a senior contract developer is well over $100/hr, which at their purported 40-hour weeks (roughly 2,000 billable hours a year) works out to $200k+ a year.
Agreed. I am a male, and it took me 3 years out in the real world before I felt I was actually being properly paid.
Having been in a situation where I am underpaid but really like my job, i.e. startup, I know the emotional toll and how it contributes to disgruntlement and burnout. For candidates I want to work with, I will advocate behind the scenes for higher salaries if the candidate either didn't ask or didn't ask for enough.
But, if you are being paid $XYZ and you believe you are being correctly compensated for your skill set, make damn sure you create many times that value (or potential value at startups) for the company. The situation can backfire very quickly where you will be seen as overpaid and your job questioned.
In what world are old text messages or Twitter posts better technical artifacts than commit messages or tests? The latter are directly tied to the code expression, include a clear history, and are easily discoverable when looking for the source of an issue or the explanation.
This is so wrong that I'm wondering if I'm reading the chart incorrectly.
"Break monolithic apps into micro-services that are lighter weight and more easily maintained."
Does anyone have any actual evidence to support this blanket claim, especially the "more easily maintained" part? My experience is quite the opposite: the more separate processes you have communicating over unreliable network links, the higher the likelihood of failures and mysterious performance issues. More succinctly, the complexity of the system grows as N^2, where N is the number of processes communicating over unreliable links, since N processes can have up to N(N-1)/2 pairwise connections (note, my use of the word "unreliable" is very intentional here).
Code is not an asset; it is a liability.
Code that you own you need to maintain.
There's only so much code that you can realistically own and maintain. If you are smart, you will seek ways of owning strictly what you need, which is the code closer to the core of your business.
If there is code that you need and is not close to the core of your business and does not provide a competitive advantage, open source it.
The worst code always comes from lack of visibility and accountability. Open source promotes visibility and accountability and keeps things working and clean.
A culture of accountability and strong technical skills attracts talent and keeps the company competitive.
All these articles are fine and dandy, but I would like to see more pragmatic articles on how to reduce technical debt. A lot of books have been written around software patterns, service orientation, microservices, etc., but none of them really address how to minimize technical debt over the life of the software. Also not covered properly: what counts as technical debt? For example, I have a few classes that have good unit tests, interfaces, and proper separation of concerns, but they perform really badly with 1,000 concurrent users. So this goes back to: how do we accurately define technical debt?
This is a very good article, and I can certainly relate to the points made about legacy code. The problem I had was quantifying the fact that the productivity of the programming team was going downhill (legacy code, little to no tests) and making that case to management. I felt like the canary in the coal mine.
Another key revelation from this article was that there are 'Makers' and 'Menders' ... At heart I think I am a maker, but sometimes need to be a mender by necessity and this has a huge impact on my 'happiness' levels.
This analogy works quite well. Basically, you have to look at all of the costs of the current edifice. Maybe it costs nothing on paper to keep the ramshackle old house the way it is, but the impression it gives to visitors is a cost. Likewise, think of new hires as "visitors" as well. On the other hand, a building that's well designed can have a positive effect on the public perception of an organization and the mental state of the people working within it. The same thing goes for code.
This slogan sounds like a platitude, but might be important.
> Stop thinking about your software as a project. Start thinking about it as a house you will live in for a long time.
I say it "sounds like a platitude" because experienced software engineers already think this way instinctively. But we still call software development jiobs projects, like the building of a house rather than as an ongoing activity.
Why "forget technical debt?" A better title is probably simple, "How to Build Technical Wealth," the fact that a large part of this article is actually re-hashing the problems of technical debt notwithstanding.
How about: "How to avoid technical debt" as a title? It seems like the argument's main premise about building technical wealth is avoiding technical debt....
It's a marketing term. It's rebranding refactoring/cleanup as "technical wealth", which sounds good compared to the "technical debt" companies are tolerating.
There's nothing wrong with good marketing if it's leading to a solution for both parties involved.
Programmers are often logical people who neglect the importance of PR. Convincing people and marketing good solutions to them for reasons they don't necessarily understand is a powerful tool.
> Technical debt always reflects an operations problem
This is a clueless statement. Technical debt is just like any other debt. It's about getting product out fast during a phase where market-share build is crucial, building a dominant position, and then repaying the debt from the much larger resultant revenues, later. This is a critical aspect of the microeconomics of any business relying on network effects. You have to be good (enough), but really you have to be first (ish), and technical debt helps you be first.
Therefore, technical debt, just like financial debt, is absolutely not an operations problem. That's why we have COOs and CFOs whose roles are mainly orthogonal. It's fundamentally a question of strategy of the business.
This is easy to accept as true, but in practice it seems easy to abuse, since there isn't any real data or standard underneath it for deciding when the debt starts to make sense to pay off.
No one really knows from the outset what level of techdebt is necessary to succeed, and if the techdebt appears costless (since, unlike the metaphor of paying off financial debt, the interest payments feel relatively invisible), you're always going to have people counseling more risk, saying it's better strategy to move forward "as fast as possible", right up until your younger, faster competitors pass you by.
I don't have clear ideas either on when it's a good idea to pay off techdebt; to me it generally seems best to aim for "as soon as humanly possible".
If people get too aggressive on solving tech debt then it turns into gold plating - creating tech that anticipates too much and then gets discarded when anticipations prove false. This is relatively easy to recognize when it happens and is a good indication that you're going too far. Up until that point, though...
I disagree that the interest costs of technical debt feel invisible. I personally feel a very heavy burden when cutting technical corners / coding to specifics rather than the elegant generic cases. Indeed I spend a large amount of time in meetings persuading non-tech people of the impact of their (often technical-debt unaware) instincts. If you are in an organisation which is unable to measure, even in broad terms, the impact of technical debt, then that organisation is probably not run by the kind of people who should be managing a tech business. It's why most successful startups in the past 10 years have been run by people who understand code.
@tunesmith I should add to my comment that I agree with you that many non tech people often treat technical debt as free. It's a big problem and is a major source of failure.
This is really wishful thinking about the nature of technical debt. Most technical debt is completely unnecessary and is due, entirely, to lack of planning and organization (in other words, operations problems). In a nice tidy perfect startup world your comment might make sense but that's such a small fraction of software businesses that no meaningful conclusion should be drawn from them.
This is analogous to saying no business needs financial debt, and can bootstrap entirely out of cashflow. It's baloney. There are many situations where you can do quick-and-dirty versus (slow) elegant-and-enduring, and there exist business (note I said, business, not technical) situations where the former is better than the latter.
The problem with the analogy of technical debt vs. real debt is that technical debt is the actual product itself. You aren't building a house using mortgage, you're building a house with shoddy materials and hoping you can flip it before it falls down on you.
I don't completely disagree with you: there is a lot to building marketshare and having the dominant position. But I think you are way overselling it as an advantage. Being out in front and having the whole thing collapse on you is way worse than a slow start.
But again I think this conscious idea of doing quick & dirty is less prevalent than the debt caused by lack of experience and poor operations.
> This is analogous to saying no business needs financial debt, and can bootstrap entirely out of cashflow.
Technical debt isn't a commodity. You generally can't infuse legacy technology with more technology and fix things. At best there's a lot of bespoke work to get the legacy system to work with the new technology.
http://higherorderlogic.com/2010/07/bad-code-isnt-technical-...