That's an excellent question, to which I have most of an answer, if not 'the' answer.
Daylight is the best antiseptic.
It's not giving them a microservices architecture and Docker that solves problems. It's expecting them to use it that causes problems to be solved, or at least formally identified.
I do a lot of root cause analysis whether my boss asks for it or not. Human factors are almost always in play. A significant source of bugs that make it into late QA or even production are caused by wishful thinking about the scope and impact of code changes. Things that should have been red flags are brushed off because you can't prove that their code caused the problem, and as long as there is reasonable doubt about the source, some people won't look at code they already declared done.
When the code is developed and tested in an isolated system then the only source of state changes on that system were caused by the new code. It takes a real set of brass ones or an extremely dense skull to deflect concerns about problems seen on a system that only contains your code changes. People either shape up or get labeled as untrustworthy. The former result is preferable, but at least with the latter you get predictability out of the system.
I would argue it would be simpler to introduce test coverage tools that automatically call out bad test coverage during code review.
Microservices remind me of a particular C++ protocol implementation I wrote as a novice programmer. Since the protocol was structured, I had high level classes broken down into simpler classes. e.g.
class Reader {
public:
    virtual void read(Buffer *buf) = 0;
    virtual ~Reader() {}
};
class TCharacter : public Reader {
    TString *name;
    TList<TStat> *stats;
    ...
};
class TStat : public Reader {
    TWord32 *remaining;
    TWord8 *percent;
};
class TWord32 : public Reader {
    uint32_t v;
};
class TWord8 : public Reader {
    uint8_t v;
};
And on, and on, and on, and on ...
I thought it was cute that down to the simplest POD type, every type satisfied a well defined `Reader` interface. I hand wrote many constructors, destructors, read() methods and several other interface definitions for each type.
Every argument for microservices has in some way reminded me of this code, with its well defined interfaces and overly verbose implementation.
My current strategy is to utilise the crap out of language features to safely achieve what I want in the minimal amount of code possible. In C++ this would be done by initialising the protocol structure classes straight out of the memory buffer, using #pragma pack as required. No error prone constructors / destructors / read() required.
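A minimal sketch of what I mean, assuming a made-up wire format (the StatMsg struct and its fields are invented for illustration). The packed struct mirrors the byte layout, and a memcpy overlays it on the buffer, which sidesteps the alignment and strict-aliasing problems a raw reinterpret_cast would invite:

```cpp
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct StatMsg {
    uint32_t remaining;  // 4 bytes, and no padding before...
    uint8_t  percent;    // ...this 1-byte field: 5 bytes on the wire
};
#pragma pack(pop)

static_assert(sizeof(StatMsg) == 5, "packed layout expected");

// memcpy instead of reinterpret_cast: well-defined for any alignment,
// and compilers optimize it down to a plain load anyway.
StatMsg parse_stat(const uint8_t *buf) {
    StatMsg m;
    std::memcpy(&m, buf, sizeof m);
    return m;
}
```

Note this ties the struct to the wire's byte order, so for a portable protocol you would still want an explicit endianness conversion step.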
Haskell and Scala are two languages I use a lot, and each have powerful type systems which I can use to prevent myself from doing something stupid. Programmable macros are great for removing boilerplate. Want less technical debt? Write less code.
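The "let the type system stop me doing something stupid" idea carries over to C++ too. Here is a hedged sketch using strong typedefs (UserId, OrderId, and the tag structs are invented names): both are a uint64_t underneath, but the compiler now refuses to let one stand in for the other:

```cpp
#include <cstdint>

// Strong typedef: one template, many incompatible ID types.
template <typename Tag>
struct Id {
    uint64_t value;
    explicit Id(uint64_t v) : value(v) {}
    bool operator==(Id other) const { return value == other.value; }
};

struct UserTag {};
struct OrderTag {};
using UserId  = Id<UserTag>;
using OrderId = Id<OrderTag>;

// A function taking (UserId, OrderId) can no longer be called with the
// arguments swapped: Id<UserTag> does not convert to Id<OrderTag>.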
In my own controversial opinion, if somebody can't work out how to modify my code because they can't work out how to correctly update the type definitions, that person has no business modifying my code. Problem solved! Interfaces are well defined and modifiable only by people who truly understand the code.
I love this Dijkstra quote: If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
> As a former Software Engineer, I now consider him to be possibly the most enlightened engineer that's ever existed in the field.
On one hand, his quotes have the ring of wisdom. On the other, the man was famously not an engineer and never brought a software project to fruition, to my knowledge.
These aren't necessarily contradictory, but you should keep it in mind. Knuth is much more of an engineer in the sense it's used on HN.
Knuth is brilliant and wonderfully pragmatic, no doubt. I think the greatest thing Knuth really got into me is that code is meant for people, not computers.
As to Dijkstra: enlightenment doesn't necessarily mean productive. Some of the least successful people I've ever met can have absolutely sage-like advice. Dijkstra was able to visualize things others could not, and explain novel solutions to those problems. On top of that he had a deep understanding of the life of a programmer and the complexities faced day in and day out.
The kind of creativity and acceptance of uncertainty that leads to insight is often at odds with the narrow focus get-it-done tenacity associated with success. Some people have more of the former without the ability to switch to the latter.
Dijkstra worked as a programmer at the Mathematisch Centrum (mathematics center) in the 1950s. He was responsible for most of the systems programming of three successive machines that were built and used there. His PhD dissertation (1959) was on the operating system he wrote for the Electrologica X1 computer being built by Electrologica, the first Dutch computer company.
The primary offering of microservices is decoupling and simplicity. They are language agnostic, communicating via HTTP, messaging, or pattern matching. It's the opposite of verbose, traditional monolithic architectures that relied on a rich domain model. Microservices should be small enough to be owned by a single developer. With pattern matching, you can extend functionality by creating new microservices instead of modifying existing ones. It boils down to change management. For complex systems, the benefits microservices bring to the table make the overhead cost of maintaining these services worth it.
The problem I don't see discussed nearly enough when people start drinking the microservices koolaid is: what does the interface between the microservices look like? If you can define a nice stable interface that changes much less frequently than the constituent services, then it's mostly a question of operational overhead. However, if the microservices comprise many cross-cutting business concerns, then the churn on the interfaces is a massive source of pain compared to (for example) running one process that loads multiple modules, where you can leverage all the power of that language to do basic wiring and type checking, ensuring the whole thing fits neatly together.
IMO, microservices or SOA or whatever you want to call it, is primarily a method of organizing large teams so they don't get hamstrung by ops coordination. In that case you absolutely need it, but for smaller teams the overhead will usually far outweigh the benefit. I'm glad the space is being explored so that we get better tooling and techniques to lower that overhead, but when the dust settles I expect many small teams will discover that we collectively overestimated the better-understood pain of monolithic architectures and underestimated the less-understood pain of microservices.
> Microservices should be small enough to be owned by a single developer
This seems bad for code review, and disastrous with turnover. Whoops, now we need to find a new engineer whose eccentricities are roughly equivalent to the last guy's.
There are also ways to achieve microservice-like architectures without requiring multiple code bases. See Actors (scala akka). Streams are an extension which give automatic load balancing. And if you need it to run over the network, akka-remote has you covered.
I've not been happy with this kind of code ownership in the past. Those who want to misbehave have a nice comfortable place to hide out while appearing relevant, because they have a monopoly on an idea that they don't have the skills to be responsible for.
So then you have your coworker who wrote a wrapper around the code to sanitize all of the inputs and outputs and it's bigger than the actual code and once in a while it guesses wrong about ambiguous data.
I will say that it can be an advantage, but it's rare. It becomes an advantage when your primary language lacks the tools to do something well. One of my previous companies was a PHP shop and they used Python to chop up audio files, which would have been very cumbersome to do in PHP (few if any existing libraries). My current company implemented complex data pipelines in PHP, which really should have been done in a different language (python or scala would have been natural choices).
>The primary offering of microservices is decoupling and simplicity.
Microservice architecture is very much orthogonal to loose coupling. I've worked on several microservice architectures with tight coupling. The microservice aspect actually exacerbated the pain caused by the tight coupling because it added the risk overhead of serialization/deserialization and network failures, which simply don't exist in a 'monolith' context.
Microservices as implemented are usually either a reflection of Conway's law (the ID team in building B has its own service, as does the CMS team in Europe) or fashion-driven development (one team working on 17 different services... which can be nasty).
The previous poster is basically saying "they screwed it up last time in 6 months, they'll screw it up again".
Microservices don't help. If they couldn't properly design their code without micro-services, throwing docker and micro-services in the mix won't make it magically better. It'll add massive amounts of complexity right off the bat. They'll still be bad coders and bad architects. They still won't know how to lay out their code. Having to split up their code will probably make a bad situation even worse.
They'll put the wrong methods on the wrong microservices, and share some 'key' code that shouldn't be shared because the method is on the wrong microservice, or worse still, cut and paste code and have it decay at different rates. They'll create new services that should actually be part of an existing service, and then gradually their code will duplicate, but with random subtle bugs.
The code debt won't disappear, it'll accelerate until they have a bunch of services they daren't touch and one of those services will become "the monolith" and eventually no-one's allowed to deploy to it or the whole thing will come falling down.
And all you're saying is "daylight".
What does "daylight" mean?
The very idea that micro-services will remain 'isolated' in a bad coding team is utterly deluded.
It's changing the culture around how things are developed and understood, and the parent poster is saying that exposing the team to daylight using something such as an architecture like microservices (or another "good practice") can help push the team toward a better understanding of what a good codebase and stable state should look like.
Stop thinking about the specific buzzword hyped up trendy term you're stuck on here; start thinking about the psychological factors that caused the situation in the first place, the environmental pressures that resulted in poor decisions, and the ways you can teach the team—by doing—how to create better applications.
Also:
> They'll still be bad coders
Most often that's not the root cause. Most often the root cause is poor management and poor leadership causing otherwise decent programmers with decent instincts to make decisions against the best interest of themselves and the company. The most common classic debt generation situation is trading off long-term quality for short-term speed and functionality. That is not "bad coders" at work; it's bad managers.
I am not a manager, and though I agree that sometimes there can be issues with managers/leadership, I am really curious why you are defending coders. "Coders" that do not actually try to improve themselves and read up on good practices, and need managers to impose those practices, are not really decent coders.
I'm not defending coders, I'm rejecting the individual mentality—that people are just good or bad by nature, and aren't in some (large) sense a product of their environment as well.
Attribution bias tells us that we tend to see behavior as more a product of individual traits than the system that produced it. The system is more responsible than we interpret in almost every case.
The 'daylight' line is me butchering a quote from Louis Brandeis about transparency, 'sunlight is said to be the best of disinfectants' commonly rendered as 'sunlight is the best disinfectant'. That's what I get for posting from mobile.
I'm neutral on microservices, pro Docker.
What I'm properly allergic to is arcane setups where it's easier for everyone to share a server than it is for anybody to set up a copy of the system that is theirs and theirs alone. In a big enough shop, running all of the microservices you need on your dev box might be possible while running the whole system isn't, because you run out of memory before everything loads.
The bad coders often continue to be bad coders because nobody can 'prove' that it's their fault and so they keep dazzling the managers with bullshit and implying that you are the one with the problem, not them. Isolated, repeatable systems means you have to look at how crazy your architecture is instead of ignoring it, and regarding this conversation, there's a paper trail backing up your version of the story.
When people don't know which solution is better, they tend to back the side that has more trustworthy people on it, where trustworthy is "doesn't make messes, or helps clean them up when they do".
If you make it clear who's the problem and the project management still doesn't intervene, take your skills elsewhere. You're quitting with cause and many managers will value your commitment to sanity.
Imagine a single monolithic jar file, that contains all of the code that lives on any given Docker instance, that decides which services and API endpoints to expose based on poking and prodding environmental variables.
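A rough sketch of that "one artifact, role chosen by the environment" idea, in C++ rather than Java for concreteness (the SERVICE_ROLE variable and role names are invented): the same binary ships everywhere, and an environment variable decides which endpoint groups it exposes.

```cpp
#include <cstdlib>
#include <string>

// Read the role this instance should play; unset means "monolith mode".
std::string active_role() {
    const char *role = std::getenv("SERVICE_ROLE");
    return role ? role : "all";
}

// At startup, each endpoint group registers itself only if exposes()
// says so -- one codebase, and the deployment decides the topology.
bool exposes(const std::string &endpoint_group) {
    std::string role = active_role();
    return role == "all" || role == endpoint_group;
}
```

The appeal is that "split into services" becomes a deployment decision you can revisit, not a codebase decision you're stuck with.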
I really didn't feel it was cryptic or pretentious. However, if you do, can you help describe what made it difficult for you to understand?
The daylight word worked great for me, since the idea is to have everyone be able to see what's happening when they introduce changes.
If there's just one staging / integration environment where people are building things and patching over one another, and your local environment maybe has deviated a lot from either the staging, production, or even a clean local env, then there's all kinds of unexpected problems that might happen.
Getting more things, including your environment, under version control and requiring people to deploy things via files that live in version control rather than a smattering of deployment or env setup commands that not all of the team has visibility into, helps the entire development process go more smoothly.