Bought a ten-year-old company, a division of a public company, for a few million dollars.
Got an overly complex architecture of 30+ microservices and over USD 20k in monthly cloud fees.
Rewrote the thing into a monolith in 6 months. Cut the development team in half, server costs by 80-90%, and latency by over 60%.
Newer is not better. Each microservice must be born from a real necessity backed by usage stats, server stats, and cost analysis, not created by default from following tutorials.
It’s telling that you revised both application architecture and org structure to be simpler and more efficient.
Microservices are sometimes a reflection of the org; the separation of concerns is about ensuring everyone knows who’s working on what, and enforcing that in the tech.
(Not defending that; it's often inefficient and can be a straitjacket that constrains the product and org.)
I've seen the opposite: a single monolithic codebase where the different bits of functionality eventually end up so tightly coupled that it's actually pretty difficult to extract them into separate services later, so a different type of architecture isn't possible even if you wanted to split it up.
Why do that? Well, when a big Excel file is uploaded to import a bunch of data, or some reports are generated, or a crapton of emails is being sent, or batch processes are shipping data over to another system, both the API and the UI become slow for everyone. Scaling it vertically would be the first thought, but for a plethora of reasons that doesn't work: there are bottlenecks in the DB connection pool, bottlenecks in HTTP request processing, bottlenecks all over the place. They can be resolved (for example, by replacing HikariCP with DBCP2, oddly enough), but each fix takes a bunch of time and it's anyone's guess whether something will break.

Updating the dependencies of such a monolith is also a mess: something like bumping the runtime version leads to all sorts of things breaking, sometimes at compile time, other times at runtime (which leads to out-of-date packages being kept around). Definitionally, a big ball of mud.
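To make that failure mode concrete, here's a minimal sketch in plain Java (hypothetical names, a gross simplification of what a servlet container does): one shared thread pool serves both interactive requests and heavy imports, so a handful of imports can occupy every worker and stall the UI for everyone:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch (hypothetical names): one shared pool serves both
// interactive requests and heavy imports, so a few imports can
// occupy every worker and stall the UI for everyone.
public class SharedPoolStarvation {
    // Stand-in for the servlet container's request thread pool.
    private static final ExecutorService requestPool = Executors.newFixedThreadPool(4);

    public static void main(String[] args) {
        // Four "Excel import" requests arrive first and grab all workers...
        for (int i = 0; i < 4; i++) {
            requestPool.submit(SharedPoolStarvation::handleBigImport);
        }
        // ...so this cheap UI request waits until an import finishes.
        requestPool.submit(() -> System.out.println("UI request finally served"));
        requestPool.shutdown();
    }

    private static void handleBigImport() {
        try {
            Thread.sleep(5_000); // simulate parsing a huge file while holding a DB connection
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("import done");
    }
}
```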
Can you just "build better software" or "write code without bugs"? Well, yes, but no.
I've seen plenty of cases of microservices also becoming a chatty mess, but what strikes me as odd is that people don't attempt to go for something like the following:
* keep the business functionality, whatever that may be, in one central service as much as possible
* instead of chopping up the domain model, extract functionality that pertains to specific types of mechanisms or workloads (e.g. batch processing, file uploads or processing, data import etc.) into separate services, even if some of it might still use the main DB (a sketch of this follows the list)
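A minimal sketch of that second point, assuming a hypothetical `jobs` table in the shared database: the central service only inserts a row, while a separate worker process polls for pending jobs and does the heavy lifting, so the request path never runs the workload itself:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Minimal sketch of a separate worker service (hypothetical schema:
// jobs(id, type, payload, status)). It shares the main DB with the
// central service but runs as its own process, so batch work never
// touches the request path.
public class BatchWorker {
    public static void main(String[] args) throws SQLException, InterruptedException {
        String url = System.getenv("DB_URL"); // hypothetical, e.g. a JDBC URL for the main DB
        try (Connection conn = DriverManager.getConnection(url)) {
            while (true) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 1")) {
                    if (rs.next()) {
                        long id = rs.getLong("id");
                        process(rs.getString("payload")); // the heavy lifting
                        try (Statement done = conn.createStatement()) {
                            done.executeUpdate("UPDATE jobs SET status = 'done' WHERE id = " + id);
                        }
                    } else {
                        Thread.sleep(1_000); // nothing to do; poll again shortly
                    }
                }
            }
        }
    }

    private static void process(String payload) {
        // parse the uploaded file, generate the report, send the emails, ...
    }
}
```

With more than one worker instance you'd want the poll to claim rows atomically (e.g. `SELECT ... FOR UPDATE SKIP LOCKED` on PostgreSQL) rather than this naive read-then-update.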
Yet usually it's either a mess from shoving too many things into one codebase, or a mess from creating too much complexity by trying to have a service for every small set of entities in the project ("user service", "order service", ...).
> a single monolithic codebase where the different bits of functionality eventually end up so tightly coupled that it's actually pretty difficult to extract them into separate services later, so a different type of architecture isn't possible even if you wanted to split it up.
In my experience, this is the time to refactor the monolith, not try to introduce microservices.
> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
Document upload, saving data to the DB, and rendering the UI are some of the primary functions of SharePoint, all done within the context of the same ASP.NET worker process with no UI slowdowns for anyone.
It's inherently async & multithreaded, of course.
What you're describing sounds like a single-threaded, synchronous solution, which we'd all agree will cause UI lag and/or timeouts. But it doesn't have to be that way with a monolith.
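As a minimal sketch of that in-process approach (plain Java, hypothetical names, not SharePoint's actual internals): the upload handler hands the heavy work to a dedicated background pool and returns immediately, so request threads stay free:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch: the same monolithic process stays responsive by
// handing heavy work to a dedicated background pool instead of
// blocking the request thread (hypothetical names throughout).
public class AsyncUploadHandler {
    // Separate pool for background work; request threads never wait on it.
    private final ExecutorService background = Executors.newFixedThreadPool(2);

    // Called on a request thread: accept the upload, kick off processing,
    // and return right away so the UI never stalls.
    public String handleUpload(byte[] file) {
        CompletableFuture.runAsync(() -> importRows(file), background);
        return "accepted"; // the client can poll a status endpoint later
    }

    private void importRows(byte[] file) {
        // parse the spreadsheet and write rows to the DB here
    }
}
```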
> But it doesn't have to be that way with a monolith.
Tell that to some old Java Spring app running on JDK 8 (though I've also seen worse). It would be cool if I didn't see most of the software out there breaking in interesting ways, but it's also nice when you can at least limit the fallout, or scale specific parts of the system to lessen the impact of whatever doesn't behave well, until it can be addressed properly (sometimes never).
Whether that's a modular monolith (same codebase, with modules enabled or disabled at startup via feature flags), microservices, or anything else isn't even that relevant, as long as it's not the kind of system I've also seen plenty of: "singleton apps" that can only ever have one instance running and cannot be scaled (e.g. because sessions or other long-lived state live in RAM, or because they rely on data on a file system that isn't network-mounted and can't be shared with other instances).
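For what it's worth, the modular-monolith variant is simple to sketch (hypothetical flag and module names): one codebase, but each instance starts only the modules its environment enables, so the web tier and the batch workers can be scaled independently:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a modular monolith (hypothetical flag and module
// names): one codebase, but each instance starts only the modules its
// environment enables, so web and batch can be scaled independently.
public class App {
    interface Module { void start(); }

    public static void main(String[] args) {
        List<Module> modules = new ArrayList<>();
        if (flag("ENABLE_WEB"))   modules.add(new WebModule());
        if (flag("ENABLE_BATCH")) modules.add(new BatchModule());
        modules.forEach(Module::start);
    }

    private static boolean flag(String name) {
        return Boolean.parseBoolean(System.getenv().getOrDefault(name, "false"));
    }

    static class WebModule implements Module {
        public void start() { /* serve HTTP; keep no session state in RAM */ }
    }
    static class BatchModule implements Module {
        public void start() { /* poll for jobs and process them */ }
    }
}
```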
Your suggestion aligns well with how Ruby on Rails tends to handle this. All of the stuff in your list of workloads would be considered “jobs” and they get enqueued asynchronously and run at some later time. The jobs run in another process, and can even be (often are) on another server so it’s not bogging down the main app, and they can communicate their success or failure via the main database.
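The enqueue side of that pattern is tiny. In Rails it's roughly `SomeJob.perform_later(args)`; a hedged Java sketch of the same shape, reusing the hypothetical `jobs` table from the worker sketch above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Minimal sketch of the enqueue side (hypothetical jobs table, as in
// the worker sketch above): the web app just records the job and moves
// on; a worker process elsewhere picks it up and reports back via the
// same database.
public class JobQueue {
    private final Connection conn;

    public JobQueue(Connection conn) { this.conn = conn; }

    public void enqueue(String type, String payload) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO jobs (type, payload, status) VALUES (?, ?, 'pending')")) {
            ps.setString(1, type);
            ps.setString(2, payload);
            ps.executeUpdate(); // fast; the request thread is done here
        }
    }
}
```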
I run the tech org for an insurance company. We've got a team of about 30 folks working on a system that manages hundreds of thousands of policies. Apart from the APIs we call (a couple of them internal, belonging to our Analytics team), it's one big monolith, and I don't see that changing anytime soon. At our scale, a monolith is more stable and performant, easier to understand, easier to deploy, easier to test, easier to modify, and easier to fix when something goes wrong.