The flip side is that services with an immediate need will get upgraded, and others won't, and six months later you will be saying "Why am I still seeing this bug in production, I already fixed it three times!"
Of course, the problem can be mitigated by a disciplined team that understands the importance of everybody being on the same page about which version of each library to use. On the other hand, such a team will probably have little trouble using a monorepo in the first place.
Whether you have a monorepo or multiple repos, a good team will make it work, and a bad team will suck at it. But multiple repos do give inexperienced devs more rope to tie themselves up with, in my opinion.
I don't think that's quite true. In my experience multi-repos have the edge here.
If you have one key dependency update with a feature you need, but it requires substantial code changes and 80 services depend on it, that may be impossible to pull off no matter what. Comparatively, upgrading the services one by one may not be easy, but at least it's possible.
The need for everyone to be on the same page about dependencies might just be a limitation of monorepos rather than a generally good thing. Some services might not need the upgrade right now; others may be getting deprecated soon, etc.
There are languages / runtimes where you cannot have two different versions of the same thing in one binary (they either fail eagerly at build time or crash immediately at run time). That is not the case for JavaScript, Rust, etc. But it is the case for C++, Java, Go, Python and more.
Everyone claims different needs if they can. Nothing could be linked together anymore if you just let everyone use whatever they want.
Or maybe people start to try to work around this by ... reinventing the wheel (effectively forking and vendoring) to reduce their dependency graph.
There is a genuine need for a single instance of every third-party dependency. It is not unique to monorepos. A monorepo (with the corresponding batch-change tooling) just makes this feasible, so you don't hear about the concept for manyrepos and mentally bind it to monorepos.
Thanks. I'm not familiar with Java. I thought multiple classloaders were more like dlmopen (which doesn't help much - symbol visibility is hard), because I've seen people struggling with classpath conflicts, etc.
It is basically how application servers got implemented: every EAR/WAR file gets its own classloader, and there is a hierarchy that allows overriding search paths.
That is how I managed, back in the day, to use JSF 2.0 on WebSphere 6, which officially did not have support for it out of the box.
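To make the classloader point concrete, here's a minimal Java sketch of that isolation trick; the jar paths and the class name are made-up placeholders, so treat it as an illustration of the mechanism rather than a working setup:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class TwoGammasDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical jars holding two versions of the same internal library.
            // parent = null keeps each loader from delegating to the application classpath.
            URLClassLoader v1 = new URLClassLoader(
                    new URL[]{ new URL("file:libs/gamma-1.0.jar") }, null);
            URLClassLoader v2 = new URLClassLoader(
                    new URL[]{ new URL("file:libs/gamma-2.0.jar") }, null);

            // The same fully qualified name resolves to two distinct Class objects,
            // one per loader, so both versions coexist in a single JVM.
            Class<?> gammaV1 = Class.forName("com.example.gamma.Gamma", true, v1);
            Class<?> gammaV2 = Class.forName("com.example.gamma.Gamma", true, v2);
            System.out.println(gammaV1 == gammaV2); // prints false
        }
    }

The catch is that the two copies can't see each other's types directly, which is the same symbol-visibility pain mentioned above for dlmopen - application servers paper over it with a delegation hierarchy.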
How many internal libraries do your "separate services" contain? Your service A depends on library alpha@1, your service B depends on library alpha@2. All happy so far. Now introduce another layer: your service A depends on alpha@1 and beta@1, alpha@1 depends on gamma@1, and beta@1 depends on gamma@2. What do you do now? It doesn't even matter how many services you have at this point.
With JavaScript this does not apply: alpha@1 can have its own gamma@1 and beta@1 its own gamma@2. But the same does not hold for most languages.
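For contrast, here's roughly what that diamond looks like in a language with a flat, single-instance classpath - a hedged Java sketch where alpha, beta, gamma and their methods are hypothetical names reused from the example above:

    // Service A, compiled against alpha@1 (which uses gamma@1) and beta@1 (which
    // uses gamma@2). Both gamma jars land on one flat classpath, but the JVM loads
    // only one com.example.gamma.Gamma - whichever jar happens to come first.
    import com.example.alpha.Alpha;
    import com.example.beta.Beta;

    public class ServiceA {
        public static void main(String[] args) {
            new Alpha().doWork(); // fine if gamma-1.0.jar won the classpath ordering...
            new Beta().doWork();  // ...but then this throws NoSuchMethodError, because
                                  // the gamma@2 method Beta was compiled against does
                                  // not exist in the gamma@1 class that got loaded.
        }
    }

That is the "fail at build time or crash at run time" behavior mentioned earlier, whereas npm simply nests a private copy of gamma under each of alpha and beta.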
left-pad is both amazing and sad. It's amazing because JS's "bundle the entire dependency closure" approach, combined with the npm infrastructure, drove the usability of software reuse to the point that people even bother to reuse left-pad. This is beyond what well-regulated corporate codebases can achieve (no matter whether a single instance is strongly encouraged or not, and no matter manyrepo or monorepo), and it happens in the open. It is sad because, without regulation, people tend to reuse too aggressively, causing, well, left-pad.
> How many internal libraries do your "separate services" contain? Your service A depends on library alpha@1, your service B depends on library alpha@2. All happy so far. Now introduce another layer: your service A depends on alpha@1 and beta@1, alpha@1 depends on gamma@1, and beta@1 depends on gamma@2. What do you do now? It doesn't even matter how many services you have at this point.
Got several thoughts on this one. First, let's look at how bad the issue really is:
To start using beta@1 you need to upgrade alpha@1 to alpha@2, which depends on gamma@2. What's the problem with that?
The same situation can arise with 3rd-party dependencies, except there it's much worse: you have zero control over those. Here you do have control.
Now let's look at what this situation looks like in a monorepo: you can't even introduce gamma@2 and create beta@1 at all without:
1. upgrading alpha@1 to alpha@2
2. upgrading all services that depend on alpha to alpha@2
3. upgrading all other libraries that depend on gamma to gamma@2
4. upgrading all services that depend on gamma directly, if any
So you might even estimate that the cost of developing beta is not worth it at all. Instead of quasi-dependency-hell ("quasi" because your company still controls all those libraries and has the power to fix the issue, unlike real dependency hell) you have a real stagnation hell due to a thousand papercuts.
My second comment is about building deep "layers" of internal dependencies - I would recommend avoiding it for as long as possible. Not just because of versioning, but because that in itself causes stagnation. The more things depend on a piece of code, the harder it is to manage it effectively or to make any changes to it. The deeper the dependency tree, the harder it is to reason about the effect of changes. So you had better be very certain about the design / API surface and abstractions before building such dependencies yourself.
Major version bumps of foundational library dependencies are an indication that you originally had the wrong abstraction. No matter how you organize your code in your repos, it's going to be a problem. (Incidentally, this is also why, despite the flexibility of node_modules, we still have JS fatigue. At least with internal dependencies we can work to avoid such churn.) It should still be easier with separate services, however, as you can do it more gradually.
Last note on left-pad and similar libraries: they are a different beast. They have a clear scope, a small size, and, most importantly, zero probability of needing any interface changes (and a very low probability of any code changes as well). That makes them a less risky proposition (assuming, of course, they cannot be deleted).
> To start using beta@1 you need to upgrade alpha@1 to alpha@2, which depends on gamma@2. What's the problem with that?
The problem is the team maintaining alpha does not want to upgrade to gamma@2 because it's an extra burden for them, and they don't have an immediate need.
The debate is not about teams owning separate services, it's about teams owning libraries.
I'm assuming a customer-driven culture where you work for your customers' needs. In the case of libraries, the teams using the libraries are your customers. If you're the maintainer of alpha and your customer needs beta, your customer needs you to upgrade to gamma.
But then another customer still wants gamma@1, they are allowed to do that! But they also want your new features. So now you have to maintain two branches, which, I hope we can agree, is an extra burden.
This is unavoidable if we are talking about FOSS: people should be able to do whatever they want, and they do. A company has an advantage here: you can establish company-wide rules and a culture that make sure people don't do this. Which, in this case, happens to be: keep a single version of everything unless you have a really good reason.
> But then another customer still wants gamma@1, they are allowed to do that! But they also want your new features.
In this case, you still have the option of working with them to help them migrate to gamma@2, if the cost of maintaining gamma@1 is indeed too high and would negatively impact you in serving other customers. This was the original premise, wasn't it - upgrading all your dependents when you upgrade your library? That's still an option. The point is: you have more choices. And you can also help customers one by one - you don't have to do it all at once.
I will agree, though, that restricting choices helps when a company has difficulty aligning incentives through communication. But you do give up a lot for it - including the ability to move fast and avoid stagnation.
From what I've seen, I'd say it's exactly the opposite: allowing multiple versions actually means "let teams choose stagnation". And because we are lazy, we certainly do! There is a non-trivial number of people who believe "if it ain't broke, don't fix it". I can work with them to migrate over, but they might not want to! In this case, a hard "bump versions or die" rule is a must.
Maybe if you work in a small group of great engineers you don't need to set such rules and you can move even faster, but I unfortunately haven't found such a workplace :(
> you don't have to do it all at once
Yes. Nobody should do it all at once. Making "bump versions or die" compatible with incremental adoption is slightly harder (see sibling threads for how it's done). Still worth it I'd argue.