
Now, how do you explain to that executive that they will not get that feature in 3 months but rather in 5 years if the company survives that long with work screeching to a halt?

You have turned 20 teams working on 20 different focuses into 1 team. The focuses are interconnected but 90% independent. One team is working on billing. Another team is working on admin. Another team is working on a feature that has blocked five deals this quarter totaling $800,000 in potential contracts. Another team is working on imports from other external systems. Another team is working on exports to CSV and Looker and other platforms.

Yet another team is working on a feature that is connected only because it serves the same user base but otherwise has no relation. Another team is directly tied to all of the same data, but could be flying on its own with a reasonable set of CRUD APIs.

These all get mashed into the same codebase early on because everyone is going as fast as possible with 8 developers two funding rounds ago.

I am not excusing systems that have a microservice per developer, or worse, but these patterns evolved because there was a need.




Now, how do you explain to that executive that they will not get that feature in 3 months but rather in 5 years

I have no idea, but I don't worry about it because I haven't been persuaded that will happen.

You have turned 20 teams working on 20 different focuses into 1 team. The focuses are interconnected but 90% independent.

I would explain that what was originally presented to me as a monolith was later explained to me to be something else: a family of interrelated services. Then I would say that's a different problem, invoke Conway's Law, and say that they can stay as 20 different teams. I would also say that doesn't necessarily mean 20 different network servers and 40 different tiers, which in my experience is how "micro-services" are typically envisioned.

these patterns evolved because there was a need

I'm also not persuaded there was a [single] need rather than a network of interrelated needs, just as I'm not persuaded anyone here (including me) has a complete understanding of what all those needs were.


Note that I am usually on the other side of this argument, but mostly due to nuance.

In my world, monoliths are usually interrelated services that live in the same codebase with poor boundary protection, because they were started by teams that were later split along arbitrary boundaries. They stay poorly factored because everyone has been rushing for so long that splitting them out is a giant cluster of a headache, and nobody can quite figure out where the bounded contexts are because they truly are different for each team.

So, nobody knows what all the needs are, because there's enough work for 100 people and 15 product managers, and only a handful of people in the organization have a mental model of the entire system because they were an early employee, engineer or otherwise.

So, can we agree on these architectural principles, except in edge cases:

1. A team must be in control of its own destiny. Team A releases must be independent of team B releases, and any interconnected development must be independently releasable (feature flags being one example, but other patterns exist). Otherwise, you get into release management hell.

2. Any communication between teams must be done by an API. That can be an HTTP API. That can be a library. That can be a stored procedure. But there must be a documented interface such that changes between teams are made obvious. (A rough sketch of both principles follows this list.)
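
To make those two principles concrete, here is a minimal sketch, with every name made up for illustration (Python, but the language doesn't matter): the feature flag keeps the calling team's release independent of its deploys, and the documented interface is the only way it touches the billing team's code.

    # Hypothetical example of both principles. BillingApi is a documented
    # boundary owned by the billing team; callers never reach into billing
    # internals. The flag lets the dashboard team merge to main and deploy
    # continuously while "releasing" on its own schedule.
    from typing import Protocol


    class BillingApi(Protocol):
        def invoice_total_cents(self, account_id: str) -> int: ...


    def render_balance(account_id: str, billing: BillingApi, flags: dict[str, bool]) -> str:
        # Principle 1: flipping the flag is the release, not the deploy.
        if flags.get("new_billing_widget", False):
            # Principle 2: only the documented interface is used.
            return f"balance: {billing.invoice_total_cents(account_id) / 100:.2f}"
        return "balance: see the billing page"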

From there, I think there are options. You can have multiple teams that each contribute a library to a monolith that releases on every library change. You can have microservices. You can have WAR files in Java. You can have a monorepo where each team has a directory. There are many options, some of which are distributed. However, without those two architectural principles, all development comes to a halt after 30-40 developers come on board.
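
As one concrete (and entirely hypothetical) version of the monorepo option: each team owns a directory, exposes a single api module as its public surface, and the deployable app just wires the team libraries together and releases as one unit.

    monorepo/
        billing/
            api.py        # the only module other teams may import
            internal/     # free to change without cross-team coordination
        admin/
            api.py
            internal/
        exports/
            api.py
            internal/
        app/
            main.py       # the monolith: assembles the team libraries, single release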

Microservices are often used because nobody has managed to write a good set of books and blogs about how to keep 100 to 1000 developers humming along without the tests taking 4 hours to run and without needing release managers to control the chaos. I don't dispute that there are other ways to work, but the microservices crowd did the work to document good working patterns that keep humans in mind.

And it comes back to my original point: the architecture is a function of the number of people in the system.


"So, can we agree on these architectural principals, except in edge cases:

1. A team must be in control of its own destiny...

2. Any communication between teams must be done by... an interface such that changes between teams are made obvious."

Sure. You'll get no argument from me on these points...

"it comes back to my original point: The architecture is the function of number of people in the system."

...or on this one.

I will grant that a division of code along the same lines as the division of labor is both sensible and inevitable. I will also grant that 100 or 250 or 2500 or more people are sometimes needed for a firm to achieve its objectives. Will you grant that sometimes, they aren't? That sometimes, the tail wags the dog and the staff and its culture determine the architecture rather than the reverse? That sometimes, adding more people to a slow project just makes it slower? I ask these questions because in my world, micro-services have typically been narrowly defined as network servers in Java, Python, or Rust, each interacting with a database (sometimes, the same database) through an ORM, and rigid adherence to this orthodoxy has padded resource budgets and sapped performance, in terms of both compute and people.


Will you grant that sometimes, they aren't?

It depends. Let's take a SaaS engineering department, for example.

If your customer base is tripling year over year on the strength of market demand, you can end up with feature requests that would take decades to clear even with an engineering team ten times the size. Those requests often come from sales, on the back of failed deals where the product didn't yet meet the client's needs.

If the goal is to keep the lights on and meet current customer needs, you need a fraction of the total engineering team. However, we're on the message board of a venture capital site, so we can assume hypergrowth, as is the goal of a startup.

So, then, I'd argue that in growth scenarios, these people are required. That doesn't mean that they are being used the most efficiently, of course. I think this would be a main point of our disagreement.

That sometimes, the tail wags the dog and the staff and its culture determine the architecture rather than the reverse?

I agree. And some of that is ZIRP-era culture as well. I think we agree on the rest, too. However, I sense a separate point where I don't know whether we agree or disagree.

Our field has not created the tools to scale from 10 to 100 or 100 to 250 well. The best tools that have been created to date have taken microservices as part of the orthodoxy. I don't think this is the only way to do it - Robert Martin has a good article here from a decade ago: https://blog.cleancoder.com/uncle-bob/2014/09/19/MicroServic...

However, everyone escaped the Java ecosystem (because of Oracle and Spring, more than the language itself, IMO), and solutions such as Rails plugins never developed the surrounding ecosystem the way AWS did for microservices.

And don't get me wrong - I'm currently living in nanoservice hell. We agree more than we disagree. However, I think we are looking at different constraints.

Were I a director of engineering at a seed-funded company that was starting to feel the pain of a monolith, I'd take one engineer and create a plugin architecture that enforces APIs, and build a pseudo-schema enforced by peer review and linting, with performance exceptions going through views (or stored procedures for creates and updates). It's painful to rename a table, but much less painful than moving it to another microservice. Then I'd keep things in a monorepo as long as possible, at least until 100 people, with the rule that anything merged to main must be behind a feature flag first and any database migrations must be independent of code changes.
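
For what it's worth, the "enforced by linting" part can be as small as a script like this, run in CI next to peer review. Everything here is hypothetical: it assumes a layout of plugins/<team>/api.py and simply fails the build when one plugin imports another plugin's internals instead of its api module.

    # Hypothetical boundary check for a plugin architecture in a monorepo.
    # Assumes each team's code lives under plugins/<team>/ and that
    # plugins/<team>/api.py is the only module other teams may import.
    import ast
    import sys
    from pathlib import Path

    PLUGINS = Path("plugins")


    def violations() -> list[str]:
        problems = []
        for source in PLUGINS.rglob("*.py"):
            owner = source.relative_to(PLUGINS).parts[0]
            tree = ast.parse(source.read_text(), filename=str(source))
            for node in ast.walk(tree):
                if not isinstance(node, (ast.Import, ast.ImportFrom)):
                    continue
                if isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                else:
                    names = [alias.name for alias in node.names]
                for name in names:
                    parts = name.split(".")
                    # Cross-plugin import that bypasses the owning team's api module.
                    if (
                        parts[0] == "plugins"
                        and len(parts) > 2
                        and parts[1] != owner
                        and parts[2] != "api"
                    ):
                        problems.append(f"{source}: imports {name}")
        return problems


    if __name__ == "__main__":
        found = violations()
        print("\n".join(found) or "plugin boundaries look clean")
        sys.exit(1 if found else 0)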

But I take for granted that growing projects will usually need more people and more quickly than the architecture can easily accommodate, and I think we disagree there.


I'm having a hard time following you. All I'm saying is, I believe that, all other things being equal, a simpler architecture with fewer tiers, layers, network servers, and moving parts will tend to require fewer people than a more complex architecture with more of each. If you're saying that isn't true in a hyper-growth startup, then I guess I'll have to take your word for it, as I've never worked in a hyper-growth startup (only in glacial-growth non-startups).



