
I agree. In the system I'm currently working on, they can't. I think it would be possible to implement, but no one has been able to take the time (or, perhaps sees the value in it).

(OTOH, I kind of doubt developers at Google or Facebook run the whole thing locally, so there must be some kind of end state for this.)



I actually disagree; developers don't need to run the whole system locally. In my company we use a development Docker cluster where we keep instances of all of our microservices, exposed (via VPN) so you can call them by domain. When you work on logic that affects, say, two microservices, you can just set up those two locally and make remote requests to the dev environment for everything else. I don't see any reason why you should run all of them on your computer.
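The hybrid setup described above can be sketched as a tiny URL resolver: services you're actively working on resolve to localhost, everything else resolves to the shared dev cluster. All service names, ports, and the dev-env domain below are hypothetical.

```python
# Resolve a service's base URL: prefer a locally running instance for
# services under active development, otherwise fall back to the shared
# dev cluster reached over the VPN. Names/ports/domain are made up.

LOCAL_OVERRIDES = {
    "orders": "http://localhost:8001",
    "billing": "http://localhost:8002",
}

DEV_ENV_DOMAIN = "dev.example.internal"  # hypothetical VPN-only domain

def resolve(service: str) -> str:
    """Return the base URL to use when calling `service`."""
    if service in LOCAL_OVERRIDES:
        return LOCAL_OVERRIDES[service]
    return f"https://{service}.{DEV_ENV_DOMAIN}"

# The two services being worked on talk to local instances; every
# other call goes out to the shared dev environment.
print(resolve("orders"))  # local
print(resolve("users"))   # remote dev env
```

The same idea is often wired up with environment variables or a compose override file instead of code, but the routing decision is the same.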


Yeah that's a good point. When the system is too big there's no way you can run it all locally. The angle I'm coming from is an application that's fairly new and still small and is changing rapidly with lots of cross-cutting changes. We're paying some heavy microservices taxes with development, testing, deployment, and performance, but not at a point where we see benefits yet (IMHO).


If the alternative to microservices is a monolith, and you can run the monolith locally, then logically microservices can also be run locally. If it's difficult to run all the microservices locally then that's just a sign of weak tooling.


"Just a sign of weak tooling."

Why does technical overhead take a backseat whenever a microservices vs monolith discussion comes up?

Yes, in a perfect world every org would have sufficient time and engineering resources to implement microservices for better scalability and code quality. In the real world, setting up and maintaining microservices has huge technical overhead, I'd estimate double that of the equivalent monolithic architecture.

If your company isn't flush with cash and the product you're building will never need massive scaling then it makes no sense to use microservices, at least from a business perspective.


There are many business reasons to use microservices, not only technical ones. In our org we can hire developers faster because we can use multiple programming languages (we have a few approved stacks, since microservices don't share a code base).

It allows us to split our teams into small agile units that can, e.g., deploy independently.

Getting new developers on board takes less time, as they work on a few small services and don't need to be aware of the whole code base (at the beginning).

Yes, I agree there is technical overhead: proper CI/CD servers are required, and Docker or e.g. Vagrant is a must with microservices. But counting all the benefits, I wouldn't say that we lose 2x more time compared to a monolithic architecture.


"Why does technical overhead take a backseat whenever a microservices vs monolith discussion comes up?"

I don't think it does, but that's kind of off topic.

"In the real world, setting up and maintaining microservices has huge technical overhead, I'd estimate double that of the equivalent monolithic architecture."

I agree.


If two competing solutions don't carry the same "off topic" overhead, then that overhead isn't off topic when comparing them.


What a ridiculous statement. Splitting a single process into N processes with no shared memory multiplies startup memory by N. It takes a much beefier system to run 8 JVMs than one. The same goes for 8 Python processes. Matters are worse on a VM, which you're probably on, since basically every shop out there issues developers a Windows or OSX machine. Or maybe you don't have "weak tooling," as you put it, and everything's containerized, using even more resources.


"Splitting a single process into N processes with no shared memory multiples startup memory by N. It takes a much beefier system to run 8 JVMs than one. The same goes for 8 Python processes."

I agree, but I haven't seen the amount of memory ever be a limiting factor when running multiple microservices on a local developer machine. I'd estimate that you can run at least 50 JVM, Python, and Node.js processes on a typical machine, and most applications built from microservices have 50 or fewer of them.


> I'd estimate that you can run at least 50 JVM, Python and Node.js processes on a typical single machine

Unless someone has convinced you that Spring is the right way to build microservices, in which case you're going to need a gigabyte per instance, and most people won't be able to run 50 on a typical machine. I worked on a project like that, and our beefy iMacs really laboured to bring up ten services.


It's not just the runtime. Each Python process will redundantly import modules. The same goes for libraries in a Java project. It definitely adds up, and I have seen memory usage be an issue, particularly on VMs.


> are since basically every shop out there issues developers a Windows or OSX machine

Uhh... What?


Yes, the tooling is weak. Now, let's convince the engineering managers to spend a bunch of time building tooling to start everything together and fix service discovery on localhost. Turns out, they'd rather build features.

I'm with you 100%. It drives me crazy, but right now you cannot run multiple services locally without a bunch of fiddling.
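One low-tech way to tackle the "service discovery on localhost" part of that fiddling is to assign each service a stable local port deterministically and generate the address config every service reads. The service names, port base, and env-var convention below are all hypothetical.

```python
# Deterministically map each service name to a localhost port and emit
# the SERVICENAME_URL environment variables that services would read
# instead of doing real service discovery. All names are made up.

BASE_PORT = 8000

def local_addresses(services: list[str]) -> dict[str, str]:
    """Map each service to an env var pointing at a local port."""
    return {
        f"{name.upper()}_URL": f"http://localhost:{BASE_PORT + i}"
        for i, name in enumerate(sorted(services))
    }

env = local_addresses(["users", "orders", "billing"])
for var, url in sorted(env.items()):
    print(f"export {var}={url}")
```

Sorting the names keeps the port assignment stable across runs, so every developer's localhost looks the same without any registry.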


It's not impossible. As a sibling comment says, it usually just requires a bunch of ad-hoc scripts to make sure everything is running. This does get increasingly complex, especially as microservices are written with different stacks, but that's part of the tradeoff.


In my own projects you can start up every microservice with a single command / script.

Yes, having sufficient tooling for microservices does consume resources.
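A single-command launcher of the kind described above can be surprisingly small: a manifest of services and start commands, parsed and spawned as child processes. The manifest format, service names, and commands here are all hypothetical.

```python
# Minimal "start everything" launcher: parse a manifest of services and
# their start commands, ready to launch each as a child process.
import shlex
import subprocess

MANIFEST = """
users:    python services/users/app.py
orders:   python services/orders/app.py
billing:  node services/billing/index.js
"""

def parse_manifest(text: str) -> list[tuple[str, list[str]]]:
    """Parse 'name: command' lines into (name, argv) pairs."""
    procs = []
    for line in text.strip().splitlines():
        name, _, cmd = line.partition(":")
        procs.append((name.strip(), shlex.split(cmd)))
    return procs

def start_all(manifest: str) -> list[subprocess.Popen]:
    """Launch every service in the manifest; caller waits on them."""
    return [subprocess.Popen(argv) for _, argv in parse_manifest(manifest)]
```

Real versions grow health checks, dependency ordering, and log multiplexing, which is where the "tooling consumes resources" point kicks in.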



