Hacker News
Challenges of micro-service deployments (techtraits.com)
101 points by weitzj on April 16, 2016 | 39 comments



I've yet to see a big reason for this whole SOA / microservices architecture other than in 2 specific instances:

1) massive projects with lots of developers/teams that work on well-defined functionality

2) massive projects that need precise capacity planning at each service level rather than at the application level

Other than these situations, monoliths (both in architecture and deployment) will likely be faster, easier, more reliable and more productive.


One big reason I always bring up, and which I never see discussed, is reuse.

Our microservices are reusable. We have lots of user-facing apps -- different web sites, each with their constellation of services required -- that all use an overlapping set of microservice backends. If we have a new app, we are usually able to write it entirely as a front end. We pick and mix those backends we need.

For example, we have microservices for: Identity/auth/authorization; messaging (SMS, email); image processing; geocoding and various other GIS operations; event logging; site map management; and so on. One of the most important services manages data as documents, with data in Postgres and queries through ElasticSearch, with a fine-grained security system. All our apps use it, and many of the microservices are layered on top of it.

With a monolithic architecture we would have been forced to write this stuff as libraries, and forced our apps to use a single language. (Our microservices are written in Go, Ruby and JavaScript.)


I mean no disrespect by this, but those things sound more like modules / libraries than services with their own infrastructure.


You can be forgiven for thinking that. But these microservices require configuration and often multiple running processes.

For example, our image scaling pipeline runs multiple processes per machine, all coordinated through RabbitMQ. Ideally we'd scale deployment automatically based on the number of queued-up tasks, but we don't have the infrastructure set up for that at the moment.
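
Roughly, each worker process in that pipeline looks like the sketch below (the queue name, task shape, and scaleImage are invented for illustration, and I'm using the amqp091-go client as a stand-in, since the real code isn't public):

    package main

    import (
        "encoding/json"
        "log"

        amqp "github.com/rabbitmq/amqp091-go"
    )

    // scaleTask is an invented message shape for illustration.
    type scaleTask struct {
        SourceURL string `json:"source_url"`
        Width     int    `json:"width"`
    }

    func main() {
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }
        // Pull one unacked task at a time so work spreads across processes.
        if err := ch.Qos(1, 0, false); err != nil {
            log.Fatal(err)
        }

        msgs, err := ch.Consume("image.scale", "", false, false, false, false, nil)
        if err != nil {
            log.Fatal(err)
        }
        for d := range msgs {
            var t scaleTask
            if err := json.Unmarshal(d.Body, &t); err != nil {
                d.Nack(false, false) // malformed task: drop it
                continue
            }
            if err := scaleImage(t); err != nil {
                d.Nack(false, true) // failed: requeue for another worker
                continue
            }
            d.Ack(false)
        }
    }

    // scaleImage would fetch, resize, and store the image (elided here).
    func scaleImage(t scaleTask) error { return nil }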

There's also the language issue, as mentioned: A library-oriented design would require everything to be written in the same language (modulo anything you can do with C bindings and such — no thanks!).

We have stuff written in three different languages, and it's actually very nice to be able to do that, and not worry about language-level compatibility. For example, we are in the process of writing a completely new data backend in Go (the old one is Ruby). We have that choice because no app needs to care what it's written in.

Having each microservice be its own app also forces you into a different and, in my opinion, healthier "shared nothing" mindset. You're not allowed to cheat by bypassing the API and tightly coupling yourself to the internals of the implementation, which in turn forces you to design good distributed APIs.


That's assuming you are developing the project from scratch. But that's often not the case.

Imagine you join a company and their product is one ugly monolith with a lot of technical debt. Now you need to build some big, complicated features on top of this, and there are no similar features so far. Implementing them as a service can be much faster, allow quicker iterations (especially if the deployment process for the monolith is crap, which is often the case), and make it easier to build a high-quality implementation. As long as you don't go crazy and end up with a ridiculous number of poorly separated services, this can be a good decision in the long term (been there, done that).

Based on anecdotal data (where n=3), that's very often the case.


There are two insidious angles at play with the push for MSAs:

1. Software stacks for MSAs dovetail too well with the cloud service offerings from the same company or a division of the parent (EMC/Pivotal)

2. Latency kills. How are you avoiding it with the highly distributed, choreographed flows that are present in MSAs?


#1 describes most software projects of any business significance.


But it describes nearly zero projects in the startup world.


This would most likely be separate applications, perhaps sharing some configuration or central service like authentication, rather than what is commonly referred to as microservices.


From a developer perspective, it's more fun to build an entirely new service.


Definitely this. It also allows the codebase to evolve as the languages, platforms and frameworks come and go.


And that's the problem I've had with large teams of developers. I was on a project with something like 10 scrum teams, and they all wanted to write their own isolated "microservices", so they didn't have to worry about integration.

Of course integration had to happen either way, and it just kicked the can down the road and made it a "dev ops" problem. (Dev ops in quotes because real dev ops wouldn't have been a separate team in the first place)

The point is, you have to integration test, you have to define an API, you have to not make breaking changes, and you have to do all these things regardless of deployment modality.

Beware the disguised call for microservices, which is really just a request "not to worry about integration testing".


Maybe... but fun shouldn't play a part in technical architecture decisions.


> A micro-services architecture does force you to be more conscientious about following best practices and automating workflows.

This 100x. With a monolith, you can get away with SSH'ing into boxes once in a while to fix or debug stuff. Micro-services must be automated.


But if ssh'ing into boxes every once in a while is all you need to fix or debug stuff, why is it "best practice" to build out a lot of infrastructure to replace that?


I don't think that this is what he/she meant.

If you ssh into the box, check some logs, find a wrong configuration option, and then roll out the new configuration in an automated way, that's all good.

The problem comes when you change the configs manually over ssh and call it a day. A few fixes and a few months later, no one really knows how to set up some service because of those undocumented fixes (infrastructure automation serves as documentation). Now imagine that you have a dozen services, some people quit, some people join, a year or two of manual fixes passes... And you have an awful mess where no one knows how to set up the services, and everyone just hopes you won't need to set up any new servers or migrate to different ones.


Nailed it. Micro-services are not necessarily always the answer (good devops is), but they encourage doing the "right thing".


SSH'ing into a box doesn't tend to lend itself to the fix going through source control, automated tests etc.

That's not to say Microservices are the answer. The answer is a dev environment and proper debug/bug fix procedures.


Because that will bite you in the ass. Someday the person who knows how to ssh in to fix stuff will be on vacation, or at another job. Or someday you'll decide you would really like to put the service in an autoscale group and now you have to sort out a bunch of tribal knowledge into a real deployment.


Absolutely. If you can't prepare a new server (install system dependencies, sync system configuration, etc.) and deploy the latest version with one or two commands, you will have a bad, bad time.


This is more about automation vs manual control and having a strong deployment and integration process.

Microservices just demand it more, but it's just as useful and necessary with monoliths.

By the way, "monolith" describes both the architecture and the deployment package. You can have microservices that are deployed as a monolith (a single binary, for example).
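
For example, something like this (service names invented for illustration) compiles two services into one deployable binary while keeping the handler boundary between them:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func usersService() http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/users/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "users service")
        })
        return mux
    }

    func billingService() http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/billing/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "billing service")
        })
        return mux
    }

    func main() {
        // One process, one port; the service boundary is the handler, not the host.
        // Splitting a service out later means moving its handler behind its own listener.
        root := http.NewServeMux()
        root.Handle("/users/", usersService())
        root.Handle("/billing/", billingService())
        log.Fatal(http.ListenAndServe(":8080", root))
    }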


The only good way I've found to do it is to have each service define its logical (network) dependencies in a formal way, just like it does with its GAV dependencies (including versions). This way the graph can be statically analyzed, cycles and transitive conflicts can be identified, deployment and rollback can be automated, the graph can be output to graphviz and visualized, and an automated tool can set up QA environments with a given vector of versions.
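
The cycle check at the heart of that is small; a minimal sketch (the graph shape is an assumption, since our manifest format was in-house):

    package main

    import "fmt"

    // hasCycle runs a depth-first search over service -> dependency edges.
    func hasCycle(deps map[string][]string) bool {
        const (
            unvisited = iota
            inStack
            done
        )
        state := map[string]int{}
        var visit func(string) bool
        visit = func(s string) bool {
            switch state[s] {
            case inStack:
                return true // back edge: cycle found
            case done:
                return false
            }
            state[s] = inStack
            for _, d := range deps[s] {
                if visit(d) {
                    return true
                }
            }
            state[s] = done
            return false
        }
        for s := range deps {
            if visit(s) {
                return true
            }
        }
        return false
    }

    func main() {
        graph := map[string][]string{
            "frontend": {"auth", "catalog"},
            "catalog":  {"auth"},
            "auth":     {},
        }
        fmt.Println(hasCycle(graph)) // false: safe to deploy bottom-up
    }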

All that being said, the above setup only addresses about half of the problems mentioned in the post.


Any good resources for doing that?


No, for us it was all custom tools built in-house :(


> Only using optional fields and coding for missing fields helps us ensure our services are resilient to version mismatch.

I've seen this before and I've always been suspicious of it. So if version 1 has fields r, g, b and version 2 has fields r, g, b, a, and I use version 1 in a version 2 stack, any data in the alpha field is ignored. OK, so you didn't get a stack trace, but is that working software?
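
To make the concern concrete, here's roughly what happens, with JSON standing in for whatever wire format is actually in play (the types are invented):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type ColorV1 struct {
        R, G, B uint8
    }

    type ColorV2 struct {
        R, G, B uint8
        A       *uint8 // optional: nil when a v1 producer omits it
    }

    func main() {
        // A v1 consumer decodes a v2 payload: no error, but "A" silently vanishes.
        v2Payload := []byte(`{"R":10,"G":20,"B":30,"A":128}`)
        var old ColorV1
        if err := json.Unmarshal(v2Payload, &old); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", old) // {R:10 G:20 B:30}

        // A v2 consumer decoding a v1 payload must code for the missing field.
        v1Payload := []byte(`{"R":10,"G":20,"B":30}`)
        var cur ColorV2
        if err := json.Unmarshal(v1Payload, &cur); err != nil {
            panic(err)
        }
        if cur.A == nil {
            fmt.Println("no alpha: fall back to opaque") // a default, not a crash
        }
    }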


Totally agree. I think the microservices thing is actually a distraction from the real concepts at play. As a good thought exercise, imagine if the same thing happened within a single process.

If code was written that expected version 2, and a version 1 object was provided, the static type checker would catch it at compile time.

But with microservices, there is no static type checker and you're essentially coding as if you were in a dynamically typed language.

Hopefully you've at least set up integration tests where you can test the service you're about to deploy against the others, but I think in many microservice situations the only integration testing that happens is in production.


Yup. Version your APIs: once you publish a spec and have other services relying on those APIs, breaking changes should probably go into a new version. Stuff like OpenAPI (Swagger) can also help: https://openapis.org
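
In its simplest form that's just path-based versioning; a sketch (routes and payloads invented, riffing on the color example above):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        // v1 keeps its published contract frozen for existing clients.
        mux.HandleFunc("/v1/colors", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, `{"r":10,"g":20,"b":30}`)
        })
        // v2 adds the alpha field; v1 clients never see it.
        mux.HandleFunc("/v2/colors", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, `{"r":10,"g":20,"b":30,"a":128}`)
        })
        log.Fatal(http.ListenAndServe(":8080", mux))
    }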


Totally agree. Actually went in depth on this with Heroku for an article a while back.

https://blog.codeship.com/exploring-microservices-architectu...


Re: Distributed Debugging / Centralized Monitoring, Logging and Alerting, this is exactly the kind of problem that our team at Takipi (www.takipi.com) tackles. It's a new way to get all of the information you need (source, stack, and state) to understand what's going on in a large distributed deployment in production, without relying on logs.


You should probably mention it's a JVM-only technology (from what I can tell).


Correct, JVM only


Tons of good stuff in there. Best deployment post I've ever read.


Key point: don't use microservices in small teams or on v1s.


Microservices can work great for v1, but you absolutely need a common RPC framework and a solid way to deploy, test and monitor. Most teams don't have the right building blocks. This will change with time.


In v1 you have to spend nearly all your time on features that will gain users and usage. It's risky to waste energy optimizing your infrastructure, since you're unlikely to need it if you fail to get traction.


You can do this in a company that is already big and mature. If you're doing this from day #1 in a startup environment, then you aren't very lean, and you better have lots of funding and an expert team with experience doing this.


If you want to help ensure success, having an expert team with experience on day #1 is going to have more positive influence than having a few (or a fleet) of inexperienced people banging on it.


Much of the world lives outside of the Bay Area and isn't backed by lavish VC funding. You gotta do what you have to in order to survive. Many exciting innovations have come from the duct-tape-and-baling-wire community.


I assumed you were talking about those with VC funding and in the Bay Area when you referred to "startup environments". And that's exactly who I'm ragging on: SV startups who hire a fleet of inexperienced fresh grads because they are cheap. I agree that you're not going to end up with a solid SOA setup, or anything really, unless you have experienced experts doing it from day #1. I think you have a greater chance of ending up with an impenetrable majestic monolith if a bunch of inexperienced people are working on it.



