I find it interesting that many people seem to conflate the complexity of managing infrastructure and services with K8s.

K8s is complex because managing distributed services is complex. Not using it doesn't make that complexity go away; it migrates and ends up bundled into a separate tool, a runbook process, or some script.

It's hard to maintain because the tools and APIs are different from what some engineering teams are accustomed to. Building an in-house tool gives them a warm fuzzy feeling and the comfort that, because they're familiar with their own code and design choices, they can handle problems when they appear.

It's a fair trade-off. I do wonder how much of the time spent on this exercise could have been spent on K8s training.

I do feel that the K8s community downplays how much of a PITA k8s configuration can be, and that the perceived robustness of cloud-managed K8s isn't up to scratch for something this complex.


Your standards aren't too high, but I think you must realise that this is a cultural problem with little hope of changing. Even if the push comes from the CTO, it will take years for change to happen, and it will require new hires and bringing new blood into the engineering leadership.

If you do want to take on the challenge (which I strongly discourage), you'd need to collect data to build your case: quantify the time and human cost from issue/Jira ticket to code landing in prod, and the number of incidents/bugs. The instrumentation to do this will be a fairly chunky piece of devops work. Frame the data in light of your competitors' ability to iterate on their products, and so on. When it's collected and presented it can be quite compelling, and people will listen.

It's only at this point that you'll be able to present the problem to management in a way that they understand. You know and I know that this is a cultural problem first, then a process problem and lastly a technology problem. The amount of work to effect this kind of organisational change, even in a small engineering company, is immense. I don't know what your motivations are for staying, whether it's the domain or the money, but if this is something that bothers you then this is the best piece of advice I can give you:

Run, head for the hills, and don't look back.


> this is a cultural problem with little hope of changing

I disagree. Most existing teams with good processes started out decades ago without them, and improved incrementally from where they started. Team culture can absolutely be changed, but it takes a lot of time (and requires cooperation from seniors and managers).


Without knowing more about their architecture it is difficult to comment beyond the conclusion Alexandra Noonan came to, stated at the beginning of the article. It looks to me like the architectural assumptions were changing too quickly due to the demands of a fast-growing business. Having all their code in a single repository means they can control dependencies, versioning and deployment centrally; it gives them central control of their software development lifecycle. I can't see why they couldn't have had the same benefits as the monolith if their microservices had lived in a single repo to begin with, with the appropriate tooling to enforce testing, versioning and deployment across all services in the repo. I guess this is the whole monorepo-and-tooling debate.

This article for me is more about the complexity of managing a large team across different sites where the architecture needs to change rapidly when modularity is absent. They did get a measurable benefit around performance, though. I wonder if Alexandra will comment on the challenges of running a team in an environment of this complexity?


I totally agree with you.

I think this article is more evidence against the credibility of multi-repo than against "microservices".

Anecdotally, my current place of work has grown to about 200 engineers, maintains a monorepo, and hundreds of deployed cron jobs, ad-hoc jobs, and "microservices". We have none of the problems discussed here. We invest maybe 20 eng weeks a year in monorepo-specific tooling, and perhaps another 30 eng weeks per year in "microservices"-tooling.


If the microservices are in a single repo and tested and deployed together then they are arguably no longer microservices but a "distributed monolith"!


I'm referring to having the same testing, deployment, packaging and versioning policies etc. consistently applied across projects within the same repository, not to deploying, testing and releasing everything together.

It's the drift and inconsistency in these concerns across projects that makes deployment and operations less predictable.
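
As a sketch of what I mean (purely illustrative; GitHub Actions syntax with made-up service names), a single workflow in the monorepo can apply the same test/build/package policy to every project:

    # Hypothetical monorepo CI workflow: one policy, applied to every service.
    name: monorepo-ci
    on: [push]
    jobs:
      ci:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            service: [users, billing, notifications]   # hypothetical service dirs
        steps:
          - uses: actions/checkout@v4
          - name: Test
            run: make -C services/${{ matrix.service }} test
          - name: Build and tag image
            run: docker build -t registry.example.com/${{ matrix.service }}:${GITHUB_SHA} services/${{ matrix.service }}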


I've been an early adopter of Docker: I used Compose when it was still called Fig, and used and deployed the Kubernetes beta up to version 1 for an in-house PaaS/Heroku-like environment.

Must say I do miss those days when K8s was an idea that could fit in your head. The primitives were just enough back then. It was a powerful developer tool for teams, and we used it aggressively to accelerate our development process.

K8s has now moved beyond this and seems to me to be focussing strongly on its operational patterns. You can see these operational patterns being used together to create a fairly advanced on-prem cloud infrastructure. At times, to me, it looks like over-engineering.

Looking at the Borg papers, I don't remember seeing operational primitives this advanced. The developer interface was fairly simple, i.e. this is my app, give me these resources, go!

I know you don't have to use this new construct but it sure does make the landscape a lot more complicated.


I agree that this new construct makes the landscape even more complicated, but I disagree that k8s has reached the point of over-engineering. Most of the parts of k8s are still essentially complex to me -- they're what you'd need if you wanted to build a robust resource pool management kind of platform.

Ironically, the push to "simplify" the platform with various add-on tools is what is making it seem more complicated. Rather than just bucking up and telling everyone to read the documentation, and understand the concepts they need to be productive, everyone keeps building random, uncoordinated things to "help", and newcomers become confused.

For example, I don't know who this operator framework is aimed at -- it's not aimed at application developers but at k8s component creators who write cluster-level tools, and what cluster-tool writer would want to write a tool without understanding k8s at its core? Those are the table stakes -- if I understand k8s and already understand the operator pattern (which is really just a Controller plus a CRD, two essential bits of k8s), why would I use this framework?
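
To make that concrete, the CRD half of the pattern is just a resource definition along these lines (group, kind and schema here are hypothetical); the controller half is an ordinary control loop that watches objects of that kind and reconciles them:

    # Minimal CustomResourceDefinition sketch -- the "CRD" half of an operator.
    # Group, kind and schema are hypothetical.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: mydatabases.example.com    # must be <plural>.<group>
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: mydatabases
        singular: mydatabase
        kind: MyDatabase
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    replicas:
                      type: integer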

I think if they really wanted to help, they'd produce some good documentation or a cookbook and maintain an annex of the state of the art in how to create/implement operators. But that's boring, and has no notoriety in it.


I don’t see how these abstractions make the product more complex. They’re still optional.


It's not that they force Kubernetes to be more complex, it's that they muddy the waters. I clearly understand that they're optional, and that they're an add-on essentially, but it might not look this way to a newcomer.

People are being encouraged to download a helm chart before they even write their first kubernetes resource definition. People might start using this Operator Framework before they implement their own operator from scratch (that's kind of the point) -- though honestly it's unlikely that they'll actually be clueless since it's for cluster operators.


> You can see these operational patterns being used together to create a fairly advanced on-prem cloud infrastructure. At times, to me, it looks like over-engineering.

Well, consider that you want a highly available solution that supports blue-green / rolling deploys without downtime. You either build it yourself or you rely on something like k8s. It's not that much over-engineering. K8s is a lot of code, yes, but the constructs are still pretty simple. I think deploying k8s is still way easier than most other solutions out there, like all these PaaS and cloud solutions. Spinning up k8s is basically just using Ignition/cloud-config, CoreOS and PXE, or better iPXE. Yeah, sometimes it's troublesome to upgrade a k8s version or an etcd cluster, but everything on top of k8s, or even CoreOS itself, is extremely simple to upgrade.
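
To make the rolling-deploy point concrete: with a stock Deployment it really is just a couple of fields, as in this sketch (names and image are hypothetical):

    # Sketch of a zero-downtime rolling deploy (hypothetical names/image).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never dip below the desired replica count
          maxSurge: 1         # start one new pod before retiring an old one
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.2.3
              readinessProbe:           # traffic shifts only once the new pod is ready
                httpGet:
                  path: /healthz
                  port: 8080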

For context, our current system uses Consul, HAProxy, Ansible and some custom-built stuff to actually run our services. System upgrades are still done manually or through Ansible, and my company plans to replace that with k8s. It's just way simpler to keep everything up to date and run with high availability and without disruption during deployments. It's also way simpler to actually get new services/tools into production, e.g. Redis/Elasticsearch, without needing to keep them up to date and running ourselves.


> I think deploying k8s is still way easier than most other solutions out there, like all these PaaS and cloud solutions

Have you seen nomad + consul + traefik? Much easier to install and the end result is close to a K8s cluster.


Not the parent, but I really like Nomad + Consul + Fabio (or Traefik) too. I tried learning Kubernetes but there was so much to take in all at once; I tried learning the HashiStack and I could try it out one product at a time.


It's not clear to me whether you're confusing the Compose/Swarm line of development (which is closer to your ideals) with Kubernetes (which afaik was over-engineered to begin with).

Kubernetes has a huge day-one problem: it doesn't solve all of your problems. The hard stuff, like networking and distributed storage, is left to hook-in APIs. That's fine on Google's cloud, where all the other pieces exist and were developed with these interfaces in mind, so all the endpoints are there. But most companies don't work in GCP/AWS alone. The moment you go on-premise you see that Kubernetes only does 25% of what it needs to do to get the job done.

So you have this tool that already lacks 75% of what it needs in its original design, and it tries to overcome this by adding more stuff. Then you combine that with a prematurely hyped community that just adds more stuff to solve problems that are already solved, that don't need solving, or that aren't problems at all, just to get their own names and logos out there.

These two patterns make it very clear that it is impossible for Kubernetes to ever become a lean, developer-friendly tool. But it's already a great environment for making money, I can tell you. And I think maybe that was the main goal from the beginning.


There's some truth and some wistful hope in your post. In my time at Google, the only thing that was anything like these "Operators" was what the MySQL SRE team developed, which was great, but they also admitted it was a bit "round peg, square hole". There's a shared persistence layer that hasn't quite shown up yet; you need a low-latency POSIX filesystem and a throughput-heavy non-POSIX system (Chubby and GFS in the Borg world; etcd and ??? in k8s). Not having the ability to work with persistent, shared objects is the biggest detriment to the ecosystem. S3 sorta works if you're in AWS, GCE supports Bigtable, etc.


Operations is always more complex than folk expect it to be, and product evolution typically reflects that. Kubernetes was simple early on because it couldn't arrive already doing all the things ever-larger clusters require of it.

We forever rush to the limits of current technology and then blame the technology.

I think it's worth noting that Kubernetes never tried hard to impose an opinion about what belongs to the operator (as in the person running it) and what belongs to the developer. You get the box and then you work out amongst yourselves where to draw the value line.

Cloud Foundry, which came along earlier, took inspiration from Heroku and had a lot of folks of the convention-over-configuration school involved in its early days. The value line is explicitly drawn. It's the opinionated contract of `cf push` and the services API. That dev/ops contract allowed Cloud Foundry to evolve its container orchestration system through several generations of technology without developers having to know or care about the changes. From pre-Docker to post-Istio.
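
For a concrete sense of that contract, the developer-facing side amounts to a small manifest plus `cf push` -- a sketch with hypothetical names:

    # Sketch of a Cloud Foundry manifest.yml; app name and buildpack are
    # hypothetical. From the app directory, `cf push` is the whole deploy step.
    applications:
      - name: orders-api
        memory: 512M
        instances: 2
        buildpacks:
          - java_buildpack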

Disclosure: I work for Pivotal, we do Cloud Foundry stuff. But as it happens my current day job involves a lot of thinking about Kubernetes.


I just find Docker Swarm so much simpler, and it works for the type of stuff we deploy.

Sad to see it 'lose the race' against kubernetes.


Kubernetes by itself may be daunting for most teams.

But I'm not sure I understand the backlash. Once you've built your application and it's been packaged (containerized) and deployed, why would anyone care how it's run? Also, running a container in production and orchestration seem to be somewhat conflated in this thread, and the use cases are very different.

You can think of Kubernetes as an automated sysadmin. This is a bit reductive, I know, but it's a useful way to think about it. You ask the sysadmin to run something for you, they tell you how to package it (tgz, war, zip etc.), and they run it for you on hardware.

The level of engagement a dev has in getting their app running on hardware is no different from dealing with a sysadmin, except that here the admin requests your app be packaged in a container.

Kubernetes out of the box will give you most of this functionality as long as you keep state outside of the cluster. There are also options for making the experience smoother, and tools that help too:

* Openshift
* Kubernetes + Rancher
* Mesos

If you need orchestration and scheduling, I'm a little perplexed by the backlash.


Hadn't thought of it from this angle. Docker's only chance of survival is to have a cross-platform container: something that works on Windows and Linux.

Perhaps Docker's only play is to fold into Microsoft to achieve a cross-platform solution. Microsoft does have Brendan Burns now.


Didn't Docker announce Windows support years ago?



My comment wasn't too clear. I'm not talking about just running the Docker CLI on Windows or using Windows 10 containers. I'm yet to see a coherent story on how to write, say, a Java application, containerize it, and deploy it on both Windows and Linux without having to deal with the differences in each OS's container solution.

If there is, I'd like to learn about it.


It's all about OpenShift. Red Hat developers have actively contributed to Kubernetes for about two years now.

Now they'll own the entire stack and have a great integration story for enterprises. Even though containers have been around for 3+ years in the form of Docker, corporations still don't have a scooby about how to integrate them into their existing deployment and development workflows.


> Now they'll own the entire stack and have a great integration story for enterprises. Even though containers have been around for 3+ years in the form of Docker, corporations still don't have a scooby about how to integrate them into their existing deployment and development workflows.

I second this. If it's a legacy stack, enterprises struggle to fully containerize their apps and commit to deploying with a container orchestration layer like OpenShift or Kubernetes. IMHO, we need more enterprises to get over this barrier, rather than viewing it as a passion project by over-eager devops teams...


The Docker ecosystem is hard to follow. As you've just mentioned, there are multiple solutions to each problem. Docker-based solutions for orchestration (Swarm), storage (v1.9) and networking (v1.9) overlap with the offerings from Kubernetes, Mesos, Flocker and a whole bunch of others.

It's hard to know whether to wait for Docker to provide a solution or to use something that already has momentum. Take networking, for example. Solutions have been bandied about for the last year or so, and only now do we have something that's production-ready. Do I rip out what I already have for something that is Docker-native, or do I continue with the community-based solution?

Storage (data locality) follows a similar path. Kubernetes provides a way of making network-based storage devices available to your containers. But now, with the announcement of Docker v1.9, do I go with their native solution or with something that has been around for ~6 months longer?
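
For context, the Kubernetes side of that is a claim-and-mount flow, roughly like this sketch (names, size and image are hypothetical; the actual backing store depends on the cluster):

    # Sketch: a PersistentVolumeClaim mounted into a pod (hypothetical names).
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: db
    spec:
      containers:
        - name: db
          image: postgres:9.4
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-claim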

I've been working with these technologies for the past year, and it has not been easy building something that is stable with a reasonable amount of future-proofing baked in.


My advice would be to think hard about your requirements and pick something which meets them. Don't fret about the "best" solution - you and your team have more important problems to solve. If something works for you then you have made the right choice. All the solutions you would pick today will still be around tomorrow.


Try writing a book on it! Maddening.


Hi Ctex,

It's neat that Beluga is written predominantly in bash, but it's also difficult to see what Beluga actually does. Right now it looks like it sets up an environment for a docker-compose app to run -- similar to docker-machine, but also solvable with tools like Ansible, Salt or Python Fabric. Kubernetes and Mesos are solving different problems: they manage and orchestrate services, and add-ons may also help with repository management.

My view may come from a lack of understanding of the tool's primary use case; I'm deeply interested in new developments in the Docker ecosystem. Could you please update your documentation with a few examples?


Hi, thedevopsguy.

The whole goal of beluga is to take your docker-compose project from your machine to a remote host and have it running.

It's meant to be used with the existing docker-machine & docker-compose tools and shouldn't interfere with any internal docker apis.

Beluga was originally written in bash as we didn't want to impose runtime dependencies. Bash is usually available pretty much everywhere except BSD.

It also accounts for the fact that if you have to deploy to multiple machines you won't have to rebuild your project on each one, as it can push and pull the images from either Docker Hub or your own self-hosted Docker registry (either https://github.com/docker/docker-registry or https://github.com/docker/distribution).

Here's a link to a sample node.js project that is deployed with beluga. https://github.com/cortexmedia/Beluga-SampleProject-Nodejs

It's pretty much a standard docker-machine & docker-compose project but with an additional file.
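
For anyone unfamiliar with the shape of such a project, the docker-compose.yml side looks roughly like this sketch (service names and images are hypothetical; the Beluga-specific file isn't shown here):

    # Sketch of a typical docker-compose.yml for a small web app
    # (hypothetical services and images).
    web:
      build: .
      ports:
        - "3000:3000"
      environment:
        - NODE_ENV=production
      links:
        - redis
    redis:
      image: redis:3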


Or move everything to a microservices architecture and use the best-suited language for each subsystem. It would force both camps to engineer for composability and open service APIs. My tuppence.


I assure you that Uber is built using "microservices". That doesn't mean there isn't value in having some language uniformity in the org, and it doesn't solve the problem of language preferences between people.


Looks like Uber has moved to a microservices architecture. Check their engineering blog: (https://eng.uber.com/soa/)


Wasn't aware of their move. A microservices architecture then makes the technical conversation about the service -- its latency and performance -- rather than about implementation details.

Is it such a bad thing to have many different languages in play as long as the SLAs are met?

