
tectonic, ignition, matchbox?

Also, CoreUpdate is based on Omaha, the update system from Chrome and ChromeOS, but CoreOS is the only OS that integrated it nicely. CoreOS is way simpler to manage than most other distros because of that.



Atomic uses rpm-ostree, which looks similar to Omaha from a client perspective (update, roll back to previous version): http://www.projectatomic.io/docs/os-updates/


Is there an alternative to locksmith? The "automatic" update strategies (https://coreos.com/os/docs/latest/update-strategies.html) and/or https://github.com/coreos/container-linux-update-operator?

Basically, on CoreOS I don't need to run `rpm-ostree` to keep the system updated; it is actually updated automatically.
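For reference, the automatic behavior comes from update_engine plus locksmithd, and the reboot strategy lives in one small config file. A minimal sketch (strategy names per the update-strategies doc linked above):

```shell
# Sketch: set locksmith's reboot strategy on a Container Linux host.
# Valid values include etcd-lock, reboot, and off (per the CoreOS docs).
sudo tee /etc/coreos/update.conf <<'EOF'
REBOOT_STRATEGY=etcd-lock
EOF
# Restart locksmithd so it picks up the new strategy
sudo systemctl restart locksmithd
```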


And importantly, you can control updates across a whole cluster by limiting how many machines are allowed to update simultaneously.
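If I remember right, locksmithctl lets you set that cluster-wide limit directly; a sketch, assuming a reachable etcd for the reboot lock:

```shell
# Allow at most 2 machines to hold the reboot lock at the same time
locksmithctl set-max 2
# Show the current lock holders and the configured max
locksmithctl status
```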


I just can't find a decent "this is how to get started from Windows" guide for Atomic. It all assumes I'm perfectly happy on Linux. Not to knock it, but I just want to start some VMs on Hyper-V and have a go.


This is good feedback. I'll bring it to the Fedora docs team.


Container Optimised OS (Which is just ChromiumOS stripped down) has been working pretty nicely for me on GCloud. Just reboot and you have the latest updates. https://cloud.google.com/container-optimized-os/docs/concept...

It's also open source.
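For anyone wanting to try it, booting a COS VM is a single gcloud call; the instance name and zone below are just placeholders:

```shell
# Sketch: create a Container-Optimized OS instance on GCE
gcloud compute instances create my-cos-vm \
  --image-family cos-stable \
  --image-project cos-cloud \
  --zone us-central1-a
```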

But yeah, Matchbox + Ignition for automatically bringing up configured nodes via iPXE was an extremely powerful combination. However, those ideas can be easily ported over, or already exist through cloud-init.


Container Optimised OS's docs also say it isn't supported outside of GCloud, and it relies on CoreOS's services itself.

So for bare metal use cases, I'm EOL?


Well, basically we'll use that on GCloud k8s in the future, but we still have a bare metal cluster. We only used Ignition + VMware to provision it in the first place, and it was a breeze: etcd was configured on the fly in an HA setup, k8s was made ready to bootstrap (we just needed to run kubeadm init --config...), and we had a working cluster. Highly available, with as many nodes as we wanted; just copy the worker.yaml and edit what's needed.
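In case it helps anyone: a per-node YAML like that gets turned into Ignition JSON with the config transpiler before provisioning. A sketch, with worker.yaml standing in for whatever node config you keep:

```shell
# Transpile a Container Linux Config (YAML) into Ignition JSON,
# which is what the machine actually consumes on first boot
ct --in-file worker.yaml --out-file worker.ign
```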

We're actually playing around with Tectonic at the moment.


I was thinking of locksmith. It's nice, as you say, to be able to have no more than N nodes per pool restart at any time, so that rolling updates are automatic and in the background. You can't replicate it easily without a cluster-wide etcd and ignition. I forgot about matchbox. There was also good old fleet, but that one is dead.




