I'm a PM and engineer at CoreOS/Red Hat -- feel free to ask any questions and I'll do my best to answer.
In the next few months, you should see an OpenShift built on the same upgrade system as Tectonic, which allows more incremental buy-in to OpenShift's PaaS functionality, and a Linux distribution that leverages Ignition and immutability to provide the minimal environment needed to run Kubernetes and containers.
My understanding is that Container Linux as is will be supported for years, but we will also be creating a new distro, RH CoreOS, that replaces the Gentoo build system with Fedora tooling. This shouldn't change much for users as they don't interact with the build system; they just consume the results of said system. I'd liken this scenario to the relationship between CentOS and RHEL, which are both maintained by Red Hat. Some details have yet to shake out; for example, I personally don't know if the resulting distro will leverage rpm-ostree, but we already have internal proof-of-concepts running OpenShift with Tectonic components on top of Container Linux.
Please voice your opinions here and now! Nothing is set in stone and we're listening for the community to weigh in on these decisions as well.
One of the main reasons we use CoreOS is that it ships stable, state-of-the-art features from the latest Linux kernels, so features and infrastructure like eBPF are available and can be used without problems. RHEL kernels (speaking of RHEL 7, for example) are ancient and heavily outdated by comparison, backporting only some of the newer features from upstream, so user space that depends on those features cannot run, which blocks the latest innovation (or kills Linux kernel innovation by encouraging kernel bypasses like DPDK).
Oracle's RHEL clone, on the other hand, ships their UEK kernel, which is a recent, commercially supported kernel based on (almost) the latest upstream. The situation there is at least better than with native RHEL, but I truly hope Red Hat has an answer to that with brand-new Linux kernels on the RH CoreOS side. Please don't let innovation coming from the kernel die by dictating ancient RHEL kernels to the majority of users. CoreOS enables innovation; please make sure it continues to do so.
I totally agree with your points, especially with awesome projects like Cilium[0] solving real problems in the Kubernetes ecosystem and, you know, getting updates to the kernel primitives that actually isolate containers. RH CoreOS is not going to have a kernel synchronized with the RHEL release cycle. We're working out what certification requirements we want to support and, using that as a guideline, we'll ship the freshest possible kernel. The nice thing about supporting both RHEL and CoreOS as the OS for OpenShift is that customers who require government-tier certifications can be told to use RHEL, and everyone else can enjoy CoreOS.
We also now have access to the vast kernel engineering resources at Red Hat, so CoreOS should be able to get emergency fixes like those for Spectre and Meltdown out to customers much more quickly.
I switched from Ubuntu to Fedora on my development machines and it has been amazing.
Stability is superb and a lot of the niggles I had with Ubuntu just went away.
Fedora Cinnamon is damn close to perfect as a development desktop for my needs. I literally can't think of anything I'd improve outside of its multi-monitor support: it's fine with fixed monitors, but plugging my 4K display into the ThinkPad's DisplayPort requires some finessing (then again, so did Windows).
You are missing the point. It's not at all about Fedora itself here. What I was trying to say is that today CoreOS, even in the stable channel, ships Linux kernel 4.14.32 (https://coreos.com/releases/), thereby enabling deployments to use the latest and greatest features, performance optimisations and innovation from the Linux kernel community in enterprise environments. Going back to the RHEL kernel would be a major step backwards, cutting a wide user base off from what CoreOS enables them to do today. I genuinely hope that the CoreOS folks at RH don't give up on continuing to deliver this sort of innovation. That is all I was trying to get across.
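As a concrete illustration (my own sketch, not anything from the CoreOS docs): eBPF-based tools document a minimum kernel version, and a deployment script can guard on the running kernel before enabling those features. The 4.9 threshold below is illustrative.

```shell
# Guard an eBPF-dependent rollout on the running kernel version.
# The 4.9 minimum here is illustrative; check your tool's docs.
min=4.9
cur=$(uname -r | cut -d- -f1)
# sort -V sorts version strings; if the smallest of {min, cur}
# is min, then cur >= min.
if [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]; then
  echo "kernel $cur is new enough (>= $min)"
else
  echo "kernel $cur is too old (< $min)" >&2
fi
```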
We really try to avoid being a "bleeding edge" distro, and prefer to focus on leading edge. We don't always do everything absolutely before anyone else; we try to be the first to provide integrated, tested, usable versions of innovative open source.
Upgrading the Linux kernel opens a huge can of worms: driver regressions happen all the time. The only things I think are worth backporting in an enterprise kernel are hardware enablement changes and security fixes. There is nothing stopping a customer who wants an LTS or newer kernel from compiling it themselves and running it on RHEL. Red Hat has to pick its battles; I imagine very few users care about this for workstations or servers.
This is where we'd love to have your community input on Fedora/CentOS versions. Fedora Atomic Host has had very recent kernels available, and I don't see why that wouldn't continue with the Fedora version of the new OS.
Thanks for the response! My immediate question after I read the article yesterday was on the future of Fedora (and RHEL) Atomic.
Both you and the press release mention that CL will be supported for a while, but it's not clear what's going to happen with Atomic. My entirely unscientific and feels-based opinion is that Atomic seems to be getting a little more traction in the Workstation/SilverBlue area at the moment. rpm-ostree is really unique tooling, and I'd hate to see it go away; the more I use it, the more I like it!
At any rate, the market share of CL is probably substantially greater than that of Atomic, but there are those of us who are rolling out k8s clusters based on Atomic. It would be nice to know sooner rather than later if we're wasting our time!
I'm not sure of all the details related to Atomic, but my understanding is that you're correct about its smaller marketshare. Unlike Container Linux, I personally don't know how long we're going to continue maintaining the distribution. I know there is public information somewhere; I'm going to fire off an email to get more information so that I can clarify this post.
Going forward, RHEL and Red Hat CoreOS are going to be the official distributions supported for running OpenShift. RHEL will be for users who need to install software on the hosts themselves manually, and CoreOS will be the preferred immutable host that expects all software to run at the cluster level. Running OpenShift on Container Linux or Atomic will be like running RHEL RPMs on CentOS -- it'll pretty much work fine, but I don't think you can call up Red Hat if you get into trouble.
rpm-ostree is super cool. The engineers working on the OS are still trying to figure out how to bring everything together, but we understand how important this technology is. I know this is the hill the Atomic engineers will die on, so if there's going to be anything from Atomic in CoreOS, it's going to be this, haha.
The answer to this splits across the Fedora/RHEL sides. For people using RHEL Atomic Host (and RHEL) with OpenShift today, we obviously want a good transition to Red Hat CoreOS. That gets into details of provisioning, though, and of what types of customization are being done today. Similar concerns apply to Tectonic+CL users.
My message on the community side (Fedora and existing CL) is - don't panic, you have some time. And a lot of us want to support a broad array of use cases.
Fedora Atomic Host will be updated through at least the end of the Fedora 28 cycle (December?). I'd like to keep it updated through Fedora 29, but we'll see what actually happens; best intentions aside, build engineers are always in short supply (your contributions can help). Red Hat Atomic Host will be supported into 2020. CentOS Atomic Host tracks RHAH, so it will likely be available through the same period.
After that ... there will be some kind of migration plan to the new OS, details TBD. The real asset we have there is `rpm-ostree rebase` which makes it really easy to swap out the Atomic Host base in-place.
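For readers who haven't used it, the rebase flow looks roughly like this (a sketch; the remote URL and ref names are made up for illustration):

```shell
# Point the host at a different ostree remote/ref and switch to it
# in place; the previous deployment is kept so you can roll back.
# Remote URL and ref names here are illustrative, not real ones.
sudo ostree remote add newos https://example.com/repo
sudo rpm-ostree rebase newos:newos/28/x86_64/base
sudo systemctl reboot
# If anything goes wrong, return to the previous deployment:
sudo rpm-ostree rollback
```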
Also, I'd really like to know how you're deploying Kubernetes on Atomic, because we will likely stop building some of the multiple ways to install K8s on Atomic, and would prefer to keep the ones people are actually using.
We were looking rather favourably at CoreOS as an enterprise-friendly yet lightweight alternative OS. LDAP/RBAC/Prometheus are the type of features we're looking for; on the other hand, we have our own build tooling and release process, hence no use for that part of OpenShift. I find it hard to recommend a K8S distribution that has so many batteries included, 75% of which my organization doesn't need or use. Sorry to see Tectonic go.
Being from CoreOS, I understand your sentiment. Part of our long-term integration goal with OpenShift is to package up all of its components as Operators that can be optionally installed with our Operator Lifecycle Manager[0]. This is the software that powered Tectonic's Open Cloud Services. In addition, we're rolling out an installer based on the Tectonic installer. The end result should be an installation process similar to Tectonic's and a Kubernetes cluster running very few services, until you decide to install additional Operators for the functionality you want.
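As a sketch of what "optionally installed Operators" looks like in practice with OLM: you create a Subscription object and OLM installs the Operator for you. The API group is OLM's real `operators.coreos.com` group, but the package, channel and catalog names below are illustrative.

```shell
# Install an Operator by creating an OLM Subscription; OLM resolves
# the package from a catalog and installs the Operator's
# ClusterServiceVersion. Package/channel/catalog names are made up.
kubectl apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: operators
spec:
  name: etcd
  channel: alpha
  source: example-catalog
  sourceNamespace: olm
EOF
```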
There is a wonderful little piece of CoreOS that is toolbox[0]. It's not available on Atomic Host. Yes, I could install it easily, but the point of toolbox is to avoid installing anything on the host system in the first place!
It could easily be forgotten on the side of the road while doing the CoreOS/Atomic fusion work.
So I'm asking you: can you salvage this and integrate it, pretty please? :)
In Fedora, there is the Fedora Tools container [1], in RHEL, there is the RHEL Tools container. I don't envision either of those going away because they allow you to do all kinds of fancy stuff including things that require kernel compatibility, like SystemTap, core dumps, etc.
I am a product manager for containers/rhel/CoreOS at Red Hat, and I very much foresee a similar container for Red Hat CoreOS which will provide similar functionality for troubleshooting kernel, and user space (aka other containers) problems.
Thanks for bringing this up, this one had dropped off my radar.
I do think that the simple concept of having `toolbox` installed by default is a powerful one, and while we should revisit some of the details I'd say we should carry this one forward.
Have you built much scripting up on top of it, or is it just having it available for interactive use?
Not much scripting, just a personalised image on Docker Hub, plus an alias on my dev machine to push "navaati/toolbox" into the .toolboxrc on machines (but that's just a small convenience).
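For reference, the customization amounts to a couple of shell variable assignments, since the CoreOS toolbox script sources ~/.toolboxrc. The image name comes from the comment above; the tag variable and its value are assumptions.

```shell
# ~/.toolboxrc -- sourced by the CoreOS toolbox script, so overrides
# are plain shell assignments. The tag value is an assumption.
TOOLBOX_DOCKER_IMAGE=navaati/toolbox
TOOLBOX_DOCKER_TAG=latest
```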
> My understanding is that Container Linux as is will be supported for years, but we will also be creating a new distro, RH CoreOS, that replaces the Gentoo build system with Fedora tooling.
As a Gentoo dev and long time user, I've always had a lot of sympathy for CoreOS since the beginning, so when I heard about the RedHat acquisition, I wondered about this. I have to say that I'm saddened to know that Portage will be eventually replaced. It's an excellent package manager.
As an early adopter of CoreOS, paid Tectonic user, I find this distressing, disappointing, and it leaves me wondering where I can turn.
This is the opposite direction of what RedHat should have done.
I was hoping this was an acquisition to move RedHat's technology stack forward; instead it's one to move an innovative and solid platform backward.
Acquire, assimilate and kill off competition, just like other RedHat acquisitions before it. :(
I'm genuinely interested in why you believe this is the case. Can you point to anything specific about the converged stack that makes you feel uneasy? From the CoreOS perspective, OpenShift will literally become the new version of Tectonic, which was work from before the acquisition, but with the ability to install additional OpenShift components as optional Operators.
The move to Fedora on its own is a huge negative from my perspective. If you're just dropping OpenShift and rebranding Tectonic as OpenShift, I guess that's OK, but I'm still stuck dealing with RedHat (the organization), which in my 20+ years as a Linux user has largely been more negative than positive.
The tooling to create the immutable image is what is changing, from Mantle (our Gentoo and ChromeOS toolchain) to Fedora tools. From the consumer perspective, you will not see any RPMs or package managers in our distribution. If you have not built a custom image of Container Linux yourself, you probably weren't even aware this software existed.
It is to move our technology stack forwards. Container Linux and Tectonic have technology which the RH container stack has lacked (e.g. Ignition, Operator-based host upgrades). At the same time, there's tech in Atomic/Openshift which hasn't been available in the CoreOS stack, like rpm-ostree and S2I. We're really trying to do a "best of both" with the new projects.
Of course, you may disagree with what's "best". You know where to reach me (Josh Berkus) if you want to send backchannel feedback.
Red Hat open sources everything and Quay is no exception. I expect it to be open sourced by the end of the year or early Q1. Quay has a few additional [time consuming] internal processes it has to go through, as it is a totally new product at Red Hat.
We're collaborating with the kubebuilder[0] project upstream, which is a subproject of SIG API Machinery that focuses on generating the best scaffolding for controllers. Some Googlers and I also proposed creating a SIG focusing on platform extensions to Kubernetes, such as Operator tooling[1]. The steering committee is currently not convinced that it merits a dedicated SIG, despite the many projects in the wild experimenting without organization. We're fully committed to taking well-understood, community-accepted opinions from our tooling and upstreaming the work, if the community can agree on that aspect of the framework. A great example of this is the Application Definition Working Group, which has leveraged many ideas from our Operator Lifecycle Manager; our CRDs are practically the same! Now that things are open source, we should see things like these converge entirely.
As to whether this or other similar projects will become part of Kubernetes core, my guess is there will probably be resistance, as it would mean choosing one particular way of extending Kubernetes over others. That would go against the overall philosophy of Kubernetes that components in it are optional and pluggable[1]. On the other hand, some standardization will most likely evolve around the end-user experience of consuming multiple Operators in a single Kubernetes cluster.
"CoreOS technology to combine with Red Hat OpenShift to drive hybrid cloud-native services, will power fully-automated Linux container platform stack, from the operating system to application services, across the hybrid cloud"
This is so overloaded with buzz words it took a few attempts to make any sense of it.
The bits about the OpenShift integration are interesting, I guess, but buried about halfway down is the news that they intend CoreOS' Container Linux to supplant their existing Fedora/RHEL Atomic Host, and Brandon Philips from CoreOS says that they'll be continuing to base it around Ignition[0].
I'm genuinely surprised at this. RH has put a ton of work into rpm-ostree over a long time. I guess there's a chance they'll meld it somehow with Ignition and Container Linux's Chrome OS bits when they turn it into Red Hat Container Linux or whatever it'll be called, but it's surprising to see Red Hat supporting a Linux distro not based off of RPMs and not installed with kickstart/anaconda.
It's not clear to me whether the Container Linux system, for example, will still be built using Gentoo's emerge or the RH tools. You seem to discount the latter, but I guess it depends on what they really mean by "based on Fedora and Red Hat Enterprise Linux sources". Are they going to rebuild e.g. the kernel using emerge, but from RH's source tarballs and collection of patch sets?
Initially it's going to be based on RHEL so we can ramp up quickly. Over time it's very likely we get more aggressive with RH CoreOS and newer kernels, but those are all details being worked on. It's very important for us to be able to benefit from existing engineering investments in the community and within RHEL, but there's a lot of excitement about taking what CoreOS proved could be compelling and reinforcing that with the Red Hat engineering teams. Stay tuned for more.
This is why I won't advocate for RH and similar vendors. The tech stack churn is too high. If it's not a widely adopted open source platform, it's eventually going to be bought out or die a lonely death, with me having to migrate clusters to something else or hang on to unsupported legacy systems for 10 years. I'd rather support a Frankenstein's monster of my own design.
Bear in mind that we're basically trying to do it both ways: we still have Red Hat Enterprise Linux, and I don't think anyone would use the term "tech stack churn" there.
It's CoreOS (now called Container Linux), so they probably figure the end user will install containers instead of RPM packages. I haven't added anything to the base Container Linux image while using it, which is probably as intended.
I am so glad they changed the name. TinyCore and Core Linux were around before CoreOS, and I always thought they robbed the name. Definitely made for some confusion.
Maintained. A new offering based on RHEL will initially be targeted at the supported scenarios under OpenShift while we work with the communities on how they want to evolve.
If you're using Tectonic in particular, we're certainly aiming to have a nice path. Although there are a lot of details behind the term "upgrade" -- it may require reprovisioning -- a lot of the point of carrying forward Ignition is that any early OS customization you've made still applies.
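For readers unfamiliar with Ignition: it runs once on first boot and applies a declarative JSON config, which is why customization survives reprovisioning. A minimal example, written out as shell (the hostname value is illustrative; the spec version follows the Ignition 2.x series):

```shell
# Write a minimal Ignition config. Ignition applies it on first boot
# only; reprovisioning a node re-applies the same declarative config.
# The hostname value "worker-0" is a placeholder.
cat > config.ign <<'EOF'
{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,worker-0" }
    }]
  }
}
EOF
```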
If you're running a Kubernetes cluster, while Red Hat CoreOS will support automatic in-place updates just like existing CL, I'd say it's best practice to do periodic reprovisioning to flush out extra node state. For example, in RHEL 7.5 we switched from devicemapper to overlayfs, but existing instances don't get automatically transitioned. If you're using k8s, reprovisioning works well, as all the containers just move off and then back on.
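The periodic-reprovisioning pattern above is straightforward with stock kubectl (a sketch; "node-1" is a placeholder, and flag names are per the Kubernetes releases of that era):

```shell
# Cordon and drain a node so its workloads reschedule elsewhere,
# then remove it from the cluster. "node-1" is a placeholder name.
kubectl drain node-1 --ignore-daemonsets --delete-local-data
kubectl delete node node-1
# ...reprovision the machine with a fresh image; on boot the kubelet
# re-registers the node and pods can schedule back onto it.
```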
What are your plans for rkt? It's a great alternative in a Docker-dominated ecosystem, with a much more solid architecture, but I haven't seen much progress lately. It doesn't seem to be getting a lot of love.
Red Hat is backing CRI-O for an alternative OCI runtime.
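Concretely, wiring the kubelet to CRI-O goes through the generic CRI flags rather than anything CRI-O-specific; the socket path below is CRI-O's default, and flag names are per the Kubernetes releases of that era:

```shell
# Point the kubelet at CRI-O via the generic CRI interface instead
# of the built-in Docker shim. The socket path is CRI-O's default.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```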
While we did pave the way for creating alternatives for Docker in Kubernetes, CoreOS never quite got rkt to 100% stability in Kubernetes. Personally, I love a lot of things about rkt, but the project's ultimate goal was to have standards, regardless of whether or not it was AppC.
If you're still interested in rkt (it's great tech that we still use to this day to run the kubelet for all Tectonic clusters), I recommend chatting with the awesome folks at Kinvolk[0]. They maintain rkt alongside CoreOS and support customers using it in production.
Chris from Kinvolk here. As Jimmy mentioned, we've done a good chunk of the work on rkt with CoreOS and are happy to support customers using rkt, and have done so for CoreOS, BlaBlaCar, NASDAQ and others in the past.
But we've chosen not to go the startup route, which means we can only really afford to work on rkt in the context of paid work. We're looking at doing more of this in the future through support contracts for Flatcar Linux[0], a fork of CoreOS' Container Linux, which includes rkt in the images, and through the contracts we get here and there from users looking for new features in, or support for, rkt directly.
But rkt, as is, remains a great container runtime. It's our preferred runtime when running outside of Kubernetes, atm. The Kubernetes integration via rktlet[1] works well but does not have 100% functional parity with the default CRI implementation. It probably needs about 3 person-months of work to get there at this point.
So yeah, it works well, but does indeed need a bit more love. If you're interested in helping out, get in touch.
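For anyone who hasn't tried it, standalone rkt usage is a single command; pulling from a Docker registry skips rkt's signature verification, hence the flag (the image name is illustrative):

```shell
# Fetch and run a container image from a Docker registry with rkt.
# Docker registries don't carry rkt image signatures, hence the flag.
sudo rkt run --insecure-options=image docker://nginx
# Inspect what rkt knows about:
rkt list
rkt image list
```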
May I ask why you're pushing cri-o instead of the more mature containerd? All the technical explanations I got were hand-wavy and unconvincing.
The only explanation I see is that containerd is backed by Docker, a competitor of Red Hat, and that business rivalry overrode engineering common sense. Now we have to suffer yet another "war of the container runtimes", as if the original dockerd vs rkt wasn't painful enough.
* work with OCI-standard runtimes (runc, gvisor, runv, kata, clearcontainers, etc) in order to accomplish the above
* integrate well with Linux (use systemd for process management, bias towards tools that already exist in Linux or improve existing tools) to accomplish the above
I don’t think there’s a war. Use what you like. OpenShift specifically supports exact versions of cri-o and Docker 1.13 in order to provide the best Kubernetes experience and ensure clusters always work. There are pros and cons to all container runtimes, but it wasn’t a business decision, it was a technical decision for us.
cri-o is supported for production workloads under OpenShift 3.9 and RHEL 7.5, and we use it on our largest Online clusters. We see better memory use and more predictable latency for containers than with Docker and containerd for Kubernetes in general.
We don’t want anyone to feel like they can’t use other runtimes, but being Kubernetes-first has always been a goal we didn’t want to compromise on.
Your answer is 99% techno-marketing bullshit, but you still delivered the kernel of a credible explanation: systemd.
It makes perfect sense now. cri-o, being "daemonless", reinforces the central role of systemd in orchestrating system resources, whereas containerd, being a daemon, weakens it. I can see how Red Hat architects would not love the idea, given the importance of systemd in the RHEL/Openshift stack.
There’s no call to be so rude. The RH/CoreOS people here are acting in good faith, I think they’re being exceptionally helpful, and they don’t even need to be answering questions at all.
For anyone reading this thread who is at Red Hat Summit, we will have a Container Linux/Atomic BOF at 1pm today (May 10), in the BOF area on the 2nd floor.