Cloud VMs are a primary target for unikernels; however, as Russ mentions in one of the linked issues, there is actually quite a lot of other code you need to include in your system depending on what you are deploying to.
For instance, arm64 systems might need UEFI, and if you enable SEV you now need additional support for that, which is why I'd agree with Russ's stance on this.
Every time someone asks us to provide support for a new cloud instance type (like a Graviton 4 or Azure's ARM instances), we have to go in and sometimes write a ton of new code to get it working.
I assume you're referring to this [1]. I don't think it's necessary to bring all of that into the Go runtime itself, or ask the Go team to maintain it. It would be part of your application, similar to a board support package.
TamaGo already supports UEFI on x86, and that too would be part of the BSP for your application, not something that would need to be upstreamed to Go proper. The same goes for AMD SEV-SNP.
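For a sense of what "the BSP is part of your application" means in practice: a TamaGo program pulls in its hardware support simply by importing a board package. This is a minimal sketch only, and the import path below is from memory, so check the usbarmory/tamago repo for the actual target packages:

    // Minimal TamaGo-style program shape. The board package is a
    // side-effect import that wires up the runtime for this target
    // (memory map, console, timers), much like a board support package.
    // Import path is approximate -- verify against the tamago repo.
    package main

    import (
        _ "github.com/usbarmory/tamago/board/qemu/microvm"
    )

    func main() {
        // From here on it's ordinary Go, running on bare metal.
        println("hello from ring 0")
    }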
As for you (nanovms) supporting new instance types, wouldn't it be nice to do that work in Go? :)
Edit: I wonder how big the performance impact would be if you used TamaGo's virtio-net support instead of calling from Go into nanos.
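If someone wanted to actually measure that, a plain Go benchmark compiled against each network path and compared on ns/op and MB/s would be a reasonable start. Purely a hypothetical harness shape; the echo-server address is made up:

    // Hypothetical micro-benchmark for comparing network stacks:
    // build once against TamaGo's virtio-net and once against the
    // Go-into-nanos path, run the same loop, compare the numbers.
    package bench

    import (
        "io"
        "net"
        "testing"
    )

    func BenchmarkEcho(b *testing.B) {
        // Assumes an echo server is reachable from the guest; the
        // address here is illustrative only.
        conn, err := net.Dial("tcp", "10.0.2.2:7")
        if err != nil {
            b.Fatal(err)
        }
        defer conn.Close()

        buf := make([]byte, 1500) // one MTU-sized payload per round trip
        b.SetBytes(int64(len(buf)))
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            if _, err := conn.Write(buf); err != nil {
                b.Fatal(err)
            }
            if _, err := io.ReadFull(conn, buf); err != nil {
                b.Fatal(err)
            }
        }
    }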
When we speak of 'hard security boundaries', most people in this space are comparing to existing hardware-backed isolation such as virtual machines. There are many container escapes each year because the chunk of API that containers are required to cover is so large, but more importantly they don't have isolation at the CPU level (e.g. Intel VT-x instructions such as VMREAD, VMWRITE, VMLAUNCH, VMXOFF, VMXON).
This is what the entire public cloud is built on. You don't often read articles where someone talks about breaking VM isolation on AWS and spying on the other tenants on the server.
> This is what the entire public cloud is built on.
Well... the entire public cloud except Azure. They've been caught multiple times with vulnerabilities stemming from a lack of hardware-backed isolation between tenants.
How Azure isolates VMs is completely unrelated, because containers are not VMs. And if you meant to assert that Azure uses hardware-assisted isolation between tenants in general, that was not the case for Azurescape [1] or ChaosDB [2].
Unmanaged VMs created directly by customers still aren't relevant to this discussion. The whole point here is that everyone else uses some form of hardware-assisted isolation between tenants, even in managed services that vend containers or other higher-order compute primitives (i.e. Lambda, Cloud Functions, and hosted notebooks/shells).
Between first- and second-hand experience I can confidently say that, at a bare minimum, the majority of managed services at AWS, GCP, and even OCI use VMs to isolate tenant workloads. I'm not sure how OCI handles reviews, but at least in GCP and AWS, the security teams that review your service will assume that customers will break out of containers no matter how locked down the container capabilities/permissions/configs are.
Absolutely. Everyone who deploys unikernels to the public clouds does this. Some clouds are better fits than others. On AWS, for instance, you can build an image and deploy an EC2 instance in a matter of seconds.
Might've replied to the wrong comment: I don't think io_uring is bad, the comment doesn't contain the word 'bad', and I certainly don't think async IO is bad :)
> flatpak: I really like software distribution done with flatpak, packages are all running in their own namespace, they can't access all the file system, you can roll back to a previous version, and do some interesting stuff
As of today, Flatpak still has holes you can drive a truck through.
I still don't understand the slapdash approach the desktop Linux crowd took; Qubes is a much better approach. Such unprofessional software engineering pervades FLOSS under the worn and tired excuse of "but it's a hobby project", even when the project is a core dependency of international corporate, government, and military strategic systems. (I also don't understand the lack of appropriate support for critical projects, but it makes sense given decline, entitlement, corruption, and greed.)
I had to shout and scream at Docker early on about container image integrity, but that fell on deaf ears. Heck, Python even threw away GPG to roll its own sketchy setup, and Ruby doesn't even care about package integrity or supply-chain attacks.
Perhaps it's the "corporate, government and military strategic systems" doing it wrong, if they choose to rely on a hobbyist project? Not the hobbyist publishing their sources?
Could you link some related Flatpak issues where this happens? I've been under the impression that security under bwrap could be relatively acceptable and mostly the same as running containers (aside from all the portals).
I haven't been using Flatpak, but was recently thinking about it.
Flatpak for isolation is a joke: all the file duplication, none of the security. Not to mention that nothing depending on the camera, screen capture, etc. will ever get close to working.
Just accept there's no easy solution and use AppArmor + Firejail. It's an awful user experience, but at least the pain is only once per application; after that it's perfect. There should be a distro like Qubes, but where everything must have either a Firejail profile or a hardened systemd unit file (which is the worst-designed/documented thing in the universe, taking that title over from X11). That would be the ideal world.
To be honest, I appreciate flatpak for software distribution. AFAIK (correct me if I'm wrong) some degree of security is implemented through SELinux (I'm on Fedora).
Yes, you are technically right, but wrong in the sense that most packages either do not ship with an SELinux policy at all or ship with some just-add-whatever policy.
> I appreciate flatpak for software distribution
Flatpak is for distributors, not for end users or the benefit of the end system running the packages. Much has already been written about this. The lax state of SELinux policies is also a result of this focus.
PS: you will see my top comment get downvoted by the distributors who enjoy offloading the burden to end users, without offering any counter-argument.
Best argument I've heard against my points so far. But still, since the other solutions require the same amount of work for isolation, the only real benefit of flatpak is bundling soon-to-be-outdated dependencies.
> each function request to have its own hypervisor for protection.
They are talking about isolating serverless functions, not host program functions. In that sense, it is exactly what Firecracker does for Lambda functions.
Firecracker boots a runtime that has a full-blown operating system in it; Lambda just happens to call a known program with a known function. In that sense, sure, it provides similar functionality, but it's really quite different. That's not what Fly.io uses Firecracker for, for instance.
QEMU/Firecracker are in the same space; this is different.
These are most definitely in a different boat, as you embed the guest functions inside the host program and then register those functions. Taken from the README:
> The host can call functions implemented and exposed by the guest (known as guest functions).
> Once running, the guest can call functions implemented and exposed by the host (known as host functions).
This is more in the 'safe plugin' space. As with most things in this space, the best way to learn about them is to simply try them out.
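To make the call model from those README quotes concrete, here is a schematic Go sketch of the pattern. This is explicitly not the project's actual API; in a real implementation the host and guest sit on opposite sides of a hypervisor boundary and every call is marshalled across it:

    // Schematic of the host/guest function model: the host registers
    // functions the guest may call, and invokes functions the guest
    // exposes. All names here are illustrative.
    package main

    import "fmt"

    type Sandbox struct {
        hostFns  map[string]func(string) string // callable by the guest
        guestFns map[string]func(string) string // exposed by the guest
    }

    func (s *Sandbox) RegisterHostFn(name string, fn func(string) string) {
        s.hostFns[name] = fn
    }

    func (s *Sandbox) CallGuest(name, arg string) string {
        // In a real system this would trap into the micro-VM rather
        // than make a direct in-process call.
        return s.guestFns[name](arg)
    }

    func main() {
        sb := &Sandbox{
            hostFns:  map[string]func(string) string{},
            guestFns: map[string]func(string) string{},
        }
        sb.RegisterHostFn("log", func(msg string) string {
            fmt.Println("guest says:", msg)
            return ""
        })
        // A loaded guest binary would populate guestFns; stubbed here.
        sb.guestFns["greet"] = func(name string) string {
            return "hello " + name
        }
        fmt.Println(sb.CallGuest("greet", "host"))
    }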
The major tradeoff with Firecracker is a reduction in runtime performance in exchange for a quick boot time (if you actually need that; it obviously doesn't help if your app takes seconds to boot anyway). There are quite a lot of other tradeoffs too, like 'no GPU', because that needs some of the support they remove to make things boot fast. That's why projects like Cloud Hypervisor exist.
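For reference, booting a microVM from Go via the official firecracker-go-sdk looks roughly like the following. The API shape is from memory, so check the SDK docs before relying on it; note there is no bootloader or BIOS/UEFI step, which is exactly where the fast boot (and the missing device support) comes from:

    // Hedged sketch of booting a Firecracker microVM with
    // github.com/firecracker-microvm/firecracker-go-sdk.
    // Field and helper names are from memory -- verify against the SDK.
    package main

    import (
        "context"

        firecracker "github.com/firecracker-microvm/firecracker-go-sdk"
        "github.com/firecracker-microvm/firecracker-go-sdk/client/models"
    )

    func main() {
        ctx := context.Background()
        cfg := firecracker.Config{
            SocketPath:      "/tmp/fc.sock",
            KernelImagePath: "vmlinux", // uncompressed kernel, no bootloader
            Drives:          firecracker.NewDrivesBuilder("rootfs.ext4").Build(),
            MachineCfg: models.MachineConfiguration{
                VcpuCount:  firecracker.Int64(1),
                MemSizeMib: firecracker.Int64(128),
            },
        }
        m, err := firecracker.NewMachine(ctx, cfg)
        if err != nil {
            panic(err)
        }
        if err := m.Start(ctx); err != nil {
            panic(err)
        }
        m.Wait(ctx) // block until the microVM exits
    }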