Hacker News | new | past | comments | ask | show | jobs | submit | Scramblejams's comments

Is it the desktop environment or background stuff that’s getting worse for you? If the former: FWIW I was pleasantly surprised when I switched back to Kubuntu. KDE’s surprisingly resource efficient these days, actually seems pretty close to XFCE.

I'm not sure and don't have the patience to check.

I might go back to Debian. I'm only really using Ubuntu since that and RHEL are what we use at work.


Long-time Navy jet jock finds it "cringe" when people try to get a little break from the stresses of their life by attempting in a very small way to emulate what he achieved.

I get your point but come on man, ease up. At least remember that some of those DCS-playing wage slaves helped fund your adventures.


I did not!* Through many Pis serving many years and experiencing many power outages.

But I'm using CanaKit power supplies (which supply 5.1 volts; RPis are notoriously flaky if the voltage dips even a little below 5 V) and ATP industrial automotive-grade flash cards (not a big premium in absolute terms; I think 32 GB cards are $13 on Digikey).

* Okay okay, before I switched to those accessories I did have problems.


Escaping a container is apparently much easier than escaping a VM.

I think that threat is generally overblown in these discussions. Yes, container escape is less difficult than VM escape, but it still requires a major kernel 0-day to pull off; it is by no means easy to accomplish. Doubly so if you practice decent hygiene and don't run anything as root or do anything else dumb.

When was the last time we heard of a container escape actually happening?


Just because you haven't heard of it doesn't mean the risk isn't real.

It's probably better to make some kind of risk assessment and decide whether you're willing to accept this risk for your users / business, and what you can do to mitigate it. The truth is the risk is always there; it only gets smaller as you layer on isolation mechanisms until it becomes insignificant.

I think you meant “container escape is not as difficult as VM escape.” A malicious workload doesn’t need to be root inside the container; the attack surface is the shared Linux kernel.

Not allowing root in a container might keep an escaped process from landing with root access outside its namespace. But if an escape succeeds, the attacker can chain yet another privilege escalation mechanism to go from non-root to root.


To quote one of HN's resident infosec experts: Shared-kernel container escapes are found so often they're not even all that memorable.

More here: https://news.ycombinator.com/item?id=32319067


apparently...

Like it's also possible in a VM.

What about running unprivileged containers? You really need to open some doors to make escape easier!


Better not rely on unprivileged containers to save you. The problem is:

Breaking out of a VM requires a hypervisor vulnerability, which are rare.

Breaking out of a shared-kernel container requires a kernel syscall vulnerability, which are common. The syscall attack surface is huge, and much of it is exploitable even by unprivileged processes.

I posted this thread elsewhere here, but for more info: https://news.ycombinator.com/item?id=32319067
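One of the ways runtimes try to trim that syscall surface is seccomp filtering (Docker applies a default filter profile). A minimal sketch, assuming only Linux's /proc/<pid>/status format, of checking whether the current process is running under such a filter:

```python
# Sketch: report the seccomp mode of a process from its /proc status text.
# Mode 0 = no filtering (full syscall surface exposed), 1 = strict,
# 2 = filter (a BPF seccomp profile is in effect, as under default Docker).
def seccomp_mode(status_text: str) -> int:
    for line in status_text.splitlines():
        if line.startswith("Seccomp:"):
            return int(line.split()[1])
    return 0  # kernels without seccomp support omit the field

def current_seccomp_mode() -> int:
    with open("/proc/self/status") as f:
        return seccomp_mode(f.read())
```

A filter narrows the surface but doesn't eliminate it: every syscall the profile still allows remains a potential path to a kernel bug.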


Is Podman unescapable compared to Docker?

They both use the same fundamental isolation mechanisms, so no.

They both can be highly unescapable. The Podman community is smaller, but at this point it's more focused on solving technical problems than Docker, which is busy trying to increase subscription revenue. I've arrived at a configuration for running something in isolation that I'm happy with in Podman, and while I think I could do exactly the same thing in Docker, it seems simpler to me in Podman.

Apologies for repeating myself all over this part of the thread, but the vulnerabilities here are something that Podman and Docker can't really do anything about as long as they're sharing a kernel between containers.

The vulnerability is in kernel syscalls. More info here: https://news.ycombinator.com/item?id=32319067

If you're going to make containers hard to escape, you have to host them under a hypervisor that keeps them apart. Firecracker was invented for this. If Docker could be made unescapable on its own, AWS wouldn't need to run their container workloads under Firecracker.


This same, not especially informative content is being linked to again and again in this thread. If container escapes are so common, why has nobody linked to any of them rather than a comment saying "There are lots" from 3 years ago?

I did apologize, didn't I? :-)

Perspective is everything, I guess. You look at that three-year-old comment and think it's not particularly informative. I look at that comment and see an experienced infosec pro at Fly.io, which runs billions of container workloads and doesn't trust the cgroups+namespaces security boundary enough to rely on it, so they go to the trouble of running Firecracker instead. (There are other reasons they landed there, but the security angle's part of it.)

Anyway if you want some links, here are a few. If you want more, I'm sure you can find 'em.

CVE-2022-0492: https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups

CVE-2022-0847: https://www.datadoghq.com/blog/engineering/dirty-pipe-contai...

CVE-2023-2640: https://www.crowdstrike.com/en-us/blog/crowdstrike-discovers...

CVE-2024-21626: https://nvd.nist.gov/vuln/detail/cve-2024-21626

Some are covered off by good container deployment hygiene and reducing privilege, but from my POV it looks like the container devs are plugging their fingers in a barrel that keeps springing new leaks.

(To be fair, modern Docker's a lot better than it used to be. If you run your container unprivileged and don't give it extra capabilities and don't change syscall filters or MAC policies, you've closed off quite a bit of the attack surface, though far from all of it.)

But keep in mind that shared-kernel containers are only as secure as the kernel, and today's secure kernel syscall can turn insecure tomorrow as the kernel evolves. There are other solutions to that (look into gVisor and ask yourself why Google went to the trouble to make it -- and the answer is not "because Docker's security mechanisms are good enough"), but if you want peace of mind I believe it's better to sidestep the whole issue by using a hypervisor that's smaller and much more auditable than a whole Linux kernel shared across many containers.


I mean, Docker runs with root privileges for the most part. Yes, I know Docker can run rootless too, but Podman does it out of the box.

So if your Docker container is compromised and the attacker somehow breaks out of it, with default (root-run) Docker they might land with root privileges, whereas with default Podman the process runs as your user, and they'd need another zero-day or something to escalate to root, y'know?
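A minimal sketch (assuming Linux's /proc/<pid>/status format) of the kind of check you might run inside an entrypoint to confirm the workload really is unprivileged, i.e. non-root with an empty effective capability set:

```python
# Sketch: verify a process is running without root and without any
# effective Linux capabilities. CapEff is a hex bitmask in /proc status;
# 0 means no capabilities at all.
import os

def effective_caps(status_text: str) -> int:
    """Parse the CapEff line from /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("CapEff:"):
            return int(line.split()[1], 16)
    raise ValueError("CapEff not found")

def is_unprivileged() -> bool:
    with open("/proc/self/status") as f:
        caps = effective_caps(f.read())
    return os.geteuid() != 0 and caps == 0
```

Note this only tells you what the process holds now; it says nothing about what a kernel bug might let it acquire.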


Yep, and for the rest I've gotten a lot of mileage, when shipping server apps, by deploying on Debian or Ubuntu* and trying to limit my dependencies to those shipped by the distro (not snap). The distro security team worries about keeping my dependencies patched and I'm not forced to take new versions until I have to upgrade to the next OS version, which could be quite a long time.

It's a great way to keep lifecycle costs down and devops QoL up, especially for smaller shops.

*Insert favorite distro here that backports security fixes to stable package versions for a long period of time.


Pinning dependencies also means you're missing any security fixes that come in after your pinned versions. That's asking for trouble too, so you need a mechanism by which you become aware of these fixes and either backport them or upgrade to versions containing them.

Things like Dependabot or Renovate solve the problem of letting you know when security updates are available, letting you have your cake and eat it too.
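The core of what those tools automate is just comparing what you've pinned against the first version containing a fix. A toy sketch (the version-compare logic is deliberately naive, handling dotted integers only; real tools use proper version parsers):

```python
# Sketch: flag a pinned dependency as needing attention when an advisory
# says the fix landed in a later version. Illustrative only.
def needs_update(pinned: str, first_fixed: str) -> bool:
    """True if the pinned version predates the first fixed version."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(pinned) < as_tuple(first_fixed)
```

The hard part isn't the comparison, it's reliably learning that the advisory exists, which is exactly what those services (or a distro security team) provide.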

> so you need a mechanism by which you become aware of these fixes and either backport them or upgrade to versions containing them

RSS Feeds?


All code is fundamentally not ever secure.

This statement is one of those useless exercises in pedantry like when people say "well technically coffee is a drug too, so..."

Code with publicly known weaknesses poses vastly more danger than code with unknown weaknesses.

It's like telling sysadmins to not waste time installing security patches because there are likely still vulnerabilities in the application. Great way to get n-day'd into a ransomware payment.


Have you spent time reviewing the security patches for any nontrivial application recently? 90% of them are worthless, the 10% that are actually useful are pretty easy to spot. It's not as big of a deal as people would like to have you think.

That's why I run Windows 7. It's going to be insecure anyways so what's the big deal?

What startup was it?

The news articles didn't name it.

It did result in jail time. The linked document states that the testing lab supervisor was sentenced to 3 years. (Not sure how much of that time was actually served, apparently he was suffering from dementia.) More info: https://www.oregonlive.com/portland/2018/08/company_supervis...

Also a correction to GP: They were payload deployment failures, they didn't blow up on the pad. More here: https://arstechnica.com/science/2019/05/nasa-finally-conclud...


I run a handful of servers and I have a couple that pop ECC errors every year or three, so YMMV.


> it is hard to maintain two APIs.

This point doesn't get enough coverage. When I saw async coming into Python and C# (the two ecosystems I was watching most closely at the time), I found it depressing just how much work was going into it that could have been productively expended elsewhere if they'd gone with blocking calls to green threads instead.

To add insult to injury, when implementing async it seems inevitable that what's created is a bizarro-world API that mostly-mirrors-but-often-not-quite the synchronous API. The differences usually don't matter, until they do.

So not only does the project pay the cost of maintaining two APIs, the users keep paying the cost of dealing with subtle differences between them that'll probably never go away.
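The duplication looks innocuous in the small, which is part of the trap. A toy sketch of the pattern (names are illustrative, not from any real library): the same parsing logic written twice, once blocking and once awaitable, destined to drift apart over time.

```python
# Sketch of the "two mirrored APIs" problem: identical logic maintained
# on two code paths that differ only in how the I/O is awaited.
import json

def fetch_config(source) -> dict:
    """Blocking variant: source.read() returns a JSON string."""
    raw = source.read()
    return json.loads(raw)

async def fetch_config_async(source) -> dict:
    """Async variant: source.read() is a coroutine. Same logic, second copy."""
    raw = await source.read()
    return json.loads(raw)
```

Every bug fix and behavior tweak now has to land in both functions, and any place it lands in only one becomes one of those "differences that usually don't matter, until they do."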

> I do not prefer Python for reliable, high-performance HTTP servers

I don't use it much anymore, but Twisted Matrix was (is?) great at this. Felt like a superpower to, in the oughties, easily saturate a network interface with useful work in Python.


> I don't use it much anymore, but Twisted Matrix was (is?) great at this.

You must be an experienced developer to write maintainable code with Twisted; otherwise, as the codebase grows even a little, it quickly becomes a pile of spaghetti code.

