Hacker News | amluto's comments

I am massively sick of gaming-focused boards. I don't want my board to be "tough" or "mil-spec" or extra shiny, or to have fancy proprietary auto-overclocking. I want a reliable board that complies with all the specs it claims to support. Low idle power consumption would be nice, too.

This is obnoxiously difficult to shop for in the desktop/workstation space.


The PCIe lanes are the worst. You have x16 slots that run at x1, and you have to check the M.2 slots to make sure an x8 slot doesn't drop to x4 when you add storage. Wait, if I plug something into the Thunderbolt port, my 10G network card runs at half speed? Obviously these are real physical limitations arising from PCIe lane counts, but they make boards impossible to search for. Just painful.

My advice to anyone shopping for a motherboard is to read the manual on the manufacturer's site before deciding. The PCIe lane tradeoffs tend to be in the block diagram next to the contents page.

This is exactly why my comment goes over the heads of people who cry "just get a basic board". No: this is why the basic $100 boards don't cut it. You have to dive into the technical data and realize that the $100 board seems like a deal for a reason, and suddenly the $300+ category is your only option if you want a PC that doesn't run on fake specs.

They exist to partition capability so that enterprises can’t connect all of their peripherals and some ECC memory to get the same functionality for 1/10 the price. It’s not a physical limitation.

Obviously market tiering is part of it, and you can play tricks with the chipset and PCIe switches (which add cost), but a Ryzen board that advertises a PCIe 5.0 x16 GPU slot and a 5.0 x4 M.2 slot has only 4 lanes left to work with from the CPU (the CPUs have only 24 usable lanes). And while you can play with generations to fan out more lanes, it's still effectively ~16 GB/s. That budget has to cover networking, extra M.2 slots, USB, and any additional PCIe slots.
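To make the budget concrete, here is a rough worked example for a typical AM5-style board (an assumed layout for illustration, not any specific product; exact splits vary by board):

    CPU lanes usable:            24
    GPU slot (PCIe 5.0 x16):    -16
    M.2 slot (PCIe 5.0 x4):      -4
    Remaining:                    4 lanes
                                 (~16 GB/s at gen 5, ~8 GB/s at gen 4),
                                 shared by networking, USB, SATA, and
                                 every extra M.2 or PCIe slot

So everything past the GPU and the first SSD is fighting over one x4 link's worth of bandwidth.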

I don't mind having to work within those physical limits, but I do want to be able to search for boards that support N components, e.g. 1x 4.0 x8, 2x 3.0 x8, 4x 5.0 x4. But the best you can search by is the physical size of the PCIe slots, and then you dive into a spec sheet for each board, only to find that its six x16 slots get 1.0 x1 of bandwidth each.


Is this perhaps a reason to have a Users table that is separate from the table of data on how you authenticate that user?

Hahaha. You wish :-p

It's a pretty hard argument to work around: WebPKI certificates should go in the DNS, and also the largest DNS providers might at any moment decide not to validate DNSSEC anymore to get through an outage.

Yes, it's a crappy outcome, but endpoints can still choose to enforce this. Further, it's not a persuasive argument against more DNSSEC usage, since if there was more DNSSEC usage then resolvers would be more reluctant to disable it.

If there's going to be a single point of failure in front of your website, that single point of failure may as well be the only one rather than one of two, and it's probably important that people can't spoof responses.

Nobody had to hack it. A system at DENIC broke, and so Cloudflare turned off DNSSEC validation for all of their users accessing .de. If DNSSEC was actually important for the security model of those users, that would be a huge deal.

If DNSSEC is part of your security model, you want local validation, not reliance on a third-party resolver that you don't have a contract with.

Beyond that, DNS has the AD bit. If you need DNSSEC-secured data (for example, for a TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
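As a hedged illustration (assuming a validating resolver and a signed zone; the name is a placeholder), the difference shows up in the flags line of dig output:

    $ dig _443._tcp.example.de. TLSA
    ;; flags: qr rd ra ad; ...    <- "ad" present: the resolver validated

    $ dig _443._tcp.example.de. TLSA   # validation disabled upstream
    ;; flags: qr rd ra; ...       <- "ad" absent: TLSA consumers must bail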


Am I the only one who thinks that the AD bit is about as useful as the RFC 3514 evil bit?

We have this elaborate, complex, and extremely fragile cryptographic system behind DNSSEC and we distill it down to one single bit that we carry over unauthenticated links. Why?

At least WebPKI answers the right question: should I trust a particular claim to represent host.domain at a time within the following validity range? (Of course it defers determining the current time to some unspecified other mechanism.) DNSSEC tries to do everything, and it cannot survive an upstream error even within the downstream validity window. And yet, despite the fact that most of the spec leans heavily toward failing secure, the actual communication of validation status is entirely unprotected.


I can answer that! Because when DNSSEC was designed, it was believed that server-side compute could not keep up with per-request cryptography. DNSSEC contorts itself in several ways to maintain affordances for offline cryptography, which has been retconned into a security mechanism but was in reality just a bunch of non-cryptography-engineers making a terrible prediction about the feasibility of cryptography.

(Source: I'm one of the few weirdos on Earth who has read the mailing lists all the way back to when DNSSEC was a TIS project).


The intention is clearly that the client is a minimal implementation that will only forward a request to a resolver it trusts. The fact that Cloudflare and Google have convinced us all to use Cloudflare's and Google's resolvers is the problem.

DNSSEC and WebPKI both rely on chains of trust. If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.


> If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.

Even this incident proves that’s not the case.

If LetsEncrypt has a temporary availability issue, my users don't notice unless the outage lasts longer than the window I have to renew a cert.

If LetsEncrypt has a CA cert expire, I can get a cert from another provider.

If DENIC’s DNSSEC records break, either due to an operational error or an expiry issue, my .de site becomes inaccessible and my users see a DNS lookup failure. My only option is to hope resolvers do what Cloudflare did, or move my site to a new TLD and just pray that TLD never has the same problem.


The WebPKI works end-to-end, all the way to user devices; DNSSEC builds an explicit client/server trust model into the middle. The former is obviously superior to the latter.

Yes, it's also quite damaging to DNSSEC's trust model that the world has transitioned to centralized resolver caches. But the fundamental problem we're talking about with the AD bit wouldn't vanish if 8.8.8.8 and 1.1.1.1 did too; instead, users would be even more reliant on ISP nameservers, which are literally the least trustworthy pieces of infrastructure on the entire Internet.


This is a non sequitur.

It is indeed a bit sad that Cloudflare had to turn off DNSSEC completely. But I completely understand that they don't have a production-ready, tested path to override DNSSEC validation for only some domains.

Sorry! The status message was not clear. DNSSEC validation is temporarily disabled only for .de domains.

That's not much better!



Originally it said:

---

The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation on 1.1.1.1 resolver in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.

---

(and in case it changes again, now it says)

---

The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation for .de domains on 1.1.1.1 resolver (as per RFC 7646) in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.

See RFC 7646 for more details: https://datatracker.ietf.org/doc/html/rfc7646

---


The RFC 7646 thing here is the funniest possible addition. This is the greatest day.

It didn't originally say that. They added the clarification just a few minutes ago. The guidelines ask you not to ask people these kinds of questions, for what it's worth.

> QKD is an ongoing field of research where new issues are routinely being discovered.

This always bothers me a bit. QKD is on a very solid theoretical footing — if you have an authenticated classical communication channel and an actual quantum communication channel that sends actual qubits that are genuinely only in the basis you think they’re in, then it’s secure, full stop. It’s been proven for decades.

But this is hard (hint: a commercially useful quantum computer does not exist yet), so people fudge it with optical techniques that approximate, poorly, what is needed. And the result is not secure.


I wouldn't call it "solid theoretical footing". The rough sketch of QKD: BB84 key exchange requires an authenticated channel, and you typically get one from a Carter-Wegman MAC, which is information-theoretically secure but requires shared randomness that cannot be reused.
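To make the Carter-Wegman part concrete, here is a minimal sketch of a one-time polynomial MAC in Rust (my illustration, not a production scheme; the prime, chunking, and padding are simplifying assumptions):

    // One-time polynomial MAC over GF(p), p = 2^61 - 1.
    // (r, s) come from the shared random pool and must never be reused.
    const P: u128 = (1u128 << 61) - 1;

    fn one_time_mac(msg: &[u8], r: u128, s: u128) -> u64 {
        let mut acc: u128 = 0;
        for chunk in msg.chunks(7) {
            // Encode each 7-byte chunk as a field element, with a high
            // marker bit so the padding is injective.
            let mut block: u128 = 1 << (8 * chunk.len());
            for (i, &b) in chunk.iter().enumerate() {
                block |= (b as u128) << (8 * i);
            }
            // Horner evaluation of the message polynomial at r.
            acc = (acc + block) % P * (r % P) % P;
        }
        // One-time pad the hash with s: the Wegman-Carter step.
        ((acc + s) % P) as u64
    }

Forgery succeeds with probability roughly (message blocks)/p no matter how much compute the attacker has, which is the information-theoretic part; but using the same (r, s) for a second message lets an attacker solve for r, which is exactly the "cannot be reused" constraint.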

Successful protocol execution refreshes the randomness (you can net-gain from it), so you can communicate back and forth continuously when everything is working. A MitM who simulates a network failure, though, can expend some of your pre-shared randomness without it being refreshed. If they do this enough, they can exhaust your shared randomness and bring down the link until you exchange more randomness out of band; if you want to maintain information-theoretic security, that might involve e.g. a courier with a USB stick (or a carrier pigeon, who knows).

This is still "secure", but it is also a significant issue that any QKD (even "real" QKD) has and that classical cryptography does not, and it has always made me question the "solid" story for QKD.


QKD is interesting from the PoV of perfect secrecy. But AFAIK with e.g. BB84, the basis orientation communication (used to detect OTP delivery eavesdropping) is done with Wegman-Carter (unconditionally secure) authentication using... a pre-shared key.

So if you're only interested in computational security that is post-quantum, why not pre-share a symmetric key for some AEAD scheme? You'll get forward secrecy with a hash ratchet, and neither approach provides future secrecy in principle.
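A minimal sketch of that hash ratchet in Rust (assuming the sha2 crate; the labels and key sizes are arbitrary illustrative choices):

    use sha2::{Digest, Sha256};

    // Derive the next chain key plus a per-message key, then discard the
    // old chain key. Compromising the current state reveals nothing about
    // past message keys (forward secrecy), but every future key is
    // derivable from it (no future secrecy).
    fn ratchet(chain_key: &[u8; 32]) -> ([u8; 32], [u8; 32]) {
        let mut h = Sha256::new();
        h.update(chain_key);
        h.update(b"chain");
        let next: [u8; 32] = h.finalize().into();

        let mut h = Sha256::new();
        h.update(chain_key);
        h.update(b"message");
        let msg_key: [u8; 32] = h.finalize().into();
        (next, msg_key)
    }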

Neither solves the bootstrap problem, and QKD requires really, really expensive and complex infrastructure just to provide perfect secrecy, which we're fine without.


In my opinion, QKD (implemented correctly) performs key exchange, basically like Diffie-Hellman except that it's secure even against an adversary with unlimited computing power. If I had a quantum computer and a quantum network anyway, maybe I'd use it, but probably not with Wegman-Carter. If not, I wouldn't.

(BB84 is from 1984. The terminology was different, and the understanding of what mattered in cryptography was different.)


BB84 (and QKD overall) requires authenticated channels. You have to get those somewhere. You can get them from an information-theoretically secure MAC, but it has significant downsides. You can get them with computationally secure primitives, but then there's no point in using QKD in the first place. You cannot instantiate QKD securely without one of those two choices though.

> You can get them with computationally secure primitives, but then there's no point in using QKD in the first place.

I don’t entirely agree. You can build a computationally secure authenticated channel using symmetric primitives (e.g. hashes) that are very, very likely to survive for a long time. And you can build comparably secure asymmetric authentication schemes from the same primitives (hash-based signatures are a thing).
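For example, a bare-bones Lamport one-time signature in Rust (a sketch assuming the sha2 and rand crates; real hash-based schemes layer Merkle trees on top, a la XMSS/SPHINCS+, to sign more than once):

    use rand::RngCore;
    use sha2::{Digest, Sha256};

    // Private key: 256 pairs of random 32-byte strings.
    // Public key: the SHA-256 hash of each string.
    // Signing a 256-bit digest reveals one preimage per bit; security
    // rests only on preimage resistance, with no number-theoretic
    // structure to attack. Strictly one signature per key pair.
    struct LamportKey { sk: Vec<[[u8; 32]; 2]> }

    impl LamportKey {
        fn keygen(rng: &mut impl RngCore) -> (Self, Vec<[[u8; 32]; 2]>) {
            let mut sk = vec![[[0u8; 32]; 2]; 256];
            let mut pk = vec![[[0u8; 32]; 2]; 256];
            for i in 0..256 {
                for b in 0..2 {
                    rng.fill_bytes(&mut sk[i][b]);
                    pk[i][b] = Sha256::digest(&sk[i][b]).into();
                }
            }
            (LamportKey { sk }, pk)
        }

        fn sign(&self, digest: &[u8; 32]) -> Vec<[u8; 32]> {
            (0..256)
                .map(|i| self.sk[i][((digest[i / 8] >> (i % 8)) & 1) as usize])
                .collect()
        }
    }

Verification just hashes each revealed preimage and compares it against the public-key entry selected by the corresponding digest bit.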

But to build a classical key exchange system, you need more exotic primitives (Diffie-Hellman or public-key encryption / KEM schemes), and the primitives of this sort that are supposedly post-quantum secure have not been studied for nearly as long and have much more structure that might make them attackable.

Not to mention that attacking the authenticated channel in QKD cannot give a store-now-decrypt-later attack.


At that point you can just pre-share a key and use AES.

Nope -- that gives neither public-key capabilities nor forward secrecy.

There is a major issue with current AI tools: they effectively want to be granted access to everything their user has access to. The whole sandbox structure is wrong (although various people have vibe-coded assorted improvements).

Another issue I've noticed is that they're sometimes very resourceful. For example, when Codex can't directly edit a file due to sandboxing restrictions, rather than asking "hey, can I apply this diff to the file?", it asks for permission to run a `cat <<EOF` command that rewrites the whole file, which the UI doesn't surface properly (it just shows the first line...).

This sounds similar to what's described in the "Claude deleted my DB" post: it decided "I need to do X", then searched for whatever would let it do X, regardless of intended purpose.


I amused myself by removing codex-rs’s web search tool and then asking it to search for “foo”. It wrote a Python script to do the search.

If you want them to be able to write code and then run tests on that code, it can be a bit difficult to restrict access meaningfully....

Only for code that can’t be tested in an isolated environment, and designing code that can’t be tested in an isolated environment is generally a mistake for quite a few reasons.

If you pretend you have an intern with their own machine and run the AI agents on that machine, you have the same separation.

    async fn bar(input: u32) -> i32 {
        let blah = input > 10; // Preamble
        let result = foo(blah).await;
        result * 2 // Postamble
    }
> If only we were allowed to execute the code up to the first await point, then we could get rid of the Unresumed state. But "futures don't do anything unless polled" is guaranteed, so we can't change that.

Is that actually valid reasoning? If we know that foo(blah) doesn't do "anything" until polled, then why can't bar call foo, without polling it, before bar itself is polled? After all, there's no "anything" that will happen.


Because foo might call process::abort().
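Concretely (a hypothetical foo, not from the article): if bar ran eagerly up to its first await, merely constructing bar(5) would already abort, because this foo has its side effect at call time, before any polling:

    use std::future::Future;

    // A foo() that violates the "inert until polled" convention: the
    // abort happens when foo is *called* (i.e. during bar's pre-await
    // preamble), not when the returned future is first polled.
    #[allow(unreachable_code)]
    fn foo(blah: bool) -> impl Future<Output = i32> {
        std::process::abort();
        async move { if blah { 1 } else { 0 } }
    }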

I disagree. If the codegen / optimizer is trying to preserve the rule that futures don’t have side effects until polled, then it seems fine to assume that the future being wrapped also follows that rule.

So if I call a foo() that violates the rule, it seems odd to complain that the generated bar() also violates it.


The flow for removing cards is also a fantastic exercise in slowness.

How so?

Open the Wallet app (the double-tap-power view doesn't work). Ask to delete one card at a time (which requires two taps, with a short mandatory wait between them due to the animation). Tap again to confirm. Then wait an obnoxiously long time for the too-cute animation to complete. Then repeat for the next card, while wondering why there is no bulk-remove operation of any sort.

Yep! Extremely annoying when traveling with my family of four!

Sigh.

1. I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

2. The write-to-RO-page-cache primitive STILL WORKED! It’s just that the particular exploit used had no meaningful effect in the already-root-in-a-container context. If you think you are safe, you’re probably wrong. All you need to make a new exploit is an fd representing something that you aren’t supposed to be able to write. This likely includes CoW things where you are supposed to be able to write after CoW but you aren’t supposed to be able to write to the source.

So:

- Are you using these containers with a common image, or even a common layer in an image, to isolate dangerous workloads from each other? Oops: they can modify the image layers and corrupt each other. There goes any sort of cross-tenant isolation.

- What if you get an fd backed by the zero page and write to it? This can’t result in anything that the administrator would approve of.

- What if you ro-bind-mount something in? It’s not ro any more.


> I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance-enhancement feature entirely because there was a security bug in them that one time? It's a rather weird default to use. It's not like we mass-disable kernel modules everywhere every time someone discovers an EoP bug, is it? Did we blacklist OpenSSL's binaries after Heartbleed?

I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than workarounds in my opinion), but these defaults are going to be around ten years from now when copy.fail is a distant memory.


> We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time? It's a rather weird default to use.

The need for this feature/functionality in the first place is questioned by some:

> As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).

> The algorithm being used in this [specific] exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API. […]

* https://news.ycombinator.com/item?id=47952181#unv_47956312


> a security bug in them that one time?

More than one time.

> a cryptographic performance enhancement feature

It's very rarely used.

> Did we blacklist OpenSSL's binaries after Heartbleed?

No, but lots of companies have since migrated away. OpenSSL was harder to move away from because there weren't obvious drop-in replacements. Blocking a syscall that you never actually used is simple and effective.


In fairness, after Heartbleed there was quite a push to move away from OpenSSL, toward Google's BoringSSL, OpenBSD's LibreSSL, Mozilla's NSS, or GnuTLS. But the alternative here would be moving to a different kernel, like FreeBSD or OpenSolaris/illumos...

That's just moving to a kernel that has 1000x fewer eyes on it. Sure, it will have fewer exploits, but purely because nobody bothers to look when there are much juicier targets on Linux.

But I am disappointed that we still don't have a clear OpenSSL successor; there is nothing to be salvaged from that mess of a project.


"1000x fewer eyes" is true, but also: Linux, even in the kernel, has a long history of "move fast and break things".

Yes, the syscall API is (famously) stable, but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).


The driver attitude in Linux could be summed up as "we'd rather have the hardware driver working than absent".

> but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).

Or vote with your wallet and get a device that has a well-supported card.


Fewer eyes, but also fewer problems like "it's been fixed in the kernel but not in distro XYZ".

If you're using a container as a sandbox, you should use a default-deny policy and allow only the facilities the container actually requires. In practice, though, containers are used to package huge collections of software, most of which the container creator has no familiarity with and no way to determine the runtime dependencies of, beyond other package names. This is one of the reasons why containers, generally speaking, don't offer reliable security. If you can't or won't carefully design your components to sandbox themselves (e.g. by using seccomp and Landlock with policies tailored to the specific component), like Chrome or various OpenBSD daemons do, then it's far better to use VMs for isolation; and if you do design your components that way, containers are superfluous from a security perspective.

> We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time?

To my knowledge, not many things were using the in-kernel code anyways, the recommended way is to use userland tools...

It's optional for openssl, systemd apparently needs it, but deleting the module from one of my systems didn't cause any issues. /shrug


I haven't had it loaded on hundreds of servers, with kernel versions ranging from 5.10 to 6.14. Usage is just that low.

iiuc the AF_ALG interface only offers real performance wins if you have specialized hardware that the kernel can offload computations to. If you're not using that hardware, there's little reason not to do the crypto in userspace.

In fact, the authors specifically say on the very first line of their website that the copy.fail primitive can be used as a container escape. The entire premise of this article is flawed and irresponsible.

AIUI they haven't shown a container escape and are just claiming it so far. Or did I miss something?

Having write access on anything you can read should be enough if libraries or binaries are shared (read-only) between the host and container.

> if libraries or binaries are shared (read-only) between the host and container.

Yeah, exactly - that's a pretty big "if", and not how a lot of container automation does things. In particular you'd need to hit the base system, it's no help at all if some application files that the host does nothing with can be hit.


It's not hard to see ways to escape the container with a page-cache write primitive. I suspect the copy.fail team have held back on releasing a PoC because of the disruption it could cause.

It's not a cache-write primitive, though; it's a write-to-readable-mappings primitive. At least the way I understood it, you need to be able to get a (read) file descriptor to the target in order to throw it into the splice() syscall.

Now, there are some "funky" non-filesystem things that can be opened and are mmap-able/splice-able (some stuff in /proc/*, no idea what exactly), but it's not immediately obvious to me how this makes for a generic container escape.


I just contributed this [1] which does what you want for seccomp. Well, not by default, but profiling is now effective against this attack.

Oh, and this [2] just happened:

[1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209

[2] https://github.com/moby/moby/pull/52501


Blanket blocking socketcall() caused regressions for all 32-bit applications trying to make sockets. In theory, glibc stops using socketcall when running on kernel versions >= 4.3. In practice, Debian/Fedora/Ubuntu all set glibc's "expected kernel version" to 3.2, so socketcall() is still used by most 32-bit glibc binaries shipped.

https://salsa.debian.org/glibc-team/glibc/-/blob/sid/debian/...

https://src.fedoraproject.org/rpms/glibc/blob/rawhide/f/glib...


That’s… great. But who runs containerized 32-bit applications?

There is an addendum at the bottom where they admit the page corruption is still problematic even with rootless podman.

Although using this to justify their migration to micro-VMs is very strange to me. Sure, for this CVE it would have been better, but surely a future attack could hit a component shared across VMs but not containers? Are people really choosing technology based on the CVE of the week?


Containers were never a security boundary. VMs have better isolation, which is why people choose them for security. Containers are convenience and usually have better performance.

I see the "not a security boundary" thing repeated constantly, and while it makes sense if you think about it a little (containers share the underlying kernel, or at least some access to it), VMs are not magically different: they are better isolated, but VMs on the same host still share that host in common. A CVE next week that allows corruption of host state affecting every VM under a particular hypervisor will be no less damaging than this CVE is to containers.

> […] VMs are not magically different: they are better isolated, but VMs on the same host still share the host in common.

VMs are not different due to 'magic' but through hardware assist with things like Intel VT-x and AMD-V:

* https://en.wikipedia.org/wiki/X86_virtualization#Hardware-as...

* https://blog.lyc8503.net/en/post/hypervisor-explore/

* https://binarydebt.wordpress.com/2018/10/14/intel-virtualisa...


I disagree. VMs are better isolated to precisely the extent that (a) the attack surface is lower and (b) the implementation is simpler and thus less buggy.

Hardware virtualization has a strong effect on (b), but it's not at all a foregone conclusion that the effect is strictly in the direction of being more straightforward and thus more secure. And hardware features like fancy device passthrough encourage applications with a very, very large attack surface that has historically been full of holes.


You are obviously right that these are similar in principle: a VM isolation exploit would lead to the same exposure as container isolation exploits.

VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.

If you are arguing the latter is not true (and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case), it would help to be explicit about why you believe an exploit in that area is similarly likely.


I would say it's the fact that "not a security boundary" appears to be a pass/fail statement, whereas the reality is more like a security continuum, along which VMs are further than containers.

I believe that is tautologically true, and thus not a very useful framing.

Security is obviously a continuum (eg. you can even have a bug in your IPMI FW, and a network packet could break in without any interaction with the OS; or there could be a HW bug too), but there is a discrete "jump" between containers and VMs to the extent that it is useful to call one a security boundary and the other not. Just like a firewall is a security boundary even if it can have security bugs.

Whether this jump in exploitable surface area warrants the distinction is the point: many believe it does.


But you also cannot just hand-wave the difference away with "it's a continuum". I did not use absolutes; I said "VMs are _better_ for security", which is already implicitly about a continuum.

Containers are mostly used as a deployment/packaging model where typically VMs are used where stronger security is needed. This has been the established industry standard for a while. Look at major cloud providers for example.

AWS:

> Unless explicitly stated, AWS does not consider a container or primitives such as an ECS task or a Kubernetes pod to be a security boundary. A notable exception to this is ECS tasks running AWS Fargate, where the isolation boundary is a task. To account for this, we recommend that you use Fargate with ECS if your applications have strict isolation requirements.

> When you’re using the Fargate launch type, each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.

They also further recommend that for even higher security requirements you use different EC2 instances, which you can also run on dedicated hardware, etc. But the fact that you can increase isolation even further beyond VMs does not make containers the same as VMs.

https://aws.amazon.com/blogs/security/security-consideration...

GCP:

> There’s one myth worth clearing up: containers do not provide an impermeable security boundary, nor do they aim to. They provide some restrictions on access to shared resources on a host, but they don’t necessarily prevent a malicious attacker from circumventing these restrictions. Although both containers and VMs encapsulate an application, the container is a boundary for the application, but the VM is a boundary for the application and its resources, including resource allocation.

> If you're running an untrusted workload on Kubernetes Engine and need a strong security boundary, you should fall back on the isolation provided by the Google Cloud Platform project. For workloads sharing the same level of trust, you may get by with multi-tenancy, where a container is run on the same node as other containers or another node in the same cluster.

https://cloud.google.com/blog/products/gcp/exploring-contain...

> Applications that run in traditional Linux containers access system resources in the same way that regular (non-containerized) applications do: by making system calls directly to the host kernel.

> One approach to improve container isolation is to run each container in its own virtual machine (VM). This gives each container its own "machine," including kernel and virtualized devices, completely separate from the host. Even if there is a vulnerability in the guest, the hypervisor still isolates the host, as well as other applications/containers running on the host.

> gVisor is more lightweight than a VM while maintaining a similar level of isolation. The core of gVisor is a kernel that runs as a normal, unprivileged process that supports most Linux system calls. This kernel is written in Go, which was chosen for its memory- and type-safety. Just like within a VM, an application running in a gVisor sandbox gets its own kernel and set of virtualized devices, distinct from the host and other sandboxes.

https://cloud.google.com/blog/products/identity-security/ope...

These guys are experts when it comes to securing workloads on shared infra and while there are different levels of isolation using various techniques, the current industry practice is to not consider regular Linux containers a security boundary.


Containers are a security boundary, yes.

> A CVE next week that allows corruption of host state that affects eg every VM under a particular hypervisor will be no less damaging than this CVE is to containers

Yeah this almost never happens though whereas Linux privesc is 10x a day.


They may not provide the same isolation as VMs, but they clearly do limit some attacks. VMs do not provide the same isolation as physically separate hardware either.

I would have thought they provide better isolation than using multiple users which is the traditional security boundary.

It might depend on what you mean by a container? Are sandboxes such as Bubblewrap and Firejail containers?


> It might depend on what you mean by a container?

The article was about Podman and Linux namespaces


I understood the comment I replied to (and many similar comments that are regularly made on HN) as talking about containers in general.

Namespaces are used as a security mechanism.


Containers are a convenience boundary, and they increase the complexity of your risk assessments.

It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also get logged in to each container and do useful things? Your patches and all dependencies may be up to date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers, and capable of creating backup images or files that are suitable for restoring back to service?


Security scanners already support most container and VM image formats in widespread use.

Does this increase complexity? Yes, it does. Is it worth the cost? Depends on each individual case IMO.


> Security scanners already support most container and VM image formats in widespread use.

E.g.,

> Container Security stores and scans container images as the images are built, before production. It provides vulnerability and malware detection, along with continuous monitoring of container images. By integrating with the continuous integration and continuous deployment (CI/CD) systems that build container images, Container Security ensures every container reaching production is secure and compliant with enterprise policy.

* https://docs.tenable.com/enclave-security/container-security...


You need a tool like Anchore or PrismaCloud to scan the container images, then monitor them at runtime with PrismaCloud. Trellix can "scan", however most people turn it off or exclude container directories on the host because it can interfere with the running containers.

These sorts of vulns are extremely common on Linux. This one is making the rounds for various reasons, but it's a good justification for migrating away from containers if your threat model is concerned about it.

MicroVMs have much lower attack surface and you can even toss a container into one if you'd like.

Or use gvisor, which mitigates this vulnerability.


> I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

There is no reason it would be in the default policy. Otherwise you might as well block every socket and just multiplex everything over stdin/stdout.


>might as well block every socket and just multiplex everything on stdin/out

You may be on to something…


Then we can build an encoding to allow arbitrary syscalls via stdin/stdout, for convenience.

I'd have guessed that the default paranoia-first policy would be "drop everything; verify what you need" which would include AF_ALG.

share and enjoy!


How do you propose to implement that "drop everything except what you need" policy? Do your containers come with a detailed list of which OS services and syscalls are required? I think your idea has the same issue as what held back the adoption of SELinux: many developers think that having to enumerate their application's behaviour like that is an undue burden.

A compounding issue is that using AF_ALG doesn't require a separate syscall: it's just using SYS_socket with the first argument set to 38. Your container behaviour specification needs to be specific enough to not only enumerate allowed syscalls, but the allowed values for each syscall parameter.
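To illustrate the parameter-level filtering, here is a hand-rolled sketch (assuming Linux on x86_64 and the Rust libc crate; a real deployment would use libseccomp or the runtime's JSON profile, and would also check seccomp_data.arch): a classic-BPF seccomp filter that refuses socket(AF_ALG, ...) but allows everything else.

    use libc::{sock_filter, sock_fprog};

    fn bpf(code: u16, jt: u8, jf: u8, k: u32) -> sock_filter {
        sock_filter { code, jt, jf, k }
    }

    fn deny_af_alg() -> std::io::Result<()> {
        // struct seccomp_data: nr at offset 0, args[0] at offset 16.
        // Opcodes: 0x20 = LD|W|ABS, 0x15 = JMP|JEQ|K, 0x06 = RET|K.
        let filter = [
            bpf(0x20, 0, 0, 0),                       // A = syscall nr
            bpf(0x15, 0, 3, libc::SYS_socket as u32), // != socket -> allow
            bpf(0x20, 0, 0, 16),                      // A = args[0] (domain)
            bpf(0x15, 0, 1, libc::AF_ALG as u32),     // != AF_ALG -> allow
            bpf(0x06, 0, 0, 0x0005_0000 | libc::EAFNOSUPPORT as u32), // RET_ERRNO
            bpf(0x06, 0, 0, 0x7fff_0000),             // RET_ALLOW
        ];
        let prog = sock_fprog {
            len: filter.len() as u16,
            filter: filter.as_ptr() as *mut _,
        };
        unsafe {
            if libc::prctl(libc::PR_SET_NO_NEW_PRIVS, 1u64, 0u64, 0u64, 0u64) != 0
                || libc::prctl(libc::PR_SET_SECCOMP, libc::SECCOMP_MODE_FILTER as u64, &prog) != 0
            {
                return Err(std::io::Error::last_os_error());
            }
        }
        Ok(())
    }

Note that this matches the socket(2) syscall's first argument; as discussed elsewhere in the thread, 32-bit callers that go through socketcall(2) would sail right past it.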


There are those who are paranoid and those who are expedient. If you're truly paranoid, you spin up the thing you want to run, measure what it does, and open the holes to allow it to do what it needs to. It's tedious and sometimes error-prone, but in some environments it is necessary.

In the vast majority of the world, you set permissions to what's reasonable and trust that most of the time things will work out pretty well and have a plan for if you need to fix things on the fly.

I personally am not terribly paranoid, but I've worked places where we had to be pretty paranoid (shared hosting).


The reason is that it's very rarely used and has a history of issues.

I've not looked at Podman, but I believe moby/docker does now block this: https://github.com/moby/profiles/commit/7158007a83005b14a24f...

I’m not sure this has much to do with vision as opposed to fancy self-calibration software. At least a few years ago, Tesla cars would be in self-calibration mode for a while after delivery while they calibrated their cameras. I think the idea is that it’s cheaper to figure out in software where everything is than to calibrate the camera mounts and lenses at the factory.

I see no reason that LiDAR couldn’t participate in a similar algorithm.

A bigger issue would be knowing the shape of the car to avoid clipping an obstacle.


It probably could, but I imagine a LIDAR system would need a similar (large) amount of training data to enable effective self-calibration across a wide variety of situations.

At some point, with enough sensor suites, we might be able to generalize better and have effective lower(?)-shot training for self-calibration of sensor suites.


Isn’t the model needed rather similar to what’s needed for sensor fusion in general? If you can extract features from each sensor that you expect to match to features from a different sensor, then you can collect a bunch of samples of this sort of data and then use it to fit the transformation between one sensor’s world space and another sensor’s world space.
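As a toy version of that fitting step (my sketch, in 2D for brevity; real extrinsic calibration solves the 3D six-degree-of-freedom version, typically via SVD/Kabsch or bundle adjustment), the least-squares rotation and translation between matched feature positions has a closed form:

    #[derive(Clone, Copy)]
    struct Pt { x: f64, y: f64 }

    /// Fit the rigid transform (rotation theta, translation t) mapping
    /// sensor A's feature positions onto sensor B's, in the least-squares
    /// sense, from matched pairs (a[i] <-> b[i]).
    fn fit_rigid_2d(a: &[Pt], b: &[Pt]) -> (f64, Pt) {
        let n = a.len() as f64;
        let ca = Pt { x: a.iter().map(|p| p.x).sum::<f64>() / n,
                      y: a.iter().map(|p| p.y).sum::<f64>() / n };
        let cb = Pt { x: b.iter().map(|p| p.x).sum::<f64>() / n,
                      y: b.iter().map(|p| p.y).sum::<f64>() / n };
        // Optimal angle from the cross/dot products of centered pairs
        // (the 2D Procrustes solution).
        let (mut cross, mut dot) = (0.0, 0.0);
        for (pa, pb) in a.iter().zip(b) {
            let (ax, ay) = (pa.x - ca.x, pa.y - ca.y);
            let (bx, by) = (pb.x - cb.x, pb.y - cb.y);
            cross += ax * by - ay * bx;
            dot += ax * bx + ay * by;
        }
        let theta = cross.atan2(dot);
        // Translation carries A's centroid onto B's after rotation.
        let (s, c) = theta.sin_cos();
        let t = Pt { x: cb.x - (c * ca.x - s * ca.y),
                     y: cb.y - (s * ca.x + c * ca.y) };
        (theta, t)
    }

Collect enough matched pairs across operating conditions and refitting (theta, t) is exactly the self-calibration step; nothing about it cares whether the second sensor is another camera or a LiDAR.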
