Selected post titles:
- D-Bus, ConsoleKit, PolicyKit: turds upon turds upon turds
- The Systemd (sic) debacle
- D-Bus is completely unnecessary
Choice quote:
"yes, they want me to call it “systemd”, but I’m averse, for some reason, to proper nouns beginning with lower-case letters; something to do with having had a moderately good education, I guess"
This is reddit-level drivel. Please refrain from posting it.
An LWN article on bus1 will come out, eventually; post that instead.
It's strange to me how often that particular quote, which was intended to be more humorous than anything (ok, I understand if most people don't think it's funny; that doesn't mean it wasn't meant to be), is used as some kind of evidence that my ideas are wrong on a technical level, without any actual technical analysis ever being offered.
As for being biased: well, I've criticised SystemD and D-Bus and a whole slew of other pieces of software, and indeed that's more or less what the blog was originally for, but the fact that I criticise does not mean the criticism is unwarranted. Where's your technical argument?
Sadly, the author failed to read up on the history of kdbus and AF_BUS.
Just using UNIX domain sockets is not the solution. This is more or less what dbus currently does and has major issues with memory accounting and trust.
There are various other properties of Bus1 that make it better than all other alternatives, but obviously relying on it makes your program non-portable. IPC is a mess on POSIX systems.
> Sadly, the author failed to read up on the history of kdbus and AF_BUS
On the contrary, I've been following their development from the beginning, with some interest.
> This is more or less what dbus currently does
Actually D-Bus multiplexes all messages, so they all pass through the message bus daemon. What I'm proposing is clearly different.
> and has major issues with memory accounting
This is a valid technical point, though I'm not sure it's ever been a problem in practice and I'm not convinced it needed a whole new IPC mechanism to solve.
Judging by this description, Bus1 sounds very much like an object capability system. There is a lot of theoretical and practical evidence that this is a good programming model, and as far as I know it does not currently exist in Linux. Maybe you can build a proper object capability system on top of sockets. I don't know enough about unix domain sockets to say whether it is really possible to establish the right security guarantees this way, but I'm reasonably sure that whoever is behind Bus1 has investigated this before proposing a new kernel extension. Case in point, the blog post describes a flawed implementation.
What really rubs me the wrong way, though, is all the negativity expressed in this blog post. Also some of the suggestions are downright dangerous. For instance:
> (except that individual connections to a socket can be “revoked” i.e. closed, which is surely an improvement if anything)
This is not an improvement, this is another failure state that you add to every single connection, and which every single client has to handle. You can already implement this feature via proxy objects in a pure object capability system, and in this case you have to be explicit about adding another error state.
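For illustration, here is a minimal sketch (hypothetical names, plain C) of the proxy pattern being described: the client only ever holds the proxy, the owner keeps the revoke handle, and the extra error state exists only where someone explicitly opted into it.

```c
/* Minimal sketch of revocation via a proxy object (hypothetical names).
 * The client holds a reference to the proxy; the owner keeps the revoke
 * handle. After proxy_revoke(), calls through the proxy fail with an
 * explicit, opted-in error, rather than every connection in the system
 * carrying an extra failure state. */
#include <stdbool.h>

struct service {
    int (*request)(struct service *self, int arg);
};

struct proxy {
    struct service iface;      /* what the client sees */
    struct service *target;    /* the real capability */
    bool revoked;
};

static int proxy_request(struct service *self, int arg)
{
    struct proxy *p = (struct proxy *)self;  /* iface is the first member */
    if (p->revoked)
        return -1;             /* explicit failure, only where opted in */
    return p->target->request(p->target, arg);
}

void proxy_init(struct proxy *p, struct service *target)
{
    p->iface.request = proxy_request;
    p->target = target;
    p->revoked = false;
}

void proxy_revoke(struct proxy *p)
{
    p->revoked = true;         /* the owner severs the client's access */
}
```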
> I think this is meant to read as “no, it’s not the D-Bus daemon functionality being subsumed in the kernel”.
I read the preceding explanation as saying "here is a security guarantee that clients can rely on". What is the alternative? Having unix domain sockets sprinkled somewhere in the file system where everyone with the right permissions can access them? Great, now you have to write your nodes defensively again, to guard against file system privilege escalation, or just badly administered systems... The Bus1 documentation is describing a feature, and one which is (afaik) not present in Linux right now.
> Does global ordering of messages to different services ever actually matter?
It certainly makes proofs easier, so my intuition is to say "yes". Maybe the ordering doesn't have to be globally consistent, but better safe than sorry.
> Also some of the suggestions are downright dangerous
You give only one example, which I've rebutted below. I disagree with the assertion as a whole.
> this is another failure state that you add to every single connection, and which every single client has to handle
They already have to handle the node disappearing. The only difference is that you can sever the connection from particular handles via the process containing the node/socket, if it chooses to. Otherwise, there is no difference.
> I read the preceding explanation as saying "here is a security guarantee that clients can rely on"
What particular guarantee is that? The whole point of the post was that you have the same guarantees, from a security perspective, with file descriptors.
> What is the alternative? Having unix domain sockets sprinkled somewhere in the file system where everyone with the right permissions can access them?
I think you've misunderstood me. Unix domain sockets do NOT have to be associated with a path in the file system (see socketpair function). You can have some kind of arbiter process (aka the D-Bus daemon) which hands out socket connections, just as you need (with Bus1) some way of handing out handles. This is exactly how D-Bus works right now.
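For concreteness, a rough sketch of what that hand-off looks like at the system-call level (error handling omitted, names made up): the arbiter creates an anonymous pair with socketpair() and passes one end to the client over an existing connection using SCM_RIGHTS, so nothing ever appears in the file system.

```c
/* Sketch: an arbiter creates an anonymous socket pair and sends one end
 * to a client over an already-established unix socket 'client_fd'.
 * Error handling omitted for brevity. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

void hand_out_connection(int client_fd, int *service_end)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   /* anonymous: no path in the fs */

    struct msghdr msg = {0};
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];     /* ancillary data buffer */
        struct cmsghdr align;                  /* force correct alignment */
    } u;

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;              /* pass a file descriptor */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &sv[1], sizeof(int));

    sendmsg(client_fd, &msg, 0);               /* client receives its own copy */
    close(sv[1]);                              /* our copy is no longer needed */

    *service_end = sv[0];                      /* hand the other end to the service */
}
```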
> Maybe the ordering doesn't have to be globally consistent, but better safe than sorry.
So you build a whole new, Linux-only, IPC mechanism because it might be important to have global ordering, even though no-one's been able to identify a use case in which it matters?
Let me clarify: this is exactly how D-Bus conceptually works right now. In practice of course D-Bus acts as a multiplexor for the communications also. It is, however, possible to send one half of a socket pair via a D-Bus message.
> They already have to handle the node disappearing. The only difference is that you can sever the connection from particular handles via the process containing the node/socket, if it chooses to. Otherwise, there is no difference.
Logically, these are two different events. Let's say connections are sockets, which any party on that socket can close. Then there are now two failure states which behave differently. Let's say the node disappears. Presumably this only happens if the service itself shuts down for some reason (peripheral unplugged, network connection lost, etc). In any case, this is not an error your client can recover from. Or somebody closes your socket. There are several ways in which this can happen, most of them undesirable.
You can, for instance, pass your socket to another process and it can (maliciously) close it. So you can't really pass your own sockets around, you need some protocol to obtain a new socket. Will this new socket be closed automatically if your parent closes the socket? What's the right thing to do here?
Second, this is not necessarily an error you cannot recover from, since you might hold multiple connections to the same service (through different peers). In the Bus1 model this ... is just not necessary, but in the socket model it can happen.
More importantly, this is an error which can happen purely on the client side. You don't need to compromise the node itself, you just need to compromise some peer between the client you wish to attack and the node. This is a larger attack surface and potentially allows you to force a reconnect and other bad things.
> What particular guarantee is that? The whole point of the post was that you have the same guarantees, from a security perspective, with file descriptors.
The guarantee you have in an object capability system is that you can only communicate with a node if you have obtained its address (the capability to communicate) beforehand. If you spawn a new process it has a priori no capabilities except the ones you explicitly grant it. This makes sandboxing the default and makes it very easy to enforce security guarantees and to reason about your system as a whole.
Building this on top of file descriptors and unix domain sockets is probably possible, but only if you add authentication code to every single client and run some kind of "capability-transfer" protocol in each client (which involves a handshake with the node, etc). I think this is essentially what Sandstorm does, but it involves rewriting the source code of each application they want to host.
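As a small aside, the "explicitly grant" step is easy enough to sketch in file-descriptor terms; it's the capability-transfer handshake described above that needs the extra per-client code. A minimal, hypothetical sketch (error handling omitted, helper name made up):

```c
/* Sketch: spawning a child that starts with only the capability
 * (file descriptor) it is explicitly granted. Assuming every other
 * descriptor in the parent was opened with O_CLOEXEC, the child
 * inherits nothing beyond stdio and the one granted descriptor. */
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

pid_t spawn_with_capability(const char *path, int granted_fd)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Place the granted descriptor at a well-known number and make
         * sure close-on-exec is clear (dup2 clears it, but granted_fd
         * might already be 3). Everything else marked O_CLOEXEC vanishes
         * on exec. */
        dup2(granted_fd, 3);
        fcntl(3, F_SETFD, 0);
        execl(path, path, (char *)NULL);
        _exit(127);            /* exec failed */
    }
    return pid;
}
```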
> So you build a whole new, Linux-only, IPC mechanism because it might be important to have global ordering, even though no-one's been able to identify a use case in which it matters?
No, what I mean is that if you are designing a new IPC mechanism, then this is a sensible design decision. E.g. in the design of distributed systems you don't have any consistent ordering between events, and this causes a lot of pain in the design of distributed algorithms (and gives you some impossibility results). Why would you want an IPC system with the same problems?
> Or somebody closes your socket. There are several ways in which this can happen, most of them undesirable
Please tell me how a third party can close an anonymous socket connection between two processes (because if there really is a way to do that, it's a huge problem in a bunch of existing programs).
> You can, for instance, pass your socket to another process and it can (maliciously) close it
That won't have any effect; if you pass a socket to another process, it's a separate file descriptor in the other process. It can't close your file descriptor, and it can't close the socket since there remains an open descriptor.
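A short sketch of why this holds: each process's descriptor is an independent reference to the same open socket, and close() only drops that one reference. (The child here gets its copies by inheriting them across fork(); descriptors received via SCM_RIGHTS behave the same way.)

```c
/* Sketch: closing another process's copy of a descriptor does not
 * close yours. The child closes its copy of sv[1]; the parent's copy
 * keeps the socket alive and the write still succeeds. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    char buf[8];

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {
        close(sv[1]);          /* "maliciously" close the received copy */
        _exit(0);
    }
    wait(NULL);                /* child has exited; its copies are gone */

    write(sv[1], "hi", 2);     /* parent's descriptor is unaffected */
    read(sv[0], buf, sizeof(buf));
    printf("still connected: %.2s\n", buf);
    return 0;
}
```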
> So you can't really pass your own sockets around, you need some protocol to obtain a new socket
Right; this is discussed in the blog post.
> Will this new socket be closed automatically if your parent closes the socket?
No. It's an independent connection. There's no way AFAIK to have two connections to the same socket where closing one will automatically close the other, even if you did want to do that, without kernel-level changes.
> The guarantee you have in an object capability system is that you can only communicate with a node if you have obtained its address (the capability to communicate) beforehand
This is also guaranteed by the file descriptor = handle model. If you don't have a file descriptor representing a channel to some particular service, you can't magically create one, not by guessing addresses or any other means.
> and this causes a lot of pain in the design of distributed algorithms
Hmm, usually distributed algorithms are used for distributed nodes - to which Bus1 doesn't apply; it's for local communication only (or did I miss something important)?
> More importantly, this is an error which can happen purely on the client side. You don't need to compromise the node itself, you just need to compromise some peer between the client you wish to attack and the node. This is a larger attack surface and potentially allows you to force a reconnect and other bad things.
I don't understand what you mean by this. There aren't any peers between the client and the node. The node (socket) is owned by some process and the client has a connection to that socket.
So let's assume that what you say is correct (and I have no reason to doubt it), for example, that what is required is an object capability system and that the Linux kernel doesn't adequately provide this.
Can't a generic object capability system then be devised rather than one just tied to D-Bus or systemd or whatever else will be thought of? And hopefully a system that provides local capability and easily allows distributed capability.
Because once we have this, it could possibly be used by the likes of etcd / kubernetes or similar.
I'm kinda surprised that they don't just build a service on top of generic netlink. For those of you who don't know, netlink has been a generic API for kernel<->userspace communication for some time. The kernel added the ability for modules to register their own generic netlink families, which gives modules another route for communicating with their userspace clients.
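As a rough illustration of the shape of that (field names and registration calls have shifted between kernel versions, and the names below are made up, so treat this as a sketch rather than copy-paste code), a module might register a family like this:

```c
/* Sketch: a kernel module registering its own generic netlink family.
 * Userspace resolves the family by name via the nlctrl family and can
 * then send DEMO_CMD_ECHO requests to it. */
#include <linux/module.h>
#include <net/genetlink.h>

#define DEMO_CMD_ECHO 1

static int demo_echo_doit(struct sk_buff *skb, struct genl_info *info)
{
    pr_info("demo_genl: received an echo request\n");
    return 0;
}

static const struct genl_ops demo_ops[] = {
    { .cmd = DEMO_CMD_ECHO, .doit = demo_echo_doit },
};

static struct genl_family demo_family = {
    .name    = "demo_genl",      /* userspace looks the family up by name */
    .version = 1,
    .maxattr = 0,
    .module  = THIS_MODULE,
    .ops     = demo_ops,
    .n_ops   = ARRAY_SIZE(demo_ops),
};

static int __init demo_init(void)
{
    return genl_register_family(&demo_family);
}

static void __exit demo_exit(void)
{
    genl_unregister_family(&demo_family);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```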