zare_st's comments

But the thing that carries TCP is IP. That's why TCP can work seamlessly: it reuses identification from the layer below. Suppose I bind a server to an ID rather than an IP:port; the operating system running it must still know how to reach that ID over IP, so there will be a correlation map somewhere, and that map needs to be synchronized between all peers that wish to host the roaming server.

Otherwise you're just swapping the 16-bit port value for an arbitrary 32-bit identifier.


If TCP didn't use L3 source and destination addresses to distinguish connections, it could be more easily taught to deal with:

* Clients roaming between L3 addresses

* Clients/servers with multiple L3 addresses


But... it doesn't? TCP has no notion of IP address in the protocol, only the port. TCP with changing IPs can work e.g. on top of an ip-ip tunnel with applications not being aware at all.


> TCP has no notion of IP address in the protocol,

RFC793:

    To allow for many processes within a single Host to use TCP
    communication facilities simultaneously, the TCP provides a set of
    addresses or ports within each host.  **Concatenated with the network
    and host addresses from the internet communication layer,** this forms
    a socket.  A pair of sockets uniquely identifies each connection.
    That is, a socket may be simultaneously used in multiple
    connections.

TCP uses the combination of L3 source address, L3 destination address, L4 destination port, L4 source port to identify what connection a frame is on. We're discussing how using that L3 information isn't necessarily ideal for today's world.

> TCP with changing IPs can work e.g. on top of an ip-ip tunnel with applications not being aware at all.

That's just because the IPs have not changed from its point of view: it receives the same frame with the same destination/source IP addresses the entire time.

Part of the reason why we need things like IP-IP tunnels is because L4 connections can't "move" with TCP. In scenarios where we're using tunneling for this, we're accepting worse performance than if we could just directly send TCP to its true destination and have it processed.


So you want to implement persistent connections on L4 without implementing persistent addresses on L3 first?

This doesn't make much sense to me. The hardest problem here is not assigning uuids to pipes, it's the routing/mapping of the "true destination".

- If you manage to solve it on L3, ip-ip tunnels or not, you have it: TCP works unmodified, and so does UDP and everything else, including QUIC and HTTP/3.

- If you didn't solve it, then support for persistent connections in TCP is useless.

In other words, I don't see what a "transmission control protocol" has to do with it. It's very reasonable to assume that addresses are already figured out when designing transmission control, and that's exactly what TCP did.


> - If you didn't solve it, then support for persistent connections in TCP is useless.

SCTP and multipath TCP (which is what we're talking about) already do pretty much this. Assuming that endpoints to a stream connection have single, unchanging network addresses isn't a reasonable assumption anymore. But we're stuck with the assumption that hosts won't move in one of our most common protocols.

https://en.wikipedia.org/wiki/Multipath_TCP#/media/File:Diff...

https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...

In the OSI model, you got similar functionality up at layer 5, but TCP only handles the connection/disconnection aspect of the session layer. In the internet world, we have a bunch of haphazard sets of retries, session balancing, multihoming and reconnecting behavior that are protocol specific (and completely missing from many well-used protocols) kludged on top. (Actually arguably MP-TCP is a session layer on top of TCP).

The only way you solve this on layer 3 is to build some kind of messy overlay network, because addresses have no real relation to where things are anymore. And we know that overlay networks are suboptimal and inefficient. Solving it at layer 4 doesn't have to be (but it's too late for that now).


The protocol would have to handle binding the network to the transport. MPTCP and SCTP both handle that via registering and un-register network layer endpoints. This parallel universe TCP would be the same in that regard.

(I did say I was oversimplifying...)


Yeah, it's a shortsighted plan. How would I set up rules on a firewall if I don't know how to distinguish the connections?


There are two separate ideas here:

* Where to send a frame to get to the other side of the connection

* Whose connection this is.

TCP combined the two, because we didn't have mobile clients or a lot of multihomed systems that would benefit from distinguishing them. Also, every octet in the header counted.

In practice, this means we have to keep building a lot of infrastructure on top of TCP (or parallel to it, in datagram protocols) to handle retries and splitting flows well. In turn, these things are completely opaque to the network and it's difficult to write rules about them.

Whereas if we had different packet fields for "where am I sending this packet right now" and "whose flow does this belong to"? we could write better firewall rules, have less infrastructure built on top of TCP, and have better typical application performance.


I don't think this is wise even if you had written guarantees from their state via the business, because of how your own state might look at it.

It's a war, things have changed.


Absolutely no.

If a line of code belongs in a project with one file and a main() function, presuming the impact of that line on overall code paths is trivial.

If that line of code belongs in a library procedure used by a million-LOC project, no such presumption can be made without knowing the project's internals and tooling.

Rewriting entire systems or frameworks because one thinks that it's hard to implement a certain class of features is almost always a recipe for disaster.


For specific cases, asm is faster than c.

I'll show myself out


You win. I truly appreciated the joke; two days on and I still couldn't think of something on par.


Valgrind will show these as "still reachable"


Never. Individuals with that kind of mutation have to be born per random chance, and then selected via sexual preference.


Answer to the question: yes, and more. Jails+rctl (available since 2012) is not cgroups, it's cgroups+SELinux+AppArmor. A vanilla Linux container is not a security barrier; a vanilla FreeBSD jail is.

In practice this means more seamless 'isolation' in the Linux case, but that isolation is weak. Which corresponds perfectly to FreeBSD targeting server use 99% of the time, while Linux targets the desktop too.

About your conclusion, I don't think that's based on anything, so please do write what facts you base your assessment on: that FreeBSD has no resource limiting and isolation features, that it would be a 'separate implementation' (FreeBSD always tends to upgrade tools and interfaces rather than replace them), and that there is not enough interest from anyone to implement it (most major FreeBSD features are actually paid for by FreeBSD sponsors).


I'm not claiming FreeBSD doesn't have resource limits. I'm stating the fact that prior to systemd, issuing a service restart command under any Unix at all was prone to inheriting any rlimits your shell session happened to have, which could in turn lead to unforeseen consequences like sudden memory/file-handle failures or sluggish I/O, depending on how shell sessions are set up.

Implementing a service manager that can understand and interpret systemd unit files for FreeBSD would require it to be based on completely different kernel mechanisms than Linux, feature parity aside. I can easily see that people with enough skill won’t see the need to be bothered to write such a piece of software, and those who don’t will just shrug and stay on Linux.


Why would systemd unit files matter to FreeBSD? FreeBSD has its own rc system and its own ports, packages, and maintainers. If and when some major software starts lagging behind upstream because of runtime issues, then it's time to discuss alternatives. Right now such things do not happen.

FreeBSD had to mock several parts of systemd in order to port newer GNOMEs that depend on systemd. rc still runs this software, but unfortunately the software depends on systemd sockets, which is an absurd design choice, but here we are. Again, mocking the absurd interprocess part is the way to go, as opposed to supporting the entire specification and API of systemd.

"Any Unix" doesn't cut it; it's too general, too broad. You did ask in a dual sense in your original question, but then you specifically gave FreeBSD as an example, so I'm answering for FreeBSD specifically.

The problem I have with the opinion in your post is the implication about two groups of people: people with skills who should supposedly write a systemd clone for FreeBSD or other Unices, and people without skills who are supposedly waiting for the first group to do the work so they can enjoy systemd on *BSD or wherever. Let me be blunt here: I can assure you this is not the case. Systemd is not a factor for anyone in the BSD world; it's a factor only for people who would like to migrate from Linux and retain their usual workflow and muscle memory.


> I’m stating the fact that prior to systemd, issuing a service restart command under any Unix at all was prone to inheriting any rlimits your shell session happened to have...

IFF the service manager you used was either incompetently written or had this behavior as an intentional feature. This is a problem (like many other problems that the Systemd Cabal addressed) that was known and solved by many other folks substantially before systemd came along.

Like, think about it for a second. You're saying that systemd has a mechanism to avoid this problem. Systemd uses tools that have been available in Linux for a long time. (I think the newest tool it uses is control groups, which are like seventeen years old.) Are you really saying that none of the big players in the space (nor any of the sysadmins that got REALLY pissed off at the bullshit unreliable behavior that you describe) ever hit upon the technique that systemd uses to avoid this problem?

As I (and many others) keep saying, systemd did solve many problems... but it was also not the first to solve many of those problems.

> Implementing a service manager that can understand and interpret systemd unit files for FreeBSD...

Unit files kinda suck, and so much of the behavior of the various options inside them is underspecified. (How do I know? I've been badly burned in production repeatedly by underspecified behavior shifting out from under me as the software implementation changed.)

There's no good reason to use Unit files outside of systemd... especially when services with non-trivial startup and/or shutdown procedures have to have systemd run helper programs because it's just impossible for systemd to handle every conceivable startup and/or shutdown requirement without embedding a general-purpose scripting language inside of systemd (which would just be fucking stupid).


Yes


An organization proven to use and abuse the security-through-obscurity model.

