You don't need to make up your own for this experiment. There's already a pretty old protocol that's far superior to TCP, but failed to get adoption because of network hardware dropping everything other than TCP and UDP. It's called SCTP.
SCTP is fascinating because it's one of the backbone technologies that makes communication possible for most people on the planet (as the mobile network stack pretty much relies on it), yet it's effectively unsupported on almost every consumer device. If you want to use it, you're probably going to have to ship a userland implementation that needs privileges to open a raw network socket, because kernel implementations are rare (and often slow).
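Linux is the notable exception: it does ship a kernel implementation (the sctp module), so a quick experiment there doesn't need a userland stack. A minimal sketch, assuming the module is available (the port number is arbitrary):

```c
/* Sketch: a one-to-one SCTP socket using the Linux kernel implementation.
   Assumes the sctp module is loadable; otherwise socket() fails with
   EPROTONOSUPPORT and you're back in userland-stack territory. */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP); /* protocol 132 */
    if (fd < 0) {
        perror("socket");   /* no kernel SCTP support on this box */
        return 1;
    }
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9899);              /* arbitrary example port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    puts("kernel SCTP socket created and bound");
    return 0;
}
```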
We could've had it as a QUIC replacement if it weren't for terrible middleboxes and IPv4 NAT screwing everyone over once again. Hell, NAT wouldn't even have been an issue had SCTP support been widespread before consumer NAT devices started being implemented.
It's a prime example of how new network protocols now have to lie and deceive to work over the internet. TLS 1.3 needs to pretend to be TLS 1.2, QUIC needs to pretend to be an application on top of UDP while reimplementing SCTP and TCP, and even things like secure DNS are now piped over HTTPS just in case a shitty middlebox can't deal with raw TLS.
While the gist of your post is spot on, I do feel it should be noted that DoH is preferred over DoT not to protect from middleboxes that don't work properly, but from middleboxes that are actively trying to outright censor encrypted DNS, but can't afford to snoop on/prevent all HTTPS traffic. It's an anti-censorship measure, not a compatibility measure.
DoT (DNS over TLS) would have been enough for privacy from your ISP, using a dedicated port. It's only when you want to protect from censorship that you need to hide the (encrypted) DNS traffic among other traffic that can't be easily blocked.
Security through obscurity is not the same as actual concealment. That DoH is specced to operate over port TCP/443 makes it no more or less efficacious than DoT over TCP/853 with regard to avoiding censorship. I.e., they're both encrypted.
Many LAN operators conclude that the pragmatic impossibility of blocking DoH is a net-negative for both network security and censorship avoidance.
> That DoH is specced to operate over port TCP/443 makes it no more or less efficacious than DoT over TCP/853 with regard to avoiding censorship. I.e., they're both encrypted.
Of course there's a difference. Blocking all traffic with destination port 443 is virtually impossible. Conversely, blocking port 853 is trivial, and it forces all clients to either not resolve DNS, or downgrade to unencrypted DNS.
Of course, if DoH had not been encrypted, it wouldn't have mattered that it uses port 443. But being encrypted yet easily identifiable would have also defeated half the point.
You've sidestepped my point and merely reiterated yours.
Both DoH and DoT achieve actual concealment (and therefore privacy and censorship avoidance) through encryption. That one is more obscure than the other doesn't change the fact that the whole point of both protocols is encrypted DNS queries, not obscured DNS queries.
And again, if I'm the network operator and a host can obscure/obfuscate its DNS queries, then I've lost some measure of control over my network and the hosts that connect to it. I can trivially redirect all TCP/853 traffic to my DoT-capable resolver of choice. I can't do the same for all TCP/443 traffic (i.e. redirect it to my DoH-capable resolver of choice).
I don't care that an eavesdropper can observe discrete TCP/853 traffic because it's encrypted. The whole point is maintained, and I've maintained control over my private network.
I guess we're generally in agreement on the facts, just looking at this from different sides.
You're mostly commenting on the negative effect that DoH has over a private network administrator's legitimate need to control DNS resolution in their own private network.
I was discussing how DoH has positive effects for a network user trying to evade illegitimate control over their DNS resolution on the Internet, such as legally enforced DNS-based censorship of certain sites. Several countries have legally mandated that ISPs log and prevent resolution of, say, thepiratebay.com; and some include a requirement to prevent attempts at circumventing these bans, such as DoT traffic (they might also ban DoH traffic to well-known resolvers, which is where running your own proxies comes in).
Regardless, I think we can both agree that DoH was not created to work around ossification, the way QUIC was built on top of UDP instead of being a separate transport.
> I don't care that an eavesdropper can observe discrete TCP/853 traffic because it's encrypted.
Also, this is another level of miscommunication. I agree you don't need DoH to protect from eavesdropping, DoT works just as well. DoH protects from ISPs dropping easily-identifiable DoT packets to force a downgrade to regular plaintext DNS.
Some ISPs do run DNS servers to improve performance for their customers, not to snoop on them. My company is one of them. Being able to answer DNS queries in <1ms is far superior to the 8-25ms of latency over the transport network to the nearest peering point, especially when most clients are hitting the same names over and over. It's not like any of the public DNS servers are more trustworthy.
For communication between carriers and some communications within a given carrier's network components, yes. Base stations all still communicate with the core network over SCTP, though.
It's too bad the original IP couldn't have included some kind of stronger header integrity lock to block middleboxes.
It would have forced us to adopt V6 and… my god… the network would be so much cleaner and superior in every way. Things would just have addresses, and protocol innovation at L3 would be possible.
I'd like to agree with you, but IPv6 had such a classic case of suffocation by committee during its birth that I doubt even that situation would have pushed it through. IPsec was mandated in the original IPv6 RFC, for god's sake. That alone delayed a lot of implementation work, as crypto code needed to be integrated into kernels, which was not common in those days. That's to say nothing of the fact that IPsec is loosely defined enough that setting it up between different vendors is always an adventure; adding support to an IP stack was a big headache (I followed OpenBSD at the time they were integrating IPv6 in the early 2000s, and there were a lot of hard problems around compatibility).
Header integrity wasn't even a consideration during IPv4's design because the internet was a dozen universities and DoD sites; it would have been overkill (and possibly a waste of limited CPU cycles at the time).
What's far more likely to have happened is that we'd see more proxies instead of NATs (SOCKS, etc). I don't think that'd be better than NAT.
It's funny that you mention IPsec, since that would have made most of the application-level encryption we see today obsolete. They did have good intentions, and had it been widely accepted, barely any applications would have had to deal with the details of encryption, including the ever-looming possibility of doing it wrong (doing encryption right is hard!).
Now we have a slew of protocols that either implement TLS, or roll their own custom thing, or have X-over-HTTPS protocols, including SSTP and DoH.
IPsec was far too complicated, loosely defined, and over-engineered to have ever been widely accepted. Any host verification would need to involve application-level verification anyway, to make sure the other end is who you expect; your browser would still need to verify that the encrypted tunnel is in fact connected to Google, or whoever. There's a reason SSL/TLS is done at the application level.
There's some history I'm not aware of here. I didn't know just how bad the V6 second system committee shitshow was.
Today V6 has been stripped down almost to what it should have been: bigger addresses, then stop. All that's left to get there is to deprecate SLAAC or make it optional.
SLAAC is awesome, but DNS support in it didn't show up until much later. The result is a mish-mash of DHCPv6/SLAAC support (Android famously doesn't support DHCPv6 at all, and Windows could only learn DNS servers via DHCPv6 until Windows 10 added RDNSS support).
Let's say your ISP gives you a /64. Now you have to use V6 NAT... or assign a /96 internally. SLAAC won't let you do that.
That, among other things, is a problem. SLAAC is too limited.
You can use DHCPv6 but then you can't use Android because Android, and I think they're alone here, stubbornly and dogmatically refuses to implement it. I guess you could go around and statically assign V6 IPs to Android devices, or run NATv6 with SLAAC for those and DHCPv6 for everything else, but that's annoying as hell.
Then your ISP isn't following the RFCs. You might as well ask what you would do if your ISP gives your router a 17.0.0.0/8 address via DHCP.
These are completely invented problems in your head. SLAAC can absolutely advertise a single /64 internally. It can advertise any /64 you tell it to.
DHCPv6 can absolutely respond with DNS servers (and nothing else) in parallel. Configuring your SLAAC daemon to tell clients to get DNS servers via DHCPv6 is a 15 minute exercise with google.
Android's right on this one (and I don't own an Android device that I know of, so this isn't me fanboying them). TBH ISPs that hand out /64s shouldn't be allowed to say that they support IPv6, because it's a completely non-standard setup: not "non-standard" as in "uncommon", but as in "violates the documented standards".
Users have poor leverage over ISPs. A lot of them still don't support IPv6 out of unmitigated laziness. If they provide it but violate the standard there isn't much you can do about it, so what the standard is really doing is preventing you from mitigating it.
The standard requiring an entire /64 for SLAAC is also just a poor design.
Suppose your ISP is doing the right thing and giving you at least a /56. You have a complex network with your own subnets and you're not sure how many you'll need in the future, but the spec says you only have to give each one a /64 and that seems like plenty, so the person setting up the network does that. Time passes and you get devices in various subnets with fixed addresses you need to stay fixed. Then you want to hook up a VM host or anything else that wants to further subdivide the address space, but the spec says you can't even though there are still scads of addresses. And for what, so they can use EUI-64 which in practice is only 48 bits anyway and is effectively deprecated because it's a privacy fail?
Strictly speaking, it's perfectly standard and you can subdivide it (manually). However, for a variety of reasons, subnetting smaller than a /64 is difficult and not supported at all via SLAAC (the host portion is derived from the MAC, which is already 48 bits, plus other overhead). So if you want to split up a /64 from your ISP, you're limited to manual configuration, DHCPv6 (with no Android support), or other workarounds.
If you're just dealing with a single network at home, a /64 is otherwise fine. But there's a reason the recommended handout from ISPs is a /56, which gives you 256 separate /64 subnets; the inventors of IPv6 (or more specifically SLAAC) just didn't take into consideration how intransigent big telecoms would be.
I understand that, but what I'm replying to says: "TBH ISPs that hand out /64s shouldn't be allowed to say that they support IPv6, because it's a completely non-standard setup: not 'non-standard' as in 'uncommon', but as in 'violates the documented standards'."
RFC 6177 specifically recommends a /48 to /56 because future security measures may require subnetting, even on home networks (e.g. having IoT devices on a secure DMZ).
Basically, giving out a /64 is the modern equivalent to ISPs saying "you have to pay extra to have more computers on the internet and you're not allowed to use NAT" that was actually a thing up until the mid 2000s.
The frustrating thing is that there's no reason to hand out just a /64, as IPv6 was literally designed to hand out huge IP ranges.
> SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
Not following the recommendation from RFC 6177 by allocating a perfectly valid /64 (it being inconvenient to some is another story) is not "violating the documented standards".
What we can learn from this is that there are networks other than The Internet, and even The Internet can be subdivided into parts that don't really work together.
SCTP is really cool, I first found out about it because it’s the basis for WebRTC data channels. It’s basically reliable UDP, but you can turn off the reliability if you want. Makes me wonder why QUIC exists when SCTP does…
Yes this is called protocol ossification [1] or ossification for short. Other transport layer protocol rollouts have been stymied by ossification such as MPTCP. QUIC specifically went with UDP to prevent ossification yet if you hang out in networking forums you'll still find netops who want to block QUIC if they can.
Because from an enterprise security perspective, it breaks a lot of tools. You can’t decrypt, IDS/IPS signatures don’t work, and you lose visibility to what is going on in your network.
Yes I know why netops want to block QUIC but that just shows the tension between the folks who want to build new functionality and the folks who are in charge of enterprise security. I get it, I've held SRE-like roles in the past myself. When you're in charge of security and maintenance, you have no positive incentive to allow innovation. New functionality gives you nothing. You never get called into a meeting and congratulated for new functionality you help unlock. You only get called in if something goes wrong, and so you have every incentive to monitor, lock down, and steer traffic as best as you can so things don't go wrong on your watch.
IMO it's a structural problem that blocks a lot of innovation. The same thing happens when a popular author-led open source project switches to an external maintainer. When the incentives to block innovation are stronger than the incentives to allow it, you get ossification.
Possibly SRE shouldn't even exist, and not only because of the structural issues you mention...
If your approach to security is that only square tiles are allowed because your security framework is a square grid, and anything that doesn't fit the grid just breaks your security model, maybe it was never a valid thing to model in the first place.
I'm not saying security should not exist, but to use an analogy, the approach should be entirely different: we rely on security guards more than fences, not because fences don't provide some security, but because an agent can make the proper decision. A lot of these enterprise models are more akin to fences with a doorman, not a professional with a piece and training...
Agreed. I also think rotations, where engineers and ops/security swap off from time-to-time and are actually rated on their output in both roles would be useful to break down the adversarial nature of this relationship.
> Other transport layer protocol rollouts have been stymied by ossification such as MPTCP
AFAIU, Apple has flexed their muscle to improve MPTCP support on networks. I've never seen numbers, though, regarding success and usage rates. Google has published a lot of data for QUIC. It would be nice to be able to compare QUIC and MPTCP. (Maybe the data is out there?) I wouldn't presume MPTCP is less well supported by networks than QUIC. For one thing, it mostly looks like vanilla TCP to routers, including wrt NAT. And while I'd assume SCTP is definitely more problematic, it might not be as bad as we think, at least relative to QUIC and MPTCP.
I suspect the real thing holding back MPTCP is kernel support. QUIC is, for now, handled purely in user land, whereas MPTCP requires kernel support if you don't want to break application process security models (i.e. grant raw socket access). Mature MPTCP support in the Linux kernel has only been around for a few years, and I don't know if Windows even supports it, yet.
Maybe it's me being stupid, but why don't we always use QUIC instead of TCP?
I think it has to do with something I read: TCP can handle up to 1000 simultaneous connections no worries, and they won't interfere with each other's bandwidth or impact each other, but UDP does make it possible for one very heavy service to impact the others.
There was a recent test by Anton Putra comparing UDP vs TCP, and the difference was IIRC negligible. Someone said he should probably use UDP in kernel mode to get insane performance; I'm not sure.
> Maybe its me being stupid but why don't we use quic always instead of tcp?
A big reason is that QUIC is a lot younger than TCP, and it will take a while for everyone with a TCP use case (if they are actively maintained and looking at possible upgrades) to decide whether QUIC is a good option worth testing.
QUIC's rollout so far hasn't been entirely without bugs/controversies/quirks/obstacles/challenges. You still see a lot more HTTP/2 than HTTP/3 connections in the current wild and that doesn't seem to be changing near as fast as major providers upgraded HTTP/1.x to HTTP/2. There's still a bunch of languages and contemporary OSes without strong QUIC support. (Just the other day on HN was a binding for Erlang to msquic, IIRC, for a first pass at QUIC support in that language.)
At some point soon QUIC might start feeling as rock solid as TCP, but today TCP has decades of rock-solid operation behind it, while QUIC is still quite new and a little quirky.
Safari on iOS still has a ton of lingering HTTP/3/QUIC bugs.
I think it's to the point that if your user base doesn't warrant it (i.e. you are targeting well-connected devices with minimal latency/packet loss), it's not even worth turning HTTP/3 on.
So QUIC just lacks the decades of experience, but is a better protocol than TCP overall?
That is kind of nice to know, actually. The support will come, considering it's built on top of UDP; you just need people pushing, and Google is already pushing it hard.
The main problem is QUIC's support in languages, but that support will come. So after reading this comment of yours, I'm pretty optimistic about QUIC overall.
Not necessarily a "better protocol overall", it still seems too early to tell. I think we're still in the "Find Out" stages because of the rollout issues and the lack of language support and lack of diversity of implementations.
(On the diversity of implementations front: So far we've got Google's somewhat proprietary implementation, Apple's kind of broken entirely proprietary implementation, and Microsoft's surprisingly robust and [also surprisingly to some] entirely open source C implementation. General language support would be even worse without msquic and the number of languages binding to it. Microsoft seems to be doing a lot more for faster/stronger/better QUIC adoption than Google today, which I know just writing that sentence will surprise a lot of people.)
There will be trade-offs to be found with TCP. For instance, a lot of discussion elsewhere in these threads is about the overbearing/complicated/nuanced congestion control of TCP, but that's as much a feature as a bug, and when TCP congestion control works well it quietly does the internet a wealth of good. QUIC congestion control is much more binary: packets are dropped or they aren't. That's a good thing as an application author, especially if you expect the default case to be no dropped packets, but it doesn't give the infrastructure a lot of options. When pressure happens and those "allow UDP packet" switches get turned off, and most of your packets as an application developer are dropped, how do you expect to route around that? At least for now, most of the webservers built to support HTTP/3 still fall back to HTTP/2 on request, returning to the known working congestion control of TCP that most of the internet, and especially the web, was built on top of.
I'm not a pessimist on QUIC, I think it has great potential. I also am not an optimist about it 100% replacing TCP in our near future, and maybe not even in our lifetime. As an application developer, it will be a great tool to have in your toolbelt as a "third" compromise option between TCP and UDP, but deciding between TCP and QUIC is probably going to be an application-by-application pros/cons list debate, at least in the short term and I think probably in the long term too.
Because pure SCTP can't survive outside your LAN, thanks to everything in-between you and your destination. Why not use SCTP on top of UDP? Well, because one of the main benefits of QUIC is TLS being at its core.
The SCTP you're talking about runs on top of DTLS on top of UDP. DTLS has issues of its own, but even if it didn't, it wouldn't beat QUIC on TTFB.
Others have mentioned protocol ossification which is indeed the primary reason. A secondary reason is that QUIC fuses TLS so its latency is further reduced by one RTT. For high latency networks, the difference is palpable.
QUIC is supposed to be faster than SCTP by combining layers and eliminating round trips. Also, QUIC is a stream protocol like TCP, while SCTP makes messages explicit. Both have multiplexing, which is why they seem similar.
It's actually universal within a certain niche. I think phone networks are doing just about everything over SCTP internally. When SS7 links get replaced they get replaced with something that uses SCTP. Not sure of the details because I don't work there.
Related: there's a parallel Internet with a different root of number allocation called GRX/IPX (GPRS Roaming Exchange/IP eXchange).
IPX (Novell's Internetwork Packet Exchange, an unrelated protocol that happens to share the abbreviation) was also very common just 20 years ago. Many old games only support IPX networking, and you need to run an IPX-over-TCP emulator to play them multiplayer nowadays.
TCP would be fine if it had the concept of a message in addition to a stream. The sender sends a message, and it flows to the recipient. The recipient program can specify that a single receive returns the entire message, no more and no less, as long as the application's target input buffer is large enough.
SCTP does this.
A shim on top of TCP socket receive could also do this, as long as there is a convention to prefix each message with a length field, say 16 bits, with the MSB indicating that the message is incomplete and continues in the next length-delimited segment.
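A rough sketch of that shim, using exactly the convention described above (recv_message is a hypothetical helper, not any standard API):

```c
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* TCP may return fewer bytes than asked, so loop until we have them all. */
static int recv_exact(int fd, void *buf, size_t len) {
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return -1;          /* error, or peer closed mid-message */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Returns the reassembled message length, or -1 on error/short buffer. */
ssize_t recv_message(int fd, void *buf, size_t bufsize) {
    size_t total = 0;
    for (;;) {
        uint8_t hdr[2];
        if (recv_exact(fd, hdr, 2) < 0) return -1;
        uint16_t field = (uint16_t)((hdr[0] << 8) | hdr[1]); /* big-endian */
        int more = field & 0x8000;      /* MSB set: message continues */
        size_t seglen = field & 0x7fff; /* low 15 bits: segment length */
        if (total + seglen > bufsize) return -1;  /* caller buffer too small */
        if (recv_exact(fd, (char *)buf + total, seglen) < 0) return -1;
        total += seglen;
        if (!more) return (ssize_t)total;
    }
}
```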
As someone who has implemented various transport protocols (both standard and custom): the biggest hurdle in layering atop IP is, surprisingly, not the gauntlet of WAN routers, but consumer NAT devices.
One interesting case with a particular Netgear family of routers was that, while the traffic would survive end-to-end, it was not without corruption: very specific corruption, which took the form of zeroing the first 4 bytes. Given this aligns with where the src/dst ports would normally be, I suspect it was being treated as TCP/UDP, albeit without the actual translation path taken.
One would have a hard time communicating with anyone. The internet has standardized around TCP and UDP. There are too many devices in most paths that will not be able to handle other protocols. Replacing all the hardware on the internet would take even longer than deprecating IPv4 in my pessimistic opinion. To get around this there would have to be some significant gains of the new protocol that would warrant big expenses in every corporation and government and all the hardware manufacturers would all have to agree on implementing support and further agree on interpretation of the new RFC.
Actually the Internet at large is fine with any protocol on top of IP. It's your home router's NAT function that can only handle UDP and TCP. Set it to bridge mode and use your computer as the router (if you need one) and you can send anything from that computer.
If you have CGNAT you're still screwed. Get one of those free IPv6 tunnels.
The way the NATs (network address translators) are sharing the scarce public IPv4 addresses is by multiplexing on the transport level fields (ports in case of TCP/UDP and IDs or inner packet transport level fields in case of ICMP).
Since they are unaware of your protocol, they get into a "special case mode", which on a naive translator might consume a whole IP address (so you would really infuriate a network admin with a few of those, because you'd exhaust all their available addresses :-) ); but on a carrier-grade NAT there are safeguards against it, and the packets are just dropped.
At some point it seems like we just need to start passing laws requiring (major) internet services to provide all services over IPv6. Routers could then intercept IPv4-only devices and their DNS locally and translate it to IPv6.
Still lame that in 2024 major services like Steam and Quest basically require IPv4.
I want to be able to use the internet without silly things like exhausting a network admin's IPv4 addresses.
At least cell phone networks through their standards processes have pushed most consumer networks and consumer hardware to be IPv6 by default ("by only option" in many cases with DNS64 and NAT64 filling in the gaps).
The real pressure we need are to corporate networks. Too many of them think they can use 10.0.0.0/8 forever. Too many of them own giant chunks of public IPv4 space and think they are immune to the address exhaustion. At least the prices for IPv4 addresses are going up at major clouds like AWS and Hetzner. But it still seems too slow of a price rise to hit enough bottom lines that major corporations are feeling the pressure yet.
This is outside my area of expertise, so naive question: ports aren't tied to the protocol, right? If you open a raw socket, it's still on some associated port number. NAT traversal multiplexes ports, so why would that preclude using any arbitrary protocol?
Ports are very much a concept of the transport layer. They are a very useful concept, so they are used in all major transport-layer protocols, but they are not necessary in a theoretical sense (though a transport layer protocol without them would only allow a single stream of traffic between any two machines, or at least IPs). But TCP port 22 is a completely different thing than UDP port 22, and they are both completely different from SCTP port 22. To prevent confusion, IANA typically assigns the same port number for a protocol on both TCP and UDP (e.g. DNS over TCP uses TCP port 53, just like DNS over UDP uses UDP port 53; and QUIC uses UDP port 443, just like HTTPS uses TCP port 443).
When a machine receives a packet, after the Ethernet and IP layers have made sure the packet is addressed to this machine, the very next thing that happens is checking what transport layer implementation should receive the packet - and this is done based on the "transport" bits in the IP header. If the packet is, say, TCP, then the TCP layer starts reading its own header and finds out the port number, and proceeds from there.
You are partially right, though. The OS network stack does expose and handle ports if you use a protocol that has them.
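A quick way to see that ports are per-protocol: bind TCP and UDP sockets to the same port number, which succeeds because the kernel demultiplexes on the IP protocol field before it ever looks at ports. A minimal sketch (the port number is arbitrary):

```c
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int tcp = socket(AF_INET, SOCK_STREAM, 0);  /* IPPROTO_TCP */
    int udp = socket(AF_INET, SOCK_DGRAM, 0);   /* IPPROTO_UDP */
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_port = htons(5353);                   /* same number for both */
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    /* Both binds succeed: TCP/5353 and UDP/5353 are distinct endpoints. */
    if (bind(tcp, (struct sockaddr *)&a, sizeof a) < 0) perror("bind tcp");
    if (bind(udp, (struct sockaddr *)&a, sizeof a) < 0) perror("bind udp");
    else puts("bound TCP/5353 and UDP/5353 independently");
    return 0;
}
```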
Networks are built in layers. There's a physical layer underneath IP, then there's IP, and then there's TCP and UDP on top of IP.
The OS network stack has components that handle all of these layers. That's why it's called a stack.
Port numbers are part of the individual protocol (TCP or UDP) because there are a lot of things you can do with networking, and port numbers don't necessarily make sense with all of them.
For example, when you ping another computer, that uses ICMP, and there is no need for ports with ICMP. You're pinging the whole computer, not trying to connect with one of several applications running on it. So ports are not really needed.
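A sketch of that, assuming Linux and CAP_NET_RAW: a hand-rolled ICMP echo request addresses a whole host, and the id/sequence fields are just echo bookkeeping, not ports (the target address is a placeholder from the documentation range):

```c
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip_icmp.h>
#include <sys/socket.h>

/* Standard internet checksum over the ICMP header (+ payload, if any). */
static uint16_t cksum(const uint16_t *p, size_t n) {
    uint32_t sum = 0;
    for (; n > 1; n -= 2) sum += *p++;
    if (n) sum += *(const uint8_t *)p;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += sum >> 16;
    return (uint16_t)~sum;
}

int main(void) {
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (fd < 0) { perror("socket (needs CAP_NET_RAW)"); return 1; }

    struct icmphdr h = {0};
    h.type = ICMP_ECHO;              /* type 8: echo request */
    h.un.echo.id = htons(1234);      /* an identifier, not a port */
    h.un.echo.sequence = htons(1);
    h.checksum = cksum((uint16_t *)&h, sizeof h);

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder target */

    if (sendto(fd, &h, sizeof h, 0, (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");
    return 0;
}
```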
TCP/IP is a protocol called TCP encapsulated in a protocol called IP. Raw IP packets have a "protocol number" for use by the payload, but no port number. The port number is technically part of the payload data that the IP layer should not care about.
If you mean socket as in `my_socket = socket(AF_INET, SOCK_STREAM, 0);` that's TCP/IP, not raw IP, so it will have port numbers in the TCP part of the packet. `SOCK_STREAM` and `SOCK_DGRAM` respectively correspond to TCP and UDP. Raw IP sockets on Linux can be created by `socket(AF_INET, SOCK_RAW, protocol);` and that will have no TCP/UDP header attached by the Kernel after the IP header in the packet.
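For example, here's a hedged sketch of sending a custom layer-4 payload over raw IP, using protocol number 253 (reserved for experimentation by RFC 3692). The kernel builds the IP header for us; the destination address is a placeholder:

```c
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    /* Our "custom" layer-4 protocol: experimental number 253. */
    int fd = socket(AF_INET, SOCK_RAW, 253);
    if (fd < 0) { perror("socket (needs root/CAP_NET_RAW)"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    const char payload[] = "hello from protocol 253";
    if (sendto(fd, payload, sizeof payload, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");  /* a NAT in the path will likely drop this */
    return 0;
}
```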
IP has no ports; it has a protocol field, and TCP, UDP, IPsec, and other layer-4 protocols each have their own protocol number. NAT (PAT, really) uses TCP and UDP ports to connect what can be considered separate connections, i.e. a connection on the inside and a connection on the outside; these connections don't even have the same source/destination ports, as connection state tracking is what ties them together.
The socket API has different types of sockets. Typically you use stream (TCP) or dgram (UDP), but you can also get a completely raw socket where you need to construct your own Ethernet and IP headers, or create your own replacement for the IP protocol. So a socket does not mean only TCP/UDP or something with ports; Unix sockets are another example of this.
When using a packet socket you are sending bytes directly to the device driver. The only ports at this level are the physical ports on your machine which will have names like "eth1" etc. Assuming an Ethernet driver, the bytes must be a valid Ethernet frame[0], but that's all. The payload is just bytes.
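A sketch of that, assuming Linux: build a minimal broadcast frame by hand and push it out through a packet socket. The interface name is an assumption, and EtherType 0x88B5 is the IEEE "local experimental" value:

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket (needs CAP_NET_RAW)"); return 1; }

    unsigned char frame[ETH_ZLEN] = {0};     /* minimum-size frame, zero-padded */
    memset(frame, 0xff, ETH_ALEN);           /* destination MAC: broadcast */
    /* bytes 6..11: source MAC, left zeroed for brevity */
    frame[12] = 0x88; frame[13] = 0xb5;      /* EtherType 0x88B5: local experimental */
    strcpy((char *)frame + 14, "custom L2 payload");

    struct sockaddr_ll dev = {0};
    dev.sll_family = AF_PACKET;
    dev.sll_ifindex = if_nametoindex("eth0"); /* assumption: your NIC is eth0 */
    dev.sll_halen = ETH_ALEN;
    memset(dev.sll_addr, 0xff, ETH_ALEN);

    if (sendto(fd, frame, sizeof frame, 0,
               (struct sockaddr *)&dev, sizeof dev) < 0)
        perror("sendto");
    return 0;
}
```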
I've been trying for a few minutes to figure out what you mean by "it's in the name"... What about Internet Protocol implies that ports are not an inherent property of it?
All I can think of is that an "IP address" does not have a port component and that's all that IP deals with.
IPv6 alone won't help if there are firewalls sitting between the endpoints that will drop anything they don't know. Having two Linux hosts without firewalls talk to each other over IPv6, with no consumer-grade routers (which tend to come with firewalls by default) between them, might work.
This gives you a raw Ethernet socket. For a raw IP socket, use AF_INET,SOCK_RAW,0.
A raw Ethernet socket sends packets at the Ethernet level while a raw IP socket sends packets at the IP level. If you want to play on your local network you want Ethernet (maybe). If you want to send weird packets across the Internet you probably want IP, so that you don't have to waste effort doing route lookups and MAC address lookups and your code will still work on non-Ethernet networks, including VPNs.
There are some protocols that run on top of Ethernet but Internet-compatible protocols all run on IP by definition.
This comment was delayed several hours due to HN rate limiting.
Yeah, this was intended to eliminate the problems with protocol=0, but you do need to implement IP and maybe ARP to do a similar thing at this level. Thankfully IP is pretty simple to implement.
If you want to play with packet sockets you might find my notebook useful where I was going to implement TCP/IP from scratch (well, from layer 2 up): https://github.com/georgek/notebooks/blob/master/internet.ip... I did ICMP over IP but got bored before I got to TCP, though (it's way more complicated than IP). You could drop in your custom protocol in place of ICMP.
And this brings me to why I love networking on Plan 9. First off, the dial string, net!address!service, passes the network along with the address and service port, letting you easily switch protocols as needed; e.g. a program could listen on IL port 666 using the dial string il!*!666 and the client dials il!10.1.1.1!666. Second, that lets you dial and listen on any protocol from any program. If one wanted to use raw Ethernet, you'd use the dial string ether0!aabbccddeeff. If I wanted to add a protocol like QUIC, or a dead one like IPX/SPX, I just need to mount it on /net and keep the semantics the same as the other services in net, and any program can dial that protocol: ezpz user-space networking stacks. Powerful abstraction.
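For the unfamiliar, a sketch of how small that is in Plan 9 C; switching protocols is literally editing the dial string (the address and port here are made up):

```c
#include <u.h>
#include <libc.h>

void
main(void)
{
	/* swapping protocols is just a string edit: tcp!..., udp!..., il!... */
	int fd = dial("il!10.1.1.1!666", nil, nil, nil);
	if(fd < 0)
		sysfatal("dial: %r");
	write(fd, "hi", 2);
	exits(nil);
}
```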
Datakit was in fact one of the early supported networks in Plan 9. You could dial into a 9 machine over Datakit, mount an outward-facing IP stack over your /net, and you're online: http://man.postnix.pw/plan_9_2e/3/datakit
Yeah! I am baffled that even one got through, but given that it did, why only one? And I would’ve immediately tried the “every protocol” version at that point…
I always assumed any TCP/UDP packets would get captured by the OS network stack in order to be sent only to the processes listening on specific ports.
I guess this is a security feature, since a process cannot even listen on some ports without having elevated privileges. I wouldn't expect another process being able to capture all this traffic anyway. This would also require a mechanism of sending the same stream to multiple processes (TCP listeners and all-protocol listeners).
But I didn't even know it was possible to capture traffic from multiple transport layer protocols using a syscall, perhaps that syscall requires elevated privileges itself..?
It requires elevated privileges, but this is how programs like tcpdump and wireshark work. On Linux it's also possible to give a program these permissions for any user by setting "capabilities", specifically cap_net_admin and cap_net_raw.
DHCP services require this ability to receive and send UDP packets on raw sockets, barring a few advanced systems like Solaris that provide them with necessary facilities. Usually they install a BPF module on the socket to filter out uninteresting packets.
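For the curious, a sketch of that mechanism on Linux: attach a classic BPF program to a packet socket with SO_ATTACH_FILTER so the socket only sees, say, UDP (the four-instruction filter matches IP protocol 17):

```c
#include <stdio.h>
#include <arpa/inet.h>
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <sys/socket.h>

int main(void) {
    /* SOCK_DGRAM packet socket: the kernel strips the link-layer header,
       so offset 0 is the start of the IP header. */
    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IP));
    if (fd < 0) { perror("socket (needs CAP_NET_RAW)"); return 1; }

    /* Classic BPF: load the IP protocol byte (offset 9) and keep the
       packet only if it is 17 (UDP); drop everything else. */
    struct sock_filter code[] = {
        { 0x30, 0, 0, 9          },   /* ldb [9]                 */
        { 0x15, 0, 1, 17         },   /* jeq #17 ? keep : drop   */
        { 0x06, 0, 0, 0xffffffff },   /* keep: return whole pkt  */
        { 0x06, 0, 0, 0          },   /* drop: return 0 bytes    */
    };
    struct sock_fprog prog = { .len = 4, .filter = code };
    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof prog) < 0)
        perror("setsockopt");
    else
        puts("filter attached; socket now sees only UDP");
    return 0;
}
```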
I think a more interesting question is: what if internet protocols and routing equipment were designed from scratch today? Besides much larger packets, I'm guessing something basic in the style of UDP would be chosen to replace HTTP to simplify query-response lookups, a much simpler streaming protocol would be chosen to replace TCP and support all the video playing going on, and those two protocols would more efficiently handle the vast majority of traffic.
It's weird to me that the definition of portable has changed from "can be moved" to "can be moved to another machine" to "can be moved to another environment" to "can be moved to another program"
We had the option to make every object addressable and we chose not to. Why keep sweating over it? Just accept that software only works on the dev's machine and ship that.
UUCP came only a couple of years before TCP, and isn't really an equivalent (it's fancy file transfer with a little remote command execution sprinkled on top), and natively it's dialup-oriented; it usually requires some other L3 protocol to run over networks.
There was a time when IPX/SPX was a contender. Xerox pitched XNS directly at TCP. DECnet/OSI was around. There were a lot of others... lots of experimenting going on at the time.
If you're interested in a more "modern" UUCP, there's NNCP [1] (HTTPS version here [2]). It continues to be mostly a file-transfer protocol with a bit of signaling added on.