
> why QUIC exists when SCTP does

Because QUIC uses UDP, which is supported by most/all intermediate routing equipment.




The whole point of UDP is to allow alternative protocols to be implemented on top.

SCTP's mistake was that it wasn't implemented as a userland library on top of UDP to begin with.
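
To make that concrete, here's a rough, purely illustrative sketch (a toy framing I made up, not real SCTP or QUIC) of what "a transport in userland on top of UDP" means: the kernel, NATs, and middleboxes only ever see plain UDP datagrams, while the stream and sequencing semantics live entirely in the library.

    import socket
    import struct

    # Toy header: 4-byte stream id + 8-byte sequence number, network byte order.
    HEADER = struct.Struct("!IQ")

    def send_frame(sock, addr, stream_id, seq, payload):
        # Routers and NATs only see an ordinary UDP packet; the "protocol" is ours.
        sock.sendto(HEADER.pack(stream_id, seq) + payload, addr)

    def recv_frame(sock):
        data, addr = sock.recvfrom(65535)
        stream_id, seq = HEADER.unpack_from(data)
        return addr, stream_id, seq, data[HEADER.size:]

    if __name__ == "__main__":
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.bind(("127.0.0.1", 9999))
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_frame(tx, ("127.0.0.1", 9999), stream_id=1, seq=0, payload=b"hello")
        print(recv_frame(rx))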


Is this a real issue? SCTP runs over IP, so unless you're talking about firewalls and such, the support should be there.

Edit: a quick search showed that NAT traversal is an issue (of course!)


Yes, this is called protocol ossification [1], or ossification for short. Other transport layer protocol rollouts, such as MPTCP, have been stymied by ossification. QUIC specifically went with UDP to prevent ossification, yet if you hang out in networking forums you'll still find netops who want to block QUIC if they can.

[1]: https://en.m.wikipedia.org/wiki/Protocol_ossification


Because from an enterprise security perspective, it breaks a lot of tools. You can’t decrypt, IDS/IPS signatures don’t work, and you lose visibility to what is going on in your network.


Yes I know why netops want to block QUIC but that just shows the tension between the folks who want to build new functionality and the folks who are in charge of enterprise security. I get it, I've held SRE-like roles in the past myself. When you're in charge of security and maintenance, you have no positive incentive to allow innovation. New functionality gives you nothing. You never get called into a meeting and congratulated for new functionality you help unlock. You only get called in if something goes wrong, and so you have every incentive to monitor, lock down, and steer traffic as best as you can so things don't go wrong on your watch.

IMO it's a structural problem that blocks a lot of innovation. The same thing happens when a popular, author-led open source project switches to an external maintainer. When the incentives to block innovation are stronger than the incentives to allow it, you get ossification.


Possibly SRE shouldn't even exist, not only because of the structural issues you mention, but...

If your approach to security is that only square tiles are allowed because your security framework is a square grid, and points just break your security model, maybe it was never a valid thing to model in the first place.

I'm not saying security should not exist, but to use an analogy, the approach should be entirely different: we have security guards, less so fences, not because fences don't provide some security, but because the agent can make the proper decision. A lot of these enterprise models are more akin to fences with a doorman, not a professional with a piece and training...


Agreed. I also think rotations, where engineers and ops/security swap off from time to time and are actually rated on their output in both roles, would be useful to break down the adversarial nature of this relationship.


Wrapping everything in UDP breaks the same tools but it's more obnoxious for everyone involved.


> Other transport layer protocol rollouts, such as MPTCP, have been stymied by ossification

AFAIU, Apple has flexed their muscle to improve MPTCP support on networks. I've never seen numbers, though, regarding success and usage rates. Google has published a lot of data for QUIC. It would be nice to be able to compare QUIC and MPTCP. (Maybe the data is out there?) I wouldn't presume MPTCP is less well supported by networks than QUIC. For one thing, it mostly looks like vanilla TCP to routers, including wrt NAT. And while I'd assume SCTP is definitely more problematic, it might not be as bad as we think, at least relative to QUIC and MPTCP.

I suspect the real thing holding back MPTCP is kernel support. QUIC is, for now, handled purely in user land, whereas MPTCP requires kernel support if you don't want to break application process security models (i.e. grant raw socket access). Mature MPTCP support in the Linux kernel has only been around for a few years, and I don't know if Windows even supports it, yet.
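
To illustrate why kernel support is the gating factor: MPTCP isn't something an application can link in as a library; it has to ask the kernel for it at socket creation time, and everything after that is the ordinary TCP socket API. A rough Linux-only sketch (262 is the Linux protocol number; newer Python versions also expose socket.IPPROTO_MPTCP, and the fallback branch is what you'd actually ship):

    import socket

    IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)  # Linux protocol number

    try:
        # Only works if the running kernel was built with MPTCP and has it enabled.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # No kernel support: fall back to plain TCP, same socket API from here on.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)

    s.connect(("example.com", 80))
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(200))
    s.close()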


It would be nice to generate some sort of report card here. Maybe I should try.


Every home user is behind a NAT. While you can send any protocol between datacenter servers, IPv4 home users are stuck with TCP or UDP.


Hole punching is perhaps why UDP is the de facto choice?
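
Pretty much. A toy sketch of the basic idea (real hole punching depends heavily on NAT behavior and normally needs a rendezvous/STUN-style server; here the peers are assumed to have already learned each other's public ip:port out of band):

    import socket
    import time

    def punch(local_port, peer_addr, rounds=10):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", local_port))
        sock.settimeout(0.5)
        for _ in range(rounds):
            # The outbound packet creates/refreshes our NAT's mapping for this
            # 4-tuple; once both peers have sent, packets pass in both directions.
            sock.sendto(b"punch", peer_addr)
            try:
                data, addr = sock.recvfrom(1500)
                return sock, addr  # hole is open; keep using this socket
            except socket.timeout:
                time.sleep(0.2)
        return None, None

The TCP equivalent (simultaneous open) exists but tends to be much less reliable through typical NATs, which is part of why peer-to-peer traffic ends up on UDP.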


WebRTC runs SCTP over DTLS over UDP.


That's not the reason; SCTP over UDP was already standardized.


I am now genuinely wondering

Maybe it's me being stupid, but why don't we always use QUIC instead of TCP?

I think it has to do with something I read: TCP can handle up to 1000 simultaneous connections with no worries and they won't interfere with each other's bandwidth or impact each other, but UDP makes it possible for one very heavy service to impact the others.

There was a recent test by Anton Putra comparing UDP vs TCP, and the difference was IIRC negligible. Someone said he should probably use UDP in kernel mode to essentially get insane performance; I am not sure.


> Maybe it's me being stupid, but why don't we always use QUIC instead of TCP?

A big reason is that QUIC is a lot younger than TCP, and it will take a while for all the use cases of TCP to decide (if they are actively maintained and looking at possible upgrades) whether QUIC is a good option worth testing.

QUIC's rollout so far hasn't been entirely without bugs/controversies/quirks/obstacles/challenges. You still see a lot more HTTP/2 than HTTP/3 connections in the wild, and that doesn't seem to be changing nearly as fast as major providers upgraded from HTTP/1.x to HTTP/2. There are still a bunch of languages and contemporary OSes without strong QUIC support. (Just the other day on HN there was a binding for Erlang to msquic, IIRC, as a first pass at QUIC support in that language.)

At some point soon QUIC might start feeling as rock solid as TCP, but today TCP is (decades of) rock solid and QUIC is still quite new and a little quirky.
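
As a feel for where language support currently sits: in Python, for example, there's no QUIC in the standard library, so you reach for a third-party userland stack such as aioquic. A rough sketch from memory of what that looks like (treat the exact calls as approximate and check the aioquic docs; cloudflare.com is just an arbitrary host known to speak HTTP/3):

    import asyncio
    import ssl

    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main():
        config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
        config.verify_mode = ssl.CERT_REQUIRED
        # Handshake, loss recovery, and congestion control all run in this
        # process over a plain UDP socket -- no kernel QUIC support involved.
        async with connect("cloudflare.com", 443, configuration=config) as client:
            await client.ping()
            print("QUIC round trip completed")

    asyncio.run(main())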


Safari on iOS still has a ton of lingering HTTP/3 / QUIC bugs.

I think it's to the point that if your user base doesn't warrant it (i.e. you are targeting well-connected devices with minimal latency/packet loss), it's not even worth turning HTTP/3 on.


So QUIC just lacks the decades of experience but is a better protocol than TCP overall?

That is kind of nice to know, actually. The support will come, considering it's built on top of UDP. You just need people pushing, and Google is already pushing it hard.

The main problem is QUIC's support in languages, but that will come. So after reading this comment of yours, I am pretty optimistic about QUIC overall.


Not necessarily a "better protocol overall"; it still seems too early to tell. I think we're still in the "Find Out" stages because of the rollout issues, the lack of language support, and the lack of diversity of implementations.

(On the diversity of implementations front: So far we've got Google's somewhat proprietary implementation, Apple's kind of broken entirely proprietary implementation, and Microsoft's surprisingly robust and [also surprisingly to some] entirely open source C implementation. General language support would be even worse without msquic and the number of languages binding to it. Microsoft seems to be doing a lot more for faster/stronger/better QUIC adoption than Google today, which I know just writing that sentence will surprise a lot of people.)

There will be trade-offs versus TCP. For instance, a lot of discussion elsewhere in these threads is about the overbearing/complicated/nuanced congestion control of TCP, but that's as much a feature as a bug, and when TCP congestion control works well it quietly does the internet a wealth of good. QUIC congestion control is much more of a binary: packets get dropped or they don't. That's a good thing as an application author, especially if you expect the default case to be "not dropped", but it doesn't give the infrastructure a lot of options, and when pressure happens, those "allow UDP packet" switches get turned off, and most of your packets are dropped, how do you as an application developer expect to route around that? At least for now, most of the webservers built to support HTTP/3 still fall back to HTTP/2 on request, going back to the known working congestion control of TCP that most of the internet, and especially the web, was built on top of.
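
That fallback path is visible on the wire, by the way: clients typically reach the server over TCP first, and HTTP/3 is only an offer advertised in the Alt-Svc response header; if the UDP path is blocked, the client just stays on HTTP/2 over TCP. A quick way to see the advertisement (cloudflare.com is again just an example host):

    import http.client

    conn = http.client.HTTPSConnection("cloudflare.com", 443)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    # Typically prints something like: h3=":443"; ma=86400
    print("Alt-Svc:", resp.getheader("Alt-Svc"))
    conn.close()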

I'm not a pessimist on QUIC; I think it has great potential. I'm also not an optimist about it 100% replacing TCP in the near future, and maybe not even in our lifetime. As an application developer, it will be a great tool to have in your toolbelt as a "third" compromise option between TCP and UDP, but deciding between TCP and QUIC is probably going to be an application-by-application pros/cons list debate, at least in the short term, and I think probably in the long term too.



