kixelated's comments | Hacker News

Hey lewq, 40 Mbps is an absolutely ridiculous bitrate. For context, Twitch maxes out around 8.5 Mbps for 1440p60. Your encoder was poorly configured, that's it. Also, it sounds like your mostly static content would greatly benefit from VBR; you could get the bitrate down to 1 Mbps or something for screen sharing.

And yeah, the usual approach is to adapt your bitrate to network conditions, but it's also common to modify the frame rate. There's actually no requirement for a fixed frame rate with video codecs. That also means you could do the same "encode on demand" approach with a codec like H.264, provided you're okay with low FPS on high-RTT connections (poor Australians).
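Something like this with WebCodecs (untested sketch; the codec string, resolution, bitrate, and sendToServer hook are all placeholders, not anything from your setup):

    // Configure once: VBR so mostly static screen content costs almost nothing.
    const encoder = new VideoEncoder({
      output: (chunk) => sendToServer(chunk), // hypothetical transport hook
      error: (e) => console.error(e),
    });

    encoder.configure({
      codec: "avc1.42001f",    // H.264 Baseline, as an example
      width: 1920,
      height: 1080,
      bitrate: 1_000_000,      // ~1 Mbps target for screen sharing
      bitrateMode: "variable", // VBR: only spend bits when pixels change
    });

    // No fixed frame rate: encode a frame only when the content changes.
    function onScreenChanged(frame: VideoFrame) {
      encoder.encode(frame, { keyFrame: false });
      frame.close();
    }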

Overall, using keyframes only is a very bad idea. It's how the low quality animated GIFs used to work before they were secretly replaced with video files. Video codecs are extremely efficient because of delta encoding.

But I totally agree with ditching WebRTC. WebSockets + WebCodecs is fine provided you have a plan for bufferbloat (e.g. adaptive bitrate (ABR) or GoP skipping).
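By "plan" I mean something like this (untested sketch; the threshold is a made-up number): watch the socket's send buffer and drop whole GoPs while it's backed up, resuming on a keyframe.

    const MAX_BUFFERED = 256 * 1024; // bytes; tune for your latency target
    let skipping = false;

    function sendChunk(ws: WebSocket, chunk: EncodedVideoChunk) {
      const congested = ws.bufferedAmount > MAX_BUFFERED;
      if (congested) skipping = true; // start dropping whole GoPs

      if (skipping) {
        // Only resume once the buffer has drained AND we're at a keyframe,
        // since the decoder can't pick up mid-GoP.
        if (congested || chunk.type !== "key") return;
        skipping = false;
      }

      const buf = new ArrayBuffer(chunk.byteLength);
      chunk.copyTo(buf);
      ws.send(buf);
    }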


Yeah, technically it's SCTP over DTLS for data channels. Only the media layer (SRTP) gets to use raw UDP, limiting the scope.


I like to frame WebTransport as multiple WebSocket connections to the same host, but using a shared handshake.

It's common to multiplex messages over a single WebSocket connection, but you don't need that with WebTransport, and you avoid head-of-line blocking between streams too.

But yeah I wish WebTransport had a better TCP fallback. I still use WebSocket for that.
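To illustrate the framing (rough sketch; the URL handling is hand-wavy): feature-detect WebTransport, open a few independent streams over one handshake, and fall back to a single WebSocket otherwise.

    async function connect(url: string) {
      if ("WebTransport" in globalThis) {
        const wt = new WebTransport(url);
        await wt.ready;

        // Each stream is its own ordered byte pipe; a stall on one stream
        // doesn't block the others (no cross-stream head-of-line blocking).
        const control = await wt.createBidirectionalStream();
        const media = await wt.createBidirectionalStream();
        return { control, media };
      }

      // Fallback: one WebSocket, so you're back to multiplexing yourself.
      const ws = new WebSocket(url.replace("https", "wss"));
      await new Promise((ok, err) => { ws.onopen = ok; ws.onerror = err; });
      return { ws };
    }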


The thing is, I very rarely (or arguably never) have a use case which requires that one client has multiple connections to the same server. The thing I want is almost always to have a bidirectional stream of messages where messages arrive in order. Essentially a simple message envelope protocol on top of TCP.


For sure, if you want an ordered/reliable stream then WebSocket is ideal. WebTransport is useful when you also want prioritization and semi-reliable networking, similar in concept to WebRTC data channels.
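The semi-reliable part looks something like this (sketch; datagrams are best-effort, dropped instead of retransmitted, much like an unreliable WebRTC data channel):

    async function sendUnreliable(wt: WebTransport, payload: Uint8Array) {
      const writer = wt.datagrams.writable.getWriter();
      await writer.write(payload); // may be silently dropped by the network
      writer.releaseLock();
    }

    // Prioritization is a stream property; newer Chromium lets you hint it
    // when opening a stream (check support before relying on it):
    // const audio = await wt.createUnidirectionalStream({ sendOrder: 100 });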


I maintain https://github.com/kixelated/web-transport

But yeah the HTTP/3 integration definitely makes WebTransport harder to support. The QUIC connection needs to be shared between HTTP/3 and WebTransport.


There's no probing in any QUIC implementation but it's possible. There's a QUIC extension in the IETF similar to transport-wide-cc but it would still be up to the browser to use it for any upload CC.


SCTP, and by extension WebRTC data channels, is supposed to use the same congestion control algorithms as TCP/QUIC. But I don't know which CC algorithm libsctp uses these days.

WebTransport in Chrome currently uses CUBIC, but the Google folks want to turn on BBR everywhere. It uses the same QUIC implementation as HTTP/3, so it's going to be more battle-hardened.


SCTP: The FORWARD-TSN chunk was introduced to support selective unreliability: it allows the sender to tell the receiver that it will not retransmit some number of chunks, and requests that the receiver consider all these chunks as received.


QUIC has a much better alternative to FORWARD-TSN, either via RESET_STREAM or QUIC datagrams.

I've implemented SCTP before to hack in "datagram" support by spamming FORWARD-TSN. Fun fact: you can't use FORWARD-TSN if there's still reliable data outstanding. TSNs are sequential after all, so you have to drop all or nothing.

QUIC as a protocol is significantly better than SCTP. I really recommend reading the RFC (RFC 9000).
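For example, the RESET_STREAM version in WebTransport terms (sketch; the stream-per-GoP framing and deadline are my assumptions): put each GoP on its own stream and abort it when it goes stale. Aborting maps to a QUIC RESET_STREAM, and unlike FORWARD-TSN it only drops that one stream.

    async function sendGop(wt: WebTransport, gop: Uint8Array, deadlineMs: number) {
      const stream = await wt.createUnidirectionalStream();
      const writer = stream.getWriter();

      const timeout = setTimeout(() => {
        writer.abort("stale"); // RESET_STREAM: give up on just this GoP
      }, deadlineMs);

      try {
        await writer.write(gop);
        await writer.close();
      } catch {
        // aborted past the deadline; the receiver simply skips this GoP
      } finally {
        clearTimeout(timeout);
      }
    }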


Wow, thanks for the tip!


I had a 30 minute intro call and got a rejection a few days later. It was VERY timely (Sep 21 email, Sep 23 meeting, Sep 26 rejection).

We both weren't sure if my project was a good fit for the program. It was still a positive experience, and they were nice enough to offer me an intro to a more relevant team within OpenAI.

I couldn't quite figure out the goal of Grove. The line about "pre-idea" individuals, and of course the referral offer, made me feel that it's more of a hiring pipeline and not a traditional incubator. But we'll see when they announce the cohort.


Hi! So it's about scouting architects and the like? What will happen to the ideas of those who don't get picked?


Would you mind sharing what you're working on?


Thanks!

You don't need multicast! CDNs effectively implement multicast, with caching, in L7 instead of relying on routers and ISPs to implement it in L3. That's actually what I did at Twitch for 5 years.
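The core of it is just application-level fan-out. A toy version (Node with the ws package; the /publish path and port are made up, and a real CDN adds caching on top):

    import { WebSocketServer, WebSocket } from "ws";

    const wss = new WebSocketServer({ port: 8080 });
    const viewers = new Set<WebSocket>();

    wss.on("connection", (sock, req) => {
      if (req.url === "/publish") {
        // One copy in from the broadcaster, N copies out at the edge:
        // the same fan-out that L3 multicast would do in routers.
        sock.on("message", (data) => {
          for (const viewer of viewers) viewer.send(data);
        });
      } else {
        viewers.add(sock);
        sock.on("close", () => viewers.delete(sock));
      }
    });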

In theory, multicast could reduce the traffic from CDN edge to ISP, but only for the largest broadcasts of the year (e.g. the Super Bowl). A lot of CDNs get around this by putting CDN edges within ISPs. Smaller events don't benefit because of the low probability of two viewers sharing the same path.

There are other issues with multicast, namely congestion control and encryption. Not unsolvable, but the federated nature of multicast makes things more difficult to fix.

Multicast would benefit P2P the most, but I just don't see it catching on given how huge CDNs have become. Even WebRTC, which uses RTP (designed with multicast in mind), has shown no interest in supporting it. But I did hear a rumor that Google was using multicast for Meet within their network, so maaaybe?


I answered in another reply, but client -> server protocols like TCP and QUIC don't have an issue traversing NATs. The biggest problem you'll run into is corporate firewalls blocking UDP, but hopefully HTTP/3 adoption helps with that (UDP port 443, same as WebTransport).


My fault, I was trying too hard to avoid rehashing previous blog posts: https://moq.dev/blog/replacing-webrtc/

And you're right that MoQ primarily benefits developers, not end users. It makes it a lot easier to scale and implement features, so the benefits are indirect.


Does your queue solution solve the SFU problem? The P2P QUIC IETF draft expired, which stinks.

