
Does such a scheme continue to work well for high-bandwidth, low-latency traffic (videoconferencing, potentially livestreaming)?

Essentially, I believe such a scheme would break down if you had, for example, 4N of bandwidth, three users each averaging N with the ability to buffer, and one user also averaging N but fluctuating by ±0.5N without the ability to buffer. I don't think the scheme you described would work in that case, but an "intelligent" provider could give everyone in this situation a "perfect" experience.

Granted, I'm not sure how realistic the scenario I just described is, but still.
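
To put rough numbers on it (these are my own toy figures, just to make the arithmetic concrete): with capacity 4N, three steady users at N each, and the fourth user swinging between 0.5N and 1.5N, total demand at the fourth user's peak is 4.5N, which exceeds the line.

    # Toy illustration of the scenario above; N and the rates are arbitrary.
    N = 10.0
    capacity = 4 * N
    steady_demand = 3 * N                      # users 1-3, steady at N each
    for user4 in (0.5 * N, 1.0 * N, 1.5 * N):  # user 4 at trough, average, peak
        total = steady_demand + user4
        status = "fits" if total <= capacity else f"over by {total - capacity}"
        print(f"user 4 at {user4}: demand {total} vs capacity {capacity} -> {status}")

At the peak, something has to give: either the burst is absorbed by a buffer somewhere, or packets are dropped, or the other three users yield some bandwidth.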



> if you had, for example, 4N of bandwidth, three users each averaging N with the ability to buffer, and one user also averaging N but fluctuating by ±0.5N without the ability to buffer.

Are you referring to buffering in the endpoint, as for video streaming? The buffers I was referring to are the queues in the network itself. Properly managed queues can absorb bursts of traffic but will otherwise maintain a steady state of minimal buffer occupancy, so packets spend minimal time waiting in the buffer even when the line is running at full capacity. Even when a user is experiencing packet drops as a congestion signal, the packets that make it through the bottleneck will do so without undue delay.
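
As a rough sketch of what "properly managed" means here (a simplified, illustrative take on CoDel-style queue management, not the real algorithm, which ramps its drop frequency with a control law): forward everything as long as packets aren't sitting in the queue for long, and only start dropping once the queueing delay has stayed above a small target for a sustained interval.

    from collections import deque

    TARGET = 0.005    # 5 ms: acceptable standing queue delay (illustrative)
    INTERVAL = 0.100  # 100 ms: how long delay may exceed TARGET before dropping

    class ToyCoDelQueue:
        def __init__(self):
            self.q = deque()          # (enqueue_time, packet)
            self.drop_after = None    # deadline once delay first exceeds TARGET
            self.drops = 0

        def enqueue(self, now, packet):
            self.q.append((now, packet))

        def dequeue(self, now):
            while self.q:
                enq_time, packet = self.q.popleft()
                sojourn = now - enq_time       # time spent waiting in the queue
                if sojourn < TARGET:
                    # Queue is draining promptly; a short burst is simply absorbed.
                    self.drop_after = None
                    return packet
                if self.drop_after is None:
                    # Delay just crossed the target: start a grace interval
                    # instead of reacting to a momentary burst.
                    self.drop_after = now + INTERVAL
                    return packet
                if now < self.drop_after:
                    # Still within the grace interval: keep forwarding.
                    return packet
                # A standing queue has persisted too long: drop this packet
                # as a congestion signal and try the next one.
                self.drops += 1
            return None

The effect is that brief bursts ride through with at most a few milliseconds of added delay, while only a persistent standing queue triggers drops.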

If I understand your hypothetical correctly, user number 4 has higher latency sensitivity than users 1-3, but they're all trying to use at least their full fair share of the bandwidth. Furthermore, users 1-3 are transferring at a fairly steady rate, indicating that their traffic is being managed by a relatively intelligent endpoint that is using something like TCP BBR.

User 4's experience will depend on the timescale of his traffic fluctuations. Short bursts of traffic will get buffered, so the tail end of a burst may experience a few milliseconds of delay (and also induce a few milliseconds of delay on the neighbors' traffic); but if the burst is large enough that it would monopolize the line for tens of milliseconds, packets will start getting dropped as a congestion signal, and user 4 will experience most of those drops. On a longer timescale of seconds, if user 4 is still trying to use more than his fair share of bandwidth despite having had enough time for congestion signals to make a round trip, then user 4's packets are going to get dropped as much as necessary to keep them under control, because at that point his traffic is behaving badly.
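
Here's a tiny simulation of that timescale distinction (my own numbers, with a simple drop-tail bound standing in for the real congestion response): a link draining at user 4's fair share is fed bursts at three times that rate. A 5 ms burst fits in the queue with single-digit milliseconds of delay; a 200 ms burst overflows it and starts taking drops.

    LINK_RATE = 10        # packets drained per millisecond (user 4's fair share)
    MAX_DELAY_MS = 20     # queue depth tolerated, expressed as ms of backlog

    def simulate(burst_rate, burst_ms):
        """Feed burst_rate pkts/ms for burst_ms; return (delivered, dropped, worst delay)."""
        queue = 0
        delivered = dropped = 0
        worst_delay = 0.0
        for _ in range(burst_ms):
            queue += burst_rate                  # burst arrives
            limit = MAX_DELAY_MS * LINK_RATE
            if queue > limit:
                dropped += queue - limit         # congestion signal
                queue = limit
            sent = min(queue, LINK_RATE)         # line drains at the fair share
            delivered += sent
            queue -= sent
            worst_delay = max(worst_delay, queue / LINK_RATE)
        return delivered, dropped, worst_delay

    print(simulate(burst_rate=30, burst_ms=5))    # short burst: no drops, ~10 ms delay
    print(simulate(burst_rate=30, burst_ms=200))  # sustained overload: heavy drops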

In the real world, Netflix-style video streaming tends to be fairly well-behaved, dropping to lower resolutions in response to congestion. It is also fairly latency-tolerant because of client-side buffering. Interactive videoconferencing is more latency-sensitive but has a similar congestion response and is almost as loss-tolerant. Video games, DNS lookups, and early-stage connection handshakes are all relatively unresponsive to congestion signals and very latency-sensitive. But because those are almost never the flows using the most bandwidth, they're rarely first in line to be dropped in the event of congestion, and they're usually the first packets to be forwarded by an fq_codel-style traffic manager.
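
For what it's worth, the mechanism behind that last point can be sketched roughly like this (a simplified deficit-round-robin toy loosely in the spirit of fq_codel's flow separation, not the Linux implementation; the packet fields and quantum are made up): each 5-tuple gets its own queue, flows that were just idle are served from a priority list, and bulk flows take turns behind them.

    from collections import deque

    class ToyFQScheduler:
        """Simplified DRR scheduler with a new-flow fast path."""

        def __init__(self, quantum=1514):
            self.quantum = quantum      # bytes a flow may send per round
            self.queues = {}            # flow_id -> deque of (size, packet)
            self.deficit = {}           # remaining byte credit per flow
            self.new_flows = deque()    # flows that just became active (served first)
            self.old_flows = deque()    # flows with an ongoing backlog
            self.listed = set()         # flows currently on either list

        def _flow_id(self, pkt):
            # Hash the 5-tuple so each conversation gets its own queue.
            return hash((pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"]))

        def enqueue(self, pkt, size):
            fid = self._flow_id(pkt)
            self.queues.setdefault(fid, deque()).append((size, pkt))
            if fid not in self.listed:
                # A previously idle flow starts on the new-flow list, which is
                # why sparse, latency-sensitive traffic (DNS, game packets,
                # handshakes) gets forwarded ahead of long-running bulk flows.
                self.deficit[fid] = self.quantum
                self.new_flows.append(fid)
                self.listed.add(fid)

        def dequeue(self):
            while self.new_flows or self.old_flows:
                flow_list = self.new_flows if self.new_flows else self.old_flows
                fid = flow_list[0]
                queue = self.queues.get(fid)
                if not queue:
                    flow_list.popleft()            # flow went idle
                    self.listed.discard(fid)
                    continue
                size, pkt = queue[0]
                if self.deficit[fid] < size:
                    # Flow used up its quantum: refresh it and rotate the flow
                    # to the back of the bulk (old-flow) list.
                    self.deficit[fid] += self.quantum
                    flow_list.popleft()
                    self.old_flows.append(fid)
                    continue
                queue.popleft()
                self.deficit[fid] -= size
                return pkt
            return None

The heavy flows still get their fair share; they just absorb the drops and the queueing delay when the line is contended, while the sparse flows skip the line.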



