
It's usually layers of HLS at that. For live broadcasts, someone has a camera somewhere. Bounce that from the sports stadium up to a satellite, and someone else has a dish pulling that feed back down. So far so good, low latency.

But that place pulling down the feed usually isn't the streaming service you're watching! There are third parties in that space, and third-party aggregators of channel feeds, and you may have a few hops before the files land at whichever "streaming cable" service you're watching on. So even if they do everything perfectly on the delivery side, you could already be 30s behind: those media files and HLS playlist files have already been buffered a couple of times, because they can arrive late or out of order at any of those middleman steps. Going further and cutting all the acquisition latency out? That wasn't something commonly talked about a few years ago when I was exposed to the industry. It got complained about once a year for the Super Bowl, and then fell down the backlog. You'd likely want to own signal acquisition in-house and build a completely different sort of CDN network.
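To put rough numbers on that, here's a back-of-the-envelope sketch (Python, with assumed figures, not measurements from any real deployment) of how whole-segment buffering stacks up: every hop that waits for complete segments adds multiples of the segment duration before the player's own startup buffer even comes into play.

    # Back-of-the-envelope estimate of HLS glass-to-glass latency.
    # All numbers below are assumptions for illustration, not measurements.

    SEGMENT_SECONDS = 6            # EXT-X-TARGETDURATION; 6 s was a common default
    PLAYER_BUFFER_SEGMENTS = 3     # classic HLS players queue ~3 segments before playing
    MIDDLEMAN_HOPS = [1, 1]        # segments held at each aggregator/repackager hop (assumed)

    packaging = SEGMENT_SECONDS                        # a segment can't be published until fully recorded
    middlemen = sum(MIDDLEMAN_HOPS) * SEGMENT_SECONDS  # each hop re-buffers whole segments
    player = PLAYER_BUFFER_SEGMENTS * SEGMENT_SECONDS  # startup buffer on the viewer's device

    total = packaging + middlemen + player
    print(f"~{total} s behind live, before CDN and network delays are added")
    # With these assumptions: ~36 s. Shorter segments help, but only until the
    # per-request overhead of many tiny files starts hurting delivery.

With numbers like these you land in the same ballpark as the "already 30s behind" figure above, before the serving side has done anything wrong at all.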

Last I talked to someone familiar with it, the stuff that does care about low latency (like game streaming services) works much more like what you describe, with custom protocols.




The funny thing is that the web used to have a well-supported low latency streaming protocol… and it was via Flash. When the world switched away from Flash, we created a bunch of CDN-friendly formats like HLS, but by design they couldn't be low latency.

And it broke all my stuff, because I was relying on low latency. And I remember reading around at the time: not a single person talked about the loss of a low latency option, so I just assumed no one cared about low latency.


Flash "low latency" was just RTMP. CDNs used to offer RTMP solutions, but they were always priced significantly higher than their corresponding HTTP solutions.

When the iPhone came out, HTTP video was the ONLY way to stream video to it. It was clear Flash would never be supported on the iPhone. Flash was also a security nightmare.

So in that environment, the options were:

1) Don't support video on iOS

2) Build a system that can deliver video to iOS, but keep the old RTMP infrastructure running too.

3) Build a system that can deliver video to iOS and deprecate the old RTMP infrastructure. This option also has the byproduct of reducing bandwidth bills.

For a company, Option 3 is clearly the best choice.

edit: And for the record, latency was discussed a lot during that transition (maybe not very publicly). But between needing iOS support and reducing bandwidth costs, latency was a problem that got deferred until later.


I’m familiar with all of what you’re saying. I set up RTMP servers.

I'm talking more from the standpoint of Apple or Google. HLS is by Apple, after all.


Google puts quite a lot of effort into low latency broadcast for their YouTube Live product. They have noticed that they get substantially more user retention with a few seconds of latency vs a minute. When setting up a livestream, there are even choices for the user to trade quality for latency.

That's mostly because streamers want to interact with their audience, and lag there ruins the experience.



