It's not at all unimportant. It's part of the reason people are upset that they paid for an XXX Mbps connection but "only get XXX Mbps" of throughput.
In most households having a limited single-stream throughput isn't a big deal, but it IS a big deal specifically for things like your 4K streaming. It's VERY difficult to do TRUE sustained 4K streaming over any kind of distance. If your ISP happens to have a Netflix caching box at the local PoP, you're golden. If you need to traverse any kind of distance across the internet - good luck.
edit: I guess I should've also said, more to the point, it's why gigabit hasn't gotten the legs most people were hoping for. When you look at the per-stream throughput on that site, in general with any kind of packet loss you're talking about 10 Mbps. Who has 100 different streams in an average household?
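The ~10 Mbps figure falls out of the Mathis et al. loss-limited TCP model, throughput <= MSS / (RTT * sqrt(loss)). A minimal sketch; the RTT and loss rates below are illustrative assumptions, not values from the linked calculator:

```python
import math

# Mathis et al. model: loss-limited TCP throughput <= MSS / (RTT * sqrt(loss)).
def mathis_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    bits_per_second = mss_bytes * 8 / (rtt_ms / 1000)  # MSS / RTT
    return bits_per_second / math.sqrt(loss_rate) / 1e6

print(mathis_mbps(1460, 40, 0.0001))  # ~29 Mbps at 40 ms RTT with 0.01% loss
print(mathis_mbps(1460, 40, 0.001))   # ~9 Mbps: even light loss caps a stream near 10 Mbps
```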
The calculator you linked to assumes a maximum TCP window size of 64 KB for everything but "replication". TCP window scaling has been on by default in every major OS for a decade or more, allowing much larger windows and therefore much greater throughput. It's true that latency puts a ceiling on TCP throughput, but it's nowhere near as low as your calculator indicates.
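To make that concrete, here's a minimal sketch of the window/RTT ceiling. The 11 ms RTT matches the NJ example further down; the 1 MB window is just an assumed scaled value:

```python
# Single-stream TCP ceiling: at most one window can be in flight per round trip.
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

rtt_ms = 11.0
print(max_throughput_mbps(64 * 1024, rtt_ms))    # ~47.7 Mbps: the fixed 64 KB assumption
print(max_throughput_mbps(1024 * 1024, rtt_ms))  # ~762 Mbps: a scaled 1 MB window
```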
The cause is a bit cyclical though, isn't it? We don't need Gbit because most people don't use that much. But most people don't use that much because consumer apps/devices rarely need to use that much bandwidth. But consumer apps/devices are designed to not use that much bandwidth because the average consumer doesn't have Gbit internet. And the average consumer doesn't have Gbit because they don't use that much. Etc.
Commonly called a "chicken-and-egg" problem, or more often "circular" rather than "cyclical", though the words are nearly synonymous. Just a vocabulary nit, not disagreeing with the point.
> It's VERY difficult to do TRUE sustained 4K streaming over any kind of distance.
Your calculator is bunk. I've got a dedicated server in NJ, 11 milliseconds away. Gigabit on that end, 150/150 on mine. Your calculator says I should get under 50 Mbps for file downloads, but I can definitely saturate my local link with both downloads and uploads.
I'm not sure what to tell you. It's based on a sound mathematical formula and the RFC that dictates how TCP operates. The only way you're maxing out gigabit with 11 ms latency on a single stream is if something on the pipe is telling the TCP/IP stack to violate the RFC, i.e. a WAN-optimization appliance like Cisco WAAS, Silver Peak, or Riverbed.
There are literally hundreds of TCP throughput calculators out there and they all use the same formula.
TCP throughput is limited by window size divided by round-trip time (i.e. how much you can send before having to wait for an ACK). With RFC 1323 window scaling, you can use a window well above the 64 KB limit that would otherwise exist, up to a gigabyte. With an RTT of 11 ms, you can saturate a 150 Mbps link with a window of roughly 200 KB.
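The same arithmetic as a sketch (the bandwidth-delay product, i.e. the window needed to keep a given link full; link speeds and RTT taken from the examples in this thread):

```python
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
def window_needed_kb(link_mbps: float, rtt_ms: float) -> float:
    return link_mbps * 1e6 / 8 * (rtt_ms / 1000) / 1024

print(window_needed_kb(150, 11))   # ~201 KB for 150 Mbps at 11 ms
print(window_needed_kb(1000, 11))  # ~1343 KB (~1.3 MB) for gigabit at 11 ms
```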
I have a cheap shared server on the other side of the planet (250 ms latency). Your calculator says I can get at most 2 Mbps(!), which is complete crap. With a single stream (rsync) I typically get around 80 Mbps.
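Working backwards from those numbers shows the window the stack must actually be negotiating; a small sketch, assuming throughput ≈ window / RTT:

```python
# Invert the window/RTT bound: what window does an observed rate imply?
def implied_window_kb(throughput_mbps: float, rtt_ms: float) -> float:
    return throughput_mbps * 1e6 / 8 * (rtt_ms / 1000) / 1024

print(implied_window_kb(2, 250))   # ~61 KB: the calculator's fixed 64 KB window
print(implied_window_kb(80, 250))  # ~2441 KB: a ~2.4 MB scaled window in practice
```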
> In most households having a limited single-stream throughput isn't a big deal, but it IS a big deal specifically for things like your 4K streaming. It's VERY difficult to do TRUE sustained 4K streaming over any kind of distance. If your ISP happens to have a Netflix caching box at the local PoP, you're golden. If you need to traverse any kind of distance across the internet - good luck.
We're talking about TCP, but most 4K streaming shouldn't be using TCP, right? Especially not if you actually have 100 MB/s of bandwidth.
I'm happy if someone has a reference stating otherwise, but it's my understanding that both YouTube and Netflix use TCP for their video streams, whether 4K or not.
Video buffering. With TCP you can ensure delivery of the traffic ahead of time, retransmitting if necessary before a particular segment needs to be played.
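A rough sketch of that logic: as long as single-stream throughput stays above the video bitrate, the player banks extra seconds of buffer that can absorb retransmission stalls. The 25 Mbps bitrate is roughly the commonly cited 4K figure; the 40 Mbps throughput is an assumption:

```python
# Buffer accumulated beyond real-time playback while downloading over TCP.
def buffer_gained_s(throughput_mbps: float, bitrate_mbps: float, wall_clock_s: float) -> float:
    return wall_clock_s * (throughput_mbps / bitrate_mbps - 1)

print(buffer_gained_s(40, 25, 60))  # +36 s of buffer per minute of playback
```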