Single-stream throughput is directly tied to latency and jitter - the more of either you have, the less throughput a single stream can sustain.
It is nearly impossible to push 1Gb/sec across the public internet because of the latency and jitter that get introduced via physical distance and multiple hops across multiple networks.
Outside of hitting a server within your metro area, and likely on Google's backbone itself, it would be nearly impossible to hit a gigabit (~110MB/sec) of throughput with a single stream of data. A tool like this does a nice job of doing the math for you: https://www.silver-peak.com/calculator/throughput-calculator
A single stream of data is a bit unimportant when you've got BitTorrent and potentially multiple users on a home network. I'm getting gigE this year, and between 4K streaming, BitTorrent, and two teenage daughters, I fully expect to saturate the connection.
It's not at all unimportant. It's part of the reason people are upset that they paid for an XXX Mbps connection but "only get" a fraction of that in actual throughput.
In most households having a limited single-stream throughput isn't a big deal, but it IS a big deal specifically for things like your 4K streaming. It's VERY difficult to do TRUE sustained 4K streaming over any kind of distance. If your ISP happens to have a Netflix caching box at the local pop you're golden. If you need to traverse any kind of distance across the internet - good luck.
edit: I guess I should've also said, more to the point, it's why gigabit hasn't gotten the legs most people were hoping for. When you look at the per-stream throughput on that site - in general if you have any kind of packet loss you're talking about 10Mbps. Who has 100 different streams in an average household???
The calculator you linked to assumes a maximum TCP window size of 64KB for everything but "replication". TCP window scaling has been on by default in every major OS for 10 years or more, allowing much greater throughput. It's true that latency sets a limit on TCP throughput but it's not nearly as bad as your calculator would indicate.
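To put rough numbers on that, here's a back-of-the-envelope sketch (plain Python, loss ignored, numbers purely illustrative) of the window/RTT bound, comparing the 64 KB unscaled window the calculator assumes with a 4 MB scaled window:

```python
# Single-stream TCP throughput is bounded by window_size / RTT.
# Compare the 64 KB unscaled window (what the calculator appears to assume)
# with a 4 MB window, which RFC 1323/7323 window scaling easily allows.

def max_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound on single-stream throughput in Mbps, ignoring loss."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

for rtt in (5, 11, 50, 100, 250):
    unscaled = max_throughput_mbps(64 * 1024, rtt)        # ~47.7 Mbps at 11 ms
    scaled = max_throughput_mbps(4 * 1024 * 1024, rtt)    # ~3050 Mbps at 11 ms
    print(f"RTT {rtt:3d} ms: 64 KB window -> {unscaled:7.1f} Mbps, "
          f"4 MB window -> {scaled:8.1f} Mbps")
```

With a 64 KB window you get ~48 Mbps at 11 ms and ~2 Mbps at 250 ms, which is exactly what the calculator predicts; with a scaled window the bound stops being the interesting limit.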
The cause is a bit cyclical though, isn't it? We don't need Gbit because most people don't use that much. But most people don't use that much because consumer apps/devices rarely need to use that much bandwidth. But consumer apps/devices are designed to not use that much bandwidth because the average consumer doesn't have Gbit internet. And the average consumer doesn't have Gbit because they don't use that much. Etc.
Commonly called a "chicken-and-egg" problem, or more often "circular" rather than "cyclical" despite the words being nearly synonyms. Just a vocabulary nit, not disagreeing with the point.
> It's VERY difficult to do TRUE sustained 4K streaming over any kind of distance.
Your calculator is bunk. I've got a dedicated server in NJ, 11 milliseconds away. Gigabit on that end, 150/150 on mine. Your calculator says I should get under 50 Mbps for file downloads, but I can definitely saturate my local link with both downloads and uploads.
I'm not sure what to tell you. It's literally based on a sound mathematical formula and the RFC that dictates how TCP operates. The only way you're getting maxed-out gigabit with 11ms latency on a single stream is if you've got something on the pipe telling the TCP/IP stack to violate the RFC - i.e., a piece of hardware like Cisco WAAS, Silver Peak, Riverbed, etc.
There are literally hundreds of TCP throughput calculators out there and they all use the same formula.
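I don't know exactly which variant that calculator implements, but the classic loss-based formula is the Mathis et al. approximation, throughput ≈ (MSS / RTT) · (C / √p). A rough sketch with made-up RTT and loss rates, just to show the shape of it:

```python
import math

# Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= (MSS / RTT) * (C / sqrt(p))
# where p is the packet loss rate and C is ~1 (often quoted as sqrt(3/2)).
# Illustrative only; real stacks (CUBIC, BBR) behave differently.

MSS = 1460           # bytes, typical Ethernet MSS
C = math.sqrt(1.5)   # constant from the simple derivation

def mathis_mbps(rtt_ms, loss_rate):
    return (MSS * 8 / (rtt_ms / 1000.0)) * (C / math.sqrt(loss_rate)) / 1e6

print(mathis_mbps(80, 0.01))     # ~1.8 Mbps at 1% loss
print(mathis_mbps(80, 0.001))    # ~5.7 Mbps at 0.1% loss
print(mathis_mbps(80, 0.0001))   # ~18 Mbps at 0.01% loss
```

At cross-country RTTs with even modest loss, a single stream lands in the low tens of Mbps, which is the ballpark mentioned upthread.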
TCP throughput is limited by window size divided by round-trip time (i.e., how much you can send before having to wait for an ACK). With RFC 1323, you can specify a window size well above the 65,535-byte limit that would otherwise exist. With scaling you can have a window size up to a gigabyte. With an RTT of 11 ms, you can saturate a 150 Mbps link with a window size of roughly 200 KB.
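To put numbers on that, a quick sketch of the window = bandwidth × RTT arithmetic for a few link/RTT combinations (illustrative only):

```python
# Minimum TCP window needed to keep a link full: window = bandwidth * RTT.
# With RFC 1323/7323 window scaling the window can grow to 1 GiB, so none of
# these cases come anywhere near the protocol limit.

def window_needed_bytes(link_mbps, rtt_ms):
    return (link_mbps * 1e6 / 8) * (rtt_ms / 1000.0)

for mbps, rtt in [(150, 11), (1000, 11), (80, 250)]:
    kib = window_needed_bytes(mbps, rtt) / 1024
    print(f"{mbps:4d} Mbps at {rtt:3d} ms RTT -> ~{kib:,.0f} KiB window")
```

Saturating 150 Mbps at 11 ms needs ~200 KiB in flight; a gigabit at 11 ms needs ~1.3 MiB; even 80 Mbps at 250 ms only needs ~2.4 MiB. All trivially within what window scaling allows.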
I have a cheap shared server on the other side of the planet (250ms latency). Your calculator says I can get at most 2 Mbps(!), which is complete crap. With a single stream (rsync) I typically get around 80 Mbps.
> In most households having a limited single-stream throughput isn't a big deal, but it IS a big deal specifically for things like your 4K streaming. It's VERY difficult to do TRUE sustained 4K streaming over any kind of distance. If your ISP happens to have a Netflix caching box at the local pop you're golden. If you need to traverse any kind of distance across the internet - good luck.
We're talking about TCP - but most 4K streaming shouldn't be using TCP, right? Especially not if you actually have 100MB/s of bandwidth.
I'm happy if someone has a reference stating otherwise, but it's my understanding that both Youtube and Netflix utilize TCP for their video streams whether it's 4K or not.
Video buffering. With TCP you can ensure delivery of the traffic ahead of time, retransmitting if necessary before a particular segment needs to be played.
Using the 280MB LibreOffice source .deb from Ubuntu as a test, downloaded from the Ubuntu mirrors, which are hosted on a mixture of commercial and academic networks:
From here in Denmark, I have no problem downloading at over 95MB/s from servers in the Nordic countries. 111MB/s from the Norwegian mirror!
From the rest of Europe, including examples like Bosnia (60MB/s) and Russia (55MB/s), I get at least 50MB/s, and 70-80MB/s from GB, DE, NL, etc.
Around 70MB/s also for Singapore, South Africa, Canada.
This isn't 1Gb/s (125MB/s), but it's a lot faster than the 0.002Gb/s that tool predicts.
Raptor codes + UDP can saturate any link - single stream, too. (And they guarantee data integrity, so there's no need for TCP.)
Try http://openrq-team.github.io/openrq/ for example.
You are assuming that TCP with the current congestion-avoidance algorithms is the only transport option. When you have bigger pipes giving you more bits to pack/unpack, you might find an era of newer transport protocols that do specific tasks more efficiently while increasing your overall throughput for specific use cases. I can think of video streaming, gaming, file storage, etc., that might add up to filling the pipe to the ceiling.
Care to clarify? Throughput is determined by the bandwidth-delay product, which is a function of bandwidth and, just as critically, latency (literally bandwidth * RTT). And jitter is just variable latency.
The bandwidth delay product is the amount of data in flight. That's not throughput. You can have a huge amount of data in flight on a fast connection just fine. Someone else said the calculator assumes a max window size of 64KB which is extremely unrealistic.
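To make the distinction concrete, a small sketch (illustrative): the BDP is how much data has to be in flight - i.e., the window you need - to sustain a given rate at a given RTT; it isn't itself a cap on the rate.

```python
# BDP = link_rate * RTT = data that must be "in flight" to keep the pipe full.
# On a 1 Gbps link the achievable rate stays 1 Gbps as long as the sender's
# window is at least the BDP (and loss stays low); higher RTT just means more
# data in flight, not less throughput.

LINK_GBPS = 1.0

for rtt_ms in (5, 11, 50, 100, 250):
    bdp_bytes = (LINK_GBPS * 1e9 / 8) * (rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:3d} ms -> BDP ~{bdp_bytes / 1e6:5.1f} MB in flight "
          f"to sustain 1 Gbps")
```

Even at 250 ms that's ~31 MB in flight, which a scaled window (up to 1 GiB) can cover.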