
The FTTH offerings from Ziply, Cox, Comcast, Google, ATT, Centurylink etc. are all the same "shared media with high oversubscription" design too. Among them the typical ratio is ~32 PON users for a given "base rate" PON standard, similar to the typical ~20-50 for coax at a given DOCSIS standard. Both have better and worse examples (early gigabit PON deployments in particular were often 64-way splits), but FTTH has rarely been about getting dedicated bandwidth up to the neighborhood box... honestly, most of the time the lion's share of the benefit is "it's a sign your area just got upgraded cabling and equipment for the first time in many, many years" rather than anything to do with the physical properties of the wire.

For GPON that's 2.5 Gbps downstream and 1.25 Gbps upstream. So with a 32-way split it's the same story: 2-3 people downloading a popular Linux ISO at 980 Mbps can still eat up the entire fiber line for all 32 people.
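Back-of-the-envelope, the sharing math looks roughly like this (illustrative Python with a flat 2.5 Gbps shared downstream and an even split assumed; real GPON schedulers are smarter than this):

    GPON_DOWN_GBPS = 2.5   # nominal shared downstream for the whole PON
    SPLIT = 32             # subscribers on one feeder fiber

    # If everyone pulled data at once, each would see roughly:
    print(f"fair share: ~{GPON_DOWN_GBPS * 1000 / SPLIT:.0f} Mbps")   # ~78 Mbps

    # But it only takes a few heavy users to exhaust the segment:
    heavy, rate_mbps = 3, 980
    print(f"{heavy} users at {rate_mbps} Mbps = {heavy * rate_mbps / 1000:.2f} Gbps "
          f"of a {GPON_DOWN_GBPS} Gbps pipe")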

The difference on the fiber, outside the better upload symmetry we already see, is that it will be able to scale a lot more in the future. Some places already have 10G PON (which, unlike GPON, usually actually runs at said speed), such as where ATT offers 2G and 5G symmetric service. The next step will be 25G PON (again, roughly its nominal speed).



> The FTTH offerings from Ziply, Cox, Comcast, Google, ATT, Centurylink etc are all the same "shared media with high oversubscription" design too.

No, they're really not. You can't compare single-strand FTTH on singlemode fiber built on 10G XGSPON tech (16:1 or 32:1 contention ratio) to something built on bonded RF channels over coax copper. The aggregate capacity per oversubscribed network segment is radically different.

Now, all of these cable operators ARE also building actual FTTH networks in certain areas, because they see the writing on the wall for how much more they can squeeze out of the copper. So in some very specific places the Comcast 1 Gbps last mile product is functionally equivalent to the local Verizon, Ziply, or Lumen (CenturyLink, now branded as Quantum Fiber) FTTH product.


DOCSIS 3.1 is extremely comparable to PON in terms of oversubscription design (in bandwidth and allocation breakout). The largest difference between the two is that DOCSIS uses dynamic bonding of OFDM channels to chop up its ~10 Gbps of bandwidth, while 10G-PON uses TDM to slice up the bandwidth.

The physical medium itself really has next to nothing to do with it. You can do TDM and OFDM on both fiber and coax. The bandwidth is a function of the total spectrum and the modulation methods.
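A minimal sketch of that point: whether the segment is ~10 Gbps of bonded OFDM on coax or a 10G-PON TDM domain, an active user's throughput is bounded the same way by the aggregate capacity (toy fair-share model, not any real scheduler):

    def per_user_throughput_mbps(segment_gbps, active_users, demand_per_user_mbps):
        # Toy fair-share model: only aggregate capacity matters, not whether
        # it's carved up by OFDM channel bonding (DOCSIS) or TDM grants (PON).
        capacity_mbps = segment_gbps * 1000
        total_demand = active_users * demand_per_user_mbps
        if total_demand <= capacity_mbps:
            return demand_per_user_mbps        # no contention yet
        return capacity_mbps / active_users    # equal split once saturated

    for tech in ("DOCSIS 3.1, ~10 Gbps of bonded OFDM", "10G-PON, TDM"):
        print(f"{tech}: {per_user_throughput_mbps(10, 15, 900):.0f} Mbps each")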

FTTH is popular for new rollouts because it's cheaper to roll out and run. It uses less power, it goes longer distances, it's cheaper to repeat if you need to, and it's cheaper to upgrade to the next generation of PON. It also has a better scaling future, but I already mentioned that above.


> The physical medium itself really has next to nothing to do with it.

Yes, it does, because actual fiber is significantly more future-proof. The same boring 9/125 fiber that's being built today is capable of 100/200/400 Gbps Ethernet with only a change in electronics at the ends. You can't say the same for coax. Even the most rudimentary DWDM with 10G OOK optics on one or two strands of fiber has vastly more capacity than anything coax-based.

No matter how you slice it, the coax has a much more limited service lifespan and an eventual capacity exhaustion problem, compared to fiber where more creative solutions can be employed in the future to grow beyond the capabilities of 10G XGSPON.

Your typical 10G XGSPON setup, with a 16:1 or 32:1 split and a single strand to the home, is only using maybe 2% to 5% of the THz of optical spectrum actually available (in the 1470 to 1610 nm range) in the fiber. There's a vast amount of empty channel space in that fiber for future bidi optic usage scenarios if you know how DWDM stuff works.
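Rough numbers behind that percentage: the 1470-1610 nm window above converted to optical bandwidth, with the XGS-PON downstream allocation around 1577 nm assumed to be the only "lit" spectrum in that window (allocation width is approximate):

    C = 299_792_458  # speed of light, m/s

    def thz(nm):
        return C / (nm * 1e-9) / 1e12

    window = thz(1470) - thz(1610)   # ~17.7 THz of optical bandwidth in that window
    lit = thz(1575) - thz(1580)      # XGS-PON downstream allocation, ~0.6 THz
    print(f"window ~{window:.1f} THz, lit ~{lit:.2f} THz -> ~{100 * lit / window:.0f}% occupied")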

The RF on coax, on the other hand, is using pretty much every viable frequency in the bands that will work on the coax and is already at its limit.


An actual Google PM called me up the other day trying to upsell me to $200 a month 20 gigabit. Said they’d give me a router but I’m free to hook whatever I want to the fiber, saturate it as much as I want, no worries. They must have a lot of extra bandwidth if this is a service they’re offering in my neighborhood.


That'd be the testing of 25G-PON. They say you can do that because they know oversubscription ratios of 32:1 really aren't such a horrible concept. Think about it - they told you and a couple dozen others to blast it with 20G up and 20G down as much as you want... and how often do you actually use anywhere near that much? When you do, how long are you actually using the full pipe? For 99.999% of home users, high last mile oversubscription makes perfect sense and allows the network to be built out SIGNIFICANTLY cheaper.

High last mile oversubscription is a net good for home consumers; it almost always works out in everyone's favor. The exception is that 1 guy in 10,000 who will actually somehow use 20 Gbps all day, every day in some neighborhood and create some drama in the news, because ISPs can't be bothered to explain why oversubscription is good to everyone who already doubts them.
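A toy Monte Carlo makes the same point: with a made-up duty cycle of a couple percent per subscriber, 32 subs sold 1 Gbps each essentially never add up to more than a 10 Gbps shared segment (numbers are illustrative, not measured):

    import random

    SUBS, PLAN_GBPS, SEGMENT_GBPS = 32, 1, 10   # 32:1 split, 1 Gbps plans, 10 Gbps shared
    ACTIVE_PROB = 0.02                          # assumed chance a sub is at full rate at any instant
    TRIALS = 100_000

    congested = 0
    for _ in range(TRIALS):
        active = sum(1 for _ in range(SUBS) if random.random() < ACTIVE_PROB)
        if active * PLAN_GBPS > SEGMENT_GBPS:
            congested += 1

    print(f"instants where demand exceeds the segment: {100 * congested / TRIALS:.3f}%")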


In the active ethernet FTTH and GPON/XGSPON last mile world, to put it in the most casual language possible, you can put a metric fuckton of 1 Gbps symmetric residential last mile users on a single 10 Gbps full duplex uplink before anyone starts to notice that they aren't seeing 1 Gbps speed tests.

Or before you can no longer claim that you are delivering 1 Gbps.

Your average residential user does not move that much traffic at all. You can see this if you poll their CPE's interface bit counters over SNMP at something like a 60s interval, feed that into a time series DB, and draw Grafana charts for that customer over a 1 day, 1 week, or 1 month period.
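The rate math behind those charts is just a counter delta over the poll interval; a minimal sketch, assuming a 64-bit octet counter such as ifHCInOctets:

    def mbps(octets_prev, octets_now, interval_s=60.0):
        # Counter delta (octets) -> bits -> average rate over the poll interval.
        return (octets_now - octets_prev) * 8 / interval_s / 1e6

    # e.g. a CPE that moved 750 MB during one 60 s poll interval:
    print(f"{mbps(0, 750_000_000):.0f} Mbps average over that minute")   # ~100 Mbps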

Even when a customer does something like buy several new 140 GB Xbox games in one day, the actual amount of time they're really utilizing that link near full capacity in the same 24 hour period is very small.
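Quick arithmetic on that example (a 1 Gbps plan assumed):

    game_gb = 140                       # one big game download
    link_gbps = 1.0                     # assumed plan speed
    seconds = game_gb * 8 / link_gbps   # GB -> gigabits, then divide by Gbps
    print(f"~{seconds / 60:.0f} min busy, {100 * seconds / 86_400:.1f}% of a 24 h day")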

The only caveat is that you need to be able to watch out for the 1 or 2% of outlier/heavy use customers who will really use their link for huge amounts of data. In many neighborhoods there won't be any of those.


You can put the same number of 1 Gbps users on a 10G-PON link as on a 10 Gbps DOCSIS 3.1 link. What matters is total bandwidth, which is why the oversubscription profile of each is identical despite the physical media being different. You are correct that the oversubscription ratio itself is almost never a problem. The common way it can still become a problem is that whenever bandwidth increases, ISPs start offering higher speeds. E.g. with 10G transports carriers start offering >1 Gbps plans, so the story repeats where some isolated case of 2 or 3 users on the same segment thrash it.

I'm not saying this as anti-PON, just that the existing coax improvements are sometimes very sensible and extremely comparable in the current generation. I've architected and deployed several small-city GPON and 10G-PON networks with Nokia gear, and it's fantastic for net-new builds and probably the only real option to continue growing 5-10 years from now. That said, if you've already got decent coax for the last mile, DOCSIS 3.1 can be extremely comparable and behave near identically at the moment.


It's true that the capacity is nearly the same at the moment with DOCSIS 3.1, but consider that a DOCSIS 3.1 system using pretty much EVERY viable RF channel can just barely match the capacity of a 10G XGSPON system that is using maybe 1-2% of the THz of spectrum available in normal singlemode fiber.

If you look at a typical residential 16:1 or 32:1 split XGSPON system on an optical spectrum analyzer that's capable of all DWDM ranges, it looks gapingly empty. There are just a few channels used for the downstream and upstream, with the timeslicing for the various CPEs' usage, and vast ranges of totally empty optical space.

What I find interesting is that your average residential user does NOT really use much more traffic (in average Mbps per CPE or GB per month) if you give them a 2.5 Gbps or 5 Gbps or 10 Gbps connection. I have plenty of 2.5 Gbps, 5 Gbps, and 10 Gbps customers. Maybe 1-2% of them are really heavy users. The rest of them use exactly the same amount of traffic as the 1 Gbps users, because the vast majority of non-technical residential end users these days have only wifi client devices. Finding someone who has a desktop PC with a 1000BASE-T or 2.5GBASE-T LAN port to do a proper speed test is maybe 1 in 50 customers.

Even if you've got people with 3x3 802.11ax gear running in 80 MHz channels, they're just barely going to approach 900 Mbps speed tests downstream.

If we had offered 10G FTTH to the home in 2002, the sort of power user who would buy that might actually try to run a small server farm out of their spare bedroom. But now it's 2024, and people who are serious about hosting their own stuff are doing it with their own VM/VPS/cloud-based setups, or by colocating a few servers, etc. They know that a residential last mile gigabit+ connection is not the best place for it. There are outliers and exceptions of course, but they're getting rarer every year as a percentage of total customers (e.g. someone who wants to run a torrent seedbox from their house).


Agreed. I think the most underrated feature of PON is the ability to run future generations simultaneously on the same line, without any impact at all to the existing install. The bandwidth on SM fiber is so enormous it will pretty much always (for my lifetime at least) leave other wavelengths open. This allows for deploying 25G-PON on the same fiber as existing 10G-PON with no negative impact to the existing customers. That's huuuuge in terms of long term operations budget and upgrade logistics.
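For the coexistence point, the nominal wavelength plans are enough to see why it works; a tiny sketch with approximate center wavelengths (25G-PON values omitted rather than guessed):

    # Nominal center wavelengths (nm), approximate.
    plans = {
        "GPON":    {"down": 1490, "up": 1310},
        "XGS-PON": {"down": 1577, "up": 1270},
        # A future generation (25G-PON etc.) gets its own, non-overlapping plan.
    }

    used = [nm for plan in plans.values() for nm in plan.values()]
    assert len(used) == len(set(used)), "wavelength collision"
    print("no overlap, all can share one strand:", sorted(used))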


Google is sadly known for doing cool proof-of-concept stuff with no regard for profitability, and then axing it when the hype is over.


I'm not sure Google is GPON (or has passive splitters at all). They were very early to deployments, and their plans have always been symmetrical (which doesn't make sense for any standard *PON deployment).


It is PON. When the underlying network is 2.4 Gbps down, 1.2 Gbps up, you can offer 1/1 plans (with some degree of prayer).



