FYI, this is NOT the cable Facebook is planning to build; this is the dream cable of a submarine cable… enthusiast? From LinkedIn: “To be clear, this is not Meta's plan or map. This is Ver "T" and T stands for Tagare. It's what I think is going to happen to this cable if I was designing it. This is my wish list.”
No, Facebook is building a cable from the East Coast to South Africa, then from South Africa to India, and onward to Australia. The final leg returns to the US. What Tagare did is take the W-shaped network and add branching units. Those branching units have not been confirmed by my contacts; the general shape of the network has been confirmed.
AFAICT this post isn't "false" so much as "speculation". Given the news that a cable will be spanning the Atlantic Ocean with an end goal of South Asia, South Carolina -> Africa doesn't seem insane. Though it looks like there are no cables there right now...
I was coming in here to complain about "encompass" vs "encircle", but now I'm fascinated by this map. Cool webdev, too!
EDIT: My biggest takeaway is that we should conquer/buy/steal French Polynesia. Also, huge shoutout to the Leif Erikson cable, connecting Oslo with the absolute middle of nowhere[1] in Canada. Oil rig thing, maybe...?
The article is accurate. I am the author. My sources are/were involved in designing the cable.
As noted, the cable connects the US East Coast to South Africa, then heads to India and continues on to Australia before the home stretch to the States. We even know the number of fibre pairs: 16. It is a spatial-division multiplexing (SDM) system.
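For rough scale (my arithmetic, with an assumed per-pair figure the article doesn't give): modern trans-oceanic pairs carry on the order of 20 Tbps each, so 16 pairs pencil out like this:

    # Back-of-envelope design capacity for a 16-fibre-pair SDM cable.
    # TBPS_PER_PAIR is an assumption on my part, not from the article;
    # real capacity depends on span length, amplifiers, modulation, etc.
    FIBRE_PAIRS = 16
    TBPS_PER_PAIR = 20  # assumed order of magnitude for modern long-haul pairs

    print(f"~{FIBRE_PAIRS * TBPS_PER_PAIR} Tbps design capacity")  # ~320 Tbps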
I’ve heard stories from people who have worked at Hanford, and it seems like a lot of that money is being squandered. Excess caution, basically everything is radioactive waste, and just overall wasteful spending for decades.
“Clean up” is clearly a euphemism for some sort of money funnel.
Hanford.gov even readily admits this!
“In April 2009, the Hanford Site received nearly $2 billion in funding from the American Recovery and Reinvestment Act. Contractors quickly hired thousands of new employees for temporary stimulus jobs in environmental cleanup.”
Pretty wild for a single fiber, but super involved, using five different amplifier doping types. Current systems use just EDFAs (erbium). These guys used erbium, thulium, bismuth, and others.
Unless you've worked at Amazon, Microsoft, Google, and Facebook, or a whole bunch of datacenter providers, I'm not sure how you could make that claim. They don't really share that information freely, even in their earnings reports.
Heck, I worked at Amazon and even then I couldn't tell you the total datacenter space; they don't even share it internally.
This would be an interesting dataset to use for trading decisions (or sell to hedge funds).
But I wonder how much of their infrastructure is publicly mappable, compared to just the part of it that's exposed to the edge. (Can you map some internal instances in a VPC?)
That said, I'm sure there are a lot of side channels in the provisioning APIs, certificate logs, and other metadata that could paint a decently accurate picture of cloud sizes. It might not cover everything but it'd be good enough to track and measure a gradual expansion of capacity.
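As one concrete example of the certificate-log angle (my own sketch, not something anyone here is running): crt.sh exposes a JSON endpoint you can query for certificates matching a domain pattern, and issuance dates give a crude growth signal. The domain pattern below is a hypothetical placeholder, and the JSON field names should be verified against real crt.sh output:

    # Sketch: count certificate issuances per year for a cloud-owned
    # domain pattern via crt.sh's JSON output. Pattern and field names
    # are my assumptions -- verify against an actual response.
    from collections import Counter
    import requests

    resp = requests.get(
        "https://crt.sh/",
        params={"q": "%.compute.amazonaws.com", "output": "json"},  # hypothetical pattern
        timeout=60,
    )
    per_year = Counter(entry["not_before"][:4] for entry in resp.json())
    for year, count in sorted(per_year.items()):
        print(year, count)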
Then you should be aware that, for the longest time, Google was against multiple floors, until they suddenly switched to four floors in many locations.
A decade ago, there was a burst in construction, and in some places the bottleneck was not getting the machines or electricity, but how fast they could deliver and pour concrete, even working overnight.
To date, Facebook has built, or is building, 47,100,000 sq ft of space, totaling nearly $24bn in investment. Based on available/disclosed power numbers and extrapolating per sq ft, I get something like 4,770 MW.
Last I updated my spreadsheet in 2019, Google had $17bn in investments across their datacenters, totaling 13,260,000 sq ft of datacenter space. Additional buildings have been built since then, but not to the scale of an additional 30mil sq ft.
Amazon operates ~80 datacenter buildings in Northern Virginia, each ~200,000 sq ft -- about 16,000,000 sq ft total in that region; the other regions are much, much smaller, perhaps another 4 mil sq ft. When I'm bored I'll go update all my maps and spreadsheets.
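Quick sanity check on the implied power density, using only the figures quoted above (no new data):

    # Implied W/sq ft from the Facebook figures above, then the same
    # density applied to the other footprints purely for comparison.
    FB_SQFT = 47_100_000
    FB_MW = 4_770

    w_per_sqft = FB_MW * 1e6 / FB_SQFT
    print(f"Facebook implied density: ~{w_per_sqft:.0f} W/sq ft")  # ~101

    for name, sqft in [("Google (2019)", 13_260_000), ("AWS NoVA", 16_000_000)]:
        # Hypothetical: assumes the same density holds, which it may not.
        print(f"{name}: ~{sqft * w_per_sqft / 1e6:.0f} MW at that density")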
Does the square footage take into account multiple floors? What's the source? It can be misleading, because you don't know the compute density of what's inside. Using just public data, power is a more accurate proxy. Until at least 5-6 years ago, Google was procuring more electricity than Amazon. Before that, it had a further advantage from lower PUE, but I bet the big names are all comparable on that front by now. Anyone who has worked at several of them can infer that FB is not the largest (but it's still huge).
As for the dollars, were they just in 2019 or cumulative? The Google ones seem low compared to numbers from earnings.
Google certainly has more compute density than Amazon; the numbers I was able to find from the local power company were 250 MW at Council Bluffs back in 2015 or so.
Amazon builds out 32 MW shells, and the most utilized as of 5 or 6 years ago was at 24 MW or so, with most drawing much less than that.
At this point power companies (à la PG&E, etc.) should be investing in AI companies in a big way. Then they make money off the AI companies to build out power infra, and vice versa.
I am surprised we haven't heard about private electrical grids built out by such companies.
Surely they all have some owned power generation, but if they do, then in the local areas where they DO build out power plants they should have to build capacity for the local area as well, mayhaps in exchange for the normal tax subsidies they seek for all these large capital projects.
Can't wait until we have pods/clusters in orbit, with radioisotope batteries to power them along with the panels. (I wonder how close to a node an RI battery can be? Can each node have its own RI?) (Supposedly they can produce up to "several kW" -- but I can't find a reliable source for the max wattage of an RI...)
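For rough grounding on the RTG wattage question (public figures on flown units, not from this thread): the GPHS-RTG flown on Cassini and New Horizons delivers roughly 300 W electric at beginning of life, so "several kW" would mean stacking a lot of them:

    # How many flown-class RTGs a small orbital compute pod would need.
    # ~300 We is the commonly cited GPHS-RTG beginning-of-life output;
    # the 10 kW pod budget is an arbitrary assumption for illustration.
    import math

    RTG_WATTS = 300
    POD_BUDGET_WATTS = 10_000  # assumed

    print(math.ceil(POD_BUDGET_WATTS / RTG_WATTS), "GPHS-RTG-class units")  # 34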
SpaceX should build an ISS module that's an AI DC cluster.
And have all the ISS experiments feed the data they create into an LLM trained there?
I updated my map for AWS in Northern Virginia -- came up with 74 buildings (another source says 76, so I'll call it directionally correct). If I scale my sq ft by ~5% to account for missing buildings, we get 11,500,000 sq ft in the Northern Virginia area for AWS.
Yeah, Google also places servers in public datacenters like those from Equinix. One "region" needn't be one datacenter, and sometimes AWS and GCP will even have machines in the same facility. It's actually quite annoying that "region" is such an opaque construct and they don't have any clear way to identify which physical building hosts the hardware you rent from them.
Those are almost lost in the noise, compared to the big datacenters. (I've been inside two Atlanta facilities, one leased and one built from scratch, and the old Savvis one in Sunnyvale).
Token buckets have burstiness problems; CAKE lets you run things at ~99%+ of your link speed, assuming a stable medium, without bufferbloat, thanks to the packet pacing it does. CAKE also does several other things, using COBALT (CoDel + BLUE) for queue management and clever 8-way set-associative hashing that helps out when you have many flows.
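To make the burstiness point concrete (a toy of mine, not CAKE's actual code): a token bucket accumulates credit while the link is idle, then releases a whole burst at line rate, which is exactly what fills downstream buffers; CAKE instead paces each packet out on a schedule derived from the shaped rate:

    # Toy token-bucket shaper showing the post-idle burst CAKE's pacing avoids.
    # Rates and sizes are arbitrary illustration values.
    RATE_BPS = 1_000_000   # shaped rate: 1 Mbit/s
    BURST_BITS = 150_000   # bucket depth
    PKT_BITS = 12_000      # 1500-byte packets

    tokens, last_t = BURST_BITS, 0.0

    def try_send(now: float) -> bool:
        """Send one packet if the bucket holds enough tokens."""
        global tokens, last_t
        tokens = min(BURST_BITS, tokens + (now - last_t) * RATE_BPS)
        last_t = now
        if tokens >= PKT_BITS:
            tokens -= PKT_BITS
            return True
        return False

    # After an idle period the bucket is full: 12 packets leave back-to-back
    # at the physical line rate before sending throttles to RATE_BPS.
    print(sum(try_send(0.2) for _ in range(20)), "packets released instantly")  # 12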
I did the math on this back in 2013, based on the annually reported number of hours uploaded per minute, and came up with 375 PB of content, growing by 185 TB/day with a 70% annual growth rate. This does not take into account storing multiple encodes or the originals.
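You can get numbers of that order from the figure YouTube publicized around then (~100 hours uploaded per minute) plus an assumed average stored bitrate; the bitrate below is my pick to show the arithmetic, not a known value:

    # Back-of-envelope YouTube ingest, circa 2013.
    # 100 hours/minute was YouTube's publicized figure at the time;
    # the ~2.9 Mbit/s average stored bitrate is my assumption.
    HOURS_PER_MIN = 100
    AVG_BITRATE_BPS = 2.9e6

    video_seconds_per_day = HOURS_PER_MIN * 60 * 24 * 3600
    tb_per_day = video_seconds_per_day * AVG_BITRATE_BPS / 8 / 1e12
    print(f"~{tb_per_day:.0f} TB/day of new video")  # ~188, close to the 185 above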