Always fun to see one of my professors from the quite tiny, but awesome computer science department at St Andrews on HN!


FYI, this is NOT the cable Facebook is planning to build; this is the dream cable of a submarine cable… enthusiast? From LinkedIn: “To be clear, this is not Meta's plan or map. This is Ver "T" and T stands for Tagare. It's what I think is going to happen to this cable if I was designing it. This is my wish list.”

https://www.linkedin.com/pulse/map-metas-w-cable-sunil-tagar...


No, Facebook is building a cable from the East Coast to South Africa and then from South Africa to India and forward to Australia. The final leg is to the US. What Tagare did is take the W shaped network and add branching units. Those branching units have not been confirmed by my contacts. The general shape of the network has been confirmed.


Sorry, yes, was referring specifically to the Sunil Tagare version.


I'm confused.

Are you saying the article is false?

Or that the map illustration is false? Although the map illustration doesn't come from your post.

How did you even find the LinkedIn post? Are they the same author? Is TFA based on the LinkedIn post? How do you know?

And it seems like the TFA doesn't even have an author, nor can I find an author for their whole blog...

Maybe you can clarify all of this, since you seem to have some context here?


The map is current cables, for reference :) https://www.submarinecablemap.com/

AFAICT this post isn't "false" as much as "speculation". Given the news that a cable will be spanning the Atlantic Ocean with an end goal of South Asia, South Carolina -> Africa doesn't seem insane. Though it looks like there are no cables there right now...

I was coming in here to complain about "encompass" vs "encircle", but now I'm fascinated by this map. Cool webdev, too!

EDIT: My biggest takeaway is that we should conquer/buy/steal French Polynesia. Also, huge shoutout to the Leif Erikson cable, connecting Oslo with the absolute middle of nowhere[1] in Canada. Oil rig thing, maybe...?

[1] https://maps.app.goo.gl/Z5CEWt16NzHP2QH3A


The great circle route from Norway to the US passes through Canada. I wouldn’t put too much stock in that.

https://www.greatcirclemap.com/?routes=OSL-JFK
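
A quick way to sanity-check this is to sample points along the great circle yourself. A minimal Python sketch (the airport coordinates and the spherical-earth slerp are my assumptions, not something from the linked tool):

    import math

    def gc_waypoints(lat1, lon1, lat2, lon2, n=10):
        # Sample n+1 points along the great circle between two coordinates,
        # using a spherical-earth approximation (slerp on the unit sphere).
        la1, lo1 = math.radians(lat1), math.radians(lon1)
        la2, lo2 = math.radians(lat2), math.radians(lon2)
        d = math.acos(math.sin(la1) * math.sin(la2) +
                      math.cos(la1) * math.cos(la2) * math.cos(lo2 - lo1))
        points = []
        for i in range(n + 1):
            f = i / n
            a = math.sin((1 - f) * d) / math.sin(d)
            b = math.sin(f * d) / math.sin(d)
            x = a * math.cos(la1) * math.cos(lo1) + b * math.cos(la2) * math.cos(lo2)
            y = a * math.cos(la1) * math.sin(lo1) + b * math.cos(la2) * math.sin(lo2)
            z = a * math.sin(la1) + b * math.sin(la2)
            points.append((math.degrees(math.atan2(z, math.hypot(x, y))),
                           math.degrees(math.atan2(y, x))))
        return points

    # Oslo (OSL) to New York (JFK): the midpoint lands near 58N 42W, just
    # south of Greenland, before the arc crosses Labrador in Canada.
    for lat, lon in gc_waypoints(60.19, 11.10, 40.64, -73.78):
        print(f"{lat:6.2f}, {lon:7.2f}")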


Not speculation in terms of the general path of the cable. What is speculation is the number of branching units.


From the page header, it sounds like the author is Roderick Beck:

“Roderick Beck worked as a sales contractor for Hibernia Atlantic and helps buyers procure capacity and providers make sales.”


Oh thanks, I totally missed that, kind of hidden at the end of the paragraph. Kinda weird place to put the author but at least it's there!


Maybe he's confusing it with the other post? https://subseacables.blogspot.com/2024/10/facebooks-semi-sec...


The article is accurate. I am the author. My sources are/were involved in designing the cable.

As noted, the cable connects the US East Coast to South Africa and then heads to India and continues on to Australia before the home stretch to the States. We even know the number of fibre pairs: 16. It is a spatial division multiplexing system.


What about the Nigeria-Chad-Sudan route? Too expensive?

How long till we get Niger and Algeria connected?


The article is just rumors and speculation.


This latest revision of Jupiter is apparently 400G, as is the ConnectX-7; A3 Ultra will have 8 of them!


I’ve heard stories from people who have worked at Hanford, and it seems like a lot of that money is being squandered: excess caution, basically everything treated as radioactive waste, and just overall wasteful spending for decades.


Isn’t that exactly the point?

“Clean up” is clearly a euphemism for some sort of money funnel.

Hanford.gov even readily admits this!

“In April 2009, the Hanford Site received nearly $2 billion in funding from the American Recovery and Reinvestment Act. Contractors quickly hired thousands of new employees for temporary stimulus jobs in environmental cleanup.”


There’s more than one slang dictionary, it seems… here’s one I found in person at UCSD’s library: https://photos.app.goo.gl/J3rCLtbJMEn4SkKq8


They called out Flexential in the post?


Yes


Pretty wild for a single fiber, but super involved, using five amplifier types with different dopants. Current systems use just EDFAs (Erbium). These guys used Erbium, Thulium, Bismuth, and others.


The latest one is also just more lambdas. That's impressive, but DWDM itself took a decade-plus to catch on after the gear was introduced.

Most of the time it’s easier to just add another few dozen fibers when laying cables.


Facebook has more datacenter space and power than Amazon, Google, and Microsoft -- possibly more than Amazon and Microsoft combined...


Unless you've worked at Amazon, Microsoft, Google, and Facebook, or a whole bunch of datacenter providers, I'm not sure how you could make that claim. They don't really share that information freely, even in their stock reports.

Heck, I worked at Amazon, and even then I couldn't tell you the total datacenter space; they don't even share it internally.


You can just map them all... I have. I also worked at AWS :)


This would be an interesting dataset to use for trading decisions (or sell to hedge funds).

But I wonder how much of their infrastructure is publicly mappable, compared to just the part of it that's exposed to the edge. (Can you map some internal instances in a VPC?)

That said, I'm sure there are a lot of side channels in the provisioning APIs, certificate logs, and other metadata that could paint a decently accurate picture of cloud sizes. It might not cover everything but it'd be good enough to track and measure a gradual expansion of capacity.


I’m not sure mapping VPCs is super helpful - the physical infra is fairly distinct.

AWS has also disclosed 20 million Nitro adapters have been deployed, so you can do some backwards napkin math from that.
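
The napkin math might look something like this; every multiplier below is my own assumption for illustration, not an AWS figure, and the 20 million is presumably cumulative across generations, so it overstates the live fleet:

    # Backwards napkin math from the disclosed 20 million Nitro adapters.
    nitro_cards = 20_000_000
    cards_per_server = 2      # assumed average; some hosts carry more
    servers = nitro_cards / cards_per_server
    servers_per_rack = 40     # assumed density
    racks = servers / servers_per_rack
    kw_per_rack = 12          # assumed average draw
    print(f"~{servers / 1e6:.0f}M servers, ~{racks / 1e3:.0f}k racks, "
          f"~{racks * kw_per_rack / 1e6:.1f} GW")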


Mapping as in.. drawing the outlines of buildings and computing the square footage yourself?


Yep.


Then you should be aware that, for the longest time, Google was against multiple floors, until they suddenly switched to four floors in many locations:

https://www.datacenterfrontier.com/cloud/article/11431213/sc...

A decade ago, there was a burst in construction and in some places the bottleneck was not getting the machines or electricity, but how fast they could deliver and pour cement, even working overnight.


Yep, I am aware, I have a square footage multiplier for their multi-story buildings.


But how can you know how many floors they have? And where are you getting the list of buildings from? And what makes you think your list is complete?

Also how do you know their efficiency? Google might have less space but also a way to pack twice as much compute in the same place.

Like I said, this is impossible to know without a lot of insider information from a lot of companies.


Well, they tell you for one: https://datacenterpost.com/scaling-up-google-building-four-s... We also have Google Street View, etc.

Of course, it's all estimates. You can get fancier and count generators and transformers and stuff too.


To date, Facebook has built, or is building, 47,100,000 sq ft of space, totaling nearly $24bn in investment. Based on available/disclosed power numbers and extrapolating per sq ft (a sketch of that arithmetic is below), I get something like 4770MW.

Last I updated my spreadsheet in 2019, Google had $17bn in investments across their datacenters, totaling 13,260,000 sq ft of datacenter space. Additional buildings have been built since then, but not to the scale of an additional 30 million sq ft.

Amazon operates ~80 datacenter buildings in Northern Virginia, each ~200,000 sq ft -- about 16,000,000 sq ft total in that region; the other regions are much, much smaller, perhaps another 4 million sq ft. When I'm bored I'll go update all my maps and spreadsheets.
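
The per-sq-ft extrapolation is simple; here's a sketch, where the reference-site numbers are placeholders rather than real disclosures:

    # Extrapolate fleet power from floor area using one site where both
    # figures are disclosed. The reference-site values below are made up.
    known_sqft = 1_000_000           # hypothetical campus with known data
    known_mw = 100                   # its disclosed power commitment
    density = known_mw / known_sqft  # ~100 W per sq ft
    total_sqft = 47_100_000          # the Facebook build-out, from above
    print(f"~{total_sqft * density:,.0f} MW")  # ~4,710 MW, near 4770 above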


Does the square footage take into account multiple floors? What's the source? It can be misleading, because you don't know the compute density of what's inside. Using just public data, power is a more accurate proxy. Until at least 5-6 years ago, Google was procuring more electricity than Amazon. Before that, it had a further advantage from lower PUE, but I bet the big names are all comparable on that front by now. Anyone who has worked at several of them can infer that FB is not the largest (but it's still huge).

As for the dollars, were they just in 2019 or cumulative? The Google ones seem low compared to numbers from earnings.


Google certainly has more compute density than Amazon; the numbers I was able to find from the local power company were 250MW at Council Bluffs back in 2015 or so.

Amazon builds out 32MW shells, and the most heavily utilized as of 5 or 6 years ago drew 24MW or so, with most drawing much less than that.


At this point power companies (a la PG&E, etc.) should be investing in AI companies in a big way. Then they make money off the AI companies to build out power infra, and vice versa.

I am surprised we haven't heard about private electrical grids built out by such companies.

Surely they all have some owned power generation, but if they do, then in the local areas where they DO build out power plants, they should have to build capacity for the local area, mayhaps in exchange for the normal tax subsidies they seek for all these large capital projects.

Can't wait until we have pods/clusters in orbit, with radioisotope batteries to power them along with the panels. (I wonder how close to a node an RI battery can be? Can each node have its own RI?) (They can supposedly produce up to "several kW" -- but I can't find a reliable source for the max wattage of an RI...)

SpaceX should build an ISS module that's an AI DC cluster.

And have all the ISS technologies build their LLM there, based on all the data they create?


I updated my map for AWS in Northern Virginia -- came up with 74 buildings (another source says 76, so I'll call it directionally correct). If I scale my sq ft by ~5% to account for missing buildings, we get 11,500,000 sq ft in the Northern Virginia area for AWS.

I'll finish my other maps and share them later...


But Google-built data centers aren't the only data centers Google is running their machine fleet in...


Yeah, Google runs servers in public datacenters like those from Equinix. One "region" needn't be one datacenter, and sometimes AWS and GCP will even have computers in the same facility. It's actually quite annoying that "region" is such an opaque construct and they don't have any clear way to identify what physical building is hosting the hardware you rent from them.


Those are almost lost in the noise, compared to the big datacenters. (I've been inside two Atlanta facilities, one leased and one built from scratch, and the old Savvis one in Sunnyvale).


[citation needed]


I don't think so. AWS hasn't disclosed these numbers, like datacenter space occupied, so how do you know?


I have mapped every AWS data center globally, and I worked at AWS.

Facebook publishes this data.


I have zero evidence, but this seems extremely unlikely. Do you have more than zero evidence?


Meta can use all of its datacenter space, while Amazon, Google, and Microsoft datacenter space is mostly rented.


Token buckets have burstiness problems; CAKE lets you run things at ~99%+ of your link speed, assuming a stable medium, without bufferbloat, thanks to the packet pacing it does. CAKE also does several other things, using COBALT (CoDel + BLUE) and clever 8-way set-associative hashing that helps out when you have many flows.
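
A minimal sketch of the contrast in Python (not CAKE's actual in-kernel implementation, just the two ideas side by side):

    import time

    class TokenBucket:
        # Classic token bucket: conforming traffic can still leave at line
        # rate until the bucket drains -- that's the burstiness problem.
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8   # refill rate in bytes/sec
            self.tokens = burst_bytes
            self.burst = burst_bytes
            self.last = time.monotonic()

        def allow(self, pkt_bytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes
                return True            # sent immediately: part of a burst
            return False

    def paced_departures(pkt_sizes, rate_bps):
        # Pacing instead: each packet is scheduled at its serialization
        # time, so the wire never sees a burst above the configured rate.
        t, out = 0.0, []
        for size in pkt_sizes:
            t += size * 8 / rate_bps
            out.append(t)
        return out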


I did the math on this back in 2013, based on the reported number of hours uploaded per minute, and came up with 375PB of content, growing by 185TB/day, with a 70% annual growth rate. This does not take into account storing multiple encodes or the originals.
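
For reference, the arithmetic is roughly this; the upload rate is the figure YouTube reported in 2013, while the average stored size is my assumption:

    # Rough reconstruction of the 2013 napkin math.
    hours_per_minute = 100   # hours of video uploaded per minute (reported)
    gb_per_hour = 1.3        # assumed average stored size (~2.9 Mbps)
    hours_per_day = hours_per_minute * 60 * 24
    tb_per_day = hours_per_day * gb_per_hour / 1000
    print(f"~{tb_per_day:.0f} TB/day")  # ~187 TB/day, close to the 185 above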

