
Apple definitely does not use GCP for 100% of anything, but it could be done. A migration like that couldn't happen overnight anyway, as I'm sure you know. Google hit its first 1 EB of raw disk space a long time ago. Or you could look at power consumption as a proxy for capacity.



Power consumption is probably not a good proxy for disk capacity. Dense disk storage draws far less power per rack than compute does, so unless you know the overall compute/storage ratio, it's very hard to back out storage capacity from a power figure. The reverse might work, if compute power consumption overwhelms storage power consumption. On the other hand, it might be easier to infer storage than compute from rack counts or square feet of datacenter space: there are only so many spinning disks you can fit in a cubic meter. A toy sketch of both approaches is below.
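
Here's a minimal back-of-envelope sketch in Python; every number in it is an assumption I made up for illustration, not a real fleet figure. It shows why a power number alone only brackets capacity to within an order of magnitude, while floor space gives a hard-ish ceiling on spinning-disk capacity:

    # Toy back-of-envelope: power vs. floor space as proxies for HDD capacity.
    # Every number below is an invented assumption, for illustration only.

    STORAGE_RACK_KW = 5.0         # assumed: dense JBOD rack, spindles + little CPU
    DISKS_PER_STORAGE_RACK = 500  # assumed: high-density JBOD chassis stacked up
    TB_PER_DISK = 16.0            # assumed: modern nearline HDD

    def eb_range_from_power(total_mw: float) -> tuple[float, float]:
        """Exabyte range consistent with one total power figure, depending
        on whether the fleet is storage-heavy or compute-heavy."""
        total_kw = total_mw * 1_000
        # Extreme 1: all power feeds storage racks.
        hi = (total_kw / STORAGE_RACK_KW) * DISKS_PER_STORAGE_RACK * TB_PER_DISK / 1e6
        # Extreme 2: 90% of power feeds compute racks, only 10% feeds storage.
        lo = (total_kw * 0.1 / STORAGE_RACK_KW) * DISKS_PER_STORAGE_RACK * TB_PER_DISK / 1e6
        return lo, hi

    def eb_ceiling_from_floor(square_meters: float) -> float:
        """Ceiling from floor space alone: only so many spinning disks fit
        per square meter, even with wall-to-wall storage racks."""
        RACKS_PER_SQM = 0.5  # assumed: ~2 m^2 per rack incl. aisles and cooling
        return square_meters * RACKS_PER_SQM * DISKS_PER_STORAGE_RACK * TB_PER_DISK / 1e6

    lo, hi = eb_range_from_power(100)  # a hypothetical 100 MW fleet
    print(f"100 MW is consistent with ~{lo:.0f} to ~{hi:.0f} EB of raw HDD")
    print(f"50,000 m^2 caps raw HDD at ~{eb_ceiling_from_floor(50_000):.0f} EB")

Under these made-up inputs, 100 MW is consistent with anything from ~16 to ~160 EB depending on the compute/storage mix, while the floor-space bound is a single number. That's the asymmetry the comment above is pointing at.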

Also, there's an AWS re:Invent talk from several years ago with "8 exabytes" in the title, so surely both Google and Amazon have many exabytes by this point.


You can make educated guesses about which range the compute/storage ratio falls in by looking at, e.g., public information from talks like

http://www.pdsw.org/pdsw-discs17/slides/PDSW-DISCS-Google-Ke...

(it has other useful information that might not have been mentioned anywhere before, like the crazy Colossus on Colossus... or D, the GFS chunkserver replacement)

You can use that to revisit Randall Munroe's estimates: https://what-if.xkcd.com/63/. From some of the comments there you can infer that, e.g., packing disks very densely is not a good idea. There's more, of course, but I can't go into details (ex-Googler).
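
For flavor, this is the kind of toy revision you can do once you've guessed a ratio. Every input here is my own assumption for illustration, not a figure from the talk, from Google, or from the xkcd piece:

    # Toy fleet-capacity estimate. All inputs are invented assumptions.
    ASSUMED_SERVERS = 2_500_000     # hypothetical total machine count
    STORAGE_FRACTION = 0.2          # guessed share of storage-oriented machines
    DISKS_PER_STORAGE_MACHINE = 60  # assumed JBOD attachment per machine
    TB_PER_DISK = 16.0              # assumed modern nearline HDD

    raw_eb = (ASSUMED_SERVERS * STORAGE_FRACTION
              * DISKS_PER_STORAGE_MACHINE * TB_PER_DISK) / 1e6
    # Usable is lower than raw: replication / erasure-coding overhead in a
    # Colossus-like filesystem stretches every logical byte across disks.
    STRETCH = 1.5                   # assumed average encoding overhead
    print(f"raw ~{raw_eb:.0f} EB, usable ~{raw_eb / STRETCH:.0f} EB")

The point isn't the output (which is only as good as the guesses) but that the compute/storage split is the single input that moves the answer the most, which is why numbers from talks like the one above matter.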


> many exabytes

Like, a hellabyte?

I'm still trying to make hella- happen as an SI prefix.



