  > pretty close proxy for how much computation is happening.
[citation needed]. See the vastly different power budget and cost of AWS Graviton (ARM) vs x86 compute. Even looking at power use directly will only give a low-precision proxy for aggregate compute, and water usage is more indirect still.


Looking at power use directly and making some educated guesses about average FLOPs/watt is probably the most effective way to estimate aggregate compute.
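The estimate described here could be sketched as a simple back-of-envelope calculation. All of the numbers below (PUE, the share of IT power going to compute, and the FLOPs/W figure) are made-up illustrative assumptions, not real AWS data:

```python
# Hypothetical back-of-envelope: aggregate compute from facility power draw.
# Every constant here is an assumed illustrative value, not a real figure.

def estimate_flops(facility_watts: float,
                   pue: float = 1.2,                # assumed power usage effectiveness
                   compute_fraction: float = 0.7,   # assumed share of IT power on CPUs/GPUs
                   flops_per_watt: float = 20e9) -> float:
    """Rough aggregate FLOP/s implied by a facility's power draw."""
    it_watts = facility_watts / pue          # power that reaches IT equipment
    compute_watts = it_watts * compute_fraction
    return compute_watts * flops_per_watt

# e.g. a 10 MW facility under these assumptions:
print(f"{estimate_flops(10e6):.3e} FLOP/s")
```

The point is not the exact numbers but that each factor is a separately estimable quantity you can refine.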

Even at Amazon I wouldn't be surprised if it's the primary way they do it, and I'd be interested in research on this. I'm trying to think of other ways, and accurately aggregating CPU/GPU load at that scale seems virtually impossible to do rigorously.

And yes, as an outsider you might have trouble knowing the relative distribution of ARM/x86, but that's just another number you want to obtain to improve your estimate.
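Folding in the ARM/x86 distribution might look like a power-weighted average of per-architecture efficiencies. The mix and FLOPs/W values below are illustrative guesses, not measured figures:

```python
# Hypothetical refinement: blend FLOPs/W across an assumed ARM/x86 fleet mix.

def blended_flops_per_watt(mix: dict[str, float],
                           efficiency: dict[str, float]) -> float:
    """Power-weighted average FLOPs/W across architectures."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9  # shares must sum to 1
    return sum(mix[arch] * efficiency[arch] for arch in mix)

mix = {"arm": 0.4, "x86": 0.6}    # assumed share of total power draw
eff = {"arm": 30e9, "x86": 15e9}  # assumed FLOPs/W per architecture
print(blended_flops_per_watt(mix, eff))  # → 21000000000.0
```

Better knowledge of the mix tightens the blended efficiency, which in turn tightens the aggregate-compute estimate.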


Counterpoint: you have no factual basis for believing anything about the energy used by various CPUs in EC2, none of which are publicly available parts.


You just proved their point, though.



