
Datacenters are actually exactly as efficient as laptops.

They consume more power only because, unlike laptops, they do not spend most of their time idle.

The cores in the biggest server CPUs consume only about 2.5 W to 3 W each at maximum load, which is similar to or less than what an Apple core consumes.
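As a rough sanity check (my numbers, not the parent's): take a current high-core-count part such as AMD's EPYC 9754, commonly listed at 128 cores with a 360 W default TDP, and divide:

    # Rough per-core power estimate for a high-core-count server CPU.
    # Figures are illustrative: the EPYC 9754 is commonly listed at
    # 128 cores and a 360 W default TDP; check current spec sheets.
    tdp_watts = 360
    cores = 128
    print(f"{tdp_watts / cores:.2f} W per core")  # ~2.81 W per core

That lands right in the 2.5-3 W range, and if anything overstates it, since the socket-level TDP also covers I/O and cache power that a per-core number hides.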

The big Apple cores do more work per clock cycle at similar clock frequencies and per-core power, but that is due almost entirely to a newer manufacturing process; on the same process, doing more work per cycle would cost proportionally more power.
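The underlying relationship is the standard dynamic-power formula: switching power scales with effective capacitance, voltage squared, and frequency, and a newer node lowers the capacitance and voltage needed for the same logic. A minimal sketch, with made-up numbers:

    # Classic dynamic-power relation for CMOS logic: P = C * V^2 * f
    # The constants below are invented for illustration only.
    def switching_power(c_eff_farads, volts, hertz):
        """Dynamic power of a core (leakage ignored)."""
        return c_eff_farads * volts**2 * hertz

    # Same logic on a newer node: lower effective capacitance and voltage.
    old = switching_power(1.0e-9, 0.90, 3.5e9)  # hypothetical older node
    new = switching_power(0.8e-9, 0.80, 3.5e9)  # hypothetical newer node
    print(f"old node: {old:.2f} W, new node: {new:.2f} W")  # ~2.84 W vs ~1.79 W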

The ability of the Apple CPU cores to do more work per clock cycle than anything else is very useful in laptops and smartphones, but it would be undesirable in server CPUs.

A server CPU can do more work per clock cycle simply by adding more cores. Beyond a certain point, increasing the work done per clock cycle in a single core grows its area faster than its performance, which reduces the number of cores that fit in a socket and therefore the total performance per socket.
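This is the intuition behind Pollack's rule of thumb, which says single-core performance scales roughly with the square root of core area. Under that assumption (a sketch, not a law), a fixed die area buys more total throughput from many small cores:

    # Toy fixed-area-budget comparison using Pollack's rule of thumb:
    # single-core performance ~ sqrt(core area). Units are arbitrary.
    from math import sqrt

    AREA_BUDGET = 128  # arbitrary area units per socket

    def socket_throughput(core_area):
        cores = AREA_BUDGET // core_area  # how many cores fit
        return cores * sqrt(core_area)    # cores x per-core performance

    for area in (1, 2, 4, 8):
        print(f"core area {area}: throughput {socket_throughput(area):.1f}")
    # area 1: 128.0, area 2: 90.5, area 4: 64.0, area 8: 45.3
    # Smaller cores win on total throughput; bigger cores win per thread.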

The big Apple cores are likely too big for a server CPU, even if they are optimal for their intended purpose; without the advantage of a superior manufacturing process, they might be a worse fit for a server socket than cores like Arm's Neoverse N2 or Neoverse V2.

Obviously, Apple could design a core optimized for servers, but they have no reason to do so, which is why the Nuvia team split off from them; in the end, though, Nuvia was not able to pursue that goal and, after the Qualcomm acquisition, went back to designing mobile CPUs.



