> CPU clock speeds maxed out between 3-4GHz a decade ago.
Even for x86 this isn't true (4+GHz is at least possible), let alone platforms like POWER which have already pushed beyond 5GHz. Fancier things like vacuum-channel transistors, graphene transistors, etc. could push that even further once they break into commercial viability.
Not that clock speed alone really matters all that much compared to the other performance benefits of high-performance RISC architectures like POWER and SPARC...
> Nobody develops special supercomputing CPUs any more.
Today I learned that Blue Gene was a figment of my imagination :)
Special supercomputing CPUs are still being developed. They seem insignificant only because their market has stayed roughly constant in size, while the markets for general-purpose, non-supercomputing-specific platforms have grown much more rapidly. That doesn't necessarily mean supercomputing is dead, just like the invention of the microwave oven doesn't mean ordinary ovens are suddenly dead; it's just an indicator of different use cases, and of the different markets thereof.
> The top 10 are all Government operations.
It's a bit misleading (though I suppose technically accurate) to list academic institutions (like the University of Texas, which holds the #7 spot) as "Government operations"; they're government-funded, yes, but there's a big difference between that and, say, an actual government agency directly managing such an installation. I also fail to see why it matters that a majority of those are government installations; governments typically have much greater capital to spend on such things - and greater need for them - than all but the most massive commercial entities.
HPC was never really the purview of commercial enterprises anyway (unless they had extreme computational requirements). The uptick in the use of COTS products for high-performance computing among enterprises (particularly big Internet-reliant ones like Google) didn't come at the expense of the HPC crowd losing potential users; it's rather a market that formed fairly recently, alongside HPC already being a niche.
Basically, by your arguments, "high-performance computing" has been dying for as long as it's existed.
> Grosch's Law [2] stopped working a long time ago.
Only because the world switched to clustering, where Grosch's Law doesn't quite apply, and hasn't yet worked around the limitations of current transistor technology (see the above-mentioned vacuum-channel and graphene transistor technologies, among many others).
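To make the point concrete, here's a minimal sketch (with hypothetical constants - the coefficients are illustrative, not measured) of why clustering broke the law. Grosch's Law says a single machine's performance grows roughly as the square of its cost, so performance-per-dollar improves as you buy bigger iron; a cluster of commodity nodes scales performance roughly linearly with cost, so performance-per-dollar stays flat no matter how much you spend:

```python
def grosch_perf(cost, k=1.0):
    """Single big machine: Grosch's Law, performance ~ k * cost**2."""
    return k * cost ** 2

def cluster_perf(cost, node_cost=1.0, node_perf=1.0):
    """Cluster of identical commodity nodes: performance ~ linear in cost."""
    return (cost / node_cost) * node_perf

for cost in (1, 10, 100):
    # Under Grosch's Law, perf/$ rises with spend; in a cluster it's constant.
    print(cost, grosch_perf(cost) / cost, cluster_perf(cost) / cost)
```

Once everyone is buying racks of interchangeable nodes, there's no quadratic payoff left to observe, so the law "stops working" without anything about single-machine economics having been disproven.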
> Maximum price/performance today is achieved with racks of midrange CPUs, which is why that's what every commercial data center has.
That's what "every commercial data center has" (not exactly true, but we'll go with it for now) more because of price alone than because of an actually-calculated price/performance ratio. Businesses tend to think in terms of short-term investments much more readily than long-term ones (in contrast with academic and often government institutions, which tend to think in the opposite direction, and therefore face entirely different sets of problems in many cases).
Meanwhile, the big businesses that really do actively calculate an optimal price/performance ratio (like Google) aren't the ones using COTS solutions; they usually have the financial capability to invest in homegrown solutions and cut out any unnecessary expense, and are certainly not just buying a bunch of prebuilt servers from Dell. Google in particular has started to invest heavily in IBM's OpenPOWER initiative, probably out of a perception that POWER will offer a better price/performance ratio than x86 in their already-very-customized hardware stack.