I disagree on this: "Performance is an absolute thing"
I think the core misunderstanding is what is being discussed. You say "Performance is an absolute thing, not a relative thing. Amount of work per second, not percentage-over-previous!"
But nobody here is concerned with the absolute performance number. Absolute performance is only about what we can do right now with what we have. The discussion is about the performance improvements and how they evolve over time.
Performance improvements are inherently a relative metric. The entire industry, society and economy - everything is built on geometric growth expectations. You cannot then go back to the root metric and say "ah, but what really matters is that we can now do X TFLOPS more" - if that number amounts to a mere 10% improvement, nobody will care.
Built into everything around computing is the potential for geometric growth. Startups, AI, video games - it's all predicated on the expectation that our capabilities will grow geometrically. The only way to benchmark geometric growth is with % improvements. Again, it is wholly irrelevant to everyone that the 5090 can do 20 TFLOPS more than the 4090. 20 TFLOPS is some number, but whether it is a big improvement or not - i.e. whether it is expanding our capabilities at the expected geometric rate - cannot be divined from that number alone.
And when we look it up, it turns out that we went from 80 TFLOPS to 100 TFLOPS, which is a paltry 25% increase, well short of the roughly 50% generational growth expected from Nvidia.
20 TFLOPS might or might not be a lot. The only way to judge is to compare how much of an improvement it is relative to what we had before.
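To make that concrete, here is a quick back-of-the-envelope sketch in Python. The TFLOPS figures are just the rough ones from above, and the 50% figure is the expectation cited here, not an official Nvidia target:

    # The absolute delta alone can't tell you whether a generation kept up
    # with the expected geometric growth; the ratio can.
    def relative_improvement(old_tflops, new_tflops):
        return (new_tflops - old_tflops) / old_tflops

    old, new = 80.0, 100.0     # approximate FP32 throughput, TFLOPS
    expected_growth = 0.50     # the ~50% per-generation expectation cited above

    actual = relative_improvement(old, new)
    print(f"absolute gain: {new - old:.0f} TFLOPS")               # 20 TFLOPS
    print(f"relative gain: {actual:.0%}")                         # 25%
    print(f"meets the expectation? {actual >= expected_growth}")  # False

The same 20 TFLOPS delta would look spectacular on top of a 20 TFLOPS baseline and like noise on top of a 500 TFLOPS one; only the ratio tells them apart.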
>I think the core misunderstanding is what is being discussed.
I see your point and it's valid for sure, and certainly the prevailing one, but I think it's worth thinking about the absolute values from time to time even in this context.
In other contexts it's more obvious; let's use video.
Going from full HD to 4K is roughly a doubling of resolution in each direction. It means going from about 2 megapixels to 8, a difference of 6.
The next step up is 8K, which means going from 8 megapixels to 33 - a difference of 25.
This is a huge jump, and it crosses a relevant threshold - many cameras can't even capture 33 Mpx, and many projectors can't display that many.
If you want to capture 8K video, it's not relevant whether your camera has twice the megapixel count of the previous one - it needs to have 33 Mpx. If it has 16 instead of 8, that doesn't matter (and it also doesn't really matter whether it has 100 instead of 40).
On the other hand, if you want to capture 4K, you only need 8 megapixels. Whether you have 12, 16, 24 or 40 doesn't really matter.
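Here's the same arithmetic spelled out in Python, just to show how the absolute jumps grow even though each step is "only" a doubling per axis:

    # Pixel counts for the resolutions mentioned above.
    resolutions = {
        "Full HD": (1920, 1080),
        "4K":      (3840, 2160),
        "8K":      (7680, 4320),
    }

    mpx = {name: w * h / 1e6 for name, (w, h) in resolutions.items()}

    prev = None
    for name, m in mpx.items():
        if prev is None:
            print(f"{name}: {m:.1f} Mpx")
        else:
            print(f"{name}: {m:.1f} Mpx (+{m - prev:.1f} Mpx absolute, x{m / prev:.1f} relative)")
        prev = m

    # Full HD: 2.1 Mpx
    # 4K: 8.3 Mpx (+6.2 Mpx absolute, x4.0 relative)
    # 8K: 33.2 Mpx (+24.9 Mpx absolute, x4.0 relative)

The relative step is the same each time (4x the pixels), but the absolute step grows from about 6 Mpx to about 25 Mpx, and it's the absolute number that runs into hardware limits like sensor and projector resolution.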
If we return to CPUs, absolute performance matters there too - if you want to play back video without frame drops, you need a certain absolute performance. If your computer is twice as fast and still can't do it, the doubling didn't matter. Similarly, if your current computer can already do it, doubling it again won't be noticeable.
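A minimal sketch of that threshold logic, with made-up numbers purely for illustration:

    # Whether playback works is a comparison against an absolute requirement,
    # not against the previous machine. Units are arbitrary decode-work per second.
    def can_play(throughput, required):
        return throughput >= required

    required = 100.0
    old_machine = 30.0
    new_machine = 2 * old_machine    # a 100% relative improvement

    print(can_play(old_machine, required))      # False
    print(can_play(new_machine, required))      # False - the doubling didn't matter
    print(can_play(2 * new_machine, required))  # True - this doubling did

Once you're past the bar, further doublings stop being noticeable for that particular task - which is the absolute-value point.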