Most things that "run on CPUs" run on CPUs from many vendors. In the cases where software does not run on ARM processors it's almost always due to an architectural difference with x86, not an artificial proprietary limitation.
If an open-source library chose to rely on a proprietary C compiler with special language features (e.g. the Borland C compiler) that targeted only Intel x86 CPUs, I would argue we would not generally claim this software "runs on CPUs". Saying it "runs on Intel x86 CPUs" seems more appropriate.
Similarly, especially in the machine learning field, it is increasingly common that "GPU" really means "CUDA-capable GPU". In a world where AMD and Intel GPUs are also common (perhaps more common?), to me it makes sense to be up-front about this.
I'm not criticizing the authors' choice to use CUDA here. I'm just saying it would be nice if we stop pretending CUDA is synonymous with GPU programming, especially in cases where OpenCL would be a very appropriate choice (e.g. open-source software).
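To make the distinction concrete, here's a hypothetical sketch of the kind of up-front reporting being asked for: instead of claiming "runs on GPUs", a library could state which GPU runtimes it can actually load. This only checks whether each runtime's shared library is findable (via the standard-library `ctypes.util.find_library`), not whether a device is present; the library names are the usual ones, and the function name is mine, not from any real project.

```python
from ctypes.util import find_library

def available_gpu_runtimes() -> dict:
    """Report which GPU runtime libraries can be found on this system.

    Checks only for the runtime's shared library (e.g. libcudart / libOpenCL),
    not for an actual GPU device.
    """
    candidates = {
        "CUDA (NVIDIA-only)": ["cudart", "cuda"],
        "OpenCL (vendor-neutral)": ["OpenCL"],
    }
    return {
        name: any(find_library(lib) is not None for lib in libs)
        for name, libs in candidates.items()
    }

if __name__ == "__main__":
    for runtime, found in available_gpu_runtimes().items():
        print(f"{runtime}: {'found' if found else 'not found'}")
```

A README that printed something like this ("requires the NVIDIA-only CUDA runtime" vs. "works with any OpenCL implementation") would avoid the ambiguity entirely.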
The important point is that it runs on GPUs at all. If you think saying "it runs on GPUs" is wrong, the alternative sounds even stranger: that it doesn't run on GPUs.
They could have been more precise but there's nothing wrong or particularly misleading about what they said, and what they said conveyed the most important aspect.
> would argue we would not generally claim this software "runs on CPUs"
"This software does not run on CPUs" sounds wrong for this scenario.
Is it just me or does anyone else get annoyed when a library claiming to run on GPUs uses proprietary CUDA, making it nvidia-only?
It would seem strange to claim that a library "runs on CPUs" but only supports Intel CPUs.