This is rather technical, but SCALE is a solution that lets Nvidia CUDA code run on AMD GPUs without modifying the original codebase. The results in this early experiment show that even though performance hasn't been a focus yet (compatibility and functionality come first), it is actually quite good compared to AMD's own native stack (HIP).
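For context on what "without modification" means in practice, here is an ordinary CUDA saxpy program (my own minimal sketch, not an example from the SCALE project): the same source that nvcc builds for Nvidia hardware is the kind of code SCALE aims to compile as-is for AMD GPUs, with no HIP port.

```cuda
// Plain, unmodified CUDA -- nothing AMD- or SCALE-specific in the source.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch one thread per element: y = a*x + y.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```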
SCALE, as a solution that lets developers write code once for CUDA and run it anywhere (AMD is the first target), is very bullish for AMD. We have been watching their progress over the past year, and things are improving at a steady pace.
We're big fans of SCALE at https://stratoflow.com, as it's a natural enabler for AMD hardware to break into more serious AI and machine learning (remember that term?) workloads.
And honestly, I still can't understand why they haven't been acquired by AMD yet.
Would love to connect and hear more about what you like about SCALE and where you'd like it to go.
AMD is part of our strategy, but it's not the end-game. We envision SCALE as vendor neutral, and we plan to support all of the competitive GPUs that come out in the future, including AMD, NVIDIA, Intel, and any newcomers.