Older GPUs had nowhere near enough memory to be really interesting for big data crunching.
Now, however, the biggest limit for AI workloads is GPU memory capacity (and bandwidth). The billions being invested are going into improving this aspect faster than any other. Expect GPUs with a terabyte of ultra-fast memory by the end of the decade. There are lots… and lots… of applications for something like that, beyond just LLMs!