Older GPUs had nowhere near enough memory to be really interesting for big data crunching.

Now, however, the biggest limit for AI workloads is GPU memory capacity (and bandwidth). The billions being invested are going into improving this aspect faster than any other. Expect GPUs with a terabyte of ultra-fast memory by the end of the decade. There are lots… and lots… of applications for something like that, other than just LLMs!
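
To see why capacity and bandwidth are the bottleneck, here's a rough back-of-envelope sketch in Python. The numbers are assumptions picked for illustration (a 70B-parameter model in fp16, a single GPU with 80 GB of memory and ~3 TB/s of bandwidth, roughly H100-class), not benchmarks:

    # Illustrative assumptions: 70B params, fp16, 80 GB / ~3 TB/s GPU
    params = 70e9            # model parameters (assumed)
    bytes_per_param = 2      # fp16
    weight_bytes = params * bytes_per_param

    gpu_mem_gb = 80          # single-GPU capacity (assumed)
    bandwidth_bps = 3e12     # ~3 TB/s memory bandwidth (assumed)

    print(f"weights alone: {weight_bytes / 1e9:.0f} GB "
          f"(vs {gpu_mem_gb} GB on one GPU)")

    # At batch size 1, every generated token streams all the weights
    # through the memory system once, so bandwidth caps tokens/sec.
    print(f"bandwidth-bound ceiling: ~{bandwidth_bps / weight_bytes:.0f} tokens/sec")

The weights alone come to ~140 GB, so the model doesn't even fit on one 80 GB card, and even with infinite compute the memory system caps single-stream decoding at a couple dozen tokens per second. More capacity and more bandwidth attack both limits directly.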
