There are a number of scaled AMD deployments, including Lamini (https://www.lamini.ai/blog/lamini-amd-paving-the-road-to-gpu...), which targets LLMs specifically. There are also a number of HPC configurations, including the world's largest publicly disclosed supercomputer (Frontier) and Europe's largest (LUMI), both running on MI250x. Multiple teams have trained models on those HPC setups too; a minimal sketch of why the software side carries over is below.

Do you have any more evidence as to why these categorically don't work?
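Part of why these deployments work with little fuss: ROCm builds of PyTorch keep the same torch.cuda API surface, so stock training code runs unchanged on MI250x. Here's a minimal sketch of that (the tiny model and shapes are just illustrative, not taken from any of the linked setups):

    import torch

    # On a ROCm build, torch.version.hip is set and torch.cuda reports AMD GPUs.
    print("HIP:", getattr(torch.version, "hip", None))
    if torch.cuda.is_available():
        print(torch.cuda.device_count(), "GPU(s), e.g.", torch.cuda.get_device_name(0))

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # An ordinary forward/backward/step; nothing AMD-specific in the code.
    model = torch.nn.Linear(128, 2).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 2, (32,), device=device)

    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    print("loss:", loss.item())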




> Do you have any more evidence as to why these categorically don't work?

They don't have any. It's loud voices parroting George Hotz, with nothing to back it up.

Here are a couple more good links:

https://www.evp.cloud/post/diving-deeper-insights-from-our-l...

https://www.databricks.com/blog/training-llms-scale-amd-mi25...
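Those posts cover multi-GPU training on MI250, and even the distributed side needs no code changes: on ROCm builds of PyTorch the "nccl" backend transparently maps to AMD's RCCL. A hedged sketch, assuming a standard torchrun launch (the model is again just a placeholder):

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE; "nccl" resolves to RCCL on ROCm.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(torch.nn.Linear(128, 2).cuda(), device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

        # One data-parallel step; gradients all-reduce across GPUs automatically.
        x = torch.randn(32, 128).cuda()
        y = torch.randint(0, 2, (32,)).cuda()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched the same way as on NVIDIA, e.g. torchrun --nproc_per_node=8 train.py.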



