Not sure what you mean by that, but ROCm works just fine. I'm using a patched ollama build (for APU support) [1] and can run models like llama3.1:8b and deepseek-r1:14b without issues on my Radeon 780M iGPU, on Arch (btw).

[1] https://github.com/rjmalagon/ollama-linux-amd-apu
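
Once the patched ollama server is running, it speaks the standard ollama HTTP API, so you can sanity-check GPU inference with a few lines of Python. A minimal sketch, assuming the server is listening on the default port 11434 and you've already done `ollama pull llama3.1:8b`; the prompt text is just an example:

    import requests

    # Ask the local ollama server for a non-streaming completion.
    # Assumes `ollama pull llama3.1:8b` has already been run.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1:8b",
            "prompt": "Explain what ROCm is in one sentence.",
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

If the APU patch is working, `ollama ps` (or the server logs) should show the model loaded on the GPU rather than falling back to CPU.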



