
It was so bad that we just moved to immutable GPU infrastructure, physical or virtual. When a new release of all the NVIDIA stuff comes out, we re-image the machine and install it.
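
For illustration, a minimal sketch of what that bake step could look like. Assumptions not from the thread: an Ubuntu base image with NVIDIA's apt repo already configured, and the pinned package names below are placeholders for whatever combination you actually validate.

    #!/usr/bin/env python3
    """Sketch of an image-bake step for an immutable GPU host: install a
    pinned driver and CUDA toolkit into a fresh base image, then hold the
    packages so nothing drifts. Versions here are assumed placeholders."""
    import subprocess

    DRIVER_PKG = "nvidia-driver-550"  # assumed pin
    CUDA_PKG = "cuda-toolkit-12-4"    # assumed pin; must match the driver

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def bake():
        run(["apt-get", "update"])
        # Install only the pinned packages; a newer release means a new image,
        # never an in-place upgrade.
        run(["apt-get", "install", "-y", DRIVER_PKG, CUDA_PKG])
        # Hold them so routine updates can't drift the image.
        run(["apt-mark", "hold", DRIVER_PKG, CUDA_PKG])

    if __name__ == "__main__":
        bake()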

CUDA on Linux with ML/GPU workloads is still kind of a hot mess, and I'd say we're far from finding a winner like some here suggest.

It's gotten better... but it's still far easier to treat it like a mess and start fresh with any install.
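
A quick post-reimage sanity check along those lines, purely as a sketch: print what the freshly imaged host actually reports for driver and toolkit versions, so a mismatch shows up before any workload does. Assumes nvidia-smi and nvcc are on PATH.

    #!/usr/bin/env python3
    """Print the driver version reported by nvidia-smi and the toolkit
    release reported by nvcc, to confirm a fresh image is consistent."""
    import subprocess

    def out(cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    driver = out(["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]).strip()
    toolkit = next(line for line in out(["nvcc", "--version"]).splitlines() if "release" in line)
    print("driver :", driver)
    print("toolkit:", toolkit.strip())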



