
What do you mean? 8 or 16 GPUs? That would require changing the code to use distributed TensorFlow...



Yes, exactly. The instances with 8 or 16 GPUs. Does the training time reduce linearly? Is the GPU utilisation 100%? Is it plug and play with TF?
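
For single-machine multi-GPU the change is fairly small in TF 2.x: tf.distribute.MirroredStrategy replicates the model across the visible GPUs and averages gradients each step. A minimal sketch follows; the model, batch size, and dummy data are placeholders, not from any particular setup, and whether speedup is near-linear mostly depends on the input pipeline keeping the GPUs fed.

  import tensorflow as tf

  # MirroredStrategy = synchronous data parallelism across all GPUs on one machine.
  strategy = tf.distribute.MirroredStrategy()
  print("Replicas in sync:", strategy.num_replicas_in_sync)

  # Scale the global batch size with the number of GPUs so each replica
  # sees the same per-device batch size as a single-GPU run.
  per_gpu_batch = 64
  global_batch = per_gpu_batch * strategy.num_replicas_in_sync

  # Model and optimizer must be created inside the strategy's scope.
  with strategy.scope():
      model = tf.keras.Sequential([
          tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
          tf.keras.layers.Dense(10),
      ])
      model.compile(
          optimizer=tf.keras.optimizers.Adam(),
          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
          metrics=["accuracy"],
      )

  # Dummy data stands in for a real input pipeline; at 8-16 GPUs the
  # pipeline is usually what caps GPU utilisation below 100%.
  x = tf.random.normal((10_000, 784))
  y = tf.random.uniform((10_000,), maxval=10, dtype=tf.int32)
  dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(global_batch)

  model.fit(dataset, epochs=2)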



