
The paper also says they used 8 Tesla V100s. Those are GPUs dedicated to ML workloads and quite a bit more powerful than an M2.


Can you confirm that was for inference? I thought the 8x V100 figure was only for training (55 min on 8x V100).


You are right, inference uses only a single V100 according to the paper.


I missed the bit about using 8 of them! Wow, that's a lot of GPU horsepower. It would be more efficient to just use a VTuber-style pipeline with Unreal Engine and MetaHumans or other avatars; you only need one good GPU for that.


Follow-up: glad to know I read it right the first time.



