
For the scenes that they’re showing, 76ms is an eternity. Granted, it will get (a lot) faster, but this being better than traditional rendering is still a way off.
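(For reference: 76 ms per frame works out to roughly 13 fps, while a 60 fps real-time budget is about 16.7 ms per frame, so it's several times too slow for real time even on these small scenes.)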


Yeah, and the big caveat with this approach is that it scales quadratically with scene complexity, as opposed to the usual methods, which are logarithmic. Their examples only have 4096 triangles at most for that reason. It's a cool potential direction for future research, but there's a long way to go before it can wrangle real production scenes with hundreds of millions of triangles.
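A rough back-of-envelope sketch of why that matters, assuming the quadratic-vs-logarithmic characterization above holds (constant factors and function names here are illustrative, not from the paper):

  # Compare how per-frame cost grows under the two scaling claims
  # from the comment above. Costs are in arbitrary relative units.
  import math

  def cost_quadratic(n_triangles: int) -> float:
      # Cost growing as N^2 (the neural approach, per the comment)
      return float(n_triangles) ** 2

  def cost_logarithmic(n_triangles: int) -> float:
      # Cost growing as log2(N) (BVH-style traversal per ray)
      return math.log2(n_triangles)

  for n in (4_096, 1_000_000, 100_000_000):
      q = cost_quadratic(n)
      l = cost_logarithmic(n)
      print(f"N={n:>11,}  quadratic~{q:.2e}  logarithmic~{l:.1f}  ratio~{q / l:.1e}")

Going from 4096 triangles to 100 million multiplies the quadratic cost by roughly 6e8, while the logarithmic cost barely doubles, which is why the examples stay so small.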


I'd sooner expect them to use this to 'feed' a larger neural path tracing engine where you can get away with one sample every x frames. Those already do a pretty good job of producing great-looking images from what seems like noise.

I don't think the conventional similarity matrix in the paper is all that important to them.



