
The timing comparison with the reference is very disingenuous.

In ray tracing, error scales with the inverse square root of the sample count, so halving the noise costs four times as many samples. While it is typical to use a very high sample count for the reference, real-world sample counts for offline renderers are about 1-2 orders of magnitude lower than in this paper.
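
To see the inverse-square-root behaviour concretely, here is a toy Monte Carlo estimator (a 1D integral standing in for one pixel's radiance estimate, nothing from the paper): quadrupling the samples roughly halves the RMSE, which is why reference-quality sample counts get absurdly expensive.

    import math
    import random

    def mc_estimate(n_samples):
        # Estimate the integral of f(x) = x^2 over [0, 1] (true value 1/3)
        # with uniform random samples, standing in for one pixel's radiance.
        return sum(random.random() ** 2 for _ in range(n_samples)) / n_samples

    def rmse(n_samples, trials=2000):
        true_value = 1.0 / 3.0
        sq_errs = [(mc_estimate(n_samples) - true_value) ** 2 for _ in range(trials)]
        return math.sqrt(sum(sq_errs) / trials)

    # RMSE ~ 1/sqrt(N): each doubling of quality costs 4x the samples.
    for n in (16, 64, 256, 1024):
        print(f"{n:5d} spp  RMSE ~ {rmse(n):.4f}")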

I call it disingenuous because it is very common for a graphics paper to include a very high sample count reference image for quality comparison, but nobody ever does timing comparisons against it.

Since the result is approximate, a fair comparison would be with other approximate rendering algorithms. A modern realtime path tracer + denoiser can render much more complex scenes on a consumer GPU in less than 16 ms.

That's "much more complex scenes" part is the crucial part. Using transformer mean quadratic scaling on both number of triangles and number of output pixels. I'm not up to date with the latest ML research, so maybe it is improved now? But I don't think it will ever beat O(log n_triangles) and O(n_pixels) theoretical scaling of a typical path tracer. (Practical scaling wrt pixel count is sub linear due to high coherency of adjacent pixels)



Modern optimized path tracers in games (probably not Blender) also use rasterization for primary visibility, which is O(n_triangles), yet this is somehow even faster than doing pure path tracing. I guess that's because it reduces the number of samples required to resolve high-frequency texture detail. Global illumination by itself tends to produce very soft (low-frequency) shadows and highlights, so not many samples are required in theory, as long as the denoiser can avoid artifacts at low sample counts.
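
Structurally it looks something like the sketch below. This is a hedged, CPU-pseudocode outline only: rasterize_gbuffer(), trace_path(), and denoise() are hypothetical helpers, and real engines do all of this on the GPU with very different APIs.

    def render_frame(scene, camera, width, height, spp=1):
        # 1. Rasterize primary visibility: one exact, noise-free hit per pixel
        #    (position, normal, albedo), so no samples are wasted on it.
        gbuffer = rasterize_gbuffer(scene, camera, width, height)   # hypothetical

        radiance = [[0.0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                hit = gbuffer[y][x]
                if hit is None:          # sky / background pixel
                    continue
                # 2. Path trace only the secondary bounces (the soft,
                #    low-frequency GI), starting from the rasterized hit.
                for _ in range(spp):
                    radiance[y][x] += trace_path(scene, hit) / spp  # hypothetical

        # 3. A denoiser cleans up the remaining low-sample GI noise,
        #    guided by the noise-free G-buffer.
        return denoise(radiance, gbuffer)                           # hypothetical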

But yeah, no way RenderFormer in its current state can compete with modern ray tracing algorithms. Though the machine learning approach to rendering is still in its infancy.



