First, those render times sound hugely inflated to me.
Second, this is apples and oranges. The 'ray tracing' being compared here is simple and naive. Pixar uses their own renderer, PRMan (RenderMan), which has been developed over multiple decades. There is a big difference in rendering for film, including micropolygons, texture filtering, more and higher-resolution textures, far fewer lighting cheats, much more customization, motion blur, depth of field, subsurface scattering, multi-layered glossy surfaces that are neither fully specular nor diffuse, better area lighting models, etc.
The original Coco frame times were 1000 hrs/frame, thanks to millions of light sources (and scenes coming in at 30 GB to 120 GB of data). But engineers got that down to 50 hrs/frame over the course of production, via light acceleration structures that have since shipped in RenderMan 22. (These numbers are single-core benchmarks, so you then divide by however many cores you have available.) In general, I think it's the huge scene data size that keeps feature films mostly on the CPU.
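To make that core-count scaling concrete, here's a minimal back-of-envelope sketch; the 96-core node is an assumed example, not a figure from the comment, and perfect parallel scaling is an idealization:

```python
# Back-of-envelope: wall-clock hours per frame on an n-core machine,
# assuming (unrealistically) perfect parallel scaling.
def wall_clock_hours(single_core_hours: float, cores: int) -> float:
    return single_core_hours / cores

# Coco figures quoted above, on a hypothetical 96-core render node:
print(wall_clock_hours(1000, 96))  # ~10.4 hrs/frame before the optimization
print(wall_clock_hours(50, 96))    # ~0.52 hrs/frame after
```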
It takes 29 hours for a single CPU core (or thread?) to render one frame, which is reasonable for Pixar-quality scenes.
Big Hero 6 was rendered using 55,000 CPU cores in parallel; at 29 core-hours per frame, that would bring the final render time for 144k frames to roughly 76 hours (assuming maximum efficiency and ignoring all the test renders and overhead).
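The arithmetic, spelled out (all three inputs come from the comments above; the perfect-utilization assumption is doing a lot of work):

```python
# Idealized render-farm throughput: total core-hours divided by core
# count, assuming every core stays busy 100% of the time.
frames = 144_000           # ~100 min of film at 24 fps
core_hours_per_frame = 29  # single-core figure quoted above
cores = 55_000             # reported Big Hero 6 farm size

total_core_hours = frames * core_hours_per_frame  # ~4.18M core-hours
print(total_core_hours / cores)                   # ~75.9 wall-clock hours
```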
For reference, you can download a production benchmark scene (https://www.blender.org/download/demo-files/) from one of the Blender team's short films and try to render it at 1080p — my 16-thread Ryzen CPU takes over an hour to finish one frame.
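If you want to run that without opening the UI, here's a minimal sketch using Blender's command-line interface; `demo_scene.blend` is a placeholder for whichever demo file you downloaded:

```python
# Render frame 1 of a downloaded demo scene headlessly via Blender's CLI.
# "demo_scene.blend" is a placeholder path; substitute your download.
import subprocess

subprocess.run([
    "blender",
    "-b", "demo_scene.blend",  # -b: run in background (no UI)
    "-o", "//frame_####",      # output pattern, relative to the .blend
    "-f", "1",                 # render a single frame (frame 1)
], check=True)
```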
Also, feature films are rendered at 4K or above, which has four times the pixels of 1080p and so roughly quadruples the render time.
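The pixel-count ratio behind that claim, as a quick check (using UHD 3840×2160; DCI 4K is slightly wider, and this assumes render time scales roughly linearly with pixel count at a fixed sample count):

```python
# UHD "4K" vs 1080p pixel counts; render time scales ~linearly with
# pixel count at fixed samples per pixel.
uhd_pixels = 3840 * 2160
fhd_pixels = 1920 * 1080
print(uhd_pixels / fhd_pixels)  # 4.0 -> roughly 4x the render time
```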
"NVIDIA Turing architecture-based GPUs enable production-quality rendering and cinematic frame rates..."
Raytracing at 24fps?
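To put the marketing claim next to the numbers above, here's a rough sketch of the gap, using the optimized Coco figure (a single-core number, so this overstates the gap for a many-core GPU, but the order of magnitude is the point):

```python
# Gap between a film-quality offline render and a 24 fps frame budget.
film_seconds_per_frame = 50 * 3600  # Coco's optimized 50 core-hours
realtime_budget_seconds = 1 / 24    # per-frame budget at 24 fps
print(film_seconds_per_frame / realtime_budget_seconds)  # ~4.3 million x
```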