gpu, cpu, hardware, raytracing, rasterizing

Why do we use CPUs for ray tracing instead of GPUs?


After doing some research on rasterisation and ray tracing, I have discovered that there is not much information available on the internet about how CPUs are used for ray tracing. I came across an article about Pixar and how they pre-rendered Cars 2 on the CPU, which took them 11.5 hours per frame. Would a GPU not have rendered this faster with the same image quality?

http://gizmodo.com/5813587/12500-cpu-cores-were-required-to-render-cars-2
https://www.engadget.com/2014/10/18/disney-big-hero-6/
http://www.firstshowing.net/2009/michael-bay-presents-transformers-2-facts-and-figures/
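
For a sense of scale, here is a rough back-of-the-envelope calculation; only the 11.5 hours per frame comes from the article, while the runtime and frame rate are my own assumptions:

```python
# Back-of-the-envelope scale of the Cars 2 render job. Only the 11.5 hours
# per frame figure comes from the article; the runtime and frame rate are
# assumptions for illustration.
HOURS_PER_FRAME = 11.5
FILM_MINUTES = 106           # approximate runtime (assumption)
FPS = 24

frames = FILM_MINUTES * 60 * FPS
total_hours = frames * HOURS_PER_FRAME

print(f"{frames:,} frames")                              # ~152,640 frames
print(f"{total_hours:,.0f} frame-hours of rendering")    # ~1.76 million hours
print(f"~{total_hours / 24 / 365:,.0f} years on a single machine")
```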


Solution

  • I'm one of the rendering software architects at a large VFX and animated feature studio with a proprietary renderer (not Pixar, though I was once the rendering software architect there as well, long, long ago).

    Almost all high-quality rendering for film (at all the big studios, with all the major renderers) is CPU only. There are a bunch of reasons why this is the case; in no particular order, here are some of the really compelling ones, to give you a flavor of the issues.

    But what's different about games? Why are GPUs good for games but not film?

    First of all, when you make a game, remember that it's got to render in real time -- that means your most important constraint is the 60Hz (or whatever) frame rate, and you sacrifice quality or features where necessary to achieve that. In contrast, with film, the unbreakable constraint is making the director and VFX supervisor happy with the quality and look he or she wants, and how long it takes you to get that is (to a degree) secondary.
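
    To put a rough number on that gap, here is a quick sketch; the 10-hours-per-frame figure is purely illustrative, in line with the numbers quoted in the question:

    ```python
    # Rough ratio between a real-time game frame budget and an overnight film
    # render. The 10-hours-per-frame figure is illustrative, in line with the
    # numbers quoted in the question.
    game_budget_s = 1.0 / 60.0      # 60 Hz -> ~16.7 ms per frame
    film_budget_s = 10 * 3600       # ~10 hours per frame, in seconds

    print(f"game frame budget: {game_budget_s * 1000:.1f} ms")
    print(f"film frame budget: {film_budget_s / 3600:.0f} hours")
    print(f"ratio: ~{film_budget_s / game_budget_s:,.0f}x more time per frame")
    ```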

    Also, with a game, you render frame after frame after frame, live in front of every user. But with film, you are effectively rendering ONCE, and what's delivered to theaters is a movie file -- so moviegoers will never know or care if it took you 10 hours per frame, but they will notice if it doesn't look good. So again, there is less of a penalty placed on those renders taking a long time, as long as they look fabulous.

    With a game, you don't really know what frames you are going to render, since the player may wander all around the world and view from just about anywhere. You can't and shouldn't try to make it all perfect; you just want it to be good enough all the time. But for a film, the shots are all hand-crafted! A tremendous amount of human time goes into composing, animating, lighting, and compositing every shot, and then you only need to render it once. Think about the economics -- once 10 days of calendar time (and salary) have gone into lighting and compositing the shot just right, the advantage of rendering it in an hour (or even a minute) versus overnight is pretty small, and not worth any sacrifice of quality or achievable complexity of the image.
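
    Here is a toy calculation of those economics; every number in it is an illustrative assumption rather than a real studio figure:

    ```python
    # Toy illustration of the economics described above. All numbers are
    # illustrative assumptions, not real studio figures.
    artist_days = 10.0              # hand-crafting the shot (figure from the text)
    render_hours = 10.0             # one final overnight render (assumption)

    days_with_overnight_render = artist_days + render_hours / 24
    days_with_instant_render = artist_days    # hypothetical "free" render

    saving = days_with_overnight_render - days_with_instant_render
    print(f"calendar saving from an instant render: ~{saving:.2f} days "
          f"out of {days_with_overnight_render:.2f} "
          f"({saving / days_with_overnight_render:.1%})")
    ```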

    ADDENDUM (2022):

    The world has changed a lot since I wrote this answer in 2016! Once ray tracing acceleration was added to hardware (with NVIDIA RTX cards), ray tracing on GPUs finally became definitively faster than ray tracing the same scene on a CPU -- for scenes of a size that can fit on the GPU. And GPUs have a lot more memory than they did in 2016, so that includes a much wider range of scenes. Lots of games in 2022 use a combination of rasterization and ray tracing (when available), and probably within a couple of years there may be games that are ray traced only. And in the film world, we are all racing to get our renderers ray tracing on GPUs with full feature parity with the CPU ray tracers, but we're not quite there yet. We use the GPUs more and more for various interactive uses during production, but full-complexity final frames are still rendered on the CPU. I think we're within a year or two of some portion of final frames being rendered strictly with GPU ray tracing, and probably within five years of nearly all final film frames being GPU ray traced (though not anywhere near realtime rates).