
What's the difference between ray tracing and path tracing? And could we have a path tracing chip?

This demo was pretty impressive. I think they said they used 4 Titans at 720p.

https://www.youtube.com/watch?v=aKqxonOrl4Q



Ray tracing works backwards from the way actual optics works. It sends out a ray for each pixel, finds what it falls on, then does the math to figure out what that point should look like. In a simple example, you have a few light sources and you trace a ray back to an object (a polygonal surface), then you shoot off other rays from there toward the light sources (or you shoot off potential reverse reflection rays). If something is blocking a light source, you adjust the amount of shadow being rendered, or you bounce off of that surface again toward the light sources, and so on. This makes it possible to render shadows, reflections, and so on with decent fidelity.
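The camera-ray and shadow-ray idea can be sketched in a few lines of Python. This is an illustration, not a renderer: the scene is just spheres, and `ray_sphere` / `in_shadow` are made-up helper names.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance t along a ray, or None if it misses.
    Assumes `direction` is unit length."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1 for a unit ray)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def in_shadow(point, light_pos, spheres):
    """Shadow ray: is any sphere between this point and the light?"""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist = math.sqrt(sum(x * x for x in to_light))
    d = tuple(x / dist for x in to_light)
    for center, radius in spheres:
        t = ray_sphere(point, d, center, radius)
        if t is not None and t < dist:
            return True
    return False
```

A real tracer then recurses: at each hit it spawns reflection/refraction rays and repeats, up to some depth.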

In contrast to that technique you have "radiosity", where the illumination at each point on a surface is calculated based on its environment. Radiosity is better at rendering diffuse light; ray tracing is better at rendering reflected light.
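As a toy illustration of the radiosity formulation (all numbers here are invented): each patch's outgoing brightness B satisfies B = E + rho * F * B, where E is emission, rho is diffuse reflectivity, and F is the form-factor matrix describing how much each patch sees of the others. It can be solved by simple iteration:

```python
# Two patches facing each other: patch 0 emits light, patch 1 only reflects.
E = [1.0, 0.0]          # emission per patch
rho = [0.5, 0.8]        # diffuse reflectivity per patch
F = [[0.0, 0.2],        # illustrative form-factor matrix F[i][j]
     [0.2, 0.0]]

# Jacobi-style iteration of B = E + rho * (F @ B) until it converges.
B = E[:]
for _ in range(100):
    B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(2))
         for i in range(2)]
```

The converged B gives each patch's brightness including all inter-reflections; real radiosity solvers spend most of their effort computing the form factors F.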

The underlying difficulty is that illumination is a maddeningly combinatorial problem. In principle every part of every object in a scene contributes illumination to every part of every other object. The light falling on a desk lamp from the LEDs of a clock also illuminates the desk itself, and so on. Radiosity and ray tracing attempt to solve those problems by making simplifying assumptions and performing a subset of the calculations necessary to illuminate and render a scene completely faithfully. In principle a more proper ray tracing algorithm would send out a huge number of rays in every direction for every camera ray, follow each of those through its evolution, send out yet more from each surface those fall on, and so on. But that's far too computationally intensive (it's an O(n!)-level problem).

Path tracing is similar to these techniques but makes different simplifying assumptions. The most important aspect is that instead of deterministically calculating everything in a specific way, it uses a Monte Carlo method: statistically sampling different "paths" and using the data to estimate the resulting illumination/image rendering. Path tracing results in a compromise between ray tracing and radiosity, being able to render both reflective and diffuse light well, though it has its own shortcomings.
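The Monte Carlo estimator pattern behind path tracing can be shown with a minimal, non-rendering example of my own: estimate the hemisphere integral of cos(theta), whose exact value is pi (the normalization behind Lambertian shading), by averaging random samples divided by their sampling density:

```python
import math
import random

def mc_cosine_integral(n, seed=0):
    """Monte Carlo estimate of the integral of cos(theta) over the unit
    hemisphere. Uniform hemisphere sampling has pdf 1 / (2*pi)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # For uniform hemisphere sampling, the z-coordinate (= cos theta)
        # is uniform on [0, 1] (Archimedes' hat-box theorem).
        cos_theta = rng.random()
        total += cos_theta * (2.0 * math.pi)  # sample value / pdf
    return total / n
```

A path tracer does exactly this, except each "sample" is a whole random light path and the integrand is the light it carries; the estimate converges to the true illumination as more paths are averaged.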


This sounds somewhat wrong. Ray tracing is just another term for path tracing, as paths are made up of traced rays. A naive camera-to-light Whitted-style ray tracer is still a sampling path tracer, but one biased toward a specific subset of paths: those which follow specular bounces from the eye to the light, or those which have one diffuse bounce.

Much of the progress in photorealistic rendering has come from improving the techniques used to sample paths. Bi-directional path tracing samples paths by tracing them from both the light and the eye, then performing a technique called multiple importance sampling in order to weight them appropriately. This allows the algorithm to pick up light paths that would otherwise be hard to reach, such as those which form caustic reflections.
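The weighting step can be illustrated with the balance heuristic, a common multiple importance sampling weight (the pdf values below are arbitrary): a path produced by one strategy is weighted by its sampling density relative to the total density across strategies, so the weights for the same path sum to one.

```python
def balance_heuristic(pdf_this, pdf_other):
    """MIS weight for a sample from one strategy, given the density the
    other strategy would have assigned to the same path."""
    return pdf_this / (pdf_this + pdf_other)

# The same path, reachable from both ends: the eye-tracing strategy
# sampled it with density 0.3, the light-tracing strategy with 0.1.
w_eye = balance_heuristic(0.3, 0.1)    # weight for the eye-path sample
w_light = balance_heuristic(0.1, 0.3)  # weight for the light-path sample
```

Paths that one strategy samples well (high pdf) get most of their weight from that strategy, which is why bi-directional tracing handles caustics that pure eye tracing almost never finds.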

One of the more recent developments on this front is the use of the "specular manifold", which lets a path sampler "walk" in path space to capture even more partially-specular paths. (See Wenzel 2013.) This technique allows efficient sampling of the main light transport paths in scenes where a specular transparent surface surrounds a light source, e.g. a light bulb casing.

Edit: for this specific hardware, it sounds like they are using a hybrid approach, so they may very well be doing a basic eye-to-light ray tracing algorithm and then using rasterization for diffuse surface shading.


I thought I was clear that path tracing and ray tracing shared similarities. But ultimately they differ in several key ways.

For example, ray tracing tends to be deterministic, and it starts at the view frame and moves toward the scene and then light sources. Path tracing has no such constraint. Modern path tracing techniques use a combination of ray-tracing-like paths as well as paths beginning from light sources (gathering rays and shooting rays, in the parlance of the field).

In principle you could consider path tracing to be some sort of specialized subset of ray tracing (or vice versa for that matter) but in practice it is different enough in its specifics and implications that it makes more sense to treat it as merely related.


This might be a bit of a tangent, but this is one of those problems where it matters which parameter you are looking at.

While it is O(n!) in relation to reflections, a real physical rendering is a 'mere' O(n) in relation to volume (or O(n^3) in relation to the side of the cubic volume we want to simulate). Why? We simply create a grid of roughly 200 nm volumes and run a wave simulation for the light. Or O(n log n) if we want to use an FFT-based simulation.

Naturally this consumes horrendous amounts of memory and computational power, simply because the spaces humans are interested in are so massive compared to the wavelength of light.
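A quick back-of-envelope calculation (my numbers, assuming the ~200 nm grid spacing mentioned above and a modest room-scale volume) shows just how horrendous:

```python
# Voxelizing a 3 m cube of space at a 200 nm grid spacing.
side_m = 3.0
voxel_m = 200e-9
voxels_per_side = side_m / voxel_m   # 15 million cells per axis
total_voxels = voxels_per_side ** 3  # about 3.4e21 cells
# At even one byte of state per cell, that is on the order of
# zettabytes of memory, far beyond any machine today.
```

And that is before storing field amplitudes, phases, or material properties per cell, or stepping the simulation in time.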

But we might eventually move in this direction; maybe we will even live to see it.


I looked into this because I design optics. Too bad I can't use graphics-oriented ray tracing to analyze designs!



