### Dispersion, Spectral Filtering, Subpixel Sampling

The Bunny model is available here. The Ajax bust is available here (thanks jotero!). The Bunny scene rendered in under 3 hours; the Ajax scene took 8 hours.

The Cubo prism model is available here. For comparison, here is a proper rendering by luxrender. The prism shows off a new feature: spectral rendering. I used 20,000,000 photons, which took about 1h20m to emit, plus another 10 hours for rendering. (Without diffuse final gather it renders in about 2.5 hours.)

Dispersion in the glass is computed using Cauchy’s equation with dense flint glass (SF10) coefficients. Spectral computations are done in a way similar to this project, but with support for arbitrary wavelengths. I sample a wavelength from the visible spectrum and compute a “wavelength filter”, which is just an RGB color. I convert the CIE XYZ response for that wavelength to RGB and normalize such that a uniform sampling of the entire spectrum produces white (1, 1, 1) instead of (0.387463, 0.258293, 0.240652). Then I scale the emitted photon color and primary rays’ radiance by the normalized color. I sample the wavelengths with importance sampling according to the average CIE XYZ response.

With this spectral filtering I have to take more samples to eliminate the chromatic noise, but the result is consistent with the non-spectral result whenever there is no wavelength-dependent reflection such as dispersion: without dispersion, the spectral and non-spectral renders match. When there *is* wavelength-dependent reflection, you get results like the prism image.

Finally, the Cubo prism image and the last batch of sphere BRDF tests use subpixel sampling (similar to what's done in smallpt). I divide each pixel into 4×4 subpixels: I scale the image resolution by 4, render, do all the tone mapping, and then shrink the image back down to the target resolution by averaging. This produces much sharper results at the cost of increased memory usage. It's partially based on what Sunflow does, whose source I used as a guide, but without the adaptive anti-aliasing.
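The shrink step amounts to a box filter over each 4×4 block. A hypothetical `downsample` helper (not Pane's actual code) could look like this, assuming the tone-mapped, supersampled image is a row-major list of rows of RGB tuples:

```python
def downsample(img, factor=4):
    """Average each factor x factor block of subpixels into one output pixel.

    img: row-major list of rows of (r, g, b) tuples, rendered and tone mapped
    at `factor` times the target resolution (dimensions divisible by factor).
    """
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            acc = [0.0, 0.0, 0.0]
            for sy in range(factor):
                for sx in range(factor):
                    p = img[y + sy][x + sx]
                    for c in range(3):
                        acc[c] += p[c]
            # Box filter: plain average of the block.
            row.append(tuple(a / (factor * factor) for a in acc))
        out.append(row)
    return out
```

Averaging after tone mapping, as described above, is what keeps the result sharp: the nonlinear tone map is applied per subpixel rather than to an already-blurred pixel.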

## 6 Comments so far


Any reason you haven’t posted anything for six months?

No good reason, just been super lazy.

So, you’re still working on Pane. That’s great! There’s lots of stuff here I’m planning for liar as well, but you know, progress is slow =)

About how much longer does it take to clean up the chromatic noise vs. a non-spectral render?

Another question: do you get much out of using the Sunflow technique of super-sampling if you use a pixel reconstruction filter other than the box (e.g. tent filter)?

How are you generating photons for the Cubo scene? Some kind of importance sampling? For me, 100,000,000 (random) photons wasn’t enough.

Also, how did you compute the dispersion coefficients? The original was 1.4+50.0/(w-230), but I curve-fitted this to a Sellmeier form. The coefficients are A=1, B0=0.940608, C0=0.061544, B1=0.220931, C1=0.061271, B2=-30.611816, C2=-262.595686, in case you're interested.
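For what it's worth, the two formulas in this comment can be checked against each other with a short sketch. The units are my assumption (w in nanometers for the original formula, wavelengths in micrometers for the Sellmeier coefficients), but with that reading the fit tracks the original closely over the visible range:

```python
def n_original(w_nm):
    """The original dispersion formula quoted in the comment (w in nm)."""
    return 1.4 + 50.0 / (w_nm - 230.0)

def n_sellmeier(w_nm):
    """Sellmeier form with the curve-fit coefficients from the comment.

    n^2 = A + sum_i B_i * L^2 / (L^2 - C_i), with L assumed to be the
    wavelength in micrometers. The negative B2/C2 pair just gives a smooth
    correction term over the fit range.
    """
    A = 1.0
    B = (0.940608, 0.220931, -30.611816)
    C = (0.061544, 0.061271, -262.595686)
    L2 = (w_nm * 1e-3) ** 2  # wavelength squared, in um^2
    n2 = A + sum(b * L2 / (L2 - c) for b, c in zip(B, C))
    return n2 ** 0.5
```

Evaluating both across 400–700 nm, the two agree to within about 0.001 in refractive index.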