Rendering production-quality cinematic scenes incurs high computational and time costs. From an artist's perspective, one must wait several hours for feedback on even minute changes to light positions and parameters. Previous work approximates scenes so that adjustments to lights can be made with interactive feedback, so long as geometry and materials remain constant. We build on these methods by proposing means by which objects with high geometric complexity at the subpixel level, such as hair and foliage, can be approximated for real-time cinematic relighting. Our methods make no assumptions about the geometry or shaders in a scene and are therefore fully general. We show that clustering techniques can greatly reduce multisampling while maintaining image fidelity, with error significantly lower than sparse sampling without clustering, provided that no shadows are computed. Scenes that produce noise-like shadow patterns when sparse shadow samples are taken suffer additional error introduced by those shadows. We present a viable solution to scalable scene approximation at lower sampling resolutions, provided that a robust solution to shadow approximation for subpixel geometry can be found in the future.
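To illustrate the general idea of reducing multisampling via clustering (not the report's specific algorithm), the sketch below groups subpixel samples by their attribute vectors with a naive k-means, shades only one representative per cluster, and weights each shaded result by its cluster's population. The attribute layout, shader interface, and parameter choices here are illustrative assumptions.

```python
import numpy as np

def cluster_samples(attrs, k, iters=10, seed=0):
    """Naive k-means over per-sample attribute vectors
    (e.g. position and normal). Returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = attrs[rng.choice(len(attrs), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest cluster center.
        dists = np.linalg.norm(attrs[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its members.
        for c in range(k):
            members = attrs[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

def shade_clustered(attrs, shade_fn, k):
    """Shade one representative per cluster instead of every
    subpixel sample; weight by cluster population."""
    labels, centers = cluster_samples(attrs, k)
    counts = np.bincount(labels, minlength=k)
    total = np.zeros(3)
    for c in range(k):
        if counts[c]:
            total += counts[c] * shade_fn(centers[c])
    return total / len(attrs)  # population-weighted average color
```

With N subpixel samples and k clusters, the shader is invoked k times rather than N, which is the source of the sampling reduction; fidelity then depends on how well cluster representatives stand in for their members.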
Dartmouth Digital Commons Citation
Kerr, William B. and Pellacini, Fabio, "Light-Based Sample Reduction Methods for Interactive Relighting of Scenes with Minute Geometric Scale" (2007). Computer Science Technical Report TR2007-600. https://digitalcommons.dartmouth.edu/cs_tr/301