Learning to Cluster for Rendering with Many Lights
Yu-Chen Wang1
Yu-Ting Wu1
Tzu-Mao Li2,3
Yung-Yu Chuang1
National Taiwan University1
MIT CSAIL2
University of California San Diego3
All results are generated on a machine with an 8-core Intel Core i7-9700 CPU (using 4 cores) and 32 GB of RAM.
Scenes (click on a thumbnail to select the scene comparison shown at the bottom of the page):
Bathroom (120 sec.)
Bedroom (120 sec.)
Classroom (120 sec.)
Kitchen with VPLs (120 sec.)
Living-room (60 sec.)
Parking-lot (360 sec.)
Sanmiguel with VPLs (480 sec.)
SiA-shelf (30 sec.)
Staircase (30 sec.)
Staircase2 (60 sec.)
Please select a mode: Result Comparison or Ablation Study.
Compared methods:
Stochastic Lightcuts (SLC) [Yuksel 2019]
Resampled Importance Sampling (RIS) [Talbot et al. 2005, Bitterli et al. 2020] (see the sketch after this list)
Bayesian Online Regression for Adaptive Direct Illumination Sampling (BORAS) [Vevoda et al. 2018]
Variance-aware BORAS (VA-BORAS) [Rath et al. 2020, Vevoda et al. 2018]
Importance Sampling of Many Lights with Reinforcement Lightcuts Learning (RLL) [Pantaleoni 2019]
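For readers unfamiliar with RIS, the sketch below shows the standard single-sample RIS estimator in its weighted-reservoir form (Talbot et al. 2005; Bitterli et al. 2020). It is a generic illustration rather than the implementation used in these comparisons; LightSample, targetPdf, and generateCandidate are assumed placeholder names for the renderer's own candidate type, target function, and candidate generator.

```cpp
#include <random>

// Hypothetical candidate record; sourcePdf is the probability with which the
// candidate was generated (e.g., uniform or power-proportional light selection).
struct LightSample {
    int lightIndex = -1;
    float sourcePdf = 0.f;
};

// Pick one of M candidates with probability proportional to
// w_i = targetPdf(x_i) / sourcePdf(x_i), using weighted reservoir sampling.
// On success, `chosen` holds the selected candidate and `W` the unbiased
// contribution weight W = (sum_i w_i) / (M * targetPdf(chosen)),
// so the estimator is f(chosen) * W.
template <typename TargetPdf, typename CandidateGen>
bool ResampleOne(int M, TargetPdf targetPdf, CandidateGen generateCandidate,
                 std::mt19937 &rng, LightSample &chosen, float &W) {
    std::uniform_real_distribution<float> u01(0.f, 1.f);
    float wSum = 0.f;
    bool hasSample = false;
    for (int i = 0; i < M; ++i) {
        LightSample x = generateCandidate(rng);
        float w = (x.sourcePdf > 0.f) ? targetPdf(x) / x.sourcePdf : 0.f;
        wSum += w;
        // Reservoir update: keep x with probability w / wSum.
        if (w > 0.f && u01(rng) * wSum < w) {
            chosen = x;
            hasSample = true;
        }
    }
    if (!hasSample) return false;
    float pHat = targetPdf(chosen);
    W = (pHat > 0.f) ? wSum / (float(M) * pHat) : 0.f;
    return true;
}
```

In a many-lights setting, targetPdf is typically an inexpensive, unnormalized estimate of the unshadowed contribution, and the final estimate multiplies the shaded contribution of the chosen sample by W.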
Ablation Study:
We compare different combinations of the clustering update rule and the action-value update rule from reinforcement lightcuts learning (RLL) [Pantaleoni 2019] and from our method (see the configuration sketch after the list below):
RLL/RLL: Use RLL's clustering update rule and RLL's action-value update rule
RLL/Ours: Use RLL's clustering update rule and our action-value update rule
Ours/RLL: Use our clustering update rule and RLL's action-value update rule
Ours/Ours: Use our clustering update rule and our action-value update rule
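To make the four variants concrete, the sketch below wires the two choices together as opaque callbacks. It does not reproduce the actual update formulas from RLL [Pantaleoni 2019] or from our method; ClusterNode, AblationVariant, and LearnFromSample are hypothetical names used only to illustrate how the clustering rule and the action-value rule can be swapped independently.

```cpp
#include <functional>

// Hypothetical per-node state: a learned action value that guides how often
// this cluster is sampled.
struct ClusterNode {
    float q = 0.f;
    // ... light-tree data (bounds, children, visit counts, ...) omitted
};

// Each rule is treated as a black box; either RLL's rule or ours is plugged in.
using ActionValueRule = std::function<float(float q, float contribution)>;
using ClusteringRule  = std::function<void(ClusterNode &node)>;

struct AblationVariant {
    ClusteringRule  updateClustering;   // "RLL" or "Ours"
    ActionValueRule updateActionValue;  // "RLL" or "Ours"
};

// One learning step, applied after a sample drawn from `node` has been shaded
// and its contribution observed.
void LearnFromSample(ClusterNode &node, float contribution,
                     const AblationVariant &variant) {
    node.q = variant.updateActionValue(node.q, contribution);  // action-value update
    variant.updateClustering(node);                            // clustering (cut) update
}
```

Under this wiring, the RLL/Ours variant, for instance, would be built from RLL's clustering callback and our action-value callback.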