User:Lukasstockner97/GSoC 2016/Weekly Reports/Week 9

Hi!

So, this week I mainly worked on integrating a shadow pass into the denoiser. In theory, you can use any pass - at first I tried something just like the regular Shadow pass in Cycles, except also including meshes and the background. It kind of worked, but for LWR it's best to maximize the correlation between the feature passes and the image, and since that shadow pass only records the fraction of successful light queries without accounting for differences in light strength, the correlation is weak. Therefore, I went back to the approach I used in the shadow catcher patch - record both the regular light contribution and the contribution as it would be without any shadow checks; the ratio of the two is the amount of shadowing. The result is pretty good, but also quite noisy, and as explained in the last mail, noise in the feature passes carries over into the image.
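
A minimal C++ sketch of that ratio (hypothetical names, not the actual Cycles kernel code):

    /* Accumulated per pixel: the contribution with shadow rays applied,
     * and the same contribution as if every shadow ray were unoccluded. */
    struct ShadowFeature {
      float shaded;
      float unshaded;
    };

    /* 1.0 = fully lit, 0.0 = fully shadowed; the epsilon avoids division
     * by zero in pixels that receive no light at all. */
    inline float shadow_amount(const ShadowFeature &f)
    {
      return f.shaded / (f.unshaded + 1e-6f);
    }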

So, the shadow pass needs to be prefiltered. For that, I combined some techniques from "Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings" and "Adaptive Rendering with Non-Local Means Filtering":

  • Instead of using one pass, the info is split into two buffers, one containing the odd samples and one the even ones.
  • Then, the variance of the feature pass can be estimated from the squared difference of the even and odd ratios (the buffer variance) - it's quite noisy, though.
  • Variance can also be estimated on the fly during sampling (the sample variance), but due to the ratio involved it can only be approximate.
  • So, to get values that are both smooth and accurate, the buffer variance is filtered using Non-Local Means filtering with weights coming from the sample variance - see the sketch after this list.
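
Here is a small C++ sketch of one plausible reading of those two estimates (hypothetical standalone helpers, not the actual branch code):

    #include <cmath>

    /* Buffer variance: for two independent half buffers, the squared
     * difference of their means, divided by four, estimates the variance
     * of the combined mean - accurate on average, but noisy per pixel. */
    inline float buffer_variance(float even, float odd)
    {
      float d = even - odd;
      return 0.25f * d * d;
    }

    /* NLM-style weight for smoothing the buffer variance: two pixels may
     * share their noisy buffer-variance values if their (smoother, but
     * approximate) sample-variance values are similar. k controls the
     * filter strength; the small constant avoids division by zero. */
    inline float variance_filter_weight(float smp_var_p, float smp_var_q, float k)
    {
      float d = smp_var_p - smp_var_q;
      return std::exp(-(d * d) / (k * k * (smp_var_p + smp_var_q) + 1e-4f));
    }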

Then, this variance info is used to smooth the even and odd buffers independently. To get rid of remaining artifacts, a second filter pass is performed, with variance info coming from the difference of the two filter results.
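
The residual-variance estimate for that second pass could look like this (again just a sketch with hypothetical names):

    /* After filtering the two half buffers independently, their squared
     * difference estimates the variance that remains after the first
     * pass, which then drives the second, artifact-removing pass. */
    inline float residual_variance(float filtered_even, float filtered_odd)
    {
      float d = filtered_even - filtered_odd;
      return 0.25f * d * d;
    }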

That way, we get a smooth, noise-free and accurate shadow feature pass to use in the denoiser. The actual changes required to use the additional feature are minimal. One thing that still needs to be done is downweighting the shadow pass around edges, since it can get a bit funky there and currently causes a bit of noise on edges.
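
One possible form such a downweighting could take (purely an illustration of the idea, nothing from the branch yet):

    /* Fade the shadow feature's influence as the local image gradient
     * grows, so unreliable values near edges carry less weight. */
    inline float shadow_feature_weight(float gradient_magnitude, float edge_scale)
    {
      return 1.0f / (1.0f + edge_scale * gradient_magnitude);
    }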

That already improved results a lot, but that annoying bug that caused haloing on image edges was still there. So, to finally track it down, I hacked the old demo code into the new system and ran two instances of GDB with the old and new code, respectively. By comparing every value, I found the problem: as an optimization, the new code stores the inverse of the bandwidth, which saves a lot of divides. However, in another part of the code I forgot about that and multiplied it by a factor where I should have divided (since it's the inverse). The effect of that bug was that when the filter decided to get narrower to preserve edges, it actually got wider, causing these halos.
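
In simplified form (hypothetical names, not the actual code), the mix-up looked like this:

    /* The optimization stores inv_bandwidth = 1 / bandwidth to save
     * divides. Narrowing the filter scales the bandwidth by factor < 1,
     * so the stored reciprocal must be divided, not multiplied: */
    inline float narrow_filter(float inv_bandwidth, float factor /* < 1 */)
    {
      return inv_bandwidth / factor;  /* correct: filter gets narrower */
      /* return inv_bandwidth * factor;  the bug: filter got wider -> halos */
    }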

Also, I fixed a bug that caused feature passes to be all black with OSL.


Originally, I intended to clean up and push all this today after my university exam, but I couldn't get home due to everything happening in Munich, so I'll do it tomorrow. Sorry for the long time without commits, but I can promise some amazing filter quality improvements - the bugfix makes the branch work just as well as the old builds, and the shadow pass makes it work even better!

Considering the amazing performance of the NLM filtering for the shadow feature, I'll continue improving the core filter algorithm next week - using image-based NLM weights instead of the feature-pass-based Epanechnikov kernels should reduce overblurring in cases where the features still don't explain details (as happens with glass or slightly rough objects). Also, doing even-odd splitting for the image as well, filtering both halves independently and then doing a second pass like the one for the shadow pass, might help to get rid of remaining artifacts. So, expect more quality improvements in the future!
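
For contrast, here are the two weighting schemes mentioned above in their generic textbook forms (not the branch implementation):

    #include <cmath>

    /* Epanechnikov kernel on a normalized feature-space distance d: */
    inline float epanechnikov_weight(float d)
    {
      return d < 1.0f ? 1.0f - d * d : 0.0f;
    }

    /* NLM weight on the distance between two image patches, with filter
     * strength h: */
    inline float nlm_image_weight(float patch_distance, float h)
    {
      return std::exp(-patch_distance / (h * h));
    }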

Lukas