Cycles Open Questions
Ray Intersection
- Ray intersection precision with instances: to avoid self-intersections in ray tracing, we use the refined intersection and offset scheme from the motion blur thesis at http://gruenschloss.org/. However, this breaks down when we use instancing: the extra forward/inverse matrix transformations introduce precision loss again. Is there a way to make this work, or do we need to keep using bigger epsilons to work around the issue? (A sketch of the offset idea follows this list.)
- Ray-hair intersection: how do we write an efficient ray-hair intersection routine? All the code I've seen is very slow; it would need to be 10x-100x faster to be usable in practice, and I have no idea how to get there besides tessellating the hair into triangles (a naive baseline is sketched below the list). Arnold somehow manages to trace hair reasonably fast, but I have no idea how they do it.
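
For the precision question above, a minimal sketch of the ulp-offset idea from the referenced thesis: step the integer representation of each coordinate of the hit point a few ulps along the geometric normal, falling back to an absolute offset near zero. The constants, the float3 type and the function names are illustrative assumptions, not the exact Cycles code.

#include <cmath>
#include <cstdint>
#include <cstring>

struct float3 { float x, y, z; };

static inline float offset_component(float p, float n)
{
    const float epsilon_f = 1e-5f; /* absolute offset near zero (assumed) */
    const int epsilon_i = 32;      /* offset in ulps (assumed) */

    if (n == 0.0f)
        return p;
    if (std::fabs(p) < 1e-4f)      /* near zero: plain float offset */
        return p + n * epsilon_f;

    /* step the float's integer representation along the normal direction */
    uint32_t bits;
    std::memcpy(&bits, &p, sizeof(bits));
    bits += ((p < 0.0f) == (n < 0.0f)) ? epsilon_i : -epsilon_i;
    float r;
    std::memcpy(&r, &bits, sizeof(r));
    return r;
}

/* Offset point P away from the surface along geometric normal Ng. */
float3 ray_offset(float3 P, float3 Ng)
{
    return {offset_component(P.x, Ng.x),
            offset_component(P.y, Ng.y),
            offset_component(P.z, Ng.z)};
}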
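
And for the ray-hair question, a naive per-segment baseline of the kind that is too slow in practice: intersect the ray with a cylinder of the hair's radius around each curve segment. Vec3, the helpers and the omitted end-cap handling are assumptions for illustration.

#include <cmath>

struct Vec3 { float x, y, z; };

static inline Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static inline Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Intersect ray O + t*D with a cylinder of radius r around segment A-B.
 * Returns true and the nearest t >= 0 on a hit (end caps ignored). */
bool ray_cylinder(Vec3 O, Vec3 D, Vec3 A, Vec3 B, float r, float *t_out)
{
    Vec3 axis = sub(B, A);
    float axis_len2 = dot(axis, axis);
    if (axis_len2 == 0.0f)
        return false;

    /* remove the components of D and O-A parallel to the axis */
    Vec3 oc = sub(O, A);
    Vec3 d_p = sub(D, scale(axis, dot(D, axis) / axis_len2));
    Vec3 o_p = sub(oc, scale(axis, dot(oc, axis) / axis_len2));

    /* |o_p + t*d_p|^2 = r^2, a quadratic in t */
    float a = dot(d_p, d_p);
    float b = 2.0f * dot(o_p, d_p);
    float c = dot(o_p, o_p) - r * r;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f || a == 0.0f)
        return false;

    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    if (t < 0.0f)
        t = (-b + std::sqrt(disc)) / (2.0f * a);
    if (t < 0.0f)
        return false;

    /* keep the hit only if it lies between the segment endpoints */
    Vec3 P = {O.x + t * D.x, O.y + t * D.y, O.z + t * D.z};
    float s = dot(sub(P, A), axis) / axis_len2;
    if (s < 0.0f || s > 1.0f)
        return false;

    *t_out = t;
    return true;
}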
Sampling
- Retry samples: a randomly picked BSDF, BSDF direction or light may turn out to have no influence and just give black. In such a case it might help to retry sampling with a different random number. Is it worth it? How do you do such rejection sampling with QMC sampling, where it seems only one number is available? Is there some way to generate a suitable random number for this case; would a pseudorandom number be appropriate? (One way to generate such numbers is sketched after this list.)
- Adaptive sampling: is there a reliable algorithm for this? There are various papers and implementations, but I don't see much evidence that this is actually used in movie production; my impression is that generally a fixed number of samples is used. Is there a method that is sufficiently reliable that you could send many frames to a renderfarm and get back consistent results with a good speedup? (A simple variance-based stopping test is sketched after this list.)
- Light sampling: in path tracing, is there a more efficient way to pick lights for sampling than picking each light with equal probability? Is there a particular weighting that works better than uniform weighting as a default? Light intensity, distance and size may all work in some cases, but can also perform significantly worse due to more samples being wasted on occluded lights. Can we do better than just giving the user control over the weights? (CDF-based picking is sketched after this list.)
- Path termination: what is the best way to decide when to terminate a path? Probably Russian roulette based on the path throughput, but I have the impression this often undersamples dark areas, especially as linear => display color space transformations tend to make these areas brighter. Should there be some sort of adjustment for the estimated pixel intensity? (See the roulette sketch after this list.)
- Multi-BSDF sampling: when you need to sample a path direction from two or more BSDFs, you pick one of the BSDFs and then sample a direction. You then have two ways to adjust the path throughput: use only the picked BSDF, or take all of them into account by evaluating all the BSDFs. It's not clear to me which method is better; sometimes evaluating all of them reduces noise, but sometimes it increases it. Is this a bug or expected behavior, and can we do better? (Both weighting strategies are sketched after this list.)
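
For the retry-samples question, one possible way to generate the extra numbers: keep the QMC number for the first try and fall back to pseudorandom numbers from an integer hash for the retries. sample_light() is a hypothetical stand-in, and the sketch deliberately leaves open how to keep the estimator unbiased.

#include <cstdint>

/* hypothetical stand-in: returns the sampled contribution, 0 if occluded */
float sample_light(float u);

/* Wang hash mapped to [0, 1) */
static inline float hash_to_float(uint32_t k)
{
    k = (k ^ 61u) ^ (k >> 16);
    k *= 9u;
    k ^= k >> 4;
    k *= 0x27d4eb2du;
    k ^= k >> 15;
    return (k >> 8) * (1.0f / 16777216.0f);
}

float sample_with_retries(uint32_t pixel_seed, float qmc_u, int max_tries)
{
    float result = sample_light(qmc_u);  /* first try: the QMC number */
    for (int i = 1; i < max_tries && result == 0.0f; i++)
        result = sample_light(hash_to_float(pixel_seed + i));
    return result;
}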
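
For adaptive sampling, a sketch of the kind of per-pixel stopping test the papers tend to use: track mean and variance with Welford's algorithm and stop once a 95% confidence interval falls below a relative error threshold. The thresholds are assumed parameters; whether this is reliable enough for a renderfarm is exactly the open question.

#include <cmath>

struct PixelStats {
    int n = 0;
    double mean = 0.0;
    double m2 = 0.0;  /* sum of squared deviations from the mean */

    void add(double sample) {
        n++;
        double delta = sample - mean;
        mean += delta / n;
        m2 += delta * (sample - mean);
    }

    /* min_samples is assumed >= 2 so the variance estimate is defined */
    bool converged(double rel_error, int min_samples) const {
        if (n < min_samples)
            return false;
        double variance = m2 / (n - 1);
        double ci = 1.96 * std::sqrt(variance / n);  /* 95% interval */
        return ci <= rel_error * std::fmax(mean, 1e-3);
    }
};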
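
For light sampling, a sketch of picking one light from a non-uniform weight table via its CDF. The mechanics below are standard; the choice of weights (intensity, 1/distance^2, size, or user-set) is the open question.

#include <algorithm>
#include <vector>

struct LightPicker {
    std::vector<float> cdf;  /* running sum of weights, normalized */

    /* assumes a positive total weight */
    explicit LightPicker(const std::vector<float> &weights) {
        cdf.resize(weights.size());
        float sum = 0.0f;
        for (size_t i = 0; i < weights.size(); i++)
            cdf[i] = (sum += weights[i]);
        for (float &c : cdf)
            c /= sum;
    }

    /* returns the light index and its pick probability, needed to weight
     * the sample by 1/pdf */
    int pick(float u, float *pdf) const {
        int i = int(std::lower_bound(cdf.begin(), cdf.end(), u) - cdf.begin());
        i = std::min(i, int(cdf.size()) - 1);
        *pdf = cdf[i] - (i > 0 ? cdf[i - 1] : 0.0f);
        return i;
    }
};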
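
For path termination, a sketch of throughput-based Russian roulette with a hypothetical 'adjust' factor standing in for the estimated pixel intensity, so that paths through dark pixels survive longer:

#include <algorithm>

/* Returns false to terminate the path; on survival, the caller multiplies
 * the throughput by *inv_prob to stay unbiased. */
bool russian_roulette(float throughput_max, float u, float *inv_prob,
                      float adjust = 1.0f)
{
    /* survival probability proportional to the path's remaining
     * throughput, boosted where the (estimated) pixel is dark */
    float p = std::min(1.0f, throughput_max / adjust);
    if (u >= p)
        return false;
    *inv_prob = 1.0f / p;
    return true;
}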
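
For multi-BSDF sampling, the two throughput-weighting strategies side by side, assuming equal pick probability 1/n and taking each BSDF's value and pdf for the sampled direction as precomputed inputs:

struct BsdfEval {
    float f;    /* BSDF value * cos(theta) for the sampled direction */
    float pdf;  /* sampling pdf of that direction under this BSDF */
};

/* Strategy A: account only for the BSDF that was picked. */
float weight_picked_only(const BsdfEval &picked, int n)
{
    float pick_prob = 1.0f / n;
    return picked.f / (pick_prob * picked.pdf);
}

/* Strategy B: one-sample MIS with the balance heuristic, evaluating all
 * BSDFs for the sampled direction. Assumes the direction has nonzero pdf
 * under at least the picked BSDF. */
float weight_all(const BsdfEval evals[], int n)
{
    float f_sum = 0.0f, pdf_sum = 0.0f;
    for (int i = 0; i < n; i++) {
        f_sum += evals[i].f;
        pdf_sum += evals[i].pdf / n;  /* pick_prob * pdf */
    }
    return f_sum / pdf_sum;
}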
Shading
- Multiresolution radiosity cache: the ideas in this Pixar paper are quite cool: caching, yet still interactive and with no file I/O. Can we use this without splitting and dicing the whole scene before rendering? The multiresolution aspect seems to rely on the REYES pipeline. For simple cases it might be possible to compute the appropriate resolution and grid location on the triangle on the fly (as sketched after this list), but how do we pick the resolution on triangles that would need many splits in a REYES rendering engine?
- Side note: we could also use such a shading cache to speed up depth of field and motion blur, caching the shading even if it isn't strictly diffuse, similar to what REYES can already do through rasterization. It could also interact well with the Filter Glossy setting, which turns glossy into diffuse reflections after a blurry reflection and would make them cacheable. And it would be suitable as a cache for SSS. Lots of possibilities here.
- Antialiasing specular highlights: does a good method exist to antialias specular highlights? If the reflections are very sharp, it could be useful to make them a bit softer to avoid aliasing, but not too much. Is there a way to do this with ray differentials? (A roughness-widening idea is sketched after this list.)
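
For the radiosity cache question, a hedged sketch of picking a shading-grid resolution on the fly from the ray differential footprint instead of REYES split/dice. All parameters are assumptions, and the capped case is exactly where REYES would split:

#include <algorithm>
#include <cmath>

int grid_resolution(float triangle_edge_length, float footprint_radius,
                    int max_res = 64)
{
    /* aim for roughly one grid cell per ray footprint */
    float cells = triangle_edge_length / std::fmax(footprint_radius, 1e-6f);
    int res = int(std::ceil(cells));
    return std::clamp(res, 1, max_res);  /* large triangles hit the cap,
                                            where REYES would need splits */
}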
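
For antialiasing highlights, one roughness-widening idea in the spirit of Toksvig-style normal filtering: estimate the normal variance over the ray differential footprint and fold it into the BSDF roughness. The variance input is assumed to come from ray differentials; this is a sketch, not an existing Cycles feature.

#include <algorithm>
#include <cmath>

float filtered_roughness(float roughness, float normal_variance_in_footprint)
{
    /* adding variances approximates convolving the NDF with the
     * footprint's distribution of normals */
    float r2 = roughness * roughness + normal_variance_in_footprint;
    return std::min(1.0f, std::sqrt(r2));
}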
Motion Blur
- How do we find a bounding box for a motion blurred object, without taking N samples and using the union of those bounds plus some epsilon (that baseline is sketched after this list)? Is there any code or algorithm available for this?
- Is there a more efficient algorithm for interpolating matrices for motion blur (both the forward and inverse matrix)? The code I've come up with is still too slow to be called millions of times. (A decompose-and-interpolate sketch follows this list.)
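
For the bounding box question, the N-sample baseline described above, for reference: union the bounds at sampled times and pad by an epsilon to cover motion between the samples. bounds_at_time() is a hypothetical stand-in for evaluating the motion transform.

#include <algorithm>

struct BBox {
    float min[3], max[3];
    void grow(const BBox &b) {
        for (int i = 0; i < 3; i++) {
            min[i] = std::min(min[i], b.min[i]);
            max[i] = std::max(max[i], b.max[i]);
        }
    }
    void pad(float eps) {
        for (int i = 0; i < 3; i++) { min[i] -= eps; max[i] += eps; }
    }
};

/* assumes n >= 1 time samples over the shutter interval [0, 1] */
BBox motion_bounds(BBox (*bounds_at_time)(float t), int n, float eps)
{
    BBox box = bounds_at_time(0.0f);
    for (int i = 1; i <= n; i++)
        box.grow(bounds_at_time(float(i) / n));
    box.pad(eps);  /* epsilon to cover motion between the samples */
    return box;
}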
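
For matrix interpolation, a common decompose-once approach, sketched under assumptions: precompute translation/rotation/scale per motion step, then per ray only nlerp the quaternion and lerp the rest before rebuilding the matrix. The structures are illustrative, and the matrix rebuild and its inverse are omitted.

#include <cmath>

struct Quat { float w, x, y, z; };
struct DecomposedXform { Quat rot; float trans[3]; float scale[3]; };

static Quat nlerp(const Quat &a, Quat b, float t)
{
    /* take the short way around, then normalized lerp (cheaper than slerp) */
    float d = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    if (d < 0.0f) { b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z; }
    Quat q = {a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
              a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
    float len = std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
    q.w /= len; q.x /= len; q.y /= len; q.z /= len;
    return q;
}

DecomposedXform interpolate(const DecomposedXform &a,
                            const DecomposedXform &b, float t)
{
    DecomposedXform r;
    r.rot = nlerp(a.rot, b.rot, t);
    for (int i = 0; i < 3; i++) {
        r.trans[i] = a.trans[i] + t * (b.trans[i] - a.trans[i]);
        r.scale[i] = a.scale[i] + t * (b.scale[i] - a.scale[i]);
    }
    /* the matrix and its inverse are then rebuilt from r; omitted here */
    return r;
}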