Dev:Source/Render/Cycles/Optimization/Notes


Latest revision as of 06:08, 29 June 2018

Optimization Notes

Light Paths

Noise Free
  • Diffuse x point light
  • Glossy x point light
  • Sharp x point light
  • Sharp x area light (if MIS is enabled)
Little Noise
  • Ambient occlusion
  • Diffuse x area light
  • Sharp x anything (if not too many bounces)
Pretty Noisy
  • Glossy x area light (if MIS is enabled)
  • Diffuse x diffuse x ...
  • Glossy x diffuse x …
  • DoF or motion x diffuse
Very Noisy
  • High number of diffuse or glossy bounces
  • Diffuse x sharp glossy x … (caustic)
  • DoF or motion x sharp glossy

So it can help to exclude more expensive light paths or replace them by something else. When you exclude them there will of course be missing light for which the artists will have to compensate. Replacing can mean:

  • Use a local trick like AO to replace many light bounces
  • Blur a sharp glossy surface to a softer glossy after a diffuse bounce (that’s what Filter Glossy does)
  • Replace a glossy surface by a diffuse surface
  • For glass you can replace many bounces by a constant exit color after N bounces
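The Filter Glossy trick above can be sketched as follows. This is an illustrative simplification, not Cycles' actual code, and the function and parameter names are hypothetical: clamp a glossy closure's roughness to a blur amount derived from the roughest bounce seen so far along the path.

```python
def filter_glossy_roughness(roughness, prev_bounce_roughness, filter_strength):
    """Roughness to use for a glossy closure (hypothetical sketch).

    roughness:             the closure's own roughness
    prev_bounce_roughness: roughest bounce seen so far along the path
                           (1.0 after a diffuse bounce, 0.0 for camera rays)
    filter_strength:       user Filter Glossy setting; 0 disables the trick
    """
    blur = prev_bounce_roughness * filter_strength
    return max(roughness, blur)

# A sharp glossy surface seen directly stays sharp:
assert filter_glossy_roughness(0.0, 0.0, 0.5) == 0.0
# Seen after a diffuse bounce it is blurred, turning a noisy caustic
# into soft, cheap-to-sample indirect light:
assert filter_glossy_roughness(0.0, 1.0, 0.5) == 0.5
```

The bias this introduces is exactly the "blur a sharp glossy surface" replacement described above: less accurate, far less noisy.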

One thing we are missing is the ability to disable a BSDF for indirect light. For example, (glossy x point light) is fast and useful for giving a specular highlight, but (glossy x diffuse) may be too noisy to be worth it. Being able to disable the latter would be useful.

Another is support for light groups or layers, so specific interactions between objects and lights can be disabled if they are too noisy.

Branching

What Cycles Does
  • Classic path tracing will only follow a single light path, much like a photon would. At each vertex you pick one BSDF/BSSRDF and one direction to continue the path in.
  • Path tracing with next event estimation (as Cycles uses) connects every vertex on the path to one randomly sampled position on a light as well. So at that point the path is temporarily branched.
  • Branched path tracing will sample all BSDFs and BSSRDFs at the first hit, and sample all lights for camera rays. The rest of the path is like regular path tracing, except that there is also an option to sample all lights each time instead of one.
  • Branched path tracing also handles transparency differently: it fully samples all BSDFs, BSSRDFs and lights at each transparent surface hit from the camera ray. In regular path tracing a transparent BSDF would be randomly picked among the other BSDFs; here it is always picked.
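The difference between picking one BSDF and one light versus branching over all of them can be sketched like this. `eval_direct` and the probability bookkeeping are simplified assumptions for illustration, not the actual integrator code:

```python
import random

def shade_path_tracing(bsdfs, lights, eval_direct):
    """Classic path tracing with next event estimation: pick one BSDF
    and one light at random; dividing by the uniform picking probability
    (i.e. multiplying by the counts) keeps the estimate unbiased."""
    bsdf = random.choice(bsdfs)
    light = random.choice(lights)
    return eval_direct(bsdf, light) * len(bsdfs) * len(lights)

def shade_branched(bsdfs, lights, eval_direct):
    """Branched path tracing at the first hit: evaluate every BSDF
    against every light, so each camera ray does much more work
    but with far less variance."""
    return sum(eval_direct(b, l) for b in bsdfs for l in lights)
```

Both estimators have the same expected value; branching trades cost per sample for variance, which is exactly the tradeoff discussed below.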
Tradeoffs

Picking one BSDF, one light, etc. can introduce significant noise but is also clearly faster per sample. For complex scenes or lighting setups that require many bounces picking just one can be helpful, because you might need to try many different variations of the start of the path to find a light at the end.

With fewer bounces that is less helpful, and branching helps more. Branching does get expensive if you have many lights, since looping over all lights each sample is slow, but it’s a tradeoff.

For production renders where render time is limited, it’s probably best to use branched path tracing and avoid putting too many lights in the scene to keep render times reasonable.

Lights

  • Area lights are noisier than point lights for direct light
  • However, they can be less noisy for indirect light, because inverse-square falloff produces extremely high values for point lights
    • For production it is probably best to always use smooth light falloff, at least for indirect light
    • Blender Internal always applies similar smoothing, so extreme values are not possible when the shading point is near the light
  • Blender Internal does a trick where it only evaluates shadows for area lights but still treats shading as coming from a point light
  • This leads to less noise but also some strange results
    • I don’t think this is a trick that should be added to cycles
  • Mesh emitters should probably get multiple importance sampling disabled by default
    • Meshes that emit light weakly can take away too many samples from meshes that do contribute a lot
    • Not clear for users that this happens
  • Multiple importance sampling is not enabled by default for lamps
    • Maybe it should be, but there is a performance impact
    • Generally users need to be mindful of where to enable MIS
  • For scenes with many lights, looping over all lights and checking if they influence the current shading point may be slow
    • Solution could be some sort of light tree (similar to a BVH for triangles?) to quickly cull lights
  • Lights also have inverse-square falloff, which means their influence extends very far
    • Some sort of maximum distance or intensity cutoff would help skip lights when there are many
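The smooth falloff idea can be sketched as follows: instead of pure inverse-square, add a smoothing term to the denominator so intensity stays bounded near the light. The exact formula Blender uses may differ; this is one common form, shown under that assumption:

```python
def quadratic_falloff(strength, dist):
    # plain inverse-square falloff: blows up as dist approaches 0
    return strength / (dist * dist)

def smooth_falloff(strength, dist, smooth):
    # bounded variant: approaches strength/smooth near the light
    # and converges to inverse-square at larger distances
    return strength / (smooth + dist * dist)

# very close to the light, plain falloff gives extreme values,
# the smoothed version stays bounded:
assert quadratic_falloff(1.0, 0.01) > 9999.0
assert smooth_falloff(1.0, 0.01, 0.1) < 10.0
```

Bounding the value near the light is what removes the indirect-light fireflies mentioned above.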

Camera Rays

Depth of Field and Motion Blur
  • Doing it in compositing is much faster and perhaps the most practical
    • Deep compositing will give better quality result for transparency and antialiasing.
    • Perhaps preview renders could still use it for tweaking, and then have a better way for compositing to use the same settings as rendering
    • Still need to split in render layers to avoid issues with missing pixels behind blurred objects
  • REYES style motion blur
    • Is faster, though need some sort of REYES dicing or shader caching to fit in a path tracer
    • Does not give you motion blurred shadows or reflections
    • Probably too difficult to fit in
  • 3D sample sequences for pixel filter + time or 4D for pixel filter + lens may reduce noise
Transparency
  • For hair shading, it may be good to cache shader evaluations at curve key points and interpolate
    • Lots of overlapping transparent hairs
    • Transparency is also caused by the minimum width feature, even if not used in the shader
  • Camera rays might benefit from the same optimization recently added for shadow rays
    • Recording all possible transparent surface intersections in one go
Antialiasing
  • With branched path tracing fewer pixel samples are possible which can help performance
    • Improved filtering in the shader may be needed to make the most of that
    • Procedural noise could use tricks like frequency clamping to remove high frequency components
    • Better quality image texture filtering (as implemented in OIIO) can help as well
    • OSL backend supports good ray differentials in shading, SVM does not
  • Blender Internal tries to shade only once per pixel; samples within a single pixel are merged
    • Helps in some cases but also gives artifacts that you can’t get rid of (subtle flickering)
    • With global illumination, shading for camera rays is not the main cost anymore though
    • So practical benefit might not be so big anymore
    • Don’t think this is a good way to go
  • Cycles currently uses “filtered importance sampling” to implement pixel filters
    • It may be faster to do as Blender Internal does and let pixel samples contribute to multiple pixels
    • This requires padding pixels around tiles which can make things slower again with small tiles
    • If we’re clever those padding pixels can be shared however between tiles if they are cached somewhere
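Filtered importance sampling can be illustrated with a tent (triangle) pixel filter, whose CDF inverts in closed form: instead of weighting each sample's contribution by the filter, sample positions are distributed according to it. This is a standard textbook construction, shown as a sketch:

```python
import math

def sample_tent(u):
    """Invert the CDF of a tent filter of radius 1: a uniform u in [0,1)
    maps to an offset in [-1,1) whose density is highest at the pixel
    centre and falls off linearly towards the edges."""
    if u < 0.5:
        return math.sqrt(2.0 * u) - 1.0
    return 1.0 - math.sqrt(2.0 * (1.0 - u))

# the middle of the uniform range maps to the pixel centre,
# the low end maps to the filter edge
assert sample_tent(0.5) == 0.0
assert sample_tent(0.0) == -1.0
```

With this scheme every sample lands in exactly one pixel, which is what avoids the tile-padding problem of splatting into neighbours.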

Shader Evaluation

  • With path tracing you get many shader evaluations, so important to make them as fast as possible
  • Only shaders visible to the camera, or seen through sharp reflections or refractions, need to be accurate
  • So production shaders should have two evaluation modes, an accurate one and a fast one, where fast can mean:
    • No detailed procedural textures
    • No glossy, or replaced by diffuse
    • Constant color
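A two-mode shader dispatch could look roughly like this; the `PathState` fields and function name are hypothetical, chosen only to illustrate the idea:

```python
from collections import namedtuple

# hypothetical path state; a real integrator tracks this in its path flags
PathState = namedtuple("PathState", ["is_camera_ray", "via_sharp_specular"])

def choose_shader_variant(path):
    """Use the accurate shader only where the difference is visible:
    camera rays, and paths arriving via sharp reflections/refractions.
    Everything else (diffuse indirect, soft glossy) gets the fast one."""
    if path.is_camera_ray or path.via_sharp_specular:
        return "accurate"
    return "fast"

assert choose_shader_variant(PathState(True, False)) == "accurate"
assert choose_shader_variant(PathState(False, False)) == "fast"
```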

Production Shaders

  • Remembering to use all production tricks is not convenient; it’s best to have a number of presets with these tricks enabled by default
  • Ideally a production should mainly use 20 or so preset shader groups for different kinds of materials, worlds and lights
  • For predictability these shaders should be tested in some standard light setup (maybe a HDRI world)
    • That way artists don’t have to tweak materials too much for specific scenes, but can instead tweak the lights, knowing that the materials react as they should

Difficult Light Paths

  • For production rendering without a huge render farm, it’s probably best to avoid difficult light paths like caustics or high number of bounces entirely
  • Caustics can be blurred out or omitted
  • If lower number of bounces are not enough, more lights can be placed manually or ambient occlusion added (perhaps only when shading for indirect light to make it less obvious)

Caching

  • Irradiance caching could speed up diffuse indirect light a lot but is prone to flickering. An ideal implementation should:
    • Do one or more pre-passes to distribute samples well
    • Use a lot of heuristics to decide the weight and validity of interpolating samples at a given point
    • Use smart interpolation using irradiance gradients or least squares interpolation
    • Support bump mapping well by e.g. storing spherical harmonics rather than a single color
    • Support efficient multithreading
    • Give consistent results on repeated renders (not easy with a cache shared between threads)
    • Implementations to look at might be Mitsuba and the Blender Internal one in the render branch (there’s no ideal one)
  • An alternative is the multiresolution shading cache used by Renderman
    • Works better with threading due to per thread cache
    • Main issue is that it relies a lot on the dice/split REYES architecture and low res base meshes
    • Not entirely clear how to implement that in a path tracer, especially how to pick things like dicing resolution
  • For animation, caching in a way trades artist tweaking time for improved render times
    • Not as easy to get artifact free renders but big render time gains are possible
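For the interpolation heuristics, the classic weight from Ward-style irradiance caching is a reasonable reference point: a record's weight falls off with distance (relative to the record's harmonic mean distance to surrounding geometry) and with deviation between normals. A sketch with hypothetical function names:

```python
import math

def record_weight(p, n, rec_p, rec_n, rec_harmonic_dist):
    """Ward-style weight for an irradiance cache record: high when the
    shading point is close to the record and the normals agree."""
    d = math.dist(p, rec_p)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, rec_n))))
    denom = d / rec_harmonic_dist + math.sqrt(max(0.0, 1.0 - dot))
    return float("inf") if denom == 0.0 else 1.0 / denom

def usable(weight, max_error):
    # a record contributes only if its weight beats the error threshold
    return weight > 1.0 / max_error
```

Tuning `max_error` is precisely the quality-versus-flicker tradeoff: looser thresholds interpolate more and render faster, but make temporal artifacts more likely.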

Memory

  • Keeping all geometry in memory is pretty much a must
    • Rays fly all over the place, difficult to find enough coherence to make caches work fast
    • Better focus on compression and lowering memory usage than caching
    • Mesh storage in Cycles can be reduced
    • Data structures are sometimes duplicated for blender, intermediate cycles data and cycles kernel data
  • For image textures and volume data, caching is possible
    • OpenImageIO gives this mostly for free
    • But good ray differentials are needed to make it work, being very careful to only access high res when needed
    • Need a good system for user to autogenerate tiled, mipmapped .tx and .exr files
    • A big gain would already come from loading only lower resolution tiles
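Mip level selection from ray differentials can be sketched as: map the ray differential's footprint, measured in texels of the base level, to the level where it covers roughly one texel. This is a simplified illustration of the idea, not OIIO's actual logic:

```python
import math

def mip_level(footprint_in_texels, num_levels):
    """Each mip level halves resolution, so a footprint of 2^k texels
    maps to level k; clamp to the coarsest available level."""
    if footprint_in_texels <= 1.0:
        return 0  # full resolution needed
    level = int(math.log2(footprint_in_texels))
    return min(level, num_levels - 1)

assert mip_level(1.0, 10) == 0   # differential covers one texel: full res
assert mip_level(8.0, 10) == 3   # 8-texel footprint: three levels down
```

This is why good ray differentials matter: without them the footprint is unknown and the renderer must conservatively touch the high resolution tiles.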

BVH

  • Embree kernels from Intel are really good
    • They added support for instancing and motion blur, and are also working on hair support on GitHub
  • BVH for hair can probably be significantly improved
    • Lots of overlapping curves means you get a lot of needless intersections
  • The biggest potential improvements for triangle BVHs are probably:
    • Better quality BVH building for complex scenes (analyzing BVH and trying to find places where it performs poorly)
    • Multithreaded spatial splits builder so it can be enabled more often or by default
    • SIMD for triangle intersections
  • Making BVH traversal faster makes the entire renderer faster
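For reference, the surface area heuristic (SAH) used to judge BVH split quality during building can be written as a small cost function; the constants and exact form vary per implementation, so treat this as a sketch:

```python
def sah_cost(sa_left, n_left, sa_right, n_right, sa_parent,
             c_traversal=1.0, c_intersect=1.0):
    """Expected cost of splitting a node: traversal cost plus the cost
    of intersecting each child's primitives, weighted by the probability
    (surface area ratio) that a random ray hits that child."""
    return (c_traversal
            + (sa_left / sa_parent) * n_left * c_intersect
            + (sa_right / sa_parent) * n_right * c_intersect)

# a balanced split into two half-area children with 4 primitives each
assert sah_cost(0.5, 4, 0.5, 4, 1.0) == 5.0
```

Analyzing where a built BVH's actual cost diverges from the SAH estimate is one way to find the poorly performing regions mentioned above.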