Dev:Source/Render/Cycles/Roadmap

Roadmap

This is a roadmap updated at the Cycles developer meeting at the 2017 Blender Conference. It is based on the priorities of the developers who were present and should not be considered a complete or accurate plan. Plans may change over time; the purpose here is to make it easier for developers to cooperate.

The names in parentheses indicate who has already done some work on these features and who you might want to get in contact with if you want to help. They are not task assignments; on many of these topics we could use help from more developers in reviewing code or helping to finish patches.

Here is information on getting in contact with developers.

Targets in Progress

Some code exists for these features:

  • Microdisplacement: still an experimental feature that we'd like to make fully supported. Issues to fix include smooth UV subdivision, excessive subdivision outside of the screen, panorama camera support, viewport updates for displacement shader changes, and new (vector) displacement nodes. A memory cache may be added as well. Not all of these features are required to make microdisplacement non-experimental; that would happen earlier. (Mai)
  • UDIM textures for mapping multiple high-resolution textures to one model. This is implemented in the OpenImageIO texture cache in the latest version, so if we use that for mipmaps and texture caching we almost get UDIMs for free as well, at least in Cycles. However, on the Blender side this would require more work to support it in the UV editor and viewport. (Lukas)
  • Embree: for faster motion blur and hair rendering. BVH building would be moved to Embree; we would still need to keep our own GPU traversal. (Stefan)
  • Mipmaps and texture cache to render more textures with less memory usage. This requires some fairly deep changes to SVM to pass ray differentials along through the nodes, while for OSL this is already automatic (a rough sketch of the idea follows this list). The first implementation would likely use OpenImageIO, which means it would be CPU only to start. On the Blender side this would also require some changes to support .tx files and to (auto)generate them. (Stefan)
  • Thin Surface for Principled BSDF (Pascal)
  • Denoising for animation. There is a Cycles side implementation of this already, but Blender integration will require more work and design changes, for example to integrate with compositing or render farms. (Lukas)
  • Light groups to render separate AOVs / passes for different light groups with minimal overhead, which then lets you tweak the light intensity and color in compositing. (Lukas)
  • Metallic BSDF: this might become a feature of the principled BSDF. (Lukas)
  • Light linking to specify which lights affect which objects. (Tangent Animation)
  • Cube map rendering for VR and panoramas. (Sergey)
  • Network rendering to have multiple computers in a local network cooperate on the same frame. We already have a partial implementation of this but it is disabled. The first implementation of this might only support F12 renders, with viewport renders coming later. (Lukas)
  • Configurable working color space (Lukas)
  • Persistent Data: waiting for Blender 2.8 so this can be implemented properly. (Lukas)
  • Filter Glossy for DoF (Sergey)
  • Many light sampling (Lukas, Stefan)
  • Light motion blur (Lukas)
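
A rough sketch of the ray differential idea from the mipmaps and texture cache item above. This is not the actual Cycles SVM code; the names (Differentials, mip_level_for_footprint) are hypothetical, and the snippet only illustrates how a per-pixel UV footprint could be turned into a mip level once differentials are passed through the nodes.

  /* Hypothetical sketch, not Cycles code: carrying UV differentials through
   * shader nodes so an image texture node can pick a mip level. */
  #include <algorithm>
  #include <cmath>

  struct Differentials {
    float du_dx, du_dy; /* change of U per pixel in screen x and y */
    float dv_dx, dv_dy; /* change of V per pixel in screen x and y */
  };

  /* Each node would transform these along with the UVs; the image texture
   * node then derives a mip level from the texel footprint of one pixel. */
  static int mip_level_for_footprint(const Differentials &d, int width, int height, int num_mips)
  {
    float fx = std::hypot(d.du_dx * width, d.dv_dx * height);
    float fy = std::hypot(d.du_dy * width, d.dv_dy * height);
    float footprint = std::max(fx, fy);
    int level = (footprint <= 1.0f) ? 0 : (int)std::floor(std::log2(footprint));
    return std::min(level, num_mips - 1);
  }

With OSL the compiler derives such differentials automatically; for SVM they would have to be computed and stored explicitly as they flow through the nodes, which is the deep change mentioned above.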

Other Targets

Implementation has not started on these, but they are considered important:

  • Adaptive sampling to focus samples on parts of the image that need them most. Ideally this should use the denoising passes. Some implementations exist, but we need something more robust and less dependent on tiles. (Lukas, Brecht)
  • OpenVDB rendering. This would likely include empty space skipping for better performance. It may be CPU only to begin with if we reuse the OpenVDB code for ray marching and sampling. Ideally this should be coupled with a new volume object datablock on the Blender side.
  • Volume rendering optimizations and sampling improvements
  • Statistics: for power users to investigate why rendering performance is slow, why memory usage is high, which objects or materials to optimize, etc. This would be a generated log that could be shown in Blender or as an HTML report, and possibly also debugging AOVs.
  • Combined Hair and Volume shaders similar to what we have in the Principled BSDF, to make it easier to set up these types of materials as well.
  • Resumable rendering improvements (for render farms), storing metadata in OpenEXR for better usability. (Blender Institute)
  • CPU work stealing: rendering on the CPU requires small tiles to get a good work distribution, and even then the last few tiles may not utilize all cores. We would like a system where multiple cores can cooperate on the same tile (a rough sketch follows this list).
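
A rough sketch of the CPU work stealing idea above, assuming a simple shared atomic counter over the pixels of one tile. It is not the actual Cycles scheduler; render_pixel and render_tile_shared are hypothetical placeholders.

  /* Hypothetical sketch, not Cycles code: several worker threads cooperate on
   * one tile by atomically claiming small ranges of pixels, so the last tiles
   * of a frame no longer leave most cores idle. */
  #include <algorithm>
  #include <atomic>
  #include <thread>
  #include <vector>

  static void render_pixel(int /*x*/, int /*y*/)
  {
    /* Shade and accumulate one pixel (placeholder). */
  }

  static void render_tile_shared(int tile_w, int tile_h, int num_threads)
  {
    std::atomic<int> next_pixel{0};
    const int total = tile_w * tile_h;
    const int chunk = 64; /* pixels claimed per atomic fetch */

    auto worker = [&]() {
      for (;;) {
        int begin = next_pixel.fetch_add(chunk);
        if (begin >= total)
          break;
        int end = std::min(begin + chunk, total);
        for (int i = begin; i < end; i++)
          render_pixel(i % tile_w, i / tile_w);
      }
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < num_threads; t++)
      threads.emplace_back(worker);
    for (auto &t : threads)
      t.join();
  }

The chunk size trades atomic contention against load balance; a real implementation would also need to handle per-thread accumulation buffers and sample counts.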

On Hold

  • Blue-noise dithered sampling for lower noise in viewport renders. Unfortunately it only helps with a handful of AA samples, so it needs additional research to become practical. (Lukas)
  • Micro-jittered sampling for better performance on GPUs. Correlation artifacts remain even with high AA sample counts, so it needs additional research to become practical. (Lukas)

More Ideas

Some places to look for more work or ideas: