User:Brecht/RenderIdeas

Materials vs. Shaders

  • There are two broad approaches to shading in render engines. More physically based render engines tend to be built around passive materials, which are used by an integrator. In contrast, render engines based on RenderMan, for example, tend to have active shaders which themselves act as integrators, spawning rays from the shader itself.
  • The advantage of passive materials is that advanced lighting algorithms can be implemented better, since the integrator stays in full control. Active shaders, however, fit flexible nodes and shading languages better, since there the shader itself retains control.
  • Blender rendering is more geared towards active shaders, as there is no clear separation between the material and an integrator. Clearly it would be ideal to get the best of both worlds. How to do this best in practice is still fuzzy for me.
  • The main problem is formulating nodes as a passive material. A reasonable way to implement this could be to ignore certain nodes (e.g. ambient occlusion) when doing photon bounces, and to add a separate output for more advanced uses.

Shading

  • Currently BRDFs are only taken into account for some parts of rendering. Physically speaking, mirror reflection should use the specular BRDF, for example, instead of a separate gloss value, and environment light should use the diffuse BRDF rather than assuming Lambert.
  • One organization that could be made in the UI and code is diffuse/specular/transmission x lights/indirect/environment. For example, specular x indirect is mirror reflection, and diffuse x environment corresponds to (color) AO. SSS fits this organization fairly well too, as a diffuse shader. Volume rendering may require a different organization than diffuse/specular/transmission, but can still use lights/indirect/environment (see the sketch below).
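
A minimal sketch of this grid organization in Python; all names here are hypothetical, not existing Blender API:

    # the two axes of the proposed organization
    BXDF_TYPES   = ("diffuse", "specular", "transmission")
    SOURCE_TYPES = ("lights", "indirect", "environment")

    # how some existing features map onto the grid:
    #   ("specular", "indirect")    -> mirror reflection
    #   ("diffuse",  "environment") -> (color) AO
    #   SSS is a diffuse shader, so it fills the "diffuse" row

    def shade_point(material, point):
        # sum the contribution of every (bxdf, source) cell; a volume
        # shader would replace the BxDF axis with something else but
        # could still reuse the same source axis
        result = 0.0
        for bxdf in BXDF_TYPES:
            for source in SOURCE_TYPES:
                result += material.eval(bxdf, source, point)
        return result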

Textures

  • Texture node in material nodes: if the texture coordinate is not set, it should be possible to set the coordinate from the texture buttons. The geometry node and transform nodes can of course be used, but it just seems more convenient to have the Map Input panel available and keep the node setup clean.
  • Active texture: the active texture in the texture buttons could be used for display, texture painting and UV editing, rather than the active UV texture layer. This raises the question of how to edit/display procedurals or various texture mapping methods, but I guess it is reasonable to disable things when they are not supported in the 3d view.
  • Texface: this should be separated from UV layers, and managed from the texture buttons. The image texture can work as it does now and by default just use one image. But then there can also be an option to assign an image texture per face rather than using one image, basically replacing texface. This would of course still be stored in the mesh, in "image" layers (see the sketch below).
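
A rough sketch of how such a texface replacement could be laid out; ImageTexture, per_face_layer and image_layers are hypothetical names, not actual Blender data structures:

    class ImageTexture:
        def __init__(self, image, per_face_layer=None):
            self.image = image                    # default: a single image
            self.per_face_layer = per_face_layer  # name of a mesh "image" layer

    def image_for_face(tex, mesh, face_index):
        # fall back to the single image unless per-face assignment is enabled
        if tex.per_face_layer is None:
            return tex.image
        # the per-face images still live in the mesh, in an "image" layer
        return mesh.image_layers[tex.per_face_layer][face_index]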

Nodes

  • Nodes provide a lot of flexibility, but at a certain point become difficult to sample from a physically based point of view. A system could be made to support sampling node shaders anyway, and as long as they are not doing too crazy things it will still work. The tricky part is mostly in the shader calculations. As long as nodes are used to texture various inputs, and then after shading to blend different shaders, it may work to just detect the shader nodes and do an averaged sampling of those only, regardless of the particular node setup.
  • Shader nodes can only output color and alpha now, while render passes need diffuse, specular, AO, etc. One way to support this is to provide those as outputs to the user and let them hook it up. Another is to add a new kind of data type that can be exchanged between nodes, which rather than one color holds a number of colors for the different passes (see the sketch below). A subset of nodes could then support mixing all passes rather than one color, and as long as only those are used, passes would be preserved for the compositor. I prefer multiple outputs though.
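
The second option, a pass-aware data type, could look roughly like this Python sketch; PassColor and the chosen set of passes are illustrative assumptions:

    from dataclasses import dataclass

    Color = tuple  # (r, g, b)

    @dataclass
    class PassColor:
        # one color per render pass instead of a single combined color
        diffuse: Color
        specular: Color
        ao: Color

    def mix_passes(a, b, fac):
        # a pass-aware mix node: blends every pass with the same factor,
        # so the passes stay valid for the compositor downstream
        def lerp(x, y):
            return tuple((1.0 - fac) * xi + fac * yi for xi, yi in zip(x, y))
        return PassColor(lerp(a.diffuse, b.diffuse),
                         lerp(a.specular, b.specular),
                         lerp(a.ao, b.ao))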

Alpha

  • Alpha handling is confused in Blender, and there are various things that need to be fixed. It should be made clear and documented where premul or key alpha are used and why. Loading image files should do the right conversion by default.
  • Alpha for the texture stack should not have to be set to 0.0 to work; it should just be 1.0 like other values that you can texture.
    • This is not just for alpha, but for all intensity values in the texture stack. --matt
  • Physically speaking, alpha is not something you really care about; it is just a matter of rays being reflected or refracted. However, alpha is necessary for compositing, z-transparency, and blending between shaded results, for example, so it cannot be ignored. How it should be treated during shading should be clarified and documented; I don't have a good grasp on how to do this yet.
    • You can think about alpha in a couple of ways, both of which have different uses. One is 'coverage', i.e. clipmaps, for making leaves, determining where you want to shade or pass through. Another is 'transmission', i.e. a factor for how much light reaches a point (in the scene, or on the camera). In the volume render code I made alpha equal to the inverse of the luminance of an RGB transmission value. I'm not sure whether these two meanings for alpha need separation in the shading pipeline or not, but something to think about at least. --matt
  • One way to define it is to think of the alpha channel for the combined pass as what you get by leaving out the last transmission x environment step in the shading tree. That does give you a different alpha for each color channel, though this could be folded down to a single value, for example via luminance as in the sketch below. And what to do with passes?
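
A minimal sketch of such luminance-based folding, along the lines of the volume render approach described above; the Rec. 709 luminance weights are an assumption, the actual code may weight differently:

    def alpha_from_transmission(transmission_rgb):
        # alpha as the complement of the luminance of an RGB
        # transmission value: fully transmitting -> alpha 0.0,
        # fully blocking -> alpha 1.0
        r, g, b = transmission_rgb
        luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
        return 1.0 - luminance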

Lighting Support

Overview of which lighting effects are supported in Blender.

From         BxDF          Implemented   Note
Light        Diffuse       Yes           Wrong falloff (1)
Light        Specular      Yes           Wrong falloff (1)
Light        Transmission  No            (3)
Environment  Diffuse       Yes           (Color) AO; BRDF not taken into account
Environment  Specular      Partial       Only with Mirror, fade to sky
Environment  Transmission  Yes           Ray transparency; no shadows, no BTDF
Indirect     Diffuse       No            Radiosity was removed (2)
Indirect     Specular      Yes           Mirror; separate, limited Phong BRDF
Indirect     Transmission  Yes           Ray transparency; limited Phong BTDF


(1) The inverse square falloff formula for lamps is wrong. The code computes dist/(dist + r^2), but true inverse square falloff is 1/r^2, and there is no value of dist for which the two match. It may be useful to avoid the falloff going to infinity near r = 0, but this formula totally alters the result. Physically speaking, the Distance value is actually unnecessary.
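
For comparison, a Python sketch of the current formula next to physically correct falloff; the clamp parameter is a hypothetical way to keep the intensity finite near r = 0:

    def blender_falloff(r, dist):
        # what the code currently computes: tends towards 1.0 as dist
        # grows, so no dist value reproduces true inverse square falloff
        return dist / (dist + r * r)

    def physical_falloff(r, clamp=0.0):
        # physically correct 1/r^2 falloff; a small positive clamp avoids
        # the intensity going to infinity near r = 0, without needing the
        # Distance parameter at all
        return 1.0 / (clamp + r * r)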

(2) Even when it existed, radiosity only supported diffuse-diffuse reflections, not diffuse-specular/transmission, also known as caustics.

(3) Transmission x Light is unlikely to happen if everything is modelled as having volume; in that case the light would have to be inside the object. Transmission x Environment then can't happen at all either. So not handling those cases does not make much difference if you model things with volume.

Parameters

  • One issue is how many and which parameters to expose to the user. It would be good to define the few parameters that are needed to make things physically correct, and then more to do non-physical tweaks, set to e.g. 0.0 or 1.0 by default to keep things correct unless you change them.
  • There are various equivalent ways to organize material parameters, but it would be good to make it so that as long as you tweak the "physical" sliders within their default ranges, you still get correct results, e.g. so that an incoming light ray has probability <= 1 of being reflected/refracted. This is somewhat difficult when combining two arbitrary diffuse/specular shaders; I am not sure how to do that. But disregarding that, the basic relation could be:
    • (diffuse_intensity*diffuse + specular_intensity*specular)*transp + (1 - transp)*transmission + emission
  • Note that this formula would be used for diffuse/specular/transmission from all sources, including the environment. Also, transp may be changed by fresnel before use. This is all very basic stuff (see the sketch at the end of this section).
  • Some examples of how existing things fit in:
    • Ambient becomes a multiplier for light coming in from the environment (and perhaps from other surfaces too, depending on whether we want a separate value for that), and defaults to 1.0.
    • Specular transparency is removed.
    • Gloss amount would be replaced by the specular shader hardness.
    • SSS becomes a diffuse shader.
  • Unifying things like this is not ideal though, for reasons of control & performance. So it would be good to allow separate control, but I'm not sure how best to communicate this in the UI. I guess mirror settings could for example have an option "follow specular shader" which you could turn off for more control. It is also not clear how a user would understand that to create a perfect mirror, the specular hardness would have to be set to infinity.
  • In LuxRender, for example, there are no separate diffuse/specular/transmission shaders; one shader has everything included. This makes sense as well, since the diffuse/specular/transmission split is artificial. Both options are possible, as such combined shaders can be split up in the render engine for algorithms that need it, even if the split is not exposed to the user. But making these kinds of things clear does need a good UI.
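
A sketch of the basic relation above with the energy conservation constraint made explicit; intensities and shader results are reduced to plain floats for brevity:

    def combine(diffuse, specular, transmission, emission,
                diffuse_intensity, specular_intensity, transp):
        # for the "physical" sliders to stay correct:
        #   0 <= diffuse_intensity + specular_intensity <= 1
        #   0 <= transp <= 1
        # then an incoming ray has probability <= 1 of being
        # reflected/refracted; transp may already have been
        # modified by fresnel at this point
        reflected = (diffuse_intensity * diffuse +
                     specular_intensity * specular) * transp
        refracted = (1.0 - transp) * transmission
        return reflected + refracted + emission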