User:Hypersomniac/PBR viewport/Pre Render

1. Update Probes

Before we begin rendering we need to make sure we have all the information needed to light our models.

For this we use what is called Probes (Unity's naming). They capture the environment at a specific point in space. We need information in all directions, so we render a cubemap from this specific point.

A world cubemap that contains only the world texture needs to be available as the default probe source. This is the distant probe, infinitely far away.

For local cases, I think each object should have the possibility to become a probe source. This way no other object type is required, and no complicated setup is needed to get a good lighting approximation on one specific object. Local probe assignment is discussed later.

Optimization: we can render the 6 faces of the cubemap in a single draw call using a geometry shader, as sketched below.
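
A minimal sketch of the setup, assuming a GL 3.2+ context with a loader such as GLEW already initialized (function and variable names here are illustrative):

#include <GL/glew.h>

/* Attaching the whole cubemap (no face index) makes the framebuffer
 * "layered": the geometry shader can then emit each primitive 6 times
 * and route every copy to a face by writing gl_Layer (0..5), so a
 * single draw call fills all six faces. */
GLuint create_probe_target(int size, GLuint *out_cubemap)
{
	GLuint fbo, cubemap;
	glGenTextures(1, &cubemap);
	glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
	for (int face = 0; face < 6; face++)
		glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA16F,
		             size, size, 0, GL_RGBA, GL_HALF_FLOAT, NULL);
	glGenFramebuffers(1, &fbo);
	glBindFramebuffer(GL_FRAMEBUFFER, fbo);
	glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, cubemap, 0);
	*out_cubemap = cubemap;
	return fbo;
}
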
Depending on what is needed, we have to precompute the diffuse contribution of this environment.

We have two choices of storage for this:
- Low-resolution cubemap: cubemap size × 6 storage, needs an additional shader sampler, no light bleeding, will be pixelated.
- Spherical harmonics: 9 or 16 coefficients to compute, low storage cost, some light bleeding, smooth result, easily interpolated.

I think going with spherical harmonics is a good choice, and a lot of people seem to use it already. The artifacts introduced by 9 coefficients (3 bands) could be “solved” by using 16 coefficients (4 bands). This could be a quality parameter. We can use a compute shader for this task if available.
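
A minimal sketch of the projection itself, in CPU-side C for clarity (in practice this would run in a shader; the radiance() callback is a stand-in for sampling the captured cubemap):

#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { float r, g, b; } RGB;

/* Evaluate the 9 real SH basis functions (bands l = 0..2) for a unit
 * direction d. The constants are the usual SH normalization factors. */
static void sh9_basis(const float d[3], float out[9])
{
	out[0] = 0.282095f;                               /* Y(0, 0) */
	out[1] = 0.488603f * d[1];                        /* Y(1,-1) */
	out[2] = 0.488603f * d[2];                        /* Y(1, 0) */
	out[3] = 0.488603f * d[0];                        /* Y(1, 1) */
	out[4] = 1.092548f * d[0] * d[1];                 /* Y(2,-2) */
	out[5] = 1.092548f * d[1] * d[2];                 /* Y(2,-1) */
	out[6] = 0.315392f * (3.0f * d[2] * d[2] - 1.0f); /* Y(2, 0) */
	out[7] = 1.092548f * d[0] * d[2];                 /* Y(2, 1) */
	out[8] = 0.546274f * (d[0] * d[0] - d[1] * d[1]); /* Y(2, 2) */
}

/* Project the environment onto 9 coefficients by uniform Monte Carlo
 * sampling over the sphere: coef[i] = (4*pi/n) * sum L(dir) * Y_i(dir). */
void project_sh9(RGB (*radiance)(const float dir[3]), int n, RGB coef[9])
{
	for (int i = 0; i < 9; i++)
		coef[i] = (RGB){0.0f, 0.0f, 0.0f};
	for (int s = 0; s < n; s++) {
		float u = 2.0f * rand() / (float)RAND_MAX - 1.0f;     /* cos(theta) */
		float phi = 2.0f * (float)M_PI * rand() / (float)RAND_MAX;
		float r = sqrtf(fmaxf(0.0f, 1.0f - u * u));
		float dir[3] = { r * cosf(phi), r * sinf(phi), u };
		RGB L = radiance(dir);
		float basis[9];
		sh9_basis(dir, basis);
		for (int i = 0; i < 9; i++) {
			coef[i].r += L.r * basis[i];
			coef[i].g += L.g * basis[i];
			coef[i].b += L.b * basis[i];
		}
	}
	float w = 4.0f * (float)M_PI / (float)n;  /* sphere area / sample count */
	for (int i = 0; i < 9; i++) {
		coef[i].r *= w;
		coef[i].g *= w;
		coef[i].b *= w;
	}
}
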

If we need to prefilter the cubemap for a specific BRDF, we also have to do it at this stage. For this I would be inclined to use prefiltered importance sampling, as it drastically decreases the process time. A scene could have a lot of probes, and refreshing all of them should not take more than a few seconds.
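
As an illustration, here is the core of that technique in a C sketch: the tangent-space GGX sample generation (with the normal along +Z) and the Hammersley sequence that usually feeds it. The “prefiltered” part means each generated sample reads the environment from a cubemap mip matched to its solid angle, which hides the noise of low sample counts. Names are illustrative.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { float x, y, z; } Vec3;

/* Draw a half-vector from the GGX normal distribution via its inverse
 * CDF. (xi1, xi2) is a low-discrepancy point in [0,1)^2. */
Vec3 importance_sample_ggx(float xi1, float xi2, float roughness)
{
	float a = roughness * roughness;
	float phi = 2.0f * (float)M_PI * xi1;
	float cos_t = sqrtf((1.0f - xi2) / (1.0f + (a * a - 1.0f) * xi2));
	float sin_t = sqrtf(1.0f - cos_t * cos_t);
	return (Vec3){ sin_t * cosf(phi), sin_t * sinf(phi), cos_t };
}

/* Hammersley point: i-th of n samples; the second coordinate is the
 * radical inverse of i in base 2 (bit reversal). */
void hammersley(unsigned i, unsigned n, float *x, float *y)
{
	unsigned bits = i;
	bits = (bits << 16) | (bits >> 16);
	bits = ((bits & 0x55555555u) << 1) | ((bits & 0xAAAAAAAAu) >> 1);
	bits = ((bits & 0x33333333u) << 2) | ((bits & 0xCCCCCCCCu) >> 2);
	bits = ((bits & 0x0F0F0F0Fu) << 4) | ((bits & 0xF0F0F0F0u) >> 4);
	bits = ((bits & 0x00FF00FFu) << 8) | ((bits & 0xFF00FF00u) >> 8);
	*x = (float)i / (float)n;
	*y = (float)bits * 2.3283064365386963e-10f;  /* divide by 2^32 */
}
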

Optimization: this process should be asynchronous and not freeze the UI, but it must complete before each frame of an “OpenGL Render”.

One other important point is refreshing the captures.
My proposition on this point is:
- The world probe is updated every time the world node tree changes.
- Local probes are updated only on request. An additional auto-refresh checkbox (per probe/global?) would refresh the probe on any scene change.
Additionally, we can provide an operator to update all probes.

Another concern is light bouncing. If we render more than direct lighting inside the probes, we can fall into a dependency loop: as the probes in the refresh list are updated, the first ones become obsolete because of the newly refreshed ones. I suggest allowing more than one refresh pass to alleviate this problem, give more stable animations, and let light bounce. Disabling probes during probe renders is also a possibility that eliminates the problem.

So the pseudo code for this stage is:

Gather probes tagged to be refreshed.
For each light bounce
	For each probe to refresh
		Render the scene to a cubemap render target
		Prefilter cubemap / compute SH
		Store result and tag the probe as refreshed

2. Pre Render Buffers

If we want to support screen space effects such as screen space reflection (SSR) or occlusion (applied during shading rather than as a post-render pass, to apply them effectively), we have to get access to screen information when rendering the geometry.
A depth prepass used for culling could be reused for ambient occlusion, but for reflections we also need a color buffer.

So we have two choices:
- Use the previous frame with re-projection: faster, may have temporal artifacts, bounced light for free.
- Render the scene first without SSR: easier to implement but very slow, no light bounce.

I will definitely go for the first choice, but as of now (September 2016) only the second is implemented.
In render mode we might give the opportunity to render the first frame (or all frames) twice to cancel temporal artifacts. We can also, at this stage, render the scene with normals inverted to know the thickness of each pixel, which is good to have when doing screen space ray tracing. If we use hierarchical Z-buffer (Hi-Z) ray tracing, we have to downsample the buffers to accelerate the tracing process; this would also be done at this stage. For cone-traced reflections we also need to prefilter a color buffer and a visibility buffer.
I propose to use the Hi-Z buffer for ray tracing both SSR and AO. A sketch of the re-projection math follows.
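
A minimal sketch of the re-projection math, assuming column-major matrices and a [0,1] depth buffer (helper names are illustrative): reconstruct the world position of the current pixel, then project it with the previous frame's view-projection matrix to find where last frame saw the same point.

#include <stdbool.h>

typedef struct { float v[4]; } Vec4;
typedef struct { float m[16]; } Mat4;  /* column-major */

static Vec4 mat4_mul_vec4(const Mat4 *a, Vec4 b)
{
	Vec4 r = {{0.0f, 0.0f, 0.0f, 0.0f}};
	for (int row = 0; row < 4; row++)
		for (int col = 0; col < 4; col++)
			r.v[row] += a->m[col * 4 + row] * b.v[col];
	return r;
}

/* Returns false when the point was off-screen or behind the previous
 * camera; the caller must then fall back (e.g. to the world probe). */
bool reproject_uv(const Mat4 *inv_curr_viewproj, const Mat4 *prev_viewproj,
                  const float uv[2], float depth, float prev_uv[2])
{
	/* current pixel -> NDC -> world */
	Vec4 ndc = {{ uv[0] * 2.0f - 1.0f, uv[1] * 2.0f - 1.0f,
	              depth * 2.0f - 1.0f, 1.0f }};
	Vec4 world = mat4_mul_vec4(inv_curr_viewproj, ndc);
	for (int i = 0; i < 4; i++)
		world.v[i] /= world.v[3];
	/* world -> previous frame clip -> UV */
	Vec4 prev = mat4_mul_vec4(prev_viewproj, world);
	if (prev.v[3] <= 0.0f)
		return false;
	prev_uv[0] = (prev.v[0] / prev.v[3]) * 0.5f + 0.5f;
	prev_uv[1] = (prev.v[1] / prev.v[3]) * 0.5f + 0.5f;
	return prev_uv[0] >= 0.0f && prev_uv[0] <= 1.0f &&
	       prev_uv[1] >= 0.0f && prev_uv[1] <= 1.0f;
}
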

So the pseudo code for these steps is:

Render Depth Buffer (like a shadowmap)
Render Depth Buffer with normal inverted
Reproject Color Buffer (optional)
Make HiZ Buffers
	Downsample Depth
	Downsample Backface Depth
	Downsample Color Buffer
	Create Visibility Buffer
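
A minimal sketch of one Hi-Z downsample step, in CPU C for clarity (in the viewport this would be one shader pass per mip level). Each texel of the next mip keeps the nearest of its 2×2 parents, so the ray tracer can skip a whole tile while the ray stays in front of it; whether min or max is kept depends on the chosen depth convention.

#include <stdlib.h>
#include <math.h>

float *hiz_downsample(const float *src, int w, int h)
{
	int dw = w / 2, dh = h / 2;  /* assumes power-of-two dimensions */
	float *dst = malloc((size_t)dw * dh * sizeof(float));
	for (int ty = 0; ty < dh; ty++) {
		for (int tx = 0; tx < dw; tx++) {
			float d0 = src[(2 * ty)     * w + 2 * tx];
			float d1 = src[(2 * ty)     * w + 2 * tx + 1];
			float d2 = src[(2 * ty + 1) * w + 2 * tx];
			float d3 = src[(2 * ty + 1) * w + 2 * tx + 1];
			dst[ty * dw + tx] = fminf(fminf(d0, d1), fminf(d2, d3));
		}
	}
	return dst;  /* caller owns; repeat down to 1x1 to build the chain */
}
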

3. Object Setup

For each object we must find the right data to feed the PBR shading so the object looks right.

For reflection probes there are a few different solutions here:
- We can set the probe to use for each object, either its own or that of another object, making the transition from one probe to another without blending. The assignment of one probe to other objects is done manually. This is the easy solution.
- We can choose the probe based on the influence radius/box of the probes. This is what is done in Unity3D.
- We can blend between all cubemaps in the pixel shader. This is what is done in Unreal Engine 4, but I see this approach as impractical for us for a lot of reasons (we can't importance-sample multiple probes for performance reasons, cubemap array support, …).

The default choice is to use the distant/world probe; if a local probe is available, it is used instead. A sketch of this selection logic follows.
We could also allow the user to provide a custom HDRI to use instead of scene-captured data. This may need additional thought.
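
A minimal sketch of that assignment logic, with illustrative structure names: each object picks the tightest influence sphere containing it, and falls back to the distant/world probe when none matches.

#include <stddef.h>
#include <math.h>

typedef struct { float center[3]; float radius; int id; } Probe;

int pick_probe(const float pos[3], const Probe *probes, size_t count,
               int world_probe_id)
{
	int best = world_probe_id;
	float best_radius = INFINITY;
	for (size_t i = 0; i < count; i++) {
		float dx = pos[0] - probes[i].center[0];
		float dy = pos[1] - probes[i].center[1];
		float dz = pos[2] - probes[i].center[2];
		float d2 = dx * dx + dy * dy + dz * dz;
		/* inside the influence sphere: prefer the tightest fit */
		if (d2 <= probes[i].radius * probes[i].radius &&
		    probes[i].radius < best_radius) {
			best = probes[i].id;
			best_radius = probes[i].radius;
		}
	}
	return best;
}
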


The lack of blending between cubemaps can be balanced by the fact that objects can have their local cubemaps refreshed every frame.

Optimization: after this stage, it would be good to sort objects/surfaces based on the shaders and resources (textures, cubemaps) they use, to minimize state changes, but only doing it when it makes sense (object added, material assigned, …); see the sketch below.
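
A minimal sketch of the sort key idea, with illustrative field names: order surfaces by shader first, then by texture set, so consecutive draws share as much state as possible.

#include <stdlib.h>

typedef struct { int shader_id; int texture_set_id; /* draw data ... */ } Surface;

static int surface_cmp(const void *pa, const void *pb)
{
	const Surface *a = pa, *b = pb;
	if (a->shader_id != b->shader_id)
		return a->shader_id < b->shader_id ? -1 : 1;
	if (a->texture_set_id != b->texture_set_id)
		return a->texture_set_id < b->texture_set_id ? -1 : 1;
	return 0;
}

/* Re-run only when the list is dirtied (object added, material assigned). */
void sort_surfaces(Surface *surfaces, size_t count)
{
	qsort(surfaces, count, sizeof(Surface), surface_cmp);
}
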

Further thoughts: something needs to be done for diffuse lighting. For the moment, only spherical harmonics are used to store the light received at a point in space, and they are applied to the whole mesh. Realtime global illumination techniques already exist but are not trivial to implement.

4. Render Surface / Shader Compilation

At this stage we bind textures and update uniform variables. Care must be taken to do this efficiently. The material could also have an override option for its probe.

One concern is that PBR relies on lots of textures to achieve good performance (look-up tables, depth buffer, …) and the number of texture slots is not unlimited. Shadow maps also take up texture slots.
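
These limits can at least be queried up front so the shader builder knows how many slots it can reserve for probes, LUTs and shadow maps; GL guarantees only 16 fragment samplers as a minimum. A small sketch:

#include <stdio.h>
#include <GL/glew.h>  /* or any other GL loader */

void report_texture_budget(void)
{
	GLint frag_units = 0, combined_units = 0;
	glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &frag_units);
	glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combined_units);
	printf("fragment samplers: %d, combined: %d\n", frag_units, combined_units);
}
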