Volume Rendering Technical Notes

I've tried to keep the new volume rendering code in Blender as modular and separate from the rest of the shading system code as possible, to prevent adding more mess to the already messy current shaders, to prevent introducing potential bugs into the rest of the renderer, and to make it easily upgradeable if/when work is done on Blender's shading system.

Apart from a few small tweaks to hook it up in shadeinput.c, the bulk of the volume rendering code is in:

  • source/render/intern/source/volumetric.c
  • source/render/intern/source/volume_precache.c

Volume Shading

Since Blender's renderer is scanline for the first hit, the easiest, least disruptive way to add volume rendering capability was to make it based on render faces. This way, the triangle and associated shadeinput are initialised as usual, but instead of continuing on to shade_material_loop(), that is completely bypassed and the volume shader takes over, returning a colour back to the ShadeResult.

Using raytraced transparency, the volume shader also takes care of getting the shading from objects inside and behind the volume (like refraction does). This is made a bit more complicated by things like z transparency, and by the camera possibly being inside the volume, but these cases are handled too.

The basic raytraced volume rendering process goes as follows (a rough code sketch follows the list):

  • The shadeinput and triangle are initialised on the front face of the volume, ready for shading.
  • The volume shader traces a ray to the far bounds of the volume, to see how deep the volume is (i.e. how much medium lies between the near bounds of the volume and the far bounds).
  • If the ray reaches the other side of the volume:
    • A ray is shot along the view vector, behind the far bounds of the volume, to shade the colour of any objects/sky behind that point.
  • Else, if the ray intersects another internal object:
    • That new intersection is shaded, to find the colour of the internal object.
  • Then the volume itself is shaded, which is added on top of the radiance from internal or behind objects, and returned in the ShadeResult.
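
A minimal sketch of this flow in C. All helper names and types here are hypothetical stand-ins, not the actual functions in volumetric.c:

    /* Hypothetical sketch of the raytraced volume shading flow described
     * above; all names and types are illustrative, not Blender's API. */
    typedef struct { float r, g, b; } Color;
    typedef struct { float origin[3], dir[3]; } Ray;
    typedef struct { float distance; int is_volume_bounds; } Hit;

    /* Assumed helpers: trace the view ray through the volume interior,
     * shade what lies behind/inside, and ray-march the interval itself.
     * A watertight volume guarantees the trace always hits something. */
    extern void trace_volume_interior(const Ray *ray, Hit *hit);
    extern Color shade_behind(const Ray *ray, float from_distance);
    extern Color shade_surface_hit(const Hit *hit);
    extern Color march_volume_interval(const Ray *ray, float t0, float t1,
                                       Color behind);

    Color shade_volume_front_face(const Ray *view_ray)
    {
        Hit hit;
        Color behind;

        trace_volume_interior(view_ray, &hit);

        if (hit.is_volume_bounds) {
            /* The ray reached the far side: shade objects/sky behind. */
            behind = shade_behind(view_ray, hit.distance);
        }
        else {
            /* The ray hit an internal object: shade that surface. */
            behind = shade_surface_hit(&hit);
        }

        /* Shade the medium itself and composite it over the radiance
         * from behind or inside the volume. */
        return march_volume_interval(view_ray, 0.0f, hit.distance, behind);
    }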

To shade the volume, the space in between the near bounds of the volume and the far intersection point (the minimum of either the far bounds of the volume, or any internal faces) is ray-marched, to get the final colour reaching the camera from that range of volume.

The volume shading is done with a physically based, emission/absorption/scattering model.

  • Emission: Light generated by the medium itself (e.g. fire)
  • Absorption: Light absorbed by the medium (e.g. dark soot)
  • Scattering: Light entering the medium from outside (mostly from lamps) and being scattered by the medium towards the camera

These properties are all dependent on the density of the medium at the currently sampled point.
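
In equation form, this is the standard emission/absorption/scattering model; the notation below is mine, not taken from these notes. For a view ray parameterised by distance s, with absorption and scattering coefficients scaled by the local density:

    T(s_0, s) = \exp\left( -\int_{s_0}^{s} \big( \sigma_{abs}(t) + \sigma_{sca}(t) \big)\, dt \right)

    L = \int_{near}^{far} T(near, s)\, \big[ L_{emit}(s) + \sigma_{sca}(s)\, L_{in}(s) \big]\, ds \, + \, T(near, far)\, L_{behind}

T is the 'transmission' mentioned below, and the final term is the radiance from objects behind (or inside) the volume, attenuated by the medium in front of them.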

To shade the volume, a ray is marched through, gathering the current (textured or not) density, emission, absorption, and scattering values for that point in space, and calculating the amount of radiance coming back to the camera. As the ray is marched, a 'transmission' value is accumulated, based on density and absorption, which determines how much of the emitted or scattered light reaches the eye.
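
A minimal, self-contained sketch of that loop, assuming hypothetical eval_density/eval_emission/eval_scattering callbacks (the real code samples the material and its textures at each point):

    #include <math.h>

    typedef struct { float r, g, b; } Color;

    /* Assumed per-point medium evaluation (illustrative). */
    extern float eval_density(const float p[3]);
    extern Color eval_emission(const float p[3]);
    extern Color eval_scattering(const float p[3]); /* lamp in-scattering,
                                                       see next section */

    Color march_volume(const float start[3], const float dir[3],
                       float length, float stepsize, float absorption,
                       Color behind)
    {
        Color radiance = {0.0f, 0.0f, 0.0f};
        float tr = 1.0f; /* accumulated transmission towards the eye */

        for (float t = 0.5f * stepsize; t < length; t += stepsize) {
            const float p[3] = {start[0] + t * dir[0],
                                start[1] + t * dir[1],
                                start[2] + t * dir[2]};
            const float d = eval_density(p);

            if (d > 0.0f) {
                const Color em = eval_emission(p);
                const Color sc = eval_scattering(p);

                /* Light emitted/scattered here, seen through the medium
                 * accumulated so far. */
                radiance.r += tr * (em.r + sc.r) * d * stepsize;
                radiance.g += tr * (em.g + sc.g) * d * stepsize;
                radiance.b += tr * (em.b + sc.b) * d * stepsize;

                /* Beer-Lambert attenuation over this step. */
                tr *= expf(-absorption * d * stepsize);
            }
        }

        /* Objects/sky behind the volume, seen through what remains. */
        radiance.r += tr * behind.r;
        radiance.g += tr * behind.g;
        radiance.b += tr * behind.b;
        return radiance;
    }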

Scattering

There are different methods available for determining the scattering at points within the volume. The default is 'single scattering', which calculates the attenuated light from lamps through the volume, with a single scattering event.

To calculate the lighting at each point inside the volume (a sketch follows the list):

  • A secondary ray is traced from the currently sampled point towards the lamp.
  • If another object is intersected, an internal object is blocking the light, so the scattering returns 0.
  • If a face on the outer edge of the volume is intersected, the lamp must be outside the volume.
  • If nothing is intersected, the lamp must be inside the volume.
  • A ray is then marched along this distance, from the currently sampled point in the volume, to the outer point (the minimum of the distance to the lamp, or the distance to the volume bounds). The density and absorption along this ray is accumulated into a final transmission value, which is used to attenuate the lamp's colour and intensity reaching that current sampled point.
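
A minimal sketch of that attenuation march, assuming a hypothetical eval_density and a normalised to_lamp direction:

    #include <math.h>

    extern float eval_density(const float p[3]); /* illustrative */

    /* Returns the fraction of the lamp's light reaching point `from`,
     * where `dist` is min(distance to lamp, distance to volume bounds). */
    float lamp_transmission(const float from[3], const float to_lamp[3],
                            float dist, float stepsize, float absorption)
    {
        float tau = 0.0f; /* optical depth along the shadow ray */

        for (float t = 0.5f * stepsize; t < dist; t += stepsize) {
            const float p[3] = {from[0] + t * to_lamp[0],
                                from[1] + t * to_lamp[1],
                                from[2] + t * to_lamp[2]};
            tau += eval_density(p) * absorption * stepsize;
        }
        return expf(-tau);
    }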

Finding Volume Bounds

As mentioned above, there are a lot of cases where we need to find the outer bounds of the volume along a given direction. Since we are using mesh faces to define the volume region (rather than a bounding box primitive), this is done using raytracing and the orientation of the face normals. The volume bounds function assumes that normals point outward, and uses this to determine whether the ray is going into, or out of, a volume.
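
As a sketch, that orientation test comes down to the sign of a dot product (illustrative names):

    static float dot3(const float a[3], const float b[3])
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /* Returns 1 if the ray is entering the volume at this face,
     * 0 if it is leaving it. Assumes outward-pointing normals. */
    int ray_entering_volume(const float ray_dir[3], const float face_normal[3])
    {
        /* The ray opposes the outward normal on a front face => entering. */
        return dot3(ray_dir, face_normal) < 0.0f;
    }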

This brings a limitation: volumes must be watertight, without weird geometry such as overlapping faces (i.e. Suzanne can cause problems around her eyes as a volume!). It's not really a problem in practice though, since most of the time volume objects will be simple shapes like cubes or spheres, with textured density to give them their appearance.

Inside or Outside

Things are made more complicated by the fact that the camera can also be inside a volume.

If the camera is inside the volume, rather than shading the media between the volume region's near bounds and far bounds, instead we need to shade the media between the camera location and the volume region's far bounds.

Usually, if the face being rendered is the far edge of the volume, we can determine that it's the far bounds based on the face normal: if the normal points towards the camera, the camera is outside the volume; if the normal points away from the camera, the camera is inside the volume, looking at its outer edge.

This gets trickier when both the camera and other objects are inside a volume. If we just render each face directly, then when rendering an additional internal object, it won't know to shade the volume in between the object and the camera.

To fix this, in convertblender.c, when objects are prepared for render, objects with volume materials are logged. Then an additional prepass takes place, where each object with a volume material is checked to see if the camera is inside it. The results are stored in a list in Render.render_volumes_inside, which is checked after shading surface materials. If the camera is inside a volume, the volume between the surface and the camera is shaded as well.
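
A hedged sketch of that containment check. All names here are hypothetical; the real logic lives in convertblender.c and stores its results in Render.render_volumes_inside:

    typedef struct VolumeObject VolumeObject;

    /* Assumed raycast: returns 1 and the hit face's normal if the ray
     * hits the object's mesh. */
    extern int cast_ray_nearest(const VolumeObject *ob, const float from[3],
                                const float dir[3], float r_normal[3]);

    static float dot3(const float a[3], const float b[3])
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    int camera_inside_volume(const VolumeObject *ob, const float cam_co[3])
    {
        /* Any direction works for a watertight mesh. */
        const float dir[3] = {0.0f, 0.0f, 1.0f};
        float n[3];

        if (!cast_ray_nearest(ob, cam_co, dir, n))
            return 0; /* nothing hit: the camera is outside */

        /* The nearest hit's normal points away from the camera, so we
         * are seeing the inside of the surface: the camera is inside. */
        return dot3(dir, n) > 0.0f;
    }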

Z transparency

Volumes can work with z transparency instead of raytraced transparency. This has several advantages, including automatic anti-aliasing of internal/behind objects, and alpha channels for use in comp.

When rendering z transparent volumes, there are a few differences. Firstly, it's not necessary to manually raytrace behind the far bounds of the volume to get the incoming radiance from behind, since z transparency will automatically layer it over the top. The 'alpha-over' layering that z transparency uses probably isn't as physically correct as involving the raytraced shading from behind, but in many cases the difference isn't noticeable or problematic.

The second difference is that rendering of back faces is disabled for z transparency. This is because z transparent materials render all faces, regardless of occlusion. Without this, the volume renderer would shade twice, on:

  • the front face -> normal pointing at camera, so shade all volume between the front face and back face
  • the back face -> normal pointing away from camera, so shade volume between the camera and back face

...and layer them over each other. Clearly, this is not correct. To solve this, volume shading on z transparent back faces is ignored in this situation; however, the case of the camera inside the volume still works correctly.
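
As a tiny sketch of that rule (hypothetical names):

    /* Under z transparency, back faces are skipped unless the camera
     * is inside the volume. Illustrative only. */
    int should_shade_volume_face(int is_backface, int use_ztransp,
                                 int cam_inside_volume)
    {
        if (use_ztransp && is_backface && !cam_inside_volume)
            return 0; /* the front face already shaded the full interval */
        return 1;
    }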

Light Cache

To accelerate single scattering calculation, the shading can be pre-calculated at points throughout the volume, and then interpolated at render time. This generally gives around a 3x speedup, with little difference in quality.

The volume light cache is a bit simpler than a normal irradiance cache - it samples the scattering throughout the volume at regular intervals in the object's 3D bounding box, and saves the results in a voxel grid (actually three grids, one for each of R, G and B). At shading time, rather than calculating the scattering directly, results from the voxel grid are sampled with trilinear interpolation to retrieve RGB light values.
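
A self-contained sketch of that render-time lookup, for one of the three grids. The grid layout and names are illustrative, not the actual cache structures:

    #include <math.h>

    static int clampi(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* `grid` is a flat, x-fastest array of res[0]*res[1]*res[2] floats;
     * `co` is a position within the cache's (0,1) bounds. */
    float sample_light_cache(const float *grid, const int res[3],
                             const float co[3])
    {
        const float fx = co[0] * res[0] - 0.5f;
        const float fy = co[1] * res[1] - 0.5f;
        const float fz = co[2] * res[2] - 0.5f;

        const int x0 = clampi((int)floorf(fx), 0, res[0] - 1);
        const int y0 = clampi((int)floorf(fy), 0, res[1] - 1);
        const int z0 = clampi((int)floorf(fz), 0, res[2] - 1);
        const int x1 = clampi(x0 + 1, 0, res[0] - 1);
        const int y1 = clampi(y0 + 1, 0, res[1] - 1);
        const int z1 = clampi(z0 + 1, 0, res[2] - 1);

        const float tx = fx - floorf(fx);
        const float ty = fy - floorf(fy);
        const float tz = fz - floorf(fz);

        /* trilinear interpolation of the 8 surrounding voxels */
        #define V(x, y, z) grid[(x) + (y) * res[0] + (z) * res[0] * res[1]]
        const float c00 = V(x0, y0, z0) * (1 - tx) + V(x1, y0, z0) * tx;
        const float c10 = V(x0, y1, z0) * (1 - tx) + V(x1, y1, z0) * tx;
        const float c01 = V(x0, y0, z1) * (1 - tx) + V(x1, y0, z1) * tx;
        const float c11 = V(x0, y1, z1) * (1 - tx) + V(x1, y1, z1) * tx;
        #undef V

        const float c0 = c00 * (1 - ty) + c10 * ty;
        const float c1 = c01 * (1 - ty) + c11 * ty;
        return c0 * (1 - tz) + c1 * tz;
    }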

The light cache is a property of the ObjectInstanceRen, so that instances of the same ObjectRen can have different shading, based on their locations in the scene.

As a pre-process before shading, all ObjectInstanceRens in the scene with volume materials and the light cache enabled have their scattering precalculated as follows (a sketch follows the list):

  • A raytree is created per ObjectInstanceRen, to accelerate queries to determine if a point is inside the volume
  • Data is prepared, and a threaded process to precalculate scattering begins:
    • The result voxel grid is divided into smaller voxel grids (bricks), similar to how the renderer divides the image into tiles
    • One brick is assigned per thread to pre-calculate, until they are all done
      • At the center point of each voxel, it checks to see if that point is actually inside the volume
        • If so, it evaluates the scattering at that point, and stores it in the grid
        • If not, it marks the voxel as empty and continues
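
An illustrative, serial version of that precalculation. In the real code the grid is split into bricks and each brick is handed to a worker thread; all names here are hypothetical:

    extern int point_inside_volume(const float p[3]);  /* raytree query */
    extern float eval_scattering_at(const float p[3]); /* one channel */

    void precache_scattering(float *grid, const int res[3],
                             const float bb_min[3], const float bb_max[3])
    {
        for (int z = 0; z < res[2]; z++)
        for (int y = 0; y < res[1]; y++)
        for (int x = 0; x < res[0]; x++) {
            /* voxel centre within the object's bounding box */
            const float p[3] = {
                bb_min[0] + (x + 0.5f) / res[0] * (bb_max[0] - bb_min[0]),
                bb_min[1] + (y + 0.5f) / res[1] * (bb_max[1] - bb_min[1]),
                bb_min[2] + (z + 0.5f) / res[2] * (bb_max[2] - bb_min[2]),
            };
            const int i = x + y * res[0] + z * res[0] * res[1];

            if (point_inside_volume(p))
                grid[i] = eval_scattering_at(p);
            else
                grid[i] = -1.0f; /* mark empty for the dilate pass */
        }
    }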

Once all the voxels in all the bricks have been processed, some post-processing occurs:

  • The voxels are processed with a 3d filter, similar to a dilate filter (sketched after this list). This expands the 'filled' voxels into the surrounding empty voxels, to prevent interpolation errors at the volume's edges.
  • Then, if selected, a multiple scattering post-process acts on the light cache, working similarly to a 3d blurring filter.
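
A minimal sketch of the dilate-style filter: each empty voxel (marked negative, as in the precalculation sketch above) takes the average of its filled 6-neighbours. Illustrative only:

    #include <stdlib.h>
    #include <string.h>

    void lightcache_dilate(float *grid, const int res[3])
    {
        const int dx[6] = {1, -1, 0, 0, 0, 0};
        const int dy[6] = {0, 0, 1, -1, 0, 0};
        const int dz[6] = {0, 0, 0, 0, 1, -1};
        const size_t num = (size_t)res[0] * res[1] * res[2];

        /* Work from a copy so results don't depend on traversal order. */
        float *src = malloc(num * sizeof(float));
        memcpy(src, grid, num * sizeof(float));

        for (int z = 0; z < res[2]; z++)
        for (int y = 0; y < res[1]; y++)
        for (int x = 0; x < res[0]; x++) {
            const size_t i = x + y * (size_t)res[0]
                               + z * (size_t)res[0] * res[1];
            if (src[i] >= 0.0f)
                continue; /* already filled */

            float sum = 0.0f;
            int n = 0;
            for (int k = 0; k < 6; k++) {
                const int nx = x + dx[k], ny = y + dy[k], nz = z + dz[k];
                if (nx < 0 || ny < 0 || nz < 0 ||
                    nx >= res[0] || ny >= res[1] || nz >= res[2])
                    continue;
                const float v = src[nx + ny * (size_t)res[0]
                                       + nz * (size_t)res[0] * res[1]];
                if (v >= 0.0f) { sum += v; n++; }
            }
            if (n)
                grid[i] = sum / n;
        }
        free(src);
    }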

Texturing Volumes

As part of this work, two additional textures have been added, intended for use as density/colour/etc. sources; however, they still work for surface materials too. These textures are:

  • Point Density - source/render/intern/source/pointdensity.c
  • Voxel Data - source/render/intern/source/voxeldata.c

Point Density

Point density pre-processes a point cloud, storing it in a BVH (using BLI_kdopbvh), then at render time, does a range lookup to find the density of points within a given radius. The main usage for this is to render particles as density within a volume.
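
For illustration, here is a brute-force stand-in for that range lookup; the real code builds a BVH with BLI_kdopbvh so that only nearby points are visited, and the smooth falloff below is a made-up example:

    #include <math.h>

    float point_density(const float co[3], const float (*points)[3],
                        int totpoint, float radius)
    {
        float density = 0.0f;
        const float r2 = radius * radius;

        for (int i = 0; i < totpoint; i++) {
            const float dx = points[i][0] - co[0];
            const float dy = points[i][1] - co[1];
            const float dz = points[i][2] - co[2];
            const float d2 = dx * dx + dy * dy + dz * dz;

            if (d2 < r2) {
                /* simple smooth falloff towards the edge of the radius */
                const float t = 1.0f - sqrtf(d2) / radius;
                density += t * t;
            }
        }
        return density;
    }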

Voxel Data

Voxel data renders a voxel source, working very similarly to an image texture, but in 3d. Various input data source types are available (such as smoke voxel data, or external files), as well as various interpolation methods.

The voxels are stored in a flat z/y/x grid of floats. Functions for sampling this based on a location within the (0,1) bounds are available in the file below (a minimal indexing sketch follows):

  • source/blender/blenlib/intern/voxel.c
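
A minimal sketch of sampling that layout, assuming x varies fastest in the flat array (nearest-neighbour only; the actual functions in voxel.c also provide higher-quality interpolation):

    /* Nearest-neighbour sample of a flat voxel grid over (0,1) bounds;
     * x varies fastest, then y, then z. Illustrative sketch, not the
     * actual voxel.c API. */
    float voxel_sample_nearest(const float *data, const int res[3],
                               const float co[3])
    {
        const int x = (int)(co[0] * res[0]);
        const int y = (int)(co[1] * res[1]);
        const int z = (int)(co[2] * res[2]);

        if (x < 0 || y < 0 || z < 0 ||
            x >= res[0] || y >= res[1] || z >= res[2])
            return 0.0f;

        /* index into the flat array */
        return data[x + y * res[0] + z * res[0] * res[1]];
    }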

Weaknesses and Todo

Weaknesses

  • Light cache currently uses ObjectRen's bounding box, even though it works on ObjectInstanceRens, which isn't correct. Need a way to get a bounding box in camera space for ObjectInstanceRens - probably some simple matrix multiplications somewhere, but I could use some help here.
If this can be worked out, we can also use the bounding box diagonal as a max distance in the raytrace function used to find volume bounds. Currently it uses the scene's bounding box max distance, but limiting to the object instance's bounding box can speed it up a bit.
This can also be used to develop a 'bounding box' texture mapping input.
  • Overlapping volumes are not supported well. Perhaps it can track a list of volumes that the ray has entered (stored in ShadeInput), and propagate it through the ray bounces. There's also Ztransp to consider. What to do then is a bit trickier - shade each volume independently and add them (slow?)? Mix the shading variables for each volume together and shade that?

Todo

  • Try to make the point density baking space stuff more intuitive
  • Fix light cache with asymmetry != 0.0 - currently gives fireflies
  • Add ability to hook up various extra (temperature, velocity, etc.) channels in voxel data texture.
  • Internal/external cache - preprocess in convertblender.c --> if a lamp is internal to a volume, no need to trace volume extents to find it - can just use simple shadowing.
  • Modify the 3d View smoke drawing code to use a similar method to the render engine, and use the properties that are in the volume material (absorption colour etc) and lamp properties to shade it.
  • Add 'bounding box' texture mapping type (stretches 0,1 or -1,1 mapped to object instance's bounding box in camera space)
  • Add 'volume' particle render option, that generates a bounding box to render particles as volume with point density.

jahka: "do min/max at the end of physics loop and end of cache reading loop (both in particlesystem code)"

  • Check on unifying voxelisation code:
    • point density from object volume
    • particles from volume
    • smoke from object volume
    • voxel data from object volume
    • etc.
  • Experiment with a non-voxel light cache method, using irradiance points in a bvh etc.
  • Motion blur for point density texture - can add a new 'streaks' mode, which draws the points as streaks. This can be done with:
    • multi-sampling (brute force, but still faster than full-scene-motion blur)
    • or perhaps a bit smarter, inserting the particles into the BVH as rectangular bounding volumes, stretched along the direction of their velocity. The current BVH range lookup doesn't support this yet though, maybe jaguarandi can help with this.

Credits

  • Matt Ebb (with thanks to Red Cartel/ProMotion Studios)
  • Raul Fernandez Hernandez (Farsthary) for patches:
    • Light cache based multiple scattering approximation
    • Initial voxeldata texture
    • Depth Cutoff threshold
  • Andre Susano Pinto for BVH range lookup addition
  • Alfredo de Greef for hints and voxel interpolation code