Attic:Requests/Render


These requests have been implemented and are hence obsolete.

--BeBraw 20:48, 22 December 2006 (CET)


Rendering separate channels:

It would be extremely useful to be able to render shadows, speculars, reflectivity and so on as separate passes, in order to post-process each channel appropriately and then composite everything. Lots of people have been asking for this one.

-- Blenderwiki.malefico - 05 Sep 2004

HDR support

Blender could support rendering to HDR formats, as well as using HDR images as textures. Basically, these formats store a floating-point number for each color component instead of an integer. This would require changes in the internal Blender code, especially ImBuf, since it has the internal format hardcoded as integers. For an HDR file format, we could use ILM's [OpenEXR], which is free software as well.
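
A minimal sketch of what that change in storage means, in Python/NumPy purely for illustration (the real work would be in Blender's C code, especially ImBuf):

  import numpy as np

  # Conventional low-dynamic-range buffer: one 8-bit integer per component,
  # so every value is clamped to the 0..255 range.
  ldr_buffer = np.zeros((1080, 1920, 4), dtype=np.uint8)    # RGBA

  # HDR buffer as proposed: one float per component, so values above 1.0
  # (e.g. a bright light source) survive into post-processing.
  hdr_buffer = np.zeros((1080, 1920, 4), dtype=np.float32)  # RGBA

  hdr_buffer[0, 0] = [4.5, 3.2, 1.1, 1.0]  # over-bright pixel, kept as-is
  # Converting back to integers clips the same pixel to plain white:
  ldr_buffer[0, 0] = np.clip(hdr_buffer[0, 0] * 255, 0, 255).astype(np.uint8)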

-- RodrigoDamazio - 08 Jan 2005

A detailed proposal for implementing vectorial motion blur in Blender.

I have determined the steps that a programmer would need in order to code this superlative feature. However, due to lack of time and familiarity with the render code, I'm not even going to try to do it. That said, this document will hopefully be detailed enough that someone familiar with the render code could implement this without too much difficulty.

Overview

Vectorial motion blur is the process of smearing the pixels of a finished render based on per-pixel motion data that is derived either from scene animation data, or from an analysis of preceding and subsequent images. Excellent results are possible with the second method even though it is strictly 2D in its working space, and even better ones are possible when using the 3D motion data from the scene. This proposal makes use of the first method.

Good motion blur is essential to believability (not necessarily realism) in computer animation. We are so used to seeing motion blur in every aspect of our daily lives that it is one of the essential cues our brain uses when processing vision. When it's missing, even if everything else is photoreal, we know that something is wrong.

The process flows like this:

  1. Create a map of the rendered image that indicates 3D motion vectors.
  2. Using a combination of z-buffer data and the motion vector data, create a new image based on smeared pixels of the original render.

Due to the need for a whole extra four channels of data, I think that this feature needs to be integrated into the main render pipeline, as opposed to implemented in the sequencer. If it is trivial to pass arbitrary amounts of data to the sequence editor, though, it could be done that way, providing a bit more flexibility.

Process

1. Creating the 3D motion map

This process needs to be done within the main render loop.

A. Determine the blurring time slice.

  1. Using a combination of the frames per second, mblur samples (from the current mblur implementation), and a user-specified exposure time, a value in fractions of a frame is calculated for the time slice to investigate. For example, at 30 fps with mblur set to 5 samples and user exposure at 50%: 1 sec / 30 frames = .0333 secs/frame; .0333 / 5 mblur samples = .0066 secs/sample; .0066 * 0.50 exposure = .0033 seconds for the whole vector, yielding look-ahead and look-behind values of .0016 seconds, or .05 frames each.
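
A small sketch reproducing that arithmetic (the function and argument names are mine, purely for illustration):

  def blur_time_slice(fps, mblur_samples, exposure):
      """Compute the exposure window for one sample, as in the example above.

      fps           -- frames per second (e.g. 30)
      mblur_samples -- samples from the existing mblur setting (e.g. 5)
      exposure      -- user exposure fraction, 0.0..1.0 (e.g. 0.5)
      """
      seconds_per_frame = 1.0 / fps                           # .0333 s
      seconds_per_sample = seconds_per_frame / mblur_samples  # .0066 s
      whole_vector = seconds_per_sample * exposure            # .0033 s
      each_way_seconds = whole_vector / 2.0                   # .0016 s
      each_way_frames = each_way_seconds * fps                # .05 frames
      return whole_vector, each_way_seconds, each_way_frames

  print(blur_time_slice(30, 5, 0.5))  # -> (0.00333..., 0.00166..., 0.05)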

B. As Blender generates renderfaces for the upcoming render, it needs to add a new attribute to each face to store motion data. (Assumption: Blender generates renderfaces before the render, culling for visibility.) Motion data is calculated as follows (a sketch in Python follows this list):

  1. Determine the location of this same renderface in both the previous and following time slices (looking forward and backward .05 frames, from the previous example)
  2. Calculate a best-fitting vector for each vertex of the renderface between the previous, current and following locations
  3. Save the calculated vectors in the new attributes of the original renderface for later use.
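
A possible sketch of step B in Python/NumPy. The "best-fitting vector" is taken here to be a simple average of the backward and forward displacements (a central difference); that is my assumption, not a requirement of the render code:

  import numpy as np

  def face_motion_vectors(positions_prev, positions_now, positions_next):
      """Per-vertex motion vectors for one renderface (illustrative).

      Each argument is an (n_verts, 3) array of the face's vertex positions
      at the look-behind, current, and look-ahead time slices.
      """
      forward = np.asarray(positions_next) - np.asarray(positions_now)
      backward = np.asarray(positions_now) - np.asarray(positions_prev)
      return 0.5 * (forward + backward)   # stored on the renderface (step 3)

  # A triangle translating along +x by 0.1 units per time slice:
  prev = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
  now = prev + [0.1, 0, 0]
  nxt = prev + [0.2, 0, 0]
  print(face_motion_vectors(prev, now, nxt))  # each row -> [0.1, 0, 0]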

C. As Blender shades the geometry and writes to the render buffer, it also writes to the new motion buffer.

  1. For each pixel that is shaded, a final motion vector is calculated by interpolating between the motion vectors stored for the vertices of the renderface that is being shaded, based on the location of the sampled point on the renderface.
  2. The final motion vector is projected onto the camera plane, producing a 2D vector, then normalized.
  3. The two components of the motion vector, plus the normalization factor, are saved in the motion buffer, which is a three-channel "image", with x, y, f (f being the normalization factor) corresponding to r, g, b in the familiar model. (A sketch of this per-pixel step follows the list.)
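
A sketch of those three sub-steps for one shaded pixel (Python/NumPy; the barycentric weights and the project_to_screen callable are illustrative stand-ins for whatever the shading code already has at hand):

  import numpy as np

  def write_motion_pixel(motion_buffer, px, py, bary, vert_motion, project_to_screen):
      """Write one pixel of the three-channel xyf motion buffer (illustrative)."""
      # 1. Interpolate the per-vertex motion vectors at the sampled point.
      motion_3d = np.asarray(bary) @ np.asarray(vert_motion)

      # 2. Project onto the camera plane and normalize.
      motion_2d = np.asarray(project_to_screen(motion_3d), dtype=float)
      f = np.linalg.norm(motion_2d)
      direction = motion_2d / f if f > 0.0 else np.zeros(2)

      # 3. Store x, y and the normalization factor f as the pixel's "rgb".
      motion_buffer[py, px] = (direction[0], direction[1], f)

  # Toy usage: an orthographic "projection" that simply drops the z component.
  buf = np.zeros((4, 4, 3), dtype=np.float32)
  write_motion_pixel(buf, 1, 2, (0.2, 0.3, 0.5),
                     [[0.1, 0, 0], [0.1, 0, 0], [0.1, 0, 0]],
                     lambda v: v[:2])
  print(buf[2, 1])  # -> approximately [1.0, 0.0, 0.1]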

D. This motion map can then be saved to a file (e.g. xyf.tga) for later use, or used in the pipeline once the original render is complete.

2. Using the motion map to create a blurred image from the raw render

A. Once the motion map is generated, a new blank rgb image buffer is initialized (called the "blur buffer" here).

B. Areas indicated as sky by the z-buffer (or the alpha channel of the render buffer, whichever is more appropriate) are copied directly from the render buffer to the blur buffer, and if desired can be blurred based on a function of the camera's motion.
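
For example, with the alpha channel as the sky indicator (whether alpha or the z-buffer is the better test is left open above), the copy could look like this:

  import numpy as np

  render_buffer = np.ones((4, 4, 4), dtype=np.float32)  # tiny RGBA render
  render_buffer[0, :, 3] = 0.0                          # top row is "sky"
  blur_buffer = np.zeros((4, 4, 3), dtype=np.float32)

  sky = render_buffer[..., 3] == 0.0            # alpha of 0 marks sky here
  blur_buffer[sky] = render_buffer[sky][:, :3]  # copied straight across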

C. Proceeding forward through each level of the z-buffer, render buffer pixels from that level are evaluated one at a time, then added to the blur buffer.

  1. The motion vector from the motion buffer is recreated from the xyf information.
  2. The base pixel is copied from the render buffer to the blur buffer.
  3. The pixel is mixed into the blur buffer along the trajectory indicated by the motion vector, with opacity falling to 0 (transparent) at each end of the vector (a sketch of this step follows the note below).
  • Note: I think that two things would need to be determined here by trial and error, coupled with visual inspection of output. First, the blending method. Blender has several already available, and we should let our eye determine which would produce the best result. Also, testing would be needed to determine the correct falloff function for opacity: linear, quad, etc.
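
A sketch of steps 1-3 for a single pixel, assuming a linear opacity falloff and a simple alpha-over mix (exactly the two choices the note says would need experimentation):

  import numpy as np

  def smear_pixel(blur_buffer, render_buffer, motion_buffer, px, py, scale=1.0):
      """Smear one render-buffer pixel along its motion vector (illustrative).

      scale converts the stored vector into screen pixels; linear falloff and
      plain alpha-over mixing are assumptions, not fixed choices.
      """
      h, w = blur_buffer.shape[:2]

      # 1. Recreate the 2D motion vector from the stored x, y, f channels.
      x, y, f = motion_buffer[py, px]
      vector = np.array([x, y]) * f * scale

      # 2. Copy the base pixel.
      color = render_buffer[py, px, :3]
      blur_buffer[py, px] = color

      # 3. Mix along the trajectory, fading to transparent at each end.
      steps = max(int(np.ceil(np.linalg.norm(vector))), 1)
      for i in range(1, steps + 1):
          t = i / float(steps)
          opacity = 1.0 - t                  # linear falloff (assumption)
          for sign in (1.0, -1.0):           # look ahead and behind
              sx = int(round(px + sign * vector[0] * t))
              sy = int(round(py + sign * vector[1] * t))
              if 0 <= sx < w and 0 <= sy < h:
                  blur_buffer[sy, sx] = ((1.0 - opacity) * blur_buffer[sy, sx]
                                         + opacity * color)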

D. The finished blur buffer could be copied back into the render buffer for display, posted to the secondary render buffer so the user has access to the original render as well as the blurred one, or simply held in the blur buffer, with the user given a way to save it separately.

Issues

It has been pointed out that this method fails to take into account animated textures, as well as faces moving into and out of shadow. Indeed, this is a shortcoming of the method, but I believe that experimentation will show that this would only be a visual problem in a very small percentage of cases.

The other issue is the incorporation of camera motion into the model. A static scene, including its background, should receive blurring if the camera moves. Perhaps incorporating camera motion into the vector at step 1.C.2, where the face's vector in 3D space is projected onto the camera plane, would do the trick.

Hopefully this detailed explanation will light a creative fire under the backside of a coder who is familiar with the render code.

-- RolandHess - 11 Feb 2005

Render the 3D View and Preview Rendering

It would be nice if you could render an image directly from the current 3D View window without setting up a camera first. -- ThomasBuschhardt - 28 Aug 2005