Doc:2.6/Manual/Render

Rendering

Rendering is the final process of CG (short of post processing, of course) and is the phase in which a 2D image corresponding to your 3D scene is finally created. Rendering is a CPU-intensive process. You can render an image on your computer, or use a render farm which is a network of PCs that each work on a different section of the image or on different frames. This section provides a full explanation of the features in Blender related to the process of producing your image or animation.

After you have set up the materials, textures, lighting, and the camera, you can begin rendering. It is unlikely that you will get it right on the first render, so be prepared to do many test renderings. This section describes the options and settings for the rendering process that will result in the desired image quality.

Blender has an internal render engine. It is fast and can produce nice results when fine-tuned. Several other external renderers can also be loaded, which offer more advanced rendering tools.

We know that around the world, our users have PCs of widely varying power. Rendering is the process in CG that can chew up CPU and disk space like there's no tomorrow. Especially in corporate environments, it is easy to fill up terabyte servers by uploading ten hour-long DV tapes and doing some editing. So there are lots of options to try to shoehorn a big job into a small PC by providing you with multiple sets of options that chunk up the work as best we can, while still preserving image integrity.

This page discusses the main options found on the Render panel, and subsequent pages explain more.

Overview

The rendering of the current scene is performed by pressing the big Image button in the Render panel, or by pressing F12 (you can define how the rendered image is displayed on-screen in the Render Output Options). See also the Render Window.

A movie is produced by pressing the big Animation button. The result of a rendering is kept in a buffer and shown in its own window. It can be saved by pressing F3 or via the File->Save Image menu, using the output options set in the Output panel. Animations are saved according to the format specified, usually as a series of frames in the output directory. See Output Options and Animations.

The image is rendered according to the dimensions defined in the Dimensions Panel.
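
For reference, the same steps can be driven from Blender's Python console with the standard bpy API; a minimal sketch (the output path shown is just an example):

  import bpy

  scene = bpy.context.scene
  scene.render.filepath = "//render/output"   # "//" makes the path relative to the .blend file

  # Render the current frame and write it to the filepath above
  bpy.ops.render.render(write_still=True)

  # Or render the whole frame range as an animation
  # bpy.ops.render.render(animation=True)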

Workflow

In general, the process for rendering is:

  1. Create all the objects in the scene
  2. Light the scene
  3. Position the Camera
  4. Render a test image at 25% or so without oversampling or ray tracing etc., so that it is very fast and does not slow you down (a quick way to set this up in Python is sketched after this list)
  5. Set and adjust the materials/textures and lighting
  6. Iterate the above steps until satisfied with the quality level
  7. Render progressively higher-quality full-size images, making small refinements and using more compute time
  8. Save your images
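
A quick way to apply the low-quality test settings from step 4 is Blender's Python console; a minimal sketch using the internal renderer's bpy properties (which toggles you disable is up to you):

  import bpy

  render = bpy.context.scene.render
  render.resolution_percentage = 25   # render at 25% of the final size
  render.use_antialiasing = False     # skip oversampling for speed
  render.use_raytrace = False         # skip ray tracing for speed
  render.use_shadows = False          # optionally skip shadow calculation too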

Distributed Render Farm

There are several levels of CPU allocation that you can use to decrease overall render time by applying more brainpower to the task.

First, if you have a multi-core CPU, you can increase the number of threads, and Blender will use that number of CPUs to compute the render.
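
For example, the thread count can also be fixed from the Python console; a small sketch (the value 4 is just an example):

  import bpy

  render = bpy.context.scene.render
  render.threads_mode = 'FIXED'   # 'AUTO' detects the number of cores automatically
  render.threads = 4              # threads used when the mode is 'FIXED'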

Second, if you have a local area network with available PCs, you can split the work up by frames. For example, if you want to render a 200-frame animation, and have 5 PCs of roughly equal processing power, you can allocate PC#1 to produce frames 1-40, PC#2 to frames 41-80, and so on. If one PC is slower than the others, simply allocate fewer frames to that PC. To do LAN renders, map the folder containing the .blend file (in which you should have packed your external data, like the textures, …) as a shareable drive. Start Blender on each PC and open the .blend file. Change the Start and End frame counts on that PC, but do not save the .blend file. Start Rendering. If you use relative paths for your output pathspec, the rendered frames will be placed on the host PC.
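
As an illustration, the per-PC frame range and a relative output path could be set like this from the Python console (the 1-40 range matches PC#1 in the example above; the folder name is hypothetical):

  import bpy

  scene = bpy.context.scene
  scene.frame_start = 1                 # first frame this PC renders
  scene.frame_end = 40                  # last frame this PC renders
  scene.render.filepath = "//frames/"   # "//" keeps the path relative to the shared .blend
  bpy.ops.render.render(animation=True)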

Third, you can do WAN rendering, which is where you email or fileshare or Verse-share the .blend file (with packed data!) across the Internet, and use anyone's PC to perform some of the rendering. They would in turn email you the finished frames as they are done. If you have reliable friends, this is a way for you to work together.

Fourth, you can use a render farm service. These services, like BURP, are run by an organization. You email them your file, and then they distribute it out across their PCs for rendering. BURP is mentioned because it is free, and is a service that uses fellow Blender users' PCs with a BOINC-type of background processing. Other services are paid subscriptions or pay-as-you-go services.

Render Workbench Integration

(Figure: Manual-Render-Pipeline.jpg, Blender's render pipeline and post-processing possibilities)

Blender has three independent rendering workbenches that pass the image down a pipeline from one to the other in order: the render engine, the Compositor, and the Sequencer.

You can use each one of these independently, or in a linked workflow. For example, you can use the Sequencer by itself to do post-processing on a video stream. You can use the Compositor by itself to perform some color adjustment on an image. You can render the scene via the active Render Layer, and save that image directly, with the scene image computed in accordance with the active render layer, without using the Compositor or Sequencer. These possibilities are shown in the top part of the image to the right.

You can also link scenes and renders in Blender as shown, either directly or through intermediate file storage. Each scene can have multiple render layers, and each Render Layer is mixed inside the Compositor. The active render layer is the render layer that is displayed and checked active. If the displayed render layer is not checked active/enabled, then the next checked render layer in the list is used to compute the image. The image is displayed as the final render if Compositing and Sequencer are NOT enabled.

If Compositing is enabled, the render layers are fed into the Compositor. The nodes manipulate the image and send it to the Composite output, where it can be saved or, if Sequencer is also enabled, sent on to the Sequencer.

If Sequencer is enabled, the result from the Compositor (if Compositing is enabled) or the active Render Layer (if Compositing is not enabled) is fed into the Scene strip in the Sequencer. There, it is manipulated according to the VSE settings, and finally delivered as the image for that scene.

Things get a little more complicated when a .blend file has multiple scenes, for example Scene A and Scene B. In Scene B, if Compositing is enabled, the Render Layer node in Scene B's compositor can pull in a Render Layer from Scene A. Note that this image will not be the post-processed one. If you want to pull in the composited and/or sequenced result from Scene A, you will have to render Scene A out to a file using Scene A's compositor and/or sequencer, and then use the Image input node in Scene B's compositor to pull it in.
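
As a rough sketch of that setup in Python, an existing Render Layers node in Scene B's compositor can be pointed at Scene A (the scene names follow the example above, and Scene B is assumed to already have compositing nodes):

  import bpy

  scene_b = bpy.data.scenes['Scene B']
  scene_b.use_nodes = True   # make sure Scene B has a compositing node tree
  for node in scene_b.node_tree.nodes:
      if node.type == 'R_LAYERS':
          # pull in the (un-post-processed) render layer from Scene A
          node.scene = bpy.data.scenes['Scene A']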

The bottom part of the possibilities graphic shows the ultimate blender: post-processed images and a dynamic component render layer from Scene A are mixed with two render layers from Scene B in the compositor, then sequenced and finally saved for your viewing enjoyment.

These examples are only a small part of the possibilities in using Blender. Please read on to learn about all the options, and then exercise your creativity in developing your own unique workflow.

The Render Settings Panel

The Render tab contains all of the options for the internal render engine, or an external one, if selected.

Render

Here you can start the rendering process, either for a still Image or an Animation.

You can also select where the image is rendered to. These options are described on the Render Display page.

Layers

The Layers menu contains options for rendering in Layers and Passes.
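
For illustration, render layers and their passes are also exposed to Python; a minimal sketch that enables a Z pass on the active render layer:

  import bpy

  layers = bpy.context.scene.render.layers
  active = layers.active      # the render layer used for the final image
  active.use = True           # make sure it is enabled for rendering
  active.use_pass_z = True    # also output a Z (depth) pass
  print("Active render layer:", active.name)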

Dimensions

This menu has settings for the size of the rendered images (see Output Options) and options for rendering sequences (see Animations).
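
A small sketch of the corresponding Python properties (the 1920 x 1080 values are just an example):

  import bpy

  scene = bpy.context.scene
  scene.render.resolution_x = 1920           # final image width in pixels
  scene.render.resolution_y = 1080           # final image height in pixels
  scene.render.resolution_percentage = 100   # scale factor applied to the size above
  scene.render.fps = 24                      # frame rate used for animations
  scene.frame_start = 1
  scene.frame_end = 250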

Anti-Aliasing

Antialiasing is important for producing high quality renders that do not have "jaggies" or stair-stepped pixel artifacts.
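
In the internal renderer these settings map to a pair of Python properties; a minimal sketch (8 samples is an example value):

  import bpy

  render = bpy.context.scene.render
  render.use_antialiasing = True      # enable oversampling
  render.antialiasing_samples = '8'   # samples per pixel: '5', '8', '11' or '16'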

Motion Blur

Motion Blur is an important effect in rendering moving images. It prevents the animation from appearing unrealistic and stuttery, as in stop-motion, where each frame is a perfect still image.
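
A minimal sketch of the internal renderer's motion blur settings in Python (the sample count and shutter time are example values):

  import bpy

  render = bpy.context.scene.render
  render.use_motion_blur = True      # enable sampled motion blur
  render.motion_blur_samples = 8     # more samples give smoother blur but slower renders
  render.motion_blur_shutter = 0.5   # shutter time, in frames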

Shading

These are options for controlling what shading effects are calculated in the render. Deselecting them disables them.

Color Management uses a linear workflow when enabled.
Alpha sets how transparent pixels are rendered.
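
These options are also exposed as Python properties of the render settings; a rough sketch (exact names can vary between 2.6x releases):

  import bpy

  render = bpy.context.scene.render
  render.use_textures = True    # evaluate textures
  render.use_shadows = True     # calculate shadows
  render.use_raytrace = True    # ray-traced reflection, refraction and shadows
  render.use_sss = True         # subsurface scattering
  render.alpha_mode = 'SKY'     # how transparent pixels are rendered ('SKY', 'PREMUL', 'STRAIGHT')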

Output

Set where images are rendered to and what file type. See Output Options.
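
For example, the output path and file format can be set from Python; a small sketch, assuming a 2.6x release that exposes the format under render.image_settings (the path is just an example):

  import bpy

  render = bpy.context.scene.render
  render.filepath = "//render/frame_"         # "//" means relative to the .blend file
  render.image_settings.file_format = 'PNG'   # output file type
  render.image_settings.color_mode = 'RGBA'   # keep the alpha channel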

Performance

Control the way the renderer performs with respect to the computer's memory and processor. See Performance.

Post Processing

Control effects that are applied after the image has been rendered. If you are using the Compositor or Sequencer, you can tell Blender to process those effects instead of directly rendering the scene.

Fields are used when Rendering for Video.

Dithering is a method of adding slight noise to pixels to reduce banding artifacts.

You can also enable Edge Rendering to create sketch-like or toon-like effects.
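
These toggles are also available as Python properties; a minimal sketch (the edge threshold is an example value):

  import bpy

  render = bpy.context.scene.render
  render.use_compositing = True    # send the render through the Compositor
  render.use_sequencer = True      # send the result through the Sequencer
  render.use_fields = False        # field rendering for interlaced video
  render.dither_intensity = 1.0    # amount of dithering (0 disables it)
  render.use_edge_enhance = True   # toon-style edge rendering
  render.edge_threshold = 10       # how aggressively edges are detected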

Stamp

Stamping inserts text over the rendered images, as well as stamps meta-data into image formats that support it (PNG, JPEG and EXR).
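
A small sketch of the equivalent Python settings (the note text is just an example):

  import bpy

  render = bpy.context.scene.render
  render.use_stamp = True                  # draw the stamp text onto rendered images
  render.use_stamp_frame = True            # include the frame number
  render.use_stamp_camera = True           # include the camera name
  render.use_stamp_note = True
  render.stamp_note_text = "test render"   # free-form note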

Bake

Render Baking is a process that creates texture files that hold desired rendered effects, like lighting, shadows, or color information. This is useful for working with real-time graphics that benefit from not having to calculate shading when not necessary.
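
As an illustration, a bake can also be triggered from Python; a rough sketch, assuming the active object is UV-unwrapped and has an image assigned in the UV/Image Editor:

  import bpy

  render = bpy.context.scene.render
  render.bake_type = 'FULL'     # bake full shading; 'AO', 'SHADOW', 'NORMALS', ... also work
  render.bake_margin = 2        # pixel margin baked around UV islands
  bpy.ops.object.bake_image()   # bake into the image assigned to the active UV layer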