This is my original proposal, which is somewhat vague; please see User:Jwilkins/GSoC2012/Viewport FX Design for a more technical discussion.

Viewport FX

Application for Google Summer of Code 2012

Blender Foundation

Name: Jason Wilkins

IRC: jwilkins

Web: http://sites.google.com/site/jasonawilkins


Synopsis

The work proposed here has two main goals. The first goal is to rewrite all the code in Blender that draws to the screen so that it uses a single higher-level library that manages geometry and state efficiently. The second goal is to take advantage of the new layer of abstraction and greatly increase flexibility by making the drawing and compositing of the viewport programmable through some kind of textual description.


Benefits to Blender

The first goal, improving performance by routing all drawing code through a high-performance library, provides the following benefits:

* Improved performance when redrawing the screen, especially the 3D viewport.
* Elimination of OpenGL immediate mode calls (glBegin/glEnd), which are the slowest way to draw.
* Easier to limit drawing code to the OpenGL ES subset, which is used by portable devices and consoles.
* By making more indirect use of OpenGL, improvements to the core drawing code benefit all of Blender.
* Porting Blender to future versions of OpenGL may be easier.

The second goal, increasing the flexibility of the viewport by making it programmable, has the following benefits:

* Different viewport modes, such as wire/solid/texture, are no longer ad hoc, but are just pre-packaged scripts.
* Additional modes (such as a ZBrush-style cavity shader or screen-space ambient occlusion) could be implemented as additional scripts instead of as hard-coded new modes or options.
* A simple description language allows users to create their own specialized visualizations of objects.
* The new viewport system could be ported to the Blender Game Engine, where it would be a more powerful version of the game engine's 2D filters.
* For some users, the output of the OpenGL frame renderer would be adequate for their purposes if they had more control over it.
* Viewport drawing scripts may be a better place to describe how to support different versions of OpenGL (fallback code) than hard-coded C, letting Blender take advantage of new features sooner while keeping the C code simpler.
* A programmable viewport could provide a framework for accelerating the viewport of the rendered frame compositor (or even the compositor itself).


Deliverables

If we consider deliverables to be mostly user-oriented, then this proposal offers improved performance for what Blender already draws to the screen, plus a scripting language that lets Blender developers and advanced users create additional ways to visualize objects in real time. The features of the scripting language should provide the same flexibility as shader IDEs like RenderMonkey and FX Composer or game engines like Unreal and Unity. However, due to time constraints, this proposal is not ambitious enough to include as polished a user interface as those programs provide.


Project Details

Abstract Drawing Library

There are approximately 60 files in the source code that contain OpenGL code that causes pixels to appear on the screen. The first deliverable would be a module that adequately covers all of the use cases in those files, together with the replacement of all of that code with calls to the new module. This would not be merely an emulation of immediate mode where immediate mode was used. Instead it would attempt to preserve the spirit of immediate mode while allowing the implementation to cache geometry and use VBOs if it can. This should ease porting.
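As a rough sketch of the idea (with invented names; gpuBegin, gpuVertex3f, and gpuEnd are not existing Blender code), the front end could keep the familiar Begin/Vertex/End calling pattern while recording vertices into a CPU-side buffer, leaving the back-end free to submit the whole batch however it likes:

 #include <assert.h>
 #include <stddef.h>
 
 #define GPU_MAX_VERTS 65536
 
 typedef struct GPUImmediate {
     float buffer[GPU_MAX_VERTS][3]; /* accumulated vertex positions */
     size_t count;                   /* vertices recorded so far */
     int primitive;                  /* primitive type for this batch */
     int in_begin;                   /* sanity check for Begin/End pairing */
 } GPUImmediate;
 
 static GPUImmediate imm;
 
 void gpuBegin(int primitive)
 {
     assert(!imm.in_begin);
     imm.primitive = primitive;
     imm.count = 0;
     imm.in_begin = 1;
 }
 
 void gpuVertex3f(float x, float y, float z)
 {
     assert(imm.in_begin && imm.count < GPU_MAX_VERTS);
     imm.buffer[imm.count][0] = x;
     imm.buffer[imm.count][1] = y;
     imm.buffer[imm.count][2] = z;
     imm.count++;
 }
 
 void gpuEnd(void)
 {
     assert(imm.in_begin);
     /* the active back-end decides what to do with the whole batch:
        a compatibility profile could replay it with glBegin/glEnd, while
        a modern profile uploads it to a VBO and issues one draw call */
     imm.in_begin = 0;
 }

With an interface like this, existing call sites translate almost mechanically, while the decision between replaying the batch with legacy calls and uploading it to a VBO is made in one place.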

Concrete Drawing Profiles

There are now many different ways to send geometry to OpenGL. I would propose implementing two different concrete back-ends for the abstract drawing library. The first would be a compatibility mode that draws things in the same way that Blender does now. Such a profile would be created as a way of not biting off too much of the apple, so to speak. The second would be a high-performance implementation that supports the most cutting-edge version of OpenGL and extensions that Blender is allowed to use. An OpenGL ES implementation is also imaginable, but since many other parts of Blender use OpenGL calls that are not in OpenGL ES, such a profile would not be a complete port of Blender to OpenGL ES.
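One way to keep two such profiles behind the same abstract interface is a table of function pointers selected at start-up. The sketch below is purely illustrative and none of its names exist in Blender:

 #include <stddef.h>
 #include <stdio.h>
 
 typedef struct GPUBackend {
     const char *name;
     /* called when a batch of recorded vertices is ready to be drawn */
     void (*flush_batch)(const float (*verts)[3], size_t count, int primitive);
 } GPUBackend;
 
 static void flush_compat(const float (*verts)[3], size_t count, int primitive)
 {
     (void)verts; (void)primitive;
     /* would replay the batch with legacy glBegin/glEnd calls */
     printf("compatibility profile: %zu vertices\n", count);
 }
 
 static void flush_modern(const float (*verts)[3], size_t count, int primitive)
 {
     (void)verts; (void)primitive;
     /* would upload the batch to a VBO and draw it with glDrawArrays */
     printf("high-performance profile: %zu vertices\n", count);
 }
 
 static const GPUBackend backend_compat = { "compatibility", flush_compat };
 static const GPUBackend backend_modern = { "high-performance", flush_modern };
 
 /* chosen once at start-up from the OpenGL version and extensions the
    driver reports */
 static const GPUBackend *active_backend = &backend_compat;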

Command-Driven Viewport Drawing

The objects and widgets drawn to the viewport, the order in which they are drawn, and the target buffer to which they are drawn are all currently hard-coded into Blender. All of the individual elements that make up the 3D view in Blender should be divided into logical pieces and then parameterized so that they can be invoked in a scriptable way. Besides whatever parameters each element is discovered to have, the state and target of each rendered element could be overridden.
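For illustration only, once the implicit drawing steps are made explicit they could be recorded as data like the following (all type and field names here are invented):

 typedef enum ViewportCommandType {
     VP_DRAW_GRID,
     VP_DRAW_OBJECTS,
     VP_DRAW_WIREFRAME,
     VP_DRAW_WIDGETS
 } ViewportCommandType;
 
 typedef struct ViewportCommand {
     ViewportCommandType type;
     int target;      /* which render target/buffer the element draws into */
     float params[4]; /* per-element parameters, e.g. a color or line width */
 } ViewportCommand;
 
 /* roughly what today's hard-coded solid view might look like once it is
    expressed as data that a script could rearrange or extend */
 static const ViewportCommand solid_view[] = {
     {VP_DRAW_GRID,    0, {1.0f, 0.0f, 0.0f, 0.0f}},
     {VP_DRAW_OBJECTS, 0, {0.0f, 0.0f, 0.0f, 0.0f}},
     {VP_DRAW_WIDGETS, 0, {0.0f, 0.0f, 0.0f, 0.0f}},
 };

A viewport mode then becomes just an ordered list of such records that can be rearranged, filtered, or extended without touching the C drawing code.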

Viewport Compositing

The purpose of controlling the target of the different elements of the viewport is so that they can be recombined using shaders. This is a separate deliverable because a scriptable viewport without compositing is imaginable: if we only allowed rendering to the back buffer, or only allowed alternative render targets to be used as plain textures, we would still not have the ability to composite. A viewport compositor is an additional step, built from new commands not derived from existing Blender behavior, that combines separately rendered targets into a final view (or into another render target if you really wanted to get complicated).
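A minimal sketch of what such a compositing step might look like, assuming the scene and an overlay have already been rendered into separate targets (the uniform and variable names are invented):

 /* separately rendered targets (here a scene pass and an overlay pass) are
    bound as textures and blended by a small fragment shader drawn over a
    full-screen quad */
 static const char *composite_frag_src =
     "uniform sampler2D scene_tex;\n"
     "uniform sampler2D overlay_tex;\n"
     "varying vec2 uv;\n"
     "void main(void)\n"
     "{\n"
     "    vec4 scene   = texture2D(scene_tex, uv);\n"
     "    vec4 overlay = texture2D(overlay_tex, uv);\n"
     "    gl_FragColor = mix(scene, overlay, overlay.a);\n"
     "}\n";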

Viewport Description Language

Everything proposed so far could be programmed into Blender as a set of data structures. This would be kind of like a key-map, but it also implies some kind of node-based editor. However, at this time I think implementing a node-based editor would take a considerable amount of time to complete. For this reason I believe a text-based language is preferable, not just because it would be easier to implement, but because it allows for easy import and export. Such a language could be XML-based or something else to be determined.
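Purely as a strawman, with every element and attribute name invented for illustration, an XML-based description might read something like this:

 <viewport name="solid_plus_wire">
     <!-- each pass invokes parameterized drawing commands into a named target -->
     <pass target="scene_color">
         <draw element="objects" mode="solid"/>
     </pass>
     <pass target="overlay_color">
         <draw element="wireframe"/>
         <draw element="widgets"/>
     </pass>
     <!-- the compositor combines the targets into the final view -->
     <composite shader="overlay_mix">
         <input target="scene_color"/>
         <input target="overlay_color"/>
     </composite>
 </viewport>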


Project Schedule

The first goal of this project is a higher priority than the second because Blender needs improved performance and portability. The modules that implement this goal are the abstract and concrete parts of a new drawing library. Somewhere between one-third and one-half of the time should be spent implementing and testing this aspect of the project to make sure that it can be integrated with Blender.

The task of implementing the new drawing library is a little tedious but also low risk. The work should proceed quickly (the first 4 weeks) with the rest of the time spent testing, profiling, tuning, and benchmarking (the remaining 2 weeks).

The second goal, which is improved flexibility and programmability, is riskier and lower priority. The tasks associated with it are identifying commands that exist implicitly in Blender and making them explicit, then creating new useful commands such as a compositor, and then making all of these commands accessible to a user through a description language.

The easiest part should be identifying existing implicit commands, because this can be done simultaneously with implementing the new drawing library. These should be fairly quickly refactored into a set of repeatable commands that can be put into a data structure instead of being hard-coded (about 2 weeks). The development of the viewport compositing commands should take about 3 weeks, which leaves 1 week for developing a text-based description. I give the text-based description so little time because I have a lot of experience with writing parsers, but also because by that point I may conclude that a textual description is premature. I would decide not to do a text-based description if I believed the system was not mature enough by the end of the summer to allow regular users to be productive using it.