User:Mosani/SummerOfCode2007

Introduction

This project's main goal is to rework the section of the render pipeline which converts Blender's scene data into render data accepted as input by a renderer. The goal is to make the system more modular by separating the tasks of accessing and formulating Blender's scene data. Access to the data will be generalized and managed by the API, so all renderers will use the API to reach scene data, but each renderer will have its own code which formulates that data for it. Like this:

Scene <-> Render API <-> Internal Renderer
                     <-> Yafray Plugin
                     <-> Renderman Plugin
                     <-> etc.

Having these tasks divided makes reuse easier, which in turn makes maintenance easier. Furthermore, it will lay the foundation for writing a plugin system to manage the internal integration of external renderers.
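
For illustration, the division of labor might look roughly like the sketch below. None of these names belong to the actual API; they are hypothetical stand-ins to show access code owned by the API versus formatting code owned by each renderer's exporter.

  #include <math.h>
  #include <stdio.h>

  typedef struct RAPI_Scene RAPI_Scene;         /* opaque handle owned by the API */

  /* Access side, provided once by the Render API: */
  float RAPI_camera_lens(RAPI_Scene *scene);    /* camera focal length in mm */

  /* Formatting side, written per renderer. A RenderMan-style exporter turns
   * the lens value into a field of view and prints a RIB request; the
   * internal renderer would instead copy the same value into its own render
   * structures. The access code is shared, the formatting is not. */
  void rib_export_camera(RAPI_Scene *scene, FILE *rib)
  {
      /* assuming Blender's usual 32mm camera aperture */
      float fov = 2.0f * atanf(16.0f / RAPI_camera_lens(scene)) * 57.29578f;
      fprintf(rib, "Projection \"perspective\" \"fov\" [%f]\n", fov);
  }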

My project is part of Blender's summer of code 2007: BlenderDev/SummerOfCode2007.


Current Work

Here are the Render API — Renderer documentation and the Render API — Blender Core documentation, and here is the plugin-writing tutorial: Render API Plugin Tutorial.

The following notes are partially for my own sake, to keep my thoughts in order, and partially so the outside world can know what I'm doing, so pardon me if it doesn't all make 100% sense.

Summary

I've determined that the area which needs priority right now is the reworking of the render pipeline and the setting up of the plugin system. There is no reason I cannot make headway on this section of the project now, and it is the most crucially lacking part of the work. This is the general direction I'm heading in.

Overall

  1. Plugin system. I'm prioritizing setting up the plugins, because it will be good for testing.
    1. Generate the renderer menu.
    2. Get renderer switching working.
    3. Call the render_frame callback in the test code.
    4. (the rest is integrated with the render pipeline reworking.)
  2. API Coding
    1. Implement temporary data storage.
    2. Implement camera functions.
    3. Finish the easy geometry functions that aren't done yet.
  3. Implementation Design
    1. Research and figure out what needs to be done with the results that are passed back; implement passes and the sending of data back.
  4. API Design
    1. Finish designing the method used to handle tricky animation situations.
    2. Design the export functions for subsurfaces, NURBS, curves, metaballs, and particles.
    3. Design the system for dealing with materials and textures.
  5. Pipeline
    1. Research and continue documenting the current export system, and how Blender Internal and Yafray currently like their data.
    2. Propose a new design for the 3d render section of the pipeline which uses the API callbacks and functions.
    3. Write setup, render_frame, finish, and abort for Blender Internal and Yafray, using the current system heavily for reference (see the sketch below).
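
To make item 5.3 concrete, here is a rough sketch of the kind of callback table a renderer plugin might register with the pipeline. The structure layout and function names are hypothetical, not the committed code, and the callback definitions are omitted.

  typedef struct RAPI_Scene RAPI_Scene;          /* opaque API handle */

  /* Callbacks a plugin provides (definitions omitted in this sketch): */
  int  internal_setup(RAPI_Scene *scene);        /* read settings, allocate            */
  int  internal_render_frame(RAPI_Scene *scene); /* export the scene, render one frame */
  void internal_finish(void);                    /* free per-job resources             */
  void internal_abort(void);                     /* user cancelled the render          */

  typedef struct RendererPlugin {
      const char *name;                          /* shown in the renderer menu */
      int  (*setup)(RAPI_Scene *scene);
      int  (*render_frame)(RAPI_Scene *scene);
      void (*finish)(void);
      void (*abort)(void);
  } RendererPlugin;

  /* Blender Internal and Yafray would each fill in one of these tables. */
  RendererPlugin internal_renderer = {
      "Blender Internal",
      internal_setup, internal_render_frame, internal_finish, internal_abort,
  };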

To Do — Week 9

Plugin System / Pipeline
  1. Finished: get callbacks working: Blender can call them, and they can call the render API.
  2. Finished: embed callbacks in the pipeline.
  3. Rewrite Blender Internal code to render a cube with the API (see the sketch below).
Plugin
  1. In progress: write an Aqsis renderer plugin that uses all the functionality currently written in the API.
API Implementation
  1. Research and design the implementation for passing data back from exporters.
Design
  1. Materials and textures.
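
As a minimal sketch of the third item under Plugin System / Pipeline, this is roughly what rendering the cube could look like from inside the internal renderer's render_frame callback. Every RAPI_* name here is a hypothetical placeholder, not the real API.

  typedef struct RAPI_Scene  RAPI_Scene;        /* opaque handles, as in the other sketches */
  typedef struct RAPI_Object RAPI_Object;
  typedef struct RAPI_Mesh   RAPI_Mesh;

  RAPI_Object *RAPI_first_object(RAPI_Scene *scene);
  RAPI_Object *RAPI_next_object(RAPI_Object *ob);
  RAPI_Mesh   *RAPI_object_mesh(RAPI_Object *ob);   /* NULL for lamps, cameras, ... */

  int internal_render_frame(RAPI_Scene *scene)
  {
      RAPI_Object *ob;

      for (ob = RAPI_first_object(scene); ob; ob = RAPI_next_object(ob)) {
          RAPI_Mesh *me = RAPI_object_mesh(ob);
          if (!me)
              continue;
          /* copy the stripped-down mesh (here, the test cube) into the
           * internal renderer's own vertex/face structures */
      }
      return 1;    /* success */
  }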

Schedule

  1. Sunday.
    1. Commit non-working plugin code, get help with the dynamic-loading (dl) issues.
    2. Work through the entire pipeline deciding where code will be moved to a callback and where it will stay (work out how callbacks will be embedded). Decide what structures need to be reworked and how. Make a write-up and post it on bf-committers and the wiki.
    3. Begin writing the Aqsis plugin.
  2. Monday.
    1. Review comments, incorporate useful feedback.
    2. Embed callbacks in pipeline.
    3. Work on dl problems.
  3. Tuesday.
  4. Wednesday.
  5. Thursday.
  6. Friday.
  7. Saturday.

Proposal

Here is the final draft of the project proposal.

A RENDER API FOR BLENDER

Synopsis:

The current system for converting Blender's database into render data is awkward. This has two important consequences. First, the system is difficult to maintain. When Blender changes, there is no well-established module for the export of the new data for rendering; instead, the existing export code must be searched for the places which need changes. Second, it is difficult to integrate external renderers. I propose that a render API take the place of the current system. The current setup's main fault is its lack of structure, so the most obvious change would be to create a well-defined, documented protocol here. By providing easy functions for the access and export of Blender's scene data, both problems would be solved.

Benefits to the Blender Community:

Maintenance

A render API would give Blender developers an obvious system to alter when changes are made to Blender. For example, when the Mesh refactor is done, it will be simple to change the render API to reflect this. Also, if Blender changes to support procedural models it will be obvious what needs to be changed in the API to render these. Furthermore, this would separate two things that are currently done at the same time: the extraction of the data and the sending of that data to a renderer. Having this separation will make changes on either part easier.

External Renderers

Having a strong and well-designed link to external renderers has been needed for years. I have felt this personally and seen it in the community. External renderers would allow Blender users to choose a renderer tailored to their needs. Here is a short list of notable renderers that could be added once this is completed: Aqsis, Pixie, 3Delight, Yafray, POV-Ray, Free-Style, etc.

Also, tight integration of these programs would stimulate the development of both Blender and the external renderers to take advantage of each other's advanced features. Another important result is that it would facilitate integrating Blender into various production pipelines.

Deliverables:

The current system which interfaces between Blender and its renderers (mostly in convertblender.c) would be replaced with two separate systems which perform its task: the render API and the exporter. A plug-in system would be created so that, by swapping DLL/SO plug-ins, Blender could switch which renderer it is hooked up to internally. Code would be created to connect Blender's internal renderer via the API, completing the replacement of the existing system. Furthermore, detailed documentation of the render API would be published online to facilitate the production of exporter plug-ins for external renderers.
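
A minimal sketch of the plug-in swapping, assuming POSIX dlopen (on Windows it would be LoadLibrary/GetProcAddress). The exported symbol name renderer_plugin_info and the RendererPlugin table are hypothetical.

  #include <dlfcn.h>
  #include <stdio.h>

  struct RendererPlugin;                         /* the callback table each plug-in exports */

  struct RendererPlugin *load_renderer_plugin(const char *path)
  {
      void *handle = dlopen(path, RTLD_NOW);
      if (!handle) {
          fprintf(stderr, "renderer plug-in load failed: %s\n", dlerror());
          return NULL;
      }
      /* each plug-in exports one table of callbacks under a fixed name */
      return (struct RendererPlugin *)dlsym(handle, "renderer_plugin_info");
  }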

Functionality Details:

Basically, the API would provide the entire scene description stripped of unnecessary data (from a rendering point of view) and formatted in a way that makes sending render data to the renderer intuitive and straightforward. For example, at a given frame the mesh deformation stack would be collapsed and used to populate a stripped-down mesh structure, which would then have operators that could be performed on it to easily output the vertex positions, edges, faces, and normals. A function would be provided to check whether the mesh is non-manifold, because many renderers do not support non-manifold meshes. Functionality similar to this would be provided for every part of a Blender scene.
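
To make that concrete, the stripped-down mesh might be little more than the following (a hypothetical layout, not the actual API):

  /* Deform stack already collapsed; only the data a renderer cares about. */
  typedef struct RAPI_Mesh {
      int    totvert, totedge, totface;
      float (*verts)[3];       /* vertex positions                        */
      float (*normals)[3];     /* per-vertex normals                      */
      int   (*edges)[2];       /* pairs of vertex indices                 */
      int   (*faces)[4];       /* quads; faces[i][3] == -1 for a triangle */
  } RAPI_Mesh;

  /* Check an exporter can run before handing the mesh to a renderer that
   * does not support non-manifold geometry. */
  int RAPI_mesh_is_manifold(const RAPI_Mesh *me);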

Implementation Details:

Most of the work would go into two additional files in the render module: the render API header and source file. The database would be traversed and copied into stripped-down and re-organized structures for the API. These stripped-down versions would have functions to further ease the output of render-necessary data. For example, the Renderman Interface specifies that the normals of the mesh be defined by the clockwise or counter-clockwise order of vertex input with respect to the handedness of the coordinate system. An iterator could be provided to traverse the mesh in this way. Also, useful error checking functions could be provided.
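
A sketch of what such an iterator could look like (again with hypothetical names): the exporter asks for a face's vertices in whichever winding the target renderer expects.

  typedef struct RAPI_FaceIter {
      const RAPI_Mesh *mesh;   /* the stripped-down mesh from the sketch above */
      int face;                /* index of the face being walked               */
      int corner;              /* next corner to hand back                     */
      int reverse;             /* non-zero: flip to the opposite winding       */
  } RAPI_FaceIter;

  void RAPI_face_iter_begin(RAPI_FaceIter *it, const RAPI_Mesh *me,
                            int face, int reverse);

  /* Writes the next vertex position into co[] and returns non-zero, or
   * returns 0 when the face is exhausted, so a RenderMan exporter can emit
   * the polygon in the order that matches the handedness of its coordinate
   * system. */
  int RAPI_face_iter_next(RAPI_FaceIter *it, float co[3]);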

Development Methodology:

The Render API would first be designed and rigorously evaluated. I would publish the work in progress for input from the developer community and study the form of the data accepted by Blender's internal renderer, the Renderman standard, Yafray's API, and possibly others. The design work would cover both the API itself and the specific implementation thereof.

I am thoroughly familiar with the need to work in small chunks and compile often. I am familiar with the communication channels needed to track down the knowledge I need to put this together correctly.

Project Schedule:

  • April 12 - May 28. Renderer research, including at least the Blender internal, the Renderman-compliant ones, and Yafray. Determine what methods would be the most useful for the programmer who is feeding the data into the renderer. Begin design of render API specification and seek community input.
  • May 28 - June 5. Begin coding simple testing functions to determine how best to proceed. Develop a work-flow for debugging the API.
  • June 5 - July 9. Implement the stripped-down structures needed for the API's extraction and re-organization of data. Set up the plug-in system for testing purposes. Implement extraction of render settings, camera, lamps, materials, and polygon meshes.
  • July 9 - August 20. Implement collapsing of mesh deformation stacks. Implement extraction of texture coordinates and settings, meta-balls, and curves. Implement extraction of particle systems. Implement functions which aid in the easy output of the data in multiple forms. Implement error checking functions. Integrate the API into the render pipeline. Rewrite the internal renderer support using the API.

Bio:

I am a 21-year-old student in the 3rd year of my Computer Science bachelor's degree at Gonzaga University, Spokane, WA, USA. I have been using Blender for 7 years. I have been coding for 3 years, mostly in C++. I have taken courses in Data Structures and Object-Oriented Programming. I have been using open source tools for development for the last year, including gcc, scons, svn, and gdb.

I have experience being thrown into existing code bases through my freelance web design work. I am currently working on a Bayesian matting program with my friend Nathan Vegdahl, using a combination of C and C++. I also have some experience in database design.

Blender and 3d graphics were what got me into Computer Science, but I haven't yet been able to take a class in graphics.

Why Me

Skills
  • Designing Code:
    • Database design in MySQL.
    • Design of a polymorphic Pong implementation in Object-Oriented Programming.
    • Design of an object-oriented AI to play Othello.
    • 3 years of programming. I am quite broken in with respect to coding conventions, documentation, coding workflow, and debugging. I have encountered issues and solved them.
    • I was taught to use pencil and paper for design. I am not afraid to do so.
  • API Experience:
    • JavaScript + DOM. I have many years of experience using the DOM API to create web interfaces, and experience exploring what an API offers and what it lacks.
    • API coded in OOP. I coded an API, previously mentioned, which interfaced between the Windows API and the needs of my text UI. During that project I learned a lot about the need to carefully choose the level of abstraction for an API.

Progress Reports

Progress Report 1

I've just had a chance to step back, take a breather, and look at what I've done so far. It may not look like much, but I think I've made some steps in the right direction, and have also begun working out which directions aren't the right ones...

Here are a few of the ways I've attempted to break down the project so far. Workflow: determine requirements, design, implement. Research: existing APIs, existing renderers, the Blender source. Existing APIs are important because they show how others have broken this problem down and created a proven working solution; also, if my API were similar it would be easier to port exporters. Existing renderers are important because they are what I'm trying to feed data to. And finally, the Blender source, of course, tells me what data I have to export and how that export is currently being done.

A few days before SOC started I began madly working through the Blender source. I tried to figure out the UI in an attempt to set up a system for running test code, but failed at that. I searched with grep and ctags. Finally, I had the idea of running the current source in a debugger, following execution and examining the memory. These methods have helped me start to get a grasp of the source, but I'm sure talking with developers will help much more. This was research in the "Blender source" area.

Just before summer of code started, I attended the developer meeting on #blendercoders and got some very helpful suggestions, such as the idea of examining the existing XSI API, among others. The XSI API seems well designed, so I have been reducing my work by taking it as representative. I examined how XSI sets up external renderers through a system of callbacks, an internal plugin registrar, and a 'rendering context' object which contains links to all the necessary data. However, the XSI scene description is a massive class hierarchy, which I soon got bogged down in and lost some time to. This was my research in the "existing APIs" area.

Next I turned from XSI to the Renderman Specification. I discussed Renderman and rendering in passes with Akhil, which was helpful, but it also got me thinking that I have some significant things to learn before I create this design. I decided that in order to write an API that will be used to write exporters, I should have some knowledge of what an exporter's concerns are, so I merged these two ideas and started working through the RISpec again, this time writing test RIBs along the way to verify that I understood it. This was very aggravating, but I did get somewhere. I learned to read the specs more carefully, and came to a better understanding of the parts that I did work out in test RIBs. I used Aqsis and Pixie for test renders. This has been the extent of my research in the "existing renderers" area in the last week.

I felt like I needed to study a few 3D concepts a bit more in order to do this project, so I pulled out "3D Computer Graphics" by Alan Watt and read about transformations, viewing systems, geometry, an overview of the rendering process, and a bit about materials. I think this will help me make more sense of what's going on in the Blender source.

I have finally come full circle. I've realized that while the question "how does a given renderer want its data?" certainly influences design decisions in an API, the first question must be "what data do I have to export and how is it structured?" So, short of a possible read-through of the Renderman shading language, which I have yet to do, I'm headed back into the Blender source. I will begin by examining the data structures. My focus will be extracting the information I need to create the design.

Finally, concerning the workflow. I have been focusing on research thus far, that is, on determining the requirements of my API. But in all this research I have somewhat lost sight of the goal of creating a full design of an API. Tonight I have had a chance to sit down and think about how I see the design at this point, and I feel more capable than I did on Monday. So my goal for the next week is to get the design document written and begin discussing it on #blendercoders. My research of the Blender source should help me write the document, but I can't afford to do any more research without making progress in producing a design.

Here are the things I want to get done in the next week:

  1. Finish reading about the Renderman shading language.
  2. Write, peer-review, and finalize a framework for the API. Once I have a general framework to work in, I'll be able to set it up in the code. Having a foothold in the code would be good for making further progress when the design is complete.
  3. Look over Blender data structures, the current Blender implementation, and existing Blender exporters.
  4. Block out the main sections of the API and design a first draft. Use Blender data structures, the XSI docs, and the Renderman spec as reference / inspiration.
  5. Discuss this with the Blender community on #blendercoders.

Progress Report 2

This week, I:

  • set up the current file structure for the project
  • established that I can print to the console for debugging
  • discussed the design with several people at various levels of abstraction
  • sorted out that integration with the Python API would be a bad idea to pursue, whereas integration with the plugin API would be a good one
  • made 1 and 1/4 attempts at pseudocode for the new API-based system
  • tried to divide the API into sections which I could work from
  • proposed a framework for the API to the community

Next week I am in Poland. I will be back on the 19th at around 11 am.

Progress Report 3

I couldn't give one this week because I was out of the country.

Progress Report 4

This week I:

  1. Read up on open source communications (while I was still out of town).
  2. Cleared out the render API wiki to be more focused on its purpose: the design.
  3. Did a couple of drafts of about half of my pseudocode and sent out emails about a few design concerns.
  4. Began documenting RE_Database_FromScene in an attempt to figure out where the conversion actually happens in Blender.

Next week I plan to:

  1. Finish my complete pseudocode of the entire system.
  2. Finish working through RE_Database_FromScene, and determine whether there are any other functions I need to work through in order to write my API design.
  3. Write enough of the design to get started coding. Primarily, I need to design a supplement to the existing pipeline code that will allow external renderers to be served by the pipeline; that is, a division between generic pipeline code and Blender Internal pipeline code. I also need to define the export methods for render settings, camera, lights, and polygon meshes. These are the bare basics for producing an image.
  4. Get feedback on my pseudocode and design documents.

Last comments.

I'm behind because of going to Poland. I plan to work very hard this week.