User:Kevindietrich/NodeSystem

Note
This proposal does not yet take into account some of the developments or design decisions made for the Blender 2.8 project. It is mostly a collection of previous ideas that I am putting together and making public.


A node system in Blender has long been desired, and has been attempted a few times. In this proposal, a node system is presented that would be achieved in two steps: first, new object types will be introduced to Blender: a particle/point cloud object, a hair object, and a volume object; second, the modifiers and some operators will be turned into a high-level node system.


Reviewing the object types

Blender has a few object data types: meshes, metaballs, curves, NURBS surfaces, cameras, and empties. Alongside those, Blender also generates particles (points), hair, and volumetric data, through the particle, hair, and smoke modifiers respectively.

Blender has only really benefited from developments in, and is centered around, its mesh editing capabilities. The material nodes for viewport rendering and the modifiers are based around the DerivedMesh, a data structure made to provide a common ground, a common API, for curves, meshes, and metaballs. Furthermore, the attribute node is hardcoded to mesh custom data attributes (UVs, vertex colors, normals, etc.), making it almost impossible to port the Cycles volume rendering pipeline, which uses the attribute node to look up the proper volume channel. To properly support node-based viewport volume rendering through the attribute node, the node system has to know about volumes as well.

Vertex colors in Blender require a mesh that has polygons; the polygons' vertices then hold the color information. Some workflows using Alembic need individual points and vertices to be able to hold vertex colors. For example, one would create a fluid simulation in Houdini, turn the individual voxels of the simulation into points, store information about the simulation as colors on those points, and finally import the points into Blender via Alembic to render them in Cycles using the Point Density node. This workflow highlights the lack of a volume object in Blender capable of importing volumetric data from other software; again, only mesh I/O is truly supported. Nevertheless, vertex colors should not require polygons, and there are plenty of other reasons to support such a workflow.

We have seen so far that Blender is centered around its mesh editing capabilities, and that the custom data attributes are too narrowly scoped.

Blender should promote particles, hair, and volumes to object types as a way to become more flexible in a pipeline. Trying to design a fluid solver, or a particle system, that can work directly from VDB, Alembic, or Partio caches is all but an impossible task. It would be easier to have a modifier system that takes objects as input and modifies those objects directly, instead of creating extra data.

I have already developed such a volume object:


This volume object is based on OpenVDB, and it is therefore possible to import volumes created in other software and exported as VDBs. Volumes do not necessarily only contain the result of smoke simulations: OpenVDB was originally created to store the data produced by liquid simulations, as level sets. A level set encodes in each voxel the distance to the nearest surface point, which makes it possible to render liquids directly in Cycles on the one hand, and to remove the need to mesh them on the other. Currently, level sets in the liquid solver ("Fluid Simulation" modifier) are turned into meshes, which is always approximate and prone to data loss. Whilst I worked on density volume rendering in Cycles, Lukas Stockner worked on level set rendering. Level sets are also commonly used in VFX for collision detection and fracturing. A volume object will thus allow for a more flexible volumetrics pipeline and simulation. The liquid and smoke modifiers will then take as input, and produce as output, a volume object instead of a mesh object.
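To illustrate, here is a minimal sketch of how such a level set volume could be created and written to disk with the OpenVDB API; the grid name and file path are illustrative:

    #include <openvdb/openvdb.h>
    #include <openvdb/tools/LevelSetSphere.h>

    int main()
    {
        openvdb::initialize();

        // A level set sphere: every active voxel stores the signed distance
        // to the surface (negative inside, positive outside).
        openvdb::FloatGrid::Ptr grid =
            openvdb::tools::createLevelSetSphere<openvdb::FloatGrid>(
                /*radius=*/1.0f, /*center=*/openvdb::Vec3f(0.0f),
                /*voxelSize=*/0.05f);

        grid->setName("surface");

        // Write the grid to a .vdb file that other software can import.
        openvdb::io::File file("liquid.vdb");
        openvdb::GridPtrVec grids;
        grids.push_back(grid);
        file.write(grids);
        file.close();

        return 0;
    }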

Lukas Toenne made some tests for a new hair edit mode which uses the BMesh vertex and edge API. I think this shows that Blender could turn part of its hair data structure into a new hair object. This hair object would of course reuse part of the mesh API (vertices and edges), and any operator working on a mesh's edges and vertices would then work on this hair object as well. The particle system modifier will then be split: on the one hand, the data generation will be handled by the new data-block; on the other hand, the physics solver will work on its own, taking a hair object as input. This hair object could be merged with the already existing curve object.
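As a hypothetical sketch (all names illustrative), such a hair data-block could share the vertex/edge layout of meshes, which is what lets edge- and vertex-based operators apply to both:

    #include <vector>

    struct Vertex { float co[3]; };  // same layout as a mesh vertex
    struct Edge { int v1, v2; };     // same layout as a mesh edge

    struct HairObjectData {
        std::vector<Vertex> verts;        // control points of all strands
        std::vector<Edge> edges;          // connectivity along each strand
        std::vector<int> strand_offsets;  // index of each strand's first vertex
    };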

Similarly, particles will be promoted to a new object data type, separating the logic of creating and managing the data from that of the physics simulation itself. It is important to note that most of the issues with the particle system are caused by it doing too much: creating, managing, and simulating the data. Also, particles in Alembic are currently imported as a vertex-only mesh, since it is impractical to create a Blender particle system from Alembic data (recall: the particle system needs an input mesh to create points from, whilst Alembic only stores the points). To generate points in this new point cloud object, the particle solver will make use of extra objects, much like the boolean modifier uses extra objects to perform its various carving operations.
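A point cloud data-block decoupled from any solver could then be as simple as the following hypothetical sketch, onto which Alembic points would map directly:

    #include <map>
    #include <string>
    #include <vector>

    struct PointCloudData {
        // One position per point; no input mesh is needed to create them,
        // so points coming from an Alembic cache import directly.
        std::vector<float> positions;  // x, y, z triplets

        // Per-point attributes (velocities, colors, ids, ...), stored as
        // raw named arrays for the purposes of this sketch.
        std::map<std::string, std::vector<float>> attributes;
    };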

Promoting particles, hair, and volumes to object data types will have multiple benefits:

  • the duplication system will be de-duplicated: currently we can duplicate over a mesh's vertices, or over the points of a particle system; that is, on an object's data in the first case, and on an object's data's modifier's data in the second case. With these new object data types, duplication will be done using the currently existing scheme, through parented objects.
  • the caching system will be handled at the object level instead of at the modifier level.
  • by separating the data from its editing through modifiers and simulations, pipelines between different software will be made easier.


Developing a node system

Once the object types are reworked, a node system can be written more easily.

Kamikaze

As a case study, let's look at Kamikaze, a piece of software I have written which makes use of such a node system. The first aspect of Kamikaze is that meshes and volumes are "object data types", called "primitives", and can be mixed in the same node tree:

This is the particle mesher that I wrote for Blender, but this time using nodes. A point cloud object is turned into a volume (using an OpenVDB level set) and that volume is then turned into a mesh. Note that Kamikaze also has an "edge object", which would be similar to the proposed hair object for Blender.
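In terms of the OpenVDB API, that node chain boils down to something like the following sketch, assuming openvdb::initialize() has been called at startup; the ParticleList wrapper is a minimal stand-in for the point cloud primitive:

    #include <vector>

    #include <openvdb/openvdb.h>
    #include <openvdb/tools/ParticlesToLevelSet.h>
    #include <openvdb/tools/VolumeToMesh.h>

    // Adapter exposing the interface that ParticlesToLevelSet expects.
    struct ParticleList {
        using PosType = openvdb::Vec3R;

        std::vector<openvdb::Vec3R> positions;
        openvdb::Real radius = 0.1;

        size_t size() const { return positions.size(); }

        void getPosRad(size_t n, openvdb::Vec3R &pos, openvdb::Real &rad) const
        {
            pos = positions[n];
            rad = radius;
        }
    };

    void mesh_particles(const ParticleList &particles,
                        std::vector<openvdb::Vec3s> &points,
                        std::vector<openvdb::Vec4I> &quads)
    {
        // Point cloud -> level set volume.
        openvdb::FloatGrid::Ptr level_set =
            openvdb::createLevelSet<openvdb::FloatGrid>(/*voxelSize=*/0.05);
        openvdb::tools::ParticlesToLevelSet<openvdb::FloatGrid> raster(*level_set);
        raster.rasterizeSpheres(particles);
        raster.finalize();

        // Level set volume -> mesh (quads at the zero isocontour).
        openvdb::tools::volumeToMesh(*level_set, points, quads, /*isovalue=*/0.0);
    }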

In Kamikaze, attributes can be added to the primitives through nodes:

This is similar to Blender's custom data attributes. The main issue with Blender's custom data is that it is hard-coded to the mesh. To have such a system, custom data has to be made more flexible.
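A hypothetical sketch of what more flexible custom data could look like: an attribute is just a named, typed array that any primitive (mesh, point cloud, volume, ...) can carry, rather than a layer hard-coded into the mesh:

    #include <cstddef>
    #include <map>
    #include <string>
    #include <variant>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // One storage type per attribute kind; extending the system means
    // adding an alternative here, not a new hard-coded mesh layer.
    using AttributeArray = std::variant<std::vector<float>,
                                        std::vector<int>,
                                        std::vector<Vec3>>;

    struct Primitive {
        std::map<std::string, AttributeArray> attributes;

        // Add a color attribute over 'count' elements, be they polygon
        // vertices, loose points, or voxels.
        void add_color(const std::string &name, std::size_t count)
        {
            attributes[name] = std::vector<Vec3>(count, Vec3{0.0f, 0.0f, 0.0f});
        }
    };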

Another aspect of Kamikaze is that objects do not hold a single "primitive" (or "object data type"). An object has an array of primitives which can be of multiple types: an object can have, for instance, 3 volumes, 5 meshes, and 1 curve. The main reason for that decision was to be able to make use of temporary meshes or curves without having to add an object as one would in Blender. Those temporary objects are often used for a single boolean operation and then forgotten about. The second reason is to be able to support shards without having to worry about a dedicated shards system. For fracturing, an object will contain 1 mesh before the fracture, and after the fracturing has occurred, the same object will contain n meshes. Here's an example video demonstrating fracturing using nodes and OpenVDB:

Again, the concept of having a container object helps to make use of temporary meshes, volumes, and point clouds without worrying about adding too many objects to the scene. For Blender, a container object would perhaps be too big of a change for a first implementation.
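For reference, the container-object idea amounts to something like this hypothetical sketch, where fracturing simply grows the primitive list of a single object:

    #include <memory>
    #include <vector>

    struct Primitive {
        virtual ~Primitive() = default;
    };

    struct Mesh : Primitive { /* vertices, edges, polygons... */ };
    struct Volume : Primitive { /* OpenVDB grids... */ };
    struct PointCloud : Primitive { /* points and attributes... */ };

    // A container object owns any number of primitives of mixed types.
    struct Object {
        std::vector<std::unique_ptr<Primitive>> primitives;
    };

    // Before fracturing the object holds 1 mesh; afterwards it holds the
    // n shard meshes, with no dedicated shards system required.
    void add_shard(Object &object)
    {
        object.primitives.push_back(std::make_unique<Mesh>());
    }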

Finally, a very crude physics system was attempted, but it proved too complicated to implement with Kamikaze's current architecture:

Kamikaze has a lot of bugs and issues, and is in dire need of a rewrite. The software itself is Linux-only (it requires GCC 6.3 with the C++17 filesystem library) and a bit cumbersome to compile, but for highly adventurous people the source code can be found on my GitHub page; the OpenVDB nodes are implemented as plugins.

Blender

Building on the concepts and experience from working on Kamikaze, object graphs in Blender should be able to have temporary mesh or curve data-blocks. An output node will determine the final type of the object: if a mesh is the final product of the graph, the object will be a mesh object. Some operators, like the ones converting between object types (e.g. Curves from Mesh), or even import operators (OBJ, Alembic, FBX, etc.), will be available as nodes.
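A hypothetical sketch of that rule: whatever kind of data reaches the output node decides the object's type, while intermediate data-blocks stay internal to the graph:

    enum class ObjectType { Mesh, Curves, PointCloud, Volume };

    struct ObjectData {
        ObjectType type;
        /* the actual geometry... */
    };

    struct OutputNode {
        ObjectData *result = nullptr;  // data produced by the last node
    };

    // If the graph's final product is a mesh, the object is a mesh object;
    // temporary curves used along the way never leave the graph.
    ObjectType final_object_type(const OutputNode &output)
    {
        return output.result->type;
    }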

Each node will work on the object data in its entirety, like the current modifiers do.

The way custom data will be handled through the graph is not entirely settled, but it seems straightforward: since the nodes work on the object data in its entirety, special nodes to add and remove e.g. UV maps appear feasible at first glance.

The node system will be high level. Lukas Toenne used to work on a low-level node system, in which I was briefly involved, but I doubt such a system is desirable for a first implementation. Part of the reason is that it prevents meshes from having a change in vertex count, as happens with a subdivision surface operation. Rather, such a node graph shall be a second level of nodes accessible from a high-level node, in a similar fashion to node groups. A special node will be able to contain this second level of nodes, and users will be able to "code using low-level nodes" and share their custom nodes.

I also propose to remove the DerivedMesh data type to further simplify the API: the nodes will receive the object data pointer and act according to the object type. Trying to have a single abstract API to handle meshes/points/curves/volumes/metaballs is bound to result in an API that tries to do too much and becomes cluttered (e.g. calling dm->getNumEdges(dm) on a volume doesn't make sense, and neither does a dm->getNumVoxels(dm) call on a mesh object).
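Without DerivedMesh, a node body would dispatch on the object type directly, along the lines of this sketch (names are illustrative):

    enum class ObjectType { Mesh, PointCloud, Curves, Volume };

    struct ObjectData { ObjectType type; /* geometry... */ };

    void node_execute(ObjectData *data)
    {
        switch (data->type) {
            case ObjectType::Mesh:
                /* operate on vertices, edges, polygons */
                break;
            case ObjectType::Volume:
                /* operate on voxels; no getNumEdges() stub needed */
                break;
            default:
                /* pass data through untouched for unsupported types */
                break;
        }
    }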

The nodes will be executed recursively starting from the output node. Kamikaze tries to be smart about this, but low-level optimisations like the ones present in the SVM in Cycles, or in the BVM in Lukas' work, are not feasible when the nodes work on entire objects: we cannot merge two such nodes, just like we cannot merge two modifiers currently. Therefore, before processing its data, the current node will call its input nodes to request their data; those input nodes will then call their own input nodes, and so on, until no more input nodes are found. Such a scheme gives us the opportunity to have simple control flow using switch nodes.
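This evaluation can be sketched as a simple pull model, where a switch node only ever pulls on its active branch (a minimal sketch, all names hypothetical):

    #include <vector>

    struct Node {
        std::vector<Node *> inputs;

        virtual ~Node() = default;
        virtual void process() = 0;  // work on the object data

        // Pull-based evaluation: request upstream data first, then process.
        virtual void request()
        {
            for (Node *input : inputs) {
                input->request();
            }
            process();
        }
    };

    struct SwitchNode : Node {
        int active_input = 0;  // assumed valid index into 'inputs'

        // Only the selected branch is pulled; the other branch of the
        // graph is never evaluated, giving simple control flow.
        void request() override
        {
            inputs[active_input]->request();
            process();
        }

        void process() override { /* forward the active input's data */ }
    };

    // Evaluation starts from the output node:
    //     output_node->request();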

Physics

Physics will work the same way it presently does in the modifier system. Since this node system is merely a node-based modifier system, the physics modifiers should translate easily. That is: the node graph will be tagged as "dynamic" and executed on every frame, passing the output of the previous frame as input for the current frame.
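As a sketch, a dynamic graph is simply re-executed for every frame, its output looped back as the next frame's input (all names hypothetical):

    struct ObjectData { /* points, hair, volume, or mesh data */ };

    struct Graph {
        bool is_dynamic = false;  // dynamic graphs are re-run every frame

        ObjectData *execute(ObjectData *input)
        {
            /* run the node network on 'input' and return the result */
            return input;
        }
    };

    void evaluate_frame_range(Graph &graph, ObjectData *state,
                              int frame_start, int frame_end)
    {
        // The output of frame n becomes the input of frame n + 1.
        for (int frame = frame_start; frame <= frame_end; frame++) {
            state = graph.execute(state);
        }
    }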

Caching will be done at the object level, since particles (points), hair, and volumes will exist as object data types. The result of a graph will therefore be cached after its execution, and the graph will be executed only when no up-to-date cache is available. Perhaps individual cache nodes can be made available to save intermediate computations as well.
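Object-level caching would then wrap graph execution roughly like this (hypothetical names, building on the sketch above):

    struct ObjectData { /* evaluated geometry */ };

    struct Graph {
        ObjectData *execute(ObjectData *input) { return input; }
    };

    struct ObjectCache {
        bool up_to_date = false;
        ObjectData data;
    };

    // The graph only runs when no up-to-date cache exists; otherwise the
    // cached result is returned directly.
    ObjectData *evaluate_object(Graph &graph, ObjectData *input,
                                ObjectCache &cache)
    {
        if (!cache.up_to_date) {
            cache.data = *graph.execute(input);
            cache.up_to_date = true;
        }
        return &cache.data;
    }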

Ideas

For a sculpting node, the node could record information about the brush and the 3D view (view matrix, brush radius, brush path, etc.) so that every time the node is executed, the brush stroke can be reapplied despite a change in geometry.
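A hypothetical sketch of the data such a node would record:

    #include <vector>

    struct BrushSample {
        float location[3];  // stroke position in object space
        float pressure;
    };

    struct SculptNodeData {
        float view_matrix[4][4];        // 3D view at recording time
        float brush_radius;
        std::vector<BrushSample> path;  // the recorded brush path

        // On every execution the recorded path is replayed against the
        // incoming geometry, even if that geometry has since changed.
    };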