User:Brecht/Depsgraph

Revision as of 01:06, 17 December 2012 (Mon) by wiki>Brecht (More Topics)

Dependency Graph Ideas

High Level Overview

Incremental Steps

When we talk about the dependency graph project, many topics get intermixed, and it becomes difficult to oversee things. There are various distinct problems to be solved here, and we do not need to know the solution and design for each one of them to start working. What I propose is to split things up into manageable tasks that can be done by one person or a few people in a reasonably short time, and incrementally improve the system.

Here are some proposed steps, not necessarily in the order that they would need to be done.

Any Datablock in the Depsgraph

Currently only objects can be put into the dependency graph, but it should be possible to put any datablock in there. Every datablock should get two functions: one to create dependencies to build the graph, and another to evaluate the datablock. Note that once the generic system is implemented, we can move datablocks one by one into this system.
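The two-function idea could be sketched roughly as follows. This is a hypothetical illustration, not Blender's actual API: the names `Datablock`, `build_dependencies`, `evaluate` and `DepsGraph` are all assumptions made for the example.

```python
class Datablock:
    """Hypothetical generic datablock node; names are illustrative."""

    def __init__(self, name):
        self.name = name

    def build_dependencies(self, graph):
        """Register the datablocks this one depends on."""
        raise NotImplementedError

    def evaluate(self):
        """Recompute this datablock's derived data."""
        raise NotImplementedError


class DepsGraph:
    def __init__(self):
        self.deps = {}  # node -> set of nodes it depends on

    def add(self, node):
        self.deps.setdefault(node, set())
        node.build_dependencies(self)

    def add_dependency(self, node, depends_on):
        self.deps.setdefault(node, set()).add(depends_on)

    def evaluation_order(self):
        """Topological order: dependencies come before their users."""
        order, visited = [], set()

        def visit(node):
            if node in visited:
                return
            visited.add(node)
            for dep in self.deps.get(node, ()):
                visit(dep)
            order.append(node)

        for node in self.deps:
            visit(node)
        return order
```

Any datablock type would then only need to implement these two methods to participate in scheduling, which is what allows moving types over one by one.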

Threaded Evaluation

Making datablock evaluation multithreaded requires two things: ensuring all datablock evaluation operations are thread safe, and making a threaded scheduler that distributes datablock update calls over threads.

Making things thread safe can be done incrementally as well if we add a flag to indicate that some operation is not thread safe, so that threaded evaluation can block when such objects are encountered.
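A minimal sketch of that flag, assuming a `thread_safe` attribute on nodes (an invented name for this example): unsafe evaluations are serialized behind a lock so they never run concurrently with each other, while safe ones run in parallel.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Serializes all evaluations flagged as not thread safe.
_unsafe_lock = threading.Lock()

def evaluate_node(node):
    if node.thread_safe:
        node.evaluate()
    else:
        # Block here: only one non-thread-safe evaluation at a time.
        with _unsafe_lock:
            node.evaluate()

def evaluate_all(nodes, max_workers=4):
    # Assumes 'nodes' are mutually independent (one dependency level);
    # a real scheduler would walk the graph level by level.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(evaluate_node, nodes))
```

As more operations are audited and flagged thread safe, parallelism improves without any scheduler changes.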

Finer Granularity in the Depsgraph

Rather than evaluating an object as a whole, we can evaluate it in pieces, which will avoid some dependency cycles. For objects you could start by splitting up object transformation, object data and object dupligroup contents. Dependencies that are created would then have to specify which subset(s) they depend on, and the evaluate function would be called with a parameter specifying which subset needs to be evaluated.
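A toy sketch of per-component tagging, with component names assumed for illustration (transform / data / dupligroup, matching the split suggested above):

```python
# Hypothetical component identifiers; not actual Blender constants.
TRANSFORM, DATA, DUPLIGROUP = "transform", "data", "dupligroup"

class ObjectNode:
    """One object exposed as several tag-able components."""

    def __init__(self, name):
        self.name = name
        self.tagged = set()   # components needing re-evaluation
        self.evaluated = []   # record of what was actually computed

    def tag(self, component):
        self.tagged.add(component)

    def evaluate(self):
        # Only recompute the tagged subsets, not the whole object.
        for component in sorted(self.tagged):
            self.evaluated.append(component)
        self.tagged.clear()
```

A dependency on, say, only an object's transform would then tag and re-evaluate just `TRANSFORM`, leaving the (possibly expensive) object data untouched.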

In principle you could go all the way down to modifiers and constraints here, but as they are a stack and not a graph there is no point currently.

Particle systems are another thing that should be evaluated & tagged for update separately.

Armature Bones at Scene Level

Building on this finer granularity, each bone in an armature could be added as a separate node in the dependency graph. This would make the armature level dependency graph obsolete and instead move bones to the scene level dependency graph, solving some common dependency cycles.

Is Updated instead of Need Update

There are some update bugs with making layers visible, linking, or using objects from different scenes. These could be solved by switching the RECALC flag to an IS_UPDATED flag. Depsgraph flush would then clear such IS_UPDATED flags rather than set RECALC flags. This way we don't need to e.g. keep track of which layers become visible.
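The inverted flag could work roughly like this sketch, where the node class and `users` mapping are invented for the example. The key property is that data which was never evaluated (e.g. on a hidden layer) simply has a cleared flag, so it gets evaluated on first use without any special-case tracking:

```python
def flush_change(changed, users):
    """Clear is_updated on 'changed' and everything that uses it.

    'users' maps a node to the set of nodes depending on it.
    """
    stack = [changed]
    while stack:
        node = stack.pop()
        if not node.is_updated:
            continue  # already cleared, so its users were cleared too
        node.is_updated = False
        stack.extend(users.get(node, ()))

def evaluate_needed(nodes):
    """Evaluate anything not marked up to date, in dependency order."""
    for node in nodes:
        if not node.is_updated:
            node.evaluate()
            node.is_updated = True
```

Compared to setting RECALC flags, nothing needs to know in advance which data will become visible; visibility just determines when evaluation finally happens.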

More Topics

Other problems that need to be solved:

  • Physics simulations / collision
  • Dependencies across time (time offset, ..)
  • Duplis and proxies
  • Lazy evaluation / Alembic / archive loading
  • Multi scene editing
  • Multi time editing
  • Sequencer and compositor integration
  • Particle system render / viewport levels
  • RNA update from animation / driver

Thoughts

Data vs. Operation Nodes

There are two types of graphs that you could make: one where the nodes are data (datablocks, bones, ...), and one where the nodes are operations (modifiers, constraints, ...). The dependency graph currently has data nodes, while node systems have operation nodes.

Thinking of the dependency graph as having data nodes fits better with how Blender currently works: you evaluate an object (or, in a more advanced system, part of an object), and then move on to the next. With instancing, proxies, physics and other more complex dependencies this gets fuzzier; the model doesn't necessarily break down, but it would need to be reviewed how these things fit.

Thinking of Blender data evaluation as a graph with operation nodes fits better for things like instancing, proxies, generative modeling, where you would be naturally creating/instancing/copying/discarding data all the time. This is more flexible, but it requires blender data evaluation to work more like nodes, with separated input and output data.
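The contrast can be made concrete with a toy sketch (all names and the `compute_matrix` stand-in are invented for illustration): a data node mutates itself in place, while an operation node is a pure function from inputs to a fresh output, which is what makes copying and instancing natural.

```python
def compute_matrix(loc):
    # Stand-in for real transform math.
    return ("translate", loc)

def data_node_update(obj):
    # Data-node style: the object IS the node; evaluation mutates it
    # in place, writing the result back into the same datablock.
    obj["matrix"] = compute_matrix(obj["loc"])

def operation_node(inputs):
    # Operation-node style: separated input and output; the node never
    # touches its inputs, so the same inputs can feed many instances.
    return {"matrix": compute_matrix(inputs["loc"])}
```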

Input and Output

There is some separation between update input and output (e.g. mesh => derivedmesh, loc/rot/scale => matrix), but this is not enough. With everything animatable, effectively all data in a datablock can be output data, which complicates things. Making this work fully reliably may require copying entire datablocks for evaluation. Memory usage is then a concern, e.g. for heavy meshes.
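One way to keep the copy affordable, sketched here with invented names and plain dicts standing in for datablocks: shallow-copy the animatable settings into an evaluation copy, but share the heavy geometry by reference until something actually writes to it (a copy-on-write scheme this sketch only hints at).

```python
def make_eval_copy(datablock):
    """Shallow copy: settings are duplicated, heavy data is shared."""
    return dict(datablock)

def apply_animation(eval_copy, animated_values):
    # Animation overrides land only on the copy; the original
    # datablock stays pristine as the evaluation input.
    eval_copy.update(animated_values)
```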

Physics

Physics in the dependency graph currently has two purposes:

  • Objects with physics sims have dependencies on other collision and effector objects, so they are evaluated in the right order.
  • Point caches are cleared or tagged outdated when dependencies change.

An issue is that you can have two-way interaction between objects, so it's not clear which depends on which. In a more operation-oriented graph you could avoid such cyclic dependencies, but it's not entirely clear to me yet how physics simulation best fits into the entire data evaluation system.

Scheduling

A more advanced centralized scheduler that takes care of tasks like render, preview render, scene updates, compositing, baking, .. might be in order. The current jobs system does part of this, but doesn't handle conflicting jobs and data access well.

In some simple experiments with multi-threaded object evaluation, just the overhead of starting/stopping threads (at 24 fps) negated most of the performance improvements. This needs to be carefully designed so there is no waiting.
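One obvious design response, sketched here with an invented `FrameScheduler` class: keep a persistent worker pool alive across frames, so each frame only pays for the work itself rather than for thread creation and teardown.

```python
from concurrent.futures import ThreadPoolExecutor

class FrameScheduler:
    """Persistent worker pool reused for every frame's updates."""

    def __init__(self, workers=4):
        # Threads are created once, not 24 times per second.
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def update_frame(self, tasks):
        # Submitting to an existing pool has no per-frame thread
        # start/stop cost; workers pick tasks off a shared queue.
        return list(self.pool.map(lambda fn: fn(), tasks))

    def shutdown(self):
        self.pool.shutdown()
```

A real scheduler would also need work-stealing or level-based dispatch so workers never sit idle waiting on one slow dependency.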

Update Flags

We currently tag objects that need to be updated. It would be more reliable to do the reverse, tag objects when they are up to date and let the depsgraph clear outdated data. This could be a relatively simple change, but would solve issues with layer and visibility updates.