MockupIdeaMotionTracking
Mockups Requested for Motion Tracking UI!
Blender's Movie Clip Editor contains 2d motion tracking and 3d camera solving functions. After 2d elements of a scene have been tracked, the 3d camera solver attempts to reconstruct the 3d camera motion, which assists in placing synthetic 3d elements into a live-action scene.
For an overview of the Motion Tracking workflow, check out this tutorial video:
Persistent Camera Reconstruction Orientation
What is Reconstruction Orientation?
One of the challenges in using the motion-tracking workflow is "orienting" the 3d camera tracking solution to a convenient place in the 3d workspace. This is needed because the camera solver does not know "which way is up" or the scale of objects in the footage.
Orientation is currently performed with one-shot action tools in the Reconstruction menu, including "Set Origin", "Set Scale", "Set Floor", "Set Wall", and "Set X Axis".
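For reference, the same tools are exposed as one-shot operators in Blender's Python API. A minimal sketch, assuming the current bpy.ops.clip operator names; these must be invoked from a Movie Clip Editor context with the relevant tracks selected:

```python
import bpy

# With the Movie Clip Editor active and the relevant tracks selected,
# these operators perform the menu actions above. Each one re-orients the
# reconstruction immediately and stores no record of which tracks were used.
bpy.ops.clip.set_origin()               # selected track becomes the origin
bpy.ops.clip.set_plane(plane='FLOOR')   # 3 selected tracks define the floor
bpy.ops.clip.set_plane(plane='WALL')    # ... or a wall
bpy.ops.clip.set_axis(axis='X')         # selected track lies along the X axis
bpy.ops.clip.set_scale(distance=1.0)    # distance between 2 selected tracks
```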
UI Issue: repetitive re-orientation...
One challenge with these tools is that they act "immediately" without storing any configuration state. For example, when the user picks 3 points to orient the "floor" and clicks "Set Floor", the 3d reconstruction is re-oriented so that those points lie on the 3d-viewport grid floor. However, if the solver is re-run, these three points may move, and the user has to repeat the procedure by selecting the same 3 points and choosing "Set Floor" again. The same applies to "Set Origin", "Set Scale", "Set X Axis", and so on.
Repeating this process can be quite cumbersome in a complex tracking scene with many iterations: first, because it is needlessly repetitious; second, because as the number of tracking points grows it becomes difficult to remember and find the same orientation points; and third, because another user working on the scene won't know which tracking points were used.
How do we improve it?
- UI to show persistent orientation configuration, so re-running the camera solver can automatically re-orient the 3d scene.
- UI to communicate the "completeness" of orientation.
- An origin point alone is not sufficient to constrain the orientation, so even after setting an origin the camera solver may still move the reconstruction. Ideally, the UI would help the user understand how many tracks must be assigned to orientation roles to achieve a "stable" re-orientation, and show whether the current orientation configuration is stable (see the sketch after this list).
- For example, orienting the "origin", "floor", and any axis (X, Y) is sufficient for a stable re-orientation. Alternatively, orienting a "wall", "origin", and any axis (X, Y, Z) is also stable.
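As an illustration only (this is not an existing Blender API), a persistent orientation configuration and its stability check could be modelled roughly like this:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class OrientationConfig:
    """Hypothetical persistent orientation state, keyed by track names."""
    origin_track: Optional[str] = None                          # 1 track
    axis_tracks: Dict[str, str] = field(default_factory=dict)   # e.g. {"X": "Track.003"}
    scale_tracks: Optional[Tuple[str, str]] = None               # 2 tracks
    floor_tracks: Optional[Tuple[str, str, str]] = None          # 3 tracks
    wall_tracks: Optional[Tuple[str, str, str]] = None           # 3 tracks

    def is_stable(self) -> bool:
        """True if the stored constraints pin the reconstruction down
        completely, following the rules listed above."""
        if self.origin_track is None:
            return False
        if self.floor_tracks is not None:
            return any(a in self.axis_tracks for a in ("X", "Y"))
        if self.wall_tracks is not None:
            return any(a in self.axis_tracks for a in ("X", "Y", "Z"))
        return False
```

With something like this stored in the scene, the configuration could be re-applied automatically after every solver run, and the UI would only need to report whether the stability check holds.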
Mockups Needed
For the above solution proposal, the following mockups are needed:
- some way to clearly display in the Reconstruction view which tracks are associated with the following orientation configurations (preferably all at once):
- 1 point: Origin, Axis-direction from origin X,Y
- 2 points: Scale
- 3 points: Floor, Wall
Display them on the track markers themselves, and also in the motion tracking dope sheet.
- some type of tool/properties UI that lists the orientation constraints currently set up, and shows whether the current orientation is "stable"
- For example, this could be placed in the camera-solver tools panel as a collapsible sub-section (a rough panel sketch follows this list).
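A minimal sketch of such a panel, assuming current Blender add-on conventions; the panel name, tab category, and the listed constraint rows are placeholders, since the persistent configuration it would read from does not exist yet:

```python
import bpy

class CLIP_PT_orientation_status(bpy.types.Panel):
    """Hypothetical collapsible sub-section listing orientation constraints."""
    bl_space_type = 'CLIP_EDITOR'
    bl_region_type = 'TOOLS'
    bl_category = "Solve"
    bl_label = "Orientation (persistent)"
    bl_options = {'DEFAULT_CLOSED'}  # drawn collapsed, like a sub-section

    def draw(self, context):
        col = self.layout.column(align=True)
        # Placeholder rows; real data would come from the stored configuration.
        col.label(text="Origin: Track.001")
        col.label(text="Floor: Track.004, Track.007, Track.012")
        col.label(text="X axis: Track.003")
        col.label(text="Scale: Track.002 / Track.009")
        self.layout.label(text="Orientation: stable", icon='CHECKMARK')

bpy.utils.register_class(CLIP_PT_orientation_status)
```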
Tracking Composite Filtered Views
Hard-to-track scenes may require extra work to get decent 2d marker/track locks. A powerful tool for this is the 2d image compositor, which can be used for contrast adjustment and edge detection (compare the commercial tracker boujou). A node-setup sketch follows the list below.
- Adding a compositor output node that feeds to the tracker is easy enough.
- The tracker needs UI for feeding tracks from multiple video sources into a single camera solution. Perhaps this should be the default, with multiple scenes used for different footage?
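A minimal sketch of the pre-filtering idea, assuming a loaded clip named "shot_01.mov" (the filter choice and output path are placeholders). The filtered frames written out here would then be loaded as a new clip and tracked:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Movie Clip input -> edge-detect filter -> file output.
clip_node = tree.nodes.new("CompositorNodeMovieClip")
clip_node.clip = bpy.data.movieclips["shot_01.mov"]  # assumed clip name

edge = tree.nodes.new("CompositorNodeFilter")
edge.filter_type = 'SOBEL'  # crude edge detection to boost trackable detail

out = tree.nodes.new("CompositorNodeOutputFile")
out.base_path = "//tracking_filtered/"  # placeholder output directory

tree.links.new(clip_node.outputs["Image"], edge.inputs["Image"])
tree.links.new(edge.outputs["Image"], out.inputs["Image"])
```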