User:Tianwei37/Report & Blog/Multi-view Reconstruction Documentation
Design Philosophy:
The problem with the current implementation is that it assumes all tracks come from a single camera throughout the whole reconstruction process. This is straightforward to implement for a single camera, since matched markers are easy to manage: they all belong to the same track.
For the multi-camera case, which I assume can involve more than two cameras, much of the data here needs to become arrays. For example, the following data structure can be modified:
typedef struct {
    Scene *scene;                /* scene the solve is run in */
    int clip_num;                /* the number of active clips for multi-view reconstruction */
    MovieClip **clips;           /* a clip pointer array that records all the clip pointers */
    MovieClipUser user;          /* current clip user (frame, render size, ...) */
    ReportList *reports;         /* errors and warnings reported back to the user */
    char stats_message[256];     /* solver status/statistics string shown while the job runs */
    struct MovieMultiviewReconstructContext *context;  /* data handed over to libmv */
} SolveMultiviewJob;
The single clip pointer is changed to an array of clips, which contains the primary camera and the witness camera(s). The job also holds a struct called MovieMultiviewReconstructContext, defined as follows:
typedef struct MovieMultiviewReconstructContext {
    struct libmv_TracksN **tracks;                  /* set of tracks from all clips (API in autotrack) */
    struct libmv_ReconstructionN **reconstruction;  /* reconstruction for each clip (API in autotrack) */
    libmv_CameraIntrinsicsOptions **camera_intrinsics_options;  /* camera intrinsics of each camera */
    TracksMap **tracks_map;                         /* tracks_map of each clip */
    int *sfras, *efras;                             /* start and end frame of each clip */
    ListBase *corr_base;                            /* a set of correspondences across clips */
    bool select_keyframes;                          /* automatically select keyframes if true */
    int keyframe1, keyframe2;                       /* the keyframes selected from the primary camera */
    int refine_flags;                               /* which intrinsics to refine during bundle adjustment */
    char object_name[MAX_NAME];                     /* name of the tracking object being solved */
    bool is_camera;                                 /* solving camera motion rather than object motion */
    short motion_flag;                              /* motion type flags (e.g. tripod motion) */
    float reprojection_error;                       /* average reprojection error after the solve */
} MovieMultiviewReconstructContext;
This basically extends MovieReconstructContext. The key addition is that it records a set of correspondences (corr_base) specified by the user, so that we can arrange and connect these tracks in the back end (libmv). The data structure of a correspondence is the following:
typedef struct MovieTrackingCorrespondence {
    struct MovieTrackingCorrespondence *next, *prev;  /* ListBase links */
    char name[64];                    /* MAX_NAME */
    MovieTrackingTrack *self_track;   /* one side of the correspondence */
    MovieTrackingTrack *other_track;  /* the corresponding track in the other clip */
    struct MovieClip *self_clip;      /* clip that self_track belongs to */
    struct MovieClip *other_clip;     /* clip that other_track belongs to */
} MovieTrackingCorrespondence;
It records two track pointers and the clips each of them comes from. Since MovieTrackingCorrespondence will usually be stored in a ListBase, we also need to add the *next and *prev fields to it.
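As a minimal sketch (not final operator code), a user-specified correspondence could be appended to such a ListBase as shown below; the helper name add_correspondence and the way the list is passed in are assumptions:

#include "MEM_guardedalloc.h"
#include "BLI_listbase.h"
#include "BLI_string.h"
#include "DNA_movieclip_types.h"
#include "DNA_tracking_types.h"

static void add_correspondence(ListBase *corr_base,
                               MovieClip *self_clip, MovieTrackingTrack *self_track,
                               MovieClip *other_clip, MovieTrackingTrack *other_track)
{
    MovieTrackingCorrespondence *corr =
            MEM_callocN(sizeof(MovieTrackingCorrespondence), "multiview correspondence");

    /* Name the correspondence after the track it starts from. */
    BLI_strncpy(corr->name, self_track->name, sizeof(corr->name));

    corr->self_clip = self_clip;
    corr->self_track = self_track;
    corr->other_clip = other_clip;
    corr->other_track = other_track;

    /* BLI_addtail fills in next/prev, which is why the struct needs them
     * as its first two members. */
    BLI_addtail(corr_base, corr);
}

A list collected this way can then simply be handed to the context when the solve job is created.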
The CLIP_OT_solve_multiview operator will work like this. First we prepare everything that libmv needs and pass it to MovieMultiviewReconstructContext, including the primary tracks, witness tracks, correspondences, tracks_map, keyframes, and so on. Then the options are processed and all the tracks and correspondences are passed to libmv_solve_multiview_reconstruction.
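A rough sketch of the start-job callback under these assumptions: BKE_tracking_multiview_reconstruction_solve is a placeholder name modelled on the existing single-view BKE_tracking_reconstruction_solve, and it is the function that would internally call libmv_solve_multiview_reconstruction:

static void solve_multiview_startjob(void *customdata, short *stop, short *do_update, float *progress)
{
    SolveMultiviewJob *smj = customdata;

    /* smj->context was filled when the job was created: tracks of all clips,
     * correspondences, tracks_map, keyframes and refine flags. */
    BKE_tracking_multiview_reconstruction_solve(smj->context, stop, do_update, progress,
                                                smj->stats_message, sizeof(smj->stats_message));
}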
The reconstruction can be similar to the previous single-view one: we first compute a two-view geometry from the two keyframes, then insert the remaining images from both the primary camera and the witness cameras. After bundle adjustment over all the images, we pass the results back to the Blender side and finally do the cleanup.
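The "pass back and clean up" step would mirror the single-view solve job: a free callback copies the solved cameras and points into each clip's MovieTracking data and then releases the context. In the sketch below the *_multiview_* helpers are assumed counterparts of the existing BKE_tracking_reconstruction_finish and BKE_tracking_reconstruction_context_free:

static void solve_multiview_freejob(void *customdata)
{
    SolveMultiviewJob *smj = customdata;
    int i;

    /* Copy solved cameras and 3D points back into each clip's tracking data
     * (passing the clip index this way is an assumption). */
    for (i = 0; i < smj->clip_num; i++) {
        MovieClip *clip = smj->clips[i];
        BKE_tracking_multiview_reconstruction_finish(smj->context, i, &clip->tracking);
    }

    /* Free the shared reconstruction context and the job data itself. */
    BKE_tracking_multiview_reconstruction_context_free(smj->context);
    MEM_freeN(smj->clips);
    MEM_freeN(smj);
}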
More about MovieMultiviewReconstructContext
MovieMultiviewReconstructContext sits in between Blender and libmv. Since libmv/simple_pipeline will soon be abandoned, this data structure is mainly built for libmv/autotrack. Much of the data here consists of arrays (or pointers to arrays), since we need to cope with more than one clip. struct libmv_ReconstructionN **reconstruction is a new data structure for reconstruction, based on the APIs in libmv/autotrack:
struct libmv_ReconstructionN {
    mv::Reconstruction reconstruction;

    /* Used for per-track average error calculation after reconstruction. */
    mv::Tracks tracks;
    libmv::CameraIntrinsics *intrinsics;

    double error;   /* average reprojection error of this reconstruction */
    bool is_valid;  /* whether the reconstruction finished successfully */
};
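Since each clip gets its own libmv_ReconstructionN, the Blender side can only inspect the results through small C-API accessors. Below is a sketch of collecting an overall reprojection error after solving; libmv_multiviewReprojectionError is an assumed accessor, named by analogy with the existing libmv_reprojectionError(), and clip_num is passed in because the context itself does not store it:

static void multiview_reconstruct_update_error(MovieMultiviewReconstructContext *context,
                                               int clip_num)
{
    int i;
    double error_sum = 0.0;

    for (i = 0; i < clip_num; i++) {
        /* Each entry wraps one mv::Reconstruction; the assumed accessor
         * returns its average reprojection error. */
        error_sum += libmv_multiviewReprojectionError(context->reconstruction[i]);
    }

    context->reprojection_error = (float)(error_sum / clip_num);
}

Averaging over clips is only one possible convention; the per-clip errors could just as well be reported separately.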
Since libmv/autotrack is still not mature, many changes will happen there. Stay tuned.