User:Nazg-gul/GSoC-2011


Google Summer of Code: Camera tracking integration

General info

NOTE: The Google Summer of Code program has finished and the camera tracking project has been merged into trunk. The information here might be outdated; more recent information can be found here.

This is the homepage of my Google Summer of Code project, called "Camera tracking integration into Blender".

Here are some links to pages with more detailed info:

Current workflow state

  • done
      Load image data - video or sequence of images.
  • done
      [Optional] Input known camera parameters (see the first sketch after this list).
  • done
      [Optional] Provide hints to the camera lens undistortion solver (identify straight lines in the image) to help with undistorting the image.
  • to do
      [Optional] Undistort the camera lens using a distortion solver.
  • to do
      [Optional] Create a mask for data you don't want tracked.
  • to do
      [Optional] Adjust color/contrast/etc. so that the feature points to track have increased contrast and are thus easier for the tracker to find.
  • done
      [Optional] Select specific features you want tracked (either by mask or by placement of tracking points). (There is feature selection, but no masks yet.)
  • done
      If you specified specific features to the tracker, you may also want to specify how the points should be tracked (e.g. a bounding box for the tracker to look for the point in).
  • to do
      Specify whether the object being tracked is «rigid» or can deform, and provide knowledge of the types of camera/object motion (object translation and rotation; camera translation and rotation).
  • done
      Send image data, specified track data, camera data, and mask data to the tracker.
  • done
      The tracker library can automatically identify feature points to track and/or use user-specified features, ignoring areas that are masked (no mask support yet).
  • done
      Do a «solve» of the camera motion based on the track points, including statistics on how 'good' the tracks are in contributing to the solve.
  • done
      Return the track point data and camera solve to the software, including the statistical analysis of the track points.
  • done
      Based on the statistical analysis, pick error thresholds for which track points to automatically delete (see the track cleanup sketch after this list).
  • done
      [Optional] Manually delete any track points.
  • to do
      [Optional] Create a mask to 'hide' unwanted track points.
  • to do
      [Optional] Mask can be assigned to follow a set of track points to automatically mask a moving object from the tracker/solver.
  • to do
      [Optional] Mask can be manually keyframed so that it moves and deforms over time to mask a moving object from the tracker/solver.
  • to do
      [Optional] Provide a manually created camera curve to 'hint' the tracker/solver what you expect the actual curve to look like.
  • done
      Retrack if additional tracker points are now needed.
  • done
      Pass the tracker points, camera hints, etc. to the camera solver.
  • done
      Return the solved camera track and the 3d location of the track points to the software (see the reconstruction sketch after this list).
  • done
      Visualize the camera motion and track points in the 3d view.
  • done
      Define a ground plane reference to the 3d track points and camera.
  • done
      Define the scene origin relative to the 3d track points and camera.
  • done
      Define the world scale and orientation relative to the 3d track points and camera.
  • done
      Add a test object into the 3d view and see if it stays in the proper location.
  • to do
      [Optional] Stabilize the camera view based on the solved camera track.
  • to do
      [Optional] Smooth the camera track curve.
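
The sketches below illustrate how some of the steps above can be driven from Python. They are minimal sketches, not part of the project itself: they assume the movie clip / tracking Python API as it landed in trunk after this project (bpy.data.movieclips, clip.tracking.camera, track.average_error, ...), and all file paths and numeric values are placeholder examples.

First sketch - entering known camera parameters and lens distortion coefficients (the "input known camera parameters" and undistortion steps):

    import bpy

    # Load the footage as a movie clip (the path is only a placeholder).
    clip = bpy.data.movieclips.load("//footage/shot_01.mov")

    # Enter known camera intrinsics so the solver does not have to guess them.
    camera = clip.tracking.camera
    camera.sensor_width = 23.6    # physical sensor width, in millimeters
    camera.focal_length = 35.0    # focal length, in millimeters

    # Polynomial lens distortion coefficients, if they are already known;
    # otherwise they can be refined during the solve.
    camera.k1 = -0.042
    camera.k2 = 0.0
    camera.k3 = 0.0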
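
Track cleanup sketch - a minimal sketch against the same post-merge API: after a solve each track exposes an average reprojection error, so badly tracked markers can be selected for review and deletion (the 2.0 px threshold is an arbitrary example; the built-in Clean Tracks operator offers similar automatic cleanup):

    import bpy

    clip = bpy.data.movieclips[0]      # assumes a clip with a finished solve
    tracking = clip.tracking

    ERROR_LIMIT = 2.0                  # reprojection error threshold, in pixels

    if tracking.reconstruction.is_valid:
        print("Average solve error: %.3f px" % tracking.reconstruction.average_error)

    # Select the tracks whose per-track error exceeds the limit so they can be
    # reviewed and deleted in the Movie Clip Editor.
    for track in tracking.tracks:
        track.select = track.average_error > ERROR_LIMIT
        if track.select:
            print("Bad track: %s (error %.3f px)" % (track.name, track.average_error))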
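
Reconstruction sketch - again a minimal sketch against the post-merge API: once the solve succeeds, every track that contributed to it carries a reconstructed 3d position («bundle»), which is what gets visualized in the 3d view:

    import bpy

    clip = bpy.data.movieclips[0]      # assumes a clip with a finished solve
    tracking = clip.tracking

    if tracking.reconstruction.is_valid:
        # Print the reconstructed 3d position of every solved track.
        for track in tracking.tracks:
            if track.has_bundle:
                x, y, z = track.bundle
                print("%s -> (%.3f, %.3f, %.3f)" % (track.name, x, y, z))
    else:
        print("No valid reconstruction - run the camera solver first.")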