User talk:Nazg-gul/GSoC-2011

Hello,

As I went through your developments for camera tracking, and recently tested them to track a camera in a video scene I produced, I discovered your model and methods for this process. Before I come to my remarks, I should explain that in early 1992 I was in LA, working at an animation company named Modern Cartoons; we were doing Saturday-morning cartoons for kids. Our workflow was 'mostly' live, meaning that we had actors playing a scene that was motion-captured onto the digital characters for the animation. The director wanted long camera pans, so the artists were producing series of background plates. But an uninterrupted camera pan required the background plate to be continuously updated, so they asked me, the developer, to write the code that would track the camera angle and stitch the plates together as they were needed in the field of view. That was my first attempt at tracking 2D within a 3D space. At that level it was, retrospectively, easy.

Afterwards they asked for more sophisticated movements, which led me to write a stitching engine for linear series of images, not video. During the development of the engine I had to take into account the camera specs, optics, distortion and so on. That was quite complex, because I did not know which video camera they would use. While looking for solutions I got in touch with Herr Helmut Dersch of the University of Jena. He had produced Panorama Tools, a great set of software tools for all aspects of the optical processing of images, and the stitcher on which later frontends such as PTGui were built. All the functions of his libraries were geared towards computing and correcting optical distortions (the radial model, as I remember it, is sketched at the end of this note). I still have all the libraries, and I sometimes use them on a project.

The Panorama Tools stitcher was based on markers defined manually; given the programming knowledge and the languages available at that time, that method was not very efficient. Working from his experience, I improved some of the capabilities to mark a location within a 2D space and relate it to a 3D projection. At that time new mathematical models were emerging, as well as theories of imaging and of vision displacement in 3D space. As I was not happy with placing markers in images manually, then going to the next image and starting again, I tried to automate the identification of hot spots by doing boundary extraction and comparison. In some cases it helped, but not much.

Then, in late 1998, with my code on SourceForge, a French team took over the concept and did a great job of using Fourier transforms to automate the marking of hot spots (a sketch of that idea also follows at the end of this note). The company is www.kolor.com, and I would suggest you have a look at their work, which today is, in my experience, one of the best technologies for identifying hot spots in images or video fields. Probably this is already on your roadmap, and I am looking forward to seeing it arrive.

At this time the camera tracking is a bit rough, but I am sure you will keep evolving it, and that is great. I love Blender and the work that has been done recently. If I can be of help, I would be very happy to bring my 'old' experiences to the table.

Regards,
Yves Bodson
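
P.S. For anyone who has not used the Panorama Tools libraries: as I remember it, their radial lens correction is the polynomial model r_src = (a*r^3 + b*r^2 + c*r + d) * r, with d = 1 - (a + b + c) so that the normalised radius r = 1 maps onto itself. The NumPy sketch below is my own illustration of that model (grayscale, nearest-neighbour resampling, for brevity), not code from the libraries; treat the details as assumptions.

<syntaxhighlight lang="python">
import numpy as np

def undistort_radius(r, a, b, c):
    """Polynomial radial model of the Panorama Tools family:
    maps a destination radius r (normalised to the half-diagonal)
    to the source radius it should be sampled from."""
    d = 1.0 - (a + b + c)
    return (a * r**3 + b * r**2 + c * r + d) * r

def undistort(image, a, b, c):
    """Resample a grayscale image through the radial model
    (nearest-neighbour inverse mapping, kept short on purpose)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cy, cx)                  # half-diagonal == radius 1
    yy, xx = np.indices((h, w), dtype=float)
    r = np.hypot(yy - cy, xx - cx) / norm
    # Per-pixel scale factor from destination radius to source radius.
    scale = np.where(r > 0,
                     undistort_radius(r, a, b, c) / np.maximum(r, 1e-12),
                     1.0)
    src_y = np.clip(np.rint(cy + (yy - cy) * scale), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(cx + (xx - cx) * scale), 0, w - 1).astype(int)
    return image[src_y, src_x]
</syntaxhighlight>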
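
P.P.S. The Fourier-transform trick I mention above is usually implemented as phase correlation: the normalised cross-power spectrum of two translated images inverse-transforms to a sharp peak at the translation offset. The sketch below is only an illustration of that general idea in NumPy, not Kolor's code; the function name and the toy self-check are my own.

<syntaxhighlight lang="python">
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) shift mapping image a onto image b,
    i.e. b ~= np.roll(a, (dy, dx), axis=(0, 1)), via phase correlation."""
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    # Normalised cross-power spectrum: for a pure translation its
    # inverse FFT is (ideally) a delta function at the offset.
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12       # guard against division by zero
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets beyond half the image size wrap around to negative shifts.
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Quick self-check: shift a random image and recover the offset.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(phase_correlate(img, np.roll(img, (5, -9), axis=(0, 1))))  # (5, -9)
</syntaxhighlight>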