Realtime motion capture input and motion recording for Blender - test case using the Kinect
Name
- Shuvro Sarker
Email/IRC/WWW
- My email address is shuvro05@gmail.com. I am frequently on freenode in the channel #blendercoders with the nick shuvro.
Synopsis
- Animating 3D objects is undoubtedly one of the most interesting parts of 3D computer graphics. As a technique for producing realistic animation, motion capture has been popular for a long time. The term 'motion capture' describes the process of recording movement and translating that movement onto a digital model. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics. A realtime motion capture input system for Blender would therefore be a great addition, because with such a system artists and animators could create high-quality animation with less effort and time. Moreover, a generic realtime motion capture input system could also be useful for the Blender Game Engine. The main problem so far has been that almost all motion capture devices are very expensive, and most of them demand a complicated and costly environment setup to capture motion data.
- Luckily, at the end of 2010 the Kinect was released, equipped with an infrared sensor and an RGB camera. Among its many capabilities, this device can capture motion data, and it is much cheaper (under $200) than the other, vastly more expensive motion capture devices. Many artists and animators can therefore easily afford it to capture their motion data. Here are two videos of the Kinect experience on the Windows and Linux platforms.
- That is why I thought about capturing live body motion data from the Kinect and converting it into keyframes inside Blender, so that users can easily tune the result if necessary and make great animations with much flexibility while spending less time.
Benefits to Blender
- I think this will be a great addition for the many Blender users who want to produce animation quickly. With a very convenient workflow, they can capture motion data right inside Blender, apply it, and tune it. That will improve animation quality and reduce production time considerably, and most importantly the hardware setup cost is very low. Moreover, if users cannot manage to buy the device, it will be perfectly possible to use data captured with someone else's Kinect. The realtime input system could also be useful for adding Kinect gameplay support to the Blender Game Engine.
Deliverables
- A user detection system built inside Blender. The primary focus of the detection system is to detect biped users/characters; later it can be extended to detect more types of characters.
- A streamer that will stream motion data of the performer directly from the Kinect device and transfer it to a dummy skeleton to show the movement at runtime.
- A recording/capturing system that can be controlled inside Blender to start, pause, or stop recording motion data. After the recording process the data will be converted into a standard format (e.g. BVH).
- A training module that will be used to remove footskate from the captured motion data.
- A converter that will convert the captured motion data from a standard format (e.g. BVH) into keyframe data.
Project Details
- The first part of this project is to detect the user/performer and his body joints. The body joints depend on the character type; for now I am interested in biped characters, so the project code/plugin will try to find the joints of a biped user/performer. This will be done using the depth data from the Kinect. Currently there are several open source drivers for the Kinect device. Among them, OpenNI, developed by PrimeSense, is the most widely recommended. Combining it with two other modules, NiTE and SensorKinect, the body joint information (joint names and global joint positions in 3D space) can be extracted. This information is necessary to construct a tree of joints (the skeleton) where the pelvis is the root.
- After detecting the joints, a simple bone structure needs to be constructed from the joint data. This structure is not graphical; rather, it is a tree-like data structure that represents the positional relations among the joints.
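- To make this concrete, here is a minimal sketch in plain Python (all names such as Joint and build_biped_skeleton are my own placeholders) of such a biped joint tree rooted at the pelvis, assuming the joint names and global positions have already been obtained from the OpenNI/NiTE layer:

```python
# Minimal sketch of the joint tree; joint names are placeholders, the actual
# set of tracked joints comes from the OpenNI/NiTE driver.

class Joint:
    def __init__(self, name, parent=None):
        self.name = name                    # e.g. "pelvis", "left_knee"
        self.parent = parent                # parent Joint, None for the root
        self.children = []
        self.world_pos = (0.0, 0.0, 0.0)    # global position reported by the driver
        if parent is not None:
            parent.children.append(self)

def build_biped_skeleton():
    """Build a simple biped hierarchy rooted at the pelvis."""
    pelvis = Joint("pelvis")
    torso = Joint("torso", pelvis)
    head = Joint("head", torso)
    joints = {"pelvis": pelvis, "torso": torso, "head": head}
    for side in ("left", "right"):
        shoulder = Joint(side + "_shoulder", torso)
        elbow = Joint(side + "_elbow", shoulder)
        hand = Joint(side + "_hand", elbow)
        hip = Joint(side + "_hip", pelvis)
        knee = Joint(side + "_knee", hip)
        foot = Joint(side + "_foot", knee)
        for j in (shoulder, elbow, hand, hip, knee, foot):
            joints[j.name] = j
    return joints

def update_positions(joints, tracked_positions):
    """tracked_positions: dict of joint name -> (x, y, z) from the Kinect driver."""
    for name, pos in tracked_positions.items():
        if name in joints:
            joints[name].world_pos = pos
```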
- Next, the project will begin streaming motion data of the performer from the Kinect device and transferring it to the skeleton. The global joint position data needs to be converted into the local space of the skeleton, and the transformations of the individual joints set, in order to determine a particular timed pose in the animation sequence. At this stage, a simple skeleton will be drawn from the current content of the tree data structure in order to show the directly streamed motion data.
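- As a rough illustration of the global-to-local conversion (my own simplification; the real implementation will use the skeleton's actual rest transforms and orientations), a joint's world position can be expressed in its parent's frame by removing the parent's world translation and rotation:

```python
# Sketch: express a joint's world position in its parent's local frame.
# parent_rotation is a 3x3 rotation matrix (list of rows), parent_translation
# a 3-vector; both are assumed known for the parent joint. Purely illustrative.

def world_to_parent_local(world_pos, parent_rotation, parent_translation):
    # Remove the parent's world translation.
    d = [world_pos[i] - parent_translation[i] for i in range(3)]
    # Apply the inverse of the parent's rotation (transpose of a rotation matrix).
    return [
        sum(parent_rotation[row][col] * d[row] for row in range(3))
        for col in range(3)
    ]
```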
- There will be a capture/record button for artists, with which they can convert the motion into keyframe data whenever they want to capture/record it.
- Here an important issue regarding motion capture data arises. Real human motion usually contains footplants, which are periods of time when a foot, or part of it, remains in a fixed position on the ground. Processed motion capture data may fail to accurately reproduce these footplants, introducing an artifact known as footskate. Since footplants are often the primary connection between a character and the surrounding environment, even small amounts of footskate can destroy the realism of a motion. A number of research efforts have addressed cleaning up footskate. Here, we will follow the procedure from the paper Knowing When to Put Your Foot Down in order to avoid footskate. This method uses a machine-learning-based approach to find in which frames of the captured data the feet should be planted. Once these positions are determined, they can be enforced using an inverse kinematics solver, which will adjust the joint angles in the legs, the root position, and the lengths of bones to satisfy the footplant constraints at each frame.
- The above-mentioned algorithm uses a k-nearest neighbor classifier to train for the classification problem of deciding whether a frame of motion contains a footplant or not. I will instead use a neural network trained with the backpropagation algorithm, which will possibly be more robust.
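- To illustrate what the per-frame classification step looks like (this is my own simplified sketch, not the paper's exact feature set, and the trained backpropagation network would later replace the k-NN part), each frame could be classified as planted or not planted from simple foot features:

```python
# Illustrative footplant detector: classify each frame as "planted" or not from
# simple features (foot height and speed). Features and the k-NN classifier are
# placeholders; a trained backpropagation network would take the classifier's place.
import math

def frame_features(foot_positions, i, fps=30.0):
    """Features for frame i of one foot: (height, speed). Very simplified."""
    x, y, z = foot_positions[i]
    if i > 0:
        speed = math.dist(foot_positions[i], foot_positions[i - 1]) * fps
    else:
        speed = 0.0
    return (z, speed)   # assuming z is the vertical axis

def knn_is_planted(features, training_set, k=5):
    """training_set: list of ((height, speed), is_planted) pairs from labeled motions."""
    dists = sorted((math.dist(features, f), planted) for f, planted in training_set)
    votes = sum(1 for _, planted in dists[:k] if planted)
    return votes > k // 2
```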
- After applying all of these steps, the resulting data will be converted into a standard format as mentioned above.
- If time allows, or after the GSoC timeline, I want to deal with one more thing. Captured motion data generally needs to be filtered and edited before it can be used on a model. Filtering is necessary to eliminate any jitter introduced by the motion capture acquisition system. It is also desirable to represent the filtered motion data with a smaller number of samples (keyframes) than were acquired. Several approaches to this problem have been proposed in the literature; among them, this paper presents two. One approach achieves keyframe reduction and noise removal simultaneously by fitting a curve to the motion information using dynamic programming. The other applies curve simplification algorithms to the motion capture data until a predefined threshold on the number of keyframes is reached. I think it will be good to follow the first approach; I will decide with the advice of my mentor.
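- For illustration, a sketch of the simpler curve-simplification route on a single animation channel follows (a Ramer-Douglas-Peucker style reduction of my own; the dynamic-programming curve fit of the first approach is not shown here):

```python
# Illustrative keyframe reduction for one animation channel using a
# Ramer-Douglas-Peucker style simplification. samples must be sorted by frame.

def simplify_channel(samples, tolerance):
    """samples: list of (frame, value); returns a reduced list of keyframes."""
    if len(samples) <= 2:
        return list(samples)
    (f0, v0), (f1, v1) = samples[0], samples[-1]
    # Find the sample farthest from the straight line between the endpoints.
    max_err, max_i = 0.0, 0
    for i in range(1, len(samples) - 1):
        f, v = samples[i]
        t = (f - f0) / (f1 - f0)
        err = abs(v - (v0 + t * (v1 - v0)))
        if err > max_err:
            max_err, max_i = err, i
    if max_err <= tolerance:
        return [samples[0], samples[-1]]
    # Keep the worst sample and recurse on both halves.
    left = simplify_channel(samples[: max_i + 1], tolerance)
    right = simplify_channel(samples[max_i:], tolerance)
    return left[:-1] + right
```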
- And finally the processed motion data will be converted into keyframe data.
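- A minimal sketch of that final step through the Blender Python API follows; the armature name "MocapRig" and the per-frame data layout are placeholders of mine:

```python
# Sketch: insert location/rotation keyframes on an armature's pose bones from
# captured per-frame data. "MocapRig" and the data layout are assumptions.
import bpy

def insert_keyframes(frames, armature_name="MocapRig"):
    """frames: list of (frame_number, {bone_name: (location, rotation_quaternion)})."""
    arm = bpy.data.objects[armature_name]
    for frame_number, pose in frames:
        for bone_name, (loc, quat) in pose.items():
            bone = arm.pose.bones[bone_name]
            bone.location = loc
            bone.rotation_mode = 'QUATERNION'
            bone.rotation_quaternion = quat
            bone.keyframe_insert(data_path="location", frame=frame_number)
            bone.keyframe_insert(data_path="rotation_quaternion", frame=frame_number)
```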
Tentative Project Schedule
- I will start implementing the above proposal as soon as the "official" coding period starts on May 23. I am already acquainted with the Blender Python API, as I worked with it in a GSoC project under YafaRay, and I have some idea about the main code structure of Blender. So I hope that before the start of the official coding period I will be well acquainted with the Blender codebase, and I will also have learned how to work with the libraries mentioned above. The tentative schedule is just an assumption about how the Blender Foundation would like me to allocate my time; it can be changed according to the requirements of my mentor. The phases are as follows:
- May 23 – June 15: Integration of OpenNI/NiTE with Blender, and experimenting with the data and actions sent by the libraries inside Blender.
- June 15 – July 10: Construction of the tree of joints (the skeleton). The process of directly streaming motion data of the performer from the Kinect device into the Blender 3D view will also be implemented.
- July 10 – July 15: Test the above functionality and submit the midterm evaluation.
- July 15 – July 31: Implementation of the capture/record system and conversion of the captured motion data into keyframe data.
- August 1 – August 15: Implementation of the training algorithm for avoiding footskate.
- August 15 – August 22: Tuning the whole system, completing any of the above steps that were not finished in time, and submitting the final evaluation.
Bio
- I am a student of computer science and engineering at Bangladesh University of Engineering and Technology. I started programming with C, then C++, and so far I have also programmed in Python, ActionScript, Java, PHP, JavaScript, and MySQL. My C and Java projects were mainly game oriented (e.g. board games in C/C++, a chess game with networking support in Java, and social games in ActionScript).
- I participated successfully in Google Summer of Code 2010 with YafaRay, on the project Blender 2.5 Integration with YafaRay. Recently I devised an algorithm for determining the optimal architecture of a neural network, which I am still working on. I have also written a simple ray tracer from scratch and have worked with OpenGL.
These are some small patches of mine for Blender -
Automatic seam creation for UV unwrapping
Name
- Shuvro Sarker
Email/IRC/WWW
- My email address is shuvro05@gmail.com. I am frequently on freenode in the channel #blendercoders with the nick shuvro.
Synopsis
- When creating or using a texture map, a 3D model needs UVs. These are 2D coordinates that tell 3D applications how to apply a texture to the model. One way of creating them is UV unwrapping, which roughly means unfolding the triangle mesh at the seams and automatically laying out the triangles on a flat page. Until now, the process of creating those UVs has been a time-consuming challenge for artists.
- Currently Blender uses ABF and LSCM (least squares conformal maps), which are good ways of doing that. Here I am proposing a better, semi-automatic method with some interactive controls, which also works well for complex models. This proposal is mainly based on the paper "What you seam is what you get".
Benefits to Blender
- This project will make the UV map creation process faster and easier. Users will rarely have to go through the time-consuming process of defining new seam cuts; instead, they will mostly just delete a few extra ones. This will save a lot of production time on a project. Users will not have to depend only on the automated seams, since suitable tuning tools will also be implemented to help them get exactly what they want.
Deliverables
- An automatic method that creates reasonable initial seams without user intervention.
- Several tools to edit the created seams and to set constraints on them. The tools will be dynamic: during editing, when the user sets a constraint or tunes a parameter, he can see the result on the go.
- A method to map the two halves of symmetric objects to the same texels in UV space.
Project Details
- The design approach of this method is based on two principles -
- It is easier to delete an extra seam than to create a new one.
- Dynamic feedback on each individual user action is important for making the creation process easier.
- Initial seam creation:
- To do this, a spectral segmentation algorithm will be used. It will provide the user with a reasonable initial set of seams without any user interaction.
- This algorithm uses higher-order spectral cuts, which are necessary to detect good seam candidates. The auto-generated set of seams will contain all the axes of symmetry of the mesh. Based on the idea that it is easier to remove a seam than to create a new one, the system creates more seams than necessary and then lets the user remove the undesired ones. Notably, most of the constructed seams will be natural and will resemble what an artist would have made.
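- As a heavily simplified illustration of the spectral idea (my own sketch, not the paper's implementation), one can build a uniform graph Laplacian over the mesh vertices and take candidate seam edges where the low-frequency eigenvectors change sign:

```python
# Simplified illustration of spectral cuts: build a uniform graph Laplacian over
# mesh vertices and treat sign changes (nodal lines) of its low eigenvectors as
# candidate seam edges. Uses a dense solver, so only suitable for small meshes.
import numpy as np

def laplacian(num_verts, edges):
    """edges: iterable of (i, j) vertex index pairs."""
    L = np.zeros((num_verts, num_verts))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def candidate_seam_edges(num_verts, edges, num_modes=8):
    """Return edges whose endpoints have opposite sign in some low eigenvector."""
    L = laplacian(num_verts, edges)
    _, vecs = np.linalg.eigh(L)           # eigenvectors sorted by eigenvalue
    seams = set()
    for k in range(1, num_modes + 1):     # skip the constant eigenvector
        mode = vecs[:, k]
        for i, j in edges:
            if mode[i] * mode[j] < 0.0:   # a nodal line crosses this edge
                seams.add((i, j))
    return seams
```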
- Interactive tools:
- To improve the initial automated mapping to meet the user's expectations, the following tools will be provided -
- Seam editing: Since the initial seam creation will generate more seams than necessary, the user can delete the extra seams with this tool. There will also be an option for creating new seams. Blender already has options for creating/deleting seams, and some small enhancements can make them more effective (e.g. removing a seam between two points with one click); a minimal sketch of how seam edges can be marked through the Blender Python API is given at the end of this tools section.
- Sew/Unsew: The user can choose to keep only parts of a seam by sewing/unsewing it. This tool can be used both to group related parts and to untangle parts.
- Boundary angle constraint: With this tool the user can set angle constraints on the boundary of the mesh. This may act as an alternative way of untangling an overlap.
- Straightening angle constraint: When the user selects a sequence of edges with the tool, all the selected edges will be aligned. Seams that cross at a vertex can also be constrained to be orthogonal.
- After each individual action with any of these tools, the parameterization is updated and visual feedback is given to the user.
- On top of that, if time allows, I want to implement one of the following tools, to be decided with my mentor -
- A painting tool for controlling seams, which may consist of the following three options:
- Protect seam: the user can quickly paint a rough (not exact) area to indicate where he does not want seams to be created (e.g. the face).
- Suggest seam: the user can paint an area with this tool to indicate where he wants seams to be created.
- Erase: the user can erase the previous two kinds of painting with this.
- A tool for creating seams based on ambient occlusion.
- An option such as 'check seams' can be provided, which will highlight the cutting edges of the created seams.
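- As referenced above, here is a minimal sketch of how a set of seam edges (whatever their source: the spectral generator or the editing tools) could be written back to a Blender mesh by marking edges; the function and object names are placeholders of mine:

```python
# Sketch: mark a set of edges as UV seams on a mesh through the Blender Python API.
# mesh_object_name and seam_edge_indices are placeholders; run in Object Mode.
import bpy

def mark_seams(mesh_object_name, seam_edge_indices, clear_existing=False):
    mesh = bpy.data.objects[mesh_object_name].data
    if clear_existing:
        for edge in mesh.edges:
            edge.use_seam = False
    for index in seam_edge_indices:
        mesh.edges[index].use_seam = True
```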
- Symmetry detection:
- The set of spectral seams will contain all the axes of symmetry of the mesh. Therefore it is possible to detect symmetries and take them into account to reduce texture storage requirements.
Implementation Idea
- Dr. Lévy, the author of the previously mentioned paper, has written a plugin that demonstrates his idea for a software package named Graphite. However, in his plugin he relies heavily on Graphite functionality as well as on an external library for numerical operations. I have listed the main Graphite files included in his plugin on this page. The implementation of this idea in Blender will depend on Blender's own codebase.
- So I fear his plugin can hardly be used as a library for this project, but it obviously gives me a good idea of the implementation.
- One of the main aspects of this project is to integrate an eigen solver library, or more generally a library for numerical methods. For his implementation, Dr. Lévy used the ARPACK library, which is capable of solving large-scale eigenvalue problems.
- When I searched for such libraries, I found GSL, Eigen, and CodeCogs for the numerical operations. I hope the mentor of this project will help me choose the one best suited for Blender.
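- Just to illustrate the kind of call the seam generation needs (this is an example only, not the final library choice), SciPy's sparse eigensolver wraps the same ARPACK routines and can return the smallest eigenpairs of a sparse Laplacian:

```python
# Example only: smallest eigenpairs of a sparse symmetric Laplacian via SciPy's
# ARPACK wrapper. The library actually integrated into Blender (GSL, Eigen, ...)
# is still to be decided; this just shows the shape of the required call.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smallest_modes(rows, cols, vals, n, k=8):
    """Build an n x n sparse matrix from COO triplets and return its k lowest modes."""
    L = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    # which='SM' asks ARPACK for the smallest-magnitude eigenvalues (k must be < n).
    eigenvalues, eigenvectors = spla.eigsh(L, k=k, which='SM')
    return eigenvalues, eigenvectors
```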
Tentative Project Schedule
- I will start implementing the above proposal as soon as the "official" coding period starts on May 23. By then I will be well acquainted with the Blender codebase, with the techniques described in the proposal in detail, and with the other necessary material. The tentative implementation phases are as follows -
- May 23 – July 10: Implementation of the automated seam generation module.
- July 10 – July 15: Testing of the implemented portion, bug fixing, and the midterm evaluation.
- July 16 – August 15: Implementation of the tools described above.
- August 15 – August 22: Testing of the whole system, code scrubbing if needed, necessary code documentation, and the final submission.
Bio
- I am a student of computer science and engineering at Bangladesh University of Engineering and Technology. I started programming with C, then C++, and so far I have also programmed in Python, ActionScript, Java, PHP, JavaScript, and MySQL. My C and Java projects were mainly game oriented (e.g. board games in C/C++, a chess game with networking support in Java, and social games in ActionScript).
- I participated successfully in Google Summer of Code 2010 with YafaRay, on the project Blender 2.5 Integration with YafaRay. Recently I devised an algorithm for determining the optimal architecture of a neural network, which I am still working on. I have also written a simple ray tracer from scratch and have worked with OpenGL.
These are some small patches of mine for Blender -