Multitouch Framework Weekly Report
Week 1
This week I further researched whether libSDL 1.3 would be an appropriate library for catching touch events. After searching through various documentation and writing a basic test program, I found that it does not currently support Windows touch. It is documented as working with Android and iOS, as well as Mac and Linux Wacom screens. The documentation listed contact information for the developer, whom I have tried to contact to offer help with Windows touch testing, but I have received no reply so far. For the time being I will resort to Windows' native touch support.
As my customary tools are cross-platform, I considered moving my development environment to Linux. Ubuntu's Wubi had jumping cursor issues likely caused by touch which rendered the system unusable. I have not been able to find similar cases documented online, leading me to believe my problems might be the result of using Wubi. I will return to this problem when I want to work with cross-platform input catching.
Next week I will add support for Windows touch events and flesh out the design document. I will also further investigate prior examples of touch in Blender, such as the work conducted at TZI Universität Bremen and BlenderTUIO.
Week 2
Pulled from emails with my mentor Mike Erwin
Monday & Tuesday
So the first thing I did today was track down which commit was breaking MinGW in SVN (most of yesterday went to this as well, due to my inexperience with tracking these down and more than a few mistakes in how I went about it). In the process I've been learning quite a bit about efficient rebuilding, and about build environments in general.
While poking around to see what I needed to add, I somehow ventured into winuser.h and added defines there for touch events. I later realized that file was not part of Blender, and spent longer than I would like to admit tracking the change back down.
I played with throwing touch in at different locations; the current issue I'm running into is that I need to inform Windows to send Touch, not Gesture, messages. To do this I need to send it a window handle, but I'm not entirely sure of the best way to go about integrating this. I'm pretty sure I'll be able to figure it out in the morning, though; it's 2:39 am here, and I'm pretty sure that's having some effect on my perception.
Wednesday
I now have recognition for Windows Touch messages. A few things were causing me trouble here:
1. Qt Creator pulls from its own windows.h for code browsing even though it compiles against MinGW's windows.h; once I figured that out, I could reference MinGW's copy instead.
2. I had to bump the value of _WIN32_WINNT to the Windows 7 value for windows.h and winuser.h to load correctly.
3. With the above done, registering the window for touch input was no problem.
I added a CMake option, WITH_INPUT_TOUCH, defaulted to off, and wrapped all my code in #ifdefs so that bumping _WIN32_WINNT to Windows 7 cannot affect the other projects.
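For reference, here is a minimal sketch of the Windows side of this (not my actual GHOST code; the function names and the 16-point cap are placeholders, and it assumes _WIN32_WINNT has already been bumped to 0x0601):

    #ifdef WITH_INPUT_TOUCH
    #  include <windows.h>

    /* Ask Windows to send WM_TOUCH messages (rather than gestures) to this window. */
    static void registerForTouch(HWND hwnd)
    {
        RegisterTouchWindow(hwnd, 0);
    }

    /* Called from the window procedure when a WM_TOUCH message arrives. */
    static void handleTouch(WPARAM wParam, LPARAM lParam)
    {
        UINT count = LOWORD(wParam);
        TOUCHINPUT inputs[16]; /* arbitrary cap for this sketch */
        if (count > 16) count = 16;

        if (GetTouchInputInfo((HTOUCHINPUT)lParam, count, inputs, sizeof(TOUCHINPUT))) {
            for (UINT i = 0; i < count; i++) {
                /* inputs[i].x and inputs[i].y arrive in hundredths of a pixel. */
                if (inputs[i].dwFlags & TOUCHEVENTF_DOWN) { /* new touch point */ }
                else if (inputs[i].dwFlags & TOUCHEVENTF_MOVE) { /* point moved */ }
                else if (inputs[i].dwFlags & TOUCHEVENTF_UP) { /* point lifted */ }
            }
            CloseTouchInputHandle((HTOUCHINPUT)lParam);
        }
    }
    #endif /* WITH_INPUT_TOUCH */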
The first thing I did this morning was bring to Campbell's attention a CMake flag(?) that was documented as outdated in _SystemWin32.
I played around with it a bit: the first touch point is treated as a mouse cursor, a first touch and move as a left click, and a first touch and hold as a right click. I will probably have to play with the cursor code a bit to change this, or see if it is a setting I can specify when registering the touch window.
Tonight I'm going to review Proton in preparation for working on a rewrite of the design document tomorrow. I also need to respond to a couple comments in the BA thread that I didn't want to respond to until reviewing Proton.
Thursday
I spent a good part of the day laboriously rereading the Proton CHI paper for detail, but did not get much of the design document fleshed out. I did get around to responding to community questions (I had put that off much longer than I wanted to).
Lots of questions are on my mind at the moment but with very little research behind any of them, so expect several tomorrow. I believe I have at least a day of research into how things are intentionally separated in Blender before I can start work on the meat of this project. I will probably research Boost Regex tomorrow in addition to working on the design document. I'd like to have something of value to commit tomorrow, but at this time I am not sure of a direction to go in; a solid, documented design idea seems to be a precondition for everything else aside from expanding input capturing to more systems, which I believe is a bad use of time right now.
Saturday
I was able to spend a good bit of time researching Blender's architecture, but later realized that touch was not working as I expected. I then started making wrong connections and put myself on a path that I only now realize was erroneous.
When WM_TOUCH was not defined in MinGW but WM_TOUCHDOWN and similar symbols were, I assumed this was a mild quirk of MinGW, or possibly a more specific option provided in VC's winuser.h, and continued using the Windows documentation. I later realized the functions given in the documentation did not work. As time went on I started to assume that the defines I saw might have more to do with pen input, and that touch might not be incorporated into MinGW at all. This was backed up when I searched for MinGW and the related header information from the Windows documentation and found that some of it had only recently been committed to MinGW64. Unfortunately, I was not inclined to explore winuser.h, since I knew Qt was pulling up its own local version that came with its install of MinGW.
On that note, I began today looking for any final solutions before jumping to VC. When I decided to make the jump, SVN was down, eating away more time. There seemed to be bugs in the code when I went to build, because it took a few shots to get a working executable, compiling from scratch at least 4-5 times. After that, I tried using Visual Studio for some debugging and found the documentation to be accurate when using VC's include libraries; however, VS is an unnavigable mess and slow to boot compared with Qt Creator. I spent even more time trying to figure out how to export the project to Qt, which led to one positive: I removed Qt's install of MinGW, so now I correctly pull up my own MinGW install's include directory. I found some promising leads but VS persisted in being finicky. I happened to Google "mingw touchdown" and got no useful results. For whatever reason, I decided to extend my search to "mingw touchdown touchup" and found an important piece for my understanding.
In 2009, WM_TOUCHDOWN, the other defines, and some differently named functions were committed to the winuser.h included with MinGW. I extended my search of winuser.h, looking for the 0x0601 _WIN32_WINNT tag, and found all the oddly named functions right there in MinGW. Here's what I have pieced together: committed in 2009, right as Windows 7 launched, this must have been an initial implementation of the API which I can no longer find well documented. Sometime after that, I'm guessing, development focus switched to MinGW64, which has a more up-to-date winuser.h and at least some of the functions I was missing from the Windows documentation.
So again, I have a better understanding and setup of my build environment and better knowledge to go forward with, but very little to show for it for the time being. :/
Next Week
I had expected to have a functioning design document at this point, before I was thrown off again by Windows touch. Not having one will only hurt my project in the long run, since it limits my ability to receive input from the community. For this reason I will attempt to have a working draft ready on Sunday.
One issue I hope to resolve is how varying gesture libraries should be handled. Unlike button input, gestures must be interpreted before being acted upon. A centralized location for interpreting gestures could lead to excessive conflicts and limit what the user can do, while a decentralized one would lead to spaghetti code. In writing this, though, I'm starting to see that the answer is fairly clear: the subwindow should be part of the composed regular expression. This would have the nice added benefit of allowing interactions between elements in different subwindows, such as touch and drop.
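To illustrate the idea, here is a rough sketch (the symbol scheme is invented for illustration and only loosely follows Proton; it is not actual Blender code):

    #include <iostream>
    #include <regex>
    #include <string>

    int main()
    {
        /* D/M/U = down/move/up, "1" = first touch point, "v3d" = 3D view subwindow.
         * Because every symbol carries the subwindow, the expression itself
         * encodes the context the gesture must happen in. */
        const std::regex rotate_gesture("D1v3d(M1v3d)+U1v3d");

        /* Stream accumulated while the user drags one finger inside the 3D view. */
        const std::string stream = "D1v3dM1v3dM1v3dU1v3d";

        std::cout << (std::regex_match(stream, rotate_gesture) ? "rotate" : "no match")
                  << std::endl;
        return 0;
    }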
Week 3
Wednesday
I have touch generalized in GHOST, but it is not quite clean yet, so I will hold off on committing until it is. I got in contact with my mentor to discuss the next phase of my project: handling touch input for multiple contexts.
Friday
Eeeeeek, bad builds are getting committed. I wasted a lot of time figuring out that building with the CMake option WITH_RAYOPTIMIZATION set to OFF was producing the following, and not an error in my own code: http://www.pasteall.org/pic/show.php?id=32911
Knowing this, the workaround is simple. Still, the past two days were burned up on bug tracking. Now that that's out of the way, I hope to be committing GHOST code tomorrow.
After talking with my mentor, I discovered that my initial plans for how to decipher multitouch input would not work. Specifically, I need a way to get the context of any individual touch point to give meaning to the user's action.
Next Week
After further researching the aforementioned problem, I'd like to have GHOST and context information strung together in regex readable form, even if some of this is placeholder information.
Week 4
Monday
The code for touch on Windows should now compile in MSVC (and I assume, by extension, MinGW-w64). I noticed some weird behavior as a result, but I'm guessing this has more to do with event handling than the compiler. Specifically, when compiling with MinGW, touch is recognized only as a left click and drag (as expected). MSVC, by comparison, shows a couple of weird behaviors: on a couple of occasions touch and drag caused the 3D view to rotate around the focus of the screen; now it seems to zoom out infinitely or cause a draw error. This might require further investigation.
Tuesday
Added touch events to GHOST_EventPrinter. It now prints the state, index, and coordinates (accurate to 1/100 of a pixel) to the console when WITH_GHOST_DEBUG is set.
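The output amounts to something like the following sketch (the struct and names here are illustrative stand-ins, not the actual GHOST code):

    #include <cstdio>

    /* Stand-in for the touch data carried by the GHOST event. */
    struct TouchData {
        int state;      /* down / move / up */
        unsigned index; /* which touch point */
        int x, y;       /* hundredths of a pixel, as delivered by Windows */
    };

    static void printTouchEvent(const TouchData &td)
    {
    #ifdef WITH_GHOST_DEBUG
        printf("TouchEvent: state=%d index=%u pos=(%.2f, %.2f)\n",
               td.state, td.index, td.x / 100.0, td.y / 100.0);
    #else
        (void)td; /* debug printing compiled out */
    #endif
    }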
Friday
I just finished adding touch event data to the window manager types and main loop; tomorrow I will add some debugging code to make sure I'm getting good values. This is just a 1:1 copy of GHOST's touch type. I also refactored some of the code to take better advantage of existing types.
A hurdle at the moment is finding an appropriate place to convert the touch input into strings of characters, which will have to be a point where I can get the context of any given touch point. I believe I should start exploring how drag and drop functions in Blender to get an idea of how this might work (though the comments I've seen throughout the code seem to imply this might not be the best place to look).
Next Week
At the suggestion of my mentor, I've been researching how modal handlers are used in Blender. Admittedly I do not have a clear picture of how they would be used at the moment, due to still not having a clear idea of where the input should be translated into regular expressions. I believe that this is first and foremost what I should spend my weekend working on.
Next week I would like to have code for converting touch input into regular expressions, as well as a simple test case for single touch input (rotate).
I should also take more active notes as I research; a lot of the time that goes into research has gone undocumented because I forget the exact details later (leading to generic weekly reports).
Week 5
Sunday
I cleaned up some compile errors in MSVS in order to confirm that MinGW and MSVC now function the same.
Monday
I'm still getting to grips with the window manager. While studying the code I noticed a bit that could benefit from a simple rename, specifically changing "custom" to "customdatatype" in wm_event_system. I still need to find the best point to register touch as a modal event. I read through most of the code relating to it in the window manager today, so I believe I have a good base for tomorrow.
I also noticed a graphical bug during drag and drop: the subwindow(?) borders do not update until any border is moved. I'd guess it's a simple missing notifier, which I might look into later. I also confirmed today a weird glitch where disabling "WITH_RAYOPTIMIZATION" in both swiss_cheese and trunk for MinGW causes most text to disappear. I don't believe this was the case when the summer began, though it might be an issue with my system.
Wednesday
I decided to spend the past couple of days bugfixing in order to get a better feel for how operators work and to contribute something along the way. I committed fixes for "[#31577] Select N-th vertex/face/edge doesnt work" and "[#31433] BMesh: Knife tool Angle Constraint function". The latter took a bit longer than expected; it's now 5 in the morning here, which is why this update is brief.
Friday
Expanding on the last update, here is more detail on the two bugfixes I worked on.
My reasoning for working on bugfixes was twofold. First, the tracker had come up in the past two weekly meetings, so I figured I would have a go at a couple of the issues. Second, it was a great opportunity to see how other operators function in Blender.
The former bug was actually documented in the code once I got to look into it. In short, the issue was that loop selections did not update the clicked face as active. This caused errors with N-th face selection, which relies on checking the active face and associated loops. The fix was pretty simple: call the existing function that sets a face as active based on mouse position. This led to cleaner code and fixed all the issues commented in that section.
The latter bug, while very educational for my project with regard to handling modal events, led me to realize that the knife code is a bit messy for the time being (the fact that it's messy is even documented in the code XD). The issue I found was that the mouse position was stored, but not updated when vertex or edge snapping occurs (not necessary for face snapping, of course). This caused issues when constraining to angles, which relies on previously recorded mouse positions. The fix for vertex snapping was pretty straightforward, since knife vertices calculate and store their screen position when created; I then used that fact to correct edge snapping as well.
While the core problem of the report was fixed, there are still several shortcomings in the existing system, such as ignoring angle constraints when vertex/edge snapping occurs. An eventual rewrite might also take advantage of the existing snap code instead of custom knife snap code. An additional shortcoming (though admittedly a mild one) is that overlapping edges (like those seen when viewing an orthographic box from any side) might be registered as a cutting line going from front to back, causing the intended cut across a visible face to be rejected. A simple screen-space check in the edge hit code might fix this.
For the past two days (Thursday and Friday) I have been dealing with a drive-by cold. Thursday I fixed an error I had introduced and later noticed while working on the second bug; today was not very productive at all. :(
Next Week
Bug hunting in the knife operator was very beneficial, in that I read through quite a bit of code, which gives me an idea of how I might set up my own operators. I'll need to return to plans for next week when my mind is a bit clearer, though.
Week 6
Tuesday
In an IRC conversation with jesterKing and dfelinto, I realized that some of the fundamental design directions I intended to take were not going to work if I wanted the same code base to serve both BGE and Blender. jesterKing promptly informed me that BGE's and Blender's code diverge strongly, so I can make fewer assumptions about what information is available; I cannot base my system solely around context in Blender.
This information (while important) is troubling in how it will set me back. In the end I believe a better design will result from it, though; the way I was looking at it was becoming a very hard-coded solution, which would certainly limit usability.
Wednesday
I sat in on a Skype conference with Mike and previous BlenderTUIO developer Marc Herrlich. Marc expressed interest in finding students to test and develop touch input in Blender, with respect to creating new gestures in an existing touch environment. This relationship would benefit us both: it would reduce my burden to create gestures once the system is in place, allowing more time to stabilize the system instead, while giving Marc a research area for his students that allows rapid prototyping of gesture commands.
In our discussion, Marc confirmed what I believed to be the most immediately important things to have working with gesture recognition: tablet and touch support for sculpting.
Friday
I believe the best solution to the code duplication problem is to have a single touch library which both Blender and BGE share. This library should have a means for registering possible context information; in Blender this would be, at minimum, the editor type and what is directly under the touch point. It might also be useful to know the mode of the editor, but for the time being I will assume this is context I do not need. Upon initialization, Blender should register existing editor types with the touch library, as well as what might be under any given touch point. In BGE, providing context and under-touch information is probably best left to the game designer, which means exposing the context registration functions to Python.
I believe it would be best to handle actions through Python. This would work in both BGE and Blender and, more importantly, make user-defined actions during touch events easily customizable (very important for BGE). It also leaves open the possibility of complex actions written in Python by users.
I don't like that not a lot of coding work was done this week; most of my time ended up being committed to researching how this problem is addressed in Blender's existing code. The best example I have found thus far is GHOST's usage.
Next Week
My first goal for next week is to have a library started which can be fed the editor types available in Blender and, given context information, display a list of possible contexts. To accomplish this, I will need to find the appropriate place in Blender's initialization to feed this information to the touch manager. I want to have this finished by Tuesday.
After this, I will begin work on feeding it touch information: individual touch points, the editor they reside in, and relative changes in location. The week after next is the beginning of midterm reviews; before that process begins, I want to have the stream generator mostly finished.
Week 7
Friday
This week too much of my time was spent reading and rereading documentation and source code, and not enough on actual coding. This is leading to indecision and a lack of real work getting done. Stepping back to reflect on it now, I realize this does not make sense: since I am working on a library, the information is abstracted enough that I can build on a working basis even if the specifics need to be rewritten later. With this realization in mind, this evening I began writing the library code to get my project heading in a better direction.
Since I have been reading through a lot of code, I've been trying to correct formatting and comment spelling as I go along. Realizing that these little commits might become obnoxious, I'm going to start accumulating them before committing to trunk.
After reviewing Python's usage in Blender, I am realizing it will have a larger role to play in my project, since context information is handled between C and Python. Creating expressions was already going to be handled in Python for the UI; in addition, I believe it would be best to dispatch operators from, and register possible contexts to, the touch manager via the Python API. Handling regex state logic and system input will still occur in C++.
Two main things kept me distracted from making real progress this week:
1. I wanted to have established where Blender would register context information, which is necessary for BGE and Blender to have unified touch code. I now believe this should happen via the Python API, with a startup script populating known contexts in Blender and the game designer possibly populating them in BGE. I am not sure how the embedded BGE in Blender will handle this, but based on code I bumped into while reading how the BGE and the window manager relate, I believe it is as simple as re-interpreting the touch events provided by the window manager into a BGE-readable form.
2. I wanted to understand how to best integrate this library into Blender, which is still an open issue. I believe my existing code in the window manager is probably the best place to look at the moment. This needs to be solved soon, since it could lead to a disconnect if the library code remains untestable (NOT GOOD, BUG PRONE!).
Once the second issue is worked through, any code I have working in the library will just need GHOST-related additions per platform.
An issue I see looming on the horizon relates to the multi-cursor nature of multitouch in Blender. I believe Blender will only take the first touch point into consideration when determining context. This is an issue because interpreting touch based on context is quite important to this project. I am set to talk with the BlenderTUIO developers next week, and will ask how they dealt with this.
Next Week
Continuing from my (late) work on library code this week, I will start writing the basic functions I know I will need once I am able to hook in my library, while figuring out a good way to implement point 1 above. I believe I already know where to look for point 2, so I will also try to have that done as soon as possible, just to work it out of the code I need to focus on.
Week 8
Eeek, I'm getting bad about updating these throughout the week; I need to correct this next week.
Friday
I talked with the BlenderTUIO developers again this Wednesday. They offered to give me read access to their repositories. We also discussed how they went about handling multiple contexts in parallel as a result of multiple cursors.
This week I created shells for the C API and the touch manager. I've also been working on the touch type definitions, iterating on the design as I find it necessary.
Currently I am designing around the idea of context being an area, region, and data type under the touch point, which together are used to interpret meaning from touch events. I have written code to register new input for each of these, as well as to check whether a given one has previously been registered.
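A minimal sketch of what that registration amounts to (all names here are hypothetical; the real definitions live in TOUCH_Types and are still in flux):

    #include <set>
    #include <string>

    /* The three kinds of context tracked per touch point. */
    enum TOUCH_ContextKind { TOUCH_AREA, TOUCH_REGION, TOUCH_DATA };

    class TOUCH_Manager {
    public:
        /* Called by Blender (or a BGE game designer) for each known value,
         * e.g. registerContext(TOUCH_AREA, "VIEW_3D"). */
        void registerContext(TOUCH_ContextKind kind, const std::string &name)
        {
            known_[kind].insert(name);
        }

        bool isRegistered(TOUCH_ContextKind kind, const std::string &name) const
        {
            return known_[kind].count(name) != 0;
        }

    private:
        std::set<std::string> known_[3]; /* one set per context kind */
    };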
I have also begun work on code to convert and store touch events in a continuous, regex-readable form. This code, however, will need further review, as I believe its current implementation is not ideal.
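The conversion currently amounts to something like this sketch (names hypothetical; the exact symbol format is the part that still needs review):

    #include <sstream>
    #include <string>

    enum TOUCH_State { TOUCH_DOWN, TOUCH_MOVE, TOUCH_UP };

    class TOUCH_Stream {
    public:
        /* Append one symbol per event so the accumulated string can later be
         * matched against composed gesture expressions. */
        void add(TOUCH_State state, int index, const std::string &context)
        {
            static const char code[] = {'D', 'M', 'U'};
            std::ostringstream symbol;
            symbol << code[state] << index << context;
            stream_ += symbol.str();
        }

        const std::string &str() const { return stream_; }

    private:
        std::string stream_; /* continuous, regex-readable event history */
    };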
Next Week
I need to plug this into Blender's window manager to confirm that it behaves as expected, so next week I should begin fleshing out the C API.
Week 9
Friday
The C API is taking longer to complete than I had initially anticipated, simply due to my inexperience with writing wrappers. I had hoped to be done with it by tonight, but I keep learning that the code will not function as I anticipated and end up having to rewrite base blocks. Simply tracking down errors and trying to understand why they occur took up the majority of my week.
Common errors I have run into include putting includes in headers when they belong in the source file (i.e. pulling object-oriented code into headers that will be used from C code). Another issue I've run into recently is an over-reliance on structs instead of objects within the C++ code, which I realized has been a cause of issues with TOUCH_Types.
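The pattern I am converging on is the usual opaque-handle wrapper, roughly as in this sketch (all names are hypothetical; the class here is a stand-in for the real manager):

    /* What the C API header exposes: plain C types only, nothing C++. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct TOUCH_ManagerHandle *TOUCH_ManagerPtr; /* opaque to C callers */

    TOUCH_ManagerPtr TOUCH_createManager(void);
    void TOUCH_destroyManager(TOUCH_ManagerPtr manager);
    void TOUCH_registerContext(TOUCH_ManagerPtr manager, int kind, const char *name);

    #ifdef __cplusplus
    }
    #endif

    /* What the C++ source file hides behind the handle. */
    #include <set>
    #include <string>

    class TOUCH_Manager { /* stand-in for the real class */
    public:
        void registerContext(int kind, const std::string &name)
        {
            (void)kind;
            known_.insert(name);
        }
    private:
        std::set<std::string> known_;
    };

    TOUCH_ManagerPtr TOUCH_createManager(void)
    {
        return reinterpret_cast<TOUCH_ManagerPtr>(new TOUCH_Manager());
    }

    void TOUCH_destroyManager(TOUCH_ManagerPtr manager)
    {
        delete reinterpret_cast<TOUCH_Manager *>(manager);
    }

    void TOUCH_registerContext(TOUCH_ManagerPtr manager, int kind, const char *name)
    {
        reinterpret_cast<TOUCH_Manager *>(manager)->registerContext(kind, name);
    }

Keeping every C++ include in the source file, behind the opaque handle, is what resolves the header problem described above.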
I should also make better use of the IDE I am using; I realized I was losing time by manually changing functions instead of using simple refactoring tools. This definitely leads to unnecessary time wasted on recompiling and minor tweaks in between.
Next Week
I want to have the API finished and functioning before mid-week. This will involve considerably more retooling of the current touch library types and objects. I will report on this specific issue next Wednesday.
I would like to spend Wednesday researching existing state machine libraries to determine whether any of them would be better than programming my own for comparing touch "gestures". I believe Boost may be an appropriate place to start looking.
Week 10
Friday
Finding what I needed in order to make progress took significantly longer than it should have this week. The documentation I needed is the following: http://wiki.blender.org/index.php/Dev:2.5/Source/UI/AreaManager. The reason this was such a large hangup is that I needed it in order to test the code written up to this point, to understand what works and what needs to be changed with regard to Blender's core functionality. I had initially put this off in order to continue work on the actual code base of the touch library, but at this point it would incur too much technical debt in terms of bug tracking to continue without contextual information to test against.
The touch library now compiles and is included into Blender without causing build errors (finally!). Nothing is hooked up yet, so nothing interesting happens.
Next Week
I need to get in contact with dfelinto, jesterKing, or anyone else familiar with the game engine to help guide me to the equivalent of area management in BGE. I need this information before continuing in order to ensure compatibility with the BGE going forward. Due to the issues described above, I was not able to attend to this yet this week.
I would also like to verify with someone that my mental model of what counts as an area and a region matches how Blender actually defines them.
After accomplishing the above two tasks, I will be able to link both into the touch library and, barring setbacks, begin work on the interpretation of touch events. I am currently expecting a setback in the form of touch points past the first sending only the first point's context information to the touch manager, which I will deal with after testing.
Week 11
Friday
I have finally settled on a manner of handling context that satisfies the needs of both Blender and BGE. Instead of standardizing the contextual information before sending it to the touch manager, the touch manager will have a Context object. This object is extended to interpret either Blender's or BGE's input with respect to the context. In doing this, I can now work with both Blender's and BGE's context types directly in my own library, instead of figuring out where best to put things in the existing code base while trying to work around different type definitions.
Now, among my TOUCH_Types, I can create a separate context container for Blender and for BGE, which the window manager or the game engine will fill and blindly pass to the touch manager. The touch manager will then feed the input to the Context object, which will type and translate the context and return it as a string for the touch manager to use for interpretation.
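A rough sketch of the shape this takes (names hypothetical; the real containers will live among TOUCH_Types):

    #include <string>

    /* Base class owned by the touch manager; it never sees Blender or BGE types. */
    class TOUCH_Context {
    public:
        virtual ~TOUCH_Context() {}
        /* Translate whatever container the host filled into a context string. */
        virtual std::string translate(const void *host_context) const = 0;
    };

    /* Blender-side implementation; bContextContainer is a stand-in for the
     * Blender-specific struct the window manager fills and blindly passes on. */
    struct bContextContainer { const char *area; const char *region; };

    class TOUCH_BlenderContext : public TOUCH_Context {
    public:
        std::string translate(const void *host_context) const
        {
            const bContextContainer *c = static_cast<const bContextContainer *>(host_context);
            return std::string(c->area) + ":" + c->region;
        }
    };

    /* A TOUCH_BGEContext would do the same for the game engine's own container;
     * the manager only ever calls translate() and works with the returned string. */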
Working under the assumption that Blender and BGE were more alike than different has caused me quite a bit of grief up until this point. This put me in the wrong frame of mind while looking at the issue of handling contextual information, as I tried to understand how to best force one into the other. This new solution is significantly more elegant, and does not compromise overall function of touch in either Blender or BGE.
I believe this method might also provide the means for dealing with the BGE playing embedded in Blender. This is only a hunch based on the BGE code I have skimmed.
I was hoping that I might have something to show before SIGGRAPH, but that is seeming very unlikely at this point.
Next Week
I will be attending SIGGRAPH next week, all week. I will work on this project in the downtime I have, but I am not expecting to have much of it.
One thing I do hope to accomplish is tidying up my code. After working on the code with several visions of how it might look, I'm starting to notice that the seams are becoming a bit frayed, and several unnecessary bits are being left in.
Week 12
SIGGRAPH 2012 Begin
As I had guessed last week, SIGGRAPH left me with no time to work on my project this week. So instead I will report on what I got out of attending this event.
The first thing I learned is that the Blender crowd is made up of some of the most diverse, generous, and interesting people I have ever had the honor of coming into contact with. I cannot stress enough that it was a pure joy to spend time with this community.
Throughout the week, I prioritized attending studio events, panels, and general talks over technical papers. This is largely because papers are much easier to find and review at a later date than other presentations, and much of the papers' content is beyond my ability to ask intelligent questions about. Additionally, I am more interested in finding issues in the development process than in the results of research I have little exposure to.
With that quick point out of the way, I'll continue the rest of this update in a somewhat more structured manner.
Sunday
For some reason I had convinced myself that the conference began Monday, but while en route to my hotel I noticed the LA Convention Center and decided to stop by to see if I could pick up my badge early. This was a fortuitous choice, as soon after arriving I learned that the conference was already in full swing. However, this left me with the unfortunate necessity of carrying my travel bag all day. In hindsight, paying the minor baggage-holding cost at the front would have been a smarter move, as my shoulders have still not completely recovered from this poor choice.
I had arrived just in time to make it to the Blender Community Meeting, where I quickly learned of the diversity of the Blender community that is not always apparent online. As a side note, this was also my first exposure to the absurd presence Texas A&M had at this conference.
The first panel I attended pertained to the future of motion control in interactive entertainment. Unfortunately the panel itself seemed more like a recap of where the field has been than a look forward at what will come; three of the five panel members were from competing companies and were largely wary of sharing too much information. I was able to confirm with the panel directly that their experience suggests there is an uncanny valley for input devices.
The Technical Papers Fast Forward was an interesting and entertaining overview of most of the papers to be featured at the conference. Many of the exceptions to my above-stated rule for prioritization were a direct result of interest sparked by this event.
Though I live in a city and am no stranger to walking downtown at night, I can say that the half-hour-ish walk to my hotel, baggage in tow, using a dying cellphone's GPS, was not the most comfortable stroll I have taken in my life. Combine that with Google's lack of awareness of which streets might be considered questionable at that time of day, and you have a mixture that certainly quickened my pace.
Monday
I began my morning by bumping into two fellow conference attendees over breakfast. The rapport I had with them really set the tone for conversations with most individuals at this conference. Most people were exceptionally outgoing and interested in randomly striking up conversation in the short stints of downtime we had. These small conversations in and of themselves made up some of the most memorable, personal moments of this trip.
The first papers roundup of the morning centered around shape analysis, which I found to be quite similar to 2D image analysis as I already understood it, at least with regard to the general steps needed to progress through the process.
Later I attended a talk on teaching procedural workflows in animation. I found it interesting when the instructor mentioned that students begin projects with the idea of having to hand-animate everything, a group in which I often include myself. The talk focused on the point that there are intuitive examples students tend to grasp quickly, while basic abstraction, such as using a noise texture to add noise to geometry, tends to throw students for a loop.
I was confronted with my first hard decision of the conference at this point. Either I could take an extended course in "Modern OpenGL" or attend presentations on the work done for The Avengers and Brave. I am happy to say I chose the latter, which certainly did more for me in terms of motivation and fascination. Later, I confirmed with fellow attendees that courses such as the one I was considering tend to be very dull and not as useful as simply researching the material online.
There were several panels and presentations that centered around things learned from Brave and The Avengers, all of which blur together as a large collection of ideas learned around these two movies. So instead of trying to recall what I learned from each presentation each day, I have written up a separate section for each below.
Following the Brave presentation was the animation festival. This was something special to have attended. These shorts hit all the extremes of abstraction, ranging from funny to absurd to absurdly funny, ending with what I consider the largest surprise of the show, Paperman. Had I been told before seeing it that a non-Pixar Disney animated short would be the crown jewel of this conference, I would not have believed it. I believe that in the future this will be marked as the point at which Disney Animation finally paralleled the quality of its towering counterpart. I cannot speak highly enough of the simple charm that this short evoked.
That evening I attended the Networking Dessert Reception and the ACM SIGGRAPH Chapters Party. For anyone considering attending SIGGRAPH in the future, I have two quick pieces of advice. First, just go. Second, attend these kinds of social events.
Tuesday
The exhibition and emerging technologies are must-stops for anyone at this event. One prominent thing I observed is how large 3D printing has become. I was somewhat surprised at the fact that 3DConnexion's nDoF mice are not particularly intuitive in the short term. It is my understanding that they become quite versatile with time, and I was able to mention to them that there is a crowd interested in a Bluetooth extension of their device. More surprising was how exceptionally intuitive the Leonar3Do was, though even with short usage I was already experiencing mild cramps in my arm.
Presentations later in the day focused on crowd simulation. It was interesting to see the little tricks employed by larger studios to fake large volumes of characters while maintaining a high fidelity of control for artists, a theme which was particularly prominent in the Pixar discussions.
SIGGRAPH Dailies! was the next notable event of the day, where presenters gave a quick dialogue to go with their snippet of animation presented onscreen. Often this was either an explanation of the context in which it was created, or of the process and hardships they went through to get the final result.
I believe it was this evening that I dined with the Blender crew. Many of my above statements about the Blender community were a direct result of this event.
Wednesday
Most events that I found to be notable this day were centered around Brave and The Avengers, save a pleasant discussion had long into the night with a fellow Blenderhead.
Thursday
Hackerspaces being a personal area of interest, I decided to check out a series of presentations on the subject. Of particular interest was a project to retrofit old microscopes with modern imaging equipment (read: a camera), which allowed varying levels of focus to be captured and composited together into a clear, high-detail image. The applications mentioned during this talk were numerous and highly plausible.
In addition to more Brave-centered discussions, there was a large presentation held this day about Paperman. Quite a bit of interesting software and several novel animation techniques were used to create this film, giving much food for thought. I am, however, opting not to discuss this further, honoring their implicit request not to spread too much about it.
The Avengers Presentations
Many of the interesting points taken from these presentations had to do with keeping characters in character during animation and constructing large-scale scenes from live-action footage. At length they discussed their process for recording their animation needs on set without being overbearing to the film crew.
As both Digital Domain and Weta had hands in this film, and there were panels featuring both studios, I was treated to discussions of how assets used for this movie were traded between the companies.
Interestingly, metal-bending simulation was a point of contention in at least one scene; I had not known that this was an issue.
Another large discussion point was the fidelity of digital doubles: in more than a few cases they made the conscious decision to scrap the original material and recreate it, which was often easier than fixing the recorded material.
Brave Presentations
Muscle and fat simulations seemed to be integral parts of this movie, so much so that some furry animals in the movie had no fur simulation applied afterward. Points like this one emphasized to me just how serious Pixar is about pre-production, as a decision like that would not have been possible in later stages of development.
I gained a much more intuitive understanding of shaders as a result of their discussion on generating much of the foliage of the movie in shaders alone. Suddenly this topic has become very interesting to me.
As I mentioned above, it seems that much of Pixar's focus in simulation is on providing a framework that mostly gives good results out of the box, while allowing artists to tweak details as necessary. They also stressed the ability to simplify a simulation for quick feedback in a way that remains faithful to the more detailed simulation run on the final pass, which was the main takeaway I got from the hair and cloth simulation talks.
One interesting thing I took note of is that, while working on Brave, Pixar artists seemed to be individually placing guide hairs used to control the flow of hair across characters and creatures. This seemed to be a painful and slow process that lacks an intuitive quality. To me, it would seem more intuitive to sketch a hair line, recording the pen's rotation as it is drawn across a surface, and use this as a base from which to tweak guide hairs. I will need to research further whether this is indeed an issue, as well as find out whether a similar solution already exists in Blender.
SIGGRAPH 2012 End
I go forward into the end of this summer with a new fervor and a new level of pride in being able to say that I am part of the Blender community.
Next Week
Now that I have a plan for how context should be handled, implementing it should be rather trivial. I need to verify that the way I intend to communicate between my library and Blender actually functions as expected. This week I also need to lay out what work will continue after the summer's end, with the goal of having something presentable for the Blender Conference come September.
Week 13
Friday
Cleanup and restructuring around the new idea of handling context was my focus this week. Stubs are now in place for handling Blender vs. BGE contextual information, without the manager having to know anything about the implementation of either.
One issue that I will need to reconcile is how the sharing of types is managed, as BGE and Blender will have separate structs for passing their individual contextual information, and these cannot be cross-included.
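One direction I am considering is to let the manager hold only pointers to forward-declared host structs, as in this sketch (names hypothetical, and only a sketch of the idea):

    /* Neither struct's header is included by the touch library; the manager
     * forward-declares them and passes the pointers through untouched, so
     * Blender and BGE never need to include each other's definitions. */
    struct bContextContainer;   /* defined only in Blender-side code */
    struct KX_ContextContainer; /* defined only in BGE-side code */

    class TOUCH_ContextData {
    public:
        explicit TOUCH_ContextData(const bContextContainer *blender)
            : blender_(blender), bge_(0) {}
        explicit TOUCH_ContextData(const KX_ContextContainer *bge)
            : blender_(0), bge_(bge) {}

        const bContextContainer *blender() const { return blender_; }
        const KX_ContextContainer *bge() const { return bge_; }

    private:
        const bContextContainer *blender_; /* filled by the window manager */
        const KX_ContextContainer *bge_;   /* filled by the game engine */
    };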
This week was a bit busier than normal, as I had learned right before SIGGRAPH that I would be transferring to Purdue University; this entailed working out all the registration, housing, and other details this week. Unfortunately my week was capped by a mid-week cold that has certainly dampened my productivity.
Summer's End
I believe GSoC's hard "pencils down" period begins next week, which is also when classes begin for me. This weekend I will be making sure that I am not missing any required deadlines. This would be a good excuse to write up the formal design document that has been missing from the project thus far (my implementation up to this point having been largely experimental). It will also give me a good opportunity to step back and get a good overview of my project going forward.
As I stated before this summer, I will be continuing work with the intent of maintaining and extending the code as a module owner. Unless told to do otherwise, I will continue reporting both here and to the 2012 Summer of Code listserv on my weekly progress. I have already contacted my mentor to make sure that we stay in contact after the summer's end.
Next week will likely not see much work accomplished, as I will be adapting to my new environment. Afterward, I should have a better idea of my workload and how much time I will be able to spend on this project in the weeks going forward. The goal I am working toward is still to have something demo-able for this year's Blender Conference, come October. At this time, based on what I have learned over the summer, I don't believe it will include the game engine, only Blender the 3D creation suite.