Lab 8: Controllable Heliotrope Video (assessed!)
The goal of this practical is to combine Direct Manipulation with the ideas in VideoTextures. You will build a system that allows the user to (seemingly) specify the look-at directions of a "real" human face.
The end-user input to the system is a piecewise linear curve. This curve specifies the linear sections of the path that the video-sprite object should follow. For the default data provided here, the path specifies a sequence of look-at points of the face.
Your system will follow that user-specified path to traverse a collection of images (it could also be video) in which the subject was exercising at least two obvious degrees of freedom. The resulting video should be smooth, in the sense of looking as continuous as possible, and controlled, in the sense of moving in accordance with the user's directions. In theory (and NOT as a required part of this assignment) one could imagine a real-time version of the system, where the human face would turn interactively in response to click-and-drag mouse inputs.
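Before frames can be chosen, the user's piecewise linear curve has to be turned into a sequence of intermediate look-at targets. The lab code is in Matlab, but the idea can be sketched in Python; the function name and the fixed per-segment sample count are illustrative choices, not part of the assignment spec.

```python
import numpy as np

def sample_path(points, samples_per_segment=10):
    """Sample a piecewise-linear path of look-at targets.

    points: list of (x, y) clicked coordinates defining the path vertices.
    Returns an array of intermediate look-at targets visiting each
    linear segment in order.
    """
    points = np.asarray(points, dtype=float)
    targets = []
    for s, t in zip(points[:-1], points[1:]):
        # Linearly interpolate from segment start s to segment end t
        # (endpoint excluded so shared vertices are not duplicated).
        for a in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            targets.append((1 - a) * s + a * t)
    targets.append(points[-1])  # include the final vertex
    return np.array(targets)

# Two segments: (0,0)->(4,0) then (4,0)->(4,2)
path = sample_path([(0, 0), (4, 0), (4, 2)], samples_per_segment=4)
```

How densely you sample each segment (or whether you sample adaptively by segment length) is one of the gaps you are expected to fill in yourself.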
Grading and Deadline
This lab is assessed, and the deadline is Thursday, March 24th at 23:55. You will be able to upload your final .zip file through Moodle. 10% will be deducted for submissions that are less than 24 hours late, 20% for submissions between 24 and 48 hours late, and there is NO credit for solutions submitted more than 48 hours past the deadline.
The Basic Section of the assignment is worth a total of 70%. Each item in the Advanced Section is worth a maximum of 15%. The maximum total grade for this lab is 100%.
You are provided with a 70 MB .zip file containing a collection of photographs. The zip is available online here or locally from ~brostow/prism/gjbLookAtTargets.zip.
As part of this lab, you will need to compute pixel-wise optical flow and shortest paths in a graph. For these and any other purposes, you are free to use existing code:

- Optical flow (Matlab): the library here. Read the instructions and mex it with a recent version of Matlab; if you are using Linux in the labs, replace the functions __max and __min in "ImageProcessing.h" with fmax and fmin so that it compiles. On Windows you do not need to compile anything: the 64-bit precompiled mex file (.mexw64) is included in the archive, and the 32-bit version can be downloaded here. An example file in the archive shows how to use it. It takes quite a long time to compute, so don't hesitate to subsample the images by a factor of 2 if needed.
- If you don't manage to get that working, you can use this one instead, but it is slower and works only on grayscale images; or finally this one, a binary version that uses CUDA (but it won't work on the Linux labs).
- Shortest paths: you can use any code offered in the Isomap dimensionality reduction system, especially its different Dijkstra implementations (online here http://waldron.stanford.edu/~isomap/, or locally, with minor changes for compatibility with newer Matlab versions, here and at ~brostow/prism/Isomap.zip). Note: Isomap's Readme.txt explains how to use its dimensionality reduction and Dijkstra routines, which is nice, but Isomap itself is NOT NEEDED for this lab.

Document any other code you use that is from someone else - you are being evaluated on the code you contribute yourself. You will be asked to submit the standard CS Department coursework cover sheet.
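To make the shortest-paths requirement concrete, here is a minimal Dijkstra sketch in Python over a dense frame-transition cost matrix (the Isomap routines above do the equivalent job in Matlab). The cost matrix itself is an assumption: how you derive transition costs between frames, for example from optical-flow magnitude or pixel differences, is part of your design.

```python
import heapq
import numpy as np

def dijkstra(cost, src):
    """Shortest-path distances from frame `src` over a dense cost matrix.

    cost[i, j] is the transition cost between frames i and j;
    np.inf marks transitions you want to forbid.
    Returns (dist, prev), where prev lets you backtrack the path.
    """
    n = cost.shape[0]
    dist = np.full(n, np.inf)
    prev = np.full(n, -1, dtype=int)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry, a shorter route was found already
        for v in range(n):
            nd = d + cost[u, v]
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Tiny 3-frame example: going 0 -> 1 -> 2 (cost 2) beats 0 -> 2 (cost 4).
cost = np.array([[np.inf, 1.0, 4.0],
                 [1.0, np.inf, 1.0],
                 [4.0, 1.0, np.inf]])
dist, prev = dijkstra(cost, 0)
```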
In the YOUR_NAME_Lab8.zip that you upload, there should be:
Basic Section (70%)
Submit the items listed in the "Output" section above to show that you have implemented the following algorithm outline. You are expected to fill in the gaps yourself. You can improve on the algorithm if you like, but then you should submit a separate .m file or explain in the Readme.txt how to run both the regular and improved versions. The following algorithm outline explains ONLY what should happen when your system is trying to find frames from the image collection to traverse one straight-line segment of the path between the clicked point s (for source) and t (for target). Obviously, you will need to repeat the process for the other segments of the path.
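As one illustration of the kind of gap-filling expected for a single s-to-t segment, here is a deliberately simple greedy sketch in Python. Everything here is an assumption for illustration: the per-frame look-at estimates, the optional smoothness matrix, and the weight w are all hypothetical inputs you would have to produce yourself, and a joint shortest-path formulation over the whole segment would generally beat this greedy version.

```python
import numpy as np

def frames_for_segment(frame_lookats, s, t, n_steps=10, smooth_cost=None, w=0.5):
    """Pick a frame sequence whose look-at points track the line from s to t.

    frame_lookats: (N, 2) array, estimated look-at point of each frame
                   (how you estimate these is up to you).
    smooth_cost:   optional (N, N) matrix of visual-transition costs,
                   e.g. derived from optical flow; None ignores smoothness.
    Greedy sketch: at each step, choose the frame minimising
    (distance to the current target) + w * (transition cost from the
    previous frame).
    """
    s, t = np.asarray(s, float), np.asarray(t, float)
    seq = []
    prev = None
    for a in np.linspace(0.0, 1.0, n_steps):
        target = (1 - a) * s + a * t  # point on the s-t line for this step
        cost = np.linalg.norm(frame_lookats - target, axis=1)
        if smooth_cost is not None and prev is not None:
            cost = cost + w * smooth_cost[prev]
        prev = int(np.argmin(cost))
        seq.append(prev)
    return seq

# Three frames looking at (0,0), (1,0), (2,0); sweep from (0,0) to (2,0).
lookats = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
seq = frames_for_segment(lookats, (0, 0), (2, 0), n_steps=3)
```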
Advanced Section (15% each)
Items in the Advanced Section will be evaluated based on effort, creativity, and thoroughness, so don't assume the "easy" ones are less work: they are meant to be roughly equal in difficulty.