Lab 8: Controllable Heliotrope Video (assessed!)
The goal of this practical is to combine Direct Manipulation with the ideas in Video Textures. You will build a system that allows the user to (seemingly) specify the look-at directions of a "real" human face. The end-user input to the system is a piecewise linear curve. This curve specifies the linear sections of the path that the video-sprite object should follow. For the default data provided here, the path specifies a sequence of look-at points for the face. Your system will follow that user-specified path to traverse a collection of images (it could also be video) in which the subject was exercising at least two obvious degrees of freedom. The resulting video should be smooth, in the sense of looking as continuous as possible, and controlled, in the sense of moving in accordance with the user's directions. In theory (and NOT a required part of this assignment), one could imagine a real-time version of the system, in which the human face would turn interactively in response to click-and-drag mouse inputs.

Grading and Deadline
Uploads of either your final .zip file or that .zip file's MD5 checksum are due by noon on 23rd March. 10% will be deducted for submissions that are less than 24 hours late, 20% will be deducted for submissions between 24 and 48 hours late, and there is NO credit for solutions submitted more than 48 hours past the deadline. The Basic Section of the assignment is worth a total of 70%. Each item in the Advanced Section is worth a maximum of 15%. The maximum total grade for this lab is 100%.

Given
You are provided with a 70 MB .zip file containing a collection of photographs. The zip is available online here or locally from ~brostow/prism/gjbLookAtTargets.zip. As part of this lab, you will need to compute pixel-wise optical flow and shortest paths in a graph. For these and any other purposes, you are free to use the optical flow library here for a Matlab version (read the instructions, mex it with a recent version of Matlab, and, if you are using Linux in the labs, replace the functions __max and __min in the file "ImageProcessing.h" with fmax and fmin to make it compile). If you don't manage to get that working, you can use this one, but it is slower and works only on grayscale images; there is also this one for a binary version that uses CUDA (but it won't work on the Linux lab machines). You can also use any code offered in the Isomap dimensionality reduction system, especially the different Dijkstra implementations (online at http://waldron.stanford.edu/~isomap/, or locally, with minor changes for compatibility with newer Matlab versions, here and at ~brostow/prism/Isomap.zip). Note: Isomap's Readme.txt explains how to use its dimensionality reduction and Dijkstra routines, which is helpful, but Isomap itself is NOT NEEDED for this lab. Document any other code you use that comes from someone else - you are being evaluated on the code you contribute yourself. You will be asked to submit the standard CS Department coursework cover sheet.

Output
In the YOUR_NAME_Lab8.zip that you upload, there should be:
Basic Section (70%)
Submit the items listed in the "Output" section above to show that you have implemented the following algorithm outline. You are expected to fill in gaps yourself. You can improve on the algorithm if you like, but then you should submit a separate .m file or explain in the Readme.txt how to run both the regular and the improved versions. The algorithm outline explains ONLY what should happen when your system is trying to find frames from the image collection to traverse one straight-line path segment between the clicked point s (for source) and the clicked point t (for target). Obviously, you will need to repeat the process for the other segments of the path.
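To make the intended pipeline concrete, here is a minimal sketch (not the required algorithm outline, and not an official solution) of how the ingredients mentioned above, per-frame look-at points, a pairwise visual transition cost derived from something like pixel-wise optical flow, and a shortest-path search, could be chained for a single segment from s to t. The names lookat, C, lambda, and traverseSegment are hypothetical placeholders, the weighting is a guess, and Isomap's Dijkstra routines could be substituted for the built-in digraph/shortestpath calls used here.

```matlab
% Sketch only: choose a frame sequence for ONE straight-line segment.
% Hypothetical inputs (not provided by the handout):
%   lookat : N-by-2 matrix, the look-at point associated with each frame
%   C      : N-by-N visual transition cost between frame pairs, e.g. the
%            summed pixel-wise optical-flow magnitude from frame i to frame j
%   s, t   : 1-by-2 clicked source and target points of one path segment
% Requires a newer MATLAB (R2016b+) for implicit expansion and digraph.
function frameSeq = traverseSegment(lookat, C, s, t)
    % Penalise entering frames whose look-at point strays from the segment,
    % so the chosen sequence is both visually smooth and well controlled.
    segDist = pointToSegmentDistance(lookat, s, t);          % N-by-1
    lambda  = 10;                                            % weight: a guess
    W       = C + lambda * repmat(segDist', size(C, 1), 1);  % cost of entering frame j
    W(1:size(W, 1) + 1:end) = 0;                             % drop self-loops

    % Start and end at the frames whose look-at points are nearest s and t.
    [~, src] = min(sum((lookat - s).^2, 2));
    [~, dst] = min(sum((lookat - t).^2, 2));

    % Dijkstra over the frame graph (Isomap's routines would also work here).
    G        = digraph(W);
    frameSeq = shortestpath(G, src, dst);
end

function d = pointToSegmentDistance(P, s, t)
    % Distance from each row of P to the segment s-t (clamped projection).
    v = t - s;
    u = max(0, min(1, ((P - s) * v') / (v * v')));
    d = sqrt(sum((P - (s + u .* v)).^2, 2));
end
```

Concatenating the frame sequences returned for consecutive clicked segments would then give the full controlled traversal; how you smooth the joins between segments is up to you.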
Advanced Section (15% each)
Items in the Advanced Section will be evaluated based on effort, creativity, and thoroughness, so don't assume the "easy" ones are less work; they are meant to be roughly equal in difficulty.