Advanced Modelling, Rendering and Animation

Coursework


Coursework 1: GPU Rendering with Shaders in OpenGL

Coursework 1 will be assigned on January 23rd. Submissions are due on February 6th at 23:55.

The first AMRA coursework will allow you to explore GPU rendering with programmable shaders in OpenGL. You should have met OpenGL in COMP3080/GV10, and you will be working with code shown in the AMRA lecture slides from weeks 2 and 3, but feel free to refer back to your COMP3080/GV10 notes. This coursework will require you to use resources we have not given you: you will need to explore the OpenGL API to find what you need! You are free to complete the tasks using any OpenGL/GLSL functions you wish; the comments in the code are only a guide. There are also many resources online to help you. Each task is worth 20 marks towards a total of 100. Good luck!

If you have any questions, please email:



Code

First, download the code framework from here and extract the zip file to a convenient location. You should see some C source files (including amracw1.c and amracw1.h), some shader source files (.vert and .frag), and some project files. If you are in the Windows labs, please open 'vs2008/amracw1/amracw1.sln'; otherwise, there is an included Makefile. Feel free to edit all of the files as you wish; amracw1.c contains most of the code that you will change. Note that this coursework will not run in MPEB 4.06, as the graphics cards there cannot run programmable shaders.


Task 1:

Compile and run the code. NOTE: change the Working Directory to '../../..'; you can find this under Project -> Properties -> Configuration Properties -> Debugging (in the tree on the left). Once running, you should see an extruded square bathed in yellow light.

Read the instructions in the console window and interact with the application. Notice that when pressing 't', the model disappears. We would like 't' to switch between different models so that we may better observe lighting effects.

Find the display call in the code, and fill in the switch statement so that the 't' key switches between the extruded square, a teapot, a sphere and a torus. Use the GLUT API with suitable parameters; it is already linked.
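As a rough guide, the body of the switch might look like this (a minimal sketch; 'g_model' and 'drawExtrudedSquare' are illustrative names, not necessarily those used in the framework):

    switch (g_model) {                        /* cycled 0..3 by the 't' key */
    case 0: drawExtrudedSquare();             break;  /* the existing model  */
    case 1: glutSolidTeapot(1.0);             break;
    case 2: glutSolidSphere(1.0, 32, 32);     break;  /* radius, slices, stacks */
    case 3: glutSolidTorus(0.3, 0.8, 32, 32); break;  /* inner/outer radius  */
    }

The slices/stacks parameters control how finely the sphere and torus are tessellated, which matters for the per-vertex versus per-pixel lighting comparison below.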

We would also like to add another light source to the scene, to observe more complicated lighting interaction. Find where the first light is set up in the code, and create another light of your choice to illuminate the model. You may need to refer to the OpenGL API documentation here. Once completed, you should have something that looks like the images below.
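As a starting point, a second light might be configured like this, next to the first light's setup (a minimal sketch: GL_LIGHT1 and the parameter values are illustrative, and your shaders must also read gl_LightSource[1] for the new light to have any effect):

    GLfloat pos1[]  = { -2.0f, 1.0f, 2.0f, 1.0f };  /* w=1: positional light */
    GLfloat diff1[] = {  0.2f, 0.2f, 1.0f, 1.0f };  /* a bluish diffuse term */
    GLfloat spec1[] = {  1.0f, 1.0f, 1.0f, 1.0f };

    glLightfv(GL_LIGHT1, GL_POSITION, pos1);
    glLightfv(GL_LIGHT1, GL_DIFFUSE,  diff1);
    glLightfv(GL_LIGHT1, GL_SPECULAR, spec1);
    glEnable(GL_LIGHT1);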

Pressing 's' will switch from using a per-vertex lighting shader (above left) to using a per-pixel lighting shader (above right). Notice the difference this causes by rotating the object with the mouse. Why are they different? Under which circumstances will these two different lighting techniques look the same? It may help to experiment with the parameters you use to create the objects.

Task 2:

We would now like to texture our object. Modify the per-vertex and per-pixel lighting shaders so that they apply a texture to the object while still lighting it. Find the code that loads, compiles, and links the shaders for the graphics card, and replicate this for your new shaders. Note that any errors in your shaders will be written to the console window for easier debugging.
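For the fragment-shader side, a minimal sketch of combining a texture with per-pixel lighting follows (GLSL 1.x, one light only; it assumes the vertex shader writes gl_TexCoord[0] = gl_MultiTexCoord0 and passes eye-space normal N and light direction L as varyings):

    uniform sampler2D tex0;   /* set to 0 from the C code: texture unit 0 */
    varying vec3 N, L;        /* eye-space normal and light direction */

    void main()
    {
        vec3  n     = normalize(N);
        vec3  l     = normalize(L);
        float ndotl = max(dot(n, l), 0.0);

        vec4 texel   = texture2D(tex0, gl_TexCoord[0].st);
        vec4 ambient = gl_FrontLightProduct[0].ambient;
        vec4 diffuse = gl_FrontLightProduct[0].diffuse * ndotl;

        /* Blinn-Phong specular, assuming an infinite viewer along +z. */
        vec4 specular = vec4(0.0);
        if (ndotl > 0.0) {
            vec3 h = normalize(l + vec3(0.0, 0.0, 1.0));
            specular = gl_FrontLightProduct[0].specular
                     * pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);
        }

        /* Modulate ambient+diffuse by the texel, then add specular on top,
           so highlights are not dimmed by dark texels. */
        gl_FragColor = (ambient + diffuse) * texel + specular;
    }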

ShaderGen is a tool from 3DLabs (now defunct) that converts fixed-function OpenGL into GLSL vertex and fragment shaders. It is included in the code package under the folder 'ShaderGen/bin'. Spend some time exploring ShaderGen to see how texturing can be added to objects and combined with lighting in shader code.

The function 'LoadDIBitmap' is provided to load 24-bit RGB bitmap textures, like those included in the 'ShaderGen/textures' folder (note that some of these textures are RGB padded to 32 bits; use the 'RGB' variants of the files if you do not see what you expect). Once loaded, use OpenGL calls to upload the texture to the graphics card, set any texture parameters correctly, and bind the texture for use. Note: pay special attention to specular highlights!
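The OpenGL side of the upload might look like the sketch below. The wrapper function and its parameters are ours, not the framework's; in particular, check LoadDIBitmap's actual signature, and whether it returns BGR-ordered pixels (as stored in a DIB) or swizzles to RGB for you:

    #include <GL/gl.h>

    /* 'pixels', 'width' and 'height' are assumed to come from LoadDIBitmap. */
    GLuint uploadTexture(const GLubyte *pixels, GLint width, GLint height)
    {
        GLuint texId;
        glGenTextures(1, &texId);
        glBindTexture(GL_TEXTURE_2D, texId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        /* Use GL_BGR_EXT as the source format if the loader does not swizzle. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        return texId;
    }

Remember to point the shader's sampler at the right texture unit once the program is linked, e.g. glUniform1i(glGetUniformLocation(prog, "tex0"), 0), or the ARB-suffixed equivalents if the framework uses the older extension entry points.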

Once complete, your code should produce something like below. Why do these examples not look realistic?


Task 3:

We would now like to make our object very shiny, like chrome, so that it reflects the world around it. Currently, by changing material and light properties, we can generate a shiny highlight, but materials such as chrome are not accurately represented by these approaches. In COMP3080/GV10, coursework 1 showed how ray tracing can generate such reflections of the world; however, that approach is too expensive for real-time graphics. Instead, we will use environment mapping.

Environment mapping approximates the world outside the object using a texture. We use a specially formed texture, called a cube-map, which has six sides. The texture is drawn as if we are inside the object, looking out at the world. Then, when we apply it to the object, the object looks very shiny, reflecting the world around it.

A cube-map texture is provided in the folder 'cubemap'. You should start by drawing these textures onto each side of a cube, and then drawing this cube around the object and the camera; this provides our outside world. Approach this part of the task by first correctly creating a GL_TEXTURE_CUBE_MAP texture out of the six-sided cube-map provided. 'draw.c' includes a function to draw a cube; make a new function, based on this example, that draws the cube textured with the cube-map.
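Creating the cube-map texture itself might look like this (a sketch: the tokens may need _ARB suffixes with older headers, and the caller is assumed to have loaded the six face images):

    GLuint createCubeMap(GLubyte *facePixels[6], GLint w, GLint h)
    {
        static const GLenum faces[6] = {
            GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
            GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
            GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
        };
        GLuint cubeTex;
        int    i;

        glGenTextures(1, &cubeTex);
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        /* One upload per face; the six images in 'cubemap' map to +X..-Z. */
        for (i = 0; i < 6; i++)
            glTexImage2D(faces[i], 0, GL_RGB, w, h, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, facePixels[i]);
        return cubeTex;
    }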

Once that is complete, we need to generate texture coordinates that make our cube-map appear to be a reflection of the world. In Task 2, the texture was fixed to the object during rotation. With environment mapping for reflection, we want the texture to stay fixed while the object rotates; after all, the world is not rotating. Start by exploring ShaderGen's TEXTURE COORDINATE SET tab, which contains options for OpenGL's built-in texture coordinate generation. One of these is just what we're looking for; once you've found it, it may help to read up on texture coordinate generation. Integrate cube-map environment mapping into your shaders.
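In shader form, the equivalent of that texture coordinate generation mode is only a few lines of GLSL (a sketch; both stages shown in one listing):

    /* ---- vertex shader ---- */
    varying vec3 reflectDir;

    void main()
    {
        vec3 eyePos  = vec3(gl_ModelViewMatrix * gl_Vertex);
        vec3 eyeNorm = normalize(gl_NormalMatrix * gl_Normal);
        /* Mirror the view vector about the normal; this matches OpenGL's
           eye-space reflection texgen. The camera is fixed in this app,
           so the map stays still while the object rotates. */
        reflectDir  = reflect(normalize(eyePos), eyeNorm);
        gl_Position = ftransform();
    }

    /* ---- fragment shader ---- */
    uniform samplerCube envMap;
    varying vec3 reflectDir;

    void main()
    {
        gl_FragColor = textureCube(envMap, reflectDir);
    }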

Once complete, your code should produce something like the following images:

The left image still includes per-pixel lighting, but the scene does not look realistic. When using real-world sourced cube maps, why should we generally not use lighting with environment mapping?


Task 4:

We would now like to turn our object into glass. When light passes through a transparent material such as glass, some of the light is reflected and some of the light is refracted. The Fresnel equations describe this ratio, and Snell's Law defines the angle of the refracted ray. You will need to define a refractive index for your object: air has a refractive index very close to 1, and most glasses have a refractive index of around 1.5. Modify your shaders to implement refraction and make the object look like glass.

Importantly, here we are not concerned with physically accurate refraction. Given a ray incoming to the surface of our object, generate the refracted ray and assume it then goes off into the world and is not refracted again (as if the ray stays in glass until it reaches 'the world'). To be correct, we would again need to reflect/refract the ray when it exits the object, but you are not required to do this. We can produce something that looks convincing by performing just one refraction.
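A minimal fragment-shader sketch, assuming the same eye-space varyings as before and using Schlick's approximation to the Fresnel equations (the 0.04 base reflectance for glass is illustrative):

    uniform samplerCube envMap;
    varying vec3 eyeDir;   /* eye-space view vector from the vertex shader */
    varying vec3 normal;   /* eye-space normal */

    void main()
    {
        vec3  i   = normalize(eyeDir);
        vec3  n   = normalize(normal);
        float eta = 1.0 / 1.5;        /* air (1.0) into glass (1.5) */

        vec3 refr = refract(i, n, eta);
        vec3 refl = reflect(i, n);

        /* Schlick: F = F0 + (1 - F0) * (1 - cos(theta))^5 */
        float cosTheta = max(dot(-i, n), 0.0);
        float F = 0.04 + 0.96 * pow(1.0 - cosTheta, 5.0);

        /* Blend refraction and reflection by the Fresnel ratio. */
        gl_FragColor = mix(textureCube(envMap, refr),
                           textureCube(envMap, refl), F);
    }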

Once complete, your code should produce something like the following images:

As this effect is only an approximation, which of the objects do you think produces the most convincing glass material? Which do you think produces the least convincing glass material? Why?


Task 5:

What if the refractive index of your object was not consistent? Have you ever peered through an old pane of glass and seen the world distort as you move your head? Implement 'bumpy' refraction to recreate this effect (left image). Use an appropriate example to show off this effect.
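One possible approach, sketched below, is to perturb the normal with a noise texture before refracting ('noiseTex' and the 0.1 bump strength are illustrative assumptions, not part of the framework):

    uniform samplerCube envMap;
    uniform sampler2D   noiseTex;  /* assumed tileable RGB noise texture */
    varying vec3 eyeDir, normal;   /* eye-space, as before */

    void main()
    {
        /* Unpack [0,1] noise to [-1,1] and bend the normal with it. */
        vec3 bump = texture2D(noiseTex, gl_TexCoord[0].st).rgb * 2.0 - 1.0;
        vec3 n    = normalize(normalize(normal) + 0.1 * bump);

        vec3 refr = refract(normalize(eyeDir), n, 1.0 / 1.5);
        gl_FragColor = textureCube(envMap, refr);
    }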

Different wavelengths of light are refracted by different amounts, producing dispersion (or chromatic aberrations - right image). How could this effect be implemented in a shader?
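One cheap way is to refract each colour channel with a slightly different index (a sketch; the per-channel indices are illustrative):

    uniform samplerCube envMap;
    varying vec3 eyeDir, normal;

    void main()
    {
        vec3 i = normalize(eyeDir);
        vec3 n = normalize(normal);
        /* A slightly different refractive index per channel. */
        float r = textureCube(envMap, refract(i, n, 1.0 / 1.50)).r;
        float g = textureCube(envMap, refract(i, n, 1.0 / 1.52)).g;
        float b = textureCube(envMap, refract(i, n, 1.0 / 1.54)).b;
        gl_FragColor = vec4(r, g, b, 1.0);
    }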

Looking at the Venus De Milo inspiration video in the references, we can see differences between the video and our result. What differences still remain, and how could these effects be achieved?


Write-up:

Please write a short report on your work, detailing how you solved each part of the coursework. Be sure to answer the questions that are in bold within this document. Include all relevant code, along with screenshots to demonstrate your solution. Make sure that your report shows examples of all the required simulation effects; this may require side-by-side comparison shots with highlighting. The coursework is assigned on January 23rd, and submissions are due on Wednesday, February 6th at 23:55, electronically through Moodle. The report must be in PDF format; do not upload other document formats, such as Microsoft Word or OpenOffice.

References

OpenGL 2.1 Reference Documentation
GLSL Quick Reference Card
Venus De Milo inspiration!
Many cubemaps!

Arch cubemap courtesy of Paul Bourke.


Coursework 2: Monte-Carlo Path Tracing

Coursework 2 will be assigned on February 6th. Submissions are due on February 25th at 23:55.

The aim of this coursework is for you to obtain a working understanding of advanced global illumination, Monte-Carlo methods and the rendering equation. You will take a working ray tracer and modify it to produce a working path integral solution to the rendering equation: a path tracer.

This website, along with these notes, describes your task and attempts to provide the information and references you will need to complete the coursework. You will find it difficult to complete this coursework without delving into the literature; fortunately, there is a lot of good material around. See what Matt Pharr would recommend as suggested reading. The notes cover the essential aspects of the coursework and collect the equations you will need in a single place. They cannot replace the papers entirely, but we recommend you start with them.

If you have any questions, please email:


Tasks

  1. A crucial difference between a path tracer and a ray tracer is that a path tracer must shoot many rays per pixel, since it is potentially sampling difficult integrals over the area of the pixel. Extend the ray tracer to shoot N rays through each pixel using jittering and stratification (a sketch appears after this task list). Why does the one-ray version have salt-and-pepper noise? How could you make the sampling adaptive?
    Function:  LitScene::renderPixel
    Marks:      10%

  2. Traditional ray tracing typically only supports point light sources; however, these are physically implausible and cause images to exhibit hard shadows. One of the hallmarks of path tracing is that it correctly accounts for umbras and penumbras due to area light sources. Modify the direct lighting calculations to add support for area light sources. As an example, sphere sampling is given in Sphere::sample. You need to add polygon sampling to Polygon::sample, i.e., you will need to add sampling for triangular and rectangular polygons. Use this paper as a reading reference. Note: the Shirley paper has a typo in the PDF expressions for rectangular and triangular luminaires; refer to the notes for the correct formulation. How could you add support for general polygon luminaires?
    Function:  Polygon::sample
    Marks:      15% for triangle and quadrilateral sampling

  3. Add support for sampling a BRDF. When a surface is struck by a path, a reflected ray, along with the probability of that ray, must be generated in order to continue the path. Lambertian (diffuse) BRDF sampling is given in lambertianBRDF::reflection; you need to add modified Phong BRDF sampling to phongBRDF::reflection (see this paper, and the sketch after this task list). Modified Phong sampling has both diffuse and specular parts: a sample is randomly taken (weighted by the diffuse and specular coefficients) from either the diffuse part (as in lambertianBRDF::reflection) or from within the specular lobe. Note: y and z are flipped between the notes and the code. Pay attention to the local/world normal transformation, and to where the normal is for specular rays.
    Function:  phongBRDF::reflection
    Marks:      20% for modified Phong reflection

  4. Add support for evaluating the BRDFs at a surface point. When the path has terminated, the amount of radiance reflected back towards the viewer at each surface interaction must be evaluated. BRDF evaluation for a Lambertian (diffuse) material is given in lambertianBRDF::brdf; you need to add BRDF evaluation for a modified Phong BRDF in phongBRDF::brdf (see the sketch after this task list). Note: 'n' in the notes is the 'm_k' material coefficient in the code. Take care with ρd/ρs and 'm_kd'/'m_ks' also.
    Function:  phongBRDF::brdf
    Marks:      20% for modified Phong BRDF

  5. Put it all together to form paths that sample all the integrals: pixels, direct lighting and BRDFs.
    Function:  LitScene::tracePath
    Marks:      20%

  6. Add unbiased path termination (Russian roulette) (see this paper, and the sketch after this task list) OR apply importance sampling to another integral beyond the Phong BRDF (explore this SIGGRAPH course for inspiration). For whichever option you choose to implement, describe how the other could be implemented.
    Marks:      5%

  7. Make your own scene. Marks will be given for scenes which demonstrate the advantages/disadvantages of path tracing.
    Marks:      10%


  8. The report: Create a zip file containing a short PDF report that details exactly what you have done and how you have done it. Please include result images for each part that you have completed. Please also comment your code changes and include the code in the zip file. Be sure to answer any questions that are in bold. Submit to Moodle by 25th February at 23:55.
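
The sketches referenced from tasks 1, 3, 4 and 6 follow. They are minimal sketches under stated assumptions, not the framework's actual code: apart from the function names given in the tasks, the helpers (uniform01, cameraRay, sampleAroundAxis), member names (m_colour) and signatures are ours, and the framework's Colour, Vector and Ray types are assumed to support the usual arithmetic operators.

Task 1, stratified and jittered pixel sampling:

    #include <cstdlib>

    // Uniform random number in [0, 1); rand() is crude but sufficient here.
    static double uniform01() { return rand() / (RAND_MAX + 1.0); }

    // Shoot n*n rays through pixel (px, py): one jittered sample per stratum
    // of an n-by-n grid over the pixel, then average the radiance estimates.
    Colour LitScene::renderPixel(int px, int py, int n)
    {
        Colour sum(0.0, 0.0, 0.0);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                double sx = px + (i + uniform01()) / n;   // stratum + jitter
                double sy = py + (j + uniform01()) / n;
                Ray ray = cameraRay(sx, sy);              // assumed helper
                sum += tracePath(ray);
            }
        return sum / double(n * n);
    }

Tasks 3 and 4, the modified Phong BRDF (after Lafortune and Willems): the BRDF is ρd/π + ρs (n+2)/(2π) cos^n α, where α is the angle between the outgoing direction and the mirror reflection of the incoming one. Sampling picks a lobe with probability proportional to m_kd and m_ks:

    #include <cmath>

    const double PI = 3.14159265358979;

    // Assumed helper: a direction at polar angle theta, azimuth phi about
    // 'axis', built from any orthonormal basis around 'axis'.
    Vector sampleAroundAxis(const Vector& axis, double theta, double phi);

    // Task 3: sample a reflected direction and report its pdf.
    Vector phongBRDF::reflection(const Vector& in, const Vector& normal,
                                 double& pdf) const
    {
        double u1 = uniform01(), u2 = uniform01();
        double pd = m_kd / (m_kd + m_ks);    // probability of the diffuse lobe

        if (uniform01() < pd) {
            // Cosine-weighted sample about the normal; pdf = cos(theta)/pi.
            double theta = acos(sqrt(u1));
            pdf = pd * cos(theta) / PI;
            return sampleAroundAxis(normal, theta, 2.0 * PI * u2);
        }
        // Cosine-lobe sample about the mirror direction;
        // pdf = (n+1)/(2*pi) * cos^n(theta). 'in' points towards the surface.
        // A full solution must also handle samples below the surface.
        Vector mirror = in - 2.0 * dot(in, normal) * normal;
        double theta  = acos(pow(u1, 1.0 / (m_k + 1.0)));
        pdf = (1.0 - pd) * (m_k + 1.0) / (2.0 * PI) * pow(cos(theta), m_k);
        return sampleAroundAxis(mirror, theta, 2.0 * PI * u2);
    }

    // Task 4: evaluate the modified Phong BRDF for directions 'in'
    // (towards the surface) and 'out' (away from it).
    Colour phongBRDF::brdf(const Vector& in, const Vector& out,
                           const Vector& normal) const
    {
        Vector mirror = in - 2.0 * dot(in, normal) * normal;
        double cosA   = dot(mirror, out) > 0.0 ? dot(mirror, out) : 0.0;
        double diff   = m_kd / PI;
        double spec   = m_ks * (m_k + 2.0) / (2.0 * PI) * pow(cosA, m_k);
        return m_colour * (diff + spec);     // m_colour: assumed member
    }

Task 6, Russian roulette, amounts to a few lines at each bounce of tracePath: continue the path with some probability q and divide the continued contribution by q, which keeps the estimator unbiased while giving paths a finite expected length:

    // Inside the bounce step of tracePath; 'direct' and 'indirect' are
    // illustrative local variables holding the radiance estimates.
    const double q = 0.8;           // continuation probability (illustrative;
                                    // it can also be tied to surface albedo)
    if (uniform01() >= q)
        return direct;              // terminate: contribute direct light only
    return direct + indirect / q;   // survivors are boosted by 1/q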

Code

The ray tracing framework code is available here (DOWNLOAD UPDATED DEBUG LIBRARY HERE and DOWNLOAD UPDATED RELEASE LIBRARY HERE). It is set up to work correctly in Visual Studio, and we strongly recommend that you complete the coursework using this setup. Download and extract the zip, and open 'net2005/cw2.sln'. If you are using Visual Studio 2008/2010, it may ask to convert the solution file; let it do this. The code can be compiled by pressing 'F7' or using the 'Build' menu. Depending on which mode you compiled in, you will now have a 'pathtracer.exe' in either 'net2005/Release' or 'net2005/Debug'. If you do not know about Release/Debug modes and what they do, you can choose either one (the Release build will run significantly faster). The application takes one argument: the filename of the scene you wish to render. You can supply this on the command line, or by changing the 'Command Arguments' field in the Debugging section of the project Properties page. The different scene files are stored in the 'scenes' directory and should be passed as relative paths (e.g., ../scenes/touchingSpheres.dat). To test the framework, run the app with touchingSpheres.dat and you should see something like this:

In the code, there is one file (solution.cpp) containing a collection of empty functions that you need to fill in. Pressing 's' will capture a screenshot (to the working directory).

There are two variables in mainray.cpp that you will need to adjust: N_RAYS_PER_PIXEL and DISPLAY_SCALE. Use the first to adjust the number of rays per pixel, and the second to adjust the radiance scaling (to globally modify the brightness).

HTML documentation of the code is available here.

Algorithm flow:

The main function uses the GLUT idle loop to perform the path tracing. The idle loop calls LitScene::renderPixel once for each pixel on the camera image plane. LitScene::renderPixel should in turn generate a number of rays (your job!) and call LitScene::tracePath for each ray. Inside LitScene::tracePath you can use the BRDF reflection functions you have implemented to sample the radiance over the hemisphere at each surface point. Direct lighting evaluations should be performed with your polygon and sphere sampling functions, and BRDFs evaluated with the BRDF functions you will also write.

All the surface properties you need are already loaded into the BRDF object attached to each sphere or polygon. To access the BRDF for a given object, call Object->brdf(); this returns a pointer to a BRDF object (see brdf.h). Pointers to area light sources are stored in an array in the LitScene class: areaLightAt(i) will give you a pointer to an area light object (see litscene.h). Area light objects are really just ordinary sphere or polygon objects.
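Schematically, inside tracePath the access pattern looks like this (a fragment only; intersection bookkeeping is omitted, and 'hitObject' and 'numAreaLights' are illustrative names rather than the framework's):

    BRDF* brdf = hitObject->brdf();           // material at the hit point

    for (int i = 0; i < numAreaLights(); i++) {
        Object* light = areaLightAt(i);       // a sphere or polygon emitter
        // ...sample a point on 'light', cast a shadow ray, and accumulate
        // the direct lighting weighted by the BRDF evaluation...
    }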

We have supplied a LitScene::tracePath function which performs direct lighting only. This can be used to test that your pixel sampling and area light source sampling works correctly.

The provided framework loads scene files for you, and initialises and stores material properties. We also provide a set of scenes for you upon which to test your path tracer as you progress towards the final solution.


Our results - stratified and jittered sampling

touchingSpheres.dat (1 ray per pixel) touchingSpheres.dat (64 rays per pixel)

Our results - direct illumination

Direct lighting examples, no indirect lighting sampled, good for testing (#define MAX_PATH_BOUNCES 1). 64 rays per pixel, stratified and jittered.

touchingSpheres.dat Cornell_RectLight.dat Cornell_TriLight.dat Cornell_SphereLight.dat

Our results - diffuse

Diffuse scene with square area light source (rectLightCornell.dat).

1 ray per pixel 4 rays per pixel 64 rays per pixel 1024 rays per pixel

Notice the soft shadows and colour bleeding onto the sphere and box.

Diffuse scene with spherical area light source (sphereLightCornell.dat).

1 ray per pixel 4 rays per pixel 64 rays per pixel 1024 rays per pixel

The noise is worse when sampling the spherical emitter, probably because the dot product of the incident ray (sampling the emitter) with the surface normal of the sphere varies more than it does with the flat polygonal emitter.

Our results - modified Phong

Specular/Diffuse scene with rectangular area light source (phong.dat).
1 ray per pixel 4 rays per pixel 64 rays per pixel 1024 rays per pixel

 

Specular/Diffuse scene with rectangular area light source (phonggloss.dat).
1 ray per pixel 4 rays per pixel 64 rays per pixel 1024 rays per pixel

Coursework 3: Particle Simulation and Boids

Coursework 3 will be assigned on March 11th. Submissions are due on March 25th at 23:55. Electronic submission is through Moodle. The report must be in PDF format; do not upload other document formats, such as Microsoft Word or OpenOffice.

Please refer to the coursework description as PDF and download the corresponding code framework.

UPDATES:
You can find more information about Boids and flocks at the following links: