Raytracer

For this lab, you will use threads to speed up a simple raytracer, a program that uses a physically-inspired technique for rendering realistic 3D graphics. A raytracer works by simulating the movement of light from light sources to a viewing plane. You can think of the viewing plane as a transparent rectangle you hold in front of your face. Sending rays from light sources out into the scene would be very inefficient; most of these rays wouldn’t come anywhere near your eye. Instead, a raytracer works by running rays backwards: starting at your eye, going out into the scene, and toward light sources. This path may include reflections off of multiple objects in the scene, which makes it possible to realistically render reflective surfaces.

One big advantage of a raytracer is that every pixel of the resulting image can be produced independently. That means it is well suited to concurrency (sometimes we call problems like this “embarrassingly parallel”). You can use threads to raytrace many points at once, wait for these threads to finish, then move on to the next chunk of points.

During the course of this lab, you’ll be learning some very basic C++. Specifically, this project uses C++’s operator overloading to provide a convenient class that represents three-dimensional vectors. The project also uses other C++ standard library features like std::vector (which is similar to Java’s ArrayList) and some additional classes to make the code a bit cleaner. If you run into some C++ code you don’t understand, or you are interested in using C++ features for your component of this lab, cppreference.com is a good resource for the C++ standard library. You’ll also work with git branching and merging as you incorporate features into the raytracer and make your own contributions.

Program Organization

This section describes the basic organization of the provided code. For this project, we are using the .cc and .hh file extensions for C++ source and header files, respectively. The suffixes .cpp and .hpp are also common, but sometimes you’ll see C++ include files that just end in .h.

main.cc: This file contains the main function and the basic raytracing code. This program uses SDL for graphics, but the details are hidden inside the gui class. Other than main, there are two key functions in this file. The init_scene function populates a scene with 3D objects, and raytrace follows a single ray through the scene to determine what it intersects and therefore what color it should be. This is the file where you should make all of your changes.

bitmap.hh: This C++ include file defines the bitmap class. This class represents an image as an array of RGB colors. Your raytracing code generates colors as vectors of three floating point numbers, and this class converts them to RGB colors with components between 0 and 255. This class also includes code for copying a bitmap into the display. You do not need to change this file.

geom.hh: This C++ include file defines the shape abstract class, which serves as the basis for the sphere and plane classes that are also implemented in this file. These classes use mathematical descriptions of three dimensional shapes to determine where rays intersect with shapes in the scene. This file also provides a viewport class, which takes care of calculating the appropriate ray directions for pixels in the scene. You do not need to change this file.

gui.hh: This C++ include file defines the gui class, which hides some of the setup and management of an SDL window. You can pass a bitmap to the display(bitmap bmp) function to display that bitmap on the screen. Note that there is an alternate version that takes coordinates and a size to display a bitmap on just part of the screen. You do not need to change this file.

util.hh: This C++ include file defines the familiar time_ms() and wait_ms(size_t) functions we’ve used before. These functions are marked as static, which means something different here than “static” does in Java: a static function is private to the source file that includes it, so the same definition can safely appear in multiple source files. Without this keyword, you would get a “multiple definitions” error when the program is linked. You do not need to change this file.

vec.hh: This C++ include file defines a vec class, which represents a three-dimensional vector. This class makes extensive use of “operator overloading,” which allows you to define operators like +, -, *, and so on for your own classes. This class is heavily used in raytracing, and most of the tricky geometry is hidden inside this class instead of spread throughout the raytracing code. You do not need to change this file.
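To give a sense of what operator overloading looks like, here is a tiny illustration of the idea. It is far simpler than the real vec class, and the names are purely illustrative.

// A minimal illustration of operator overloading (much simpler than vec.hh)
struct demo_vec {
  float x, y, z;

  // Define + so two vectors can be added with ordinary syntax
  demo_vec operator+(const demo_vec& other) const {
    return demo_vec{x + other.x, y + other.y, z + other.z};
  }

  // Define * so a vector can be scaled by a float
  demo_vec operator*(float s) const {
    return demo_vec{x * s, y * s, z * s};
  }
};

// With these definitions, code like the following just works:
//   demo_vec midpoint = (a + b) * 0.5;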

Getting Started

Before we get to the challenging part of this lab, you’ll need to familiarize yourself with the provided code. The following five steps walk you through merging rendering features into the raytracer program, along with a high-level overview of how the code works.

The initial functionality of the provided raytracing code: flat colored objects rendered in perspective

Begin this lab as we usually do, by creating a fork of the GitHub repository at github.com/grinnell-cs/213-raytracer. Don’t forget to add your lab partner as a collaborator. When you clone your fork of the repository, you get a very basic implementation of the 3D geometry and drawing functionality, but none of the nice features that make a raytracer produce high quality images. This initial code produces a scene like the one to the right.

To build and run the program, type the commands:

$ make
$ ./raytracer

The way this program works involves a little geometry, but the tricky parts are already done for you. At a high level, the program performs a simple intersection test for each pixel on screen: the raytracer creates a vector to represent a ray that goes from the eye position, through that pixel, and into the scene. If the closest object this ray hits is the red sphere, the pixel is red; if the closest object is the gray surface under the spheres, it is gray. This doesn’t give us any reflections, lighting, or shadows, but it does project a three-dimensional scene onto a two-dimensional surface.
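Expressed as a sketch in C++, that test looks roughly like the code below. The type and method names here are assumptions for illustration, not the exact API in main.cc and geom.hh.

// Sketch of the "closest hit" test for one ray (illustrative names only)
vec color_for_ray(vec origin, vec direction) {
  shape* closest = nullptr;
  float closest_distance = 1e30f;  // "infinitely" far away to start

  // Check every object in the scene and remember the nearest intersection
  for (shape* s : scene) {
    float distance;
    if (s->intersects(origin, direction, &distance) &&
        distance < closest_distance) {
      closest_distance = distance;
      closest = s;
    }
  }

  // No hit? Use a background color. Otherwise, use the object's color.
  if (closest == nullptr) return vec(0, 0, 0);
  return closest->get_color();
}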

Step 1: Oversampling

Aliasing, or “jaggies,” is an unpleasant side effect of this simple raytracing approach

This basic rendering approach has one big drawback: aliasing, also known as "jaggies." You can see in the image to the left that the edges of objects in the scene have rough, pixelated boundaries, even though these are created by perfectly smooth shapes described by mathematical equations. While we're stuck with pixels (you can't turn on half a pixel), you can reduce the effect of aliasing by oversampling the image.

Oversampling softens edges. This image was created by oversampling in a 3x3 grid.

For each pixel in the output image, we send multiple rays into the scene at slightly different angles, then average the resulting colors. If most of the rays hit the red sphere instead of the gray background, the color will be mostly (but not completely) red. This will result in a smooth transition between objects in the rendered scene. The image to the right performs 3x3 oversampling, meaning every pixel is the average color of nine rays cast into the scene evenly distributed over the range of angles that pass through that pixel.

Lucky for you, the code to perform oversampling (also known as “antialiasing”) is already written and available in a branch on GitHub. Branches are a common method for working collaboratively on a shared git repository. Typically you create a branch off of the master branch, make a series of commits to implement a specific feature, then merge those changes back into the master branch.

You can look at the code changes directly on the original GitHub repository. Adding this feature just requires looping over a range of x and y offsets, invoking the raytracer for each, then averaging these colors at the end.
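In rough outline, the per-pixel loop becomes something like the sketch below. The viewport call and the constants here are illustrative stand-ins, not the exact code in the branch.

// Sketch of 3x3 oversampling for a single pixel (illustrative names only)
vec color(0, 0, 0);
for (int y_step = 0; y_step < 3; y_step++) {
  for (int x_step = 0; x_step < 3; x_step++) {
    // Offset the sample within the pixel: -1/3, 0, and +1/3 of a pixel
    float x_offset = (x_step - 1) / 3.0;
    float y_offset = (y_step - 1) / 3.0;

    // Cast a ray through this sub-pixel position and accumulate its color
    vec direction = view.dir(x + x_offset, y + y_offset);
    color = color + raytrace(origin, direction);
  }
}
// The pixel's final color is the average of the nine samples
color = color * (1.0 / 9);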

The full scene rendered with 3x3 oversampling

By default, git will not pull remote branches to your working copy. You can view all of the local branches by running git branch (there should only be one called master so far). To view all remote branches, run git branch -a. To check out the branch that implements oversampling, run the command:

$ git checkout -b step1-oversampling origin/step1-oversampling

Now that you have this branch copied to your local working copy, switch back to the master branch with the command:

$ git checkout master

Now you can merge the changes from the oversampling branch with the command:

$ git merge step1-oversampling

This applies all of the branch’s commits to your master branch. Run git push to send these commits to GitHub. When you run the raytracer now, you should get a cleaner image like the one to the right. Don’t forget to run make again before starting ./raytracer.

A frame from the animated version of the scene

Step 2: Animation

The next step in building up our raytracer is to add animation. This is surprisingly simple! The run_raytracer function already renders the same frame repeatedly in a loop. Take a look at the commit that adds animation on GitHub. All this commit changes is the camera’s position and orientation, which now rotate around the y-axis. The amount of rotation is determined by the time that has elapsed since the program started.
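Conceptually, the change boils down to something like the sketch below. The constant names and helper calls are illustrative, not the exact code in the commit.

// Sketch of rotating the camera around the y-axis over time (illustrative)
float elapsed = time_ms() - start_time;      // milliseconds since startup
float angle = elapsed * RADIANS_PER_MS;      // rotation grows with elapsed time

// Move the camera around a circle of radius R, always at the same height,
// and point it back toward the center of the scene
vec camera_position(R * sin(angle), CAMERA_HEIGHT, R * cos(angle));
vec camera_direction = (scene_center - camera_position).normalized();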

You can merge in the changes from the step2-animation branch all in one command instead of checking out the branch first. To do this, run:

$ git merge origin/step2-animation

Go ahead and run the program. Odds are the animation is a little jerky, and we’re not even done adding features yet. You may have guessed already, but your objective for this lab is to speed up the raytracer using concurrency. But, before we get to that we have a few more steps to work through.

The same scene with reflections enabled

Step 3: Reflections

Part of the promise of raytracing was that we would simulate realistic behavior of rays of light. When a ray of light hits a surface, it reflects off that surface; that’s why you can see it. The branch called step3-reflections implements simple raytracing reflections; merge this branch with master in your repository. All that is required to reflect one object in another is to find the point where the ray hits the object (we’re already doing that), then bounce a new ray off the object at this point. This is pretty easy with a little vector math once you know the surface’s normal vector: a unit vector that points away from the surface, perpendicular to all of its tangent lines.

The changes in the GitHub commit implement this functionality, with one notable special case: if we start a new ray at the point where our original ray meets the object, it may immediately collide with that object again. That would create odd black splotches on our shapes depending on floating point rounding error. To resolve that issue, we start the reflected ray a small distance (the constant EPSILON) away from the surface.

Given a new ray starting point and new ray direction, we just recursively call the raytrace function and mix the resulting color with the color of the object that is reflecting this ray. The implementation bounds the number of recursive rays we draw to 10 to make sure the raytracing will terminate. The resulting image (right) is fairly dark because we’re relying solely on the ambient light in the scene to illuminate everything, and our spheres reflect the dark surface they are resting on. If you zoom in, you can see that the spheres show reflections of each other, though!
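Put together, the reflection step looks roughly like the sketch below, assuming raytrace takes a recursion depth and vec has a dot product; the real names in the commit may differ.

// Sketch of spawning a reflected ray (illustrative names only)
if (depth < 10) {
  // Reflect the incoming direction about the surface normal: r = d - 2(d.n)n
  vec reflected = direction - normal * (2 * direction.dot(normal));

  // Start the new ray EPSILON away from the surface so it doesn't
  // immediately re-hit the same object due to rounding error
  vec new_origin = hit_point + reflected * EPSILON;

  // Recursively trace the reflected ray and blend it into this point's color
  vec reflection = raytrace(new_origin, reflected, depth + 1);
  color = color * (1 - reflectivity) + reflection * reflectivity;
}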

Basic diffuse lighting

Step 4: Lighting

The next step in drawing a realistic scene is to illuminate the scene with our two light sources (in the lights vector, which works quite a bit like an ArrayList in Java). Whenever a ray hits a surface, we draw a line from that point to a light source. The closer this direction is to the surface normal, the more light reaches the point. Specifically, the brightness is proportional to the cosine of the angle between the surface normal and the vector pointing toward the light source. This ideal illumination is known as Lambertian Reflectance (wikipedia). For every ray, we repeat this process for each light source, adding up their contributions to the illumination of the point.

Another detail to consider is shadows: if an object sits between a ray’s intersection point and a light source, it should block that light source (by casting a shadow). Implementing shadows is surprisingly simple; we already have code to determine which scene object a ray intersects. We just use this code to check whether any objects are between the intersection point and the light source; if the path is clear, the light source illuminates that point, and otherwise it does not.
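The structure of the lighting and shadow code is roughly the sketch below. The light fields and the blocking test are illustrative stand-ins for what the branch actually does.

// Sketch of diffuse lighting with shadows (illustrative names only)
for (const light& l : lights) {
  vec to_light = (l.position - hit_point).normalized();

  // Shadow test: if any scene object sits between this point and the light,
  // this light contributes nothing here
  if (path_is_blocked(hit_point + to_light * EPSILON, l.position)) {
    continue;
  }

  // Lambertian reflectance: brightness scales with the cosine of the angle
  // between the surface normal and the direction toward the light
  float diffuse = std::max(0.0f, normal.dot(to_light));
  color = color + object_color * (diffuse * l.intensity);
}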

Merge in the step4-lighting branch to add this functionality. Look closely at how lighting is implemented; you will be surprised how simple it is!

The same scene with reflective highlights

Step 5: Highlights

Finally, our objects are somewhat reflective, but they don’t really look reflective. We can fix their flat appearance by adding highlights to the lighting. This just adds a bright white point in the area where the light hits the surface directly. You can see in the GitHub commit that the computation is very simple, but the effect is striking. Merge the step5-highlights branch into your master branch.
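The core of that computation is roughly this, again with illustrative names:

// Sketch of a specular highlight (illustrative names only)
// The highlight is brightest where the reflected ray points directly at the
// light and falls off sharply as the two directions diverge
float alignment = std::max(0.0f, reflected.dot(to_light));
float highlight = std::pow(alignment, SHININESS);  // a large exponent keeps the spot small
color = color + vec(1, 1, 1) * (highlight * l.intensity);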

One nice side effect of raytracing is that these enhancements are complementary. Not only do we have highlights on objects, we can also see reflections of those highlights on the surfaces of other objects.

Hopefully now that we’ve walked through each of the steps in building this program, you will understand it well enough to improve it. You don’t need to understand all the details of raytracing, but you should have a sense of what different parts of the program are doing.

Your Objective: Performance

Now that you’ve applied all five of the raytracing features, this program produces (I think) pretty nice images considering the whole program is under 900 lines of code. Unfortunately, it’s too slow for our animation to run smoothly. Your job is to speed the raytracer up as much as you can using threads. That means you’ll need to figure out what parts of this task you can do in parallel, and how you should use the limited amount of parallelism available on the lab machines.

You are free to use either the C style pthreads API or the C++ threads interface; whichever one you prefer is perfectly fine. There should be no difference in performance between the two.
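As one possible starting point, a common pattern is to split each frame into horizontal bands, render each band in its own thread, and join them all before displaying the result. The sketch below uses C++ threads; render_rows, HEIGHT, and NUM_THREADS are placeholders for whatever your own code provides, and this is just one of many ways to divide the work.

#include <functional>
#include <thread>
#include <vector>

// Sketch: render one frame by splitting rows across a fixed number of threads.
// render_rows(bmp, start, end) is a placeholder for your per-row rendering code.
void render_frame(bitmap& bmp) {
  const size_t NUM_THREADS = 4;  // tune this to the machine's core count
  std::vector<std::thread> workers;

  size_t rows_per_thread = HEIGHT / NUM_THREADS;
  for (size_t i = 0; i < NUM_THREADS; i++) {
    size_t start = i * rows_per_thread;
    size_t end = (i == NUM_THREADS - 1) ? HEIGHT : start + rows_per_thread;

    // Each thread fills in its own band of rows; since no two threads write
    // the same pixels, no locking is needed for the bitmap itself
    workers.emplace_back(render_rows, std::ref(bmp), start, end);
  }

  // Wait for every band to finish before displaying the frame
  for (std::thread& t : workers) t.join();
}

Whether bands, interleaved rows, or some other division of labor works best is exactly the kind of tradeoff your README should discuss.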

Hint 1: Try turning features on and off to see how they impact performance. If turning off a feature doesn’t make the animation smoother, then making it a little faster with parallelism probably won’t help either.

Hint 2: Think about amortization; there is a cost associated with creating and switching between threads. If you make many more threads than your machine can run at one time, you’ll spend a lot of time context switching and not as much time doing useful work. The same goes for short-lived threads: if you don’t give a thread enough to do before it quits, it will spend much of its lifetime starting up and shutting down.

Requirements

Because machines vary and performance can depend on quite a few factors, there is no specific performance requirement for this lab. You should be able to get the video to run smoothly with all of the features enabled, but talk to me if the drawing is still slow and you are out of ideas.

You must comment your code and explain your changes! How did you decide which part of the raytracer to run in parallel? How much parallelism will you use? Make these decisions clear in your git commits and document your work in the README.md file in the repository. You should expect to write at least three paragraphs explaining how you decided what to parallelize and how you actually accomplished that. I’m also eager to read about any false starts you may have encountered along the way.

Extra Credit

There are many other interesting enhancements you can add to a raytracer, many of which expose additional opportunities for parallelism. Some possible options include soft shadows, motion blur, or depth of field simulation, all of which can be implemented with Distributed Ray Tracing (wikipedia). These features rely on turning one ray into many rays to create a more realistic effect, but this creates an exponential increase in the amount of work required to draw each pixel.
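For example, soft shadows can be approximated by treating each light as a small area rather than a point: cast several shadow rays toward jittered points on the light and average how many get through. A rough sketch, again with illustrative names:

// Sketch of soft shadows via distributed ray tracing (illustrative names)
float lit_samples = 0;
for (int i = 0; i < SHADOW_SAMPLES; i++) {
  // Jitter the shadow ray's target to a random point on the area light
  vec target = l.position + random_offset(l.radius);
  if (!path_is_blocked(hit_point + to_light * EPSILON, target)) {
    lit_samples += 1;
  }
}
// A value between 0 (fully shadowed) and 1 (fully lit)
float shadow_factor = lit_samples / SHADOW_SAMPLES;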

To earn extra credit, implement one of these features (or another feature you identify) and incorporate it into your “parallelization plan.” That may mean you would try to run the feature in parallel and discover it does not help performance, or perhaps you might change your technique for parallelization to improve performance with the new feature in place.

It is okay if you add a new feature and the animation no longer runs smoothly when this feature is active. Please add your feature in a separate branch called extra-credit so I can see both your original and improved raytracers.