The schedule below shows the tentative dates for all class topics, readings, and assignments. You should complete all assigned reading before class on the day it is listed. Labs will be available shortly before the assigned lab day. The schedule may be revised during the semester, but I will announce any changes in class. If you view this page with JavaScript enabled, you can jump to the current week, and the next day of class will be highlighted below.
We’ll begin the course by discussing operating systems generally. What is an OS? What is its job? How does it work? We’ll also spend some time practicing C programming, which will be an important tool you’ll use to learn OS concepts this term.
We’ll spend a little time practicing C at the start of class. After that, you’ll complete an exercise learning to use a debugger to troubleshoot buggy programs.
No reading
Today we’ll discuss a key OS abstraction: the process. We’ll talk about why and how we use processes on Linux. We’ll spend the remainder of the day learning about some important functions available on all POSIX-compliant operating systems.
In today’s lab you will implement a shell, the program that runs in a terminal window. Shells make it possible for users to interact with an operating system, so building your own will give you a chance to practice writing code to communicate with the OS. You’ll also get to practice dealing with user input in C.
We’ll pick up the exercise originally scheduled for last Friday, in which we’ll practice using gdb to diagnose bugs in C programs.
Address spaces are an important abstraction that makes it possible for the OS to run processes in isolation. We’ll look at the high-level idea of an address space, learn about how you interact with address spaces in code, and discuss some of the basic mechanisms that an OS can use to implement address spaces.
Today we will look in detail at two real mechanisms that the OS uses to create address spaces.
Today’s lab will test your understanding of address spaces and the memory API. You’ll take advantage of Linux’s address space features to write some interesting and useful code.
One important use of virtual memory that goes beyond simply isolating processes from each other is swapping. This makes it possible for an OS to run programs that don’t fit in the amount of memory on the system. We’ll look at why this is useful and how it works.
We’ll conclude our discussion of virtual memory by looking at the complete VM system from malloc down to disk space. This broader view leads us to some interesting applications of virtual memory, as well as some important security issues.
Writing secure software is incredibly important, especially when you’re writing OS-level code in a language like C. We’ll work through three security exercises as a class today. This won’t make you an expert in secure programming with C, but hopefully you’ll think about the vulnerabilities we discuss when you find yourself writing important code in the future.
No reading
Today’s lab will be one of the most challenging of the semester. You’ll use your new understanding of virtual memory and the memory API to implement a memory allocator, the code that provides malloc and free for other programs.
Users expect file systems to keep their data safe, even when their computers crash. Today, we will look at how a file system can be designed to tolerate failures. We will also look at a newer file system design and explore some of the important systems ideas that come up in its implementation.
We’ve seen how we can support multiple programs running on a single machine with processes and address spaces, but how does the OS decide which one to run at any given time? This is the job of the CPU scheduler. We’ll look at a few scheduling algorithms and discuss their advantages and drawbacks.
Today’s lab will require that you use your new understanding of CPU scheduling to write a scheduler for a console game. The game, a clone of the classic Snake game, is composed of a series of tasks. You will build the system that tracks these tasks and executes them at the appropriate times.
While processes make it possible to run multiple programs on a single machine, sometimes we might like a single program to do multiple tasks at a time. Threads make it possible for a single process to run multiple operations concurrently. We’ll look at why threads are useful, how to create and interact with threads in Linux, and what makes thread programming particularly challenging.
We’ll use today’s class to continue our discussion of threads, answer questions about the worm lab, and catch up on anything else we’re behind on.
No reading
We’ll build on our understanding of threads from the previous class and look at how we can use locks to control concurrent accesses to data structures.
For today’s lab, you will solve an embarrassingly parallel problem using threads. An embarrassingly parallel problem is one that is easy to distribute over multiple threads, requiring minimal synchronization with locks.
I’ve shifted our planned topics for Thursday and Friday to next week. We’ll take a short detour to explore a different kind of parallel programming on GPUs at the beginning of next week, then return to thinking about concurrency control mechanisms and the bugs they help us fix at the end of the week.
No reading
No reading
Today we will learn how to use graphics processing units (GPUs) to write parallel programs that, when carefully designed, can run tens or hundreds of times faster than parallel programs that use threads on conventional processors.
We’ll continue our introduction to GPUs with a second in-class exercise.
This week’s lab will require you to implement a parallel computation that can run on a GPU. This computation will be part of a larger system that uses the GPU as a co-processor, a common model for modern workloads.
While locks are important for guaranteeing mutual exclusion, they aren’t the only tool available for controlling concurrency. We’ll look at two additional concurrency control primitives today and see how they can help us write interesting concurrent programs.
Today we will look at the kinds of bugs that concurrent programs can have, and think about how to design a concurrent program to avoid these bugs.
One of the most interesting and challenging problems in computer science is designing and implementing systems that work reliably across multiple machines. We’ll look at what makes this problem difficult and explore some of the interesting techniques that make it possible to build distributed systems that work well. If time allows, we will start an exercise that will introduce some basic network programming concepts.
We’ll conclude this term by looking back at a particularly influential systems project—the UNIX operating system. Dennis Ritchie wrote an interesting description of some of the basic features of UNIX and how they came about. We’ll combine that with Richard Gabriel’s commentary on two distinct approaches to system building, and how they fare over time.
No reading