The schedule below shows the tentative dates for all class topics, readings, and assignments. You should complete all assigned reading before class on the day it is listed. Labs will be available shortly before the assigned lab day. There may be some revisions to the schedule during the semester, but I will announce any changes in class. If you view this page with JavaScript enabled, you can jump to the current week on the schedule, and the next day of class will be highlighted below.
We’ll begin the course by discussing operating systems generally. What is an OS? What is its job? How does it work? We’ll also spend some time thinking about how we learn and the elements of this course that are meant to facilitate your learning. You’ll be writing a lot of code in C for this course, so we’ll also take some time to practice C and discuss the standards you’ll be expected to follow for assignments and labs in this class.
Today we will practice using gdb to track down bugs in C programs.
Today we’ll discuss a key OS abstraction: the process. We’ll talk about why and how we use processes on Linux.
In today’s lab you will implement a shell, the program that runs in a terminal window. Shells make it possible for users to interact with an operating system, so building your own will give you a chance to practice writing code to communicate with the OS. You’ll also get to practice dealing with user input in C.
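The heart of any shell is a small loop: read a command, fork a child, exec the program in the child, and wait in the parent. The sketch below shows that core under a big simplifying assumption (single-word commands, no argument splitting); real shells add parsing, built-ins, pipes, and job control on top of it. The function names are mine, not the lab's.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run one command in a child process and return its exit status. */
int run_command(const char *cmd) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                      /* fork failed */
    if (pid == 0) {
        execlp(cmd, cmd, (char *)NULL); /* replace child with the command */
        _exit(127);                     /* only reached if exec fails */
    }
    int status;
    waitpid(pid, &status, 0);           /* parent blocks until child exits */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

/* The read-eval loop: prompt, read a line, run it, repeat. */
void shell_loop(void) {
    char line[256];
    while (printf("> "), fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
        if (line[0] != '\0')
            run_command(line);
    }
}
```

The fork/exec/wait split is what lets a shell do more than just launch programs: between fork and exec, the child can redirect file descriptors or set up pipes before the command starts.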
Address spaces are an important abstraction that makes it possible for the OS to run processes in isolation. We’ll look at the high-level idea of an address space and learn about how you interact with address spaces in code. We’ll also take some time to look at different types of mistakes you can make when dealing with memory.
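To preview the kinds of memory mistakes we'll discuss, here is a small, correct heap-allocating function whose comments name the classic errors it avoids (the function itself is just an illustrative example):

```c
#include <stdlib.h>
#include <string.h>

/* Return a heap-allocated copy of s, or NULL on failure. */
char *duplicate(const char *s) {
    size_t n = strlen(s);
    char *copy = malloc(n + 1);    /* forgetting the +1 for '\0' is a
                                      classic off-by-one heap overflow */
    if (copy == NULL)
        return NULL;               /* an unchecked malloc leads to a
                                      NULL-pointer dereference */
    memcpy(copy, s, n + 1);
    return copy;                   /* the caller must free() this exactly
                                      once: never is a leak, twice is a
                                      double free, and using it after
                                      free() is a dangling pointer */
}
```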
Today we will look in detail at two real mechanisms that the OS uses to create address spaces.
Today’s lab will test your understanding of address spaces and the memory API. You’ll take advantage of Linux’s address space features to write some interesting and useful code.
One important use of virtual memory that goes beyond simply isolating processes from each other is swapping, which makes it possible for an OS to run programs that don't fit in the machine's physical memory. We'll look at why this is useful and how it works.
Today’s lab will be one of the most challenging of the semester. You’ll use your new understanding of virtual memory and the memory API to implement a memory allocator, the code that provides malloc and free for other programs.
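To give a feel for the problem, here is the simplest possible malloc-like interface: a toy bump allocator over a fixed buffer. It only ever moves a cursor forward, so it can never reuse freed memory; the lab allocator will need real bookkeeping (headers and a free list) so that `free` actually reclaims space. This is my sketch, not the lab's starter code.

```c
#include <stddef.h>

static unsigned char heap[4096];   /* a fake "heap": one fixed buffer */
static size_t used = 0;            /* how many bytes are handed out */

/* Hand out the next `size` bytes, 8-byte aligned; NULL when exhausted. */
void *bump_alloc(size_t size) {
    size = (size + 7) & ~(size_t)7;            /* round up to 8 bytes */
    if (used + size > sizeof heap)
        return NULL;                           /* out of memory */
    void *p = &heap[used];
    used += size;                              /* cursor only moves forward:
                                                  there is no free() here */
    return p;
}
```

The gap between this and a real allocator is exactly what the lab is about: tracking block sizes, coalescing neighbors, and choosing which free block to hand out next.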
We’ll conclude our discussion of virtual memory by looking at how real systems use paging for large address spaces. We will explore how this mechanism works, and then discuss the ways paging can fit into a larger system.
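The arithmetic at the core of paging is just bit manipulation: split a virtual address into a virtual page number and an offset, look up the frame for that page, and reattach the offset. The sketch below assumes 4 KiB pages and uses a plain array as a stand-in for a page table; multi-level tables split the VPN further into one index per level, but the arithmetic is the same.

```c
#include <stdint.h>

#define PAGE_SHIFT 12                      /* 4 KiB pages: 12 offset bits */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Virtual page number: the high bits of the address. */
uint64_t vpn(uint64_t vaddr)    { return vaddr >> PAGE_SHIFT; }

/* Offset within the page: the low 12 bits. */
uint64_t offset(uint64_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* Translate: look up the physical frame for the VPN (here a toy array
 * standing in for a page table), then reattach the offset. */
uint64_t translate(uint64_t vaddr, const uint64_t *frame_of_vpn) {
    return (frame_of_vpn[vpn(vaddr)] << PAGE_SHIFT) | offset(vaddr);
}
```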
Instead of starting a new lab, you will have the entire class to work with your lab group on the memory allocator lab we started last week.
Today we’ll begin looking at how an operating system can store users’ files and directories on a disk.
We’ve seen how we can support multiple programs running on a single machine with processes and address spaces, but how does the OS decide which one to run at any given time? This is the job of the CPU scheduler. We’ll look at a few scheduling algorithms and discuss their advantages and drawbacks.
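One metric we'll use to compare schedulers is average turnaround time. As a small worked example (my own, not from the reading), here is the computation for FIFO run-to-completion scheduling when all jobs arrive at time 0; running the same jobs sorted shortest-first on this function is a quick way to see why shortest-job-first improves the average.

```c
/* Average turnaround time under FIFO scheduling, all arrivals at t=0. */
double fifo_avg_turnaround(const int *run_ms, int n) {
    double total = 0, finish = 0;
    for (int i = 0; i < n; i++) {
        finish += run_ms[i];   /* job i finishes after all earlier jobs */
        total  += finish;      /* turnaround = finish time - arrival (0) */
    }
    return total / n;
}
```

For jobs of length 10, 20, 30 the finish times are 10, 30, 60, so the average turnaround is 100/3 ≈ 33.3; reverse the order and it rises to 140/3 ≈ 46.7.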
Today’s lab will require that you use your new understanding of CPU scheduling to write a scheduler for a console game. The game, a clone of the classic Snake game, is composed of a series of tasks. You will build the system that tracks these tasks and executes them at the appropriate times.
While processes make it possible to run multiple programs on a single machine, sometimes we might like a single program to do multiple tasks at a time. Threads make it possible for a single process to run multiple operations concurrently. We’ll look at why threads are useful, how to create and interact with threads in Linux, and what makes thread programming particularly challenging.
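The basic pthreads pattern looks like this: create a thread with a start function, pass it data through a `void *` pointer, and `join` to wait for it to finish. A minimal sketch (function names are mine):

```c
#include <pthread.h>

/* Thread start function: doubles the int its argument points to. */
void *worker(void *arg) {
    int *n = arg;
    *n = *n * 2;                 /* work on data shared with the creator */
    return NULL;
}

/* Spawn a thread to do the doubling, wait for it, return the result. */
int double_in_thread(int n) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, &n);  /* &n is shared with worker */
    pthread_join(tid, NULL);                 /* block until worker returns */
    return n;
}
```

Note that `worker` and `double_in_thread` share memory through a plain pointer; that sharing is exactly what makes threads powerful and what makes them dangerous once two threads touch the same data at once.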
We’ll build on our understanding of threads from the previous class and look at how we can use locks to control concurrent accesses to data structures.
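The canonical example is a shared counter. Without a lock, two threads' read-increment-write sequences can interleave and lose updates; with a mutex around the critical section, the total always comes out right. A sketch of the pattern:

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                      /* critical section: one thread
                                           at a time past this point */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two incrementing threads; with the lock this is always 200000. */
long run_two_adders(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, add_many, NULL);
    pthread_create(&b, NULL, add_many, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Deleting the lock/unlock pair turns this into a data race: the result becomes nondeterministic and usually less than 200000.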
We will continue practicing using threads and locks to write safe, concurrent programs in C.
While locks are important for guaranteeing mutual exclusion, they aren’t the only tool available for controlling concurrency. We’ll look at two additional concurrency control primitives today and see how they can help us write interesting concurrent programs.
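One of those primitives, the condition variable, lets a thread sleep until another thread announces that some condition holds, rather than spinning on a flag. Here is a one-shot "value is ready" handoff built from a mutex and a condition variable (semaphores can express the same pattern even more directly):

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int ready = 0;
static int value = 0;

static void *producer(void *arg) {
    pthread_mutex_lock(&m);
    value = 42;                     /* produce the result */
    ready = 1;
    pthread_cond_signal(&c);        /* wake the waiting thread */
    pthread_mutex_unlock(&m);
    return NULL;
}

/* Start a producer, sleep until it signals, return what it produced. */
int wait_for_value(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_mutex_lock(&m);
    while (!ready)                  /* a loop, not an if: guards against
                                       spurious wakeups */
        pthread_cond_wait(&c, &m);  /* atomically releases m while asleep */
    int v = value;
    pthread_mutex_unlock(&m);
    pthread_join(t, NULL);
    return v;
}
```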
For today’s lab, you will solve an embarrassingly parallel problem using threads. An embarrassingly parallel problem is one that splits naturally into independent pieces, making it easy to distribute over multiple threads.
Today we will look at the kinds of bugs that concurrent programs can have, and think about how to design a concurrent program to avoid these bugs.
Today we will learn how to use graphics processing units (GPUs) to write parallel programs that, when carefully designed, can run tens or hundreds of times faster than parallel programs that use threads on conventional processors.
This week’s lab will require you to implement a parallel computation that can run on a GPU. This computation will be part of a larger system that uses the GPU as a co-processor, a common model for modern workloads.
One of the most interesting and challenging problems in computer science is designing and implementing systems that work reliably across multiple machines. We’ll look at what makes this problem difficult and explore some of the techniques that make it possible to build distributed systems that work well.
We will continue to practice designing and implementing programs that communicate with each other over networks.
For this week’s lab, you will implement a basic distributed system that allows users to communicate between different computers without the use of a central server.
We will look at another mechanism for writing concurrent programs that fits well with network applications and programs that spend much of their time interacting with the outside world rather than just running computation.
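A taste of the idea: instead of dedicating a blocked thread to each descriptor, an event-driven program asks the OS which descriptors are ready and reacts. The sketch below uses POSIX `poll` on a single descriptor with a timeout; real event loops poll many descriptors at once and dispatch to handlers.

```c
#include <poll.h>

/* Return 1 if fd has data to read within timeout_ms, 0 otherwise. */
int wait_readable(int fd, int timeout_ms) {
    struct pollfd p = { .fd = fd, .events = POLLIN };
    int n = poll(&p, 1, timeout_ms);      /* sleeps until ready or timeout */
    return n == 1 && (p.revents & POLLIN);
}
```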
We will kick off the project phase of the course with a discussion of a classic paper by Butler Lampson, a Turing Award winner and accomplished system builder.
Instead of starting a new lab, we’ll reserve today’s class so you have time to continue work on your p2p labs.
You will have nearly all of today’s class to work with your project group to complete your proposal or begin work on your project implementation.
Today’s class will focus on techniques we can use to pass data between processes on POSIX systems. These techniques are used across a wide variety of applications, and may be useful for some final projects.
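The simplest of these techniques is the pipe: bytes written to one end come out the other, even across a `fork()`. In this sketch both ends live in one process just to show the read/write API; in practice a parent creates the pipe before forking so parent and child each hold one end.

```c
#include <string.h>
#include <unistd.h>

/* Send msg (including its '\0') through a pipe and read it back into
 * out; returns the number of bytes read, or -1 on error. */
int pipe_roundtrip(const char *msg, char *out, size_t outlen) {
    int fds[2];
    if (pipe(fds) != 0)
        return -1;                  /* fds[0] = read end, fds[1] = write end */
    write(fds[1], msg, strlen(msg) + 1);
    ssize_t n = read(fds[0], out, outlen);
    close(fds[0]);
    close(fds[1]);
    return (int)n;
}
```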
Writing secure software is incredibly important, especially when you’re writing OS-level code in a language like C. We’ll work through three security exercises as a class today. This won’t make you an expert in secure programming with C, but hopefully you’ll think about the vulnerabilities we discuss when you find yourself writing important code in the future.
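One vulnerability class we'll see is the stack buffer overflow: copying into a fixed-size buffer with no length check. The pair below (illustrative, not one of the class exercises) contrasts the unsafe idiom with a bounded copy that guarantees NUL-termination:

```c
#include <string.h>

/* Unsafe: writes past the end of dst whenever src is too long, letting
 * an attacker-controlled string overwrite adjacent stack memory. */
void copy_unsafe(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Safe: never writes more than dstlen bytes and always terminates. */
void copy_safe(char *dst, size_t dstlen, const char *src) {
    if (dstlen == 0)
        return;
    strncpy(dst, src, dstlen - 1);  /* bounded copy */
    dst[dstlen - 1] = '\0';         /* strncpy may omit the terminator
                                       when src fills the buffer */
}
```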
We’ll continue the memory errors and security activity from the previous class.
You will have nearly the whole class to work on your course projects.
You will have nearly the whole class period to work on your course projects.
In today’s class we’ll look back at a particularly influential systems project—the UNIX operating system. Dennis Ritchie wrote an interesting description of some of the basic features of UNIX and how they came about. We’ll combine that with Richard Gabriel’s commentary on two distinct approaches to system building, and how they fare over time.
You will have nearly the whole class period to work on your course projects.
We’ll finish off the course by looking back at what we’ve done this semester and thinking about how all those different pieces fit together.
The morning section will present their projects during the 9am–noon final exam time slot. The afternoon section will present from 2–5pm.