We’ll begin the course by discussing operating systems generally. What is an OS? What is its job? How does it work? We’ll also spend some time setting up a development environment that we’ll all use this term.
On lab days, you will begin work on a group lab that requires you to use some of the concepts we’ve covered in class. You are expected to start, but not necessarily finish, these labs during class time. The first lab will primarily be a chance for you to warm up your C skills, which may be a bit rusty.
We’ll spend some time discussing why we work in groups for this class, and set some expectations for group dynamics. After that, you’ll complete an exercise learning to use a debugger to troubleshoot buggy programs.
Today we’ll discuss a key OS abstraction: the process. We’ll talk about why and how we use processes on Linux. We’ll spend the remainder of the day learning about some important functions available on all POSIX-compliant operating systems.
In today’s lab you will implement a shell, the program that runs in a terminal window. Shells make it possible for users to interact with an operating system, so building your own will give you a chance to practice writing code to communicate with the OS. You’ll also get to practice dealing with user input in C.
We’ve seen how we can support multiple programs running on a single machine with processes and address spaces, but how does the OS decide which one to run at any given time? This is the job of the CPU scheduler. We’ll look at a few scheduling algorithms and discuss their advantages and drawbacks.
Today’s lab will require that you use your new understanding of CPU scheduling to write a scheduler for a console game. The game, a clone of the classic Snake game, is composed of a series of tasks. You will build the system that tracks these tasks and executes them at the appropriate times.
Address spaces are an important abstraction that makes it possible for the OS to run processes in isolation. We’ll look at the high-level idea of an address space, learn about how you interact with address spaces in code, and discuss some of the basic mechanisms that an OS can use to implement address spaces.
Today’s lab will test your understanding of address spaces and the memory API. You’ll take advantage of Linux’s address space features to write some interesting and useful code.
One important use of virtual memory that goes beyond simply isolating processes from each other is swapping. This makes it possible for an OS to run programs that don’t fit in the amount of memory on the system. We’ll look at why this is useful and how it works.
Today’s lab will be one of the most challenging of the semester. You’ll use your new understanding of virtual memory and the memory API to implement a memory allocator, the code that provides malloc and free for other programs.
We’ll conclude our discussion of virtual memory by looking at the complete VM system from malloc down to disk space. This broader view leads us to some interesting applications of virtual memory, as well as some important security issues.
While processes make it possible to run multiple programs on a single machine, sometimes we might like a single program to do multiple tasks at a time. Threads make it possible for a single process to run multiple operations concurrently. We’ll look at why threads are useful, how to create and interact with threads in Linux, and what makes thread programming particularly challenging.
We’ll use today’s class to continue our discussion of threads, answer questions about the malloc lab, and catch up on anything else we’re behind on.
We’ll build on our understanding of threads from the previous class and look at how we can use locks to control concurrent accesses to data structures.
You’ll have more time in class to work on your malloc lab before it’s due this evening.
While locks are important for guaranteeing mutual exclusion, they aren’t the only tool available for controlling concurrency. We’ll look at two additional concurrency control primitives today and see how they can help us write interesting concurrent programs.
For today’s lab, you will solve an embarrassingly parallel problem using threads. An embarrassingly parallel problem is one that is easy to distribute over multiple threads, requiring minimal synchronization with locks.
Today we will look at the kinds of bugs that concurrent programs can have, and think about how to design a concurrent program to avoid these bugs.
Today we will learn how to use graphics processing units (GPUs) to write parallel programs that, when carefully designed, can run tens or hundreds of times faster than parallel programs that use threads on conventional processors.
We will continue the introductory exercises for GPU programming. While you may not finish all of the exercises in the time allotted, you should have enough of an understanding of CUDA and GPU parallelism to move on to our GPU lab on Friday.
This week’s lab will require you to implement a parallel computation that can run on a GPU. This computation will be part of a larger system that uses the GPU as a co-processor, a common model for modern workloads.
One of the most interesting and challenging problems in computer science is designing and implementing systems that work reliably across multiple machines. We’ll look at what makes this problem difficult and explore some of the interesting techniques that make it possible to build distributed systems that work well. If time allows, we will start an exercise that will introduce some basic network programming concepts.
One of the key technical concepts that underpins distributed systems is networking; we’ll spend our entire class today working through exercises that introduce the basics of network programming.
We will look at another mechanism for writing concurrent programs where the primary work being done concurrently is I/O. This is a common model for software that interacts with networks, large files, or users.
This week’s lab will combine your experience with networks and distributed systems. You will implement a small distributed system using some of the techniques we’ve seen in class.
We will shift our focus to persistent storage, which allows us to preserve data outside a running process. Today we will look at how the OS interacts with devices (including storage devices) and see how we can use a storage device that holds blocks of data to implement a recognizable file system.
Users expect file systems to keep their data safe, even when their computers crash. Today, we will look at how a file system can be designed to tolerate failures. We will also look at a newer file system design and explore some of the important systems ideas that come up in its implementation.
Writing secure software is incredibly important, especially when you’re writing OS-level code in a language like C. We’ll work through three security exercises as a class today. This won’t make you an expert in secure programming with C, but hopefully you’ll think about the vulnerabilities we discuss when you find yourself writing important code in the future.
We’ll conclude this term by looking back at a particularly influential systems project—the UNIX operating system. Dennis Ritchie wrote an interesting description of some of the basic features of UNIX and how they came about. We’ll combine that with Richard Gabriel’s commentary on two distinct approaches to system building, and how they fare over time.