
Page Replacement

Hardware-Managed TLBs

  • On certain (common) architectures the MMU will search the page table itself to locate virtual-to-physical address translations missing from the TLB.

    • Pro: hardware is faster!

    • Con: operating system must set up the page tables in a fixed way that the hardware understands.

  • With a hardware-managed TLB, the kernel never sees TLB faults: they are handled in hardware.

  • Don’t worry: System/161 has a software-managed TLB, so you get to do all of the hard work and design your own page table structures!

Page Eviction

In order to swap out a page we need to choose which page to move to disk.

In order to swap in a page we might need to choose which page to swap out.

Swapping cost-benefit calculation:
  • Cost: mainly the time and disk bandwidth required to move a page to and from disk.

  • Benefit: the use of 4K (one page) of memory for as long as the page on disk remains unused.

There are tricks that the operating system might play to minimize the cost, but we will mainly focus on algorithms designed to maximize the benefit.

(Another complementary description of our goal is minimizing the page fault rate.)

Page Eviction FAIL: Thrashing

Thrashing is a colloquialism normally used to describe a computer whose virtual memory subsystem is in a constant state of paging, rapidly exchanging data in memory for data on disk, to the exclusion of most application-level processing. This causes the performance of the computer to degrade or collapse.

— Wikipedia

Maximizing Benefit

Benefit: the use of 4K of memory as long as the page on disk remains unused.

How do we maximize the benefit?
  • Pick the page to evict that will remain unused the longest!

The Best Bet

What is the absolute best page to evict, the one that page replacement algorithms dream about?
  • A page that will never be used again!

Break Out the Ball

Think back to scheduling algorithms. (This is a bit simpler.)

What would we like to know about a page when choosing one to evict?
  • How long will it be before this page is used again?

The optimal algorithm evicts the page that will remain unused the longest. This clearly maximizes our swapping cost-benefit calculation.

  • This algorithm is difficult to implement: it requires knowing the future!

What Now?

When we can’t predict the future, we… use the past to predict the future!

The Past Didn’t Go Anywhere

Intelligent page replacement requires three things:
  1. determining what information to track,

  2. figuring out how to collect that information, and

  3. deciding how to store it.

There Be Tradeoffs

  • Collecting statistics may be expensive, slowing down the process of translating virtual addresses.

  • Storing statistics may be expensive, occupying kernel memory that could be better used for other things.

Simplest

What is the simplest possible page replacement algorithm?
  • Random.

Pros:
  • Extremely simple.

  • Good baseline for algorithms that try to be smarter.

Cons:
  • Too simple. We can probably do better.
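
As a baseline sketch, random eviction is just a bounded number of dice rolls over a hypothetical coremap[] of frame descriptors. The names and fields below are assumptions for illustration, not OS/161's actual structures:

    #include <stdlib.h>

    #define NUM_FRAMES 1024

    struct frame {
        int in_use;   /* frame currently holds a page */
        int pinned;   /* kernel page that must not be evicted */
    };

    struct frame coremap[NUM_FRAMES];

    /* Return the index of a random evictable frame, or -1 if none was
     * found after a bounded number of tries. */
    int choose_victim_random(void) {
        for (int tries = 0; tries < 8 * NUM_FRAMES; tries++) {
            int i = rand() % NUM_FRAMES;
            if (coremap[i].in_use && !coremap[i].pinned) {
                return i;
            }
        }
        return -1;
    }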

Use The Past, Luke

What is an algorithm that uses a page’s past to predict its future?
  • Least Recently Used (LRU):

    • Choose the page that has not been used for the longest period of time.

    • Hopefully this is a page that will not be used again for a while.

Pros:
  • Might be as good as we can do without predicting the future.

Cons:
  • How do we tell how long it has been since a page has been accessed?

  • How do we store how long it has been since a page has been accessed?

LRU: Collecting Statistics

At what point does the operating system know that a process has accessed a virtual page?
  • When we load the entry into the TLB!

Does this reflect every virtual page access?
  • No! Only the first. A page that is accessed once and one that is accessed 1,000 times are indistinguishable.

Why not record every page access?
  • Too slow!
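
As a rough sketch of where that hook lives: the TLB miss handler can stamp the physical frame backing the translation with the current time. The coremap layout and names below are assumptions for illustration, not OS/161 structures (you design your own):

    #include <stdint.h>

    #define NUM_FRAMES 1024

    struct frame {
        int     in_use;
        uint8_t last_used;   /* low 8 bits of the tick counter at last TLB load */
    };

    struct frame coremap[NUM_FRAMES];   /* one entry per physical page */
    uint32_t current_tick;              /* bumped by the timer interrupt */

    /* Called from the TLB miss handler once a translation has been found:
     * this is the only moment the kernel observes the access, so stamp the
     * backing frame here. */
    void record_access(uint32_t pfn) {
        if (pfn < NUM_FRAMES && coremap[pfn].in_use) {
            coremap[pfn].last_used = (uint8_t)(current_tick & 0xFF);
        }
    }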

LRU: Storing Statistics

How much access time information can we store?
  • 32 bits = 2^32 "ticks", but doubles the page table entry size!

  • 8 bits = 256 "ticks".

How do we find the least recently used page?
  • Need some kind of efficient data structure holding all physical pages on the system that is searched on every page eviction.
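
Building on the toy coremap sketched above, finding the least recently used frame is a full scan, with ages computed modulo 256 so the 8-bit stamp can wrap:

    #include <stdint.h>

    #define NUM_FRAMES 1024

    struct frame {                       /* same toy coremap as above */
        int     in_use;
        uint8_t last_used;
    };

    struct frame coremap[NUM_FRAMES];
    uint32_t current_tick;

    /* Return the frame that has gone longest without being loaded into
     * the TLB, or -1 if nothing is evictable. */
    int choose_victim_lru(void) {
        int victim = -1;
        uint8_t best_age = 0;
        for (int i = 0; i < NUM_FRAMES; i++) {
            if (!coremap[i].in_use) {
                continue;
            }
            uint8_t age = (uint8_t)((current_tick & 0xFF) - coremap[i].last_used);
            if (victim == -1 || age > best_age) {
                victim = i;
                best_age = age;
            }
        }
        return victim;
    }

This linear scan on every eviction is exactly the cost that the clock algorithm, next, tries to avoid.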

Clock LRU

  • Simple and efficient LRU-like algorithm.

  • One bit of accessed information, set when loading a virtual address into the TLB.

To locate a page to evict (see the sketch below):
  1. Cycle through all pages in memory in a fixed order.

  2. If a page's accessed bit is clear, evict that page.

  3. If a page's accessed bit is set, clear the bit and keep going.

(Animation: the clock hand sweeps through the frames, clearing accessed bits until it finds one that is already clear.)
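
Here is a minimal sketch of that sweep over the same style of toy coremap; the accessed bit would be set by your TLB miss handler when the translation is loaded, and the names are assumptions rather than a given interface:

    #define NUM_FRAMES 1024

    struct frame {
        int in_use;
        int accessed;   /* set whenever this page's translation enters the TLB */
    };

    struct frame coremap[NUM_FRAMES];
    int hand;           /* clock hand position; persists across evictions */

    /* Sweep until we find a frame whose accessed bit is clear. Pages used
     * since the last pass get a second chance: clear their bit and move on.
     * Assumes at least one frame is in use, so the sweep terminates within
     * two full revolutions. */
    int choose_victim_clock(void) {
        for (;;) {
            struct frame *f = &coremap[hand];
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;

            if (!f->in_use) {
                continue;
            }
            if (f->accessed) {
                f->accessed = 0;     /* recently used: spare it this time */
            } else {
                return victim;       /* untouched since the last pass: evict */
            }
        }
    }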

Clock Speed

What does it mean when the clock hand is turning slowly?
  • Little memory pressure, or

  • We are making good decisions about what to evict.

What does it mean when the clock hand is turning rapidly?
  • Lots of memory pressure, or

  • We are making bad decisions about what to evict.

Memory Management Design Exercise: Copy-on-Write

Remember the problem with fork?

  • Have to copy the entire address space of the parent!

  • If the parent has N pages allocated, we have to find N more free pages!

    • …and then the child calls exec anyway.

Can we avoid this?

Copy-on-Write

Goal: don’t allocate any additional memory during fork, but preserve private address spaces.

Here’s what to do:
  1. During fork, point all of the child's PTEs at the same physical memory as the parent's.

  2. As long as the child and parent are both just reading from any shared page, we are OK.

    • Large parts of the address space may be read-only or read-mostly anyway…

  3. As soon as either the child or the parent modifies the contents of any shared virtual page, we have to make a private copy.

  4. How do we trap writes? Mark the page as read-only in the TLB!
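
A minimal sketch of the page-table side of this, assuming a hypothetical PTE layout and helper routines (alloc_frame, copy_frame, and a per-frame reference count); none of these are OS/161 APIs, and locking and TLB shootdown details are omitted:

    #include <stdint.h>
    #include <stdbool.h>

    struct pte {
        uint32_t pfn;       /* physical frame number */
        bool     valid;
        bool     writable;  /* permission as loaded into the TLB */
        bool     cow;       /* frame is shared copy-on-write */
    };

    /* Hypothetical helpers: a real kernel would track these in its coremap. */
    extern uint32_t alloc_frame(void);
    extern void     copy_frame(uint32_t dst_pfn, uint32_t src_pfn);
    extern uint32_t frame_refcount(uint32_t pfn);
    extern void     frame_ref_inc(uint32_t pfn);
    extern void     frame_ref_dec(uint32_t pfn);

    /* Step 1: during fork, share the parent's frame and strip write
     * permission from both mappings so that any write traps. */
    void cow_share(struct pte *parent, struct pte *child) {
        *child = *parent;
        parent->writable = child->writable = false;
        parent->cow = child->cow = true;
        frame_ref_inc(parent->pfn);
    }

    /* Steps 3-4: on a write fault to a COW page, make a private copy,
     * or simply reclaim the frame if we are the last sharer. */
    void cow_write_fault(struct pte *p) {
        if (frame_refcount(p->pfn) > 1) {
            uint32_t new_pfn = alloc_frame();
            copy_frame(new_pfn, p->pfn);
            frame_ref_dec(p->pfn);
            p->pfn = new_pfn;
        }
        p->writable = true;   /* page is private now: writes may proceed */
        p->cow = false;
        /* Finally, invalidate the stale read-only TLB entry for this page. */
    }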

