
Paging

Page State

In order to keep the TLB up-to-date we need to be able to:
  • Store information about each virtual page.

  • Locate that information quickly.

Page Table Entries (PTEs)

We refer to a single entry storing information about a single virtual page used by a single process as a page table entry (PTE).

  • (We will see in a few slides why we call them page table entries.)

  • Can usually jam everything into one 32-bit machine word (see the sketch after this list):

    • Location: 20 bits. (Physical Page Number or location on disk.)

    • Permissions: 3 bits. (Read, Write, Execute.)

    • Valid: 1 bit. Is the page located in memory?

    • Referenced: 1 bit. Has the page been read from or written to recently?
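
A minimal C sketch of one way to pack these fields into a 32-bit word; the field ordering here is an assumption for illustration (real hardware dictates its own layout, and C bit-field ordering is compiler-dependent):

    #include <stdint.h>

    /* Field widths follow the list above: 1 + 1 + 3 + 7 unused + 20 = 32 bits. */
    struct pte {
        uint32_t valid      : 1;   /* is the page located in memory? */
        uint32_t referenced : 1;   /* read from or written to recently? */
        uint32_t readable   : 1;   /* permissions */
        uint32_t writable   : 1;
        uint32_t executable : 1;
        uint32_t unused     : 7;
        uint32_t location   : 20;  /* physical page number, or location on disk */
    };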

Locating Page State

  • Process: "Machine! Store to address 0x10000!"

  • MMU: "Where the heck is virtual address 0x10000 supposed to map to? Kernel… help!"

  • (Exception.)

  • Kernel: Let’s see… where did I put that page table entry for 0x10000… just give me a minute… I know it’s around here somewhere… I really should be more organized!

What are the requirements for how we locate page information?
  • Speed: translation is a hot path and should be as efficient as possible.

  • Compactness: data structures we use should not take up too much physical memory.

Page Tables

The data structure used to quickly map a virtual page number to a page table entry is called a page table.

  • Each process has a separate page table.

  • Why? Virtual addresses are private to each process and translated differently for each.

Flat Page Tables

Approach: use one array to hold all page table entries for each process. Virtual page number is index into this array.

(Figure: flat page table animation)

  • Speed: O(1). VPN is used directly as an index into the array.

  • Compactness: 2^20 entries × 4 bytes = 4 MB per process, which may have to be contiguous! Most of it is unused!

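Here is a minimal C sketch of this approach, assuming 4 KB pages, a 20-bit VPN used directly as the array index, the PPN in the top 20 bits of the PTE, and the valid bit in bit 0 (all layout details here are assumptions for illustration):

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
    #define NUM_PAGES  (1u << 20)        /* 2^20 virtual pages with 4 KB pages */
    #define PTE_VALID  0x1u

    typedef uint32_t pte_t;

    /* One flat array per process: 4 MB, mostly unused for sparse address spaces. */
    static pte_t page_table[NUM_PAGES];

    /* Translate vaddr, or return false so the kernel can handle the fault. */
    bool translate(uint32_t vaddr, uint32_t *paddr) {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* top 20 bits */
        uint32_t offset = vaddr & PAGE_MASK;     /* bottom 12 bits */
        pte_t pte = page_table[vpn];             /* the single O(1) lookup */
        if (!(pte & PTE_VALID)) {
            return false;                        /* not resident: exception */
        }
        *paddr = (pte & ~PAGE_MASK) | offset;    /* PPN bits plus page offset */
        return true;
    }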

Linked List Page Tables

  • Approach: list of PTEs for each process, searched on each translation.

(Figure: linked list page table animation)

  • Speed: O(n) for n valid virtual pages.

  • Compactness: 4 bytes * n for n valid virtual pages. All entries are used!

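A minimal C sketch of this approach, assuming each valid page gets one list node holding its VPN and PTE (the node layout is an assumption for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* One node per valid virtual page in a per-process list. */
    struct pte_node {
        uint32_t vpn;             /* virtual page number this entry maps */
        uint32_t pte;             /* the 32-bit page table entry */
        struct pte_node *next;
    };

    /* Search the list on every translation: O(n) for n valid pages. */
    struct pte_node *pte_lookup(struct pte_node *head, uint32_t vpn) {
        for (struct pte_node *cur = head; cur != NULL; cur = cur->next) {
            if (cur->vpn == vpn) {
                return cur;       /* found the mapping */
            }
        }
        return NULL;              /* no entry: fault to the kernel */
    }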

Multi-Level Page Tables

Approach: build a tree-like data structure mapping VPN to PTE. Break VPN into multiple parts, each used as an index at a separate level of the tree.

  • Example: with 4K pages VPN is 20 bits. Use top 10 bits as index into top-level page table, bottom 10 bits as index into second-level page table.

    • Each page table is 2^10 * 4 bytes = 4 KB. Convenient!

(Figure: multi-level page table animation)

  • Speed: O(c). Constant number of lookups per translation, depending on tree depth.

  • Compactness: Depends on sparsity of address space, but better than flat and worse than linked list.
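
A minimal C sketch of a two-level walk under the split above (top 10 bits of the VPN index the top-level table, bottom 10 bits index a second-level table); lazily allocating second-level tables is an assumption of this sketch:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define PAGE_SHIFT 12
    #define LEVEL_BITS 10
    #define LEVEL_SIZE (1u << LEVEL_BITS)   /* 1024 entries per table = 4 KB */

    typedef uint32_t pte_t;

    /* Top level holds pointers to second-level tables, allocated only for
     * regions that are actually in use, which is what keeps it compact. */
    struct addrspace {
        pte_t *second_level[LEVEL_SIZE];    /* NULL for unmapped regions */
    };

    /* Two indexing steps per translation: O(c), with c set by the tree depth. */
    bool pte_lookup(struct addrspace *as, uint32_t vaddr, pte_t *out) {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;
        uint32_t top    = vpn >> LEVEL_BITS;        /* top 10 bits of the VPN */
        uint32_t bottom = vpn & (LEVEL_SIZE - 1);   /* bottom 10 bits */

        pte_t *second = as->second_level[top];
        if (second == NULL) {
            return false;                           /* whole 4 MB region unmapped */
        }
        *out = second[bottom];
        return true;
    }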

Out of Core

So far we have been talking about cases where processes are able to use the physical memory available on the machine.

But what happens when we run out?

(Figure: swapping animation)

When we run out, there are two options:

  1. Fail, and either don’t load the process (exec()), don’t create a new process (fork()), refuse to allocate more heap (sbrk()), or kill the process if it is trying to allocate more stack.

  2. Create more space, preserving the contents of memory for later use.

Virtually Sneaky

Virtual address translation gives the kernel the ability to remove memory from a process behind its back.

What are the requirements for doing this?
  • The last time the process used the virtual address, it behaved like memory.

  • The next time the process uses the virtual address, it behaves like memory.

  • In between, whatever data was stored at that address must be preserved.
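
A minimal C sketch of how a kernel might satisfy these requirements when evicting a page; the helper names (write_to_swap, tlb_invalidate) and the PTE layout are assumptions for illustration, not any particular kernel's API:

    #include <stdint.h>

    #define PTE_VALID 0x1u

    /* Hypothetical helpers standing in for real kernel machinery. */
    extern uint32_t write_to_swap(void *page_contents);   /* returns a swap slot */
    extern void tlb_invalidate(uint32_t vpn);

    /* Evict one resident page behind the process's back: preserve its contents
     * on disk and clear the valid bit so the next use faults and the kernel
     * can bring the page back before the process notices anything. */
    void evict_page(uint32_t *pte, uint32_t vpn, void *page_contents) {
        uint32_t slot = write_to_swap(page_contents);   /* preserve the data */
        *pte = slot << 12;                              /* location now means "on disk" */
        /* valid bit (bit 0) is clear, so the next access raises an exception */
        tlb_invalidate(vpn);                            /* drop any cached translation */
    }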

