
Commit 4a70dc3

syntactically authored and simongdavies committed
Add doc on guest-aided CoW design
1 parent b82cf78 commit 4a70dc3

File tree

1 file changed: +123 -76 lines changed


docs/paging-development-notes.md

Lines changed: 123 additions & 76 deletions
@@ -1,80 +1,127 @@
-# Paging in Hyperlight
# Guest-aided Copy-on-Write snapshots

When running on a Type 1 hypervisor, servicing a Stage 2 translation
page fault is relatively expensive, since it requires a significant
number of context switches. To help alleviate this, Hyperlight uses an
unusual design in which the guest is aware of the readonly snapshot
from which it is being run, and manages its own copy-on-write.

Because of this, there are two fundamental regions of the guest
physical address space, which are always populated: one, at the very
bottom of memory, is a (hypervisor-enforced) readonly mapping of the
base snapshot from which this guest is being evolved. The other, at
the top of memory, is simply a large bag of blank pages: scratch
memory into which this VM can write.

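
As a rough illustration of this split (not Hyperlight's actual layout), the
sketch below classifies a guest-physical address as falling into one region
or the other; the sizes, the assumption that the two regions are contiguous,
and the names (`GuestPhysRegion`, `classify`) are all made up for the example.

```rust
/// Illustrative layout only: the sizes here are assumptions, not
/// Hyperlight's real configuration.
const SNAPSHOT_SIZE: u64 = 64 * 1024 * 1024; // readonly snapshot at the bottom
const SCRATCH_SIZE: u64 = 16 * 1024 * 1024;  // blank scratch pages at the top
const GUEST_PHYS_TOP: u64 = SNAPSHOT_SIZE + SCRATCH_SIZE;

#[derive(Debug, PartialEq)]
enum GuestPhysRegion {
    Snapshot, // hypervisor-enforced readonly
    Scratch,  // freely writable by this VM
}

fn classify(gpa: u64) -> Option<GuestPhysRegion> {
    if gpa < SNAPSHOT_SIZE {
        Some(GuestPhysRegion::Snapshot)
    } else if gpa < GUEST_PHYS_TOP {
        Some(GuestPhysRegion::Scratch)
    } else {
        None
    }
}
```
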
## The scratch map

Whenever the guest needs to write to a page in the snapshot region, it
will need to copy it into a page in the scratch region and update the
page table entry for the original virtual address to point to the new
page. The page table entries involved will likely need to be copied
themselves, so a ready supply of already-mapped scratch pages to use
for replacement page tables is needed. Currently, the guest
accomplishes this by keeping an identity mapping of the entire scratch
memory around.

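
A minimal sketch of that copy-on-write step from the guest's point of view.
The helpers (`alloc_scratch_page`, `scratch_identity_va`, `pte_for`) are
hypothetical stand-ins for whatever the guest runtime actually provides, and
the sketch assumes the page-table entry being updated is already writable
(i.e. its own page table has already been copied into scratch).

```rust
const PAGE_SIZE: usize = 4096;
const PTE_WRITABLE: u64 = 1 << 1; // x86-64 R/W bit

// Hypothetical helpers, stubbed out so the sketch is self-contained.
fn alloc_scratch_page() -> u64 { unimplemented!() } // next blank scratch physical page
fn scratch_identity_va(pa: u64) -> u64 { pa }       // scratch memory is identity mapped
fn pte_for(_va: u64) -> *mut u64 { unimplemented!() } // PTE mapping `_va` (assumed writable)

/// Copy one snapshot page into scratch and repoint the mapping at the copy.
unsafe fn cow_page(fault_va: u64) {
    let page_va = fault_va & !(PAGE_SIZE as u64 - 1);

    // 1. Take a blank page from the scratch region.
    let new_pa = alloc_scratch_page();

    // 2. Copy the readonly snapshot page into it via the scratch identity map.
    core::ptr::copy_nonoverlapping(
        page_va as *const u8,
        scratch_identity_va(new_pa) as *mut u8,
        PAGE_SIZE,
    );

    // 3. Point the page-table entry at the scratch copy and mark it writable.
    let pte = pte_for(page_va);
    *pte = new_pa | (*pte & 0xfff) | PTE_WRITABLE;

    // 4. Flush the stale translation for this virtual address.
    core::arch::asm!("invlpg [{}]", in(reg) page_va);
}
```
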

The host and the guest need to agree on the location of this mapping,
so that (a) the host can create it when first setting up a blank guest
and (b) the host can ignore it when taking a snapshot (see below).

Currently, the host always creates the scratch map at the top of
virtual memory. In the future, we may add support for a guest to
request that it be moved.

## The snapshot mapping

Do we actually need a physmap-style mapping of the entire snapshot
memory? We only really use it when copying from it, in which case we
ought to already have the VA that we need to copy from. There is one
major exception to this: the page tables themselves, which must be
mapped at some VA so that we can copy them.

Setting this VA statically on the host is a bit annoying, since we are
already using the top of memory for the scratch map. Unfortunately,
since the size of the page tables changes as the sandbox evolves
through e.g. snapshot/restore, we cannot preallocate it...

Let's keep it simple and leave them at `0xffff_0000_0000_0000` for now.

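
In code this is just an agreed-upon constant; if the page tables are mapped
linearly by physical address at that base (an assumption of this sketch, with
made-up names), the guest can turn a page-table physical address into a VA it
can copy from:

```rust
/// Fixed VA at which the snapshot's page tables are assumed to be mapped
/// (linearly, by physical address) so the guest can copy them.
const PAGE_TABLE_MAP_BASE: u64 = 0xffff_0000_0000_0000;

/// VA of a page-table page, given its guest-physical address, under the
/// assumed linear mapping.
fn page_table_va(pt_pa: u64) -> u64 {
    PAGE_TABLE_MAP_BASE + pt_pa
}
```
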
## The physical page allocator

The host needs to be able to reset the state of the physical page
allocator when resuming from a snapshot. Currently, we use a simple
bump allocator as the physical page allocator, with no support for
free, since pages not in use will automatically be omitted from a
snapshot. Therefore, the allocator state is nothing but a single
`u64` that tracks the address of the first free page. This `u64` will
always be located at the top of scratch physical memory.

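
Because the whole allocator state is that one `u64`, the guest-side allocator
is tiny. A sketch, with an assumed cursor address, standing in for the
`alloc_scratch_page` helper stubbed in the earlier example; resetting the
allocator on restore then amounts to the host rewriting this single word.

```rust
const PAGE_SIZE: u64 = 4096;

/// Hypothetical physical address of the allocator cursor; the design above
/// puts it at the top of scratch physical memory (exact value assumed here).
const ALLOC_CURSOR_PA: u64 = 0x0500_0000 - 8;

/// Bump-allocate the next free scratch page (there is no `free`).
unsafe fn alloc_scratch_page() -> u64 {
    // Scratch memory is identity mapped, so the cursor PA is also its VA.
    let cursor = ALLOC_CURSOR_PA as *mut u64;
    let page = *cursor;
    *cursor = page + PAGE_SIZE;
    page
}
```
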
## The guest exception stack

Similarly, the guest needs a stack that is always writable, in order
to be able to take exceptions onto it. The remainder of the top page
of the scratch memory is used for this.

## Taking a snapshot

When the host takes a snapshot of a guest, it will traverse the guest
page tables, collecting every (non-page-table) physical page that is
mapped in the guest (outside of the scratch map). It will write out a
new compacted snapshot with precisely those pages, in order, and a new
set of page tables which produce precisely the same virtual memory
layout, except for the scratch map.

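
A sketch of that traversal over 4-level x86-64 page tables. The walk itself
follows the architecture; how table pages are read, how the scratch map is
recognised, and the omission of huge pages and canonical sign-extension are
simplifying assumptions, not Hyperlight's actual snapshot code.

```rust
const PTE_PRESENT: u64 = 1;
const ADDR_MASK: u64 = 0x000f_ffff_ffff_f000; // bits 51:12 of an entry

// Hypothetical accessors for the snapshotting host, stubbed so the sketch is
// self-contained.
fn read_table(_pa: u64) -> [u64; 512] { unimplemented!() } // read one page-table page
fn in_scratch_map(_va: u64) -> bool { unimplemented!() }   // is this VA in the scratch map?

/// Walk the guest's 4-level page tables, collecting (VA, PA) for every mapped
/// 4 KiB data page that is not part of the scratch map.
fn collect_mapped_pages(cr3: u64) -> Vec<(u64, u64)> {
    let mut pages = Vec::new();
    for (i4, pml4e) in read_table(cr3).iter().copied().enumerate() {
        if pml4e & PTE_PRESENT == 0 { continue; }
        for (i3, pdpte) in read_table(pml4e & ADDR_MASK).iter().copied().enumerate() {
            if pdpte & PTE_PRESENT == 0 { continue; }
            for (i2, pde) in read_table(pdpte & ADDR_MASK).iter().copied().enumerate() {
                if pde & PTE_PRESENT == 0 { continue; }
                for (i1, pte) in read_table(pde & ADDR_MASK).iter().copied().enumerate() {
                    if pte & PTE_PRESENT == 0 { continue; }
                    let va = ((i4 as u64) << 39)
                        | ((i3 as u64) << 30)
                        | ((i2 as u64) << 21)
                        | ((i1 as u64) << 12);
                    if !in_scratch_map(va) {
                        pages.push((va, pte & ADDR_MASK));
                    }
                }
            }
        }
    }
    pages
}
```
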
### Pre-sizing the scratch region

When creating a snapshot, the host must provide the size of the
scratch region that will be used when this snapshot is next restored
into a sandbox. This will then be baked into the guest page tables
created in the snapshot.

TODO: add support, if found to be useful operationally, for either
dynamically growing the scratch region, or changing its size between
taking a snapshot and restoring it.

### Call descriptors

Taking a snapshot is presently only supported in between top-level
calls, i.e. there may be no calls in flight at the time of
snapshotting. This is not enforced, but odd things may happen if it is
violated.

When a snapshot is taken, any outstanding buffers which the guest has
indicated it is waiting for the host to write to will be moved to the
bottom of the new scratch region and zeroed.

Q: how will the guest know about this? Maybe A: the guest nominates a
virtual address at which it wants to have this sort of bookkeeping
information mapped, and the snapshot creation process treats that
address specially, writing out a manifest there.

Q: how do we want to manage buffer
allocation/freeing/reallocation/etc.? Maybe A: for now we will mostly
ignore this, because we only need 1-2 buffers in flight at a time. We
can emulate the current paradigm by recreating a new buffer out of the
free space in the original buffer on each call, and so on.

## Creating a fresh guest

When a fresh guest is created, the snapshot region will contain the
loadable pages of the input ELF and an initial set of page tables,
which simply map the segments of that ELF to the appropriate places in
virtual memory. If the ELF has segments whose virtual addresses
overlap with the scratch map, an error will be returned.

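
A sketch of that overlap check. The segment representation, the scratch-map
bounds, and the error type are assumptions for illustration (in practice the
segments would come from the ELF's `PT_LOAD` program headers).

```rust
/// One loadable ELF segment: (virtual address, size in memory). In practice
/// these come from the PT_LOAD program headers of the input ELF.
type Segment = (u64, u64);

/// Hypothetical bounds of the scratch map in guest virtual memory; the real
/// values are chosen by the host when it sets up the guest.
const SCRATCH_MAP_BASE: u64 = 0xffff_ff00_0000_0000;
const SCRATCH_MAP_SIZE: u64 = 16 * 1024 * 1024;

/// Reject any ELF whose loadable segments overlap the scratch map.
fn check_segments(segments: &[Segment]) -> Result<(), String> {
    let scratch_end = SCRATCH_MAP_BASE + SCRATCH_MAP_SIZE;
    for &(vaddr, memsz) in segments {
        let seg_end = vaddr.saturating_add(memsz);
        if vaddr < scratch_end && seg_end > SCRATCH_MAP_BASE {
            return Err(format!(
                "segment at {vaddr:#x} (size {memsz:#x}) overlaps the scratch map"
            ));
        }
    }
    Ok(())
}
```
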
2115

3-
Hyperlight uses paging, which means that all addresses inside a Hyperlight VM are treated as virtual addresses by the processor. Specifically, Hyperlight uses (ordinary) 4-level paging. 4-level paging is used because we set the following control registers on logical cores inside a VM: `CR0.PG = 1, CR4.PAE = 1, IA32_EFER.LME = 1, and CR4.LA57 = 0`. A Hyperlight VM is limited to 1GB of addressable memory, see below for more details. These control register settings have the following effects:
116+
The initial stack pointer will point to the top of the second-highest
page of the scratch map, but this should usually be changed by early
init code in the guest, since it will otherwise be difficult to detect
collisions between the guest stack and the scratch physical page
allocator.

-- `CR0.PG = 1`: Enables paging
-- `CR4.PAE = 1`: Enables Physical Address Extension (PAE) mode (this is required for 4-level paging)
-- `IA32_EFER.LME = 1`: Enables Long Mode (64-bit mode)
-- `CR4.LA57 = 0`: Makes sure 5-level paging is disabled

# Architecture-specific details of virtual memory setup
9123

10-
## Host-to-Guest memory mapping
124+
## amd64
11125

12-
Into each Hyperlight VM, memory from the host is mapped into the VM as physical memory. The physical memory inside the VM starts at address `0x0` and extends linearly to however much memory was mapped into the VM (depends on various parameters).
13-
14-
## Page table setup
15-
16-
The following page table structs are set up in memory before running a Hyperlight VM (See [Access Flags](#access-flags) for details on access flags that are also set on each entry)
17-
18-
### PML4 (Page Map Level 4) Table
19-
20-
The PML4 table is located at physical address specified in CR3. In Hyperlight we set `CR3=0x0`, which means the PML4 table is located at physical address `0x0`. The PML4 table comprises 512 64-bit entries.
21-
22-
In Hyperlight, we only initialize the first entry (at address `0x0`), with value `0x1_000`, implying that we only have a single PDPT.
23-
24-
### PDPT (Page-directory-pointer Table)
25-
26-
The first and only PDPT is located at physical address `0x1_000`. The PDPT comprises 512 64-bit entries. In Hyperlight, we only initialize the first entry of the PDPT (at address `0x1_000`), with the value `0x2_000`, implying that we only have a single PD.
27-
28-
### PD (Page Directory)
29-
30-
The first and only PD is located at physical address `0x2_000`. The PD comprises 512 64-bit entries, each entry `i` is set to the value `(i * 0x1000) + 0x3_000`. Thus, the first entry is `0x3_000`, the second entry is `0x4_000` and so on.
31-
32-
### PT (Page Table)
33-
34-
The page tables start at physical address `0x3_000`. Each page table has 512 64-bit entries. Each entry is set to the value `p << 21|i << 12` where `p` is the page table number and `i` is the index of the entry in the page table. Thus, the first entry of the first page table is `0x000_000`, the second entry is `0x000_000 + 0x1000`, and so on. The first entry of the second page table is `0x200_000 + 0x1000`, the second entry is `0x200_000 + 0x2000`, and so on. Enough page tables are created to cover the size of memory mapped into the VM.
35-
36-
## Address Translation
37-
38-
Given a 64-bit virtual address X, the corresponding physical address is obtained as follows:
39-
40-
1. PML4 table's physical address is located using CR3 (CR3 is `0x0`).
41-
2. Bits 47:39 of X are used to index into PML4, giving us the address of the PDPT.
42-
3. Bits 38:30 of X are used to index into PDPT, giving us the address of the PD.
43-
4. Bits 29:21 of X are used to index into PD, giving us the address of the PT.
44-
5. Bits 20:12 of X are used to index into PT, giving us a base address of a 4K page.
45-
6. Bits 11:0 of X are treated as an offset.
46-
7. The final physical address is the base address + the offset.
47-
48-
However, because we have only one PDPT4E and only one PDPT4E, bits 47:30 must always be zero. Each PDE points to a PT, and because each PTE with index `p,i` (where p is the page table number of i is the entry within that page) has value `p << 21|i << 12`, the base address received in step 5 above is always just bits 29:12 of X itself. **As bits 11:0 are an offset this means that translating a virtual address to a physical address is essentially a NO-OP**.
49-
50-
A diagram to describe how a linear (virtual) address is translated to physical address inside a Hyperlight VM:
51-
52-
![A diagram to describe how a linear (virtual) address is translated to physical](assets/linear-address-translation.png)
53-
54-
Diagram is taken from "The Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A: System Programming Guide"
55-
56-
### Limitations
57-
58-
Since we only have 1 PML4E and only 1 PDPTE, bits 47:30 of a linear address must be zero. Thus, we have only 30 bits (bit 29:0) to work with, giving us access to (1 << 30) bytes of memory (1GB).
59-
60-
## Access Flags
61-
62-
In addition to providing addresses, page table entries also contain access flags that describe how memory can be accessed, and whether it is present or not. The following access flags are set on each entry:
63-
64-
PML4E, PDPTE, and PD Entries have the present flag set to 1, and the rest of the flags are not set.
65-
66-
PTE Entries all have the present flag set to 1.
67-
68-
In addition, the following flags are set according to the type of memory being mapped:
69-
70-
For `Host Function Definitions` and `Host Exception Data` the NX flag is set to 1 meaning that the memory is not executable in the guest and is not accessible to guest code (ring 3) and is also read only even in ring 0.
71-
72-
For `Input/Output Data`, `Page Table Data`, `PEB`, `PanicContext` and `GuestErrorData` the NX flag is set to 1 meaning that the memory is not executable in the guest and the RW flag is set to 1 meaning that the memory is read/write in ring 0, this means that this data is not accessible to guest code unless accessed via the Hyperlight Guest API (which will be in ring 0).
73-
74-
For `Code` the NX flag is not set meaning that the memory is executable in the guest and the RW flag is set to 1 meaning the data is read/write, as the user/supervisor flag is set then the memory is also read/write accessible to user code. (The code section contains both code and data, so it is marked as read/write. In a future update we will parse the layout of the code and set the access flags accordingly).
75-
76-
For `Stack` the NX flag is set to 1 meaning that the memory is not executable in the guest, the RW flag is set to 1 meaning the data is read/write, as the user/supervisor flag is set then the memory is also read/write accessible to user code.
77-
78-
For `Heap` the RW flag is set to 1 meaning the data is read/write, as the user/supervisor flag is set then the memory is also read/write accessible to user code. The NX flag is not set if the feature `executable_heap` is enabled, otherwise the NX flag is set to 1 meaning that the memory is not executable in the guest. The `executable_heap` feature is disabled by default. It is required to allow data in the heap to be executable to when guests dynamically load or generate code, e.g. `hyperlight-wasm` supports loading of AOT compiled WebAssembly modules, these are loaded dynamically by the Wasm runtime and end up in the heap, therefore for this scenario the `executable_heap` feature must be enabled. In a future update we will implement a mechanism to allow the guest to request memory to be executable at runtime via the Hyperlight Guest API.
79-
80-
For `Guard Pages` the NX flag is set to 1 meaning that the memory is not executable in the guest. The RW flag is set to 1 meaning the data is read/write, as the user/supervisor flag is set then the memory is also read/write accessible to user code. **Note that neither of these flags should really be set as the purpose of the guard pages is to cause a fault if accessed, however, as we deal with this fault in the host not in the guest we need to make the memory accessible to the guest, in a future update we will implement exception and interrupt handling in the guest and then change these flags.**
126+
Hyperlight unconditionally uses 48-bit virtual addresses (4-level
127+
paging) and enables PAE. The guest is always entered in long mode.
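
Concretely, this corresponds to the usual x86-64 long-mode control-register
setup. The bit positions below are architectural constants; the helper and
how the bits are applied to a vCPU are illustrative only.

```rust
// Control-register bits implied by the statement above (x86-64 architectural
// constants). How they are applied to a vCPU is host-specific.
const CR0_PG: u64 = 1 << 31;   // paging enabled
const CR4_PAE: u64 = 1 << 5;   // PAE, required for 4-level paging
const CR4_LA57: u64 = 1 << 12; // 5-level paging (must stay clear)
const EFER_LME: u64 = 1 << 8;  // long mode enable

/// Illustrative check that a register snapshot matches the setup described
/// above: 4-level paging, PAE, long mode, no LA57.
fn is_expected_paging_mode(cr0: u64, cr4: u64, efer: u64) -> bool {
    cr0 & CR0_PG != 0
        && cr4 & CR4_PAE != 0
        && cr4 & CR4_LA57 == 0
        && efer & EFER_LME != 0
}
```
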
