pub struct VirtualAddressSpace {
pub page_table_root: AtomicU64,
pub tlb_generation: AtomicU64,
/* private fields */
}
Virtual address space for a process.
Fields

page_table_root: AtomicU64
Page table root (CR3 on x86_64).

tlb_generation: AtomicU64
TLB generation counter. Incremented on every page table modification. The scheduler compares this against the last-seen generation at switch time to determine whether a TLB flush is needed.
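The switch-time comparison can be sketched with a minimal model (type and method names here are hypothetical, not the kernel's actual API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Minimal model of the generation-based TLB flush decision.
struct AddressSpaceModel {
    tlb_generation: AtomicU64,
}

impl AddressSpaceModel {
    fn new() -> Self {
        Self { tlb_generation: AtomicU64::new(0) }
    }

    /// Every page-table modification bumps the generation.
    fn on_page_table_modified(&self) {
        self.tlb_generation.fetch_add(1, Ordering::SeqCst);
    }

    /// At context switch, compare against the generation observed at the
    /// previous switch; a mismatch means stale TLB entries may exist.
    fn needs_tlb_flush(&self, last_seen: u64) -> bool {
        self.tlb_generation.load(Ordering::SeqCst) != last_seen
    }
}

fn main() {
    let vas = AddressSpaceModel::new();
    let last_seen = vas.tlb_generation.load(Ordering::SeqCst);
    assert!(!vas.needs_tlb_flush(last_seen)); // no modification since last switch
    vas.on_page_table_modified();
    assert!(vas.needs_tlb_flush(last_seen)); // generation advanced: flush
    println!("flush needed: {}", vas.needs_tlb_flush(last_seen));
}
```

The counter trades one atomic compare per context switch for skipping unconditional TLB flushes when the page tables have not changed.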
Implementations

impl VirtualAddressSpace

pub fn init(&mut self) -> Result<(), KernelError>
Initialize virtual address space
pub fn map_kernel_space(&mut self) -> Result<(), KernelError>
Map kernel space into this address space.
Copies the upper-half L4 entries (indices 256-511) from the current (boot) page tables into this VAS’s L4, plus the bootloader’s physical memory mapping entry (which may be in the lower half). This shares the kernel’s code, data, heap, MMIO, and physical memory access with the new process, so that the kernel remains accessible during syscalls (which run with the user’s CR3).
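The index arithmetic behind the upper-half copy can be illustrated in isolation (the tables are modeled as plain arrays here; real code operates on mapped page-table frames):

```rust
/// L4 (PML4) index of a canonical x86_64 virtual address: bits 39..=47.
fn l4_index(vaddr: u64) -> usize {
    ((vaddr >> 39) & 0x1ff) as usize
}

/// Copy the kernel half (entries 256..512) from the boot L4 table into a
/// new process L4 table, sharing kernel mappings with the new process.
fn copy_kernel_entries(boot_l4: &[u64; 512], new_l4: &mut [u64; 512]) {
    for i in 256..512 {
        new_l4[i] = boot_l4[i];
    }
}

fn main() {
    // The higher half starts at 0xffff_8000_0000_0000, which is L4 index 256.
    assert_eq!(l4_index(0xffff_8000_0000_0000), 256);
    assert_eq!(l4_index(0x0000_7fff_ffff_f000), 255); // top of user space

    let mut boot = [0u64; 512];
    boot[256] = 0xdead_b000 | 0x3; // hypothetical kernel entry (present | writable)
    let mut fresh = [0u64; 512];
    copy_kernel_entries(&boot, &mut fresh);
    assert_eq!(fresh[256], boot[256]); // kernel entry shared
    assert_eq!(fresh[0], 0); // user half stays empty
    println!("kernel entries shared");
}
```

Because the copied L4 entries point at the same lower-level tables, later kernel mappings made through those shared tables become visible in every process automatically.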
pub fn clone_from(&mut self, other: &Self) -> Result<(), KernelError>
Clone from another address space (deep copy for fork).
Allocates a new L4 page table for this VAS, copies kernel-space L4 entries directly (shared kernel mapping), and for each user-space page in the parent, allocates a new physical frame, copies the 4KB content, and maps it into this VAS’s page tables with the same flags.
pub fn set_page_table(&self, root_phys_addr: u64)
Set page table root
pub fn get_page_table(&self) -> u64
Get page table root
pub fn map_region(
    &self,
    start: VirtualAddress,
    size: usize,
    mapping_type: MappingType,
) -> Result<(), KernelError>
Map a region of virtual memory
pub fn map_physical_region(
    &self,
    phys_addr: u64,
    size: usize,
    vaddr: VirtualAddress,
) -> Result<(), KernelError>
Map specific physical frames into user space at a chosen virtual address.
Used for framebuffer mmap: the physical frames already exist (MMIO) and must be mapped read/write into the process address space.
pub fn map_region_raii(
    &self,
    start: VirtualAddress,
    size: usize,
    mapping_type: MappingType,
    process_id: ProcessId,
) -> Result<MappedRegion, KernelError>
Map a region of virtual memory with RAII guard
pub fn unmap_region(&self, start: VirtualAddress) -> Result<(), KernelError>
Unmap a region
pub fn unmap(&self, start_addr: usize, size: usize) -> Result<(), KernelError>
Unmap a region by address and size (POSIX-compliant partial munmap).
Supports five cases:

- Exact match: addr and size match a BTreeMap entry → remove it.
- Front trim: addr matches the start of a larger mapping → shrink the mapping and free the leading pages.
- Back trim: addr + size matches the end of a mapping → shrink from the back.
- Hole punch: the range is in the middle of a mapping → split it into two.
- Sub-range not at start: addr is inside a mapping → find the containing mapping and trim/punch accordingly.
GCC’s ggc garbage collector relies on partial munmap to free individual pages within larger mmap pools. Without this, munmap(pool_start, 4KB) would destroy the entire multi-MB pool.
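The case analysis above reduces to comparing the unmap range's endpoints against the mapping's endpoints. A self-contained sketch of that classification (names hypothetical; assumes the range is page-aligned and fully contained in the mapping):

```rust
#[derive(Debug, PartialEq)]
enum UnmapCase {
    ExactMatch,
    FrontTrim,
    BackTrim,
    HolePunch,
}

/// Classify [addr, addr+size) against an existing mapping [start, start+len).
fn classify(start: usize, len: usize, addr: usize, size: usize) -> UnmapCase {
    let end = start + len;
    let unmap_end = addr + size;
    match (addr == start, unmap_end == end) {
        (true, true) => UnmapCase::ExactMatch,
        (true, false) => UnmapCase::FrontTrim,
        (false, true) => UnmapCase::BackTrim,
        (false, false) => UnmapCase::HolePunch,
    }
}

fn main() {
    const PAGE: usize = 4096;
    let (start, len) = (0x1000_0000, 16 * PAGE); // a 64 KB pool
    assert_eq!(classify(start, len, start, len), UnmapCase::ExactMatch);
    assert_eq!(classify(start, len, start, PAGE), UnmapCase::FrontTrim);
    assert_eq!(classify(start, len, start + 15 * PAGE, PAGE), UnmapCase::BackTrim);
    assert_eq!(classify(start, len, start + PAGE, PAGE), UnmapCase::HolePunch);
    println!("all cases classified");
}
```

The hole-punch case is the expensive one: it must create a second BTreeMap entry for the tail half while shrinking the original in place.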
pub fn find_mapping(&self, addr: VirtualAddress) -> Option<VirtualMapping>
Find mapping for address
pub fn mappings_ref(&self) -> &Mutex<BTreeMap<VirtualAddress, VirtualMapping>>
Get a reference to the underlying mappings BTreeMap.
Used by COW fork to iterate user-space pages and by diagnostics. The caller must lock the returned Mutex before accessing entries.
pub fn map_page_with_frame(
    &mut self,
    vaddr: usize,
    frame: FrameNumber,
    flags: PageFlags,
) -> Result<(), KernelError>
Map a specific virtual address using a pre-allocated physical frame.
Unlike map_page (which allocates its own frame), this takes an
existing frame – used by demand paging and COW fault handlers.
pub fn remap_page(
    &mut self,
    vaddr: usize,
    new_frame: FrameNumber,
    flags: PageFlags,
) -> Result<(), KernelError>
Re-map a virtual address to a different physical frame (for COW).
Unmaps the old mapping and installs the new frame with the given flags.
pub fn map_lazy(&mut self, vaddr: usize, size: usize, flags: PageFlags)
Register a lazy (demand-paged) mapping without allocating frames.
Delegates to the demand paging manager. The first access will trigger a page fault that the manager resolves by allocating a physical frame.
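The register-now, allocate-on-fault flow can be modeled with a toy registry (all names and the frame-numbering scheme are hypothetical, not the demand paging manager's real interface):

```rust
use std::collections::BTreeMap;

/// Toy demand-paging registry: map_lazy records a range with no backing
/// frames; the fault handler allocates a frame on first access.
struct LazyPager {
    ranges: BTreeMap<usize, usize>, // start -> size, frames not yet allocated
    resident: BTreeMap<usize, u64>, // page -> allocated frame number
    next_frame: u64,
}

impl LazyPager {
    fn map_lazy(&mut self, vaddr: usize, size: usize) {
        self.ranges.insert(vaddr, size);
    }

    /// Returns the frame backing the faulting page, allocating on first touch;
    /// None means the address was never mapped (a genuine segfault).
    fn handle_fault(&mut self, fault_addr: usize) -> Option<u64> {
        let page = fault_addr & !0xfff;
        let covered = self.ranges.iter().any(|(&s, &sz)| page >= s && page < s + sz);
        if !covered {
            return None;
        }
        if let Some(&frame) = self.resident.get(&page) {
            return Some(frame); // already resident
        }
        self.next_frame += 1;
        self.resident.insert(page, self.next_frame);
        Some(self.next_frame)
    }
}

fn main() {
    let mut pager = LazyPager {
        ranges: BTreeMap::new(),
        resident: BTreeMap::new(),
        next_frame: 0,
    };
    pager.map_lazy(0x4000_0000, 2 * 4096); // two lazy pages, zero frames used
    assert_eq!(pager.resident.len(), 0);
    let f1 = pager.handle_fault(0x4000_0123).unwrap(); // first touch allocates
    assert_eq!(pager.handle_fault(0x4000_0fff), Some(f1)); // same page, same frame
    assert_eq!(pager.handle_fault(0x5000_0000), None); // unmapped address
    println!("resident pages: {}", pager.resident.len());
}
```

The payoff is that large, sparsely touched allocations cost only the pages actually accessed, at the price of one page fault per first touch.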
pub fn mmap(
    &self,
    size: usize,
    mapping_type: MappingType,
) -> Result<VirtualAddress, KernelError>
Allocate memory-mapped region
pub fn heap_start_addr(&self) -> u64
Return the base address of the user heap region.
pub fn brk(&self, new_break: Option<VirtualAddress>) -> VirtualAddress
Extend or query heap (brk).
When new_break is Some, attempts to move the program break:
- Grow (new > current): allocates physical frames and maps pages for the delta region.
- Shrink (new < current but >= heap_start): unmaps pages and frees frames for the delta region.
- Below heap_start or equal to current: no-op.
When new_break is None, returns the current break without changes.
All heap pages are tracked in a SINGLE consolidated BTreeMap entry keyed at the heap start page. This avoids creating one entry per brk() call, which previously caused 50,000+ entries and O(n^2) slowdown.
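The documented grow/shrink/query semantics can be sketched with a toy break model (the struct and its layout are illustrative only; the real implementation maps and unmaps the page-aligned delta and updates the single consolidated BTreeMap entry):

```rust
const PAGE: usize = 4096;

/// Toy model of brk bookkeeping: one heap entry at `start`, length `brk - start`.
struct Heap {
    start: usize,
    brk: usize,
}

impl Heap {
    /// Mirrors the documented semantics: None queries, Some moves the break.
    fn brk(&mut self, new_break: Option<usize>) -> usize {
        if let Some(nb) = new_break {
            if nb >= self.start && nb != self.brk {
                // Real code maps frames for a grow or frees them for a shrink;
                // the consolidated entry at self.start only changes length.
                self.brk = nb;
            }
            // below heap_start, or equal to current: no-op
        }
        self.brk
    }

    fn mapped_pages(&self) -> usize {
        (self.brk - self.start + PAGE - 1) / PAGE
    }
}

fn main() {
    let mut h = Heap { start: 0x6000_0000, brk: 0x6000_0000 };
    assert_eq!(h.brk(None), 0x6000_0000); // query
    h.brk(Some(0x6000_0000 + 3 * PAGE)); // grow by three pages
    assert_eq!(h.mapped_pages(), 3);
    h.brk(Some(0x6000_0000 + PAGE)); // shrink back to one page
    assert_eq!(h.mapped_pages(), 1);
    assert_eq!(h.brk(Some(0x5fff_0000)), 0x6000_0000 + PAGE); // below start: no-op
    println!("break at {:#x}", h.brk(None));
}
```

A malloc that calls brk() thousands of times only ever touches one map entry, which is exactly what the consolidation note above is protecting.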
pub fn fork(&self) -> Result<Self, KernelError>
Clone address space (for fork).
Creates a new VAS with its own L4 page table and deep-copies all user-space pages from this VAS. Kernel-space entries are shared.
pub fn protect_region(
    &self,
    start: VirtualAddress,
    size: usize,
    prot: usize,
) -> Result<(), KernelError>
Update hardware page table entry flags for a region.
Walks the page table for each page in [start, start+size) and updates
the PTE flags according to the POSIX prot bitmask. Flushes TLB for
each modified page.
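One plausible prot-to-PTE-flag translation looks like the following sketch; the kernel's actual mapping may differ in details such as PROT_NONE handling, but the POSIX bit values and x86_64 PTE bits shown are standard:

```rust
// POSIX protection bits (values per sys/mman.h).
const PROT_READ: usize = 0x1;
const PROT_WRITE: usize = 0x2;
const PROT_EXEC: usize = 0x4;

// x86_64 PTE bits (subset).
const PTE_PRESENT: u64 = 1 << 0;
const PTE_WRITABLE: u64 = 1 << 1;
const PTE_USER: u64 = 1 << 2;
const PTE_NX: u64 = 1 << 63; // no-execute, requires EFER.NXE

/// Translate a POSIX prot bitmask into x86_64 PTE flags (illustrative).
fn prot_to_flags(prot: usize) -> u64 {
    let mut flags = PTE_PRESENT | PTE_USER;
    if prot & PROT_WRITE != 0 {
        flags |= PTE_WRITABLE;
    }
    if prot & PROT_EXEC == 0 {
        flags |= PTE_NX; // executable only when PROT_EXEC was requested
    }
    flags
}

fn main() {
    let rw = prot_to_flags(PROT_READ | PROT_WRITE);
    assert!(rw & PTE_WRITABLE != 0 && rw & PTE_NX != 0);
    let rx = prot_to_flags(PROT_READ | PROT_EXEC);
    assert!(rx & PTE_WRITABLE == 0 && rx & PTE_NX == 0);
    println!("rw flags: {:#x}", rw);
}
```

Note the inversion on execute permission: x86_64 expresses it as a no-execute bit, so PROT_EXEC clears NX rather than setting an execute bit.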
pub fn handle_page_fault(
    &self,
    fault_addr: VirtualAddress,
    write: bool,
    user: bool,
) -> Result<(), KernelError>
Handle page fault
pub fn clear_user_space(&mut self) -> Result<(), KernelError>
Clear user-space mappings only (for exec)
pub fn user_stack_base(&self) -> usize
Get user stack base address
pub fn user_stack_size(&self) -> usize
Get user stack size
pub fn set_stack_top(&self, addr: usize)
Set stack top address
pub fn set_stack_size(&self, size: usize)
Set stack size in bytes
pub fn map_page(
    &mut self,
    vaddr: usize,
    flags: PageFlags,
) -> Result<(), KernelError>
Map a single page at a virtual address
pub fn map_huge_page(
    &mut self,
    vaddr: usize,
    flags: PageFlags,
) -> Result<(), KernelError>
Map a 2MB huge page at the given virtual address.
Allocates 512 contiguous 4KB frames (= 2MB) and installs a single L2 page table entry with the HUGE flag set. This reduces TLB pressure for large contiguous allocations (heap, framebuffer, DMA).
The virtual address must be 2MB-aligned.
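The alignment requirement and the 512-frame arithmetic can be checked in isolation:

```rust
const HUGE_PAGE: usize = 2 * 1024 * 1024; // 2MB

/// A 2MB mapping needs a 2MB-aligned virtual address: the low 21 bits
/// (9 L1-index bits + 12 offset bits) must be zero so that a single L2
/// entry with the HUGE flag can cover the whole range.
fn is_huge_aligned(vaddr: usize) -> bool {
    vaddr & (HUGE_PAGE - 1) == 0
}

/// Number of 4KB frames backing one 2MB huge page.
fn frames_per_huge_page() -> usize {
    HUGE_PAGE / 4096
}

fn main() {
    assert!(is_huge_aligned(0x4000_0000));
    assert!(!is_huge_aligned(0x4000_1000)); // only 4KB-aligned
    assert_eq!(frames_per_huge_page(), 512);
    println!("2MB page = {} frames", frames_per_huge_page());
}
```

One huge-page mapping thus occupies a single TLB entry where 512 separate 4KB mappings would compete for 512.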