⚠️ VeridianOS Kernel Documentation: this is low-level no_std kernel code. All functions are unsafe unless explicitly marked otherwise.

Struct VirtualAddressSpace

pub struct VirtualAddressSpace {
    pub page_table_root: AtomicU64,
    pub tlb_generation: AtomicU64,
    /* private fields */
}

Virtual Address Space for a process

Fields

page_table_root: AtomicU64

Page table root (CR3 on x86_64)

tlb_generation: AtomicU64

TLB generation counter. Incremented on every page table modification. The scheduler compares this against the last-seen generation at switch time to determine whether a TLB flush is needed.

Implementations

impl VirtualAddressSpace

pub fn new() -> Self

Create a new virtual address space

pub fn init(&mut self) -> Result<(), KernelError>

Initialize virtual address space

pub fn map_kernel_space(&mut self) -> Result<(), KernelError>

Map kernel space into this address space.

Copies the upper-half L4 entries (indices 256-511) from the current (boot) page tables into this VAS’s L4, plus the bootloader’s physical memory mapping entry (which may be in the lower half). This shares the kernel’s code, data, heap, MMIO, and physical memory access with the new process, so that the kernel remains accessible during syscalls (which run with the user’s CR3).

pub fn clone_from(&mut self, other: &Self) -> Result<(), KernelError>

Clone from another address space (deep copy for fork).

Allocates a new L4 page table for this VAS, copies kernel-space L4 entries directly (shared kernel mapping), and for each user-space page in the parent, allocates a new physical frame, copies the 4KB content, and maps it into this VAS’s page tables with the same flags.

pub fn destroy(&mut self)

Destroy the address space

pub fn set_page_table(&self, root_phys_addr: u64)

Set page table root

pub fn get_page_table(&self) -> u64

Get page table root

pub fn map_region(&self, start: VirtualAddress, size: usize, mapping_type: MappingType) -> Result<(), KernelError>

Map a region of virtual memory

pub fn map_physical_region(&self, phys_addr: u64, size: usize, vaddr: VirtualAddress) -> Result<(), KernelError>

Map specific physical frames into user space at a chosen virtual address.

Used for framebuffer mmap: the physical frames already exist (MMIO) and must be mapped read/write into the process address space.

pub fn map_region_raii(&self, start: VirtualAddress, size: usize, mapping_type: MappingType, process_id: ProcessId) -> Result<MappedRegion, KernelError>

Map a region of virtual memory with RAII guard

pub fn unmap_region(&self, start: VirtualAddress) -> Result<(), KernelError>

Unmap a region

pub fn unmap(&self, start_addr: usize, size: usize) -> Result<(), KernelError>

Unmap a region by address and size (POSIX-compliant partial munmap).

Supports five cases:

  1. Exact match: addr and size match a BTreeMap entry → remove it.
  2. Front trim: addr matches the start of a larger mapping → shrink the mapping and free the leading pages.
  3. Back trim: addr + size matches the end of a mapping → shrink from the back.
  4. Hole punch: the range falls in the middle of a mapping → split it into two.
  5. Sub-range not at start: addr lies inside a mapping → find the containing mapping and trim or punch accordingly.

GCC’s ggc garbage collector relies on partial munmap to free individual pages within larger mmap pools. Without this, munmap(pool_start, 4KB) would destroy the entire multi-MB pool.

pub fn find_mapping(&self, addr: VirtualAddress) -> Option<VirtualMapping>

Find mapping for address

pub fn mappings_ref(&self) -> &Mutex<BTreeMap<VirtualAddress, VirtualMapping>>

Get a reference to the underlying mappings BTreeMap.

Used by COW fork to iterate user-space pages and by diagnostics. The caller must lock the returned Mutex before accessing entries.

pub fn map_page_with_frame(&mut self, vaddr: usize, frame: FrameNumber, flags: PageFlags) -> Result<(), KernelError>

Map a specific virtual address using a pre-allocated physical frame.

Unlike map_page (which allocates its own frame), this takes an existing frame – used by demand paging and COW fault handlers.

pub fn remap_page(&mut self, vaddr: usize, new_frame: FrameNumber, flags: PageFlags) -> Result<(), KernelError>

Re-map a virtual address to a different physical frame (for COW).

Unmaps the old mapping and installs the new frame with the given flags.

pub fn map_lazy(&mut self, vaddr: usize, size: usize, flags: PageFlags)

Register a lazy (demand-paged) mapping without allocating frames.

Delegates to the demand paging manager. The first access will trigger a page fault that the manager resolves by allocating a physical frame.

pub fn mmap(&self, size: usize, mapping_type: MappingType) -> Result<VirtualAddress, KernelError>

Allocate memory-mapped region

pub fn heap_start_addr(&self) -> u64

Return the base address of the user heap region.

pub fn brk(&self, new_break: Option<VirtualAddress>) -> VirtualAddress

Extend or query heap (brk).

When new_break is Some, attempts to move the program break:

  • Grow (new > current): allocates physical frames and maps pages for the delta region.
  • Shrink (new < current but >= heap_start): unmaps pages and frees frames for the delta region.
  • Below heap_start or equal to current: no-op.

When new_break is None, returns the current break without changes.

All heap pages are tracked in a SINGLE consolidated BTreeMap entry keyed at the heap start page. This avoids creating one entry per brk() call, which previously caused 50,000+ entries and O(n^2) slowdown.

pub fn fork(&self) -> Result<Self, KernelError>

Clone address space (for fork).

Creates a new VAS with its own L4 page table and deep-copies all user-space pages from this VAS. Kernel-space entries are shared.

pub fn protect_region(&self, start: VirtualAddress, size: usize, prot: usize) -> Result<(), KernelError>

Update hardware page table entry flags for a region.

Walks the page table for each page in [start, start+size) and updates the PTE flags according to the POSIX prot bitmask. Flushes TLB for each modified page.

pub fn handle_page_fault(&self, fault_addr: VirtualAddress, write: bool, user: bool) -> Result<(), KernelError>

Handle page fault

pub fn get_stats(&self) -> VasStats

Get memory statistics

pub fn clear(&mut self)

Clear all mappings and free resources

pub fn clear_user_space(&mut self) -> Result<(), KernelError>

Clear user-space mappings only (for exec)

pub fn user_stack_base(&self) -> usize

Get user stack base address

pub fn user_stack_size(&self) -> usize

Get user stack size

pub fn stack_top(&self) -> usize

Get stack top address

pub fn set_stack_top(&self, addr: usize)

Set stack top address

pub fn set_stack_size(&self, size: usize)

Set stack size in bytes

pub fn map_page(&mut self, vaddr: usize, flags: PageFlags) -> Result<(), KernelError>

Map a single page at a virtual address

pub fn map_huge_page(&mut self, vaddr: usize, flags: PageFlags) -> Result<(), KernelError>

Map a 2MB huge page at the given virtual address.

Allocates 512 contiguous 4KB frames (= 2MB) and installs a single L2 page table entry with the HUGE flag set. This reduces TLB pressure for large contiguous allocations (heap, framebuffer, DMA).

The virtual address must be 2MB-aligned.

Trait Implementations

impl Default for VirtualAddressSpace

fn default() -> Self

Returns the “default value” for a type.

Auto Trait Implementations

impl !Freeze for VirtualAddressSpace

impl !RefUnwindSafe for VirtualAddressSpace

impl Send for VirtualAddressSpace

impl Sync for VirtualAddressSpace

impl Unpin for VirtualAddressSpace

impl UnwindSafe for VirtualAddressSpace

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.