NUMA-Aware Scheduling
Optimizes process placement for Non-Uniform Memory Access (NUMA) architectures.
§NUMA Background
Modern multi-socket systems have NUMA characteristics: memory access latency depends on which CPU socket is accessing which memory node. Local memory access is faster than remote access (typical latency ratio: 1.0x local vs. 1.5-2.0x remote).
§Optimization Strategy
- Memory affinity: Schedule processes on CPUs close to their memory
- Load balancing: Balance load within NUMA nodes before cross-node migration
- Page migration: Move pages to local node when access patterns change
- Interleaving: Distribute memory across nodes for bandwidth-intensive workloads
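The first two strategies interact: a task should stay on its memory-local node unless the load imbalance outweighs the cost of remote access. A minimal sketch of that "local first, then cross-node" decision, with illustrative names (`NodeLoad`, `pick_node`, and the threshold parameter are assumptions for illustration, not this crate's actual API):

```rust
/// Per-node load snapshot (hypothetical mirror of the crate's NodeLoad).
struct NodeLoad {
    node_id: usize,
    runnable_tasks: usize,
}

/// Pick a target node for a task whose memory lives on `home_node`.
/// Stay local unless the local node exceeds the least-loaded remote
/// node by more than `imbalance_pct` percent.
fn pick_node(home_node: usize, loads: &[NodeLoad], imbalance_pct: usize) -> usize {
    let local = loads
        .iter()
        .find(|l| l.node_id == home_node)
        .expect("home node present");
    let best_remote = loads
        .iter()
        .filter(|l| l.node_id != home_node)
        .min_by_key(|l| l.runnable_tasks);

    match best_remote {
        // Migrate only past the threshold: the bias toward staying put
        // reflects that local memory access is roughly 1.5-2.0x cheaper.
        Some(r) if local.runnable_tasks * 100 > r.runnable_tasks * (100 + imbalance_pct) => {
            r.node_id
        }
        _ => home_node,
    }
}

fn main() {
    let loads = vec![
        NodeLoad { node_id: 0, runnable_tasks: 8 },
        NodeLoad { node_id: 1, runnable_tasks: 2 },
    ];
    // Node 0 is overloaded relative to node 1: migrate.
    println!("{}", pick_node(0, &loads, 25)); // prints 1
    // Node 1 is home and already least loaded: stay.
    println!("{}", pick_node(1, &loads, 25)); // prints 1
}
```

A real scheduler would also weigh migration cost and cache warmth; the threshold form above only illustrates the ordering of the two strategies.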
§ACPI Topology Parsing
On x86_64, NUMA topology is discovered from ACPI tables:
- SRAT (System Resource Affinity Table): CPU-to-domain and memory-to-domain mappings
- SLIT (System Locality Information Table): inter-node distance matrix
- MADT: CPU enumeration including offline CPUs
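The SLIT distance matrix is conventionally an N x N byte matrix in row-major order, where the diagonal is 10 (local) and remote entries are relative costs (e.g. 20 means twice the local cost). A hedged sketch of a distance lookup over that layout (this `SlitEntry` shape and `get_distance` signature are illustrative assumptions, not necessarily the crate's):

```rust
/// Hypothetical parsed SLIT: locality count plus flattened distance matrix.
struct SlitEntry {
    locality_count: usize,
    /// locality_count * locality_count entries, row-major.
    distances: Vec<u8>,
}

/// Relative distance from node `from` to node `to`;
/// None if either index is out of range.
fn get_distance(slit: &SlitEntry, from: usize, to: usize) -> Option<u8> {
    if from >= slit.locality_count || to >= slit.locality_count {
        return None;
    }
    slit.distances.get(from * slit.locality_count + to).copied()
}

fn main() {
    // Two-node system: local cost 10, remote cost 20 (2.0x).
    let slit = SlitEntry {
        locality_count: 2,
        distances: vec![10, 20, 20, 10],
    };
    println!("{:?}", get_distance(&slit, 0, 0)); // prints Some(10)
    println!("{:?}", get_distance(&slit, 0, 1)); // prints Some(20)
    println!("{:?}", get_distance(&slit, 2, 0)); // prints None
}
```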
Structs§
- CpuInfo - CPU information from MADT parsing.
- NodeLoad - Per-node load statistics
- NumaNode - Represents a discovered NUMA node.
- NumaScheduler - NUMA scheduler
- NumaTopology - NUMA topology information
- SlitEntry - Parsed SLIT (System Locality Information Table) data.
Enums§
- SratEntry - SRAT sub-table entry types.
Functions§
- build_topology - Build a NumaTopology from parsed SRAT and SLIT data.
- get_distance - Get the distance between two NUMA nodes from a parsed SLIT entry.
- get_numa_scheduler - Get the global NUMA scheduler.
- init - Initialize NUMA-aware scheduling.
- parse_madt_topology - Parse the MADT to extract CPU topology.
- parse_slit - Parse the raw SLIT table bytes.
- parse_srat - Parse the raw SRAT table bytes into structured entries.