Pyrogenesis HEAD
Pyrogenesis, an RTS Engine
vm Namespace Reference

Enumerations

enum  PageType { kLarge , kSmall , kDefault }
 

Functions

void * ReserveAddressSpace (size_t size, size_t commitSize=g_LargePageSize, PageType pageType=kDefault, int prot=PROT_READ|PROT_WRITE)
 reserve address space and set the parameters for any later on-demand commits.
 
void ReleaseAddressSpace (void *p, size_t size=0)
 release address space and decommit any memory.
 
bool Commit (uintptr_t address, size_t size, PageType pageType=kDefault, int prot=PROT_READ|PROT_WRITE)
 map physical memory to previously reserved address space.
 
bool Decommit (uintptr_t address, size_t size)
 unmap physical memory.
 
bool Protect (uintptr_t address, size_t size, int prot)
 set the memory protection flags for all pages that intersect the given interval.
 
void * Allocate (size_t size, PageType pageType=kDefault, int prot=PROT_READ|PROT_WRITE)
 reserve address space and commit memory.
 
void Free (void *p, size_t size=0)
 decommit memory and release address space.
 
void BeginOnDemandCommits ()
 install a handler that attempts to commit memory whenever a read/write page fault is encountered.
 
void EndOnDemandCommits ()
 decrements the reference count begun by BeginOnDemandCommits and removes the page fault handler when it reaches 0.
 
void DumpStatistics ()
 
 CACHE_ALIGNED (struct Statistics)
 
static bool ShouldUseLargePages (size_t allocationSize, DWORD allocationType, PageType pageType)
 
static void * AllocateLargeOrSmallPages (uintptr_t address, size_t size, DWORD allocationType, PageType pageType=kDefault, int prot=PROT_READ|PROT_WRITE)
 
 CACHE_ALIGNED (struct AddressRangeDescriptor)
 
static AddressRangeDescriptor * FindDescriptor (uintptr_t address)
 
static LONG CALLBACK VectoredHandler (const PEXCEPTION_POINTERS ep)
 
static Status InitHandler ()
 
static void ShutdownHandler ()
 

Variables

static bool largePageAllocationTookTooLong = false
 
static AddressRangeDescriptor ranges [2 * os_cpu_MaxProcessors]
 
static PVOID handler
 
static ModuleInitState initState { 0 }
 
static std::atomic< intptr_t > references { 0 }
 

Enumeration Type Documentation

◆ PageType

Enumerator
kLarge 
kSmall 
kDefault 

Function Documentation

◆ Allocate()

void * vm::Allocate ( size_t  size,
PageType  pageType = kDefault,
int  prot = PROT_READ|PROT_WRITE 
)

reserve address space and commit memory.

Parameters
size - [bytes] to allocate.
pageType, prot - see ReserveAddressSpace.
Returns
zero-initialized memory aligned to the respective page size.

◆ AllocateLargeOrSmallPages()

static void * vm::AllocateLargeOrSmallPages ( uintptr_t  address,
size_t  size,
DWORD  allocationType,
PageType  pageType = kDefault,
int  prot = PROT_READ|PROT_WRITE 
)
static

◆ BeginOnDemandCommits()

void vm::BeginOnDemandCommits ( )

install a handler that attempts to commit memory whenever a read/write page fault is encountered.

thread-safe.

◆ CACHE_ALIGNED() [1/2]

vm::CACHE_ALIGNED ( struct AddressRangeDescriptor  )

◆ CACHE_ALIGNED() [2/2]

static vm::CACHE_ALIGNED ( struct Statistics  )

◆ Commit()

bool vm::Commit ( uintptr_t  address,
size_t  size,
PageType  pageType = kDefault,
int  prot = PROT_READ|PROT_WRITE 
)

map physical memory to previously reserved address space.

Parameters
address, size - need not be aligned, but this function commits any pages intersecting that interval.
pageType, prot - see ReserveAddressSpace.
Returns
whether memory was successfully committed.

note: committing only maps virtual pages and does not actually allocate page frames. Windows XP uses a first-touch heuristic - the page is taken from the NUMA node whose processor caused the fault. Worker threads should therefore be the first to write to their memory.

(this is surprisingly slow in XP, possibly due to PFN lock contention)

◆ Decommit()

bool vm::Decommit ( uintptr_t  address,
size_t  size 
)

unmap physical memory.

Returns
whether the operation succeeded.

◆ DumpStatistics()

void vm::DumpStatistics ( )

◆ EndOnDemandCommits()

void vm::EndOnDemandCommits ( )

decrements the reference count begun by BeginOnDemandCommits and removes the page fault handler when it reaches 0.

thread-safe.

◆ FindDescriptor()

static AddressRangeDescriptor * vm::FindDescriptor ( uintptr_t  address)
static

◆ Free()

void vm::Free ( void *  p,
size_t  size = 0 
)

decommit memory and release address space.

Parameters
p - a pointer previously returned by Allocate.
size - required by the POSIX implementation and ignored on Windows.

(this differs from ReleaseAddressSpace, which must account for extra padding/alignment to largePageSize.)

◆ InitHandler()

static Status vm::InitHandler ( )
static

◆ Protect()

bool vm::Protect ( uintptr_t  address,
size_t  size,
int  prot 
)

set the memory protection flags for all pages that intersect the given interval.

the pages must currently be committed.

Parameters
prot - memory protection flags: PROT_NONE or a combination of PROT_READ, PROT_WRITE, PROT_EXEC.

◆ ReleaseAddressSpace()

void vm::ReleaseAddressSpace ( void *  p,
size_t  size = 0 
)

release address space and decommit any memory.

Parameters
p - a pointer previously returned by ReserveAddressSpace.
size - required by the POSIX implementation and ignored on Windows.

◆ ReserveAddressSpace()

void * vm::ReserveAddressSpace ( size_t  size,
size_t  commitSize = g_LargePageSize,
PageType  pageType = kDefault,
int  prot = PROT_READ|PROT_WRITE 
)

reserve address space and set the parameters for any later on-demand commits.

Parameters
size - desired number of bytes. any additional space in the last page is also accessible.
commitSize - [bytes] how much to commit each time. larger values reduce the number of page faults at the cost of additional internal fragmentation. must be a multiple of largePageSize unless pageType == kSmall.
pageType - chooses between large/small pages for commits.
prot - memory protection flags for newly committed pages.
Returns
base address (aligned to the respective page size) or 0 if address space/descriptor storage is exhausted (an error dialog will also be raised). must be freed via ReleaseAddressSpace.

◆ ShouldUseLargePages()

static bool vm::ShouldUseLargePages ( size_t  allocationSize,
DWORD  allocationType,
PageType  pageType 
)
static

◆ ShutdownHandler()

static void vm::ShutdownHandler ( )
static

◆ VectoredHandler()

static LONG CALLBACK vm::VectoredHandler ( const PEXCEPTION_POINTERS  ep)
static

Variable Documentation

◆ handler

PVOID vm::handler
static

◆ initState

ModuleInitState vm::initState { 0 }
static

◆ largePageAllocationTookTooLong

bool vm::largePageAllocationTookTooLong = false
static

◆ ranges

AddressRangeDescriptor vm::ranges [2 * os_cpu_MaxProcessors]
static

◆ references

std::atomic<intptr_t> vm::references { 0 }
static