Struct page_table::PageTable64
pub struct PageTable64<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf> { /* private fields */ }
A generic page table struct for 64-bit platforms.
It also tracks all intermediate-level tables, which are deallocated when the PageTable64 itself is dropped.
Implementations
impl<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf> PageTable64<M, PTE, IF>
pub fn try_new() -> PagingResult<Self>
Creates a new page table instance, or returns an error.
It allocates a new page for the root page table.
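For illustration, a minimal sketch of bringing a table up. The x86_64::X64PageTable alias is arch-specific, and FrameAllocIf is a hypothetical PagingIf implementation that must be wired to your own frame allocator; both names are assumptions for this example, not part of this page.

```rust
use memory_addr::{PhysAddr, VirtAddr};
use page_table::x86_64::X64PageTable; // arch-specific alias; adjust for your target
use page_table::{PagingIf, PagingResult};

// Hypothetical frame provider backed by the kernel's physical allocator.
struct FrameAllocIf;

impl PagingIf for FrameAllocIf {
    fn alloc_frame() -> Option<PhysAddr> {
        // Hand out one zeroed 4 KiB frame; returning `None` makes try_new fail.
        unimplemented!("hook into your frame allocator")
    }
    fn dealloc_frame(_paddr: PhysAddr) {
        unimplemented!("return the frame to your allocator")
    }
    fn phys_to_virt(paddr: PhysAddr) -> VirtAddr {
        // Identity translation; real kernels usually add a linear-map offset.
        VirtAddr::from(paddr.as_usize())
    }
}

fn new_table() -> PagingResult<X64PageTable<FrameAllocIf>> {
    // Allocates one frame for the root table via FrameAllocIf::alloc_frame.
    X64PageTable::try_new()
}
```

Dropping the returned table deallocates the root frame and all intermediate-level tables through the same interface.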
pub const fn root_paddr(&self) -> PhysAddr
Returns the physical address of the root page table.
pub fn map(
    &mut self,
    vaddr: VirtAddr,
    target: PhysAddr,
    page_size: PageSize,
    flags: MappingFlags
) -> PagingResult
Maps a virtual page to a physical frame with the given page_size and mapping flags.
The virtual page starts at vaddr, and the physical frame starts at target. If the addresses are not aligned to the page size, they will be aligned down automatically.
Returns Err(PagingError::AlreadyMapped) if the mapping is already present.
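A hedged sketch of a single 4K mapping, written generically over the same type parameters. It assumes the trait and type names in the signatures above are importable from the crate root and that VirtAddr/PhysAddr come from the memory_addr crate; the concrete addresses are arbitrary.

```rust
use memory_addr::{PhysAddr, VirtAddr};
use page_table::{
    GenericPTE, MappingFlags, PageSize, PageTable64, PagingError, PagingIf, PagingMetaData,
    PagingResult,
};

fn map_one<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
) -> PagingResult {
    let vaddr = VirtAddr::from(0x2000_1234); // aligned down to 0x2000_1000 internally
    let paddr = PhysAddr::from(0x8000_0000);
    match pt.map(vaddr, paddr, PageSize::Size4K, MappingFlags::READ | MappingFlags::WRITE) {
        // Treat an existing mapping as success for this sketch.
        Err(PagingError::AlreadyMapped) => Ok(()),
        other => other,
    }
}
```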
pub fn map_overwrite(
    &mut self,
    vaddr: VirtAddr,
    target: PhysAddr,
    page_size: PageSize,
    flags: MappingFlags
) -> PagingResult
Same as PageTable64::map(), except that it expects the entry to already exist and returns an error if it does not. It should be used to edit a PTE, e.g. in a page fault handler.
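A sketch of the fault-handler use the description suggests, under the same import assumptions as the map example above; fault_vaddr and new_frame are hypothetical inputs supplied by the handler.

```rust
use memory_addr::{PhysAddr, VirtAddr};
use page_table::{GenericPTE, MappingFlags, PageSize, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn handle_fault<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
    fault_vaddr: VirtAddr,
    new_frame: PhysAddr,
) -> PagingResult {
    // Rewrite the existing entry in place; errors if no entry exists for fault_vaddr.
    pt.map_overwrite(
        fault_vaddr,
        new_frame,
        PageSize::Size4K,
        MappingFlags::READ | MappingFlags::WRITE,
    )
}
```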
pub fn unmap(&mut self, vaddr: VirtAddr) -> PagingResult<(PhysAddr, PageSize)>
Unmaps the mapping that starts at vaddr.
Returns Err(PagingError::NotMapped) if the mapping is not present.
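A short sketch, same import assumptions as above. Note that unmap returns the frame's physical address rather than freeing it, so recycling the frame is left to the caller.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn unmap_one<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
    vaddr: VirtAddr,
) -> PagingResult {
    // The frame address and page size come back so the caller can recycle the frame.
    let (_paddr, _size) = pt.unmap(vaddr)?;
    Ok(())
}
```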
pub fn map_fault(
    &mut self,
    vaddr: VirtAddr,
    page_size: PageSize,
    flags: MappingFlags
) -> PagingResult
Maps a fault page starting at vaddr.
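The description here is terse, so treat this reading as an assumption: a sketch of reserving a lazily-populated page, where the fault entry is installed now and a real frame is supplied later (e.g. via map_overwrite in the fault handler). Same import assumptions as above.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, MappingFlags, PageSize, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn reserve_lazy<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
    vaddr: VirtAddr,
) -> PagingResult {
    // Install a fault entry; the backing frame is provided on first access.
    pt.map_fault(vaddr, PageSize::Size4K, MappingFlags::READ | MappingFlags::WRITE)
}
```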
pub fn query(
    &self,
    vaddr: VirtAddr
) -> PagingResult<(PhysAddr, MappingFlags, PageSize)>
Queries the mapping that starts at vaddr.
Returns the physical address of the target frame, the mapping flags, and the page size.
Returns Err(PagingError::NotMapped) if the mapping is not present.
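A sketch of a simple address translation built on query, same import assumptions as above.

```rust
use memory_addr::{PhysAddr, VirtAddr};
use page_table::{GenericPTE, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn translate<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &PageTable64<M, PTE, IF>,
    vaddr: VirtAddr,
) -> PagingResult<PhysAddr> {
    // query is read-only; flags and page size are also available if needed.
    let (paddr, _flags, _size) = pt.query(vaddr)?;
    Ok(paddr)
}
```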
pub fn update(
    &mut self,
    vaddr: VirtAddr,
    paddr: Option<PhysAddr>,
    flags: Option<MappingFlags>
) -> PagingResult<PageSize>
Updates the target or flags of the mapping that starts at vaddr. If the corresponding argument is None, it will not be updated.
Returns the page size of the mapping.
Returns Err(PagingError::NotMapped) if the mapping is not present.
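A sketch of a flags-only update (e.g. write-protecting a page for copy-on-write), same import assumptions as above.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, MappingFlags, PageSize, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn make_read_only<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
    vaddr: VirtAddr,
) -> PagingResult<PageSize> {
    // None keeps the current target frame; only the flags change.
    pt.update(vaddr, None, Some(MappingFlags::READ))
}
```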
pub fn map_region(
    &mut self,
    vaddr: VirtAddr,
    paddr: PhysAddr,
    size: usize,
    flags: MappingFlags,
    allow_huge: bool
) -> PagingResult
Maps a contiguous virtual memory region to a contiguous physical memory region with the given mapping flags.
The virtual and physical memory regions start at vaddr and paddr respectively. The region size is size. The addresses and size must be aligned to 4K, otherwise Err(PagingError::NotAligned) is returned.
When allow_huge is true, it will try to map the region with huge pages if possible. Otherwise, it will map the region with 4K pages.
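A sketch mapping a 4 MiB window, same import assumptions as above; the addresses are arbitrary but 4K-aligned as required.

```rust
use memory_addr::{PhysAddr, VirtAddr};
use page_table::{GenericPTE, MappingFlags, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn map_window<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
) -> PagingResult {
    // 4 MiB region; vaddr, paddr, and size are all 4K-aligned as required.
    pt.map_region(
        VirtAddr::from(0xffff_8000_0010_0000),
        PhysAddr::from(0x1_0000_0000),
        4 * 1024 * 1024,
        MappingFlags::READ | MappingFlags::WRITE,
        true, // try 2M/1G huge pages where alignment and remaining size permit
    )
}
```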
pub fn map_fault_region(
    &mut self,
    vaddr: VirtAddr,
    size: usize,
    flags: MappingFlags
) -> PagingResult
Maps a contiguous region of fault pages starting at vaddr (see PageTable64::map_fault). TODO: huge page support.
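As with map_fault, treat the lazy-population reading as an assumption; a sketch under the same import assumptions as above.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, MappingFlags, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn reserve_lazy_region<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
) -> PagingResult {
    // Reserve 16 lazily-populated pages; backing frames are supplied on first fault.
    pt.map_fault_region(
        VirtAddr::from(0x4000_0000),
        16 * 4096,
        MappingFlags::READ | MappingFlags::WRITE,
    )
}
```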
pub fn unmap_region(&mut self, vaddr: VirtAddr, size: usize) -> PagingResult
Unmaps a contiguous virtual memory region.
The region must have been mapped beforehand with PageTable64::map_region, or unexpected behavior may occur.
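A sketch of the matching teardown for the map_region example above (same vaddr and size), same import assumptions.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn unmap_window<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
) -> PagingResult {
    // Must mirror a region previously created with map_region.
    pt.unmap_region(VirtAddr::from(0xffff_8000_0010_0000), 4 * 1024 * 1024)
}
```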
pub fn update_region(
    &mut self,
    vaddr: VirtAddr,
    size: usize,
    flags: MappingFlags
) -> PagingResult
Updates the mapping flags of a contiguous virtual memory region.
The region must have been mapped beforehand with PageTable64::map_region, or an error is returned.
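A sketch that write-protects a whole region, same import assumptions as above.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, MappingFlags, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn protect_region<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &mut PageTable64<M, PTE, IF>,
    vaddr: VirtAddr,
    size: usize,
) -> PagingResult {
    // Drop write permission from every page in the region.
    pt.update_region(vaddr, size, MappingFlags::READ)
}
```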
pub fn walk<F>(&self, limit: usize, func: &F) -> PagingResult
Walks the page table recursively.
When reaching a leaf page table, calls func on the current page table entry. The maximum number of entries enumerated in one table is limited by limit.
The arguments of func are:
- the current level (starting from 0),
- the index of the entry in the current-level table,
- the virtual address mapped by the entry,
- a reference to the entry.
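A sketch of a traversal that counts present leaf entries; the closure's parameters follow the argument order above, and the exact Fn bound on F is an assumption from the crate's source. Same import assumptions as the earlier examples.

```rust
use core::cell::Cell;
use memory_addr::VirtAddr;
use page_table::{GenericPTE, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn count_present<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &PageTable64<M, PTE, IF>,
) -> PagingResult<usize> {
    let count = Cell::new(0usize);
    // Visit at most 512 entries per table (a full table on common 64-bit targets).
    pt.walk(512, &|_level: usize, _index: usize, _vaddr: VirtAddr, pte: &PTE| {
        if pte.is_present() {
            count.set(count.get() + 1);
        }
    })?;
    Ok(count.get())
}
```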
impl<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf> PageTable64<M, PTE, IF>
pub fn get_entry_mut(
    &self,
    vaddr: VirtAddr
) -> PagingResult<(&mut PTE, PageSize)>
Gets a mutable reference to the page table entry for the given virtual address, along with the size of the page it maps.
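A sketch of editing an entry directly, same import assumptions as above; it additionally assumes the GenericPTE::set_flags(flags, is_huge) method from the crate's PTE trait. Note that after editing a live entry, flushing the TLB is the caller's responsibility.

```rust
use memory_addr::VirtAddr;
use page_table::{GenericPTE, MappingFlags, PageSize, PageTable64, PagingIf, PagingMetaData, PagingResult};

fn clear_write<M: PagingMetaData, PTE: GenericPTE, IF: PagingIf>(
    pt: &PageTable64<M, PTE, IF>,
    vaddr: VirtAddr,
) -> PagingResult {
    let (pte, size) = pt.get_entry_mut(vaddr)?;
    // Rewrite the flags in place; the huge-page bit must match the page size.
    let is_huge = !matches!(size, PageSize::Size4K);
    pte.set_flags(MappingFlags::READ, is_huge);
    // Caller must flush the TLB for vaddr afterwards.
    Ok(())
}
```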