Commit 62f1dc9

Discontiguous PageProtect GC support (#946)
This CL fixes PageProtect's memory unprotect calls. When using a discontiguous space:

* If the allocated pages are not in a new chunk, unprotect the pages as before.
* If the pages are in a new chunk, there are two cases:
  * If the new chunk is not mapped, there is no need to unprotect.
  * If the new chunk is mapped, the chunk was allocated and released in a previous GC epoch, so the GC should unprotect these pages.
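The cases above can be distilled into a small decision function. This is a hypothetical sketch for illustration only, not the mmtk-core API; all names (`should_unprotect`, `chunk_mapped`, etc.) are invented:

```rust
/// Decide whether freshly allocated pages must be munprotect-ed,
/// following the commit's case analysis (names are hypothetical).
fn should_unprotect(
    protect_on_release: bool, // the PageProtect option is enabled
    new_chunk: bool,          // the pages landed in a newly acquired chunk
    contiguous: bool,         // the space is contiguous
    chunk_mapped: bool,       // the mmapper already mapped this chunk
) -> bool {
    if !protect_on_release {
        return false;
    }
    if !new_chunk {
        // Existing chunk: its pages were protected on a previous release,
        // so they must be unprotected (after waiting for the mapping).
        return true;
    }
    // New chunk in a discontiguous space: unprotect only if it is already
    // mapped, i.e. it was allocated and released in an earlier GC epoch.
    !contiguous && chunk_mapped
}

fn main() {
    // Old chunk: always unprotect.
    assert!(should_unprotect(true, false, false, true));
    // New, unmapped discontiguous chunk: nothing to unprotect.
    assert!(!should_unprotect(true, true, false, false));
    // New but previously mapped discontiguous chunk: unprotect.
    assert!(should_unprotect(true, true, false, true));
    // Option disabled: never unprotect.
    assert!(!should_unprotect(false, false, true, true));
}
```

A new chunk in a contiguous space also needs no unprotect here, which the `!contiguous` condition covers.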
1 parent e8b826f commit 62f1dc9

File tree

1 file changed: +22 −13 lines


src/util/heap/freelistpageresource.rs

Lines changed: 22 additions & 13 deletions
```diff
@@ -147,19 +147,28 @@ impl<VM: VMBinding> PageResource<VM> for FreeListPageResource<VM> {
         let rtn = self.start + conversions::pages_to_bytes(page_offset as _);
         // The meta-data portion of reserved Pages was committed above.
         self.commit_pages(reserved_pages, required_pages, tls);
-        if self.protect_memory_on_release && !new_chunk {
-            // This check is necessary to prevent us from mprotecting an address that is not yet mapped by mmapper.
-            // See https://github.com/mmtk/mmtk-core/issues/400.
-            // It is possible that one thread gets a new chunk, and returns from this function. However, the Space.acquire()
-            // has not yet call ensure_mapped() for it. So the chunk is not yet mmapped. At this point, if another thread calls
-            // this function, and get a few more pages from the same chunk, it is no longer seen as 'new_chunk', and we
-            // will try to munprotect on it. But the chunk may not yet be mapped.
-            //
-            // If we want to improve and get rid of this loop, we need to move this munprotect to anywhere after the ensure_mapped() call
-            // in Space.acquire(). We can either move it the option of 'protect_on_release' to space, or have a call to page resource
-            // after ensure_mapped(). However, I think this is sufficient given that this option is only used for PageProtect for debugging use.
-            while !MMAPPER.is_mapped_address(rtn) {}
-            self.munprotect(rtn, self.free_list.size(page_offset as _) as _)
+        if self.protect_memory_on_release {
+            if !new_chunk {
+                // This check is necessary to prevent us from mprotecting an address that is not yet mapped by mmapper.
+                // See https://github.com/mmtk/mmtk-core/issues/400.
+                // It is possible that one thread gets a new chunk, and returns from this function. However, the Space.acquire()
+                // has not yet call ensure_mapped() for it. So the chunk is not yet mmapped. At this point, if another thread calls
+                // this function, and get a few more pages from the same chunk, it is no longer seen as 'new_chunk', and we
+                // will try to munprotect on it. But the chunk may not yet be mapped.
+                //
+                // If we want to improve and get rid of this loop, we need to move this munprotect to anywhere after the ensure_mapped() call
+                // in Space.acquire(). We can either move it the option of 'protect_on_release' to space, or have a call to page resource
+                // after ensure_mapped(). However, I think this is sufficient given that this option is only used for PageProtect for debugging use.
+                while !new_chunk && !MMAPPER.is_mapped_address(rtn) {}
+                self.munprotect(rtn, self.free_list.size(page_offset as _) as _)
+            } else if !self.common().contiguous && new_chunk {
+                // Don't unprotect if this is a new unmapped discontiguous chunk
+                // For a new mapped discontiguous chunk, this should previously be released and protected by us.
+                // We still need to unprotect it.
+                if MMAPPER.is_mapped_address(rtn) {
+                    self.munprotect(rtn, self.free_list.size(page_offset as _) as _)
+                }
+            }
         };
         Result::Ok(PRAllocResult {
             start: rtn,
```
