riscv: Stop emitting preventive sfence.vma for new userspace mappings with Svvptc

The preventive sfence.vma instructions were emitted because new mappings
must be made visible to the page table walker. Svvptc guarantees that this
happens within a bounded timeframe, so there is no need to sfence.vma on
uarchs that implement this extension; instead we may take gratuitous (but
very unlikely) page faults, similarly to x86 and arm64.
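
For context on the mechanism used in the diffs below: the kernel patches an
ALTERNATIVE() slot at boot, so the instruction stays a nop on cores without
Svvptc and becomes a jump past the flush loop on cores that implement it. A
minimal, self-contained sketch of that control flow, using a plain runtime
flag in place of the patched instruction (all names below are illustrative
and not taken from the commit):

/* Illustrative sketch only -- not the kernel's implementation. */
#include <stdbool.h>

#define PAGE_SIZE	4096UL

static bool cpu_has_svvptc;	/* would be detected from the ISA string at boot */

static void local_flush_tlb_page(unsigned long addr)
{
	/* stand-in for an sfence.vma targeting this address */
	(void)addr;
}

static void update_mmu_cache_sketch(unsigned long address, unsigned int nr)
{
	if (cpu_has_svvptc) {
		/*
		 * Svvptc: the new PTE becomes visible to the page table
		 * walker within a bounded time; the worst case is a
		 * gratuitous (and very unlikely) page fault, so the
		 * preventive flush is simply skipped.
		 */
		return;
	}

	/* Legacy path: eagerly order the new mapping with sfence.vma. */
	while (nr--)
		local_flush_tlb_page(address + nr * PAGE_SIZE);
}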

This drastically reduces the number of sfence.vma instructions emitted:

* Ubuntu boot to login:
Before: ~630k sfence.vma
After:  ~200k sfence.vma

* ltp - mmapstress01
Before: ~45k
After:  ~6.3k

* lmbench - lat_pagefault
Before: ~665k
After:   832 (!)

* lmbench - lat_mmap
Before: ~546k
After:   718 (!)

Signed-off-by: Alexandre Ghiti <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
Alexandre Ghiti authored and palmer-dabbelt committed Sep 15, 2024
1 parent 503638e commit 7a21b2e
Showing 2 changed files with 28 additions and 1 deletion.
16 changes: 15 additions & 1 deletion arch/riscv/include/asm/pgtable.h
@@ -476,6 +476,9 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 		struct vm_area_struct *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
+	asm goto(ALTERNATIVE("nop", "j %l[svvptc]", 0, RISCV_ISA_EXT_SVVPTC, 1)
+		 : : : : svvptc);
+
 	/*
 	 * The kernel assumes that TLBs don't cache invalid entries, but
 	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
@@ -485,12 +488,23 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 	 */
 	while (nr--)
 		local_flush_tlb_page(address + nr * PAGE_SIZE);
+
+svvptc:;
+	/*
+	 * Svvptc guarantees that the new valid pte will be visible within
+	 * a bounded timeframe, so when the uarch does not cache invalid
+	 * entries, we don't have to do anything.
+	 */
 }
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
 #define __HAVE_ARCH_UPDATE_MMU_TLB
-#define update_mmu_tlb update_mmu_cache
+static inline void update_mmu_tlb(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep)
+{
+	flush_tlb_range(vma, address, address + PAGE_SIZE);
+}
 
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp)
13 changes: 13 additions & 0 deletions arch/riscv/mm/pgtable.c
@@ -9,13 +9,26 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pte_t *ptep,
 			  pte_t entry, int dirty)
 {
+	asm goto(ALTERNATIVE("nop", "j %l[svvptc]", 0, RISCV_ISA_EXT_SVVPTC, 1)
+		 : : : : svvptc);
+
 	if (!pte_same(ptep_get(ptep), entry))
 		__set_pte_at(vma->vm_mm, ptep, entry);
 	/*
 	 * update_mmu_cache will unconditionally execute, handling both
 	 * the case that the PTE changed and the spurious fault case.
 	 */
 	return true;
+
+svvptc:
+	if (!pte_same(ptep_get(ptep), entry)) {
+		__set_pte_at(vma->vm_mm, ptep, entry);
+		/* Here only not svadu is impacted */
+		flush_tlb_page(vma, address);
+		return true;
+	}
+
+	return false;
 }
 
 int ptep_test_and_clear_young(struct vm_area_struct *vma,
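As a closing note (not part of the commit): on the Svvptc path
ptep_set_access_flags() may now return false, which the generic fault
handler takes to mean the PTE was already up to date, so a gratuitous fault
simply returns and retries once the new entry becomes visible. A rough
caller-side sketch under that assumption, reusing the names from the diff
above (the helper itself is hypothetical):

/* Illustrative caller-side sketch -- not the generic MM code itself. */
static void spurious_fault_sketch(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int dirty)
{
	if (ptep_set_access_flags(vma, address, ptep, entry, dirty)) {
		/* The PTE really changed: make sure the walker sees it. */
		update_mmu_cache(vma, address, ptep);
	}
	/*
	 * Otherwise the PTE was already correct; with Svvptc the fault was
	 * merely gratuitous and resolves itself once the entry becomes
	 * visible, so there is nothing left to do.
	 */
}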
