
Commit 4b9c9d8

pvts-matPlaidCat authored and committed
x86/mm: Recompute physical address for every page of per-CPU CEA mapping
jira VULN-3958
cve-bf CVE-2023-0597
commit-author Sean Christopherson <seanjc@google.com>
commit 80d72a8

Recompute the physical address for each per-CPU page in the CPU entry area. A recent commit inadvertently modified cea_map_percpu_pages() such that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-2-seanjc@google.com
(cherry picked from commit 80d72a8)
Signed-off-by: Marcin Wcisło <marcin.wcislo@conclusive.pl>
1 parent 7285ad2 commit 4b9c9d8

File tree

1 file changed: +1, -1 lines changed


arch/x86/mm/cpu_entry_area.c

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
