Mirror of https://github.com/torvalds/linux.git
commit eccd906484
The commit

  0a9fe8ca84 ("x86/mm: Validate kernel_physical_mapping_init() PTE population")

triggers this warning in SEV guests:

  WARNING: CPU: 0 PID: 0 at arch/x86/include/asm/pgalloc.h:87 phys_pmd_init+0x30d/0x386
  Call Trace:
   kernel_physical_mapping_init+0xce/0x259
   early_set_memory_enc_dec+0x10f/0x160
   kvm_smp_prepare_boot_cpu+0x71/0x9d
   start_kernel+0x1c9/0x50b
   secondary_startup_64+0xa4/0xb0

A SEV guest calls kernel_physical_mapping_init() to clear the encryption
mask from an existing mapping. While doing so, it also splits large pages
into smaller ones. To split a page, kernel_physical_mapping_init()
allocates a new page and updates the existing entry. The
set_{pud,pmd}_safe() helpers trigger a warning when updating an entry
which is already present.

Add a new kernel_physical_mapping_change() helper which uses the
non-safe variants of set_{pmd,pud,p4d}() and the {pmd,pud,p4d}_populate()
routines when updating the entry.

Since kernel_physical_mapping_change() may replace an existing entry
with a new one, the caller is responsible for flushing the TLB at the
end. Change early_set_memory_enc_dec() to use
kernel_physical_mapping_change() when it wants to clear the memory
encryption mask from a page table entry.

[ bp: - massage commit message.
      - flesh out comment according to dhansen's request.
      - align function arguments at opening brace. ]

Fixes: 0a9fe8ca84 ("x86/mm: Validate kernel_physical_mapping_init() PTE population")
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190417154102.22613-1-brijesh.singh@amd.com
arch/x86/mm/mm_internal.h (28 lines, 747 B, C)
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __X86_MM_INTERNAL_H
#define __X86_MM_INTERNAL_H

void *alloc_low_pages(unsigned int num);
static inline void *alloc_low_page(void)
{
	return alloc_low_pages(1);
}

void early_ioremap_page_table_range_init(void);

unsigned long kernel_physical_mapping_init(unsigned long start,
					   unsigned long end,
					   unsigned long page_size_mask);
/*
 * May replace present entries (e.g. when splitting a large page), so
 * the caller is responsible for flushing the TLB afterwards.
 */
unsigned long kernel_physical_mapping_change(unsigned long start,
					     unsigned long end,
					     unsigned long page_size_mask);
void zone_sizes_init(void);

extern int after_bootmem;

void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache);

extern unsigned long tlb_single_page_flush_ceiling;

#endif	/* __X86_MM_INTERNAL_H */
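For reference, a hedged sketch of the caller contract the header implies:
because kernel_physical_mapping_change() may replace present entries and
does not flush the TLB itself, a caller such as early_set_memory_enc_dec()
must flush afterwards. The wrapper name below is hypothetical and only
models the commit message; kernel_physical_mapping_change() and
__flush_tlb_all() are the in-tree interfaces it assumes:

/*
 * Hypothetical caller sketch, modeled on the commit message rather than
 * the verbatim early_set_memory_enc_dec() code in mem_encrypt.c.
 */
static void __init change_mapping_and_flush(unsigned long start,
					    unsigned long end,
					    unsigned long page_size_mask)
{
	/* May split large pages and overwrite present entries. */
	kernel_physical_mapping_change(start, end, page_size_mask);

	/*
	 * kernel_physical_mapping_change() leaves TLB maintenance to
	 * the caller; flush so no stale translations of the old
	 * (e.g. still-encrypted) mapping survive.
	 */
	__flush_tlb_all();
}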