Merge tag 'fixes-2023-01-14' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock

Pull memblock fix from Mike Rapoport:
 "memblock: always release pages to the buddy allocator in
  memblock_free_late()

  If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages()
  only releases pages to the buddy allocator if they are not in the
  deferred range. This is correct for free pages (as defined by
  for_each_free_mem_pfn_range_in_zone()) because free pages in the
  deferred range will be initialized and released as part of the
  deferred init process.
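
  A sketch of that gate, condensed from the v6.2 memblock_free_pages()
  in mm/page_alloc.c (the KMSAN hook it also contains is covered below):

    void __init memblock_free_pages(struct page *page, unsigned long pfn,
                                    unsigned int order)
    {
            /* Deferred-range pages are left for deferred init to release. */
            if (early_page_uninitialised(pfn))
                    return;
            /* ... KMSAN hook elided ... */
            __free_pages_core(page, order);
    }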

  memblock_free_pages() is called by memblock_free_late(), which is used
  to free reserved ranges after memblock_free_all() has run. All pages
  in reserved ranges have been initialized at that point, and
  accordingly, those pages are not touched by the deferred init process.

  This means that currently, if the pages that memblock_free_late()
  intends to release are in the deferred range, they will never be
  released to the buddy allocator. They will forever be reserved.
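
  Concretely, the pre-patch memblock_free_late() loop (the removed side
  of the diff below) is:

    for (; cursor < end; cursor++) {
            memblock_free_pages(pfn_to_page(cursor), cursor, 0);
            totalram_pages_inc();
    }

  For a PFN in the deferred range, memblock_free_pages() returns without
  doing anything, and no later pass ever releases the page.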

  In addition, memblock_free_pages() calls kmsan_memblock_free_pages(),
  which is also correct for free pages but is not correct for reserved
  pages. KMSAN metadata for reserved pages is initialized by
  kmsan_init_shadow(), which runs shortly before memblock_free_all().
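
  The elided hook, again roughly as in v6.2 mm/page_alloc.c:

    if (!kmsan_memblock_free_pages(page, order)) {
            /* KMSAN will take care of these pages. */
            return;
    }

  Reserved pages freed through this path would reach KMSAN even though
  kmsan_init_shadow() has already set up their metadata.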

  For both of these reasons, memblock_free_pages() should only be called
  for free pages, and memblock_free_late() should call
  __free_pages_core() directly instead.

  One case where this issue can occur in the wild is EFI boot on x86_64.
  The x86 EFI code reserves all EFI boot services memory ranges via
  memblock_reserve() and frees them later via memblock_free_late()
  (efi_reserve_boot_services() and efi_free_boot_services(),
  respectively).

  If any of those ranges happens to fall within the deferred init range,
  the pages will not be released and that memory will be unavailable.
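
  Heavily condensed from arch/x86/platform/efi/quirks.c (the real code
  carries extra quirk handling), the reservation side looks like:

    void __init efi_reserve_boot_services(void)
    {
            efi_memory_desc_t *md;

            for_each_efi_memory_desc(md) {
                    if (md->type != EFI_BOOT_SERVICES_CODE &&
                        md->type != EFI_BOOT_SERVICES_DATA)
                            continue;
                    memblock_reserve(md->phys_addr,
                                     md->num_pages << EFI_PAGE_SHIFT);
            }
    }

  efi_free_boot_services() later walks the same regions and hands each
  one to memblock_free_late().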

  For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI
  (per-zone page counts as reported in /proc/zoneinfo):

    v6.2-rc2:
    Node 0, zone      DMA
          spanned  4095
          present  3999
          managed  3840
    Node 0, zone    DMA32
          spanned  246652
          present  245868
          managed  178867

    v6.2-rc2 + patch:
    Node 0, zone      DMA
          spanned  4095
          present  3999
          managed  3840
    Node 0, zone    DMA32
          spanned  246652
          present  245868
          managed  222816   # +43,949 pages (~172 MB)"

* tag 'fixes-2023-01-14' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
  mm: Always release pages to the buddy allocator in memblock_free_late().
Linus Torvalds 2023-01-14 10:08:08 -06:00
commit 4f43ade45d
2 changed files with 11 additions and 1 deletion

mm/memblock.c

@@ -1640,7 +1640,13 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
         end = PFN_DOWN(base + size);
 
         for (; cursor < end; cursor++) {
-                memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+                /*
+                 * Reserved pages are always initialized by the end of
+                 * memblock_free_all() (by memmap_init() and, if deferred
+                 * initialization is enabled, memmap_init_reserved_pages()), so
+                 * these pages can be released directly to the buddy allocator.
+                 */
+                __free_pages_core(pfn_to_page(cursor), 0);
                 totalram_pages_inc();
         }
 }

tools/testing/memblock/internal.h

@@ -15,6 +15,10 @@ bool mirrored_kernelcore = false;
 
 struct page {};
 
+void __free_pages_core(struct page *page, unsigned int order)
+{
+}
+
 void memblock_free_pages(struct page *page, unsigned long pfn,
                          unsigned int order)
 {