Commit Graph

1073381 Commits

Author SHA1 Message Date
Christoph Hellwig
aaf7d70cc5 mm: refactor the ZONE_DEVICE handling in migrate_vma_pages
Make the flow a little more clear and prepare for adding a new
ZONE_DEVICE memory type.

Link: https://lkml.kernel.org/r/20220210072828.2930359-13-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
Christoph Hellwig
1776c0d102 mm: refactor the ZONE_DEVICE handling in migrate_vma_insert_page
Make the flow a little more clear and prepare for adding a new
ZONE_DEVICE memory type.

Link: https://lkml.kernel.org/r/20220210072828.2930359-12-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
Christoph Hellwig
f9f38f78c5 mm: refactor check_and_migrate_movable_pages
Remove up to two levels of indentation by using continue statements
and move variables to local scope where possible.

Link: https://lkml.kernel.org/r/20220210072828.2930359-11-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
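
The pattern that commit applies, in a hedged stand-alone sketch (the loop body and names are illustrative, not the kernel's gup code):

    #include <stddef.h>

    static void scan(int *items, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            /* Invert the condition and continue: the common path below
             * stays at one indentation level instead of nesting. */
            if (items[i] <= 0)
                continue;

            int doubled = items[i] * 2;  /* variable moved to local scope */
            items[i] = doubled;
        }
    }
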
Christoph Hellwig
5cbf9942c9 mm: generalize the pgmap based page_free infrastructure
Key off on the existence of ->page_free to prepare for adding support for
more pgmap types that are device managed and thus need the free callback.

Link: https://lkml.kernel.org/r/20220210072828.2930359-10-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
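
The "key off ->page_free" idea above reduces to a capability test on a function pointer. A minimal sketch with stand-in types, not the kernel's dev_pagemap structures:

    /* A pgmap needs free notifications iff it supplies the callback,
     * regardless of which pgmap type it is. */
    struct fake_pgmap_ops { void (*page_free)(void *page); };
    struct fake_pgmap { const struct fake_pgmap_ops *ops; };

    static int needs_free_callback(const struct fake_pgmap *pgmap)
    {
        return pgmap->ops && pgmap->ops->page_free;
    }
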
Christoph Hellwig
f74515986e fsdax: depend on ZONE_DEVICE || FS_DAX_LIMITED
Add a dependency on ZONE_DEVICE support or the s390-specific limited DAX
support, as one of the two is required at runtime for the fsdax code to
actually work.

Link: https://lkml.kernel.org/r/20220210072828.2930359-9-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
Christoph Hellwig
27674ef6c7 mm: remove the extra ZONE_DEVICE struct page refcount
ZONE_DEVICE struct pages have an extra reference count that complicates
the code for put_page() and several places in the kernel that need to
check the reference count to see that a page is not being used (gup,
compaction, migration, etc.). Clean up the code so the reference count
doesn't need to be treated specially for ZONE_DEVICE pages.

Note that this excludes the special idle page wakeup for fsdax pages,
which still happens at refcount 1.  This is a separate issue and will
be sorted out later.  Given that only fsdax pages require the
notification when the refcount hits 1 now, the PAGEMAP_OPS Kconfig
symbol can go away and be replaced with a FS_DAX check for this hook
in the put_page fastpath.

Based on an earlier patch from Ralph Campbell <rcampbell@nvidia.com>.

Link: https://lkml.kernel.org/r/20220210072828.2930359-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
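
The refcounting change above, modelled on a toy page type: previously ZONE_DEVICE pages idled at refcount 1 and put_page() had to special-case the 2-to-1 transition; afterwards every page frees at refcount 0. A hedged sketch, not the kernel's code:

    #include <stdbool.h>
    #include <stdio.h>

    struct fake_page { int refcount; bool zone_device; };

    static void put_page(struct fake_page *page)
    {
        if (--page->refcount)
            return;
        /* One rule for all page types now: act only at refcount 0. */
        if (page->zone_device)
            printf("hand the page back to the owning driver\n");
        else
            printf("free the page to the page allocator\n");
    }
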
Christoph Hellwig
dc90f0846d mm: don't include <linux/memremap.h> in <linux/mm.h>
Move the check for the actual pgmap types that need the free-at-refcount-one
behavior into the out-of-line helper, and thus avoid the need to
pull memremap.h into mm.h.

Link: https://lkml.kernel.org/r/20220210072828.2930359-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
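
The header-dependency trick in miniature: the widely included header needs only an incomplete type and a declaration, while the one .c file that defines the helper pulls in the heavy header. File boundaries and names below are illustrative:

    /* mm.h-like header: no heavy include required here. */
    struct fake_page;
    int page_needs_special_free(struct fake_page *page);

    /* memremap.c-like file: the only place needing full definitions,
     * so the heavy header is included here and nowhere else. */
    int page_needs_special_free(struct fake_page *page)
    {
        (void)page;  /* ... would inspect page->pgmap->type ... */
        return 1;
    }
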
Christoph Hellwig
895749455f mm: simplify freeing of devmap managed pages
Make put_devmap_managed_page return whether or not it took charge of the
page, and remove the separate page_is_devmap_managed helper.

Link: https://lkml.kernel.org/r/20220210072828.2930359-6-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
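
A sketch of the new calling convention, one call that both tests and acts, with simplified stand-in types (the function names mirror the commit, the bodies do not):

    #include <stdbool.h>

    struct fake_page { bool devmap_managed; int refcount; };

    /* Returns true if it took charge of the page, false if the caller
     * should fall through to the ordinary refcounting path. */
    static bool put_devmap_managed_page(struct fake_page *page)
    {
        if (!page->devmap_managed)
            return false;
        /* ... devmap-specific release work ... */
        return true;
    }

    static void put_page(struct fake_page *page)
    {
        if (put_devmap_managed_page(page))
            return;          /* no separate page_is_devmap_managed test */
        page->refcount--;    /* ordinary path */
    }
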
Christoph Hellwig
75e55d8a10 mm: move free_devmap_managed_page to memremap.c
free_devmap_managed_page has nothing to do with the code in swap.c;
move it to live with the rest of the code for devmap handling.

Link: https://lkml.kernel.org/r/20220210072828.2930359-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
Christoph Hellwig
730ff52194 mm: remove pointless includes from <linux/hmm.h>
hmm.h pulls in the world for no good reason at all.  Remove the
includes and push a few of them into the users instead.

Link: https://lkml.kernel.org/r/20220210072828.2930359-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
Christoph Hellwig
5c3f1f9cc4 mm: remove the __KERNEL__ guard from <linux/mm.h>
__KERNEL__ ifdefs don't make sense outside of include/uapi/.

Link: https://lkml.kernel.org/r/20220210072828.2930359-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
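
For context, the guard only earns its keep in exported headers, where the uapi installation step strips kernel-only parts for userspace; a header that is never exported gains nothing from it. An illustrative uapi-style header, not taken from the tree:

    #ifndef _UAPI_EXAMPLE_H
    #define _UAPI_EXAMPLE_H

    struct example_args { int fd; int flags; };  /* visible to userspace */

    #ifdef __KERNEL__
    int example_internal_setup(void);  /* stripped from the exported copy */
    #endif

    #endif /* _UAPI_EXAMPLE_H */
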
Christoph Hellwig
be8a80b3e2 mm: remove a pointless CONFIG_ZONE_DEVICE check in memremap_pages
Patch series "start sorting out the ZONE_DEVICE refcount mess", v2.

This series removes the offset-by-one refcount for ZONE_DEVICE pages that
are freed back to the driver owning them, which is just device private
ones for now, but also the planned device coherent pages and the enhanced
p2p ones pending.

It does not address the fsdax pages yet, which will be attacked in a
follow-on series.

This patch (of 27):

memremap.c is only built when CONFIG_ZONE_DEVICE is set, so remove
the superfluous extra check.

Link: https://lkml.kernel.org/r/20220210072828.2930359-1-hch@lst.de
Link: https://lkml.kernel.org/r/20220210072828.2930359-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:33 -05:00
Hugh Dickins
c8263bd605 mm/munlock: mlock_vma_page() check against VM_SPECIAL
Although mmap_region() and mlock_fixup() take care that VM_LOCKED
is never left set on a VM_SPECIAL vma, there is an interval while
file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
still be set while VM_SPECIAL bits are added: so mlock_vma_page()
should ignore VM_LOCKED while any VM_SPECIAL bits are set.

This showed up as a "Bad page" still mlocked, when vfree()ing pages
which had been vm_inserted by remap_vmalloc_range_partial(): while
release_pages() and __page_cache_release(), and so put_page(), catch
pages still mlocked when freeing (and clear_page_mlock() caught them
when unmapping), the vfree() path is unprepared for them: fix it?
but these pages should not have been mlocked in the first place.

I assume that an mlockall(MCL_FUTURE) had been done in the past; or
maybe the user got to specify MAP_LOCKED on a vmalloc'ing driver mmap.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03 12:47:07 -05:00
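
The rule the commit describes reduces to one mask comparison. A sketch with made-up bit values (VM_SPECIAL really is a mask of the four flags shown, but take the constants here as illustrative):

    #include <stdbool.h>

    #define VM_LOCKED     0x01UL
    #define VM_IO         0x02UL
    #define VM_DONTEXPAND 0x04UL
    #define VM_PFNMAP     0x08UL
    #define VM_MIXEDMAP   0x10UL
    #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)

    /* Mlock only when VM_LOCKED is set and no VM_SPECIAL bit is. */
    static bool may_mlock(unsigned long vm_flags)
    {
        return (vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
    }
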
Hugh Dickins
47d4f3eeef mm/thp: shrink_page_list() avoid splitting VM_LOCKED THP
4.8 commit 7751b2da6b ("vmscan: split file huge pages before paging
them out") inserted a split_huge_page_to_list() into shrink_page_list()
without considering the mlock case: no problem if the page has already
been marked as Mlocked (the !page_evictable check much higher up will
have skipped all this), but it has always been the case that races or
omissions in setting Mlocked can rely on page reclaim to detect this
and correct it before actually reclaiming - and that remains so, but
what a shame if a hugepage is needlessly split before discovering it.

It is surprising that page_check_references() returns PAGEREF_RECLAIM
when VM_LOCKED, but there was a good reason for that: try_to_unmap_one()
is where the condition is detected and corrected; and until now it could
not be done in page_referenced_one(), because that does not always have
the page locked.  Now that mlock's requirement for page lock has gone,
copy try_to_unmap_one()'s mlock restoration into page_referenced_one(),
and let page_check_references() return PAGEREF_ACTIVATE in this case.

But page_referenced_one() may find a pte mapping one part of a hugepage:
what hold should a pte mapped in a VM_LOCKED area exert over the entire
huge page?  That's debatable.  The approach taken here is to treat that
pte mapping in page_referenced_one() as if not VM_LOCKED, and if no
VM_LOCKED pmd mapping is found later in the walk, and lack of reference
permits, then PAGEREF_RECLAIM takes it to attempted splitting as before.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:59:50 -05:00
Hugh Dickins
6d9df8a588 mm/thp: collapse_file() do try_to_unmap(TTU_BATCH_FLUSH)
collapse_file() is using unmap_mapping_pages(1) on each small page found
mapped, unlike others (reclaim, migration, splitting, memory-failure) who
use try_to_unmap().  There are four advantages to try_to_unmap(): first,
its TTU_IGNORE_MLOCK option now avoids leaving mlocked page in pagevec;
second, its vma lookup uses i_mmap_lock_read() not i_mmap_lock_write();
third, it breaks out early if page is not mapped everywhere it might be;
fourth, its TTU_BATCH_FLUSH option can be used, as in page reclaim, to
save up all the TLB flushing until all of the pages have been unmapped.

Wild guess: perhaps it was originally written to use try_to_unmap(),
but hit the VM_BUG_ON_PAGE(page_mapped) after unmapping, because without
TTU_SYNC it may skip page table locks; but unmap_mapping_pages() never
skips them, so that fixed the issue.  I did once hit that VM_BUG_ON_PAGE()
since making this change: we could pass TTU_SYNC here, but I think just
delete the check - the race is very rare, this is an ordinary small page
so we don't need to be so paranoid about mapcount surprises, and the
page_ref_freeze() just below already handles the case adequately.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:59:45 -05:00
Hugh Dickins
b74355078b mm/munlock: page migration needs mlock pagevec drained
Page migration of a VM_LOCKED page tends to fail, because when the old
page is unmapped, it is put on the mlock pagevec with raised refcount,
which then fails the freeze.

At first I thought this would be fixed by a local mlock_page_drain() at
the upper rmap_walk() level - which would have nicely batched all the
munlocks of that page; but tests show that the task can too easily move
to another cpu, leaving pagevec residue behind which fails the migration.

So try_to_migrate_one() drains the local pagevec after page_remove_rmap()
from a VM_LOCKED vma; and do the same in try_to_unmap_one(), whose
TTU_IGNORE_MLOCK users would want the same treatment; and do the same
in remove_migration_pte() - not important when successfully inserting
a new page, but necessary when hoping to retry after failure.

Any new pagevec runs the risk of adding a new way of stranding, and we
might discover other corners where mlock_page_drain() or lru_add_drain()
would now help.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:59:40 -05:00
Hugh Dickins
2fbb0c10d1 mm/munlock: mlock_page() munlock_page() batch by pagevec
A weakness of the page->mlock_count approach is the need for lruvec lock
while holding page table lock.  That is not an overhead we would allow on
normal pages, but I think acceptable just for pages in an mlocked area.
But let's try to amortize the extra cost by gathering on per-cpu pagevec
before acquiring the lruvec lock.

I have an unverified conjecture that the mlock pagevec might work out
well for delaying the mlock processing of new file pages until they have
got off lru_cache_add()'s pagevec and on to LRU.

The initialization of page->mlock_count is subject to races and awkward:
0 or !!PageMlocked or 1?  Was it wrong even in the implementation before
this commit, which just widens the window?  I haven't gone back to think
it through.  Maybe someone can point out a better way to initialize it.

Bringing lru_cache_add_inactive_or_unevictable()'s mlock initialization
into mm/mlock.c has helped: mlock_new_page(), using the mlock pagevec,
rather than lru_cache_add()'s pagevec.

Experimented with various orderings: the right thing seems to be for
mlock_page() and mlock_new_page() to TestSetPageMlocked before adding to
pagevec, but munlock_page() to leave TestClearPageMlocked to the later
pagevec processing.

Dropped the VM_BUG_ON_PAGE(PageTail)s this time around: they have made
their point, and the thp_nr_pages() calls already contain a VM_BUG_ON_PGFLAGS()
for that.

This still leaves acquiring lruvec locks under page table lock each time
the pagevec fills (or a THP is added): which I suppose is rather silly,
since they sit on pagevec waiting to be processed long after page table
lock has been dropped; but I'm disinclined to uglify the calling sequence
until some load shows an actual problem with it (nothing wrong with
taking lruvec lock under page table lock, just "nicer" to do it less).

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:59:22 -05:00
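
The amortization being described, in a self-contained sketch: gather pages on a small buffer and take the expensive lock once per flush instead of once per page. The pvec names and PVEC_SIZE are stand-ins for the kernel's pagevec machinery:

    #include <stdio.h>

    #define PVEC_SIZE 15

    struct pvec { int n; void *pages[PVEC_SIZE]; };

    static void pvec_flush(struct pvec *pv)
    {
        /* take the lruvec lock once ... process pv->n pages ... unlock */
        printf("flushed %d pages under one lock\n", pv->n);
        pv->n = 0;
    }

    static void mlock_page(struct pvec *pv, void *page)
    {
        pv->pages[pv->n++] = page;   /* cheap add, no lock taken */
        if (pv->n == PVEC_SIZE)
            pvec_flush(pv);          /* lock cost amortized over 15 pages */
    }

    int main(void)
    {
        struct pvec pv = { 0, { 0 } };
        int dummy[20];
        for (int i = 0; i < 20; i++)
            mlock_page(&pv, &dummy[i]);  /* flushes once at page 15 */
        pvec_flush(&pv);                 /* drain the remaining 5 */
        return 0;
    }
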
Hugh Dickins
2262ace607 mm/munlock: delete smp_mb() from __pagevec_lru_add_fn()
My reading of comment on smp_mb__after_atomic() in __pagevec_lru_add_fn()
says that it can now be deleted; and that remains so when the next patch
is added.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:57:10 -05:00
Hugh Dickins
c3096e6782 mm/migrate: __unmap_and_move() push good newpage to LRU
Compaction, NUMA page movement, THP collapse/split, and memory failure
do isolate unevictable pages from their "LRU", losing the record of
mlock_count in doing so (isolators are likely to use page->lru for their
own private lists, so mlock_count has to be presumed lost).

That's unfortunate, and we should put in some work to correct that: one
can imagine a function to build up the mlock_count again - but it would
require i_mmap_rwsem for read, so be careful where it's called.  Or
page_referenced_one() and try_to_unmap_one() might do that extra work.

But one place that can very easily be improved is page migration's
__unmap_and_move(): a small adjustment to where the successful new page
is put back on LRU, and its mlock_count (if any) is built back up by
remove_migration_ptes().

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:57:06 -05:00
Hugh Dickins
34b6792380 mm/munlock: mlock_pte_range() when mlocking or munlocking
Fill in missing pieces: reimplementation of munlock_vma_pages_range(),
required to lower the mlock_counts when munlocking without munmapping;
and its complement, implementation of mlock_vma_pages_range(), required
to raise the mlock_counts on pages already there when a range is mlocked.

Combine them into just the one function mlock_vma_pages_range(), using
walk_page_range() to run mlock_pte_range().  This approach fixes the
"Very slow unlockall()" of unpopulated PROT_NONE areas, reported in
https://lore.kernel.org/linux-mm/70885d37-62b7-748b-29df-9e94f3291736@gmail.com/

Munlock clears VM_LOCKED at the start, under exclusive mmap_lock; but if
a racing truncate or holepunch (depending on i_mmap_rwsem) gets to the
pte first, it will not try to munlock the page: leaving release_pages()
to correct it when the last reference to the page is gone - that's okay,
a page is not evictable anyway while it is held by an extra reference.

Mlock sets VM_LOCKED at the start, under exclusive mmap_lock; but if
a racing remove_migration_pte() or try_to_unmap_one() (depending on
i_mmap_rwsem) gets to the pte first, it will try to mlock the page,
then mlock_pte_range() mlocks it a second time.  This is harder to
reproduce, but a more serious race because it could leave the page
unevictable indefinitely though the area is munlocked afterwards.
Guard against it by setting the (inappropriate) VM_IO flag,
and modifying mlock_vma_page() to decline such vmas.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:57:02 -05:00
Hugh Dickins
07ca760673 mm/munlock: maintain page->mlock_count while unevictable
Previous patches have been preparatory: now implement page->mlock_count.
The ordering of the "Unevictable LRU" is of no significance, and there is
no point holding unevictable pages on a list: place page->mlock_count to
overlay page->lru.prev (since page->lru.next is overlaid by compound_head,
which needs to be even so as not to satisfy PageTail - though 2 could be
added instead of 1 for each mlock, if that's ever an improvement).

But it's only safe to rely on or modify page->mlock_count while lruvec
lock is held and page is on unevictable "LRU" - we can save lots of edits
by continuing to pretend that there's an imaginary LRU here (there is an
unevictable count which still needs to be maintained, but not a list).

The mlock_count technique suffers from an unreliability much like with
page_mlock(): while someone else has the page off LRU, not much can
be done.  As before, err on the safe side (behave as if mlock_count 0),
and let try_to_unmap_one() move the page to unevictable if reclaim finds
out later on - a few misplaced pages don't matter, what we want to avoid
is imbalancing reclaim by flooding evictable lists with unevictable pages.

I am not a fan of "if (!isolate_lru_page(page)) putback_lru_page(page);":
if we have taken lruvec lock to get the page off its present list, then
we save everyone trouble (and however many extra atomic ops) by putting
it on its destination list immediately.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:56:57 -05:00
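
The overlay described above, with simplified stand-ins for struct page and struct list_head: the count reuses the storage of lru.prev while the page sits on the imaginary unevictable "LRU", and the word overlaying lru.next (compound_head) must stay even so the PageTail tag bit reads clear:

    struct list_head { struct list_head *next, *prev; };

    struct fake_page {
        union {
            struct list_head lru;         /* while on a real LRU list */
            struct {
                void *compound_head;      /* overlays lru.next; kept even */
                unsigned int mlock_count; /* overlays lru.prev */
            };
        };
    };

    int main(void)
    {
        struct fake_page p;
        p.mlock_count = 3;  /* count mlocks instead of linking the page */
        return (int)p.mlock_count;
    }
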
Hugh Dickins
b109b87050 mm/munlock: replace clear_page_mlock() by final clearance
Placing munlock_vma_page() at the end of page_remove_rmap() shifts most
of the munlocking to clear_page_mlock(), since PageMlocked is typically
still set when mapcount has fallen to 0.  That is not what we want: we
want /proc/vmstat's unevictable_pgs_cleared to remain as a useful check
on the integrity of the mlock/munlock protocol - small numbers are
not surprising, but big numbers mean the protocol is not working.

That could be easily fixed by placing munlock_vma_page() at the start of
page_remove_rmap(); but later in the series we shall want to batch the
munlocking, and that too would tend to leave PageMlocked still set at
the point when it is checked.

So delete clear_page_mlock() now: leave it instead to release_pages()
(and __page_cache_release()) to do this backstop clearing of Mlocked,
when page refcount has fallen to 0.  If a pinned page occasionally gets
counted as Mlocked and Unevictable until it is unpinned, that's okay.

A slightly regrettable side-effect of this change is that, since
release_pages() and __page_cache_release() may be called at interrupt
time, those places which update NR_MLOCK with interrupts enabled
had better use mod_zone_page_state() than __mod_zone_page_state()
(but holding the lruvec lock always has interrupts disabled).

This change, forcing Mlocked off when refcount 0 instead of earlier
when mapcount 0, is not fundamental: it can be reversed if performance
or something else is found to suffer; but this is the easiest way to
separate the stats - let's not complicate that without good reason.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:56:53 -05:00
Hugh Dickins
cea86fe246 mm/munlock: rmap call mlock_vma_page() munlock_vma_page()
Add vma argument to mlock_vma_page() and munlock_vma_page(), make them
inline functions which check (vma->vm_flags & VM_LOCKED) before calling
mlock_page() and munlock_page() in mm/mlock.c.

Add bool compound to mlock_vma_page() and munlock_vma_page(): this is
because we have understandable difficulty in accounting pte maps of THPs,
and if passed a PageHead page, mlock_page() and munlock_page() cannot
tell whether it's a pmd map to be counted or a pte map to be ignored.

Add vma arg to page_add_file_rmap() and page_remove_rmap(), like the
others, and use that to call mlock_vma_page() at the end of the page
adds, and munlock_vma_page() at the end of page_remove_rmap() (end or
beginning? unimportant, but end was easier for assertions in testing).

No page lock is required (although almost all adds happen to hold it):
delete the "Serialize with page migration" BUG_ON(!PageLocked(page))s.
Certainly page lock did serialize with page migration, but I'm having
difficulty explaining why that was ever important.

Mlock accounting on THPs has been hard to define, differed between anon
and file, involved PageDoubleMap in some places and not others, required
clear_page_mlock() at some points.  Keep it simple now: just count the
pmds and ignore the ptes, there is no reason for ptes to undo pmd mlocks.

page_add_new_anon_rmap() callers unchanged: they have long been calling
lru_cache_add_inactive_or_unevictable(), which does its own VM_LOCKED
handling (it also checks for not VM_SPECIAL: I think that's overcautious,
and inconsistent with other checks, that mmap_region() already prevents
VM_LOCKED on VM_SPECIAL; but haven't quite convinced myself to change it).

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:56:48 -05:00
Hugh Dickins
a213e5cf71 mm/munlock: delete munlock_vma_pages_all(), allow oomreap
munlock_vma_pages_range() will still be required, when munlocking but
not munmapping a set of pages; but when unmapping a pte, the mlock count
will be maintained in much the same way as it will be maintained when
mapping in the pte.  Which removes the need for munlock_vma_pages_all()
on mlocked vmas when munmapping or exiting: eliminating the catastrophic
contention on i_mmap_rwsem, and the need for page lock on the pages.

There is still a need to update locked_vm accounting according to the
munmapped vmas when munmapping: do that in detach_vmas_to_be_unmapped().
exit_mmap() does not need locked_vm updates, so delete unlock_range().

And wasn't I the one who forbade the OOM reaper to attack mlocked vmas,
because of the uncertainty in blocking on all those page locks?
No fear of that now, so permit the OOM reaper on mlocked vmas.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:56:44 -05:00
Hugh Dickins
b67bf49ce7 mm/munlock: delete FOLL_MLOCK and FOLL_POPULATE
If counting page mlocks, we must not double-count: follow_page_pte() can
tell if a page has already been Mlocked or not, but cannot tell if a pte
has already been counted or not: that will have to be done when the pte
is mapped in (which lru_cache_add_inactive_or_unevictable() already tracks
for new anon pages, but there's no such tracking yet for others).

Delete all the FOLL_MLOCK code - faulting in the missing pages will do
all that is necessary, without special mlock_vma_page() calls from here.

But then FOLL_POPULATE turns out to serve no purpose - it was there so
that its absence would tell faultin_page() not to faultin page when
setting up VM_LOCKONFAULT areas; but if there's no special work needed
here for mlock, then there's no work at all here for VM_LOCKONFAULT.

Have I got that right?  I've not looked into the history, but see that
FOLL_POPULATE goes back before VM_LOCKONFAULT: did it serve a different
purpose before?  Ah, yes, it was used to skip the old stack guard page.

And is it intentional that COW is not broken on existing pages when
setting up a VM_LOCKONFAULT area?  I can see that being argued either
way, and have no reason to disagree with current behaviour.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:56:36 -05:00
Hugh Dickins
ebcbc6ea7d mm/munlock: delete page_mlock() and all its works
We have recommended some applications to mlock their userspace, but that
turns out to be counter-productive: when many processes mlock the same
file, contention on rmap's i_mmap_rwsem can become intolerable at exit: it
is needed for write, to remove any vma mapping that file from rmap's tree;
but hogged for read by those with mlocks calling page_mlock() (formerly
known as try_to_munlock()) on *each* page mapped from the file (the
purpose being to find out whether another process has the page mlocked,
so therefore it should not be unmlocked yet).

Several optimizations have been made in the past: one is to skip
page_mlock() when mapcount tells that nothing else has this page
mapped; but that doesn't help at all when others do have it mapped.
This time around, I initially intended to add a preliminary search
of the rmap tree for overlapping VM_LOCKED ranges; but that gets
messy with locking order, when in doubt whether a page is actually
present; and risks adding even more contention on the i_mmap_rwsem.

A solution would be much easier, if only there were space in struct page
for an mlock_count... but actually, most of the time, there is space for
it - an mlocked page spends most of its life on an unevictable LRU, but
since 3.18 removed the scan_unevictable_pages sysctl, that "LRU" has
been redundant.  Let's try to reuse its page->lru.

But leave that until a later patch: in this patch, clear the ground by
removing page_mlock(), and all the infrastructure that has gathered
around it - which mostly hinders understanding, and will make reviewing
new additions harder.  Don't mind those old comments about THPs, they
date from before 4.5's refcounting rework: splitting is not a risk here.

Just keep a minimal version of munlock_vma_page(), as reminder of what it
should attend to (in particular, the odd way PGSTRANDED is counted out of
PGMUNLOCKED), and likewise a stub for munlock_vma_pages_range().  Move
unchanged __mlock_posix_error_return() out of the way, down to above its
caller: this series then makes no further change after mlock_fixup().

After this and each following commit, the kernel builds, boots and runs;
but with deficiencies which may show up in testing of mlock and munlock.
The system calls succeed or fail as before, and mlock remains effective
in preventing page reclaim; but meminfo's Unevictable and Mlocked amounts
may be shown too low after mlock, grow, then stay too high after munlock:
with previously mlocked pages remaining unevictable for too long, until
finally unmapped and freed and counts corrected. Normal service will be
resumed in "mm/munlock: mlock_pte_range() when mlocking or munlocking".

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-02-17 11:56:13 -05:00
Linus Torvalds
f71077a4d8 MMC core:
- Fix recovery logic for multi block I/O reads (MMC_READ_MULTIPLE_BLOCK)
 -----BEGIN PGP SIGNATURE-----
 
 iQJLBAABCgA1FiEEugLDXPmKSktSkQsV/iaEJXNYjCkFAmIMzkcXHHVsZi5oYW5z
 c29uQGxpbmFyby5vcmcACgkQ/iaEJXNYjClSYw//YfgmZEE7XxoY0+ZYp+sded6f
 XDlu7EiXxriljWzYDbZ3kCwQoHA3XlsT1jNJnrZPk6FOvHpxvZt281sWf152e3V/
 X7jRi6briOiZb6nzzo0CnrkQng+vwcjwkjThAlhYT2aEFGOnS4Go6UkaBVp0nNs+
 d+mHyPzrKLH2BQCIX+JlZv0tH+nqxa+YDT6BAqsReICPl4yoPhMPNRhLMsam8YZe
 uS3ViKv3i5VB5+539VqcBxDwd2y2LZt4y4tsDUe/iCuuQ1n2+bRncrne7H0nQlx7
 KrKELE2kuIqGZsmFT/HxH4P0ZFYqM7ZVdJzRtUJWAPFXewgWN+mFydDgfzEr0TNT
 B8ixGbixkApIpNKWkOAU6DcLUOOnTmUa3X48G0E2+43QEBV36ctG1Zk9qzdtD1tB
 ETXgwa9cclt5/fs+NKp4ZAq0jLuBPj2YJyHj9E+mFp2F+cPkfH8Gex0+WJ7SYdzS
 ReboDxVSuvK7NaYkNiWNtR5MFE/6r2eZZ3B4UUMrws+ZcFJBJdgNRvDqFRmMWtjb
 oTJkoeepR1tHAPaGXyx2OeOIgOjVf6Qs8DB+VZDASZNfsrWBDoDYdbtzhjvEPFl0
 oi54k/1osPKZZSu/u0KRuWCOUhn0jsk9hJ4QsKcH5rVGpxSD58SV43KAR1GN8zEM
 +AtLYIFzUrQAX1FmtPM=
 =joAD
 -----END PGP SIGNATURE-----

Merge tag 'mmc-v5.17-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC fix from Ulf Hansson:
 "Fix recovery logic for multi block I/O reads (MMC_READ_MULTIPLE_BLOCK)"

* tag 'mmc-v5.17-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc:
  mmc: block: fix read single on recovery logic
2022-02-16 12:09:22 -08:00
Linus Torvalds
3593030761 tty: n_tty: do not look ahead for EOL character past the end of the buffer
Daniel Gibson reports that the n_tty code gets line termination wrong in
very specific cases:

 "If you feed a line with exactly 64 chars + terminating newline, and
  directly afterwards (without reading) another line into a pseudo
  terminal, the first read() on the other side will return the 64
  char line *without* terminating newline, and the next read() will
  return the missing terminating newline AND the complete next line (if
  it fits in the buffer)"

and bisected the behavior to commit 3b830a9c34 ("tty: convert
tty_ldisc_ops 'read()' function to take a kernel pointer").

Now, digging deeper, it turns out that the behavior isn't exactly new:
what changed in commit 3b830a9c34 was that the tty line discipline
.read() function is now passed an intermediate kernel buffer rather than
the final user space buffer.

And that intermediate kernel buffer is 64 bytes in size - thus that
special case with exactly 64 bytes plus terminating newline.

The same problem did exist before, but historically the boundary was not
the 64-byte chunk, but the user-supplied buffer size, which is obviously
generally bigger (and potentially bigger than N_TTY_BUF_SIZE, which
would hide the issue entirely).

The reason is that the n_tty canon_copy_from_read_buf() code would look
ahead for the EOL character one byte further than it would actually
copy.  It would then decide that it had found the terminator, and unmark
it as an EOL character - which in turn explains why the next read
wouldn't then be terminated by it.

Now, the reason it did all this in the first place is related to some
historical and pretty obscure EOF behavior, see commit ac8f3bf883
("n_tty: Fix poll() after buffer-limited eof push read") and commit
40d5e0905a ("n_tty: Fix EOF push handling").

And the reason for the EOL confusion is that we treat EOF as a special
EOL condition, with the EOL character being NUL (aka "__DISABLED_CHAR"
in the kernel sources).

So that EOF look-ahead also affects the normal EOL handling.

This patch just removes the look-ahead that causes problems, because EOL
is much more critical than the historical "EOF in the middle of a line
that coincides with the end of the buffer" handling ever was.

Now, it is possible that we should indeed re-introduce the "look at next
character to see if it's a EOF" behavior, but if so, that should be done
not at the kernel buffer chunk boundary in canon_copy_from_read_buf(),
but at a higher level, when we run out of the user buffer.

In particular, the place to do that would be at the top of
'n_tty_read()', where we check if it's a continuation of a previously
started read, and there is no more buffer space left, we could decide to
just eat the __DISABLED_CHAR at that point.

But that would be a separate patch, because I suspect nobody actually
cares, and I'd like to get a report about it before bothering.

Fixes: 3b830a9c34 ("tty: convert tty_ldisc_ops 'read()' function to take a kernel pointer")
Fixes: ac8f3bf883 ("n_tty: Fix poll() after buffer-limited eof push read")
Fixes: 40d5e0905a ("n_tty: Fix EOF push handling")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215611
Reported-and-tested-by: Daniel Gibson <metalcaedes@gmail.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jiri Slaby <jirislaby@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-02-16 10:13:23 -08:00
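
A minimal model of the bug shape: the safe behaviour is to scan for the EOL strictly within the current chunk, whereas the pre-fix code effectively considered one byte past it, consuming an EOL the following read() could no longer report. Names below are ours, not n_tty's:

    #include <stddef.h>
    #include <string.h>

    /* Return the byte count to copy including the EOL, or 0 if this
     * chunk holds no EOL - never peek at chunk[n], which belongs to
     * the next read. */
    static size_t bytes_to_copy(const char *chunk, size_t n)
    {
        const char *eol = memchr(chunk, '\n', n);
        return eol ? (size_t)(eol - chunk) + 1 : 0;
    }
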
Linus Torvalds
c5d9ae265b ARM:
* Read HW interrupt pending state from the HW
 
 x86:
 
 * Don't truncate the performance event mask on AMD
 
 * Fix Xen runstate updates to be atomic when preempting vCPU
 
 * Fix for AMD AVIC interrupt injection race
 
 * Several other AMD fixes
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmIL4G4UHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroNkQQf/Z75dnmdRl8sHHnGjwH2IhWHwAg+h
 5O+mJphYt4cvVMexP5dj69b7mHtKMeg/0TxPvPfwCLlhzKkW1gQFwwBAq/YuBCKw
 cnMuVPeCSWo6znpS+jYUF4FAJgPKkzfFR9UwYAR5UexSWyOwU8rLcvSxj8vJjO/l
 sIke+f767Ks2KgcTMIudObg+vDcgnQXI8n8ztI7hF1WJKYHdTKFkYN7BYRxQ9BW6
 4fq51218DhRMv6S7so5dhYC473f+D0t8b5S/Mygur/x6mzsdQJKeOmi8aWGoDa/B
 Bmse+X0lHoOkdXaxqpBgQCYeyrXohNcXx7cpGRVFnS45Jf7MLG4OfVHWNQ==
 =kD2l
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "ARM:

   - Read HW interrupt pending state from the HW

  x86:

   - Don't truncate the performance event mask on AMD

   - Fix Xen runstate updates to be atomic when preempting vCPU

   - Fix for AMD AVIC interrupt injection race

   - Several other AMD fixes"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86/pmu: Use AMD64_RAW_EVENT_MASK for PERF_TYPE_RAW
  KVM: x86/pmu: Don't truncate the PerfEvtSeln MSR when creating a perf event
  KVM: SVM: fix race between interrupt delivery and AVIC inhibition
  KVM: SVM: set IRR in svm_deliver_interrupt
  KVM: SVM: extract avic_ring_doorbell
  selftests: kvm: Remove absent target file
  KVM: arm64: vgic: Read HW interrupt pending state from the HW
  KVM: x86/xen: Fix runstate updates to be atomic when preempting vCPU
  KVM: x86: SVM: move avic definitions from AMD's spec to svm.h
  KVM: x86: lapic: don't touch irr_pending in kvm_apic_update_apicv when inhibiting it
  KVM: x86: nSVM: deal with L1 hypervisor that intercepts interrupts but lets L2 control them
  KVM: x86: nSVM: expose clean bit support to the guest
  KVM: x86: nSVM/nVMX: set nested_run_pending on VM entry which is a result of RSM
  KVM: x86: nSVM: mark vmcb01 as dirty when restoring SMM saved state
  KVM: x86: nSVM: fix potential NULL derefernce on nested migration
  KVM: x86: SVM: don't passthrough SMAP/SMEP/PKE bits in !NPT && !gCR0.PG case
  Revert "svm: Add warning message for AVIC IPI invalid target"
2022-02-15 11:07:59 -08:00
Linus Torvalds
a254a9da45 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid
Pull HID fixes from Jiri Kosina:

 - memory leak fix for hid-elo driver (Dongliang Mu)

 - fix for hangs on newer AMD platforms with amd_sfh-driven hardware
   (Basavaraj Natikar)

 - locking fix in i2c-hid (Daniel Thompson)

 - a few device-ID specific quirks

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
  HID: amd_sfh: Add interrupt handler to process interrupts
  HID: amd_sfh: Add functionality to clear interrupts
  HID: amd_sfh: Disable the interrupt for all command
  HID: amd_sfh: Correct the structure field name
  HID: amd_sfh: Handle amd_sfh work buffer in PM ops
  HID:Add support for UGTABLET WP5540
  HID: amd_sfh: Add illuminance mask to limit ALS max value
  HID: amd_sfh: Increase sensor command timeout
  HID: i2c-hid: goodix: Fix a lockdep splat
  HID: elo: fix memory leak in elo_probe
  HID: apple: Set the tilde quirk flag on the Wellspring 5 and later
2022-02-15 10:52:05 -08:00
Linus Torvalds
705d84a366 for-5.17-rc4-tag
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE8rQSAMVO+zA4DBdWxWXV+ddtWDsFAmILuxMACgkQxWXV+ddt
 WDvhrA/9Hsyj2DdvvBVR3HudaER51RAJS6dtJCJdFZGWy2tEwtkxhIdbPn1nwJE7
 mvZy2UN79JKwPAdX8inyJ68RCMtcifprkUMC2d7y2mVZcCG/a/iYGdDIVB/z4Pyx
 NneBBgwdG0V505i7/sm07epLUaNI1MwXc9wNAs00zSXw4eYjLq09fp0lfl74RBhv
 HvuYgawk2abY6hPbJnTu2MyyHEZI4oGH/fRurP48cvU/FbIm9en7aX3rEZ+T8yRW
 TkEdFF/60Wce3xkyN87Xqma6L0smypJ888yzpwsJtlFOTr7iI58HYqUfx27Q5VQc
 xy5fyuuplEb0ky4GBnscpsoutj5C241+4+YE4HGqf9ne5EYU1rzJATlEFUBk84hY
 YwjdordS/nTScyFVCBG9yiTL30KsQ2SQc1TzIt/rIJiYIJexJyppOJMFmxbuN9By
 WSrLB5/uN56dRe/A8LMGpuJdwTVrYr2SPXfSseAxCEONt5fppPnDaCGEgVKIdmHq
 sQXbs/LMGHQ1lq2JsPFD12p8kQJ5Redxy0KIzDwmeBAL3HlXwpFiMia0AhvKOzPj
 UFtU/KcOmtqWMMv3P0aHydmDmUid4c3612BtvbKOhIXTVzKodzcQhkyTw1ducAa1
 GMkKIHCaPCzbsJwiogZGSBmIyDyMwitVvAybZIpRTR9i0xSA61A=
 =AqU+
 -----END PGP SIGNATURE-----

Merge tag 'for-5.17-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux

Pull btrfs fixes from David Sterba:

 - yield CPU more often when defragmenting a large file

 - skip defragmenting extents already under writeback

 - improve error message when send fails to write file data

 - get rid of warning when mounted with 'flushoncommit'

* tag 'for-5.17-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: send: in case of IO error log it
  btrfs: get rid of warning on transaction commit when using flushoncommit
  btrfs: defrag: don't try to defrag extents which are under writeback
  btrfs: don't hold CPU for too long when defragging a file
2022-02-15 09:14:05 -08:00
Linus Torvalds
2572da44a5 parisc architecture fixes for kernel v5.17-rc5
- Fix miscompilations when function calls are made from inside a
   put_user() call
 - Drop __init from map_pages() declaration to avoid random boot crashes
 - Added #error messages if a 64-bit compiler was used to build a 32-bit
   kernel (and vice versa)
 - Fix out-of-bound data TLB miss faults in sba_iommu and ccio-dma
   drivers
 - Add ioread64_lo_hi() and iowrite64_lo_hi() functions to avoid kernel
   test robot errors
 - Fix link failure when 8250_gsc driver is built without CONFIG_IOSAPIC
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQS86RI+GtKfB8BJu973ErUQojoPXwUCYgu/DwAKCRD3ErUQojoP
 X5+oAP4qsrVJZHrzalqJoGlDHZKGRKH5b9t5JQb09d1XmgFcZgD5AedIEX3sIS4X
 zOIY5500TQqqbqLn7pvQDXYc1S7LtgM=
 =Zwfn
 -----END PGP SIGNATURE-----

Merge tag 'for-5.17/parisc-3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux

Pull parisc architecture fixes from Helge Deller:

 - Fix miscompilations when function calls are made from inside a
   put_user() call

 - Drop __init from map_pages() declaration to avoid random boot crashes

 - Added #error messages if a 64-bit compiler was used to build a 32-bit
   kernel (and vice versa)

 - Fix out-of-bound data TLB miss faults in sba_iommu and ccio-dma
   drivers

 - Add ioread64_lo_hi() and iowrite64_lo_hi() functions to avoid kernel
   test robot errors

 - Fix link failure when 8250_gsc driver is built without CONFIG_IOSAPIC

* tag 'for-5.17/parisc-3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  serial: parisc: GSC: fix build when IOSAPIC is not set
  parisc: Fix some apparent put_user() failures
  parisc: Show error if wrong 32/64-bit compiler is being used
  parisc: Add ioread64_lo_hi() and iowrite64_lo_hi()
  parisc: Fix sglist access in ccio-dma.c
  parisc: Fix data TLB miss in sba_unmap_sg
  parisc: Drop __init from map_pages declaration
2022-02-15 09:10:09 -08:00
Linus Torvalds
c24449b321 hyperv-fixes for 5.17-rc5
-----BEGIN PGP SIGNATURE-----
 
 iQFHBAABCAAxFiEEIbPD0id6easf0xsudhRwX5BBoF4FAmILjZgTHHdlaS5saXVA
 a2VybmVsLm9yZwAKCRB2FHBfkEGgXuPAB/96ls6ZaUrvAEseODu9N+qbCKCjqjGx
 LBqP5gzzG71LoFLlvvGp7quwDXkSlfzMmZM1oxBrsRN4TiexdQ4b9tR2jNqxUBS0
 pR7efRpR4YN1TrvxCr0lgfbKV6F3EC4YqMwK2Tf7zMRkzAbMQmw500JTkAdtplUT
 opSZKlWt1Q66Iudz7vwz2e2D/NnAg03icB7lZUNulV1WTDfDfBUZyAphAnOqpty9
 Y9UlVjFYSozOA6dv3HClYiJPjxhzUev88W/Sld81+2+gGCngWRT4a2OyUvCqNWXD
 P8EswLHuqECpkz6uxnbQOLDBgdiMXigtANNky/E4Za8r9MqpZT3+ueLf
 =/jBG
 -----END PGP SIGNATURE-----

Merge tag 'hyperv-fixes-signed-20220215' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux

Pull hyperv fixes from Wei Liu:

 - Rework use of DMA_BIT_MASK in vmbus to work around a clang bug
   (Michael Kelley)

 - Fix NUMA topology (Long Li)

 - Fix a memory leak in vmbus (Miaoqian Lin)

 - One minor clean-up patch (Cai Huoqing)

* tag 'hyperv-fixes-signed-20220215' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
  Drivers: hv: utils: Make use of the helper macro LIST_HEAD()
  Drivers: hv: vmbus: Rework use of DMA_BIT_MASK(64)
  Drivers: hv: vmbus: Fix memory leak in vmbus_add_channel_kobj
  PCI: hv: Fix NUMA node assignment when kernel boots with custom NUMA topology
2022-02-15 09:05:01 -08:00
Randy Dunlap
6e8793674b serial: parisc: GSC: fix build when IOSAPIC is not set
There is a build error when using a kernel .config file from
'kernel test robot' for a different build problem:

hppa64-linux-ld: drivers/tty/serial/8250/8250_gsc.o: in function `.LC3':
(.data.rel.ro+0x18): undefined reference to `iosapic_serial_irq'

when:
  CONFIG_GSC=y
  CONFIG_SERIO_GSCPS2=y
  CONFIG_SERIAL_8250_GSC=y
  CONFIG_PCI is not set
    and hence PCI_LBA is not set.
  IOSAPIC depends on PCI_LBA, so IOSAPIC is not set/enabled.

Make the use of iosapic_serial_irq() conditional to fix the build error.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-serial@vger.kernel.org
Cc: Jiri Slaby <jirislaby@kernel.org>
Cc: Johan Hovold <johan@kernel.org>
Suggested-by: Helge Deller <deller@gmx.de>
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller <deller@gmx.de>
2022-02-14 19:34:42 +01:00
Linus Torvalds
d567f5db41 regulator: Fix for v5.17
One fix here, for initialisation of regulators that don't have an
 in_enabled() operation which would mainly impact cases where they aren't
 otherwise used during early setup for some reason.
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmIKg5cACgkQJNaLcl1U
 h9DYPQf/b2E70Q4mu/pFF1ldQHJrfyd4XuwP/+Ay1iHY8az+fKniipy+YD/+Hth3
 ownNwNuQowHnMK606+jvgr+2nFahRMbuOjX7JZS7dGZ3F62v8D9rRglD7XVIxoRa
 7n/xzAFxMYuWph25yrtr3xHqfdEKmEmfhT0pWNmajCu6s+f9J96qjDmSubDUZp1Q
 nVEvwSYmZ7NFZJu+1fINpRIqni7veXyQwfFTZpHZvQWW5CFY8o4RtKoQ3tkAcXp8
 GGRqSw9dfIpaqql8WHjEEI03kiWVepC541ZNBedzJrsDG0PbVuG+HNvQHFhDQjnO
 Ii9FEE0GdDXvd05uZoZGLk4yMXa0Kg==
 =Qkqf
 -----END PGP SIGNATURE-----

Merge tag 'regulator-fix-v5.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator

Pull regulator fix from Mark Brown:
 "One fix here, for initialisation of regulators that don't have an
  is_enabled() operation which would mainly impact cases where they
  aren't otherwise used during early setup for some reason"

* tag 'regulator-fix-v5.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
  regulator: core: fix false positive in regulator_late_cleanup()
2022-02-14 09:51:26 -08:00
Basavaraj Natikar
7f016b35ca HID: amd_sfh: Add interrupt handler to process interrupts
On newer AMD platforms with SFH, it is observed that random interrupts
get generated on the SFH hardware and until this is cleared the firmware
sensor processing is stalled, resulting in no data being received on the
driver side.

Add routines to handle these interrupts, so that firmware operations are
not stalled.

Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-02-14 16:28:51 +01:00
Basavaraj Natikar
fb75a3791a HID: amd_sfh: Add functionality to clear interrupts
Newer AMD platforms with SFH may generate interrupts on some events
which are unwarranted. Until this is cleared the actual MP2 data
processing maybe stalled in some cases.

Add a mechanism to clear the pending interrupts (if any) during the
driver initialization and sensor command operations.

Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-02-14 16:28:51 +01:00
Basavaraj Natikar
b300667b33 HID: amd_sfh: Disable the interrupt for all command
Sensor data is processed in polling mode. Hence disable the interrupt
for all sensor commands.

Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-02-14 16:28:51 +01:00
Basavaraj Natikar
aa0b724a2b HID: amd_sfh: Correct the structure field name
The intr_enable field name was misinterpreted. Correct the structure
field name to reflect the functionality.

Fixes: f264481ad6 ("HID: amd_sfh: Extend driver capabilities for multi-generation support")
Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-02-14 16:28:51 +01:00
Basavaraj Natikar
0cf74235f4 HID: amd_sfh: Handle amd_sfh work buffer in PM ops
Since in the current amd_sfh design the sensor data is periodically
obtained in the form of poll data, during the suspend/resume cycle,
scheduling a delayed work adds no value.

So, cancel the work on suspend and restart it on resume,
respectively.

Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-02-14 16:28:51 +01:00
Jim Mattson
710c476514 KVM: x86/pmu: Use AMD64_RAW_EVENT_MASK for PERF_TYPE_RAW
AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
a PerfEvtSeln MSR. Don't mask off the high nybble when configuring a
RAW perf event.

Fixes: ca724305a2 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Signed-off-by: Jim Mattson <jmattson@google.com>
Message-Id: <20220203014813.2130559-2-jmattson@google.com>
Reviewed-by: David Dunn <daviddunn@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-02-14 07:44:51 -05:00
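
The layout in question, as a runnable sketch: the 12-bit event select is split between bits 7:0 and bits 35:32 of the MSR, so masking with only the low byte silently drops the high nybble. The mask names are ours (the kernel uses AMD64_RAW_EVENT_MASK and friends):

    #include <stdint.h>
    #include <stdio.h>

    #define EVTSEL_LO 0x00000000000000FFull  /* event select bits 7:0 */
    #define EVTSEL_HI 0x0000000F00000000ull  /* event select bits 35:32 */

    static unsigned int event_select(uint64_t evtsel)
    {
        return (unsigned int)((evtsel & EVTSEL_LO) |
                              ((evtsel & EVTSEL_HI) >> 24)); /* 32-8=24 */
    }

    int main(void)
    {
        uint64_t msr = (0xEull << 32) | 0x76;  /* high nybble, low byte */
        printf("event select: 0x%03x\n", event_select(msr)); /* 0xe76 */
        return 0;
    }
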
Jim Mattson
b8bfee85f1 KVM: x86/pmu: Don't truncate the PerfEvtSeln MSR when creating a perf event
AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
a PerfEvtSeln MSR. Don't drop the high nybble when setting up the
config field of a perf_event_attr structure for a call to
perf_event_create_kernel_counter().

Fixes: ca724305a2 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Message-Id: <20220203014813.2130559-1-jmattson@google.com>
Reviewed-by: David Dunn <daviddunn@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-02-14 07:43:46 -05:00
Helge Deller
dbd0b42350 parisc: Fix some apparent put_user() failures
After commit 4b9d2a731c ("parisc: Switch user access functions
to signal errors in r29 instead of r8") bash suddenly started
to report those warnings after login:

-bash: cannot set terminal process group (-1): Bad file descriptor
-bash: no job control in this shell

It turned out that a function call inside a put_user(), e.g.:
put_user(vt_do_kdgkbmode(console), (int __user *)arg);
clobbered the error register (r29) and thus the put_user() call itself
seemed to have failed.

Rearrange the C-code to pre-calculate the intermediate value
and then do the put_user().
Additionally prefer the "+" constraint on pu_err and gu_err registers
to tell the compiler that those operands are both read and written by
the assembly instruction.

Reported-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 4b9d2a731c ("parisc: Switch user access functions to signal errors in r29 instead of r8")
Signed-off-by: Helge Deller <deller@gmx.de>
2022-02-14 12:36:42 +01:00
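
A userspace model of why the rearrangement matters, using a stand-in PUT_USER() macro (a GNU statement expression, as the kernel uses): the function call expands inside the macro body, so it runs after the macro resets the shared error state - the analogue of clobbering r29 inside the real put_user(). Every name here is hypothetical:

    #include <stdio.h>

    static int last_err;  /* stand-in for parisc's r29 error register */

    #define PUT_USER(x, ptr) ({ last_err = 0; *(ptr) = (x); last_err; })

    static int query_mode(void) { last_err = -1; return 42; } /* clobbers */

    int main(void)
    {
        int out;
        /* Buggy shape: the call is evaluated inside the macro and
         * trashes last_err after the reset, reporting a bogus error. */
        int err1 = PUT_USER(query_mode(), &out);  /* -1 */

        /* Fixed shape: pre-calculate, then do the put. */
        int val = query_mode();
        int err2 = PUT_USER(val, &out);           /* 0 */

        printf("err1=%d err2=%d out=%d\n", err1, err2, out);
        return 0;
    }
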
Helge Deller
b160628e9e parisc: Show error if wrong 32/64-bit compiler is being used
It happens quite often that people use the wrong compiler to build the
kernel:

make ARCH=parisc   -> builds the 32-bit kernel
make ARCH=parisc64 -> builds the 64-bit kernel

This patch adds a sanity check which errors out with instructions on
how to use the correct ARCH= option; a sketch of such a check follows.
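
A minimal sketch of such a check at preprocessing time (the actual
patch may implement it differently):

  /* Fail the build early when the toolchain width and ARCH= disagree. */
  #if defined(CONFIG_64BIT) && !defined(__LP64__)
  #error "A 64-bit kernel needs a 64-bit compiler; use make ARCH=parisc64"
  #elif !defined(CONFIG_64BIT) && defined(__LP64__)
  #error "A 32-bit kernel needs a 32-bit compiler; use make ARCH=parisc"
  #endif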

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # v5.15+
2022-02-13 22:45:53 +01:00
Linus Torvalds
754e0b0e35 Linux 5.17-rc4 2022-02-13 12:13:30 -08:00
Linus Torvalds
e89d3a4671 Merge tag 'kbuild-fixes-v5.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - Fix the truncated path issue for HAVE_GCC_PLUGINS test in Kconfig

 - Move -Wunaligned-access to W=1 builds to avoid sprinkling warnings
   for the latest Clang

 - Fix missing fclose() in Kconfig

 - Fix Kconfig to touch dep headers correctly when KCONFIG_AUTOCONFIG is
   overridden.

* tag 'kbuild-fixes-v5.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kconfig: fix failing to generate auto.conf
  kconfig: fix missing fclose() on error paths
  Makefile.extrawarn: Move -Wunaligned-access to W=1
  kconfig: let 'shell' return enough output for deep path names
2022-02-13 11:58:11 -08:00
Linus Torvalds
c5d714aa6d Merge tag 'irq-urgent-2022-02-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "Interrupt chip driver fixes:

   - Don't install a hotplug notifier for GICv3-ITS on systems which do
     not need it, to prevent a warning in the notifier about inconsistent
     state

   - Add the missing device tree matching for the T-HEAD PLIC variant so
     the related SoC is properly supported"

* tag 'irq-urgent-2022-02-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/sifive-plic: Add missing thead,c900-plic match string
  dt-bindings: update riscv plic compatible string
  irqchip/gic-v3-its: Skip HP notifier when no ITS is registered
2022-02-13 10:06:40 -08:00
Linus Torvalds
42964a18f8 Merge tag 'objtool_urgent_for_v5.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull objtool fix from Borislav Petkov:
 "Fix a case where objtool would mistakenly warn about instructions
  being unreachable"

* tag 'objtool_urgent_for_v5.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/bug: Merge annotate_reachable() into _BUG_FLAGS() asm
2022-02-13 09:43:34 -08:00
Linus Torvalds
6f35736723 Merge tag 'sched_urgent_for_v5.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fix from Borislav Petkov:
 "Fix a NULL-ptr dereference when recalculating a sched entity's weight"

* tag 'sched_urgent_for_v5.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/fair: Fix fault in reweight_entity
2022-02-13 09:27:26 -08:00
Linus Torvalds
f5e02656b1 Merge tag 'perf_urgent_for_v5.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf fix from Borislav Petkov:
 "Prevent cgroup event list corruption when switching events"

* tag 'perf_urgent_for_v5.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf: Fix list corruption in perf_cgroup_switch()
2022-02-13 09:25:26 -08:00