Commit Graph

Al Viro
048181992c fdget_raw() users: switch to CLASS(fd_raw)
Reviewed-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2024-11-03 01:28:06 -05:00
Al Viro
4dd53b84ff get rid of perf_fget_light(), convert kernel/events/core.c to CLASS(fd)
Lift fdget() and fdput() out of perf_fget_light(), turning it into
is_perf_file(struct fd f).  Life gets easier in both callers
if we do fdget() unconditionally, including the case when we are
given -1 instead of a descriptor - that avoids a reassignment in
perf_event_open(2) and avoids a nasty temptation in _perf_ioctl(),
where we must *not* lift output_event out of scope for output.
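
For illustration, a minimal sketch of the conversion pattern applied here (hypothetical caller; do_something_*() and operate() are made-up names, not the perf code):

```
/* Before: explicit acquire/release - every exit path must remember fdput(). */
static int do_something_old(int fd)
{
	struct fd f = fdget(fd);
	int ret;

	if (!fd_file(f))
		return -EBADF;
	ret = operate(fd_file(f));	/* operate() is a stand-in */
	fdput(f);
	return ret;
}

/* After: CLASS(fd, f)(fd) grabs the reference and drops it automatically
 * when f goes out of scope. */
static int do_something_new(int fd)
{
	CLASS(fd, f)(fd);

	if (fd_empty(f))
		return -EBADF;
	return operate(fd_file(f));
}
```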

Reviewed-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2024-11-03 01:28:06 -05:00
Al Viro
05e555642c regularize emptiness checks in finit_module(2) and vfs_dedupe_file_range()
With few exceptions, emptiness checks are done as fd_file(...) in boolean
context (usually something like if (!fd_file(f))...); those will be
taken care of later.

However, there are a couple of places where we do those checks as
'store fd_file(...) into a variable, then check if this variable is
NULL', and those are harder to spot.

Get rid of those now: use fd_empty() instead of extracting the file and
then checking it for NULL.
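
A sketch of the two forms (hypothetical surrounding code):

```
/* Harder-to-spot form: extract the file, then NULL-check the variable. */
struct file *file = fd_file(f);

if (!file)
	return -EBADF;

/* Regularized form: */
if (fd_empty(f))
	return -EBADF;
```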

Reviewed-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2024-11-03 01:28:06 -05:00
Linus Torvalds
622a3ed1ac Various fixes for tracing:
- Fix tp_printk command line option crashing the kernel
 
   With the code that can handle a buffer from a previous boot, the
   trace_check_vprintf() needed access to the delta of the address
   space used by the old buffer and the current buffer. To do so,
   the trace_array (tr) parameter was used. But when tp_printk is
   enabled on the kernel command line, no trace buffer is used and
   the trace event is sent directly to printk(). That meant the tr
   field of the iterator descriptor was NULL, and since tp_printk still
   uses trace_check_vprintf() it caused a NULL dereference.
 
 - Add ptrace.h include to x86 ftrace file for completeness
 
 - Fix rtla installation when done with out-of-tree build
 
 - Fix the help messages in rtla that were incorrect
 
 - Several fixes to fix races with the timerlat and hwlat code
 
   Several locking issues were discovered with the coordination
   between timerlat kthread creation and hotplug, as timerlat has
   callbacks from hotplug code to start kthreads when CPUs come online.
   There are also locking issues with grabbing cpus_read_lock()
   and the locks within timerlat.
 -----BEGIN PGP SIGNATURE-----
 
 iIoEABYIADIWIQRRSw7ePDh/lE+zeZMp5XQQmuv6qgUCZwAUghQccm9zdGVkdEBn
 b29kbWlzLm9yZwAKCRAp5XQQmuv6qmMCAP9Z+0sN/Tx+F+5LmrXV1R9UxRPykmpm
 4NeZYEp+hAQSegD+MdHsEJLIDfmsZnGOBivmzfepuv35GMLrqQMIhhWQOA0=
 =/OQq
 -----END PGP SIGNATURE-----

Merge tag 'trace-v6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

 - Fix tp_printk command line option crashing the kernel

   With the code that can handle a buffer from a previous boot, the
   trace_check_vprintf() needed access to the delta of the address space
   used by the old buffer and the current buffer. To do so, the
   trace_array (tr) parameter was used. But when tp_printk is enabled on
   the kernel command line, no trace buffer is used and the trace event
   is sent directly to printk(). That meant the tr field of the iterator
   descriptor was NULL, and since tp_printk still uses
   trace_check_vprintf() it caused a NULL dereference.

 - Add ptrace.h include to x86 ftrace file for completeness

 - Fix rtla installation when done with out-of-tree build

 - Fix the help messages in rtla that were incorrect

 - Several fixes to fix races with the timerlat and hwlat code

   Several locking issues were discovered with the coordination between
   timerlat kthread creation and hotplug, as timerlat has callbacks from
   hotplug code to start kthreads when CPUs come online. There are also
   locking issues with grabbing cpus_read_lock() and the locks within
   timerlat.

* tag 'trace-v6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing/hwlat: Fix a race during cpuhp processing
  tracing/timerlat: Fix a race during cpuhp processing
  tracing/timerlat: Drop interface_lock in stop_kthread()
  tracing/timerlat: Fix duplicated kthread creation due to CPU online/offline
  x86/ftrace: Include <asm/ptrace.h>
  rtla: Fix the help text in osnoise and timerlat top tools
  tools/rtla: Fix installation from out-of-tree build
  tracing: Fix trace_check_vprintf() when tp_printk is used
2024-10-04 12:11:06 -07:00
Linus Torvalds
f6785e0ccf slab fixes for 6.12-rc1
-----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCAAdFiEEe7vIQRWZI0iWSE3xu+CwddJFiJoFAmb/8bcACgkQu+CwddJF
 iJoApwf5AWWhKFbbYwFUCXDi7+/Xr7T7c9H9q+GAEOQiDLsDxihEAo1KYQ+DLl+h
 Vp1ddRYIKMIUfllW3bcD4O6C8L46OX3XPHhTHnksEfvtn3fQGjcU3jKH8n0eL01J
 s9eUdvduNSJorAWqjFPPRrGuLJTXmervrDYYPJLaXGITHHMOxMjKfLAxtXehvARv
 mVQV1F0NTvvNqieuibUCM5XqJs37lrmqB39pLun7bQDU48z4OR1L3nkJxTFF1bGm
 EcvAPayTiNybMt08QSVHIwqfSs+e0HmyKqjvSLpJPImDrfSrWOJvBCJxI4DU+1aw
 UiHyWYLaxWZ7DoJgtZuHV2//8wOWww==
 =EXEA
 -----END PGP SIGNATURE-----

Merge tag 'slab-for-6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab

Pull slab fixes from Vlastimil Babka:
 "Fixes for issues introduced in this merge window: kobject memory leak,
  unsuppressed warning and possible lockup in new slub_kunit tests,
  misleading code in kvfree_rcu_queue_batch()"

* tag 'slab-for-6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  slub/kunit: skip test_kfree_rcu when the slub kunit test is built-in
  mm, slab: suppress warnings in test_leak_destroy kunit test
  rcu/kvfree: Refactor kvfree_rcu_queue_batch()
  mm, slab: fix use of SLAB_SUPPORTS_SYSFS in kmem_cache_release()
2024-10-04 12:05:39 -07:00
Linus Torvalds
6cca119588 close_range(): fix the logics in descriptor table trimming
-----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQQqUNBr3gm4hGXdBJlZ7Krx/gZQ6wUCZv90qwAKCRBZ7Krx/gZQ
 629DAP9NT9/ndByazKrP03nQ/ITNhYVd0cby2iPcXZiHGj/fsQD7BF4yUUiBy7EO
 seIY8rGxc8S5TDJAG4HYa7t1Nuksnw4=
 =oji5
 -----END PGP SIGNATURE-----

Merge tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull close_range() fix from Al Viro:
 "Fix the logic in descriptor table trimming"

* tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  close_range(): fix the logics in descriptor table trimming
2024-10-04 09:46:16 -07:00
Johannes Weiner
3840cbe24c sched: psi: fix bogus pressure spikes from aggregation race
Brandon reports sporadic, nonsensical spikes in cumulative pressure
time (total=) when reading cpu.pressure at a high rate. This is due to
a race condition between reader aggregation and tasks changing states.

While it affects all states and all resources captured by PSI, in
practice it most likely triggers with CPU pressure, since scheduling
events are so frequent compared to other resource events.

The race context is the live snooping of ongoing stalls during a
pressure read. The read aggregates per-cpu records for stalls that
have concluded, but will also incorporate ad-hoc the duration of any
active state that hasn't been recorded yet. This is important to get
timely measurements of ongoing stalls. Those ad-hoc samples are
calculated on-the-fly up to the current time on that CPU; since the
stall hasn't concluded, it's expected that this is the minimum amount
of stall time that will enter the per-cpu records once it does.

The problem is that the path that concludes the state uses a CPU clock
read that is not synchronized against aggregators; the clock is read
outside of the seqlock protection. This allows aggregators to race and
snoop a stall with a longer duration than will actually be recorded.

With the recorded stall time being less than the last snapshot
remembered by the aggregator, a subsequent sample will underflow and
observe a bogus delta value, resulting in an erratic jump in pressure.

Fix this by moving the clock read of the state change into the seqlock
protection. This ensures no aggregation can snoop live stalls past the
time that's recorded when the state concludes.
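
A simplified sketch of the fix (names only approximate the psi code; record_stall() is a stand-in for the actual per-cpu bookkeeping):

```
static void psi_state_conclude(struct psi_group_cpu *groupc, int cpu)
{
	u64 now;

	write_seqcount_begin(&groupc->seq);
	/* The clock is now read inside the write-side seqcount section,
	 * so a retrying aggregator can no longer snoop a live stall past
	 * the time recorded here. */
	now = cpu_clock(cpu);
	record_stall(groupc, now);	/* stand-in */
	write_seqcount_end(&groupc->seq);
}
```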

Reported-by: Brandon Duffany <brandon@buildbuddy.io>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=219194
Link: https://lore.kernel.org/lkml/20240827121851.GB438928@cmpxchg.org/
Fixes: df77430639 ("psi: Reduce calls to sched_clock() in psi")
Cc: stable@vger.kernel.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-10-03 16:03:16 -07:00
Wei Li
2a13ca2e8a tracing/hwlat: Fix a race during cpuhp processing
The cpuhp online/offline processing race also exists in theory for the
percpu-mode hwlat tracer; apply the same fix there. That is:

    T1                       | T2
    [CPUHP_ONLINE]           | cpu_device_down()
     hwlat_hotplug_workfn()  |
                             |     cpus_write_lock()
                             |     takedown_cpu(1)
                             |     cpus_write_unlock()
    [CPUHP_OFFLINE]          |
        cpus_read_lock()     |
        start_kthread(1)     |
        cpus_read_unlock()   |

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20240924094515.3561410-5-liwei391@huawei.com
Fixes: ba998f7d95 ("trace/hwlat: Support hotplug operations")
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-03 16:43:23 -04:00
Wei Li
829e0c9f08 tracing/timerlat: Fix a race during cpuhp processing
Another exception was found where the "timerlat/1" thread was
scheduled on CPU0, eventually leading to timer corruption:

```
ODEBUG: init active (active state 0) object: ffff888237c2e108 object type: hrtimer hint: timerlat_irq+0x0/0x220
WARNING: CPU: 0 PID: 426 at lib/debugobjects.c:518 debug_print_object+0x7d/0xb0
Modules linked in:
CPU: 0 UID: 0 PID: 426 Comm: timerlat/1 Not tainted 6.11.0-rc7+ #45
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
RIP: 0010:debug_print_object+0x7d/0xb0
...
Call Trace:
 <TASK>
 ? __warn+0x7c/0x110
 ? debug_print_object+0x7d/0xb0
 ? report_bug+0xf1/0x1d0
 ? prb_read_valid+0x17/0x20
 ? handle_bug+0x3f/0x70
 ? exc_invalid_op+0x13/0x60
 ? asm_exc_invalid_op+0x16/0x20
 ? debug_print_object+0x7d/0xb0
 ? debug_print_object+0x7d/0xb0
 ? __pfx_timerlat_irq+0x10/0x10
 __debug_object_init+0x110/0x150
 hrtimer_init+0x1d/0x60
 timerlat_main+0xab/0x2d0
 ? __pfx_timerlat_main+0x10/0x10
 kthread+0xb7/0xe0
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x2d/0x40
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1a/0x30
 </TASK>
```

After tracing the scheduling event, it was discovered that the migration
of the "timerlat/1" thread was performed during thread creation. Further
analysis confirmed that this is because the CPU online processing for
osnoise is implemented through workers, which are asynchronous with the
offline processing. When the worker was scheduled to create a thread, the
CPU may have already been removed from the cpu_online_mask during the
offline process, resulting in the inability to select the right CPU:

T1                       | T2
[CPUHP_ONLINE]           | cpu_device_down()
osnoise_hotplug_workfn() |
                         |     cpus_write_lock()
                         |     takedown_cpu(1)
                         |     cpus_write_unlock()
[CPUHP_OFFLINE]          |
    cpus_read_lock()     |
    start_kthread(1)     |
    cpus_read_unlock()   |

To fix this, skip online processing if the CPU is already offline.
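
A minimal sketch of that check (osnoise_try_start_kthread() is a made-up wrapper, not the verbatim patch):

```
static void osnoise_try_start_kthread(unsigned int cpu)
{
	cpus_read_lock();
	/* The CPU may have gone offline again before this asynchronous
	 * worker ran; don't start a kthread for a dead CPU. */
	if (cpu_online(cpu))
		start_kthread(cpu);
	cpus_read_unlock();
}
```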

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20240924094515.3561410-4-liwei391@huawei.com
Fixes: c8895e271f ("trace/osnoise: Support hotplug operations")
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-03 16:43:22 -04:00
Wei Li
b484a02c9c tracing/timerlat: Drop interface_lock in stop_kthread()
stop_kthread() is the offline callback for "trace/osnoise:online", since
commit 5bfbcd1ee5 ("tracing/timerlat: Add interface_lock around clearing
of kthread in stop_kthread()"), the following ABBA deadlock scenario is
introduced:

T1                            | T2 [BP]               | T3 [AP]
osnoise_hotplug_workfn()      | work_for_cpu_fn()     | cpuhp_thread_fun()
                              |   _cpu_down()         |   osnoise_cpu_die()
  mutex_lock(&interface_lock) |                       |     stop_kthread()
                              |     cpus_write_lock() |       mutex_lock(&interface_lock)
  cpus_read_lock()            |     cpuhp_kick_ap()   |

As the interface_lock here is just for protecting the "kthread" field of
the osn_var, use xchg() instead to fix this issue. Also switch back to
for_each_online_cpu() in stop_per_cpu_kthreads(), as it can take
cpus_read_lock() again.
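
A sketch of the lock-free clearing (simplified; per_cpu_osnoise_var follows the osnoise code, the rest is illustrative):

```
static void stop_kthread(unsigned int cpu)
{
	struct task_struct *kthread;

	/* xchg() atomically takes ownership of the pointer, so
	 * interface_lock is no longer needed to serialize the clearing. */
	kthread = xchg(&per_cpu(per_cpu_osnoise_var, cpu).kthread, NULL);
	if (kthread)
		kthread_stop(kthread);
}
```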

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20240924094515.3561410-3-liwei391@huawei.com
Fixes: 5bfbcd1ee5 ("tracing/timerlat: Add interface_lock around clearing of kthread in stop_kthread()")
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-03 16:43:22 -04:00
Wei Li
0bb0a5c12e tracing/timerlat: Fix duplicated kthread creation due to CPU online/offline
osnoise_hotplug_workfn() is the asynchronous online callback for
"trace/osnoise:online". It may be congested when a CPU goes online and
offline repeatedly, and can end up being invoked multiple times for a
given online event.

This will lead to kthread leaks and timer corruption. Add a check
in start_kthread() to prevent this situation.
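
A sketch of the guard (simplified; create_timerlat_kthread() is a stand-in for the real creation path):

```
static int start_kthread(unsigned int cpu)
{
	struct osnoise_variables *osn_var = per_cpu_ptr(&per_cpu_osnoise_var, cpu);

	/* A congested online callback can run more than once for the same
	 * CPU; never create a second kthread for it. */
	if (osn_var->kthread)
		return 0;

	return create_timerlat_kthread(cpu);	/* stand-in */
}
```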

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20240924094515.3561410-2-liwei391@huawei.com
Fixes: c8895e271f ("trace/osnoise: Support hotplug operations")
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-03 16:43:22 -04:00
Steven Rostedt
50a3242d84 tracing: Fix trace_check_vprintf() when tp_printk is used
When the tp_printk kernel command line is used, the trace events go
directly to printk(). It is still checked via the trace_check_vprintf()
function to make sure the pointers of the trace event are legit.

The addition of reading buffers from previous boots required adding a
delta between the addresses of the previous boot and the current boot so
that the pointers in the old buffer can still be used. But this required
adding a trace_array pointer to acquire the delta offsets.

The tp_printk code does not provide a trace_array (tr) pointer, so when
the offsets were examined, a NULL pointer dereference happened and the
kernel crashed.

If the trace_array does not exist, just default the delta offsets to zero,
as that also means the trace event is not being read from a previous boot.
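
The shape of the fix, as a hedged sketch (the field name text_delta only approximates the tracing code):

```
static unsigned long get_text_delta(struct trace_array *tr)
{
	/* tp_printk hands us no trace_array; a NULL tr also means the
	 * event cannot come from a previous boot, so no relocation. */
	return tr ? tr->text_delta : 0;
}
```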

Link: https://lore.kernel.org/all/Zv3z5UsG_jsO9_Tb@aschofie-mobl2.lan/

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20241003104925.4e1b1fd9@gandalf.local.home
Fixes: 07714b4bb3 ("tracing: Handle old buffer mappings for event strings and functions")
Reported-by: Alison Schofield <alison.schofield@intel.com>
Tested-by: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-03 16:43:22 -04:00
Linus Torvalds
7ec462100e Getting rid of asm/unaligned.h includes
-----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQQqUNBr3gm4hGXdBJlZ7Krx/gZQ6wUCZv3NAgAKCRBZ7Krx/gZQ
 68kbAP0YzQxUgl0/o7Soda8XwKSPZTM9ls6kRk1UHTTG/i4ZigEA/G+i/mBQctL0
 AB911kK8mxfXppfOXzstFBjoJSqiigQ=
 =IE7D
 -----END PGP SIGNATURE-----

Merge tag 'pull-work.unaligned' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull generic unaligned.h cleanups from Al Viro:
 "Get rid of architecture-specific <asm/unaligned.h> includes, replacing
  them with a single generic <linux/unaligned.h> header file.

  It's the second largest (after asm/io.h) class of asm/* includes, and
  all but two architectures actually end up using the exact same file.

  Massage the remaining two (arc and parisc) to do the same and just
  move the thing from asm-generic/unaligned.h to linux/unaligned.h"

[ This is one of those things that we're better off doing outside the
  merge window, and would only cause extra conflict noise if it was in
  linux-next for the next release due to all the trivial #include line
  updates.  Rip off the band-aid.   - Linus ]

* tag 'pull-work.unaligned' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  move asm/unaligned.h to linux/unaligned.h
  arc: get rid of private asm/unaligned.h
  parisc: get rid of private asm/unaligned.h
2024-10-02 16:42:28 -07:00
Al Viro
5f60d5f6bb move asm/unaligned.h to linux/unaligned.h
asm/unaligned.h is always an include of asm-generic/unaligned.h;
might as well move that thing to linux/unaligned.h and include
that - there's nothing arch-specific in that header.

auto-generated by the following:

for i in `git grep -l -w asm/unaligned.h`; do
	sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
done
for i in `git grep -l -w asm-generic/unaligned.h`; do
	sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
done
git mv include/asm-generic/unaligned.h include/linux/unaligned.h
git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h
2024-10-02 17:23:23 -04:00
Uladzislau Rezki (Sony)
3c5d61ae91 rcu/kvfree: Refactor kvfree_rcu_queue_batch()
Improve the readability of the kvfree_rcu_queue_batch() function
so that, after the first batch is queued, the loop breaks and a
success value is returned to the caller.

There is no reason to loop further and check the remaining batches,
as all outstanding objects have already been picked up and attached
to a batch to complete the offloading.
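
A rough sketch of the resulting control flow (heavily simplified; queue_one_batch() stands in for the real work-queueing logic):

```
static bool kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
{
	bool queued = false;
	int i;

	for (i = 0; i < KFREE_N_BATCHES; i++) {
		if (queue_one_batch(krcp, i)) {		/* stand-in */
			/* All outstanding objects are attached to this
			 * batch; no point scanning the rest. */
			queued = true;
			break;
		}
	}
	return queued;
}
```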

Fixes: 2b55d6a42d ("rcu/kvfree: Add kvfree_rcu_barrier() API")
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Closes: https://lore.kernel.org/lkml/ZvWUt2oyXRsvJRNc@pc636/T/
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-10-01 18:30:42 +02:00
Linus Torvalds
e32cde8d2b sched_ext: A second set of fixes for v6.12-rc1
- When sched_ext is in bypass mode (e.g. while disabling the BPF scheduler),
   it was using one DSQ to implement global FIFO scheduling as all it has to
   do is guarantee reasonable forward progress. On multi-socket machines,
   this can lead to live-lock conditions under certain workloads. Fixed by
   splitting the queue used for FIFO scheduling per NUMA node. This required
   several preparation patches.
 
 - Hotplug tests on powerpc could reliably trigger deadlock while enabling a
   BPF scheduler. This was caused by cpu_hotplug_lock nesting inside
   scx_fork_rwsem and then CPU hotplug path trying to fork a new thread while
   holding cpu_hotplug_lock. Fixed by restructuring locking in enable and
   disable paths so that the two locks are not coupled. This required several
   preparation patches which also fixed a couple other issues in the enable
   path.
 
 - A build fix for !CONFIG_SMP.
 
 - Userspace tooling sync and updates.
 -----BEGIN PGP SIGNATURE-----
 
 iIQEABYKACwWIQTfIjM1kS57o3GsC/uxYfJx3gVYGQUCZvsB8A4cdGpAa2VybmVs
 Lm9yZwAKCRCxYfJx3gVYGW0cAP9LWESwNvszp0jDglg8kvS6tyHDLNQYLT99Dy4Q
 NPE1OgD+KUKXuRBOhT3dc9dbwodmPtX2rq36QIgHhoGiPB9hew8=
 =NH+o
 -----END PGP SIGNATURE-----

Merge tag 'sched_ext-for-6.12-rc1-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext

Pull sched_ext fixes from Tejun Heo:

 - When sched_ext is in bypass mode (e.g. while disabling the BPF
   scheduler), it was using one DSQ to implement global FIFO scheduling
   as all it has to do is guarantee reasonable forward progress.

   On multi-socket machines, this can lead to live-lock conditions under
   certain workloads. Fixed by splitting the queue used for FIFO
   scheduling per NUMA node. This required several preparation patches.

 - Hotplug tests on powerpc could reliably trigger deadlock while
   enabling a BPF scheduler.

   This was caused by cpu_hotplug_lock nesting inside scx_fork_rwsem and
   then CPU hotplug path trying to fork a new thread while holding
   cpu_hotplug_lock.

   Fixed by restructuring locking in enable and disable paths so that
   the two locks are not coupled. This required several preparation
   patches which also fixed a couple other issues in the enable path.

 - A build fix for !CONFIG_SMP

 - Userspace tooling sync and updates

* tag 'sched_ext-for-6.12-rc1-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
  sched_ext: Remove redundant p->nr_cpus_allowed checker
  sched_ext: Decouple locks in scx_ops_enable()
  sched_ext: Decouple locks in scx_ops_disable_workfn()
  sched_ext: Add scx_cgroup_enabled to gate cgroup operations and fix scx_tg_online()
  sched_ext: Enable scx_ops_init_task() separately
  sched_ext: Fix SCX_TASK_INIT -> SCX_TASK_READY transitions in scx_ops_enable()
  sched_ext: Initialize in bypass mode
  sched_ext: Remove SCX_OPS_PREPPING
  sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
  sched_ext: Use shorter slice while bypassing
  sched_ext: Split the global DSQ per NUMA node
  sched_ext: Relocate find_user_dsq()
  sched_ext: Allow only user DSQs for scx_bpf_consume(), scx_bpf_dsq_nr_queued() and bpf_iter_scx_dsq_new()
  scx_flatcg: Use a user DSQ for fallback instead of SCX_DSQ_GLOBAL
  tools/sched_ext: Receive misc updates from SCX repo
  sched_ext: Add __COMPAT helpers for features added during v6.12 devel cycle
  sched_ext: Build fix for !CONFIG_SMP
2024-09-30 12:58:17 -07:00
Linus Torvalds
190ecde722 Probes fixes for v6.12-rc1:
- uprobes: fix kernel info leak via "[uprobes]" vma
    Fix uprobes not to expose the uninitialized trampoline-buffer page
    to user space, which can leak kernel info.
 -----BEGIN PGP SIGNATURE-----
 
 iQFOBAABCgA5FiEEh7BulGwFlgAOi5DV2/sHvwUrPxsFAmb6yK4bHG1hc2FtaS5o
 aXJhbWF0c3VAZ21haWwuY29tAAoJENv7B78FKz8bME4H9ivH3PKLYd4tlz5WYBD6
 EhcMhvlOVv6rD/l3LVZuh81o+TkObypWOaIVMeMJOTxWv0wEIYsNuQy/jzYWMB6L
 gQpdrQtVb79kk0flf8feeZYebwNgCM7tOkq/dFaZJSDYd7Cb6QW4jroeSLs1oYFK
 fulA4Tq2wrQwPpxF8docYZiBPpkz62uICP9eS0X2BUlWjzzHeDRny6uO0DW7SfJc
 knNRmFZYPyW5Fovd4MF0iyaUMRduTt6B9bvtasu52F4WzwCyG2r7scm6UDAEJiV7
 O77+Xq55KkkoZy3+xlZPQgyE1i/Uer5hX735dppu2CPQvhgZGfflNYNQUUUcGegu
 /w==
 =kLlc
 -----END PGP SIGNATURE-----

Merge tag 'probes-fixes-v6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull probes fix from Masami Hiramatsu:

 - uprobes: fix kernel info leak via "[uprobes]" vma

   Fix uprobes not to expose the uninitialized trampoline-buffer page
   to user space, which can leak kernel info.

* tag 'probes-fixes-v6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  uprobes: fix kernel info leak via "[uprobes]" vma
2024-09-30 11:06:01 -07:00
Al Viro
678379e1d4 close_range(): fix the logics in descriptor table trimming
Cloning a descriptor table picks the size that would cover all currently
opened files.  That's fine for clone() and unshare(), but for close_range()
there's an additional twist - we clone before we close, and it would be
a shame to have
	close_range(3, ~0U, CLOSE_RANGE_UNSHARE)
leave us with a huge descriptor table when we are not going to keep
anything past stderr, just because some large file descriptor used to
be open before our call has taken it out.

Unfortunately, it had been dealt with in an inherently racy way -
sane_fdtable_size() gets a "don't copy anything past that" argument
(passed via unshare_fd() and dup_fd()), close_range() decides how much
should be trimmed and passes that to unshare_fd().

The problem is, a range that used to extend to the end of descriptor
table back when close_range() had looked at it might very well have stuff
grown after it by the time dup_fd() has allocated a new files_struct
and started to figure out the capacity of fdtable to be attached to that.

That leads to interesting pathological cases; at the very least it's a
QoI issue, since unshare(CLONE_FILES) is atomic in the sense that it takes
a snapshot of the descriptor table one might have observed at some point.
Since CLOSE_RANGE_UNSHARE close_range() is supposed to be a combination
of unshare(CLONE_FILES) with plain close_range(), ending up with a
weird state that would never occur with unshare(2) is confusing, to put
it mildly.

It's not hard to get rid of - all it takes is passing both ends of the
range down to sane_fdtable_size().  There we are under ->files_lock,
so the race is trivially avoided.

So we do the following:
	* switch close_files() from calling unshare_fd() to calling
dup_fd().
	* undo the calling convention change done to unshare_fd() in
60997c3d45 "close_range: add CLOSE_RANGE_UNSHARE"
	* introduce struct fd_range, pass a pointer to that to dup_fd()
and sane_fdtable_size() instead of the "trim everything past that point"
argument they are currently getting (see the sketch below).  NULL means
"we are not going to be punching any holes"; NR_OPEN_MAX is gone.
	* make sane_fdtable_size() use find_last_bit() instead of
open-coding it; it's easier to follow that way.
	* while we are at it, have dup_fd() report errors by returning
ERR_PTR(), no need to use a separate int *errorp argument.
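
Abridged, the shape of the new interface described above (declarations only; a sketch, not the verbatim patch):

```
struct fd_range {
	unsigned int from, to;	/* descriptors being punched out */
};

/* NULL punch_hole means "we are not going to be punching any holes". */
static unsigned int sane_fdtable_size(struct fdtable *fdt,
				      struct fd_range *punch_hole);

/* Returns ERR_PTR() on failure instead of taking int *errorp. */
struct files_struct *dup_fd(struct files_struct *oldf,
			    struct fd_range *punch_hole);
```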

Fixes: 60997c3d45 "close_range: add CLOSE_RANGE_UNSHARE"
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2024-09-29 21:52:29 -04:00
Oleg Nesterov
34820304cc uprobes: fix kernel info leak via "[uprobes]" vma
xol_add_vma() maps the uninitialized page allocated by __create_xol_area()
into userspace. On some architectures (x86) this memory is readable even
without VM_READ: VM_EXEC results in the same pgprot_t as VM_EXEC|VM_READ.
Not that it really matters - a debugger can read this memory anyway.
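
A plausible minimal fix, as a sketch (the zero-at-allocation approach is inferred from the description, not quoted from the patch):

```
static struct page *alloc_xol_page(void)
{
	/* __GFP_ZERO wipes the page before xol_add_vma() maps it into
	 * userspace, so no stale kernel data can leak via "[uprobes]". */
	return alloc_page(GFP_HIGHUSER | __GFP_ZERO);
}
```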

Link: https://lore.kernel.org/all/20240929162047.GA12611@redhat.com/

Reported-by: Will Deacon <will@kernel.org>
Fixes: d4b3b6384f ("uprobes/core: Allocate XOL slots for uprobes use")
Cc: stable@vger.kernel.org
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2024-09-30 08:19:11 +09:00
Linus Torvalds
ec03de73b1 Locking changes for v6.12:
- lockdep:
     - Fix potential deadlock between lockdep and RCU (Zhiguo Niu)
     - Use str_plural() to address Coccinelle warning (Thorsten Blum)
     - Add debuggability enhancement (Luis Claudio R. Goncalves)
 
  - static keys & calls:
     - Fix static_key_slow_dec() yet again (Peter Zijlstra)
     - Handle module init failure correctly in static_call_del_module() (Thomas Gleixner)
     - Replace pointless WARN_ON() in static_call_module_notify() (Thomas Gleixner)
 
  - <linux/cleanup.h>:
     - Add usage and style documentation (Dan Williams)
 
  - rwsems:
     - Move is_rwsem_reader_owned() and rwsem_owner() under CONFIG_DEBUG_RWSEMS (Waiman Long)
 
  - atomic ops, x86:
     - Redeclare x86_32 arch_atomic64_{add,sub}() as void (Uros Bizjak)
     - Introduce the read64_nonatomic macro to x86_32 with cx8 (Uros Bizjak)
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmb4/IIRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1ix5g/9HbnP8nR6y2JQ+K4FPj911jcNdPg66rb3
 MTERO+9kLiODFi1N6L/s7w+l4umXlmDSwxd7GaMLIQuaxgQD/lGpw0P5MoZhKfwj
 2AyJWjc9KiW/NwdOLWrJXESrsPSQSHan9EpEV8FV6X8QaflBvYCjcWmUXmu0lW+I
 r+pqHuQFrISL5eBZDd38PGHWNB4UB9YoY5GojUmoDHgJQiyn2oJVopva11RsLneR
 64m4slWRnOG/IjY6AlUlcFK4s7b8g5v1p0NHuJQNTFnzxsKp/QmFnP49dUC2fiZd
 FuMbGv+nPA7rRI1eZ/pCTk0h2CTT1RotQt78WJmL/R6jrQRIxkeFSTiKC2sZ5smp
 +CWiGUiKxy426qBO9Wzien2BXq5RTL8dLuX31ioflhXPEvTfWFHX3yw73sbhQZGW
 QbXztV9xz/B70TneGVPCHHsFDGwrT+EnC8tQbWw+Mv4OxfUknoMEVD9eIye6jXbV
 lJkx8Y8y7AQewQ2uAVOKn6xSXhsAnMGS/BQ1KWITO5rdLhNInkqKfYssaoVroXhA
 2qBtNBoPULWz+pvB8d8J/kljK4o3jPVTZYESpW3cLQ76NolTuXpe9i3zkNHGBj0A
 tZE9ZAumJIXGj0lhnoiOB9ezgqKUIK+LQ1yxrCVUpjZ2rd4ZT1BlQj/Nvoc1witS
 6iq+S/FCSbY=
 =LbkS
 -----END PGP SIGNATURE-----

Merge tag 'locking-urgent-2024-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:
 "lockdep:
    - Fix potential deadlock between lockdep and RCU (Zhiguo Niu)
    - Use str_plural() to address Coccinelle warning (Thorsten Blum)
    - Add debuggability enhancement (Luis Claudio R. Goncalves)

  static keys & calls:
    - Fix static_key_slow_dec() yet again (Peter Zijlstra)
    - Handle module init failure correctly in static_call_del_module()
      (Thomas Gleixner)
    - Replace pointless WARN_ON() in static_call_module_notify() (Thomas
      Gleixner)

  <linux/cleanup.h>:
    - Add usage and style documentation (Dan Williams)

  rwsems:
    - Move is_rwsem_reader_owned() and rwsem_owner() under
      CONFIG_DEBUG_RWSEMS (Waiman Long)

  atomic ops, x86:
    - Redeclare x86_32 arch_atomic64_{add,sub}() as void (Uros Bizjak)
    - Introduce the read64_nonatomic macro to x86_32 with cx8 (Uros
      Bizjak)"

Signed-off-by: Ingo Molnar <mingo@kernel.org>

* tag 'locking-urgent-2024-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/rwsem: Move is_rwsem_reader_owned() and rwsem_owner() under CONFIG_DEBUG_RWSEMS
  jump_label: Fix static_key_slow_dec() yet again
  static_call: Replace pointless WARN_ON() in static_call_module_notify()
  static_call: Handle module init failure correctly in static_call_del_module()
  locking/lockdep: Simplify character output in seq_line()
  lockdep: fix deadlock issue between lockdep and rcu
  lockdep: Use str_plural() to fix Coccinelle warning
  cleanup: Add usage and style documentation
  lockdep: suggest the fix for "lockdep bfs error:-1" on print_bfs_bug
  locking/atomic/x86: Redeclare x86_32 arch_atomic64_{add,sub}() as void
  locking/atomic/x86: Introduce the read64_nonatomic macro to x86_32 with cx8
2024-09-29 08:51:30 -07:00
Ingo Molnar
ae39e0bd15 Merge branch 'locking/core' into locking/urgent, to pick up pending commits
Merge all pending locking commits into a single branch.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-09-29 08:57:18 +02:00
Linus Torvalds
6f81a446f8 Modules changes for v6.12-rc1
There are a few fixes / cleanups from Vincent, Chunhui, and Petr, but the
 most important part of this pull request is the Rust community stepping
 up to help maintain both C / Rust code for future Rust module support. We
 grow the set of modules maintainers by 3 now, and with this hope to scale to
 help address what's needed to properly support future Rust module support.
 
 A lot of exciting stuff coming in future kernel releases.
 
 This has been on linux-next for ~ 3 weeks now with no issues.
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCgAwFiEENnNq2KuOejlQLZofziMdCjCSiKcFAmb3InQSHG1jZ3JvZkBr
 ZXJuZWwub3JnAAoJEM4jHQowkoinA/IP/RP3O3Cwtyjd51lMNzEmJR0WE0J7/C3z
 v4L3teqoiH4vWF0vDd8jVE1SL9RZ0TnrSUUF/Kbf7YolXELPO+WSvPepGqlzeUTd
 KH+PZX+AmaGXhwAGmB53AMhcP8HmGci+IZZgyZUnYxZawcFYU24WYO84JAKltNsy
 /wqepYXObc0HiNXk+VS3h8Z+1y9nhJ55xluvTf5guQbrtjl1xWXSdVdF1/V5wnjp
 qShNSNn1bktFO0lK7IW/UmM0kEoFHHyUslwNcP/rJLIb99lDV3M+Vd3i41dBkuYw
 iSCD+a/0fOmUj909Q4VfZQkK4vKEi04XIz1EHb2uYOGKcr75gnWmCRyUL1TJSFO/
 oXNd2SlvwMYXxMczsaLppAPERRgSMWnsBEZWZ7nk2uBpuFay43LfEdZcPwknGNkz
 7Ns+3PHr6W3phUo1izrgxBk6xTyEDR6etxThSGvq/dhG3VuivV6hRyxFZX9NaTSD
 a/uFhIj2f8FuV9TLYUzPO/NwwLklPFe9dCvtWEHgSvtyaeX1pSvyjz8fLbXDGyu/
 qVXMp2fegLJ2bq9A0ABtd7nuVNCAN24pl+Nwws+GMRmCg9b1Sfego16WoLUDbbHX
 mjVAFTtKgqEg0ePnbjqGm7I7siY/9x8I39aA9WbNoXKNFu3hwMDHLAavATmj+1dV
 UlrMxvfv20WQ
 =4P89
 -----END PGP SIGNATURE-----

Merge tag 'modules-6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux

Pull module updates from Luis Chamberlain:
 "There are a few fixes / cleanups from Vincent, Chunhui, and Petr, but
  the most important part of this pull request is the Rust community
  stepping up to help maintain both C / Rust code for future Rust module
  support. We grow the set of modules maintainers by three now, and with
  this hope to scale to help address what's needed to properly support
  future Rust module support.

  A lot of exciting stuff coming in future kernel releases.

  This has been on linux-next for ~ 3 weeks now with no issues"

* tag 'modules-6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux:
  module: Refine kmemleak scanned areas
  module: abort module loading when sysfs setup suffer errors
  MAINTAINERS: scale modules with more reviewers
  module: Clean up the description of MODULE_SIG_<type>
  module: Split modules_install compression and in-kernel decompression
2024-09-28 09:06:15 -07:00
Zhang Qiao
95b873693a sched_ext: Remove redundant p->nr_cpus_allowed checker
select_task_rq() has already checked that 'p->nr_cpus_allowed > 1', so the
'p->nr_cpus_allowed == 1' check in scx_select_cpu_dfl() is redundant.

Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-09-27 10:23:45 -10:00
Tejun Heo
efe231d9de sched_ext: Decouple locks in scx_ops_enable()
The enable path uses three big locks - scx_fork_rwsem, scx_cgroup_rwsem and
cpus_read_lock. Currently, the locks are grabbed together which is prone to
locking order problems.

For example, currently, there is a possible deadlock involving
scx_fork_rwsem and cpus_read_lock. cpus_read_lock has to nest inside
scx_fork_rwsem due to locking order existing in other subsystems. However,
there exists a dependency in the other direction during hotplug if hotplug
needs to fork a new task, which happens in some cases. This leads to the
following deadlock:

       scx_ops_enable()                               hotplug

                                          percpu_down_write(&cpu_hotplug_lock)
   percpu_down_write(&scx_fork_rwsem)
   block on cpu_hotplug_lock
                                          kthread_create() waits for kthreadd
					  kthreadd blocks on scx_fork_rwsem

Note that this doesn't trigger lockdep because the hotplug side dependency
bounces through kthreadd.

With the preceding scx_cgroup_enabled change, this can be solved by
decoupling cpus_read_lock, which is needed for static_key manipulations,
from the other two locks.

- Move the first block of static_key manipulations outside of scx_fork_rwsem
  and scx_cgroup_rwsem. This is now safe with the preceding
  scx_cgroup_enabled change.

- Drop scx_cgroup_rwsem and scx_fork_rwsem between the two task iteration
  blocks so that __scx_ops_enabled static_key enabling is outside the two
  rwsems.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-tested-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Link: http://lkml.kernel.org/r/8cd0ec0c4c7c1bc0119e61fbef0bee9d5e24022d.camel@linux.ibm.com
2024-09-27 10:02:40 -10:00
Tejun Heo
160216568c sched_ext: Decouple locks in scx_ops_disable_workfn()
The disable path uses three big locks - scx_fork_rwsem, scx_cgroup_rwsem and
cpus_read_lock. Currently, the locks are grabbed together which is prone to
locking order problems. With the preceding scx_cgroup_enabled change, we can
decouple them:

- As cgroup disabling no longer requires modifying a static_key which
  requires cpus_read_lock(), no need to grab cpus_read_lock() before
  grabbing scx_cgroup_rwsem.

- cgroup can now be independently disabled before tasks are moved back to
  the fair class.

Relocate scx_cgroup_exit() invocation before scx_fork_rwsem is grabbed, drop
now unnecessary cpus_read_lock() and move static_key operations out of
scx_fork_rwsem. This decouples all three locks in the disable path.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-tested-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Link: http://lkml.kernel.org/r/8cd0ec0c4c7c1bc0119e61fbef0bee9d5e24022d.camel@linux.ibm.com
2024-09-27 10:02:40 -10:00
Tejun Heo
568894edbe sched_ext: Add scx_cgroup_enabled to gate cgroup operations and fix scx_tg_online()
If the BPF scheduler does not implement ops.cgroup_init(), scx_tg_online()
didn't set SCX_TG_INITED which meant that ops.cgroup_exit(), even if
implemented, won't be called from scx_tg_offline(). This is because
SCX_HAS_OP(cgroup_init) is used to test both whether SCX cgroup operations
are enabled and whether ops.cgroup_init() exists.

Fix it by introducing a separate bool scx_cgroup_enabled to gate cgroup
operations and use SCX_HAS_OP(cgroup_init) only to test whether
ops.cgroup_init() exists. Make all cgroup operations consistently use
scx_cgroup_enabled to test whether cgroup operations are enabled.
scx_cgroup_enabled is added instead of using scx_enabled() to ease planned
locking updates.
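
A condensed sketch of the gating (illustrative; run_cgroup_init() stands in for the call into the BPF scheduler):

```
static bool scx_cgroup_enabled;

static int scx_tg_online(struct task_group *tg)
{
	int ret = 0;

	if (scx_cgroup_enabled && SCX_HAS_OP(cgroup_init))
		ret = run_cgroup_init(tg);	/* stand-in */
	if (!ret)
		tg->scx_flags |= SCX_TG_INITED;	/* set even without cgroup_init */
	return ret;
}
```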

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-09-27 10:02:40 -10:00
Tejun Heo
4269c603cc sched_ext: Enable scx_ops_init_task() separately
scx_ops_init_task() and the follow-up scx_ops_enable_task() in the fork path
were gated by scx_enabled() test and thus __scx_ops_enabled had to be turned
on before the first scx_ops_init_task() loop in scx_ops_enable(). However,
if an external entity causes sched_class switch before the loop is complete,
tasks which are not initialized could be switched to SCX.

The following can be reproduced by running a program which keeps toggling a
process between SCHED_OTHER and SCHED_EXT using sched_setscheduler(2).

  sched_ext: Invalid task state transition 0 -> 3 for fish[1623]
  WARNING: CPU: 1 PID: 1650 at kernel/sched/ext.c:3392 scx_ops_enable_task+0x1a1/0x200
  ...
  Sched_ext: simple (enabling)
  RIP: 0010:scx_ops_enable_task+0x1a1/0x200
  ...
   switching_to_scx+0x13/0xa0
   __sched_setscheduler+0x850/0xa50
   do_sched_setscheduler+0x104/0x1c0
   __x64_sys_sched_setscheduler+0x18/0x30
   do_syscall_64+0x7b/0x140
   entry_SYSCALL_64_after_hwframe+0x76/0x7e

Fix it by gating scx_ops_init_task() separately using
scx_ops_init_task_enabled. __scx_ops_enabled is now set after all tasks are
finished with scx_ops_init_task().

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-09-27 10:02:40 -10:00
Tejun Heo
9753358a6a sched_ext: Fix SCX_TASK_INIT -> SCX_TASK_READY transitions in scx_ops_enable()
scx_ops_enable() has two task iteration loops. The first one calls
scx_ops_init_task() on every task and the latter switches the eligible ones
into SCX. The first loop left the tasks in SCX_TASK_INIT state and then the
second loop switched it into READY before switching the task into SCX.

The distinction between INIT and READY is only meaningful in the fork path
where it's used to tell whether the task finished forking so that we can
tell ops.exit_task() accordingly. Leaving tasks in INIT state between the two
loops is inconsistent with the fork path and incorrect. The following can be
triggered by running a program which keeps toggling a task between
SCHED_OTHER and SCHED_EXT while a scheduler is being enabled:

  sched_ext: Invalid task state transition 1 -> 3 for fish[1526]
  WARNING: CPU: 2 PID: 1615 at kernel/sched/ext.c:3393 scx_ops_enable_task+0x1a1/0x200
  ...
  Sched_ext: qmap (enabling+all)
  RIP: 0010:scx_ops_enable_task+0x1a1/0x200
  ...
   switching_to_scx+0x13/0xa0
   __sched_setscheduler+0x850/0xa50
   do_sched_setscheduler+0x104/0x1c0
   __x64_sys_sched_setscheduler+0x18/0x30
   do_syscall_64+0x7b/0x140
   entry_SYSCALL_64_after_hwframe+0x76/0x7e

Fix it by transitioning to READY in the first loop right after
scx_ops_init_task() succeeds.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: David Vernet <void@manifault.com>
2024-09-27 10:02:40 -10:00
Tejun Heo
8c2090c504 sched_ext: Initialize in bypass mode
scx_ops_enable() used preempt_disable() around the task iteration loop to
switch tasks into SCX to guarantee forward progress of the task which is
running scx_ops_enable(). However, in the gap between setting
__scx_ops_enabled and preempt_disable(), an external entity can put tasks
including the enabling one into SCX prematurely, which can lead to
malfunctions including stalls.

The bypass mode can wrap the entire enabling operation and guarantee forward
progress no matter what the BPF scheduler does. Use the bypass mode instead
to guarantee forward progress while enabling.

While at it, release and regrab scx_tasks_lock between the two task
iteration loops in scx_ops_enable() for clarity, as there is no reason to
keep holding the lock between them.

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-09-27 10:02:40 -10:00
Tejun Heo
fc1fcebead sched_ext: Remove SCX_OPS_PREPPING
The distinction between SCX_OPS_PREPPING and SCX_OPS_ENABLING is not used
anywhere and only adds confusion. Drop SCX_OPS_PREPPING.

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-09-27 10:02:39 -10:00
Tejun Heo
1bbcfe620e sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
check_hotplug_seq() is used to detect CPU hotplug events which occur while
the BPF scheduler is being loaded so that initialization can be retried if
CPU hotplug events take place before the CPU hotplug callbacks are online.

As such, the best place to call it is in the same cpus_read_lock() section
that enables the CPU hotplug ops. Currently, it is called in the next
cpus_read_lock() block in scx_ops_enable(). The side effect of this
placement is a small window in which hotplug sequence detection can trigger
unnecessarily, which isn't critical.

Move check_hotplug_seq() invocation to the same cpus_read_lock() block as
the hotplug operation enablement to close the window and get the invocation
out of the way for planned locking updates.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: David Vernet <void@manifault.com>
2024-09-27 10:02:39 -10:00
Al Viro
cb787f4ac0 [tree-wide] finally take no_llseek out
no_llseek had been defined to NULL two years ago, in commit 868941b144
("fs: remove no_llseek")

To quote that commit,

  At -rc1 we'll need do a mechanical removal of no_llseek -

  git grep -l -w no_llseek | grep -v porting.rst | while read i; do
	sed -i '/\<no_llseek\>/d' $i
  done

  would do it.

Unfortunately, that hadn't been done.  Linus, could you do that now, so
that we could finally put that thing to rest? All instances are of the
form
	.llseek = no_llseek,
so it's obviously safe.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-09-27 08:18:43 -07:00
Tejun Heo
6f34d8d382 sched_ext: Use shorter slice while bypassing
While bypassing, tasks are scheduled in FIFO order which favors tasks that
hog CPUs. This can slow down e.g. unloading of the BPF scheduler. While
bypassing, guaranteeing timely forward progress is the main goal. There's no
point in giving long slices. Shorten the time slice used while bypassing
from 20ms to 5ms.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
2024-09-26 12:56:46 -10:00
Tejun Heo
b7b3b2dbae sched_ext: Split the global DSQ per NUMA node
In the bypass mode, the global DSQ is used to schedule all tasks in simple
FIFO order. All tasks are queued into the global DSQ and all CPUs try to
execute tasks from it. This creates a lot of cross-node cacheline accesses
and scheduling across the node boundaries, and can lead to live-lock
conditions where the system takes tens of minutes to disable the BPF
scheduler while executing in the bypass mode.

Split the global DSQ per NUMA node. Each node has its own global DSQ. When a
task is dispatched to SCX_DSQ_GLOBAL, it's put into the global DSQ local to
the task's CPU and all CPUs in a node only consume its node-local global
DSQ.
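
A sketch of the per-node lookup (assumed names, anticipating the find_global_dsq() added later in the series):

```
static struct scx_dispatch_q **global_dsqs;	/* one global DSQ per NUMA node */

static struct scx_dispatch_q *find_global_dsq(struct task_struct *p)
{
	/* SCX_DSQ_GLOBAL dispatches land on the DSQ of the task's node;
	 * CPUs only consume their node-local global DSQ. */
	return global_dsqs[cpu_to_node(task_cpu(p))];
}
```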

This resolves a livelock condition which could be reliably triggered on a
2x EPYC 7642 system by running `stress-ng --race-sched 1024` together with
`stress-ng --workload 80 --workload-threads 10` while repeatedly enabling
and disabling a SCX scheduler.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
2024-09-26 12:56:46 -10:00
Tejun Heo
bba26bf356 sched_ext: Relocate find_user_dsq()
To prepare for the addition of find_global_dsq(). No functional changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
2024-09-26 12:56:46 -10:00
Tejun Heo
63fb3ec805 sched_ext: Allow only user DSQs for scx_bpf_consume(), scx_bpf_dsq_nr_queued() and bpf_iter_scx_dsq_new()
SCX_DSQ_GLOBAL is special in that it can't be used as a priority queue and
is consumed implicitly, but all BPF DSQ related kfuncs could be used on it.
SCX_DSQ_GLOBAL will be split per-node for scalability and those operations
won't make sense anymore. Disallow SCX_DSQ_GLOBAL on scx_bpf_consume(),
scx_bpf_dsq_nr_queued() and bpf_iter_scx_dsq_new(). This means that
SCX_DSQ_GLOBAL can only be used as a dispatch target from BPF schedulers.

With scx_flatcg, which was using SCX_DSQ_GLOBAL as the fallback DSQ,
updated, this shouldn't affect any schedulers.

This leaves find_dsq_for_dispatch() the only user of find_non_local_dsq().
Open code and remove find_non_local_dsq().

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
2024-09-26 12:56:46 -10:00
Linus Torvalds
a78282e2c9 Revert "binfmt_elf, coredump: Log the reason of the failed core dumps"
This reverts commit fb97d2eb54.

The logging was questionable to begin with, but it seems to actively
deadlock on the task lock.

 "On second thought, let's not log core dump failures. 'Tis a silly place"

because if you can't tell your core dump is truncated, maybe you should
just fix your debugger instead of adding bugs to the kernel.

Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Link: https://lore.kernel.org/all/d122ece6-3606-49de-ae4d-8da88846bef2@oracle.com/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-09-26 11:39:02 -07:00
Linus Torvalds
5159938e10 Probes updates for v6.12:
- uprobes: make trace_uprobe->nhit counter a per-CPU one
    This makes the uprobe event's hit counter per-CPU, improving
    scalability in multi-core environments.
 
 - kprobes: Remove obsoleted declaration for init_test_probes
    Remove unused init_test_probes() from header.
 
 - Raw tracepoint probe supports raw tracepoint events on modules.
    The tracepoint events using fprobe were introduced in v6.5, but
    tracepoints can be compiled in modules. This supports such a case.
    This includes the following improvements.
   . tracepoint: add a function for iterating over all tracepoints in
     all modules.
   . tracepoint: Add a function for iterating over tracepoints in a
     module.
   . tracing/fprobe: Support raw tracepoint events on modules.
   . tracing/fprobe: Support raw tracepoints on future loaded modules.
      This allows users to add tracepoint events on modules which are
      not loaded yet.
   . selftests/tracing: Add a test for tracepoint events on modules.
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEh7BulGwFlgAOi5DV2/sHvwUrPxsFAmb0HXgACgkQ2/sHvwUr
 Pxs7AAf+K89Q7eyqKLP/oG5LGsnmWwhZHP26HTbGKh7mRaxGE+cf3l1O2lCMAgBt
 0Y1J0sHkgRSnubmlPrgEMKKLOKVBwnvwBqbqO8Zw8L3GxMegG5YYsl3Y60Q0T6Gq
 xiL17sHILbb/yefUqnf6C3QHoSjR4aTMEaQSpux1tsCqG/sLeU7V6DZrWdM5t4Fl
 CvQDuy//UdQUKFTUC5XOc6lRbKr94ktp/VTxdHZLXa5u6p/slq8ISf9EA+Rrsjkp
 m+FtW8MpfcYt3K+hs0kV58F43XWeRt9F7OlLf+MlyCeRRQor4xvkVlV0iw6VcRG9
 sXt6ml6AmyA2JWRzR5qSKYvMAsNVyA==
 =GYlS
 -----END PGP SIGNATURE-----

Merge tag 'probes-v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull probes updates from Masami Hiramatsu:

 - uprobes: make trace_uprobe->nhit counter a per-CPU one

   This makes the uprobe event's hit counter per-CPU, improving
   scalability in multi-core environments

 - kprobes: Remove obsoleted declaration for init_test_probes

   Remove unused init_test_probes() from header

 - Raw tracepoint probe supports raw tracepoint events on modules:
     - add a function for iterating over all tracepoints in all modules
     - add a function for iterating over tracepoints in a module
     - support raw tracepoint events on modules
     - support raw tracepoints on future loaded modules
     - add a test for tracepoint events on modules"

* tag 'probes-v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  sefltests/tracing: Add a test for tracepoint events on modules
  tracing/fprobe: Support raw tracepoints on future loaded modules
  tracing/fprobe: Support raw tracepoint events on modules
  tracepoint: Support iterating tracepoints in a loading module
  tracepoint: Support iterating over tracepoints on modules
  kprobes: Remove obsoleted declaration for init_test_probes
  uprobes: turn trace_uprobe's nhit counter to be per-CPU one
2024-09-26 08:55:36 -07:00
Linus Torvalds
aa486552a1 memblock: updates for 6.12-rc1
* new memblock_estimated_nr_free_pages() helper to replace totalram_pages()
   which is less accurate when CONFIG_DEFERRED_STRUCT_PAGE_INIT is set
 * fixes for memblock tests
 -----BEGIN PGP SIGNATURE-----
 
 iQFEBAABCgAuFiEEeOVYVaWZL5900a/pOQOGJssO/ZEFAmbejv0QHHJwcHRAa2Vy
 bmVsLm9yZwAKCRA5A4Ymyw79kVVlB/4yOoCDvJyUocEY0/Zv5bdRGXlAI0Igp3VV
 E0rEpvIjTBWwp/KZziQ8zMFk5zL/Aqb081vRsCko0lh2wjD5tFgNWWJG/sryQ/tX
 vc88p83KEXxNy4QC1qCh8dvHGIZVuLQ8oWQ7QFuH2ResdOaLdcfnobcu6/W/pBE0
 60/0bNdNgFPgnCpFIcWvGFOqZ10akhw4xYrwRsCKAQEeqeKyQE/DBFUvNrqkOuNG
 +4k71X/9mcuEDBKGRCf5XzCf7nwk4k8pzOc4xMeEhAaaV2uZdENfQuu1Av7nqRah
 zhYveo0Wd0cnGWORBT/ddzPDeBjdP2ZM9qR70yoSj2mQ7a3ixLfd
 =wtsK
 -----END PGP SIGNATURE-----

Merge tag 'memblock-v6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock

Pull memblock updates from Mike Rapoport:

 - new memblock_estimated_nr_free_pages() helper to replace
   totalram_pages() which is less accurate when
   CONFIG_DEFERRED_STRUCT_PAGE_INIT is set

 - fixes for memblock tests

* tag 'memblock-v6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
  s390/mm: get estimated free pages by memblock api
  kernel/fork.c: get estimated free pages by memblock api
  mm/memblock: introduce a new helper memblock_estimated_nr_free_pages()
  memblock test: fix implicit declaration of function 'strscpy'
  memblock test: fix implicit declaration of function 'isspace'
  memblock test: fix implicit declaration of function 'memparse'
  memblock test: add the definition of __setup()
  memblock test: fix implicit declaration of function 'virt_to_phys'
  tools/testing: abstract two init.h into common include directory
  memblock tests: include export.h in linkage.h as kernel dose
  memblock tests: include memory_hotplug.h in mmzone.h as kernel dose
2024-09-25 11:35:19 -07:00
Linus Torvalds
1f9c4a9967 Kbuild: make MODVERSIONS support depend on not being a compile test build
Currently the Rust support is gated on not having MODVERSIONS enabled,
and as a result an "allmodconfig" build will disable Rust build tests.

While MODVERSIONS configurations are worth build testing, the feature is
not actually meaningful unless you run the result, and I'd rather get
build coverage of Rust than MODVERSIONS.  So let's disable MODVERSIONS
for build testing until the Rust side clears up.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-09-25 11:08:28 -07:00
Masami Hiramatsu (Google)
57a7e6de9e tracing/fprobe: Support raw tracepoints on future loaded modules
Support raw tracepoint events on modules that will be loaded in the future
(i.e. are currently unloaded). This allows users to create raw tracepoint
events which can be used from a module's __init functions.

Note: since the kernel does not have any information about the tracepoints
in unloaded modules, fprobe events can neither check whether the tracepoint
exists nor extend the BTF-based arguments.

Link: https://lore.kernel.org/all/172397780593.286558.18360375226968537828.stgit@devnote2/

Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2024-09-25 23:23:44 +09:00
Masami Hiramatsu (Google)
67e9a9ee47 tracing/fprobe: Support raw tracepoint events on modules
Support raw tracepoint events on modules via fprobe events.
Since fprobe only uses for_each_kernel_tracepoint() to find a tracepoint,
tracepoints on modules are not handled. Thus if a user specifies a
tracepoint on a module, it shows an error.
This adds a new for_each_module_tracepoint() API to the tracepoint
subsystem, and uses it to find tracepoints on modules.

Link: https://lore.kernel.org/all/172397779651.286558.15903703620679186867.stgit@devnote2/

Reported-by: don <zds100@gmail.com>
Closes: https://lore.kernel.org/all/20240530215718.aeec973a1d0bf058d39cb1e3@kernel.org/
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2024-09-25 23:23:44 +09:00
Masami Hiramatsu (Google)
d4df54f338 tracepoint: Support iterating tracepoints in a loading module
Add a for_each_tracepoint_in_module() function to iterate over the
tracepoints in a module. This API is needed for handling tracepoints in
a loading module from the tracepoint_module_notifier callback function.
This also updates for_each_module_tracepoint() to pass the module to the
callback function so that it can find the module easily.
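
The assumed shape of the iterators after this change (signatures inferred from the description, not quoted from the header):

```
/* Walk the tracepoints of one (loading) module. */
void for_each_tracepoint_in_module(struct module *mod,
				   void (*fct)(struct tracepoint *tp,
					       struct module *mod, void *priv),
				   void *priv);

/* Walk tracepoints on all modules; the module is now passed through. */
void for_each_module_tracepoint(void (*fct)(struct tracepoint *tp,
					    struct module *mod, void *priv),
				void *priv);
```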

Link: https://lore.kernel.org/all/172397778740.286558.15781131277732977643.stgit@devnote2/

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2024-09-25 23:23:44 +09:00
Masami Hiramatsu (Google)
d5dbf8b48a tracepoint: Support iterating over tracepoints on modules
Add for_each_module_tracepoint() for iterating over tracepoints
on modules. This is similar to for_each_kernel_tracepoint(),
but only covers the tracepoints on modules (not including kernel
built-in tracepoints).

Link: https://lore.kernel.org/all/172397777800.286558.14554748203446214056.stgit@devnote2/

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2024-09-25 23:23:44 +09:00
Andrii Nakryiko
10cdb82aa7 uprobes: turn trace_uprobe's nhit counter to be per-CPU one
trace_uprobe->nhit counter is not incremented atomically, so its value
is questionable when a uprobe is hit on multiple CPUs simultaneously.

Also, doing this shared counter increment across many CPUs causes heavy
cache line bouncing, limiting uprobe/uretprobe performance scaling with
number of CPUs.

Solve both problems by making this a per-CPU counter.
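
The general per-CPU counter pattern, as a sketch (treating nhits as the field name is an assumption):

```
static unsigned long __percpu *nhits;	/* from alloc_percpu(unsigned long) */

static void record_hit(void)
{
	/* Per-CPU increment: no atomics, no cross-CPU cacheline bouncing. */
	this_cpu_inc(*nhits);
}

static unsigned long read_hits(void)
{
	unsigned long sum = 0;
	int cpu;

	/* Readers pay instead: sum the per-CPU values. */
	for_each_possible_cpu(cpu)
		sum += *per_cpu_ptr(nhits, cpu);
	return sum;
}
```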

Link: https://lore.kernel.org/all/20240813203409.3985398-1-andrii@kernel.org/

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2024-09-25 20:10:38 +09:00
Linus Torvalds
fa8380a06b bpf-next-6.12-struct-fd
-----BEGIN PGP SIGNATURE-----
 
 iQIyBAABCAAdFiEE+soXsSLHKoYyzcli6rmadz2vbToFAmbyniwACgkQ6rmadz2v
 bTqE0w/2J8TJWfR+1Z0Bf2Nzt3kFd/wLNn6FpWsq+z0/pzoP5AzborvmLzNiZmeh
 0vJFieOL7pV4+NcaIHBPqfW1eMsXu+BlrtkHGLLYiCPJUr8o5jU9SrVKfF3arMZS
 a6+zcX6EivX0MYWobZ2F7/8XF0nRQADxzInLazFmtJmLmOAyIch417KOg9ylwr3m
 WVqhtCImUFyVz83XMFgbf2jXrvL9xD08iHN62GzcAioRF5LeJSPX0U/N15gWDqF7
 V68F0PnvUf6/hkFvYVynhpMivE8u+8VXCHX+heZ8yUyf4ExV/+KSZrImupJ0WLeO
 iX/qJ/9XP+g6ad9Olqpu6hmPi/6c6epQgbSOchpG04FGBGmJv1j9w4wnlHCgQDdB
 i2oKHRtMKdqNZc0sOSfvw/KyxZXJuD1VQ9YgGVpZbHUbSZDoj7T40zWziUp8VgyR
 nNtOmfJLDbtYlPV7/cQY5Ui4ccMJm6GzxxLBcqcMWxBu/90Ng0wTSubLbg3RHmWu
 d9cCL6IprjJnliEUqC4k4gqZy6RJlHvQ8+NDllaW+4iPnz7B2WaUbwRX/oZ5yiYK
 bLjWCWo+SzntVPAzTsmAYs2G47vWoALxo2NpNXLfmhJiWwfakJaQu7fwrDxsY11M
 OgByiOzcbAcvkJzeVIDhfLVq5z49KF6k4D8Qu0uvXHDeC8Mraw==
 =zzmh
 -----END PGP SIGNATURE-----

Merge tag 'bpf-next-6.12-struct-fd' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Pull bpf 'struct fd' updates from Alexei Starovoitov:
 "This includes struct_fd BPF changes from Al and Andrii"

* tag 'bpf-next-6.12-struct-fd' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
  bpf: convert bpf_token_create() to CLASS(fd, ...)
  security,bpf: constify struct path in bpf_token_create() LSM hook
  bpf: more trivial fdget() conversions
  bpf: trivial conversions for fdget()
  bpf: switch maps to CLASS(fd, ...)
  bpf: factor out fetching bpf_map from FD and adding it to used_maps list
  bpf: switch fdget_raw() uses to CLASS(fd_raw, ...)
  bpf: convert __bpf_prog_get() to CLASS(fd, ...)
2024-09-24 14:54:26 -07:00
Tejun Heo
42268ad0eb sched_ext: Build fix for !CONFIG_SMP
move_remote_task_to_local_dsq() is only defined on SMP configs but
scx_dispatch_from_dsq() was calling move_remote_task_to_local_dsq() on UP
configs too, causing build failures. Add a dummy
move_remote_task_to_local_dsq() which triggers a warning.
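
The likely shape of the stub (signature assumed; the point is only that UP builds get a definition that warns if ever reached):

```
#ifndef CONFIG_SMP
static void move_remote_task_to_local_dsq(struct task_struct *p, u64 enq_flags,
					  struct scx_dispatch_q *src_dsq,
					  struct rq *dst_rq)
{
	WARN_ON_ONCE(1);	/* remote moves cannot happen on UP */
}
#endif
```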

Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 4c30f5ce4f ("sched_ext: Implement scx_bpf_dispatch[_vtime]_from_dsq()")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202409241108.jaocHiDJ-lkp@intel.com/
2024-09-24 11:10:07 -10:00
Linus Torvalds
68e5c7d4ce Kbuild updates for v6.12
- Support cross-compiling linux-headers Debian package and kernel-devel
    RPM package
 
  - Add support for the linux-debug Pacman package
 
  - Improve module rebuilding speed by factoring out the common code to
    scripts/module-common.c
 
  - Separate device tree build rules into scripts/Makefile.dtbs
 
  - Add a new script to generate modules.builtin.ranges, which is useful
    for tracing tools to find symbols in built-in modules
 
  - Refactor Kconfig and misc tools
 
  - Update Kbuild and Kconfig documentation
 -----BEGIN PGP SIGNATURE-----
 
 iQJJBAABCgAzFiEEbmPs18K1szRHjPqEPYsBB53g2wYFAmby2+QVHG1hc2FoaXJv
 eUBrZXJuZWwub3JnAAoJED2LAQed4NsGpQ0QALWMgox3OdceNiBT8QieqRFfwKFv
 5jxtsZt+MbTdWNMEfgc4Cq2i5ZAqpYGZh32RwTiZJogBvYEIoO7M4Md9VwoEe/BC
 q8VZ6FhUy7358IX/FCukfB0dYvkziRalBRDrE4iFmMMdhBvZ9nrvMxllqFCMllLj
 DTrBTTiMus3qiiczr4tb5QwaIR6C+yqiEBF++ftLmWvo9dn8YNNUnI65fGjyQM/w
 0wMPwsB3Y2HdnRpLUS6T18gZbjoXsAk4+WX0TpdBfTs3d7AdbzlSMtc0BslEm6Tb
 JjIK6SbJCM3kNC7O0/gsUenOaSBxSbKjjg33gQxn/eNoi0nRt+qnBMMreYiTd95G
 Hq86QcNfKQtWAagKRTppMkYEDqMU2RKH7BmJOsfQyeG9cGpAAu+0HsQv3f/h5QP1
 MlA8o+NP5oQn6RbrhZz1Pqm24+OMxiXaBhmo8XbZ+MXzi/CBR54Eo4ip/FSHzXII
 EGEAQL7t7YU7xu8qMIE6ZQMH7BJsjJNee0vrNiYZa4xHLYyHi6mJl8K6LlHQ3nEx
 WOsPX9MLITtSJwcvIio/0sEnuR7pjcShGfqhbHO5tiOYznsbcSvu3+18HPGCpFRt
 vYFkNIRc298k7++A+Zp2wwdD2TS+SSilrAImmJXMhf0M+Nyg2vnlfAo8t0QSkFlh
 1g9dJuy+8jYRjHXP
 =g4t/
 -----END PGP SIGNATURE-----

Merge tag 'kbuild-v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild updates from Masahiro Yamada:

 - Support cross-compiling linux-headers Debian package and kernel-devel
   RPM package

 - Add support for the linux-debug Pacman package

 - Improve module rebuilding speed by factoring out the common code to
   scripts/module-common.c

 - Separate device tree build rules into scripts/Makefile.dtbs

 - Add a new script to generate modules.builtin.ranges, which is useful
   for tracing tools to find symbols in built-in modules

 - Refactor Kconfig and misc tools

 - Update Kbuild and Kconfig documentation

* tag 'kbuild-v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (51 commits)
  kbuild: doc: replace "gcc" in external module description
  kbuild: doc: describe the -C option precisely for external module builds
  kbuild: doc: remove the description about shipped files
  kbuild: doc: drop section numbering, use references in modules.rst
  kbuild: doc: throw out the local table of contents in modules.rst
  kbuild: doc: remove outdated description of the limitation on -I usage
  kbuild: doc: remove description about grepping CONFIG options
  kbuild: doc: update the description about Kbuild/Makefile split
  kbuild: remove unnecessary export of RUST_LIB_SRC
  kbuild: remove append operation on cmd_ld_ko_o
  kconfig: cache expression values
  kconfig: use hash table to reuse expressions
  kconfig: refactor expr_eliminate_dups()
  kconfig: add comments to expression transformations
  kconfig: change some expr_*() functions to bool
  scripts: move hash function from scripts/kconfig/ to scripts/include/
  kallsyms: change overflow variable to bool type
  kallsyms: squash output_address()
  kbuild: add install target for modules.builtin.ranges
  scripts: add verifier script for builtin module range data
  ...
2024-09-24 13:02:06 -07:00
Linus Torvalds
4491b85480 dma-mapping fixes for Linux 6.12
- sort out a few issues with the direct calls to iommu-dma
    (Christoph Hellwig, Leon Romanovsky)
 -----BEGIN PGP SIGNATURE-----
 
 iQI/BAABCgApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAmbyWtkLHGhjaEBsc3Qu
 ZGUACgkQD55TZVIEUYPToRAAlChDftSXmz9Isxds5VyzFy8El6hcutPiRfB8mUze
 9aq7qgCeLVq6L0tPvNTgl2iYhS8KjTxVKSooYSQ0eAfb3FgNHKggmfV9GO8ddqBe
 ojf9s9EJzJ3BatkVPBDPin88Sv0Vfwkb6n78gEeuTGPReScMdR5WZfmyl5QfMWpt
 5W16gSA666DcIpxIDPo6zEPZT/lXsPcmvXrZJhFZe7hITNp/CihVxdxy9spLRG2Z
 Ntei8AtXUvjn3ziU4ymcKOfP6fmERlVAiq65CqlXMUELs3Eh4knH9eEiXHtu0JAD
 qMhuHSB5b2Dd2gjwvJDoISRPPtpy0sGnx81lAZJPIGnvVKKJSI+Lp7wDP/w66Rxx
 7vF7tU9uyAb4Qmb5h0s9BqJ3mCNN9WXsAKrNX3rLr2ZLCjSuQFDne0oAYgVl10J8
 ezALiv+mG4C4Q5lxzmZm4PU8f1CYjLMe0KppDMLsuwdWpliOlGigCKYddqkOj0ct
 Cvg4lMHNcBo8xADyaqm/9iJYIkvLl5o6JGmVBbSCikUT3GMssBf/0KdITS4D+/cj
 150LGxiZOQaOx1Ef8eACYslKJq8hIz0OmAjgDGpdLM7sBxa62FuZJrcujkKW9UA6
 8DrYp+XqKd5R/ZOcjAIMbNAlZEa5CytgB9r4dXvrAC1dk//Dd23G873KunAE1DJJ
 cy0=
 =3hPK
 -----END PGP SIGNATURE-----

Merge tag 'dma-mapping-6.12-2024-09-24' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping fixes from Christoph Hellwig:

 - sort out a few issues with the direct calls to iommu-dma (Christoph
   Hellwig, Leon Romanovsky)

* tag 'dma-mapping-6.12-2024-09-24' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: report unlimited DMA addressing in IOMMU DMA path
  iommu/dma: remove most stubs in iommu-dma.h
  dma-mapping: fix vmap and mmap of noncontiougs allocations
2024-09-24 12:00:37 -07:00
Linus Torvalds
6fa6588e59 sched_ext: Fixes for v6.12-rc1
- Three build fixes.
 
 - The fix for a stall bug introduced by a recent optimization in sched core
   (SM_IDLE).
 
 - Addition of /sys/kernel/sched_ext/enable_seq. While not a fix, it is a
   simple addition that lets distro people tell whether an SCX
   scheduler has ever been loaded on the system.
 -----BEGIN PGP SIGNATURE-----
 
 iIQEABYKACwWIQTfIjM1kS57o3GsC/uxYfJx3gVYGQUCZvGekA4cdGpAa2VybmVs
 Lm9yZwAKCRCxYfJx3gVYGdkDAP46Wbz7XOTIJHs4NV3sxAH1Kk3bmZHtzB0C0zb6
 FChT3QEAzHFtY+mCtc/qJ6IMKizTDcgQ6V8zbCtXNuVxXxXMrAY=
 =uVPP
 -----END PGP SIGNATURE-----

Merge tag 'sched_ext-for-6.12-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext

Pull sched_ext fixes from Tejun Heo:

 - Three build fixes

 - The fix for a stall bug introduced by a recent optimization in sched
   core (SM_IDLE)

 - Addition of /sys/kernel/sched_ext/enable_seq. While not a fix, it is
   a simple addition that lets distro people tell whether an SCX
   scheduler has ever been loaded on the system

* tag 'sched_ext-for-6.12-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
  sched_ext: Provide a sysfs enable_seq counter
  sched_ext: Fix build when !CONFIG_STACKTRACE
  sched, sched_ext: Disable SM_IDLE/rq empty path when scx_enabled()
  sched: Put task_group::idle under CONFIG_GROUP_SCHED_WEIGHT
  sched: Add dummy version of sched_group_set_idle()
2024-09-24 11:33:50 -07:00