Two small updates to ptrace_stop():
- Add a comment to explain that the preempt_disable() before unlocking
  tasklist_lock is not a correctness requirement. It merely prevents the
  tracer from preempting the tracee before the tracee schedules out.

- Make that preempt_disable() conditional on PREEMPT_RT=n. RT-enabled
  kernels cannot disable preemption at this point because
  cgroup_enter_frozen() and sched_submit_work() acquire spinlocks or
  rwlocks, which are substituted by sleeping locks on RT. Acquiring a
  sleeping lock in a preemption-disabled region is obviously not
  possible. This brings back the potential slowdown of ptrace() for
  RT-enabled kernels, but that is the price to be paid for latency
  guarantees.

Merge tag 'core-core-2023-10-29-v2' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core updates from Thomas Gleixner (summarized in the two points
above).

* tag 'core-core-2023-10-29-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  signal: Don't disable preemption in ptrace_stop() on PREEMPT_RT
  signal: Add a proper comment about preempt_disable() in ptrace_stop()
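The latency at stake is easiest to see from userspace. Below is a
minimal, hypothetical tracer/tracee sketch (illustration only, not part
of the commit): the ptrace() request against the stopped tracee is the
operation that, inside the kernel, funnels through ptrace_check_attach()
and wait_task_inactive(), i.e. the wait that the preempt_disable() in
ptrace_stop() is meant to keep short.

/*
 * Hypothetical illustration, not from the commit: a minimal userspace
 * tracer/tracee pair showing the handshake that ptrace_stop() serves.
 */
#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        pid_t child = fork();

        if (child < 0)
                return 1;
        if (child == 0) {
                for (;;)                /* tracee: idle until stopped */
                        pause();
        }

        /* Attach; the kernel stops the tracee with SIGSTOP. */
        if (ptrace(PTRACE_ATTACH, child, NULL, NULL) == -1) {
                perror("PTRACE_ATTACH");
                return 1;
        }
        /* Block until the stop is reported. */
        if (waitpid(child, NULL, 0) == -1) {
                perror("waitpid");
                return 1;
        }

        /*
         * This request only succeeds once the tracee is fully off the
         * CPU; in-kernel it goes through ptrace_check_attach() ->
         * wait_task_inactive(child, __TASK_TRACED).
         */
        siginfo_t si;
        if (ptrace(PTRACE_GETSIGINFO, child, NULL, &si) == 0)
                printf("tracee stopped by signal %d\n", si.si_signo);

        ptrace(PTRACE_DETACH, child, NULL, NULL);
        kill(child, SIGKILL);
        waitpid(child, NULL, 0);
        return 0;
}

Build with a plain C compiler on Linux (e.g. gcc -Wall); the
PTRACE_GETSIGINFO call is the point where the tracer depends on the
tracee having really scheduled out.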
commit 9cc6fea175
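The performance reasoning in the new comment in the diff below is a
timeline argument: if the tracee is preempted between read_unlock() and
schedule(), it is still on the runqueue when the ptracer checks, and the
ptracer backs off for a full scheduler tick. A rough, purely
hypothetical userspace model of that backoff (10 ms standing in for one
tick at HZ=100; nothing here is kernel code):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
        const struct timespec tick = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
        bool tracee_on_runqueue = true; /* models task_on_rq_queued(tracee) */
        int ticks_lost = 0;

        while (tracee_on_runqueue) {
                nanosleep(&tick, NULL); /* models the one-HZ-tick sleep */
                ticks_lost++;
                /* Pretend the preempted tracee has reached schedule() by now. */
                tracee_on_runqueue = false;
        }
        printf("tracee went inactive after %d lost tick(s)\n", ticks_lost);
        return 0;
}

One ill-timed preemption of the tracee therefore costs at least a whole
tick, which is exactly the slowdown the preempt-disable section avoids
on non-RT kernels.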
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2329,15 +2329,38 @@ static int ptrace_stop(int exit_code, int why, unsigned long message,
 		do_notify_parent_cldstop(current, false, why);
 
 	/*
-	 * Don't want to allow preemption here, because
-	 * sys_ptrace() needs this task to be inactive.
+	 * The previous do_notify_parent_cldstop() invocation woke ptracer.
+	 * On a PREEMPTION kernel this can result in a preemption requirement
+	 * which will be fulfilled after read_unlock() and the ptracer will be
+	 * put on the CPU.
+	 * The ptracer is in wait_task_inactive(, __TASK_TRACED) waiting for
+	 * this task to wait in schedule(). If this task gets preempted then it
+	 * remains enqueued on the runqueue. The ptracer will observe this and
+	 * then sleep for a delay of one HZ tick. In the meantime this task
+	 * gets scheduled, enters schedule() and will wait for the ptracer.
 	 *
-	 * XXX: implement read_unlock_no_resched().
+	 * This preemption point is not bad from a correctness point of
+	 * view but extends the runtime by one HZ tick time due to the
+	 * ptracer's sleep. The preempt-disable section ensures that there
+	 * will be no preemption between unlock and schedule(), improving
+	 * performance since the ptracer will observe that the tracee is
+	 * scheduled out once it gets on the CPU.
+	 *
+	 * On PREEMPT_RT locking tasklist_lock does not disable preemption.
+	 * Therefore the task can be preempted after do_notify_parent_cldstop()
+	 * before unlocking tasklist_lock, so there is no benefit in doing this.
+	 *
+	 * In fact disabling preemption is harmful on PREEMPT_RT because
+	 * the spinlock_t in cgroup_enter_frozen() must not be acquired
+	 * with preemption disabled due to the 'sleeping' spinlock
+	 * substitution of RT.
 	 */
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
 	read_unlock(&tasklist_lock);
 	cgroup_enter_frozen();
-	preempt_enable_no_resched();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable_no_resched();
 	schedule();
 	cgroup_leave_frozen(true);
 