The tasklist_lock popped up as a scalability bottleneck on some testing
workloads. The readlocks in do_prlimit and set/getpriority are not
necessary in all cases.
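The reason even read-side acquisitions hurt at scale is that every reader
still performs atomic updates on the shared lock word, so a single global
rwlock bounces one cache line across every CPU taking it. The sketch below
is only a userspace analogy (plain pthreads, not the kernel's qrwlock):
per-run time grows as reader threads are added, where perfect read-side
scaling would keep it roughly flat.

/*
 * Userspace analogy only: N threads hammer read lock/unlock on one
 * shared pthread rwlock.  Build with: cc -O2 -pthread rwlock_readers.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 2000000

static pthread_rwlock_t shared_lock = PTHREAD_RWLOCK_INITIALIZER;

static void *reader(void *arg)
{
        (void)arg;
        for (long i = 0; i < ITERS; i++) {
                pthread_rwlock_rdlock(&shared_lock);
                pthread_rwlock_unlock(&shared_lock);
        }
        return NULL;
}

static double run(int nthreads)
{
        pthread_t tids[64];
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < nthreads; i++)
                pthread_create(&tids[i], NULL, reader, NULL);
        for (int i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        return (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
        /* per-run time would stay flat if read locking scaled perfectly */
        for (int n = 1; n <= 16; n *= 2)
                printf("%2d reader threads: %.3f s\n", n, run(n));
        return 0;
}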
Based on a cycles profile, it looked like ~87% of the time was spent in
the kernel, ~42% of which was just trying to get *some* spinlock
(queued_spin_lock_slowpath, not necessarily the tasklist_lock).
The big offenders (with rough percentages in cycles of the overall trace):
- do_wait 11%
- setpriority 8% (this patchset)
- kill 8%
- do_exit 5%
- clone 3%
- prlimit64 2% (this patchset)
- getrlimit 1% (this patchset)
I can't easily test this patchset on the original workload for various
reasons. Instead, I used the microbenchmark below to at least verify
there was some improvement. This patchset gave a 28% overall speedup (12%
going from the baseline to the set/getpriority change, then another 14% on
top of that from the prlimit changes).
One interesting thing is that my libc's getrlimit() was calling
prlimit64, so hoisting the read_lock(tasklist_lock) into sys_prlimit64
had no effect - it essentially optimized the older syscalls only. I
didn't do that in this patchset, but figured I'd mention it since it was
an option from the previous patch's discussion.
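A quick way to see that wiring from userspace is a small probe like the
one below (this assumes a glibc-style prlimit() wrapper, which issues the
prlimit64 syscall; it is just an illustration, not part of the series).
Running it under strace shows which syscall each libc call actually turns
into:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit a, b;

        /* plain libc getrlimit(); whether it issues prlimit64 is libc-dependent */
        if (getrlimit(RLIMIT_CPU, &a) != 0)
                return 1;
        /* explicit prlimit64 path: pid 0 means the calling process,
         * NULL new limit makes it a read-only query */
        if (prlimit(0, RLIMIT_CPU, NULL, &b) != 0)
                return 1;

        printf("getrlimit: cur=%llu max=%llu\n",
               (unsigned long long)a.rlim_cur,
               (unsigned long long)a.rlim_max);
        printf("prlimit:   cur=%llu max=%llu\n",
               (unsigned long long)b.rlim_cur,
               (unsigned long long)b.rlim_max);
        return 0;
}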
v3: https://lkml.kernel.org/r/20220106172041.522167-1-brho@google.com
v2: https://lore.kernel.org/lkml/20220105212828.197013-1-brho@google.com/
- update_rlimit_cpu on the group_leader instead of for_each_thread.
- update_rlimit_cpu still returns 0 or -ESRCH, even though we don't care
  about the error here. It felt safer that way in case someone uses
  that function again.
v1: https://lore.kernel.org/lkml/20211213220401.1039578-1-brho@google.com/
#include <signal.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        pid_t child;
        struct rlimit rlim[1];

        /* fan out to 2^6 = 64 copies of the benchmark process */
        fork(); fork(); fork(); fork(); fork(); fork();
        for (int i = 0; i < 5000; i++) {
                child = fork();
                if (child < 0)
                        exit(1);
                if (child > 0) {
                        /* parent: give the child ~1ms, then reap it */
                        usleep(1000);
                        kill(child, SIGTERM);
                        waitpid(child, NULL, 0);
                } else {
                        /* child: spin on the syscalls under test */
                        for (;;) {
                                setpriority(PRIO_PROCESS, 0,
                                            getpriority(PRIO_PROCESS, 0));
                                getrlimit(RLIMIT_CPU, rlim);
                        }
                }
        }
        return 0;
}
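For a rough per-call number, a single-process loop like the sketch below
(not part of the series) can be timed before and after the change. Note
that it only measures the uncontended cost of the getrlimit path; the
fork-heavy benchmark above is what actually exercises tasklist_lock
contention.

#include <stdio.h>
#include <sys/resource.h>
#include <time.h>

int main(void)
{
        enum { CALLS = 1000000 };
        struct rlimit rlim;
        struct timespec start, end;
        double ns;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < CALLS; i++)
                getrlimit(RLIMIT_CPU, &rlim);
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* average nanoseconds per getrlimit() call */
        ns = (end.tv_sec - start.tv_sec) * 1e9 +
             (end.tv_nsec - start.tv_nsec);
        printf("getrlimit: %.1f ns/call\n", ns / CALLS);
        return 0;
}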
Barret Rhoden (3):
  setpriority: only grab the tasklist_lock for PRIO_PGRP
  prlimit: make do_prlimit() static
  prlimit: do not grab the tasklist_lock

 include/linux/posix-timers.h   |   2 +-
 include/linux/resource.h       |   2 -
 kernel/sys.c                   | 127 +++++++++++++++++----------------
 kernel/time/posix-cpu-timers.c |  12 +++-
 4 files changed, 76 insertions(+), 67 deletions(-)
I have dropped the first change in this series as an almost identical
change was merged as commit 7f8ca0edfe ("kernel/sys.c: only take
tasklist_lock for get/setpriority(PRIO_PGRP)").
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Merge tag 'prlimit-tasklist_lock-for-v5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull tasklist_lock optimizations from Eric Biederman:
"prlimit and getpriority tasklist_lock optimizations"
Link: https://lore.kernel.org/lkml/20211213220401.1039578-1-brho@google.com/ [v1]
Link: https://lore.kernel.org/lkml/20220105212828.197013-1-brho@google.com/ [v2]
Link: https://lore.kernel.org/lkml/20220106172041.522167-1-brho@google.com/ [v3]
* tag 'prlimit-tasklist_lock-for-v5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  prlimit: do not grab the tasklist_lock
  prlimit: make do_prlimit() static