cgroup: use an ordered workqueue for cgroup destruction
Sometimes the cleanup after memcg hierarchy testing gets stuck in
mem_cgroup_reparent_charges(), unable to bring non-kmem usage down to 0.
There may turn out to be several causes, but a major one is this: the
work item to offline the parent can run before the work item to offline
the child; the parent's mem_cgroup_reparent_charges() then loops,
waiting for the child's pages to be reparented to its LRUs, but it is
holding cgroup_mutex, which prevents the child from ever reaching its
own mem_cgroup_reparent_charges().
Just use an ordered workqueue for cgroup_destroy_wq.
tj: Committing as the temporary fix until the reverse dependency can
be removed from memcg. Comment updated accordingly.
Fixes: e5fca243ab ("cgroup: use a dedicated workqueue for cgroup destruction")
Suggested-by: Filipe Brandenburger <filbranden@google.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Tejun Heo <tj@kernel.org>
parent 0a6be65553
commit ab3f5faa62
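
For illustration only, not part of the patch: a minimal, self-contained
kernel-module sketch of the guarantee the fix relies on. All names below
(demo_destroy_wq, child_offline_fn, parent_offline_fn, and so on) are
hypothetical; the only call taken from the patch is
alloc_ordered_workqueue(). On an ordered workqueue, work items execute
one at a time in queueing order, so a child's offline item queued before
its parent's is guaranteed to finish first.

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/printk.h>
#include <linux/errno.h>

static struct workqueue_struct *demo_destroy_wq;

static void child_offline_fn(struct work_struct *work)
{
	pr_info("child offlined first\n");
}

static void parent_offline_fn(struct work_struct *work)
{
	pr_info("parent offlined only after the child\n");
}

static DECLARE_WORK(child_offline_work, child_offline_fn);
static DECLARE_WORK(parent_offline_work, parent_offline_fn);

static int __init ordered_wq_demo_init(void)
{
	/* Same allocation the patch switches cgroup_destroy_wq to. */
	demo_destroy_wq = alloc_ordered_workqueue("demo_destroy", 0);
	if (!demo_destroy_wq)
		return -ENOMEM;

	/* Queued child-first; an ordered workqueue runs one item at a
	 * time in queueing order, so the parent item cannot start
	 * until the child item has completed.
	 */
	queue_work(demo_destroy_wq, &child_offline_work);
	queue_work(demo_destroy_wq, &parent_offline_work);
	return 0;
}

static void __exit ordered_wq_demo_exit(void)
{
	destroy_workqueue(demo_destroy_wq);
}

module_init(ordered_wq_demo_init);
module_exit(ordered_wq_demo_exit);
MODULE_LICENSE("GPL");
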
@@ -4845,12 +4845,16 @@ static int __init cgroup_wq_init(void)
 	/*
 	 * There isn't much point in executing destruction path in
 	 * parallel.  Good chunk is serialized with cgroup_mutex anyway.
-	 * Use 1 for @max_active.
+	 *
+	 * XXX: Must be ordered to make sure parent is offlined after
+	 * children.  The ordering requirement is for memcg where a
+	 * parent's offline may wait for a child's leading to deadlock.  In
+	 * the long term, this should be fixed from memcg side.
 	 *
 	 * We would prefer to do this in cgroup_init() above, but that
 	 * is called before init_workqueues(): so leave this until after.
 	 */
-	cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+	cgroup_destroy_wq = alloc_ordered_workqueue("cgroup_destroy", 0);
 	BUG_ON(!cgroup_destroy_wq);
 
 	/*
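
A note on the design choice, assuming the workqueue API of that era
(linux/workqueue.h around v3.10-v3.12): alloc_ordered_workqueue() asks
for an unbound, strictly ordered workqueue, whereas the old call only
capped in-flight items per CPU, so two destruction items queued on
different CPUs could still run concurrently and out of order. A rough,
hedged sketch of the difference:

/* Roughly what the new call amounts to (approximation of the
 * alloc_ordered_workqueue() macro of that era): unbound + ordered,
 * at most one in-flight work item system-wide, so destruction work
 * runs strictly in queueing order.
 */
cgroup_destroy_wq = alloc_workqueue("cgroup_destroy",
				    WQ_UNBOUND | __WQ_ORDERED, 1);

/* The old call created a per-cpu (bound) workqueue; max_active = 1
 * only limited concurrency per CPU, so a parent's offline work queued
 * on one CPU could still run before, or alongside, a child's queued
 * on another.
 */
cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);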