rescounters: add res_counter_uncharge_until()

When killing a res_counter which is a child of another counter, we need
to do

	res_counter_uncharge(child, xxx)
	res_counter_charge(parent, xxx)

This is not atomic and wastes CPU.  This patch adds
res_counter_uncharge_until(), whose uncharge propagates up the
ancestors until the specified res_counter is reached:

	res_counter_uncharge_until(child, parent, xxx)

Now the operation is atomic and efficient.

Signed-off-by: Frederic Weisbecker <fweisbec@redhat.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Ying Han <yinghan@google.com>
Cc: Glauber Costa <glommer@parallels.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author: Frederic Weisbecker, 2012-05-29 15:07:03 -07:00; committed by Linus Torvalds
commit 2bb2ba9d51
parent f9be23d6da
3 changed files with 19 additions and 2 deletions

@@ -92,6 +92,14 @@ to work with it.

 	The _locked routines imply that the res_counter->lock is taken.

+ f. void res_counter_uncharge_until
+		(struct res_counter *rc, struct res_counter *top,
+		 unsigned long val)
+
+	Almost the same as res_counter_uncharge() but propagation of the
+	uncharge stops when rc == top. This is useful when killing a
+	res_counter in a child cgroup.
+
 2.1 Other accounting routines

 There are more routines that may help you with common needs, like

@@ -135,6 +135,9 @@ int __must_check res_counter_charge_nofail(struct res_counter *counter,

 void res_counter_uncharge_locked(struct res_counter *counter, unsigned long val);
 void res_counter_uncharge(struct res_counter *counter, unsigned long val);
+void res_counter_uncharge_until(struct res_counter *counter,
+				struct res_counter *top,
+				unsigned long val);

 /**
  * res_counter_margin - calculate chargeable space of a counter
  * @cnt: the counter

@@ -94,13 +94,15 @@ void res_counter_uncharge_locked(struct res_counter *counter, unsigned long val)
 	counter->usage -= val;
 }

-void res_counter_uncharge(struct res_counter *counter, unsigned long val)
+void res_counter_uncharge_until(struct res_counter *counter,
+				struct res_counter *top,
+				unsigned long val)
 {
 	unsigned long flags;
 	struct res_counter *c;

 	local_irq_save(flags);
-	for (c = counter; c != NULL; c = c->parent) {
+	for (c = counter; c != top; c = c->parent) {
 		spin_lock(&c->lock);
 		res_counter_uncharge_locked(c, val);
 		spin_unlock(&c->lock);
@@ -108,6 +110,10 @@ void res_counter_uncharge(struct res_counter *counter, unsigned long val)
 	local_irq_restore(flags);
 }

+void res_counter_uncharge(struct res_counter *counter, unsigned long val)
+{
+	res_counter_uncharge_until(counter, NULL, val);
+}
+
 static inline unsigned long long *
 res_counter_member(struct res_counter *counter, int member)