=================================
Red-black Trees (rbtree) in Linux
=================================

:Date: January 18, 2007
:Author: Rob Landley <rob@landley.net>

What are red-black trees, and what are they for?
------------------------------------------------

Red-black trees are a type of self-balancing binary search tree, used for
storing sortable key/value data pairs.  This differs from radix trees (which
are used to efficiently store sparse arrays and thus use long integer indexes
to insert/access/delete nodes) and hash tables (which are not kept sorted to
be easily traversed in order, and must be tuned for a specific size and
hash function where rbtrees scale gracefully storing arbitrary keys).

Red-black trees are similar to AVL trees, but provide faster real-time bounded
worst case performance for insertion and deletion (at most two rotations and
three rotations, respectively, to balance the tree), with slightly slower
(but still O(log n)) lookup time.

To quote Linux Weekly News:

    There are a number of red-black trees in use in the kernel.
    The deadline and CFQ I/O schedulers employ rbtrees to
    track requests; the packet CD/DVD driver does the same.
    The high-resolution timer code uses an rbtree to organize outstanding
    timer requests.  The ext3 filesystem tracks directory entries in a
    red-black tree.  Virtual memory areas (VMAs) are tracked with red-black
    trees, as are epoll file descriptors, cryptographic keys, and network
    packets in the "hierarchical token bucket" scheduler.

This document covers use of the Linux rbtree implementation.  For more
information on the nature and implementation of Red Black Trees, see:

Linux Weekly News article on red-black trees
    http://lwn.net/Articles/184495/

Wikipedia entry on red-black trees
    http://en.wikipedia.org/wiki/Red-black_tree

Linux implementation of red-black trees
---------------------------------------

Linux's rbtree implementation lives in the file "lib/rbtree.c".  To use it,
"#include <linux/rbtree.h>".

The Linux rbtree implementation is optimized for speed, and thus has one
less layer of indirection (and better cache locality) than more traditional
tree implementations.  Instead of using pointers to separate rb_node and data
structures, each instance of struct rb_node is embedded in the data structure
it organizes.  And instead of using a comparison callback function pointer,
users are expected to write their own tree search and insert functions
which call the provided rbtree functions.  Locking is also left up to the
user of the rbtree code.

Creating a new rbtree
---------------------

Data nodes in an rbtree are structures containing a struct rb_node member::

  struct mytype {
      struct rb_node node;
      char *keystring;
  };

When dealing with a pointer to the embedded struct rb_node, the containing data
structure may be accessed with the standard container_of() macro.  In addition,
individual members may be accessed directly via rb_entry(node, type, member).

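The container_of()/rb_entry() step is plain pointer arithmetic, and can be
sketched in ordinary userspace C.  Note this is an illustrative sketch, not
the kernel's definition: the struct below uses an int where struct rb_node
would normally sit::

  #include <stddef.h>
  #include <assert.h>

  /* Userspace stand-in for the kernel's container_of() macro */
  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))

  struct mytype {
      int node;               /* stands in for struct rb_node */
      char *keystring;
  };

  int main(void)
  {
      struct mytype item = { 0, "walrus" };
      int *nodep = &item.node;        /* all the tree code ever sees */

      /* Recover the containing structure from the member pointer */
      struct mytype *back = container_of(nodep, struct mytype, node);
      assert(back == &item);
      return 0;
  }
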
At the root of each rbtree is an rb_root structure, which is initialized to be
empty via::

  struct rb_root mytree = RB_ROOT;

Searching for a value in an rbtree
----------------------------------

Writing a search function for your tree is fairly straightforward: start at the
root, compare each value, and follow the left or right branch as necessary.

Example::

  struct mytype *my_search(struct rb_root *root, char *string)
  {
      struct rb_node *node = root->rb_node;

      while (node) {
          struct mytype *data = container_of(node, struct mytype, node);
          int result;

          result = strcmp(string, data->keystring);

          if (result < 0)
              node = node->rb_left;
          else if (result > 0)
              node = node->rb_right;
          else
              return data;
      }
      return NULL;
  }

Inserting data into an rbtree
-----------------------------

Inserting data in the tree involves first searching for the place to insert the
new node, then inserting the node and rebalancing ("recoloring") the tree.

The search for insertion differs from the previous search by finding the
location of the pointer on which to graft the new node.  The new node also
needs a link to its parent node for rebalancing purposes.

Example::

  bool my_insert(struct rb_root *root, struct mytype *data)
  {
      struct rb_node **new = &(root->rb_node), *parent = NULL;

      /* Figure out where to put new node */
      while (*new) {
          struct mytype *this = container_of(*new, struct mytype, node);
          int result = strcmp(data->keystring, this->keystring);

          parent = *new;
          if (result < 0)
              new = &((*new)->rb_left);
          else if (result > 0)
              new = &((*new)->rb_right);
          else
              return false;
      }

      /* Add new node and rebalance tree. */
      rb_link_node(&data->node, parent, new);
      rb_insert_color(&data->node, root);

      return true;
  }

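A usage sketch follows.  The allocation scheme, the kstrdup() key copy, and
the error codes are assumptions for illustration, not part of the original
example; the point is that the caller owns the node until my_insert()
reports success::

  struct mytype *new = kmalloc(sizeof(*new), GFP_KERNEL);

  if (!new)
      return -ENOMEM;
  new->keystring = kstrdup("walrus", GFP_KERNEL);
  if (!new->keystring) {
      kfree(new);
      return -ENOMEM;
  }
  if (!my_insert(&mytree, new)) {
      /* Duplicate key: the node was never linked into the tree */
      kfree(new->keystring);
      kfree(new);
      return -EEXIST;
  }
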
Removing or replacing existing data in an rbtree
------------------------------------------------

To remove an existing node from a tree, call::

  void rb_erase(struct rb_node *victim, struct rb_root *tree);

Example::

  struct mytype *data = my_search(&mytree, "walrus");

  if (data) {
      rb_erase(&data->node, &mytree);
      myfree(data);
  }

To replace an existing node in a tree with a new one with the same key, call::

  void rb_replace_node(struct rb_node *old, struct rb_node *new,
                       struct rb_root *tree);

Replacing a node this way does not re-sort the tree: if the new node doesn't
have the same key as the old node, the rbtree will probably become corrupted.

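A sketch of a valid replacement.  Here 'updated' is hypothetical: it is
assumed to be a fully initialized node carrying the same "walrus" key as
the node it replaces::

  struct mytype *old = my_search(&mytree, "walrus");

  if (old) {
      rb_replace_node(&old->node, &updated->node, &mytree);
      myfree(old);
  }
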
Iterating through the elements stored in an rbtree (in sort order)
------------------------------------------------------------------

Four functions are provided for iterating through an rbtree's contents in
sorted order.  These work on arbitrary trees, and should not need to be
modified or wrapped (except for locking purposes)::

  struct rb_node *rb_first(struct rb_root *tree);
  struct rb_node *rb_last(struct rb_root *tree);
  struct rb_node *rb_next(struct rb_node *node);
  struct rb_node *rb_prev(struct rb_node *node);

To start iterating, call rb_first() or rb_last() with a pointer to the root
of the tree, which will return a pointer to the node structure contained in
the first or last element in the tree.  To continue, fetch the next or previous
node by calling rb_next() or rb_prev() on the current node.  This will return
NULL when there are no more nodes left.

The iterator functions return a pointer to the embedded struct rb_node, from
which the containing data structure may be accessed with the container_of()
macro, and individual members may be accessed directly via
rb_entry(node, type, member).

Example::

  struct rb_node *node;
  for (node = rb_first(&mytree); node; node = rb_next(node))
      printk("key=%s\n", rb_entry(node, struct mytype, node)->keystring);

Cached rbtrees
--------------

Computing the leftmost (smallest) node is quite a common task for binary
search trees, such as for traversals or users relying on the particular
order for their own logic.  To this end, users can use 'struct rb_root_cached'
to optimize O(log n) rb_first() calls to a simple pointer fetch, avoiding
potentially expensive tree iterations.  This is done at negligible runtime
overhead for maintenance, albeit with a larger memory footprint.

Similar to the rb_root structure, cached rbtrees are initialized to be
empty via::

  struct rb_root_cached mytree = RB_ROOT_CACHED;

A cached rbtree is simply a regular rb_root with an extra pointer to cache the
leftmost node.  This allows rb_root_cached to exist wherever rb_root does, and
permits augmented trees to be supported with only a few extra interfaces::

  struct rb_node *rb_first_cached(struct rb_root_cached *tree);
  void rb_insert_color_cached(struct rb_node *, struct rb_root_cached *, bool);
  void rb_erase_cached(struct rb_node *node, struct rb_root_cached *);

Both the insert and erase calls have their respective augmented-tree
counterparts::

  void rb_insert_augmented_cached(struct rb_node *node, struct rb_root_cached *,
                                  bool, struct rb_augment_callbacks *);
  void rb_erase_augmented_cached(struct rb_node *, struct rb_root_cached *,
                                 struct rb_augment_callbacks *);

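As with a regular rbtree, the user writes the descent; the extra work for a
cached tree is tracking whether the walk ever went right, since that decides
the leftmost bool passed to rb_insert_color_cached().  A sketch, reusing the
hypothetical struct mytype from earlier (my_insert_cached() is not a kernel
function)::

  bool my_insert_cached(struct rb_root_cached *root, struct mytype *data)
  {
      struct rb_node **new = &root->rb_root.rb_node, *parent = NULL;
      bool leftmost = true;

      while (*new) {
          struct mytype *this = container_of(*new, struct mytype, node);
          int result = strcmp(data->keystring, this->keystring);

          parent = *new;
          if (result < 0) {
              new = &((*new)->rb_left);
          } else if (result > 0) {
              new = &((*new)->rb_right);
              leftmost = false;       /* went right at least once */
          } else {
              return false;
          }
      }

      rb_link_node(&data->node, parent, new);
      rb_insert_color_cached(&data->node, root, leftmost);
      return true;
  }
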
Support for Augmented rbtrees
-----------------------------

An augmented rbtree is an rbtree with "some" additional data stored in
each node, where the additional data for node N must be a function of
the contents of all nodes in the subtree rooted at N.  This data can
be used to build new functionality on top of the rbtree.  Augmented rbtree
is an optional feature built on top of the basic rbtree infrastructure.
An rbtree user who wants this feature will have to call the augmentation
functions with the user provided augmentation callbacks when inserting
and erasing nodes.

C files implementing augmented rbtree manipulation must include
<linux/rbtree_augmented.h> instead of <linux/rbtree.h>.  Note that
linux/rbtree_augmented.h exposes some rbtree implementation details
you are not expected to rely on; please stick to the documented APIs
there, and do not include <linux/rbtree_augmented.h> from header files
either, so as to minimize the chances of your users accidentally relying
on such implementation details.

On insertion, the user must update the augmented information on the path
leading to the inserted node, then call rb_link_node() as usual and
rb_augment_inserted() instead of the usual rb_insert_color() call.
If rb_augment_inserted() rebalances the rbtree, it will call back into
a user provided function to update the augmented information on the
affected subtrees.

When erasing a node, the user must call rb_erase_augmented() instead of
rb_erase().  rb_erase_augmented() calls back into user provided functions
to update the augmented information on affected subtrees.

In both cases, the callbacks are provided through struct rb_augment_callbacks.
Three callbacks must be defined:

- A propagation callback, which updates the augmented value for a given
  node and its ancestors, up to a given stop point (or NULL to update
  all the way to the root).

- A copy callback, which copies the augmented value for a given subtree
  to a newly assigned subtree root.

- A tree rotation callback, which copies the augmented value for a given
  subtree to a newly assigned subtree root AND recomputes the augmented
  information for the former subtree root.

The compiled code for rb_erase_augmented() may inline the propagation and
copy callbacks, which results in a large function, so each augmented rbtree
user should have a single rb_erase_augmented() call site in order to limit
compiled code size.

Sample usage
^^^^^^^^^^^^

Interval trees are an example of augmented rb trees.  Reference -
"Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.
More details about interval trees:

A classical rbtree has a single key, and it cannot be directly used to store
interval ranges like [lo:hi] and do a quick lookup for any overlap with a new
lo:hi or to find whether there is an exact match for a new lo:hi.

However, an rbtree can be augmented to store such interval ranges in a
structured way, making it possible to do efficient lookup and exact match.

This "extra information" stored in each node is the maximum hi
(max_hi) value among all the nodes that are its descendants.  This
information can be maintained at each node just by looking at the node
and its immediate children.  And this will be used in O(log n) lookup
for lowest match (lowest start address among all possible matches)
with something like::

  struct interval_tree_node *
  interval_tree_first_match(struct rb_root *root,
                            unsigned long start, unsigned long last)
  {
      struct interval_tree_node *node;

      if (!root->rb_node)
          return NULL;
      node = rb_entry(root->rb_node, struct interval_tree_node, rb);

      while (true) {
          if (node->rb.rb_left) {
              struct interval_tree_node *left =
                  rb_entry(node->rb.rb_left,
                           struct interval_tree_node, rb);
              if (left->__subtree_last >= start) {
                  /*
                   * Some nodes in left subtree satisfy Cond2.
                   * Iterate to find the leftmost such node N.
                   * If it also satisfies Cond1, that's the match
                   * we are looking for. Otherwise, there is no
                   * matching interval as nodes to the right of N
                   * can't satisfy Cond1 either.
                   */
                  node = left;
                  continue;
              }
          }
          if (node->start <= last) {          /* Cond1 */
              if (node->last >= start)        /* Cond2 */
                  return node;        /* node is leftmost match */
              if (node->rb.rb_right) {
                  node = rb_entry(node->rb.rb_right,
                                  struct interval_tree_node, rb);
                  if (node->__subtree_last >= start)
                      continue;
              }
          }
          return NULL;        /* No match */
      }
  }

Insertion/removal are defined using the following augmented callbacks::

  static inline unsigned long
  compute_subtree_last(struct interval_tree_node *node)
  {
      unsigned long max = node->last, subtree_last;
      if (node->rb.rb_left) {
          subtree_last = rb_entry(node->rb.rb_left,
              struct interval_tree_node, rb)->__subtree_last;
          if (max < subtree_last)
              max = subtree_last;
      }
      if (node->rb.rb_right) {
          subtree_last = rb_entry(node->rb.rb_right,
              struct interval_tree_node, rb)->__subtree_last;
          if (max < subtree_last)
              max = subtree_last;
      }
      return max;
  }

  static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
  {
      while (rb != stop) {
          struct interval_tree_node *node =
              rb_entry(rb, struct interval_tree_node, rb);
          unsigned long subtree_last = compute_subtree_last(node);
          if (node->__subtree_last == subtree_last)
              break;
          node->__subtree_last = subtree_last;
          rb = rb_parent(&node->rb);
      }
  }

  static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
  {
      struct interval_tree_node *old =
          rb_entry(rb_old, struct interval_tree_node, rb);
      struct interval_tree_node *new =
          rb_entry(rb_new, struct interval_tree_node, rb);

      new->__subtree_last = old->__subtree_last;
  }

  static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
  {
      struct interval_tree_node *old =
          rb_entry(rb_old, struct interval_tree_node, rb);
      struct interval_tree_node *new =
          rb_entry(rb_new, struct interval_tree_node, rb);

      new->__subtree_last = old->__subtree_last;
      old->__subtree_last = compute_subtree_last(old);
  }

  static const struct rb_augment_callbacks augment_callbacks = {
      augment_propagate, augment_copy, augment_rotate
  };

  void interval_tree_insert(struct interval_tree_node *node,
                            struct rb_root *root)
  {
      struct rb_node **link = &root->rb_node, *rb_parent = NULL;
      unsigned long start = node->start, last = node->last;
      struct interval_tree_node *parent;

      while (*link) {
          rb_parent = *link;
          parent = rb_entry(rb_parent, struct interval_tree_node, rb);
          if (parent->__subtree_last < last)
              parent->__subtree_last = last;
          if (start < parent->start)
              link = &parent->rb.rb_left;
          else
              link = &parent->rb.rb_right;
      }

      node->__subtree_last = last;
      rb_link_node(&node->rb, rb_parent, link);
      rb_insert_augmented(&node->rb, root, &augment_callbacks);
  }

  void interval_tree_remove(struct interval_tree_node *node,
                            struct rb_root *root)
  {
      rb_erase_augmented(&node->rb, root, &augment_callbacks);
  }