#
# Makefile for some libs needed in the kernel.
#

ifdef CONFIG_FUNCTION_TRACER
ORIG_CFLAGS := $(KBUILD_CFLAGS)
KBUILD_CFLAGS = $(subst $(CC_FLAGS_FTRACE),,$(ORIG_CFLAGS))
endif

kernel: add kcov code coverage
kcov provides code coverage collection for coverage-guided fuzzing
(randomized testing). Coverage-guided fuzzing is a testing technique
that uses coverage feedback to determine new interesting inputs to a
system. A notable user-space example is AFL
(http://lcamtuf.coredump.cx/afl/). However, this technique is not
widely used for kernel testing due to missing compiler and kernel
support.
kcov does not aim to collect as much coverage as possible. It aims to
collect more or less stable coverage that is a function of syscall
inputs. To achieve this goal it does not collect coverage in soft/hard
interrupts, and instrumentation of some inherently non-deterministic
or non-interesting parts of the kernel is disabled (e.g. scheduler,
locking).
Currently there is a single coverage collection mode (tracing), but the
API anticipates additional collection modes. Initially I also
implemented a second mode which exposes coverage in a fixed-size hash
table of counters (what Quentin used in his original patch). I've
dropped the second mode for simplicity.
This patch adds the necessary support on the kernel side. The
complementary compiler support was added in gcc revision 231296.
We've used this support to build the syzkaller system call fuzzer,
which has found 90 kernel bugs in just 2 months:
https://github.com/google/syzkaller/wiki/Found-Bugs
We've also found 30+ bugs in our internal systems with syzkaller.
Another (yet unexplored) direction where kcov coverage would greatly
help is more traditional "blob mutation". For example, mounting a
random blob as a filesystem, or receiving a random blob over the wire.
Why not gcov? A typical fuzzing loop looks as follows: (1) reset
coverage, (2) execute a bit of code, (3) collect coverage, repeat. A
typical coverage can be just a dozen basic blocks (e.g. an invalid
input). In such a context gcov becomes prohibitively expensive, as the
reset/collect coverage steps depend on the total number of basic
blocks/edges in the program (in the case of the kernel it is about 2M).
The cost of kcov depends only on the number of executed basic
blocks/edges. On top of that, the kernel requires per-thread coverage
because there are always background threads and unrelated processes
that also produce coverage. With inlined gcov instrumentation,
per-thread coverage is not possible.
kcov exposes kernel PCs and control flow to user-space, which is
insecure. But debugfs should not be mapped as user accessible.
Based on a patch by Quentin Casasnovas.
[akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
[akpm@linux-foundation.org: unbreak allmodconfig]
[akpm@linux-foundation.org: follow x86 Makefile layout standards]
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: syzkaller <syzkaller@googlegroups.com>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tavis Ormandy <taviso@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Kees Cook <keescook@google.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: David Drysdale <drysdale@google.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
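
For illustration, a minimal user-space tracing loop against the debugfs
interface could look like the sketch below (ioctl numbers as in the kcov
UAPI this patch introduces; error handling omitted, and the traced
syscall is arbitrary):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
#define KCOV_ENABLE	_IO('c', 100)
#define KCOV_DISABLE	_IO('c', 101)
#define COVER_SIZE	(64 << 10)	/* buffer size, in recorded PCs */

int main(void)
{
	int fd = open("/sys/kernel/debug/kcov", O_RDWR);
	unsigned long *cover, n, i;

	ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
	cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
		     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	ioctl(fd, KCOV_ENABLE, 0);	/* start collecting for this task */
	cover[0] = 0;			/* (1) reset the recorded-PC counter */
	read(-1, NULL, 0);		/* (2) the syscall under test */
	n = cover[0];			/* (3) number of PCs collected */
	for (i = 0; i < n; i++)
		printf("0x%lx\n", cover[i + 1]);
	ioctl(fd, KCOV_DISABLE, 0);
	close(fd);
	return 0;
}
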
# These files are disabled because they produce lots of non-interesting and/or
# flaky coverage that is not a function of syscall inputs. For example,
# rbtree can be global and individual rotations don't correlate with inputs.
KCOV_INSTRUMENT_string.o := n
KCOV_INSTRUMENT_rbtree.o := n
KCOV_INSTRUMENT_list_debug.o := n
KCOV_INSTRUMENT_debugobjects.o := n
KCOV_INSTRUMENT_dynamic_debug.o := n

lib-y := ctype.o string.o vsprintf.o cmdline.o \
	 rbtree.o radix-tree.o dump_stack.o timerqueue.o \
	 idr.o int_sqrt.o extable.o \
	 sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
	 flex_proportions.o ratelimit.o show_mem.o \
	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
siphash: add cryptographically secure PRF
SipHash is a 64-bit keyed hash function that is actually a
cryptographically secure PRF, like HMAC. Except SipHash is super fast,
and is meant to be used as a hashtable keyed lookup function, or as a
general PRF for short input use cases, such as sequence numbers or RNG
chaining.
For the first usage:
There are a variety of attacks known as "hashtable poisoning" in which
an attacker forms some data such that the hash of that data will be the
same, and then proceeds to fill up all entries of a hashbucket. This is
a realistic and well-known denial-of-service vector. Currently
hashtables use jhash, which is fast but not secure, and some kind of
rotating key scheme (or none at all, which isn't good). SipHash is
meant as a replacement for jhash in these cases.
There are a number of places in the kernel that are vulnerable to
hashtable poisoning attacks, either via userspace vectors or network
vectors, and there's not a reliable mechanism inside the kernel at the
moment to fix it. The first step toward fixing these issues is actually
getting a secure primitive into the kernel for developers to use. Then
we can, bit by bit, port things over to it as deemed appropriate.
While SipHash is extremely fast for a cryptographically secure
function, it is likely a bit slower than the insecure jhash, and so
replacements will be evaluated on a case-by-case basis, based on
whether or not the difference in speed is negligible and whether or not
the current jhash usage poses a real security risk.
For the second usage:
A few places in the kernel are using MD5 or SHA1 for creating secure
sequence numbers, syn cookies, port numbers, or fast random numbers.
SipHash is a faster, more fitting, and more secure replacement for MD5
in those situations. Replacing MD5 and SHA1 with SipHash for these uses
is obvious and straightforward, and so is submitted along with this
patch series. There shouldn't be much of a debate over its efficacy.
Dozens of languages are already using this internally for their hash
tables and PRFs. Some of the BSDs already use this in their kernels.
SipHash is a widely known high-speed solution to a widely known set of
problems, and it's time we catch up.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
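
A sketch of the intended use: siphash() and siphash_key_t are the
primitives this patch adds in include/linux/siphash.h, while the
surrounding hashtable names are illustrative only:

#include <linux/siphash.h>
#include <linux/random.h>

static siphash_key_t bucket_key;	/* 128-bit secret key */

static void my_table_init(void)
{
	get_random_bytes(&bucket_key, sizeof(bucket_key));
}

static u32 my_bucket(const void *item, size_t len, u32 nbuckets)
{
	/* 64-bit PRF output, folded down to a bucket index */
	return (u32)siphash(item, len, &bucket_key) % nbuckets;
}
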
	 earlycpio.o seq_buf.o siphash.o \
	 nmi_backtrace.o nodemask.o win_minmax.o

[PATCH] Add initial implementation of klist helpers.
This klist interface provides a couple of structures that wrap around
struct list_head to provide explicit list "head" (struct klist) and
list "node" (struct klist_node) objects. For struct klist, a spinlock
is included that protects access to the actual list itself. struct
klist_node provides a pointer to the klist that owns it and a kref
reference count that indicates the number of current users of that node
in the list.
The entire point is to provide an interface for iterating over a list
that is safe and allows for modification of the list during the
iteration (e.g. insertion and removal), including modification of the
current node on the list.
It works using a 3rd object type - struct klist_iter - that is declared
and initialized before an iteration. klist_next() is used to acquire the
next element in the list. It returns NULL if there are no more items.
Internally, that routine takes the klist's lock, decrements the reference
count of the previous klist_node and increments the count of the next
klist_node. It then drops the lock and returns.
There are primitives for adding and removing nodes to/from a klist.
When deleting, klist_del() will simply decrement the reference count.
Only when the count goes to 0 is the node removed from the list.
klist_remove() will try to delete the node from the list and block
until it is actually removed. This is useful for objects (like devices)
that have been removed from the system and must be freed (but must wait
until all accessors have finished).
Signed-off-by: Patrick Mochel <mochel@digitalimplant.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
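
A sketch of the iteration pattern described above; the klist_* calls
come from the new <linux/klist.h>, while struct my_dev and its fields
are illustrative:

#include <linux/kernel.h>
#include <linux/klist.h>

struct my_dev {
	struct klist_node knode;	/* linkage owned by the klist */
	int id;				/* hypothetical payload */
};

static void walk_devices(struct klist *list)
{
	struct klist_iter iter;
	struct klist_node *n;

	klist_iter_init(list, &iter);
	while ((n = klist_next(&iter))) {
		struct my_dev *dev = container_of(n, struct my_dev, knode);
		/* The iterator's reference keeps the current node alive
		 * even if it is klist_del()'d concurrently. */
		pr_info("dev %d\n", dev->id);
	}
	klist_iter_exit(&iter);
}
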
lib-$(CONFIG_MMU) += ioremap.o
lib-$(CONFIG_SMP) += cpumask.o
lib-$(CONFIG_HAS_DMA) += dma-noop.o

lib-y += kobject.o klist.o
obj-y += lockref.o

obj-y += bcd.o div64.o sort.o parser.o debug_locks.o random32.o \
	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
	 gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \
	 bsearch.o find_bit.o llist.o memweight.o kfifo.o \
	 percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o \
	 once.o

obj-y += string_helpers.o
obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o
obj-y += hexdump.o
obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
obj-y += kstrtox.o
obj-$(CONFIG_TEST_BPF) += test_bpf.o
obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
obj-$(CONFIG_TEST_HASH) += test_hash.o test_siphash.o
|
2015-02-13 22:39:53 +00:00
|
|
|
obj-$(CONFIG_TEST_KASAN) += test_kasan.o
|
|
|
|
obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
|
|
|
|
obj-$(CONFIG_TEST_LKM) += test_module.o
|
2015-01-29 14:40:25 +00:00
|
|
|
obj-$(CONFIG_TEST_RHASHTABLE) += test_rhashtable.o
|
2015-02-13 22:39:53 +00:00
|
|
|
obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
|
2015-08-03 09:42:57 +00:00
|
|
|
obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
|
|
|
|
obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
|
2015-11-07 00:30:29 +00:00
|
|
|
obj-$(CONFIG_TEST_PRINTF) += test_printf.o
|
2016-02-19 14:24:00 +00:00
|
|
|
obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
|
2016-05-30 14:40:41 +00:00
|
|
|
obj-$(CONFIG_TEST_UUID) += test_uuid.o
|
2017-02-03 09:29:06 +00:00
|
|
|
obj-$(CONFIG_TEST_PARMAN) += test_parman.o
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
CFLAGS_kobject_uevent.o += -DDEBUG
endif

obj-$(CONFIG_DEBUG_INFO_REDUCED) += debug_info.o
CFLAGS_debug_info.o += $(call cc-option, -femit-struct-debug-detailed=any)

obj-$(CONFIG_GENERIC_IOMAP) += iomap.o
obj-$(CONFIG_GENERIC_PCI_IOMAP) += pci_iomap.o
obj-$(CONFIG_HAS_IOMEM) += iomap_copy.o devres.o
obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o
obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
obj-$(CONFIG_BTREE) += btree.o
obj-$(CONFIG_INTERVAL_TREE) += interval_tree.o

Add a generic associative array implementation.
Add a generic associative array implementation that can be used as the
container for keyrings, thereby massively increasing the capacity available
whilst also speeding up searching in keyrings that contain a lot of keys.
This may also be useful in FS-Cache for tracking cookies.
Documentation is added into Documentation/associative_array.txt
Some of the properties of the implementation are:
(1) Objects are opaque pointers. The implementation does not care where they
point (if anywhere) or what they point to (if anything).
[!] NOTE: Pointers to objects _must_ be zero in the two least significant
bits.
(2) Objects do not need to contain linkage blocks for use by the array. This
permits an object to be located in multiple arrays simultaneously.
Rather, the array is made up of metadata blocks that point to objects.
(3) Objects are labelled as being one of two types (the type is a bool value).
This information is stored in the array, but has no consequence to the
array itself or its algorithms.
(4) Objects require index keys to locate them within the array.
(5) Index keys must be unique. Inserting an object with the same key as one
already in the array will replace the old object.
(6) Index keys can be of any length and can be of different lengths.
(7) Index keys should encode the length early on, before any variation due to
length is seen.
(8) Index keys can include a hash to scatter objects throughout the array.
(9) The array can be iterated over. The objects will not necessarily come out
in key order.
(10) The array can be iterated whilst it is being modified, provided the RCU
readlock is being held by the iterator. Note, however, under these
circumstances, some objects may be seen more than once. If this is a
problem, the iterator should lock against modification. Objects will not
be missed, however, unless deleted.
(11) Objects in the array can be looked up by means of their index key.
(12) Objects can be looked up whilst the array is being modified, provided the
RCU readlock is being held by the thread doing the look up.
The implementation uses a tree of 16-pointer nodes internally that are indexed
on each level by nibbles from the index key. To improve memory efficiency,
shortcuts can be emplaced to skip over what would otherwise be a series of
single-occupancy nodes. Further, nodes pack leaf object pointers into spare
space in the node rather than making an extra branch until such time as an
object needs to be added to a full node.
Signed-off-by: David Howells <dhowells@redhat.com>
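
As a sketch of the edit-script flow: assoc_array_insert() does all the
allocation up front and returns an edit script, which
assoc_array_apply_edit() then publishes atomically to RCU readers. The
ops table and key below are illustrative stand-ins:

#include <linux/assoc_array.h>
#include <linux/err.h>

static struct assoc_array my_array;
extern const struct assoc_array_ops my_ops;	/* key chunk/compare callbacks */

static int add_object(const void *index_key, void *object)
{
	struct assoc_array_edit *edit;

	edit = assoc_array_insert(&my_array, &my_ops, index_key, object);
	if (IS_ERR(edit))
		return PTR_ERR(edit);	/* nothing was modified */
	assoc_array_apply_edit(edit);	/* commit; readers see it atomically */
	return 0;
}
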
obj-$(CONFIG_ASSOCIATIVE_ARRAY) += assoc_array.o
obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
obj-$(CONFIG_DEBUG_LIST) += list_debug.o
obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o

ifneq ($(CONFIG_HAVE_DEC_LOCK),y)
lib-y += dec_and_lock.o
endif

obj-$(CONFIG_BITREVERSE) += bitrev.o
obj-$(CONFIG_RATIONAL) += rational.o
obj-$(CONFIG_CRC_CCITT) += crc-ccitt.o
obj-$(CONFIG_CRC16) += crc16.o
obj-$(CONFIG_CRC_T10DIF)+= crc-t10dif.o
obj-$(CONFIG_CRC_ITU_T) += crc-itu-t.o
obj-$(CONFIG_CRC32) += crc32.o
obj-$(CONFIG_CRC32_SELFTEST) += crc32test.o
obj-$(CONFIG_CRC7) += crc7.o
obj-$(CONFIG_LIBCRC32C) += libcrc32c.o
obj-$(CONFIG_CRC8) += crc8.o
obj-$(CONFIG_GENERIC_ALLOCATOR) += genalloc.o

obj-$(CONFIG_842_COMPRESS) += 842/
obj-$(CONFIG_842_DECOMPRESS) += 842/
obj-$(CONFIG_ZLIB_INFLATE) += zlib_inflate/
obj-$(CONFIG_ZLIB_DEFLATE) += zlib_deflate/
obj-$(CONFIG_REED_SOLOMON) += reed_solomon/
lib: add shared BCH ECC library
This is a new software BCH encoding/decoding library, similar to the
shared Reed-Solomon library.
Binary BCH (Bose-Chaudhuri-Hocquenghem) codes are widely used to
correct errors in NAND flash devices requiring more than 1-bit ecc
correction; they are generally better suited for NAND flash than RS
codes because NAND bit errors do not occur in bursts. Latest SLC NAND
devices typically require at least 4-bit ecc protection per 512-byte
block.
This library provides software encoding/decoding, but may also be used
with ASIC/SoC hardware BCH engines to perform error correction. It is
currently being used for this purpose on an OMAP3630 board (4bit/8bit
HW BCH). It has also been used to decode raw dumps of NAND devices with
on-die BCH ecc engines (e.g. Micron 4bit ecc SLC devices).
Latest NAND devices (including SLC) can exhibit high error rates
(typically a dozen or more bitflips per hour during stress tests); in
order to minimize the performance impact of error correction, this
library implements recently developed algorithms for fast polynomial
root finding (see bch.c header for details) instead of the traditional
exhaustive Chien root search; a few performance figures are provided
below:
Platform: arm926ejs @ 468 MHz, 32 KiB icache, 16 KiB dcache
BCH ecc : 4-bit per 512 bytes
Encoding average throughput: 250 Mbits/s
Error correction time (compared with Chien search):
         average   worst     average (Chien)  worst (Chien)
  ---------------------------------------------------------
  1 bit    8.5 µs   11 µs        200 µs          383 µs
  2 bit    9.7 µs   12.5 µs      477 µs          728 µs
  3 bit   18.1 µs   20.6 µs      758 µs         1010 µs
  4 bit   19.5 µs   23 µs       1028 µs         1280 µs
In the above figures, "worst" is meant in terms of error pattern, not
in terms of cache miss / page fault effects (not taken into account
here).
The library has been extensively tested on the following platforms:
x86, x86_64, arm926ejs, omap3630, qemu-ppc64, qemu-mips.
Signed-off-by: Ivan Djelic <ivan.djelic@parrot.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
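
A sketch of correcting one 512-byte sector in software. The
init_bch()/encode_bch()/decode_bch()/free_bch() calls are the library
entry points from <linux/bch.h>; the parameters (Galois field order
m=13, correction strength t=4) and buffer sizes are illustrative:

#include <linux/bch.h>
#include <linux/errno.h>

#define SECTOR	512

/* bch = init_bch(13, 4, 0) once at setup; free_bch(bch) at teardown. */
static int correct_sector(struct bch_control *bch, uint8_t *data,
			  uint8_t *read_ecc)
{
	uint8_t calc_ecc[7] = {};	/* m*t = 13*4 = 52 bits of ECC */
	unsigned int errloc[4];
	int i, n;

	encode_bch(bch, data, SECTOR, calc_ecc);
	n = decode_bch(bch, data, SECTOR, read_ecc, calc_ecc, NULL, errloc);
	if (n < 0)
		return n;		/* uncorrectable */
	for (i = 0; i < n; i++)		/* flip the located bad bits */
		if (errloc[i] < 8 * SECTOR)
			data[errloc[i] >> 3] ^= 1 << (errloc[i] & 7);
	return 0;
}
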
obj-$(CONFIG_BCH) += bch.o
obj-$(CONFIG_LZO_COMPRESS) += lzo/
obj-$(CONFIG_LZO_DECOMPRESS) += lzo/
obj-$(CONFIG_LZ4_COMPRESS) += lz4/
obj-$(CONFIG_LZ4HC_COMPRESS) += lz4/
obj-$(CONFIG_LZ4_DECOMPRESS) += lz4/
obj-$(CONFIG_XZ_DEC) += xz/
obj-$(CONFIG_RAID6_PQ) += raid6/

lib-$(CONFIG_DECOMPRESS_GZIP) += decompress_inflate.o
lib-$(CONFIG_DECOMPRESS_BZIP2) += decompress_bunzip2.o
lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
decompressors: add boot-time XZ support
This implements the API defined in <linux/decompress/generic.h> which is
used for kernel, initramfs, and initrd decompression. This patch together
with the first patch is enough for XZ-compressed initramfs and initrd;
XZ-compressed kernel will need arch-specific changes.
The buffering requirements described in decompress_unxz.c are stricter
than with gzip, so the relevant changes should be done to the
arch-specific code when adding support for XZ-compressed kernel.
Similarly, the heap size in arch-specific pre-boot code may need to be
increased (30 KiB is enough).
The XZ decompressor needs memmove(), memeq() (memcmp() == 0), and
memzero() (memset(ptr, 0, size)), which aren't available in all
arch-specific pre-boot environments. I'm including simple versions in
decompress_unxz.c, but a cleaner solution would naturally be nicer.
Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alain Knaff <alain@knaff.lu>
Cc: Albin Tonnerre <albin.tonnerre@free-electrons.com>
Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
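
For reference, the contract from <linux/decompress/generic.h> that
unxz() implements is sketched below; the exact integer types have
varied across kernel versions, so treat this as illustrative:

/* Either (inbuf, len) describes the whole compressed input in memory,
 * or fill() is used to pull in more data and flush() to push out
 * decompressed data. *posp, when non-NULL, receives the amount of
 * input consumed, and error() reports fatal problems. Returns 0 on
 * success. */
typedef int (*decompress_fn)(unsigned char *inbuf, long len,
			     long (*fill)(void *buf, unsigned long size),
			     long (*flush)(void *buf, unsigned long size),
			     unsigned char *outbuf, long *posp,
			     void (*error)(char *x));
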
lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o

obj-$(CONFIG_TEXTSEARCH) += textsearch.o
[LIB]: Knuth-Morris-Pratt textsearch algorithm
Implements a linear-time string-matching algorithm due to Knuth,
Morris, and Pratt [1]. Their algorithm avoids the explicit
computation of the transition function DELTA altogether. Its
matching time is O(n), for n being length(text), using just an
auxiliary function PI[1..m], for m being length(pattern),
precomputed from the pattern in time O(m). The array PI allows
the transition function DELTA to be computed efficiently
"on the fly" as needed. Roughly speaking, for any state
"q" = 0,1,...,m and any character "a" in SIGMA, the value
PI["q"] contains the information that is independent of "a" and
is needed to compute DELTA("q", "a") [2]. Since the array PI
has only m entries, whereas DELTA has O(m|SIGMA|) entries, we
save a factor of |SIGMA| in the preprocessing time by computing
PI rather than DELTA.
[1] Cormen, Leiserson, Rivest, Stein
Introduction to Algorithms, 2nd Edition, MIT Press
[2] See finite automata theory
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
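
An illustrative user-space rendering of the PI precomputation described
above (the in-kernel version lives in lib/ts_kmp.c; names here are for
exposition only):

#include <stddef.h>

/* pi[q] = length of the longest proper prefix of pattern[0..q]
 * that is also a suffix of it. */
static void compute_prefix_tbl(const unsigned char *pattern, size_t m,
			       size_t *pi)
{
	size_t k = 0, q;

	pi[0] = 0;
	for (q = 1; q < m; q++) {
		while (k > 0 && pattern[k] != pattern[q])
			k = pi[k - 1];	/* fall back along shorter borders */
		if (pattern[k] == pattern[q])
			k++;
		pi[q] = k;
	}
}
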
obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o

obj-$(CONFIG_SMP) += percpu_counter.o
obj-$(CONFIG_AUDIT_GENERIC) += audit.o
obj-$(CONFIG_AUDIT_COMPAT_GENERIC) += compat_audit.o

obj-$(CONFIG_SWIOTLB) += swiotlb.o
obj-$(CONFIG_IOMMU_HELPER) += iommu-helper.o iommu-common.o
obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o
obj-$(CONFIG_NOTIFIER_ERROR_INJECTION) += notifier-error-inject.o
obj-$(CONFIG_PM_NOTIFIER_ERROR_INJECT) += pm-notifier-error-inject.o
obj-$(CONFIG_NETDEV_NOTIFIER_ERROR_INJECT) += netdev-notifier-error-inject.o
obj-$(CONFIG_MEMORY_NOTIFIER_ERROR_INJECT) += memory-notifier-error-inject.o
obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
	of-reconfig-notifier-error-inject.o

[PATCH] Generic BUG implementation
This patch adds common handling for kernel BUGs, for use by
architectures as they wish. The code is derived from arch/powerpc.
The advantages of having common BUG handling are:
- consistent BUG reporting across architectures
- shared implementation of out-of-line file/line data
- implement CONFIG_DEBUG_BUGVERBOSE consistently
This means that the inline impact of BUG is just the illegal
instruction itself, which is an improvement for i386 and x86-64.
A BUG is represented in the instruction stream as an illegal
instruction, which has file/line information associated with it. This
extra information is stored in the __bug_table section in the ELF file.
When the kernel gets an illegal instruction, it first confirms it might
possibly be from a BUG (ie, in kernel mode, the right illegal
instruction). It then calls report_bug(). This searches __bug_table for
a matching instruction pointer, and if found, prints the corresponding
file/line information. If report_bug() determines that it wasn't a BUG
which caused the trap, it returns BUG_TRAP_TYPE_NONE.
Some architectures (powerpc) implement WARN using the same mechanism;
if the illegal instruction was the result of a WARN, then report_bug()
returns BUG_TRAP_TYPE_WARN; otherwise it returns BUG_TRAP_TYPE_BUG.
lib/bug.c keeps a list of loaded modules which can be searched for
__bug_table entries. The architecture must call
module_bug_finalize()/module_bug_cleanup() from its corresponding
module_finalize/cleanup functions.
Unsetting CONFIG_DEBUG_BUGVERBOSE will reduce the kernel size by some
amount. At the very least, filename and line information will not be
recorded for each BUG, but architectures may decide to store no extra
information per BUG at all.
Unfortunately, gcc doesn't have a general way to mark an asm() as
noreturn, so architectures will generally have to include an infinite
loop (or similar) in the BUG code, so that gcc knows execution won't
continue beyond that point. gcc does have a __builtin_trap() operator
which may be useful to achieve the same effect; unfortunately it cannot
be used to actually implement the BUG itself, because there's no way to
get the instruction's address for use in generating the __bug_table
entry.
[randy.dunlap@oracle.com: Handle BUG=n, GENERIC_BUG=n to prevent build errors]
[bunk@stusta.de: include/linux/bug.h must always #include <linux/module.h>]
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Hugh Dickens <hugh@veritas.com>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
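
A sketch of how an architecture's illegal-instruction trap handler
consults this machinery; arch_die() and the instruction-skip step are
hypothetical arch-specific stand-ins, only report_bug() and the
BUG_TRAP_TYPE_* values come from <linux/bug.h>:

#include <linux/bug.h>
#include <linux/ptrace.h>

extern void arch_die(const char *msg, struct pt_regs *regs);	/* hypothetical */

static void handle_invalid_op(struct pt_regs *regs, unsigned long addr)
{
	switch (report_bug(addr, regs)) {
	case BUG_TRAP_TYPE_WARN:	/* a WARN: report and resume */
		/* arch code would advance past the trapping instruction */
		return;
	case BUG_TRAP_TYPE_BUG:		/* a real BUG: oops and kill */
		arch_die("kernel BUG", regs);
		break;
	case BUG_TRAP_TYPE_NONE:	/* not ours: fall back to the */
		break;			/* normal illegal-op path */
	}
}
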
lib-$(CONFIG_GENERIC_BUG) += bug.o

obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o

obj-$(CONFIG_DYNAMIC_DEBUG) += dynamic_debug.o
driver core: basic infrastructure for per-module dynamic debug messages
Base infrastructure to enable per-module debug messages.
I've introduced CONFIG_DYNAMIC_PRINTK_DEBUG, which when enabled
centralizes control of debugging statements on a per-module basis in
one /proc file, currently, <debugfs>/dynamic_printk/modules. When
CONFIG_DYNAMIC_PRINTK_DEBUG is not set, debugging statements can still
be enabled as before, often by defining 'DEBUG' for the proper
compilation unit. Thus, this patch set has no effect when
CONFIG_DYNAMIC_PRINTK_DEBUG is not set.
The infrastructure currently ties into all pr_debug() and dev_dbg()
calls. That is, if CONFIG_DYNAMIC_PRINTK_DEBUG is set, all pr_debug()
and dev_dbg() calls can be dynamically enabled/disabled on a per-module
basis.
Future plans include extending this functionality to subsystems that
define their own debug levels and flags.
Usage:
Dynamic debugging is controlled by the debugfs file,
<debugfs>/dynamic_printk/modules. This file contains a list of the
modules that can be enabled. The format of the file is as follows:
<module_name> <enabled=0/1>
.
.
.
<module_name> : Name of the module in which the debug call resides
<enabled=0/1> : whether the messages are enabled or not
For example:
snd_hda_intel enabled=0
fixup enabled=1
driver enabled=0
Enable a module:
$echo "set enabled=1 <module_name>" > dynamic_printk/modules
Disable a module:
$echo "set enabled=0 <module_name>" > dynamic_printk/modules
Enable all modules:
$echo "set enabled=1 all" > dynamic_printk/modules
Disable all modules:
$echo "set enabled=0 all" > dynamic_printk/modules
Finally, passing "dynamic_printk" at the command line enables debugging
for all modules. This mode can be turned off via the above disable
command.
[gkh: minor cleanups and tweaks to make the build work quietly]
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
obj-$(CONFIG_NLATTR) += nlattr.o
obj-$(CONFIG_LRU_CACHE) += lru_cache.o
obj-$(CONFIG_DMA_API_DEBUG) += dma-debug.o

obj-$(CONFIG_GENERIC_CSUM) += checksum.o

obj-$(CONFIG_GENERIC_ATOMIC64) += atomic64.o

obj-$(CONFIG_ATOMIC64_SELFTEST) += atomic64_test.o

obj-$(CONFIG_CPU_RMAP) += cpu_rmap.o

obj-$(CONFIG_CORDIC) += cordic.o

dql: Dynamic queue limits
Implementation of dynamic queue limits (dql). This is a library which
allows a queue limit to be dynamically managed. The goal of dql is
to set the queue limit, the number of objects allowed in the queue, to
the minimum that avoids starving the queue.
dql would be used with a queue which has these properties:
1) Objects are queued up to some limit which can be expressed as a
count of objects.
2) Periodically a completion process executes which retires consumed
objects.
3) Starvation occurs when the limit has been reached, all queued data
has actually been consumed, but completion processing has not yet run,
so queuing new data is blocked.
4) Minimizing the amount of queued data is desirable.
A canonical example of such a queue would be a NIC HW transmit queue.
The queue limit is dynamic; it will increase or decrease over time
depending on the workload. The queue limit is recalculated each time
completion processing is done. Increases occur when the queue is
starved and can exponentially increase over successive intervals.
Decreases occur when more data is being maintained in the queue than
needed to prevent starvation. The number of extra objects, or "slack",
is measured over successive intervals, and to avoid hysteresis the
limit is only reduced by the minimum slack seen over a configurable
time period.
The dql API provides routines to manage the queue (see the sketch after
this list):
- dql_init is called to initialize the dql structure
- dql_reset is called to reset dynamic values
- dql_queued called when objects are being enqueued
- dql_avail returns availability in the queue
- dql_completed is called when objects have been consumed in the queue
Configuration consists of:
- max_limit, maximum limit
- min_limit, minimum limit
- slack_hold_time, time to measure instances of slack before reducing
queue limit
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
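
A minimal sketch of the intended call pattern in a NIC-like transmit
path; struct tx_queue and its callers are illustrative, the dql_*
routines come from <linux/dynamic_queue_limits.h>:

#include <linux/types.h>
#include <linux/dynamic_queue_limits.h>

struct tx_queue {
	struct dql dql;
	/* ring, locks, ... */
};

static bool tx_can_queue(struct tx_queue *txq)
{
	return dql_avail(&txq->dql) >= 0;	/* negative: stop queuing */
}

static void tx_enqueue(struct tx_queue *txq, unsigned int objs)
{
	dql_queued(&txq->dql, objs);	/* account newly queued objects */
}

static void tx_complete(struct tx_queue *txq, unsigned int objs)
{
	/* retire consumed objects; recalculates the limit from the
	 * starvation/slack observed over the last interval */
	dql_completed(&txq->dql, objs);
}
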
obj-$(CONFIG_DQL) += dynamic_queue_limits.o

obj-$(CONFIG_GLOB) += glob.o

obj-$(CONFIG_MPILIB) += mpi/
obj-$(CONFIG_SIGNATURE) += digsig.o
lib-$(CONFIG_CLZ_TAB) += clz_tab.o

obj-$(CONFIG_DDR) += jedec_ddr_data.o

obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o

obj-$(CONFIG_GENERIC_NET_UTILS) += net_utils.o

lib: scatterlist: add sg splitting function
Sometimes a scatter-gather list has to be split into several chunks, or
sub scatter lists. This happens for example if a scatter list will be
handled by multiple DMA channels, each one filling a part of it.
A concrete example comes with the media V4L2 API, where the scatter
list is allocated from userspace to hold an image, regardless of the
knowledge of how many DMAs will fill it:
- in a simple RGB565 case, one DMA will pump data from the camera ISP
to memory
- in the trickier YUV422 case, 3 DMAs will pump data from the camera
ISP pipes, one for pipe Y, one for pipe U and one for pipe V
For these cases, it is necessary to split the original scatter list
into multiple scatter lists, which is the purpose of this patch.
The guarantees that are required for this patch are:
- the intersection of spans of any couple of resulting scatter lists is
empty.
- the union of spans of all resulting scatter lists is a subrange of
the span of the original scatter list.
- streaming DMA API operations (mapping, unmapping) should not happen
on both the resulting and the original scatter lists; it's either
the former or the latter.
- the caller is responsible for calling kfree() on the resulting
scatter lists.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Jens Axboe <axboe@fb.com>
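
A sketch of splitting one mapped scatterlist into three sub-lists (e.g.
Y/U/V planes as in the changelog); the wrapper and its sizes are
illustrative, sg_split() itself is the new entry point declared in
<linux/scatterlist.h> under CONFIG_SG_SPLIT:

#include <linux/scatterlist.h>
#include <linux/slab.h>

static int split_yuv(struct scatterlist *sgl, int mapped_nents,
		     size_t y_len, size_t u_len, size_t v_len,
		     struct scatterlist *out[3], int out_nents[3])
{
	const size_t sizes[3] = { y_len, u_len, v_len };

	/* skip = 0: start splitting at the beginning of the input list */
	return sg_split(sgl, mapped_nents, 0, 3, sizes, out, out_nents,
			GFP_KERNEL);
}
/* The caller later kfree()s each out[i], as the changelog requires. */
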
obj-$(CONFIG_SG_SPLIT) += sg_split.o
obj-$(CONFIG_SG_POOL) += sg_pool.o

lib: add support for stmp-style devices
MX23/28 use IP cores which follow a register layout I have first seen
on STMP3xxx SoCs. In this layout, every register actually has four u32
(see the sketch below):
1.) to store a value directly
2.) a SET register where every 1-bit sets the corresponding bit,
others are unaffected
3.) same with a CLR register
4.) same with a TOG (toggle) register
Also, the 2 MSBs in register 0 are always the same and can be used to
reset the IP core.
All this is strictly speaking not mach-specific (but IP core specific)
and, thus, doesn't need to be in mach-mxs/include. At least mx6 also
uses IP cores following this stmp-style. So:
Introduce a stmp-style device, put the code and defines for that in a
public place (lib/), and let drivers for stmp-style devices select that
code.
To avoid regressions and ease reviewing, the actual code is simply
copied from mach-mxs. It definitely wants updates, but those need a
separate patch series. Voila, mach dependency gone, reusable code
introduced. Note that I didn't remove the duplicated code from mach-mxs
yet; first the drivers have to be converted.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Dong Aisheng <dong.aisheng@linaro.org>
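
A sketch of the register banks described above; the offset macros and
stmp_reset_block() come from <linux/stmp_device.h>, while "base" and
the bit being manipulated are illustrative:

#include <linux/io.h>
#include <linux/stmp_device.h>

static int init_block(void __iomem *base)
{
	/* Each register has SET/CLR/TOG shadows at fixed offsets: */
	writel(1 << 0, base + STMP_OFFSET_REG_SET);	/* set bit 0 */
	writel(1 << 0, base + STMP_OFFSET_REG_CLR);	/* clear bit 0 */
	writel(1 << 0, base + STMP_OFFSET_REG_TOG);	/* toggle bit 0 */

	/* Soft-reset the IP core via the MSBs of register 0: */
	return stmp_reset_block(base);
}
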
obj-$(CONFIG_STMP_DEVICE) += stmp_device.o
obj-$(CONFIG_IRQ_POLL) += irq_poll.o

obj-$(CONFIG_STACKDEPOT) += stackdepot.o
KASAN_SANITIZE_stackdepot.o := n
KCOV_INSTRUMENT_stackdepot.o := n

libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o \
	       fdt_empty_tree.o
$(foreach file, $(libfdt_files), \
	$(eval CFLAGS_$(file) = -I$(src)/../scripts/dtc/libfdt))
lib-$(CONFIG_LIBFDT) += $(libfdt_files)

obj-$(CONFIG_RBTREE_TEST) += rbtree_test.o
rbtree: add prio tree and interval tree tests
Patch 1 implements support for interval trees, on top of the augmented
rbtree API. It also adds synthetic tests to compare the performance of
interval trees vs prio trees. The short answer is that interval trees
are slightly faster (~25%) on insert/erase, and much faster (~2.4 - 3x)
on search. It is debatable how realistic the synthetic test is, and I
have not made such measurements yet, but my impression is that interval
trees would still come out faster.
Patch 2 uses a preprocessor template to make the interval tree generic,
and uses it as a replacement for the vma prio_tree.
Patch 3 takes the other prio_tree user, kmemleak, and converts it to
use a basic rbtree. We don't actually need the augmented rbtree support
here because the intervals are always non-overlapping.
Patch 4 removes the now-unused prio tree library.
Patch 5 proposes an additional optimization to rb_erase_augmented, now
providing it as an inline function so that the augmented callbacks can
be inlined in. This provides an additional 5-10% performance
improvement for the interval tree insert/erase benchmark. There is a
maintenance cost as it exposes augmented rbtree users to some of the
rbtree library internals; however I think this cost shouldn't be too
high as I expect the augmented rbtree will always have much fewer users
than the base rbtree.
I should probably add a quick summary of why I think it makes sense to
replace prio trees with augmented rbtree based interval trees now. One
of the drivers is that we need augmented rbtrees for Rik's vma gap
finding code, and once you have them, it just makes sense to use them
for interval trees as well, as this is the simpler and better-known
algorithm. prio trees, in comparison, seem *too* clever: they impose an
additional 'heap' constraint on the tree, which they use to guarantee a
faster worst-case complexity of O(k+log N) for stabbing queries in a
well-balanced prio tree, vs O(k*log N) for interval trees (where
k=number of matches, N=number of intervals). Now this sounds great, but
in practice prio trees don't realize this theoretical benefit. First,
the additional constraint makes them harder to update, so that the
kernel implementation has to simplify things by balancing them like a
radix tree, which is not always ideal. Second, the fact that there are
both index and heap properties makes both tree manipulation and search
more complex, which results in a higher multiplicative time constant.
As it turns out, the simple interval tree algorithm ends up running
faster than the more clever prio tree.
This patch:
Add two test modules:
- prio_tree_test measures the performance of lib/prio_tree.c, both for
insertion/removal and for stabbing searches
- interval_tree_test measures the performance of a library of
equivalent functionality, built using the augmented rbtree support.
In order to support the second test module, lib/interval_tree.c is
introduced. It is kept separate from the interval_tree_test main file
for two reasons: first we don't want to provide an unfair advantage
over prio_tree_test by having everything in a single compilation unit,
and second there is the possibility that the interval tree
functionality could get some non-test users in kernel over time.
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
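
A sketch of a stabbing query with the interval tree this series
introduces (lib/interval_tree.c, nodes indexed by the closed range
[start, last]); the reporting callback is illustrative:

#include <linux/interval_tree.h>
#include <linux/printk.h>

static void report_overlaps(struct rb_root *root,
			    unsigned long start, unsigned long last)
{
	struct interval_tree_node *node;

	/* visits every stored interval overlapping [start, last] */
	for (node = interval_tree_iter_first(root, start, last);
	     node;
	     node = interval_tree_iter_next(node, start, last))
		pr_info("overlap: [%lu, %lu]\n", node->start, node->last);
}
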
obj-$(CONFIG_INTERVAL_TREE_TEST) += interval_tree_test.o

obj-$(CONFIG_PERCPU_TEST) += percpu_test.o

obj-$(CONFIG_ASN1) += asn1_decoder.o

obj-$(CONFIG_FONT_SUPPORT) += fonts/

obj-$(CONFIG_PRIME_NUMBERS) += prime_numbers.o

hostprogs-y := gen_crc32table
clean-files := crc32table.h

$(obj)/crc32.o: $(obj)/crc32table.h

quiet_cmd_crc32 = GEN     $@
      cmd_crc32 = $< > $@

$(obj)/crc32table.h: $(obj)/gen_crc32table
	$(call cmd,crc32)

#
# Build a fast OID lookup registry from include/linux/oid_registry.h
#
obj-$(CONFIG_OID_REGISTRY) += oid_registry.o

$(obj)/oid_registry.o: $(obj)/oid_registry_data.c

$(obj)/oid_registry_data.c: $(srctree)/include/linux/oid_registry.h \
			    $(src)/build_OID_registry
	$(call cmd,build_OID_registry)

quiet_cmd_build_OID_registry = GEN     $@
      cmd_build_OID_registry = perl $(srctree)/$(src)/build_OID_registry $< $@

clean-files += oid_registry_data.c

obj-$(CONFIG_UCS2_STRING) += ucs2_string.o

obj-$(CONFIG_UBSAN) += ubsan.o

UBSAN_SANITIZE_ubsan.o := n

obj-$(CONFIG_SBITMAP) += sbitmap.o

obj-$(CONFIG_PARMAN) += parman.o