mirror of https://github.com/torvalds/linux.git
commit c640868491
Charlie Jenkins <charlie@rivosinc.com> says:

Each architecture generally implements fine-tuned checksum functions to leverage the instruction set. This series adds the main checksum functions that are used in networking.

Tested on QEMU, this series allows the CHECKSUM_KUNIT tests to complete an average of 50.9% faster. It makes heavy use of the Zbb extension via alternatives patching.

To test this series, enable the configs for KUNIT, then CHECKSUM_KUNIT.

I have attempted to make these functions as optimal as possible, but I have not run anything on actual riscv hardware. My performance testing has been limited to inspecting the assembly, running the algorithms on x86 hardware, and running in QEMU.

ip_fast_csum is a relatively small function, so even though it is possible to read 64 bits at a time on compatible hardware, the bottleneck becomes the cleanup and setup code, and loading 32 bits at a time is actually faster.

* b4-shazam-merge:
  kunit: Add tests for csum_ipv6_magic and ip_fast_csum
  riscv: Add checksum library
  riscv: Add checksum header
  riscv: Add static key for misaligned accesses
  asm-generic: Improve csum_fold

Link: https://lore.kernel.org/r/20240108-optimize_checksum-v15-0-1c50de5f2167@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
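The folding trick named by the "asm-generic: Improve csum_fold" patch, and the 32-bit-at-a-time summing described for ip_fast_csum, are easier to see in portable C. The sketch below is hypothetical and is not the series' RISC-V assembly: ip_fast_csum_sketch is a name invented here, the uint64_t accumulator is an illustrative choice, and the header is assumed to be 4-byte aligned.

#include <stdint.h>

/* Rotate right by n bits (n assumed in 1..31 here). */
static inline uint32_t ror32(uint32_t x, unsigned int n)
{
        return (x >> n) | (x << (32 - n));
}

/*
 * Fold a 32-bit one's-complement accumulator into the final 16-bit
 * checksum.  ~sum - ror32(sum, 16) adds the two 16-bit halves with
 * end-around carry and complements the result in one pass; this is
 * the trick behind the "asm-generic: Improve csum_fold" patch.
 */
static inline uint16_t csum_fold(uint32_t sum)
{
        return (uint16_t)((~sum - ror32(sum, 16)) >> 16);
}

/*
 * Hypothetical sketch of ip_fast_csum (not the kernel's RISC-V
 * implementation): sum the IPv4 header one 32-bit word at a time,
 * letting carries pile up in a 64-bit accumulator, then fold.
 * ihl is the header length in 32-bit words (>= 5); the header is
 * assumed 4-byte aligned.
 */
static uint16_t ip_fast_csum_sketch(const void *iph, unsigned int ihl)
{
        const uint32_t *p = iph;
        uint64_t sum = 0;

        while (ihl--)
                sum += *p++;

        sum = (sum & 0xffffffff) + (sum >> 32);  /* fold 64 -> 33 bits */
        sum = (sum & 0xffffffff) + (sum >> 32);  /* absorb final carry */
        return csum_fold((uint32_t)sum);
}

Per RFC 1071, the one's-complement sum may be accumulated in any word size and folded afterward, which is why summing 32-bit words here yields the same 16-bit result. The in-kernel version reaches that result with hand-tuned assembly, patching in Zbb instructions through the kernel's alternatives mechanism when the extension is available.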
Makefile: 20 lines, 526 B
# SPDX-License-Identifier: GPL-2.0-only
lib-y += delay.o
lib-y += memcpy.o
lib-y += memset.o
lib-y += memmove.o
lib-y += strcmp.o
lib-y += strlen.o
lib-y += strncmp.o
lib-y += csum.o
ifeq ($(CONFIG_MMU), y)
lib-$(CONFIG_RISCV_ISA_V) += uaccess_vector.o
endif
lib-$(CONFIG_MMU) += uaccess.o
lib-$(CONFIG_64BIT) += tishift.o
lib-$(CONFIG_RISCV_ISA_ZICBOZ) += clear_page.o

obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o

lib-$(CONFIG_RISCV_ISA_V) += xor.o
lib-$(CONFIG_RISCV_ISA_V) += riscv_v_helpers.o