Commit Graph

1042671 Commits

Author SHA1 Message Date
Kumar Kartikeya Dwivedi
3f19956010 samples: bpf: Convert xdp_monitor_kern.o to XDP samples helper
We already moved all the functionality it provided into the XDP samples helper
userspace and kernel BPF objects, so just delete the unneeded code.

We also add BPF skeleton generation and compilation using clang -target bpf
for files ending with the .bpf.c suffix (to denote that they use vmlinux.h).

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-14-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
384b6b3bbf samples: bpf: Add vmlinux.h generation support
Also, take this opportunity to depend on the in-tree bpftool, so that we can
use its static linking support in subsequent commits for the XDP samples BPF
helper object.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-13-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
af93d58c27 samples: bpf: Add devmap_xmit tracepoint statistics support
This adds support for retrieval and printing for devmap_xmit total and
mutli mode tracepoint. For multi mode, we keep a hash map entry for each
redirection stream, such that we can dynamically add and remove entries
on output.

The from_match and to_match will be set by individual samples when
setting up the XDP program on these devices.

The multi mode tracepoint is also handy for xdp_redirect_map_multi,
where up to 32 devices can be specified.

Also add the samples_init_pre_load macro to finally set up the resized maps
and mmap them in place for low-overhead stats retrieval.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-12-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
5f116212f4 samples: bpf: Add BPF support for devmap_xmit tracepoint
This adds support for the devmap_xmit tracepoint, and its multi device
variant that can be used to obtain streams for each individual
net_device to net_device redirection. This is useful for decomposing
total xmit stats in xdp_monitor.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-11-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
d771e21750 samples: bpf: Add cpumap tracepoint statistics support
This consolidates retrieval and printing into the XDP sample helper. For
the kthread stats, it expands xdp_stats separately with its own per-CPU
stats. For cpumap enqueue, we display FROM->TO stats, also with their
per-CPU stats.

The help output explains the various aspects of the output in detail.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-10-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
0cf3c2fc4b samples: bpf: Add BPF support for cpumap tracepoints
These are invoked in two places: when the XDP frame or SKB (for generic
XDP) is enqueued to the ptr_ring (cpumap_enqueue), and when the kthread
processes the frame after invoking the CPUMAP program for it (returning
stats for the batch).

We use cpumap_map_id to filter on the map_id as a way to avoid printing
incorrect stats for parallel sessions of xdp_redirect_cpu.
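
A minimal BPF-side sketch of this filtering, assuming the xdp_cpumap_enqueue
tracepoint layout (field and program names are illustrative, not copied from
the actual sample sources):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	const volatile __u32 cpumap_map_id; /* set by userspace before load */

	/* Hypothetical tracepoint context; the real samples rely on vmlinux.h. */
	struct cpumap_enqueue_ctx {
		__u64 pad;              /* common tracepoint fields */
		int map_id;
		unsigned int act;
		int cpu;
		unsigned int drops;
		unsigned int processed;
		int to_cpu;
	};

	SEC("tracepoint/xdp/xdp_cpumap_enqueue")
	int tp_cpumap_enqueue(struct cpumap_enqueue_ctx *ctx)
	{
		if (ctx->map_id != cpumap_map_id)
			return 0;       /* stats belong to another session */
		/* ... update per-CPU enqueue counters here ... */
		return 0;
	}

	char _license[] SEC("license") = "GPL";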

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-9-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
82c450803a samples: bpf: Add xdp_exception tracepoint statistics support
This implements the retrieval and printing, as well as the help output.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-8-memxor@gmail.com
2021-08-24 14:48:41 -07:00
Kumar Kartikeya Dwivedi
451588764e samples: bpf: Add BPF support for xdp_exception tracepoint
This allows us to store stats for each XDP action, including their per-CPU
counts. Consolidating this here allows all redirect samples to detect
xdp_exception events.
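
A hedged BPF-side sketch of per-action, per-CPU counting (the map layout and
context field names are illustrative, not the sample's actual ones):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	struct xdp_exception_ctx {
		__u64 pad;      /* common tracepoint fields */
		int prog_id;
		__u32 act;
		int ifindex;
	};

	struct {
		__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
		__uint(max_entries, 5); /* XDP_ABORTED .. XDP_REDIRECT */
		__type(key, __u32);
		__type(value, __u64);
	} exception_cnt SEC(".maps");

	SEC("tracepoint/xdp/xdp_exception")
	int tp_xdp_exception(struct xdp_exception_ctx *ctx)
	{
		__u32 key = ctx->act;
		__u64 *cnt;

		if (key >= 5)
			return 0;
		cnt = bpf_map_lookup_elem(&exception_cnt, &key);
		if (cnt)
			(*cnt)++; /* per-CPU slot, a plain increment is fine */
		return 0;
	}

	char _license[] SEC("license") = "GPL";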

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-7-memxor@gmail.com
2021-08-24 14:48:40 -07:00
Kumar Kartikeya Dwivedi
1d930fd2cd samples: bpf: Add redirect tracepoint statistics support
This implements per-errno reporting (for the ones we explicitly
recognize), adds some help output, and implements the stats retrieval
and printing functions.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-6-memxor@gmail.com
2021-08-24 14:48:40 -07:00
Kumar Kartikeya Dwivedi
3231403894 samples: bpf: Add BPF support for redirect tracepoint
This adds the shared BPF file that will be used going forward for
sharing tracepoint programs among XDP redirect samples.

Since vmlinux.h conflicts with tools/include for READ_ONCE/WRITE_ONCE
and ARRAY_SIZE, they are copied into xdp_sample.bpf.h along with other
helpers that will be required.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-5-memxor@gmail.com
2021-08-24 14:48:40 -07:00
Kumar Kartikeya Dwivedi
156f886cf6 samples: bpf: Add basic infrastructure for XDP samples
This file implements some common helpers to consolidate differences in
features and functionality between the various XDP samples and give them
a consistent look, feel, and reporting capabilities.

This commit only adds support for receive statistics, which does not
rely on any tracepoint, but on the XDP program installed on the device
by each XDP redirect sample.

Some of the key features are:
 * A concise output format accompanied by helpful text explaining its
   fields.
 * An elaborate output format building upon the concise one, folding out
   details in case of errors and staying out of view otherwise.
 * Printing driver names for devices redirecting packets.
 * Getting the MAC address for each interface.
 * Printing summarized total statistics for the entire session.
 * Ability to dynamically switch between concise and verbose mode, using
   SIGQUIT (Ctrl + \); a small sketch of this follows the list.
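
A minimal, illustrative sketch of the SIGQUIT-based mode toggle (not the
sample's actual code; names and the poll interval are assumptions):

	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	static volatile sig_atomic_t verbose_mode;

	static void toggle_verbose(int sig)
	{
		(void)sig;
		verbose_mode = !verbose_mode; /* flip between concise and verbose */
	}

	int main(void)
	{
		struct sigaction sa = { .sa_handler = toggle_verbose };

		sigaction(SIGQUIT, &sa, NULL);  /* Ctrl + \ */
		for (;;) {
			printf("stats (%s mode)\n",
			       verbose_mode ? "verbose" : "concise");
			sleep(2);               /* illustrative poll interval */
		}
		return 0;
	}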

In later patches, the support will be extended for each tracepoint with
its own custom output in concise and verbose mode.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-4-memxor@gmail.com
2021-08-24 14:48:40 -07:00
Kumar Kartikeya Dwivedi
f2e85d4a75 tools: include: Add ethtool_drvinfo definition to UAPI header
Instead of copying the whole header in, just add the struct definitions we
need for now. In the future it can be synced as a copy of the in-tree
header if required.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-3-memxor@gmail.com
2021-08-24 14:48:40 -07:00
Kumar Kartikeya Dwivedi
50b796e645 samples: bpf: Fix a couple of warnings
cookie_uid_helper_example.c: In function ‘main’:
cookie_uid_helper_example.c:178:69: warning: ‘ -j ACCEPT’ directive
	writing 10 bytes into a region of size between 8 and 58
	[-Wformat-overflow=]
  178 |  sprintf(rules, "iptables -A OUTPUT -m bpf --object-pinned %s -j ACCEPT",
      |								       ^~~~~~~~~~
/home/kkd/src/linux/samples/bpf/cookie_uid_helper_example.c:178:9: note:
	‘sprintf’ output between 53 and 103 bytes into a destination of size 100
  178 |  sprintf(rules, "iptables -A OUTPUT -m bpf --object-pinned %s -j ACCEPT",
      |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  179 |         file);
      |         ~~~~~

Fix by using snprintf and a sufficiently sized buffer.
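
A hedged sketch of the snprintf approach (the buffer size, variable names and
pinned-object path are illustrative, not the sample's exact values):

	#include <stdio.h>

	int main(void)
	{
		const char *file = "/sys/fs/bpf/bpf_cookie_uid"; /* assumed path */
		char rules[128]; /* large enough for the longest expected command */

		/* snprintf never writes past sizeof(rules), so the
		 * -Wformat-overflow warning goes away. */
		snprintf(rules, sizeof(rules),
			 "iptables -A OUTPUT -m bpf --object-pinned %s -j ACCEPT",
			 file);
		puts(rules);
		return 0;
	}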

tracex4_user.c:35:15: warning: ‘write’ reading 12 bytes from a region of
	size 11 [-Wstringop-overread]
   35 |         key = write(1, "\e[1;1H\e[2J", 12); /* clear screen */
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

Fix by passing 11 as the size.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210821002010.845777-2-memxor@gmail.com
2021-08-24 14:48:40 -07:00
Andrey Ignatov
d7af7e497f bpf: Fix possible out of bound write in narrow load handling
Fix a verifier bug found by smatch static checker in [0].

This problem has never been seen in prod to the best of my knowledge. Fixing
it still seems to be a good idea since it's hard to say for sure whether it's
possible or not to have a scenario where a combination of
convert_ctx_access() and a narrow load would lead to an out-of-bounds
write.

When a narrow load is handled, one or two new instructions are added to the
insn_buf array, but previously it was only checked that

	cnt >= ARRAY_SIZE(insn_buf)

which makes it safe to add a new instruction to insn_buf[cnt++] only once;
a second write can go out of bounds. And this is what can happen if `shift`
is set.

Fix it by making sure that if the BPF_RSH instruction has to be added in
addition to BPF_AND then there is enough space for two more instructions
in insn_buf.

The full report [0] is below:

kernel/bpf/verifier.c:12304 convert_ctx_accesses() warn: offset 'cnt' incremented past end of array
kernel/bpf/verifier.c:12311 convert_ctx_accesses() warn: offset 'cnt' incremented past end of array

kernel/bpf/verifier.c
    12282
    12283 			insn->off = off & ~(size_default - 1);
    12284 			insn->code = BPF_LDX | BPF_MEM | size_code;
    12285 		}
    12286
    12287 		target_size = 0;
    12288 		cnt = convert_ctx_access(type, insn, insn_buf, env->prog,
    12289 					 &target_size);
    12290 		if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf) ||
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bounds check.

    12291 		    (ctx_field_size && !target_size)) {
    12292 			verbose(env, "bpf verifier is misconfigured\n");
    12293 			return -EINVAL;
    12294 		}
    12295
    12296 		if (is_narrower_load && size < target_size) {
    12297 			u8 shift = bpf_ctx_narrow_access_offset(
    12298 				off, size, size_default) * 8;
    12299 			if (ctx_field_size <= 4) {
    12300 				if (shift)
    12301 					insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH,
                                                         ^^^^^
increment beyond end of array

    12302 									insn->dst_reg,
    12303 									shift);
--> 12304 				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
                                                 ^^^^^
out of bounds write

    12305 								(1 << size * 8) - 1);
    12306 			} else {
    12307 				if (shift)
    12308 					insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
    12309 									insn->dst_reg,
    12310 									shift);
    12311 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
                                        ^^^^^^^^^^^^^^^
Same.

    12312 								(1ULL << size * 8) - 1);
    12313 			}
    12314 		}
    12315
    12316 		new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
    12317 		if (!new_prog)
    12318 			return -ENOMEM;
    12319
    12320 		delta += cnt - 1;
    12321
    12322 		/* keep walking new program and skip insns we just inserted */
    12323 		env->prog = new_prog;
    12324 		insn      = new_prog->insnsi + i + delta;
    12325 	}
    12326
    12327 	return 0;
    12328 }

[0] https://lore.kernel.org/bpf/20210817050843.GA21456@kili/

v1->v2:
- clarify that problem was only seen by static checker but not in prod;

Fixes: 46f53a65d2 ("bpf: Allow narrow loads with offset > 0")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210820163935.1902398-1-rdna@fb.com
2021-08-24 14:32:26 -07:00
sumiyawang
32b2397c1e libnvdimm/pmem: Fix crash triggered when I/O in-flight during unbind
There is a use-after-free crash when the pmem driver tears down its
mapping while I/O is still inbound.

This is triggered by driver unbind, "ndctl destroy-namespace", while I/O
is in flight.

Fix the sequence of blk_cleanup_queue() vs memunmap().

The crash signature is of the form:

 BUG: unable to handle page fault for address: ffffc90080200000
 CPU: 36 PID: 9606 Comm: systemd-udevd
 Call Trace:
  ? pmem_do_bvec+0xf9/0x3a0
  ? xas_alloc+0x55/0xd0
  pmem_rw_page+0x4b/0x80
  bdev_read_page+0x86/0xb0
  do_mpage_readpage+0x5d4/0x7a0
  ? lru_cache_add+0xe/0x10
  mpage_readpages+0xf9/0x1c0
  ? bd_link_disk_holder+0x1a0/0x1a0
  blkdev_readpages+0x1d/0x20
  read_pages+0x67/0x1a0

  ndctl Call Trace in vmcore:
  PID: 23473  TASK: ffff88c4fbbe8000  CPU: 1   COMMAND: "ndctl"
  __schedule
  schedule
  blk_mq_freeze_queue_wait
  blk_freeze_queue
  blk_cleanup_queue
  pmem_release_queue
  devm_action_release
  release_nodes
  devres_release_all
  device_release_driver_internal
  device_driver_detach
  unbind_store

Cc: <stable@vger.kernel.org>
Signed-off-by: sumiyawang <sumiyawang@tencent.com>
Reviewed-by: yongduan <yongduan@tencent.com>
Link: https://lore.kernel.org/r/1629632949-14749-1-git-send-email-sumiyawang@tencent.com
Fixes: 50f44ee724 ("mm/devm_memremap_pages: fix final page put race")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-08-24 14:25:59 -07:00
Vineet Gupta
a79a9c765f ARC: mm: move MMU specific bits out of entry code ...
... to avoid polluting shared entry code (across three ISA variants)
with ISA/MMU specific code.

Cc: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
89d0d42412 ARC: mm: move MMU specific bits out of ASID allocator
And while at it, rewrite the commentary on the ASID allocator.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
be43b096ed ARC: mm: non-functional code movement/cleanup
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
e93e59ac1e ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set)
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
da773cf20e ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag
and remove the one-off uncached definition for ARC.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
1b4013b9ae ARC: mm: Enable STRICT_MM_TYPECHECKS
In the past I've refrained from doing this (at least twice) due to the
slight code bloat caused by the ABI implications of pte_t etc. becoming
structs.

Per the ARC ABI, functions return structs via memory and not through
register r0, even if the struct would fit in register(s):

 - caller allocates space on stack and passes the address as first arg
   (r0), shifting rest of args by one

 - callee creates return struct in memory (referenced via r0)

This time around the code actually shrank slightly (due to subtle
inlining heuristic effects), though it is still slightly inefficient since
return values are passed through memory. That however seems like a small
cost compared to the maintenance burden, given the impending new MMU
support for page walks etc.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
366440eec8 ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
47910ca3ce ARC: mm: move mmu/cache externs out to setup.h
Don't pollute mmu.h and cache.h with ARC-internal bootlog/setup related
functions. Move them aside to setup.h.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
12e7804c26 ARC: mm: remove tlb paranoid code
This was used back in the ARC700 days when the ASID allocator was fragile.
It has not been needed in the last 5 years.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
6128df5be4 ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only
The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However,
the ARC700 SMP port has to repurpose it for re-entrant interrupt handling,
while the UP port doesn't. We currently handle these use-cases using a
fabricated #define which has the usual issues of dependency nesting and
obvious ugliness.

So clean this up: for ARC700 don't use it to cache the pgd (even in UP) and
do the opposite for ARCv2.

And while here, switch to the canonical pgd_offset().

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:48 -07:00
Vineet Gupta
288ff7de62 ARC: retire MMUv1 and MMUv2 support
There's no known/active customer using them with the latest kernels anyway.

Removal helps clean up the code and removes the hack for the
MMU_VER to MMU_V[3-4] conversion.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
767a697e75 ARC: retire ARC750 support
There's no known/active customer using it with the latest kernels anyway.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
301014cf6d ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants
And move them out of cmpxchg.h to the canonical atomic.h.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
ddc348c44d ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only)
It only makes sense to do this for the LLSC config.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
e188f3330a ARC: cmpxchg/xchg: rewrite as macros to make type safe
The existing code forces/assumes args to be of type "long", which won't work
in an LP64 regime, so prepare the code for that.
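
A hedged, userspace-only sketch of the macro idea (the GCC builtin stands in
for the architecture-specific implementation; names are illustrative):

	#include <stdio.h>

	/* Result keeps the pointee's type instead of being forced to "long". */
	#define xchg_sketch(ptr, val)                                       \
	({                                                                  \
		__typeof__(*(ptr)) _new = (val);                            \
		__atomic_exchange_n((ptr), _new, __ATOMIC_SEQ_CST);         \
	})

	int main(void)
	{
		int x = 1;
		int old = xchg_sketch(&x, 5); /* old == 1, x == 5 */

		printf("%d %d\n", old, x);
		return 0;
	}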

Interestingly this should be a non-functional change, but I do see some
codegen changes:

| bloat-o-meter vmlinux-cmpxchg-A vmlinux-cmpxchg-B
| add/remove: 0/0 grow/shrink: 17/12 up/down: 218/-150 (68)
|
| Function                                     old     new   delta
| rwsem_optimistic_spin                        518     550     +32
| rwsem_down_write_slowpath                   1244    1274     +30
| __do_sys_perf_event_open                    2576    2600     +24
| down_read                                    192     200      +8
| __down_read                                  192     200      +8
...
| task_work_run                                168     148     -20
| dma_fence_chain_walk.part                    760     736     -24
| __genradix_ptr_alloc                         674     646     -28

Total: Before=6187409, After=6187477, chg +0.00%

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
ecf51c9fa0 ARC: xchg: !LLSC: remove UP micro-optimization/hack
It gets in the way of cleaning things up and is a maintenance pain in the
neck!

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
9d011e1207 ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
cea4314790 ARC: switch to generic bitops
- !LLSC now only needs a single spinlock for atomics and bitops

 - Some codegen changes (slight bloat) with generic bitops

   1. code increase due to the LD-check-atomic paradigm vs. unconditional
      atomic (which dirties the cache line even if the bit is already set).
      So despite the increase, generic is the right thing to do.

   2. code decrease (but use of costlier instructions such as DIV vs.
      shift-based math) due to signed arithmetic.
      This needs to be revisited separately.

     arc:
     static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
                                ^^^^^^^^^^^^
     generic:
     static inline int test_bit(int nr, const volatile unsigned long *addr)
                                ^^^

Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[vgupta: wrote patch based on Will's poc, analysed codegen diffs]
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
b64be68369 ARC: atomics: implement relaxed variants
The current ARC fetch/return atomics provide fully ordered semantics only
by placing two full barriers around the operation.

Instead, implement them as relaxed variants without any barriers and rely
on generic code to generate the fully-ordered, acquire and release variants
by adding the appropriate full barriers.

This helps elide some extra barriers in case of acquire/release/relaxed
calls.
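
A hedged userspace analogue of the pattern (illustrative names; the kernel's
generic fallback machinery is not reproduced here):

	#include <stdio.h>

	/* The architecture would supply only the relaxed form ... */
	static inline int atomic_fetch_add_relaxed_sketch(int i, int *v)
	{
		return __atomic_fetch_add(v, i, __ATOMIC_RELAXED);
	}

	/* ... and generic code builds the fully ordered variant around it. */
	static inline int atomic_fetch_add_sketch(int i, int *v)
	{
		int ret;

		__atomic_thread_fence(__ATOMIC_SEQ_CST); /* barrier before */
		ret = atomic_fetch_add_relaxed_sketch(i, v);
		__atomic_thread_fence(__ATOMIC_SEQ_CST); /* barrier after */
		return ret;
	}

	int main(void)
	{
		int v = 0;
		int old = atomic_fetch_add_sketch(3, &v);

		printf("%d %d\n", old, v); /* 0 3 */
		return 0;
	}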

bloat-o-meter for the hsdk defconfig shows codegen improvements, although
the numbers below are inflated due to unrelated inlining heuristic changes:

| bloat-o-meter vmlinux-643babe34fd7-non-relaxed vmlinux-45aa05cb44d7-relaxed
| add/remove: 2/5 grow/shrink: 42/1222 up/down: 4158/-14312 (-10154)
| Function                                     old     new   delta
| ..
| sys_renameat                                 462     476     +14
| ip_mc_inc_group                              424     436     +12
| do_read_cache_page                          1882    1894     +12
| ..
| refcount_dec_and_mutex_lock                  254     250      -4
| refcount_dec_and_lock_irqsave                258     254      -4
| refcount_dec_and_lock                        254     250      -4
| ..
| tcp_v6_route_req                             246     238      -8
| tcp_v4_destroy_sock                          286     278      -8
| tcp_twsk_unique                              352     344      -8

Link: https://lore.kernel.org/r/20180830144344.GW24142@hirez.programming.kicks-ass.net
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
7e8f8cbb43 ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return
This is a non-functional change since those wrappers are not
used in kernel sources at all.

Link: http://lists.infradead.org/pipermail/linux-snps-arc/2018-August/004246.html
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:47 -07:00
Vineet Gupta
ca766f04ad ARC: atomic: !LLSC: use int data type consistently
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:46 -07:00
Vineet Gupta
b1040148b2 ARC: atomic: !LLSC: remove hack in atomic_set() for UP
!LLSC atomics use a spinlock (SMP) or irq-disable (UP) to implement
critical regions. UP atomic_set(), however, was "cheating" by not doing any
of that and still being functional.

Remove this anomaly (primarily as cleanup for future code improvements),
given that this config is not worth the hassle of special-case code.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:46 -07:00
Vineet Gupta
b0f839b4b9 ARC: atomics: disintegrate header
Non-functional change, to ease future addition/removal.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:46 -07:00
Randy Dunlap
6b5ff0405e ARC: export clear_user_page() for modules
0day bot reports a build error:
  ERROR: modpost: "clear_user_page" [drivers/media/v4l2-core/videobuf-dma-sg.ko] undefined!
so export it in arch/arc/ to fix the build error.

In most ARCHes, clear_user_page() is a macro. OTOH, in a few
ARCHes it is a function and needs to be exported.
PowerPC exported it in 2004. It looks like nds32 and nios2
still need to have it exported.

Fixes: 4102b53392 ("ARC: [mm] Aliasing VIPT dcache support 2/4")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:46 -07:00
Changcheng Deng
82a423053e arch/arc/kernel/: fix misspellings using codespell tool
Some typos were found by the codespell tool:

./intc-compact.c:145: prioity ==> priority
./smp.c:286: recevier ==> receiver
./stacktrace.c:152: prelogue ==> prologue

Fix the typos found by codespell.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn>
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 14:25:46 -07:00
Alexei Starovoitov
f63693e3ae Merge branch 'bpf: Allow bpf_get_netns_cookie in BPF_PROG_TYPE_SK_MSG'
Xu Liu says:

====================

We'd like to be able to identify the netns from sk_msg hooks
to accelerate local process communication from different netns.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-08-24 14:17:53 -07:00
Xu Liu
6cbca1ee0d selftests/bpf: Test for get_netns_cookie
Add test to use get_netns_cookie() from BPF_PROG_TYPE_SK_MSG.

Signed-off-by: Xu Liu <liuxu623@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210820071712.52852-3-liuxu623@gmail.com
2021-08-24 14:17:53 -07:00
Xu Liu
fab60e29fc bpf: Allow bpf_get_netns_cookie in BPF_PROG_TYPE_SK_MSG
We'd like to be able to identify the netns from sk_msg hooks
to accelerate local process communication from different netns.
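
A minimal sk_msg program sketch using the helper (the section and program
names are illustrative, not taken from the selftest):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("sk_msg")
	int msg_netns_cookie(struct sk_msg_md *msg)
	{
		__u64 cookie = bpf_get_netns_cookie(msg);

		bpf_printk("netns cookie %llu", cookie);
		return SK_PASS;
	}

	char _license[] SEC("license") = "GPL";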

Signed-off-by: Xu Liu <liuxu623@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210820071712.52852-2-liuxu623@gmail.com
2021-08-24 14:17:53 -07:00
Alexei Starovoitov
8c0bb89e8e Merge branch 'selftests/bpf: minor fixups'
Li Zhijian says:

====================
Fix a few issues reported by 0Day/LKP while running selftests/bpf.

Changelog:
V2:
- folded previous similar standalone patch to [1/5], and add acked tag
  from Song Liu
- add acked tag to [2/5], [3/5] from Song Liu
- [4/5]: move test_bpftool.py to TEST_PROGS_EXTENDED; files in TEST_GEN_PROGS_EXTENDED
  are generated by make. Otherwise, it will break the out-of-tree install:
'make O=/kselftest-build SKIP_TARGETS= V=1 -C tools/testing/selftests install INSTALL_PATH=/kselftest-install'
- [5/5]: new patch
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-08-24 14:01:11 -07:00
Li Zhijian
00e1116031 selftests/bpf: Exit with KSFT_SKIP if no Makefile found
This would happen when we run the tests after installing kselftests:
 root@lkp-skl-d01 ~# /kselftests/run_kselftest.sh -t bpf:test_doc_build.sh
 TAP version 13
 1..1
 # selftests: bpf: test_doc_build.sh
 perl: warning: Setting locale failed.
 perl: warning: Please check that your locale settings:
         LANGUAGE = (unset),
         LC_ALL = (unset),
         LC_ADDRESS = "en_US.UTF-8",
         LC_NAME = "en_US.UTF-8",
         LC_MONETARY = "en_US.UTF-8",
         LC_PAPER = "en_US.UTF-8",
         LC_IDENTIFICATION = "en_US.UTF-8",
         LC_TELEPHONE = "en_US.UTF-8",
         LC_MEASUREMENT = "en_US.UTF-8",
         LC_TIME = "en_US.UTF-8",
         LC_NUMERIC = "en_US.UTF-8",
         LANG = "en_US.UTF-8"
     are supported and installed on your system.
 perl: warning: Falling back to the standard locale ("C").
 # skip:    bpftool files not found!
 #
 ok 1 selftests: bpf: test_doc_build.sh # SKIP

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210820025549.28325-1-lizhijian@cn.fujitsu.com
2021-08-24 14:01:10 -07:00
Li Zhijian
404bd9ff5d selftests/bpf: Add missing files required by test_bpftool.sh for installing
test_bpftool.sh relies on bpftool and test_bpftool.py.

'make install' will install bpftool to INSTALL_PATH/bpf/bpftool, and
export it to PATH so that it can be used after installing.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210820015556.23276-5-lizhijian@cn.fujitsu.com
2021-08-24 14:01:10 -07:00
Li Zhijian
7a3bdca20b selftests/bpf: Add default bpftool built by selftests to PATH
For 'make run_tests':
selftests will build bpftool into tools/testing/selftests/bpf/tools/sbin/bpftool
by default.

==================
root@lkp-skl-d01 /opt/rootfs/v5.14-rc4# make -C tools/testing/selftests/bpf run_tests
make: Entering directory '/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf'
  MKDIR    include
  MKDIR    libbpf
  MKDIR    bpftool
[...]
  GEN     /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/tools/build/bpftool/profiler.skel.h
  CC      /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/tools/build/bpftool/prog.o
  GEN     /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/tools/build/bpftool/pid_iter.skel.h
  CC      /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/tools/build/bpftool/pids.o
  LINK    /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/tools/build/bpftool/bpftool
  INSTALL bpftool
  GEN      vmlinux.h
[...]
 # test_feature_dev_json (test_bpftool.TestBpftool) ... ERROR
 # test_feature_kernel (test_bpftool.TestBpftool) ... ERROR
 # test_feature_kernel_full (test_bpftool.TestBpftool) ... ERROR
 # test_feature_kernel_full_vs_not_full (test_bpftool.TestBpftool) ... ERROR
 # test_feature_macros (test_bpftool.TestBpftool) ... Error: bug: failed to retrieve CAP_BPF status: Invalid argument
 # ERROR
 #
 # ======================================================================
 # ERROR: test_feature_dev_json (test_bpftool.TestBpftool)
 # ----------------------------------------------------------------------
 # Traceback (most recent call last):
 #   File "/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/test_bpftool.py", line 57, in wrapper
 #     return f(*args, iface, **kwargs)
 #   File "/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/test_bpftool.py", line 82, in test_feature_dev_json
 #     res = bpftool_json(["feature", "probe", "dev", iface])
 #   File "/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/test_bpftool.py", line 42, in bpftool_json
 #     res = _bpftool(args)
 #   File "/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/test_bpftool.py", line 34, in _bpftool
 #     return subprocess.check_output(_args)
 #   File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
 #     **kwargs).stdout
 #   File "/usr/lib/python3.7/subprocess.py", line 487, in run
 #     output=stdout, stderr=stderr)
 # subprocess.CalledProcessError: Command '['bpftool', '-j', 'feature', 'probe', 'dev', 'dummy0']' returned non-zero exit status 255.
 #
==================

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-4-lizhijian@cn.fujitsu.com
2021-08-24 14:01:10 -07:00
Li Zhijian
5a980b5baf selftests/bpf: Make test_doc_build.sh work from script directory
Previously, it failed as below:
-------------
root@lkp-skl-d01 /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf# ./test_doc_build.sh
++ realpath --relative-to=/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf ./test_doc_build.sh
+ SCRIPT_REL_PATH=test_doc_build.sh
++ dirname test_doc_build.sh
+ SCRIPT_REL_DIR=.
++ realpath /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/./../../../../
+ KDIR_ROOT_DIR=/opt/rootfs/v5.14-rc4
+ cd /opt/rootfs/v5.14-rc4
+ for tgt in docs docs-clean
+ make -s -C /opt/rootfs/v5.14-rc4/. docs
make: *** No rule to make target 'docs'.  Stop.
+ for tgt in docs docs-clean
+ make -s -C /opt/rootfs/v5.14-rc4/. docs-clean
make: *** No rule to make target 'docs-clean'.  Stop.
-----------

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-3-lizhijian@cn.fujitsu.com
2021-08-24 14:01:10 -07:00
Li Zhijian
2d82d73da3 selftests/bpf: Enlarge select() timeout for test_maps
The 0Day robot observed that it easily times out on a heavily loaded host.
-------------------
 # selftests: bpf: test_maps
 # Fork 1024 tasks to 'test_update_delete'
 # Fork 1024 tasks to 'test_update_delete'
 # Fork 100 tasks to 'test_hashmap'
 # Fork 100 tasks to 'test_hashmap_percpu'
 # Fork 100 tasks to 'test_hashmap_sizes'
 # Fork 100 tasks to 'test_hashmap_walk'
 # Fork 100 tasks to 'test_arraymap'
 # Fork 100 tasks to 'test_arraymap_percpu'
 # Failed sockmap unexpected timeout
 not ok 3 selftests: bpf: test_maps # exit=1
 # selftests: bpf: test_lru_map
 # nr_cpus:8
-------------------
Since this test will be scheduled by 0Day onto a random host that could have
only a few CPUs (2-8), enlarge the timeout to avoid false failure reports.

In practice, I tried pinning it to a single CPU with 'taskset 0x01 ./test_maps'
and found that 10s is likely enough, but I still prefer the larger value of 30.
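
A hedged sketch of the idea (not the test's exact code; the function name and
fd handling are assumptions):

	#include <stdio.h>
	#include <sys/select.h>
	#include <unistd.h>

	/* Wait up to 30s for fd to become readable instead of a few seconds,
	 * so heavily loaded hosts don't produce false timeouts. */
	static int wait_readable(int fd)
	{
		fd_set rfds;
		struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };

		FD_ZERO(&rfds);
		FD_SET(fd, &rfds);
		return select(fd + 1, &rfds, NULL, NULL, &tv);
	}

	int main(void)
	{
		int ret = wait_readable(STDIN_FILENO);

		printf("%s\n", ret > 0 ? "ready" : ret == 0 ? "timeout" : "error");
		return 0;
	}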

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
2021-08-24 14:01:10 -07:00
Matija Glavinic Pecotic
ea4ab99cb5 spi: davinci: invoke chipselect callback
DaVinci needs to configure the chipselect on transfer.

Fixes: 4a07b8bcd5 ("spi: bitbang: Make chipselect callback optional")
Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Link: https://lore.kernel.org/r/735fb7b0-82aa-5b9b-85e4-53f0c348cc0e@nokia.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2021-08-24 20:53:24 +01:00