The decode-insn code has to be reused by the arm64 uprobe implementation
as well. Therefore, this patch protects some portions of the kprobe code
and renames a few others, so that the decode-insn functionality can be
reused by uprobes even when CONFIG_KPROBES is not defined.
kprobe_opcode_t and struct arch_specific_insn are also defined by
linux/kprobes.h when CONFIG_KPROBES is not defined, so protect these
definitions in asm/probes.h.
linux/kprobes.h already includes asm/kprobes.h. Therefore, remove inclusion
of asm/kprobes.h from decode-insn.c.
Some definitions, such as kprobe_insn and kprobes_handler_t, can be
reused by uprobes. So, it would be better to remove the 'k' from their
names.
struct arch_specific_insn is specific to kprobe. Therefore, introduce a new
struct arch_probe_insn which will be common for both kprobe and uprobe, so
that decode-insn code can be shared. Modify kprobe code accordingly.
Function arm_probe_decode_insn() will be needed by uprobe as well. So make
it global.
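A rough sketch of the resulting split (the field list follows the patch,
but treat the exact layout as illustrative):

    /* shared by kprobes and uprobes; note the dropped 'k' in names */
    typedef u32 probe_opcode_t;

    struct arch_probe_insn {
            probe_opcode_t *insn;           /* out-of-line slot */
            pstate_check_t *pstate_cc;      /* conditional-execution check */
            probes_handler_t *handler;      /* simulate handler, if any */
    };

    #ifdef CONFIG_KPROBES
    typedef u32 kprobe_opcode_t;

    /* kprobe-only wrapper around the shared part */
    struct arch_specific_insn {
            struct arch_probe_insn api;
    };
    #endif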
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kprobes does not need its own homebrewed (and frankly inscrutable) sign
extension macro; just use the standard kernel functions instead. Since
the compiler actually recognises the sign-extension idiom of the latter,
we also get the small bonus of some nicer codegen, as each displacement
calculation helper then compiles to a single optimal SBFX instruction.
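For illustration, a branch-displacement helper then boils down to
something like this (the helper name is illustrative; sign_extend32()
comes from linux/bitops.h):

    #include <linux/bitops.h>

    /* B/BL encode a signed 26-bit word offset in bits [25:0]. */
    static inline s32 bbl_displacement(u32 opcode)
    {
            /* sign_extend32() compiles to a single SBFX on arm64 */
            return sign_extend32(opcode & 0x03ffffff, 25) << 2;
    }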
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
These files were only including module.h for exception table
related functions. We've now separated that content out into its
own file "extable.h" so now move over to that and avoid all the
extra header content in module.h that we don't really need to compile
these files.
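The change itself is mechanical, along the lines of:

    -#include <linux/module.h>
    +#include <linux/extable.h>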
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kprobes searches backwards a finite number of instructions to determine if
there is an attempt to probe a load/store exclusive sequence. It stops when
it hits the maximum number of instructions or a load or store exclusive.
However, this means it can run up past the beginning of the function and
start looking at literal constants. This has been shown to cause a false
positive and blocks insertion of the probe. To fix this, further limit the
backwards search to stop if it hits a symbol address from kallsyms. The
presumption is that this is the entry point to this code (particularly for
the common case of placing probes at the beginning of functions).
This also improves efficiency by not searching code that is not part of the
function. There may be some possibility that the label might not denote
the entry path to the probed instruction, but the likelihood seems low,
and this is just another example of how the kprobes user really needs to
be careful about what they are doing.
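A sketch of the bounded scan setup (based on the description above;
treat the details as illustrative):

    unsigned long size = 0, offset = 0;

    /*
     * Stop the backwards scan at the start of the containing symbol,
     * as reported by kallsyms; fall back to the fixed limit if the
     * address cannot be resolved.
     */
    if (kallsyms_lookup_size_offset((unsigned long)addr, &size, &offset))
            scan_end = addr - (offset / sizeof(kprobe_opcode_t));
    else
            scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;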
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Each time new section markers are added, kernel/vmlinux.lds.S is updated,
and new extern char __start_foo[] definitions are scattered through the
tree.
Create asm/sections.h to collect these definitions (and include
the existing asm-generic version).
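The collected header then looks something like this (only a few sample
markers shown):

    /* arch/arm64/include/asm/sections.h */
    #ifndef __ASM_SECTIONS_H
    #define __ASM_SECTIONS_H

    #include <asm-generic/sections.h>

    extern char __hyp_text_start[], __hyp_text_end[];
    extern char __idmap_text_start[], __idmap_text_end[];
    extern char __irqentry_text_start[], __irqentry_text_end[];

    #endif /* __ASM_SECTIONS_H */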
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Whenever we hit a kprobe from a non-kprobe debug exception handler, we
end up in an endless stream of "Unexpected kernel single-step exception
at EL1" messages.
PSTATE.D is the debug exception mask bit. It is set whenever we enter an
exception mode. When it is set, Watchpoint, Breakpoint, and Software
Step exceptions are masked. However, Breakpoint Instruction exceptions
can never be masked. Therefore, if we ever execute a BRK instruction, we
will receive the corresponding breakpoint exception irrespective of the
D-bit setting.
For example:
- We are executing a kprobe pre/post handler, and a kprobe has been
inserted in one of the instructions of a function called by the handler.
So, it executes a BRK instruction and we land in the KPROBE_REENTER
case. (This case is already handled by the current code.)
- We are executing a uprobe handler or any other BRK handler, such as in
WARN_ON (BRK BUG_BRK_IMM), and we trace that path using kprobe. So, we
enter the kprobe breakpoint handler from another BRK handler. (This case
is not handled currently.)
In all such cases the kprobe breakpoint exception is raised while we are
already in debug exception mode. The SPSR's D bit (bit 9) shows the value
of PSTATE.D immediately before the exception was taken, so in the above
example cases we would find it set in the kprobe breakpoint handler. A
single-step exception will always follow a kprobe breakpoint exception.
However, it is only raised gracefully if we clear the D bit while
returning from the breakpoint exception. If the D bit is left set, we get
an undefined exception instead, and when its handler enables debug, a
single-step exception is generated; however, it is never handled (the
address does not match, so it is treated as unexpected).
This patch clears the D-flag unconditionally in setup_singlestep, so that
we always get the single-step exception correctly after returning from
the breakpoint exception. Additionally, it removes the D-flag set
statement on the KPROBE_REENTER return path, because the debug exception
for KPROBE_REENTER always takes place in a debug exception state, so the
D-flag will already be set in this case.
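The helper that does the clearing is small; a sketch
(spsr_set_debug_flag() is a local helper in the arm64 kprobes code,
called with 0 from setup_singlestep):

    static void __kprobes spsr_set_debug_flag(struct pt_regs *regs, int mask)
    {
            unsigned long spsr = regs->pstate;

            if (mask)
                    spsr |= PSR_D_BIT;      /* mask debug exceptions */
            else
                    spsr &= ~PSR_D_BIT;     /* let single-step fire after ERET */

            regs->pstate = spsr;
    }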
Acked-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Because the arm64 calling standard allows stacked function arguments to be
anywhere in the stack frame, do not attempt to duplicate the stack frame for
jprobes handler functions.
Documentation changes to describe this issue have been broken out into a
separate patch in order to simultaneously address them in other
architecture(s).
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
jprobe_return seems to have aged badly. Comments referring to
non-existent behaviours, and a dangerous habit of messing
with registers without telling the compiler.
This patch applies the following remedies:
- Fix the comments to describe the actual behaviour
- Tidy up the asm sequence to directly assign the
stack pointer without clobbering extra registers
- Mark the rest of the function as unreachable() so
that the compiler knows that there is no need for
an epilogue
- Stop making jprobe_return_break a global function
(you really don't want to call that guy, and it isn't
even a function).
Tested with tcp_probe.
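The tidied function ends up roughly like this (a sketch;
BRK64_ESR_KPROBES names the kprobes BRK immediate):

    void __kprobes jprobe_return(void)
    {
            struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

            /*
             * Restore the stack pointer saved at jprobe entry, then
             * trap back into the kprobes handler with BRK. Control
             * never comes back here, so tell the compiler as much.
             */
            asm volatile ("mov sp, %0\n"
                          "jprobe_return_break: brk %1\n"
                          :
                          : "r" (kcb->jprobe_saved_regs.sp),
                            "I" (BRK64_ESR_KPROBES)
                          : "memory");

            unreachable();
    }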
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The MIN_STACK_SIZE macro tries to evaluate how much stack space needs
to be saved in the jprobes_stack array, which is sized at 128 bytes.
When using the IRQ stack, said macro can happily return up to
IRQ_STACK_SIZE, which is 16kB. Mayhem follows.
This patch fixes things by getting rid of the crazy macro and
limiting the copy to be at most the size of the jprobes_stack
array, no matter which stack we're on.
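A sketch of the bounded copy, based on the stack helpers of that era
(treat the exact expressions as illustrative):

    static size_t __kprobes min_stack_size(unsigned long addr)
    {
            size_t size = sizeof(((struct kprobe_ctlblk *)0)->jprobes_stack);

            /* Clamp to whatever is left on the current stack. */
            if (on_irq_stack(addr, raw_smp_processor_id()))
                    return min(size, IRQ_STACK_PTR(raw_smp_processor_id()) - addr);

            return min(size, (unsigned long)current_thread_info()
                             + THREAD_START_SP - addr);
    }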
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Stepping with PSTATE.D=1 is bad news. The step won't generate a debug
exception and we'll likely walk off into random data structures. This
should never happen, but when it does, it's a PITA to debug. Add a
WARN_ON to shout if we realise this is about to take place.
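The check itself is tiny; something like this in the single-step setup
path:

    /* Shout if we're about to step with debug exceptions masked. */
    WARN_ON(regs->pstate & PSR_D_BIT);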
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The pre-handler of this special 'trampoline' kprobe executes the return
probe handler functions and restores the original return address in
ELR_EL1. This way the saved pt_regs still holds the original register
context to be carried back to the probed kernel function.
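A sketch of the handler side (run_kretprobe_handlers() is a hypothetical
stand-in for the walk over the pending kretprobe instances):

    static int __kprobes
    trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
    {
            /* Run the queued return-probe handlers and recover the
             * original return address (hypothetical helper). */
            unsigned long orig_ret_addr = run_kretprobe_handlers(regs);

            /* Restore it in ELR_EL1 via the saved pt_regs, so the
             * return carries us back to the probed function's caller. */
            instruction_pointer_set(regs, orig_ret_addr);

            return 1;       /* PC changed; no further processing */
    }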
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The trampoline code is used by kretprobes to capture a return from a probed
function. This is done by saving the registers, calling the handler, and
restoring the registers. The code then returns to the original saved caller
return address. It is necessary to do this directly instead of using a
software breakpoint because the code used in processing that breakpoint
could itself be kprobe'd and cause a problematic reentry into the debug
exception handler.
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed unnecessary masking of the PSTATE bits]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kprobes needs simulation of instructions that cannot be stepped
from a different memory location, e.g. those instructions
that use PC-relative addressing. In simulation, the behaviour
of the instruction is implemented using a copy of pt_regs.
The following instruction categories are simulated:
- All branching instructions (conditional, register, and immediate)
- Literal access instructions (load-literal, adr/adrp)
Conditional execution is limited to branching instructions in
ARM v8. If the PSTATE condition flags do not match the condition field
of the opcode, the instruction is effectively a NOP.
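As an illustration, simulating ADR against the saved register file might
look like this (a sketch; the bit extraction follows the A64 ADR
encoding):

    #include <linux/bitops.h>
    #include <asm/ptrace.h>

    static void simulate_adr(u32 opcode, long addr, struct pt_regs *regs)
    {
            int xd = opcode & 0x1f;                         /* Rd */
            long imm = ((opcode >> 3) & 0x001ffffc) |       /* immhi */
                       ((opcode >> 29) & 0x3);              /* immlo */

            /* Write the PC-relative result; x31 is xzr, so skip it. */
            if (xd < 31)
                    regs->regs[xd] = addr + sign_extend64(imm, 20);

            /* Step past the simulated instruction. */
            instruction_pointer_set(regs, addr + 4);
    }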
Thanks to Will Cohen for assorted suggested changes.
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed linux/module.h include]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Entry symbols are not kprobe-safe, so blacklist them for kprobing.
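A sketch of the arch hook (the real version checks several ranges; only
the entry-text case is shown):

    #include <linux/kprobes.h>
    #include <asm/sections.h>

    bool arch_within_kprobe_blacklist(unsigned long addr)
    {
            /* Reject probes anywhere in .entry.text. */
            return addr >= (unsigned long)__entry_text_start &&
                   addr <  (unsigned long)__entry_text_end;
    }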
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Do not include syscall wrappers in .entry.text]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add support for basic kernel probes (kprobes) and jump probes
(jprobes) for ARM64.
Kprobes utilizes the software breakpoint and single-step debug
exceptions supported on ARM v8.
A software breakpoint is placed at the probe address to trap the
kernel execution into the kprobe handler.
ARM v8 supports enabling single stepping before the breakpoint exception
return (ERET), with the next PC in the exception return address (ELR_EL1). The
kprobe handler prepares an executable memory slot for out-of-line
execution with a copy of the original instruction being probed, and
enables single stepping. The PC is set to the out-of-line slot address
before the ERET. With this scheme, the instruction is executed with the
exact same register context except for the PC (and DAIF) registers.
The debug mask (PSTATE.D) is enabled only when single stepping a
recursive kprobe, e.g. during kprobe re-entry, so that the probed
instruction can be single stepped within the kprobe handler's exception
context.
The kprobe recursion depth is limited to 2, i.e. upon probe re-entry, any
further re-entry is prevented by not calling the handlers, and the case
is counted as a missed kprobe.
Single stepping from the out-of-line (XOL) slot has a drawback for
PC-relative accesses such as branches and literal loads, as the offset
from the new PC (the slot address) is not guaranteed to fit in the
immediate field of the opcode. Such instructions need simulation, so
probing them is rejected.
Instructions that generate exceptions or change CPU mode are also
rejected for probing.
Exclusive load/store instructions are rejected too. Additionally, the
code is checked to see if it is inside an exclusive load/store sequence
(code from Pratyush).
System instructions are mostly enabled for stepping, except MSR/MRS
accesses to "DAIF" flags in PSTATE, which are not safe for
probing.
This also changes arch/arm64/include/asm/ptrace.h to use
include/asm-generic/ptrace.h.
Thanks to Steve Capper and Pratyush Anand for several suggested
changes.
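As a usage sketch, a minimal module that registers a kprobe might look
like this (the target symbol is purely illustrative):

    #include <linux/kprobes.h>
    #include <linux/module.h>

    static int handler_pre(struct kprobe *p, struct pt_regs *regs)
    {
            /* regs holds the saved context at the probe point. */
            pr_info("kprobe hit, pc = 0x%llx\n", regs->pc);
            return 0;
    }

    static struct kprobe kp = {
            .symbol_name = "do_sys_open",   /* illustrative target */
            .pre_handler = handler_pre,
    };

    static int __init kp_init(void)
    {
            return register_kprobe(&kp);
    }

    static void __exit kp_exit(void)
    {
            unregister_kprobe(&kp);
    }

    module_init(kp_init);
    module_exit(kp_exit);
    MODULE_LICENSE("GPL");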
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>