ftrace updates for v6.13:

- Merged tag ftrace-v6.12-rc4
 
   A fix to the locking in register_ftrace_graph() for shadow stacks was
   sent upstream, but that code was also being rewritten here and still
   needed the fix. Merging it was required to continue the work.
 
 - Restructure the function graph shadow stack to prepare it for use with
   kretprobes
 
   With the goal of merging the shadow stack logic of function graph and
   kretprobes, some more restructuring of the function shadow stack is
   required.
 
   Move the function-graph-specific fields out of the fgraph infrastructure
   and store them in the new stack variables that can pass data from the
   entry callback to the exit callback.
 
   Hopefully, with this change, the merge of kretprobes to use fgraph shadow
   stacks will be ready by the next merge window.
 
 - Make shadow stack 4k instead of using PAGE_SIZE.
 
   Some architectures have very large PAGE_SIZE values, which makes using a
   full page for each shadow stack waste a lot of memory.
 
 - Give shadow stacks their own kmem cache.
 
   When function graph is started, every task on the system gets a shadow
   stack. In the future, shadow stacks may not be 4K in size. Give them
   their own kmem cache so that allocations stay efficient whatever size
   they become.
 
 - Initialize profiler graph ops as it will be needed for new updates to fgraph
 
 - Convert to use guard(mutex) for several ftrace and fgraph functions
 
 - Add more comments and documentation
 
 - Show function return address in function graph tracer
 
   Add an option to show the caller of a function at each entry of the
   function graph tracer, similar to what the function tracer does.
 
 - Abstract out ftrace_regs from being used directly like pt_regs
 
   ftrace_regs was created to store a partial pt_regs. It holds only the
   registers and stack information to get to the function arguments and
   return values. On several archs, it is simply a wrapper around pt_regs.
   But some users would access ftrace_regs directly to get at the pt_regs,
   which does not work on all archs. Make ftrace_regs an abstract structure
   that requires all access to its fields to go through accessor functions.
 
 - Show how long it takes to do function code modifications
 
   When code modification for function hooks happens, the time it takes to
   do the conversion has always been recorded, but this value was never
   exported. Recently the code was touched due to new ROX modification
   handling, which caused a large slowdown in doing the modifications and
   had a significant impact on boot times.
 
   Expose the timings in the dyn_ftrace_total_info file. This file was
   created a while ago to show information about memory usage and such for
   implementing dynamic function tracing, and it is an appropriate place to
   store these timings too. This will make it easier to see the impact that
   changes to code modification have on boot-up timings.
 
 - Other clean ups and small fixes
 -----BEGIN PGP SIGNATURE-----
 
 iIoEABYIADIWIQRRSw7ePDh/lE+zeZMp5XQQmuv6qgUCZztrUxQccm9zdGVkdEBn
 b29kbWlzLm9yZwAKCRAp5XQQmuv6qnnNAQD6w4q9VQ7oOE2qKLqtnj87h4c1GqKn
 SPkpEfC3n/ATEAD/fnYjT/eOSlHiGHuD/aTA+U/bETrT99bozGM/4mFKEgY=
 =6nCa
 -----END PGP SIGNATURE-----

Merge tag 'ftrace-v6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull ftrace updates from Steven Rostedt:

 - Restructure the function graph shadow stack to prepare it for use
   with kretprobes

   With the goal of merging the shadow stack logic of function graph and
   kretprobes, some more restructuring of the function shadow stack is
   required.

   Move the function-graph-specific fields out of the fgraph
   infrastructure and store them in the new stack variables that can pass
   data from the entry callback to the exit callback.

   Hopefully, with this change, the merge of kretprobes to use fgraph
   shadow stacks will be ready by the next merge window.

 - Make shadow stack 4k instead of using PAGE_SIZE.

   Some architectures have very large PAGE_SIZE values, which makes using
   a full page for each shadow stack waste a lot of memory.
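
   The overhead is easy to quantify. A back-of-the-envelope helper
   (illustrative arithmetic only; the 64K page size is just an example
   configuration, e.g. some arm64 or powerpc builds):

```c
#include <assert.h>

/*
 * Bytes wasted by PAGE_SIZE-sized shadow stacks versus the 4K that the
 * fgraph code actually needs, across all tasks. Illustrative only.
 */
static long shadow_stack_waste(long tasks, long page_size)
{
    return tasks * (page_size - 4096);
}
```

   On a 64K-page kernel with a thousand tasks, that is roughly 60 MB of
   memory spent for no benefit.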

 - Give shadow stacks their own kmem cache.

   When function graph is started, every task on the system gets a
   shadow stack. In the future, shadow stacks may not be 4K in size.
   Give them their own kmem cache so that allocations stay efficient
   whatever size they become.

 - Initialize profiler graph ops as it will be needed for new updates to
   fgraph

 - Convert to use guard(mutex) for several ftrace and fgraph functions

 - Add more comments and documentation

 - Show function return address in function graph tracer

   Add an option to show the caller of a function at each entry of the
   function graph tracer, similar to what the function tracer does.
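
   Assuming the usual tracefs mount point, enabling the new option looks
   like this (a sketch; requires CONFIG_FUNCTION_GRAPH_RETADDR and root):

```shell
# Enable return-address printing for the function graph tracer
cd /sys/kernel/tracing
echo funcgraph-retaddr > trace_options
echo function_graph > current_tracer
head trace
```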

 - Abstract out ftrace_regs from being used directly like pt_regs

   ftrace_regs was created to store a partial pt_regs. It holds only the
   registers and stack information to get to the function arguments and
   return values. On several archs, it is simply a wrapper around
   pt_regs. But some users would access ftrace_regs directly to get at
   the pt_regs, which does not work on all archs. Make ftrace_regs an
   abstract structure that requires all access to its fields to go
   through accessor functions.
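
   The accessor pattern compiles as ordinary C. A standalone sketch
   (field names borrowed from the arm64 version in this pull, but
   simplified; not the kernel's actual header):

```c
#include <assert.h>

/*
 * ftrace_regs is opaque to generic code; only the arch header knows the
 * real layout and exposes it through arch_ftrace_regs(). Sketch only.
 */
struct ftrace_regs;                     /* never defined for common code */

struct __arch_ftrace_regs {
    unsigned long regs[9];              /* x0 - x8 */
    unsigned long pc;
    unsigned long lr;
};

#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))

static unsigned long
ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs)
{
    return arch_ftrace_regs(fregs)->pc;
}

static void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long pc)
{
    arch_ftrace_regs(fregs)->pc = pc;
}
```

   Since common code never sees the struct definition, any direct field
   access fails to compile, which is exactly the point of the change.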

 - Show how long it takes to do function code modifications

   When code modification for function hooks happens, the time it takes
   to do the conversion has always been recorded, but this value was
   never exported. Recently the code was touched due to new ROX
   modification handling, which caused a large slowdown in doing the
   modifications and had a significant impact on boot times.

   Expose the timings in the dyn_ftrace_total_info file. This file was
   created a while ago to show information about memory usage and such
   for implementing dynamic function tracing, and it is an appropriate
   place to store these timings too. This will make it easier to see the
   impact that changes to code modification have on boot-up timings.

 - Other clean ups and small fixes

* tag 'ftrace-v6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (22 commits)
  ftrace: Show timings of how long nop patching took
  ftrace: Use guard to take ftrace_lock in ftrace_graph_set_hash()
  ftrace: Use guard to take the ftrace_lock in release_probe()
  ftrace: Use guard to lock ftrace_lock in cache_mod()
  ftrace: Use guard for match_records()
  fgraph: Use guard(mutex)(&ftrace_lock) for unregister_ftrace_graph()
  fgraph: Give ret_stack its own kmem cache
  fgraph: Separate size of ret_stack from PAGE_SIZE
  ftrace: Rename ftrace_regs_return_value to ftrace_regs_get_return_value
  selftests/ftrace: Fix check of return value in fgraph-retval.tc test
  ftrace: Use arch_ftrace_regs() for ftrace_regs_*() macros
  ftrace: Consolidate ftrace_regs accessor functions for archs using pt_regs
  ftrace: Make ftrace_regs abstract from direct use
  fgragh: No need to invoke the function call_filter_check_discard()
  fgraph: Simplify return address printing in function graph tracer
  function_graph: Remove unnecessary initialization in ftrace_graph_ret_addr()
  function_graph: Support recording and printing the function return address
  ftrace: Have calltime be saved in the fgraph storage
  ftrace: Use a running sleeptime instead of saving on shadow stack
  fgraph: Use fgraph data to store subtime for profiler
  ...
This commit is contained in:
Linus Torvalds 2024-11-20 11:34:10 -08:00
commit aad3a0d084
29 changed files with 633 additions and 333 deletions


@@ -54,8 +54,11 @@ extern void return_to_handler(void);
unsigned long ftrace_call_adjust(unsigned long addr);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
#define HAVE_ARCH_FTRACE_REGS
struct dyn_ftrace;
struct ftrace_ops;
struct ftrace_regs;
#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))
#define arch_ftrace_get_regs(regs) NULL
@@ -63,7 +66,7 @@ struct ftrace_ops;
* Note: sizeof(struct ftrace_regs) must be a multiple of 16 to ensure correct
* stack alignment
*/
struct ftrace_regs {
struct __arch_ftrace_regs {
/* x0 - x8 */
unsigned long regs[9];
@@ -83,47 +86,47 @@ struct ftrace_regs {
static __always_inline unsigned long
ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs)
{
return fregs->pc;
return arch_ftrace_regs(fregs)->pc;
}
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long pc)
{
fregs->pc = pc;
arch_ftrace_regs(fregs)->pc = pc;
}
static __always_inline unsigned long
ftrace_regs_get_stack_pointer(const struct ftrace_regs *fregs)
{
return fregs->sp;
return arch_ftrace_regs(fregs)->sp;
}
static __always_inline unsigned long
ftrace_regs_get_argument(struct ftrace_regs *fregs, unsigned int n)
{
if (n < 8)
return fregs->regs[n];
return arch_ftrace_regs(fregs)->regs[n];
return 0;
}
static __always_inline unsigned long
ftrace_regs_get_return_value(const struct ftrace_regs *fregs)
{
return fregs->regs[0];
return arch_ftrace_regs(fregs)->regs[0];
}
static __always_inline void
ftrace_regs_set_return_value(struct ftrace_regs *fregs,
unsigned long ret)
{
fregs->regs[0] = ret;
arch_ftrace_regs(fregs)->regs[0] = ret;
}
static __always_inline void
ftrace_override_function_with_return(struct ftrace_regs *fregs)
{
fregs->pc = fregs->lr;
arch_ftrace_regs(fregs)->pc = arch_ftrace_regs(fregs)->lr;
}
int ftrace_regs_query_register_offset(const char *name);
@@ -143,7 +146,7 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs,
* The ftrace trampoline will return to this address instead of the
* instrumented function.
*/
fregs->direct_tramp = addr;
arch_ftrace_regs(fregs)->direct_tramp = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */


@@ -80,19 +80,19 @@ int main(void)
DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
BLANK();
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
DEFINE(FREGS_X0, offsetof(struct ftrace_regs, regs[0]));
DEFINE(FREGS_X2, offsetof(struct ftrace_regs, regs[2]));
DEFINE(FREGS_X4, offsetof(struct ftrace_regs, regs[4]));
DEFINE(FREGS_X6, offsetof(struct ftrace_regs, regs[6]));
DEFINE(FREGS_X8, offsetof(struct ftrace_regs, regs[8]));
DEFINE(FREGS_FP, offsetof(struct ftrace_regs, fp));
DEFINE(FREGS_LR, offsetof(struct ftrace_regs, lr));
DEFINE(FREGS_SP, offsetof(struct ftrace_regs, sp));
DEFINE(FREGS_PC, offsetof(struct ftrace_regs, pc));
DEFINE(FREGS_X0, offsetof(struct __arch_ftrace_regs, regs[0]));
DEFINE(FREGS_X2, offsetof(struct __arch_ftrace_regs, regs[2]));
DEFINE(FREGS_X4, offsetof(struct __arch_ftrace_regs, regs[4]));
DEFINE(FREGS_X6, offsetof(struct __arch_ftrace_regs, regs[6]));
DEFINE(FREGS_X8, offsetof(struct __arch_ftrace_regs, regs[8]));
DEFINE(FREGS_FP, offsetof(struct __arch_ftrace_regs, fp));
DEFINE(FREGS_LR, offsetof(struct __arch_ftrace_regs, lr));
DEFINE(FREGS_SP, offsetof(struct __arch_ftrace_regs, sp));
DEFINE(FREGS_PC, offsetof(struct __arch_ftrace_regs, pc));
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
DEFINE(FREGS_DIRECT_TRAMP, offsetof(struct ftrace_regs, direct_tramp));
DEFINE(FREGS_DIRECT_TRAMP, offsetof(struct __arch_ftrace_regs, direct_tramp));
#endif
DEFINE(FREGS_SIZE, sizeof(struct ftrace_regs));
DEFINE(FREGS_SIZE, sizeof(struct __arch_ftrace_regs));
BLANK();
#endif
DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task));


@@ -23,10 +23,10 @@ struct fregs_offset {
int offset;
};
#define FREGS_OFFSET(n, field) \
{ \
.name = n, \
.offset = offsetof(struct ftrace_regs, field), \
#define FREGS_OFFSET(n, field) \
{ \
.name = n, \
.offset = offsetof(struct __arch_ftrace_regs, field), \
}
static const struct fregs_offset fregs_offsets[] = {
@@ -481,7 +481,7 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
prepare_ftrace_return(ip, &fregs->lr, fregs->fp);
prepare_ftrace_return(ip, &arch_ftrace_regs(fregs)->lr, arch_ftrace_regs(fregs)->fp);
}
#else
/*


@@ -44,40 +44,19 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent);
#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
struct ftrace_ops;
struct ftrace_regs {
struct pt_regs regs;
};
#include <linux/ftrace_regs.h>
static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
return &fregs->regs;
}
static __always_inline unsigned long
ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs)
{
return instruction_pointer(&fregs->regs);
return &arch_ftrace_regs(fregs)->regs;
}
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip)
{
instruction_pointer_set(&fregs->regs, ip);
instruction_pointer_set(&arch_ftrace_regs(fregs)->regs, ip);
}
#define ftrace_regs_get_argument(fregs, n) \
regs_get_kernel_argument(&(fregs)->regs, n)
#define ftrace_regs_get_stack_pointer(fregs) \
kernel_stack_pointer(&(fregs)->regs)
#define ftrace_regs_return_value(fregs) \
regs_return_value(&(fregs)->regs)
#define ftrace_regs_set_return_value(fregs, ret) \
regs_set_return_value(&(fregs)->regs, ret)
#define ftrace_override_function_with_return(fregs) \
override_function_with_return(&(fregs)->regs)
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
#define ftrace_graph_func ftrace_graph_func
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs);
@@ -90,7 +69,7 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
}
#define arch_ftrace_set_direct_caller(fregs, addr) \
__arch_ftrace_set_direct_caller(&(fregs)->regs, addr)
__arch_ftrace_set_direct_caller(&arch_ftrace_regs(fregs)->regs, addr)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
#endif


@@ -241,7 +241,7 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent)
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
struct pt_regs *regs = &fregs->regs;
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
unsigned long *parent = (unsigned long *)&regs->regs[1];
prepare_ftrace_return(ip, (unsigned long *)parent);


@@ -32,42 +32,21 @@ struct dyn_arch_ftrace {
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#define ftrace_init_nop ftrace_init_nop
struct ftrace_regs {
struct pt_regs regs;
};
#include <linux/ftrace_regs.h>
static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
/* We clear regs.msr in ftrace_call */
return fregs->regs.msr ? &fregs->regs : NULL;
return arch_ftrace_regs(fregs)->regs.msr ? &arch_ftrace_regs(fregs)->regs : NULL;
}
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long ip)
{
regs_set_return_ip(&fregs->regs, ip);
regs_set_return_ip(&arch_ftrace_regs(fregs)->regs, ip);
}
static __always_inline unsigned long
ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs)
{
return instruction_pointer(&fregs->regs);
}
#define ftrace_regs_get_argument(fregs, n) \
regs_get_kernel_argument(&(fregs)->regs, n)
#define ftrace_regs_get_stack_pointer(fregs) \
kernel_stack_pointer(&(fregs)->regs)
#define ftrace_regs_return_value(fregs) \
regs_return_value(&(fregs)->regs)
#define ftrace_regs_set_return_value(fregs, ret) \
regs_set_return_value(&(fregs)->regs, ret)
#define ftrace_override_function_with_return(fregs) \
override_function_with_return(&(fregs)->regs)
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
struct ftrace_ops;
#define ftrace_graph_func ftrace_graph_func


@@ -421,7 +421,7 @@ int __init ftrace_dyn_arch_init(void)
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
unsigned long sp = fregs->regs.gpr[1];
unsigned long sp = arch_ftrace_regs(fregs)->regs.gpr[1];
int bit;
if (unlikely(ftrace_graph_is_dead()))
@@ -439,6 +439,6 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
ftrace_test_recursion_unlock(bit);
out:
fregs->regs.link = parent_ip;
arch_ftrace_regs(fregs)->regs.link = parent_ip;
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */


@@ -829,7 +829,7 @@ out:
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]);
arch_ftrace_regs(fregs)->regs.link = __prepare_ftrace_return(parent_ip, ip, arch_ftrace_regs(fregs)->regs.gpr[1]);
}
#else
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,


@@ -125,8 +125,12 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
#define arch_ftrace_get_regs(regs) NULL
#define HAVE_ARCH_FTRACE_REGS
struct ftrace_ops;
struct ftrace_regs {
struct ftrace_regs;
#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))
struct __arch_ftrace_regs {
unsigned long epc;
unsigned long ra;
unsigned long sp;
@@ -150,42 +154,42 @@ struct ftrace_regs {
static __always_inline unsigned long ftrace_regs_get_instruction_pointer(const struct ftrace_regs
*fregs)
{
return fregs->epc;
return arch_ftrace_regs(fregs)->epc;
}
static __always_inline void ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long pc)
{
fregs->epc = pc;
arch_ftrace_regs(fregs)->epc = pc;
}
static __always_inline unsigned long ftrace_regs_get_stack_pointer(const struct ftrace_regs *fregs)
{
return fregs->sp;
return arch_ftrace_regs(fregs)->sp;
}
static __always_inline unsigned long ftrace_regs_get_argument(struct ftrace_regs *fregs,
unsigned int n)
{
if (n < 8)
return fregs->args[n];
return arch_ftrace_regs(fregs)->args[n];
return 0;
}
static __always_inline unsigned long ftrace_regs_get_return_value(const struct ftrace_regs *fregs)
{
return fregs->a0;
return arch_ftrace_regs(fregs)->a0;
}
static __always_inline void ftrace_regs_set_return_value(struct ftrace_regs *fregs,
unsigned long ret)
{
fregs->a0 = ret;
arch_ftrace_regs(fregs)->a0 = ret;
}
static __always_inline void ftrace_override_function_with_return(struct ftrace_regs *fregs)
{
fregs->epc = fregs->ra;
arch_ftrace_regs(fregs)->epc = arch_ftrace_regs(fregs)->ra;
}
int ftrace_regs_query_register_offset(const char *name);
@@ -196,7 +200,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr)
{
fregs->t1 = addr;
arch_ftrace_regs(fregs)->t1 = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */


@@ -496,19 +496,19 @@ void asm_offsets(void)
OFFSET(STACKFRAME_RA, stackframe, ra);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
DEFINE(FREGS_SIZE_ON_STACK, ALIGN(sizeof(struct ftrace_regs), STACK_ALIGN));
DEFINE(FREGS_EPC, offsetof(struct ftrace_regs, epc));
DEFINE(FREGS_RA, offsetof(struct ftrace_regs, ra));
DEFINE(FREGS_SP, offsetof(struct ftrace_regs, sp));
DEFINE(FREGS_S0, offsetof(struct ftrace_regs, s0));
DEFINE(FREGS_T1, offsetof(struct ftrace_regs, t1));
DEFINE(FREGS_A0, offsetof(struct ftrace_regs, a0));
DEFINE(FREGS_A1, offsetof(struct ftrace_regs, a1));
DEFINE(FREGS_A2, offsetof(struct ftrace_regs, a2));
DEFINE(FREGS_A3, offsetof(struct ftrace_regs, a3));
DEFINE(FREGS_A4, offsetof(struct ftrace_regs, a4));
DEFINE(FREGS_A5, offsetof(struct ftrace_regs, a5));
DEFINE(FREGS_A6, offsetof(struct ftrace_regs, a6));
DEFINE(FREGS_A7, offsetof(struct ftrace_regs, a7));
DEFINE(FREGS_SIZE_ON_STACK, ALIGN(sizeof(struct __arch_ftrace_regs), STACK_ALIGN));
DEFINE(FREGS_EPC, offsetof(struct __arch_ftrace_regs, epc));
DEFINE(FREGS_RA, offsetof(struct __arch_ftrace_regs, ra));
DEFINE(FREGS_SP, offsetof(struct __arch_ftrace_regs, sp));
DEFINE(FREGS_S0, offsetof(struct __arch_ftrace_regs, s0));
DEFINE(FREGS_T1, offsetof(struct __arch_ftrace_regs, t1));
DEFINE(FREGS_A0, offsetof(struct __arch_ftrace_regs, a0));
DEFINE(FREGS_A1, offsetof(struct __arch_ftrace_regs, a1));
DEFINE(FREGS_A2, offsetof(struct __arch_ftrace_regs, a2));
DEFINE(FREGS_A3, offsetof(struct __arch_ftrace_regs, a3));
DEFINE(FREGS_A4, offsetof(struct __arch_ftrace_regs, a4));
DEFINE(FREGS_A5, offsetof(struct __arch_ftrace_regs, a5));
DEFINE(FREGS_A6, offsetof(struct __arch_ftrace_regs, a6));
DEFINE(FREGS_A7, offsetof(struct __arch_ftrace_regs, a7));
#endif
}


@@ -214,7 +214,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
prepare_ftrace_return(&fregs->ra, ip, fregs->s0);
prepare_ftrace_return(&arch_ftrace_regs(fregs)->ra, ip, arch_ftrace_regs(fregs)->s0);
}
#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */
extern void ftrace_graph_call(void);


@@ -51,13 +51,11 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
return addr;
}
struct ftrace_regs {
struct pt_regs regs;
};
#include <linux/ftrace_regs.h>
static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
struct pt_regs *regs = &fregs->regs;
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
if (test_pt_regs_flag(regs, PIF_FTRACE_FULL_REGS))
return regs;
@@ -81,32 +79,13 @@ static __always_inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
static __always_inline unsigned long
ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs)
{
return fregs->regs.psw.addr;
}
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long ip)
{
fregs->regs.psw.addr = ip;
arch_ftrace_regs(fregs)->regs.psw.addr = ip;
}
#define ftrace_regs_get_argument(fregs, n) \
regs_get_kernel_argument(&(fregs)->regs, n)
#define ftrace_regs_get_stack_pointer(fregs) \
kernel_stack_pointer(&(fregs)->regs)
#define ftrace_regs_return_value(fregs) \
regs_return_value(&(fregs)->regs)
#define ftrace_regs_set_return_value(fregs, ret) \
regs_set_return_value(&(fregs)->regs, ret)
#define ftrace_override_function_with_return(fregs) \
override_function_with_return(&(fregs)->regs)
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/*
* When an ftrace registered caller is tracing a function that is
@@ -117,7 +96,7 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
*/
static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr)
{
struct pt_regs *regs = &fregs->regs;
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
regs->orig_gpr2 = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */


@@ -181,8 +181,8 @@ int main(void)
OFFSET(__FGRAPH_RET_FP, fgraph_ret_regs, fp);
DEFINE(__FGRAPH_RET_SIZE, sizeof(struct fgraph_ret_regs));
#endif
OFFSET(__FTRACE_REGS_PT_REGS, ftrace_regs, regs);
DEFINE(__FTRACE_REGS_SIZE, sizeof(struct ftrace_regs));
OFFSET(__FTRACE_REGS_PT_REGS, __arch_ftrace_regs, regs);
DEFINE(__FTRACE_REGS_SIZE, sizeof(struct __arch_ftrace_regs));
OFFSET(__PCPU_FLAGS, pcpu, flags);
return 0;


@@ -318,7 +318,7 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
if (bit < 0)
return;
kmsan_unpoison_memory(fregs, sizeof(*fregs));
kmsan_unpoison_memory(fregs, ftrace_regs_size());
regs = ftrace_get_regs(fregs);
p = get_kprobe((kprobe_opcode_t *)ip);
if (!regs || unlikely(!p) || kprobe_disabled(p))


@@ -270,9 +270,9 @@ static void notrace __used test_unwind_ftrace_handler(unsigned long ip,
struct ftrace_ops *fops,
struct ftrace_regs *fregs)
{
struct unwindme *u = (struct unwindme *)fregs->regs.gprs[2];
struct unwindme *u = (struct unwindme *)arch_ftrace_regs(fregs)->regs.gprs[2];
u->ret = test_unwind(NULL, (u->flags & UWM_REGS) ? &fregs->regs : NULL,
u->ret = test_unwind(NULL, (u->flags & UWM_REGS) ? &arch_ftrace_regs(fregs)->regs : NULL,
(u->flags & UWM_SP) ? u->sp : 0);
}


@@ -35,37 +35,21 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
}
#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
struct ftrace_regs {
struct pt_regs regs;
};
#include <linux/ftrace_regs.h>
static __always_inline struct pt_regs *
arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
/* Only when FL_SAVE_REGS is set, cs will be non zero */
if (!fregs->regs.cs)
if (!arch_ftrace_regs(fregs)->regs.cs)
return NULL;
return &fregs->regs;
return &arch_ftrace_regs(fregs)->regs;
}
#define ftrace_regs_set_instruction_pointer(fregs, _ip) \
do { (fregs)->regs.ip = (_ip); } while (0)
do { arch_ftrace_regs(fregs)->regs.ip = (_ip); } while (0)
#define ftrace_regs_get_instruction_pointer(fregs) \
((fregs)->regs.ip)
#define ftrace_regs_get_argument(fregs, n) \
regs_get_kernel_argument(&(fregs)->regs, n)
#define ftrace_regs_get_stack_pointer(fregs) \
kernel_stack_pointer(&(fregs)->regs)
#define ftrace_regs_return_value(fregs) \
regs_return_value(&(fregs)->regs)
#define ftrace_regs_set_return_value(fregs, ret) \
regs_set_return_value(&(fregs)->regs, ret)
#define ftrace_override_function_with_return(fregs) \
override_function_with_return(&(fregs)->regs)
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
struct ftrace_ops;
#define ftrace_graph_func ftrace_graph_func
@@ -90,7 +74,7 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
regs->orig_ax = addr;
}
#define arch_ftrace_set_direct_caller(fregs, addr) \
__arch_ftrace_set_direct_caller(&(fregs)->regs, addr)
__arch_ftrace_set_direct_caller(&arch_ftrace_regs(fregs)->regs, addr)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
#ifdef CONFIG_DYNAMIC_FTRACE


@@ -647,7 +647,7 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
struct pt_regs *regs = &fregs->regs;
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs);
prepare_ftrace_return(ip, (unsigned long *)stack, 0);


@@ -113,14 +113,54 @@ static inline int ftrace_mod_get_kallsym(unsigned int symnum, unsigned long *val
#ifdef CONFIG_FUNCTION_TRACER
#include <linux/ftrace_regs.h>
extern int ftrace_enabled;
#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
/**
* ftrace_regs - ftrace partial/optimal register set
*
* ftrace_regs represents a group of registers which is used at the
* function entry and exit. There are three types of registers.
*
* - Registers for passing the parameters to callee, including the stack
* pointer. (e.g. rcx, rdx, rdi, rsi, r8, r9 and rsp on x86_64)
* - Registers for passing the return values to caller.
* (e.g. rax and rdx on x86_64)
* - Registers for hooking the function call and return including the
* frame pointer (the frame pointer is architecture/config dependent)
* (e.g. rip, rbp and rsp for x86_64)
*
* Also, architecture dependent fields can be used for internal process.
* (e.g. orig_ax on x86_64)
*
* On the function entry, those registers will be restored except for
* the stack pointer, so that user can change the function parameters
* and instruction pointer (e.g. live patching.)
* On the function exit, only registers which is used for return values
* are restored.
*
* NOTE: user *must not* access regs directly, only do it via APIs, because
* the member can be changed according to the architecture.
* This is why the structure is empty here, so that nothing accesses
* the ftrace_regs directly.
*/
struct ftrace_regs {
struct pt_regs regs;
/* Nothing to see here, use the accessor functions! */
};
#define arch_ftrace_get_regs(fregs) (&(fregs)->regs)
#define ftrace_regs_size() sizeof(struct __arch_ftrace_regs)
#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
/*
* Architectures that define HAVE_DYNAMIC_FTRACE_WITH_ARGS must define their own
* arch_ftrace_get_regs() where it only returns pt_regs *if* it is fully
* populated. It should return NULL otherwise.
*/
static inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
return &arch_ftrace_regs(fregs)->regs;
}
/*
* ftrace_regs_set_instruction_pointer() is to be defined by the architecture
@@ -150,23 +190,6 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs)
return ftrace_get_regs(fregs) != NULL;
}
#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
#define ftrace_regs_get_instruction_pointer(fregs) \
instruction_pointer(ftrace_get_regs(fregs))
#define ftrace_regs_get_argument(fregs, n) \
regs_get_kernel_argument(ftrace_get_regs(fregs), n)
#define ftrace_regs_get_stack_pointer(fregs) \
kernel_stack_pointer(ftrace_get_regs(fregs))
#define ftrace_regs_return_value(fregs) \
regs_return_value(ftrace_get_regs(fregs))
#define ftrace_regs_set_return_value(fregs, ret) \
regs_set_return_value(ftrace_get_regs(fregs), ret)
#define ftrace_override_function_with_return(fregs) \
override_function_with_return(ftrace_get_regs(fregs))
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
#endif
typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs);
@@ -1014,6 +1037,17 @@ struct ftrace_graph_ent {
int depth;
} __packed;
/*
* Structure that defines an entry function trace with retaddr.
* It's already packed but the attribute "packed" is needed
* to remove extra padding at the end.
*/
struct fgraph_retaddr_ent {
unsigned long func; /* Current function */
int depth;
unsigned long retaddr; /* Return address */
} __packed;
/*
* Structure that defines a return function trace.
* It's already packed but the attribute "packed" is needed
@@ -1039,7 +1073,8 @@ typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *,
typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *,
struct fgraph_ops *); /* entry */
extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops);
extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops);
bool ftrace_pids_enabled(struct ftrace_ops *ops);
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -1055,6 +1090,7 @@ struct fgraph_ops {
void *fgraph_reserve_data(int idx, int size_bytes);
void *fgraph_retrieve_data(int idx, int *size_bytes);
void *fgraph_retrieve_parent_data(int idx, int *size_bytes, int depth);
/*
* Stack of return addresses for functions
@@ -1064,10 +1100,6 @@ void *fgraph_retrieve_data(int idx, int *size_bytes);
struct ftrace_ret_stack {
unsigned long ret;
unsigned long func;
unsigned long long calltime;
#ifdef CONFIG_FUNCTION_PROFILER
unsigned long long subtime;
#endif
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
unsigned long fp;
#endif
@@ -1087,6 +1119,7 @@ function_graph_enter(unsigned long ret, unsigned long func,
struct ftrace_ret_stack *
ftrace_graph_get_ret_stack(struct task_struct *task, int skip);
unsigned long ftrace_graph_top_ret_addr(struct task_struct *task);
unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
unsigned long ret, unsigned long *retp);


@@ -0,0 +1,36 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_FTRACE_REGS_H
#define _LINUX_FTRACE_REGS_H
/*
* For archs that just copy pt_regs in ftrace regs, it can use this default.
* If an architecture does not use pt_regs, it must define all the below
* accessor functions.
*/
#ifndef HAVE_ARCH_FTRACE_REGS
struct __arch_ftrace_regs {
struct pt_regs regs;
};
#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))
struct ftrace_regs;
#define ftrace_regs_get_instruction_pointer(fregs) \
instruction_pointer(&arch_ftrace_regs(fregs)->regs)
#define ftrace_regs_get_argument(fregs, n) \
regs_get_kernel_argument(&arch_ftrace_regs(fregs)->regs, n)
#define ftrace_regs_get_stack_pointer(fregs) \
kernel_stack_pointer(&arch_ftrace_regs(fregs)->regs)
#define ftrace_regs_get_return_value(fregs) \
regs_return_value(&arch_ftrace_regs(fregs)->regs)
#define ftrace_regs_set_return_value(fregs, ret) \
regs_set_return_value(&arch_ftrace_regs(fregs)->regs, ret)
#define ftrace_override_function_with_return(fregs) \
override_function_with_return(&arch_ftrace_regs(fregs)->regs)
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
#endif /* HAVE_ARCH_FTRACE_REGS */
#endif /* _LINUX_FTRACE_REGS_H */


@@ -1441,6 +1441,7 @@ struct task_struct {
/* Timestamp for last schedule: */
unsigned long long ftrace_timestamp;
unsigned long long ftrace_sleeptime;
/*
* Number of functions that haven't been traced


@@ -242,6 +242,16 @@ config FUNCTION_GRAPH_RETVAL
enable it via the trace option funcgraph-retval.
See Documentation/trace/ftrace.rst
config FUNCTION_GRAPH_RETADDR
bool "Kernel Function Graph Return Address"
depends on FUNCTION_GRAPH_TRACER
default n
help
Support recording and printing the function return address when
using the function graph tracer. It can be helpful to locate the code
line from which the function is called. This feature is off by default,
and you can enable it via the trace option funcgraph-retaddr.
config DYNAMIC_FTRACE
bool "enable/disable function tracing dynamically"
depends on FUNCTION_TRACER


@@ -153,7 +153,7 @@ enum {
* SHADOW_STACK_OFFSET: The size in long words of the shadow stack
* SHADOW_STACK_MAX_OFFSET: The max offset of the stack for a new frame to be added
*/
#define SHADOW_STACK_SIZE (PAGE_SIZE)
#define SHADOW_STACK_SIZE (4096)
#define SHADOW_STACK_OFFSET (SHADOW_STACK_SIZE / sizeof(long))
/* Leave on a buffer at the end */
#define SHADOW_STACK_MAX_OFFSET \
@@ -172,6 +172,8 @@ enum {
DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
int ftrace_graph_active;
static struct kmem_cache *fgraph_stack_cachep;
static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];
static unsigned long fgraph_array_bitmask;
@@ -390,21 +392,7 @@ void *fgraph_reserve_data(int idx, int size_bytes)
*/
void *fgraph_retrieve_data(int idx, int *size_bytes)
{
int offset = current->curr_ret_stack - 1;
unsigned long val;
val = get_fgraph_entry(current, offset);
while (__get_type(val) == FGRAPH_TYPE_DATA) {
if (__get_data_index(val) == idx)
goto found;
offset -= __get_data_size(val) + 1;
val = get_fgraph_entry(current, offset);
}
return NULL;
found:
if (size_bytes)
*size_bytes = __get_data_size(val) * sizeof(long);
return get_data_type_data(current, offset);
return fgraph_retrieve_parent_data(idx, size_bytes, 0);
}
/**
@@ -460,8 +448,56 @@ get_ret_stack(struct task_struct *t, int offset, int *frame_offset)
return RET_STACK(t, offset);
}
/**
* fgraph_retrieve_parent_data - get data from a parent function
* @idx: The index into the fgraph_array (fgraph_ops::idx)
* @size_bytes: A pointer to retrieved data size
* @depth: The depth to find the parent (0 is the current function)
*
* This is similar to fgraph_retrieve_data() but can be used to retrieve
* data from a parent caller function.
*
* Return: a pointer to the specified parent data or NULL if not found
*/
void *fgraph_retrieve_parent_data(int idx, int *size_bytes, int depth)
{
struct ftrace_ret_stack *ret_stack = NULL;
int offset = current->curr_ret_stack;
unsigned long val;
if (offset <= 0)
return NULL;
for (;;) {
int next_offset;
ret_stack = get_ret_stack(current, offset, &next_offset);
if (!ret_stack || --depth < 0)
break;
offset = next_offset;
}
if (!ret_stack)
return NULL;
offset--;
val = get_fgraph_entry(current, offset);
while (__get_type(val) == FGRAPH_TYPE_DATA) {
if (__get_data_index(val) == idx)
goto found;
offset -= __get_data_size(val) + 1;
val = get_fgraph_entry(current, offset);
}
return NULL;
found:
if (size_bytes)
*size_bytes = __get_data_size(val) * sizeof(long);
return get_data_type_data(current, offset);
}
/* Both enabled by default (can be cleared by function_graph tracer flags) */
static bool fgraph_sleep_time = true;
bool fgraph_sleep_time = true;
#ifdef CONFIG_DYNAMIC_FTRACE
/*
@@ -524,7 +560,6 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
int fgraph_idx)
{
struct ftrace_ret_stack *ret_stack;
unsigned long long calltime;
unsigned long val;
int offset;
@@ -554,8 +589,6 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
return -EBUSY;
}
calltime = trace_clock_local();
offset = READ_ONCE(current->curr_ret_stack);
ret_stack = RET_STACK(current, offset);
offset += FGRAPH_FRAME_OFFSET;
@@ -589,7 +622,6 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
ret_stack->ret = ret;
ret_stack->func = func;
ret_stack->calltime = calltime;
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
ret_stack->fp = frame_pointer;
#endif
@@ -723,7 +755,6 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret,
*offset += FGRAPH_FRAME_OFFSET;
*ret = ret_stack->ret;
trace->func = ret_stack->func;
trace->calltime = ret_stack->calltime;
trace->overrun = atomic_read(&current->trace_overrun);
trace->depth = current->curr_ret_depth;
/*
@@ -867,6 +898,29 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx)
return ret_stack;
}
/**
* ftrace_graph_top_ret_addr - return the top return address in the shadow stack
* @task: The task to read the shadow stack from.
*
* Return the first return address on the shadow stack of @task that is
* not the fgraph return_to_handler.
*/
unsigned long ftrace_graph_top_ret_addr(struct task_struct *task)
{
unsigned long return_handler = (unsigned long)dereference_kernel_function_descriptor(return_to_handler);
struct ftrace_ret_stack *ret_stack = NULL;
int offset = task->curr_ret_stack;
if (offset < 0)
return 0;
do {
ret_stack = get_ret_stack(task, offset, &offset);
} while (ret_stack && ret_stack->ret == return_handler);
return ret_stack ? ret_stack->ret : 0;
}
/**
* ftrace_graph_ret_addr - return the original value of the return address
* @task: The task the unwinder is being executed on
@@ -892,7 +946,7 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
{
struct ftrace_ret_stack *ret_stack;
unsigned long return_handler = (unsigned long)dereference_kernel_function_descriptor(return_to_handler);
int i = task->curr_ret_stack;
int i;
if (ret != return_handler)
return ret;
@@ -970,8 +1024,11 @@ static int alloc_retstack_tasklist(unsigned long **ret_stack_list)
int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;
struct task_struct *g, *t;
if (WARN_ON_ONCE(!fgraph_stack_cachep))
return -ENOMEM;
for (i = 0; i < FTRACE_RETSTACK_ALLOC_SIZE; i++) {
ret_stack_list[i] = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
ret_stack_list[i] = kmem_cache_alloc(fgraph_stack_cachep, GFP_KERNEL);
if (!ret_stack_list[i]) {
start = 0;
end = i;
@@ -1002,7 +1059,7 @@ unlock:
rcu_read_unlock();
free:
for (i = start; i < end; i++)
kfree(ret_stack_list[i]);
kmem_cache_free(fgraph_stack_cachep, ret_stack_list[i]);
return ret;
}
@@ -1012,9 +1069,7 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
struct task_struct *next,
unsigned int prev_state)
{
struct ftrace_ret_stack *ret_stack;
unsigned long long timestamp;
int offset;
/*
* Does the user want to count the time a function was asleep.
@@ -1031,17 +1086,7 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
if (!next->ftrace_timestamp)
return;
/*
* Update all the counters in next to make up for the
* time next was sleeping.
*/
timestamp -= next->ftrace_timestamp;
for (offset = next->curr_ret_stack; offset > 0; ) {
ret_stack = get_ret_stack(next, offset, &offset);
if (ret_stack)
ret_stack->calltime += timestamp;
}
next->ftrace_sleeptime += timestamp - next->ftrace_timestamp;
}
static DEFINE_PER_CPU(unsigned long *, idle_ret_stack);
@@ -1077,9 +1122,12 @@ void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
if (ftrace_graph_active) {
unsigned long *ret_stack;
if (WARN_ON_ONCE(!fgraph_stack_cachep))
return;
ret_stack = per_cpu(idle_ret_stack, cpu);
if (!ret_stack) {
ret_stack = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
ret_stack = kmem_cache_alloc(fgraph_stack_cachep, GFP_KERNEL);
if (!ret_stack)
return;
per_cpu(idle_ret_stack, cpu) = ret_stack;
@@ -1099,7 +1147,10 @@ void ftrace_graph_init_task(struct task_struct *t)
if (ftrace_graph_active) {
unsigned long *ret_stack;
ret_stack = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
if (WARN_ON_ONCE(!fgraph_stack_cachep))
return;
ret_stack = kmem_cache_alloc(fgraph_stack_cachep, GFP_KERNEL);
if (!ret_stack)
return;
graph_init_task(t, ret_stack);
@@ -1114,7 +1165,11 @@ void ftrace_graph_exit_task(struct task_struct *t)
/* NULL must become visible to IRQs before we free it: */
barrier();
kfree(ret_stack);
if (ret_stack) {
if (WARN_ON_ONCE(!fgraph_stack_cachep))
return;
kmem_cache_free(fgraph_stack_cachep, ret_stack);
}
}
#ifdef CONFIG_DYNAMIC_FTRACE
@@ -1254,6 +1309,14 @@ int register_ftrace_graph(struct fgraph_ops *gops)
guard(mutex)(&ftrace_lock);
if (!fgraph_stack_cachep) {
fgraph_stack_cachep = kmem_cache_create("fgraph_stack",
SHADOW_STACK_SIZE,
SHADOW_STACK_SIZE, 0, NULL);
if (!fgraph_stack_cachep)
return -ENOMEM;
}
if (!fgraph_initialized) {
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph:online",
fgraph_cpu_init, NULL);
@@ -1318,17 +1381,17 @@ void unregister_ftrace_graph(struct fgraph_ops *gops)
{
int command = 0;
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
if (unlikely(!ftrace_graph_active))
goto out;
return;
if (unlikely(gops->idx < 0 || gops->idx >= FGRAPH_ARRAY_SIZE ||
fgraph_array[gops->idx] != gops))
goto out;
return;
if (fgraph_lru_release_index(gops->idx) < 0)
goto out;
return;
fgraph_array[gops->idx] = &fgraph_stub;
@@ -1350,7 +1413,5 @@ void unregister_ftrace_graph(struct fgraph_ops *gops)
unregister_pm_notifier(&ftrace_suspend_notifier);
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
}
out:
gops->saved_func = NULL;
mutex_unlock(&ftrace_lock);
}


@@ -820,10 +820,16 @@ void ftrace_graph_graph_time_control(bool enable)
fgraph_graph_time = enable;
}
struct profile_fgraph_data {
unsigned long long calltime;
unsigned long long subtime;
unsigned long long sleeptime;
};
static int profile_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
struct ftrace_ret_stack *ret_stack;
struct profile_fgraph_data *profile_data;
function_profile_call(trace->func, 0, NULL, NULL);
@@ -831,9 +837,13 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace,
if (!current->ret_stack)
return 0;
ret_stack = ftrace_graph_get_ret_stack(current, 0);
if (ret_stack)
ret_stack->subtime = 0;
profile_data = fgraph_reserve_data(gops->idx, sizeof(*profile_data));
if (!profile_data)
return 0;
profile_data->subtime = 0;
profile_data->sleeptime = current->ftrace_sleeptime;
profile_data->calltime = trace_clock_local();
return 1;
}
@@ -841,33 +851,42 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace,
static void profile_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct ftrace_ret_stack *ret_stack;
struct profile_fgraph_data *profile_data;
struct ftrace_profile_stat *stat;
unsigned long long calltime;
unsigned long long rettime = trace_clock_local();
struct ftrace_profile *rec;
unsigned long flags;
int size;
local_irq_save(flags);
stat = this_cpu_ptr(&ftrace_profile_stats);
if (!stat->hash || !ftrace_profile_enabled)
goto out;
profile_data = fgraph_retrieve_data(gops->idx, &size);
/* If the calltime was zero'd ignore it */
if (!trace->calltime)
if (!profile_data || !profile_data->calltime)
goto out;
calltime = trace->rettime - trace->calltime;
calltime = rettime - profile_data->calltime;
if (!fgraph_sleep_time) {
if (current->ftrace_sleeptime)
calltime -= current->ftrace_sleeptime - profile_data->sleeptime;
}
if (!fgraph_graph_time) {
struct profile_fgraph_data *parent_data;
/* Append this call time to the parent time to subtract */
ret_stack = ftrace_graph_get_ret_stack(current, 1);
if (ret_stack)
ret_stack->subtime += calltime;
parent_data = fgraph_retrieve_parent_data(gops->idx, &size, 1);
if (parent_data)
parent_data->subtime += calltime;
ret_stack = ftrace_graph_get_ret_stack(current, 0);
if (ret_stack && ret_stack->subtime < calltime)
calltime -= ret_stack->subtime;
if (profile_data->subtime && profile_data->subtime < calltime)
calltime -= profile_data->subtime;
else
calltime = 0;
}
@@ -883,6 +902,10 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
}
static struct fgraph_ops fprofiler_ops = {
.ops = {
.flags = FTRACE_OPS_FL_INITIALIZED,
INIT_OPS_HASH(fprofiler_ops.ops)
},
.entryfunc = &profile_graph_entry,
.retfunc = &profile_graph_return,
};
@@ -3663,7 +3686,8 @@ static int ftrace_hash_move_and_update_subops(struct ftrace_ops *subops,
}
static u64 ftrace_update_time;
u64 ftrace_update_time;
u64 ftrace_total_mod_time;
unsigned long ftrace_update_tot_cnt;
unsigned long ftrace_number_of_pages;
unsigned long ftrace_number_of_groups;
@@ -3683,7 +3707,7 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
bool init_nop = ftrace_need_init_nop();
struct ftrace_page *pg;
struct dyn_ftrace *p;
u64 start, stop;
u64 start, stop, update_time;
unsigned long update_cnt = 0;
unsigned long rec_flags = 0;
int i;
@@ -3727,7 +3751,11 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
}
stop = ftrace_now(raw_smp_processor_id());
ftrace_update_time = stop - start;
update_time = stop - start;
if (mod)
ftrace_total_mod_time += update_time;
else
ftrace_update_time = update_time;
ftrace_update_tot_cnt += update_cnt;
return 0;
@@ -4806,15 +4834,13 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
mod_g.len = strlen(mod_g.search);
}
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
if (unlikely(ftrace_disabled))
goto out_unlock;
return 0;
if (func_g.type == MATCH_INDEX) {
found = add_rec_by_index(hash, &func_g, clear_filter);
goto out_unlock;
}
if (func_g.type == MATCH_INDEX)
return add_rec_by_index(hash, &func_g, clear_filter);
do_for_each_ftrace_rec(pg, rec) {
@@ -4823,16 +4849,12 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
if (ftrace_match_record(rec, &func_g, mod_match, exclude_mod)) {
ret = enter_record(hash, rec, clear_filter);
if (ret < 0) {
found = ret;
goto out_unlock;
}
if (ret < 0)
return ret;
found = 1;
}
cond_resched();
} while_for_each_ftrace_rec();
out_unlock:
mutex_unlock(&ftrace_lock);
return found;
}
@@ -4930,14 +4952,14 @@ static int cache_mod(struct trace_array *tr,
{
struct ftrace_mod_load *ftrace_mod, *n;
struct list_head *head = enable ? &tr->mod_trace : &tr->mod_notrace;
int ret;
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
/* We do not cache inverse filters */
if (func[0] == '!') {
int ret = -EINVAL;
func++;
ret = -EINVAL;
/* Look to remove this hash */
list_for_each_entry_safe(ftrace_mod, n, head, list) {
@@ -4953,20 +4975,15 @@ static int cache_mod(struct trace_array *tr,
continue;
}
}
goto out;
return ret;
}
ret = -EINVAL;
/* We only care about modules that have not been loaded yet */
if (module_exists(module))
goto out;
return -EINVAL;
/* Save this string off, and execute it when the module is loaded */
ret = ftrace_add_mod(tr, func, module, enable);
out:
mutex_unlock(&ftrace_lock);
return ret;
return ftrace_add_mod(tr, func, module, enable);
}
static int
@@ -5276,7 +5293,7 @@ static void release_probe(struct ftrace_func_probe *probe)
{
struct ftrace_probe_ops *probe_ops;
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
WARN_ON(probe->ref <= 0);
@@ -5294,7 +5311,6 @@ static void release_probe(struct ftrace_func_probe *probe)
list_del(&probe->list);
kfree(probe);
}
mutex_unlock(&ftrace_lock);
}
static void acquire_probe_locked(struct ftrace_func_probe *probe)
@@ -6805,12 +6821,10 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
func_g.len = strlen(func_g.search);
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
if (unlikely(ftrace_disabled)) {
mutex_unlock(&ftrace_lock);
if (unlikely(ftrace_disabled))
return -ENODEV;
}
do_for_each_ftrace_rec(pg, rec) {
@@ -6826,7 +6840,7 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
if (entry)
continue;
if (add_hash_entry(hash, rec->ip) == NULL)
goto out;
return 0;
} else {
if (entry) {
free_hash_entry(hash, entry);
@@ -6835,13 +6849,8 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
}
}
} while_for_each_ftrace_rec();
out:
mutex_unlock(&ftrace_lock);
if (fail)
return -EINVAL;
return 0;
return fail ? -EINVAL : 0;
}
static ssize_t
@@ -7920,7 +7929,7 @@ out:
void arch_ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
kmsan_unpoison_memory(fregs, sizeof(*fregs));
kmsan_unpoison_memory(fregs, ftrace_regs_size());
__ftrace_ops_list_func(ip, parent_ip, NULL, fregs);
}
#else


@@ -8587,15 +8587,22 @@ tracing_read_dyn_info(struct file *filp, char __user *ubuf,
char *buf;
int r;
/* 256 should be plenty to hold the amount needed */
buf = kmalloc(256, GFP_KERNEL);
/* 512 should be plenty to hold the amount needed */
#define DYN_INFO_BUF_SIZE 512
buf = kmalloc(DYN_INFO_BUF_SIZE, GFP_KERNEL);
if (!buf)
return -ENOMEM;
r = scnprintf(buf, 256, "%ld pages:%ld groups: %ld\n",
r = scnprintf(buf, DYN_INFO_BUF_SIZE,
"%ld pages:%ld groups: %ld\n"
"ftrace boot update time = %llu (ns)\n"
"ftrace module total update time = %llu (ns)\n",
ftrace_update_tot_cnt,
ftrace_number_of_pages,
ftrace_number_of_groups);
ftrace_number_of_groups,
ftrace_update_time,
ftrace_total_mod_time);
ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
kfree(buf);


@@ -46,6 +46,7 @@ enum trace_type {
TRACE_BRANCH,
TRACE_GRAPH_RET,
TRACE_GRAPH_ENT,
TRACE_GRAPH_RETADDR_ENT,
TRACE_USER_STACK,
TRACE_BLK,
TRACE_BPUTS,
@@ -512,6 +513,8 @@ extern void __ftrace_bad_type(void);
IF_ASSIGN(var, ent, struct trace_branch, TRACE_BRANCH); \
IF_ASSIGN(var, ent, struct ftrace_graph_ent_entry, \
TRACE_GRAPH_ENT); \
IF_ASSIGN(var, ent, struct fgraph_retaddr_ent_entry,\
TRACE_GRAPH_RETADDR_ENT); \
IF_ASSIGN(var, ent, struct ftrace_graph_ret_entry, \
TRACE_GRAPH_RET); \
IF_ASSIGN(var, ent, struct func_repeats_entry, \
@@ -772,6 +775,8 @@ extern void trace_event_follow_fork(struct trace_array *tr, bool enable);
extern unsigned long ftrace_update_tot_cnt;
extern unsigned long ftrace_number_of_pages;
extern unsigned long ftrace_number_of_groups;
extern u64 ftrace_update_time;
extern u64 ftrace_total_mod_time;
void ftrace_init_trace_array(struct trace_array *tr);
#else
static inline void ftrace_init_trace_array(struct trace_array *tr) { }
@@ -879,6 +884,7 @@ static __always_inline bool ftrace_hash_empty(struct ftrace_hash *hash)
#define TRACE_GRAPH_GRAPH_TIME 0x400
#define TRACE_GRAPH_PRINT_RETVAL 0x800
#define TRACE_GRAPH_PRINT_RETVAL_HEX 0x1000
#define TRACE_GRAPH_PRINT_RETADDR 0x2000
#define TRACE_GRAPH_PRINT_FILL_SHIFT 28
#define TRACE_GRAPH_PRINT_FILL_MASK (0x3 << TRACE_GRAPH_PRINT_FILL_SHIFT)
@@ -900,6 +906,10 @@ extern void graph_trace_close(struct trace_iterator *iter);
extern int __trace_graph_entry(struct trace_array *tr,
struct ftrace_graph_ent *trace,
unsigned int trace_ctx);
extern int __trace_graph_retaddr_entry(struct trace_array *tr,
struct ftrace_graph_ent *trace,
unsigned int trace_ctx,
unsigned long retaddr);
extern void __trace_graph_return(struct trace_array *tr,
struct ftrace_graph_ret *trace,
unsigned int trace_ctx);
@@ -1048,6 +1058,7 @@ static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftra
#endif /* CONFIG_DYNAMIC_FTRACE */
extern unsigned int fgraph_max_depth;
extern bool fgraph_sleep_time;
static inline bool
ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace)


@@ -85,9 +85,35 @@ FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry,
F_printk("--> %ps (%d)", (void *)__entry->func, __entry->depth)
);
/* Function return entry */
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
/* Function call entry with a return address */
FTRACE_ENTRY_PACKED(fgraph_retaddr_entry, fgraph_retaddr_ent_entry,
TRACE_GRAPH_RETADDR_ENT,
F_STRUCT(
__field_struct( struct fgraph_retaddr_ent, graph_ent )
__field_packed( unsigned long, graph_ent, func )
__field_packed( int, graph_ent, depth )
__field_packed( unsigned long, graph_ent, retaddr )
),
F_printk("--> %ps (%d) <- %ps", (void *)__entry->func, __entry->depth,
(void *)__entry->retaddr)
);
#else
#ifndef fgraph_retaddr_ent_entry
#define fgraph_retaddr_ent_entry ftrace_graph_ent_entry
#endif
#endif
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
/* Function return entry */
FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
TRACE_GRAPH_RET,
@@ -110,6 +136,7 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
#else
/* Function return entry */
FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
TRACE_GRAPH_RET,


@@ -31,7 +31,10 @@ struct fgraph_data {
struct fgraph_cpu_data __percpu *cpu_data;
/* Place to preserve last processed entry. */
struct ftrace_graph_ent_entry ent;
union {
struct ftrace_graph_ent_entry ent;
struct fgraph_retaddr_ent_entry rent;
} ent;
struct ftrace_graph_ret_entry ret;
int failed;
int cpu;
@@ -63,6 +66,10 @@ static struct tracer_opt trace_opts[] = {
{ TRACER_OPT(funcgraph-retval, TRACE_GRAPH_PRINT_RETVAL) },
/* Display function return value in hexadecimal format ? */
{ TRACER_OPT(funcgraph-retval-hex, TRACE_GRAPH_PRINT_RETVAL_HEX) },
#endif
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
/* Display function return address ? */
{ TRACER_OPT(funcgraph-retaddr, TRACE_GRAPH_PRINT_RETADDR) },
#endif
/* Include sleep time (scheduled out) between entry and return */
{ TRACER_OPT(sleep-time, TRACE_GRAPH_SLEEP_TIME) },
@@ -83,6 +90,11 @@ static struct tracer_flags tracer_flags = {
.opts = trace_opts
};
static bool tracer_flags_is_set(u32 flags)
{
return (tracer_flags.val & flags) == flags;
}
/*
* DURATION column is being also used to display IRQ signs,
* following values are used by print_graph_irq and others
@@ -119,6 +131,38 @@ int __trace_graph_entry(struct trace_array *tr,
return 1;
}
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
int __trace_graph_retaddr_entry(struct trace_array *tr,
struct ftrace_graph_ent *trace,
unsigned int trace_ctx,
unsigned long retaddr)
{
struct ring_buffer_event *event;
struct trace_buffer *buffer = tr->array_buffer.buffer;
struct fgraph_retaddr_ent_entry *entry;
event = trace_buffer_lock_reserve(buffer, TRACE_GRAPH_RETADDR_ENT,
sizeof(*entry), trace_ctx);
if (!event)
return 0;
entry = ring_buffer_event_data(event);
entry->graph_ent.func = trace->func;
entry->graph_ent.depth = trace->depth;
entry->graph_ent.retaddr = retaddr;
trace_buffer_unlock_commit_nostack(buffer, event);
return 1;
}
#else
int __trace_graph_retaddr_entry(struct trace_array *tr,
struct ftrace_graph_ent *trace,
unsigned int trace_ctx,
unsigned long retaddr)
{
return 1;
}
#endif
static inline int ftrace_graph_ignore_irqs(void)
{
if (!ftrace_graph_skip_irqs || trace_recursion_test(TRACE_IRQ_BIT))
@@ -127,12 +171,18 @@ static inline int ftrace_graph_ignore_irqs(void)
return in_hardirq();
}
struct fgraph_times {
unsigned long long calltime;
unsigned long long sleeptime; /* may be optional! */
};
int trace_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
unsigned long *task_var = fgraph_get_task_var(gops);
struct trace_array *tr = gops->private;
struct trace_array_cpu *data;
struct fgraph_times *ftimes;
unsigned long flags;
unsigned int trace_ctx;
long disabled;
@@ -167,6 +217,19 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
if (ftrace_graph_ignore_irqs())
return 0;
if (fgraph_sleep_time) {
/* Only need to record the calltime */
ftimes = fgraph_reserve_data(gops->idx, sizeof(ftimes->calltime));
} else {
ftimes = fgraph_reserve_data(gops->idx, sizeof(*ftimes));
if (ftimes)
ftimes->sleeptime = current->ftrace_sleeptime;
}
if (!ftimes)
return 0;
ftimes->calltime = trace_clock_local();
/*
* Stop here if tracing_threshold is set. We only write function return
* events to the ring buffer.
@@ -180,7 +243,13 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
disabled = atomic_inc_return(&data->disabled);
if (likely(disabled == 1)) {
trace_ctx = tracing_gen_ctx_flags(flags);
ret = __trace_graph_entry(tr, trace, trace_ctx);
if (unlikely(IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR))) {
unsigned long retaddr = ftrace_graph_top_ret_addr(current);
ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx, retaddr);
} else
ret = __trace_graph_entry(tr, trace, trace_ctx);
} else {
ret = 0;
}
@@ -238,15 +307,27 @@ void __trace_graph_return(struct trace_array *tr,
trace_buffer_unlock_commit_nostack(buffer, event);
}
static void handle_nosleeptime(struct ftrace_graph_ret *trace,
struct fgraph_times *ftimes,
int size)
{
if (fgraph_sleep_time || size < sizeof(*ftimes))
return;
ftimes->calltime += current->ftrace_sleeptime - ftimes->sleeptime;
}
void trace_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
unsigned long *task_var = fgraph_get_task_var(gops);
struct trace_array *tr = gops->private;
struct trace_array_cpu *data;
struct fgraph_times *ftimes;
unsigned long flags;
unsigned int trace_ctx;
long disabled;
int size;
int cpu;
ftrace_graph_addr_finish(gops, trace);
@@ -256,6 +337,14 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
return;
}
ftimes = fgraph_retrieve_data(gops->idx, &size);
if (!ftimes)
return;
handle_nosleeptime(trace, ftimes, size);
trace->calltime = ftimes->calltime;
local_irq_save(flags);
cpu = raw_smp_processor_id();
data = per_cpu_ptr(tr->array_buffer.data, cpu);
@@ -271,6 +360,9 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct fgraph_times *ftimes;
int size;
ftrace_graph_addr_finish(gops, trace);
if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
@@ -278,8 +370,16 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
return;
}
ftimes = fgraph_retrieve_data(gops->idx, &size);
if (!ftimes)
return;
handle_nosleeptime(trace, ftimes, size);
trace->calltime = ftimes->calltime;
if (tracing_thresh &&
(trace->rettime - trace->calltime < tracing_thresh))
(trace->rettime - ftimes->calltime < tracing_thresh))
return;
else
trace_graph_return(trace, gops);
@@ -457,7 +557,7 @@ get_return_for_leaf(struct trace_iterator *iter,
* then we just reuse the data from before.
*/
if (data && data->failed) {
curr = &data->ent;
curr = &data->ent.ent;
next = &data->ret;
} else {
@@ -487,7 +587,10 @@ get_return_for_leaf(struct trace_iterator *iter,
* Save current and next entries for later reference
* if the output fails.
*/
data->ent = *curr;
if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT))
data->ent.rent = *(struct fgraph_retaddr_ent_entry *)curr;
else
data->ent.ent = *curr;
/*
* If the next event is not a return type, then
* we only care about what type it is. Otherwise we can
@@ -651,52 +754,96 @@ print_graph_duration(struct trace_array *tr, unsigned long long duration,
}
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
#define __TRACE_GRAPH_PRINT_RETVAL TRACE_GRAPH_PRINT_RETVAL
#else
#define __TRACE_GRAPH_PRINT_RETVAL 0
#endif
static void print_graph_retval(struct trace_seq *s, unsigned long retval,
bool leaf, void *func, bool hex_format)
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
#define __TRACE_GRAPH_PRINT_RETADDR TRACE_GRAPH_PRINT_RETADDR
static void print_graph_retaddr(struct trace_seq *s, struct fgraph_retaddr_ent_entry *entry,
u32 trace_flags, bool comment)
{
if (comment)
trace_seq_puts(s, " /*");
trace_seq_puts(s, " <-");
seq_print_ip_sym(s, entry->graph_ent.retaddr, trace_flags | TRACE_ITER_SYM_OFFSET);
if (comment)
trace_seq_puts(s, " */");
}
#else
#define __TRACE_GRAPH_PRINT_RETADDR 0
#define print_graph_retaddr(_seq, _entry, _tflags, _comment) do { } while (0)
#endif
#if defined(CONFIG_FUNCTION_GRAPH_RETVAL) || defined(CONFIG_FUNCTION_GRAPH_RETADDR)
static void print_graph_retval(struct trace_seq *s, struct ftrace_graph_ent_entry *entry,
struct ftrace_graph_ret *graph_ret, void *func,
u32 opt_flags, u32 trace_flags)
{
unsigned long err_code = 0;
unsigned long retval = 0;
bool print_retaddr = false;
bool print_retval = false;
bool hex_format = !!(opt_flags & TRACE_GRAPH_PRINT_RETVAL_HEX);
if (retval == 0 || hex_format)
goto done;
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
retval = graph_ret->retval;
print_retval = !!(opt_flags & TRACE_GRAPH_PRINT_RETVAL);
#endif
/* Check if the return value matches the negative format */
if (IS_ENABLED(CONFIG_64BIT) && (retval & BIT(31)) &&
(((u64)retval) >> 32) == 0) {
/* sign extension */
err_code = (unsigned long)(s32)retval;
} else {
err_code = retval;
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
print_retaddr = !!(opt_flags & TRACE_GRAPH_PRINT_RETADDR);
#endif
if (print_retval && retval && !hex_format) {
/* Check if the return value matches the negative format */
if (IS_ENABLED(CONFIG_64BIT) && (retval & BIT(31)) &&
(((u64)retval) >> 32) == 0) {
err_code = sign_extend64(retval, 31);
} else {
err_code = retval;
}
if (!IS_ERR_VALUE(err_code))
err_code = 0;
}
if (!IS_ERR_VALUE(err_code))
err_code = 0;
if (entry) {
if (entry->ent.type != TRACE_GRAPH_RETADDR_ENT)
print_retaddr = false;
done:
if (leaf) {
if (hex_format || (err_code == 0))
trace_seq_printf(s, "%ps(); /* = 0x%lx */\n",
func, retval);
trace_seq_printf(s, "%ps();", func);
if (print_retval || print_retaddr)
trace_seq_puts(s, " /*");
else
trace_seq_printf(s, "%ps(); /* = %ld */\n",
func, err_code);
trace_seq_putc(s, '\n');
} else {
if (hex_format || (err_code == 0))
trace_seq_printf(s, "} /* %ps = 0x%lx */\n",
func, retval);
else
trace_seq_printf(s, "} /* %ps = %ld */\n",
func, err_code);
print_retaddr = false;
trace_seq_printf(s, "} /* %ps", func);
}
if (print_retaddr)
print_graph_retaddr(s, (struct fgraph_retaddr_ent_entry *)entry,
trace_flags, false);
if (print_retval) {
if (hex_format || (err_code == 0))
trace_seq_printf(s, " ret=0x%lx", retval);
else
trace_seq_printf(s, " ret=%ld", err_code);
}
if (!entry || print_retval || print_retaddr)
trace_seq_puts(s, " */\n");
}
#else
#define __TRACE_GRAPH_PRINT_RETVAL 0
#define print_graph_retval(_seq, _retval, _leaf, _func, _format) do {} while (0)
#define print_graph_retval(_seq, _ent, _ret, _func, _opt_flags, _trace_flags) do {} while (0)
#endif
@@ -748,14 +895,15 @@ print_graph_entry_leaf(struct trace_iterator *iter,
trace_seq_putc(s, ' ');
/*
* Write out the function return value if the option function-retval is
* enabled.
* Write out the function return value or return address
*/
if (flags & __TRACE_GRAPH_PRINT_RETVAL)
print_graph_retval(s, graph_ret->retval, true, (void *)func,
!!(flags & TRACE_GRAPH_PRINT_RETVAL_HEX));
else
if (flags & (__TRACE_GRAPH_PRINT_RETVAL | __TRACE_GRAPH_PRINT_RETADDR)) {
print_graph_retval(s, entry, graph_ret,
(void *)graph_ret->func + iter->tr->text_delta,
flags, tr->trace_flags);
} else {
trace_seq_printf(s, "%ps();\n", (void *)func);
}
print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET,
cpu, iter->ent->pid, flags);
@@ -796,7 +944,12 @@ print_graph_entry_nested(struct trace_iterator *iter,
func = call->func + iter->tr->text_delta;
trace_seq_printf(s, "%ps() {\n", (void *)func);
trace_seq_printf(s, "%ps() {", (void *)func);
if (flags & __TRACE_GRAPH_PRINT_RETADDR &&
entry->ent.type == TRACE_GRAPH_RETADDR_ENT)
print_graph_retaddr(s, (struct fgraph_retaddr_ent_entry *)entry,
tr->trace_flags, true);
trace_seq_putc(s, '\n');
if (trace_seq_has_overflowed(s))
return TRACE_TYPE_PARTIAL_LINE;
@@ -1043,11 +1196,10 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
/*
* Always write out the function name and its return value if the
* function-retval option is enabled.
* funcgraph-retval option is enabled.
*/
if (flags & __TRACE_GRAPH_PRINT_RETVAL) {
print_graph_retval(s, trace->retval, false, (void *)func,
!!(flags & TRACE_GRAPH_PRINT_RETVAL_HEX));
print_graph_retval(s, NULL, trace, (void *)func, flags, tr->trace_flags);
} else {
/*
* If the return function does not have a matching entry,
@@ -1162,7 +1314,7 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
* to print out the missing entry which would never go out.
*/
if (data && data->failed) {
field = &data->ent;
field = &data->ent.ent;
iter->cpu = data->cpu;
ret = print_graph_entry(field, s, iter, flags);
if (ret == TRACE_TYPE_HANDLED && iter->cpu != cpu) {
@@ -1186,6 +1338,16 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
 		saved = *field;
 		return print_graph_entry(&saved, s, iter, flags);
 	}
+#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
+	case TRACE_GRAPH_RETADDR_ENT: {
+		struct fgraph_retaddr_ent_entry saved;
+		struct fgraph_retaddr_ent_entry *rfield;
+
+		trace_assign_type(rfield, entry);
+		saved = *rfield;
+		return print_graph_entry((struct ftrace_graph_ent_entry *)&saved, s, iter, flags);
+	}
+#endif
 	case TRACE_GRAPH_RET: {
 		struct ftrace_graph_ret_entry *field;
 		trace_assign_type(field, entry);
@@ -1380,6 +1542,13 @@ static struct trace_event graph_trace_entry_event = {
 	.funcs		= &graph_functions,
 };
 
+#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
+static struct trace_event graph_trace_retaddr_entry_event = {
+	.type		= TRACE_GRAPH_RETADDR_ENT,
+	.funcs		= &graph_functions,
+};
+#endif
+
 static struct trace_event graph_trace_ret_event = {
 	.type		= TRACE_GRAPH_RET,
 	.funcs		= &graph_functions
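On kernels built with CONFIG_FUNCTION_GRAPH_RETADDR, the event registered above backs a new funcgraph-retaddr trace option. A hedged sketch of enabling it from a script (tracefs path and option name assumed from this series; writing to a real tracefs requires root):

```shell
# Enable the funcgraph-retaddr option if this kernel exposes it.
# The tracefs mount point is passed in, e.g. /sys/kernel/tracing.
enable_retaddr() {
    t="$1"
    if [ -e "$t/options/funcgraph-retaddr" ]; then
        # Option exists: select the function graph tracer and turn it on.
        echo function_graph > "$t/current_tracer"
        echo 1 > "$t/options/funcgraph-retaddr"
        echo enabled
    else
        # Kernel built without CONFIG_FUNCTION_GRAPH_RETADDR.
        echo unavailable
    fi
}
```

Probing for the option file first keeps the script portable across kernels that predate this series.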
@@ -1466,6 +1635,13 @@ static __init int init_graph_trace(void)
 		return 1;
 	}
 
+#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
+	if (!register_trace_event(&graph_trace_retaddr_entry_event)) {
+		pr_warn("Warning: could not register graph trace retaddr events\n");
+		return 1;
+	}
+#endif
+
 	if (!register_trace_event(&graph_trace_ret_event)) {
 		pr_warn("Warning: could not register graph trace events\n");
 		return 1;


@@ -17,6 +17,7 @@ static inline int trace_valid_entry(struct trace_entry *entry)
 	case TRACE_PRINT:
 	case TRACE_BRANCH:
 	case TRACE_GRAPH_ENT:
+	case TRACE_GRAPH_RETADDR_ENT:
 	case TRACE_GRAPH_RET:
 		return 1;
 	}


@@ -29,7 +29,7 @@ set -e
 
 : "Test printing the error code in signed decimal format"
 echo 0 > options/funcgraph-retval-hex
-count=`cat trace | grep 'proc_reg_write' | grep '= -5' | wc -l`
+count=`cat trace | grep 'proc_reg_write' | grep -e '=-5 ' -e '= -5 ' | wc -l`
 if [ $count -eq 0 ]; then
     fail "Return value can not be printed in signed decimal format"
 fi
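The relaxed pattern above accepts both spacings of the printed return value. A quick self-contained check against synthetic input (the sample trace lines are invented; only the grep pipeline comes from the selftest):

```shell
# Two made-up trace lines: one with "=-5 " and one with "= -5 ".
# The updated two-pattern grep must match both.
cat > /tmp/sample_trace <<'EOF'
 1)               |  proc_reg_write() { /* =-5 */
 1)               |  proc_reg_write(); /* = -5 */
EOF
count=`cat /tmp/sample_trace | grep 'proc_reg_write' | grep -e '=-5 ' -e '= -5 ' | wc -l`
echo "$count"
```

The old single pattern `'= -5'` would have missed the first line, so a kernel emitting the tighter spacing would fail the selftest spuriously.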