linux/arch
Steven Rostedt (VMware) adab66b71a Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"
It was believed that metag was the only architecture that required the ring
buffer to keep 8-byte words aligned on 8-byte boundaries, and with its
removal it was assumed that the ring buffer code no longer needed to handle
this case. It appears that sparc64 also requires this.

The following was reported on a sparc64 boot up:

   kernel: futex hash table entries: 65536 (order: 9, 4194304 bytes, linear)
   kernel: Running postponed tracer tests:
   kernel: Testing tracer function:
   kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
   kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140
   kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
   kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140
   kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
   kernel: PASSED

The 64-bit aligned code needs to be put back into the ring buffer; a sketch of what that restores is shown below, after the commit trailers.

Link: https://lore.kernel.org/r/CADxRZqzXQRYgKc=y-KV=S_yHL+Y8Ay2mh5ezeZUnpRvg+syWKw@mail.gmail.com

Cc: stable@vger.kernel.org
Fixes: 86b3de60a0 ("ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS")
Reported-by: Anatoly Pugachev <matorola@gmail.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-12-14 12:33:51 -05:00
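
For context, the code this revert puts back gates the ring buffer's event alignment on whether the architecture can do unaligned 64-bit accesses, selected by the HAVE_64BIT_ALIGNED_ACCESS symbol restored in the Kconfig entry listed below. The following is an approximate sketch modeled on the pre-86b3de60a0 kernel/trace/ring_buffer.c, not the verbatim restored patch; the macro names are taken from that earlier code and may differ in detail.

   /*
    * Approximate sketch of the alignment selection the revert restores
    * (modeled on the pre-86b3de60a0 ring_buffer.c, not the verbatim patch).
    */
   #define RB_ALIGNMENT		4U

   #ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
   /* Architectures that handle unaligned 64-bit accesses: pack events to 4 bytes. */
   # define RB_FORCE_8BYTE_ALIGNMENT	0
   # define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
   #else
   /*
    * sparc64 (and formerly metag) traps on unaligned 64-bit loads and stores,
    * so 8-byte words stored in the ring buffer must stay 8-byte aligned.
    */
   # define RB_FORCE_8BYTE_ALIGNMENT	1
   # define RB_ARCH_ALIGNMENT		8U
   #endif

With the symbol set, event payloads containing 64-bit values are padded so they never straddle an 8-byte boundary, which is what silences the "Kernel unaligned access" warnings seen in the sparc64 boot log above.
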
alpha arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
arc treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
arm ARM: SoC fixes for v5.10 2020-10-30 13:06:07 -07:00
arm64 ARM: 2020-11-01 09:43:32 -08:00
c6x arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
csky ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs 2020-11-13 12:14:55 -05:00
h8300 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
hexagon arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
ia64 treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
m68k arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
microblaze treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
mips treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
nds32 ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs 2020-11-13 12:14:55 -05:00
nios2 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
openrisc arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
parisc ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs 2020-11-13 12:14:55 -05:00
powerpc livepatch: Use the default ftrace_ops instead of REGS when ARGS is available 2020-11-13 12:15:28 -05:00
riscv treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
s390 livepatch: Use the default ftrace_ops instead of REGS when ARGS is available 2020-11-13 12:15:28 -05:00
sh treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
sparc treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
um arch/um: partially revert the conversion to __section() macro 2020-10-26 15:39:37 -07:00
x86 livepatch: Use the default ftrace_ops instead of REGS when ARGS is available 2020-11-13 12:15:28 -05:00
xtensa treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
.gitignore
Kconfig Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS" 2020-12-14 12:33:51 -05:00