linux/arch/x86/lib
Dan Williams ec6347bb43 x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()
In reaction to a proposal to introduce a memcpy_mcsafe_fast()
implementation, Linus points out that the memcpy_mcsafe() name poorly
communicates the scope of the interface: specifically, which addresses
are valid to pass as source and destination, and which faults /
exceptions are handled.

Of particular concern is that even though x86 might be able to handle
the semantics of copy_mc_to_user() with its common copy_user_generic()
implementation, other archs likely need / want an explicit path for
this case:

  On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
  >
  > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
  > >
  > > However now I see that copy_user_generic() works for the wrong reason.
  > > It works because the exception on the source address due to poison
  > > looks no different than a write fault on the user address to the
  > > caller, it's still just a short copy. So it makes copy_to_user() work
  > > for the wrong reason relative to the name.
  >
  > Right.
  >
  > And it won't work that way on other architectures. On x86, we have a
  > generic function that can take faults on either side, and we use it
  > for both cases (and for the "in_user" case too), but that's an
  > artifact of the architecture oddity.
  >
  > In fact, it's probably wrong even on x86 - because it can hide bugs -
  > but writing those things is painful enough that everybody prefers
  > having just one function.

Replace the single top-level memcpy_mcsafe() with either
copy_mc_to_user() or copy_mc_to_kernel().
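
For illustration only, a minimal caller-side sketch of how the renamed
entry points are meant to be consumed; the read_from_pmem() helper is
hypothetical, and it assumes the new names keep the copy_to_user()-style
contract of returning the number of bytes not copied:

	/*
	 * Hedged sketch, not part of this commit: a non-zero return
	 * signals either a poison (machine-check) fault on the source
	 * or a write fault on the destination, after a short copy.
	 */
	static int read_from_pmem(void *dst, const void *pmem_src, size_t len)
	{
		unsigned long rem;

		/* kernel-to-kernel copy that survives poison on the source */
		rem = copy_mc_to_kernel(dst, pmem_src, len);
		if (rem)
			return -EIO;	/* (len - rem) bytes arrived before the fault */
		return 0;
	}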

Introduce copy_mc_fragile() as the new name for the low-level x86
implementation formerly called memcpy_mcsafe(). It serves as the slow /
careful backend that a fast copy_mc_generic() supplants in a follow-on
patch.
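
A rough sketch of the resulting layering, assuming the top-level wrapper
gates the careful backend on some copy_mc_fragile_enabled condition (the
exact gating is not shown in this log) and falls back to a plain
memcpy() otherwise:

	/*
	 * Sketch only; the real arch/x86/lib/copy_mc.c may differ in detail.
	 * copy_mc_fragile() is the renamed, exception-aware slow path; when
	 * the platform does not require it, a plain memcpy() (or the faster
	 * copy_mc_generic() from the follow-on patch) is used instead.
	 */
	unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src,
						     unsigned int len)
	{
		if (copy_mc_fragile_enabled)	/* assumed gating condition */
			return copy_mc_fragile(dst, src, len);
		memcpy(dst, src, len);
		return 0;			/* nothing left uncopied */
	}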

One side-effect of this reorganization is that splitting copy_mc_64.S
out into its own file means perf no longer needs to track its
dependencies for the memcpy_64.S benchmarks.

 [ bp: Massage a bit. ]

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: <stable@vger.kernel.org>
Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
2020-10-06 11:18:04 +02:00
.gitignore .gitignore: add SPDX License Identifier 2020-03-25 11:50:48 +01:00
atomic64_32.c x86: Adjust asm constraints in atomic64 wrappers 2012-01-20 17:29:31 -08:00
atomic64_386_32.S x86/asm/32: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 12:03:43 +02:00
atomic64_cx8_32.S x86/asm/32: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 12:03:43 +02:00
cache-smp.c smp: Remove smp_call_function() and on_each_cpu() return values 2019-06-23 14:26:26 +02:00
checksum_32.S x86: Change {JMP,CALL}_NOSPEC argument 2020-04-30 20:14:34 +02:00
clear_page_64.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
cmdline.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 497 2019-06-19 17:09:53 +02:00
cmpxchg8b_emu.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
cmpxchg16b_emu.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
copy_mc_64.S x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() 2020-10-06 11:18:04 +02:00
copy_mc.c x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() 2020-10-06 11:18:04 +02:00
copy_page_64.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
copy_user_64.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
cpu.c x86/lib/cpu: Address missing prototypes warning 2019-08-08 08:25:53 +02:00
csum-copy_64.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
csum-partial_64.c x86/lib: Fix indentation issue, remove extra tab 2019-03-21 12:24:38 +01:00
csum-wrappers_64.c x86: switch both 32bit and 64bit to providing csum_and_copy_from_user() 2020-05-29 16:11:48 -04:00
delay.c x86/delay: Introduce TPAUSE delay 2020-05-07 16:06:20 +02:00
error-inject.c x86/asm: Mark all top level asm statements as .text 2019-04-19 17:46:55 +02:00
getuser.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
hweight.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
inat.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 2019-05-30 11:26:35 -07:00
insn-eval.c x86/insn-eval: Add support for 64-bit kernel mode 2019-12-30 20:17:15 +01:00
insn.c x86: xen: insn: Decode Xen and KVM emulate-prefix signature 2019-10-17 21:31:57 +02:00
iomap_copy_64.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
iomem.c x86: explicitly align IO accesses in memcpy_{to,from}io 2019-02-01 09:07:48 -08:00
kaslr.c x86/kaslr: Fix incorrect i8254 outb() parameters 2019-01-11 21:35:47 +01:00
Makefile x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() 2020-10-06 11:18:04 +02:00
memcpy_32.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
memcpy_64.S x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() 2020-10-06 11:18:04 +02:00
memmove_64.S x86/cpufeatures: Add support for fast short REP; MOVSB 2020-01-08 11:29:25 +01:00
memset_64.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
misc.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
mmx_32.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
msr-reg-export.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
msr-reg.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
msr-smp.c x86/msr: Make rdmsrl_safe_on_cpu() scheduling safe as well 2018-03-28 10:34:13 +02:00
msr.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
putuser.S x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* 2019-10-18 11:58:33 +02:00
retpoline.S x86/retpoline: Fix retpoline unwind 2020-04-30 20:14:34 +02:00
string_32.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
strstr_32.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
usercopy_32.c docs/core-api/mm: fix user memory accessors formatting 2019-03-05 21:07:20 -08:00
usercopy_64.c x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() 2020-10-06 11:18:04 +02:00
usercopy.c x86/nmi: Fix NMI uaccess race against CR3 switching 2018-08-31 17:08:22 +02:00
x86-opcode-map.txt x86/insn: Add Control-flow Enforcement (CET) instructions to the opcode map 2020-03-26 12:21:40 +01:00