linux/arch/sh/mm
Linus Torvalds 643ad15d47 Merge branch 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 protection key support from Ingo Molnar:
 "This tree adds support for a new memory protection hardware feature
  that is available in upcoming Intel CPUs: 'protection keys' (pkeys).

  There's a background article at LWN.net:

      https://lwn.net/Articles/643797/

  The gist is that protection keys allow the encoding of
  user-controllable permission masks in the pte.  So instead of having a
  fixed protection mask in the pte (which needs a system call to change
  and works on a per-page basis), the user can map a handful of
  protection mask variants and can change the masks at runtime relatively
  cheaply, without having to change every single page in the affected
  virtual memory range.

  This allows the dynamic switching of the protection bits of large
  amounts of virtual memory, via user-space instructions.  It also
  allows more precise control of MMU permission bits: for example the
  executable bit is separate from the read bit (see more about that
  below).
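
  For illustration, the 'user-space instructions' in question boil down
  to the new WRPKRU instruction, which rewrites the per-thread PKRU
  register.  A minimal sketch (assuming a CPU with pkeys enabled; the
  helper names below are made up for this example):

        #include <stdint.h>

        /*
         * Write the PKRU register with the WRPKRU instruction
         * (opcode 0f 01 ef): EAX carries the new value, ECX and EDX
         * must be zero.
         */
        static inline void wrpkru(uint32_t pkru)
        {
                asm volatile(".byte 0x0f,0x01,0xef"
                             : : "a" (pkru), "c" (0), "d" (0) : "memory");
        }

        /*
         * Each pkey owns two bits in PKRU: AD (access disable) at bit
         * 2*key and WD (write disable) at bit 2*key + 1.  Setting both
         * makes every mapping tagged with that key unreadable and
         * unwritable with a single register write, no per-page work.
         */
        static inline void pkey_deny_access(uint32_t old_pkru, int pkey)
        {
                wrpkru(old_pkru | (3u << (2 * pkey)));
        }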

  This tree adds the MM infrastructure and low-level x86 glue needed for
  that, plus a high-level API to make use of protection keys - if a
  user-space application calls:

        mmap(..., PROT_EXEC);

  or

        mprotect(ptr, sz, PROT_EXEC);

  (note PROT_EXEC-only, without PROT_READ/WRITE), the kernel will notice
  this special case, set a special protection key on this memory range,
  and set the appropriate bits in the Protection Keys User Rights (PKRU)
  register so that the memory becomes unreadable and unwritable.
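
  For illustration, the special case looks like this from user space (a
  minimal sketch; on a pkeys-capable CPU the final read is expected to
  fault with SIGSEGV, while on older CPUs it succeeds because PROT_EXEC
  implies PROT_READ there, as described below):

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t sz = 4096;

                /* PROT_EXEC only - no PROT_READ and no PROT_WRITE. */
                void *p = mmap(NULL, sz, PROT_EXEC,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }

                /*
                 * With pkeys the kernel tags this VMA with an
                 * execute-only protection key and clears its read/write
                 * rights in PKRU, so this load should raise SIGSEGV;
                 * without pkeys the mapping stays readable.
                 */
                printf("first byte: %d\n", *(volatile char *)p);

                munmap(p, sz);
                return 0;
        }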

  So using protection keys the kernel is able to implement 'true'
  PROT_EXEC on x86 CPUs: without protection keys PROT_EXEC implies
  PROT_READ as well.  Unreadable executable mappings have security
  advantages: they cannot be read via information leaks to figure out
  ASLR details, nor can they be scanned for ROP gadgets - and they
  cannot be used by exploits for data purposes either.

  We know of no user-space code that relies on pure PROT_EXEC mappings
  today, but binary loaders could start making use of this new feature
  to map binaries and libraries in a more secure fashion.

  There is other pending pkeys work that offers more high level system
  call APIs to manage protection keys - but those are not part of this
  pull request.
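
  For illustration only, this is roughly what those higher-level APIs
  ended up looking like once they were merged later (pkey_alloc(),
  pkey_mprotect(), pkey_set(), pkey_free(); the glibc wrappers shown
  here need glibc 2.27 or newer) - none of this is in the present pull:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t sz = 4096;
                char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                int pkey = pkey_alloc(0, 0);    /* grab a fresh pkey */

                if (p == MAP_FAILED || pkey < 0) {
                        perror("setup");
                        return 1;
                }

                /* Tag the mapping with the key once... */
                pkey_mprotect(p, sz, PROT_READ | PROT_WRITE, pkey);

                /* ...then flip its rights cheaply, no pte changes. */
                pkey_set(pkey, PKEY_DISABLE_WRITE);     /* read-only  */
                pkey_set(pkey, 0);                      /* read-write */

                pkey_free(pkey);
                munmap(p, sz);
                return 0;
        }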

  Right now there's a Kconfig option that controls this feature
  (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) and is enabled by default
  (like most x86 CPU feature enablement code that has no runtime
  overhead), but it's not user-configurable at the moment.  If there's
  any serious problem with this then we can make it configurable and/or
  flip the default"

* 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits)
  x86/mm/pkeys: Fix mismerge of protection keys CPUID bits
  mm/pkeys: Fix siginfo ABI breakage caused by new u64 field
  x86/mm/pkeys: Fix access_error() denial of writes to write-only VMA
  mm/core, x86/mm/pkeys: Add execute-only protection keys support
  x86/mm/pkeys: Create an x86 arch_calc_vm_prot_bits() for VMA flags
  x86/mm/pkeys: Allow kernel to modify user pkey rights register
  x86/fpu: Allow setting of XSAVE state
  x86/mm: Factor out LDT init from context init
  mm/core, x86/mm/pkeys: Add arch_validate_pkey()
  mm/core, arch, powerpc: Pass a protection key in to calc_vm_flag_bits()
  x86/mm/pkeys: Actually enable Memory Protection Keys in the CPU
  x86/mm/pkeys: Add Kconfig prompt to existing config option
  x86/mm/pkeys: Dump pkey from VMA in /proc/pid/smaps
  x86/mm/pkeys: Dump PKRU with other kernel registers
  mm/core, x86/mm/pkeys: Differentiate instruction fetches
  x86/mm/pkeys: Optimize fault handling in access_error()
  mm/core: Do not enforce PKEY permissions on remote mm access
  um, pkeys: Add UML arch_*_access_permitted() methods
  mm/gup, x86/mm/pkeys: Check VMAs and PTEs for protection keys
  x86/mm/gup: Simplify get_user_pages() PTE bit handling
  ...
2016-03-20 19:08:56 -07:00
alignment.c procfs: new helper - PDE_DATA(inode) 2013-04-09 14:13:32 -04:00
asids-debugfs.c arch/sh/mm/asids-debugfs.c: use PTR_ERR_OR_ZERO 2014-08-06 18:01:12 -07:00
cache-debugfs.c sh: prefix sh-specific "CCR" and "CCR2" by "SH_" 2014-03-04 07:55:49 -08:00
cache-sh2.c sh: prefix sh-specific "CCR" and "CCR2" by "SH_" 2014-03-04 07:55:49 -08:00
cache-sh2a.c sh: prefix sh-specific "CCR" and "CCR2" by "SH_" 2014-03-04 07:55:49 -08:00
cache-sh3.c sh: Mass ctrl_in/outX to __raw_read/writeX conversion. 2010-01-26 12:58:40 +09:00
cache-sh4.c mm: differentiate page_mapped() from page_mapcount() for compound pages 2016-01-15 17:56:32 -08:00
cache-sh5.c tree-wide: fix comment/printk typos 2010-11-01 15:38:34 -04:00
cache-sh7705.c sh: Assume new page cache pages have dirty dcache lines. 2010-12-01 15:39:51 +09:00
cache-shx3.c sh: prefix sh-specific "CCR" and "CCR2" by "SH_" 2014-03-04 07:55:49 -08:00
cache.c mm: differentiate page_mapped() from page_mapcount() for compound pages 2016-01-15 17:56:32 -08:00
consistent.c SH: adapt for dma_map_ops changes 2012-03-28 16:36:37 +02:00
extable_32.c
extable_64.c
fault.c mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handler 2015-05-19 08:39:15 +02:00
flush-sh4.c sh: fix up fallout from system.h disintegration. 2012-03-30 19:29:57 +09:00
gup.c mm/gup: Switch all callers of get_user_pages() to not pass tsk/mm 2016-02-16 10:11:12 +01:00
hugetlbpage.c mm: cleanup *pte_alloc* interfaces 2016-03-17 15:09:34 -07:00
init.c libnvdimm for 4.3: 2015-09-08 14:35:59 -07:00
ioremap_fixed.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
ioremap.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
Kconfig memblock: kill "config MAX_ACTIVE_REGIONS" 2013-04-18 13:03:53 +10:00
kmap.c sched/preempt, sh: kmap_coherent relies on disabled preemption 2016-03-17 19:46:14 +00:00
Makefile sh64: Fix up caller-save register settings for fast-path. 2012-05-14 16:46:07 +09:00
mmap.c Merge branch 'akpm' (Andrew's patchbomb) 2012-12-11 18:05:37 -08:00
nommu.c sh: stub __flush_tlb_global() definition for nommu. 2010-08-16 14:53:01 +09:00
numa.c sh: use PFN_DOWN macro 2015-09-04 16:54:41 -07:00
pgtable.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
pmb.c Disintegrate asm/system.h for SH 2012-03-28 18:30:03 +01:00
sram.c sh: fix up fallout from system.h disintegration. 2012-03-30 19:29:57 +09:00
tlb-debugfs.c sh: provide generic arch_debugfs_dir. 2010-09-24 04:04:26 +09:00
tlb-pteaex.c Disintegrate asm/system.h for SH 2012-03-28 18:30:03 +01:00
tlb-sh3.c Disintegrate asm/system.h for SH 2012-03-28 18:30:03 +01:00
tlb-sh4.c Disintegrate asm/system.h for SH 2012-03-28 18:30:03 +01:00
tlb-sh5.c sh: delete __cpuinit usage from all sh files 2013-07-14 19:36:53 -04:00
tlb-urb.c sh: update the TLB replacement counter for entry wiring. 2010-03-26 11:37:16 +09:00
tlbex_32.c sh: Enable shared page fault handler for _32/_64. 2012-05-14 15:33:28 +09:00
tlbex_64.c sh64: Set additional fault code values. 2012-05-14 17:46:49 +09:00
tlbflush_32.c sh: Provide a global TLB flush for U/I-TLB clear. 2010-07-02 15:44:09 +09:00
tlbflush_64.c sh64: Migrate to __update_tlb() API. 2012-05-14 15:52:28 +09:00
uncached.c sh: nommu: use 32-bit phys mode. 2010-11-04 12:32:24 +09:00