Commit Graph

7 Commits

Author SHA1 Message Date
Michel Lespinasse
b4265f1234 mm: use vm_unmapped_area() on sh architecture
Update the sh arch_get_unmapped_area[_topdown] functions to make use of
vm_unmapped_area() instead of implementing a brute-force search.

[akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
Signed-off-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-12-11 17:22:26 -08:00
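
For context, the pattern this commit switches sh over to looks roughly like
the sketch below. It is illustrative rather than the exact arch/sh/mm/mmap.c
code: the limits (TASK_UNMAPPED_BASE, TASK_SIZE) and the do_colour_align flag
are assumptions standing in for the real arch logic, while the
vm_unmapped_area_info fields follow the API introduced alongside this series.

	/* Sketch only: replace a hand-rolled vma walk with a single
	 * vm_unmapped_area() call.  Limits and locals are illustrative. */
	struct vm_unmapped_area_info info;

	info.flags = 0;			/* bottom-up search */
	info.length = len;
	info.low_limit = TASK_UNMAPPED_BASE;
	info.high_limit = TASK_SIZE;
	/* Honour D-cache colouring only when the mapping needs it. */
	info.align_mask = do_colour_align ? (shm_align_mask & PAGE_MASK) : 0;
	info.align_offset = pgoff << PAGE_SHIFT;
	return vm_unmapped_area(&info);

The topdown variant is roughly the same call with info.flags set to
VM_UNMAPPED_AREA_TOPDOWN and the search window ending at the mmap base,
which is why the open-coded COLOUR_ALIGN_DOWN() helper became unused and
could be dropped.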
Al Viro
e77414e0aa fix broken aliasing checks for MAP_FIXED on sparc32, mips, arm and sh
We want addr - (pgoff << PAGE_SHIFT) consistently coloured...

Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2009-12-11 06:44:59 -05:00
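
The check the subject line refers to is, in sketch form, the MAP_FIXED
early-out in the arch get_unmapped_area paths: a fixed, shared mapping is
rejected unless the requested address and the file offset land on the same
cache colour. The snippet below is a simplified illustration of that rule,
not a verbatim copy of any one architecture's code.

	/* Reject a fixed shared mapping whose address would alias with
	 * other mappings of the same file offset in a virtually indexed
	 * cache: addr - (pgoff << PAGE_SHIFT) must be colour-aligned. */
	if (flags & MAP_FIXED) {
		if ((flags & MAP_SHARED) &&
		    ((addr - (pgoff << PAGE_SHIFT)) & shm_align_mask))
			return -EINVAL;
		return addr;
	}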
Paul Mundt
dde5e3ffb7 sh: rework nommu for generic cache.c use.
This does a bit of reorganizing to allow nommu to use the new and
generic cache.c; there are no functional changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:49:32 +09:00
Paul Mundt
ee1acbfabd sh: Handle shm_align_mask also for HAVE_ARCH_UNMAPPED_AREA_TOPDOWN.
Presently shm_align_mask is only honoured in the bottom-up case, but we
still want it for proper colouring constraints in the topdown case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-07 16:38:16 +09:00
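
In the topdown search the candidate address has to be rounded down, rather
than up, onto the colour dictated by the file offset. The helper below is an
illustrative sketch of that rounding; the name and shape are assumptions, as
the real helper lived in arch/sh/mm/mmap.c until the later vm_unmapped_area()
conversion removed it.

	/* Sketch: highest address <= addr that shares a D-cache colour
	 * with the file offset (shm_align_mask = alias size - 1). */
	static unsigned long colour_align_down(unsigned long addr,
					       unsigned long pgoff)
	{
		unsigned long off  = (pgoff << PAGE_SHIFT) & shm_align_mask;
		unsigned long base = addr & ~shm_align_mask;

		if (base + off <= addr)
			return base + off;
		return base + off - (shm_align_mask + 1);
	}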
Paul Mundt
4a4a9be3eb sh: Move arch_get_unmapped_area() in to arch/sh/mm/mmap.c.
Now that arch/sh/mm/mmap.c exists, move arch_get_unmapped_area() there.
Follows the ARM change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:42:49 +09:00
Paul Mundt
10840f034e sh: Don't factor in PAGE_OFFSET for valid_phys_addr_range() check.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-11-13 15:38:02 +09:00
Paul Mundt
185aed7557 sh: Provide a sane valid_phys_addr_range() to prevent TLB reset with PMB.
With the PMB enabled, only P1SEG and up are covered by the PMB mappings,
so reads from out-of-bounds physical addresses lead to a TLB reset after
the PMB miss; something as simple as dd if=/dev/mem can therefore reset
the TLB.

Fix this up to make sure the reference is between __MEMORY_START (phys)
and __pa(high_memory). This is coherent across all variants of sh/sh64
with and without MMU, though the PMB bug itself is only applicable to
SH-4A parts.

Reported-by: Hideo Saito <saito@densan.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-11-12 12:53:48 +09:00
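
Taken together with the PAGE_OFFSET fix in the entry above, the range check
described here boils down to something like the following sketch. The
prototype matches what drivers/char/mem.c expected at the time; treat the
exact bounds handling as illustrative rather than the committed code.

	/* Sketch: /dev/mem accesses must fall inside physical RAM so a
	 * PMB miss on a bogus address cannot end up resetting the TLB. */
	int valid_phys_addr_range(unsigned long addr, size_t count)
	{
		if (addr < __MEMORY_START)
			return 0;
		if (addr + count > __pa(high_memory))
			return 0;
		return 1;
	}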