Commit Graph

28 Commits

Author SHA1 Message Date
Cong Wang
bc3e11be88 sh: remove the second argument of k[un]map_atomic()
Signed-off-by: Cong Wang <amwang@redhat.com>
2012-03-20 21:48:15 +08:00
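For illustration of the change above, the new calling convention looks roughly like this in a caller (a hedged before/after sketch, not a hunk from this commit; KM_USER0 is just an example slot):

    /* before: callers had to pick a KM_* slot explicitly */
    void *vaddr = kmap_atomic(page, KM_USER0);
    memset(vaddr, 0, PAGE_SIZE);
    kunmap_atomic(vaddr, KM_USER0);

    /* after: the slot argument is gone, stacking is handled internally */
    void *vaddr = kmap_atomic(page);
    memset(vaddr, 0, PAGE_SIZE);
    kunmap_atomic(vaddr);
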
Stuart Menefy
a25bbe1222 sh: Flush executable pages in copy_user_highpage
This resolves a problem seen when using the Android dynamic linker.
Sometimes the dynamic linker would seg-fault at start up and this
was eventually traced to the handling of a COW fault for a page which
was being modified by the linker. If there was no cache aliasing between
the kernel and the user page, the page was not flushed, leaving the
newly copied data in the D-cache. However, when executing instructions
from that page, the I-cache is filled directly from external memory
rather than from the D-cache, causing garbage to be executed.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2011-02-15 16:24:31 +09:00
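A minimal sketch of the resulting check in copy_user_highpage(), using the usual arch/sh helpers (hedged; not the literal patch hunk):

    /* vto is the kernel mapping just written by copy_page() */
    if (pages_do_alias((unsigned long)vto, vaddr & PAGE_MASK) ||
        (vma->vm_flags & VM_EXEC))
            /* push the new contents out of the D-cache so subsequent
             * I-cache fills from memory see them */
            __flush_purge_region(vto, PAGE_SIZE);
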
Paul Mundt
55661fc1f1 sh: Assume new page cache pages have dirty dcache lines.
This follows the ARM change c01778001a
("ARM: 6379/1: Assume new page cache pages have dirty D-cache") for the
same rationale:

    There are places in Linux where writes to newly allocated page
    cache pages happen without a subsequent call to flush_dcache_page()
    (several PIO drivers including USB HCD). This patch changes the
    meaning of PG_arch_1 to be PG_dcache_clean and always flush the
    D-cache for a newly mapped page in update_mmu_cache().

This addresses issues seen with executing binaries from MMC, in
addition to some of the other HCDs that don't explicitly do cache
management for their pipe-in buffers.

Requested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-12-01 15:39:51 +09:00
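A hedged sketch of the new convention in the update_mmu_cache() path (helper names follow arch/sh style and may differ from the exact diff):

    /* A page that has never been flushed is assumed to have dirty
     * D-cache lines: write it back once and mark it clean. */
    if (pfn_valid(pfn)) {
            struct page *page = pfn_to_page(pfn);

            if (!test_and_set_bit(PG_dcache_clean, &page->flags))
                    __flush_purge_region(page_address(page), PAGE_SIZE);
    }
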
Paul Mundt
3cf6fa1e33 sh: Enable SH-X3 hardware synonym avoidance handling.
This enables support for the hardware synonym avoidance handling on SH-X3
CPUs for the case where dcache aliases are possible. icache handling is
retained, but we flip on broadcasting of the block invalidations due to
the lack of coherency otherwise on SMP.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-19 17:27:17 +09:00
Paul Mundt
a6198a238b sh: Guard against early IPIs in flush_cache_all().
flush_cache_all() gets called when we do some early ioremapping.
Unfortunately on SDK7786 the interrupt controller itself requires
ioremapping, leading to a bit of a chicken and egg scenario. For now,
don't bother with IPI crosscalls if there aren't any other CPUs online.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-15 14:21:37 +09:00
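The guard itself is small; a hedged sketch of the idea (the exact placement inside the cross-call helper may differ):

    /* Early boot: the IPI path may itself depend on not-yet-ioremapped
     * hardware (the SDK7786 interrupt controller), so only attempt a
     * cross-call when another CPU is actually online. */
    if (num_online_cpus() > 1)
            smp_call_function(func, info, 1);
    func(info);     /* always flush locally */
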
Markus Pietrek
76382b5bdb sh: Ensure all PG_dcache_dirty pages are written back.
With some of the cache rework an address aliasing optimization was added,
but this managed to fail on certain mappings resulting in pages with
PG_dcache_dirty set never writing back their dcache lines. This patch
reverts to the earlier behaviour of simply always writing back when the
dirty bit is set.

Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-24 15:12:02 +09:00
Paul Mundt
7e01c94998 sh: Partial revert of copy/clear_user_highpage() optimizations.
These still require more testing, so revert them for now. We keep the
off-by-1 in the fixmap colouring and drop the rest.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-04 15:14:52 +09:00
Stuart Menefy
39ac11c160 sh: Improve performance of SH4 versions of copy/clear_user_highpage
The previous implementation of clear_user_highpage and copy_user_highpage
checked to see if there was a D-cache aliasing issue between the user
and kernel mappings of a page, but if there was they always did a
flush with writeback on the dirtied kernel alias.

However as we now have the ability to map a page into kernel space
with the same cache colour as the user mapping, there is no need to
write back this data.

Currently we also invalidate the kernel alias as a precaution, however
I'm not sure if this is actually required.

Also correct the definition of FIX_CMAP_END so that the mappings created
by kmap_coherent() are actually at the correct colour.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-24 17:13:35 +09:00
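A hedged sketch of the idea for the clear_user_highpage() case (signatures follow later arch/sh code and may not match this exact revision):

    /* Map the kernel view at the same cache colour as the user address,
     * so the freshly written lines are already where the user mapping
     * will look for them; no writeback of the kernel alias is needed,
     * at most an invalidate as a precaution. */
    void *kaddr = kmap_coherent(page, vaddr);
    clear_page(kaddr);
    kunmap_coherent(kaddr);
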
Paul Mundt
3af539e59c sh64: Fix up reworked cache op build.
This gets the build fixed up for the sh64 cache enabled case.
Disabling still needs further abstraction for independent I/D-cache
disabling.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-12 17:03:28 +09:00
Paul Mundt
0a993b0a29 sh64: cache flush symbol exports.
These were previously hidden in sh_ksyms_32, despite also being needed
for sh64 now that the cache.c code is shared.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 10:51:35 +09:00
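The mechanics are the usual export pattern, roughly (hedged example; the actual list of symbols is in the commit itself):

    #include <linux/module.h>
    #include <asm/cacheflush.h>

    EXPORT_SYMBOL(flush_dcache_page);
    EXPORT_SYMBOL(flush_icache_range);
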
Paul Mundt
abeaf33a41 Merge branch 'sh/stable-updates'
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-10-16 15:14:50 +09:00
Magnus Damm
5fb80ae8bd sh: disabled cache handling fix.
Add code to handle the cache disabled case. Fixes breakage introduced by
37443ef3f0 ("sh: Migrate SH-4 cacheflush
ops to function pointers."). Without this patch configuring caches off
with CONFIG_CACHE_OFF=y makes kfr2r09 and migo-r lock up in fbdev
deferred io or early user space.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:38:48 +09:00
Paul Mundt
95019b48ad Merge branch 'sh/stable-updates' 2009-10-13 11:27:08 +09:00
Paul Mundt
964f7e5a56 sh: force dcache flush if dcache_dirty bit set.
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.

This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.

Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 11:18:34 +09:00
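A hedged sketch of the change in the update_mmu_cache()-time handler (simplified; PG_dcache_dirty was still the convention at this point):

    if (pfn_valid(pfn)) {
            struct page *page = pfn_to_page(pfn);

            /* No page_mapping(page) check any more: the mapping may have
             * been torn down already (e.g. swap cache freed early), but
             * the deferred writeback must still happen. */
            if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                    __flush_purge_region(page_address(page), PAGE_SIZE);
    }
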
Paul Mundt
654d364e26 sh: sh4_flush_cache_mm() optimizations.
The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.

After careful profiling it's also come up that all of the work associated
with hunting down aliases and doing ranged flushing ends up generating
more overhead than simply blasting away the entire dcache, particularly
if there are many mm's that need to be iterated over. As a result of
that, just move back to flush_dcache_all() in these cases, which restores
the old behaviour, and vastly simplifies the path.

Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:04:06 +09:00
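A hedged sketch of the resulting shape (simplified; cacheop_on_each_cpu() stands in for the SMP cross-call helper sketched under 6f3795788b further down this log):

    void flush_cache_mm(struct mm_struct *mm)
    {
            /* No D-cache aliases on this part: nothing to evict. */
            if (boot_cpu_data.dcache.n_aliases == 0)
                    return;

            cacheop_on_each_cpu(local_flush_cache_mm, mm);
    }

    static void sh4_flush_cache_mm(void *arg)
    {
            /* Ranged flushing and alias hunting cost more than they saved;
             * just write back and invalidate the whole D-cache. */
            flush_dcache_all();
    }
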
Paul Mundt
6e4154d4c2 sh: Use more aggressive dcache purging in kmap teardown.
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This requires some more optimal
handling, but is a safe fallback until all of the corner cases have been
handled.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-08 16:21:00 +09:00
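The safe fallback amounts to a purge (write back + invalidate) rather than a plain invalidate on teardown; a hedged fragment using the arch/sh-style helper:

    /* inside kunmap_coherent(), before the fixmap slot is released: */
    __flush_purge_region((void *)kvaddr, PAGE_SIZE);
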
Paul Mundt
0906a3ad33 sh: Fix up and optimize the kmap_coherent() interface.
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.

One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-03 17:21:10 +09:00
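A hedged sketch of a typical caller after this rework (signatures as in later arch/sh code, which may differ from this revision):

    /* Map the page at the user's cache colour, use it, and tear it down;
     * the TLB entry for the fixmap slot is flushed in kunmap_coherent()
     * rather than eagerly at map time, and no update_mmu_cache() call
     * is made here. */
    void *vto = kmap_coherent(page, vaddr);
    memcpy(vto, src, len);
    kunmap_coherent(vto);
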
Paul Mundt
6f3795788b sh: Fix up UP deadlock with SMP-aware cache ops.
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 21:21:36 +09:00
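The helper is essentially of this shape (a hedged sketch, not the verbatim function):

    static void cacheop_on_each_cpu(void (*func)(void *info), void *info)
    {
            preempt_disable();

            /* Cross-call the other CPUs (a no-op on UP), then run locally.
             * Unlike on_each_cpu(), interrupts are left alone here; the
             * flusher itself decides whether it needs to mask them. */
            smp_call_function(func, info, 1);
            func(info);

            preempt_enable();
    }
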
Paul Mundt
f26b2a562b sh: Make cache flushers SMP-aware.
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 17:23:14 +09:00
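The wrapper pattern then looks roughly like this (hedged; see the cacheop_on_each_cpu() sketch above):

    /* The exported entry point is common code; each CPU family only
     * provides the local_* flusher behind the function pointer. */
    void flush_dcache_page(struct page *page)
    {
            cacheop_on_each_cpu(local_flush_dcache_page, page);
    }
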
Paul Mundt
2b4315185a sh: Wire up sh5_cache_init().
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 02:16:44 +09:00
Paul Mundt
0d051d90bb sh: Convert SH7705 extended mode to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:53:39 +09:00
Paul Mundt
79f1c9da5e sh: Convert SH-3 to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:42:55 +09:00
Paul Mundt
a58e1a2ab4 sh: Convert SH-2A to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:38:29 +09:00
Paul Mundt
109b44a82a sh: Convert SH-2 to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:35:15 +09:00
Paul Mundt
37443ef3f0 sh: Migrate SH-4 cacheflush ops to function pointers.
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:29:49 +09:00
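A hedged sketch of the indirection (names follow arch/sh/mm/cache.c conventions):

    /* Flushers become function pointers with a safe no-op default... */
    void (*local_flush_dcache_page)(void *page) = cache_noop;
    void (*local_flush_icache_range)(void *args) = cache_noop;

    /* ...and each family's init wires up only what it implements. */
    void __init sh4_cache_init(void)
    {
            local_flush_dcache_page  = sh4_flush_dcache_page;
            local_flush_icache_range = sh4_flush_icache_range;
            /* everything left untouched stays a no-op */
    }
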
Paul Mundt
27d59ec170 sh: Move alias computation to shared cache init.
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().

This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:11:16 +09:00
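The shared computation is small; a hedged sketch (field names per struct cache_info in arch/sh):

    static void compute_alias(struct cache_info *c)
    {
            /* A cache way larger than a page means the same physical page
             * can sit at several virtual "colours"; record the mask and
             * the alias count so the flushers can decide what to do. */
            c->alias_mask = ((c->sets - 1) << c->entry_shift) & ~(PAGE_SIZE - 1);
            c->n_aliases  = c->alias_mask ? (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
    }
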
Paul Mundt
ecba106058 sh: Centralize the CPU cache initialization routines.
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:05:42 +09:00
Paul Mundt
cbbe2f68f6 sh: rename pg-mmu.c -> cache.c, enable generically.
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:30:39 +09:00