Commit Graph

28 Commits

Author SHA1 Message Date
Barry Song
91c2ebb90b ARM: 7114/1: cache-l2x0: add resume entry for l2 in secure mode
We save the l2x0 registers at first initialization, and platform code
can use them to restore the l2x0 state after wakeup.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:11:51 +01:00
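A minimal sketch of the idea in the commit above, with illustrative structure and helper names (the actual patch saves more state and hooks into the platform PM code):

    #include <linux/io.h>
    #include <linux/types.h>

    /* Illustrative only: the real driver saves considerably more state. */
    struct l2x0_saved_sketch {
        u32 aux_ctrl;
        u32 tag_latency;
        u32 data_latency;
    };
    static struct l2x0_saved_sketch l2x0_saved;

    static void l2x0_save_regs_sketch(void __iomem *base)
    {
        l2x0_saved.aux_ctrl     = readl_relaxed(base + 0x104);  /* AUX_CTRL */
        l2x0_saved.tag_latency  = readl_relaxed(base + 0x108);  /* PL310 tag RAM latency */
        l2x0_saved.data_latency = readl_relaxed(base + 0x10C);  /* PL310 data RAM latency */
    }

    /* Called from platform resume code while still in secure mode. */
    static void l2x0_resume_sketch(void __iomem *base)
    {
        if (readl_relaxed(base + 0x100) & 1)    /* CTRL: L2 already re-enabled */
            return;

        writel_relaxed(l2x0_saved.tag_latency,  base + 0x108);
        writel_relaxed(l2x0_saved.data_latency, base + 0x10C);
        writel_relaxed(l2x0_saved.aux_ctrl,     base + 0x104);
        writel_relaxed(1, base + 0x100);        /* re-enable the controller */
    }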
Barry Song
74d41f39a9 ARM: 7090/1: CACHE-L2X0: filter start address can be 0 and is often 0
This patch fixes an error in Rob Herring's
ARM: 7009/1: l2x0: Add OF based initialization
http://www.spinics.net/lists/arm-kernel/msg131123.html
which has been in rmk/for-next as commit 41c86ff5b

Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Rob Herring <robherring2@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:11:40 +01:00
Barry Song
1caf30924f ARM: 7089/1: L2X0: add explicit cpu_relax() for busy wait loop
Using cpu_relax() in busy loops is a well-known idiom in the kernel.
It's more for documentation purposes than technically needed here.

Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:11:36 +01:00
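The idiom amounts to polling a maintenance register with cpu_relax() in the loop body; a minimal sketch of such a wait helper (register offset and mask depend on the operation being polled):

    #include <linux/io.h>
    #include <asm/processor.h>   /* cpu_relax() */

    /* Spin until the controller clears the busy mask for a maintenance operation. */
    static inline void l2x0_busy_wait_sketch(void __iomem *reg, unsigned long mask)
    {
        while (readl_relaxed(reg) & mask)
            cpu_relax();   /* documents the busy-wait; a no-op or a hint on most ARM cores */
    }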
Rob Herring
8c369264b6 ARM: 7009/1: l2x0: Add OF based initialization
This adds probing for ARM L2x0 cache controllers via device tree. Support
includes the L210, L220, and PL310 controllers. The binding allows setting
up cache RAM latencies and filter addresses (PL310 only).

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Barry Song <21cnbao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:11:30 +01:00
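A hedged sketch of how such a probe might consume the latency properties named by the binding ("arm,tag-latency"/"arm,data-latency"); the field packing is illustrative and the helper name is not from the patch:

    #include <linux/init.h>
    #include <linux/io.h>
    #include <linux/of.h>

    static void __init l2x0_of_parse_sketch(const struct device_node *np,
                                            void __iomem *base)
    {
        u32 lat[3];   /* three cells: read, write, setup latencies */

        /* PL310 only: program the RAM latencies when the DT provides them.
         * Bit positions below are illustrative; see the PL310 TRM for the
         * exact layout of the latency control registers. */
        if (!of_property_read_u32_array(np, "arm,tag-latency", lat, 3))
            writel_relaxed(((lat[0] - 1) << 4) | ((lat[1] - 1) << 8) | (lat[2] - 1),
                           base + 0x108);   /* tag RAM latency control */
        if (!of_property_read_u32_array(np, "arm,data-latency", lat, 3))
            writel_relaxed(((lat[0] - 1) << 4) | ((lat[1] - 1) << 8) | (lat[2] - 1),
                           base + 0x10C);   /* data RAM latency control */
    }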
Linus Walleij
bac7e6ecf6 ARM: 7080/1: l2x0: make sure I&D are not locked down on init
Fighting unfixed U-Boots and other beasts that may leave the cache in
a locked-down state when starting the kernel, we make sure to
disable all cache lock-down when initializing the l2x0 so we
are in a known state.

Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Cc: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Adrian Bunk <adrian.bunk@movial.com>
Cc: Rob Herring <robherring2@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reported-by: Jan Rinze <janrinze@gmail.com>
Tested-by: Robert Marklund <robert.marklund@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-09-07 00:48:03 +01:00
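Roughly, the fix clears every data and instruction lockdown register before the cache is enabled; a sketch, assuming eight lockdown slots and the usual L2x0 register offsets:

    #include <linux/io.h>

    #define L2X0_LOCKDOWN_WAY_D_BASE  0x900
    #define L2X0_LOCKDOWN_WAY_I_BASE  0x904
    #define L2X0_LOCKDOWN_STRIDE      0x08

    static void l2x0_unlock_sketch(void __iomem *base)
    {
        int i;

        /* Clear D- and I-side lockdown for every master so no ways stay locked. */
        for (i = 0; i < 8; i++) {
            writel_relaxed(0, base + L2X0_LOCKDOWN_WAY_D_BASE + i * L2X0_LOCKDOWN_STRIDE);
            writel_relaxed(0, base + L2X0_LOCKDOWN_WAY_I_BASE + i * L2X0_LOCKDOWN_STRIDE);
        }
    }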
Will Deacon
38a8914f9a ARM: 6987/1: l2x0: fix disabling function to avoid deadlock
The l2x0_disable function attempts to writel with the l2x0_lock held.
This results in deadlock when the writel contains an outer_sync call
for the platform since the l2x0_lock is already held by the disable
function. A further problem is that disabling the L2 without flushing it
first can lead to the spin_lock operation becoming visible after the
spin_unlock, causing any subsequent L2 maintenance to deadlock.

This patch replaces the writel with a call to writel_relaxed in the
disabling code and adds a flush before disabling via the control
register, preventing livelock from occurring.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-06 20:48:08 +01:00
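In outline, the fixed disable path flushes first and then turns the cache off with a relaxed write so no barrier-driven outer_sync() re-enters the lock; the names below are illustrative of the driver's internals, not the exact patch, and 8 ways are assumed:

    #include <linux/io.h>
    #include <linux/spinlock.h>
    #include <asm/processor.h>

    static DEFINE_SPINLOCK(l2x0_lock_sketch);
    static void __iomem *l2x0_base_sketch;   /* mapped controller base, set at init */

    static void l2x0_disable_sketch(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&l2x0_lock_sketch, flags);

        /* Flush everything while the L2 is still enabled... */
        writel_relaxed(0xff, l2x0_base_sketch + 0x7FC);   /* clean+inv by way */
        while (readl_relaxed(l2x0_base_sketch + 0x7FC) & 0xff)
            cpu_relax();

        /* ...then disable it with the relaxed accessor, so the write does not
         * trigger a barrier -> outer_sync() -> attempt to retake the lock. */
        writel_relaxed(0, l2x0_base_sketch + 0x100);      /* CTRL */

        spin_unlock_irqrestore(&l2x0_lock_sketch, flags);
    }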
Russell King
1f0090a1ea Merge branch 'misc' into devel
Conflicts:
	arch/arm/Kconfig
2011-03-16 23:35:25 +00:00
Santosh Shilimkar
2839e06c95 ARM: 6795/1: l2x0: Errata fix for flush by Way operation can cause data corruption
PL310 implements the Clean & Invalidate by Way L2 cache maintenance
operation (offset 0x7FC). This operation runs in background so that
PL310 can handle normal accesses while it is in progress. Under very
rare circumstances, due to this erratum, write data can be lost when
PL310 treats a cacheable write transaction during a Clean & Invalidate
by Way operation.

Workaround:
1. Disable Write-Back and Cache Linefill (Debug Control Register)
2. Clean & Invalidate by Way (0x7FC)
3. Re-enable Write-Back and Cache Linefill (Debug Control Register)

This patch also removes any OMAP dependency on the PL310 errata.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-03-09 00:18:34 +00:00
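The three workaround steps listed above map onto code roughly like this; the debug-register write is platform-specific in reality (it may require a secure call), so a direct write is shown only for illustration:

    #include <linux/io.h>
    #include <asm/processor.h>

    #define L2X0_CLEAN_INV_WAY  0x7FC
    #define L2X0_DEBUG_CTRL     0xF40
    #define WAY_MASK            0xffff   /* up to 16 ways on PL310 */

    /* Stand-in for the platform's (possibly secure) debug register write. */
    static void debug_write_sketch(void __iomem *base, u32 val)
    {
        writel_relaxed(val, base + L2X0_DEBUG_CTRL);
    }

    static void l2x0_flush_all_erratum_sketch(void __iomem *base)
    {
        debug_write_sketch(base, 0x03);   /* 1. disable Write-Back and Cache Linefill */

        writel_relaxed(WAY_MASK, base + L2X0_CLEAN_INV_WAY);   /* 2. clean & inv by way */
        while (readl_relaxed(base + L2X0_CLEAN_INV_WAY) & WAY_MASK)
            cpu_relax();

        debug_write_sketch(base, 0x00);   /* 3. re-enable Write-Back and Cache Linefill */
    }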
Srinidhi Kasagar
885028e4ba ARM: 6741/1: errata: pl310 cache sync operation may be faulty
The effect of the cache sync operation is to drain the store buffer and
wait for all internal buffers to be empty. In normal conditions, the store
buffer is able to merge normal memory writes within its 32-byte
data buffers.  Due to this erratum, present in r3p0, the effect of the cache
sync operation on the store buffer still remains when the operation
completes. This means that the store buffer is always asked to drain,
which prevents it from merging any further writes.

This can severely affect performance of write traffic, especially to
Normal Non-cacheable memory.

The proposed workaround is to replace the normal offset of the cache sync
operation (0x730) with another offset targeting an unmapped PL310
register, 0x740.

Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-19 11:23:21 +00:00
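The workaround boils down to redirecting cache sync writes to the reserved offset; a sketch, with the offset held in a driver-level variable chosen at init time:

    #include <linux/io.h>

    #define L2X0_CACHE_SYNC  0x730
    #define L2X0_DUMMY_REG   0x740   /* not mapped to an operation; the write still drains */

    static unsigned int sync_reg_offset_sketch = L2X0_CACHE_SYNC;

    static void cache_sync_sketch(void __iomem *base)
    {
        writel_relaxed(0, base + sync_reg_offset_sketch);
    }

    /* At init, for an r3p0 PL310 the offset is switched over:
     *     sync_reg_offset_sketch = L2X0_DUMMY_REG;
     */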
Santosh Shilimkar
444457c1f5 ARM: l2x0: Optimise the range based operations
For big buffers which are in excess of the cache size, the maintenance
operations by PA are very slow. For such buffers the maintenance
operations can be sped up by using the way-based method.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
2010-10-26 11:40:05 +05:30
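The optimisation is just a size check in front of the per-line loop; a sketch with illustrative helper names (l2x0_size_sketch would be the total cache size determined at init, and the two stubs stand in for the driver's existing paths):

    #include <linux/types.h>

    static u32 l2x0_size_sketch;   /* ways * way_size, filled in at init */

    static void l2x0_flush_all_sketch(void) { /* clean+inv by way, elided */ }
    static void l2x0_flush_range_by_pa_sketch(unsigned long start,
                                              unsigned long end) { /* elided */ }

    static void l2x0_flush_range_sketch(unsigned long start, unsigned long end)
    {
        /* For buffers at least as large as the cache, flushing every way is
         * cheaper than walking the range one physical line at a time. */
        if (end - start >= l2x0_size_sketch) {
            l2x0_flush_all_sketch();
            return;
        }
        l2x0_flush_range_by_pa_sketch(start, end);   /* the existing line-by-line path */
    }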
Santosh Shilimkar
5ba7037228 ARM: l2x0: Determine the cache size
The cache size is needed to optimise range-based
maintenance operations.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
2010-10-26 11:40:03 +05:30
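The computation reads the way-size field of AUX_CTRL and multiplies by the number of ways; a sketch (bit positions per the L2x0 TRM, helper name illustrative):

    #include <linux/types.h>

    static u32 l2x0_calc_size_sketch(u32 aux_ctrl, u32 ways)
    {
        u32 field = (aux_ctrl >> 17) & 0x7;   /* AUX_CTRL[19:17]: way size */
        u32 way_size_kb = 1 << (field + 3);   /* e.g. 1 -> 16KB, 2 -> 32KB */

        return ways * way_size_kb * 1024;     /* total cache size in bytes */
    }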
Thomas Gleixner
2fd8658931 arm: Implement l2x0 cache disable functions
Add flush_all, inv_all and disable functions to the l2x0 code. These
functions are called from kexec code to prevent random crashes in the
new kernel.

Platforms like OMAP which control L2 enable/disable via SMI mode can
override the outer_cache.disable() function to implement their own.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
2010-10-26 11:39:58 +05:30
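A sketch of how such hooks are published through the kernel's outer cache interface (function bodies elided; outer_cache and struct outer_cache_fns are the existing ARM interface):

    #include <linux/init.h>
    #include <asm/outercache.h>

    static void l2x0_flush_all_sk(void)  { /* clean & invalidate by way, elided */ }
    static void l2x0_inv_all_sk(void)    { /* invalidate by way, elided */ }
    static void l2x0_disable_sk(void)    { /* flush, then clear CTRL, elided */ }

    static void __init l2x0_register_hooks_sketch(void)
    {
        outer_cache.flush_all = l2x0_flush_all_sk;
        outer_cache.inv_all   = l2x0_inv_all_sk;
        outer_cache.disable   = l2x0_disable_sk;   /* platforms such as OMAP can override */
    }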
Catalin Marinas
9a6655e49f ARM: Improve the L2 cache performance when PL310 is used
With this L2 cache controller, the cache maintenance by PA and sync
operations are atomic and do not require a "wait" loop. This patch
conditionally defines the cache_wait() function.

Since L2x0 cache controllers do not work with ARMv7 CPUs, the patch
automatically enables CACHE_PL310 when only CPU_V7 is defined.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2010-10-26 11:39:54 +05:30
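The conditional definition can be sketched like this: when the kernel is built only for PL310, per-operation waiting compiles away; otherwise the usual poll loop is kept:

    #include <linux/io.h>
    #include <asm/processor.h>

    #ifdef CONFIG_CACHE_PL310
    /* PL310: maintenance by PA and cache sync are atomic, nothing to poll. */
    static inline void cache_wait_sketch(void __iomem *reg, unsigned long mask)
    {
    }
    #else
    static inline void cache_wait_sketch(void __iomem *reg, unsigned long mask)
    {
        while (readl_relaxed(reg) & mask)
            cpu_relax();
    }
    #endif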
Catalin Marinas
6775a558fe ARM: 6272/1: Convert L2x0 to use the IO relaxed operations
This patch is in preparation for a subsequent patch which adds barriers
to the I/O accessors. Since the mandatory barriers may do an L2 cache
sync, this patch avoids a recursive call into l2x0_cache_sync() via the
write*() accessors and wmb() and a call into l2x0_cache_sync() with the
l2x0_lock held.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-29 14:04:36 +01:00
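The hazard and its fix can be sketched in a few lines: inside the driver, with the lock held, a plain writel() would imply a barrier that may itself call outer_sync() and try to retake the same lock, so the relaxed accessor is used instead (lock name illustrative):

    #include <linux/io.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(l2x0_lock_sk);

    static void l2x0_cache_sync_sketch(void __iomem *base)
    {
        unsigned long flags;

        spin_lock_irqsave(&l2x0_lock_sk, flags);
        /* writel() would add a barrier that can recurse into outer_sync() and
         * hence back into this lock; writel_relaxed() avoids that. */
        writel_relaxed(0, base + 0x730);   /* CACHE_SYNC */
        spin_unlock_irqrestore(&l2x0_lock_sk, flags);
    }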
Sascha Hauer
4082cfa776 ARM: 6210/1: Do not rely on reset defaults of L2X0_AUX_CTRL
On i.MX35 the L2X0_AUX_CTRL register does not have sensible reset
default values. Allow them to be overwritten with the aux_val/aux_mask
arguments passed to l2x0_init().

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-09 11:28:53 +01:00
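The mechanism is a read-modify-write of AUX_CTRL with the mask/value pair that platforms pass to l2x0_init(); a sketch:

    #include <linux/io.h>

    static void l2x0_apply_aux_sketch(void __iomem *base, u32 aux_val, u32 aux_mask)
    {
        u32 aux = readl_relaxed(base + 0x104);   /* AUX_CTRL */

        aux &= aux_mask;   /* keep only the bits the platform wants preserved */
        aux |= aux_val;    /* force the platform-provided values */

        /* AUX_CTRL is only writable while the controller is disabled. */
        writel_relaxed(aux, base + 0x104);
    }

A platform with bogus reset defaults, like the i.MX35 case above, would pass a mask that clears the affected fields and a value that sets them to something sensible.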
Russell King
ac1d426e82 Merge branch 'devel-stable' into devel
Conflicts:
	arch/arm/Kconfig
	arch/arm/include/asm/system.h
	arch/arm/mm/Kconfig
2010-05-17 17:24:04 +01:00
Jason McMullan
64039be822 ARM: 6094/1: Extend cache-l2x0 to support the 16-way PL310
The L310 cache controller's interface is almost identical
to the L210. One major difference is that the PL310 can
have up to 16 ways.

This change uses the cache's part ID and the Associativity
bits in the AUX_CTRL register to determine the number of ways.

Also, this version prints out the CACHE_ID and AUX_CTRL registers.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jason S. McMullan <jason.mcmullan@netronome.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-15 15:03:50 +01:00
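Detection can be sketched as reading the part number from CACHE_ID and, for the PL310, the associativity bit of AUX_CTRL; other parts are simply assumed 8-way in this sketch:

    #include <linux/io.h>

    #define L2X0_CACHE_ID  0x000
    #define L2X0_AUX_CTRL  0x104

    static u32 l2x0_detect_ways_sketch(void __iomem *base)
    {
        u32 cache_id = readl_relaxed(base + L2X0_CACHE_ID);
        u32 aux      = readl_relaxed(base + L2X0_AUX_CTRL);

        if (((cache_id >> 6) & 0xf) == 0x3)      /* part number: PL310 */
            return (aux & (1 << 16)) ? 16 : 8;   /* associativity bit selects 16 ways */

        return 8;                                /* L210/L220: assume 8 ways here */
    }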
Catalin Marinas
23107c5420 ARM: 5995/1: ARM: Add L2x0 outer_sync() support (3/4)
The L2x0 cache controllers need to explicitly drain their write buffer
even for Normal Noncacheable memory accesses.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-03-25 21:13:50 +00:00
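A sketch of the sync hook: a single store to the cache sync register drains the controller's write buffer, and the hook is published through outer_cache.sync so barriers can reach it (lock and base variable names are illustrative):

    #include <linux/io.h>
    #include <linux/spinlock.h>
    #include <asm/outercache.h>

    static DEFINE_SPINLOCK(l2x0_sync_lock_sk);
    static void __iomem *l2x0_base_sk;   /* set at init */

    static void l2x0_sync_sketch(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&l2x0_sync_lock_sk, flags);
        writel_relaxed(0, l2x0_base_sk + 0x730);   /* CACHE_SYNC: drain the write buffer */
        spin_unlock_irqrestore(&l2x0_sync_lock_sk, flags);
    }

    /* registered at init: outer_cache.sync = l2x0_sync_sketch; */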
Santosh Shilimkar
9e65582a8e ARM: 5919/1: ARM: L2 : Errata 588369: Clean & Invalidate do not invalidate clean lines
This patch implements the work-around for erratum 588369. The secure
API is used to alter the L2 debug register because of TrustZone.

This version was updated with comments from Russell and Catalin and
generated against the 2.6.33-rc6 mainline kernel. Detailed
comments can be found at:
http://www.spinics.net/lists/linux-omap/msg23431.html

Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:55 +00:00
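In rough terms, the affected Clean & Invalidate by PA is replaced by an explicit Clean followed by an explicit Invalidate, with the L2 debug register altered around the range operation via the platform's secure API (represented here by a hypothetical stub):

    #include <linux/io.h>

    #define L2X0_CLEAN_LINE_PA  0x7B0
    #define L2X0_INV_LINE_PA    0x770

    /* Hypothetical wrapper for the secure service that writes the debug register. */
    static void secure_debug_write_sketch(unsigned long val) { /* SMC into secure world */ }

    static void l2x0_flush_line_588369_sketch(void __iomem *base, unsigned long addr)
    {
        /* Erratum 588369: Clean & Invalidate by PA may leave clean lines valid,
         * so clean and invalidate are issued as two separate operations. */
        writel_relaxed(addr, base + L2X0_CLEAN_LINE_PA);
        writel_relaxed(addr, base + L2X0_INV_LINE_PA);
    }

    /* Around the whole range operation:
     *     secure_debug_write_sketch(0x03);  ... per-line loop ...  secure_debug_write_sketch(0x00);
     */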
Santosh Shilimkar
424d6b145f ARM: 5916/1: ARM: L2 : Add maintenance by line helper functions
This patch adds the cache maintenance by line helper functions.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:54 +00:00
Russell King
bf32eb8549 Merge branch 'pending-l2x0' into cache 2009-12-14 14:54:10 +00:00
Russell King
3d1074349b ARM: cache-l2x0: make better use of background cache handling
There's no point having the hardware support background operations
if we issue a cache operation, and then wait for it to complete
before calculating the address of the next operation.  We gain no
advantage in the cache controller stalling the bus until completion.

What we should be doing is using the 'wait' time productively by
calculating the address of the next operation, and only then waiting
for the previous operation to complete.  This means that cache
operations can occur in parallel with the CPU calculating the next
address.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2009-12-14 13:35:13 +00:00
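The reordering described above turns each loop iteration into "compute the next address, then wait for the previous operation, then issue", so the controller's background handling overlaps with CPU work; a sketch of a clean-range loop in that shape:

    #include <linux/io.h>
    #include <asm/processor.h>

    #define CACHE_LINE_SIZE     32
    #define L2X0_CLEAN_LINE_PA  0x7B0

    static void l2x0_clean_range_sketch(void __iomem *base,
                                        unsigned long start, unsigned long end)
    {
        while (start < end) {
            unsigned long addr = start;
            start += CACHE_LINE_SIZE;                 /* compute the next address first...  */

            while (readl_relaxed(base + L2X0_CLEAN_LINE_PA) & 1)
                cpu_relax();                          /* ...then wait for the previous op... */

            writel_relaxed(addr, base + L2X0_CLEAN_LINE_PA);  /* ...and only now issue this one */
        }
    }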
Russell King
0eb948dd7f ARM: cache-l2x0: avoid taking spinlock for every iteration
Taking the spinlock for every iteration is very expensive; instead,
batch iterations up into 4K blocks, releasing and reacquiring the
spinlock between each block.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2009-12-14 13:34:58 +00:00
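The batching can be sketched as follows: the range is processed in blocks of at most 4K, with the lock taken per block rather than per line (the per-line helper is a stand-in for the driver's own):

    #include <linux/io.h>
    #include <linux/kernel.h>    /* min() */
    #include <linux/spinlock.h>

    #define CACHE_LINE_SIZE  32

    static DEFINE_SPINLOCK(l2x0_lock_blk_sk);

    static void l2x0_clean_line_sketch(unsigned long addr) { /* per-line maintenance, elided */ }

    static void l2x0_clean_range_batched_sketch(unsigned long start, unsigned long end)
    {
        unsigned long flags;

        while (start < end) {
            unsigned long blk_end = start + min(end - start, 4096UL);

            spin_lock_irqsave(&l2x0_lock_blk_sk, flags);
            while (start < blk_end) {
                l2x0_clean_line_sketch(start);
                start += CACHE_LINE_SIZE;
            }
            spin_unlock_irqrestore(&l2x0_lock_blk_sk, flags);   /* let other CPUs in */
        }
    }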
Srinidhi Kasagar
48371cd3f4 ARM: 5845/1: l2x0: check whether l2x0 already enabled
If running in non-secure mode, accessing
some l2x0 registers will fault. So
check whether l2x0 is already enabled; if so,
do not access those secure registers.

Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-03 19:42:30 +00:00
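A sketch of the guarded init path: the secure-only writes (AUX_CTRL, invalidate-by-way, enable) are performed only if the control register shows the cache still disabled; 8 ways are assumed for brevity:

    #include <linux/init.h>
    #include <linux/io.h>
    #include <asm/processor.h>

    static void __init l2x0_init_sketch(void __iomem *base, u32 aux_val, u32 aux_mask)
    {
        u32 aux;

        /* If secure-world firmware already enabled the L2, writing these
         * registers from the non-secure kernel would fault, so skip them. */
        if (readl_relaxed(base + 0x100) & 1)     /* CTRL: already enabled */
            return;

        aux = readl_relaxed(base + 0x104);       /* AUX_CTRL */
        aux &= aux_mask;
        aux |= aux_val;
        writel_relaxed(aux, base + 0x104);

        writel_relaxed(0xff, base + 0x77C);      /* invalidate all ways */
        while (readl_relaxed(base + 0x77C) & 0xff)
            cpu_relax();

        writel_relaxed(1, base + 0x100);         /* enable the controller */
    }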
Russell King
fced80c735 [ARM] Convert asm/io.h to linux/io.h
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-06 12:10:45 +01:00
Rui Sousa
4f6627ac3b [ARM] 4568/1: fix l2x0 cache invalidate handling of unaligned addresses
The l2x0_inv_range() function doesn't handle unaligned addresses
correctly. It's necessary to clean the cache lines that are at the
start and end of the invalidate range, if the addresses are not aligned,
to prevent corruption of other data sharing the same cache line.

Signed-off-by: Rui Sousa <rui.p.m.sousa@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-09-17 14:56:39 +01:00
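The handling can be sketched as: flush (clean+invalidate) any partial line at either end of the range so data sharing those lines is written back first, and plainly invalidate only the fully covered lines; completion polling is elided for brevity:

    #include <linux/io.h>

    #define CACHE_LINE_SIZE         32
    #define L2X0_INV_LINE_PA        0x770
    #define L2X0_CLEAN_INV_LINE_PA  0x7F0

    static void l2x0_inv_range_sketch(void __iomem *base,
                                      unsigned long start, unsigned long end)
    {
        if (start & (CACHE_LINE_SIZE - 1)) {
            start &= ~(CACHE_LINE_SIZE - 1);
            writel_relaxed(start, base + L2X0_CLEAN_INV_LINE_PA);  /* partial first line */
            start += CACHE_LINE_SIZE;
        }
        if (end & (CACHE_LINE_SIZE - 1)) {
            end &= ~(CACHE_LINE_SIZE - 1);
            writel_relaxed(end, base + L2X0_CLEAN_INV_LINE_PA);    /* partial last line */
        }

        while (start < end) {
            writel_relaxed(start, base + L2X0_INV_LINE_PA);        /* safe to invalidate */
            start += CACHE_LINE_SIZE;
        }
    }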
Catalin Marinas
0762097625 [ARM] 4500/1: Add locking around the background L2x0 cache operations
The background operations of the L2x0 cache controllers are aborted if
another operation is issued on the same or different core. This patch
protects the maintenance operation issuing/polling with a spinlock.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-07-20 21:29:44 +01:00
Catalin Marinas
382266ad5a [ARM] 4135/1: Add support for the L210/L220 cache controllers
This patch adds the support for the L210/L220 (outer) cache
controller. The cache range operations are done by index/way since the L2
cache controller only accepts physical addresses.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-02-11 16:48:02 +00:00