Add support for the MIPS Boston development board to generic kernels,
which essentially amounts to:
- Adding the device tree source for the MIPS Boston board.
- Adding a Kconfig fragment which enables the appropriate drivers for
the MIPS Boston board.
With these changes in place generic kernels will support the board by
default, and kernels with only the drivers needed for Boston enabled can
be configured by setting BOARDS=boston during configuration. For
example:
$ make ARCH=mips 64r6el_defconfig BOARDS=boston
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16485/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When building a FIT image we may want the kernel to build multiple .dtb
files, but we don't want to build them all into the kernel binary as
object files since they'll instead be included in the FIT image.
Commit daa10170da ("MIPS: DTS: img: add device tree for Marduk board")
however created arch/mips/boot/dts/img/Makefile with a line that builds
any enabled .dtb files into the kernel. Remove this & build the
pistachio object specifically, in preparation for adding .dtb targets
which we don't want to build into the kernel.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Rahul Bedarkar <rahulbedarkar89@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16484/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add a driver for the clocks provided by the MIPS Boston board from
Imagination Technologies. 2 clocks are provided - the system clock & the
CPU clock - and each is a simple fixed rate clock whose frequency can be
determined by reading a register provided by the board.
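As a rough sketch of the approach (not the actual driver code; the
register offset, field layout, names and the compatible string below
are illustrative assumptions, and only the system clock is shown), each
clock amounts to reading the board register once and registering a
fixed-rate clock with the common clock framework:
#include <linux/bitfield.h>
#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of_address.h>

/* Hypothetical register layout, for illustration only. */
#define BOSTON_PLAT_MMCMDIV             0x30
#define BOSTON_PLAT_MMCMDIV_CLK0DIV     GENMASK(7, 0)
#define BOSTON_PLAT_MMCMDIV_INPUT       GENMASK(15, 8)

static void __init clk_boston_setup(struct device_node *np)
{
        void __iomem *regs = of_iomap(np, 0);
        u32 mmcmdiv, in_freq, sys_freq;
        struct clk_hw *hw;

        if (!regs)
                return;

        /* The board reports the input frequency & divider in one register. */
        mmcmdiv = readl(regs + BOSTON_PLAT_MMCMDIV);
        in_freq = FIELD_GET(BOSTON_PLAT_MMCMDIV_INPUT, mmcmdiv) * 1000000;
        sys_freq = in_freq / FIELD_GET(BOSTON_PLAT_MMCMDIV_CLK0DIV, mmcmdiv);

        /* A simple fixed-rate clock whose rate came from the register. */
        hw = clk_hw_register_fixed_rate(NULL, "sys", NULL, 0, sys_freq);
        if (IS_ERR(hw))
                pr_err("failed to register sys clock\n");
}
CLK_OF_DECLARE(clk_boston, "img,boston-clock", clk_boston_setup);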
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: linux-clk@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16483/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add device tree binding documentation for the clocks provided by the
MIPS Boston development board from Imagination Technologies, and a
header file describing the available clocks for use by device trees &
driver.
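A minimal sketch of what such a header can look like (the index values
here are chosen for illustration; the real header may differ):
#ifndef __DT_BINDINGS_CLOCK_BOSTON_CLOCK_H__
#define __DT_BINDINGS_CLOCK_BOSTON_CLOCK_H__

/* Clock specifier indices shared between device trees & the driver. */
#define BOSTON_CLK_INPUT        0
#define BOSTON_CLK_SYS          1
#define BOSTON_CLK_CPU          2

#endif /* __DT_BINDINGS_CLOCK_BOSTON_CLOCK_H__ */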
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Cc: linux-clk@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16482/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If a negative system call number is used when system call tracing is
enabled, syscall_trace_enter() will return that negative system call
number without having written the return value and error flag into the
pt_regs.
The caller then treats it as a cancelled system call and assumes that
the return value and error flag are already written, leaving the
negative system call number in the return register ($v0), and the 4th
system call argument in the error register ($a3).
Add a special case to detect this at the end of syscall_trace_enter(),
to set the return value to error -ENOSYS when this happens.
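A hedged sketch of the shape of the fix (not necessarily the exact
arch/mips/kernel/ptrace.c code):
asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
{
        /* ... existing seccomp/tracehook/audit handling ... */

        /*
         * A negative syscall number is treated as a cancelled syscall
         * by the caller, which assumes the return registers were
         * already written; make sure $v0/$a3 hold a real error rather
         * than leftover values.
         */
        if (syscall < 0)
                syscall_set_return_value(current, regs, -ENOSYS, 0);
        return syscall;
}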
Fixes: d218af7849 ("MIPS: scall: Always run the seccomp syscall filters")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16653/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When the system call return value is forced to be an error (for example
due to SECCOMP_RET_ERRNO), syscall_set_return_value() puts the error
code in the return register $v0 and -1 in the error register $a3.
However normally executed system calls put 1 in the error register
rather than -1, so fix syscall_set_return_value() to be consistent with
that.
I don't anticipate that anything would have been broken by this, since
the most natural way to check the error register on MIPS would be a
conditional branch if error register is [not] equal to zero (bnez or
beqz).
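For reference, a sketch of the resulting helper (modelled on, though
not guaranteed to match exactly, arch/mips/include/asm/syscall.h):
static inline void syscall_set_return_value(struct task_struct *task,
                                            struct pt_regs *regs,
                                            int error, long val)
{
        if (error) {
                regs->regs[2] = -error; /* $v0: positive error code */
                regs->regs[7] = 1;      /* $a3: error flag, now 1 rather than -1 */
        } else {
                regs->regs[2] = val;    /* $v0: result */
                regs->regs[7] = 0;      /* $a3: success */
        }
}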
Fixes: 1d7bf993e0 ("MIPS: ftrace: Add support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16652/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The sys_exit trace event takes a single return value for the system
call, for which MIPS passes the value of the $v0 (result) register;
however, MIPS returns positive error codes in $v0, with $a3 indicating
that $v0 contains an error code. As a result, erroring system calls are
traced returning positive error numbers that can't always be
distinguished from success.
Use regs_return_value() to negate the error code if $a3 is set.
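A simplified sketch of what regs_return_value() does on MIPS (the real
helper also special-cases kernel-mode registers):
static inline long regs_return_value(struct pt_regs *regs)
{
        /* A non-zero $a3 ($7) means $v0 ($2) holds a positive error code. */
        if (regs->regs[7])
                return -regs->regs[2];
        return regs->regs[2];
}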
Fixes: 1d7bf993e0 ("MIPS: ftrace: Add support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.13+
Patchwork: https://patchwork.linux-mips.org/patch/16651/
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
MIPS selects HAVE_SYSCALL_TRACEPOINTS twice. The first was added back in
v3.13 by commit 1d7bf993e0 ("MIPS: ftrace: Add support for syscall
tracepoints."), but then a second redundant one was added in v4.2 by
commit fb59e394c3 ("MIPS: ftrace: Enable support for syscall
tracepoints.").
Drop the duplicate select.
Fixes: fb59e394c3 ("MIPS: ftrace: Enable support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16654/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Hardcode the absence of the MIPS16e2 ASE for all the systems that do so
for the MIPS16 ASE already, providing for code to be optimized away.
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16097/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Only now that both feature determination and unaligned emulation are in
place, add reporting to /proc/cpuinfo, so that the presence of
"mips16e2" there indicates not only our recognition of the hardware
feature, but correct unaligned emulation as well.
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16757/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Implement extended LWSP/SWSP instruction subdecoding for the purpose of
unaligned GP-relative memory access emulation.
With the introduction of the MIPS16e2 ASE[1] the previously must-be-zero
3-bit field at bits 7..5 of the extended encodings of the instructions
selected with the LWSP and SWSP major opcodes has become a `sel' field,
acting as an opcode extension for additional operations. In both cases
the `sel' value of 0 has retained the original operation, that is:
LW rx, offset(sp)
and:
SW rx, offset(sp)
for LWSP and SWSP respectively. In hardware predating the MIPS16e2 ASE
other values may or may not have been decoded, architecturally yielding
unpredictable results, and in our unaligned memory access emulation we
have treated the 3-bit field as a don't-care, effectively making all
the possible encodings of the field alias to the architecturally
defined encoding of 0.
For the non-zero values of the `sel' field the MIPS16e2 ASE has in
particular defined these GP-relative operations:
LW rx, offset(gp) # sel = 1
LH rx, offset(gp) # sel = 2
LHU rx, offset(gp) # sel = 4
and
SW rx, offset(gp) # sel = 1
SH rx, offset(gp) # sel = 2
for LWSP and SWSP respectively, which will trap with an Address Error
exception if the effective address calculated is not naturally-aligned
for the operation requested. These operations have been selected for
unaligned access emulation, for consistency with the corresponding
regular MIPS and microMIPS operations.
For other non-zero values of the `sel' field the MIPS16e2 ASE has
defined further operations, which however either never trap with an
Address Error exception, such as LWL or GP-relative SB, or are not
supposed to be emulated, such as LL or SC. These operations have been
excluded from unaligned access emulation, should an Address Error
exception ever happen with them.
Subdecode the `sel' field in unaligned access emulation then for the
extended encodings of the instructions selected with the LWSP and SWSP
major opcodes, whenever support for the MIPS16e2 ASE has been detected
in hardware, and either emulate the operation requested or send SIGBUS
to the originating process, according to the selection described above.
For hardware implementing the MIPS16 ASE but lacking MIPS16e2 ASE
support, retain the original interpretation of the `sel' field.
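In outline the added subdecoding for the LWSP major opcode looks
something like the sketch below, with SWSP handled analogously
(emulate_load(), SIGN_EXTEND/ZERO_EXTEND and the surrounding variables
are hypothetical stand-ins rather than the actual unaligned.c helpers):
if (!cpu_has_mips16e2)
        sel = 0;                /* pre-MIPS16e2: field is a don't-care */

switch (sel) {                  /* bits 7..5 of the extended encoding */
case 0:                         /* LW rx, offset(sp): original operation */
case 1:                         /* LW rx, offset(gp) */
        value = emulate_load(addr, 4, SIGN_EXTEND);
        break;
case 2:                         /* LH rx, offset(gp) */
        value = emulate_load(addr, 2, SIGN_EXTEND);
        break;
case 4:                         /* LHU rx, offset(gp) */
        value = emulate_load(addr, 2, ZERO_EXTEND);
        break;
default:                        /* LL, LWL, etc.: not emulated */
        goto sigbus;
}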
The effects of this change are illustrated with the following user
program:
$ cat mips16e2-test.c
#include <inttypes.h>
#include <stdio.h>
int main(void)
{
        int64_t scratch[16] = { 0 };
        int32_t *tmp0, *tmp1, *tmp2;
        int i;
        scratch[0] = 0xc8c7c6c5c4c3c2c1;
        scratch[1] = 0xd0cfcecdcccbcac9;
        asm volatile(
                "move %0, $sp\n\t"
                "move %1, $gp\n\t"
                "move $sp, %4\n\t"
                "addiu %2, %4, 8\n\t"
                "move $gp, %2\n\t"
                "lw %2, 2($sp)\n\t"
                "sw %2, 16(%4)\n\t"
                "lw %2, 2($gp)\n\t"
                "sw %2, 24(%4)\n\t"
                "lw %2, 1($sp)\n\t"
                "sw %2, 32(%4)\n\t"
                "lh %2, 1($gp)\n\t"
                "sw %2, 40(%4)\n\t"
                "lw %2, 3($sp)\n\t"
                "sw %2, 48(%4)\n\t"
                "lhu %2, 3($gp)\n\t"
                "sw %2, 56(%4)\n\t"
                "lw %2, 0(%4)\n\t"
                "sw %2, 66($sp)\n\t"
                "lw %2, 8(%4)\n\t"
                "sw %2, 82($gp)\n\t"
                "lw %2, 0(%4)\n\t"
                "sw %2, 97($sp)\n\t"
                "lw %2, 8(%4)\n\t"
                "sh %2, 113($gp)\n\t"
                "move $gp, %1\n\t"
                "move $sp, %0"
                : "=&d" (tmp0), "=&d" (tmp1), "=&d" (tmp2), "=m" (scratch)
                : "d" (scratch));
        for (i = 0; i < sizeof(scratch) / sizeof(*scratch); i += 2)
                printf("%016" PRIx64 "\t%016" PRIx64 "\n",
                       scratch[i], scratch[i + 1]);
        return 0;
}
$
to be compiled with:
$ gcc -mips16 -mips32r2 -Wa,-mmips16e2 -o mips16e2-test mips16e2-test.c
$
With 74Kf hardware, which does not implement the MIPS16e2 ASE, this
program produces the following output:
$ ./mips16e2-test
c8c7c6c5c4c3c2c1 d0cfcecdcccbcac9
00000000c6c5c4c3 00000000c6c5c4c3
00000000c5c4c3c2 00000000c5c4c3c2
00000000c7c6c5c4 00000000c7c6c5c4
0000c4c3c2c10000 0000000000000000
0000cccbcac90000 0000000000000000
000000c4c3c2c100 0000000000000000
000000cccbcac900 0000000000000000
$
regardless of whether the change has been applied or not.
With the change not applied and interAptiv MR2 hardware[2], which does
implement the MIPS16e2 ASE, it produces the following output:
$ ./mips16e2-test
c8c7c6c5c4c3c2c1 d0cfcecdcccbcac9
00000000c6c5c4c3 00000000cecdcccb
00000000c5c4c3c2 00000000cdcccbca
00000000c7c6c5c4 00000000cfcecdcc
0000c4c3c2c10000 0000000000000000
0000000000000000 0000cccbcac90000
000000c4c3c2c100 0000000000000000
0000000000000000 000000cccbcac900
$
which shows that for GP-relative operations the correct trapping address
calculated from $gp has been obtained from the CP0 BadVAddr register and
so has data from the source operand; however, masking and extension
have not been applied for halfword operations.
With the change applied and interAptiv MR2 hardware the program
produces the following output:
$ ./mips16e2-test
c8c7c6c5c4c3c2c1 d0cfcecdcccbcac9
00000000c6c5c4c3 00000000cecdcccb
00000000c5c4c3c2 00000000ffffcbca
00000000c7c6c5c4 000000000000cdcc
0000c4c3c2c10000 0000000000000000
0000000000000000 0000cccbcac90000
000000c4c3c2c100 0000000000000000
0000000000000000 0000000000cac900
$
as expected.
References:
[1] "MIPS32 Architecture for Programmers: MIPS16e2 Application-Specific
Extension Technical Reference Manual", Imagination Technologies
Ltd., Document Number: MD01172, Revision 01.00, April 26, 2016
[2] "MIPS32 interAptiv Multiprocessing System Software User's Manual",
Imagination Technologies Ltd., Document Number: MD00904, Revision
02.01, June 15, 2016, Chapter 24 "MIPS16e Application-Specific
Extension to the MIPS32 Instruction Set", pp. 871-883
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16095/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Identify the presence of the MIPS16e2 ASE as per the architecture
specification[1], by checking for CP0 Config5.CA2 bit being 1[2].
References:
[1] "MIPS32 Architecture for Programmers: MIPS16e2 Application-Specific
Extension Technical Reference Manual", Imagination Technologies
Ltd., Document Number: MD01172, Revision 01.00, April 26, 2016,
Section 1.2 "Software Detection of the ASE", p. 5
[2] "MIPS32 interAptiv Multiprocessing System Software User's Manual",
Imagination Technologies Ltd., Document Number: MD00904, Revision
02.01, June 15, 2016, Section 2.2.1.6 "Device Configuration 5 --
Config5 (CP0 Register 16, Select 5)", pp. 71-72
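A hedged sketch of the detection described above (the constant names
are assumptions in the usual cpu-probe.c style):
static inline void decode_config5_mips16e2(struct cpuinfo_mips *c)
{
        unsigned int config5 = read_c0_config5();

        /* Config5.CA2 set means the MIPS16e2 ASE is implemented [1][2]. */
        if (config5 & MIPS_CONF5_CA2)
                c->ases |= MIPS_ASE_MIPS16E2;
}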
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16094/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch adds a gettimeofday_fallback() function that wraps an
assembly invocation of the gettimeofday() syscall using
__NR_gettimeofday.
This function is used if the pure VDSO implementation of gettimeofday()
does not succeed for any reason. Its implementation is enclosed in
"#ifdef CONFIG_MIPS_CLOCK_VSYSCALL" to be in sync with the similar
arrangement for __vdso_gettimeofday().
If syscall invocation via __NR_gettimeofday fails, register a3 will
be set. So, after the syscall, register a3 is tested and the return
value is negated if it's set.
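In outline the fallback looks something like the sketch below
(illustrative only; the real code also needs the full set of
caller-clobbered registers in the clobber list):
static __always_inline long gettimeofday_fallback(struct timeval *_tv,
                                                  struct timezone *_tz)
{
        register struct timezone *tz asm("a1") = _tz;
        register struct timeval *tv asm("a0") = _tv;
        register long ret asm("v0");
        register long nr asm("v0") = __NR_gettimeofday;
        register long error asm("a3");

        asm volatile(
        "       syscall\n"
        : "=r" (ret), "=r" (error)
        : "r" (tv), "r" (tz), "r" (nr)
        : "memory");

        /* A non-zero a3 means failure: v0 holds a positive errno. */
        return error ? -ret : ret;
}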
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: Douglas Leung <douglas.leung@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Petar Jovanovic <petar.jovanovic@imgtec.com>
Cc: Raghu Gandham <raghu.gandham@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16640/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch adds a clock_gettime_fallback() function that wraps an
assembly invocation of the clock_gettime() syscall using
__NR_clock_gettime.
This function is used if the pure VDSO implementation of
clock_gettime() does not succeed for any reason. For example, it is
called if the clkid parameter of clock_gettime() is not one of the
clkids listed in the switch-case block of the function
__vdso_clock_gettime() (one such case for clkid is CLOCK_BOOTTIME).
If syscall invocation via __NR_clock_gettime fails, register a3 will
be set. So, after the syscall, register a3 is tested and the return
value is negated if it's set.
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: Douglas Leung <douglas.leung@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Petar Jovanovic <petar.jovanovic@imgtec.com>
Cc: Raghu Gandham <raghu.gandham@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16639/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix an incorrect calculation in the do_monotonic() and
do_monotonic_coarse() functions that in turn caused incorrect values to
be returned by the VDSO version of the clock_gettime() system call on
mips64 if its clock ID parameter was CLOCK_MONOTONIC or
CLOCK_MONOTONIC_COARSE.
Consider these variables and their types on mips32 and mips64:
                              mips32  mips64
tk->wall_to_monotonic.tv_sec  s64     s64     (kernel/vdso.c)
vdso_data.wall_to_mono_sec    u32     u32     (kernel/vdso.c)
to_mono_sec                   u32     u32     (vdso/gettimeofday.c)
ts->tv_sec                    s32     s64     (vdso/gettimeofday.c)
In the mips64 case, the u32 vdso_data.wall_to_mono_sec variable is
updated from the 64-bit signed variable tk->wall_to_monotonic.tv_sec
(kernel/vdso.c:76), which is a negative number holding the time passed
from 1970-01-01 to the time boot started. This 64-bit signed value is
currently around 47+ years, in seconds. For instance, let this value
be:
-1489757461
or
11111111111111111111111111111111 10100111001101000001101011101011
By updating the 32-bit vdso_data.wall_to_mono_sec variable, we lose the
upper 32 bits (the sign-extension 1's).
The to_mono_sec variable is a parameter of the do_monotonic() and
do_monotonic_coarse() functions which holds the
vdso_data.wall_to_mono_sec value. Its value needs to be added
(effectively subtracted, since it holds the negative value from
tk->wall_to_monotonic.tv_sec) to the current time passed from
1970-01-01 (ts->tv_sec), which is again something like 47+ years, but
increased by the time passed from boot to the current time. ts->tv_sec
is 32 bits long on a 32-bit architecture and 64 bits long on a 64-bit
architecture. Consider the update of ts->tv_sec
(vdso/gettimeofday.c:55 & 167):
ts->tv_sec += to_mono_sec;
mips32 case: This update will be performed correctly, since both
ts->tv_sec and to_mono_sec are 32 bits long and the sign in to_mono_sec
is preserved: the implicit conversion from u32 to s32 will be done
correctly.
mips64 case: This update will be wrong, since the implicit conversion
will not be done correctly. The reason is that the conversion will be
from u32 to s64: to_mono_sec is 32 bits long for both the mips32 and
mips64 cases, so bits 63..32 of the converted to_mono_sec value will be
zeros rather than sign bits, turning the intended negative offset into
a large positive one.
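The following host-side program, using the example value above, is an
illustration of the difference between the two conversions (assuming a
typical host where int is 32 bits):
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        int64_t wall_to_mono = -1489757461;     /* ~47+ years, negative */
        uint32_t to_mono_sec = wall_to_mono;    /* truncated, as in vdso_data */
        int32_t sec32 = 1489757461 + 60;        /* mips32 ts->tv_sec: boot + 60s */
        int64_t sec64 = 1489757461 + 60;        /* mips64 ts->tv_sec: boot + 60s */

        sec32 += to_mono_sec;   /* u32 -> s32: wraps, yields 60 (correct) */
        sec64 += to_mono_sec;   /* u32 -> s64: zero-extends, bogus result */

        printf("mips32: %" PRId32 "\nmips64: %" PRId64 "\n", sec32, sec64);
        return 0;
}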
So, in order to make the mips64 implementation work properly for the
MONOTONIC and MONOTONIC_COARSE clock IDs, the size of the
wall_to_mono_sec field in the mips_vdso_data union and of the
respective parameters of the do_monotonic() and do_monotonic_coarse()
functions should be changed from u32 to u64. For consistency, the same
size change from u32 to u64 is also done for the wall_to_mono_nsec
field and the corresponding function parameters.
As far as similar situations for other architectures are concerned,
let's take a look at arm. Arm has two distinct vdso_data structures
for 32-bit & 64-bit cases, and arm's wall_to_mono_sec and
wall_to_mono_nsec are u32 for 32-bit and u64 for 64-bit cases.
On the other hand, MIPS has only one structure (mips_vdso_data),
hence the need for changing the size of the above-mentioned parameters.
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: Douglas Leung <douglas.leung@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Petar Jovanovic <petar.jovanovic@imgtec.com>
Cc: Raghu Gandham <raghu.gandham@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16638/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Use current_cpu_type() to check for 4Kc processors instead of checking
the PRID directly. This will allow for the 4Kc case to be optimised out
of kernels that can't run on 4Kc processors, thanks to __get_cpu_type()
and its unreachable() call.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16205/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
CONFIG_MIPS_PGD_C0_CONTEXT, which allows a pointer to the page directory
to be stored in the cop0 Context register when enabled, was previously
only allowed for MIPSr2. MIPSr6 is just as able to make use of it, so
allow it there too.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16204/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In systems where there are multiple actors updating the TLB, the
potential exists for a race condition wherein a CPU hits a TLB exception
but by the time it reaches a TLBP instruction the affected TLB entry may
have been replaced. This can happen if, for example, a CPU shares the
TLB between hardware threads (VPs) within a core and one of them
replaces the entry that another has just taken a TLB exception for.
We handle this race in the case of the Hardware Table Walker (HTW) being
the other actor already, but didn't take into account the potential for
multiple threads racing. Include the code for aborting TLB exception
handling in affected multi-threaded systems, those being the I6400 &
I6500 CPUs which share TLB entries between VPs.
In the case of using RiXi without dedicated exceptions we have never
handled this race, even for HTW. This patch adds WARN()s to these
cases, which ought never to be hit because all CPUs with either HTW or
shared FTLB RAMs also include dedicated RiXi exceptions; the WARN()s
will make it obvious if that assumption ever turns out to be wrong.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16203/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Some systems share FTLB RAMs or entries between sibling CPUs (ie.
hardware threads, or VP(E)s, within a core). These properties require
kernel handling in various places. As a start this patch introduces
cpu_has_shared_ftlb_ram & cpu_has_shared_ftlb_entries feature macros
which we set appropriately for I6400 & I6500 CPUs. Further patches will
make use of these macros as appropriate.
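A simplified sketch of how the macros can end up looking in
asm/cpu-features.h (the option flag names are assumptions; the real
definitions may differ in detail), with platforms able to override
them:
#ifndef cpu_has_shared_ftlb_ram
# define cpu_has_shared_ftlb_ram \
        (current_cpu_data.options & MIPS_CPU_SHARED_FTLB_RAM)
#endif

#ifndef cpu_has_shared_ftlb_entries
# define cpu_has_shared_ftlb_entries \
        (current_cpu_data.options & MIPS_CPU_SHARED_FTLB_ENTRIES)
#endif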
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16202/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On pre-r6 systems with the MT ASE the CPS SMP code included checks to
halt the VPE running mips_cps_boot_vpes() if its bit in the struct
core_boot_config vpe_mask field is clear. This was largely done in order
to allow us to start arbitrary VPEs within a core despite the fact that
hardware is typically configured to run only VPE0 after powering up a
core. VPE0 would start the desired other VPEs, halt itself, and the fact
that VPE0 started would be largely hidden & irrelevant.
In MIPSr6 multithreading we have control over which VPs start executing
when a core powers up via the core's CPC registers accessed remotely
through the redirect block. For this reason the MIPSr6 multithreading
path in mips_cps_boot_vpes() hasn't bothered up until now to handle
halting the VP running it.
However it is possible to power up cores entirely in hardware by using a
pwr_up pin associated with the core. Unfortunately some systems wire
this pin to a logic 1, which means that it is possible for a core to
power up at a point that software doesn't expect. The result is that we
generally go execute the kernel on a CPU that ought not to be running &
the results can be unpredictable.
Handle this case by stopping VPs that we don't expect to be running in
mips_cps_boot_vpes() - with this change even if a core powers up it will
do nothing useful & all VPs within it will stop running before they
proceed to run general kernel code & do any damage. Ideally we would
produce some sort of warning here, but given the stage of core bringup
this happens at that would be non-trivial. We also will only hit this if
a core starts up after being offlined via hotplug, and when that happens
we will already produce a warning that the CPU didn't power down in
cps_cpu_die() which seems sufficient.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16198/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If we get into a state where a core that ought to power down isn't doing
so then the current result is that another CPU gets stuck inside
cps_cpu_die() waiting for CPU that ought to be powering down to do so.
The best case scenario is that we then trigger RCU stall messages or
lockup messages, but neither makes it particularly clear what's
happening.
Handle this more gracefully by introducing a timeout beyond which we
warn the user that the core didn't power down & stop waiting for it.
This at least allows the CPU running cps_cpu_die() to continue normally,
and hopefully presuming the CPU that powered back up is doing nothing
harmful the system will continue functioning as normal.
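A hedged sketch of the bounded wait (the helper, variables and timeout
value below are illustrative assumptions):
        /* Wait for the core to leave the coherent domain, but not forever. */
        unsigned long timeout = jiffies + msecs_to_jiffies(100);

        while (!core_is_powered_down(core)) {
                if (time_after(jiffies, timeout)) {
                        pr_warn("CPU%u didn't power down, continuing anyway\n",
                                cpu);
                        break;
                }
                cpu_relax();
        }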
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16197/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Systems using the MIPS Coherence Manager (CM) cannot support multi-core
SMP with dcache aliasing. This is because CPU caches are VIPT, but
interventions in CM-based systems provide only the physical address to
remote caches. This means that interventions may behave incorrectly in
the presence of an aliasing dcache, since the physical address used
when handling an intervention may lead to operation on an aliased cache
line rather than the correct line.
Prevent us from running into this issue by refusing to boot secondary
cores in systems where dcache aliasing may occur.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16196/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Prior to MIPSr6 multithreading is only supported if CONFIG_MIPS_MT_SMP
is enabled, so CONFIG_MIPS_MT_SMP selects CONFIG_SYS_SUPPORTS_SCHED_SMT.
With MIPSr6 the CONFIG_MIPS_CPS SMP implementation always supports
multithreading, so have it select CONFIG_SYS_SUPPORTS_SCHED_SMT in order
to allow the scheduler to make better informed decisions on
multithreaded MIPSr6 systems (for example those using I6400 or I6500
CPUs).
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16195/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Rather than using BUG_ON in the case of an invalid attempt to lock
access to a non-zero VP on a pre-CM3 system, use WARN_ON so that we have
even the slightest chance of recovery.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16194/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
CM3 provides a GCR_CL_OTHER register per VP, rather than only per core.
This means that we don't need to prevent other VPs within a core from
racing with code that makes use of the core-other register region.
Reduce locking overhead by demoting the per-core spinlock providing
protection for CM2.5 & lower to a per-CPU/per-VP spinlock for CM3 &
higher.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16193/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If we're running on a system with only 1 possible CPU then it makes no
sense to reserve or initialise IPIs since we'll never use them. Avoid
doing so.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16192/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Reduce the log level for branch emulation error messages issued before
sending SIGILL by `__compute_return_epc_for_insn' as these are triggered
by user software and are not an event that would normally require any
attention. The same signal sent from elsewhere does not actually leave
any trace in the kernel log at all.
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16402/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Update commit 1ac944007b ("MIPS: math-emu: Add mfhc1 & mthc1
support.") and, as is done throughout `cop1Emulate' for other cases,
also return SIGILL right away for the MFHC1 and MTHC1 instructions
rather than jumping to a single `return' statement.
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16401/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This is a user-visible message, so we want it to be spelled correctly.
Fixes: 5f9f41c474 ("MIPS: kernel: Prepare the JR instruction for emulation on MIPS R6")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16400/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix:
* commit 8467ca0122 ("MIPS: Emulate the new MIPS R6 branch compact
(BC) instruction"),
* commit 84fef63012 ("MIPS: Emulate the new MIPS R6 BALC
instruction"),
* commit 69b9a2fd05 ("MIPS: Emulate the new MIPS R6 BEQZC and JIC
instructions"),
* commit 28d6f93d20 ("MIPS: Emulate the new MIPS R6 BNEZC and JIALC
instructions"),
* commit c893ce38b2 ("MIPS: Emulate the new MIPS R6 BOVC, BEQC and
BEQZALC instructions")
and send SIGILL rather than returning -SIGILL for R6 branch and jump
instructions. Returning -SIGILL is never correct, as the API defines
this function's result upon error to be -EFAULT, with a signal actually
issued.
Fixes: 8467ca0122 ("MIPS: Emulate the new MIPS R6 branch compact (BC) instruction")
Fixes: 84fef63012 ("MIPS: Emulate the new MIPS R6 BALC instruction")
Fixes: 69b9a2fd05 ("MIPS: Emulate the new MIPS R6 BEQZC and JIC instructions")
Fixes: 28d6f93d20 ("MIPS: Emulate the new MIPS R6 BNEZC and JIALC instructions")
Fixes: c893ce38b2 ("MIPS: Emulate the new MIPS R6 BOVC, BEQC and BEQZALC instructions")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16399/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix commit 319824eabc ("MIPS: kernel: branch: Do not emulate the
branch likelies on MIPS R6") and also send SIGILL rather than returning
-SIGILL for BLTZAL, BLTZALL, BGEZAL and BGEZALL instruction encodings no
longer supported in R6, except where emulated. Returning -SIGILL is
never correct, as the API defines this function's result upon error to
be -EFAULT, with a signal actually issued.
Fixes: 319824eabc ("MIPS: kernel: branch: Do not emulate the branch likelies on MIPS R6")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16398/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Use the more accurate `sigill_r2r6' name for the label used in the case
of sending SIGILL in the absence of the instruction emulator for an
earlier ISA level instruction that has been removed as from the R6 ISA,
so that the `sigill_r6' name is freed for the situation where an R6
instruction is not supposed to be interpreted, because the executing
processor does not support the R6 ISA.
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16397/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix commit e50c0a8fa6 ("Support the MIPS32 / MIPS64 DSP ASE.") and
send SIGILL rather than SIGBUS whenever an unimplemented BPOSGE32 DSP
ASE instruction has been encountered in `__compute_return_epc_for_insn'
as our Reserved Instruction exception handler would in response to an
attempt to actually execute the instruction. Sending SIGBUS only makes
sense for the unaligned PC case, which has since moved to
`__compute_return_epc'. Adjust the function documentation accordingly,
correct formatting and use `pr_info' rather than `printk', as the other
exit path already does.
Fixes: e50c0a8fa6 ("Support the MIPS32 / MIPS64 DSP ASE.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 2.6.14+
Patchwork: https://patchwork.linux-mips.org/patch/16396/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix a regression introduced with commit fb6883e580 ("MIPS: microMIPS:
Support handling of delay slots.") and defer to `__compute_return_epc'
if the ISA bit is set in EPC with non-MIPS16, non-microMIPS hardware,
which will then arrange for a SIGBUS due to an unaligned instruction
reference. Returning EPC here is never correct as the API defines this
function's result to be either a negative error code on failure or one
of 0 and BRANCH_LIKELY_TAKEN on success.
Fixes: fb6883e580 ("MIPS: microMIPS: Support handling of delay slots.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16395/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Complement commit fb6883e580 ("MIPS: microMIPS: Support handling of
delay slots.") and actually decode the regular MIPS JALX major
instruction opcode, the handling of which has been added with the said
commit for EPC calculation in `__compute_return_epc_for_insn'.
Fixes: fb6883e580 ("MIPS: microMIPS: Support handling of delay slots.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16394/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Terminate FPU emulation immediately whenever an ISA mode switch has been
observed. This is so that we do not interpret machine code in the wrong
mode, for example when a regular MIPS FPU instruction has been placed in
a delay slot of a jump that switches into the MIPS16 mode, as with the
following code (taken from a GCC test suite case):
00400650 <set_fast_math>:
400650: 3c020100 lui v0,0x100
400654: 03e00008 jr ra
400658: 44c2f800 ctc1 v0,c1_fcsr
40065c: 00000000 nop
[...]
004012d0 <__libc_csu_init>:
4012d0: f000 6a02 li v0,2
4012d4: f150 0b1c la v1,3f9430 <_DYNAMIC-0x6df0>
4012d8: f400 3240 sll v0,16
4012dc: e269 addu v0,v1
4012de: 659a move gp,v0
4012e0: f00c 64f6 save a0-a2,48,ra,s0-s1
4012e4: 673c move s1,gp
4012e6: f010 9978 lw v1,-32744(s1)
4012ea: d204 sw v0,16(sp)
4012ec: eb40 jalr v1
4012ee: 653b move t9,v1
4012f0: f010 997c lw v1,-32740(s1)
4012f4: f030 9920 lw s1,-32736(s1)
4012f8: e32f subu v1,s1
4012fa: 326b sra v0,v1,2
4012fc: d206 sw v0,24(sp)
4012fe: 220c beqz v0,401318 <__libc_csu_init+0x48>
401300: 6800 li s0,0
401302: 99e0 lw a3,0(s1)
401304: 4801 addiu s0,1
401306: 960e lw a2,56(sp)
401308: 4904 addiu s1,4
40130a: 950d lw a1,52(sp)
40130c: 940c lw a0,48(sp)
40130e: ef40 jalr a3
401310: 653f move t9,a3
401312: 9206 lw v0,24(sp)
401314: ea0a cmp v0,s0
401316: 61f5 btnez 401302 <__libc_csu_init+0x32>
401318: 6476 restore 48,ra,s0-s1
40131a: e8a0 jrc ra
Here `set_fast_math' is called from `40130e' (`40130f' with the ISA bit)
and emulation triggers for the CTC1 instruction. As it is in a jump
delay slot, emulation continues from `401312' (`401313' with the ISA
bit). However, we have no path to handle MIPS16 FPU code emulation,
because there are no MIPS16 FPU instructions. So the default emulation
path is taken, interpreting a 32-bit word fetched by `get_user' from
`401313' as a regular MIPS instruction, which is:
401313: f5ea0a92 sdc1 $f10,2706(t7)
This makes the FPU emulator proceed with the supposed SDC1 instruction
and consequently makes the program considered here terminate with
SIGSEGV.
A similar although less severe issue exists with pure-microMIPS
processors in the case where similarly an FPU instruction is emulated in
a delay slot of a register jump that (incorrectly) switches into the
regular MIPS mode. A subsequent instruction fetch from the jump's
target is supposed to cause an Address Error exception, however instead
we proceed with regular MIPS FPU emulation.
For simplicity then, always terminate the emulation loop whenever a mode
change is detected, denoted by an ISA mode bit flip. As from commit
377cb1b6c1 ("MIPS: Disable MIPS16/microMIPS crap for platforms not
supporting these ASEs.") the result of `get_isa16_mode' can be hardcoded
to 0, so we need to examine the ISA mode bit by hand.
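The check boils down to something like the following sketch inside the
emulation loop, where prevepc is the PC the current iteration started
from (not necessarily the exact code):
        /*
         * The ISA mode bit of the PC we are about to continue from
         * differs from the one we started with: a mode switch happened,
         * so stop emulating and let the processor take over.
         */
        if ((xcp->cp0_epc ^ prevepc) & 0x1)
                break;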
This complements commit 102cedc32a ("MIPS: microMIPS: Floating point
support.") which added JALX decoding to FPU emulation.
Fixes: 102cedc32a ("MIPS: microMIPS: Floating point support.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16393/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch switches MIPS to make use of generically implemented queued
spinlocks, rather than the ticket spinlocks used previously. This allows
us to drop a whole load of inline assembly and share more generic
code, and is also a performance win.
Results from running the AIM7 short workload on a MIPS Creator Ci40 (ie.
2 core 2 thread interAptiv CPU clocked at 546MHz) with v4.12-rc4
pistachio_defconfig, with ftrace disabled due to a current bug, and both
with & without use of queued rwlocks & spinlocks:
Forks | v4.12-rc4 | +qlocks | Change
-------|-----------|----------|--------
10 | 52630.32 | 53316.31 | +1.01%
20 | 51777.80 | 52623.15 | +1.02%
30 | 51645.92 | 52517.26 | +1.02%
40 | 51634.88 | 52419.89 | +1.02%
50 | 51506.75 | 52307.81 | +1.02%
60 | 51500.74 | 52322.72 | +1.02%
70 | 51434.81 | 52288.60 | +1.02%
80 | 51423.22 | 52434.85 | +1.02%
90 | 51428.65 | 52410.10 | +1.02%
The kernels used for these tests also had my "MIPS: Hardcode cpu_has_*
where known at compile time due to ISA" patch applied, which allows the
kernel_uses_llsc checks in cmpxchg() & xchg() to be optimised away at
compile time.
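With this switch (and the companion qrwlock switch) the MIPS
asm/spinlock.h boils down to selecting ARCH_USE_QUEUED_SPINLOCKS in
Kconfig and pulling in the generic implementations, roughly as in this
sketch:
#ifndef _ASM_SPINLOCK_H
#define _ASM_SPINLOCK_H

#include <asm/processor.h>
#include <asm/qrwlock.h>
#include <asm/qspinlock.h>

#endif /* _ASM_SPINLOCK_H */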
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16358/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch switches MIPS to make use of generically implemented queued
read/write locks, rather than the custom implementation used previously.
This allows us to drop a whole load of inline assembly and share more
generic code, and is also a performance win.
Results from running the AIM7 short workload on a MIPS Creator Ci40 (ie.
2 core 2 thread interAptiv CPU clocked at 546MHz) with v4.12-rc4
pistachio_defconfig, with ftrace disabled due to a current bug, and both
with & without use of queued rwlocks & spinlocks:
Forks | v4.12-rc4 | +qlocks | Change
-------|-----------|----------|--------
10 | 52630.32 | 53316.31 | +1.01%
20 | 51777.80 | 52623.15 | +1.02%
30 | 51645.92 | 52517.26 | +1.02%
40 | 51634.88 | 52419.89 | +1.02%
50 | 51506.75 | 52307.81 | +1.02%
60 | 51500.74 | 52322.72 | +1.02%
70 | 51434.81 | 52288.60 | +1.02%
80 | 51423.22 | 52434.85 | +1.02%
90 | 51428.65 | 52410.10 | +1.02%
The kernels used for these tests also had my "MIPS: Hardcode cpu_has_*
where known at compile time due to ISA" patch applied, which allows the
kernel_uses_llsc checks in cmpxchg() & xchg() to be optimised away at
compile time.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16357/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The __xchg() function declares its first 2 arguments in reverse order
compared to the xchg() macro, which is confusing & serves no purpose.
Reorder the arguments such that __xchg() & xchg() match.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16356/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Implement support for 1 & 2 byte cmpxchg() using read-modify-write atop
a 4 byte cmpxchg(). This allows us to support these atomic operations
despite the MIPS ISA only providing 4 & 8 byte atomic operations.
This is required in order to support queued rwlocks (qrwlock) in a later
patch, since these make use of a 1 byte cmpxchg() in their slow path.
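The idea, in a hedged sketch that is close in spirit to (though not
necessarily identical with) the new helper: operate on the aligned
32-bit word containing the byte or halfword, and publish the change
with a 4 byte cmpxchg() only if the neighbouring bytes were left
untouched in the meantime:
unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
                              unsigned long new, unsigned int size)
{
        u32 old32, new32, load32, mask, load;
        volatile u32 *ptr32;
        unsigned int shift;

        /* Find the aligned word & the position of our value within it. */
        shift = (unsigned long)ptr & (sizeof(u32) - 1);
        if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
                shift ^= sizeof(u32) - size;
        shift *= BITS_PER_BYTE;
        mask = ((1U << (size * BITS_PER_BYTE)) - 1) << shift;
        ptr32 = (volatile u32 *)((unsigned long)ptr & ~(sizeof(u32) - 1));

        load32 = *ptr32;
        do {
                /* If our byte/halfword no longer matches `old', fail. */
                load = (load32 & mask) >> shift;
                if (load != old)
                        return load;

                /* Swap in `new', keeping the neighbouring bytes intact. */
                new32 = (load32 & ~mask) | (((u32)new << shift) & mask);
                old32 = (load32 & ~mask) | (((u32)old << shift) & mask);
                load32 = cmpxchg(ptr32, old32, new32);
        } while (load32 != old32);

        return old;
}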
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16355/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Implement 1 & 2 byte xchg() using read-modify-write atop a 4 byte
cmpxchg(). This allows us to support these atomic operations despite the
MIPS ISA only providing for 4 & 8 byte atomic operations.
This is required in order to support queued spinlocks (qspinlock) in a
later patch, since these make use of a 2 byte xchg() in their slow path.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Replace the macro definition of __cmpxchg() with an inline function,
which is easier to read & modify. The cmpxchg() & cmpxchg_local() macros
are adjusted to call the new __cmpxchg() function.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16353/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The __xchg_u32() & __xchg_u64() functions now add very little value.
This patch therefore removes them, by:
- Moving memory barriers out of them & into xchg(), which also removes
the duplication & readies us to support xchg_relaxed() if we wish to.
- Calling __xchg_asm() directly from __xchg().
- Performing the check for CONFIG_64BIT being enabled in the size=8
case of __xchg().
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16352/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
xchg() has up until now simply returned the x parameter in cases where
it is called with a pointer to a value of an unsupported size. This will
often cause the calling code to hit a failure path, presuming that the
value of x differs from the content of the memory pointed at by ptr, but
we can do better by producing a compile-time or link-time error such
that unsupported calls to xchg() are detectable earlier than runtime.
This patch does this in the same way as is already done for cmpxchg(),
using a call to a missing function annotated with __compiletime_error().
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16351/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Our cmpxchg() implementation relies upon generating a call to a function
which doesn't really exist (__cmpxchg_called_with_bad_pointer) to create
a link failure in cases where cmpxchg() is called with a pointer to a
value of an unsupported size.
The __compiletime_error macro can be used to decorate a function such
that a call to it generates a compile-time, rather than a link-time,
error. This patch uses __compiletime_error to cause bad cmpxchg() calls
to error out at compile time rather than link time, allowing errors to
occur more quickly & making it easier to spot where the problem comes
from.
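The mechanism is simply a declaration with no definition anywhere,
decorated so that any call the compiler cannot prove unreachable fails
the build, along these lines:
/*
 * Never defined: __compiletime_error() turns any surviving reference
 * into a compile-time error carrying this message.
 */
extern unsigned long __cmpxchg_called_with_bad_pointer(void)
        __compiletime_error("Bad argument size for cmpxchg");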
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16350/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Use a macro to generate the 32 & 64 bit variants of the backing code for
xchg(), much as is already done for cmpxchg(). This removes the
duplication that could previously be found in __xchg_u32() &
__xchg_u64().
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16349/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Prior to this patch the xchg & cmpxchg functions have duplicated code
which is for all intents & purposes identical apart from use of a
branch-likely instruction in the R10000_LLSC_WAR case & a regular branch
instruction in the non-R10000_LLSC_WAR case.
This patch removes the duplication, declaring a __scbeqz macro to select
the branch instruction suitable for use when checking the result of an
sc instruction & making use of it to unify the 2 cases.
In __xchg_u{32,64}() this means writing the branch in asm, where it was
previously being done in C as a do...while loop for the
non-R10000_LLSC_WAR case. As this is a single instruction, and adds
consistency with the R10000_LLSC_WAR cases & the cmpxchg() code, this
seems worthwhile.
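The macro itself is tiny, roughly (sketch):
/*
 * The branch used to check an sc/scd result: R10000 needs the
 * branch-likely workaround, everything else uses a plain beqz.
 */
#if R10000_LLSC_WAR
# define __scbeqz "beqzl"
#else
# define __scbeqz "beqz"
#endif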
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16348/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Disable usage of the PREF instruction by memcpy for MIPS R6.
MIPS R6 redefines the PREF instruction with a smaller offset field than
ordinary MIPS. However, the memcpy code uses the PREF instruction with
offsets bigger than +/-256 bytes.
Malta kernels already disable usage of PREF for memcpy.
This was found during adaptation of MIPS R6 for the virtual board used
by the Android emulator.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16510/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>