commit 0a15e0b5b9

Merge tag 'omap-for-v3.11/pm-voltdomain-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/cleanup

From Tony Lindgren:

PM voltage domain clean-up via Kevin Hilman <khilman@linaro.org>:

OMAP: PM: remove requirement for voltage domain data; remove dummy data

* tag 'omap-for-v3.11/pm-voltdomain-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  ARM: AM33xx: Remove the unused voltagedomain data
  ARM: OMAP2+: Powerdomain: Remove the need to always have a voltdm associated to a pwrdm

Includes an update to Linux 3.10-rc6.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
@@ -319,7 +319,10 @@ cache<0..n>
Symlink to each of the cache devices comprising this cache set.

cache_available_percent
Percentage of cache device free.
Percentage of cache device which doesn't contain dirty data, and could
potentially be used for writeback. This doesn't mean this space isn't used
for clean cached data; the unused statistic (in priority_stats) is typically
much lower.

clear_stats
Clears the statistics associated with this cache
@@ -423,8 +426,11 @@ nbuckets
Total buckets in this cache

priority_stats
Statistics about how recently data in the cache has been accessed. This can
reveal your working set size.
Statistics about how recently data in the cache has been accessed.
This can reveal your working set size. Unused is the percentage of
the cache that doesn't contain any data. Metadata is bcache's
metadata overhead. Average is the average priority of cache buckets.
Next is a list of quantiles with the priority threshold of each.

written
Sum of all data that has been written to the cache; comparison with
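For a quick look at what those fields report in practice, the statistics can simply be read back from sysfs. A minimal sketch in C; the cache-set path below is an assumed placeholder, not something given by the text above:

/* Sketch: print a bcache cache device's priority_stats (Unused, Metadata,
 * Average and the quantile list described above).  Replace <set-uuid> with
 * a real cache set UUID; the exact path is an assumption. */
#include <stdio.h>

int main(void)
{
        const char *path = "/sys/fs/bcache/<set-uuid>/cache0/priority_stats";
        char line[128];
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}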
@@ -498,12 +498,8 @@ Your cooperation is appreciated.

Each device type has 5 bits (32 minors).

13 block 8-bit MFM/RLL/IDE controller
0 = /dev/xda First XT disk whole disk
64 = /dev/xdb Second XT disk whole disk

Partitions are handled in the same way as IDE disks
(see major number 3).
13 block Previously used for the XT disk (/dev/xdN)
Deleted in kernel v3.9.

14 char Open Sound System (OSS)
0 = /dev/mixer Mixer control
@@ -1,7 +1,7 @@
Atmel AT91RM9200 Real Time Clock

Required properties:
- compatible: should be: "atmel,at91rm9200-rtc"
- compatible: should be: "atmel,at91rm9200-rtc" or "atmel,at91sam9x5-rtc"
- reg: physical base address of the controller and length of memory mapped
region.
- interrupts: rtc alarm/event interrupt
@@ -34,7 +34,7 @@ command:
After a while you will start to get messages about current status or error like
in the original code.

Note that running a new test will stop any in progress test.
Note that running a new test will not stop any in progress test.

The following command should return actual state of the test.
% cat /sys/kernel/debug/dmatest/run
@@ -52,8 +52,8 @@ To wait for test done the user may perform a busy loop that checks the state.

The module parameters that is supplied to the kernel command line will be used
for the first performed test. After user gets a control, the test could be
interrupted or re-run with same or different parameters. For the details see
the above section "Part 2 - When dmatest is built as a module..."
re-run with the same or different parameters. For the details see the above
section "Part 2 - When dmatest is built as a module..."

In both cases the module parameters are used as initial values for the test case.
You always could check them at run-time by running
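A user-space busy loop on that state file might look like the following minimal sketch in C. It assumes debugfs is mounted at /sys/kernel/debug, and the value the file reports while a test is running is an assumption taken from the documentation above, so adjust the check for your kernel version:

/* Sketch: poll /sys/kernel/debug/dmatest/run until the test finishes. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char state[8];

        for (;;) {
                FILE *f = fopen("/sys/kernel/debug/dmatest/run", "r");

                if (!f || !fgets(state, sizeof(state), f)) {
                        if (f)
                                fclose(f);
                        return 1;
                }
                fclose(f);
                if (state[0] != 'Y' && state[0] != '1')
                        break;          /* test no longer running (assumed token) */
                usleep(100000);         /* avoid hammering debugfs */
        }
        puts("dmatest finished");
        return 0;
}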
@@ -33,6 +33,9 @@ When mounting an XFS filesystem, the following options are accepted.
removing extended attributes) the on-disk superblock feature
bit field will be updated to reflect this format being in use.

CRC enabled filesystems always use the attr2 format, and so
will reject the noattr2 mount option if it is set.

barrier
Enables the use of block layer write barriers for writes into
the journal and unwritten extent conversion. This allows for
@@ -3351,9 +3351,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
plus one apbt timer for broadcast timer.
x86_mrst_timer=apbt_only | lapic_and_apbt

xd= [HW,XT] Original XT pre-IDE (RLL encoded) disks.
xd_geo= See header of drivers/block/xd.c.

xen_emul_unplug= [HW,X86,XEN]
Unplug Xen emulated devices
Format: [unplug0,][unplug1]
@@ -80,8 +80,6 @@ Valid names are:
/dev/sdd: -> 0x0830 (forth SCSI disk)
/dev/sde: -> 0x0840 (fifth SCSI disk)
/dev/fd : -> 0x0200 (floppy disk)
/dev/xda: -> 0x0c00 (first XT disk, unused in Linux/m68k)
/dev/xdb: -> 0x0c40 (second XT disk, unused in Linux/m68k)

The name must be followed by a decimal number, that stands for the
partition number. Internally, the value of the number is just
MAINTAINERS (19 lines changed)

@@ -2890,8 +2890,8 @@ F: drivers/media/dvb-frontends/ec100*

ECRYPT FILE SYSTEM
M: Tyler Hicks <tyhicks@canonical.com>
M: Dustin Kirkland <dustin.kirkland@gazzang.com>
L: ecryptfs@vger.kernel.org
W: http://ecryptfs.org
W: https://launchpad.net/ecryptfs
S: Supported
F: Documentation/filesystems/ecryptfs.txt
@@ -4448,6 +4448,16 @@ S: Maintained
F: drivers/scsi/*iscsi*
F: include/scsi/*iscsi*

ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR
M: Or Gerlitz <ogerlitz@mellanox.com>
M: Roi Dayan <roid@mellanox.com>
L: linux-rdma@vger.kernel.org
S: Supported
W: http://www.openfabrics.org
W: www.open-iscsi.org
Q: http://patchwork.kernel.org/project/linux-rdma/list/
F: drivers/infiniband/ulp/iser

ISDN SUBSYSTEM
M: Karsten Keil <isdn@linux-pingi.de>
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
@@ -5756,7 +5766,7 @@ M: Matthew Wilcox <willy@linux.intel.com>
L: linux-nvme@lists.infradead.org
T: git git://git.infradead.org/users/willy/linux-nvme.git
S: Supported
F: drivers/block/nvme.c
F: drivers/block/nvme*
F: include/linux/nvme.h

OMAP SUPPORT
@@ -7614,7 +7624,7 @@ F: drivers/clk/spear/
SPI SUBSYSTEM
M: Mark Brown <broonie@kernel.org>
M: Grant Likely <grant.likely@linaro.org>
L: spi-devel-general@lists.sourceforge.net
L: linux-spi@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
Q: http://patchwork.kernel.org/project/spi-devel-general/list/
S: Maintained
@@ -8994,7 +9004,7 @@ S: Maintained
F: drivers/net/wireless/wl3501*

WM97XX TOUCHSCREEN DRIVERS
M: Mark Brown <broonie@opensource.wolfsonmicro.com>
M: Mark Brown <broonie@kernel.org>
M: Liam Girdwood <lrg@slimlogic.co.uk>
L: linux-input@vger.kernel.org
T: git git://opensource.wolfsonmicro.com/linux-2.6-touch
@@ -9004,7 +9014,6 @@ F: drivers/input/touchscreen/*wm97*
F: include/linux/wm97xx.h

WOLFSON MICROELECTRONICS DRIVERS
M: Mark Brown <broonie@opensource.wolfsonmicro.com>
L: patches@opensource.wolfsonmicro.com
T: git git://opensource.wolfsonmicro.com/linux-2.6-asoc
T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus
Makefile (2 lines changed)

@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 10
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc6
NAME = Unicycling Gorilla

# *DOCUMENTATION*
@@ -124,7 +124,7 @@ KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS))
endif

ccflags-y := -fpic -mno-single-pic-base -fno-builtin -I$(obj)
asflags-y := -Wa,-march=all -DZIMAGE
asflags-y := -DZIMAGE

# Supply kernel BSS size to the decompressor via a linker symbol.
KBSS_SZ = $(shell $(CROSS_COMPILE)size $(obj)/../../../../vmlinux | \
@ -1,6 +1,8 @@
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/assembler.h>
|
||||
|
||||
#ifndef CONFIG_DEBUG_SEMIHOSTING
|
||||
|
||||
#include CONFIG_DEBUG_LL_INCLUDE
|
||||
|
||||
ENTRY(putc)
|
||||
@ -10,3 +12,29 @@ ENTRY(putc)
|
||||
busyuart r3, r1
|
||||
mov pc, lr
|
||||
ENDPROC(putc)
|
||||
|
||||
#else
|
||||
|
||||
ENTRY(putc)
|
||||
adr r1, 1f
|
||||
ldmia r1, {r2, r3}
|
||||
add r2, r2, r1
|
||||
ldr r1, [r2, r3]
|
||||
strb r0, [r1]
|
||||
mov r0, #0x03 @ SYS_WRITEC
|
||||
ARM( svc #0x123456 )
|
||||
THUMB( svc #0xab )
|
||||
mov pc, lr
|
||||
.align 2
|
||||
1: .word _GLOBAL_OFFSET_TABLE_ - .
|
||||
.word semi_writec_buf(GOT)
|
||||
ENDPROC(putc)
|
||||
|
||||
.bss
|
||||
.global semi_writec_buf
|
||||
.type semi_writec_buf, %object
|
||||
semi_writec_buf:
|
||||
.space 4
|
||||
.size semi_writec_buf, 4
|
||||
|
||||
#endif
|
||||
|
@ -11,6 +11,7 @@
|
||||
#include <asm/mach-types.h>
|
||||
|
||||
.section ".start", "ax"
|
||||
.arch armv4
|
||||
|
||||
__SA1100_start:
|
||||
|
||||
|
@ -18,6 +18,7 @@
|
||||
|
||||
.section ".start", "ax"
|
||||
|
||||
.arch armv4
|
||||
b __beginning
|
||||
|
||||
__ofw_data: .long 0 @ the number of memory blocks
|
||||
|
@ -11,6 +11,7 @@
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/assembler.h>
|
||||
|
||||
.arch armv7-a
|
||||
/*
|
||||
* Debugging stuff
|
||||
*
|
||||
@ -805,8 +806,8 @@ call_cache_fn: adr r12, proc_types
|
||||
.align 2
|
||||
.type proc_types,#object
|
||||
proc_types:
|
||||
.word 0x00000000 @ old ARM ID
|
||||
.word 0x0000f000
|
||||
.word 0x41000000 @ old ARM ID
|
||||
.word 0xff00f000
|
||||
mov pc, lr
|
||||
THUMB( nop )
|
||||
mov pc, lr
|
||||
|
@ -39,8 +39,9 @@
|
||||
};
|
||||
|
||||
soc {
|
||||
ranges = <0 0 0xd0000000 0x100000
|
||||
0xf0000000 0 0xf0000000 0x1000000>;
|
||||
ranges = <0 0 0xd0000000 0x100000 /* Internal registers 1MiB */
|
||||
0xe0000000 0 0xe0000000 0x8100000 /* PCIe */
|
||||
0xf0000000 0 0xf0000000 0x1000000 /* Device Bus, NOR 16MiB */>;
|
||||
|
||||
internal-regs {
|
||||
serial@12000 {
|
||||
|
@ -27,8 +27,9 @@
|
||||
};
|
||||
|
||||
soc {
|
||||
ranges = <0 0 0xd0000000 0x100000
|
||||
0xf0000000 0 0xf0000000 0x8000000>;
|
||||
ranges = <0 0 0xd0000000 0x100000 /* Internal registers 1MiB */
|
||||
0xe0000000 0 0xe0000000 0x8100000 /* PCIe */
|
||||
0xf0000000 0 0xf0000000 0x8000000 /* Device Bus, NOR 128MiB */>;
|
||||
|
||||
internal-regs {
|
||||
serial@12000 {
|
||||
|
@ -44,6 +44,7 @@
|
||||
reg = <0x7e201000 0x1000>;
|
||||
interrupts = <2 25>;
|
||||
clock-frequency = <3000000>;
|
||||
arm,primecell-periphid = <0x00241011>;
|
||||
};
|
||||
|
||||
gpio: gpio {
|
||||
|
@ -141,8 +141,8 @@
|
||||
#size-cells = <0>;
|
||||
compatible = "fsl,imx25-cspi", "fsl,imx35-cspi";
|
||||
reg = <0x43fa4000 0x4000>;
|
||||
clocks = <&clks 62>;
|
||||
clock-names = "ipg";
|
||||
clocks = <&clks 62>, <&clks 62>;
|
||||
clock-names = "ipg", "per";
|
||||
interrupts = <14>;
|
||||
status = "disabled";
|
||||
};
|
||||
@ -182,8 +182,8 @@
|
||||
compatible = "fsl,imx25-cspi", "fsl,imx35-cspi";
|
||||
reg = <0x50004000 0x4000>;
|
||||
interrupts = <0>;
|
||||
clocks = <&clks 80>;
|
||||
clock-names = "ipg";
|
||||
clocks = <&clks 80>, <&clks 80>;
|
||||
clock-names = "ipg", "per";
|
||||
status = "disabled";
|
||||
};
|
||||
|
||||
@ -210,8 +210,8 @@
|
||||
#size-cells = <0>;
|
||||
compatible = "fsl,imx25-cspi", "fsl,imx35-cspi";
|
||||
reg = <0x50010000 0x4000>;
|
||||
clocks = <&clks 79>;
|
||||
clock-names = "ipg";
|
||||
clocks = <&clks 79>, <&clks 79>;
|
||||
clock-names = "ipg", "per";
|
||||
interrupts = <13>;
|
||||
status = "disabled";
|
||||
};
|
||||
|
@ -131,7 +131,7 @@
|
||||
compatible = "fsl,imx27-cspi";
|
||||
reg = <0x1000e000 0x1000>;
|
||||
interrupts = <16>;
|
||||
clocks = <&clks 53>, <&clks 0>;
|
||||
clocks = <&clks 53>, <&clks 53>;
|
||||
clock-names = "ipg", "per";
|
||||
status = "disabled";
|
||||
};
|
||||
@ -142,7 +142,7 @@
|
||||
compatible = "fsl,imx27-cspi";
|
||||
reg = <0x1000f000 0x1000>;
|
||||
interrupts = <15>;
|
||||
clocks = <&clks 52>, <&clks 0>;
|
||||
clocks = <&clks 52>, <&clks 52>;
|
||||
clock-names = "ipg", "per";
|
||||
status = "disabled";
|
||||
};
|
||||
@ -223,7 +223,7 @@
|
||||
compatible = "fsl,imx27-cspi";
|
||||
reg = <0x10017000 0x1000>;
|
||||
interrupts = <6>;
|
||||
clocks = <&clks 51>, <&clks 0>;
|
||||
clocks = <&clks 51>, <&clks 51>;
|
||||
clock-names = "ipg", "per";
|
||||
status = "disabled";
|
||||
};
|
||||
|
@ -631,7 +631,7 @@
|
||||
compatible = "fsl,imx51-cspi", "fsl,imx35-cspi";
|
||||
reg = <0x83fc0000 0x4000>;
|
||||
interrupts = <38>;
|
||||
clocks = <&clks 55>, <&clks 0>;
|
||||
clocks = <&clks 55>, <&clks 55>;
|
||||
clock-names = "ipg", "per";
|
||||
status = "disabled";
|
||||
};
|
||||
|
@ -714,7 +714,7 @@
|
||||
compatible = "fsl,imx53-cspi", "fsl,imx35-cspi";
|
||||
reg = <0x63fc0000 0x4000>;
|
||||
interrupts = <38>;
|
||||
clocks = <&clks 55>, <&clks 0>;
|
||||
clocks = <&clks 55>, <&clks 55>;
|
||||
clock-names = "ipg", "per";
|
||||
status = "disabled";
|
||||
};
|
||||
|
@ -30,8 +30,15 @@ static inline void set_my_cpu_offset(unsigned long off)
|
||||
static inline unsigned long __my_cpu_offset(void)
|
||||
{
|
||||
unsigned long off;
|
||||
/* Read TPIDRPRW */
|
||||
asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");
|
||||
register unsigned long *sp asm ("sp");
|
||||
|
||||
/*
|
||||
* Read TPIDRPRW.
|
||||
* We want to allow caching the value, so avoid using volatile and
|
||||
* instead use a fake stack read to hazard against barrier().
|
||||
*/
|
||||
asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));
|
||||
|
||||
return off;
|
||||
}
|
||||
#define __my_cpu_offset __my_cpu_offset()
|
||||
|
@ -33,18 +33,6 @@
|
||||
#include <asm/pgalloc.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
/*
|
||||
* We need to delay page freeing for SMP as other CPUs can access pages
|
||||
* which have been removed but not yet had their TLB entries invalidated.
|
||||
* Also, as ARMv7 speculative prefetch can drag new entries into the TLB,
|
||||
* we need to apply this same delaying tactic to ensure correct operation.
|
||||
*/
|
||||
#if defined(CONFIG_SMP) || defined(CONFIG_CPU_32v7)
|
||||
#define tlb_fast_mode(tlb) 0
|
||||
#else
|
||||
#define tlb_fast_mode(tlb) 1
|
||||
#endif
|
||||
|
||||
#define MMU_GATHER_BUNDLE 8
|
||||
|
||||
/*
|
||||
@ -112,12 +100,10 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
|
||||
static inline void tlb_flush_mmu(struct mmu_gather *tlb)
|
||||
{
|
||||
tlb_flush(tlb);
|
||||
if (!tlb_fast_mode(tlb)) {
|
||||
free_pages_and_swap_cache(tlb->pages, tlb->nr);
|
||||
tlb->nr = 0;
|
||||
if (tlb->pages == tlb->local)
|
||||
__tlb_alloc_page(tlb);
|
||||
}
|
||||
free_pages_and_swap_cache(tlb->pages, tlb->nr);
|
||||
tlb->nr = 0;
|
||||
if (tlb->pages == tlb->local)
|
||||
__tlb_alloc_page(tlb);
|
||||
}
|
||||
|
||||
static inline void
|
||||
@ -178,11 +164,6 @@ tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
|
||||
|
||||
static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
|
||||
{
|
||||
if (tlb_fast_mode(tlb)) {
|
||||
free_page_and_swap_cache(page);
|
||||
return 1; /* avoid calling tlb_flush_mmu */
|
||||
}
|
||||
|
||||
tlb->pages[tlb->nr++] = page;
|
||||
VM_BUG_ON(tlb->nr > tlb->max);
|
||||
return tlb->max - tlb->nr;
|
||||
|
@ -13,6 +13,7 @@
|
||||
|
||||
#include <linux/cpu.h>
|
||||
#include <linux/cpumask.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/percpu.h>
|
||||
#include <linux/node.h>
|
||||
@ -200,6 +201,7 @@ static inline void update_cpu_power(unsigned int cpuid, unsigned int mpidr) {}
|
||||
* cpu topology table
|
||||
*/
|
||||
struct cputopo_arm cpu_topology[NR_CPUS];
|
||||
EXPORT_SYMBOL_GPL(cpu_topology);
|
||||
|
||||
const struct cpumask *cpu_coregroup_mask(int cpu)
|
||||
{
|
||||
|
@ -492,6 +492,11 @@ static void vcpu_pause(struct kvm_vcpu *vcpu)
|
||||
wait_event_interruptible(*wq, !vcpu->arch.pause);
|
||||
}
|
||||
|
||||
static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return vcpu->arch.target >= 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
|
||||
* @vcpu: The VCPU pointer
|
||||
@ -508,8 +513,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
int ret;
|
||||
sigset_t sigsaved;
|
||||
|
||||
/* Make sure they initialize the vcpu with KVM_ARM_VCPU_INIT */
|
||||
if (unlikely(vcpu->arch.target < 0))
|
||||
if (unlikely(!kvm_vcpu_initialized(vcpu)))
|
||||
return -ENOEXEC;
|
||||
|
||||
ret = kvm_vcpu_first_run_init(vcpu);
|
||||
@ -710,6 +714,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
|
||||
case KVM_SET_ONE_REG:
|
||||
case KVM_GET_ONE_REG: {
|
||||
struct kvm_one_reg reg;
|
||||
|
||||
if (unlikely(!kvm_vcpu_initialized(vcpu)))
|
||||
return -ENOEXEC;
|
||||
|
||||
if (copy_from_user(®, argp, sizeof(reg)))
|
||||
return -EFAULT;
|
||||
if (ioctl == KVM_SET_ONE_REG)
|
||||
@ -722,6 +730,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
|
||||
struct kvm_reg_list reg_list;
|
||||
unsigned n;
|
||||
|
||||
if (unlikely(!kvm_vcpu_initialized(vcpu)))
|
||||
return -ENOEXEC;
|
||||
|
||||
if (copy_from_user(®_list, user_list, sizeof(reg_list)))
|
||||
return -EFAULT;
|
||||
n = reg_list.n;
|
||||
|
@ -43,7 +43,14 @@ static phys_addr_t hyp_idmap_vector;
|
||||
|
||||
static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
|
||||
{
|
||||
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
|
||||
/*
|
||||
* This function also gets called when dealing with HYP page
|
||||
* tables. As HYP doesn't have an associated struct kvm (and
|
||||
* the HYP page tables are fairly static), we don't do
|
||||
* anything there.
|
||||
*/
|
||||
if (kvm)
|
||||
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
|
||||
}
|
||||
|
||||
static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
|
||||
@ -78,18 +85,20 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
|
||||
return p;
|
||||
}
|
||||
|
||||
static void clear_pud_entry(pud_t *pud)
|
||||
static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
|
||||
{
|
||||
pmd_t *pmd_table = pmd_offset(pud, 0);
|
||||
pud_clear(pud);
|
||||
kvm_tlb_flush_vmid_ipa(kvm, addr);
|
||||
pmd_free(NULL, pmd_table);
|
||||
put_page(virt_to_page(pud));
|
||||
}
|
||||
|
||||
static void clear_pmd_entry(pmd_t *pmd)
|
||||
static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
|
||||
{
|
||||
pte_t *pte_table = pte_offset_kernel(pmd, 0);
|
||||
pmd_clear(pmd);
|
||||
kvm_tlb_flush_vmid_ipa(kvm, addr);
|
||||
pte_free_kernel(NULL, pte_table);
|
||||
put_page(virt_to_page(pmd));
|
||||
}
|
||||
@ -100,11 +109,12 @@ static bool pmd_empty(pmd_t *pmd)
|
||||
return page_count(pmd_page) == 1;
|
||||
}
|
||||
|
||||
static void clear_pte_entry(pte_t *pte)
|
||||
static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
|
||||
{
|
||||
if (pte_present(*pte)) {
|
||||
kvm_set_pte(pte, __pte(0));
|
||||
put_page(virt_to_page(pte));
|
||||
kvm_tlb_flush_vmid_ipa(kvm, addr);
|
||||
}
|
||||
}
|
||||
|
||||
@ -114,7 +124,8 @@ static bool pte_empty(pte_t *pte)
|
||||
return page_count(pte_page) == 1;
|
||||
}
|
||||
|
||||
static void unmap_range(pgd_t *pgdp, unsigned long long start, u64 size)
|
||||
static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
|
||||
unsigned long long start, u64 size)
|
||||
{
|
||||
pgd_t *pgd;
|
||||
pud_t *pud;
|
||||
@ -138,15 +149,15 @@ static void unmap_range(pgd_t *pgdp, unsigned long long start, u64 size)
|
||||
}
|
||||
|
||||
pte = pte_offset_kernel(pmd, addr);
|
||||
clear_pte_entry(pte);
|
||||
clear_pte_entry(kvm, pte, addr);
|
||||
range = PAGE_SIZE;
|
||||
|
||||
/* If we emptied the pte, walk back up the ladder */
|
||||
if (pte_empty(pte)) {
|
||||
clear_pmd_entry(pmd);
|
||||
clear_pmd_entry(kvm, pmd, addr);
|
||||
range = PMD_SIZE;
|
||||
if (pmd_empty(pmd)) {
|
||||
clear_pud_entry(pud);
|
||||
clear_pud_entry(kvm, pud, addr);
|
||||
range = PUD_SIZE;
|
||||
}
|
||||
}
|
||||
@ -165,14 +176,14 @@ void free_boot_hyp_pgd(void)
|
||||
mutex_lock(&kvm_hyp_pgd_mutex);
|
||||
|
||||
if (boot_hyp_pgd) {
|
||||
unmap_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
|
||||
unmap_range(boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
|
||||
unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
|
||||
unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
|
||||
kfree(boot_hyp_pgd);
|
||||
boot_hyp_pgd = NULL;
|
||||
}
|
||||
|
||||
if (hyp_pgd)
|
||||
unmap_range(hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
|
||||
unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
|
||||
|
||||
kfree(init_bounce_page);
|
||||
init_bounce_page = NULL;
|
||||
@ -200,9 +211,10 @@ void free_hyp_pgds(void)
|
||||
|
||||
if (hyp_pgd) {
|
||||
for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
|
||||
unmap_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
|
||||
unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
|
||||
for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
|
||||
unmap_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
|
||||
unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
|
||||
|
||||
kfree(hyp_pgd);
|
||||
hyp_pgd = NULL;
|
||||
}
|
||||
@ -393,7 +405,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
|
||||
*/
|
||||
static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
|
||||
{
|
||||
unmap_range(kvm->arch.pgd, start, size);
|
||||
unmap_range(kvm, kvm->arch.pgd, start, size);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -675,7 +687,6 @@ static void handle_hva_to_gpa(struct kvm *kvm,
|
||||
static void kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
|
||||
{
|
||||
unmap_stage2_range(kvm, gpa, PAGE_SIZE);
|
||||
kvm_tlb_flush_vmid_ipa(kvm, gpa);
|
||||
}
|
||||
|
||||
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
|
||||
|
@ -386,6 +386,8 @@ int __init exynos_fdt_map_chipid(unsigned long node, const char *uname,
|
||||
|
||||
void __init exynos_init_io(struct map_desc *mach_desc, int size)
|
||||
{
|
||||
debug_ll_io_init();
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
if (initial_boot_params)
|
||||
of_scan_flat_dt(exynos_fdt_map_chipid, NULL);
|
||||
|
@ -181,14 +181,14 @@ static const char *periph_clk2_sels[] = { "pll3_usb_otg", "osc", "osc", "dummy",
|
||||
static const char *periph2_clk2_sels[] = { "pll3_usb_otg", "pll2_bus", };
|
||||
static const char *periph_sels[] = { "periph_pre", "periph_clk2", };
|
||||
static const char *periph2_sels[] = { "periph2_pre", "periph2_clk2", };
|
||||
static const char *axi_sels[] = { "periph", "pll2_pfd2_396m", "pll3_pfd1_540m", };
|
||||
static const char *axi_sels[] = { "periph", "pll2_pfd2_396m", "periph", "pll3_pfd1_540m", };
|
||||
static const char *audio_sels[] = { "pll4_post_div", "pll3_pfd2_508m", "pll3_pfd3_454m", "pll3_usb_otg", };
|
||||
static const char *gpu_axi_sels[] = { "axi", "ahb", };
|
||||
static const char *gpu2d_core_sels[] = { "axi", "pll3_usb_otg", "pll2_pfd0_352m", "pll2_pfd2_396m", };
|
||||
static const char *gpu3d_core_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll2_pfd2_396m", };
|
||||
static const char *gpu3d_shader_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll3_pfd0_720m", };
|
||||
static const char *ipu_sels[] = { "mmdc_ch0_axi", "pll2_pfd2_396m", "pll3_120m", "pll3_pfd1_540m", };
|
||||
static const char *ldb_di_sels[] = { "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_usb_otg", };
|
||||
static const char *ldb_di_sels[] = { "pll5_video_div", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_usb_otg", };
|
||||
static const char *ipu_di_pre_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll5_video_div", "pll2_pfd0_352m", "pll2_pfd2_396m", "pll3_pfd1_540m", };
|
||||
static const char *ipu1_di0_sels[] = { "ipu1_di0_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", };
|
||||
static const char *ipu1_di1_sels[] = { "ipu1_di1_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", };
|
||||
|
@ -41,13 +41,3 @@ void __init qnap_dt_ts219_init(void)
|
||||
|
||||
pm_power_off = qnap_tsx1x_power_off;
|
||||
}
|
||||
|
||||
/* FIXME: Will not work with DT. Maybe use MPP40_GPIO? */
|
||||
static int __init ts219_pci_init(void)
|
||||
{
|
||||
if (machine_is_ts219())
|
||||
kirkwood_pcie_init(KW_PCIE0);
|
||||
|
||||
return 0;
|
||||
}
|
||||
subsys_initcall(ts219_pci_init);
|
||||
|
@ -22,9 +22,10 @@ static unsigned int __init kirkwood_variant(void)
|
||||
|
||||
kirkwood_pcie_id(&dev, &rev);
|
||||
|
||||
if ((dev == MV88F6281_DEV_ID && rev >= MV88F6281_REV_A0) ||
|
||||
(dev == MV88F6282_DEV_ID))
|
||||
if (dev == MV88F6281_DEV_ID && rev >= MV88F6281_REV_A0)
|
||||
return MPP_F6281_MASK;
|
||||
if (dev == MV88F6282_DEV_ID)
|
||||
return MPP_F6282_MASK;
|
||||
if (dev == MV88F6192_DEV_ID && rev >= MV88F6192_REV_A0)
|
||||
return MPP_F6192_MASK;
|
||||
if (dev == MV88F6180_DEV_ID)
|
||||
|
@ -32,15 +32,21 @@ ENTRY(ll_set_cpu_coherent)
|
||||
|
||||
/* Add CPU to SMP group - Atomic */
|
||||
add r3, r0, #ARMADA_XP_CFB_CTL_REG_OFFSET
|
||||
ldr r2, [r3]
|
||||
1:
|
||||
ldrex r2, [r3]
|
||||
orr r2, r2, r1
|
||||
str r2, [r3]
|
||||
strex r0, r2, [r3]
|
||||
cmp r0, #0
|
||||
bne 1b
|
||||
|
||||
/* Enable coherency on CPU - Atomic */
|
||||
add r3, r0, #ARMADA_XP_CFB_CFG_REG_OFFSET
|
||||
ldr r2, [r3]
|
||||
add r3, r3, #ARMADA_XP_CFB_CFG_REG_OFFSET
|
||||
1:
|
||||
ldrex r2, [r3]
|
||||
orr r2, r2, r1
|
||||
str r2, [r3]
|
||||
strex r0, r2, [r3]
|
||||
cmp r0, #0
|
||||
bne 1b
|
||||
|
||||
dsb
|
||||
|
||||
|
@@ -124,7 +124,6 @@ obj-$(CONFIG_ARCH_OMAP3) += voltagedomains3xxx_data.o
obj-$(CONFIG_ARCH_OMAP4) += $(voltagedomain-common)
obj-$(CONFIG_ARCH_OMAP4) += voltagedomains44xx_data.o
obj-$(CONFIG_SOC_AM33XX) += $(voltagedomain-common)
obj-$(CONFIG_SOC_AM33XX) += voltagedomains33xx_data.o
obj-$(CONFIG_SOC_OMAP5) += $(voltagedomain-common)

# OMAP powerdomain framework
@@ -577,7 +577,6 @@ void __init am33xx_init_early(void)
omap2_set_globals_cm(AM33XX_L4_WK_IO_ADDRESS(AM33XX_PRCM_BASE), NULL);
omap3xxx_check_revision();
ti81xx_check_features();
am33xx_voltagedomains_init();
am33xx_powerdomains_init();
am33xx_clockdomains_init();
am33xx_hwmod_init();
@@ -102,6 +102,10 @@ static int _pwrdm_register(struct powerdomain *pwrdm)
        if (_pwrdm_lookup(pwrdm->name))
                return -EEXIST;

        if (arch_pwrdm && arch_pwrdm->pwrdm_has_voltdm)
                if (!arch_pwrdm->pwrdm_has_voltdm())
                        goto skip_voltdm;

        voltdm = voltdm_lookup(pwrdm->voltdm.name);
        if (!voltdm) {
                pr_err("powerdomain: %s: voltagedomain %s does not exist\n",
@@ -111,6 +115,7 @@ static int _pwrdm_register(struct powerdomain *pwrdm)
        pwrdm->voltdm.ptr = voltdm;
        INIT_LIST_HEAD(&pwrdm->voltdm_node);
        voltdm_add_pwrdm(voltdm, pwrdm);
skip_voltdm:
        spin_lock_init(&pwrdm->_lock);

        list_add(&pwrdm->node, &pwrdm_list);
@@ -166,6 +166,7 @@ struct powerdomain {
 * @pwrdm_disable_hdwr_sar: Disable Hardware Save-Restore feature for a pd
 * @pwrdm_set_lowpwrstchange: Enable pd transitions from a shallow to deep sleep
 * @pwrdm_wait_transition: Wait for a pd state transition to complete
 * @pwrdm_has_voltdm: Check if a voltdm association is needed
 *
 * Regarding @pwrdm_set_lowpwrstchange: On the OMAP2 and 3-family
 * chips, a powerdomain's power state is not allowed to directly
@@ -196,6 +197,7 @@ struct pwrdm_ops {
        int (*pwrdm_disable_hdwr_sar)(struct powerdomain *pwrdm);
        int (*pwrdm_set_lowpwrstchange)(struct powerdomain *pwrdm);
        int (*pwrdm_wait_transition)(struct powerdomain *pwrdm);
        int (*pwrdm_has_voltdm)(void);
};

int pwrdm_register_platform_funcs(struct pwrdm_ops *custom_funcs);
@@ -320,6 +320,12 @@ static int am33xx_pwrdm_wait_transition(struct powerdomain *pwrdm)
        return 0;
}

static int am33xx_check_vcvp(void)
{
        /* No VC/VP on am33xx devices */
        return 0;
}

struct pwrdm_ops am33xx_pwrdm_operations = {
        .pwrdm_set_next_pwrst = am33xx_pwrdm_set_next_pwrst,
        .pwrdm_read_next_pwrst = am33xx_pwrdm_read_next_pwrst,
@@ -335,4 +341,5 @@ struct pwrdm_ops am33xx_pwrdm_operations = {
        .pwrdm_set_mem_onst = am33xx_pwrdm_set_mem_onst,
        .pwrdm_set_mem_retst = am33xx_pwrdm_set_mem_retst,
        .pwrdm_wait_transition = am33xx_pwrdm_wait_transition,
        .pwrdm_has_voltdm = am33xx_check_vcvp,
};
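The opt-out pattern above generalizes: any platform whose powerdomains have no VC/VP backing can provide a pwrdm_has_voltdm callback that returns 0, and _pwrdm_register() will then skip the voltagedomain lookup. A minimal sketch follows; the SoC name is hypothetical and the snippet assumes the struct pwrdm_ops definition from powerdomain.h:

/* Hypothetical SoC without VC/VP, mirroring the am33xx change above.
 * Only the new hook is shown; the remaining pwrdm_ops callbacks would
 * be filled in as usual. */
static int mysoc_check_vcvp(void)
{
        return 0;       /* no VC/VP, so no voltagedomain association needed */
}

struct pwrdm_ops mysoc_pwrdm_operations = {
        /* ... the usual next/read/set powerstate callbacks ... */
        .pwrdm_has_voltdm = mysoc_check_vcvp,
};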
@@ -169,7 +169,6 @@ int omap_voltage_late_init(void);

extern void omap2xxx_voltagedomains_init(void);
extern void omap3xxx_voltagedomains_init(void);
extern void am33xx_voltagedomains_init(void);
extern void omap44xx_voltagedomains_init(void);

struct voltagedomain *voltdm_lookup(const char *name);
@@ -1,43 +0,0 @@
/*
 * AM33XX voltage domain data
 *
 * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com/
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation version 2.
 *
 * This program is distributed "as is" WITHOUT ANY WARRANTY of any
 * kind, whether express or implied; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/kernel.h>
#include <linux/init.h>

#include "voltage.h"

static struct voltagedomain am33xx_voltdm_mpu = {
        .name = "mpu",
};

static struct voltagedomain am33xx_voltdm_core = {
        .name = "core",
};

static struct voltagedomain am33xx_voltdm_rtc = {
        .name = "rtc",
};

static struct voltagedomain *voltagedomains_am33xx[] __initdata = {
        &am33xx_voltdm_mpu,
        &am33xx_voltdm_core,
        &am33xx_voltdm_rtc,
        NULL,
};

void __init am33xx_voltagedomains_init(void)
{
        voltdm_init(voltagedomains_am33xx);
}
@@ -101,8 +101,10 @@ static int __init sirfsoc_of_pwrc_init(void)
        struct device_node *np;

        np = of_find_matching_node(NULL, pwrc_ids);
        if (!np)
                panic("unable to find compatible pwrc node in dtb\n");
        if (!np) {
                pr_err("unable to find compatible sirf pwrc node in dtb\n");
                return -ENOENT;
        }

        /*
         * pwrc behind rtciobrg is not located in memory space
@@ -28,8 +28,10 @@ static int __init sirfsoc_of_rstc_init(void)
        struct device_node *np;

        np = of_find_matching_node(NULL, rstc_ids);
        if (!np)
                panic("unable to find compatible rstc node in dtb\n");
        if (!np) {
                pr_err("unable to find compatible sirf rstc node in dtb\n");
                return -ENOENT;
        }

        sirfsoc_rstc_base = of_iomap(np, 0);
        if (!sirfsoc_rstc_base)
@ -252,7 +252,7 @@ static struct sh_timer_config cmt10_platform_data = {
|
||||
.name = "CMT10",
|
||||
.channel_offset = 0x10,
|
||||
.timer_bit = 0,
|
||||
.clockevent_rating = 125,
|
||||
.clockevent_rating = 80,
|
||||
.clocksource_rating = 125,
|
||||
};
|
||||
|
||||
|
@ -374,6 +374,7 @@ static struct ab8500_regulator_reg_init ab8500_reg_init[] = {
|
||||
static struct regulator_init_data ab8500_regulators[AB8500_NUM_REGULATORS] = {
|
||||
/* supplies to the display/camera */
|
||||
[AB8500_LDO_AUX1] = {
|
||||
.supply_regulator = "ab8500-ext-supply3",
|
||||
.constraints = {
|
||||
.name = "V-DISPLAY",
|
||||
.min_uV = 2800000,
|
||||
@ -387,6 +388,7 @@ static struct regulator_init_data ab8500_regulators[AB8500_NUM_REGULATORS] = {
|
||||
},
|
||||
/* supplies to the on-board eMMC */
|
||||
[AB8500_LDO_AUX2] = {
|
||||
.supply_regulator = "ab8500-ext-supply3",
|
||||
.constraints = {
|
||||
.name = "V-eMMC1",
|
||||
.min_uV = 1100000,
|
||||
@ -402,6 +404,7 @@ static struct regulator_init_data ab8500_regulators[AB8500_NUM_REGULATORS] = {
|
||||
},
|
||||
/* supply for VAUX3, supplies to SDcard slots */
|
||||
[AB8500_LDO_AUX3] = {
|
||||
.supply_regulator = "ab8500-ext-supply3",
|
||||
.constraints = {
|
||||
.name = "V-MMC-SD",
|
||||
.min_uV = 1100000,
|
||||
|
@ -21,6 +21,7 @@
|
||||
#include <asm/proc-fns.h>
|
||||
|
||||
#include "db8500-regs.h"
|
||||
#include "id.h"
|
||||
|
||||
static atomic_t master = ATOMIC_INIT(0);
|
||||
static DEFINE_SPINLOCK(master_lock);
|
||||
@ -114,6 +115,9 @@ static struct cpuidle_driver ux500_idle_driver = {
|
||||
|
||||
int __init ux500_idle_init(void)
|
||||
{
|
||||
if (!(cpu_is_u8500_family() || cpu_is_ux540_family()))
|
||||
return -ENODEV;
|
||||
|
||||
/* Configure wake up reasons */
|
||||
prcmu_enable_wakeups(PRCMU_WAKEUP(ARM) | PRCMU_WAKEUP(RTC) |
|
||||
PRCMU_WAKEUP(ABB));
|
||||
|
@ -66,6 +66,9 @@ uart_rd(unsigned int reg)
|
||||
|
||||
static void putc(int ch)
|
||||
{
|
||||
if (!config_enabled(CONFIG_DEBUG_LL))
|
||||
return;
|
||||
|
||||
if (uart_rd(S3C2410_UFCON) & S3C2410_UFCON_FIFOMODE) {
|
||||
int level;
|
||||
|
||||
@ -118,7 +121,12 @@ static void arch_decomp_error(const char *x)
|
||||
#ifdef CONFIG_S3C_BOOT_UART_FORCE_FIFO
|
||||
static inline void arch_enable_uart_fifo(void)
|
||||
{
|
||||
u32 fifocon = uart_rd(S3C2410_UFCON);
|
||||
u32 fifocon;
|
||||
|
||||
if (!config_enabled(CONFIG_DEBUG_LL))
|
||||
return;
|
||||
|
||||
fifocon = uart_rd(S3C2410_UFCON);
|
||||
|
||||
if (!(fifocon & S3C2410_UFCON_FIFOMODE)) {
|
||||
fifocon |= S3C2410_UFCON_RESETBOTH;
|
||||
|
@ -16,6 +16,7 @@
|
||||
#include <linux/suspend.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/serial_core.h>
|
||||
#include <linux/io.h>
|
||||
|
||||
@ -261,7 +262,8 @@ static int s3c_pm_enter(suspend_state_t state)
|
||||
* require a full power-cycle)
|
||||
*/
|
||||
|
||||
if (!any_allowed(s3c_irqwake_intmask, s3c_irqwake_intallow) &&
|
||||
if (!of_have_populated_dt() &&
|
||||
!any_allowed(s3c_irqwake_intmask, s3c_irqwake_intallow) &&
|
||||
!any_allowed(s3c_irqwake_eintmask, s3c_irqwake_eintallow)) {
|
||||
printk(KERN_ERR "%s: No wake-up sources!\n", __func__);
|
||||
printk(KERN_ERR "%s: Aborting sleep\n", __func__);
|
||||
@ -270,8 +272,11 @@ static int s3c_pm_enter(suspend_state_t state)
|
||||
|
||||
/* save all necessary core registers not covered by the drivers */
|
||||
|
||||
samsung_pm_save_gpios();
|
||||
samsung_pm_saved_gpios();
|
||||
if (!of_have_populated_dt()) {
|
||||
samsung_pm_save_gpios();
|
||||
samsung_pm_saved_gpios();
|
||||
}
|
||||
|
||||
s3c_pm_save_uarts();
|
||||
s3c_pm_save_core();
|
||||
|
||||
@ -310,8 +315,11 @@ static int s3c_pm_enter(suspend_state_t state)
|
||||
|
||||
s3c_pm_restore_core();
|
||||
s3c_pm_restore_uarts();
|
||||
samsung_pm_restore_gpios();
|
||||
s3c_pm_restored_gpios();
|
||||
|
||||
if (!of_have_populated_dt()) {
|
||||
samsung_pm_restore_gpios();
|
||||
s3c_pm_restored_gpios();
|
||||
}
|
||||
|
||||
s3c_pm_debug_init();
|
||||
|
||||
|
@ -46,12 +46,6 @@
|
||||
#include <asm/tlbflush.h>
|
||||
#include <asm/machvec.h>
|
||||
|
||||
#ifdef CONFIG_SMP
|
||||
# define tlb_fast_mode(tlb) ((tlb)->nr == ~0U)
|
||||
#else
|
||||
# define tlb_fast_mode(tlb) (1)
|
||||
#endif
|
||||
|
||||
/*
|
||||
* If we can't allocate a page to make a big batch of page pointers
|
||||
* to work on, then just handle a few from the on-stack structure.
|
||||
@ -60,7 +54,7 @@
|
||||
|
||||
struct mmu_gather {
|
||||
struct mm_struct *mm;
|
||||
unsigned int nr; /* == ~0U => fast mode */
|
||||
unsigned int nr;
|
||||
unsigned int max;
|
||||
unsigned char fullmm; /* non-zero means full mm flush */
|
||||
unsigned char need_flush; /* really unmapped some PTEs? */
|
||||
@ -103,6 +97,7 @@ extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
|
||||
static inline void
|
||||
ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
|
||||
{
|
||||
unsigned long i;
|
||||
unsigned int nr;
|
||||
|
||||
if (!tlb->need_flush)
|
||||
@ -141,13 +136,11 @@ ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long e
|
||||
|
||||
/* lastly, release the freed pages */
|
||||
nr = tlb->nr;
|
||||
if (!tlb_fast_mode(tlb)) {
|
||||
unsigned long i;
|
||||
tlb->nr = 0;
|
||||
tlb->start_addr = ~0UL;
|
||||
for (i = 0; i < nr; ++i)
|
||||
free_page_and_swap_cache(tlb->pages[i]);
|
||||
}
|
||||
|
||||
tlb->nr = 0;
|
||||
tlb->start_addr = ~0UL;
|
||||
for (i = 0; i < nr; ++i)
|
||||
free_page_and_swap_cache(tlb->pages[i]);
|
||||
}
|
||||
|
||||
static inline void __tlb_alloc_page(struct mmu_gather *tlb)
|
||||
@ -167,20 +160,7 @@ tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_m
|
||||
tlb->mm = mm;
|
||||
tlb->max = ARRAY_SIZE(tlb->local);
|
||||
tlb->pages = tlb->local;
|
||||
/*
|
||||
* Use fast mode if only 1 CPU is online.
|
||||
*
|
||||
* It would be tempting to turn on fast-mode for full_mm_flush as well. But this
|
||||
* doesn't work because of speculative accesses and software prefetching: the page
|
||||
* table of "mm" may (and usually is) the currently active page table and even
|
||||
* though the kernel won't do any user-space accesses during the TLB shoot down, a
|
||||
* compiler might use speculation or lfetch.fault on what happens to be a valid
|
||||
* user-space address. This in turn could trigger a TLB miss fault (or a VHPT
|
||||
* walk) and re-insert a TLB entry we just removed. Slow mode avoids such
|
||||
* problems. (We could make fast-mode work by switching the current task to a
|
||||
* different "mm" during the shootdown.) --davidm 08/02/2002
|
||||
*/
|
||||
tlb->nr = (num_online_cpus() == 1) ? ~0U : 0;
|
||||
tlb->nr = 0;
|
||||
tlb->fullmm = full_mm_flush;
|
||||
tlb->start_addr = ~0UL;
|
||||
}
|
||||
@ -214,11 +194,6 @@ static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
|
||||
{
|
||||
tlb->need_flush = 1;
|
||||
|
||||
if (tlb_fast_mode(tlb)) {
|
||||
free_page_and_swap_cache(page);
|
||||
return 1; /* avoid calling tlb_flush_mmu */
|
||||
}
|
||||
|
||||
if (!tlb->nr && tlb->pages == tlb->local)
|
||||
__tlb_alloc_page(tlb);
|
||||
|
||||
|
@ -86,6 +86,7 @@ static inline int gpio_cansleep(unsigned gpio)
|
||||
return gpio < MCFGPIO_PIN_MAX ? 0 : __gpio_cansleep(gpio);
|
||||
}
|
||||
|
||||
#ifndef CONFIG_GPIOLIB
|
||||
static inline int gpio_request_one(unsigned gpio, unsigned long flags, const char *label)
|
||||
{
|
||||
int err;
|
||||
@ -105,5 +106,5 @@ static inline int gpio_request_one(unsigned gpio, unsigned long flags, const cha
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
#endif /* !CONFIG_GPIOLIB */
|
||||
#endif
|
||||
|
@ -2752,11 +2752,9 @@ func_return get_new_page
|
||||
#ifdef CONFIG_MAC
|
||||
|
||||
L(scc_initable_mac):
|
||||
.byte 9,12 /* Reset */
|
||||
.byte 4,0x44 /* x16, 1 stopbit, no parity */
|
||||
.byte 3,0xc0 /* receiver: 8 bpc */
|
||||
.byte 5,0xe2 /* transmitter: 8 bpc, assert dtr/rts */
|
||||
.byte 9,0 /* no interrupts */
|
||||
.byte 10,0 /* NRZ */
|
||||
.byte 11,0x50 /* use baud rate generator */
|
||||
.byte 12,1,13,0 /* 38400 baud */
|
||||
@ -2899,6 +2897,7 @@ func_start serial_init,%d0/%d1/%a0/%a1
|
||||
is_not_mac(L(serial_init_not_mac))
|
||||
|
||||
#ifdef SERIAL_DEBUG
|
||||
|
||||
/* You may define either or both of these. */
|
||||
#define MAC_USE_SCC_A /* Modem port */
|
||||
#define MAC_USE_SCC_B /* Printer port */
|
||||
@ -2908,9 +2907,21 @@ func_start serial_init,%d0/%d1/%a0/%a1
|
||||
#define mac_scc_cha_b_data_offset 0x4
|
||||
#define mac_scc_cha_a_data_offset 0x6
|
||||
|
||||
#if defined(MAC_USE_SCC_A) || defined(MAC_USE_SCC_B)
|
||||
movel %pc@(L(mac_sccbase)),%a0
|
||||
/* Reset SCC device */
|
||||
moveb #9,%a0@(mac_scc_cha_a_ctrl_offset)
|
||||
moveb #0xc0,%a0@(mac_scc_cha_a_ctrl_offset)
|
||||
/* Wait for 5 PCLK cycles, which is about 68 CPU cycles */
|
||||
/* 5 / 3.6864 MHz = approx. 1.36 us = 68 / 50 MHz */
|
||||
movel #35,%d0
|
||||
5:
|
||||
subq #1,%d0
|
||||
jne 5b
|
||||
#endif
|
||||
|
||||
#ifdef MAC_USE_SCC_A
|
||||
/* Initialize channel A */
|
||||
movel %pc@(L(mac_sccbase)),%a0
|
||||
lea %pc@(L(scc_initable_mac)),%a1
|
||||
5: moveb %a1@+,%d0
|
||||
jmi 6f
|
||||
@ -2922,9 +2933,6 @@ func_start serial_init,%d0/%d1/%a0/%a1
|
||||
|
||||
#ifdef MAC_USE_SCC_B
|
||||
/* Initialize channel B */
|
||||
#ifndef MAC_USE_SCC_A /* Load mac_sccbase only if needed */
|
||||
movel %pc@(L(mac_sccbase)),%a0
|
||||
#endif /* MAC_USE_SCC_A */
|
||||
lea %pc@(L(scc_initable_mac)),%a1
|
||||
7: moveb %a1@+,%d0
|
||||
jmi 8f
|
||||
@ -2933,6 +2941,7 @@ func_start serial_init,%d0/%d1/%a0/%a1
|
||||
jra 7b
|
||||
8:
|
||||
#endif /* MAC_USE_SCC_B */
|
||||
|
||||
#endif /* SERIAL_DEBUG */
|
||||
|
||||
jra L(serial_init_done)
|
||||
@ -3006,17 +3015,17 @@ func_start serial_putc,%d0/%d1/%a0/%a1
|
||||
|
||||
#ifdef SERIAL_DEBUG
|
||||
|
||||
#ifdef MAC_USE_SCC_A
|
||||
#if defined(MAC_USE_SCC_A) || defined(MAC_USE_SCC_B)
|
||||
movel %pc@(L(mac_sccbase)),%a1
|
||||
#endif
|
||||
|
||||
#ifdef MAC_USE_SCC_A
|
||||
3: btst #2,%a1@(mac_scc_cha_a_ctrl_offset)
|
||||
jeq 3b
|
||||
moveb %d0,%a1@(mac_scc_cha_a_data_offset)
|
||||
#endif /* MAC_USE_SCC_A */
|
||||
|
||||
#ifdef MAC_USE_SCC_B
|
||||
#ifndef MAC_USE_SCC_A /* Load mac_sccbase only if needed */
|
||||
movel %pc@(L(mac_sccbase)),%a1
|
||||
#endif /* MAC_USE_SCC_A */
|
||||
4: btst #2,%a1@(mac_scc_cha_b_ctrl_offset)
|
||||
jeq 4b
|
||||
moveb %d0,%a1@(mac_scc_cha_b_data_offset)
|
||||
|
@ -102,21 +102,23 @@ do { \
|
||||
|
||||
#define flush_cache_range(vma, start, len) do { } while (0)
|
||||
|
||||
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
|
||||
do { \
|
||||
u32 addr = virt_to_phys(dst); \
|
||||
memcpy((dst), (src), (len)); \
|
||||
if (vma->vm_flags & VM_EXEC) { \
|
||||
invalidate_icache_range((unsigned) (addr), \
|
||||
(unsigned) (addr) + PAGE_SIZE); \
|
||||
flush_dcache_range((unsigned) (addr), \
|
||||
(unsigned) (addr) + PAGE_SIZE); \
|
||||
} \
|
||||
} while (0)
|
||||
static inline void copy_to_user_page(struct vm_area_struct *vma,
|
||||
struct page *page, unsigned long vaddr,
|
||||
void *dst, void *src, int len)
|
||||
{
|
||||
u32 addr = virt_to_phys(dst);
|
||||
memcpy(dst, src, len);
|
||||
if (vma->vm_flags & VM_EXEC) {
|
||||
invalidate_icache_range(addr, addr + PAGE_SIZE);
|
||||
flush_dcache_range(addr, addr + PAGE_SIZE);
|
||||
}
|
||||
}
|
||||
|
||||
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
|
||||
do { \
|
||||
memcpy((dst), (src), (len)); \
|
||||
} while (0)
|
||||
static inline void copy_from_user_page(struct vm_area_struct *vma,
|
||||
struct page *page, unsigned long vaddr,
|
||||
void *dst, void *src, int len)
|
||||
{
|
||||
memcpy(dst, src, len);
|
||||
}
|
||||
|
||||
#endif /* _ASM_MICROBLAZE_CACHEFLUSH_H */
|
||||
|
@ -99,13 +99,13 @@ static inline int access_ok(int type, const void __user *addr,
|
||||
if ((get_fs().seg < ((unsigned long)addr)) ||
|
||||
(get_fs().seg < ((unsigned long)addr + size - 1))) {
|
||||
pr_debug("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
|
||||
type ? "WRITE" : "READ ", (u32)addr, (u32)size,
|
||||
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
|
||||
(u32)get_fs().seg);
|
||||
return 0;
|
||||
}
|
||||
ok:
|
||||
pr_debug("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
|
||||
type ? "WRITE" : "READ ", (u32)addr, (u32)size,
|
||||
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
|
||||
(u32)get_fs().seg);
|
||||
return 1;
|
||||
}
|
||||
|
@ -428,13 +428,16 @@ static void octeon_restart(char *command)
|
||||
*/
|
||||
static void octeon_kill_core(void *arg)
|
||||
{
|
||||
mb();
|
||||
if (octeon_is_simulation()) {
|
||||
/* The simulator needs the watchdog to stop for dead cores */
|
||||
cvmx_write_csr(CVMX_CIU_WDOGX(cvmx_get_core_num()), 0);
|
||||
if (octeon_is_simulation())
|
||||
/* A break instruction causes the simulator stop a core */
|
||||
asm volatile ("sync\nbreak");
|
||||
}
|
||||
asm volatile ("break" ::: "memory");
|
||||
|
||||
local_irq_disable();
|
||||
/* Disable watchdog on this core. */
|
||||
cvmx_write_csr(CVMX_CIU_WDOGX(cvmx_get_core_num()), 0);
|
||||
/* Spin in a low power mode. */
|
||||
while (true)
|
||||
asm volatile ("wait" ::: "memory");
|
||||
}
|
||||
|
||||
|
||||
|
@ -496,10 +496,6 @@ struct kvm_mips_callbacks {
|
||||
uint32_t cause);
|
||||
int (*irq_clear) (struct kvm_vcpu *vcpu, unsigned int priority,
|
||||
uint32_t cause);
|
||||
int (*vcpu_ioctl_get_regs) (struct kvm_vcpu *vcpu,
|
||||
struct kvm_regs *regs);
|
||||
int (*vcpu_ioctl_set_regs) (struct kvm_vcpu *vcpu,
|
||||
struct kvm_regs *regs);
|
||||
};
|
||||
extern struct kvm_mips_callbacks *kvm_mips_callbacks;
|
||||
int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
|
||||
|
@ -117,7 +117,7 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
|
||||
if (! ((asid += ASID_INC) & ASID_MASK) ) {
|
||||
if (cpu_has_vtag_icache)
|
||||
flush_icache_all();
|
||||
#ifdef CONFIG_VIRTUALIZATION
|
||||
#ifdef CONFIG_KVM
|
||||
kvm_local_flush_tlb_all(); /* start new asid cycle */
|
||||
#else
|
||||
local_flush_tlb_all(); /* start new asid cycle */
|
||||
|
@ -16,6 +16,38 @@
|
||||
#include <asm/isadep.h>
|
||||
#include <uapi/asm/ptrace.h>
|
||||
|
||||
/*
|
||||
* This struct defines the way the registers are stored on the stack during a
|
||||
* system call/exception. As usual the registers k0/k1 aren't being saved.
|
||||
*/
|
||||
struct pt_regs {
|
||||
#ifdef CONFIG_32BIT
|
||||
/* Pad bytes for argument save space on the stack. */
|
||||
unsigned long pad0[6];
|
||||
#endif
|
||||
|
||||
/* Saved main processor registers. */
|
||||
unsigned long regs[32];
|
||||
|
||||
/* Saved special registers. */
|
||||
unsigned long cp0_status;
|
||||
unsigned long hi;
|
||||
unsigned long lo;
|
||||
#ifdef CONFIG_CPU_HAS_SMARTMIPS
|
||||
unsigned long acx;
|
||||
#endif
|
||||
unsigned long cp0_badvaddr;
|
||||
unsigned long cp0_cause;
|
||||
unsigned long cp0_epc;
|
||||
#ifdef CONFIG_MIPS_MT_SMTC
|
||||
unsigned long cp0_tcstatus;
|
||||
#endif /* CONFIG_MIPS_MT_SMTC */
|
||||
#ifdef CONFIG_CPU_CAVIUM_OCTEON
|
||||
unsigned long long mpl[3]; /* MTM{0,1,2} */
|
||||
unsigned long long mtp[3]; /* MTP{0,1,2} */
|
||||
#endif
|
||||
} __aligned(8);
|
||||
|
||||
struct task_struct;
|
||||
|
||||
extern int ptrace_getregs(struct task_struct *child, __s64 __user *data);
|
||||
|
@ -1,55 +1,135 @@
|
||||
/*
|
||||
* This file is subject to the terms and conditions of the GNU General Public
|
||||
* License. See the file "COPYING" in the main directory of this archive
|
||||
* for more details.
|
||||
*
|
||||
* Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved.
|
||||
* Authors: Sanjay Lal <sanjayl@kymasys.com>
|
||||
*/
|
||||
* This file is subject to the terms and conditions of the GNU General Public
|
||||
* License. See the file "COPYING" in the main directory of this archive
|
||||
* for more details.
|
||||
*
|
||||
* Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved.
|
||||
* Copyright (C) 2013 Cavium, Inc.
|
||||
* Authors: Sanjay Lal <sanjayl@kymasys.com>
|
||||
*/
|
||||
|
||||
#ifndef __LINUX_KVM_MIPS_H
|
||||
#define __LINUX_KVM_MIPS_H
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
#define __KVM_MIPS
|
||||
/*
|
||||
* KVM MIPS specific structures and definitions.
|
||||
*
|
||||
* Some parts derived from the x86 version of this file.
|
||||
*/
|
||||
|
||||
#define N_MIPS_COPROC_REGS 32
|
||||
#define N_MIPS_COPROC_SEL 8
|
||||
|
||||
/* for KVM_GET_REGS and KVM_SET_REGS */
|
||||
/*
|
||||
* for KVM_GET_REGS and KVM_SET_REGS
|
||||
*
|
||||
* If Config[AT] is zero (32-bit CPU), the register contents are
|
||||
* stored in the lower 32-bits of the struct kvm_regs fields and sign
|
||||
* extended to 64-bits.
|
||||
*/
|
||||
struct kvm_regs {
|
||||
__u32 gprs[32];
|
||||
__u32 hi;
|
||||
__u32 lo;
|
||||
__u32 pc;
|
||||
|
||||
__u32 cp0reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
|
||||
/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
|
||||
__u64 gpr[32];
|
||||
__u64 hi;
|
||||
__u64 lo;
|
||||
__u64 pc;
|
||||
};
|
||||
|
||||
/* for KVM_GET_SREGS and KVM_SET_SREGS */
|
||||
struct kvm_sregs {
|
||||
};
|
||||
|
||||
/* for KVM_GET_FPU and KVM_SET_FPU */
|
||||
/*
|
||||
* for KVM_GET_FPU and KVM_SET_FPU
|
||||
*
|
||||
* If Status[FR] is zero (32-bit FPU), the upper 32-bits of the FPRs
|
||||
* are zero filled.
|
||||
*/
|
||||
struct kvm_fpu {
|
||||
__u64 fpr[32];
|
||||
__u32 fir;
|
||||
__u32 fccr;
|
||||
__u32 fexr;
|
||||
__u32 fenr;
|
||||
__u32 fcsr;
|
||||
__u32 pad;
|
||||
};
|
||||
|
||||
|
||||
/*
|
||||
* For MIPS, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access CP0
|
||||
* registers. The id field is broken down as follows:
|
||||
*
|
||||
* bits[2..0] - Register 'sel' index.
|
||||
* bits[7..3] - Register 'rd' index.
|
||||
* bits[15..8] - Must be zero.
|
||||
* bits[31..16] - 1 -> CP0 registers.
|
||||
* bits[51..32] - Must be zero.
|
||||
* bits[63..52] - As per linux/kvm.h
|
||||
*
|
||||
* Other sets registers may be added in the future. Each set would
|
||||
* have its own identifier in bits[31..16].
|
||||
*
|
||||
* The registers defined in struct kvm_regs are also accessible, the
|
||||
* id values for these are below.
|
||||
*/
|
||||
|
||||
#define KVM_REG_MIPS_R0 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 0)
|
||||
#define KVM_REG_MIPS_R1 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 1)
|
||||
#define KVM_REG_MIPS_R2 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 2)
|
||||
#define KVM_REG_MIPS_R3 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 3)
|
||||
#define KVM_REG_MIPS_R4 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 4)
|
||||
#define KVM_REG_MIPS_R5 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 5)
|
||||
#define KVM_REG_MIPS_R6 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 6)
|
||||
#define KVM_REG_MIPS_R7 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 7)
|
||||
#define KVM_REG_MIPS_R8 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 8)
|
||||
#define KVM_REG_MIPS_R9 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 9)
|
||||
#define KVM_REG_MIPS_R10 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 10)
|
||||
#define KVM_REG_MIPS_R11 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 11)
|
||||
#define KVM_REG_MIPS_R12 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 12)
|
||||
#define KVM_REG_MIPS_R13 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 13)
|
||||
#define KVM_REG_MIPS_R14 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 14)
|
||||
#define KVM_REG_MIPS_R15 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 15)
|
||||
#define KVM_REG_MIPS_R16 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 16)
|
||||
#define KVM_REG_MIPS_R17 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 17)
|
||||
#define KVM_REG_MIPS_R18 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 18)
|
||||
#define KVM_REG_MIPS_R19 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 19)
|
||||
#define KVM_REG_MIPS_R20 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 20)
|
||||
#define KVM_REG_MIPS_R21 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 21)
|
||||
#define KVM_REG_MIPS_R22 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 22)
|
||||
#define KVM_REG_MIPS_R23 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 23)
|
||||
#define KVM_REG_MIPS_R24 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 24)
|
||||
#define KVM_REG_MIPS_R25 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 25)
|
||||
#define KVM_REG_MIPS_R26 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 26)
|
||||
#define KVM_REG_MIPS_R27 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 27)
|
||||
#define KVM_REG_MIPS_R28 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 28)
|
||||
#define KVM_REG_MIPS_R29 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 29)
|
||||
#define KVM_REG_MIPS_R30 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 30)
|
||||
#define KVM_REG_MIPS_R31 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 31)
|
||||
|
||||
#define KVM_REG_MIPS_HI (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 32)
|
||||
#define KVM_REG_MIPS_LO (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 33)
|
||||
#define KVM_REG_MIPS_PC (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 34)
|
||||
|
||||
/*
|
||||
* KVM MIPS specific structures and definitions
|
||||
*
|
||||
*/
|
||||
struct kvm_debug_exit_arch {
|
||||
__u64 epc;
|
||||
};
|
||||
|
||||
/* for KVM_SET_GUEST_DEBUG */
|
||||
struct kvm_guest_debug_arch {
|
||||
};
|
||||
|
||||
/* definition of registers in kvm_run */
|
||||
struct kvm_sync_regs {
|
||||
};
|
||||
|
||||
/* dummy definition */
|
||||
struct kvm_sregs {
|
||||
};
|
||||
|
||||
struct kvm_mips_interrupt {
|
||||
/* in */
|
||||
__u32 cpu;
|
||||
__u32 irq;
|
||||
};
|
||||
|
||||
/* definition of registers in kvm_run */
|
||||
struct kvm_sync_regs {
|
||||
};
|
||||
|
||||
#endif /* __LINUX_KVM_MIPS_H */
|
||||
|
@ -22,16 +22,12 @@
|
||||
#define DSP_CONTROL 77
|
||||
#define ACX 78
|
||||
|
||||
#ifndef __KERNEL__
|
||||
/*
|
||||
* This struct defines the way the registers are stored on the stack during a
|
||||
* system call/exception. As usual the registers k0/k1 aren't being saved.
|
||||
*/
|
||||
struct pt_regs {
|
||||
#ifdef CONFIG_32BIT
|
||||
/* Pad bytes for argument save space on the stack. */
|
||||
unsigned long pad0[6];
|
||||
#endif
|
||||
|
||||
/* Saved main processor registers. */
|
||||
unsigned long regs[32];
|
||||
|
||||
@ -39,20 +35,11 @@ struct pt_regs {
|
||||
unsigned long cp0_status;
|
||||
unsigned long hi;
|
||||
unsigned long lo;
|
||||
#ifdef CONFIG_CPU_HAS_SMARTMIPS
|
||||
unsigned long acx;
|
||||
#endif
|
||||
unsigned long cp0_badvaddr;
|
||||
unsigned long cp0_cause;
|
||||
unsigned long cp0_epc;
|
||||
#ifdef CONFIG_MIPS_MT_SMTC
|
||||
unsigned long cp0_tcstatus;
|
||||
#endif /* CONFIG_MIPS_MT_SMTC */
|
||||
#ifdef CONFIG_CPU_CAVIUM_OCTEON
|
||||
unsigned long long mpl[3]; /* MTM{0,1,2} */
|
||||
unsigned long long mtp[3]; /* MTP{0,1,2} */
|
||||
#endif
|
||||
} __attribute__ ((aligned (8)));
|
||||
#endif /* __KERNEL__ */
|
||||
|
||||
/* Arbitrarily choose the same ptrace numbers as used by the Sparc code. */
|
||||
#define PTRACE_GETREGS 12
|
||||
|
@ -119,4 +119,15 @@ MODULE_AUTHOR("Ralf Baechle (ralf@linux-mips.org)");
|
||||
#undef TASK_SIZE
|
||||
#define TASK_SIZE TASK_SIZE32
|
||||
|
||||
#undef cputime_to_timeval
|
||||
#define cputime_to_timeval cputime_to_compat_timeval
|
||||
static __inline__ void
|
||||
cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value)
|
||||
{
|
||||
unsigned long jiffies = cputime_to_jiffies(cputime);
|
||||
|
||||
value->tv_usec = (jiffies % HZ) * (1000000L / HZ);
|
||||
value->tv_sec = jiffies / HZ;
|
||||
}
|
||||
|
||||
#include "../../../fs/binfmt_elf.c"
|
||||
|
@ -162,4 +162,15 @@ MODULE_AUTHOR("Ralf Baechle (ralf@linux-mips.org)");
|
||||
#undef TASK_SIZE
|
||||
#define TASK_SIZE TASK_SIZE32
|
||||
|
||||
#undef cputime_to_timeval
|
||||
#define cputime_to_timeval cputime_to_compat_timeval
|
||||
static __inline__ void
|
||||
cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value)
|
||||
{
|
||||
unsigned long jiffies = cputime_to_jiffies(cputime);
|
||||
|
||||
value->tv_usec = (jiffies % HZ) * (1000000L / HZ);
|
||||
value->tv_sec = jiffies / HZ;
|
||||
}
|
||||
|
||||
#include "../../../fs/binfmt_elf.c"
|
||||
|
@@ -25,12 +25,16 @@
#define MCOUNT_OFFSET_INSNS 4
#endif

#ifdef CONFIG_DYNAMIC_FTRACE

/* Arch override because MIPS doesn't need to run this from stop_machine() */
void arch_ftrace_update_code(int command)
{
        ftrace_modify_all_code(command);
}

#endif

/*
 * Check if the address is in kernel space
 *
@@ -93,26 +93,27 @@ static void rm7k_wait_irqoff(void)
}

/*
 * The Au1xxx wait is available only if using 32khz counter or
 * external timer source, but specifically not CP0 Counter.
 * alchemy/common/time.c may override cpu_wait!
 * Au1 'wait' is only useful when the 32kHz counter is used as timer,
 * since coreclock (and the cp0 counter) stops upon executing it. Only an
 * interrupt can wake it, so they must be enabled before entering idle modes.
 */
static void au1k_wait(void)
{
        unsigned long c0status = read_c0_status() | 1; /* irqs on */

        __asm__(
        "       .set    mips3                   \n"
        "       cache   0x14, 0(%0)             \n"
        "       cache   0x14, 32(%0)            \n"
        "       sync                            \n"
        "       nop                             \n"
        "       mtc0    %1, $12                 \n" /* wr c0status */
        "       wait                            \n"
        "       nop                             \n"
        "       nop                             \n"
        "       nop                             \n"
        "       nop                             \n"
        "       .set    mips0                   \n"
        : : "r" (au1k_wait));
        local_irq_enable();
        : : "r" (au1k_wait), "r" (c0status));
}

static int __initdata nowait;
@@ -40,6 +40,7 @@
#include <asm/processor.h>
#include <asm/vpe.h>
#include <asm/rtlx.h>
#include <asm/setup.h>

static struct rtlx_info *rtlx;
static int major;
@@ -897,22 +897,24 @@ out_sigsegv:

asmlinkage void do_tr(struct pt_regs *regs)
{
        unsigned int opcode, tcode = 0;
        u32 opcode, tcode = 0;
        u16 instr[2];
        unsigned long epc = exception_epc(regs);
        unsigned long epc = msk_isa16_mode(exception_epc(regs));

        if ((__get_user(instr[0], (u16 __user *)msk_isa16_mode(epc))) ||
            (__get_user(instr[1], (u16 __user *)msk_isa16_mode(epc + 2))))
        if (get_isa16_mode(regs->cp0_epc)) {
                if (__get_user(instr[0], (u16 __user *)(epc + 0)) ||
                    __get_user(instr[1], (u16 __user *)(epc + 2)))
                        goto out_sigsegv;
                opcode = (instr[0] << 16) | instr[1];

        /* Immediate versions don't provide a code. */
        if (!(opcode & OPCODE)) {
                if (get_isa16_mode(regs->cp0_epc))
                        /* microMIPS */
                        tcode = (opcode >> 12) & 0x1f;
                else
                        tcode = ((opcode >> 6) & ((1 << 10) - 1));
                opcode = (instr[0] << 16) | instr[1];
                /* Immediate versions don't provide a code. */
                if (!(opcode & OPCODE))
                        tcode = (opcode >> 12) & ((1 << 4) - 1);
        } else {
                if (__get_user(opcode, (u32 __user *)epc))
                        goto out_sigsegv;
                /* Immediate versions don't provide a code. */
                if (!(opcode & OPCODE))
                        tcode = (opcode >> 6) & ((1 << 10) - 1);
        }

        do_trap_or_bp(regs, tcode, "Trap");
@@ -195,7 +195,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
long
kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
{
        return -EINVAL;
        return -ENOIOCTLCMD;
}

void kvm_arch_free_memslot(struct kvm_memory_slot *free,
@@ -401,7 +401,7 @@ int
kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
                                    struct kvm_guest_debug *dbg)
{
        return -EINVAL;
        return -ENOIOCTLCMD;
}

int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
@@ -475,14 +475,248 @@ int
kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
                                struct kvm_mp_state *mp_state)
{
        return -EINVAL;
        return -ENOIOCTLCMD;
}

int
kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
                                struct kvm_mp_state *mp_state)
{
        return -EINVAL;
        return -ENOIOCTLCMD;
}

#define MIPS_CP0_32(_R, _S) \
        (KVM_REG_MIPS | KVM_REG_SIZE_U32 | 0x10000 | (8 * (_R) + (_S)))

#define MIPS_CP0_64(_R, _S) \
        (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 0x10000 | (8 * (_R) + (_S)))

#define KVM_REG_MIPS_CP0_INDEX MIPS_CP0_32(0, 0)
#define KVM_REG_MIPS_CP0_ENTRYLO0 MIPS_CP0_64(2, 0)
#define KVM_REG_MIPS_CP0_ENTRYLO1 MIPS_CP0_64(3, 0)
#define KVM_REG_MIPS_CP0_CONTEXT MIPS_CP0_64(4, 0)
#define KVM_REG_MIPS_CP0_USERLOCAL MIPS_CP0_64(4, 2)
#define KVM_REG_MIPS_CP0_PAGEMASK MIPS_CP0_32(5, 0)
#define KVM_REG_MIPS_CP0_PAGEGRAIN MIPS_CP0_32(5, 1)
#define KVM_REG_MIPS_CP0_WIRED MIPS_CP0_32(6, 0)
#define KVM_REG_MIPS_CP0_HWRENA MIPS_CP0_32(7, 0)
#define KVM_REG_MIPS_CP0_BADVADDR MIPS_CP0_64(8, 0)
#define KVM_REG_MIPS_CP0_COUNT MIPS_CP0_32(9, 0)
#define KVM_REG_MIPS_CP0_ENTRYHI MIPS_CP0_64(10, 0)
#define KVM_REG_MIPS_CP0_COMPARE MIPS_CP0_32(11, 0)
#define KVM_REG_MIPS_CP0_STATUS MIPS_CP0_32(12, 0)
#define KVM_REG_MIPS_CP0_CAUSE MIPS_CP0_32(13, 0)
#define KVM_REG_MIPS_CP0_EBASE MIPS_CP0_64(15, 1)
#define KVM_REG_MIPS_CP0_CONFIG MIPS_CP0_32(16, 0)
#define KVM_REG_MIPS_CP0_CONFIG1 MIPS_CP0_32(16, 1)
#define KVM_REG_MIPS_CP0_CONFIG2 MIPS_CP0_32(16, 2)
#define KVM_REG_MIPS_CP0_CONFIG3 MIPS_CP0_32(16, 3)
#define KVM_REG_MIPS_CP0_CONFIG7 MIPS_CP0_32(16, 7)
#define KVM_REG_MIPS_CP0_XCONTEXT MIPS_CP0_64(20, 0)
#define KVM_REG_MIPS_CP0_ERROREPC MIPS_CP0_64(30, 0)

static u64 kvm_mips_get_one_regs[] = {
        KVM_REG_MIPS_R0,
        KVM_REG_MIPS_R1,
        KVM_REG_MIPS_R2,
        KVM_REG_MIPS_R3,
        KVM_REG_MIPS_R4,
        KVM_REG_MIPS_R5,
        KVM_REG_MIPS_R6,
        KVM_REG_MIPS_R7,
        KVM_REG_MIPS_R8,
        KVM_REG_MIPS_R9,
        KVM_REG_MIPS_R10,
        KVM_REG_MIPS_R11,
        KVM_REG_MIPS_R12,
        KVM_REG_MIPS_R13,
        KVM_REG_MIPS_R14,
        KVM_REG_MIPS_R15,
        KVM_REG_MIPS_R16,
        KVM_REG_MIPS_R17,
        KVM_REG_MIPS_R18,
        KVM_REG_MIPS_R19,
        KVM_REG_MIPS_R20,
        KVM_REG_MIPS_R21,
        KVM_REG_MIPS_R22,
        KVM_REG_MIPS_R23,
        KVM_REG_MIPS_R24,
        KVM_REG_MIPS_R25,
        KVM_REG_MIPS_R26,
        KVM_REG_MIPS_R27,
        KVM_REG_MIPS_R28,
        KVM_REG_MIPS_R29,
        KVM_REG_MIPS_R30,
        KVM_REG_MIPS_R31,

        KVM_REG_MIPS_HI,
        KVM_REG_MIPS_LO,
        KVM_REG_MIPS_PC,

        KVM_REG_MIPS_CP0_INDEX,
        KVM_REG_MIPS_CP0_CONTEXT,
        KVM_REG_MIPS_CP0_PAGEMASK,
        KVM_REG_MIPS_CP0_WIRED,
        KVM_REG_MIPS_CP0_BADVADDR,
        KVM_REG_MIPS_CP0_ENTRYHI,
        KVM_REG_MIPS_CP0_STATUS,
        KVM_REG_MIPS_CP0_CAUSE,
        /* EPC set via kvm_regs, et al. */
        KVM_REG_MIPS_CP0_CONFIG,
        KVM_REG_MIPS_CP0_CONFIG1,
        KVM_REG_MIPS_CP0_CONFIG2,
        KVM_REG_MIPS_CP0_CONFIG3,
        KVM_REG_MIPS_CP0_CONFIG7,
        KVM_REG_MIPS_CP0_ERROREPC
};

static int kvm_mips_get_reg(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_one_reg *reg)
|
||||
{
|
||||
struct mips_coproc *cop0 = vcpu->arch.cop0;
|
||||
s64 v;
|
||||
|
||||
switch (reg->id) {
|
||||
case KVM_REG_MIPS_R0 ... KVM_REG_MIPS_R31:
|
||||
v = (long)vcpu->arch.gprs[reg->id - KVM_REG_MIPS_R0];
|
||||
break;
|
||||
case KVM_REG_MIPS_HI:
|
||||
v = (long)vcpu->arch.hi;
|
||||
break;
|
||||
case KVM_REG_MIPS_LO:
|
||||
v = (long)vcpu->arch.lo;
|
||||
break;
|
||||
case KVM_REG_MIPS_PC:
|
||||
v = (long)vcpu->arch.pc;
|
||||
break;
|
||||
|
||||
case KVM_REG_MIPS_CP0_INDEX:
|
||||
v = (long)kvm_read_c0_guest_index(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONTEXT:
|
||||
v = (long)kvm_read_c0_guest_context(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_PAGEMASK:
|
||||
v = (long)kvm_read_c0_guest_pagemask(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_WIRED:
|
||||
v = (long)kvm_read_c0_guest_wired(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_BADVADDR:
|
||||
v = (long)kvm_read_c0_guest_badvaddr(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_ENTRYHI:
|
||||
v = (long)kvm_read_c0_guest_entryhi(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_STATUS:
|
||||
v = (long)kvm_read_c0_guest_status(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CAUSE:
|
||||
v = (long)kvm_read_c0_guest_cause(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_ERROREPC:
|
||||
v = (long)kvm_read_c0_guest_errorepc(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONFIG:
|
||||
v = (long)kvm_read_c0_guest_config(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONFIG1:
|
||||
v = (long)kvm_read_c0_guest_config1(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONFIG2:
|
||||
v = (long)kvm_read_c0_guest_config2(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONFIG3:
|
||||
v = (long)kvm_read_c0_guest_config3(cop0);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONFIG7:
|
||||
v = (long)kvm_read_c0_guest_config7(cop0);
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) {
|
||||
u64 __user *uaddr64 = (u64 __user *)(long)reg->addr;
|
||||
return put_user(v, uaddr64);
|
||||
} else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) {
|
||||
u32 __user *uaddr32 = (u32 __user *)(long)reg->addr;
|
||||
u32 v32 = (u32)v;
|
||||
return put_user(v32, uaddr32);
|
||||
} else {
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
||||
static int kvm_mips_set_reg(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_one_reg *reg)
|
||||
{
|
||||
struct mips_coproc *cop0 = vcpu->arch.cop0;
|
||||
u64 v;
|
||||
|
||||
if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) {
|
||||
u64 __user *uaddr64 = (u64 __user *)(long)reg->addr;
|
||||
|
||||
if (get_user(v, uaddr64) != 0)
|
||||
return -EFAULT;
|
||||
} else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) {
|
||||
u32 __user *uaddr32 = (u32 __user *)(long)reg->addr;
|
||||
s32 v32;
|
||||
|
||||
if (get_user(v32, uaddr32) != 0)
|
||||
return -EFAULT;
|
||||
v = (s64)v32;
|
||||
} else {
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
switch (reg->id) {
|
||||
case KVM_REG_MIPS_R0:
|
||||
/* Silently ignore requests to set $0 */
|
||||
break;
|
||||
case KVM_REG_MIPS_R1 ... KVM_REG_MIPS_R31:
|
||||
vcpu->arch.gprs[reg->id - KVM_REG_MIPS_R0] = v;
|
||||
break;
|
||||
case KVM_REG_MIPS_HI:
|
||||
vcpu->arch.hi = v;
|
||||
break;
|
||||
case KVM_REG_MIPS_LO:
|
||||
vcpu->arch.lo = v;
|
||||
break;
|
||||
case KVM_REG_MIPS_PC:
|
||||
vcpu->arch.pc = v;
|
||||
break;
|
||||
|
||||
case KVM_REG_MIPS_CP0_INDEX:
|
||||
kvm_write_c0_guest_index(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CONTEXT:
|
||||
kvm_write_c0_guest_context(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_PAGEMASK:
|
||||
kvm_write_c0_guest_pagemask(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_WIRED:
|
||||
kvm_write_c0_guest_wired(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_BADVADDR:
|
||||
kvm_write_c0_guest_badvaddr(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_ENTRYHI:
|
||||
kvm_write_c0_guest_entryhi(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_STATUS:
|
||||
kvm_write_c0_guest_status(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_CAUSE:
|
||||
kvm_write_c0_guest_cause(cop0, v);
|
||||
break;
|
||||
case KVM_REG_MIPS_CP0_ERROREPC:
|
||||
kvm_write_c0_guest_errorepc(cop0, v);
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
long
|
||||
@ -491,9 +725,38 @@ kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
|
||||
struct kvm_vcpu *vcpu = filp->private_data;
|
||||
void __user *argp = (void __user *)arg;
|
||||
long r;
|
||||
int intr;
|
||||
|
||||
switch (ioctl) {
|
||||
case KVM_SET_ONE_REG:
|
||||
case KVM_GET_ONE_REG: {
|
||||
struct kvm_one_reg reg;
|
||||
if (copy_from_user(®, argp, sizeof(reg)))
|
||||
return -EFAULT;
|
||||
if (ioctl == KVM_SET_ONE_REG)
|
||||
return kvm_mips_set_reg(vcpu, ®);
|
||||
else
|
||||
return kvm_mips_get_reg(vcpu, ®);
|
||||
}
|
||||
case KVM_GET_REG_LIST: {
|
||||
struct kvm_reg_list __user *user_list = argp;
|
||||
u64 __user *reg_dest;
|
||||
struct kvm_reg_list reg_list;
|
||||
unsigned n;
|
||||
|
||||
if (copy_from_user(®_list, user_list, sizeof(reg_list)))
|
||||
return -EFAULT;
|
||||
n = reg_list.n;
|
||||
reg_list.n = ARRAY_SIZE(kvm_mips_get_one_regs);
|
||||
if (copy_to_user(user_list, ®_list, sizeof(reg_list)))
|
||||
return -EFAULT;
|
||||
if (n < reg_list.n)
|
||||
return -E2BIG;
|
||||
reg_dest = user_list->reg;
|
||||
if (copy_to_user(reg_dest, kvm_mips_get_one_regs,
|
||||
sizeof(kvm_mips_get_one_regs)))
|
||||
return -EFAULT;
|
||||
return 0;
|
||||
}
|
||||
case KVM_NMI:
|
||||
/* Treat the NMI as a CPU reset */
|
||||
r = kvm_mips_reset_vcpu(vcpu);
|
||||
@ -505,8 +768,6 @@ kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
|
||||
if (copy_from_user(&irq, argp, sizeof(irq)))
|
||||
goto out;
|
||||
|
||||
intr = (int)irq.irq;
|
||||
|
||||
kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__,
|
||||
irq.irq);
|
||||
|
||||
@ -514,7 +775,7 @@ kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
|
||||
break;
|
||||
}
|
||||
default:
|
||||
r = -EINVAL;
|
||||
r = -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
out:
|
||||
@ -565,7 +826,7 @@ long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
|
||||
|
||||
switch (ioctl) {
|
||||
default:
|
||||
r = -EINVAL;
|
||||
r = -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
return r;
|
||||
@ -593,13 +854,13 @@ void kvm_arch_exit(void)
|
||||
int
|
||||
kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
|
||||
{
|
||||
return -ENOTSUPP;
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
int
|
||||
kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
|
||||
{
|
||||
return -ENOTSUPP;
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
int kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
|
||||
@ -609,12 +870,12 @@ int kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
|
||||
|
||||
int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
|
||||
{
|
||||
return -ENOTSUPP;
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
|
||||
{
|
||||
return -ENOTSUPP;
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
|
||||
@ -627,6 +888,9 @@ int kvm_dev_ioctl_check_extension(long ext)
|
||||
int r;
|
||||
|
||||
switch (ext) {
|
||||
case KVM_CAP_ONE_REG:
|
||||
r = 1;
|
||||
break;
|
||||
case KVM_CAP_COALESCED_MMIO:
|
||||
r = KVM_COALESCED_MMIO_PAGE_OFFSET;
|
||||
break;
|
||||
@ -635,7 +899,6 @@ int kvm_dev_ioctl_check_extension(long ext)
|
||||
break;
|
||||
}
|
||||
return r;
|
||||
|
||||
}
|
||||
|
||||
int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
|
||||
@ -677,28 +940,28 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < 32; i++)
|
||||
vcpu->arch.gprs[i] = regs->gprs[i];
|
||||
|
||||
for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
|
||||
vcpu->arch.gprs[i] = regs->gpr[i];
|
||||
vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
|
||||
vcpu->arch.hi = regs->hi;
|
||||
vcpu->arch.lo = regs->lo;
|
||||
vcpu->arch.pc = regs->pc;
|
||||
|
||||
return kvm_mips_callbacks->vcpu_ioctl_set_regs(vcpu, regs);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < 32; i++)
|
||||
regs->gprs[i] = vcpu->arch.gprs[i];
|
||||
for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
|
||||
regs->gpr[i] = vcpu->arch.gprs[i];
|
||||
|
||||
regs->hi = vcpu->arch.hi;
|
||||
regs->lo = vcpu->arch.lo;
|
||||
regs->pc = vcpu->arch.pc;
|
||||
|
||||
return kvm_mips_callbacks->vcpu_ioctl_get_regs(vcpu, regs);
|
||||
return 0;
|
||||
}
|
||||
|
||||
void kvm_mips_comparecount_func(unsigned long data)
|
||||
|
@ -345,54 +345,6 @@ static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
kvm_trap_emul_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
|
||||
{
|
||||
struct mips_coproc *cop0 = vcpu->arch.cop0;
|
||||
|
||||
kvm_write_c0_guest_index(cop0, regs->cp0reg[MIPS_CP0_TLB_INDEX][0]);
|
||||
kvm_write_c0_guest_context(cop0, regs->cp0reg[MIPS_CP0_TLB_CONTEXT][0]);
|
||||
kvm_write_c0_guest_badvaddr(cop0, regs->cp0reg[MIPS_CP0_BAD_VADDR][0]);
|
||||
kvm_write_c0_guest_entryhi(cop0, regs->cp0reg[MIPS_CP0_TLB_HI][0]);
|
||||
kvm_write_c0_guest_epc(cop0, regs->cp0reg[MIPS_CP0_EXC_PC][0]);
|
||||
|
||||
kvm_write_c0_guest_status(cop0, regs->cp0reg[MIPS_CP0_STATUS][0]);
|
||||
kvm_write_c0_guest_cause(cop0, regs->cp0reg[MIPS_CP0_CAUSE][0]);
|
||||
kvm_write_c0_guest_pagemask(cop0,
|
||||
regs->cp0reg[MIPS_CP0_TLB_PG_MASK][0]);
|
||||
kvm_write_c0_guest_wired(cop0, regs->cp0reg[MIPS_CP0_TLB_WIRED][0]);
|
||||
kvm_write_c0_guest_errorepc(cop0, regs->cp0reg[MIPS_CP0_ERROR_PC][0]);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
kvm_trap_emul_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
|
||||
{
|
||||
struct mips_coproc *cop0 = vcpu->arch.cop0;
|
||||
|
||||
regs->cp0reg[MIPS_CP0_TLB_INDEX][0] = kvm_read_c0_guest_index(cop0);
|
||||
regs->cp0reg[MIPS_CP0_TLB_CONTEXT][0] = kvm_read_c0_guest_context(cop0);
|
||||
regs->cp0reg[MIPS_CP0_BAD_VADDR][0] = kvm_read_c0_guest_badvaddr(cop0);
|
||||
regs->cp0reg[MIPS_CP0_TLB_HI][0] = kvm_read_c0_guest_entryhi(cop0);
|
||||
regs->cp0reg[MIPS_CP0_EXC_PC][0] = kvm_read_c0_guest_epc(cop0);
|
||||
|
||||
regs->cp0reg[MIPS_CP0_STATUS][0] = kvm_read_c0_guest_status(cop0);
|
||||
regs->cp0reg[MIPS_CP0_CAUSE][0] = kvm_read_c0_guest_cause(cop0);
|
||||
regs->cp0reg[MIPS_CP0_TLB_PG_MASK][0] =
|
||||
kvm_read_c0_guest_pagemask(cop0);
|
||||
regs->cp0reg[MIPS_CP0_TLB_WIRED][0] = kvm_read_c0_guest_wired(cop0);
|
||||
regs->cp0reg[MIPS_CP0_ERROR_PC][0] = kvm_read_c0_guest_errorepc(cop0);
|
||||
|
||||
regs->cp0reg[MIPS_CP0_CONFIG][0] = kvm_read_c0_guest_config(cop0);
|
||||
regs->cp0reg[MIPS_CP0_CONFIG][1] = kvm_read_c0_guest_config1(cop0);
|
||||
regs->cp0reg[MIPS_CP0_CONFIG][2] = kvm_read_c0_guest_config2(cop0);
|
||||
regs->cp0reg[MIPS_CP0_CONFIG][3] = kvm_read_c0_guest_config3(cop0);
|
||||
regs->cp0reg[MIPS_CP0_CONFIG][7] = kvm_read_c0_guest_config7(cop0);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kvm_trap_emul_vm_init(struct kvm *kvm)
|
||||
{
|
||||
return 0;
|
||||
@ -471,8 +423,6 @@ static struct kvm_mips_callbacks kvm_trap_emul_callbacks = {
|
||||
.dequeue_io_int = kvm_mips_dequeue_io_int_cb,
|
||||
.irq_deliver = kvm_mips_irq_deliver_cb,
|
||||
.irq_clear = kvm_mips_irq_clear_cb,
|
||||
.vcpu_ioctl_get_regs = kvm_trap_emul_ioctl_get_regs,
|
||||
.vcpu_ioctl_set_regs = kvm_trap_emul_ioctl_set_regs,
|
||||
};
|
||||
|
||||
int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks)
|
||||
|
@ -301,10 +301,6 @@ static u32 tlb_handler[128] __cpuinitdata;
|
||||
static struct uasm_label labels[128] __cpuinitdata;
|
||||
static struct uasm_reloc relocs[128] __cpuinitdata;
|
||||
|
||||
#ifdef CONFIG_64BIT
|
||||
static int check_for_high_segbits __cpuinitdata;
|
||||
#endif
|
||||
|
||||
static int check_for_high_segbits __cpuinitdata;
|
||||
|
||||
static unsigned int kscratch_used_mask __cpuinitdata;
|
||||
|
@ -88,7 +88,7 @@ void __init plat_mem_setup(void)
|
||||
__dt_setup_arch(&__dtb_start);
|
||||
|
||||
if (soc_info.mem_size)
|
||||
add_memory_region(soc_info.mem_base, soc_info.mem_size,
|
||||
add_memory_region(soc_info.mem_base, soc_info.mem_size * SZ_1M,
|
||||
BOOT_MEM_RAM);
|
||||
else
|
||||
detect_memory_region(soc_info.mem_base,
|
||||
|
@ -176,6 +176,7 @@ extern const char *powerpc_base_platform;
|
||||
#define CPU_FTR_CFAR LONG_ASM_CONST(0x0100000000000000)
|
||||
#define CPU_FTR_HAS_PPR LONG_ASM_CONST(0x0200000000000000)
|
||||
#define CPU_FTR_DAWR LONG_ASM_CONST(0x0400000000000000)
|
||||
#define CPU_FTR_DABRX LONG_ASM_CONST(0x0800000000000000)
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
@ -394,19 +395,20 @@ extern const char *powerpc_base_platform;
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_201 | \
|
||||
CPU_FTR_ALTIVEC_COMP | CPU_FTR_CAN_NAP | CPU_FTR_MMCRA | \
|
||||
CPU_FTR_CP_USE_DCBTZ | CPU_FTR_STCX_CHECKS_ADDRESS | \
|
||||
CPU_FTR_HVMODE)
|
||||
CPU_FTR_HVMODE | CPU_FTR_DABRX)
|
||||
#define CPU_FTRS_POWER5 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
|
||||
CPU_FTR_MMCRA | CPU_FTR_SMT | \
|
||||
CPU_FTR_COHERENT_ICACHE | CPU_FTR_PURR | \
|
||||
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB)
|
||||
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_DABRX)
|
||||
#define CPU_FTRS_POWER6 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
|
||||
CPU_FTR_MMCRA | CPU_FTR_SMT | \
|
||||
CPU_FTR_COHERENT_ICACHE | \
|
||||
CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \
|
||||
CPU_FTR_DSCR | CPU_FTR_UNALIGNED_LD_STD | \
|
||||
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR)
|
||||
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR | \
|
||||
CPU_FTR_DABRX)
|
||||
#define CPU_FTRS_POWER7 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
|
||||
CPU_FTR_MMCRA | CPU_FTR_SMT | \
|
||||
@ -415,7 +417,7 @@ extern const char *powerpc_base_platform;
|
||||
CPU_FTR_DSCR | CPU_FTR_SAO | CPU_FTR_ASYM_SMT | \
|
||||
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
|
||||
CPU_FTR_ICSWX | CPU_FTR_CFAR | CPU_FTR_HVMODE | \
|
||||
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR)
|
||||
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX)
|
||||
#define CPU_FTRS_POWER8 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
|
||||
CPU_FTR_MMCRA | CPU_FTR_SMT | \
|
||||
@ -430,14 +432,15 @@ extern const char *powerpc_base_platform;
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
|
||||
CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \
|
||||
CPU_FTR_PAUSE_ZERO | CPU_FTR_CELL_TB_BUG | CPU_FTR_CP_USE_DCBTZ | \
|
||||
CPU_FTR_UNALIGNED_LD_STD)
|
||||
CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_DABRX)
|
||||
#define CPU_FTRS_PA6T (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
|
||||
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP | \
|
||||
CPU_FTR_PURR | CPU_FTR_REAL_LE)
|
||||
CPU_FTR_PURR | CPU_FTR_REAL_LE | CPU_FTR_DABRX)
|
||||
#define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2)
|
||||
|
||||
#define CPU_FTRS_A2 (CPU_FTR_USE_TB | CPU_FTR_SMT | CPU_FTR_DBELL | \
|
||||
CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN | CPU_FTR_ICSWX)
|
||||
CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN | \
|
||||
CPU_FTR_ICSWX | CPU_FTR_DABRX )
|
||||
|
||||
#ifdef __powerpc64__
|
||||
#ifdef CONFIG_PPC_BOOK3E
|
||||
|
@ -513,7 +513,7 @@ label##_common: \
|
||||
*/
|
||||
#define STD_EXCEPTION_COMMON_ASYNC(trap, label, hdlr) \
|
||||
EXCEPTION_COMMON(trap, label, hdlr, ret_from_except_lite, \
|
||||
FINISH_NAP;RUNLATCH_ON;DISABLE_INTS)
|
||||
FINISH_NAP;DISABLE_INTS;RUNLATCH_ON)
|
||||
|
||||
/*
|
||||
* When the idle code in power4_idle puts the CPU into NAP mode,
|
||||
|
@ -54,8 +54,16 @@
|
||||
#define BOOKE_INTERRUPT_DEBUG 15
|
||||
|
||||
/* E500 */
|
||||
#define BOOKE_INTERRUPT_SPE_UNAVAIL 32
|
||||
#define BOOKE_INTERRUPT_SPE_FP_DATA 33
|
||||
#define BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 32
|
||||
#define BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 33
|
||||
/*
|
||||
* TODO: Unify 32-bit and 64-bit kernel exception handlers to use same defines
|
||||
*/
|
||||
#define BOOKE_INTERRUPT_SPE_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL
|
||||
#define BOOKE_INTERRUPT_SPE_FP_DATA BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST
|
||||
#define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL
|
||||
#define BOOKE_INTERRUPT_ALTIVEC_ASSIST \
|
||||
BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST
|
||||
#define BOOKE_INTERRUPT_SPE_FP_ROUND 34
|
||||
#define BOOKE_INTERRUPT_PERFORMANCE_MONITOR 35
|
||||
#define BOOKE_INTERRUPT_DOORBELL 36
|
||||
@ -67,10 +75,6 @@
|
||||
#define BOOKE_INTERRUPT_HV_SYSCALL 40
|
||||
#define BOOKE_INTERRUPT_HV_PRIV 41
|
||||
|
||||
/* altivec */
|
||||
#define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL 42
|
||||
#define BOOKE_INTERRUPT_ALTIVEC_ASSIST 43
|
||||
|
||||
/* book3s */
|
||||
|
||||
#define BOOK3S_INTERRUPT_SYSTEM_RESET 0x100
|
||||
|
@ -452,8 +452,8 @@ static struct cpu_spec __initdata cpu_specs[] = {
|
||||
.mmu_features = MMU_FTRS_POWER8,
|
||||
.icache_bsize = 128,
|
||||
.dcache_bsize = 128,
|
||||
.oprofile_type = PPC_OPROFILE_POWER4,
|
||||
.oprofile_cpu_type = 0,
|
||||
.oprofile_type = PPC_OPROFILE_INVALID,
|
||||
.oprofile_cpu_type = "ppc64/ibm-compat-v1",
|
||||
.cpu_setup = __setup_cpu_power8,
|
||||
.cpu_restore = __restore_cpu_power8,
|
||||
.platform = "power8",
|
||||
@ -506,8 +506,8 @@ static struct cpu_spec __initdata cpu_specs[] = {
|
||||
.dcache_bsize = 128,
|
||||
.num_pmcs = 6,
|
||||
.pmc_type = PPC_PMC_IBM,
|
||||
.oprofile_cpu_type = 0,
|
||||
.oprofile_type = PPC_OPROFILE_POWER4,
|
||||
.oprofile_cpu_type = "ppc64/power8",
|
||||
.oprofile_type = PPC_OPROFILE_INVALID,
|
||||
.cpu_setup = __setup_cpu_power8,
|
||||
.cpu_restore = __restore_cpu_power8,
|
||||
.platform = "power8",
|
||||
|
@ -465,20 +465,6 @@ BEGIN_FTR_SECTION
|
||||
std r0, THREAD_EBBHR(r3)
|
||||
mfspr r0, SPRN_EBBRR
|
||||
std r0, THREAD_EBBRR(r3)
|
||||
|
||||
/* PMU registers made user read/(write) by EBB */
|
||||
mfspr r0, SPRN_SIAR
|
||||
std r0, THREAD_SIAR(r3)
|
||||
mfspr r0, SPRN_SDAR
|
||||
std r0, THREAD_SDAR(r3)
|
||||
mfspr r0, SPRN_SIER
|
||||
std r0, THREAD_SIER(r3)
|
||||
mfspr r0, SPRN_MMCR0
|
||||
std r0, THREAD_MMCR0(r3)
|
||||
mfspr r0, SPRN_MMCR2
|
||||
std r0, THREAD_MMCR2(r3)
|
||||
mfspr r0, SPRN_MMCRA
|
||||
std r0, THREAD_MMCRA(r3)
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
||||
#endif
|
||||
|
||||
@ -581,20 +567,6 @@ BEGIN_FTR_SECTION
|
||||
ld r0, THREAD_EBBRR(r4)
|
||||
mtspr SPRN_EBBRR, r0
|
||||
|
||||
/* PMU registers made user read/(write) by EBB */
|
||||
ld r0, THREAD_SIAR(r4)
|
||||
mtspr SPRN_SIAR, r0
|
||||
ld r0, THREAD_SDAR(r4)
|
||||
mtspr SPRN_SDAR, r0
|
||||
ld r0, THREAD_SIER(r4)
|
||||
mtspr SPRN_SIER, r0
|
||||
ld r0, THREAD_MMCR0(r4)
|
||||
mtspr SPRN_MMCR0, r0
|
||||
ld r0, THREAD_MMCR2(r4)
|
||||
mtspr SPRN_MMCR2, r0
|
||||
ld r0, THREAD_MMCRA(r4)
|
||||
mtspr SPRN_MMCRA, r0
|
||||
|
||||
ld r0,THREAD_TAR(r4)
|
||||
mtspr SPRN_TAR,r0
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
||||
|
@ -454,38 +454,14 @@ BEGIN_FTR_SECTION
|
||||
xori r10,r10,(MSR_FE0|MSR_FE1)
|
||||
mtmsrd r10
|
||||
sync
|
||||
fmr 0,0
|
||||
fmr 1,1
|
||||
fmr 2,2
|
||||
fmr 3,3
|
||||
fmr 4,4
|
||||
fmr 5,5
|
||||
fmr 6,6
|
||||
fmr 7,7
|
||||
fmr 8,8
|
||||
fmr 9,9
|
||||
fmr 10,10
|
||||
fmr 11,11
|
||||
fmr 12,12
|
||||
fmr 13,13
|
||||
fmr 14,14
|
||||
fmr 15,15
|
||||
fmr 16,16
|
||||
fmr 17,17
|
||||
fmr 18,18
|
||||
fmr 19,19
|
||||
fmr 20,20
|
||||
fmr 21,21
|
||||
fmr 22,22
|
||||
fmr 23,23
|
||||
fmr 24,24
|
||||
fmr 25,25
|
||||
fmr 26,26
|
||||
fmr 27,27
|
||||
fmr 28,28
|
||||
fmr 29,29
|
||||
fmr 30,30
|
||||
fmr 31,31
|
||||
|
||||
#define FMR2(n) fmr (n), (n) ; fmr n+1, n+1
|
||||
#define FMR4(n) FMR2(n) ; FMR2(n+2)
|
||||
#define FMR8(n) FMR4(n) ; FMR4(n+4)
|
||||
#define FMR16(n) FMR8(n) ; FMR8(n+8)
|
||||
#define FMR32(n) FMR16(n) ; FMR16(n+16)
|
||||
FMR32(0)
|
||||
|
||||
FTR_SECTION_ELSE
|
||||
/*
|
||||
* To denormalise we need to move a copy of the register to itself.
|
||||
@ -495,39 +471,25 @@ FTR_SECTION_ELSE
|
||||
oris r10,r10,MSR_VSX@h
|
||||
mtmsrd r10
|
||||
sync
|
||||
XVCPSGNDP(0,0,0)
|
||||
XVCPSGNDP(1,1,1)
|
||||
XVCPSGNDP(2,2,2)
|
||||
XVCPSGNDP(3,3,3)
|
||||
XVCPSGNDP(4,4,4)
|
||||
XVCPSGNDP(5,5,5)
|
||||
XVCPSGNDP(6,6,6)
|
||||
XVCPSGNDP(7,7,7)
|
||||
XVCPSGNDP(8,8,8)
|
||||
XVCPSGNDP(9,9,9)
|
||||
XVCPSGNDP(10,10,10)
|
||||
XVCPSGNDP(11,11,11)
|
||||
XVCPSGNDP(12,12,12)
|
||||
XVCPSGNDP(13,13,13)
|
||||
XVCPSGNDP(14,14,14)
|
||||
XVCPSGNDP(15,15,15)
|
||||
XVCPSGNDP(16,16,16)
|
||||
XVCPSGNDP(17,17,17)
|
||||
XVCPSGNDP(18,18,18)
|
||||
XVCPSGNDP(19,19,19)
|
||||
XVCPSGNDP(20,20,20)
|
||||
XVCPSGNDP(21,21,21)
|
||||
XVCPSGNDP(22,22,22)
|
||||
XVCPSGNDP(23,23,23)
|
||||
XVCPSGNDP(24,24,24)
|
||||
XVCPSGNDP(25,25,25)
|
||||
XVCPSGNDP(26,26,26)
|
||||
XVCPSGNDP(27,27,27)
|
||||
XVCPSGNDP(28,28,28)
|
||||
XVCPSGNDP(29,29,29)
|
||||
XVCPSGNDP(30,30,30)
|
||||
XVCPSGNDP(31,31,31)
|
||||
|
||||
#define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1)
|
||||
#define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2)
|
||||
#define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4)
|
||||
#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8)
|
||||
#define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16)
|
||||
XVCPSGNDP32(0)
|
||||
|
||||
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206)
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
b denorm_done
|
||||
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
|
||||
/*
|
||||
* To denormalise we need to move a copy of the register to itself.
|
||||
* For POWER8 we need to do that for all 64 VSX registers
|
||||
*/
|
||||
XVCPSGNDP32(32)
|
||||
denorm_done:
|
||||
mtspr SPRN_HSRR0,r11
|
||||
mtcrf 0x80,r9
|
||||
ld r9,PACA_EXGEN+EX_R9(r13)
|
||||
@ -721,7 +683,7 @@ machine_check_common:
|
||||
STD_EXCEPTION_COMMON(0xb00, trap_0b, .unknown_exception)
|
||||
STD_EXCEPTION_COMMON(0xd00, single_step, .single_step_exception)
|
||||
STD_EXCEPTION_COMMON(0xe00, trap_0e, .unknown_exception)
|
||||
STD_EXCEPTION_COMMON(0xe40, emulation_assist, .program_check_exception)
|
||||
STD_EXCEPTION_COMMON(0xe40, emulation_assist, .emulation_assist_interrupt)
|
||||
STD_EXCEPTION_COMMON(0xe60, hmi_exception, .unknown_exception)
|
||||
#ifdef CONFIG_PPC_DOORBELL
|
||||
STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, .doorbell_exception)
|
||||
|
@ -162,7 +162,7 @@ notrace unsigned int __check_irq_replay(void)
|
||||
* in case we also had a rollover while hard disabled
|
||||
*/
|
||||
local_paca->irq_happened &= ~PACA_IRQ_DEC;
|
||||
if (decrementer_check_overflow())
|
||||
if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow())
|
||||
return 0x900;
|
||||
|
||||
/* Finally check if an external interrupt happened */
|
||||
|
@ -827,6 +827,7 @@ static void pcibios_fixup_resources(struct pci_dev *dev)
|
||||
}
|
||||
for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
|
||||
struct resource *res = dev->resource + i;
|
||||
struct pci_bus_region reg;
|
||||
if (!res->flags)
|
||||
continue;
|
||||
|
||||
@ -835,8 +836,9 @@ static void pcibios_fixup_resources(struct pci_dev *dev)
|
||||
* at 0 as unset as well, except if PCI_PROBE_ONLY is also set
|
||||
* since in that case, we don't want to re-assign anything
|
||||
*/
|
||||
pcibios_resource_to_bus(dev, ®, res);
|
||||
if (pci_has_flag(PCI_REASSIGN_ALL_RSRC) ||
|
||||
(res->start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) {
|
||||
(reg.start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) {
|
||||
/* Only print message if not re-assigning */
|
||||
if (!pci_has_flag(PCI_REASSIGN_ALL_RSRC))
|
||||
pr_debug("PCI:%s Resource %d %016llx-%016llx [%x] "
|
||||
|
@ -399,7 +399,8 @@ static inline int __set_dabr(unsigned long dabr, unsigned long dabrx)
|
||||
static inline int __set_dabr(unsigned long dabr, unsigned long dabrx)
|
||||
{
|
||||
mtspr(SPRN_DABR, dabr);
|
||||
mtspr(SPRN_DABRX, dabrx);
|
||||
if (cpu_has_feature(CPU_FTR_DABRX))
|
||||
mtspr(SPRN_DABRX, dabrx);
|
||||
return 0;
|
||||
}
|
||||
#else
|
||||
@ -1368,7 +1369,7 @@ void show_stack(struct task_struct *tsk, unsigned long *stack)
|
||||
|
||||
#ifdef CONFIG_PPC64
|
||||
/* Called with hard IRQs off */
|
||||
void __ppc64_runlatch_on(void)
|
||||
void notrace __ppc64_runlatch_on(void)
|
||||
{
|
||||
struct thread_info *ti = current_thread_info();
|
||||
unsigned long ctrl;
|
||||
@ -1381,7 +1382,7 @@ void __ppc64_runlatch_on(void)
|
||||
}
|
||||
|
||||
/* Called with hard IRQs off */
|
||||
void __ppc64_runlatch_off(void)
|
||||
void notrace __ppc64_runlatch_off(void)
|
||||
{
|
||||
struct thread_info *ti = current_thread_info();
|
||||
unsigned long ctrl;
|
||||
|
@ -1165,6 +1165,16 @@ bail:
|
||||
exception_exit(prev_state);
|
||||
}
|
||||
|
||||
/*
|
||||
* This occurs when running in hypervisor mode on POWER6 or later
|
||||
* and an illegal instruction is encountered.
|
||||
*/
|
||||
void __kprobes emulation_assist_interrupt(struct pt_regs *regs)
|
||||
{
|
||||
regs->msr |= REASON_ILLEGAL;
|
||||
program_check_exception(regs);
|
||||
}
|
||||
|
||||
void alignment_exception(struct pt_regs *regs)
|
||||
{
|
||||
enum ctx_state prev_state = exception_enter();
|
||||
|
@ -441,6 +441,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
|
||||
struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
|
||||
struct kvmppc_44x_tlbe *tlbe;
|
||||
unsigned int gtlb_index;
|
||||
int idx;
|
||||
|
||||
gtlb_index = kvmppc_get_gpr(vcpu, ra);
|
||||
if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) {
|
||||
@ -473,6 +474,8 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
|
||||
return EMULATE_FAIL;
|
||||
}
|
||||
|
||||
idx = srcu_read_lock(&vcpu->kvm->srcu);
|
||||
|
||||
if (tlbe_is_host_safe(vcpu, tlbe)) {
|
||||
gva_t eaddr;
|
||||
gpa_t gpaddr;
|
||||
@ -489,6 +492,8 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
|
||||
kvmppc_mmu_map(vcpu, eaddr, gpaddr, gtlb_index);
|
||||
}
|
||||
|
||||
srcu_read_unlock(&vcpu->kvm->srcu, idx);
|
||||
|
||||
trace_kvm_gtlb_write(gtlb_index, tlbe->tid, tlbe->word0, tlbe->word1,
|
||||
tlbe->word2);
|
||||
|
||||
|
@ -832,6 +832,18 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
||||
{
|
||||
int r = RESUME_HOST;
|
||||
int s;
|
||||
int idx;
|
||||
|
||||
#ifdef CONFIG_PPC64
|
||||
WARN_ON(local_paca->irq_happened != 0);
|
||||
#endif
|
||||
|
||||
/*
|
||||
* We enter with interrupts disabled in hardware, but
|
||||
* we need to call hard_irq_disable anyway to ensure that
|
||||
* the software state is kept in sync.
|
||||
*/
|
||||
hard_irq_disable();
|
||||
|
||||
/* update before a new last_exit_type is rewritten */
|
||||
kvmppc_update_timing_stats(vcpu);
|
||||
@ -1053,6 +1065,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
||||
break;
|
||||
}
|
||||
|
||||
idx = srcu_read_lock(&vcpu->kvm->srcu);
|
||||
|
||||
gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr);
|
||||
gfn = gpaddr >> PAGE_SHIFT;
|
||||
|
||||
@ -1075,6 +1089,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
||||
kvmppc_account_exit(vcpu, MMIO_EXITS);
|
||||
}
|
||||
|
||||
srcu_read_unlock(&vcpu->kvm->srcu, idx);
|
||||
break;
|
||||
}
|
||||
|
||||
@ -1098,6 +1113,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
||||
|
||||
kvmppc_account_exit(vcpu, ITLB_VIRT_MISS_EXITS);
|
||||
|
||||
idx = srcu_read_lock(&vcpu->kvm->srcu);
|
||||
|
||||
gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr);
|
||||
gfn = gpaddr >> PAGE_SHIFT;
|
||||
|
||||
@ -1114,6 +1131,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
||||
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_MACHINE_CHECK);
|
||||
}
|
||||
|
||||
srcu_read_unlock(&vcpu->kvm->srcu, idx);
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -396,6 +396,7 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
|
||||
struct kvm_book3e_206_tlb_entry *gtlbe;
|
||||
int tlbsel, esel;
|
||||
int recal = 0;
|
||||
int idx;
|
||||
|
||||
tlbsel = get_tlb_tlbsel(vcpu);
|
||||
esel = get_tlb_esel(vcpu, tlbsel);
|
||||
@ -430,6 +431,8 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
|
||||
kvmppc_set_tlb1map_range(vcpu, gtlbe);
|
||||
}
|
||||
|
||||
idx = srcu_read_lock(&vcpu->kvm->srcu);
|
||||
|
||||
/* Invalidate shadow mappings for the about-to-be-clobbered TLBE. */
|
||||
if (tlbe_is_host_safe(vcpu, gtlbe)) {
|
||||
u64 eaddr = get_tlb_eaddr(gtlbe);
|
||||
@ -444,6 +447,8 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
|
||||
kvmppc_mmu_map(vcpu, eaddr, raddr, index_of(tlbsel, esel));
|
||||
}
|
||||
|
||||
srcu_read_unlock(&vcpu->kvm->srcu, idx);
|
||||
|
||||
kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS);
|
||||
return EMULATE_DONE;
|
||||
}
|
||||
|
@ -177,8 +177,6 @@ int kvmppc_core_check_processor_compat(void)
|
||||
r = 0;
|
||||
else if (strcmp(cur_cpu_spec->cpu_name, "e5500") == 0)
|
||||
r = 0;
|
||||
else if (strcmp(cur_cpu_spec->cpu_name, "e6500") == 0)
|
||||
r = 0;
|
||||
else
|
||||
r = -ENOTSUPP;
|
||||
|
||||
|
@ -1758,7 +1758,7 @@ static void perf_event_interrupt(struct pt_regs *regs)
|
||||
}
|
||||
}
|
||||
}
|
||||
if ((!found) && printk_ratelimit())
|
||||
if (!found && !nmi && printk_ratelimit())
|
||||
printk(KERN_WARNING "Can't find PMC that caused IRQ\n");
|
||||
|
||||
/*
|
||||
|
@ -83,7 +83,11 @@ static int pseries_eeh_init(void)
|
||||
ibm_configure_pe = rtas_token("ibm,configure-pe");
|
||||
ibm_configure_bridge = rtas_token("ibm,configure-bridge");
|
||||
|
||||
/* necessary sanity check */
|
||||
/*
|
||||
* Necessary sanity check. We needn't check "get-config-addr-info"
|
||||
* and its variant since the old firmware probably support address
|
||||
* of domain/bus/slot/function for EEH RTAS operations.
|
||||
*/
|
||||
if (ibm_set_eeh_option == RTAS_UNKNOWN_SERVICE) {
|
||||
pr_warning("%s: RTAS service <ibm,set-eeh-option> invalid\n",
|
||||
__func__);
|
||||
@ -102,12 +106,6 @@ static int pseries_eeh_init(void)
|
||||
pr_warning("%s: RTAS service <ibm,slot-error-detail> invalid\n",
|
||||
__func__);
|
||||
return -EINVAL;
|
||||
} else if (ibm_get_config_addr_info2 == RTAS_UNKNOWN_SERVICE &&
|
||||
ibm_get_config_addr_info == RTAS_UNKNOWN_SERVICE) {
|
||||
pr_warning("%s: RTAS service <ibm,get-config-addr-info2> and "
|
||||
"<ibm,get-config-addr-info> invalid\n",
|
||||
__func__);
|
||||
return -EINVAL;
|
||||
} else if (ibm_configure_pe == RTAS_UNKNOWN_SERVICE &&
|
||||
ibm_configure_bridge == RTAS_UNKNOWN_SERVICE) {
|
||||
pr_warning("%s: RTAS service <ibm,configure-pe> and "
|
||||
|
@ -212,7 +212,9 @@ appldata_timer_handler(ctl_table *ctl, int write,
|
||||
return 0;
|
||||
}
|
||||
if (!write) {
|
||||
len = sprintf(buf, appldata_timer_active ? "1\n" : "0\n");
|
||||
strncpy(buf, appldata_timer_active ? "1\n" : "0\n",
|
||||
ARRAY_SIZE(buf));
|
||||
len = strnlen(buf, ARRAY_SIZE(buf));
|
||||
if (len > *lenp)
|
||||
len = *lenp;
|
||||
if (copy_to_user(buffer, buf, len))
|
||||
@ -317,7 +319,8 @@ appldata_generic_handler(ctl_table *ctl, int write,
|
||||
return 0;
|
||||
}
|
||||
if (!write) {
|
||||
len = sprintf(buf, ops->active ? "1\n" : "0\n");
|
||||
strncpy(buf, ops->active ? "1\n" : "0\n", ARRAY_SIZE(buf));
|
||||
len = strnlen(buf, ARRAY_SIZE(buf));
|
||||
if (len > *lenp)
|
||||
len = *lenp;
|
||||
if (copy_to_user(buffer, buf, len)) {
|
||||
|
@ -71,8 +71,8 @@ static inline void dma_free_coherent(struct device *dev, size_t size,
|
||||
{
|
||||
struct dma_map_ops *dma_ops = get_dma_ops(dev);
|
||||
|
||||
dma_ops->free(dev, size, cpu_addr, dma_handle, NULL);
|
||||
debug_dma_free_coherent(dev, size, cpu_addr, dma_handle);
|
||||
dma_ops->free(dev, size, cpu_addr, dma_handle, NULL);
|
||||
}
|
||||
|
||||
#endif /* _ASM_S390_DMA_MAPPING_H */
|
||||
|
@ -36,6 +36,7 @@ static inline void * phys_to_virt(unsigned long address)
|
||||
}
|
||||
|
||||
void *xlate_dev_mem_ptr(unsigned long phys);
|
||||
#define xlate_dev_mem_ptr xlate_dev_mem_ptr
|
||||
void unxlate_dev_mem_ptr(unsigned long phys, void *addr);
|
||||
|
||||
/*
|
||||
|
@ -623,7 +623,7 @@ static inline pgste_t pgste_get_lock(pte_t *ptep)
|
||||
" csg %0,%1,%2\n"
|
||||
" jl 0b\n"
|
||||
: "=&d" (old), "=&d" (new), "=Q" (ptep[PTRS_PER_PTE])
|
||||
: "Q" (ptep[PTRS_PER_PTE]) : "cc");
|
||||
: "Q" (ptep[PTRS_PER_PTE]) : "cc", "memory");
|
||||
#endif
|
||||
return __pgste(new);
|
||||
}
|
||||
@ -635,18 +635,26 @@ static inline void pgste_set_unlock(pte_t *ptep, pgste_t pgste)
|
||||
" nihh %1,0xff7f\n" /* clear RCP_PCL_BIT */
|
||||
" stg %1,%0\n"
|
||||
: "=Q" (ptep[PTRS_PER_PTE])
|
||||
: "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE]) : "cc");
|
||||
: "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE])
|
||||
: "cc", "memory");
|
||||
preempt_enable();
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void pgste_set(pte_t *ptep, pgste_t pgste)
|
||||
{
|
||||
#ifdef CONFIG_PGSTE
|
||||
*(pgste_t *)(ptep + PTRS_PER_PTE) = pgste;
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline pgste_t pgste_update_all(pte_t *ptep, pgste_t pgste)
|
||||
{
|
||||
#ifdef CONFIG_PGSTE
|
||||
unsigned long address, bits;
|
||||
unsigned char skey;
|
||||
|
||||
if (!pte_present(*ptep))
|
||||
if (pte_val(*ptep) & _PAGE_INVALID)
|
||||
return pgste;
|
||||
address = pte_val(*ptep) & PAGE_MASK;
|
||||
skey = page_get_storage_key(address);
|
||||
@ -680,7 +688,7 @@ static inline pgste_t pgste_update_young(pte_t *ptep, pgste_t pgste)
|
||||
#ifdef CONFIG_PGSTE
|
||||
int young;
|
||||
|
||||
if (!pte_present(*ptep))
|
||||
if (pte_val(*ptep) & _PAGE_INVALID)
|
||||
return pgste;
|
||||
/* Get referenced bit from storage key */
|
||||
young = page_reset_referenced(pte_val(*ptep) & PAGE_MASK);
|
||||
@ -704,17 +712,19 @@ static inline void pgste_set_key(pte_t *ptep, pgste_t pgste, pte_t entry)
|
||||
{
|
||||
#ifdef CONFIG_PGSTE
|
||||
unsigned long address;
|
||||
unsigned long okey, nkey;
|
||||
unsigned long nkey;
|
||||
|
||||
if (!pte_present(entry))
|
||||
if (pte_val(entry) & _PAGE_INVALID)
|
||||
return;
|
||||
VM_BUG_ON(!(pte_val(*ptep) & _PAGE_INVALID));
|
||||
address = pte_val(entry) & PAGE_MASK;
|
||||
okey = nkey = page_get_storage_key(address);
|
||||
nkey &= ~(_PAGE_ACC_BITS | _PAGE_FP_BIT);
|
||||
/* Set page access key and fetch protection bit from pgste */
|
||||
nkey |= (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56;
|
||||
if (okey != nkey)
|
||||
page_set_storage_key(address, nkey, 0);
|
||||
/*
|
||||
* Set page access key and fetch protection bit from pgste.
|
||||
* The guest C/R information is still in the PGSTE, set real
|
||||
* key C/R to 0.
|
||||
*/
|
||||
nkey = (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56;
|
||||
page_set_storage_key(address, nkey, 0);
|
||||
#endif
|
||||
}
|
||||
|
||||
@ -1098,6 +1108,11 @@ static inline pte_t ptep_modify_prot_start(struct mm_struct *mm,
|
||||
pte = *ptep;
|
||||
if (!mm_exclusive(mm))
|
||||
__ptep_ipte(address, ptep);
|
||||
|
||||
if (mm_has_pgste(mm)) {
|
||||
pgste = pgste_update_all(&pte, pgste);
|
||||
pgste_set(ptep, pgste);
|
||||
}
|
||||
return pte;
|
||||
}
|
||||
|
||||
@ -1105,9 +1120,13 @@ static inline void ptep_modify_prot_commit(struct mm_struct *mm,
|
||||
unsigned long address,
|
||||
pte_t *ptep, pte_t pte)
|
||||
{
|
||||
pgste_t pgste;
|
||||
|
||||
if (mm_has_pgste(mm)) {
|
||||
pgste = *(pgste_t *)(ptep + PTRS_PER_PTE);
|
||||
pgste_set_key(ptep, pgste, pte);
|
||||
pgste_set_pte(ptep, pte);
|
||||
pgste_set_unlock(ptep, *(pgste_t *)(ptep + PTRS_PER_PTE));
|
||||
pgste_set_unlock(ptep, pgste);
|
||||
} else
|
||||
*ptep = pte;
|
||||
}
|
||||
|
@ -74,6 +74,8 @@ __show_trace(unsigned long sp, unsigned long low, unsigned long high)
|
||||
|
||||
static void show_trace(struct task_struct *task, unsigned long *stack)
|
||||
{
|
||||
const unsigned long frame_size =
|
||||
STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
|
||||
register unsigned long __r15 asm ("15");
|
||||
unsigned long sp;
|
||||
|
||||
@ -82,11 +84,13 @@ static void show_trace(struct task_struct *task, unsigned long *stack)
|
||||
sp = task ? task->thread.ksp : __r15;
|
||||
printk("Call Trace:\n");
|
||||
#ifdef CONFIG_CHECK_STACK
|
||||
sp = __show_trace(sp, S390_lowcore.panic_stack - 4096,
|
||||
S390_lowcore.panic_stack);
|
||||
sp = __show_trace(sp,
|
||||
S390_lowcore.panic_stack + frame_size - 4096,
|
||||
S390_lowcore.panic_stack + frame_size);
|
||||
#endif
|
||||
sp = __show_trace(sp, S390_lowcore.async_stack - ASYNC_SIZE,
|
||||
S390_lowcore.async_stack);
|
||||
sp = __show_trace(sp,
|
||||
S390_lowcore.async_stack + frame_size - ASYNC_SIZE,
|
||||
S390_lowcore.async_stack + frame_size);
|
||||
if (task)
|
||||
__show_trace(sp, (unsigned long) task_stack_page(task),
|
||||
(unsigned long) task_stack_page(task) + THREAD_SIZE);
|
||||
|
@ -311,3 +311,67 @@ void measurement_alert_subclass_unregister(void)
|
||||
spin_unlock(&ma_subclass_lock);
|
||||
}
|
||||
EXPORT_SYMBOL(measurement_alert_subclass_unregister);
|
||||
|
||||
void synchronize_irq(unsigned int irq)
|
||||
{
|
||||
/*
|
||||
* Not needed, the handler is protected by a lock and IRQs that occur
|
||||
* after the handler is deleted are just NOPs.
|
||||
*/
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(synchronize_irq);
|
||||
|
||||
#ifndef CONFIG_PCI
|
||||
|
||||
/* Only PCI devices have dynamically-defined IRQ handlers */
|
||||
|
||||
int request_irq(unsigned int irq, irq_handler_t handler,
|
||||
unsigned long irqflags, const char *devname, void *dev_id)
|
||||
{
|
||||
return -EINVAL;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(request_irq);
|
||||
|
||||
void free_irq(unsigned int irq, void *dev_id)
|
||||
{
|
||||
WARN_ON(1);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(free_irq);
|
||||
|
||||
void enable_irq(unsigned int irq)
|
||||
{
|
||||
WARN_ON(1);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(enable_irq);
|
||||
|
||||
void disable_irq(unsigned int irq)
|
||||
{
|
||||
WARN_ON(1);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(disable_irq);
|
||||
|
||||
#endif /* !CONFIG_PCI */
|
||||
|
||||
void disable_irq_nosync(unsigned int irq)
|
||||
{
|
||||
disable_irq(irq);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(disable_irq_nosync);
|
||||
|
||||
unsigned long probe_irq_on(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(probe_irq_on);
|
||||
|
||||
int probe_irq_off(unsigned long val)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(probe_irq_off);
|
||||
|
||||
unsigned int probe_irq_mask(unsigned long val)
|
||||
{
|
||||
return val;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(probe_irq_mask);
|
||||
|
@ -225,7 +225,7 @@ _sclp_print:
|
||||
ahi %r2,1
|
||||
ltr %r0,%r0 # end of string?
|
||||
jz .LfinalizemtoS4
|
||||
chi %r0,0x15 # end of line (NL)?
|
||||
chi %r0,0x0a # end of line (NL)?
|
||||
jz .LfinalizemtoS4
|
||||
stc %r0,0(%r6,%r7) # copy to mto
|
||||
la %r11,0(%r6,%r7)
|
||||
|
@ -428,34 +428,27 @@ void smp_stop_cpu(void)
|
||||
* This is the main routine where commands issued by other
|
||||
* cpus are handled.
|
||||
*/
|
||||
static void smp_handle_ext_call(void)
|
||||
{
|
||||
unsigned long bits;
|
||||
|
||||
/* handle bit signal external calls */
|
||||
bits = xchg(&pcpu_devices[smp_processor_id()].ec_mask, 0);
|
||||
if (test_bit(ec_stop_cpu, &bits))
|
||||
smp_stop_cpu();
|
||||
if (test_bit(ec_schedule, &bits))
|
||||
scheduler_ipi();
|
||||
if (test_bit(ec_call_function, &bits))
|
||||
generic_smp_call_function_interrupt();
|
||||
if (test_bit(ec_call_function_single, &bits))
|
||||
generic_smp_call_function_single_interrupt();
|
||||
}
|
||||
|
||||
static void do_ext_call_interrupt(struct ext_code ext_code,
|
||||
unsigned int param32, unsigned long param64)
|
||||
{
|
||||
unsigned long bits;
|
||||
int cpu;
|
||||
|
||||
cpu = smp_processor_id();
|
||||
if (ext_code.code == 0x1202)
|
||||
inc_irq_stat(IRQEXT_EXC);
|
||||
else
|
||||
inc_irq_stat(IRQEXT_EMS);
|
||||
/*
|
||||
* handle bit signal external calls
|
||||
*/
|
||||
bits = xchg(&pcpu_devices[cpu].ec_mask, 0);
|
||||
|
||||
if (test_bit(ec_stop_cpu, &bits))
|
||||
smp_stop_cpu();
|
||||
|
||||
if (test_bit(ec_schedule, &bits))
|
||||
scheduler_ipi();
|
||||
|
||||
if (test_bit(ec_call_function, &bits))
|
||||
generic_smp_call_function_interrupt();
|
||||
|
||||
if (test_bit(ec_call_function_single, &bits))
|
||||
generic_smp_call_function_single_interrupt();
|
||||
|
||||
inc_irq_stat(ext_code.code == 0x1202 ? IRQEXT_EXC : IRQEXT_EMS);
|
||||
smp_handle_ext_call();
|
||||
}
|
||||
|
||||
void arch_send_call_function_ipi_mask(const struct cpumask *mask)
|
||||
@ -760,6 +753,8 @@ int __cpu_disable(void)
|
||||
{
|
||||
unsigned long cregs[16];
|
||||
|
||||
/* Handle possible pending IPIs */
|
||||
smp_handle_ext_call();
|
||||
set_cpu_online(smp_processor_id(), false);
|
||||
/* Disable pseudo page faults on this cpu. */
|
||||
pfault_fini();
|
||||
|
@ -492,7 +492,7 @@ static int gmap_connect_pgtable(unsigned long address, unsigned long segment,
|
||||
mp = (struct gmap_pgtable *) page->index;
|
||||
rmap->gmap = gmap;
|
||||
rmap->entry = segment_ptr;
|
||||
rmap->vmaddr = address;
|
||||
rmap->vmaddr = address & PMD_MASK;
|
||||
spin_lock(&mm->page_table_lock);
|
||||
if (*segment_ptr == segment) {
|
||||
list_add(&rmap->list, &mp->mapper);
|
||||
|
@ -302,15 +302,6 @@ static int zpci_cfg_store(struct zpci_dev *zdev, int offset, u32 val, u8 len)
|
||||
return rc;
|
||||
}
|
||||
|
||||
void synchronize_irq(unsigned int irq)
|
||||
{
|
||||
/*
|
||||
* Not needed, the handler is protected by a lock and IRQs that occur
|
||||
* after the handler is deleted are just NOPs.
|
||||
*/
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(synchronize_irq);
|
||||
|
||||
void enable_irq(unsigned int irq)
|
||||
{
|
||||
struct msi_desc *msi = irq_get_msi_desc(irq);
|
||||
@ -327,30 +318,6 @@ void disable_irq(unsigned int irq)
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(disable_irq);
|
||||
|
||||
void disable_irq_nosync(unsigned int irq)
|
||||
{
|
||||
disable_irq(irq);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(disable_irq_nosync);
|
||||
|
||||
unsigned long probe_irq_on(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(probe_irq_on);
|
||||
|
||||
int probe_irq_off(unsigned long val)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(probe_irq_off);
|
||||
|
||||
unsigned int probe_irq_mask(unsigned long val)
|
||||
{
|
||||
return val;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(probe_irq_mask);
|
||||
|
||||
void pcibios_fixup_bus(struct pci_bus *bus)
|
||||
{
|
||||
}
|
||||
|
@ -54,6 +54,7 @@ EXPORT_SYMBOL(of_set_property_mutex);
|
||||
int of_set_property(struct device_node *dp, const char *name, void *val, int len)
|
||||
{
|
||||
struct property **prevp;
|
||||
unsigned long flags;
|
||||
void *new_val;
|
||||
int err;
|
||||
|
||||
@ -64,7 +65,7 @@ int of_set_property(struct device_node *dp, const char *name, void *val, int len
|
||||
err = -ENODEV;
|
||||
|
||||
mutex_lock(&of_set_property_mutex);
|
||||
raw_spin_lock(&devtree_lock);
|
||||
raw_spin_lock_irqsave(&devtree_lock, flags);
|
||||
prevp = &dp->properties;
|
||||
while (*prevp) {
|
||||
struct property *prop = *prevp;
|
||||
@ -91,7 +92,7 @@ int of_set_property(struct device_node *dp, const char *name, void *val, int len
|
||||
}
|
||||
prevp = &(*prevp)->next;
|
||||
}
|
||||
raw_spin_unlock(&devtree_lock);
|
||||
raw_spin_unlock_irqrestore(&devtree_lock, flags);
|
||||
mutex_unlock(&of_set_property_mutex);
|
||||
|
||||
/* XXX Upate procfs if necessary... */
|
||||
|
@ -251,51 +251,6 @@ static void find_bits(unsigned long mask, u8 *pos, u8 *size)
|
||||
*size = len;
|
||||
}
|
||||
|
||||
static efi_status_t setup_efi_vars(struct boot_params *params)
|
||||
{
|
||||
struct setup_data *data;
|
||||
struct efi_var_bootdata *efidata;
|
||||
u64 store_size, remaining_size, var_size;
|
||||
efi_status_t status;
|
||||
|
||||
if (sys_table->runtime->hdr.revision < EFI_2_00_SYSTEM_TABLE_REVISION)
|
||||
return EFI_UNSUPPORTED;
|
||||
|
||||
data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
|
||||
|
||||
while (data && data->next)
|
||||
data = (struct setup_data *)(unsigned long)data->next;
|
||||
|
||||
status = efi_call_phys4((void *)sys_table->runtime->query_variable_info,
|
||||
EFI_VARIABLE_NON_VOLATILE |
|
||||
EFI_VARIABLE_BOOTSERVICE_ACCESS |
|
||||
EFI_VARIABLE_RUNTIME_ACCESS, &store_size,
|
||||
&remaining_size, &var_size);
|
||||
|
||||
if (status != EFI_SUCCESS)
|
||||
return status;
|
||||
|
||||
status = efi_call_phys3(sys_table->boottime->allocate_pool,
|
||||
EFI_LOADER_DATA, sizeof(*efidata), &efidata);
|
||||
|
||||
if (status != EFI_SUCCESS)
|
||||
return status;
|
||||
|
||||
efidata->data.type = SETUP_EFI_VARS;
|
||||
efidata->data.len = sizeof(struct efi_var_bootdata) -
|
||||
sizeof(struct setup_data);
|
||||
efidata->data.next = 0;
|
||||
efidata->store_size = store_size;
|
||||
efidata->remaining_size = remaining_size;
|
||||
efidata->max_var_size = var_size;
|
||||
|
||||
if (data)
|
||||
data->next = (unsigned long)efidata;
|
||||
else
|
||||
params->hdr.setup_data = (unsigned long)efidata;
|
||||
|
||||
}
|
||||
|
||||
static efi_status_t setup_efi_pci(struct boot_params *params)
|
||||
{
|
||||
efi_pci_io_protocol *pci;
|
||||
@ -1202,8 +1157,6 @@ struct boot_params *efi_main(void *handle, efi_system_table_t *_table,
|
||||
|
||||
setup_graphics(boot_params);
|
||||
|
||||
setup_efi_vars(boot_params);
|
||||
|
||||
setup_efi_pci(boot_params);
|
||||
|
||||
status = efi_call_phys3(sys_table->boottime->allocate_pool,
|
||||
|
@ -102,13 +102,6 @@ extern void efi_call_phys_epilog(void);
|
||||
extern void efi_unmap_memmap(void);
|
||||
extern void efi_memory_uc(u64 addr, unsigned long size);
|
||||
|
||||
struct efi_var_bootdata {
|
||||
struct setup_data data;
|
||||
u64 store_size;
|
||||
u64 remaining_size;
|
||||
u64 max_var_size;
|
||||
};
|
||||
|
||||
#ifdef CONFIG_EFI
|
||||
|
||||
static inline bool efi_is_native(void)
|
||||
|
@ -6,7 +6,6 @@
|
||||
#define SETUP_E820_EXT 1
|
||||
#define SETUP_DTB 2
|
||||
#define SETUP_PCI 3
|
||||
#define SETUP_EFI_VARS 4
|
||||
|
||||
/* ram_size flags */
|
||||
#define RAMDISK_IMAGE_START_MASK 0x07FF
|
||||
|
@ -160,7 +160,7 @@ identity_mapped:
|
||||
xorq %rbp, %rbp
|
||||
xorq %r8, %r8
|
||||
xorq %r9, %r9
|
||||
xorq %r10, %r9
|
||||
xorq %r10, %r10
|
||||
xorq %r11, %r11
|
||||
xorq %r12, %r12
|
||||
xorq %r13, %r13
|
||||
|
@ -1240,9 +1240,12 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
|
||||
ctxt->modrm_seg = VCPU_SREG_DS;
|
||||
|
||||
if (ctxt->modrm_mod == 3) {
|
||||
int highbyte_regs = ctxt->rex_prefix == 0;
|
||||
|
||||
op->type = OP_REG;
|
||||
op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
|
||||
op->addr.reg = decode_register(ctxt, ctxt->modrm_rm, ctxt->d & ByteOp);
|
||||
op->addr.reg = decode_register(ctxt, ctxt->modrm_rm,
|
||||
highbyte_regs && (ctxt->d & ByteOp));
|
||||
if (ctxt->d & Sse) {
|
||||
op->type = OP_XMM;
|
||||
op->bytes = 16;
|
||||
@ -3997,7 +4000,8 @@ static const struct opcode twobyte_table[256] = {
|
||||
DI(ImplicitOps | Priv, invd), DI(ImplicitOps | Priv, wbinvd), N, N,
|
||||
N, D(ImplicitOps | ModRM), N, N,
|
||||
/* 0x10 - 0x1F */
|
||||
N, N, N, N, N, N, N, N, D(ImplicitOps | ModRM), N, N, N, N, N, N, N,
|
||||
N, N, N, N, N, N, N, N,
|
||||
D(ImplicitOps | ModRM), N, N, N, N, N, N, D(ImplicitOps | ModRM),
|
||||
/* 0x20 - 0x2F */
|
||||
DIP(ModRM | DstMem | Priv | Op3264, cr_read, check_cr_read),
|
||||
DIP(ModRM | DstMem | Priv | Op3264, dr_read, check_dr_read),
|
||||
@ -4836,6 +4840,7 @@ twobyte_insn:
|
||||
case 0x08: /* invd */
|
||||
case 0x0d: /* GrpP (prefetch) */
|
||||
case 0x18: /* Grp16 (prefetch/nop) */
|
||||
case 0x1f: /* nop */
|
||||
break;
|
||||
case 0x20: /* mov cr, reg */
|
||||
ctxt->dst.val = ops->get_cr(ctxt, ctxt->modrm_reg);
|
||||
|
@ -1861,11 +1861,14 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvm_lapic *apic = vcpu->arch.apic;
|
||||
unsigned int sipi_vector;
|
||||
unsigned long pe;
|
||||
|
||||
if (!kvm_vcpu_has_lapic(vcpu))
|
||||
if (!kvm_vcpu_has_lapic(vcpu) || !apic->pending_events)
|
||||
return;
|
||||
|
||||
if (test_and_clear_bit(KVM_APIC_INIT, &apic->pending_events)) {
|
||||
pe = xchg(&apic->pending_events, 0);
|
||||
|
||||
if (test_bit(KVM_APIC_INIT, &pe)) {
|
||||
kvm_lapic_reset(vcpu);
|
||||
kvm_vcpu_reset(vcpu);
|
||||
if (kvm_vcpu_is_bsp(apic->vcpu))
|
||||
@ -1873,7 +1876,7 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
|
||||
else
|
||||
vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
|
||||
}
|
||||
if (test_and_clear_bit(KVM_APIC_SIPI, &apic->pending_events) &&
|
||||
if (test_bit(KVM_APIC_SIPI, &pe) &&
|
||||
vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
|
||||
/* evaluate pending_events before reading the vector */
|
||||
smp_rmb();
|
||||
|
@ -277,6 +277,9 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
|
||||
end_pfn = limit_pfn;
|
||||
nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
|
||||
|
||||
if (!after_bootmem)
|
||||
adjust_range_page_size_mask(mr, nr_range);
|
||||
|
||||
/* try to merge same page size and continuous */
|
||||
for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
|
||||
unsigned long old_start;
|
||||
@ -291,9 +294,6 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
|
||||
nr_range--;
|
||||
}
|
||||
|
||||
if (!after_bootmem)
|
||||
adjust_range_page_size_mask(mr, nr_range);
|
||||
|
||||
for (i = 0; i < nr_range; i++)
|
||||
printk(KERN_DEBUG " [mem %#010lx-%#010lx] page %s\n",
|
||||
mr[i].start, mr[i].end - 1,
|
||||
|
Some files were not shown because too many files have changed in this diff.