PPC updates from Alex.
* 'for-upstream' of git://github.com/agraf/linux-2.6:
KVM: PPC: Emulator: clean up SPR reads and writes
KVM: PPC: Emulator: clean up instruction parsing
kvm/powerpc: Add new ioctl to retrieve server MMU info
kvm/book3s: Make kernel emulated H_PUT_TCE available for "PR" KVM
KVM: PPC: bookehv: Fix r8/r13 storing in level exception handler
KVM: PPC: Book3S: Enable IRQs during exit handling
KVM: PPC: Fix PR KVM on POWER7 bare metal
KVM: PPC: Fix stbux emulation
KVM: PPC: bookehv: Use lwz/stw instead of PPC_LL/PPC_STL for 32-bit fields
KVM: PPC: Book3S: PR: No isync in slbie path
KVM: PPC: Book3S: PR: Optimize entry path
KVM: PPC: booke(hv): Fix save/restore of guest accessible SPRGs.
KVM: PPC: Restrict PPC_[L|ST]D macro to asm code
KVM: PPC: bookehv: Use a macro for saving/restoring guest registers to/from their 64-bit copies.
KVM: PPC: Use clockevent multiplier and shifter for decrementer
KVM: Use minimum and maximum address mapped by TLB1
Signed-off-by: Avi Kivity <avi@redhat.com>
When reading and writing SPRs, every SPR emulation piece had to fetch
the value from, or store it into, the respective GPR itself.
This approach is pretty prone to failure. What if we accidentally
implement mfspr emulation where we just do "break" and nothing else?
Suddenly we would get a random value in the return register - which is
always a bad idea.
So let's consolidate the generic code paths and only hand the
core-specific SPR handling code ready-made variables to read from or
write to.
Functionally, this patch doesn't change anything, but it increases the
readability of the code and makes it less prone to bugs.
Signed-off-by: Alexander Graf <agraf@suse.de>
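For illustration, the consolidated shape might look like this (a hedged
sketch; the core hook names and signatures approximate the kvm ppc code):

	/* Generic path: the GPR is read/written exactly once, and the
	 * core-specific handler only ever sees a plain value. */
	static int emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
	{
		ulong spr_val = kvmppc_get_gpr(vcpu, rs);

		return kvmppc_core_emulate_mtspr(vcpu, sprn, spr_val);
	}

	static int emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
	{
		ulong spr_val = 0;	/* a forgotten handler now yields 0,
					 * not a stale random value */
		int ret = kvmppc_core_emulate_mfspr(vcpu, sprn, &spr_val);

		kvmppc_set_gpr(vcpu, rt, spr_val);
		return ret;
	}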
Instructions on PPC are pretty similarly encoded. So instead of
every instruction emulation code decoding the instruction fields
itself, we can move that code to more generic places and rely on
the compiler to optimize the unused bits away.
This has two advantages. It makes the code smaller and it makes the
code less error prone, as the instruction fields are always
available, so accidental misuse is reduced.
Functionally, this patch doesn't change anything.
Signed-off-by: Alexander Graf <agraf@suse.de>
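The field extraction helpers are of this shape (the kernel keeps them
in asm/disassemble.h; a sketch):

	/* PPC instruction fields sit at fixed bit positions, so they
	 * can be decoded once in the generic emulation path. */
	static inline unsigned int get_op(u32 inst) { return inst >> 26; }
	static inline unsigned int get_rt(u32 inst) { return (inst >> 21) & 0x1f; }
	static inline unsigned int get_rs(u32 inst) { return (inst >> 21) & 0x1f; }
	static inline unsigned int get_ra(u32 inst) { return (inst >> 16) & 0x1f; }
	static inline unsigned int get_rb(u32 inst) { return (inst >> 11) & 0x1f; }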
This is necessary for qemu to be able to pass the right information
to the guest, such as the supported page sizes and corresponding
encodings in the SLB and hash table, which can vary depending
on the processor type, the type of KVM used (PR vs HV) and the
version of KVM.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[agraf: fix compilation on hv, adjust for newer ioctl numbers]
Signed-off-by: Alexander Graf <agraf@suse.de>
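From qemu's side the query is a plain VM ioctl; a hedged userspace
sketch (field names as I recall them from linux/kvm.h, verify against
the header):

	#include <linux/kvm.h>
	#include <stdio.h>
	#include <sys/ioctl.h>

	static void print_smmu_info(int vm_fd)
	{
		struct kvm_ppc_smmu_info info;

		if (ioctl(vm_fd, KVM_PPC_GET_SMMU_INFO, &info) < 0) {
			perror("KVM_PPC_GET_SMMU_INFO");
			return;
		}
		printf("slb_size %u flags 0x%llx\n",
		       info.slb_size, (unsigned long long)info.flags);
	}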
There is nothing in the code for emulating TCE tables in the kernel
that prevents it from working on "PR" KVM... other than ifdef's and
location of the code.
This moves the bulk of the code to a new file called
book3s_64_vio.c.
This speeds things up a bit on my G5.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[agraf: fix for hv kvm, 32bit, whitespace]
Signed-off-by: Alexander Graf <agraf@suse.de>
The guest's r8 register is held in the scratch register and stored
correctly, so remove the instruction that clobbers it. The guest's r13
was missing from the vcpu struct; store it there.
Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
While handling an exit, we should listen for interrupts and make sure to
receive them when they arrive, to keep our latencies low.
Signed-off-by: Alexander Graf <agraf@suse.de>
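Reduced to its essence, the change brackets the exit handler like this
(a sketch, not the exact diff):

	/* Take interrupts while handling the exit; only run with them
	 * disabled around the actual guest entry. */
	local_irq_enable();
	r = kvmppc_handle_exit(run, vcpu, exit_nr);
	local_irq_disable();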
When running on a system that is HV capable, some interrupts use HSRR
SPRs instead of the normal SRR SPRs. These are also used in the Linux
handlers to jump back to code after an interrupt got processed.
Unfortunately, in our "jump back to the real host handler after we've
done the context switch" code, we were only setting the SRR SPRs,
causing Linux to jump back to some invalid IP after it's processed
the interrupt.
This fixes random crashes on p7 opal mode with PR KVM for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
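Conceptually the return path has to pick the right SPR pair (a hedged
C-level sketch of what is really assembly; interrupt_uses_hsrr() is a
hypothetical helper):

	/* HV-capable cores deliver some interrupts via HSRR0/1;
	 * returning through the wrong pair sends Linux to a stale
	 * SRR0 value. */
	if (interrupt_uses_hsrr(vec)) {
		mtspr(SPRN_HSRR0, host_handler);
		mtspr(SPRN_HSRR1, host_msr);
	} else {
		mtspr(SPRN_SRR0, host_handler);
		mtspr(SPRN_SRR1, host_msr);
	}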
Stbux writes the address it's operating on to the register specified in ra,
not into the data source register.
Signed-off-by: Alexander Graf <agraf@suse.de>
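In emulation terms (a hedged sketch; helper names approximate the kvm
ppc emulator):

	case OP_31_XOP_STBUX:
		ea = kvmppc_get_gpr(vcpu, ra) + kvmppc_get_gpr(vcpu, rb);
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
					       1, 1);
		/* the update form writes EA back to ra, not to rs */
		kvmppc_set_gpr(vcpu, ra, ea);
		break;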
The interrupt code used the PPC_LL/PPC_STL macros to load/store some
u32 fields; on 64-bit these expand to 64-bit accesses, overflowing into
adjacent memory. Use lwz/stw instead.
Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
While messing around with the SLBs we're running in real mode. The
entry to guest space goes through rfid, which is context synchronizing,
so there's no need to manually synchronize anything through isync.
With this patch and a simple privileged SPR access loop guest, I get
a speed bump from 2035607 to 2181301 exits per second.
Signed-off-by: Alexander Graf <agraf@suse.de>
By shuffling a few instructions around we can execute more memory
loads in parallel, giving us a small performance boost.
With this patch and a simple privileged SPR access loop guest, I get
a speed bump from 2013052 to 2035607 exits per second.
Signed-off-by: Alexander Graf <agraf@suse.de>
For the guest-accessible SPRGs 4-7, save/restore must be handled
differently for the 64-bit and 32-bit cases. Use the PPC_STD/PPC_LD
macros for saving/restoring these registers.
Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Introduce PPC_STD/PPC_LD macros for saving/restoring guest registers to/from their 64-bit copies.
Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
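The macros reduce to something like this (a sketch; offset+4 assumes
the big-endian layout of the 64-bit slot, the actual header may differ):

	#ifdef CONFIG_64BIT
	#define PPC_STD(sreg, offset, areg)	std	sreg, (offset)(areg)
	#define PPC_LD(treg, offset, areg)	ld	treg, (offset)(areg)
	#else
	#define PPC_STD(sreg, offset, areg)	stw	sreg, (offset+4)(areg)
	#define PPC_LD(treg, offset, areg)	lwz	treg, (offset+4)(areg)
	#endif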
The time for which the hrtimer is started for decrementer emulation is
calculated using tb_ticks_per_usec, while the hrtimer relies on the
clockevent layer for DEC reprogramming (if needed), which converts
between time and timebase ticks using the multiplier and shifter
mechanism implemented within the clockevent layer.
This round trip (timebase -> time -> timebase) is not exact, because
the two conversion mechanisms are not consistent.
In our setup it adds 2% jitter.
With this patch the clockevent multiplier and shifter mechanism is also
used when starting the hrtimer for decrementer emulation. Now the
jitter is < 0.5%.
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
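Since the clockevent pair converts nanoseconds to ticks as
ticks = (ns * mult) >> shift, going from DEC ticks back to nanoseconds
is the inverse (a hedged sketch):

	u64 dec_time = vcpu->arch.dec;

	/* invert ticks = ns * mult >> shift */
	dec_time = dec_time << decrementer_clockevent.shift;
	do_div(dec_time, decrementer_clockevent.mult);	/* now in ns */
	hrtimer_start(&vcpu->arch.dec_timer,
		      ktime_set(0, dec_time), HRTIMER_MODE_REL);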
Keep track of the minimum and maximum address mapped by tlb1.
This lets the TLBMISS handling in KVM quickly check whether the address
lies in the mapped range at all; if it does not, there is no need to
look at each entry of the tlb1 array.
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
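The fast-path check is just a range test before the scan (a hedged
sketch; field names are illustrative):

	/* cheap gate in front of the linear tlb1 scan */
	if (addr < vcpu_e500->tlb1_min_eaddr ||
	    addr > vcpu_e500->tlb1_max_eaddr)
		return -1;	/* cannot be mapped by tlb1 at all */

	for (i = 0; i < tlb1_entries; i++) {
		/* ... compare addr against each tlb1 entry ... */
	}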
Although the ModRM byte is fetched for group decoding, it is soon
pushed back so that decode_modrm() fetches it again later.
Now that the ModRM flag can be found in the top-level opcode tables,
fetch the ModRM byte before group decoding to make the code simpler.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
Needed for the following patch which simplifies ModRM fetching code.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
This cpuid range does not exist on real HW, and the Intel spec says
that "Information returned for highest basic information leaf" will be
returned. Not very well defined.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
cpuid eax should return the max leaf so that
guests can find out the valid range.
This matches Xen et al.
Update documentation to match.
Tested with -cpu host.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
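A hedged sketch of the corresponding leaf setup (close to the kvm code
as I recall it):

	case KVM_CPUID_SIGNATURE: {
		char signature[12] = "KVMKVMKVM\0\0";
		u32 *sigptr = (u32 *)signature;

		entry->eax = KVM_CPUID_FEATURES;	/* was 0 */
		entry->ebx = sigptr[0];
		entry->ecx = sigptr[1];
		entry->edx = sigptr[2];
		break;
	}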
Let userspace know the number of max and supported cpus for kvm on s390.
Return KVM_MAX_VCPUS (currently 64) for both values.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Let's replace the old open-coded version of diag 0x44 (which relied on
compat_sched_yield) with kvm_vcpu_on_spin.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
This patch implements the directed yield hypercall found on other
System z hypervisors. It delegates execution time to the virtual cpu
specified in the instruction's parameter.
Useful to avoid long spinlock waits in the guest.
[Christian Borntraeger: moved common code into virt/kvm/]
Signed-off-by: Konstantin Weitz <WEITZKON@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
We can't run PIT IRQ injection work in the interrupt context of the host
timer. This would allow the user to influence the handler complexity by
asking for a broadcast to a large number of VCPUs. Therefore, this work
was pushed into workqueue context in 9d244caf2e. However, this prevents
prioritizing the PIT injection over other tasks, as workqueues share
kernel threads.
This replaces the workqueue with a kthread worker and gives that thread
a name in the format "kvm-pit/<owner-process-pid>". That makes it
possible to identify the kthread and adjust its priority according to
the VM process parameters.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
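The setup uses the kthread_worker API of that era (a hedged sketch;
names approximate the actual patch):

	pid = task_pid_nr(current);
	init_kthread_worker(&pit->worker);
	pit->worker_task = kthread_run(kthread_worker_fn, &pit->worker,
				       "kvm-pit/%d", pid);
	if (IS_ERR(pit->worker_task))
		goto fail;
	init_kthread_work(&pit->expired, pit_do_work);

	/* from the timer callback, instead of schedule_work(): */
	queue_kthread_work(&pit->worker, &pit->expired);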
The patch introduces a bitmap that holds the reasons the apic should be
checked during vmexit. This is in preparation for the PV EOI patch,
which will add one more check on vmexit. With the bitmap we can do
if (apic_attention) to check everything simultaneously, which adds
zero overhead on the fast path.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
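On the fast path this collapses several checks into one test (a hedged
sketch; the bit name follows the kvm source as I recall it):

	/* single word tested on every exit */
	if (unlikely(vcpu->arch.apic_attention))
		kvm_lapic_sync_from_vapic(vcpu);

	/* elsewhere, when the vapic page is activated: */
	set_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);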
Currently, MSI messages can only be injected into in-kernel irqchips by
defining a corresponding IRQ route for each message. Not only is this
inconvenient if the MSI messages are generated "on the fly" by user
space; IRQ routes are also a limited resource that user space has to
manage carefully.
By providing a direct injection path, we can both avoid using up limited
resources and simplify the necessary steps for user land.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
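A hedged userspace sketch of the new path (address/data values are
illustrative):

	#include <linux/kvm.h>
	#include <string.h>
	#include <sys/ioctl.h>

	static int inject_msi(int vm_fd)
	{
		struct kvm_msi msi;

		memset(&msi, 0, sizeof(msi));
		msi.address_lo = 0xfee00000;	/* example MSI address */
		msi.data = 0x4041;		/* example vector/delivery */
		return ioctl(vm_fd, KVM_SIGNAL_MSI, &msi);
	}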
MMIO requests that are split across a page boundary are currently
broken - the code does not expect to be aborted by the exit to
userspace for the first MMIO fragment.
This patch fixes the problem by generalizing the current code for handling
16-byte MMIOs to handle a number of "fragments", and changes the MMIO
code to create those fragments.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
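The generalization boils down to a small per-fragment descriptor (a
sketch close to the actual structure, as I recall it):

	/* each piece of a split MMIO access is described independently
	 * and completed across userspace exits */
	struct kvm_mmio_fragment {
		gpa_t gpa;	/* guest physical address of fragment */
		void *data;	/* backing buffer */
		unsigned len;	/* fragment length in bytes */
	};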
Merge reason: development work has dependency on kvm patches merged
upstream.
Conflicts:
Documentation/feature-removal-schedule.txt
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Pull KVM updates from Marcelo Tosatti.
* git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: lock slots_lock around device assignment
KVM: VMX: Fix kvm_set_shared_msr() called in preemptible context
KVM: unmap pages from the iommu when slots are removed
KVM: PMU emulation: GLOBAL_CTRL MSR should be enabled on reset
kvm_set_shared_msr() may not be called in preemptible context,
but vmx_set_msr() does so:
BUG: using smp_processor_id() in preemptible [00000000] code: qemu-kvm/22713
caller is kvm_set_shared_msr+0x32/0xa0 [kvm]
Pid: 22713, comm: qemu-kvm Not tainted 3.4.0-rc3+ #39
Call Trace:
[<ffffffff8131fa82>] debug_smp_processor_id+0xe2/0x100
[<ffffffffa0328ae2>] kvm_set_shared_msr+0x32/0xa0 [kvm]
[<ffffffffa03a103b>] vmx_set_msr+0x28b/0x2d0 [kvm_intel]
...
Making kvm_set_shared_msr() work in preemptible context is cleaner,
but it's used in the fast path. Making two variants is overkill, so
this patch just disables preemption around the call.
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
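The fix is just a preemption guard around the one call site (sketch):

	preempt_disable();
	kvm_set_shared_msr(msr->index, msr->data, msr->mask);
	preempt_enable();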
It's much cleaner to use PT_PAGE_TABLE_LEVEL than its numeric value.
Signed-off-by: Davidlohr Bueso <dave@gnu.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Pull s390 updates from Martin Schwidefsky:
"A couple of bug fixes, one of them is a TLB flush fix. Included as
well is one small coding style patch and a patch to update the default
configuration."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
[S390] Fix compile error in swab.h
[S390] Fix stfle() lowcore protection problem
[S390] cpum_cf: get rid of compile warnings
[S390] irq: simple coding style change
[S390] update default configuration
[S390] fix tlb flushing for page table pages
[S390] kernel: Use local_irq_save() for memcpy_real()
[S390] s390/char/vmur.c: fix memory leak
[S390] drivers/s390/block/dasd_eckd.c: add missing dasd_sfree_request
Michel Lespinasse cleaned up the futex calling conventions in commit
37a9d912b2 ("futex: Sanitize cmpxchg_futex_value_locked API").
But the ia64 implementation was subtly broken. Gcc does not know that
register "r8" will be updated by the fault handler if the cmpxchg
instruction takes an exception. So it feels safe in letting the
initialization of r8 slide to after the cmpxchg. Result: we always
return 0 whether the user address faulted or not.
Fix by moving the initialization of r8 into the __asm__ code so gcc
won't move it.
Reported-by: <emeric.maschino@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42757
Tested-by: <emeric.maschino@gmail.com>
Acked-by: Michel Lespinasse <walken@google.com>
Cc: stable@vger.kernel.org # v2.6.39+
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
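The pattern, reduced to its essence (a hedged illustration, not the
actual ia64 source; ar.ccv setup and exception-table annotations are
omitted):

	/* Broken: gcc may sink this store past the asm, because the
	 * asm does not visibly touch r8 - only the fault handler
	 * writes it, behind the compiler's back. */
	register unsigned long r8 asm("r8") = 0;
	asm volatile ("1: cmpxchg4.acq %0=[%1],%2,ar.ccv\n2:"
		      : "=r" (prev) : "r" (uaddr), "r" (newval) : "memory");
	return r8;

	/* Fixed: zero r8 inside the asm template itself, so the
	 * initialization cannot be reordered past the faulting insn. */
	asm volatile ("mov %0=r0\n"
		      "1: cmpxchg4.acq %1=[%2],%3,ar.ccv\n2:"
		      : "=r" (r8), "=r" (prev)
		      : "r" (uaddr), "r" (newval) : "memory");
	return r8;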
The Intel spec says that TMR needs to be set/cleared
when IRR is set, but kvm also clears it on EOI.
I did some tests on a real (AMD based) system,
and I see the same TMR values both before
and after EOI, so I think it's a minor bug in kvm.
This patch fixes TMR to be set/cleared on IRR set
only, as per the spec.
And now that we don't clear TMR on EOI, we can save
an atomic read of TMR on EOIs that are not propagated
to the ioapic, by first checking whether the ioapic
needs the specific vector and calculating
the mode afterwards.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
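A hedged sketch of the accept-irq path after the change (helper names
follow lapic.c as I recall it):

	/* TMR tracks the trigger mode at IRR-set time and is left
	 * alone on EOI */
	if (trig_mode)
		apic_set_vector(vector, apic->regs + APIC_TMR);
	else
		apic_clear_vector(vector, apic->regs + APIC_TMR);
	apic_set_vector(vector, apic->regs + APIC_IRR);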
General support for the MMX instruction set. Special care is taken
to trap pending x87 exceptions so that they are properly reflected
to the guest.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
An Ubuntu 9.10 Karmic Koala guest is unable to boot or install due to
missing movdqa emulation:
kvm_exit: reason EXCEPTION_NMI rip 0x7fef3e025a7b info 7fef3e799000 80000b0e
kvm_page_fault: address 7fef3e799000 error_code f
kvm_emulate_insn: 0:7fef3e025a7b: 66 0f 7f 07 (prot64)
movdqa %xmm0,(%rdi)
[avi: mark it explicitly aligned]
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
x86 defines three classes of vector instructions: explicitly
aligned (#GP(0) if unaligned), explicitly unaligned, and default
(which depends on the encoding: AVX is unaligned, SSE is aligned).
Add support for marking an instruction as explicitly aligned or
unaligned, and mark MOVDQU as unaligned.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
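Checking the new flags might look like this (a hedged sketch; flag and
helper names are from memory):

	/* only 16-byte operands are alignment-checked; the explicit
	 * flags override the SSE-aligned default */
	static bool insn_aligned(struct x86_emulate_ctxt *ctxt, unsigned size)
	{
		if (likely(size < 16))
			return false;
		if (ctxt->d & Aligned)
			return true;
		else if (ctxt->d & Unaligned)
			return false;
		else
			return true;	/* default: aligned (SSE) */
	}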
Enable x86 feature-based autoloading for the kvm-amd module on CPUs
with X86_FEATURE_SVM.
Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
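The mechanism is a cpu feature device table; the pattern is (to the
best of my recollection, matching the patch):

	#include <asm/cpu_device_id.h>
	#include <linux/module.h>

	/* udev sees a cpu "device" carrying the svm feature flag and
	 * loads kvm-amd automatically */
	static const struct x86_cpu_id svm_cpu_id[] = {
		X86_FEATURE_MATCH(X86_FEATURE_SVM),
		{}
	};
	MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);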
This can happen if the instruction is much longer than the maximum length,
or if insn->opnd_bytes is manually changed.
This patch also fixes warnings from -Wswitch-default flag.
Reported-by: Prashanth Nageshappa <prashanth@linux.vnet.ibm.com>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@linux.vnet.ibm.com>
Cc: Linux-mm <linux-mm@kvack.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Anton Arapov <anton@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: yrl.pp-manager.tt@hitachi.com
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20120413032427.32577.42602.stgit@localhost.localdomain
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Pull ARM fixes from Russell King:
"Nothing too disasterous, the biggest thing being the removal of the
regulator support for vcore in the AMBA driver; only one SoC was using
this and it got broken during the last merge window, which then
started causing problems for other people. Mutual agreement was
reached for it to be removed."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7386/1: jump_label: fixup for rename to static_key
ARM: 7384/1: ThumbEE: Disable userspace TEEHBR access for !CONFIG_ARM_THUMBEE
ARM: 7382/1: mm: truncate memory banks to fit in 4GB space for classic MMU
ARM: 7359/2: smp_twd: Only wait for reprogramming on active cpus
ARM: 7383/1: nommu: populate vectors page from paging_init
ARM: 7381/1: nommu: fix typo in mm/Kconfig
ARM: 7380/1: DT: do not add a zero-sized memory property
ARM: 7379/1: DT: fix atags_to_fdt() second call site
ARM: 7366/3: amba: Remove AMBA level regulator support
ARM: 7377/1: vic: re-read status register before dispatching each IRQ handler
ARM: 7368/1: fault.c: correct how the tsk->[maj|min]_flt gets incremented
The 'max' range needs to be unsigned, since the size of the user address
space is bigger than 2GB.
We know that 'count' is positive in 'long' (that is checked in the
caller), so we will truncate 'max' down to something that fits in a
signed long, but before we actually do that, that comparison needs to be
done in unsigned.
Bug introduced in commit 92ae03f2ef ("x86: merge 32/64-bit versions of
'strncpy_from_user()' and speed it up"). On x86-64 you can't trigger
this, since the user address space is much smaller than 63 bits, and on
x86-32 it works in practice, since you would seldom hit the strncpy
limits anyway.
While I had actually tested the corner cases, I had only tested them
on x86-64. Besides, I had only worried about the case of a pointer *close*
to the end of the address space, rather than really far away from it ;)
This also changes the "we hit the user-specified maximum" to return
'res', for the trivial reason that gcc seems to generate better code
that way. 'res' and 'count' are the same in that case, so it really
doesn't matter which one we return.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
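The corrected shape, as a hedged sketch of lib/strncpy_from_user.c:

	long strncpy_from_user(char *dst, const char __user *src, long count)
	{
		unsigned long max_addr, src_addr;

		if (unlikely(count <= 0))
			return 0;

		max_addr = user_addr_max();
		src_addr = (unsigned long)src;
		if (likely(src_addr < max_addr)) {
			/* compare in unsigned arithmetic first, *then*
			 * truncate toward the signed 'count' */
			unsigned long max = max_addr - src_addr;

			if (max > count)	/* count > 0, so safe */
				max = count;
			return do_strncpy_from_user(dst, src, count, max);
		}
		return -EFAULT;
	}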
c5905afb0 ("static keys: Introduce 'struct static_key'...") renamed
struct jump_label_key to struct static_key. Fixup ARM for this to
eliminate these build warnings:
include/linux/jump_label.h:113:2:
warning: passing argument 1 of 'arch_static_branch' from incompatible pointer type
include/asm/jump_label.h:17:82:
note: expected 'struct jump_label_key *' but argument is of type 'struct static_key *'
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>