Remove the unused code after the switch to clock events.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Combine the timex.h variants and move the TSC-related code into tsc.h.
Move the set_cyc2ns_scale() call into the TSC calibration code, where
it belongs.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Useless header file with 32-bit and 64-bit variants. Move the
single useful line to the place where it is used.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
AMD's C1E-enabled CPUs stop the local APIC timer when both cores are
idle. This is a hardware feature which breaks highres/dynticks.
Add the same quirk as we already have for 32-bit.
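As an illustration, a minimal detection sketch in C; the MSR number and
mask follow AMD's BKDG, but the local_apic_timer_disabled flag is an
assumption, and this is not the actual patch:

#define MSR_K8_INT_PENDING_MSG  0xc0010055
#define K8_INTP_C1E_ACTIVE_MASK 0x18000000

static void __init check_amd_c1e(void)
{
        u32 lo, hi;

        if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
                return;

        rdmsr(MSR_K8_INT_PENDING_MSG, lo, hi);
        if (lo & K8_INTP_C1E_ACTIVE_MASK) {
                /* The local APIC timer stops in C1E: fall back to the
                 * broadcast mechanism instead of relying on it. */
                local_apic_timer_disabled = 1;  /* assumed flag */
        }
}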
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
The clock events merge introduced a change to the NMI watchdog code to
handle the no longer increasing local APIC timer count in broadcast
mode. This is fine for UP, but on SMP it papers over a stuck CPU which
is not handling the broadcast interrupt, due to the unconditional
summing up of the local APIC timer count and the irq0 count.
To cover all cases we need to keep track of which CPU handles irq0. In
theory this is CPU#0, due to the explicit disabling of irq balancing
for irq0, but there are systems which ignore this at the hardware
level. The per-CPU irq0 accounting also allows us to remove the irq0
to CPU0 binding.
Add a per-CPU counter for irq0 and evaluate this instead of the global
irq0 count in the NMI watchdog code.
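A minimal sketch of the accounting, with the variable and helper names
assumed for illustration rather than taken from the actual patch:

DEFINE_PER_CPU(unsigned int, irq0_irqs);

/* Timer interrupt handler: count irq0 on the CPU that handled it. */
static irqreturn_t timer_interrupt(int irq, void *dev_id)
{
        __get_cpu_var(irq0_irqs)++;
        /* ... existing tick handling ... */
        return IRQ_HANDLED;
}

/* NMI watchdog: evaluate the local CPU's irq0 count plus its local
 * APIC timer count, so a CPU which stopped handling the broadcast
 * interrupt can no longer hide behind the global irq0 sum. */
static unsigned int nmi_tick_count(int cpu)
{
        return per_cpu(irq0_irqs, cpu) + per_cpu(apic_timer_irqs, cpu);
}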
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Finally switch to the clockevents code. Share code with i386 for
HPET and PIT.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
[ tglx: arch/x86 adaptation ]
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
PIT clock events already work, and the PIT handling is the same for
i386 and x86_64. x86_64 does not support the PIT as a clock source, so
disable the PIT clocksource for x86_64.
Use the i386 i8253.h include file for x86_64 as well to share the
exports and the PIT constants.
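As a sketch, the clocksource registration might simply be compiled out
on 64-bit; the guard shown is an assumption, not the actual change:

/* i8253.c: PIT clock events stay shared, the clocksource does not. */
#ifndef CONFIG_X86_64
static int __init init_pit_clocksource(void)
{
        return clocksource_register(&clocksource_pit);
}
arch_initcall(init_pit_clocksource);
#endif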
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
PIT clock events already work, and the PIT handling is the same for
i386 and x86_64. x86_64 does not support the PIT as a clock source, so
disable the PIT clocksource for x86_64.
Prepare i8253.h to be shared with x86_64.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Move the TSC calibration code to tsc.c. Reimplement it so that the
PM timer can be used as a reference as well.
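A worked sketch of such a reference calibration; the ACPI PM timer
ticks at 3.579545 MHz and wraps at 24 bits, and the helper names come
from linux/acpi_pmtmr.h, but the loop itself is an assumption:

#include <linux/acpi_pmtmr.h>   /* acpi_pm_read_early(), ACPI_PM_MASK */

static unsigned int __init tsc_calibrate_with_pmtmr(void)
{
        u64 tsc1, tsc2, tsc_delta;
        u32 pm1, pm2, pm_delta;

        pm1 = acpi_pm_read_early();
        rdtscll(tsc1);
        mdelay(10);                     /* let both clocks run */
        pm2 = acpi_pm_read_early();
        rdtscll(tsc2);

        pm_delta = (pm2 - pm1) & ACPI_PM_MASK;  /* 24-bit wraparound */
        tsc_delta = (tsc2 - tsc1) * PMTMR_TICKS_PER_SEC;
        do_div(tsc_delta, pm_delta * 1000);     /* -> TSC freq in kHz */
        return tsc_delta;
}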
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
This adds support for Multi-Function General Purpose Timers (MFGPTs).
It detects the available timers during southbridge init and provides an
API for allocating and setting up the timers. They're higher resolution
than the standard PIT, so the MFGPTs come in handy for quite a few
things.
Note that we never clobber the timers that the BIOS might have opted to use;
we just check for unused timers.
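A short usage sketch against that API; the names below
(geode_mfgpt_alloc_timer() and friends) are modeled on the description
above and should be read as illustrative:

static int __init setup_one_mfgpt(u16 period_ticks)
{
        int timer;

        /* Ask for any unused timer in the working domain; timers the
         * BIOS has claimed are never handed out. */
        timer = geode_mfgpt_alloc_timer(MFGPT_TIMER_ANY,
                                        MFGPT_DOMAIN_WORKING);
        if (timer < 0)
                return timer;

        /* Program comparator 2, zero the counter and start counting. */
        geode_mfgpt_write(timer, MFGPT_REG_CMP2, period_ticks);
        geode_mfgpt_write(timer, MFGPT_REG_COUNTER, 0);
        geode_mfgpt_write(timer, MFGPT_REG_SETUP,
                          MFGPT_SETUP_CNTEN | MFGPT_SETUP_CMP2);
        return timer;
}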
Signed-off-by: Jordan Crouse <jordan.crouse@amd.com>
Signed-off-by: Andres Salomon <dilinger@debian.org>
Cc: Andi Kleen <ak@suse.de>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Run the lockdep_sys_exit hook after all other C code on the syscall
return path.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm: (106 commits)
KVM: Replace enum by #define
KVM: Skip pio instruction when it is emulated, not executed
KVM: x86 emulator: popf
KVM: x86 emulator: fix src, dst value initialization
KVM: x86 emulator: jmp abs
KVM: x86 emulator: lea
KVM: X86 emulator: jump conditional short
KVM: x86 emulator: implement jump conditional relative
KVM: x86 emulator: sort opcodes into ascending order
KVM: Improve emulation failure reporting
KVM: x86 emulator: pushf
KVM: x86 emulator: call near
KVM: x86 emulator: push imm8
KVM: VMX: Fix exit qualification width on i386
KVM: Move main vcpu loop into subarch independent code
KVM: VMX: Move vm entry failure handling to the exit handler
KVM: MMU: Don't do GFP_NOWAIT allocations
KVM: Rename kvm_arch_ops to kvm_x86_ops
KVM: Simplify memory allocation
KVM: Hoist SVM's get_cs_db_l_bits into core code.
...
a) include/asm-um/arch can't just point to include/asm-$(SUBARCH) now
b) arch/{i386,x86_64}/crypto are merged now
c) subarch-obj needed changes
d) cpufeature_64.h should pull "cpufeature_32.h", not <asm/cpufeature_32.h>
since it can be included from asm-um/cpufeature.h
e) in case of uml-i386 we need CONFIG_X86_32 for make and gcc, but not
for Kconfig
f) sysctl.c shouldn't do vdso_enabled for uml-i386 (actually, that one
should be registered from corresponding arch/*/kernel/*, with ifdef
going away; that's a separate patch, though).
With that and with Stephen's patch ("[PATCH net-2.6] uml: hard_header fix")
we have uml allmodconfig building both on i386 and amd64.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The Intel manual (and the KVM definition) says the TPR is 4 bits wide.
Also fix the CR8_RESEVED_BITS typo.
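The resulting check is just a 4-bit mask; a sketch with the corrected
spelling, where the helper is hypothetical:

/* TPR is the low 4 bits of CR8; everything above is reserved. */
#define CR8_RESERVED_BITS       (~0x0fULL)

static inline int cr8_valid(unsigned long cr8)
{
        return (cr8 & CR8_RESERVED_BITS) == 0;  /* else inject #GP */
}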
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
KVM reuses the IOAPIC register definitions and needs them even if the
host is not compiled with IOAPIC support. Move the #ifdef down so that
only the IOAPIC variables and functions are protected, and the register
definitions are available to all.
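Schematically, the header is laid out like this afterwards (simplified
sketch; only a couple of representative names shown):

/* Register definitions: unconditionally visible, so KVM can use
 * them even when the host kernel lacks IOAPIC support. */
union IO_APIC_reg_00 {
        u32 raw;
        /* ... bitfields ... */
};

#ifdef CONFIG_X86_IO_APIC
/* Variables and functions: only meaningful with IOAPIC support. */
extern int nr_ioapics;
extern int io_apic_get_version(int ioapic);
#endif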
Signed-off-by: Avi Kivity <avi@qumranet.com>
According to the latest memory ordering specification documents from
Intel and AMD, both manufacturers are committed to in-order loads from
cacheable memory for the x86 architecture. Hence, smp_rmb() may be a
simple barrier.
Also according to those documents, and according to existing practice in
Linux (e.g. spin_unlock doesn't enforce ordering), stores to cacheable
memory are visible in program order too. Special string stores are safe
-- their constituent stores may be out of order, but they must complete
in order WRT surrounding stores. Nontemporal stores to WB memory can go
out of order, and so they should be fenced explicitly to make them
appear in-order WRT other stores. Hence, smp_wmb() may be a simple
barrier.
http://developer.intel.com/products/processor/manuals/318147.pdf
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf
In userspace microbenchmarks on a core2 system, fence instructions range
anywhere from around 15 cycles to 50, which may not be totally
insignificant in performance critical paths (code size will go down
too).
However the primary motivation for this is to have the canonical barrier
implementation for x86 architecture.
smp_rmb() on buggy Pentium Pros remains a locked op, which is
apparently required.
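The resulting definitions look roughly like this; the
CONFIG_X86_PPRO_FENCE symbol for the buggy Pentium Pro case is an
assumption, and the 32-bit alternatives handling is omitted:

#define barrier()       asm volatile("" ::: "memory")   /* compiler only */

/* Mandatory barriers: devices (WC memory etc.) may reorder. */
#define mb()            asm volatile("mfence" ::: "memory")
#define rmb()           asm volatile("lfence" ::: "memory")
#define wmb()           asm volatile("sfence" ::: "memory")

#ifdef CONFIG_X86_PPRO_FENCE
/* Buggy Pentium Pro: loads may be reordered, keep a locked op. */
#define smp_rmb()       asm volatile("lock; addl $0,0(%%esp)" ::: "memory")
#else
#define smp_rmb()       barrier()       /* x86 loads are in-order */
#endif
#define smp_wmb()       barrier()       /* WB stores in program order */
#define smp_mb()        mb()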
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
wmb() on x86 must always include a barrier, because stores can go out
of order in many cases when dealing with devices (e.g. WC memory).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Move the headers to include/asm-x86 and fix up the
header install make rules.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>