In order to work around old LILO versions providing an invalid ss
register, the current setup code always sets up a new stack,
immediately following .bss and the heap. But this breaks LOADLIN.
This rewrite of the workaround checks for an invalid stack (ss!=ds)
first, and leaves ss:sp alone otherwise (apart from aligning esp).
[hpa note: LOADLIN has a number of arbitrary hard-coded limits that
are being pushed up against. Without some major revision of LOADLIN
itself, keeping it alive will not be sustainable. This gives it
another brief lease on life, however. This patch also helps the
cmdline truncation problem with old versions of SYSLINUX.]
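
A minimal C sketch of the decision described above, using made-up helper
and segment values (the actual workaround is real-mode assembly under
arch/x86/boot):

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical helper: returns the %sp to use.  If %ss does not match
   * %ds (the old-LILO case) a new stack is placed after .bss and the
   * heap; otherwise the loader-provided stack is kept and only aligned. */
  static uint16_t setup_stack(uint16_t ss, uint16_t ds, uint16_t sp,
                              uint16_t end_of_heap)
  {
      if (ss != ds)
          return end_of_heap & ~3;  /* invalid stack: build a new one */
      return sp & ~3;               /* sane stack (e.g. LOADLIN): keep it */
  }

  int main(void)
  {
      /* Old LILO: ss != ds, so a fresh stack is set up. */
      printf("lilo:    sp=0x%04x\n",
             (unsigned)setup_stack(0x9000, 0x1000, 0xfffe, 0x8e00));
      /* LOADLIN: ss == ds, so ss:sp is left alone apart from alignment. */
      printf("loadlin: sp=0x%04x\n",
             (unsigned)setup_stack(0x1000, 0x1000, 0x7bfe, 0x8e00));
      return 0;
  }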
Signed-off-by: Jens Rottmann <JRottmann at LiPPERT-AT. de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
For x86, ARCH may say i386 or x86_64, and soon x86.
Rely on CONFIG_X86_32 to select between 32-bit and 64-bit, or just
hardcode the value as appropriate.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Move all CPU definitions to Kconfig.cpu
Always define X86_MINIMUM_CPU_FAMILY and do the
obvious code cleanup in boot/cpucheck.c
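
A hedged C sketch of the kind of simplification this allows once
CONFIG_X86_MINIMUM_CPU_FAMILY is always defined; check_cpu_family() and
the fallback default below are illustrative, not the actual
boot/cpucheck.c code:

  #include <stdio.h>

  /* Stand-in for the value Kconfig.cpu would generate; since it is
   * always defined now, no #ifdef maze is needed around the check. */
  #ifndef CONFIG_X86_MINIMUM_CPU_FAMILY
  #define CONFIG_X86_MINIMUM_CPU_FAMILY 4
  #endif

  /* Illustrative check: fail if the detected CPU family is older than
   * what the kernel was configured for. */
  static int check_cpu_family(int detected_family)
  {
      return detected_family >= CONFIG_X86_MINIMUM_CPU_FAMILY ? 0 : -1;
  }

  int main(void)
  {
      printf("family 6: %s\n", check_cpu_family(6) ? "too old" : "ok");
      printf("family 3: %s\n", check_cpu_family(3) ? "too old" : "ok");
      return 0;
  }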
Comments from: Adrian Bunk <bunk@kernel.org> incorporated.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Adrian Bunk <bunk@kernel.org>
Cc: Brian Gerst <bgerst@didntduck.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
In accordance with the newly formalized 32-bit boot protocol, set
%ebx == %ebp == %edi == 0 in order to support future extensions to the
protocol.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
The kernel only ever supports one version of the boot protocol, so
there is no need to check the boot protocol revision to see if a
feature is supported.
Both x86 and x86_64 support the same boot protocol, so we need to
implement KEEP_SEGMENTS support on x86_64 as well. It isn't just
paravirt bootloaders that could use this functionality.
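
A minimal C sketch of the flag test, assuming KEEP_SEGMENTS is bit 6 of
loadflags as in boot protocol 2.07; the trimmed struct and
should_reload_segments() helper are stand-ins, and the real check lives
in the assembly entry path:

  #include <stdint.h>
  #include <stdio.h>

  #define KEEP_SEGMENTS (1 << 6)   /* bit 6 of loadflags, boot protocol 2.07 */

  struct fake_setup_header {       /* trimmed stand-in for the real header */
      uint8_t loadflags;
  };

  /* No protocol-revision check is needed: older loaders simply leave the
   * bit clear, which keeps the old "reload the segments" behaviour. */
  static int should_reload_segments(const struct fake_setup_header *hdr)
  {
      return !(hdr->loadflags & KEEP_SEGMENTS);
  }

  int main(void)
  {
      struct fake_setup_header legacy   = { .loadflags = 0 };
      struct fake_setup_header paravirt = { .loadflags = KEEP_SEGMENTS };

      printf("legacy loader:   reload=%d\n", should_reload_segments(&legacy));
      printf("paravirt loader: reload=%d\n", should_reload_segments(&paravirt));
      return 0;
  }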
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@suse.de>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
We use signed values for limit checking since the values can go
negative under certain circumstances. However, sizeof() is unsigned
and forces the comparison to be unsigned, so move the comparison into
the heap_free() macros so we can ensure it is a signed comparison.
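
A hedged sketch of the pitfall and the fix, not the kernel's boot heap
code; the three-argument heap_free() variant and the toy heap below are
purely illustrative:

  #include <stddef.h>
  #include <stdio.h>

  /* Illustrative variant; the signed casts are the point: they keep the
   * comparison signed even though sizeof() is unsigned. */
  #define heap_free(heap_end, heap_ptr, n) \
      ((int)((heap_end) - (heap_ptr)) >= (int)(n))

  int main(void)
  {
      char heap[64];
      char *heap_ptr = heap + 64;            /* allocation pointer, overrun */
      char *heap_end = heap + 48;            /* logical end of the heap     */
      ptrdiff_t avail = heap_end - heap_ptr; /* -16: nothing is free        */

      /* Unsigned comparison: -16 wraps to a huge value and wrongly passes. */
      printf("unsigned check: %d\n", (size_t)avail >= sizeof(long[4]));

      /* Signed comparison inside the macro correctly fails. */
      printf("signed check:   %d\n",
             heap_free(heap_end, heap_ptr, sizeof(long[4])));
      return 0;
  }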
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Apparently some specific versions of LILO enter the kernel with a
stack pointer that doesn't match the rest of the segments. Make our
best attempt at untangling the resulting mess.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Make <asm/setup.h> usable by the boot code.
Clean up vestiges of the old command-line protocol from setup.h and
head_32.S (it is still supported from the boot loader's point of
view, since it is converted to the new command-line protocol by the
boot code).
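
A rough C sketch of that conversion, assuming the old protocol's
OLD_CL_MAGIC (0xA33F) plus 16-bit offset layout; cmdline_ptr_from_old()
and the simplified struct are illustrative, not the boot code's actual
implementation:

  #include <stdint.h>
  #include <stdio.h>

  #define OLD_CL_MAGIC 0xA33F

  struct old_cmdline {          /* simplified: the two old-protocol fields */
      uint16_t cl_magic;        /* OLD_CL_MAGIC when the old protocol is used */
      uint16_t cl_offset;       /* offset of the command line in the segment  */
  };

  /* Prefer a cmd_line_ptr already set by a new-protocol loader; otherwise
   * derive a linear pointer from the old magic/offset pair. */
  static uint32_t cmdline_ptr_from_old(const struct old_cmdline *old,
                                       uint16_t real_mode_seg,
                                       uint32_t cmd_line_ptr)
  {
      if (cmd_line_ptr)
          return cmd_line_ptr;
      if (old->cl_magic == OLD_CL_MAGIC)
          return ((uint32_t)real_mode_seg << 4) + old->cl_offset;
      return 0;                 /* no command line passed at all */
  }

  int main(void)
  {
      struct old_cmdline old = { .cl_magic = OLD_CL_MAGIC, .cl_offset = 0x9000 };

      printf("cmd_line_ptr = 0x%08x\n",
             (unsigned)cmdline_ptr_from_old(&old, 0x1000, 0));
      return 0;
  }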
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This patch uses the updated boot protocol to do paravirtualized boot.
If the boot version is >= 2.07, then it will do two things:
1. Check the bootparams loadflags to see if we should reload the
segment registers and clear interrupts. This is appropriate
for normal native boot and some paravirtualized environments, but
inappropriate for others.
2. Check the hardware architecture, and dispatch to the appropriate
kernel entrypoint. If the bootloader doesn't set this, then we
simply do the normal boot sequence.
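
A hedged C sketch of the dispatch in point 2; the real code is assembly
in the kernel entry path, and the entry functions and table below are
placeholders, assuming unknown hardware_subarch values fall back to the
normal boot sequence:

  #include <stdint.h>
  #include <stdio.h>

  /* Placeholder entry points; the real ones are kernel entry code. */
  static void default_entry(void)  { puts("normal native boot path"); }
  static void paravirt_entry(void) { puts("paravirtualized boot path"); }

  static void (*const subarch_entries[])(void) = {
      default_entry,   /* 0: default, also what an unset field selects */
      paravirt_entry,  /* 1: a paravirtualized platform (placeholder)  */
  };

  /* Values the kernel does not recognise take the normal boot sequence. */
  static void dispatch(uint32_t hardware_subarch)
  {
      if (hardware_subarch >= sizeof(subarch_entries) / sizeof(subarch_entries[0]))
          hardware_subarch = 0;
      subarch_entries[hardware_subarch]();
  }

  int main(void)
  {
      dispatch(0);   /* boot loader left the field at zero       */
      dispatch(1);   /* boot loader asked for the paravirt entry */
      dispatch(7);   /* unknown value: normal boot sequence      */
      return 0;
  }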
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
x86 uses target-specific assignment of EXTRA_AFLAGS and
EXTRA_CFLAGS - this caused trouble when introducing
asflags-y and ccflags-y.
Fixed the target-specific assignments in arch/x86/boot/Makefile
and audited the rest of the kernel for similar usage.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
The variable AFLAGS is a well-known variable and its usage by
kbuild may result in unexpected behaviour.
On top of that, several people over time have asked for a way to
pass in additional flags to gcc.
This patch replaces the use of AFLAGS with KBUILD_AFLAGS all over
the tree.
The patch was tested on the following architectures:
alpha, arm, i386, x86_64, mips, sparc, sparc64, ia64, m68k, s390
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
The variable CFLAGS is a well-known variable and its usage by
kbuild may result in unexpected behaviour.
On top of that, several people over time have asked for a way to
pass in additional flags to gcc.
This patch replaces the use of CFLAGS with KBUILD_CFLAGS all over
the tree, enabling one to use:
make CFLAGS=...
to specify additional gcc command-line options.
One use case is when trying to find gcc bugs, but other use cases
have been requested too.
The patch was tested on the following architectures:
alpha, arm, i386, x86_64, mips, sparc, sparc64, ia64, m68k
The test was simply to do a defconfig build, apply the patch, and
check that nothing got rebuilt.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>