linux/arch/powerpc/Kconfig

1215 lines
33 KiB
Plaintext
Raw Normal View History

source "arch/powerpc/platforms/Kconfig.cputype"
config PPC32
bool
default y if !PPC64
config 32BIT
bool
default y if PPC32
config 64BIT
bool
default y if PPC64
config ARCH_PHYS_ADDR_T_64BIT
def_bool PPC64 || PHYS_64BIT
config ARCH_DMA_ADDR_T_64BIT
def_bool ARCH_PHYS_ADDR_T_64BIT
config MMU
bool
default y
powerpc/mm: Add support for runtime configuration of ASLR limits Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two sysctls that allow a user to configure the number of bits of randomness used for ASLR. Because of the way the Kconfig for ARCH_MMAP_RND_BITS is defined, we have to construct at least the MIN value in Kconfig, vs in a header which would be more natural. Given that we just go ahead and do it all in Kconfig. At least according to the code (the documentation makes no mention of it), the value is defined as the number of bits of randomisation *of the page*, not the address. This makes some sense, with larger page sizes more of the low bits are forced to zero, which would reduce the randomisation if we didn't take the PAGE_SIZE into account. However it does mean the min/max values have to change depending on the PAGE_SIZE in order to actually limit the amount of address space consumed by the randomisation. The result of that is that we have to define the default values based on both 32-bit vs 64-bit, but also the configured PAGE_SIZE. Furthermore now that we have 128TB address space support on Book3S, we also have to take that into account. Finally we can wire up the value in arch_mmap_rnd(). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Tested-by: Bhupesh Sharma <bhsharma@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
2017-04-20 14:36:20 +00:00
config ARCH_MMAP_RND_BITS_MAX
# On Book3S 64, the default virtual address space for 64-bit processes
# is 2^47 (128TB). As a maximum, allow randomisation to consume up to
# 32T of address space (2^45), which should ensure a reasonable gap
# between bottom-up and top-down allocations for applications that
# consume "normal" amounts of address space. Book3S 64 only supports 64K
# and 4K page sizes.
default 29 if PPC_BOOK3S_64 && PPC_64K_PAGES # 29 = 45 (32T) - 16 (64K)
default 33 if PPC_BOOK3S_64 # 33 = 45 (32T) - 12 (4K)
#
# On all other 64-bit platforms (currently only Book3E), the virtual
# address space is 2^46 (64TB). Allow randomisation to consume up to 16T
# of address space (2^44). Only 4K page sizes are supported.
default 32 if 64BIT # 32 = 44 (16T) - 12 (4K)
#
# For 32-bit, use the compat values, as they're the same.
default ARCH_MMAP_RND_COMPAT_BITS_MAX
config ARCH_MMAP_RND_BITS_MIN
# Allow randomisation to consume up to 1GB of address space (2^30).
default 14 if 64BIT && PPC_64K_PAGES # 14 = 30 (1GB) - 16 (64K)
default 18 if 64BIT # 18 = 30 (1GB) - 12 (4K)
#
# For 32-bit, use the compat values, as they're the same.
default ARCH_MMAP_RND_COMPAT_BITS_MIN
config ARCH_MMAP_RND_COMPAT_BITS_MAX
# Total virtual address space for 32-bit processes is 2^31 (2GB).
# Allow randomisation to consume up to 512MB of address space (2^29).
default 11 if PPC_256K_PAGES # 11 = 29 (512MB) - 18 (256K)
default 13 if PPC_64K_PAGES # 13 = 29 (512MB) - 16 (64K)
default 15 if PPC_16K_PAGES # 15 = 29 (512MB) - 14 (16K)
default 17 # 17 = 29 (512MB) - 12 (4K)
config ARCH_MMAP_RND_COMPAT_BITS_MIN
# Total virtual address space for 32-bit processes is 2^31 (2GB).
# Allow randomisation to consume up to 8MB of address space (2^23).
default 5 if PPC_256K_PAGES # 5 = 23 (8MB) - 18 (256K)
default 7 if PPC_64K_PAGES # 7 = 23 (8MB) - 16 (64K)
default 9 if PPC_16K_PAGES # 9 = 23 (8MB) - 14 (16K)
default 11 # 11 = 23 (8MB) - 12 (4K)
config HAVE_SETUP_PER_CPU_AREA
percpu: use dynamic percpu allocator as the default percpu allocator This patch makes most !CONFIG_HAVE_SETUP_PER_CPU_AREA archs use dynamic percpu allocator. The first chunk is allocated using embedding helper and 8k is reserved for modules. This ensures that the new allocator behaves almost identically to the original allocator as long as static percpu variables are concerned, so it shouldn't introduce much breakage. s390 and alpha use custom SHIFT_PERCPU_PTR() to work around addressing range limit the addressing model imposes. Unfortunately, this breaks if the address is specified using a variable, so for now, the two archs aren't converted. The following architectures are affected by this change. * sh * arm * cris * mips * sparc(32) * blackfin * avr32 * parisc (broken, under investigation) * m32r * powerpc(32) As this change makes the dynamic allocator the default one, CONFIG_HAVE_DYNAMIC_PER_CPU_AREA is replaced with its invert - CONFIG_HAVE_LEGACY_PER_CPU_AREA, which is added to yet-to-be converted archs. These archs implement their own setup_per_cpu_areas() and the conversion is not trivial. * powerpc(64) * sparc(64) * ia64 * alpha * s390 Boot and batch alloc/free tests on x86_32 with debug code (x86_32 doesn't use default first chunk initialization). Compile tested on sparc(32), powerpc(32), arm and alpha. Kyle McMartin reported that this change breaks parisc. The problem is still under investigation and he is okay with pushing this patch forward and fixing parisc later. [ Impact: use dynamic allocator for most archs w/o custom percpu setup ] Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Reviewed-by: Christoph Lameter <cl@linux.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Mikael Starvik <starvik@axis.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Bryan Wu <cooloney@kernel.org> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Matthew Wilcox <matthew@wil.cx> Cc: Grant Grundler <grundler@parisc-linux.org> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@elte.hu>
2009-03-30 10:07:44 +00:00
def_bool PPC64
config NEED_PER_CPU_EMBED_FIRST_CHUNK
def_bool PPC64
config NR_IRQS
int "Number of virtual interrupt numbers"
range 32 32768
default "512"
help
This defines the number of virtual interrupt numbers the kernel
can manage. Virtual interrupt numbers are what you see in
/proc/interrupts. If you configure your system to have too few,
drivers will fail to load or worse - handle with care.
config NMI_IPI
bool
depends on SMP && (DEBUGGER || KEXEC_CORE)
default y
config STACKTRACE_SUPPORT
bool
default y
config TRACE_IRQFLAGS_SUPPORT
bool
default y
config LOCKDEP_SUPPORT
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
config RWSEM_XCHGADD_ALGORITHM
bool
default y
config GENERIC_LOCKBREAK
bool
default y
depends on SMP && PREEMPT
config ARCH_HAS_ILOG2_U32
bool
default y
config ARCH_HAS_ILOG2_U64
bool
default y if 64BIT
config GENERIC_HWEIGHT
bool
default y
config ARCH_HAS_DMA_SET_COHERENT_MASK
bool
config PPC
bool
default y
#
# Please keep this list sorted alphabetically.
#
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_DMA_SET_COHERENT_MASK
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_WEAK_RELEASE_ACQUIRE
select BINFMT_ELF
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
select DCACHE_WORD_ACCESS if PPC64 && CPU_LITTLE_ENDIAN
select EDAC_ATOMIC_SCRUB
select EDAC_SUPPORT
select GENERIC_ATOMIC64 if PPC32
select GENERIC_CLOCKEVENTS
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_CMOS_UPDATE
select GENERIC_CPU_AUTOPROBE
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select GENERIC_SMP_IDLE_THREAD
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select GENERIC_TIME_VSYSCALL_OLD
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KGDB
powerpc/mm: Add support for runtime configuration of ASLR limits Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two sysctls that allow a user to configure the number of bits of randomness used for ASLR. Because of the way the Kconfig for ARCH_MMAP_RND_BITS is defined, we have to construct at least the MIN value in Kconfig, vs in a header which would be more natural. Given that we just go ahead and do it all in Kconfig. At least according to the code (the documentation makes no mention of it), the value is defined as the number of bits of randomisation *of the page*, not the address. This makes some sense, with larger page sizes more of the low bits are forced to zero, which would reduce the randomisation if we didn't take the PAGE_SIZE into account. However it does mean the min/max values have to change depending on the PAGE_SIZE in order to actually limit the amount of address space consumed by the randomisation. The result of that is that we have to define the default values based on both 32-bit vs 64-bit, but also the configured PAGE_SIZE. Furthermore now that we have 128TB address space support on Book3S, we also have to take that into account. Finally we can wire up the value in arch_mmap_rnd(). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Tested-by: Bhupesh Sharma <bhsharma@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
2017-04-20 14:36:20 +00:00
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select HAVE_CBPF_JIT if !PPC64
select HAVE_CONTEXT_TRACKING if PPC64
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_API_DEBUG
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL
select HAVE_EBPF_JIT if PPC64
select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS
select HAVE_GENERIC_RCU_GUP
select HAVE_HW_BREAKPOINT if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
select HAVE_IDE
select HAVE_IOREMAP_PROT
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_KERNEL_GZIP
select HAVE_KPROBES
select HAVE_KPROBES_ON_FTRACE
select HAVE_KRETPROBES
select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI if PERF_EVENTS
select HAVE_OPROFILE
select HAVE_OPTPROBES if PPC64
perf: Do the big rename: Performance Counters -> Performance Events Bye-bye Performance Counters, welcome Performance Events! In the past few months the perfcounters subsystem has grown out its initial role of counting hardware events, and has become (and is becoming) a much broader generic event enumeration, reporting, logging, monitoring, analysis facility. Naming its core object 'perf_counter' and naming the subsystem 'perfcounters' has become more and more of a misnomer. With pending code like hw-breakpoints support the 'counter' name is less and less appropriate. All in one, we've decided to rename the subsystem to 'performance events' and to propagate this rename through all fields, variables and API names. (in an ABI compatible fashion) The word 'event' is also a bit shorter than 'counter' - which makes it slightly more convenient to write/handle as well. Thanks goes to Stephane Eranian who first observed this misnomer and suggested a rename. User-space tooling and ABI compatibility is not affected - this patch should be function-invariant. (Also, defconfigs were not touched to keep the size down.) This patch has been generated via the following script: FILES=$(find * -type f | grep -vE 'oprofile|[^K]config') sed -i \ -e 's/PERF_EVENT_/PERF_RECORD_/g' \ -e 's/PERF_COUNTER/PERF_EVENT/g' \ -e 's/perf_counter/perf_event/g' \ -e 's/nb_counters/nb_events/g' \ -e 's/swcounter/swevent/g' \ -e 's/tpcounter_event/tp_event/g' \ $FILES for N in $(find . -name perf_counter.[ch]); do M=$(echo $N | sed 's/perf_counter/perf_event/g') mv $N $M done FILES=$(find . -name perf_event.*) sed -i \ -e 's/COUNTER_MASK/REG_MASK/g' \ -e 's/COUNTER/EVENT/g' \ -e 's/\<event\>/event_id/g' \ -e 's/counter/event/g' \ -e 's/Counter/Event/g' \ $FILES ... to keep it as correct as possible. This script can also be used by anyone who has pending perfcounters patches - it converts a Linux kernel tree over to the new naming. We tried to time this change to the point in time where the amount of pending patches is the smallest: the end of the merge window. Namespace clashes were fixed up in a preparatory patch - and some stylistic fallout will be fixed up in a subsequent patch. ( NOTE: 'counters' are still the proper terminology when we deal with hardware registers - and these sed scripts are a bit over-eager in renaming them. I've undone some of that, but in case there's something left where 'counter' would be better than 'event' we can undo that on an individual basis instead of touching an otherwise nicely automated patch. ) Suggested-by: Stephane Eranian <eranian@google.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Paul Mackerras <paulus@samba.org> Reviewed-by: Arjan van de Ven <arjan@linux.intel.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Howells <dhowells@redhat.com> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: <linux-arch@vger.kernel.org> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-21 10:02:48 +00:00
select HAVE_PERF_EVENTS
select HAVE_PERF_EVENTS_NMI if PPC64
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_RCU_TABLE_FREE if SMP
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
2012-09-28 05:01:03 +00:00
select MODULES_USE_ELF_RELA
select NO_BOOTMEM
select OF
select OF_EARLY_FLATTREE
select OF_RESERVED_MEM
select OLD_SIGACTION if PPC32
select OLD_SIGSUSPEND
select SPARSE_IRQ
select SYSCTL_EXCEPTION_TRACE
select VIRT_TO_BUS if !PPC64
#
# Please keep this list sorted alphabetically.
#
config GENERIC_CSUM
def_bool n
config EARLY_PRINTK
bool
default y
config PANIC_TIMEOUT
int
default 180
config COMPAT
bool
default y if PPC64
select COMPAT_BINFMT_ELF
2012-03-15 17:13:38 +00:00
select ARCH_WANT_OLD_COMPAT_IPC
select COMPAT_OLD_SIGACTION
config SYSVIPC_COMPAT
bool
depends on COMPAT && SYSVIPC
default y
# All PPC32s use generic nvram driver through ppc_md
config GENERIC_NVRAM
bool
default y if PPC32
config SCHED_OMIT_FRAME_POINTER
bool
default y
config ARCH_MAY_HAVE_PC_FDC
bool
default PCI
config PPC_UDBG_16550
bool
default n
config GENERIC_TBSYNC
bool
default y if PPC32 && SMP
default n
config AUDIT_ARCH
bool
default y
config GENERIC_BUG
bool
default y
depends on BUG
config SYS_SUPPORTS_APM_EMULATION
default y if PMAC_APM_EMU
bool
config EPAPR_BOOT
bool
help
Used to allow a board to specify it wants an ePAPR compliant wrapper.
default n
config DEFAULT_UIMAGE
bool
help
Used to allow a board to specify it wants a uImage built by default
default n
config ARCH_HIBERNATION_POSSIBLE
bool
default y
config ARCH_SUSPEND_POSSIBLE
def_bool y
depends on ADB_PMU || PPC_EFIKA || PPC_LITE5200 || PPC_83xx || \
(PPC_85xx && !PPC_E500MC) || PPC_86xx || PPC_PSERIES \
|| 44x || 40x
config PPC_DCR_NATIVE
bool
default n
config PPC_DCR_MMIO
bool
default n
config PPC_DCR
bool
depends on PPC_DCR_NATIVE || PPC_DCR_MMIO
default y
config PPC_OF_PLATFORM_PCI
bool
depends on PCI
depends on PPC64 # not supported on 32 bits yet
default n
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
depends on PPC32 || PPC_STD_MMU_64
def_bool y
config ARCH_SUPPORTS_UPROBES
def_bool y
config PPC_ADV_DEBUG_REGS
bool
depends on 40x || BOOKE
default y
config PPC_ADV_DEBUG_IACS
int
depends on PPC_ADV_DEBUG_REGS
default 4 if 44x
default 2
config PPC_ADV_DEBUG_DACS
int
depends on PPC_ADV_DEBUG_REGS
default 2
config PPC_ADV_DEBUG_DVCS
int
depends on PPC_ADV_DEBUG_REGS
default 2 if 44x
default 0
config PPC_ADV_DEBUG_DAC_RANGE
bool
depends on PPC_ADV_DEBUG_REGS && 44x
default y
config PPC_EMULATE_SSTEP
bool
default y if KPROBES || UPROBES || XMON || HAVE_HW_BREAKPOINT
config ZONE_DMA32
bool
default y if PPC64
config PGTABLE_LEVELS
int
default 2 if !PPC64
default 3 if PPC_64K_PAGES && !PPC_BOOK3S_64
default 4
source "init/Kconfig"
container freezer: implement freezer cgroup subsystem This patch implements a new freezer subsystem in the control groups framework. It provides a way to stop and resume execution of all tasks in a cgroup by writing in the cgroup filesystem. The freezer subsystem in the container filesystem defines a file named freezer.state. Writing "FROZEN" to the state file will freeze all tasks in the cgroup. Subsequently writing "RUNNING" will unfreeze the tasks in the cgroup. Reading will return the current state. * Examples of usage : # mkdir /containers/freezer # mount -t cgroup -ofreezer freezer /containers # mkdir /containers/0 # echo $some_pid > /containers/0/tasks to get status of the freezer subsystem : # cat /containers/0/freezer.state RUNNING to freeze all tasks in the container : # echo FROZEN > /containers/0/freezer.state # cat /containers/0/freezer.state FREEZING # cat /containers/0/freezer.state FROZEN to unfreeze all tasks in the container : # echo RUNNING > /containers/0/freezer.state # cat /containers/0/freezer.state RUNNING This is the basic mechanism which should do the right thing for user space task in a simple scenario. It's important to note that freezing can be incomplete. In that case we return EBUSY. This means that some tasks in the cgroup are busy doing something that prevents us from completely freezing the cgroup at this time. After EBUSY, the cgroup will remain partially frozen -- reflected by freezer.state reporting "FREEZING" when read. The state will remain "FREEZING" until one of these things happens: 1) Userspace cancels the freezing operation by writing "RUNNING" to the freezer.state file 2) Userspace retries the freezing operation by writing "FROZEN" to the freezer.state file (writing "FREEZING" is not legal and returns EIO) 3) The tasks that blocked the cgroup from entering the "FROZEN" state disappear from the cgroup's set of tasks. [akpm@linux-foundation.org: coding-style fixes] [akpm@linux-foundation.org: export thaw_process] Signed-off-by: Cedric Le Goater <clg@fr.ibm.com> Signed-off-by: Matt Helsley <matthltc@us.ibm.com> Acked-by: Serge E. Hallyn <serue@us.ibm.com> Tested-by: Matt Helsley <matthltc@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-19 03:27:21 +00:00
source "kernel/Kconfig.freezer"
source "arch/powerpc/sysdev/Kconfig"
source "arch/powerpc/platforms/Kconfig"
menu "Kernel options"
config HIGHMEM
bool "High memory support"
depends on PPC32
source kernel/Kconfig.hz
source kernel/Kconfig.preempt
source "fs/Kconfig.binfmt"
config HUGETLB_PAGE_SIZE_VARIABLE
bool
depends on HUGETLB_PAGE
default y
config MATH_EMULATION
bool "Math emulation"
depends on 4xx || 8xx || PPC_MPC832x || BOOKE
---help---
Some PowerPC chips designed for embedded applications do not have
a floating-point unit and therefore do not implement the
floating-point instructions in the PowerPC instruction set. If you
say Y here, the kernel will include code to emulate a floating-point
unit, which will allow programs that use floating-point
instructions to run.
This is also useful to emulate missing (optional) instructions
such as fsqrt on cores that do have an FPU but do not implement
them (such as Freescale BookE).
choice
prompt "Math emulation options"
default MATH_EMULATION_FULL
depends on MATH_EMULATION
config MATH_EMULATION_FULL
bool "Emulate all the floating point instructions"
---help---
Select this option will enable the kernel to support to emulate
all the floating point instructions. If your SoC doesn't have
a FPU, you should select this.
config MATH_EMULATION_HW_UNIMPLEMENTED
bool "Just emulate the FPU unimplemented instructions"
---help---
Select this if you know there does have a hardware FPU on your
SoC, but some floating point instructions are not implemented by that.
endchoice
config PPC_TRANSACTIONAL_MEM
bool "Transactional Memory support for POWERPC"
depends on PPC_BOOK3S_64
depends on SMP
select ALTIVEC
select VSX
default n
---help---
Support user-mode Transactional Memory on POWERPC.
config DISABLE_MPROFILE_KERNEL
bool "Disable use of mprofile-kernel for kernel tracing"
depends on PPC64 && CPU_LITTLE_ENDIAN
default y
help
Selecting this options disables use of the mprofile-kernel ABI for
kernel tracing. That will cause options such as live patching
(CONFIG_LIVEPATCH) which depend on CONFIG_DYNAMIC_FTRACE_WITH_REGS to
be disabled also.
If you have a toolchain which supports mprofile-kernel, then you can
disable this. Otherwise leave it enabled. If you're not sure, say
"Y".
config MPROFILE_KERNEL
depends on PPC64 && CPU_LITTLE_ENDIAN
def_bool !DISABLE_MPROFILE_KERNEL
config USE_THIN_ARCHIVES
bool "Build the kernel using thin archives"
default n
select THIN_ARCHIVES
help
Build the kernel using thin archives.
If you're unsure say N.
config IOMMU_HELPER
def_bool PPC64
config SWIOTLB
bool "SWIOTLB support"
default n
select IOMMU_HELPER
---help---
Support for IO bounce buffering for systems without an IOMMU.
This allows us to DMA to the full physical address space on
platforms where the size of a physical address is larger
than the bus address. Not all platforms support this.
config HOTPLUG_CPU
bool "Support for enabling/disabling CPUs"
depends on SMP && (PPC_PSERIES || \
PPC_PMAC || PPC_POWERNV || FSL_SOC_BOOKE)
---help---
Say Y here to be able to disable and re-enable individual
CPUs at runtime on SMP machines.
Say N if you are unsure.
config ARCH_CPU_PROBE_RELEASE
def_bool y
depends on HOTPLUG_CPU
config ARCH_ENABLE_MEMORY_HOTPLUG
def_bool y
config ARCH_HAS_WALK_MEMORY
def_bool y
config ARCH_ENABLE_MEMORY_HOTREMOVE
def_bool y
config PPC64_SUPPORTS_MEMORY_FAILURE
bool "Add support for memory hwpoison"
depends on PPC_BOOK3S_64
default "y" if PPC_POWERNV
select ARCH_SUPPORTS_MEMORY_FAILURE
config KEXEC
bool "kexec system call"
depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP)) || PPC_BOOK3E
2015-09-09 22:38:55 +00:00
select KEXEC_CORE
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
but it is independent of the system firmware. And like a reboot
you can start any kernel with it, not just Linux.
The name comes from the similarity to the exec system call.
It is an ongoing process to be certain the hardware in a machine
is properly shutdown, so do not be surprised if this code does not
initially work for you. As of this writing the exact hardware
interface is strongly in flux, so no good recommendation can be
made.
config KEXEC_FILE
bool "kexec file based system call"
select KEXEC_CORE
powerpc: ima: get the kexec buffer passed by the previous kernel Patch series "ima: carry the measurement list across kexec", v8. The TPM PCRs are only reset on a hard reboot. In order to validate a TPM's quote after a soft reboot (eg. kexec -e), the IMA measurement list of the running kernel must be saved and then restored on the subsequent boot, possibly of a different architecture. The existing securityfs binary_runtime_measurements file conveniently provides a serialized format of the IMA measurement list. This patch set serializes the measurement list in this format and restores it. Up to now, the binary_runtime_measurements was defined as architecture native format. The assumption being that userspace could and would handle any architecture conversions. With the ability of carrying the measurement list across kexec, possibly from one architecture to a different one, the per boot architecture information is lost and with it the ability of recalculating the template digest hash. To resolve this problem, without breaking the existing ABI, this patch set introduces the boot command line option "ima_canonical_fmt", which is arbitrarily defined as little endian. The need for this boot command line option will be limited to the existing version 1 format of the binary_runtime_measurements. Subsequent formats will be defined as canonical format (eg. TPM 2.0 support for larger digests). A simplified method of Thiago Bauermann's "kexec buffer handover" patch series for carrying the IMA measurement list across kexec is included in this patch set. The simplified method requires all file measurements be taken prior to executing the kexec load, as subsequent measurements will not be carried across the kexec and restored. This patch (of 10): The IMA kexec buffer allows the currently running kernel to pass the measurement list via a kexec segment to the kernel that will be kexec'd. The second kernel can check whether the previous kernel sent the buffer and retrieve it. This is the architecture-specific part which enables IMA to receive the measurement list passed by the previous kernel. It will be used in the next patch. The change in machine_kexec_64.c is to factor out the logic of removing an FDT memory reservation so that it can be used by remove_ima_buffer. Link: http://lkml.kernel.org/r/1480554346-29071-2-git-send-email-zohar@linux.vnet.ibm.com Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Andreas Steffen <andreas.steffen@strongswan.org> Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com> Cc: Josh Sklar <sklar@linux.vnet.ibm.com> Cc: Dave Young <dyoung@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Baoquan He <bhe@redhat.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Stewart Smith <stewart@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-20 00:22:32 +00:00
select HAVE_IMA_KEXEC
select BUILD_BIN2C
depends on PPC64
depends on CRYPTO=y
depends on CRYPTO_SHA256=y
help
This is a new version of the kexec system call. This call is
file based and takes in file descriptors as system call arguments
for kernel and initramfs as opposed to a list of segments as is the
case for the older kexec call.
config RELOCATABLE
bool "Build a relocatable kernel"
depends on PPC64 || (FLATMEM && (44x || FSL_BOOKE))
select NONSTATIC_KERNEL
modversions: treat symbol CRCs as 32 bit quantities The modversion symbol CRCs are emitted as ELF symbols, which allows us to easily populate the kcrctab sections by relying on the linker to associate each kcrctab slot with the correct value. This has a couple of downsides: - Given that the CRCs are treated as memory addresses, we waste 4 bytes for each CRC on 64 bit architectures, - On architectures that support runtime relocation, a R_<arch>_RELATIVE relocation entry is emitted for each CRC value, which identifies it as a quantity that requires fixing up based on the actual runtime load offset of the kernel. This results in corrupted CRCs unless we explicitly undo the fixup (and this is currently being handled in the core module code) - Such runtime relocation entries take up 24 bytes of __init space each, resulting in a x8 overhead in [uncompressed] kernel size for CRCs. Switching to explicit 32 bit values on 64 bit architectures fixes most of these issues, given that 32 bit values are not treated as quantities that require fixing up based on the actual runtime load offset. Note that on some ELF64 architectures [such as PPC64], these 32-bit values are still emitted as [absolute] runtime relocatable quantities, even if the value resolves to a build time constant. Since relative relocations are always resolved at build time, this patch enables MODULE_REL_CRCS on powerpc when CONFIG_RELOCATABLE=y, which turns the absolute CRC references into relative references into .rodata where the actual CRC value is stored. So redefine all CRC fields and variables as u32, and redefine the __CRC_SYMBOL() macro for 64 bit builds to emit the CRC reference using inline assembler (which is necessary since 64-bit C code cannot use 32-bit types to hold memory addresses, even if they are ultimately resolved using values that do not exceed 0xffffffff). To avoid potential problems with legacy 32-bit architectures using legacy toolchains, the equivalent C definition of the kcrctab entry is retained for 32-bit architectures. Note that this mostly reverts commit d4703aefdbc8 ("module: handle ppc64 relocating kcrctabs when CONFIG_RELOCATABLE=y") Acked-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-02-03 09:54:06 +00:00
select MODULE_REL_CRCS if MODVERSIONS
help
This builds a kernel image that is capable of running at the
location the kernel is loaded at. For ppc32, there is no any
alignment restrictions, and this feature is a superset of
DYNAMIC_MEMSTART and hence overrides it. For ppc64, we should use
16k-aligned base address. The kernel is linked as a
position-independent executable (PIE) and contains dynamic relocations
which are processed early in the bootup process.
One use is for the kexec on panic case where the recovery kernel
must live at a different physical address than the primary
kernel.
Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
it has been loaded at and the compile time physical addresses
CONFIG_PHYSICAL_START is ignored. However CONFIG_PHYSICAL_START
setting can still be useful to bootwrappers that need to know the
load address of the kernel (eg. u-boot/mkimage).
config RELOCATABLE_TEST
bool "Test relocatable kernel"
depends on (PPC64 && RELOCATABLE)
default n
help
This runs the relocatable kernel at the address it was initially
loaded at, which tends to be non-zero and therefore test the
relocation code.
config CRASH_DUMP
bool "Build a dump capture kernel"
depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
select RELOCATABLE if PPC64 || 44x || FSL_BOOKE
help
Build a kernel suitable for use as a dump capture kernel.
The same kernel binary can be used as production kernel and dump
capture kernel.
config FA_DUMP
bool "Firmware-assisted dump"
depends on PPC64 && PPC_RTAS
select CRASH_CORE
select CRASH_DUMP
help
A robust mechanism to get reliable kernel crash dump with
assistance from firmware. This approach does not use kexec,
instead firmware assists in booting the capture kernel
while preserving memory contents. Firmware-assisted dump
is meant to be a kdump replacement offering robustness and
speed not possible without system firmware assistance.
If unsure, say "N"
config IRQ_ALL_CPUS
bool "Distribute interrupts on all CPUs by default"
depends on SMP
help
This option gives the kernel permission to distribute IRQs across
multiple CPUs. Saying N here will route all IRQs to the first
CPU. Generally saying Y is safe, although some problems have been
reported with SMP Power Macintoshes with this option enabled.
config NUMA
bool "NUMA support"
depends on PPC64
default y if SMP && PPC_PSERIES
[PATCH] Configurable NODES_SHIFT Current implementations define NODES_SHIFT in include/asm-xxx/numnodes.h for each arch. Its definition is sometimes configurable. Indeed, ia64 defines 5 NODES_SHIFT values in the current git tree. But it looks a bit messy. SGI-SN2(ia64) system requires 1024 nodes, and the number of nodes already has been changeable by config. Suitable node's number may be changed in the future even if it is other architecture. So, I wrote configurable node's number. This patch set defines just default value for each arch which needs multi nodes except ia64. But, it is easy to change to configurable if necessary. On ia64 the number of nodes can be already configured in generic ia64 and SN2 config. But, NODES_SHIFT is defined for DIG64 and HP'S machine too. So, I changed it so that all platforms can be configured via CONFIG_NODES_SHIFT. It would be simpler. See also: http://marc.theaimsgroup.com/?l=linux-kernel&m=114358010523896&w=2 Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@muc.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Richard Henderson <rth@twiddle.net> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jack Steiner <steiner@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-04-11 05:53:53 +00:00
config NODES_SHIFT
int
default "8" if PPC64
[PATCH] Configurable NODES_SHIFT Current implementations define NODES_SHIFT in include/asm-xxx/numnodes.h for each arch. Its definition is sometimes configurable. Indeed, ia64 defines 5 NODES_SHIFT values in the current git tree. But it looks a bit messy. SGI-SN2(ia64) system requires 1024 nodes, and the number of nodes already has been changeable by config. Suitable node's number may be changed in the future even if it is other architecture. So, I wrote configurable node's number. This patch set defines just default value for each arch which needs multi nodes except ia64. But, it is easy to change to configurable if necessary. On ia64 the number of nodes can be already configured in generic ia64 and SN2 config. But, NODES_SHIFT is defined for DIG64 and HP'S machine too. So, I changed it so that all platforms can be configured via CONFIG_NODES_SHIFT. It would be simpler. See also: http://marc.theaimsgroup.com/?l=linux-kernel&m=114358010523896&w=2 Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@muc.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Richard Henderson <rth@twiddle.net> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jack Steiner <steiner@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-04-11 05:53:53 +00:00
default "4"
depends on NEED_MULTIPLE_NODES
config USE_PERCPU_NUMA_NODE_ID
def_bool y
depends on NUMA
powerpc/numa: Enable CONFIG_HAVE_MEMORYLESS_NODES Based off fd1197f1 for ia64, enable CONFIG_HAVE_MEMORYLESS_NODES if NUMA. Initialize the local memory node in start_secondary. With this commit and the preceding to enable CONFIG_USER_PERCPU_NUMA_NODE_ID, which is a prerequisite, in a PowerKVM guest with the following topology: numactl --hardware available: 3 nodes (0-2) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 node 0 size: 1998 MB node 0 free: 521 MB node 1 cpus: 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 node 1 size: 0 MB node 1 free: 0 MB node 2 cpus: node 2 size: 2039 MB node 2 free: 1739 MB node distances: node 0 1 2 0: 10 40 40 1: 40 10 40 2: 40 40 10 the unreclaimable slab is reduced by close to 130M: Before: Slab: 418176 kB SReclaimable: 26624 kB SUnreclaim: 391552 kB After: Slab: 298944 kB SReclaimable: 31744 kB SUnreclaim: 267200 kB Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-05-16 23:41:20 +00:00
config HAVE_MEMORYLESS_NODES
def_bool y
depends on NUMA
config ARCH_SELECT_MEMORY_MODEL
def_bool y
depends on PPC64
config ARCH_FLATMEM_ENABLE
def_bool y
depends on (PPC64 && !NUMA) || PPC32
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on PPC64
select SPARSEMEM_VMEMMAP_ENABLE
config ARCH_SPARSEMEM_DEFAULT
def_bool y
depends on PPC_BOOK3S_64
config SYS_SUPPORTS_HUGETLBFS
bool
config ILLEGAL_POINTER_VALUE
hex
# This is roughly half way between the top of user space and the bottom
# of kernel space, which seems about as good as we can get.
default 0x5deadbeef0000000 if PPC64
default 0
source "mm/Kconfig"
config ARCH_MEMORY_PROBE
def_bool y
depends on MEMORY_HOTPLUG
# Some NUMA nodes have memory ranges that span
# other nodes. Even though a pfn is valid and
# between a node's start and end pfns, it may not
# reside on that node. See memmap_init_zone()
# for details.
config NODES_SPAN_OTHER_NODES
def_bool y
depends on NEED_MULTIPLE_NODES
powerpc/44x: Support for 256KB PAGE_SIZE This patch adds support for 256KB pages on ppc44x-based boards. For simplification of implementation with 256KB pages we still assume 2-level paging. As a side effect this leads to wasting extra memory space reserved for PTE tables: only 1/4 of pages allocated for PTEs are actually used. But this may be an acceptable trade-off to achieve the high performance we have with big PAGE_SIZEs in some applications (e.g. RAID). Also with 256KB PAGE_SIZE we increase THREAD_SIZE up to 32KB to minimize the risk of stack overflows in the cases of on-stack arrays, which size depends on the page size (e.g. multipage BIOs, NTFS, etc.). With 256KB PAGE_SIZE we need to decrease the PKMAP_ORDER at least down to 9, otherwise all high memory (2 ^ 10 * PAGE_SIZE == 256MB) we'll be occupied by PKMAP addresses leaving no place for vmalloc. We do not separate PKMAP_ORDER for 256K from 16K/64K PAGE_SIZE here; actually that value of 10 in support for 16K/64K had been selected rather intuitively. Thus now for all cases of PAGE_SIZE on ppc44x (including the default, 4KB, one) we have 512 pages for PKMAP. Because ELF standard supports only page sizes up to 64K, then you should use binutils later than 2.17.50.0.3 with '-zmax-page-size' set to 256K for building applications, which are to be run with the 256KB-page sized kernel. If using the older binutils, then you should patch them like follows: --- binutils/bfd/elf32-ppc.c.orig +++ binutils/bfd/elf32-ppc.c -#define ELF_MAXPAGESIZE 0x10000 +#define ELF_MAXPAGESIZE 0x40000 One more restriction we currently have with 256KB page sizes is inability to use shmem safely, so, for now, the 256KB is available only if you turn the CONFIG_SHMEM option off (another variant is to use BROKEN). Though, if you need shmem with 256KB pages, you can always remove the !SHMEM dependency in 'config PPC_256K_PAGES', and use the workaround available here: http://lkml.org/lkml/2008/12/19/20 Signed-off-by: Yuri Tikhonov <yur@emcraft.com> Signed-off-by: Ilya Yanok <yanok@emcraft.com> Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2009-01-29 01:40:44 +00:00
config STDBINUTILS
bool "Using standard binutils settings"
depends on 44x
default y
help
Turning this option off allows you to select 256KB PAGE_SIZE on 44x.
Note, that kernel will be able to run only those applications,
which had been compiled using binutils later than 2.17.50.0.3 with
'-zmax-page-size' set to 256K (the default is 64K). Or, if using
the older binutils, you can patch them with a trivial patch, which
changes the ELF_MAXPAGESIZE definition from 0x10000 to 0x40000.
choice
prompt "Page size"
default PPC_4K_PAGES
help
Select the kernel logical page size. Increasing the page size
will reduce software overhead at each page boundary, allow
hardware prefetch mechanisms to be more effective, and allow
larger dma transfers increasing IO efficiency and reducing
overhead. However the utilization of memory will increase.
For example, each cached file will using a multiple of the
page size to hold its contents and the difference between the
end of file and the end of page is wasted.
Some dedicated systems, such as software raid serving with
accelerated calculations, have shown significant increases.
If you configure a 64 bit kernel for 64k pages but the
processor does not support them, then the kernel will simulate
them with 4k pages, loading them on demand, but with the
reduced software overhead and larger internal fragmentation.
For the 32 bit kernel, a large page option will not be offered
unless it is supported by the configured processor.
If unsure, choose 4K_PAGES.
config PPC_4K_PAGES
bool "4k page size"
select HAVE_ARCH_SOFT_DIRTY if PPC_BOOK3S_64
config PPC_16K_PAGES
bool "16k page size"
depends on 44x || PPC_8xx
config PPC_64K_PAGES
bool "64k page size"
depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
select HAVE_ARCH_SOFT_DIRTY if PPC_BOOK3S_64
powerpc/44x: Support for 256KB PAGE_SIZE This patch adds support for 256KB pages on ppc44x-based boards. For simplification of implementation with 256KB pages we still assume 2-level paging. As a side effect this leads to wasting extra memory space reserved for PTE tables: only 1/4 of pages allocated for PTEs are actually used. But this may be an acceptable trade-off to achieve the high performance we have with big PAGE_SIZEs in some applications (e.g. RAID). Also with 256KB PAGE_SIZE we increase THREAD_SIZE up to 32KB to minimize the risk of stack overflows in the cases of on-stack arrays, which size depends on the page size (e.g. multipage BIOs, NTFS, etc.). With 256KB PAGE_SIZE we need to decrease the PKMAP_ORDER at least down to 9, otherwise all high memory (2 ^ 10 * PAGE_SIZE == 256MB) we'll be occupied by PKMAP addresses leaving no place for vmalloc. We do not separate PKMAP_ORDER for 256K from 16K/64K PAGE_SIZE here; actually that value of 10 in support for 16K/64K had been selected rather intuitively. Thus now for all cases of PAGE_SIZE on ppc44x (including the default, 4KB, one) we have 512 pages for PKMAP. Because ELF standard supports only page sizes up to 64K, then you should use binutils later than 2.17.50.0.3 with '-zmax-page-size' set to 256K for building applications, which are to be run with the 256KB-page sized kernel. If using the older binutils, then you should patch them like follows: --- binutils/bfd/elf32-ppc.c.orig +++ binutils/bfd/elf32-ppc.c -#define ELF_MAXPAGESIZE 0x10000 +#define ELF_MAXPAGESIZE 0x40000 One more restriction we currently have with 256KB page sizes is inability to use shmem safely, so, for now, the 256KB is available only if you turn the CONFIG_SHMEM option off (another variant is to use BROKEN). Though, if you need shmem with 256KB pages, you can always remove the !SHMEM dependency in 'config PPC_256K_PAGES', and use the workaround available here: http://lkml.org/lkml/2008/12/19/20 Signed-off-by: Yuri Tikhonov <yur@emcraft.com> Signed-off-by: Ilya Yanok <yanok@emcraft.com> Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2009-01-29 01:40:44 +00:00
config PPC_256K_PAGES
bool "256k page size"
depends on 44x && !STDBINUTILS
powerpc/44x: Support for 256KB PAGE_SIZE This patch adds support for 256KB pages on ppc44x-based boards. For simplification of implementation with 256KB pages we still assume 2-level paging. As a side effect this leads to wasting extra memory space reserved for PTE tables: only 1/4 of pages allocated for PTEs are actually used. But this may be an acceptable trade-off to achieve the high performance we have with big PAGE_SIZEs in some applications (e.g. RAID). Also with 256KB PAGE_SIZE we increase THREAD_SIZE up to 32KB to minimize the risk of stack overflows in the cases of on-stack arrays, which size depends on the page size (e.g. multipage BIOs, NTFS, etc.). With 256KB PAGE_SIZE we need to decrease the PKMAP_ORDER at least down to 9, otherwise all high memory (2 ^ 10 * PAGE_SIZE == 256MB) we'll be occupied by PKMAP addresses leaving no place for vmalloc. We do not separate PKMAP_ORDER for 256K from 16K/64K PAGE_SIZE here; actually that value of 10 in support for 16K/64K had been selected rather intuitively. Thus now for all cases of PAGE_SIZE on ppc44x (including the default, 4KB, one) we have 512 pages for PKMAP. Because ELF standard supports only page sizes up to 64K, then you should use binutils later than 2.17.50.0.3 with '-zmax-page-size' set to 256K for building applications, which are to be run with the 256KB-page sized kernel. If using the older binutils, then you should patch them like follows: --- binutils/bfd/elf32-ppc.c.orig +++ binutils/bfd/elf32-ppc.c -#define ELF_MAXPAGESIZE 0x10000 +#define ELF_MAXPAGESIZE 0x40000 One more restriction we currently have with 256KB page sizes is inability to use shmem safely, so, for now, the 256KB is available only if you turn the CONFIG_SHMEM option off (another variant is to use BROKEN). Though, if you need shmem with 256KB pages, you can always remove the !SHMEM dependency in 'config PPC_256K_PAGES', and use the workaround available here: http://lkml.org/lkml/2008/12/19/20 Signed-off-by: Yuri Tikhonov <yur@emcraft.com> Signed-off-by: Ilya Yanok <yanok@emcraft.com> Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2009-01-29 01:40:44 +00:00
help
Make the page size 256k.
As the ELF standard only requires alignment to support page
sizes up to 64k, you will need to compile all of your user
space applications with a non-standard binutils settings
(see the STDBINUTILS description for details).
Say N unless you know what you are doing.
endchoice
config THREAD_SHIFT
int "Thread shift" if EXPERT
range 13 15
default "15" if PPC_256K_PAGES
default "14" if PPC64
default "13"
help
Used to define the stack size. The default is almost always what you
want. Only change this if you know what you are doing.
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
range 8 9 if PPC64 && PPC_64K_PAGES
default "9" if PPC64 && PPC_64K_PAGES
range 13 13 if PPC64 && !PPC_64K_PAGES
default "13" if PPC64 && !PPC_64K_PAGES
range 9 64 if PPC32 && PPC_16K_PAGES
default "9" if PPC32 && PPC_16K_PAGES
range 7 64 if PPC32 && PPC_64K_PAGES
default "7" if PPC32 && PPC_64K_PAGES
range 5 64 if PPC32 && PPC_256K_PAGES
default "5" if PPC32 && PPC_256K_PAGES
range 11 64
default "11"
help
The kernel memory allocator divides physically contiguous memory
blocks into "zones", where each zone is a power of two number of
pages. This option selects the largest power of two that the kernel
keeps in the memory allocator. If you need to allocate very large
blocks of physically contiguous memory, then you may need to
increase this value.
This config option is actually maximum order plus one. For example,
a value of 11 means that the largest free memory block is 2^10 pages.
The page size is not necessarily 4KB. For example, on 64-bit
systems, 64KB pages can be enabled via CONFIG_PPC_64K_PAGES. Keep
this in mind when choosing a value for this option.
[POWERPC] Provide a way to protect 4k subpages when using 64k pages Using 64k pages on 64-bit PowerPC systems makes life difficult for emulators that are trying to emulate an ISA, such as x86, which use a smaller page size, since the emulator can no longer use the MMU and the normal system calls for controlling page protections. Of course, the emulator can emulate the MMU by checking and possibly remapping the address for each memory access in software, but that is pretty slow. This provides a facility for such programs to control the access permissions on individual 4k sub-pages of 64k pages. The idea is that the emulator supplies an array of protection masks to apply to a specified range of virtual addresses. These masks are applied at the level where hardware PTEs are inserted into the hardware page table based on the Linux PTEs, so the Linux PTEs are not affected. Note that this new mechanism does not allow any access that would otherwise be prohibited; it can only prohibit accesses that would otherwise be allowed. This new facility is only available on 64-bit PowerPC and only when the kernel is configured for 64k pages. The masks are supplied using a new subpage_prot system call, which takes a starting virtual address and length, and a pointer to an array of protection masks in memory. The array has a 32-bit word per 64k page to be protected; each 32-bit word consists of 16 2-bit fields, for which 0 allows any access (that is otherwise allowed), 1 prevents write accesses, and 2 or 3 prevent any access. Implicit in this is that the regions of the address space that are protected are switched to use 4k hardware pages rather than 64k hardware pages (on machines with hardware 64k page support). In fact the whole process is switched to use 4k hardware pages when the subpage_prot system call is used, but this could be improved in future to switch only the affected segments. The subpage protection bits are stored in a 3 level tree akin to the page table tree. The top level of this tree is stored in a structure that is appended to the top level of the page table tree, i.e., the pgd array. Since it will often only be 32-bit addresses (below 4GB) that are protected, the pointers to the first four bottom level pages are also stored in this structure (each bottom level page contains the protection bits for 1GB of address space), so the protection bits for addresses below 4GB can be accessed with one fewer loads than those for higher addresses. Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-23 21:35:13 +00:00
config PPC_SUBPAGE_PROT
bool "Support setting protections for 4k subpages"
depends on PPC_STD_MMU_64 && PPC_64K_PAGES
[POWERPC] Provide a way to protect 4k subpages when using 64k pages Using 64k pages on 64-bit PowerPC systems makes life difficult for emulators that are trying to emulate an ISA, such as x86, which use a smaller page size, since the emulator can no longer use the MMU and the normal system calls for controlling page protections. Of course, the emulator can emulate the MMU by checking and possibly remapping the address for each memory access in software, but that is pretty slow. This provides a facility for such programs to control the access permissions on individual 4k sub-pages of 64k pages. The idea is that the emulator supplies an array of protection masks to apply to a specified range of virtual addresses. These masks are applied at the level where hardware PTEs are inserted into the hardware page table based on the Linux PTEs, so the Linux PTEs are not affected. Note that this new mechanism does not allow any access that would otherwise be prohibited; it can only prohibit accesses that would otherwise be allowed. This new facility is only available on 64-bit PowerPC and only when the kernel is configured for 64k pages. The masks are supplied using a new subpage_prot system call, which takes a starting virtual address and length, and a pointer to an array of protection masks in memory. The array has a 32-bit word per 64k page to be protected; each 32-bit word consists of 16 2-bit fields, for which 0 allows any access (that is otherwise allowed), 1 prevents write accesses, and 2 or 3 prevent any access. Implicit in this is that the regions of the address space that are protected are switched to use 4k hardware pages rather than 64k hardware pages (on machines with hardware 64k page support). In fact the whole process is switched to use 4k hardware pages when the subpage_prot system call is used, but this could be improved in future to switch only the affected segments. The subpage protection bits are stored in a 3 level tree akin to the page table tree. The top level of this tree is stored in a structure that is appended to the top level of the page table tree, i.e., the pgd array. Since it will often only be 32-bit addresses (below 4GB) that are protected, the pointers to the first four bottom level pages are also stored in this structure (each bottom level page contains the protection bits for 1GB of address space), so the protection bits for addresses below 4GB can be accessed with one fewer loads than those for higher addresses. Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-23 21:35:13 +00:00
help
This option adds support for a system call to allow user programs
to set access permissions (read/write, readonly, or no access)
on the 4k subpages of each 64k page.
config PPC_COPRO_BASE
bool
default n
config SCHED_SMT
bool "SMT (Hyperthreading) scheduler support"
depends on PPC64 && SMP
help
SMT scheduler support improves the CPU scheduler's decision making
when dealing with POWER5 cpus at a cost of slightly increased
overhead in some places. If unsure say N here.
config PPC_DENORMALISATION
bool "PowerPC denormalisation exception handling"
depends on PPC_BOOK3S_64
default "y" if PPC_POWERNV
---help---
Add support for handling denormalisation of single precision
values. Useful for bare metal only. If unsure say Y here.
config CMDLINE_BOOL
bool "Default bootloader kernel arguments"
config CMDLINE
string "Initial kernel command string"
depends on CMDLINE_BOOL
default "console=ttyS0,9600 console=tty0 root=/dev/sda2"
help
On some platforms, there is currently no way for the boot loader to
pass arguments to the kernel. For these platforms, you can supply
some command-line options at build time by entering them here. In
most cases you will need to specify the root device here.
config CMDLINE_FORCE
bool "Always use the default kernel command string"
depends on CMDLINE_BOOL
help
Always use the default kernel command string, even if the boot
loader passes other arguments to the kernel.
This is useful if you cannot or don't want to change the
command-line options your boot loader passes to the kernel.
config EXTRA_TARGETS
string "Additional default image types"
help
List additional targets to be built by the bootwrapper here (separated
by spaces). This is useful for targets that depend of device tree
files in the .dts directory.
Targets in this list will be build as part of the default build
target, or when the user does a 'make zImage' or a
'make zImage.initrd'.
If unsure, leave blank
config ARCH_WANTS_FREEZER_CONTROL
def_bool y
depends on ADB_PMU
source kernel/power/Kconfig
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"
depends on PROC_FS
default y
help
This kernel feature is useful for number crunching applications
that may need to compute untrusted bytecode during their
execution. By using pipes or other transports made available to
the process as file descriptors supporting the read/write
syscalls, it's possible to isolate those applications in
their own address space using seccomp. Once seccomp is
enabled via /proc/<pid>/seccomp, it cannot be disabled
and the task is only allowed to execute a few safe syscalls
defined by each seccomp mode.
If unsure, say Y. Only embedded should say N here.
endmenu
config ISA_DMA_API
bool
default PCI
menu "Bus options"
config ISA
bool "Support for ISA-bus hardware"
depends on PPC_CHRP
select PPC_I8259
help
Find out whether you have ISA slots on your motherboard. ISA is the
name of a bus system, i.e. the way the CPU talks to the other stuff
inside your box. If you have an Apple machine, say N here; if you
have an IBM RS/6000 or pSeries machine, say Y. If you have an
embedded board, consult your board documentation.
config ZONE_DMA
bool
default y
config NEED_DMA_MAP_STATE
def_bool (PPC64 || NOT_COHERENT_CACHE)
config NEED_SG_DMA_LENGTH
def_bool y
config GENERIC_ISA_DMA
bool
depends on ISA_DMA_API
default y
config PPC_INDIRECT_PCI
bool
depends on PCI
default y if 40x || 44x
default n
config EISA
bool
config SBUS
bool
config FSL_SOC
bool
config FSL_PCI
bool
select PPC_INDIRECT_PCI
select PCI_QUIRKS
config FSL_PMC
bool
default y
depends on SUSPEND && (PPC_85xx || PPC_86xx)
help
Freescale MPC85xx/MPC86xx power management controller support
(suspend/resume). For MPC83xx see platforms/83xx/suspend.c
config PPC4xx_CPM
bool
default y
depends on SUSPEND && (44x || 40x)
help
PPC4xx Clock Power Management (CPM) support (suspend/resume).
It also enables support for two different idle states (idle-wait
and idle-doze).
config 4xx_SOC
bool
config FSL_LBC
bool "Freescale Local Bus support"
help
Enables reporting of errors from the Freescale local bus
controller. Also contains some common code used by
drivers for specific local bus peripherals.
config FSL_GTM
bool
depends on PPC_83xx || QUICC_ENGINE || CPM2
help
Freescale General-purpose Timers support
# Yes MCA RS/6000s exist but Linux-PPC does not currently support any
config MCA
bool
# Platforms that what PCI turned unconditionally just do select PCI
# in their config node. Platforms that want to choose at config
# time should select PPC_PCI_CHOICE
config PPC_PCI_CHOICE
bool
config PCI
bool "PCI support" if PPC_PCI_CHOICE
default y if !40x && !CPM2 && !8xx && !PPC_83xx \
&& !PPC_85xx && !PPC_86xx && !GAMECUBE_COMMON
default PCI_QSPAN if !4xx && !CPM2 && 8xx
select GENERIC_PCI_IOMAP
help
Find out whether your system includes a PCI bus. PCI is the name of
a bus system, i.e. the way the CPU talks to the other stuff inside
your box. If you say Y here, the kernel will include drivers and
infrastructure code to support PCI bus devices.
config PCI_DOMAINS
def_bool PCI
config PCI_SYSCALL
def_bool PCI
config PCI_QSPAN
bool "QSpan PCI"
depends on !4xx && !CPM2 && 8xx
select PPC_I8259
help
Say Y here if you have a system based on a Motorola 8xx-series
embedded processor with a QSPAN PCI interface, otherwise say N.
config PCI_8260
bool
depends on PCI && 8260
select PPC_INDIRECT_PCI
default y
source "drivers/pci/Kconfig"
source "drivers/pcmcia/Kconfig"
config HAS_RAPIDIO
bool
default n
config RAPIDIO
tristate "RapidIO support"
depends on HAS_RAPIDIO || PCI
help
If you say Y here, the kernel will include drivers and
infrastructure code to support RapidIO interconnect devices.
config FSL_RIO
bool "Freescale Embedded SRIO Controller support"
depends on RAPIDIO = y && HAS_RAPIDIO
default "n"
---help---
Include support for RapidIO controller on Freescale embedded
processors (MPC8548, MPC8641, etc).
source "drivers/rapidio/Kconfig"
endmenu
config NONSTATIC_KERNEL
bool
default n
menu "Advanced setup"
depends on PPC32
config ADVANCED_OPTIONS
bool "Prompt for advanced kernel configuration options"
help
This option will enable prompting for a variety of advanced kernel
configuration options. These options can cause the kernel to not
work if they are set incorrectly, but can be used to optimize certain
aspects of kernel memory management.
Unless you know what you are doing, say N here.
comment "Default settings for advanced configuration options are used"
depends on !ADVANCED_OPTIONS
config LOWMEM_SIZE_BOOL
bool "Set maximum low memory"
depends on ADVANCED_OPTIONS
help
This option allows you to set the maximum amount of memory which
will be used as "low memory", that is, memory which the kernel can
access directly, without having to set up a kernel virtual mapping.
This can be useful in optimizing the layout of kernel virtual
memory.
Say N here unless you know what you are doing.
config LOWMEM_SIZE
hex "Maximum low memory size (in bytes)" if LOWMEM_SIZE_BOOL
default "0x30000000"
config LOWMEM_CAM_NUM_BOOL
bool "Set number of CAMs to use to map low memory"
depends on ADVANCED_OPTIONS && FSL_BOOKE
help
This option allows you to set the maximum number of CAM slots that
will be used to map low memory. There are a limited number of slots
available and even more limited number that will fit in the L1 MMU.
However, using more entries will allow mapping more low memory. This
can be useful in optimizing the layout of kernel virtual memory.
Say N here unless you know what you are doing.
config LOWMEM_CAM_NUM
depends on FSL_BOOKE
int "Number of CAMs to use to map low memory" if LOWMEM_CAM_NUM_BOOL
default 3
config DYNAMIC_MEMSTART
bool "Enable page aligned dynamic load address for kernel"
depends on ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
select NONSTATIC_KERNEL
help
This option enables the kernel to be loaded at any page aligned
physical address. The kernel creates a mapping from KERNELBASE to
the address where the kernel is loaded. The page size here implies
the TLB page size of the mapping for kernel on the particular platform.
Please refer to the init code for finding the TLB page size.
DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
kernel image, where the only restriction is the page aligned kernel
load address. When this option is enabled, the compile time physical
address CONFIG_PHYSICAL_START is ignored.
This option is overridden by CONFIG_RELOCATABLE
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
config PAGE_OFFSET_BOOL
bool "Set custom page offset address"
depends on ADVANCED_OPTIONS
help
This option allows you to set the kernel virtual address at which
the kernel will map low memory. This can be useful in optimizing
the virtual memory layout of the system.
Say N here unless you know what you are doing.
config PAGE_OFFSET
hex "Virtual address of memory base" if PAGE_OFFSET_BOOL
default "0xc0000000"
config KERNEL_START_BOOL
bool "Set custom kernel base address"
depends on ADVANCED_OPTIONS
help
This option allows you to set the kernel virtual address at which
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
the kernel will be loaded. Normally this should match PAGE_OFFSET
however there are times (like kdump) that one might not want them
to be the same.
Say N here unless you know what you are doing.
config KERNEL_START
hex "Virtual address of kernel base" if KERNEL_START_BOOL
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
default PAGE_OFFSET if PAGE_OFFSET_BOOL
default "0xc2000000" if CRASH_DUMP && !NONSTATIC_KERNEL
default "0xc0000000"
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
config PHYSICAL_START_BOOL
bool "Set physical address where the kernel is loaded"
depends on ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
help
This gives the physical address where the kernel is loaded.
Say N here unless you know what you are doing.
config PHYSICAL_START
hex "Physical address where the kernel is loaded" if PHYSICAL_START_BOOL
default "0x02000000" if PPC_STD_MMU && CRASH_DUMP && !NONSTATIC_KERNEL
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
default "0x00000000"
config PHYSICAL_ALIGN
hex
powerpc/fsl-booke: Allow larger CAM sizes than 256 MB The code that maps kernel low memory would only use page sizes up to 256 MB. On E500v2 pages up to 4 GB are supported. However, a page must be aligned to a multiple of the page's size. I.e. 256 MB pages must aligned to a 256 MB boundary. This was enforced by a requirement that the physical and virtual addresses of the start of lowmem be aligned to 256 MB. Clearly requiring 1GB or 4GB alignment to allow pages of that size isn't acceptable. To solve this, I simply have adjust_total_lowmem() take alignment into account when it decides what size pages to use. Give it PAGE_OFFSET = 0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2GB of RAM, and it will map pages like this: PA 0x3000_0000 VA 0x7000_0000 Size 256 MB PA 0x4000_0000 VA 0x8000_0000 Size 1 GB PA 0x8000_0000 VA 0xC000_0000 Size 256 MB PA 0x9000_0000 VA 0xD000_0000 Size 256 MB PA 0xA000_0000 VA 0xE000_0000 Size 256 MB Because the lowmem mapping code now takes alignment into account, PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB. Even lower might be possible. The lowmem code will work down to 4 kB but it's possible some of the boot code will fail before then. Poor alignment will force small pages to be used, which combined with the limited number of TLB1 pages available, will result in very little memory getting mapped. So alignments less than 64 MB probably aren't very useful anyway. Signed-off-by: Trent Piepho <tpiepho@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-09 03:34:59 +00:00
default "0x04000000" if FSL_BOOKE
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
help
This value puts the alignment restrictions on physical address
where kernel is loaded and run from. Kernel is compiled for an
address which meets above alignment restriction.
config TASK_SIZE_BOOL
bool "Set custom user task size"
depends on ADVANCED_OPTIONS
help
This option allows you to set the amount of virtual address space
allocated to user tasks. This can be useful in optimizing the
virtual memory layout of the system.
Say N here unless you know what you are doing.
config TASK_SIZE
hex "Size of user task space" if TASK_SIZE_BOOL
default "0x80000000" if PPC_8xx
default "0xc0000000"
config CONSISTENT_SIZE_BOOL
bool "Set custom consistent memory pool size"
depends on ADVANCED_OPTIONS && NOT_COHERENT_CACHE
help
This option allows you to set the size of the
consistent memory pool. This pool of virtual memory
is used to make consistent memory allocations.
config CONSISTENT_SIZE
hex "Size of consistent memory pool" if CONSISTENT_SIZE_BOOL
default "0x00200000" if NOT_COHERENT_CACHE
config PIN_TLB
bool "Pinned Kernel TLBs (860 ONLY)"
depends on ADVANCED_OPTIONS && 8xx
config PIN_TLB_IMMR
bool "Pinned TLB for IMMR"
depends on PIN_TLB
default y
endmenu
if PPC64
# This value must have zeroes in the bottom 60 bits otherwise lots will break
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
config PAGE_OFFSET
hex
default "0xc000000000000000"
config KERNEL_START
hex
default "0xc000000000000000"
[POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings. ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-21 18:22:34 +00:00
config PHYSICAL_START
hex
default "0x00000000"
endif
config ARCH_RANDOM
def_bool n
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
source "lib/Kconfig"
source "arch/powerpc/Kconfig.debug"
source "security/Kconfig"
config KEYS_COMPAT
bool
depends on COMPAT && KEYS
default y
source "crypto/Kconfig"
config PPC_LIB_RHEAP
bool
source "arch/powerpc/kvm/Kconfig"
powerpc/livepatch: Add live patching support on ppc64le Add the kconfig logic & assembly support for handling live patched functions. This depends on DYNAMIC_FTRACE_WITH_REGS, which in turn depends on the new -mprofile-kernel ftrace ABI, which is only supported currently on ppc64le. Live patching is handled by a special ftrace handler. This means it runs from ftrace_caller(). The live patch handler modifies the NIP so as to redirect the return from ftrace_caller() to the new patched function. However there is one particularly tricky case we need to handle. If a function A calls another function B, and it is known at link time that they share the same TOC, then A will not save or restore its TOC, and will call the local entry point of B. When we live patch B, we replace it with a new function C, which may not have the same TOC as A. At live patch time it's too late to modify A to do the TOC save/restore, so the live patching code must interpose itself between A and C, and do the TOC save/restore that A omitted. An additionaly complication is that the livepatch code can not create a stack frame in order to save the TOC. That is because if C takes > 8 arguments, or is varargs, A will have written the arguments for C in A's stack frame. To solve this, we introduce a "livepatch stack" which grows upward from the base of the regular stack, and is used to store the TOC & LR when calling a live patched function. When the patched function returns, we retrieve the real LR & TOC from the livepatch stack, restore them, and pop the livepatch "stack frame". Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Torsten Duwe <duwe@suse.de> Reviewed-by: Balbir Singh <bsingharora@gmail.com>
2016-03-24 11:04:05 +00:00
source "kernel/livepatch/Kconfig"