linux/arch/arc/Kconfig

# SPDX-License-Identifier: GPL-2.0-only
#
# Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
#
config ARC
def_bool y
select ARC_TIMERS
select ARCH_HAS_DMA_COHERENT_TO_PFN
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
select COMMON_CLK
select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC)
select GENERIC_CLOCKEVENTS
select GENERIC_FIND_FIRST_BIT
# for now, we don't need GENERIC_IRQ_PROBE, CONFIG_GENERIC_IRQ_CHIP
select GENERIC_IRQ_SHOW
select GENERIC_PCI_IOMAP
select GENERIC_PENDING_IRQ if SMP
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
select HAVE_ARCH_KGDB
select HAVE_ARCH_TRACEHOOK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_IOREMAP_PROT
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZMA
select HAVE_KPROBES
select HAVE_KRETPROBES
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_OPROFILE
select HAVE_PERF_EVENTS
select HANDLE_DOMAIN_IRQ
select IRQ_DOMAIN
select MODULES_USE_ELF_RELA
select OF
select OF_EARLY_FLATTREE
select PCI_SYSCALL if PCI
select PERF_USE_VMALLOC if ARC_CACHE_VIPT_ALIASING
config ARCH_HAS_CACHE_LINE_SIZE
def_bool y
config TRACE_IRQFLAGS_SUPPORT
def_bool y
config LOCKDEP_SUPPORT
def_bool y
config SCHED_OMIT_FRAME_POINTER
def_bool y
config GENERIC_CSUM
def_bool y
config ARCH_DISCONTIGMEM_ENABLE
def_bool n
config ARCH_FLATMEM_ENABLE
def_bool y
config MMU
def_bool y
config NO_IOPORT_MAP
def_bool y
config GENERIC_CALIBRATE_DELAY
def_bool y
config GENERIC_HWEIGHT
def_bool y
config STACKTRACE_SUPPORT
def_bool y
select STACKTRACE
config HAVE_ARCH_TRANSPARENT_HUGEPAGE
def_bool y
depends on ARC_MMU_V4
menu "ARC Architecture Configuration"
menu "ARC Platform/SoC/Board"
source "arch/arc/plat-tb10x/Kconfig"
source "arch/arc/plat-axs10x/Kconfig"
# New platforms are added here
source "arch/arc/plat-eznps/Kconfig"
source "arch/arc/plat-hsdk/Kconfig"
endmenu
choice
prompt "ARC Instruction Set"
default ISA_ARCV2
config ISA_ARCOMPACT
bool "ARCompact ISA"
lib/GCD.c: use binary GCD algorithm instead of Euclidean The binary GCD algorithm is based on the following facts: 1. If a and b are all evens, then gcd(a,b) = 2 * gcd(a/2, b/2) 2. If a is even and b is odd, then gcd(a,b) = gcd(a/2, b) 3. If a and b are all odds, then gcd(a,b) = gcd((a-b)/2, b) = gcd((a+b)/2, b) Even on x86 machines with reasonable division hardware, the binary algorithm runs about 25% faster (80% the execution time) than the division-based Euclidian algorithm. On platforms like Alpha and ARMv6 where division is a function call to emulation code, it's even more significant. There are two variants of the code here, depending on whether a fast __ffs (find least significant set bit) instruction is available. This allows the unpredictable branches in the bit-at-a-time shifting loop to be eliminated. If fast __ffs is not available, the "even/odd" GCD variant is used. I use the following code to benchmark: #include <stdio.h> #include <stdlib.h> #include <stdint.h> #include <string.h> #include <time.h> #include <unistd.h> #define swap(a, b) \ do { \ a ^= b; \ b ^= a; \ a ^= b; \ } while (0) unsigned long gcd0(unsigned long a, unsigned long b) { unsigned long r; if (a < b) { swap(a, b); } if (b == 0) return a; while ((r = a % b) != 0) { a = b; b = r; } return b; } unsigned long gcd1(unsigned long a, unsigned long b) { unsigned long r = a | b; if (!a || !b) return r; b >>= __builtin_ctzl(b); for (;;) { a >>= __builtin_ctzl(a); if (a == b) return a << __builtin_ctzl(r); if (a < b) swap(a, b); a -= b; } } unsigned long gcd2(unsigned long a, unsigned long b) { unsigned long r = a | b; if (!a || !b) return r; r &= -r; while (!(b & r)) b >>= 1; for (;;) { while (!(a & r)) a >>= 1; if (a == b) return a; if (a < b) swap(a, b); a -= b; a >>= 1; if (a & r) a += b; a >>= 1; } } unsigned long gcd3(unsigned long a, unsigned long b) { unsigned long r = a | b; if (!a || !b) return r; b >>= __builtin_ctzl(b); if (b == 1) return r & -r; for (;;) { a >>= __builtin_ctzl(a); if (a == 1) return r & -r; if (a == b) return a << __builtin_ctzl(r); if (a < b) swap(a, b); a -= b; } } unsigned long gcd4(unsigned long a, unsigned long b) { unsigned long r = a | b; if (!a || !b) return r; r &= -r; while (!(b & r)) b >>= 1; if (b == r) return r; for (;;) { while (!(a & r)) a >>= 1; if (a == r) return r; if (a == b) return a; if (a < b) swap(a, b); a -= b; a >>= 1; if (a & r) a += b; a >>= 1; } } static unsigned long (*gcd_func[])(unsigned long a, unsigned long b) = { gcd0, gcd1, gcd2, gcd3, gcd4, }; #define TEST_ENTRIES (sizeof(gcd_func) / sizeof(gcd_func[0])) #if defined(__x86_64__) #define rdtscll(val) do { \ unsigned long __a,__d; \ __asm__ __volatile__("rdtsc" : "=a" (__a), "=d" (__d)); \ (val) = ((unsigned long long)__a) | (((unsigned long long)__d)<<32); \ } while(0) static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long), unsigned long a, unsigned long b, unsigned long *res) { unsigned long long start, end; unsigned long long ret; unsigned long gcd_res; rdtscll(start); gcd_res = gcd(a, b); rdtscll(end); if (end >= start) ret = end - start; else ret = ~0ULL - start + 1 + end; *res = gcd_res; return ret; } #else static inline struct timespec read_time(void) { struct timespec time; clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time); return time; } static inline unsigned long long diff_time(struct timespec start, struct timespec end) { struct timespec temp; if ((end.tv_nsec - start.tv_nsec) < 0) { temp.tv_sec = end.tv_sec - start.tv_sec - 1; temp.tv_nsec = 1000000000ULL + 
end.tv_nsec - start.tv_nsec; } else { temp.tv_sec = end.tv_sec - start.tv_sec; temp.tv_nsec = end.tv_nsec - start.tv_nsec; } return temp.tv_sec * 1000000000ULL + temp.tv_nsec; } static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long), unsigned long a, unsigned long b, unsigned long *res) { struct timespec start, end; unsigned long gcd_res; start = read_time(); gcd_res = gcd(a, b); end = read_time(); *res = gcd_res; return diff_time(start, end); } #endif static inline unsigned long get_rand() { if (sizeof(long) == 8) return (unsigned long)rand() << 32 | rand(); else return rand(); } int main(int argc, char **argv) { unsigned int seed = time(0); int loops = 100; int repeats = 1000; unsigned long (*res)[TEST_ENTRIES]; unsigned long long elapsed[TEST_ENTRIES]; int i, j, k; for (;;) { int opt = getopt(argc, argv, "n:r:s:"); /* End condition always first */ if (opt == -1) break; switch (opt) { case 'n': loops = atoi(optarg); break; case 'r': repeats = atoi(optarg); break; case 's': seed = strtoul(optarg, NULL, 10); break; default: /* You won't actually get here. */ break; } } res = malloc(sizeof(unsigned long) * TEST_ENTRIES * loops); memset(elapsed, 0, sizeof(elapsed)); srand(seed); for (j = 0; j < loops; j++) { unsigned long a = get_rand(); /* Do we have args? */ unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand(); unsigned long long min_elapsed[TEST_ENTRIES]; for (k = 0; k < repeats; k++) { for (i = 0; i < TEST_ENTRIES; i++) { unsigned long long tmp = benchmark_gcd_func(gcd_func[i], a, b, &res[j][i]); if (k == 0 || min_elapsed[i] > tmp) min_elapsed[i] = tmp; } } for (i = 0; i < TEST_ENTRIES; i++) elapsed[i] += min_elapsed[i]; } for (i = 0; i < TEST_ENTRIES; i++) printf("gcd%d: elapsed %llu\n", i, elapsed[i]); k = 0; srand(seed); for (j = 0; j < loops; j++) { unsigned long a = get_rand(); unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand(); for (i = 1; i < TEST_ENTRIES; i++) { if (res[j][i] != res[j][0]) break; } if (i < TEST_ENTRIES) { if (k == 0) { k = 1; fprintf(stderr, "Error:\n"); } fprintf(stderr, "gcd(%lu, %lu): ", a, b); for (i = 0; i < TEST_ENTRIES; i++) fprintf(stderr, "%ld%s", res[j][i], i < TEST_ENTRIES - 1 ? ", " : "\n"); } } if (k == 0) fprintf(stderr, "PASS\n"); free(res); return 0; } Compiled with "-O2", on "VirtualBox 4.4.0-22-generic #38-Ubuntu x86_64" got: zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10 gcd0: elapsed 10174 gcd1: elapsed 2120 gcd2: elapsed 2902 gcd3: elapsed 2039 gcd4: elapsed 2812 PASS zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10 gcd0: elapsed 9309 gcd1: elapsed 2280 gcd2: elapsed 2822 gcd3: elapsed 2217 gcd4: elapsed 2710 PASS zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10 gcd0: elapsed 9589 gcd1: elapsed 2098 gcd2: elapsed 2815 gcd3: elapsed 2030 gcd4: elapsed 2718 PASS zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10 gcd0: elapsed 9914 gcd1: elapsed 2309 gcd2: elapsed 2779 gcd3: elapsed 2228 gcd4: elapsed 2709 PASS [akpm@linux-foundation.org: avoid #defining a CONFIG_ variable] Signed-off-by: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com> Signed-off-by: George Spelvin <linux@horizon.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-21 00:03:57 +00:00
select CPU_NO_EFFICIENT_FFS
help
The original ARC ISA of ARC600/700 cores
config ISA_ARCV2
bool "ARC ISA v2"
select ARC_TIMERS_64BIT
help
ISA for the Next Generation ARC-HS cores
endchoice
menu "ARC CPU Configuration"
choice
prompt "ARC Core"
default ARC_CPU_770 if ISA_ARCOMPACT
default ARC_CPU_HS if ISA_ARCV2
if ISA_ARCOMPACT
config ARC_CPU_750D
bool "ARC750D"
select ARC_CANT_LLSC
help
Support for ARC750 core
config ARC_CPU_770
bool "ARC770"
select ARC_HAS_SWAPE
help
Support for ARC770 core introduced with Rel 4.10 (Summer 2011)
This core has a bunch of cool new features:
-MMU-v3: Variable Page Sz (4k, 8k, 16k), bigger J-TLB (128x4)
Shared Address Spaces (for sharing TLB entries in MMU)
-Caches: New Prog Model, Region Flush
-Insns: endian swap, load-locked/store-conditional, time-stamp-ctr
endif #ISA_ARCOMPACT
config ARC_CPU_HS
bool "ARC-HS"
depends on ISA_ARCV2
help
Support for ARC HS38x Cores based on ARCv2 ISA
The notable features are:
- SMP configurations of up to 4 cores with coherency
- Optional L2 Cache and IO-Coherency
- Revised Interrupt Architecture (multiple priorities, reg banks,
auto stack switch, auto regfile save/restore)
- MMUv4 (PIPT dcache, Huge Pages)
- Instructions for
* 64bit load/store: LDD, STD
* Hardware assisted divide/remainder: DIV, REM
* Function prologue/epilogue: ENTER_S, LEAVE_S
* IRQ enable/disable: CLRI, SETI
* pop count: FFS, FLS
* SETcc, BMSKN, XBFU...
endchoice
config CPU_BIG_ENDIAN
bool "Enable Big Endian Mode"
help
Build kernel for Big Endian Mode of ARC CPU
config SMP
bool "Symmetric Multi-Processing"
select ARC_MCIP if ISA_ARCV2
help
This enables support for systems with more than one CPU.
if SMP
config NR_CPUS
int "Maximum number of CPUs (2-4096)"
range 2 4096
default "4"
config ARC_SMP_HALT_ON_RESET
bool "Enable Halt-on-reset boot mode"
help
In SMP configurations cores can be configured as Halt-on-reset
or they could all start at the same time. For Halt-on-reset, the non
masters are parked until the Master kicks them, so they can start off
at the designated entry point. Otherwise, all cores jump to a common
entry point and spin-wait for the Master's signal.
endif #SMP
config ARC_MCIP
bool "ARConnect Multicore IP (MCIP) Support "
depends on ISA_ARCV2
default y if SMP
help
This IP block enables SMP in ARC-HS38 cores.
It provides for cross-core interrupts, multi-core debug,
hardware semaphores, shared memory, ...
menuconfig ARC_CACHE
bool "Enable Cache Support"
default y
if ARC_CACHE
config ARC_CACHE_LINE_SHIFT
int "Cache Line Length (as power of 2)"
range 5 7
default "6"
help
Starting with ARC700 4.9, the cache line length is configurable.
This option specifies "N", with line length = 2^N.
So line lengths of 32, 64 and 128 are specified by 5, 6 and 7 respectively.
Linux only supports the same line length for I and D caches.
config ARC_HAS_ICACHE
bool "Use Instruction Cache"
default y
config ARC_HAS_DCACHE
bool "Use Data Cache"
default y
config ARC_CACHE_PAGES
bool "Per Page Cache Control"
default y
depends on ARC_HAS_ICACHE || ARC_HAS_DCACHE
help
This can be used to override the global I/D Cache Enable on a
per-page basis (but only for pages accessed via the MMU, i.e.
Kernel Virtual addresses or User Virtual Addresses).
TLB entries have a per-page Cache Enable Bit.
Note that Global I/D ENABLE + Per Page DISABLE works, but the converse
Global DISABLE + Per Page ENABLE won't work.
config ARC_CACHE_VIPT_ALIASING
bool "Support VIPT Aliasing D$"
depends on ARC_HAS_DCACHE && ISA_ARCOMPACT
endif #ARC_CACHE
config ARC_HAS_ICCM
bool "Use ICCM"
help
Single Cycle RAMs to store Fast Path Code
config ARC_ICCM_SZ
int "ICCM Size in KB"
default "64"
depends on ARC_HAS_ICCM
config ARC_HAS_DCCM
bool "Use DCCM"
help
Single Cycle RAMs to store Fast Path Data
config ARC_DCCM_SZ
int "DCCM Size in KB"
default "64"
depends on ARC_HAS_DCCM
config ARC_DCCM_BASE
hex "DCCM map address"
default "0xA0000000"
depends on ARC_HAS_DCCM
choice
prompt "MMU Version"
default ARC_MMU_V3 if ARC_CPU_770
default ARC_MMU_V2 if ARC_CPU_750D
default ARC_MMU_V4 if ARC_CPU_HS
if ISA_ARCOMPACT
config ARC_MMU_V1
bool "MMU v1"
help
The original ARC700 MMU
config ARC_MMU_V2
bool "MMU v2"
help
Fixed the deficiency of v1 - possible thrashing in a memcpy scenario
when 2 D-TLB and 1 I-TLB entries index into the same 2-way set.
config ARC_MMU_V3
bool "MMU v3"
depends on ARC_CPU_770
help
Introduced with ARC700 4.10: New Features
Variable Page size (1k-16k), var JTLB size 128 x (2 or 4)
Shared Address Spaces (SASID)
endif
config ARC_MMU_V4
bool "MMU v4"
depends on ISA_ARCV2
endchoice
choice
prompt "MMU Page Size"
default ARC_PAGE_SIZE_8K
config ARC_PAGE_SIZE_8K
bool "8KB"
help
The default page size; 4KB and 16KB pages need MMU v3 or v4.
config ARC_PAGE_SIZE_16K
bool "16KB"
depends on ARC_MMU_V3 || ARC_MMU_V4
config ARC_PAGE_SIZE_4K
bool "4KB"
depends on ARC_MMU_V3 || ARC_MMU_V4
endchoice
choice
prompt "MMU Super Page Size"
depends on ISA_ARCV2 && TRANSPARENT_HUGEPAGE
default ARC_HUGEPAGE_2M
config ARC_HUGEPAGE_2M
bool "2MB"
config ARC_HUGEPAGE_16M
bool "16MB"
endchoice
config NODES_SHIFT
int "Maximum NUMA Nodes (as a power of 2)"
default "0" if !DISCONTIGMEM
default "1" if DISCONTIGMEM
depends on NEED_MULTIPLE_NODES
help
Accessing memory beyond 1GB (with or w/o PAE) requires 2 memory
zones.
if ISA_ARCOMPACT
config ARC_COMPACT_IRQ_LEVELS
bool "Setup Timer IRQ as high Priority"
# if SMP, LV2 enabled ONLY if ARC implementation has LV2 re-entrancy
depends on !SMP
config ARC_FPU_SAVE_RESTORE
bool "Enable FPU state persistence across context switch"
help
The Double Precision Floating Point unit has dedicated regs which
need to be saved/restored across a context-switch.
Note that the ARC FPU is overly simplistic, unlike say x86, which has
hardware pieces to allow software to conditionally save/restore,
based on actual usage of the FPU by a task. Thus our implementation
does this for all tasks in the system.
endif #ISA_ARCOMPACT
config ARC_CANT_LLSC
def_bool n
config ARC_HAS_LLSC
bool "Insn: LLOCK/SCOND (efficient atomic ops)"
default y
depends on !ARC_CANT_LLSC
config ARC_HAS_SWAPE
bool "Insn: SWAPE (endian-swap)"
default y
if ISA_ARCV2
config ARC_USE_UNALIGNED_MEM_ACCESS
bool "Enable unaligned access in HW"
default y
select HAVE_EFFICIENT_UNALIGNED_ACCESS
help
The ARC HS architecture supports unaligned memory access,
which is disabled by default. Enable unaligned access in
hardware and let software/compiler-generated code make use of it.
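# A note on the build side (an assumption about the arch Makefile, not spelled
# out in this file): selecting the option above is expected to pass
# -munaligned-access to gcc, and leaving it off -mno-unaligned-access, so that
# generated code actually exercises the hardware support.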
config ARC_HAS_LL64
bool "Insn: 64bit LDD/STD"
help
Enable gcc to generate 64-bit load/store instructions.
The ISA mandates even/odd register pairs to allow encoding of two
dest operands with 2 possible source operands.
default y
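# Likewise (again an assumption about the build glue, not stated here):
# ARC_HAS_LL64 is expected to add -mll64 to the gcc flags so that LDD/STD
# instructions can be emitted.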
config ARC_HAS_DIV_REM
bool "Insn: div, divu, rem, remu"
default y
config ARC_HAS_ACCL_REGS
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
default y
help
Depending on the configuration, the CPU can contain an accumulator reg-pair
(also referred to as r58:r59). These can also be used by gcc as GPRs, so the
kernel needs to save/restore them per process.
config ARC_IRQ_NO_AUTOSAVE
bool "Disable hardware autosave regfile on interrupts"
default n
help
On HS cores, a taken interrupt auto-saves the regfile on the stack.
This is programmable and can optionally be disabled, in which case
the software INTERRUPT_PROLOGUE/EPILOGUE does the needed work.
endif # ISA_ARCV2
endmenu # "ARC CPU Configuration"
config LINUX_LINK_BASE
hex "Kernel link address"
default "0x80000000"
help
ARC700 divides the 32 bit physical address space into two equal halves
-Lower 2G (0 - 0x7FFF_FFFF) is user virtual, translated by MMU
-Upper 2G (0x8000_0000 onwards) is untranslated, for kernel
Typically the Linux kernel is linked at the start of the untranslated space,
hence the default value of 0x8000_0000.
However some customers have peripherals mapped at this addr, so
Linux needs to be scooted a bit.
If you don't know what the above means, leave this setting alone.
This needs to match the memory start address specified in the Device Tree.
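# A minimal sketch of a consistent setup (the RAM size and DT node are
# illustrative, assuming 32-bit address/size cells):
#   CONFIG_LINUX_LINK_BASE=0x80000000
# with a device tree memory node starting at the same address, e.g.
#   memory { device_type = "memory"; reg = <0x80000000 0x20000000>; };
# where 0x20000000 corresponds to 512 MB of RAM.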
config LINUX_RAM_BASE
hex "RAM base address"
default LINUX_LINK_BASE
help
By default Linux is linked at base of RAM. However in some special
cases (such as HSDK), Linux can't be linked at start of DDR, hence
this option.
config HIGHMEM
bool "High Memory Support"
select ARCH_DISCONTIGMEM_ENABLE
help
With the ARC 2G:2G address split, only the upper 2G is directly addressable
by the kernel. Enable this to potentially allow access to the rest of the
2G, and to PAE in future.
config ARC_HAS_PAE40
bool "Support for the 40-bit Physical Address Extension"
depends on ISA_ARCV2
select HIGHMEM
select PHYS_ADDR_T_64BIT
help
Enable access to physical memory beyond 4G, only supported on
ARC cores with 40 bit Physical Addressing support
config ARC_KVADDR_SIZE
int "Kernel Virtual Address Space size (MB)"
range 0 512
default "256"
help
The kernel address space is carved out of 256MB of translated address
space for catering to vmalloc, modules, pkmap, fixmap. This however may
not suffice for the vmalloc requirements of a 4K-CPU EZChip system. So allow
this to be stretched to 512 MB (by extending into the reserved
kernel-user gutter).
config ARC_CURR_IN_REG
bool "Dedicate Register r25 for current_task pointer"
default y
help
This reserves register R25 to point to the Current Task in
kernel mode, saving a memory access for each such reference.
config ARC_EMUL_UNALIGNED
bool "Emulate unaligned memory access (userspace only)"
select SYSCTL_ARCH_UNALIGN_NO_WARN
select SYSCTL_ARCH_UNALIGN_ALLOW
depends on ISA_ARCOMPACT
help
This enables misaligned 16 & 32 bit memory accesses from user space.
Use only if absolutely necessary, as it will be very slow and can also hide
potential bugs in code.
config HZ
int "Timer Frequency"
default 100
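# For reference: HZ=100 means the periodic tick fires every 1/100 s = 10 ms;
# a higher value trades finer timer granularity for more tick overhead.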
config ARC_METAWARE_HLINK
bool "Support for Metaware debugger assisted Host access"
help
This option allows Linux userland apps to directly access the
host file system (open/creat/read/write etc.) with help from the
Metaware Debugger. This can come in handy for Linux-host communication
when there is no real usable peripheral such as EMAC.
menuconfig ARC_DBG
bool "ARC debugging"
default y
if ARC_DBG
config ARC_DW2_UNWIND
bool "Enable DWARF specific kernel stack unwind"
default y
select KALLSYMS
help
Compiles the kernel with DWARF unwind information and can be used
to get stack backtraces.
If you say Y here the resulting kernel image will be slightly larger
but not slower, and it will give very useful debugging information.
If you don't debug the kernel, you can say N, but we may not be able
to solve problems without frame unwind information.
config ARC_DBG_TLB_PARANOIA
bool "Paranoia Checks in Low Level TLB Handlers"
endif
config ARC_BUILTIN_DTB_NAME
string "Built in DTB"
help
Set the name of the DTB to embed in the vmlinux binary.
Leaving it blank selects the minimal "skeleton" dtb.
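# Illustrative .config fragment (the DTB name is only an example; use the
# basename of a .dts file under arch/arc/boot/dts/ for your board):
#   CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700"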
endmenu # "ARC Architecture Configuration"
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
default "12" if ARC_HUGEPAGE_16M
default "11"
source "kernel/power/Kconfig"