Linux support for the 64-bit ARM architecture (AArch64)
Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64

Pull arm64 support from Catalin Marinas:
 "Linux support for the 64-bit ARM architecture (AArch64)

  Features currently supported:
   - 39-bit address space for user and kernel (each)
   - 4KB and 64KB page configurations
   - Compat (32-bit) user applications (ARMv7, EABI only)
   - Flattened Device Tree (mandated for all AArch64 platforms)
   - ARM generic timers"

* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (35 commits)
  arm64: ptrace: remove obsolete ptrace request numbers from user headers
  arm64: Do not set the SMP/nAMP processor bit
  arm64: MAINTAINERS update
  arm64: Build infrastructure
  arm64: Miscellaneous header files
  arm64: Generic timers support
  arm64: Loadable modules
  arm64: Miscellaneous library functions
  arm64: Performance counters support
  arm64: Add support for /proc/sys/debug/exception-trace
  arm64: Debugging support
  arm64: Floating point and SIMD
  arm64: 32-bit (compat) applications support
  arm64: User access library functions
  arm64: Signal handling support
  arm64: VDSO support
  arm64: System calls handling
  arm64: ELF definitions
  arm64: SMP support
  arm64: DMA mapping API
  ...
This commit is contained in: commit 81f56e5375

Documentation/arm64/booting.txt (new file, 152 lines)
@@ -0,0 +1,152 @@

			Booting AArch64 Linux
			=====================

Author: Will Deacon <will.deacon@arm.com>
Date  : 07 September 2012

This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0 and EL1 having a secure and a non-secure
counterpart.  EL2 is the hypervisor level and exists only in non-secure
mode. EL3 is the highest priority level and exists only in secure mode.

For the purposes of this document, we will use the term `boot loader'
simply to define all software that executes on the CPU(s) before control
is passed to the Linux kernel.  This may include secure monitor and
hypervisor code, or it may just be a handful of instructions for
preparing a minimal boot environment.

Essentially, the boot loader should provide (as a minimum) the
following:

1. Setup and initialise the RAM
2. Setup the device tree
3. Decompress the kernel image
4. Call the kernel image


1. Setup and initialise RAM
---------------------------

Requirement: MANDATORY

The boot loader is expected to find and initialise all RAM that the
kernel will use for volatile data storage in the system.  It performs
this in a machine dependent manner.  (It may use internal algorithms
to automatically locate and size all RAM, or it may use knowledge of
the RAM in the machine, or any other method the boot loader designer
sees fit.)


2. Setup the device tree
-------------------------

Requirement: MANDATORY

The device tree blob (dtb) must be no bigger than 2 megabytes in size
and placed at a 2-megabyte boundary within the first 512 megabytes from
the start of the kernel image. This is to allow the kernel to map the
blob using a single section mapping in the initial page tables.


3. Decompress the kernel image
------------------------------

Requirement: OPTIONAL

The AArch64 kernel does not currently provide a decompressor and
therefore requires decompression (gzip etc.) to be performed by the boot
loader if a compressed Image target (e.g. Image.gz) is used.  For
bootloaders that do not implement this requirement, the uncompressed
Image target is available instead.


4. Call the kernel image
------------------------

Requirement: MANDATORY

The decompressed kernel image contains a 32-byte header as follows:

  u32 magic	= 0x14000008;	/* branch to stext, little-endian */
  u32 res0	= 0;		/* reserved */
  u64 text_offset;		/* Image load offset */
  u64 res1	= 0;		/* reserved */
  u64 res2	= 0;		/* reserved */

The image must be placed at the specified offset (currently 0x80000)
from the start of the system RAM and called there. The start of the
system RAM must be aligned to 2MB.

Before jumping into the kernel, the following conditions must be met:

- Quiesce all DMA capable devices so that memory does not get
  corrupted by bogus network packets or disk data. This will save
  you many hours of debug.

- Primary CPU general-purpose register settings
  x0 = physical address of device tree blob (dtb) in system RAM.
  x1 = 0 (reserved for future use)
  x2 = 0 (reserved for future use)
  x3 = 0 (reserved for future use)

- CPU mode
  All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
  IRQ and FIQ).
  The CPU must be in either EL2 (RECOMMENDED in order to have access to
  the virtualisation extensions) or non-secure EL1.

- Caches, MMUs
  The MMU must be off.
  Instruction cache may be on or off.
  Data cache must be off and invalidated.
  External caches (if present) must be configured and disabled.

- Architected timers
  CNTFRQ must be programmed with the timer frequency.
  If entering the kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0)
  set where available.

- Coherency
  All CPUs to be booted by the kernel must be part of the same coherency
  domain on entry to the kernel. This may require IMPLEMENTATION DEFINED
  initialisation to enable the receiving of maintenance operations on
  each CPU.

- System registers
  All writable architected system registers at the exception level where
  the kernel image will be entered must be initialised by software at a
  higher exception level to prevent execution in an UNKNOWN state.

The boot loader is expected to enter the kernel on each CPU in the
following manner:

- The primary CPU must jump directly to the first instruction of the
  kernel image. The device tree blob passed by this CPU must contain
  for each CPU node:

	1. An 'enable-method' property. Currently, the only supported value
	   for this field is the string "spin-table".

	2. A 'cpu-release-addr' property identifying a 64-bit,
	   zero-initialised memory location.

  It is expected that the bootloader will generate these device tree
  properties and insert them into the blob prior to kernel entry.

- Any secondary CPUs must spin outside of the kernel in a reserved area
  of memory (communicated to the kernel by a /memreserve/ region in the
  device tree) polling their cpu-release-addr location, which must be
  contained in the reserved region. A wfe instruction may be inserted
  to reduce the overhead of the busy-loop and a sev will be issued by
  the primary CPU. When a read of the location pointed to by the
  cpu-release-addr returns a non-zero value, the CPU must jump directly
  to this value.

- Secondary CPU general-purpose register settings
  x0 = 0 (reserved for future use)
  x1 = 0 (reserved for future use)
  x2 = 0 (reserved for future use)
  x3 = 0 (reserved for future use)
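As an illustration (not part of this commit), the 32-byte image header described in
section 4 above can be viewed from a boot loader as a plain C structure. The struct
and function names below are made up for the example; only the field layout and the
magic value come from the document:

    #include <stdint.h>

    /* Illustrative view of the 32-byte header at the start of Image. */
    struct arm64_image_header {
            uint32_t magic;         /* 0x14000008: "b stext", little-endian */
            uint32_t res0;          /* reserved */
            uint64_t text_offset;   /* Image load offset from start of RAM */
            uint64_t res1;          /* reserved */
            uint64_t res2;          /* reserved */
    };

    /* Return the address to load/enter the image at, or 0 if the header
     * does not look like an AArch64 Image. ram_base must be 2MB aligned. */
    static uint64_t arm64_image_load_addr(const struct arm64_image_header *h,
                                          uint64_t ram_base)
    {
            if (h->magic != 0x14000008)
                    return 0;
            return ram_base + h->text_offset;   /* text_offset is currently 0x80000 */
    }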
Documentation/arm64/memory.txt (new file, 73 lines)
@@ -0,0 +1,73 @@

		     Memory Layout on AArch64 Linux
		     ==============================

Author: Catalin Marinas <catalin.marinas@arm.com>
Date  : 20 February 2012

This document describes the virtual memory layout used by the AArch64
Linux kernel. The architecture allows up to 4 levels of translation
tables with a 4KB page size and up to 3 levels with a 64KB page size.

AArch64 Linux uses 3 levels of translation tables with the 4KB page
configuration, allowing 39-bit (512GB) virtual addresses for both user
and kernel. With 64KB pages, only 2 levels of translation tables are
used but the memory layout is the same.

User addresses have bits 63:39 set to 0 while the kernel addresses have
the same bits set to 1. TTBRx selection is given by bit 63 of the
virtual address. The swapper_pg_dir contains only kernel (global)
mappings while the user pgd contains only user (non-global) mappings.
The swapper_pgd_dir address is written to TTBR1 and never written to
TTBR0.


AArch64 Linux memory layout:

Start			End			Size		Use
-----------------------------------------------------------------------
0000000000000000	0000007fffffffff	 512GB		user

ffffff8000000000	ffffffbbfffcffff	~240GB		vmalloc

ffffffbbfffd0000	ffffffbcfffdffff	  64KB		[guard page]

ffffffbbfffe0000	ffffffbcfffeffff	  64KB		PCI I/O space

ffffffbbffff0000	ffffffbcffffffff	  64KB		[guard page]

ffffffbc00000000	ffffffbdffffffff	   8GB		vmemmap

ffffffbe00000000	ffffffbffbffffff	  ~8GB		[guard, future vmmemap]

ffffffbffc000000	ffffffbfffffffff	  64MB		modules

ffffffc000000000	ffffffffffffffff	 256GB		memory


Translation table lookup with 4KB pages:

+--------+--------+--------+--------+--------+--------+--------+--------+
|63    56|55    48|47    40|39    32|31    24|23    16|15     8|7      0|
+--------+--------+--------+--------+--------+--------+--------+--------+
 |                 |         |         |         |
 |                 |         |         |         v
 |                 |         |         |   [11:0]  in-page offset
 |                 |         |         +-> [20:12] L3 index
 |                 |         +-----------> [29:21] L2 index
 |                 +---------------------> [38:30] L1 index
 +-------------------------------> [47:39] L0 index (not used)
 +-------------------------------------------------> [63] TTBR0/1


Translation table lookup with 64KB pages:

+--------+--------+--------+--------+--------+--------+--------+--------+
|63    56|55    48|47    40|39    32|31    24|23    16|15     8|7      0|
+--------+--------+--------+--------+--------+--------+--------+--------+
 |                 |    |               |
 |                 |    |               v
 |                 |    |             [15:0]  in-page offset
 |                 |    +----------> [28:16] L3 index
 |                 +--------------------------> [41:29] L2 index (only 38:29 used)
 +-------------------------------> [47:42] L1 index (not used)
 +-------------------------------------------------> [63] TTBR0/1
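The 4KB-page lookup shown above can be expressed directly in code. A minimal,
standalone sketch (illustrative only, not taken from the patch) that splits a
39-bit virtual address into the three table indices and the page offset:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode a VA per the 4KB-page diagram: 3 levels, 9 bits each, 12-bit page. */
    static void decode_va_4k(uint64_t va)
    {
            unsigned int l1   = (va >> 30) & 0x1ff;  /* bits [38:30] */
            unsigned int l2   = (va >> 21) & 0x1ff;  /* bits [29:21] */
            unsigned int l3   = (va >> 12) & 0x1ff;  /* bits [20:12] */
            unsigned int off  =  va        & 0xfff;  /* bits [11:0]  */
            unsigned int ttbr =  va >> 63;           /* bit 63 selects TTBR0/TTBR1 */

            printf("TTBR%u L1=%u L2=%u L3=%u offset=0x%x\n", ttbr, l1, l2, l3, off);
    }

    int main(void)
    {
            decode_va_4k(0xffffffc000080000ULL);     /* a kernel-space address */
            return 0;
    }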
MAINTAINERS
@@ -1209,6 +1209,12 @@ S:	Maintained
F:	arch/arm/mach-pxa/z2.c
F:	arch/arm/mach-pxa/include/mach/z2.h

ARM64 PORT (AARCH64 ARCHITECTURE)
M:	Catalin Marinas <catalin.marinas@arm.com>
L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S:	Maintained
F:	arch/arm64/

ASC7621 HARDWARE MONITOR DRIVER
M:	George Joseph <george.joseph@fairview5.com>
L:	lm-sensors@lm-sensors.org
arch/arm64/Kconfig (new file, 222 lines)
@@ -0,0 +1,222 @@
config ARM64
	def_bool y
	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
	select GENERIC_CLOCKEVENTS
	select GENERIC_HARDIRQS_NO_DEPRECATED
	select GENERIC_IOMAP
	select GENERIC_IRQ_PROBE
	select GENERIC_IRQ_SHOW
	select GENERIC_SMP_IDLE_THREAD
	select GENERIC_TIME_VSYSCALL
	select HARDIRQS_SW_RESEND
	select HAVE_ARCH_TRACEHOOK
	select HAVE_DMA_API_DEBUG
	select HAVE_DMA_ATTRS
	select HAVE_GENERIC_DMA_COHERENT
	select HAVE_GENERIC_HARDIRQS
	select HAVE_HW_BREAKPOINT if PERF_EVENTS
	select HAVE_IRQ_WORK
	select HAVE_MEMBLOCK
	select HAVE_PERF_EVENTS
	select HAVE_SPARSE_IRQ
	select IRQ_DOMAIN
	select NO_BOOTMEM
	select OF
	select OF_EARLY_FLATTREE
	select PERF_USE_VMALLOC
	select RTC_LIB
	select SPARSE_IRQ
	help
	  ARM 64-bit (AArch64) Linux support.

config 64BIT
	def_bool y

config ARCH_PHYS_ADDR_T_64BIT
	def_bool y

config MMU
	def_bool y

config NO_IOPORT
	def_bool y

config STACKTRACE_SUPPORT
	def_bool y

config LOCKDEP_SUPPORT
	def_bool y

config TRACE_IRQFLAGS_SUPPORT
	def_bool y

config GENERIC_LOCKBREAK
	def_bool y
	depends on SMP && PREEMPT

config RWSEM_GENERIC_SPINLOCK
	def_bool y

config GENERIC_HWEIGHT
	def_bool y

config GENERIC_CSUM
	def_bool y

config GENERIC_CALIBRATE_DELAY
	def_bool y

config ZONE_DMA32
	def_bool y

config ARCH_DMA_ADDR_T_64BIT
	def_bool y

config NEED_DMA_MAP_STATE
	def_bool y

config NEED_SG_DMA_LENGTH
	def_bool y

config SWIOTLB
	def_bool y

config IOMMU_HELPER
	def_bool SWIOTLB

source "init/Kconfig"

source "kernel/Kconfig.freezer"

menu "System Type"

endmenu

menu "Bus support"

config ARM_AMBA
	bool

endmenu

menu "Kernel Features"

source "kernel/time/Kconfig"

config ARM64_64K_PAGES
	bool "Enable 64KB pages support"
	help
	  This feature enables 64KB pages support (4KB by default)
	  allowing only two levels of page tables and faster TLB
	  look-up. AArch32 emulation is not available when this feature
	  is enabled.

config SMP
	bool "Symmetric Multi-Processing"
	select USE_GENERIC_SMP_HELPERS
	help
	  This enables support for systems with more than one CPU.  If
	  you say N here, the kernel will run on single and
	  multiprocessor machines, but will use only one CPU of a
	  multiprocessor machine. If you say Y here, the kernel will run
	  on many, but not all, single processor machines. On a single
	  processor machine, the kernel will run faster if you say N
	  here.

	  If you don't know what to do here, say N.

config NR_CPUS
	int "Maximum number of CPUs (2-32)"
	range 2 32
	depends on SMP
	default "4"

source kernel/Kconfig.preempt

config HZ
	int
	default 100

config ARCH_HAS_HOLES_MEMORYMODEL
	def_bool y if SPARSEMEM

config ARCH_SPARSEMEM_ENABLE
	def_bool y
	select SPARSEMEM_VMEMMAP_ENABLE

config ARCH_SPARSEMEM_DEFAULT
	def_bool ARCH_SPARSEMEM_ENABLE

config ARCH_SELECT_MEMORY_MODEL
	def_bool ARCH_SPARSEMEM_ENABLE

config HAVE_ARCH_PFN_VALID
	def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM

config HW_PERF_EVENTS
	bool "Enable hardware performance counter support for perf events"
	depends on PERF_EVENTS
	default y
	help
	  Enable hardware performance counter support for perf events. If
	  disabled, perf events will use software events only.

source "mm/Kconfig"

endmenu

menu "Boot options"

config CMDLINE
	string "Default kernel command string"
	default ""
	help
	  Provide a set of default command-line options at build time by
	  entering them here. As a minimum, you should specify the the
	  root device (e.g. root=/dev/nfs).

config CMDLINE_FORCE
	bool "Always use the default kernel command string"
	help
	  Always use the default kernel command string, even if the boot
	  loader passes other arguments to the kernel.
	  This is useful if you cannot or don't want to change the
	  command-line options your boot loader passes to the kernel.

endmenu

menu "Userspace binary formats"

source "fs/Kconfig.binfmt"

config COMPAT
	bool "Kernel support for 32-bit EL0"
	depends on !ARM64_64K_PAGES
	select COMPAT_BINFMT_ELF
	help
	  This option enables support for a 32-bit EL0 running under a 64-bit
	  kernel at EL1. AArch32-specific components such as system calls,
	  the user helper functions, VFP support and the ptrace interface are
	  handled appropriately by the kernel.

	  If you want to execute 32-bit userspace applications, say Y.

config SYSVIPC_COMPAT
	def_bool y
	depends on COMPAT && SYSVIPC

endmenu

source "net/Kconfig"

source "drivers/Kconfig"

source "fs/Kconfig"

source "arch/arm64/Kconfig.debug"

source "security/Kconfig"

source "crypto/Kconfig"

source "lib/Kconfig"
arch/arm64/Kconfig.debug (new file, 27 lines)
@@ -0,0 +1,27 @@
menu "Kernel hacking"

source "lib/Kconfig.debug"

config FRAME_POINTER
	bool
	default y

config DEBUG_ERRORS
	bool "Verbose kernel error messages"
	depends on DEBUG_KERNEL
	help
	  This option controls verbose debugging information which can be
	  printed when the kernel detects an internal error. This debugging
	  information is useful to kernel hackers when tracking down problems,
	  but mostly meaningless to other people. It's safe to say Y unless
	  you are concerned with the code size or don't want to see these
	  messages.

config DEBUG_STACK_USAGE
	bool "Enable stack utilization instrumentation"
	depends on DEBUG_KERNEL
	help
	  Enables the display of the minimum amount of free stack which each
	  task has ever had available in the sysrq-T output.

endmenu
arch/arm64/Makefile (new file, 71 lines)
@@ -0,0 +1,71 @@
#
# arch/arm64/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies.
#
# This file is subject to the terms and conditions of the GNU General Public
# License.  See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995-2001 by Russell King

LDFLAGS_vmlinux	:=-p --no-undefined -X
CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
OBJCOPYFLAGS	:=-O binary -R .note -R .note.gnu.build-id -R .comment -S
GZFLAGS		:=-9

LIBGCC		:= $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name)

KBUILD_DEFCONFIG := defconfig

KBUILD_CFLAGS	+= -mgeneral-regs-only
KBUILD_CPPFLAGS	+= -mlittle-endian
AS		+= -EL
LD		+= -EL

comma = ,

CHECKFLAGS	+= -D__aarch64__

# Default value
head-y		:= arch/arm64/kernel/head.o

# The byte offset of the kernel image in RAM from the start of RAM.
TEXT_OFFSET := 0x00080000

export	TEXT_OFFSET GZFLAGS

core-y		+= arch/arm64/kernel/ arch/arm64/mm/
libs-y		:= arch/arm64/lib/ $(libs-y)
libs-y		+= $(LIBGCC)

# Default target when executing plain make
KBUILD_IMAGE := Image.gz

all:	$(KBUILD_IMAGE)

boot := arch/arm64/boot

Image Image.gz: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/$@

zinstall install: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $@

%.dtb:
	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/$@

# We use MRPROPER_FILES and CLEAN_FILES now
archclean:
	$(Q)$(MAKE) $(clean)=$(boot)

define archhelp
  echo  '* Image.gz      - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
  echo  '  Image         - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
  echo  '  install       - Install uncompressed kernel'
  echo  '  zinstall      - Install compressed kernel'
  echo  '                  Install using (your) ~/bin/installkernel or'
  echo  '                  (distribution) /sbin/installkernel or'
  echo  '                  install to $$(INSTALL_PATH) and run lilo'
endef
arch/arm64/boot/.gitignore (new file, vendored, 2 lines)
@@ -0,0 +1,2 @@
Image
Image.gz
arch/arm64/boot/Makefile (new file, 36 lines)
@@ -0,0 +1,36 @@
#
# arch/arm64/boot/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies.
#
# This file is subject to the terms and conditions of the GNU General Public
# License.  See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 2012, ARM Ltd.
# Author: Will Deacon <will.deacon@arm.com>
#
# Based on the ia64 boot/Makefile.
#

targets := Image Image.gz

$(obj)/Image: vmlinux FORCE
	$(call if_changed,objcopy)

$(obj)/Image.gz: $(obj)/Image FORCE
	$(call if_changed,gzip)

$(obj)/%.dtb: $(src)/dts/%.dts
	$(call cmd,dtc)

install: $(obj)/Image
	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
	$(obj)/Image System.map "$(INSTALL_PATH)"

zinstall: $(obj)/Image.gz
	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
	$(obj)/Image.gz System.map "$(INSTALL_PATH)"

clean-files += *.dtb
arch/arm64/boot/install.sh (new file, 46 lines)
@@ -0,0 +1,46 @@
#!/bin/sh
#
# arch/arm64/boot/install.sh
#
# This file is subject to the terms and conditions of the GNU General Public
# License.  See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995 by Linus Torvalds
#
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
# Adapted from code in arch/i386/boot/install.sh by Russell King
#
# "make install" script for the AArch64 Linux port
#
# Arguments:
#   $1 - kernel version
#   $2 - kernel image file
#   $3 - kernel map file
#   $4 - default install path (blank if root directory)
#

# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi

if [ "$(basename $2)" = "Image.gz" ]; then
# Compressed install
  echo "Installing compressed kernel"
  base=vmlinuz
else
# Normal install
  echo "Installing normal kernel"
  base=vmlinux
fi

if [ -f $4/$base-$1 ]; then
  mv $4/$base-$1 $4/$base-$1.old
fi
cat $2 > $4/$base-$1

# Install system map file
if [ -f $4/System.map-$1 ]; then
  mv $4/System.map-$1 $4/System.map-$1.old
fi
cp $3 $4/System.map-$1
arch/arm64/configs/defconfig (new file, 85 lines)
@@ -0,0 +1,85 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_UTS_NS is not set
# CONFIG_IPC_NS is not set
# CONFIG_PID_NS is not set
# CONFIG_NET_NS is not set
CONFIG_SCHED_AUTOGROUP=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_SMP=y
CONFIG_PREEMPT_VOLUNTARY=y
CONFIG_CMDLINE="console=ttyAMA0"
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_COMPAT=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_LRO is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
# CONFIG_BLK_DEV is not set
CONFIG_SCSI=y
# CONFIG_SCSI_PROC_FS is not set
CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_NETDEVICES=y
CONFIG_MII=y
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
CONFIG_LEGACY_PTY_COUNT=16
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
CONFIG_FB=y
# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
# CONFIG_EXT3_FS_XATTR is not set
CONFIG_FUSE_FS=y
CONFIG_CUSE=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_KERNEL=y
# CONFIG_SCHED_DEBUG is not set
CONFIG_DEBUG_INFO=y
# CONFIG_FTRACE is not set
CONFIG_ATOMIC64_SELFTEST=y
CONFIG_DEBUG_ERRORS=y
arch/arm64/include/asm/Kbuild (new file, 51 lines)
@@ -0,0 +1,51 @@
include include/asm-generic/Kbuild.asm

header-y += hwcap.h

generic-y += bug.h
generic-y += bugs.h
generic-y += checksum.h
generic-y += cputime.h
generic-y += current.h
generic-y += delay.h
generic-y += div64.h
generic-y += dma.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += ftrace.h
generic-y += hw_irq.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += ipcbuf.h
generic-y += irq_regs.h
generic-y += kdebug.h
generic-y += kmap_types.h
generic-y += linkage.h
generic-y += local.h
generic-y += local64.h
generic-y += mman.h
generic-y += msgbuf.h
generic-y += mutex.h
generic-y += pci.h
generic-y += percpu.h
generic-y += poll.h
generic-y += posix_types.h
generic-y += resource.h
generic-y += scatterlist.h
generic-y += sections.h
generic-y += segment.h
generic-y += sembuf.h
generic-y += serial.h
generic-y += shmbuf.h
generic-y += sizes.h
generic-y += socket.h
generic-y += sockios.h
generic-y += string.h
generic-y += switch_to.h
generic-y += swab.h
generic-y += termbits.h
generic-y += termios.h
generic-y += topology.h
generic-y += types.h
generic-y += unaligned.h
generic-y += user.h
arch/arm64/include/asm/arm_generic.h (new file, 100 lines)
@@ -0,0 +1,100 @@
/*
 * arch/arm64/include/asm/arm_generic.h
 *
 * Copyright (C) 2012 ARM Ltd.
 * Author: Marc Zyngier <marc.zyngier@arm.com>
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_ARM_GENERIC_H
#define __ASM_ARM_GENERIC_H

#include <linux/clocksource.h>

#define ARCH_TIMER_CTRL_ENABLE		(1 << 0)
#define ARCH_TIMER_CTRL_IMASK		(1 << 1)
#define ARCH_TIMER_CTRL_ISTATUS		(1 << 2)

#define ARCH_TIMER_REG_CTRL		0
#define ARCH_TIMER_REG_FREQ		1
#define ARCH_TIMER_REG_TVAL		2

static inline void arch_timer_reg_write(int reg, u32 val)
{
	switch (reg) {
	case ARCH_TIMER_REG_CTRL:
		asm volatile("msr cntp_ctl_el0, %0" : : "r" (val));
		break;
	case ARCH_TIMER_REG_TVAL:
		asm volatile("msr cntp_tval_el0, %0" : : "r" (val));
		break;
	default:
		BUILD_BUG();
	}

	isb();
}

static inline u32 arch_timer_reg_read(int reg)
{
	u32 val;

	switch (reg) {
	case ARCH_TIMER_REG_CTRL:
		asm volatile("mrs %0, cntp_ctl_el0" : "=r" (val));
		break;
	case ARCH_TIMER_REG_FREQ:
		asm volatile("mrs %0, cntfrq_el0" : "=r" (val));
		break;
	case ARCH_TIMER_REG_TVAL:
		asm volatile("mrs %0, cntp_tval_el0" : "=r" (val));
		break;
	default:
		BUILD_BUG();
	}

	return val;
}

static inline void __cpuinit arch_counter_enable_user_access(void)
{
	u32 cntkctl;

	/* Disable user access to the timers and the virtual counter. */
	asm volatile("mrs %0, cntkctl_el1" : "=r" (cntkctl));
	cntkctl &= ~((3 << 8) | (1 << 1));

	/* Enable user access to the physical counter and frequency. */
	cntkctl |= 1;
	asm volatile("msr cntkctl_el1, %0" : : "r" (cntkctl));
}

static inline cycle_t arch_counter_get_cntpct(void)
{
	cycle_t cval;

	asm volatile("mrs %0, cntpct_el0" : "=r" (cval));

	return cval;
}

static inline cycle_t arch_counter_get_cntvct(void)
{
	cycle_t cval;

	asm volatile("mrs %0, cntvct_el0" : "=r" (cval));

	return cval;
}

#endif
arch/arm64/include/asm/asm-offsets.h (new file, 1 line)
@@ -0,0 +1 @@
#include <generated/asm-offsets.h>
arch/arm64/include/asm/assembler.h (new file, 109 lines)
@@ -0,0 +1,109 @@
/*
 * Based on arch/arm/include/asm/assembler.h
 *
 * Copyright (C) 1996-2000 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASSEMBLY__
#error "Only include this from assembly code"
#endif

#include <asm/ptrace.h>

/*
 * Stack pushing/popping (register pairs only). Equivalent to store decrement
 * before, load increment after.
 */
	.macro	push, xreg1, xreg2
	stp	\xreg1, \xreg2, [sp, #-16]!
	.endm

	.macro	pop, xreg1, xreg2
	ldp	\xreg1, \xreg2, [sp], #16
	.endm

/*
 * Enable and disable interrupts.
 */
	.macro	disable_irq
	msr	daifset, #2
	.endm

	.macro	enable_irq
	msr	daifclr, #2
	.endm

/*
 * Save/disable and restore interrupts.
 */
	.macro	save_and_disable_irqs, olddaif
	mrs	\olddaif, daif
	disable_irq
	.endm

	.macro	restore_irqs, olddaif
	msr	daif, \olddaif
	.endm

/*
 * Enable and disable debug exceptions.
 */
	.macro	disable_dbg
	msr	daifset, #8
	.endm

	.macro	enable_dbg
	msr	daifclr, #8
	.endm

	.macro	disable_step, tmp
	mrs	\tmp, mdscr_el1
	bic	\tmp, \tmp, #1
	msr	mdscr_el1, \tmp
	.endm

	.macro	enable_step, tmp
	mrs	\tmp, mdscr_el1
	orr	\tmp, \tmp, #1
	msr	mdscr_el1, \tmp
	.endm

	.macro	enable_dbg_if_not_stepping, tmp
	mrs	\tmp, mdscr_el1
	tbnz	\tmp, #1, 9990f
	enable_dbg
9990:
	.endm

/*
 * SMP data memory barrier
 */
	.macro	smp_dmb, opt
#ifdef CONFIG_SMP
	dmb	\opt
#endif
	.endm

#define USER(l, x...)				\
9999:	x;					\
	.section __ex_table,"a";		\
	.align	3;				\
	.quad	9999b,l;			\
	.previous

/*
 * Register aliases.
 */
lr	.req	x30		// link register
arch/arm64/include/asm/atomic.h (new file, 305 lines)
@@ -0,0 +1,305 @@
/*
 * Based on arch/arm/include/asm/atomic.h
 *
 * Copyright (C) 1996 Russell King.
 * Copyright (C) 2002 Deep Blue Solutions Ltd.
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_ATOMIC_H
#define __ASM_ATOMIC_H

#include <linux/compiler.h>
#include <linux/types.h>

#include <asm/barrier.h>
#include <asm/cmpxchg.h>

#define ATOMIC_INIT(i)	{ (i) }

#ifdef __KERNEL__

/*
 * On ARM, ordinary assignment (str instruction) doesn't clear the local
 * strex/ldrex monitor on some implementations. The reason we can use it for
 * atomic_set() is the clrex or dummy strex done on every exception return.
 */
#define atomic_read(v)	(*(volatile int *)&(v)->counter)
#define atomic_set(v,i)	(((v)->counter) = (i))

/*
 * AArch64 UP and SMP safe atomic ops.  We use load exclusive and
 * store exclusive to ensure that these are atomic.  We may loop
 * to ensure that the update happens.
 */
static inline void atomic_add(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	asm volatile("// atomic_add\n"
"1:	ldxr	%w0, [%3]\n"
"	add	%w0, %w0, %w4\n"
"	stxr	%w1, %w0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");
}

static inline int atomic_add_return(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	asm volatile("// atomic_add_return\n"
"1:	ldaxr	%w0, [%3]\n"
"	add	%w0, %w0, %w4\n"
"	stlxr	%w1, %w0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	return result;
}

static inline void atomic_sub(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	asm volatile("// atomic_sub\n"
"1:	ldxr	%w0, [%3]\n"
"	sub	%w0, %w0, %w4\n"
"	stxr	%w1, %w0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");
}

static inline int atomic_sub_return(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	asm volatile("// atomic_sub_return\n"
"1:	ldaxr	%w0, [%3]\n"
"	sub	%w0, %w0, %w4\n"
"	stlxr	%w1, %w0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	return result;
}

static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
{
	unsigned long tmp;
	int oldval;

	asm volatile("// atomic_cmpxchg\n"
"1:	ldaxr	%w1, [%3]\n"
"	cmp	%w1, %w4\n"
"	b.ne	2f\n"
"	stlxr	%w0, %w5, [%3]\n"
"	cbnz	%w0, 1b\n"
"2:"
	: "=&r" (tmp), "=&r" (oldval), "+o" (ptr->counter)
	: "r" (&ptr->counter), "Ir" (old), "r" (new)
	: "cc");

	return oldval;
}

static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
{
	unsigned long tmp, tmp2;

	asm volatile("// atomic_clear_mask\n"
"1:	ldxr	%0, [%3]\n"
"	bic	%0, %0, %4\n"
"	stxr	%w1, %0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (tmp), "=&r" (tmp2), "+o" (*addr)
	: "r" (addr), "Ir" (mask)
	: "cc");
}

#define atomic_xchg(v, new) (xchg(&((v)->counter), new))

static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
		c = old;
	return c;
}

#define atomic_inc(v)		atomic_add(1, v)
#define atomic_dec(v)		atomic_sub(1, v)

#define atomic_inc_and_test(v)	(atomic_add_return(1, v) == 0)
#define atomic_dec_and_test(v)	(atomic_sub_return(1, v) == 0)
#define atomic_inc_return(v)    (atomic_add_return(1, v))
#define atomic_dec_return(v)    (atomic_sub_return(1, v))
#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0)

#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0)

#define smp_mb__before_atomic_dec()	smp_mb()
#define smp_mb__after_atomic_dec()	smp_mb()
#define smp_mb__before_atomic_inc()	smp_mb()
#define smp_mb__after_atomic_inc()	smp_mb()

/*
 * 64-bit atomic operations.
 */
#define ATOMIC64_INIT(i) { (i) }

#define atomic64_read(v)	(*(volatile long long *)&(v)->counter)
#define atomic64_set(v,i)	(((v)->counter) = (i))

static inline void atomic64_add(u64 i, atomic64_t *v)
{
	long result;
	unsigned long tmp;

	asm volatile("// atomic64_add\n"
"1:	ldxr	%0, [%3]\n"
"	add	%0, %0, %4\n"
"	stxr	%w1, %0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");
}

static inline long atomic64_add_return(long i, atomic64_t *v)
{
	long result;
	unsigned long tmp;

	asm volatile("// atomic64_add_return\n"
"1:	ldaxr	%0, [%3]\n"
"	add	%0, %0, %4\n"
"	stlxr	%w1, %0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	return result;
}

static inline void atomic64_sub(u64 i, atomic64_t *v)
{
	long result;
	unsigned long tmp;

	asm volatile("// atomic64_sub\n"
"1:	ldxr	%0, [%3]\n"
"	sub	%0, %0, %4\n"
"	stxr	%w1, %0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");
}

static inline long atomic64_sub_return(long i, atomic64_t *v)
{
	long result;
	unsigned long tmp;

	asm volatile("// atomic64_sub_return\n"
"1:	ldaxr	%0, [%3]\n"
"	sub	%0, %0, %4\n"
"	stlxr	%w1, %0, [%3]\n"
"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	return result;
}

static inline long atomic64_cmpxchg(atomic64_t *ptr, long old, long new)
{
	long oldval;
	unsigned long res;

	asm volatile("// atomic64_cmpxchg\n"
"1:	ldaxr	%1, [%3]\n"
"	cmp	%1, %4\n"
"	b.ne	2f\n"
"	stlxr	%w0, %5, [%3]\n"
"	cbnz	%w0, 1b\n"
"2:"
	: "=&r" (res), "=&r" (oldval), "+o" (ptr->counter)
	: "r" (&ptr->counter), "Ir" (old), "r" (new)
	: "cc");

	return oldval;
}

#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))

static inline long atomic64_dec_if_positive(atomic64_t *v)
{
	long result;
	unsigned long tmp;

	asm volatile("// atomic64_dec_if_positive\n"
"1:	ldaxr	%0, [%3]\n"
"	subs	%0, %0, #1\n"
"	b.mi	2f\n"
"	stlxr	%w1, %0, [%3]\n"
"	cbnz	%w1, 1b\n"
"2:"
	: "=&r" (result), "=&r" (tmp), "+o" (v->counter)
	: "r" (&v->counter)
	: "cc");

	return result;
}

static inline int atomic64_add_unless(atomic64_t *v, long a, long u)
{
	long c, old;

	c = atomic64_read(v);
	while (c != u && (old = atomic64_cmpxchg((v), c, c + a)) != c)
		c = old;

	return c != u;
}

#define atomic64_add_negative(a, v)	(atomic64_add_return((a), (v)) < 0)
#define atomic64_inc(v)			atomic64_add(1LL, (v))
#define atomic64_inc_return(v)		atomic64_add_return(1LL, (v))
#define atomic64_inc_and_test(v)	(atomic64_inc_return(v) == 0)
#define atomic64_sub_and_test(a, v)	(atomic64_sub_return((a), (v)) == 0)
#define atomic64_dec(v)			atomic64_sub(1LL, (v))
#define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
#define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)

#endif
#endif
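The load-exclusive/store-exclusive retry idiom used throughout atomic.h, and the
cmpxchg loop wrapped by __atomic_add_unless, can be illustrated in portable
userspace C. A minimal standalone sketch using the GCC/Clang __atomic builtins --
this is not the kernel API, only the same compare-and-swap retry pattern:

    #include <stdio.h>

    /* Add 'a' to *v unless *v == u; returns the previously observed value,
     * mirroring the retry loop of __atomic_add_unless above. */
    static int add_unless(int *v, int a, int u)
    {
            int c = __atomic_load_n(v, __ATOMIC_RELAXED);

            while (c != u &&
                   !__atomic_compare_exchange_n(v, &c, c + a, 0,
                                                __ATOMIC_SEQ_CST, __ATOMIC_RELAXED))
                    ;   /* on failure, c is refreshed with the current value; retry */
            return c;
    }

    int main(void)
    {
            int counter = 5;

            printf("old=%d new=%d\n", add_unless(&counter, 1, 0), counter);
            return 0;
    }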
arch/arm64/include/asm/auxvec.h (new file, 22 lines)
@@ -0,0 +1,22 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_AUXVEC_H
#define __ASM_AUXVEC_H

/* vDSO location */
#define AT_SYSINFO_EHDR	33

#endif
arch/arm64/include/asm/barrier.h (new file, 52 lines)
@@ -0,0 +1,52 @@
/*
 * Based on arch/arm/include/asm/barrier.h
 *
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H

#ifndef __ASSEMBLY__

#define sev()		asm volatile("sev" : : : "memory")
#define wfe()		asm volatile("wfe" : : : "memory")
#define wfi()		asm volatile("wfi" : : : "memory")

#define isb()		asm volatile("isb" : : : "memory")
#define dsb()		asm volatile("dsb sy" : : : "memory")

#define mb()		dsb()
#define rmb()		asm volatile("dsb ld" : : : "memory")
#define wmb()		asm volatile("dsb st" : : : "memory")

#ifndef CONFIG_SMP
#define smp_mb()	barrier()
#define smp_rmb()	barrier()
#define smp_wmb()	barrier()
#else
#define smp_mb()	asm volatile("dmb ish" : : : "memory")
#define smp_rmb()	asm volatile("dmb ishld" : : : "memory")
#define smp_wmb()	asm volatile("dmb ishst" : : : "memory")
#endif

#define read_barrier_depends()		do { } while(0)
#define smp_read_barrier_depends()	do { } while(0)

#define set_mb(var, value)	do { var = value; smp_mb(); } while (0)
#define nop()		asm volatile("nop");

#endif	/* __ASSEMBLY__ */

#endif	/* __ASM_BARRIER_H */
arch/arm64/include/asm/bitops.h (new file, 53 lines)
@@ -0,0 +1,53 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_BITOPS_H
#define __ASM_BITOPS_H

#include <linux/compiler.h>

#include <asm/barrier.h>

/*
 * clear_bit may not imply a memory barrier
 */
#ifndef smp_mb__before_clear_bit
#define smp_mb__before_clear_bit()	smp_mb()
#define smp_mb__after_clear_bit()	smp_mb()
#endif

#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly
#endif

#include <asm-generic/bitops/builtin-__ffs.h>
#include <asm-generic/bitops/builtin-ffs.h>
#include <asm-generic/bitops/builtin-__fls.h>
#include <asm-generic/bitops/builtin-fls.h>

#include <asm-generic/bitops/ffz.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/find.h>

#include <asm-generic/bitops/sched.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/lock.h>

#include <asm-generic/bitops/atomic.h>
#include <asm-generic/bitops/non-atomic.h>
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>

#endif /* __ASM_BITOPS_H */
arch/arm64/include/asm/bitsperlong.h (new file, 23 lines)
@@ -0,0 +1,23 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_BITSPERLONG_H
#define __ASM_BITSPERLONG_H

#define __BITS_PER_LONG 64

#include <asm-generic/bitsperlong.h>

#endif	/* __ASM_BITSPERLONG_H */
arch/arm64/include/asm/byteorder.h (new file, 21 lines)
@@ -0,0 +1,21 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_BYTEORDER_H
#define __ASM_BYTEORDER_H

#include <linux/byteorder/little_endian.h>

#endif	/* __ASM_BYTEORDER_H */
arch/arm64/include/asm/cache.h (new file, 32 lines)
@@ -0,0 +1,32 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_CACHE_H
#define __ASM_CACHE_H

#define L1_CACHE_SHIFT		6
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

/*
 * Memory returned by kmalloc() may be used for DMA, so we must make
 * sure that all such allocations are cache aligned. Otherwise,
 * unrelated code may cause parts of the buffer to be read into the
 * cache before the transfer is done, causing old data to be seen by
 * the CPU.
 */
#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
#define ARCH_SLAB_MINALIGN	8

#endif
arch/arm64/include/asm/cacheflush.h (new file, 148 lines)
@@ -0,0 +1,148 @@
/*
 * Based on arch/arm/include/asm/cacheflush.h
 *
 * Copyright (C) 1999-2002 Russell King.
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_CACHEFLUSH_H
#define __ASM_CACHEFLUSH_H

#include <linux/mm.h>

/*
 * This flag is used to indicate that the page pointed to by a pte is clean
 * and does not require cleaning before returning it to the user.
 */
#define PG_dcache_clean PG_arch_1

/*
 *	MM Cache Management
 *	===================
 *
 *	The arch/arm64/mm/cache.S implements these methods.
 *
 *	Start addresses are inclusive and end addresses are exclusive; start
 *	addresses should be rounded down, end addresses up.
 *
 *	See Documentation/cachetlb.txt for more information. Please note that
 *	the implementation assumes non-aliasing VIPT D-cache and (aliasing)
 *	VIPT or ASID-tagged VIVT I-cache.
 *
 *	flush_cache_all()
 *
 *		Unconditionally clean and invalidate the entire cache.
 *
 *	flush_cache_mm(mm)
 *
 *		Clean and invalidate all user space cache entries
 *		before a change of page tables.
 *
 *	flush_icache_range(start, end)
 *
 *		Ensure coherency between the I-cache and the D-cache in the
 *		region described by start, end.
 *		- start  - virtual start address
 *		- end    - virtual end address
 *
 *	__flush_cache_user_range(start, end)
 *
 *		Ensure coherency between the I-cache and the D-cache in the
 *		region described by start, end.
 *		- start  - virtual start address
 *		- end    - virtual end address
 *
 *	__flush_dcache_area(kaddr, size)
 *
 *		Ensure that the data held in page is written back.
 *		- kaddr  - page address
 *		- size   - region size
 */
extern void flush_cache_all(void);
extern void flush_cache_mm(struct mm_struct *mm);
extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn);
extern void flush_icache_range(unsigned long start, unsigned long end);
extern void __flush_dcache_area(void *addr, size_t len);
extern void __flush_cache_user_range(unsigned long start, unsigned long end);

/*
 * Copy user data from/to a page which is mapped into a different
 * processes address space.  Really, we want to allow our "user
 * space" model to handle this.
 */
extern void copy_to_user_page(struct vm_area_struct *, struct page *,
	unsigned long, void *, const void *, unsigned long);
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
	do {							\
		memcpy(dst, src, len);				\
	} while (0)

#define flush_cache_dup_mm(mm) flush_cache_mm(mm)

/*
 * flush_dcache_page is used when the kernel has written to the page
 * cache page at virtual address page->virtual.
 *
 * If this page isn't mapped (ie, page_mapping == NULL), or it might
 * have userspace mappings, then we _must_ always clean + invalidate
 * the dcache entries associated with the kernel mapping.
 *
 * Otherwise we can defer the operation, and clean the cache when we are
 * about to change to user space.  This is the same method as used on SPARC64.
 * See update_mmu_cache for the user space part.
 */
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *);

static inline void __flush_icache_all(void)
{
	asm("ic	ialluis");
}

#define flush_dcache_mmap_lock(mapping) \
	spin_lock_irq(&(mapping)->tree_lock)
#define flush_dcache_mmap_unlock(mapping) \
	spin_unlock_irq(&(mapping)->tree_lock)

#define flush_icache_user_range(vma,page,addr,len) \
	flush_dcache_page(page)

/*
 * We don't appear to need to do anything here.  In fact, if we did, we'd
 * duplicate cache flushing elsewhere performed by flush_dcache_page().
 */
#define flush_icache_page(vma,page)	do { } while (0)

/*
 * flush_cache_vmap() is used when creating mappings (eg, via vmap,
 * vmalloc, ioremap etc) in kernel space for pages.  On non-VIPT
 * caches, since the direct-mappings of these pages may contain cached
 * data, we need to do a full cache flush to ensure that writebacks
 * don't corrupt data placed into these pages via the new mappings.
 */
static inline void flush_cache_vmap(unsigned long start, unsigned long end)
{
	/*
	 * set_pte_at() called from vmap_pte_range() does not
	 * have a DSB after cleaning the cache line.
	 */
	dsb();
}

static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
{
}

#endif
48
arch/arm64/include/asm/cachetype.h
Normal file
48
arch/arm64/include/asm/cachetype.h
Normal file
@ -0,0 +1,48 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_CACHETYPE_H
|
||||
#define __ASM_CACHETYPE_H
|
||||
|
||||
#include <asm/cputype.h>
|
||||
|
||||
#define CTR_L1IP_SHIFT 14
|
||||
#define CTR_L1IP_MASK 3
|
||||
|
||||
#define ICACHE_POLICY_RESERVED 0
|
||||
#define ICACHE_POLICY_AIVIVT 1
|
||||
#define ICACHE_POLICY_VIPT 2
|
||||
#define ICACHE_POLICY_PIPT 3
|
||||
|
||||
static inline u32 icache_policy(void)
|
||||
{
|
||||
return (read_cpuid_cachetype() >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK;
|
||||
}
|
||||
|
||||
/*
|
||||
* Whilst the D-side always behaves as PIPT on AArch64, aliasing is
|
||||
* permitted in the I-cache.
|
||||
*/
|
||||
static inline int icache_is_aliasing(void)
|
||||
{
|
||||
return icache_policy() != ICACHE_POLICY_PIPT;
|
||||
}
|
||||
|
||||
static inline int icache_is_aivivt(void)
|
||||
{
|
||||
return icache_policy() == ICACHE_POLICY_AIVIVT;
|
||||
}
|
||||
|
||||
#endif /* __ASM_CACHETYPE_H */
|
173
arch/arm64/include/asm/cmpxchg.h
Normal file
173
arch/arm64/include/asm/cmpxchg.h
Normal file
@ -0,0 +1,173 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/cmpxchg.h
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_CMPXCHG_H
|
||||
#define __ASM_CMPXCHG_H
|
||||
|
||||
#include <linux/bug.h>
|
||||
|
||||
#include <asm/barrier.h>
|
||||
|
||||
static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
|
||||
{
|
||||
unsigned long ret, tmp;
|
||||
|
||||
switch (size) {
|
||||
case 1:
|
||||
asm volatile("// __xchg1\n"
|
||||
"1: ldaxrb %w0, [%3]\n"
|
||||
" stlxrb %w1, %w2, [%3]\n"
|
||||
" cbnz %w1, 1b\n"
|
||||
: "=&r" (ret), "=&r" (tmp)
|
||||
: "r" (x), "r" (ptr)
|
||||
: "memory", "cc");
|
||||
break;
|
||||
case 2:
|
||||
asm volatile("// __xchg2\n"
|
||||
"1: ldaxrh %w0, [%3]\n"
|
||||
" stlxrh %w1, %w2, [%3]\n"
|
||||
" cbnz %w1, 1b\n"
|
||||
: "=&r" (ret), "=&r" (tmp)
|
||||
: "r" (x), "r" (ptr)
|
||||
: "memory", "cc");
|
||||
break;
|
||||
case 4:
|
||||
asm volatile("// __xchg4\n"
|
||||
"1: ldaxr %w0, [%3]\n"
|
||||
" stlxr %w1, %w2, [%3]\n"
|
||||
" cbnz %w1, 1b\n"
|
||||
: "=&r" (ret), "=&r" (tmp)
|
||||
: "r" (x), "r" (ptr)
|
||||
: "memory", "cc");
|
||||
break;
|
||||
case 8:
|
||||
asm volatile("// __xchg8\n"
|
||||
"1: ldaxr %0, [%3]\n"
|
||||
" stlxr %w1, %2, [%3]\n"
|
||||
" cbnz %w1, 1b\n"
|
||||
: "=&r" (ret), "=&r" (tmp)
|
||||
: "r" (x), "r" (ptr)
|
||||
: "memory", "cc");
|
||||
break;
|
||||
default:
|
||||
BUILD_BUG();
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
#define xchg(ptr,x) \
|
||||
((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
|
||||
|
||||
static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
|
||||
unsigned long new, int size)
|
||||
{
|
||||
unsigned long oldval = 0, res;
|
||||
|
||||
switch (size) {
|
||||
case 1:
|
||||
do {
|
||||
asm volatile("// __cmpxchg1\n"
|
||||
" ldxrb %w1, [%2]\n"
|
||||
" mov %w0, #0\n"
|
||||
" cmp %w1, %w3\n"
|
||||
" b.ne 1f\n"
|
||||
" stxrb %w0, %w4, [%2]\n"
|
||||
"1:\n"
|
||||
: "=&r" (res), "=&r" (oldval)
|
||||
: "r" (ptr), "Ir" (old), "r" (new)
|
||||
: "cc");
|
||||
} while (res);
|
||||
break;
|
||||
|
||||
case 2:
|
||||
do {
|
||||
asm volatile("// __cmpxchg2\n"
|
||||
" ldxrh %w1, [%2]\n"
|
||||
" mov %w0, #0\n"
|
||||
" cmp %w1, %w3\n"
|
||||
" b.ne 1f\n"
|
||||
" stxrh %w0, %w4, [%2]\n"
|
||||
"1:\n"
|
||||
: "=&r" (res), "=&r" (oldval)
|
||||
: "r" (ptr), "Ir" (old), "r" (new)
|
||||
: "memory", "cc");
|
||||
} while (res);
|
||||
break;
|
||||
|
||||
case 4:
|
||||
do {
|
||||
asm volatile("// __cmpxchg4\n"
|
||||
" ldxr %w1, [%2]\n"
|
||||
" mov %w0, #0\n"
|
||||
" cmp %w1, %w3\n"
|
||||
" b.ne 1f\n"
|
||||
" stxr %w0, %w4, [%2]\n"
|
||||
"1:\n"
|
||||
: "=&r" (res), "=&r" (oldval)
|
||||
: "r" (ptr), "Ir" (old), "r" (new)
|
||||
: "cc");
|
||||
} while (res);
|
||||
break;
|
||||
|
||||
case 8:
|
||||
do {
|
||||
asm volatile("// __cmpxchg8\n"
|
||||
" ldxr %1, [%2]\n"
|
||||
" mov %w0, #0\n"
|
||||
" cmp %1, %3\n"
|
||||
" b.ne 1f\n"
|
||||
" stxr %w0, %4, [%2]\n"
|
||||
"1:\n"
|
||||
: "=&r" (res), "=&r" (oldval)
|
||||
: "r" (ptr), "Ir" (old), "r" (new)
|
||||
: "cc");
|
||||
} while (res);
|
||||
break;
|
||||
|
||||
default:
|
||||
BUILD_BUG();
|
||||
}
|
||||
|
||||
return oldval;
|
||||
}
|
||||
|
||||
static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
|
||||
unsigned long new, int size)
|
||||
{
|
||||
unsigned long ret;
|
||||
|
||||
smp_mb();
|
||||
ret = __cmpxchg(ptr, old, new, size);
|
||||
smp_mb();
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
#define cmpxchg(ptr,o,n) \
|
||||
((__typeof__(*(ptr)))__cmpxchg_mb((ptr), \
|
||||
(unsigned long)(o), \
|
||||
(unsigned long)(n), \
|
||||
sizeof(*(ptr))))
|
||||
|
||||
#define cmpxchg_local(ptr,o,n) \
|
||||
((__typeof__(*(ptr)))__cmpxchg((ptr), \
|
||||
(unsigned long)(o), \
|
||||
(unsigned long)(n), \
|
||||
sizeof(*(ptr))))
|
||||
|
||||
#endif /* __ASM_CMPXCHG_H */
|
242
arch/arm64/include/asm/compat.h
Normal file
242
arch/arm64/include/asm/compat.h
Normal file
@ -0,0 +1,242 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_COMPAT_H
|
||||
#define __ASM_COMPAT_H
|
||||
#ifdef __KERNEL__
|
||||
#ifdef CONFIG_COMPAT
|
||||
|
||||
/*
|
||||
* Architecture specific compatibility types
|
||||
*/
|
||||
#include <linux/types.h>
|
||||
#include <linux/sched.h>
|
||||
|
||||
#define COMPAT_USER_HZ 100
|
||||
#define COMPAT_UTS_MACHINE "armv8l\0\0"
|
||||
|
||||
typedef u32 compat_size_t;
|
||||
typedef s32 compat_ssize_t;
|
||||
typedef s32 compat_time_t;
|
||||
typedef s32 compat_clock_t;
|
||||
typedef s32 compat_pid_t;
|
||||
typedef u32 __compat_uid_t;
|
||||
typedef u32 __compat_gid_t;
|
||||
typedef u32 __compat_uid32_t;
|
||||
typedef u32 __compat_gid32_t;
|
||||
typedef u32 compat_mode_t;
|
||||
typedef u32 compat_ino_t;
|
||||
typedef u32 compat_dev_t;
|
||||
typedef s32 compat_off_t;
|
||||
typedef s64 compat_loff_t;
|
||||
typedef s16 compat_nlink_t;
|
||||
typedef u16 compat_ipc_pid_t;
|
||||
typedef s32 compat_daddr_t;
|
||||
typedef u32 compat_caddr_t;
|
||||
typedef __kernel_fsid_t compat_fsid_t;
|
||||
typedef s32 compat_key_t;
|
||||
typedef s32 compat_timer_t;
|
||||
|
||||
typedef s32 compat_int_t;
|
||||
typedef s32 compat_long_t;
|
||||
typedef s64 compat_s64;
|
||||
typedef u32 compat_uint_t;
|
||||
typedef u32 compat_ulong_t;
|
||||
typedef u64 compat_u64;
|
||||
|
||||
struct compat_timespec {
|
||||
compat_time_t tv_sec;
|
||||
s32 tv_nsec;
|
||||
};
|
||||
|
||||
struct compat_timeval {
|
||||
compat_time_t tv_sec;
|
||||
s32 tv_usec;
|
||||
};
|
||||
|
||||
struct compat_stat {
|
||||
compat_dev_t st_dev;
|
||||
compat_ino_t st_ino;
|
||||
compat_mode_t st_mode;
|
||||
compat_nlink_t st_nlink;
|
||||
__compat_uid32_t st_uid;
|
||||
__compat_gid32_t st_gid;
|
||||
compat_dev_t st_rdev;
|
||||
compat_off_t st_size;
|
||||
compat_off_t st_blksize;
|
||||
compat_off_t st_blocks;
|
||||
compat_time_t st_atime;
|
||||
u32 st_atime_nsec;
|
||||
compat_time_t st_mtime;
|
||||
u32 st_mtime_nsec;
|
||||
compat_time_t st_ctime;
|
||||
u32 st_ctime_nsec;
|
||||
u32 __unused4[2];
|
||||
};
|
||||
|
||||
struct compat_flock {
|
||||
short l_type;
|
||||
short l_whence;
|
||||
compat_off_t l_start;
|
||||
compat_off_t l_len;
|
||||
compat_pid_t l_pid;
|
||||
};
|
||||
|
||||
#define F_GETLK64 12 /* using 'struct flock64' */
|
||||
#define F_SETLK64 13
|
||||
#define F_SETLKW64 14
|
||||
|
||||
struct compat_flock64 {
|
||||
short l_type;
|
||||
short l_whence;
|
||||
compat_loff_t l_start;
|
||||
compat_loff_t l_len;
|
||||
compat_pid_t l_pid;
|
||||
};
|
||||
|
||||
struct compat_statfs {
|
||||
int f_type;
|
||||
int f_bsize;
|
||||
int f_blocks;
|
||||
int f_bfree;
|
||||
int f_bavail;
|
||||
int f_files;
|
||||
int f_ffree;
|
||||
compat_fsid_t f_fsid;
|
||||
int f_namelen; /* SunOS ignores this field. */
|
||||
int f_frsize;
|
||||
int f_flags;
|
||||
int f_spare[4];
|
||||
};
|
||||
|
||||
#define COMPAT_RLIM_INFINITY 0xffffffff
|
||||
|
||||
typedef u32 compat_old_sigset_t;
|
||||
|
||||
#define _COMPAT_NSIG 64
|
||||
#define _COMPAT_NSIG_BPW 32
|
||||
|
||||
typedef u32 compat_sigset_word;
|
||||
|
||||
#define COMPAT_OFF_T_MAX 0x7fffffff
|
||||
#define COMPAT_LOFF_T_MAX 0x7fffffffffffffffL
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
typedef u32 compat_uptr_t;
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
return (void __user *)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline void __user *arch_compat_alloc_user_space(long len)
|
||||
{
|
||||
struct pt_regs *regs = task_pt_regs(current);
|
||||
return (void __user *)regs->compat_sp - len;
|
||||
}
|
||||
|
||||
struct compat_ipc64_perm {
|
||||
compat_key_t key;
|
||||
__compat_uid32_t uid;
|
||||
__compat_gid32_t gid;
|
||||
__compat_uid32_t cuid;
|
||||
__compat_gid32_t cgid;
|
||||
unsigned short mode;
|
||||
unsigned short __pad1;
|
||||
unsigned short seq;
|
||||
unsigned short __pad2;
|
||||
compat_ulong_t unused1;
|
||||
compat_ulong_t unused2;
|
||||
};
|
||||
|
||||
struct compat_semid64_ds {
|
||||
struct compat_ipc64_perm sem_perm;
|
||||
compat_time_t sem_otime;
|
||||
compat_ulong_t __unused1;
|
||||
compat_time_t sem_ctime;
|
||||
compat_ulong_t __unused2;
|
||||
compat_ulong_t sem_nsems;
|
||||
compat_ulong_t __unused3;
|
||||
compat_ulong_t __unused4;
|
||||
};
|
||||
|
||||
struct compat_msqid64_ds {
|
||||
struct compat_ipc64_perm msg_perm;
|
||||
compat_time_t msg_stime;
|
||||
compat_ulong_t __unused1;
|
||||
compat_time_t msg_rtime;
|
||||
compat_ulong_t __unused2;
|
||||
compat_time_t msg_ctime;
|
||||
compat_ulong_t __unused3;
|
||||
compat_ulong_t msg_cbytes;
|
||||
compat_ulong_t msg_qnum;
|
||||
compat_ulong_t msg_qbytes;
|
||||
compat_pid_t msg_lspid;
|
||||
compat_pid_t msg_lrpid;
|
||||
compat_ulong_t __unused4;
|
||||
compat_ulong_t __unused5;
|
||||
};
|
||||
|
||||
struct compat_shmid64_ds {
|
||||
struct compat_ipc64_perm shm_perm;
|
||||
compat_size_t shm_segsz;
|
||||
compat_time_t shm_atime;
|
||||
compat_ulong_t __unused1;
|
||||
compat_time_t shm_dtime;
|
||||
compat_ulong_t __unused2;
|
||||
compat_time_t shm_ctime;
|
||||
compat_ulong_t __unused3;
|
||||
compat_pid_t shm_cpid;
|
||||
compat_pid_t shm_lpid;
|
||||
compat_ulong_t shm_nattch;
|
||||
compat_ulong_t __unused4;
|
||||
compat_ulong_t __unused5;
|
||||
};
|
||||
|
||||
static inline int is_compat_task(void)
|
||||
{
|
||||
return test_thread_flag(TIF_32BIT);
|
||||
}
|
||||
|
||||
static inline int is_compat_thread(struct thread_info *thread)
|
||||
{
|
||||
return test_ti_thread_flag(thread, TIF_32BIT);
|
||||
}
|
||||
|
||||
#else /* !CONFIG_COMPAT */
|
||||
|
||||
static inline int is_compat_task(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int is_compat_thread(struct thread_info *thread)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
#endif /* CONFIG_COMPAT */
|
||||
#endif /* __KERNEL__ */
|
||||
#endif /* __ASM_COMPAT_H */
|
30
arch/arm64/include/asm/compiler.h
Normal file
30
arch/arm64/include/asm/compiler.h
Normal file
@ -0,0 +1,30 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/compiler.h
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_COMPILER_H
|
||||
#define __ASM_COMPILER_H
|
||||
|
||||
/*
|
||||
* This is used to ensure the compiler did actually allocate the register we
|
||||
* asked it for some inline assembly sequences. Apparently we can't trust the
|
||||
* compiler from one version to another so a bit of paranoia won't hurt. This
|
||||
* string is meant to be concatenated with the inline asm string and will
|
||||
* cause compilation to stop on mismatch. (for details, see gcc PR 15089)
|
||||
*/
|
||||
#define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
|
||||
|
||||
#endif /* __ASM_COMPILER_H */
|
30
arch/arm64/include/asm/cputable.h
Normal file
30
arch/arm64/include/asm/cputable.h
Normal file
@ -0,0 +1,30 @@
|
||||
/*
|
||||
* arch/arm64/include/asm/cputable.h
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software: you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_CPUTABLE_H
|
||||
#define __ASM_CPUTABLE_H
|
||||
|
||||
struct cpu_info {
|
||||
unsigned int cpu_id_val;
|
||||
unsigned int cpu_id_mask;
|
||||
const char *cpu_name;
|
||||
unsigned long (*cpu_setup)(void);
|
||||
};
|
||||
|
||||
extern struct cpu_info *lookup_processor_type(unsigned int);
|
||||
|
||||
#endif
|
49
arch/arm64/include/asm/cputype.h
Normal file
49
arch/arm64/include/asm/cputype.h
Normal file
@ -0,0 +1,49 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_CPUTYPE_H
|
||||
#define __ASM_CPUTYPE_H
|
||||
|
||||
#define ID_MIDR_EL1 "midr_el1"
|
||||
#define ID_CTR_EL0 "ctr_el0"
|
||||
|
||||
#define ID_AA64PFR0_EL1 "id_aa64pfr0_el1"
|
||||
#define ID_AA64DFR0_EL1 "id_aa64dfr0_el1"
|
||||
#define ID_AA64AFR0_EL1 "id_aa64afr0_el1"
|
||||
#define ID_AA64ISAR0_EL1 "id_aa64isar0_el1"
|
||||
#define ID_AA64MMFR0_EL1 "id_aa64mmfr0_el1"
|
||||
|
||||
#define read_cpuid(reg) ({ \
|
||||
u64 __val; \
|
||||
asm("mrs %0, " reg : "=r" (__val)); \
|
||||
__val; \
|
||||
})
|
||||
|
||||
/*
|
||||
* The CPU ID never changes at run time, so we might as well tell the
|
||||
* compiler that it's constant. Use this function to read the CPU ID
|
||||
* rather than directly reading processor_id or read_cpuid() directly.
|
||||
*/
|
||||
static inline u32 __attribute_const__ read_cpuid_id(void)
|
||||
{
|
||||
return read_cpuid(ID_MIDR_EL1);
|
||||
}
|
||||
|
||||
static inline u32 __attribute_const__ read_cpuid_cachetype(void)
|
||||
{
|
||||
return read_cpuid(ID_CTR_EL0);
|
||||
}
|
||||
|
||||
#endif
|
88
arch/arm64/include/asm/debug-monitors.h
Normal file
88
arch/arm64/include/asm/debug-monitors.h
Normal file
@ -0,0 +1,88 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_DEBUG_MONITORS_H
|
||||
#define __ASM_DEBUG_MONITORS_H
|
||||
|
||||
#ifdef __KERNEL__
|
||||
|
||||
#define DBG_ESR_EVT(x) (((x) >> 27) & 0x7)
|
||||
|
||||
/* AArch64 */
|
||||
#define DBG_ESR_EVT_HWBP 0x0
|
||||
#define DBG_ESR_EVT_HWSS 0x1
|
||||
#define DBG_ESR_EVT_HWWP 0x2
|
||||
#define DBG_ESR_EVT_BRK 0x6
|
||||
|
||||
enum debug_el {
|
||||
DBG_ACTIVE_EL0 = 0,
|
||||
DBG_ACTIVE_EL1,
|
||||
};
|
||||
|
||||
/* AArch32 */
|
||||
#define DBG_ESR_EVT_BKPT 0x4
|
||||
#define DBG_ESR_EVT_VECC 0x5
|
||||
|
||||
#define AARCH32_BREAK_ARM 0x07f001f0
|
||||
#define AARCH32_BREAK_THUMB 0xde01
|
||||
#define AARCH32_BREAK_THUMB2_LO 0xf7f0
|
||||
#define AARCH32_BREAK_THUMB2_HI 0xa000
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
struct task_struct;
|
||||
|
||||
#define local_dbg_save(flags) \
|
||||
do { \
|
||||
typecheck(unsigned long, flags); \
|
||||
asm volatile( \
|
||||
"mrs %0, daif // local_dbg_save\n" \
|
||||
"msr daifset, #8" \
|
||||
: "=r" (flags) : : "memory"); \
|
||||
} while (0)
|
||||
|
||||
#define local_dbg_restore(flags) \
|
||||
do { \
|
||||
typecheck(unsigned long, flags); \
|
||||
asm volatile( \
|
||||
"msr daif, %0 // local_dbg_restore\n" \
|
||||
: : "r" (flags) : "memory"); \
|
||||
} while (0)
|
||||
|
||||
#define DBG_ARCH_ID_RESERVED 0 /* In case of ptrace ABI updates. */
|
||||
|
||||
u8 debug_monitors_arch(void);
|
||||
|
||||
void enable_debug_monitors(enum debug_el el);
|
||||
void disable_debug_monitors(enum debug_el el);
|
||||
|
||||
void user_rewind_single_step(struct task_struct *task);
|
||||
void user_fastforward_single_step(struct task_struct *task);
|
||||
|
||||
void kernel_enable_single_step(struct pt_regs *regs);
|
||||
void kernel_disable_single_step(void);
|
||||
int kernel_active_single_step(void);
|
||||
|
||||
#ifdef CONFIG_HAVE_HW_BREAKPOINT
|
||||
int reinstall_suspended_bps(struct pt_regs *regs);
|
||||
#else
|
||||
static inline int reinstall_suspended_bps(struct pt_regs *regs)
|
||||
{
|
||||
return -ENODEV;
|
||||
}
|
||||
#endif
|
||||
|
||||
#endif /* __ASSEMBLY */
|
||||
#endif /* __KERNEL__ */
|
||||
#endif /* __ASM_DEBUG_MONITORS_H */
|
26
arch/arm64/include/asm/device.h
Normal file
26
arch/arm64/include/asm/device.h
Normal file
@ -0,0 +1,26 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_DEVICE_H
|
||||
#define __ASM_DEVICE_H
|
||||
|
||||
struct dev_archdata {
|
||||
struct dma_map_ops *dma_ops;
|
||||
};
|
||||
|
||||
struct pdev_archdata {
|
||||
};
|
||||
|
||||
#endif
|
124
arch/arm64/include/asm/dma-mapping.h
Normal file
124
arch/arm64/include/asm/dma-mapping.h
Normal file
@ -0,0 +1,124 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_DMA_MAPPING_H
|
||||
#define __ASM_DMA_MAPPING_H
|
||||
|
||||
#ifdef __KERNEL__
|
||||
|
||||
#include <linux/types.h>
|
||||
#include <linux/vmalloc.h>
|
||||
|
||||
#include <asm-generic/dma-coherent.h>
|
||||
|
||||
#define ARCH_HAS_DMA_GET_REQUIRED_MASK
|
||||
|
||||
extern struct dma_map_ops *dma_ops;
|
||||
|
||||
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
|
||||
{
|
||||
if (unlikely(!dev) || !dev->archdata.dma_ops)
|
||||
return dma_ops;
|
||||
else
|
||||
return dev->archdata.dma_ops;
|
||||
}
|
||||
|
||||
#include <asm-generic/dma-mapping-common.h>
|
||||
|
||||
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
|
||||
{
|
||||
return (dma_addr_t)paddr;
|
||||
}
|
||||
|
||||
static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
|
||||
{
|
||||
return (phys_addr_t)dev_addr;
|
||||
}
|
||||
|
||||
static inline int dma_mapping_error(struct device *dev, dma_addr_t dev_addr)
|
||||
{
|
||||
struct dma_map_ops *ops = get_dma_ops(dev);
|
||||
return ops->mapping_error(dev, dev_addr);
|
||||
}
|
||||
|
||||
static inline int dma_supported(struct device *dev, u64 mask)
|
||||
{
|
||||
struct dma_map_ops *ops = get_dma_ops(dev);
|
||||
return ops->dma_supported(dev, mask);
|
||||
}
|
||||
|
||||
static inline int dma_set_mask(struct device *dev, u64 mask)
|
||||
{
|
||||
if (!dev->dma_mask || !dma_supported(dev, mask))
|
||||
return -EIO;
|
||||
*dev->dma_mask = mask;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
|
||||
{
|
||||
if (!dev->dma_mask)
|
||||
return 0;
|
||||
|
||||
return addr + size - 1 <= *dev->dma_mask;
|
||||
}
|
||||
|
||||
static inline void dma_mark_clean(void *addr, size_t size)
|
||||
{
|
||||
}
|
||||
|
||||
static inline void *dma_alloc_coherent(struct device *dev, size_t size,
|
||||
dma_addr_t *dma_handle, gfp_t flags)
|
||||
{
|
||||
struct dma_map_ops *ops = get_dma_ops(dev);
|
||||
void *vaddr;
|
||||
|
||||
if (dma_alloc_from_coherent(dev, size, dma_handle, &vaddr))
|
||||
return vaddr;
|
||||
|
||||
vaddr = ops->alloc(dev, size, dma_handle, flags, NULL);
|
||||
debug_dma_alloc_coherent(dev, size, *dma_handle, vaddr);
|
||||
return vaddr;
|
||||
}
|
||||
|
||||
static inline void dma_free_coherent(struct device *dev, size_t size,
|
||||
void *vaddr, dma_addr_t dev_addr)
|
||||
{
|
||||
struct dma_map_ops *ops = get_dma_ops(dev);
|
||||
|
||||
if (dma_release_from_coherent(dev, get_order(size), vaddr))
|
||||
return;
|
||||
|
||||
debug_dma_free_coherent(dev, size, vaddr, dev_addr);
|
||||
ops->free(dev, size, vaddr, dev_addr, NULL);
|
||||
}
|
||||
|
||||
/*
|
||||
* There is no dma_cache_sync() implementation, so just return NULL here.
|
||||
*/
|
||||
static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
|
||||
dma_addr_t *handle, gfp_t flags)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static inline void dma_free_noncoherent(struct device *dev, size_t size,
|
||||
void *cpu_addr, dma_addr_t handle)
|
||||
{
|
||||
}
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
#endif /* __ASM_DMA_MAPPING_H */
|
179
arch/arm64/include/asm/elf.h
Normal file
179
arch/arm64/include/asm/elf.h
Normal file
@ -0,0 +1,179 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_ELF_H
|
||||
#define __ASM_ELF_H
|
||||
|
||||
#include <asm/hwcap.h>
|
||||
|
||||
/*
|
||||
* ELF register definitions..
|
||||
*/
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/user.h>
|
||||
|
||||
typedef unsigned long elf_greg_t;
|
||||
typedef unsigned long elf_freg_t[3];
|
||||
|
||||
#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
|
||||
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
|
||||
|
||||
typedef struct user_fp elf_fpregset_t;
|
||||
|
||||
#define EM_AARCH64 183
|
||||
|
||||
/*
|
||||
* AArch64 static relocation types.
|
||||
*/
|
||||
|
||||
/* Miscellaneous. */
|
||||
#define R_ARM_NONE 0
|
||||
#define R_AARCH64_NONE 256
|
||||
|
||||
/* Data. */
|
||||
#define R_AARCH64_ABS64 257
|
||||
#define R_AARCH64_ABS32 258
|
||||
#define R_AARCH64_ABS16 259
|
||||
#define R_AARCH64_PREL64 260
|
||||
#define R_AARCH64_PREL32 261
|
||||
#define R_AARCH64_PREL16 262
|
||||
|
||||
/* Instructions. */
|
||||
#define R_AARCH64_MOVW_UABS_G0 263
|
||||
#define R_AARCH64_MOVW_UABS_G0_NC 264
|
||||
#define R_AARCH64_MOVW_UABS_G1 265
|
||||
#define R_AARCH64_MOVW_UABS_G1_NC 266
|
||||
#define R_AARCH64_MOVW_UABS_G2 267
|
||||
#define R_AARCH64_MOVW_UABS_G2_NC 268
|
||||
#define R_AARCH64_MOVW_UABS_G3 269
|
||||
|
||||
#define R_AARCH64_MOVW_SABS_G0 270
|
||||
#define R_AARCH64_MOVW_SABS_G1 271
|
||||
#define R_AARCH64_MOVW_SABS_G2 272
|
||||
|
||||
#define R_AARCH64_LD_PREL_LO19 273
|
||||
#define R_AARCH64_ADR_PREL_LO21 274
|
||||
#define R_AARCH64_ADR_PREL_PG_HI21 275
|
||||
#define R_AARCH64_ADR_PREL_PG_HI21_NC 276
|
||||
#define R_AARCH64_ADD_ABS_LO12_NC 277
|
||||
#define R_AARCH64_LDST8_ABS_LO12_NC 278
|
||||
|
||||
#define R_AARCH64_TSTBR14 279
|
||||
#define R_AARCH64_CONDBR19 280
|
||||
#define R_AARCH64_JUMP26 282
|
||||
#define R_AARCH64_CALL26 283
|
||||
#define R_AARCH64_LDST16_ABS_LO12_NC 284
|
||||
#define R_AARCH64_LDST32_ABS_LO12_NC 285
|
||||
#define R_AARCH64_LDST64_ABS_LO12_NC 286
|
||||
#define R_AARCH64_LDST128_ABS_LO12_NC 299
|
||||
|
||||
#define R_AARCH64_MOVW_PREL_G0 287
|
||||
#define R_AARCH64_MOVW_PREL_G0_NC 288
|
||||
#define R_AARCH64_MOVW_PREL_G1 289
|
||||
#define R_AARCH64_MOVW_PREL_G1_NC 290
|
||||
#define R_AARCH64_MOVW_PREL_G2 291
|
||||
#define R_AARCH64_MOVW_PREL_G2_NC 292
|
||||
#define R_AARCH64_MOVW_PREL_G3 293
|
||||
|
||||
|
||||
/*
|
||||
* These are used to set parameters in the core dumps.
|
||||
*/
|
||||
#define ELF_CLASS ELFCLASS64
|
||||
#define ELF_DATA ELFDATA2LSB
|
||||
#define ELF_ARCH EM_AARCH64
|
||||
|
||||
#define ELF_PLATFORM_SIZE 16
|
||||
#define ELF_PLATFORM ("aarch64")
|
||||
|
||||
/*
|
||||
* This is used to ensure we don't load something for the wrong architecture.
|
||||
*/
|
||||
#define elf_check_arch(x) ((x)->e_machine == EM_AARCH64)
|
||||
|
||||
#define elf_read_implies_exec(ex,stk) (stk != EXSTACK_DISABLE_X)
|
||||
|
||||
#define CORE_DUMP_USE_REGSET
|
||||
#define ELF_EXEC_PAGESIZE PAGE_SIZE
|
||||
|
||||
/*
|
||||
* This is the location that an ET_DYN program is loaded if exec'ed. Typical
|
||||
* use of this is to invoke "./ld.so someprog" to test out a new version of
|
||||
* the loader. We need to make sure that it is out of the way of the program
|
||||
* that it will "exec", and that there is sufficient room for the brk.
|
||||
*/
|
||||
extern unsigned long randomize_et_dyn(unsigned long base);
|
||||
#define ELF_ET_DYN_BASE (randomize_et_dyn(2 * TASK_SIZE_64 / 3))
|
||||
|
||||
/*
|
||||
* When the program starts, a1 contains a pointer to a function to be
|
||||
* registered with atexit, as per the SVR4 ABI. A value of 0 means we have no
|
||||
* such handler.
|
||||
*/
|
||||
#define ELF_PLAT_INIT(_r, load_addr) (_r)->regs[0] = 0
|
||||
|
||||
#define SET_PERSONALITY(ex) clear_thread_flag(TIF_32BIT);
|
||||
|
||||
#define ARCH_DLINFO \
|
||||
do { \
|
||||
NEW_AUX_ENT(AT_SYSINFO_EHDR, \
|
||||
(elf_addr_t)current->mm->context.vdso); \
|
||||
} while (0)
|
||||
|
||||
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
|
||||
struct linux_binprm;
|
||||
extern int arch_setup_additional_pages(struct linux_binprm *bprm,
|
||||
int uses_interp);
|
||||
|
||||
/* 1GB of VA */
|
||||
#ifdef CONFIG_COMPAT
|
||||
#define STACK_RND_MASK (test_thread_flag(TIF_32BIT) ? \
|
||||
0x7ff >> (PAGE_SHIFT - 12) : \
|
||||
0x3ffff >> (PAGE_SHIFT - 12))
|
||||
#else
|
||||
#define STACK_RND_MASK (0x3ffff >> (PAGE_SHIFT - 12))
|
||||
#endif
|
||||
|
||||
struct mm_struct;
|
||||
extern unsigned long arch_randomize_brk(struct mm_struct *mm);
|
||||
#define arch_randomize_brk arch_randomize_brk
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
#define EM_ARM 40
|
||||
#define COMPAT_ELF_PLATFORM ("v8l")
|
||||
|
||||
#define COMPAT_ELF_ET_DYN_BASE (randomize_et_dyn(2 * TASK_SIZE_32 / 3))
|
||||
|
||||
/* AArch32 registers. */
|
||||
#define COMPAT_ELF_NGREG 18
|
||||
typedef unsigned int compat_elf_greg_t;
|
||||
typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
|
||||
|
||||
/* AArch32 EABI. */
|
||||
#define EF_ARM_EABI_MASK 0xff000000
|
||||
#define compat_elf_check_arch(x) (((x)->e_machine == EM_ARM) && \
|
||||
((x)->e_flags & EF_ARM_EABI_MASK))
|
||||
|
||||
#define compat_start_thread compat_start_thread
|
||||
#define COMPAT_SET_PERSONALITY(ex) set_thread_flag(TIF_32BIT);
|
||||
#define COMPAT_ARCH_DLINFO
|
||||
extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
|
||||
int uses_interp);
|
||||
#define compat_arch_setup_additional_pages \
|
||||
aarch32_setup_vectors_page
|
||||
|
||||
#endif /* CONFIG_COMPAT */
|
||||
|
||||
#endif
|
23
arch/arm64/include/asm/exception.h
Normal file
23
arch/arm64/include/asm/exception.h
Normal file
@ -0,0 +1,23 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/exception.h
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_EXCEPTION_H
|
||||
#define __ASM_EXCEPTION_H
|
||||
|
||||
#define __exception __attribute__((section(".exception.text")))
|
||||
|
||||
#endif /* __ASM_EXCEPTION_H */
|
23
arch/arm64/include/asm/exec.h
Normal file
23
arch/arm64/include/asm/exec.h
Normal file
@ -0,0 +1,23 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/exec.h
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_EXEC_H
|
||||
#define __ASM_EXEC_H
|
||||
|
||||
extern unsigned long arch_align_stack(unsigned long sp);
|
||||
|
||||
#endif /* __ASM_EXEC_H */
|
34
arch/arm64/include/asm/fb.h
Normal file
34
arch/arm64/include/asm/fb.h
Normal file
@ -0,0 +1,34 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_FB_H_
|
||||
#define __ASM_FB_H_
|
||||
|
||||
#include <linux/fb.h>
|
||||
#include <linux/fs.h>
|
||||
#include <asm/page.h>
|
||||
|
||||
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
|
||||
unsigned long off)
|
||||
{
|
||||
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
|
||||
}
|
||||
|
||||
static inline int fb_is_primary_device(struct fb_info *info)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
#endif /* __ASM_FB_H_ */
|
29
arch/arm64/include/asm/fcntl.h
Normal file
29
arch/arm64/include/asm/fcntl.h
Normal file
@ -0,0 +1,29 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_FCNTL_H
|
||||
#define __ASM_FCNTL_H
|
||||
|
||||
/*
|
||||
* Using our own definitions for AArch32 (compat) support.
|
||||
*/
|
||||
#define O_DIRECTORY 040000 /* must be a directory */
|
||||
#define O_NOFOLLOW 0100000 /* don't follow links */
|
||||
#define O_DIRECT 0200000 /* direct disk access hint - currently ignored */
|
||||
#define O_LARGEFILE 0400000
|
||||
|
||||
#include <asm-generic/fcntl.h>
|
||||
|
||||
#endif
|
64
arch/arm64/include/asm/fpsimd.h
Normal file
64
arch/arm64/include/asm/fpsimd.h
Normal file
@ -0,0 +1,64 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_FP_H
|
||||
#define __ASM_FP_H
|
||||
|
||||
#include <asm/ptrace.h>
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
/*
|
||||
* FP/SIMD storage area has:
|
||||
* - FPSR and FPCR
|
||||
* - 32 128-bit data registers
|
||||
*
|
||||
* Note that user_fp forms a prefix of this structure, which is relied
|
||||
* upon in the ptrace FP/SIMD accessors. struct user_fpsimd_state must
|
||||
* form a prefix of struct fpsimd_state.
|
||||
*/
|
||||
struct fpsimd_state {
|
||||
union {
|
||||
struct user_fpsimd_state user_fpsimd;
|
||||
struct {
|
||||
__uint128_t vregs[32];
|
||||
u32 fpsr;
|
||||
u32 fpcr;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
|
||||
/* Masks for extracting the FPSR and FPCR from the FPSCR */
|
||||
#define VFP_FPSCR_STAT_MASK 0xf800009f
|
||||
#define VFP_FPSCR_CTRL_MASK 0x07f79f00
|
||||
/*
|
||||
* The VFP state has 32x64-bit registers and a single 32-bit
|
||||
* control/status register.
|
||||
*/
|
||||
#define VFP_STATE_SIZE ((32 * 8) + 4)
|
||||
#endif
|
||||
|
||||
struct task_struct;
|
||||
|
||||
extern void fpsimd_save_state(struct fpsimd_state *state);
|
||||
extern void fpsimd_load_state(struct fpsimd_state *state);
|
||||
|
||||
extern void fpsimd_thread_switch(struct task_struct *next);
|
||||
extern void fpsimd_flush_thread(void);
|
||||
|
||||
#endif
|
||||
|
||||
#endif
|
136
arch/arm64/include/asm/futex.h
Normal file
136
arch/arm64/include/asm/futex.h
Normal file
@ -0,0 +1,136 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_FUTEX_H
|
||||
#define __ASM_FUTEX_H
|
||||
|
||||
#ifdef __KERNEL__
|
||||
|
||||
#include <linux/futex.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <asm/errno.h>
|
||||
|
||||
#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
|
||||
asm volatile( \
|
||||
"1: ldaxr %w1, %2\n" \
|
||||
insn "\n" \
|
||||
"2: stlxr %w3, %w0, %2\n" \
|
||||
" cbnz %w3, 1b\n" \
|
||||
"3:\n" \
|
||||
" .pushsection .fixup,\"ax\"\n" \
|
||||
"4: mov %w0, %w5\n" \
|
||||
" b 3b\n" \
|
||||
" .popsection\n" \
|
||||
" .pushsection __ex_table,\"a\"\n" \
|
||||
" .align 3\n" \
|
||||
" .quad 1b, 4b, 2b, 4b\n" \
|
||||
" .popsection\n" \
|
||||
: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp) \
|
||||
: "r" (oparg), "Ir" (-EFAULT) \
|
||||
: "cc")
|
||||
|
||||
static inline int
|
||||
futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
|
||||
{
|
||||
int op = (encoded_op >> 28) & 7;
|
||||
int cmp = (encoded_op >> 24) & 15;
|
||||
int oparg = (encoded_op << 8) >> 20;
|
||||
int cmparg = (encoded_op << 20) >> 20;
|
||||
int oldval = 0, ret, tmp;
|
||||
|
||||
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
|
||||
oparg = 1 << oparg;
|
||||
|
||||
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
|
||||
return -EFAULT;
|
||||
|
||||
pagefault_disable(); /* implies preempt_disable() */
|
||||
|
||||
switch (op) {
|
||||
case FUTEX_OP_SET:
|
||||
__futex_atomic_op("mov %w0, %w4",
|
||||
ret, oldval, uaddr, tmp, oparg);
|
||||
break;
|
||||
case FUTEX_OP_ADD:
|
||||
__futex_atomic_op("add %w0, %w1, %w4",
|
||||
ret, oldval, uaddr, tmp, oparg);
|
||||
break;
|
||||
case FUTEX_OP_OR:
|
||||
__futex_atomic_op("orr %w0, %w1, %w4",
|
||||
ret, oldval, uaddr, tmp, oparg);
|
||||
break;
|
||||
case FUTEX_OP_ANDN:
|
||||
__futex_atomic_op("and %w0, %w1, %w4",
|
||||
ret, oldval, uaddr, tmp, ~oparg);
|
||||
break;
|
||||
case FUTEX_OP_XOR:
|
||||
__futex_atomic_op("eor %w0, %w1, %w4",
|
||||
ret, oldval, uaddr, tmp, oparg);
|
||||
break;
|
||||
default:
|
||||
ret = -ENOSYS;
|
||||
}
|
||||
|
||||
pagefault_enable(); /* subsumes preempt_enable() */
|
||||
|
||||
if (!ret) {
|
||||
switch (cmp) {
|
||||
case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
|
||||
case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
|
||||
case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
|
||||
case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
|
||||
case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
|
||||
case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
|
||||
default: ret = -ENOSYS;
|
||||
}
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
static inline int
|
||||
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
|
||||
u32 oldval, u32 newval)
|
||||
{
|
||||
int ret = 0;
|
||||
u32 val, tmp;
|
||||
|
||||
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
|
||||
return -EFAULT;
|
||||
|
||||
asm volatile("// futex_atomic_cmpxchg_inatomic\n"
|
||||
"1: ldaxr %w1, %2\n"
|
||||
" sub %w3, %w1, %w4\n"
|
||||
" cbnz %w3, 3f\n"
|
||||
"2: stlxr %w3, %w5, %2\n"
|
||||
" cbnz %w3, 1b\n"
|
||||
"3:\n"
|
||||
" .pushsection .fixup,\"ax\"\n"
|
||||
"4: mov %w0, %w6\n"
|
||||
" b 3b\n"
|
||||
" .popsection\n"
|
||||
" .pushsection __ex_table,\"a\"\n"
|
||||
" .align 3\n"
|
||||
" .quad 1b, 4b, 2b, 4b\n"
|
||||
" .popsection\n"
|
||||
: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp)
|
||||
: "r" (oldval), "r" (newval), "Ir" (-EFAULT)
|
||||
: "cc", "memory");
|
||||
|
||||
*uval = val;
|
||||
return ret;
|
||||
}
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
#endif /* __ASM_FUTEX_H */
|
52
arch/arm64/include/asm/hardirq.h
Normal file
52
arch/arm64/include/asm/hardirq.h
Normal file
@ -0,0 +1,52 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_HARDIRQ_H
|
||||
#define __ASM_HARDIRQ_H
|
||||
|
||||
#include <linux/cache.h>
|
||||
#include <linux/threads.h>
|
||||
#include <asm/irq.h>
|
||||
|
||||
#define NR_IPI 4
|
||||
|
||||
typedef struct {
|
||||
unsigned int __softirq_pending;
|
||||
#ifdef CONFIG_SMP
|
||||
unsigned int ipi_irqs[NR_IPI];
|
||||
#endif
|
||||
} ____cacheline_aligned irq_cpustat_t;
|
||||
|
||||
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
|
||||
|
||||
#define __inc_irq_stat(cpu, member) __IRQ_STAT(cpu, member)++
|
||||
#define __get_irq_stat(cpu, member) __IRQ_STAT(cpu, member)
|
||||
|
||||
#ifdef CONFIG_SMP
|
||||
u64 smp_irq_stat_cpu(unsigned int cpu);
|
||||
#define arch_irq_stat_cpu smp_irq_stat_cpu
|
||||
#endif
|
||||
|
||||
#define __ARCH_IRQ_EXIT_IRQS_DISABLED 1
|
||||
|
||||
static inline void ack_bad_irq(unsigned int irq)
|
||||
{
|
||||
extern unsigned long irq_err_count;
|
||||
irq_err_count++;
|
||||
}
|
||||
|
||||
extern void handle_IRQ(unsigned int, struct pt_regs *);
|
||||
|
||||
#endif /* __ASM_HARDIRQ_H */
|
137
arch/arm64/include/asm/hw_breakpoint.h
Normal file
137
arch/arm64/include/asm/hw_breakpoint.h
Normal file
@ -0,0 +1,137 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_HW_BREAKPOINT_H
|
||||
#define __ASM_HW_BREAKPOINT_H
|
||||
|
||||
#ifdef __KERNEL__
|
||||
|
||||
struct arch_hw_breakpoint_ctrl {
|
||||
u32 __reserved : 19,
|
||||
len : 8,
|
||||
type : 2,
|
||||
privilege : 2,
|
||||
enabled : 1;
|
||||
};
|
||||
|
||||
struct arch_hw_breakpoint {
|
||||
u64 address;
|
||||
u64 trigger;
|
||||
struct arch_hw_breakpoint_ctrl ctrl;
|
||||
};
|
||||
|
||||
static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
|
||||
{
|
||||
return (ctrl.len << 5) | (ctrl.type << 3) | (ctrl.privilege << 1) |
|
||||
ctrl.enabled;
|
||||
}
|
||||
|
||||
static inline void decode_ctrl_reg(u32 reg,
|
||||
struct arch_hw_breakpoint_ctrl *ctrl)
|
||||
{
|
||||
ctrl->enabled = reg & 0x1;
|
||||
reg >>= 1;
|
||||
ctrl->privilege = reg & 0x3;
|
||||
reg >>= 2;
|
||||
ctrl->type = reg & 0x3;
|
||||
reg >>= 2;
|
||||
ctrl->len = reg & 0xff;
|
||||
}
|
||||
|
||||
/* Breakpoint */
|
||||
#define ARM_BREAKPOINT_EXECUTE 0
|
||||
|
||||
/* Watchpoints */
|
||||
#define ARM_BREAKPOINT_LOAD 1
|
||||
#define ARM_BREAKPOINT_STORE 2
|
||||
#define AARCH64_ESR_ACCESS_MASK (1 << 6)
|
||||
|
||||
/* Privilege Levels */
|
||||
#define AARCH64_BREAKPOINT_EL1 1
|
||||
#define AARCH64_BREAKPOINT_EL0 2
|
||||
|
||||
/* Lengths */
|
||||
#define ARM_BREAKPOINT_LEN_1 0x1
|
||||
#define ARM_BREAKPOINT_LEN_2 0x3
|
||||
#define ARM_BREAKPOINT_LEN_4 0xf
|
||||
#define ARM_BREAKPOINT_LEN_8 0xff
|
||||
|
||||
/* Kernel stepping */
|
||||
#define ARM_KERNEL_STEP_NONE 0
|
||||
#define ARM_KERNEL_STEP_ACTIVE 1
|
||||
#define ARM_KERNEL_STEP_SUSPEND 2
|
||||
|
||||
/*
|
||||
* Limits.
|
||||
* Changing these will require modifications to the register accessors.
|
||||
*/
|
||||
#define ARM_MAX_BRP 16
|
||||
#define ARM_MAX_WRP 16
|
||||
#define ARM_MAX_HBP_SLOTS (ARM_MAX_BRP + ARM_MAX_WRP)
|
||||
|
||||
/* Virtual debug register bases. */
|
||||
#define AARCH64_DBG_REG_BVR 0
|
||||
#define AARCH64_DBG_REG_BCR (AARCH64_DBG_REG_BVR + ARM_MAX_BRP)
|
||||
#define AARCH64_DBG_REG_WVR (AARCH64_DBG_REG_BCR + ARM_MAX_BRP)
|
||||
#define AARCH64_DBG_REG_WCR (AARCH64_DBG_REG_WVR + ARM_MAX_WRP)
|
||||
|
||||
/* Debug register names. */
|
||||
#define AARCH64_DBG_REG_NAME_BVR "bvr"
|
||||
#define AARCH64_DBG_REG_NAME_BCR "bcr"
|
||||
#define AARCH64_DBG_REG_NAME_WVR "wvr"
|
||||
#define AARCH64_DBG_REG_NAME_WCR "wcr"
|
||||
|
||||
/* Accessor macros for the debug registers. */
|
||||
#define AARCH64_DBG_READ(N, REG, VAL) do {\
|
||||
asm volatile("mrs %0, dbg" REG #N "_el1" : "=r" (VAL));\
|
||||
} while (0)
|
||||
|
||||
#define AARCH64_DBG_WRITE(N, REG, VAL) do {\
|
||||
asm volatile("msr dbg" REG #N "_el1, %0" :: "r" (VAL));\
|
||||
} while (0)
|
||||
|
||||
struct task_struct;
|
||||
struct notifier_block;
|
||||
struct perf_event;
|
||||
struct pmu;
|
||||
|
||||
extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
|
||||
int *gen_len, int *gen_type);
|
||||
extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
|
||||
extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
|
||||
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
|
||||
unsigned long val, void *data);
|
||||
|
||||
extern int arch_install_hw_breakpoint(struct perf_event *bp);
|
||||
extern void arch_uninstall_hw_breakpoint(struct perf_event *bp);
|
||||
extern void hw_breakpoint_pmu_read(struct perf_event *bp);
|
||||
extern int hw_breakpoint_slots(int type);
|
||||
|
||||
#ifdef CONFIG_HAVE_HW_BREAKPOINT
|
||||
extern void hw_breakpoint_thread_switch(struct task_struct *next);
|
||||
extern void ptrace_hw_copy_thread(struct task_struct *task);
|
||||
#else
|
||||
static inline void hw_breakpoint_thread_switch(struct task_struct *next)
|
||||
{
|
||||
}
|
||||
static inline void ptrace_hw_copy_thread(struct task_struct *task)
|
||||
{
|
||||
}
|
||||
#endif
|
||||
|
||||
extern struct pmu perf_ops_bp;
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
#endif /* __ASM_BREAKPOINT_H */
|
53
arch/arm64/include/asm/hwcap.h
Normal file
53
arch/arm64/include/asm/hwcap.h
Normal file
@ -0,0 +1,53 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_HWCAP_H
|
||||
#define __ASM_HWCAP_H
|
||||
|
||||
/*
|
||||
* HWCAP flags - for elf_hwcap (in kernel) and AT_HWCAP
|
||||
*/
|
||||
#define HWCAP_FP (1 << 0)
|
||||
#define HWCAP_ASIMD (1 << 1)
|
||||
|
||||
#define COMPAT_HWCAP_HALF (1 << 1)
|
||||
#define COMPAT_HWCAP_THUMB (1 << 2)
|
||||
#define COMPAT_HWCAP_FAST_MULT (1 << 4)
|
||||
#define COMPAT_HWCAP_VFP (1 << 6)
|
||||
#define COMPAT_HWCAP_EDSP (1 << 7)
|
||||
#define COMPAT_HWCAP_NEON (1 << 12)
|
||||
#define COMPAT_HWCAP_VFPv3 (1 << 13)
|
||||
#define COMPAT_HWCAP_TLS (1 << 15)
|
||||
#define COMPAT_HWCAP_VFPv4 (1 << 16)
|
||||
#define COMPAT_HWCAP_IDIVA (1 << 17)
|
||||
#define COMPAT_HWCAP_IDIVT (1 << 18)
|
||||
#define COMPAT_HWCAP_IDIV (COMPAT_HWCAP_IDIVA|COMPAT_HWCAP_IDIVT)
|
||||
|
||||
#if defined(__KERNEL__) && !defined(__ASSEMBLY__)
|
||||
/*
|
||||
* This yields a mask that user programs can use to figure out what
|
||||
* instruction set this cpu supports.
|
||||
*/
|
||||
#define ELF_HWCAP (elf_hwcap)
|
||||
#define COMPAT_ELF_HWCAP (COMPAT_HWCAP_HALF|COMPAT_HWCAP_THUMB|\
|
||||
COMPAT_HWCAP_FAST_MULT|COMPAT_HWCAP_EDSP|\
|
||||
COMPAT_HWCAP_TLS|COMPAT_HWCAP_VFP|\
|
||||
COMPAT_HWCAP_VFPv3|COMPAT_HWCAP_VFPv4|\
|
||||
COMPAT_HWCAP_NEON|COMPAT_HWCAP_IDIV)
|
||||
|
||||
extern unsigned int elf_hwcap;
|
||||
#endif
|
||||
|
||||
#endif
|
258
arch/arm64/include/asm/io.h
Normal file
258
arch/arm64/include/asm/io.h
Normal file
@ -0,0 +1,258 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/io.h
|
||||
*
|
||||
* Copyright (C) 1996-2000 Russell King
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_IO_H
|
||||
#define __ASM_IO_H
|
||||
|
||||
#ifdef __KERNEL__
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
#include <asm/byteorder.h>
|
||||
#include <asm/barrier.h>
|
||||
#include <asm/pgtable.h>
|
||||
|
||||
/*
|
||||
* Generic IO read/write. These perform native-endian accesses.
|
||||
*/
|
||||
static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
|
||||
{
|
||||
asm volatile("strb %w0, [%1]" : : "r" (val), "r" (addr));
|
||||
}
|
||||
|
||||
static inline void __raw_writew(u16 val, volatile void __iomem *addr)
|
||||
{
|
||||
asm volatile("strh %w0, [%1]" : : "r" (val), "r" (addr));
|
||||
}
|
||||
|
||||
static inline void __raw_writel(u32 val, volatile void __iomem *addr)
|
||||
{
|
||||
asm volatile("str %w0, [%1]" : : "r" (val), "r" (addr));
|
||||
}
|
||||
|
||||
static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
|
||||
{
|
||||
asm volatile("str %0, [%1]" : : "r" (val), "r" (addr));
|
||||
}
|
||||
|
||||
static inline u8 __raw_readb(const volatile void __iomem *addr)
|
||||
{
|
||||
u8 val;
|
||||
asm volatile("ldrb %w0, [%1]" : "=r" (val) : "r" (addr));
|
||||
return val;
|
||||
}
|
||||
|
||||
static inline u16 __raw_readw(const volatile void __iomem *addr)
|
||||
{
|
||||
u16 val;
|
||||
asm volatile("ldrh %w0, [%1]" : "=r" (val) : "r" (addr));
|
||||
return val;
|
||||
}
|
||||
|
||||
static inline u32 __raw_readl(const volatile void __iomem *addr)
|
||||
{
|
||||
u32 val;
|
||||
asm volatile("ldr %w0, [%1]" : "=r" (val) : "r" (addr));
|
||||
return val;
|
||||
}
|
||||
|
||||
static inline u64 __raw_readq(const volatile void __iomem *addr)
|
||||
{
|
||||
u64 val;
|
||||
asm volatile("ldr %0, [%1]" : "=r" (val) : "r" (addr));
|
||||
return val;
|
||||
}
|
||||
|
||||
/* IO barriers */
|
||||
#define __iormb() rmb()
|
||||
#define __iowmb() wmb()
|
||||
|
||||
#define mmiowb() do { } while (0)
|
||||
|
||||
/*
|
||||
* Relaxed I/O memory access primitives. These follow the Device memory
|
||||
* ordering rules but do not guarantee any ordering relative to Normal memory
|
||||
* accesses.
|
||||
*/
|
||||
#define readb_relaxed(c) ({ u8 __v = __raw_readb(c); __v; })
|
||||
#define readw_relaxed(c) ({ u16 __v = le16_to_cpu((__force __le16)__raw_readw(c)); __v; })
|
||||
#define readl_relaxed(c) ({ u32 __v = le32_to_cpu((__force __le32)__raw_readl(c)); __v; })
|
||||
|
||||
#define writeb_relaxed(v,c) ((void)__raw_writeb((v),(c)))
|
||||
#define writew_relaxed(v,c) ((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
|
||||
#define writel_relaxed(v,c) ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
|
||||
|
||||
/*
|
||||
* I/O memory access primitives. Reads are ordered relative to any
|
||||
* following Normal memory access. Writes are ordered relative to any prior
|
||||
* Normal memory access.
|
||||
*/
|
||||
#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; })
|
||||
#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
|
||||
#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; })
|
||||
|
||||
#define writeb(v,c) ({ __iowmb(); writeb_relaxed((v),(c)); })
|
||||
#define writew(v,c) ({ __iowmb(); writew_relaxed((v),(c)); })
|
||||
#define writel(v,c) ({ __iowmb(); writel_relaxed((v),(c)); })
|
||||
|
||||
/*
|
||||
* I/O port access primitives.
|
||||
*/
|
||||
#define IO_SPACE_LIMIT 0xffff
|
||||
#define PCI_IOBASE ((void __iomem *)0xffffffbbfffe0000UL)
|
||||
|
||||
static inline u8 inb(unsigned long addr)
|
||||
{
|
||||
return readb(addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline u16 inw(unsigned long addr)
|
||||
{
|
||||
return readw(addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline u32 inl(unsigned long addr)
|
||||
{
|
||||
return readl(addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void outb(u8 b, unsigned long addr)
|
||||
{
|
||||
writeb(b, addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void outw(u16 b, unsigned long addr)
|
||||
{
|
||||
writew(b, addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void outl(u32 b, unsigned long addr)
|
||||
{
|
||||
writel(b, addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
#define inb_p(addr) inb(addr)
|
||||
#define inw_p(addr) inw(addr)
|
||||
#define inl_p(addr) inl(addr)
|
||||
|
||||
#define outb_p(x, addr) outb((x), (addr))
|
||||
#define outw_p(x, addr) outw((x), (addr))
|
||||
#define outl_p(x, addr) outl((x), (addr))
|
||||
|
||||
static inline void insb(unsigned long addr, void *buffer, int count)
|
||||
{
|
||||
u8 *buf = buffer;
|
||||
while (count--)
|
||||
*buf++ = __raw_readb(addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void insw(unsigned long addr, void *buffer, int count)
|
||||
{
|
||||
u16 *buf = buffer;
|
||||
while (count--)
|
||||
*buf++ = __raw_readw(addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void insl(unsigned long addr, void *buffer, int count)
|
||||
{
|
||||
u32 *buf = buffer;
|
||||
while (count--)
|
||||
*buf++ = __raw_readl(addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void outsb(unsigned long addr, const void *buffer, int count)
|
||||
{
|
||||
const u8 *buf = buffer;
|
||||
while (count--)
|
||||
__raw_writeb(*buf++, addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void outsw(unsigned long addr, const void *buffer, int count)
|
||||
{
|
||||
const u16 *buf = buffer;
|
||||
while (count--)
|
||||
__raw_writew(*buf++, addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
static inline void outsl(unsigned long addr, const void *buffer, int count)
|
||||
{
|
||||
const u32 *buf = buffer;
|
||||
while (count--)
|
||||
__raw_writel(*buf++, addr + PCI_IOBASE);
|
||||
}
|
||||
|
||||
#define insb_p(port,to,len) insb(port,to,len)
|
||||
#define insw_p(port,to,len) insw(port,to,len)
|
||||
#define insl_p(port,to,len) insl(port,to,len)
|
||||
|
||||
#define outsb_p(port,from,len) outsb(port,from,len)
|
||||
#define outsw_p(port,from,len) outsw(port,from,len)
|
||||
#define outsl_p(port,from,len) outsl(port,from,len)
|
||||
|
||||
/*
|
||||
* String version of I/O memory access operations.
|
||||
*/
|
||||
extern void __memcpy_fromio(void *, const volatile void __iomem *, size_t);
|
||||
extern void __memcpy_toio(volatile void __iomem *, const void *, size_t);
|
||||
extern void __memset_io(volatile void __iomem *, int, size_t);
|
||||
|
||||
#define memset_io(c,v,l) __memset_io((c),(v),(l))
|
||||
#define memcpy_fromio(a,c,l) __memcpy_fromio((a),(c),(l))
|
||||
#define memcpy_toio(c,a,l) __memcpy_toio((c),(a),(l))
|
||||
|
||||
/*
|
||||
* I/O memory mapping functions.
|
||||
*/
|
||||
extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);
|
||||
extern void __iounmap(volatile void __iomem *addr);
|
||||
|
||||
#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_DIRTY)
|
||||
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_XN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
|
||||
#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL_NC))
|
||||
|
||||
#define ioremap(addr, size) __ioremap((addr), (size), PROT_DEVICE_nGnRE)
|
||||
#define ioremap_nocache(addr, size) __ioremap((addr), (size), PROT_DEVICE_nGnRE)
|
||||
#define ioremap_wc(addr, size) __ioremap((addr), (size), PROT_NORMAL_NC)
|
||||
#define iounmap __iounmap
|
||||
|
||||
#define ARCH_HAS_IOREMAP_WC
|
||||
#include <asm-generic/iomap.h>
|
||||
|
||||
/*
|
||||
* More restrictive address range checking than the default implementation
|
||||
* (PHYS_OFFSET and PHYS_MASK taken into account).
|
||||
*/
|
||||
#define ARCH_HAS_VALID_PHYS_ADDR_RANGE
|
||||
extern int valid_phys_addr_range(unsigned long addr, size_t size);
|
||||
extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
|
||||
|
||||
extern int devmem_is_allowed(unsigned long pfn);
|
||||
|
||||
/*
|
||||
* Convert a physical pointer to a virtual kernel pointer for /dev/mem
|
||||
* access
|
||||
*/
|
||||
#define xlate_dev_mem_ptr(p) __va(p)
|
||||
|
||||
/*
|
||||
* Convert a virtual cached pointer to an uncached pointer
|
||||
*/
|
||||
#define xlate_dev_kmem_ptr(p) p
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
#endif /* __ASM_IO_H */
|
8
arch/arm64/include/asm/irq.h
Normal file
8
arch/arm64/include/asm/irq.h
Normal file
@ -0,0 +1,8 @@
|
||||
#ifndef __ASM_IRQ_H
|
||||
#define __ASM_IRQ_H
|
||||
|
||||
#include <asm-generic/irq.h>
|
||||
|
||||
extern void (*handle_arch_irq)(struct pt_regs *);
|
||||
|
||||
#endif
|
91
arch/arm64/include/asm/irqflags.h
Normal file
91
arch/arm64/include/asm/irqflags.h
Normal file
@ -0,0 +1,91 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_IRQFLAGS_H
|
||||
#define __ASM_IRQFLAGS_H
|
||||
|
||||
#ifdef __KERNEL__
|
||||
|
||||
#include <asm/ptrace.h>
|
||||
|
||||
/*
|
||||
* CPU interrupt mask handling.
|
||||
*/
|
||||
static inline unsigned long arch_local_irq_save(void)
|
||||
{
|
||||
unsigned long flags;
|
||||
asm volatile(
|
||||
"mrs %0, daif // arch_local_irq_save\n"
|
||||
"msr daifset, #2"
|
||||
: "=r" (flags)
|
||||
:
|
||||
: "memory");
|
||||
return flags;
|
||||
}
|
||||
|
||||
static inline void arch_local_irq_enable(void)
|
||||
{
|
||||
asm volatile(
|
||||
"msr daifclr, #2 // arch_local_irq_enable"
|
||||
:
|
||||
:
|
||||
: "memory");
|
||||
}
|
||||
|
||||
static inline void arch_local_irq_disable(void)
|
||||
{
|
||||
asm volatile(
|
||||
"msr daifset, #2 // arch_local_irq_disable"
|
||||
:
|
||||
:
|
||||
: "memory");
|
||||
}
|
||||
|
||||
#define local_fiq_enable() asm("msr daifclr, #1" : : : "memory")
|
||||
#define local_fiq_disable() asm("msr daifset, #1" : : : "memory")
|
||||
|
||||
/*
|
||||
* Save the current interrupt enable state.
|
||||
*/
|
||||
static inline unsigned long arch_local_save_flags(void)
|
||||
{
|
||||
unsigned long flags;
|
||||
asm volatile(
|
||||
"mrs %0, daif // arch_local_save_flags"
|
||||
: "=r" (flags)
|
||||
:
|
||||
: "memory");
|
||||
return flags;
|
||||
}
|
||||
|
||||
/*
|
||||
* restore saved IRQ state
|
||||
*/
|
||||
static inline void arch_local_irq_restore(unsigned long flags)
|
||||
{
|
||||
asm volatile(
|
||||
"msr daif, %0 // arch_local_irq_restore"
|
||||
:
|
||||
: "r" (flags)
|
||||
: "memory");
|
||||
}
|
||||
|
||||
static inline int arch_irqs_disabled_flags(unsigned long flags)
|
||||
{
|
||||
return flags & PSR_I_BIT;
|
||||
}
|
||||
|
||||
#endif
|
||||
#endif
|
21
arch/arm64/include/asm/memblock.h
Normal file
21
arch/arm64/include/asm/memblock.h
Normal file
@ -0,0 +1,21 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_MEMBLOCK_H
|
||||
#define __ASM_MEMBLOCK_H
|
||||
|
||||
extern void arm64_memblock_init(void);
|
||||
|
||||
#endif
|
144
arch/arm64/include/asm/memory.h
Normal file
144
arch/arm64/include/asm/memory.h
Normal file
@ -0,0 +1,144 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/memory.h
|
||||
*
|
||||
* Copyright (C) 2000-2002 Russell King
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*
|
||||
* Note: this file should not be included by non-asm/.h files
|
||||
*/
|
||||
#ifndef __ASM_MEMORY_H
|
||||
#define __ASM_MEMORY_H
|
||||
|
||||
#include <linux/compiler.h>
|
||||
#include <linux/const.h>
|
||||
#include <linux/types.h>
|
||||
#include <asm/sizes.h>
|
||||
|
||||
/*
|
||||
* Allow for constants defined here to be used from assembly code
|
||||
* by prepending the UL suffix only with actual C code compilation.
|
||||
*/
|
||||
#define UL(x) _AC(x, UL)
|
||||
|
||||
/*
|
||||
* PAGE_OFFSET - the virtual address of the start of the kernel image.
|
||||
* VA_BITS - the maximum number of bits for virtual addresses.
|
||||
* TASK_SIZE - the maximum size of a user space task.
|
||||
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
|
||||
* The module space lives between the addresses given by TASK_SIZE
|
||||
* and PAGE_OFFSET - it must be within 128MB of the kernel text.
|
||||
*/
|
||||
#define PAGE_OFFSET UL(0xffffffc000000000)
|
||||
#define MODULES_END (PAGE_OFFSET)
|
||||
#define MODULES_VADDR (MODULES_END - SZ_64M)
|
||||
#define VA_BITS (39)
|
||||
#define TASK_SIZE_64 (UL(1) << VA_BITS)
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
#define TASK_SIZE_32 UL(0x100000000)
|
||||
#define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \
|
||||
TASK_SIZE_32 : TASK_SIZE_64)
|
||||
#else
|
||||
#define TASK_SIZE TASK_SIZE_64
|
||||
#endif /* CONFIG_COMPAT */
|
||||
|
||||
#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 4))
|
||||
|
||||
#if TASK_SIZE_64 > MODULES_VADDR
|
||||
#error Top of 64-bit user space clashes with start of module space
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Physical vs virtual RAM address space conversion. These are
|
||||
* private definitions which should NOT be used outside memory.h
|
||||
* files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
|
||||
*/
|
||||
#define __virt_to_phys(x) (((phys_addr_t)(x) - PAGE_OFFSET + PHYS_OFFSET))
|
||||
#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
|
||||
|
||||
/*
|
||||
* Convert a physical address to a Page Frame Number and back
|
||||
*/
|
||||
#define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
|
||||
#define __pfn_to_phys(pfn) ((phys_addr_t)(pfn) << PAGE_SHIFT)
|
||||
|
||||
/*
|
||||
* Convert a page to/from a physical address
|
||||
*/
|
||||
#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page)))
|
||||
#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys)))
|
||||
|
||||
/*
|
||||
* Memory types available.
|
||||
*/
|
||||
#define MT_DEVICE_nGnRnE 0
|
||||
#define MT_DEVICE_nGnRE 1
|
||||
#define MT_DEVICE_GRE 2
|
||||
#define MT_NORMAL_NC 3
|
||||
#define MT_NORMAL 4
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
extern phys_addr_t memstart_addr;
|
||||
/* PHYS_OFFSET - the physical address of the start of memory. */
|
||||
#define PHYS_OFFSET ({ memstart_addr; })
|
||||
|
||||
/*
|
||||
* PFNs are used to describe any physical page; this means
|
||||
* PFN 0 == physical address 0.
|
||||
*
|
||||
* This is the PFN of the first RAM page in the kernel
|
||||
* direct-mapped view. We assume this is the first page
|
||||
* of RAM in the mem_map as well.
|
||||
*/
|
||||
#define PHYS_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT)
|
||||
|
||||
/*
|
||||
* Note: Drivers should NOT use these. They are the wrong
|
||||
* translation for translating DMA addresses. Use the driver
|
||||
* DMA support - see dma-mapping.h.
|
||||
*/
|
||||
static inline phys_addr_t virt_to_phys(const volatile void *x)
|
||||
{
|
||||
return __virt_to_phys((unsigned long)(x));
|
||||
}
|
||||
|
||||
static inline void *phys_to_virt(phys_addr_t x)
|
||||
{
|
||||
return (void *)(__phys_to_virt(x));
|
||||
}
|
||||
|
||||
/*
|
||||
* Drivers should NOT use these either.
|
||||
*/
|
||||
#define __pa(x) __virt_to_phys((unsigned long)(x))
|
||||
#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
|
||||
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
|
||||
|
||||
/*
|
||||
* virt_to_page(k) convert a _valid_ virtual address to struct page *
|
||||
* virt_addr_valid(k) indicates whether a virtual address is valid
|
||||
*/
|
||||
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
|
||||
|
||||
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
|
||||
#define virt_addr_valid(kaddr) (((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
|
||||
((void *)(kaddr) < (void *)high_memory))
|
||||
|
||||
#endif
|
||||
|
||||
#include <asm-generic/memory_model.h>
|
||||
|
||||
#endif
|
30
arch/arm64/include/asm/mmu.h
Normal file
30
arch/arm64/include/asm/mmu.h
Normal file
@ -0,0 +1,30 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_MMU_H
|
||||
#define __ASM_MMU_H
|
||||
|
||||
typedef struct {
|
||||
unsigned int id;
|
||||
raw_spinlock_t id_lock;
|
||||
void *vdso;
|
||||
} mm_context_t;
|
||||
|
||||
#define ASID(mm) ((mm)->context.id & 0xffff)
|
||||
|
||||
extern void paging_init(void);
|
||||
extern void setup_mm_for_reboot(void);
|
||||
|
||||
#endif
|
152
arch/arm64/include/asm/mmu_context.h
Normal file
152
arch/arm64/include/asm/mmu_context.h
Normal file
@ -0,0 +1,152 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/mmu_context.h
|
||||
*
|
||||
* Copyright (C) 1996 Russell King.
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_MMU_CONTEXT_H
|
||||
#define __ASM_MMU_CONTEXT_H
|
||||
|
||||
#include <linux/compiler.h>
|
||||
#include <linux/sched.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/proc-fns.h>
|
||||
#include <asm-generic/mm_hooks.h>
|
||||
#include <asm/cputype.h>
|
||||
#include <asm/pgtable.h>
|
||||
|
||||
#define MAX_ASID_BITS 16
|
||||
|
||||
extern unsigned int cpu_last_asid;
|
||||
|
||||
void __init_new_context(struct task_struct *tsk, struct mm_struct *mm);
|
||||
void __new_context(struct mm_struct *mm);
|
||||
|
||||
/*
|
||||
* Set TTBR0 to empty_zero_page. No translations will be possible via TTBR0.
|
||||
*/
|
||||
static inline void cpu_set_reserved_ttbr0(void)
|
||||
{
|
||||
unsigned long ttbr = page_to_phys(empty_zero_page);
|
||||
|
||||
asm(
|
||||
" msr ttbr0_el1, %0 // set TTBR0\n"
|
||||
" isb"
|
||||
:
|
||||
: "r" (ttbr));
|
||||
}
|
||||
|
||||
static inline void switch_new_context(struct mm_struct *mm)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
__new_context(mm);
|
||||
|
||||
local_irq_save(flags);
|
||||
cpu_switch_mm(mm->pgd, mm);
|
||||
local_irq_restore(flags);
|
||||
}
|
||||
|
||||
static inline void check_and_switch_context(struct mm_struct *mm,
|
||||
struct task_struct *tsk)
|
||||
{
|
||||
/*
|
||||
* Required during context switch to avoid speculative page table
|
||||
* walking with the wrong TTBR.
|
||||
*/
|
||||
cpu_set_reserved_ttbr0();
|
||||
|
||||
if (!((mm->context.id ^ cpu_last_asid) >> MAX_ASID_BITS))
|
||||
/*
|
||||
* The ASID is from the current generation, just switch to the
|
||||
* new pgd. This condition is only true for calls from
|
||||
* context_switch() and interrupts are already disabled.
|
||||
*/
|
||||
cpu_switch_mm(mm->pgd, mm);
|
||||
else if (irqs_disabled())
|
||||
/*
|
||||
* Defer the new ASID allocation until after the context
|
||||
* switch critical region since __new_context() cannot be
|
||||
* called with interrupts disabled.
|
||||
*/
|
||||
set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
|
||||
else
|
||||
/*
|
||||
* That is a direct call to switch_mm() or activate_mm() with
|
||||
* interrupts enabled and a new context.
|
||||
*/
|
||||
switch_new_context(mm);
|
||||
}
|
||||
|
||||
#define init_new_context(tsk,mm) (__init_new_context(tsk,mm),0)
|
||||
#define destroy_context(mm) do { } while(0)
|
||||
|
||||
#define finish_arch_post_lock_switch \
|
||||
finish_arch_post_lock_switch
|
||||
static inline void finish_arch_post_lock_switch(void)
|
||||
{
|
||||
if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
|
||||
struct mm_struct *mm = current->mm;
|
||||
unsigned long flags;
|
||||
|
||||
__new_context(mm);
|
||||
|
||||
local_irq_save(flags);
|
||||
cpu_switch_mm(mm->pgd, mm);
|
||||
local_irq_restore(flags);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* This is called when "tsk" is about to enter lazy TLB mode.
|
||||
*
|
||||
* mm: describes the currently active mm context
|
||||
* tsk: task which is entering lazy tlb
|
||||
* cpu: cpu number which is entering lazy tlb
|
||||
*
|
||||
* tsk->mm will be NULL
|
||||
*/
|
||||
static inline void
|
||||
enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
|
||||
{
|
||||
}
|
||||
|
||||
/*
|
||||
* This is the actual mm switch as far as the scheduler
|
||||
* is concerned. No registers are touched. We avoid
|
||||
* calling the CPU specific function when the mm hasn't
|
||||
* actually changed.
|
||||
*/
|
||||
static inline void
|
||||
switch_mm(struct mm_struct *prev, struct mm_struct *next,
|
||||
struct task_struct *tsk)
|
||||
{
|
||||
unsigned int cpu = smp_processor_id();
|
||||
|
||||
#ifdef CONFIG_SMP
|
||||
/* check for possible thread migration */
|
||||
if (!cpumask_empty(mm_cpumask(next)) &&
|
||||
!cpumask_test_cpu(cpu, mm_cpumask(next)))
|
||||
__flush_icache_all();
|
||||
#endif
|
||||
if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
|
||||
check_and_switch_context(next, tsk);
|
||||
}
|
||||
|
||||
#define deactivate_mm(tsk,mm) do { } while (0)
|
||||
#define activate_mm(prev,next) switch_mm(prev, next, NULL)
|
||||
|
||||
#endif
|
23
arch/arm64/include/asm/module.h
Normal file
23
arch/arm64/include/asm/module.h
Normal file
@ -0,0 +1,23 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_MODULE_H
|
||||
#define __ASM_MODULE_H
|
||||
|
||||
#include <asm-generic/module.h>
|
||||
|
||||
#define MODULE_ARCH_VERMAGIC "aarch64"
|
||||
|
||||
#endif /* __ASM_MODULE_H */
|
67
arch/arm64/include/asm/page.h
Normal file
67
arch/arm64/include/asm/page.h
Normal file
@ -0,0 +1,67 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/page.h
|
||||
*
|
||||
* Copyright (C) 1995-2003 Russell King
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PAGE_H
|
||||
#define __ASM_PAGE_H
|
||||
|
||||
/* PAGE_SHIFT determines the page size */
|
||||
#ifdef CONFIG_ARM64_64K_PAGES
|
||||
#define PAGE_SHIFT 16
|
||||
#else
|
||||
#define PAGE_SHIFT 12
|
||||
#endif
|
||||
#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
|
||||
#define PAGE_MASK (~(PAGE_SIZE-1))
|
||||
|
||||
/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
|
||||
#define __HAVE_ARCH_GATE_AREA 1
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
#ifdef CONFIG_ARM64_64K_PAGES
|
||||
#include <asm/pgtable-2level-types.h>
|
||||
#else
|
||||
#include <asm/pgtable-3level-types.h>
|
||||
#endif
|
||||
|
||||
extern void __cpu_clear_user_page(void *p, unsigned long user);
|
||||
extern void __cpu_copy_user_page(void *to, const void *from,
|
||||
unsigned long user);
|
||||
extern void copy_page(void *to, const void *from);
|
||||
extern void clear_page(void *to);
|
||||
|
||||
#define clear_user_page(addr,vaddr,pg) __cpu_clear_user_page(addr, vaddr)
|
||||
#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
|
||||
|
||||
typedef struct page *pgtable_t;
|
||||
|
||||
#ifdef CONFIG_HAVE_ARCH_PFN_VALID
|
||||
extern int pfn_valid(unsigned long);
|
||||
#endif
|
||||
|
||||
#include <asm/memory.h>
|
||||
|
||||
#endif /* !__ASSEMBLY__ */
|
||||
|
||||
#define VM_DATA_DEFAULT_FLAGS \
|
||||
(((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0) | \
|
||||
VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
|
||||
|
||||
#include <asm-generic/getorder.h>
|
||||
|
||||
#endif
|
23
arch/arm64/include/asm/param.h
Normal file
23
arch/arm64/include/asm/param.h
Normal file
@ -0,0 +1,23 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PARAM_H
|
||||
#define __ASM_PARAM_H
|
||||
|
||||
#define EXEC_PAGESIZE 65536
|
||||
|
||||
#include <asm-generic/param.h>
|
||||
|
||||
#endif
|
22
arch/arm64/include/asm/perf_event.h
Normal file
22
arch/arm64/include/asm/perf_event.h
Normal file
@ -0,0 +1,22 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#ifndef __ASM_PERF_EVENT_H
|
||||
#define __ASM_PERF_EVENT_H
|
||||
|
||||
/* It's quiet around here... */
|
||||
|
||||
#endif
|
113
arch/arm64/include/asm/pgalloc.h
Normal file
113
arch/arm64/include/asm/pgalloc.h
Normal file
@ -0,0 +1,113 @@
|
||||
/*
|
||||
* Based on arch/arm/include/asm/pgalloc.h
|
||||
*
|
||||
* Copyright (C) 2000-2001 Russell King
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGALLOC_H
|
||||
#define __ASM_PGALLOC_H
|
||||
|
||||
#include <asm/pgtable-hwdef.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
#define check_pgt_cache() do { } while (0)
|
||||
|
||||
#ifndef CONFIG_ARM64_64K_PAGES
|
||||
|
||||
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
|
||||
{
|
||||
return (pmd_t *)get_zeroed_page(GFP_KERNEL | __GFP_REPEAT);
|
||||
}
|
||||
|
||||
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
|
||||
{
|
||||
BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
|
||||
free_page((unsigned long)pmd);
|
||||
}
|
||||
|
||||
static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
|
||||
{
|
||||
set_pud(pud, __pud(__pa(pmd) | PMD_TYPE_TABLE));
|
||||
}
|
||||
|
||||
#endif /* CONFIG_ARM64_64K_PAGES */
|
||||
|
||||
extern pgd_t *pgd_alloc(struct mm_struct *mm);
|
||||
extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
|
||||
|
||||
#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
|
||||
|
||||
static inline pte_t *
|
||||
pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr)
|
||||
{
|
||||
return (pte_t *)__get_free_page(PGALLOC_GFP);
|
||||
}
|
||||
|
||||
static inline pgtable_t
|
||||
pte_alloc_one(struct mm_struct *mm, unsigned long addr)
|
||||
{
|
||||
struct page *pte;
|
||||
|
||||
pte = alloc_pages(PGALLOC_GFP, 0);
|
||||
if (pte)
|
||||
pgtable_page_ctor(pte);
|
||||
|
||||
return pte;
|
||||
}
|
||||
|
||||
/*
|
||||
* Free a PTE table.
|
||||
*/
|
||||
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
|
||||
{
|
||||
if (pte)
|
||||
free_page((unsigned long)pte);
|
||||
}
|
||||
|
||||
static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
|
||||
{
|
||||
pgtable_page_dtor(pte);
|
||||
__free_page(pte);
|
||||
}
|
||||
|
||||
static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t pte,
|
||||
pmdval_t prot)
|
||||
{
|
||||
set_pmd(pmdp, __pmd(pte | prot));
|
||||
}
|
||||
|
||||
/*
|
||||
* Populate the pmdp entry with a pointer to the pte. This pmd is part
|
||||
* of the mm address space.
|
||||
*/
|
||||
static inline void
|
||||
pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
|
||||
{
|
||||
/*
|
||||
* The pmd must be loaded with the physical address of the PTE table
|
||||
*/
|
||||
__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE);
|
||||
}
|
||||
|
||||
static inline void
|
||||
pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
|
||||
{
|
||||
__pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE);
|
||||
}
|
||||
#define pmd_pgtable(pmd) pmd_page(pmd)
|
||||
|
||||
#endif
|
43
arch/arm64/include/asm/pgtable-2level-hwdef.h
Normal file
43
arch/arm64/include/asm/pgtable-2level-hwdef.h
Normal file
@ -0,0 +1,43 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGTABLE_2LEVEL_HWDEF_H
|
||||
#define __ASM_PGTABLE_2LEVEL_HWDEF_H
|
||||
|
||||
/*
|
||||
* With LPAE and 64KB pages, there are 2 levels of page tables. Each level has
|
||||
* 8192 entries of 8 bytes each, occupying a 64KB page. Levels 0 and 1 are not
|
||||
* used. The 2nd level table (PGD for Linux) can cover a range of 4TB, each
|
||||
* entry representing 512MB. The user and kernel address spaces are limited to
|
||||
* 512GB and therefore we only use 1024 entries in the PGD.
|
||||
*/
|
||||
#define PTRS_PER_PTE 8192
|
||||
#define PTRS_PER_PGD 1024
|
||||
|
||||
/*
|
||||
* PGDIR_SHIFT determines the size a top-level page table entry can map.
|
||||
*/
|
||||
#define PGDIR_SHIFT 29
|
||||
#define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
|
||||
#define PGDIR_MASK (~(PGDIR_SIZE-1))
|
||||
|
||||
/*
|
||||
* section address mask and size definitions.
|
||||
*/
|
||||
#define SECTION_SHIFT 29
|
||||
#define SECTION_SIZE (_AC(1, UL) << SECTION_SHIFT)
|
||||
#define SECTION_MASK (~(SECTION_SIZE-1))
|
||||
|
||||
#endif
|
60
arch/arm64/include/asm/pgtable-2level-types.h
Normal file
60
arch/arm64/include/asm/pgtable-2level-types.h
Normal file
@ -0,0 +1,60 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGTABLE_2LEVEL_TYPES_H
|
||||
#define __ASM_PGTABLE_2LEVEL_TYPES_H
|
||||
|
||||
typedef u64 pteval_t;
|
||||
typedef u64 pgdval_t;
|
||||
typedef pgdval_t pmdval_t;
|
||||
|
||||
#undef STRICT_MM_TYPECHECKS
|
||||
|
||||
#ifdef STRICT_MM_TYPECHECKS
|
||||
|
||||
/*
|
||||
* These are used to make use of C type-checking..
|
||||
*/
|
||||
typedef struct { pteval_t pte; } pte_t;
|
||||
typedef struct { pgdval_t pgd; } pgd_t;
|
||||
typedef struct { pteval_t pgprot; } pgprot_t;
|
||||
|
||||
#define pte_val(x) ((x).pte)
|
||||
#define pgd_val(x) ((x).pgd)
|
||||
#define pgprot_val(x) ((x).pgprot)
|
||||
|
||||
#define __pte(x) ((pte_t) { (x) } )
|
||||
#define __pgd(x) ((pgd_t) { (x) } )
|
||||
#define __pgprot(x) ((pgprot_t) { (x) } )
|
||||
|
||||
#else /* !STRICT_MM_TYPECHECKS */
|
||||
|
||||
typedef pteval_t pte_t;
|
||||
typedef pgdval_t pgd_t;
|
||||
typedef pteval_t pgprot_t;
|
||||
|
||||
#define pte_val(x) (x)
|
||||
#define pgd_val(x) (x)
|
||||
#define pgprot_val(x) (x)
|
||||
|
||||
#define __pte(x) (x)
|
||||
#define __pgd(x) (x)
|
||||
#define __pgprot(x) (x)
|
||||
|
||||
#endif /* STRICT_MM_TYPECHECKS */
|
||||
|
||||
#include <asm-generic/pgtable-nopmd.h>
|
||||
|
||||
#endif /* __ASM_PGTABLE_2LEVEL_TYPES_H */
|
50
arch/arm64/include/asm/pgtable-3level-hwdef.h
Normal file
50
arch/arm64/include/asm/pgtable-3level-hwdef.h
Normal file
@ -0,0 +1,50 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGTABLE_3LEVEL_HWDEF_H
|
||||
#define __ASM_PGTABLE_3LEVEL_HWDEF_H
|
||||
|
||||
/*
|
||||
* With LPAE and 4KB pages, there are 3 levels of page tables. Each level has
|
||||
* 512 entries of 8 bytes each, occupying a 4K page. The first level table
|
||||
* covers a range of 512GB, each entry representing 1GB. The user and kernel
|
||||
* address spaces are limited to 512GB each.
|
||||
*/
|
||||
#define PTRS_PER_PTE 512
|
||||
#define PTRS_PER_PMD 512
|
||||
#define PTRS_PER_PGD 512
|
||||
|
||||
/*
|
||||
* PGDIR_SHIFT determines the size a top-level page table entry can map.
|
||||
*/
|
||||
#define PGDIR_SHIFT 30
|
||||
#define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
|
||||
#define PGDIR_MASK (~(PGDIR_SIZE-1))
|
||||
|
||||
/*
|
||||
* PMD_SHIFT determines the size a middle-level page table entry can map.
|
||||
*/
|
||||
#define PMD_SHIFT 21
|
||||
#define PMD_SIZE (_AC(1, UL) << PMD_SHIFT)
|
||||
#define PMD_MASK (~(PMD_SIZE-1))
|
||||
|
||||
/*
|
||||
* section address mask and size definitions.
|
||||
*/
|
||||
#define SECTION_SHIFT 21
|
||||
#define SECTION_SIZE (_AC(1, UL) << SECTION_SHIFT)
|
||||
#define SECTION_MASK (~(SECTION_SIZE-1))
|
||||
|
||||
#endif
|
66
arch/arm64/include/asm/pgtable-3level-types.h
Normal file
66
arch/arm64/include/asm/pgtable-3level-types.h
Normal file
@ -0,0 +1,66 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGTABLE_3LEVEL_TYPES_H
|
||||
#define __ASM_PGTABLE_3LEVEL_TYPES_H
|
||||
|
||||
typedef u64 pteval_t;
|
||||
typedef u64 pmdval_t;
|
||||
typedef u64 pgdval_t;
|
||||
|
||||
#undef STRICT_MM_TYPECHECKS
|
||||
|
||||
#ifdef STRICT_MM_TYPECHECKS
|
||||
|
||||
/*
|
||||
* These are used to make use of C type-checking..
|
||||
*/
|
||||
typedef struct { pteval_t pte; } pte_t;
|
||||
typedef struct { pmdval_t pmd; } pmd_t;
|
||||
typedef struct { pgdval_t pgd; } pgd_t;
|
||||
typedef struct { pteval_t pgprot; } pgprot_t;
|
||||
|
||||
#define pte_val(x) ((x).pte)
|
||||
#define pmd_val(x) ((x).pmd)
|
||||
#define pgd_val(x) ((x).pgd)
|
||||
#define pgprot_val(x) ((x).pgprot)
|
||||
|
||||
#define __pte(x) ((pte_t) { (x) } )
|
||||
#define __pmd(x) ((pmd_t) { (x) } )
|
||||
#define __pgd(x) ((pgd_t) { (x) } )
|
||||
#define __pgprot(x) ((pgprot_t) { (x) } )
|
||||
|
||||
#else /* !STRICT_MM_TYPECHECKS */
|
||||
|
||||
typedef pteval_t pte_t;
|
||||
typedef pmdval_t pmd_t;
|
||||
typedef pgdval_t pgd_t;
|
||||
typedef pteval_t pgprot_t;
|
||||
|
||||
#define pte_val(x) (x)
|
||||
#define pmd_val(x) (x)
|
||||
#define pgd_val(x) (x)
|
||||
#define pgprot_val(x) (x)
|
||||
|
||||
#define __pte(x) (x)
|
||||
#define __pmd(x) (x)
|
||||
#define __pgd(x) (x)
|
||||
#define __pgprot(x) (x)
|
||||
|
||||
#endif /* STRICT_MM_TYPECHECKS */
|
||||
|
||||
#include <asm-generic/pgtable-nopud.h>
|
||||
|
||||
#endif /* __ASM_PGTABLE_3LEVEL_TYPES_H */
|
94
arch/arm64/include/asm/pgtable-hwdef.h
Normal file
94
arch/arm64/include/asm/pgtable-hwdef.h
Normal file
@ -0,0 +1,94 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGTABLE_HWDEF_H
|
||||
#define __ASM_PGTABLE_HWDEF_H
|
||||
|
||||
#ifdef CONFIG_ARM64_64K_PAGES
|
||||
#include <asm/pgtable-2level-hwdef.h>
|
||||
#else
|
||||
#include <asm/pgtable-3level-hwdef.h>
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Hardware page table definitions.
|
||||
*
|
||||
* Level 2 descriptor (PMD).
|
||||
*/
|
||||
#define PMD_TYPE_MASK (_AT(pmdval_t, 3) << 0)
|
||||
#define PMD_TYPE_FAULT (_AT(pmdval_t, 0) << 0)
|
||||
#define PMD_TYPE_TABLE (_AT(pmdval_t, 3) << 0)
|
||||
#define PMD_TYPE_SECT (_AT(pmdval_t, 1) << 0)
|
||||
|
||||
/*
|
||||
* Section
|
||||
*/
|
||||
#define PMD_SECT_S (_AT(pmdval_t, 3) << 8)
|
||||
#define PMD_SECT_AF (_AT(pmdval_t, 1) << 10)
|
||||
#define PMD_SECT_NG (_AT(pmdval_t, 1) << 11)
|
||||
#define PMD_SECT_XN (_AT(pmdval_t, 1) << 54)
|
||||
|
||||
/*
|
||||
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
|
||||
*/
|
||||
#define PMD_ATTRINDX(t) (_AT(pmdval_t, (t)) << 2)
|
||||
#define PMD_ATTRINDX_MASK (_AT(pmdval_t, 7) << 2)
|
||||
|
||||
/*
|
||||
* Level 3 descriptor (PTE).
|
||||
*/
|
||||
#define PTE_TYPE_MASK (_AT(pteval_t, 3) << 0)
|
||||
#define PTE_TYPE_FAULT (_AT(pteval_t, 0) << 0)
|
||||
#define PTE_TYPE_PAGE (_AT(pteval_t, 3) << 0)
|
||||
#define PTE_USER (_AT(pteval_t, 1) << 6) /* AP[1] */
|
||||
#define PTE_RDONLY (_AT(pteval_t, 1) << 7) /* AP[2] */
|
||||
#define PTE_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */
|
||||
#define PTE_AF (_AT(pteval_t, 1) << 10) /* Access Flag */
|
||||
#define PTE_NG (_AT(pteval_t, 1) << 11) /* nG */
|
||||
#define PTE_XN (_AT(pteval_t, 1) << 54) /* XN */
|
||||
|
||||
/*
|
||||
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
|
||||
*/
|
||||
#define PTE_ATTRINDX(t) (_AT(pteval_t, (t)) << 2)
|
||||
#define PTE_ATTRINDX_MASK (_AT(pteval_t, 7) << 2)
|
||||
|
||||
/*
|
||||
* 40-bit physical address supported.
|
||||
*/
|
||||
#define PHYS_MASK_SHIFT (40)
|
||||
#define PHYS_MASK ((UL(1) << PHYS_MASK_SHIFT) - 1)
|
||||
|
||||
/*
|
||||
* TCR flags.
|
||||
*/
|
||||
#define TCR_TxSZ(x) (((UL(64) - (x)) << 16) | ((UL(64) - (x)) << 0))
|
||||
#define TCR_IRGN_NC ((UL(0) << 8) | (UL(0) << 24))
|
||||
#define TCR_IRGN_WBWA ((UL(1) << 8) | (UL(1) << 24))
|
||||
#define TCR_IRGN_WT ((UL(2) << 8) | (UL(2) << 24))
|
||||
#define TCR_IRGN_WBnWA ((UL(3) << 8) | (UL(3) << 24))
|
||||
#define TCR_IRGN_MASK ((UL(3) << 8) | (UL(3) << 24))
|
||||
#define TCR_ORGN_NC ((UL(0) << 10) | (UL(0) << 26))
|
||||
#define TCR_ORGN_WBWA ((UL(1) << 10) | (UL(1) << 26))
|
||||
#define TCR_ORGN_WT ((UL(2) << 10) | (UL(2) << 26))
|
||||
#define TCR_ORGN_WBnWA ((UL(3) << 10) | (UL(3) << 26))
|
||||
#define TCR_ORGN_MASK ((UL(3) << 10) | (UL(3) << 26))
|
||||
#define TCR_SHARED ((UL(3) << 12) | (UL(3) << 28))
|
||||
#define TCR_TG0_64K (UL(1) << 14)
|
||||
#define TCR_TG1_64K (UL(1) << 30)
|
||||
#define TCR_IPS_40BIT (UL(2) << 32)
|
||||
#define TCR_ASID16 (UL(1) << 36)
|
||||
|
||||
#endif
|
328
arch/arm64/include/asm/pgtable.h
Normal file
328
arch/arm64/include/asm/pgtable.h
Normal file
@ -0,0 +1,328 @@
|
||||
/*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_PGTABLE_H
|
||||
#define __ASM_PGTABLE_H
|
||||
|
||||
#include <asm/proc-fns.h>
|
||||
|
||||
#include <asm/memory.h>
|
||||
#include <asm/pgtable-hwdef.h>
|
||||
|
||||
/*
|
||||
* Software defined PTE bits definition.
|
||||
*/
|
||||
#define PTE_VALID (_AT(pteval_t, 1) << 0) /* pte_present() check */
|
||||
#define PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !pte_present() */
|
||||
#define PTE_DIRTY (_AT(pteval_t, 1) << 55)
|
||||
#define PTE_SPECIAL (_AT(pteval_t, 1) << 56)
|
||||
|
||||
/*
|
||||
* VMALLOC and SPARSEMEM_VMEMMAP ranges.
|
||||
*/
|
||||
#define VMALLOC_START UL(0xffffff8000000000)
|
||||
#define VMALLOC_END (PAGE_OFFSET - UL(0x400000000) - SZ_64K)
|
||||
|
||||
#define vmemmap ((struct page *)(VMALLOC_END + SZ_64K))
|
||||
|
||||
#define FIRST_USER_ADDRESS 0
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
extern void __pte_error(const char *file, int line, unsigned long val);
|
||||
extern void __pmd_error(const char *file, int line, unsigned long val);
|
||||
extern void __pgd_error(const char *file, int line, unsigned long val);
|
||||
|
||||
#define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte))
|
||||
#ifndef CONFIG_ARM64_64K_PAGES
|
||||
#define pmd_ERROR(pmd) __pmd_error(__FILE__, __LINE__, pmd_val(pmd))
|
||||
#endif
|
||||
#define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd))
|
||||
|
||||
/*
|
||||
* The pgprot_* and protection_map entries will be fixed up at runtime to
|
||||
* include the cachable and bufferable bits based on memory policy, as well as
|
||||
* any architecture dependent bits like global/ASID and SMP shared mapping
|
||||
* bits.
|
||||
*/
|
||||
#define _PAGE_DEFAULT PTE_TYPE_PAGE | PTE_AF
|
||||
|
||||
extern pgprot_t pgprot_default;
|
||||
|
||||
#define _MOD_PROT(p, b) __pgprot(pgprot_val(p) | (b))
|
||||
|
||||
#define PAGE_NONE _MOD_PROT(pgprot_default, PTE_NG | PTE_XN | PTE_RDONLY)
|
||||
#define PAGE_SHARED _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN)
|
||||
#define PAGE_SHARED_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG)
|
||||
#define PAGE_COPY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
|
||||
#define PAGE_COPY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_RDONLY)
|
||||
#define PAGE_READONLY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
|
||||
#define PAGE_READONLY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_RDONLY)
|
||||
#define PAGE_KERNEL _MOD_PROT(pgprot_default, PTE_XN | PTE_DIRTY)
|
||||
#define PAGE_KERNEL_EXEC _MOD_PROT(pgprot_default, PTE_DIRTY)
|
||||
|
||||
#define __PAGE_NONE __pgprot(_PAGE_DEFAULT | PTE_NG | PTE_XN | PTE_RDONLY)
|
||||
#define __PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN)
|
||||
#define __PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG)
|
||||
#define __PAGE_COPY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
|
||||
#define __PAGE_COPY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_RDONLY)
|
||||
#define __PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
|
||||
#define __PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_RDONLY)
|
||||
|
||||
#endif /* __ASSEMBLY__ */
|
||||
|
||||
#define __P000 __PAGE_NONE
|
||||
#define __P001 __PAGE_READONLY
|
||||
#define __P010 __PAGE_COPY
|
||||
#define __P011 __PAGE_COPY
|
||||
#define __P100 __PAGE_READONLY_EXEC
|
||||
#define __P101 __PAGE_READONLY_EXEC
|
||||
#define __P110 __PAGE_COPY_EXEC
|
||||
#define __P111 __PAGE_COPY_EXEC
|
||||
|
||||
#define __S000 __PAGE_NONE
|
||||
#define __S001 __PAGE_READONLY
|
||||
#define __S010 __PAGE_SHARED
|
||||
#define __S011 __PAGE_SHARED
|
||||
#define __S100 __PAGE_READONLY_EXEC
|
||||
#define __S101 __PAGE_READONLY_EXEC
|
||||
#define __S110 __PAGE_SHARED_EXEC
|
||||
#define __S111 __PAGE_SHARED_EXEC
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
/*
|
||||
* ZERO_PAGE is a global shared page that is always zero: used
|
||||
* for zero-mapped memory areas etc..
|
||||
*/
|
||||
extern struct page *empty_zero_page;
|
||||
#define ZERO_PAGE(vaddr) (empty_zero_page)
|
||||
|
||||
#define pte_pfn(pte) ((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
|
||||
|
||||
#define pfn_pte(pfn,prot) (__pte(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
|
||||
|
||||
#define pte_none(pte) (!pte_val(pte))
|
||||
#define pte_clear(mm,addr,ptep) set_pte(ptep, __pte(0))
|
||||
#define pte_page(pte) (pfn_to_page(pte_pfn(pte)))
|
||||
#define pte_offset_kernel(dir,addr) (pmd_page_vaddr(*(dir)) + __pte_index(addr))
|
||||
|
||||
#define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr))
|
||||
#define pte_offset_map_nested(dir,addr) pte_offset_kernel((dir), (addr))
|
||||
#define pte_unmap(pte) do { } while (0)
|
||||
#define pte_unmap_nested(pte) do { } while (0)
|
||||
|
||||
/*
|
||||
* The following only work if pte_present(). Undefined behaviour otherwise.
|
||||
*/
|
||||
#define pte_present(pte) (pte_val(pte) & PTE_VALID)
|
||||
#define pte_dirty(pte) (pte_val(pte) & PTE_DIRTY)
|
||||
#define pte_young(pte) (pte_val(pte) & PTE_AF)
|
||||
#define pte_special(pte) (pte_val(pte) & PTE_SPECIAL)
|
||||
#define pte_write(pte) (!(pte_val(pte) & PTE_RDONLY))
|
||||
#define pte_exec(pte) (!(pte_val(pte) & PTE_XN))
|
||||
|
||||
#define pte_present_exec_user(pte) \
|
||||
((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_XN)) == \
|
||||
(PTE_VALID | PTE_USER))
|
||||
|
||||
#define PTE_BIT_FUNC(fn,op) \
|
||||
static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; }
|
||||
|
||||
PTE_BIT_FUNC(wrprotect, |= PTE_RDONLY);
|
||||
PTE_BIT_FUNC(mkwrite, &= ~PTE_RDONLY);
|
||||
PTE_BIT_FUNC(mkclean, &= ~PTE_DIRTY);
|
||||
PTE_BIT_FUNC(mkdirty, |= PTE_DIRTY);
|
||||
PTE_BIT_FUNC(mkold, &= ~PTE_AF);
|
||||
PTE_BIT_FUNC(mkyoung, |= PTE_AF);
|
||||
PTE_BIT_FUNC(mkspecial, |= PTE_SPECIAL);
|
||||
|
||||
static inline void set_pte(pte_t *ptep, pte_t pte)
|
||||
{
|
||||
*ptep = pte;
|
||||
}
|
||||
|
||||
extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
|
||||
|
||||
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
|
||||
pte_t *ptep, pte_t pte)
|
||||
{
|
||||
if (pte_present_exec_user(pte))
|
||||
__sync_icache_dcache(pte, addr);
|
||||
set_pte(ptep, pte);
|
||||
}
|
||||
|
||||
/*
|
||||
* Huge pte definitions.
|
||||
*/
|
||||
#define pte_huge(pte) ((pte_val(pte) & PTE_TYPE_MASK) == PTE_TYPE_HUGEPAGE)
|
||||
#define pte_mkhuge(pte) (__pte((pte_val(pte) & ~PTE_TYPE_MASK) | PTE_TYPE_HUGEPAGE))
|
||||
|
||||
#define __pgprot_modify(prot,mask,bits) \
|
||||
__pgprot((pgprot_val(prot) & ~(mask)) | (bits))
|
||||
|
||||
#define __HAVE_ARCH_PTE_SPECIAL
|
||||
|
||||
/*
|
||||
* Mark the prot value as uncacheable and unbufferable.
|
||||
*/
|
||||
#define pgprot_noncached(prot) \
|
||||
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE))
|
||||
#define pgprot_writecombine(prot) \
|
||||
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_GRE))
|
||||
#define pgprot_dmacoherent(prot) \
|
||||
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC))
|
||||
#define __HAVE_PHYS_MEM_ACCESS_PROT
|
||||
struct file;
|
||||
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
|
||||
unsigned long size, pgprot_t vma_prot);
|
||||
|
||||
#define pmd_none(pmd) (!pmd_val(pmd))
|
||||
#define pmd_present(pmd) (pmd_val(pmd))
|
||||
|
||||
#define pmd_bad(pmd) (!(pmd_val(pmd) & 2))
|
||||
|
||||
static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
|
||||
{
|
||||
*pmdp = pmd;
|
||||
dsb();
|
||||
}
|
||||
|
||||
static inline void pmd_clear(pmd_t *pmdp)
|
||||
{
|
||||
set_pmd(pmdp, __pmd(0));
|
||||
}
|
||||
|
||||
static inline pte_t *pmd_page_vaddr(pmd_t pmd)
|
||||
{
|
||||
return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
|
||||
}
|
||||
|
||||
#define pmd_page(pmd) pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
|
||||
|
||||
/*
|
||||
* Conversion functions: convert a page and protection to a page entry,
|
||||
* and a page entry and page directory to the page they refer to.
|
||||
*/
|
||||
#define mk_pte(page,prot) pfn_pte(page_to_pfn(page),prot)
|
||||
|
||||
#ifndef CONFIG_ARM64_64K_PAGES
|
||||
|
||||
#define pud_none(pud) (!pud_val(pud))
|
||||
#define pud_bad(pud) (!(pud_val(pud) & 2))
|
||||
#define pud_present(pud) (pud_val(pud))
|
||||
|
||||
static inline void set_pud(pud_t *pudp, pud_t pud)
|
||||
{
|
||||
*pudp = pud;
|
||||
dsb();
|
||||
}
|
||||
|
||||
static inline void pud_clear(pud_t *pudp)
|
||||
{
|
||||
set_pud(pudp, __pud(0));
|
||||
}
|
||||
|
||||
static inline pmd_t *pud_page_vaddr(pud_t pud)
|
||||
{
|
||||
return __va(pud_val(pud) & PHYS_MASK & (s32)PAGE_MASK);
|
||||
}
|
||||
|
||||
#endif /* CONFIG_ARM64_64K_PAGES */
|
||||
|
||||
/* to find an entry in a page-table-directory */
|
||||
#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
|
||||
|
||||
#define pgd_offset(mm, addr) ((mm)->pgd+pgd_index(addr))
|
||||
|
||||
/* to find an entry in a kernel page-table-directory */
|
||||
#define pgd_offset_k(addr) pgd_offset(&init_mm, addr)
|
||||
|
||||
/* Find an entry in the second-level page table.. */
|
||||
#ifndef CONFIG_ARM64_64K_PAGES
|
||||
#define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
|
||||
static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
|
||||
{
|
||||
return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
|
||||
}
|
||||
#endif
|
||||
|
||||
/* Find an entry in the third-level page table.. */
|
||||
#define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
|
||||
|
||||
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
|
||||
{
|
||||
const pteval_t mask = PTE_USER | PTE_XN | PTE_RDONLY;
|
||||
pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
|
||||
return pte;
|
||||
}
|
||||
|
||||
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
|
||||
extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
|
||||
|
||||
#define SWAPPER_DIR_SIZE (3 * PAGE_SIZE)
|
||||
#define IDMAP_DIR_SIZE (2 * PAGE_SIZE)
|
||||
|
||||
/*
|
||||
* Encode and decode a swap entry:
|
||||
* bits 0-1: present (must be zero)
|
||||
* bit 2: PTE_FILE
|
||||
* bits 3-8: swap type
|
||||
* bits 9-63: swap offset
|
||||
*/
|
||||
#define __SWP_TYPE_SHIFT 3
|
||||
#define __SWP_TYPE_BITS 6
|
||||
#define __SWP_TYPE_MASK ((1 << __SWP_TYPE_BITS) - 1)
|
||||
#define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
|
||||
|
||||
#define __swp_type(x) (((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
|
||||
#define __swp_offset(x) ((x).val >> __SWP_OFFSET_SHIFT)
|
||||
#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
|
||||
|
||||
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
|
||||
#define __swp_entry_to_pte(swp) ((pte_t) { (swp).val })
|
||||
|
||||
/*
|
||||
* Ensure that there are not more swap files than can be encoded in the kernel
|
||||
* the PTEs.
|
||||
*/
|
||||
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
|
||||
|
||||
/*
|
||||
* Encode and decode a file entry:
|
||||
* bits 0-1: present (must be zero)
|
||||
* bit 2: PTE_FILE
|
||||
* bits 3-63: file offset / PAGE_SIZE
|
||||
*/
|
||||
#define pte_file(pte) (pte_val(pte) & PTE_FILE)
|
||||
#define pte_to_pgoff(x) (pte_val(x) >> 3)
|
||||
#define pgoff_to_pte(x) __pte(((x) << 3) | PTE_FILE)
|
||||
|
||||
#define PTE_FILE_MAX_BITS 61

extern int kern_addr_valid(unsigned long addr);

#include <asm-generic/pgtable.h>

/*
 * remap a physical page `pfn' of size `size' with page protection `prot'
 * into virtual address `from'
 */
#define io_remap_pfn_range(vma,from,pfn,size,prot) \
		remap_pfn_range(vma, from, pfn, size, prot)

#define pgtable_cache_init() do { } while (0)

#endif /* !__ASSEMBLY__ */

#endif /* __ASM_PGTABLE_H */
82
arch/arm64/include/asm/pmu.h
Normal file
@@ -0,0 +1,82 @@
/*
 * Based on arch/arm/include/asm/pmu.h
 *
 * Copyright (C) 2009 picoChip Designs Ltd, Jamie Iles
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PMU_H
#define __ASM_PMU_H

#ifdef CONFIG_HW_PERF_EVENTS

/* The events for a given PMU register set. */
struct pmu_hw_events {
	/*
	 * The events that are active on the PMU for the given index.
	 */
	struct perf_event	**events;

	/*
	 * A 1 bit for an index indicates that the counter is being used for
	 * an event. A 0 means that the counter can be used.
	 */
	unsigned long		*used_mask;

	/*
	 * Hardware lock to serialize accesses to PMU registers. Needed for the
	 * read/modify/write sequences.
	 */
	raw_spinlock_t		pmu_lock;
};

struct arm_pmu {
	struct pmu		pmu;
	cpumask_t		active_irqs;
	const char		*name;
	irqreturn_t		(*handle_irq)(int irq_num, void *dev);
	void			(*enable)(struct hw_perf_event *evt, int idx);
	void			(*disable)(struct hw_perf_event *evt, int idx);
	int			(*get_event_idx)(struct pmu_hw_events *hw_events,
						 struct hw_perf_event *hwc);
	int			(*set_event_filter)(struct hw_perf_event *evt,
						    struct perf_event_attr *attr);
	u32			(*read_counter)(int idx);
	void			(*write_counter)(int idx, u32 val);
	void			(*start)(void);
	void			(*stop)(void);
	void			(*reset)(void *);
	int			(*map_event)(struct perf_event *event);
	int			num_events;
	atomic_t		active_events;
	struct mutex		reserve_mutex;
	u64			max_period;
	struct platform_device	*plat_device;
	struct pmu_hw_events	*(*get_hw_events)(void);
};

#define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))

int __init armpmu_register(struct arm_pmu *armpmu, char *name, int type);
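
/*
 * Illustrative sketch only (hypothetical names, not part of this patch): a
 * CPU PMU backend fills in an arm_pmu instance and registers it, e.g.
 *
 *	static struct arm_pmu armv8_pmu = { .name = "armv8_pmuv3", ... };
 *	armpmu_register(&armv8_pmu, "armv8_pmuv3", PERF_TYPE_RAW);
 *
 * perf core then drives the callbacks above (enable/disable, read_counter,
 * ...) through the generic struct pmu embedded at the top of arm_pmu.
 */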

u64 armpmu_event_update(struct perf_event *event,
			struct hw_perf_event *hwc,
			int idx);

int armpmu_event_set_period(struct perf_event *event,
			    struct hw_perf_event *hwc,
			    int idx);

#endif /* CONFIG_HW_PERF_EVENTS */
#endif /* __ASM_PMU_H */
50
arch/arm64/include/asm/proc-fns.h
Normal file
@@ -0,0 +1,50 @@
/*
 * Based on arch/arm/include/asm/proc-fns.h
 *
 * Copyright (C) 1997-1999 Russell King
 * Copyright (C) 2000 Deep Blue Solutions Ltd
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PROCFNS_H
#define __ASM_PROCFNS_H

#ifdef __KERNEL__
#ifndef __ASSEMBLY__

#include <asm/page.h>

struct mm_struct;

extern void cpu_cache_off(void);
extern void cpu_do_idle(void);
extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
extern void cpu_reset(unsigned long addr) __attribute__((noreturn));

#include <asm/memory.h>

#define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)

#define cpu_get_pgd()					\
({							\
	unsigned long pg;				\
	asm("mrs	%0, ttbr0_el1\n"		\
	    : "=r" (pg));				\
	pg &= ~0xffff000000003ffful;			\
	(pgd_t *)phys_to_virt(pg);			\
})
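/*
 * The mask above drops the ASID field in bits [63:48] of TTBR0_EL1 and the
 * low 14 bits, leaving the physical base of the active pgd, which is then
 * converted back to its kernel virtual address.
 */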

#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __ASM_PROCFNS_H */
175
arch/arm64/include/asm/processor.h
Normal file
@@ -0,0 +1,175 @@
/*
 * Based on arch/arm/include/asm/processor.h
 *
 * Copyright (C) 1995-1999 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PROCESSOR_H
#define __ASM_PROCESSOR_H

/*
 * Default implementation of macro that returns current
 * instruction pointer ("program counter").
 */
#define current_text_addr() ({ __label__ _l; _l: &&_l;})

#ifdef __KERNEL__

#include <linux/string.h>

#include <asm/fpsimd.h>
#include <asm/hw_breakpoint.h>
#include <asm/ptrace.h>
#include <asm/types.h>

#ifdef __KERNEL__
#define STACK_TOP_MAX		TASK_SIZE_64
#ifdef CONFIG_COMPAT
#define AARCH32_VECTORS_BASE	0xffff0000
#define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
#else
#define STACK_TOP		STACK_TOP_MAX
#endif /* CONFIG_COMPAT */
#endif /* __KERNEL__ */

struct debug_info {
	/* Have we suspended stepping by a debugger? */
	int			suspended_step;
	/* Allow breakpoints and watchpoints to be disabled for this thread. */
	int			bps_disabled;
	int			wps_disabled;
	/* Hardware breakpoints pinned to this task. */
	struct perf_event	*hbp_break[ARM_MAX_BRP];
	struct perf_event	*hbp_watch[ARM_MAX_WRP];
};

struct cpu_context {
	unsigned long	x19;
	unsigned long	x20;
	unsigned long	x21;
	unsigned long	x22;
	unsigned long	x23;
	unsigned long	x24;
	unsigned long	x25;
	unsigned long	x26;
	unsigned long	x27;
	unsigned long	x28;
	unsigned long	fp;
	unsigned long	sp;
	unsigned long	pc;
};

struct thread_struct {
	struct cpu_context	cpu_context;	/* cpu context */
	unsigned long		tp_value;
	struct fpsimd_state	fpsimd_state;
	unsigned long		fault_address;	/* fault info */
	struct debug_info	debug;		/* debugging */
};

#define INIT_THREAD  {	}

static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
{
	memset(regs, 0, sizeof(*regs));
	regs->syscallno = ~0UL;
	regs->pc = pc;
}

static inline void start_thread(struct pt_regs *regs, unsigned long pc,
				unsigned long sp)
{
	unsigned long *stack = (unsigned long *)sp;

	start_thread_common(regs, pc);
	regs->pstate = PSR_MODE_EL0t;
	regs->sp = sp;
	regs->regs[2] = stack[2];	/* x2 (envp) */
	regs->regs[1] = stack[1];	/* x1 (argv) */
	regs->regs[0] = stack[0];	/* x0 (argc) */
}

#ifdef CONFIG_COMPAT
static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
				       unsigned long sp)
{
	unsigned int *stack = (unsigned int *)sp;

	start_thread_common(regs, pc);
	regs->pstate = COMPAT_PSR_MODE_USR;
	if (pc & 1)
		regs->pstate |= COMPAT_PSR_T_BIT;
	regs->compat_sp = sp;
	regs->regs[2] = stack[2];	/* x2 (envp) */
	regs->regs[1] = stack[1];	/* x1 (argv) */
	regs->regs[0] = stack[0];	/* x0 (argc) */
}
#endif

/* Forward declaration, a strange C thing */
struct task_struct;

/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);

/* Prepare to copy thread state - unlazy all lazy status */
#define prepare_to_copy(tsk)	do { } while (0)

unsigned long get_wchan(struct task_struct *p);

#define cpu_relax()			barrier()

/* Thread switching */
extern struct task_struct *cpu_switch_to(struct task_struct *prev,
					 struct task_struct *next);

/*
 * Create a new kernel thread
 */
extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);

#define task_pt_regs(p) \
	((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)

#define KSTK_EIP(tsk)	task_pt_regs(tsk)->pc
#define KSTK_ESP(tsk)	task_pt_regs(tsk)->sp

/*
 * Prefetching support
 */
#define ARCH_HAS_PREFETCH
static inline void prefetch(const void *ptr)
{
	asm volatile("prfm pldl1keep, %a0\n" : : "p" (ptr));
}

#define ARCH_HAS_PREFETCHW
static inline void prefetchw(const void *ptr)
{
	asm volatile("prfm pstl1keep, %a0\n" : : "p" (ptr));
}

#define ARCH_HAS_SPINLOCK_PREFETCH
static inline void spin_lock_prefetch(const void *x)
{
	prefetchw(x);
}

#define HAVE_ARCH_PICK_MMAP_LAYOUT

#endif

#endif /* __ASM_PROCESSOR_H */
1
arch/arm64/include/asm/prom.h
Normal file
@@ -0,0 +1 @@
/* Empty for now */
207
arch/arm64/include/asm/ptrace.h
Normal file
@@ -0,0 +1,207 @@
/*
 * Based on arch/arm/include/asm/ptrace.h
 *
 * Copyright (C) 1996-2003 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PTRACE_H
#define __ASM_PTRACE_H

#include <linux/types.h>

#include <asm/hwcap.h>

/* AArch32-specific ptrace requests */
#define COMPAT_PTRACE_GETREGS		12
#define COMPAT_PTRACE_SETREGS		13
#define COMPAT_PTRACE_GET_THREAD_AREA	22
#define COMPAT_PTRACE_SET_SYSCALL	23
#define COMPAT_PTRACE_GETVFPREGS	27
#define COMPAT_PTRACE_SETVFPREGS	28
#define COMPAT_PTRACE_GETHBPREGS	29
#define COMPAT_PTRACE_SETHBPREGS	30

/*
 * PSR bits
 */
#define PSR_MODE_EL0t	0x00000000
#define PSR_MODE_EL1t	0x00000004
#define PSR_MODE_EL1h	0x00000005
#define PSR_MODE_EL2t	0x00000008
#define PSR_MODE_EL2h	0x00000009
#define PSR_MODE_EL3t	0x0000000c
#define PSR_MODE_EL3h	0x0000000d
#define PSR_MODE_MASK	0x0000000f

/* AArch32 CPSR bits */
#define PSR_MODE32_BIT		0x00000010
#define COMPAT_PSR_MODE_USR	0x00000010
#define COMPAT_PSR_T_BIT	0x00000020
#define COMPAT_PSR_IT_MASK	0x0600fc00	/* If-Then execution state mask */

/* AArch64 SPSR bits */
#define PSR_F_BIT	0x00000040
#define PSR_I_BIT	0x00000080
#define PSR_A_BIT	0x00000100
#define PSR_D_BIT	0x00000200
#define PSR_Q_BIT	0x08000000
#define PSR_V_BIT	0x10000000
#define PSR_C_BIT	0x20000000
#define PSR_Z_BIT	0x40000000
#define PSR_N_BIT	0x80000000

/*
 * Groups of PSR bits
 */
#define PSR_f		0xff000000	/* Flags */
#define PSR_s		0x00ff0000	/* Status */
#define PSR_x		0x0000ff00	/* Extension */
#define PSR_c		0x000000ff	/* Control */

/*
 * These are 'magic' values for PTRACE_PEEKUSR that return info about where a
 * process is located in memory.
 */
#define PT_TEXT_ADDR		0x10000
#define PT_DATA_ADDR		0x10004
#define PT_TEXT_END_ADDR	0x10008

#ifndef __ASSEMBLY__

/*
 * User structures for general purpose, floating point and debug registers.
 */
struct user_pt_regs {
	__u64		regs[31];
	__u64		sp;
	__u64		pc;
	__u64		pstate;
};

struct user_fpsimd_state {
	__uint128_t	vregs[32];
	__u32		fpsr;
	__u32		fpcr;
};

struct user_hwdebug_state {
	__u32		dbg_info;
	struct {
		__u64	addr;
		__u32	ctrl;
	}		dbg_regs[16];
};

#ifdef __KERNEL__

/* sizeof(struct user) for AArch32 */
#define COMPAT_USER_SZ	296
/* AArch32 uses x13 as the stack pointer... */
#define compat_sp	regs[13]
/* ... and x14 as the link register. */
#define compat_lr	regs[14]

/*
 * This struct defines the way the registers are stored on the stack during an
 * exception. Note that sizeof(struct pt_regs) has to be a multiple of 16 (for
 * stack alignment). struct user_pt_regs must form a prefix of struct pt_regs.
 */
struct pt_regs {
	union {
		struct user_pt_regs user_regs;
		struct {
			u64 regs[31];
			u64 sp;
			u64 pc;
			u64 pstate;
		};
	};
	u64 orig_x0;
	u64 syscallno;
};

#define arch_has_single_step()	(1)

#ifdef CONFIG_COMPAT
#define compat_thumb_mode(regs) \
	(((regs)->pstate & COMPAT_PSR_T_BIT))
#else
#define compat_thumb_mode(regs) (0)
#endif

#define user_mode(regs)	\
	(((regs)->pstate & PSR_MODE_MASK) == PSR_MODE_EL0t)

#define compat_user_mode(regs)	\
	(((regs)->pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) == \
	 (PSR_MODE32_BIT | PSR_MODE_EL0t))

#define processor_mode(regs) \
	((regs)->pstate & PSR_MODE_MASK)

#define interrupts_enabled(regs) \
	(!((regs)->pstate & PSR_I_BIT))

#define fast_interrupts_enabled(regs) \
	(!((regs)->pstate & PSR_F_BIT))

#define user_stack_pointer(regs) \
	((regs)->sp)

/*
 * Are the current registers suitable for user mode? (used to maintain
 * security in signal handlers)
 */
static inline int valid_user_regs(struct user_pt_regs *regs)
{
	if (user_mode(regs) && (regs->pstate & PSR_I_BIT) == 0) {
		regs->pstate &= ~(PSR_F_BIT | PSR_A_BIT);

		/* The T bit is reserved for AArch64 */
		if (!(regs->pstate & PSR_MODE32_BIT))
			regs->pstate &= ~COMPAT_PSR_T_BIT;

		return 1;
	}

	/*
	 * Force PSR to something logical...
	 */
	regs->pstate &= PSR_f | PSR_s | (PSR_x & ~PSR_A_BIT) | \
			COMPAT_PSR_T_BIT | PSR_MODE32_BIT;

	if (!(regs->pstate & PSR_MODE32_BIT)) {
		regs->pstate &= ~COMPAT_PSR_T_BIT;
		regs->pstate |= PSR_MODE_EL0t;
	}

	return 0;
}

#define instruction_pointer(regs)	(regs)->pc

#ifdef CONFIG_SMP
extern unsigned long profile_pc(struct pt_regs *regs);
#else
#define profile_pc(regs) instruction_pointer(regs)
#endif

extern int aarch32_break_trap(struct pt_regs *regs);

#endif /* __KERNEL__ */

#endif /* __ASSEMBLY__ */

#endif
26
arch/arm64/include/asm/setup.h
Normal file
@@ -0,0 +1,26 @@
/*
 * Based on arch/arm/include/asm/setup.h
 *
 * Copyright (C) 1997-1999 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SETUP_H
#define __ASM_SETUP_H

#include <linux/types.h>

#define COMMAND_LINE_SIZE	2048

#endif
28
arch/arm64/include/asm/shmparam.h
Normal file
@@ -0,0 +1,28 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SHMPARAM_H
#define __ASM_SHMPARAM_H

/*
 * For IPC syscalls from compat tasks, we need to use the legacy 16k
 * alignment value. Since we don't have aliasing D-caches, the rest of
 * the time we can safely use PAGE_SIZE.
 */
#define COMPAT_SHMLBA	0x4000

#include <asm-generic/shmparam.h>

#endif /* __ASM_SHMPARAM_H */
69
arch/arm64/include/asm/sigcontext.h
Normal file
@@ -0,0 +1,69 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SIGCONTEXT_H
#define __ASM_SIGCONTEXT_H

#include <linux/types.h>

/*
 * Signal context structure - contains all info to do with the state
 * before the signal handler was invoked.
 */
struct sigcontext {
	__u64 fault_address;
	/* AArch64 registers */
	__u64 regs[31];
	__u64 sp;
	__u64 pc;
	__u64 pstate;
	/* 4K reserved for FP/SIMD state and future expansion */
	__u8 __reserved[4096] __attribute__((__aligned__(16)));
};

/*
 * Header to be used at the beginning of structures extending the user
 * context. Such structures must be placed after the rt_sigframe on the stack
 * and be 16-byte aligned. The last structure must be a dummy one with the
 * magic and size set to 0.
 */
struct _aarch64_ctx {
	__u32 magic;
	__u32 size;
};

#define FPSIMD_MAGIC	0x46508001

struct fpsimd_context {
	struct _aarch64_ctx head;
	__u32 fpsr;
	__u32 fpcr;
	__uint128_t vregs[32];
};

#ifdef __KERNEL__
/*
 * Auxiliary context saved in the sigcontext.__reserved array. Not exported to
 * user space as it will change with the addition of new context. User space
 * should check the magic/size information.
 */
struct aux_context {
	struct fpsimd_context fpsimd;
	/* additional context to be added before "end" */
	struct _aarch64_ctx end;
};
#endif

#endif
23
arch/arm64/include/asm/siginfo.h
Normal file
@@ -0,0 +1,23 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SIGINFO_H
#define __ASM_SIGINFO_H

#define __ARCH_SI_PREAMBLE_SIZE	(4 * sizeof(int))

#include <asm-generic/siginfo.h>

#endif
24
arch/arm64/include/asm/signal.h
Normal file
@@ -0,0 +1,24 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SIGNAL_H
#define __ASM_SIGNAL_H

/* Required for AArch32 compatibility. */
#define SA_RESTORER	0x04000000

#include <asm-generic/signal.h>

#endif
53
arch/arm64/include/asm/signal32.h
Normal file
@@ -0,0 +1,53 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SIGNAL32_H
#define __ASM_SIGNAL32_H

#ifdef __KERNEL__
#ifdef CONFIG_COMPAT
#include <linux/compat.h>

#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500

extern const compat_ulong_t aarch32_sigret_code[6];

int compat_setup_frame(int usig, struct k_sigaction *ka, sigset_t *set,
		       struct pt_regs *regs);
int compat_setup_rt_frame(int usig, struct k_sigaction *ka, siginfo_t *info,
			  sigset_t *set, struct pt_regs *regs);

void compat_setup_restart_syscall(struct pt_regs *regs);
#else

static inline int compat_setup_frame(int usid, struct k_sigaction *ka,
				     sigset_t *set, struct pt_regs *regs)
{
	return -ENOSYS;
}

static inline int compat_setup_rt_frame(int usig, struct k_sigaction *ka,
					siginfo_t *info, sigset_t *set,
					struct pt_regs *regs)
{
	return -ENOSYS;
}

static inline void compat_setup_restart_syscall(struct pt_regs *regs)
{
}
#endif /* CONFIG_COMPAT */
#endif /* __KERNEL__ */
#endif /* __ASM_SIGNAL32_H */
69
arch/arm64/include/asm/smp.h
Normal file
@@ -0,0 +1,69 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SMP_H
#define __ASM_SMP_H

#include <linux/threads.h>
#include <linux/cpumask.h>
#include <linux/thread_info.h>

#ifndef CONFIG_SMP
# error "<asm/smp.h> included in non-SMP build"
#endif

#define raw_smp_processor_id() (current_thread_info()->cpu)

struct seq_file;

/*
 * generate IPI list text
 */
extern void show_ipi_list(struct seq_file *p, int prec);

/*
 * Called from C code, this handles an IPI.
 */
extern void handle_IPI(int ipinr, struct pt_regs *regs);

/*
 * Setup the set of possible CPUs (via set_cpu_possible)
 */
extern void smp_init_cpus(void);

/*
 * Provide a function to raise an IPI cross call on CPUs in callmap.
 */
extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));

/*
 * Called from the secondary holding pen, this is the secondary CPU entry point.
 */
asmlinkage void secondary_start_kernel(void);

/*
 * Initial data for bringing up a secondary CPU.
 */
struct secondary_data {
	void *stack;
};
extern struct secondary_data secondary_data;
extern void secondary_holding_pen(void);
extern volatile unsigned long secondary_holding_pen_release;

extern void arch_send_call_function_single_ipi(int cpu);
extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);

#endif /* ifndef __ASM_SMP_H */
24
arch/arm64/include/asm/sparsemem.h
Normal file
@@ -0,0 +1,24 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SPARSEMEM_H
#define __ASM_SPARSEMEM_H

#ifdef CONFIG_SPARSEMEM
#define MAX_PHYSMEM_BITS	40
#define SECTION_SIZE_BITS	30
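/*
 * With these values the maximum physical address space is 2^40 bytes (1TB),
 * carved into 2^30-byte (1GB) sections, i.e. at most 1024 sections.
 */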
#endif

#endif
202
arch/arm64/include/asm/spinlock.h
Normal file
@@ -0,0 +1,202 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SPINLOCK_H
#define __ASM_SPINLOCK_H

#include <asm/spinlock_types.h>
#include <asm/processor.h>

/*
 * Spinlock implementation.
 *
 * The old value is read exclusively and the new one, if unlocked, is written
 * exclusively. In case of failure, the loop is restarted.
 *
 * The memory barriers are implicit with the load-acquire and store-release
 * instructions.
 *
 * Unlocked value: 0
 * Locked value: 1
 */

#define arch_spin_is_locked(x)		((x)->lock != 0)
#define arch_spin_unlock_wait(lock) \
	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)

#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	unsigned int tmp;

	asm volatile(
	"	sevl\n"
	"1:	wfe\n"
	"2:	ldaxr	%w0, [%1]\n"
	"	cbnz	%w0, 1b\n"
	"	stxr	%w0, %w2, [%1]\n"
	"	cbnz	%w0, 2b\n"
	: "=&r" (tmp)
	: "r" (&lock->lock), "r" (1)
	: "memory");
}

static inline int arch_spin_trylock(arch_spinlock_t *lock)
{
	unsigned int tmp;

	asm volatile(
	"	ldaxr	%w0, [%1]\n"
	"	cbnz	%w0, 1f\n"
	"	stxr	%w0, %w2, [%1]\n"
	"1:\n"
	: "=&r" (tmp)
	: "r" (&lock->lock), "r" (1)
	: "memory");

	return !tmp;
}

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	asm volatile(
	"	stlr	%w1, [%0]\n"
	: : "r" (&lock->lock), "r" (0) : "memory");
}

/*
 * Write lock implementation.
 *
 * Write locks set bit 31. Unlocking is done by writing 0 since the lock is
 * exclusively held.
 *
 * The memory barriers are implicit with the load-acquire and store-release
 * instructions.
 */

static inline void arch_write_lock(arch_rwlock_t *rw)
{
	unsigned int tmp;

	asm volatile(
	"	sevl\n"
	"1:	wfe\n"
	"2:	ldaxr	%w0, [%1]\n"
	"	cbnz	%w0, 1b\n"
	"	stxr	%w0, %w2, [%1]\n"
	"	cbnz	%w0, 2b\n"
	: "=&r" (tmp)
	: "r" (&rw->lock), "r" (0x80000000)
	: "memory");
}

static inline int arch_write_trylock(arch_rwlock_t *rw)
{
	unsigned int tmp;

	asm volatile(
	"	ldaxr	%w0, [%1]\n"
	"	cbnz	%w0, 1f\n"
	"	stxr	%w0, %w2, [%1]\n"
	"1:\n"
	: "=&r" (tmp)
	: "r" (&rw->lock), "r" (0x80000000)
	: "memory");

	return !tmp;
}

static inline void arch_write_unlock(arch_rwlock_t *rw)
{
	asm volatile(
	"	stlr	%w1, [%0]\n"
	: : "r" (&rw->lock), "r" (0) : "memory");
}

/* write_can_lock - would write_trylock() succeed? */
#define arch_write_can_lock(x)		((x)->lock == 0)

/*
 * Read lock implementation.
 *
 * It exclusively loads the lock value, increments it and stores the new value
 * back if positive and the CPU still exclusively owns the location. If the
 * value is negative, the lock is already held.
 *
 * During unlocking there may be multiple active read locks but no write lock.
 *
 * The memory barriers are implicit with the load-acquire and store-release
 * instructions.
 */
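/*
 * For example: an unlocked rwlock holds 0; two readers raise the count to 2;
 * a writer only proceeds once it reads 0 and then stores 0x80000000, which
 * readers observe as a negative value (bit 31 set) and back off.
 */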
static inline void arch_read_lock(arch_rwlock_t *rw)
{
	unsigned int tmp, tmp2;

	asm volatile(
	"	sevl\n"
	"1:	wfe\n"
	"2:	ldaxr	%w0, [%2]\n"
	"	add	%w0, %w0, #1\n"
	"	tbnz	%w0, #31, 1b\n"
	"	stxr	%w1, %w0, [%2]\n"
	"	cbnz	%w1, 2b\n"
	: "=&r" (tmp), "=&r" (tmp2)
	: "r" (&rw->lock)
	: "memory");
}

static inline void arch_read_unlock(arch_rwlock_t *rw)
{
	unsigned int tmp, tmp2;

	asm volatile(
	"1:	ldxr	%w0, [%2]\n"
	"	sub	%w0, %w0, #1\n"
	"	stlxr	%w1, %w0, [%2]\n"
	"	cbnz	%w1, 1b\n"
	: "=&r" (tmp), "=&r" (tmp2)
	: "r" (&rw->lock)
	: "memory");
}

static inline int arch_read_trylock(arch_rwlock_t *rw)
{
	unsigned int tmp, tmp2 = 1;

	asm volatile(
	"	ldaxr	%w0, [%2]\n"
	"	add	%w0, %w0, #1\n"
	"	tbnz	%w0, #31, 1f\n"
	"	stxr	%w1, %w0, [%2]\n"
	"1:\n"
	: "=&r" (tmp), "+r" (tmp2)
	: "r" (&rw->lock)
	: "memory");

	return !tmp2;
}

/* read_can_lock - would read_trylock() succeed? */
#define arch_read_can_lock(x)		((x)->lock < 0x80000000)

#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)

#define arch_spin_relax(lock)	cpu_relax()
#define arch_read_relax(lock)	cpu_relax()
#define arch_write_relax(lock)	cpu_relax()

#endif /* __ASM_SPINLOCK_H */
38
arch/arm64/include/asm/spinlock_types.h
Normal file
@@ -0,0 +1,38 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SPINLOCK_TYPES_H
#define __ASM_SPINLOCK_TYPES_H

#if !defined(__LINUX_SPINLOCK_TYPES_H) && !defined(__ASM_SPINLOCK_H)
# error "please don't include this file directly"
#endif

/* We only require natural alignment for exclusive accesses. */
#define __lock_aligned

typedef struct {
	volatile unsigned int lock;
} arch_spinlock_t;

#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }

typedef struct {
	volatile unsigned int lock;
} arch_rwlock_t;

#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }

#endif
29
arch/arm64/include/asm/stacktrace.h
Normal file
@@ -0,0 +1,29 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_STACKTRACE_H
#define __ASM_STACKTRACE_H

struct stackframe {
	unsigned long fp;
	unsigned long sp;
	unsigned long pc;
};

extern int unwind_frame(struct stackframe *frame);
extern void walk_stackframe(struct stackframe *frame,
			    int (*fn)(struct stackframe *, void *), void *data);

#endif	/* __ASM_STACKTRACE_H */
62
arch/arm64/include/asm/stat.h
Normal file
@@ -0,0 +1,62 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_STAT_H
#define __ASM_STAT_H

#include <asm-generic/stat.h>

#if defined(__KERNEL__) && defined(CONFIG_COMPAT)

#include <asm/compat.h>

/*
 * struct stat64 is needed for compat tasks only. Its definition is different
 * from the generic struct stat64.
 */
struct stat64 {
	compat_u64	st_dev;
	unsigned char	__pad0[4];

#define STAT64_HAS_BROKEN_ST_INO	1
	compat_ulong_t	__st_ino;
	compat_uint_t	st_mode;
	compat_uint_t	st_nlink;

	compat_ulong_t	st_uid;
	compat_ulong_t	st_gid;

	compat_u64	st_rdev;
	unsigned char	__pad3[4];

	compat_s64	st_size;
	compat_ulong_t	st_blksize;
	compat_u64	st_blocks;	/* Number of 512-byte blocks allocated. */

	compat_ulong_t	st_atime;
	compat_ulong_t	st_atime_nsec;

	compat_ulong_t	st_mtime;
	compat_ulong_t	st_mtime_nsec;

	compat_ulong_t	st_ctime;
	compat_ulong_t	st_ctime_nsec;

	compat_u64	st_ino;
};

#endif

#endif
23
arch/arm64/include/asm/statfs.h
Normal file
@@ -0,0 +1,23 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_STATFS_H
#define __ASM_STATFS_H

#define ARCH_PACK_COMPAT_STATFS64 __attribute__((packed,aligned(4)))

#include <asm-generic/statfs.h>

#endif
101
arch/arm64/include/asm/syscall.h
Normal file
@@ -0,0 +1,101 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SYSCALL_H
#define __ASM_SYSCALL_H

#include <linux/err.h>


static inline int syscall_get_nr(struct task_struct *task,
				 struct pt_regs *regs)
{
	return regs->syscallno;
}

static inline void syscall_rollback(struct task_struct *task,
				    struct pt_regs *regs)
{
	regs->regs[0] = regs->orig_x0;
}


static inline long syscall_get_error(struct task_struct *task,
				     struct pt_regs *regs)
{
	unsigned long error = regs->regs[0];
	return IS_ERR_VALUE(error) ? error : 0;
}
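
/*
 * For example, a syscall that failed with -ENOSYS leaves -ENOSYS in
 * regs->regs[0] and syscall_get_error() returns it, whereas a successful
 * call returning a file descriptor yields 0 here, because IS_ERR_VALUE()
 * only matches the last 4095 values of an unsigned long (-1 .. -4095).
 */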

static inline long syscall_get_return_value(struct task_struct *task,
					    struct pt_regs *regs)
{
	return regs->regs[0];
}

static inline void syscall_set_return_value(struct task_struct *task,
					    struct pt_regs *regs,
					    int error, long val)
{
	regs->regs[0] = (long) error ? error : val;
}

#define SYSCALL_MAX_ARGS 6

static inline void syscall_get_arguments(struct task_struct *task,
					 struct pt_regs *regs,
					 unsigned int i, unsigned int n,
					 unsigned long *args)
{
	if (i + n > SYSCALL_MAX_ARGS) {
		unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
		unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
		pr_warning("%s called with max args %d, handling only %d\n",
			   __func__, i + n, SYSCALL_MAX_ARGS);
		memset(args_bad, 0, n_bad * sizeof(args[0]));
	}

	if (i == 0) {
		args[0] = regs->orig_x0;
		args++;
		i++;
		n--;
	}

	memcpy(args, &regs->regs[i], n * sizeof(args[0]));
}
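
/*
 * For example, syscall_get_arguments(task, regs, 0, 3, args) fills args with
 * { regs->orig_x0, regs->regs[1], regs->regs[2] }: orig_x0 is used for the
 * first argument because regs->regs[0] is overwritten by the return value
 * once the syscall completes.
 */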

static inline void syscall_set_arguments(struct task_struct *task,
					 struct pt_regs *regs,
					 unsigned int i, unsigned int n,
					 const unsigned long *args)
{
	if (i + n > SYSCALL_MAX_ARGS) {
		pr_warning("%s called with max args %d, handling only %d\n",
			   __func__, i + n, SYSCALL_MAX_ARGS);
		n = SYSCALL_MAX_ARGS - i;
	}

	if (i == 0) {
		regs->orig_x0 = args[0];
		args++;
		i++;
		n--;
	}

	memcpy(&regs->regs[i], args, n * sizeof(args[0]));
}

#endif	/* __ASM_SYSCALL_H */
40
arch/arm64/include/asm/syscalls.h
Normal file
@@ -0,0 +1,40 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SYSCALLS_H
#define __ASM_SYSCALLS_H

#include <linux/linkage.h>
#include <linux/compiler.h>
#include <linux/signal.h>

/*
 * System call wrappers implemented in kernel/entry.S.
 */
asmlinkage long sys_execve_wrapper(const char __user *filename,
				   const char __user *const __user *argv,
				   const char __user *const __user *envp);
asmlinkage long sys_clone_wrapper(unsigned long clone_flags,
				  unsigned long newsp,
				  void __user *parent_tid,
				  unsigned long tls_val,
				  void __user *child_tid);
asmlinkage long sys_rt_sigreturn_wrapper(void);
asmlinkage long sys_sigaltstack_wrapper(const stack_t __user *uss,
					stack_t __user *uoss);

#include <asm-generic/syscalls.h>

#endif	/* __ASM_SYSCALLS_H */
54
arch/arm64/include/asm/system_misc.h
Normal file
@@ -0,0 +1,54 @@
/*
 * Based on arch/arm/include/asm/system_misc.h
 *
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SYSTEM_MISC_H
#define __ASM_SYSTEM_MISC_H

#ifndef __ASSEMBLY__

#include <linux/compiler.h>
#include <linux/linkage.h>
#include <linux/irqflags.h>

struct pt_regs;

void die(const char *msg, struct pt_regs *regs, int err);

struct siginfo;
void arm64_notify_die(const char *str, struct pt_regs *regs,
		      struct siginfo *info, int err);

void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned int,
					     struct pt_regs *),
			   int sig, int code, const char *name);

struct mm_struct;
extern void show_pte(struct mm_struct *mm, unsigned long addr);
extern void __show_regs(struct pt_regs *);

void soft_restart(unsigned long);
extern void (*pm_restart)(const char *cmd);

#define UDBG_UNDEFINED	(1 << 0)
#define UDBG_SYSCALL	(1 << 1)
#define UDBG_BADABORT	(1 << 2)
#define UDBG_SEGV	(1 << 3)
#define UDBG_BUS	(1 << 4)

#endif	/* __ASSEMBLY__ */

#endif	/* __ASM_SYSTEM_MISC_H */
127
arch/arm64/include/asm/thread_info.h
Normal file
@@ -0,0 +1,127 @@
/*
 * Based on arch/arm/include/asm/thread_info.h
 *
 * Copyright (C) 2002 Russell King.
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_THREAD_INFO_H
#define __ASM_THREAD_INFO_H

#ifdef __KERNEL__

#include <linux/compiler.h>

#ifndef CONFIG_ARM64_64K_PAGES
#define THREAD_SIZE_ORDER	1
#endif

#define THREAD_SIZE		8192
#define THREAD_START_SP		(THREAD_SIZE - 16)

#ifndef __ASSEMBLY__

struct task_struct;
struct exec_domain;

#include <asm/types.h>

typedef unsigned long mm_segment_t;

/*
 * low level task data that entry.S needs immediate access to.
 * __switch_to() assumes cpu_context follows immediately after cpu_domain.
 */
struct thread_info {
	unsigned long		flags;		/* low level flags */
	mm_segment_t		addr_limit;	/* address limit */
	struct task_struct	*task;		/* main task structure */
	struct exec_domain	*exec_domain;	/* execution domain */
	struct restart_block	restart_block;
	int			preempt_count;	/* 0 => preemptable, <0 => bug */
	int			cpu;		/* cpu */
};

#define INIT_THREAD_INFO(tsk)						\
{									\
	.task		= &tsk,						\
	.exec_domain	= &default_exec_domain,				\
	.flags		= 0,						\
	.preempt_count	= INIT_PREEMPT_COUNT,				\
	.addr_limit	= KERNEL_DS,					\
	.restart_block	= {						\
		.fn	= do_no_restart_syscall,			\
	},								\
}

#define init_thread_info	(init_thread_union.thread_info)
#define init_stack		(init_thread_union.stack)

/*
 * how to get the thread information struct from C
 */
static inline struct thread_info *current_thread_info(void) __attribute_const__;

static inline struct thread_info *current_thread_info(void)
{
	register unsigned long sp asm ("sp");
	return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
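
/*
 * With THREAD_SIZE == 8192 the mask above rounds the current stack pointer
 * down to the 8KB-aligned base of the kernel stack, where struct thread_info
 * lives; e.g. sp == 0xffff800000123abc yields 0xffff800000122000.
 */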

#define thread_saved_pc(tsk)	\
	((unsigned long)(tsk->thread.cpu_context.pc))
#define thread_saved_sp(tsk)	\
	((unsigned long)(tsk->thread.cpu_context.sp))
#define thread_saved_fp(tsk)	\
	((unsigned long)(tsk->thread.cpu_context.fp))

#endif

/*
 * We use bit 30 of the preempt_count to indicate that kernel
 * preemption is occurring.  See <asm/hardirq.h>.
 */
#define PREEMPT_ACTIVE	0x40000000

/*
 * thread information flags:
 *  TIF_SYSCALL_TRACE	- syscall trace active
 *  TIF_SIGPENDING	- signal pending
 *  TIF_NEED_RESCHED	- rescheduling necessary
 *  TIF_NOTIFY_RESUME	- callback before returning to user
 *  TIF_USEDFPU		- FPU was used by this task this quantum (SMP)
 *  TIF_POLLING_NRFLAG	- true if poll_idle() is polling TIF_NEED_RESCHED
 */
#define TIF_SIGPENDING		0
#define TIF_NEED_RESCHED	1
#define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
#define TIF_SYSCALL_TRACE	8
#define TIF_POLLING_NRFLAG	16
#define TIF_MEMDIE		18	/* is terminating due to OOM killer */
#define TIF_FREEZE		19
#define TIF_RESTORE_SIGMASK	20
#define TIF_SINGLESTEP		21
#define TIF_32BIT		22	/* 32bit process */
#define TIF_SWITCH_MM		23	/* deferred switch_mm */

#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
#define _TIF_32BIT		(1 << TIF_32BIT)

#define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
				 _TIF_NOTIFY_RESUME)

#endif /* __KERNEL__ */
#endif /* __ASM_THREAD_INFO_H */
29
arch/arm64/include/asm/timex.h
Normal file
@@ -0,0 +1,29 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_TIMEX_H
#define __ASM_TIMEX_H

/*
 * Use the current timer as a cycle counter since this is what we use for
 * the delay loop.
 */
#define get_cycles()	({ cycles_t c; read_current_timer(&c); c; })

#include <asm-generic/timex.h>

#define ARCH_HAS_READ_CURRENT_TIMER

#endif
190
arch/arm64/include/asm/tlb.h
Normal file
@@ -0,0 +1,190 @@
/*
 * Based on arch/arm/include/asm/tlb.h
 *
 * Copyright (C) 2002 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_TLB_H
#define __ASM_TLB_H

#include <linux/pagemap.h>
#include <linux/swap.h>

#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

#define MMU_GATHER_BUNDLE	8

/*
 * TLB handling.  This allows us to remove pages from the page
 * tables, and efficiently handle the TLB issues.
 */
struct mmu_gather {
	struct mm_struct	*mm;
	unsigned int		fullmm;
	struct vm_area_struct	*vma;
	unsigned long		range_start;
	unsigned long		range_end;
	unsigned int		nr;
	unsigned int		max;
	struct page		**pages;
	struct page		*local[MMU_GATHER_BUNDLE];
};

/*
 * This is unnecessarily complex.  There's three ways the TLB shootdown
 * code is used:
 *  1. Unmapping a range of vmas.  See zap_page_range(), unmap_region().
 *     tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
 *     tlb->vma will be non-NULL.
 *  2. Unmapping all vmas.  See exit_mmap().
 *     tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
 *     tlb->vma will be non-NULL.  Additionally, page tables will be freed.
 *  3. Unmapping argument pages.  See shift_arg_pages().
 *     tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
 *     tlb->vma will be NULL.
 */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || !tlb->vma)
		flush_tlb_mm(tlb->mm);
	else if (tlb->range_end > 0) {
		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
		tlb->range_start = TASK_SIZE;
		tlb->range_end = 0;
	}
}

static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
{
	if (!tlb->fullmm) {
		if (addr < tlb->range_start)
			tlb->range_start = addr;
		if (addr + PAGE_SIZE > tlb->range_end)
			tlb->range_end = addr + PAGE_SIZE;
	}
}
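
/*
 * For example, with 4KB pages, unmapping three pages at 0x400000, 0x401000
 * and 0x402000 leaves range_start == 0x400000 and range_end == 0x403000, so
 * tlb_flush() can issue a single ranged invalidation instead of three
 * separate ones.
 */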
|
||||
|
||||
static inline void __tlb_alloc_page(struct mmu_gather *tlb)
|
||||
{
|
||||
unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
|
||||
|
||||
if (addr) {
|
||||
tlb->pages = (void *)addr;
|
||||
tlb->max = PAGE_SIZE / sizeof(struct page *);
|
||||
}
|
||||
}
|
||||
|
||||
static inline void tlb_flush_mmu(struct mmu_gather *tlb)
|
||||
{
|
||||
tlb_flush(tlb);
|
||||
free_pages_and_swap_cache(tlb->pages, tlb->nr);
|
||||
tlb->nr = 0;
|
||||
if (tlb->pages == tlb->local)
|
||||
__tlb_alloc_page(tlb);
|
||||
}
|
||||
|
||||
static inline void
|
||||
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int fullmm)
|
||||
{
|
||||
tlb->mm = mm;
|
||||
tlb->fullmm = fullmm;
|
||||
tlb->vma = NULL;
|
||||
tlb->max = ARRAY_SIZE(tlb->local);
|
||||
tlb->pages = tlb->local;
|
||||
tlb->nr = 0;
|
||||
__tlb_alloc_page(tlb);
|
||||
}
|
||||
|
||||
static inline void
|
||||
tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
|
||||
{
|
||||
tlb_flush_mmu(tlb);
|
||||
|
||||
/* keep the page table cache within bounds */
|
||||
check_pgt_cache();
|
||||
|
||||
if (tlb->pages != tlb->local)
|
||||
free_pages((unsigned long)tlb->pages, 0);
|
||||
}
|
||||
|
||||
/*
|
||||
* Memorize the range for the TLB flush.
|
||||
*/
|
||||
static inline void
|
||||
tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
|
||||
{
|
||||
tlb_add_flush(tlb, addr);
|
||||
}
|
||||
|
||||
/*
|
||||
* In the case of tlb vma handling, we can optimise these away in the
|
||||
* case where we're doing a full MM flush. When we're doing a munmap,
|
||||
* the vmas are adjusted to only cover the region to be torn down.
|
||||
*/
|
||||
static inline void
|
||||
tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
|
||||
{
|
||||
if (!tlb->fullmm) {
|
||||
tlb->vma = vma;
|
||||
tlb->range_start = TASK_SIZE;
|
||||
tlb->range_end = 0;
|
||||
}
|
||||
}
|
||||
|
||||
static inline void
|
||||
tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
|
||||
{
|
||||
if (!tlb->fullmm)
|
||||
tlb_flush(tlb);
|
||||
}
|
||||
|
||||
static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
|
||||
{
|
||||
tlb->pages[tlb->nr++] = page;
|
||||
VM_BUG_ON(tlb->nr > tlb->max);
|
||||
return tlb->max - tlb->nr;
|
||||
}
|
||||
|
||||
static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
|
||||
{
|
||||
if (!__tlb_remove_page(tlb, page))
|
||||
tlb_flush_mmu(tlb);
|
||||
}
|
||||
|
||||
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
|
||||
unsigned long addr)
|
||||
{
|
||||
pgtable_page_dtor(pte);
|
||||
tlb_add_flush(tlb, addr);
|
||||
tlb_remove_page(tlb, pte);
|
||||
}
|
||||
|
||||
#ifndef CONFIG_ARM64_64K_PAGES
|
||||
static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
|
||||
unsigned long addr)
|
||||
{
|
||||
tlb_add_flush(tlb, addr);
|
||||
tlb_remove_page(tlb, virt_to_page(pmdp));
|
||||
}
|
||||
#endif
|
||||
|
||||
#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
|
||||
#define pmd_free_tlb(tlb, pmdp, addr) __pmd_free_tlb(tlb, pmdp, addr)
|
||||
#define pud_free_tlb(tlb, pudp, addr) pud_free((tlb)->mm, pudp)
|
||||
|
||||
#define tlb_migrate_finish(mm) do { } while (0)
|
||||
|
||||
#endif
|
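
For reference, the mmu_gather interface above is driven by the core mm code
roughly as in the sketch below (case 1 of the comment: unmapping a range
inside one VMA). The function and the elided page-table walk are hypothetical;
the real callers are zap_page_range() and friends in mm/memory.c.

	#include <linux/mm.h>
	#include <asm/tlb.h>

	/* Hypothetical caller: tear down the mappings for [start, end) in one VMA. */
	static void example_unmap_vma_range(struct vm_area_struct *vma,
					    unsigned long start, unsigned long end)
	{
		struct mmu_gather tlb;
		unsigned long addr;

		tlb_gather_mmu(&tlb, vma->vm_mm, 0);	/* fullmm == 0: ranged flush */
		tlb_start_vma(&tlb, vma);
		for (addr = start; addr < end; addr += PAGE_SIZE) {
			pte_t *ptep = NULL;		/* page-table walk elided */
			struct page *page = NULL;	/* page mapped at addr, if any */

			if (!ptep || !page)
				continue;
			tlb_remove_tlb_entry(&tlb, ptep, addr);	/* grow the flush range */
			tlb_remove_page(&tlb, page);		/* batch the page for freeing */
		}
		tlb_end_vma(&tlb, vma);			/* flush_tlb_range() for the VMA */
		tlb_finish_mmu(&tlb, start, end);	/* final flush + free batched pages */
	}
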
122  arch/arm64/include/asm/tlbflush.h  Normal file
@@ -0,0 +1,122 @@
/*
 * Based on arch/arm/include/asm/tlbflush.h
 *
 * Copyright (C) 1999-2003 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_TLBFLUSH_H
#define __ASM_TLBFLUSH_H

#ifndef __ASSEMBLY__

#include <linux/sched.h>
#include <asm/cputype.h>

extern void __cpu_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
extern void __cpu_flush_kern_tlb_range(unsigned long, unsigned long);

extern struct cpu_tlb_fns cpu_tlb;

/*
 *	TLB Management
 *	==============
 *
 *	The arch/arm64/mm/tlb.S file implements these methods.
 *
 *	The TLB specific code is expected to perform whatever tests it needs
 *	to determine if it should invalidate the TLB for each call.  Start
 *	addresses are inclusive and end addresses are exclusive; it is safe to
 *	round these addresses down.
 *
 *	flush_tlb_all()
 *
 *		Invalidate the entire TLB.
 *
 *	flush_tlb_mm(mm)
 *
 *		Invalidate all TLB entries in a particular address space.
 *		- mm	- mm_struct describing address space
 *
 *	flush_tlb_range(mm,start,end)
 *
 *		Invalidate a range of TLB entries in the specified address
 *		space.
 *		- mm	- mm_struct describing address space
 *		- start	- start address (may not be aligned)
 *		- end	- end address (exclusive, may not be aligned)
 *
 *	flush_tlb_page(vaddr,vma)
 *
 *		Invalidate the specified page in the specified address range.
 *		- vaddr	- virtual address (may not be aligned)
 *		- vma	- vma_struct describing address range
 *
 *	flush_kern_tlb_page(kaddr)
 *
 *		Invalidate the TLB entry for the specified page.  The address
 *		will be in the kernel's virtual memory space.  Current uses
 *		only require the D-TLB to be invalidated.
 *		- kaddr	- Kernel virtual memory address
 */
static inline void flush_tlb_all(void)
{
	dsb();
	asm("tlbi	vmalle1is");
	dsb();
	isb();
}

static inline void flush_tlb_mm(struct mm_struct *mm)
{
	unsigned long asid = (unsigned long)ASID(mm) << 48;

	dsb();
	asm("tlbi	aside1is, %0" : : "r" (asid));
	dsb();
}

static inline void flush_tlb_page(struct vm_area_struct *vma,
				  unsigned long uaddr)
{
	unsigned long addr = uaddr >> 12 |
		((unsigned long)ASID(vma->vm_mm) << 48);

	dsb();
	asm("tlbi	vae1is, %0" : : "r" (addr));
	dsb();
}

/*
 *	Convert calls to our calling convention.
 */
#define flush_tlb_range(vma,start,end)	__cpu_flush_user_tlb_range(start,end,vma)
#define flush_tlb_kernel_range(s,e)	__cpu_flush_kern_tlb_range(s,e)

/*
 * On AArch64, the cache coherency is handled via the set_pte_at() function.
 */
static inline void update_mmu_cache(struct vm_area_struct *vma,
				    unsigned long addr, pte_t *ptep)
{
	/*
	 * set_pte() does not have a DSB, so make sure that the page table
	 * write is visible.
	 */
	dsb();
}

#endif

#endif
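
As a usage sketch (not taken from this patch), the typical pattern after
modifying a single user PTE is to publish the new entry with set_pte_at() and
then invalidate the stale TLB entry by ASID and address with flush_tlb_page():

	#include <asm/pgtable.h>
	#include <asm/tlbflush.h>

	/* Hypothetical helper: install a new PTE value and drop the old TLB entry. */
	static void example_change_pte(struct vm_area_struct *vma, unsigned long addr,
				       pte_t *ptep, pte_t newpte)
	{
		set_pte_at(vma->vm_mm, addr, ptep, newpte);	/* page table update */
		flush_tlb_page(vma, addr);	/* "tlbi vae1is" for this ASID + VA */
	}
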
30  arch/arm64/include/asm/traps.h  Normal file
@@ -0,0 +1,30 @@
/*
 * Based on arch/arm/include/asm/traps.h
 *
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_TRAP_H
#define __ASM_TRAP_H

static inline int in_exception_text(unsigned long ptr)
{
	extern char __exception_text_start[];
	extern char __exception_text_end[];

	return ptr >= (unsigned long)&__exception_text_start &&
	       ptr < (unsigned long)&__exception_text_end;
}

#endif
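
in_exception_text() lets unwinders and diagnostics tell whether a PC falls
inside the exception entry code. A small illustrative use (the function name
is made up):

	#include <linux/printk.h>
	#include <asm/traps.h>

	/* Hypothetical: annotate a backtrace entry that lies in exception entry code. */
	static void example_print_frame(unsigned long pc)
	{
		printk("  pc: %016lx%s\n", pc,
		       in_exception_text(pc) ? " (exception text)" : "");
	}
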
297  arch/arm64/include/asm/uaccess.h  Normal file
@@ -0,0 +1,297 @@
/*
|
||||
* Based on arch/arm/include/asm/uaccess.h
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#ifndef __ASM_UACCESS_H
|
||||
#define __ASM_UACCESS_H
|
||||
|
||||
/*
|
||||
* User space memory access functions
|
||||
*/
|
||||
#include <linux/string.h>
|
||||
#include <linux/thread_info.h>
|
||||
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/errno.h>
|
||||
#include <asm/memory.h>
|
||||
#include <asm/compiler.h>
|
||||
|
||||
#define VERIFY_READ 0
|
||||
#define VERIFY_WRITE 1
|
||||
|
||||
/*
|
||||
* The exception table consists of pairs of addresses: the first is the
|
||||
* address of an instruction that is allowed to fault, and the second is
|
||||
* the address at which the program should continue. No registers are
|
||||
* modified, so it is entirely up to the continuation code to figure out
|
||||
* what to do.
|
||||
*
|
||||
* All the routines below use bits of fixup code that are out of line
|
||||
* with the main instruction path. This means when everything is well,
|
||||
* we don't even have to jump over them. Further, they do not intrude
|
||||
* on our cache or tlb entries.
|
||||
*/
|
||||
|
||||
struct exception_table_entry
|
||||
{
|
||||
unsigned long insn, fixup;
|
||||
};
|
||||
|
||||
extern int fixup_exception(struct pt_regs *regs);
|
||||
|
||||
#define KERNEL_DS (-1UL)
|
||||
#define get_ds() (KERNEL_DS)
|
||||
|
||||
#define USER_DS TASK_SIZE_64
|
||||
#define get_fs() (current_thread_info()->addr_limit)
|
||||
|
||||
static inline void set_fs(mm_segment_t fs)
|
||||
{
|
||||
current_thread_info()->addr_limit = fs;
|
||||
}
|
||||
|
||||
#define segment_eq(a,b) ((a) == (b))
|
||||
|
||||
/*
|
||||
* Return 1 if addr < current->addr_limit, 0 otherwise.
|
||||
*/
|
||||
#define __addr_ok(addr) \
|
||||
({ \
|
||||
unsigned long flag; \
|
||||
asm("cmp %1, %0; cset %0, lo" \
|
||||
: "=&r" (flag) \
|
||||
: "r" (addr), "0" (current_thread_info()->addr_limit) \
|
||||
: "cc"); \
|
||||
flag; \
|
||||
})
|
||||
|
||||
/*
|
||||
* Test whether a block of memory is a valid user space address.
|
||||
* Returns 1 if the range is valid, 0 otherwise.
|
||||
*
|
||||
* This is equivalent to the following test:
|
||||
* (u65)addr + (u65)size < (u65)current->addr_limit
|
||||
*
|
||||
* This needs 65-bit arithmetic.
|
||||
*/
|
||||
#define __range_ok(addr, size) \
|
||||
({ \
|
||||
unsigned long flag, roksum; \
|
||||
__chk_user_ptr(addr); \
|
||||
asm("adds %1, %1, %3; ccmp %1, %4, #2, cc; cset %0, cc" \
|
||||
: "=&r" (flag), "=&r" (roksum) \
|
||||
: "1" (addr), "Ir" (size), \
|
||||
"r" (current_thread_info()->addr_limit) \
|
||||
: "cc"); \
|
||||
flag; \
|
||||
})
|
||||
|
||||
#define access_ok(type, addr, size) __range_ok(addr, size)
|
||||
|
||||
/*
|
||||
* The "__xxx" versions of the user access functions do not verify the address
|
||||
* space - it must have been done previously with a separate "access_ok()"
|
||||
* call.
|
||||
*
|
||||
* The "__xxx_error" versions set the third argument to -EFAULT if an error
|
||||
* occurs, and leave it unchanged on success.
|
||||
*/
|
||||
#define __get_user_asm(instr, reg, x, addr, err) \
|
||||
asm volatile( \
|
||||
"1: " instr " " reg "1, [%2]\n" \
|
||||
"2:\n" \
|
||||
" .section .fixup, \"ax\"\n" \
|
||||
" .align 2\n" \
|
||||
"3: mov %w0, %3\n" \
|
||||
" mov %1, #0\n" \
|
||||
" b 2b\n" \
|
||||
" .previous\n" \
|
||||
" .section __ex_table,\"a\"\n" \
|
||||
" .align 3\n" \
|
||||
" .quad 1b, 3b\n" \
|
||||
" .previous" \
|
||||
: "+r" (err), "=&r" (x) \
|
||||
: "r" (addr), "i" (-EFAULT))
|
||||
|
||||
#define __get_user_err(x, ptr, err) \
|
||||
do { \
|
||||
unsigned long __gu_val; \
|
||||
__chk_user_ptr(ptr); \
|
||||
switch (sizeof(*(ptr))) { \
|
||||
case 1: \
|
||||
__get_user_asm("ldrb", "%w", __gu_val, (ptr), (err)); \
|
||||
break; \
|
||||
case 2: \
|
||||
__get_user_asm("ldrh", "%w", __gu_val, (ptr), (err)); \
|
||||
break; \
|
||||
case 4: \
|
||||
__get_user_asm("ldr", "%w", __gu_val, (ptr), (err)); \
|
||||
break; \
|
||||
case 8: \
|
||||
__get_user_asm("ldr", "%", __gu_val, (ptr), (err)); \
|
||||
break; \
|
||||
default: \
|
||||
BUILD_BUG(); \
|
||||
} \
|
||||
(x) = (__typeof__(*(ptr)))__gu_val; \
|
||||
} while (0)
|
||||
|
||||
#define __get_user(x, ptr) \
|
||||
({ \
|
||||
int __gu_err = 0; \
|
||||
__get_user_err((x), (ptr), __gu_err); \
|
||||
__gu_err; \
|
||||
})
|
||||
|
||||
#define __get_user_error(x, ptr, err) \
|
||||
({ \
|
||||
__get_user_err((x), (ptr), (err)); \
|
||||
(void)0; \
|
||||
})
|
||||
|
||||
#define __get_user_unaligned __get_user
|
||||
|
||||
#define get_user(x, ptr) \
|
||||
({ \
|
||||
might_sleep(); \
|
||||
access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) ? \
|
||||
__get_user((x), (ptr)) : \
|
||||
((x) = 0, -EFAULT); \
|
||||
})
|
||||
|
||||
#define __put_user_asm(instr, reg, x, addr, err) \
|
||||
asm volatile( \
|
||||
"1: " instr " " reg "1, [%2]\n" \
|
||||
"2:\n" \
|
||||
" .section .fixup,\"ax\"\n" \
|
||||
" .align 2\n" \
|
||||
"3: mov %w0, %3\n" \
|
||||
" b 2b\n" \
|
||||
" .previous\n" \
|
||||
" .section __ex_table,\"a\"\n" \
|
||||
" .align 3\n" \
|
||||
" .quad 1b, 3b\n" \
|
||||
" .previous" \
|
||||
: "+r" (err) \
|
||||
: "r" (x), "r" (addr), "i" (-EFAULT))
|
||||
|
||||
#define __put_user_err(x, ptr, err) \
|
||||
do { \
|
||||
__typeof__(*(ptr)) __pu_val = (x); \
|
||||
__chk_user_ptr(ptr); \
|
||||
switch (sizeof(*(ptr))) { \
|
||||
case 1: \
|
||||
__put_user_asm("strb", "%w", __pu_val, (ptr), (err)); \
|
||||
break; \
|
||||
case 2: \
|
||||
__put_user_asm("strh", "%w", __pu_val, (ptr), (err)); \
|
||||
break; \
|
||||
case 4: \
|
||||
__put_user_asm("str", "%w", __pu_val, (ptr), (err)); \
|
||||
break; \
|
||||
case 8: \
|
||||
__put_user_asm("str", "%", __pu_val, (ptr), (err)); \
|
||||
break; \
|
||||
default: \
|
||||
BUILD_BUG(); \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
#define __put_user(x, ptr) \
|
||||
({ \
|
||||
int __pu_err = 0; \
|
||||
__put_user_err((x), (ptr), __pu_err); \
|
||||
__pu_err; \
|
||||
})
|
||||
|
||||
#define __put_user_error(x, ptr, err) \
|
||||
({ \
|
||||
__put_user_err((x), (ptr), (err)); \
|
||||
(void)0; \
|
||||
})
|
||||
|
||||
#define __put_user_unaligned __put_user
|
||||
|
||||
#define put_user(x, ptr) \
|
||||
({ \
|
||||
might_sleep(); \
|
||||
access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) ? \
|
||||
__put_user((x), (ptr)) : \
|
||||
-EFAULT; \
|
||||
})
|
||||
|
||||
extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
|
||||
extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
|
||||
extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
|
||||
extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
|
||||
|
||||
extern unsigned long __must_check __strncpy_from_user(char *to, const char __user *from, unsigned long count);
|
||||
extern unsigned long __must_check __strnlen_user(const char __user *s, long n);
|
||||
|
||||
static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
|
||||
{
|
||||
if (access_ok(VERIFY_READ, from, n))
|
||||
n = __copy_from_user(to, from, n);
|
||||
else /* security hole - plug it */
|
||||
memset(to, 0, n);
|
||||
return n;
|
||||
}
|
||||
|
||||
static inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n)
|
||||
{
|
||||
if (access_ok(VERIFY_WRITE, to, n))
|
||||
n = __copy_to_user(to, from, n);
|
||||
return n;
|
||||
}
|
||||
|
||||
static inline unsigned long __must_check copy_in_user(void __user *to, const void __user *from, unsigned long n)
|
||||
{
|
||||
if (access_ok(VERIFY_READ, from, n) && access_ok(VERIFY_WRITE, to, n))
|
||||
n = __copy_in_user(to, from, n);
|
||||
return n;
|
||||
}
|
||||
|
||||
#define __copy_to_user_inatomic __copy_to_user
|
||||
#define __copy_from_user_inatomic __copy_from_user
|
||||
|
||||
static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
|
||||
{
|
||||
if (access_ok(VERIFY_WRITE, to, n))
|
||||
n = __clear_user(to, n);
|
||||
return n;
|
||||
}
|
||||
|
||||
static inline long __must_check strncpy_from_user(char *dst, const char __user *src, long count)
|
||||
{
|
||||
long res = -EFAULT;
|
||||
if (access_ok(VERIFY_READ, src, 1))
|
||||
res = __strncpy_from_user(dst, src, count);
|
||||
return res;
|
||||
}
|
||||
|
||||
#define strlen_user(s) strnlen_user(s, ~0UL >> 1)
|
||||
|
||||
static inline long __must_check strnlen_user(const char __user *s, long n)
|
||||
{
|
||||
unsigned long res = 0;
|
||||
|
||||
if (__addr_ok(s))
|
||||
res = __strnlen_user(s, n);
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
#endif /* __ASM_UACCESS_H */
|
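
The uaccess interface above follows the usual kernel split: access_ok() /
__range_ok() validate the user range (effectively (u65)addr + (u65)size <=
addr_limit, computed with adds/ccmp so it cannot overflow 64 bits), while
get_user(), put_user() and copy_{from,to}_user() perform the checked accesses
with exception-table fixups. A sketch of a typical consumer (the struct and
function are made up for illustration):

	#include <linux/uaccess.h>

	/* Hypothetical request layout shared with userspace. */
	struct example_req {
		__u64	addr;
		__u64	len;
		__u32	status;
	};

	/* Hypothetical ioctl-style helper built on the accessors declared above. */
	static long example_handle_req(struct example_req __user *ureq)
	{
		struct example_req req;

		if (copy_from_user(&req, ureq, sizeof(req)))
			return -EFAULT;		/* bad range or faulting access */

		/* ... act on req.addr / req.len ... */

		if (put_user(0, &ureq->status))	/* single-word checked store */
			return -EFAULT;
		return 0;
	}
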
30  arch/arm64/include/asm/ucontext.h  Normal file
@@ -0,0 +1,30 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_UCONTEXT_H
#define __ASM_UCONTEXT_H

struct ucontext {
	unsigned long	  uc_flags;
	struct ucontext	 *uc_link;
	stack_t		  uc_stack;
	sigset_t	  uc_sigmask;
	/* glibc uses a 1024-bit sigset_t */
	__u8		  __unused[(1024 - sizeof(sigset_t)) / 8];
	/* last for future expansion */
	struct sigcontext uc_mcontext;
};

#endif /* __ASM_UCONTEXT_H */
27  arch/arm64/include/asm/unistd.h  Normal file
@@ -0,0 +1,27 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#if !defined(__ASM_UNISTD_H) || defined(__SYSCALL)
#define __ASM_UNISTD_H

#ifndef __SYSCALL_COMPAT
#include <asm-generic/unistd.h>
#endif

#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
#include <asm/unistd32.h>
#endif

#endif /* __ASM_UNISTD_H */
758  arch/arm64/include/asm/unistd32.h  Normal file
@@ -0,0 +1,758 @@
/*
|
||||
* Based on arch/arm/include/asm/unistd.h
|
||||
*
|
||||
* Copyright (C) 2001-2005 Russell King
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#if !defined(__ASM_UNISTD32_H) || defined(__SYSCALL)
|
||||
#define __ASM_UNISTD32_H
|
||||
|
||||
#ifndef __SYSCALL
|
||||
#define __SYSCALL(x, y)
|
||||
#endif
|
||||
|
||||
/*
|
||||
* This file contains the system call numbers.
|
||||
*/
|
||||
|
||||
#ifdef __SYSCALL_COMPAT
|
||||
|
||||
#define __NR_restart_syscall 0
|
||||
__SYSCALL(__NR_restart_syscall, sys_restart_syscall)
|
||||
#define __NR_exit 1
|
||||
__SYSCALL(__NR_exit, sys_exit)
|
||||
#define __NR_fork 2
|
||||
__SYSCALL(__NR_fork, sys_fork)
|
||||
#define __NR_read 3
|
||||
__SYSCALL(__NR_read, sys_read)
|
||||
#define __NR_write 4
|
||||
__SYSCALL(__NR_write, sys_write)
|
||||
#define __NR_open 5
|
||||
__SYSCALL(__NR_open, sys_open)
|
||||
#define __NR_close 6
|
||||
__SYSCALL(__NR_close, sys_close)
|
||||
__SYSCALL(7, sys_ni_syscall) /* 7 was sys_waitpid */
|
||||
#define __NR_creat 8
|
||||
__SYSCALL(__NR_creat, sys_creat)
|
||||
#define __NR_link 9
|
||||
__SYSCALL(__NR_link, sys_link)
|
||||
#define __NR_unlink 10
|
||||
__SYSCALL(__NR_unlink, sys_unlink)
|
||||
#define __NR_execve 11
|
||||
__SYSCALL(__NR_execve, sys_execve)
|
||||
#define __NR_chdir 12
|
||||
__SYSCALL(__NR_chdir, sys_chdir)
|
||||
__SYSCALL(13, sys_ni_syscall) /* 13 was sys_time */
|
||||
#define __NR_mknod 14
|
||||
__SYSCALL(__NR_mknod, sys_mknod)
|
||||
#define __NR_chmod 15
|
||||
__SYSCALL(__NR_chmod, sys_chmod)
|
||||
#define __NR_lchown 16
|
||||
__SYSCALL(__NR_lchown, sys_lchown16)
|
||||
__SYSCALL(17, sys_ni_syscall) /* 17 was sys_break */
|
||||
__SYSCALL(18, sys_ni_syscall) /* 18 was sys_stat */
|
||||
#define __NR_lseek 19
|
||||
__SYSCALL(__NR_lseek, sys_lseek)
|
||||
#define __NR_getpid 20
|
||||
__SYSCALL(__NR_getpid, sys_getpid)
|
||||
#define __NR_mount 21
|
||||
__SYSCALL(__NR_mount, sys_mount)
|
||||
__SYSCALL(22, sys_ni_syscall) /* 22 was sys_umount */
|
||||
#define __NR_setuid 23
|
||||
__SYSCALL(__NR_setuid, sys_setuid16)
|
||||
#define __NR_getuid 24
|
||||
__SYSCALL(__NR_getuid, sys_getuid16)
|
||||
__SYSCALL(25, sys_ni_syscall) /* 25 was sys_stime */
|
||||
#define __NR_ptrace 26
|
||||
__SYSCALL(__NR_ptrace, sys_ptrace)
|
||||
__SYSCALL(27, sys_ni_syscall) /* 27 was sys_alarm */
|
||||
__SYSCALL(28, sys_ni_syscall) /* 28 was sys_fstat */
|
||||
#define __NR_pause 29
|
||||
__SYSCALL(__NR_pause, sys_pause)
|
||||
__SYSCALL(30, sys_ni_syscall) /* 30 was sys_utime */
|
||||
__SYSCALL(31, sys_ni_syscall) /* 31 was sys_stty */
|
||||
__SYSCALL(32, sys_ni_syscall) /* 32 was sys_gtty */
|
||||
#define __NR_access 33
|
||||
__SYSCALL(__NR_access, sys_access)
|
||||
#define __NR_nice 34
|
||||
__SYSCALL(__NR_nice, sys_nice)
|
||||
__SYSCALL(35, sys_ni_syscall) /* 35 was sys_ftime */
|
||||
#define __NR_sync 36
|
||||
__SYSCALL(__NR_sync, sys_sync)
|
||||
#define __NR_kill 37
|
||||
__SYSCALL(__NR_kill, sys_kill)
|
||||
#define __NR_rename 38
|
||||
__SYSCALL(__NR_rename, sys_rename)
|
||||
#define __NR_mkdir 39
|
||||
__SYSCALL(__NR_mkdir, sys_mkdir)
|
||||
#define __NR_rmdir 40
|
||||
__SYSCALL(__NR_rmdir, sys_rmdir)
|
||||
#define __NR_dup 41
|
||||
__SYSCALL(__NR_dup, sys_dup)
|
||||
#define __NR_pipe 42
|
||||
__SYSCALL(__NR_pipe, sys_pipe)
|
||||
#define __NR_times 43
|
||||
__SYSCALL(__NR_times, sys_times)
|
||||
__SYSCALL(44, sys_ni_syscall) /* 44 was sys_prof */
|
||||
#define __NR_brk 45
|
||||
__SYSCALL(__NR_brk, sys_brk)
|
||||
#define __NR_setgid 46
|
||||
__SYSCALL(__NR_setgid, sys_setgid16)
|
||||
#define __NR_getgid 47
|
||||
__SYSCALL(__NR_getgid, sys_getgid16)
|
||||
__SYSCALL(48, sys_ni_syscall) /* 48 was sys_signal */
|
||||
#define __NR_geteuid 49
|
||||
__SYSCALL(__NR_geteuid, sys_geteuid16)
|
||||
#define __NR_getegid 50
|
||||
__SYSCALL(__NR_getegid, sys_getegid16)
|
||||
#define __NR_acct 51
|
||||
__SYSCALL(__NR_acct, sys_acct)
|
||||
#define __NR_umount2 52
|
||||
__SYSCALL(__NR_umount2, sys_umount)
|
||||
__SYSCALL(53, sys_ni_syscall) /* 53 was sys_lock */
|
||||
#define __NR_ioctl 54
|
||||
__SYSCALL(__NR_ioctl, sys_ioctl)
|
||||
#define __NR_fcntl 55
|
||||
__SYSCALL(__NR_fcntl, sys_fcntl)
|
||||
__SYSCALL(56, sys_ni_syscall) /* 56 was sys_mpx */
|
||||
#define __NR_setpgid 57
|
||||
__SYSCALL(__NR_setpgid, sys_setpgid)
|
||||
__SYSCALL(58, sys_ni_syscall) /* 58 was sys_ulimit */
|
||||
__SYSCALL(59, sys_ni_syscall) /* 59 was sys_olduname */
|
||||
#define __NR_umask 60
|
||||
__SYSCALL(__NR_umask, sys_umask)
|
||||
#define __NR_chroot 61
|
||||
__SYSCALL(__NR_chroot, sys_chroot)
|
||||
#define __NR_ustat 62
|
||||
__SYSCALL(__NR_ustat, sys_ustat)
|
||||
#define __NR_dup2 63
|
||||
__SYSCALL(__NR_dup2, sys_dup2)
|
||||
#define __NR_getppid 64
|
||||
__SYSCALL(__NR_getppid, sys_getppid)
|
||||
#define __NR_getpgrp 65
|
||||
__SYSCALL(__NR_getpgrp, sys_getpgrp)
|
||||
#define __NR_setsid 66
|
||||
__SYSCALL(__NR_setsid, sys_setsid)
|
||||
#define __NR_sigaction 67
|
||||
__SYSCALL(__NR_sigaction, sys_sigaction)
|
||||
__SYSCALL(68, sys_ni_syscall) /* 68 was sys_sgetmask */
|
||||
__SYSCALL(69, sys_ni_syscall) /* 69 was sys_ssetmask */
|
||||
#define __NR_setreuid 70
|
||||
__SYSCALL(__NR_setreuid, sys_setreuid16)
|
||||
#define __NR_setregid 71
|
||||
__SYSCALL(__NR_setregid, sys_setregid16)
|
||||
#define __NR_sigsuspend 72
|
||||
__SYSCALL(__NR_sigsuspend, sys_sigsuspend)
|
||||
#define __NR_sigpending 73
|
||||
__SYSCALL(__NR_sigpending, sys_sigpending)
|
||||
#define __NR_sethostname 74
|
||||
__SYSCALL(__NR_sethostname, sys_sethostname)
|
||||
#define __NR_setrlimit 75
|
||||
__SYSCALL(__NR_setrlimit, sys_setrlimit)
|
||||
__SYSCALL(76, sys_ni_syscall) /* 76 was sys_getrlimit */
|
||||
#define __NR_getrusage 77
|
||||
__SYSCALL(__NR_getrusage, sys_getrusage)
|
||||
#define __NR_gettimeofday 78
|
||||
__SYSCALL(__NR_gettimeofday, sys_gettimeofday)
|
||||
#define __NR_settimeofday 79
|
||||
__SYSCALL(__NR_settimeofday, sys_settimeofday)
|
||||
#define __NR_getgroups 80
|
||||
__SYSCALL(__NR_getgroups, sys_getgroups16)
|
||||
#define __NR_setgroups 81
|
||||
__SYSCALL(__NR_setgroups, sys_setgroups16)
|
||||
__SYSCALL(82, sys_ni_syscall) /* 82 was sys_select */
|
||||
#define __NR_symlink 83
|
||||
__SYSCALL(__NR_symlink, sys_symlink)
|
||||
__SYSCALL(84, sys_ni_syscall) /* 84 was sys_lstat */
|
||||
#define __NR_readlink 85
|
||||
__SYSCALL(__NR_readlink, sys_readlink)
|
||||
#define __NR_uselib 86
|
||||
__SYSCALL(__NR_uselib, sys_uselib)
|
||||
#define __NR_swapon 87
|
||||
__SYSCALL(__NR_swapon, sys_swapon)
|
||||
#define __NR_reboot 88
|
||||
__SYSCALL(__NR_reboot, sys_reboot)
|
||||
__SYSCALL(89, sys_ni_syscall) /* 89 was sys_readdir */
|
||||
__SYSCALL(90, sys_ni_syscall) /* 90 was sys_mmap */
|
||||
#define __NR_munmap 91
|
||||
__SYSCALL(__NR_munmap, sys_munmap)
|
||||
#define __NR_truncate 92
|
||||
__SYSCALL(__NR_truncate, sys_truncate)
|
||||
#define __NR_ftruncate 93
|
||||
__SYSCALL(__NR_ftruncate, sys_ftruncate)
|
||||
#define __NR_fchmod 94
|
||||
__SYSCALL(__NR_fchmod, sys_fchmod)
|
||||
#define __NR_fchown 95
|
||||
__SYSCALL(__NR_fchown, sys_fchown16)
|
||||
#define __NR_getpriority 96
|
||||
__SYSCALL(__NR_getpriority, sys_getpriority)
|
||||
#define __NR_setpriority 97
|
||||
__SYSCALL(__NR_setpriority, sys_setpriority)
|
||||
__SYSCALL(98, sys_ni_syscall) /* 98 was sys_profil */
|
||||
#define __NR_statfs 99
|
||||
__SYSCALL(__NR_statfs, sys_statfs)
|
||||
#define __NR_fstatfs 100
|
||||
__SYSCALL(__NR_fstatfs, sys_fstatfs)
|
||||
__SYSCALL(101, sys_ni_syscall) /* 101 was sys_ioperm */
|
||||
__SYSCALL(102, sys_ni_syscall) /* 102 was sys_socketcall */
|
||||
#define __NR_syslog 103
|
||||
__SYSCALL(__NR_syslog, sys_syslog)
|
||||
#define __NR_setitimer 104
|
||||
__SYSCALL(__NR_setitimer, sys_setitimer)
|
||||
#define __NR_getitimer 105
|
||||
__SYSCALL(__NR_getitimer, sys_getitimer)
|
||||
#define __NR_stat 106
|
||||
__SYSCALL(__NR_stat, sys_newstat)
|
||||
#define __NR_lstat 107
|
||||
__SYSCALL(__NR_lstat, sys_newlstat)
|
||||
#define __NR_fstat 108
|
||||
__SYSCALL(__NR_fstat, sys_newfstat)
|
||||
__SYSCALL(109, sys_ni_syscall) /* 109 was sys_uname */
|
||||
__SYSCALL(110, sys_ni_syscall) /* 110 was sys_iopl */
|
||||
#define __NR_vhangup 111
|
||||
__SYSCALL(__NR_vhangup, sys_vhangup)
|
||||
__SYSCALL(112, sys_ni_syscall) /* 112 was sys_idle */
|
||||
__SYSCALL(113, sys_ni_syscall) /* 113 was sys_syscall */
|
||||
#define __NR_wait4 114
|
||||
__SYSCALL(__NR_wait4, sys_wait4)
|
||||
#define __NR_swapoff 115
|
||||
__SYSCALL(__NR_swapoff, sys_swapoff)
|
||||
#define __NR_sysinfo 116
|
||||
__SYSCALL(__NR_sysinfo, sys_sysinfo)
|
||||
__SYSCALL(117, sys_ni_syscall) /* 117 was sys_ipc */
|
||||
#define __NR_fsync 118
|
||||
__SYSCALL(__NR_fsync, sys_fsync)
|
||||
#define __NR_sigreturn 119
|
||||
__SYSCALL(__NR_sigreturn, sys_sigreturn)
|
||||
#define __NR_clone 120
|
||||
__SYSCALL(__NR_clone, sys_clone)
|
||||
#define __NR_setdomainname 121
|
||||
__SYSCALL(__NR_setdomainname, sys_setdomainname)
|
||||
#define __NR_uname 122
|
||||
__SYSCALL(__NR_uname, sys_newuname)
|
||||
__SYSCALL(123, sys_ni_syscall) /* 123 was sys_modify_ldt */
|
||||
#define __NR_adjtimex 124
|
||||
__SYSCALL(__NR_adjtimex, sys_adjtimex)
|
||||
#define __NR_mprotect 125
|
||||
__SYSCALL(__NR_mprotect, sys_mprotect)
|
||||
#define __NR_sigprocmask 126
|
||||
__SYSCALL(__NR_sigprocmask, sys_sigprocmask)
|
||||
__SYSCALL(127, sys_ni_syscall) /* 127 was sys_create_module */
|
||||
#define __NR_init_module 128
|
||||
__SYSCALL(__NR_init_module, sys_init_module)
|
||||
#define __NR_delete_module 129
|
||||
__SYSCALL(__NR_delete_module, sys_delete_module)
|
||||
__SYSCALL(130, sys_ni_syscall) /* 130 was sys_get_kernel_syms */
|
||||
#define __NR_quotactl 131
|
||||
__SYSCALL(__NR_quotactl, sys_quotactl)
|
||||
#define __NR_getpgid 132
|
||||
__SYSCALL(__NR_getpgid, sys_getpgid)
|
||||
#define __NR_fchdir 133
|
||||
__SYSCALL(__NR_fchdir, sys_fchdir)
|
||||
#define __NR_bdflush 134
|
||||
__SYSCALL(__NR_bdflush, sys_bdflush)
|
||||
#define __NR_sysfs 135
|
||||
__SYSCALL(__NR_sysfs, sys_sysfs)
|
||||
#define __NR_personality 136
|
||||
__SYSCALL(__NR_personality, sys_personality)
|
||||
__SYSCALL(137, sys_ni_syscall) /* 137 was sys_afs_syscall */
|
||||
#define __NR_setfsuid 138
|
||||
__SYSCALL(__NR_setfsuid, sys_setfsuid16)
|
||||
#define __NR_setfsgid 139
|
||||
__SYSCALL(__NR_setfsgid, sys_setfsgid16)
|
||||
#define __NR__llseek 140
|
||||
__SYSCALL(__NR__llseek, sys_llseek)
|
||||
#define __NR_getdents 141
|
||||
__SYSCALL(__NR_getdents, sys_getdents)
|
||||
#define __NR__newselect 142
|
||||
__SYSCALL(__NR__newselect, sys_select)
|
||||
#define __NR_flock 143
|
||||
__SYSCALL(__NR_flock, sys_flock)
|
||||
#define __NR_msync 144
|
||||
__SYSCALL(__NR_msync, sys_msync)
|
||||
#define __NR_readv 145
|
||||
__SYSCALL(__NR_readv, sys_readv)
|
||||
#define __NR_writev 146
|
||||
__SYSCALL(__NR_writev, sys_writev)
|
||||
#define __NR_getsid 147
|
||||
__SYSCALL(__NR_getsid, sys_getsid)
|
||||
#define __NR_fdatasync 148
|
||||
__SYSCALL(__NR_fdatasync, sys_fdatasync)
|
||||
#define __NR__sysctl 149
|
||||
__SYSCALL(__NR__sysctl, sys_sysctl)
|
||||
#define __NR_mlock 150
|
||||
__SYSCALL(__NR_mlock, sys_mlock)
|
||||
#define __NR_munlock 151
|
||||
__SYSCALL(__NR_munlock, sys_munlock)
|
||||
#define __NR_mlockall 152
|
||||
__SYSCALL(__NR_mlockall, sys_mlockall)
|
||||
#define __NR_munlockall 153
|
||||
__SYSCALL(__NR_munlockall, sys_munlockall)
|
||||
#define __NR_sched_setparam 154
|
||||
__SYSCALL(__NR_sched_setparam, sys_sched_setparam)
|
||||
#define __NR_sched_getparam 155
|
||||
__SYSCALL(__NR_sched_getparam, sys_sched_getparam)
|
||||
#define __NR_sched_setscheduler 156
|
||||
__SYSCALL(__NR_sched_setscheduler, sys_sched_setscheduler)
|
||||
#define __NR_sched_getscheduler 157
|
||||
__SYSCALL(__NR_sched_getscheduler, sys_sched_getscheduler)
|
||||
#define __NR_sched_yield 158
|
||||
__SYSCALL(__NR_sched_yield, sys_sched_yield)
|
||||
#define __NR_sched_get_priority_max 159
|
||||
__SYSCALL(__NR_sched_get_priority_max, sys_sched_get_priority_max)
|
||||
#define __NR_sched_get_priority_min 160
|
||||
__SYSCALL(__NR_sched_get_priority_min, sys_sched_get_priority_min)
|
||||
#define __NR_sched_rr_get_interval 161
|
||||
__SYSCALL(__NR_sched_rr_get_interval, sys_sched_rr_get_interval)
|
||||
#define __NR_nanosleep 162
|
||||
__SYSCALL(__NR_nanosleep, sys_nanosleep)
|
||||
#define __NR_mremap 163
|
||||
__SYSCALL(__NR_mremap, sys_mremap)
|
||||
#define __NR_setresuid 164
|
||||
__SYSCALL(__NR_setresuid, sys_setresuid16)
|
||||
#define __NR_getresuid 165
|
||||
__SYSCALL(__NR_getresuid, sys_getresuid16)
|
||||
__SYSCALL(166, sys_ni_syscall) /* 166 was sys_vm86 */
|
||||
__SYSCALL(167, sys_ni_syscall) /* 167 was sys_query_module */
|
||||
#define __NR_poll 168
|
||||
__SYSCALL(__NR_poll, sys_poll)
|
||||
#define __NR_nfsservctl 169
|
||||
__SYSCALL(__NR_nfsservctl, sys_ni_syscall)
|
||||
#define __NR_setresgid 170
|
||||
__SYSCALL(__NR_setresgid, sys_setresgid16)
|
||||
#define __NR_getresgid 171
|
||||
__SYSCALL(__NR_getresgid, sys_getresgid16)
|
||||
#define __NR_prctl 172
|
||||
__SYSCALL(__NR_prctl, sys_prctl)
|
||||
#define __NR_rt_sigreturn 173
|
||||
__SYSCALL(__NR_rt_sigreturn, sys_rt_sigreturn)
|
||||
#define __NR_rt_sigaction 174
|
||||
__SYSCALL(__NR_rt_sigaction, sys_rt_sigaction)
|
||||
#define __NR_rt_sigprocmask 175
|
||||
__SYSCALL(__NR_rt_sigprocmask, sys_rt_sigprocmask)
|
||||
#define __NR_rt_sigpending 176
|
||||
__SYSCALL(__NR_rt_sigpending, sys_rt_sigpending)
|
||||
#define __NR_rt_sigtimedwait 177
|
||||
__SYSCALL(__NR_rt_sigtimedwait, sys_rt_sigtimedwait)
|
||||
#define __NR_rt_sigqueueinfo 178
|
||||
__SYSCALL(__NR_rt_sigqueueinfo, sys_rt_sigqueueinfo)
|
||||
#define __NR_rt_sigsuspend 179
|
||||
__SYSCALL(__NR_rt_sigsuspend, sys_rt_sigsuspend)
|
||||
#define __NR_pread64 180
|
||||
__SYSCALL(__NR_pread64, sys_pread64)
|
||||
#define __NR_pwrite64 181
|
||||
__SYSCALL(__NR_pwrite64, sys_pwrite64)
|
||||
#define __NR_chown 182
|
||||
__SYSCALL(__NR_chown, sys_chown16)
|
||||
#define __NR_getcwd 183
|
||||
__SYSCALL(__NR_getcwd, sys_getcwd)
|
||||
#define __NR_capget 184
|
||||
__SYSCALL(__NR_capget, sys_capget)
|
||||
#define __NR_capset 185
|
||||
__SYSCALL(__NR_capset, sys_capset)
|
||||
#define __NR_sigaltstack 186
|
||||
__SYSCALL(__NR_sigaltstack, sys_sigaltstack)
|
||||
#define __NR_sendfile 187
|
||||
__SYSCALL(__NR_sendfile, sys_sendfile)
|
||||
__SYSCALL(188, sys_ni_syscall) /* 188 reserved */
|
||||
__SYSCALL(189, sys_ni_syscall) /* 189 reserved */
|
||||
#define __NR_vfork 190
|
||||
__SYSCALL(__NR_vfork, sys_vfork)
|
||||
#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
|
||||
__SYSCALL(__NR_ugetrlimit, sys_getrlimit)
|
||||
#define __NR_mmap2 192
|
||||
__SYSCALL(__NR_mmap2, sys_mmap2)
|
||||
#define __NR_truncate64 193
|
||||
__SYSCALL(__NR_truncate64, sys_truncate64)
|
||||
#define __NR_ftruncate64 194
|
||||
__SYSCALL(__NR_ftruncate64, sys_ftruncate64)
|
||||
#define __NR_stat64 195
|
||||
__SYSCALL(__NR_stat64, sys_stat64)
|
||||
#define __NR_lstat64 196
|
||||
__SYSCALL(__NR_lstat64, sys_lstat64)
|
||||
#define __NR_fstat64 197
|
||||
__SYSCALL(__NR_fstat64, sys_fstat64)
|
||||
#define __NR_lchown32 198
|
||||
__SYSCALL(__NR_lchown32, sys_lchown)
|
||||
#define __NR_getuid32 199
|
||||
__SYSCALL(__NR_getuid32, sys_getuid)
|
||||
#define __NR_getgid32 200
|
||||
__SYSCALL(__NR_getgid32, sys_getgid)
|
||||
#define __NR_geteuid32 201
|
||||
__SYSCALL(__NR_geteuid32, sys_geteuid)
|
||||
#define __NR_getegid32 202
|
||||
__SYSCALL(__NR_getegid32, sys_getegid)
|
||||
#define __NR_setreuid32 203
|
||||
__SYSCALL(__NR_setreuid32, sys_setreuid)
|
||||
#define __NR_setregid32 204
|
||||
__SYSCALL(__NR_setregid32, sys_setregid)
|
||||
#define __NR_getgroups32 205
|
||||
__SYSCALL(__NR_getgroups32, sys_getgroups)
|
||||
#define __NR_setgroups32 206
|
||||
__SYSCALL(__NR_setgroups32, sys_setgroups)
|
||||
#define __NR_fchown32 207
|
||||
__SYSCALL(__NR_fchown32, sys_fchown)
|
||||
#define __NR_setresuid32 208
|
||||
__SYSCALL(__NR_setresuid32, sys_setresuid)
|
||||
#define __NR_getresuid32 209
|
||||
__SYSCALL(__NR_getresuid32, sys_getresuid)
|
||||
#define __NR_setresgid32 210
|
||||
__SYSCALL(__NR_setresgid32, sys_setresgid)
|
||||
#define __NR_getresgid32 211
|
||||
__SYSCALL(__NR_getresgid32, sys_getresgid)
|
||||
#define __NR_chown32 212
|
||||
__SYSCALL(__NR_chown32, sys_chown)
|
||||
#define __NR_setuid32 213
|
||||
__SYSCALL(__NR_setuid32, sys_setuid)
|
||||
#define __NR_setgid32 214
|
||||
__SYSCALL(__NR_setgid32, sys_setgid)
|
||||
#define __NR_setfsuid32 215
|
||||
__SYSCALL(__NR_setfsuid32, sys_setfsuid)
|
||||
#define __NR_setfsgid32 216
|
||||
__SYSCALL(__NR_setfsgid32, sys_setfsgid)
|
||||
#define __NR_getdents64 217
|
||||
__SYSCALL(__NR_getdents64, sys_getdents64)
|
||||
#define __NR_pivot_root 218
|
||||
__SYSCALL(__NR_pivot_root, sys_pivot_root)
|
||||
#define __NR_mincore 219
|
||||
__SYSCALL(__NR_mincore, sys_mincore)
|
||||
#define __NR_madvise 220
|
||||
__SYSCALL(__NR_madvise, sys_madvise)
|
||||
#define __NR_fcntl64 221
|
||||
__SYSCALL(__NR_fcntl64, sys_fcntl64)
|
||||
__SYSCALL(222, sys_ni_syscall) /* 222 for tux */
|
||||
__SYSCALL(223, sys_ni_syscall) /* 223 is unused */
|
||||
#define __NR_gettid 224
|
||||
__SYSCALL(__NR_gettid, sys_gettid)
|
||||
#define __NR_readahead 225
|
||||
__SYSCALL(__NR_readahead, sys_readahead)
|
||||
#define __NR_setxattr 226
|
||||
__SYSCALL(__NR_setxattr, sys_setxattr)
|
||||
#define __NR_lsetxattr 227
|
||||
__SYSCALL(__NR_lsetxattr, sys_lsetxattr)
|
||||
#define __NR_fsetxattr 228
|
||||
__SYSCALL(__NR_fsetxattr, sys_fsetxattr)
|
||||
#define __NR_getxattr 229
|
||||
__SYSCALL(__NR_getxattr, sys_getxattr)
|
||||
#define __NR_lgetxattr 230
|
||||
__SYSCALL(__NR_lgetxattr, sys_lgetxattr)
|
||||
#define __NR_fgetxattr 231
|
||||
__SYSCALL(__NR_fgetxattr, sys_fgetxattr)
|
||||
#define __NR_listxattr 232
|
||||
__SYSCALL(__NR_listxattr, sys_listxattr)
|
||||
#define __NR_llistxattr 233
|
||||
__SYSCALL(__NR_llistxattr, sys_llistxattr)
|
||||
#define __NR_flistxattr 234
|
||||
__SYSCALL(__NR_flistxattr, sys_flistxattr)
|
||||
#define __NR_removexattr 235
|
||||
__SYSCALL(__NR_removexattr, sys_removexattr)
|
||||
#define __NR_lremovexattr 236
|
||||
__SYSCALL(__NR_lremovexattr, sys_lremovexattr)
|
||||
#define __NR_fremovexattr 237
|
||||
__SYSCALL(__NR_fremovexattr, sys_fremovexattr)
|
||||
#define __NR_tkill 238
|
||||
__SYSCALL(__NR_tkill, sys_tkill)
|
||||
#define __NR_sendfile64 239
|
||||
__SYSCALL(__NR_sendfile64, sys_sendfile64)
|
||||
#define __NR_futex 240
|
||||
__SYSCALL(__NR_futex, sys_futex)
|
||||
#define __NR_sched_setaffinity 241
|
||||
__SYSCALL(__NR_sched_setaffinity, sys_sched_setaffinity)
|
||||
#define __NR_sched_getaffinity 242
|
||||
__SYSCALL(__NR_sched_getaffinity, sys_sched_getaffinity)
|
||||
#define __NR_io_setup 243
|
||||
__SYSCALL(__NR_io_setup, sys_io_setup)
|
||||
#define __NR_io_destroy 244
|
||||
__SYSCALL(__NR_io_destroy, sys_io_destroy)
|
||||
#define __NR_io_getevents 245
|
||||
__SYSCALL(__NR_io_getevents, sys_io_getevents)
|
||||
#define __NR_io_submit 246
|
||||
__SYSCALL(__NR_io_submit, sys_io_submit)
|
||||
#define __NR_io_cancel 247
|
||||
__SYSCALL(__NR_io_cancel, sys_io_cancel)
|
||||
#define __NR_exit_group 248
|
||||
__SYSCALL(__NR_exit_group, sys_exit_group)
|
||||
#define __NR_lookup_dcookie 249
|
||||
__SYSCALL(__NR_lookup_dcookie, sys_lookup_dcookie)
|
||||
#define __NR_epoll_create 250
|
||||
__SYSCALL(__NR_epoll_create, sys_epoll_create)
|
||||
#define __NR_epoll_ctl 251
|
||||
__SYSCALL(__NR_epoll_ctl, sys_epoll_ctl)
|
||||
#define __NR_epoll_wait 252
|
||||
__SYSCALL(__NR_epoll_wait, sys_epoll_wait)
|
||||
#define __NR_remap_file_pages 253
|
||||
__SYSCALL(__NR_remap_file_pages, sys_remap_file_pages)
|
||||
__SYSCALL(254, sys_ni_syscall) /* 254 for set_thread_area */
|
||||
__SYSCALL(255, sys_ni_syscall) /* 255 for get_thread_area */
|
||||
#define __NR_set_tid_address 256
|
||||
__SYSCALL(__NR_set_tid_address, sys_set_tid_address)
|
||||
#define __NR_timer_create 257
|
||||
__SYSCALL(__NR_timer_create, sys_timer_create)
|
||||
#define __NR_timer_settime 258
|
||||
__SYSCALL(__NR_timer_settime, sys_timer_settime)
|
||||
#define __NR_timer_gettime 259
|
||||
__SYSCALL(__NR_timer_gettime, sys_timer_gettime)
|
||||
#define __NR_timer_getoverrun 260
|
||||
__SYSCALL(__NR_timer_getoverrun, sys_timer_getoverrun)
|
||||
#define __NR_timer_delete 261
|
||||
__SYSCALL(__NR_timer_delete, sys_timer_delete)
|
||||
#define __NR_clock_settime 262
|
||||
__SYSCALL(__NR_clock_settime, sys_clock_settime)
|
||||
#define __NR_clock_gettime 263
|
||||
__SYSCALL(__NR_clock_gettime, sys_clock_gettime)
|
||||
#define __NR_clock_getres 264
|
||||
__SYSCALL(__NR_clock_getres, sys_clock_getres)
|
||||
#define __NR_clock_nanosleep 265
|
||||
__SYSCALL(__NR_clock_nanosleep, sys_clock_nanosleep)
|
||||
#define __NR_statfs64 266
|
||||
__SYSCALL(__NR_statfs64, sys_statfs64)
|
||||
#define __NR_fstatfs64 267
|
||||
__SYSCALL(__NR_fstatfs64, sys_fstatfs64)
|
||||
#define __NR_tgkill 268
|
||||
__SYSCALL(__NR_tgkill, sys_tgkill)
|
||||
#define __NR_utimes 269
|
||||
__SYSCALL(__NR_utimes, sys_utimes)
|
||||
#define __NR_fadvise64 270
|
||||
__SYSCALL(__NR_fadvise64, sys_fadvise64_64)
|
||||
#define __NR_pciconfig_iobase 271
|
||||
__SYSCALL(__NR_pciconfig_iobase, sys_pciconfig_iobase)
|
||||
#define __NR_pciconfig_read 272
|
||||
__SYSCALL(__NR_pciconfig_read, sys_pciconfig_read)
|
||||
#define __NR_pciconfig_write 273
|
||||
__SYSCALL(__NR_pciconfig_write, sys_pciconfig_write)
|
||||
#define __NR_mq_open 274
|
||||
__SYSCALL(__NR_mq_open, sys_mq_open)
|
||||
#define __NR_mq_unlink 275
|
||||
__SYSCALL(__NR_mq_unlink, sys_mq_unlink)
|
||||
#define __NR_mq_timedsend 276
|
||||
__SYSCALL(__NR_mq_timedsend, sys_mq_timedsend)
|
||||
#define __NR_mq_timedreceive 277
|
||||
__SYSCALL(__NR_mq_timedreceive, sys_mq_timedreceive)
|
||||
#define __NR_mq_notify 278
|
||||
__SYSCALL(__NR_mq_notify, sys_mq_notify)
|
||||
#define __NR_mq_getsetattr 279
|
||||
__SYSCALL(__NR_mq_getsetattr, sys_mq_getsetattr)
|
||||
#define __NR_waitid 280
|
||||
__SYSCALL(__NR_waitid, sys_waitid)
|
||||
#define __NR_socket 281
|
||||
__SYSCALL(__NR_socket, sys_socket)
|
||||
#define __NR_bind 282
|
||||
__SYSCALL(__NR_bind, sys_bind)
|
||||
#define __NR_connect 283
|
||||
__SYSCALL(__NR_connect, sys_connect)
|
||||
#define __NR_listen 284
|
||||
__SYSCALL(__NR_listen, sys_listen)
|
||||
#define __NR_accept 285
|
||||
__SYSCALL(__NR_accept, sys_accept)
|
||||
#define __NR_getsockname 286
|
||||
__SYSCALL(__NR_getsockname, sys_getsockname)
|
||||
#define __NR_getpeername 287
|
||||
__SYSCALL(__NR_getpeername, sys_getpeername)
|
||||
#define __NR_socketpair 288
|
||||
__SYSCALL(__NR_socketpair, sys_socketpair)
|
||||
#define __NR_send 289
|
||||
__SYSCALL(__NR_send, sys_send)
|
||||
#define __NR_sendto 290
|
||||
__SYSCALL(__NR_sendto, sys_sendto)
|
||||
#define __NR_recv 291
|
||||
__SYSCALL(__NR_recv, sys_recv)
|
||||
#define __NR_recvfrom 292
|
||||
__SYSCALL(__NR_recvfrom, sys_recvfrom)
|
||||
#define __NR_shutdown 293
|
||||
__SYSCALL(__NR_shutdown, sys_shutdown)
|
||||
#define __NR_setsockopt 294
|
||||
__SYSCALL(__NR_setsockopt, sys_setsockopt)
|
||||
#define __NR_getsockopt 295
|
||||
__SYSCALL(__NR_getsockopt, sys_getsockopt)
|
||||
#define __NR_sendmsg 296
|
||||
__SYSCALL(__NR_sendmsg, sys_sendmsg)
|
||||
#define __NR_recvmsg 297
|
||||
__SYSCALL(__NR_recvmsg, sys_recvmsg)
|
||||
#define __NR_semop 298
|
||||
__SYSCALL(__NR_semop, sys_semop)
|
||||
#define __NR_semget 299
|
||||
__SYSCALL(__NR_semget, sys_semget)
|
||||
#define __NR_semctl 300
|
||||
__SYSCALL(__NR_semctl, sys_semctl)
|
||||
#define __NR_msgsnd 301
|
||||
__SYSCALL(__NR_msgsnd, sys_msgsnd)
|
||||
#define __NR_msgrcv 302
|
||||
__SYSCALL(__NR_msgrcv, sys_msgrcv)
|
||||
#define __NR_msgget 303
|
||||
__SYSCALL(__NR_msgget, sys_msgget)
|
||||
#define __NR_msgctl 304
|
||||
__SYSCALL(__NR_msgctl, sys_msgctl)
|
||||
#define __NR_shmat 305
|
||||
__SYSCALL(__NR_shmat, sys_shmat)
|
||||
#define __NR_shmdt 306
|
||||
__SYSCALL(__NR_shmdt, sys_shmdt)
|
||||
#define __NR_shmget 307
|
||||
__SYSCALL(__NR_shmget, sys_shmget)
|
||||
#define __NR_shmctl 308
|
||||
__SYSCALL(__NR_shmctl, sys_shmctl)
|
||||
#define __NR_add_key 309
|
||||
__SYSCALL(__NR_add_key, sys_add_key)
|
||||
#define __NR_request_key 310
|
||||
__SYSCALL(__NR_request_key, sys_request_key)
|
||||
#define __NR_keyctl 311
|
||||
__SYSCALL(__NR_keyctl, sys_keyctl)
|
||||
#define __NR_semtimedop 312
|
||||
__SYSCALL(__NR_semtimedop, sys_semtimedop)
|
||||
#define __NR_vserver 313
|
||||
__SYSCALL(__NR_vserver, sys_ni_syscall)
|
||||
#define __NR_ioprio_set 314
|
||||
__SYSCALL(__NR_ioprio_set, sys_ioprio_set)
|
||||
#define __NR_ioprio_get 315
|
||||
__SYSCALL(__NR_ioprio_get, sys_ioprio_get)
|
||||
#define __NR_inotify_init 316
|
||||
__SYSCALL(__NR_inotify_init, sys_inotify_init)
|
||||
#define __NR_inotify_add_watch 317
|
||||
__SYSCALL(__NR_inotify_add_watch, sys_inotify_add_watch)
|
||||
#define __NR_inotify_rm_watch 318
|
||||
__SYSCALL(__NR_inotify_rm_watch, sys_inotify_rm_watch)
|
||||
#define __NR_mbind 319
|
||||
__SYSCALL(__NR_mbind, sys_mbind)
|
||||
#define __NR_get_mempolicy 320
|
||||
__SYSCALL(__NR_get_mempolicy, sys_get_mempolicy)
|
||||
#define __NR_set_mempolicy 321
|
||||
__SYSCALL(__NR_set_mempolicy, sys_set_mempolicy)
|
||||
#define __NR_openat 322
|
||||
__SYSCALL(__NR_openat, sys_openat)
|
||||
#define __NR_mkdirat 323
|
||||
__SYSCALL(__NR_mkdirat, sys_mkdirat)
|
||||
#define __NR_mknodat 324
|
||||
__SYSCALL(__NR_mknodat, sys_mknodat)
|
||||
#define __NR_fchownat 325
|
||||
__SYSCALL(__NR_fchownat, sys_fchownat)
|
||||
#define __NR_futimesat 326
|
||||
__SYSCALL(__NR_futimesat, sys_futimesat)
|
||||
#define __NR_fstatat64 327
|
||||
__SYSCALL(__NR_fstatat64, sys_fstatat64)
|
||||
#define __NR_unlinkat 328
|
||||
__SYSCALL(__NR_unlinkat, sys_unlinkat)
|
||||
#define __NR_renameat 329
|
||||
__SYSCALL(__NR_renameat, sys_renameat)
|
||||
#define __NR_linkat 330
|
||||
__SYSCALL(__NR_linkat, sys_linkat)
|
||||
#define __NR_symlinkat 331
|
||||
__SYSCALL(__NR_symlinkat, sys_symlinkat)
|
||||
#define __NR_readlinkat 332
|
||||
__SYSCALL(__NR_readlinkat, sys_readlinkat)
|
||||
#define __NR_fchmodat 333
|
||||
__SYSCALL(__NR_fchmodat, sys_fchmodat)
|
||||
#define __NR_faccessat 334
|
||||
__SYSCALL(__NR_faccessat, sys_faccessat)
|
||||
#define __NR_pselect6 335
|
||||
__SYSCALL(__NR_pselect6, sys_pselect6)
|
||||
#define __NR_ppoll 336
|
||||
__SYSCALL(__NR_ppoll, sys_ppoll)
|
||||
#define __NR_unshare 337
|
||||
__SYSCALL(__NR_unshare, sys_unshare)
|
||||
#define __NR_set_robust_list 338
|
||||
__SYSCALL(__NR_set_robust_list, sys_set_robust_list)
|
||||
#define __NR_get_robust_list 339
|
||||
__SYSCALL(__NR_get_robust_list, sys_get_robust_list)
|
||||
#define __NR_splice 340
|
||||
__SYSCALL(__NR_splice, sys_splice)
|
||||
#define __NR_sync_file_range2 341
|
||||
__SYSCALL(__NR_sync_file_range2, sys_sync_file_range2)
|
||||
#define __NR_tee 342
|
||||
__SYSCALL(__NR_tee, sys_tee)
|
||||
#define __NR_vmsplice 343
|
||||
__SYSCALL(__NR_vmsplice, sys_vmsplice)
|
||||
#define __NR_move_pages 344
|
||||
__SYSCALL(__NR_move_pages, sys_move_pages)
|
||||
#define __NR_getcpu 345
|
||||
__SYSCALL(__NR_getcpu, sys_getcpu)
|
||||
#define __NR_epoll_pwait 346
|
||||
__SYSCALL(__NR_epoll_pwait, sys_epoll_pwait)
|
||||
#define __NR_kexec_load 347
|
||||
__SYSCALL(__NR_kexec_load, sys_kexec_load)
|
||||
#define __NR_utimensat 348
|
||||
__SYSCALL(__NR_utimensat, sys_utimensat)
|
||||
#define __NR_signalfd 349
|
||||
__SYSCALL(__NR_signalfd, sys_signalfd)
|
||||
#define __NR_timerfd_create 350
|
||||
__SYSCALL(__NR_timerfd_create, sys_timerfd_create)
|
||||
#define __NR_eventfd 351
|
||||
__SYSCALL(__NR_eventfd, sys_eventfd)
|
||||
#define __NR_fallocate 352
|
||||
__SYSCALL(__NR_fallocate, sys_fallocate)
|
||||
#define __NR_timerfd_settime 353
|
||||
__SYSCALL(__NR_timerfd_settime, sys_timerfd_settime)
|
||||
#define __NR_timerfd_gettime 354
|
||||
__SYSCALL(__NR_timerfd_gettime, sys_timerfd_gettime)
|
||||
#define __NR_signalfd4 355
|
||||
__SYSCALL(__NR_signalfd4, sys_signalfd4)
|
||||
#define __NR_eventfd2 356
|
||||
__SYSCALL(__NR_eventfd2, sys_eventfd2)
|
||||
#define __NR_epoll_create1 357
|
||||
__SYSCALL(__NR_epoll_create1, sys_epoll_create1)
|
||||
#define __NR_dup3 358
|
||||
__SYSCALL(__NR_dup3, sys_dup3)
|
||||
#define __NR_pipe2 359
|
||||
__SYSCALL(__NR_pipe2, sys_pipe2)
|
||||
#define __NR_inotify_init1 360
|
||||
__SYSCALL(__NR_inotify_init1, sys_inotify_init1)
|
||||
#define __NR_preadv 361
|
||||
__SYSCALL(__NR_preadv, sys_preadv)
|
||||
#define __NR_pwritev 362
|
||||
__SYSCALL(__NR_pwritev, sys_pwritev)
|
||||
#define __NR_rt_tgsigqueueinfo 363
|
||||
__SYSCALL(__NR_rt_tgsigqueueinfo, sys_rt_tgsigqueueinfo)
|
||||
#define __NR_perf_event_open 364
|
||||
__SYSCALL(__NR_perf_event_open, sys_perf_event_open)
|
||||
#define __NR_recvmmsg 365
|
||||
__SYSCALL(__NR_recvmmsg, sys_recvmmsg)
|
||||
#define __NR_accept4 366
|
||||
__SYSCALL(__NR_accept4, sys_accept4)
|
||||
#define __NR_fanotify_init 367
|
||||
__SYSCALL(__NR_fanotify_init, sys_fanotify_init)
|
||||
#define __NR_fanotify_mark 368
|
||||
__SYSCALL(__NR_fanotify_mark, sys_fanotify_mark)
|
||||
#define __NR_prlimit64 369
|
||||
__SYSCALL(__NR_prlimit64, sys_prlimit64)
|
||||
#define __NR_name_to_handle_at 370
|
||||
__SYSCALL(__NR_name_to_handle_at, sys_name_to_handle_at)
|
||||
#define __NR_open_by_handle_at 371
|
||||
__SYSCALL(__NR_open_by_handle_at, sys_open_by_handle_at)
|
||||
#define __NR_clock_adjtime 372
|
||||
__SYSCALL(__NR_clock_adjtime, sys_clock_adjtime)
|
||||
#define __NR_syncfs 373
|
||||
__SYSCALL(__NR_syncfs, sys_syncfs)
|
||||
|
||||
/*
|
||||
* The following SVCs are ARM private.
|
||||
*/
|
||||
#define __ARM_NR_COMPAT_BASE 0x0f0000
|
||||
#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
|
||||
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
|
||||
|
||||
#endif /* __SYSCALL_COMPAT */
|
||||
|
||||
#define __NR_compat_syscalls 374
|
||||
|
||||
#define __ARCH_WANT_COMPAT_IPC_PARSE_VERSION
|
||||
#define __ARCH_WANT_COMPAT_STAT64
|
||||
#define __ARCH_WANT_SYS_GETHOSTNAME
|
||||
#define __ARCH_WANT_SYS_PAUSE
|
||||
#define __ARCH_WANT_SYS_GETPGRP
|
||||
#define __ARCH_WANT_SYS_LLSEEK
|
||||
#define __ARCH_WANT_SYS_NICE
|
||||
#define __ARCH_WANT_SYS_SIGPENDING
|
||||
#define __ARCH_WANT_SYS_SIGPROCMASK
|
||||
#define __ARCH_WANT_COMPAT_SYS_RT_SIGSUSPEND
|
||||
|
||||
#endif /* __ASM_UNISTD32_H */
|
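
The numbering above only becomes useful once __SYSCALL is given a real
definition. A sketch of how a compat system call table can be generated from
it by re-including the header (the table name is illustrative, the sys_*
declarations are assumed to come from linux/syscalls.h, and the syscall
handling patches in this series build the real table):

	#include <linux/syscalls.h>	/* sys_* declarations assumed in scope */
	#include <asm/unistd.h>		/* defines __NR_compat_syscalls */

	#define __SYSCALL_COMPAT
	#undef __SYSCALL
	#define __SYSCALL(nr, sym)	[nr] = (void *)sym,

	/* Illustrative only: every slot defaults to sys_ni_syscall, then the
	 * re-included header fills in the implemented entries by number. */
	static void *example_compat_call_table[__NR_compat_syscalls] = {
		[0 ... __NR_compat_syscalls - 1] = (void *)sys_ni_syscall,
	#include <asm/unistd32.h>
	};
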
41  arch/arm64/include/asm/vdso.h  Normal file
@@ -0,0 +1,41 @@
/*
 * Copyright (C) 2012 ARM Limited
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_VDSO_H
#define __ASM_VDSO_H

#ifdef __KERNEL__

/*
 * Default link address for the vDSO.
 * Since we randomise the VDSO mapping, there's little point in trying
 * to prelink this.
 */
#define VDSO_LBASE	0x0

#ifndef __ASSEMBLY__

#include <generated/vdso-offsets.h>

#define VDSO_SYMBOL(base, name)						   \
({									   \
	(void *)(vdso_offset_##name - VDSO_LBASE + (unsigned long)(base)); \
})

#endif /* !__ASSEMBLY__ */

#endif /* __KERNEL__ */

#endif /* __ASM_VDSO_H */
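
VDSO_SYMBOL() above turns a link-time offset from the generated
vdso-offsets.h into a run-time address once the vDSO pages have been mapped.
A sketch (the symbol name "sigtramp" and the helper are illustrative; the
real symbol names come from the vDSO linker script):

	#include <asm/vdso.h>

	/* Hypothetical: locate the signal trampoline inside an already-mapped vDSO. */
	static void *example_vdso_sigtramp(unsigned long vdso_base)
	{
		return VDSO_SYMBOL(vdso_base, sigtramp);  /* vdso_offset_sigtramp + base */
	}
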
43  arch/arm64/include/asm/vdso_datapage.h  Normal file
@@ -0,0 +1,43 @@
/*
 * Copyright (C) 2012 ARM Limited
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_VDSO_DATAPAGE_H
#define __ASM_VDSO_DATAPAGE_H

#ifdef __KERNEL__

#ifndef __ASSEMBLY__

struct vdso_data {
	__u64 cs_cycle_last;	/* Timebase at clocksource init */
	__u64 xtime_clock_sec;	/* Kernel time */
	__u64 xtime_clock_nsec;
	__u64 xtime_coarse_sec;	/* Coarse time */
	__u64 xtime_coarse_nsec;
	__u64 wtm_clock_sec;	/* Wall to monotonic time */
	__u64 wtm_clock_nsec;
	__u32 tb_seq_count;	/* Timebase sequence counter */
	__u32 cs_mult;		/* Clocksource multiplier */
	__u32 cs_shift;		/* Clocksource shift */
	__u32 tz_minuteswest;	/* Whacky timezone stuff */
	__u32 tz_dsttime;
	__u32 use_syscall;
};

#endif /* !__ASSEMBLY__ */

#endif /* __KERNEL__ */

#endif /* __ASM_VDSO_DATAPAGE_H */
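
tb_seq_count makes the structure above behave like a seqlock: the kernel
increments it before and after updating the fields, so a userspace reader
must retry while the count is odd or has changed. A simplified reader sketch
(memory barriers omitted for brevity; the real vDSO code adds them):

	#include <asm/vdso_datapage.h>

	/* Hypothetical coarse-clock reader built on the sequence counter. */
	static __u64 example_read_coarse_ns(const struct vdso_data *vd)
	{
		__u32 seq;
		__u64 sec, nsec;

		do {
			do {
				seq = vd->tb_seq_count;
			} while (seq & 1);		/* odd: update in progress */

			sec  = vd->xtime_coarse_sec;
			nsec = vd->xtime_coarse_nsec;
		} while (vd->tb_seq_count != seq);	/* changed: data was torn, retry */

		return sec * 1000000000ULL + nsec;
	}
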
1  arch/arm64/kernel/.gitignore  Normal file
@@ -0,0 +1 @@
vmlinux.lds
27  arch/arm64/kernel/Makefile  Normal file
@@ -0,0 +1,27 @@
#
# Makefile for the linux kernel.
#

CPPFLAGS_vmlinux.lds	:= -DTEXT_OFFSET=$(TEXT_OFFSET)
AFLAGS_head.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)

# Object file lists.
arm64-obj-y		:= cputable.o debug-monitors.o entry.o irq.o fpsimd.o	\
			   entry-fpsimd.o process.o ptrace.o setup.o signal.o	\
			   sys.o stacktrace.o time.o traps.o io.o vdso.o

arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o		\
					   sys_compat.o
arm64-obj-$(CONFIG_MODULES)		+= arm64ksyms.o module.o
arm64-obj-$(CONFIG_SMP)			+= smp.o
arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o

obj-y					+= $(arm64-obj-y) vdso/
obj-m					+= $(arm64-obj-m)
head-y					:= head.o
extra-y					:= $(head-y) vmlinux.lds

# vDSO - this must be built first to generate the symbol offsets
$(call objectify,$(arm64-obj-y)): $(obj)/vdso/vdso-offsets.h
$(obj)/vdso/vdso-offsets.h: $(obj)/vdso
46  arch/arm64/kernel/arm64ksyms.c  Normal file
@@ -0,0 +1,46 @@
/*
 * Based on arch/arm/kernel/armksyms.c
 *
 * Copyright (C) 2000 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/export.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/cryptohash.h>
#include <linux/delay.h>
#include <linux/in6.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <linux/io.h>

#include <asm/checksum.h>

	/* user mem (segment) */
EXPORT_SYMBOL(__strnlen_user);
EXPORT_SYMBOL(__strncpy_from_user);

EXPORT_SYMBOL(copy_page);

EXPORT_SYMBOL(__copy_from_user);
EXPORT_SYMBOL(__copy_to_user);
EXPORT_SYMBOL(__clear_user);

	/* bitops */
EXPORT_SYMBOL(__atomic_hash);

	/* physical memory */
EXPORT_SYMBOL(memstart_addr);
108	arch/arm64/kernel/asm-offsets.c	Normal file
@@ -0,0 +1,108 @@
/*
 * Based on arch/arm/kernel/asm-offsets.c
 *
 * Copyright (C) 1995-2003 Russell King
 *               2001-2002 Keith Owens
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/cputable.h>
#include <asm/vdso_datapage.h>
#include <linux/kbuild.h>

int main(void)
{
  DEFINE(TSK_ACTIVE_MM,		offsetof(struct task_struct, active_mm));
  BLANK();
  DEFINE(TI_FLAGS,		offsetof(struct thread_info, flags));
  DEFINE(TI_PREEMPT,		offsetof(struct thread_info, preempt_count));
  DEFINE(TI_ADDR_LIMIT,		offsetof(struct thread_info, addr_limit));
  DEFINE(TI_TASK,		offsetof(struct thread_info, task));
  DEFINE(TI_EXEC_DOMAIN,	offsetof(struct thread_info, exec_domain));
  DEFINE(TI_CPU,		offsetof(struct thread_info, cpu));
  BLANK();
  DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
  BLANK();
  DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
  DEFINE(S_X1,			offsetof(struct pt_regs, regs[1]));
  DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
  DEFINE(S_X3,			offsetof(struct pt_regs, regs[3]));
  DEFINE(S_X4,			offsetof(struct pt_regs, regs[4]));
  DEFINE(S_X5,			offsetof(struct pt_regs, regs[5]));
  DEFINE(S_X6,			offsetof(struct pt_regs, regs[6]));
  DEFINE(S_X7,			offsetof(struct pt_regs, regs[7]));
  DEFINE(S_LR,			offsetof(struct pt_regs, regs[30]));
  DEFINE(S_SP,			offsetof(struct pt_regs, sp));
#ifdef CONFIG_COMPAT
  DEFINE(S_COMPAT_SP,		offsetof(struct pt_regs, compat_sp));
#endif
  DEFINE(S_PSTATE,		offsetof(struct pt_regs, pstate));
  DEFINE(S_PC,			offsetof(struct pt_regs, pc));
  DEFINE(S_ORIG_X0,		offsetof(struct pt_regs, orig_x0));
  DEFINE(S_SYSCALLNO,		offsetof(struct pt_regs, syscallno));
  DEFINE(S_FRAME_SIZE,		sizeof(struct pt_regs));
  BLANK();
  DEFINE(MM_CONTEXT_ID,		offsetof(struct mm_struct, context.id));
  BLANK();
  DEFINE(VMA_VM_MM,		offsetof(struct vm_area_struct, vm_mm));
  DEFINE(VMA_VM_FLAGS,		offsetof(struct vm_area_struct, vm_flags));
  BLANK();
  DEFINE(VM_EXEC,		VM_EXEC);
  BLANK();
  DEFINE(PAGE_SZ,		PAGE_SIZE);
  BLANK();
  DEFINE(CPU_INFO_SZ,		sizeof(struct cpu_info));
  DEFINE(CPU_INFO_SETUP,	offsetof(struct cpu_info, cpu_setup));
  BLANK();
  DEFINE(DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
  DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
  DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
  BLANK();
  DEFINE(CLOCK_REALTIME,	CLOCK_REALTIME);
  DEFINE(CLOCK_MONOTONIC,	CLOCK_MONOTONIC);
  DEFINE(CLOCK_REALTIME_RES,	MONOTONIC_RES_NSEC);
  DEFINE(CLOCK_REALTIME_COARSE,	CLOCK_REALTIME_COARSE);
  DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
  DEFINE(CLOCK_COARSE_RES,	LOW_RES_NSEC);
  DEFINE(NSEC_PER_SEC,		NSEC_PER_SEC);
  BLANK();
  DEFINE(VDSO_CS_CYCLE_LAST,	offsetof(struct vdso_data, cs_cycle_last));
  DEFINE(VDSO_XTIME_CLK_SEC,	offsetof(struct vdso_data, xtime_clock_sec));
  DEFINE(VDSO_XTIME_CLK_NSEC,	offsetof(struct vdso_data, xtime_clock_nsec));
  DEFINE(VDSO_XTIME_CRS_SEC,	offsetof(struct vdso_data, xtime_coarse_sec));
  DEFINE(VDSO_XTIME_CRS_NSEC,	offsetof(struct vdso_data, xtime_coarse_nsec));
  DEFINE(VDSO_WTM_CLK_SEC,	offsetof(struct vdso_data, wtm_clock_sec));
  DEFINE(VDSO_WTM_CLK_NSEC,	offsetof(struct vdso_data, wtm_clock_nsec));
  DEFINE(VDSO_TB_SEQ_COUNT,	offsetof(struct vdso_data, tb_seq_count));
  DEFINE(VDSO_CS_MULT,		offsetof(struct vdso_data, cs_mult));
  DEFINE(VDSO_CS_SHIFT,		offsetof(struct vdso_data, cs_shift));
  DEFINE(VDSO_TZ_MINWEST,	offsetof(struct vdso_data, tz_minuteswest));
  DEFINE(VDSO_TZ_DSTTIME,	offsetof(struct vdso_data, tz_dsttime));
  DEFINE(VDSO_USE_SYSCALL,	offsetof(struct vdso_data, use_syscall));
  BLANK();
  DEFINE(TVAL_TV_SEC,		offsetof(struct timeval, tv_sec));
  DEFINE(TVAL_TV_USEC,		offsetof(struct timeval, tv_usec));
  DEFINE(TSPEC_TV_SEC,		offsetof(struct timespec, tv_sec));
  DEFINE(TSPEC_TV_NSEC,		offsetof(struct timespec, tv_nsec));
  BLANK();
  DEFINE(TZ_MINWEST,		offsetof(struct timezone, tz_minuteswest));
  DEFINE(TZ_DSTTIME,		offsetof(struct timezone, tz_dsttime));
  return 0;
}
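asm-offsets.c is never linked into the kernel; kbuild only compiles it to assembly and post-processes the DEFINE()/BLANK() markers into a generated header of #defines that the .S files (entry.S and friends) can include. A stand-alone sketch of the trick is below; the struct and symbol names are illustrative, and the DEFINE() here is only modelled on the one in <linux/kbuild.h>.

/* Stand-alone sketch of the asm-offsets technique used above.  Compile with
 * "cc -S offsets.c" and look for "->" markers in the generated assembly; a
 * small script then rewrites each marker into a #define usable from .S files.
 * Assembling (rather than just compiling) this file is not expected to work.
 */
#include <stddef.h>

struct pt_regs_example {		/* illustrative layout only */
	unsigned long regs[31];
	unsigned long sp;
	unsigned long pc;
};

#define DEFINE(sym, val) \
	asm volatile("\n->" #sym " %0 " #val : : "i" (val))

#define BLANK() asm volatile("\n->" : : )

int main(void)
{
	DEFINE(S_SP_EXAMPLE, offsetof(struct pt_regs_example, sp));
	BLANK();
	DEFINE(S_PC_EXAMPLE, offsetof(struct pt_regs_example, pc));
	return 0;
}

With the layout above, the emitted assembly contains a line like "->S_SP_EXAMPLE $248 ...", which the post-processing step turns into "#define S_SP_EXAMPLE 248".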
33	arch/arm64/kernel/cputable.c	Normal file
@@ -0,0 +1,33 @@
/*
 * arch/arm64/kernel/cputable.c
 *
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/init.h>

#include <asm/cputable.h>

extern unsigned long __cpu_setup(void);

struct cpu_info __initdata cpu_table[] = {
	{
		.cpu_id_val	= 0x000f0000,
		.cpu_id_mask	= 0x000f0000,
		.cpu_name	= "AArch64 Processor",
		.cpu_setup	= __cpu_setup,
	},
	{ /* Empty */ },
};
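cpu_table is matched against the CPU's MIDR_EL1 value elsewhere in this series using the usual val/mask convention: an entry matches when the masked ID equals cpu_id_val, and the empty entry terminates the table, so the values above accept any CPU whose MIDR architecture field (bits [19:16]) reads 0xf. A self-contained illustration of that convention follows; it is not the kernel's lookup code, and the struct, function and MIDR value are made up.

/* Illustration of the cpu_id_val/cpu_id_mask matching convention. */
#include <stdint.h>
#include <stdio.h>

struct cpu_info_example {
	uint32_t cpu_id_val;
	uint32_t cpu_id_mask;
	const char *cpu_name;
};

static const struct cpu_info_example table[] = {
	{ .cpu_id_val = 0x000f0000, .cpu_id_mask = 0x000f0000,
	  .cpu_name = "AArch64 Processor" },
	{ /* empty: terminates the table */ },
};

static const struct cpu_info_example *lookup(uint32_t midr)
{
	const struct cpu_info_example *p;

	for (p = table; p->cpu_name; p++)
		if ((midr & p->cpu_id_mask) == p->cpu_id_val)
			return p;
	return NULL;
}

int main(void)
{
	/* 0x410fd030 is an illustrative MIDR_EL1 value */
	const struct cpu_info_example *info = lookup(0x410fd030);

	printf("%s\n", info ? info->cpu_name : "unknown");
	return 0;
}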
288	arch/arm64/kernel/debug-monitors.c	Normal file
@@ -0,0 +1,288 @@
/*
 * ARMv8 single-step debug support and mdscr context switching.
 *
 * Copyright (C) 2012 ARM Limited
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 *
 * Author: Will Deacon <will.deacon@arm.com>
 */

#include <linux/cpu.h>
#include <linux/debugfs.h>
#include <linux/hardirq.h>
#include <linux/init.h>
#include <linux/ptrace.h>
#include <linux/stat.h>

#include <asm/debug-monitors.h>
#include <asm/local.h>
#include <asm/cputype.h>
#include <asm/system_misc.h>

/* Low-level stepping controls. */
#define DBG_MDSCR_SS		(1 << 0)
#define DBG_SPSR_SS		(1 << 21)

/* MDSCR_EL1 enabling bits */
#define DBG_MDSCR_KDE		(1 << 13)
#define DBG_MDSCR_MDE		(1 << 15)
#define DBG_MDSCR_MASK		~(DBG_MDSCR_KDE | DBG_MDSCR_MDE)

/* Determine debug architecture. */
u8 debug_monitors_arch(void)
{
	return read_cpuid(ID_AA64DFR0_EL1) & 0xf;
}

/*
 * MDSCR access routines.
 */
static void mdscr_write(u32 mdscr)
{
	unsigned long flags;
	local_dbg_save(flags);
	asm volatile("msr mdscr_el1, %0" :: "r" (mdscr));
	local_dbg_restore(flags);
}

static u32 mdscr_read(void)
{
	u32 mdscr;
	asm volatile("mrs %0, mdscr_el1" : "=r" (mdscr));
	return mdscr;
}

/*
 * Allow root to disable self-hosted debug from userspace.
 * This is useful if you want to connect an external JTAG debugger.
 */
static u32 debug_enabled = 1;

static int create_debug_debugfs_entry(void)
{
	debugfs_create_bool("debug_enabled", 0644, NULL, &debug_enabled);
	return 0;
}
fs_initcall(create_debug_debugfs_entry);

static int __init early_debug_disable(char *buf)
{
	debug_enabled = 0;
	return 0;
}

early_param("nodebugmon", early_debug_disable);

/*
 * Keep track of debug users on each core.
 * The ref counts are per-cpu so we use a local_t type.
 */
static DEFINE_PER_CPU(local_t, mde_ref_count);
static DEFINE_PER_CPU(local_t, kde_ref_count);

void enable_debug_monitors(enum debug_el el)
{
	u32 mdscr, enable = 0;

	WARN_ON(preemptible());

	if (local_inc_return(&__get_cpu_var(mde_ref_count)) == 1)
		enable = DBG_MDSCR_MDE;

	if (el == DBG_ACTIVE_EL1 &&
	    local_inc_return(&__get_cpu_var(kde_ref_count)) == 1)
		enable |= DBG_MDSCR_KDE;

	if (enable && debug_enabled) {
		mdscr = mdscr_read();
		mdscr |= enable;
		mdscr_write(mdscr);
	}
}

void disable_debug_monitors(enum debug_el el)
{
	u32 mdscr, disable = 0;

	WARN_ON(preemptible());

	if (local_dec_and_test(&__get_cpu_var(mde_ref_count)))
		disable = ~DBG_MDSCR_MDE;

	if (el == DBG_ACTIVE_EL1 &&
	    local_dec_and_test(&__get_cpu_var(kde_ref_count)))
		disable &= ~DBG_MDSCR_KDE;

	if (disable) {
		mdscr = mdscr_read();
		mdscr &= disable;
		mdscr_write(mdscr);
	}
}

/*
 * OS lock clearing.
 */
static void clear_os_lock(void *unused)
{
	asm volatile("msr mdscr_el1, %0" : : "r" (0));
	isb();
	asm volatile("msr oslar_el1, %0" : : "r" (0));
	isb();
}

static int __cpuinit os_lock_notify(struct notifier_block *self,
				    unsigned long action, void *data)
{
	int cpu = (unsigned long)data;
	if (action == CPU_ONLINE)
		smp_call_function_single(cpu, clear_os_lock, NULL, 1);
	return NOTIFY_OK;
}

static struct notifier_block __cpuinitdata os_lock_nb = {
	.notifier_call = os_lock_notify,
};

static int __cpuinit debug_monitors_init(void)
{
	/* Clear the OS lock. */
	smp_call_function(clear_os_lock, NULL, 1);
	clear_os_lock(NULL);

	/* Register hotplug handler. */
	register_cpu_notifier(&os_lock_nb);
	return 0;
}
postcore_initcall(debug_monitors_init);

/*
 * Single step API and exception handling.
 */
static void set_regs_spsr_ss(struct pt_regs *regs)
{
	unsigned long spsr;

	spsr = regs->pstate;
	spsr &= ~DBG_SPSR_SS;
	spsr |= DBG_SPSR_SS;
	regs->pstate = spsr;
}

static void clear_regs_spsr_ss(struct pt_regs *regs)
{
	unsigned long spsr;

	spsr = regs->pstate;
	spsr &= ~DBG_SPSR_SS;
	regs->pstate = spsr;
}

static int single_step_handler(unsigned long addr, unsigned int esr,
			       struct pt_regs *regs)
{
	siginfo_t info;

	/*
	 * If we are stepping a pending breakpoint, call the hw_breakpoint
	 * handler first.
	 */
	if (!reinstall_suspended_bps(regs))
		return 0;

	if (user_mode(regs)) {
		info.si_signo = SIGTRAP;
		info.si_errno = 0;
		info.si_code = TRAP_HWBKPT;
		info.si_addr = (void __user *)instruction_pointer(regs);
		force_sig_info(SIGTRAP, &info, current);

		/*
		 * ptrace will disable single step unless explicitly
		 * asked to re-enable it. For other clients, it makes
		 * sense to leave it enabled (i.e. rewind the controls
		 * to the active-not-pending state).
		 */
		user_rewind_single_step(current);
	} else {
		/* TODO: route to KGDB */
		pr_warning("Unexpected kernel single-step exception at EL1\n");
		/*
		 * Re-enable stepping since we know that we will be
		 * returning to regs.
		 */
		set_regs_spsr_ss(regs);
	}

	return 0;
}

static int __init single_step_init(void)
{
	hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
			      TRAP_HWBKPT, "single-step handler");
	return 0;
}
arch_initcall(single_step_init);

/* Re-enable single step for syscall restarting. */
void user_rewind_single_step(struct task_struct *task)
{
	/*
	 * If single step is active for this thread, then set SPSR.SS
	 * to 1 to avoid returning to the active-pending state.
	 */
	if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
		set_regs_spsr_ss(task_pt_regs(task));
}

void user_fastforward_single_step(struct task_struct *task)
{
	if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
		clear_regs_spsr_ss(task_pt_regs(task));
}

/* Kernel API */
void kernel_enable_single_step(struct pt_regs *regs)
{
	WARN_ON(!irqs_disabled());
	set_regs_spsr_ss(regs);
	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
	enable_debug_monitors(DBG_ACTIVE_EL1);
}

void kernel_disable_single_step(void)
{
	WARN_ON(!irqs_disabled());
	mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
	disable_debug_monitors(DBG_ACTIVE_EL1);
}

int kernel_active_single_step(void)
{
	WARN_ON(!irqs_disabled());
	return mdscr_read() & DBG_MDSCR_SS;
}

/* ptrace API */
void user_enable_single_step(struct task_struct *task)
{
	set_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
	set_regs_spsr_ss(task_pt_regs(task));
}

void user_disable_single_step(struct task_struct *task)
{
	clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
}
80	arch/arm64/kernel/entry-fpsimd.S	Normal file
@@ -0,0 +1,80 @@
/*
 * FP/SIMD state saving and restoring
 *
 * Copyright (C) 2012 ARM Ltd.
 * Author: Catalin Marinas <catalin.marinas@arm.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/linkage.h>

#include <asm/assembler.h>

/*
 * Save the FP registers.
 *
 * x0 - pointer to struct fpsimd_state
 */
ENTRY(fpsimd_save_state)
	stp	q0, q1, [x0, #16 * 0]
	stp	q2, q3, [x0, #16 * 2]
	stp	q4, q5, [x0, #16 * 4]
	stp	q6, q7, [x0, #16 * 6]
	stp	q8, q9, [x0, #16 * 8]
	stp	q10, q11, [x0, #16 * 10]
	stp	q12, q13, [x0, #16 * 12]
	stp	q14, q15, [x0, #16 * 14]
	stp	q16, q17, [x0, #16 * 16]
	stp	q18, q19, [x0, #16 * 18]
	stp	q20, q21, [x0, #16 * 20]
	stp	q22, q23, [x0, #16 * 22]
	stp	q24, q25, [x0, #16 * 24]
	stp	q26, q27, [x0, #16 * 26]
	stp	q28, q29, [x0, #16 * 28]
	stp	q30, q31, [x0, #16 * 30]!
	mrs	x8, fpsr
	str	w8, [x0, #16 * 2]
	mrs	x8, fpcr
	str	w8, [x0, #16 * 2 + 4]
	ret
ENDPROC(fpsimd_save_state)

/*
 * Load the FP registers.
 *
 * x0 - pointer to struct fpsimd_state
 */
ENTRY(fpsimd_load_state)
	ldp	q0, q1, [x0, #16 * 0]
	ldp	q2, q3, [x0, #16 * 2]
	ldp	q4, q5, [x0, #16 * 4]
	ldp	q6, q7, [x0, #16 * 6]
	ldp	q8, q9, [x0, #16 * 8]
	ldp	q10, q11, [x0, #16 * 10]
	ldp	q12, q13, [x0, #16 * 12]
	ldp	q14, q15, [x0, #16 * 14]
	ldp	q16, q17, [x0, #16 * 16]
	ldp	q18, q19, [x0, #16 * 18]
	ldp	q20, q21, [x0, #16 * 20]
	ldp	q22, q23, [x0, #16 * 22]
	ldp	q24, q25, [x0, #16 * 24]
	ldp	q26, q27, [x0, #16 * 26]
	ldp	q28, q29, [x0, #16 * 28]
	ldp	q30, q31, [x0, #16 * 30]!
	ldr	w8, [x0, #16 * 2]
	ldr	w9, [x0, #16 * 2 + 4]
	msr	fpsr, x8
	msr	fpcr, x9
	ret
ENDPROC(fpsimd_load_state)
695	arch/arm64/kernel/entry.S	Normal file
@@ -0,0 +1,695 @@
|
||||
/*
|
||||
* Low-level exception handling code
|
||||
*
|
||||
* Copyright (C) 2012 ARM Ltd.
|
||||
* Authors: Catalin Marinas <catalin.marinas@arm.com>
|
||||
* Will Deacon <will.deacon@arm.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include <linux/init.h>
|
||||
#include <linux/linkage.h>
|
||||
|
||||
#include <asm/assembler.h>
|
||||
#include <asm/asm-offsets.h>
|
||||
#include <asm/errno.h>
|
||||
#include <asm/thread_info.h>
|
||||
#include <asm/unistd.h>
|
||||
|
||||
/*
|
||||
* Bad Abort numbers
|
||||
*-----------------
|
||||
*/
|
||||
#define BAD_SYNC 0
|
||||
#define BAD_IRQ 1
|
||||
#define BAD_FIQ 2
|
||||
#define BAD_ERROR 3
|
||||
|
||||
.macro kernel_entry, el, regsize = 64
|
||||
sub sp, sp, #S_FRAME_SIZE - S_LR // room for LR, SP, SPSR, ELR
|
||||
.if \regsize == 32
|
||||
mov w0, w0 // zero upper 32 bits of x0
|
||||
.endif
|
||||
push x28, x29
|
||||
push x26, x27
|
||||
push x24, x25
|
||||
push x22, x23
|
||||
push x20, x21
|
||||
push x18, x19
|
||||
push x16, x17
|
||||
push x14, x15
|
||||
push x12, x13
|
||||
push x10, x11
|
||||
push x8, x9
|
||||
push x6, x7
|
||||
push x4, x5
|
||||
push x2, x3
|
||||
push x0, x1
|
||||
.if \el == 0
|
||||
mrs x21, sp_el0
|
||||
.else
|
||||
add x21, sp, #S_FRAME_SIZE
|
||||
.endif
|
||||
mrs x22, elr_el1
|
||||
mrs x23, spsr_el1
|
||||
stp lr, x21, [sp, #S_LR]
|
||||
stp x22, x23, [sp, #S_PC]
|
||||
|
||||
/*
|
||||
* Set syscallno to -1 by default (overridden later if real syscall).
|
||||
*/
|
||||
.if \el == 0
|
||||
mvn x21, xzr
|
||||
str x21, [sp, #S_SYSCALLNO]
|
||||
.endif
|
||||
|
||||
/*
|
||||
* Registers that may be useful after this macro is invoked:
|
||||
*
|
||||
* x21 - aborted SP
|
||||
* x22 - aborted PC
|
||||
* x23 - aborted PSTATE
|
||||
*/
|
||||
.endm
|
||||
|
||||
.macro kernel_exit, el, ret = 0
|
||||
ldp x21, x22, [sp, #S_PC] // load ELR, SPSR
|
||||
.if \el == 0
|
||||
ldr x23, [sp, #S_SP] // load return stack pointer
|
||||
.endif
|
||||
.if \ret
|
||||
ldr x1, [sp, #S_X1] // preserve x0 (syscall return)
|
||||
add sp, sp, S_X2
|
||||
.else
|
||||
pop x0, x1
|
||||
.endif
|
||||
pop x2, x3 // load the rest of the registers
|
||||
pop x4, x5
|
||||
pop x6, x7
|
||||
pop x8, x9
|
||||
msr elr_el1, x21 // set up the return data
|
||||
msr spsr_el1, x22
|
||||
.if \el == 0
|
||||
msr sp_el0, x23
|
||||
.endif
|
||||
pop x10, x11
|
||||
pop x12, x13
|
||||
pop x14, x15
|
||||
pop x16, x17
|
||||
pop x18, x19
|
||||
pop x20, x21
|
||||
pop x22, x23
|
||||
pop x24, x25
|
||||
pop x26, x27
|
||||
pop x28, x29
|
||||
ldr lr, [sp], #S_FRAME_SIZE - S_LR // load LR and restore SP
|
||||
eret // return to kernel
|
||||
.endm
|
||||
|
||||
.macro get_thread_info, rd
|
||||
mov \rd, sp
|
||||
and \rd, \rd, #~((1 << 13) - 1) // top of 8K stack
|
||||
.endm
|
||||
|
||||
/*
|
||||
* These are the registers used in the syscall handler, and allow us to
|
||||
* have in theory up to 7 arguments to a function - x0 to x6.
|
||||
*
|
||||
* x7 is reserved for the system call number in 32-bit mode.
|
||||
*/
|
||||
sc_nr .req x25 // number of system calls
|
||||
scno .req x26 // syscall number
|
||||
stbl .req x27 // syscall table pointer
|
||||
tsk .req x28 // current thread_info
|
||||
|
||||
/*
|
||||
* Interrupt handling.
|
||||
*/
|
||||
.macro irq_handler
|
||||
ldr x1, handle_arch_irq
|
||||
mov x0, sp
|
||||
blr x1
|
||||
.endm
|
||||
|
||||
.text
|
||||
|
||||
/*
|
||||
* Exception vectors.
|
||||
*/
|
||||
.macro ventry label
|
||||
.align 7
|
||||
b \label
|
||||
.endm
|
||||
|
||||
.align 11
|
||||
ENTRY(vectors)
|
||||
ventry el1_sync_invalid // Synchronous EL1t
|
||||
ventry el1_irq_invalid // IRQ EL1t
|
||||
ventry el1_fiq_invalid // FIQ EL1t
|
||||
ventry el1_error_invalid // Error EL1t
|
||||
|
||||
ventry el1_sync // Synchronous EL1h
|
||||
ventry el1_irq // IRQ EL1h
|
||||
ventry el1_fiq_invalid // FIQ EL1h
|
||||
ventry el1_error_invalid // Error EL1h
|
||||
|
||||
ventry el0_sync // Synchronous 64-bit EL0
|
||||
ventry el0_irq // IRQ 64-bit EL0
|
||||
ventry el0_fiq_invalid // FIQ 64-bit EL0
|
||||
ventry el0_error_invalid // Error 64-bit EL0
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
ventry el0_sync_compat // Synchronous 32-bit EL0
|
||||
ventry el0_irq_compat // IRQ 32-bit EL0
|
||||
ventry el0_fiq_invalid_compat // FIQ 32-bit EL0
|
||||
ventry el0_error_invalid_compat // Error 32-bit EL0
|
||||
#else
|
||||
ventry el0_sync_invalid // Synchronous 32-bit EL0
|
||||
ventry el0_irq_invalid // IRQ 32-bit EL0
|
||||
ventry el0_fiq_invalid // FIQ 32-bit EL0
|
||||
ventry el0_error_invalid // Error 32-bit EL0
|
||||
#endif
|
||||
END(vectors)
|
||||
|
||||
/*
|
||||
* Invalid mode handlers
|
||||
*/
|
||||
.macro inv_entry, el, reason, regsize = 64
|
||||
kernel_entry el, \regsize
|
||||
mov x0, sp
|
||||
mov x1, #\reason
|
||||
mrs x2, esr_el1
|
||||
b bad_mode
|
||||
.endm
|
||||
|
||||
el0_sync_invalid:
|
||||
inv_entry 0, BAD_SYNC
|
||||
ENDPROC(el0_sync_invalid)
|
||||
|
||||
el0_irq_invalid:
|
||||
inv_entry 0, BAD_IRQ
|
||||
ENDPROC(el0_irq_invalid)
|
||||
|
||||
el0_fiq_invalid:
|
||||
inv_entry 0, BAD_FIQ
|
||||
ENDPROC(el0_fiq_invalid)
|
||||
|
||||
el0_error_invalid:
|
||||
inv_entry 0, BAD_ERROR
|
||||
ENDPROC(el0_error_invalid)
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
el0_fiq_invalid_compat:
|
||||
inv_entry 0, BAD_FIQ, 32
|
||||
ENDPROC(el0_fiq_invalid_compat)
|
||||
|
||||
el0_error_invalid_compat:
|
||||
inv_entry 0, BAD_ERROR, 32
|
||||
ENDPROC(el0_error_invalid_compat)
|
||||
#endif
|
||||
|
||||
el1_sync_invalid:
|
||||
inv_entry 1, BAD_SYNC
|
||||
ENDPROC(el1_sync_invalid)
|
||||
|
||||
el1_irq_invalid:
|
||||
inv_entry 1, BAD_IRQ
|
||||
ENDPROC(el1_irq_invalid)
|
||||
|
||||
el1_fiq_invalid:
|
||||
inv_entry 1, BAD_FIQ
|
||||
ENDPROC(el1_fiq_invalid)
|
||||
|
||||
el1_error_invalid:
|
||||
inv_entry 1, BAD_ERROR
|
||||
ENDPROC(el1_error_invalid)
|
||||
|
||||
/*
|
||||
* EL1 mode handlers.
|
||||
*/
|
||||
.align 6
|
||||
el1_sync:
|
||||
kernel_entry 1
|
||||
mrs x1, esr_el1 // read the syndrome register
|
||||
lsr x24, x1, #26 // exception class
|
||||
cmp x24, #0x25 // data abort in EL1
|
||||
b.eq el1_da
|
||||
cmp x24, #0x18 // configurable trap
|
||||
b.eq el1_undef
|
||||
cmp x24, #0x26 // stack alignment exception
|
||||
b.eq el1_sp_pc
|
||||
cmp x24, #0x22 // pc alignment exception
|
||||
b.eq el1_sp_pc
|
||||
cmp x24, #0x00 // unknown exception in EL1
|
||||
b.eq el1_undef
|
||||
cmp x24, #0x30 // debug exception in EL1
|
||||
b.ge el1_dbg
|
||||
b el1_inv
|
||||
el1_da:
|
||||
/*
|
||||
* Data abort handling
|
||||
*/
|
||||
mrs x0, far_el1
|
||||
enable_dbg_if_not_stepping x2
|
||||
// re-enable interrupts if they were enabled in the aborted context
|
||||
tbnz x23, #7, 1f // PSR_I_BIT
|
||||
enable_irq
|
||||
1:
|
||||
mov x2, sp // struct pt_regs
|
||||
bl do_mem_abort
|
||||
|
||||
// disable interrupts before pulling preserved data off the stack
|
||||
disable_irq
|
||||
kernel_exit 1
|
||||
el1_sp_pc:
|
||||
/*
|
||||
* Stack or PC alignment exception handling
|
||||
*/
|
||||
mrs x0, far_el1
|
||||
mov x1, x25
|
||||
mov x2, sp
|
||||
b do_sp_pc_abort
|
||||
el1_undef:
|
||||
/*
|
||||
* Undefined instruction
|
||||
*/
|
||||
mov x0, sp
|
||||
b do_undefinstr
|
||||
el1_dbg:
|
||||
/*
|
||||
* Debug exception handling
|
||||
*/
|
||||
tbz x24, #0, el1_inv // EL1 only
|
||||
mrs x0, far_el1
|
||||
mov x2, sp // struct pt_regs
|
||||
bl do_debug_exception
|
||||
|
||||
kernel_exit 1
|
||||
el1_inv:
|
||||
// TODO: add support for undefined instructions in kernel mode
|
||||
mov x0, sp
|
||||
mov x1, #BAD_SYNC
|
||||
mrs x2, esr_el1
|
||||
b bad_mode
|
||||
ENDPROC(el1_sync)
|
||||
|
||||
.align 6
|
||||
el1_irq:
|
||||
kernel_entry 1
|
||||
enable_dbg_if_not_stepping x0
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
#ifdef CONFIG_PREEMPT
|
||||
get_thread_info tsk
|
||||
ldr x24, [tsk, #TI_PREEMPT] // get preempt count
|
||||
add x0, x24, #1 // increment it
|
||||
str x0, [tsk, #TI_PREEMPT]
|
||||
#endif
|
||||
irq_handler
|
||||
#ifdef CONFIG_PREEMPT
|
||||
str x24, [tsk, #TI_PREEMPT] // restore preempt count
|
||||
cbnz x24, 1f // preempt count != 0
|
||||
ldr x0, [tsk, #TI_FLAGS] // get flags
|
||||
tbz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling?
|
||||
bl el1_preempt
|
||||
1:
|
||||
#endif
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
bl trace_hardirqs_on
|
||||
#endif
|
||||
kernel_exit 1
|
||||
ENDPROC(el1_irq)
|
||||
|
||||
#ifdef CONFIG_PREEMPT
|
||||
el1_preempt:
|
||||
mov x24, lr
|
||||
1: enable_dbg
|
||||
bl preempt_schedule_irq // irq en/disable is done inside
|
||||
ldr x0, [tsk, #TI_FLAGS] // get new tasks TI_FLAGS
|
||||
tbnz x0, #TIF_NEED_RESCHED, 1b // needs rescheduling?
|
||||
ret x24
|
||||
#endif
|
||||
|
||||
/*
|
||||
* EL0 mode handlers.
|
||||
*/
|
||||
.align 6
|
||||
el0_sync:
|
||||
kernel_entry 0
|
||||
mrs x25, esr_el1 // read the syndrome register
|
||||
lsr x24, x25, #26 // exception class
|
||||
cmp x24, #0x15 // SVC in 64-bit state
|
||||
b.eq el0_svc
|
||||
adr lr, ret_from_exception
|
||||
cmp x24, #0x24 // data abort in EL0
|
||||
b.eq el0_da
|
||||
cmp x24, #0x20 // instruction abort in EL0
|
||||
b.eq el0_ia
|
||||
cmp x24, #0x07 // FP/ASIMD access
|
||||
b.eq el0_fpsimd_acc
|
||||
cmp x24, #0x2c // FP/ASIMD exception
|
||||
b.eq el0_fpsimd_exc
|
||||
cmp x24, #0x18 // configurable trap
|
||||
b.eq el0_undef
|
||||
cmp x24, #0x26 // stack alignment exception
|
||||
b.eq el0_sp_pc
|
||||
cmp x24, #0x22 // pc alignment exception
|
||||
b.eq el0_sp_pc
|
||||
cmp x24, #0x00 // unknown exception in EL0
|
||||
b.eq el0_undef
|
||||
cmp x24, #0x30 // debug exception in EL0
|
||||
b.ge el0_dbg
|
||||
b el0_inv
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
.align 6
|
||||
el0_sync_compat:
|
||||
kernel_entry 0, 32
|
||||
mrs x25, esr_el1 // read the syndrome register
|
||||
lsr x24, x25, #26 // exception class
|
||||
cmp x24, #0x11 // SVC in 32-bit state
|
||||
b.eq el0_svc_compat
|
||||
adr lr, ret_from_exception
|
||||
cmp x24, #0x24 // data abort in EL0
|
||||
b.eq el0_da
|
||||
cmp x24, #0x20 // instruction abort in EL0
|
||||
b.eq el0_ia
|
||||
cmp x24, #0x07 // FP/ASIMD access
|
||||
b.eq el0_fpsimd_acc
|
||||
cmp x24, #0x28 // FP/ASIMD exception
|
||||
b.eq el0_fpsimd_exc
|
||||
cmp x24, #0x00 // unknown exception in EL0
|
||||
b.eq el0_undef
|
||||
cmp x24, #0x30 // debug exception in EL0
|
||||
b.ge el0_dbg
|
||||
b el0_inv
|
||||
el0_svc_compat:
|
||||
/*
|
||||
* AArch32 syscall handling
|
||||
*/
|
||||
adr stbl, compat_sys_call_table // load compat syscall table pointer
|
||||
uxtw scno, w7 // syscall number in w7 (r7)
|
||||
mov sc_nr, #__NR_compat_syscalls
|
||||
b el0_svc_naked
|
||||
|
||||
.align 6
|
||||
el0_irq_compat:
|
||||
kernel_entry 0, 32
|
||||
b el0_irq_naked
|
||||
#endif
|
||||
|
||||
el0_da:
|
||||
/*
|
||||
* Data abort handling
|
||||
*/
|
||||
mrs x0, far_el1
|
||||
disable_step x1
|
||||
isb
|
||||
enable_dbg
|
||||
// enable interrupts before calling the main handler
|
||||
enable_irq
|
||||
mov x1, x25
|
||||
mov x2, sp
|
||||
b do_mem_abort
|
||||
el0_ia:
|
||||
/*
|
||||
* Instruction abort handling
|
||||
*/
|
||||
mrs x0, far_el1
|
||||
disable_step x1
|
||||
isb
|
||||
enable_dbg
|
||||
// enable interrupts before calling the main handler
|
||||
enable_irq
|
||||
orr x1, x25, #1 << 24 // use reserved ISS bit for instruction aborts
|
||||
mov x2, sp
|
||||
b do_mem_abort
|
||||
el0_fpsimd_acc:
|
||||
/*
|
||||
* Floating Point or Advanced SIMD access
|
||||
*/
|
||||
mov x0, x25
|
||||
mov x1, sp
|
||||
b do_fpsimd_acc
|
||||
el0_fpsimd_exc:
|
||||
/*
|
||||
* Floating Point or Advanced SIMD exception
|
||||
*/
|
||||
mov x0, x25
|
||||
mov x1, sp
|
||||
b do_fpsimd_exc
|
||||
el0_sp_pc:
|
||||
/*
|
||||
* Stack or PC alignment exception handling
|
||||
*/
|
||||
mrs x0, far_el1
|
||||
disable_step x1
|
||||
isb
|
||||
enable_dbg
|
||||
// enable interrupts before calling the main handler
|
||||
enable_irq
|
||||
mov x1, x25
|
||||
mov x2, sp
|
||||
b do_sp_pc_abort
|
||||
el0_undef:
|
||||
/*
|
||||
* Undefined instruction
|
||||
*/
|
||||
mov x0, sp
|
||||
b do_undefinstr
|
||||
el0_dbg:
|
||||
/*
|
||||
* Debug exception handling
|
||||
*/
|
||||
tbnz x24, #0, el0_inv // EL0 only
|
||||
mrs x0, far_el1
|
||||
disable_step x1
|
||||
mov x1, x25
|
||||
mov x2, sp
|
||||
b do_debug_exception
|
||||
el0_inv:
|
||||
mov x0, sp
|
||||
mov x1, #BAD_SYNC
|
||||
mrs x2, esr_el1
|
||||
b bad_mode
|
||||
ENDPROC(el0_sync)
|
||||
|
||||
.align 6
|
||||
el0_irq:
|
||||
kernel_entry 0
|
||||
el0_irq_naked:
|
||||
disable_step x1
|
||||
isb
|
||||
enable_dbg
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
get_thread_info tsk
|
||||
#ifdef CONFIG_PREEMPT
|
||||
ldr x24, [tsk, #TI_PREEMPT] // get preempt count
|
||||
add x23, x24, #1 // increment it
|
||||
str x23, [tsk, #TI_PREEMPT]
|
||||
#endif
|
||||
irq_handler
|
||||
#ifdef CONFIG_PREEMPT
|
||||
ldr x0, [tsk, #TI_PREEMPT]
|
||||
str x24, [tsk, #TI_PREEMPT]
|
||||
cmp x0, x23
|
||||
b.eq 1f
|
||||
mov x1, #0
|
||||
str x1, [x1] // BUG
|
||||
1:
|
||||
#endif
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
bl trace_hardirqs_on
|
||||
#endif
|
||||
b ret_to_user
|
||||
ENDPROC(el0_irq)
|
||||
|
||||
/*
|
||||
* This is the return code to user mode for abort handlers
|
||||
*/
|
||||
ret_from_exception:
|
||||
get_thread_info tsk
|
||||
b ret_to_user
|
||||
ENDPROC(ret_from_exception)
|
||||
|
||||
/*
|
||||
* Register switch for AArch64. The callee-saved registers need to be saved
|
||||
* and restored. On entry:
|
||||
* x0 = previous task_struct (must be preserved across the switch)
|
||||
* x1 = next task_struct
|
||||
* Previous and next are guaranteed not to be the same.
|
||||
*
|
||||
*/
|
||||
ENTRY(cpu_switch_to)
|
||||
add x8, x0, #THREAD_CPU_CONTEXT
|
||||
mov x9, sp
|
||||
stp x19, x20, [x8], #16 // store callee-saved registers
|
||||
stp x21, x22, [x8], #16
|
||||
stp x23, x24, [x8], #16
|
||||
stp x25, x26, [x8], #16
|
||||
stp x27, x28, [x8], #16
|
||||
stp x29, x9, [x8], #16
|
||||
str lr, [x8]
|
||||
add x8, x1, #THREAD_CPU_CONTEXT
|
||||
ldp x19, x20, [x8], #16 // restore callee-saved registers
|
||||
ldp x21, x22, [x8], #16
|
||||
ldp x23, x24, [x8], #16
|
||||
ldp x25, x26, [x8], #16
|
||||
ldp x27, x28, [x8], #16
|
||||
ldp x29, x9, [x8], #16
|
||||
ldr lr, [x8]
|
||||
mov sp, x9
|
||||
ret
|
||||
ENDPROC(cpu_switch_to)
|
||||
|
||||
/*
|
||||
* This is the fast syscall return path. We do as little as possible here,
|
||||
* and this includes saving x0 back into the kernel stack.
|
||||
*/
|
||||
ret_fast_syscall:
|
||||
disable_irq // disable interrupts
|
||||
ldr x1, [tsk, #TI_FLAGS]
|
||||
and x2, x1, #_TIF_WORK_MASK
|
||||
cbnz x2, fast_work_pending
|
||||
tbz x1, #TIF_SINGLESTEP, fast_exit
|
||||
disable_dbg
|
||||
enable_step x2
|
||||
fast_exit:
|
||||
kernel_exit 0, ret = 1
|
||||
|
||||
/*
|
||||
* Ok, we need to do extra processing, enter the slow path.
|
||||
*/
|
||||
fast_work_pending:
|
||||
str x0, [sp, #S_X0] // returned x0
|
||||
work_pending:
|
||||
tbnz x1, #TIF_NEED_RESCHED, work_resched
|
||||
/* TIF_SIGPENDING or TIF_NOTIFY_RESUME case */
|
||||
ldr x2, [sp, #S_PSTATE]
|
||||
mov x0, sp // 'regs'
|
||||
tst x2, #PSR_MODE_MASK // user mode regs?
|
||||
b.ne no_work_pending // returning to kernel
|
||||
bl do_notify_resume
|
||||
b ret_to_user
|
||||
work_resched:
|
||||
enable_dbg
|
||||
bl schedule
|
||||
|
||||
/*
|
||||
* "slow" syscall return path.
|
||||
*/
|
||||
ENTRY(ret_to_user)
|
||||
disable_irq // disable interrupts
|
||||
ldr x1, [tsk, #TI_FLAGS]
|
||||
and x2, x1, #_TIF_WORK_MASK
|
||||
cbnz x2, work_pending
|
||||
tbz x1, #TIF_SINGLESTEP, no_work_pending
|
||||
disable_dbg
|
||||
enable_step x2
|
||||
no_work_pending:
|
||||
kernel_exit 0, ret = 0
|
||||
ENDPROC(ret_to_user)
|
||||
|
||||
/*
|
||||
* This is how we return from a fork.
|
||||
*/
|
||||
ENTRY(ret_from_fork)
|
||||
bl schedule_tail
|
||||
get_thread_info tsk
|
||||
b ret_to_user
|
||||
ENDPROC(ret_from_fork)
|
||||
|
||||
/*
|
||||
* SVC handler.
|
||||
*/
|
||||
.align 6
|
||||
el0_svc:
|
||||
adrp stbl, sys_call_table // load syscall table pointer
|
||||
uxtw scno, w8 // syscall number in w8
|
||||
mov sc_nr, #__NR_syscalls
|
||||
el0_svc_naked: // compat entry point
|
||||
stp x0, scno, [sp, #S_ORIG_X0] // save the original x0 and syscall number
|
||||
disable_step x16
|
||||
isb
|
||||
enable_dbg
|
||||
enable_irq
|
||||
|
||||
get_thread_info tsk
|
||||
ldr x16, [tsk, #TI_FLAGS] // check for syscall tracing
|
||||
tbnz x16, #TIF_SYSCALL_TRACE, __sys_trace // are we tracing syscalls?
|
||||
adr lr, ret_fast_syscall // return address
|
||||
cmp scno, sc_nr // check upper syscall limit
|
||||
b.hs ni_sys
|
||||
ldr x16, [stbl, scno, lsl #3] // address in the syscall table
|
||||
br x16 // call sys_* routine
|
||||
ni_sys:
|
||||
mov x0, sp
|
||||
b do_ni_syscall
|
||||
ENDPROC(el0_svc)
|
||||
|
||||
/*
|
||||
* This is the really slow path. We're going to be doing context
|
||||
* switches, and waiting for our parent to respond.
|
||||
*/
|
||||
__sys_trace:
|
||||
mov x1, sp
|
||||
mov w0, #0 // trace entry
|
||||
bl syscall_trace
|
||||
adr lr, __sys_trace_return // return address
|
||||
uxtw scno, w0 // syscall number (possibly new)
|
||||
mov x1, sp // pointer to regs
|
||||
cmp scno, sc_nr // check upper syscall limit
|
||||
b.hs ni_sys
|
||||
ldp x0, x1, [sp] // restore the syscall args
|
||||
ldp x2, x3, [sp, #S_X2]
|
||||
ldp x4, x5, [sp, #S_X4]
|
||||
ldp x6, x7, [sp, #S_X6]
|
||||
ldr x16, [stbl, scno, lsl #3] // address in the syscall table
|
||||
br x16 // call sys_* routine
|
||||
|
||||
__sys_trace_return:
|
||||
str x0, [sp] // save returned x0
|
||||
mov x1, sp
|
||||
mov w0, #1 // trace exit
|
||||
bl syscall_trace
|
||||
b ret_to_user
|
||||
|
||||
/*
|
||||
* Special system call wrappers.
|
||||
*/
|
||||
ENTRY(sys_execve_wrapper)
|
||||
mov x3, sp
|
||||
b sys_execve
|
||||
ENDPROC(sys_execve_wrapper)
|
||||
|
||||
ENTRY(sys_clone_wrapper)
|
||||
mov x5, sp
|
||||
b sys_clone
|
||||
ENDPROC(sys_clone_wrapper)
|
||||
|
||||
ENTRY(sys_rt_sigreturn_wrapper)
|
||||
mov x0, sp
|
||||
b sys_rt_sigreturn
|
||||
ENDPROC(sys_rt_sigreturn_wrapper)
|
||||
|
||||
ENTRY(sys_sigaltstack_wrapper)
|
||||
ldr x2, [sp, #S_SP]
|
||||
b sys_sigaltstack
|
||||
ENDPROC(sys_sigaltstack_wrapper)
|
||||
|
||||
ENTRY(handle_arch_irq)
|
||||
.quad 0
|
106	arch/arm64/kernel/fpsimd.c	Normal file
@@ -0,0 +1,106 @@
/*
 * FP/SIMD context switching and fault handling
 *
 * Copyright (C) 2012 ARM Ltd.
 * Author: Catalin Marinas <catalin.marinas@arm.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/signal.h>

#include <asm/fpsimd.h>
#include <asm/cputype.h>

#define FPEXC_IOF	(1 << 0)
#define FPEXC_DZF	(1 << 1)
#define FPEXC_OFF	(1 << 2)
#define FPEXC_UFF	(1 << 3)
#define FPEXC_IXF	(1 << 4)
#define FPEXC_IDF	(1 << 7)

/*
 * Trapped FP/ASIMD access.
 */
void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
{
	/* TODO: implement lazy context saving/restoring */
	WARN_ON(1);
}

/*
 * Raise a SIGFPE for the current process.
 */
void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
{
	siginfo_t info;
	unsigned int si_code = 0;

	if (esr & FPEXC_IOF)
		si_code = FPE_FLTINV;
	else if (esr & FPEXC_DZF)
		si_code = FPE_FLTDIV;
	else if (esr & FPEXC_OFF)
		si_code = FPE_FLTOVF;
	else if (esr & FPEXC_UFF)
		si_code = FPE_FLTUND;
	else if (esr & FPEXC_IXF)
		si_code = FPE_FLTRES;

	memset(&info, 0, sizeof(info));
	info.si_signo = SIGFPE;
	info.si_code = si_code;
	info.si_addr = (void __user *)instruction_pointer(regs);

	send_sig_info(SIGFPE, &info, current);
}

void fpsimd_thread_switch(struct task_struct *next)
{
	/* check if not kernel threads */
	if (current->mm)
		fpsimd_save_state(&current->thread.fpsimd_state);
	if (next->mm)
		fpsimd_load_state(&next->thread.fpsimd_state);
}

void fpsimd_flush_thread(void)
{
	memset(&current->thread.fpsimd_state, 0, sizeof(struct fpsimd_state));
	fpsimd_load_state(&current->thread.fpsimd_state);
}

/*
 * FP/SIMD support code initialisation.
 */
static int __init fpsimd_init(void)
{
	u64 pfr = read_cpuid(ID_AA64PFR0_EL1);

	if (pfr & (0xf << 16)) {
		pr_notice("Floating-point is not implemented\n");
		return 0;
	}
	elf_hwcap |= HWCAP_FP;

	if (pfr & (0xf << 20))
		pr_notice("Advanced SIMD is not implemented\n");
	else
		elf_hwcap |= HWCAP_ASIMD;

	return 0;
}
late_initcall(fpsimd_init);
Some files were not shown because too many files have changed in this diff.