The toptree algorithm uses the physical core ids to create a mapping
between cores and nodes (the to_node_id array within the emu_cores
structure). The core ids are used as an index into an array whose size
depends on CONFIG_NR_CPUS. If a physical core id exceeds the array
bounds, this results in out-of-bounds write accesses.
Generate logical core ids instead to avoid this.
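A minimal sketch of the idea, not the actual toptree code; the helper
name and the bookkeeping arrays are illustrative only:
/* Hand out small, dense logical core ids so they can safely index arrays
 * sized by CONFIG_NR_CPUS, no matter how large the physical core ids
 * reported by the machine are. */
static int emu_nr_cores;		/* logical ids handed out so far */
static int core_phys_id[NR_CPUS];	/* logical id -> physical id */

static int logical_core_id(int physical_id)
{
	int i;

	/* reuse the logical id if this physical core was seen before */
	for (i = 0; i < emu_nr_cores; i++)
		if (core_phys_id[i] == physical_id)
			return i;
	/* otherwise hand out the next free dense id */
	core_phys_id[emu_nr_cores] = physical_id;
	return emu_nr_cores++;
}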
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The ptff() and clear_table() functions use the gcc extension "variable
length arrays in structures" (VLAIS) to describe the clobbered memory
area in their inline assembly constraints. This extension will most
likely never be supported by LLVM/Clang.
Since currently BPF programs are compiled with LLVM, this leads to the
following compile errors:
$ cd samples/bpf
$ make
In file included from /root/linux-master/samples/bpf/tracex1_kern.c:8:
In file included from ./include/linux/netdevice.h:44:
...
In file included from ./arch/s390/include/asm/mmu_context.h:10:
./arch/s390/include/asm/pgalloc.h:30:24: error: fields must have a
constant size: 'variable length array in structure' extension will never
be supported
typedef struct { char _[n]; } addrtype;
In file included from /root/linux-master/samples/bpf/tracex1_kern.c:7:
In file included from ./include/linux/skbuff.h:18:
...
In file included from ./include/linux/jiffies.h:8:
In file included from ./include/linux/timex.h:65:
./arch/s390/include/asm/timex.h:105:24: error: fields must have a
constant size: 'variable length array in structure' extension will never
be supported
typedef struct { char _[len]; } addrtype;
To fix this, do the following:
- Convert ptff() into a macro that uses a fixed size array when expanded.
- Convert the clear_table() function to use an inline assembly with a
  fixed size array in a loop.
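The following is a minimal sketch of the macro trick only, not the actual
ptff()/clear_table() code; the macro name and the empty inline assembly
are made up. Because the length argument is a compile-time constant at
every expansion site, the structure gets a fixed size and Clang no longer
rejects it:
/* Illustrative only: declare that "len" bytes starting at "ptr" are read
 * and written by the inline assembly. With a constant "len" at the
 * expansion site, addrtype has a fixed size instead of being a VLAIS. */
#define clobber_area(ptr, len)						\
({									\
	typedef struct { char _[len]; } addrtype;			\
									\
	asm volatile("" : "+m" (*(addrtype *)(ptr)) : : "memory");	\
})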
The runtime performance of the new version is even better than
the old version (tested with EC12/z13 and gcc 4.8.5/6.2.1 with
"-march=z196 -O2").
Reported-by: Zvonko Kosic <zvonko.kosic@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
For system damage machine checks or machine checks due to invalid PSW
fields the system will be stopped. In order to get an oops message out
before killing the system the machine check handler branches to
.Lmcck_panic, switches to the panic stack and then does the usual
machine check handling.
The switch to the panic stack is incomplete, the stack pointer in %r15
is replaced, but the pt_regs pointer in %r11 is not. The result is
a program check which will kill the system in a slightly different way.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When converting from bootmem to memblock I missed a subtle difference:
the memblock_alloc() functions return uninitialized memory, while the
memblock_virt_alloc() functions return zeroed memory.
This led to quite random early boot crashes.
Therefore use the correct version everywhere now.
Hopefully.
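For illustration only (size, align and table are placeholders), the
zeroed variant at a call site:
/* memblock_virt_alloc() returns zeroed, directly usable memory */
table = memblock_virt_alloc(size, align);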
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When re-adding crash kernel memory within setup_resources() the
function memblock_add() is used. That function will by default add memory
to node "MAX_NUMNODES" instead of to node 0, which is what the memory
detection code does. In the !NUMA case this triggers the following warning
when the kernel generates the vmemmap:
Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead
WARNING: CPU: 0 PID: 0 at mm/memblock.c:1261 memblock_virt_alloc_internal+0x76/0x220
CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc6 #16
Call Trace:
[<0000000000d0b2e8>] memblock_virt_alloc_try_nid+0x88/0xc8
[<000000000083c8ea>] __earlyonly_bootmem_alloc.constprop.1+0x42/0x50
[<000000000083e7f4>] vmemmap_populate+0x1ac/0x1e0
[<0000000000840136>] sparse_mem_map_populate+0x46/0x68
[<0000000000d0c59c>] sparse_init+0x184/0x238
[<0000000000cf45f6>] paging_init+0xbe/0xf8
[<0000000000cf1d4a>] setup_arch+0xa02/0xae0
[<0000000000ced75a>] start_kernel+0x72/0x450
[<0000000000100020>] _stext+0x20/0x80
If NUMA is selected numa_setup_memory() will fix the node assignments
before the vmemmap will be populated; so this warning will only appear
if NUMA is not selected.
To fix this simply use memblock_add_node() and re-add crash kernel
memory explicitly to node 0.
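A sketch of the resulting call in setup_resources(), assuming the crash
kernel range is described by the usual crashk_res resource:
/* re-add the crash kernel memory explicitly to node 0 instead of the
 * MAX_NUMNODES default that memblock_add() would use */
memblock_add_node(crashk_res.start, resource_size(&crashk_res), 0);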
Reported-and-tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: 4e042af463 ("s390/kexec: fix crash on resize of reserved memory")
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of all remaining alloc_bootmem calls and use memblock_alloc
instead everywhere. This way we get rid of the inconsistent mixture
of alloc_bootmem and memblock_alloc usages.
Two of the alloc_bootmem_low calls within arch/s390/kernel/setup.c are
replaced with memblock_alloc calls that don't enforce that the
allocated memory is below 2GB. This restriction was never necessary.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The LAST_BREAK macro in entry.S uses a different instruction sequence
for CONFIG_MARCH_Z900 builds. The branch target offset to skip the
store of the last breaking event address needs to take the different
length of the code block into account.
Fixes: f8fc82b471 ("s390: move sys_call_table and last_break from thread_info to thread_struct")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use UIDs as domain numbers if the UID checking rules apply (in this
case the FW guarantees uniqueness of these values).
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Enable the contiguous memory allocator but set the default size to
zero. If somebody wants to use the cma allocator the "cma=" kernel
parameter has to be used.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
In order to make the cma infrastructure usable we need to add a small
architecture backend which calls dma_contiguous_reserve.
Otherwise we would end up with the cma allocator enabled, but no pool
where memory can be allocated from.
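A minimal sketch of such a backend, assuming it is called from the early
memory setup path; the function name and the limit passed to
dma_contiguous_reserve() are assumptions, not the exact s390 code:
#include <linux/dma-contiguous.h>

static void __init reserve_cma_pool(void)
{
	/* reserve the CMA area before the page allocator takes over;
	 * using memory_end as the upper limit is an assumption */
	dma_contiguous_reserve(memory_end);
}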
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use the psw_bits macro and simplify the code. The generated code is
also better since it doesn't contain any conditional branches anymore.
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We have the s390 specific THREAD_ORDER define and the THREAD_SIZE_ORDER
define which is also used in common code. Both have exactly the same
semantics. Therefore get rid of THREAD_ORDER and always use
THREAD_SIZE_ORDER instead.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
For an unknown (historic) reason the s390 specific implementation of
set_fs returns whatever __ctl_load returns. The set_fs macro, however,
is supposed to return void.
Change the macro to do that.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of a useless memset from dma_alloc. Users of dma_alloc who want
zero initialized memory can get it by specifying __GFP_ZERO or use one
of the zalloc variants.
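Illustrative call sites only (dev, size and dma_handle are made up)
showing how callers can still get zeroed memory:
/* ask for zeroed memory explicitly ... */
buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL | __GFP_ZERO);

/* ... or use one of the zalloc variants */
buf = dma_zalloc_coherent(dev, size, &dma_handle, GFP_KERNEL);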
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We have 2 strategies to reduce the number of RPCIT instructions:
* A HW feature indicated via the tlb_refresh bit allows us to omit RPCIT for
invalid -> valid translation-table entry updates.
* With "lazy flush" we omit RPCIT for valid -> invalid updates until we run
out of dma addresses. When we have to reuse dma addresses we issue a global
tlb flush using only one RPCIT instruction.
Currently lazy flushing depends on tlb_refresh. Since there is no technical
reason for this, remove the dependency.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
__s390_dma_map_sg maps a dma-contiguous area. Although we only map
whole pages, we have to take into account that the area might not start
or stop at a page boundary, because we use the dma address to loop over
the individual sg entries. Failing to do that might lead to an
access of the wrong sg entry.
Fixes: ee877b81c6 ("s390/pci_dma: improve map_sg")
Reported-and-tested-by: Christoph Raisch <raisch@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The TOD clock offset injected by an STP sync check can be negative.
If the resulting total tod_steering_delta gets negative the kernel
will panic.
Change the type of tod_steering_delta to a signed type.
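A sketch of the corrected declaration; the previous type is assumed to
have been an unsigned 64-bit type:
/* signed, because the offset injected by an STP sync check can be negative */
static s64 tod_steering_delta;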
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 75c7b6f3f6 ("s390/time: steer clocksource on STP sync events")
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Move the last two architecture specific fields from the thread_info
structure to the thread_struct. All that is left in thread_info is
the flags field.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The user_timer and system_timer fields are used for the per-thread
cputime accounting code. The access to these values is simpler if
they are moved to the thread_struct as the task_thread_info(tsk)
indirection is not needed anymore.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The system_call field in thread_info structure is used by the signal
code to store the number of the current system call while the debugger
interacts with its inferior. A better location for the system_call
field is with the other debugger related information in the
thread_struct.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This is the s390 variant of commit 15f4eae70d ("x86: Move
thread_info into task_struct").
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Convert s390 to use a field in the struct lowcore for the CPU
preemption count. It is a bit cheaper to access a lowcore field
compared to a thread_info variable, and it removes the dependency
on a task related structure.
bloat-o-meter on the vmlinux image for the default configuration
(CONFIG_PREEMPT_NONE=y) reports a small reduction in text size:
add/remove: 0/0 grow/shrink: 18/578 up/down: 228/-5448 (-5220)
A larger improvement is achieved with the default configuration
but with CONFIG_PREEMPT=y and CONFIG_DEBUG_PREEMPT=n:
add/remove: 2/6 grow/shrink: 59/4477 up/down: 1618/-228762 (-227144)
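A sketch of what the lowcore based accessor looks like, ignoring the
PREEMPT_NEED_RESCHED folding a real implementation may also do; the
lowcore field name is an assumption:
/* read the preemption count directly from the per-CPU lowcore instead of
 * dereferencing the task's thread_info */
static __always_inline int preempt_count(void)
{
	return READ_ONCE(S390_lowcore.preempt_count);
}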
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Replace the bitops specific atomic update code with the functions
from atomic_ops.h. This saves a few lines of non-trivial code.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Rework atomic.h to make the low level functions available for use
in other headers without using atomic_t, e.g. in bitops.h.
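Illustrative use only; the helper name __atomic64_or and its signature
are assumptions about the atomic_ops.h interface:
/* a bitops-style helper can now operate on a plain unsigned long word
 * without wrapping it into an atomic_t */
static inline void set_bit_sketch(unsigned long nr, unsigned long *ptr)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	long *addr = (long *)(ptr + (nr / BITS_PER_LONG));

	__atomic64_or(mask, addr);
}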
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The dependency between the object and the source is handled by
scripts/Makefile.host, so specifying only "hostprogs-y += gen_facilities"
is sufficient.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We generally expect that headers in the arch/$(ARCH)/include/asm directory
are included from kernel sources, but facilities_src.h is not;
it is only included from the arch/s390/tools/gen_facilities.c tool.
There is no reason to expose this header to the public include path.
Furthermore, facilities_src.h makes sure to be included only from
gen_facilities.c by the following:
#ifndef S390_GEN_FACILITIES_C
#error "This file can only be included by gen_facilities.c"
#endif
This check can be removed by merging the two files.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The header facilities_src.h is only included from gen_facilities.c
and the tool is compiled with the following extra options:
HOSTCFLAGS_gen_facilities.o += -Wall $(LINUXINCLUDE)
Please note $(LINUXINCLUDE) is expanded into build options including:
-include $(srctree)/include/linux/kconfig.h
So, the Makefile always forces the tool to include kconfig.h, i.e.,
the #include <linux/kconfig.h> directive in the header is redundant.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The Makefile currently controlling compilation of this code is obj-y
meaning that it currently is not being built as a module by anyone.
Let's remove the couple of traces of modular infrastructure use, so that
when reading the driver there is no doubt it is builtin-only.
Since module_init translates to device_initcall in the non-modular
case, the init ordering remains unchanged with this commit.
We replace module.h with init.h and export.h since the file does
export some symbols.
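The general shape of such a conversion, shown diff-style with a made-up
init function name:
-#include <linux/module.h>
+#include <linux/export.h>
+#include <linux/init.h>
 ...
-module_init(example_init);
+device_initcall(example_init);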
Cc: linux-s390@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The Kconfig currently controlling compilation of this code is:
arch/s390/Kconfig:config S390_HYPFS_FS
arch/s390/Kconfig: def_bool y
...meaning that it currently is not being built as a module by anyone.
Let's remove the couple of traces of modular infrastructure use, so that
when reading the driver there is no doubt it is builtin-only.
Since module_init translates to device_initcall in the non-modular
case, the init ordering remains unchanged with this commit.
Also note that MODULE_ALIAS is a no-op for non-modular code.
We also delete the MODULE_LICENSE tag etc. since all that information
was (or is now) contained at the top of the file in the comments.
Build testing indicated that the presence of module.h was masking an
implicit include of kobject.h, hence that include is now added explicitly.
Cc: linux-s390@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
On STP sync events the TOD clock will jump in time, either forward or
backward. The TOD clocksource claims to be continuous but in case of
an STP sync with a negative offset it is not.
Subtract the offset injected by the STP sync check from the result of
the TOD clocksource to make it continuous again. Add code to drift the
offset towards zero with a fixed rate, steering 1 second in ~9 hours.
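A sketch of the steered read function; tod_steering_end, the 2^15 shift
and the overall shape are assumptions that merely illustrate the
description above (one clock unit per 2^15 units is roughly 1 second in
9 hours):
/* assumed globals, set when an STP sync check injects an offset */
static unsigned long long tod_steering_end;
static s64 tod_steering_delta;

static unsigned long long read_tod_clock_sketch(struct clocksource *cs)
{
	unsigned long long now, adj;

	now = get_tod_clock();
	adj = tod_steering_end - now;
	if ((s64) adj > 0)
		/* steering in progress: hide part of the offset and let it
		 * decay by one clock unit every 2^15 units (~1s in 9 hrs) */
		now += (tod_steering_delta < 0) ? (adj >> 15) : -(adj >> 15);
	return now;
}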
Suggested-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The last_update_clock time stamp in the lowcore should be adjusted by
the TOD clock delta that is created by the clock synchronization.
Otherwise the calculation of the steal time will be incorrect.
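A sketch of the adjustment; the delta variable name and the exact place
where it is applied are assumptions:
/* keep the steal time calculation consistent with the synchronized clock */
S390_lowcore.last_update_clock += clock_delta;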
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Merge clock_sync_cpu into stp_sync_clock and split out the update
of the global and per-CPU clock fields into clock_sync_global
and clock_sync_local.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Since commit d86bd1bece ("mm/slub: support left redzone") it is no longer
guaranteed that kmalloc(PAGE_SIZE) returns page-aligned memory.
After the above commit we get an error for diag224 because page-aligned
memory is required. This leads to the following user-visible error:
# mount none -t s390_hypfs /sys/hypervisor/
mount: unknown filesystem type 's390_hypfs'
# dmesg | grep hypfs
hypfs.cccfb8: The hardware system does not provide all functions
required by hypfs
hypfs.7a79f0: Initialization of hypfs failed with rc=-61
Fix this problem and use get_free_page() instead of kmalloc() to get
correctly aligned memory.
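A sketch of the fix; the buffer name and the GFP flags are assumptions:
static void *diag224_buf;

static int diag224_get_buffer(void)
{
	/* __get_free_page() returns page-aligned memory, which diag224
	 * requires; kmalloc(PAGE_SIZE) no longer guarantees that */
	diag224_buf = (void *) __get_free_page(GFP_KERNEL | GFP_DMA);
	return diag224_buf ? 0 : -ENOMEM;
}

static void diag224_put_buffer(void)
{
	free_page((unsigned long) diag224_buf);
}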
Cc: stable@vger.kernel.org # v3.6+
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Pull s390 fixes from Martin Schwidefsky:
"A few more s390 patches for 4.9:
- a fix for an overflow in the dasd driver reported by UBSAN
- fix a regression and add hotplug memory to the zone movable again
- add ignore defines for the pkey system calls
- fix the output of the merged stack tracer
- replace printk with pr_cont in arch/s390 where appropriate
- remove the arch specific return_address function again
- ignore reserved channel paths at boot time
- add a missing hugetlb_bad_size call to the arch backend"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/mm: fix zone calculation in arch_add_memory()
s390/dumpstack: use pr_cont within show_stack and die
s390/dumpstack: get rid of return_address again
s390/disassambler: use pr_cont where appropriate
s390/dumpstack: use pr_cont where appropriate
s390/dumpstack: restore reliable indicator for call traces
s390/mm: use hugetlb_bad_size()
s390/cio: don't register chpids in reserved state
s390: ignore pkey system calls
s390/dasd: avoid undefined behaviour
Standby (hotplug) memory should be added to ZONE_MOVABLE on s390. After
commit 199071f1 "s390/mm: make arch_add_memory() NUMA aware",
arch_add_memory() used memblock_end_of_DRAM() to find out the end of
ZONE_NORMAL and the beginning of ZONE_MOVABLE. However, commit 7f36e3e5
"memory-hotplug: add hot-added memory ranges to memblock before allocate
node_data for a node." moved the call of memblock_add_node() before
the call of arch_add_memory() in add_memory_resource(), and thus changed
the return value of memblock_end_of_DRAM() when called in
arch_add_memory(). As a result, arch_add_memory() will think that all
memory blocks should be added to ZONE_NORMAL.
Fix this by changing the logic in arch_add_memory() so that it will
manually iterate over all zones of a given node to find out which zone
a memory block should be added to.
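A heavily simplified sketch of the zone walk; the helper is hypothetical
and the real code also splits the range into per-zone chunks before
calling __add_pages():
/* pick the zone that already spans the first page of the new block and
 * fall back to ZONE_MOVABLE otherwise */
static struct zone *zone_for_new_block(int nid, unsigned long start_pfn)
{
	pg_data_t *pgdat = NODE_DATA(nid);
	struct zone *zone;
	int i;

	for (i = 0; i < MAX_NR_ZONES; i++) {
		zone = pgdat->node_zones + i;
		if (zone_idx(zone) == ZONE_MOVABLE)
			continue;
		if (start_pfn >= zone->zone_start_pfn &&
		    start_pfn < zone_end_pfn(zone))
			return zone;
	}
	return &pgdat->node_zones[ZONE_MOVABLE];
}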
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use pr_cont instead of printk calls also within show_stack and
die in order to avoid extra line breaks.
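For illustration only (the format string is made up), the difference at a
call site:
/* before: each printk may end up on a line of its own */
printk(" %016lx", sp);

/* after: pr_cont() explicitly continues the current line */
pr_cont(" %016lx", sp);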
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Usually a validity intercept is a programming error of the host
because of invalid entries in the state description.
We can get a validity intercept if the mode of the runtime
instrumentation control block is wrong. As the host does not know
which modes are valid, this can be used by userspace to trigger
a WARN.
Instead of printing a WARN let's return an error to userspace as
this can only happen if userspace provides a malformed initial
value (e.g. on migration). The kernel should never warn on bogus
input. Instead let's log it into the s390 debug feature.
While at it, let's return -EINVAL for all validity intercepts as
this will trigger an error in QEMU like
error: kvm run failed Invalid argument
PSW=mask 0404c00180000000 addr 000000000063c226 cc 00
R00=000000000000004f R01=0000000000000004 R02=0000000000760005 R03=000000007fe0a000
R04=000000000064ba2a R05=000000049db73dd0 R06=000000000082c4b0 R07=0000000000000041
R08=0000000000000002 R09=000003e0804042a8 R10=0000000496152c42 R11=000000007fe0afb0
[...]
This will avoid an endless loop of validity intercepts.
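A sketch of the resulting handler; the KVM_EVENT debug macro and the
sie_block field used for the reason code follow the usual KVM/s390
conventions but are assumptions here:
static int handle_validity(struct kvm_vcpu *vcpu)
{
	int viwhy = vcpu->arch.sie_block->ipb >> 16;

	vcpu->stat.exit_validity++;
	/* log to the s390 debug feature instead of WARNing */
	KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%pK)",
		  viwhy, current->pid, vcpu->kvm);
	return -EINVAL;
}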
Cc: stable@vger.kernel.org # v4.5+
Fixes: c6e5f16637 ("KVM: s390: implement the RI support of guest")
Acked-by: Fan Zhang <zhangfan@linux.vnet.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Merge the gup_flags cleanups from Lorenzo Stoakes:
"This patch series adjusts functions in the get_user_pages* family such
that desired FOLL_* flags are passed as an argument rather than
implied by flags.
The purpose of this change is to make the use of FOLL_FORCE explicit
so it is easier to grep for and clearer to callers that this flag is
being used. The use of FOLL_FORCE is an issue as it overrides missing
VM_READ/VM_WRITE flags for the VMA whose pages we are reading
from/writing to, which can result in surprising behaviour.
The patch series came out of the discussion around commit 38e0885465
("mm: check VMA flags to avoid invalid PROT_NONE NUMA balancing"),
which addressed a BUG_ON() being triggered when a page was faulted in
with PROT_NONE set but having been overridden by FOLL_FORCE.
do_numa_page() was run on the assumption the page _must_ be one marked
for NUMA node migration as an actual PROT_NONE page would have been
dealt with prior to this code path, however FOLL_FORCE introduced a
situation where this assumption did not hold.
See
https://marc.info/?l=linux-mm&m=147585445805166
for the patch proposal"
Additionally, there's a fix for an ancient bug related to FOLL_FORCE and
FOLL_WRITE by me.
[ This branch was rebased recently to add a few more acked-by's and
reviewed-by's ]
* gup_flag-cleanups:
mm: replace access_process_vm() write parameter with gup_flags
mm: replace access_remote_vm() write parameter with gup_flags
mm: replace __access_remote_vm() write parameter with gup_flags
mm: replace get_user_pages_remote() write/force parameters with gup_flags
mm: replace get_user_pages() write/force parameters with gup_flags
mm: replace get_vaddr_frames() write/force parameters with gup_flags
mm: replace get_user_pages_locked() write/force parameters with gup_flags
mm: replace get_user_pages_unlocked() write/force parameters with gup_flags
mm: remove write/force parameters from __get_user_pages_unlocked()
mm: remove write/force parameters from __get_user_pages_locked()
mm: remove gup_flags FOLL_WRITE games from __get_user_pages()
This removes the 'write' and 'force' use from get_user_pages_unlocked()
and replaces them with 'gup_flags' to make the use of FOLL_FORCE
explicit in callers as use of this flag can result in surprising
behaviour (and hence bugs) within the mm subsystem.
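An illustrative before/after for a call site (addr and page are made up);
the write/force integer pair becomes an explicit flag mask:
/* before: write = 1, force = 0 */
ret = get_user_pages_unlocked(addr, 1, 1, 0, &page);

/* after: the intent is spelled out with FOLL_* flags */
ret = get_user_pages_unlocked(addr, 1, &page, FOLL_WRITE);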
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With commit ef6000b4c6 ("Disable the __builtin_return_address() warning
globally after all") the kernel no longer warns at all if
__builtin_return_address(n) is called with n > 0.
Besides the fact that this was a false warning on s390 anyway, due to
the always present backchain, we can now revert commit 5606330627
("s390/dumpstack: implement and use return_address()") to simplify the
code again.
After all, I shouldn't have implemented return_address() in the first
place just to work around this issue. So get rid of it again.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Before all the different stack tracers were merged, the printed call
traces had an indicator of whether an entry can be considered reliable
or not. Unreliable entries were put in parentheses, reliable ones were
not. Currently all lines contain these extra parentheses.
This patch restores the old behaviour by adding an extra "reliable"
parameter to the callback functions. Only show_trace currently makes
use of it.
Before:
[ 0.804751] Call Trace:
[ 0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
[ 0.804756] ([<0000000000161d64>] create_worker+0x174/0x1c0)
After:
[ 0.804751] Call Trace:
[ 0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
[ 0.804756] [<0000000000161d64>] create_worker+0x174/0x1c0
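A sketch of a callback honoring the new parameter (illustrative, not the
exact dumpstack code): unreliable entries keep their parentheses,
reliable ones lose them:
static int show_address(void *data, unsigned long address, int reliable)
{
	if (reliable)
		printk(" [<%016lx>] %pSR\n", address, (void *) address);
	else
		printk("([<%016lx>] %pSR)\n", address, (void *) address);
	return 0;
}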
Fixes: 758d39ebd3 ("s390/dumpstack: merge all four stack tracers")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Ignore the pkey system calls since they don't make any sense on s390.
In addition, if the pkey_free system call were wired up on a system
without pkey support, any user could trigger a warning by issuing it.
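The usual mechanism for this is the __IGNORE_ convention checked by
scripts/checksyscalls.sh; a sketch of the defines (their placement in
asm/unistd.h is an assumption):
#define __IGNORE_pkey_mprotect
#define __IGNORE_pkey_alloc
#define __IGNORE_pkey_free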
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Pull kbuild updates from Michal Marek:
- EXPORT_SYMBOL for asm source by Al Viro.
This does bring a regression, because genksyms no longer generates
checksums for these symbols (CONFIG_MODVERSIONS). Nick Piggin is
working on a patch to fix this.
Plus, we are talking about functions like strcpy(), which rarely
change prototypes.
- Fixes for PPC fallout of the above by Stephen Rothwell and Nick
Piggin
- fixdep speedup by Alexey Dobriyan.
- preparatory work by Nick Piggin to allow architectures to build with
-ffunction-sections, -fdata-sections and --gc-sections
- CONFIG_THIN_ARCHIVES support by Stephen Rothwell
- fix for filenames with colons in the initramfs source by me.
* 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild: (22 commits)
initramfs: Escape colons in depfile
ppc: there is no clear_pages to export
powerpc/64: whitelist unresolved modversions CRCs
kbuild: -ffunction-sections fix for archs with conflicting sections
kbuild: add arch specific post-link Makefile
kbuild: allow archs to select link dead code/data elimination
kbuild: allow architectures to use thin archives instead of ld -r
kbuild: Regenerate genksyms lexer
kbuild: genksyms fix for typeof handling
fixdep: faster CONFIG_ search
ia64: move exports to definitions
sparc32: debride memcpy.S a bit
[sparc] unify 32bit and 64bit string.h
sparc: move exports to definitions
ppc: move exports to definitions
arm: move exports to definitions
s390: move exports to definitions
m68k: move exports to definitions
alpha: move exports to actual definitions
x86: move exports to actual definitions
...