Introduce reserve/release functions to share the sampling facility
between perf and oprofile.
Also improve error handling for the sampling facility support in perf.
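A minimal sketch of the reserve/release idea, assuming hypothetical sf_reserve()/sf_release() helpers; the real functions in the s390 code may use different names and locking:

  #include <stdio.h>

  /* Hypothetical owner flag for the shared sampling facility. The real
   * implementation lives in the s390 perf code and may use different
   * names and a lock instead of a plain flag. */
  enum sf_owner { SF_FREE, SF_OWNER_PERF, SF_OWNER_OPROFILE };

  static enum sf_owner sf_owner = SF_FREE;

  /* Reserve the sampling facility for one consumer; fail if taken. */
  static int sf_reserve(enum sf_owner who)
  {
      if (sf_owner != SF_FREE)
          return -1;   /* -EBUSY in kernel terms */
      sf_owner = who;
      return 0;
  }

  /* Release the facility so the other consumer can grab it. */
  static void sf_release(enum sf_owner who)
  {
      if (sf_owner == who)
          sf_owner = SF_FREE;
  }

  int main(void)
  {
      if (sf_reserve(SF_OWNER_PERF) == 0)
          printf("perf owns the sampling facility\n");
      if (sf_reserve(SF_OWNER_OPROFILE) != 0)
          printf("oprofile must wait until perf releases it\n");
      sf_release(SF_OWNER_PERF);
      return 0;
  }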
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The cpum_cf (counter facility) PMU does not support sampling events.
With cpum_sf (sampling facility), a PMU for sampling CPU cycles is
available.
Make cpum_sf the "default" PMU for PERF_COUNT_HW_CPU_CYCLES sampling
events but use the more precise cpum_cf PMU for non-sampling events.
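Conceptually, event creation now routes a sampling CPU-cycles event to cpum_sf and everything else to cpum_cf. A simplified, standalone sketch of that decision (the struct and helper are illustrative, not the kernel's real types):

  #include <stdio.h>
  #include <stdint.h>

  /* Illustrative stand-in for struct perf_event_attr. */
  struct event_attr {
      uint64_t config;         /* e.g. PERF_COUNT_HW_CPU_CYCLES */
      uint64_t sample_period;  /* non-zero for sampling events */
  };

  #define HW_CPU_CYCLES 0      /* value of PERF_COUNT_HW_CPU_CYCLES */

  /* A sampling event is one that has a sample period set. */
  static int is_sampling_event(const struct event_attr *attr)
  {
      return attr->sample_period != 0;
  }

  static const char *pick_pmu(const struct event_attr *attr)
  {
      if (attr->config == HW_CPU_CYCLES && is_sampling_event(attr))
          return "cpum_sf";    /* sampling facility */
      return "cpum_cf";        /* counter facility */
  }

  int main(void)
  {
      struct event_attr sampling = { HW_CPU_CYCLES, 4096 };
      struct event_attr counting = { HW_CPU_CYCLES, 0 };

      printf("sampling cycles -> %s\n", pick_pmu(&sampling));
      printf("counting cycles -> %s\n", pick_pmu(&counting));
      return 0;
  }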
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Introduce a perf PMU, "cpum_sf", to support the CPU-Measurement
Sampling Facility. The sampling facility can be controlled through the
interfaces of this perf PMU. Perf sampling events are created for
hardware samples.
For details about the CPU-Measurement Sampling Facility, see
"The Load-Program-Parameter and the CPU-Measurement Facilities" (SA23-2260).
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Provide PMU event attributes for supported counters and export their symbolic
names to the sysfs "events" directory.
See the /sys/devices/cpum_cf/events/ directory for a list of available counters.
Note that you might require counter set authorizations for the LPAR to use them.
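As an illustration, an event definition can be read from sysfs; the counter name below is only an example and each file is assumed to follow the usual "event=0x..." convention for perf PMU event attributes:

  #include <stdio.h>

  int main(void)
  {
      /* The counter name used here is only an example; the available
       * files depend on the CPU model and counter set authorization. */
      const char *path = "/sys/devices/cpum_cf/events/CPU_CYCLES";
      char buf[64];
      FILE *f = fopen(path, "r");

      if (!f) {
          perror(path);
          return 1;
      }
      if (fgets(buf, sizeof(buf), f))
          printf("%s -> %s", path, buf);  /* e.g. "event=0x0000" */
      fclose(f);
      return 0;
  }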
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The return code of the __put_user call that stores the rt_sigreturn
system call on the user stack is not properly checked; the err
variable is only checked before the __put_user call. Use an if
statement on the __put_user return value instead.
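The shape of the fix in a standalone sketch; put_user_stub() merely stands in for __put_user and the opcode value is only an example:

  #include <stdio.h>

  /* Stand-in for __put_user: returns 0 on success, -EFAULT otherwise. */
  static int put_user_stub(unsigned short val, unsigned short *uaddr)
  {
      if (!uaddr)
          return -14;  /* -EFAULT */
      *uaddr = val;
      return 0;
  }

  static int setup_rt_frame(unsigned short *retcode)
  {
      int err = 0;

      /* ... earlier frame setup would accumulate into err ... */
      if (err)
          return -14;

      /* Check the store of the sigreturn svc opcode directly,
       * instead of only testing err before the call. */
      if (put_user_stub(0x0aad /* example opcode value */, retcode))
          return -14;

      return 0;
  }

  int main(void)
  {
      unsigned short code;

      printf("setup: %d\n", setup_rt_frame(&code));
      printf("setup with bad pointer: %d\n", setup_rt_frame(NULL));
      return 0;
  }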
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove the embedded struct cpu from struct pcpu and replace it with a
pointer instead. The struct cpu now gets allocated when a new cpu gets
detected.
The size of the pcpu_devices array (NR_CPUS * sizeof(struct pcpu)) gets
reduced by nearly 120KB.
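A standalone sketch of the effect on the statically sized array; the struct members and sizes are placeholders, not the real struct pcpu layout:

  #include <stdio.h>
  #include <stddef.h>

  #define NR_CPUS 256  /* example configuration value */

  /* Placeholder for the generic struct cpu device object. */
  struct cpu { char device[480]; };

  /* Before: struct cpu embedded, paid for every possible CPU. */
  struct pcpu_old { struct cpu cpu; unsigned long state; };

  /* After: only a pointer; struct cpu is allocated when the CPU is detected. */
  struct pcpu_new { struct cpu *cpu; unsigned long state; };

  int main(void)
  {
      size_t before = NR_CPUS * sizeof(struct pcpu_old);
      size_t after = NR_CPUS * sizeof(struct pcpu_new);

      printf("pcpu_devices before: %zu bytes\n", before);
      printf("pcpu_devices after:  %zu bytes\n", after);
      printf("static saving:       %zu bytes\n", before - after);
      return 0;
  }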
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
It is less expensive to update control registers 0 and 2 with two
individual stctg/lctlg instructions than with a single one that spans
control registers 0, 1, and 2.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The user_enable_single_step() and user_disable_single_step() functions
are always called on the inferior, never for the currently active
process. Remove the unnecessary check for the current process and
the update_cr_regs() call from the enable/disable functions.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If the per cpu ec_mask bit of the receiving cpu is already set there is
no need to send an ipi, since a different cpu has already sent an ipi
and the receiving cpu has not yet executed the external call ipi handler.
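The idea in a standalone sketch, with GCC's atomic builtins standing in for the kernel's test_and_set_bit(); names and types are illustrative:

  #include <stdio.h>

  static unsigned long ec_mask;  /* per-cpu external call mask (illustrative) */
  static int ipis_sent;

  static void send_external_call_ipi(void)
  {
      ipis_sent++;  /* stand-in for the actual SIGP external call */
  }

  static void pcpu_ec_call(int ec_bit)
  {
      unsigned long old;

      /* Atomically set the bit; if it was already set, another CPU has
       * sent an IPI that the target has not yet handled, so skip ours. */
      old = __atomic_fetch_or(&ec_mask, 1UL << ec_bit, __ATOMIC_SEQ_CST);
      if (!(old & (1UL << ec_bit)))
          send_external_call_ipi();
  }

  int main(void)
  {
      pcpu_ec_call(0);  /* first request: IPI is sent */
      pcpu_ec_call(0);  /* bit still set: no second IPI */
      printf("IPIs sent: %d\n", ipis_sent);
      return 0;
  }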
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
With git commit 79c74ecbeb
"s390/time,vdso: convert to the new update_vsyscall interface"
the new update_vsyscall function already adds up xtime
and wall_to_monotonic. The old update_vsyscall function only
copied the wall_to_monotonic offset. The vdso code needs to be
modified to take this into consideration.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The code to use the ECTG instruction to calculate the cputime for the
current thread is currently used only for the per-thread CPU-clock
with the clockid -2 (PID=0, VIRT=1). Use the same code for the clockid
CLOCK_THREAD_CPUTIME_ID to speed up the more common clockid as well.
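For reference, this is the clock the change speeds up, as seen from user space with plain POSIX code (nothing s390-specific):

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      struct timespec ts;
      volatile unsigned long i, sum = 0;

      for (i = 0; i < 10000000UL; i++)  /* burn some CPU time */
          sum += i;

      if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts)) {
          perror("clock_gettime");
          return 1;
      }
      printf("thread cputime: %ld.%09ld s (sum=%lu)\n",
             (long)ts.tv_sec, ts.tv_nsec, (unsigned long)sum);
      return 0;
  }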
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The access-list entry is supposed to have the fetch-only bit set, however
a reserved bit got set instead.
Userspace isn't able to write to the page anyway, since the accessed page
has the read-only bit set. So this only saves us from bad surprises in the
future if the reserved bit ever gets used.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Git commit 9e34f2686bb088b211b6cac8772e1f644c6180f8
"s390/mm,tlb: tlb flush on page table upgrade fixup" removed the
exception handler for the asce-type exception. This is incorrect
as the user-copy with MVCOS can cause asce-type exceptions in
the kernel if a user pointer is too large. Those need to be
handled with do_no_context to branch to the fixup in the
user-copy code.
The simplest fix for this problem is to call do_dat_exception for
asce-type exceptions; as there is no vma for the address, the code
will handle the exception correctly.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Git commit 4f37a68cda
"s390: Use direct ktime path for s390 clockevent device" makes use
of the CLOCK_EVT_FEAT_KTIME clockevent option to avoid the delta
calculation with ktime_get() in clockevents_program_event and the
get_tod_clock() in s390_next_event. This is based on the assumption
that the difference between the internal ktime and the hardware
clock is reflected in the wall_to_monotonic delta. But this is not
true: the NTP corrections are applied via changes to the tk->mult
multiplier, and these are not reflected in wall_to_monotonic.
In theory this could be solved by using the raw monotonic clock
but it is simpler to switch back to the standard clock delta
calculation.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Switch to the improved update_vsyscall interface that provides
sub-nanosecond precision for gettimeofday and clock_gettime.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Commit "s390: fix handling of runtime instrumentation psw bit" (5ebf250dab)
changed the behavior of setting the runtime instrumentation psw bit. This
commit restores the original logic:
1. When returning from the signal handler, the runtime instrumentation psw bit
is restored to its saved state.
2. If the runtime instrumentation psw bit is enabled during the signal handler,
it is always turned off when leaving the signal handler. The saved state
is restored as described in 1. That also implies that turning on runtime
instrumentation in the signal handler is only effective while running in the
signal context.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Pull second set of s390 patches from Martin Schwidefsky:
"The handling of the PCI hotplug notifications has been improved, the
zfcp dumper can now detect the HSA size dynamically and the default
install kernel has been changed to the compressed bzImage. And two
bug-fixes for scm and 3270"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/pci: implement hotplug notifications
s390/scm_block: do not hide eadm subchannel dependency
s390/sclp: Consolidate early sclp init calls to sclp_early_detect()
s390/sclp: Move early code from sclp_cmd.c to sclp_early.c
s390/sclp: Determine HSA size dynamically for zfcpdump
s390/sclp: Move declarations for sclp_sdias into separate header file
s390/pci: implement pcibios_remove_bus
s390/pci: improve handling of bus resources
s390/3270: fix missing device_destroy() call
s390/boot: Install bzImage as default kernel image
Pull trivial tree updates from Jiri Kosina:
"Usual earth-shaking, news-breaking, rocket science pile from
trivial.git"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (23 commits)
doc: usb: Fix typo in Documentation/usb/gadget_configs.txt
doc: add missing files to timers/00-INDEX
timekeeping: Fix some trivial typos in comments
mm: Fix some trivial typos in comments
irq: Fix some trivial typos in comments
NUMA: fix typos in Kconfig help text
mm: update 00-INDEX
doc: Documentation/DMA-attributes.txt fix typo
DRM: comment: `halve' -> `half'
Docs: Kconfig: `devlopers' -> `developers'
doc: typo on word accounting in kprobes.c in mutliple architectures
treewide: fix "usefull" typo
treewide: fix "distingush" typo
mm/Kconfig: Grammar s/an/a/
kexec: Typo s/the/then/
Documentation/kvm: Update cpuid documentation for steal time and pv eoi
treewide: Fix common typo in "identify"
__page_to_pfn: Fix typo in comment
Correct some typos for word frequency
clk: fixed-factor: Fix a trivial typo
...
The new function calls the old ones. The sclp_event_mask_early() function is
removed and replaced by one invocation of sclp_set_event_mask(0, 0).
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Currently we have hardcoded the HSA size to 32 MiB. With this patch the
HSA size is determined dynamically via SCLP in early.c.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Merge first patch-bomb from Andrew Morton:
"Quite a lot of other stuff is banked up awaiting further
next->mainline merging, but this batch contains:
- Lots of random misc patches
- OCFS2
- Most of MM
- backlight updates
- lib/ updates
- printk updates
- checkpatch updates
- epoll tweaking
- rtc updates
- hfs
- hfsplus
- documentation
- procfs
- update gcov to gcc-4.7 format
- IPC"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (269 commits)
ipc, msg: fix message length check for negative values
ipc/util.c: remove unnecessary work pending test
devpts: plug the memory leak in kill_sb
./Makefile: export initial ramdisk compression config option
init/Kconfig: add option to disable kernel compression
drivers: w1: make w1_slave::flags long to avoid memory corruption
drivers/w1/masters/ds1wm.c: use dev_get_platdata()
drivers/memstick/core/ms_block.c: fix unreachable state in h_msb_read_page()
drivers/memstick/core/mspro_block.c: fix attributes array allocation
drivers/pps/clients/pps-gpio.c: remove redundant of_match_ptr
kernel/panic.c: reduce 1 byte usage for print tainted buffer
gcov: reuse kbasename helper
kernel/gcov/fs.c: use pr_warn()
kernel/module.c: use pr_foo()
gcov: compile specific gcov implementation based on gcc version
gcov: add support for gcc 4.7 gcov format
gcov: move gcov structs definitions to a gcc version specific file
kernel/taskstats.c: return -ENOMEM when alloc memory fails in add_del_listener()
kernel/taskstats.c: add nla_nest_cancel() for failure processing between nla_nest_start() and nla_nest_end()
kernel/sysctl_binary.c: use scnprintf() instead of snprintf()
...
Pull vfs updates from Al Viro:
"All kinds of stuff this time around; some more notable parts:
- RCU'd vfsmounts handling
- new primitives for coredump handling
- files_lock is gone
- Bruce's delegations handling series
- exportfs fixes
plus misc stuff all over the place"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (101 commits)
ecryptfs: ->f_op is never NULL
locks: break delegations on any attribute modification
locks: break delegations on link
locks: break delegations on rename
locks: helper functions for delegation breaking
locks: break delegations on unlink
namei: minor vfs_unlink cleanup
locks: implement delegations
locks: introduce new FL_DELEG lock flag
vfs: take i_mutex on renamed file
vfs: rename I_MUTEX_QUOTA now that it's not used for quotas
vfs: don't use PARENT/CHILD lock classes for non-directories
vfs: pull ext4's double-i_mutex-locking into common code
exportfs: fix quadratic behavior in filehandle lookup
exportfs: better variable name
exportfs: move most of reconnect_path to helper function
exportfs: eliminate unused "noprogress" counter
exportfs: stop retrying once we race with rename/remove
exportfs: clear DISCONNECTED on all parents sooner
exportfs: more detailed comment for path_reconnect
...
Use more appropriate NUMA_NO_NODE instead of -1 in all archs' module_alloc()
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull IRQ changes from Ingo Molnar:
"The biggest change this cycle are the softirq/hardirq stack
interaction and nesting fixes, cleanups and reorganizations from
Frederic. This is the longer followup story to the softirq nesting
fix that is already upstream (commit ded7975475: "irq: Force hardirq
exit's softirq processing on its own stack")"
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip: bcm2835: Convert to use IRQCHIP_DECLARE macro
powerpc: Tell about irq stack coverage
x86: Tell about irq stack coverage
irq: Optimize softirq stack selection in irq exit
irq: Justify the various softirq stack choices
irq: Improve a bit softirq debugging
irq: Optimize call to softirq on hardirq exit
irq: Consolidate do_softirq() arch overriden implementations
x86/irq: Correct comment about i8259 initialization
The IDTE instruction used to flush TLB entries for a specific address
space uses the address-space-control element (ASCE) to identify
affected TLB entries. The upgrade of a page table adds a new top
level page table which changes the ASCE. The TLB entries associated
with the old ASCE need to be flushed and the ASCE for the address space
needs to be replaced synchronously on all CPUs which currently use it.
The concept of a lazy ASCE update with an exception handler is broken.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use the ACCESS_ONCE macro for both accesses to idle->sequence in the
loops to calculate the idle time. If only one access uses the macro,
the compiler is free to cache the value for the second access which
can cause endless loops.
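A standalone sketch of the retry loop and of why both reads need a volatile access; READ_ONCE() here plays the role of ACCESS_ONCE() and the fields are illustrative:

  #include <stdio.h>

  #define READ_ONCE(x) (*(volatile typeof(x) *)&(x))

  struct idle_data {
      unsigned int sequence;          /* even: stable, odd: being updated */
      unsigned long long idle_time;
  };

  static struct idle_data idle = { 2, 123456ULL };

  static unsigned long long read_idle_time(void)
  {
      unsigned long long value;
      unsigned int seq;

      do {
          seq = READ_ONCE(idle.sequence);  /* first read */
          value = idle.idle_time;
          /* If only one of the two reads used READ_ONCE, the compiler
           * could reuse the cached value here and loop forever. */
      } while (seq != READ_ONCE(idle.sequence) || (seq & 1));

      return value;
  }

  int main(void)
  {
      printf("idle time: %llu\n", read_idle_time());
      return 0;
  }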
Cc: stable@vger.kernel.org # 3.6+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This typedef is unnecessary and should just be removed.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Make psw32_user_bits a constant value again.
This is a leftover of the code which allowed running the kernel either
in primary or home space, which got removed with 9a905662 "s390/uaccess:
always run the kernel in home space".
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Fix the following bugs:
- When returning from a signal the signal handler copies the saved psw mask
from user space and uses parts of it. Especially it restores the RI bit
unconditionally. If however the machine doesn't support RI, or RI is
disabled for the task, the last lpswe instruction which returns to user
space will generate a specification exception.
To fix this check if the RI bit is allowed to be set and kill the task
if not.
- In the compat mode signal handler code the RI bit of the psw mask gets
propagated to the mask of the return psw: if user space enables RI in the
signal handler, RI will also be enabled after the signal handler is
finished.
This is a different behaviour than with 64 bit tasks. So change this to
match the 64 bit semantics, which restores the original RI bit value.
- Fix similar oddities within the ptrace code as well.
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The FPC_VALID_MASK has been used to check the validity of the value
to be loaded into the floating-point-control register. With the
introduction of the floating-point extension facility and decimal
floating point, additional bits have been defined which need
to be checked in a non-straightforward way. So far these bits have
been ignored, which can cause incorrect results for decimal-
floating-point operations, e.g. an incorrect rounding mode being
set after signal return.
The static check with the FPC_VALID_MASK is replaced with a trial
load of the floating-point-control value, see test_fp_ctl.
In addition an information leak with the padding word between the
floating-point-control word and the floating-point registers in
the s390_fp_regs is fixed.
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of this one:
arch/s390/kernel/cache.c: In function 'cache_build_info':
arch/s390/kernel/cache.c:144: warning: 'private' may be used uninitialized
in this function
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Instead of returning the number of bytes not copied and/or -EFAULT, let the
signal handler helper functions always return -EFAULT if a user space
access failed.
This doesn't fix a bug in the current code, but makes it harder to get it
wrong in the future.
Also "smatch" won't complain anymore about the fact that the number of
remaining bytes gets returned instead of -EFAULT.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Currently zfcpdump can only collect registers for up to CONFIG_NR_CPUS
CPUs. This dependency is not necessary, so remove it by dynamically
allocating the save area array.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Steven Rostedt noted that s390 is the only architecture which calls
ftrace_push_return_trace() before ftrace_graph_entry() and therefore has
the small advantage that trace.depth gets initialized automatically.
However this small advantage isn't worth the difference and possible subtle
breakage that may result from this.
So change s390 to have the same function call order as all other
architectures: first ftrace_graph_entry(), then ftrace_push_return_trace().
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of this compile warning:
arch/s390/kernel/crash_dump.c: In function 'copy_from_realmem':
arch/s390/kernel/crash_dump.c:48:6: warning: unused variable 'rc'
[-Wunused-variable]
int rc;
^
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
With dirty and referenced bits implemented in software it is unnecessary
to initialize the storage key for every page. With this patch not a single
storage key operation is done for a system that does not use KVM.
For KVM set_pte_at/pgste_set_key will do the initialization for the guest
view of the storage key when the mapping for the page is established in
the host.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Simplify the uaccess code by removing the user_mode=home option.
The kernel will now always run in the home space mode.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Just like all other architectures we should use out-of-line find bit
operations, since the inline variants bloat the size of the kernel image.
And also like all other architectures we should only supply optimized
variants of the __ffs, ffs, etc. primitives.
Therefore this patch removes the inlined s390 find bit functions and uses
the generic out-of-line variants instead.
The optimization of the primitives follows with the next patch.
With this patch also the functions find_first_bit_left() and
find_next_bit_left() have been reimplemented, since logically, they are
nothing else but a find_first_bit()/find_next_bit() implementation that
use an inverted __fls() instead of __ffs().
Also the restriction that these functions only work on machines which
support the "flogr" instruction is gone now.
This reduces the size of the kernel image (defconfig, -march=z9-109)
by 144,482 bytes.
The size of the function build_sched_domains() alone gets reduced from
7 KB to 3.5 KB.
We also get rid of unused functions like find_first_bit_le()...
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Since zEC12 we have the interlocked-access facility 2, which allows us to
use the instructions ni/oi/xi to update a single byte in storage with
compare-and-swap semantics.
So change set_bit(), clear_bit() and change_bit() to generate such code
instead of a compare-and-swap loop (or using the load-and-* instruction
family), if possible.
This reduces the text segment by yet another 8KB (defconfig).
Alternatively the long displacement variants niy/oiy/xiy could have
been used, but the extended displacement field is usually not needed
and therefore would only increase the size of the text segment again.
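A rough illustration of the byte-granular update the facility enables, using GCC's __atomic_fetch_or() on a single byte instead of a compare-and-swap loop on the containing word; whether this compiles to oi/ni/xi depends on the compiler and -march level, and the bit-to-byte mapping below is not the kernel's s390 (big-endian) mapping:

  #include <stdio.h>

  static unsigned long bitmap;

  /* Set one bit by touching only the byte that contains it. */
  static void set_bit_bytewise(unsigned int nr, unsigned long *word)
  {
      unsigned char *byte = (unsigned char *)word + (nr / 8);

      __atomic_fetch_or(byte, (unsigned char)(1U << (nr % 8)),
                        __ATOMIC_SEQ_CST);
  }

  int main(void)
  {
      set_bit_bytewise(3, &bitmap);
      set_bit_bytewise(12, &bitmap);
      printf("bitmap = 0x%lx\n", bitmap);
      return 0;
  }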
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove CONFIG_SMP from bitops code. This reduces the C code significantly
but also generates better code for the SMP case.
This means that for !CONFIG_SMP set_bit() and friends now also have
compare and swap semantics (read: more code). However nobody really cares
for !CONFIG_SMP and this is the trade-off to simplify the SMP code which we
do care about.
The non-atomic bitops like __set_bit() now also generate better code,
because the old code did not have a __builtin_constant_p() check for the
CONFIG_SMP case and therefore always generated the inline assembly variant.
However the inline assemblies for the non-atomic case now got completely
removed since gcc can produce better code, which accesses less memory
operands.
test_bit() also got a bit simplified: it did have a
__builtin_constant_p() check, however with two identical code paths for
each case (written differently).
As a result this mainly reduces the amount of code to be maintained, but
is not very relevant for code generation, since there are not many
non-atomic bitops usages that we care about.
(code reduction defconfig kernel image before/after: 560 bytes).
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Since we have an in-kernel disassembler we can make sure that
there won't be any kprobes set on random data.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Now that the in-kernel disassembler has its own header file, move the
disassembler related function prototypes to that header file.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The patch moves some of the definitions to a
header file. No functional changes involved.
I have retained the Copyright Statement from the
original file.
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
[Heiko Carstens: rename s390-dis.h to dis.h]
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Rename 'insn' and 'operand' structures to more canonical names
to avoid conflicts.
struct insn represents information about an instruction, including
the mnemonics, format and opcode.
struct operand represents the 'properties' and information on how to
interpret the operand value and doesn't contain the value itself.
We rename these structures for avoiding a global conflict.
i.e,
1,$s/struct insn/struct s390_insn/g
1,$s/struct operand/struct s390_operand/g
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When checking whether the insn address is a kernel image or a module
address, it should be an if-else-if statement, not two independent if
statements.
This doesn't really fix a bug, but matches s390_free_insn_slot().
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The result of the store-clock-fast (STCKF) instruction is a bit fuzzy.
It can happen that the value stored on one CPU is smaller than the value
stored on another CPU, although the order of the stores is the other
way around. This can cause deltas of get_tod_clock() values to become
negative when they should not be.
We need to be more careful with store-clock-fast, this patch partially
reverts git commit e4b7b4238e666682555461fa52eecd74652f36bb "time:
always use stckf instead of stck if available". The get_tod_clock()
function now uses the store-clock-extended (STCKE) instruction.
get_tod_clock_fast() can be used if the fuzziness of store-clock-fast
is acceptable e.g. for wait loops local to a CPU.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The return value of copy_siginfo_(to|from)_user32() gets passed to
user space, however we do not convert a positive return value from
copy_(to|from)_user to -EFAULT.
Therefore these functions (and the calling system calls) may incorrectly
return a positive number (bytes not copied) instead of -EFAULT.
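The pattern of the fix in a standalone sketch; copy_to_user_stub() stands in for copy_to_user() and returns the number of bytes it could not copy:

  #include <stdio.h>
  #include <string.h>

  #define EFAULT 14

  /* Stand-in for copy_to_user(): returns the number of uncopied bytes. */
  static unsigned long copy_to_user_stub(void *to, const void *from,
                                         unsigned long n)
  {
      if (!to)
          return n;  /* nothing copied */
      memcpy(to, from, n);
      return 0;
  }

  /* Any non-zero result is converted to -EFAULT instead of being
   * passed on to the caller as a positive byte count. */
  static int copy_siginfo_sketch(void *uptr, const void *info, unsigned long n)
  {
      if (copy_to_user_stub(uptr, info, n))
          return -EFAULT;
      return 0;
  }

  int main(void)
  {
      char kbuf[16] = "siginfo";
      char ubuf[16];

      printf("good copy: %d\n", copy_siginfo_sketch(ubuf, kbuf, sizeof(kbuf)));
      printf("bad copy:  %d\n", copy_siginfo_sketch(NULL, kbuf, sizeof(kbuf)));
      return 0;
  }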
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>