This patch adds an explicit check in iscsit_find_cmd_from_itt_or_dump()
to ignore commands with ICF_GOT_LAST_DATAOUT set. This is done to
address the case where an ITT is being reused for DataOUT, but the
previous command with the same ITT has not yet been acknowledged by
ExpStatSN and removed from the per connection command list.
This issue originally manifested itself by referencing the previous
command during ITT lookup and subsequently hitting the
ICF_GOT_LAST_DATAOUT check in iscsit_check_dataout_hdr(), which
resulted in the DataOUT PDU + associated payload being silently
dumped.
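A minimal sketch of the lookup-side change (list and field names follow
the iscsi_target code; treat the exact spelling as an assumption):

	list_for_each_entry(cmd, &conn->conn_cmd_list, i_conn_node) {
		/* Skip a stale command that already saw its final DataOUT
		 * and is only waiting for acknowledgement by ExpStatSN. */
		if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT)
			continue;
		if (cmd->init_task_tag == init_task_tag)
			return cmd;
	}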
Reported-by: Arshad Hussain <arshad.hussain@calsoftinc.com>
Tested-by: Arshad Hussain <arshad.hussain@calsoftinc.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
The iser target is the RDMA requester and the iser initiator is the
RDMA responder. In order to determine the max inflight RDMA READ
requests to set on the QP (initiator_depth), take the min between the
initiator-published initiator_depth and the max inflight RDMA READ
requests the local HCA supports (max_qp_init_rd_atom).
The target will never handle incoming RDMA READ requests, so there is
no need to set responder_resources.
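A minimal sketch of the intended clamping when accepting the connection
(variable names are illustrative assumptions, not the exact isert code):

	struct rdma_conn_param cp;

	memset(&cp, 0, sizeof(cp));
	/* never advertise more than the local HCA can have in flight */
	cp.initiator_depth = min_t(u8, remote_initiator_depth,
				   device->dev_attr.max_qp_init_rd_atom);
	/* the target never handles incoming RDMA READ requests */
	cp.responder_resources = 0;
	rdma_accept(cma_id, &cp);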
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
rdma_disconnect may be called in 2 code flows:
- isert_wait_conn: disconnect initiated by the target
- disconnected_handler: disconnect invoked by the initiator
If isert_conn->disconnect is true, then rdma_disconnect was already
called from the disconnected handler, so there is no need to call it
again from isert_wait_conn.
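Roughly, the isert_wait_conn() side becomes (a sketch based on the
description above):

	if (!isert_conn->disconnect) {
		/* the initiator has not started the teardown, so we do */
		rdma_disconnect(isert_conn->conn_cm_id);
	}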
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
disconnected_handler is invoked on several CM events (such
as DISCONNECTED, DEVICE_REMOVAL, TIMEWAIT_EXIT, ...). Since
multiple events can occur before isert_free_conn is invoked,
we might put all isert_conn references and free the connection
too early.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Cc: <stable@vger.kernel.org> # v3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
In case the connection didn't reach the connected state, the
disconnected handler will never be invoked, and thus the second
kref_put on isert_conn will be missing.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Cc: <stable@vger.kernel.org> # v3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Revert parts of f244d8b623 ("ACPIPHP / radeon / nouveau: Fix VGA
switcheroo problem related to hotplug").
A previous commit 5493b31f0b55 ("PCI: Add pci_ignore_hotplug() to ignore
hotplug events for a device") added equivalent functionality implemented in
a different way for both acpiphp and pciehp.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Rajat Jain <rajatxjain@gmail.com>
Cut & paste typo from the line above.
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 9226b5b440 ("vfs: avoid non-forwarding large load after small
store in path lookup") made link_path_walk() always access the
"hash_len" field as a single 64-bit entity, in order to avoid mixed size
accesses to the members.
However, what I didn't notice was that that effectively means that the
whole "struct qstr this" is now basically redundant. We already
explicitly track the "const char *name", and if we just use "u64
hash_len" instead of "long len", there is nothing else left of the
"struct qstr".
We do end up wanting the "struct qstr" if we have a filesystem with a
"d_hash()" function, but that's a rare case, and we might as well then
just squirrel away the name and hash_len at that point.
End result: fewer live variables in the loop, a smaller stack frame, and
better code generation. And we don't need to pass in pointer variables
to helper functions any more, because the return value contains all the
relevant information. So this removes more lines than it adds, and the
source code is clearer too.
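For reference, the packed form keeps both pieces recoverable with plain
shifts; the helpers below mirror the hashlen_* macros used in this area
(treat the exact definitions as an assumption):

	#define hashlen_hash(hashlen)      ((u32)(hashlen))
	#define hashlen_len(hashlen)       ((u32)((hashlen) >> 32))
	#define hashlen_create(hash, len)  (((u64)(len) << 32) | (u32)(hash))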
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto fixes from Herbert Xu:
"This fixes the newly added drbg generator so that it actually works on
32-bit machines. Previously the code was only tested on 64-bit; on
32-bit it overflowed and simply didn't work"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: drbg - remove check for uninitialized DRBG handle
crypto: drbg - backport "fix maximum value checks on 32 bit systems"
We were not checking for symlink support properly for SMB2/SMB3
mounts, so we could oops when trying to create a symlink on an
SMB2/SMB3 mount mounted with mfsymlinks.
Signed-off-by: Steve French <smfrench@gmail.com>
Cc: <stable@vger.kernel.org> # 3.14+
CC: Sachin Prabhu <sprabhu@redhat.com>
One small change I forgot to make in
commit c4d69da167
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Mon Sep 8 14:25:41 2014 +0100
drm/i915: Evict CS TLBs between batches
was to update the copy width for the compact BLT copy instruction.
Reported-by: Thomas Richter <thor@math.tu-berlin.de>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Richter <thor@math.tu-berlin.de>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: stable@vger.kernel.org
Tested-by: Thomas Richter <thor@math.tu-berlin.de>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Pull vfs fixes from Al Viro:
"double iput() on failure exit in lustre, racy removal of spliced
dentries from ->s_anon in __d_materialise_dentry() plus a bunch of
assorted RCU pathwalk fixes"
The RCU pathwalk fixes end up fixing a couple of cases where we
incorrectly dropped out of RCU walking, due to incorrect initialization
and testing of the sequence locks in some corner cases. Since dropping
out of RCU walk mode forces the slow locked accesses, those corner cases
slowed down quite dramatically.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
be careful with nd->inode in path_init() and follow_dotdot_rcu()
don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu()
fix bogus read_seqretry() checks introduced in b37199e
move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon)
[fix] lustre: d_make_root() does iput() on dentry allocation failure
The performance regression that Josef Bacik reported in the pathname
lookup (see commit 99d263d4c5 "vfs: fix bad hashing of dentries") made
me look at performance stability of the dcache code, just to verify that
the problem was actually fixed. That turned up a few other problems in
this area.
There are a few cases where we exit RCU lookup mode and go to the slow
serializing case when we shouldn't, Al has fixed those and they'll come
in with the next VFS pull.
But my performance verification also shows that link_path_walk() turns
out to have a very unfortunate 32-bit store of the length and hash of
the name we look up, followed by a 64-bit read of the combined hash_len
field. That screws up the processor's store-to-load forwarding, causing
an unnecessary hiccup in this critical routine.
It's caused by the ugly calling convention for the "hash_name()"
function, and easily fixed by just making hash_name() fill in the whole
'struct qstr' rather than passing it a pointer to just the hash value.
With that, the profile for this function looks much smoother.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
xfstest generic/258 sets the time on a file to a negative value
(before 1970), which fails since do_div cannot handle negative
numbers. In addition, 'normal' division of 64-bit values does not
build on 32-bit arches, so work around this by special-casing
negative values in cifs_NTtimeToUnix.
The Samba server also has a bug with this (see Samba bugzilla 7771),
but it works against a Windows server.
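A standalone sketch of the special casing (userspace model; the
1601->1970 offset constant is standard, everything else here is
illustrative):

#include <stdint.h>

#define NT_EPOCH_DELTA	116444736000000000LL	/* 100ns ticks, 1601 -> 1970 */

static void nt_to_unix(uint64_t nt, int64_t *sec, long *nsec)
{
	int64_t t = (int64_t)nt - NT_EPOCH_DELTA;	/* still 100ns ticks */

	if (t < 0) {
		/* divide the absolute value, since a do_div()-style
		 * helper only handles unsigned dividends */
		uint64_t a = -t;
		*sec = -(int64_t)(a / 10000000);
		*nsec = -(long)((a % 10000000) * 100);
	} else {
		*sec = t / 10000000;
		*nsec = (t % 10000000) * 100;
	}
}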
Signed-off-by: Steve French <smfrench@gmail.com>
Pull parisc updates from Helge Deller:
"The most important patch is a new Light Weigth Syscall (LWS) for 8,
16, 32 and 64 bit atomic CAS operations which is required in order to
be able to implement the atomic gcc builtins on our platform.
Other than that, we wire up the seccomp, getrandom and memfd_create
syscalls, fix a minor off-by-one bug and a wrong printk string"
* 'parisc-3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
parisc: Implement new LWS CAS supporting 64 bit operations.
parisc: Wire up seccomp, getrandom and memfd_create syscalls
parisc: dino: fix %d confusingly prefixed with 0x in format string
parisc: sys_hpux: NUL terminator is one past the end
in the former we simply check if dentry is still valid after picking
its ->d_inode; in the latter we fetch ->d_inode in the same places
where we fetch dentry and its ->d_seq, under the same checks.
Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
return the value instead, and have path_init() do the assignment. Broken by
"vfs: Fix absolute RCU path walk failures due to uninitialized seq number",
which was Cc-stable with 2.6.38+ as destination. This one should go where
it went.
To avoid returning a dummy value when root is already set (it would do
no harm, actually, since the only caller that doesn't ignore the return
value is guaranteed to have nd->root *not* set, but it's more obvious
that way), lift the check into the callers. And do the same to
set_root(), to keep them in sync.
Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Merge tag 'ntb-3.17' of git://github.com/jonmason/ntb
Pull ntb driver bugfixes from Jon Mason:
"NTB driver fixes for queue spread and buffer alignment. Also, update
to MAINTAINERS to reflect new e-mail address"
* tag 'ntb-3.17' of git://github.com/jonmason/ntb:
ntb: Add alignment check to meet hardware requirement
MAINTAINERS: update NTB info
NTB: correct the spread of queues over mw's
Pull ARM irq chip fixes from Thomas Gleixner:
"Another pile of ARM specific irq chip fixlets:
- off by one bugs in the crossbar driver
- missing annotations
- a bunch of "make it compile" updates
I pulled the lot today from Jason, but it has been in -next for at
least a week"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip: gic-v3: Declare rdist as __percpu pointer to __iomem pointer
irqchip: gic: Make gic_default_routable_irq_domain_ops static
irqchip: exynos-combiner: Fix compilation error on ARM64
irqchip: crossbar: Off by one bugs in init
irqchip: gic-v3: Tag all low level accessors __maybe_unused
irqchip: gic-v3: Only define gic_peek_irq() when building SMP
This patch fixes the gain values. The first driver was designed using
engineering samples; in mass production the values changed.
Signed-off-by: Denis Ciocca <denis.ciocca@st.com>
Cc: Stable@vger.kernel.org
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
If touchscreen mode is enabled and a conversion is requested on another
channel, the result in the last converted data register can be a
touchscreen-relative value. Starting a conversion involves doing a
conversion for all active channels. It starts with the ADC channels and
ends with the touchscreen channels. Then, if the ADC_LCDR register is
not read quickly enough, its content may be a touchscreen conversion.
To remove this temporal constraint, the conversion value is taken from
the channel data register.
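In code this roughly means (a sketch; the register helper and macro
names are assumptions for illustration):

	/* was: reading AT91_ADC_LCDR, which may hold a touchscreen sample */
	*val = at91_adc_readl(st, AT91_ADC_CHR(chan->channel));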
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Cc: Stable@vger.kernel.org
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
The value programmed into the NTB translate register must be BAR-size
aligned. This alignment check makes sure that the allocated DMA memory
has the proper alignment. Another requirement for NTB to function
properly with a memory window BAR size greater than or equal to 4M is
to use the CMA feature in the 3.16 kernel with the appropriate
CONFIG_CMA_ALIGNMENT and CONFIG_CMA_SIZE_MBYTES set.
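A sketch of the check being added (the variable names are illustrative
assumptions):

	/* the translate register only accepts a BAR-size-aligned address */
	if (dma_addr & (bar_size - 1)) {
		dev_err(&ndev->pdev->dev,
			"DMA memory %pad is not aligned to the BAR size\n",
			&dma_addr);
		return -ENOMEM;
	}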
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Update my contact info to my personal email address and add Dave Jiang.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
The detection of an uneven number of queues over the given memory
windows was not correct. mw_num is zero-based, and the mod should be a
division to spread them evenly over the mw's.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Pull futex and timer fixes from Thomas Gleixner:
"A oneliner bugfix for the jinxed futex code:
- Drop hash bucket lock in the error exit path. I really could slap
myself for intruducing that bug while fixing all the other horror
in that code three month ago ...
and the timer department is not too proud about the following fixes:
- Deal with a long standing rounding bug in the timeval to jiffies
conversion. It's a real issue and this fix fell through the cracks
for quite some time.
- Another round of alarmtimer fixes. Finally this code gets used
more widely and the subtle issues hidden for quite some time are
noticed and fixed. Nothing really exciting, just the itty bitty
details which bite the serious users here and there"
* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
futex: Unlock hb->lock in futex_wait_requeue_pi() error path
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
alarmtimer: Lock k_itimer during timer callback
alarmtimer: Do not signal SIGEV_NONE timers
alarmtimer: Return relative times in timer_gettime
jiffies: Fix timeval conversion to jiffies
The current LWS CAS only works correctly for 32-bit operations. The new
LWS allows for CAS operations of variable size.
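For context, this is the kind of operation userspace can now express
through the gcc builtins (a standalone example, not the kernel-side LWS
code):

#include <stdint.h>
#include <stdbool.h>

static bool cas64(uint64_t *ptr, uint64_t expected, uint64_t desired)
{
	/* on parisc, gcc routes this through the kernel's LWS helper */
	return __atomic_compare_exchange_n(ptr, &expected, desired, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}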
Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
Cc: <stable@vger.kernel.org> # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
Josef Bacik found a performance regression between 3.2 and 3.10 and
narrowed it down to commit bfcfaa77bd ("vfs: use 'unsigned long'
accesses for dcache name comparison and hashing"). He reports:
"The test case is essentially
for (i = 0; i < 1000000; i++)
mkdir("a$i");
On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
dir/sec with 3.10. This is because we spend waaaaay more time in
__d_lookup on 3.10 than in 3.2.
The new hashing function for strings is suboptimal for <
sizeof(unsigned long) string names (and hell even > sizeof(unsigned
long) string names that I've tested). I broke out the old hashing
function and the new one into a userspace helper to get real numbers
and this is what I'm getting:
Old hash table had 1000000 entries, 0 dupes, 0 max dupes
New hash table had 12628 entries, 987372 dupes, 900 max dupes
We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
My test does the hash, and then does the d_hash into an integer pointer
array the same size as the dentry hash table on my system, and then
just increments the value at the address we got to see how many
entries we overlap with.
As you can see the old hash function ended up with all 1 million
entries in their own bucket, whereas the new one they are only
distributed among ~12.5k buckets, which is why we're using so much
more CPU in __d_lookup".
The reason for this hash regression is two-fold:
- On 64-bit architectures the down-mixing of the original 64-bit
word-at-a-time hash into the final 32-bit hash value is very
simplistic and suboptimal, and just adds the two 32-bit parts
together.
In particular, because there is no bit shuffling and the mixing
boundary is also a byte boundary, similar character patterns in the
low and high word easily end up just canceling each other out.
- the old byte-at-a-time hash mixed each byte into the final hash as it
hashed the path component name, resulting in the low bits of the hash
generally being a good source of hash data. That is not true for the
word-at-a-time case, and the hash data is distributed among all the
bits.
The fix is the same in both cases: do a better job of mixing the bits up
and using as much of the hash data as possible. We already have the
"hash_32|64()" functions to do that.
Reported-by: Josef Bacik <jbacik@fb.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The hash_64() function historically does the multiply by the
GOLDEN_RATIO_PRIME_64 number with explicit shifts and adds, because
unlike the 32-bit case, gcc seems unable to turn the constant multiply
into the more appropriate shift and adds when required.
However, that means that we generate those shifts and adds even when the
architecture has a fast multiplier, and could just do it better in
hardware.
Use the now-cleaned-up CONFIG_ARCH_HAS_FAST_MULTIPLIER (together with
"is it a 64-bit architecture") to decide whether to use an integer
multiply or the explicit sequence of shift/add instructions.
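Schematically, the choice looks like this (a sketch; the shift/add chain
is one standard expansion of the same constant):

static __always_inline u64 hash_64(u64 val, unsigned int bits)
{
	u64 hash = val;

#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
	/* a single multiply wins on architectures with a fast multiplier */
	hash = hash * GOLDEN_RATIO_PRIME_64;
#else
	/* gcc can't optimise this alone like it does for 32 bits */
	u64 n = hash;
	n <<= 18; hash -= n;
	n <<= 33; hash -= n;
	n <<= 3;  hash += n;
	n <<= 3;  hash -= n;
	n <<= 4;  hash += n;
	n <<= 2;  hash += n;
#endif

	/* the high bits are the best mixed, so use them */
	return hash >> (64 - bits);
}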
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It used to be an ad-hoc hack defined by the x86 version of
<asm/bitops.h> that enabled a couple of library routines to know whether
an integer multiply is faster than repeated shifts and additions.
This just makes it use the real Kconfig system instead, and makes x86
(which was the only architecture that did this) select the option.
NOTE! Even for x86, this really is kind of wrong. If we cared, we would
probably not enable this for builds optimized for netburst (P4), where
shifts-and-adds are generally faster than multiplies. This patch does
*not* change that kind of logic, though, it is purely a syntactic change
with no code changes.
This was triggered by the fact that we have other places that really
want to know "do I want to expand multiplications by constants by hand
or not", particularly the hash generation code.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fix from Mike Snitzer:
"Fix a race in the DM cache target that caused dirty blocks to be
marked as clean. This could cause no writeback to occur or spurious
dirty block counts"
* tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm cache: fix race causing dirty blocks to be marked as clean
Referring to the RK3288 TRM, fix the wrong channel IDs for I2S TX and RX
(Table 10-1, DMAC_BUS Request Mapping Table):
  Req number   Source   Polarity
  0            I2S tx   High level
  1            I2S rx   High level
Tested on RK3288 board.
Signed-off-by: Jianqun <jay.xu@rock-chips.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Pull block fixes from Jens Axboe:
"A small collection of fixes for the current rc series. This contains:
- Two small blk-mq patches from Rob Elliott, cleaning up error case
at init time.
- A fix from Ming Lei, fixing SG merging for blk-mq where
QUEUE_FLAG_SG_NO_MERGE is the default.
- A dev_t minor lifetime fix from Keith, fixing an issue where a
minor might be reused before all references to it were gone.
- Fix from Alan Stern where an unbalanced queue bypass caused SCSI
some headaches when it does a series of add/del on devices without
fully registering the queue.
- A fix from me for improving the scaling of tag depth in blk-mq if
we are short on memory"
* 'for-linus' of git://git.kernel.dk/linux-block:
blk-mq: scale depth and rq map appropriate if low on memory
Block: fix unbalanced bypass-disable in blk_register_queue
block: Fix dev_t minor allocation lifetime
blk-mq: cleanup after blk_mq_init_rq_map failures
blk-mq: pass along blk_mq_alloc_tag_set return values
blk-merge: fix blk_recount_segments
Referring to the Rockchip I2S controller TRM, modify some registers'
properties:
I2S_FIFOLR: read / write, but not volatile, not precious
I2S_INTSR: read / write
I2S_CLR: volatile, register value will be cleared by read
Test on RK3288 with max98090.
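In regmap terms those properties map onto the driver's callbacks,
roughly as below (function names are assumptions for illustration):

static bool rockchip_i2s_volatile_reg(struct device *dev, unsigned int reg)
{
	switch (reg) {
	case I2S_CLR:	/* its value is cleared by reading, never cache it */
		return true;
	default:
		return false;
	}
}

static bool rockchip_i2s_precious_reg(struct device *dev, unsigned int reg)
{
	/* nothing is precious: I2S_FIFOLR can now be read freely */
	return false;
}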
Signed-off-by: Jianqun Xu <jay.xu@rock-chips.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Fix the wrong format value set for I2S master or slave mode.
Test on RK3288 board with max98090.
Signed-off-by: Jianqun Xu <jay.xu@rock-chips.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently CS GPIOs are requested from the struct spi_master .setup()
callback, and that causes failures when a client SPI device is accessed
through the spidev driver. The failure happens because the .setup()
callback may be called many times from the ioctl handler, and when it
is called a second time gpio_request() will fail and return -EBUSY.
Hence, fix it by moving the CS GPIO request code to .probe().
Reported-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Merge tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull Xen ARM bugfix from Stefano Stabellini:
"The patches fix the "xen_add_mach_to_phys_entry: cannot add" bug that
has been affecting xen on arm and arm64 guests since 3.16. They
require a few hypervisor side changes that just went in xen-unstable.
A couple of days ago David sent out a pull request with a few other
Xen fixes (it is already in master). Sorry we didn't synchronize
better among us"
* tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/arm: remove mach_to_phys rbtree
xen/arm: reimplement xen_dma_unmap_page & friends
xen/arm: introduce XENFEAT_grant_map_identity
Locks the k_itimer's it_lock member when handling the alarm timer's
expiry callback.
The regular posix timers defined in posix-timers.c have this lock held
during timeout processing because their callbacks are routed through
posix_timer_fn(). The alarm timers follow a different path, so they
ought to grab the lock somewhere else.
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Sharvil Nanavati <sharvil@google.com>
Signed-off-by: Richard Larocque <rlarocque@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Avoids sending a signal to alarm timers created with sigev_notify set to
SIGEV_NONE by checking for that special case in the timeout callback.
The regular posix timers avoid sending signals to SIGEV_NONE timers by
not scheduling any callbacks for them in the first place. Although it
would be possible to do something similar for alarm timers, it's simpler
to handle this as a special case in the timeout.
Prior to this patch, the alarm timer would ignore the sigev_notify value
and try to deliver signals to the process anyway. Even worse, the
sanity check for the value of sigev_signo is skipped when SIGEV_NONE was
specified, so the signal number could be bogus. If sigev_signo was an
uninitialized value (as it often would be if SIGEV_NONE is used), then
it's hard to predict which signal will be sent.
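The shape of the special case in the expiry callback, as a sketch
(field names follow struct k_itimer; the surrounding variable name is
an assumption):

	if (ptr->it_sigev_notify != SIGEV_NONE) {
		/* only deliver a signal for real notification types */
		if (posix_timer_event(ptr, 0) != 0)
			ptr->it_overrun++;
	}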
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Sharvil Nanavati <sharvil@google.com>
Signed-off-by: Richard Larocque <rlarocque@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Returns the time remaining for an alarm timer, rather than the time at
which it is scheduled to expire. If the timer has already expired or it
is not currently scheduled, the it_value's members are set to zero.
This new behavior matches that of the other posix-timers and the POSIX
specifications.
This is a change in user-visible behavior, and may break existing
applications. Hopefully, few users rely on the old incorrect behavior.
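Conceptually the gettime path now reports the remainder, something like
the sketch below (the two helpers are hypothetical stand-ins for reading
the clock and the armed expiry):

	ktime_t now = alarm_clock_now(timr);			/* hypothetical */
	ktime_t rem = ktime_sub(alarm_expires(timr), now);	/* hypothetical */

	if (ktime_to_ns(rem) <= 0)
		rem = ktime_set(0, 0);		/* expired or not scheduled */
	cur_setting->it_value = ktime_to_timespec(rem);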
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Sharvil Nanavati <sharvil@google.com>
Signed-off-by: Richard Larocque <rlarocque@google.com>
[jstultz: minor style tweak]
Signed-off-by: John Stultz <john.stultz@linaro.org>
timeval_to_jiffies tried to round a timeval up to an integral number
of jiffies, but the logic for doing so was incorrect: intervals
corresponding to exactly N jiffies would become N+1. This manifested
itself particularly when repeatedly stopping/starting an itimer:
setitimer(ITIMER_PROF, &val, NULL);
setitimer(ITIMER_PROF, NULL, &val);
would add a full tick to val, _even if it was exactly representable in
terms of jiffies_ (say, the result of a previous rounding.) Doing
this repeatedly would cause unbounded growth in val. So fix the math.
Here's what was wrong with the conversion: we essentially computed
(eliding seconds)
jiffies = usec * (NSEC_PER_USEC/TICK_NSEC)
by using scaling arithmetic, which took the best approximation of
NSEC_PER_USEC/TICK_NSEC with a denominator of 2^USEC_JIFFIE_SC, i.e.
x/(2^USEC_JIFFIE_SC), and computed:
jiffies = (usec * x) >> USEC_JIFFIE_SC
and rounded this calculation up in the intermediate form (since we
can't necessarily exactly represent TICK_NSEC in usec.) But the
scaling arithmetic is a (very slight) *over*approximation of the true
value; that is, instead of dividing by (1 usec/ 1 jiffie), we
effectively divided by (1 usec/1 jiffie)-epsilon (rounding
down). This would normally be fine, but we want to round timeouts up,
and we did so by adding 2^USEC_JIFFIE_SC - 1 before the shift; this
would be fine if our division was exact, but dividing this by the
slightly smaller factor was equivalent to adding just _over_ 1 to the
final result (instead of just _under_ 1, as desired.)
In particular, with HZ=1000, we consistently computed that 10000 usec
was 11 jiffies; the same was true for any exact multiple of
TICK_NSEC.
We could possibly still round in the intermediate form, adding
something less than 2^USEC_JIFFIE_SC - 1, but easier still is to
convert usec->nsec, round in nanoseconds, and then convert using
time*spec*_to_jiffies. This adds one constant multiplication, and is
not observably slower in microbenchmarks on recent x86 hardware.
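A sketch of that approach (not the literal patch):

unsigned long timeval_to_jiffies(const struct timeval *value)
{
	/* widen to nanoseconds first, then reuse the timespec rounding */
	struct timespec ts = {
		.tv_sec  = value->tv_sec,
		.tv_nsec = value->tv_usec * NSEC_PER_USEC,
	};

	return timespec_to_jiffies(&ts);
}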
Tested: the following program:
#include <stdio.h>
#include <stddef.h>
#include <sys/time.h>

int main() {
	struct itimerval zero = {{0, 0}, {0, 0}};
	/* Initially set to 10 ms. */
	struct itimerval initial = zero;
	initial.it_interval.tv_usec = 10000;
	setitimer(ITIMER_PROF, &initial, NULL);
	/* Save and restore several times. */
	for (size_t i = 0; i < 10; ++i) {
		struct itimerval prev;
		setitimer(ITIMER_PROF, &zero, &prev);
		/* on old kernels, this goes up by TICK_USEC every iteration */
		printf("previous value: %ld %ld %ld %ld\n",
		       prev.it_interval.tv_sec, prev.it_interval.tv_usec,
		       prev.it_value.tv_sec, prev.it_value.tv_usec);
		setitimer(ITIMER_PROF, &prev, NULL);
	}
	return 0;
}
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul Turner <pjt@google.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Paul Turner <pjt@google.com>
Reported-by: Aaron Jacobs <jacobsa@google.com>
Signed-off-by: Andrew Hunter <ahh@google.com>
[jstultz: Tweaked to apply to 3.17-rc]
Signed-off-by: John Stultz <john.stultz@linaro.org>
futex_wait_requeue_pi() calls futex_wait_setup(). If
futex_wait_setup() succeeds it returns with hb->lock held and
preemption disabled. Now the sanity check after this does:
if (match_futex(&q.key, &key2)) {
ret = -EINVAL;
goto out_put_keys;
}
which releases the keys but does not release hb->lock.
So we happily return to user space with hb->lock held and therefore
preemption disabled.
Unlock hb->lock before taking the exit route.
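Roughly, the error path becomes (the exact unlock helper name in that
function is an assumption):

	if (match_futex(&q.key, &key2)) {
		queue_unlock(hb);	/* drop hb->lock before bailing out */
		ret = -EINVAL;
		goto out_put_keys;
	}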
Reported-by: Dave "Trinity" Jones <davej@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Darren Hart <dvhart@linux.intel.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1409112318500.4178@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Callers of d_splice_alias(dentry, inode) don't need iput(), neither
on success nor on failure. Either the reference to inode is stored
in a previously negative dentry, or it's dropped. In either case
the inode reference the caller used to hold is consumed.
__gfs2_lookup() does iput() in case when d_splice_alias() has failed.
Double iput() if we ever hit that. And gfs2_create_inode() ends up
not only with double iput(), but with link count dropped to zero - on
an inode it has just found in directory.
Cc: stable@vger.kernel.org # v3.14+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>