The DMA sync should sync the whole receive buffer, not just
part of it. Fixes the dma_sync_check log messages.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The xc_cil_lock is used for two purposes - to protect the CIL
itself, and to protect the push/commit state and lists. These are
two logically separate structures and operations, so they can have
their own locks. This means that pushing on the CIL and the commit wait
ordering won't contend for a lock with other transactions that are
completing concurrently. As the CIL insertion is the hottest path
through the CIL, this is a big win.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
Now that all the log item preparation and formatting is done under
the CIL lock, we can get rid of the intermediate log vector chain
used to track items to be inserted into the CIL.
We can already find all the items to be committed from the
transaction handle, so as long as we attach the log vectors to the
item before we insert the items into the CIL, we don't need to
create a log vector chain to pass around.
This means we can consolidate all the item insertion code and
optimise it into a pair of simple passes across all the items in the
transaction. The first pass does the formatting and accounting, the
second inserts them all into the CIL.
We keep this two pass split so that we can separate the CIL
insertion - which must be done under the CIL spinlock - from the
formatting. We could insert each item into the CIL with a single
pass, but that massively increases the number of times we have to
grab the CIL spinlock. It is much more efficient (and hence
scalable) to do a batch operation and insert all objects in a single
lock grab.
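For illustration, a minimal sketch of the two-pass shape (xc_cil_lock,
t_items and li_cil are the structures named above; xlog_cil_format_item()
is a hypothetical helper, not the exact code):

    /* Pass 1: format each item and account for space, no lock held. */
    list_for_each_entry(lidp, &tp->t_items, lid_trans)
            len += xlog_cil_format_item(lidp->lid_item); /* hypothetical */

    /* Pass 2: grab the CIL spinlock once, insert the whole batch. */
    spin_lock(&cil->xc_cil_lock);
    list_for_each_entry(lidp, &tp->t_items, lid_trans)
            list_move_tail(&lidp->lid_item->li_cil, &cil->xc_cil);
    spin_unlock(&cil->xc_cil_lock);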
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
Now that we have the size of the log vector that has been allocated,
we can determine if we need to allocate a new log vector for
formatting and insertion. We only need to allocate a new vector if
it won't fit into the existing buffer.
However, we need to hold the CIL context lock while we do this so
that we can't race with a push draining the currently queued log
vectors. It is safe to do this as long as we use GFP_NOFS allocation
to avoid memory allocation recursing into the filesystem.
Hence we can safely overwrite the existing log vector on the CIL if
it is large enough to hold all the dirty regions of the current
item.
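A hedged sketch of the reuse check (li_lv and lv_buf_len follow the XFS
log vector structures; treat the exact names as illustrative):

    /* Reuse the item's existing log vector if the new format fits. */
    old_lv = lip->li_lv;
    if (old_lv && old_lv->lv_buf_len >= buf_size) {
            lv = old_lv;                         /* overwrite in place */
    } else {
            lv = kmem_alloc(buf_size, KM_NOFS);  /* NOFS: no FS recursion */
    }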
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
Ensure that the definition of ax88172a_info matches the declaration seen
by users and silence sparse warnings about symbols without declarations
in the global namespace by moving the declaration into the shared header
asix.h.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make functions that are only referenced from ops structures static; they
do not need to be in the global namespace, and sparse complains about this.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that we have the size of the object before the formatting pass
is called, we can allocate the log vector and its buffer in a
single allocation rather than two separate allocations.
Store the size of the allocated buffer in the log vector so that
we potentially avoid allocation for future modifications of the
object.
While touching this code, remove the IOP_FORMAT definition.
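A minimal sketch of the combined allocation (layout per the description
above; illustrative, not the exact code):

    /* One buffer: the xfs_log_vec header, the iovec array, the data. */
    buf_size = sizeof(struct xfs_log_vec) +
               niovecs * sizeof(struct xfs_log_iovec) + nbytes;
    lv = kmem_alloc(buf_size, KM_NOFS);
    lv->lv_iovecp = (struct xfs_log_iovec *)(lv + 1);
    lv->lv_buf = (char *)lv->lv_iovecp +
                 niovecs * sizeof(struct xfs_log_iovec);
    lv->lv_buf_len = nbytes;     /* remembered so we can reuse it later */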
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
To begin optimising the CIL commit process, we need to have IOP_SIZE
return both the number of vectors and the size of the data pointed
to by the vectors. This enables us to calculate the size of the
memory allocation needed before the formatting step and reduces the
number of memory allocations per item by one.
While there, kill the IOP_SIZE macro.
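For illustration, the reworked callback would take the shape below (two
out-parameters per the description; treat the exact signature as a sketch):

    /* report both the vector count and the total payload size */
    void (*iop_size)(struct xfs_log_item *lip, int *nvecs, int *nbytes);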
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
In order to specify a DMA zone size of 4GB on LPAE systems, the sizes need
to be 64-bit. So make machine_desc.dma_zone_size and arm_dma_zone_size be
phys_addr_t instead of unsigned long.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
xlog_find_tail() currently leaks a bp on one error path.
There is no error target, so manually free the bp before
returning the error.
Found by Coverity.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
xlog_find_zeroed() currently leaks a bp on one error path.
Using the bp_err: target resolves this.
Found by Coverity.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
xfs_attr_node_addname()'s error handling tests whether it
should free "state" in the out: error handling label:
    out:
            if (state)
                    xfs_da_state_free(state);
but an earlier free doesn't set state to NULL afterwards; this
could lead to a double free. Fix it by setting state to NULL
after it's freed.
This was found by Coverity.
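The fix is the usual pattern after an early free (sketch):

    xfs_da_state_free(state);
    state = NULL;   /* so the "if (state)" at out: won't free it again */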
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
This semantic patch replaces "return {0,1};" with "return
{false,true};" in functions returning bool.
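For example, the transformation rewrites code of this shape (illustrative):

    static bool is_ready(unsigned int flags)
    {
            if (flags & 0x1)
                    return 1;       /* becomes: return true;  */
            return 0;               /* becomes: return false; */
    }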
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Acked-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Michal Marek <mmarek@suse.cz>
This change adds infrastructure (CONFIG_TILE_HVGLUE_TRACE) that
provides C code wrappers for the calls the kernel makes to the Tilera
hypervisor. This allows standard kernel infrastructure like FTRACE to
be able to instrument hypervisor calls.
To allow direct calls to the true APIs, we also export their names
with a leading underscore. This is important for the few contexts
where we need to make hypervisor calls without touching the stack.
As part of this change, we also switch from creating the symbols
with linker magic to creating them with assembler magic. This lets
us provide a symbol type and generally make them appear more as symbols
and less as just random values in the Elf namespace.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
If ioremap_prot() fails in ioremap_page_range() due to kernel memory
exhaustion, we would previously leak a struct vm_struct.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
This change creates the framework for vDSO calls, makes the existing
rt_sigreturn() mechanism use it, and adds a fast gettimeofday().
Now that we need to expose the vDSO address to userspace, we add
AT_SYSINFO_EHDR to the set of aux entries provided to userspace.
(You can disable any extra vDSO support by booting with vdso=0,
but the rt_sigreturn vDSO page will still be provided.)
Note that glibc has supported the tile vDSO since release 2.17.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
The tile code notifies the simulator of new ET_EXEC objects starting
to execute so that tracing code can properly annotate the objects.
However, we didn't support ET_DYN executables like ld.so, so we
didn't properly load symbols, etc. This change enables that support;
we use a variant of the SIM_CONTROL_DLOPEN simulator notification
that newer simulators will recognize and use to set the base address
for the next SIM_CONTROL_OS_EXEC notification.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
First, don't re-enable interrupts blindly in the Linux trap handler.
We already handle page faults this way; synchronous interrupts like
ILL_TRANS will fire even when interrupts are disabled, and we don't
want to re-enable interrupts in that case.
For ILL_TRANS, we now pass the ILL_VA_PC reason into the trap handler
so we can report it properly; this is the address that caused the
illegal translation trap. We print the address as part of the
pr_alert() message now if it's coming from the kernel.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
It's much easier to read register dumps if you read vertically
rather than horizontally, since the register numbers line up
and lead the eye down more than to the right.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
First, fix a bug in asm/unaligned.h; we need to just use the asm-generic
unaligned.h so we properly choose endian-correct flavors.
Second, keep the hv/hypervisor.h ABI fully "native" in the sense that
we don't have __BIG_ENDIAN__ ifdefs there. Instead, we use macros in
the head_NN.S assembly code to properly extract two 32-bit structure
members from a 64-bit register holding the structure.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
This change adds support for CONFIG_PREEMPT (full kernel preemption).
In addition to the core support, this change includes a number
of places where we fix up uses of smp_processor_id() and per-cpu
variables. I also eliminate the PAGE_HOME_HERE and PAGE_HOME_UNKNOWN
values for page homing, as it turns out they weren't being used.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
First, in huge_pte_offset(), we were erroneously checking
pgd_present(), which is always true, rather than pud_present(),
which is the thing that tells us if there is a top-level (L0) PTE.
Fixing this means we properly look up huge page entries only when
the Present bit is actually set in the PTE.
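A sketch of the corrected walk (generic pagetable accessors; illustrative):

    pgd = pgd_offset(mm, addr);
    pud = pud_offset(pgd, addr);
    if (!pud_present(*pud))      /* was: pgd_present(*pgd), always true */
            return NULL;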
Second, use the standard pte_alloc_map() instead of the hand-rolled
pte_alloc_hugetlb() routine that basically was written to avoid
worrying about CONFIG_HIGHPTE. However, we no longer plan to support
HIGHPTE, so a separate routine was just unnecessary code duplication.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
This change adds support for avoiding recursive backtracer crashes;
we haven't seen this in practice other than when things are seriously
corrupt, but it may help avoid losing the root cause of a crash.
Also, don't abort kernel backtracers for invalid userspace PC's.
If we do, we lose the ability to backtrace through a userspace
call to a bad address above PAGE_OFFSET, even though it can be
perfectly reasonable to continue the backtrace in such a case.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
This change enables unaligned userspace memory access via a kernel
fast path on tilegx. The kernel tracks user PC/instruction pairs
per-thread using a direct-mapped cache in userspace. The cache
maps those PC/instruction pairs to JIT'ed instruction sequences that
load or store using byte-wide load and store instructions and then
synthesize 2-, 4- or 8-byte load or store results. Once an
instruction has been seen to generate an unaligned access,
subsequent hits on that instruction typically require an overhead
of only around 50 cycles if the cache and TLB are hot.
We support the prctl() PR_GET_UNALIGN / PR_SET_UNALIGN sys call to
enable or disable unaligned fixups on a per-process basis.
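For example, a process can opt out of the fixups via the standard prctl
interface (userspace sketch):

    #include <sys/prctl.h>

    /* take SIGBUS on unaligned access instead of the kernel JIT fixup */
    prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS, 0, 0, 0);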
To do this we pull some of the tilepro unaligned support out of the
single_step.c file; tilepro uses instruction disassembly for both
single-step and unaligned access support. Since tilegx actually has
hardware singlestep support, though, it's cleaner to keep the tilegx
unaligned access code in a separate file. While we're at it,
properly rename the tilepro-specific types, etc., to have tilepro
suffixes instead of generic tile suffixes.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Stephen Rothwell reported that this driver does not compile on PowerPC
due to this missing include. One could argue why this driver is enabled
on PowerPC in the first place, but it surely isn't wrong to include
headers for the functions we use instead of relying on them to sneak in.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
The L_PTE_USER define actually has nothing to do with stage 2
mappings, and the L_PTE_S2_RDWR value sets the readable bit, which
was what L_PTE_USER was used for before proper handling of stage 2
memory defines.
Changelog:
[v3]: Drop call to kvm_set_s2pte_writable in mmu.c
[v2]: Change default mappings to be r/w instead of r/o, as per Marc
Zyngier's suggestion.
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Architectures should fully validate whether kexec is possible as part of
machine_kexec_prepare(), so that user-space's kexec_load() operation can
report any problems. Performing validation in machine_kexec() itself is
too late, since it is not allowed to return.
Prior to this patch, ARM's machine_kexec() was testing after-the-fact
whether machine_kexec_prepare() was able to disable all but one CPU.
Instead, modify machine_kexec_prepare() to validate all conditions
necessary for machine_kexec() to succeed, and BUG in machine_kexec() if
the validation succeeded yet disabling the CPUs didn't actually work.
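A hedged sketch of the resulting split (the exact checks are illustrative;
can_disable_nonboot_cpus() is a hypothetical predicate):

    int machine_kexec_prepare(struct kimage *image)
    {
            /* Fail kexec_load() early if we cannot end up on one CPU. */
            if (!can_disable_nonboot_cpus())        /* hypothetical */
                    return -EINVAL;
            return 0;
    }

    void machine_kexec(struct kimage *image)
    {
            /* machine_kexec_prepare() already validated this */
            BUG_ON(num_online_cpus() > 1);
            /* ... hand over to the new kernel ... */
    }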
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 15e7e5c1eb ("ARM: 7749/1: spinlock: retry trylock operation if
strex fails on free lock") modified our arch_spin_trylock to retry the
acquisition if the lock appeared uncontended, but the strex failed.
This patch does the same for rwlocks, which were missed by the original
patch.
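A sketch of the resulting shape for the write-lock variant (registers and
constants are illustrative, not the exact kernel asm):

    static inline int arch_write_trylock(arch_rwlock_t *rw)
    {
            unsigned long contended, res;

            do {
                    __asm__ __volatile__(
                    "       ldrex   %0, [%2]\n"     /* read lock word */
                    "       mov     %1, #0\n"
                    "       teq     %0, #0\n"       /* free? */
                    "       strexeq %1, %3, [%2]"   /* then try to take it */
                    : "=&r" (contended), "=&r" (res)
                    : "r" (&rw->lock), "r" (0x80000000)
                    : "cc");
            } while (res);  /* retry only when strex failed on a free lock */

            if (!contended) {
                    smp_mb();
                    return 1;
            }
            return 0;
    }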
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The res variable is written before we've finished with the input
operands (namely the lock address), so ensure that we mark it as `early
clobber' to avoid unintended register sharing.
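Concretely, the fix is a single constraint character (fragment for
illustration):

    : "=&r" (res)        /* '&' marks res early clobber: it is written */
    : "r" (&lock->lock)  /* before this input is last used, so the two */
                         /* must never share a register                */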
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It is possible to construct an event group with a software event as a
group leader and then subsequently add a hardware event to the group.
This results in the event group being validated by adding all members
of the group to a fake PMU and attempting to allocate each event on
their respective PMU.
Unfortunately, for software events without a corresponding arm_pmu, this
results in a kernel crash attempting to dereference the ->get_event_idx
function pointer.
This patch fixes the problem by checking explicitly for software events
and ignoring those in event validation (since they can always be
scheduled). We will probably want to revisit this for 3.12, since the
validation checks don't appear to work correctly when dealing with
multiple hardware PMUs anyway.
Cc: <stable@vger.kernel.org>
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since the musb-gadget code now calls the dma engine properly it is
possible to enable it for the TX path in device mode.
AM335x Advisory 1.0.13 says that we may lose the toggle bit on multiple
RX transfers. There is a workaround in host mode but none in device mode
and therefore RX transfers are disabled.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
This patch makes use of the two functions is_cppi_enabled() and
tusb_dma_omap() instead of the ifdef for the proper DMA implementation
setup code. It basically shifts the code right by one indentation level
and adds a few line breaks where lines grow too long.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Replace roundup() with roundup_64() as we calculate min_logblks
with 64-bit divisions. Hence, calling roundup() will cause the
following error while compiling a 32-bit kernel:
fs/built-in.o: In function `xfs_log_calc_minimum_size':
fs/xfs/xfs_log_rlimit.c:140: undefined reference to `__udivdi3'
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
In several places, this snippet is used when removing neigh entries:
    list_del(&neigh->list);
    ipoib_neigh_free(neigh);
The list_del() removes neigh from the associated struct ipoib_path, while
ipoib_neigh_free() removes neigh from the device's neigh entry lookup
table. Both of these operations are protected by the priv->lock
spinlock. The table however is also protected via RCU, and so naturally
the lock is not held when doing reads.
This leads to a race condition, in which a thread may successfully look
up a neigh entry that has already been deleted from neigh->list. Since
the previous deletion will have marked the entry with poison, a second
list_del() on the object will cause a panic:
#5 [ffff8802338c3c70] general_protection at ffffffff815108c5
[exception RIP: list_del+16]
RIP: ffffffff81289020 RSP: ffff8802338c3d20 RFLAGS: 00010082
RAX: dead000000200200 RBX: ffff880433e60c88 RCX: 0000000000009e6c
RDX: 0000000000000246 RSI: ffff8806012ca298 RDI: ffff880433e60c88
RBP: ffff8802338c3d30 R8: ffff8806012ca2e8 R9: 00000000ffffffff
R10: 0000000000000001 R11: 0000000000000000 R12: ffff8804346b2020
R13: ffff88032a3e7540 R14: ffff8804346b26e0 R15: 0000000000000246
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000
#6 [ffff8802338c3d38] ipoib_cm_tx_handler at ffffffffa066fe0a [ib_ipoib]
#7 [ffff8802338c3d98] cm_process_work at ffffffffa05149a7 [ib_cm]
#8 [ffff8802338c3de8] cm_work_handler at ffffffffa05161aa [ib_cm]
#9 [ffff8802338c3e38] worker_thread at ffffffff81090e10
#10 [ffff8802338c3ee8] kthread at ffffffff81096c66
#11 [ffff8802338c3f48] kernel_thread at ffffffff8100c0ca
We move the list_del() into ipoib_neigh_free(), so that deletion happens
only once, after the entry has been successfully removed from the lookup
table. This same behavior is already used in ipoib_del_neighs_by_gid()
and __ipoib_reap_neigh().
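A sketch of the consolidation (illustrative; the lookup-table removal is
unchanged from what the function already does):

    void ipoib_neigh_free(struct ipoib_neigh *neigh)
    {
            /* delete from the path's list here, exactly once */
            list_del_init(&neigh->list);
            /* ... then remove from the RCU-protected lookup table
             * and free, as before.
             */
    }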
Signed-off-by: Jim Foraker <foraker1@llnl.gov>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Jack Wang <jinpu.wang@profitbricks.com>
Reviewed-by: Shlomo Pongratz <shlomop@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Lustre uses an advertised max MR size of ~0ULL to indicate it should
use a dma_mr. Hence advertise the max MR size as ~0ULL.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
When polling, we do a GTS update if the accumulated cidx_inc == the CQ
depth / 16. However, if the CQ is large enough, CQ depth / 16 exceeds
the size of the field in the GTS word. So we also need to update if
cidx_inc hits CIDXINC_MASK to avoid overflowing the field.
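A hedged sketch of the adjusted condition (t4_write_gts() is a
hypothetical helper; field names follow the description):

    cq->cidx_inc++;
    /* flush before the accumulator overflows the GTS CIDXINC field */
    if (cq->cidx_inc == cq->size / 16 || cq->cidx_inc == CIDXINC_MASK) {
            t4_write_gts(cq, cq->cidx_inc);         /* hypothetical */
            cq->cidx_inc = 0;
    }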
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
accept_cr() failed to set the arp error handler on a reused skb. This
results in a kernel crash if the arp does indeed time out.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
When determining how many WRs are completed with a signaled CQE,
correctly deal with queue wraps.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
This patch makes the following fixes in the QP flush logic:
- correctly flushes unsignaled WRs followed by a signaled WR
- supports flushing a CQ bound to multiple QPs
- resets cidx_flush if an active queue starts getting HW CQEs again
- marks WQ in error when we leave RTS. This was only being done for
user queues, but we need it for kernel queues too so that
post_send/post_recv will start returning the appropriate error
synchronously
- eats unsignaled read resp CQEs. HW always inserts CQEs so we must
silently discard them if the read work request was unsignaled.
- handles QP flushes with pending SW CQEs. The flush and out-of-order
completion logic has a bug: if out-of-order completions are flushed
but not yet polled by the consumer, and the QP is then flushed, we
end up inserting duplicate completions.
- c4iw_flush_sq() should only flush WRs that have not already been
flushed (see the sketch after this list). Since we already track where
in the SQ we've flushed via sq.cidx_flush, just start at that point
and flush any remaining.
This bug only caused a problem in the presence of unsignaled work
requests.
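A sketch of that last point (insert_sq_flush_cqe() is a hypothetical
helper; sq.cidx_flush is the tracking field named above):

    static void c4iw_flush_sq_sketch(struct t4_wq *wq)
    {
            /* resume at the first unflushed entry, not at sq.cidx */
            u16 idx = wq->sq.cidx_flush;

            while (idx != wq->sq.pidx) {
                    insert_sq_flush_cqe(wq, idx);   /* hypothetical */
                    if (++idx == wq->sq.size)
                            idx = 0;
            }
            wq->sq.cidx_flush = idx;
    }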
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
[ Fixed sparse warning due to htonl/ntohl confusion. - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
Move the QP to TERMINATE instead so that the peer can get the TERM
message. This bug wasn't detectable until newer FW that moves
connections out of RDMA mode as soon as an error is detected.
QP can exit RTS before the last AE arrives. This was introduced by
changes in the FW to kick connections out of RDMA mode as soon as an
error is detected. A side effect of this is that the driver can move
the QP out of RTS before the AE causing the connection to get kicked
out of RDMA mode is processed. The fix is to always post async
errors even if the QP is out of RTS.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Add new cpl messages, cpl_act_open_req6 and cpl_t5_act_open_req6, for
initiating active open connections.
Use the LLD API calls cxgb4_create_server() and cxgb4_create_server6() for
initiating passive open connections. Similarly, use cxgb4_remove_server()
to remove the passive open connections in place of listen_stop.
Add support for iWARP over VLAN device and enable IPv6 support on VLAN device.
Make use of import_ep in c4iw_reconnect.
Signed-off-by: Vipul Pandya <vipul@chelsio.com>
[ Fix build when IPv6 is disabled and make sure iw_cxgb4 is not built-in
when ipv6 is a module. - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
Add device tree entries for the three EHCI controllers on Tegra114
and enable the third controller (USB host) on Dalmore.
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Add device tree entries for the 3 USB controllers and PHYs and
enable the third controller on Cardhu and Beaver boards.
Fix VBUS regulator entries on Beaver. The GPIO pins were wrong.
Also, internal pullups need to be enabled on those pins.
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>