Commit Graph

724454 Commits

Marc Zyngier
d3d83f7fef KVM: arm/arm64: GICv4: Prevent userspace from changing doorbell affinity
We so far allocate the doorbell interrupts without taking any
special measure regarding the affinity of these interrupts. We
simply move them around as required when the vcpu gets scheduled
on a different CPU.

But that's counting without userspace (and the evil irqbalance), which
can try to move the VPE interrupt around, causing the ITS code
to emit VMOVP commands and remap the doorbell to another redistributor.
Worse, this can happen while the vcpu is running, causing all kinds
of trouble if the VPE is already resident, and we end up in
UNPREDICTABLE territory.

So let's take definitive action and prevent userspace from messing
with us. This is just a matter of adding IRQ_NO_BALANCING to the
set of flags we already have, leaving the kernel in sole control
of the affinity.
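
A minimal sketch of the idea (the exact flag set and call site used by
the patch are assumptions):

    /* Keep the doorbell affinity in sole kernel control: userspace
     * (and irqbalance) can no longer retarget it behind our back. */
    irq_set_status_flags(irq, IRQ_NOAUTOEN | IRQ_NO_BALANCING);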

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:45:02 +01:00
Marc Zyngier
bd94e7aea4 KVM: arm/arm64: GICv4: Prevent a VM using GICv4 from being saved
The GICv4 architecture doesn't make it easy for save/restore to
work, as it doesn't give any guarantee that the pending state
is written into the pending table.

So let's not take any chance, and let's return an error if
we encounter any LPI that has the HW bit set. In order for
userspace to distinguish this error from other failure modes,
use -EACCES as an error code.
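
A minimal sketch of the check, assuming the usual vgic_irq bookkeeping
where irq->hw marks an LPI forwarded to the hardware:

    /* Refuse to dump the pending table while LPIs are HW-backed:
     * GICv4 gives no guarantee their state has been written back. */
    if (irq->hw)
        return -EACCES;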

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:44:36 +01:00
Marc Zyngier
374be35e23 KVM: arm/arm64: GICv4: Enable virtual cpuif if VLPIs can be delivered
In order for VLPIs to be delivered to the guest, we must make sure that
the virtual cpuif is always enabled, irrespective of the presence of
virtual interrupts in the LRs.
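
A rough sketch, assuming a predicate along the lines of
vgic_supports_direct_msis() and the usual ICH_HCR shadow field:

    /* Keep the virtual cpuif enabled whenever VLPIs may be delivered
     * directly, even with no virtual interrupt queued in the LRs. */
    if (vgic_supports_direct_msis(vcpu->kvm))
        vcpu->arch.vgic_cpu.vgic_v3.vgic_hcr |= ICH_HCR_EN;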

Acked-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:43:56 +01:00
Marc Zyngier
6277579778 KVM: arm/arm64: GICv4: Hook vPE scheduling into vgic flush/sync
The redistributor needs to be told which vPE is about to be run,
and tells us whether there is any pending VLPI on exit.

Let's add the scheduling calls to the vgic flush/sync functions,
allowing the VLPIs to be delivered to the guest.
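
A sketch of the hook points, assuming a GICv4 helper along the lines of
its_schedule_vpe():

    /* flush (guest entry): make the vPE resident on this redistributor */
    its_schedule_vpe(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe, true);

    /* ... run the guest ... */

    /* sync (guest exit): deschedule and pick up the pending indication */
    its_schedule_vpe(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe, false);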

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:43:26 +01:00
Marc Zyngier
df9ba95993 KVM: arm/arm64: GICv4: Use the doorbell interrupt as an unblocking source
The doorbell interrupt is only useful if the vcpu is blocked on WFI.
In all other cases, receiving a doorbell interrupt is just a waste
of cycles.

So let's only enable the doorbell if a vcpu is getting blocked,
and disable it when it is unblocked. This is very similar to
what we're doing for the background timer.
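
A sketch of the intent; the blocking hooks exist in KVM, the doorbell
helper names are assumptions:

    void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
    {
        /* about to block on WFI: arm the doorbell so a VLPI can wake us */
        kvm_vgic_v4_enable_doorbell(vcpu);      /* assumed helper */
    }

    void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
    {
        /* running again: the doorbell would only waste cycles */
        kvm_vgic_v4_disable_doorbell(vcpu);     /* assumed helper */
    }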

Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:43:22 +01:00
Marc Zyngier
bdb2d2ccac KVM: arm/arm64: GICv4: Add doorbell interrupt handling
When a vPE is not running, a VLPI being made pending results in a
doorbell interrupt being delivered. Let's handle this interrupt
and update the pending_last flag that indicates that VLPIs are
pending. The corresponding vcpu is also kicked into action.

Special care is taken to prevent the doorbell from being enabled
at request time (this is controlled separately), and to make
the disabling of the interrupt non-lazy.
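
A sketch of such a handler (structure and field names are assumptions):

    static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
    {
        struct kvm_vcpu *vcpu = info;

        /* remember that VLPIs are pending and wake the vcpu up */
        vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
        kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
        kvm_vcpu_kick(vcpu);

        return IRQ_HANDLED;
    }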

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:42:59 +01:00
Marc Zyngier
c971968071 KVM: arm/arm64: GICv4: Use pending_last as a scheduling hint
When a vPE exits, the pending_last flag is set when there are pending
VLPIs stored in the pending table. Similarly, this flag will be set
when a doorbell interrupt fires, as it indicates the same condition.

Let's update kvm_vgic_vcpu_pending_irq() to account for that
flag as well, making a vcpu runnable when set.
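
A sketch of the extra check (field names are assumptions):

    /* in kvm_vgic_vcpu_pending_irq(): a doorbell / pending-table hit
     * is enough to make the vcpu runnable, no ap_list walk needed */
    if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
        return true;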

Acked-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:41:56 +01:00
Benjamin Tissoires
5b01b3b8b1 HID: Wacom: switch Dell canvas into highres mode
The Dell Canvas exports 2 collections for the Pen part. The only
difference between the 2 is that the default one has half the resolution
of the second one.

The Windows driver switches the tablet into the second mode, so we should
behave the same.

Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Reviewed-by: Jason Gerecke <jason.gerecke@wacom.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2017-11-10 09:39:44 +01:00
Amir Goldstein
d976807606 ovl: remove unneeded arg from ovl_verify_origin()
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:16 +01:00
Vivek Goyal
5455f92b54 ovl: Put upperdentry if ovl_check_origin() fails
If ovl_check_origin() fails, we should put upperdentry. We have a reference
on it by now. So goto out_put_upper instead of out.

Fixes: a9d019573e ("ovl: lookup non-dir copy-up-origin by file handle")
Cc: <stable@vger.kernel.org> #4.12
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:16 +01:00
Miklos Szeredi
ad204488d3 ovl: rename ufs to ofs
Rename all "struct ovl_fs" pointers to "ofs".  The "ufs" name is historical
and can only be found in overlayfs/super.c.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:16 +01:00
Miklos Szeredi
4155c10a03 ovl: clean up getting lower layers
Move calling ovl_get_lower_layers() into ovl_get_lowerstack().

ovl_get_lowerstack() now returns the root dentry's filled in ovl_entry.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Miklos Szeredi
bca44b52f8 ovl: clean up workdir creation
Move calling ovl_get_workdir() into ovl_get_workpath().

Rename ovl_get_workdir() to ovl_make_workdir() and ovl_get_workpath() to
ovl_get_workdir().

Workpath is now not needed outside ovl_get_workdir().

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Miklos Szeredi
5064975e7f ovl: clean up getting upper layer
Merge ovl_get_upper() and ovl_get_upperpath().

The resulting function is named ovl_get_upper(), though it still returns
upperpath as well.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Miklos Szeredi
520d7c867f ovl: move ovl_get_workdir() and ovl_get_lower_layers()
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Miklos Szeredi
6e88256e19 ovl: reduce the number of arguments for ovl_workdir_create()
Remove "sb" and "dentry" arguments of ovl_workdir_create() and related
functions.  Move setting MS_RDONLY flag to callers.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Miklos Szeredi
c6fe625493 ovl: change order of setup in ovl_fill_super()
Move ovl_get_upper() immediately after ovl_get_upperpath(),
ovl_get_workdir() immediately after ovl_get_workpath() and
ovl_get_lower_layers() immediately after ovl_get_lowerstack().

Also move prepare_creds() up to where other allocations are happening.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Miklos Szeredi
a9075cdb46 ovl: factor out ovl_free_fs() helper
This can be called both from ovl_put_super() and in the error cleanup path
from ovl_fill_super().

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-11-10 09:39:15 +01:00
Marc Zyngier
6ce18e3a5f KVM: arm/arm64: GICv4: Handle INVALL applied to a vPE
There is no need to perform an INV for each interrupt when updating
multiple interrupts.  Instead, we can rely on the final VINVALL that
gets sent to the ITS to do the work for all of them.

Acked-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:38:22 +01:00
Marc Zyngier
af340f992c KVM: arm/arm64: GICv4: Propagate property updates to VLPIs
Upon updating a property, we propagate it all the way to the physical
ITS, and ask for an INV command to be executed there.

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:29:38 +01:00
Marc Zyngier
ff9c114394 KVM: arm/arm64: GICv4: Handle MOVALL applied to a vPE
The current implementation of MOVALL doesn't allow us to call
into the core ITS code as we hold a number of spinlocks.

Let's try a method used in other parts of the code, where we copy
the intids of the candidate interrupts, and then do whatever
we need to do with them outside of the critical section.

This allows us to move the interrupts one by one, at the expense
of a bit of CPU time. Who cares? MOVALL is such a stupid command
anyway...
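
A sketch of the copy-then-process pattern (helper names are assumptions):

    u32 *intids;
    int nr_irqs, i;

    /* snapshot the candidate intids while holding the ITS locks */
    nr_irqs = vgic_copy_lpi_list(vcpu, &intids);
    if (nr_irqs < 0)
        return nr_irqs;

    /* locks dropped: now retarget the interrupts one by one */
    for (i = 0; i < nr_irqs; i++)
        update_affinity_intid(kvm, intids[i], new_vcpu);   /* hypothetical */

    kfree(intids);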

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:29:38 +01:00
Marc Zyngier
fb0cada604 KVM: arm/arm64: GICv4: Handle CLEAR applied to a VLPI
Handling CLEAR is pretty easy. Just ask the ITS driver to clear
the corresponding pending bit (which will turn into a CLEAR
command on the physical side).
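
A sketch, assuming the vgic_irq carries the host-side interrupt number
of the mapped VLPI:

    /* forwarded to the physical ITS: clear the pending state there,
     * which the ITS driver turns into a CLEAR command */
    if (irq->hw)
        WARN_ON(irq_set_irqchip_state(irq->host_irq,
                                      IRQCHIP_STATE_PENDING, false));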

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:29:37 +01:00
Marc Zyngier
0fc9a58ee4 KVM: arm/arm64: GICv4: Propagate affinity changes to the physical ITS
When the guest issues an affinity change, we need to tell the physical
ITS that we're now targeting a new vcpu.  This is done by extracting
the current mapping, updating the target, and reapplying the mapping.
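
A sketch of that sequence, assuming the GICv4 its_get_vlpi()/its_map_vlpi()
pair and the usual per-vcpu vPE field:

    struct its_vlpi_map map;
    int ret;

    ret = its_get_vlpi(irq->host_irq, &map);        /* extract mapping */
    if (ret)
        return ret;

    map.vpe = &new_vcpu->arch.vgic_cpu.vgic_v3.its_vpe;  /* retarget */

    return its_map_vlpi(irq->host_irq, &map);       /* reapply */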

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:29:37 +01:00
Marc Zyngier
07b46ed116 KVM: arm/arm64: GICv4: Unmap VLPI when freeing an LPI
When freeing an LPI (on a DISCARD command, for example), we need
to unmap the VLPI down to the physical ITS level.
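
A sketch of the teardown, assuming an its_unmap_vlpi() style helper:

    /* drop the LPI->VLPI forwarding at the physical ITS level */
    if (irq->hw)
        WARN_ON(its_unmap_vlpi(irq->host_irq));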

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:29:36 +01:00
Marc Zyngier
1b7fe468b0 KVM: arm/arm64: GICv4: Handle INT command applied to a VLPI
If the guest issues an INT command targeting a VLPI, let's
call into the irq_set_irqchip_state() helper to make it pending
on the physical side.
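
A simplified sketch of the injection path:

    /* VLPI: make it pending on the physical side via the irqchip state */
    if (irq->hw)
        return irq_set_irqchip_state(irq->host_irq,
                                     IRQCHIP_STATE_PENDING, true);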

This works just as well if userspace decides to inject an interrupt
using the normal userspace API...

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:29:36 +01:00
Marc Zyngier
196b136498 KVM: arm/arm64: GICv4: Wire mapping/unmapping of VLPIs in VFIO irq bypass
Let's use the irq bypass mechanism also used for x86 posted interrupts
to intercept the virtual PCIe endpoint configuration and establish our
LPI->VLPI mapping.
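
A sketch of the producer hookup; the forwarding helper name is an
assumption:

    int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
                                         struct irq_bypass_producer *prod)
    {
        struct kvm_kernel_irqfd *irqfd =
            container_of(cons, struct kvm_kernel_irqfd, consumer);

        /* establish host LPI -> guest VLPI forwarding for this irqfd */
        return kvm_vgic_v4_set_forwarding(irqfd->kvm, prod->irq,
                                          &irqfd->irq_entry);
    }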

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:28:52 +01:00
Stephane Grosjean
4cbdd0ee67 can: peak: Add support for new PCIe/M2 CAN FD interfaces
This adds support for the following PEAK-System CAN FD interfaces:

PCAN-cPCIe FD         CAN FD Interface for cPCI Serial (2 or 4 channels)
PCAN-PCIe/104-Express CAN FD Interface for PCIe/104-Express (1, 2 or 4 ch.)
PCAN-miniPCIe FD      CAN FD Interface for PCIe Mini (1, 2 or 4 channels)
PCAN-PCIe FD OEM      CAN FD Interface for PCIe OEM version (1, 2 or 4 ch.)
PCAN-M.2              CAN FD Interface for M.2 (1 or 2 channels)

Like the PCAN-PCIe FD interface, all of these boards run the same IP Core
that is able to handle CAN FD (see also http://www.peak-system.com).

Signed-off-by: Stephane Grosjean <s.grosjean@peak-system.com>
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2017-11-10 09:15:28 +01:00
Gerhard Bertelsmann
4dcf924c2e can: sun4i: handle overrun in RX FIFO
The SUN4I CAN IP has a 64 byte deep FIFO buffer. If the buffer is not
drained fast enough (overrun), it gets mangled: already received
frames are dropped and the data can't be restored.

Signed-off-by: Gerhard Bertelsmann <info@gerhard-bertelsmann.de>
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2017-11-10 09:15:28 +01:00
Richard Schütz
fb5f0b3ef6 can: c_can: don't indicate triple sampling support for D_CAN
The D_CAN controller doesn't provide a triple sampling mode, so don't set
the CAN_CTRLMODE_3_SAMPLES flag in ctrlmode_supported. Currently enabling
triple sampling is a no-op.

Signed-off-by: Richard Schütz <rschuetz@uni-koblenz.de>
Cc: linux-stable <stable@vger.kernel.org> # >= v3.6
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2017-11-10 09:15:28 +01:00
Marc Zyngier
74fe55dc9a KVM: arm/arm64: GICv4: Add init/teardown of the per-VM vPE irq domain
In order to control the GICv4 view of virtual CPUs, we rely
on an irqdomain allocated for that purpose. Let's add a couple
of helpers to that effect.

At the same time, the vgic data structures gain new fields to
track all this... erm... wonderful stuff.

The way we hook into the vgic init is slightly convoluted. We
need the vgic to be initialized (in order to guarantee that
the number of vcpus is now fixed), and we must have a vITS
(otherwise this is all very pointless). So we end up calling
the init from both vgic_init and vgic_its_create.
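
A sketch of the init side, assuming the GICv4 vPE allocator API
(its_vm field names are assumptions):

    /* one vPE (and one doorbell) per vcpu, now that nr_vcpus is fixed */
    dist->its_vm.nr_vpes = atomic_read(&kvm->online_vcpus);
    ret = its_alloc_vcpu_irqs(&dist->its_vm);

    /* ... and teardown simply calls its_free_vcpu_irqs(&dist->its_vm) */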

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:06:56 +01:00
Marc Zyngier
e7c4805924 KVM: arm/arm64: GICv4: Add property field and per-VM predicate
Add a new has_gicv4 field in the global VGIC state that indicates
whether the HW is GICv4 capable, as well as a per-VM predicate indicating
if there is a possibility for a VM to support direct injection
(the above being true and the VM having an ITS).
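
A sketch of the per-VM predicate (helper name assumed); it is essentially
the two conditions above:

    static inline bool vgic_supports_direct_msis(struct kvm *kvm)
    {
        /* GICv4-capable host *and* a vITS instantiated in this VM */
        return kvm_vgic_global_state.has_gicv4 && vgic_has_its(kvm);
    }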

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 09:06:45 +01:00
Marc Zyngier
08c9fd0421 KVM: arm/arm64: vITS: Add a helper to update the affinity of an LPI
In order to help integrating the vITS code with GICv4, let's add
a new helper that deals with updating the affinity of an LPI,
which will later be augmented with super duper extra GICv4
goodness.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 08:51:58 +01:00
Marc Zyngier
bebfd2a203 KVM: arm/arm64: vITS: Add MSI translation helpers
The whole MSI injection process is fairly monolithic. An MSI write
gets turned into an injected LPI in one swift go. But this is actually
a more fine-grained process:

- First, a virtual ITS gets selected using the doorbell address
- Then the DevID/EventID pair gets translated into an LPI
- Finally the LPI is injected

Since the GICv4 code needs the first two steps in order to match
an IRQ routing entry to an LPI, let's expose them as helpers,
and refactor the existing code to use them.
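
A sketch of the resulting split (helper names are assumptions):

    struct vgic_its *its;
    struct vgic_irq *irq;
    int ret;

    its = vgic_msi_to_its(kvm, msi);             /* 1: doorbell -> vITS */
    if (IS_ERR(its))
        return PTR_ERR(its);

    ret = vgic_its_resolve_lpi(kvm, its, msi->devid, msi->data, &irq);
    if (ret)                                     /* 2: DevID/EventID -> LPI */
        return ret;

    /* 3: inject 'irq' as before */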

Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10 08:51:15 +01:00
Ingo Molnar
b5cd3b51e2 Merge branch 'linus' into x86/platform, to refresh the branch
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-11-10 08:21:08 +01:00
Ingo Molnar
91a6a6cfee Merge branch 'linus' into x86/asm, to resolve conflict
Conflicts:
	arch/x86/mm/mem_encrypt.c

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-11-10 08:06:47 +01:00
Ingo Molnar
d04fdafc06 Merge branch 'x86/mm' into x86/asm, to merge branches
Most of x86/mm is already in x86/asm, so merge the rest too.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-11-10 08:05:30 +01:00
Alexander Shishkin
b8347c2196 x86/debug: Handle warnings before the notifier chain, to fix KGDB crash
Commit:

  9a93848fe7 ("x86/debug: Implement __WARN() using UD0")

turned warnings into UD0, but the fixup code only runs after the
notify_die() chain. This is a problem, in particular, with kgdb,
which kicks in as if it was a BUG().

Fix this by running the fixup code before the notifier chain in
the invalid op handler path.
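
A simplified sketch of the reordering in do_error_trap():

    static void do_error_trap(struct pt_regs *regs, long error_code, char *str,
                              unsigned long trapnr, int signr)
    {
        /* WARN*()s end up here as UD0: fix them up *before* the
         * notifier chain gets a chance to treat them as a BUG() */
        if (!user_mode(regs) && fixup_bug(regs, trapnr))
            return;

        if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) !=
            NOTIFY_STOP) {
            /* ... deliver the signal as before ... */
        }
    }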

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Tested-by: Ilya Dryomov <idryomov@gmail.com>
Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard.weinberger@gmail.com>
Cc: <stable@vger.kernel.org> # v4.12+
Link: http://lkml.kernel.org/r/20170724100428.19173-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-11-10 08:04:19 +01:00
Eugenia Emantayev
d1c61e6d79 net/mlx5e: Increase Striding RQ minimum size limit to 4 multi-packet WQEs
This is to prevent the case of working with a single MPWQE
(1 WQE is always reserved, as the RQ is a linked list).
When the WQE is fully consumed, HW should still have available buffer
in order not to drop packets.

Fixes: 461017cb00 ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2017-11-10 15:39:21 +09:00
Inbar Karmy
2e50b26195 net/mlx5e: Set page to null in case dma mapping fails
Currently, when dma mapping fails, put_page is called,
but the page is not set to null. Later, in the page_reuse treatment in
mlx5e_free_rx_descs(), mlx5e_page_release() is called for the second time,
improperly doing dma_unmap (for a non-mapped address) and an extra put_page.
Prevent this by nullifying the page pointer when dma_map fails.
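
A sketch of the pattern (field names from the driver are assumptions):

    dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
                                  PAGE_SIZE, DMA_FROM_DEVICE);
    if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
        put_page(dma_info->page);
        dma_info->page = NULL;  /* a later release must not unmap/put again */
        return -ENOMEM;
    }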

Fixes: accd588332 ("net/mlx5e: Introduce RX Page-Reuse")
Signed-off-by: Inbar Karmy <inbark@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2017-11-10 15:39:21 +09:00
Saeed Mahameed
2a8d6065e7 net/mlx5e: Fix napi poll with zero budget
napi->poll can be called with budget 0, e.g. in netpoll scenarios
where the caller only wants to poll TX rings
(poll_one_napi@net/core/netpoll.c).

The below commit changed RX polling from a "while" loop to a "do {} while"
loop, which caused it to ignore the initial budget and handle at least one
RX packet.

This fixes the following warning:
[ 2852.049194] mlx5e_napi_poll+0x0/0x260 [mlx5_core] exceeded budget in poll
[ 2852.049195] ------------[ cut here ]------------
[ 2852.049195] WARNING: CPU: 0 PID: 25691 at net/core/netpoll.c:171 netpoll_poll_dev+0x18a/0x1a0
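
A sketch of the guard in the napi poll routine (ring and function names
from the driver are assumptions):

    /* TX completions may always be serviced ... */
    for (i = 0; i < c->num_tc; i++)
        busy |= mlx5e_poll_tx_cq(&c->sq[i].cq, budget);

    /* ... but with a zero budget (netpoll) RX must not be touched */
    if (unlikely(!budget))
        goto out;

    work_done = mlx5e_poll_rx_cq(&c->rq.cq, budget);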

Fixes: 4b7dfc9925 ("net/mlx5e: Early-return on empty completion queues")
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Reported-by: Martin KaFai Lau <kafai@fb.com>
Tested-by: Martin KaFai Lau <kafai@fb.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2017-11-10 15:39:20 +09:00
Huy Nguyen
d2aa060d40 net/mlx5: Cancel health poll before sending panic teardown command
After the panic teardown firmware command, health_care detects the error
on the PCI bus and calls mlx5_pci_err_detected. This health_care flow is
no longer needed because the panic teardown firmware command will bring
down the PCI bus communication with the HCA.

The solution is to cancel the health care timer and its pending
workqueue request before sending panic teardown firmware command.
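
A sketch of the ordering in the shutdown path (helper names taken from
the driver are assumptions):

    /* stop the health timer and flush any queued recovery work
     * before the firmware teardown kills PCI communication */
    mlx5_drain_health_wq(dev);
    mlx5_cmd_force_teardown_hca(dev);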

Kernel trace:
mlx5_core 0033:01:00.0: Shutdown was called
mlx5_core 0033:01:00.0: health_care:154:(pid 9304): handling bad device here
mlx5_core 0033:01:00.0: mlx5_handle_bad_state:114:(pid 9304): NIC state 1
mlx5_core 0033:01:00.0: mlx5_pci_err_detected was called
mlx5_core 0033:01:00.0: mlx5_enter_error_state:96:(pid 9304): start
mlx5_3:mlx5_ib_event:3061:(pid 9304): warning: event on port 0
mlx5_core 0033:01:00.0: mlx5_enter_error_state:104:(pid 9304): end
Unable to handle kernel paging request for data at address 0x0000003f
Faulting instruction address: 0xc0080000434b8c80

Fixes: 8812c24d28 ('net/mlx5: Add fast unload support in shutdown flow')
Signed-off-by: Huy Nguyen <huyn@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2017-11-10 15:39:20 +09:00
Huy Nguyen
b8cce68bf1 net/mlx5: Loop over temp list to release delay events
list_splice_init() reinitializes waiting_events_list after splicing it onto
a temp list, therefore we should loop over the temp list to fire the events.
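
A sketch of the intended pattern (struct, field and dispatch names are
assumptions):

    struct mlx5_delayed_event *de, *n;
    LIST_HEAD(temp);

    spin_lock_irq(&priv->ctx_lock);
    list_splice_init(&priv->waiting_events_list, &temp);  /* list now empty */
    spin_unlock_irq(&priv->ctx_lock);

    /* so the events must be fired from the local copy, not the original */
    list_for_each_entry_safe(de, n, &temp, list) {
        fire_delayed_event(dev, de);        /* hypothetical dispatch */
        list_del(&de->list);
        kfree(de);
    }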

Fixes: 4ca637a20a ("net/mlx5: Delay events till mlx5 interface's add complete for pci resume")
Signed-off-by: Huy Nguyen <huyn@mellanox.com>
Signed-off-by: Feras Daoud <ferasda@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2017-11-10 15:39:20 +09:00
David S. Miller
b79c069a8c Merge branch 'act_vlan-rcu'
Manish Kurup says:

====================
net_sched actions: act_vlan now uses RCU

This commit consists of 3 patches:

patch1 (1/3):
The VLAN action maintains one set of stats across all cores, and uses a
spinlock to synchronize updates to it from all of them. Changed this to use a
per-CPU stats context instead.
This change will result in better performance.

patch2 (2/3):
Modified netronome nfp flower action to use VLAN helper functions instead
of accessing/referencing TC act_vlan private structures directly.

patch3 (3/3):
Using a spinlock in the VLAN action causes performance issues when the VLAN
action is used on multiple cores. Rewrote the VLAN action to use RCU read
locking for reads and updates instead.
All functions now use an RCU dereferenced pointer to access the VLAN action
context. Modified helper functions used by other modules, to use the RCU as
opposed to directly accessing the structure.

As part of this review, there were some changes suggested by reviewers.
I have incorporated all the changes that were requested.

Here're the changes:
v2: Fixed all helper functions to use RCU (rtnl_dereference) - Eric, Jamal
v2: Fixed indentation, extra line nits - Jamal, Jiri
v2: Moved rcu_head to the end of the struct - Jiri
v2: Re-formatted locals to reverse-christmas-tree - Jiri
v2: Removed mismatched spin_lock() - Cong
v2: Removed spin_lock_bh() in tcf_vlan_init, rtnl_dereference() should
    suffice - Cong, Jiri
v4: Modified the nfp flower action code to use the VLAN helper functions
    instead of referencing the structure directly. Isolated this into a
    separate patch - Pieter Jansen
v5: Got rid of the unlikely() for the allocation case - Simon Horman
v6: Had forgotten cleanup functions for RCU alloc, added them - Dave Miller
v7: Re-formatted more locals to reverse-christmas-tree - Pieter V
v8: Reverted reverse-christmas-tree(v7), not required when dependencies
    make it difficult to implement - Alexander D
v9: Cover letter subject change - Jamal
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 15:32:20 +09:00
Manish Kurup
4c5b9d9642 act_vlan: VLAN action rewrite to use RCU lock/unlock and update
Using a spinlock in the VLAN action causes performance issues when the VLAN
action is used on multiple cores. Rewrote the VLAN action to use RCU read
locking for reads and updates instead.
All functions now use an RCU dereferenced pointer to access the VLAN action
context. Modified helper functions used by other modules, to use the RCU as
opposed to directly accessing the structure.
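
A sketch of the fast-path read side (struct and field names are
assumptions):

    struct tcf_vlan *v = to_vlan(a);
    struct tcf_vlan_params *p;

    /* replaces the per-action spinlock on the datapath */
    p = rcu_dereference_bh(v->vlan_p);

    switch (p->tcfv_action) {
    case TCA_VLAN_ACT_PUSH:
        /* push p->tcfv_push_vid / p->tcfv_push_proto onto the skb */
        break;
    case TCA_VLAN_ACT_POP:
        /* pop the outer vlan tag */
        break;
    }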

Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Manish Kurup <manish.kurup@verizon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 15:32:20 +09:00
Manish Kurup
bf068bdd3c nfp flower action: Modified to use VLAN helper functions
Modified netronome nfp flower action to use VLAN helper functions instead
of accessing/referencing TC act_vlan private structures directly.

Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Signed-off-by: Manish Kurup <manish.kurup@verizon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 15:32:20 +09:00
Manish Kurup
e0496cbbf8 act_vlan: Change stats update to use per-core stats
The VLAN action maintains one set of stats across all cores, and uses a
spinlock to synchronize updates to it from all of them. Changed this to use a
per-CPU stats context instead.
This change will result in better performance.
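
A sketch of the datapath update, assuming the generic per-CPU action
stats helpers:

    /* lockless per-CPU byte/packet accounting instead of a shared
     * spinlock-protected counter */
    bstats_cpu_update(this_cpu_ptr(a->cpu_bstats), skb);

    /* and on drop: */
    qstats_drop_inc(this_cpu_ptr(a->cpu_qstats));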

Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Manish Kurup <manish.kurup@verizon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 15:32:20 +09:00
Robert Stonehouse
cbad52e92a sfc: don't warn on successful change of MAC
Fixes: 535a61777f ("sfc: suppress handled MCDI failures when changing the MAC address")
Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 15:31:13 +09:00
Colin Ian King
e4effc094c net: vxge: remove redundant assignments and pointers
There are several pointers that are being assigned but never read
so remove these as they are redundant.  Also remove an assignment
to function_mode that is never read. Cleans up several clang
warnings:

vxge-main.c:1139:2: warning: Value stored to 'hldev' is never read
vxge-main.c:1294:2: warning: Value stored to 'hldev' is never read
vxge-main.c:2188:2: warning: Value stored to 'dev' is never read
vxge-main.c:2188:2: warning: Value stored to 'dev' is never read
vxge-main.c:2723:2: warning: Value stored to 'function_mode' is
never read

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 15:31:13 +09:00
Håkon Bugge
1cb483a5cc rds: ib: Fix NULL pointer dereference in debug code
rds_ib_recv_refill() is a function that refills an IB receive
queue. It can be called from both the CQE handler (tasklet) and a
worker thread.

Just after the call to ib_post_recv(), a debug message is printed with
rdsdebug():

            ret = ib_post_recv(ic->i_cm_id->qp, &recv->r_wr, &failed_wr);
            rdsdebug("recv %p ibinc %p page %p addr %lu ret %d\n", recv,
                     recv->r_ibinc, sg_page(&recv->r_frag->f_sg),
                     (long) ib_sg_dma_address(
                            ic->i_cm_id->device,
                            &recv->r_frag->f_sg),
                    ret);

Now consider an invocation of rds_ib_recv_refill() from the worker
thread, which is preemptible. Further, assume that the worker thread
is preempted between the ib_post_recv() and rdsdebug() statements.

Then, if the preemption is due to a receive CQE event, the
rds_ib_recv_cqe_handler() will be invoked. This function processes
receive completions, including freeing up data structures, such as the
recv->r_frag.

In this scenario, rds_ib_recv_cqe_handler() will process the receive
WR posted above. That implies that the recv->r_frag has been freed
before the above rdsdebug() statement has been executed. When it is
later executed, we will have a NULL pointer dereference:

[ 4088.068008] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
[ 4088.076754] IP: rds_ib_recv_refill+0x87/0x620 [rds_rdma]
[ 4088.082686] PGD 0 P4D 0
[ 4088.085515] Oops: 0000 [#1] SMP
[ 4088.089015] Modules linked in: rds_rdma(OE) rds(OE) rpcsec_gss_krb5(E) nfsv4(E) dns_resolver(E) nfs(E) fscache(E) mlx4_ib(E) ib_ipoib(E) rdma_ucm(E) ib_ucm(E) ib_uverbs(E) ib_umad(E) rdma_cm(E) ib_cm(E) iw_cm(E) ib_core(E) binfmt_misc(E) sb_edac(E) intel_powerclamp(E) coretemp(E) kvm_intel(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) pcbc(E) aesni_intel(E) crypto_simd(E) iTCO_wdt(E) glue_helper(E) iTCO_vendor_support(E) sg(E) cryptd(E) pcspkr(E) ipmi_si(E) ipmi_devintf(E) ipmi_msghandler(E) shpchp(E) ioatdma(E) i2c_i801(E) wmi(E) lpc_ich(E) mei_me(E) mei(E) mfd_core(E) nfsd(E) auth_rpcgss(E) nfs_acl(E) lockd(E) grace(E) sunrpc(E) ip_tables(E) ext4(E) mbcache(E) jbd2(E) fscrypto(E) mgag200(E) i2c_algo_bit(E) drm_kms_helper(E) syscopyarea(E) sysfillrect(E) sysimgblt(E)
[ 4088.168486]  fb_sys_fops(E) ahci(E) ixgbe(E) libahci(E) ttm(E) mdio(E) ptp(E) pps_core(E) drm(E) sd_mod(E) libata(E) crc32c_intel(E) mlx4_core(E) i2c_core(E) dca(E) megaraid_sas(E) dm_mirror(E) dm_region_hash(E) dm_log(E) dm_mod(E) [last unloaded: rds]
[ 4088.193442] CPU: 20 PID: 1244 Comm: kworker/20:2 Tainted: G           OE   4.14.0-rc7.master.20171105.ol7.x86_64 #1
[ 4088.205097] Hardware name: Oracle Corporation ORACLE SERVER X5-2L/ASM,MOBO TRAY,2U, BIOS 31110000 03/03/2017
[ 4088.216074] Workqueue: ib_cm cm_work_handler [ib_cm]
[ 4088.221614] task: ffff885fa11d0000 task.stack: ffffc9000e598000
[ 4088.228224] RIP: 0010:rds_ib_recv_refill+0x87/0x620 [rds_rdma]
[ 4088.234736] RSP: 0018:ffffc9000e59bb68 EFLAGS: 00010286
[ 4088.240568] RAX: 0000000000000000 RBX: ffffc9002115d050 RCX: ffffc9002115d050
[ 4088.248535] RDX: ffffffffa0521380 RSI: ffffffffa0522158 RDI: ffffffffa0525580
[ 4088.256498] RBP: ffffc9000e59bbf8 R08: 0000000000000005 R09: 0000000000000000
[ 4088.264465] R10: 0000000000000339 R11: 0000000000000001 R12: 0000000000000000
[ 4088.272433] R13: ffff885f8c9d8000 R14: ffffffff81a0a060 R15: ffff884676268000
[ 4088.280397] FS:  0000000000000000(0000) GS:ffff885fbec80000(0000) knlGS:0000000000000000
[ 4088.289434] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4088.295846] CR2: 0000000000000020 CR3: 0000000001e09005 CR4: 00000000001606e0
[ 4088.303816] Call Trace:
[ 4088.306557]  rds_ib_cm_connect_complete+0xe0/0x220 [rds_rdma]
[ 4088.312982]  ? __dynamic_pr_debug+0x8c/0xb0
[ 4088.317664]  ? __queue_work+0x142/0x3c0
[ 4088.321944]  rds_rdma_cm_event_handler+0x19e/0x250 [rds_rdma]
[ 4088.328370]  cma_ib_handler+0xcd/0x280 [rdma_cm]
[ 4088.333522]  cm_process_work+0x25/0x120 [ib_cm]
[ 4088.338580]  cm_work_handler+0xd6b/0x17aa [ib_cm]
[ 4088.343832]  process_one_work+0x149/0x360
[ 4088.348307]  worker_thread+0x4d/0x3e0
[ 4088.352397]  kthread+0x109/0x140
[ 4088.355996]  ? rescuer_thread+0x380/0x380
[ 4088.360467]  ? kthread_park+0x60/0x60
[ 4088.364563]  ret_from_fork+0x25/0x30
[ 4088.368548] Code: 48 89 45 90 48 89 45 98 eb 4d 0f 1f 44 00 00 48 8b 43 08 48 89 d9 48 c7 c2 80 13 52 a0 48 c7 c6 58 21 52 a0 48 c7 c7 80 55 52 a0 <4c> 8b 48 20 44 89 64 24 08 48 8b 40 30 49 83 e1 fc 48 89 04 24
[ 4088.389612] RIP: rds_ib_recv_refill+0x87/0x620 [rds_rdma] RSP: ffffc9000e59bb68
[ 4088.397772] CR2: 0000000000000020
[ 4088.401505] ---[ end trace fe922e6ccf004431 ]---

This bug was provoked by compiling rds out-of-tree with
EXTRA_CFLAGS="-DRDS_DEBUG -DDEBUG" and inserting an artificial delay
between the rdsdebug() and ib_post_recv() statements:

   	       /* XXX when can this fail? */
	       ret = ib_post_recv(ic->i_cm_id->qp, &recv->r_wr, &failed_wr);
+		if (can_wait)
+			usleep_range(1000, 5000);
	       rdsdebug("recv %p ibinc %p page %p addr %lu ret %d\n", recv,
			recv->r_ibinc, sg_page(&recv->r_frag->f_sg),
			(long) ib_sg_dma_address(

The fix is simply to move the rdsdebug() statement up before the
ib_post_recv() and remove the printing of ret, which is taken care of
anyway by the non-debug code.
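
A sketch of the fixed ordering (ret is no longer printed):

    rdsdebug("recv %p ibinc %p page %p addr %lu\n", recv,
             recv->r_ibinc, sg_page(&recv->r_frag->f_sg),
             (long) ib_sg_dma_address(ic->i_cm_id->device,
                                      &recv->r_frag->f_sg));

    /* only post after the debug print: the CQE handler may free
     * recv->r_frag as soon as the WR has been posted */
    ret = ib_post_recv(ic->i_cm_id->qp, &recv->r_wr, &failed_wr);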

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Knut Omang <knut.omang@oracle.com>
Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 14:54:47 +09:00
David S. Miller
be61a484af Merge branch 'ip_gre-flags-update'
Xin Long says:

====================
ip_gre: add support for i/o_flags update

ip_gre is using as many ip_tunnel apis as possible, newlink works
fine as gre would do it's own part in .ndo_init. But when changing
link, ip_tunnel_changelink doesn't even update i/o_flags, and also
the update of these flags would cause some other gre's properties
need to be updated or recalculated.

These two patch are to add i/o_flags update and then do adjustment
on some gre's properties according to the new i/o_flags.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-10 14:39:54 +09:00