/*
 * Copyright (c) 2005, 2006, 2007, 2008 Mellanox Technologies. All rights reserved.
 * Copyright (c) 2005, 2006, 2007 Cisco Systems, Inc. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>

#include <linux/mlx4/cmd.h>
#include <linux/cpu_rmap.h>

#include "mlx4.h"
#include "fw.h"
enum {
	MLX4_IRQNAME_SIZE	= 32
};

enum {
	MLX4_NUM_ASYNC_EQE	= 0x100,
	MLX4_NUM_SPARE_EQE	= 0x80,
	MLX4_EQ_ENTRY_SIZE	= 0x20
};

#define MLX4_EQ_STATUS_OK	   ( 0 << 28)
#define MLX4_EQ_STATUS_WRITE_FAIL  (10 << 28)
#define MLX4_EQ_OWNER_SW	   ( 0 << 24)
#define MLX4_EQ_OWNER_HW	   ( 1 << 24)
#define MLX4_EQ_FLAG_EC		   ( 1 << 18)
#define MLX4_EQ_FLAG_OI		   ( 1 << 17)
#define MLX4_EQ_STATE_ARMED	   ( 9 <<  8)
#define MLX4_EQ_STATE_FIRED	   (10 <<  8)
#define MLX4_EQ_STATE_ALWAYS_ARMED (11 <<  8)

#define MLX4_ASYNC_EVENT_MASK ((1ull << MLX4_EVENT_TYPE_PATH_MIG)	    | \
			       (1ull << MLX4_EVENT_TYPE_COMM_EST)	    | \
			       (1ull << MLX4_EVENT_TYPE_SQ_DRAINED)	    | \
			       (1ull << MLX4_EVENT_TYPE_CQ_ERROR)	    | \
			       (1ull << MLX4_EVENT_TYPE_WQ_CATAS_ERROR)	    | \
			       (1ull << MLX4_EVENT_TYPE_EEC_CATAS_ERROR)    | \
			       (1ull << MLX4_EVENT_TYPE_PATH_MIG_FAILED)    | \
			       (1ull << MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_WQ_ACCESS_ERROR)    | \
			       (1ull << MLX4_EVENT_TYPE_PORT_CHANGE)	    | \
			       (1ull << MLX4_EVENT_TYPE_ECC_DETECT)	    | \
			       (1ull << MLX4_EVENT_TYPE_SRQ_CATAS_ERROR)    | \
			       (1ull << MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE)    | \
			       (1ull << MLX4_EVENT_TYPE_SRQ_LIMIT)	    | \
			       (1ull << MLX4_EVENT_TYPE_CMD)		    | \
			       (1ull << MLX4_EVENT_TYPE_OP_REQUIRED)	    | \
			       (1ull << MLX4_EVENT_TYPE_COMM_CHANNEL)	    | \
			       (1ull << MLX4_EVENT_TYPE_FLR_EVENT)	    | \
			       (1ull << MLX4_EVENT_TYPE_FATAL_WARNING))
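/* Build the mask of asynchronous events that the firmware is asked to
 * report on the async EQ.  The port management change event is only
 * requested when the device advertises support for it.
 */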
static u64 get_async_ev_mask(struct mlx4_dev *dev)
{
	u64 async_ev_mask = MLX4_ASYNC_EVENT_MASK;

	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_PORT_MNG_CHG_EV)
		async_ev_mask |= (1ull << MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT);

	return async_ev_mask;
}
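/* Write the updated consumer index to the EQ doorbell; when req_not is
 * set, the doorbell write also requests another interrupt (re-arms the EQ).
 */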
static void eq_set_ci(struct mlx4_eq *eq, int req_not)
{
	__raw_writel((__force u32) cpu_to_be32((eq->cons_index & 0xffffff) |
					       req_not << 31),
		     eq->doorbell);
	/* We still want ordering, just not swabbing, so add a barrier */
	mb();
}

static struct mlx4_eqe *get_eqe(struct mlx4_eq *eq, u32 entry, u8 eqe_factor,
				u8 eqe_size)
{
	/* (entry & (eq->nent - 1)) gives us a cyclic array */
	unsigned long offset = (entry & (eq->nent - 1)) * eqe_size;

	/* CX3 is capable of extending the EQE from 32 to 64 bytes with
	 * strides of 64B,128B and 256B.
	 * When 64B EQE is used, the first (in the lower addresses)
	 * 32 bytes in the 64 byte EQE are reserved and the next 32 bytes
	 * contain the legacy EQE information.
	 * In all other cases, the first 32B contains the legacy EQE info.
	 */
	return eq->page_list[offset / PAGE_SIZE].buf + (offset + (eqe_factor ? MLX4_EQ_ENTRY_SIZE : 0)) % PAGE_SIZE;
}
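/* Return the EQE at the current consumer index if it is owned by software
 * (the ownership bit toggles on every wrap of the queue), or NULL when no
 * new event is pending.
 */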
static struct mlx4_eqe *next_eqe_sw(struct mlx4_eq *eq, u8 eqe_factor, u8 size)
{
	struct mlx4_eqe *eqe = get_eqe(eq, eq->cons_index, eqe_factor, size);

	return !!(eqe->owner & 0x80) ^ !!(eq->cons_index & eq->nent) ? NULL : eqe;
}

static struct mlx4_eqe *next_slave_event_eqe(struct mlx4_slave_event_eq *slave_eq)
{
	struct mlx4_eqe *eqe =
		&slave_eq->event_eqe[slave_eq->cons & (SLAVE_EVENT_EQ_SIZE - 1)];

	return (!!(eqe->owner & 0x80) ^
		!!(slave_eq->cons & SLAVE_EVENT_EQ_SIZE)) ?
		eqe : NULL;
}
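/* Work handler run on the master: drain the software slave event queue and
 * forward each queued EQE to its destination slave(s) via GEN_EQE.
 */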
void mlx4_gen_slave_eqe(struct work_struct *work)
{
	struct mlx4_mfunc_master_ctx *master =
		container_of(work, struct mlx4_mfunc_master_ctx,
			     slave_event_work);
	struct mlx4_mfunc *mfunc =
		container_of(master, struct mlx4_mfunc, master);
	struct mlx4_priv *priv = container_of(mfunc, struct mlx4_priv, mfunc);
	struct mlx4_dev *dev = &priv->dev;
	struct mlx4_slave_event_eq *slave_eq = &mfunc->master.slave_eq;
	struct mlx4_eqe *eqe;
	u8 slave;
	int i;

	for (eqe = next_slave_event_eqe(slave_eq); eqe;
	      eqe = next_slave_event_eqe(slave_eq)) {
		slave = eqe->slave_id;

		/* All active slaves need to receive the event */
		if (slave == ALL_SLAVES) {
			for (i = 0; i < dev->num_slaves; i++) {
				if (i != dev->caps.function &&
				    master->slave_state[i].active)
					if (mlx4_GEN_EQE(dev, i, eqe))
						mlx4_warn(dev, "Failed to generate event for slave %d\n",
							  i);
			}
		} else {
			if (mlx4_GEN_EQE(dev, slave, eqe))
				mlx4_warn(dev, "Failed to generate event for slave %d\n",
					  slave);
		}
		++slave_eq->cons;
	}
}
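/* Queue an EQE for a slave on the master's software slave event queue
 * (under the event_lock spinlock) and kick the work that delivers it.
 */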
static void slave_event(struct mlx4_dev *dev, u8 slave, struct mlx4_eqe *eqe)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_event_eq *slave_eq = &priv->mfunc.master.slave_eq;
	struct mlx4_eqe *s_eqe;
	unsigned long flags;

	spin_lock_irqsave(&slave_eq->event_lock, flags);
	s_eqe = &slave_eq->event_eqe[slave_eq->prod & (SLAVE_EVENT_EQ_SIZE - 1)];
	if ((!!(s_eqe->owner & 0x80)) ^
	    (!!(slave_eq->prod & SLAVE_EVENT_EQ_SIZE))) {
		mlx4_warn(dev, "Master failed to generate an EQE for slave: %d. No free EQE on slave events queue\n",
			  slave);
		spin_unlock_irqrestore(&slave_eq->event_lock, flags);
		return;
	}

	memcpy(s_eqe, eqe, dev->caps.eqe_size - 1);
	s_eqe->slave_id = slave;
	/* ensure all information is written before setting the ownership bit */
	wmb();
	s_eqe->owner = !!(slave_eq->prod & SLAVE_EVENT_EQ_SIZE) ? 0x0 : 0x80;
	++slave_eq->prod;

	queue_work(priv->mfunc.master.comm_wq,
		   &priv->mfunc.master.slave_event_work);
	spin_unlock_irqrestore(&slave_eq->event_lock, flags);
}

static void mlx4_slave_event(struct mlx4_dev *dev, int slave,
			     struct mlx4_eqe *eqe)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_slave =
		&priv->mfunc.master.slave_state[slave];

	if (!s_slave->active) {
		/*mlx4_warn(dev, "Trying to pass event to inactive slave\n");*/
		return;
	}

	slave_event(dev, slave, eqe);
}
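/* Exported helpers that synthesize port management change / port state
 * change EQEs and deliver them to a given slave through GEN_EQE.
 */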
int mlx4_gen_pkey_eqe(struct mlx4_dev *dev, int slave, u8 port)
{
	struct mlx4_eqe eqe;

	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_slave = &priv->mfunc.master.slave_state[slave];

	if (!s_slave->active)
		return 0;

	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT;
	eqe.subtype = MLX4_DEV_PMC_SUBTYPE_PKEY_TABLE;
	eqe.event.port_mgmt_change.port = port;

	return mlx4_GEN_EQE(dev, slave, &eqe);
}
EXPORT_SYMBOL(mlx4_gen_pkey_eqe);

int mlx4_gen_guid_change_eqe(struct mlx4_dev *dev, int slave, u8 port)
{
	struct mlx4_eqe eqe;

	/* don't send if we don't have that slave */
	if (dev->num_vfs < slave)
		return 0;

	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT;
	eqe.subtype = MLX4_DEV_PMC_SUBTYPE_GUID_INFO;
	eqe.event.port_mgmt_change.port = port;

	return mlx4_GEN_EQE(dev, slave, &eqe);
}
EXPORT_SYMBOL(mlx4_gen_guid_change_eqe);

int mlx4_gen_port_state_change_eqe(struct mlx4_dev *dev, int slave, u8 port,
				   u8 port_subtype_change)
{
	struct mlx4_eqe eqe;

	/* don't send if we don't have that slave */
	if (dev->num_vfs < slave)
		return 0;

	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_CHANGE;
	eqe.subtype = port_subtype_change;
	eqe.event.port_change.port = cpu_to_be32(port << 28);

	mlx4_dbg(dev, "%s: sending: %d to slave: %d on port: %d\n", __func__,
		 port_subtype_change, slave, port);
	return mlx4_GEN_EQE(dev, slave, &eqe);
}
EXPORT_SYMBOL(mlx4_gen_port_state_change_eqe);

enum slave_port_state mlx4_get_slave_port_state(struct mlx4_dev *dev, int slave, u8 port)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_state = priv->mfunc.master.slave_state;
	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);

	if (slave >= dev->num_slaves || port > dev->caps.num_ports ||
	    port <= 0 || !test_bit(port - 1, actv_ports.ports)) {
		pr_err("%s: Error: asking for slave:%d, port:%d\n",
		       __func__, slave, port);
		return SLAVE_PORT_DOWN;
	}
	return s_state[slave].port_state[port];
}
EXPORT_SYMBOL(mlx4_get_slave_port_state);

static int mlx4_set_slave_port_state(struct mlx4_dev *dev, int slave, u8 port,
				     enum slave_port_state state)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_state = priv->mfunc.master.slave_state;
	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);

	if (slave >= dev->num_slaves || port > dev->caps.num_ports ||
	    port <= 0 || !test_bit(port - 1, actv_ports.ports)) {
		pr_err("%s: Error: asking for slave:%d, port:%d\n",
		       __func__, slave, port);
		return -1;
	}
	s_state[slave].port_state[port] = state;

	return 0;
}

static void set_all_slave_state(struct mlx4_dev *dev, u8 port, int event)
{
	int i;
	enum slave_port_gen_event gen_event;
	struct mlx4_slaves_pport slaves_pport = mlx4_phys_to_slaves_pport(dev,
									  port);

	for (i = 0; i < dev->num_vfs + 1; i++)
		if (test_bit(i, slaves_pport.slaves))
			set_and_calc_slave_port_state(dev, i, port,
						      event, &gen_event);
}
/**************************************************************************
	The function gets as input the new event for that port,
	and according to the previous state changes the slave's port state.
	The events are:
		MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN,
		MLX4_PORT_STATE_DEV_EVENT_PORT_UP
		MLX4_PORT_STATE_IB_EVENT_GID_VALID
		MLX4_PORT_STATE_IB_EVENT_GID_INVALID
***************************************************************************/
int set_and_calc_slave_port_state(struct mlx4_dev *dev, int slave,
				  u8 port, int event,
				  enum slave_port_gen_event *gen_event)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *ctx = NULL;
	unsigned long flags;
	int ret = -1;
	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);
	enum slave_port_state cur_state =
		mlx4_get_slave_port_state(dev, slave, port);

	*gen_event = SLAVE_PORT_GEN_EVENT_NONE;

	if (slave >= dev->num_slaves || port > dev->caps.num_ports ||
	    port <= 0 || !test_bit(port - 1, actv_ports.ports)) {
		pr_err("%s: Error: asking for slave:%d, port:%d\n",
		       __func__, slave, port);
		return ret;
	}

	ctx = &priv->mfunc.master.slave_state[slave];
	spin_lock_irqsave(&ctx->lock, flags);

	switch (cur_state) {
	case SLAVE_PORT_DOWN:
		if (MLX4_PORT_STATE_DEV_EVENT_PORT_UP == event)
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PENDING_UP);
		break;
	case SLAVE_PENDING_UP:
		if (MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN == event)
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PORT_DOWN);
		else if (MLX4_PORT_STATE_IB_PORT_STATE_EVENT_GID_VALID == event) {
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PORT_UP);
			*gen_event = SLAVE_PORT_GEN_EVENT_UP;
		}
		break;
	case SLAVE_PORT_UP:
		if (MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN == event) {
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PORT_DOWN);
			*gen_event = SLAVE_PORT_GEN_EVENT_DOWN;
		} else if (MLX4_PORT_STATE_IB_EVENT_GID_INVALID ==
				event) {
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PENDING_UP);
			*gen_event = SLAVE_PORT_GEN_EVENT_DOWN;
		}
		break;
	default:
		pr_err("%s: BUG!!! UNKNOWN state: slave:%d, port:%d\n",
		       __func__, slave, port);
		goto out;
	}
	ret = mlx4_get_slave_port_state(dev, slave, port);

out:
	spin_unlock_irqrestore(&ctx->lock, flags);
	return ret;
}

EXPORT_SYMBOL(set_and_calc_slave_port_state);

int mlx4_gen_slaves_port_mgt_ev(struct mlx4_dev *dev, u8 port, int attr)
{
	struct mlx4_eqe eqe;

	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT;
	eqe.subtype = MLX4_DEV_PMC_SUBTYPE_PORT_INFO;
	eqe.event.port_mgmt_change.port = port;
	eqe.event.port_mgmt_change.params.port_info.changed_attr =
		cpu_to_be32((u32) attr);

	slave_event(dev, ALL_SLAVES, &eqe);
	return 0;
}
EXPORT_SYMBOL(mlx4_gen_slaves_port_mgt_ev);
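/* Work handler: clean up all resources of slaves that reported FLR
 * (function level reset), return them to running mode and inform the
 * firmware when done.
 */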
void mlx4_master_handle_slave_flr(struct work_struct *work)
{
	struct mlx4_mfunc_master_ctx *master =
		container_of(work, struct mlx4_mfunc_master_ctx,
			     slave_flr_event_work);
	struct mlx4_mfunc *mfunc =
		container_of(master, struct mlx4_mfunc, master);
	struct mlx4_priv *priv =
		container_of(mfunc, struct mlx4_priv, mfunc);
	struct mlx4_dev *dev = &priv->dev;
	struct mlx4_slave_state *slave_state = priv->mfunc.master.slave_state;
	int i;
	int err;
	unsigned long flags;

	mlx4_dbg(dev, "mlx4_handle_slave_flr\n");

	for (i = 0 ; i < dev->num_slaves; i++) {

		if (MLX4_COMM_CMD_FLR == slave_state[i].last_cmd) {
			mlx4_dbg(dev, "mlx4_handle_slave_flr: clean slave: %d\n",
				 i);

			mlx4_delete_all_resources_for_slave(dev, i);
			/* return the slave to running mode */
			spin_lock_irqsave(&priv->mfunc.master.slave_state_lock, flags);
			slave_state[i].last_cmd = MLX4_COMM_CMD_RESET;
			slave_state[i].is_slave_going_down = 0;
			spin_unlock_irqrestore(&priv->mfunc.master.slave_state_lock, flags);
			/* notify the FW: */
			err = mlx4_cmd(dev, 0, i, 0, MLX4_CMD_INFORM_FLR_DONE,
				       MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
			if (err)
				mlx4_warn(dev, "Failed to notify FW on FLR done (slave:%d)\n",
					  i);
		}
	}
}
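/* Poll one EQ: consume all software-owned EQEs, dispatch each event by
 * type (forwarding it to the owning slave when running as master), and
 * update the consumer index often enough that the HCA never sees an
 * overflow.
 */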
static int mlx4_eq_int(struct mlx4_dev *dev, struct mlx4_eq *eq)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_eqe *eqe;
	int cqn = -1;
	int eqes_found = 0;
	int set_ci = 0;
	int port;
	int slave = 0;
	int ret;
	u32 flr_slave;
	u8 update_slave_state;
	int i;
	enum slave_port_gen_event gen_event;
	unsigned long flags;
	struct mlx4_vport_state *s_info;
	int eqe_size = dev->caps.eqe_size;

	while ((eqe = next_eqe_sw(eq, dev->caps.eqe_factor, eqe_size))) {
		/*
		 * Make sure we read EQ entry contents after we've
		 * checked the ownership bit.
		 */
		rmb();

		switch (eqe->type) {
		case MLX4_EVENT_TYPE_COMP:
			cqn = be32_to_cpu(eqe->event.comp.cqn) & 0xffffff;
			mlx4_cq_completion(dev, cqn);
			break;

		case MLX4_EVENT_TYPE_PATH_MIG:
		case MLX4_EVENT_TYPE_COMM_EST:
		case MLX4_EVENT_TYPE_SQ_DRAINED:
		case MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE:
		case MLX4_EVENT_TYPE_WQ_CATAS_ERROR:
		case MLX4_EVENT_TYPE_PATH_MIG_FAILED:
		case MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
		case MLX4_EVENT_TYPE_WQ_ACCESS_ERROR:
			mlx4_dbg(dev, "event %d arrived\n", eqe->type);
			if (mlx4_is_master(dev)) {
				/* forward only to slave owning the QP */
				ret = mlx4_get_slave_from_resource_id(dev,
								      RES_QP,
								      be32_to_cpu(eqe->event.qp.qpn)
								      & 0xffffff, &slave);
				if (ret && ret != -ENOENT) {
					mlx4_dbg(dev, "QP event %02x(%02x) on EQ %d at index %u: could not get slave id (%d)\n",
						 eqe->type, eqe->subtype,
						 eq->eqn, eq->cons_index, ret);
					break;
				}

				if (!ret && slave != dev->caps.function) {
					mlx4_slave_event(dev, slave, eqe);
					break;
				}

			}
			mlx4_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) &
				      0xffffff, eqe->type);
			break;

		case MLX4_EVENT_TYPE_SRQ_LIMIT:
			mlx4_dbg(dev, "%s: MLX4_EVENT_TYPE_SRQ_LIMIT\n",
				 __func__);
		case MLX4_EVENT_TYPE_SRQ_CATAS_ERROR:
			if (mlx4_is_master(dev)) {
				/* forward only to slave owning the SRQ */
				ret = mlx4_get_slave_from_resource_id(dev,
								      RES_SRQ,
								      be32_to_cpu(eqe->event.srq.srqn)
								      & 0xffffff,
								      &slave);
				if (ret && ret != -ENOENT) {
					mlx4_warn(dev, "SRQ event %02x(%02x) on EQ %d at index %u: could not get slave id (%d)\n",
						  eqe->type, eqe->subtype,
						  eq->eqn, eq->cons_index, ret);
					break;
				}
				mlx4_warn(dev, "%s: slave:%d, srq_no:0x%x, event: %02x(%02x)\n",
					  __func__, slave,
					  be32_to_cpu(eqe->event.srq.srqn),
					  eqe->type, eqe->subtype);

				if (!ret && slave != dev->caps.function) {
					mlx4_warn(dev, "%s: sending event %02x(%02x) to slave:%d\n",
						  __func__, eqe->type,
						  eqe->subtype, slave);
					mlx4_slave_event(dev, slave, eqe);
					break;
				}
			}
			mlx4_srq_event(dev, be32_to_cpu(eqe->event.srq.srqn) &
				       0xffffff, eqe->type);
			break;
		case MLX4_EVENT_TYPE_CMD:
			mlx4_cmd_event(dev,
				       be16_to_cpu(eqe->event.cmd.token),
				       eqe->event.cmd.status,
				       be64_to_cpu(eqe->event.cmd.out_param));
			break;

		case MLX4_EVENT_TYPE_PORT_CHANGE: {
			struct mlx4_slaves_pport slaves_port;
			port = be32_to_cpu(eqe->event.port_change.port) >> 28;
			slaves_port = mlx4_phys_to_slaves_pport(dev, port);
			if (eqe->subtype == MLX4_PORT_CHANGE_SUBTYPE_DOWN) {
				mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_DOWN,
						    port);
				mlx4_priv(dev)->sense.do_sense_port[port] = 1;
				if (!mlx4_is_master(dev))
					break;
				for (i = 0; i < dev->num_vfs + 1; i++) {
					if (!test_bit(i, slaves_port.slaves))
						continue;
					if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH) {
						if (i == mlx4_master_func_num(dev))
							continue;
						mlx4_dbg(dev, "%s: Sending MLX4_PORT_CHANGE_SUBTYPE_DOWN to slave: %d, port:%d\n",
							 __func__, i, port);
						s_info = &priv->mfunc.master.vf_oper[slave].vport[port].state;
						if (IFLA_VF_LINK_STATE_AUTO == s_info->link_state) {
							eqe->event.port_change.port =
								cpu_to_be32(
								(be32_to_cpu(eqe->event.port_change.port) & 0xFFFFFFF)
								| (mlx4_phys_to_slave_port(dev, i, port) << 28));
							mlx4_slave_event(dev, i, eqe);
						}
					} else {  /* IB port */
						set_and_calc_slave_port_state(dev, i, port,
									      MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN,
									      &gen_event);
						/* we can be in pending state, then do not send port_down event */
						if (SLAVE_PORT_GEN_EVENT_DOWN == gen_event) {
							if (i == mlx4_master_func_num(dev))
								continue;
							mlx4_slave_event(dev, i, eqe);
						}
					}
				}
			} else {
				mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_UP, port);

				mlx4_priv(dev)->sense.do_sense_port[port] = 0;

				if (!mlx4_is_master(dev))
					break;
				if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH)
					for (i = 0; i < dev->num_vfs + 1; i++) {
						if (!test_bit(i, slaves_port.slaves))
							continue;
						if (i == mlx4_master_func_num(dev))
							continue;
						s_info = &priv->mfunc.master.vf_oper[slave].vport[port].state;
						if (IFLA_VF_LINK_STATE_AUTO == s_info->link_state) {
							eqe->event.port_change.port =
								cpu_to_be32(
								(be32_to_cpu(eqe->event.port_change.port) & 0xFFFFFFF)
								| (mlx4_phys_to_slave_port(dev, i, port) << 28));
							mlx4_slave_event(dev, i, eqe);
						}
					}
				else /* IB port */
					/* port-up event will be sent to a slave when the
					 * slave's alias-guid is set. This is done in alias_GUID.c
					 */
					set_all_slave_state(dev, port, MLX4_DEV_EVENT_PORT_UP);
			}
			break;
		}
		case MLX4_EVENT_TYPE_CQ_ERROR:
			mlx4_warn(dev, "CQ %s on CQN %06x\n",
				  eqe->event.cq_err.syndrome == 1 ?
				  "overrun" : "access violation",
				  be32_to_cpu(eqe->event.cq_err.cqn) & 0xffffff);
			if (mlx4_is_master(dev)) {
				ret = mlx4_get_slave_from_resource_id(dev,
								      RES_CQ,
								      be32_to_cpu(eqe->event.cq_err.cqn)
								      & 0xffffff, &slave);
				if (ret && ret != -ENOENT) {
					mlx4_dbg(dev, "CQ event %02x(%02x) on EQ %d at index %u: could not get slave id (%d)\n",
						 eqe->type, eqe->subtype,
						 eq->eqn, eq->cons_index, ret);
					break;
				}

				if (!ret && slave != dev->caps.function) {
					mlx4_slave_event(dev, slave, eqe);
					break;
				}
			}
			mlx4_cq_event(dev,
				      be32_to_cpu(eqe->event.cq_err.cqn)
				      & 0xffffff,
				      eqe->type);
			break;

		case MLX4_EVENT_TYPE_EQ_OVERFLOW:
			mlx4_warn(dev, "EQ overrun on EQN %d\n", eq->eqn);
			break;

		case MLX4_EVENT_TYPE_OP_REQUIRED:
			atomic_inc(&priv->opreq_count);
			/* FW commands can't be executed from interrupt context;
			 * handle them in a deferred task.
			 */
			queue_work(mlx4_wq, &priv->opreq_task);
			break;

		case MLX4_EVENT_TYPE_COMM_CHANNEL:
			if (!mlx4_is_master(dev)) {
				mlx4_warn(dev, "Received comm channel event for non master device\n");
				break;
			}
			memcpy(&priv->mfunc.master.comm_arm_bit_vector,
			       eqe->event.comm_channel_arm.bit_vec,
			       sizeof eqe->event.comm_channel_arm.bit_vec);
			queue_work(priv->mfunc.master.comm_wq,
				   &priv->mfunc.master.comm_work);
			break;

		case MLX4_EVENT_TYPE_FLR_EVENT:
			flr_slave = be32_to_cpu(eqe->event.flr_event.slave_id);
			if (!mlx4_is_master(dev)) {
				mlx4_warn(dev, "Non-master function received FLR event\n");
				break;
			}

			mlx4_dbg(dev, "FLR event for slave: %d\n", flr_slave);

			if (flr_slave >= dev->num_slaves) {
				mlx4_warn(dev,
					  "Got FLR for unknown function: %d\n",
					  flr_slave);
				update_slave_state = 0;
			} else
				update_slave_state = 1;

			spin_lock_irqsave(&priv->mfunc.master.slave_state_lock, flags);
			if (update_slave_state) {
				priv->mfunc.master.slave_state[flr_slave].active = false;
				priv->mfunc.master.slave_state[flr_slave].last_cmd = MLX4_COMM_CMD_FLR;
				priv->mfunc.master.slave_state[flr_slave].is_slave_going_down = 1;
			}
			spin_unlock_irqrestore(&priv->mfunc.master.slave_state_lock, flags);
			queue_work(priv->mfunc.master.comm_wq,
				   &priv->mfunc.master.slave_flr_event_work);
			break;
		case MLX4_EVENT_TYPE_FATAL_WARNING:
			if (eqe->subtype == MLX4_FATAL_WARNING_SUBTYPE_WARMING) {
				if (mlx4_is_master(dev))
					for (i = 0; i < dev->num_slaves; i++) {
						mlx4_dbg(dev, "%s: Sending MLX4_FATAL_WARNING_SUBTYPE_WARMING to slave: %d\n",
							 __func__, i);
						if (i == dev->caps.function)
							continue;
						mlx4_slave_event(dev, i, eqe);
					}
				mlx4_err(dev, "Temperature Threshold was reached! Threshold: %d celsius degrees; Current Temperature: %d\n",
					 be16_to_cpu(eqe->event.warming.warning_threshold),
					 be16_to_cpu(eqe->event.warming.current_temperature));
			} else
				mlx4_warn(dev, "Unhandled event FATAL WARNING (%02x), subtype %02x on EQ %d at index %u. owner=%x, nent=0x%x, slave=%x, ownership=%s\n",
					  eqe->type, eqe->subtype, eq->eqn,
					  eq->cons_index, eqe->owner, eq->nent,
					  eqe->slave_id,
					  !!(eqe->owner & 0x80) ^
					  !!(eq->cons_index & eq->nent) ? "HW" : "SW");

			break;

		case MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT:
			mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_MGMT_CHANGE,
					    (unsigned long) eqe);
			break;

		case MLX4_EVENT_TYPE_EEC_CATAS_ERROR:
		case MLX4_EVENT_TYPE_ECC_DETECT:
		default:
			mlx4_warn(dev, "Unhandled event %02x(%02x) on EQ %d at index %u. owner=%x, nent=0x%x, slave=%x, ownership=%s\n",
				  eqe->type, eqe->subtype, eq->eqn,
				  eq->cons_index, eqe->owner, eq->nent,
				  eqe->slave_id,
				  !!(eqe->owner & 0x80) ^
				  !!(eq->cons_index & eq->nent) ? "HW" : "SW");
			break;
		};

		++eq->cons_index;
		eqes_found = 1;
		++set_ci;

		/*
		 * The HCA will think the queue has overflowed if we
		 * don't tell it we've been processing events.  We
		 * create our EQs with MLX4_NUM_SPARE_EQE extra
		 * entries, so we must update our consumer index at
		 * least that often.
		 */
		if (unlikely(set_ci >= MLX4_NUM_SPARE_EQE)) {
			eq_set_ci(eq, 0);
			set_ci = 0;
		}
	}

	eq_set_ci(eq, 1);

	/* cqn is 24bit wide but is initialized such that its higher bits
	 * are ones too. Thus, if we got any event, cqn's high bits should be off
	 * and we need to schedule the tasklet.
	 */
	if (!(cqn & ~0xffffff))
		tasklet_schedule(&eq->tasklet_ctx.task);

	return eqes_found;
}
static irqreturn_t mlx4_interrupt(int irq, void *dev_ptr)
{
	struct mlx4_dev *dev = dev_ptr;
	struct mlx4_priv *priv = mlx4_priv(dev);
	int work = 0;
	int i;

	writel(priv->eq_table.clr_mask, priv->eq_table.clr_int);

	for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
		work |= mlx4_eq_int(dev, &priv->eq_table.eq[i]);

	return IRQ_RETVAL(work);
}

static irqreturn_t mlx4_msi_x_interrupt(int irq, void *eq_ptr)
{
	struct mlx4_eq  *eq  = eq_ptr;
	struct mlx4_dev *dev = eq->dev;

	mlx4_eq_int(dev, eq);

	/* MSI-X vectors always belong to us */
	return IRQ_HANDLED;
}
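/* Command wrapper for MAP_EQ issued through the comm channel: the firmware
 * command is executed only for the PF itself; for every slave the wrapper
 * records, per async event type selected in in_param, which EQ the event is
 * mapped to (or -1 when unmapping).
 */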
int mlx4_MAP_EQ_wrapper(struct mlx4_dev *dev, int slave,
			struct mlx4_vhcr *vhcr,
			struct mlx4_cmd_mailbox *inbox,
			struct mlx4_cmd_mailbox *outbox,
			struct mlx4_cmd_info *cmd)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_event_eq_info *event_eq =
		priv->mfunc.master.slave_state[slave].event_eq;
	u32 in_modifier = vhcr->in_modifier;
	u32 eqn = in_modifier & 0x3FF;
	u64 in_param =  vhcr->in_param;
	int err = 0;
	int i;

	if (slave == dev->caps.function)
		err = mlx4_cmd(dev, in_param, (in_modifier & 0x80000000) | eqn,
			       0, MLX4_CMD_MAP_EQ, MLX4_CMD_TIME_CLASS_B,
			       MLX4_CMD_NATIVE);
	if (!err)
		for (i = 0; i < MLX4_EVENT_TYPES_NUM; ++i)
			if (in_param & (1LL << i))
				event_eq[i].eqn = in_modifier >> 31 ? -1 : eqn;

	return err;
}

static int mlx4_MAP_EQ(struct mlx4_dev *dev, u64 event_mask, int unmap,
			int eq_num)
{
	return mlx4_cmd(dev, event_mask, (unmap << 31) | eq_num,
			0, MLX4_CMD_MAP_EQ, MLX4_CMD_TIME_CLASS_B,
			MLX4_CMD_WRAPPED);
}

static int mlx4_SW2HW_EQ(struct mlx4_dev *dev, struct mlx4_cmd_mailbox *mailbox,
			 int eq_num)
{
	return mlx4_cmd(dev, mailbox->dma, eq_num, 0,
			MLX4_CMD_SW2HW_EQ, MLX4_CMD_TIME_CLASS_A,
			MLX4_CMD_WRAPPED);
}

static int mlx4_HW2SW_EQ(struct mlx4_dev *dev, struct mlx4_cmd_mailbox *mailbox,
			 int eq_num)
{
	return mlx4_cmd_box(dev, 0, mailbox->dma, eq_num,
			    0, MLX4_CMD_HW2SW_EQ, MLX4_CMD_TIME_CLASS_A,
			    MLX4_CMD_WRAPPED);
}
static int mlx4_num_eq_uar(struct mlx4_dev *dev)
{
	/*
	 * Each UAR holds 4 EQ doorbells.  To figure out how many UARs
	 * we need to map, take the difference of highest index and
	 * the lowest index we'll use and add 1.
	 */
	return (dev->caps.num_comp_vectors + 1 + dev->caps.reserved_eqs +
		dev->caps.comp_pool)/4 - dev->caps.reserved_eqs/4 + 1;
}
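/* Map (on first use) the UAR page that holds the doorbell of the given EQ
 * and return the doorbell address; each UAR page carries the doorbells of
 * four consecutive EQs.
 */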
static void __iomem *mlx4_get_eq_uar(struct mlx4_dev *dev, struct mlx4_eq *eq)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int index;

	index = eq->eqn / 4 - dev->caps.reserved_eqs / 4;

	if (!priv->eq_table.uar_map[index]) {
		priv->eq_table.uar_map[index] =
			ioremap(pci_resource_start(dev->pdev, 2) +
				((eq->eqn / 4) << PAGE_SHIFT),
				PAGE_SIZE);
		if (!priv->eq_table.uar_map[index]) {
			mlx4_err(dev, "Couldn't map EQ doorbell for EQN 0x%06x\n",
				 eq->eqn);
			return NULL;
		}
	}

	return priv->eq_table.uar_map[index] + 0x800 + 8 * (eq->eqn % 4);
}

static void mlx4_unmap_uar(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int i;

	for (i = 0; i < mlx4_num_eq_uar(dev); ++i)
		if (priv->eq_table.uar_map[i]) {
			iounmap(priv->eq_table.uar_map[i]);
			priv->eq_table.uar_map[i] = NULL;
		}
}
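/* Allocate and initialize one EQ: allocate its pages and MTT entries,
 * reserve an EQ number, fill the EQ context and hand it to firmware with
 * SW2HW_EQ, then set up the per-EQ completion tasklet.
 */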
static int mlx4_create_eq(struct mlx4_dev *dev, int nent,
			  u8 intr, struct mlx4_eq *eq)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_cmd_mailbox *mailbox;
	struct mlx4_eq_context *eq_context;
	int npages;
	u64 *dma_list = NULL;
	dma_addr_t t;
	u64 mtt_addr;
	int err = -ENOMEM;
	int i;

	eq->dev   = dev;
	eq->nent  = roundup_pow_of_two(max(nent, 2));
	/* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
	 * strides of 64B,128B and 256B.
	 */
	npages = PAGE_ALIGN(eq->nent * dev->caps.eqe_size) / PAGE_SIZE;

	eq->page_list = kmalloc(npages * sizeof *eq->page_list,
				GFP_KERNEL);
	if (!eq->page_list)
		goto err_out;

	for (i = 0; i < npages; ++i)
		eq->page_list[i].buf = NULL;

	dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
	if (!dma_list)
		goto err_out_free;

	mailbox = mlx4_alloc_cmd_mailbox(dev);
	if (IS_ERR(mailbox))
		goto err_out_free;
	eq_context = mailbox->buf;

	for (i = 0; i < npages; ++i) {
		eq->page_list[i].buf = dma_alloc_coherent(&dev->pdev->dev,
							  PAGE_SIZE, &t, GFP_KERNEL);
		if (!eq->page_list[i].buf)
			goto err_out_free_pages;

		dma_list[i] = t;
		eq->page_list[i].map = t;

		memset(eq->page_list[i].buf, 0, PAGE_SIZE);
	}

	eq->eqn = mlx4_bitmap_alloc(&priv->eq_table.bitmap);
	if (eq->eqn == -1)
		goto err_out_free_pages;

	eq->doorbell = mlx4_get_eq_uar(dev, eq);
	if (!eq->doorbell) {
		err = -ENOMEM;
		goto err_out_free_eq;
	}

	err = mlx4_mtt_init(dev, npages, PAGE_SHIFT, &eq->mtt);
	if (err)
		goto err_out_free_eq;

	err = mlx4_write_mtt(dev, &eq->mtt, 0, npages, dma_list);
	if (err)
		goto err_out_free_mtt;

	eq_context->flags = cpu_to_be32(MLX4_EQ_STATUS_OK |
					MLX4_EQ_STATE_ARMED);
	eq_context->log_eq_size	  = ilog2(eq->nent);
	eq_context->intr	  = intr;
	eq_context->log_page_size = PAGE_SHIFT - MLX4_ICM_PAGE_SHIFT;

	mtt_addr = mlx4_mtt_addr(dev, &eq->mtt);
	eq_context->mtt_base_addr_h = mtt_addr >> 32;
	eq_context->mtt_base_addr_l = cpu_to_be32(mtt_addr & 0xffffffff);

	err = mlx4_SW2HW_EQ(dev, mailbox, eq->eqn);
	if (err) {
		mlx4_warn(dev, "SW2HW_EQ failed (%d)\n", err);
		goto err_out_free_mtt;
	}

	kfree(dma_list);
	mlx4_free_cmd_mailbox(dev, mailbox);

	eq->cons_index = 0;

	INIT_LIST_HEAD(&eq->tasklet_ctx.list);
	INIT_LIST_HEAD(&eq->tasklet_ctx.process_list);
	spin_lock_init(&eq->tasklet_ctx.lock);
	tasklet_init(&eq->tasklet_ctx.task, mlx4_cq_tasklet_cb,
		     (unsigned long)&eq->tasklet_ctx);

	return err;

err_out_free_mtt:
	mlx4_mtt_cleanup(dev, &eq->mtt);

err_out_free_eq:
	mlx4_bitmap_free(&priv->eq_table.bitmap, eq->eqn, MLX4_USE_RR);

err_out_free_pages:
	for (i = 0; i < npages; ++i)
		if (eq->page_list[i].buf)
			dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
					  eq->page_list[i].buf,
					  eq->page_list[i].map);

	mlx4_free_cmd_mailbox(dev, mailbox);

err_out_free:
	kfree(eq->page_list);
	kfree(dma_list);

err_out:
	return err;
}
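/* Tear down one EQ: return it to software ownership with HW2SW_EQ, quiesce
 * its IRQ and tasklet, and free its MTT entries, pages and EQ number.
 */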
static void mlx4_free_eq(struct mlx4_dev *dev,
|
|
|
|
struct mlx4_eq *eq)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
struct mlx4_cmd_mailbox *mailbox;
|
|
|
|
int err;
|
|
|
|
int i;
|
2014-09-18 08:51:00 +00:00
|
|
|
/* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
|
|
|
|
* strides of 64B,128B and 256B
|
|
|
|
*/
|
|
|
|
int npages = PAGE_ALIGN(dev->caps.eqe_size * eq->nent) / PAGE_SIZE;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
mailbox = mlx4_alloc_cmd_mailbox(dev);
|
|
|
|
if (IS_ERR(mailbox))
|
|
|
|
return;
|
|
|
|
|
|
|
|
err = mlx4_HW2SW_EQ(dev, mailbox, eq->eqn);
|
|
|
|
if (err)
|
|
|
|
mlx4_warn(dev, "HW2SW_EQ failed (%d)\n", err);
|
|
|
|
|
|
|
|
if (0) {
|
|
|
|
mlx4_dbg(dev, "Dumping EQ context %02x:\n", eq->eqn);
|
|
|
|
for (i = 0; i < sizeof (struct mlx4_eq_context) / 4; ++i) {
|
|
|
|
if (i % 4 == 0)
|
2010-07-10 07:22:46 +00:00
|
|
|
pr_cont("[%02x] ", i * 4);
|
|
|
|
pr_cont(" %08x", be32_to_cpup(mailbox->buf + i * 4));
|
2007-05-09 01:00:38 +00:00
|
|
|
if ((i + 1) % 4 == 0)
|
2010-07-10 07:22:46 +00:00
|
|
|
pr_cont("\n");
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
|
|
|
}
|
2014-10-23 12:57:27 +00:00
|
|
|
synchronize_irq(eq->irq);
|
net/mlx4_core: Use tasklet for user-space CQ completion events
Previously, we've fired all our completion callbacks straight from our ISR.
Some of those callbacks were lightweight (for example, mlx4_en's and
IPoIB napi callbacks), but some of them did more work (for example,
the user-space RDMA stack uverbs' completion handler). Besides that,
doing more than the minimal work in ISR is generally considered wrong,
it could even lead to a hard lockup of the system. Since when a lot
of completion events are generated by the hardware, the loop over those
events could be so long, that we'll get into a hard lockup by the system
watchdog.
In order to avoid that, add a new way of invoking completion events
callbacks. In the interrupt itself, we add the CQs which receive completion
event to a per-EQ list and schedule a tasklet. In the tasklet context
we loop over all the CQs in the list and invoke the user callback.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 08:57:53 +00:00
|
|
|
tasklet_disable(&eq->tasklet_ctx.task);
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
mlx4_mtt_cleanup(dev, &eq->mtt);
|
|
|
|
for (i = 0; i < npages; ++i)
|
2011-10-06 16:33:12 +00:00
|
|
|
dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
|
2007-05-09 01:00:38 +00:00
|
|
|
eq->page_list[i].buf,
|
|
|
|
eq->page_list[i].map);
|
|
|
|
|
|
|
|
kfree(eq->page_list);
|
mlx4_core: Roll back round robin bitmap allocation commit for CQs, SRQs, and MPTs
Commit f4ec9e9 "mlx4_core: Change bitmap allocator to work in round-robin fashion"
introduced round-robin allocation (via bitmap) for all resources which allocate
via a bitmap.
Round robin allocation is desirable for mcgs, counters, pd's, UARs, and xrcds.
These are simply numbers, with no involvement of ICM memory mapping.
Round robin is required for QPs, since we had a problem with immediate
reuse of a 24-bit QP number (commit f4ec9e9).
However, for other resources which use the bitmap allocator and involve
mapping ICM memory -- MPTs, CQs, SRQs -- round-robin is not desirable.
What happens in these cases is the following:
ICM memory is allocated and mapped in chunks of 256K.
Since the resource allocation index goes up monotonically, the allocator
will eventually require mapping a new chunk. Now, chunks are also unmapped
when their reference count goes back to zero. Thus, if a single app is
running and starts/exits frequently we will have the following situation:
When the app starts, a new chunk must be allocated and mapped.
When the app exits, the chunk reference count goes back to zero, and the
chunk is unmapped and freed. Therefore, the app must pay the cost of allocation
and mapping of ICM memory each time it runs (although the price is paid only when
allocating the initial entry in the new chunk).
For apps which allocate MPTs/SRQs/CQs and which operate as described above,
this presented a performance problem.
We therefore roll back the round-robin allocator modification for MPTs, CQs, SRQs.
Reported-by: Matthew Finlay <matt@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-12-08 14:50:17 +00:00
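To make the allocation-policy difference above concrete, here is a small
standalone C sketch of a bitmap allocator that can search either from the
point where the last allocation stopped (round-robin) or always from index
zero. It is only an illustration; simple_bitmap, sb_alloc and sb_free are
invented names and this is not the driver's mlx4_bitmap implementation:

#include <stdint.h>
#include <stdio.h>

#define SB_MAX_ENTRIES 1024
#define SB_WORDS (SB_MAX_ENTRIES / 64)

struct simple_bitmap {		/* illustrative only, not mlx4_bitmap */
	uint64_t map[SB_WORDS];
	int size;		/* number of usable entries (<= SB_MAX_ENTRIES) */
	int last;		/* where a round-robin search resumes */
};

static int sb_test_and_set(struct simple_bitmap *b, int i)
{
	uint64_t mask = 1ULL << (i & 63);

	if (b->map[i >> 6] & mask)
		return 0;
	b->map[i >> 6] |= mask;
	return 1;
}

/*
 * With use_rr the search resumes after the previous allocation, so indices
 * climb monotonically and are not reused immediately (the property wanted
 * for QP numbers).  Without it the lowest free index wins, which keeps
 * allocations packed into already-mapped ICM chunks.
 */
static int sb_alloc(struct simple_bitmap *b, int use_rr)
{
	int start = use_rr ? b->last : 0;
	int n;

	for (n = 0; n < b->size; ++n) {
		int idx = (start + n) % b->size;

		if (sb_test_and_set(b, idx)) {
			if (use_rr)
				b->last = (idx + 1) % b->size;
			return idx;
		}
	}
	return -1;		/* bitmap exhausted */
}

static void sb_free(struct simple_bitmap *b, int idx)
{
	b->map[idx >> 6] &= ~(1ULL << (idx & 63));
}

int main(void)
{
	struct simple_bitmap b = { .size = 8 };
	int a0 = sb_alloc(&b, 1), a1 = sb_alloc(&b, 1);

	sb_free(&b, a0);
	/* Round-robin: index a0 is not handed out again right away. */
	printf("%d %d %d\n", a0, a1, sb_alloc(&b, 1));	/* prints 0 1 2 */
	return 0;
}

In round-robin mode indices keep climbing, which is what prevents immediate
reuse of QP numbers; in the from-zero mode freed low indices are reused
first, so allocations stay packed in the ICM chunks that are already mapped.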
|
|
|
mlx4_bitmap_free(&priv->eq_table.bitmap, eq->eqn, MLX4_USE_RR);
|
2007-05-09 01:00:38 +00:00
|
|
|
mlx4_free_cmd_mailbox(dev, mailbox);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_free_irqs(struct mlx4_dev *dev)
{
	struct mlx4_eq_table *eq_table = &mlx4_priv(dev)->eq_table;
	struct mlx4_priv *priv = mlx4_priv(dev);
	int i, vec;

	if (eq_table->have_irq)
		free_irq(dev->pdev->irq, dev);

	for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
		if (eq_table->eq[i].have_irq) {
			free_irq(eq_table->eq[i].irq, eq_table->eq + i);
			eq_table->eq[i].have_irq = 0;
		}

	for (i = 0; i < dev->caps.comp_pool; i++) {
		/*
		 * Free the IRQs assigned from the pool.  All bits should
		 * already be clear by now, but validate before freeing.
		 */
		if (priv->msix_ctl.pool_bm & 1ULL << i) {
			/* No locking needed here */
			vec = dev->caps.num_comp_vectors + 1 + i;
			free_irq(priv->eq_table.eq[vec].irq,
				 &priv->eq_table.eq[vec]);
		}
	}

	kfree(eq_table->irq_names);
}

static int mlx4_map_clr_int(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);

	priv->clr_base = ioremap(pci_resource_start(dev->pdev, priv->fw.clr_int_bar) +
				 priv->fw.clr_int_base, MLX4_CLR_INT_SIZE);
	if (!priv->clr_base) {
		mlx4_err(dev, "Couldn't map interrupt clear register, aborting\n");
		return -ENOMEM;
	}

	return 0;
}

static void mlx4_unmap_clr_int(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);

	iounmap(priv->clr_base);
}

int mlx4_alloc_eq_table(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);

	priv->eq_table.eq = kcalloc(dev->caps.num_eqs - dev->caps.reserved_eqs,
				    sizeof *priv->eq_table.eq, GFP_KERNEL);
	if (!priv->eq_table.eq)
		return -ENOMEM;

	return 0;
}

void mlx4_free_eq_table(struct mlx4_dev *dev)
{
	kfree(mlx4_priv(dev)->eq_table.eq);
}

int mlx4_init_eq_table(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int err;
	int i;

	priv->eq_table.uar_map = kcalloc(mlx4_num_eq_uar(dev),
					 sizeof *priv->eq_table.uar_map,
					 GFP_KERNEL);
	if (!priv->eq_table.uar_map) {
		err = -ENOMEM;
		goto err_out_free;
	}

	err = mlx4_bitmap_init(&priv->eq_table.bitmap,
			       roundup_pow_of_two(dev->caps.num_eqs),
			       dev->caps.num_eqs - 1,
			       dev->caps.reserved_eqs,
			       roundup_pow_of_two(dev->caps.num_eqs) -
			       dev->caps.num_eqs);
	if (err)
		goto err_out_free;

	for (i = 0; i < mlx4_num_eq_uar(dev); ++i)
		priv->eq_table.uar_map[i] = NULL;

	if (!mlx4_is_slave(dev)) {
		err = mlx4_map_clr_int(dev);
		if (err)
			goto err_out_bitmap;

		priv->eq_table.clr_mask =
			swab32(1 << (priv->eq_table.inta_pin & 31));
		priv->eq_table.clr_int = priv->clr_base +
			(priv->eq_table.inta_pin < 32 ? 4 : 0);
	}

	priv->eq_table.irq_names =
		kmalloc(MLX4_IRQNAME_SIZE * (dev->caps.num_comp_vectors + 1 +
					     dev->caps.comp_pool),
			GFP_KERNEL);
	if (!priv->eq_table.irq_names) {
		err = -ENOMEM;
		goto err_out_bitmap;
	}

	for (i = 0; i < dev->caps.num_comp_vectors; ++i) {
		err = mlx4_create_eq(dev, dev->caps.num_cqs -
					  dev->caps.reserved_cqs +
					  MLX4_NUM_SPARE_EQE,
				     (dev->flags & MLX4_FLAG_MSI_X) ? i : 0,
				     &priv->eq_table.eq[i]);
		if (err) {
			--i;
			goto err_out_unmap;
		}
	}

	err = mlx4_create_eq(dev, MLX4_NUM_ASYNC_EQE + MLX4_NUM_SPARE_EQE,
			     (dev->flags & MLX4_FLAG_MSI_X) ? dev->caps.num_comp_vectors : 0,
			     &priv->eq_table.eq[dev->caps.num_comp_vectors]);
	if (err)
		goto err_out_comp;

	/* If the additional completion vector pool size is 0, this loop does not run */
	for (i = dev->caps.num_comp_vectors + 1;
	      i < dev->caps.num_comp_vectors + dev->caps.comp_pool + 1; ++i) {

		err = mlx4_create_eq(dev, dev->caps.num_cqs -
					  dev->caps.reserved_cqs +
					  MLX4_NUM_SPARE_EQE,
				     (dev->flags & MLX4_FLAG_MSI_X) ? i : 0,
				     &priv->eq_table.eq[i]);
		if (err) {
			--i;
			goto err_out_unmap;
		}
	}

	if (dev->flags & MLX4_FLAG_MSI_X) {
		const char *eq_name;

		for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i) {
			if (i < dev->caps.num_comp_vectors) {
				snprintf(priv->eq_table.irq_names +
					 i * MLX4_IRQNAME_SIZE,
					 MLX4_IRQNAME_SIZE,
					 "mlx4-comp-%d@pci:%s", i,
					 pci_name(dev->pdev));
			} else {
				snprintf(priv->eq_table.irq_names +
					 i * MLX4_IRQNAME_SIZE,
					 MLX4_IRQNAME_SIZE,
					 "mlx4-async@pci:%s",
					 pci_name(dev->pdev));
			}

			eq_name = priv->eq_table.irq_names +
				  i * MLX4_IRQNAME_SIZE;
			err = request_irq(priv->eq_table.eq[i].irq,
					  mlx4_msi_x_interrupt, 0, eq_name,
					  priv->eq_table.eq + i);
			if (err)
				goto err_out_async;

			priv->eq_table.eq[i].have_irq = 1;
		}
	} else {
		snprintf(priv->eq_table.irq_names,
			 MLX4_IRQNAME_SIZE,
			 DRV_NAME "@pci:%s",
			 pci_name(dev->pdev));
		err = request_irq(dev->pdev->irq, mlx4_interrupt,
				  IRQF_SHARED, priv->eq_table.irq_names, dev);
		if (err)
			goto err_out_async;

		priv->eq_table.have_irq = 1;
	}

mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the FW-generated-event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accommodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
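The prototype change mentioned above (the last argument widened to unsigned
long) is easiest to see in a tiny sketch. Everything below is illustrative:
demo_eqe, demo_event_handler and demo_dispatch are made-up names, and the
struct is not the real mlx4_eqe layout:

/* Sketch only: shows why the dispatch "param" became an unsigned long. */
#include <linux/types.h>

struct demo_eqe {			/* placeholder for struct mlx4_eqe */
	u8 type;
	u8 subtype;
	u8 data[30];
};

enum demo_event { DEMO_EVENT_PORT_MGMT_CHANGE = 1 };

/* With "param" widened to unsigned long, a small scalar (e.g. a port
 * number) and a pointer-sized value (the EQE itself) can share one
 * handler prototype. */
static void demo_event_handler(enum demo_event event, unsigned long param)
{
	if (event == DEMO_EVENT_PORT_MGMT_CHANGE) {
		struct demo_eqe *eqe = (struct demo_eqe *)param;
		/* decode eqe->subtype: LID change, client re-register, ... */
		(void)eqe;
	}
}

static void demo_dispatch(struct demo_eqe *eqe)
{
	demo_event_handler(DEMO_EVENT_PORT_MGMT_CHANGE, (unsigned long)eqe);
}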
	err = mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
			  priv->eq_table.eq[dev->caps.num_comp_vectors].eqn);
	if (err)
		mlx4_warn(dev, "MAP_EQ for async EQ %d failed (%d)\n",
			  priv->eq_table.eq[dev->caps.num_comp_vectors].eqn, err);

	for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
		eq_set_ci(&priv->eq_table.eq[i], 1);

	return 0;

err_out_async:
	mlx4_free_eq(dev, &priv->eq_table.eq[dev->caps.num_comp_vectors]);

err_out_comp:
	i = dev->caps.num_comp_vectors - 1;

err_out_unmap:
	while (i >= 0) {
		mlx4_free_eq(dev, &priv->eq_table.eq[i]);
		--i;
	}
	if (!mlx4_is_slave(dev))
		mlx4_unmap_clr_int(dev);
	mlx4_free_irqs(dev);

err_out_bitmap:
	mlx4_unmap_uar(dev);
	mlx4_bitmap_cleanup(&priv->eq_table.bitmap);

err_out_free:
	kfree(priv->eq_table.uar_map);

	return err;
}

void mlx4_cleanup_eq_table(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int i;

	mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 1,
		    priv->eq_table.eq[dev->caps.num_comp_vectors].eqn);

	mlx4_free_irqs(dev);

	for (i = 0; i < dev->caps.num_comp_vectors + dev->caps.comp_pool + 1; ++i)
		mlx4_free_eq(dev, &priv->eq_table.eq[i]);

	if (!mlx4_is_slave(dev))
		mlx4_unmap_clr_int(dev);

	mlx4_unmap_uar(dev);
	mlx4_bitmap_cleanup(&priv->eq_table.bitmap);

	kfree(priv->eq_table.uar_map);
}

/* A test that verifies that we can accept interrupts on all
 * the irq vectors of the device.
 * Interrupts are checked using the NOP command.
 */
int mlx4_test_interrupts(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int i;
	int err;

	err = mlx4_NOP(dev);
	/* When not in MSI_X, there is only one irq to check */
	if (!(dev->flags & MLX4_FLAG_MSI_X) || mlx4_is_slave(dev))
		return err;

	/* Loop over all completion vectors; for each vector, check that it
	 * works by mapping command completions to it and issuing a NOP
	 * command.
	 */
	for (i = 0; !err && (i < dev->caps.num_comp_vectors); ++i) {
		/* Temporarily use polling for command completions */
		mlx4_cmd_use_polling(dev);

		/* Map the new eq to handle all asynchronous events */
		err = mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
				  priv->eq_table.eq[i].eqn);
		if (err) {
			mlx4_warn(dev, "Failed mapping eq for interrupt test\n");
			mlx4_cmd_use_events(dev);
			break;
		}

		/* Go back to using events */
		mlx4_cmd_use_events(dev);
		err = mlx4_NOP(dev);
	}

	/* Return to default */
	mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
		    priv->eq_table.eq[dev->caps.num_comp_vectors].eqn);
	return err;
}
EXPORT_SYMBOL(mlx4_test_interrupts);

int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap,
		   int *vector)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int vec = 0, err = 0, i;

	mutex_lock(&priv->msix_ctl.pool_lock);
	for (i = 0; !vec && i < dev->caps.comp_pool; i++) {
		if (~priv->msix_ctl.pool_bm & 1ULL << i) {
			priv->msix_ctl.pool_bm |= 1ULL << i;
			vec = dev->caps.num_comp_vectors + 1 + i;
			snprintf(priv->eq_table.irq_names +
				 vec * MLX4_IRQNAME_SIZE,
				 MLX4_IRQNAME_SIZE, "%s", name);
#ifdef CONFIG_RFS_ACCEL
			if (rmap) {
				err = irq_cpu_rmap_add(rmap,
						       priv->eq_table.eq[vec].irq);
				if (err)
					mlx4_warn(dev, "Failed adding irq rmap\n");
			}
#endif
			err = request_irq(priv->eq_table.eq[vec].irq,
					  mlx4_msi_x_interrupt, 0,
					  &priv->eq_table.irq_names[vec << 5],
					  priv->eq_table.eq + vec);
			if (err) {
				/* Clear the bit we just set by flipping it back */
				priv->msix_ctl.pool_bm ^= 1ULL << i;
				vec = 0;
				/* Don't break here; try the next pool entry */
				continue;
			}

			eq_set_ci(&priv->eq_table.eq[vec], 1);
		}
	}
	mutex_unlock(&priv->msix_ctl.pool_lock);

	if (vec) {
		*vector = vec;
	} else {
		*vector = 0;
		err = (i == dev->caps.comp_pool) ? -ENOSPC : err;
	}
	return err;
}
EXPORT_SYMBOL(mlx4_assign_eq);
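
For context, a caller is expected to pair mlx4_assign_eq() with
mlx4_release_eq(). The sketch below shows a hypothetical consumer;
demo_setup_ring_vector and demo_teardown_ring_vector are invented names,
and the fallback policy is only an example of what a NIC ring setup might
choose:

/* Hypothetical consumer sketch, not taken from an in-tree user. */
static int demo_setup_ring_vector(struct mlx4_dev *dev, int ring_index,
				  int *vector, bool *assigned)
{
	char name[MLX4_IRQNAME_SIZE];

	snprintf(name, sizeof(name), "demo-rx-%d@pci:%s", ring_index,
		 pci_name(dev->pdev));

	/* Try to grab a dedicated vector from the pool. */
	*assigned = !mlx4_assign_eq(dev, name, NULL, vector);
	if (!*assigned) {
		/* Pool exhausted (-ENOSPC) or request_irq() failed:
		 * fall back to one of the shared completion vectors.
		 */
		*vector = ring_index % dev->caps.num_comp_vectors;
	}

	return 0;
}

static void demo_teardown_ring_vector(struct mlx4_dev *dev, int vector,
				      bool assigned)
{
	/* Only pool vectors were assigned, so only those are released. */
	if (assigned)
		mlx4_release_eq(dev, vector);
}
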
int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec)
{
	struct mlx4_priv *priv = mlx4_priv(dev);

	return priv->eq_table.eq[vec].irq;
}
EXPORT_SYMBOL(mlx4_eq_get_irq);

void mlx4_release_eq(struct mlx4_dev *dev, int vec)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	/* Index into the pool bitmap */
	int i = vec - dev->caps.num_comp_vectors - 1;

	if (likely(i >= 0)) {
		/* Sanity check: make sure we are not trying to free IRQs
		 * belonging to a legacy EQ.
		 */
		mutex_lock(&priv->msix_ctl.pool_lock);
		if (priv->msix_ctl.pool_bm & 1ULL << i) {
			free_irq(priv->eq_table.eq[vec].irq,
				 &priv->eq_table.eq[vec]);
			priv->msix_ctl.pool_bm &= ~(1ULL << i);
		}
		mutex_unlock(&priv->msix_ctl.pool_lock);
	}
}
EXPORT_SYMBOL(mlx4_release_eq);