2007-05-09 01:00:38 +00:00
|
|
|
/*
|
|
|
|
* Copyright (c) 2007 Cisco Systems, Inc. All rights reserved.
|
|
|
|
*
|
|
|
|
* This software is available to you under a choice of one of two
|
|
|
|
* licenses. You may choose to be licensed under the terms of the GNU
|
|
|
|
* General Public License (GPL) Version 2, available from the file
|
|
|
|
* COPYING in the main directory of this source tree, or the
|
|
|
|
* OpenIB.org BSD license below:
|
|
|
|
*
|
|
|
|
* Redistribution and use in source and binary forms, with or
|
|
|
|
* without modification, are permitted provided that the following
|
|
|
|
* conditions are met:
|
|
|
|
*
|
|
|
|
* - Redistributions of source code must retain the above
|
|
|
|
* copyright notice, this list of conditions and the following
|
|
|
|
* disclaimer.
|
|
|
|
*
|
|
|
|
* - Redistributions in binary form must reproduce the above
|
|
|
|
* copyright notice, this list of conditions and the following
|
|
|
|
* disclaimer in the documentation and/or other materials
|
|
|
|
* provided with the distribution.
|
|
|
|
*
|
|
|
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
|
|
|
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
|
|
|
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
|
|
|
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
|
|
|
|
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
|
|
|
|
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
|
|
|
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
|
|
|
* SOFTWARE.
|
|
|
|
*/
|
|
|
|
|
|
|
|
#include <rdma/ib_mad.h>
|
|
|
|
#include <rdma/ib_smi.h>
|
2012-08-03 08:40:44 +00:00
|
|
|
#include <rdma/ib_sa.h>
|
|
|
|
#include <rdma/ib_cache.h>
|
2007-05-09 01:00:38 +00:00
|
|
|
|
2012-08-03 08:40:56 +00:00
|
|
|
#include <linux/random.h>
|
2007-05-09 01:00:38 +00:00
|
|
|
#include <linux/mlx4/cmd.h>
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities include those
headers directly instead of assuming availability. As this conversion
needs to touch large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the followings.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and try to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
wildly available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build test were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable easily on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-24 08:04:11 +00:00
|
|
|
#include <linux/gfp.h>
|
2011-06-15 14:51:27 +00:00
|
|
|
#include <rdma/ib_pma.h>
|
2007-05-09 01:00:38 +00:00
|
|
|
|
2015-12-06 16:07:42 +00:00
|
|
|
#include <linux/mlx4/driver.h>
|
2007-05-09 01:00:38 +00:00
|
|
|
#include "mlx4_ib.h"
|
|
|
|
|
|
|
|
enum {
|
|
|
|
MLX4_IB_VENDOR_CLASS1 = 0x9,
|
|
|
|
MLX4_IB_VENDOR_CLASS2 = 0xa
|
|
|
|
};
|
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
#define MLX4_TUN_SEND_WRID_SHIFT 34
|
|
|
|
#define MLX4_TUN_QPN_SHIFT 32
|
|
|
|
#define MLX4_TUN_WRID_RECV (((u64) 1) << MLX4_TUN_SEND_WRID_SHIFT)
|
|
|
|
#define MLX4_TUN_SET_WRID_QPN(a) (((u64) ((a) & 0x3)) << MLX4_TUN_QPN_SHIFT)
|
|
|
|
|
|
|
|
#define MLX4_TUN_IS_RECV(a) (((a) >> MLX4_TUN_SEND_WRID_SHIFT) & 0x1)
|
|
|
|
#define MLX4_TUN_WRID_QPN(a) (((a) >> MLX4_TUN_QPN_SHIFT) & 0x3)
|
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
/* Port mgmt change event handling */
|
|
|
|
|
|
|
|
#define GET_BLK_PTR_FROM_EQE(eqe) be32_to_cpu(eqe->event.port_mgmt_change.params.tbl_change_info.block_ptr)
|
|
|
|
#define GET_MASK_FROM_EQE(eqe) be32_to_cpu(eqe->event.port_mgmt_change.params.tbl_change_info.tbl_entries_mask)
|
|
|
|
#define NUM_IDX_IN_PKEY_TBL_BLK 32
|
|
|
|
#define GUID_TBL_ENTRY_SIZE 8 /* size in bytes */
|
|
|
|
#define GUID_TBL_BLK_NUM_ENTRIES 8
|
|
|
|
#define GUID_TBL_BLK_SIZE (GUID_TBL_ENTRY_SIZE * GUID_TBL_BLK_NUM_ENTRIES)
|
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
struct mlx4_mad_rcv_buf {
|
|
|
|
struct ib_grh grh;
|
|
|
|
u8 payload[256];
|
|
|
|
} __packed;
|
|
|
|
|
|
|
|
struct mlx4_mad_snd_buf {
|
|
|
|
u8 payload[256];
|
|
|
|
} __packed;
|
|
|
|
|
|
|
|
struct mlx4_tunnel_mad {
|
|
|
|
struct ib_grh grh;
|
|
|
|
struct mlx4_ib_tunnel_header hdr;
|
|
|
|
struct ib_mad mad;
|
|
|
|
} __packed;
|
|
|
|
|
|
|
|
struct mlx4_rcv_tunnel_mad {
|
|
|
|
struct mlx4_rcv_tunnel_hdr hdr;
|
|
|
|
struct ib_grh grh;
|
|
|
|
struct ib_mad mad;
|
|
|
|
} __packed;
|
|
|
|
|
2012-08-03 08:40:46 +00:00
|
|
|
static void handle_client_rereg_event(struct mlx4_ib_dev *dev, u8 port_num);
|
2012-08-03 08:40:50 +00:00
|
|
|
static void handle_lid_change_event(struct mlx4_ib_dev *dev, u8 port_num);
|
|
|
|
static void __propagate_pkey_ev(struct mlx4_ib_dev *dev, int port_num,
|
|
|
|
int block, u32 change_bitmap);
|
2012-08-03 08:40:46 +00:00
|
|
|
|
2012-08-03 08:40:56 +00:00
|
|
|
__be64 mlx4_ib_gen_node_guid(void)
|
|
|
|
{
|
|
|
|
#define NODE_GUID_HI ((u64) (((u64)IB_OPENIB_OUI) << 40))
|
2013-05-07 23:18:16 +00:00
|
|
|
return cpu_to_be64(NODE_GUID_HI | prandom_u32());
|
2012-08-03 08:40:56 +00:00
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:46 +00:00
|
|
|
__be64 mlx4_ib_get_new_demux_tid(struct mlx4_ib_demux_ctx *ctx)
|
|
|
|
{
|
|
|
|
return cpu_to_be64(atomic_inc_return(&ctx->tid)) |
|
|
|
|
cpu_to_be64(0xff00000000000000LL);
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:45 +00:00
|
|
|
int mlx4_MAD_IFC(struct mlx4_ib_dev *dev, int mad_ifc_flags,
|
2015-05-31 21:15:30 +00:00
|
|
|
int port, const struct ib_wc *in_wc,
|
|
|
|
const struct ib_grh *in_grh,
|
|
|
|
const void *in_mad, void *response_mad)
|
2007-05-09 01:00:38 +00:00
|
|
|
{
|
|
|
|
struct mlx4_cmd_mailbox *inmailbox, *outmailbox;
|
|
|
|
void *inbox;
|
|
|
|
int err;
|
|
|
|
u32 in_modifier = port;
|
|
|
|
u8 op_modifier = 0;
|
|
|
|
|
|
|
|
inmailbox = mlx4_alloc_cmd_mailbox(dev->dev);
|
|
|
|
if (IS_ERR(inmailbox))
|
|
|
|
return PTR_ERR(inmailbox);
|
|
|
|
inbox = inmailbox->buf;
|
|
|
|
|
|
|
|
outmailbox = mlx4_alloc_cmd_mailbox(dev->dev);
|
|
|
|
if (IS_ERR(outmailbox)) {
|
|
|
|
mlx4_free_cmd_mailbox(dev->dev, inmailbox);
|
|
|
|
return PTR_ERR(outmailbox);
|
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(inbox, in_mad, 256);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Key check traps can't be generated unless we have in_wc to
|
|
|
|
* tell us where to send the trap.
|
|
|
|
*/
|
2012-08-03 08:40:45 +00:00
|
|
|
if ((mad_ifc_flags & MLX4_MAD_IFC_IGNORE_MKEY) || !in_wc)
|
2007-05-09 01:00:38 +00:00
|
|
|
op_modifier |= 0x1;
|
2012-08-03 08:40:45 +00:00
|
|
|
if ((mad_ifc_flags & MLX4_MAD_IFC_IGNORE_BKEY) || !in_wc)
|
2007-05-09 01:00:38 +00:00
|
|
|
op_modifier |= 0x2;
|
2012-08-03 08:40:45 +00:00
|
|
|
if (mlx4_is_mfunc(dev->dev) &&
|
|
|
|
(mad_ifc_flags & MLX4_MAD_IFC_NET_VIEW || in_wc))
|
|
|
|
op_modifier |= 0x8;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
if (in_wc) {
|
|
|
|
struct {
|
|
|
|
__be32 my_qpn;
|
|
|
|
u32 reserved1;
|
|
|
|
__be32 rqpn;
|
|
|
|
u8 sl;
|
|
|
|
u8 g_path;
|
|
|
|
u16 reserved2[2];
|
|
|
|
__be16 pkey;
|
|
|
|
u32 reserved3[11];
|
|
|
|
u8 grh[40];
|
|
|
|
} *ext_info;
|
|
|
|
|
|
|
|
memset(inbox + 256, 0, 256);
|
|
|
|
ext_info = inbox + 256;
|
|
|
|
|
|
|
|
ext_info->my_qpn = cpu_to_be32(in_wc->qp->qp_num);
|
|
|
|
ext_info->rqpn = cpu_to_be32(in_wc->src_qp);
|
|
|
|
ext_info->sl = in_wc->sl << 4;
|
|
|
|
ext_info->g_path = in_wc->dlid_path_bits |
|
|
|
|
(in_wc->wc_flags & IB_WC_GRH ? 0x80 : 0);
|
|
|
|
ext_info->pkey = cpu_to_be16(in_wc->pkey_index);
|
|
|
|
|
|
|
|
if (in_grh)
|
|
|
|
memcpy(ext_info->grh, in_grh, 40);
|
|
|
|
|
|
|
|
op_modifier |= 0x4;
|
|
|
|
|
|
|
|
in_modifier |= in_wc->slid << 16;
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:45 +00:00
|
|
|
err = mlx4_cmd_box(dev->dev, inmailbox->dma, outmailbox->dma, in_modifier,
|
|
|
|
mlx4_is_master(dev->dev) ? (op_modifier & ~0x8) : op_modifier,
|
2011-12-13 04:10:51 +00:00
|
|
|
MLX4_CMD_MAD_IFC, MLX4_CMD_TIME_CLASS_C,
|
2012-08-03 08:40:45 +00:00
|
|
|
(op_modifier & 0x8) ? MLX4_CMD_NATIVE : MLX4_CMD_WRAPPED);
|
2007-05-09 01:00:38 +00:00
|
|
|
|
2007-08-15 22:02:07 +00:00
|
|
|
if (!err)
|
2007-05-09 01:00:38 +00:00
|
|
|
memcpy(response_mad, outmailbox->buf, 256);
|
|
|
|
|
|
|
|
mlx4_free_cmd_mailbox(dev->dev, inmailbox);
|
|
|
|
mlx4_free_cmd_mailbox(dev->dev, outmailbox);
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void update_sm_ah(struct mlx4_ib_dev *dev, u8 port_num, u16 lid, u8 sl)
|
|
|
|
{
|
|
|
|
struct ib_ah *new_ah;
|
|
|
|
struct ib_ah_attr ah_attr;
|
2012-08-03 08:26:45 +00:00
|
|
|
unsigned long flags;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
if (!dev->send_agent[port_num - 1][0])
|
|
|
|
return;
|
|
|
|
|
|
|
|
memset(&ah_attr, 0, sizeof ah_attr);
|
|
|
|
ah_attr.dlid = lid;
|
|
|
|
ah_attr.sl = sl;
|
|
|
|
ah_attr.port_num = port_num;
|
|
|
|
|
|
|
|
new_ah = ib_create_ah(dev->send_agent[port_num - 1][0]->qp->pd,
|
|
|
|
&ah_attr);
|
|
|
|
if (IS_ERR(new_ah))
|
|
|
|
return;
|
|
|
|
|
2012-08-03 08:26:45 +00:00
|
|
|
spin_lock_irqsave(&dev->sm_lock, flags);
|
2007-05-09 01:00:38 +00:00
|
|
|
if (dev->sm_ah[port_num - 1])
|
|
|
|
ib_destroy_ah(dev->sm_ah[port_num - 1]);
|
|
|
|
dev->sm_ah[port_num - 1] = new_ah;
|
2012-08-03 08:26:45 +00:00
|
|
|
spin_unlock_irqrestore(&dev->sm_lock, flags);
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
* Snoop SM MADs for port info, GUID info, and P_Key table sets, so we can
|
|
|
|
* synthesize LID change, Client-Rereg, GID change, and P_Key change events.
|
2007-05-09 01:00:38 +00:00
|
|
|
*/
|
2015-05-31 21:15:30 +00:00
|
|
|
static void smp_snoop(struct ib_device *ibdev, u8 port_num, const struct ib_mad *mad,
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
u16 prev_lid)
|
2007-05-09 01:00:38 +00:00
|
|
|
{
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
struct ib_port_info *pinfo;
|
|
|
|
u16 lid;
|
2012-08-03 08:40:43 +00:00
|
|
|
__be16 *base;
|
|
|
|
u32 bn, pkey_change_bitmap;
|
|
|
|
int i;
|
|
|
|
|
2007-05-09 01:00:38 +00:00
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibdev);
|
2007-05-09 01:00:38 +00:00
|
|
|
if ((mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED ||
|
|
|
|
mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) &&
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
mad->mad_hdr.method == IB_MGMT_METHOD_SET)
|
|
|
|
switch (mad->mad_hdr.attr_id) {
|
|
|
|
case IB_SMP_ATTR_PORT_INFO:
|
2016-09-12 16:16:21 +00:00
|
|
|
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_PORT_MNG_CHG_EV)
|
|
|
|
return;
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
pinfo = (struct ib_port_info *) ((struct ib_smp *) mad)->data;
|
|
|
|
lid = be16_to_cpu(pinfo->lid);
|
2007-05-09 01:00:38 +00:00
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
update_sm_ah(dev, port_num,
|
2007-05-09 01:00:38 +00:00
|
|
|
be16_to_cpu(pinfo->sm_lid),
|
|
|
|
pinfo->neighbormtu_mastersmsl & 0xf);
|
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
if (pinfo->clientrereg_resv_subnetto & 0x80)
|
2012-08-03 08:40:46 +00:00
|
|
|
handle_client_rereg_event(dev, port_num);
|
2007-05-09 01:00:38 +00:00
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
if (prev_lid != lid)
|
2012-08-03 08:40:50 +00:00
|
|
|
handle_lid_change_event(dev, port_num);
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
break;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
case IB_SMP_ATTR_PKEY_TABLE:
|
2016-09-12 16:16:21 +00:00
|
|
|
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_PORT_MNG_CHG_EV)
|
|
|
|
return;
|
2012-08-03 08:40:43 +00:00
|
|
|
if (!mlx4_is_mfunc(dev->dev)) {
|
|
|
|
mlx4_ib_dispatch_event(dev, port_num,
|
|
|
|
IB_EVENT_PKEY_CHANGE);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
/* at this point, we are running in the master.
|
|
|
|
* Slaves do not receive SMPs.
|
|
|
|
*/
|
2012-08-03 08:40:43 +00:00
|
|
|
bn = be32_to_cpu(((struct ib_smp *)mad)->attr_mod) & 0xFFFF;
|
|
|
|
base = (__be16 *) &(((struct ib_smp *)mad)->data[0]);
|
|
|
|
pkey_change_bitmap = 0;
|
|
|
|
for (i = 0; i < 32; i++) {
|
|
|
|
pr_debug("PKEY[%d] = x%x\n",
|
|
|
|
i + bn*32, be16_to_cpu(base[i]));
|
|
|
|
if (be16_to_cpu(base[i]) !=
|
|
|
|
dev->pkeys.phys_pkey_cache[port_num - 1][i + bn*32]) {
|
|
|
|
pkey_change_bitmap |= (1 << i);
|
|
|
|
dev->pkeys.phys_pkey_cache[port_num - 1][i + bn*32] =
|
|
|
|
be16_to_cpu(base[i]);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
pr_debug("PKEY Change event: port=%d, "
|
|
|
|
"block=0x%x, change_bitmap=0x%x\n",
|
|
|
|
port_num, bn, pkey_change_bitmap);
|
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
if (pkey_change_bitmap) {
|
2012-08-03 08:40:43 +00:00
|
|
|
mlx4_ib_dispatch_event(dev, port_num,
|
|
|
|
IB_EVENT_PKEY_CHANGE);
|
2012-08-03 08:40:50 +00:00
|
|
|
if (!dev->sriov.is_going_down)
|
|
|
|
__propagate_pkey_ev(dev, port_num, bn,
|
|
|
|
pkey_change_bitmap);
|
|
|
|
}
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
break;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
case IB_SMP_ATTR_GUID_INFO:
|
2016-09-12 16:16:21 +00:00
|
|
|
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_PORT_MNG_CHG_EV)
|
|
|
|
return;
|
mlx4: Put physical GID and P_Key table sizes in mlx4_phys_caps struct and paravirtualize them
To allow easy paravirtualization of P_Key and GID table sizes, keep
paravirtualized sizes in mlx4_dev->caps, but save the actual physical
sizes from FW in struct: mlx4_dev->phys_cap.
In addition, in SR-IOV mode, do the following:
1. Reduce reported P_Key table size by 1.
This is done to reserve the highest P_Key index for internal use,
for declaring an invalid P_Key in P_Key paravirtualization.
We require a P_Key index which always contain an invalid P_Key
value for this purpose (i.e., one which cannot be modified by
the subnet manager). The way to do this is to reduce the
P_Key table size reported to the subnet manager by 1, so that
it will not attempt to access the P_Key at index #127.
2. Paravirtualize the GID table size to 1. Thus, each guest sees
only a single GID (at its paravirtualized index 0).
In addition, since we are paravirtualizing the GID table size to 1, we
add paravirtualization of the master GID event here (i.e., we do not
do ib_dispatch_event() for the GUID change event on the master, since
its (only) GUID never changes).
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:44 +00:00
|
|
|
/* paravirtualized master's guid is guid 0 -- does not change */
|
|
|
|
if (!mlx4_is_master(dev->dev))
|
|
|
|
mlx4_ib_dispatch_event(dev, port_num,
|
|
|
|
IB_EVENT_GID_CHANGE);
|
2012-08-03 08:40:50 +00:00
|
|
|
/*if master, notify relevant slaves*/
|
|
|
|
if (mlx4_is_master(dev->dev) &&
|
|
|
|
!dev->sriov.is_going_down) {
|
|
|
|
bn = be32_to_cpu(((struct ib_smp *)mad)->attr_mod);
|
|
|
|
mlx4_ib_update_cache_on_guid_change(dev, bn, port_num,
|
|
|
|
(u8 *)(&((struct ib_smp *)mad)->data));
|
|
|
|
mlx4_ib_notify_slaves_on_guid_change(dev, bn, port_num,
|
|
|
|
(u8 *)(&((struct ib_smp *)mad)->data));
|
|
|
|
}
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
break;
|
2012-08-03 08:40:50 +00:00
|
|
|
|
2016-09-12 16:16:21 +00:00
|
|
|
case IB_SMP_ATTR_SL_TO_VL_TABLE:
|
|
|
|
/* cache sl to vl mapping changes for use in
|
|
|
|
* filling QP1 LRH VL field when sending packets
|
|
|
|
*/
|
|
|
|
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_PORT_MNG_CHG_EV &&
|
|
|
|
dev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_SL_TO_VL_CHANGE_EVENT)
|
|
|
|
return;
|
|
|
|
if (!mlx4_is_slave(dev->dev)) {
|
|
|
|
union sl2vl_tbl_to_u64 sl2vl64;
|
|
|
|
int jj;
|
|
|
|
|
|
|
|
for (jj = 0; jj < 8; jj++) {
|
|
|
|
sl2vl64.sl8[jj] = ((struct ib_smp *)mad)->data[jj];
|
|
|
|
pr_debug("port %u, sl2vl[%d] = %02x\n",
|
|
|
|
port_num, jj, sl2vl64.sl8[jj]);
|
|
|
|
}
|
|
|
|
atomic64_set(&dev->sl2vl[port_num - 1], sl2vl64.sl64);
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
default:
|
|
|
|
break;
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
static void __propagate_pkey_ev(struct mlx4_ib_dev *dev, int port_num,
|
|
|
|
int block, u32 change_bitmap)
|
|
|
|
{
|
|
|
|
int i, ix, slave, err;
|
|
|
|
int have_event = 0;
|
|
|
|
|
|
|
|
for (slave = 0; slave < dev->dev->caps.sqp_demux; slave++) {
|
|
|
|
if (slave == mlx4_master_func_num(dev->dev))
|
|
|
|
continue;
|
|
|
|
if (!mlx4_is_slave_active(dev->dev, slave))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
have_event = 0;
|
|
|
|
for (i = 0; i < 32; i++) {
|
|
|
|
if (!(change_bitmap & (1 << i)))
|
|
|
|
continue;
|
|
|
|
for (ix = 0;
|
|
|
|
ix < dev->dev->caps.pkey_table_len[port_num]; ix++) {
|
|
|
|
if (dev->pkeys.virt2phys_pkey[slave][port_num - 1]
|
|
|
|
[ix] == i + 32 * block) {
|
|
|
|
err = mlx4_gen_pkey_eqe(dev->dev, slave, port_num);
|
|
|
|
pr_debug("propagate_pkey_ev: slave %d,"
|
|
|
|
" port %d, ix %d (%d)\n",
|
|
|
|
slave, port_num, ix, err);
|
|
|
|
have_event = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (have_event)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-05-09 01:00:38 +00:00
|
|
|
static void node_desc_override(struct ib_device *dev,
|
|
|
|
struct ib_mad *mad)
|
|
|
|
{
|
2012-08-03 08:26:45 +00:00
|
|
|
unsigned long flags;
|
|
|
|
|
2007-05-09 01:00:38 +00:00
|
|
|
if ((mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED ||
|
|
|
|
mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) &&
|
|
|
|
mad->mad_hdr.method == IB_MGMT_METHOD_GET_RESP &&
|
|
|
|
mad->mad_hdr.attr_id == IB_SMP_ATTR_NODE_DESC) {
|
2012-08-03 08:26:45 +00:00
|
|
|
spin_lock_irqsave(&to_mdev(dev)->sm_lock, flags);
|
2016-08-25 17:57:07 +00:00
|
|
|
memcpy(((struct ib_smp *) mad)->data, dev->node_desc,
|
|
|
|
IB_DEVICE_NODE_DESC_MAX);
|
2012-08-03 08:26:45 +00:00
|
|
|
spin_unlock_irqrestore(&to_mdev(dev)->sm_lock, flags);
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-05-31 21:15:30 +00:00
|
|
|
static void forward_trap(struct mlx4_ib_dev *dev, u8 port_num, const struct ib_mad *mad)
|
2007-05-09 01:00:38 +00:00
|
|
|
{
|
|
|
|
int qpn = mad->mad_hdr.mgmt_class != IB_MGMT_CLASS_SUBN_LID_ROUTED;
|
|
|
|
struct ib_mad_send_buf *send_buf;
|
|
|
|
struct ib_mad_agent *agent = dev->send_agent[port_num - 1][qpn];
|
|
|
|
int ret;
|
2012-08-03 08:26:45 +00:00
|
|
|
unsigned long flags;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
if (agent) {
|
|
|
|
send_buf = ib_create_send_mad(agent, qpn, 0, 0, IB_MGMT_MAD_HDR,
|
2015-06-06 18:38:28 +00:00
|
|
|
IB_MGMT_MAD_DATA, GFP_ATOMIC,
|
|
|
|
IB_MGMT_BASE_VERSION);
|
2011-01-11 01:42:06 +00:00
|
|
|
if (IS_ERR(send_buf))
|
|
|
|
return;
|
2007-05-09 01:00:38 +00:00
|
|
|
/*
|
|
|
|
* We rely here on the fact that MLX QPs don't use the
|
|
|
|
* address handle after the send is posted (this is
|
|
|
|
* wrong following the IB spec strictly, but we know
|
|
|
|
* it's OK for our devices).
|
|
|
|
*/
|
2012-08-03 08:26:45 +00:00
|
|
|
spin_lock_irqsave(&dev->sm_lock, flags);
|
2007-05-09 01:00:38 +00:00
|
|
|
memcpy(send_buf->mad, mad, sizeof *mad);
|
|
|
|
if ((send_buf->ah = dev->sm_ah[port_num - 1]))
|
|
|
|
ret = ib_post_send_mad(send_buf, NULL);
|
|
|
|
else
|
|
|
|
ret = -EINVAL;
|
2012-08-03 08:26:45 +00:00
|
|
|
spin_unlock_irqrestore(&dev->sm_lock, flags);
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
if (ret)
|
|
|
|
ib_free_send_mad(send_buf);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
static int mlx4_ib_demux_sa_handler(struct ib_device *ibdev, int port, int slave,
|
|
|
|
struct ib_sa_mad *sa_mad)
|
|
|
|
{
|
2012-08-03 08:40:46 +00:00
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
/* dispatch to different sa handlers */
|
|
|
|
switch (be16_to_cpu(sa_mad->mad_hdr.attr_id)) {
|
|
|
|
case IB_SA_ATTR_MC_MEMBER_REC:
|
|
|
|
ret = mlx4_ib_mcg_demux_handler(ibdev, port, slave, sa_mad);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return ret;
|
2012-08-03 08:40:44 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_find_real_gid(struct ib_device *ibdev, u8 port, __be64 guid)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibdev);
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < dev->dev->caps.sqp_demux; i++) {
|
|
|
|
if (dev->sriov.demux[port - 1].guid_cache[i] == guid)
|
|
|
|
return i;
|
|
|
|
}
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2012-10-17 16:42:58 +00:00
|
|
|
static int find_slave_port_pkey_ix(struct mlx4_ib_dev *dev, int slave,
|
|
|
|
u8 port, u16 pkey, u16 *ix)
|
2012-08-03 08:40:44 +00:00
|
|
|
{
|
2012-10-17 16:42:58 +00:00
|
|
|
int i, ret;
|
|
|
|
u8 unassigned_pkey_ix, pkey_ix, partial_ix = 0xFF;
|
|
|
|
u16 slot_pkey;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
2012-10-17 16:42:58 +00:00
|
|
|
if (slave == mlx4_master_func_num(dev->dev))
|
|
|
|
return ib_find_cached_pkey(&dev->ib_dev, port, pkey, ix);
|
2012-08-03 08:40:44 +00:00
|
|
|
|
2012-10-17 16:42:58 +00:00
|
|
|
unassigned_pkey_ix = dev->dev->phys_caps.pkey_phys_table_len[port] - 1;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
2012-10-17 16:42:58 +00:00
|
|
|
for (i = 0; i < dev->dev->caps.pkey_table_len[port]; i++) {
|
|
|
|
if (dev->pkeys.virt2phys_pkey[slave][port - 1][i] == unassigned_pkey_ix)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
pkey_ix = dev->pkeys.virt2phys_pkey[slave][port - 1][i];
|
|
|
|
|
|
|
|
ret = ib_get_cached_pkey(&dev->ib_dev, port, pkey_ix, &slot_pkey);
|
|
|
|
if (ret)
|
|
|
|
continue;
|
|
|
|
if ((slot_pkey & 0x7FFF) == (pkey & 0x7FFF)) {
|
|
|
|
if (slot_pkey & 0x8000) {
|
|
|
|
*ix = (u16) pkey_ix;
|
|
|
|
return 0;
|
|
|
|
} else {
|
|
|
|
/* take first partial pkey index found */
|
|
|
|
if (partial_ix == 0xFF)
|
|
|
|
partial_ix = pkey_ix;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (partial_ix < 0xFF) {
|
|
|
|
*ix = (u16) partial_ix;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return -EINVAL;
|
2012-08-03 08:40:44 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port,
|
|
|
|
enum ib_qp_type dest_qpt, struct ib_wc *wc,
|
|
|
|
struct ib_grh *grh, struct ib_mad *mad)
|
|
|
|
{
|
|
|
|
struct ib_sge list;
|
2015-10-08 08:16:33 +00:00
|
|
|
struct ib_ud_wr wr;
|
|
|
|
struct ib_send_wr *bad_wr;
|
2012-08-03 08:40:44 +00:00
|
|
|
struct mlx4_ib_demux_pv_ctx *tun_ctx;
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp;
|
|
|
|
struct mlx4_rcv_tunnel_mad *tun_mad;
|
|
|
|
struct ib_ah_attr attr;
|
|
|
|
struct ib_ah *ah;
|
|
|
|
struct ib_qp *src_qp = NULL;
|
|
|
|
unsigned tun_tx_ix = 0;
|
|
|
|
int dqpn;
|
|
|
|
int ret = 0;
|
|
|
|
u16 tun_pkey_ix;
|
2012-10-17 16:42:58 +00:00
|
|
|
u16 cached_pkey;
|
2014-03-12 10:00:37 +00:00
|
|
|
u8 is_eth = dev->dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
|
|
|
if (dest_qpt > IB_QPT_GSI)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tun_ctx = dev->sriov.demux[port-1].tun[slave];
|
|
|
|
|
|
|
|
/* check if proxy qp created */
|
|
|
|
if (!tun_ctx || tun_ctx->state != DEMUX_PV_STATE_ACTIVE)
|
|
|
|
return -EAGAIN;
|
|
|
|
|
|
|
|
if (!dest_qpt)
|
|
|
|
tun_qp = &tun_ctx->qp[0];
|
|
|
|
else
|
|
|
|
tun_qp = &tun_ctx->qp[1];
|
|
|
|
|
2012-10-17 16:42:58 +00:00
|
|
|
/* compute P_Key index to put in tunnel header for slave */
|
2012-08-03 08:40:44 +00:00
|
|
|
if (dest_qpt) {
|
2012-10-17 16:42:58 +00:00
|
|
|
u16 pkey_ix;
|
|
|
|
ret = ib_get_cached_pkey(&dev->ib_dev, port, wc->pkey_index, &cached_pkey);
|
2012-08-03 08:40:44 +00:00
|
|
|
if (ret)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2012-10-17 16:42:58 +00:00
|
|
|
ret = find_slave_port_pkey_ix(dev, slave, port, cached_pkey, &pkey_ix);
|
|
|
|
if (ret)
|
2012-08-03 08:40:44 +00:00
|
|
|
return -EINVAL;
|
2012-10-17 16:42:58 +00:00
|
|
|
tun_pkey_ix = pkey_ix;
|
2012-08-03 08:40:44 +00:00
|
|
|
} else
|
|
|
|
tun_pkey_ix = dev->pkeys.virt2phys_pkey[slave][port - 1][0];
|
|
|
|
|
mlx4: Modify proxy/tunnel QP mechanism so that guests do no calculations
Previously, the structure of a guest's proxy QPs followed the
structure of the PPF special qps (qp0 port 1, qp0 port 2, qp1 port 1,
qp1 port 2, ...). The guest then did offset calculations on the
sqp_base qp number that the PPF passed to it in QUERY_FUNC_CAP().
This is now changed so that the guest does no offset calculations
regarding proxy or tunnel QPs to use. This change frees the PPF from
needing to adhere to a specific order in allocating proxy and tunnel
QPs.
Now QUERY_FUNC_CAP provides each port individually with its proxy
qp0, proxy qp1, tunnel qp0, and tunnel qp1 QP numbers, and these are
used directly where required (with no offset calculations).
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 08:40:57 +00:00
|
|
|
dqpn = dev->dev->phys_caps.base_proxy_sqpn + 8 * slave + port + (dest_qpt * 2) - 1;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
|
|
|
/* get tunnel tx data buf for slave */
|
|
|
|
src_qp = tun_qp->qp;
|
|
|
|
|
|
|
|
/* create ah. Just need an empty one with the port num for the post send.
|
|
|
|
* The driver will set the force loopback bit in post_send */
|
|
|
|
memset(&attr, 0, sizeof attr);
|
|
|
|
attr.port_num = port;
|
2014-03-12 10:00:37 +00:00
|
|
|
if (is_eth) {
|
2014-03-12 10:00:39 +00:00
|
|
|
memcpy(&attr.grh.dgid.raw[0], &grh->dgid.raw[0], 16);
|
2014-03-12 10:00:37 +00:00
|
|
|
attr.ah_flags = IB_AH_GRH;
|
|
|
|
}
|
2012-08-03 08:40:44 +00:00
|
|
|
ah = ib_create_ah(tun_ctx->pd, &attr);
|
|
|
|
if (IS_ERR(ah))
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
/* allocate tunnel tx buf after pass failure returns */
|
|
|
|
spin_lock(&tun_qp->tx_lock);
|
|
|
|
if (tun_qp->tx_ix_head - tun_qp->tx_ix_tail >=
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1))
|
|
|
|
ret = -EAGAIN;
|
|
|
|
else
|
|
|
|
tun_tx_ix = (++tun_qp->tx_ix_head) & (MLX4_NUM_TUNNEL_BUFS - 1);
|
|
|
|
spin_unlock(&tun_qp->tx_lock);
|
|
|
|
if (ret)
|
2016-06-22 14:27:29 +00:00
|
|
|
goto end;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
|
|
|
tun_mad = (struct mlx4_rcv_tunnel_mad *) (tun_qp->tx_ring[tun_tx_ix].buf.addr);
|
|
|
|
if (tun_qp->tx_ring[tun_tx_ix].ah)
|
|
|
|
ib_destroy_ah(tun_qp->tx_ring[tun_tx_ix].ah);
|
|
|
|
tun_qp->tx_ring[tun_tx_ix].ah = ah;
|
|
|
|
ib_dma_sync_single_for_cpu(&dev->ib_dev,
|
|
|
|
tun_qp->tx_ring[tun_tx_ix].buf.map,
|
|
|
|
sizeof (struct mlx4_rcv_tunnel_mad),
|
|
|
|
DMA_TO_DEVICE);
|
|
|
|
|
|
|
|
/* copy over to tunnel buffer */
|
|
|
|
if (grh)
|
|
|
|
memcpy(&tun_mad->grh, grh, sizeof *grh);
|
|
|
|
memcpy(&tun_mad->mad, mad, sizeof *mad);
|
|
|
|
|
|
|
|
/* adjust tunnel data */
|
|
|
|
tun_mad->hdr.pkey_index = cpu_to_be16(tun_pkey_ix);
|
|
|
|
tun_mad->hdr.flags_src_qp = cpu_to_be32(wc->src_qp & 0xFFFFFF);
|
|
|
|
tun_mad->hdr.g_ml_path = (grh && (wc->wc_flags & IB_WC_GRH)) ? 0x80 : 0;
|
|
|
|
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
if (is_eth) {
|
|
|
|
u16 vlan = 0;
|
|
|
|
if (mlx4_get_slave_default_vlan(dev->dev, port, slave, &vlan,
|
|
|
|
NULL)) {
|
|
|
|
/* VST mode */
|
|
|
|
if (vlan != wc->vlan_id)
|
|
|
|
/* Packet vlan is not the VST-assigned vlan.
|
|
|
|
* Drop the packet.
|
|
|
|
*/
|
|
|
|
goto out;
|
|
|
|
else
|
|
|
|
/* Remove the vlan tag before forwarding
|
|
|
|
* the packet to the VF.
|
|
|
|
*/
|
|
|
|
vlan = 0xffff;
|
|
|
|
} else {
|
|
|
|
vlan = wc->vlan_id;
|
|
|
|
}
|
|
|
|
|
|
|
|
tun_mad->hdr.sl_vid = cpu_to_be16(vlan);
|
|
|
|
memcpy((char *)&tun_mad->hdr.mac_31_0, &(wc->smac[0]), 4);
|
|
|
|
memcpy((char *)&tun_mad->hdr.slid_mac_47_32, &(wc->smac[4]), 2);
|
|
|
|
} else {
|
|
|
|
tun_mad->hdr.sl_vid = cpu_to_be16(((u16)(wc->sl)) << 12);
|
|
|
|
tun_mad->hdr.slid_mac_47_32 = cpu_to_be16(wc->slid);
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
ib_dma_sync_single_for_device(&dev->ib_dev,
|
|
|
|
tun_qp->tx_ring[tun_tx_ix].buf.map,
|
|
|
|
sizeof (struct mlx4_rcv_tunnel_mad),
|
|
|
|
DMA_TO_DEVICE);
|
|
|
|
|
|
|
|
list.addr = tun_qp->tx_ring[tun_tx_ix].buf.map;
|
|
|
|
list.length = sizeof (struct mlx4_rcv_tunnel_mad);
|
2015-07-30 23:22:18 +00:00
|
|
|
list.lkey = tun_ctx->pd->local_dma_lkey;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
2015-10-08 08:16:33 +00:00
|
|
|
wr.ah = ah;
|
|
|
|
wr.port_num = port;
|
|
|
|
wr.remote_qkey = IB_QP_SET_QKEY;
|
|
|
|
wr.remote_qpn = dqpn;
|
|
|
|
wr.wr.next = NULL;
|
|
|
|
wr.wr.wr_id = ((u64) tun_tx_ix) | MLX4_TUN_SET_WRID_QPN(dest_qpt);
|
|
|
|
wr.wr.sg_list = &list;
|
|
|
|
wr.wr.num_sge = 1;
|
|
|
|
wr.wr.opcode = IB_WR_SEND;
|
|
|
|
wr.wr.send_flags = IB_SEND_SIGNALED;
|
|
|
|
|
|
|
|
ret = ib_post_send(src_qp, &wr.wr, &bad_wr);
|
2016-06-22 14:27:29 +00:00
|
|
|
if (!ret)
|
|
|
|
return 0;
|
|
|
|
out:
|
|
|
|
spin_lock(&tun_qp->tx_lock);
|
|
|
|
tun_qp->tx_ix_tail++;
|
|
|
|
spin_unlock(&tun_qp->tx_lock);
|
|
|
|
tun_qp->tx_ring[tun_tx_ix].ah = NULL;
|
|
|
|
end:
|
|
|
|
ib_destroy_ah(ah);
|
2012-08-03 08:40:44 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int mlx4_ib_demux_mad(struct ib_device *ibdev, u8 port,
|
|
|
|
struct ib_wc *wc, struct ib_grh *grh,
|
|
|
|
struct ib_mad *mad)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibdev);
|
2015-12-06 16:07:42 +00:00
|
|
|
int err, other_port;
|
|
|
|
int slave = -1;
|
2012-08-03 08:40:44 +00:00
|
|
|
u8 *slave_id;
|
2014-03-12 10:00:37 +00:00
|
|
|
int is_eth = 0;
|
|
|
|
|
|
|
|
if (rdma_port_get_link_layer(ibdev, port) == IB_LINK_LAYER_INFINIBAND)
|
|
|
|
is_eth = 0;
|
|
|
|
else
|
|
|
|
is_eth = 1;
|
|
|
|
|
|
|
|
if (is_eth) {
|
|
|
|
if (!(wc->wc_flags & IB_WC_GRH)) {
|
|
|
|
mlx4_ib_warn(ibdev, "RoCE grh not present.\n");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (mad->mad_hdr.mgmt_class != IB_MGMT_CLASS_CM) {
|
|
|
|
mlx4_ib_warn(ibdev, "RoCE mgmt class is not CM\n");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2015-12-06 16:07:42 +00:00
|
|
|
err = mlx4_get_slave_from_roce_gid(dev->dev, port, grh->dgid.raw, &slave);
|
|
|
|
if (err && mlx4_is_mf_bonded(dev->dev)) {
|
|
|
|
other_port = (port == 1) ? 2 : 1;
|
|
|
|
err = mlx4_get_slave_from_roce_gid(dev->dev, other_port, grh->dgid.raw, &slave);
|
|
|
|
if (!err) {
|
|
|
|
port = other_port;
|
|
|
|
pr_debug("resolved slave %d from gid %pI6 wire port %d other %d\n",
|
|
|
|
slave, grh->dgid.raw, port, other_port);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (err) {
|
2014-03-12 10:00:37 +00:00
|
|
|
mlx4_ib_warn(ibdev, "failed matching grh\n");
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
if (slave >= dev->dev->caps.sqp_demux) {
|
|
|
|
mlx4_ib_warn(ibdev, "slave id: %d is bigger than allowed:%d\n",
|
|
|
|
slave, dev->dev->caps.sqp_demux);
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mlx4_ib_demux_cm_handler(ibdev, port, NULL, mad))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err = mlx4_ib_send_to_slave(dev, slave, port, wc->qp->qp_type, wc, grh, mad);
|
|
|
|
if (err)
|
|
|
|
pr_debug("failed sending to slave %d via tunnel qp (%d)\n",
|
|
|
|
slave, err);
|
|
|
|
return 0;
|
|
|
|
}
|
2012-08-03 08:40:44 +00:00
|
|
|
|
|
|
|
/* Initially assume that this mad is for us */
|
|
|
|
slave = mlx4_master_func_num(dev->dev);
|
|
|
|
|
|
|
|
/* See if the slave id is encoded in a response mad */
|
|
|
|
if (mad->mad_hdr.method & 0x80) {
|
|
|
|
slave_id = (u8 *) &mad->mad_hdr.tid;
|
|
|
|
slave = *slave_id;
|
|
|
|
if (slave != 255) /*255 indicates the dom0*/
|
|
|
|
*slave_id = 0; /* remap tid */
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If a grh is present, we demux according to it */
|
|
|
|
if (wc->wc_flags & IB_WC_GRH) {
|
|
|
|
slave = mlx4_ib_find_real_gid(ibdev, port, grh->dgid.global.interface_id);
|
|
|
|
if (slave < 0) {
|
|
|
|
mlx4_ib_warn(ibdev, "failed matching grh\n");
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/* Class-specific handling */
|
|
|
|
switch (mad->mad_hdr.mgmt_class) {
|
2014-05-29 13:31:02 +00:00
|
|
|
case IB_MGMT_CLASS_SUBN_LID_ROUTED:
|
|
|
|
case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
|
|
|
|
/* 255 indicates the dom0 */
|
|
|
|
if (slave != 255 && slave != mlx4_master_func_num(dev->dev)) {
|
|
|
|
if (!mlx4_vf_smi_enabled(dev->dev, slave, port))
|
|
|
|
return -EPERM;
|
|
|
|
/* for a VF. drop unsolicited MADs */
|
|
|
|
if (!(mad->mad_hdr.method & IB_MGMT_METHOD_RESP)) {
|
|
|
|
mlx4_ib_warn(ibdev, "demux QP0. rejecting unsolicited mad for slave %d class 0x%x, method 0x%x\n",
|
|
|
|
slave, mad->mad_hdr.mgmt_class,
|
|
|
|
mad->mad_hdr.method);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
break;
|
2012-08-03 08:40:44 +00:00
|
|
|
case IB_MGMT_CLASS_SUBN_ADM:
|
|
|
|
if (mlx4_ib_demux_sa_handler(ibdev, port, slave,
|
|
|
|
(struct ib_sa_mad *) mad))
|
|
|
|
return 0;
|
|
|
|
break;
|
2012-08-03 08:40:47 +00:00
|
|
|
case IB_MGMT_CLASS_CM:
|
|
|
|
if (mlx4_ib_demux_cm_handler(ibdev, port, &slave, mad))
|
|
|
|
return 0;
|
|
|
|
break;
|
2012-08-03 08:40:44 +00:00
|
|
|
case IB_MGMT_CLASS_DEVICE_MGMT:
|
|
|
|
if (mad->mad_hdr.method != IB_MGMT_METHOD_GET_RESP)
|
|
|
|
return 0;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
/* Drop unsupported classes for slaves in tunnel mode */
|
|
|
|
if (slave != mlx4_master_func_num(dev->dev)) {
|
|
|
|
pr_debug("dropping unsupported ingress mad from class:%d "
|
|
|
|
"for slave:%d\n", mad->mad_hdr.mgmt_class, slave);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/*make sure that no slave==255 was not handled yet.*/
|
|
|
|
if (slave >= dev->dev->caps.sqp_demux) {
|
|
|
|
mlx4_ib_warn(ibdev, "slave id: %d is bigger than allowed:%d\n",
|
|
|
|
slave, dev->dev->caps.sqp_demux);
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = mlx4_ib_send_to_slave(dev, slave, port, wc->qp->qp_type, wc, grh, mad);
|
|
|
|
if (err)
|
|
|
|
pr_debug("failed sending to slave %d via tunnel qp (%d)\n",
|
|
|
|
slave, err);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-06-15 14:51:27 +00:00
|
|
|
static int ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
|
2015-05-31 21:15:30 +00:00
|
|
|
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
|
|
|
|
const struct ib_mad *in_mad, struct ib_mad *out_mad)
|
2007-05-09 01:00:38 +00:00
|
|
|
{
|
2009-01-28 22:54:35 +00:00
|
|
|
u16 slid, prev_lid = 0;
|
2007-05-09 01:00:38 +00:00
|
|
|
int err;
|
2009-01-28 22:54:35 +00:00
|
|
|
struct ib_port_attr pattr;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
2012-06-19 08:21:35 +00:00
|
|
|
if (in_wc && in_wc->qp->qp_num) {
|
|
|
|
pr_debug("received MAD: slid:%d sqpn:%d "
|
|
|
|
"dlid_bits:%d dqpn:%d wc_flags:0x%x, cls %x, mtd %x, atr %x\n",
|
|
|
|
in_wc->slid, in_wc->src_qp,
|
|
|
|
in_wc->dlid_path_bits,
|
|
|
|
in_wc->qp->qp_num,
|
|
|
|
in_wc->wc_flags,
|
|
|
|
in_mad->mad_hdr.mgmt_class, in_mad->mad_hdr.method,
|
|
|
|
be16_to_cpu(in_mad->mad_hdr.attr_id));
|
|
|
|
if (in_wc->wc_flags & IB_WC_GRH) {
|
|
|
|
pr_debug("sgid_hi:0x%016llx sgid_lo:0x%016llx\n",
|
|
|
|
be64_to_cpu(in_grh->sgid.global.subnet_prefix),
|
|
|
|
be64_to_cpu(in_grh->sgid.global.interface_id));
|
|
|
|
pr_debug("dgid_hi:0x%016llx dgid_lo:0x%016llx\n",
|
|
|
|
be64_to_cpu(in_grh->dgid.global.subnet_prefix),
|
|
|
|
be64_to_cpu(in_grh->dgid.global.interface_id));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-05-09 01:00:38 +00:00
|
|
|
slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE);
|
|
|
|
|
|
|
|
if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP && slid == 0) {
|
|
|
|
forward_trap(to_mdev(ibdev), port_num, in_mad);
|
|
|
|
return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED ||
|
|
|
|
in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
|
|
|
|
if (in_mad->mad_hdr.method != IB_MGMT_METHOD_GET &&
|
|
|
|
in_mad->mad_hdr.method != IB_MGMT_METHOD_SET &&
|
|
|
|
in_mad->mad_hdr.method != IB_MGMT_METHOD_TRAP_REPRESS)
|
|
|
|
return IB_MAD_RESULT_SUCCESS;
|
|
|
|
|
|
|
|
/*
|
2012-01-26 14:41:33 +00:00
|
|
|
* Don't process SMInfo queries -- the SMA can't handle them.
|
2007-05-09 01:00:38 +00:00
|
|
|
*/
|
2012-01-26 14:41:33 +00:00
|
|
|
if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO)
|
2007-05-09 01:00:38 +00:00
|
|
|
return IB_MAD_RESULT_SUCCESS;
|
|
|
|
} else if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT ||
|
|
|
|
in_mad->mad_hdr.mgmt_class == MLX4_IB_VENDOR_CLASS1 ||
|
2008-07-15 06:48:45 +00:00
|
|
|
in_mad->mad_hdr.mgmt_class == MLX4_IB_VENDOR_CLASS2 ||
|
|
|
|
in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_CONG_MGMT) {
|
2007-05-09 01:00:38 +00:00
|
|
|
if (in_mad->mad_hdr.method != IB_MGMT_METHOD_GET &&
|
|
|
|
in_mad->mad_hdr.method != IB_MGMT_METHOD_SET)
|
|
|
|
return IB_MAD_RESULT_SUCCESS;
|
|
|
|
} else
|
|
|
|
return IB_MAD_RESULT_SUCCESS;
|
|
|
|
|
2009-01-28 22:54:35 +00:00
|
|
|
if ((in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED ||
|
|
|
|
in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) &&
|
|
|
|
in_mad->mad_hdr.method == IB_MGMT_METHOD_SET &&
|
|
|
|
in_mad->mad_hdr.attr_id == IB_SMP_ATTR_PORT_INFO &&
|
|
|
|
!ib_query_port(ibdev, port_num, &pattr))
|
|
|
|
prev_lid = pattr.lid;
|
|
|
|
|
2007-05-09 01:00:38 +00:00
|
|
|
err = mlx4_MAD_IFC(to_mdev(ibdev),
|
2012-08-03 08:40:45 +00:00
|
|
|
(mad_flags & IB_MAD_IGNORE_MKEY ? MLX4_MAD_IFC_IGNORE_MKEY : 0) |
|
|
|
|
(mad_flags & IB_MAD_IGNORE_BKEY ? MLX4_MAD_IFC_IGNORE_BKEY : 0) |
|
|
|
|
MLX4_MAD_IFC_NET_VIEW,
|
2007-05-09 01:00:38 +00:00
|
|
|
port_num, in_wc, in_grh, in_mad, out_mad);
|
|
|
|
if (err)
|
|
|
|
return IB_MAD_RESULT_FAILURE;
|
|
|
|
|
|
|
|
if (!out_mad->mad_hdr.status) {
|
2016-09-12 16:16:21 +00:00
|
|
|
smp_snoop(ibdev, port_num, in_mad, prev_lid);
|
2012-08-03 08:40:54 +00:00
|
|
|
/* slaves get node desc from FW */
|
|
|
|
if (!mlx4_is_slave(to_mdev(ibdev)->dev))
|
|
|
|
node_desc_override(ibdev, out_mad);
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* set return bit in status of directed route responses */
|
|
|
|
if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
|
|
|
|
out_mad->mad_hdr.status |= cpu_to_be16(1 << 15);
|
|
|
|
|
|
|
|
if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP_REPRESS)
|
|
|
|
/* no response for trap repress */
|
|
|
|
return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED;
|
|
|
|
|
|
|
|
return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
|
|
|
|
}
|
|
|
|
|
2016-02-11 08:24:43 +00:00
|
|
|
static void edit_counter(struct mlx4_counter *cnt, void *counters,
|
|
|
|
__be16 attr_id)
|
2011-06-15 14:51:27 +00:00
|
|
|
{
|
2016-02-11 08:24:43 +00:00
|
|
|
switch (attr_id) {
|
|
|
|
case IB_PMA_PORT_COUNTERS:
|
|
|
|
{
|
|
|
|
struct ib_pma_portcounters *pma_cnt =
|
|
|
|
(struct ib_pma_portcounters *)counters;
|
|
|
|
|
|
|
|
ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data,
|
|
|
|
(be64_to_cpu(cnt->tx_bytes) >> 2));
|
|
|
|
ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
|
|
|
|
(be64_to_cpu(cnt->rx_bytes) >> 2));
|
|
|
|
ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets,
|
|
|
|
be64_to_cpu(cnt->tx_frames));
|
|
|
|
ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets,
|
|
|
|
be64_to_cpu(cnt->rx_frames));
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case IB_PMA_PORT_COUNTERS_EXT:
|
|
|
|
{
|
|
|
|
struct ib_pma_portcounters_ext *pma_cnt_ext =
|
|
|
|
(struct ib_pma_portcounters_ext *)counters;
|
|
|
|
|
|
|
|
pma_cnt_ext->port_xmit_data =
|
|
|
|
cpu_to_be64(be64_to_cpu(cnt->tx_bytes) >> 2);
|
|
|
|
pma_cnt_ext->port_rcv_data =
|
|
|
|
cpu_to_be64(be64_to_cpu(cnt->rx_bytes) >> 2);
|
|
|
|
pma_cnt_ext->port_xmit_packets = cnt->tx_frames;
|
|
|
|
pma_cnt_ext->port_rcv_packets = cnt->rx_frames;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2011-06-15 14:51:27 +00:00
|
|
|
}
|
|
|
|
|
2016-02-11 08:24:44 +00:00
|
|
|
static int iboe_process_mad_port_info(void *out_mad)
|
|
|
|
{
|
|
|
|
struct ib_class_port_info cpi = {};
|
|
|
|
|
|
|
|
cpi.capability_mask = IB_PMA_CLASS_CAP_EXT_WIDTH;
|
|
|
|
memcpy(out_mad, &cpi, sizeof(cpi));
|
|
|
|
return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
|
|
|
|
}
|
|
|
|
|
2011-06-15 14:51:27 +00:00
|
|
|
static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
|
2015-05-31 21:15:30 +00:00
|
|
|
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
|
|
|
|
const struct ib_mad *in_mad, struct ib_mad *out_mad)
|
2011-06-15 14:51:27 +00:00
|
|
|
{
|
2015-06-15 14:59:05 +00:00
|
|
|
struct mlx4_counter counter_stats;
|
2011-06-15 14:51:27 +00:00
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibdev);
|
2015-10-15 11:44:40 +00:00
|
|
|
struct counter_index *tmp_counter;
|
|
|
|
int err = IB_MAD_RESULT_FAILURE, stats_avail = 0;
|
2011-06-15 14:51:27 +00:00
|
|
|
|
|
|
|
if (in_mad->mad_hdr.mgmt_class != IB_MGMT_CLASS_PERF_MGMT)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2016-02-11 08:24:44 +00:00
|
|
|
if (in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO)
|
|
|
|
return iboe_process_mad_port_info((void *)(out_mad->data + 40));
|
|
|
|
|
2015-06-15 14:59:05 +00:00
|
|
|
memset(&counter_stats, 0, sizeof(counter_stats));
|
2015-10-15 11:44:40 +00:00
|
|
|
mutex_lock(&dev->counters_table[port_num - 1].mutex);
|
|
|
|
list_for_each_entry(tmp_counter,
|
|
|
|
&dev->counters_table[port_num - 1].counters_list,
|
|
|
|
list) {
|
|
|
|
err = mlx4_get_counter_stats(dev->dev,
|
|
|
|
tmp_counter->index,
|
|
|
|
&counter_stats, 0);
|
|
|
|
if (err) {
|
|
|
|
err = IB_MAD_RESULT_FAILURE;
|
|
|
|
stats_avail = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
stats_avail = 1;
|
|
|
|
}
|
|
|
|
mutex_unlock(&dev->counters_table[port_num - 1].mutex);
|
|
|
|
if (stats_avail) {
|
2011-06-15 14:51:27 +00:00
|
|
|
memset(out_mad->data, 0, sizeof out_mad->data);
|
2015-06-15 14:59:05 +00:00
|
|
|
switch (counter_stats.counter_mode & 0xf) {
|
2011-06-15 14:51:27 +00:00
|
|
|
case 0:
|
2015-06-15 14:59:05 +00:00
|
|
|
edit_counter(&counter_stats,
|
2016-02-11 08:24:43 +00:00
|
|
|
(void *)(out_mad->data + 40),
|
|
|
|
in_mad->mad_hdr.attr_id);
|
2011-06-15 14:51:27 +00:00
|
|
|
err = IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
err = IB_MAD_RESULT_FAILURE;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
|
2015-05-31 21:15:30 +00:00
|
|
|
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
|
2015-06-06 18:38:31 +00:00
|
|
|
const struct ib_mad_hdr *in, size_t in_mad_size,
|
|
|
|
struct ib_mad_hdr *out, size_t *out_mad_size,
|
|
|
|
u16 *out_mad_pkey_index)
|
2011-06-15 14:51:27 +00:00
|
|
|
{
|
2015-06-15 14:59:04 +00:00
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibdev);
|
2015-06-06 18:38:31 +00:00
|
|
|
const struct ib_mad *in_mad = (const struct ib_mad *)in;
|
|
|
|
struct ib_mad *out_mad = (struct ib_mad *)out;
|
2015-06-25 14:45:38 +00:00
|
|
|
enum rdma_link_layer link = rdma_port_get_link_layer(ibdev, port_num);
|
2015-06-06 18:38:31 +00:00
|
|
|
|
2015-06-25 13:52:50 +00:00
|
|
|
if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) ||
|
|
|
|
*out_mad_size != sizeof(*out_mad)))
|
|
|
|
return IB_MAD_RESULT_FAILURE;
|
2015-06-06 18:38:31 +00:00
|
|
|
|
2015-06-25 14:45:38 +00:00
|
|
|
/* iboe_process_mad() which uses the HCA flow-counters to implement IB PMA
|
|
|
|
* queries, should be called only by VFs and for that specific purpose
|
|
|
|
*/
|
|
|
|
if (link == IB_LINK_LAYER_INFINIBAND) {
|
|
|
|
if (mlx4_is_slave(dev->dev) &&
|
2016-02-11 08:24:43 +00:00
|
|
|
(in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT &&
|
|
|
|
(in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS ||
|
2016-02-11 08:24:44 +00:00
|
|
|
in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS_EXT ||
|
|
|
|
in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO)))
|
2015-06-25 14:45:38 +00:00
|
|
|
return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
|
|
|
|
in_grh, in_mad, out_mad);
|
|
|
|
|
|
|
|
return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
|
|
|
|
in_grh, in_mad, out_mad);
|
2011-06-15 14:51:27 +00:00
|
|
|
}
|
2015-06-25 14:45:38 +00:00
|
|
|
|
|
|
|
if (link == IB_LINK_LAYER_ETHERNET)
|
|
|
|
return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
|
|
|
|
in_grh, in_mad, out_mad);
|
|
|
|
|
|
|
|
return -EINVAL;
|
2011-06-15 14:51:27 +00:00
|
|
|
}
|
|
|
|
|
2007-05-09 01:00:38 +00:00
|
|
|
static void send_handler(struct ib_mad_agent *agent,
|
|
|
|
struct ib_mad_send_wc *mad_send_wc)
|
|
|
|
{
|
2012-08-03 08:40:54 +00:00
|
|
|
if (mad_send_wc->send_buf->context[0])
|
|
|
|
ib_destroy_ah(mad_send_wc->send_buf->context[0]);
|
2007-05-09 01:00:38 +00:00
|
|
|
ib_free_send_mad(mad_send_wc->send_buf);
|
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_mad_init(struct mlx4_ib_dev *dev)
|
|
|
|
{
|
|
|
|
struct ib_mad_agent *agent;
|
|
|
|
int p, q;
|
|
|
|
int ret;
|
2010-10-25 04:08:52 +00:00
|
|
|
enum rdma_link_layer ll;
|
2007-05-09 01:00:38 +00:00
|
|
|
|
2010-10-25 04:08:52 +00:00
|
|
|
for (p = 0; p < dev->num_ports; ++p) {
|
|
|
|
ll = rdma_port_get_link_layer(&dev->ib_dev, p + 1);
|
2007-05-09 01:00:38 +00:00
|
|
|
for (q = 0; q <= 1; ++q) {
|
2010-10-25 04:08:52 +00:00
|
|
|
if (ll == IB_LINK_LAYER_INFINIBAND) {
|
|
|
|
agent = ib_register_mad_agent(&dev->ib_dev, p + 1,
|
|
|
|
q ? IB_QPT_GSI : IB_QPT_SMI,
|
|
|
|
NULL, 0, send_handler,
|
2014-08-08 23:00:55 +00:00
|
|
|
NULL, NULL, 0);
|
2010-10-25 04:08:52 +00:00
|
|
|
if (IS_ERR(agent)) {
|
|
|
|
ret = PTR_ERR(agent);
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
dev->send_agent[p][q] = agent;
|
|
|
|
} else
|
|
|
|
dev->send_agent[p][q] = NULL;
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
2010-10-25 04:08:52 +00:00
|
|
|
}
|
2007-05-09 01:00:38 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err:
|
2008-10-22 22:38:42 +00:00
|
|
|
for (p = 0; p < dev->num_ports; ++p)
|
2007-05-09 01:00:38 +00:00
|
|
|
for (q = 0; q <= 1; ++q)
|
|
|
|
if (dev->send_agent[p][q])
|
|
|
|
ib_unregister_mad_agent(dev->send_agent[p][q]);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
void mlx4_ib_mad_cleanup(struct mlx4_ib_dev *dev)
|
|
|
|
{
|
|
|
|
struct ib_mad_agent *agent;
|
|
|
|
int p, q;
|
|
|
|
|
2008-10-22 22:38:42 +00:00
|
|
|
for (p = 0; p < dev->num_ports; ++p) {
|
2007-05-09 01:00:38 +00:00
|
|
|
for (q = 0; q <= 1; ++q) {
|
|
|
|
agent = dev->send_agent[p][q];
|
2010-10-25 04:08:52 +00:00
|
|
|
if (agent) {
|
|
|
|
dev->send_agent[p][q] = NULL;
|
|
|
|
ib_unregister_mad_agent(agent);
|
|
|
|
}
|
2007-05-09 01:00:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (dev->sm_ah[p])
|
|
|
|
ib_destroy_ah(dev->sm_ah[p]);
|
|
|
|
}
|
|
|
|
}
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
static void handle_lid_change_event(struct mlx4_ib_dev *dev, u8 port_num)
|
|
|
|
{
|
|
|
|
mlx4_ib_dispatch_event(dev, port_num, IB_EVENT_LID_CHANGE);
|
|
|
|
|
|
|
|
if (mlx4_is_master(dev->dev) && !dev->sriov.is_going_down)
|
|
|
|
mlx4_gen_slaves_port_mgt_ev(dev->dev, port_num,
|
|
|
|
MLX4_EQ_PORT_INFO_LID_CHANGE_MASK);
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:46 +00:00
|
|
|
static void handle_client_rereg_event(struct mlx4_ib_dev *dev, u8 port_num)
|
|
|
|
{
|
2012-08-03 08:40:49 +00:00
|
|
|
/* re-configure the alias-guid and mcg's */
|
2012-08-03 08:40:46 +00:00
|
|
|
if (mlx4_is_master(dev->dev)) {
|
2012-08-03 08:40:49 +00:00
|
|
|
mlx4_ib_invalidate_all_guid_record(dev, port_num);
|
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
if (!dev->sriov.is_going_down) {
|
2012-08-03 08:40:46 +00:00
|
|
|
mlx4_ib_mcg_port_cleanup(&dev->sriov.demux[port_num - 1], 0);
|
2012-08-03 08:40:50 +00:00
|
|
|
mlx4_gen_slaves_port_mgt_ev(dev->dev, port_num,
|
|
|
|
MLX4_EQ_PORT_INFO_CLIENT_REREG_MASK);
|
|
|
|
}
|
2012-08-03 08:40:46 +00:00
|
|
|
}
|
2016-09-12 16:16:21 +00:00
|
|
|
|
|
|
|
/* Update the sl to vl table from inside client rereg
|
|
|
|
* only if in secure-host mode (snooping is not possible)
|
|
|
|
* and the sl-to-vl change event is not generated by FW.
|
|
|
|
*/
|
|
|
|
if (!mlx4_is_slave(dev->dev) &&
|
|
|
|
dev->dev->flags & MLX4_FLAG_SECURE_HOST &&
|
|
|
|
!(dev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_SL_TO_VL_CHANGE_EVENT)) {
|
|
|
|
if (mlx4_is_master(dev->dev))
|
|
|
|
/* already in work queue from mlx4_ib_event queueing
|
|
|
|
* mlx4_handle_port_mgmt_change_event, which calls
|
|
|
|
* this procedure. Therefore, call sl2vl_update directly.
|
|
|
|
*/
|
|
|
|
mlx4_ib_sl2vl_update(dev, port_num);
|
|
|
|
else
|
|
|
|
mlx4_sched_ib_sl2vl_update_work(dev, port_num);
|
|
|
|
}
|
2012-08-03 08:40:46 +00:00
|
|
|
mlx4_ib_dispatch_event(dev, port_num, IB_EVENT_CLIENT_REREGISTER);
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:50 +00:00
|
|
|
static void propagate_pkey_ev(struct mlx4_ib_dev *dev, int port_num,
|
|
|
|
struct mlx4_eqe *eqe)
|
|
|
|
{
|
|
|
|
__propagate_pkey_ev(dev, port_num, GET_BLK_PTR_FROM_EQE(eqe),
|
|
|
|
GET_MASK_FROM_EQE(eqe));
|
|
|
|
}
|
|
|
|
|
|
|
|
static void handle_slaves_guid_change(struct mlx4_ib_dev *dev, u8 port_num,
|
|
|
|
u32 guid_tbl_blk_num, u32 change_bitmap)
|
|
|
|
{
|
|
|
|
struct ib_smp *in_mad = NULL;
|
|
|
|
struct ib_smp *out_mad = NULL;
|
|
|
|
u16 i;
|
|
|
|
|
|
|
|
if (!mlx4_is_mfunc(dev->dev) || !mlx4_is_master(dev->dev))
|
|
|
|
return;
|
|
|
|
|
|
|
|
in_mad = kmalloc(sizeof *in_mad, GFP_KERNEL);
|
|
|
|
out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
|
|
|
|
if (!in_mad || !out_mad) {
|
|
|
|
mlx4_ib_warn(&dev->ib_dev, "failed to allocate memory for guid info mads\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
guid_tbl_blk_num *= 4;
|
|
|
|
|
|
|
|
for (i = 0; i < 4; i++) {
|
|
|
|
if (change_bitmap && (!((change_bitmap >> (8 * i)) & 0xff)))
|
|
|
|
continue;
|
|
|
|
memset(in_mad, 0, sizeof *in_mad);
|
|
|
|
memset(out_mad, 0, sizeof *out_mad);
|
|
|
|
|
|
|
|
in_mad->base_version = 1;
|
|
|
|
in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
|
|
|
|
in_mad->class_version = 1;
|
|
|
|
in_mad->method = IB_MGMT_METHOD_GET;
|
|
|
|
in_mad->attr_id = IB_SMP_ATTR_GUID_INFO;
|
|
|
|
in_mad->attr_mod = cpu_to_be32(guid_tbl_blk_num + i);
|
|
|
|
|
|
|
|
if (mlx4_MAD_IFC(dev,
|
|
|
|
MLX4_MAD_IFC_IGNORE_KEYS | MLX4_MAD_IFC_NET_VIEW,
|
|
|
|
port_num, NULL, NULL, in_mad, out_mad)) {
|
|
|
|
mlx4_ib_warn(&dev->ib_dev, "Failed in get GUID INFO MAD_IFC\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
mlx4_ib_update_cache_on_guid_change(dev, guid_tbl_blk_num + i,
|
|
|
|
port_num,
|
|
|
|
(u8 *)(&((struct ib_smp *)out_mad)->data));
|
|
|
|
mlx4_ib_notify_slaves_on_guid_change(dev, guid_tbl_blk_num + i,
|
|
|
|
port_num,
|
|
|
|
(u8 *)(&((struct ib_smp *)out_mad)->data));
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
kfree(in_mad);
|
|
|
|
kfree(out_mad);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
void handle_port_mgmt_change_event(struct work_struct *work)
|
|
|
|
{
|
|
|
|
struct ib_event_work *ew = container_of(work, struct ib_event_work, work);
|
|
|
|
struct mlx4_ib_dev *dev = ew->ib_dev;
|
|
|
|
struct mlx4_eqe *eqe = &(ew->ib_eqe);
|
|
|
|
u8 port = eqe->event.port_mgmt_change.port;
|
|
|
|
u32 changed_attr;
|
2012-08-03 08:40:50 +00:00
|
|
|
u32 tbl_block;
|
|
|
|
u32 change_bitmap;
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
|
|
|
|
switch (eqe->subtype) {
|
|
|
|
case MLX4_DEV_PMC_SUBTYPE_PORT_INFO:
|
|
|
|
changed_attr = be32_to_cpu(eqe->event.port_mgmt_change.params.port_info.changed_attr);
|
|
|
|
|
|
|
|
/* Update the SM ah - This should be done before handling
|
|
|
|
the other changed attributes so that MADs can be sent to the SM */
|
|
|
|
if (changed_attr & MSTR_SM_CHANGE_MASK) {
|
|
|
|
u16 lid = be16_to_cpu(eqe->event.port_mgmt_change.params.port_info.mstr_sm_lid);
|
|
|
|
u8 sl = eqe->event.port_mgmt_change.params.port_info.mstr_sm_sl & 0xf;
|
|
|
|
update_sm_ah(dev, port, lid, sl);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check if it is a lid change event */
|
|
|
|
if (changed_attr & MLX4_EQ_PORT_INFO_LID_CHANGE_MASK)
|
2012-08-03 08:40:50 +00:00
|
|
|
handle_lid_change_event(dev, port);
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
|
|
|
|
/* Generate GUID changed event */
|
2012-08-03 08:40:50 +00:00
|
|
|
if (changed_attr & MLX4_EQ_PORT_INFO_GID_PFX_CHANGE_MASK) {
|
IB/mlx4: Use correct subnet-prefix in QP1 mads under SR-IOV
When sending QP1 MAD packets which use a GRH, the source GID
(which consists of the 64-bit subnet prefix, and the 64 bit port GUID)
must be included in the packet GRH.
For SR-IOV, a GID cache is used, since the source GID needs to be the
slave's source GID, and not the Hypervisor's GID. This cache also
included a subnet_prefix. Unfortunately, the subnet_prefix field in
the cache was never initialized (to the default subnet prefix 0xfe80::0).
As a result, this field remained all zeroes. Therefore, when SR-IOV
was active, all QP1 packets which included a GRH had a source GID
subnet prefix of all-zeroes.
However, the subnet-prefix should initially be 0xfe80::0 (the default
subnet prefix). In addition, if OpenSM modifies a port's subnet prefix,
the new subnet prefix must be used in the GRH when sending QP1 packets.
To fix this we now initialize the subnet prefix in the SR-IOV GID cache
to the default subnet prefix. We update the cached value if/when OpenSM
modifies the port's subnet prefix. We take this cached value when sending
QP1 packets when SR-IOV is active.
Note that the value is stored as an atomic64. This eliminates any need
for locking when the subnet prefix is being updated.
Note also that we depend on the FW generating the "port management change"
event for tracking subnet-prefix changes performed by OpenSM. If running
early FW (before 2.9.4630), subnet prefix changes will not be tracked (but
the default subnet prefix still will be stored in the cache; therefore
users who do not modify the subnet prefix will not have a problem).
IF there is a need for such tracking also for early FW, we will add that
capability in a subsequent patch.
Fixes: 1ffeb2eb8be9 ("IB/mlx4: SR-IOV IB context objects and proxy/tunnel SQP support")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2016-09-12 16:16:20 +00:00
|
|
|
if (mlx4_is_master(dev->dev)) {
|
|
|
|
union ib_gid gid;
|
|
|
|
int err = 0;
|
|
|
|
|
|
|
|
if (!eqe->event.port_mgmt_change.params.port_info.gid_prefix)
|
|
|
|
err = __mlx4_ib_query_gid(&dev->ib_dev, port, 0, &gid, 1);
|
|
|
|
else
|
|
|
|
gid.global.subnet_prefix =
|
|
|
|
eqe->event.port_mgmt_change.params.port_info.gid_prefix;
|
|
|
|
if (err) {
|
|
|
|
pr_warn("Could not change QP1 subnet prefix for port %d: query_gid error (%d)\n",
|
|
|
|
port, err);
|
|
|
|
} else {
|
|
|
|
pr_debug("Changing QP1 subnet prefix for port %d. old=0x%llx. new=0x%llx\n",
|
|
|
|
port,
|
|
|
|
(u64)atomic64_read(&dev->sriov.demux[port - 1].subnet_prefix),
|
|
|
|
be64_to_cpu(gid.global.subnet_prefix));
|
|
|
|
atomic64_set(&dev->sriov.demux[port - 1].subnet_prefix,
|
|
|
|
be64_to_cpu(gid.global.subnet_prefix));
|
|
|
|
}
|
|
|
|
}
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
mlx4_ib_dispatch_event(dev, port, IB_EVENT_GID_CHANGE);
|
2012-08-03 08:40:50 +00:00
|
|
|
/*if master, notify all slaves*/
|
|
|
|
if (mlx4_is_master(dev->dev))
|
|
|
|
mlx4_gen_slaves_port_mgt_ev(dev->dev, port,
|
|
|
|
MLX4_EQ_PORT_INFO_GID_PFX_CHANGE_MASK);
|
|
|
|
}
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
|
|
|
|
if (changed_attr & MLX4_EQ_PORT_INFO_CLIENT_REREG_MASK)
|
2012-08-03 08:40:46 +00:00
|
|
|
handle_client_rereg_event(dev, port);
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
break;
|
|
|
|
|
|
|
|
case MLX4_DEV_PMC_SUBTYPE_PKEY_TABLE:
|
|
|
|
mlx4_ib_dispatch_event(dev, port, IB_EVENT_PKEY_CHANGE);
|
2012-08-03 08:40:50 +00:00
|
|
|
if (mlx4_is_master(dev->dev) && !dev->sriov.is_going_down)
|
|
|
|
propagate_pkey_ev(dev, port, eqe);
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
break;
|
|
|
|
case MLX4_DEV_PMC_SUBTYPE_GUID_INFO:
|
mlx4: Put physical GID and P_Key table sizes in mlx4_phys_caps struct and paravirtualize them
To allow easy paravirtualization of P_Key and GID table sizes, keep
paravirtualized sizes in mlx4_dev->caps, but save the actual physical
sizes from FW in struct: mlx4_dev->phys_cap.
In addition, in SR-IOV mode, do the following:
1. Reduce reported P_Key table size by 1.
This is done to reserve the highest P_Key index for internal use,
for declaring an invalid P_Key in P_Key paravirtualization.
We require a P_Key index which always contain an invalid P_Key
value for this purpose (i.e., one which cannot be modified by
the subnet manager). The way to do this is to reduce the
P_Key table size reported to the subnet manager by 1, so that
it will not attempt to access the P_Key at index #127.
2. Paravirtualize the GID table size to 1. Thus, each guest sees
only a single GID (at its paravirtualized index 0).
In addition, since we are paravirtualizing the GID table size to 1, we
add paravirtualization of the master GID event here (i.e., we do not
do ib_dispatch_event() for the GUID change event on the master, since
its (only) GUID never changes).
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:44 +00:00
|
|
|
/* paravirtualized master's guid is guid 0 -- does not change */
|
|
|
|
if (!mlx4_is_master(dev->dev))
|
|
|
|
mlx4_ib_dispatch_event(dev, port, IB_EVENT_GID_CHANGE);
|
2012-08-03 08:40:50 +00:00
|
|
|
/*if master, notify relevant slaves*/
|
|
|
|
else if (!dev->sriov.is_going_down) {
|
|
|
|
tbl_block = GET_BLK_PTR_FROM_EQE(eqe);
|
|
|
|
change_bitmap = GET_MASK_FROM_EQE(eqe);
|
|
|
|
handle_slaves_guid_change(dev, port, tbl_block, change_bitmap);
|
|
|
|
}
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
break;
|
2016-09-12 16:16:21 +00:00
|
|
|
|
|
|
|
case MLX4_DEV_PMC_SUBTYPE_SL_TO_VL_MAP:
|
|
|
|
/* cache sl to vl mapping changes for use in
|
|
|
|
* filling QP1 LRH VL field when sending packets
|
|
|
|
*/
|
|
|
|
if (!mlx4_is_slave(dev->dev)) {
|
|
|
|
union sl2vl_tbl_to_u64 sl2vl64;
|
|
|
|
int jj;
|
|
|
|
|
|
|
|
for (jj = 0; jj < 8; jj++) {
|
|
|
|
sl2vl64.sl8[jj] =
|
|
|
|
eqe->event.port_mgmt_change.params.sl2vl_tbl_change_info.sl2vl_table[jj];
|
|
|
|
pr_debug("port %u, sl2vl[%d] = %02x\n",
|
|
|
|
port, jj, sl2vl64.sl8[jj]);
|
|
|
|
}
|
|
|
|
atomic64_set(&dev->sl2vl[port - 1], sl2vl64.sl64);
|
|
|
|
}
|
|
|
|
break;
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in dev-caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the fw-generated-event event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accomodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 08:21:40 +00:00
|
|
|
default:
|
|
|
|
pr_warn("Unsupported subtype 0x%x for "
|
|
|
|
"Port Management Change event\n", eqe->subtype);
|
|
|
|
}
|
|
|
|
|
|
|
|
kfree(ew);
|
|
|
|
}
|
|
|
|
|
|
|
|
void mlx4_ib_dispatch_event(struct mlx4_ib_dev *dev, u8 port_num,
|
|
|
|
enum ib_event_type type)
|
|
|
|
{
|
|
|
|
struct ib_event event;
|
|
|
|
|
|
|
|
event.device = &dev->ib_dev;
|
|
|
|
event.element.port_num = port_num;
|
|
|
|
event.event = type;
|
|
|
|
|
|
|
|
ib_dispatch_event(&event);
|
|
|
|
}
|
2012-08-03 08:40:42 +00:00
|
|
|
|
|
|
|
static void mlx4_ib_tunnel_comp_handler(struct ib_cq *cq, void *arg)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
struct mlx4_ib_demux_pv_ctx *ctx = cq->cq_context;
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ctx->ib_dev);
|
|
|
|
spin_lock_irqsave(&dev->sriov.going_down_lock, flags);
|
|
|
|
if (!dev->sriov.is_going_down && ctx->state == DEMUX_PV_STATE_ACTIVE)
|
|
|
|
queue_work(ctx->wq, &ctx->work);
|
|
|
|
spin_unlock_irqrestore(&dev->sriov.going_down_lock, flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int mlx4_ib_post_pv_qp_buf(struct mlx4_ib_demux_pv_ctx *ctx,
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp,
|
|
|
|
int index)
|
|
|
|
{
|
|
|
|
struct ib_sge sg_list;
|
|
|
|
struct ib_recv_wr recv_wr, *bad_recv_wr;
|
|
|
|
int size;
|
|
|
|
|
|
|
|
size = (tun_qp->qp->qp_type == IB_QPT_UD) ?
|
|
|
|
sizeof (struct mlx4_tunnel_mad) : sizeof (struct mlx4_mad_rcv_buf);
|
|
|
|
|
|
|
|
sg_list.addr = tun_qp->ring[index].map;
|
|
|
|
sg_list.length = size;
|
2015-07-30 23:22:18 +00:00
|
|
|
sg_list.lkey = ctx->pd->local_dma_lkey;
|
2012-08-03 08:40:42 +00:00
|
|
|
|
|
|
|
recv_wr.next = NULL;
|
|
|
|
recv_wr.sg_list = &sg_list;
|
|
|
|
recv_wr.num_sge = 1;
|
|
|
|
recv_wr.wr_id = (u64) index | MLX4_TUN_WRID_RECV |
|
|
|
|
MLX4_TUN_SET_WRID_QPN(tun_qp->proxy_qpt);
|
|
|
|
ib_dma_sync_single_for_device(ctx->ib_dev, tun_qp->ring[index].map,
|
|
|
|
size, DMA_FROM_DEVICE);
|
|
|
|
return ib_post_recv(tun_qp->qp, &recv_wr, &bad_recv_wr);
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
static int mlx4_ib_multiplex_sa_handler(struct ib_device *ibdev, int port,
|
|
|
|
int slave, struct ib_sa_mad *sa_mad)
|
|
|
|
{
|
2012-08-03 08:40:46 +00:00
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
/* dispatch to different sa handlers */
|
|
|
|
switch (be16_to_cpu(sa_mad->mad_hdr.attr_id)) {
|
|
|
|
case IB_SA_ATTR_MC_MEMBER_REC:
|
|
|
|
ret = mlx4_ib_mcg_multiplex_handler(ibdev, port, slave, sa_mad);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return ret;
|
2012-08-03 08:40:44 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int is_proxy_qp0(struct mlx4_ib_dev *dev, int qpn, int slave)
|
|
|
|
{
|
mlx4: Modify proxy/tunnel QP mechanism so that guests do no calculations
Previously, the structure of a guest's proxy QPs followed the
structure of the PPF special qps (qp0 port 1, qp0 port 2, qp1 port 1,
qp1 port 2, ...). The guest then did offset calculations on the
sqp_base qp number that the PPF passed to it in QUERY_FUNC_CAP().
This is now changed so that the guest does no offset calculations
regarding proxy or tunnel QPs to use. This change frees the PPF from
needing to adhere to a specific order in allocating proxy and tunnel
QPs.
Now QUERY_FUNC_CAP provides each port individually with its proxy
qp0, proxy qp1, tunnel qp0, and tunnel qp1 QP numbers, and these are
used directly where required (with no offset calculations).
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 08:40:57 +00:00
|
|
|
int proxy_start = dev->dev->phys_caps.base_proxy_sqpn + 8 * slave;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
mlx4: Modify proxy/tunnel QP mechanism so that guests do no calculations
Previously, the structure of a guest's proxy QPs followed the
structure of the PPF special qps (qp0 port 1, qp0 port 2, qp1 port 1,
qp1 port 2, ...). The guest then did offset calculations on the
sqp_base qp number that the PPF passed to it in QUERY_FUNC_CAP().
This is now changed so that the guest does no offset calculations
regarding proxy or tunnel QPs to use. This change frees the PPF from
needing to adhere to a specific order in allocating proxy and tunnel
QPs.
Now QUERY_FUNC_CAP provides each port individually with its proxy
qp0, proxy qp1, tunnel qp0, and tunnel qp1 QP numbers, and these are
used directly where required (with no offset calculations).
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 08:40:57 +00:00
|
|
|
return (qpn >= proxy_start && qpn <= proxy_start + 1);
|
2012-08-03 08:40:44 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
int mlx4_ib_send_to_wire(struct mlx4_ib_dev *dev, int slave, u8 port,
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
enum ib_qp_type dest_qpt, u16 pkey_index,
|
|
|
|
u32 remote_qpn, u32 qkey, struct ib_ah_attr *attr,
|
2015-10-15 15:38:51 +00:00
|
|
|
u8 *s_mac, u16 vlan_id, struct ib_mad *mad)
|
2012-08-03 08:40:44 +00:00
|
|
|
{
|
|
|
|
struct ib_sge list;
|
2015-10-08 08:16:33 +00:00
|
|
|
struct ib_ud_wr wr;
|
|
|
|
struct ib_send_wr *bad_wr;
|
2012-08-03 08:40:44 +00:00
|
|
|
struct mlx4_ib_demux_pv_ctx *sqp_ctx;
|
|
|
|
struct mlx4_ib_demux_pv_qp *sqp;
|
|
|
|
struct mlx4_mad_snd_buf *sqp_mad;
|
|
|
|
struct ib_ah *ah;
|
|
|
|
struct ib_qp *send_qp = NULL;
|
|
|
|
unsigned wire_tx_ix = 0;
|
|
|
|
int ret = 0;
|
|
|
|
u16 wire_pkey_ix;
|
|
|
|
int src_qpnum;
|
|
|
|
u8 sgid_index;
|
|
|
|
|
|
|
|
|
|
|
|
sqp_ctx = dev->sriov.sqps[port-1];
|
|
|
|
|
|
|
|
/* check if proxy qp created */
|
|
|
|
if (!sqp_ctx || sqp_ctx->state != DEMUX_PV_STATE_ACTIVE)
|
|
|
|
return -EAGAIN;
|
|
|
|
|
|
|
|
if (dest_qpt == IB_QPT_SMI) {
|
|
|
|
src_qpnum = 0;
|
|
|
|
sqp = &sqp_ctx->qp[0];
|
|
|
|
wire_pkey_ix = dev->pkeys.virt2phys_pkey[slave][port - 1][0];
|
|
|
|
} else {
|
|
|
|
src_qpnum = 1;
|
|
|
|
sqp = &sqp_ctx->qp[1];
|
|
|
|
wire_pkey_ix = dev->pkeys.virt2phys_pkey[slave][port - 1][pkey_index];
|
|
|
|
}
|
|
|
|
|
|
|
|
send_qp = sqp->qp;
|
|
|
|
|
|
|
|
/* create ah */
|
|
|
|
sgid_index = attr->grh.sgid_index;
|
|
|
|
attr->grh.sgid_index = 0;
|
|
|
|
ah = ib_create_ah(sqp_ctx->pd, attr);
|
|
|
|
if (IS_ERR(ah))
|
|
|
|
return -ENOMEM;
|
|
|
|
attr->grh.sgid_index = sgid_index;
|
|
|
|
to_mah(ah)->av.ib.gid_index = sgid_index;
|
|
|
|
/* get rid of force-loopback bit */
|
|
|
|
to_mah(ah)->av.ib.port_pd &= cpu_to_be32(0x7FFFFFFF);
|
|
|
|
spin_lock(&sqp->tx_lock);
|
|
|
|
if (sqp->tx_ix_head - sqp->tx_ix_tail >=
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1))
|
|
|
|
ret = -EAGAIN;
|
|
|
|
else
|
|
|
|
wire_tx_ix = (++sqp->tx_ix_head) & (MLX4_NUM_TUNNEL_BUFS - 1);
|
|
|
|
spin_unlock(&sqp->tx_lock);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
sqp_mad = (struct mlx4_mad_snd_buf *) (sqp->tx_ring[wire_tx_ix].buf.addr);
|
|
|
|
if (sqp->tx_ring[wire_tx_ix].ah)
|
|
|
|
ib_destroy_ah(sqp->tx_ring[wire_tx_ix].ah);
|
|
|
|
sqp->tx_ring[wire_tx_ix].ah = ah;
|
|
|
|
ib_dma_sync_single_for_cpu(&dev->ib_dev,
|
|
|
|
sqp->tx_ring[wire_tx_ix].buf.map,
|
|
|
|
sizeof (struct mlx4_mad_snd_buf),
|
|
|
|
DMA_TO_DEVICE);
|
|
|
|
|
|
|
|
memcpy(&sqp_mad->payload, mad, sizeof *mad);
|
|
|
|
|
|
|
|
ib_dma_sync_single_for_device(&dev->ib_dev,
|
|
|
|
sqp->tx_ring[wire_tx_ix].buf.map,
|
|
|
|
sizeof (struct mlx4_mad_snd_buf),
|
|
|
|
DMA_TO_DEVICE);
|
|
|
|
|
|
|
|
list.addr = sqp->tx_ring[wire_tx_ix].buf.map;
|
|
|
|
list.length = sizeof (struct mlx4_mad_snd_buf);
|
2015-07-30 23:22:18 +00:00
|
|
|
list.lkey = sqp_ctx->pd->local_dma_lkey;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
2015-10-08 08:16:33 +00:00
|
|
|
wr.ah = ah;
|
|
|
|
wr.port_num = port;
|
|
|
|
wr.pkey_index = wire_pkey_ix;
|
|
|
|
wr.remote_qkey = qkey;
|
|
|
|
wr.remote_qpn = remote_qpn;
|
|
|
|
wr.wr.next = NULL;
|
|
|
|
wr.wr.wr_id = ((u64) wire_tx_ix) | MLX4_TUN_SET_WRID_QPN(src_qpnum);
|
|
|
|
wr.wr.sg_list = &list;
|
|
|
|
wr.wr.num_sge = 1;
|
|
|
|
wr.wr.opcode = IB_WR_SEND;
|
|
|
|
wr.wr.send_flags = IB_SEND_SIGNALED;
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
if (s_mac)
|
|
|
|
memcpy(to_mah(ah)->av.eth.s_mac, s_mac, 6);
|
2015-10-15 15:38:51 +00:00
|
|
|
if (vlan_id < 0x1000)
|
|
|
|
vlan_id |= (attr->sl & 7) << 13;
|
|
|
|
to_mah(ah)->av.eth.vlan = cpu_to_be16(vlan_id);
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
|
2015-10-08 08:16:33 +00:00
|
|
|
ret = ib_post_send(send_qp, &wr.wr, &bad_wr);
|
2016-06-22 14:27:29 +00:00
|
|
|
if (!ret)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
spin_lock(&sqp->tx_lock);
|
|
|
|
sqp->tx_ix_tail++;
|
|
|
|
spin_unlock(&sqp->tx_lock);
|
|
|
|
sqp->tx_ring[wire_tx_ix].ah = NULL;
|
2012-08-03 08:40:44 +00:00
|
|
|
out:
|
2016-06-22 14:27:29 +00:00
|
|
|
ib_destroy_ah(ah);
|
2012-08-03 08:40:44 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2014-03-12 10:00:39 +00:00
|
|
|
static int get_slave_base_gid_ix(struct mlx4_ib_dev *dev, int slave, int port)
|
|
|
|
{
|
|
|
|
if (rdma_port_get_link_layer(&dev->ib_dev, port) == IB_LINK_LAYER_INFINIBAND)
|
|
|
|
return slave;
|
2014-03-19 16:11:52 +00:00
|
|
|
return mlx4_get_base_gid_ix(dev->dev, slave, port);
|
2014-03-12 10:00:39 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void fill_in_real_sgid_index(struct mlx4_ib_dev *dev, int slave, int port,
|
|
|
|
struct ib_ah_attr *ah_attr)
|
|
|
|
{
|
|
|
|
if (rdma_port_get_link_layer(&dev->ib_dev, port) == IB_LINK_LAYER_INFINIBAND)
|
|
|
|
ah_attr->grh.sgid_index = slave;
|
|
|
|
else
|
|
|
|
ah_attr->grh.sgid_index += get_slave_base_gid_ix(dev, slave, port);
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
static void mlx4_ib_multiplex_mad(struct mlx4_ib_demux_pv_ctx *ctx, struct ib_wc *wc)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ctx->ib_dev);
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp = &ctx->qp[MLX4_TUN_WRID_QPN(wc->wr_id)];
|
|
|
|
int wr_ix = wc->wr_id & (MLX4_NUM_TUNNEL_BUFS - 1);
|
|
|
|
struct mlx4_tunnel_mad *tunnel = tun_qp->ring[wr_ix].addr;
|
|
|
|
struct mlx4_ib_ah ah;
|
|
|
|
struct ib_ah_attr ah_attr;
|
|
|
|
u8 *slave_id;
|
|
|
|
int slave;
|
2014-03-19 16:11:52 +00:00
|
|
|
int port;
|
2015-10-15 15:38:51 +00:00
|
|
|
u16 vlan_id;
|
2012-08-03 08:40:44 +00:00
|
|
|
|
|
|
|
/* Get slave that sent this packet */
|
mlx4: Modify proxy/tunnel QP mechanism so that guests do no calculations
Previously, the structure of a guest's proxy QPs followed the
structure of the PPF special qps (qp0 port 1, qp0 port 2, qp1 port 1,
qp1 port 2, ...). The guest then did offset calculations on the
sqp_base qp number that the PPF passed to it in QUERY_FUNC_CAP().
This is now changed so that the guest does no offset calculations
regarding proxy or tunnel QPs to use. This change frees the PPF from
needing to adhere to a specific order in allocating proxy and tunnel
QPs.
Now QUERY_FUNC_CAP provides each port individually with its proxy
qp0, proxy qp1, tunnel qp0, and tunnel qp1 QP numbers, and these are
used directly where required (with no offset calculations).
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 08:40:57 +00:00
|
|
|
if (wc->src_qp < dev->dev->phys_caps.base_proxy_sqpn ||
|
|
|
|
wc->src_qp >= dev->dev->phys_caps.base_proxy_sqpn + 8 * MLX4_MFUNC_MAX ||
|
2012-08-03 08:40:44 +00:00
|
|
|
(wc->src_qp & 0x1) != ctx->port - 1 ||
|
|
|
|
wc->src_qp & 0x4) {
|
|
|
|
mlx4_ib_warn(ctx->ib_dev, "can't multiplex bad sqp:%d\n", wc->src_qp);
|
|
|
|
return;
|
|
|
|
}
|
mlx4: Modify proxy/tunnel QP mechanism so that guests do no calculations
Previously, the structure of a guest's proxy QPs followed the
structure of the PPF special qps (qp0 port 1, qp0 port 2, qp1 port 1,
qp1 port 2, ...). The guest then did offset calculations on the
sqp_base qp number that the PPF passed to it in QUERY_FUNC_CAP().
This is now changed so that the guest does no offset calculations
regarding proxy or tunnel QPs to use. This change frees the PPF from
needing to adhere to a specific order in allocating proxy and tunnel
QPs.
Now QUERY_FUNC_CAP provides each port individually with its proxy
qp0, proxy qp1, tunnel qp0, and tunnel qp1 QP numbers, and these are
used directly where required (with no offset calculations).
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 08:40:57 +00:00
|
|
|
slave = ((wc->src_qp & ~0x7) - dev->dev->phys_caps.base_proxy_sqpn) / 8;
|
2012-08-03 08:40:44 +00:00
|
|
|
if (slave != ctx->slave) {
|
|
|
|
mlx4_ib_warn(ctx->ib_dev, "can't multiplex bad sqp:%d: "
|
|
|
|
"belongs to another slave\n", wc->src_qp);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Map transaction ID */
|
|
|
|
ib_dma_sync_single_for_cpu(ctx->ib_dev, tun_qp->ring[wr_ix].map,
|
|
|
|
sizeof (struct mlx4_tunnel_mad),
|
|
|
|
DMA_FROM_DEVICE);
|
|
|
|
switch (tunnel->mad.mad_hdr.method) {
|
|
|
|
case IB_MGMT_METHOD_SET:
|
|
|
|
case IB_MGMT_METHOD_GET:
|
|
|
|
case IB_MGMT_METHOD_REPORT:
|
|
|
|
case IB_SA_METHOD_GET_TABLE:
|
|
|
|
case IB_SA_METHOD_DELETE:
|
|
|
|
case IB_SA_METHOD_GET_MULTI:
|
|
|
|
case IB_SA_METHOD_GET_TRACE_TBL:
|
|
|
|
slave_id = (u8 *) &tunnel->mad.mad_hdr.tid;
|
|
|
|
if (*slave_id) {
|
|
|
|
mlx4_ib_warn(ctx->ib_dev, "egress mad has non-null tid msb:%d "
|
|
|
|
"class:%d slave:%d\n", *slave_id,
|
|
|
|
tunnel->mad.mad_hdr.mgmt_class, slave);
|
|
|
|
return;
|
|
|
|
} else
|
|
|
|
*slave_id = slave;
|
|
|
|
default:
|
|
|
|
/* nothing */;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Class-specific handling */
|
|
|
|
switch (tunnel->mad.mad_hdr.mgmt_class) {
|
2014-05-29 13:31:02 +00:00
|
|
|
case IB_MGMT_CLASS_SUBN_LID_ROUTED:
|
|
|
|
case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
|
|
|
|
if (slave != mlx4_master_func_num(dev->dev) &&
|
|
|
|
!mlx4_vf_smi_enabled(dev->dev, slave, ctx->port))
|
|
|
|
return;
|
|
|
|
break;
|
2012-08-03 08:40:44 +00:00
|
|
|
case IB_MGMT_CLASS_SUBN_ADM:
|
|
|
|
if (mlx4_ib_multiplex_sa_handler(ctx->ib_dev, ctx->port, slave,
|
|
|
|
(struct ib_sa_mad *) &tunnel->mad))
|
|
|
|
return;
|
|
|
|
break;
|
2012-08-03 08:40:47 +00:00
|
|
|
case IB_MGMT_CLASS_CM:
|
|
|
|
if (mlx4_ib_multiplex_cm_handler(ctx->ib_dev, ctx->port, slave,
|
|
|
|
(struct ib_mad *) &tunnel->mad))
|
|
|
|
return;
|
|
|
|
break;
|
2012-08-03 08:40:44 +00:00
|
|
|
case IB_MGMT_CLASS_DEVICE_MGMT:
|
|
|
|
if (tunnel->mad.mad_hdr.method != IB_MGMT_METHOD_GET &&
|
|
|
|
tunnel->mad.mad_hdr.method != IB_MGMT_METHOD_SET)
|
|
|
|
return;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
/* Drop unsupported classes for slaves in tunnel mode */
|
|
|
|
if (slave != mlx4_master_func_num(dev->dev)) {
|
|
|
|
mlx4_ib_warn(ctx->ib_dev, "dropping unsupported egress mad from class:%d "
|
|
|
|
"for slave:%d\n", tunnel->mad.mad_hdr.mgmt_class, slave);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* We are using standard ib_core services to send the mad, so generate a
|
|
|
|
* stadard address handle by decoding the tunnelled mlx4_ah fields */
|
|
|
|
memcpy(&ah.av, &tunnel->hdr.av, sizeof (struct mlx4_av));
|
|
|
|
ah.ibah.device = ctx->ib_dev;
|
2015-05-21 12:14:06 +00:00
|
|
|
|
|
|
|
port = be32_to_cpu(ah.av.ib.port_pd) >> 24;
|
|
|
|
port = mlx4_slave_convert_port(dev->dev, slave, port);
|
|
|
|
if (port < 0)
|
|
|
|
return;
|
|
|
|
ah.av.ib.port_pd = cpu_to_be32(port << 24 | (be32_to_cpu(ah.av.ib.port_pd) & 0xffffff));
|
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
mlx4_ib_query_ah(&ah.ibah, &ah_attr);
|
2014-03-12 10:00:37 +00:00
|
|
|
if (ah_attr.ah_flags & IB_AH_GRH)
|
2014-03-12 10:00:39 +00:00
|
|
|
fill_in_real_sgid_index(dev, slave, ctx->port, &ah_attr);
|
2012-08-03 08:40:44 +00:00
|
|
|
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
memcpy(ah_attr.dmac, tunnel->hdr.mac, 6);
|
2015-10-15 15:38:51 +00:00
|
|
|
vlan_id = be16_to_cpu(tunnel->hdr.vlan);
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
/* if slave have default vlan use it */
|
|
|
|
mlx4_get_slave_default_vlan(dev->dev, ctx->port, slave,
|
2015-10-15 15:38:51 +00:00
|
|
|
&vlan_id, &ah_attr.sl);
|
mlx4: Implement IP based gids support for RoCE/SRIOV
Since there is no connection between the MAC/VLAN and the GID
when using IP-based addressing, the proxy QP1 (running on the
slave) must pass the source-mac, destination-mac, and vlan_id
information separately from the GID. Additionally, the Host
must pass the remote source-mac and vlan_id back to the slave,
This is achieved as follows:
Outgoing MADs:
1. Source MAC: obtained from the CQ completion structure
(struct ib_wc, smac field).
2. Destination MAC: obtained from the tunnel header
3. vlan_id: obtained from the tunnel header.
Incoming MADs
1. The source (i.e., remote) MAC and vlan_id are passed in
the tunnel header to the proxy QP1.
VST mode support:
For outgoing MADs, the vlan_id obtained from the header is
discarded, and the vlan_id specified by the Hypervisor is used
instead.
For incoming MADs, the incoming vlan_id (in the wc) is discarded, and the
"invalid" vlan (0xffff) is substituted when forwarding to the slave.
Signed-off-by: Moni Shoua <monis@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-12 10:00:41 +00:00
|
|
|
|
2012-08-03 08:40:44 +00:00
|
|
|
mlx4_ib_send_to_wire(dev, slave, ctx->port,
|
|
|
|
is_proxy_qp0(dev, wc->src_qp, slave) ?
|
|
|
|
IB_QPT_SMI : IB_QPT_GSI,
|
|
|
|
be16_to_cpu(tunnel->hdr.pkey_index),
|
|
|
|
be32_to_cpu(tunnel->hdr.remote_qpn),
|
|
|
|
be32_to_cpu(tunnel->hdr.qkey),
|
2015-10-15 15:38:51 +00:00
|
|
|
&ah_attr, wc->smac, vlan_id, &tunnel->mad);
|
2012-08-03 08:40:44 +00:00
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
|
|
|
|
enum ib_qp_type qp_type, int is_tun)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp;
|
|
|
|
int rx_buf_size, tx_buf_size;
|
|
|
|
|
|
|
|
if (qp_type > IB_QPT_GSI)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tun_qp = &ctx->qp[qp_type];
|
|
|
|
|
|
|
|
tun_qp->ring = kzalloc(sizeof (struct mlx4_ib_buf) * MLX4_NUM_TUNNEL_BUFS,
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!tun_qp->ring)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
tun_qp->tx_ring = kcalloc(MLX4_NUM_TUNNEL_BUFS,
|
|
|
|
sizeof (struct mlx4_ib_tun_tx_buf),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!tun_qp->tx_ring) {
|
|
|
|
kfree(tun_qp->ring);
|
|
|
|
tun_qp->ring = NULL;
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (is_tun) {
|
|
|
|
rx_buf_size = sizeof (struct mlx4_tunnel_mad);
|
|
|
|
tx_buf_size = sizeof (struct mlx4_rcv_tunnel_mad);
|
|
|
|
} else {
|
|
|
|
rx_buf_size = sizeof (struct mlx4_mad_rcv_buf);
|
|
|
|
tx_buf_size = sizeof (struct mlx4_mad_snd_buf);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
|
|
|
|
tun_qp->ring[i].addr = kmalloc(rx_buf_size, GFP_KERNEL);
|
|
|
|
if (!tun_qp->ring[i].addr)
|
|
|
|
goto err;
|
|
|
|
tun_qp->ring[i].map = ib_dma_map_single(ctx->ib_dev,
|
|
|
|
tun_qp->ring[i].addr,
|
|
|
|
rx_buf_size,
|
|
|
|
DMA_FROM_DEVICE);
|
2015-03-16 17:49:59 +00:00
|
|
|
if (ib_dma_mapping_error(ctx->ib_dev, tun_qp->ring[i].map)) {
|
|
|
|
kfree(tun_qp->ring[i].addr);
|
|
|
|
goto err;
|
|
|
|
}
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
|
|
|
|
tun_qp->tx_ring[i].buf.addr =
|
|
|
|
kmalloc(tx_buf_size, GFP_KERNEL);
|
|
|
|
if (!tun_qp->tx_ring[i].buf.addr)
|
|
|
|
goto tx_err;
|
|
|
|
tun_qp->tx_ring[i].buf.map =
|
|
|
|
ib_dma_map_single(ctx->ib_dev,
|
|
|
|
tun_qp->tx_ring[i].buf.addr,
|
|
|
|
tx_buf_size,
|
|
|
|
DMA_TO_DEVICE);
|
2015-03-16 17:49:59 +00:00
|
|
|
if (ib_dma_mapping_error(ctx->ib_dev,
|
|
|
|
tun_qp->tx_ring[i].buf.map)) {
|
|
|
|
kfree(tun_qp->tx_ring[i].buf.addr);
|
|
|
|
goto tx_err;
|
|
|
|
}
|
2012-08-03 08:40:42 +00:00
|
|
|
tun_qp->tx_ring[i].ah = NULL;
|
|
|
|
}
|
|
|
|
spin_lock_init(&tun_qp->tx_lock);
|
|
|
|
tun_qp->tx_ix_head = 0;
|
|
|
|
tun_qp->tx_ix_tail = 0;
|
|
|
|
tun_qp->proxy_qpt = qp_type;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
tx_err:
|
|
|
|
while (i > 0) {
|
|
|
|
--i;
|
|
|
|
ib_dma_unmap_single(ctx->ib_dev, tun_qp->tx_ring[i].buf.map,
|
|
|
|
tx_buf_size, DMA_TO_DEVICE);
|
|
|
|
kfree(tun_qp->tx_ring[i].buf.addr);
|
|
|
|
}
|
|
|
|
kfree(tun_qp->tx_ring);
|
|
|
|
tun_qp->tx_ring = NULL;
|
|
|
|
i = MLX4_NUM_TUNNEL_BUFS;
|
|
|
|
err:
|
|
|
|
while (i > 0) {
|
|
|
|
--i;
|
|
|
|
ib_dma_unmap_single(ctx->ib_dev, tun_qp->ring[i].map,
|
|
|
|
rx_buf_size, DMA_FROM_DEVICE);
|
|
|
|
kfree(tun_qp->ring[i].addr);
|
|
|
|
}
|
|
|
|
kfree(tun_qp->ring);
|
|
|
|
tun_qp->ring = NULL;
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_free_pv_qp_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
|
|
|
|
enum ib_qp_type qp_type, int is_tun)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp;
|
|
|
|
int rx_buf_size, tx_buf_size;
|
|
|
|
|
|
|
|
if (qp_type > IB_QPT_GSI)
|
|
|
|
return;
|
|
|
|
|
|
|
|
tun_qp = &ctx->qp[qp_type];
|
|
|
|
if (is_tun) {
|
|
|
|
rx_buf_size = sizeof (struct mlx4_tunnel_mad);
|
|
|
|
tx_buf_size = sizeof (struct mlx4_rcv_tunnel_mad);
|
|
|
|
} else {
|
|
|
|
rx_buf_size = sizeof (struct mlx4_mad_rcv_buf);
|
|
|
|
tx_buf_size = sizeof (struct mlx4_mad_snd_buf);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
|
|
|
|
ib_dma_unmap_single(ctx->ib_dev, tun_qp->ring[i].map,
|
|
|
|
rx_buf_size, DMA_FROM_DEVICE);
|
|
|
|
kfree(tun_qp->ring[i].addr);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
|
|
|
|
ib_dma_unmap_single(ctx->ib_dev, tun_qp->tx_ring[i].buf.map,
|
|
|
|
tx_buf_size, DMA_TO_DEVICE);
|
|
|
|
kfree(tun_qp->tx_ring[i].buf.addr);
|
|
|
|
if (tun_qp->tx_ring[i].ah)
|
|
|
|
ib_destroy_ah(tun_qp->tx_ring[i].ah);
|
|
|
|
}
|
|
|
|
kfree(tun_qp->tx_ring);
|
|
|
|
kfree(tun_qp->ring);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_tunnel_comp_worker(struct work_struct *work)
|
|
|
|
{
|
2012-08-03 08:40:44 +00:00
|
|
|
struct mlx4_ib_demux_pv_ctx *ctx;
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp;
|
|
|
|
struct ib_wc wc;
|
|
|
|
int ret;
|
|
|
|
ctx = container_of(work, struct mlx4_ib_demux_pv_ctx, work);
|
|
|
|
ib_req_notify_cq(ctx->cq, IB_CQ_NEXT_COMP);
|
|
|
|
|
|
|
|
while (ib_poll_cq(ctx->cq, 1, &wc) == 1) {
|
|
|
|
tun_qp = &ctx->qp[MLX4_TUN_WRID_QPN(wc.wr_id)];
|
|
|
|
if (wc.status == IB_WC_SUCCESS) {
|
|
|
|
switch (wc.opcode) {
|
|
|
|
case IB_WC_RECV:
|
|
|
|
mlx4_ib_multiplex_mad(ctx, &wc);
|
|
|
|
ret = mlx4_ib_post_pv_qp_buf(ctx, tun_qp,
|
|
|
|
wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1));
|
|
|
|
if (ret)
|
|
|
|
pr_err("Failed reposting tunnel "
|
|
|
|
"buf:%lld\n", wc.wr_id);
|
|
|
|
break;
|
|
|
|
case IB_WC_SEND:
|
|
|
|
pr_debug("received tunnel send completion:"
|
|
|
|
"wrid=0x%llx, status=0x%x\n",
|
|
|
|
wc.wr_id, wc.status);
|
|
|
|
ib_destroy_ah(tun_qp->tx_ring[wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)].ah);
|
|
|
|
tun_qp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah
|
|
|
|
= NULL;
|
|
|
|
spin_lock(&tun_qp->tx_lock);
|
|
|
|
tun_qp->tx_ix_tail++;
|
|
|
|
spin_unlock(&tun_qp->tx_lock);
|
|
|
|
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
pr_debug("mlx4_ib: completion error in tunnel: %d."
|
|
|
|
" status = %d, wrid = 0x%llx\n",
|
|
|
|
ctx->slave, wc.status, wc.wr_id);
|
|
|
|
if (!MLX4_TUN_IS_RECV(wc.wr_id)) {
|
|
|
|
ib_destroy_ah(tun_qp->tx_ring[wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)].ah);
|
|
|
|
tun_qp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah
|
|
|
|
= NULL;
|
|
|
|
spin_lock(&tun_qp->tx_lock);
|
|
|
|
tun_qp->tx_ix_tail++;
|
|
|
|
spin_unlock(&tun_qp->tx_lock);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void pv_qp_event_handler(struct ib_event *event, void *qp_context)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_demux_pv_ctx *sqp = qp_context;
|
|
|
|
|
|
|
|
/* It's worse than that! He's dead, Jim! */
|
|
|
|
pr_err("Fatal error (%d) on a MAD QP on port %d\n",
|
|
|
|
event->event, sqp->port);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int create_pv_sqp(struct mlx4_ib_demux_pv_ctx *ctx,
|
|
|
|
enum ib_qp_type qp_type, int create_tun)
|
|
|
|
{
|
|
|
|
int i, ret;
|
|
|
|
struct mlx4_ib_demux_pv_qp *tun_qp;
|
|
|
|
struct mlx4_ib_qp_tunnel_init_attr qp_init_attr;
|
|
|
|
struct ib_qp_attr attr;
|
|
|
|
int qp_attr_mask_INIT;
|
|
|
|
|
|
|
|
if (qp_type > IB_QPT_GSI)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tun_qp = &ctx->qp[qp_type];
|
|
|
|
|
|
|
|
memset(&qp_init_attr, 0, sizeof qp_init_attr);
|
|
|
|
qp_init_attr.init_attr.send_cq = ctx->cq;
|
|
|
|
qp_init_attr.init_attr.recv_cq = ctx->cq;
|
|
|
|
qp_init_attr.init_attr.sq_sig_type = IB_SIGNAL_ALL_WR;
|
|
|
|
qp_init_attr.init_attr.cap.max_send_wr = MLX4_NUM_TUNNEL_BUFS;
|
|
|
|
qp_init_attr.init_attr.cap.max_recv_wr = MLX4_NUM_TUNNEL_BUFS;
|
|
|
|
qp_init_attr.init_attr.cap.max_send_sge = 1;
|
|
|
|
qp_init_attr.init_attr.cap.max_recv_sge = 1;
|
|
|
|
if (create_tun) {
|
|
|
|
qp_init_attr.init_attr.qp_type = IB_QPT_UD;
|
|
|
|
qp_init_attr.init_attr.create_flags = MLX4_IB_SRIOV_TUNNEL_QP;
|
|
|
|
qp_init_attr.port = ctx->port;
|
|
|
|
qp_init_attr.slave = ctx->slave;
|
|
|
|
qp_init_attr.proxy_qp_type = qp_type;
|
|
|
|
qp_attr_mask_INIT = IB_QP_STATE | IB_QP_PKEY_INDEX |
|
|
|
|
IB_QP_QKEY | IB_QP_PORT;
|
|
|
|
} else {
|
|
|
|
qp_init_attr.init_attr.qp_type = qp_type;
|
|
|
|
qp_init_attr.init_attr.create_flags = MLX4_IB_SRIOV_SQP;
|
|
|
|
qp_attr_mask_INIT = IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_QKEY;
|
|
|
|
}
|
|
|
|
qp_init_attr.init_attr.port_num = ctx->port;
|
|
|
|
qp_init_attr.init_attr.qp_context = ctx;
|
|
|
|
qp_init_attr.init_attr.event_handler = pv_qp_event_handler;
|
|
|
|
tun_qp->qp = ib_create_qp(ctx->pd, &qp_init_attr.init_attr);
|
|
|
|
if (IS_ERR(tun_qp->qp)) {
|
|
|
|
ret = PTR_ERR(tun_qp->qp);
|
|
|
|
tun_qp->qp = NULL;
|
|
|
|
pr_err("Couldn't create %s QP (%d)\n",
|
|
|
|
create_tun ? "tunnel" : "special", ret);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
memset(&attr, 0, sizeof attr);
|
|
|
|
attr.qp_state = IB_QPS_INIT;
|
IB/mlx4: Use default pkey when creating tunnel QPs
When creating tunnel QPs for special QP tunneling, look for the
default pkey in the slave's virtual pkey table. If it is present, use
the real pkey index where the default pkey is located.
If the default pkey is not found in the pkey table, use the real pkey
index which is stored at index 0 in the slave's virtual pkey table
(this is the current behavior).
This change is required to support cloud computing, where the
paravirtualized index of the default pkey is moved to index 1 or
higher. The pkey at paravirtualized index 0 is used for the default
IPoIB interface created by the VF.
Its possible for the pkey value at paravirtualized index 0 to be
invalid (zero) at VF probe time (pkey index 0 is mapped to real pkey
index 127, which contains pkey = 0).
At some point after the VF probe, the cloud computing interface at the
hypervisor maps virtual index 0 for the VF to the pkey index
containing the pkey that IPoIB will use in its operation. However,
when the tunnel QP is created, the pkey at the slave's virtual index 0
is still mapped to the invalid pkey index, so tunnel QP creation
fails.
This commit causes the hypervisor to search for the default pkey in
the slave's pkey table -- and this pkey is present in the table (at
index > 0) at tunnel QP creation time, so that the tunnel QP creation
will succeed.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-07-18 11:02:30 +00:00
|
|
|
ret = 0;
|
|
|
|
if (create_tun)
|
|
|
|
ret = find_slave_port_pkey_ix(to_mdev(ctx->ib_dev), ctx->slave,
|
|
|
|
ctx->port, IB_DEFAULT_PKEY_FULL,
|
|
|
|
&attr.pkey_index);
|
|
|
|
if (ret || !create_tun)
|
|
|
|
attr.pkey_index =
|
|
|
|
to_mdev(ctx->ib_dev)->pkeys.virt2phys_pkey[ctx->slave][ctx->port - 1][0];
|
2012-08-03 08:40:42 +00:00
|
|
|
attr.qkey = IB_QP1_QKEY;
|
|
|
|
attr.port_num = ctx->port;
|
|
|
|
ret = ib_modify_qp(tun_qp->qp, &attr, qp_attr_mask_INIT);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Couldn't change %s qp state to INIT (%d)\n",
|
|
|
|
create_tun ? "tunnel" : "special", ret);
|
|
|
|
goto err_qp;
|
|
|
|
}
|
|
|
|
attr.qp_state = IB_QPS_RTR;
|
|
|
|
ret = ib_modify_qp(tun_qp->qp, &attr, IB_QP_STATE);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Couldn't change %s qp state to RTR (%d)\n",
|
|
|
|
create_tun ? "tunnel" : "special", ret);
|
|
|
|
goto err_qp;
|
|
|
|
}
|
|
|
|
attr.qp_state = IB_QPS_RTS;
|
|
|
|
attr.sq_psn = 0;
|
|
|
|
ret = ib_modify_qp(tun_qp->qp, &attr, IB_QP_STATE | IB_QP_SQ_PSN);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Couldn't change %s qp state to RTS (%d)\n",
|
|
|
|
create_tun ? "tunnel" : "special", ret);
|
|
|
|
goto err_qp;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
|
|
|
|
ret = mlx4_ib_post_pv_qp_buf(ctx, tun_qp, i);
|
|
|
|
if (ret) {
|
|
|
|
pr_err(" mlx4_ib_post_pv_buf error"
|
|
|
|
" (err = %d, i = %d)\n", ret, i);
|
|
|
|
goto err_qp;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_qp:
|
|
|
|
ib_destroy_qp(tun_qp->qp);
|
|
|
|
tun_qp->qp = NULL;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* IB MAD completion callback for real SQPs
|
|
|
|
*/
|
|
|
|
static void mlx4_ib_sqp_comp_worker(struct work_struct *work)
|
|
|
|
{
|
2012-08-03 08:40:44 +00:00
|
|
|
struct mlx4_ib_demux_pv_ctx *ctx;
|
|
|
|
struct mlx4_ib_demux_pv_qp *sqp;
|
|
|
|
struct ib_wc wc;
|
|
|
|
struct ib_grh *grh;
|
|
|
|
struct ib_mad *mad;
|
|
|
|
|
|
|
|
ctx = container_of(work, struct mlx4_ib_demux_pv_ctx, work);
|
|
|
|
ib_req_notify_cq(ctx->cq, IB_CQ_NEXT_COMP);
|
|
|
|
|
|
|
|
while (mlx4_ib_poll_cq(ctx->cq, 1, &wc) == 1) {
|
|
|
|
sqp = &ctx->qp[MLX4_TUN_WRID_QPN(wc.wr_id)];
|
|
|
|
if (wc.status == IB_WC_SUCCESS) {
|
|
|
|
switch (wc.opcode) {
|
|
|
|
case IB_WC_SEND:
|
|
|
|
ib_destroy_ah(sqp->tx_ring[wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)].ah);
|
|
|
|
sqp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah
|
|
|
|
= NULL;
|
|
|
|
spin_lock(&sqp->tx_lock);
|
|
|
|
sqp->tx_ix_tail++;
|
|
|
|
spin_unlock(&sqp->tx_lock);
|
|
|
|
break;
|
|
|
|
case IB_WC_RECV:
|
|
|
|
mad = (struct ib_mad *) &(((struct mlx4_mad_rcv_buf *)
|
|
|
|
(sqp->ring[wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)].addr))->payload);
|
|
|
|
grh = &(((struct mlx4_mad_rcv_buf *)
|
|
|
|
(sqp->ring[wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)].addr))->grh);
|
|
|
|
mlx4_ib_demux_mad(ctx->ib_dev, ctx->port, &wc, grh, mad);
|
|
|
|
if (mlx4_ib_post_pv_qp_buf(ctx, sqp, wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)))
|
|
|
|
pr_err("Failed reposting SQP "
|
|
|
|
"buf:%lld\n", wc.wr_id);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
BUG_ON(1);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
pr_debug("mlx4_ib: completion error in tunnel: %d."
|
|
|
|
" status = %d, wrid = 0x%llx\n",
|
|
|
|
ctx->slave, wc.status, wc.wr_id);
|
|
|
|
if (!MLX4_TUN_IS_RECV(wc.wr_id)) {
|
|
|
|
ib_destroy_ah(sqp->tx_ring[wc.wr_id &
|
|
|
|
(MLX4_NUM_TUNNEL_BUFS - 1)].ah);
|
|
|
|
sqp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah
|
|
|
|
= NULL;
|
|
|
|
spin_lock(&sqp->tx_lock);
|
|
|
|
sqp->tx_ix_tail++;
|
|
|
|
spin_unlock(&sqp->tx_lock);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int alloc_pv_object(struct mlx4_ib_dev *dev, int slave, int port,
|
|
|
|
struct mlx4_ib_demux_pv_ctx **ret_ctx)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_demux_pv_ctx *ctx;
|
|
|
|
|
|
|
|
*ret_ctx = NULL;
|
|
|
|
ctx = kzalloc(sizeof (struct mlx4_ib_demux_pv_ctx), GFP_KERNEL);
|
|
|
|
if (!ctx) {
|
|
|
|
pr_err("failed allocating pv resource context "
|
|
|
|
"for port %d, slave %d\n", port, slave);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx->ib_dev = &dev->ib_dev;
|
|
|
|
ctx->port = port;
|
|
|
|
ctx->slave = slave;
|
|
|
|
*ret_ctx = ctx;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void free_pv_object(struct mlx4_ib_dev *dev, int slave, int port)
|
|
|
|
{
|
|
|
|
if (dev->sriov.demux[port - 1].tun[slave]) {
|
|
|
|
kfree(dev->sriov.demux[port - 1].tun[slave]);
|
|
|
|
dev->sriov.demux[port - 1].tun[slave] = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
|
|
|
|
int create_tun, struct mlx4_ib_demux_pv_ctx *ctx)
|
|
|
|
{
|
|
|
|
int ret, cq_size;
|
2015-06-11 13:35:21 +00:00
|
|
|
struct ib_cq_init_attr cq_attr = {};
|
2012-08-03 08:40:42 +00:00
|
|
|
|
2012-08-03 08:40:58 +00:00
|
|
|
if (ctx->state != DEMUX_PV_STATE_DOWN)
|
|
|
|
return -EEXIST;
|
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
ctx->state = DEMUX_PV_STATE_STARTING;
|
2014-05-29 13:31:02 +00:00
|
|
|
/* have QP0 only if link layer is IB */
|
|
|
|
if (rdma_port_get_link_layer(ibdev, ctx->port) ==
|
|
|
|
IB_LINK_LAYER_INFINIBAND)
|
2012-08-03 08:40:42 +00:00
|
|
|
ctx->has_smi = 1;
|
|
|
|
|
|
|
|
if (ctx->has_smi) {
|
|
|
|
ret = mlx4_ib_alloc_pv_bufs(ctx, IB_QPT_SMI, create_tun);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Failed allocating qp0 tunnel bufs (%d)\n", ret);
|
|
|
|
goto err_out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = mlx4_ib_alloc_pv_bufs(ctx, IB_QPT_GSI, create_tun);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Failed allocating qp1 tunnel bufs (%d)\n", ret);
|
|
|
|
goto err_out_qp0;
|
|
|
|
}
|
|
|
|
|
|
|
|
cq_size = 2 * MLX4_NUM_TUNNEL_BUFS;
|
|
|
|
if (ctx->has_smi)
|
|
|
|
cq_size *= 2;
|
|
|
|
|
2015-06-11 13:35:21 +00:00
|
|
|
cq_attr.cqe = cq_size;
|
2012-08-03 08:40:42 +00:00
|
|
|
ctx->cq = ib_create_cq(ctx->ib_dev, mlx4_ib_tunnel_comp_handler,
|
2015-06-11 13:35:21 +00:00
|
|
|
NULL, ctx, &cq_attr);
|
2012-08-03 08:40:42 +00:00
|
|
|
if (IS_ERR(ctx->cq)) {
|
|
|
|
ret = PTR_ERR(ctx->cq);
|
|
|
|
pr_err("Couldn't create tunnel CQ (%d)\n", ret);
|
|
|
|
goto err_buf;
|
|
|
|
}
|
|
|
|
|
2016-09-05 10:56:17 +00:00
|
|
|
ctx->pd = ib_alloc_pd(ctx->ib_dev, 0);
|
2012-08-03 08:40:42 +00:00
|
|
|
if (IS_ERR(ctx->pd)) {
|
|
|
|
ret = PTR_ERR(ctx->pd);
|
|
|
|
pr_err("Couldn't create tunnel PD (%d)\n", ret);
|
|
|
|
goto err_cq;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ctx->has_smi) {
|
|
|
|
ret = create_pv_sqp(ctx, IB_QPT_SMI, create_tun);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Couldn't create %s QP0 (%d)\n",
|
|
|
|
create_tun ? "tunnel for" : "", ret);
|
2015-07-30 23:22:18 +00:00
|
|
|
goto err_pd;
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = create_pv_sqp(ctx, IB_QPT_GSI, create_tun);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Couldn't create %s QP1 (%d)\n",
|
|
|
|
create_tun ? "tunnel for" : "", ret);
|
|
|
|
goto err_qp0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (create_tun)
|
|
|
|
INIT_WORK(&ctx->work, mlx4_ib_tunnel_comp_worker);
|
|
|
|
else
|
|
|
|
INIT_WORK(&ctx->work, mlx4_ib_sqp_comp_worker);
|
|
|
|
|
|
|
|
ctx->wq = to_mdev(ibdev)->sriov.demux[port - 1].wq;
|
|
|
|
|
|
|
|
ret = ib_req_notify_cq(ctx->cq, IB_CQ_NEXT_COMP);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Couldn't arm tunnel cq (%d)\n", ret);
|
|
|
|
goto err_wq;
|
|
|
|
}
|
|
|
|
ctx->state = DEMUX_PV_STATE_ACTIVE;
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_wq:
|
|
|
|
ctx->wq = NULL;
|
|
|
|
ib_destroy_qp(ctx->qp[1].qp);
|
|
|
|
ctx->qp[1].qp = NULL;
|
|
|
|
|
|
|
|
|
|
|
|
err_qp0:
|
|
|
|
if (ctx->has_smi)
|
|
|
|
ib_destroy_qp(ctx->qp[0].qp);
|
|
|
|
ctx->qp[0].qp = NULL;
|
|
|
|
|
|
|
|
err_pd:
|
|
|
|
ib_dealloc_pd(ctx->pd);
|
|
|
|
ctx->pd = NULL;
|
|
|
|
|
|
|
|
err_cq:
|
|
|
|
ib_destroy_cq(ctx->cq);
|
|
|
|
ctx->cq = NULL;
|
|
|
|
|
|
|
|
err_buf:
|
|
|
|
mlx4_ib_free_pv_qp_bufs(ctx, IB_QPT_GSI, create_tun);
|
|
|
|
|
|
|
|
err_out_qp0:
|
|
|
|
if (ctx->has_smi)
|
|
|
|
mlx4_ib_free_pv_qp_bufs(ctx, IB_QPT_SMI, create_tun);
|
|
|
|
err_out:
|
|
|
|
ctx->state = DEMUX_PV_STATE_DOWN;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void destroy_pv_resources(struct mlx4_ib_dev *dev, int slave, int port,
|
|
|
|
struct mlx4_ib_demux_pv_ctx *ctx, int flush)
|
|
|
|
{
|
|
|
|
if (!ctx)
|
|
|
|
return;
|
|
|
|
if (ctx->state > DEMUX_PV_STATE_DOWN) {
|
|
|
|
ctx->state = DEMUX_PV_STATE_DOWNING;
|
|
|
|
if (flush)
|
|
|
|
flush_workqueue(ctx->wq);
|
|
|
|
if (ctx->has_smi) {
|
|
|
|
ib_destroy_qp(ctx->qp[0].qp);
|
|
|
|
ctx->qp[0].qp = NULL;
|
|
|
|
mlx4_ib_free_pv_qp_bufs(ctx, IB_QPT_SMI, 1);
|
|
|
|
}
|
|
|
|
ib_destroy_qp(ctx->qp[1].qp);
|
|
|
|
ctx->qp[1].qp = NULL;
|
|
|
|
mlx4_ib_free_pv_qp_bufs(ctx, IB_QPT_GSI, 1);
|
|
|
|
ib_dealloc_pd(ctx->pd);
|
|
|
|
ctx->pd = NULL;
|
|
|
|
ib_destroy_cq(ctx->cq);
|
|
|
|
ctx->cq = NULL;
|
|
|
|
ctx->state = DEMUX_PV_STATE_DOWN;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int mlx4_ib_tunnels_update(struct mlx4_ib_dev *dev, int slave,
|
|
|
|
int port, int do_init)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (!do_init) {
|
2012-08-03 08:40:46 +00:00
|
|
|
clean_vf_mcast(&dev->sriov.demux[port - 1], slave);
|
2012-08-03 08:40:42 +00:00
|
|
|
/* for master, destroy real sqp resources */
|
|
|
|
if (slave == mlx4_master_func_num(dev->dev))
|
|
|
|
destroy_pv_resources(dev, slave, port,
|
|
|
|
dev->sriov.sqps[port - 1], 1);
|
|
|
|
/* destroy the tunnel qp resources */
|
|
|
|
destroy_pv_resources(dev, slave, port,
|
|
|
|
dev->sriov.demux[port - 1].tun[slave], 1);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* create the tunnel qp resources */
|
|
|
|
ret = create_pv_resources(&dev->ib_dev, slave, port, 1,
|
|
|
|
dev->sriov.demux[port - 1].tun[slave]);
|
|
|
|
|
|
|
|
/* for master, create the real sqp resources */
|
|
|
|
if (!ret && slave == mlx4_master_func_num(dev->dev))
|
|
|
|
ret = create_pv_resources(&dev->ib_dev, slave, port, 0,
|
|
|
|
dev->sriov.sqps[port - 1]);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
void mlx4_ib_tunnels_update_work(struct work_struct *work)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_demux_work *dmxw;
|
|
|
|
|
|
|
|
dmxw = container_of(work, struct mlx4_ib_demux_work, work);
|
|
|
|
mlx4_ib_tunnels_update(dmxw->dev, dmxw->slave, (int) dmxw->port,
|
|
|
|
dmxw->do_init);
|
|
|
|
kfree(dmxw);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int mlx4_ib_alloc_demux_ctx(struct mlx4_ib_dev *dev,
|
|
|
|
struct mlx4_ib_demux_ctx *ctx,
|
|
|
|
int port)
|
|
|
|
{
|
|
|
|
char name[12];
|
|
|
|
int ret = 0;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
ctx->tun = kcalloc(dev->dev->caps.sqp_demux,
|
|
|
|
sizeof (struct mlx4_ib_demux_pv_ctx *), GFP_KERNEL);
|
|
|
|
if (!ctx->tun)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
ctx->dev = dev;
|
|
|
|
ctx->port = port;
|
|
|
|
ctx->ib_dev = &dev->ib_dev;
|
|
|
|
|
2014-03-19 16:11:52 +00:00
|
|
|
for (i = 0;
|
2015-01-25 14:59:35 +00:00
|
|
|
i < min(dev->dev->caps.sqp_demux,
|
|
|
|
(u16)(dev->dev->persist->num_vfs + 1));
|
2014-03-19 16:11:52 +00:00
|
|
|
i++) {
|
|
|
|
struct mlx4_active_ports actv_ports =
|
|
|
|
mlx4_get_active_ports(dev->dev, i);
|
|
|
|
|
|
|
|
if (!test_bit(port - 1, actv_ports.ports))
|
|
|
|
continue;
|
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
ret = alloc_pv_object(dev, i, port, &ctx->tun[i]);
|
|
|
|
if (ret) {
|
|
|
|
ret = -ENOMEM;
|
2012-08-03 08:40:46 +00:00
|
|
|
goto err_mcg;
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:46 +00:00
|
|
|
ret = mlx4_ib_mcg_port_init(ctx);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("Failed initializing mcg para-virt (%d)\n", ret);
|
|
|
|
goto err_mcg;
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
snprintf(name, sizeof name, "mlx4_ibt%d", port);
|
2016-08-15 18:13:10 +00:00
|
|
|
ctx->wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
|
2012-08-03 08:40:42 +00:00
|
|
|
if (!ctx->wq) {
|
|
|
|
pr_err("Failed to create tunnelling WQ for port %d\n", port);
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto err_wq;
|
|
|
|
}
|
|
|
|
|
|
|
|
snprintf(name, sizeof name, "mlx4_ibud%d", port);
|
2016-08-15 18:13:10 +00:00
|
|
|
ctx->ud_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
|
2012-08-03 08:40:42 +00:00
|
|
|
if (!ctx->ud_wq) {
|
|
|
|
pr_err("Failed to create up/down WQ for port %d\n", port);
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto err_udwq;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_udwq:
|
|
|
|
destroy_workqueue(ctx->wq);
|
|
|
|
ctx->wq = NULL;
|
|
|
|
|
|
|
|
err_wq:
|
2012-08-03 08:40:46 +00:00
|
|
|
mlx4_ib_mcg_port_cleanup(ctx, 1);
|
|
|
|
err_mcg:
|
2012-08-03 08:40:42 +00:00
|
|
|
for (i = 0; i < dev->dev->caps.sqp_demux; i++)
|
|
|
|
free_pv_object(dev, i, port);
|
|
|
|
kfree(ctx->tun);
|
|
|
|
ctx->tun = NULL;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_free_sqp_ctx(struct mlx4_ib_demux_pv_ctx *sqp_ctx)
|
|
|
|
{
|
|
|
|
if (sqp_ctx->state > DEMUX_PV_STATE_DOWN) {
|
|
|
|
sqp_ctx->state = DEMUX_PV_STATE_DOWNING;
|
|
|
|
flush_workqueue(sqp_ctx->wq);
|
|
|
|
if (sqp_ctx->has_smi) {
|
|
|
|
ib_destroy_qp(sqp_ctx->qp[0].qp);
|
|
|
|
sqp_ctx->qp[0].qp = NULL;
|
|
|
|
mlx4_ib_free_pv_qp_bufs(sqp_ctx, IB_QPT_SMI, 0);
|
|
|
|
}
|
|
|
|
ib_destroy_qp(sqp_ctx->qp[1].qp);
|
|
|
|
sqp_ctx->qp[1].qp = NULL;
|
|
|
|
mlx4_ib_free_pv_qp_bufs(sqp_ctx, IB_QPT_GSI, 0);
|
|
|
|
ib_dealloc_pd(sqp_ctx->pd);
|
|
|
|
sqp_ctx->pd = NULL;
|
|
|
|
ib_destroy_cq(sqp_ctx->cq);
|
|
|
|
sqp_ctx->cq = NULL;
|
|
|
|
sqp_ctx->state = DEMUX_PV_STATE_DOWN;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_free_demux_ctx(struct mlx4_ib_demux_ctx *ctx)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
if (ctx) {
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ctx->ib_dev);
|
2012-08-03 08:40:46 +00:00
|
|
|
mlx4_ib_mcg_port_cleanup(ctx, 1);
|
2012-08-03 08:40:42 +00:00
|
|
|
for (i = 0; i < dev->dev->caps.sqp_demux; i++) {
|
|
|
|
if (!ctx->tun[i])
|
|
|
|
continue;
|
|
|
|
if (ctx->tun[i]->state > DEMUX_PV_STATE_DOWN)
|
|
|
|
ctx->tun[i]->state = DEMUX_PV_STATE_DOWNING;
|
|
|
|
}
|
|
|
|
flush_workqueue(ctx->wq);
|
|
|
|
for (i = 0; i < dev->dev->caps.sqp_demux; i++) {
|
|
|
|
destroy_pv_resources(dev, i, ctx->port, ctx->tun[i], 0);
|
|
|
|
free_pv_object(dev, i, ctx->port);
|
|
|
|
}
|
|
|
|
kfree(ctx->tun);
|
|
|
|
destroy_workqueue(ctx->ud_wq);
|
|
|
|
destroy_workqueue(ctx->wq);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_master_tunnels(struct mlx4_ib_dev *dev, int do_init)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!mlx4_is_master(dev->dev))
|
|
|
|
return;
|
|
|
|
/* initialize or tear down tunnel QPs for the master */
|
|
|
|
for (i = 0; i < dev->dev->caps.num_ports; i++)
|
|
|
|
mlx4_ib_tunnels_update(dev, mlx4_master_func_num(dev->dev), i + 1, do_init);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_init_sriov(struct mlx4_ib_dev *dev)
|
|
|
|
{
|
|
|
|
int i = 0;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
if (!mlx4_is_mfunc(dev->dev))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
dev->sriov.is_going_down = 0;
|
|
|
|
spin_lock_init(&dev->sriov.going_down_lock);
|
2012-08-03 08:40:47 +00:00
|
|
|
mlx4_ib_cm_paravirt_init(dev);
|
2012-08-03 08:40:42 +00:00
|
|
|
|
|
|
|
mlx4_ib_warn(&dev->ib_dev, "multi-function enabled\n");
|
|
|
|
|
|
|
|
if (mlx4_is_slave(dev->dev)) {
|
|
|
|
mlx4_ib_warn(&dev->ib_dev, "operating in qp1 tunnel mode\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:56 +00:00
|
|
|
for (i = 0; i < dev->dev->caps.sqp_demux; i++) {
|
|
|
|
if (i == mlx4_master_func_num(dev->dev))
|
|
|
|
mlx4_put_slave_node_guid(dev->dev, i, dev->ib_dev.node_guid);
|
|
|
|
else
|
|
|
|
mlx4_put_slave_node_guid(dev->dev, i, mlx4_ib_gen_node_guid());
|
|
|
|
}
|
|
|
|
|
2012-08-03 08:40:49 +00:00
|
|
|
err = mlx4_ib_init_alias_guid_service(dev);
|
|
|
|
if (err) {
|
|
|
|
mlx4_ib_warn(&dev->ib_dev, "Failed init alias guid process.\n");
|
|
|
|
goto paravirt_err;
|
|
|
|
}
|
2012-08-03 08:40:51 +00:00
|
|
|
err = mlx4_ib_device_register_sysfs(dev);
|
|
|
|
if (err) {
|
|
|
|
mlx4_ib_warn(&dev->ib_dev, "Failed to register sysfs\n");
|
|
|
|
goto sysfs_err;
|
|
|
|
}
|
2012-08-03 08:40:49 +00:00
|
|
|
|
2012-08-03 08:40:42 +00:00
|
|
|
mlx4_ib_warn(&dev->ib_dev, "initializing demux service for %d qp1 clients\n",
|
|
|
|
dev->dev->caps.sqp_demux);
|
|
|
|
for (i = 0; i < dev->num_ports; i++) {
|
2012-08-03 08:40:49 +00:00
|
|
|
union ib_gid gid;
|
|
|
|
err = __mlx4_ib_query_gid(&dev->ib_dev, i + 1, 0, &gid, 1);
|
|
|
|
if (err)
|
|
|
|
goto demux_err;
|
|
|
|
dev->sriov.demux[i].guid_cache[0] = gid.global.interface_id;
|
IB/mlx4: Use correct subnet-prefix in QP1 mads under SR-IOV
When sending QP1 MAD packets which use a GRH, the source GID
(which consists of the 64-bit subnet prefix, and the 64 bit port GUID)
must be included in the packet GRH.
For SR-IOV, a GID cache is used, since the source GID needs to be the
slave's source GID, and not the Hypervisor's GID. This cache also
included a subnet_prefix. Unfortunately, the subnet_prefix field in
the cache was never initialized (to the default subnet prefix 0xfe80::0).
As a result, this field remained all zeroes. Therefore, when SR-IOV
was active, all QP1 packets which included a GRH had a source GID
subnet prefix of all-zeroes.
However, the subnet-prefix should initially be 0xfe80::0 (the default
subnet prefix). In addition, if OpenSM modifies a port's subnet prefix,
the new subnet prefix must be used in the GRH when sending QP1 packets.
To fix this we now initialize the subnet prefix in the SR-IOV GID cache
to the default subnet prefix. We update the cached value if/when OpenSM
modifies the port's subnet prefix. We take this cached value when sending
QP1 packets when SR-IOV is active.
Note that the value is stored as an atomic64. This eliminates any need
for locking when the subnet prefix is being updated.
Note also that we depend on the FW generating the "port management change"
event for tracking subnet-prefix changes performed by OpenSM. If running
early FW (before 2.9.4630), subnet prefix changes will not be tracked (but
the default subnet prefix still will be stored in the cache; therefore
users who do not modify the subnet prefix will not have a problem).
IF there is a need for such tracking also for early FW, we will add that
capability in a subsequent patch.
Fixes: 1ffeb2eb8be9 ("IB/mlx4: SR-IOV IB context objects and proxy/tunnel SQP support")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2016-09-12 16:16:20 +00:00
|
|
|
atomic64_set(&dev->sriov.demux[i].subnet_prefix,
|
|
|
|
be64_to_cpu(gid.global.subnet_prefix));
|
2012-08-03 08:40:42 +00:00
|
|
|
err = alloc_pv_object(dev, mlx4_master_func_num(dev->dev), i + 1,
|
|
|
|
&dev->sriov.sqps[i]);
|
|
|
|
if (err)
|
|
|
|
goto demux_err;
|
|
|
|
err = mlx4_ib_alloc_demux_ctx(dev, &dev->sriov.demux[i], i + 1);
|
|
|
|
if (err)
|
2013-02-04 11:22:36 +00:00
|
|
|
goto free_pv;
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|
|
|
|
mlx4_ib_master_tunnels(dev, 1);
|
|
|
|
return 0;
|
|
|
|
|
2013-02-04 11:22:36 +00:00
|
|
|
free_pv:
|
|
|
|
free_pv_object(dev, mlx4_master_func_num(dev->dev), i + 1);
|
2012-08-03 08:40:42 +00:00
|
|
|
demux_err:
|
2013-02-04 11:22:36 +00:00
|
|
|
while (--i >= 0) {
|
2012-08-03 08:40:42 +00:00
|
|
|
free_pv_object(dev, mlx4_master_func_num(dev->dev), i + 1);
|
|
|
|
mlx4_ib_free_demux_ctx(&dev->sriov.demux[i]);
|
|
|
|
}
|
2012-08-03 08:40:51 +00:00
|
|
|
mlx4_ib_device_unregister_sysfs(dev);
|
|
|
|
|
|
|
|
sysfs_err:
|
2012-08-03 08:40:49 +00:00
|
|
|
mlx4_ib_destroy_alias_guid_service(dev);
|
|
|
|
|
|
|
|
paravirt_err:
|
2012-08-03 08:40:47 +00:00
|
|
|
mlx4_ib_cm_paravirt_clean(dev, -1);
|
2012-08-03 08:40:42 +00:00
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
void mlx4_ib_close_sriov(struct mlx4_ib_dev *dev)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
if (!mlx4_is_mfunc(dev->dev))
|
|
|
|
return;
|
|
|
|
|
|
|
|
spin_lock_irqsave(&dev->sriov.going_down_lock, flags);
|
|
|
|
dev->sriov.is_going_down = 1;
|
|
|
|
spin_unlock_irqrestore(&dev->sriov.going_down_lock, flags);
|
2012-08-03 08:40:47 +00:00
|
|
|
if (mlx4_is_master(dev->dev)) {
|
2012-08-03 08:40:42 +00:00
|
|
|
for (i = 0; i < dev->num_ports; i++) {
|
|
|
|
flush_workqueue(dev->sriov.demux[i].ud_wq);
|
|
|
|
mlx4_ib_free_sqp_ctx(dev->sriov.sqps[i]);
|
|
|
|
kfree(dev->sriov.sqps[i]);
|
|
|
|
dev->sriov.sqps[i] = NULL;
|
|
|
|
mlx4_ib_free_demux_ctx(&dev->sriov.demux[i]);
|
|
|
|
}
|
2012-08-03 08:40:47 +00:00
|
|
|
|
|
|
|
mlx4_ib_cm_paravirt_clean(dev, -1);
|
2012-08-03 08:40:49 +00:00
|
|
|
mlx4_ib_destroy_alias_guid_service(dev);
|
2012-08-03 08:40:51 +00:00
|
|
|
mlx4_ib_device_unregister_sysfs(dev);
|
2012-08-03 08:40:47 +00:00
|
|
|
}
|
2012-08-03 08:40:42 +00:00
|
|
|
}
|