First pull request for 4.17-rc

- Various build fixes (USER_ACCESS=m and ADDR_TRANS turned off)
 - SPDX license tag cleanups (new tag Linux-OpenIB)
 - RoCE GID fixes related to default GIDs
 - Various fixes to: cxgb4, uverbs, cma, iwpm, rxe, hns (big batch),
   mlx4, mlx5, and hfi1 (medium batch)
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJa7JXPAAoJELgmozMOVy/dc0AP/0i7EajAmgl1ihka6BYVj2pa
 DV8iSrVMDPulh9AVnAtwLJSbdwmgN/HeVzLzcutHyMYk6tAf8RCs6TsyoB36XiOL
 oUh5+V2GyNnyh9veWPwyGTgZKCpPJc3uQFV6502lZVDYwArMfGatumApBgQVKiJ+
 YdPEXEQZPNIs6YZB1WXkNYV/ra9u0aBByQvUrxwVZ2AND+srJYO82tqZit2wBtjK
 UXrhmZbWXGWMFg8K3/lpfUkQhkG3Arj+tMKkCfqsVzC7wUPhlTKBHR9NmvdLIiy9
 5Vhv7Xp78udcxZKtUeTFsbhaMqqK7x7sKHnpKAs7hOZNZ/Eg47BrMwMrZVLOFuDF
 nBLUL1H+nJ1mASZoMWH5xzOpVew+e9X0cot09pVDBIvsOIh97wCG7hgptQ2Z5xig
 fcDiMmg6tuakMsaiD0dzC9JI5HR6Z7+6oR1tBkQFDxQ+XkkcoFabdmkJaIRRwOj7
 CUhXRgcm0UgVd03Jdta6CtYXsjSODirWg4AvSSMt9lUFpjYf9WZP00/YojcBbBEH
 UlVrPbsKGyncgrm3FUP6kXmScESfdTljTPDLiY9cO9+bhhPGo1OHf005EfAp178B
 jGp6hbKlt+rNs9cdXrPSPhjds+QF8HyfSlwyYVWKw8VWlh/5DG8uyGYjF05hYO0q
 xhjIS6/EZjcTAh5e4LzR
 =PI8v
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma fixes from Doug Ledford:
 "This is our first pull request of the rc cycle. It's not that it's
  been overly quiet, we were just waiting on a few things before sending
  this off.

  For instance, the 6 patch series from Intel for the hfi1 driver had
  actually been pulled in on Tuesday for a Wednesday pull request, only
  to have Jason notice something I missed, so we held off for some
  testing, and then on Thursday had to respin the series because the
  very first patch needed a minor fix (unnecessary cast is all).

  There is a sizable hns patch series in here, as well as a reasonably
  largish hfi1 patch series, then all of the lines of uapi updates are
  just the change to the new official Linux-OpenIB SPDX tag (a bunch of
  our files had what amounts to a BSD-2-Clause + MIT Warranty statement
  as their license as a result of the initial code submission years ago,
  and the SPDX folks decided it was unique enough to warrant a unique
  tag), then the typical mlx4 and mlx5 updates, and finally some cxgb4
  and core/cache/cma updates to round out the bunch.

  None of it was overly large by itself, but in the 2 1/2 weeks we've
  been collecting patches, it has added up :-/.

  As best I can tell, it's been through 0day (I got a notice about my
  last for-next push, but not for my for-rc push, but Jason seems to
  think that failure messages are prioritized and success messages not
  so much). It's also been through linux-next. And yes, we did notice in
  the context portion of the CMA query gid fix patch that there is a
  dubious BUG_ON() in the code, and have plans to audit our BUG_ON usage
  and remove it anywhere we can.

  Summary:

   - Various build fixes (USER_ACCESS=m and ADDR_TRANS turned off)

   - SPDX license tag cleanups (new tag Linux-OpenIB)

   - RoCE GID fixes related to default GIDs

   - Various fixes to: cxgb4, uverbs, cma, iwpm, rxe, hns (big batch),
     mlx4, mlx5, and hfi1 (medium batch)"
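For reference, the Linux-OpenIB SPDX tag called out above is applied as a one-line
license identifier at the top of the affected uapi headers. The exact expression is
per-file, but it is typically of this form (illustrative only; see the individual
headers for the authoritative line):

	/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */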

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (52 commits)
  RDMA/cma: Do not query GID during QP state transition to RTR
  IB/mlx4: Fix integer overflow when calculating optimal MTT size
  IB/hfi1: Fix memory leak in exception path in get_irq_affinity()
  IB/{hfi1, rdmavt}: Fix memory leak in hfi1_alloc_devdata() upon failure
  IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used
  IB/hfi1: Fix loss of BECN with AHG
  IB/hfi1 Use correct type for num_user_context
  IB/hfi1: Fix handling of FECN marked multicast packet
  IB/core: Make ib_mad_client_id atomic
  iw_cxgb4: Atomically flush per QP HW CQEs
  IB/uverbs: Fix kernel crash during MR deregistration flow
  IB/uverbs: Prevent reregistration of DM_MR to regular MR
  RDMA/mlx4: Add missed RSS hash inner header flag
  RDMA/hns: Fix a couple misspellings
  RDMA/hns: Submit bad wr
  RDMA/hns: Update assignment method for owner field of send wqe
  RDMA/hns: Adjust the order of cleanup hem table
  RDMA/hns: Only assign dqpn if IB_QP_PATH_DEST_QPN bit is set
  RDMA/hns: Remove some unnecessary attr_mask judgement
  RDMA/hns: Only assign mtu if IB_QP_PATH_MTU bit is set
  ...
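A note on the BUG_ON() audit mentioned above: the dubious check sits in the context of
the "RDMA/cma: Do not query GID during QP state transition to RTR" patch, and the usual
conversion is to warn and fail the operation rather than halt the machine. A hypothetical
sketch, not a hunk from this series (the helper name is made up):

	/* Illustrative only: cma_check_dev_consistency() does not exist in this tree. */
	static int cma_check_dev_consistency(struct rdma_id_private *id_priv)
	{
		/* was: BUG_ON(id_priv->cma_dev->device != id_priv->id.device); */
		if (WARN_ON_ONCE(id_priv->cma_dev->device != id_priv->id.device))
			return -EINVAL;	/* report once, fail gracefully */
		return 0;
	}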
Linus Torvalds 2018-05-04 20:51:10 -10:00
commit eb4f959b26
63 changed files with 409 additions and 210 deletions

View File

@@ -61,9 +61,12 @@ config INFINIBAND_ON_DEMAND_PAGING
 	  pages on demand instead.
 
 config INFINIBAND_ADDR_TRANS
-	bool
+	bool "RDMA/CM"
 	depends on INFINIBAND
 	default y
+	---help---
+	  Support for RDMA communication manager (CM).
+	  This allows for a generic connection abstraction over RDMA.
 
 config INFINIBAND_ADDR_TRANS_CONFIGFS
 	bool

View File

@@ -291,14 +291,18 @@ static int find_gid(struct ib_gid_table *table, const union ib_gid *gid,
 		 * so lookup free slot only if requested.
 		 */
 		if (pempty && empty < 0) {
-			if (data->props & GID_TABLE_ENTRY_INVALID) {
-				/* Found an invalid (free) entry; allocate it */
-				if (data->props & GID_TABLE_ENTRY_DEFAULT) {
-					if (default_gid)
-						empty = curr_index;
-				} else {
-					empty = curr_index;
-				}
+			if (data->props & GID_TABLE_ENTRY_INVALID &&
+			    (default_gid ==
+			     !!(data->props & GID_TABLE_ENTRY_DEFAULT))) {
+				/*
+				 * Found an invalid (free) entry; allocate it.
+				 * If default GID is requested, then our
+				 * found slot must be one of the DEFAULT
+				 * reserved slots or we fail.
+				 * This ensures that only DEFAULT reserved
+				 * slots are used for default property GIDs.
+				 */
+				empty = curr_index;
 			}
 		}
@@ -420,8 +424,10 @@ int ib_cache_gid_add(struct ib_device *ib_dev, u8 port,
 	return ret;
 }
 
-int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
-		     union ib_gid *gid, struct ib_gid_attr *attr)
+static int
+_ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
+		  union ib_gid *gid, struct ib_gid_attr *attr,
+		  unsigned long mask, bool default_gid)
 {
 	struct ib_gid_table *table;
 	int ret = 0;
@@ -431,11 +437,7 @@ int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
 
 	mutex_lock(&table->lock);
 
-	ix = find_gid(table, gid, attr, false,
-		      GID_ATTR_FIND_MASK_GID |
-		      GID_ATTR_FIND_MASK_GID_TYPE |
-		      GID_ATTR_FIND_MASK_NETDEV,
-		      NULL);
+	ix = find_gid(table, gid, attr, default_gid, mask, NULL);
 	if (ix < 0) {
 		ret = -EINVAL;
 		goto out_unlock;
@@ -452,6 +454,17 @@ out_unlock:
 	return ret;
 }
 
+int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
+		     union ib_gid *gid, struct ib_gid_attr *attr)
+{
+	unsigned long mask = GID_ATTR_FIND_MASK_GID |
+			     GID_ATTR_FIND_MASK_GID_TYPE |
+			     GID_ATTR_FIND_MASK_DEFAULT |
+			     GID_ATTR_FIND_MASK_NETDEV;
+
+	return _ib_cache_gid_del(ib_dev, port, gid, attr, mask, false);
+}
+
 int ib_cache_gid_del_all_netdev_gids(struct ib_device *ib_dev, u8 port,
 				     struct net_device *ndev)
 {
@@ -728,7 +741,7 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
 				  unsigned long gid_type_mask,
 				  enum ib_cache_gid_default_mode mode)
 {
-	union ib_gid gid;
+	union ib_gid gid = { };
 	struct ib_gid_attr gid_attr;
 	struct ib_gid_table *table;
 	unsigned int gid_type;
@@ -736,7 +749,9 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
 
 	table = ib_dev->cache.ports[port - rdma_start_port(ib_dev)].gid;
 
-	make_default_gid(ndev, &gid);
+	mask = GID_ATTR_FIND_MASK_GID_TYPE |
+	       GID_ATTR_FIND_MASK_DEFAULT |
+	       GID_ATTR_FIND_MASK_NETDEV;
 	memset(&gid_attr, 0, sizeof(gid_attr));
 	gid_attr.ndev = ndev;
 
@@ -747,12 +762,12 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
 		gid_attr.gid_type = gid_type;
 
 		if (mode == IB_CACHE_GID_DEFAULT_MODE_SET) {
-			mask = GID_ATTR_FIND_MASK_GID_TYPE |
-			       GID_ATTR_FIND_MASK_DEFAULT;
+			make_default_gid(ndev, &gid);
 			__ib_cache_gid_add(ib_dev, port, &gid,
 					   &gid_attr, mask, true);
 		} else if (mode == IB_CACHE_GID_DEFAULT_MODE_DELETE) {
-			ib_cache_gid_del(ib_dev, port, &gid, &gid_attr);
+			_ib_cache_gid_del(ib_dev, port, &gid,
+					  &gid_attr, mask, true);
 		}
 	}
 }

View File

@@ -382,6 +382,8 @@ struct cma_hdr {
 #define CMA_VERSION 0x00
 
 struct cma_req_info {
+	struct sockaddr_storage listen_addr_storage;
+	struct sockaddr_storage src_addr_storage;
 	struct ib_device *device;
 	int port;
 	union ib_gid local_gid;
@@ -866,7 +868,6 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
 {
 	struct ib_qp_attr qp_attr;
 	int qp_attr_mask, ret;
-	union ib_gid sgid;
 
 	mutex_lock(&id_priv->qp_mutex);
 	if (!id_priv->id.qp) {
@@ -889,12 +890,6 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
 	if (ret)
 		goto out;
 
-	ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num,
-			   rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index,
-			   &sgid, NULL);
-	if (ret)
-		goto out;
-
 	BUG_ON(id_priv->cma_dev->device != id_priv->id.device);
 
 	if (conn_param)
@@ -1340,11 +1335,11 @@ static bool validate_net_dev(struct net_device *net_dev,
 }
 
 static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
-					  const struct cma_req_info *req)
+					  struct cma_req_info *req)
 {
-	struct sockaddr_storage listen_addr_storage, src_addr_storage;
-	struct sockaddr *listen_addr = (struct sockaddr *)&listen_addr_storage,
-			*src_addr = (struct sockaddr *)&src_addr_storage;
+	struct sockaddr *listen_addr =
+			(struct sockaddr *)&req->listen_addr_storage;
+	struct sockaddr *src_addr = (struct sockaddr *)&req->src_addr_storage;
 	struct net_device *net_dev;
 	const union ib_gid *gid = req->has_gid ? &req->local_gid : NULL;
 	int err;
@@ -1359,11 +1354,6 @@ static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
 	if (!net_dev)
 		return ERR_PTR(-ENODEV);
 
-	if (!validate_net_dev(net_dev, listen_addr, src_addr)) {
-		dev_put(net_dev);
-		return ERR_PTR(-EHOSTUNREACH);
-	}
-
 	return net_dev;
 }
 
@@ -1490,15 +1480,51 @@ static struct rdma_id_private *cma_id_from_event(struct ib_cm_id *cm_id,
 		}
 	}
 
+	/*
+	 * Net namespace might be getting deleted while route lookup,
+	 * cm_id lookup is in progress. Therefore, perform netdevice
+	 * validation, cm_id lookup under rcu lock.
+	 * RCU lock along with netdevice state check, synchronizes with
+	 * netdevice migrating to different net namespace and also avoids
+	 * case where net namespace doesn't get deleted while lookup is in
+	 * progress.
+	 * If the device state is not IFF_UP, its properties such as ifindex
+	 * and nd_net cannot be trusted to remain valid without rcu lock.
+	 * net/core/dev.c change_net_namespace() ensures to synchronize with
+	 * ongoing operations on net device after device is closed using
+	 * synchronize_net().
+	 */
+	rcu_read_lock();
+	if (*net_dev) {
+		/*
+		 * If netdevice is down, it is likely that it is administratively
+		 * down or it might be migrating to different namespace.
+		 * In that case avoid further processing, as the net namespace
+		 * or ifindex may change.
+		 */
+		if (((*net_dev)->flags & IFF_UP) == 0) {
+			id_priv = ERR_PTR(-EHOSTUNREACH);
+			goto err;
+		}
+
+		if (!validate_net_dev(*net_dev,
+				 (struct sockaddr *)&req.listen_addr_storage,
+				 (struct sockaddr *)&req.src_addr_storage)) {
+			id_priv = ERR_PTR(-EHOSTUNREACH);
+			goto err;
+		}
+	}
+
 	bind_list = cma_ps_find(*net_dev ? dev_net(*net_dev) : &init_net,
 				rdma_ps_from_service_id(req.service_id),
 				cma_port_from_service_id(req.service_id));
 	id_priv = cma_find_listener(bind_list, cm_id, ib_event, &req, *net_dev);
+
+err:
+	rcu_read_unlock();
 	if (IS_ERR(id_priv) && *net_dev) {
 		dev_put(*net_dev);
 		*net_dev = NULL;
 	}
 	return id_priv;
 }

View File

@@ -114,7 +114,7 @@ int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
 			struct sockaddr_storage *mapped_sockaddr,
 			u8 nl_client)
 {
-	struct hlist_head *hash_bucket_head;
+	struct hlist_head *hash_bucket_head = NULL;
 	struct iwpm_mapping_info *map_info;
 	unsigned long flags;
 	int ret = -EINVAL;
@@ -142,6 +142,9 @@ int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
 		}
 	}
 	spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);
+
+	if (!hash_bucket_head)
+		kfree(map_info);
 	return ret;
 }

View File

@@ -59,7 +59,7 @@ module_param_named(recv_queue_size, mad_recvq_size, int, 0444);
 MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");
 
 static struct list_head ib_mad_port_list;
-static u32 ib_mad_client_id = 0;
+static atomic_t ib_mad_client_id = ATOMIC_INIT(0);
 
 /* Port list lock */
 static DEFINE_SPINLOCK(ib_mad_port_list_lock);
@@ -377,7 +377,7 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
 	}
 
 	spin_lock_irqsave(&port_priv->reg_lock, flags);
-	mad_agent_priv->agent.hi_tid = ++ib_mad_client_id;
+	mad_agent_priv->agent.hi_tid = atomic_inc_return(&ib_mad_client_id);
 
 	/*
 	 * Make sure MAD registration (if supplied)
View File

@@ -255,6 +255,7 @@ static void bond_delete_netdev_default_gids(struct ib_device *ib_dev,
 					    struct net_device *rdma_ndev)
 {
 	struct net_device *real_dev = rdma_vlan_dev_real_dev(event_ndev);
+	unsigned long gid_type_mask;
 
 	if (!rdma_ndev)
 		return;
@@ -264,10 +265,14 @@ static void bond_delete_netdev_default_gids(struct ib_device *ib_dev,
 
 	rcu_read_lock();
 
-	if (rdma_is_upper_dev_rcu(rdma_ndev, event_ndev) &&
-	    is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev) ==
-	    BONDING_SLAVE_STATE_INACTIVE) {
-		unsigned long gid_type_mask;
-
+	if (((rdma_ndev != event_ndev &&
+	      !rdma_is_upper_dev_rcu(rdma_ndev, event_ndev)) ||
+	     is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev)
+	     ==
+	     BONDING_SLAVE_STATE_INACTIVE)) {
+		rcu_read_unlock();
+		return;
+	}
+
 	rcu_read_unlock();
@@ -276,9 +281,6 @@ static void bond_delete_netdev_default_gids(struct ib_device *ib_dev,
 	ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev,
 				     gid_type_mask,
 				     IB_CACHE_GID_DEFAULT_MODE_DELETE);
-	} else {
-		rcu_read_unlock();
-	}
 }
 
 static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,

View File

@@ -159,6 +159,23 @@ static void ucma_put_ctx(struct ucma_context *ctx)
 		complete(&ctx->comp);
 }
 
+/*
+ * Same as ucm_get_ctx but requires that ->cm_id->device is valid, eg that the
+ * CM_ID is bound.
+ */
+static struct ucma_context *ucma_get_ctx_dev(struct ucma_file *file, int id)
+{
+	struct ucma_context *ctx = ucma_get_ctx(file, id);
+
+	if (IS_ERR(ctx))
+		return ctx;
+	if (!ctx->cm_id->device) {
+		ucma_put_ctx(ctx);
+		return ERR_PTR(-EINVAL);
+	}
+	return ctx;
+}
+
 static void ucma_close_event_id(struct work_struct *work)
 {
 	struct ucma_event *uevent_close = container_of(work, struct ucma_event, close_work);
@@ -683,7 +700,7 @@ static ssize_t ucma_resolve_ip(struct ucma_file *file,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	if (!rdma_addr_size_in6(&cmd.src_addr) ||
+	if ((cmd.src_addr.sin6_family && !rdma_addr_size_in6(&cmd.src_addr)) ||
 	    !rdma_addr_size_in6(&cmd.dst_addr))
 		return -EINVAL;
 
@@ -734,7 +751,7 @@ static ssize_t ucma_resolve_route(struct ucma_file *file,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1050,7 +1067,7 @@ static ssize_t ucma_connect(struct ucma_file *file, const char __user *inbuf,
 	if (!cmd.conn_param.valid)
 		return -EINVAL;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1092,7 +1109,7 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1120,7 +1137,7 @@ static ssize_t ucma_reject(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1139,7 +1156,7 @@ static ssize_t ucma_disconnect(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1167,15 +1184,10 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
 	if (cmd.qp_state > IB_QPS_ERR)
 		return -EINVAL;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
-	if (!ctx->cm_id->device) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	resp.qp_attr_mask = 0;
 	memset(&qp_attr, 0, sizeof qp_attr);
 	qp_attr.qp_state = cmd.qp_state;
@@ -1316,13 +1328,13 @@ static ssize_t ucma_set_option(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
+	if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
+		return -EINVAL;
+
 	ctx = ucma_get_ctx(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
-	if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
-		return -EINVAL;
-
 	optval = memdup_user(u64_to_user_ptr(cmd.optval),
 			     cmd.optlen);
 	if (IS_ERR(optval)) {
@@ -1384,7 +1396,7 @@ static ssize_t ucma_process_join(struct ucma_file *file,
 	else
 		return -EINVAL;
 
-	ctx = ucma_get_ctx(file, cmd->id);
+	ctx = ucma_get_ctx_dev(file, cmd->id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);

View File

@@ -691,6 +691,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
 	mr->device  = pd->device;
 	mr->pd      = pd;
+	mr->dm	    = NULL;
 	mr->uobject = uobj;
 	atomic_inc(&pd->usecnt);
 	mr->res.type = RDMA_RESTRACK_MR;
@@ -765,6 +766,11 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file,
 
 	mr = uobj->object;
 
+	if (mr->dm) {
+		ret = -EINVAL;
+		goto put_uobjs;
+	}
+
 	if (cmd.flags & IB_MR_REREG_ACCESS) {
 		ret = ib_check_mr_access(cmd.access_flags);
 		if (ret)

View File

@@ -234,6 +234,15 @@ static int uverbs_validate_kernel_mandatory(const struct uverbs_method_spec *method_spec,
 			return -EINVAL;
 	}
 
+	for (; i < method_spec->num_buckets; i++) {
+		struct uverbs_attr_spec_hash *attr_spec_bucket =
+			method_spec->attr_buckets[i];
+
+		if (!bitmap_empty(attr_spec_bucket->mandatory_attrs_bitmask,
+				  attr_spec_bucket->num_attrs))
+			return -EINVAL;
+	}
+
 	return 0;
 }

View File

@@ -363,28 +363,28 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)(struct ib_device *ib_dev,
 
 static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = {
 	[IB_UVERBS_FLOW_ACTION_ESP_KEYMAT_AES_GCM] = {
-		.ptr = {
+		{ .ptr = {
 			.type = UVERBS_ATTR_TYPE_PTR_IN,
 			UVERBS_ATTR_TYPE(struct ib_uverbs_flow_action_esp_keymat_aes_gcm),
 			.flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO,
-		},
+		} },
 	},
 };
 
 static const struct uverbs_attr_spec uverbs_flow_action_esp_replay[] = {
 	[IB_UVERBS_FLOW_ACTION_ESP_REPLAY_NONE] = {
-		.ptr = {
+		{ .ptr = {
 			.type = UVERBS_ATTR_TYPE_PTR_IN,
 			/* No need to specify any data */
 			.len = 0,
-		}
+		} }
 	},
 	[IB_UVERBS_FLOW_ACTION_ESP_REPLAY_BMP] = {
-		.ptr = {
+		{ .ptr = {
 			.type = UVERBS_ATTR_TYPE_PTR_IN,
 			UVERBS_ATTR_STRUCT(struct ib_uverbs_flow_action_esp_replay_bmp, size),
 			.flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO,
-		}
+		} }
 	},
 };

View File

@@ -1656,6 +1656,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd,
 	if (!IS_ERR(mr)) {
 		mr->device  = pd->device;
 		mr->pd      = pd;
+		mr->dm      = NULL;
 		mr->uobject = NULL;
 		atomic_inc(&pd->usecnt);
 		mr->need_inval = false;

View File

@@ -315,7 +315,7 @@ static void advance_oldest_read(struct t4_wq *wq)
  * Deal with out-of-order and/or completions that complete
  * prior unsignalled WRs.
  */
-void c4iw_flush_hw_cq(struct c4iw_cq *chp)
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp)
 {
 	struct t4_cqe *hw_cqe, *swcqe, read_cqe;
 	struct c4iw_qp *qhp;
@@ -339,6 +339,13 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp)
 		if (qhp == NULL)
 			goto next_cqe;
 
+		if (flush_qhp != qhp) {
+			spin_lock(&qhp->lock);
+
+			if (qhp->wq.flushed == 1)
+				goto next_cqe;
+		}
+
 		if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE)
 			goto next_cqe;
 
@@ -390,6 +397,8 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp)
 next_cqe:
 		t4_hwcq_consume(&chp->cq);
 		ret = t4_next_hw_cqe(&chp->cq, &hw_cqe);
+		if (qhp && flush_qhp != qhp)
+			spin_unlock(&qhp->lock);
 	}
 }

View File

@@ -875,6 +875,11 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
 	rdev->status_page->db_off = 0;
 
+	init_completion(&rdev->rqt_compl);
+	init_completion(&rdev->pbl_compl);
+	kref_init(&rdev->rqt_kref);
+	kref_init(&rdev->pbl_kref);
+
 	return 0;
 err_free_status_page_and_wr_log:
 	if (c4iw_wr_log && rdev->wr_log)
@@ -893,13 +898,15 @@ destroy_resource:
 
 static void c4iw_rdev_close(struct c4iw_rdev *rdev)
 {
-	destroy_workqueue(rdev->free_workq);
 	kfree(rdev->wr_log);
 	c4iw_release_dev_ucontext(rdev, &rdev->uctx);
 	free_page((unsigned long)rdev->status_page);
 	c4iw_pblpool_destroy(rdev);
 	c4iw_rqtpool_destroy(rdev);
+	wait_for_completion(&rdev->pbl_compl);
+	wait_for_completion(&rdev->rqt_compl);
 	c4iw_ocqp_pool_destroy(rdev);
+	destroy_workqueue(rdev->free_workq);
 	c4iw_destroy_resource(&rdev->resource);
 }

View File

@@ -185,6 +185,10 @@ struct c4iw_rdev {
 	struct wr_log_entry *wr_log;
 	int wr_log_size;
 	struct workqueue_struct *free_workq;
+	struct completion rqt_compl;
+	struct completion pbl_compl;
+	struct kref rqt_kref;
+	struct kref pbl_kref;
 };
 
 static inline int c4iw_fatal_error(struct c4iw_rdev *rdev)
@@ -1049,7 +1053,7 @@ u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size);
 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size);
 u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size);
 void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size);
-void c4iw_flush_hw_cq(struct c4iw_cq *chp);
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp);
 void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count);
 int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp);
 int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count);

View File

@@ -1343,12 +1343,12 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
 	qhp->wq.flushed = 1;
 	t4_set_wq_in_error(&qhp->wq);
 
-	c4iw_flush_hw_cq(rchp);
+	c4iw_flush_hw_cq(rchp, qhp);
 	c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 	rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
 
 	if (schp != rchp)
-		c4iw_flush_hw_cq(schp);
+		c4iw_flush_hw_cq(schp, qhp);
 	sq_flushed = c4iw_flush_sq(qhp);
 
 	spin_unlock(&qhp->lock);

View File

@@ -260,12 +260,22 @@ u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size)
 		rdev->stats.pbl.cur += roundup(size, 1 << MIN_PBL_SHIFT);
 		if (rdev->stats.pbl.cur > rdev->stats.pbl.max)
 			rdev->stats.pbl.max = rdev->stats.pbl.cur;
+		kref_get(&rdev->pbl_kref);
 	} else
 		rdev->stats.pbl.fail++;
 	mutex_unlock(&rdev->stats.lock);
 	return (u32)addr;
 }
 
+static void destroy_pblpool(struct kref *kref)
+{
+	struct c4iw_rdev *rdev;
+
+	rdev = container_of(kref, struct c4iw_rdev, pbl_kref);
+	gen_pool_destroy(rdev->pbl_pool);
+	complete(&rdev->pbl_compl);
+}
+
 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 {
 	pr_debug("addr 0x%x size %d\n", addr, size);
@@ -273,6 +283,7 @@ void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 	rdev->stats.pbl.cur -= roundup(size, 1 << MIN_PBL_SHIFT);
 	mutex_unlock(&rdev->stats.lock);
 	gen_pool_free(rdev->pbl_pool, (unsigned long)addr, size);
+	kref_put(&rdev->pbl_kref, destroy_pblpool);
 }
 
 int c4iw_pblpool_create(struct c4iw_rdev *rdev)
@@ -310,7 +321,7 @@ int c4iw_pblpool_create(struct c4iw_rdev *rdev)
 
 void c4iw_pblpool_destroy(struct c4iw_rdev *rdev)
 {
-	gen_pool_destroy(rdev->pbl_pool);
+	kref_put(&rdev->pbl_kref, destroy_pblpool);
 }
 
 /*
@@ -331,12 +342,22 @@ u32 c4iw_rqtpool_alloc(struct c4iw_rdev *rdev, int size)
 		rdev->stats.rqt.cur += roundup(size << 6, 1 << MIN_RQT_SHIFT);
 		if (rdev->stats.rqt.cur > rdev->stats.rqt.max)
 			rdev->stats.rqt.max = rdev->stats.rqt.cur;
+		kref_get(&rdev->rqt_kref);
 	} else
 		rdev->stats.rqt.fail++;
 	mutex_unlock(&rdev->stats.lock);
 	return (u32)addr;
 }
 
+static void destroy_rqtpool(struct kref *kref)
+{
+	struct c4iw_rdev *rdev;
+
+	rdev = container_of(kref, struct c4iw_rdev, rqt_kref);
+	gen_pool_destroy(rdev->rqt_pool);
+	complete(&rdev->rqt_compl);
+}
+
 void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 {
 	pr_debug("addr 0x%x size %d\n", addr, size << 6);
@@ -344,6 +365,7 @@ void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 	rdev->stats.rqt.cur -= roundup(size << 6, 1 << MIN_RQT_SHIFT);
 	mutex_unlock(&rdev->stats.lock);
 	gen_pool_free(rdev->rqt_pool, (unsigned long)addr, size << 6);
+	kref_put(&rdev->rqt_kref, destroy_rqtpool);
 }
 
 int c4iw_rqtpool_create(struct c4iw_rdev *rdev)
@@ -380,7 +402,7 @@ int c4iw_rqtpool_create(struct c4iw_rdev *rdev)
 
 void c4iw_rqtpool_destroy(struct c4iw_rdev *rdev)
 {
-	gen_pool_destroy(rdev->rqt_pool);
+	kref_put(&rdev->rqt_kref, destroy_rqtpool);
 }
 
 /*

View File

@@ -412,7 +412,6 @@ static void hfi1_cleanup_sdma_notifier(struct hfi1_msix_entry *msix)
 static int get_irq_affinity(struct hfi1_devdata *dd,
 			    struct hfi1_msix_entry *msix)
 {
-	int ret;
 	cpumask_var_t diff;
 	struct hfi1_affinity_node *entry;
 	struct cpu_mask_set *set = NULL;
@@ -424,10 +423,6 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 	extra[0] = '\0';
 	cpumask_clear(&msix->mask);
 
-	ret = zalloc_cpumask_var(&diff, GFP_KERNEL);
-	if (!ret)
-		return -ENOMEM;
-
 	entry = node_affinity_lookup(dd->node);
 
 	switch (msix->type) {
@@ -458,6 +453,9 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 	 * finds its CPU here.
 	 */
 	if (cpu == -1 && set) {
+		if (!zalloc_cpumask_var(&diff, GFP_KERNEL))
+			return -ENOMEM;
+
 		if (cpumask_equal(&set->mask, &set->used)) {
 			/*
 			 * We've used up all the CPUs, bump up the generation
@@ -469,6 +467,8 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 		cpumask_andnot(diff, &set->mask, &set->used);
 		cpu = cpumask_first(diff);
 		cpumask_set_cpu(cpu, &set->used);
+
+		free_cpumask_var(diff);
 	}
 
 	cpumask_set_cpu(cpu, &msix->mask);
@@ -482,7 +482,6 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 		hfi1_setup_sdma_notifier(msix);
 	}
 
-	free_cpumask_var(diff);
 	return 0;
 }

View File

@@ -433,31 +433,43 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
 			       bool do_cnp)
 {
 	struct hfi1_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num);
+	struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
 	struct ib_other_headers *ohdr = pkt->ohdr;
 	struct ib_grh *grh = pkt->grh;
 	u32 rqpn = 0, bth1;
-	u16 pkey, rlid, dlid = ib_get_dlid(pkt->hdr);
+	u16 pkey;
+	u32 rlid, slid, dlid = 0;
 	u8 hdr_type, sc, svc_type;
 	bool is_mcast = false;
 
+	/* can be called from prescan */
 	if (pkt->etype == RHF_RCV_TYPE_BYPASS) {
 		is_mcast = hfi1_is_16B_mcast(dlid);
 		pkey = hfi1_16B_get_pkey(pkt->hdr);
 		sc = hfi1_16B_get_sc(pkt->hdr);
+		dlid = hfi1_16B_get_dlid(pkt->hdr);
+		slid = hfi1_16B_get_slid(pkt->hdr);
 		hdr_type = HFI1_PKT_TYPE_16B;
 	} else {
 		is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) &&
 			   (dlid != be16_to_cpu(IB_LID_PERMISSIVE));
 		pkey = ib_bth_get_pkey(ohdr);
 		sc = hfi1_9B_get_sc5(pkt->hdr, pkt->rhf);
+		dlid = ib_get_dlid(pkt->hdr);
+		slid = ib_get_slid(pkt->hdr);
 		hdr_type = HFI1_PKT_TYPE_9B;
 	}
 
 	switch (qp->ibqp.qp_type) {
+	case IB_QPT_UD:
+		dlid = ppd->lid;
+		rlid = slid;
+		rqpn = ib_get_sqpn(pkt->ohdr);
+		svc_type = IB_CC_SVCTYPE_UD;
+		break;
 	case IB_QPT_SMI:
 	case IB_QPT_GSI:
-	case IB_QPT_UD:
-		rlid = ib_get_slid(pkt->hdr);
+		rlid = slid;
 		rqpn = ib_get_sqpn(pkt->ohdr);
 		svc_type = IB_CC_SVCTYPE_UD;
 		break;
@@ -482,7 +494,6 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
 			       dlid, rlid, sc, grh);
 
 	if (!is_mcast && (bth1 & IB_BECN_SMASK)) {
-		struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
 		u32 lqpn = bth1 & RVT_QPN_MASK;
 		u8 sl = ibp->sc_to_sl[sc];

View File

@@ -1537,13 +1537,13 @@ void set_link_ipg(struct hfi1_pportdata *ppd);
 void process_becn(struct hfi1_pportdata *ppd, u8 sl, u32 rlid, u32 lqpn,
 		  u32 rqpn, u8 svc_type);
 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
-		u32 pkey, u32 slid, u32 dlid, u8 sc5,
+		u16 pkey, u32 slid, u32 dlid, u8 sc5,
 		const struct ib_grh *old_grh);
 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
-		    u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
+		    u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
 		    u8 sc5, const struct ib_grh *old_grh);
 typedef void (*hfi1_handle_cnp)(struct hfi1_ibport *ibp, struct rvt_qp *qp,
-				u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
+				u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
 				u8 sc5, const struct ib_grh *old_grh);
 
 #define PKEY_CHECK_INVALID -1
@@ -2437,7 +2437,7 @@ static inline void hfi1_make_16b_hdr(struct hfi1_16b_header *hdr,
 		((slid >> OPA_16B_SLID_SHIFT) << OPA_16B_SLID_HIGH_SHIFT);
 	lrh2 = (lrh2 & ~OPA_16B_DLID_MASK) |
 		((dlid >> OPA_16B_DLID_SHIFT) << OPA_16B_DLID_HIGH_SHIFT);
-	lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | (pkey << OPA_16B_PKEY_SHIFT);
+	lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | ((u32)pkey << OPA_16B_PKEY_SHIFT);
 	lrh2 = (lrh2 & ~OPA_16B_L4_MASK) | l4;
 
 	hdr->lrh[0] = lrh0;

View File

@@ -88,9 +88,9 @@
  * pio buffers per ctxt, etc.)  Zero means use one user context per CPU.
  */
 int num_user_contexts = -1;
-module_param_named(num_user_contexts, num_user_contexts, uint, S_IRUGO);
+module_param_named(num_user_contexts, num_user_contexts, int, 0444);
 MODULE_PARM_DESC(
-	num_user_contexts, "Set max number of user contexts to use");
+	num_user_contexts, "Set max number of user contexts to use (default: -1 will use the real (non-HT) CPU count)");
 
 uint krcvqs[RXE_NUM_DATA_VL];
 int krcvqsset;
@@ -1209,19 +1209,26 @@ static void finalize_asic_data(struct hfi1_devdata *dd,
 	kfree(ad);
 }
 
-static void __hfi1_free_devdata(struct kobject *kobj)
+/**
+ * hfi1_clean_devdata - cleans up per-unit data structure
+ * @dd: pointer to a valid devdata structure
+ *
+ * It cleans up all data structures set up by
+ * by hfi1_alloc_devdata().
+ */
+static void hfi1_clean_devdata(struct hfi1_devdata *dd)
 {
-	struct hfi1_devdata *dd =
-		container_of(kobj, struct hfi1_devdata, kobj);
 	struct hfi1_asic_data *ad;
 	unsigned long flags;
 
 	spin_lock_irqsave(&hfi1_devs_lock, flags);
-	idr_remove(&hfi1_unit_table, dd->unit);
-	list_del(&dd->list);
+	if (!list_empty(&dd->list)) {
+		idr_remove(&hfi1_unit_table, dd->unit);
+		list_del_init(&dd->list);
+	}
 	ad = release_asic_data(dd);
 	spin_unlock_irqrestore(&hfi1_devs_lock, flags);
-	if (ad)
-		finalize_asic_data(dd, ad);
+
+	finalize_asic_data(dd, ad);
 	free_platform_config(dd);
 	rcu_barrier(); /* wait for rcu callbacks to complete */
@@ -1229,10 +1236,22 @@ static void __hfi1_free_devdata(struct kobject *kobj)
 	free_percpu(dd->rcv_limit);
 	free_percpu(dd->send_schedule);
 	free_percpu(dd->tx_opstats);
+	dd->int_counter = NULL;
+	dd->rcv_limit = NULL;
+	dd->send_schedule = NULL;
+	dd->tx_opstats = NULL;
 	sdma_clean(dd, dd->num_sdma);
 	rvt_dealloc_device(&dd->verbs_dev.rdi);
 }
 
+static void __hfi1_free_devdata(struct kobject *kobj)
+{
+	struct hfi1_devdata *dd =
+		container_of(kobj, struct hfi1_devdata, kobj);
+
+	hfi1_clean_devdata(dd);
+}
+
 static struct kobj_type hfi1_devdata_type = {
 	.release = __hfi1_free_devdata,
 };
@@ -1265,6 +1284,8 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
 		return ERR_PTR(-ENOMEM);
 	dd->num_pports = nports;
 	dd->pport = (struct hfi1_pportdata *)(dd + 1);
+	dd->pcidev = pdev;
+	pci_set_drvdata(pdev, dd);
 
 	INIT_LIST_HEAD(&dd->list);
 	idr_preload(GFP_KERNEL);
@@ -1331,9 +1352,7 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
 	return dd;
 
 bail:
-	if (!list_empty(&dd->list))
-		list_del_init(&dd->list);
-	rvt_dealloc_device(&dd->verbs_dev.rdi);
+	hfi1_clean_devdata(dd);
 	return ERR_PTR(ret);
 }

View File

@@ -163,9 +163,6 @@ int hfi1_pcie_ddinit(struct hfi1_devdata *dd, struct pci_dev *pdev)
 	resource_size_t addr;
 	int ret = 0;
 
-	dd->pcidev = pdev;
-	pci_set_drvdata(pdev, dd);
-
 	addr = pci_resource_start(pdev, 0);
 	len = pci_resource_len(pdev, 0);

View File

@@ -199,6 +199,7 @@ void free_platform_config(struct hfi1_devdata *dd)
 {
 	/* Release memory allocated for eprom or fallback file read. */
 	kfree(dd->platform_config.data);
+	dd->platform_config.data = NULL;
 }
 
 void get_port_type(struct hfi1_pportdata *ppd)

View File

@@ -204,6 +204,8 @@ static void clean_i2c_bus(struct hfi1_i2c_bus *bus)
 
 void clean_up_i2c(struct hfi1_devdata *dd, struct hfi1_asic_data *ad)
 {
+	if (!ad)
+		return;
 	clean_i2c_bus(ad->i2c_bus0);
 	ad->i2c_bus0 = NULL;
 	clean_i2c_bus(ad->i2c_bus1);

View File

@@ -733,6 +733,20 @@ static inline void hfi1_make_ruc_bth(struct rvt_qp *qp,
 	ohdr->bth[2] = cpu_to_be32(bth2);
 }
 
+/**
+ * hfi1_make_ruc_header_16B - build a 16B header
+ * @qp: the queue pair
+ * @ohdr: a pointer to the destination header memory
+ * @bth0: bth0 passed in from the RC/UC builder
+ * @bth2: bth2 passed in from the RC/UC builder
+ * @middle: non zero implies indicates ahg "could" be used
+ * @ps: the current packet state
+ *
+ * This routine may disarm ahg under these situations:
+ * - packet needs a GRH
+ * - BECN needed
+ * - migration state not IB_MIG_MIGRATED
+ */
 static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 					    struct ib_other_headers *ohdr,
 					    u32 bth0, u32 bth2, int middle,
@@ -777,6 +791,12 @@ static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 	else
 		middle = 0;
 
+	if (qp->s_flags & RVT_S_ECN) {
+		qp->s_flags &= ~RVT_S_ECN;
+		/* we recently received a FECN, so return a BECN */
+		becn = true;
+		middle = 0;
+	}
 	if (middle)
 		build_ahg(qp, bth2);
 	else
@@ -784,11 +804,6 @@ static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 
 	bth0 |= pkey;
 	bth0 |= extra_bytes << 20;
-	if (qp->s_flags & RVT_S_ECN) {
-		qp->s_flags &= ~RVT_S_ECN;
-		/* we recently received a FECN, so return a BECN */
-		becn = true;
-	}
 	hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);
 
 	if (!ppd->lid)
@@ -806,6 +821,20 @@ static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 			  pkey, becn, 0, l4, priv->s_sc);
 }
 
+/**
+ * hfi1_make_ruc_header_9B - build a 9B header
+ * @qp: the queue pair
+ * @ohdr: a pointer to the destination header memory
+ * @bth0: bth0 passed in from the RC/UC builder
+ * @bth2: bth2 passed in from the RC/UC builder
+ * @middle: non zero implies indicates ahg "could" be used
+ * @ps: the current packet state
+ *
+ * This routine may disarm ahg under these situations:
+ * - packet needs a GRH
+ * - BECN needed
+ * - migration state not IB_MIG_MIGRATED
+ */
 static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
 					   struct ib_other_headers *ohdr,
 					   u32 bth0, u32 bth2, int middle,
@@ -839,6 +868,12 @@ static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
 	else
 		middle = 0;
 
+	if (qp->s_flags & RVT_S_ECN) {
+		qp->s_flags &= ~RVT_S_ECN;
+		/* we recently received a FECN, so return a BECN */
+		bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
+		middle = 0;
+	}
 	if (middle)
 		build_ahg(qp, bth2);
 	else
@@ -846,11 +881,6 @@ static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
 
 	bth0 |= pkey;
 	bth0 |= extra_bytes << 20;
-	if (qp->s_flags & RVT_S_ECN) {
-		qp->s_flags &= ~RVT_S_ECN;
-		/* we recently received a FECN, so return a BECN */
-		bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
-	}
 	hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);
 	hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh,
 			 lrh0,

View File

@@ -628,7 +628,7 @@ int hfi1_lookup_pkey_idx(struct hfi1_ibport *ibp, u16 pkey)
 }
 
 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
-		    u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
+		    u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
 		    u8 sc5, const struct ib_grh *old_grh)
 {
 	u64 pbc, pbc_flags = 0;
@@ -687,7 +687,7 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
 }
 
 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
-		u32 pkey, u32 slid, u32 dlid, u8 sc5,
+		u16 pkey, u32 slid, u32 dlid, u8 sc5,
 		const struct ib_grh *old_grh)
 {
 	u64 pbc, pbc_flags = 0;

View File

@@ -912,7 +912,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
 		obj_per_chunk = buf_chunk_size / obj_size;
 		num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk;
 		bt_chunk_num = bt_chunk_size / 8;
-		if (table->type >= HEM_TYPE_MTT)
+		if (type >= HEM_TYPE_MTT)
 			num_bt_l0 = bt_chunk_num;
 
 		table->hem = kcalloc(num_hem, sizeof(*table->hem),
@@ -920,7 +920,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
 		if (!table->hem)
 			goto err_kcalloc_hem_buf;
 
-		if (check_whether_bt_num_3(table->type, hop_num)) {
+		if (check_whether_bt_num_3(type, hop_num)) {
 			unsigned long num_bt_l1;
 
 			num_bt_l1 = (num_hem + bt_chunk_num - 1) /
@@ -939,8 +939,8 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
 				goto err_kcalloc_l1_dma;
 		}
 
-		if (check_whether_bt_num_2(table->type, hop_num) ||
-		    check_whether_bt_num_3(table->type, hop_num)) {
+		if (check_whether_bt_num_2(type, hop_num) ||
+		    check_whether_bt_num_3(type, hop_num)) {
 			table->bt_l0 = kcalloc(num_bt_l0, sizeof(*table->bt_l0),
 					       GFP_KERNEL);
 			if (!table->bt_l0)
@@ -1039,14 +1039,14 @@ void hns_roce_cleanup_hem_table(struct hns_roce_dev *hr_dev,
 void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev)
 {
 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table);
-	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table);
 	if (hr_dev->caps.trrl_entry_sz)
 		hns_roce_cleanup_hem_table(hr_dev,
 					   &hr_dev->qp_table.trrl_table);
+	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table);
 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.qp_table);
 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtpt_table);
-	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table);
 	if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE))
 		hns_roce_cleanup_hem_table(hr_dev,
 					   &hr_dev->mr_table.mtt_cqe_table);
+	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table);
 }

View File

@@ -71,6 +71,11 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			return -EINVAL;
 		}
 
+		if (wr->opcode == IB_WR_RDMA_READ) {
+			dev_err(hr_dev->dev, "Not support inline data!\n");
+			return -EINVAL;
+		}
+
 		for (i = 0; i < wr->num_sge; i++) {
 			memcpy(wqe, ((void *)wr->sg_list[i].addr),
 			       wr->sg_list[i].length);
@@ -148,7 +153,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		     ibqp->qp_type != IB_QPT_GSI &&
 		     ibqp->qp_type != IB_QPT_UD)) {
 		dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type);
-		*bad_wr = NULL;
+		*bad_wr = wr;
 		return -EOPNOTSUPP;
 	}
 
@@ -182,7 +187,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] =
 								      wr->wr_id;
 
-		owner_bit = ~(qp->sq.head >> ilog2(qp->sq.wqe_cnt)) & 0x1;
+		owner_bit =
+		       ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1);
 
 		/* Corresponding to the QP type, wqe process separately */
 		if (ibqp->qp_type == IB_QPT_GSI) {
@@ -456,6 +462,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		} else {
 			dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type);
 			spin_unlock_irqrestore(&qp->sq.lock, flags);
+			*bad_wr = wr;
 			return -EOPNOTSUPP;
 		}
 	}
@@ -2592,10 +2599,12 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
 	roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_SQPN_M,
 		       V2_QPC_BYTE_4_SQPN_S, 0);
 
+	if (attr_mask & IB_QP_DEST_QPN) {
 		roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
 			       V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn);
-	roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
-		       V2_QPC_BYTE_56_DQPN_S, 0);
+		roce_set_field(qpc_mask->byte_56_dqpn_err,
+			       V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0);
+	}
 	roce_set_field(context->byte_168_irrl_idx,
 		       V2_QPC_BYTE_168_SQ_SHIFT_BAK_M,
 		       V2_QPC_BYTE_168_SQ_SHIFT_BAK_S,
@@ -2650,8 +2659,7 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 		return -EINVAL;
 	}
 
-	if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) ||
-	    (attr_mask & IB_QP_PKEY_INDEX) || (attr_mask & IB_QP_QKEY)) {
+	if (attr_mask & IB_QP_ALT_PATH) {
 		dev_err(dev, "INIT2RTR attr_mask (0x%x) error\n", attr_mask);
 		return -EINVAL;
 	}
@@ -2800,10 +2808,12 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 			       V2_QPC_BYTE_140_RR_MAX_S, 0);
 	}
 
+	if (attr_mask & IB_QP_DEST_QPN) {
 		roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
 			       V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num);
-	roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
-		       V2_QPC_BYTE_56_DQPN_S, 0);
+		roce_set_field(qpc_mask->byte_56_dqpn_err,
+			       V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0);
+	}
 
 	/* Configure GID index */
 	port_num = rdma_ah_get_port_num(&attr->ah_attr);
@@ -2845,7 +2855,7 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 	if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD)
 		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
 			       V2_QPC_BYTE_24_MTU_S, IB_MTU_4096);
-	else
+	else if (attr_mask & IB_QP_PATH_MTU)
 		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
 			       V2_QPC_BYTE_24_MTU_S, attr->path_mtu);
 
@@ -2922,11 +2932,9 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
 		return -EINVAL;
 	}
 
-	/* If exist optional param, return error */
-	if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) ||
-	    (attr_mask & IB_QP_QKEY) || (attr_mask & IB_QP_PATH_MIG_STATE) ||
-	    (attr_mask & IB_QP_CUR_STATE) ||
-	    (attr_mask & IB_QP_MIN_RNR_TIMER)) {
+	/* Not support alternate path and path migration */
+	if ((attr_mask & IB_QP_ALT_PATH) ||
+	    (attr_mask & IB_QP_PATH_MIG_STATE)) {
 		dev_err(dev, "RTR2RTS attr_mask (0x%x)error\n", attr_mask);
 		return -EINVAL;
 	}
@@ -3161,7 +3169,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 		   (cur_state == IB_QPS_RTR && new_state == IB_QPS_ERR) ||
 		   (cur_state == IB_QPS_RTS && new_state == IB_QPS_ERR) ||
 		   (cur_state == IB_QPS_SQD && new_state == IB_QPS_ERR) ||
-		   (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR)) {
+		   (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR) ||
+		   (cur_state == IB_QPS_ERR && new_state == IB_QPS_ERR)) {
 		/* Nothing */
 		;
 	} else {
@@ -4478,7 +4487,7 @@ static int hns_roce_v2_create_eq(struct hns_roce_dev *hr_dev,
 	ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, eq->eqn, 0,
 				eq_cmd, HNS_ROCE_CMD_TIMEOUT_MSECS);
 	if (ret) {
-		dev_err(dev, "[mailbox cmd] creat eqc failed.\n");
+		dev_err(dev, "[mailbox cmd] create eqc failed.\n");
 		goto err_cmd_mbox;
 	}

View File

@@ -620,7 +620,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 					to_hr_ucontext(ib_pd->uobject->context),
 					ucmd.db_addr, &hr_qp->rdb);
 			if (ret) {
-				dev_err(dev, "rp record doorbell map failed!\n");
+				dev_err(dev, "rq record doorbell map failed!\n");
 				goto err_mtt;
 			}
 		}

View File

@@ -346,7 +346,7 @@ int mlx4_ib_umem_calc_optimal_mtt_size(struct ib_umem *umem, u64 start_va,
 	/* Add to the first block the misalignment that it suffers from. */
 	total_len += (first_block_start & ((1ULL << block_shift) - 1ULL));
 	last_block_end = current_block_start + current_block_len;
-	last_block_aligned_end = round_up(last_block_end, 1 << block_shift);
+	last_block_aligned_end = round_up(last_block_end, 1ULL << block_shift);
 	total_len += (last_block_aligned_end - last_block_end);
 
 	if (total_len & ((1ULL << block_shift) - 1ULL))

View File

@@ -673,7 +673,8 @@ static int set_qp_rss(struct mlx4_ib_dev *dev, struct mlx4_ib_rss *rss_ctx,
 					  MLX4_IB_RX_HASH_SRC_PORT_TCP |
 					  MLX4_IB_RX_HASH_DST_PORT_TCP |
 					  MLX4_IB_RX_HASH_SRC_PORT_UDP |
-					  MLX4_IB_RX_HASH_DST_PORT_UDP)) {
+					  MLX4_IB_RX_HASH_DST_PORT_UDP |
+					  MLX4_IB_RX_HASH_INNER)) {
 		pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n",
 			 ucmd->rx_hash_fields_mask);
 		return (-EOPNOTSUPP);

@@ -1,6 +1,7 @@
 config MLX5_INFINIBAND
 	tristate "Mellanox Connect-IB HCA support"
 	depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
+	depends on INFINIBAND_USER_ACCESS || INFINIBAND_USER_ACCESS=n
 	---help---
 	  This driver provides low-level InfiniBand support for
 	  Mellanox Connect-IB PCI Express host channel adapters (HCAs).
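The added line is the usual Kconfig idiom for an optional dependency: "depends on FOO || FOO=n" forbids only the one broken combination, a built-in (y) consumer with a modular (m) provider. A small user-space sketch of that rule (illustrative, not Kconfig internals):

    #include <stdio.h>

    enum tristate { N, M, Y };

    /* "depends on FOO || FOO=n": forbid consumer=y while provider=m */
    static int allowed(enum tristate consumer, enum tristate provider)
    {
            return !(consumer == Y && provider == M);
    }

    int main(void)
    {
            printf("y consumer, m provider: %s\n", allowed(Y, M) ? "ok" : "rejected");
            printf("m consumer, m provider: %s\n", allowed(M, M) ? "ok" : "rejected");
            printf("y consumer, n provider: %s\n", allowed(Y, N) ? "ok" : "rejected");
            return 0;
    }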

@@ -52,7 +52,6 @@
 #include <linux/mlx5/port.h>
 #include <linux/mlx5/vport.h>
 #include <linux/mlx5/fs.h>
-#include <linux/mlx5/fs_helpers.h>
 #include <linux/list.h>
 #include <rdma/ib_smi.h>
 #include <rdma/ib_umem.h>
@@ -180,7 +179,7 @@ static int mlx5_netdev_event(struct notifier_block *this,
 			if (rep_ndev == ndev)
 				roce->netdev = (event == NETDEV_UNREGISTER) ?
 					NULL : ndev;
-		} else if (ndev->dev.parent == &ibdev->mdev->pdev->dev) {
+		} else if (ndev->dev.parent == &mdev->pdev->dev) {
 			roce->netdev = (event == NETDEV_UNREGISTER) ?
 				NULL : ndev;
 		}
@@ -5427,9 +5426,7 @@ static void mlx5_ib_stage_cong_debugfs_cleanup(struct mlx5_ib_dev *dev)
 static int mlx5_ib_stage_uar_init(struct mlx5_ib_dev *dev)
 {
 	dev->mdev->priv.uar = mlx5_get_uars_page(dev->mdev);
-	if (!dev->mdev->priv.uar)
-		return -ENOMEM;
-	return 0;
+	return PTR_ERR_OR_ZERO(dev->mdev->priv.uar);
 }
 
 static void mlx5_ib_stage_uar_cleanup(struct mlx5_ib_dev *dev)
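The switch to PTR_ERR_OR_ZERO() implies that mlx5_get_uars_page() reports failure as an ERR_PTR-encoded pointer rather than NULL, so the old NULL test both missed the failure and replaced the real error with a made-up -ENOMEM. A self-contained sketch of the idiom (simplified re-implementation for illustration; the kernel's versions live in include/linux/err.h):

    #include <stdio.h>

    #define MAX_ERRNO 4095

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline int IS_ERR(const void *ptr)
    {
            return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }
    static inline long PTR_ERR_OR_ZERO(const void *ptr)
    {
            return IS_ERR(ptr) ? (long)ptr : 0;
    }

    int main(void)
    {
            void *uar = ERR_PTR(-12);       /* pretend the allocation failed with -ENOMEM */

            printf("%ld\n", PTR_ERR_OR_ZERO(uar));  /* -12: the real error is propagated */
            return 0;
    }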

@@ -866,25 +866,28 @@ static int mr_umem_get(struct ib_pd *pd, u64 start, u64 length,
 		       int *order)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
+	struct ib_umem *u;
 	int err;
 
-	*umem = ib_umem_get(pd->uobject->context, start, length,
-			    access_flags, 0);
-	err = PTR_ERR_OR_ZERO(*umem);
+	*umem = NULL;
+
+	u = ib_umem_get(pd->uobject->context, start, length, access_flags, 0);
+	err = PTR_ERR_OR_ZERO(u);
 	if (err) {
-		*umem = NULL;
-		mlx5_ib_err(dev, "umem get failed (%d)\n", err);
+		mlx5_ib_dbg(dev, "umem get failed (%d)\n", err);
 		return err;
 	}
 
-	mlx5_ib_cont_pages(*umem, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
+	mlx5_ib_cont_pages(u, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
 			   page_shift, ncont, order);
 	if (!*npages) {
 		mlx5_ib_warn(dev, "avoid zero region\n");
-		ib_umem_release(*umem);
+		ib_umem_release(u);
 		return -EINVAL;
 	}
 
+	*umem = u;
+
 	mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n",
 		    *npages, *ncont, *order, *page_shift);
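The reworked mr_umem_get() above builds everything in a local variable and stores through the *umem output parameter only once every check has passed, so callers can no longer be left holding a stale or already-released pointer. A self-contained user-space sketch of that publish-on-success pattern (illustrative names):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct region { size_t npages; };

    static int region_get(size_t npages, struct region **out)
    {
            struct region *r;

            *out = NULL;                    /* caller never sees a half-built object */

            r = malloc(sizeof(*r));
            if (!r)
                    return -ENOMEM;

            r->npages = npages;
            if (!r->npages) {               /* reject an empty region */
                    free(r);
                    return -EINVAL;
            }

            *out = r;                       /* publish only on full success */
            return 0;
    }

    int main(void)
    {
            struct region *r;

            printf("%d\n", region_get(0, &r));      /* -22, r stays NULL */
            printf("%d\n", region_get(4, &r));      /* 0, r now valid */
            free(r);
            return 0;
    }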
@@ -1458,13 +1461,12 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 	int access_flags = flags & IB_MR_REREG_ACCESS ?
 			    new_access_flags :
 			    mr->access_flags;
-	u64 addr = (flags & IB_MR_REREG_TRANS) ? virt_addr : mr->umem->address;
-	u64 len = (flags & IB_MR_REREG_TRANS) ? length : mr->umem->length;
 	int page_shift = 0;
 	int upd_flags = 0;
 	int npages = 0;
 	int ncont = 0;
 	int order = 0;
+	u64 addr, len;
 	int err;
 
 	mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
@@ -1472,6 +1474,17 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 
 	atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
 
+	if (!mr->umem)
+		return -EINVAL;
+
+	if (flags & IB_MR_REREG_TRANS) {
+		addr = virt_addr;
+		len = length;
+	} else {
+		addr = mr->umem->address;
+		len = mr->umem->length;
+	}
+
 	if (flags != IB_MR_REREG_PD) {
 		/*
 		 * Replace umem. This needs to be done whether or not UMR is
@@ -1479,6 +1492,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 		 */
 		flags |= IB_MR_REREG_TRANS;
 		ib_umem_release(mr->umem);
+		mr->umem = NULL;
 		err = mr_umem_get(pd, addr, len, access_flags, &mr->umem,
 				  &npages, &page_shift, &ncont, &order);
 		if (err)

@@ -259,7 +259,11 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
 	} else {
 		if (ucmd) {
 			qp->rq.wqe_cnt = ucmd->rq_wqe_count;
+			if (ucmd->rq_wqe_shift > BITS_PER_BYTE * sizeof(ucmd->rq_wqe_shift))
+				return -EINVAL;
 			qp->rq.wqe_shift = ucmd->rq_wqe_shift;
+			if ((1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) < qp->wq_sig)
+				return -EINVAL;
 			qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig;
 			qp->rq.max_post = qp->rq.wqe_cnt;
 		} else {
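The two added checks validate user-controlled sizing before it is trusted: the first bounds rq_wqe_shift before it is used as a shift count, the second makes sure the resulting WQE still has room once the signature slot is subtracted, so rq.max_gs cannot go negative. A minimal user-space sketch of the same checks (the segment size is a made-up stand-in, and the sketch uses the stricter "at least the width of int" bound):

    #include <errno.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SEG_SIZE 16u    /* made-up stand-in for sizeof(struct mlx5_wqe_data_seg) */

    static int check_rq_wqe(uint32_t wqe_shift, uint32_t wq_sig)
    {
            if (wqe_shift >= sizeof(int) * CHAR_BIT)        /* "1 << 32" would be undefined */
                    return -EINVAL;
            if ((1u << wqe_shift) / SEG_SIZE < wq_sig)      /* no room left for data segments */
                    return -EINVAL;
            return 0;
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   check_rq_wqe(6, 0),      /* ok */
                   check_rq_wqe(200, 0),    /* shift out of range */
                   check_rq_wqe(2, 1));     /* smaller than the signature slot */
            return 0;
    }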
@@ -2451,18 +2455,18 @@ enum {
 static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
 {
-	if (rate == IB_RATE_PORT_CURRENT) {
+	if (rate == IB_RATE_PORT_CURRENT)
 		return 0;
-	} else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) {
+
+	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS)
 		return -EINVAL;
-	} else {
-		while (rate != IB_RATE_2_5_GBPS &&
-		       !(1 << (rate + MLX5_STAT_RATE_OFFSET) &
-			 MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
-			--rate;
-	}
 
-	return rate + MLX5_STAT_RATE_OFFSET;
+	while (rate != IB_RATE_PORT_CURRENT &&
+	       !(1 << (rate + MLX5_STAT_RATE_OFFSET) &
+		 MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
+		--rate;
+
+	return rate ? rate + MLX5_STAT_RATE_OFFSET : rate;
 }
 
 static int modify_raw_packet_eth_prio(struct mlx5_core_dev *dev,
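The rewritten ib_rate_to_mlx5() no longer stops its downward search at IB_RATE_2_5_GBPS; if nothing at or below the requested rate is supported it walks all the way to IB_RATE_PORT_CURRENT (0) and returns 0, meaning no static rate limit. A user-space sketch of that walk (offset and capability mask are made up):

    #include <stdio.h>

    #define RATE_OFFSET 5           /* stand-in for MLX5_STAT_RATE_OFFSET */

    static unsigned int pick_rate(unsigned int rate, unsigned int supported_mask)
    {
            /* walk down until a supported rate is found or we reach 0 (port current) */
            while (rate != 0 &&
                   !((1u << (rate + RATE_OFFSET)) & supported_mask))
                    --rate;
            return rate ? rate + RATE_OFFSET : rate;
    }

    int main(void)
    {
            unsigned int mask = 1u << (3 + RATE_OFFSET);    /* only "rate 3" supported */

            printf("%u\n", pick_rate(6, mask));     /* falls back to 3 + RATE_OFFSET */
            printf("%u\n", pick_rate(2, mask));     /* nothing supported below: 0 */
            return 0;
    }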

@@ -461,7 +461,7 @@ static bool nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
 /**
  * nes_netdev_start_xmit
  */
-static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
+static netdev_tx_t nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	struct nes_vnic *nesvnic = netdev_priv(netdev);
 	struct nes_device *nesdev = nesvnic->nesdev;
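Both this hunk and the ipoib one further down only change the return type: the net_device_ops .ndo_start_xmit hook is declared to return netdev_tx_t rather than int. A sketch of the expected shape (enum values paraphrased from include/linux/netdevice.h; the function body is illustrative):

    struct sk_buff;
    struct net_device;

    typedef enum netdev_tx {
            NETDEV_TX_OK   = 0x00,  /* driver took care of the packet */
            NETDEV_TX_BUSY = 0x10,  /* driver queue full, ask the stack to retry */
    } netdev_tx_t;

    static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                          struct net_device *netdev)
    {
            /* ... hand skb to the hardware queue here ... */
            return NETDEV_TX_OK;
    }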

@@ -390,7 +390,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_RC_SEND_ONLY_INV",
 		.mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK
 				| RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK
-				| RXE_END_MASK,
+				| RXE_END_MASK | RXE_START_MASK,
 		.length = RXE_BTH_BYTES + RXE_IETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,

@@ -728,7 +728,6 @@ next_wqe:
 		rollback_state(wqe, qp, &rollback_wqe, rollback_psn);
 
 		if (ret == -EAGAIN) {
-			kfree_skb(skb);
 			rxe_run_task(&qp->req.task, 1);
 			goto exit;
 		}

@@ -742,7 +742,6 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb);
 	if (err) {
 		pr_err("Failed sending RDMA reply.\n");
-		kfree_skb(skb);
 		return RESPST_ERR_RNR;
 	}
@@ -954,10 +953,8 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 	}
 
 	err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb);
-	if (err) {
+	if (err)
 		pr_err_ratelimited("Failed sending ack\n");
-		kfree_skb(skb);
-	}
 
 err1:
 	return err;
@@ -1141,7 +1138,6 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
 		if (rc) {
 			pr_err("Failed resending result. This flow is not handled - skb ignored\n");
 			rxe_drop_ref(qp);
-			kfree_skb(skb_copy);
 			rc = RESPST_CLEANUP;
 			goto out;
 		}
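Each kfree_skb() dropped in these rxe hunks (and in the requester hunk above) sat after a call that, as the hunks imply, already consumes the skb on both the success and the failure path, so the extra free read as a double free. A user-space sketch of that consume-on-send ownership rule (illustrative, not rxe code):

    #include <stdio.h>
    #include <stdlib.h>

    struct buf { char data[64]; };

    /* Consumes 'b' whether or not the send succeeds, as the lower layer does here. */
    static int send_and_consume(struct buf *b, int simulate_error)
    {
            /* ... would hand b->data to the wire here ... */
            free(b);
            return simulate_error ? -1 : 0;
    }

    int main(void)
    {
            struct buf *b = calloc(1, sizeof(*b));

            if (!b)
                    return 1;
            if (send_and_consume(b, 1))
                    fprintf(stderr, "send failed\n");
            /* correct: no free(b) here, the callee already released it */
            return 0;
    }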

@@ -1094,7 +1094,7 @@ drop_and_unlock:
 	spin_unlock_irqrestore(&priv->lock, flags);
 }
 
-static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ipoib_dev_priv *priv = ipoib_priv(dev);
 	struct rdma_netdev *rn = netdev_priv(dev);

@@ -1,6 +1,6 @@
 config INFINIBAND_SRP
 	tristate "InfiniBand SCSI RDMA Protocol"
-	depends on SCSI
+	depends on SCSI && INFINIBAND_ADDR_TRANS
 	select SCSI_SRP_ATTRS
 	---help---
 	  Support for the SCSI RDMA Protocol over InfiniBand. This

@@ -1,6 +1,6 @@
 config INFINIBAND_SRPT
 	tristate "InfiniBand SCSI RDMA Protocol target support"
-	depends on INFINIBAND && TARGET_CORE
+	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && TARGET_CORE
 	---help---
 	  Support for the SCSI RDMA Protocol (SRP) Target driver. The

@@ -27,7 +27,7 @@ config NVME_FABRICS
 
 config NVME_RDMA
 	tristate "NVM Express over Fabrics RDMA host driver"
-	depends on INFINIBAND && BLOCK
+	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK
 	select NVME_CORE
 	select NVME_FABRICS
 	select SG_POOL

@@ -27,7 +27,7 @@ config NVME_TARGET_LOOP
 
 config NVME_TARGET_RDMA
 	tristate "NVMe over Fabrics RDMA target support"
-	depends on INFINIBAND
+	depends on INFINIBAND && INFINIBAND_ADDR_TRANS
 	depends on NVME_TARGET
 	select SGL_ALLOC
 	help

@@ -197,7 +197,7 @@ config CIFS_SMB311
 
 config CIFS_SMB_DIRECT
 	bool "SMB Direct support (Experimental)"
-	depends on CIFS=m && INFINIBAND || CIFS=y && INFINIBAND=y
+	depends on CIFS=m && INFINIBAND && INFINIBAND_ADDR_TRANS || CIFS=y && INFINIBAND=y && INFINIBAND_ADDR_TRANS=y
 	help
 	  Enables SMB Direct experimental support for SMB 3.0, 3.02 and 3.1.1.
 	  SMB Direct allows transferring SMB packets over RDMA. If unsure,

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
 /*
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2008 Oracle. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2006 Chelsio, Inc. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2009-2010 Chelsio, Inc. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Hisilicon Limited.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005 Intel Corporation. All rights reserved.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2017-2018, Mellanox Technologies inc. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2004 Topspin Communications. All rights reserved.
  * Copyright (c) 2005 Voltaire, Inc. All rights reserved.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Intel Corporation. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2007 Cisco Systems, Inc. All rights reserved.
  * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  * Copyright (c) 2005 Topspin Communications. All rights reserved.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /* QLogic qedr NIC Driver
  * Copyright (c) 2015-2016 QLogic Corporation
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005-2006 Intel Corporation. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Mellanox Technologies, LTD. All rights reserved.
 *

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
 *