/*
IB/hfi1: Rework fault injection machinery
The packet fault injection code present in the HFI1 driver had some
issues which not only fragmented the code but also created user
confusion. Furthermore, it suffered from the following issues:
1. The fault_packet method only worked for received packets. This
meant that the only fault injection mode available for sent
packets was fault_opcode, which did not allow for random packet
drops on all egressing packets.
2. The mask available for the fault_opcode mode did not really work
due to the fact that the opcode values are not bits in a bitmask but
rather sequential integer values. Creating an opcode/mask pair that
would successfully capture a set of packets was nearly impossible.
3. The code was fragmented and used too many debugfs entries to
operate and control. This was confusing to users.
4. It did not allow filtering fault injection on a per-direction basis -
egress vs. ingress.
In order to improve or fix the above issues, the following changes have
been made:
1. The fault injection methods have been combined into a single fault
injection facility. As such, the fault injection has been plugged
into both the send and receive code paths. Regardless of the method
used, fault injection operates on both egress and ingress packets.
2. The type of fault injection - by packet or by opcode - is now controlled
by changing the boolean value of the file "opcode_mode". When the value
is set to True, fault injection is done by opcode. Otherwise, by
packet.
3. The masking ability has been removed in favor of a bitmap that holds
opcodes of interest (one bit per opcode, a total of 256 bits). This
works in tandem with the "opcode_mode" value. When the value of
"opcode_mode" is False, this bitmap is ignored. When the value is
True, the bitmap lists all opcodes to be considered for fault injection.
By default, the bitmap is empty. When the user wants to filter by opcode,
the user sets the corresponding bit in the bitmap by echo'ing the bit
position into the 'opcodes' file. This gets around the issue that the set
of opcodes does not lend itself to effective masks and allows for extremely
fine-grained filtering by opcode.
4. The fault_packet and fault_opcode methods have been combined. Hence, there
is only one debugfs directory controlling the entire operation of the
fault injection machinery. This reduces the number of debugfs entries
and provides a more unified user experience.
5. A new control file - "direction" - is provided to allow the user to
control the direction of packets which are subject to fault injection.
6. A new control file - "skip_usec" - is added to allow the user
to specify a "timeout" during which no fault injection will occur.
In addition, the following bug fixes have been applied:
1. The fault injection code has been split into its own header and source
files. This was done to better organize the code and support conditional
compilation without littering the code with #ifdef's.
2. The method by which the TX PIO packets were being marked for drop
conflicted with the way send contexts were being set up. As a result,
the send context was repeatedly being reset.
3. The fault injection only makes sense when the user can control it
through the debugfs entries. However, a kernel configuration can
enable fault injection but keep the fault injection debugfs entries
disabled. Therefore, it makes sense that the HFI fault injection
code depends on both.
4. Error suppression did not take into account the method by which PIO
packets were being dropped. Therefore, even with error suppression
turned on, errors would still be displayed to the screen. A large
enough packet drop percentage would cause the kernel to crash because
the driver would be stuck printing errors.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
 * Copyright(c) 2015 - 2018 Intel Corporation.
 *
 * This file is provided under a dual BSD/GPLv2 license.  When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * BSD LICENSE
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *  - Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  - Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *  - Neither the name of Intel Corporation nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 */

#ifndef HFI1_VERBS_H
#define HFI1_VERBS_H

#include <linux/types.h>
#include <linux/seqlock.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/kref.h>
#include <linux/workqueue.h>
#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/slab.h>
#include <rdma/ib_pack.h>
#include <rdma/ib_user_verbs.h>
#include <rdma/ib_mad.h>
#include <rdma/ib_hdrs.h>
#include <rdma/rdma_vt.h>
#include <rdma/rdmavt_qp.h>
#include <rdma/rdmavt_cq.h>

struct hfi1_ctxtdata;
struct hfi1_pportdata;
struct hfi1_devdata;
struct hfi1_packet;

#include "iowait.h"
#include "tid_rdma.h"
#include "opfn.h"

#define HFI1_MAX_RDMA_ATOMIC	16

/*
 * Increment this value if any changes that break userspace ABI
 * compatibility are made.
 */
#define HFI1_UVERBS_ABI_VERSION	2

/* IB Performance Manager status values */
#define IB_PMA_SAMPLE_STATUS_DONE	0x00
#define IB_PMA_SAMPLE_STATUS_STARTED	0x01
#define IB_PMA_SAMPLE_STATUS_RUNNING	0x02

/* Mandatory IB performance counter select values. */
#define IB_PMA_PORT_XMIT_DATA	cpu_to_be16(0x0001)
#define IB_PMA_PORT_RCV_DATA	cpu_to_be16(0x0002)
#define IB_PMA_PORT_XMIT_PKTS	cpu_to_be16(0x0003)
#define IB_PMA_PORT_RCV_PKTS	cpu_to_be16(0x0004)
#define IB_PMA_PORT_XMIT_WAIT	cpu_to_be16(0x0005)

#define HFI1_VENDOR_IPG		cpu_to_be16(0xFFA0)

#define IB_DEFAULT_GID_PREFIX	cpu_to_be64(0xfe80000000000000ULL)

#define OPA_BTH_MIG_REQ		BIT(31)

#define RC_OP(x) IB_OPCODE_RC_##x
#define UC_OP(x) IB_OPCODE_UC_##x

/* flags passed by hfi1_ib_rcv() */
enum {
	HFI1_HAS_GRH = (1 << 0),
};

#define LRH_16B_BYTES (FIELD_SIZEOF(struct hfi1_16b_header, lrh))
#define LRH_16B_DWORDS (LRH_16B_BYTES / sizeof(u32))
#define LRH_9B_BYTES (FIELD_SIZEOF(struct ib_header, lrh))
#define LRH_9B_DWORDS (LRH_9B_BYTES / sizeof(u32))

/* 24Bits for qpn, upper 8Bits reserved */
struct opa_16b_mgmt {
	__be32 dest_qpn;
	__be32 src_qpn;
};

struct hfi1_16b_header {
	u32 lrh[4];
	union {
		struct {
			struct ib_grh grh;
			struct ib_other_headers oth;
		} l;
		struct ib_other_headers oth;
		struct opa_16b_mgmt mgmt;
	} u;
} __packed;

struct hfi1_opa_header {
	union {
		struct ib_header ibh; /* 9B header */
		struct hfi1_16b_header opah; /* 16B header */
	};
	u8 hdr_type; /* 9B or 16B */
} __packed;

struct hfi1_ahg_info {
	u32 ahgdesc[2];
	u16 tx_flags;
	u8 ahgcount;
	u8 ahgidx;
};

struct hfi1_sdma_header {
	__le64 pbc;
	struct hfi1_opa_header hdr;
} __packed;

/*
 * hfi1 specific data structures that will be hidden from rvt after the queue
 * pair is made common
 */
struct hfi1_qp_priv {
	struct hfi1_ahg_info *s_ahg;		/* ahg info for next header */
	struct sdma_engine *s_sde;		/* current sde */
	struct send_context *s_sendcontext;	/* current sendcontext */
	struct hfi1_ctxtdata *rcd;		/* QP's receive context */
	struct page **pages;			/* for TID page scan */
	u32 tid_enqueue;			/* saved when tid waited */
	u8 s_sc;				/* SC[0..4] for next packet */
	struct iowait s_iowait;
	struct timer_list s_tid_timer;		/* for timing tid wait */
	struct timer_list s_tid_retry_timer;	/* for timing tid ack */
	struct list_head tid_wait;		/* for queueing tid space */
	struct hfi1_opfn_data opfn;
	struct tid_flow_state flow_state;
	struct tid_rdma_qp_params tid_rdma;
	struct rvt_qp *owner;
	u8 hdr_type; /* 9B or 16B */
	atomic_t n_tid_requests;		/* # of sent TID RDMA requests */
	unsigned long tid_timer_timeout_jiffies;
	unsigned long tid_retry_timeout_jiffies;

	/* variables for the TID RDMA SE state machine */
	u8 s_state;
	u8 s_retry;
	u8 rnr_nak_state;			/* RNR NAK state */
	u8 s_nak_state;
	u32 s_nak_psn;
	u32 s_flags;
	u32 s_tid_cur;
	u32 s_tid_head;
	u32 s_tid_tail;
	u32 r_tid_head;		/* Most recently added TID RDMA request */
	u32 r_tid_tail;		/* the last completed TID RDMA request */
	u32 r_tid_ack;		/* the TID RDMA request to be ACK'ed */
	u32 r_tid_alloc;	/* Request for which we are allocating resources */
	u32 pending_tid_w_segs;	/* Num of pending tid write segments */
	u32 alloc_w_segs;	/* Number of segments for which write */
				/* resources have been allocated for this QP */

	/* For TID RDMA READ */
	u32 tid_r_reqs;		/* Num of tid reads requested */
	u32 tid_r_comp;		/* Num of tid reads completed */
	u32 pending_tid_r_segs;	/* Num of pending tid read segments */
	u16 pkts_ps;		/* packets per segment */
	u8 timeout_shift;	/* account for number of packets per segment */

	u32 r_next_psn_kdeth;
	u32 r_next_psn_kdeth_save;
	u32 s_resync_psn;
	u8 sync_pt;		/* Set when QP reaches sync point */
	u8 resync;
};

#define HFI1_QP_WQE_INVALID	((u32)-1)

struct hfi1_swqe_priv {
	struct tid_rdma_request tid_req;
	struct rvt_sge_state ss;	/* Used for TID RDMA READ Request */
};

struct hfi1_ack_priv {
	struct rvt_sge_state ss;	/* used for TID WRITE RESP */
	struct tid_rdma_request tid_req;
};

/*
 * This structure is used to hold commonly looked-up and computed values during
 * the send engine progress.
 */
struct iowait_work;
struct hfi1_pkt_state {
	struct hfi1_ibdev *dev;
	struct hfi1_ibport *ibp;
	struct hfi1_pportdata *ppd;
	struct verbs_txreq *s_txreq;
	struct iowait_work *wait;
	unsigned long flags;
	unsigned long timeout;
	unsigned long timeout_int;
	int cpu;
	u8 opcode;
	bool in_thread;
	bool pkts_sent;
};

#define HFI1_PSN_CREDIT  16

struct hfi1_opcode_stats {
	u64 n_packets;		/* number of packets */
	u64 n_bytes;		/* total number of bytes */
};

struct hfi1_opcode_stats_perctx {
	struct hfi1_opcode_stats stats[256];
};

static inline void inc_opstats(
	u32 tlen,
	struct hfi1_opcode_stats *stats)
{
#ifdef CONFIG_DEBUG_FS
	stats->n_bytes += tlen;
	stats->n_packets++;
#endif
}

struct hfi1_ibport {
	struct rvt_qp __rcu *qp[2];
	struct rvt_ibport rvp;

	/* the first 16 entries are sl_to_vl for !OPA */
	u8 sl_to_sc[32];
	u8 sc_to_sl[32];
};

struct hfi1_ibdev {
	struct rvt_dev_info rdi; /* Must be first */

	/* QP numbers are shared by all IB ports */
	/* protect txwait list */
	seqlock_t txwait_lock ____cacheline_aligned_in_smp;
	struct list_head txwait;	/* list for wait verbs_txreq */
	struct list_head memwait;	/* list for wait kernel memory */
	struct kmem_cache *verbs_txreq_cache;
	u64 n_txwait;
	u64 n_kmem_wait;
	u64 n_tidwait;

	/* protect iowait lists */
	seqlock_t iowait_lock ____cacheline_aligned_in_smp;
	u64 n_piowait;
	u64 n_piodrain;
	struct timer_list mem_timer;

#ifdef CONFIG_DEBUG_FS
	/* per HFI debugfs */
	struct dentry *hfi1_ibdev_dbg;
	/* per HFI symlinks to above */
	struct dentry *hfi1_ibdev_link;
#ifdef CONFIG_FAULT_INJECTION
	struct fault *fault;
#endif
#endif
};

static inline struct hfi1_ibdev *to_idev(struct ib_device *ibdev)
{
	struct rvt_dev_info *rdi;

	rdi = container_of(ibdev, struct rvt_dev_info, ibdev);
	return container_of(rdi, struct hfi1_ibdev, rdi);
}

static inline struct rvt_qp *iowait_to_qp(struct iowait *s_iowait)
{
	struct hfi1_qp_priv *priv;

	priv = container_of(s_iowait, struct hfi1_qp_priv, s_iowait);
	return priv->owner;
}

/*
 * This must be called with s_lock held.
 */
void hfi1_bad_pkey(struct hfi1_ibport *ibp, u32 key, u32 sl,
		   u32 qp1, u32 qp2, u32 lid1, u32 lid2);
void hfi1_cap_mask_chg(struct rvt_dev_info *rdi, u8 port_num);
void hfi1_sys_guid_chg(struct hfi1_ibport *ibp);
void hfi1_node_desc_chg(struct hfi1_ibport *ibp);
int hfi1_process_mad(struct ib_device *ibdev, int mad_flags, u8 port,
		     const struct ib_wc *in_wc, const struct ib_grh *in_grh,
		     const struct ib_mad_hdr *in_mad, size_t in_mad_size,
		     struct ib_mad_hdr *out_mad, size_t *out_mad_size,
		     u16 *out_mad_pkey_index);

/*
 * The PSN_MASK and PSN_SHIFT allow for
 * 1) comparing two PSNs
 * 2) returning the PSN with any upper bits masked
 * 3) returning the difference between two PSNs
 *
 * The number of significant bits in the PSN must
 * necessarily be at least one bit less than
 * the container holding the PSN.
 */
#define PSN_MASK 0x7FFFFFFF
#define PSN_SHIFT 1
#define PSN_MODIFY_MASK 0xFFFFFF

/*
 * Compare two PSNs
 * Returns an integer <, ==, or > than zero.
 */
static inline int cmp_psn(u32 a, u32 b)
{
	return (((int)a) - ((int)b)) << PSN_SHIFT;
}

/*
 * Return masked PSN
 */
static inline u32 mask_psn(u32 a)
{
	return a & PSN_MASK;
}

/*
 * Return delta between two PSNs
 */
static inline u32 delta_psn(u32 a, u32 b)
{
	return (((int)a - (int)b) << PSN_SHIFT) >> PSN_SHIFT;
}

static inline struct tid_rdma_request *wqe_to_tid_req(struct rvt_swqe *wqe)
{
	return &((struct hfi1_swqe_priv *)wqe->priv)->tid_req;
}

static inline struct tid_rdma_request *ack_to_tid_req(struct rvt_ack_entry *e)
{
	return &((struct hfi1_ack_priv *)e->priv)->tid_req;
}

/*
 * Look through all the active flows for a TID RDMA request and find
 * the one (if it exists) that contains the specified PSN.
 */
static inline u32 __full_flow_psn(struct flow_state *state, u32 psn)
{
	return mask_psn((state->generation << HFI1_KDETH_BTH_SEQ_SHIFT) |
			(psn & HFI1_KDETH_BTH_SEQ_MASK));
}

static inline u32 full_flow_psn(struct tid_rdma_flow *flow, u32 psn)
{
	return __full_flow_psn(&flow->flow_state, psn);
}

struct verbs_txreq;
void hfi1_put_txreq(struct verbs_txreq *tx);

int hfi1_verbs_send(struct rvt_qp *qp, struct hfi1_pkt_state *ps);

void hfi1_cnp_rcv(struct hfi1_packet *packet);

void hfi1_uc_rcv(struct hfi1_packet *packet);

void hfi1_rc_rcv(struct hfi1_packet *packet);

void hfi1_rc_hdrerr(
	struct hfi1_ctxtdata *rcd,
	struct hfi1_packet *packet,
	struct rvt_qp *qp);

u8 ah_to_sc(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr);

void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah);

void hfi1_ud_rcv(struct hfi1_packet *packet);

int hfi1_lookup_pkey_idx(struct hfi1_ibport *ibp, u16 pkey);

void hfi1_migrate_qp(struct rvt_qp *qp);

int hfi1_check_modify_qp(struct rvt_qp *qp, struct ib_qp_attr *attr,
			 int attr_mask, struct ib_udata *udata);

void hfi1_modify_qp(struct rvt_qp *qp, struct ib_qp_attr *attr,
		    int attr_mask, struct ib_udata *udata);
void hfi1_restart_rc(struct rvt_qp *qp, u32 psn, int wait);
int hfi1_setup_wqe(struct rvt_qp *qp, struct rvt_swqe *wqe,
		   bool *call_send);

extern const u32 rc_only_opcode;
extern const u32 uc_only_opcode;

int hfi1_ruc_check_hdr(struct hfi1_ibport *ibp, struct hfi1_packet *packet);

u32 hfi1_make_grh(struct hfi1_ibport *ibp, struct ib_grh *hdr,
		  const struct ib_global_route *grh, u32 hwords, u32 nwords);

void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
			  u32 bth0, u32 bth1, u32 bth2, int middle,
			  struct hfi1_pkt_state *ps);

void _hfi1_do_send(struct work_struct *work);

void hfi1_do_send_from_rvt(struct rvt_qp *qp);

void hfi1_do_send(struct rvt_qp *qp, bool in_thread);

void hfi1_send_rc_ack(struct hfi1_packet *packet, bool is_fecn);

int hfi1_make_rc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps);

int hfi1_make_uc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps);

int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps);

int hfi1_register_ib_device(struct hfi1_devdata *);

void hfi1_unregister_ib_device(struct hfi1_devdata *);

void hfi1_kdeth_eager_rcv(struct hfi1_packet *packet);

void hfi1_kdeth_expected_rcv(struct hfi1_packet *packet);

void hfi1_ib_rcv(struct hfi1_packet *packet);

void hfi1_16B_rcv(struct hfi1_packet *packet);

unsigned hfi1_get_npkeys(struct hfi1_devdata *);

int hfi1_verbs_send_dma(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
			u64 pbc);

int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
			u64 pbc);

static inline bool opa_bth_is_migration(struct ib_other_headers *ohdr)
{
	return ohdr->bth[1] & cpu_to_be32(OPA_BTH_MIG_REQ);
}

void hfi1_wait_kmem(struct rvt_qp *qp);

static inline void hfi1_trdma_send_complete(struct rvt_qp *qp,
					    struct rvt_swqe *wqe,
					    enum ib_wc_status status)
{
	trdma_clean_swqe(qp, wqe);
	rvt_send_complete(qp, wqe, status);
}

extern const enum ib_wc_opcode ib_hfi1_wc_opcode[];

extern const u8 hdr_len_by_opcode[];

extern const int ib_rvt_state_ops[];

extern __be64 ib_hfi1_sys_image_guid;	/* in network order */

extern unsigned int hfi1_max_cqes;

extern unsigned int hfi1_max_cqs;

extern unsigned int hfi1_max_qp_wrs;

extern unsigned int hfi1_max_qps;

extern unsigned int hfi1_max_sges;

extern unsigned int hfi1_max_mcast_grps;

extern unsigned int hfi1_max_mcast_qp_attached;

extern unsigned int hfi1_max_srqs;

extern unsigned int hfi1_max_srq_sges;

extern unsigned int hfi1_max_srq_wrs;

extern unsigned short piothreshold;

extern const u32 ib_hfi1_rnr_table[];

#endif /* HFI1_VERBS_H */
|