Commit Graph

10 Commits

Author SHA1 Message Date
James Smart
81b96eda5f scsi: lpfc: Expand WQE capability of every NVME hardware queue
Hardware queues are a fast staging area to push commands into the
adapter.  The adapter should drain them extremely quickly. However,
under heavy io load, the host cpu can push commands faster than the
adapter's drain rate, causing the driver to return "resource busy"
status for commands.

Enlarge the hardware queue (wq & cq) to support a larger number of
queue entries (4x the prior size) before backpressure occurs.
Enlarging the queue requires larger contiguous buffers (16k) per
logical page for the hardware. Calling sequences that assumed 4K page
sizes now must pass the page size as a parameter, and a new version
of an adapter command that can vary the page size is required.
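
Roughly, the shape of the change (a hedged sketch; the helper
signature and define names/values here are assumptions for
illustration, not the exact driver API):

/* Hypothetical names and values, for illustration only. */
#define LPFC_DEFAULT_PAGE_SIZE	4096	/* prior fixed logical page */
#define LPFC_EXPANDED_PAGE_SIZE	16384	/* 16k pages -> 4x the entries */

/* Allocation that previously assumed 4K pages now takes the page
 * size explicitly so a wq/cq can be built from 16k pages. */
struct lpfc_queue *lpfc_sli4_queue_alloc(struct lpfc_hba *phba,
					 uint32_t page_size,
					 uint32_t entry_size,
					 uint32_t entry_count);

/* e.g., in queue setup (WQE size/count defines also assumed): */
qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
			      LPFC_WQE128_SIZE, LPFC_WQE_EXP_COUNT);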

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-12-04 20:32:53 -05:00
James Smart
80cc004393 scsi: lpfc: Fix transition nvme-i rport handling to nport only.
While implementing the devloss API in the nvmei driver, an evaluation
of the nvme transport and the lpfc driver showed dual management of
the rports.  This creates the possibility of bugs as the thread count
and SAN size increase.

The nvmei driver code was based on a very early transport and was not
revisited until the devloss API was introduced.

Remove the listhead in the driver's rport data structure and the
listhead in the driver's lport data structure.  Remove all rport_list
traversal.  Convert the driver to use the nrport (nvme rport) pointer,
which is now NULL or non-NULL depending on the devloss action.  Convert
debugfs and nvme_info in sysfs to use the fc_nodes list in the vport.
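
A hedged sketch of the converted traversal (list and field names
follow the description above; treat them as illustrative):

struct lpfc_nodelist *ndlp;

/* Walk the vport's fc_nodes list instead of a private rport_list;
 * only nodes with a live nvme rport binding (nrport != NULL) are
 * reported. */
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
	if (!ndlp->nrport)
		continue;	/* devloss cleared the binding; skip */
	/* ... emit debugfs/nvme_info output for ndlp->nrport ... */
}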

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-06-12 21:37:30 -04:00
James Smart
bbe3012b73 lpfc: Fix memory corruption of the lpfc_ncmd->list pointers
lpfc was changing the private pointer that is set/maintained by
the nvme_fc transport. This caused two issues: a) the transport, on
teardown, may erroneously attempt to free whatever address was set;
and b) lpfc uses any value set in lpfc_nvme_fcp_abort() and
assumes it's a valid io request.

Correct the issue by properly defining a context structure for lpfc.
lpfc is also updated to clear the private context structure on io
completion.

Since this bug caused scrutiny of the way lpfc moves local request
structures between lists, also clean up list_del()s to
list_del_init()s.

This is an nvme-specific bug. The patch was cut against the
linux-block for-4.12/block tree and should be pulled in through
that tree.
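
A brief illustration of why the list cleanup matters (the lock and
list names here are assumptions):

/* list_del() poisons the entry's next/prev pointers, so a later
 * list_empty() check or a second deletion touches poison values.
 * list_del_init() leaves the entry as a valid empty list that is
 * safe to test or delete again. */
spin_lock_irqsave(&phba->nvme_buf_list_lock, iflag);	/* lock assumed */
list_del_init(&lpfc_ncmd->list);	/* was: list_del(&lpfc_ncmd->list) */
spin_unlock_irqrestore(&phba->nvme_buf_list_lock, iflag);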

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2017-04-25 20:00:58 +02:00
James Smart
06909bb91f Remove unused defines for NVME PostBuf.
These defines for the posting of buffers for the nvmet target were
not used.

Remove the unused defines.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
2017-04-24 09:25:48 +02:00
James Smart
a44e4e8b6b Standardize nvme SGL segment count
Standardize the default SGL segment count for the nvme target and
initiator.

The driver should use the same default for both, for clarity.
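
A hedged sketch of what standardizing looks like (define names and
the shared value are illustrative assumptions):

/* One shared default instead of separately-drifting values. */
#define LPFC_NVME_DEFAULT_SEGS	(64 + 1)	/* assumed: 64 data + 1 extra */
#define LPFC_NVMET_DEFAULT_SEGS	LPFC_NVME_DEFAULT_SEGS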

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
2017-04-24 09:25:48 +02:00
James Smart
318083ad92 scsi: lpfc: add NVME exchange aborts
Previous code did little more than log a message.

This patch adds abort path support, modeled after the SCSI code paths.
It currently addresses only the initiator path. The target path is
under development and is stubbed out for now.
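
A hedged skeleton of the initiator-side abort handler; the flow
mirrors the SCSI abort path as described, and the step comments are
illustrative rather than the driver's exact logic:

static void
lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
		    struct nvme_fc_remote_port *pnvme_rport,
		    void *hw_queue_handle,
		    struct nvmefc_fcp_req *pnvme_fcreq)
{
	/* 1. find the driver's io context for this request
	 * 2. verify the io is still outstanding (not already completing)
	 * 3. issue an abort WQE for the exchange
	 * 4. let the abort completion release the exchange resources */
}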

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-03-06 23:04:22 -05:00
James Smart
d080abe0a8 scsi: lpfc: Update copyrights
Update copyrights to 2017 for all files touched in this patch set

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-02-22 18:41:44 -05:00
James Smart
bd2cdd5e40 scsi: lpfc: NVME Initiator: Add debugfs support
Adds debugfs snippets to cover the new NVME initiator functionality.
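
A hedged sketch of one such snippet; the file name and the vport
counter shown are hypothetical, not the driver's exact ones:

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int lpfc_nvmestat_show(struct seq_file *m, void *v)
{
	struct lpfc_vport *vport = m->private;

	seq_printf(m, "LS requests issued: %lld\n",
		   atomic64_read(&vport->nvme_ls_reqs));	/* hypothetical */
	return 0;
}

static int lpfc_nvmestat_open(struct inode *inode, struct file *file)
{
	return single_open(file, lpfc_nvmestat_show, inode->i_private);
}

static const struct file_operations lpfc_nvmestat_fops = {
	.owner   = THIS_MODULE,
	.open    = lpfc_nvmestat_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

/* e.g. under the vport's debugfs directory: */
debugfs_create_file("nvmestat", 0444, vport_dir, vport,
		    &lpfc_nvmestat_fops);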

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-02-22 18:41:43 -05:00
James Smart
01649561a8 scsi: lpfc: NVME Initiator: bind to nvme_fc api
NVME Initiator: Tie in to NVME Fabrics nvme_fc LLDD initiator api

Adds the routines to:
- register and deregister the FC port as a nvme-fc initiator localport
  (a registration sketch follows this list)
- register and deregister remote FC ports as a nvme-fc remoteport
- bind nvme queues to adapter WQs
- send/perform NVME LS requests
- send/perform NVME FCP initiator io operations
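
A hedged sketch of the localport registration; the template callback
names, limits, and private-struct sizes are illustrative assumptions,
not necessarily the driver's exact values:

/* Illustrative template binding lpfc handlers to the nvme_fc api. */
static struct nvme_fc_port_template lpfc_nvme_template = {
	.localport_delete  = lpfc_nvme_localport_delete,
	.remoteport_delete = lpfc_nvme_remoteport_delete,
	.create_queue      = lpfc_nvme_create_queue,	/* bind queue to a WQ */
	.delete_queue      = lpfc_nvme_delete_queue,
	.ls_req            = lpfc_nvme_ls_req,		/* NVME LS requests */
	.fcp_io            = lpfc_nvme_fcp_io,		/* FCP initiator io */
	.ls_abort          = lpfc_nvme_ls_abort,
	.fcp_abort         = lpfc_nvme_fcp_abort,
	.max_sgl_segments  = LPFC_NVME_DEFAULT_SEGS,	/* assumed define */
	.dma_boundary      = 0xFFFFFFFF,
	.local_priv_sz     = sizeof(struct lpfc_nvme_lport),
	.remote_priv_sz    = sizeof(struct lpfc_nvme_rport),
};

/* registration, e.g. during vport bring-up: */
struct nvme_fc_port_info nfcp_info = {
	.node_name = wwn_to_u64(vport->fc_nodename.u.wwn),
	.port_name = wwn_to_u64(vport->fc_portname.u.wwn),
	.port_role = FC_PORT_ROLE_NVME_INITIATOR,
};
struct nvme_fc_local_port *localport;
int ret;

ret = nvme_fc_register_localport(&nfcp_info, &lpfc_nvme_template,
				 &vport->phba->pcidev->dev, &localport);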

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-02-22 18:41:43 -05:00
James Smart
895427bd01 scsi: lpfc: NVME Initiator: Base modifications
This patch adds base modifications for NVME initiator support.

The base modifications consist of:
- Formal split of SLI-3 rings from SLI-4 WQs (sometimes referred to as
  rings as well), as the implementation now widely varies between the
  two.
- Addition of configuration modes: SCSI initiator only; NVME initiator
  only; NVME target only; and SCSI and NVME initiator. The
  configuration mode drives the overall adapter configuration, the
  offloads enabled, and the resource splits. NVME support is only
  available on SLI-4 devices and newer fw.
- Implements the following based on configuration mode:
  - Exchange resources are split by protocol; obviously, if only one
    mode is enabled, no split occurs. The default is 50/50; a module
    attribute allows tuning.
  - Pools and config parameters are separated per protocol.
  - Each protocol has its own set of queues, but the protocols share
    interrupt vectors.
    SCSI:
      SLI-3 devices have few queues and the original style of queue
      allocation remains.
      SLI-4 devices piggyback on an "io-channel" concept that
      eventually needs to merge with scsi-mq/blk-mq support (it is
      underway). For now, the paradigm continues as it existed prior.
      An io channel allocates N msix and N WQs (N=4 default) and
      either round robins or uses cpu # modulo N for scheduling.
      A bunch of module parameters allow the configuration to be
      tuned.
    NVME (initiator):
      Allocates an msix per cpu (or whatever pci_alloc_irq_vectors
      gets).
      Allocates a WQ per cpu, and maps the WQs to msix vectors on a
      WQ # modulo msix vector count basis (a mapping sketch follows
      this list).
      Module parameters exist to cap/control the config if desired.
  - Each protocol has its own buffer and dma pools.
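
A hedged sketch of the modulo mapping described for the NVME
initiator queues (a sketch with illustrative names, not the driver's
exact code):

/* Map N per-cpu WQs onto M msix vectors by WQ index modulo vector
 * count, as described above. */
static void lpfc_map_wqs_to_vectors(unsigned int num_wqs,
				    unsigned int num_vectors,
				    unsigned int *wq_to_vec)
{
	unsigned int q;

	for (q = 0; q < num_wqs; q++)
		wq_to_vec[q] = q % num_vectors;
}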

I apologize for the size of the patch.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-02-22 18:41:43 -05:00