Introduce function _scsih_nvme_shutdown() to issue IO Unit Control message
to IOC firmware with operation code 'shutdown'. This causes IOC firmware to
issue NVMe shutdown commands to all NVMe drives attached to it.
NVMe Shutdown:
NVMe devices need to have a specific shutdown sequence performed before
power is removed. For this, the IOC firmware needs to be notified when the
system is being shut down. So at system shutdown time, the driver issues
an IO Unit Control request with operation code MPI26_CTRL_OP_SHUTDOWN to
inform the firmware that a shutdown has been initiated.
This shutdown command is issued only if NVMe devices are attached to the
controller.
During each NVMe device addition, the driver reads PCIe Device Page 2 to
get the drive's shutdown latency (i.e. its RTD3 Entry Latency) and keeps
the maximum latency among the attached NVMe drives in
ioc->max_shutdown_latency. This value is used as the timeout for the IO
Unit Control command at shutdown time.
When an NVMe drive is removed and its shutdown latency matches
ioc->max_shutdown_latency, then ioc->max_shutdown_latency is updated to the
next highest value (by iterating over the list of remaining devices). If
the shutdown latency is 0, the default timeout of six seconds is used.
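For illustration, the latency bookkeeping on device removal roughly looks
like the sketch below (helper and field names are illustrative, not the
exact driver symbols):

/* Sketch: recompute the max shutdown latency after a drive is removed. */
static void update_max_shutdown_latency(struct MPT3SAS_ADAPTER *ioc)
{
    struct _pcie_device *pcie_device;
    u16 max_latency = 0;

    list_for_each_entry(pcie_device, &ioc->pcie_device_list, list)
        if (pcie_device->shutdown_latency > max_latency)
            max_latency = pcie_device->shutdown_latency;

    /* fall back to the default six-second timeout when no latency is set */
    ioc->max_shutdown_latency = max_latency ? max_latency : 6;
}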
Link: https://lore.kernel.org/r/20191226111333.26131-3-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add a new status flag, MPT3_DIAG_BUFFER_IS_APP_OWNED, which is set
whenever an application registers the diag buffer and cleared when the
application unregisters it.
When this flag is set and an application issues a diag buffer register
command without the buffer having been released, the register command fails
with -EINVAL, indicating that the buffer is already registered by an
application.
When the user issues a trace buffer register command through the sysfs
parameter, and the trace buffer is in the released state but has not yet
been unregistered by the application that owned it, the driver unregisters
the buffer itself and freshly registers a 1MB trace buffer with the HBA
firmware.
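The register-path check behaves roughly like the following sketch
(simplified, using the status flags described above; not the exact driver
code):

/* Sketch: reject registration of a buffer an application still owns. */
if ((ioc->diag_buffer_status[buffer_type] & MPT3_DIAG_BUFFER_IS_REGISTERED) &&
    (ioc->diag_buffer_status[buffer_type] & MPT3_DIAG_BUFFER_IS_APP_OWNED) &&
    !(ioc->diag_buffer_status[buffer_type] & MPT3_DIAG_BUFFER_IS_RELEASED)) {
    pr_err("%s: buffer_type(0x%02x) is already registered by an application\n",
           ioc->name, buffer_type);
    return -EINVAL;
}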
Link: https://lore.kernel.org/r/1568379890-18347-9-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
A diag buffer allocated at driver load time or through the sysfs parameter
is marked as a driver-allocated diag buffer and the
MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED bit is set for it.
Such a buffer is not de-allocated even when an application issues an
unregister command; the driver just clears the registered status bit. The
same buffer is reused when any application re-registers the same diag
buffer type.
When re-registering the same diag buffer type, the application has to
register with the same size that was allocated at driver load time. The
application can read this buffer size by issuing a diag 'query' command.
This ensures that memory is always available for applications to collect
firmware logs. The only limitation is that an application cannot
re-register the diag buffer with a different size, but the buffer size
allocated at driver load time is sufficient for collecting firmware logs in
most cases.
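The unregister path can be sketched as follows (simplified; variable names
for the buffer and its DMA handle are illustrative):

/* Sketch: on an unregister request, keep a driver-allocated buffer and
 * only clear the registered bit so the same memory can be re-registered
 * later; application-allocated buffers are freed as before. */
if (ioc->diag_buffer_status[buffer_type] &
    MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED) {
    ioc->diag_buffer_status[buffer_type] &= ~MPT3_DIAG_BUFFER_IS_REGISTERED;
} else {
    dma_free_coherent(&ioc->pdev->dev, request_data_sz,
                      request_data, request_data_dma);
    ioc->diag_buffer[buffer_type] = NULL;
    ioc->diag_buffer_status[buffer_type] = 0;
}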
Link: https://lore.kernel.org/r/1568379890-18347-8-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Currently, if the user wishes to enable the host trace buffer at driver
load time, the driver has to be loaded with the module parameter
'diag_buffer_enable' set to one.
Alternatively, the user can now enable the host trace buffer by enabling
the following fields in Manufacturing Page 11 in NVDATA (the nvdata xml
used while building the HBA firmware image):
* HostTraceBufferMaxSizeKB - maximum trace buffer size in KB that the host
  can allocate,
* HostTraceBufferMinSizeKB - minimum trace buffer size in KB that the host
  should at least allocate,
* HostTraceBufferDecrementSizeKB - size by which the host reduces the
                                   buffer size and retries the allocation
                                   when an allocation with the previously
                                   calculated buffer size fails.
The driver will register the trace buffer automatically without any module
parameter during boot time when above fields are enabled in manufacturing
page11 in HBA firmware.
Driver follows the following algorithm for enabling the host trace buffer
during driver load time:
* If user has loaded the driver with module parameter 'diag_buffer_enable'
set to one, then driver allocates 2MB buffer and registers this buffer
with HBA firmware for capturing the firmware trace logs.
* Else the driver reads Manufacturing Page 11 data and checks whether the
  HostTraceBufferMaxSizeKB field is zero.
  - If HostTraceBufferMaxSizeKB is non-zero, the driver tries to allocate
    HostTraceBufferMaxSizeKB of memory. If the allocation succeeds, it
    registers this buffer with the HBA firmware; otherwise the driver
    retries in a loop (sketched below), reducing the current buffer size by
    HostTraceBufferDecrementSizeKB each time, until the allocation succeeds
    or the buffer size falls below HostTraceBufferMinSizeKB. If the
    allocation succeeds, the buffer is registered with the firmware; if the
    buffer size falls below HostTraceBufferMinSizeKB, the driver does not
    register a trace buffer with the HBA firmware.
  - If HostTraceBufferMaxSizeKB is zero, the driver does not register a
    trace buffer with the HBA firmware.
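A minimal sketch of the retry loop, assuming the Manufacturing Page 11
fields named above are available in a parsed page structure (variable
names are illustrative):

/* Sketch: shrink the requested trace buffer until allocation succeeds
 * or the size drops below the firmware-specified minimum. */
u32 trace_buff_size = ioc->manu_pg11.HostTraceBufferMaxSizeKB << 10;
u32 min_trace_buff_size = ioc->manu_pg11.HostTraceBufferMinSizeKB << 10;
u32 decr_trace_buff_size = ioc->manu_pg11.HostTraceBufferDecrementSizeKB << 10;
dma_addr_t request_data_dma;
void *request_data = NULL;

while (trace_buff_size >= min_trace_buff_size) {
    request_data = dma_alloc_coherent(&ioc->pdev->dev, trace_buff_size,
                                      &request_data_dma, GFP_KERNEL);
    if (request_data)
        break;  /* register this buffer with the HBA firmware */
    if (!decr_trace_buff_size || decr_trace_buff_size > trace_buff_size)
        break;  /* nothing sensible left to shrink by */
    trace_buff_size -= decr_trace_buff_size;
}

if (!request_data)
    pr_info("%s: failed allocating a trace buffer, not registering one\n",
            ioc->name);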
Link: https://lore.kernel.org/r/1568379890-18347-2-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This patch provides a module parameter and sysfs interface to select
whether the queue depth for each device should be based on the
protocol-specific value set by the driver (the default) or the maximum
supported by the controller (can_queue).
Although we have a sysfs interface per sdev to change the queue depth
of individual scsi devices, this implementation provides a single
sysfs entry per shost to switch between the controller max and the
driver default.
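Conceptually the per-device selection works as sketched below (the flag
name and helper are illustrative assumptions, not the driver's actual
symbols):

/* Sketch: pick the queue depth basis for each scsi device. */
static void set_device_queue_depth(struct MPT3SAS_ADAPTER *ioc,
                                   struct scsi_device *sdev,
                                   int driver_default_qd)
{
    int qd = ioc->use_controller_max_qd ? ioc->shost->can_queue
                                        : driver_default_qd;

    scsi_change_queue_depth(sdev, qd);
}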
[mkp: tweaked commit desc]
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Updated driver version from 29.100.00.00 to 31.100.00.00 which is
equivalent to Phase 12 OOB.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Through the sysfs parameter "drv_support_bitmap" the driver exposes whether
it supports the toolbox memory move command.
An application should issue the toolbox memory move command only if the
driver reports, through this sysfs parameter, that the command is
supported.
In the future this sysfs parameter can be used to advertise any newly added
feature to applications.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
If the driver sees an NVMe drive with the "DEVICE_BLOCKED" AccessStatus in
its PCIe Device Page 0, it removes the drive from its internal list, does
not allow any IOCTL commands to be sent to the drive, and fails those
IOCTLs with "-ENODEV" status.
The driver will now allow NVMe Encapsulated IOCTLs issued to an NVMe device
with an access status of DEVICE_BLOCKED. This change allows the user to
flash new drive firmware online and revive the drive.
Add the NVMe device to the driver's internal list only, even though the
device is in the blocked state, so that the device is visible to
applications. This way applications can send NVMe Encapsulated IOCTLs to
this drive and bring it online. An NVMe drive with DEVICE_BLOCKED access
status won't be added to the SML; it will be added only to the driver's
internal list.
[mkp: clarified desc]
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The SES device of a managed PCIe switch is enumerated the same way as NVMe
drives.
The device info type for this SES device is
MPI26_PCIE_DEVINFO_SCSI (0x4),
whereas the device info type for NVMe drives is
MPI26_PCIE_DEVINFO_NVME (0x3).
Based on this device info type the driver determines whether the device is
an NVMe drive or the SES device of a managed PCIe switch.
The SES device does not have PCIe Device Page 2 information like NVMe
drives, so the driver does not read PCIe Device Page 2 for it.
The SES device uses only IEEE SGLs, so the driver builds IEEE SGLs whenever
it receives SCSI commands for this device.
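The classification amounts to checking the device info type from PCIe
Device Page 0, roughly as below (simplified; the device-type mask name is
taken from the MPI headers):

/* Sketch: distinguish an NVMe drive from a PCIe-switch SES device. */
u32 device_info = le32_to_cpu(pcie_device_pg0.DeviceInfo);

switch (device_info & MPI26_PCIE_DEVINFO_MASK_DEVICE_TYPE) {
case MPI26_PCIE_DEVINFO_NVME:   /* 0x3: NVMe drive */
    /* read PCIe Device Page 2, build NVMe PRPs/encapsulated requests */
    break;
case MPI26_PCIE_DEVINFO_SCSI:   /* 0x4: SES device of a managed PCIe switch */
    /* skip PCIe Device Page 2, build IEEE SGLs for SCSI commands */
    break;
}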
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Issue:
During online firmware upgrade operations it is possible that the
MaxDevHandles value reported in IOCFacts changes with the new FW. We may
then observe kernel panics when the driver accesses the pd_handles or
blocking_handles buffers at an offset greater than the old firmware's
MaxDevHandle value.
Fix:
_base_check_ioc_facts_changes() looks for increases/decreases in IOCFacts
attributes during online firmware upgrade and grows the pd_handles,
blocking_handles, etc. buffers to the new firmware's MaxDevHandle value if
it is greater than the old firmware's MaxDevHandle value.
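The resize can be sketched with krealloc() (simplified; the real code also
handles blocking_handles and the other per-handle bitmaps, and "new_facts"
stands for the freshly read IOCFacts data):

/* Sketch: grow the pd_handles bitmap when the new firmware reports a
 * larger MaxDevHandle. */
if (new_facts.MaxDevHandle > ioc->facts.MaxDevHandle) {
    u16 new_sz = ALIGN(new_facts.MaxDevHandle, 8) / 8;
    void *pd_handles = krealloc(ioc->pd_handles, new_sz, GFP_KERNEL);

    if (!pd_handles)
        return -ENOMEM;
    /* zero only the newly added tail of the bitmap */
    memset(pd_handles + ioc->pd_handles_sz, 0, new_sz - ioc->pd_handles_sz);
    ioc->pd_handles = pd_handles;
    ioc->pd_handles_sz = new_sz;
}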
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Even though the 'smp_affinity_enable' module parameter is enabled, if the
number of online CPUs is bigger than the number of MSI-X vectors enabled on
an HBA, then smp affinity settings should be disabled only for that HBA.
Currently the smp affinity setting is disabled globally, so smp affinity is
also disabled for subsequent HBAs even though their number of enabled MSI-X
vectors matches the number of online CPUs.
To fix this, define a per-HBA variable smp_affinity_enable. Initially this
variable is initialized with the smp_affinity_enable module parameter
value. Only if an HBA has fewer MSI-X vectors configured than the number of
online CPUs is that HBA's smp_affinity_enable variable set to zero.
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update driver version from 28.100.00.00 to 29.100.00.00.
This is equivalent to the Phase 10 OOB driver.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Enable interrupt coalescing only on the high iops queues.
In IOC Config Page 1, offset 0x14 (the ProductSpecific field) is used to
control interrupt coalescing on a per reply descriptor post queue group (of
8) basis. If bit 31 is zero, then interrupt coalescing is enabled for all
reply descriptor post queues. If bit 31 is set to one, then the user can
enable/disable interrupt coalescing on a per reply descriptor post queue
group (of 8) basis. So to enable interrupt coalescing only on the first
reply descriptor post queue group (i.e. on the high iops queues), set bits
0 and 31.
This configuration should be reset to the default settings during driver
unload or shutdown. For this, the driver keeps a copy of the default IOC
Page 1 and writes back the default, unmodified IOC Page 1 during unload and
shutdown. This means that on the next driver load (e.g. if an older driver
version is loaded by the user), the current modified changes to IOC Page 1
won't take effect.
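For illustration, the net effect on IOC Page 1 can be sketched as below
(the config-page helpers and the pristine-copy field are assumptions based
on the description above, not necessarily the exact driver symbols):

/* Sketch: enable coalescing only on the first reply descriptor post queue
 * group (the high iops queues), keeping a pristine copy of the default
 * page so it can be restored at unload/shutdown. */
Mpi2IOCPage1_t ioc_pg1;
Mpi2ConfigReply_t mpi_reply;

mpt3sas_config_get_ioc_pg1(ioc, &mpi_reply, &ioc->ioc_pg1_copy);
ioc_pg1 = ioc->ioc_pg1_copy;
ioc_pg1.ProductSpecific = cpu_to_le32(0x80000001);  /* bits 31 and 0 */
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);

/* at driver unload or shutdown, restore the saved default page: */
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc->ioc_pg1_copy);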
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
In the IO submission path, _base_get_msix_index() is called twice: first
while getting the smid and again while posting the request descriptor (RD).
Refactor the code to query the msix index only while posting the request
descriptor and save the determined msix index in the msix_io field.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The driver will use a round-robin method, in batches, for IO submission
within the high iops queues when the number of in-flight IOs on the target
device is larger than 8. Otherwise the driver will use the low latency
reply queues.
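A simplified sketch of the selection logic (counters, constants and field
names are illustrative, not the exact driver symbols):

/* Sketch: choose a reply queue for a submitted IO. */
static u8 choose_msix_index(struct MPT3SAS_ADAPTER *ioc,
                            struct scsi_cmnd *scmd)
{
    /* hypothetical per-device in-flight counter */
    struct MPT3SAS_DEVICE *priv = scmd->device->hostdata;

    if (atomic_read(&priv->inflight_ios) > 8) {
        /* batched round robin over the 8 high iops queues */
        return (atomic_add_return(1, &ioc->high_iops_outstanding) / 16) % 8;
    }

    /* low in-flight count: use the low latency queue mapped to this CPU */
    return ioc->cpu_msix_table[raw_smp_processor_id()];
}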
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Aero controllers support balanced performance mode through the ability to
configure queues with different properties.
Reply queues with interrupt coalescing enabled are called "high iops reply
queues" and reply queues with interrupt coalescing disabled are called "low
latency reply queues".
The driver configures a combination of high iops and low latency reply
queues if:
- the HBA is an Aero controller;
- the HBA supports 128 MSI-X vectors;
- the total CPU count in the system is more than the high iops queue count;
- the driver is loaded with the default max_msix_vectors module parameter;
  and
- the system booted in non-kdump mode.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
If the Aero HBA supports Atomic Request Descriptors, it sets the Atomic
Request Descriptor Capable bit in the IOCCapabilities field of the IOCFacts
Reply message. The driver uses an Atomic Request Descriptor as an
alternative method for posting an entry onto a request queue.
The posting of an Atomic Request Descriptor is an atomic operation,
providing a safe mechanism for multiple processors on the host to post
requests without synchronization. The Atomic Request Descriptor format is
identical to the first 32 bits of the Default Request Descriptor and uses
only 32 bits.
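Posting then reduces to a single 32-bit MMIO write, roughly as sketched
below (register name per the MPI 2.6 system interface; treat this as a
sketch rather than the exact driver routine):

/* Sketch: post an Atomic Request Descriptor with one 32-bit write,
 * versus the two-part 64-bit write of a Default Request Descriptor. */
static void put_smid_default_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid)
{
    Mpi26AtomicRequestDescriptor_t descriptor;
    u32 *request = (u32 *)&descriptor;

    descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
    descriptor.MSIxIndex = 0;
    descriptor.SMID = cpu_to_le16(smid);

    writel(*request, &ioc->chip->AtomicRequestDescriptorPost);
}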
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This code refactoring introduces function pointers.
The host uses Request Descriptors of different types for posting an entry
onto a request queue. Based on controller type and capabilities, the host
can also use atomic descriptors instead of the normal descriptors. Using
function pointers avoids if-else statements in the submission path.
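For instance, the posting entry points can be bound once at init time,
roughly as below (function and capability-flag names follow the pattern
described above and are assumptions, not necessarily the exact symbols):

/* Sketch: bind the posting functions once, based on IOCFacts capabilities,
 * so the hot path calls ioc->put_smid_default() without branching. */
if (ioc->facts.IOCCapabilities &
    MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ_DESCRIPTOR) {
    ioc->put_smid_default = &_base_put_smid_default_atomic;
    ioc->put_smid_scsi_io = &_base_put_smid_scsi_io_atomic;
} else {
    ioc->put_smid_default = &_base_put_smid_default;
    ioc->put_smid_scsi_io = &_base_put_smid_scsi_io;
}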
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Updated driver version to 28.100.00.00, which is equivalent to OOB Phase 9.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
* Reduce the threshold value to 1/4 of the queue depth.
* With this the FW can find enough entries to post the Reply Descriptors in
  the reply descriptor post queue.
* With the module parameter, the user can tune the threshold value; the
  same irqpoll_weight is used as the budget when processing reply
  descriptor post queues in _base_process_reply_queue.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Driver uses "reply descriptor post queues" in round robin fashion so that
IO's are distributed to all the available reply descriptor post queues
equally. With this each reply descriptor post queue load is balanced.
This is enabled only if CPUs count to MSI-X vector count ratio is X:1
(where X > 1) This improves performance and also fixes soft lockups.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Issue Description:
We have seen CPU lockup issues in the field when the system has a large
(more than 96) logical CPU count. SAS3.0 controllers (Invader series)
support at most 96 MSI-X vectors and SAS3.5 products (Ventura) support at
most 128 MSI-X vectors.
This may be a generic issue (whenever a PCI device supports completion on
multiple reply queues). Let me explain it with respect to the hardware
supported by mpt3sas, just to simplify the problem and the possible changes
to handle such issues. IT HBAs (mpt3sas) support multiple reply queues in
the completion path. The driver creates MSI-X vectors for the controller as
"min of (FW supported reply queues, logical CPUs)". If the submitter is not
interrupted via a completion on the same CPU, there is a loop in the IO
path. This behavior can cause hard/soft CPU lockups, IO timeouts, system
sluggishness, etc.
Example - one CPU (e.g. CPU A) is busy submitting the IOs and another CPU
(e.g. CPU B) is busy with processing the corresponding IO's reply
descriptors from reply descriptor queue upon receiving the interrupts from
HBA. If CPU A is continuously pumping IOs, then CPU B (which is executing
the ISR) will always see valid reply descriptors in the reply descriptor
queue and will keep processing those reply descriptors in a loop without
quitting the ISR handler.
The mpt3sas driver exits the ISR handler when it finds an unused reply
descriptor in the reply descriptor queue. Since CPU A keeps sending IOs,
CPU B may always see a valid reply descriptor (posted by the HBA firmware
after processing an IO) in the reply descriptor queue. In the worst case,
the driver never quits this loop in the ISR handler. Eventually, a CPU
lockup is detected by the watchdog.
The above-mentioned behavior is not common if "rq_affinity" is set to 2 or
the affinity_hint is honored by irqbalance as "exact". If rq_affinity is
set to 2, the submitter is always interrupted via a completion on the same
CPU. If irqbalance uses the "exact" policy, the interrupt is delivered to
the submitting CPU.
If the CPU count to MSI-X vector (reply descriptor queue) count ratio is
not 1:1, we are still exposed to the issue explained above and, so far,
have no solution for it: soft/hard lockups can occur whenever the CPU count
is more than the number of MSI-X vectors supported by the device.
If the CPU count to MSI-X vector count ratio is not 1:1 (in other words, if
the ratio is X:1, where X > 1), then the 'exact' irqbalance policy or
rq_affinity = 2 won't help avoid CPU hard/soft lockups. There is no
one-to-one mapping between CPU and MSI-X vector; instead one MSI-X
interrupt (or reply descriptor queue) is shared by a group/set of CPUs, and
a loop can form in the IO path within that CPU group, which may lead to
lockups.
For example: consider a system having two NUMA nodes, each node having four
logical CPUs, and the number of MSI-X vectors enabled on the HBA being two;
the CPU count to MSI-X vector count ratio is then 4:1. E.g. MSI-X vector 0
has affinity to CPU 0, CPU 1, CPU 2 & CPU 3 of NUMA node 0, and MSI-X
vector 1 has affinity to CPU 4, CPU 5, CPU 6 & CPU 7 of NUMA node 1.
numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 --> MSI-x 0
node 0 size: 65536 MB
node 0 free: 63176 MB
node 1 cpus: 4 5 6 7 -->MSI-x 1
node 1 size: 65536 MB
node 1 free: 63176 MB
Assume that the user started an application which uses all the CPUs of NUMA
node 0 for issuing IOs. Only one CPU from the affinity list (it can be any
CPU, since this depends upon irqbalance), say CPU 0, will receive the
interrupts from MSI-X vector 0 for all the IOs. Eventually, CPU 0's IO
submission percentage will decrease and its ISR processing percentage will
increase as it becomes busier processing interrupts. Gradually the IO
submission percentage on CPU 0 drops to zero and its ISR processing
percentage reaches 100 percent, as an IO loop has formed within NUMA
node 0: CPU 1, CPU 2 & CPU 3 are continuously busy submitting heavy IOs and
only CPU 0 is busy in the ISR path, as it always finds a valid reply
descriptor in the reply descriptor queue. Eventually, we observe a hard
lockup here.
The chances of hard/soft lockups occurring are directly proportional to the
value of X. If the value of X is high, then the chance of observing CPU
lockups is high.
Solution: Use the IRQ poll interface defined in irq_poll.c. The mpt3sas
driver will execute the ISR routine in softirq context and will always quit
the loop based on the budget provided by the IRQ poll interface.
In these scenarios (i.e. where the CPU count to MSI-X vector count ratio is
X:1, where X > 1), the IRQ poll interface avoids CPU hard lockups through a
voluntary exit from the reply queue processing based on the budget. Note:
only one MSI-X vector is busy doing the processing.
Irqstat output:
IRQs / 1 second(s)
IRQ# TOTAL NODE0 NODE1 NODE2 NODE3 NAME
44 122871 122871 0 0 0 IR-PCI-MSI-edge mpt3sas0-msix0
45 0 0 0 0 0 IR-PCI-MSI-edge mpt3sas0-msix1
This approach is used only if the CPU count is more than the number of
MSI-X vectors supported by the firmware.
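For reference, a minimal sketch of how the generic irq_poll interface
(irq_poll_init()/irq_poll_sched()/irq_poll_complete()) can be wired into
the reply queue processing; the per-queue structure and field names are
illustrative, not the exact driver code:

/* Sketch: process replies in softirq context via irq_poll so a single CPU
 * cannot be trapped in the hard-IRQ handler forever. */
static int reply_q_irqpoll(struct irq_poll *irqpoll, int budget)
{
    struct adapter_reply_queue *reply_q =
        container_of(irqpoll, struct adapter_reply_queue, irqpoll);
    int num_entries = _base_process_reply_queue(reply_q);

    if (num_entries < budget) {
        irq_poll_complete(irqpoll);   /* done: re-arm the interrupt path */
        reply_q->irq_poll_scheduled = false;
    }
    return num_entries;
}

static irqreturn_t reply_q_interrupt(int irq, void *bus_id)
{
    struct adapter_reply_queue *reply_q = bus_id;

    if (reply_q->irq_poll_scheduled)
        return IRQ_HANDLED;           /* softirq poller owns this queue */

    /* if a full budget of descriptors was consumed, more are likely
     * pending: defer further processing to the irq_poll softirq */
    if (_base_process_reply_queue(reply_q) >= reply_q->irqpoll.weight) {
        reply_q->irq_poll_scheduled = true;
        irq_poll_sched(&reply_q->irqpoll);
    }
    return IRQ_HANDLED;
}

/* during setup, once per reply queue: */
irq_poll_init(&reply_q->irqpoll, irqpoll_weight, reply_q_irqpoll);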
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Updated driver version to 27.102.00.00 from 27.101.00.00.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add the Atlas PCIe Switch Management Port device PNPID,
Vendor Id: 0x1000
Device Id: 0x00B2
This device is based on MPI 2.6 spec and it exposes one SES device to
accept management commands for the PCIe switch.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update driver version from 27.100.00.00 to 27.101.00.00.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Sometimes Aero controllers appear to return bad data (0) for a doorbell
register read, and if the read is retried immediately after the bad read,
they return good data.
A workaround is added to retry the read from the doorbell registers up to
three times if the driver gets zero. Added function base_readl_aero for
Aero IOCs, while base_readl is used for Gen3.5 and other controllers.
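The workaround can be sketched as follows (retry count per the description
above; treat this as a sketch of the Aero read helper):

/* Sketch: retry a doorbell register read up to three times when an
 * Aero controller returns spurious zero data. */
static u32 base_readl_aero(const volatile void __iomem *addr)
{
    u32 i = 0, ret_val;

    do {
        ret_val = readl(addr);
        i++;
    } while (ret_val == 0 && i < 3);

    return ret_val;
}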
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Adding flag "is_aero_ioc" to differentiate aero based controllers from
other gen35 controllers.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Modify the driver version to 27.100.00.00 (which is equivalent to the PH8
OOB driver).
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
No functional changes. This section of code "wait for IOC to be
operational" is used in many places across the driver. Factor this code
out into a new mpt3sas_wait_for_ioc().
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add a new #define, IOC_OPERATIONAL_WAIT_COUNT, replacing the hard-coded
value '10' in all the places where the driver waits for the IOC to become
operational.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update the MPI headers to the latest version, 2.6.7, which adds support for
the driver to detect the new 3816 and 3916 chip based controllers. Separate
out the firmware image data from mpi2_ioc.h into the new file mpi2_image.h.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
All the uses have been removed, delete the macro.
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
These macros can help identify specific logging uses and eventually perhaps
reduce object sizes.
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Split each of these functions in three functions - one function per reset
phase. This patch does not change any functionality but makes the code
easier to read.
Note: it is much easier to review the git diff -w output after having
applied this patch than by reviewing the patch itself.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Sathya Prakash <sathya.prakash@broadcom.com>
Cc: Chaitra P B <chaitra.basappa@broadcom.com>
Cc: Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Since ioc->shost_recovery is set after ioc->reset_in_progress_mutex is
obtained, if concurrent resets are issued there is a short time during
which ioc->reset_in_progress_mutex is locked and ioc->shost_recovery ==
0. Avoid that this can cause trouble by unconditionally locking
ioc->shost_recovery.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Sathya Prakash <sathya.prakash@broadcom.com>
Cc: Chaitra P B <chaitra.basappa@broadcom.com>
Cc: Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Make _base_build_nvme_prp() easier to read by introducing a structure
to access NVMe command fields.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Sathya Prakash <sathya.prakash@broadcom.com>
Cc: Chaitra P B <chaitra.basappa@broadcom.com>
Cc: Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Updated driver version to "26.100.00.00"
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Presently the driver uses the combined reply queue feature when MSI-X
vectors > 8 for both SAS3 and SAS3.5 controllers. But as per the MPI spec:
1. For SAS3 controllers, the driver should use the combined reply queue
   when the HBA supports more than 8 MSI-X vectors.
2. For SAS3.5 controllers, the driver should use the combined reply queue
   when the HBA supports more than 16 MSI-X vectors.
Modify the driver code to use the combined reply queue for SAS3 controllers
when the HBA supports > 8 MSI-X vectors and for SAS3.5 controllers when the
HBA supports > 16 MSI-X vectors.
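The selection then depends only on the controller generation, roughly as
below (field names are illustrative, not necessarily the driver's exact
symbols):

/* Sketch: enable the combined reply queue feature based on controller
 * generation and MSI-X vector count (threshold per the MPI spec). */
u8 threshold = ioc->is_gen35_ioc ? 16 : 8;   /* SAS3.5 : SAS3 */

ioc->combined_reply_queue = (ioc->msix_vector_count > threshold);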
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
When an ioctl is sent to the FW and a controller reset is issued before the
ioctl completes, all the pending ioctl commands are terminated from the
"mpt3sas_ctl_reset_handler" function in the controller reset path. This
wakes up the waiting ioctl commands in the ioctl path and prints timeouts
which are not actually timeouts.
Introduce the "mpt3sas_base_check_cmd_timeout" function to check and report
whether a command timed out or was terminated due to host reset.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update driver version to match OOB/internal driver version.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
1) Manufacturing Page 11 contains parameters to control internal
   firmware behavior. Based on the AddlFlags2 field, FW/driver behaviour
   can be changed (the flag tm_custom_handling is used for this):
   a) For a PCIe device, a protocol level reset should be used if the flag
      tm_custom_handling is 0, since Abort Task Set, LUN reset and Target
      reset will result in a protocol level reset. The driver should issue
      only one such reset; if that fails, it should escalate to a
      controller reset (diag reset/OCR).
   b) If the driver has control over the TM reset timeout value, then for a
      PCIe device the driver should use the value exposed in PCIe Device
      Page 2 (field ControllerResetTO).
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add the function _base_display_fwpkg_version, which sends an FWUpload
request to pull the FW package version from the FW Image Header. The driver
now prints the FW package version in addition to the FW version if the
PackageVersion is valid.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
In the function _scsih_add_device, for each device connected to an
enclosure, the driver reads the enclosure page (to get details like the
enclosure handle, enclosure logical ID, enclosure level, etc.).
With this patch, instead of reading the enclosure page every time, the
driver maintains a list of enclosure devices (on an enclosure add event the
enclosure device is added to the list, and it is removed on delete events)
and uses the enclosure page from the list.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Introduce a chain lookup table/tracker and implement access to chain
buffers using the smid. Remove the linked-list based access to chain
buffers, which required a lock, and allocate as many chains as needed.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Instead of allocating the RDPQ array (which stores the addresses of the
RDPQ pools) at run time, it is now allocated once at driver load time, and
the same array is reused during host reset operations (instead of
allocating and freeing this buffer on the fly during every host reset) and
then freed at driver unload.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This patch fixes sparse warnings and bugs on big endian systems.
Signed-off-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Somewhat nasty merge due to conflicts between "33b28357dd00 scsi:
qla2xxx: Fix Async GPN_FT for FCP and FC-NVMe scan" and "2b5b96473efc
scsi: qla2xxx: Fix FC-NVMe LUN discovery"
Merge is non-trivial and has been verified by Qlogic (Cavium)
Signed-off-by: James E.J. Bottomley <jejb@linux.vnet.ibm.com>
The newly added code mixes up phys_addr_t/resource_size_t with dma_addr_t
and void pointers, as seen from these compiler warnings:
drivers/scsi/mpt3sas/mpt3sas_base.c: In function '_base_get_chain_phys':
drivers/scsi/mpt3sas/mpt3sas_base.c:235:21: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
base_chain_phys = (void *)ioc->chip_phys + MPI_FRAME_START_OFFSET +
^
drivers/scsi/mpt3sas/mpt3sas_base.c: In function '_clone_sg_entries':
drivers/scsi/mpt3sas/mpt3sas_base.c:427:20: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
sgel->Address = (dma_addr_t)dst_addr_phys;
^
drivers/scsi/mpt3sas/mpt3sas_base.c:438:7: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
(dma_addr_t)buff_ptr_phys;
^
drivers/scsi/mpt3sas/mpt3sas_base.c:444:10: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
(dma_addr_t)buff_ptr_phys;
Both dma_addr_t and phys_addr_t may be wider than a pointer, so we must
avoid the conversion to pointer types. This also helps readability.
A second problem is treating MMIO addresses from a 'struct resource'
as addresses that can be used for DMA on that device. In almost all
cases, those are the same, but on some of the more obscure architectures,
PCI memory address 0 is mapped into the CPU address space at a nonzero
offset. I don't have a good fix for that, so I'm adding a comment here,
plus a WARN_ON() that triggers whenever the phys_addr_t number is
outside of the low 32-bit address space and causes a straight overflow
when assigned to the 32-bit sgel->Address.
Fixes: 182ac784b4 ("scsi: mpt3sas: Introduce Base function for cloning.")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Sending I/O through 32-bit descriptors to the Ventura series of controllers
results in IO timeouts under certain conditions. This error only occurs on
systems with high I/O activity.
Changes in this patch prevent the driver from using 32-bit descriptors and
use 64-bit descriptors instead.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>