We would like to allow other utilities to initialize MSI-X and RX.
Put their declarations in a place accessible to other utilities.
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This makes the code less indented and more readable.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This allows fewer "dummy" declarations and casts.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The RFH for 22560 devices has changed: it still supports the same
architecture of used and free lists, but uses different structures
for them.
Use the new, hardware-dependent structures to manage the lists.
bd, the free list, uses the iwl_rx_transfer_desc struct,
in which the vid is stored in the struct's rbid
field, and the page address in the addr field.
used_bd, the used list, uses the iwl_rx_completion_desc
struct, in which the vid is stored in the struct's rbid
field.
rb_stts, the hw "write" pointer of RX, is stored in a
__le16 array, in which each entry represents the write
pointer per queue.
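Below is a minimal sketch of the two descriptors, assuming the layout
described above; only the rbid/addr fields mentioned here are meaningful
and everything else is illustrative (the authoritative definitions live
in the driver headers):
----
struct iwl_rx_transfer_desc {	/* free list (bd) entry - illustrative layout */
	__le16 rbid;		/* vid of the buffer */
	__le16 reserved;
	__le64 addr;		/* DMA address of the page */
} __packed;

struct iwl_rx_completion_desc {	/* used list (used_bd) entry - illustrative layout */
	__le16 rbid;		/* vid of the returned buffer */
	__le16 reserved;
} __packed;
----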
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The smallest RB size supported today is 4k RX buffers.
22560 devices use 2k RBs, so allow using 2k buffers.
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In 22560 devices the firmware will do all the hw configurations,
but that's not ready yet.
Update the correct registers in the driver until the FW is ready
and does it by itself.
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In 22560 devices the ROM sends an interrupt to the host
once the IML reading is done.
Handle this interrupt, and indicate a SW error in case the
value indicates failure.
Additionally, the causes for SW error in 22560 devices
have changed, so update the cause list.
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The hw now refers to two new blocks:
* rx tr tail - the tail index on the free buffers queue TR,
which is updated by the device after reading the free buffer
from the TR.
* rx cr tail - updated by the driver when completing
processing of a new completion descriptor in the CR.
Add these two new structs to the rxq, and allocate and free them
when needed.
In addition, the register for the RX write pointer has been changed
to HBUS_TARG_WRPTR. The way to differentiate TX from RX is the
queue number. The TX range is 0-511, and the RX range is 512-527.
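A sketch of how the shared register could be driven, assuming the
queue-number split described above (the exact bit layout of
HBUS_TARG_WRPTR is hardware specific; the helper below is illustrative
only):
----
/* illustrative: RX queues live past the 0-511 TX range */
#define RX_QUEUE_ID_BASE	512

static void iwl_rxq_inc_wr_ptr_22560(struct iwl_trans *trans,
				     int rxq_id, u16 wrptr)
{
	/* queue index in the upper half, write pointer in the lower half;
	 * the real encoding is defined by the hardware documentation */
	iwl_write32(trans, HBUS_TARG_WRPTR,
		    wrptr | ((RX_QUEUE_ID_BASE + rxq_id) << 16));
}
----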
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Make sure the rx_allocator worker is canceled before running the
rx_init routine. rx_init frees and re-allocates all rxbs' pages. The
rx_allocator worker also allocates pages for the used rxbs. Running
rx_init and rx_allocator simultaneously causes a kernel panic. Fix
that by canceling the work in rx_init.
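A minimal sketch of the fix, assuming the allocator work item lives in
the PCIe transport's rb-allocator state (names follow the driver, but
the function body is abbreviated and only illustrative):
----
static int _iwl_pcie_rx_init(struct iwl_trans *trans)
{
	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
	struct iwl_rb_allocator *rba = &trans_pcie->rba;

	/* the allocator must not touch pages we are about to free */
	cancel_work_sync(&rba->rx_alloc);

	/* ... free and re-allocate the rxb pages as before ... */
	return 0;
}
----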
Signed-off-by: Shaul Triebitz <shaul.triebitz@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Different device families may have different flag values
for passing a message to the fw (e.g. SW_RESET).
In order to keep the code readable, and avoid branching
on the family, store a value for each flag, which indicates
the bit that needs to be enabled.
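A sketch of the idea, assuming a per-family parameter block hung off the
device config (the struct and field names below are illustrative, not
necessarily the exact ones in the driver):
----
struct iwl_csr_params {			/* illustrative per-family block */
	u8 flag_sw_reset;		/* bit index of SW_RESET in CSR_RESET */
	u8 flag_mac_access_req;		/* bit index in CSR_GP_CNTRL */
};

static void iwl_trans_sw_reset(struct iwl_trans *trans)
{
	/* no family checks here - the config already carries the bit */
	iwl_set_bit(trans, CSR_RESET, BIT(trans->cfg->csr->flag_sw_reset));
}
----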
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Please do not apply this to mainline directly, instead please re-run the
coccinelle script shown below and apply its output.
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't harmful, and changing them results in
churn.
However, for some features, the read/write distinction is critical to
correct operation. To distinguish these cases, separate read/write
accessors must be used. This patch migrates (most) remaining
ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
coccinelle script:
----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()
// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
virtual patch
@ depends on patch @
expression E1, E2;
@@
- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)
@ depends on patch @
expression E;
@@
- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
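For reference, a trivial example of the transformation the script
performs (the variable names are made up):
----
/* before */
tail = ACCESS_ONCE(ring->tail);
ACCESS_ONCE(ring->head) = head;

/* after */
tail = READ_ONCE(ring->tail);
WRITE_ONCE(ring->head, head);
----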
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Work queues cannot be allocated while a mutex is held, because the
mutex may be in use and that would make the allocation sleep. Doing so
generates the following splat with 4.13+:
[ 19.513298] ======================================================
[ 19.513429] WARNING: possible circular locking dependency detected
[ 19.513557] 4.13.0-rc5+ #6 Not tainted
[ 19.513638] ------------------------------------------------------
[ 19.513767] cpuhp/0/12 is trying to acquire lock:
[ 19.513867] (&tz->lock){+.+.+.}, at: [<ffffffff924afebb>] thermal_zone_get_temp+0x5b/0xb0
[ 19.514047]
[ 19.514047] but task is already holding lock:
[ 19.514166] (cpuhp_state){+.+.+.}, at: [<ffffffff91cc4baa>] cpuhp_thread_fun+0x3a/0x210
[ 19.514338]
[ 19.514338] which lock already depends on the new lock.
This lock dependency already existed with previous kernel versions,
but it was not detected until commit 49dfe2a677 ("cpuhotplug: Link
lock stacks for hotplug callbacks") was introduced.
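The resulting pattern is to allocate the workqueue once at transport
allocation time, before any mutex is involved, roughly as follows (the
workqueue name and fields match the rb allocator, but the exact call
site is shown only as an illustration):
----
/* in the transport allocation path, before any mutex is taken */
trans_pcie->rba.alloc_wq = alloc_workqueue("rb_allocator",
					   WQ_HIGHPRI | WQ_UNBOUND, 1);
if (!trans_pcie->rba.alloc_wq)
	goto out_free_trans;
----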
Reported-by: David Weinehall <david.weinehall@intel.com>
Reported-by: Jiri Kosina <jikos@kernel.org>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
This allows modifying TFD_TX_CMD_SLOTS to a power of 2
which is smaller than 256.
Note that we still need to write values that wrap at 256
into the scheduler's write pointer, but all the rest of
the code can use shorter transmit queues.
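An illustrative sketch of the two wrap points, assuming a power-of-2
queue size (both names below are made up for the example):
----
/* driver-side index wrap: power of 2, so a mask is enough */
static inline int iwl_txq_inc_wrap(int index, int q_size)
{
	return (index + 1) & (q_size - 1);
}

/* scheduler write pointer: the hardware still wraps at 256 */
#define IWL_SCD_WRPTR(index)	((index) & 0xff)
----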
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We have tracing for both pre-ICT and ICT interrupts, including all
the data read there. Extend the tracing to MSI-X interrupts.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Print the queue for the existing debug message and add a new
debug message indicating where the RB ended.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Print out both queue IDs to be able to see what went wrong.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Due to a hardware issue, certain power saving had to be
disabled. However, this issue was fixed in B-step, so the
workaround only needs to apply to A-step.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When the firmware crashes, the transmit queues can't make
any progress. This is why we stop the counter that monitors
the transmit queues' activity.
The call that notifies the error to the op_mode may take
a bit of time, so stop the timer of the transmit queues
earlier.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When we started using threaded irqs, all the opmode calls were changed
to be called with local_bh disabled. The reason for this was that
mac80211 needs that. When we are handling FW errors, mac80211 is
not involved, so we don't need it.
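A sketch of the distinction, with illustrative call sites (the op_mode
entry points shown are the generic ones from the transport API):
----
/* RX path: mac80211 runs below this, so keep BHs disabled */
local_bh_disable();
iwl_op_mode_rx(trans->op_mode, &rxq->napi, &rxcb);
local_bh_enable();

/* FW error path: no mac80211 involvement, call directly */
iwl_op_mode_nic_error(trans->op_mode);
----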
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When toggling the RF-kill pin quickly in succession, the driver can
get rather confused because it might be in the process of shutting
down, expecting all commands to go through quickly due to rfkill,
but the transport already thinks the device is accessible again,
even though it previously shut it down. This leads to bugs, and I
even observed a kernel panic.
Avoid this by making the PCIe code only report that the radio is
enabled again after the higher layers actually decided to shut it
off.
This also pulls out this common RF-kill checking code into a common
function called by both transport generations and also moves it to
the direct method - in the internal helper we don't really care
about the RF-kill status anymore since we won't report it up until
the stop anyway.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In order to debug "hardware" RF-kill flows, add a low-level hook to
allow changing the "hardware" RF-kill from debugfs.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
There's no point in duplicating exactly the same code here
for legacy and MSI-X interrupts, so pull it out into a new
function to call in both places.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Letting the preprocessor/compiler generate the shift/mask by itself
is a win for readability, so use bitfield.h for some registers.
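For example, with linux/bitfield.h a field access needs only the mask
(the register field below is hypothetical):
----
#include <linux/bitfield.h>

#define CSR_EXAMPLE_STEP	GENMASK(3, 2)	/* hypothetical field */

u32 step = FIELD_GET(CSR_EXAMPLE_STEP, reg);	/* extract the field */
u32 val  = FIELD_PREP(CSR_EXAMPLE_STEP, step);	/* place a value back */
----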
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When the no-reclaim logic is applied to commands outside of group
zero (the legacy group), commands such as 0x1c (TX_CMD in group
0) can't be used in any other group. Fix that by applying this
logic only to group 0 - it's not and should never be needed for
any other groups.
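A sketch of the check after the fix, using the existing group/opcode
helpers (the surrounding RX handler is abbreviated):
----
if (reclaim && !iwl_cmd_groupid(cmd_id)) {
	int i;

	/* only legacy (group 0) opcodes are in the no-reclaim list */
	for (i = 0; i < trans_pcie->n_no_reclaim_cmds; i++) {
		if (trans_pcie->no_reclaim_cmds[i] == iwl_cmd_opcode(cmd_id)) {
			reclaim = false;
			break;
		}
	}
}
----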
Reported-by: Sharon Dvir <sharon.dvir@intel.com>
Reported-by: Shaul Triebitz <shaul.triebitz@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Change queue allocation to be dynamic. On transport init only
the command queue is allocated. Other queues are allocated
on demand.
This is due to the huge number of queues we will soon enable (512)
and as a preparation for the TX Virtual Queue Manager feature (TVQM),
where the firmware will assign the actual queue number on demand.
This also includes allocating the byte count table per queue
and not as one contiguous chunk of memory.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In the a000 transport we will allocate queues dynamically.
Right now queues are allocated as one big chunk of memory
and accessed as such.
The dynamic allocation of the queues will require accessing
the queues as pointers.
In order to keep the pre-a000 tx queue handling simple,
keep allocating and freeing the memory in the same style,
but access the queues in the various functions as
individual pointers.
Dynamic allocation for the a000 devices will be in a separate
patch.
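A sketch of the allocation side after the change, assuming a backing
array plus a table of per-queue pointers (names are illustrative):
----
/* pre-a000: still one contiguous allocation ... */
trans_pcie->txq_memory = kcalloc(nr_queues, sizeof(struct iwl_txq),
				 GFP_KERNEL);
if (!trans_pcie->txq_memory)
	return -ENOMEM;

/* ... but every user now goes through a per-queue pointer */
for (i = 0; i < nr_queues; i++)
	trans_pcie->txq[i] = &trans_pcie->txq_memory[i];
----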
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The context information structure is going to be used in a000
devices for firmware self init.
The self init includes the firmware loading itself from DRAM by
the ROM.
This means the TFH-relevant firmware loading can be cleaned up.
The firmware loading includes the paging memory as well, so the op
mode can stop initializing the paging and sending the DRAM_BLOCK_CMD.
The firmware does the RFH, TFH and SCD configuration, while the driver
only fills the required configurations and addresses in the
context information structure.
The only remaining access to the RFH is the write pointer, which
is updated upon the alive interrupt after the FW configured the RFH.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We don't need to print so much data in the kernel log.
Limit the data to be printed to the queue that actually
got stuck in case of a TFD queue hang, and stop dumping
all the CSR and FH registers. Over the course of time, the
CSR and FH values haven't proven themselves to be really
useful for debugging, and they are now in the firmware dump
anyway.
This comes as a preparation for the addition of more data
required to be printed by the firmware team.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Currently, when getting a RFKILL interrupt, the transport enters a flow
in which it stops the device, disables other interrupts, etc. After
stopping the device, the transport resets the hw, and sleeps. During
the sleep, a context switch occurs and host commands are sent by upper
layers (e.g. mvm) to the fw. This is possible since the op_mode layer
and the transport layer hold different mutexes.
Since the STATUS_RFKILL bit isn't set, the transport layer doesn't
recognize that RFKILL was toggled on, and no commands can actually be
sent, so it enqueues the command to the tx queue and sets a timer on
the queue.
After switching context back to stopping the device, STATUS_RFKILL is
set, and then the transport can't send the command to the fw.
This eventually results in a queue hang.
Fix this by setting STATUS_RFKILL immediately when
the interrupt is fired.
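Roughly, in the RF-kill interrupt path (sketch only, the surrounding
handler is omitted):
----
/* latch the rfkill status before anything can sleep, so later
 * host commands are rejected instead of being queued */
if (hw_rfkill)
	set_bit(STATUS_RFKILL, &trans->status);
else
	clear_bit(STATUS_RFKILL, &trans->status);
----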
Signed-off-by: Golan Ben-Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When resuming, it's possible for the following scenario to occur:
* iwl_pci_resume() enables the RF-kill interrupt
* iwl_pci_resume() reads the RF-kill state (e.g. to 'radio enabled')
* RF_KILL interrupt triggers, and iwl_pcie_irq_handler() reads the
state, now 'radio disabled', and acquires the &trans_pcie->mutex.
* iwl_pcie_irq_handler() further calls iwl_trans_pcie_rf_kill() to
indicate to the higher layers that the radio is now disabled (and
stops the device while at it)
* iwl_pcie_irq_handler() drops the mutex
* iwl_pci_resume() continues, acquires the mutex and calls the higher
layers to indicate that the radio is enabled.
At this point, the device is stopped but the higher layers think it's
available, and can call deeply into the driver to try to enable it.
However, this will fail since the device is actually disabled.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
There's no need to declare a list and then init it manually,
just use the LIST_HEAD() macro.
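For example (the variable name is illustrative):
----
/* before */
struct list_head local_empty;
INIT_LIST_HEAD(&local_empty);

/* after: declaration and initialization in one step */
LIST_HEAD(local_empty);
----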
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Log group as well. Remove 0x prefix to match TX logging.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In case the OS provides fewer interrupts than requested, different
causes will share the same interrupt vector as follows:
1. One interrupt less: non-RX causes shared with FBQ.
2. Two interrupts less: non-RX causes shared with FBQ and RSS.
3. More than two interrupts less: we will use fewer RSS queues.
Also make the request depend on the number of online CPUs
instead of possible CPUs.
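A sketch of the request sizing (the cap constant and fields are the
transport's, shown here only as an illustration):
----
int max_irqs, num_irqs;

/* one vector per online CPU plus the non-RX causes, capped by
 * what the hardware supports */
max_irqs = min_t(u32, num_online_cpus() + 2, IWL_MAX_RX_HW_QUEUES);

num_irqs = pci_enable_msix_range(pdev, trans_pcie->msix_entries,
				 MSIX_MIN_INTERRUPT_VECTORS, max_irqs);
/* num_irqs < max_irqs: fall back to the sharing schemes above */
----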
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The original intent was to have the general iwl_queue shared
between RX and TX queues, but that is not the case in practice.
Since it is not shared with any struct but iwl_txq, it adds
unnecessary complexity. Merge those structs.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Move pcie's write_prph_64 to be transport agnostic.
Add a direct-write variant as well, as it is needed for a000 HW.
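A sketch of what the 64-bit periphery write boils down to, built on the
existing 32-bit no-grab helper (the wrapper name is illustrative):
----
static void iwl_write_prph64_no_grab(struct iwl_trans *trans, u32 ofs,
				     u64 val)
{
	/* low dword first, then the high dword at ofs + 4 */
	iwl_write_prph_no_grab(trans, ofs, val & 0xffffffff);
	iwl_write_prph_no_grab(trans, ofs + 4, val >> 32);
}
----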
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In an MQ environment and with the new architecture in its early
stages, we may encounter DMA issues. Track the RXB status and bail
out in case we receive an index to an RXB that was not
mapped and handed over to HW.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Upon the firmware load interrupt (FH_TX), the ISR re-enables only
the firmware load interrupt, to avoid races with other
flows as described in the commit below. When the firmware
is completely loaded, the thread that is loading the
firmware will enable all the interrupts to make sure that
the driver gets the ALIVE interrupt.
The problem with that is that the thread that is loading
the firmware is actually racing against the ISR and we can
get to the following situation:
CPU0						CPU1
iwl_pcie_load_given_ucode
	...
	iwl_pcie_load_firmware_chunk
		wait_for_interrupt
						<interrupt>
						ISR handles CSR_INT_BIT_FH_TX
						ISR wakes up the thread on CPU0
/* enable all the interrupts
 * to get the ALIVE interrupt
 */
iwl_enable_interrupts
						ISR re-enables CSR_INT_BIT_FH_TX only
/* start the firmware */
iwl_write32(trans, CSR_RESET, 0);
BUG! ALIVE interrupt will never arrive since it has been
masked by CPU1.
In order to fix that, change the ISR to first check if
STATUS_INT_ENABLED is set. If so, re-enable all the
interrupts. If STATUS_INT_ENABLED is clear, then we can
check what specific interrupt happened and re-enable only
that specific interrupt (RFKILL or FH_TX).
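A sketch of the resulting ISR logic (bit and helper names as in the
PCIe code; the surrounding handler is abbreviated):
----
if (test_bit(STATUS_INT_ENABLED, &trans->status))
	_iwl_enable_interrupts(trans);		/* normal case: everything */
else if (handled & CSR_INT_BIT_FH_TX)
	iwl_enable_fw_load_int(trans);		/* only the FW load cause */
else if (handled & CSR_INT_BIT_RF_KILL)
	iwl_enable_rfkill_int(trans);		/* only the RF-kill cause */
----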
All the credit for the analysis goes to Kirtika who did the
actual debugging work.
Cc: <stable@vger.kernel.org> [4.5+]
Fixes: a6bd005fe9 ("iwlwifi: pcie: fix RF-Kill vs. firmware load race")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In cases of hardware or DMA error, the vid read from
a zeroed location will be 0, and we will access the rxb
at index 0 in the global table, while it may be NULL or
owned by hardware.
Invalidate vid 0 in order to detect the situation and
bail out.
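A sketch of the resulting check, assuming vids written to the hardware
start at 1 so that 0 can never be valid (the mask and field names are
illustrative):
----
vid = le32_to_cpu(rxq->used_bd[i]) & 0x0FFF;	/* mask is illustrative */
if (WARN_ON(!vid || vid > ARRAY_SIZE(trans_pcie->global_table)))
	goto out;
rxb = trans_pcie->global_table[vid - 1];
----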
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Somehow we ended up stopping RX using legacy RX registers
even for devices that support RFH. Fix it.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Currently the code calls restock for MQ devices during the init
function, unlike SQ where restock is called after init.
This causes a harmless but alarming deadlock warning from
lockdep; to fix this, unify the init code.
Rename the restock functions while at it.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Add a warning in case a packet didn't end up in the queue the HW
destined it for.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We now have 9000 devices that support multiple frames in
a single RB. Enable it.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
For 9000 devices we can have a PCIe bus for discrete
devices and an IOSF bus for integrated devices.
PCIe supports a maximum transfer size of 128B while the IOSF
bus supports a maximum transfer size of 64B.
Configure the RB size accordingly.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Integrated 9000 devices have a bug with shadow register
value retention.
If the driver writes the RBD registers while the MAC is asleep,
the values are stored in shadow registers to be copied whenever
the MAC wakes up.
However, in 9000 devices a MAC wakeup is not triggered,
and when the bus powers down due to inactivity the shadow
values and dirty bits are lost.
Turn on the chicken-bits that cause MAC wakeup for RX-related
values as well when the device is in D0.
When the device is in low power mode, turn the RX wakeup chicken
bits off since the driver is idle and this W/A is not needed.
Remove the previous W/A, which was ineffective.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When initializing RX we grab NIC access for every read and
write. This is redundant - we can just grab access once.
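A sketch of the pattern (the transport helpers shown are the existing
ones; the register accesses in between are omitted):
----
unsigned long flags;

if (!iwl_trans_grab_nic_access(trans, &flags))
	return;

/* ... all the RX init reads/writes, using the _no_grab variants ... */

iwl_trans_release_nic_access(trans, &flags);
----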
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The RX queues have a shadow register for the write pointer
that enables updates without grabbing NIC access. Use them
instead of the periphery registers because accessing those
is much more expensive.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
CSR registers are always available even when the NIC is not awake, no
need to wake up the NIC before accessing them. This has a huge impact
when we re-enable an interrupt at the end of the ISR since waking up the
NIC can take some time.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
isr_stats is written twice with the same value; remove one of the
redundant assignments to isr_stats.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Due to a hardware bug, upon any shadow free-queue register write
access, a legacy RBD shadow register must be written as well.
This is required in order to trigger a copy of the shadow register
values after the MAC exits sleep state.
Specifically, the driver has to write (any value) to the legacy RBD
register each time FRBDCB is accessed.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>