Integrated 9000 devices have a bug in shadow register value
retention.
If the driver writes RBD registers while the MAC is asleep, the
values are stored in shadow registers to be copied over whenever the
MAC wakes up.
However, in 9000 devices a MAC wakeup is not triggered, and when the
bus powers down due to inactivity the shadow values and dirty bits
are lost.
Turn on the chicken bits that cause a MAC wakeup for RX-related
values as well when the device is in D0.
When the device is in low power mode, turn the RX wakeup chicken
bits off since the driver is idle and this W/A is not needed.
Remove the previous W/A, which was ineffective.
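A minimal sketch of the W/A, assuming purely illustrative register
and bit names (the real chicken-bit definitions are hardware
specific):

  /* D0: force a MAC wakeup when RX-related shadow registers are
   * written, so the shadowed values are not lost */
  iwl_set_bit(trans, CSR_MAC_SHADOW_REG_CTRL, RX_WAKE_CHICKEN_BITS);

  /* low power: the driver is idle, the W/A is not needed */
  iwl_clear_bit(trans, CSR_MAC_SHADOW_REG_CTRL, RX_WAKE_CHICKEN_BITS);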
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Since msleep() is based on jiffies, it can sleep for much longer
than requested. Use usleep_range() instead to bound the maximum time.
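For example (the delay values are illustrative):

  /* msleep(2) rounds up to jiffies and can stretch to tens of
   * milliseconds on low-HZ configs; usleep_range() keeps the wait
   * close to what was asked for */
  usleep_range(2000, 4000);   /* was: msleep(2) */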
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Newer hardware generations will take longer to be accessible again
after reset, so we need to wait longer before continuing any flow
that did a reset.
Rather than make the wait time configurable, simply extend it for
all.
Since all of these code paths can sleep, use usleep_range() rather
than mdelay().
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Since we have a lot of configuration structs (almost 70), saving
some memory in each of them leads to an overall saving of
~2.6 KiB of memory.
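One way to get there, sketched with illustrative field names rather
than the actual struct iwl_cfg layout, is to pack the boolean flags
into single-bit bitfields:

  struct iwl_cfg_example {
          const char *name;
          /* each of these used to be a full bool member */
          u32 internal_wimax_coex:1,
              host_interrupt_operation_mode:1,
              apmg_not_supported:1;
  };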
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Currently there is a one-to-one mapping between a device ID and its
ucode. The new generation devices allow combining different PHY and
MAC images, so we can now have two different ucode images with the
same device ID.
Read the RF ID to identify the PHY image and override the selected
ucode if needed.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
It's cleaner to always call the iwl_trans_ref/unref() functions
instead of sometimes calling the trans-specific ops directly. This
also prepares for moving some of the code from the trans-specific ops
to the common trans code.
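The wrapper is roughly the following, so callers never need to know
whether the underlying op is implemented:

  static inline void iwl_trans_ref(struct iwl_trans *trans)
  {
          if (trans->ops->ref)
                  trans->ops->ref(trans);
  }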
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In MSIX mode we iterate over the allocated interrupt vectors and
register each of them with a handler. In case of a registration
failure, we free all the IRQs allocated so far, but we mistakenly
use the outer index instead of the inner one.
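The error-unwind pattern, in illustrative form (variable names are
not the exact driver ones), should free only the vectors registered
so far, indexed by the inner counter:

  for (i = 0; i < nr_vectors; i++) {
          ret = request_irq(msix_entries[i].vector, handler, 0,
                            DRV_NAME, trans);
          if (ret) {
                  int j;

                  /* free what was registered so far - j, not i */
                  for (j = 0; j < i; j++)
                          free_irq(msix_entries[j].vector, trans);
                  return ret;
          }
  }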
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
We want to request one interrupt vector per CPU for the RSS queues,
one vector for the fallback queue, and one for non-RX interrupts.
A future patch will make sure that no RSS traffic is directed to the
fallback queue.
This will let us enable the fast path for traffic that would
otherwise have been received on the fallback queue.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Bjorn pointed out that printing an error value in hexadecimal isn't
very convenient. Change that.
Reported-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
We don't use the refcount value anymore; all the refcounting is done
via the runtime PM usage_count. Remove it.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When entering suspend the driver calls iwl_disable_interrupts() and
then iwl_pcie_disable_ict().
On resume the driver calls only iwl_pcie_reset_ict(), without
explicitly calling iwl_enable_interrupts().
This mostly works, since iwl_pcie_reset_ict() calls
iwl_enable_interrupts(), but it doesn't work when there is no
ict_table, as in MSIx mode.
The result is that the driver tries to resume but fails, since it
doesn't get the RX interrupt from the FW indicating that the d0i3
exit was completed.
Fix it by adding an explicit call to enable interrupts.
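In the resume flow this amounts to something like the following
(surrounding code omitted):

  iwl_pcie_reset_ict(trans);     /* doesn't enable interrupts when
                                  * there is no ICT table (MSIx) */
  iwl_enable_interrupts(trans);  /* explicit, covers MSIx as well */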
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
IWL_INFO is not an error level, but it is still printed by default.
The message "can't access the RSA semaphore it is write protected"
seems worrisome, but it is not really a problem.
CC: <stable@vger.kernel.org> [4.1+]
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The pci driver keeps any unbound device in active state and forbids
runtime PM. When our driver gets probed, we take control of the
state. When the device is released (i.e. during unbind or module
removal), we should return the state to what it was before. To do so,
we need to forbid RTPM in the driver remove op.
Additionally, remove an unnecessary pm_runtime_disable() call, move
the initial ref_count setting to a better place and add some comments
explaining what is going on.
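A sketch of the remove path, assuming probe previously allowed RTPM
for the bound device:

  static void iwl_pci_remove(struct pci_dev *pdev)
  {
          /* ... normal driver teardown ... */

          /* return to the PCI core default for unbound devices:
           * active state, runtime PM forbidden */
          pm_runtime_forbid(&pdev->dev);
  }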
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Working with MSIX requires prior configuration.
This includes requesting interrupt vectors from the OS, registering
the vectors with handlers and mapping the optional causes to the
relevant interrupt. In addition, add a new interrupt handler for
MSIX interrupts.
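The OS-facing part of the setup looks roughly like this (vector
count and names are illustrative):

  for (i = 0; i < max_vecs; i++)
          msix_entries[i].entry = i;

  num_vecs = pci_enable_msix_range(pdev, msix_entries, 1, max_vecs);
  if (num_vecs < 0)
          return num_vecs;        /* fall back to MSI / INTx */

  /* then request_irq() each msix_entries[i].vector with the new MSIX
   * handler and map every HW/SW interrupt cause to its vector */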
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Instead of waking up the device each time we write a register, wake
it up once and write all the registers at once.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
These are a few fixes for the current cycle.
3 out of the 5 patches fix a bugzilla.
* fix a race that users reported when we try to load the firmware
and the hardware rfkill interrupt triggers at the same time.
* Luca fixes a very visible bug in scheduled scan: our firmware
doesn't support scheduled scan with no profile configured and
the supplicant sometimes requests such scheduled scans.
* build system fix
* firmware name update for 8265
* typo fix in return value
The iwl_trans_pcie_start_fw() function may return the positive value EIO
instead of -EIO in case of error.
Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When we load the firmware, we hold trans_pcie->mutex to
avoid nested flows. We also rely on the ISR to wake up the
thread when the DMA has finished copying a chunk. During
this flow, we enable the RF-Kill interrupt.
The problem is that the RF-Kill interrupt handler can take
the mutex and bring the device down. This means that if
we load the firmware while the RF-Kill switch is enabled
(which will happen when we load the INIT firmware to read
the device's capabilities and register to mac80211), we
may get an RF-Kill interrupt immediately and the ISR will
be waiting for the mutex held by the thread that is
currently loading the firmware. At this stage, the ISR
won't be able to service the DMA interrupt needed to
wake up the thread that loads the firmware. We are in a
deadlock situation which ends when the thread that loads
the firmware fails on timeout and releases the mutex.
To fix this, take the mutex later in the flow, disable
the interrupts and synchronize_irq() to give a chance to
the RF-Kill interrupt to run and complete.
After that, mask all the interrupts besides the DMA
interrupt and proceed with firmware load. Make sure to
check that there was no RF-Kill interrupt when the
interrupts were disabled.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=111361
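The reordered flow looks roughly like this (error handling and the
masking of the non-DMA interrupts are abridged):

  iwl_disable_interrupts(trans);
  /* give a pending RF-Kill interrupt the chance to run to completion
   * before we take the mutex it would need */
  synchronize_irq(trans_pcie->pci_dev->irq);

  mutex_lock(&trans_pcie->mutex);

  /* re-check RF-Kill now that the ISR had its chance to report it */
  if (iwl_is_rfkill_set(trans)) {
          ret = -ERFKILL;
          goto out;
  }

  /* mask everything except the FH (DMA) interrupt and load the fw */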
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Previous patches enabled the new 9000 hardware DMA for one queue
only.
Enable the actual multi-queue path and configuration now.
This also requires a per-queue NAPI struct.
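Per-queue NAPI registration then looks along these lines (the poll
routine name is illustrative):

  for (i = 0; i < trans->num_rx_queues; i++) {
          struct iwl_rxq *rxq = &trans_pcie->rxq[i];

          netif_napi_add(&trans_pcie->napi_dev, &rxq->napi,
                         iwl_pcie_napi_poll, NAPI_POLL_WEIGHT);
  }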
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Currently, when the driver is configured with wowlan parameters and
enters D3 mode, it switches the FW image to D3, and when it exits
suspend, it reloads the D0 image.
If the firmware supports the consolidation of the D0 & D3 images,
there is no need to load the D3 image on suspend and no need to
reload the D0 image on resume.
Do not switch images on suspend / resume for firmware that supports
consolidated images.
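The suspend-time check then becomes roughly the following (the
capability and helper names are illustrative):

  unified_image = fw_has_capa(&mvm->fw->ucode_capa,
                              IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
  if (!unified_image)
          /* legacy behaviour: switch to the dedicated D3 image */
          ret = iwl_mvm_switch_to_d3_image(mvm);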
Signed-off-by: Matti Gottlieb <matti.gottlieb@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Enable runtime power management (RTPM) for PCIe devices and implement
the corresponding functions to enable D0i3 mode when the device is
idle.
Additionally, remove some unnecessary #ifdef's because the RTPM code
will not be called if runtime PM is not configured.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Add an initial implementation of runtime power management (RTPM) for
PCI devices. With this patch, RTPM is only used when wifi is off
(i.e. the wifi interface is down). This implementation is behind a
new Kconfig flag, IWLWIFI_PCIE_RTPM.
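Wired into the PCIe driver, this boils down to populating the
runtime hooks in the dev_pm_ops (callback names sketched):

  static const struct dev_pm_ops iwl_dev_pm_ops = {
          SET_SYSTEM_SLEEP_PM_OPS(iwl_pci_suspend, iwl_pci_resume)
  #ifdef CONFIG_IWLWIFI_PCIE_RTPM
          SET_RUNTIME_PM_OPS(iwl_pci_runtime_suspend,
                             iwl_pci_runtime_resume,
                             NULL)
  #endif
  };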
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The 9000 series introduces several changes in the device
DMA operation.
As the device now supports multi-queue RX, several DMA channels
need to be configured.
The flow of providing the device with the allocated RBDs changes as
well - the device now maintains separate tables of used and free
RBDs.
The hardware may use the free table to feed RBDs to any queue.
This requires maintaining a shared table to map returned RBDs back
to the original RXB - for that purpose the VID is introduced: an
internal identifier of the RB, placed in the lower 12 bits and
returned by the HW in the used data.
Another change is the support of 64-bit DMA addresses.
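In illustrative form (field names roughly follow the description
above), the driver decodes the VID from the used data to recover the
original RXB:

  #define IWL_RX_VID_MASK 0x0fff          /* lower 12 bits */

  vid = le32_to_cpu(rxq->used_bd[i]) & IWL_RX_VID_MASK;
  rxb = trans_pcie->global_table[vid];    /* shared VID -> RXB map */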
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The 9000 series devices will support multiple RX queues.
The current code has one static RX queue - change it to allocate a
number of queues according to the device capability (pre-9000
devices have the number of RX queues set to one).
Subsequent generalizations are:
Change the code to access an explicitly numbered RX queue only when
the queue number is known - when handling an interrupt, when
accessing the default queue and when iterating over the queues.
The rest of the functions receive the RX queue as a pointer.
Generalize the warning on allocation failure to consider the
allocator status instead of a single RX queue's status.
Move the initial pool of RX memory buffers to be shared among all
the queues and allocated to the default queue on init.
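The allocation then follows the device capability rather than a
single static queue, along these lines:

  trans_pcie->rxq = kcalloc(trans->num_rx_queues,
                            sizeof(struct iwl_rxq), GFP_KERNEL);
  if (!trans_pcie->rxq)
          return -ENOMEM;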
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The 8000 device family has a new debug engine that needs to be
configured differently from the 7000's.
The debug engine's DMA works in chunks of memory, and the size of
the buffer we configure really means the start of the last chunk.
Since one chunk is 256 bytes long, we should configure the device to
write to buffer_size - 256.
This fixes a situation where the device would write to memory it is
not allowed to access.
CC: <stable@vger.kernel.org> [4.1+]
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The debug functions of fw-dbg.c don't really need to modify
the trigger and the description they receive as a parameter.
Constify the pointers.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When the op_mode sends an skb whose payload is bigger than
MSS, PCIe will create an A-MSDU out of it. PCIe assumes
that the skb that is coming from the op_mode can fit in one
A-MSDU. It is the op_mode's responsibility to make sure
that this guarantee holds.
Additional headers need to be built for the subframes.
The TSO core code takes care of the IP / TCP headers and
the driver takes care of the 802.11 subframe headers.
These headers are stored on a per-cpu page that is re-used
for all the packets handled on that same CPU. Each skb
holds a reference to that page and releases the page when
it is reclaimed. When the page gets full, it is released
and a new one is allocated.
Since any SKB that doesn't go through the fast-xmit path
of mac80211 will be segmented, we can assume here that the
packet is not WEP / TKIP and has a proper SNAP header.
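A sketch of the per-CPU header page bookkeeping (struct and helper
names are illustrative; every skb that points into the page takes an
extra page reference and drops it when it is reclaimed):

  struct iwl_tso_hdr_page {
          struct page *page;
          u8 *pos;                /* next free byte in the page */
  };

  /* return room for 'len' bytes of subframe headers on this page */
  static u8 *iwl_pcie_get_hdr_room(struct iwl_tso_hdr_page *p,
                                   unsigned int len)
  {
          u8 *ret;

          if (!p->page ||
              p->pos + len > (u8 *)page_address(p->page) + PAGE_SIZE) {
                  if (p->page)
                          __free_page(p->page); /* skbs keep their refs */
                  p->page = alloc_page(GFP_ATOMIC);
                  if (!p->page)
                          return NULL;
                  p->pos = page_address(p->page);
          }

          ret = p->pos;
          p->pos += len;
          return ret;
  }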
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Allow configuring the driver to pretend to have TX CSUM offload
support. This will be useful for testing the TSO flows that will
come in further patches.
This configuration is disabled by default.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
ilw@linux.intel.com is not available anymore.
linuxwifi@intel.com should be used instead.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The d0i3_mode variable is used to distinguish between transports that
handle d0i3 entry during suspend by themselves (i.e. the slave
transports) and those which rely on the op_mode layer to do it. The
reason why the former do it by themselves is that they need to
transition from d0i3 in runtime_suspend into d0i3 in system-wide
suspend and this transition needs to happen before the op_mode's
suspend flow is called.
The wowlan_d0i3 element is also a bit confusing, because it just
reflects the wowlan->any value for the trans to understand. This is a
bit unclear in the code and not generic enough for future use.
To make it clearer and to generalize the platform power mode settings,
introduce two variables to indicate the platform power management
modes used by the transport.
Additionally, in order not to take too big a step in one patch,
treat these new variables semantically in the same way as the old
d0i3_mode element, introducing an iwl_mvm_enter_d0i3_on_suspend()
function to help with that.
This commit also adds the foundation for a new concept where the
firmware configuration state (i.e. D0, D3 or D0i3) is abstracted from
the platform PM mode we are in (i.e. runtime suspend or system-wide
suspend).
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Host commands now have a group ID; express this in printed messages.
Signed-off-by: Sharon Dvir <sharon.dvir@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In certain flows (see next patches), the op_mode may need to
block the Tx queues for a short period. Provide an API for
that. The transport is in charge of counting the number of
times the queues are blocked since the op_mode may block the
queues several times in a row before unblocking them.
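An illustrative counting scheme on the transport side (field and
helper names are made up for the sketch):

  static void example_block_txqs(struct iwl_trans_pcie *trans_pcie,
                                 bool block)
  {
          if (block) {
                  if (trans_pcie->block_count++ == 0)
                          example_stop_all_tx_queues(trans_pcie);
          } else if (--trans_pcie->block_count == 0) {
                  example_wake_all_tx_queues(trans_pcie);
          }
  }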
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Transport code currently calls itself through the transport ops,
which is quite pointless. Clean up all of this. While at it,
remove the unnecessary dir argument and the redundant IDI code.
In slave transports, call both the common slave debugfs and the
transport's own. SDIO has no files, so remove it all there.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Currently the prph register dump is done in the transport layer,
and each bus needs an additional dump implementation.
Move the prph dump outside the transport, and allow a common
implementation for all of the buses.
This is possible because the prph base addresses are similar for
all buses.
Signed-off-by: Golan Ben-Ami <golan.ben.ami@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Expose _no_grab prph i/o functions that allow performing i/o
outside the transport, without requiring grab and release NIC access
for each operation. In addition, rename the functions so they reflect
their non-grabbing behavior.
This can be very useful for consecutive prph i/o operations that
occur outside the trans, such as fw dumps.
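The typical pattern this enables for a run of consecutive accesses
(e.g. a fw dump loop), sketched:

  /* grab NIC access once via iwl_trans_grab_nic_access() ... */
  for (i = 0; i < num_regs; i++)
          dump[i] = iwl_read_prph_no_grab(trans, regs[i]);
  /* ... then release once via iwl_trans_release_nic_access() */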
Signed-off-by: Golan Ben-Ami <golan.ben.ami@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
802.11ac allows A-MSDUs that can be up to 12KB long. Since an
entire A-MSDU needs to fit into one single Receive Buffer (RB), add
support for big RBs.
Since this adds a lot of pressure on the memory manager and
significantly increases the truesize of the RX buffers, don't enable
this by default.
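Size-wise (illustrative), a 12 KiB RB needs an order-2 page
allocation on 4 KiB-page systems, i.e. 16 KiB of contiguous memory
per buffer, which is where the truesize pressure comes from:

  int order = get_order(12 * 1024);       /* == 2 with 4 KiB pages */
  struct page *page = alloc_pages(GFP_KERNEL, order);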
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>