When a user terminates a DMA channel to free all of its descriptors, a
transfer interrupt for that channel may be triggered at the same time.
We should not handle that interrupt: check whether 'schan->cur_desc' has
been set to NULL and, if so, skip the handling to avoid crashing the kernel.
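A minimal sketch of the check in the interrupt handler, assuming the handler
is passed a per-channel cookie and that termination clears 'cur_desc' (names
other than 'schan->cur_desc' are illustrative, not the driver's exact code):
static irqreturn_t sprd_dma_chn_irq(int irq, void *dev_id)
{
	struct sprd_dma_chn *schan = dev_id;

	spin_lock(&schan->vc.lock);
	/*
	 * The channel may have been terminated and its descriptors freed
	 * right before this interrupt is serviced; in that case there is
	 * nothing to complete, so bail out instead of touching a stale
	 * descriptor.
	 */
	if (!schan->cur_desc) {
		spin_unlock(&schan->vc.lock);
		return IRQ_HANDLED;
	}

	vchan_cookie_complete(&schan->cur_desc->vd);	/* normal completion */
	schan->cur_desc = NULL;
	spin_unlock(&schan->vc.lock);

	return IRQ_HANDLED;
}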
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_find_desc() returns a NULL virtual descriptor once the descriptor has
been submitted, and that NULL pointer crashes the kernel when we try to read
the descriptor status.
In this case the descriptor has been submitted for processing but has not
completed yet, which means it sits on the 'vc->desc_submitted' list. So we
cannot look up the descriptor currently being processed with
vchan_find_desc(); instead, the pointer 'schan->cur_desc' points to it, so use
'schan->cur_desc' to read the status of the descriptor currently being
processed and avoid this issue.
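In tx_status terms, the lookup then falls back to 'schan->cur_desc', roughly
as in this sketch (the residue helpers are placeholders, not the driver's
exact functions):
	spin_lock_irqsave(&schan->vc.lock, flags);
	vd = vchan_find_desc(&schan->vc, cookie);
	if (vd) {
		/* still on the issued list: the full length remains */
		residue = sprd_dma_desc_residue(to_sprd_dma_desc(vd));
	} else if (schan->cur_desc && schan->cur_desc->vd.tx.cookie == cookie) {
		/* submitted to hardware: read progress via the current descriptor */
		residue = sprd_dma_chn_residue(schan);
	} else {
		/* already completed */
		residue = 0;
	}
	spin_unlock_irqrestore(&schan->vc.lock, flags);

	dma_set_residue(txstate, residue);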
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit ded1f3db4c ("dmaengine: tegra210-adma: prepare for supporting
newer Tegra chips") removed the default settings for the DMA channel RX
and TX FIFO sizes, and this is breaking DMA transfers. The intention was
to move the default settings into the chip-specific data structure,
because this commit was preparing for adding support for Tegra186, where
the fields of the FIFO CTRL register are slightly different.
Fix the configuration of the FIFO sizes by adding default values for
the FIFO CTRL register for both Tegra210 and Tegra186 and storing the
values in the chip-specific structure.
Fixes: ded1f3db4c ("dmaengine: tegra210-adma: prepare for supporting newer Tegra chips")
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit f33e7bb3eb ("dmaengine: tegra210-adma: restore channel status")
added support to save and restore the DMA channel registers when runtime
suspending the ADMA. This change causes the kernel to crash when probing
the ADMA if probing of the device is deferred while looking up the
channel interrupts. The crash occurs because not all of the channel base
addresses have been set up at this point, and in the clean-up path of the
probe, pm_runtime_suspend() is called, invoking its callback, which
expects all the channel base addresses to be initialised.
Although this could be fixed by simply checking for a NULL address, on
further review of the driver it seems more appropriate that we only call
pm_runtime_get_sync() after all the channel interrupts and base
addresses have been configured. Therefore, fix this crash by moving the
calls to pm_runtime_enable(), pm_runtime_get_sync() and
tegra_adma_init() after the DMA channels have been initialised.
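In outline, the reordered probe looks like the following sketch (error
handling trimmed to the essentials; helper and macro names follow the commit
text but are otherwise assumptions):
static int tegra_adma_probe(struct platform_device *pdev)
{
	struct tegra_adma *tdma;
	int i, ret;

	/* allocate 'tdma', map the register space, get clocks, etc. */

	/* set up every channel's base address and interrupt first */
	for (i = 0; i < tdma->nr_channels; i++) {
		struct tegra_adma_chan *tdc = &tdma->channels[i];

		tdc->chan_addr = tdma->base_addr + ADMA_CH_REG_OFFSET(i);
		tdc->irq = of_irq_get(pdev->dev.of_node, i);
		if (tdc->irq <= 0)
			return tdc->irq ?: -ENXIO;	/* may be -EPROBE_DEFER */
	}

	/* only now is it safe for the runtime-PM callbacks to run */
	pm_runtime_enable(&pdev->dev);

	ret = pm_runtime_get_sync(&pdev->dev);
	if (ret < 0)
		goto rpm_disable;

	ret = tegra_adma_init(tdma);
	if (ret)
		goto rpm_put;

	/* register the dmaengine device, request the per-channel IRQs */
	return 0;

rpm_put:
	pm_runtime_put_sync(&pdev->dev);
rpm_disable:
	pm_runtime_disable(&pdev->dev);
	return ret;
}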
Fixes: f33e7bb3eb ("dmaengine: tegra210-adma: restore channel status")
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The mtk_cqdma_poll_engine_done() function takes a true/false parameter
where true means it is called from atomic context. There are a couple of
places where it was set to false even though the call is made in atomic
context, so it should be true.
All the callers of mtk_cqdma_hard_reset() hold a spin_lock, and
mtk_cqdma_free_chan_resources() takes a spin_lock before calling
mtk_cqdma_poll_engine_done().
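For illustration only (the driver's exact argument order may differ), the
affected calls end up along these lines:
	/* mtk_cqdma_free_chan_resources() takes the lock before polling,
	 * so the poll must run in atomic (non-sleeping) mode */
	spin_lock_irqsave(&cvc->pc->lock, flags);
	mtk_cqdma_poll_engine_done(cvc->pc, true);	/* was 'false' */
	spin_unlock_irqrestore(&cvc->pc->lock, flags);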
Fixes: b1f01e48df ("dmaengine: mediatek: Add MediaTek Command-Queue DMA controller for MT6765 SoC")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the unlikely event that axi_desc_get() returns a NULL desc in the
very first iteration of the while loop, the error exit path ends up
calling axi_desc_put() on the NULL pointer 'first', which causes a NULL
pointer dereference. Fix this by adding a NULL check on pointer 'first'
before calling axi_desc_put().
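The fix amounts to guarding the error path, along these lines:
err_desc_get:
	/* 'first' is still NULL if the very first axi_desc_get() failed */
	if (first)
		axi_desc_put(first);
	return NULL;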
Addresses-Coverity: ("Explicit null dereference")
Fixes: 1fe20f1b84 ("dmaengine: Introduce DW AXI DMAC driver")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When an error occurs, we should clear the error register and then return.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a multi-descriptor DMA transfer is in progress, the "IRQ pending"
flag will apparently be set for that channel as soon as the last
descriptor loads, way before the IRQ actually happens. This behaviour
has been observed on the JZ4725B, but maybe other SoCs are affected.
In the case where another DMA transfer is running to completion on a
separate channel, the IRQ handler would then run the completion handler
for our previous channel even though the transfer didn't actually finish.
Fix this by checking in the completion handler that we are indeed done;
if not, the interrupted DMA transfer is simply resumed.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- Updates to stm32 dma residue calculations
- Interleave dma capability to axi-dmac and support for ZynqMP arch
- Rework of channel assignment for rcar dma
- Debugfs for pl330 driver
- Support for Tegra186/Tegra194, refactoring for new chips and support
for pause/resume
- Updates to axi-dmac, bcm2835, fsl-edma, idma64, imx-sdma, rcar-dmac,
stm32-dma etc
- dev_get_drvdata() updates on few drivers
* tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (34 commits)
dmaengine: tegra210-adma: restore channel status
dmaengine: tegra210-dma: free dma controller in remove()
dmaengine: tegra210-adma: add pause/resume support
dmaengine: tegra210-adma: add support for Tegra186/Tegra194
Documentation: DT: Add compatibility binding for Tegra186
dmaengine: tegra210-adma: prepare for supporting newer Tegra chips
dmaengine: at_xdmac: remove a stray bottom half unlock
dmaengine: fsl-edma: Adjust indentation
dmaengine: fsl-edma: Fix typo in Vybrid name
dmaengine: stm32-dma: fix residue calculation in stm32-dma
dmaengine: nbpfaxi: Use dev_get_drvdata()
dmaengine: bcm-sba-raid: Use dev_get_drvdata()
dmaengine: stm32-dma: Fix unsigned variable compared with zero
dmaengine: stm32-dma: use platform_get_irq()
dmaengine: rcar-dmac: Update copyright information
dmaengine: imx-sdma: Only check ratio on parts that support 1:1
dmaengine: xgene-dma: fix spelling mistake "descripto" -> "descriptor"
dmaengine: idma64: Move driver name to the header
dmaengine: bcm2835: Drop duplicate capability setting.
dmaengine: pl330: _stop: clear interrupt status
...
Merge tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull mmiowb removal from Will Deacon:
"Remove Mysterious Macro Intended to Obscure Weird Behaviours (mmiowb())
Remove mmiowb() from the kernel memory barrier API and instead, for
architectures that need it, hide the barrier inside spin_unlock() when
MMIO has been performed inside the critical section.
The only relatively recent changes have been addressing review
comments on the documentation, which is in a much better shape thanks
to the efforts of Ben and Ingo.
I was initially planning to split this into two pull requests so that
you could run the coccinelle script yourself, however it's been plain
sailing in linux-next so I've just included the whole lot here to keep
things simple"
* tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (23 commits)
docs/memory-barriers.txt: Update I/O section to be clearer about CPU vs thread
docs/memory-barriers.txt: Fix style, spacing and grammar in I/O section
arch: Remove dummy mmiowb() definitions from arch code
net/ethernet/silan/sc92031: Remove stale comment about mmiowb()
i40iw: Redefine i40iw_mmiowb() to do nothing
scsi/qla1280: Remove stale comment about mmiowb()
drivers: Remove explicit invocations of mmiowb()
drivers: Remove useless trailing comments from mmiowb() invocations
Documentation: Kill all references to mmiowb()
riscv/mmiowb: Hook up mmwiob() implementation to asm-generic code
powerpc/mmiowb: Hook up mmwiob() implementation to asm-generic code
ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
mips/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
sh/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
m68k/io: Remove useless definition of mmiowb()
nds32/io: Remove useless definition of mmiowb()
x86/io: Remove useless definition of mmiowb()
arm64/io: Remove useless definition of mmiowb()
ARM/io: Remove useless definition of mmiowb()
mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors
...
The status of the ADMA channel registers is not saved and restored during
system suspend. If the system enters suspend during active playback, this
results in a wrong state of the channel registers on system resume, and
playback fails to resume properly. Fix this by saving the following channel
registers during runtime suspend and restoring them during runtime resume.
* ADMA_CH_LOWER_SRC_ADDR
* ADMA_CH_LOWER_TRG_ADDR
* ADMA_CH_FIFO_CTRL
* ADMA_CH_CONFIG
* ADMA_CH_CTRL
* ADMA_CH_CMD
* ADMA_CH_TC
Runtime PM calls will be invoked in the system resume path if a playback
or capture needs to be resumed, hence the above changes also work for the
system suspend case.
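The save side of this, inside the runtime-suspend callback, is essentially a
per-channel snapshot of those registers; a sketch (the ch_regs field and the
tdma_ch_read() accessor follow the driver's conventions, but treat them as
assumptions):
	for (i = 0; i < tdma->nr_channels; i++) {
		struct tegra_adma_chan *tdc = &tdma->channels[i];
		struct tegra_adma_chan_regs *ch_reg = &tdc->ch_regs;

		/* snapshot the channel state before the ADMA is powered down */
		ch_reg->cmd       = tdma_ch_read(tdc, ADMA_CH_CMD);
		ch_reg->tc        = tdma_ch_read(tdc, ADMA_CH_TC);
		ch_reg->src_addr  = tdma_ch_read(tdc, ADMA_CH_LOWER_SRC_ADDR);
		ch_reg->trg_addr  = tdma_ch_read(tdc, ADMA_CH_LOWER_TRG_ADDR);
		ch_reg->fifo_ctrl = tdma_ch_read(tdc, ADMA_CH_FIFO_CTRL);
		ch_reg->config    = tdma_ch_read(tdc, ADMA_CH_CONFIG);
		ch_reg->ctrl      = tdma_ch_read(tdc, ADMA_CH_CTRL);
	}
The runtime-resume callback writes the same set back to the hardware.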
Fixes: f46b195799 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
During an audio playback session it is observed that audio goes off after
a few seconds of continuous pause and play. No audio is heard even when
playback is resumed.
The reason is that the ADMA driver currently does not handle DMA_PAUSE/
DMA_RESUME, and the relevant callbacks for the dma_device are not
implemented. This patch implements the device_pause and device_resume
callbacks for the device. During pause the TRANSFER_PAUSE bit of the DMA
channel control register is set, and the same bit is cleared during resume.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add Tegra186-specific macro defines and a chip_data structure for
chip-specific information. A new compatible string is added to select the
relevant chip details. There is no major change for Tegra194, hence it can
use the same chip data.
The bits in the BURST_SIZE field of the ADMA CH_CONFIG register are
encoded differently on Tegra186 and Tegra194 compared with Tegra210.
On Tegra210 the bits are encoded as follows ...
1 = WORD_1
2 = WORDS_2
3 = WORDS_4
4 = WORDS_8
5 = WORDS_16
Whereas on Tegra186 and Tegra194 the bits are encoded as ...
0 = WORD_1
1 = WORDS_2
2 = WORDS_3
3 = WORDS_4
4 = WORDS_5
...
15 = WORDS_16
Add helper functions for generating the correct burst size.
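A sketch of what such helpers can look like: Tegra210 keeps the power-of-two
encoding, while for Tegra186/Tegra194 the value is simply the burst size
minus one (the macro for the maximum burst size is an assumption):
/* Tegra210: power-of-two bursts, encoded as 1..5 */
static unsigned int tegra210_adma_get_burst_config(unsigned int burst_size)
{
	if (!burst_size || burst_size > ADMA_CH_CONFIG_MAX_BURST_SIZE)
		burst_size = ADMA_CH_CONFIG_MAX_BURST_SIZE;

	return fls(burst_size);		/* 1->1, 2->2, 4->3, 8->4, 16->5 */
}

/* Tegra186/Tegra194: linear encoding, WORDS_N is encoded as N - 1 */
static unsigned int tegra186_adma_get_burst_config(unsigned int burst_size)
{
	if (!burst_size || burst_size > ADMA_CH_CONFIG_MAX_BURST_SIZE)
		burst_size = ADMA_CH_CONFIG_MAX_BURST_SIZE;

	return burst_size - 1;		/* 1->0, 2->1, ..., 16->15 */
}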
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is a preparatory patch to add support for the Tegra186 and Tegra194 chips.
The following changes are necessary to make the driver code generic:
* The chip_data structure is enhanced to hold chip-specific details; the
  following fields are added to the structure:
  * Offset addresses for the ADMA global and channel registers
  * Offset values for Tx and Rx channel selection
  * Maximum supported Tx and Rx channels
  * Tx and Rx channel request mask
  * ADMA channel register space size
* Make use of the above chip_data to generalise the driver code.
Support for Tegra186 and Tegra194 will be added in subsequent patches of
the series.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We switched this code from spin_lock_bh() to vanilla spin_lock() but
there was one stray spin_unlock_bh() that was overlooked. This
patch converts it to spin_unlock() as well.
Fixes: d8570d018f ("dmaengine: at_xdmac: move spin_lock_bh to spin_lock in tasklet")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix indentation and remove unneeded space after 'return' keyword. This
fixes checkpatch warning:
WARNING: Statements should start on a tabstop
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In double buffer mode, during residue calculation, the DMA can
automatically switch to the next transfer. Indeed, the CT bit that gives
the position in the double buffer may have been updated by the hardware
during the calculation. In this case the SxNDTR register value cannot be
trusted.
If a transition is detected, we consider that the DMA has switched to
the beginning of the next sg.
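Conceptually the residue path becomes 'read CT, read SxNDTR, re-read CT', as
in this rough sketch (register and field names follow the STM32 DMA naming,
but the real code differs in detail):
	u32 ct_before, ct_after, ndtr;

	ct_before = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)) &
		    STM32_DMA_SCR_CT;
	ndtr = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));
	ct_after = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)) &
		   STM32_DMA_SCR_CT;

	if (ct_before != ct_after) {
		/*
		 * The hardware switched buffers while we were reading:
		 * SxNDTR cannot be trusted, assume the DMA is at the
		 * beginning of the next sg.
		 */
		next_sg = (next_sg + 1) % desc->num_sgs;
		ndtr = desc->sg_req[next_sg].len;
	}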
Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()") used
the unsigned variable 'irq' to store the result and later checked it for
negative errors, so update the code to use a signed variable for this.
Fixes: f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_resource(pdev, IORESOURCE_IRQ) is not recommended for
requesting IRQ resources, as they may not be ready yet. Using
platform_get_irq() instead is preferred for getting an IRQ even if it was
not retrieved earlier.
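For reference, the preferred pattern in the per-channel probe loop looks like
this fragment (the handler name is illustrative); note that the result must
go into a signed int, since platform_get_irq() returns a negative errno on
failure:
	int irq;

	irq = platform_get_irq(pdev, i);
	if (irq < 0) {
		if (irq != -EPROBE_DEFER)
			dev_err(&pdev->dev, "no IRQ resource for channel %d\n", i);
		return irq;
	}

	ret = devm_request_irq(&pdev->dev, irq, stm32_dma_chan_irq, 0,
			       dev_name(&pdev->dev), chan);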
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On the imx8mq B0 chip, an AHB/SDMA clock ratio of 2:1 can't be supported:
since the SDMA clock has to be increased to 250 MHz, AHB can't reach
500 MHz, so use a 1:1 ratio instead.
To limit this change to the imx8mq for now, this patch also adds an
imx8mq-sdma compatible string.
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Acked-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a spelling mistake in a chan_dbg message, fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Two drivers rely on matching the iDMA 64-bit driver name. Instead of
duplicating the string in both of them, dedicate a header file and share
it between the users.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch instructs the DMAC to immediately terminate execution of a
thread, then clears the interrupt status and, finally, stops generating
interrupts for DMA_SEV, to guarantee that the next DMA start is clean.
Otherwise, an interrupt may be left over for the next start and cause
misbehaviour.
We can reproduce the problem as follows:
DMASEV modifies the event-interrupt resource; if INTEN configures the
event as an interrupt, the DMAC sets irq<event_num> HIGH to generate the
interrupt, and writing INTCLR clears the interrupt.
DMA EXECUTING INSTRUCTS DMA TERMINATE
| |
| |
... _stop
| |
| spin_lock_irqsave
DMASEV |
| |
| mask INTEN
| |
| DMAKILL
| |
| spin_unlock_irqrestore
In the above case, an interrupt is left pending, and if we unmask INTEN,
the DMAC will set irq<event_num> HIGH and generate a spurious interrupt.
To fix this, do as follows:
DMA EXECUTING INSTRUCTS DMA TERMINATE
| |
| |
... _stop
| |
| spin_lock_irqsave
DMASEV |
| |
| DMAKILL
| |
| clear INTCLR
| mask INTEN
| |
| spin_unlock_irqrestore
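A sketch of the corrected _stop() ordering in code form (names follow the
pl330 driver's conventions, but this is illustrative rather than the exact
diff):
static void _stop(struct pl330_thread *thrd)
{
	void __iomem *regs = thrd->dmac->base;
	u8 insn[6] = {0, 0, 0, 0, 0, 0};
	u32 inten = readl(regs + INTEN);

	/* 1. DMAKILL: make the DMAC terminate the thread immediately */
	_emit_KILL(0, insn);
	_execute_DBGINSN(thrd, insn, is_manager(thrd));

	/* 2. clear the interrupt the thread may already have raised */
	if (inten & (1 << thrd->ev))
		writel(1 << thrd->ev, regs + INTCLR);

	/* 3. only then stop generating interrupts for DMA_SEV */
	writel(inten & ~(1 << thrd->ev), regs + INTEN);
}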
Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since device_prep_interleaved_dma() is already implemented, the
DMA_INTERLEAVE capability should be set.
Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In 2D transfers (for the AXI DMAC), the number of frames (numf) represents
Y_LENGTH, and the length of a frame is X_LENGTH. 2D transfers are useful
for video transfers where screen resolutions ( X * Y ) are typically
aligned for X, but not for Y.
There is no requirement for Y_LENGTH to be aligned to the bus-width (or
anything), and this is also true for AXI DMAC.
Checking the Y_LENGTH for alignment causes false errors when initiating DMA
transfers. This change fixes this by checking only that the Y_LENGTH is
non-zero.
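The check effectively relaxes from 'aligned' to 'non-zero' for the frame
count, roughly:
	if (chan->hw_2d) {
		/* the X length must still respect the alignment rules */
		if (!axi_dmac_check_len(chan, xt->sgl[0].size))
			return NULL;
		/* but Y_LENGTH (numf) only has to be non-zero */
		if (xt->numf == 0)
			return NULL;
	}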
Fixes: 0e3b67b348 ("dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller")
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Some synthesis time configuration parameters of the DMA controller can be
inferred from the hardware itself.
Use this information as it is more reliable than the information specified
in the devicetree, which might be outdated if the HDL project has changed.
Deprecate the devicetree properties that can be inferred from the hardware
itself.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The tx_status poll in the rcar_dmac driver reads the status register
which indicates which chunk is busy (DMACHCRB). Afterwards the position
inside the chunk is read from DMATCRB. It is possible that the chunk
has changed between the two reads. The result is a non-monotonic
increase of the residue. Fix this by introducing a 'safe read' logic.
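The 'safe read' boils down to re-reading the chunk index until it is stable
around the DMATCRB read, as in this sketch (register names follow the driver;
the retry bound is an assumption):
	u32 chcrb, tcrb;
	int i;

	for (i = 0; i < 3; i++) {
		chcrb = rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
			RCAR_DMACHCRB_DPTR_MASK;
		tcrb = rcar_dmac_chan_read(chan, RCAR_DMATCRB);
		/* accept the pair only if the busy chunk did not change */
		if (chcrb == (rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
			      RCAR_DMACHCRB_DPTR_MASK))
			break;
	}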
Fixes: 73a47bd0da ("dmaengine: rcar-dmac: use TCRB instead of TCR for residue")
Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With a cyclic DMA, a residue of 0 is not an indication of a completed
DMA. In the cyclic case, make sure that dma_set_residue() is called, so
that a residue of 0 is forwarded correctly to the caller.
Fixes: 3544d28788 ("dmaengine: rcar-dmac: use result of updated get_residue in tx_status")
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Signed-off-by: Yao Lihua <ylhuajnu@outlook.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The commit af19b7ce76 ("mmc: bcm2835: Avoid possible races on
data requests") introduces a possible circular locking dependency,
which is triggered by swapping to the sdhost interface.
So instead of reintroducing the race condition, we can avoid this
situation by using GFP_NOWAIT for the allocation of the DMA buffer
descriptors.
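Concretely, that means allocating the control-block descriptors with
GFP_NOWAIT so the allocation can never sleep or recurse into reclaim; a
sketch (field names are taken from the driver's control-block handling but
treat them as illustrative):
	/* never sleep here: a sleeping allocation is what allowed reclaim
	 * (swapping via the sdhost interface) to create the lock dependency */
	cb_entry->cb = dma_pool_zalloc(c->cb_pool, GFP_NOWAIT,
				       &cb_entry->paddr);
	if (!cb_entry->cb)
		goto error_cb;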
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Fixes: af19b7ce76 ("mmc: bcm2835: Avoid possible races on data requests")
Link: http://lists.infradead.org/pipermail/linux-rpi-kernel/2019-March/008615.html
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The overflow error flag (ROI: Request Overflow Error) is only relevant
when the channel handles a peripheral-synchronized transfer, not in the
memory-to-memory case where there is no hardware request signal.
Remove the use of this interrupt source in that case. The decision is
based on the first descriptor, which holds the configuration for the
whole linked-list transfer.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Complement the identification of errors with stopping the channel and
dumping the descriptor that led to the error case.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Even if this case shouldn't happen when the controller is properly
programmed, it's still better to avoid dumping a kernel Oops for it.
As the sequence may happen only for debugging purposes, log the error and
just finish the tasklet call.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
mmiowb() is now implied by spin_unlock() on architectures that require
it, so there is no reason to call it from driver code. This patch was
generated using coccinelle:
@mmiowb@
@@
- mmiowb();
and invoked as:
$ for d in drivers include/linux/qed sound; do \
spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done
NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
spin_unlock(). However, pairing each mmiowb() removal in this patch with
the corresponding call to spin_unlock() is not at all trivial, so there
is a small chance that this change may regress any drivers incorrectly
relying on mmiowb() to order MMIO writes between CPUs using lock-free
synchronisation. If you've ended up bisecting to this commit, you can
reintroduce the mmiowb() calls using wmb() instead, which should restore
the old behaviour on all architectures other than some esoteric ia64
systems.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This reverts commit 906b40b246 ("dmaengine: stm32-mdma: Add a check on
read_u32_array").
As stated by the bindings, "st,ahb-addr-masks" is optional. The check
inserted by that commit makes this property mandatory and prevents the
MDMA from being probed when the property is not present.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXI DMAC is also used on the Xilinx ZynqMP architecture. This change
allows the driver to be enabled and used on it as well.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current way we find a waiting virtual channel for the next transfer
when a physical channel becomes free is not really fair.
More in detail, when more than one channel is waiting at a time, by just
going through the arrays of memcpy and slave channels and stopping as soon
as a state matches the waiting state, channels with high indexes can be
penalized.
Whenever the DMA engine is substantially overloaded, so that we constantly
get several channels waiting, the channels with the highest indexes might
not be served for a substantial time, which in the worst case might hang a
task that waits for a DMA transfer to complete.
This patch makes physical channel re-assignment fairer by storing the time
in jiffies when a channel is put into the waiting state. Whenever a
physical channel has to be re-assigned, this time is used to select the
channel that has been waiting the longest.
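A minimal sketch of the idea, assuming a 'waiting_at' jiffies field on each
virtual channel (names are illustrative):
	struct pl08x_dma_chan *p, *next = NULL;
	unsigned long waiting_at = jiffies;

	/* when no physical channel is free, mark the requester as waiting */
	plchan->state = PL08X_CHAN_WAITING;
	plchan->waiting_at = jiffies;

	/* later, when a physical channel is released, pick the virtual
	 * channel that has been waiting the longest */
	list_for_each_entry(p, &pl08x->memcpy.channels, vc.chan.device_node) {
		if (p->state == PL08X_CHAN_WAITING &&
		    time_before(p->waiting_at, waiting_at)) {
			waiting_at = p->waiting_at;
			next = p;
		}
	}
	/* (the slave channel list is scanned the same way) */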
Signed-off-by: Jean-Nicolas Graux <jean-nicolas.graux@st.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Guion <nicolas.guion@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The axi-dmac driver currently rejects transfers with segments that are
larger than what the hardware can handle.
Re-work the driver so that such large segments are instead split into
multiple segments, each smaller than or equal to the maximum segment size.
This allows the driver to handle transfers with segments of arbitrary size.
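The splitting itself is just a division into pieces no larger than the
hardware limit, conceptually (names are illustrative):
	unsigned int num_segments, segment_size, processed;

	/* number of hardware segments needed to cover 'len' bytes */
	num_segments = DIV_ROUND_UP(len, chan->max_length);
	/* spread the length evenly, then round up to the bus alignment */
	segment_size = DIV_ROUND_UP(len, num_segments);
	segment_size = ALIGN(segment_size, chan->align_mask + 1);

	for (processed = 0; processed < len; processed += segment_size) {
		unsigned int this_len = min_t(unsigned int, segment_size,
					      len - processed);

		/* program one hardware descriptor covering 'this_len' bytes
		 * starting at offset 'processed' of the original segment */
	}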
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Bogdan Togorean <bogdan.togorean@analog.com>
Signed-off-by: Alexandru Ardelean <alex.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds a debugfs interface to show the relationship between
DMA threads (the hardware resources used for transferring data) and the
DMA channel IDs of the DMA slaves.
Typically, the PL330 has more slaves than DMA threads, so sometimes the
PL330 cannot allocate DMA threads for all slaves even if a user specifies
a DMA channel ID in the devicetree. This interface is useful for checking
whether DMA threads have been allocated or not.
Below is an output sample:
$ sudo cat /sys/kernel/debug/ff1f0000.dmac
PL330 physical channels:
THREAD: CHANNEL:
-------- -----
0 8
1 9
2 11
3 12
4 14
5 15
6 10
7 --
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If the driver is active until late suspend, where runtime PM cannot run,
a forced suspend is essential to put the device into a low-power state.
Thus pm_runtime_force_suspend() and pm_runtime_force_resume() are used as
system sleep callbacks during system-wide PM transitions.
Late system sleep callbacks are used to ensure, for instance, that the
sound core has suspended any ongoing activity, including stopping the
ADMA if active, before we attempt to suspend the ADMA.
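In dev_pm_ops terms this boils down to the following (a sketch of the
pattern):
static const struct dev_pm_ops tegra_adma_dev_pm_ops = {
	SET_RUNTIME_PM_OPS(tegra_adma_runtime_suspend,
			   tegra_adma_runtime_resume, NULL)
	/* run late, after the sound core has stopped any on-going activity */
	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				     pm_runtime_force_resume)
};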
Suggested-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The ADMA driver is using the pm_clk_*() interface for managing clock
resources. With this it is observed that the clocks always remain ON.
This happens on Tegra devices that use the BPMP co-processor to manage
clock resources, where clocks are enabled during the prepare phase. This
is necessary because clock requests to the BPMP are always blocking. When
the pm_clk_*() interface is used on such Tegra devices, the clock prepare
count is not balanced until the driver's remove call, and hence the clocks
are always seen ON. Thus this patch replaces pm_clk_*() with the
devm_clk_*() framework.
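The replacement follows the standard devm clock pattern, sketched below with
only the clock handling shown (the clock name and field are assumptions):
/* in probe: look up the clock once, lifetime managed by devres
 *   tdma->ahub_clk = devm_clk_get(&pdev->dev, "d_audio");
 */

static int tegra_adma_runtime_resume(struct device *dev)
{
	struct tegra_adma *tdma = dev_get_drvdata(dev);

	/* prepare/enable here, so the prepare count is balanced ... */
	return clk_prepare_enable(tdma->ahub_clk);
}

static int tegra_adma_runtime_suspend(struct device *dev)
{
	struct tegra_adma *tdma = dev_get_drvdata(dev);

	/* ... by the matching disable/unprepare on runtime suspend */
	clk_disable_unprepare(tdma->ahub_clk);
	return 0;
}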
Suggested-by: Mohan Kumar D <mkumard@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Intel IOMMU, when enabled, tries to find the domain of the device,
assuming it's a PCI one, during DMA operations such as mapping or
unmapping. Since we are splitting the actual PCI device into a couple of
children via the MFD framework (see drivers/mfd/intel-lpss.c for details),
the DMA device appears to be a platform one, and thus not the actual
device that performs DMA. In such a situation the IOMMU can't find or
allocate a proper domain for its operations. As a result, all DMA
operations fail.
In order to fix this, supply the parent of the platform device to the DMA
engine framework and fix the filter functions accordingly.
We may rely on the fact that the parent is a real PCI device, because no
other configuration is present in the wild.
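In essence the change is to hand the parent (the real PCI device) to the
dmaengine core so the IOMMU can resolve a proper domain, roughly (a hedged
sketch; the driver keeps this in its own field):
	struct device *dev = &pdev->dev;
	/* the MFD child does not do DMA itself; its parent is the PCI
	 * device that actually masters the bus */
	struct device *sysdev = dev->parent ?: dev;

	idma64->dma.dev = sysdev;
	/* client filter functions then match against chan->device->dev */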
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [for tty parts]
Signed-off-by: Vinod Koul <vkoul@kernel.org>