Fix indentation and remove an unneeded space after the 'return' keyword.
This fixes the checkpatch warning:
WARNING: Statements should start on a tabstop
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In double buffer mode, during residue calculation, the DMA can
automatically switch to the next transfer: the CT bit that gives the
position in the double buffer may have been updated by the hardware
during the calculation.
In this case the SxNDTR register value cannot be trusted.
If such a transition is detected, we consider that the DMA has switched
to the beginning of the next sg.
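A minimal sketch of that check (read CT, read SxNDTR, re-read CT); the
context struct and helpers used here are illustrative, not the driver's
actual API:

  static u32 stm32_dma_residue_sketch(struct chan_ctx *chan)
  {
          u32 ct_before, ct_after, ndtr;

          ct_before = read_ct(chan);      /* current half of the double buffer */
          ndtr = read_ndtr(chan);         /* remaining data units (SxNDTR) */
          ct_after = read_ct(chan);

          /*
           * CT changed while we were reading: the hardware switched
           * buffers mid-calculation, so SxNDTR cannot be trusted and we
           * assume the DMA is at the beginning of the next sg.
           */
          if (ct_before != ct_after)
                  return next_sg_length(chan);

          return ndtr;
  }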
Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()") used
an unsigned variable irq to store the result and later checked it for
negative errors, so update the code to use a signed variable.
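Sketch of the corrected pattern (the IRQ index is just an example):
platform_get_irq() returns a negative errno on failure, so the result
must be stored in a signed variable.

  int irq;        /* signed: platform_get_irq() returns -errno on failure */

  irq = platform_get_irq(pdev, 0);
  if (irq < 0)
          return irq;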
Fixes: f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_resource(pdev, IORESOURCE_IRQ) is not recommended for
requesting IRQ resources, as they may not be ready yet. Using
platform_get_irq() instead is preferred for getting an IRQ even if it
was not retrieved earlier.
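A before/after sketch of the change described above (index 0 is only an
example):

  /* Before (discouraged): the IRQ resource may not be set up yet */
  res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);

  /* After (preferred): resolves the IRQ even if not retrieved earlier */
  irq = platform_get_irq(pdev, 0);
  if (irq < 0)
          return irq;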
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On the imx8mq B0 chip, an AHB/SDMA clock ratio of 2:1 can't be
supported: since the SDMA clock has to be increased to 250 MHz, AHB
can't reach 500 MHz, so use a 1:1 ratio instead.
To limit this change to the imx8mq for now, this patch also adds an
imx8mq-sdma compatible string.
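A minimal sketch of the ratio check, assuming the driver keeps clk
handles for the AHB and SDMA core clocks (the field names and the
is_imx8mq flag below are assumptions):

  /* true when AHB and SDMA core run at the same rate, i.e. ratio 1:1 */
  bool ratio_1_1 = clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg);

  if (is_imx8mq && !ratio_1_1)
          dev_warn(sdma->dev, "unsupported AHB/SDMA clock ratio\n");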
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Acked-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a spelling mistake in a chan_dbg message, fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are two drivers that rely on the iDMA 64-bit driver name to
match. Instead of duplicating the string in both of them, dedicate a
header file to it and share it between the users.
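A sketch of what such a shared header could look like; the path and
macro name here are assumptions for illustration only:

  /* include/linux/dma/idma64.h (illustrative) */
  #ifndef _LINUX_DMA_IDMA64_H
  #define _LINUX_DMA_IDMA64_H

  /* single definition of the driver/device name shared by both users */
  #define IDMA64_DRV_NAME         "idma64"

  #endif /* _LINUX_DMA_IDMA64_H */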
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch issues DMAKILL to instruct the DMAC to immediately terminate
execution of a thread, then clears the interrupt status and finally
stops generating interrupts for DMA_SEV, to guarantee that the next DMA
start is clean. Otherwise, an interrupt may be left over and hit the
next start, causing spurious behaviour.
We can reproduce the problem as follows:
DMASEV: modifies the event-interrupt resource; if INTEN configures it to
function as an interrupt, the DMAC sets irq<event_num> HIGH to generate
an interrupt. Writing INTCLR clears the interrupt.
  DMA EXECUTING INSTRUCTS          DMA TERMINATE
             |                           |
             |                           |
            ...                        _stop
             |                           |
             |                   spin_lock_irqsave
          DMASEV                         |
             |                           |
             |                      mask INTEN
             |                           |
             |                        DMAKILL
             |                           |
             |                  spin_unlock_irqrestore
In the above case, an interrupt is left pending, and if we unmask
INTEN, the DMAC will set irq<event_num> HIGH and generate a spurious
interrupt.
To fix this, do as follows:
  DMA EXECUTING INSTRUCTS          DMA TERMINATE
             |                           |
             |                           |
            ...                        _stop
             |                           |
             |                   spin_lock_irqsave
          DMASEV                         |
             |                           |
             |                        DMAKILL
             |                           |
             |                      clear INTCLR
             |                      mask INTEN
             |                           |
             |                  spin_unlock_irqrestore
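A hedged C sketch of the corrected ordering in the terminate path; the
register offsets and helper names are placeholders, not the driver's
exact code:

  spin_lock_irqsave(&pl330->lock, flags);

  emit_dmakill(thrd);                           /* stop the executing thread   */
  writel(BIT(thrd->ev), regs + INTCLR);         /* clear any pending event irq */
  writel(readl(regs + INTEN) & ~BIT(thrd->ev),  /* then mask the event source  */
         regs + INTEN);

  spin_unlock_irqrestore(&pl330->lock, flags);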
Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since device_prep_interleaved_dma() is already implemented, the
DMA_INTERLEAVE capability should be set.
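The capability bit being added, sketched (dma_dev stands for the
driver's struct dma_device instance):

  dma_cap_set(DMA_INTERLEAVE, dma_dev->cap_mask);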
Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In 2D transfers (for the AXI DMAC), the number of frames (numf) represents
Y_LENGTH, and the length of a frame is X_LENGTH. 2D transfers are useful
for video transfers where screen resolutions (X * Y) are typically
aligned for X, but not for Y.
There is no requirement for Y_LENGTH to be aligned to the bus width (or
anything else), and this is also true for the AXI DMAC.
Checking Y_LENGTH for alignment causes false errors when initiating DMA
transfers. Fix this by checking only that Y_LENGTH is non-zero.
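A hedged sketch of the relaxed validation in the interleaved prep
callback; the alignment helper name is an assumption:

  /* X_LENGTH keeps the alignment/size check ... */
  if (!axi_dmac_check_len(chan, xt->sgl[0].size))
          return NULL;

  /* ... while Y_LENGTH only needs to be non-zero */
  if (xt->numf == 0)
          return NULL;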
Fixes: 0e3b67b348 ("dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller")
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Some synthesis time configuration parameters of the DMA controller can be
inferred from the hardware itself.
Use this information as it is more reliable than the information
specified in the devicetree, which might be outdated if the HDL project
has changed.
Deprecate the devicetree properties that can be inferred from the hardware
itself.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The overflow error flag (ROI: Request Overflow Error) is only relevant
when the channel handles a peripheral-synchronized transfer, not in the
memory-to-memory case where there is no hardware request signal.
Remove the use of this interrupt source in that case. The decision is
based on the first descriptor, which holds the configuration for the
whole linked-list transfer.
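A hedged sketch of the idea: build the channel's interrupt-enable mask
from the first descriptor and include the overflow source only for
peripheral-synchronized transfers. The macro and helper names here are
placeholders:

  u32 irq_mask = CIE_BLOCK_END | CIE_READ_ERR | CIE_WRITE_ERR;

  if (desc_uses_hw_request(first))          /* peripheral-synchronized? */
          irq_mask |= CIE_REQ_OVERFLOW_ERR; /* ROI only meaningful here */

  writel(irq_mask, chan_base + CH_INT_ENABLE);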
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Complement the identification of errors with stopping the channel and
dumping the descriptor that led to the error case.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Even if this case shouldn't happen when the controller is properly
programmed, it's still better to avoid dumping a kernel Oops for it.
As the sequence may happen only for debugging purposes, log the error and
just finish the tasklet call.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXI DMAC driver is currently also supported on the Xilinx ZynqMP
architecture. This change allows the driver to be enabled and used on it
as well.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current way of finding a waiting virtual channel for the next
transfer, at the time one physical channel becomes free, is not really
fair.
In more detail, in case there is more than one channel waiting at a
time, by just going through the arrays of memcpy and slave channels and
stopping as soon as the state matches the waiting state, channels with
high indexes can be penalized.
Whenever the DMA engine is substantially overloaded so that we
constantly get several channels waiting, channels with the highest
indexes might not be served for a substantial time, which in the worst
case might hang tasks that wait for a DMA transfer to complete.
This patch makes physical channel re-assignment fairer by storing the
time in jiffies when a channel is put in the waiting state. Whenever a
physical channel has to be re-assigned, this time is used to select the
channel that has been waiting the longest.
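A hedged sketch of the scheme with illustrative field names: record a
jiffies timestamp when a channel enters the waiting state, then pick the
oldest waiter when a physical channel is released.

  /* when a virtual channel starts waiting for a physical channel */
  vchan->state = WAITING;
  vchan->waiting_at = jiffies;

  /* when a physical channel is released: pick the longest waiter */
  struct virt_chan *next = NULL, *v;

  list_for_each_entry(v, &waiting_channels, node)
          if (v->state == WAITING &&
              (!next || time_before(v->waiting_at, next->waiting_at)))
                  next = v;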
Signed-off-by: Jean-Nicolas Graux <jean-nicolas.graux@st.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Guion <nicolas.guion@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The axi-dmac driver currently rejects transfers with segments that are
larger than what the hardware can handle.
Rework the driver so that these large segments are split into multiple
segments, where each segment is smaller than or equal to the maximum
segment size.
This allows the driver to handle transfers with segments of arbitrary size.
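A minimal sketch of the splitting loop, assuming an add_hw_segment()
helper that programs one hardware segment:

  while (len) {
          unsigned int seg_len = min_t(unsigned int, len, chan->max_length);

          add_hw_segment(chan, addr, seg_len);    /* illustrative helper */
          addr += seg_len;
          len  -= seg_len;
  }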
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Bogdan Togorean <bogdan.togorean@analog.com>
Signed-off-by: Alexandru Ardelean <alex.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds a debugfs interface to show the relationship between
DMA threads (hardware resources for transferring data) and the DMA
channel IDs of DMA slaves.
Typically, the PL330 has more slaves than DMA threads, so sometimes the
PL330 cannot allocate DMA threads for all slaves even if a user
specifies a DMA channel ID in the devicetree. This interface is useful
for checking whether DMA threads have been allocated or not.
Below is an output sample:
$ sudo cat /sys/kernel/debug/ff1f0000.dmac
PL330 physical channels:
THREAD:     CHANNEL:
--------    -----
0           8
1           9
2           11
3           12
4           14
5           15
6           10
7           --
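A hedged sketch of how such a debugfs file is typically wired up; the
show routine mirrors the table above, and everything beyond the standard
seq_file/debugfs helpers is illustrative:

  static int pl330_debugfs_show(struct seq_file *s, void *data)
  {
          struct pl330_dmac *pl330 = s->private;
          int i;

          seq_puts(s, "PL330 physical channels:\n");
          seq_puts(s, "THREAD:\t\tCHANNEL:\n");
          seq_puts(s, "--------\t-----\n");
          for (i = 0; i < pl330->num_threads; i++)         /* field name assumed */
                  seq_printf(s, "%d\t\t%d\n", i,
                             thread_to_chan_id(pl330, i)); /* assumed helper */

          return 0;
  }
  DEFINE_SHOW_ATTRIBUTE(pl330_debugfs);

  /* registration, typically from probe */
  debugfs_create_file(dev_name(pl330->ddma.dev), 0444, NULL, pl330,
                      &pl330_debugfs_fops);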
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If the driver is active until late suspend, where runtime PM cannot
run, a forced suspend is essential to put the device into a low-power
state. Thus pm_runtime_force_suspend and pm_runtime_force_resume are
used as system sleep callbacks during system-wide PM transitions.
Late system sleep callbacks are used to ensure, for instance, that the
sound core has suspended any ongoing activity, including stopping the
ADMA if active, before we attempt to suspend the ADMA.
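The callback wiring described above, sketched with the standard
dev_pm_ops helpers (the runtime callback names are assumptions here):

  static const struct dev_pm_ops tegra_adma_dev_pm_ops = {
          SET_RUNTIME_PM_OPS(tegra_adma_runtime_suspend,
                             tegra_adma_runtime_resume, NULL)
          SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                       pm_runtime_force_resume)
  };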
Suggested-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The adma driver is using the pm_clk_*() interface for managing clock
resources. With this it is observed that the clocks always remain ON.
This happens on Tegra devices that use the BPMP co-processor to manage
clock resources, where clocks are enabled during the prepare phase. This
is necessary because clock requests to the BPMP are always blocking.
When the pm_clk_*() interface is used on such Tegra devices, the clock
prepare count is not balanced until the driver's remove call, and hence
the clocks are always seen ON. Thus this patch replaces pm_clk_*() with
the devm_clk_*() framework.
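A hedged sketch of the replacement: acquire the clock with
devm_clk_get() at probe time and gate it from the runtime PM callbacks
instead of relying on pm_clk_*(). The clock handle field and con_id are
assumptions.

  /* probe */
  tdma->adma_clk = devm_clk_get(&pdev->dev, NULL);
  if (IS_ERR(tdma->adma_clk))
          return PTR_ERR(tdma->adma_clk);

  /* runtime resume */
  ret = clk_prepare_enable(tdma->adma_clk);

  /* runtime suspend */
  clk_disable_unprepare(tdma->adma_clk);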
Suggested-by: Mohan Kumar D <mkumard@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Intel IOMMU, when enabled, tries to find the domain of a device,
assuming it's a PCI one, during DMA operations such as mapping or
unmapping. Since we split the actual PCI device into a couple of
children via the MFD framework (see drivers/mfd/intel-lpss.c for
details), the DMA device appears to be a platform one, and thus not the
actual device that performs DMA. In such a situation the IOMMU can't
find or allocate a proper domain for its operations. As a result, all
DMA operations fail.
In order to fix this, supply the parent of the platform device to the
DMA engine framework and fix the filter functions accordingly.
We may rely on the fact that the parent is a real PCI device, because no
other configuration is present in the wild.
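A hedged sketch of the two sides of the fix; the structure and function
names are illustrative, not the exact upstream code:

  /* registration side: hand the real (parent) PCI device to dmaengine */
  idma->dma.dev = pdev->dev.parent;

  /* consumer side: the filter compares against that same parent device */
  static bool lpss_dma_filter(struct dma_chan *chan, void *param)
  {
          struct device *dma_dev = param; /* the DMA controller's parent */

          return chan->device->dev == dma_dev;
  }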
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [for tty parts]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- dmatest updates for modularizing common struct and code
- remove SG support for VDMA xilinx IP and updates to driver
- Update to dw driver to support Intel iDMA controllers multi-block
support
- tegra updates for proper reporting of residue
- Add Snow Ridge ioatdma device id and support for IOATDMA v3.4
- struct_size() usage and useless LIST_HEAD cleanups in subsystem.
- qDMA controller driver for Layerscape SoCs
- stm32-dma PM Runtime support
- And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
bcm2835, qcom_hidma etc
* tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (81 commits)
dmaengine: imx-sdma: fix consistent dma test failures
dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
dmaengine: imx-sdma: add clock ratio 1:1 check
dmaengine: dmatest: move test data alloc & free into functions
dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()
dmaengine: dmatest: wrap src & dst data into a struct
dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
dmaengine: ioatdma: Add Snow Ridge ioatdma device id
dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dmaengine: mv_xor: Use correct device for DMA API
Documentation :dmaengine: clarify DMA desc. pointer after submission
Documentation: dmaengine: fix dmatest.rst warning
dmaengine: k3dma: Add support for dma-channel-mask
dmaengine: k3dma: Delete axi_config
dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
Documentation: bindings: dma: Add binding for dma-channel-mask
Documentation: bindings: k3dma: Extend the k3dma driver binding to support hisi-asp
...
Merge tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- Use match_string() instead of reimplementing it (Andy Shevchenko)
- Enable SERR# forwarding for all bridges (Bharat Kumar Gogada)
- Use Latency Tolerance Reporting if already enabled by platform (Bjorn
Helgaas)
- Save/restore LTR info for suspend/resume (Bjorn Helgaas)
- Fix DPC use of uninitialized data (Dongdong Liu)
- Probe bridge window attributes only once at enumeration-time to fix
device accesses during rescan (Bjorn Helgaas)
- Return BAR size (not "size -1 ") from pci_size() to simplify code (Du
Changbin)
- Use config header type (not class code) identify bridges more
reliably (Honghui Zhang)
- Work around Intel Denverton incorrect Trace Hub BAR size reporting
(Alexander Shishkin)
- Reorder pciehp cached state/hardware state updates to avoid missed
interrupts (Mika Westerberg)
- Turn ibmphp semaphores into completions or mutexes (Arnd Bergmann)
- Mark expected switch fall-through (Mathieu Malaterre)
- Use of_node_name_eq() for node name comparisons (Rob Herring)
- Add ACS and pciehp quirks for HXT SD4800 (Shunyong Yang)
- Consolidate Rohm Vendor ID definitions (Andy Shevchenko)
- Use u32 (not __u32) for things not exposed to userspace (Logan
Gunthorpe)
- Fix locking semantics of bus and slot reset interfaces (Alex
Williamson)
- Update PCIEPORTBUS Kconfig help text (Hou Zhiqiang)
- Allow portdrv to claim subtractive decode Ports so PCIe services will
work for them (Honghui Zhang)
- Report PCIe links that become degraded at run-time (Alexandru
Gagniuc)
- Blacklist Gigabyte X299 Root Port power management to fix Thunderbolt
hotplug (Mika Westerberg)
- Revert runtime PM suspend/resume callbacks that broke PME on network
cable plug (Mika Westerberg)
- Disable Data Link State Changed interrupts to prevent wakeup
immediately after suspend (Mika Westerberg)
- Extend altera to support Stratix 10 (Ley Foon Tan)
- Allow building altera driver on ARM64 (Ley Foon Tan)
- Replace Douglas with Tom Joseph as Cadence PCI host/endpoint
maintainer (Lorenzo Pieralisi)
- Add DT support for R-Car RZ/G2E (R8A774C0) (Fabrizio Castro)
- Add dra72x/dra74x/dra76x SoC compatible strings (Kishon Vijay Abraham I)
- Enable x2 mode support for dra72x/dra74x/dra76x SoC (Kishon Vijay
Abraham I)
- Configure dra7xx PHY to PCIe mode (Kishon Vijay Abraham I)
- Simplify dwc (remove unnecessary header includes, name variables
consistently, reduce inverted logic, etc) (Gustavo Pimentel)
- Add i.MX8MQ support (Andrey Smirnov)
- Add message to help debug dwc MSI-X mask bit errors (Gustavo
Pimentel)
- Work around imx7d PCIe PLL erratum (Trent Piepho)
- Don't assert qcom reset GPIO during probe (Bjorn Andersson)
- Skip dwc MSI init if MSIs have been disabled (Lucas Stach)
- Use memcpy_fromio()/memcpy_toio() instead of plain memcpy() in PCI
endpoint framework (Wen Yang)
- Add interface to discover supported endpoint features to replace a
bitfield that wasn't flexible enough (Kishon Vijay Abraham I)
- Implement the new supported-feature interface for designware-plat,
dra7xx, rockchip, cadence (Kishon Vijay Abraham I)
- Fix issues with 64-bit BAR in endpoints (Kishon Vijay Abraham I)
- Add layerscape endpoint mode support (Xiaowei Bao)
- Remove duplicate struct hv_vp_set in favor of struct hv_vpset (Maya
Nakamura)
- Rework hv_irq_unmask() to use cpumask_to_vpset() instead of
open-coded reimplementation (Maya Nakamura)
- Align Hyper-V struct retarget_msi_interrupt arguments (Maya Nakamura)
- Fix mediatek MMIO size computation to enable full size of available
MMIO space (Honghui Zhang)
- Fix mediatek DMA window size computation to allow endpoint DMA access
to full DRAM address range (Honghui Zhang)
- Fix mvebu prefetchable BAR regression caused by common bridge
emulation that assumed all bridges had prefetchable windows (Thomas
Petazzoni)
- Make advk_pci_bridge_emul_ops static (Wei Yongjun)
- Configure MPS settings for VMD root ports (Jon Derrick)
* tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (92 commits)
PCI: Update PCIEPORTBUS Kconfig help text
PCI: Fix "try" semantics of bus and slot reset
PCI/LINK: Report degraded links via link bandwidth notification
dt-bindings: PCI: altera: Add altr,pcie-root-port-2.0
PCI: altera: Enable driver on ARM64
PCI: altera: Add Stratix 10 PCIe support
PCI/PME: Fix possible use-after-free on remove
PCI: aardvark: Make symbol 'advk_pci_bridge_emul_ops' static
PCI: dwc: skip MSI init if MSIs have been explicitly disabled
PCI: hv: Refactor hv_irq_unmask() to use cpumask_to_vpset()
PCI: hv: Replace hv_vp_set with hv_vpset
PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt
PCI: mediatek: Enlarge PCIe2AHB window size to support 4GB DRAM
PCI: mediatek: Fix memory mapped IO range size computation
PCI: dwc: Remove superfluous shifting in definitions
PCI: dwc: Make use of GENMASK/FIELD_PREP
PCI: dwc: Make use of BIT() in constant definitions
PCI: dwc: Share code for dw_pcie_rd/wr_other_conf()
PCI: dwc: Make use of IS_ALIGNED()
PCI: imx6: Add code to request/control "pcie_aux" clock for i.MX8MQ
...
Patch series "Replace all open encodings for NUMA_NO_NODE", v3.
All these places for replacement were found by running the following
grep patterns on the entire kernel code. Please let me know if this has
missed some instances; it might also have replaced some false positives.
I would appreciate suggestions, input and review.
1. git grep "nid == -1"
2. git grep "node == -1"
3. git grep "nid = -1"
4. git grep "node = -1"
This patch (of 2):
At present there are multiple places where an invalid node number is
encoded as -1. Even though it is implicitly understood, it is always
better to have a macro for it. Replace these open encodings of an
invalid node number with the global macro NUMA_NO_NODE. This helps
remove NUMA-related assumptions like 'invalid node' from various places,
redirecting them to a common definition.
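The replacement pattern itself is tiny; NUMA_NO_NODE comes from
<linux/numa.h>:

  #include <linux/numa.h>

  /* before: open-coded invalid node */
  int nid = -1;

  /* after: use the shared definition */
  int nid = NUMA_NO_NODE;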
Link: http://lkml.kernel.org/r/1545127933-10711-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> [ixgbe]
Acked-by: Jens Axboe <axboe@kernel.dk> [mtip32xx]
Acked-by: Vinod Koul <vkoul@kernel.org> [dmaengine.c]
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Acked-by: Doug Ledford <dledford@redhat.com> [drivers/infiniband]
Cc: Joseph Qi <jiangqi903@gmail.com>
Cc: Hans Verkuil <hverkuil@xs4all.nl>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Without the copy being aligned, sdma1 fails ~10% of the time.
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On i.mx8mq, there are two SDMA instances, and the common DMA framework
will get a channel dynamically from any available SDMA instance, whether
it's the first SDMA device or the second one. Some IPs like SAI only
work with sdma2, not sdma1. To make sure the SDMA channel comes from the
correct SDMA device, use the node pointer to match.
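A minimal sketch of the match, assuming the filter is handed the
requesting device-tree node (the names here are illustrative):

  /* only accept channels from the SDMA instance the consumer asked for */
  if (sdma->dev->of_node != requested_np)
          return false;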
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Daniel Baluta <daniel.baluta@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On the i.mx8 mscale B0 chip, an AHB/SDMA clock ratio of 2:1 can't be
supported: since the SDMA clock has to be increased to 250 MHz, AHB
can't reach 500 MHz, so use 1:1 instead.
Based on NXP commit MLK-16841-1 by Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch starts to take advantage of the `dmatest_data` struct by
moving the common allocation & freeing bits into functions.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is just a cosmetic change, since this variable gets used quite a bit
inside the dmatest_func() routine.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This change wraps the data for the source & destination buffers into a
`struct dmatest_data`. The rename patterns are:
* src_cnt -> src->cnt
* dst_cnt -> dst->cnt
* src_off -> src->off
* dst_off -> dst->off
* thread->srcs -> src->aligned
* thread->usrcs -> src->raw
* thread->dsts -> dst->aligned
* thread->udsts -> dst->raw
The intent is to move the duplicated parts of the code into common
alloc & free functions, which will unclutter the `dmatest_func()`
function.
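The wrapper struct implied by the rename patterns above, sketched with
member names inferred from that mapping:

  struct dmatest_data {
          u8              **raw;          /* buffers as allocated (unaligned) */
          u8              **aligned;      /* aligned pointers used by the test */
          unsigned int    cnt;            /* number of buffers */
          unsigned int    off;            /* offset into each buffer */
  };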
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
IOATDMA 3.4 supports the PCIe LTR mechanism via non-standard registers.
These need to be set up in order to not suffer a performance impact and
to provide proper power management. The channel is set to active when it
is allocated, and to passive when it's freed.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for a new feature on ioatdma 3.4 hardware that provides
descriptor pre-fetching in order to reduce small DMA latencies.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add the Snowridge Xeon-D ioatdma PCI device id. This also applies to
Icelake SP Xeon. This introduces the ioatdma v3.4 platform. Also bump
the driver version to 5.0 since we are adding additional code for 3.4
support.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>