Merge tag 'dmaengine-6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
 "Unusually, more new driver and device support than updates. Couple of
  new device support, AMD, Rcar, Intel and New drivers in Freescale,
  Loonsoon, AMD and LPC32XX with DT conversion and mode updates etc.

  New support:
   - Support for AMD Versal Gen 2 DMA IP
   - Rcar RZ/G3S SoC dma controller
   - Support for Intel Diamond Rapids and Granite Rapids-D dma controllers
   - Support for Freescale ls1021a-qdma controller
   - New driver for Loongson-1 APB DMA
   - New driver for AMD QDMA
   - Pl08x in LPC32XX router dma driver

  Updates:
   - Support for dpdma cyclic dma mode
   - XML conversion for marvell xor dma bindings
   - Dma clocks documentation for imx dma"

* tag 'dmaengine-6.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (24 commits)
  dmaengine: loongson1-apb-dma: Fix the build warning caused by the size of pdev_irqname
  dmaengine: Fix spelling mistakes
  dmaengine: Add dma router for pl08x in LPC32XX SoC
  dmaengine: fsl-edma: add edma src ID check at request channel
  dmaengine: fsl-edma: change to guard(mutex) within fsl_edma3_xlate()
  dmaengine: avoid non-constant format string
  dmaengine: imx-dma: Remove i.MX21 support
  dt-bindings: dma: fsl,imx-dma: Document the DMA clocks
  dmaengine: Loongson1: Add Loongson-1 APB DMA driver
  dt-bindings: dma: Add Loongson-1 APB DMA
  dmaengine: zynqmp_dma: Add support for AMD Versal Gen 2 DMA IP
  dt-bindings: dmaengine: zynqmp_dma: Add a new compatible string
  dmaengine: idxd: Add new DSA and IAA device IDs for Diamond Rapids platform
  dmaengine: idxd: Add a new DSA device ID for Granite Rapids-D platform
  dmaengine: ti: k3-udma: Remove unused declarations
  dmaengine: amd: qdma: Add AMD QDMA driver
  dmaengine: xilinx: dpdma: Add support for cyclic dma mode
  dma: ipu: Remove include/linux/dma/ipu-dma.h
  dt-bindings: dma: fsl-mxs-dma: Add compatible string "fsl,imx8qxp-dma-apbh"
  dt-bindings: fsl-qdma: allow compatible string fallback to fsl,ls1021a-qdma
  ...
Commit 8874d92b57 by Linus Torvalds, 2024-09-23 14:08:08 -07:00
66 changed files with 2791 additions and 275 deletions
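
As background for the cyclic and slave DMA additions in the list above (the dpdma cyclic mode and the new Loongson-1 and AMD QDMA drivers), here is a minimal, driver-agnostic consumer sketch of the generic dmaengine cyclic API. The channel name "my-rx", the FIFO address and the bus width are placeholder assumptions, not anything defined by this pull.

/*
 * Illustrative consumer-side sketch only; error handling trimmed.
 * Assumes a DT "dmas"/"dma-names" entry called "my-rx".
 */
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static int example_start_cyclic(struct device *dev, dma_addr_t buf,
				size_t buf_len, size_t period_len,
				dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_DEV_TO_MEM,
		.src_addr = fifo_addr,
		.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "my-rx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	dmaengine_slave_config(chan, &cfg);

	/* the engine wraps around buf_len and signals every period_len */
	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -EINVAL;
	}

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}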


@ -28,6 +28,14 @@ properties:
- description: DMA Error interrupt
minItems: 1
clocks:
maxItems: 2
clock-names:
items:
- const: ipg
- const: ahb
"#dma-cells":
const: 1
@ -42,15 +50,21 @@ required:
- reg
- interrupts
- "#dma-cells"
- clocks
- clock-names
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/imx27-clock.h>
dma-controller@10001000 {
compatible = "fsl,imx27-dma";
reg = <0x10001000 0x1000>;
interrupts = <32 33>;
#dma-cells = <1>;
dma-channels = <16>;
clocks = <&clks IMX27_CLK_DMA_IPG_GATE>, <&clks IMX27_CLK_DMA_AHB_GATE>;
clock-names = "ipg", "ahb";
};


@ -11,6 +11,17 @@ maintainers:
allOf:
- $ref: dma-controller.yaml#
- if:
properties:
compatible:
contains:
const: fsl,imx8qxp-dma-apbh
then:
required:
- power-domains
else:
properties:
power-domains: false
properties:
compatible:
@ -20,6 +31,7 @@ properties:
- fsl,imx6q-dma-apbh
- fsl,imx6sx-dma-apbh
- fsl,imx7d-dma-apbh
- fsl,imx8qxp-dma-apbh
- const: fsl,imx28-dma-apbh
- enum:
- fsl,imx23-dma-apbh
@ -42,6 +54,9 @@ properties:
dma-channels:
enum: [4, 8, 16]
power-domains:
maxItems: 1
required:
- compatible
- reg


@ -11,11 +11,14 @@ maintainers:
properties:
compatible:
enum:
- fsl,ls1021a-qdma
- fsl,ls1028a-qdma
- fsl,ls1043a-qdma
- fsl,ls1046a-qdma
oneOf:
- const: fsl,ls1021a-qdma
- items:
- enum:
- fsl,ls1028a-qdma
- fsl,ls1043a-qdma
- fsl,ls1046a-qdma
- const: fsl,ls1021a-qdma
reg:
items:


@ -0,0 +1,65 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/loongson,ls1b-apbdma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Loongson-1 APB DMA Controller
maintainers:
- Keguang Zhang <keguang.zhang@gmail.com>
description:
Loongson-1 APB DMA controller provides 3 independent channels for
peripherals such as NAND, audio playback and capture.
properties:
compatible:
oneOf:
- const: loongson,ls1b-apbdma
- items:
- enum:
- loongson,ls1a-apbdma
- loongson,ls1c-apbdma
- const: loongson,ls1b-apbdma
reg:
maxItems: 1
interrupts:
items:
- description: NAND interrupt
- description: Audio playback interrupt
- description: Audio capture interrupt
interrupt-names:
items:
- const: ch0
- const: ch1
- const: ch2
'#dma-cells':
const: 1
required:
- compatible
- reg
- interrupts
- interrupt-names
- '#dma-cells'
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
dma-controller@1fd01160 {
compatible = "loongson,ls1b-apbdma";
reg = <0x1fd01160 0x4>;
interrupt-parent = <&intc0>;
interrupts = <13 IRQ_TYPE_EDGE_RISING>,
<14 IRQ_TYPE_EDGE_RISING>,
<15 IRQ_TYPE_EDGE_RISING>;
interrupt-names = "ch0", "ch1", "ch2";
#dma-cells = <1>;
};


@ -0,0 +1,61 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/marvell,xor-v2.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Marvell XOR v2 engines
maintainers:
- Andrew Lunn <andrew@lunn.ch>
properties:
compatible:
oneOf:
- const: marvell,xor-v2
- items:
- enum:
- marvell,armada-7k-xor
- const: marvell,xor-v2
reg:
items:
- description: DMA registers
- description: global registers
clocks:
minItems: 1
maxItems: 2
clock-names:
minItems: 1
items:
- const: core
- const: reg
msi-parent:
description:
Phandle to the MSI-capable interrupt controller used for
interrupts.
maxItems: 1
dma-coherent: true
required:
- compatible
- reg
- msi-parent
- dma-coherent
additionalProperties: false
examples:
- |
xor0@6a0000 {
compatible = "marvell,armada-7k-xor", "marvell,xor-v2";
reg = <0x6a0000 0x1000>, <0x6b0000 0x1000>;
clocks = <&ap_clk 0>, <&ap_clk 1>;
clock-names = "core", "reg";
msi-parent = <&gic_v2m0>;
dma-coherent;
};


@ -1,28 +0,0 @@
* Marvell XOR v2 engines
Required properties:
- compatible: one of the following values:
"marvell,armada-7k-xor"
"marvell,xor-v2"
- reg: Should contain registers location and length (two sets)
the first set is the DMA registers
the second set is the global registers
- msi-parent: Phandle to the MSI-capable interrupt controller used for
interrupts.
Optional properties:
- clocks: Optional reference to the clocks used by the XOR engine.
- clock-names: mandatory if there is a second clock, in this case the
name must be "core" for the first clock and "reg" for the second
one
Example:
xor0@400000 {
compatible = "marvell,xor-v2";
reg = <0x400000 0x1000>,
<0x410000 0x1000>;
msi-parent = <&gic_v2m0>;
dma-coherent;
};


@ -19,6 +19,7 @@ properties:
- renesas,r9a07g043-dmac # RZ/G2UL and RZ/Five
- renesas,r9a07g044-dmac # RZ/G2{L,LC}
- renesas,r9a07g054-dmac # RZ/V2L
- renesas,r9a08g045-dmac # RZ/G3S
- const: renesas,rz-dmac
reg:


@ -24,7 +24,9 @@ properties:
const: 1
compatible:
const: xlnx,zynqmp-dma-1.0
enum:
- amd,versal2-dma-1.0
- xlnx,zynqmp-dma-1.0
reg:
description: memory map for gdma/adma module access


@ -1147,6 +1147,14 @@ L: dmaengine@vger.kernel.org
S: Maintained
F: drivers/dma/ptdma/
AMD QDMA DRIVER
M: Nishad Saraf <nishads@amd.com>
M: Lizhi Hou <lizhi.hou@amd.com>
L: dmaengine@vger.kernel.org
S: Supported
F: drivers/dma/amd/qdma/
F: include/linux/platform_data/amd_qdma.h
AMD SEATTLE DEVICE TREE SUPPORT
M: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
M: Tom Lendacky <thomas.lendacky@amd.com>
@ -2492,6 +2500,7 @@ T: git git://github.com/vzapolskiy/linux-lpc32xx.git
F: Documentation/devicetree/bindings/i2c/nxp,pnx-i2c.yaml
F: arch/arm/boot/dts/nxp/lpc/lpc32*
F: arch/arm/mach-lpc32xx/
F: drivers/dma/lpc32xx-dmamux.c
F: drivers/i2c/busses/i2c-pnx.c
F: drivers/net/ethernet/nxp/lpc_eth.c
F: drivers/usb/host/ohci-nxp.c


@ -8,5 +8,6 @@ config ARCH_LPC32XX
select CLKSRC_LPC32XX
select CPU_ARM926T
select GPIOLIB
select LPC32XX_DMAMUX if AMBA_PL08X
help
Support for the NXP LPC32XX family of processors


@ -369,6 +369,15 @@ config K3_DMA
Support the DMA engine for Hisilicon K3 platform
devices.
config LOONGSON1_APB_DMA
tristate "Loongson1 APB DMA support"
depends on MACH_LOONGSON32 || COMPILE_TEST
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
This selects support for the APB DMA controller in Loongson1 SoCs,
which is required by Loongson1 NAND and audio support.
config LPC18XX_DMAMUX
bool "NXP LPC18xx/43xx DMA MUX for PL080"
depends on ARCH_LPC18XX || COMPILE_TEST
@ -378,6 +387,15 @@ config LPC18XX_DMAMUX
Enable support for DMA on NXP LPC18xx/43xx platforms
with PL080 and multiplexed DMA request lines.
config LPC32XX_DMAMUX
bool "NXP LPC32xx DMA MUX for PL080"
depends on ARCH_LPC32XX || COMPILE_TEST
depends on OF && AMBA_PL08X
select MFD_SYSCON
help
Support for PL080 multiplexed DMA request lines on
LPC32XX platform.
config LS2X_APB_DMA
tristate "Loongson LS2X APB DMA support"
depends on LOONGARCH || COMPILE_TEST
@ -716,6 +734,8 @@ config XILINX_ZYNQMP_DPDMA
display driver.
# driver files
source "drivers/dma/amd/Kconfig"
source "drivers/dma/bestcomm/Kconfig"
source "drivers/dma/mediatek/Kconfig"


@ -49,7 +49,9 @@ obj-$(CONFIG_INTEL_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-y += idxd/
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_LOONGSON1_APB_DMA) += loongson1-apb-dma.o
obj-$(CONFIG_LPC18XX_DMAMUX) += lpc18xx-dmamux.o
obj-$(CONFIG_LPC32XX_DMAMUX) += lpc32xx-dmamux.o
obj-$(CONFIG_LS2X_APB_DMA) += ls2x-apb-dma.o
obj-$(CONFIG_MILBEAUT_HDMAC) += milbeaut-hdmac.o
obj-$(CONFIG_MILBEAUT_XDMAC) += milbeaut-xdmac.o
@ -83,6 +85,7 @@ obj-$(CONFIG_ST_FDMA) += st_fdma.o
obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
obj-$(CONFIG_INTEL_LDMA) += lgm/
obj-y += amd/
obj-y += mediatek/
obj-y += qcom/
obj-y += stm32/


@ -112,7 +112,7 @@ static int acpi_dma_parse_resource_group(const struct acpi_csrt_group *grp,
}
/**
* acpi_dma_parse_csrt - parse CSRT to exctract additional DMA resources
* acpi_dma_parse_csrt - parse CSRT to extract additional DMA resources
* @adev: ACPI device to match with
* @adma: struct acpi_dma of the given DMA controller
*
@ -305,7 +305,7 @@ EXPORT_SYMBOL_GPL(devm_acpi_dma_controller_free);
* found.
*
* Return:
* 0, if no information is avaiable, -1 on mismatch, and 1 otherwise.
* 0, if no information is available, -1 on mismatch, and 1 otherwise.
*/
static int acpi_dma_update_dma_spec(struct acpi_dma *adma,
struct acpi_dma_spec *dma_spec)


@ -153,7 +153,7 @@ struct msgdma_extended_desc {
/**
* struct msgdma_sw_desc - implements a sw descriptor
* @async_tx: support for the async_tx api
* @hw_desc: assosiated HW descriptor
* @hw_desc: associated HW descriptor
* @node: node to move from the free list to the tx list
* @tx_list: transmit list node
*/
@ -511,7 +511,7 @@ static void msgdma_copy_one(struct msgdma_device *mdev,
* of the DMA controller. The descriptor will get flushed to the
* FIFO, once the last word (control word) is written. Since we
* are not 100% sure that memcpy() writes all word in the "correct"
* oder (address from low to high) on all architectures, we make
* order (address from low to high) on all architectures, we make
* sure this control word is written last by single coding it and
* adding some write-barriers here.
*/
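
The comment above explains why the control word has to reach the descriptor FIFO last. Stripped down to the bare pattern, and with register and field names that are purely illustrative rather than msgdma's real layout, it amounts to:

	/* payload words first, written individually (not via memcpy()) */
	iowrite32(sw->read_addr,  &hw->read_addr);
	iowrite32(sw->write_addr, &hw->write_addr);
	iowrite32(sw->len,        &hw->len);
	wmb();			/* make the payload visible before control */
	/* control word last: this single write flushes the descriptor */
	iowrite32(sw->control,    &hw->control);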


@ -2,7 +2,7 @@
/*
* Copyright (c) 2006 ARM Ltd.
* Copyright (c) 2010 ST-Ericsson SA
* Copyirght (c) 2017 Linaro Ltd.
* Copyright (c) 2017 Linaro Ltd.
*
* Author: Peter Pearse <peter.pearse@arm.com>
* Author: Linus Walleij <linus.walleij@linaro.org>

drivers/dma/amd/Kconfig (new file, 14 lines)

@ -0,0 +1,14 @@
# SPDX-License-Identifier: GPL-2.0-only
config AMD_QDMA
tristate "AMD Queue-based DMA"
depends on HAS_IOMEM
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
select REGMAP_MMIO
help
Enable support for the AMD Queue-based DMA subsystem. The primary
mechanism to transfer data using the QDMA is for the QDMA engine to
operate on instructions (descriptors) provided by the host operating
system. Using the descriptors, the QDMA can move data in either the
Host to Card (H2C) direction or the Card to Host (C2H) direction.
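
The help text above describes the descriptor-based H2C/C2H model. From a consumer's point of view such an engine is typically driven through the ordinary dmaengine slave API; a hedged sketch of a host-to-card (DMA_MEM_TO_DEV) transfer, with a placeholder channel name and card address, might look like this:

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int example_h2c(struct device *dev, struct scatterlist *sgl,
		       unsigned int nents, dma_addr_t card_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr = card_addr,		/* card-side destination */
	};
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "h2c");	/* placeholder name */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	dmaengine_slave_config(chan, &cfg);
	desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -EINVAL;
	}
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}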

drivers/dma/amd/Makefile (new file, 3 lines)

@ -0,0 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_AMD_QDMA) += qdma/


@ -0,0 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_AMD_QDMA) += amd-qdma.o
amd-qdma-$(CONFIG_AMD_QDMA) := qdma.o qdma-comm-regs.o


@ -0,0 +1,64 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) 2023-2024, Advanced Micro Devices, Inc.
*/
#ifndef __QDMA_REGS_DEF_H
#define __QDMA_REGS_DEF_H
#include "qdma.h"
const struct qdma_reg qdma_regos_default[QDMA_REGO_MAX] = {
[QDMA_REGO_CTXT_DATA] = QDMA_REGO(0x804, 8),
[QDMA_REGO_CTXT_CMD] = QDMA_REGO(0x844, 1),
[QDMA_REGO_CTXT_MASK] = QDMA_REGO(0x824, 8),
[QDMA_REGO_MM_H2C_CTRL] = QDMA_REGO(0x1004, 1),
[QDMA_REGO_MM_C2H_CTRL] = QDMA_REGO(0x1204, 1),
[QDMA_REGO_QUEUE_COUNT] = QDMA_REGO(0x120, 1),
[QDMA_REGO_RING_SIZE] = QDMA_REGO(0x204, 1),
[QDMA_REGO_H2C_PIDX] = QDMA_REGO(0x18004, 1),
[QDMA_REGO_C2H_PIDX] = QDMA_REGO(0x18008, 1),
[QDMA_REGO_INTR_CIDX] = QDMA_REGO(0x18000, 1),
[QDMA_REGO_FUNC_ID] = QDMA_REGO(0x12c, 1),
[QDMA_REGO_ERR_INT] = QDMA_REGO(0xb04, 1),
[QDMA_REGO_ERR_STAT] = QDMA_REGO(0x248, 1),
};
const struct qdma_reg_field qdma_regfs_default[QDMA_REGF_MAX] = {
/* QDMA_REGO_CTXT_DATA fields */
[QDMA_REGF_IRQ_ENABLE] = QDMA_REGF(53, 53),
[QDMA_REGF_WBK_ENABLE] = QDMA_REGF(52, 52),
[QDMA_REGF_WBI_CHECK] = QDMA_REGF(34, 34),
[QDMA_REGF_IRQ_ARM] = QDMA_REGF(16, 16),
[QDMA_REGF_IRQ_VEC] = QDMA_REGF(138, 128),
[QDMA_REGF_IRQ_AGG] = QDMA_REGF(139, 139),
[QDMA_REGF_WBI_INTVL_ENABLE] = QDMA_REGF(35, 35),
[QDMA_REGF_MRKR_DISABLE] = QDMA_REGF(62, 62),
[QDMA_REGF_QUEUE_ENABLE] = QDMA_REGF(32, 32),
[QDMA_REGF_QUEUE_MODE] = QDMA_REGF(63, 63),
[QDMA_REGF_DESC_BASE] = QDMA_REGF(127, 64),
[QDMA_REGF_DESC_SIZE] = QDMA_REGF(49, 48),
[QDMA_REGF_RING_ID] = QDMA_REGF(47, 44),
[QDMA_REGF_QUEUE_BASE] = QDMA_REGF(11, 0),
[QDMA_REGF_QUEUE_MAX] = QDMA_REGF(44, 32),
[QDMA_REGF_FUNCTION_ID] = QDMA_REGF(24, 17),
[QDMA_REGF_INTR_AGG_BASE] = QDMA_REGF(66, 15),
[QDMA_REGF_INTR_VECTOR] = QDMA_REGF(11, 1),
[QDMA_REGF_INTR_SIZE] = QDMA_REGF(69, 67),
[QDMA_REGF_INTR_VALID] = QDMA_REGF(0, 0),
[QDMA_REGF_INTR_COLOR] = QDMA_REGF(14, 14),
[QDMA_REGF_INTR_FUNCTION_ID] = QDMA_REGF(125, 114),
/* QDMA_REGO_CTXT_CMD fields */
[QDMA_REGF_CMD_INDX] = QDMA_REGF(19, 7),
[QDMA_REGF_CMD_CMD] = QDMA_REGF(6, 5),
[QDMA_REGF_CMD_TYPE] = QDMA_REGF(4, 1),
[QDMA_REGF_CMD_BUSY] = QDMA_REGF(0, 0),
/* QDMA_REGO_QUEUE_COUNT fields */
[QDMA_REGF_QUEUE_COUNT] = QDMA_REGF(11, 0),
/* QDMA_REGO_ERR_INT fields */
[QDMA_REGF_ERR_INT_FUNC] = QDMA_REGF(11, 0),
[QDMA_REGF_ERR_INT_VEC] = QDMA_REGF(22, 12),
[QDMA_REGF_ERR_INT_ARM] = QDMA_REGF(24, 24),
};
#endif /* __QDMA_REGS_DEF_H */

drivers/dma/amd/qdma/qdma.c (new file, 1143 lines; diff not shown because it is too large)

drivers/dma/amd/qdma/qdma.h (new file, 266 lines)

@ -0,0 +1,266 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* DMA header for AMD Queue-based DMA Subsystem
*
* Copyright (C) 2023-2024, Advanced Micro Devices, Inc.
*/
#ifndef __QDMA_H
#define __QDMA_H
#include <linux/bitfield.h>
#include <linux/dmaengine.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include "../../virt-dma.h"
#define DISABLE 0
#define ENABLE 1
#define QDMA_MIN_IRQ 3
#define QDMA_INTR_NAME_MAX_LEN 30
#define QDMA_INTR_PREFIX "amd-qdma"
#define QDMA_IDENTIFIER 0x1FD3
#define QDMA_DEFAULT_RING_SIZE (BIT(10) + 1)
#define QDMA_DEFAULT_RING_ID 0
#define QDMA_POLL_INTRVL_US 10 /* 10us */
#define QDMA_POLL_TIMEOUT_US (500 * 1000) /* 500ms */
#define QDMA_DMAP_REG_STRIDE 16
#define QDMA_CTXT_REGMAP_LEN 8 /* 8 regs */
#define QDMA_MM_DESC_SIZE 32 /* Bytes */
#define QDMA_MM_DESC_LEN_BITS 28
#define QDMA_MM_DESC_MAX_LEN (BIT(QDMA_MM_DESC_LEN_BITS) - 1)
#define QDMA_MIN_DMA_ALLOC_SIZE 4096
#define QDMA_INTR_RING_SIZE BIT(13)
#define QDMA_INTR_RING_IDX_MASK GENMASK(9, 0)
#define QDMA_INTR_RING_BASE(_addr) ((_addr) >> 12)
#define QDMA_IDENTIFIER_REGOFF 0x0
#define QDMA_IDENTIFIER_MASK GENMASK(31, 16)
#define QDMA_QUEUE_ARM_BIT BIT(16)
#define qdma_err(qdev, fmt, args...) \
dev_err(&(qdev)->pdev->dev, fmt, ##args)
#define qdma_dbg(qdev, fmt, args...) \
dev_dbg(&(qdev)->pdev->dev, fmt, ##args)
#define qdma_info(qdev, fmt, args...) \
dev_info(&(qdev)->pdev->dev, fmt, ##args)
enum qdma_reg_fields {
QDMA_REGF_IRQ_ENABLE,
QDMA_REGF_WBK_ENABLE,
QDMA_REGF_WBI_CHECK,
QDMA_REGF_IRQ_ARM,
QDMA_REGF_IRQ_VEC,
QDMA_REGF_IRQ_AGG,
QDMA_REGF_WBI_INTVL_ENABLE,
QDMA_REGF_MRKR_DISABLE,
QDMA_REGF_QUEUE_ENABLE,
QDMA_REGF_QUEUE_MODE,
QDMA_REGF_DESC_BASE,
QDMA_REGF_DESC_SIZE,
QDMA_REGF_RING_ID,
QDMA_REGF_CMD_INDX,
QDMA_REGF_CMD_CMD,
QDMA_REGF_CMD_TYPE,
QDMA_REGF_CMD_BUSY,
QDMA_REGF_QUEUE_COUNT,
QDMA_REGF_QUEUE_MAX,
QDMA_REGF_QUEUE_BASE,
QDMA_REGF_FUNCTION_ID,
QDMA_REGF_INTR_AGG_BASE,
QDMA_REGF_INTR_VECTOR,
QDMA_REGF_INTR_SIZE,
QDMA_REGF_INTR_VALID,
QDMA_REGF_INTR_COLOR,
QDMA_REGF_INTR_FUNCTION_ID,
QDMA_REGF_ERR_INT_FUNC,
QDMA_REGF_ERR_INT_VEC,
QDMA_REGF_ERR_INT_ARM,
QDMA_REGF_MAX
};
enum qdma_regs {
QDMA_REGO_CTXT_DATA,
QDMA_REGO_CTXT_CMD,
QDMA_REGO_CTXT_MASK,
QDMA_REGO_MM_H2C_CTRL,
QDMA_REGO_MM_C2H_CTRL,
QDMA_REGO_QUEUE_COUNT,
QDMA_REGO_RING_SIZE,
QDMA_REGO_H2C_PIDX,
QDMA_REGO_C2H_PIDX,
QDMA_REGO_INTR_CIDX,
QDMA_REGO_FUNC_ID,
QDMA_REGO_ERR_INT,
QDMA_REGO_ERR_STAT,
QDMA_REGO_MAX
};
struct qdma_reg_field {
u16 lsb; /* Least significant bit of field */
u16 msb; /* Most significant bit of field */
};
struct qdma_reg {
u32 off;
u32 count;
};
#define QDMA_REGF(_msb, _lsb) { \
.lsb = (_lsb), \
.msb = (_msb), \
}
#define QDMA_REGO(_off, _count) { \
.off = (_off), \
.count = (_count), \
}
enum qdma_desc_size {
QDMA_DESC_SIZE_8B,
QDMA_DESC_SIZE_16B,
QDMA_DESC_SIZE_32B,
QDMA_DESC_SIZE_64B,
};
enum qdma_queue_op_mode {
QDMA_QUEUE_OP_STREAM,
QDMA_QUEUE_OP_MM,
};
enum qdma_ctxt_type {
QDMA_CTXT_DESC_SW_C2H,
QDMA_CTXT_DESC_SW_H2C,
QDMA_CTXT_DESC_HW_C2H,
QDMA_CTXT_DESC_HW_H2C,
QDMA_CTXT_DESC_CR_C2H,
QDMA_CTXT_DESC_CR_H2C,
QDMA_CTXT_WRB,
QDMA_CTXT_PFTCH,
QDMA_CTXT_INTR_COAL,
QDMA_CTXT_RSVD,
QDMA_CTXT_HOST_PROFILE,
QDMA_CTXT_TIMER,
QDMA_CTXT_FMAP,
QDMA_CTXT_FNC_STS,
};
enum qdma_ctxt_cmd {
QDMA_CTXT_CLEAR,
QDMA_CTXT_WRITE,
QDMA_CTXT_READ,
QDMA_CTXT_INVALIDATE,
QDMA_CTXT_MAX
};
struct qdma_ctxt_sw_desc {
u64 desc_base;
u16 vec;
};
struct qdma_ctxt_intr {
u64 agg_base;
u16 vec;
u32 size;
bool valid;
bool color;
};
struct qdma_ctxt_fmap {
u16 qbase;
u16 qmax;
};
struct qdma_device;
struct qdma_mm_desc {
__le64 src_addr;
__le32 len;
__le32 reserved1;
__le64 dst_addr;
__le64 reserved2;
} __packed;
struct qdma_mm_vdesc {
struct virt_dma_desc vdesc;
struct qdma_queue *queue;
struct scatterlist *sgl;
u64 sg_off;
u32 sg_len;
u64 dev_addr;
u32 pidx;
u32 pending_descs;
struct dma_slave_config cfg;
};
#define QDMA_VDESC_QUEUED(vdesc) (!(vdesc)->sg_len)
struct qdma_queue {
struct qdma_device *qdev;
struct virt_dma_chan vchan;
enum dma_transfer_direction dir;
struct dma_slave_config cfg;
struct qdma_mm_desc *desc_base;
struct qdma_mm_vdesc *submitted_vdesc;
struct qdma_mm_vdesc *issued_vdesc;
dma_addr_t dma_desc_base;
u32 pidx_reg;
u32 cidx_reg;
u32 ring_size;
u32 idx_mask;
u16 qid;
u32 pidx;
u32 cidx;
};
struct qdma_intr_ring {
struct qdma_device *qdev;
__le64 *base;
dma_addr_t dev_base;
char msix_name[QDMA_INTR_NAME_MAX_LEN];
u32 msix_vector;
u16 msix_id;
u32 ring_size;
u16 ridx;
u16 cidx;
u8 color;
};
#define QDMA_INTR_MASK_PIDX GENMASK_ULL(15, 0)
#define QDMA_INTR_MASK_CIDX GENMASK_ULL(31, 16)
#define QDMA_INTR_MASK_DESC_COLOR GENMASK_ULL(32, 32)
#define QDMA_INTR_MASK_STATE GENMASK_ULL(34, 33)
#define QDMA_INTR_MASK_ERROR GENMASK_ULL(36, 35)
#define QDMA_INTR_MASK_TYPE GENMASK_ULL(38, 38)
#define QDMA_INTR_MASK_QID GENMASK_ULL(62, 39)
#define QDMA_INTR_MASK_COLOR GENMASK_ULL(63, 63)
struct qdma_device {
struct platform_device *pdev;
struct dma_device dma_dev;
struct regmap *regmap;
struct mutex ctxt_lock; /* protect ctxt registers */
const struct qdma_reg_field *rfields;
const struct qdma_reg *roffs;
struct qdma_queue *h2c_queues;
struct qdma_queue *c2h_queues;
struct qdma_intr_ring *qintr_rings;
u32 qintr_ring_num;
u32 qintr_ring_idx;
u32 chan_num;
u32 queue_irq_start;
u32 queue_irq_num;
u32 err_irq_idx;
u32 fid;
};
extern const struct qdma_reg qdma_regos_default[QDMA_REGO_MAX];
extern const struct qdma_reg_field qdma_regfs_default[QDMA_REGF_MAX];
#endif /* __QDMA_H */


@ -339,7 +339,7 @@ static inline u8 convert_buswidth(enum dma_slave_buswidth addr_width)
* @regs: memory mapped register base
* @clk: dma controller clock
* @save_imr: interrupt mask register that is saved on suspend/resume cycle
* @all_chan_mask: all channels availlable in a mask
* @all_chan_mask: all channels available in a mask
* @lli_pool: hw lli table
* @memset_pool: hw memset pool
* @chan: channels table to store at_dma_chan structures
@ -668,7 +668,7 @@ static inline u32 atc_calc_bytes_left(u32 current_len, u32 ctrla)
* CTRLA is read in turn, next the DSCR is read a second time. If the two
* consecutive read values of the DSCR are the same then we assume both refers
* to the very same LLI as well as the CTRLA value read inbetween does. For
* cyclic tranfers, the assumption is that a full loop is "not so fast". If the
* cyclic transfers, the assumption is that a full loop is "not so fast". If the
* two DSCR values are different, we read again the CTRLA then the DSCR till two
* consecutive read values from DSCR are equal or till the maximum trials is
* reach. This algorithm is very unlikely not to find a stable value for DSCR.
@ -700,7 +700,7 @@ static int atc_get_llis_residue(struct at_dma_chan *atchan,
break;
/*
* DSCR has changed inside the DMA controller, so the previouly
* DSCR has changed inside the DMA controller, so the previously
* read value of CTRLA may refer to an already processed
* descriptor hence could be outdated. We need to update ctrla
* to match the current descriptor.
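
The comment above describes at_hdmac's trick for reading a consistent DSCR/CTRLA pair. A simplified sketch of that double-read loop, reusing the driver's channel_readl() accessor and trial limit but abridged from the in-tree code, is:

	dscr = channel_readl(atchan, DSCR);
	for (i = 0; i < ATC_MAX_DSCR_TRIALS; ++i) {
		ctrla    = channel_readl(atchan, CTRLA);
		new_dscr = channel_readl(atchan, DSCR);
		/* DSCR unchanged: ctrla was sampled for that same LLI */
		if (new_dscr == dscr)
			break;
		/* the controller moved on to another LLI; sample again */
		dscr = new_dscr;
	}
	if (i == ATC_MAX_DSCR_TRIALS)
		return -ETIMEDOUT;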


@ -15,7 +15,7 @@
* number of hardware rings over one or more SBA hardware devices. By
* design, the internal buffer size of SBA hardware device is limited
* but all offload operations supported by SBA can be broken down into
* multiple small size requests and executed parallely on multiple SBA
* multiple small size requests and executed parallelly on multiple SBA
* hardware devices for achieving high through-put.
*
* The Broadcom SBA RAID driver does not require any register programming
@ -135,7 +135,7 @@ struct sba_device {
u32 max_xor_srcs;
u32 max_resp_pool_size;
u32 max_cmds_pool_size;
/* Maibox client and Mailbox channels */
/* Mailbox client and Mailbox channels */
struct mbox_client client;
struct mbox_chan *mchan;
struct device *mbox_dev;


@ -369,7 +369,7 @@ static struct bcm2835_desc *bcm2835_dma_create_cb_chain(
/* the last frame requires extra flags */
d->cb_list[d->frames - 1].cb->info |= finalextrainfo;
/* detect a size missmatch */
/* detect a size mismatch */
if (buf_len && (d->size != buf_len))
goto error_cb;


@ -1070,7 +1070,7 @@ static int __dma_async_device_channel_register(struct dma_device *device,
if (!name)
dev_set_name(&chan->dev->device, "dma%dchan%d", device->dev_id, chan->chan_id);
else
dev_set_name(&chan->dev->device, name);
dev_set_name(&chan->dev->device, "%s", name);
rc = device_register(&chan->dev->device);
if (rc)
goto err_out_ida;


@ -500,7 +500,7 @@ static unsigned long long dmatest_persec(s64 runtime, unsigned int val)
per_sec *= val;
per_sec = INT_TO_FIXPT(per_sec);
do_div(per_sec, runtime);
do_div(per_sec, (u32)runtime);
return per_sec;
}


@ -841,7 +841,7 @@ static dma_cookie_t ep93xx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
desc = container_of(tx, struct ep93xx_dma_desc, txd);
/*
* If nothing is currently prosessed, we push this descriptor
* If nothing is currently processed, we push this descriptor
* directly to the hardware. Otherwise we put the descriptor
* to the pending queue.
*/
@ -1025,7 +1025,7 @@ fail:
* @chan: channel
* @sgl: list of buffers to transfer
* @sg_len: number of entries in @sgl
* @dir: direction of tha DMA transfer
* @dir: direction of the DMA transfer
* @flags: flags for the descriptor
* @context: operation context (ignored)
*


@ -12,8 +12,8 @@ struct dpaa2_qdma_sd_d {
u32 rsv:32;
union {
struct {
u32 ssd:12; /* souce stride distance */
u32 sss:12; /* souce stride size */
u32 ssd:12; /* source stride distance */
u32 sss:12; /* source stride size */
u32 rsv1:8;
} sdf;
struct {
@ -48,7 +48,7 @@ struct dpaa2_qdma_sd_d {
#define QDMA_SER_DISABLE (8) /* no notification */
#define QDMA_SER_CTX BIT(8) /* notification by FQD_CTX[fqid] */
#define QDMA_SER_DEST (2 << 8) /* notification by destination desc */
#define QDMA_SER_BOTH (3 << 8) /* soruce and dest notification */
#define QDMA_SER_BOTH (3 << 8) /* source and dest notification */
#define QDMA_FD_SPF_ENALBE BIT(30) /* source prefetch enable */
#define QMAN_FD_VA_ENABLE BIT(14) /* Address used is virtual address */


@ -100,6 +100,22 @@ static irqreturn_t fsl_edma_irq_handler(int irq, void *dev_id)
return fsl_edma_err_handler(irq, dev_id);
}
static bool fsl_edma_srcid_in_use(struct fsl_edma_engine *fsl_edma, u32 srcid)
{
struct fsl_edma_chan *fsl_chan;
int i;
for (i = 0; i < fsl_edma->n_chans; i++) {
fsl_chan = &fsl_edma->chans[i];
if (fsl_chan->srcid && srcid == fsl_chan->srcid) {
dev_err(&fsl_chan->pdev->dev, "The srcid is in use, can't use!");
return true;
}
}
return false;
}
static struct dma_chan *fsl_edma_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
@ -117,6 +133,10 @@ static struct dma_chan *fsl_edma_xlate(struct of_phandle_args *dma_spec,
list_for_each_entry_safe(chan, _chan, &fsl_edma->dma_dev.channels, device_node) {
if (chan->client_count)
continue;
if (fsl_edma_srcid_in_use(fsl_edma, dma_spec->args[1]))
return NULL;
if ((chan->chan_id / chans_per_mux) == dma_spec->args[0]) {
chan = dma_get_slave_channel(chan);
if (chan) {
@ -153,7 +173,7 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
b_chmux = !!(fsl_edma->drvdata->flags & FSL_EDMA_DRV_HAS_CHMUX);
mutex_lock(&fsl_edma->fsl_edma_mutex);
guard(mutex)(&fsl_edma->fsl_edma_mutex);
list_for_each_entry_safe(chan, _chan, &fsl_edma->dma_dev.channels,
device_node) {
@ -161,6 +181,8 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
continue;
fsl_chan = to_fsl_edma_chan(chan);
if (fsl_edma_srcid_in_use(fsl_edma, dma_spec->args[0]))
return NULL;
i = fsl_chan - fsl_edma->chans;
fsl_chan->priority = dma_spec->args[1];
@ -177,18 +199,15 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
if (!b_chmux && i == dma_spec->args[0]) {
chan = dma_get_slave_channel(chan);
chan->device->privatecnt++;
mutex_unlock(&fsl_edma->fsl_edma_mutex);
return chan;
} else if (b_chmux && !fsl_chan->srcid) {
/* if controller support channel mux, choose a free channel */
chan = dma_get_slave_channel(chan);
chan->device->privatecnt++;
fsl_chan->srcid = dma_spec->args[0];
mutex_unlock(&fsl_edma->fsl_edma_mutex);
return chan;
}
}
mutex_unlock(&fsl_edma->fsl_edma_mutex);
return NULL;
}


@ -677,7 +677,7 @@ static void hisi_dma_init_hw_qp(struct hisi_dma_dev *hdma_dev, u32 index)
writel_relaxed(tmp, addr);
/*
* 0 - dma should process FLR whith CPU.
* 0 - dma should process FLR with CPU.
* 1 - dma not process FLR, only cpu process FLR.
*/
addr = q_base + HISI_DMA_HIP09_DMA_FLR_DISABLE +


@ -290,7 +290,7 @@ static void idma64_desc_fill(struct idma64_chan *idma64c,
desc->length += hw->len;
} while (i);
/* Trigger an interrupt after the last block is transfered */
/* Trigger an interrupt after the last block is transferred */
lli->ctllo |= IDMA64C_CTLL_INT_EN;
/* Disable LLP transfer in the last block */
@ -364,7 +364,7 @@ static size_t idma64_active_desc_size(struct idma64_chan *idma64c)
if (!i)
return bytes;
/* The current chunk is not fully transfered yet */
/* The current chunk is not fully transferred yet */
bytes += desc->hw[--i].len;
return bytes - IDMA64C_CTLH_BLOCK_TS(ctlhi);


@ -69,9 +69,15 @@ static struct idxd_driver_data idxd_driver_data[] = {
static struct pci_device_id idxd_pci_tbl[] = {
/* DSA ver 1.0 platforms */
{ PCI_DEVICE_DATA(INTEL, DSA_SPR0, &idxd_driver_data[IDXD_TYPE_DSA]) },
/* DSA on GNR-D platforms */
{ PCI_DEVICE_DATA(INTEL, DSA_GNRD, &idxd_driver_data[IDXD_TYPE_DSA]) },
/* DSA on DMR platforms */
{ PCI_DEVICE_DATA(INTEL, DSA_DMR, &idxd_driver_data[IDXD_TYPE_DSA]) },
/* IAX ver 1.0 platforms */
{ PCI_DEVICE_DATA(INTEL, IAX_SPR0, &idxd_driver_data[IDXD_TYPE_IAX]) },
/* IAA on DMR platforms */
{ PCI_DEVICE_DATA(INTEL, IAA_DMR, &idxd_driver_data[IDXD_TYPE_IAX]) },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, idxd_pci_tbl);


@ -449,8 +449,8 @@ static void idxd_pmu_init(struct idxd_pmu *idxd_pmu)
idxd_pmu->pmu.attr_groups = perfmon_attr_groups;
idxd_pmu->pmu.task_ctx_nr = perf_invalid_context;
idxd_pmu->pmu.event_init = perfmon_pmu_event_init;
idxd_pmu->pmu.pmu_enable = perfmon_pmu_enable,
idxd_pmu->pmu.pmu_disable = perfmon_pmu_disable,
idxd_pmu->pmu.pmu_enable = perfmon_pmu_enable;
idxd_pmu->pmu.pmu_disable = perfmon_pmu_disable;
idxd_pmu->pmu.add = perfmon_pmu_event_add;
idxd_pmu->pmu.del = perfmon_pmu_event_del;
idxd_pmu->pmu.start = perfmon_pmu_event_start;


@ -134,7 +134,7 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
* completing the descriptor will return desc to allocator and
* the desc can be acquired by a different process and the
* desc->list can be modified. Delete desc from list so the
* list trasversing does not get corrupted by the other process.
* list traversing does not get corrupted by the other process.
*/
list_for_each_entry_safe(d, t, &flist, list) {
list_del_init(&d->list);


@ -167,7 +167,6 @@ struct imxdma_channel {
enum imx_dma_type {
IMX1_DMA,
IMX21_DMA,
IMX27_DMA,
};
@ -194,8 +193,6 @@ struct imxdma_filter_data {
static const struct of_device_id imx_dma_of_dev_id[] = {
{
.compatible = "fsl,imx1-dma", .data = (const void *)IMX1_DMA,
}, {
.compatible = "fsl,imx21-dma", .data = (const void *)IMX21_DMA,
}, {
.compatible = "fsl,imx27-dma", .data = (const void *)IMX27_DMA,
}, {


@ -905,7 +905,7 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
op = IOAT_OP_XOR_VAL;
/* validate the sources with the destintation page */
/* validate the sources with the destination page */
for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
xor_val_srcs[i] = xor_srcs[i];
xor_val_srcs[i] = dest;


@ -107,7 +107,7 @@
* If header mode is set in DMA descriptor,
* If bit 30 is disabled, HDR_LEN must be configured according to channel
* requirement.
* If bit 30 is enabled(checksum with heade mode), HDR_LEN has no need to
* If bit 30 is enabled(checksum with header mode), HDR_LEN has no need to
* be configured. It will enable check sum for switch
* If header mode is not set in DMA descriptor,
* This register setting doesn't matter


@ -0,0 +1,660 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Driver for Loongson-1 APB DMA Controller
*
* Copyright (C) 2015-2024 Keguang Zhang <keguang.zhang@gmail.com>
*/
#include <linux/dmapool.h>
#include <linux/dma-mapping.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "dmaengine.h"
#include "virt-dma.h"
/* Loongson-1 DMA Control Register */
#define LS1X_DMA_CTRL 0x0
/* DMA Control Register Bits */
#define LS1X_DMA_STOP BIT(4)
#define LS1X_DMA_START BIT(3)
#define LS1X_DMA_ASK_VALID BIT(2)
/* DMA Next Field Bits */
#define LS1X_DMA_NEXT_VALID BIT(0)
/* DMA Command Field Bits */
#define LS1X_DMA_RAM2DEV BIT(12)
#define LS1X_DMA_INT BIT(1)
#define LS1X_DMA_INT_MASK BIT(0)
#define LS1X_DMA_LLI_ALIGNMENT 64
#define LS1X_DMA_LLI_ADDR_MASK GENMASK(31, __ffs(LS1X_DMA_LLI_ALIGNMENT))
#define LS1X_DMA_MAX_CHANNELS 3
enum ls1x_dmadesc_offsets {
LS1X_DMADESC_NEXT = 0,
LS1X_DMADESC_SADDR,
LS1X_DMADESC_DADDR,
LS1X_DMADESC_LENGTH,
LS1X_DMADESC_STRIDE,
LS1X_DMADESC_CYCLES,
LS1X_DMADESC_CMD,
LS1X_DMADESC_SIZE
};
struct ls1x_dma_lli {
unsigned int hw[LS1X_DMADESC_SIZE];
dma_addr_t phys;
struct list_head node;
} __aligned(LS1X_DMA_LLI_ALIGNMENT);
struct ls1x_dma_desc {
struct virt_dma_desc vd;
struct list_head lli_list;
};
struct ls1x_dma_chan {
struct virt_dma_chan vc;
struct dma_pool *lli_pool;
phys_addr_t src_addr;
phys_addr_t dst_addr;
enum dma_slave_buswidth src_addr_width;
enum dma_slave_buswidth dst_addr_width;
unsigned int bus_width;
void __iomem *reg_base;
int irq;
bool is_cyclic;
struct ls1x_dma_lli *curr_lli;
};
struct ls1x_dma {
struct dma_device ddev;
unsigned int nr_chans;
struct ls1x_dma_chan chan[];
};
static irqreturn_t ls1x_dma_irq_handler(int irq, void *data);
#define to_ls1x_dma_chan(dchan) \
container_of(dchan, struct ls1x_dma_chan, vc.chan)
#define to_ls1x_dma_desc(d) \
container_of(d, struct ls1x_dma_desc, vd)
static inline struct device *chan2dev(struct dma_chan *chan)
{
return &chan->dev->device;
}
static inline int ls1x_dma_query(struct ls1x_dma_chan *chan,
dma_addr_t *lli_phys)
{
struct dma_chan *dchan = &chan->vc.chan;
int val, ret;
val = *lli_phys & LS1X_DMA_LLI_ADDR_MASK;
val |= LS1X_DMA_ASK_VALID;
val |= dchan->chan_id;
writel(val, chan->reg_base + LS1X_DMA_CTRL);
ret = readl_poll_timeout_atomic(chan->reg_base + LS1X_DMA_CTRL, val,
!(val & LS1X_DMA_ASK_VALID), 0, 3000);
if (ret)
dev_err(chan2dev(dchan), "failed to query DMA\n");
return ret;
}
static inline int ls1x_dma_start(struct ls1x_dma_chan *chan,
dma_addr_t *lli_phys)
{
struct dma_chan *dchan = &chan->vc.chan;
struct device *dev = chan2dev(dchan);
int val, ret;
val = *lli_phys & LS1X_DMA_LLI_ADDR_MASK;
val |= LS1X_DMA_START;
val |= dchan->chan_id;
writel(val, chan->reg_base + LS1X_DMA_CTRL);
ret = readl_poll_timeout(chan->reg_base + LS1X_DMA_CTRL, val,
!(val & LS1X_DMA_START), 0, 1000);
if (!ret)
dev_dbg(dev, "start DMA with lli_phys=%pad\n", lli_phys);
else
dev_err(dev, "failed to start DMA\n");
return ret;
}
static inline void ls1x_dma_stop(struct ls1x_dma_chan *chan)
{
int val = readl(chan->reg_base + LS1X_DMA_CTRL);
writel(val | LS1X_DMA_STOP, chan->reg_base + LS1X_DMA_CTRL);
}
static void ls1x_dma_free_chan_resources(struct dma_chan *dchan)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
struct device *dev = chan2dev(dchan);
dma_free_coherent(dev, sizeof(struct ls1x_dma_lli),
chan->curr_lli, chan->curr_lli->phys);
dma_pool_destroy(chan->lli_pool);
chan->lli_pool = NULL;
devm_free_irq(dev, chan->irq, chan);
vchan_free_chan_resources(&chan->vc);
}
static int ls1x_dma_alloc_chan_resources(struct dma_chan *dchan)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
struct device *dev = chan2dev(dchan);
dma_addr_t phys;
int ret;
ret = devm_request_irq(dev, chan->irq, ls1x_dma_irq_handler,
IRQF_SHARED, dma_chan_name(dchan), chan);
if (ret) {
dev_err(dev, "failed to request IRQ %d\n", chan->irq);
return ret;
}
chan->lli_pool = dma_pool_create(dma_chan_name(dchan), dev,
sizeof(struct ls1x_dma_lli),
__alignof__(struct ls1x_dma_lli), 0);
if (!chan->lli_pool)
return -ENOMEM;
/* allocate memory for querying the current lli */
dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
chan->curr_lli = dma_alloc_coherent(dev, sizeof(struct ls1x_dma_lli),
&phys, GFP_KERNEL);
if (!chan->curr_lli) {
dma_pool_destroy(chan->lli_pool);
return -ENOMEM;
}
chan->curr_lli->phys = phys;
return 0;
}
static void ls1x_dma_free_desc(struct virt_dma_desc *vd)
{
struct ls1x_dma_desc *desc = to_ls1x_dma_desc(vd);
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(vd->tx.chan);
struct ls1x_dma_lli *lli, *_lli;
list_for_each_entry_safe(lli, _lli, &desc->lli_list, node) {
list_del(&lli->node);
dma_pool_free(chan->lli_pool, lli, lli->phys);
}
kfree(desc);
}
static struct ls1x_dma_desc *ls1x_dma_alloc_desc(void)
{
struct ls1x_dma_desc *desc;
desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
if (!desc)
return NULL;
INIT_LIST_HEAD(&desc->lli_list);
return desc;
}
static int ls1x_dma_prep_lli(struct dma_chan *dchan, struct ls1x_dma_desc *desc,
struct scatterlist *sgl, unsigned int sg_len,
enum dma_transfer_direction dir, bool is_cyclic)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
struct ls1x_dma_lli *lli, *prev = NULL, *first = NULL;
struct device *dev = chan2dev(dchan);
struct list_head *pos = NULL;
struct scatterlist *sg;
unsigned int dev_addr, cmd, i;
switch (dir) {
case DMA_MEM_TO_DEV:
dev_addr = chan->dst_addr;
chan->bus_width = chan->dst_addr_width;
cmd = LS1X_DMA_RAM2DEV | LS1X_DMA_INT;
break;
case DMA_DEV_TO_MEM:
dev_addr = chan->src_addr;
chan->bus_width = chan->src_addr_width;
cmd = LS1X_DMA_INT;
break;
default:
dev_err(dev, "unsupported DMA direction: %s\n",
dmaengine_get_direction_text(dir));
return -EINVAL;
}
for_each_sg(sgl, sg, sg_len, i) {
dma_addr_t buf_addr = sg_dma_address(sg);
size_t buf_len = sg_dma_len(sg);
dma_addr_t phys;
if (!is_dma_copy_aligned(dchan->device, buf_addr, 0, buf_len)) {
dev_err(dev, "buffer is not aligned\n");
return -EINVAL;
}
/* allocate HW descriptors */
lli = dma_pool_zalloc(chan->lli_pool, GFP_NOWAIT, &phys);
if (!lli) {
dev_err(dev, "failed to alloc lli %u\n", i);
return -ENOMEM;
}
/* setup HW descriptors */
lli->phys = phys;
lli->hw[LS1X_DMADESC_SADDR] = buf_addr;
lli->hw[LS1X_DMADESC_DADDR] = dev_addr;
lli->hw[LS1X_DMADESC_LENGTH] = buf_len / chan->bus_width;
lli->hw[LS1X_DMADESC_STRIDE] = 0;
lli->hw[LS1X_DMADESC_CYCLES] = 1;
lli->hw[LS1X_DMADESC_CMD] = cmd;
if (prev)
prev->hw[LS1X_DMADESC_NEXT] =
lli->phys | LS1X_DMA_NEXT_VALID;
prev = lli;
if (!first)
first = lli;
list_add_tail(&lli->node, &desc->lli_list);
}
if (is_cyclic) {
lli->hw[LS1X_DMADESC_NEXT] = first->phys | LS1X_DMA_NEXT_VALID;
chan->is_cyclic = is_cyclic;
}
list_for_each(pos, &desc->lli_list) {
lli = list_entry(pos, struct ls1x_dma_lli, node);
print_hex_dump_debug("LLI: ", DUMP_PREFIX_OFFSET, 16, 4,
lli, sizeof(*lli), false);
}
return 0;
}
static struct dma_async_tx_descriptor *
ls1x_dma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction dir,
unsigned long flags, void *context)
{
struct ls1x_dma_desc *desc;
dev_dbg(chan2dev(dchan), "sg_len=%u flags=0x%lx dir=%s\n",
sg_len, flags, dmaengine_get_direction_text(dir));
desc = ls1x_dma_alloc_desc();
if (!desc)
return NULL;
if (ls1x_dma_prep_lli(dchan, desc, sgl, sg_len, dir, false)) {
ls1x_dma_free_desc(&desc->vd);
return NULL;
}
return vchan_tx_prep(to_virt_chan(dchan), &desc->vd, flags);
}
static struct dma_async_tx_descriptor *
ls1x_dma_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t buf_addr,
size_t buf_len, size_t period_len,
enum dma_transfer_direction dir, unsigned long flags)
{
struct ls1x_dma_desc *desc;
struct scatterlist *sgl;
unsigned int sg_len;
unsigned int i;
int ret;
dev_dbg(chan2dev(dchan),
"buf_len=%zu period_len=%zu flags=0x%lx dir=%s\n",
buf_len, period_len, flags, dmaengine_get_direction_text(dir));
desc = ls1x_dma_alloc_desc();
if (!desc)
return NULL;
/* allocate the scatterlist */
sg_len = buf_len / period_len;
sgl = kmalloc_array(sg_len, sizeof(*sgl), GFP_NOWAIT);
if (!sgl)
return NULL;
sg_init_table(sgl, sg_len);
for (i = 0; i < sg_len; ++i) {
sg_set_page(&sgl[i], pfn_to_page(PFN_DOWN(buf_addr)),
period_len, offset_in_page(buf_addr));
sg_dma_address(&sgl[i]) = buf_addr;
sg_dma_len(&sgl[i]) = period_len;
buf_addr += period_len;
}
ret = ls1x_dma_prep_lli(dchan, desc, sgl, sg_len, dir, true);
kfree(sgl);
if (ret) {
ls1x_dma_free_desc(&desc->vd);
return NULL;
}
return vchan_tx_prep(to_virt_chan(dchan), &desc->vd, flags);
}
static int ls1x_dma_slave_config(struct dma_chan *dchan,
struct dma_slave_config *config)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
chan->src_addr = config->src_addr;
chan->src_addr_width = config->src_addr_width;
chan->dst_addr = config->dst_addr;
chan->dst_addr_width = config->dst_addr_width;
return 0;
}
static int ls1x_dma_pause(struct dma_chan *dchan)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
int ret;
guard(spinlock_irqsave)(&chan->vc.lock);
/* save the current lli */
ret = ls1x_dma_query(chan, &chan->curr_lli->phys);
if (!ret)
ls1x_dma_stop(chan);
return ret;
}
static int ls1x_dma_resume(struct dma_chan *dchan)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
guard(spinlock_irqsave)(&chan->vc.lock);
return ls1x_dma_start(chan, &chan->curr_lli->phys);
}
static int ls1x_dma_terminate_all(struct dma_chan *dchan)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
struct virt_dma_desc *vd;
LIST_HEAD(head);
ls1x_dma_stop(chan);
scoped_guard(spinlock_irqsave, &chan->vc.lock) {
vd = vchan_next_desc(&chan->vc);
if (vd)
vchan_terminate_vdesc(vd);
vchan_get_all_descriptors(&chan->vc, &head);
}
vchan_dma_desc_free_list(&chan->vc, &head);
return 0;
}
static void ls1x_dma_synchronize(struct dma_chan *dchan)
{
vchan_synchronize(to_virt_chan(dchan));
}
static enum dma_status ls1x_dma_tx_status(struct dma_chan *dchan,
dma_cookie_t cookie,
struct dma_tx_state *state)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
struct virt_dma_desc *vd;
enum dma_status status;
size_t bytes = 0;
status = dma_cookie_status(dchan, cookie, state);
if (status == DMA_COMPLETE)
return status;
scoped_guard(spinlock_irqsave, &chan->vc.lock) {
vd = vchan_find_desc(&chan->vc, cookie);
if (vd) {
struct ls1x_dma_desc *desc = to_ls1x_dma_desc(vd);
struct ls1x_dma_lli *lli;
dma_addr_t next_phys;
/* get the current lli */
if (ls1x_dma_query(chan, &chan->curr_lli->phys))
return status;
/* locate the current lli */
next_phys = chan->curr_lli->hw[LS1X_DMADESC_NEXT];
list_for_each_entry(lli, &desc->lli_list, node)
if (lli->hw[LS1X_DMADESC_NEXT] == next_phys)
break;
dev_dbg(chan2dev(dchan), "current lli_phys=%pad",
&lli->phys);
/* count the residues */
list_for_each_entry_from(lli, &desc->lli_list, node)
bytes += lli->hw[LS1X_DMADESC_LENGTH] *
chan->bus_width;
}
}
dma_set_residue(state, bytes);
return status;
}
static void ls1x_dma_issue_pending(struct dma_chan *dchan)
{
struct ls1x_dma_chan *chan = to_ls1x_dma_chan(dchan);
guard(spinlock_irqsave)(&chan->vc.lock);
if (vchan_issue_pending(&chan->vc)) {
struct virt_dma_desc *vd = vchan_next_desc(&chan->vc);
if (vd) {
struct ls1x_dma_desc *desc = to_ls1x_dma_desc(vd);
struct ls1x_dma_lli *lli;
lli = list_first_entry(&desc->lli_list,
struct ls1x_dma_lli, node);
ls1x_dma_start(chan, &lli->phys);
}
}
}
static irqreturn_t ls1x_dma_irq_handler(int irq, void *data)
{
struct ls1x_dma_chan *chan = data;
struct dma_chan *dchan = &chan->vc.chan;
struct device *dev = chan2dev(dchan);
struct virt_dma_desc *vd;
scoped_guard(spinlock, &chan->vc.lock) {
vd = vchan_next_desc(&chan->vc);
if (!vd) {
dev_warn(dev,
"IRQ %d with no active desc on channel %d\n",
irq, dchan->chan_id);
return IRQ_NONE;
}
if (chan->is_cyclic) {
vchan_cyclic_callback(vd);
} else {
list_del(&vd->node);
vchan_cookie_complete(vd);
}
}
dev_dbg(dev, "DMA IRQ %d on channel %d\n", irq, dchan->chan_id);
return IRQ_HANDLED;
}
static int ls1x_dma_chan_probe(struct platform_device *pdev,
struct ls1x_dma *dma)
{
void __iomem *reg_base;
int id;
reg_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(reg_base))
return PTR_ERR(reg_base);
for (id = 0; id < dma->nr_chans; id++) {
struct ls1x_dma_chan *chan = &dma->chan[id];
char pdev_irqname[16];
snprintf(pdev_irqname, sizeof(pdev_irqname), "ch%d", id);
chan->irq = platform_get_irq_byname(pdev, pdev_irqname);
if (chan->irq < 0)
return dev_err_probe(&pdev->dev, chan->irq,
"failed to get IRQ for ch%d\n",
id);
chan->reg_base = reg_base;
chan->vc.desc_free = ls1x_dma_free_desc;
vchan_init(&chan->vc, &dma->ddev);
}
return 0;
}
static void ls1x_dma_chan_remove(struct ls1x_dma *dma)
{
int id;
for (id = 0; id < dma->nr_chans; id++) {
struct ls1x_dma_chan *chan = &dma->chan[id];
if (chan->vc.chan.device == &dma->ddev) {
list_del(&chan->vc.chan.device_node);
tasklet_kill(&chan->vc.task);
}
}
}
static int ls1x_dma_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dma_device *ddev;
struct ls1x_dma *dma;
int ret;
ret = platform_irq_count(pdev);
if (ret <= 0 || ret > LS1X_DMA_MAX_CHANNELS)
return dev_err_probe(dev, -EINVAL,
"Invalid number of IRQ channels: %d\n",
ret);
dma = devm_kzalloc(dev, struct_size(dma, chan, ret), GFP_KERNEL);
if (!dma)
return -ENOMEM;
dma->nr_chans = ret;
/* initialize DMA device */
ddev = &dma->ddev;
ddev->dev = dev;
ddev->copy_align = DMAENGINE_ALIGN_4_BYTES;
ddev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
ddev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
ddev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
ddev->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
ddev->device_alloc_chan_resources = ls1x_dma_alloc_chan_resources;
ddev->device_free_chan_resources = ls1x_dma_free_chan_resources;
ddev->device_prep_slave_sg = ls1x_dma_prep_slave_sg;
ddev->device_prep_dma_cyclic = ls1x_dma_prep_dma_cyclic;
ddev->device_config = ls1x_dma_slave_config;
ddev->device_pause = ls1x_dma_pause;
ddev->device_resume = ls1x_dma_resume;
ddev->device_terminate_all = ls1x_dma_terminate_all;
ddev->device_synchronize = ls1x_dma_synchronize;
ddev->device_tx_status = ls1x_dma_tx_status;
ddev->device_issue_pending = ls1x_dma_issue_pending;
dma_cap_set(DMA_SLAVE, ddev->cap_mask);
INIT_LIST_HEAD(&ddev->channels);
/* initialize DMA channels */
ret = ls1x_dma_chan_probe(pdev, dma);
if (ret)
goto err;
ret = dmaenginem_async_device_register(ddev);
if (ret) {
dev_err(dev, "failed to register DMA device\n");
goto err;
}
ret = of_dma_controller_register(dev->of_node, of_dma_xlate_by_chan_id,
ddev);
if (ret) {
dev_err(dev, "failed to register DMA controller\n");
goto err;
}
platform_set_drvdata(pdev, dma);
dev_info(dev, "Loongson1 DMA driver registered\n");
return 0;
err:
ls1x_dma_chan_remove(dma);
return ret;
}
static void ls1x_dma_remove(struct platform_device *pdev)
{
struct ls1x_dma *dma = platform_get_drvdata(pdev);
of_dma_controller_free(pdev->dev.of_node);
ls1x_dma_chan_remove(dma);
}
static const struct of_device_id ls1x_dma_match[] = {
{ .compatible = "loongson,ls1b-apbdma" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, ls1x_dma_match);
static struct platform_driver ls1x_dma_driver = {
.probe = ls1x_dma_probe,
.remove = ls1x_dma_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = ls1x_dma_match,
},
};
module_platform_driver(ls1x_dma_driver);
MODULE_AUTHOR("Keguang Zhang <keguang.zhang@gmail.com>");
MODULE_DESCRIPTION("Loongson-1 APB DMA Controller driver");
MODULE_LICENSE("GPL");


@ -0,0 +1,195 @@
// SPDX-License-Identifier: GPL-2.0-only
//
// Copyright 2024 Timesys Corporation <piotr.wojtaszczyk@timesys.com>
//
// Based on TI DMA Crossbar driver by:
// Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
// Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/spinlock.h>
#define LPC32XX_SSP_CLK_CTRL 0x78
#define LPC32XX_I2S_CLK_CTRL 0x7c
struct lpc32xx_dmamux {
int signal;
char *name_sel0;
char *name_sel1;
int muxval;
int muxreg;
int bit;
bool busy;
};
struct lpc32xx_dmamux_data {
struct dma_router dmarouter;
struct regmap *reg;
spinlock_t lock; /* protects busy status flag */
};
/* From LPC32x0 User manual "3.2.1 DMA request signals" */
static struct lpc32xx_dmamux lpc32xx_muxes[] = {
{
.signal = 3,
.name_sel0 = "spi2-rx-tx",
.name_sel1 = "ssp1-rx",
.muxreg = LPC32XX_SSP_CLK_CTRL,
.bit = 5,
},
{
.signal = 10,
.name_sel0 = "uart7-rx",
.name_sel1 = "i2s1-dma1",
.muxreg = LPC32XX_I2S_CLK_CTRL,
.bit = 4,
},
{
.signal = 11,
.name_sel0 = "spi1-rx-tx",
.name_sel1 = "ssp1-tx",
.muxreg = LPC32XX_SSP_CLK_CTRL,
.bit = 4,
},
{
.signal = 14,
.name_sel0 = "none",
.name_sel1 = "ssp0-rx",
.muxreg = LPC32XX_SSP_CLK_CTRL,
.bit = 3,
},
{
.signal = 15,
.name_sel0 = "none",
.name_sel1 = "ssp0-tx",
.muxreg = LPC32XX_SSP_CLK_CTRL,
.bit = 2,
},
};
static void lpc32xx_dmamux_release(struct device *dev, void *route_data)
{
struct lpc32xx_dmamux_data *dmamux = dev_get_drvdata(dev);
struct lpc32xx_dmamux *mux = route_data;
dev_dbg(dev, "releasing dma request signal %d routed to %s\n",
mux->signal, mux->muxval ? mux->name_sel1 : mux->name_sel0);
guard(spinlock)(&dmamux->lock);
mux->busy = false;
}
static void *lpc32xx_dmamux_reserve(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
struct device *dev = &pdev->dev;
struct lpc32xx_dmamux_data *dmamux = platform_get_drvdata(pdev);
unsigned long flags;
struct lpc32xx_dmamux *mux = NULL;
int i;
if (dma_spec->args_count != 3) {
dev_err(&pdev->dev, "invalid number of dma mux args\n");
return ERR_PTR(-EINVAL);
}
for (i = 0; i < ARRAY_SIZE(lpc32xx_muxes); i++) {
if (lpc32xx_muxes[i].signal == dma_spec->args[0]) {
mux = &lpc32xx_muxes[i];
break;
}
}
if (!mux) {
dev_err(&pdev->dev, "invalid mux request number: %d\n",
dma_spec->args[0]);
return ERR_PTR(-EINVAL);
}
if (dma_spec->args[2] > 1) {
dev_err(&pdev->dev, "invalid dma mux value: %d\n",
dma_spec->args[1]);
return ERR_PTR(-EINVAL);
}
/* The of_node_put() will be done in the core for the node */
dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
if (!dma_spec->np) {
dev_err(&pdev->dev, "can't get dma master\n");
return ERR_PTR(-EINVAL);
}
spin_lock_irqsave(&dmamux->lock, flags);
if (mux->busy) {
spin_unlock_irqrestore(&dmamux->lock, flags);
dev_err(dev, "dma request signal %d busy, routed to %s\n",
mux->signal, mux->muxval ? mux->name_sel1 : mux->name_sel0);
of_node_put(dma_spec->np);
return ERR_PTR(-EBUSY);
}
mux->busy = true;
mux->muxval = dma_spec->args[2] ? BIT(mux->bit) : 0;
regmap_update_bits(dmamux->reg, mux->muxreg, BIT(mux->bit), mux->muxval);
spin_unlock_irqrestore(&dmamux->lock, flags);
dma_spec->args[2] = 0;
dma_spec->args_count = 2;
dev_dbg(dev, "dma request signal %d routed to %s\n",
mux->signal, mux->muxval ? mux->name_sel1 : mux->name_sel0);
return mux;
}
static int lpc32xx_dmamux_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct lpc32xx_dmamux_data *dmamux;
dmamux = devm_kzalloc(&pdev->dev, sizeof(*dmamux), GFP_KERNEL);
if (!dmamux)
return -ENOMEM;
dmamux->reg = syscon_node_to_regmap(np->parent);
if (IS_ERR(dmamux->reg)) {
dev_err(&pdev->dev, "syscon lookup failed\n");
return PTR_ERR(dmamux->reg);
}
spin_lock_init(&dmamux->lock);
platform_set_drvdata(pdev, dmamux);
dmamux->dmarouter.dev = &pdev->dev;
dmamux->dmarouter.route_free = lpc32xx_dmamux_release;
return of_dma_router_register(np, lpc32xx_dmamux_reserve,
&dmamux->dmarouter);
}
static const struct of_device_id lpc32xx_dmamux_match[] = {
{ .compatible = "nxp,lpc3220-dmamux" },
{},
};
static struct platform_driver lpc32xx_dmamux_driver = {
.probe = lpc32xx_dmamux_probe,
.driver = {
.name = "lpc32xx-dmamux",
.of_match_table = lpc32xx_dmamux_match,
},
};
static int __init lpc32xx_dmamux_init(void)
{
return platform_driver_register(&lpc32xx_dmamux_driver);
}
arch_initcall(lpc32xx_dmamux_init);


@ -33,11 +33,11 @@
#define LDMA_STOP BIT(4) /* DMA stop operation */
#define LDMA_CONFIG_MASK GENMASK(4, 0) /* DMA controller config bits mask */
/* Bitfields in ndesc_addr field of HW decriptor */
/* Bitfields in ndesc_addr field of HW descriptor */
#define LDMA_DESC_EN BIT(0) /*1: The next descriptor is valid */
#define LDMA_DESC_ADDR_LOW GENMASK(31, 1)
/* Bitfields in cmd field of HW decriptor */
/* Bitfields in cmd field of HW descriptor */
#define LDMA_INT BIT(1) /* Enable DMA interrupts */
#define LDMA_DATA_DIRECTION BIT(12) /* 1: write to device, 0: read from device */


@ -518,7 +518,7 @@ mtk_cqdma_prep_dma_memcpy(struct dma_chan *c, dma_addr_t dest,
/* setup dma channel */
cvd[i]->ch = c;
/* setup sourece, destination, and length */
/* setup source, destination, and length */
tlen = (len > MTK_CQDMA_MAX_LEN) ? MTK_CQDMA_MAX_LEN : len;
cvd[i]->len = tlen;
cvd[i]->src = src;
@ -617,7 +617,7 @@ static int mtk_cqdma_alloc_chan_resources(struct dma_chan *c)
u32 i, min_refcnt = U32_MAX, refcnt;
unsigned long flags;
/* allocate PC with the minimun refcount */
/* allocate PC with the minimum refcount */
for (i = 0; i < cqdma->dma_channels; ++i) {
refcnt = refcount_read(&cqdma->pc[i]->refcnt);
if (refcnt < min_refcnt) {


@ -226,7 +226,7 @@ struct mtk_hsdma_soc {
* @pc_refcnt: Track how many VCs are using the PC
* @lock: Lock protect agaisting multiple VCs access PC
* @soc: The pointer to area holding differences among
* vaious platform
* various platform
*/
struct mtk_hsdma_device {
struct dma_device ddev;


@ -414,7 +414,7 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
if (!mv_chan_is_busy(mv_chan)) {
u32 current_desc = mv_chan_get_current_desc(mv_chan);
/*
* and the curren desc is the end of the chain before
* and the current desc is the end of the chain before
* the append, then we need to start the channel
*/
if (current_desc == old_chain_tail->async_tx.phys)
@ -1074,7 +1074,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
if (!mv_chan->dma_desc_pool_virt)
return ERR_PTR(-ENOMEM);
/* discover transaction capabilites from the platform data */
/* discover transaction capabilities from the platform data */
dma_dev->cap_mask = cap_mask;
INIT_LIST_HEAD(&dma_dev->channels);


@ -99,7 +99,7 @@ struct mv_xor_device {
* @common: common dmaengine channel object members
* @slots_allocated: records the actual size of the descriptor slot pool
* @irq_tasklet: bottom half where mv_xor_slot_cleanup runs
* @op_in_desc: new mode of driver, each op is writen to descriptor.
* @op_in_desc: new mode of driver, each op is written to descriptor.
*/
struct mv_xor_chan {
int pending;


@ -175,7 +175,7 @@ struct mv_xor_v2_device {
* struct mv_xor_v2_sw_desc - implements a xor SW descriptor
* @idx: descriptor index
* @async_tx: support for the async_tx api
* @hw_desc: assosiated HW descriptor
* @hw_desc: associated HW descriptor
* @free_list: node of the free SW descriprots list
*/
struct mv_xor_v2_sw_desc {


@ -897,7 +897,7 @@ static int nbpf_config(struct dma_chan *dchan,
/*
* We could check config->slave_id to match chan->terminal here,
* but with DT they would be coming from the same source, so
* such a check would be superflous
* such a check would be superfluous
*/
chan->slave_dst_addr = config->dst_addr;


@ -26,7 +26,7 @@ static DEFINE_MUTEX(of_dma_lock);
*
* Finds a DMA controller with matching device node and number for dma cells
* in a list of registered DMA controllers. If a match is found a valid pointer
* to the DMA data stored is retuned. A NULL pointer is returned if no match is
* to the DMA data stored is returned. A NULL pointer is returned if no match is
* found.
*/
static struct of_dma *of_dma_find_controller(const struct of_phandle_args *dma_spec)
@ -342,7 +342,7 @@ EXPORT_SYMBOL_GPL(of_dma_simple_xlate);
*
* This function can be used as the of xlate callback for DMA driver which wants
* to match the channel based on the channel id. When using this xlate function
* the #dma-cells propety of the DMA controller dt node needs to be set to 1.
* the #dma-cells property of the DMA controller dt node needs to be set to 1.
* The data parameter of of_dma_controller_register must be a pointer to the
* dma_device struct the function should match upon.
*
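As a brief illustration of the registration flow described in this comment (a sketch under the assumption of a controller whose node sets #dma-cells = <1>; the wrapper function below is hypothetical, while of_dma_controller_register() and of_dma_xlate_by_chan_id() are the existing of_dma helpers):

#include <linux/of_dma.h>
#include <linux/platform_device.h>

/*
 * Sketch: register the generic by-channel-id translation for a controller
 * whose device tree node uses "#dma-cells = <1>" (the one cell being the
 * channel id), passing the controller's struct dma_device as match data.
 */
static int example_register_by_chan_id(struct platform_device *pdev,
				       struct dma_device *ddev)
{
	return of_dma_controller_register(pdev->dev.of_node,
					  of_dma_xlate_by_chan_id, ddev);
}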

@@ -1156,7 +1156,7 @@ static int owl_dma_probe(struct platform_device *pdev)
}
/*
* Eventhough the DMA controller is capable of generating 4
* Even though the DMA controller is capable of generating 4
* IRQ's for DMA priority feature, we only use 1 IRQ for
* simplification.
*/

@@ -9,7 +9,7 @@
*/
/*
* This driver supports the asynchrounous DMA copy and RAID engines available
* This driver supports the asynchronous DMA copy and RAID engines available
* on the AMCC PPC440SPe Processors.
* Based on the Intel Xscale(R) family of I/O Processors (IOP 32x, 33x, 134x)
* ADMA driver written by D.Williams.

@@ -14,7 +14,7 @@
/* Number of elements in the array with statical CDBs */
#define MAX_STAT_DMA_CDBS 16
/* Number of DMA engines available on the contoller */
/* Number of DMA engines available on the controller */
#define DMA_ENGINES_NUM 2
/* Maximum h/w supported number of destinations */

@@ -192,7 +192,7 @@ struct pt_cmd_queue {
/* Queue dma pool */
struct dma_pool *dma_pool;
/* Queue base address (not neccessarily aligned)*/
/* Queue base address (not necessarily aligned)*/
struct ptdma_desc *qbase;
/* Aligned queue start address (per requirement) */

@@ -440,7 +440,7 @@ static void bam_reset(struct bam_device *bdev)
val |= BAM_EN;
writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
/* set descriptor threshhold, start with 4 bytes */
/* set descriptor threshold, start with 4 bytes */
writel_relaxed(DEFAULT_CNT_THRSHLD,
bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
@@ -667,7 +667,7 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
for_each_sg(sgl, sg, sg_len, i)
num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_FIFO_SIZE);
/* allocate enough room to accomodate the number of entries */
/* allocate enough room to accommodate the number of entries */
async_desc = kzalloc(struct_size(async_desc, desc, num_alloc),
GFP_NOWAIT);

@@ -1856,7 +1856,7 @@ static void gpi_issue_pending(struct dma_chan *chan)
read_lock_irqsave(&gpii->pm_lock, pm_lock_flags);
/* move all submitted discriptors to issued list */
/* move all submitted descriptors to issued list */
spin_lock_irqsave(&gchan->vc.lock, flags);
if (vchan_issue_pending(&gchan->vc))
vd = list_last_entry(&gchan->vc.desc_issued,

@@ -650,7 +650,7 @@ static enum dma_status adm_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
/*
* residue is either the full length if it is in the issued list, or 0
* if it is in progress. We have no reliable way of determining
* anything inbetween
* anything in between
*/
dma_set_residue(txstate, residue);

@@ -318,7 +318,7 @@ static void sh_dmae_setup_xfer(struct shdma_chan *schan,
}
/*
* Find a slave channel configuration from the contoller list by either a slave
* Find a slave channel configuration from the controller list by either a slave
* ID in the non-DT case, or by a MID/RID value in the DT case
*/
static const struct sh_dmae_slave_config *dmae_find_slave(

@@ -4,7 +4,7 @@
#define STE_DMA40_H
/*
* Maxium size for a single dma descriptor
* Maximum size for a single dma descriptor
* Size is limited to 16 bits.
* Size is in the units of addr-widths (1,2,4,8 bytes)
* Larger transfers will be split up to multiple linked desc

@@ -369,7 +369,7 @@ struct d40_phy_lli_bidir {
* @lcsp02: Either maps to register lcsp0 if src or lcsp2 if dst.
* @lcsp13: Either maps to register lcsp1 if src or lcsp3 if dst.
*
* This struct must be 8 bytes aligned since it will be accessed directy by
* This struct must be 8 bytes aligned since it will be accessed directly by
* the DMA. Never add any none hw mapped registers to this struct.
*/

@@ -463,7 +463,7 @@ static void tegra_dma_configure_for_next(struct tegra_dma_channel *tdc,
/*
* If interrupt is pending then do nothing as the ISR will handle
* the programing for new request.
* the programming for new request.
*/
if (status & TEGRA_APBDMA_STATUS_ISE_EOC) {
dev_err(tdc2dev(tdc),

@@ -131,7 +131,6 @@ int xudma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread,
struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property);
struct device *xudma_get_device(struct udma_dev *ud);
struct k3_ringacc *xudma_get_ringacc(struct udma_dev *ud);
void xudma_dev_put(struct udma_dev *ud);
u32 xudma_dev_get_psil_base(struct udma_dev *ud);
struct udma_tisci_rm *xudma_dev_get_tisci_rm(struct udma_dev *ud);

@@ -1742,7 +1742,7 @@ static int xgene_dma_probe(struct platform_device *pdev)
/* Initialize DMA channels software state */
xgene_dma_init_channels(pdma);
/* Configue DMA rings */
/* Configure DMA rings */
ret = xgene_dma_init_rings(pdma);
if (ret)
goto err_clk_enable;

@@ -149,7 +149,7 @@ struct xilinx_dpdma_chan;
* @addr_ext: upper 16 bit of 48 bit address (next_desc and src_addr)
* @next_desc: next descriptor 32 bit address
* @src_addr: payload source address (1st page, 32 LSB)
* @addr_ext_23: payload source address (3nd and 3rd pages, 16 LSBs)
* @addr_ext_23: payload source address (2nd and 3rd pages, 16 LSBs)
* @addr_ext_45: payload source address (4th and 5th pages, 16 LSBs)
* @src_addr2: payload source address (2nd page, 32 LSB)
* @src_addr3: payload source address (3rd page, 32 LSB)
@@ -210,7 +210,7 @@ struct xilinx_dpdma_tx_desc {
* @vchan: virtual DMA channel
* @reg: register base address
* @id: channel ID
* @wait_to_stop: queue to wait for outstanding transacitons before stopping
* @wait_to_stop: queue to wait for outstanding transactions before stopping
* @running: true if the channel is running
* @first_frame: flag for the first frame of stream
* @video_group: flag if multi-channel operation is needed for video channels
@@ -670,6 +670,84 @@ static void xilinx_dpdma_chan_free_tx_desc(struct virt_dma_desc *vdesc)
kfree(desc);
}
/**
* xilinx_dpdma_chan_prep_cyclic - Prepare a cyclic dma descriptor
* @chan: DPDMA channel
* @buf_addr: buffer address
* @buf_len: buffer length
* @period_len: length of a single period in bytes
* @flags: tx flags argument passed in to prepare function
*
* Prepare a tx descriptor including internal software/hardware descriptors
* for the given cyclic transaction.
*
* Return: A dma async tx descriptor on success, or NULL.
*/
static struct dma_async_tx_descriptor *
xilinx_dpdma_chan_prep_cyclic(struct xilinx_dpdma_chan *chan,
dma_addr_t buf_addr, size_t buf_len,
size_t period_len, unsigned long flags)
{
struct xilinx_dpdma_tx_desc *tx_desc;
struct xilinx_dpdma_sw_desc *sw_desc, *last = NULL;
unsigned int periods = buf_len / period_len;
unsigned int i;
tx_desc = xilinx_dpdma_chan_alloc_tx_desc(chan);
if (!tx_desc)
return NULL;
for (i = 0; i < periods; i++) {
struct xilinx_dpdma_hw_desc *hw_desc;
if (!IS_ALIGNED(buf_addr, XILINX_DPDMA_ALIGN_BYTES)) {
dev_err(chan->xdev->dev,
"buffer should be aligned at %d B\n",
XILINX_DPDMA_ALIGN_BYTES);
goto error;
}
sw_desc = xilinx_dpdma_chan_alloc_sw_desc(chan);
if (!sw_desc)
goto error;
xilinx_dpdma_sw_desc_set_dma_addrs(chan->xdev, sw_desc, last,
&buf_addr, 1);
hw_desc = &sw_desc->hw;
hw_desc->xfer_size = period_len;
hw_desc->hsize_stride =
FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_HSIZE_MASK,
period_len) |
FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_STRIDE_MASK,
period_len);
hw_desc->control = XILINX_DPDMA_DESC_CONTROL_PREEMBLE |
XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE |
XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
list_add_tail(&sw_desc->node, &tx_desc->descriptors);
buf_addr += period_len;
last = sw_desc;
}
sw_desc = list_first_entry(&tx_desc->descriptors,
struct xilinx_dpdma_sw_desc, node);
last->hw.next_desc = lower_32_bits(sw_desc->dma_addr);
if (chan->xdev->ext_addr)
last->hw.addr_ext |=
FIELD_PREP(XILINX_DPDMA_DESC_ADDR_EXT_NEXT_ADDR_MASK,
upper_32_bits(sw_desc->dma_addr));
last->hw.control |= XILINX_DPDMA_DESC_CONTROL_LAST_OF_FRAME;
return vchan_tx_prep(&chan->vchan, &tx_desc->vdesc, flags);
error:
xilinx_dpdma_chan_free_tx_desc(&tx_desc->vdesc);
return NULL;
}
/**
* xilinx_dpdma_chan_prep_interleaved_dma - Prepare an interleaved dma
* descriptor
@@ -1189,6 +1267,23 @@ out_unlock:
/* -----------------------------------------------------------------------------
* DMA Engine Operations
*/
static struct dma_async_tx_descriptor *
xilinx_dpdma_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t buf_addr,
size_t buf_len, size_t period_len,
enum dma_transfer_direction direction,
unsigned long flags)
{
struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
if (direction != DMA_MEM_TO_DEV)
return NULL;
if (buf_len % period_len)
return NULL;
return xilinx_dpdma_chan_prep_cyclic(chan, buf_addr, buf_len,
period_len, flags);
}
static struct dma_async_tx_descriptor *
xilinx_dpdma_prep_interleaved_dma(struct dma_chan *dchan,
@@ -1672,6 +1767,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SLAVE, ddev->cap_mask);
dma_cap_set(DMA_PRIVATE, ddev->cap_mask);
dma_cap_set(DMA_CYCLIC, ddev->cap_mask);
dma_cap_set(DMA_INTERLEAVE, ddev->cap_mask);
dma_cap_set(DMA_REPEAT, ddev->cap_mask);
dma_cap_set(DMA_LOAD_EOT, ddev->cap_mask);
@@ -1679,6 +1775,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
ddev->device_alloc_chan_resources = xilinx_dpdma_alloc_chan_resources;
ddev->device_free_chan_resources = xilinx_dpdma_free_chan_resources;
ddev->device_prep_dma_cyclic = xilinx_dpdma_prep_dma_cyclic;
ddev->device_prep_interleaved_dma = xilinx_dpdma_prep_interleaved_dma;
/* TODO: Can we achieve better granularity ? */
ddev->device_tx_status = dma_cookie_status;
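For orientation, the following is a minimal sketch of how a dmaengine client could drive the cyclic mode added above; the channel name "vid0", the error handling and the buffer parameters are assumptions for illustration, while dma_request_chan(), dmaengine_prep_dma_cyclic(), dmaengine_submit() and dma_async_issue_pending() are the standard client API.

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/*
 * Sketch of a client starting a cyclic mem-to-dev transfer.  "vid0" and
 * the parameters are made-up example values.
 */
static int example_start_cyclic(struct device *dev, dma_addr_t buf,
				size_t buf_len, size_t period_len)
{
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;
	dma_cookie_t cookie;

	chan = dma_request_chan(dev, "vid0");	/* hypothetical channel name */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	/* buf_len must be a whole number of periods, as checked above */
	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -EINVAL;
	}

	cookie = dmaengine_submit(desc);	/* queue the descriptor */
	dma_async_issue_pending(chan);		/* start the transfer */

	return dma_submit_error(cookie);
}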

@@ -22,10 +22,10 @@
#include "../dmaengine.h"
/* Register Offsets */
#define ZYNQMP_DMA_ISR 0x100
#define ZYNQMP_DMA_IMR 0x104
#define ZYNQMP_DMA_IER 0x108
#define ZYNQMP_DMA_IDS 0x10C
#define ZYNQMP_DMA_ISR (chan->irq_offset + 0x100)
#define ZYNQMP_DMA_IMR (chan->irq_offset + 0x104)
#define ZYNQMP_DMA_IER (chan->irq_offset + 0x108)
#define ZYNQMP_DMA_IDS (chan->irq_offset + 0x10c)
#define ZYNQMP_DMA_CTRL0 0x110
#define ZYNQMP_DMA_CTRL1 0x114
#define ZYNQMP_DMA_DATA_ATTR 0x120
@@ -145,6 +145,9 @@
#define tx_to_desc(tx) container_of(tx, struct zynqmp_dma_desc_sw, \
async_tx)
/* IRQ Register offset for Versal Gen 2 */
#define IRQ_REG_OFFSET 0x308
/**
* struct zynqmp_dma_desc_ll - Hw linked list descriptor
* @addr: Buffer address
@@ -211,6 +214,7 @@ struct zynqmp_dma_desc_sw {
* @bus_width: Bus width
* @src_burst_len: Source burst length
* @dst_burst_len: Dest burst length
* @irq_offset: Irq register offset
*/
struct zynqmp_dma_chan {
struct zynqmp_dma_device *zdev;
@@ -235,6 +239,7 @@ struct zynqmp_dma_chan {
u32 bus_width;
u32 src_burst_len;
u32 dst_burst_len;
u32 irq_offset;
};
/**
@@ -253,6 +258,14 @@ struct zynqmp_dma_device {
struct clk *clk_apb;
};
struct zynqmp_dma_config {
u32 offset;
};
static const struct zynqmp_dma_config versal2_dma_config = {
.offset = IRQ_REG_OFFSET,
};
static inline void zynqmp_dma_writeq(struct zynqmp_dma_chan *chan, u32 reg,
u64 value)
{
@@ -892,6 +905,7 @@ static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *zdev,
{
struct zynqmp_dma_chan *chan;
struct device_node *node = pdev->dev.of_node;
const struct zynqmp_dma_config *match_data;
int err;
chan = devm_kzalloc(zdev->dev, sizeof(*chan), GFP_KERNEL);
@@ -919,6 +933,10 @@ static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *zdev,
return -EINVAL;
}
match_data = of_device_get_match_data(&pdev->dev);
if (match_data)
chan->irq_offset = match_data->offset;
chan->is_dmacoherent = of_property_read_bool(node, "dma-coherent");
zdev->chan = chan;
tasklet_setup(&chan->tasklet, zynqmp_dma_do_tasklet);
@@ -1161,6 +1179,7 @@ static void zynqmp_dma_remove(struct platform_device *pdev)
}
static const struct of_device_id zynqmp_dma_of_match[] = {
{ .compatible = "amd,versal2-dma-1.0", .data = &versal2_dma_config },
{ .compatible = "xlnx,zynqmp-dma-1.0", },
{}
};
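To illustrate what the new match data changes in practice: on "amd,versal2-dma-1.0" the per-channel interrupt registers are simply shifted up by IRQ_REG_OFFSET, so ISR/IMR/IER/IDS resolve to 0x408/0x40c/0x410/0x414 instead of 0x100/0x104/0x108/0x10c. The helper and names below are invented for this sketch; only the offset values come from the macros above.

#include <linux/io.h>
#include <linux/types.h>

#define EXAMPLE_IRQ_REG_OFFSET	0x308	/* IRQ_REG_OFFSET above */
#define EXAMPLE_ZYNQMP_DMA_ISR	0x100	/* legacy ZynqMP ISR offset */

/* Illustrative only: how the offset in versal2_dma_config moves the ISR */
static u32 example_read_isr(void __iomem *reg_base, bool is_versal_gen2)
{
	u32 irq_offset = is_versal_gen2 ? EXAMPLE_IRQ_REG_OFFSET : 0;

	/* ZynqMP: 0x100, Versal Gen 2: 0x308 + 0x100 = 0x408 */
	return readl(reg_base + irq_offset + EXAMPLE_ZYNQMP_DMA_ISR);
}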

@@ -1,174 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2008
* Guennadi Liakhovetski, DENX Software Engineering, <lg@denx.de>
*
* Copyright (C) 2005-2007 Freescale Semiconductor, Inc.
*/
#ifndef __LINUX_DMA_IPU_DMA_H
#define __LINUX_DMA_IPU_DMA_H
#include <linux/types.h>
#include <linux/dmaengine.h>
/* IPU DMA Controller channel definitions. */
enum ipu_channel {
IDMAC_IC_0 = 0, /* IC (encoding task) to memory */
IDMAC_IC_1 = 1, /* IC (viewfinder task) to memory */
IDMAC_ADC_0 = 1,
IDMAC_IC_2 = 2,
IDMAC_ADC_1 = 2,
IDMAC_IC_3 = 3,
IDMAC_IC_4 = 4,
IDMAC_IC_5 = 5,
IDMAC_IC_6 = 6,
IDMAC_IC_7 = 7, /* IC (sensor data) to memory */
IDMAC_IC_8 = 8,
IDMAC_IC_9 = 9,
IDMAC_IC_10 = 10,
IDMAC_IC_11 = 11,
IDMAC_IC_12 = 12,
IDMAC_IC_13 = 13,
IDMAC_SDC_0 = 14, /* Background synchronous display data */
IDMAC_SDC_1 = 15, /* Foreground data (overlay) */
IDMAC_SDC_2 = 16,
IDMAC_SDC_3 = 17,
IDMAC_ADC_2 = 18,
IDMAC_ADC_3 = 19,
IDMAC_ADC_4 = 20,
IDMAC_ADC_5 = 21,
IDMAC_ADC_6 = 22,
IDMAC_ADC_7 = 23,
IDMAC_PF_0 = 24,
IDMAC_PF_1 = 25,
IDMAC_PF_2 = 26,
IDMAC_PF_3 = 27,
IDMAC_PF_4 = 28,
IDMAC_PF_5 = 29,
IDMAC_PF_6 = 30,
IDMAC_PF_7 = 31,
};
/* Order significant! */
enum ipu_channel_status {
IPU_CHANNEL_FREE,
IPU_CHANNEL_INITIALIZED,
IPU_CHANNEL_READY,
IPU_CHANNEL_ENABLED,
};
#define IPU_CHANNELS_NUM 32
enum pixel_fmt {
/* 1 byte */
IPU_PIX_FMT_GENERIC,
IPU_PIX_FMT_RGB332,
IPU_PIX_FMT_YUV420P,
IPU_PIX_FMT_YUV422P,
IPU_PIX_FMT_YUV420P2,
IPU_PIX_FMT_YVU422P,
/* 2 bytes */
IPU_PIX_FMT_RGB565,
IPU_PIX_FMT_RGB666,
IPU_PIX_FMT_BGR666,
IPU_PIX_FMT_YUYV,
IPU_PIX_FMT_UYVY,
/* 3 bytes */
IPU_PIX_FMT_RGB24,
IPU_PIX_FMT_BGR24,
/* 4 bytes */
IPU_PIX_FMT_GENERIC_32,
IPU_PIX_FMT_RGB32,
IPU_PIX_FMT_BGR32,
IPU_PIX_FMT_ABGR32,
IPU_PIX_FMT_BGRA32,
IPU_PIX_FMT_RGBA32,
};
enum ipu_color_space {
IPU_COLORSPACE_RGB,
IPU_COLORSPACE_YCBCR,
IPU_COLORSPACE_YUV
};
/*
* Enumeration of IPU rotation modes
*/
enum ipu_rotate_mode {
/* Note the enum values correspond to BAM value */
IPU_ROTATE_NONE = 0,
IPU_ROTATE_VERT_FLIP = 1,
IPU_ROTATE_HORIZ_FLIP = 2,
IPU_ROTATE_180 = 3,
IPU_ROTATE_90_RIGHT = 4,
IPU_ROTATE_90_RIGHT_VFLIP = 5,
IPU_ROTATE_90_RIGHT_HFLIP = 6,
IPU_ROTATE_90_LEFT = 7,
};
/*
* Enumeration of DI ports for ADC.
*/
enum display_port {
DISP0,
DISP1,
DISP2,
DISP3
};
struct idmac_video_param {
unsigned short in_width;
unsigned short in_height;
uint32_t in_pixel_fmt;
unsigned short out_width;
unsigned short out_height;
uint32_t out_pixel_fmt;
unsigned short out_stride;
bool graphics_combine_en;
bool global_alpha_en;
bool key_color_en;
enum display_port disp;
unsigned short out_left;
unsigned short out_top;
};
/*
* Union of initialization parameters for a logical channel. So far only video
* parameters are used.
*/
union ipu_channel_param {
struct idmac_video_param video;
};
struct idmac_tx_desc {
struct dma_async_tx_descriptor txd;
struct scatterlist *sg; /* scatterlist for this */
unsigned int sg_len; /* tx-descriptor. */
struct list_head list;
};
struct idmac_channel {
struct dma_chan dma_chan;
dma_cookie_t completed; /* last completed cookie */
union ipu_channel_param params;
enum ipu_channel link; /* input channel, linked to the output */
enum ipu_channel_status status;
void *client; /* Only one client per channel */
unsigned int n_tx_desc;
struct idmac_tx_desc *desc; /* allocated tx-descriptors */
struct scatterlist *sg[2]; /* scatterlist elements in buffer-0 and -1 */
struct list_head free_list; /* free tx-descriptors */
struct list_head queue; /* queued tx-descriptors */
spinlock_t lock; /* protects sg[0,1], queue */
struct mutex chan_mutex; /* protects status, cookie, free_list */
bool sec_chan_en;
int active_buffer;
unsigned int eof_irq;
char eof_name[16]; /* EOF IRQ name for request_irq() */
};
#define to_tx_desc(tx) container_of(tx, struct idmac_tx_desc, txd)
#define to_idmac_chan(c) container_of(c, struct idmac_channel, dma_chan)
#endif /* __LINUX_DMA_IPU_DMA_H */

@@ -136,8 +136,6 @@ u32 k3_udma_glue_rx_flow_get_fdq_id(struct k3_udma_glue_rx_channel *rx_chn,
u32 k3_udma_glue_rx_get_flow_id_base(struct k3_udma_glue_rx_channel *rx_chn);
int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn,
u32 flow_num);
void k3_udma_glue_rx_put_irq(struct k3_udma_glue_rx_channel *rx_chn,
u32 flow_num);
void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
u32 flow_num, void *data,
void (*cleanup)(void *data, dma_addr_t desc_dma),

@@ -2709,6 +2709,9 @@
#define PCI_DEVICE_ID_INTEL_82815_MC 0x1130
#define PCI_DEVICE_ID_INTEL_82815_CGC 0x1132
#define PCI_DEVICE_ID_INTEL_SST_TNG 0x119a
#define PCI_DEVICE_ID_INTEL_DSA_GNRD 0x11fb
#define PCI_DEVICE_ID_INTEL_DSA_DMR 0x1212
#define PCI_DEVICE_ID_INTEL_IAA_DMR 0x1216
#define PCI_DEVICE_ID_INTEL_82092AA_0 0x1221
#define PCI_DEVICE_ID_INTEL_82437 0x122d
#define PCI_DEVICE_ID_INTEL_82371FB_0 0x122e

@@ -0,0 +1,36 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2023-2024, Advanced Micro Devices, Inc.
*/
#ifndef _PLATDATA_AMD_QDMA_H
#define _PLATDATA_AMD_QDMA_H
#include <linux/dmaengine.h>
/**
* struct qdma_queue_info - DMA queue information. This information is used to
* match queue when DMA channel is requested
* @dir: Channel transfer direction
*/
struct qdma_queue_info {
enum dma_transfer_direction dir;
};
#define QDMA_FILTER_PARAM(qinfo) ((void *)(qinfo))
struct dma_slave_map;
/**
* struct qdma_platdata - Platform specific data for QDMA engine
* @max_mm_channels: Maximum number of MM DMA channels in each direction
* @device_map: DMA slave map
* @irq_index: The index of first IRQ
*/
struct qdma_platdata {
u32 max_mm_channels;
u32 irq_index;
struct dma_slave_map *device_map;
};
#endif /* _PLATDATA_AMD_QDMA_H */
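
A hedged sketch of how platform code might populate this header: the device name "example-accel", the channel labels, the single MM channel pair and the include path are assumptions for illustration, while struct dma_slave_map, struct qdma_queue_info, struct qdma_platdata and QDMA_FILTER_PARAM() come from the interface above.

#include <linux/dmaengine.h>
#include <linux/platform_data/amd_qdma.h>	/* assumed install path for this header */

/* One H2C (mem-to-dev) and one C2H (dev-to-mem) queue description */
static struct qdma_queue_info example_h2c = { .dir = DMA_MEM_TO_DEV };
static struct qdma_queue_info example_c2h = { .dir = DMA_DEV_TO_MEM };

/* Map the queues to a hypothetical client device, "example-accel" */
static struct dma_slave_map example_qdma_map[] = {
	{
		.devname = "example-accel",
		.slave	 = "h2c",
		.param	 = QDMA_FILTER_PARAM(&example_h2c),
	},
	{
		.devname = "example-accel",
		.slave	 = "c2h",
		.param	 = QDMA_FILTER_PARAM(&example_c2h),
	},
};

static struct qdma_platdata example_qdma_pdata = {
	.max_mm_channels = 1,			/* one MM channel per direction */
	.irq_index	 = 0,			/* first IRQ vector to use */
	.device_map	 = example_qdma_map,
};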