dmaengine updates for 4.15-rc1
Updates for this cycle include:
 - New driver for Spreadtrum dma controller, ST MDMA and DMAMUX controllers
 - PM support for IMG MDC drivers
 - Updates to bcm-sba-raid driver and improvements to sun6i driver
 - Subsystem conversion for:
    - timers to use timer_setup()
    - remove usage of PCI pool API
    - usage of %p format specifier
 - Minor updates to bunch of drivers
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAABAgAGBQJaCn48AAoJEHwUBw8lI4NHTe8P/RpJH8tDat/joT7Hl71stEod
vKa0iSkW2fdwd6PeaRfd+UTloska1NE9rdgfh8pCVveoHjCPQBVBOC7V8DbMtlsi
/IlJjFT74wl2R1aSHcSGoLGsIEyurz+9SK88qCU54OQSjVHSnfmyGI4ycTLQGH9U
zce5JHWHB5MkdftM4eJaSE/t0Md1DBkxadFSQRkwQqqDqoLE7jgJUK0TADRukQqS
fsDYPh/OhYAizAHlmEGuLZQheN0ld5W7n1sGsEnBD88wtBMvYHzAwT17B+BobxEp
jyaoE5nV4AgqWh1mvixrmgKoj2KL3DDC+QeoHYCExdcgIrvc86xN3homx9g9y38a
b99pgDDvXjw4N7S6AmRyQlm/5D0QyjUaoHgGklsaR3ix81dFwDY15aZa8/uQ4EAT
iKH8DxAgOq6aG1MkUycQ/7QTenRbN4yWQQa+Mm5ncoNU8bpazyxf2l5L9OJWpFjX
Q6VagNim+plGeUhpJ4IEfPi7LChXFaYsb1D7A/dqpIRvaYzwsy80b/DNhobGMDF6
eTpny64AKHnozWw/KP5k3DfcYvoU/ytcSsWf8h+CPN7EdLMBqUXFgkVwtyf6WKNc
UPl+2in08GLgfGb+n2IAdaQzlJ4dK2P7f7mx0T4OvRymu35HXd8nJjmMJ5ZyBr1t
Z/0JVfcA66AL+XSt179C
=t9Ix
-----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "Updates for this cycle include:

   - new driver for Spreadtrum dma controller, ST MDMA and DMAMUX
     controllers

   - PM support for IMG MDC drivers

   - updates to bcm-sba-raid driver and improvements to sun6i driver

   - subsystem conversion for:
      - timers to use timer_setup()
      - remove usage of PCI pool API
      - usage of %p format specifier

   - minor updates to bunch of drivers"

* tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (49 commits)
  dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
  dmaengine: dmatest: warn user when dma test times out
  dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
  dmaengine: stm32_mdma: activate pack/unpack feature
  dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
  dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
  MAINTAINERS: Step down from a co-maintaner of DW DMAC driver
  dmaengine: pch_dma: Replace PCI pool old API
  dmaengine: Convert timers to use timer_setup()
  dmaengine: sprd: Add Spreadtrum DMA driver
  dt-bindings: dmaengine: Add Spreadtrum SC9860 DMA controller
  dmaengine: sun6i: Retrieve channel count/max request from devicetree
  dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
  dmaengine: bcm-sba-raid: Use common GPL comment header
  dmaengine: bcm-sba-raid: Use only single mailbox channel
  dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
  dmaengine: pl330: fix descriptor allocation fail
  dmaengine: rcar-dmac: use TCRB instead of TCR for residue
  dmaengine: sun6i: Add support for Allwinner A64 and compatibles
  arm64: allwinner: a64: Add devicetree binding for DMA controller
  ...
This commit is contained in:

commit 23c258763b
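One of the tree-wide cleanups pulled in here is the timer conversion. As a rough sketch of the timer_setup() pattern — my_chan/my_watchdog are placeholder names, not from any driver below; the real conversions are in the imx-dma and ioat hunks further down:

	#include <linux/timer.h>

	struct my_chan {
		struct timer_list watchdog;
		/* ... driver state ... */
	};

	/* New-style callback: receives the timer itself and recovers the
	 * enclosing structure with from_timer() (a container_of() helper),
	 * instead of casting an opaque unsigned long as the old
	 * setup_timer()/init_timer() callbacks did.
	 */
	static void my_watchdog(struct timer_list *t)
	{
		struct my_chan *chan = from_timer(chan, t, watchdog);

		/* ... handle the watchdog expiry for chan ... */
	}

	static void my_chan_init(struct my_chan *chan)
	{
		/* was: setup_timer(&chan->watchdog, fn, (unsigned long)chan); */
		timer_setup(&chan->watchdog, my_watchdog, 0);
	}

The signature change is the point of the conversion: the compiler can now type-check the callback, and the cast that type-confused callbacks could get wrong is gone.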
@@ -3,6 +3,8 @@
 Required Properties:
 -compatible: "renesas,<soctype>-usb-dmac", "renesas,usb-dmac" as fallback.
 	Examples with soctypes are:
+	  - "renesas,r8a7743-usb-dmac" (RZ/G1M)
+	  - "renesas,r8a7745-usb-dmac" (RZ/G1E)
 	  - "renesas,r8a7790-usb-dmac" (R-Car H2)
 	  - "renesas,r8a7791-usb-dmac" (R-Car M2-W)
 	  - "renesas,r8a7793-usb-dmac" (R-Car M2-N)

@@ -0,0 +1,41 @@
+* Spreadtrum DMA controller
+
+This binding follows the generic DMA bindings defined in dma.txt.
+
+Required properties:
+- compatible: Should be "sprd,sc9860-dma".
+- reg: Should contain DMA registers location and length.
+- interrupts: Should contain one interrupt shared by all channels.
+- #dma-cells: must be <1>. Used to represent the number of integer
+	cells in the dmas property of client device.
+- #dma-channels : Number of DMA channels supported. Should be 32.
+- clock-names: Should contain the clock of the DMA controller.
+- clocks: Should contain a clock specifier for each entry in clock-names.
+
+Example:
+
+Controller:
+	apdma: dma-controller@20100000 {
+		compatible = "sprd,sc9860-dma";
+		reg = <0x20100000 0x4000>;
+		interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
+		#dma-cells = <1>;
+		#dma-channels = <32>;
+		clock-names = "enable";
+		clocks = <&clk_ap_ahb_gates 5>;
+	};
+
+Client:
+DMA clients connected to the Spreadtrum DMA controller must use the format
+described in the dma.txt file, using a two-cell specifier for each channel.
+The two cells in order are:
+1. A phandle pointing to the DMA controller.
+2. The channel id.
+
+	spi0: spi@70a00000 {
+		...
+		dma-names = "rx_chn", "tx_chn";
+		dmas = <&apdma 11>, <&apdma 12>;
+		...
+	};

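A consumer driver would claim these channels through the dma-names given above. A minimal, hypothetical sketch — sprd_client_probe and the trimmed error handling are illustrative only, not part of the binding or the driver:

	#include <linux/dmaengine.h>
	#include <linux/err.h>
	#include <linux/platform_device.h>

	static int sprd_client_probe(struct platform_device *pdev)
	{
		struct dma_chan *chan;

		/* Looks up "rx_chn" in this node's dma-names and resolves
		 * the matching <&apdma 11> specifier via the controller's
		 * #dma-cells = <1> translation.
		 */
		chan = dma_request_chan(&pdev->dev, "rx_chn");
		if (IS_ERR(chan))
			return PTR_ERR(chan);

		/* ... dmaengine_slave_config(), prep and submit ... */

		dma_release_channel(chan);
		return 0;
	}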
@@ -13,6 +13,7 @@ Required properties:
 - #dma-cells : Must be <4>. See DMA client paragraph for more details.
 
 Optional properties:
+- dma-requests : Number of DMA requests supported.
 - resets: Reference to a reset controller asserting the DMA controller
 - st,mem2mem: boolean; if defined, it indicates that the controller supports
   memory-to-memory transfer
@@ -34,12 +35,13 @@ Example:
 		#dma-cells = <4>;
 		st,mem2mem;
 		resets = <&rcc 150>;
+		dma-requests = <8>;
 	};
 
 * DMA client
 
 DMA clients connected to the STM32 DMA controller must use the format
-described in the dma.txt file, using a five-cell specifier for each
+described in the dma.txt file, using a four-cell specifier for each
 channel: a phandle to the DMA controller plus the following four integer cells:
 
 1. The channel id

@@ -0,0 +1,84 @@
+STM32 DMA MUX (DMA request router)
+
+Required properties:
+- compatible:	"st,stm32h7-dmamux"
+- reg:		Memory map for accessing module
+- #dma-cells:	Should be set to <3>.
+		First parameter is request line number.
+		Second is DMA channel configuration
+		Third is Fifo threshold
+		For more details about the three cells, please see
+		stm32-dma.txt documentation binding file
+- dma-masters:	Phandle pointing to the DMA controllers.
+		Several controllers are allowed. Only "st,stm32-dma"
+		compatible DMA controllers are supported.
+
+Optional properties:
+- dma-channels : Number of DMA requests supported.
+- dma-requests : Number of DMAMUX requests supported.
+- resets: Reference to a reset controller asserting the DMA controller
+- clocks: Input clock of the DMAMUX instance.
+
+Example:
+
+/* DMA controller 1 */
+dma1: dma-controller@40020000 {
+	compatible = "st,stm32-dma";
+	reg = <0x40020000 0x400>;
+	interrupts = <11>,
+		     <12>,
+		     <13>,
+		     <14>,
+		     <15>,
+		     <16>,
+		     <17>,
+		     <47>;
+	clocks = <&timer_clk>;
+	#dma-cells = <4>;
+	st,mem2mem;
+	resets = <&rcc 150>;
+	dma-channels = <8>;
+	dma-requests = <8>;
+};
+
+/* DMA controller 2 */
+dma2: dma@40020400 {
+	compatible = "st,stm32-dma";
+	reg = <0x40020400 0x400>;
+	interrupts = <56>,
+		     <57>,
+		     <58>,
+		     <59>,
+		     <60>,
+		     <68>,
+		     <69>,
+		     <70>;
+	clocks = <&timer_clk>;
+	#dma-cells = <4>;
+	st,mem2mem;
+	resets = <&rcc 150>;
+	dma-channels = <8>;
+	dma-requests = <8>;
+};
+
+/* DMA mux */
+dmamux1: dma-router@40020800 {
+	compatible = "st,stm32h7-dmamux";
+	reg = <0x40020800 0x3c>;
+	#dma-cells = <3>;
+	dma-requests = <128>;
+	dma-channels = <16>;
+	dma-masters = <&dma1 &dma2>;
+	clocks = <&timer_clk>;
+};
+
+/* DMA client */
+usart1: serial@40011000 {
+	compatible = "st,stm32-usart", "st,stm32-uart";
+	reg = <0x40011000 0x400>;
+	interrupts = <37>;
+	clocks = <&timer_clk>;
+	dmas = <&dmamux1 41 0x414 0>,
+	       <&dmamux1 42 0x414 0>;
+	dma-names = "rx", "tx";
+};

@@ -0,0 +1,94 @@
+* STMicroelectronics STM32 MDMA controller
+
+The STM32 MDMA is a general-purpose direct memory access controller capable of
+supporting 64 independent DMA channels with 256 HW requests.
+
+Required properties:
+- compatible: Should be "st,stm32h7-mdma"
+- reg: Should contain MDMA registers location and length. This should include
+  all of the per-channel registers.
+- interrupts: Should contain the MDMA interrupt.
+- clocks: Should contain the input clock of the DMA instance.
+- resets: Reference to a reset controller asserting the DMA controller.
+- #dma-cells : Must be <5>. See DMA client paragraph for more details.
+
+Optional properties:
+- dma-channels: Number of DMA channels supported by the controller.
+- dma-requests: Number of DMA request signals supported by the controller.
+- st,ahb-addr-masks: Array of u32 mask to list memory devices addressed via
+  AHB bus.
+
+Example:
+
+	mdma1: dma@52000000 {
+		compatible = "st,stm32h7-mdma";
+		reg = <0x52000000 0x1000>;
+		interrupts = <122>;
+		clocks = <&timer_clk>;
+		resets = <&rcc 992>;
+		#dma-cells = <5>;
+		dma-channels = <16>;
+		dma-requests = <32>;
+		st,ahb-addr-masks = <0x20000000>, <0x00000000>;
+	};
+
+* DMA client
+
+DMA clients connected to the STM32 MDMA controller must use the format
+described in the dma.txt file, using a five-cell specifier for each channel:
+a phandle to the MDMA controller plus the following five integer cells:
+
+1. The request line number
+2. The priority level
+	0x00: Low
+	0x01: Medium
+	0x10: High
+	0x11: Very high
+3. A 32bit mask specifying the DMA channel configuration
+	-bit 0-1: Source increment mode
+		0x00: Source address pointer is fixed
+		0x10: Source address pointer is incremented after each data
+		transfer
+		0x11: Source address pointer is decremented after each data
+		transfer
+	-bit 2-3: Destination increment mode
+		0x00: Destination address pointer is fixed
+		0x10: Destination address pointer is incremented after each data
+		transfer
+		0x11: Destination address pointer is decremented after each data
+		transfer
+	-bit 8-9: Source increment offset size
+		0x00: byte (8bit)
+		0x01: half-word (16bit)
+		0x10: word (32bit)
+		0x11: double-word (64bit)
+	-bit 10-11: Destination increment offset size
+		0x00: byte (8bit)
+		0x01: half-word (16bit)
+		0x10: word (32bit)
+		0x11: double-word (64bit)
+	-bit 18-25: The number of bytes to be transferred in a single transfer
+		(min = 1 byte, max = 128 bytes)
+	-bit 28-29: Trigger Mode
+		0x00: Each MDMA request triggers a buffer transfer (max 128 bytes)
+		0x01: Each MDMA request triggers a block transfer (max 64K bytes)
+		0x10: Each MDMA request triggers a repeated block transfer
+		0x11: Each MDMA request triggers a linked list transfer
+4. A 32bit value specifying the register to be used to acknowledge the request
+   if no HW ack signal is used by the MDMA client
+5. A 32bit mask specifying the value to be written to acknowledge the request
+   if no HW ack signal is used by the MDMA client
+
+Example:
+
+	i2c4: i2c@5c002000 {
+		compatible = "st,stm32f7-i2c";
+		reg = <0x5c002000 0x400>;
+		interrupts = <95>,
+			     <96>;
+		clocks = <&timer_clk>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+		dmas = <&mdma1 36 0x0 0x40008 0x0 0x0>,
+		       <&mdma1 37 0x0 0x40002 0x0 0x0>;
+		dma-names = "rx", "tx";
+		status = "disabled";
+	};

@@ -27,6 +27,32 @@ Example:
 		#dma-cells = <1>;
 	};
 
+------------------------------------------------------------------------------
+For A64 DMA controller:
+
+Required properties:
+- compatible:	"allwinner,sun50i-a64-dma"
+- dma-channels: Number of DMA channels supported by the controller.
+		Refer to Documentation/devicetree/bindings/dma/dma.txt
+- all properties above, i.e. reg, interrupts, clocks, resets and #dma-cells
+
+Optional properties:
+- dma-requests: Number of DMA request signals supported by the controller.
+		Refer to Documentation/devicetree/bindings/dma/dma.txt
+
+Example:
+	dma: dma-controller@1c02000 {
+		compatible = "allwinner,sun50i-a64-dma";
+		reg = <0x01c02000 0x1000>;
+		interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&ccu CLK_BUS_DMA>;
+		dma-channels = <8>;
+		dma-requests = <27>;
+		resets = <&ccu RST_BUS_DMA>;
+		#dma-cells = <1>;
+	};
+------------------------------------------------------------------------------
+
 Clients:
 
 DMA clients connected to the A31 DMA controller must use the format

@@ -12947,7 +12947,7 @@ F:	Documentation/devicetree/bindings/arc/axs10*
 
 SYNOPSYS DESIGNWARE DMAC DRIVER
 M:	Viresh Kumar <vireshk@kernel.org>
-M:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+R:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 S:	Maintained
 F:	include/linux/dma/dw.h
 F:	include/linux/platform_data/dma-dw.h

@@ -115,7 +115,7 @@ config BCM_SBA_RAID
 	select DMA_ENGINE_RAID
 	select ASYNC_TX_DISABLE_XOR_VAL_DMA
 	select ASYNC_TX_DISABLE_PQ_VAL_DMA
-	default ARCH_BCM_IPROC
+	default m if ARCH_BCM_IPROC
 	help
 	  Enable support for Broadcom SBA RAID Engine. The SBA RAID
 	  engine is available on most of the Broadcom iProc SoCs. It
@@ -483,6 +483,35 @@ config STM32_DMA
 	  If you have a board based on such a MCU and wish to use DMA say Y
 	  here.
 
+config STM32_DMAMUX
+	bool "STMicroelectronics STM32 dma multiplexer support"
+	depends on STM32_DMA || COMPILE_TEST
+	help
+	  Enable support for the on-chip DMA multiplexer on STMicroelectronics
+	  STM32 MCUs.
+	  If you have a board based on such a MCU and wish to use DMAMUX say Y
+	  here.
+
+config STM32_MDMA
+	bool "STMicroelectronics STM32 master dma support"
+	depends on ARCH_STM32 || COMPILE_TEST
+	depends on OF
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for the on-chip MDMA controller on STMicroelectronics
+	  STM32 platforms.
+	  If you have a board based on STM32 SoC and wish to use the master DMA
+	  say Y here.
+
+config SPRD_DMA
+	tristate "Spreadtrum DMA support"
+	depends on ARCH_SPRD || COMPILE_TEST
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for the on-chip DMA controller on Spreadtrum platform.
+
 config S3C24XX_DMAC
 	bool "Samsung S3C24XX DMA support"
 	depends on ARCH_S3C24XX || COMPILE_TEST

@@ -60,6 +60,9 @@ obj-$(CONFIG_RENESAS_DMA) += sh/
 obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
 obj-$(CONFIG_STM32_DMA) += stm32-dma.o
+obj-$(CONFIG_STM32_DMAMUX) += stm32-dmamux.o
+obj-$(CONFIG_STM32_MDMA) += stm32-mdma.o
+obj-$(CONFIG_SPRD_DMA) += sprd-dma.o
 obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
 obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
 obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o

@@ -385,7 +385,7 @@ static void vdbg_dump_regs(struct at_dma_chan *atchan) {}
 static void atc_dump_lli(struct at_dma_chan *atchan, struct at_lli *lli)
 {
 	dev_crit(chan2dev(&atchan->chan_common),
-		 " desc: s%pad d%pad ctrl0x%x:0x%x l0x%pad\n",
+		 "desc: s%pad d%pad ctrl0x%x:0x%x l%pad\n",
 		 &lli->saddr, &lli->daddr,
 		 lli->ctrla, lli->ctrlb, &lli->dscr);
 }

@@ -1,9 +1,14 @@
 /*
  * Copyright (C) 2017 Broadcom
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
  */
 
 /*
@@ -25,11 +30,8 @@
  *
  * The Broadcom SBA RAID driver does not require any register programming
  * except submitting request to SBA hardware device via mailbox channels.
- * This driver implements a DMA device with one DMA channel using a set
- * of mailbox channels provided by Broadcom SoC specific ring manager
- * driver. To exploit parallelism (as described above), all DMA request
- * coming to SBA RAID DMA channel are broken down to smaller requests
- * and submitted to multiple mailbox channels in round-robin fashion.
+ * This driver implements a DMA device with one DMA channel using a single
+ * mailbox channel provided by Broadcom SoC specific ring manager driver.
  * For having more SBA DMA channels, we can create more SBA device nodes
  * in Broadcom SoC specific DTS based on number of hardware rings supported
  * by Broadcom SoC ring manager.
@@ -85,6 +87,7 @@
 #define SBA_CMD_GALOIS			0xe
 
 #define SBA_MAX_REQ_PER_MBOX_CHANNEL	8192
+#define SBA_MAX_MSG_SEND_PER_MBOX_CHANNEL	8
 
 /* Driver helper macros */
 #define to_sba_request(tx)		\
@@ -142,9 +145,7 @@ struct sba_device {
 	u32 max_cmds_pool_size;
 	/* Maibox client and Mailbox channels */
 	struct mbox_client client;
-	int mchans_count;
-	atomic_t mchans_current;
-	struct mbox_chan **mchans;
+	struct mbox_chan *mchan;
 	struct device *mbox_dev;
 	/* DMA device and DMA channel */
 	struct dma_device dma_dev;
@@ -200,14 +201,6 @@ static inline u32 __pure sba_cmd_pq_c_mdata(u32 d, u32 b1, u32 b0)
 
 /* ====== General helper routines ===== */
 
-static void sba_peek_mchans(struct sba_device *sba)
-{
-	int mchan_idx;
-
-	for (mchan_idx = 0; mchan_idx < sba->mchans_count; mchan_idx++)
-		mbox_client_peek_data(sba->mchans[mchan_idx]);
-}
-
 static struct sba_request *sba_alloc_request(struct sba_device *sba)
 {
 	bool found = false;
@@ -231,7 +224,7 @@ static struct sba_request *sba_alloc_request(struct sba_device *sba)
 		 * would have completed which will create more
 		 * room for new requests.
 		 */
-		sba_peek_mchans(sba);
+		mbox_client_peek_data(sba->mchan);
 		return NULL;
 	}
 
@@ -369,15 +362,11 @@ static void sba_cleanup_pending_requests(struct sba_device *sba)
 static int sba_send_mbox_request(struct sba_device *sba,
 				 struct sba_request *req)
 {
-	int mchans_idx, ret = 0;
-
-	/* Select mailbox channel in round-robin fashion */
-	mchans_idx = atomic_inc_return(&sba->mchans_current);
-	mchans_idx = mchans_idx % sba->mchans_count;
+	int ret = 0;
 
 	/* Send message for the request */
 	req->msg.error = 0;
-	ret = mbox_send_message(sba->mchans[mchans_idx], &req->msg);
+	ret = mbox_send_message(sba->mchan, &req->msg);
 	if (ret < 0) {
 		dev_err(sba->dev, "send message failed with error %d", ret);
 		return ret;
@@ -390,7 +379,7 @@ static int sba_send_mbox_request(struct sba_device *sba,
 	}
 
 	/* Signal txdone for mailbox channel */
-	mbox_client_txdone(sba->mchans[mchans_idx], ret);
+	mbox_client_txdone(sba->mchan, ret);
 
 	return ret;
 }
@@ -402,13 +391,8 @@ static void _sba_process_pending_requests(struct sba_device *sba)
 	u32 count;
 	struct sba_request *req;
 
-	/*
-	 * Process few pending requests
-	 *
-	 * For now, we process (<number_of_mailbox_channels> * 8)
-	 * number of requests at a time.
-	 */
-	count = sba->mchans_count * 8;
+	/* Process few pending requests */
+	count = SBA_MAX_MSG_SEND_PER_MBOX_CHANNEL;
 	while (!list_empty(&sba->reqs_pending_list) && count) {
 		/* Get the first pending request */
 		req = list_first_entry(&sba->reqs_pending_list,
@@ -442,7 +426,9 @@ static void sba_process_received_request(struct sba_device *sba,
 
 		WARN_ON(tx->cookie < 0);
 		if (tx->cookie > 0) {
+			spin_lock_irqsave(&sba->reqs_lock, flags);
 			dma_cookie_complete(tx);
+			spin_unlock_irqrestore(&sba->reqs_lock, flags);
 			dmaengine_desc_get_callback_invoke(tx, NULL);
 			dma_descriptor_unmap(tx);
 			tx->callback = NULL;
@@ -570,7 +556,7 @@ static enum dma_status sba_tx_status(struct dma_chan *dchan,
 	if (ret == DMA_COMPLETE)
 		return ret;
 
-	sba_peek_mchans(sba);
+	mbox_client_peek_data(sba->mchan);
 
 	return dma_cookie_status(dchan, cookie, txstate);
 }
@@ -1637,7 +1623,7 @@ static int sba_async_register(struct sba_device *sba)
 
 static int sba_probe(struct platform_device *pdev)
 {
-	int i, ret = 0, mchans_count;
+	int ret = 0;
 	struct sba_device *sba;
 	struct platform_device *mbox_pdev;
 	struct of_phandle_args args;
@@ -1650,12 +1636,11 @@ static int sba_probe(struct platform_device *pdev)
 	sba->dev = &pdev->dev;
 	platform_set_drvdata(pdev, sba);
 
-	/* Number of channels equals number of mailbox channels */
+	/* Number of mailbox channels should be at least 1 */
 	ret = of_count_phandle_with_args(pdev->dev.of_node,
 					 "mboxes", "#mbox-cells");
 	if (ret <= 0)
 		return -ENODEV;
-	mchans_count = ret;
 
 	/* Determine SBA version from DT compatible string */
 	if (of_device_is_compatible(sba->dev->of_node, "brcm,iproc-sba"))
@@ -1688,7 +1673,7 @@ static int sba_probe(struct platform_device *pdev)
 	default:
 		return -EINVAL;
 	}
-	sba->max_req = SBA_MAX_REQ_PER_MBOX_CHANNEL * mchans_count;
+	sba->max_req = SBA_MAX_REQ_PER_MBOX_CHANNEL;
 	sba->max_cmd_per_req = sba->max_pq_srcs + 3;
 	sba->max_xor_srcs = sba->max_cmd_per_req - 1;
 	sba->max_resp_pool_size = sba->max_req * sba->hw_resp_size;
@@ -1702,55 +1687,30 @@ static int sba_probe(struct platform_device *pdev)
 	sba->client.knows_txdone	= true;
 	sba->client.tx_tout		= 0;
 
-	/* Allocate mailbox channel array */
-	sba->mchans = devm_kcalloc(&pdev->dev, mchans_count,
-				   sizeof(*sba->mchans), GFP_KERNEL);
-	if (!sba->mchans)
-		return -ENOMEM;
-
-	/* Request mailbox channels */
-	sba->mchans_count = 0;
-	for (i = 0; i < mchans_count; i++) {
-		sba->mchans[i] = mbox_request_channel(&sba->client, i);
-		if (IS_ERR(sba->mchans[i])) {
-			ret = PTR_ERR(sba->mchans[i]);
-			goto fail_free_mchans;
-		}
-		sba->mchans_count++;
-	}
-	atomic_set(&sba->mchans_current, 0);
+	/* Request mailbox channel */
+	sba->mchan = mbox_request_channel(&sba->client, 0);
+	if (IS_ERR(sba->mchan)) {
+		ret = PTR_ERR(sba->mchan);
+		goto fail_free_mchan;
+	}
 
 	/* Find-out underlying mailbox device */
 	ret = of_parse_phandle_with_args(pdev->dev.of_node,
 					 "mboxes", "#mbox-cells", 0, &args);
 	if (ret)
-		goto fail_free_mchans;
+		goto fail_free_mchan;
 	mbox_pdev = of_find_device_by_node(args.np);
 	of_node_put(args.np);
 	if (!mbox_pdev) {
 		ret = -ENODEV;
-		goto fail_free_mchans;
+		goto fail_free_mchan;
 	}
 	sba->mbox_dev = &mbox_pdev->dev;
 
-	/* All mailbox channels should be of same ring manager device */
-	for (i = 1; i < mchans_count; i++) {
-		ret = of_parse_phandle_with_args(pdev->dev.of_node,
-						 "mboxes", "#mbox-cells", i, &args);
-		if (ret)
-			goto fail_free_mchans;
-		mbox_pdev = of_find_device_by_node(args.np);
-		of_node_put(args.np);
-		if (sba->mbox_dev != &mbox_pdev->dev) {
-			ret = -EINVAL;
-			goto fail_free_mchans;
-		}
-	}
-
 	/* Prealloc channel resource */
 	ret = sba_prealloc_channel_resources(sba);
 	if (ret)
-		goto fail_free_mchans;
+		goto fail_free_mchan;
 
 	/* Check availability of debugfs */
 	if (!debugfs_initialized())
@@ -1777,24 +1737,22 @@ static int sba_probe(struct platform_device *pdev)
 		goto fail_free_resources;
 
 	/* Print device info */
-	dev_info(sba->dev, "%s using SBAv%d and %d mailbox channels",
-		 dma_chan_name(&sba->dma_chan), sba->ver+1,
-		 sba->mchans_count);
+	dev_info(sba->dev, "%s using SBAv%d mailbox channel from %s",
+		 dma_chan_name(&sba->dma_chan), sba->ver+1,
+		 dev_name(sba->mbox_dev));
 
 	return 0;
 
 fail_free_resources:
 	debugfs_remove_recursive(sba->root);
 	sba_freeup_channel_resources(sba);
-fail_free_mchans:
-	for (i = 0; i < sba->mchans_count; i++)
-		mbox_free_channel(sba->mchans[i]);
+fail_free_mchan:
+	mbox_free_channel(sba->mchan);
 	return ret;
 }
 
 static int sba_remove(struct platform_device *pdev)
 {
-	int i;
 	struct sba_device *sba = platform_get_drvdata(pdev);
 
 	dma_async_device_unregister(&sba->dma_dev);
@@ -1803,8 +1761,7 @@ static int sba_remove(struct platform_device *pdev)
 
 	sba_freeup_channel_resources(sba);
 
-	for (i = 0; i < sba->mchans_count; i++)
-		mbox_free_channel(sba->mchans[i]);
+	mbox_free_channel(sba->mchan);
 
 	return 0;
 }

@@ -1319,8 +1319,8 @@ static void coh901318_list_print(struct coh901318_chan *cohc,
 	int i = 0;
 
 	while (l) {
-		dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src 0x%pad"
-			 ", dst 0x%pad, link 0x%pad virt_link_addr 0x%p\n",
+		dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src %pad"
+			 ", dst %pad, link %pad virt_link_addr 0x%p\n",
 			 i, l, l->control, &l->src_addr, &l->dst_addr,
 			 &l->link_addr, l->virt_link_addr);
 		i++;
@@ -2231,7 +2231,7 @@ coh901318_prep_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 	spin_lock_irqsave(&cohc->lock, flg);
 
 	dev_vdbg(COHC_2_DEV(cohc),
-		 "[%s] channel %d src 0x%pad dest 0x%pad size %zu\n",
+		 "[%s] channel %d src %pad dest %pad size %zu\n",
 		 __func__, cohc->id, &src, &dest, size);
 
 	if (flags & DMA_PREP_INTERRUPT)

@@ -72,6 +72,9 @@
 
 #define AXI_DMAC_FLAG_CYCLIC		BIT(0)
 
+/* The maximum ID allocated by the hardware is 31 */
+#define AXI_DMAC_SG_UNUSED 32U
+
 struct axi_dmac_sg {
 	dma_addr_t src_addr;
 	dma_addr_t dest_addr;
@@ -80,6 +83,7 @@ struct axi_dmac_sg {
 	unsigned int dest_stride;
 	unsigned int src_stride;
 	unsigned int id;
+	bool schedule_when_free;
 };
 
 struct axi_dmac_desc {
@@ -200,11 +204,21 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	}
 	sg = &desc->sg[desc->num_submitted];
 
+	/* Already queued in cyclic mode. Wait for it to finish */
+	if (sg->id != AXI_DMAC_SG_UNUSED) {
+		sg->schedule_when_free = true;
+		return;
+	}
+
 	desc->num_submitted++;
-	if (desc->num_submitted == desc->num_sgs)
-		chan->next_desc = NULL;
-	else
+	if (desc->num_submitted == desc->num_sgs) {
+		if (desc->cyclic)
+			desc->num_submitted = 0; /* Start again */
+		else
+			chan->next_desc = NULL;
+	} else {
 		chan->next_desc = desc;
+	}
 
 	sg->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
@@ -220,9 +234,11 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 
 	/*
 	 * If the hardware supports cyclic transfers and there is no callback to
-	 * call, enable hw cyclic mode to avoid unnecessary interrupts.
+	 * call and only a single segment, enable hw cyclic mode to avoid
+	 * unnecessary interrupts.
 	 */
-	if (chan->hw_cyclic && desc->cyclic && !desc->vdesc.tx.callback)
+	if (chan->hw_cyclic && desc->cyclic && !desc->vdesc.tx.callback &&
+		desc->num_sgs == 1)
 		flags |= AXI_DMAC_FLAG_CYCLIC;
 
 	axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->x_len - 1);
@@ -237,37 +253,52 @@ static struct axi_dmac_desc *axi_dmac_active_desc(struct axi_dmac_chan *chan)
 		struct axi_dmac_desc, vdesc.node);
 }
 
-static void axi_dmac_transfer_done(struct axi_dmac_chan *chan,
+static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 	unsigned int completed_transfers)
 {
 	struct axi_dmac_desc *active;
 	struct axi_dmac_sg *sg;
+	bool start_next = false;
 
 	active = axi_dmac_active_desc(chan);
 	if (!active)
-		return;
+		return false;
 
-	if (active->cyclic) {
-		vchan_cyclic_callback(&active->vdesc);
-	} else {
-		do {
-			sg = &active->sg[active->num_completed];
-			if (!(BIT(sg->id) & completed_transfers))
-				break;
-			active->num_completed++;
-			if (active->num_completed == active->num_sgs) {
+	do {
+		sg = &active->sg[active->num_completed];
+		if (sg->id == AXI_DMAC_SG_UNUSED) /* Not yet submitted */
+			break;
+		if (!(BIT(sg->id) & completed_transfers))
+			break;
+		active->num_completed++;
+		sg->id = AXI_DMAC_SG_UNUSED;
+		if (sg->schedule_when_free) {
+			sg->schedule_when_free = false;
+			start_next = true;
+		}
+
+		if (active->cyclic)
+			vchan_cyclic_callback(&active->vdesc);
+
+		if (active->num_completed == active->num_sgs) {
+			if (active->cyclic) {
+				active->num_completed = 0; /* wrap around */
+			} else {
 				list_del(&active->vdesc.node);
 				vchan_cookie_complete(&active->vdesc);
 				active = axi_dmac_active_desc(chan);
 			}
-		} while (active);
-	}
+		}
+	} while (active);
+
+	return start_next;
 }
 
 static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid)
 {
 	struct axi_dmac *dmac = devid;
 	unsigned int pending;
+	bool start_next = false;
 
 	pending = axi_dmac_read(dmac, AXI_DMAC_REG_IRQ_PENDING);
 	if (!pending)
@@ -281,10 +312,10 @@ static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid)
 		unsigned int completed;
 
 		completed = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_DONE);
-		axi_dmac_transfer_done(&dmac->chan, completed);
+		start_next = axi_dmac_transfer_done(&dmac->chan, completed);
 	}
 	/* Space has become available in the descriptor queue */
-	if (pending & AXI_DMAC_IRQ_SOT)
+	if ((pending & AXI_DMAC_IRQ_SOT) || start_next)
 		axi_dmac_start_transfer(&dmac->chan);
 	spin_unlock(&dmac->chan.vchan.lock);
@@ -334,12 +365,16 @@ static void axi_dmac_issue_pending(struct dma_chan *c)
 static struct axi_dmac_desc *axi_dmac_alloc_desc(unsigned int num_sgs)
 {
 	struct axi_dmac_desc *desc;
+	unsigned int i;
 
 	desc = kzalloc(sizeof(struct axi_dmac_desc) +
 		sizeof(struct axi_dmac_sg) * num_sgs, GFP_NOWAIT);
 	if (!desc)
 		return NULL;
 
+	for (i = 0; i < num_sgs; i++)
+		desc->sg[i].id = AXI_DMAC_SG_UNUSED;
+
 	desc->num_sgs = num_sgs;
 
 	return desc;

@@ -702,6 +702,7 @@ static int dmatest_func(void *data)
 			 * free it this time?" dancing. For now, just
 			 * leave it dangling.
 			 */
+			WARN(1, "dmatest: Kernel stack may be corrupted!!\n");
 			dmaengine_unmap_put(um);
 			result("test timed out", total_tests, src_off, dst_off,
 			       len, 0);

@@ -891,6 +891,10 @@ static int edma_slave_config(struct dma_chan *chan,
 	    cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
 		return -EINVAL;
 
+	if (cfg->src_maxburst > chan->device->max_burst ||
+	    cfg->dst_maxburst > chan->device->max_burst)
+		return -EINVAL;
+
 	memcpy(&echan->cfg, cfg, sizeof(echan->cfg));
 
 	return 0;
@@ -1868,6 +1872,7 @@ static void edma_dma_init(struct edma_cc *ecc, bool legacy_mode)
 	s_ddev->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
 	s_ddev->directions |= (BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV));
 	s_ddev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+	s_ddev->max_burst = SZ_32K - 1; /* CIDX: 16bit signed */
 
 	s_ddev->dev = ecc->dev;
 	INIT_LIST_HEAD(&s_ddev->channels);

@@ -23,6 +23,7 @@
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
@@ -730,14 +731,23 @@ static int mdc_slave_config(struct dma_chan *chan,
 	return 0;
 }
 
+static int mdc_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct mdc_chan *mchan = to_mdc_chan(chan);
+	struct device *dev = mdma2dev(mchan->mdma);
+
+	return pm_runtime_get_sync(dev);
+}
+
 static void mdc_free_chan_resources(struct dma_chan *chan)
 {
 	struct mdc_chan *mchan = to_mdc_chan(chan);
 	struct mdc_dma *mdma = mchan->mdma;
+	struct device *dev = mdma2dev(mdma);
 
 	mdc_terminate_all(chan);
-
 	mdma->soc->disable_chan(mchan);
+	pm_runtime_put(dev);
 }
 
 static irqreturn_t mdc_chan_irq(int irq, void *dev_id)
@@ -854,6 +864,22 @@ static const struct of_device_id mdc_dma_of_match[] = {
 };
 MODULE_DEVICE_TABLE(of, mdc_dma_of_match);
 
+static int img_mdc_runtime_suspend(struct device *dev)
+{
+	struct mdc_dma *mdma = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(mdma->clk);
+
+	return 0;
+}
+
+static int img_mdc_runtime_resume(struct device *dev)
+{
+	struct mdc_dma *mdma = dev_get_drvdata(dev);
+
+	return clk_prepare_enable(mdma->clk);
+}
+
 static int mdc_dma_probe(struct platform_device *pdev)
 {
 	struct mdc_dma *mdma;
@@ -883,10 +909,6 @@ static int mdc_dma_probe(struct platform_device *pdev)
 	if (IS_ERR(mdma->clk))
 		return PTR_ERR(mdma->clk);
 
-	ret = clk_prepare_enable(mdma->clk);
-	if (ret)
-		return ret;
-
 	dma_cap_zero(mdma->dma_dev.cap_mask);
 	dma_cap_set(DMA_SLAVE, mdma->dma_dev.cap_mask);
 	dma_cap_set(DMA_PRIVATE, mdma->dma_dev.cap_mask);
@@ -919,12 +941,13 @@ static int mdc_dma_probe(struct platform_device *pdev)
 				   "img,max-burst-multiplier",
 				   &mdma->max_burst_mult);
 	if (ret)
-		goto disable_clk;
+		return ret;
 
 	mdma->dma_dev.dev = &pdev->dev;
 	mdma->dma_dev.device_prep_slave_sg = mdc_prep_slave_sg;
 	mdma->dma_dev.device_prep_dma_cyclic = mdc_prep_dma_cyclic;
 	mdma->dma_dev.device_prep_dma_memcpy = mdc_prep_dma_memcpy;
+	mdma->dma_dev.device_alloc_chan_resources = mdc_alloc_chan_resources;
 	mdma->dma_dev.device_free_chan_resources = mdc_free_chan_resources;
 	mdma->dma_dev.device_tx_status = mdc_tx_status;
 	mdma->dma_dev.device_issue_pending = mdc_issue_pending;
@@ -945,15 +968,14 @@ static int mdc_dma_probe(struct platform_device *pdev)
 		mchan->mdma = mdma;
 		mchan->chan_nr = i;
 		mchan->irq = platform_get_irq(pdev, i);
-		if (mchan->irq < 0) {
-			ret = mchan->irq;
-			goto disable_clk;
-		}
+		if (mchan->irq < 0)
+			return mchan->irq;
+
 		ret = devm_request_irq(&pdev->dev, mchan->irq, mdc_chan_irq,
 				       IRQ_TYPE_LEVEL_HIGH,
 				       dev_name(&pdev->dev), mchan);
 		if (ret < 0)
-			goto disable_clk;
+			return ret;
 
 		mchan->vc.desc_free = mdc_desc_free;
 		vchan_init(&mchan->vc, &mdma->dma_dev);
@@ -962,14 +984,19 @@ static int mdc_dma_probe(struct platform_device *pdev)
 	mdma->desc_pool = dmam_pool_create(dev_name(&pdev->dev), &pdev->dev,
 					   sizeof(struct mdc_hw_list_desc),
 					   4, 0);
-	if (!mdma->desc_pool) {
-		ret = -ENOMEM;
-		goto disable_clk;
+	if (!mdma->desc_pool)
+		return -ENOMEM;
+
+	pm_runtime_enable(&pdev->dev);
+	if (!pm_runtime_enabled(&pdev->dev)) {
+		ret = img_mdc_runtime_resume(&pdev->dev);
+		if (ret)
+			return ret;
 	}
 
 	ret = dma_async_device_register(&mdma->dma_dev);
 	if (ret)
-		goto disable_clk;
+		goto suspend;
 
 	ret = of_dma_controller_register(pdev->dev.of_node, mdc_of_xlate, mdma);
 	if (ret)
@@ -982,8 +1009,10 @@ static int mdc_dma_probe(struct platform_device *pdev)
 
 unregister:
 	dma_async_device_unregister(&mdma->dma_dev);
-disable_clk:
-	clk_disable_unprepare(mdma->clk);
+suspend:
+	if (!pm_runtime_enabled(&pdev->dev))
+		img_mdc_runtime_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
 	return ret;
 }
@@ -1004,14 +1033,47 @@ static int mdc_dma_remove(struct platform_device *pdev)
 		tasklet_kill(&mchan->vc.task);
 	}
 
-	clk_disable_unprepare(mdma->clk);
+	pm_runtime_disable(&pdev->dev);
+	if (!pm_runtime_status_suspended(&pdev->dev))
+		img_mdc_runtime_suspend(&pdev->dev);
 
 	return 0;
 }
 
+#ifdef CONFIG_PM_SLEEP
+static int img_mdc_suspend_late(struct device *dev)
+{
+	struct mdc_dma *mdma = dev_get_drvdata(dev);
+	int i;
+
+	/* Check that all channels are idle */
+	for (i = 0; i < mdma->nr_channels; i++) {
+		struct mdc_chan *mchan = &mdma->channels[i];
+
+		if (unlikely(mchan->desc))
+			return -EBUSY;
+	}
+
+	return pm_runtime_force_suspend(dev);
+}
+
+static int img_mdc_resume_early(struct device *dev)
+{
+	return pm_runtime_force_resume(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops img_mdc_pm_ops = {
+	SET_RUNTIME_PM_OPS(img_mdc_runtime_suspend,
+			   img_mdc_runtime_resume, NULL)
+	SET_LATE_SYSTEM_SLEEP_PM_OPS(img_mdc_suspend_late,
+				     img_mdc_resume_early)
+};
+
 static struct platform_driver mdc_dma_driver = {
 	.driver = {
 		.name = "img-mdc-dma",
+		.pm = &img_mdc_pm_ops,
 		.of_match_table = of_match_ptr(mdc_dma_of_match),
 	},
 	.probe = mdc_dma_probe,

@@ -364,9 +364,9 @@ static void imxdma_disable_hw(struct imxdma_channel *imxdmac)
 	local_irq_restore(flags);
 }
 
-static void imxdma_watchdog(unsigned long data)
+static void imxdma_watchdog(struct timer_list *t)
 {
-	struct imxdma_channel *imxdmac = (struct imxdma_channel *)data;
+	struct imxdma_channel *imxdmac = from_timer(imxdmac, t, watchdog);
 	struct imxdma_engine *imxdma = imxdmac->imxdma;
 	int channel = imxdmac->channel;
@@ -1153,9 +1153,7 @@ static int __init imxdma_probe(struct platform_device *pdev)
 		}
 
 		imxdmac->irq = irq + i;
-		init_timer(&imxdmac->watchdog);
-		imxdmac->watchdog.function = &imxdma_watchdog;
-		imxdmac->watchdog.data = (unsigned long)imxdmac;
+		timer_setup(&imxdmac->watchdog, imxdma_watchdog, 0);
 	}
 
 	imxdmac->imxdma = imxdma;

@@ -178,6 +178,14 @@
 #define SDMA_WATERMARK_LEVEL_HWE	BIT(29)
 #define SDMA_WATERMARK_LEVEL_CONT	BIT(31)
 
+#define SDMA_DMA_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
+
+#define SDMA_DMA_DIRECTIONS	(BIT(DMA_DEV_TO_MEM) | \
+				 BIT(DMA_MEM_TO_DEV) | \
+				 BIT(DMA_DEV_TO_DEV))
+
 /*
  * Mode/Count of data node descriptors - IPCv2
  */
@@ -1851,9 +1859,9 @@ static int sdma_probe(struct platform_device *pdev)
 	sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic;
 	sdma->dma_device.device_config = sdma_config;
 	sdma->dma_device.device_terminate_all = sdma_disable_channel_with_delay;
-	sdma->dma_device.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-	sdma->dma_device.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-	sdma->dma_device.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	sdma->dma_device.src_addr_widths = SDMA_DMA_BUSWIDTHS;
+	sdma->dma_device.dst_addr_widths = SDMA_DMA_BUSWIDTHS;
+	sdma->dma_device.directions = SDMA_DMA_DIRECTIONS;
 	sdma->dma_device.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
 	sdma->dma_device.device_issue_pending = sdma_issue_pending;
 	sdma->dma_device.dev->dma_parms = &sdma->dma_parms;

@@ -474,7 +474,7 @@ int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs)
 	if (time_is_before_jiffies(ioat_chan->timer.expires)
 	    && timer_pending(&ioat_chan->timer)) {
 		mod_timer(&ioat_chan->timer, jiffies + COMPLETION_TIMEOUT);
-		ioat_timer_event((unsigned long)ioat_chan);
+		ioat_timer_event(&ioat_chan->timer);
 	}
 
 	return -ENOMEM;
@@ -862,9 +862,9 @@ static void check_active(struct ioatdma_chan *ioat_chan)
 		mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
 }
 
-void ioat_timer_event(unsigned long data)
+void ioat_timer_event(struct timer_list *t)
 {
-	struct ioatdma_chan *ioat_chan = to_ioat_chan((void *)data);
+	struct ioatdma_chan *ioat_chan = from_timer(ioat_chan, t, timer);
 	dma_addr_t phys_complete;
 	u64 status;

@@ -406,10 +406,9 @@ enum dma_status
 ioat_tx_status(struct dma_chan *c, dma_cookie_t cookie,
 		struct dma_tx_state *txstate);
 void ioat_cleanup_event(unsigned long data);
-void ioat_timer_event(unsigned long data);
+void ioat_timer_event(struct timer_list *t);
 int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs);
 void ioat_issue_pending(struct dma_chan *chan);
-void ioat_timer_event(unsigned long data);
 
 /* IOAT Init functions */
 bool is_bwd_ioat(struct pci_dev *pdev);

@@ -760,7 +760,7 @@ ioat_init_channel(struct ioatdma_device *ioat_dma,
 	dma_cookie_init(&ioat_chan->dma_chan);
 	list_add_tail(&ioat_chan->dma_chan.device_node, &dma->channels);
 	ioat_dma->idx[idx] = ioat_chan;
-	setup_timer(&ioat_chan->timer, ioat_timer_event, data);
+	timer_setup(&ioat_chan->timer, ioat_timer_event, 0);
 	tasklet_init(&ioat_chan->cleanup_task, ioat_cleanup_event, data);
 }

@@ -1286,7 +1286,6 @@ MODULE_DEVICE_TABLE(of, nbpf_match);
 static int nbpf_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	const struct of_device_id *of_id = of_match_device(nbpf_match, dev);
 	struct device_node *np = dev->of_node;
 	struct nbpf_device *nbpf;
 	struct dma_device *dma_dev;
@@ -1300,10 +1299,10 @@ static int nbpf_probe(struct platform_device *pdev)
 	BUILD_BUG_ON(sizeof(struct nbpf_desc_page) > PAGE_SIZE);
 
 	/* DT only */
-	if (!np || !of_id || !of_id->data)
+	if (!np)
 		return -ENODEV;
 
-	cfg = of_id->data;
+	cfg = of_device_get_match_data(dev);
 	num_channels = cfg->num_channels;
 
 	nbpf = devm_kzalloc(dev, sizeof(*nbpf) + num_channels *

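The nbpf hunk above is one instance of a recurring cleanup in this cycle: of_device_get_match_data() fetches the .data of the matched of_device_id directly, so the driver no longer needs the of_match_device() dance with a NULL-checked of_id. A generic sketch under assumed names (example_cfg/example_probe are illustrative, not the nbpf code):

	#include <linux/of_device.h>
	#include <linux/platform_device.h>

	struct example_cfg {
		int num_channels;
	};

	static int example_probe(struct platform_device *pdev)
	{
		const struct example_cfg *cfg;

		/* Returns the .data field of the matching of_device_id
		 * entry, or NULL when nothing matched.
		 */
		cfg = of_device_get_match_data(&pdev->dev);
		if (!cfg)
			return -ENODEV;

		/* ... use cfg->num_channels ... */
		return 0;
	}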
@@ -1288,6 +1288,10 @@ static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config
 	    cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
 		return -EINVAL;
 
+	if (cfg->src_maxburst > chan->device->max_burst ||
+	    cfg->dst_maxburst > chan->device->max_burst)
+		return -EINVAL;
+
 	memcpy(&c->cfg, cfg, sizeof(c->cfg));
 
 	return 0;
@@ -1482,6 +1486,7 @@ static int omap_dma_probe(struct platform_device *pdev)
 	od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS;
 	od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+	od->ddev.max_burst = SZ_16M - 1; /* CCEN: 24bit unsigned */
 	od->ddev.dev = &pdev->dev;
 	INIT_LIST_HEAD(&od->ddev.channels);
 	spin_lock_init(&od->lock);

@@ -123,7 +123,7 @@ struct pch_dma_chan {
 struct pch_dma {
 	struct dma_device	dma;
 	void __iomem *membase;
-	struct pci_pool		*pool;
+	struct dma_pool		*pool;
 	struct pch_dma_regs	regs;
 	struct pch_dma_desc_regs ch_regs[MAX_CHAN_NR];
 	struct pch_dma_chan	channels[MAX_CHAN_NR];
@@ -437,7 +437,7 @@ static struct pch_dma_desc *pdc_alloc_desc(struct dma_chan *chan, gfp_t flags)
 	struct pch_dma *pd = to_pd(chan->device);
 	dma_addr_t addr;
 
-	desc = pci_pool_zalloc(pd->pool, flags, &addr);
+	desc = dma_pool_zalloc(pd->pool, flags, &addr);
 	if (desc) {
 		INIT_LIST_HEAD(&desc->tx_list);
 		dma_async_tx_descriptor_init(&desc->txd, chan);
@@ -549,7 +549,7 @@ static void pd_free_chan_resources(struct dma_chan *chan)
 	spin_unlock_irq(&pd_chan->lock);
 
 	list_for_each_entry_safe(desc, _d, &tmp_list, desc_node)
-		pci_pool_free(pd->pool, desc, desc->txd.phys);
+		dma_pool_free(pd->pool, desc, desc->txd.phys);
 
 	pdc_enable_irq(chan, 0);
 }
@@ -880,7 +880,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
 		goto err_iounmap;
 	}
 
-	pd->pool = pci_pool_create("pch_dma_desc_pool", pdev,
+	pd->pool = dma_pool_create("pch_dma_desc_pool", &pdev->dev,
 				   sizeof(struct pch_dma_desc), 4, 0);
 	if (!pd->pool) {
 		dev_err(&pdev->dev, "Failed to alloc DMA descriptors\n");
@@ -931,7 +931,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
 	return 0;
 
 err_free_pool:
-	pci_pool_destroy(pd->pool);
+	dma_pool_destroy(pd->pool);
 err_free_irq:
 	free_irq(pdev->irq, pd);
 err_iounmap:
@@ -963,7 +963,7 @@ static void pch_dma_remove(struct pci_dev *pdev)
 			tasklet_kill(&pd_chan->tasklet);
 		}
 
-		pci_pool_destroy(pd->pool);
+		dma_pool_destroy(pd->pool);
 		pci_iounmap(pdev, pd->membase);
 		pci_release_regions(pdev);
 		pci_disable_device(pdev);

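The pch_dma hunks above are mechanical: the pci_pool API was only ever a thin wrapper around dma_pool, so each call maps one-to-one, with the pci_dev argument replaced by its embedded struct device. A sketch of the mapping (example_pool_init is an illustrative name, not driver code):

	#include <linux/dmapool.h>
	#include <linux/pci.h>

	/* pci_pool_create(name, pdev, size, align, boundary)
	 *   -> dma_pool_create(name, &pdev->dev, size, align, boundary)
	 * pci_pool_zalloc / pci_pool_free / pci_pool_destroy
	 *   -> dma_pool_zalloc / dma_pool_free / dma_pool_destroy
	 * and struct pci_pool becomes struct dma_pool.
	 */
	static struct dma_pool *example_pool_init(struct pci_dev *pdev,
						  size_t size)
	{
		/* 4-byte alignment, no boundary-crossing restriction */
		return dma_pool_create("example_pool", &pdev->dev,
				       size, 4, 0);
	}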
@@ -2390,7 +2390,8 @@ static inline void _init_desc(struct dma_pl330_desc *desc)
 }
 
 /* Returns the number of descriptors added to the DMAC pool */
-static int add_desc(struct pl330_dmac *pl330, gfp_t flg, int count)
+static int add_desc(struct list_head *pool, spinlock_t *lock,
+		    gfp_t flg, int count)
 {
 	struct dma_pl330_desc *desc;
 	unsigned long flags;
@@ -2400,27 +2401,28 @@ static int add_desc(struct pl330_dmac *pl330, gfp_t flg, int count)
 	if (!desc)
 		return 0;
 
-	spin_lock_irqsave(&pl330->pool_lock, flags);
+	spin_lock_irqsave(lock, flags);
 
 	for (i = 0; i < count; i++) {
 		_init_desc(&desc[i]);
-		list_add_tail(&desc[i].node, &pl330->desc_pool);
+		list_add_tail(&desc[i].node, pool);
 	}
 
-	spin_unlock_irqrestore(&pl330->pool_lock, flags);
+	spin_unlock_irqrestore(lock, flags);
 
 	return count;
 }
 
-static struct dma_pl330_desc *pluck_desc(struct pl330_dmac *pl330)
+static struct dma_pl330_desc *pluck_desc(struct list_head *pool,
+					 spinlock_t *lock)
 {
 	struct dma_pl330_desc *desc = NULL;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pl330->pool_lock, flags);
+	spin_lock_irqsave(lock, flags);
 
-	if (!list_empty(&pl330->desc_pool)) {
-		desc = list_entry(pl330->desc_pool.next,
+	if (!list_empty(pool)) {
+		desc = list_entry(pool->next,
 				  struct dma_pl330_desc, node);
 
 		list_del_init(&desc->node);
@@ -2429,7 +2431,7 @@ static struct dma_pl330_desc *pluck_desc(struct pl330_dmac *pl330)
 		desc->txd.callback = NULL;
 	}
 
-	spin_unlock_irqrestore(&pl330->pool_lock, flags);
+	spin_unlock_irqrestore(lock, flags);
 
 	return desc;
 }
@@ -2441,20 +2443,18 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
 	struct dma_pl330_desc *desc;
 
 	/* Pluck one desc from the pool of DMAC */
-	desc = pluck_desc(pl330);
+	desc = pluck_desc(&pl330->desc_pool, &pl330->pool_lock);
 
 	/* If the DMAC pool is empty, alloc new */
 	if (!desc) {
-		if (!add_desc(pl330, GFP_ATOMIC, 1))
-			return NULL;
+		DEFINE_SPINLOCK(lock);
+		LIST_HEAD(pool);
 
-		/* Try again */
-		desc = pluck_desc(pl330);
-		if (!desc) {
-			dev_err(pch->dmac->ddma.dev,
-				"%s:%d ALERT!\n", __func__, __LINE__);
+		if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
 			return NULL;
-		}
+
+		desc = pluck_desc(&pool, &lock);
+		WARN_ON(!desc || !list_empty(&pool));
 	}
 
 	/* Initialize the descriptor */
@@ -2868,7 +2868,8 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	spin_lock_init(&pl330->pool_lock);
 
 	/* Create a descriptor pool of default size */
-	if (!add_desc(pl330, GFP_KERNEL, NR_DEFAULT_DESC))
+	if (!add_desc(&pl330->desc_pool, &pl330->pool_lock,
+		      GFP_KERNEL, NR_DEFAULT_DESC))
 		dev_warn(&adev->dev, "unable to allocate desc\n");
 
 	INIT_LIST_HEAD(&pd->channels);

@@ -46,6 +46,7 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_dma.h>
+#include <linux/circ_buf.h>
 #include <linux/clk.h>
 #include <linux/dmaengine.h>
 #include <linux/pm_runtime.h>
@@ -78,6 +79,8 @@ struct bam_async_desc {
 
 	struct bam_desc_hw *curr_desc;
 
+	/* list node for the desc in the bam_chan list of descriptors */
+	struct list_head desc_node;
 	enum dma_transfer_direction dir;
 	size_t length;
 	struct bam_desc_hw desc[0];
@@ -347,6 +350,8 @@ static const struct reg_offset_data bam_v1_7_reg_info[] = {
 #define BAM_DESC_FIFO_SIZE	SZ_32K
 #define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
 #define BAM_FIFO_SIZE	(SZ_32K - 8)
+#define IS_BUSY(chan)	(CIRC_SPACE(bchan->tail, bchan->head,\
+			MAX_DESCRIPTORS + 1) == 0)
 
 struct bam_chan {
 	struct virt_dma_chan vc;
@@ -356,8 +361,6 @@ struct bam_chan {
 	/* configuration from device tree */
 	u32 id;
 
-	struct bam_async_desc *curr_txd;	/* current running dma */
-
 	/* runtime configuration */
 	struct dma_slave_config slave;
 
@@ -372,6 +375,8 @@ struct bam_chan {
 	unsigned int initialized;	/* is the channel hw initialized? */
 	unsigned int paused;		/* is the channel paused? */
 	unsigned int reconfigure;	/* new slave config? */
+	/* list of descriptors currently processed */
+	struct list_head desc_list;
 
 	struct list_head node;
 };
@@ -539,7 +544,7 @@ static void bam_free_chan(struct dma_chan *chan)
 
 	vchan_free_chan_resources(to_virt_chan(chan));
 
-	if (bchan->curr_txd) {
+	if (!list_empty(&bchan->desc_list)) {
 		dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
 		goto err;
 	}
@@ -632,8 +637,6 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 
 	if (flags & DMA_PREP_INTERRUPT)
 		async_desc->flags |= DESC_FLAG_EOT;
-	else
-		async_desc->flags |= DESC_FLAG_INT;
 
 	async_desc->num_desc = num_alloc;
 	async_desc->curr_desc = async_desc->desc;
@@ -684,14 +687,16 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 static int bam_dma_terminate_all(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_async_desc *async_desc, *tmp;
 	unsigned long flag;
 	LIST_HEAD(head);
 
 	/* remove all transactions, including active transaction */
 	spin_lock_irqsave(&bchan->vc.lock, flag);
-	if (bchan->curr_txd) {
-		list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
-		bchan->curr_txd = NULL;
+	list_for_each_entry_safe(async_desc, tmp,
+				 &bchan->desc_list, desc_node) {
+		list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
+		list_del(&async_desc->desc_node);
 	}
 
 	vchan_get_all_descriptors(&bchan->vc, &head);
@@ -763,9 +768,9 @@ static int bam_resume(struct dma_chan *chan)
  */
 static u32 process_channel_irqs(struct bam_device *bdev)
 {
-	u32 i, srcs, pipe_stts;
+	u32 i, srcs, pipe_stts, offset, avail;
 	unsigned long flags;
-	struct bam_async_desc *async_desc;
+	struct bam_async_desc *async_desc, *tmp;
 
 	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));
 
@@ -785,28 +790,41 @@ static u32 process_channel_irqs(struct bam_device *bdev)
 		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));
 
 		spin_lock_irqsave(&bchan->vc.lock, flags);
-		async_desc = bchan->curr_txd;
 
-		if (async_desc) {
-			async_desc->num_desc -= async_desc->xfer_len;
-			async_desc->curr_desc += async_desc->xfer_len;
-			bchan->curr_txd = NULL;
+		offset = readl_relaxed(bam_addr(bdev, i, BAM_P_SW_OFSTS)) &
+				       P_SW_OFSTS_MASK;
+		offset /= sizeof(struct bam_desc_hw);
+
+		/* Number of bytes available to read */
+		avail = CIRC_CNT(offset, bchan->head, MAX_DESCRIPTORS + 1);
+
+		list_for_each_entry_safe(async_desc, tmp,
+					 &bchan->desc_list, desc_node) {
+			/* Not enough data to read */
+			if (avail < async_desc->xfer_len)
+				break;
 
 			/* manage FIFO */
 			bchan->head += async_desc->xfer_len;
 			bchan->head %= MAX_DESCRIPTORS;
 
+			async_desc->num_desc -= async_desc->xfer_len;
+			async_desc->curr_desc += async_desc->xfer_len;
+			avail -= async_desc->xfer_len;
+
 			/*
 			 * if complete, process cookie. Otherwise
 			 * push back to front of desc_issued so that
 			 * it gets restarted by the tasklet
 			 */
-			if (!async_desc->num_desc)
+			if (!async_desc->num_desc) {
 				vchan_cookie_complete(&async_desc->vd);
-			else
+			} else {
 				list_add(&async_desc->vd.node,
 					 &bchan->vc.desc_issued);
+			}
+			list_del(&async_desc->desc_node);
 		}
 
 		spin_unlock_irqrestore(&bchan->vc.lock, flags);
@@ -867,6 +885,7 @@ static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 		struct dma_tx_state *txstate)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_async_desc *async_desc;
 	struct virt_dma_desc *vd;
 	int ret;
 	size_t residue = 0;
@@ -882,11 +901,17 @@ static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 
 	spin_lock_irqsave(&bchan->vc.lock, flags);
 	vd = vchan_find_desc(&bchan->vc, cookie);
-	if (vd)
+	if (vd) {
 		residue = container_of(vd, struct bam_async_desc, vd)->length;
-	else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
-		for (i = 0; i < bchan->curr_txd->num_desc; i++)
-			residue += bchan->curr_txd->curr_desc[i].size;
+	} else {
+		list_for_each_entry(async_desc, &bchan->desc_list, desc_node) {
+			if (async_desc->vd.tx.cookie != cookie)
+				continue;
+
+			for (i = 0; i < async_desc->num_desc; i++)
+				residue += async_desc->curr_desc[i].size;
+		}
+	}
 
 	spin_unlock_irqrestore(&bchan->vc.lock, flags);
 
@ -927,26 +952,28 @@ static void bam_start_dma(struct bam_chan *bchan)
|
|||
{
|
||||
struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
|
||||
struct bam_device *bdev = bchan->bdev;
|
||||
struct bam_async_desc *async_desc;
|
||||
struct bam_async_desc *async_desc = NULL;
|
||||
struct bam_desc_hw *desc;
|
||||
struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
|
||||
sizeof(struct bam_desc_hw));
|
||||
int ret;
|
||||
unsigned int avail;
|
||||
struct dmaengine_desc_callback cb;
|
||||
|
||||
lockdep_assert_held(&bchan->vc.lock);
|
||||
|
||||
if (!vd)
|
||||
return;
|
||||
|
||||
list_del(&vd->node);
|
||||
|
||||
async_desc = container_of(vd, struct bam_async_desc, vd);
|
||||
bchan->curr_txd = async_desc;
|
||||
|
||||
ret = pm_runtime_get_sync(bdev->dev);
|
||||
if (ret < 0)
|
||||
return;
|
||||
|
||||
while (vd && !IS_BUSY(bchan)) {
|
||||
list_del(&vd->node);
|
||||
|
||||
async_desc = container_of(vd, struct bam_async_desc, vd);
|
||||
|
||||
/* on first use, initialize the channel hardware */
|
||||
if (!bchan->initialized)
|
||||
bam_chan_init_hw(bchan, async_desc->dir);
|
||||
|
@ -955,10 +982,12 @@ static void bam_start_dma(struct bam_chan *bchan)
|
|||
if (bchan->reconfigure)
|
||||
bam_apply_new_config(bchan, async_desc->dir);
|
||||
|
||||
desc = bchan->curr_txd->curr_desc;
|
||||
desc = async_desc->curr_desc;
|
||||
avail = CIRC_SPACE(bchan->tail, bchan->head,
|
||||
MAX_DESCRIPTORS + 1);
|
||||
|
||||
if (async_desc->num_desc > MAX_DESCRIPTORS)
|
||||
async_desc->xfer_len = MAX_DESCRIPTORS;
|
||||
if (async_desc->num_desc > avail)
|
||||
async_desc->xfer_len = avail;
|
||||
else
|
||||
async_desc->xfer_len = async_desc->num_desc;
|
||||
|
||||
|
@ -966,7 +995,22 @@ static void bam_start_dma(struct bam_chan *bchan)
|
|||
if (async_desc->num_desc == async_desc->xfer_len)
|
||||
desc[async_desc->xfer_len - 1].flags |=
|
||||
cpu_to_le16(async_desc->flags);
|
||||
else
|
||||
|
||||
vd = vchan_next_desc(&bchan->vc);
|
||||
|
||||
dmaengine_desc_get_callback(&async_desc->vd.tx, &cb);
|
||||
|
||||
/*
|
||||
* An interrupt is generated at this desc, if
|
||||
* - FIFO is FULL.
|
||||
* - No more descriptors to add.
|
||||
* - If a callback completion was requested for this DESC,
|
||||
* In this case, BAM will deliver the completion callback
|
||||
* for this desc and continue processing the next desc.
|
||||
*/
|
||||
if (((avail <= async_desc->xfer_len) || !vd ||
|
||||
dmaengine_desc_callback_valid(&cb)) &&
|
||||
!(async_desc->flags & DESC_FLAG_EOT))
|
||||
desc[async_desc->xfer_len - 1].flags |=
|
||||
cpu_to_le16(DESC_FLAG_INT);
|
||||
|
||||
|
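/*
 * (Illustrative sketch, not part of the patch: the three-way test above,
 * restated as a standalone predicate. "needs_int" is hypothetical and
 * does not exist in bam_dma.c.)
 */
static bool needs_int(unsigned int avail, struct bam_async_desc *ad,
		      struct virt_dma_desc *next_vd, bool cb_valid)
{
	/* EOT descriptors already raise their own end-of-transfer event */
	if (ad->flags & DESC_FLAG_EOT)
		return false;

	return avail <= ad->xfer_len	/* FIFO full after this batch */
		|| !next_vd		/* nothing queued behind it */
		|| cb_valid;		/* client asked for a callback */
}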
@@ -975,15 +1019,19 @@ static void bam_start_dma(struct bam_chan *bchan)

			memcpy(&fifo[bchan->tail], desc,
			       partial * sizeof(struct bam_desc_hw));
			memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
			memcpy(fifo, &desc[partial],
			       (async_desc->xfer_len - partial) *
			       sizeof(struct bam_desc_hw));
		} else {
			memcpy(&fifo[bchan->tail], desc,
			       async_desc->xfer_len * sizeof(struct bam_desc_hw));
			       async_desc->xfer_len *
			       sizeof(struct bam_desc_hw));
		}

		bchan->tail += async_desc->xfer_len;
		bchan->tail %= MAX_DESCRIPTORS;
		list_add_tail(&async_desc->desc_node, &bchan->desc_list);
	}

	/* ensure descriptor writes and dma start not reordered */
	wmb();

@@ -1012,7 +1060,7 @@ static void dma_tasklet(unsigned long data)
		bchan = &bdev->channels[i];
		spin_lock_irqsave(&bchan->vc.lock, flags);

		if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
		if (!list_empty(&bchan->vc.desc_issued) && !IS_BUSY(bchan))
			bam_start_dma(bchan);
		spin_unlock_irqrestore(&bchan->vc.lock, flags);
	}

@@ -1033,7 +1081,7 @@ static void bam_issue_pending(struct dma_chan *chan)
	spin_lock_irqsave(&bchan->vc.lock, flags);

	/* if work pending and idle, start a transaction */
	if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
	if (vchan_issue_pending(&bchan->vc) && !IS_BUSY(bchan))
		bam_start_dma(bchan);

	spin_unlock_irqrestore(&bchan->vc.lock, flags);

@@ -1133,6 +1181,7 @@ static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,

	vchan_init(&bchan->vc, &bdev->common);
	bchan->vc.desc_free = bam_dma_free_desc;
	INIT_LIST_HEAD(&bchan->desc_list);
}

static const struct of_device_id bam_of_match[] = {

@@ -823,6 +823,13 @@ static const struct sa11x0_dma_channel_desc chan_desc[] = {
	CD(Ser4SSPRc, DDAR_RW),
};

static const struct dma_slave_map sa11x0_dma_map[] = {
	{ "sa11x0-ir", "tx", "Ser2ICPTr" },
	{ "sa11x0-ir", "rx", "Ser2ICPRc" },
	{ "sa11x0-ssp", "tx", "Ser4SSPTr" },
	{ "sa11x0-ssp", "rx", "Ser4SSPRc" },
};

static int sa11x0_dma_init_dmadev(struct dma_device *dmadev,
				  struct device *dev)
{

@@ -909,6 +916,10 @@ static int sa11x0_dma_probe(struct platform_device *pdev)
	spin_lock_init(&d->lock);
	INIT_LIST_HEAD(&d->chan_pending);

	d->slave.filter.fn = sa11x0_dma_filter_fn;
	d->slave.filter.mapcnt = ARRAY_SIZE(sa11x0_dma_map);
	d->slave.filter.map = sa11x0_dma_map;

	d->base = ioremap(res->start, resource_size(res));
	if (!d->base) {
		ret = -ENOMEM;

@@ -0,0 +1,988 @@
/*
 * Copyright (C) 2017 Spreadtrum Communications Inc.
 *
 * SPDX-License-Identifier: GPL-2.0
 */

#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/of_device.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>

#include "virt-dma.h"

#define SPRD_DMA_CHN_REG_OFFSET		0x1000
#define SPRD_DMA_CHN_REG_LENGTH		0x40
#define SPRD_DMA_MEMCPY_MIN_SIZE	64

/* DMA global registers definition */
#define SPRD_DMA_GLB_PAUSE		0x0
#define SPRD_DMA_GLB_FRAG_WAIT		0x4
#define SPRD_DMA_GLB_REQ_PEND0_EN	0x8
#define SPRD_DMA_GLB_REQ_PEND1_EN	0xc
#define SPRD_DMA_GLB_INT_RAW_STS	0x10
#define SPRD_DMA_GLB_INT_MSK_STS	0x14
#define SPRD_DMA_GLB_REQ_STS		0x18
#define SPRD_DMA_GLB_CHN_EN_STS		0x1c
#define SPRD_DMA_GLB_DEBUG_STS		0x20
#define SPRD_DMA_GLB_ARB_SEL_STS	0x24
#define SPRD_DMA_GLB_REQ_UID(uid)	(0x4 * ((uid) - 1))
#define SPRD_DMA_GLB_REQ_UID_OFFSET	0x2000

/* DMA channel registers definition */
#define SPRD_DMA_CHN_PAUSE		0x0
#define SPRD_DMA_CHN_REQ		0x4
#define SPRD_DMA_CHN_CFG		0x8
#define SPRD_DMA_CHN_INTC		0xc
#define SPRD_DMA_CHN_SRC_ADDR		0x10
#define SPRD_DMA_CHN_DES_ADDR		0x14
#define SPRD_DMA_CHN_FRG_LEN		0x18
#define SPRD_DMA_CHN_BLK_LEN		0x1c
#define SPRD_DMA_CHN_TRSC_LEN		0x20
#define SPRD_DMA_CHN_TRSF_STEP		0x24
#define SPRD_DMA_CHN_WARP_PTR		0x28
#define SPRD_DMA_CHN_WARP_TO		0x2c
#define SPRD_DMA_CHN_LLIST_PTR		0x30
#define SPRD_DMA_CHN_FRAG_STEP		0x34
#define SPRD_DMA_CHN_SRC_BLK_STEP	0x38
#define SPRD_DMA_CHN_DES_BLK_STEP	0x3c

/* SPRD_DMA_CHN_INTC register definition */
#define SPRD_DMA_INT_MASK		GENMASK(4, 0)
#define SPRD_DMA_INT_CLR_OFFSET		24
#define SPRD_DMA_FRAG_INT_EN		BIT(0)
#define SPRD_DMA_BLK_INT_EN		BIT(1)
#define SPRD_DMA_TRANS_INT_EN		BIT(2)
#define SPRD_DMA_LIST_INT_EN		BIT(3)
#define SPRD_DMA_CFG_ERR_INT_EN		BIT(4)

/* SPRD_DMA_CHN_CFG register definition */
#define SPRD_DMA_CHN_EN			BIT(0)
#define SPRD_DMA_WAIT_BDONE_OFFSET	24
#define SPRD_DMA_DONOT_WAIT_BDONE	1

/* SPRD_DMA_CHN_REQ register definition */
#define SPRD_DMA_REQ_EN			BIT(0)

/* SPRD_DMA_CHN_PAUSE register definition */
#define SPRD_DMA_PAUSE_EN		BIT(0)
#define SPRD_DMA_PAUSE_STS		BIT(2)
#define SPRD_DMA_PAUSE_CNT		0x2000

/* DMA_CHN_WARP_* register definition */
#define SPRD_DMA_HIGH_ADDR_MASK		GENMASK(31, 28)
#define SPRD_DMA_LOW_ADDR_MASK		GENMASK(31, 0)
#define SPRD_DMA_HIGH_ADDR_OFFSET	4

/* SPRD_DMA_CHN_INTC register definition */
#define SPRD_DMA_FRAG_INT_STS		BIT(16)
#define SPRD_DMA_BLK_INT_STS		BIT(17)
#define SPRD_DMA_TRSC_INT_STS		BIT(18)
#define SPRD_DMA_LIST_INT_STS		BIT(19)
#define SPRD_DMA_CFGERR_INT_STS		BIT(20)
#define SPRD_DMA_CHN_INT_STS					\
	(SPRD_DMA_FRAG_INT_STS | SPRD_DMA_BLK_INT_STS |		\
	 SPRD_DMA_TRSC_INT_STS | SPRD_DMA_LIST_INT_STS |	\
	 SPRD_DMA_CFGERR_INT_STS)

/* SPRD_DMA_CHN_FRG_LEN register definition */
#define SPRD_DMA_SRC_DATAWIDTH_OFFSET	30
#define SPRD_DMA_DES_DATAWIDTH_OFFSET	28
#define SPRD_DMA_SWT_MODE_OFFSET	26
#define SPRD_DMA_REQ_MODE_OFFSET	24
#define SPRD_DMA_REQ_MODE_MASK		GENMASK(1, 0)
#define SPRD_DMA_FIX_SEL_OFFSET		21
#define SPRD_DMA_FIX_EN_OFFSET		20
#define SPRD_DMA_LLIST_END_OFFSET	19
#define SPRD_DMA_FRG_LEN_MASK		GENMASK(16, 0)

/* SPRD_DMA_CHN_BLK_LEN register definition */
#define SPRD_DMA_BLK_LEN_MASK		GENMASK(16, 0)

/* SPRD_DMA_CHN_TRSC_LEN register definition */
#define SPRD_DMA_TRSC_LEN_MASK		GENMASK(27, 0)

/* SPRD_DMA_CHN_TRSF_STEP register definition */
#define SPRD_DMA_DEST_TRSF_STEP_OFFSET	16
#define SPRD_DMA_SRC_TRSF_STEP_OFFSET	0
#define SPRD_DMA_TRSF_STEP_MASK		GENMASK(15, 0)

#define SPRD_DMA_SOFTWARE_UID		0

/*
 * enum sprd_dma_req_mode: define the DMA request mode
 * @SPRD_DMA_FRAG_REQ: fragment request mode
 * @SPRD_DMA_BLK_REQ: block request mode
 * @SPRD_DMA_TRANS_REQ: transaction request mode
 * @SPRD_DMA_LIST_REQ: link-list request mode
 *
 * There are four request modes: fragment, block, transaction and
 * link-list. One transaction can contain several blocks, and one block
 * several fragments. In link-list mode several DMA configurations are
 * saved in reserved memory, and the controller fetches each
 * configuration automatically to start the next transfer.
 */
enum sprd_dma_req_mode {
	SPRD_DMA_FRAG_REQ,
	SPRD_DMA_BLK_REQ,
	SPRD_DMA_TRANS_REQ,
	SPRD_DMA_LIST_REQ,
};
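/*
 * (Illustration, not from the hardware manual, sizes are invented: a
 * 64K transaction might be split into sixteen 4K blocks and each block
 * into 1K fragments; the request mode picks which of these units one
 * hardware request moves before the controller waits for the next
 * request.)
 */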
/*
 * enum sprd_dma_int_type: define the DMA interrupt type
 * @SPRD_DMA_NO_INT: no DMA interrupt is generated.
 * @SPRD_DMA_FRAG_INT: fragment done interrupt when one fragment request
 * is done.
 * @SPRD_DMA_BLK_INT: block done interrupt when one block request is done.
 * @SPRD_DMA_BLK_FRAG_INT: block and fragment interrupt when one fragment
 * or one block request is done.
 * @SPRD_DMA_TRANS_INT: transaction done interrupt when one transaction
 * request is done.
 * @SPRD_DMA_TRANS_FRAG_INT: transaction and fragment interrupt when one
 * transaction request or fragment request is done.
 * @SPRD_DMA_TRANS_BLK_INT: transaction and block interrupt when one
 * transaction request or block request is done.
 * @SPRD_DMA_LIST_INT: link-list done interrupt when one link-list request
 * is done.
 * @SPRD_DMA_CFGERR_INT: configure error interrupt when configuration is
 * incorrect.
 */
enum sprd_dma_int_type {
	SPRD_DMA_NO_INT,
	SPRD_DMA_FRAG_INT,
	SPRD_DMA_BLK_INT,
	SPRD_DMA_BLK_FRAG_INT,
	SPRD_DMA_TRANS_INT,
	SPRD_DMA_TRANS_FRAG_INT,
	SPRD_DMA_TRANS_BLK_INT,
	SPRD_DMA_LIST_INT,
	SPRD_DMA_CFGERR_INT,
};

/* dma channel hardware configuration */
struct sprd_dma_chn_hw {
	u32 pause;
	u32 req;
	u32 cfg;
	u32 intc;
	u32 src_addr;
	u32 des_addr;
	u32 frg_len;
	u32 blk_len;
	u32 trsc_len;
	u32 trsf_step;
	u32 wrap_ptr;
	u32 wrap_to;
	u32 llist_ptr;
	u32 frg_step;
	u32 src_blk_step;
	u32 des_blk_step;
};

/* dma request description */
struct sprd_dma_desc {
	struct virt_dma_desc	vd;
	struct sprd_dma_chn_hw	chn_hw;
};

/* dma channel description */
struct sprd_dma_chn {
	struct virt_dma_chan	vc;
	void __iomem		*chn_base;
	u32			chn_num;
	u32			dev_id;
	struct sprd_dma_desc	*cur_desc;
};

/* SPRD dma device */
struct sprd_dma_dev {
	struct dma_device	dma_dev;
	void __iomem		*glb_base;
	struct clk		*clk;
	struct clk		*ashb_clk;
	int			irq;
	u32			total_chns;
	struct sprd_dma_chn	channels[0];
};

static bool sprd_dma_filter_fn(struct dma_chan *chan, void *param);
static struct of_dma_filter_info sprd_dma_info = {
	.filter_fn = sprd_dma_filter_fn,
};

static inline struct sprd_dma_chn *to_sprd_dma_chan(struct dma_chan *c)
{
	return container_of(c, struct sprd_dma_chn, vc.chan);
}

static inline struct sprd_dma_dev *to_sprd_dma_dev(struct dma_chan *c)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(c);

	return container_of(schan, struct sprd_dma_dev, channels[c->chan_id]);
}

static inline struct sprd_dma_desc *to_sprd_dma_desc(struct virt_dma_desc *vd)
{
	return container_of(vd, struct sprd_dma_desc, vd);
}

static void sprd_dma_chn_update(struct sprd_dma_chn *schan, u32 reg,
				u32 mask, u32 val)
{
	u32 orig = readl(schan->chn_base + reg);
	u32 tmp;

	tmp = (orig & ~mask) | val;
	writel(tmp, schan->chn_base + reg);
}

static int sprd_dma_enable(struct sprd_dma_dev *sdev)
{
	int ret;

	ret = clk_prepare_enable(sdev->clk);
	if (ret)
		return ret;

	/*
	 * The ashb_clk is optional and only used by the AGCP DMA
	 * controller, so check whether it is present before enabling it.
	 */
	if (!IS_ERR(sdev->ashb_clk))
		ret = clk_prepare_enable(sdev->ashb_clk);

	return ret;
}

static void sprd_dma_disable(struct sprd_dma_dev *sdev)
{
	clk_disable_unprepare(sdev->clk);

	/*
	 * The optional ashb_clk for AGCP DMA is only disabled if it was
	 * actually acquired.
	 */
	if (!IS_ERR(sdev->ashb_clk))
		clk_disable_unprepare(sdev->ashb_clk);
}

static void sprd_dma_set_uid(struct sprd_dma_chn *schan)
{
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
	u32 dev_id = schan->dev_id;

	if (dev_id != SPRD_DMA_SOFTWARE_UID) {
		u32 uid_offset = SPRD_DMA_GLB_REQ_UID_OFFSET +
				 SPRD_DMA_GLB_REQ_UID(dev_id);

		writel(schan->chn_num + 1, sdev->glb_base + uid_offset);
	}
}

static void sprd_dma_unset_uid(struct sprd_dma_chn *schan)
{
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
	u32 dev_id = schan->dev_id;

	if (dev_id != SPRD_DMA_SOFTWARE_UID) {
		u32 uid_offset = SPRD_DMA_GLB_REQ_UID_OFFSET +
				 SPRD_DMA_GLB_REQ_UID(dev_id);

		writel(0, sdev->glb_base + uid_offset);
	}
}

static void sprd_dma_clear_int(struct sprd_dma_chn *schan)
{
	sprd_dma_chn_update(schan, SPRD_DMA_CHN_INTC,
			    SPRD_DMA_INT_MASK << SPRD_DMA_INT_CLR_OFFSET,
			    SPRD_DMA_INT_MASK << SPRD_DMA_INT_CLR_OFFSET);
}

static void sprd_dma_enable_chn(struct sprd_dma_chn *schan)
{
	sprd_dma_chn_update(schan, SPRD_DMA_CHN_CFG, SPRD_DMA_CHN_EN,
			    SPRD_DMA_CHN_EN);
}

static void sprd_dma_disable_chn(struct sprd_dma_chn *schan)
{
	sprd_dma_chn_update(schan, SPRD_DMA_CHN_CFG, SPRD_DMA_CHN_EN, 0);
}

static void sprd_dma_soft_request(struct sprd_dma_chn *schan)
{
	sprd_dma_chn_update(schan, SPRD_DMA_CHN_REQ, SPRD_DMA_REQ_EN,
			    SPRD_DMA_REQ_EN);
}

static void sprd_dma_pause_resume(struct sprd_dma_chn *schan, bool enable)
{
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
	u32 pause, timeout = SPRD_DMA_PAUSE_CNT;

	if (enable) {
		sprd_dma_chn_update(schan, SPRD_DMA_CHN_PAUSE,
				    SPRD_DMA_PAUSE_EN, SPRD_DMA_PAUSE_EN);

		do {
			pause = readl(schan->chn_base + SPRD_DMA_CHN_PAUSE);
			if (pause & SPRD_DMA_PAUSE_STS)
				break;

			cpu_relax();
		} while (--timeout > 0);

		if (!timeout)
			dev_warn(sdev->dma_dev.dev,
				 "pause dma controller timeout\n");
	} else {
		sprd_dma_chn_update(schan, SPRD_DMA_CHN_PAUSE,
				    SPRD_DMA_PAUSE_EN, 0);
	}
}

static void sprd_dma_stop_and_disable(struct sprd_dma_chn *schan)
{
	u32 cfg = readl(schan->chn_base + SPRD_DMA_CHN_CFG);

	if (!(cfg & SPRD_DMA_CHN_EN))
		return;

	sprd_dma_pause_resume(schan, true);
	sprd_dma_disable_chn(schan);
}

static unsigned long sprd_dma_get_dst_addr(struct sprd_dma_chn *schan)
{
	unsigned long addr, addr_high;

	addr = readl(schan->chn_base + SPRD_DMA_CHN_DES_ADDR);
	addr_high = readl(schan->chn_base + SPRD_DMA_CHN_WARP_TO) &
		    SPRD_DMA_HIGH_ADDR_MASK;

	return addr | (addr_high << SPRD_DMA_HIGH_ADDR_OFFSET);
}
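/*
 * (Worked example with an invented 36-bit address 0xa12345678: bits
 * [31:0] = 0x12345678 come from SPRD_DMA_CHN_DES_ADDR, the high nibble
 * 0xa sits in bits [31:28] of SPRD_DMA_CHN_WARP_TO, so the function
 * above returns (0xa0000000UL << 4) | 0x12345678 = 0xa12345678.)
 */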
static enum sprd_dma_int_type sprd_dma_get_int_type(struct sprd_dma_chn *schan)
{
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
	u32 intc_sts = readl(schan->chn_base + SPRD_DMA_CHN_INTC) &
		       SPRD_DMA_CHN_INT_STS;

	switch (intc_sts) {
	case SPRD_DMA_CFGERR_INT_STS:
		return SPRD_DMA_CFGERR_INT;

	case SPRD_DMA_LIST_INT_STS:
		return SPRD_DMA_LIST_INT;

	case SPRD_DMA_TRSC_INT_STS:
		return SPRD_DMA_TRANS_INT;

	case SPRD_DMA_BLK_INT_STS:
		return SPRD_DMA_BLK_INT;

	case SPRD_DMA_FRAG_INT_STS:
		return SPRD_DMA_FRAG_INT;

	default:
		dev_warn(sdev->dma_dev.dev, "incorrect dma interrupt type\n");
		return SPRD_DMA_NO_INT;
	}
}

static enum sprd_dma_req_mode sprd_dma_get_req_type(struct sprd_dma_chn *schan)
{
	u32 frag_reg = readl(schan->chn_base + SPRD_DMA_CHN_FRG_LEN);

	return (frag_reg >> SPRD_DMA_REQ_MODE_OFFSET) & SPRD_DMA_REQ_MODE_MASK;
}

static void sprd_dma_set_chn_config(struct sprd_dma_chn *schan,
				    struct sprd_dma_desc *sdesc)
{
	struct sprd_dma_chn_hw *cfg = &sdesc->chn_hw;

	writel(cfg->pause, schan->chn_base + SPRD_DMA_CHN_PAUSE);
	writel(cfg->cfg, schan->chn_base + SPRD_DMA_CHN_CFG);
	writel(cfg->intc, schan->chn_base + SPRD_DMA_CHN_INTC);
	writel(cfg->src_addr, schan->chn_base + SPRD_DMA_CHN_SRC_ADDR);
	writel(cfg->des_addr, schan->chn_base + SPRD_DMA_CHN_DES_ADDR);
	writel(cfg->frg_len, schan->chn_base + SPRD_DMA_CHN_FRG_LEN);
	writel(cfg->blk_len, schan->chn_base + SPRD_DMA_CHN_BLK_LEN);
	writel(cfg->trsc_len, schan->chn_base + SPRD_DMA_CHN_TRSC_LEN);
	writel(cfg->trsf_step, schan->chn_base + SPRD_DMA_CHN_TRSF_STEP);
	writel(cfg->wrap_ptr, schan->chn_base + SPRD_DMA_CHN_WARP_PTR);
	writel(cfg->wrap_to, schan->chn_base + SPRD_DMA_CHN_WARP_TO);
	writel(cfg->llist_ptr, schan->chn_base + SPRD_DMA_CHN_LLIST_PTR);
	writel(cfg->frg_step, schan->chn_base + SPRD_DMA_CHN_FRAG_STEP);
	writel(cfg->src_blk_step, schan->chn_base + SPRD_DMA_CHN_SRC_BLK_STEP);
	writel(cfg->des_blk_step, schan->chn_base + SPRD_DMA_CHN_DES_BLK_STEP);
	writel(cfg->req, schan->chn_base + SPRD_DMA_CHN_REQ);
}

static void sprd_dma_start(struct sprd_dma_chn *schan)
{
	struct virt_dma_desc *vd = vchan_next_desc(&schan->vc);

	if (!vd)
		return;

	list_del(&vd->node);
	schan->cur_desc = to_sprd_dma_desc(vd);

	/*
	 * Copy the DMA configuration from DMA descriptor to this hardware
	 * channel.
	 */
	sprd_dma_set_chn_config(schan, schan->cur_desc);
	sprd_dma_set_uid(schan);
	sprd_dma_enable_chn(schan);

	if (schan->dev_id == SPRD_DMA_SOFTWARE_UID)
		sprd_dma_soft_request(schan);
}

static void sprd_dma_stop(struct sprd_dma_chn *schan)
{
	sprd_dma_stop_and_disable(schan);
	sprd_dma_unset_uid(schan);
	sprd_dma_clear_int(schan);
}

static bool sprd_dma_check_trans_done(struct sprd_dma_desc *sdesc,
				      enum sprd_dma_int_type int_type,
				      enum sprd_dma_req_mode req_mode)
{
	if (int_type == SPRD_DMA_NO_INT)
		return false;

	if (int_type >= req_mode + 1)
		return true;
	else
		return false;
}
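/*
 * (The ">= req_mode + 1" test relies on the enum ordering above: for
 * SPRD_DMA_BLK_REQ (1), completion needs SPRD_DMA_BLK_INT (2) or a
 * "larger" type such as SPRD_DMA_TRANS_INT (4); a bare
 * SPRD_DMA_FRAG_INT (1) is not enough.)
 */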
static irqreturn_t dma_irq_handle(int irq, void *dev_id)
{
	struct sprd_dma_dev *sdev = (struct sprd_dma_dev *)dev_id;
	u32 irq_status = readl(sdev->glb_base + SPRD_DMA_GLB_INT_MSK_STS);
	struct sprd_dma_chn *schan;
	struct sprd_dma_desc *sdesc;
	enum sprd_dma_req_mode req_type;
	enum sprd_dma_int_type int_type;
	bool trans_done = false;
	u32 i;

	while (irq_status) {
		i = __ffs(irq_status);
		irq_status &= (irq_status - 1);
		schan = &sdev->channels[i];

		spin_lock(&schan->vc.lock);
		int_type = sprd_dma_get_int_type(schan);
		req_type = sprd_dma_get_req_type(schan);
		sprd_dma_clear_int(schan);

		sdesc = schan->cur_desc;

		/* Check if the dma request descriptor is done. */
		trans_done = sprd_dma_check_trans_done(sdesc, int_type,
						       req_type);
		if (trans_done) {
			vchan_cookie_complete(&sdesc->vd);
			schan->cur_desc = NULL;
			sprd_dma_start(schan);
		}
		spin_unlock(&schan->vc.lock);
	}

	return IRQ_HANDLED;
}

static int sprd_dma_alloc_chan_resources(struct dma_chan *chan)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	int ret;

	ret = pm_runtime_get_sync(chan->device->dev);
	if (ret < 0)
		return ret;

	schan->dev_id = SPRD_DMA_SOFTWARE_UID;
	return 0;
}

static void sprd_dma_free_chan_resources(struct dma_chan *chan)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&schan->vc.lock, flags);
	sprd_dma_stop(schan);
	spin_unlock_irqrestore(&schan->vc.lock, flags);

	vchan_free_chan_resources(&schan->vc);
	pm_runtime_put(chan->device->dev);
}

static enum dma_status sprd_dma_tx_status(struct dma_chan *chan,
					  dma_cookie_t cookie,
					  struct dma_tx_state *txstate)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	struct virt_dma_desc *vd;
	unsigned long flags;
	enum dma_status ret;
	u32 pos;

	ret = dma_cookie_status(chan, cookie, txstate);
	if (ret == DMA_COMPLETE || !txstate)
		return ret;

	spin_lock_irqsave(&schan->vc.lock, flags);
	vd = vchan_find_desc(&schan->vc, cookie);
	if (vd) {
		struct sprd_dma_desc *sdesc = to_sprd_dma_desc(vd);
		struct sprd_dma_chn_hw *hw = &sdesc->chn_hw;

		if (hw->trsc_len > 0)
			pos = hw->trsc_len;
		else if (hw->blk_len > 0)
			pos = hw->blk_len;
		else if (hw->frg_len > 0)
			pos = hw->frg_len;
		else
			pos = 0;
	} else if (schan->cur_desc && schan->cur_desc->vd.tx.cookie == cookie) {
		pos = sprd_dma_get_dst_addr(schan);
	} else {
		pos = 0;
	}
	spin_unlock_irqrestore(&schan->vc.lock, flags);

	dma_set_residue(txstate, pos);
	return ret;
}

static void sprd_dma_issue_pending(struct dma_chan *chan)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&schan->vc.lock, flags);
	if (vchan_issue_pending(&schan->vc) && !schan->cur_desc)
		sprd_dma_start(schan);
	spin_unlock_irqrestore(&schan->vc.lock, flags);
}

static int sprd_dma_config(struct dma_chan *chan, struct sprd_dma_desc *sdesc,
			   dma_addr_t dest, dma_addr_t src, size_t len)
{
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(chan);
	struct sprd_dma_chn_hw *hw = &sdesc->chn_hw;
	u32 datawidth, src_step, des_step, fragment_len;
	u32 block_len, req_mode, irq_mode, transcation_len;
	u32 fix_mode = 0, fix_en = 0;

	if (IS_ALIGNED(len, 4)) {
		datawidth = 2;
		src_step = 4;
		des_step = 4;
	} else if (IS_ALIGNED(len, 2)) {
		datawidth = 1;
		src_step = 2;
		des_step = 2;
	} else {
		datawidth = 0;
		src_step = 1;
		des_step = 1;
	}

	fragment_len = SPRD_DMA_MEMCPY_MIN_SIZE;
	if (len <= SPRD_DMA_BLK_LEN_MASK) {
		block_len = len;
		transcation_len = 0;
		req_mode = SPRD_DMA_BLK_REQ;
		irq_mode = SPRD_DMA_BLK_INT;
	} else {
		block_len = SPRD_DMA_MEMCPY_MIN_SIZE;
		transcation_len = len;
		req_mode = SPRD_DMA_TRANS_REQ;
		irq_mode = SPRD_DMA_TRANS_INT;
	}

	hw->cfg = SPRD_DMA_DONOT_WAIT_BDONE << SPRD_DMA_WAIT_BDONE_OFFSET;
	hw->wrap_ptr = (u32)((src >> SPRD_DMA_HIGH_ADDR_OFFSET) &
			     SPRD_DMA_HIGH_ADDR_MASK);
	hw->wrap_to = (u32)((dest >> SPRD_DMA_HIGH_ADDR_OFFSET) &
			    SPRD_DMA_HIGH_ADDR_MASK);

	hw->src_addr = (u32)(src & SPRD_DMA_LOW_ADDR_MASK);
	hw->des_addr = (u32)(dest & SPRD_DMA_LOW_ADDR_MASK);

	if ((src_step != 0 && des_step != 0) || (src_step | des_step) == 0) {
		fix_en = 0;
	} else {
		fix_en = 1;
		if (src_step)
			fix_mode = 1;
		else
			fix_mode = 0;
	}

	hw->frg_len = datawidth << SPRD_DMA_SRC_DATAWIDTH_OFFSET |
		      datawidth << SPRD_DMA_DES_DATAWIDTH_OFFSET |
		      req_mode << SPRD_DMA_REQ_MODE_OFFSET |
		      fix_mode << SPRD_DMA_FIX_SEL_OFFSET |
		      fix_en << SPRD_DMA_FIX_EN_OFFSET |
		      (fragment_len & SPRD_DMA_FRG_LEN_MASK);
	hw->blk_len = block_len & SPRD_DMA_BLK_LEN_MASK;

	hw->intc = SPRD_DMA_CFG_ERR_INT_EN;

	switch (irq_mode) {
	case SPRD_DMA_NO_INT:
		break;

	case SPRD_DMA_FRAG_INT:
		hw->intc |= SPRD_DMA_FRAG_INT_EN;
		break;

	case SPRD_DMA_BLK_INT:
		hw->intc |= SPRD_DMA_BLK_INT_EN;
		break;

	case SPRD_DMA_BLK_FRAG_INT:
		hw->intc |= SPRD_DMA_BLK_INT_EN | SPRD_DMA_FRAG_INT_EN;
		break;

	case SPRD_DMA_TRANS_INT:
		hw->intc |= SPRD_DMA_TRANS_INT_EN;
		break;

	case SPRD_DMA_TRANS_FRAG_INT:
		hw->intc |= SPRD_DMA_TRANS_INT_EN | SPRD_DMA_FRAG_INT_EN;
		break;

	case SPRD_DMA_TRANS_BLK_INT:
		hw->intc |= SPRD_DMA_TRANS_INT_EN | SPRD_DMA_BLK_INT_EN;
		break;

	case SPRD_DMA_LIST_INT:
		hw->intc |= SPRD_DMA_LIST_INT_EN;
		break;

	case SPRD_DMA_CFGERR_INT:
		hw->intc |= SPRD_DMA_CFG_ERR_INT_EN;
		break;

	default:
		dev_err(sdev->dma_dev.dev, "invalid irq mode\n");
		return -EINVAL;
	}

	if (transcation_len == 0)
		hw->trsc_len = block_len & SPRD_DMA_TRSC_LEN_MASK;
	else
		hw->trsc_len = transcation_len & SPRD_DMA_TRSC_LEN_MASK;

	hw->trsf_step = (des_step & SPRD_DMA_TRSF_STEP_MASK) <<
			SPRD_DMA_DEST_TRSF_STEP_OFFSET |
			(src_step & SPRD_DMA_TRSF_STEP_MASK) <<
			SPRD_DMA_SRC_TRSF_STEP_OFFSET;

	hw->frg_step = 0;
	hw->src_blk_step = 0;
	hw->des_blk_step = 0;
	return 0;
}
struct dma_async_tx_descriptor *
sprd_dma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
			 size_t len, unsigned long flags)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	struct sprd_dma_desc *sdesc;
	int ret;

	sdesc = kzalloc(sizeof(*sdesc), GFP_NOWAIT);
	if (!sdesc)
		return NULL;

	ret = sprd_dma_config(chan, sdesc, dest, src, len);
	if (ret) {
		kfree(sdesc);
		return NULL;
	}

	return vchan_tx_prep(&schan->vc, &sdesc->vd, flags);
}

static int sprd_dma_pause(struct dma_chan *chan)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&schan->vc.lock, flags);
	sprd_dma_pause_resume(schan, true);
	spin_unlock_irqrestore(&schan->vc.lock, flags);

	return 0;
}

static int sprd_dma_resume(struct dma_chan *chan)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&schan->vc.lock, flags);
	sprd_dma_pause_resume(schan, false);
	spin_unlock_irqrestore(&schan->vc.lock, flags);

	return 0;
}

static int sprd_dma_terminate_all(struct dma_chan *chan)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&schan->vc.lock, flags);
	sprd_dma_stop(schan);

	vchan_get_all_descriptors(&schan->vc, &head);
	spin_unlock_irqrestore(&schan->vc.lock, flags);

	vchan_dma_desc_free_list(&schan->vc, &head);
	return 0;
}

static void sprd_dma_free_desc(struct virt_dma_desc *vd)
{
	struct sprd_dma_desc *sdesc = to_sprd_dma_desc(vd);

	kfree(sdesc);
}

static bool sprd_dma_filter_fn(struct dma_chan *chan, void *param)
{
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
	u32 req = *(u32 *)param;

	if (req < sdev->total_chns)
		return req == schan->chn_num + 1;
	else
		return false;
}

static int sprd_dma_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct sprd_dma_dev *sdev;
	struct sprd_dma_chn *dma_chn;
	struct resource *res;
	u32 chn_count;
	int ret, i;

	ret = device_property_read_u32(&pdev->dev, "#dma-channels", &chn_count);
	if (ret) {
		dev_err(&pdev->dev, "get dma channels count failed\n");
		return ret;
	}

	sdev = devm_kzalloc(&pdev->dev, sizeof(*sdev) +
			    sizeof(*dma_chn) * chn_count,
			    GFP_KERNEL);
	if (!sdev)
		return -ENOMEM;

	sdev->clk = devm_clk_get(&pdev->dev, "enable");
	if (IS_ERR(sdev->clk)) {
		dev_err(&pdev->dev, "get enable clock failed\n");
		return PTR_ERR(sdev->clk);
	}

	/* ashb clock is optional for AGCP DMA */
	sdev->ashb_clk = devm_clk_get(&pdev->dev, "ashb_eb");
	if (IS_ERR(sdev->ashb_clk))
		dev_warn(&pdev->dev, "no optional ashb eb clock\n");

	/*
	 * We have three DMA controllers: AP DMA, AON DMA and AGCP DMA. The
	 * AGCP DMA controller may run without an irq: leaving the irq
	 * unrequested saves power, since DMA interrupts then cannot wake
	 * the system. The DMA interrupts property is therefore optional.
	 */
	sdev->irq = platform_get_irq(pdev, 0);
	if (sdev->irq > 0) {
		ret = devm_request_irq(&pdev->dev, sdev->irq, dma_irq_handle,
				       0, "sprd_dma", (void *)sdev);
		if (ret < 0) {
			dev_err(&pdev->dev, "request dma irq failed\n");
			return ret;
		}
	} else {
		dev_warn(&pdev->dev, "no interrupts for the dma controller\n");
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	sdev->glb_base = devm_ioremap_nocache(&pdev->dev, res->start,
					      resource_size(res));
	if (!sdev->glb_base)
		return -ENOMEM;

	dma_cap_set(DMA_MEMCPY, sdev->dma_dev.cap_mask);
	sdev->total_chns = chn_count;
	sdev->dma_dev.chancnt = chn_count;
	INIT_LIST_HEAD(&sdev->dma_dev.channels);
	INIT_LIST_HEAD(&sdev->dma_dev.global_node);
	sdev->dma_dev.dev = &pdev->dev;
	sdev->dma_dev.device_alloc_chan_resources = sprd_dma_alloc_chan_resources;
	sdev->dma_dev.device_free_chan_resources = sprd_dma_free_chan_resources;
	sdev->dma_dev.device_tx_status = sprd_dma_tx_status;
	sdev->dma_dev.device_issue_pending = sprd_dma_issue_pending;
	sdev->dma_dev.device_prep_dma_memcpy = sprd_dma_prep_dma_memcpy;
	sdev->dma_dev.device_pause = sprd_dma_pause;
	sdev->dma_dev.device_resume = sprd_dma_resume;
	sdev->dma_dev.device_terminate_all = sprd_dma_terminate_all;

	for (i = 0; i < chn_count; i++) {
		dma_chn = &sdev->channels[i];
		dma_chn->chn_num = i;
		dma_chn->cur_desc = NULL;
		/* get each channel's registers base address. */
		dma_chn->chn_base = sdev->glb_base + SPRD_DMA_CHN_REG_OFFSET +
				    SPRD_DMA_CHN_REG_LENGTH * i;

		dma_chn->vc.desc_free = sprd_dma_free_desc;
		vchan_init(&dma_chn->vc, &sdev->dma_dev);
	}

	platform_set_drvdata(pdev, sdev);
	ret = sprd_dma_enable(sdev);
	if (ret)
		return ret;

	pm_runtime_set_active(&pdev->dev);
	pm_runtime_enable(&pdev->dev);

	ret = pm_runtime_get_sync(&pdev->dev);
	if (ret < 0)
		goto err_rpm;

	ret = dma_async_device_register(&sdev->dma_dev);
	if (ret < 0) {
		dev_err(&pdev->dev, "register dma device failed:%d\n", ret);
		goto err_register;
	}

	sprd_dma_info.dma_cap = sdev->dma_dev.cap_mask;
	ret = of_dma_controller_register(np, of_dma_simple_xlate,
					 &sprd_dma_info);
	if (ret)
		goto err_of_register;

	pm_runtime_put(&pdev->dev);
	return 0;

err_of_register:
	dma_async_device_unregister(&sdev->dma_dev);
err_register:
	pm_runtime_put_noidle(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
err_rpm:
	sprd_dma_disable(sdev);
	return ret;
}

static int sprd_dma_remove(struct platform_device *pdev)
{
	struct sprd_dma_dev *sdev = platform_get_drvdata(pdev);
	struct sprd_dma_chn *c, *cn;
	int ret;

	ret = pm_runtime_get_sync(&pdev->dev);
	if (ret < 0)
		return ret;

	/* explicitly free the irq */
	if (sdev->irq > 0)
		devm_free_irq(&pdev->dev, sdev->irq, sdev);

	list_for_each_entry_safe(c, cn, &sdev->dma_dev.channels,
				 vc.chan.device_node) {
		list_del(&c->vc.chan.device_node);
		tasklet_kill(&c->vc.task);
	}

	of_dma_controller_free(pdev->dev.of_node);
	dma_async_device_unregister(&sdev->dma_dev);
	sprd_dma_disable(sdev);

	pm_runtime_put_noidle(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return 0;
}

static const struct of_device_id sprd_dma_match[] = {
	{ .compatible = "sprd,sc9860-dma", },
	{},
};

static int __maybe_unused sprd_dma_runtime_suspend(struct device *dev)
{
	struct sprd_dma_dev *sdev = dev_get_drvdata(dev);

	sprd_dma_disable(sdev);
	return 0;
}

static int __maybe_unused sprd_dma_runtime_resume(struct device *dev)
{
	struct sprd_dma_dev *sdev = dev_get_drvdata(dev);
	int ret;

	ret = sprd_dma_enable(sdev);
	if (ret)
		dev_err(sdev->dma_dev.dev, "enable dma failed\n");

	return ret;
}

static const struct dev_pm_ops sprd_dma_pm_ops = {
	SET_RUNTIME_PM_OPS(sprd_dma_runtime_suspend,
			   sprd_dma_runtime_resume,
			   NULL)
};

static struct platform_driver sprd_dma_driver = {
	.probe = sprd_dma_probe,
	.remove = sprd_dma_remove,
	.driver = {
		.name = "sprd-dma",
		.of_match_table = sprd_dma_match,
		.pm = &sprd_dma_pm_ops,
	},
};
module_platform_driver(sprd_dma_driver);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("DMA driver for Spreadtrum");
MODULE_AUTHOR("Baolin Wang <baolin.wang@spreadtrum.com>");
MODULE_ALIAS("platform:sprd-dma");
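For context, a client would drive this controller through the generic dmaengine API rather than calling the driver directly. A minimal memcpy sketch under that assumption (error handling trimmed, the completion wait elided; none of these helper names come from this patch, only from the core dmaengine API):

	#include <linux/dmaengine.h>

	static int example_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
	{
		dma_cap_mask_t mask;
		struct dma_chan *chan;
		struct dma_async_tx_descriptor *tx;

		dma_cap_zero(mask);
		dma_cap_set(DMA_MEMCPY, mask);
		chan = dma_request_channel(mask, NULL, NULL);	/* any memcpy channel */
		if (!chan)
			return -ENODEV;

		tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_CTRL_ACK);
		if (!tx) {
			dma_release_channel(chan);
			return -EIO;
		}

		dmaengine_submit(tx);		/* queue the descriptor */
		dma_async_issue_pending(chan);	/* ends up in sprd_dma_issue_pending() */
		/* ... wait for completion before releasing the channel ... */
		dma_release_channel(chan);
		return 0;
	}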
@@ -0,0 +1,327 @@
/*
 *
 * Copyright (C) STMicroelectronics SA 2017
 * Author(s): M'boumba Cedric Madianga <cedric.madianga@gmail.com>
 *            Pierre-Yves Mordret <pierre-yves.mordret@st.com>
 *
 * License terms: GPL V2.0.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
 * details.
 *
 * DMA Router driver for STM32 DMA MUX
 *
 * Based on TI DMA Crossbar driver
 *
 */

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#include <linux/reset.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define STM32_DMAMUX_CCR(x)		(0x4 * (x))
#define STM32_DMAMUX_MAX_DMA_REQUESTS	32
#define STM32_DMAMUX_MAX_REQUESTS	255

struct stm32_dmamux {
	u32 master;
	u32 request;
	u32 chan_id;
};

struct stm32_dmamux_data {
	struct dma_router dmarouter;
	struct clk *clk;
	struct reset_control *rst;
	void __iomem *iomem;
	u32 dma_requests; /* Number of DMA requests connected to DMAMUX */
	u32 dmamux_requests; /* Number of DMA requests routed toward DMAs */
	spinlock_t lock; /* Protects register access */
	unsigned long *dma_inuse; /* Used DMA channel */
	u32 dma_reqs[]; /* Number of DMA requests per DMA master.
			 * [0] holds the number of DMA masters.
			 * To be kept at the very end of this structure.
			 */
};

static inline u32 stm32_dmamux_read(void __iomem *iomem, u32 reg)
{
	return readl_relaxed(iomem + reg);
}

static inline void stm32_dmamux_write(void __iomem *iomem, u32 reg, u32 val)
{
	writel_relaxed(val, iomem + reg);
}

static void stm32_dmamux_free(struct device *dev, void *route_data)
{
	struct stm32_dmamux_data *dmamux = dev_get_drvdata(dev);
	struct stm32_dmamux *mux = route_data;
	unsigned long flags;

	/* Clear dma request */
	spin_lock_irqsave(&dmamux->lock, flags);

	stm32_dmamux_write(dmamux->iomem, STM32_DMAMUX_CCR(mux->chan_id), 0);
	clear_bit(mux->chan_id, dmamux->dma_inuse);

	if (!IS_ERR(dmamux->clk))
		clk_disable(dmamux->clk);

	spin_unlock_irqrestore(&dmamux->lock, flags);

	dev_dbg(dev, "Unmapping DMAMUX(%u) to DMA%u(%u)\n",
		mux->request, mux->master, mux->chan_id);

	kfree(mux);
}

static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
					 struct of_dma *ofdma)
{
	struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
	struct stm32_dmamux_data *dmamux = platform_get_drvdata(pdev);
	struct stm32_dmamux *mux;
	u32 i, min, max;
	int ret;
	unsigned long flags;

	if (dma_spec->args_count != 3) {
		dev_err(&pdev->dev, "invalid number of dma mux args\n");
		return ERR_PTR(-EINVAL);
	}

	if (dma_spec->args[0] > dmamux->dmamux_requests) {
		dev_err(&pdev->dev, "invalid mux request number: %d\n",
			dma_spec->args[0]);
		return ERR_PTR(-EINVAL);
	}

	mux = kzalloc(sizeof(*mux), GFP_KERNEL);
	if (!mux)
		return ERR_PTR(-ENOMEM);

	spin_lock_irqsave(&dmamux->lock, flags);
	mux->chan_id = find_first_zero_bit(dmamux->dma_inuse,
					   dmamux->dma_requests);
	set_bit(mux->chan_id, dmamux->dma_inuse);
	spin_unlock_irqrestore(&dmamux->lock, flags);

	if (mux->chan_id == dmamux->dma_requests) {
		dev_err(&pdev->dev, "Run out of free DMA requests\n");
		ret = -ENOMEM;
		goto error;
	}

	/* Look for DMA Master */
	for (i = 1, min = 0, max = dmamux->dma_reqs[i];
	     i <= dmamux->dma_reqs[0];
	     min += dmamux->dma_reqs[i], max += dmamux->dma_reqs[++i])
		if (mux->chan_id < max)
			break;
	mux->master = i - 1;

	/* The of_node_put() will be done in of_dma_router_xlate function */
	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", i - 1);
	if (!dma_spec->np) {
		dev_err(&pdev->dev, "can't get dma master\n");
		ret = -EINVAL;
		goto error;
	}

	/* Set dma request */
	spin_lock_irqsave(&dmamux->lock, flags);
	if (!IS_ERR(dmamux->clk)) {
		ret = clk_enable(dmamux->clk);
		if (ret < 0) {
			spin_unlock_irqrestore(&dmamux->lock, flags);
			dev_err(&pdev->dev, "clk_prep_enable issue: %d\n", ret);
			goto error;
		}
	}
	spin_unlock_irqrestore(&dmamux->lock, flags);

	mux->request = dma_spec->args[0];

	/* craft DMA spec */
	dma_spec->args[3] = dma_spec->args[2];
	dma_spec->args[2] = dma_spec->args[1];
	dma_spec->args[1] = 0;
	dma_spec->args[0] = mux->chan_id - min;
	dma_spec->args_count = 4;

	stm32_dmamux_write(dmamux->iomem, STM32_DMAMUX_CCR(mux->chan_id),
			   mux->request);
	dev_dbg(&pdev->dev, "Mapping DMAMUX(%u) to DMA%u(%u)\n",
		mux->request, mux->master, mux->chan_id);

	return mux;

error:
	clear_bit(mux->chan_id, dmamux->dma_inuse);
	kfree(mux);
	return ERR_PTR(ret);
}
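/*
 * (Worked example, values invented: a client spec of
 * <&dmamux1 17 0x400 0x0> carries request 17 plus two cells of channel
 * config/features. If it lands on mux channel 11, owned by master 0
 * with min = 0, the rewritten 4-cell spec handed to the STM32 DMA
 * driver is <11 0 0x400 0x0>: the local stream id, a zero in the
 * request-line cell since routing is now the mux's job, then the
 * original two cells shifted up.)
 */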
static const struct of_device_id stm32_stm32dma_master_match[] = {
	{ .compatible = "st,stm32-dma", },
	{},
};

static int stm32_dmamux_probe(struct platform_device *pdev)
{
	struct device_node *node = pdev->dev.of_node;
	const struct of_device_id *match;
	struct device_node *dma_node;
	struct stm32_dmamux_data *stm32_dmamux;
	struct resource *res;
	void __iomem *iomem;
	int i, count, ret;
	u32 dma_req;

	if (!node)
		return -ENODEV;

	count = device_property_read_u32_array(&pdev->dev, "dma-masters",
					       NULL, 0);
	if (count < 0) {
		dev_err(&pdev->dev, "Can't get DMA master(s) node\n");
		return -ENODEV;
	}

	stm32_dmamux = devm_kzalloc(&pdev->dev, sizeof(*stm32_dmamux) +
				    sizeof(u32) * (count + 1), GFP_KERNEL);
	if (!stm32_dmamux)
		return -ENOMEM;

	dma_req = 0;
	for (i = 1; i <= count; i++) {
		dma_node = of_parse_phandle(node, "dma-masters", i - 1);

		match = of_match_node(stm32_stm32dma_master_match, dma_node);
		if (!match) {
			dev_err(&pdev->dev, "DMA master is not supported\n");
			of_node_put(dma_node);
			return -EINVAL;
		}

		if (of_property_read_u32(dma_node, "dma-requests",
					 &stm32_dmamux->dma_reqs[i])) {
			dev_info(&pdev->dev,
				 "Missing MUX output information, using %u.\n",
				 STM32_DMAMUX_MAX_DMA_REQUESTS);
			stm32_dmamux->dma_reqs[i] =
				STM32_DMAMUX_MAX_DMA_REQUESTS;
		}
		dma_req += stm32_dmamux->dma_reqs[i];
		of_node_put(dma_node);
	}

	if (dma_req > STM32_DMAMUX_MAX_DMA_REQUESTS) {
		dev_err(&pdev->dev, "Too many DMA Master Requests to manage\n");
		return -ENODEV;
	}

	stm32_dmamux->dma_requests = dma_req;
	stm32_dmamux->dma_reqs[0] = count;
	stm32_dmamux->dma_inuse = devm_kcalloc(&pdev->dev,
					       BITS_TO_LONGS(dma_req),
					       sizeof(unsigned long),
					       GFP_KERNEL);
	if (!stm32_dmamux->dma_inuse)
		return -ENOMEM;

	if (device_property_read_u32(&pdev->dev, "dma-requests",
				     &stm32_dmamux->dmamux_requests)) {
		stm32_dmamux->dmamux_requests = STM32_DMAMUX_MAX_REQUESTS;
		dev_warn(&pdev->dev, "DMAMUX defaulting on %u requests\n",
			 stm32_dmamux->dmamux_requests);
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -ENODEV;

	iomem = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(iomem))
		return PTR_ERR(iomem);

	spin_lock_init(&stm32_dmamux->lock);

	stm32_dmamux->clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(stm32_dmamux->clk)) {
		ret = PTR_ERR(stm32_dmamux->clk);
		if (ret == -EPROBE_DEFER)
			dev_info(&pdev->dev, "Missing controller clock\n");
		return ret;
	}

	stm32_dmamux->rst = devm_reset_control_get(&pdev->dev, NULL);
	if (!IS_ERR(stm32_dmamux->rst)) {
		reset_control_assert(stm32_dmamux->rst);
		udelay(2);
		reset_control_deassert(stm32_dmamux->rst);
	}

	stm32_dmamux->iomem = iomem;
	stm32_dmamux->dmarouter.dev = &pdev->dev;
	stm32_dmamux->dmarouter.route_free = stm32_dmamux_free;

	platform_set_drvdata(pdev, stm32_dmamux);

	if (!IS_ERR(stm32_dmamux->clk)) {
		ret = clk_prepare_enable(stm32_dmamux->clk);
		if (ret < 0) {
			dev_err(&pdev->dev, "clk_prep_enable error: %d\n", ret);
			return ret;
		}
	}

	/* Reset the dmamux */
	for (i = 0; i < stm32_dmamux->dma_requests; i++)
		stm32_dmamux_write(stm32_dmamux->iomem, STM32_DMAMUX_CCR(i), 0);

	if (!IS_ERR(stm32_dmamux->clk))
		clk_disable(stm32_dmamux->clk);

	return of_dma_router_register(node, stm32_dmamux_route_allocate,
				      &stm32_dmamux->dmarouter);
}

static const struct of_device_id stm32_dmamux_match[] = {
	{ .compatible = "st,stm32h7-dmamux" },
	{},
};

static struct platform_driver stm32_dmamux_driver = {
	.probe	= stm32_dmamux_probe,
	.driver = {
		.name = "stm32-dmamux",
		.of_match_table = stm32_dmamux_match,
	},
};

static int __init stm32_dmamux_init(void)
{
	return platform_driver_register(&stm32_dmamux_driver);
}
arch_initcall(stm32_dmamux_init);

MODULE_DESCRIPTION("DMA Router driver for STM32 DMA MUX");
MODULE_AUTHOR("M'boumba Cedric Madianga <cedric.madianga@gmail.com>");
MODULE_AUTHOR("Pierre-Yves Mordret <pierre-yves.mordret@st.com>");
MODULE_LICENSE("GPL v2");
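The master-lookup loop in stm32_dmamux_route_allocate() above is worth a worked example; this standalone replica (plain userspace C, with invented sizes) shows how a mux channel id resolves to a master and a local channel:

	#include <stdio.h>

	int main(void)
	{
		/* dma_reqs[0] = number of masters; two masters, 8 requests each */
		unsigned int dma_reqs[] = { 2, 8, 8 };
		unsigned int chan_id = 11, i, min, max;

		for (i = 1, min = 0, max = dma_reqs[i];
		     i <= dma_reqs[0];
		     min += dma_reqs[i], max += dma_reqs[++i])
			if (chan_id < max)
				break;

		/* prints "master 1, local channel 3" */
		printf("master %u, local channel %u\n", i - 1, chan_id - min);
		return 0;
	}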
File diff suppressed because it is too large
@@ -42,12 +42,18 @@

#define DMA_STAT		0x30

/* Offset between DMA_IRQ_EN and DMA_IRQ_STAT limits number of channels */
#define DMA_MAX_CHANNELS	(DMA_IRQ_CHAN_NR * 0x10 / 4)

/*
 * sun8i specific registers
 */
#define SUN8I_DMA_GATE		0x20
#define SUN8I_DMA_GATE_ENABLE	0x4

#define SUNXI_H3_SECURE_REG		0x20
#define SUNXI_H3_DMA_GATE		0x28
#define SUNXI_H3_DMA_GATE_ENABLE	0x4
/*
 * Channels specific registers
 */

@@ -62,16 +68,19 @@
#define DMA_CHAN_LLI_ADDR	0x08

#define DMA_CHAN_CUR_CFG	0x0c
#define DMA_CHAN_CFG_SRC_DRQ(x)		((x) & 0x1f)
#define DMA_CHAN_MAX_DRQ		0x1f
#define DMA_CHAN_CFG_SRC_DRQ(x)		((x) & DMA_CHAN_MAX_DRQ)
#define DMA_CHAN_CFG_SRC_IO_MODE	BIT(5)
#define DMA_CHAN_CFG_SRC_LINEAR_MODE	(0 << 5)
#define DMA_CHAN_CFG_SRC_BURST(x)	(((x) & 0x3) << 7)
#define DMA_CHAN_CFG_SRC_BURST_A31(x)	(((x) & 0x3) << 7)
#define DMA_CHAN_CFG_SRC_BURST_H3(x)	(((x) & 0x3) << 6)
#define DMA_CHAN_CFG_SRC_WIDTH(x)	(((x) & 0x3) << 9)

#define DMA_CHAN_CFG_DST_DRQ(x)		(DMA_CHAN_CFG_SRC_DRQ(x) << 16)
#define DMA_CHAN_CFG_DST_IO_MODE	(DMA_CHAN_CFG_SRC_IO_MODE << 16)
#define DMA_CHAN_CFG_DST_LINEAR_MODE	(DMA_CHAN_CFG_SRC_LINEAR_MODE << 16)
#define DMA_CHAN_CFG_DST_BURST(x)	(DMA_CHAN_CFG_SRC_BURST(x) << 16)
#define DMA_CHAN_CFG_DST_BURST_A31(x)	(DMA_CHAN_CFG_SRC_BURST_A31(x) << 16)
#define DMA_CHAN_CFG_DST_BURST_H3(x)	(DMA_CHAN_CFG_SRC_BURST_H3(x) << 16)
#define DMA_CHAN_CFG_DST_WIDTH(x)	(DMA_CHAN_CFG_SRC_WIDTH(x) << 16)

#define DMA_CHAN_CUR_SRC	0x10

@@ -90,6 +99,9 @@
#define NORMAL_WAIT	8
#define DRQ_SDRAM	1

/* forward declaration */
struct sun6i_dma_dev;

/*
 * Hardware channels / ports representation
 *

@@ -111,7 +123,12 @@ struct sun6i_dma_config {
	 * however these SoCs really have and need this bit, as seen in the
	 * BSP kernel source code.
	 */
	bool gate_needed;
	void (*clock_autogate_enable)(struct sun6i_dma_dev *);
	void (*set_burst_length)(u32 *p_cfg, s8 src_burst, s8 dst_burst);
	u32 src_burst_lengths;
	u32 dst_burst_lengths;
	u32 src_addr_widths;
	u32 dst_addr_widths;
};

/*

@@ -175,6 +192,9 @@ struct sun6i_dma_dev {
	struct sun6i_pchan	*pchans;
	struct sun6i_vchan	*vchans;
	const struct sun6i_dma_config *cfg;
	u32			num_pchans;
	u32			num_vchans;
	u32			max_request;
};

static struct device *chan2dev(struct dma_chan *chan)

@@ -251,8 +271,12 @@ static inline s8 convert_burst(u32 maxburst)
	switch (maxburst) {
	case 1:
		return 0;
	case 4:
		return 1;
	case 8:
		return 2;
	case 16:
		return 3;
	default:
		return -EINVAL;
	}

@@ -260,11 +284,29 @@ static inline s8 convert_burst(u32 maxburst)

static inline s8 convert_buswidth(enum dma_slave_buswidth addr_width)
{
	return addr_width >> 1;
	if ((addr_width < DMA_SLAVE_BUSWIDTH_1_BYTE) ||
	    (addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES))
		return -EINVAL;
	return ilog2(addr_width);
}

static void sun6i_enable_clock_autogate_a23(struct sun6i_dma_dev *sdev)
{
	writel(SUN8I_DMA_GATE_ENABLE, sdev->base + SUN8I_DMA_GATE);
}

static void sun6i_enable_clock_autogate_h3(struct sun6i_dma_dev *sdev)
{
	writel(SUNXI_H3_DMA_GATE_ENABLE, sdev->base + SUNXI_H3_DMA_GATE);
}

static void sun6i_set_burst_length_a31(u32 *p_cfg, s8 src_burst, s8 dst_burst)
{
	*p_cfg |= DMA_CHAN_CFG_SRC_BURST_A31(src_burst) |
		  DMA_CHAN_CFG_DST_BURST_A31(dst_burst);
}

static void sun6i_set_burst_length_h3(u32 *p_cfg, s8 src_burst, s8 dst_burst)
{
	*p_cfg |= DMA_CHAN_CFG_SRC_BURST_H3(src_burst) |
		  DMA_CHAN_CFG_DST_BURST_H3(dst_burst);
}
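/*
 * (Aside: the two hooks above exist because the burst fields moved.
 * Per the macros earlier in this diff, DMA_CHAN_CFG_SRC_BURST_A31(3)
 * == 0x3 << 7 == 0x180 (bits 8:7), while DMA_CHAN_CFG_SRC_BURST_H3(3)
 * == 0x3 << 6 == 0x0c0 (bits 7:6); destination fields sit 16 bits
 * higher in both layouts, and set_burst_length() picks the encoding
 * the SoC needs.)
 */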

static size_t sun6i_get_chan_size(struct sun6i_pchan *pchan)

@@ -399,7 +441,6 @@ static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
static void sun6i_dma_tasklet(unsigned long data)
{
	struct sun6i_dma_dev *sdev = (struct sun6i_dma_dev *)data;
	const struct sun6i_dma_config *cfg = sdev->cfg;
	struct sun6i_vchan *vchan;
	struct sun6i_pchan *pchan;
	unsigned int pchan_alloc = 0;

@@ -427,7 +468,7 @@ static void sun6i_dma_tasklet(unsigned long data)
	}

	spin_lock_irq(&sdev->lock);
	for (pchan_idx = 0; pchan_idx < cfg->nr_max_channels; pchan_idx++) {
	for (pchan_idx = 0; pchan_idx < sdev->num_pchans; pchan_idx++) {
		pchan = &sdev->pchans[pchan_idx];

		if (pchan->vchan || list_empty(&sdev->pending))

@@ -448,7 +489,7 @@ static void sun6i_dma_tasklet(unsigned long data)
	}
	spin_unlock_irq(&sdev->lock);

	for (pchan_idx = 0; pchan_idx < cfg->nr_max_channels; pchan_idx++) {
	for (pchan_idx = 0; pchan_idx < sdev->num_pchans; pchan_idx++) {
		if (!(pchan_alloc & BIT(pchan_idx)))
			continue;

@@ -470,7 +511,7 @@ static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
	int i, j, ret = IRQ_NONE;
	u32 status;

	for (i = 0; i < sdev->cfg->nr_max_channels / DMA_IRQ_CHAN_NR; i++) {
	for (i = 0; i < sdev->num_pchans / DMA_IRQ_CHAN_NR; i++) {
		status = readl(sdev->base + DMA_IRQ_STAT(i));
		if (!status)
			continue;

@@ -510,47 +551,49 @@ static int set_config(struct sun6i_dma_dev *sdev,
		      enum dma_transfer_direction direction,
		      u32 *p_cfg)
{
	enum dma_slave_buswidth src_addr_width, dst_addr_width;
	u32 src_maxburst, dst_maxburst;
	s8 src_width, dst_width, src_burst, dst_burst;

	src_addr_width = sconfig->src_addr_width;
	dst_addr_width = sconfig->dst_addr_width;
	src_maxburst = sconfig->src_maxburst;
	dst_maxburst = sconfig->dst_maxburst;

	switch (direction) {
	case DMA_MEM_TO_DEV:
		src_burst = convert_burst(sconfig->src_maxburst ?
					  sconfig->src_maxburst : 8);
		src_width = convert_buswidth(sconfig->src_addr_width !=
					     DMA_SLAVE_BUSWIDTH_UNDEFINED ?
					     sconfig->src_addr_width :
					     DMA_SLAVE_BUSWIDTH_4_BYTES);
		dst_burst = convert_burst(sconfig->dst_maxburst);
		dst_width = convert_buswidth(sconfig->dst_addr_width);
		if (src_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
			src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		src_maxburst = src_maxburst ? src_maxburst : 8;
		break;
	case DMA_DEV_TO_MEM:
		src_burst = convert_burst(sconfig->src_maxburst);
		src_width = convert_buswidth(sconfig->src_addr_width);
		dst_burst = convert_burst(sconfig->dst_maxburst ?
					  sconfig->dst_maxburst : 8);
		dst_width = convert_buswidth(sconfig->dst_addr_width !=
					     DMA_SLAVE_BUSWIDTH_UNDEFINED ?
					     sconfig->dst_addr_width :
					     DMA_SLAVE_BUSWIDTH_4_BYTES);
		if (dst_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
			dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		dst_maxburst = dst_maxburst ? dst_maxburst : 8;
		break;
	default:
		return -EINVAL;
	}

	if (src_burst < 0)
		return src_burst;
	if (src_width < 0)
		return src_width;
	if (dst_burst < 0)
		return dst_burst;
	if (dst_width < 0)
		return dst_width;
	if (!(BIT(src_addr_width) & sdev->slave.src_addr_widths))
		return -EINVAL;
	if (!(BIT(dst_addr_width) & sdev->slave.dst_addr_widths))
		return -EINVAL;
	if (!(BIT(src_maxburst) & sdev->cfg->src_burst_lengths))
		return -EINVAL;
	if (!(BIT(dst_maxburst) & sdev->cfg->dst_burst_lengths))
		return -EINVAL;

	*p_cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) |
		 DMA_CHAN_CFG_SRC_WIDTH(src_width) |
		 DMA_CHAN_CFG_DST_BURST(dst_burst) |
	src_width = convert_buswidth(src_addr_width);
	dst_width = convert_buswidth(dst_addr_width);
	dst_burst = convert_burst(dst_maxburst);
	src_burst = convert_burst(src_maxburst);

	*p_cfg = DMA_CHAN_CFG_SRC_WIDTH(src_width) |
		 DMA_CHAN_CFG_DST_WIDTH(dst_width);

	sdev->cfg->set_burst_length(p_cfg, src_burst, dst_burst);

	return 0;
}
|
||||
|
||||
|
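The reworked set_config() above validates the requested widths and bursts against per-variant capability masks before doing any conversion, then delegates burst-field placement to the variant's set_burst_length() callback. A minimal standalone sketch of that mask-validation idiom, in plain C with hypothetical names:

#include <stdint.h>

#define BIT(n) (1u << (n))

/* Hypothetical per-variant capabilities, encoded as BIT(legal value). */
struct dma_caps {
	uint32_t addr_widths;    /* e.g. BIT(1) | BIT(2) | BIT(4) */
	uint32_t burst_lengths;  /* e.g. BIT(1) | BIT(8) */
};

/* Reject a request unless both its width and burst are set in the masks. */
static int validate_request(const struct dma_caps *caps,
			    uint32_t width, uint32_t burst)
{
	if (!(BIT(width) & caps->addr_widths))
		return -1;
	if (!(BIT(burst) & caps->burst_lengths))
		return -1;
	return 0;
}

Encoding legal values as set bits keeps the per-SoC differences in data tables (the *_dma_cfg structs below) rather than in branching code.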
@@ -593,11 +636,11 @@ static struct dma_async_tx_descriptor *sun6i_dma_prep_dma_memcpy(
 		DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
 		DMA_CHAN_CFG_DST_LINEAR_MODE |
 		DMA_CHAN_CFG_SRC_LINEAR_MODE |
-		DMA_CHAN_CFG_SRC_BURST(burst) |
 		DMA_CHAN_CFG_SRC_WIDTH(width) |
-		DMA_CHAN_CFG_DST_BURST(burst) |
 		DMA_CHAN_CFG_DST_WIDTH(width);
 
+	sdev->cfg->set_burst_length(&v_lli->cfg, burst, burst);
+
 	sun6i_dma_lli_add(NULL, v_lli, p_lli, txd);
 
 	sun6i_dma_dump_lli(vchan, v_lli);
@@ -948,7 +991,7 @@ static struct dma_chan *sun6i_dma_of_xlate(struct of_phandle_args *dma_spec,
 	struct dma_chan *chan;
 	u8 port = dma_spec->args[0];
 
-	if (port > sdev->cfg->nr_max_requests)
+	if (port > sdev->max_request)
 		return NULL;
 
 	chan = dma_get_any_slave_channel(&sdev->slave);
@@ -981,7 +1024,7 @@ static inline void sun6i_dma_free(struct sun6i_dma_dev *sdev)
 {
 	int i;
 
-	for (i = 0; i < sdev->cfg->nr_max_vchans; i++) {
+	for (i = 0; i < sdev->num_vchans; i++) {
 		struct sun6i_vchan *vchan = &sdev->vchans[i];
 
 		list_del(&vchan->vc.chan.device_node);
@@ -1009,6 +1052,15 @@ static struct sun6i_dma_config sun6i_a31_dma_cfg = {
 	.nr_max_channels = 16,
 	.nr_max_requests = 30,
 	.nr_max_vchans   = 53,
+	.set_burst_length = sun6i_set_burst_length_a31,
+	.src_burst_lengths = BIT(1) | BIT(8),
+	.dst_burst_lengths = BIT(1) | BIT(8),
+	.src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
+	.dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
 };
 
 /*
@@ -1020,24 +1072,76 @@ static struct sun6i_dma_config sun8i_a23_dma_cfg = {
 	.nr_max_channels = 8,
 	.nr_max_requests = 24,
 	.nr_max_vchans   = 37,
-	.gate_needed	= true,
+	.clock_autogate_enable = sun6i_enable_clock_autogate_a23,
+	.set_burst_length = sun6i_set_burst_length_a31,
+	.src_burst_lengths = BIT(1) | BIT(8),
+	.dst_burst_lengths = BIT(1) | BIT(8),
+	.src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
+	.dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
 };
 
 static struct sun6i_dma_config sun8i_a83t_dma_cfg = {
 	.nr_max_channels = 8,
 	.nr_max_requests = 28,
 	.nr_max_vchans   = 39,
+	.clock_autogate_enable = sun6i_enable_clock_autogate_a23,
+	.set_burst_length = sun6i_set_burst_length_a31,
+	.src_burst_lengths = BIT(1) | BIT(8),
+	.dst_burst_lengths = BIT(1) | BIT(8),
+	.src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
+	.dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
 };
 
 /*
  * The H3 has 12 physical channels, a maximum DRQ port id of 27,
  * and a total of 34 usable source and destination endpoints.
+ * It also supports additional burst lengths and bus widths,
+ * and the burst length fields have different offsets.
  */
 
 static struct sun6i_dma_config sun8i_h3_dma_cfg = {
 	.nr_max_channels = 12,
 	.nr_max_requests = 27,
 	.nr_max_vchans   = 34,
+	.clock_autogate_enable = sun6i_enable_clock_autogate_h3,
+	.set_burst_length = sun6i_set_burst_length_h3,
+	.src_burst_lengths = BIT(1) | BIT(4) | BIT(8) | BIT(16),
+	.dst_burst_lengths = BIT(1) | BIT(4) | BIT(8) | BIT(16),
+	.src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
+	.dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
+};
+
+/*
+ * The A64 binding uses the number of dma channels from the
+ * device tree node.
+ */
+static struct sun6i_dma_config sun50i_a64_dma_cfg = {
+	.clock_autogate_enable = sun6i_enable_clock_autogate_h3,
+	.set_burst_length = sun6i_set_burst_length_h3,
+	.src_burst_lengths = BIT(1) | BIT(4) | BIT(8) | BIT(16),
+	.dst_burst_lengths = BIT(1) | BIT(4) | BIT(8) | BIT(16),
+	.src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
+	.dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
 };
 
 /*
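The variants differ not only in counts and capability masks but in where the burst-length bits sit in the channel configuration register, which is why the hunk above routes that through a set_burst_length callback instead of shared shift macros. A sketch of the dispatch pattern, with made-up shift values (the real A31/H3 offsets live in the driver and are not reproduced here):

#include <stdint.h>

/* Made-up field positions: each SoC family places the burst bits elsewhere. */
static void set_burst_variant_a(uint32_t *cfg, uint8_t src, uint8_t dst)
{
	*cfg |= ((uint32_t)src << 5) | ((uint32_t)dst << 7);
}

static void set_burst_variant_b(uint32_t *cfg, uint8_t src, uint8_t dst)
{
	*cfg |= ((uint32_t)src << 6) | ((uint32_t)dst << 22);
}

/* The per-variant config carries the callback, as sun6i_dma_config does. */
struct variant_config {
	void (*set_burst_length)(uint32_t *cfg, uint8_t src, uint8_t dst);
};

static const struct variant_config variant_a = {
	.set_burst_length = set_burst_variant_a,
};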
@@ -1049,7 +1153,16 @@ static struct sun6i_dma_config sun8i_v3s_dma_cfg = {
 	.nr_max_channels = 8,
 	.nr_max_requests = 23,
 	.nr_max_vchans   = 24,
-	.gate_needed	= true,
+	.clock_autogate_enable = sun6i_enable_clock_autogate_a23,
+	.set_burst_length = sun6i_set_burst_length_a31,
+	.src_burst_lengths = BIT(1) | BIT(8),
+	.dst_burst_lengths = BIT(1) | BIT(8),
+	.src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
+	.dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+			     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+			     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES),
 };
 
 static const struct of_device_id sun6i_dma_match[] = {
@@ -1058,13 +1171,14 @@ static const struct of_device_id sun6i_dma_match[] = {
 	{ .compatible = "allwinner,sun8i-a83t-dma", .data = &sun8i_a83t_dma_cfg },
 	{ .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg },
 	{ .compatible = "allwinner,sun8i-v3s-dma", .data = &sun8i_v3s_dma_cfg },
+	{ .compatible = "allwinner,sun50i-a64-dma", .data = &sun50i_a64_dma_cfg },
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, sun6i_dma_match);
 
 static int sun6i_dma_probe(struct platform_device *pdev)
 {
-	const struct of_device_id *device;
+	struct device_node *np = pdev->dev.of_node;
 	struct sun6i_dma_dev *sdc;
 	struct resource *res;
 	int ret, i;
@@ -1073,10 +1187,9 @@ static int sun6i_dma_probe(struct platform_device *pdev)
 	if (!sdc)
 		return -ENOMEM;
 
-	device = of_match_device(sun6i_dma_match, &pdev->dev);
-	if (!device)
+	sdc->cfg = of_device_get_match_data(&pdev->dev);
+	if (!sdc->cfg)
 		return -ENODEV;
-	sdc->cfg = device->data;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	sdc->base = devm_ioremap_resource(&pdev->dev, res);
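of_device_get_match_data() collapses the three-step of_match_device() dance into a single call that returns the matching entry's .data pointer directly. A hedged sketch of the same probe idiom for a hypothetical driver (all 'foo' names are invented):

#include <linux/of_device.h>
#include <linux/platform_device.h>

struct foo_config;

/* Sketch only: the idiom, not a real driver. */
struct foo_dev {
	const struct foo_config *cfg;
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_dev *fd;

	fd = devm_kzalloc(&pdev->dev, sizeof(*fd), GFP_KERNEL);
	if (!fd)
		return -ENOMEM;

	/* Returns the matching of_device_id's .data, or NULL. */
	fd->cfg = of_device_get_match_data(&pdev->dev);
	if (!fd->cfg)
		return -ENODEV;

	return 0;
}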
@@ -1129,37 +1242,57 @@ static int sun6i_dma_probe(struct platform_device *pdev)
 	sdc->slave.device_pause = sun6i_dma_pause;
 	sdc->slave.device_resume = sun6i_dma_resume;
 	sdc->slave.device_terminate_all = sun6i_dma_terminate_all;
-	sdc->slave.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
-				     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
-				     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-	sdc->slave.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
-				     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
-				     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+	sdc->slave.src_addr_widths = sdc->cfg->src_addr_widths;
+	sdc->slave.dst_addr_widths = sdc->cfg->dst_addr_widths;
 	sdc->slave.directions = BIT(DMA_DEV_TO_MEM) |
 				BIT(DMA_MEM_TO_DEV);
 	sdc->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	sdc->slave.dev = &pdev->dev;
 
-	sdc->pchans = devm_kcalloc(&pdev->dev, sdc->cfg->nr_max_channels,
+	sdc->num_pchans = sdc->cfg->nr_max_channels;
+	sdc->num_vchans = sdc->cfg->nr_max_vchans;
+	sdc->max_request = sdc->cfg->nr_max_requests;
+
+	ret = of_property_read_u32(np, "dma-channels", &sdc->num_pchans);
+	if (ret && !sdc->num_pchans) {
+		dev_err(&pdev->dev, "Can't get dma-channels.\n");
+		return ret;
+	}
+
+	ret = of_property_read_u32(np, "dma-requests", &sdc->max_request);
+	if (ret && !sdc->max_request) {
+		dev_info(&pdev->dev, "Missing dma-requests, using %u.\n",
+			 DMA_CHAN_MAX_DRQ);
+		sdc->max_request = DMA_CHAN_MAX_DRQ;
+	}
+
+	/*
+	 * If the number of vchans is not specified, derive it from the
+	 * highest port number, at most one channel per port and direction.
+	 */
+	if (!sdc->num_vchans)
+		sdc->num_vchans = 2 * (sdc->max_request + 1);
+
+	sdc->pchans = devm_kcalloc(&pdev->dev, sdc->num_pchans,
 				   sizeof(struct sun6i_pchan), GFP_KERNEL);
 	if (!sdc->pchans)
 		return -ENOMEM;
 
-	sdc->vchans = devm_kcalloc(&pdev->dev, sdc->cfg->nr_max_vchans,
+	sdc->vchans = devm_kcalloc(&pdev->dev, sdc->num_vchans,
 				   sizeof(struct sun6i_vchan), GFP_KERNEL);
 	if (!sdc->vchans)
 		return -ENOMEM;
 
 	tasklet_init(&sdc->task, sun6i_dma_tasklet, (unsigned long)sdc);
 
-	for (i = 0; i < sdc->cfg->nr_max_channels; i++) {
+	for (i = 0; i < sdc->num_pchans; i++) {
 		struct sun6i_pchan *pchan = &sdc->pchans[i];
 
 		pchan->idx = i;
 		pchan->base = sdc->base + 0x100 + i * 0x40;
 	}
 
-	for (i = 0; i < sdc->cfg->nr_max_vchans; i++) {
+	for (i = 0; i < sdc->num_vchans; i++) {
 		struct sun6i_vchan *vchan = &sdc->vchans[i];
 
 		INIT_LIST_HEAD(&vchan->node);
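The hunk above seeds num_pchans/num_vchans/max_request from the compatible's match data, then lets optional "dma-channels" and "dma-requests" properties override them. The idiom works because of_property_read_u32() leaves its output untouched when the property is absent, so the seeded default survives a failed read; with the fallback formula, a max_request of 27 would derive 2 * (27 + 1) = 56 virtual channels, one per port and direction. A condensed sketch of the seed-then-override pattern (function name hypothetical):

#include <linux/of.h>

/* Sketch: seed a count from match data, let the devicetree override it.
 * 'def' is the compile-time default, 0 for DT-only variants. */
static int get_channel_count(struct device_node *np, u32 def, u32 *out)
{
	u32 channels = def;
	int err;

	/* of_property_read_u32() leaves 'channels' untouched on failure. */
	err = of_property_read_u32(np, "dma-channels", &channels);
	if (err && !channels)
		return err;	/* no built-in default and no DT property */

	*out = channels;
	return 0;
}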
@@ -1199,8 +1332,8 @@ static int sun6i_dma_probe(struct platform_device *pdev)
 		goto err_dma_unregister;
 	}
 
-	if (sdc->cfg->gate_needed)
-		writel(SUN8I_DMA_GATE_ENABLE, sdc->base + SUN8I_DMA_GATE);
+	if (sdc->cfg->clock_autogate_enable)
+		sdc->cfg->clock_autogate_enable(sdc);
 
 	return 0;
 
--- a/drivers/dma/ti-dma-crossbar.c
+++ b/drivers/dma/ti-dma-crossbar.c
@@ -49,12 +49,12 @@ struct ti_am335x_xbar_data {
 
 struct ti_am335x_xbar_map {
 	u16 dma_line;
-	u16 mux_val;
+	u8 mux_val;
 };
 
-static inline void ti_am335x_xbar_write(void __iomem *iomem, int event, u16 val)
+static inline void ti_am335x_xbar_write(void __iomem *iomem, int event, u8 val)
 {
-	writeb_relaxed(val & 0x1f, iomem + event);
+	writeb_relaxed(val, iomem + event);
 }
 
 static void ti_am335x_xbar_free(struct device *dev, void *route_data)
@@ -105,7 +105,7 @@ static void *ti_am335x_xbar_route_allocate(struct of_phandle_args *dma_spec,
 	}
 
 	map->dma_line = (u16)dma_spec->args[0];
-	map->mux_val = (u16)dma_spec->args[2];
+	map->mux_val = (u8)dma_spec->args[2];
 
 	dma_spec->args[2] = 0;
 	dma_spec->args_count = 2;
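The am335x/am43xx crossbar registers are byte wide, and mux values can be wider than five bits (hence the type change to u8 and the dropped mask): `val & 0x1f` silently corrupted any larger value. A small plain-C illustration of the truncation:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint8_t mux = 0x3e;             /* a mux value wider than five bits */

	assert((mux & 0x1f) == 0x1e);   /* old masking drops the high bit */
	assert(mux == 0x3e);            /* writing the full byte keeps it */
	return 0;
}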
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -366,6 +366,20 @@ struct xilinx_dma_chan {
 	u16 tdest;
 };
 
+/**
+ * enum xdma_ip_type: DMA IP type.
+ *
+ * XDMA_TYPE_AXIDMA: Axi dma ip.
+ * XDMA_TYPE_CDMA: Axi cdma ip.
+ * XDMA_TYPE_VDMA: Axi vdma ip.
+ *
+ */
+enum xdma_ip_type {
+	XDMA_TYPE_AXIDMA = 0,
+	XDMA_TYPE_CDMA,
+	XDMA_TYPE_VDMA,
+};
+
 struct xilinx_dma_config {
 	enum xdma_ip_type dmatype;
 	int (*clk_init)(struct platform_device *pdev, struct clk **axi_clk,
--- a/include/linux/dma/xilinx_dma.h
+++ b/include/linux/dma/xilinx_dma.h
@@ -41,20 +41,6 @@ struct xilinx_vdma_config {
 	int ext_fsync;
 };
 
-/**
- * enum xdma_ip_type: DMA IP type.
- *
- * XDMA_TYPE_AXIDMA: Axi dma ip.
- * XDMA_TYPE_CDMA: Axi cdma ip.
- * XDMA_TYPE_VDMA: Axi vdma ip.
- *
- */
-enum xdma_ip_type {
-	XDMA_TYPE_AXIDMA = 0,
-	XDMA_TYPE_CDMA,
-	XDMA_TYPE_VDMA,
-};
-
 int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
 					struct xilinx_vdma_config *cfg);
 
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -329,7 +329,7 @@ enum dma_slave_buswidth {
  * @src_addr_width: this is the width in bytes of the source (RX)
  * register where DMA data shall be read. If the source
  * is memory this may be ignored depending on architecture.
- * Legal values: 1, 2, 4, 8.
+ * Legal values: 1, 2, 3, 4, 8, 16, 32, 64.
  * @dst_addr_width: same as src_addr_width but for destination
  * target (TX) mutatis mutandis.
  * @src_maxburst: the maximum number of words (note: words, as in
@@ -404,13 +404,15 @@ enum dma_residue_granularity {
 	DMA_RESIDUE_GRANULARITY_BURST = 2,
 };
 
-/* struct dma_slave_caps - expose capabilities of a slave channel only
- *
- * @src_addr_widths: bit mask of src addr widths the channel supports
- * @dst_addr_widths: bit mask of dstn addr widths the channel supports
- * @directions: bit mask of slave direction the channel supported
- *  since the enum dma_transfer_direction is not defined as bits for each
- *  type of direction, the dma controller should fill (1 << <TYPE>) and same
+/**
+ * struct dma_slave_caps - expose capabilities of a slave channel only
+ * @src_addr_widths: bit mask of src addr widths the channel supports.
+ *	Width is specified in bytes, e.g. for a channel supporting
+ *	a width of 4 the mask should have BIT(4) set.
+ * @dst_addr_widths: bit mask of dst addr widths the channel supports
+ * @directions: bit mask of slave directions the channel supports.
+ *	Since the enum dma_transfer_direction is not defined as bit flag for
+ *	each type, the dma controller should set BIT(<TYPE>) and same
  *  should be checked by controller as well
  * @max_burst: max burst capability per-transfer
  * @cmd_pause: true, if pause and thereby resume is supported
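Per the clarified kernel-doc, the width masks are indexed by the byte width itself: a channel handling 4-byte words advertises BIT(4), and since the enum dma_slave_buswidth values equal the widths in bytes, BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) is the same bit. A consumer-side sketch using dma_get_slave_caps(), which is part of the dmaengine API (the wrapper function itself is hypothetical):

#include <linux/dmaengine.h>

/* Sketch: can this channel do 4-byte device-to-memory transfers? */
static bool chan_supports_4byte_rx(struct dma_chan *chan)
{
	struct dma_slave_caps caps;

	if (dma_get_slave_caps(chan, &caps))
		return false;

	/* Widths are encoded as BIT(width-in-bytes)... */
	return (caps.src_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)) &&
	       /* ...directions as BIT(enum dma_transfer_direction). */
	       (caps.directions & BIT(DMA_DEV_TO_MEM));
}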
@@ -678,11 +680,13 @@ struct dma_filter {
  * @dev_id: unique device ID
  * @dev: struct device reference for dma mapping api
  * @src_addr_widths: bit mask of src addr widths the device supports
+ *	Width is specified in bytes, e.g. for a device supporting
+ *	a width of 4 the mask should have BIT(4) set.
  * @dst_addr_widths: bit mask of dst addr widths the device supports
- * @directions: bit mask of slave direction the device supports since
- *  the enum dma_transfer_direction is not defined as bits for
- *  each type of direction, the dma controller should fill (1 <<
- *  <TYPE>) and same should be checked by controller as well
+ * @directions: bit mask of slave directions the device supports.
+ *	Since the enum dma_transfer_direction is not defined as bit flag for
+ *	each type, the dma controller should set BIT(<TYPE>) and same
+ *	should be checked by controller as well
  * @max_burst: max burst capability per-transfer
  * @residue_granularity: granularity of the transfer residue reported
  *	by tx_status