Having the SDMA driver use a tasklet to run the clients' callbacks
introduces some issues:
- a risk of desynchronized data, because of the race window created
by retrieving the DMA transaction status only when the callback is
executed, leaving plenty of time for the transaction status to be
altered.
- inter-transfer latency, which can leave channels idle.
Move the callback execution for cyclic channels into the SDMA
interrupt handler (as advised in Documentation/dmaengine/provider.txt)
to (a) reduce the inter-transfer latency and (b) eliminate the race
condition where the DMA transaction status might have changed by the
time it is read.
The responsibility for SDMA interrupt latency is moved to the SDMA
clients, which should, case by case, defer the work to bottom halves
when needed.
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Reviewed-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are at least two known devices, e.g. the DMA controller found on
the ARC AXS101 SDP board, that have the LLP register but no multi-block
transfer support.
Override the autodetection with user-provided data.
Reported-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Reviewed-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Intel Quark UART uses the DesignWare DMA IP. However, the DMA IP is
connected in such a way that the handshake interface uses inverted
polarity. We have to provide a possibility to set this in the DMA
driver when configuring a channel.
Introduce a new member of the custom slave configuration called
'hs_polarity' and set active-low polarity when this value is 'true'.
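As a rough illustration (not taken from the actual Quark UART glue
code), a slave channel description using the new member could look like
this; the request line values are placeholders:

  #include <linux/platform_data/dma-dw.h>

  static struct dw_dma_slave uart_tx_dma = {
          .src_id      = 0,       /* placeholder request line */
          .dst_id      = 1,       /* placeholder request line */
          .hs_polarity = true,    /* handshake interface is active low */
  };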
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
It seems we need to extend the custom slave configuration by one more
member to support the Intel Quark UART. It becomes a burden to manage
all members of struct dw_dma_slave one by one.
Replace the set of fields by embedding struct dw_dma_slave into
struct dw_dma_chan.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the code from cppi41_dma_issue_pending() to push_desc_queue(), as
that is the only caller of push_desc_queue().
We want to do this for PM runtime, as we need to call push_desc_queue()
also for pending queued transfers on PM runtime resume.
No functional changes, this just moves code around.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This allows the k3dma driver to be selected on HiKey via the ARCH_HISI
dependency.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Jingoo Han <jg1.han@samsung.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the k3dma driver doesn't offer the cyclic mode
necessary for handling audio.
This patch adds it.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline, removed a few
bits of logic that didn't seem to have much effect]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With cyclic mode, the shared virt-dma logic doesn't actually manage the
descriptor state, nor the calling of the descriptor free callback. This
results in leaking a desc structure every time we start an audio
transfer. Thus we must manage it ourselves.
The k3dma driver already keeps track of the active and finished
descriptors via the ds_run and ds_done pointers, so clean up how we
handle those two values, and when we tear everything down in
terminate_all, call free_desc on the ds_run and ds_done pointers if
they are not NULL.
NOTE: HiKey doesn't use the non-cyclic dma modes, so I haven't been
able to test those modes. But with this patch we no longer leak the
desc structures.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
After lots of debugging of an occasional DMA ERR issue, I realized that
the desc structures which we point the dma hardware at are being
allocated out of regular memory. This means that when we fill the desc
structures, the data doesn't always get flushed out to memory by the
time we start the dma transfer, resulting in the dma engine reading
some null values and raising a DMA ERR on the first irq.
Thus, this patch adopts a mechanism similar to the zx296702_dma driver
of allocating the desc structures from a dma pool, so the memory
caching rules are properly set to avoid this issue.
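For illustration only (the names below are placeholders, not the actual
k3dma symbols), the dma_pool pattern assumed here looks roughly like:

  struct dma_pool *pool;
  dma_addr_t lli_phys;
  void *lli;

  pool = dma_pool_create("dma-lli", dev, LLI_BLOCK_SIZE, 32, 0);
  if (!pool)
          return -ENOMEM;

  /* descriptor memory the hardware reads; no explicit flush needed */
  lli = dma_pool_alloc(pool, GFP_NOWAIT, &lli_phys);
  if (!lli)
          return -ENOMEM;

  /* ... hand lli_phys to the engine ... */

  dma_pool_free(pool, lli, lli_phys);     /* on teardown */
  dma_pool_destroy(pool);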
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As it was before, as soon as the DMAC IP detected an error it would
return IRQ_NONE, since no actual transfer had completed. After spinning
on that for 100K interrupts, Linux yanks the IRQ with a "nobody cared"
error.
This patch lets the handler handle the interrupt and keep the IRQ
alive.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The offsets for the ERR1 and ERR2 registers are actually wrong, which
is why an error can never be cleared.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Max burst len is a 4-bit field, but at the moment it is clipped with a
5-bit constant; reduce it to the maximum that the field can express.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Allow i.MX7 to work with the imx-sdma driver.
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Tested-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
of_match_device() could return NULL, and so cause a NULL pointer
dereference later at line 850:
mdma->soc = match->data;
To fix this problem, use of_device_get_match_data(); this also
simplifies the code a little by using a standard function for getting
the match data.
This was reported by coverity (CID 1324134).
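A minimal sketch of the replacement pattern (the structure and field
names are illustrative, not the exact driver code):

  mdma->soc = of_device_get_match_data(&pdev->dev);
  if (!mdma->soc)
          return -EINVAL;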
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 498b5fdd40 ("PM / clk: Add support for adding a specific clock
from device-tree") added a new helper function for adding a clock from
device-tree to a device. Update the ADMA driver to use this new
function to simplify the driver.
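A rough sketch of the simplification assumed here; the clock name is
illustrative, not the actual ADMA binding:

  ret = of_pm_clk_add_clk(&pdev->dev, "dma");
  if (ret)
          return ret;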
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
free_irq() expects the same device identity that was passed to the
corresponding request_irq(), otherwise the IRQ is not freed.
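The requirement, sketched outside the actual at_xdmac code (handler and
cookie names are hypothetical):

  ret = request_irq(irq, xdmac_handler, 0, "at_xdmac", atxdmac);
  if (ret)
          return ret;
  /* ... on remove ... */
  free_irq(irq, atxdmac);   /* same cookie, otherwise the IRQ stays registered */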
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When terminating for_each_compatible_node() iteration with
break or return, of_node_put() should be used to prevent
stale device node references from being left behind.
Found by Coccinelle.
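The pattern looks roughly like this; "vendor,example" and the predicate
are placeholders:

  struct device_node *np;

  for_each_compatible_node(np, NULL, "vendor,example") {
          if (node_is_interesting(np)) {          /* hypothetical predicate */
                  of_node_put(np);                /* drop the iterator's reference */
                  break;
          }
  }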
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In a very tight timeframe, the debug message in the transfer completion
handler can be misleading, as the result of the completion test can
change just after the message, and the code flow cannot be deduced from
the debug message.
This is just a cleanup to make debugging easier.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the case where a descriptor is chained on a running channel, and as
explained in the comment in the code 10 lines above, the success of the
chaining is ensured either if:
- the DMA is still running,
- or the chained transfer is completed.
Unfortunately the transfer completeness test was done on the descriptor
to which the transfer was chained, and not on the transfer being
chained at the end, i.e. hot-chained.
This corner case is extremely hard to trigger, as usually the DMA chain
is still running and the first case takes care of returning success of
the hot-chaining. It was seen by hot-chaining several "small transfers"
to a running "big transfer", not in a real-life usecase but while
testing the robustness of the driver.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sDMA in OMAP3630 or newer SoCs has support for LinkedList transfers.
When the LinkedList or descriptor-load feature is present we can create
descriptors for each SG element and program sDMA to walk through the
list of descriptors, instead of the current way of sDMA stop, sDMA
reconfiguration and sDMA start after each SG transfer.
By using LinkedList transfers in sDMA the number of DMA interrupts
decreases dramatically.
Booting up the board with the filesystem on SD card, for example:
W/o LinkedList support:
27: 4436 0 WUGEN 13 Level omap-dma-engine
Same board/filesystem with this patch:
27: 1027 0 WUGEN 13 Level omap-dma-engine
Or copying files from SD card to eMMC (2.1G under /usr/, 232001 files):
W/o LinkedList we see ~761069 DMA interrupts.
With LinkedList support it is down to ~269314 DMA interrupts.
With the decreased DMA interrupt count the CPU load drops significantly
as well.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of accessing the array via index, take the pointer first and use
it to set up the omap_sg struct.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Print the same information the driver prints when allocating the
channel resources regarding the sDMA channel.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On OMAP1 platforms we do not have 32 channels available. Allocate the
lch_map based on the number of available channels. This way we are not
going to have more visible channels than are available on the platform.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Flatten the indentation level of the function, which gives a better
view of the cases we handle here.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We can drop the (sg)idx parameter for the omap_dma_start_sg() function and
increment the sgidx inside of the same function.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The USB-DMAC's interrupt can fire even if CHCR.DE is not set to 1,
because CHCR.NULLE is set to 1. So this driver should call
usb_dmac_isr_transfer_end() only if the DE bit is set to 1. Otherwise,
the desc may be NULL in usb_dmac_isr_transfer_end().
Fixes: 0c1c8ff32f ("dmaengine: usb-dmac: Add Renesas USB DMA Controller (USB-DMAC) driver")
Cc: <stable@vger.kernel.org> # v4.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware; however, we should make the code consistent to avoid
confusion.
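A sketch of the intended ordering in a driver's completion path; the
helpers are the dmaengine core ones assumed here, and the surrounding
descriptor code is illustrative:

  struct dma_async_tx_descriptor *txd = &desc->txd;

  dma_cookie_complete(txd);
  dma_descriptor_unmap(txd);                      /* unmap (and cache invalidate) first */
  dmaengine_desc_get_callback_invoke(txd, NULL);  /* then notify the client */
  dma_run_dependencies(txd);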
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Cc: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware; however, we should make the code consistent to avoid
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware; however, we should make the code consistent to avoid
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware; however, we should make the code consistent to avoid
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Cc: Xuelin Shi <xuelin.shi@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware; however, we should make the code consistent to avoid
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Provide a mechanism to translate CHANERR bits to English strings in
order to allow the user to report more concise errors.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add error handling to the ioatdma driver so that when a read/write
error occurs the error results are reported back and all the remaining
descriptors are aborted. This utilizes the new dmaengine callback
function that allows reporting of results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a new callback that will provide the error result for a
transaction. The result is allocated on the stack and the callback
should create a copy if it wishes to retain the information after
exiting. The result parameter is now defined and takes over the dummy
void pointer we placed in the helper functions previously. dmaengine
drivers should start converting to the new "callback_result" callback
in order to receive transaction results.
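As a hedged sketch of client-side usage (the struct and enum follow
include/linux/dmaengine.h as assumed here; the client context and
handler are illustrative):

  static void my_dma_done(void *param, const struct dmaengine_result *result)
  {
          struct my_ctx *ctx = param;     /* hypothetical client context */

          /* 'result' lives on the caller's stack: copy what must be kept */
          if (result->result != DMA_TRANS_NOERROR)
                  dev_err(ctx->dev, "DMA failed, residue %u\n", result->residue);
  }

  txd->callback_result = my_dma_done;
  txd->callback_param = ctx;
  dmaengine_submit(txd);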
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Li Yang <leoli@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Viresh Kumar <vireshk@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides the
results of the transaction. The conversion maintains the current
behavior, and the driver must convert to the new callback mechanism at
a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dmaengine does not provide a way to pass back the result of a DMA
transaction through the callback function. We are adding dmaengine
helper functions in order to prepare for a mechanism that allows result
status and other information to be passed through the callback. The
initial conversion will make the existing drivers use these new helper
functions but retain the original behavior of the code. However, the
helper functions pave the way towards adding the result parameter to
the callback.
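A rough sketch of the helper usage pattern assumed here (the helpers
live in drivers/dma/dmaengine.h; the locking context is illustrative):

  struct dmaengine_desc_callback cb;

  dmaengine_desc_get_callback(&desc->txd, &cb);   /* snapshot under the channel lock */
  spin_unlock_irqrestore(&chan->lock, flags);
  dmaengine_desc_callback_invoke(&cb, NULL);      /* invoke outside the lock */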
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Static analysis showed that an uninitialized array is being used for
comparison. At line 850, when a dma_mapping_error() occurs, the code
jumps to the dma_unmap label. At this point dma_srcs has not been
initialized, yet the code after the dma_unmap label checks dma_srcs in
a comparison and is thus comparing against random garbage in the array.
Given that mapping dest_dma is the first instance of mapping DMA memory
and it failed, there is really nothing to be cleaned up, so the code
should jump to the free_resources label instead.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This time we have a largish set of changes:
New drivers:
- Xilinx zynqmp dma engine driver.
- Marvell xor2 driver.
Updates:
- dmatest sg support.
- updates and enhancements to Xilinx drivers, adding of cyclic mode.
- clock handling fixes across drivers.
- removal of OOM messages on kzalloc across subsystem.
- interleaved transfers support in omap driver.
- runtime pm support in qcom bam dma.
- tasklet kill freeup across drivers.
- irq cleanup on remove across drivers.
Merge tag 'dmaengine-4.8-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have bit of largish changes: two new drivers, bunch of
updates and cleanups to existing set. Nothing super exciting though.
New drivers:
- Xilinx zynqmp dma engine driver
- Marvell xor2 driver
Updates:
- dmatest sg support
- updates and enhancements to Xilinx drivers, adding of cyclic mode
- clock handling fixes across drivers
- removal of OOM messages on kzalloc across subsystem
- interleaved transfers support in omap driver
- runtime pm support in qcom bam dma
- tasklet kill freeup across drivers
- irq cleanup on remove across drivers"
* tag 'dmaengine-4.8-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (94 commits)
dmaengine: k3dma: add missing clk_disable_unprepare() on error in k3_dma_probe()
dmaengine: zynqmp_dma: add missing MODULE_LICENSE
dmaengine: qcom_hidma: use for_each_matching_node() macro
dmaengine: zynqmp_dma: Fix static checker warning
dmaengine: omap-dma: Support for interleaved transfer
dmaengine: ioat: statify symbol
dmaengine: pxa_dma: implement device_synchronize
dmaengine: imx-sdma: remove assignment never used
dmaengine: imx-sdma: remove dummy assignment
dmaengine: cppi: remove unused and bogus check
dmaengine: qcom_hidma_lli: kill the tasklets upon exit
dmaengine: pxa_dma: remove owner assignment
dmaengine: fsl_raid: remove owner assignment
dmaengine: coh901318: remove owner assignment
dmaengine: qcom_hidma: kill the tasklets upon exit
dmaengine: txx9dmac: explicitly freeup irq
dmaengine: sirf-dma: kill the tasklets upon exit
dmaengine: s3c24xx: kill the tasklets upon exit
dmaengine: s3c24xx: explicitly freeup irq
dmaengine: pl330: explicitly freeup irq
...
Add the missing clk_disable_unprepare() before returning from
k3_dma_probe() in the error handling case.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We get a warning about the missing MODULE_LICENSE tag for this newly
added driver module:
WARNING: modpost: missing MODULE_LICENSE() in drivers/dma/xilinx/zynqmp_dma.o
see include/linux/module.h for more information
This adds a "GPL" license, matching the "version 2 or later" information in
the comment at the start of the file.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use for_each_matching_node() macro instead of open coding it.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Initial support for interleaved transfer with sDMA.
The implementation only supports DMA_MEM_TO_MEM and frame_size must be 1.
sDMA needs to be configured for double indexing when ICG is needed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Sparse warns:
drivers/dma/ioat/init.c:1215:6: warning: symbol 'ioat_resume' was not declared. Should it be static?
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Implement the function which waits until a dma channel is stopped, to
have a synchronization point.
This also protects pxad_remove() from races, such as spurious
interrupts while removing the driver, because:
- as long as there is one dma channel requested, ie. dma_chan_get() but
no dma_chan_put(), the try_module_get() in dma_chan_get() prevents
the remove() routine from running
- when the last channel is released, ie. the last dma_chan_put() is
called, if there is a running DMA, pxad_synchronize() is called
- pxad_synchronize() waits for the channel to stop, which in turn
ensures on the pxa architecture that the interrupt cannot fire anymore
Reported-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
David reported:
[drivers/dma/imx-sdma.c:769]: (style) Variable 'emi_2_emi' is assigned a value that is never used
Since emi_2_emi is never used afterwards, remove this as well.
Reported-by: David Binderman <dcb314@hotmail.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
David reported:
drivers/dma/imx-sdma.c:1003]: (style) Same expression on both sides of '|='
ORing with itself yields the same result, so remove this.
Reported-by: David Binderman <dcb314@hotmail.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In cppi41_dma_prep_slave_sg() the variable num is initialized to zero
but never updated, and a BUG_ON() checks whether it is greater than
zero, which will always be false.
Remove the bogus check and the variable.
Reported-by: David Binderman <linuxdev.baldrick@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Sinan Kaya <okaya@codeaurora.org>
debugfs file operations owner is set by core, so remove
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Barry Song <Baohua.Song@csr.com>
Drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jassi Brar <jassisinghbrar@gmail.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Mario Six <mario.six@gdsys.cc>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Sparse complains:
drivers/dma/mmp_tdma.c:407:22: warning: symbol 'mmp_tdma_alloc_descriptor' was not declared. Should it be static?
drivers/dma/mmp_tdma.c:595:17: warning: symbol 'mmp_tdma_xlate' was not declared. Should it be static?
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Qiao Zhou <zhouqiao@marvell.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jingchang Lu <b35083@freescale.com>
Cc: Peter Griffin <peter.griffin@linaro.org>
Drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
size_t should be printed with %zu, not %lu as the driver did, so fix
these warnings:
drivers/dma/fsl_raid.c: In function 'fsl_re_prep_dma_genq':
drivers/dma/fsl_raid.c:341:4: warning: format '%lu' expects argument of type
'long unsigned int', but argument 3 has type 'size_t' [-Wformat=]
len, FSL_RE_MAX_DATA_LEN);
^
drivers/dma/fsl_raid.c: In function 'fsl_re_prep_dma_pq':
drivers/dma/fsl_raid.c:428:4: warning: format '%lu' expects argument of type
'long unsigned int', but argument 3 has type 'size_t' [-Wformat=]
len, FSL_RE_MAX_DATA_LEN);
^
drivers/dma/fsl_raid.c: In function 'fsl_re_prep_dma_memcpy':
drivers/dma/fsl_raid.c:549:4: warning: format '%lu' expects argument of type
'long unsigned int', but argument 3 has type 'size_t' [-Wformat=]
len, FSL_RE_MAX_DATA_LEN);
^
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Xuelin Shi <xuelin.shi@freescale.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jingchang Lu <b35083@freescale.com>
Cc: Peter Griffin <peter.griffin@linaro.org>
Drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Sparse complains:
drivers/dma/coh901318.c:269:30: warning: symbol 'chan_config' was not declared. Should it be static?
drivers/dma/coh901318.c:2806:12: warning: symbol 'coh901318_init' was not declared. Should it be static?
drivers/dma/coh901318.c:2812:13: warning: symbol 'coh901318_exit' was not declared. Should it be static?
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed, and the irq
should be quiesced before remove completes.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
This patch updates the dmatest client to support scatter-gather dma
mode.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the handler ignores the channel 0 interrupt and thus doesn't ack
it properly. This is done in order to allow sdma_run_channel0() to poll
on the irq status bit, as this function may be called in atomic context,
but needs to know when the channel has finished.
This works mostly, as the polling happens under a spinlock, disabling IRQs
on the local CPU, leaving only a very slight race window for a spurious
IRQ to happen if the handler is executed on another CPU in an SMP system.
Still this is clearly suboptimal.
This behavior turns into a real problem on an RT system, where the spinlock
doesn't disable IRQs on the local CPU. Not acking the IRQ in the handler
in such a setup is very likely to drown the CPU in an IRQ storm, leaving
it unable to make any progress in the polling loop, leading to the IRQ
never being acked.
Fix this by properly acknowledging the channel 0 IRQ in the handler.
As the IRQ status bit can no longer be used to poll for the channel
completion, switch over to using the SDMA_H_STATSTOP register for this
purpose, where bit 0 is cleared by the hardware when the channel is done.
Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In case of error, the function platform_device_register_full()
returns ERR_PTR() and never returns NULL. The NULL test in the
return value check should be replaced with IS_ERR().
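The corrected check, sketched with placeholder names:

  pdev = platform_device_register_full(&info);
  if (IS_ERR(pdev))
          return PTR_ERR(pdev);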
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The new mv_xor_v2 driver supports the XOR engines found in the 64-bits
ARM from Marvell of the Armada 7K and Armada 8K family. This XOR
engine is a completely new hardware block, entirely different from the
one used on previous Marvell Armada platforms, which use the existing
mv_xor driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The newly added zynqmp_dma driver produces a warning on 32-bit architectures
when dma_addr_t is 64-bit wide:
drivers/dma/xilinx/zynqmp_dma.c: In function 'zynqmp_dma_config_sg_ll_desc':
drivers/dma/xilinx/zynqmp_dma.c:321:9: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
^
drivers/dma/xilinx/zynqmp_dma.c:321:29: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
This changes the cast to the more appropriate uintptr_t.
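Illustratively (desc_pool_p is a placeholder for the pool's DMA
handle), the offset arithmetic becomes:

  dma_addr_t desc_dma = chan->desc_pool_p +
                        ((uintptr_t)sdesc - (uintptr_t)chan->desc_pool_v);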
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In cyclic DMA mode we need to link the tail BD segment with the head
BD segment in order to process BDs cyclically.
The current driver does this only for the tx channel; the same needs to
be done for the rx channel case as well.
This patch fixes that.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add the driver for the ZynqMP DMA engine used in the Zynq
UltraScale+ MPSoC. This dma controller supports memory-to-memory and
I/O-to-I/O buffer transfers.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cookies corresponding to pending transfers have a residue value equal to
the full size of the corresponding descriptor. The driver miscomputes
that and uses the size of the active descriptor instead. Fix it.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
[geert: Also check desc.active list]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The running descriptor pointer is set to NULL upon freeing resources.
Otherwise, rcar_dmac_issue_pending() might not start new transfers.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The documentation states one should make sure both DE and TE are cleared
before starting a transaction. This patch extends the current warning to
look at both DE and TE.
Based on previous work from Muhammad Hamza Farooq.
Suggested-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The hardware might have completed the transfer, but the interrupt
handler might not have had a chance to run yet. If
rcar_dmac_chan_get_residue(), which reads HW registers, finds that
there is no residue, return DMA_COMPLETE.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
[Niklas: add explanation in commit message]
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current driver assumes that the child node channel name is either
"xlnx,axi-vdma-mm2s-channel" or "xlnx,axi-vdma-s2mm-channel", which is
confusing for users of AXI DMA and CDMA.
This patch fixes the issue by using different channel names for the
AXI DMA and AXI CDMA child nodes.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Support for AXI DMA and CDMA has been added to the existing vdma
driver, so the driver is no longer VDMA specific.
This patch renames the driver and DT binding doc to xilinx_dma and
updates the Kconfig description for all the DMAs.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the AXI DMA multi-channel dma mode.
Multichannel mode enables the DMA to connect to multiple masters and
slaves on the streaming side. In multichannel mode AXI DMA supports 2D
transfers.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bam_dma driver gained runtime PM support, but that causes build
warnings whenever CONFIG_PM is disabled:
drivers/dma/qcom/bam_dma.c:1324:12: error: 'bam_dma_runtime_resume' defined but not used [-Werror=unused-function]
static int bam_dma_runtime_resume(struct device *dev)
^~~~~~~~~~~~~~~~~~~~~~
drivers/dma/qcom/bam_dma.c:1315:12: error: 'bam_dma_runtime_suspend' defined but not used [-Werror=unused-function]
static int bam_dma_runtime_suspend(struct device *dev)
This removes the incomplete #ifdef guard and instead marks all
four PM functions as __maybe_unused, which avoids this kind of
warning.
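The resulting shape, sketched with placeholder bodies:

  static int __maybe_unused bam_dma_runtime_suspend(struct device *dev)
  {
          return 0;       /* placeholder body */
  }

  static int __maybe_unused bam_dma_runtime_resume(struct device *dev)
  {
          return 0;       /* placeholder body */
  }

  static const struct dev_pm_ops bam_dma_pm_ops = {
          SET_RUNTIME_PM_OPS(bam_dma_runtime_suspend,
                             bam_dma_runtime_resume, NULL)
  };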
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 7d2545599f ("dmaengine: qcom-bam-dma: Add pm_runtime support")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When building this driver on arm64, we get a harmless type
mismatch warning:
drivers/dma/bcm2835-dma.c: In function 'bcm2835_dma_fill_cb_chain_with_sg':
include/linux/kernel.h:743:17: warning: comparison of distinct pointer types lacks a cast
(void) (&_min1 == &_min2); \
^
drivers/dma/bcm2835-dma.c:409:21: note: in expansion of macro 'min'
cb->cb->length = min(len, max_len);
This changes the type of the 'len' variable to size_t, which
avoids the problem.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 388cc7a281 ("dmaengine: bcm2835: add slave_sg support to bcm2835-dma")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Return IRQ_NONE in the interrupt handler when it is called but no IRQs are
pending. This allows the system to recover in case of an interrupt storm
e.g. due to a wrong interrupt configuration setup.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Propagate errors returned by platform_get_irq() to the driver core. This
will enable proper probe deferring for the driver in case the IRQ provider
has not been registered yet.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add MODULE_DEVICE_TABLE() for the axi-dmac driver. This allows the driver
to be loaded on demand when built as a module.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Adds pm_runtime support for BAM DMA so that clock is enabled only
when there is a transaction going on to help save power.
Signed-off-by: Pramod Gurav <pramod.gurav@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 71f7e6cc55 ('dmaengine: tegra20-apb-dma: Only calculate residue
if txstate exists') changed the tegra_dma_tx_status() function to only
calculate the residue if there is a valid 'txstate' pointer for storing
the residue. Although this makes sense, this changed the behaviour of
the function tegra_dma_tx_status() such that if the pointer 'txstate' is
not valid, then we will return whatever state is returned by
dma_cookie_status() and will no longer look up the DMA descriptor and return
its state.
Please note that dma_cookie_status() will either return DMA_COMPLETE or
DMA_IN_PROGRESS. However, if dma_cookie_status() returns DMA_IN_PROGRESS
the actual status could be DMA_ERROR which will only be seen from
checking the descriptor status. Therefore, even if 'txstate' is not
valid, still check to see if there is a valid descriptor for the cookie
in question and if so return the descriptor state. Finally, ensure the
residue is still not calculated if the 'txstate' is not valid.
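A simplified sketch of the resulting flow (descriptor lookup and residue math elided, not taken verbatim from the patch):
static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
		dma_cookie_t cookie, struct dma_tx_state *txstate)
{
	enum dma_status ret = dma_cookie_status(dc, cookie, txstate);

	if (ret == DMA_COMPLETE)
		return ret;

	/* look the descriptor up by cookie even when txstate is NULL,
	 * so a DMA_ERROR recorded in it is not lost:
	 *   if (found) ret = dma_desc->dma_status;
	 * only compute and store the residue when there is a txstate:
	 *   if (txstate && found) dma_set_residue(txstate, residue);
	 */
	return ret;
}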
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The calculation of the DMA residue for the Tegra APB DMA is duplicated
in two places in the tegra_dma_tx_status() function. Remove this
duplicated code by moving calculation to the end of the function and
only calculating if we found a valid descriptor.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Correct the grammar in the debug message when no descriptor is found.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
mbr_ds is an integer, don't use %pad to print it.
Fixes: commit 268914f4e7 ("dmaengine: at_xdmac: use %pad format string for dma_addr_t")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The omap_dmaxbar_init() function is not exported or declared outside
the driver, so make it static to fix the following sparse warning:
drivers/dma/ti-dma-crossbar.c:455:5: warning: symbol 'omap_dmaxbar_init' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To allow other code to safely read DMA Channel Status Register (where
the register attribute for Channel Error, Descriptor Time Out &
Descriptor Done fields are read-clear), export hsu_dma_get_status().
hsu_dma_irq() is renamed to hsu_dma_do_irq() and requires Status
Register value to be passed in.
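A hedged sketch of the intended call flow from the UART side (exact signatures and return-value conventions assumed):
	u32 status;
	int ret;

	ret = hsu_dma_get_status(chip, nr, &status);	/* reads the read-clear HSU_CH_SR once */
	if (ret < 0)
		return ret;

	/* serial-side handling based on 'status' may happen here */

	hsu_dma_do_irq(chip, nr, status);	/* DMA side reuses the same snapshot */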
Signed-off-by: Chuah, Kim Tatt <kim.tatt.chuah@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If kzalloc() fails it will issue its own error message including
a dump_stack(). So remove the site-specific error messages.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point calculating the residue if there is
no txstate to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in calculating the residue if state does not
exist to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point calculating the residue if there is
no txstate to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Doing so saves a few lines of code in the driver.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in calculating the residue if there is no
txstate to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It is useful to print the error code as part of the error
message.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently fsl-edma doesn't clk_disable_unprepare()
its clocks on error conditions. This patch adds a
fsl_disable_clocks helper for this, and also only
disables clocks which were enabled if encountering
an error whilst enabling clocks.
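A minimal sketch of the helper and its use on the enable path (struct, array, and count names assumed):
static void fsl_disable_clocks(struct fsl_edma_engine *fsl_edma, int nr_clocks)
{
	int i;

	for (i = 0; i < nr_clocks; i++)
		clk_disable_unprepare(fsl_edma->muxclk[i]);
}

	/* enable path: unwind only the clocks that were actually enabled */
	for (i = 0; i < DMAMUX_NR; i++) {
		ret = clk_prepare_enable(fsl_edma->muxclk[i]);
		if (ret) {
			fsl_disable_clocks(fsl_edma, i);
			return ret;
		}
	}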
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The AXI CDMA is a soft ip, which can be programmed to support
32 bit addressing or greater than 32 bit addressing.
When the AXI CDMA IP is configured for a 32-bit address space
in simple DMA mode, the source/destination buffer address is
specified by a single register (18h for the source buffer address and
20h for the destination buffer address). When configured in SG mode,
the current descriptor and tail descriptor are each specified by a
single register (08h for curdesc, 10h for tail desc).
When the AXI CDMA core is configured for an address space greater
than 32 bits, each buffer address or descriptor address is specified by
a combination of two registers.
The first register specifies the LSB 32 bits of address,
while the next register specifies the MSB 32 bits of address.
For example, 08h will specify the LSB 32 bits while 0Ch will
specify the MSB 32 bits of the first start address.
So we need to program two registers at a time.
This patch adds the 64 bit addressing support to the axicdma
IP in the driver.
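An illustrative helper for the two-register programming described above (field and offset names assumed):
static inline void dma_ctrl_write64(struct xilinx_dma_chan *chan, u32 reg,
				    u64 value)
{
	/* LSB first, then MSB in the next 32-bit register */
	writel(lower_32_bits(value), chan->xdev->regs + chan->ctrl_offset + reg);
	writel(upper_32_bits(value),
	       chan->xdev->regs + chan->ctrl_offset + reg + 4);
}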
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The AXI DMA is a soft ip, which can be programmed to support
32 bit addressing or greater than 32 bit addressing.
When the AXI DMA IP is configured for a 32-bit address space
in simple DMA mode, the buffer address is specified by a single register
(18h for the MM2S channel and 48h for the S2MM channel). When configured
in SG mode, the current descriptor and tail descriptor are specified by a
single register (08h for curdesc and 10h for tail desc for the MM2S channel,
and 38h for curdesc and 40h for tail desc for S2MM).
When the AXI DMA core is configured for an address space greater
than 32 bits, each buffer address or descriptor address is specified by
a combination of two registers.
The first register specifies the LSB 32 bits of address,
while the next register specifies the MSB 32 bits of address.
For example, 48h will specify the LSB 32 bits while 4Ch will
specify the MSB 32 bits of the first start address.
So we need to program two registers at a time.
This patch adds the 64 bit addressing support for the axidma
IP in the driver.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are some places where whitespace is used in very funky ways. Fix
the most serious ones to make the code easier on the eye.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The newly added xilinx_dma_prep_dma_cyclic function sometimes causes
a gcc warning about the use of the 'segment' variable in case
we never run into the inner loop of the function:
dma/xilinx/xilinx_vdma.c: In function 'xilinx_dma_prep_dma_cyclic':
dma/xilinx/xilinx_vdma.c:1808:23: error: 'segment' may be used uninitialized in this function [-Werror=maybe-uninitialized]
segment->hw.control |= XILINX_DMA_BD_SOP;
This can only happen if the period len is zero (which would cause other
problems earlier), or if the buffer is shorter than a period. Neither
of them should ever happen, but by adding an explicit check for these two
cases, we can abort in a more controlled way, and the compiler is
able to see that we never use uninitialized data.
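The added guard amounts to something like this sketch (variable names simplified):
	if (!period_len)
		return NULL;

	if (buf_len < period_len)
		return NULL;

	num_periods = buf_len / period_len;
	/* every period now yields at least one segment, so 'segment'
	 * is always assigned before XILINX_DMA_BD_SOP is set on it */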
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below compilation warning.
drivers/dma/xilinx/xilinx_vdma.c: In function 'xilinx_dma_prep_dma_cyclic':
drivers/dma/xilinx/xilinx_vdma.c:1808:23: warning: 'segment' may be used
uninitialized in this function [-Wmaybe-uninitialized]
segment->hw.control |= XILINX_DMA_BD_SOP;
The start of packet (SOP) should be set on the first segment in the desc
chain, not on the last segment of the desc chain.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bcm2835_dma_prep_dma_memcpy() function is not exported
outside the driver, so make it static to avoid the following
warning:
drivers/dma/bcm2835-dma.c:616:32: warning: symbol 'bcm2835_dma_prep_dma_memcpy' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The at_xdmac_init_used_desc() and at_xdmac_prep_dma_memset()
functions are not exported outside the driver, so make them
static to avoid the following warnings:
drivers/dma/at_xdmac.c:459:6: warning: symbol 'at_xdmac_init_used_desc' was not declared. Should it be static?
drivers/dma/at_xdmac.c:1205:32: warning: symbol 'at_xdmac_prep_dma_memset' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The sirfsoc_dmadata structs are not used outside the driver, so
remove build warnings by making them static. Fixes:
drivers/dma/sirf-dma.c:1129:24: warning: symbol 'sirfsoc_dmadata_a6' was not declared. Should it be static?
drivers/dma/sirf-dma.c:1134:24: warning: symbol 'sirfsoc_dmadata_a7v1' was not declared. Should it be static?
drivers/dma/sirf-dma.c:1139:24: warning: symbol 'sirfsoc_dmadata_a7v2' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix warning due to d40_width_to_bits() not being used outside
this file. Fixes:
drivers/dma/ste_dma40_ll.c:13:4: warning: symbol 'd40_width_to_bits' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver limits the physical number of paRAM slots to be used by channels.
If the transfer needs more slots (more SGs) then the transfer is broken up
to smaller chunks. When the chunk is finished the driver will rewrite the
physical slots and continue the transfer. This setup can take some
time and we might miss DMA events. If the intermediate set completion
uses early completion (the interrupt happens when the last slot is
issued to the TPTC and not when the transfer is finished by the TPTC), we
will have a bit more time to update the paRAM slots and are less likely to
miss events.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Upon booting, I occasionally spotted some BUGs triggered by the internal
DMA test routine executed upon driver probing. This was detected by
SLUB_DEBUG ("Freechain corrupt" or "Redzone overwritten"). Tracking
this down located a problem in passing 0 as the offset in dma_map_page(),
as kmalloc, especially when used with SLUB_DEBUG, may return a
non-page-aligned address.
This patch fixes this issue by passing the correct offset in
dma_map_page().
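The corrected mapping, sketched for a kmalloc()'ed buffer that need not be page aligned ('dev', 'src', and 'len' are placeholders):
	dma_addr_t dma_src;

	dma_src = dma_map_page(dev, virt_to_page(src), offset_in_page(src),
			       len, DMA_TO_DEVICE);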
Tested on a custom Armada XP board.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove the space before the "err_free_dma:" label in mv_xor_channel_add().
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_pool_zalloc() combines dma_pool_alloc() and a memset to 0;
this patch updates the driver to use dma_pool_zalloc().
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for AXI DMA cyclic dma mode.
In cyclic mode, DMA fetches and processes the same
BDs without interruption. The DMA continues to fetch and process
until it is stopped or reset.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Due to the way CUBC register is updated, a double flush is needed to
compute an accurate residue. The first flush aims to get data out of the DMA
FIFO and the second one ensures that we won't report data which is not yet
in memory.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
An unexpected value of CUBC can lead to a corrupted residue. A more
complex sequence is needed to detect an inaccurate value for NCA or CUBC.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Having descriptors aligned on 64 bits allows updating CNDA and CUBC in an
atomic way.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For each descriptor, in addition to the memory used by the descriptors
structure itself, the driver allocates a list of chunks as well as a
buffer for hardware descriptors. Descriptors themselves are preallocated,
and allocation of the chunks and buffer is performed the first time the
descriptor is used. The memory isn't freed when the transfer is completed,
as the chunks and buffer will be needed again when the descriptor is
reused internally, so the driver keeps the memory around.
If only a few descriptors are used concurrently, the current
list_add_tail() implementation will result in all preallocated descriptors
being used before going back to the first one, and will thus allocate
chunks and a buffer for all preallocated descriptors. Using list_add()
will put the complete descriptor at the head of the list of available
descriptors, so the next transfer will be more likely to reuse a
descriptor that already has associated memory instead of one that has
never been used before.
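The gist of the change, sketched (lock and list member names assumed):
	spin_lock_irqsave(&chan->lock, flags);
	list_add(&desc->node, &chan->desc.free);	/* was list_add_tail() */
	spin_unlock_irqrestore(&chan->lock, flags);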
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Most users of IS_ERR_VALUE() in the kernel are wrong, as they
pass an 'int' into a function that takes an 'unsigned long'
argument. This happens to work because the type is sign-extended
on 64-bit architectures before it gets converted into an
unsigned type.
However, anything that passes an 'unsigned short' or 'unsigned int'
argument into IS_ERR_VALUE() is guaranteed to be broken, as are
8-bit integers and types that are wider than 'unsigned long'.
Andrzej Hajda has already fixed a lot of the worst abusers that
were causing actual bugs, but it would be nice to prevent any
users that are not passing 'unsigned long' arguments.
This patch changes all users of IS_ERR_VALUE() that I could find
on 32-bit ARM randconfig builds and x86 allmodconfig. For the
moment, this doesn't change the definition of IS_ERR_VALUE()
because there are probably still architecture specific users
elsewhere.
Almost all the warnings I got are for files that are better off
using 'if (err)' or 'if (err < 0)'.
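For instance, a hedged illustration of that preferred pattern with a plain int error code:
	int err = do_something();	/* placeholder call returning 0 or -Exxx */

	if (err < 0)			/* instead of IS_ERR_VALUE(err) */
		return err;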
The only legitimate user I could find that we get a warning for
is the (32-bit only) freescale fman driver, so I did not remove
the IS_ERR_VALUE() there but changed the type to 'unsigned long'.
For 9pfs, I just worked around one user whose calling conventions
are so obscure that I did not dare change the behavior.
I was using this definition for testing:
#define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))
which ends up making all 16-bit or wider types work correctly with
the most plausible interpretation of what IS_ERR_VALUE() was supposed
to return according to its users, but also causes a compile-time
warning for any users that do not pass an 'unsigned long' argument.
I suggested this approach earlier this year, but back then we ended
up deciding to just fix the users that are obviously broken. After
the initial warning that caused me to get involved in the discussion
(fs/gfs2/dir.c) showed up again in the mainline kernel, Linus
asked me to send the whole thing again.
[ Updated the 9p parts as per Al Viro - Linus ]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.org/lkml/2016/1/7/363
Link: https://lkml.org/lkml/2016/5/27/486
Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> # For nvmem part
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This time round the update brings in following changes:
- New tegra driver for ADMA device
- Support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI Central
Direct Memory Access Engine and few updates to this driver.
- New cyclic capability to sun6i and few updates.
- Slave-sg support in bcm2835.
- Updates to many drivers like designware, hsu, mv_xor, pxa, edma,
qcom_hidma & bam.
Merge tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time round the update brings in following changes:
- new tegra driver for ADMA device
- support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI
Central Direct Memory Access Engine and few updates to this driver
- new cyclic capability to sun6i and few updates
- slave-sg support in bcm2835
- updates to many drivers like designware, hsu, mv_xor, pxa, edma,
qcom_hidma & bam"
* tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (84 commits)
dmaengine: ioatdma: disable relaxed ordering for ioatdma
dmaengine: of_dma: approximate an average distribution
dmaengine: core: Use IS_ENABLED() instead of checking for built-in or module
dmaengine: edma: Re-evaluate errors when ccerr is triggered w/o error event
dmaengine: qcom_hidma: add support for object hierarchy
dmaengine: qcom_hidma: add debugfs hooks
dmaengine: qcom_hidma: implement lower level hardware interface
dmaengine: vdma: Add clock support
Documentation: DT: vdma: Add clock support for dmas
dmaengine: vdma: Add config structure to differentiate dmas
MAINTAINERS: Update Tegra DMA maintainers
dmaengine: tegra-adma: Add support for Tegra210 ADMA
Documentation: DT: Add binding documentation for NVIDIA ADMA
dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
Documentation: DT: vdma: update binding doc for AXI CDMA
dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
Documentation: DT: vdma: update binding doc for AXI DMA
dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
dmaengine: slave means at least one of DMA_SLAVE, DMA_CYCLIC
dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC
...
ioatdma by default is in snoop mode. Relaxed ordering according to the spec
does not do anything in snoop mode. However, it causes a hang or significant
performance degradation when tested with NTB. Disable it in the driver since
some BIOSes do not configure it correctly.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the following DT description would result in dmac0 always
being tried first and dmac1 second if dmac0 was unavailable. This
results in heavier use of dmac0 than of dmac1. This patch adds an
approximate average distribution over the two nodes, lessening the load
on any one of them.
i2c6: i2c@e60b0000 {
...
dmas = <&dmac0 0x77>, <&dmac0 0x78>,
<&dmac1 0x77>, <&dmac1 0x78>;
dma-names = "tx", "rx", "tx", "rx";
...
};
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The IS_ENABLED() macro checks if a Kconfig symbol has been enabled either
built-in or as a module, use that macro instead of open coding the same.
Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the ccerr handler is called but the error registers indicate no error
events, we need to command the eDMA to re-evaluate the errors. Otherwise we
can receive a flood of error interrupts.
Reported-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In order to create a relationship model between the channels and the
management object, we are adding support for object hierarchy to the
drivers. This patch simplifies the userspace application development.
We will not have to traverse different firmware paths based on device
tree or ACPI based kernels.
No matter what flavor of kernel is used, objects will be represented as
platform devices.
The new layout is as follows:
hidmam_10: hidma-mgmt@0x5A000000 {
compatible = "qcom,hidma-mgmt-1.0";
...
hidma_10: hidma@0x5a010000 {
compatible = "qcom,hidma-1.0";
...
}
}
The hidma_mgmt_init detects each instance of the hidma-mgmt-1.0 objects
in device tree and calls into the channel driver to create platform devices
for each child of the management object.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add debugfs hooks for debugging the execution behavior of the DMA
channel. The debugfs hooks get initialized by the probe function and
uninitialized by the remove function.
A stats file is created in debugfs. The stats file will show the
information about each HIDMA channel as well as each asynchronous job
queued and completed at a given time.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch implements the hardware hooks for the HIDMA channel driver.
The main functions of interest are:
- hidma_ll_init
- hidma_ll_request
- hidma_ll_queue_request
- hidma_ll_hw_start
The OS layer calls the hidma_ll_init function during probe to set up the
hardware. At this moment, the number of supported descriptors is also
given. On each request, a descriptor is allocated from the free pool and
filled in with the transfer parameters. Multiple requests can be queued
into the hardware via the OS interface. When the client is ready for the
requests to be executed, the start method is called.
Completions are delivered via callbacks from a tasklet.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Added basic clock support for the AXI DMAs.
The clocks are requested at probe and released at remove.
Reviewed-by: Shubhrajyoti Datta <shubhraj@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds a config structure in the driver to differentiate
the AXI DMAs and to add more features (clock support, etc.) to these DMAs.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for the Tegra210 Audio DMA controller that is used for
transferring data between system memory and the Audio sub-system.
The driver only supports cyclic transfers because this is being solely
used for audio.
This driver is based upon the work by Dara Ramesh <dramesh@nvidia.com>.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the AXI Central Direct Memory Access
(AXI CDMA) core to the existing vdma driver. AXI CDMA is a
soft Xilinx IP core that provides high-bandwidth
Direct Memory Access (DMA) between a memory-mapped
source address and a memory-mapped destination address.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the AXI Direct Memory Access (AXI DMA)
core in the existing vdma driver. The AXI DMA core is a
soft Xilinx IP core that provides high-bandwidth
direct memory access between memory and AXI4-Stream
type target peripherals.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch renames the xilinx_vdma_ prefix to xilinx_dma
for the APIs and masks that will be shared between the three DMA
IP cores.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When checking for capabilities, recognize slave support by either the
DMA_SLAVE or DMA_CYCLIC bit being set. If we don't do that, the user can't
get normally working DMA support for engines that don't have one of the
mentioned bits set.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Armada 3700 SoC uses the mv_xor driver but no longer selects the
PLAT_ORION symbol. This commit extends the dependency of the mv_xor
driver to the more modern SoCs that are only compatible with ARCH_MVEBU,
which allows using it with the Armada 3700 SoC.
At the same time it also adds the COMPILE_TEST dependency, allowing wider
test coverage.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Armada 3700 SoC comprises a single XOR engine compliant with the ones
used in older Marvell SoCs like Armada XP or 38x. The only thing that needs
modification is the Mbus configuration, which has to be done on two
levels: global and in-device. The first one is inherited from the
bootloader; the latter can be opened in a default way, leaving
arbitration to the bus controller. Hence a filled mbus_dram_target_info
structure is not needed.
Patch "dmaengine: mv_xor: optimize performance by using a subset
of the XOR channels" introduced limitation for using XOR engines and
channels vs number of available CPU's. Those constraints do not however
fit Armada 3700 architecture with two possible CPU's and single,
dual-channel engine. Hence in this commit an adjustment for setting
maximum available channels is added.
This patch enables XOR access to DRAM by opening default window to 4GB
space with specific attribute.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the main difference between the legacy XOR engine and the newer
one is the way the engine modes are set up (either in the descriptor or
through the controller registers). In order to take the new generation of
the XOR engine for the ARM64 SoC into account, we need to identify the
engines by type, and then select the engine setup depending on that type.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix two warnings which appear when building for 64 bits target:
drivers/dma/mv_xor.c: In function ‘mv_xor_prep_dma_xor’:
drivers/dma/mv_xor.c:480:3: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 6 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
"%s src_cnt: %d len: %u dest %pad flags: %ld\n",
^
drivers/dma/mv_xor.c: In function ‘mv_xor_probe’:
drivers/dma/mv_xor.c:1223:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
op_in_desc = (int)of_id->data;
^
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The transfer direction is now checked in set_config.
There is no need to check it twice.
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some DMA clients, such as audio, don't set the maxburst size and bus width
on the memory side when starting DMA transfers.
This patch prevents such transfers from being aborted by providing system
default values for the missing ones.
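A sketch of the idea; the concrete default values here are assumptions, not taken from the patch:
	if (!sconfig->src_maxburst)
		sconfig->src_maxburst = 8;
	if (sconfig->src_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
		sconfig->src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;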
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We pass struct dw_dma_chip to dw_dma_probe() anyway, thus we may use it to
pass a platform data as well.
While here, constify the source of the platform data.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Keep the entire platform data in the struct dw_dma.
It makes the driver a bit cleaner.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Several changes are done here:
- Convert the property to be in bytes.
Besides being common practice for such a property, a value in bytes is much
more convenient to handle than the encoded one.
- Rename data_width to data-width in the device tree bindings.
The change keeps support for the old format as well, just in case someone
uses a newer kernel with an old device tree blob.
- While here, replace dwc_fast_ffs() by __ffs().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A value of nr_masters equal to 0 is invalid since this DMA controller has to
have at least one master.
Check this before we proceed with the rest of the properties.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Initialize the default channel slave_id (req_sel) to an invalid id
(i.e. max supported slave id + 1) to avoid overwriting the slave_id
during tegra_dma_slave_config() with client data if the slave_id
is not initialized through DT.
Signed-off-by: Shardar Shariff Md <smohammed@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix a typo in the warning message printed when there is no "interrupt-names"
property defined in the device tree and legacy mode is used.
Also add a newline to the end of the message.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Checking the DMA config before setting the lli list avoids doing tests
inside the setting loop.
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the commit 1f9cd915b6 ("dmaengine: sun6i: Fix memcpy operation"),
the signed values returned by convert_burst() and convert_buswidth()
were stored in an unsigned variable.
Then, these values were considered as errors when non-zero.
As a result, DMA transfers were rejected when the burst or buswidth
had values different from 1, such as 8 for the burst or 4 for the bus width.
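An illustration of the bug pattern (simplified, 'maxburst' is a placeholder):
	s8 burst;

	burst = convert_burst(maxburst);
	if (burst < 0)
		return -EINVAL;
	/* previously: unsigned storage plus 'if (burst)' treated every
	 * legal non-zero encoding as an error */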
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The IRQ register number is computed, but this number was not used
and the register used was the one indexed by the channel index instead.
As a result, only the first DMA channel was working.
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the current state, upon a bus error the driver will spin endlessly,
relaunching the last tx, which will fail again and again:
- a bus error happens
- pxad_chan_handler() is called
- as PXA_DCSR_STOPSTATE is true, the last non-terminated transaction is
launched, which is the one triggering the bus error, as it didn't
terminate
- moreover, the STOP interrupt fires anew, as STOPIRQEN is still
active
Break this logic by stopping the automatic relaunch of a dma channel
upon a bus error, even if there are still pending issued requests on it.
As dma_cookie_status() seems unable to return DMA_ERROR in its current
form, ie. there seems no way to mark a DMA_ERROR on a per-async-tx
basis, it is chosen in this patch to remember on the channel which
transaction failed, and report it in pxad_tx_status().
It's a bit misleading because if T1, T2, T3 and T4 were queued, and T1
was completed while T2 causes a bus error, the status of T3 and T4 will
be reported as DMA_IN_PROGRESS, while the channel is actually stopped.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch changes the driver to allocate DMA descriptors when
needed. This stops memory resources from being wasted sitting idle
in the free_list structure when the device doesn't need them.
It also solves the problem that a driver has to guess in advance
how many descriptors it needs to allocate.
Currently, the dma engine will just fail when put under load by
sata_dwc_460ex.
Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It seems that the define does not have an accurate name, which causes
confusion while reading the code. The more accurate name would be
BAM_FIFO_SIZE.
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pipe fifo size register must instruct the BAM hw
how many hw descriptors can be pushed to the fifo. Currently
we instruct the hw with 32 KBytes but wrap the tail in
bam_start_dma in BAM_P_EVNT_REG on 4095, i.e. 32760. This
leads to stalled transactions when the tail wraps.
Fix this by using the correct fifo size in the BAM_P_FIFO_SIZES
register, i.e. 32K - 8.
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some of the peripherals have a BAM which is controlled by a remote
processor, thus the bam dma driver must avoid register writes
which initialise the BAM hw block. Those registers are protected
by the xPU block and any writes to them will lead to a secure
violation and system reboot.
Add the controlled_remotely flag in the bam driver to avoid the
not-permitted register writes in the bam_init function.
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Pramod Gurav <gpramod@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently we write the BAM_IRQ_CLR register with zero even when no
BAM IRQ occurred. This write has some bad side effects when the
BAM instance is for the crypto engine. In the case of the crypto engine,
some of the BAM registers are xPU protected and cannot be
controlled by the driver.
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Pramod Gurav <gpramod@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use platform_get_irq_byname to allow for correct mapping of
interrupts to dma channels.
The currently implemented device tree is unfortunately
based on the wrong assumption that each dma-channel
has its own dma-irq, but dma-irq 11 is handling
dma-channels 11-14 and dma-irq 12 is actually a "catch all"
interrupt.
So here we use the byname variant and require that interrupts
are explicitly named via the interrupt-names property in the
device tree.
The use of shared interrupts is also implemented.
As a side-effect this means we can now use dma channels 12, 13 and 14
in a correct manner - also testing shows that only using
channels 11 to 14 for spi and i2s works perfectly (when playing
some video).
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Also added a check for an error condition in bcm2835_dma_create_cb_chain
that showed up during development of this patch.
Tested using dmatest for all enabled channels.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add slave_sg support to bcm2835-dma using shared allocation
code for bcm2835_desc and DMA-control blocks already used by
dma_cyclic.
Note that bcm2835_dma_callback had to get modified to support
both modes of operation (cyclic and non-cyclic).
Tested using:
* Hifiberry I2S card (using cyclic DMA)
* fb_st7735r SPI-framebuffer (using slave_sg DMA via spi-bcm2835)
playing BigBuckBunny for audio and video.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bcm2835 dma system has 2 basic types of dma-channels:
* "normal" channels
* "lite" channels
Lite channels are limited in several aspects:
* internal data-structure is 128 bit (not 256)
* no support for BCM2835_DMA_TDMODE (2D)
* DMA length register is limited to 16 bit,
so 0-65535 (not 0-65536 as mentioned in the official datasheet)
* BCM2835_DMA_S/D_IGNORE are not supported
The detection of the channel type is implemented by looking at
the LITE bit in the DEBUG register for each channel.
This allows automatic detection.
Based on this the maximum block size is set to (64K - 4) or to 1G
and this limit is honored during generation of control block
chains. The effect is that when a LITE channel is used more
control blocks are used to do the same transfer (compared
to a normal channel).
As there are several source/target DREQs that are 32 bit wide,
we need the transfer to be a multiple of 4, as anything else
would break the transfer.
This is why the limit of (64K - 4) was chosen over the
alternative of (64K - 4K).
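A sketch of how the limit selection can look (macro and field names assumed):
#define MAX_DMA_LEN		SZ_1G
#define MAX_LITE_DMA_LEN	(SZ_64K - 4)

	c->is_lite_channel = !!(readl(c->chan_base + BCM2835_DMA_DEBUG) &
				BCM2835_DMA_DEBUG_LITE);
	max_len = c->is_lite_channel ? MAX_LITE_DMA_LEN : MAX_DMA_LEN;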
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In preparation of adding slave_sg functionality this patch moves the
generation/allocation of bcm2835_desc and the building of
the corresponding DMA-control-block chain from bcm2835_dma_prep_dma_cyclic
into the newly created method bcm2835_dma_create_cb_chain.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In preparation to consolidating code we move the cyclic member
into the bcm_2835_desc structure.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add additional defines describing the DMA registers
as well as adding some more documentation to those registers.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The original patch contained 3 dma channels that were masked out.
These - as far as research and discussions show - are
artefacts remaining from the downstream legacy dma-api.
Right now downstream still includes a legacy api used only
in a single (downstream only) driver (bcm2708_fb) that requires
2D DMA for speedup (DMA-channel 0).
Formerly the sd-card support driver was also using this legacy
api (DMA-channel 2), but has since been moved over to use
dmaengine directly.
The DMA-channel 3 is already masked out in the devicetree in
the default property "brcm,dma-channel-mask = <0x7f35>;"
So we can remove the whole masking of DMA channels.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
bcm2835-dma supports residue reporting at burst level but didn't report
this via the residue_granularity field.
See also:
b015555327
for the downstream patch.
Signed-off-by: Matthias Reichl <hias@horus.com>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To be sure the cyclic transfers are already gone, we set cdesc to NULL. This
will prevent a double free.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Residue is a property of any active descriptor. A descriptor may be in a
different state, but residue is a feature of the active descriptor. Check if
the requested descriptor is active and return the proper residue value for it.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We already have a dedicated variable for flags, therefore there is no need to
create additional storage for that. Convert dwc->initialized to use dwc->flags.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We already have a dedicated variable for flags, therefore there is no need to
create additional storage for that. Convert dwc->paused to use dwc->flags.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code is fixed to satisfy the compiler, otherwise we get:
drivers/dma/dw/core.c: In function ‘dwc_handle_cyclic’:
drivers/dma/dw/core.c:568: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_tasklet’:
drivers/dma/dw/core.c:590: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_off’:
drivers/dma/dw/core.c:1103: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_cyclic_free’:
drivers/dma/dw/core.c:1469: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_probe’:
drivers/dma/dw/core.c:1574: warning: comparison between signed and unsigned
There is no functional change.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since struct dw_dma is allocated and regs member is assigned properly we can
use standard IO accessors to the DMA registers.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The datasheet requires that the LLP_[SD]_EN bits be cleared whenever
LLP.LOC is zero, i.e. in the last descriptor of a multi-block chain.
Make the driver do this.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The LMS field indicates from which master the descriptor is to be
read. This patch assumes this is always the same as the memory
side in a peripheral transfer which is true for all known systems.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the DMA controller uses a different byte order than the host CPU,
the hardware linked list descriptor fields need to be byte-swapped.
This patch makes the driver write these fields using the same byte
order it uses for mmio accesses to the DMA engine. I do not know
if this is guaranteed to always be correct.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On some architectures the DMA controller can have two masters connected to
different buses and thus access to memory is possible only through one and
to peripheral through the other.
This patch changes the src and dst master setting to match the direction
of the transfer.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The source and destination masters reflect the buses, or their layers, to
which the different devices can be connected. The patch changes the master
names to reflect which one is related to which, independently of the transfer
direction.
The outcome of the change is that the memory data width is now always limited
by the data width of the master which is dedicated to communicating with
memory.
The patch will not break anything since all current users have the same data
width for all masters. Though it would be nice to revisit avr32 platforms to
check what hardware topology is actually in use there. It seems that it has
one bus and two masters on it, as stated by Table 8-2; that's why everything
works independently of the master in use. The purpose of the subsequent patch
is to fix the driver for configurations with more than one bus.
The change is done on the assumption that src_master and dst_master reflect
a connection to the memory and peripheral respectively on avr32, and the
other way round on the rest.
Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit 8950052029 ("dmaengine: dw: apply both HS interfaces and remove
slave_id usage") cleaned up the code to avoid usage of the deprecated slave_id
member of the generic slave configuration.
Meanwhile it broke the master selection by removing an important call to
dwc_set_masters() in ->device_alloc_chan_resources(), which copied the masters
from the custom slave configuration to the internal channel structure.
Everything works until now since there is no customized connection of the
DesignWare DMA IP to the bus, i.e. one bus and one or more masters are in use.
The configurations where 2 masters are connected to different buses are not
working anymore. We are expecting one user of such a configuration and need
to select masters properly. Besides that, it is obviously a performance
regression since only one master is in use in a multi-master configuration.
Select the masters in accordance with what the user asked for. Keep this patch
in a form more suitable for backporting.
We are safe to take the necessary data in ->device_alloc_chan_resources()
because we don't support a generic slave configuration embedded into the
custom one, and thus the only way to provide such is to use the parameter to
a filter function which is called exactly before channel resource allocation.
While here, replace the BUG_ON() with a less noisy dev_warn() and prevent
channel allocation in case of error.
Fixes: 8950052029 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit ef859312c3 ("dmaengine: core: Use dev_ functions for debug and
error prints") wasn't quite right in __dma_request_channel() by claiming
that all pr_ prints have valid DMA channel pointer. Obviously it is not
true as __dma_request_channel() is looking for a channel and returns NULL
if it does not find it.
Prevent this potential NULL pointer dereference by reverting back to
pr_debug().
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below checkpatch.pl warnings.
WARNING: void function return statements are not generally useful
+ return;
+}
WARNING: void function return statements are not generally useful
+ return;
+}
WARNING: Missing a blank line after declarations
+ u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
+ vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Moritz Fischer <moritz.fischer@ettus.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When VDMA is configured in non-SG mode,
users can queue more descriptors than the h/w configured number of frames.
The current driver only allows the user to queue descriptors up to the h/w
configured limit, which is wrong for the non-SG mode configuration.
This patch fixes this issue.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This VDMA is a soft ip, which can be programmed to support
32 bit addressing or greater than 32 bit addressing.
When the VDMA ip is configured for 32 bit address space
the buffer address is specified by a single register
(0x5C for MM2S and 0xAC for S2MM channel).
When the VDMA core is configured for an address space greater
than 32 bits, each buffer address is specified by a combination of
two registers.
The first register specifies the LSB 32 bits of address,
while the next register specifies the MSB 32 bits of address.
For example, 5Ch will specify the LSB 32 bits while 60h will
specify the MSB 32 bits of the first start address.
So we need to program two registers at a time.
This patch adds the 64 bit addressing support to the vdma driver.
Signed-off-by: Anurag Kumar Vulisha <anuragku@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently drivers are limited to 19 slots for cyclic transfers.
However, if the DMA burst size is the same as the period size,
the period size can be changed to the full buffer size and
intermediate interrupts activated. Since intermediate interrupts
will trigger for each burst and the burst size is the same as
the period size, the driver will get interrupts each period as
expected. This has the benefit of allowing the functionality of
many more slots, but only uses 2 slots.
This workaround is only active if more than 19 slots are needed
and the burst size matches the period size.
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dynamic or on demand pm_runtime does not work correctly on am335x and
am437x due to interference with hwmod.
Fall back using the pm_runtime usage as it was in the old driver stack,
meaning that at probe time call pm_runtime_enable() and
pm_runtime_get_sync() for the TPTCs as well.
Fixes: 1be5336bc7 ("dmaengine: edma: New device tree binding")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reported-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current OF translation of channels can never work with
any DMA client using the DMA channels directly: the only way
to get the channels initialized properly is in the
dma_async_device_register() call, where chan->dev etc is
allocated and initialized.
Allocate and initialize all possible DMA channels and
only augment a target channel with the periph_buses at
of_xlate(). Remove some const settings to make things work.
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Tested-by: Joachim Eastwood <manabian@gmail.com>
Tested-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dma_get_slave_caps() API only checked for the slave capability, but we
use slave capabilities for cyclic DMA operations as well, so we should add
the cyclic case here too.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When a client requests a non-existing channel from of_dma_xilinx_xlate
we get a NULL pointer dereference. This patch fixes this problem.
Signed-off-by: Franck Jullien <franck.jullien@odyssee-systemes.fr>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the client queues up more transfers, the driver will not be able to move
to the next transfer without knowing that the previous descriptor is
completed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When, based on the CCR_ENABLE bit, the channel is stopped, we should not call
omap_dma_callback(); only change the return value to DMA_COMPLETE. Client
drivers will do the right thing to clean up the channel after the transfer
has been completed.
Check the CCR_ENABLE only if the channel is running and not paused since
pause in sDMA means that the channel is stopped.
This will fix one hard to reproduce race condition when the channel is
terminated during transfer (affecting cyclic operation).
Fixes: 1a7cf7b26f ("dmaengine: omap-dma: Handle cases when the channel is polled for completion")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch extends the capabilities of the driver to handle DMA
transfers to and from devices of 1, 2, 4, 16 (for MPC512x), and 32 byte
widths.
Signed-off-by: Mario Six <mario.six@gdsys.cc>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since the MPC8308 has no external request lines to initiate DMA transfers,
all transfers must be triggered by software.
Because of this, the current implementation of DMA transfers from and to
devices on MPC8308 SoCs using major and minor loops is faulty: After the
completion of the first major loop, the DMA engine resets the start flag in
the channel's TCD, thus halting the transfer. The driver would have to set
the start bit again to trigger the next iteration of the major loop; on
MPC512x SoCs, this is done via the external request lines, so in this case,
the driver doesn't have to interfere in any way.
This has the effect that on MPC8308s, every DMA transfer to or from a
device hangs after executing the first major loop.
The patch fixes this behavior by using just one major loop for the whole
DMA transfer on MPC8308s.
Signed-off-by: Mario Six <mario.six@gdsys.cc>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This tells, for example, the IOMMU the maximum size of a segment
the DMA controller can send.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The timeout capability is only available on the so-called DMA write channels,
i.e. those associated with the UART Rx FIFO. It means we don't need to check
the direction of the channel to handle timeouts.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Current code allows only up to 3 descriptors to be programmed to the hardware
since it uses wrong calculations. Change % to min_t() to allow as many
descriptors as the user supplied. At once, up to 4 descriptors can be
programmed due to hardware limitations.
The issue was found under stress test, so it might not bother ordinary users.
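A minimal illustration of the change (variable and macro names here are
illustrative, not the exact driver code):

	/* before: the modulo wraps the count, so at most 3 descriptors were programmed */
	count = desc->nents % HSU_DMA_CHAN_NR_DESC;
	/* after: program as many descriptors as supplied, capped at the HW limit of 4 */
	count = min_t(unsigned int, desc->nents, HSU_DMA_CHAN_NR_DESC);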
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is a typo in the documentation regarding the descriptor empty bit (DESCE),
which is set to 1 when the descriptor is empty. Thus, the status register at the
end of a transfer usually returns all DESCE bits set and will never be zero.
Moreover, there are 2 bits (CDESC) that encode the current descriptor, i.e. the
one on which the interrupt has been asserted. When a few descriptors are
programmed we might get a non-zero value.
Remove the DESCE and CDESC bits from the DMA channel status register (HSU_CH_SR)
when reading it.
Fixes: 2b49e0c567 ("dmaengine: append hsu DMA driver")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit f0579c8cea ("dmaengine: hsu: speed up residue calculation")
sped up the calculation for the queued descriptor but broke the initial residue
value for the active descriptor.
In accordance with the documentation, the hardware descriptor is updated each
time the DMA has transferred some bytes. It means we have to calculate the sum
of the lengths of the non-submitted hardware descriptors plus whatever is
currently in the hardware.
Do this straightforwardly.
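The idea can be sketched as follows (field and helper names are illustrative
only):

	/* bytes still pending in the descriptors currently owned by the HW */
	bytes = hsu_dma_active_desc_size(hsuc);
	/* plus the lengths of the hardware descriptors not yet submitted */
	for (i = desc->active; i < desc->nents; i++)
		bytes += desc->sg[i].len;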
Fixes: f0579c8cea ("dmaengine: hsu: speed up residue calculation")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The HSU_CH_MTSR register should be programmed to the minimum size to transfer.
This size is on the memory side of the transfer. Program it accordingly.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dmaengine kerneldoc, struct dma_chan always has a non-NULL
pointer to the DMA device, and a test in dma_async_device_register()
validates that the DMA device must also point to a struct device.
All pr_ prints except one in dma_channel_table_init() have a valid DMA
channel or DMA device pointer available, which allows converting them to the
dev_ functions so they can show the associated DMA device.
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'asm-generic-4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic
Pull asm-generic updates from Arnd Bergmann:
"There are only three patches this time, most other changes to files in
include/asm-generic tend to go through the tree of whoever depends on
the change.
Two patches are cleanups for stuff that is no longer needed, the main
change is to adapt the generic version of BUG_ON() for CONFIG_BUG=n to
make it behave consistently with BUG().
This avoids undefined behavior along with a number of warnings about
that undefined behavior in randconfig builds when we keep going on
after hitting a BUG_ON()"
* tag 'asm-generic-4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
asm-generic: remove old nonatomic-io wrapper files
asm-generic: default BUG_ON(x) to if(x)BUG()
asm-generic: page.h: Remove useless get_user_page and free_user_page
Merge tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC platform updates from Arnd Bergmann:
"Newly added support for additional SoCs:
- Axis Artpec-6 SoC family
- Allwinner A83T SoC
- Mediatek MT7623
- NXP i.MX6QP SoC
- ST Microelectronics stm32f469 microcontroller
New features:
- SMP support for Mediatek mt2701
- Big-endian support for NXP i.MX
- DaVinci now uses the new DMA engine dma_slave_map
- OMAP now uses the new DMA engine dma_slave_map
- earlyprintk support for palmchip uart on mach-tango
- delay timer support for orion
Other:
- Exynos PMU driver moved out to drivers/soc/
- Various smaller updates for Renesas, Xilinx, PXA, AT91, OMAP,
uniphier"
* tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (83 commits)
ARM: uniphier: rework SMP code to support new System Bus binding
ARM: uniphier: add missing of_node_put()
ARM: at91: avoid defining CONFIG_* symbols in source code
ARM: DRA7: hwmod: Add data for eDMA tpcc, tptc0, tptc1
ARM: imx: Make reset_control_ops const
ARM: imx: Do L2 errata only if the L2 cache isn't enabled
ARM: imx: select ARM_CPU_SUSPEND only for imx6
dmaengine: pxa_dma: fix the maximum requestor line
ARM: alpine: select the Alpine MSI controller driver
ARM: pxa: add the number of DMA requestor lines
dmaengine: mmp-pdma: add number of requestors
dma: mmp_pdma: Add the #dma-requests DT property documentation
ARM: OMAP2+: Add rtc hwmod configuration for ti81xx
ARM: s3c24xx: Avoid warning for inb/outb
ARM: zynq: Move early printk virtual address to vmalloc area
ARM: DRA7: hwmod: Add custom reset handler for PCIeSS
ARM: SAMSUNG: Remove unused register offset definition
ARM: EXYNOS: Cleanup header files inclusion
drivers: soc: samsung: Enable COMPILE_TEST
MAINTAINERS: Add maintainers entry for drivers/soc/samsung
...
Merge tag 'dmaengine-4.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This is smallish update with minor changes to core and new driver and
usual updates. Nothing super exciting here..
- We have made slave address as physical to enable driver to do the
mapping.
- We now expose the maxburst for slave dma as new capability so
clients can know this and program accordingly
- addition of device synchronize callbacks on omap and edma.
- pl330 updates to support DMAFLUSHP for Rockchip platforms.
- Updates and improved sg handling in Xilinx VDMA driver.
- New hidma qualcomm dma driver, though some bits are still in
progress"
* tag 'dmaengine-4.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
dmaengine: IOATDMA: revise channel reset workaround on CB3.3 platforms
dmaengine: add Qualcomm Technologies HIDMA channel driver
dmaengine: add Qualcomm Technologies HIDMA management driver
dmaengine: hidma: Add Device Tree binding
dmaengine: qcom_bam_dma: move to qcom directory
dmaengine: tegra: Move of_device_id table near to its user
dmaengine: xilinx_vdma: Remove unnecessary variable initializations
dmaengine: sirf: use __maybe_unused to hide pm functions
dmaengine: rcar-dmac: clear pertinence number of channels
dmaengine: sh: shdmac: don't open code of_device_get_match_data()
dmaengine: tegra: don't open code of_device_get_match_data()
dmaengine: qcom_bam_dma: Make driver work for BE
dmaengine: sun4i: support module autoloading
dma/mic_x100_dma: IS_ERR() vs PTR_ERR() typo
dmaengine: xilinx_vdma: Use readl_poll_timeout instead of do while loop's
dmaengine: xilinx_vdma: Simplify spin lock handling
dmaengine: xilinx_vdma: Fix issues with non-parking mode
dmaengine: xilinx_vdma: Improve SG engine handling
dmaengine: pl330: fix to support the burst mode
dmaengine: make slave address physical
...
Pull dma_*_writecombine rename from Ingo Molnar:
"Rename dma_*_writecombine() to dma_*_wc()
This is a tree-wide API rename, to move the dma_*() write-combining
APIs closer in name to their usual API families. (The old API names
are kept as compatibility wrappers to not introduce extra breakage.)
The patch was Coccinelle generated"
* 'mm-pat-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
dma, mm/pat: Rename dma_*_writecombine() to dma_*_wc()
Previously we unloaded the interrupts and reloaded them in order to work around
a channel reset bug that cleared the MSIX table. This approach just isn't
practical when a reset needs to happen in an error handler that happens
to be running in interrupt context (bottom half). It looks like we
can work around the hardware issue by just storing a shadow copy of the
MSIX table and restoring it after reset.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the hidma engine. The driver consists of two
logical blocks: the DMA engine interface and the low-level interface.
The hardware only supports memcpy/memset, and this driver only supports the
memcpy interface. Neither the HW nor the driver supports the slave interface.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Qualcomm Technologies HIDMA device has been designed to support
virtualization technology. The driver has been divided into two to follow
the hardware design.
1. HIDMA Management driver
2. HIDMA Channel driver
Each HIDMA HW instance consists of multiple channels. These channels share a
set of common parameters, which are initialized by the management driver during
power up. The same management driver is used for monitoring the execution of
the channels. The management driver can dynamically change performance behavior
such as bandwidth allocation and prioritization.
The management driver is executed in host context and is the main
management entity for all channels provided by the device.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Creating a QCOM directory for all QCOM DMA source files.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Gross <agross@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When computing the residue we need two pieces of information: the current
descriptor and the remaining data of the current descriptor. To get
that information, we need to read two registers consecutively, but we
can't do it atomically. For that reason, we have to check manually
that the current descriptor has not changed.
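Conceptually (register and constant names are indicative only), the residue
path re-reads the descriptor pointer around the byte-count read and retries
until it is stable:

	for (retry = 0; retry < AT_XDMAC_RESIDUE_MAX_RETRIES; retry++) {
		check_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA); /* descriptor before */
		cur_ubc = at_xdmac_chan_read(atchan, AT_XDMAC_CUBC);   /* remaining byte count */
		cur_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA);   /* descriptor after */
		if (check_nda == cur_nda)
			break;	/* descriptor unchanged: cur_ubc is consistent with cur_nda */
	}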
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Suggested-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Reported-by: David Engraf <david.engraf@sysgo.com>
Tested-by: David Engraf <david.engraf@sysgo.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #4.1 and later
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Rename dma_*_writecombine() to dma_*_wc(), so that the naming
is coherent across the various write-combining APIs. Keep the
old names for compatibility for a while; these can be removed
at a later time. A guard is left to enable backporting of the
rename and the later removal of the old mapping defines seamlessly.
Build tested successfully with allmodconfig.
The following Coccinelle SmPL patch was used for this simple
transformation:
@ rename_dma_alloc_writecombine @
expression dev, size, dma_addr, gfp;
@@
-dma_alloc_writecombine(dev, size, dma_addr, gfp)
+dma_alloc_wc(dev, size, dma_addr, gfp)
@ rename_dma_free_writecombine @
expression dev, size, cpu_addr, dma_addr;
@@
-dma_free_writecombine(dev, size, cpu_addr, dma_addr)
+dma_free_wc(dev, size, cpu_addr, dma_addr)
@ rename_dma_mmap_writecombine @
expression dev, vma, cpu_addr, dma_addr, size;
@@
-dma_mmap_writecombine(dev, vma, cpu_addr, dma_addr, size)
+dma_mmap_wc(dev, vma, cpu_addr, dma_addr, size)
We also keep the old names as compatibility helpers, and
guard against their definition to make backporting easier.
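The compatibility layer can be pictured roughly as guarded aliases (a
simplified sketch, not the exact header contents):

/* keep the old spellings working; the #ifndef guard lets backports pre-define them */
#ifndef dma_alloc_writecombine
#define dma_alloc_writecombine(d, s, h, g)	dma_alloc_wc(d, s, h, g)
#endif
#ifndef dma_free_writecombine
#define dma_free_writecombine(d, s, c, h)	dma_free_wc(d, s, c, h)
#endif
#ifndef dma_mmap_writecombine
#define dma_mmap_writecombine(d, v, c, h, s)	dma_mmap_wc(d, v, c, h, s)
#endif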
Generated-by: Coccinelle SmPL
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bhelgaas@google.com
Cc: bp@suse.de
Cc: dan.j.williams@intel.com
Cc: daniel.vetter@ffwll.ch
Cc: dhowells@redhat.com
Cc: julia.lawall@lip6.fr
Cc: konrad.wilk@oracle.com
Cc: linux-fbdev@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Cc: luto@amacapital.net
Cc: mst@redhat.com
Cc: tomi.valkeinen@ti.com
Cc: toshi.kani@hp.com
Cc: vinod.koul@intel.com
Cc: xen-devel@lists.xensource.com
Link: http://lkml.kernel.org/r/1453516462-4844-1-git-send-email-mcgrof@do-not-panic.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add unmapping of sources and destinations when dequeuing.
Signed-off-by: Xuelin Shi <xuelin.shi@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
After using the function of_device_get_match_data(), the
of_device_id table for tegra20 dma is not used by probe(),
so move it near the place where the platform driver is
defined, as the table is only used in that data structure.
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch removes the unnecessary variable initializations
in the driver.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The sirf dma driver uses #ifdef to check for CONFIG_PM_SLEEP
for its suspend/resume code but then has no #ifdef for the
respective runtime PM code, so we get a warning if CONFIG_PM
is disabled altogether:
drivers/dma/sirf-dma.c:1000:12: error: 'sirfsoc_dma_runtime_resume' defined but not used [-Werror=unused-function]
This removes the existing #ifdef and instead uses __maybe_unused
annotations for all four functions to let the compiler know it
can silently drop the function definition.
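The resulting pattern looks roughly like this (only the resume function name
comes from the warning above; the rest is illustrative):

/* no #ifdef CONFIG_PM / CONFIG_PM_SLEEP around the definitions anymore */
static int __maybe_unused sirfsoc_dma_runtime_resume(struct device *dev)
{
	/* ... */
	return 0;
}

static const struct dev_pm_ops sirfsoc_dma_pm_ops = {
	SET_RUNTIME_PM_OPS(sirfsoc_dma_runtime_suspend,
			   sirfsoc_dma_runtime_resume, NULL)
	SET_SYSTEM_SLEEP_PM_OPS(sirfsoc_dma_pm_suspend, sirfsoc_dma_pm_resume)
};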
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
DMACHCLR clears each channel, but the number of channels depends on
the SoC or IP. The current driver uses a fixed 0x7fff (i.e. 15 channels),
which is not a good match for the Gen3 or Gen2 Audio DMAC. This patch fixes it.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This change will also make Coverity happy by avoiding a theoretical NULL
pointer dereference; yet another reason is to use the above helper function
to tighten the code and make it more readable.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use of_device_get_match_data() for getting matched data
instead of implementing this locally.
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the Qualcomm BAM dmaengine driver to work with big
endian kernels.
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
MODULE_DEVICE_TABLE() is missing, so the module doesn't auto-load on
supported systems. This commit adds the missing line so the module loads
automatically when built as a module and running on a system
with the early sunxi DMA engine.
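The missing line is of this form (the match-table identifier below is
illustrative; the driver's existing OF table is what gets exported):

/* export the existing OF match table so the module can be autoloaded */
MODULE_DEVICE_TABLE(of, sun4i_dma_match);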
Signed-off-by: Emilio López <emilio.lopez@collabora.co.uk>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is harmless because the caller only cares about zero vs non-zero
but we should be returning PTR_ERR() here.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
While testing audio with pxa2xx-ac97, underruns were happening while the
user application was correctly feeding the music. Debugging proved that the
cyclic transfer is not cyclic, i.e. the last descriptor did not loop back to
the first.
Another issue is that the descriptor length was always set to 8192,
because of a trivial operator issue.
This was tested on a pxa27x platform.
Fixes: a57e16cf03 ("dmaengine: pxa: add pxa dmaengine driver")
Reported-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It is sometimes necessary to poll a memory-mapped register until its
value satisfies some condition. Use the convenience macros
that do this instead of do-while loops.
This patch updates the driver accordingly.
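As an example of the pattern (register, bit and field names are placeholders
rather than the driver's actual ones):

	u32 val;
	int err;

	/* poll the status register until the halted bit is set:
	 * check every 10 us, give up after 10 ms */
	err = readl_poll_timeout(chan->regs + REG_DMASR, val,
				 val & DMASR_HALTED, 10, 10000);
	if (err)
		dev_err(chan->dev, "timed out waiting for the channel to halt\n");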
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch simplifies the spin lock handling in the driver
by moving locking out of the xilinx_dma_start_transfer()
and xilinx_dma_update_completed_cookie() APIs.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes issues with the non-parking mode (circular mode).
With the existing driver in circular mode, if we submit fewer frames than the
h/w is configured for, we simply end up with misconfigured VDMA h/w.
This patch fixes the issue by configuring the frame count register.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current driver allows the user to queue up multiple segments
on a single transaction descriptor. The user submits this single descriptor,
and in issue_pending() we decode the multiple segments and submit them to the
SG HW engine. We free up allocated_desc when it is submitted to the HW.
The existing code prevents the user from preparing multiple transactions at the
same time, as allocated_desc gets overwritten.
The best utilization of the HW SG engine would happen if we collate the pending
list when we start DMA; this patch updates the driver to do so.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The two header files got moved to include/linux, and most
users were already converted; this changes the remaining drivers
and removes the files.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Simon Horman <simon.horman@netronome.com>
Acked-by: Yisen Zhuang <yisen.zhuang@huawei.com>
This patch fixes the burst mode that breaks the DMA UART on SoCFPGA.
In some cases, SoCs don't support multi-burst even if the devices that
use the pl330 claim to support maxburst.
Fixes: 848e977 ("dmaengine: pl330: support burst mode for dev-to-mem and mem-to-dev transmit")
Reported-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Tested-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current number of requestor lines is limited to 31. This was an
error in a previous commit, as this number is platform dependent, and is
actually:
- for pxa25x: 40 requestor lines
- for pxa27x: 75 requestor lines
- for pxa3xx: 100 requestor lines
The previous testing did not reveal the faulty constant, as on pxa[23]xx
platforms only the camera, MSL and USB are above requestor 32, and of these
only the camera has a driver using DMA.
Fixes: e87ffbdf06 ("dmaengine: pxa_dma: fix the no-requestor case")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Vinod Koul <vinod.koul@intel.com>
In acpi_dma_request_slave_chan_by_name() the debug message is printed before
the actual matching happens. Correct the message itself to be in line with the
flow.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is a typo in the definition of IDMA64C_CFGH_WR_ISSUE_THD(x). Fix it by
swapping characters.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The mxs-dma unit is also available on i.MX6UL. Make it possible to
select it in Kconfig.
Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Avoid a possible race condition when client drivers use the
dmaengine_terminate_sync() call to disable the channel.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Suggested-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We need the callback to support the dmaengine_terminate_sync().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We need the callback to support the dmaengine_terminate_sync().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Future IOATDMA hardware will take advantage of descriptors residing in
contiguous memory. Set the descriptor ring in max-config DMA memory chunks
of 2MB. Each channel will need 2 of these chunks. This should provide 64k
of 64B descriptors.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Moving to contiguous-memory-backed descriptor rings makes it really
difficult and complex to do a reshape. Remove the reshape support, as I don't
think we need it anymore.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Converting old pci_pool_* calls to "new" dma_pool_* to make everything
uniform.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit 2895b2cad6 ("dmaengine: dw: fix cyclic transfer callbacks")
re-enabled BLOCK interrupts in order to make cyclic transfers work. However,
this change becomes a regression for non-cyclic transfers, as interrupt
counters under stress test grew enormously (approximately one interrupt per
4-5 bytes in the UART loopback test).
Taking the above into consideration, enable BLOCK interrupts if and only if the
channel is programmed to perform a cyclic transfer.
Fixes: 2895b2cad6 ("dmaengine: dw: fix cyclic transfer callbacks")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mans Rullgard <mans@mansr.com>
Tested-by: Mans Rullgard <mans@mansr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The datasheet requires that the user must clear LLP_[SD]_EN bits whenever
LLP.LOC is zero, i.e. in the last descriptor of a multi-block chain.
Make the driver do this.
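In terms of the descriptor fields, the rule amounts to something like the
following sketch (flag and field names follow the driver's usual conventions
but are shown only for illustration):

	/* last descriptor of a multi-block chain: LLP.LOC == 0, so linked-list
	 * fetching must be disabled in the control word as well */
	if (!desc->lli.llp)
		desc->lli.ctllo &= ~(DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);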
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds max burst capability for dmaengine and
limits the burst capability to one for PL330_QUIRK_BROKEN_NO_FLUSHP.
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds max_burst to dma_get_slave_caps() so that clients
can get the burst capability of the slave DMA controller.
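A client could then clamp its burst setting along these lines (illustrative
usage, not taken from a specific driver):

	struct dma_slave_caps caps;
	int ret = dma_get_slave_caps(chan, &caps);

	if (!ret && caps.max_burst && cfg.dst_maxburst > caps.max_burst)
		cfg.dst_maxburst = caps.max_burst; /* stay within controller limits */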
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch add "arm,pl330-broken-no-flushp" quirk to avoid execute
DMAFLUSHP if Soc doesn't support it.
Signed-off-by: Addy Ke <addy.ke@rock-chips.com>
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
cc: Doug Anderson <dianders@chromium.org>
cc: Heiko Stuebner <heiko@sntech.de>
cc: Olof Johansson <olof@lixom.net>
Reviewed-by: Sonny Rao <sonnyrao@chromium.org>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for burst mode for dev-to-mem and
mem-to-dev transfers.
Signed-off-by: Boojin Kim <boojin.kim@samsung.com>
Signed-off-by: Addy Ke <addy.ke@rock-chips.com>
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
cc: Heiko Stuebner <heiko@sntech.de>
cc: Doug Anderson <dianders@chromium.org>
cc: Olof Johansson <olof@lixom.net>
Reviewed-by: Sonny Rao <sonnyrao@chromium.org>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Make use of ARCH_RENESAS in place of ARCH_SHMOBILE.
This is part of an ongoing process to migrate from ARCH_SHMOBILE to
ARCH_RENESAS the motivation for which being that RENESAS seems to be a more
appropriate name than SHMOBILE for the majority of Renesas ARM based SoCs.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When retrieving the residue value, the SRC/DST fields of the
active PaRAM are read to determine the current position of
the DMA engine. However, the AM335x Technical Reference Manual
states:
11.3.3.6 Parameter Set Updates
After the TR is read from the PaRAM (and is in the process
of being submitted to the EDMA3TC), the following fields are
updated as needed: ... SRC DST
This means SRC/DST is incremented even though the DMA transfer
may not have started yet or is in progress. Thus if the reader
of the residue accesses the DMA buffer too quickly, the CPU is
misinformed about the data that has been successfully processed.
The CCSTAT.ACTV register is a boolean that is set if any TR is
being processed by either the EMDA3CC or EDMA3TC. By polling
this register it is possible to ensure that the residue value
returned is valid for immediate processing. However, since the
DMA engine may be active, polling may never hit a moment where
no TR is being processed. To handle this, the SRC/DST is also
polled to see if it changes. And as a last resort, a max loop
count for the busy waiting exists to avoid an infinite loop.
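In pseudo-C, the busy-wait described above looks roughly like this (all names
are indicative only):

	pos = read_parm_pos(pparam);			/* SRC or DST from the active PaRAM */
	for (loop = 0; loop < EDMA_MAX_TR_WAIT_LOOPS; loop++) {
		if (!(edma_read(ecc, EDMA_CCSTAT) & EDMA_CCSTAT_ACTV))
			break;				/* no TR in flight: position is settled */
		new_pos = read_parm_pos(pparam);
		if (new_pos != pos) {
			pos = new_pos;			/* transfer progressed: value is usable */
			break;
		}
	}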
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
WildcatPoint PCH, as seen on the MacBook 12-inch (Early 2015), has a
PCI-enabled DesignWare DMA controller. Enable it by adding its ID to the
corresponding driver.
driver.
Reported-by: Leif Liddy <leif.liddy@gmail.com>
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=110901
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The timer_event() function seems to have a bug where it ends up marking the
last entry as non-responding and eventually attempts to restart the
channel. This also continuously happens when idle. What needs to happen is
for us to make sure there are no descriptors active and then handle that
case properly. We should only hit the "cleanup" stage if there are still
active descriptors.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
CC [M] drivers/dma/ioat/prep.o
drivers/dma/ioat/prep.c: In function 'ioat_prep_pqxor':
drivers/dma/ioat/prep.c:682:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]
}
^
drivers/dma/ioat/prep.c: In function 'ioat_prep_pqxor_val':
drivers/dma/ioat/prep.c:714:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]
}
gcc version 5.3.1 20151219 (Ubuntu 5.3.1-4ubuntu1)
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Nicholas Mc Guire <der.herr@hofr.at>
Cc: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Debugging ALSA hangups, it was found that the EP9302 (latest E2 rev.) DMA
controller sometimes asserts the STALL interrupt instead of the NFB interrupt.
Ignoring the difference and simply acting upon the amount of data we still have
to transfer seems to work fine. This somehow sounds similar to the M2M issue
which is already dealt with in the driver, where the controller asserts the
DONE interrupt too early.
The issue is not documented in the Cirrus Logic errata for EP93XX, but the
original Cirrus DMA driver from 2003 (not based on the DMA API) did similar
handling of the STALL interrupt. The in-tree driver (6d831c65) did it as well,
before the conversion to the DMA engine API.
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The offset of SINC should be 9, not 7; fix this typo.
Signed-off-by: Jie Yang <yang.jie@intel.com>
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A few fixes on drivers have piled up, plus one missed rcar bindings patch
which somehow got lost in the for-linus branch, so that one was cherry-picked.
Fixes are on dw, at_hdmac, edma.
Merge tag 'dmaengine-fix-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
"Here is my second pull request for this window:
A few driver fixes have piled up and one missed rcar bindings patch
which got somehow lost in for-linus branch so cherry-picked that one.
Fixes are for dw, at_hdmac, edma"
* tag 'dmaengine-fix-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: rcar-dmac: Document SoC specific bindings
dmaengine: at_xdmac: fix resume for cyclic transfers
dmaengine: dw: fix cyclic transfer callbacks
dmaengine: dw: fix cyclic transfer setup
dmaengine: edma: Fix paRAM slot allocation for entry channel 0
With cyclic transfers, the channel was paused when performing
suspend but was not correctly resumed.
Signed-off-by: Songjun Wu <songjun.wu@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Cc: <stable@vger.kernel.org> # 4.1 and later
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'sound-4.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound updates from Takashi Iwai:
"We've had quite busy weeks in this cycle. Looking at ALSA core, the
significant changes are a few fixes wrt timer and sequencer ioctls
that have been revealed by fuzzer recently. Other than that, ASoC
core got a few updates about DAI link handling, but these are rather
straightforward refactoring.
In drivers scene, ASoC received quite lots of new drivers in addition
to bunch of updates for still ongoing Intel Skylake support and
topology API. HD-audio gained a new HDMI/DP hotplug notification via
component. FireWire got a pile of code refactoring/updates with
SCS.1x driver integration.
More highlights are shown below.
[ NOTE: this contains also many commits for DRM. This is due to the
pull of drm stable branch into sound tree, as the base of i915 audio
component work for HD-audio. The highlights below don't contain
these DRM changes, as these are supposed to be pulled via drm tree
in anyway sooner or later. ]
Core:
- Handful fixes to harden ALSA timer and sequencer ioctls against
races reported by syzkaller fuzzer
- Irq description string can be unique to each card; only for
HD-audio for now
ASoC:
- Conversion of the array of DAI links to a list for supporting
dynamically adding and removing DAI links
- Topology API enhancements to make everything more component based
and being able to specify PCM links via topology
- Some more fixes for the topology code, though it is still not final
and ready for enabling in production; we really need to get to the
point where that can be done
- A pile of changes for Intel SkyLake drivers which hopefully deliver
some useful initial functionality for systems with this chipset,
though there is more work still to come
- Lots of new features and cleanups for the Renesas drivers
- ANC support for WM5110
- New drivers: Imagination Technologies IPs, Atmel class D speaker,
Cirrus CS47L24 and WM1831, Dialog DA7128, Realtek RT5659 and
RT56156, Rockchip RK3036, TI PC3168A, and AMD ACP
- Rename PCM1792a driver to be generic pcm179x
HD-Audio:
- Use audio component for i915 HDMI/DP hotplug handling
- On-demand binding with i915 driver
- bdl_pos_adj parameter adjustment for Baytrail controllers
- Enable power_save_node for CX20722; this shouldn't lead to
regression, hopefully
- Kabylake HDMI/DP codec support
- Quirks for Lenovo E50-80, Dell Latitude E-series, and other Dell
machines
- A few code refactoring
FireWire:
- Lots of code cleanup and refactoring
- Integrate the support of SCS.1x devices into snd-oxfw driver;
snd-scs1x driver is obsoleted
USB-audio:
- Fix possible NULL dereference at disconnection
- A regression fix for Native Instruments devices
Misc:
- A few code cleanups of fm801 driver"
* tag 'sound-4.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (722 commits)
ALSA: timer: Code cleanup
ALSA: timer: Harden slave timer list handling
ALSA: hda - Add fixup for Dell Latitidue E6540
ALSA: timer: Fix race among timer ioctls
ALSA: hda - add codec support for Kabylake display audio codec
ALSA: timer: Fix double unlink of active_list
ALSA: usb-audio: Fix mixer ctl regression of Native Instrument devices
ALSA: hda - fix the headset mic detection problem for a Dell laptop
ALSA: hda - Fix white noise on Dell Latitude E5550
ALSA: hda_intel: add card number to irq description
ALSA: seq: Fix race at timer setup and close
ALSA: seq: Fix missing NULL check at remove_events ioctl
ALSA: usb-audio: Avoid calling usb_autopm_put_interface() at disconnect
ASoC: hdac_hdmi: remove unused hdac_hdmi_query_pin_connlist
ASoC: AMD: Add missing include file
ALSA: hda - Fixup inverted internal mic for Lenovo E50-80
ALSA: usb: Add native DSD support for Oppo HA-1
ASoC: Make aux_dev more like a generic component
ASoC: bcm2835: cleanup includes by ordering them alphabetically
ASoC: AMD: Manage ACP 2.x SRAM banks power
...
Cyclic transfer callbacks rely on block completion interrupts which were
disabled in commit ff7b05f29f ("dmaengine/dw_dmac: Don't handle block
interrupts"). This re-enables block interrupts so the cyclic callbacks
can work. Other transfer types are not affected as they set the INT_EN
bit only on the last block.
Fixes: ff7b05f29f ("dmaengine/dw_dmac: Don't handle block interrupts")
Signed-off-by: Mans Rullgard <mans@mansr.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: <stable@vger.kernel.org>
Commit 61e183f830 ("dmaengine/dw_dmac: Reconfigure interrupt and
chan_cfg register on resume") moved some channel initialisation to
a new function which must be called before starting a transfer.
This updates dw_dma_cyclic_start() to use dwc_dostart() like the other
modes, thus ensuring dwc_initialize() gets called and removing some code
duplication.
Fixes: 61e183f830 ("dmaengine/dw_dmac: Reconfigure interrupt and chan_cfg register on resume")
Signed-off-by: Mans Rullgard <mans@mansr.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: <stable@vger.kernel.org>
Merge tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This round we have few new features, new driver and updates to few
drivers.
The new features to dmaengine core are:
- Synchronized transfer termination API to terminate the dmaengine
transfers in synchronized and async fashion as required by users.
We have its user now in ALSA dmaengine lib, img, at_xdma, axi_dmac
drivers.
- Universal API for channel request and start consolidation of
request flows. It's user is ompa-dma driver.
- Introduce reuse of descriptors and use in pxa_dma driver
Add/Remove:
- New STM32 DMA driver
- Removal of unused R-Car HPB-DMAC driver
Updates:
- ti-dma-crossbar updates for supporting eDMA
- tegra-apb pm updates
- idma64
- mv_xor updates
- ste_dma updates"
* tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (54 commits)
dmaengine: mv_xor: add suspend/resume support
dmaengine: mv_xor: de-duplicate mv_chan_set_mode*()
dmaengine: mv_xor: remove mv_xor_chan->current_type field
dmaengine: omap-dma: Add support for DMA filter mapping to slave devices
dmaengine: edma: Add support for DMA filter mapping to slave devices
dmaengine: core: Introduce new, universal API to request a channel
dmaengine: core: Move and merge the code paths using private_candidate
dmaengine: core: Skip mask matching when it is not provided to private_candidate
dmaengine: mdc: Correct terminate_all handling
dmaengine: edma: Add probe callback to edma_tptc_driver
dmaengine: dw: fix potential memory leak in dw_dma_parse_dt()
dmaengine: stm32-dma: Fix unchecked deference of chan->desc
dmaengine: sh: Remove unused R-Car HPB-DMAC driver
dmaengine: usb-dmac: Document SoC specific compatibility strings
ste_dma40: Delete an unnecessary variable initialisation in d40_probe()
ste_dma40: Delete another unnecessary check in d40_probe()
ste_dma40: Delete an unnecessary check before the function call "kmem_cache_destroy"
dmaengine: tegra-apb: Free interrupts before killing tasklets
dmaengine: tegra-apb: Update driver to use GFP_NOWAIT
dmaengine: tegra-apb: Only save channel state for those in use
...
edma_alloc_slot() was not checking for channel mapping support when
slot 0 was requested (used as the entry slot for channel/event 0).
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* acpi-soc:
PM / clk: don't leave clocks enabled when driver not bound
i2c: dw: Add APM X-Gene ACPI I2C device support
ACPI / APD: Add APM X-Gene ACPI I2C device support
ACPI / LPSS: change 'does not have' to 'has' in comment
Revert "dmaengine: dw: platform: provide platform data for Intel"
dmaengine: dw: return immediately from IRQ when DMA isn't in use
dmaengine: dw: platform: power on device on shutdown
ACPI / LPSS: override power state for LPSS DMA device
ACPI / LPSS: power on when probe() and otherwise when remove()
ACPI / LPSS: do delay for all LPSS devices when D3->D0
ACPI / LPSS: allow to use specific PM domain during ->probe()
Revert "ACPI / LPSS: allow to use specific PM domain during ->probe()"
device core: add BUS_NOTIFY_DRIVER_NOT_BOUND notification
x86/platform/iosf_mbi: Remove duplicate definitions
Conflicts:
drivers/i2c/busses/i2c-designware-platdrv.c
Since we have a workaround to prevent a system hangup, we don't need to provide
platform data explicitly anymore.
This reverts commit 175267b389.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>