The d40_probe() function used a single jump label for error handling in several cases, which was a bit inefficient.
* Improve this implementation detail by introducing another jump label.
* Remove an extra check for the variable "base".
* Omit its explicit initialisation at the beginning accordingly.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The kmem_cache_destroy() function tests whether its argument is NULL
and then returns immediately. Thus the test around the call is not needed.
This issue was detected by using the Coccinelle software.
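The removed pattern looks roughly like this (the variable name is illustrative only):
-	if (cache)
-		kmem_cache_destroy(cache);
+	kmem_cache_destroy(cache);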
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This change makes the DT file easier to read since the reserved slots array does not need '/bits/ 16' to be specified, which might confuse some people.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This change makes the DT file easier to read since the memcpy channels array does not need '/bits/ 16' to be specified, which might confuse some people.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On probe failure or driver removal, before killing any tasklets, free the channel interrupt so that another channel interrupt cannot occur and schedule the tasklet again.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The tegra20-apb-dma driver currently uses the GFP_ATOMIC flag when allocating memory for structures used in conjunction with the DMA descriptors. It is preferred that dmaengine drivers use GFP_NOWAIT instead, so that the emergency memory pool is not used by these drivers.
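A minimal sketch of the change (the variable name is illustrative only):
-	desc = kzalloc(sizeof(*desc), GFP_ATOMIC);
+	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);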
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the tegra-apb DMA driver suspend/resume helpers save and restore the registers for all channels regardless of whether they are in use or not. Change this so that only channels that have been allocated and configured are saved and restored.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Newer tegra devices have a separate word count register per channel that contains the number of words to be transferred. This register is not saved or restored by the suspend/resume helpers for these newer devices, so ensure that it is.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the tegra_dma_runtime_suspend/resume functions, the pdev structure
is not needed, and so just call dev_get_drvdata() to get the device
data structure.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The tegra-apb DMA driver enables runtime-pm but never calls pm_runtime_get/put, and hence the runtime-pm callbacks are never invoked. The driver manages the clocks by directly calling clk_prepare_enable() and clk_disable_unprepare().
Fix this by replacing the clk_prepare_enable() and clk_disable_unprepare() calls with pm_runtime_get_sync() and pm_runtime_put(), respectively. Note that the consequence of this is that if runtime-pm is disabled, then the clocks will remain on the entire time the driver is loaded. However, if runtime-pm is disabled, then power is most likely not a concern.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The spin lock should be released when returning from the function.
Signed-off-by: Saurabh Sengar <saurabh.truth@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Calling synchronize_irq() right before devm_free_irq() is quite useless. On
one hand the IRQ can easily fire again before devm_free_irq() is entered,
on the other hand devm_free_irq() itself calls synchronize_irq() internally
(in a race condition free way), before any state associated with the IRQ is
freed.
Patch was generated using the following semantic patch:
// <smpl>
@@
expression irq, dev;
@@
-synchronize_irq(irq);
devm_free_irq(dev, irq, ...);
// </smpl>
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Calling synchronize_irq() right before free_irq() is quite useless. On one
hand the IRQ can easily fire again before free_irq() is entered, on the
other hand free_irq() itself calls synchronize_irq() internally (in a race
condition free way), before any state associated with the IRQ is freed.
Patch was generated using the following semantic patch:
// <smpl>
@@
expression irq;
@@
-synchronize_irq(irq);
free_irq(irq, ...);
// </smpl>
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This adds power management suspend/resume support for the fsl-edma driver.
eDMA is a basic service used by other devices; the two steps below are needed to support power management.
In fsl_edma_suspend_late:
Check whether the DMA channel is idle; if it is not idle, disable the DMA request.
In fsl_edma_resume_early:
Enable the eDMA and wait for it to be used.
Signed-off-by: Yuan Yao <yao.yuan@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit f931782917 ("dmaengine: bcm2835-dma: Fix memory leak when stopping a running transfer") fixed the memory leak, but introduced another issue: the terminate_all callback might be called with interrupts disabled, and dma_free_coherent() is not allowed to be called when IRQs are disabled.
Convert the driver to use dma_pool_* for managing the list of control
blocks for the transfer.
Fixes: f931782917 ("dmaengine: bcm2835-dma: Fix memory leak when stopping a running transfer")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Matthias Reichl <hias@horus.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When performing interleaved transfers with numf > 1, an extra line is
copied. The mbr.bc field is incremented once too often. The length of
the block is (BLEN+1) microblocks.
Signed-off-by: Sylvain ETIENNE <Sylvain.ETIENNE@ingenico.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: 4e5385784e ("dmaengine: at_xdmac: handle numf > 1")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code was not in agreement with the comments.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Cc: stable@vger.kernel.org # 4.3 and later
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no need to calculate the overall length of the descriptor each time we query the DMA transfer status. Instead, do this at the descriptor allocation stage and keep the stored length for further use.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the DMA controller match is done only on the lower 32 bits of the address, which might not be correct on a 64-bit platform. Check the upper portion as well.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since a local variable contains the number of hardware descriptors at the beginning of idma64_desc_fill(), we may use it to index the last descriptor as well.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Explicitly show in idma64_desc_fill() how we link the hardware
descriptors.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This tells, for example, the IOMMU the maximum size of a segment the DMA controller can send.
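A minimal sketch of how a driver typically declares such a limit (the device pointer, the embedded dma_parms field and the 4 MiB value are illustrative assumptions, not taken from this driver):
	dev->dma_parms = &priv->dma_parms;	/* storage for the per-device limit */
	dma_set_max_seg_size(dev, SZ_4M);	/* hypothetical maximum segment size */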
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no need to disable interrupts in the IRQ handler. The driver guarantees that only one descriptor is active at a time, besides the fact that each call for the same channel will be serialized in the idma64_chan_irq() handler anyway.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix a typo in a macro which was not used until now, which explains why there was no error at compilation time.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Cc: stable@vger.kernel.org # 3.19 and later
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When setting the channel configuration register, the perid field is not set to 0 since it is useless for mem2mem transfers. Unfortunately, a device can have 0 as its perid. This could cause spurious flag status because the controller could mix some events from the two channels.
For that reason, use the highest perid value for mem2mem transfers since it doesn't match the perid of any other device.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes an issue where list_for_each_entry() in usb_dmac_chan_terminate_all() could cause an endless loop, because the loop body moves its own desc to desc_freed. So, this driver should use list_for_each_entry_safe() instead of list_for_each_entry().
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When a DMA client driver does not provide a callback for the completion of a transfer (and/or does not set DMA_PREP_INTERRUPT) but instead polls the status of the transfer (in the case of a short memcpy, for example), we will not get an interrupt for the completion of the transfer and will not mark the transaction as done.
Check the channel enable bit in the CCR when the status is queried and if
the channel is no longer active, we call the omap_dma_callback() to handle
the transfer completion.
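From the client's side, the polled flow described above looks roughly like this (a sketch with assumed variable names, not code from this patch):
	/* tx was prepared without DMA_PREP_INTERRUPT and with no callback set */
	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	do {
		status = dmaengine_tx_status(chan, cookie, NULL);
	} while (status == DMA_IN_PROGRESS);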
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The use of a tasklet to actually start the DMA transfer slightly decreases the DMA throughput since it adds a small scheduling delay when the transfer is started. In normal use, even with high I/O load, the tasklet would start one transaction at a time; however, running dmatest for memcpy on all available channels will cause the tasklet to start about 15 transfers.
The performance numbers on OMAP4 PandaBoard-es (test_buf_size = 6553):
With the tasklet:
dmatest: dma0chan30-copy: summary 5000 tests, 0 failures 186 iops 593 KB/s (0)
dmatest: dma0chan8-copy0: summary 5000 tests, 0 failures 184 iops 584 KB/s (0)
dmatest: dma0chan13-copy: summary 5000 tests, 0 failures 184 iops 585 KB/s (0)
dmatest: dma0chan12-copy: summary 5000 tests, 0 failures 184 iops 585 KB/s (0)
dmatest: dma0chan7-copy0: summary 5000 tests, 0 failures 183 iops 581 KB/s (0)
With this patch (no tasklet):
dmatest: dma0chan4-copy0: summary 5000 tests, 0 failures 199 iops 644 KB/s (0)
dmatest: dma0chan5-copy0: summary 5000 tests, 0 failures 199 iops 645 KB/s (0)
dmatest: dma0chan6-copy0: summary 5000 tests, 0 failures 199 iops 637 KB/s (0)
dmatest: dma0chan24-copy: summary 5000 tests, 0 failures 199 iops 638 KB/s (0)
dmatest: dma0chan16-copy: summary 5000 tests, 0 failures 199 iops 638 KB/s (0)
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The for_each_sg() macro's last parameter is intended to be used as a counter. We can use 'i' instead of 'j' for the indexes within the loop.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
During a memory copy both the src and dst positions move at the same pace. Check the dst position for progress reporting.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Jyri Sarha <jsarha@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In eDMA the events are directly mapped to a DMA channel (for example DMA event 14 can only be handled by DMA channel 14). If memcpy is enabled on the eDMA, there is a possibility that the crossbar driver would assign a DMA event number already allocated in eDMA for memcpy. Furthermore, the eDMA can be shared with a DSP, in which case the crossbar driver should also avoid mapping xbar events to DSP-used event numbers (or channels).
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The use of idr was nice, but it was a bit heavy and we did not need the features it provides. Using a simple bitmap to track allocated DMA channels is adequate here, and it will be easier to add support for reserving channels later on.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Allow the crossbar driver to be used with the eDMA node with the non-legacy binding.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As we are now passing the filter data as pointers to the drivers,
we can take the final step and also pass the filter function the
same way. I'm keeping this change separate, as it's less obvious that this is a net win.
Upsides of this are:
- The ASoC drivers are completely independent from the DMA engine
implementation, which simplifies the Kconfig logic and in theory
allows the same sound drivers to be built in a kernel that supports
different kinds of dmaengine drivers.
- Consistency with other subsystems and drivers
On the other hand, we have a few downsides:
- The s3c24xx-dma driver now needs to be built-in for the ac97 platform
device to be instantiated on s3c2440.
- samsung_dmaengine_pcm_config cannot be marked 'const' any more
because the filter function pointer needs to be set at runtime.
This is safe as long as we don't have multiple different DMA engines in the same system at runtime, but is nonetheless ugly.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The dca_ops structure is never modified, so declare it as const.
Done with the help of Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_addr_t may be defined as 32 or 64 bit depending on configuration,
so it cannot be printed using the normal format strings, as
gcc correctly warns:
drivers/dma/at_hdmac.c: In function 'atc_prep_dma_interleaved':
drivers/dma/at_hdmac.c:731:28: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'dma_addr_t {aka long long unsigned int}' [-Wformat=]
This changes the format strings to use the special "%pad" format
string that prints a dma_addr_t, and changes the arguments so we
pass the address by reference as required.
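For illustration, the pattern (with a made-up variable name) is:
-	dev_dbg(dev, "buffer at 0x%08x\n", buf_addr);
+	dev_dbg(dev, "buffer at %pad\n", &buf_addr);
where buf_addr is a dma_addr_t and %pad takes a pointer to it.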
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_addr_t may be defined as 32 or 64 bit depending on configuration,
so it cannot be printed using the normal format strings, as
gcc correctly warns:
drivers/dma/at_xdmac.c: In function 'at_xdmac_interleaved_queue_desc':
drivers/dma/at_xdmac.c:922:51: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'dma_addr_t {aka long long unsigned int}' [-Wformat=]
This changes the format strings to use the special "%pad" format
string that prints a dma_addr_t, and changes the arguments so we
pass the address by reference as required.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The sdma_probe() function calls sdma_event_remap(), but sdma_event_remap() is marked with the __init annotation, which makes kbuild complain with the following log:
WARNING: drivers/dma/built-in.o(.text+0x56fc): Section mismatch in reference
from the function sdma_probe() to the function .init.text:sdma_event_remap()
The function sdma_probe() references
the function __init sdma_event_remap().
This is often because sdma_probe lacks a __init
annotation or the annotation of sdma_event_remap is wrong.
Remove the __init annotation from sdma_event_remap() to kill this build warning.
Signed-off-by: Jason Liu <r64343@freescale.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove pre-3.0 channel status reads. In 3.0 and later the chansts register is 64-bit and can be read as 64-bit. This was clarified with the hardware architects, and since the driver now only supports 3.0+ we don't need the legacy support.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current code uses bits 0-2 instead of 4-6 as the comment says.
Fixes: 633e42b8c5 ('dmaengine: edma: Get qDMA channel information from HW also')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
During the edma rework, a build error was introduced for the
case that CONFIG_OF is disabled:
drivers/built-in.o: In function `edma_tc_set_pm_state':
:(.text+0x43bf0): undefined reference to `of_find_device_by_node'
As the edma_tc_set_pm_state() function does nothing in case we are running without OF, this adds an IS_ENABLED() check that turns the function into an empty stub in that case and avoids the link error.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: ca304fa9bb ("ARM/dmaengine: edma: Public API to use private struct pointer")
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the STM32 DMA controller.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the call to pm_runtime_get_sync() failed, Runtime PM was left
enabled.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If CONFIG_PREEMPT=y:
Unable to handle kernel NULL pointer dereference at virtual address 00000014
pgd = c0003000
[00000014] *pgd=80000040004003, *pmd=00000000
Internal error: Oops: 206 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0 PID: 17 Comm: kworker/0:1 Tainted: G W 4.3.0-rc3-koelsch-02271-g705498fc5e6a5da8-dirty #1789
Hardware name: Generic R8A7791 (Flattened Device Tree)
Workqueue: pm pm_runtime_work
task: ef578e40 ti: ef57a000 task.ti: ef57a000
PC is at usb_dmac_chan_halt+0xc/0xc0
LR is at usb_dmac_runtime_suspend+0x28/0x38
pc : [<c023c880>] lr : [<c023c95c>] psr: 80000113
sp : ef57bdf8 ip : 00000008 fp : 00000003
r10: 00000008 r9 : c06ab928 r8 : ef49e810
r7 : 00000000 r6 : 000000ac r5 : ef770010 r4 : 00000000
r3 : 00000000 r2 : 8ffc2b84 r1 : 00000000 r0 : ef770010
Flags: Nzcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment kernel
Control: 30c5307d Table: 40003000 DAC: fffffffd
Process kworker/0:1 (pid: 17, stack limit = 0xef57a210)
Stack: (0xef57bdf8 to 0xef57c000)
[...
[<c023c880>] (usb_dmac_chan_halt) from [<c023c95c>] (usb_dmac_runtime_suspend+0x28/0x38)
[<c023c95c>] (usb_dmac_runtime_suspend) from [<c027b25c>] (pm_genpd_runtime_suspend+0x74/0x23c)
This happens because usb_dmac_probe() calls pm_runtime_put() before
usb_dmac_chan_probe(), leading to the device being suspended before the
DMA channels are initialized, causing a NULL pointer dereference.
Move the call to pm_runtime_put() to the end of usb_dmac_probe() to fix
this.
Add a check to usb_dmac_runtime_suspend() to prevent the crash from
happening in the error path.
Reported-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As this driver provides a mechanism to reuse transfers, declare it in
its probe function.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the current state, the capability of transfer reuse can neither be
set by a slave dmaengine driver, nor used by a client driver, because
the capability is not available to dma_get_slave_caps().
Fix this by adding a way to declare the capability.
Fixes: 272420214d ("dmaengine: Add DMA_CTRL_REUSE")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch attempts to enhance the case of a transfer submitted multiple
times, and where the cost of creating the descriptors chain is not
negligible.
This happens with big video buffers (several megabytes, i.e. several thousands of linked descriptors in one scatter-gather list). In these cases, a video driver would want to do:
- tx = dmaengine_prep_slave_sg()
- dmaengine_submit(tx);
- dma_async_issue_pending()
- wait for video completion
- read video data (or not, skipping a frame is also possible)
- dmaengine_submit(tx)
=> here, the descriptors chain recalculation will take time
=> the dma coherent allocation over and over might create holes in
the dma pool, which is counter-productive.
- dma_async_issue_pending()
- etc ...
In order to cope with this case, virt-dma is modified to prevent freeing
the descriptors upon completion if DMA_CTRL_REUSE flag is set in the
transfer.
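From the client's perspective the resulting reuse flow looks roughly like this (a sketch only, with assumed variable names, not code from this patch):
	tx = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_DEV_TO_MEM,
				     DMA_PREP_INTERRUPT | DMA_CTRL_REUSE);
	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	/* ... wait for completion, consume the frame ... */
	cookie = dmaengine_submit(tx);	/* reuse: no descriptor chain rebuild */
	dma_async_issue_pending(chan);
	/* ... */
	dmaengine_desc_free(tx);	/* release once the chain is no longer needed */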
This patch is a respin of the former DMA_CTRL_ACK approach, which was
reverted due to a regression in audio drivers.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Implement the new device_synchronize() callback to allow proper
synchronization when stopping a channel. Since the driver already makes
sure that no new complete callbacks are scheduled after the
device_terminate_all() callback has been called, all left to do in the
device_synchronize() callback is to wait for all currently running complete
callbacks to finish.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a synchronize helper function for the virt-dma library. The function
makes sure that any scheduled descriptor complete callbacks have finished
running before the function returns.
This needs to be called by drivers using virt-dma in their
device_synchronize() callback. Depending on the driver additional
operations might be necessary in addition to calling vchan_synchronize() to
ensure proper synchronization.
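A minimal sketch of such a callback for a virt-dma based driver (the "foo" names are placeholders):
	static void foo_dma_synchronize(struct dma_chan *chan)
	{
		struct foo_chan *fc = to_foo_chan(chan);

		vchan_synchronize(&fc->vc);
	}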
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMAengine API has a long standing race condition that is inherent to
the API itself. Calling dmaengine_terminate_all() is supposed to stop and
abort any pending or active transfers that have previously been submitted.
Unfortunately it is possible that this operation races against a currently
running (or with some drivers also scheduled) completion callback.
Since the API allows dmaengine_terminate_all() to be called from atomic
context as well as from within a completion callback it is not possible to
synchronize to the execution of the completion callback from within
dmaengine_terminate_all() itself.
This means that a user of the DMAengine API does not know when it is safe
to free resources used in the completion callback, which can result in a
use-after-free race condition.
This patch addresses the issue by introducing an explicit synchronization
primitive to the DMAengine API called dmaengine_synchronize().
The existing dmaengine_terminate_all() is deprecated in favor of
dmaengine_terminate_sync() and dmaengine_terminate_async(). The former
aborts all pending and active transfers and synchronizes to the current
context, meaning it will wait until all running completion callbacks have
finished. This means it is only possible to call this function from
non-atomic context. The latter function does not synchronize, but can still
be used in atomic context or from within a complete callback. It has to be
followed up by dmaengine_synchronize() before a client can free the
resources used in a completion callback.
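A usage sketch from the client's point of view (chan is an assumed, already requested channel):
	/* atomic context, e.g. inside a completion callback */
	dmaengine_terminate_async(chan);
	...
	/* later, from sleepable context, before freeing anything the
	 * completion callback may still touch */
	dmaengine_synchronize(chan);

	/* or, when already in non-atomic context, the combined form */
	dmaengine_terminate_sync(chan);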
In addition to this the semantics of the device_terminate_all() callback
are slightly relaxed by this patch. It is now OK for a driver to only
schedule the termination of the active transfer, but does not necessarily
have to wait until the DMA controller has completely stopped. The driver
must ensure though that the controller has stopped and no longer accesses
any memory when the device_synchronize() callback returns.
This was in part done since most drivers do not pay attention to this
anyway at the moment and to emphasize that this needs to be done when the
device_synchronize() callback is implemented. But it also helps with
implementing support for devices where stopping the controller can require
operations that may sleep.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This time we have a very typical update which is mostly fixes and updates to
drivers and no new drivers.
- Biggest change is coming from Peter for edma cleanup which even caused
some last minute regression, things seem settled now
- idma64 and dw updates
- iotdma updates
- module autoload fixes for various drivers
- scatter gather support for hdmac
Merge tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have a very typical update which is mostly fixes and
updates to drivers and no new drivers.
- the biggest change is coming from Peter for edma cleanup which even
caused some last minute regression, things seem settled now
- idma64 and dw updates
- iotdma updates
- module autoload fixes for various drivers
- scatter gather support for hdmac"
* tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (77 commits)
dmaengine: edma: Add dummy driver skeleton for edma3-tptc
Revert "ARM: DTS: am33xx: Use the new DT bindings for the eDMA3"
Revert "ARM: DTS: am437x: Use the new DT bindings for the eDMA3"
dmaengine: dw: some Intel devices has no memcpy support
dmaengine: dw: platform: provide platform data for Intel
dmaengine: dw: don't override platform data with autocfg
dmaengine: hdmac: Add scatter-gathered memset support
dmaengine: hdmac: factorise memset descriptor allocation
dmaengine: virt-dma: Fix kernel-doc annotations
ARM: DTS: am437x: Use the new DT bindings for the eDMA3
ARM: DTS: am33xx: Use the new DT bindings for the eDMA3
dmaengine: edma: New device tree binding
dmaengine: Kconfig: edma: Select TI_DMA_CROSSBAR in case of ARCH_OMAP
dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx
dmaengine: edma: Merge the of parsing functions
dmaengine: edma: Do not allocate memory for edma_rsv_info in case of DT boot
dmaengine: edma: Refactor the dma device and channel struct initialization
dmaengine: edma: Get qDMA channel information from HW also
dmaengine: edma: Merge map_dmach_to_queue into assign_channel_eventq
dmaengine: edma: Correct PaRAM access function names (_parm_ to _param_)
...
Here is the big char/misc driver update for 4.4-rc1. Lots of different
driver and subsystem updates, hwtracing being the largest with the
addition of some new platforms that are now supported. Full details in
the shortlog.
All of these have been in linux-next for a long time with no reported issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'char-misc-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc driver updates from Greg KH:
"Here is the big char/misc driver update for 4.4-rc1. Lots of
different driver and subsystem updates, hwtracing being the largest
with the addition of some new platforms that are now supported. Full
details in the shortlog.
All of these have been in linux-next for a long time with no reported
issues"
* tag 'char-misc-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (181 commits)
fpga: socfpga: Fix check of return value of devm_request_irq
lkdtm: fix ACCESS_USERSPACE test
mcb: Destroy IDA on module unload
mcb: Do not return zero on error path in mcb_pci_probe()
mei: bus: set the device name before running fixup
mei: bus: use correct lock ordering
mei: Fix debugfs filename in error output
char: ipmi: ipmi_ssif: Replace timeval with timespec64
fpga: zynq-fpga: Fix issue with drvdata being overwritten.
fpga manager: remove unnecessary null pointer checks
fpga manager: ensure lifetime with of_fpga_mgr_get
fpga: zynq-fpga: Change fw format to handle bin instead of bit.
fpga: zynq-fpga: Fix unbalanced clock handling
misc: sram: partition base address belongs to __iomem space
coresight: etm3x: adding documentation for sysFS's cpu interface
vme: 8-bit status/id takes 256 values, not 255
fpga manager: Adding FPGA Manager support for Xilinx Zynq 7000
ARM: zynq: dt: Updated devicetree for Zynq 7000 platform.
ARM: dt: fpga: Added binding docs for Xilinx Zynq FPGA manager.
ver_linux: proc/modules, limit text processing to 'sed'
...
Here is the big tty and serial driver update for 4.4-rc1.
Lots of serial driver updates and a few small tty core changes. Full
details in the shortlog.
All of these have been in linux-next for a while.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'tty-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty/serial driver updates from Greg KH:
"Here is the big tty and serial driver update for 4.4-rc1.
Lots of serial driver updates and a few small tty core changes. Full
details in the shortlog.
All of these have been in linux-next for a while"
* tag 'tty-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (148 commits)
tty: Use unbound workqueue for all input workers
tty: Abstract tty buffer work
tty: Prevent tty teardown during tty_write_message()
tty: core: Use correct spinlock flavor in tiocspgrp()
tty: Combine SIGTTOU/SIGTTIN handling
serial: amba-pl011: fix incorrect integer size in pl011_fifo_to_tty()
ttyFDC: Fix build problems due to use of module_{init,exit}
tty: remove unneeded return statement
serial: 8250_mid: add support for DMA engine handling from UART MMIO
dmaengine: hsu: remove platform data
dmaengine: hsu: introduce stubs for the exported functions
dmaengine: hsu: make the UART driver in control of selecting this driver
serial: fix mctrl helper functions
serial: 8250_pci: Intel MID UART support to its own driver
serial: fsl_lpuart: add earlycon support
tty: disable unbind for old 74xx based serial/mpsc console port
serial: pl011: Spelling s/clocks-names/clock-names/
n_tty: Remove reader wakeups for TTY_BREAK/TTY_PARITY chars
tty: synclink, fix indentation
serial: at91, fix rs485 properties
...
The eDMA3 TPTC does not need any software configuration, but it is a separate IP block in the SoC. In order for the omap hwmod core to be able to handle the TPTC resources correctly with regard to PM, we need to have a driver loaded for it.
This patch adds a dummy driver skeleton without probe or remove callbacks.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Provide a flag to choose whether the device supports memory-to-memory transfers. At least this is not true for the iDMA32 controller that might be supported in the future. Besides that, Intel BayTrail and Braswell users should not try this feature due to HW-specific behaviour.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Provide platform data explicitly for Intel SoCs where dw_dmac is enumerated by
ACPI.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Let the probe driver decide whether it wants to auto-configure the controller or use explicitly defined properties.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Just like plain memset support, the HDMAC might be used to do a memset over a discontiguous memory area. In such a case, we'll just build up a chain of memset descriptors over the contiguous chunks of memory to set, in order to support this.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The memset and scatter gathered memset are going to use some common logic
to create their descriptors.
Move that logic into a function of its own so that we can share it with the
future memset_sg callback.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In kernel-doc annotations parameters need to start with a @ for them to be
properly recognized. Add those where missing for virt-dma.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With the old binding and driver architecture we had many issues:
No way to assign eDMA channels to event queues, and thus no way to tune the system by moving specific DMA channels to low/high priority servicing. We moved the cyclic channels to high priority within the code, but that was just a workaround for this issue.
Memcpy was fundamentally broken: even if the driver scanned the DT/devices in the booted system for direct DMA users (which is not effective when the events are going through a crossbar) and created a map of 'used' channels, this information was not really usable. Since via the dmaengine API the eDMA driver will be called with _some_ channel number, we would try to request this channel when any channel is requested for memcpy. By luck we got a channel not used by any device most of the time, so things worked, but if a device had been using the given channel without having requested it, the memcpy channel would have been waiting for a HW event.
The old code had the am33xx/am43xx DMA event router handling embedded. This should have been done in a separate driver since it is not part of the actual eDMA IP.
There was no way to 'lock' PaRAM slots to be used by the DSP, for example, when booting with DT.
In DT boot the edma node used more than one hwmod, which is not a good practice, and the kernel prints a warning because of this.
With the new bindings and the changes in the driver we can:
- No regression with Legacy binding and non DT boot
- DMA channels can be assigned to any TC (to set priority)
- PaRAM slots can be reserved for other cores to use
- Dynamic power management for CC and TCs, if only TC0 is used all other TC
can be powered down for example
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since the crossbar is needed for eDMA when it is used on OMAP like
platforms (am335x/am437x and later DRA7xx), select the crossbar to be built
if ARCH_OMAP is set.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMA event crossbar on AM33xx/AM43xx is different from the one found in the DRA7x family.
Instead of a single event crossbar it has 64 identical muxes, one attached to each eDMA event line. When 0 is selected in a mux, the default mapped event is routed to the corresponding eDMA event line. If a different input is selected, then the selected event is routed to the given eDMA event line.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of nesting functions just merge them since the resulting function
is still small and readable.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The channel/slot reservation is not supported when booted with DT, so there is no need to allocate memory for it.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Move all the dma device and eDMA channel related setup code into one function so it is not scattered around the driver.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Query the number of qDMA channels from CCCFG register.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
edma_assign_channel_eventq() is a wrapper around edma_map_dmach_to_queue(). We can merge the content of the latter so that we have only one function to be used for mapping channels to a given event queue.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
These inline functions are designed to modify parts of the PaRAM in eDMA.
Change the names accordingly.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of passing a pointer to struct edma_cc and the channel number,
pass only the pointer to the edma_chan structure for the given channel.
This struct contains all the information needed by the functions and the
use of this makes it obvious that most of the sanity checks can be removed
from the driver.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the transfer is shorter than 64K we can complete it with one ACNT burst by configuring ACNT to the length of the copy; this requires one paRAM slot. Otherwise we use two paRAM slots for the copy:
slot1: will copy (length / 32767) number of 32767 byte long blocks
slot2: will be configured to copy the remaining data.
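For example (illustrative numbers only): a 100000 byte copy would use slot1 for 3 blocks of 32767 bytes (ACNT = 32767, BCNT = 3) and slot2 for the remaining 1699 bytes.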
According to tests this patch increases the throughput of memcpy from
~3MB/s to 15MB/s
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Despite the claim by the original commit adding the memcpy support, eDMA does not have a constraint on the alignment of src, dst or length in increment mode.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are no platforms where it's not possible to calculate the number of channels based on the IO space length, and since that is the only purpose of struct hsu_dma_platform_data, remove it.
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
HSU (High Speed UART) DMA engine, like the name suggests, is an integrated DMA engine for UART and UART alone. Therefore, make the UART drivers responsible for selecting it and remove the user-selectable option for it. The UARTs with this DMA engine can always select HSU_DMA when the SERIAL_8250_DMA option is enabled.
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The DMA engine supports memory copy, RAID5 XOR, RAID6 PQ, and other
computations. But the bandwidth of the entire DMA engine is shared
among all channels. This patch re-configures the available operations such that one can achieve maximum performance for XOR and PQ computation by removing the memory offload operations.
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the eDMA3 has support for channel paRAM slot mapping we can utilize it
to allocate slots on demand and save precious slots for real transfers.
On am335x the eDMA has 64 channels, which means we can unlock 64 paRAM slots out of the available 256.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The names chosen for the bitfields were quite confusing and gave no real information on what they are used for:
edma_inuse -> slot_inuse: tracks the slot usage/availability
edma_unused -> channel_unused: tracks the channel usage/availability
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of directly reading it from the CCCFG register, take the information out once when we set up the configuration from the HW.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
No need to run through the bits in the QEMR and CCERR events since they will not trigger any action, so just clearing the errors there is fine.
In the case of a missed event the loop can be optimized so we spend less time handling the event.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the ccerr interrupt handler the code checks for pending errors in the
error status registers in two different places.
Move the check out to a helper function.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With the merger of the arch/arm/common/edma.c code into the dmaengine
driver, there is no longer need to have per channel callback/data storage
for interrupt events.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove or rewrite the comments for the internal functions.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Print a warning message in case of linking between paRAM slots in different eDMA controllers.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
edma_write_slot() is for writing an entire paRAM slot.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We have access to dev, so it is better to use the dev_dbg for debug prints.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Be consistent and do not mix the use of dev, &pdev->dev, etc in the
functions.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When allocating memory for a number of items it is better (and looks better) to use devm_kcalloc().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of using defines to specify the size of different arrays and
bitmaps, allocate the memory for them based on the information we get from
the HW itself.
Since these defines are set based on the worst case, there are devices
where they are not valid.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Move the code out from arch/arm/common and merge it into the dmaengine driver.
This change is done with as minimal (if any) functional change to the code as possible to avoid introducing regressions.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since the driver stack no longer depends on lookup by id number in a global array of pointers, the limitation on the number of eDMAs is no longer needed. We can handle as many eDMAs in legacy and DT boot as we have memory for allocating the needed structures.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of relying on indexes pointing to edma private data in the global pointer array, pass the private data pointer via the public API.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the of_dma_controller is registered in the non-dmaengine driver we could have a race condition: the of_dma_controller has been registered, but the dmaengine driver is not yet probed. Drivers requesting DMA channels during this window will fail since we do not yet have dmaengine drivers registered.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code paths in edma_execute() and edma_callback() can be simplified and made more optimal.
There is no need to call into edma_execute() when the transfer has been finished, for example.
Also the handling of the missed/first or next batch of paRAMs can be done in a more optimal way.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no need to print that the driver has been initialized
or removed, so remove such messages.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since commit d078cd1b41 ("dmaengine: imx-sdma: Add imx6sx platform
support") we get this message on every boot on mx6q:
imx-sdma 20ec000.sdma: no event needs to be remapped
, which is not very helpful.
Move the message to debug level instead.
Cc: Zidan Wang <zidan.wang@freescale.com>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is needed due to the duplicated iommu stuff to help with the merge
and to prevent future issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The L3 throughput can be higher than expected when packed access
is not enabled. The ratio depends on the number of bytes in a
transaction and the EMIF interface width.
The throughput was measured for the following settings/cases:
* Case 1: Burst size of 64 bytes, packed access disabled
* Case 2: Burst size of 64 bytes, packed access enabled
* Case 3: Burst disabled, packed access disabled
Throughput measurements were done during McASP-based audio
playback on the Jacinto6 EVM using the omapconf tool [1]:
$ omapconf trace bw -m sdma_rd
---------------------------------------------------------
Throughput (MB/s)
Audio parameters Case 1 Case 2 Case 3
---------------------------------------------------------
44.1kHz, 16-bits, stereo 1.41 0.18 1.41
44.1kHz, 32-bits, stereo 1.41 0.35 1.41
44.1kHz, 16-bits, 4-chan 2.82 0.35 2.82
44.1kHz, 16-bits, 6-chan 4.23 0.53 4.23
44.1kHz, 16-bits, 8-chan 5.64 0.71 5.64
---------------------------------------------------------
From above measurements, case 2 is the only one that delivers
the expected throughput for the given audio parameters. For
that reason, the packed accesses are now enabled.
It's worth mentioning that packed accesses cannot be enabled
for all addressing modes. In cyclic transfers, it can be
enabled in the source for MEM_TO_DEV and in dest for DEV_TO_MEM,
as they use post-increment mode which supports packed accesses.
Peter Ujfalusi:
From the TRM regarding this:
"NOTE: Except in the constant addressing mode, the source or
destination must be specified as packed for burst transactions
to occur."
So without the packed setting the burst on the MEM side was not enabled, which explains the numbers.
[1] https://github.com/omapconf/omapconf
Signed-off-by: Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The MIC X100 DMA engine has a special status descriptor which writes
an 8 byte value to a destination location. This is used to signal
completion of all DMA descriptors prior to the status descriptor.
This patch adds a new DMA engine API which enables updating a
destination address with an 8 byte immediate data value.
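From a client's point of view, usage looks roughly like this (a sketch with assumed variable names, assuming the client-side wrapper is named dmaengine_prep_dma_imm_data):
	/* write the 8-byte value 'data' to bus address 'dst'; on MIC X100
	 * this is used to signal completion of the prior descriptors */
	tx = dmaengine_prep_dma_imm_data(chan, dst, data, 0);
	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);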
Reviewed-by: Nikhil Rao <nikhil.rao@intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Lawrynowicz, Jacek <jacek.lawrynowicz@intel.com>
Signed-off-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Siva Yerramreddy <yshivakrishna@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This contains fixes spread throughout the drivers.
Also fixes one more instance of privatecnt in dmaengine.
- bunch of pxa_dma fixes for reuse of descriptor issue, residue and no-requestor
- odd fixes in xgene, idma, sun4i and zxdma
- at_xdmac fixes for cleaning descriptor and block addr mode
Merge tag 'dmaengine-fix-4.3-rc4' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
"This contains fixes spread throughout the drivers, and also fixes one
more instance of privatecnt in dmaengine.
Driver fixes summary:
- bunch of pxa_dma fixes for reuse of descriptor issue, residue and
no-requestor
- odd fixes in xgene, idma, sun4i and zxdma
- at_xdmac fixes for cleaning descriptor and block addr mode"
* tag 'dmaengine-fix-4.3-rc4' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: pxa_dma: fix residue corner case
dmaengine: pxa_dma: fix the no-requestor case
dmaengine: zxdma: Fix off-by-one for testing valid pchan request
dmaengine: at_xdmac: clean used descriptor
dmaengine: at_xdmac: change block increment addressing mode
dmaengine: dw: properly read DWC_PARAMS register
dmaengine: xgene-dma: Fix overwritting DMA tx ring
dmaengine: fix balance of privatecnt
dmaengine: sun4i: fix unsafe list iteration
dmaengine: idma64: improve residue estimation
dmaengine: xgene-dma: fix handling xgene_dma_get_ring_size result
dmaengine: pxa_dma: fix initial list move
A very tiny temporal window exists in the residue calculation where:
- upon entering residue calculation, the transfer is ongoing
- when reading the current transfer pointer, it just changed to
the "finisher/linker" descriptor
In this case, the residue returned is the whole transfer length instead
of 0. Fix it.
This appears mostly in one extreme case, where the driver is used by older clients which inquire for the residue in interrupt context, such as the smsc91x ethernet driver, in a tight loop:
interrupt_handler()
dmaengine_submit()
do {
dmaengine_tx_status()
} while (residue > 0 || status != DMA_ERROR)
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A very small number of devices don't use the flow control offered by
requestor lines. In these specific cases, the pxa dma driver should be
aware of that and not try to use a requestor line.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The valid pchan range is 0 ~ d->dma_requests - 1.
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Reviewed-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This platform driver has an OF device ID table, but the OF module alias information is not created, so module autoloading won't work.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Moritz Fischer <moritz.fischer@ettus.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This platform driver has an OF device ID table, but the OF module alias information is not created, so module autoloading won't work.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This platform driver has an OF device ID table, but the OF module alias information is not created, so module autoloading won't work.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This platform driver has an OF device ID table, but the OF module alias information is not created, so module autoloading won't work.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This platform driver has an OF device ID table, but the OF module alias information is not created, so module autoloading won't work.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This platform driver has an OF device ID table, but the OF module alias information is not created, so module autoloading won't work.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In interleaved mode, when numf > 1, we have only one descriptor for the
transfer but this descriptor has to be added to the descs_list. If not,
when doing remove_xfer, the descriptor won't be put back in the
free_descs_list.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When putting back a descriptor to the free descs list, some fields are not set to 0, which can cause bugs if someone uses the descriptor without keeping this in mind.
Descriptors are not put back one by one, so it is easier to clean descriptors when we request them.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Cc: stable@vger.kernel.org #4.2
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The addressing mode we were using was not only incrementing the address at
each microblock, but also at each data boundary, which was severely slowing
the transfer, without any benefit since we were not using the data stride.
Switch to the micro block increment only in order to get back to an
acceptable performance level.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: 6007ccb577 ("dmaengine: xdmac: Add interleaved transfer support")
Cc: stable@vger.kernel.org #4.2
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix the call to memset in this driver
[linux-4.2-next-20150911/drivers/dma/zx296702_dma.c:444]: (warning)
memset() called to fill 0 bytes of 'ds'.
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of hardcoding platform data for dw_dmac, let's use its own autoconfiguration feature. Thus, remove the hardcoded values.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We replace __fls() by __ffs() since we have to find a *minimum* data width that
satisfies both source and destination.
While here, rename dwc_fast_fls() to dwc_fast_ffs(), which is what it really is.
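An illustrative case (addresses and length are made up): with src = 0x1004, dst = 0x2002 and len = 0x40, src | dst | len = 0x3046 and __ffs() returns 1, i.e. 2-byte transfers, the widest width that src, dst and len are all aligned to; __fls() would look at the highest set bit and over-estimate the width.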
Fixes: 4c2d56c574 (dw_dmac: introduce dwc_fast_fls())
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In case we have fewer than the maximum allowed channels (8) and autoconfiguration is
enabled, the DWC_PARAMS read is wrong because it uses different arithmetic from
what is needed for channel priority setup.
Re-do the calculations properly. This now works well on the AVR32 board.
Fixes: fed2574b3c (dw_dmac: introduce software emulation of LLP transfers)
Cc: yitian.bu@tangramtek.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes an overflow issue with the TX ring descriptor. Each
descriptor is 32B in size and an operation requires 2 of these
descriptors.
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_release_channel() decrements the privatecnt counter and almost all dma_get*
functions increment it, with the exception of dma_get_slave_channel().
In most cases this does not cause an issue since normally the channel is not
requested and released, but if a driver requests a DMA channel via
dma_get_slave_channel() and then releases it, privatecnt will be
unbalanced, which prevents, for example, getting a channel for memcpy.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Get pointer to the struct acpi_device by using ACPI_COMPANION() macro. This
is more efficient than using ACPI_HANDLE() and acpi_bus_get_device().
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, sun4i_dma_free_contract iterates over lists and frees memory
as it goes through them, causing reads to recently freed memory to
be performed. Fix this by using the safe version of the iterator, so
freed memory is not referenced at all.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Emilio López <emilio@elopez.com.ar>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In edma_callback, the driver was doing a redundant check for desc, so remove it.
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are already helper functions to do 64-bit I/O on 32-bit machines, thus we
don't need to reinvent the wheel. In our case we can't use readq() / writeq()
even on a 64-bit kernel since there is a hardware limitation (the OCP bus is a
32-bit bus).
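For reference, the generic helpers split such an access into two 32-bit ones; a sketch of the idea (register offset illustrative, and the header providing these helpers has moved between kernel versions):
    #include <linux/io-64-nonatomic-lo-hi.h>

    /* two 32-bit accesses, low word first -- fine on 32-bit kernels and
     * within the 32-bit OCP bus limitation on 64-bit ones */
    u64 llp = lo_hi_readq(regs + OFFSET_LLP);
    lo_hi_writeq(llp, regs + OFFSET_LLP);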
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the documentation, the CH_DRAIN bit enforces single bursts when
the channel is going to be suspended. In case the channel is later resumed, this
causes data to flow in a non-optimal mode until the DMA returns to full burst
mode.
The fix differentiates the pause / resume cycle from pause / terminate and sets
the CH_DRAIN bit accordingly.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes a comment where DesignWare is wrongly mentioned. There is no
functional change.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We use a pattern
x = min_t(u32, <LOG2_CONSTANT>, __ffs(expr));
There is no need to use min_t() since we can replace it by
x = __ffs(expr | <2^LOG2_CONST>);
and moreover guarantee that the argument of __ffs() will not be zero.
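To illustrate the equivalence (constant name hypothetical): if any bit of expr below LOG2_CONST is set, both forms return that bit's index; otherwise the OR'ed-in bit makes __ffs() return LOG2_CONST, matching the min_t() result:
    /* before: two operations, and __ffs(0) is undefined when expr == 0 */
    x = min_t(u32, LOG2_CONST, __ffs(expr));

    /* after: a single __ffs(); the OR both caps the result at LOG2_CONST
     * and guarantees a non-zero argument */
    x = __ffs(expr | (1UL << LOG2_CONST));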
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We replace __fls() by __ffs() since we have to find a *minimum* data width that
satisfies both source and destination.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The residue calculation may provide a wrong estimation when the transfer is
started. There are possible scenarios we have to separate:
1) the transfer is not started yet; the residue is equal to the total
length;
2) the transfer has just started (the first chunk is ongoing); the residue is
equal to the total length minus the already transferred bytes;
3) the transfer is ongoing and we have already sent a few chunks of data; the
residue is equal to the total length minus the fully transferred chunks
and the already sent bytes.
Mistakenly, the calculation in cases 2) and 3) was done in a similar way, and
the result is equal to minus the bytes that have been transferred, i.e. quite
big since the size_t type can't hold negative values.
Rewrite the calculation algorithm to be one pass and produce a correct result.
Besides the above, in case the user asks for the status of the active DMA
descriptor without pausing the ongoing transfer, the residue will be estimated
based on the register value, though it's still racy. Since the transfer is
active, the value is continuously changing. Here we have to read two registers
at a time. To minimize the error, make those reads close to each other.
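A minimal sketch of the one-pass idea described above (names illustrative, not the exact dw_dmac code; transfer_started and active_child are assumed helpers/state):
    /* start from the full length, then subtract what is already done */
    size_t residue = desc->total_len;               /* case 1 */

    if (transfer_started) {
            struct dw_desc *child;

            /* case 3: chunks completed before the active one */
            list_for_each_entry(child, &desc->tx_list, desc_node) {
                    if (child == active_child)
                            break;
                    residue -= child->len;
            }
            /* cases 2 and 3: bytes already sent from the active chunk,
             * derived from two register reads done back to back */
            residue -= dwc_get_sent(dwc);
    }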
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The function can return a negative value.
The problem has been detected using proposed semantic patch
scripts/coccinelle/tests/assign_signed_to_unsigned.cocci [1].
[1]: http://permalink.gmane.org/gmane.linux.kernel/2046107
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since the commit to have an allocated list of virtual descriptors was
reverted, the pxa_dma driver is broken, as it assumes the descriptor is
placed on the allocated list upon allocation.
Fix the issue in pxa_dma by making an allocated virtual descriptor a
singleton.
Fixes: 8c8fe97b2b ("Revert "dmaengine: virt-dma: don't always free descriptor upon completion"")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current implementation hard codes the two supported channels so that
"tx" is always 0 and "rx" is always 1. This is because there has been no
suitable way in ACPI to name resources.
With _DSD device properties we can finally do this:
Device (SPI1) {
Name (_CRS, ResourceTemplate () {
...
FixedDMA (0x0000, 0x0000, Width32bit)
FixedDMA (0x0001, 0x0001, Width32bit)
})
Name (_DSD, Package () {
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package () {
Package () {"dma-names", Package () {"tx", "rx"}}
},
})
}
The names "tx" and "rx" now provide index of the FixedDMA resource in
question.
Modify acpi_dma_request_slave_chan_by_name() so that it looks for
"dma-names" property first and only then fall back using hardcoded indices.
The DT "dma-names" binding that we reuse for ACPI is documented in
Documentation/devicetree/bindings/dma/dma.txt.
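On the consumer side nothing changes: a driver keeps requesting the channel by name and the ACPI core now resolves "tx"/"rx" through the _DSD package above (a sketch):
    #include <linux/dmaengine.h>

    struct dma_chan *tx = dma_request_slave_channel(dev, "tx");
    if (!tx)
            dev_warn(dev, "no TX DMA channel, falling back to PIO\n");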
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The symbol CONFIG_IDMA64 should rather be CONFIG_INTEL_IDMA64 to conform to
the rest of the Intel dmaengine drivers. This was found after sorting the
entries and trying to place this odd one out.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Adding AER handlers in order to handle any PCIe errors.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The ioatdma needs to be quiesced and block all additional op submissions
during reboots. When NET_DMA was used, this caused issues as ops were still
being sent to ioatdma during reboots even though PCI BME had been turned
off. Even though NET_DMA has been deprecated, we need to prevent similar
situations. The shutdown handler should address that.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We should use the shiny new dma_pool_zalloc() instead of
dma_pool_alloc()/memset().
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Most interrupt flow handlers do not use the irq argument. Those few
which use it can retrieve the irq number from the irq descriptor.
Remove the argument.
Search and replace was done with coccinelle and some extra helper
scripts around it. Thanks to Julia for her help!
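For the handful of handlers that do need the number, the conversion looks roughly like this (handler name hypothetical):
    /* before */
    static void foo_demux_handler(unsigned int irq, struct irq_desc *desc)
    {
            pr_debug("demuxing irq %u\n", irq);
    }

    /* after: the number, when needed, comes from the descriptor itself */
    static void foo_demux_handler(struct irq_desc *desc)
    {
            unsigned int irq = irq_desc_get_irq(desc);

            pr_debug("demuxing irq %u\n", irq);
    }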
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Merge tag 'dmaengine-4.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have aded a new capability for scatter-gathered memset
using dmaengine APIs. This is supported in xdmac & hdmac drivers
We have added support for reusing descriptors for examples like video
buffers etc. Driver will follow
The behaviour of descriptor ack has been clarified and documented
New devices added are:
- dma controller in sun[457]i SoCs
- lpc18xx dmamux
- ZTE ZX296702 dma controller
- Analog Devices AXI-DMAC DMA controller
- eDMA support for dma-crossbar
- imx6sx support in imx-sdma driver
- imx-sdma device to device support
Other:
- jz4780 fixes
- ioatdma: large refactor and cleanup removing the deprecated ioat v1 and
v2 support, plus fixes
- ACPI support in X-Gene DMA engine driver
- ipu irq fixes
- mvxor fixes
- minor fixes spread thru drivers"
[ The Kconfig and Makefile entries got re-sorted alphabetically, and I
handled the conflict with the new Intel integrated IDMA driver by
slightly mis-sorting it on purpose: "IDMA64" got sorted after "IMX" in
order to keep the Intel entries together. I think it might be a good
idea to just rename the IDMA64 config entry to INTEL_IDMA64 to make
the sorting be a true sort, not this mishmash.
Also, this merge disables the COMPILE_TEST for the sun4i DMA
controller, because it does not compile cleanly at all. - Linus ]
* tag 'dmaengine-4.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (89 commits)
dmaengine: ioatdma: add Broadwell EP ioatdma PCI dev IDs
dmaengine :ipu: change ipu_irq_handler() to remove compile warning
dmaengine: ioatdma: Fix variable array length
dmaengine: ioatdma: fix sparse "error" with prep lock
dmaengine: hdmac: Add memset capabilities
dmaengine: sort the sh Makefile
dmaengine: sort the sh Kconfig
dmaengine: sort the dw Kconfig
dmaengine: sort the Kconfig
dmaengine: sort the makefile
drivers/dma: make mv_xor.c driver explicitly non-modular
dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller
devicetree: Add bindings documentation for Analog Devices AXI-DMAC
dmaengine: xgene-dma: Fix the lock to allow client for further submission of requests
dmaengine: ioatdma: fix coccinelle warning
dmaengine: ioatdma: fix zero day warning on incompatible pointer type
dmaengine: tegra-apb: Simplify locking for device using global pause
dmaengine: tegra-apb: Remove unnecessary return statements and variables
dmaengine: tegra-apb: Avoid unnecessary channel base address calculation
dmaengine: tegra-apb: Remove unused variables
...
Merge tag 'pm+acpi-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
"From the number of commits perspective, the biggest items are ACPICA
and cpufreq changes with the latter taking the lead (over 50 commits).
On the cpufreq front, there are many cleanups and minor fixes in the
core and governors, driver updates etc. We also have a new cpufreq
driver for Mediatek MT8173 chips.
ACPICA mostly updates its debug infrastructure and adds a number of
fixes and cleanups for a good measure.
The Operating Performance Points (OPP) framework is updated with new
DT bindings and support for them among other things.
We have a few updates of the generic power domains framework and a
reorganization of the ACPI device enumeration code and bus type
operations.
And a lot of fixes and cleanups all over.
Included is one branch from the MFD tree as it contains some
PM-related driver core and ACPI PM changes that a few other commits are
based on.
Specifics:
- ACPICA update to upstream revision 20150818 including method
tracing extensions to allow more in-depth AML debugging in the
kernel and a number of assorted fixes and cleanups (Bob Moore, Lv
Zheng, Markus Elfring).
- ACPI sysfs code updates and a documentation update related to AML
method tracing (Lv Zheng).
- ACPI EC driver fix related to serialized evaluations of _Qxx
methods and ACPI tools updates allowing the EC userspace tool to be
built from the kernel source (Lv Zheng).
- ACPI processor driver updates preparing it for future introduction
of CPPC support and ACPI PCC mailbox driver updates (Ashwin
Chaugule).
- ACPI interrupts enumeration fix for a regression related to the
handling of IRQ attribute conflicts between MADT and the ACPI
namespace (Jiang Liu).
- Fixes related to ACPI device PM (Mika Westerberg, Srinidhi
Kasagar).
- ACPI device registration code reorganization to separate the
sysfs-related code and bus type operations from the rest (Rafael J
Wysocki).
- Assorted cleanups in the ACPI core (Jarkko Nikula, Mathias Krause,
Andy Shevchenko, Rafael J Wysocki, Nicolas Iooss).
- ACPI cpufreq driver and ia64 cpufreq driver fixes and cleanups (Pan
Xinhui, Rafael J Wysocki).
- cpufreq core cleanups on top of the previous changes allowing it to
preserve its sysfs directories over system suspend/resume (Viresh
Kumar, Rafael J Wysocki, Sebastian Andrzej Siewior).
- cpufreq fixes and cleanups related to governors (Viresh Kumar).
- cpufreq updates (core and the cpufreq-dt driver) related to the
turbo/boost mode support (Viresh Kumar, Bartlomiej Zolnierkiewicz).
- New DT bindings for Operating Performance Points (OPP), support for
them in the OPP framework and in the cpufreq-dt driver plus related
OPP framework fixes and cleanups (Viresh Kumar).
- cpufreq powernv driver updates (Shilpasri G Bhat).
- New cpufreq driver for Mediatek MT8173 (Pi-Cheng Chen).
- Assorted cpufreq driver (speedstep-lib, sfi, integrator) cleanups
and fixes (Abhilash Jindal, Andrzej Hajda, Cristian Ardelean).
- intel_pstate driver updates including Skylake-S support, support
for enabling HW P-states per CPU and an additional vendor bypass
list entry (Kristen Carlson Accardi, Chen Yu, Ethan Zhao).
- cpuidle core fixes related to the handling of coupled idle states
(Xunlei Pang).
- intel_idle driver updates including Skylake Client support and
support for freeze-mode-specific idle states (Len Brown).
- Driver core updates related to power management (Andy Shevchenko,
Rafael J Wysocki).
- Generic power domains framework fixes and cleanups (Jon Hunter,
Geert Uytterhoeven, Rajendra Nayak, Ulf Hansson).
- Device PM QoS framework update to allow the latency tolerance
setting to be exposed to user space via sysfs (Mika Westerberg).
- devfreq support for PPMUv2 in Exynos5433 and a fix for an incorrect
exynos-ppmu DT binding (Chanwoo Choi, Javier Martinez Canillas).
- System sleep support updates (Alan Stern, Len Brown, SungEun Kim).
- rockchip-io AVS support updates (Heiko Stuebner).
- PM core clocks support fixup (Colin Ian King).
- Power capping RAPL driver update including support for Skylake H/S
and Broadwell-H (Radivoje Jovanovic, Seiichi Ikarashi).
- Generic device properties framework fixes related to the handling
of static (driver-provided) property sets (Andy Shevchenko).
- turbostat and cpupower updates (Len Brown, Shilpasri G Bhat,
Shreyas B Prabhu)"
* tag 'pm+acpi-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (180 commits)
cpufreq: speedstep-lib: Use monotonic clock
cpufreq: powernv: Increase the verbosity of OCC console messages
cpufreq: sfi: use kmemdup rather than duplicating its implementation
cpufreq: drop !cpufreq_driver check from cpufreq_parse_governor()
cpufreq: rename cpufreq_real_policy as cpufreq_user_policy
cpufreq: remove redundant 'policy' field from user_policy
cpufreq: remove redundant 'governor' field from user_policy
cpufreq: update user_policy.* on success
cpufreq: use memcpy() to copy policy
cpufreq: remove redundant CPUFREQ_INCOMPATIBLE notifier event
cpufreq: mediatek: Add MT8173 cpufreq driver
dt-bindings: mediatek: Add MT8173 CPU DVFS clock bindings
PM / Domains: Fix typo in description of genpd_dev_pm_detach()
PM / Domains: Remove unusable governor dummies
PM / Domains: Make pm_genpd_init() available to modules
PM / domains: Align column headers and data in pm_genpd_summary output
powercap / RAPL: disable the 2nd power limit properly
tools: cpupower: Fix error when running cpupower monitor
PM / OPP: Drop unlikely before IS_ERR(_OR_NULL)
PM / OPP: Fix static checker warning (broken 64bit big endian systems)
...
* acpi-pm:
ACPI / bus: Move duplicate code to a separate new function
mfd: Add support for Intel Sunrisepoint LPSS devices
dmaengine: add a driver for Intel integrated DMA 64-bit
mfd: make mfd_remove_devices() iterate in reverse order
driver core: implement device_for_each_child_reverse()
klist: implement klist_prev()
Driver core: wakeup the parent device before trying probe
ACPI / PM: Attach ACPI power domain only once
PM / QoS: Make it possible to expose device latency tolerance to userspace
ACPI / PM: Update the copyright notice and description of power.c
Adding the Broadwell Xeon ioatdma PCI device IDs and
related bits. This is still IOATDMA 3.2 based hw.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Change ipu_irq_handler() to avoid gcc warning:
drivers/dma/ipu/ipu_irq.c:305:4: warning: 'irq' may be used
uninitialized in this function [-Wmaybe-uninitialized]
generic_handle_irq(irq);
Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Sparse reported:
drivers/dma/ioat/prep.c:637:27: sparse: Variable length array is used.
Assign a static size for the array instead.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The prep lock gets acquired in ioat_check_space_lock and released in
ioat_tx_submit_unlock. Setting the annotations so sparse does not freak out.
drivers/dma/ioat/dma.c:273:30: sparse: context imbalance in 'ioat_tx_submit_unlock' - unexpected unlock
drivers/dma/ioat/dma.c:476:5: sparse: context imbalance in 'ioat_check_space_lock' - wrong count at exit
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Just like for the XDMAC, the SoCs that embed the HDMAC don't have any kind
of GPU, and need to accelerate a few framebuffer-related operations through
their DMA controller.
However, unlike the XDMAC, the HDMAC doesn't have the memset capability
built-in. That can be easily emulated, though, by doing a transfer whose source
is a fixed address pointing at the variable that holds the value we want to set.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dmaengine Kconfig grew over the years, unfortunately without any
order to it. So order by core, driver and client sections, and
sort these sections alphabetically
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dmaengine makefile grew over the years, unfortunately without any
order to it. So order by core, dmatest and driver sections and
sort these sections alphabetically
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Kconfig for this driver is currently:
config MV_XOR
bool "Marvell XOR engine support"
...meaning that it currently is not being built as a module by anyone.
Let's remove the modular code that is essentially orphaned, so that
when reading the driver there is no doubt it is builtin-only.
Since module_init translates to device_initcall in the non-modular
case, the init ordering remains unchanged with this commit.
We leave some tags like MODULE_AUTHOR for documentation purposes.
Also note that MODULE_DEVICE_TABLE is a no-op for non-modular code.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for the Analog Devices AXI-DMAC DMA controller. This controller
is a soft peripheral that can be instantiated in a FPGA and is often used
in Analog Devices' reference designs for FPGA platforms.
The peripheral has various configuration options that can be selected at
synthesis time and influence the supported features of the instantiated
peripheral, those options are represented as device-tree properties to
allow the driver to behave accordingly.
The peripheral has a zero latency architecture, which means it is possible
to switch from one descriptor to the next without any delay. This is
achieved by having an internal queue which can hold multiple descriptors.
The driver supports this, which means it will submit new descriptors
directly to the hardware until the queue is full and not wait for a
descriptor to complete before the next one is submitted. Interrupts are
used for the descriptor queue flow control.
Currently the driver supports SG, cyclic and interleaved slave DMA.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch provides a fix in the cleanup routine such that the client can perform
further submissions, by releasing the lock before calling the client's callback function.
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Simplify the end return. This existed in the original code but was
flagged when refactoring of the code made it appear to be new.
coccinelle warnings: (new ones prefixed by >>)
>> drivers/dma/ioat/init.c:1018:1-3: WARNING: end returns can be simpified
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The 32-bit build is creating this warning. Since we don't expect anyone
to actually use this on 32-bit, restrict ioatdma to be built only on x86_64.
This issue has long existed and the only reason it's surfacing now is due to
code refactoring.
drivers/dma/ioat/dma.c: In function 'ioat_timer_event':
>> drivers/dma/ioat/dma.c:870:39: warning: passing argument 2 of 'ioat_cleanup_preamble' from incompatible pointer type
if (ioat_cleanup_preamble(ioat_chan, &phys_complete))
^
drivers/dma/ioat/dma.c:577:13: note: expected 'u64 *' but argument is of type 'dma_addr_t *'
static bool ioat_cleanup_preamble(struct ioatdma_chan *ioat_chan,
^
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Sparse reports the following with regard to locking in the
tegra_dma_global_pause() and tegra_dma_global_resume() functions:
drivers/dma/tegra20-apb-dma.c:362:9: warning: context imbalance in
'tegra_dma_global_pause' - wrong count at exit
drivers/dma/tegra20-apb-dma.c:366:13: warning: context imbalance in
'tegra_dma_global_resume' - unexpected unlock
The warning is caused because tegra_dma_global_pause() acquires a lock
but does not release it. However, the lock is released by
tegra_dma_global_resume(). These pause/resume functions are called in
pairs and so it does appear to work.
This global pause is used on early tegra devices that do not have an
individual pause for each channel. The lock appears to be used to ensure
that multiple channels do not attempt to assert/de-assert the global pause
at the same time which could cause the DMA controller to be in the wrong
paused state. Rather than locking around the entire code between the pause
and resume, employ a simple counter to keep track of the global pause
requests. By using a counter, it is only necessary to hold the lock when
pausing and unpausing the DMA controller and hence, fixes the sparse
warning.
Please note that for devices that support individual channel pausing, the
DMA controller lock is not held between pausing and unpausing the channel.
Hence, this change will make the devices that use the global pause behave
in the same way, with regard to locking, as those that don't.
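A minimal sketch of the counter approach (lock, field and register names follow the driver's conventions but treat this as an illustration, not the exact diff):
    static void tegra_dma_global_pause(struct tegra_dma_channel *tdc)
    {
            struct tegra_dma *tdma = tdc->tdma;

            spin_lock(&tdma->global_lock);
            if (tdma->global_pause_count++ == 0)
                    tdma_write(tdma, TEGRA_APBDMA_GENERAL, 0);  /* pause all */
            spin_unlock(&tdma->global_lock);
    }

    static void tegra_dma_global_resume(struct tegra_dma_channel *tdc)
    {
            struct tegra_dma *tdma = tdc->tdma;

            spin_lock(&tdma->global_lock);
            if (--tdma->global_pause_count == 0)
                    tdma_write(tdma, TEGRA_APBDMA_GENERAL,
                               TEGRA_APBDMA_GENERAL_ENABLE);    /* resume */
            spin_unlock(&tdma->global_lock);
    }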
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some void functions have unnecessary return statements at the end
(reported by sparse) and so remove these. Also remove the return variables
from functions tegra_dma_prep_slave_sg() and tegra_dma_prep_slave_cyclic()
because the value is not used.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Every time a DMA channel register is accessed, the channel base address
is calculated by adding the DMA base address and the channel register
offset. Avoid this calculation and simply calculate the channel base
address once at probe time for each DMA channel.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The callback and callback_param members of the tegra_dma_sg_req structure
are never used. The dma-engine structure, dma_async_tx_descriptor, defines
the same members and these are the ones used by the driver. Therefore,
remove the unused versions from the tegra_dma_sg_req structure.
The half_done member of tegra_dma_channel structure is configured but
never used and so remove it.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
clk_enable() may fail, so we should check the return value and
propagate it in the case of error.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the DMA engine present on Allwinner A10,
A13, A10S and A20 SoCs. This engine has two kinds of channels: normal
and dedicated. The main difference is in the mode of operation;
while a single normal channel may be operating at any given time,
dedicated channels may operate simultaneously provided there is no
overlap of source or destination.
Hardware documentation can be found in the A10 User Manual (section 12), the A13
User Manual (section 14) and the A20 User Manual (section 1.12).
Signed-off-by: Emilio López <emilio@elopez.com.ar>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Due to how async_tx behaves internally, having more XOR channels than
CPUs is actually hurting performance more than it improves it, because
memcpy requests get scheduled on a different channel than the XOR
requests, but async_tx will still wait for the completion of the
memcpy requests before scheduling the XOR requests.
It is in fact more efficient to have at most one channel per CPU,
which this patch implements by limiting the number of channels per
engine, and the number of engines registered, depending on the number
of available CPUs.
Marvell platforms are currently available in one-CPU, two-CPU and
four-CPU configurations:
- in the configurations with one CPU, only one channel from one
engine is used.
- in the configurations with two CPUs, only one channel from each
engine is used (there are two XOR engines)
- in the configurations with four CPUs, both channels of both engines
are used.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The only reason we had dmacap,* properties is that, back when
DMA_MEMSET was supported, only one out of the two channels per engine
could do a memset operation. But this is something that the driver
already knows anyway, and since then, the DMA_MEMSET support has been
removed.
The driver is already well aware of what each channel supports and the
one to one mapping between Linux specific implementation details (such
as dmacap,interrupt enabling DMA_INTERRUPT) and DT properties is a
good indication that these DT properties are wrong.
Therefore, this commit simply gets rid of these dmacap,* properties,
they are now ignored, and the driver is responsible for knowing the
capabilities of the hardware with regard to the dmaengine subsystem
expectations.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When there is only one burst required do not emit loop instructions to
loop exactly once. Emit just the body of the loop.
Signed-off-by: Michal Suchanek <hramrach@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
set_irq_flags is ARM specific with custom flags which have genirq
equivalents. Convert drivers to use the genirq interfaces directly, so we
can kill off set_irq_flags. The translation of flags is as follows:
IRQF_VALID -> !IRQ_NOREQUEST
IRQF_PROBE -> !IRQ_NOPROBE
IRQF_NOAUTOEN -> IRQ_NOAUTOEN
For IRQs managed by an irqdomain, the irqdomain core code handles clearing
and setting IRQ_NOREQUEST already, so there is no need to do this in
.map() functions and we can simply remove the set_irq_flags calls. Some
users also modify IRQ_NOPROBE and this has been maintained although it
is not clear that is really needed. There appears to be a great deal of
blind copy and paste of this code.
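In practice the conversion is usually a one-for-one swap, for example:
    /* before (ARM-specific) */
    set_irq_flags(irq, IRQF_VALID);

    /* after (generic irq interface): IRQF_VALID means the irq may be
     * requested, i.e. the IRQ_NOREQUEST status flag must be cleared */
    irq_clear_status_flags(irq, IRQ_NOREQUEST);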
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The new Solo X has more requirements for SDMA events. So it adds
an event mux to remap most of the event numbers in the GPR (General Purpose
Register). If we want to use SDMA support for those modules which do
not get the event number by default, we need to configure the GPR first.
Thus this patch adds support for GPR event remapping configuration
to the SDMA driver.
Signed-off-by: Zidan Wang <zidan.wang@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In cyclic mode, the round chaining has been broken by the introduction
of at_xdmac_queue_desc(): AT_XDMAC_MBR_UBC_NDE is set for all descriptors
except for the last one. at_xdmac_queue_desc() has to be called one
more time to chain the last and the first descriptors.
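i.e. something along these lines at the end of the cyclic preparation (variable names illustrative):
    /* close the ring: chain the last descriptor back onto the first so
     * the cyclic transfer keeps looping */
    at_xdmac_queue_desc(chan, prev, first);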
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: 0d0ee751f7 ("dmaengine: xdmac: Rework the chaining logic")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Tasklets may have been scheduled as a result of an earlier interrupt
that could still be running. Kill them before unregistering the
device.
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We must explicitly free the IRQ before the device is unregistered in
case any device interrupt still occurs, so there's no point in using
the managed variations of the IRQ functions. Change to the regular
versions.
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When scanning for a free DMA channel, the filter function should ensure
that the channel is on the controller that it was requested to be on in
the DT.
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the DT requests a specific channel to use, it is not necessary
to scan through all DMA channels in the system. Just return the
requested channel using dma_get_slave_channel().
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are a some signedness bugs such as testing for < 0 on unsigned
return values. Additionally there are some cases where functions which
should return NULL on error actually return a PTR_ERR value which can
result in oopses on error. Fix these issues.
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For some reason the controller does not support 8 byte transfers (but
does support all other powers of 2 up to 128). In this case fall back
to 4 bytes. In addition, fall back to 128 bytes when any larger power
of 2 would be possible within the alignment constraints, as this is
the maximum supported.
It makes no sense to outright reject 8 or >128 bytes just because the
alignment constraints make those the maximum possible size given the
parameters for the transaction. For instance, this can result in a DMA
from/to an 8 byte aligned address failing.
It is perfectly safe to fall back to smaller transfer sizes, the only
consequence is reduced transfer efficiency, which is far better than
not allowing the transfer at all.
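The resulting clamping is tiny (a sketch using the constants from the description above):
    int ord = ffs(transfer_size) - 1;   /* log2 of the candidate size */

    if (ord == 3)           /* 8 bytes unsupported: fall back to 4 bytes */
            ord = 2;
    else if (ord > 7)       /* above 128 bytes: clamp to the 128-byte max */
            ord = 7;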
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Several function prototypes did not match the dmaengine API they were
implementing, resulting in build warnings. Correct these.
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If a DMA interrupt comes and is latched by the IRQ controller during the
execution of dma_terminate_all(), the dma_irq routine will be executed
after the DMA has terminated, and it will cause a kernel panic.
We clear DMA interrupts in dma_terminate_all() to avoid this useless
interrupt.
Signed-off-by: Yanchang Li <Yanchang.Li@csr.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for DMA on NXP LPC18xx/43xx platforms, which have
a multiplexer in front of the PL080 dma request lines.
The mux is a single register in the LPC18xx/43xx CREG block
and can multiplex up to 4 request lines to each of the 16
lines on the PL080.
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This fixes the following error:
drivers/dma/pxa_dma.c: In function ‘dbg_show_requester_chan’:
drivers/dma/pxa_dma.c:192:2: error: void value not ignored as it ought to be
pos += seq_printf(s, "DMA channel %d requester :\n", phy->idx);
^
drivers/dma/pxa_dma.c:197:8: error: void value not ignored as it ought to be
!!(drcmr & DRCMR_MAPVLD));
^
scripts/Makefile.build:258: recipe for target 'drivers/dma/pxa_dma.o' failed
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch increments the privatecnt value and sets DMA_PRIVATE in the device
caps in the dma_request_slave_channel() function. This is needed to keep the
privatecnt increment/decrement balance.
As the dma_release_channel() function decrements the privatecnt counter, we need
to increment it when a channel is requested. Otherwise privatecnt drops
into negative values after a few dma_release_channel() calls.
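A sketch of the resulting helper, based on the description above (not the verbatim diff):
    struct dma_chan *dma_request_slave_channel(struct device *dev,
                                               const char *name)
    {
            struct dma_chan *ch = dma_request_slave_channel_reason(dev, name);

            if (IS_ERR(ch))
                    return NULL;

            /* keep privatecnt balanced with dma_release_channel() */
            dma_cap_set(DMA_PRIVATE, ch->device->cap_mask);
            ch->device->privatecnt++;

            return ch;
    }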
Reported-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Robert Baldyga <r.baldyga@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Don't use the direction passed in the configuration, and rely on each
transfer's direction to prepare the transfers. This will enable
future removal of direction parameter from dma_slave_config.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The IOAT_COMPLETION_PENDING flag was deprecated for the v2 and v3 drivers but was
not cleaned up. Do that now. The commit that deprecated this flag was
4dec23d7 ("ioatdma: fix race between updating ioat->head and
IOAT_COMPLETION_PENDING").
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
./scripts/kernel-doc is reporting errors on dma.h. Clean up all reported
errors.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since we are a "single" device driver now we no longer require the function
pointers in ioatdma_device. Remove.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Moving the relevant functions to their respective .c files and removal of
dma_v3.c file. Also removed various ioat3 references when appropriate.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Move all DMA descriptor prepping functions to prep.c file. Fixup all
broken bits caused by the move.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Moving all the init routines to init.c and fixup anything broken during
the move.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Move and fixup all sysfs related bits to sysfs.c file.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Clean out dma_v2 and remove ioat2 calls since we are moving everything
to just ioat.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Change the variable names for ioatdma_device to be consistently named
ioat_dma instead of device/dma, in order to avoid confusion and to distinguish
it from struct device. This will clearly indicate that it is an
ioatdma_device. This also makes the naming consistent: the dma
device is ioat_dma and all the channels are ioat_chan.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Kill the common ioatdma channel structure and rename everything that is not
dma_chan to ioat_dma_chan. Since we don't have to worry about v1
and v2 ioatdma anymore this makes it much cleaner and obvious for
maintenance.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove support for ioatdma v2 devices.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Clean up ioat1-specific code as it is no longer supported.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove any devices that are ioatdma pre-3.0. This is the first step
in attempting to clean up the ioatdma driver and remove hardware that is
no longer supported.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the allocation order is 16, then the u16 count will overflow and wrap
to zero when assigned the value 1 << 16.
Change the type of 'total_descs' to int, so that it is large enough to
store a value equal or greater than 1 << 16.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the allocation order is 16, then the u16 index will overflow and wrap
to zero instead of being equal or greater than 1 << 16. The loop
condition will always be true, and the loop will run until all the
memory resources are depleted.
Change the type of index 'i' to u32, so that it is large enough to store
a value equal or greater than 1 << 16.
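To illustrate (the allocation helper is hypothetical):
    u16 i;
    u32 j;

    /* with a u16 index the condition is always true once the order is 16,
     * because i wraps back to 0 after 65535 */
    for (i = 0; i < (1u << 16); i++)
            alloc_one_desc(i);

    /* a 32-bit index terminates after exactly 65536 iterations */
    for (j = 0; j < (1u << 16); j++)
            alloc_one_desc(j);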
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The functions ipu_irq_err() and ipu_irq_fn() are identical plus/minus the
comments. Remove one.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The irq argument of most interrupt flow handlers is unused or merely
used instead of a local variable. The handlers which need the irq
argument can retrieve the irq number from the irq descriptor.
Search and update was done with coccinelle and the invaluable help of
Julia Lawall.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The XDMAC also supports memset operations over discontiguous areas. Add the
necessary logic to support this.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMA will not stop when clearing the enable bit until all transactions
are done. The bug is exposed in audio playback because the ring DMA
chain never stops. Force the hardware to stop by setting the FORCE bit.
Signed-off-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Align the src and dst width to fix a data alignment issue, as
a trailing single transaction that does not fill a full
burst requires identical src/dst data widths.
The burst length limitation can be addressed well too.
Signed-off-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Most drivers need to set constraints on the buffer alignment for async tx
operations. However, even though it is documented, some drivers either use
a defined constant that does not match what the alignment variable expects
(like the DMA_BUSWIDTH_* constants) or fill in the alignment in bytes instead of
a power of two.
Add a new enum for these alignments that matches what the framework
expects, and convert the drivers to it.
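The enum expresses the alignment as a power of two, so a driver now writes something like the following instead of a raw byte count or a buswidth constant (a sketch; consult dmaengine.h for the authoritative list of values):
    enum dmaengine_alignment {
            DMAENGINE_ALIGN_1_BYTE = 0,
            DMAENGINE_ALIGN_2_BYTES = 1,
            DMAENGINE_ALIGN_4_BYTES = 2,
            DMAENGINE_ALIGN_8_BYTES = 3,
            DMAENGINE_ALIGN_16_BYTES = 4,
            DMAENGINE_ALIGN_32_BYTES = 5,
            DMAENGINE_ALIGN_64_BYTES = 6,
    };

    /* e.g. a driver requiring 4-byte aligned buffers for memcpy */
    dma_dev->copy_align = DMAENGINE_ALIGN_4_BYTES;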
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
devm_ioremap_resource returns ERR_PTR on failure.
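So the return value must be checked with IS_ERR(), not against NULL; the typical pattern (variable names illustrative):
    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))
            return PTR_ERR(base);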
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds DEV_TO_DEV support for i.MX SDMA driver to support data
transfer between two peripheral FIFOs.
The per_2_per script requires two peripheral addresses and two DMA
requests, and it needs to check whether the src and dst addresses are in the
SPBA bus space or in the AIPS bus space.
Signed-off-by: Shengjiu Wang <shengjiu.wang@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'dmaengine-fix-4.2-rc5' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
"We had a regression due to reuse of descriptor so we have reverted
that.
The rest are driver fixes:
- at_hdmac and at_xdmac for residue, transfer width, and channel config
- pl330 final fix for dma fails and overflow issue
- xgene resource map fix
- mv_xor big endian op fix"
* tag 'dmaengine-fix-4.2-rc5' of git://git.infradead.org/users/vkoul/slave-dma:
Revert "dmaengine: virt-dma: don't always free descriptor upon completion"
dmaengine: mv_xor: fix big endian operation in register mode
dmaengine: xgene-dma: Fix the resource map to handle overlapping
dmaengine: at_xdmac: fix transfer data width in at_xdmac_prep_slave_sg()
dmaengine: at_hdmac: fix residue computation
dmaengine: at_xdmac: fix bug about channel configuration
dmaengine: pl330: Really fix choppy sound because of wrong residue calculation
dmaengine: pl330: Fix overflow when reporting residue in memcpy
This reverts commit b9855f03d5.
The patch breaks an existing DMA use case. For example, an audio SoC
dmaengine never releases the channel, which causes virt-dma to cache too
much memory in descriptors and exhaust system memory.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 6f166312c6 ("dmaengine: mv_xor: add support for a38x command
in descriptor mode") introduced the support for a feature that
appeared in Armada 38x: specifying the operation to be performed on a
per-descriptor basis rather than globally per channel.
However, when doing so, it changed the function mv_chan_set_mode() to
use:
if (IS_ENABLED(__BIG_ENDIAN))
instead of:
#if defined(__BIG_ENDIAN)
While IS_ENABLED() is perfectly fine for CONFIG_* symbols, it is not
for other symbols such as __BIG_ENDIAN that is provided directly by
the compiler. Consequently, the commit broke support for big-endian,
as the XOR_DESCRIPTOR_SWAP flag was not set in the XOR channel
configuration register.
The most visible effect was some nasty warnings and failures
appearing during the self-test of the XOR unit:
[ 1.197368] mv_xor d0060900.xor: error on chan 0. intr cause 0x00000082
[ 1.197393] mv_xor d0060900.xor: config 0x00008440
[ 1.197410] mv_xor d0060900.xor: activation 0x00000000
[ 1.197427] mv_xor d0060900.xor: intr cause 0x00000082
[ 1.197443] mv_xor d0060900.xor: intr mask 0x000003f7
[ 1.197460] mv_xor d0060900.xor: error cause 0x00000000
[ 1.197477] mv_xor d0060900.xor: error addr 0x00000000
[ 1.197491] ------------[ cut here ]------------
[ 1.197513] WARNING: CPU: 0 PID: 1 at ../drivers/dma/mv_xor.c:664 mv_xor_interrupt_handler+0x14c/0x170()
See also:
http://storage.kernelci.org/next/next-20150617/arm-mvebu_v7_defconfig+CONFIG_CPU_BIG_ENDIAN=y/lab-khilman/boot-armada-xp-openblocks-ax3-4.txt
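Two ways to express the check correctly (a sketch; the config variable and XOR_DESCRIPTOR_SWAP flag are the ones referred to above):
    /* __BIG_ENDIAN comes from the compiler, not from Kconfig, so test it
     * with the preprocessor ... */
    #if defined(__BIG_ENDIAN)
            config |= XOR_DESCRIPTOR_SWAP;
    #else
            config &= ~XOR_DESCRIPTOR_SWAP;
    #endif

    /* ... or keep IS_ENABLED(), but feed it a genuine CONFIG_ symbol */
    if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
            config |= XOR_DESCRIPTOR_SWAP;
    else
            config &= ~XOR_DESCRIPTOR_SWAP;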
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Fixes: 6f166312c6 ("dmaengine: mv_xor: add support for a38x command in descriptor mode")
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is an overlap in dma ring cmd csr region due to sharing of ethernet
ring cmd csr region. This patch fixes the resource overlap by mapping
the entire dma ring cmd csr region.
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds the missing update of the transfer data width in
at_xdmac_prep_slave_sg().
Indeed, for each item in the scatter-gather list, we check whether the
transfer length is aligned with the data width provided by
dmaengine_slave_config(). If so, we directly use this data width for the
current part of the transfer we are preparing. Otherwise, the data width
is reduced to 8 bits (1 byte). Of course, the actual number of register
accesses must also be updated to match the new data width.
So one chunk was missing in the original patch (see Fixes tag below): the
number of register accesses was correctly set to (len >> fixed_dwidth) in
mbr_ubc but the real data width was not updated in mbr_cfg. Since mbr_cfg
may change for each part of the scatter-gather transfer this also explains
why the original patch used the Descriptor View 2 instead of the
Descriptor View 1.
Let's take the example of a DMA transfer to write 8bit data into an Atmel
USART with FIFOs. When FIFOs are enabled in the USART, its Transmit
Holding Register (THR) works in multidata mode, that is to say that up to
four 8-bit data can be written into the THR in a single 32-bit access, and it is
still possible to write only one data item with an 8-bit access. To take
advantage of this new feature, the DMA driver was modified to allow
multiple dwidths when doing slave transfers.
For instance, when the total length is 22 bytes, the USART driver splits
the transfer into 2 parts:
First part: 20 bytes transferred through five 32-bit writes into the THR
Second part: 2 bytes transferred through two 8-bit writes into the THR
For the second part, the data width was first set to 4_BYTES by the USART
driver thanks to dmaengine_slave_config() then at_xdmac_prep_slave_sg()
reduces this data width to 1_BYTE because the 2 byte length is not aligned
with the original 4_BYTES data width. Since the data width is modified,
the actual number of writes into THR must be set accordingly.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Fixes: 6d3a7d9e3a ("dmaengine: at_xdmac: allow muliple dwidths when doing slave transfers")
Cc: stable@vger.kernel.org #4.0 and later
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As claimed by the programmer datasheet and confirmed by the IP designer,
the Block Transfer Size (BTSIZE) bitfield of the Channel x Control A
Register (CTRLAx) always refers to a number of Source Width (SRC_WIDTH)
transfers.
Both the SRC_WIDTH and BTSIZE bitfields can be extracted from the CTRLAx
register to compute the DMA residue. So the 'tx_width' field is useless
and can be removed from the struct at_desc.
Before this patch, atc_prep_slave_sg() was not consistent: BTSIZE was
correctly initialized according to the SRC_WIDTH but 'tx_width' was always
set to reg_width, which was incorrect for MEM_TO_DEV transfers. It led to
bad DMA residue when 'tx_width' != SRC_WIDTH.
Also the 'tx_width' field was mostly set only in the first and last
descriptors. Depending on the kind of DMA transfer, this field remained
uninitialized for intermediate descriptors. The accurate DMA residue was
computed only when the currently processed descriptor was the first or the
last of the chain. This algorithm was a little bit odd. An accurate DMA
residue can always be computed using the SRC_WIDTH and BTSIZE bitfields
in the CTRLAx register.
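In other words, the per-descriptor residue can always be derived from CTRLAx alone, roughly like this (a sketch; bitfield positions shown for illustration, per the programmer datasheet):
    u32 ctrla = desc->lli.ctrla;
    u32 btsize = ctrla & ATC_BTSIZE_MAX;    /* remaining SRC_WIDTH beats  */
    u32 src_width = (ctrla >> 24) & 0x3;    /* 0: byte, 1: half, 2: word  */

    size_t bytes = btsize << src_width;     /* residue of this descriptor */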
Finally, the test to check whether the currently processed descriptor is
the last of the chain was wrong: for cyclic transfer, last_desc->lli.dscr
is NOT equal to zero, since set_desc_eol() is never called, but logically
equal to first_desc->txd.phys. This bug has a side effect on the
drivers/tty/serial/atmel_serial.c driver, which uses cyclic DMA transfer
to receive data. Since the DMA residue was wrong each time the DMA
transfer reached the second (and last) period of the transfer, no more
data were received by the USART driver until the cyclic DMA transfer looped
back to the first period.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Acked-by: Torsten Fleischer <torfl6749@gmail.com>
Tested-by: Jirí Prchal <jiri.prchal@aksignal.cz>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When using descriptor view 2 or higher, we don't write the configuration
into the AT_XDMAC_CC register because this configuration will be fetched from
the descriptor. Unfortunately, the PROT bit is not updated with this
method, we have to do it manually before enabling the channel.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Intel integrated DMA (iDMA) 64-bit is a specific IP that is used as a part of
LPSS devices such as HSUART or SPI. The iDMA IP is attached for private
usage on each host controller independently.
While it has similarities with Synopsys DesignWare DMA, the following
distinctions don't allow the existing driver to be reused:
- 64-bit mode with corresponding changes in Hardware Linked List data structure
- many slight differences in the channel registers
Moreover, this driver is based on the DMA virtual channels framework, which
helps to make the driver cleaner and easier to understand.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
The crossbar for eDMA works exactly the same way as sDMA, but sDMA
requires an offset of 1, while no offset is needed for eDMA.
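A minimal, hedged sketch of that rule in plain C (names are illustrative,
not the crossbar driver's own):

  #include <stdio.h>

  enum xbar_client_type { XBAR_CLIENT_SDMA, XBAR_CLIENT_EDMA };

  /* sDMA requests are shifted by one, eDMA requests are not. */
  static unsigned int xbar_request_offset(enum xbar_client_type type)
  {
      return type == XBAR_CLIENT_SDMA ? 1 : 0;
  }

  int main(void)
  {
      printf("sDMA offset: %u, eDMA offset: %u\n",
             xbar_request_offset(XBAR_CLIENT_SDMA),
             xbar_request_offset(XBAR_CLIENT_EDMA));
      return 0;
  }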
Based on the patch from Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
CC: Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Switch to my kernel.org alias instead of a badly named gmail address,
which I rarely use.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
At device removal, tasklets are not disabled and irqs are still enabled, so
free the irq explicitly at that point.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add ZTE ZX296702 dma controller support. Only
device tree probe is supported currently.
Signed-off-by: Jun Nie <jun.nie@linaro.org>
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use irq_desc_get_xxx() to avoid redundant lookup of irq_desc while we
already have a pointer to the corresponding irq_desc.
This is also a preparation for the removal of the 'irq' argument from
interrupt flow handlers.
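A hedged, kernel-style fragment (not compilable on its own) of the pattern
after conversion; foo_dma and the handler name are hypothetical, while
irq_desc_get_handler_data() and irq_desc_get_chip() are the accessors the
change switches to:

  /* Chained flow handler once the irq_desc is already at hand. */
  static void foo_dma_irq_handler(struct irq_desc *desc)
  {
      /* was: irq_get_handler_data(irq) and irq_get_chip(irq) */
      struct foo_dma *dma = irq_desc_get_handler_data(desc);
      struct irq_chip *chip = irq_desc_get_chip(desc);

      chained_irq_enter(chip, desc);
      /* ... demultiplex and handle the channel interrupts via 'dma' ... */
      chained_irq_exit(chip, desc);
  }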
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Chained irq handlers usually set up handler data as well. We now have
a function to set both under irq_desc->lock. Replace the two calls
with one.
Search and conversion was done with coccinelle.
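A hedged, kernel-style before/after fragment of the conversion, with
foo_dma_irq_handler and dma as placeholders:

  /* before: two calls, two trips through the irq descriptor */
  irq_set_chained_handler(irq, foo_dma_irq_handler);
  irq_set_handler_data(irq, dma);

  /* after: handler and data installed together under irq_desc->lock */
  irq_set_chained_handler_and_data(irq, foo_dma_irq_handler, dma);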
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
All hardware accesses are done under the virtual channel lock. That's why the
channel-specific lock is redundant and can be removed safely. This has been tested on
Intel Medfield and Merrifield.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dmaengine core assumes that async DMA devices will only be removed
when they are not used anymore, or it assumes dma_async_device_unregister()
will only be called by dma driver exit routines. But this assumption is
not true for the IOAT driver, which calls dma_async_device_unregister()
from ioat_remove(). So the current IOAT driver doesn't support device
hot-removal, because hot-removing an in-use IOAT device may cause a system
crash.
To support CPU socket hot-removal, all PCI devices, including IOAT
devices embedded in the socket, will be hot-removed. The ideal solution
is to enhance the dmaengine core and IOAT driver to support hot-removal,
but that's too hard.
This patch implements a hack to disable IOAT devices under hotplug-capable
CPU socket so it won't break socket hot-removal.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This allows claiming of non-RAID channels as a private channel. This
prevents breakage of MDRAID using the IOATDMA channels via
async_tx but also allows agents such as NTB to claim channels
exclusively for their own use.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove a static analysis error:
Possible null pointer dereference: xt
Currently xt is dereferenced before the NULL check, so dereference it only
after the check.
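A hedged sketch of the reordering (the condition shown is illustrative, not
the exact driver check):

  /* Test the interleaved template pointer before any use of it. */
  if (!xt)
      return NULL;

  /* Only past this point are xt->numf, xt->sgl, etc. dereferenced. */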
Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Reviewed-by: Vaneet Narang <v.narang@samsung.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
clk_prepare_enable() may fail, so we had better check its return value
and propagate it in the error case.
While at it, change the label 'err' to a more descriptive name.
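A hedged, kernel-style fragment of the resulting pattern; the foo-prefixed
names and the label are placeholders for the driver's own:

  ret = clk_prepare_enable(fooc->clk);
  if (ret)
      return ret;

  ret = foo_register_dma_device(fooc);
  if (ret)
      goto err_clk_unprepare;

  return 0;

  err_clk_unprepare:
      clk_disable_unprepare(fooc->clk);
      return ret;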
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the pl330 driver was used during sound playback, after some time or
after a number of plays the sound became choppy or totally noisy. For
example, on the Odroid XU3 board the first four executions of aplay with a
small WAVE file worked fine, but the fifth was unrecognizable, with errors:
$ aplay /usr/share/sounds/alsa/Front_Right.wava
underrun!!! (at least 0.095 ms long)
The issue was caused by a wrong residue being reported by the pl330 driver
to pcm_dmaengine for its cyclic dma transfers.
The residue-reporting function, pl330_tx_status(), used a "last" flag in
a descriptor to indicate that there is no more data to send.
pl330_tx_submit() iterated over the descriptors, trying to clear this
flag from them and then mark the final descriptor as "last". However, when
iterating it actually cleared the flag not from each descriptor but always
from the last one (and then set it again). Thus, effectively, once some
descriptor was marked as last, it stayed that way forever, causing the
residue to be reported too low.
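The intended behaviour can be sketched in plain C (the descriptor layout and
names are illustrative, not the pl330 structures):

  #include <stdbool.h>
  #include <stdio.h>

  struct desc { bool last; };

  /*
   * Hedged sketch of the submit-time fix: clear the "last" flag on every
   * queued descriptor, then mark only the final one, so the residue walk in
   * pl330_tx_status() stops at the right place.
   */
  static void mark_last_desc(struct desc *descs, int n)
  {
      for (int i = 0; i < n; i++)
          descs[i].last = false;
      descs[n - 1].last = true;
  }

  int main(void)
  {
      /* descriptor 0 was marked "last" by a previous submit */
      struct desc d[3] = { { true }, { false }, { false } };

      mark_last_desc(d, 3);
      for (int i = 0; i < 3; i++)
          printf("desc %d: last=%d\n", i, d[i].last);
      return 0;
  }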
Signed-off-by: Krzysztof Kozlowski <k.kozlowski.k@gmail.com>
Fixes: aee4d1fac8 ("dmaengine: pl330: improve pl330_tx_status() function")
Cc: <stable@vger.kernel.org>
Reported-by: gabriel@unseen.is
Suggested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
During memcpy operations the residue was always set to an overflowed u32
value.
In the pl330_tx_status() function, the number of currently transferred bytes
was subtracted from the internal "bytes_requested" field. However, this
"bytes_requested" was not initialized at the start to the length of the
memcpy buffer, so the transferred bytes were subtracted from 0, causing an
overflow.
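The wrap-around is easy to see in plain C (a hedged illustration; the field
name and the numbers are examples, not the driver's values):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      uint32_t transferred = 64;            /* bytes already moved by the DMA */

      uint32_t bytes_requested_bug = 0;     /* never initialized at prep time */
      uint32_t bytes_requested_fix = 4096;  /* set to the memcpy length at prep time */

      /* 0 - 64 wraps to a huge u32 value; 4096 - 64 is the real residue. */
      printf("buggy residue:   %u\n", (unsigned int)(bytes_requested_bug - transferred));
      printf("correct residue: %u\n", (unsigned int)(bytes_requested_fix - transferred));
      return 0;
  }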
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: <stable@vger.kernel.org>
Fixes: aee4d1fac8 ("dmaengine: pl330: improve pl330_tx_status() function")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 3b62286d0e ("dmaengine: Remove FSF mailing addresses") left Free
Software Foundation mailing address still in two files. Remove it now.
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge third patchbomb from Andrew Morton:
- the rest of MM
- scripts/gdb updates
- ipc/ updates
- lib/ updates
- MAINTAINERS updates
- various other misc things
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (67 commits)
genalloc: rename of_get_named_gen_pool() to of_gen_pool_get()
genalloc: rename dev_get_gen_pool() to gen_pool_get()
x86: opt into HAVE_COPY_THREAD_TLS, for both 32-bit and 64-bit
MAINTAINERS: add zpool
MAINTAINERS: BCACHE: Kent Overstreet has changed email address
MAINTAINERS: move Jens Osterkamp to CREDITS
MAINTAINERS: remove unused nbd.h pattern
MAINTAINERS: update brcm gpio filename pattern
MAINTAINERS: update brcm dts pattern
MAINTAINERS: update sound soc intel patterns
MAINTAINERS: remove website for paride
MAINTAINERS: update Emulex ocrdma email addresses
bcache: use kvfree() in various places
libcxgbi: use kvfree() in cxgbi_free_big_mem()
target: use kvfree() in session alloc and free
IB/ehca: use kvfree() in ipz_queue_{cd}tor()
drm/nouveau/gem: use kvfree() in u_free()
drm: use kvfree() in drm_free_large()
cxgb4: use kvfree() in t4_free_mem()
cxgb3: use kvfree() in cxgb_free_mem()
...
Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull module updates from Rusty Russell:
"Main excitement here is Peter Zijlstra's lockless rbtree optimization
to speed module address lookup. He found some abusers of the module
lock doing that too.
A little bit of parameter work here too; including Dan Streetman's
breaking up the big param mutex so writing a parameter can load
another module (yeah, really). Unfortunately that broke the usual
suspects, !CONFIG_MODULES and !CONFIG_SYSFS, so those fixes were
appended too"
* tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (26 commits)
modules: only use mod->param_lock if CONFIG_MODULES
param: fix module param locks when !CONFIG_SYSFS.
rcu: merge fix for Convert ACCESS_ONCE() to READ_ONCE() and WRITE_ONCE()
module: add per-module param_lock
module: make perm const
params: suppress unused variable error, warn once just in case code changes.
modules: clarify CONFIG_MODULE_COMPRESS help, suggest 'N'.
kernel/module.c: avoid ifdefs for sig_enforce declaration
kernel/workqueue.c: remove ifdefs over wq_power_efficient
kernel/params.c: export param_ops_bool_enable_only
kernel/params.c: generalize bool_enable_only
kernel/module.c: use generic module param operaters for sig_enforce
kernel/params: constify struct kernel_param_ops uses
sysfs: tightened sysfs permission checks
module: Rework module_addr_{min,max}
module: Use __module_address() for module_address_lookup()
module: Make the mod_tree stuff conditional on PERF_EVENTS || TRACING
module: Optimize __module_address() using a latched RB-tree
rbtree: Implement generic latch_tree
seqlock: Introduce raw_read_seqcount_latch()
...
To be consistent with other kernel interface naming, rename
of_get_named_gen_pool() to of_gen_pool_get(). In the original function
name, the "_named" suffix refers to a device tree property which contains
a phandle to a device whose driver is assumed to register a gen_pool
object.
Because that relation is weak, and to avoid any confusion (e.g. in a
possible future scenario where gen_pool objects are named), the suffix is
removed.
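A hedged, kernel-style fragment of what callers change; the "sram" property
name is only an example:

  /* before */
  pool = of_get_named_gen_pool(dev->of_node, "sram", 0);

  /* after */
  pool = of_gen_pool_get(dev->of_node, "sram", 0);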
[sfr@canb.auug.org.au: crypto/marvell/cesa - fix up for of_get_named_gen_pool() rename]
Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Cc: Philipp Zabel <p.zabel@pengutronix.de>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Boris BREZILLON <boris.brezillon@free-electrons.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have support for few new devices, few new features and
odd fixes spread thru the subsystem.
New devices added:
- support for CSRatlas7 dma controller
- Allwinner H3(sun8i) controller
- TI DMA crossbar driver on DRA7x
- new pxa driver
New features added:
- memset support is brought back now that we have a user in xdmac controller
- interleaved transfers support different source and destination strides
- supporting DMA routers and configuration thru DT
- support for reusing descriptors
- xdmac memset and interleaved transfer support
- hdmac support for interleaved transfers
- omap-dma support for memcpy
Others:
- Constify platform_device_id
- mv_xor fixes and improvements"
* tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
dmaengine: xgene: fix file permission
dmaengine: fsl-edma: clear pending interrupts on initialization
dmaengine: xdmac: Add memset support
Documentation: dmaengine: document DMA_CTRL_ACK
dmaengine: virt-dma: don't always free descriptor upon completion
dmaengine: Revert "drivers/dma: remove unused support for MEMSET operations"
dmaengine: hdmac: Implement interleaved transfers
dmaengine: Move icg helpers to global header
dmaengine: mv_xor: improve descriptors list handling and reduce locking
dmaengine: mv_xor: Enlarge descriptor pool size
dmaengine: mv_xor: add support for a38x command in descriptor mode
dmaengine: mv_xor: Rename function for consistent naming
dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
dmaengine: pl330: fix wording in mcbufsz message
dmaengine: sirf: add CSRatlas7 SoC support
dmaengine: xgene-dma: Fix "incorrect type in assignement" warnings
dmaengine: fix kernel-doc documentation
dmaengine: pxa_dma: add support for legacy transition
dmaengine: pxa_dma: add debug information
dmaengine: pxa: add pxa dmaengine driver
...
Merge tag 'tty-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty/serial driver updates from Greg KH:
"Here's the tty and serial driver patches for 4.2-rc1.
A number of individual driver updates, some code cleanups, and other
minor things, full details in the shortlog.
All have been in linux-next for a while with no reported issues"
* tag 'tty-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (152 commits)
Doc: serial-rs485.txt: update RS485 driver interface
Doc: tty.txt: remove mention of the BKL
MAINTAINERS: tty: add serial docs directory
serial: sprd: check for NULL after calling devm_clk_get
serial: 8250_pci: Correct uartclk for xr17v35x expansion chips
serial: 8250_pci: Add support for 12 port Exar boards
serial: 8250_uniphier: add bindings document for UniPhier UART
serial: core: cleanup in uart_get_baud_rate()
serial: stm32-usart: Add STM32 USART Driver
tty/serial: kill off set_irq_flags usage
tty: move linux/gsmmux.h to uapi
doc: dt: add documentation for nxp,lpc1850-uart
serial: 8250: add LPC18xx/43xx UART driver
serial: 8250_uniphier: add UniPhier serial driver
serial: 8250_dw: support ACPI platforms with integrated DMA engine
serial: of_serial: check the return value of clk_prepare_enable()
serial: of_serial: use devm_clk_get() instead of clk_get()
serial: earlycon: Add support for big-endian MMIO accesses
serial: sirf: use hrtimer for data rx
serial: sirf: correct the fifo empty_bit
...
Merge tag 'sound-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound updates from Takashi Iwai:
"It was a busy development cycle at this time, as you can see a wide
range of changes in diffstat. There are no big changes but many
refactoring and improvements. Here we go some highlights:
ALSA core:
- Procfs codes were cleaned up to use seq_file
- Procfs can be opt out via Kconfig (only for EXPERT)
- Two types of jack API were unified finally; now both kctl and input
jack devs are handled via a single function call.
HD-audio:
- Continued code restructuring for the future ASoC driver; now HDA
controller driver is split to a core helper module.
- Preliminary codes for Skylake audio support in HDA core.
- Proper i915 gfx power well management for SKL & co
- Enabled runtime PM as default for Intel HDMI/DP codecs
- Newer Tegra chip supports
- More quirks for Dell headsets, Alienware (with CA0132), etc.
- A couple of DRM ELD helper API functions
ASoC:
- Support for loading ASoC topology maps from firmware, intended to
be used to allow self-describing DSP firmware images to be built
which can map controls added by the DSP to userspace without the
kernel needing to know about individual DSP firmwares
- Lots of refactoring to avoid direct access to snd_soc_codec where
it's not needed supporting future refactoring
- Big refactoring, cleanup and enhancement for the Wolfson ADSP
driver
- Cleanup series for TI TAS2552 and R-CAR drivers
- Fixes and improvements on RT56xx codecs
- Support for TI TAS571x power amplifiers
- Support for Qualcomm APQ8016 and ZTE ZX296702 SoCs
- Support for x86 systems with RT5650 and Qualcomm Storm
- Support for Mediatek AFE (Audio Front End) unit
- Other various small fixes to ASoC codec drivers
Firewire:
- Enhanced to allow non-blocking streams to use timestamp
synchronization
- Improve support for DM1500 and BeBoBv3
Misc:
- Cleanup of old pci API functions over all PCI sound drivers
- Fix long-standing regression of the old powermac i2c setup"
* tag 'sound-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (533 commits)
ALSA: pcm: Fix pcm_class sysfs output
ALSA: hda-beep: Update authors dead email address
ASoC: wm_adsp: Move DSP Rate controls into the codec
ASoC: wm8995: Fix setting sysclk for WM8995_SYSCLK_MCLK2 case
ALSA: hda: provide default bus io ops extended hdac
ALSA: hda: add hda link cleanup routine
ALSA: hda: add hdac_ext stream creation and cleanup routines
ASoC: rsrc-card: remove unused ret
ALSA: HDAC: move SND_HDA_PREALLOC_SIZE to core
ASoC: mediatek: Add machine driver for rt5650 rt5676 codec
ASoC: mediatek: Add machine driver for MAX98090 codec
ASoC: mediatek: Add AFE platform driver
ASoC: rsnd: remove io from rsnd_mod
ASoC: rsnd: move rsnd_mod_is_working() to rsnd_io_is_working()
ASoC: rsnd: don't use rsnd_mod_to_io() on snd_kcontrol
ASoC: rsnd: don't use rsnd_mod_to_io() on rsnd_src_xxx()
ASoC: rsnd: don't use rsnd_mod_to_io() on rsnd_ssi_xxx()
ASoC: rsnd: don't use rsnd_mod_to_io() on rsnd_dma_xxx()
ASoC: rsnd: don't use rsnd_mod_to_io() on rsnd_get_adinr()
ASoC: rsnd: add common interrupt handler for SSI/SRC/DMA
...
Clear pending interrupts before requesting interrupts and move
interrupt initialization after channels have been initialized.
This avoids a NULL pointer dereference panic when using kexec
while DMA requests were running.
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The XDMAC supports memset transfers both over contiguous areas and over
discontiguous areas through an LLI.
The current memset operation only covers contiguous memset for now, so add
support for that case. Scatter-gathered memset will come eventually.
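A hedged, kernel-style fragment of how a client would use the contiguous
memset capability through the generic dmaengine API, assuming the
dmaengine_prep_dma_memset() wrapper is available in the tree at hand;
variable names are placeholders and error handling is trimmed:

  struct dma_async_tx_descriptor *tx;

  /* Fill 'len' bytes at the DMA address 'dst' with the value 0x00. */
  tx = dmaengine_prep_dma_memset(chan, dst, 0x00, len,
                                 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
  if (tx) {
      dmaengine_submit(tx);
      dma_async_issue_pending(chan);
  }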
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>