Pull dmaengine changes from Dan
1/ Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.
2/ In the course of testing 1/, a collection of enhancements to dmatest
fell out, notably basic performance statistics and fixed / enhanced
test control through the new module parameters 'run', 'wait', 'noverify',
and 'verbose'. Thanks to Andriy and Linus for their review.
3/ Testing the raid-related corner cases of 1/ triggered bugs in the
recently added 16-source operation support in the ioatdma driver.
4/ Some minor fixes / cleanups to mv_xor and ioatdma.
Conflicts:
drivers/dma/dmatest.c
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Despite requesting two memory resources, called 'base' and 'high_base', the
driver explicitly uses only the former. The latter is used only implicitly,
by addressing registers at offset +0x200 from 'base', which in practice lands
in the high_base region.
In other words, the current driver breaks if the second memory resource is
ever placed at an offset other than +0x200.
This patch fixes the above by defining the registers as offsets from
high_base, and using high_base explicitly where appropriate.
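For illustration, a minimal sketch of the idea (register name and offset are
hypothetical, not taken from the driver):

  #include <linux/io.h>
  #include <linux/types.h>

  /* offset defined relative to the second resource ('high_base'),
   * instead of being reached as base + 0x200 + offset */
  #define XOR_EXAMPLE_HIGH_REG	0x10

  static u32 xor_read_example(void __iomem *high_base)
  {
          return readl(high_base + XOR_EXAMPLE_HIGH_REG);
  }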
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This MMIO address is already checked at probe time, which makes this test
redundant. Let's just remove it.
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Remove support for DMA unmapping from drivers as it is no longer
needed (DMA core code is now handling it).
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
[djbw: fix up chan2parent() unused warning in drivers/dma/dw/core.c]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Add a hook for a common dma unmap implementation to enable removal of
the per driver custom unmap code. (A reworked version of Bartlomiej
Zolnierkiewicz's patches to remove the custom callbacks and the size
increase of dma_async_tx_descriptor for drivers that don't care about
raid).
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
[bzolnier: prepare pl330 driver for adding missing unmap while at it]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Pull slave-dmaengine updates from Vinod Koul:
"This pull brings:
- Andy's DW driver updates
- Guennadi's sh driver updates
- Pl08x driver fixes from Tomasz & Alban
- Improvements to mmp_pdma by Daniel
- TI EDMA fixes by Joel
- New drivers:
- Hisilicon k3dma driver
- Renesas rcar dma driver
- New API for publishing slave driver capabilities
- Various fixes across the subsystem by Andy, Jingoo, Sachin etc..."
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (94 commits)
dma: edma: Remove limits on number of slots
dma: edma: Leave linked to Null slot instead of DUMMY slot
dma: edma: Find missed events and issue them
ARM: edma: Add function to manually trigger an EDMA channel
dma: edma: Write out and handle MAX_NR_SG at a given time
dma: edma: Setup parameters to DMA MAX_NR_SG at a time
dmaengine: pl330: use dma_set_max_seg_size to set the sg limit
dmaengine: dma_slave_caps: remove sg entries
dma: replace devm_request_and_ioremap by devm_ioremap_resource
dma: ste_dma40: Fix potential null pointer dereference
dma: ste_dma40: Remove duplicate const
dma: imx-dma: Remove redundant NULL check
dma: dmagengine: fix function names in comments
dma: add driver for R-Car HPB-DMAC
dma: k3dma: use devm_ioremap_resource() instead of devm_request_and_ioremap()
dma: imx-sdma: Staticize sdma_driver_data structures
pch_dma: Add MODULE_DEVICE_TABLE
dmaengine: PL08x: Add cyclic transfer support
dmaengine: PL08x: Fix reading the byte count in cctl
dmaengine: PL08x: Add support for different maximum transfer size
...
Return directly if memory allocation fails. There is no need
to call dma_free_coherent().
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Cc: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The mv_xor driver had never been used in a big-endian context, and
therefore was not using the hardware features to support such an
execution environment. The hardware provides a "descriptor swap" bit
that automatically swaps the bytes of the DMA descriptors, within
blocks of 8 bytes. This requires a different DMA descriptor layout on
big-endian systems, as well as enabling this "descriptor swap" bit.
This mechanism is identical to the one already used in the
mv643xx_eth and mvneta network drivers.
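As a rough sketch of the approach (struct and field names illustrative, not
the driver's actual layout), the big-endian build pairs the 32-bit words of
each 8-byte block so that the hardware swap puts them back in order:

  #include <linux/types.h>
  #include <asm/byteorder.h>

  struct example_xor_desc {
  #if defined(__LITTLE_ENDIAN)
          u32 status;             /* word 0 of an 8-byte block */
          u32 crc32_result;       /* word 1 of the same block */
  #else
          /* the "descriptor swap" feature exchanges the two words */
          u32 crc32_result;
          u32 status;
  #endif
  };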
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Dan Williams <djbw@fb.com>
In order to support big-endian execution, the mv_xor driver is changed
to use the readl_relaxed() and writel_relaxed() accessors, which
properly convert from the CPU endianness to the device endianness
(which in the case of the Marvell XOR hardware is always little-endian).
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Dan Williams <djbw@fb.com>
Use the wrapper function for retrieving the platform data instead of
accessing dev->platform_data directly.
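Roughly, the change amounts to (struct name illustrative):

  /* before */
  struct mv_xor_platform_data *pdata = pdev->dev.platform_data;
  /* after */
  struct mv_xor_platform_data *pdata = dev_get_platdata(&pdev->dev);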
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
%p is used, thus NULL should be used instead of 0
in order to fix the following sparse warning:
drivers/dma/mv_xor.c:648:9: warning: Using plain integer as NULL pointer
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There have never been any real users of MEMSET operations since they
were introduced in January 2007 by commit 7405f74bad ("dmaengine:
refactor dmaengine around dma_async_tx_descriptor"). Therefore remove
support for them for now; it can always be brought back when needed.
[sebastian.hesselbarth@gmail.com: fix drivers/dma/mv_xor]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Acked-by: Dan Williams <djbw@fb.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Olof Johansson <olof@lixom.net>
Cc: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull slave-dmaengine updates from Vinod Koul:
"This is fairly big pull by my standards as I had missed last merge
window. So we have the support for device tree for slave-dmaengine,
large updates to dw_dmac driver from Andy for reusing on different
architectures. Along with this we have fixes on bunch of the drivers"
Fix up trivial conflicts, usually due to #include line movement next to
each other.
* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
ARM: dts: pl330: Add #dma-cells for generic dma binding support
DMA: PL330: Register the DMA controller with the generic DMA helpers
DMA: PL330: Add xlate function
DMA: PL330: Add new pl330 filter for DT case.
dma: tegra20-apb-dma: remove unnecessary assignment
edma: do not waste memory for dma_mask
dma: coh901318: set residue only if dma is in progress
dma: coh901318: avoid unbalanced locking
dmaengine.h: remove redundant else keyword
dma: of-dma: protect list write operation by spin_lock
dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
dma: of-dma.c: fix memory leakage
dw_dmac: apply default dma_mask if needed
dmaengine: ioat - fix spare sparse complain
dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
dw_dmac: add support for Lynxpoint DMA controllers
dw_dmac: return proper residue value
dw_dmac: fill individual length of descriptor
...
dev_<level> calls take less code than dev_printk(KERN_<LEVEL>),
and reducing object size is good.
Coalesce formats for easier grep.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
When a channel fails to initialize, we release all resources,
including clocks. However, an XOR unit is not necessarily associated
with a clock (some variants of Marvell SoCs have a clock for XOR units,
some don't), so we shouldn't unconditionally release the clock.
Instead, just like we do in the mv_xor_remove() function, we should
check whether a clock was found before releasing it.
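A minimal sketch of the resulting error path (field names illustrative):

  if (!IS_ERR(xordev->clk)) {
          clk_disable_unprepare(xordev->clk);
          clk_put(xordev->clk);
  }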
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
When mv_xor_channel_add() fails for one XOR channel, we jump to the
err_channel_add label to clean up all previous channels that had been
initialized correctly. Unfortunately, while handling this error
condition, we were disposing of the IRQ mapping before calling
mv_xor_channel_remove() (which does the free_irq()), which is
incorrect.
Instead, do things properly in the reverse order of the
initialization: first remove the XOR channel (so that free_irq() is
done), and then dispose of the IRQ mapping.
This avoids ugly warnings when, for some reason, one of the XOR
channels fails to initialize.
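In sketch form (names illustrative), the error path becomes:

  err_channel_add:
          for (i = 0; i < MV_XOR_MAX_CHANNELS; i++)
                  if (xordev->channels[i]) {
                          unsigned int irq = xordev->channels[i]->irq;

                          /* free_irq() happens in here ... */
                          mv_xor_channel_remove(xordev->channels[i]);
                          /* ... so disposing of the mapping is now safe */
                          irq_dispose_mapping(irq);
                  }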
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
This is a branch with updates for Marvell's mvebu/kirkwood platforms. They
came in late-ish, and were heavily interdependent such that it didn't
make sense to split them up across the cross-platform topic branches. So
here they are (for the second release in a row) in a branch on their own.
Merge tag 'mvebu' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC updates for Marvell mvebu/kirkwood from Olof Johansson:
"This is a branch with updates for Marvell's mvebu/kirkwood platforms.
They came in late-ish, and were heavily interdependent such that it
didn't make sense to split them up across the cross-platform topic
branches. So here they are (for the second release in a row) in a
branch on their own."
* tag 'mvebu' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (88 commits)
arm: l2x0: add aurora related properties to OF binding
arm: mvebu: add Aurora L2 Cache Controller to the DT
arm: mvebu: add L2 cache support
dma: mv_xor: fix error handling path
dma: mv_xor: fix error checking of irq_of_parse_and_map()
dma: mv_xor: use request_irq() instead of devm_request_irq()
dma: mv_xor: clear the window override control registers
arm: mvebu: fix address decoding armada_cfg_base() function
ARM: mvebu: update defconfig with I2C and RTC support
ARM: mvebu: Add SATA support for OpenBlocks AX3-4
ARM: mvebu: Add support for the RTC in OpenBlocks AX3-4
ARM: mvebu: Add support for I2C on OpenBlocks AX3-4
ARM: mvebu: Add support for I2C controllers in Armada 370/XP
arm: mvebu: Add hardware I/O Coherency support
arm: plat-orion: Add coherency attribute when setup mbus target
arm: dma mapping: Export a dma ops function arm_dma_set_mask
arm: mvebu: Add SMP support for Armada XP
arm: mm: Add support for PJ4B cpu and init routines
arm: mvebu: Add IPI support via doorbells
arm: mvebu: Add initial support for power managmement service unit
...
CONFIG_HOTPLUG is going away as an option so __devinit is no longer
needed.
Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CONFIG_HOTPLUG is going away as an option so __devexit_p is no longer
needed.
Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Acked-by: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The ->probe() function of the mv_xor driver contains in its error
handling code a loop to clean up the XOR channels that had been
successfully initialized if some other XOR channel fails to be
initialized. It does that by traversing the list of XOR channels and
cleaning up those for which the pointer is not NULL.
However, since the mv_xor_channel_add() function returns a PTR_ERR
style value, the pointer is not NULL on error. So, when handling the
error of a given channel initialization, we clean up this channel
initialization and mark this channel entry as NULL in the array. This
allows the rest of the cleanup (for the other channels) to work
properly.
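Sketched (names illustrative):

  chan = mv_xor_channel_add(xordev, pdev, i, cap_mask, irq);
  if (IS_ERR(chan)) {
          ret = PTR_ERR(chan);
          /* an ERR_PTR is not a live channel: mark the slot empty
           * so the common cleanup loop skips it */
          xordev->channels[i] = NULL;
          goto err_channel_add;
  }
  xordev->channels[i] = chan;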
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The irq_of_parse_and_map() function returns 0 on failure rather than
an error code, so we fix the calling site of irq_of_parse_and_map() in
the mv_xor driver accordingly.
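i.e. roughly (error label illustrative):

  irq = irq_of_parse_and_map(np, 0);
  if (!irq) {             /* 0 means "no mapping", not an errno */
          ret = -ENODEV;
          goto err_channel_add;
  }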
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Even though the usage of devm_*() functions is generally recommended
over their classic variants, the combination of devm_request_irq()
with irq_of_parse_and_map() doesn't work nicely.
We have the following scenario:
irq_of_parse_and_map(...)
devm_request_irq(...)
For some reason, the driver initialization fails at a later
point. Since irq_of_parse_and_map() is not device-managed, we do a:
irq_dispose_mapping(...)
Unfortunately, this doesn't work, because the free_irq() must be done
prior to calling irq_dispose_mapping(). But with the devm mechanism,
the automatic free_irq() would happen only after we get out of the
->probe() function.
So basically, we revert to using request_irq() with traditional error
handling, so that in case of error, free_irq() gets called before
irq_dispose_mapping().
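That way the failure path can run in the required order, roughly:

  /* error path: release the handler before disposing of the mapping */
  free_irq(irq, mv_chan);
  irq_dispose_mapping(irq);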
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The XOR channels on Marvell SoCs have a Window Override Control
register that allows some fancy manipulation of addresses. Those
features are not used by the driver, but some U-Boot versions modify
those registers anyway.
For some reason, the U-Boot on OpenBlocks AX3-4 was setting an invalid
value in those registers when the additional 2 GB DRAM chip was plugged
into the board, causing the XOR driver to fail to use the XOR
engines.
By setting those registers to 0 during driver initialization, we
ensure that the registers are configured in accordance with the
driver's operation model.
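A sketch of the initialization step (register name and offsets illustrative):

  #define WINDOW_OVERRIDE_CTRL(i)	(0x2a0 + ((i) << 2))

  for (i = 0; i < 2; i++)
          writel(0, base + WINDOW_OVERRIDE_CTRL(i));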
Thanks to Lior Amsalem <alior@marvell.com> for his help in debugging
this problem.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The dmatest module for DMA engines calls
device_control(dtc->chan, DMA_TERMINATE_ALL, 0);
after completing the tests. The documentation in
include/linux/dmaengine.h suggests this function is optional, and
dma_async_device_register() also does not BUG_ON() when no such
function is passed. However, dmatest is not the only code in the
kernel that unconditionally calls device_control. So add an
implementation indicating that none of these operations are
implemented.
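A minimal sketch of such a stub, against the dmaengine API of that time,
which would then be assigned to dma_dev->device_control:

  static int mv_xor_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
                            unsigned long arg)
  {
          return -ENOSYS;         /* no control operations are supported */
  }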
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The ->probe() and ->remove() functions were missing the usual
__devinit and __devexit qualifiers.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This patch finally adds a Device Tree binding to the mv_xor
driver. Thanks to the previous cleanup patches, the Device Tree
binding is relatively simple: one DT node per XOR engine, with
sub-nodes for each XOR channel of the XOR engine. The binding
obviously comes with the necessary documentation.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: devicetree-discuss@lists.ozlabs.org
Even though the driver cannot be unloaded at the moment, it is still
good to properly free the IRQ handlers in the channel removal function.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The pool_size is always PAGE_SIZE, and since it is a software
configuration parameter (and not a hardware description parameter), we
cannot make it part of the Device Tree binding, so we'd better remove
it from the platform_data as well.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
There is no need for the platform_data to give this ID; it is simply
the channel number, so we can compute it inside the driver when
registering the channels.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Now that mv_xor_device is no longer used to designate the per-channel
DMA devices, use it now to designate the XOR engines themselves
(each currently composed of two XOR channels).
So, now we have the nice organization where:
- mv_xor_device represents each XOR engine in the system
- mv_xor_chan represents each XOR channel of a given XOR engine
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Even though the DMA engine infrastructure has support for multiple
channels per device, the mv_xor driver registers one DMA engine device
for each channel, because the mv_xor channels inside the same XOR
engine have different capabilities, and the DMA engine infrastructure
only allows capabilities to be expressed at the DMA engine device level.
The mv_xor driver has therefore been registering one DMA engine device
and one DMA engine channel for each XOR channel since its introduction
in the kernel. However, it kept two separate internal structures,
mv_xor_device and mv_xor_channel, which didn't make a lot of sense
since there was a 1:1 mapping between those structures.
This patch gets rid of this duplication, and merges everything into
the mv_xor_chan structure.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
In preparation for the removal of the mv_xor_device structure, we
directly pass mv_xor_chan pointers to the self-test functions included
in the driver. These functions were selecting the first (and only)
channel available in each DMA device anyway, so the behaviour is
unchanged.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The mv_xor_device structure embeds a 'struct dma_device', which is
named 'common', a name that is not very meaningful. Rename it to
'dmadev', which will help avoid confusion later as we merge the
mv_xor_device and mv_xor_chan structures together.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The mv_xor_chan structure embeds a 'struct dma_chan', which is named
'common', a name that is not very meaningful. Rename it to 'dmachan',
which will help avoid confusion later as we merge the mv_xor_device
and mv_xor_chan structures together.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
It was only used in places where we could get the 'struct device *'
pointer in a different way.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
In many places, we need to get the 'struct device *' pointer from a
'struct mv_chan *', so we add a helper that makes this a bit
easier. It will also help reduce the churn in further
patches.
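For instance, a sketch of such a helper (field names illustrative):

  static inline struct device *mv_chan_to_devp(struct mv_xor_chan *chan)
  {
          return chan->dmachan.device->dev;
  }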
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
In mv_xor_memcpy_self_test() and mv_xor_xor_self_test(), all DMA
functions are called by passing dma_chan->device->dev as the 'device
*', except the calls to dma_sync_single_for_cpu(), which uselessly go
through mv_chan->device->pdev->dev.
Simplify this by using dma_chan->device->dev directly in the
dma_sync_single_for_cpu() calls.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The to_mv_xor_device() macro is not being used by the driver, so we
can get rid of it.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The 'shared' word no longer makes sense in a number of places as we
renamed the 'mv_xor_shared' driver to 'mv_xor'.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Since we got rid of the per-XOR channel 'mv_xor' driver, now the
per-XOR engine driver that used to be called 'mv_xor_shared' can
simply be named 'mv_xor'.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
'struct mv_xor_shared_platform_data' used to be the platform_data
structure for the 'mv_xor_shared' driver, but this driver is going to
be renamed simply 'mv_xor', so also rename its platform_data structure
accordingly.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
mv_xor_platform_data used to be the platform_data structure associated
to the 'mv_xor' driver. This driver no longer exists, and this data
structure really contains the properties of each XOR channel part of a
given XOR engine. Therefore 'struct mv_xor_channel_data' is a more
appropriate name.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Now that XOR channels are directly registered by the main
'mv_xor_shared' device ->probe() function and all users of the
'mv_xor' device have been removed, we can get rid of the latter.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Extend the XOR engine driver (currently called "mv_xor_shared") so
that XOR channels can be passed in the platform_data structure, and be
registered from there.
This will allow the users of the driver to be converted to the single
platform_driver variant of the mv_xor driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of doing the initialization/cleanup of the XOR channels
directly in the ->probe() and ->remove() hooks, we create separate
utility functions mv_xor_channel_add() and mv_xor_channel_remove().
This will make it easy to introduce, in a future patch, a different way
of registering XOR channels: instead of having one platform_device per
channel, we'll trigger the registration of all XOR channels of a given
XOR engine directly from the XOR engine's ->probe() function.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The driver currently pokes into the platform_data structure during its
normal operation to get the pool_size value. Poking into the
platform_data structure is not nice when moving to the Device Tree, so
this commit adds a new pool_size field in the mv_xor_device structure,
which gets initialized at ->probe() time. The driver then uses this
field instead of the platform_data.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The usage of dev_printk() is deprecated, and the dev_err(), dev_info()
and dev_notice() functions should be used instead.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Platform data for device drivers should be defined in
include/linux/platform_data/*.h, not in the architecture
and platform specific directories.
This moves such data out of the orion include directories
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <djbw@fb.com>
Cc: Bryan Wu <bryan.wu@canonical.com>
Cc: Richard Purdie <rpurdie@rpsys.net>
Cc: Chris Ball <cjb@laptop.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Liam Girdwood <lrg@ti.com>
Cc: Jaroslav Kysela <perex@perex.cz>
Cc: Takashi Iwai <tiwai@suse.de>
Some orion platforms can gate the XOR driver clock. If the clock
exists, enable/disable it as appropriate.
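In sketch form (field names illustrative):

  xordev->clk = clk_get(&pdev->dev, NULL);
  if (!IS_ERR(xordev->clk))
          clk_prepare_enable(xordev->clk);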
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Jamie Lentin <jm@lentin.co.uk>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
Ensure all DMA engine drivers initialize their cookies in the same way,
so that they all behave in a similar fashion. This means their first
issued cookie will be 2 rather than 1, and will increment to INT_MAX
before returning 1 and starting over.
In connection with this, Dan Williams said:
> Russell King wrote:
> > Secondly, some DMA engine drivers initialize the dma_chan cookie to 0,
> > others to 1. Is there a reason for this, or are these all buggy?
>
> I know that ioat and iop-adma expect 0 to mean "I have cleaned up this
> descriptor and it is idle", and would break if zero was an in-flight
> cookie value. The reserved usage of zero is a driver-internal
> concern, but I have no problem formalizing it as a reserved value.
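The consolidated helper ends up looking roughly like:

  static inline void dma_cookie_init(struct dma_chan *chan)
  {
          chan->cookie = DMA_MIN_COOKIE;
          chan->completed_cookie = DMA_MIN_COOKIE;
  }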
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Now that we have the completed cookie in the dma_chan structure, we
can consolidate the tx_status functions by providing a function to set
the txstate structure and return the DMA status. We also provide
a separate helper to set the residue for cookies which are still in
progress.
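The shared helper looks roughly like:

  static inline enum dma_status dma_cookie_status(struct dma_chan *chan,
          dma_cookie_t cookie, struct dma_tx_state *state)
  {
          dma_cookie_t used, complete;

          used = chan->cookie;
          complete = chan->completed_cookie;
          barrier();
          if (state) {
                  state->last = complete;
                  state->used = used;
                  state->residue = 0;
          }
          return dma_async_is_complete(cookie, complete, used);
  }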
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Everyone deals with assigning DMA cookies in the same way (it's part of
the API so they should be), so lets consolidate the common code into a
helper function to avoid this duplication.
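The helper is roughly:

  static inline dma_cookie_t dma_cookie_assign(struct dma_async_tx_descriptor *tx)
  {
          struct dma_chan *chan = tx->chan;
          dma_cookie_t cookie;

          cookie = chan->cookie + 1;
          if (cookie < DMA_MIN_COOKIE)
                  cookie = DMA_MIN_COOKIE;
          tx->cookie = chan->cookie = cookie;

          return cookie;
  }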
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Add a local private header file to contain definitions and declarations
which should only be used by DMA engine drivers.
We also fix linux/dmaengine.h to use LINUX_DMAENGINE_H to guard against
multiple inclusion.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Every DMA engine implementation declares a last completed dma cookie
in their private dma channel structures. This is pointless, and
forces driver specific code. Move this out into the common dma_chan
structure.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
mv_xor's is_complete_cookie is only ever written to, but never read.
This is silly; remove the write-only structure member.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Use a getter function in plat-orion/addr-map.c to get the address map
structure, rather than passing it to drivers in the platform_data
structures. When the drivers are built for non-orion platforms, a
dummy function is provided instead which returns NULL.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Michael Walle <michael@walle.cc>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
This patch makes BUG_ON() usage correct in drivers/dma/mv_xor.c
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Coly Li <bosong.ly@taobao.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use mv_xor_slot_cleanup() instead of __mv_xor_slot_cleanup(), as the former
function acquires the spin lock that is needed to protect the driver's data.
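i.e., roughly:

  static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
  {
          spin_lock_bh(&mv_chan->lock);
          __mv_xor_slot_cleanup(mv_chan);
          spin_unlock_bh(&mv_chan->lock);
  }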
Cc: <stable@kernel.org>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When using the two DMA channels of the same engine simultaneously,
some transfers are never completed. For example, an endless lock can
occur while writing heavily to a RAID5 array (with async-tx offload
support enabled).
Note that this issue can also be reproduced by using the DMA test
client.
On a given engine, the interrupt cause register is shared between the
two DMA channels. This patch makes sure that the cause bit is only
cleared for the requested channel.
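A sketch of the fix (bit layout and register accessor illustrative):

  static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan)
  {
          /* each channel owns its own 16-bit slice of the shared cause
           * register; write 0 only to this channel's end-of-descriptor
           * bit so the other channel's pending causes are preserved */
          u32 val = ~(1 << (chan->idx * 16));

          writel(val, XOR_INTR_CAUSE(chan));
  }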
Signed-off-by: Simon Guinot <sguinot@lacie.com>
Tested-by: Luc Saillard <luc@saillard.org>
Acked-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the
following script is used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Convert the device_is_tx_complete() operation on the
DMA engine to a generic device_tx_status() operation which
can return three states: DMA_TX_RUNNING, DMA_TX_COMPLETE,
and DMA_TX_PAUSED.
[dan.j.williams@intel.com: update for timberdale]
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Li Yang <leoli@freescale.com>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Cc: Magnus Damm <damm@opensource.se>
Cc: Liam Girdwood <lrg@slimlogic.co.uk>
Cc: Joe Perches <joe@perches.com>
Cc: Roland Dreier <rdreier@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop mv_xor's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Centralize this common initialization (and one case where ipu_idmac is
duplicating ->chan initialization).
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
dmatest: fix use after free in dmatest_exit
ipu_idmac: fix spinlock type
iop-adma, mv_xor: fix mem leak on self-test setup failure
fsldma: fix off by one in dma_halt
I/OAT: fail self-test if callback test reaches timeout
I/OAT: update driver version and copyright dates
I/OAT: list usage cleanup
I/OAT: set tcp_dma_copybreak to 256k for I/OAT ver.3
I/OAT: cancel watchdog before dma remove
I/OAT: fail initialization on zero channels detection
I/OAT: do not set DCACTRL_CMPL_WRITE_ENABLE for I/OAT ver.3
I/OAT: add verification for proper APICID_TAG_MAP setting by BIOS
dmaengine: update kerneldoc
iop_adma_zero_sum_self_test has the brackets in the wrong place for the
setup failure deallocation path. This error was duplicated in
mv_xor_xor_self_test.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
`iop_adma_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv_xor_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_unmap_regs' referenced in section `.devinit.text' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`orion_nand_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`pxafb_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reference counting is done at the module level so clients need not worry
that a channel will leave while they are actively using dmaengine.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
All users have been converted to either the general-purpose allocator,
dma_find_channel, or dma_request_channel.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_tx.ko is a consumer of dma channels. A circular dependency arises
if modules in drivers/dma rely on common code in async_tx.ko. It
prevents either module from being unloaded.
Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o
where they should have been from the beginning.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Mapping the destination multiple times is a misuse of the dma-api.
Since the destination may be reused as a source, ensure that it is only
mapped once and that it is mapped bidirectionally. This appears to add
ugliness on the unmap side in that it always reads back the destination
address from the descriptor, but gcc can determine that dma_unmap is a
nop and not emit the code that calculates its arguments.
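In sketch form, the destination handling becomes (illustrative):

  dest = dma_map_page(dma_dev->dev, dest_page, offset, len,
                      DMA_BIDIRECTIONAL);
  /* ... the same mapping may now also be fed back in as a source ... */
  dma_unmap_page(dma_dev->dev, dest, len, DMA_BIDIRECTIONAL);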
Cc: <stable@kernel.org>
Cc: Saeed Bishara <saeed@marvell.com>
Acked-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch performs the equivalent include directory shuffle for
plat-orion, and fixes up all users.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
In some cases client code may need the dma-driver to skip the unmap of source
and/or destination buffers. Setting these flags indicates to the driver to
skip the unmap step. In this regard async_xor is currently broken in that it
allows the destination buffer to be unmapped while an operation is still in
progress, i.e. when the number of sources exceeds the hardware channel's
maximum (fixed in a subsequent patch).
Acked-by: Saeed Bishara <saeed@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
A DMA controller capable of doing slave transfers may need to know a
few things about the slave when preparing the channel. We don't want
to add this information to struct dma_channel since the channel hasn't
yet been bound to a client at this point.
Instead, pass a reference to the client requesting the channel to the
driver's device_alloc_chan_resources hook so that it can pick the
necessary information from the dma_client struct by itself.
[dan.j.williams@intel.com: fixed up fsldma and mv_xor]
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The XOR engine found in Marvell's SoCs and system controllers
provides XOR and DMA operations, iSCSI CRC32C calculation, memory
initialization, and memory ECC error cleanup support.
This driver implements the DMA engine API and supports the following
capabilities:
- memcpy
- xor
- memset
The XOR engine can be used by DMA engine clients implemented in the
kernel; one of those clients is the RAID module. In that case, I
observed a 20% improvement in raid5 write throughput, and a 40%
decrease in CPU utilization when doing array construction; those
results were obtained on a 5182 running at 500 MHz.
When enabling the NET DMA client, the performance decreased, so for
now it is recommended to keep this client off.
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>