In preparation for unconditionally passing the struct tasklet_struct
pointer to all tasklet callbacks, switch to the new tasklet_setup()
and from_tasklet() helpers so the tasklet pointer is passed explicitly.
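A minimal sketch of the conversion pattern, assuming a container
structure with an embedded tasklet as in the hidma low-level code
(names are simplified here, not the exact driver code):

#include <linux/interrupt.h>

struct hidma_lldev_sketch {
	struct tasklet_struct task;
	/* ... */
};

/* new-style callback: receives the tasklet and recovers the container */
static void hidma_ll_tasklet(struct tasklet_struct *t)
{
	struct hidma_lldev_sketch *lldev = from_tasklet(lldev, t, task);

	/* process completions for lldev ... */
}

static void hidma_sketch_setup(struct hidma_lldev_sketch *lldev)
{
	/* was: tasklet_init(&lldev->task, cb, (unsigned long)lldev); */
	tasklet_setup(&lldev->task, hidma_ll_tasklet);
}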
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Link: https://lore.kernel.org/r/20200831103542.305571-23-allen.lkml@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the following coccicheck warning:
drivers/dma/qcom/hidma.c:553:1-17: WARNING: Assignment of 0/1 to bool
variable
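The change is mechanical; a hypothetical before/after for illustration
(the actual variable name at hidma.c:553 may differ):

	bool msi;

	msi = 0;	/* before: triggers the coccicheck warning */
	msi = false;	/* after */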
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Acked-by: Sinan Kaya <okaya@kernel.org>
Link: https://lore.kernel.org/r/20200504113406.41530-1-yanaijie@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to call 'hidma_debug_uninit()' in the error handling
path. 'hidma_debug_init()' has not been called yet.
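A sketch of the probe ordering this relies on (the failing step and the
label are illustrative, not the exact code in hidma_probe()):

	rc = some_setup_step(dmadev);	/* hypothetical failing step */
	if (rc)
		goto bailout;	/* hidma_debug_init() has not run yet, so
				 * this path must not call
				 * hidma_debug_uninit()
				 */

	hidma_debug_init(dmadev);	/* debugfs is set up last, on success */
	return 0;

bailout:
	/* release only the resources that were actually acquired */
	return rc;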
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/20200427111043.70218-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When dma_cookie_complete() is called in hidma_process_completed(),
dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status(). Then,
hidma_txn_is_success() is called, using the channel cookie
mchan->last_success to perform an additional DMA status check. The
current code assigns mchan->last_success after dma_cookie_complete(),
which opens a race window in which dma_cookie_status() returns
DMA_COMPLETE before mchan->last_success has been assigned. The race
causes hidma_tx_status() to return DMA_ERROR even though the
transaction actually succeeded. Moreover, in the async_tx case, it
causes a timeout panic in async_tx_quiesce().
Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for
transaction
...
Call trace:
[<ffff000008089994>] dump_backtrace+0x0/0x1f4
[<ffff000008089bac>] show_stack+0x24/0x2c
[<ffff00000891e198>] dump_stack+0x84/0xa8
[<ffff0000080da544>] panic+0x12c/0x29c
[<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx]
[<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx]
[<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456]
[<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456]
[<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456]
[<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456]
[<ffff000008736538>] md_thread+0x108/0x168
[<ffff0000080fb1cc>] kthread+0x10c/0x138
[<ffff000008084d34>] ret_from_fork+0x10/0x18
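A sketch of the fix in hidma_process_completed(): record last_success
before completing the cookie, under the channel lock (simplified from
the driver; mdesc is the completed descriptor, llstat its low-level
status):

	spin_lock_irqsave(&mchan->lock, irqflags);
	if (llstat == DMA_COMPLETE) {
		/* must be set before dma_cookie_complete() so that
		 * hidma_tx_status() can never observe DMA_COMPLETE with
		 * a stale last_success
		 */
		mchan->last_success = last_cookie;
		result.result = DMA_TRANS_NOERROR;
	} else {
		result.result = DMA_TRANS_ABORTED;
	}
	dma_cookie_complete(&mdesc->desc);
	spin_unlock_irqrestore(&mchan->lock, irqflags);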
Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
async_tx_test_ack() uses the flags field in struct
dma_async_tx_descriptor to check the ACK status. Because hidma reuses
descriptors from a free list when hidma_prep_dma_*(memcpy/memset) is
called, the flag stays ACKed if the descriptor has been used before.
This causes a BUG_ON in async_tx_quiesce().
kernel BUG at crypto/async_tx/async_tx.c:282!
Internal error: Oops - BUG: 0 [#1] SMP
...
task: ffff8017dd3ec000 task.stack: ffff8017dd3e8000
PC is at async_tx_quiesce+0x54/0x78 [async_tx]
LR is at async_trigger_callback+0x98/0x110 [async_tx]
This patch initializes the flags in dma_async_tx_descriptor with the
flags passed by the caller when hidma_prep_dma_*(memcpy/memset) is
called.
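A sketch of the change in the prep routines; the free-list handling is
elided and field names are assumed from the driver:

static struct dma_async_tx_descriptor *
hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest,
		      dma_addr_t src, size_t len, unsigned long flags)
{
	struct hidma_desc *mdesc;

	/* ... pop mdesc from the channel's free list and program the
	 * hardware descriptor with dest/src/len ...
	 */

	/* carry the caller's flags so a stale ACK bit from a previous
	 * use of this descriptor is not inherited
	 */
	mdesc->desc.flags = flags;

	return &mdesc->desc;
}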
Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
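Roughly, the pattern being replaced (hidma_mgmt_dev stands in here for
whatever the callback stored as drvdata):

	/* before: struct device -> platform_device -> drvdata */
	struct hidma_mgmt_dev *mdev =
		platform_get_drvdata(to_platform_device(dev));

	/* after: read it straight from the struct device */
	struct hidma_mgmt_dev *mdev = dev_get_drvdata(dev);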
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The destination event channel register has been relocated from offset
0x28 to offset 0x40. Update the code accordingly.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for probing the newer HW and organize the MSI-capable
hardware variants into an array for easier maintenance.
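One way to express this, sketched with an OF match table; the 1.0
compatible string is the one already used by the driver, while the 1.1
entry and the capability flag illustrate the approach rather than quote
the exact patch:

#define HIDMA_MSI_CAP	BIT(0)

static const struct of_device_id hidma_match[] = {
	{ .compatible = "qcom,hidma-1.0", },
	/* newer, MSI-capable revision carries a capability flag */
	{ .compatible = "qcom,hidma-1.1", .data = (void *)HIDMA_MSI_CAP, },
	{},
};
MODULE_DEVICE_TABLE(of, hidma_match);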
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
HIDMA HW supports a memset operation in addition to memcpy.
Since the memset API is now present in the kernel, bring the
memset feature to life.
The descriptor format is the same for both memcpy and memset;
the descriptor type is 4 when memset is requested. The lowest
8 bits of the source DMA argument are used as the fill pattern.
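Illustrative only: how the fill byte is derived from the prep call's
value argument (the descriptor programming itself lives in the
low-level code):

	/* device_prep_dma_memset() passes the fill value as an int; the
	 * HW repeats only the lowest 8 bits, and the descriptor type is
	 * set to 4 instead of the memcpy type
	 */
	u64 byte_pattern = value & 0xff;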
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current code violates the DMA Engine API by putting submitted
requests directly into the HW queue. This causes queued transactions
to be started by another thread as soon as the first one finishes.
The DMA Engine documentation clearly states:
"dmaengine_submit() will not start the DMA operation".
Move HW queuing of the requests into the issue_pending() routine to
comply with the API requirements, and create a new queued state for
temporarily holding the requests.
A descriptor now goes through these transitions:
free->prepared->queued->active->completed->free
as opposed to
free->prepared->active->completed->free
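A sketch of the resulting issue_pending() path (names follow the
driver but are simplified; treat this as the intended flow, not the
exact code):

static void hidma_issue_pending(struct dma_chan *dmach)
{
	struct hidma_chan *mchan = to_hidma_chan(dmach);
	struct hidma_dev *dmadev = mchan->dmadev;
	struct hidma_desc *qdesc;
	unsigned long flags;

	spin_lock_irqsave(&mchan->lock, flags);

	/* push everything that tx_submit() parked on the queued list
	 * into the HW ring; tx_submit() itself no longer touches the HW
	 */
	list_for_each_entry(qdesc, &mchan->queued, node)
		hidma_ll_queue_request(dmadev->lldev, qdesc->tre_ch);

	/* queued -> active */
	list_splice_tail_init(&mchan->queued, &mchan->active);

	spin_unlock_irqrestore(&mchan->lock, flags);

	/* kick the HW if it is not already running */
	hidma_ll_start(dmadev->lldev);
}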
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Parameters like the maximum read/write request size and the maximum
number of active transactions are currently configured in DT/ACPI.
This patch allows a user to override these to fine-tune performance
for their application.
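The overrides are exposed as writable sysfs attributes on the
management device; a sketch of one of them (the attribute and field
names here are illustrative, while hidma_mgmt_setup() is the driver's
existing routine for reprogramming the HW with the current values):

static ssize_t max_rd_xactions_store(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
{
	struct hidma_mgmt_dev *mdev = dev_get_drvdata(dev);
	unsigned long val;
	int rc;

	rc = kstrtoul(buf, 0, &val);
	if (rc)
		return rc;

	mdev->max_rd_xactions = val;

	/* push the new value into the HW configuration registers */
	rc = hidma_mgmt_setup(mdev);
	if (rc)
		return rc;

	return count;
}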
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We need to ensure that all DMA transfers and interrupts are cleared
during the shutdown operation so that kexec can start the next kernel
cleanly. Otherwise, the HW could be performing DMA into random
addresses in the middle of the second kernel's start.
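A sketch of the shutdown hook wiring (the body is reduced to its
intent; hidma_ll_disable() stands for the driver's existing
channel-stop helper):

static void hidma_shutdown(struct platform_device *pdev)
{
	struct hidma_dev *dmadev = platform_get_drvdata(pdev);

	/* quiesce the channel so no DMA or interrupt is left running
	 * when the next kernel starts
	 */
	if (hidma_ll_disable(dmadev->lldev))
		dev_warn(&pdev->dev, "channel did not stop\n");
}

static struct platform_driver hidma_driver = {
	.probe		= hidma_probe,
	.remove		= hidma_remove,
	.shutdown	= hidma_shutdown,
	/* .driver = { ... } as before */
};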
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
MODULE_DEVICE_TABLE is used by the kernel to determine which device
driver should be loaded for which platform device. In the current
code, MODULE_DEVICE_TABLE is only defined for device-tree based
platforms. Define it for ACPI based platforms as well.
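The addition amounts to exporting the ACPI id table the driver already
matches against, roughly (assuming the channel ACPI id already present
in the driver's table):

#if IS_ENABLED(CONFIG_ACPI)
static const struct acpi_device_id hidma_acpi_ids[] = {
	{"QCOM8061"},
	{},
};
MODULE_DEVICE_TABLE(acpi, hidma_acpi_ids);
#endif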
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The 4.8-rc8 kernel is printing duplicate file entry warnings while removing
the HIDMA object. This is caused by stale sysfs entries remaining from the
previous execution.
_sysfs_warn_dup+0x5c/0x78
sysfs_add_file_mode_ns+0x13c/0x1c0
sysfs_create_file_ns+0x2c/0x40
device_create_file+0x54/0xa0
hidma_probe+0x7c8/0x808
Create hidma_sysfs_init and hidma_sysfs_uninit functions and call them
from the probe and remove paths. For proper cleanup, add the attrs
object to the device data structure so that it stays around until the
remove call is made.
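A sketch of the split (the attribute-creation helper and the attrs
field name are illustrative):

static int hidma_sysfs_init(struct hidma_dev *dev)
{
	/* keep the attribute in the device data so remove can find it */
	dev->chid_attrs = hidma_create_sysfs_entry(dev, "chid", S_IRUGO);
	if (!dev->chid_attrs)
		return -ENOMEM;

	return device_create_file(dev->ddev.dev, dev->chid_attrs);
}

static void hidma_sysfs_uninit(struct hidma_dev *dev)
{
	device_remove_file(dev->ddev.dev, dev->chid_attrs);
}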
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The newly added MSI support causes a harmless warning when MSI
is disabled:
drivers/dma/qcom/hidma.c:558:20: error: 'hidma_chirq_handler_msi' defined but not used [-Werror=unused-function]
Add another #ifdef around the function definition, matching the one
already guarding its users.
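The shape of the fix, roughly (assuming the driver's MSI code is gated
on CONFIG_GENERIC_MSI_IRQ_DOMAIN):

#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
static irqreturn_t hidma_chirq_handler_msi(int chirq, void *arg)
{
	/* ... MSI variant of the channel interrupt handler ... */
	return IRQ_HANDLED;
}
#endif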
Fixes: 1c0e3e82a7 ("dmaengine: qcom_hidma: add MSI support for interrupts")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The interrupts can now be delivered as platform MSI interrupts on newer
platforms. The code looks for new OF and ACPI strings in order to
enable the functionality.
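At its core, enabling MSI for the channel comes down to something like
the following sketch (error handling and per-vector bookkeeping are
left out; HIDMA_MSI_INTS stands for the number of event vectors the HW
provides):

static void hidma_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
	/* program the MSI address/data pair into the HIDMA event regs */
}

static int hidma_request_msi(struct hidma_dev *dmadev,
			     struct platform_device *pdev)
{
	return platform_msi_domain_alloc_irqs(&pdev->dev, HIDMA_MSI_INTS,
					      hidma_write_msi_msg);
}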
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The HIDMA driver is capable of error detection. However, the error was
not being passed back to the client when the tx_status API is called.
Change the error handling behavior to follow this order:
1. dmaengine asserts the error interrupt
2. Driver receives it and marks the txn as an error
3. Driver completes the txn and notifies the client. No further
submissions. Drop the locks before calling the callback, as subsequent
processing by the client may be in the callback thread.
4. Client invokes the status API and the driver returns the error
5. On error, the client calls terminate_all. The driver resets the
channel and frees all descriptors in the active, pending and completed
lists
6. Client prepares a new txn, and so on.
As part of this work, the reset in the interrupt handler is removed;
when an error happens, the HW is left in the disabled state and the
only way to recover is for the client to terminate the channel.
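A sketch of step 4 on the driver side (the error flag is a hypothetical
placeholder for however the channel records the failure, and the
paused check is simplified):

static enum dma_status hidma_tx_status(struct dma_chan *dmach,
				       dma_cookie_t cookie,
				       struct dma_tx_state *txstate)
{
	struct hidma_chan *mchan = to_hidma_chan(dmach);
	enum dma_status ret;

	ret = dma_cookie_status(dmach, cookie, txstate);

	/* if the txn was marked as failed in step 2, report it */
	if (ret == DMA_COMPLETE && mchan->txn_failed)	/* hypothetical flag */
		ret = DMA_ERROR;

	if (mchan->paused && ret == DMA_IN_PROGRESS)
		ret = DMA_PAUSED;

	return ret;
}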
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pass DMA errors to the client through a result argument. The HW only
reports a generic error when something goes wrong, which is why
DMA_TRANS_ABORTED is used in all cases.
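A sketch of what the completion path hands back through the dmaengine
result interface (taken loosely from the driver's completion handling;
llstat is the low-level status of the finished transfer):

	struct dmaengine_result result;

	if (llstat == DMA_COMPLETE) {
		result.result = DMA_TRANS_NOERROR;
	} else {
		/* the HW gives no detail, so every failure is reported
		 * as DMA_TRANS_ABORTED
		 */
		result.result = DMA_TRANS_ABORTED;
	}

	dmaengine_desc_get_callback_invoke(&mdesc->desc, &result);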
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is a race condition between data transfer callback and descriptor
free code. The callback routine may decide to clear the resources even
though the descriptor has not yet been freed.
Instead of calling the callback first and then releasing the memory,
change the order: return the descriptor to the free pool first, and
only then call the user-provided callback.
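A sketch of the reordering in the completion path (locking and names
simplified):

	struct dmaengine_desc_callback cb;

	/* capture the callback while the descriptor is still ours */
	dmaengine_desc_get_callback(&mdesc->desc, &cb);

	/* return the descriptor to the free pool first ... */
	spin_lock_irqsave(&mchan->lock, irqflags);
	list_move(&mdesc->node, &mchan->free);
	spin_unlock_irqrestore(&mchan->lock, irqflags);

	/* ... and only then run the client callback, so it cannot race
	 * with the descriptor being recycled
	 */
	dmaengine_desc_callback_invoke(&cb, NULL);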
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results
for the transaction. The conversion maintains the current behavior;
the driver must convert to the new callback mechanism at a later time
in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Drivers should ensure that tasklets are killed, so that they cannot
run after driver remove is executed.
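The change boils down to killing the completion tasklet on the
teardown path, e.g. (field name as in the low-level driver):

	/* make sure no completion tasklet can run past this point */
	tasklet_kill(&lldev->task);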
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
In order to create a relationship model between the channels and the
management object, we are adding support for object hierarchy to the
drivers. This simplifies userspace application development: we no
longer have to traverse different firmware paths for device tree and
ACPI based kernels.
No matter what flavor of kernel is used, objects will be represented as
platform devices.
The new layout is as follows:
hidmam_10: hidma-mgmt@0x5A000000 {
	compatible = "qcom,hidma-mgmt-1.0";
	...
	hidma_10: hidma@0x5a010000 {
		compatible = "qcom,hidma-1.0";
		...
	};
};
hidma_mgmt_init detects each instance of the hidma-mgmt-1.0 objects in
the device tree and calls into the channel driver to create platform
devices for each child of the management object.
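One way this can be done from the management driver, sketched (the
helper name is illustrative and the error handling is trimmed):

static int hidma_mgmt_populate_channels(struct device_node *np,
					struct device *parent)
{
	struct device_node *child;

	for_each_available_child_of_node(np, child) {
		/* every child of the management node becomes a platform
		 * device parented to the management device
		 */
		if (!of_platform_device_create(child, NULL, parent)) {
			of_node_put(child);
			return -ENODEV;
		}
	}

	return 0;
}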
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add debugfs hooks for debugging the execution behavior of the DMA
channel. The debugfs hooks get initialized by the probe function and
uninitialized by the remove function.
A stats file is created in debugfs. It shows information about each
HIDMA channel as well as each asynchronous job queued and completed at
a given time.
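A sketch of the hooks using the standard debugfs/seq_file pattern (the
show callback is reduced to its intent, and the dentry is assumed to
be kept in struct hidma_dev):

static int hidma_stats_show(struct seq_file *s, void *unused)
{
	struct hidma_dev *dmadev = s->private;

	/* dump channel state plus the queued/completed jobs here */
	seq_printf(s, "%s\n", dev_name(dmadev->ddev.dev));
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(hidma_stats);

static void hidma_debug_init(struct hidma_dev *dmadev)
{
	dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev),
					     NULL);
	debugfs_create_file("stats", 0444, dmadev->debugfs, dmadev,
			    &hidma_stats_fops);
}

static void hidma_debug_uninit(struct hidma_dev *dmadev)
{
	debugfs_remove_recursive(dmadev->debugfs);
}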
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch implements the hardware hooks for the HIDMA channel driver.
The main functions of interest are:
- hidma_ll_init
- hidma_ll_request
- hidma_ll_queue_request
- hidma_ll_hw_start
The OS layer calls the hidma_ll_init function during probe to set up
the hardware. At this point, the number of supported descriptors is
also given. On each request, a descriptor is allocated from the free
pool and filled in with the transfer parameters. Multiple requests can
be queued into the hardware via the OS interface. When the client is
ready for the requests to be executed, the start method is called.
Completions are delivered through callbacks from a tasklet.
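Roughly, the OS layer drives the low-level interface like this
(argument lists are simplified for illustration; see hidma_ll.c for
the real signatures):

	/* probe time: bring up the HW and size the descriptor pool */
	lldev = hidma_ll_init(dev, nr_descriptors, trca, evca, chidx);

	/* per transfer: take a descriptor from the free pool and fill
	 * in the transfer parameters (simplified)
	 */
	rc = hidma_ll_request(lldev, &tre_ch);

	/* multiple requests may be queued before execution starts */
	hidma_ll_queue_request(lldev, tre_ch);

	/* when the client is ready, the start method kicks the channel;
	 * completions come back later via the tasklet-driven callbacks
	 */
	hidma_ll_hw_start(lldev);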
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the hidma engine. The driver consists of
two logical blocks: the DMA engine interface and the low-level
interface. The hardware only supports memcpy/memset, and this driver
only supports the memcpy interface. Neither the HW nor the driver
supports the slave interface.
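The DMA engine side therefore registers only the memcpy capability,
along the usual dmaengine lines (sketch; callback names follow the
driver):

	dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);

	dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
	dmadev->ddev.device_alloc_chan_resources = hidma_alloc_chan_resources;
	dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
	dmadev->ddev.device_tx_status = hidma_tx_status;
	dmadev->ddev.device_issue_pending = hidma_issue_pending;
	dmadev->ddev.dev = &pdev->dev;

	rc = dma_async_device_register(&dmadev->ddev);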
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>