This converts arm to the generic pci_set_dma_mask() and
pci_set_consistent_dma_mask() (and removes HAVE_ARCH_PCI_SET_DMA_MASK,
which was used for dmabounce).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Looked-over-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Greg KH <greg@kroah.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
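For illustration, with the generic implementation this is just the normal
mask-setting sequence in a PCI driver's probe path; the driver below is
hypothetical and only sketches the call pattern:

    /* Hypothetical driver probe, sketching the generic helpers this
     * patch switches arm over to.  Nothing here is arm-specific. */
    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static int example_probe(struct pci_dev *pdev,
                             const struct pci_device_id *id)
    {
            int err;

            err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
            if (err)
                    return err;

            err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
            if (err)
                    return err;

            /* ... device setup continues ... */
            return 0;
    }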
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
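As a rough sketch of what "explicit ownership" means at the map/unmap
level: ownership passes from CPU to device at map time and back from
device to CPU at unmap time, with a cache-maintenance hook at each point.
The hook names below (__example_cpu_to_dev, __example_dev_to_cpu) are
placeholders, not the names the patch introduces:

    /* Sketch only: ownership transfer made explicit around a streaming
     * mapping.  The __example_* hooks are illustrative placeholders. */
    #include <linux/dma-mapping.h>

    static dma_addr_t example_map_single(struct device *dev, void *cpu_addr,
                                         size_t size,
                                         enum dma_data_direction dir)
    {
            /* Buffer ownership passes from CPU to device: perform any
             * cache clean/invalidate the direction requires. */
            __example_cpu_to_dev(cpu_addr, size, dir);
            return virt_to_dma(dev, cpu_addr);
    }

    static void example_unmap_single(struct device *dev, dma_addr_t handle,
                                     size_t size,
                                     enum dma_data_direction dir)
    {
            /* Ownership passes back from device to CPU: undo any cache
             * effects before the CPU looks at the data. */
            __example_dev_to_cpu(dma_to_virt(dev, handle), size, dir);
    }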
We will need to treat dma_unmap_page() differently from dma_unmap_single().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
The non-highmem and the __pfn_to_bus()-based versions of page_to_dma()
both compile to the same code, so it's pointless having these two
different approaches. Use the __pfn_to_bus()-based version.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
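For reference, the surviving definition amounts to the pattern below (a
sketch; the exact macro name and spelling in the tree may differ):

    /* Translate a struct page to a bus/DMA address via its PFN, with no
     * detour through a kernel virtual address. */
    #define __arch_page_to_dma(dev, page)                          \
            ((dma_addr_t)__pfn_to_bus(page_to_pfn(page)))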
If a machine class has a custom __virt_to_bus() implementation then it
must also provide a __arch_page_to_dma() implementation which is _not_
based on page_address(), in order to support highmem.
This patch fixes the existing __arch_page_to_dma() implementations and
provides a default implementation otherwise. The default implementation
for highmem is based on __pfn_to_bus(), which is defined only when no
custom __virt_to_bus() is provided by the machine class.
That leaves only ebsa110 and footbridge, which cannot support highmem
until they provide their own __arch_page_to_dma() implementation.
But highmem support on those legacy platforms with limited memory is
certainly not a priority.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
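To make the constraint concrete: a translation that goes through
page_address() cannot be correct for highmem, because an unmapped highmem
page has no kernel virtual address, whereas a PFN-based translation never
needs one. The macro names below are illustrative only:

    /* page_address()-based translation: returns garbage for an unmapped
     * highmem page, since page_address() yields NULL for it. */
    #define example_page_to_dma_bad(dev, page)                     \
            ((dma_addr_t)__virt_to_bus((unsigned long)page_address(page)))

    /* PFN-based translation: valid for any page, mapped or not. */
    #define example_page_to_dma_ok(dev, page)                      \
            ((dma_addr_t)__pfn_to_bus(page_to_pfn(page)))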
This is a helper to be used by the DMA mapping API to handle cache
maintenance for memory identified by a page structure instead of a
virtual address. Those pages may or may not be highmem pages, and
when they're highmem pages, they may or may not be virtually mapped.
When they're not mapped, there is no L1 cache to worry about. But
even in that case the L2 cache must be processed since unmapped highmem
pages can still be L2 cached.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
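Schematically, such a helper has three cases to cover, as described above;
the sketch below only illustrates the shape, with l1_maint() and
outer_maint() standing in for the real L1 and L2 (outer cache) operations:

    /* Sketch of page-based cache maintenance.  l1_maint() and
     * outer_maint() are illustrative stand-ins, not real kernel APIs. */
    static void example_cache_maint_page(struct page *page, unsigned long off,
                                         size_t size, int dir)
    {
            if (!PageHighMem(page)) {
                    /* Lowmem always has a kernel mapping. */
                    l1_maint(page_address(page) + off, size, dir);
            } else {
                    /* Highmem: L1 maintenance is needed only if the page
                     * is currently virtually mapped; if it is not mapped,
                     * no L1 lines can exist for it. */
                    void *vaddr = kmap_high_get(page);
                    if (vaddr) {
                            l1_maint(vaddr + off, size, dir);
                            kunmap_high(page);
                    }
            }

            /* The outer (L2) cache is physically addressed, so it must be
             * maintained in every case, mapped or not. */
            outer_maint(page_to_phys(page) + off, size, dir);
    }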
dma_supported() is supposed to indicate whether the system can support
the DMA mask it was passed, which depends on the maximal address which
can be returned for DMA allocations. If the mask is smaller than that,
we are unable to guarantee that the driver can reliably obtain suitable
memory.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
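In code terms the check reduces to a comparison against that maximum
address; a minimal sketch, with example_dma_limit standing in for the
platform's actual limit:

    /* Sketch: dma_supported() as a comparison against the highest
     * address DMA allocations can come from ('example_dma_limit' is an
     * assumed placeholder, not a real symbol). */
    static int example_dma_supported(struct device *dev, u64 mask)
    {
            if (mask < example_dma_limit)
                    return 0;       /* cannot guarantee suitable memory */
            return 1;
    }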
arch/arm/mm/dma-mapping.c: In function `dma_sync_sg_for_cpu':
arch/arm/mm/dma-mapping.c:588: warning: statement with no effect
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As per the dma_unmap_* calls, we don't touch the cache when a DMA
buffer transitions from device to CPU ownership. Presently, no
problems have been identified with speculative cache prefetching,
which is itself a new feature in later architectures. We may
have to revisit the DMA API later for these architectures anyway.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Validate the direction argument like x86 does. In addition,
validate the dma_unmap_* parameters against those passed to
dma_map_* when using the DMA bounce code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
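The direction check itself is the same pattern x86 uses: reject anything
that is not a valid enum dma_data_direction value before touching the
mapping. The wrapper function below is illustrative; valid_dma_direction()
is the generic helper from <linux/dma-mapping.h>:

    /* Sketch: validate the direction argument up front. */
    #include <linux/bug.h>
    #include <linux/dma-mapping.h>

    static dma_addr_t example_checked_map_single(struct device *dev, void *ptr,
                                                 size_t size,
                                                 enum dma_data_direction dir)
    {
            BUG_ON(!valid_dma_direction(dir));
            /* ... perform the real mapping ... */
            return virt_to_dma(dev, ptr);
    }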
The dmabounce dma_sync_xxx() implementations have been broken for
quite some time; they all copy data between the DMA buffer and
the CPU-visible buffer irrespective of the change of ownership.
(IOW, a DMA_FROM_DEVICE mapping copies data from the DMA buffer
to the CPU buffer during a call to dma_sync_single_for_device().)
Fix it by getting rid of sync_single(), moving the contents into
the recently created dmabounce_sync_for_xxx() functions and adjusting
appropriately.
This also makes it possible to properly support the DMA range sync
functions.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
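The ownership-aware behaviour being restored is, schematically: copy out
of the bounce buffer only when the CPU takes back a buffer the device may
have written (DMA_FROM_DEVICE), and copy into it only when handing the
device a buffer the CPU has written (DMA_TO_DEVICE). The structure and
function names below are illustrative, not dmabounce's real ones:

    /* Illustrative bounce-buffer record; not dmabounce's real structure. */
    struct example_buf {
            void    *cpu_ptr;       /* driver's original buffer */
            void    *bounce_ptr;    /* DMA-able bounce buffer   */
            size_t  size;
    };

    static void example_sync_for_cpu(struct example_buf *buf,
                                     enum dma_data_direction dir)
    {
            /* CPU takes ownership: only device-written data needs to be
             * copied out of the bounce buffer. */
            if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
                    memcpy(buf->cpu_ptr, buf->bounce_ptr, buf->size);
    }

    static void example_sync_for_device(struct example_buf *buf,
                                        enum dma_data_direction dir)
    {
            /* Device takes ownership: only CPU-written data needs to be
             * copied into the bounce buffer. */
            if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
                    memcpy(buf->bounce_ptr, buf->cpu_ptr, buf->size);
    }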
We can translate a struct page directly to a DMA address using
page_to_dma(). No need to use page_address() followed by
virt_to_dma().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
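In other words, the substitution is roughly the following (a fragment
only; variable names and surrounding context are illustrative):

    /* Old pattern: requires the page to have a kernel virtual address. */
    dma_addr_t addr = virt_to_dma(dev, page_address(page));

    /* Direct translation: works from the struct page itself, highmem
     * included. */
    dma_addr_t addr2 = page_to_dma(dev, page);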
Convert the existing dma_sync_single_for_* APIs to the new range-based
APIs, and make the dma_sync_single_for_* API a superset of it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
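One way that composition can look (a sketch; exact prototypes and where
they live may differ):

    /* Sketch: the whole-buffer sync expressed as a zero-offset,
     * full-length range sync. */
    static inline void example_sync_single_for_cpu(struct device *dev,
                                                   dma_addr_t handle,
                                                   size_t size,
                                                   enum dma_data_direction dir)
    {
            dma_sync_single_range_for_cpu(dev, handle, 0, size, dir);
    }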
OMAP at least gets the return type(s) for the DMA translation functions
wrong, which can lead to subtle errors. Avoid this by moving the DMA
translation functions to asm/dma-mapping.h, and converting them to
inline functions.
Fix the OMAP DMA translation macros to use the correct argument and
result types.
Also, remove the unnecessary casts in dmabounce.c.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
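The type issue is easy to illustrate: a macro lets whatever type the
expression happens to produce leak into callers, while a static inline
pins down both the argument and the result. The names below are
illustrative, not the actual OMAP ones:

    /* Macro form: no type checking, the result type is whatever the
     * expression yields (often a plain unsigned long). */
    #define example_virt_to_dma(dev, addr)                         \
            __virt_to_bus((unsigned long)(addr))

    /* Inline form: argument and return types are fixed, so a mismatch
     * at a call site becomes a compiler diagnostic. */
    static inline dma_addr_t example_virt_to_dma_fn(struct device *dev,
                                                    void *addr)
    {
            return (dma_addr_t)__virt_to_bus((unsigned long)addr);
    }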
Move platform independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>