Merge tag 'dma-mapping-4.13' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping infrastructure from Christoph Hellwig:
 "This is the first pull request for the new dma-mapping subsystem

  In this new subsystem we'll try to properly maintain all the generic
  code related to dma-mapping, and will further consolidate arch code
  into common helpers.

  This pull request contains:

   - removal of the DMA_ERROR_CODE macro, replacing it with calls to
     ->mapping_error so that the dma_map_ops instances are more self
     contained and can be shared across architectures (me)

   - removal of the ->set_dma_mask method, which duplicates the
     ->dma_supported one in terms of functionality, but requires more
     duplicate code.

   - various updates for the coherent dma pool and related arm code
     (Vladimir)

   - various smaller cleanups (me)"
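
The first item above amounts to a per-instance error convention: instead of
every architecture comparing returned handles against a global DMA_ERROR_CODE,
each dma_map_ops instance picks its own sentinel value and reports it through
a ->mapping_error callback, so generic code only ever calls
dma_mapping_error(). A minimal sketch of a converted ops structure follows;
the foo_* names and FOO_MAPPING_ERROR are hypothetical, not code from this
series:

	#include <linux/mm.h>
	#include <linux/dma-mapping.h>

	#define FOO_MAPPING_ERROR	(~(dma_addr_t)0x0)

	static dma_addr_t foo_map_page(struct device *dev, struct page *page,
				       unsigned long offset, size_t size,
				       enum dma_data_direction dir,
				       unsigned long attrs)
	{
		phys_addr_t phys = page_to_phys(page) + offset;

		/* was: return DMA_ERROR_CODE on failure */
		if (phys + size - 1 > dma_get_mask(dev))
			return FOO_MAPPING_ERROR;

		return phys;
	}

	static int foo_mapping_error(struct device *dev, dma_addr_t dma_addr)
	{
		return dma_addr == FOO_MAPPING_ERROR;
	}

	static const struct dma_map_ops foo_dma_ops = {
		.map_page	= foo_map_page,
		.mapping_error	= foo_mapping_error,
		/* ... */
	};

The sentinel never leaves the ops implementation; drivers keep calling
dma_mapping_error(dev, handle) exactly as before.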

* tag 'dma-mapping-4.13' of git://git.infradead.org/users/hch/dma-mapping: (56 commits)
  ARM: dma-mapping: Remove traces of NOMMU code
  ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  ARM: NOMMU: Introduce dma operations for noMMU
  drivers: dma-mapping: allow dma_common_mmap() for NOMMU
  drivers: dma-coherent: Introduce default DMA pool
  drivers: dma-coherent: Account dma_pfn_offset when used with device tree
  dma: Take into account dma_pfn_offset
  dma-mapping: replace dmam_alloc_noncoherent with dmam_alloc_attrs
  dma-mapping: remove dmam_free_noncoherent
  crypto: qat - avoid an uninitialized variable warning
  au1100fb: remove a bogus dma_free_nonconsistent call
  MAINTAINERS: add entry for dma mapping helpers
  powerpc: merge __dma_set_mask into dma_set_mask
  dma-mapping: remove the set_dma_mask method
  powerpc/cell: use the dma_supported method for ops switching
  powerpc/cell: clean up fixed mapping dma_ops initialization
  tile: remove dma_supported and mapping_error methods
  xen-swiotlb: remove xen_swiotlb_set_dma_mask
  arm: implement ->dma_supported instead of ->set_dma_mask
  mips/loongson64: implement ->dma_supported instead of ->set_dma_mask
  ...
Linus Torvalds 2017-07-06 19:20:54 -07:00
commit f72e24a124
75 changed files with 779 additions and 726 deletions

View File

@@ -550,32 +550,11 @@ and to unmap it:
 	dma_unmap_single(dev, dma_handle, size, direction);
 
 You should call dma_mapping_error() as dma_map_single() could fail and return
-error. Not all DMA implementations support the dma_mapping_error() interface.
-However, it is a good practice to call dma_mapping_error() interface, which
-will invoke the generic mapping error check interface. Doing so will ensure
-that the mapping code will work correctly on all DMA implementations without
-any dependency on the specifics of the underlying implementation. Using the
-returned address without checking for errors could result in failures ranging
-from panics to silent data corruption. A couple of examples of incorrect ways
-to check for errors that make assumptions about the underlying DMA
-implementation are as follows and these are applicable to dma_map_page() as
-well.
-
-Incorrect example 1:
-	dma_addr_t dma_handle;
-
-	dma_handle = dma_map_single(dev, addr, size, direction);
-	if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
-		goto map_error;
-	}
-
-Incorrect example 2:
-	dma_addr_t dma_handle;
-
-	dma_handle = dma_map_single(dev, addr, size, direction);
-	if (dma_handle == DMA_ERROR_CODE) {
-		goto map_error;
-	}
+error. Doing so will ensure that the mapping code will work correctly on all
+DMA implementations without any dependency on the specifics of the underlying
+implementation. Using the returned address without checking for errors could
+result in failures ranging from panics to silent data corruption. The same
+applies to dma_map_page() as well.
 
 You should call dma_unmap_single() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.
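
For reference, the check that the two deleted "incorrect" examples are
replaced by is the implementation-independent one already recommended
elsewhere in this HOWTO; roughly (dev, addr, size and direction being
whatever the driver already has at hand):

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/* reduce DMA pressure, retry later, or reset the driver */
		goto map_error_handling;
	}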

View File

@@ -68,6 +68,9 @@ Linux implementation note:
 - If a "linux,cma-default" property is present, then Linux will use the
   region for the default pool of the contiguous memory allocator.
 
+- If a "linux,dma-default" property is present, then Linux will use the
+  region for the default pool of the consistent DMA allocator.
+
 Device node references to reserved memory
 -----------------------------------------
 Regions in the /reserved-memory node may be referenced by other device

View File

@@ -240,10 +240,9 @@ CLOCK
 DMA
   dmam_alloc_coherent()
-  dmam_alloc_noncoherent()
+  dmam_alloc_attrs()
   dmam_declare_coherent_memory()
   dmam_free_coherent()
-  dmam_free_noncoherent()
   dmam_pool_create()
   dmam_pool_destroy()
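
The devres list change mirrors the caller conversion done in this series:
managed non-coherent allocations move to the attrs interface. A conversion
looks roughly like this (buf, dma_handle and size are the driver's own
variables):

	/* before */
	buf = dmam_alloc_noncoherent(dev, size, &dma_handle, GFP_KERNEL);

	/* after: same semantics, expressed through the attrs interface */
	buf = dmam_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
			       DMA_ATTR_NON_CONSISTENT);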

View File

@@ -2631,6 +2631,21 @@ S: Maintained
 F: net/bluetooth/
 F: include/net/bluetooth/
 
+DMA MAPPING HELPERS
+M: Christoph Hellwig <hch@lst.de>
+M: Marek Szyprowski <m.szyprowski@samsung.com>
+R: Robin Murphy <robin.murphy@arm.com>
+L: linux-kernel@vger.kernel.org
+T: git git://git.infradead.org/users/hch/dma-mapping.git
+W: http://git.infradead.org/users/hch/dma-mapping.git
+S: Supported
+F: lib/dma-debug.c
+F: lib/dma-noop.c
+F: lib/dma-virt.c
+F: drivers/base/dma-mapping.c
+F: drivers/base/dma-coherent.c
+F: include/linux/dma-mapping.h
+
 BONDING DRIVER
 M: Jay Vosburgh <j.vosburgh@gmail.com>
 M: Veaceslav Falico <vfalico@gmail.com>

View File

@@ -22,6 +22,7 @@ config ARM
 	select CLONE_BACKWARDS
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select DMA_NOOP_OPS if !MMU
 	select EDAC_SUPPORT
 	select EDAC_ATOMIC_SCRUB
 	select GENERIC_ALLOCATOR

View File

@ -33,6 +33,7 @@
#include <linux/scatterlist.h> #include <linux/scatterlist.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/dma-iommu.h>
#undef STATS #undef STATS
@ -256,7 +257,7 @@ static inline dma_addr_t map_single(struct device *dev, void *ptr, size_t size,
if (buf == NULL) { if (buf == NULL) {
dev_err(dev, "%s: unable to map unsafe buffer %p!\n", dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
__func__, ptr); __func__, ptr);
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n", dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
@ -326,7 +327,7 @@ static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page,
ret = needs_bounce(dev, dma_addr, size); ret = needs_bounce(dev, dma_addr, size);
if (ret < 0) if (ret < 0)
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
if (ret == 0) { if (ret == 0) {
arm_dma_ops.sync_single_for_device(dev, dma_addr, size, dir); arm_dma_ops.sync_single_for_device(dev, dma_addr, size, dir);
@ -335,7 +336,7 @@ static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page,
if (PageHighMem(page)) { if (PageHighMem(page)) {
dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n"); dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n");
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
return map_single(dev, page_address(page) + offset, size, dir, attrs); return map_single(dev, page_address(page) + offset, size, dir, attrs);
@ -444,12 +445,17 @@ static void dmabounce_sync_for_device(struct device *dev,
arm_dma_ops.sync_single_for_device(dev, handle, size, dir); arm_dma_ops.sync_single_for_device(dev, handle, size, dir);
} }
static int dmabounce_set_mask(struct device *dev, u64 dma_mask) static int dmabounce_dma_supported(struct device *dev, u64 dma_mask)
{ {
if (dev->archdata.dmabounce) if (dev->archdata.dmabounce)
return 0; return 0;
return arm_dma_ops.set_dma_mask(dev, dma_mask); return arm_dma_ops.dma_supported(dev, dma_mask);
}
static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return arm_dma_ops.mapping_error(dev, dma_addr);
} }
static const struct dma_map_ops dmabounce_ops = { static const struct dma_map_ops dmabounce_ops = {
@ -465,7 +471,8 @@ static const struct dma_map_ops dmabounce_ops = {
.unmap_sg = arm_dma_unmap_sg, .unmap_sg = arm_dma_unmap_sg,
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu, .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device, .sync_sg_for_device = arm_dma_sync_sg_for_device,
.set_dma_mask = dmabounce_set_mask, .dma_supported = dmabounce_dma_supported,
.mapping_error = dmabounce_mapping_error,
}; };
static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev, static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev,

View File

@ -9,6 +9,8 @@
#include <linux/kmemcheck.h> #include <linux/kmemcheck.h>
#include <linux/kref.h> #include <linux/kref.h>
#define ARM_MAPPING_ERROR (~(dma_addr_t)0x0)
struct dma_iommu_mapping { struct dma_iommu_mapping {
/* iommu specific data */ /* iommu specific data */
struct iommu_domain *domain; struct iommu_domain *domain;
@ -33,5 +35,7 @@ int arm_iommu_attach_device(struct device *dev,
struct dma_iommu_mapping *mapping); struct dma_iommu_mapping *mapping);
void arm_iommu_detach_device(struct device *dev); void arm_iommu_detach_device(struct device *dev);
int arm_dma_supported(struct device *dev, u64 mask);
#endif /* __KERNEL__ */ #endif /* __KERNEL__ */
#endif #endif

View File

@ -12,18 +12,14 @@
#include <xen/xen.h> #include <xen/xen.h>
#include <asm/xen/hypervisor.h> #include <asm/xen/hypervisor.h>
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
extern const struct dma_map_ops arm_dma_ops; extern const struct dma_map_ops arm_dma_ops;
extern const struct dma_map_ops arm_coherent_dma_ops; extern const struct dma_map_ops arm_coherent_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{ {
return &arm_dma_ops; return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : &dma_noop_ops;
} }
#define HAVE_ARCH_DMA_SUPPORTED 1
extern int dma_supported(struct device *dev, u64 mask);
#ifdef __arch_page_to_dma #ifdef __arch_page_to_dma
#error Please update to __arch_pfn_to_dma #error Please update to __arch_pfn_to_dma
#endif #endif

View File

@@ -1045,8 +1045,8 @@ config ARM_L1_CACHE_SHIFT
 	default 5
 
 config ARM_DMA_MEM_BUFFERABLE
-	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
-	default y if CPU_V6 || CPU_V6K || CPU_V7
+	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
+	default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
 	help
 	  Historically, the kernel has used strongly ordered mappings to
 	  provide DMA coherent memory. With the advent of ARMv7, mapping
@@ -1061,6 +1061,10 @@ config ARM_DMA_MEM_BUFFERABLE
 	  and therefore turning this on may result in unpredictable driver
 	  behaviour. Therefore, we offer this as an option.
 
+	  On some of the beefier ARMv7-M machines (with DMA and write
+	  buffers) you likely want this enabled, while those that
+	  didn't need it until now also won't need it in the future.
+
 	  You are recommended say 'Y' here and debug any affected drivers.
 
 config ARM_HEAVY_MB

View File

@@ -2,9 +2,8 @@
 # Makefile for the linux arm-specific parts of the memory manager.
 #
 
-obj-y	:= dma-mapping.o extable.o fault.o init.o \
-	   iomap.o
+obj-y	:= extable.o fault.o init.o iomap.o
+obj-y	+= dma-mapping$(MMUEXT).o
 
 obj-$(CONFIG_MMU)	+= fault-armv.o flush.o idmap.o ioremap.o \
 			   mmap.o pgd.o mmu.o pageattr.o

View File

@ -0,0 +1,228 @@
/*
* Based on linux/arch/arm/mm/dma-mapping.c
*
* Copyright (C) 2000-2004 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <asm/cachetype.h>
#include <asm/cacheflush.h>
#include <asm/outercache.h>
#include <asm/cp15.h>
#include "dma.h"
/*
* dma_noop_ops is used if
* - MMU/MPU is off
* - cpu is v7m w/o cache support
* - device is coherent
* otherwise arm_nommu_dma_ops is used.
*
* arm_nommu_dma_ops rely on consistent DMA memory (please, refer to
* [1] on how to declare such memory).
*
* [1] Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
*/
static void *arm_nommu_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp,
unsigned long attrs)
{
const struct dma_map_ops *ops = &dma_noop_ops;
/*
* We are here because:
* - no consistent DMA region has been defined, so we can't
* continue.
* - there is no space left in consistent DMA region, so we
* only can fallback to generic allocator if we are
* advertised that consistency is not required.
*/
if (attrs & DMA_ATTR_NON_CONSISTENT)
return ops->alloc(dev, size, dma_handle, gfp, attrs);
WARN_ON_ONCE(1);
return NULL;
}
static void arm_nommu_dma_free(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t dma_addr,
unsigned long attrs)
{
const struct dma_map_ops *ops = &dma_noop_ops;
if (attrs & DMA_ATTR_NON_CONSISTENT)
ops->free(dev, size, cpu_addr, dma_addr, attrs);
else
WARN_ON_ONCE(1);
return;
}
static void __dma_page_cpu_to_dev(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
dmac_map_area(__va(paddr), size, dir);
if (dir == DMA_FROM_DEVICE)
outer_inv_range(paddr, paddr + size);
else
outer_clean_range(paddr, paddr + size);
}
static void __dma_page_dev_to_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
if (dir != DMA_TO_DEVICE) {
outer_inv_range(paddr, paddr + size);
dmac_unmap_area(__va(paddr), size, dir);
}
}
static dma_addr_t arm_nommu_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
unsigned long attrs)
{
dma_addr_t handle = page_to_phys(page) + offset;
__dma_page_cpu_to_dev(handle, size, dir);
return handle;
}
static void arm_nommu_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
__dma_page_dev_to_cpu(handle, size, dir);
}
static int arm_nommu_dma_map_sg(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir,
unsigned long attrs)
{
int i;
struct scatterlist *sg;
for_each_sg(sgl, sg, nents, i) {
sg_dma_address(sg) = sg_phys(sg);
sg_dma_len(sg) = sg->length;
__dma_page_cpu_to_dev(sg_dma_address(sg), sg_dma_len(sg), dir);
}
return nents;
}
static void arm_nommu_dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir,
unsigned long attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sgl, sg, nents, i)
__dma_page_dev_to_cpu(sg_dma_address(sg), sg_dma_len(sg), dir);
}
static void arm_nommu_dma_sync_single_for_device(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
__dma_page_cpu_to_dev(handle, size, dir);
}
static void arm_nommu_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
__dma_page_cpu_to_dev(handle, size, dir);
}
static void arm_nommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sgl, sg, nents, i)
__dma_page_cpu_to_dev(sg_dma_address(sg), sg_dma_len(sg), dir);
}
static void arm_nommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sgl, sg, nents, i)
__dma_page_dev_to_cpu(sg_dma_address(sg), sg_dma_len(sg), dir);
}
const struct dma_map_ops arm_nommu_dma_ops = {
.alloc = arm_nommu_dma_alloc,
.free = arm_nommu_dma_free,
.map_page = arm_nommu_dma_map_page,
.unmap_page = arm_nommu_dma_unmap_page,
.map_sg = arm_nommu_dma_map_sg,
.unmap_sg = arm_nommu_dma_unmap_sg,
.sync_single_for_device = arm_nommu_dma_sync_single_for_device,
.sync_single_for_cpu = arm_nommu_dma_sync_single_for_cpu,
.sync_sg_for_device = arm_nommu_dma_sync_sg_for_device,
.sync_sg_for_cpu = arm_nommu_dma_sync_sg_for_cpu,
};
EXPORT_SYMBOL(arm_nommu_dma_ops);
static const struct dma_map_ops *arm_nommu_get_dma_map_ops(bool coherent)
{
return coherent ? &dma_noop_ops : &arm_nommu_dma_ops;
}
void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
const struct iommu_ops *iommu, bool coherent)
{
const struct dma_map_ops *dma_ops;
if (IS_ENABLED(CONFIG_CPU_V7M)) {
/*
* Cache support for v7m is optional, so can be treated as
* coherent if no cache has been detected. Note that it is not
* enough to check if MPU is in use or not since in absense of
* MPU system memory map is used.
*/
dev->archdata.dma_coherent = (cacheid) ? coherent : true;
} else {
/*
* Assume coherent DMA in case MMU/MPU has not been set up.
*/
dev->archdata.dma_coherent = (get_cr() & CR_M) ? coherent : true;
}
dma_ops = arm_nommu_get_dma_map_ops(dev->archdata.dma_coherent);
set_dma_ops(dev, dma_ops);
}
void arch_teardown_dma_ops(struct device *dev)
{
}
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
static int __init dma_debug_do_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
core_initcall(dma_debug_do_init);
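
As the comment at the top of this new file notes, a driver on such a noMMU
system either allocates from a declared consistent DMA pool or explicitly
opts out of consistency. A hypothetical driver fragment (not part of the
patch) taking the second path might look like:

	void *buf;
	dma_addr_t dma;

	/* no "shared-dma-pool" region needed: fall back to the generic
	 * allocator and manage coherency by hand */
	buf = dma_alloc_attrs(dev, PAGE_SIZE, &dma, GFP_KERNEL,
			      DMA_ATTR_NON_CONSISTENT);
	if (!buf)
		return -ENOMEM;

	/* CPU filled the buffer; make it visible to the device */
	dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);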

View File

@ -180,6 +180,11 @@ static void arm_dma_sync_single_for_device(struct device *dev,
__dma_page_cpu_to_dev(page, offset, size, dir); __dma_page_cpu_to_dev(page, offset, size, dir);
} }
static int arm_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return dma_addr == ARM_MAPPING_ERROR;
}
const struct dma_map_ops arm_dma_ops = { const struct dma_map_ops arm_dma_ops = {
.alloc = arm_dma_alloc, .alloc = arm_dma_alloc,
.free = arm_dma_free, .free = arm_dma_free,
@ -193,6 +198,8 @@ const struct dma_map_ops arm_dma_ops = {
.sync_single_for_device = arm_dma_sync_single_for_device, .sync_single_for_device = arm_dma_sync_single_for_device,
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu, .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device, .sync_sg_for_device = arm_dma_sync_sg_for_device,
.mapping_error = arm_dma_mapping_error,
.dma_supported = arm_dma_supported,
}; };
EXPORT_SYMBOL(arm_dma_ops); EXPORT_SYMBOL(arm_dma_ops);
@ -211,6 +218,8 @@ const struct dma_map_ops arm_coherent_dma_ops = {
.get_sgtable = arm_dma_get_sgtable, .get_sgtable = arm_dma_get_sgtable,
.map_page = arm_coherent_dma_map_page, .map_page = arm_coherent_dma_map_page,
.map_sg = arm_dma_map_sg, .map_sg = arm_dma_map_sg,
.mapping_error = arm_dma_mapping_error,
.dma_supported = arm_dma_supported,
}; };
EXPORT_SYMBOL(arm_coherent_dma_ops); EXPORT_SYMBOL(arm_coherent_dma_ops);
@ -344,8 +353,6 @@ static void __dma_free_buffer(struct page *page, size_t size)
} }
} }
#ifdef CONFIG_MMU
static void *__alloc_from_contiguous(struct device *dev, size_t size, static void *__alloc_from_contiguous(struct device *dev, size_t size,
pgprot_t prot, struct page **ret_page, pgprot_t prot, struct page **ret_page,
const void *caller, bool want_vaddr, const void *caller, bool want_vaddr,
@ -647,22 +654,6 @@ static inline pgprot_t __get_dma_pgprot(unsigned long attrs, pgprot_t prot)
return prot; return prot;
} }
#define nommu() 0
#else /* !CONFIG_MMU */
#define nommu() 1
#define __get_dma_pgprot(attrs, prot) __pgprot(0)
#define __alloc_remap_buffer(dev, size, gfp, prot, ret, c, wv) NULL
#define __alloc_from_pool(size, ret_page) NULL
#define __alloc_from_contiguous(dev, size, prot, ret, c, wv, coherent_flag, gfp) NULL
#define __free_from_pool(cpu_addr, size) do { } while (0)
#define __free_from_contiguous(dev, page, cpu_addr, size, wv) do { } while (0)
#define __dma_free_remap(cpu_addr, size) do { } while (0)
#endif /* CONFIG_MMU */
static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp, static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp,
struct page **ret_page) struct page **ret_page)
{ {
@ -799,13 +790,13 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp &= ~(__GFP_COMP); gfp &= ~(__GFP_COMP);
args.gfp = gfp; args.gfp = gfp;
*handle = DMA_ERROR_CODE; *handle = ARM_MAPPING_ERROR;
allowblock = gfpflags_allow_blocking(gfp); allowblock = gfpflags_allow_blocking(gfp);
cma = allowblock ? dev_get_cma_area(dev) : false; cma = allowblock ? dev_get_cma_area(dev) : false;
if (cma) if (cma)
buf->allocator = &cma_allocator; buf->allocator = &cma_allocator;
else if (nommu() || is_coherent) else if (is_coherent)
buf->allocator = &simple_allocator; buf->allocator = &simple_allocator;
else if (allowblock) else if (allowblock)
buf->allocator = &remap_allocator; buf->allocator = &remap_allocator;
@ -854,8 +845,7 @@ static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size, void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs) unsigned long attrs)
{ {
int ret = -ENXIO; int ret;
#ifdef CONFIG_MMU
unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
unsigned long pfn = dma_to_pfn(dev, dma_addr); unsigned long pfn = dma_to_pfn(dev, dma_addr);
@ -870,10 +860,6 @@ static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
vma->vm_end - vma->vm_start, vma->vm_end - vma->vm_start,
vma->vm_page_prot); vma->vm_page_prot);
} }
#else
ret = vm_iomap_memory(vma, vma->vm_start,
(vma->vm_end - vma->vm_start));
#endif /* CONFIG_MMU */
return ret; return ret;
} }
@ -892,9 +878,7 @@ int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size, void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs) unsigned long attrs)
{ {
#ifdef CONFIG_MMU
vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot); vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
#endif /* CONFIG_MMU */
return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
} }
@ -1177,11 +1161,10 @@ void arm_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
* during bus mastering, then you would pass 0x00ffffff as the mask * during bus mastering, then you would pass 0x00ffffff as the mask
* to this function. * to this function.
*/ */
int dma_supported(struct device *dev, u64 mask) int arm_dma_supported(struct device *dev, u64 mask)
{ {
return __dma_supported(dev, mask, false); return __dma_supported(dev, mask, false);
} }
EXPORT_SYMBOL(dma_supported);
#define PREALLOC_DMA_DEBUG_ENTRIES 4096 #define PREALLOC_DMA_DEBUG_ENTRIES 4096
@ -1254,7 +1237,7 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
if (i == mapping->nr_bitmaps) { if (i == mapping->nr_bitmaps) {
if (extend_iommu_mapping(mapping)) { if (extend_iommu_mapping(mapping)) {
spin_unlock_irqrestore(&mapping->lock, flags); spin_unlock_irqrestore(&mapping->lock, flags);
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
start = bitmap_find_next_zero_area(mapping->bitmaps[i], start = bitmap_find_next_zero_area(mapping->bitmaps[i],
@ -1262,7 +1245,7 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
if (start > mapping->bits) { if (start > mapping->bits) {
spin_unlock_irqrestore(&mapping->lock, flags); spin_unlock_irqrestore(&mapping->lock, flags);
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
bitmap_set(mapping->bitmaps[i], start, count); bitmap_set(mapping->bitmaps[i], start, count);
@ -1445,7 +1428,7 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size,
int i; int i;
dma_addr = __alloc_iova(mapping, size); dma_addr = __alloc_iova(mapping, size);
if (dma_addr == DMA_ERROR_CODE) if (dma_addr == ARM_MAPPING_ERROR)
return dma_addr; return dma_addr;
iova = dma_addr; iova = dma_addr;
@ -1472,7 +1455,7 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size,
fail: fail:
iommu_unmap(mapping->domain, dma_addr, iova-dma_addr); iommu_unmap(mapping->domain, dma_addr, iova-dma_addr);
__free_iova(mapping, dma_addr, size); __free_iova(mapping, dma_addr, size);
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size) static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
@ -1533,7 +1516,7 @@ static void *__iommu_alloc_simple(struct device *dev, size_t size, gfp_t gfp,
return NULL; return NULL;
*handle = __iommu_create_mapping(dev, &page, size, attrs); *handle = __iommu_create_mapping(dev, &page, size, attrs);
if (*handle == DMA_ERROR_CODE) if (*handle == ARM_MAPPING_ERROR)
goto err_mapping; goto err_mapping;
return addr; return addr;
@ -1561,7 +1544,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
struct page **pages; struct page **pages;
void *addr = NULL; void *addr = NULL;
*handle = DMA_ERROR_CODE; *handle = ARM_MAPPING_ERROR;
size = PAGE_ALIGN(size); size = PAGE_ALIGN(size);
if (coherent_flag == COHERENT || !gfpflags_allow_blocking(gfp)) if (coherent_flag == COHERENT || !gfpflags_allow_blocking(gfp))
@ -1582,7 +1565,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
return NULL; return NULL;
*handle = __iommu_create_mapping(dev, pages, size, attrs); *handle = __iommu_create_mapping(dev, pages, size, attrs);
if (*handle == DMA_ERROR_CODE) if (*handle == ARM_MAPPING_ERROR)
goto err_buffer; goto err_buffer;
if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
@ -1732,10 +1715,10 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
int prot; int prot;
size = PAGE_ALIGN(size); size = PAGE_ALIGN(size);
*handle = DMA_ERROR_CODE; *handle = ARM_MAPPING_ERROR;
iova_base = iova = __alloc_iova(mapping, size); iova_base = iova = __alloc_iova(mapping, size);
if (iova == DMA_ERROR_CODE) if (iova == ARM_MAPPING_ERROR)
return -ENOMEM; return -ENOMEM;
for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) { for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
@ -1775,7 +1758,7 @@ static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
for (i = 1; i < nents; i++) { for (i = 1; i < nents; i++) {
s = sg_next(s); s = sg_next(s);
s->dma_address = DMA_ERROR_CODE; s->dma_address = ARM_MAPPING_ERROR;
s->dma_length = 0; s->dma_length = 0;
if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) { if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
@ -1950,7 +1933,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
int ret, prot, len = PAGE_ALIGN(size + offset); int ret, prot, len = PAGE_ALIGN(size + offset);
dma_addr = __alloc_iova(mapping, len); dma_addr = __alloc_iova(mapping, len);
if (dma_addr == DMA_ERROR_CODE) if (dma_addr == ARM_MAPPING_ERROR)
return dma_addr; return dma_addr;
prot = __dma_info_to_prot(dir, attrs); prot = __dma_info_to_prot(dir, attrs);
@ -1962,7 +1945,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
return dma_addr + offset; return dma_addr + offset;
fail: fail:
__free_iova(mapping, dma_addr, len); __free_iova(mapping, dma_addr, len);
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
/** /**
@ -2056,7 +2039,7 @@ static dma_addr_t arm_iommu_map_resource(struct device *dev,
size_t len = PAGE_ALIGN(size + offset); size_t len = PAGE_ALIGN(size + offset);
dma_addr = __alloc_iova(mapping, len); dma_addr = __alloc_iova(mapping, len);
if (dma_addr == DMA_ERROR_CODE) if (dma_addr == ARM_MAPPING_ERROR)
return dma_addr; return dma_addr;
prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO; prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO;
@ -2068,7 +2051,7 @@ static dma_addr_t arm_iommu_map_resource(struct device *dev,
return dma_addr + offset; return dma_addr + offset;
fail: fail:
__free_iova(mapping, dma_addr, len); __free_iova(mapping, dma_addr, len);
return DMA_ERROR_CODE; return ARM_MAPPING_ERROR;
} }
/** /**
@ -2140,6 +2123,9 @@ const struct dma_map_ops iommu_ops = {
.map_resource = arm_iommu_map_resource, .map_resource = arm_iommu_map_resource,
.unmap_resource = arm_iommu_unmap_resource, .unmap_resource = arm_iommu_unmap_resource,
.mapping_error = arm_dma_mapping_error,
.dma_supported = arm_dma_supported,
}; };
const struct dma_map_ops iommu_coherent_ops = { const struct dma_map_ops iommu_coherent_ops = {
@ -2156,6 +2142,9 @@ const struct dma_map_ops iommu_coherent_ops = {
.map_resource = arm_iommu_map_resource, .map_resource = arm_iommu_map_resource,
.unmap_resource = arm_iommu_unmap_resource, .unmap_resource = arm_iommu_unmap_resource,
.mapping_error = arm_dma_mapping_error,
.dma_supported = arm_dma_supported,
}; };
/** /**

View File

@ -185,23 +185,6 @@ EXPORT_SYMBOL_GPL(xen_destroy_contiguous_region);
const struct dma_map_ops *xen_dma_ops; const struct dma_map_ops *xen_dma_ops;
EXPORT_SYMBOL(xen_dma_ops); EXPORT_SYMBOL(xen_dma_ops);
static const struct dma_map_ops xen_swiotlb_dma_ops = {
.alloc = xen_swiotlb_alloc_coherent,
.free = xen_swiotlb_free_coherent,
.sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
.sync_single_for_device = xen_swiotlb_sync_single_for_device,
.sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
.sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
.map_sg = xen_swiotlb_map_sg_attrs,
.unmap_sg = xen_swiotlb_unmap_sg_attrs,
.map_page = xen_swiotlb_map_page,
.unmap_page = xen_swiotlb_unmap_page,
.dma_supported = xen_swiotlb_dma_supported,
.set_dma_mask = xen_swiotlb_set_dma_mask,
.mmap = xen_swiotlb_dma_mmap,
.get_sgtable = xen_swiotlb_get_sgtable,
};
int __init xen_mm_init(void) int __init xen_mm_init(void)
{ {
struct gnttab_cache_flush cflush; struct gnttab_cache_flush cflush;

View File

@ -24,7 +24,6 @@
#include <xen/xen.h> #include <xen/xen.h>
#include <asm/xen/hypervisor.h> #include <asm/xen/hypervisor.h>
#define DMA_ERROR_CODE (~(dma_addr_t)0)
extern const struct dma_map_ops dummy_dma_ops; extern const struct dma_map_ops dummy_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)

View File

@ -175,7 +175,6 @@ static void *__dma_alloc(struct device *dev, size_t size,
no_map: no_map:
__dma_free_coherent(dev, size, ptr, *dma_handle, attrs); __dma_free_coherent(dev, size, ptr, *dma_handle, attrs);
no_mem: no_mem:
*dma_handle = DMA_ERROR_CODE;
return NULL; return NULL;
} }
@ -478,7 +477,7 @@ static dma_addr_t __dummy_map_page(struct device *dev, struct page *page,
enum dma_data_direction dir, enum dma_data_direction dir,
unsigned long attrs) unsigned long attrs)
{ {
return DMA_ERROR_CODE; return 0;
} }
static void __dummy_unmap_page(struct device *dev, dma_addr_t dev_addr, static void __dummy_unmap_page(struct device *dev, dma_addr_t dev_addr,

View File

@ -41,6 +41,7 @@ config BLACKFIN
select MODULES_USE_ELF_RELA select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW select HAVE_DEBUG_STACKOVERFLOW
select HAVE_NMI select HAVE_NMI
select ARCH_NO_COHERENT_DMA_MMAP
config GENERIC_CSUM config GENERIC_CSUM
def_bool y def_bool y

View File

@ -12,11 +12,6 @@
#ifndef _ASM_C6X_DMA_MAPPING_H #ifndef _ASM_C6X_DMA_MAPPING_H
#define _ASM_C6X_DMA_MAPPING_H #define _ASM_C6X_DMA_MAPPING_H
/*
* DMA errors are defined by all-bits-set in the DMA address.
*/
#define DMA_ERROR_CODE ~0
extern const struct dma_map_ops c6x_dma_ops; extern const struct dma_map_ops c6x_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)

View File

@ -29,8 +29,6 @@
#include <asm/io.h> #include <asm/io.h>
struct device; struct device;
extern int bad_dma_address;
#define DMA_ERROR_CODE bad_dma_address
extern const struct dma_map_ops *dma_ops; extern const struct dma_map_ops *dma_ops;
@ -39,9 +37,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return dma_ops; return dma_ops;
} }
#define HAVE_ARCH_DMA_SUPPORTED 1
extern int dma_supported(struct device *dev, u64 mask);
extern int dma_is_consistent(struct device *dev, dma_addr_t dma_handle);
extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size, extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction); enum dma_data_direction direction);

View File

@ -25,25 +25,16 @@
#include <linux/module.h> #include <linux/module.h>
#include <asm/page.h> #include <asm/page.h>
#define HEXAGON_MAPPING_ERROR 0
const struct dma_map_ops *dma_ops; const struct dma_map_ops *dma_ops;
EXPORT_SYMBOL(dma_ops); EXPORT_SYMBOL(dma_ops);
int bad_dma_address; /* globals are automatically initialized to zero */
static inline void *dma_addr_to_virt(dma_addr_t dma_addr) static inline void *dma_addr_to_virt(dma_addr_t dma_addr)
{ {
return phys_to_virt((unsigned long) dma_addr); return phys_to_virt((unsigned long) dma_addr);
} }
int dma_supported(struct device *dev, u64 mask)
{
if (mask == DMA_BIT_MASK(32))
return 1;
else
return 0;
}
EXPORT_SYMBOL(dma_supported);
static struct gen_pool *coherent_pool; static struct gen_pool *coherent_pool;
@ -181,7 +172,7 @@ static dma_addr_t hexagon_map_page(struct device *dev, struct page *page,
WARN_ON(size == 0); WARN_ON(size == 0);
if (!check_addr("map_single", dev, bus, size)) if (!check_addr("map_single", dev, bus, size))
return bad_dma_address; return HEXAGON_MAPPING_ERROR;
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
dma_sync(dma_addr_to_virt(bus), size, dir); dma_sync(dma_addr_to_virt(bus), size, dir);
@ -203,6 +194,11 @@ static void hexagon_sync_single_for_device(struct device *dev,
dma_sync(dma_addr_to_virt(dma_handle), size, dir); dma_sync(dma_addr_to_virt(dma_handle), size, dir);
} }
static int hexagon_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return dma_addr == HEXAGON_MAPPING_ERROR;
}
const struct dma_map_ops hexagon_dma_ops = { const struct dma_map_ops hexagon_dma_ops = {
.alloc = hexagon_dma_alloc_coherent, .alloc = hexagon_dma_alloc_coherent,
.free = hexagon_free_coherent, .free = hexagon_free_coherent,
@ -210,6 +206,7 @@ const struct dma_map_ops hexagon_dma_ops = {
.map_page = hexagon_map_page, .map_page = hexagon_map_page,
.sync_single_for_cpu = hexagon_sync_single_for_cpu, .sync_single_for_cpu = hexagon_sync_single_for_cpu,
.sync_single_for_device = hexagon_sync_single_for_device, .sync_single_for_device = hexagon_sync_single_for_device,
.mapping_error = hexagon_mapping_error,
.is_phys = 1, .is_phys = 1,
}; };

View File

@ -40,7 +40,6 @@ EXPORT_SYMBOL(memset);
/* Additional variables */ /* Additional variables */
EXPORT_SYMBOL(__phys_offset); EXPORT_SYMBOL(__phys_offset);
EXPORT_SYMBOL(_dflt_cache_att); EXPORT_SYMBOL(_dflt_cache_att);
EXPORT_SYMBOL(bad_dma_address);
#define DECLARE_EXPORT(name) \ #define DECLARE_EXPORT(name) \
extern void name(void); EXPORT_SYMBOL(name) extern void name(void); EXPORT_SYMBOL(name)

View File

@ -12,8 +12,6 @@
#define ARCH_HAS_DMA_GET_REQUIRED_MASK #define ARCH_HAS_DMA_GET_REQUIRED_MASK
#define DMA_ERROR_CODE 0
extern const struct dma_map_ops *dma_ops; extern const struct dma_map_ops *dma_ops;
extern struct ia64_machine_vector ia64_mv; extern struct ia64_machine_vector ia64_mv;
extern void set_iommu_machvec(void); extern void set_iommu_machvec(void);

View File

@ -19,6 +19,7 @@ config M32R
select HAVE_DEBUG_STACKOVERFLOW select HAVE_DEBUG_STACKOVERFLOW
select CPU_NO_EFFICIENT_FFS select CPU_NO_EFFICIENT_FFS
select DMA_NOOP_OPS select DMA_NOOP_OPS
select ARCH_NO_COHERENT_DMA_MMAP if !MMU
config SBUS config SBUS
bool bool

View File

@ -8,8 +8,6 @@
#include <linux/dma-debug.h> #include <linux/dma-debug.h>
#include <linux/io.h> #include <linux/io.h>
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{ {
return &dma_noop_ops; return &dma_noop_ops;

View File

@ -2,6 +2,7 @@ config M68K
bool bool
default y default y
select ARCH_MIGHT_HAVE_PC_PARPORT if ISA select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
select ARCH_NO_COHERENT_DMA_MMAP if !MMU
select HAVE_IDE select HAVE_IDE
select HAVE_AOUT if MMU select HAVE_AOUT if MMU
select HAVE_DEBUG_BUGVERBOSE select HAVE_DEBUG_BUGVERBOSE

View File

@ -2,6 +2,7 @@ config MICROBLAZE
def_bool y def_bool y
select ARCH_HAS_GCOV_PROFILE_ALL select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_MIGHT_HAVE_PC_PARPORT select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_NO_COHERENT_DMA_MMAP if !MMU
select ARCH_WANT_IPC_PARSE_VERSION select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_EXTABLE_SORT select BUILDTIME_EXTABLE_SORT
select TIMER_OF select TIMER_OF

View File

@ -28,8 +28,6 @@
#include <asm/io.h> #include <asm/io.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
#define __dma_alloc_coherent(dev, gfp, size, handle) NULL #define __dma_alloc_coherent(dev, gfp, size, handle) NULL
#define __dma_free_coherent(size, addr) ((void)0) #define __dma_free_coherent(size, addr) ((void)0)

View File

@ -75,19 +75,11 @@ static void loongson_dma_sync_sg_for_device(struct device *dev,
mb(); mb();
} }
static int loongson_dma_set_mask(struct device *dev, u64 mask) static int loongson_dma_supported(struct device *dev, u64 mask)
{ {
if (!dev->dma_mask || !dma_supported(dev, mask)) if (mask > DMA_BIT_MASK(loongson_sysconf.dma_mask_bits))
return -EIO; return 0;
return swiotlb_dma_supported(dev, mask);
if (mask > DMA_BIT_MASK(loongson_sysconf.dma_mask_bits)) {
*dev->dma_mask = DMA_BIT_MASK(loongson_sysconf.dma_mask_bits);
return -EIO;
}
*dev->dma_mask = mask;
return 0;
} }
dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr) dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
@ -126,8 +118,7 @@ static const struct dma_map_ops loongson_dma_map_ops = {
.sync_sg_for_cpu = swiotlb_sync_sg_for_cpu, .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
.sync_sg_for_device = loongson_dma_sync_sg_for_device, .sync_sg_for_device = loongson_dma_sync_sg_for_device,
.mapping_error = swiotlb_dma_mapping_error, .mapping_error = swiotlb_dma_mapping_error,
.dma_supported = swiotlb_dma_supported, .dma_supported = loongson_dma_supported,
.set_dma_mask = loongson_dma_set_mask
}; };
void __init plat_swiotlb_setup(void) void __init plat_swiotlb_setup(void)

View File

@ -26,8 +26,6 @@
#include <linux/kmemcheck.h> #include <linux/kmemcheck.h>
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
extern const struct dma_map_ops or1k_dma_map_ops; extern const struct dma_map_ops or1k_dma_map_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
@ -35,11 +33,4 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return &or1k_dma_map_ops; return &or1k_dma_map_ops;
} }
#define HAVE_ARCH_DMA_SUPPORTED 1
static inline int dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
#endif /* __ASM_OPENRISC_DMA_MAPPING_H */ #endif /* __ASM_OPENRISC_DMA_MAPPING_H */

View File

@ -17,10 +17,6 @@
#include <asm/io.h> #include <asm/io.h>
#include <asm/swiotlb.h> #include <asm/swiotlb.h>
#ifdef CONFIG_PPC64
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
#endif
/* Some dma direct funcs must be visible for use in other dma_ops */ /* Some dma direct funcs must be visible for use in other dma_ops */
extern void *__dma_direct_alloc_coherent(struct device *dev, size_t size, extern void *__dma_direct_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag, dma_addr_t *dma_handle, gfp_t flag,
@ -116,7 +112,6 @@ static inline void set_dma_offset(struct device *dev, dma_addr_t off)
#define HAVE_ARCH_DMA_SET_MASK 1 #define HAVE_ARCH_DMA_SET_MASK 1
extern int dma_set_mask(struct device *dev, u64 dma_mask); extern int dma_set_mask(struct device *dev, u64 dma_mask);
extern int __dma_set_mask(struct device *dev, u64 dma_mask);
extern u64 __dma_get_required_mask(struct device *dev); extern u64 __dma_get_required_mask(struct device *dev);
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)

View File

@ -139,6 +139,8 @@ struct scatterlist;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC64
#define IOMMU_MAPPING_ERROR (~(dma_addr_t)0x0)
static inline void set_iommu_table_base(struct device *dev, static inline void set_iommu_table_base(struct device *dev,
struct iommu_table *base) struct iommu_table *base)
{ {
@ -238,6 +240,8 @@ static inline int __init tce_iommu_bus_notifier_init(void)
} }
#endif /* !CONFIG_IOMMU_API */ #endif /* !CONFIG_IOMMU_API */
int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr);
#else #else
static inline void *get_iommu_table_base(struct device *dev) static inline void *get_iommu_table_base(struct device *dev)

View File

@ -105,6 +105,11 @@ static u64 dma_iommu_get_required_mask(struct device *dev)
return mask; return mask;
} }
int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return dma_addr == IOMMU_MAPPING_ERROR;
}
struct dma_map_ops dma_iommu_ops = { struct dma_map_ops dma_iommu_ops = {
.alloc = dma_iommu_alloc_coherent, .alloc = dma_iommu_alloc_coherent,
.free = dma_iommu_free_coherent, .free = dma_iommu_free_coherent,
@ -115,5 +120,6 @@ struct dma_map_ops dma_iommu_ops = {
.map_page = dma_iommu_map_page, .map_page = dma_iommu_map_page,
.unmap_page = dma_iommu_unmap_page, .unmap_page = dma_iommu_unmap_page,
.get_required_mask = dma_iommu_get_required_mask, .get_required_mask = dma_iommu_get_required_mask,
.mapping_error = dma_iommu_mapping_error,
}; };
EXPORT_SYMBOL(dma_iommu_ops); EXPORT_SYMBOL(dma_iommu_ops);

View File

@ -314,18 +314,6 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16) #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
int __dma_set_mask(struct device *dev, u64 dma_mask)
{
const struct dma_map_ops *dma_ops = get_dma_ops(dev);
if ((dma_ops != NULL) && (dma_ops->set_dma_mask != NULL))
return dma_ops->set_dma_mask(dev, dma_mask);
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
int dma_set_mask(struct device *dev, u64 dma_mask) int dma_set_mask(struct device *dev, u64 dma_mask)
{ {
if (ppc_md.dma_set_mask) if (ppc_md.dma_set_mask)
@ -338,7 +326,10 @@ int dma_set_mask(struct device *dev, u64 dma_mask)
return phb->controller_ops.dma_set_mask(pdev, dma_mask); return phb->controller_ops.dma_set_mask(pdev, dma_mask);
} }
return __dma_set_mask(dev, dma_mask); if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
} }
EXPORT_SYMBOL(dma_set_mask); EXPORT_SYMBOL(dma_set_mask);
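
With __dma_set_mask() and the ->set_dma_mask hook gone, dma_set_mask() is a
single generic path that ends in the ops' ->dma_supported() check (plus the
ppc_md/controller hooks above). Driver code is unaffected and keeps using the
usual idiom; a hypothetical PCI probe fragment:

	/* request 64-bit DMA, fall back to 32-bit; failure of both means
	 * the platform's ->dma_supported() rejected the mask */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
		return -ENODEV;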

View File

@ -198,11 +198,11 @@ static unsigned long iommu_range_alloc(struct device *dev,
if (unlikely(npages == 0)) { if (unlikely(npages == 0)) {
if (printk_ratelimit()) if (printk_ratelimit())
WARN_ON(1); WARN_ON(1);
return DMA_ERROR_CODE; return IOMMU_MAPPING_ERROR;
} }
if (should_fail_iommu(dev)) if (should_fail_iommu(dev))
return DMA_ERROR_CODE; return IOMMU_MAPPING_ERROR;
/* /*
* We don't need to disable preemption here because any CPU can * We don't need to disable preemption here because any CPU can
@ -278,7 +278,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
} else { } else {
/* Give up */ /* Give up */
spin_unlock_irqrestore(&(pool->lock), flags); spin_unlock_irqrestore(&(pool->lock), flags);
return DMA_ERROR_CODE; return IOMMU_MAPPING_ERROR;
} }
} }
@ -310,13 +310,13 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
unsigned long attrs) unsigned long attrs)
{ {
unsigned long entry; unsigned long entry;
dma_addr_t ret = DMA_ERROR_CODE; dma_addr_t ret = IOMMU_MAPPING_ERROR;
int build_fail; int build_fail;
entry = iommu_range_alloc(dev, tbl, npages, NULL, mask, align_order); entry = iommu_range_alloc(dev, tbl, npages, NULL, mask, align_order);
if (unlikely(entry == DMA_ERROR_CODE)) if (unlikely(entry == IOMMU_MAPPING_ERROR))
return DMA_ERROR_CODE; return IOMMU_MAPPING_ERROR;
entry += tbl->it_offset; /* Offset into real TCE table */ entry += tbl->it_offset; /* Offset into real TCE table */
ret = entry << tbl->it_page_shift; /* Set the return dma address */ ret = entry << tbl->it_page_shift; /* Set the return dma address */
@ -328,12 +328,12 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
/* tbl->it_ops->set() only returns non-zero for transient errors. /* tbl->it_ops->set() only returns non-zero for transient errors.
* Clean up the table bitmap in this case and return * Clean up the table bitmap in this case and return
* DMA_ERROR_CODE. For all other errors the functionality is * IOMMU_MAPPING_ERROR. For all other errors the functionality is
* not altered. * not altered.
*/ */
if (unlikely(build_fail)) { if (unlikely(build_fail)) {
__iommu_free(tbl, ret, npages); __iommu_free(tbl, ret, npages);
return DMA_ERROR_CODE; return IOMMU_MAPPING_ERROR;
} }
/* Flush/invalidate TLB caches if necessary */ /* Flush/invalidate TLB caches if necessary */
@ -478,7 +478,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen); DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen);
/* Handle failure */ /* Handle failure */
if (unlikely(entry == DMA_ERROR_CODE)) { if (unlikely(entry == IOMMU_MAPPING_ERROR)) {
if (!(attrs & DMA_ATTR_NO_WARN) && if (!(attrs & DMA_ATTR_NO_WARN) &&
printk_ratelimit()) printk_ratelimit())
dev_info(dev, "iommu_alloc failed, tbl %p " dev_info(dev, "iommu_alloc failed, tbl %p "
@ -545,7 +545,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
*/ */
if (outcount < incount) { if (outcount < incount) {
outs = sg_next(outs); outs = sg_next(outs);
outs->dma_address = DMA_ERROR_CODE; outs->dma_address = IOMMU_MAPPING_ERROR;
outs->dma_length = 0; outs->dma_length = 0;
} }
@ -563,7 +563,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
npages = iommu_num_pages(s->dma_address, s->dma_length, npages = iommu_num_pages(s->dma_address, s->dma_length,
IOMMU_PAGE_SIZE(tbl)); IOMMU_PAGE_SIZE(tbl));
__iommu_free(tbl, vaddr, npages); __iommu_free(tbl, vaddr, npages);
s->dma_address = DMA_ERROR_CODE; s->dma_address = IOMMU_MAPPING_ERROR;
s->dma_length = 0; s->dma_length = 0;
} }
if (s == outs) if (s == outs)
@ -777,7 +777,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
unsigned long mask, enum dma_data_direction direction, unsigned long mask, enum dma_data_direction direction,
unsigned long attrs) unsigned long attrs)
{ {
dma_addr_t dma_handle = DMA_ERROR_CODE; dma_addr_t dma_handle = IOMMU_MAPPING_ERROR;
void *vaddr; void *vaddr;
unsigned long uaddr; unsigned long uaddr;
unsigned int npages, align; unsigned int npages, align;
@ -797,7 +797,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
dma_handle = iommu_alloc(dev, tbl, vaddr, npages, direction, dma_handle = iommu_alloc(dev, tbl, vaddr, npages, direction,
mask >> tbl->it_page_shift, align, mask >> tbl->it_page_shift, align,
attrs); attrs);
if (dma_handle == DMA_ERROR_CODE) { if (dma_handle == IOMMU_MAPPING_ERROR) {
if (!(attrs & DMA_ATTR_NO_WARN) && if (!(attrs & DMA_ATTR_NO_WARN) &&
printk_ratelimit()) { printk_ratelimit()) {
dev_info(dev, "iommu_alloc failed, tbl %p " dev_info(dev, "iommu_alloc failed, tbl %p "
@ -869,7 +869,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
io_order = get_iommu_order(size, tbl); io_order = get_iommu_order(size, tbl);
mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL, mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
mask >> tbl->it_page_shift, io_order, 0); mask >> tbl->it_page_shift, io_order, 0);
if (mapping == DMA_ERROR_CODE) { if (mapping == IOMMU_MAPPING_ERROR) {
free_pages((unsigned long)ret, order); free_pages((unsigned long)ret, order);
return NULL; return NULL;
} }

View File

@ -644,32 +644,22 @@ static void dma_fixed_unmap_sg(struct device *dev, struct scatterlist *sg,
direction, attrs); direction, attrs);
} }
static int dma_fixed_dma_supported(struct device *dev, u64 mask) static int dma_suported_and_switch(struct device *dev, u64 dma_mask);
{
return mask == DMA_BIT_MASK(64);
}
static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask);
static const struct dma_map_ops dma_iommu_fixed_ops = { static const struct dma_map_ops dma_iommu_fixed_ops = {
.alloc = dma_fixed_alloc_coherent, .alloc = dma_fixed_alloc_coherent,
.free = dma_fixed_free_coherent, .free = dma_fixed_free_coherent,
.map_sg = dma_fixed_map_sg, .map_sg = dma_fixed_map_sg,
.unmap_sg = dma_fixed_unmap_sg, .unmap_sg = dma_fixed_unmap_sg,
.dma_supported = dma_fixed_dma_supported, .dma_supported = dma_suported_and_switch,
.set_dma_mask = dma_set_mask_and_switch,
.map_page = dma_fixed_map_page, .map_page = dma_fixed_map_page,
.unmap_page = dma_fixed_unmap_page, .unmap_page = dma_fixed_unmap_page,
.mapping_error = dma_iommu_mapping_error,
}; };
static void cell_dma_dev_setup_fixed(struct device *dev);
static void cell_dma_dev_setup(struct device *dev) static void cell_dma_dev_setup(struct device *dev)
{ {
/* Order is important here, these are not mutually exclusive */ if (get_pci_dma_ops() == &dma_iommu_ops)
if (get_dma_ops(dev) == &dma_iommu_fixed_ops)
cell_dma_dev_setup_fixed(dev);
else if (get_pci_dma_ops() == &dma_iommu_ops)
set_iommu_table_base(dev, cell_get_iommu_table(dev)); set_iommu_table_base(dev, cell_get_iommu_table(dev));
else if (get_pci_dma_ops() == &dma_direct_ops) else if (get_pci_dma_ops() == &dma_direct_ops)
set_dma_offset(dev, cell_dma_direct_offset); set_dma_offset(dev, cell_dma_direct_offset);
@ -956,38 +946,29 @@ static u64 cell_iommu_get_fixed_address(struct device *dev)
return dev_addr; return dev_addr;
} }
static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask) static int dma_suported_and_switch(struct device *dev, u64 dma_mask)
{ {
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
if (dma_mask == DMA_BIT_MASK(64) && if (dma_mask == DMA_BIT_MASK(64) &&
cell_iommu_get_fixed_address(dev) != OF_BAD_ADDR) cell_iommu_get_fixed_address(dev) != OF_BAD_ADDR) {
{ u64 addr = cell_iommu_get_fixed_address(dev) +
dma_iommu_fixed_base;
dev_dbg(dev, "iommu: 64-bit OK, using fixed ops\n"); dev_dbg(dev, "iommu: 64-bit OK, using fixed ops\n");
dev_dbg(dev, "iommu: fixed addr = %llx\n", addr);
set_dma_ops(dev, &dma_iommu_fixed_ops); set_dma_ops(dev, &dma_iommu_fixed_ops);
} else { set_dma_offset(dev, addr);
dev_dbg(dev, "iommu: not 64-bit, using default ops\n"); return 1;
set_dma_ops(dev, get_pci_dma_ops());
} }
cell_dma_dev_setup(dev); if (dma_iommu_dma_supported(dev, dma_mask)) {
dev_dbg(dev, "iommu: not 64-bit, using default ops\n");
*dev->dma_mask = dma_mask; set_dma_ops(dev, get_pci_dma_ops());
cell_dma_dev_setup(dev);
return 1;
}
return 0; return 0;
} }
static void cell_dma_dev_setup_fixed(struct device *dev)
{
u64 addr;
addr = cell_iommu_get_fixed_address(dev) + dma_iommu_fixed_base;
set_dma_offset(dev, addr);
dev_dbg(dev, "iommu: fixed addr = %llx\n", addr);
}
static void insert_16M_pte(unsigned long addr, unsigned long *ptab, static void insert_16M_pte(unsigned long addr, unsigned long *ptab,
unsigned long base_pte) unsigned long base_pte)
{ {
@ -1139,7 +1120,7 @@ static int __init cell_iommu_fixed_mapping_init(void)
cell_iommu_setup_window(iommu, np, dbase, dsize, 0); cell_iommu_setup_window(iommu, np, dbase, dsize, 0);
} }
dma_iommu_ops.set_dma_mask = dma_set_mask_and_switch; dma_iommu_ops.dma_supported = dma_suported_and_switch;
set_pci_dma_ops(&dma_iommu_ops); set_pci_dma_ops(&dma_iommu_ops);
return 0; return 0;

View File

@ -519,7 +519,7 @@ static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *page,
{ {
struct vio_dev *viodev = to_vio_dev(dev); struct vio_dev *viodev = to_vio_dev(dev);
struct iommu_table *tbl; struct iommu_table *tbl;
dma_addr_t ret = DMA_ERROR_CODE; dma_addr_t ret = IOMMU_MAPPING_ERROR;
tbl = get_iommu_table_base(dev); tbl = get_iommu_table_base(dev);
if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) { if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) {
@ -625,6 +625,7 @@ static const struct dma_map_ops vio_dma_mapping_ops = {
.unmap_page = vio_dma_iommu_unmap_page, .unmap_page = vio_dma_iommu_unmap_page,
.dma_supported = vio_dma_iommu_dma_supported, .dma_supported = vio_dma_iommu_dma_supported,
.get_required_mask = vio_dma_get_required_mask, .get_required_mask = vio_dma_get_required_mask,
.mapping_error = dma_iommu_mapping_error,
}; };
/** /**

View File

@ -8,8 +8,6 @@
#include <linux/dma-debug.h> #include <linux/dma-debug.h>
#include <linux/io.h> #include <linux/io.h>
#define DMA_ERROR_CODE (~(dma_addr_t) 0x0)
extern const struct dma_map_ops s390_pci_dma_ops; extern const struct dma_map_ops s390_pci_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)

View File

@@ -14,6 +14,8 @@
#include <linux/pci.h>
#include <asm/pci_dma.h>
+#define S390_MAPPING_ERROR (~(dma_addr_t) 0x0)
static struct kmem_cache *dma_region_table_cache;
static struct kmem_cache *dma_page_table_cache;
static int s390_iommu_strict;
@@ -281,7 +283,7 @@ static dma_addr_t dma_alloc_address(struct device *dev, int size)
out_error:
spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags);
-return DMA_ERROR_CODE;
+return S390_MAPPING_ERROR;
}
static void dma_free_address(struct device *dev, dma_addr_t dma_addr, int size)
@@ -329,7 +331,7 @@ static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page,
/* This rounds up number of pages based on size and offset */
nr_pages = iommu_num_pages(pa, size, PAGE_SIZE);
dma_addr = dma_alloc_address(dev, nr_pages);
-if (dma_addr == DMA_ERROR_CODE) {
+if (dma_addr == S390_MAPPING_ERROR) {
ret = -ENOSPC;
goto out_err;
}
@@ -352,7 +354,7 @@ static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page,
out_err:
zpci_err("map error:\n");
zpci_err_dma(ret, pa);
-return DMA_ERROR_CODE;
+return S390_MAPPING_ERROR;
}
static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr,
@@ -429,7 +431,7 @@ static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
int ret;
dma_addr_base = dma_alloc_address(dev, nr_pages);
-if (dma_addr_base == DMA_ERROR_CODE)
+if (dma_addr_base == S390_MAPPING_ERROR)
return -ENOMEM;
dma_addr = dma_addr_base;
@@ -476,7 +478,7 @@ static int s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
for (i = 1; i < nr_elements; i++) {
s = sg_next(s);
-s->dma_address = DMA_ERROR_CODE;
+s->dma_address = S390_MAPPING_ERROR;
s->dma_length = 0;
if (s->offset || (size & ~PAGE_MASK) ||
@@ -525,6 +527,11 @@ static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
s->dma_length = 0;
}
}
+static int s390_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+return dma_addr == S390_MAPPING_ERROR;
+}
int zpci_dma_init_device(struct zpci_dev *zdev)
{
@@ -659,6 +666,7 @@ const struct dma_map_ops s390_pci_dma_ops = {
.unmap_sg = s390_dma_unmap_sg,
.map_page = s390_dma_map_pages,
.unmap_page = s390_dma_unmap_pages,
+.mapping_error = s390_mapping_error,
/* if we support direct DMA this must be conditional */
.is_phys = 0,
/* dma_supported is unconditionally true without a callback */
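With DMA_ERROR_CODE gone, the error value is private to each dma_map_ops instance and drivers must go through dma_mapping_error(), which dispatches to the ->mapping_error callback wired up above. A minimal driver-side sketch (illustrative only; the foo_* names are made up):

#include <linux/dma-mapping.h>

static int foo_map_buffer(struct device *dev, void *buf, size_t len,
                          dma_addr_t *handle)
{
        dma_addr_t addr;

        addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        /* resolves to the ops ->mapping_error callback, if one is set */
        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        *handle = addr;
        return 0;
}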


@@ -2,6 +2,7 @@ config SUPERH
def_bool y
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_MIGHT_HAVE_PC_PARPORT
+select ARCH_NO_COHERENT_DMA_MMAP if !MMU
select HAVE_PATA_PLATFORM
select CLKDEV_LOOKUP
select HAVE_IDE if HAS_IOPORT_MAP


@@ -9,8 +9,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
}
-#define DMA_ERROR_CODE 0
void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir);


@@ -5,11 +5,6 @@
#include <linux/mm.h>
#include <linux/dma-debug.h>
-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-#define HAVE_ARCH_DMA_SUPPORTED 1
-int dma_supported(struct device *dev, u64 mask);
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir)
{
@@ -19,7 +14,6 @@ static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
}
extern const struct dma_map_ops *dma_ops;
-extern const struct dma_map_ops *leon_dma_ops;
extern const struct dma_map_ops pci32_dma_ops;
extern struct bus_type pci_bus_type;
@@ -28,7 +22,7 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
#ifdef CONFIG_SPARC_LEON
if (sparc_cpu_model == sparc_leon)
-return leon_dma_ops;
+return &pci32_dma_ops;
#endif
#if defined(CONFIG_SPARC32) && defined(CONFIG_PCI)
if (bus == &pci_bus_type)


@@ -314,7 +314,7 @@ static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page,
bad_no_ctx:
if (printk_ratelimit())
WARN_ON(1);
-return DMA_ERROR_CODE;
+return SPARC_MAPPING_ERROR;
}
static void strbuf_flush(struct strbuf *strbuf, struct iommu *iommu,
@@ -547,7 +547,7 @@ static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist,
if (outcount < incount) {
outs = sg_next(outs);
-outs->dma_address = DMA_ERROR_CODE;
+outs->dma_address = SPARC_MAPPING_ERROR;
outs->dma_length = 0;
}
@@ -573,7 +573,7 @@ static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist,
iommu_tbl_range_free(&iommu->tbl, vaddr, npages,
IOMMU_ERROR_CODE);
-s->dma_address = DMA_ERROR_CODE;
+s->dma_address = SPARC_MAPPING_ERROR;
s->dma_length = 0;
}
if (s == outs)
@@ -741,6 +741,26 @@ static void dma_4u_sync_sg_for_cpu(struct device *dev,
spin_unlock_irqrestore(&iommu->lock, flags);
}
+static int dma_4u_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+return dma_addr == SPARC_MAPPING_ERROR;
+}
+static int dma_4u_supported(struct device *dev, u64 device_mask)
+{
+struct iommu *iommu = dev->archdata.iommu;
+if (device_mask > DMA_BIT_MASK(32))
+return 0;
+if ((device_mask & iommu->dma_addr_mask) == iommu->dma_addr_mask)
+return 1;
+#ifdef CONFIG_PCI
+if (dev_is_pci(dev))
+return pci64_dma_supported(to_pci_dev(dev), device_mask);
+#endif
+return 0;
+}
static const struct dma_map_ops sun4u_dma_ops = {
.alloc = dma_4u_alloc_coherent,
.free = dma_4u_free_coherent,
@@ -750,31 +770,9 @@ static const struct dma_map_ops sun4u_dma_ops = {
.unmap_sg = dma_4u_unmap_sg,
.sync_single_for_cpu = dma_4u_sync_single_for_cpu,
.sync_sg_for_cpu = dma_4u_sync_sg_for_cpu,
+.dma_supported = dma_4u_supported,
+.mapping_error = dma_4u_mapping_error,
};
const struct dma_map_ops *dma_ops = &sun4u_dma_ops;
EXPORT_SYMBOL(dma_ops);
-int dma_supported(struct device *dev, u64 device_mask)
-{
-struct iommu *iommu = dev->archdata.iommu;
-u64 dma_addr_mask = iommu->dma_addr_mask;
-if (device_mask > DMA_BIT_MASK(32)) {
-if (iommu->atu)
-dma_addr_mask = iommu->atu->dma_addr_mask;
-else
-return 0;
-}
-if ((device_mask & dma_addr_mask) == dma_addr_mask)
-return 1;
-#ifdef CONFIG_PCI
-if (dev_is_pci(dev))
-return pci64_dma_supported(to_pci_dev(dev), device_mask);
-#endif
-return 0;
-}
-EXPORT_SYMBOL(dma_supported);
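Since dma_supported() is no longer an arch-level export on sparc, capability and error checks now live in the ops table itself. A sketch of the general shape (placeholder my_* names, not code from this series):

#include <linux/dma-mapping.h>

#define MY_MAPPING_ERROR (~(dma_addr_t)0x0)   /* ops-private sentinel */

static int my_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
        return dma_addr == MY_MAPPING_ERROR;
}

static int my_dma_supported(struct device *dev, u64 mask)
{
        return mask >= DMA_BIT_MASK(32);      /* example capability test */
}

static const struct dma_map_ops my_dma_ops = {
        /* .map_page, .map_sg, ... omitted */
        .dma_supported  = my_dma_supported,
        .mapping_error  = my_mapping_error,
};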


@@ -47,4 +47,6 @@ static inline int is_span_boundary(unsigned long entry,
return iommu_is_span_boundary(entry, nr, shift, boundary_size);
}
+#define SPARC_MAPPING_ERROR (~(dma_addr_t)0x0)
#endif /* _IOMMU_COMMON_H */


@@ -401,6 +401,11 @@ static void sbus_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
BUG();
}
+static int sbus_dma_supported(struct device *dev, u64 mask)
+{
+return 0;
+}
static const struct dma_map_ops sbus_dma_ops = {
.alloc = sbus_alloc_coherent,
.free = sbus_free_coherent,
@@ -410,6 +415,7 @@ static const struct dma_map_ops sbus_dma_ops = {
.unmap_sg = sbus_unmap_sg,
.sync_sg_for_cpu = sbus_sync_sg_for_cpu,
.sync_sg_for_device = sbus_sync_sg_for_device,
+.dma_supported = sbus_dma_supported,
};
static int __init sparc_register_ioport(void)
@@ -637,6 +643,7 @@ static void pci32_sync_sg_for_device(struct device *device, struct scatterlist *
}
}
+/* note: leon re-uses pci32_dma_ops */
const struct dma_map_ops pci32_dma_ops = {
.alloc = pci32_alloc_coherent,
.free = pci32_free_coherent,
@@ -651,29 +658,9 @@ const struct dma_map_ops pci32_dma_ops = {
};
EXPORT_SYMBOL(pci32_dma_ops);
-/* leon re-uses pci32_dma_ops */
-const struct dma_map_ops *leon_dma_ops = &pci32_dma_ops;
-EXPORT_SYMBOL(leon_dma_ops);
const struct dma_map_ops *dma_ops = &sbus_dma_ops;
EXPORT_SYMBOL(dma_ops);
-/*
- * Return whether the given PCI device DMA address mask can be
- * supported properly. For example, if your device can only drive the
- * low 24-bits during PCI bus mastering, then you would pass
- * 0x00ffffff as the mask to this function.
- */
-int dma_supported(struct device *dev, u64 mask)
-{
-if (dev_is_pci(dev))
-return 1;
-return 0;
-}
-EXPORT_SYMBOL(dma_supported);
#ifdef CONFIG_PROC_FS
static int sparc_io_proc_show(struct seq_file *m, void *v)


@@ -24,6 +24,7 @@
#include "pci_impl.h"
#include "iommu_common.h"
+#include "kernel.h"
#include "pci_sun4v.h"
@@ -412,12 +413,12 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
bad:
if (printk_ratelimit())
WARN_ON(1);
-return DMA_ERROR_CODE;
+return SPARC_MAPPING_ERROR;
iommu_map_fail:
local_irq_restore(flags);
iommu_tbl_range_free(tbl, bus_addr, npages, IOMMU_ERROR_CODE);
-return DMA_ERROR_CODE;
+return SPARC_MAPPING_ERROR;
}
static void dma_4v_unmap_page(struct device *dev, dma_addr_t bus_addr,
@@ -590,7 +591,7 @@ static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist,
if (outcount < incount) {
outs = sg_next(outs);
-outs->dma_address = DMA_ERROR_CODE;
+outs->dma_address = SPARC_MAPPING_ERROR;
outs->dma_length = 0;
}
@@ -607,7 +608,7 @@ static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist,
iommu_tbl_range_free(tbl, vaddr, npages,
IOMMU_ERROR_CODE);
/* XXX demap? XXX */
-s->dma_address = DMA_ERROR_CODE;
+s->dma_address = SPARC_MAPPING_ERROR;
s->dma_length = 0;
}
if (s == outs)
@@ -669,6 +670,26 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist,
local_irq_restore(flags);
}
+static int dma_4v_supported(struct device *dev, u64 device_mask)
+{
+struct iommu *iommu = dev->archdata.iommu;
+u64 dma_addr_mask;
+if (device_mask > DMA_BIT_MASK(32) && iommu->atu)
+dma_addr_mask = iommu->atu->dma_addr_mask;
+else
+dma_addr_mask = iommu->dma_addr_mask;
+if ((device_mask & dma_addr_mask) == dma_addr_mask)
+return 1;
+return pci64_dma_supported(to_pci_dev(dev), device_mask);
+}
+static int dma_4v_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+return dma_addr == SPARC_MAPPING_ERROR;
+}
static const struct dma_map_ops sun4v_dma_ops = {
.alloc = dma_4v_alloc_coherent,
.free = dma_4v_free_coherent,
@@ -676,6 +697,8 @@ static const struct dma_map_ops sun4v_dma_ops = {
.unmap_page = dma_4v_unmap_page,
.map_sg = dma_4v_map_sg,
.unmap_sg = dma_4v_unmap_sg,
+.dma_supported = dma_4v_supported,
+.mapping_error = dma_4v_mapping_error,
};
static void pci_sun4v_scan_bus(struct pci_pbm_info *pbm, struct device *parent)


@@ -317,18 +317,6 @@ static void tile_dma_sync_sg_for_device(struct device *dev,
}
}
-static inline int
-tile_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-return 0;
-}
-static inline int
-tile_dma_supported(struct device *dev, u64 mask)
-{
-return 1;
-}
static const struct dma_map_ops tile_default_dma_map_ops = {
.alloc = tile_dma_alloc_coherent,
.free = tile_dma_free_coherent,
@@ -340,8 +328,6 @@ static const struct dma_map_ops tile_default_dma_map_ops = {
.sync_single_for_device = tile_dma_sync_single_for_device,
.sync_sg_for_cpu = tile_dma_sync_sg_for_cpu,
.sync_sg_for_device = tile_dma_sync_sg_for_device,
-.mapping_error = tile_dma_mapping_error,
-.dma_supported = tile_dma_supported
};
const struct dma_map_ops *tile_dma_map_ops = &tile_default_dma_map_ops;
@@ -504,18 +490,6 @@ static void tile_pci_dma_sync_sg_for_device(struct device *dev,
}
}
-static inline int
-tile_pci_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-return 0;
-}
-static inline int
-tile_pci_dma_supported(struct device *dev, u64 mask)
-{
-return 1;
-}
static const struct dma_map_ops tile_pci_default_dma_map_ops = {
.alloc = tile_pci_dma_alloc_coherent,
.free = tile_pci_dma_free_coherent,
@@ -527,8 +501,6 @@ static const struct dma_map_ops tile_pci_default_dma_map_ops = {
.sync_single_for_device = tile_pci_dma_sync_single_for_device,
.sync_sg_for_cpu = tile_pci_dma_sync_sg_for_cpu,
.sync_sg_for_device = tile_pci_dma_sync_sg_for_device,
-.mapping_error = tile_pci_dma_mapping_error,
-.dma_supported = tile_pci_dma_supported
};
const struct dma_map_ops *gx_pci_dma_map_ops = &tile_pci_default_dma_map_ops;
@@ -578,8 +550,6 @@ static const struct dma_map_ops pci_hybrid_dma_ops = {
.sync_single_for_device = tile_pci_dma_sync_single_for_device,
.sync_sg_for_cpu = tile_pci_dma_sync_sg_for_cpu,
.sync_sg_for_device = tile_pci_dma_sync_sg_for_device,
-.mapping_error = tile_pci_dma_mapping_error,
-.dma_supported = tile_pci_dma_supported
};
const struct dma_map_ops *gx_legacy_pci_dma_map_ops = &pci_swiotlb_dma_ops;
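The always-succeeding stubs can be dropped because the core helpers treat a missing callback as success. Roughly (paraphrased, not copied from the patch):

static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        if (ops->mapping_error)
                return ops->mapping_error(dev, dma_addr);
        return 0;               /* no callback: mappings never fail */
}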


@@ -19,8 +19,6 @@
# define ISA_DMA_BIT_MASK DMA_BIT_MASK(32)
#endif
-#define DMA_ERROR_CODE 0
extern int iommu_merge;
extern struct device x86_dma_fallback_dev;
extern int panic_on_overflow;
@@ -35,9 +33,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
#define arch_dma_alloc_attrs arch_dma_alloc_attrs
-#define HAVE_ARCH_DMA_SUPPORTED 1
-extern int dma_supported(struct device *hwdev, u64 mask);
extern void *dma_generic_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_addr, gfp_t flag,
unsigned long attrs);


@@ -6,6 +6,8 @@ extern int force_iommu, no_iommu;
extern int iommu_detected;
extern int iommu_pass_through;
+int x86_dma_supported(struct device *dev, u64 mask);
/* 10 seconds */
#define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)


@@ -704,6 +704,7 @@ static const struct dma_map_ops gart_dma_ops = {
.alloc = gart_alloc_coherent,
.free = gart_free_coherent,
.mapping_error = gart_mapping_error,
+.dma_supported = x86_dma_supported,
};
static void gart_iommu_shutdown(void)


@@ -50,6 +50,8 @@
#include <asm/x86_init.h>
#include <asm/iommu_table.h>
+#define CALGARY_MAPPING_ERROR 0
#ifdef CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT
int use_calgary __read_mostly = 1;
#else
@@ -252,7 +254,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
if (panic_on_overflow)
panic("Calgary: fix the allocator.\n");
else
-return DMA_ERROR_CODE;
+return CALGARY_MAPPING_ERROR;
}
}
@@ -272,10 +274,10 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
entry = iommu_range_alloc(dev, tbl, npages);
-if (unlikely(entry == DMA_ERROR_CODE)) {
+if (unlikely(entry == CALGARY_MAPPING_ERROR)) {
pr_warn("failed to allocate %u pages in iommu %p\n",
npages, tbl);
-return DMA_ERROR_CODE;
+return CALGARY_MAPPING_ERROR;
}
/* set the return dma address */
@@ -295,7 +297,7 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr,
unsigned long flags;
/* were we called with bad_dma_address? */
-badend = DMA_ERROR_CODE + (EMERGENCY_PAGES * PAGE_SIZE);
+badend = CALGARY_MAPPING_ERROR + (EMERGENCY_PAGES * PAGE_SIZE);
if (unlikely(dma_addr < badend)) {
WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA "
"address 0x%Lx\n", dma_addr);
@@ -380,7 +382,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg,
npages = iommu_num_pages(vaddr, s->length, PAGE_SIZE);
entry = iommu_range_alloc(dev, tbl, npages);
-if (entry == DMA_ERROR_CODE) {
+if (entry == CALGARY_MAPPING_ERROR) {
/* makes sure unmap knows to stop */
s->dma_length = 0;
goto error;
@@ -398,7 +400,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg,
error:
calgary_unmap_sg(dev, sg, nelems, dir, 0);
for_each_sg(sg, s, nelems, i) {
-sg->dma_address = DMA_ERROR_CODE;
+sg->dma_address = CALGARY_MAPPING_ERROR;
sg->dma_length = 0;
}
return 0;
@@ -453,7 +455,7 @@ static void* calgary_alloc_coherent(struct device *dev, size_t size,
/* set up tces to cover the allocated range */
mapping = iommu_alloc(dev, tbl, ret, npages, DMA_BIDIRECTIONAL);
-if (mapping == DMA_ERROR_CODE)
+if (mapping == CALGARY_MAPPING_ERROR)
goto free;
*dma_handle = mapping;
return ret;
@@ -478,6 +480,11 @@ static void calgary_free_coherent(struct device *dev, size_t size,
free_pages((unsigned long)vaddr, get_order(size));
}
+static int calgary_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+return dma_addr == CALGARY_MAPPING_ERROR;
+}
static const struct dma_map_ops calgary_dma_ops = {
.alloc = calgary_alloc_coherent,
.free = calgary_free_coherent,
@@ -485,6 +492,8 @@ static const struct dma_map_ops calgary_dma_ops = {
.unmap_sg = calgary_unmap_sg,
.map_page = calgary_map_page,
.unmap_page = calgary_unmap_page,
+.mapping_error = calgary_mapping_error,
+.dma_supported = x86_dma_supported,
};
static inline void __iomem * busno_to_bbar(unsigned char num)
@@ -732,7 +741,7 @@ static void __init calgary_reserve_regions(struct pci_dev *dev)
struct iommu_table *tbl = pci_iommu(dev->bus);
/* reserve EMERGENCY_PAGES from bad_dma_address and up */
-iommu_range_reserve(tbl, DMA_ERROR_CODE, EMERGENCY_PAGES);
+iommu_range_reserve(tbl, CALGARY_MAPPING_ERROR, EMERGENCY_PAGES);
/* avoid the BIOS/VGA first 640KB-1MB region */
/* for CalIOC2 - avoid the entire first MB */


@@ -213,10 +213,8 @@ static __init int iommu_setup(char *p)
}
early_param("iommu", iommu_setup);
-int dma_supported(struct device *dev, u64 mask)
+int x86_dma_supported(struct device *dev, u64 mask)
{
-const struct dma_map_ops *ops = get_dma_ops(dev);
#ifdef CONFIG_PCI
if (mask > 0xffffffff && forbid_dac > 0) {
dev_info(dev, "PCI: Disallowing DAC for device\n");
@@ -224,9 +222,6 @@ int dma_supported(struct device *dev, u64 mask)
}
#endif
-if (ops->dma_supported)
-return ops->dma_supported(dev, mask);
/* Copied from i386. Doesn't make much sense, because it will
only work for pci_alloc_coherent.
The caller just has to use GFP_DMA in this case. */
@@ -252,7 +247,6 @@ int dma_supported(struct device *dev, u64 mask)
return 1;
}
-EXPORT_SYMBOL(dma_supported);
static int __init pci_iommu_init(void)
{
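dma_supported() itself becomes a generic helper that defers to the per-ops callback, so x86_dma_supported() only carries the x86-specific DAC/ISA checks for the ops that opt into it. Approximate shape of the generic helper (paraphrased for illustration):

static inline int dma_supported(struct device *dev, u64 mask)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        if (!ops)
                return 0;
        if (!ops->dma_supported)
                return 1;       /* no callback: any mask is accepted */
        return ops->dma_supported(dev, mask);
}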


@@ -11,6 +11,8 @@
#include <asm/iommu.h>
#include <asm/dma.h>
+#define NOMMU_MAPPING_ERROR 0
static int
check_addr(char *name, struct device *hwdev, dma_addr_t bus, size_t size)
{
@@ -33,7 +35,7 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
dma_addr_t bus = page_to_phys(page) + offset;
WARN_ON(size == 0);
if (!check_addr("map_single", dev, bus, size))
-return DMA_ERROR_CODE;
+return NOMMU_MAPPING_ERROR;
flush_write_buffers();
return bus;
}
@@ -88,6 +90,11 @@ static void nommu_sync_sg_for_device(struct device *dev,
flush_write_buffers();
}
+static int nommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+return dma_addr == NOMMU_MAPPING_ERROR;
+}
const struct dma_map_ops nommu_dma_ops = {
.alloc = dma_generic_alloc_coherent,
.free = dma_generic_free_coherent,
@@ -96,4 +103,6 @@ const struct dma_map_ops nommu_dma_ops = {
.sync_single_for_device = nommu_sync_single_for_device,
.sync_sg_for_device = nommu_sync_sg_for_device,
.is_phys = 1,
+.mapping_error = nommu_mapping_error,
+.dma_supported = x86_dma_supported,
};


@@ -26,6 +26,7 @@
#include <linux/pci_ids.h>
#include <linux/export.h>
#include <linux/list.h>
+#include <asm/iommu.h>
#define STA2X11_SWIOTLB_SIZE (4*1024*1024)
extern int swiotlb_late_init_with_default_size(size_t default_size);
@@ -191,7 +192,7 @@ static const struct dma_map_ops sta2x11_dma_ops = {
.sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
.sync_sg_for_device = swiotlb_sync_sg_for_device,
.mapping_error = swiotlb_dma_mapping_error,
-.dma_supported = NULL, /* FIXME: we should use this instead! */
+.dma_supported = x86_dma_supported,
};
/* At setup time, we use our own ops if the device is a ConneXt one */


@@ -18,20 +18,6 @@
int xen_swiotlb __read_mostly;
-static const struct dma_map_ops xen_swiotlb_dma_ops = {
-.alloc = xen_swiotlb_alloc_coherent,
-.free = xen_swiotlb_free_coherent,
-.sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
-.sync_single_for_device = xen_swiotlb_sync_single_for_device,
-.sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
-.sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
-.map_sg = xen_swiotlb_map_sg_attrs,
-.unmap_sg = xen_swiotlb_unmap_sg_attrs,
-.map_page = xen_swiotlb_map_page,
-.unmap_page = xen_swiotlb_unmap_page,
-.dma_supported = xen_swiotlb_dma_supported,
-};
/*
* pci_xen_swiotlb_detect - set xen_swiotlb to 1 if necessary
*


@@ -3,6 +3,7 @@ config ZONE_DMA
config XTENSA
def_bool y
+select ARCH_NO_COHERENT_DMA_MMAP if !MMU
select ARCH_WANT_FRAME_POINTERS
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_EXTABLE_SORT


@@ -16,8 +16,6 @@
#include <linux/mm.h>
#include <linux/scatterlist.h>
-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
extern const struct dma_map_ops xtensa_dma_map_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)


@@ -16,8 +16,27 @@ struct dma_coherent_mem {
int flags;
unsigned long *bitmap;
spinlock_t spinlock;
+bool use_dev_dma_pfn_offset;
};
+static struct dma_coherent_mem *dma_coherent_default_memory __ro_after_init;
+static inline struct dma_coherent_mem *dev_get_coherent_memory(struct device *dev)
+{
+if (dev && dev->dma_mem)
+return dev->dma_mem;
+return dma_coherent_default_memory;
+}
+static inline dma_addr_t dma_get_device_base(struct device *dev,
+struct dma_coherent_mem * mem)
+{
+if (mem->use_dev_dma_pfn_offset)
+return (mem->pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
+else
+return mem->device_base;
+}
static bool dma_init_coherent_memory(
phys_addr_t phys_addr, dma_addr_t device_addr, size_t size, int flags,
struct dma_coherent_mem **mem)
@@ -83,6 +102,9 @@ static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
static int dma_assign_coherent_memory(struct device *dev,
struct dma_coherent_mem *mem)
{
+if (!dev)
+return -ENODEV;
if (dev->dma_mem)
return -EBUSY;
@@ -133,7 +155,7 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
return ERR_PTR(-EINVAL);
spin_lock_irqsave(&mem->spinlock, flags);
-pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
+pos = PFN_DOWN(device_addr - dma_get_device_base(dev, mem));
err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
spin_unlock_irqrestore(&mem->spinlock, flags);
@@ -161,15 +183,12 @@ EXPORT_SYMBOL(dma_mark_declared_memory_occupied);
int dma_alloc_from_coherent(struct device *dev, ssize_t size,
dma_addr_t *dma_handle, void **ret)
{
-struct dma_coherent_mem *mem;
+struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
int order = get_order(size);
unsigned long flags;
int pageno;
int dma_memory_map;
-if (!dev)
-return 0;
-mem = dev->dma_mem;
if (!mem)
return 0;
@@ -186,7 +205,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
/*
* Memory was found in the per-device area.
*/
-*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
+*dma_handle = dma_get_device_base(dev, mem) + (pageno << PAGE_SHIFT);
*ret = mem->virt_base + (pageno << PAGE_SHIFT);
dma_memory_map = (mem->flags & DMA_MEMORY_MAP);
spin_unlock_irqrestore(&mem->spinlock, flags);
@@ -223,7 +242,7 @@ EXPORT_SYMBOL(dma_alloc_from_coherent);
*/
int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
{
-struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
if (mem && vaddr >= mem->virt_base && vaddr <
(mem->virt_base + (mem->size << PAGE_SHIFT))) {
@@ -257,7 +276,7 @@ EXPORT_SYMBOL(dma_release_from_coherent);
int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
void *vaddr, size_t size, int *ret)
{
-struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
if (mem && vaddr >= mem->virt_base && vaddr + size <=
(mem->virt_base + (mem->size << PAGE_SHIFT))) {
@@ -287,6 +306,8 @@ EXPORT_SYMBOL(dma_mmap_from_coherent);
#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>
+static struct reserved_mem *dma_reserved_default_memory __initdata;
static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
{
struct dma_coherent_mem *mem = rmem->priv;
@@ -299,6 +320,7 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
&rmem->base, (unsigned long)rmem->size / SZ_1M);
return -ENODEV;
}
+mem->use_dev_dma_pfn_offset = true;
rmem->priv = mem;
dma_assign_coherent_memory(dev, mem);
return 0;
@@ -307,7 +329,8 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
static void rmem_dma_device_release(struct reserved_mem *rmem,
struct device *dev)
{
-dev->dma_mem = NULL;
+if (dev)
+dev->dma_mem = NULL;
}
static const struct reserved_mem_ops rmem_dma_ops = {
@@ -327,6 +350,12 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
pr_err("Reserved memory: regions without no-map are not yet supported\n");
return -EINVAL;
}
+if (of_get_flat_dt_prop(node, "linux,dma-default", NULL)) {
+WARN(dma_reserved_default_memory,
+"Reserved memory: region for default DMA coherent area is redefined\n");
+dma_reserved_default_memory = rmem;
+}
#endif
rmem->ops = &rmem_dma_ops;
@@ -334,5 +363,32 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
&rmem->base, (unsigned long)rmem->size / SZ_1M);
return 0;
}
+static int __init dma_init_reserved_memory(void)
+{
+const struct reserved_mem_ops *ops;
+int ret;
+if (!dma_reserved_default_memory)
+return -ENOMEM;
+ops = dma_reserved_default_memory->ops;
+/*
+ * We rely on rmem_dma_device_init() does not propagate error of
+ * dma_assign_coherent_memory() for "NULL" device.
+ */
+ret = ops->device_init(dma_reserved_default_memory, NULL);
+if (!ret) {
+dma_coherent_default_memory = dma_reserved_default_memory->priv;
+pr_info("DMA: default coherent area is set\n");
+}
+return ret;
+}
+core_initcall(dma_init_reserved_memory);
RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
#endif
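dma_get_device_base() is what makes the pool usable by devices whose bus addresses are shifted from CPU physical addresses. Worked example with made-up numbers: a pool whose first page is CPU pfn 0x80000 (physical 0x80000000), used by a device with dev->dma_pfn_offset = 0x80000, hands out bus address 0 for that page, since (0x80000 - 0x80000) << PAGE_SHIFT == 0. The same arithmetic as a standalone sketch (hypothetical helper name, not part of the patch):

static dma_addr_t pool_bus_addr(struct device *dev, unsigned long pfn_base,
                                unsigned long page_index)
{
        /* translate a pool pfn into a device-visible address */
        return (dma_addr_t)(pfn_base - dev->dma_pfn_offset + page_index)
                        << PAGE_SHIFT;
}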


@@ -22,20 +22,15 @@ struct dma_devres {
size_t size;
void *vaddr;
dma_addr_t dma_handle;
+unsigned long attrs;
};
-static void dmam_coherent_release(struct device *dev, void *res)
+static void dmam_release(struct device *dev, void *res)
{
struct dma_devres *this = res;
-dma_free_coherent(dev, this->size, this->vaddr, this->dma_handle);
+dma_free_attrs(dev, this->size, this->vaddr, this->dma_handle,
+this->attrs);
-}
-static void dmam_noncoherent_release(struct device *dev, void *res)
-{
-struct dma_devres *this = res;
-dma_free_noncoherent(dev, this->size, this->vaddr, this->dma_handle);
}
static int dmam_match(struct device *dev, void *res, void *match_data)
@@ -69,7 +64,7 @@ void *dmam_alloc_coherent(struct device *dev, size_t size,
struct dma_devres *dr;
void *vaddr;
-dr = devres_alloc(dmam_coherent_release, sizeof(*dr), gfp);
+dr = devres_alloc(dmam_release, sizeof(*dr), gfp);
if (!dr)
return NULL;
@@ -104,35 +99,35 @@ void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
struct dma_devres match_data = { size, vaddr, dma_handle };
dma_free_coherent(dev, size, vaddr, dma_handle);
-WARN_ON(devres_destroy(dev, dmam_coherent_release, dmam_match,
-&match_data));
+WARN_ON(devres_destroy(dev, dmam_release, dmam_match, &match_data));
}
EXPORT_SYMBOL(dmam_free_coherent);
/**
-* dmam_alloc_non_coherent - Managed dma_alloc_noncoherent()
+* dmam_alloc_attrs - Managed dma_alloc_attrs()
* @dev: Device to allocate non_coherent memory for
* @size: Size of allocation
* @dma_handle: Out argument for allocated DMA handle
* @gfp: Allocation flags
+* @attrs: Flags in the DMA_ATTR_* namespace.
*
-* Managed dma_alloc_noncoherent(). Memory allocated using this
-* function will be automatically released on driver detach.
+* Managed dma_alloc_attrs(). Memory allocated using this function will be
+* automatically released on driver detach.
*
* RETURNS:
* Pointer to allocated memory on success, NULL on failure.
*/
-void *dmam_alloc_noncoherent(struct device *dev, size_t size,
-dma_addr_t *dma_handle, gfp_t gfp)
+void *dmam_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
+gfp_t gfp, unsigned long attrs)
{
struct dma_devres *dr;
void *vaddr;
-dr = devres_alloc(dmam_noncoherent_release, sizeof(*dr), gfp);
+dr = devres_alloc(dmam_release, sizeof(*dr), gfp);
if (!dr)
return NULL;
-vaddr = dma_alloc_noncoherent(dev, size, dma_handle, gfp);
+vaddr = dma_alloc_attrs(dev, size, dma_handle, gfp, attrs);
if (!vaddr) {
devres_free(dr);
return NULL;
@@ -141,32 +136,13 @@ void *dmam_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
dr->vaddr = vaddr;
dr->dma_handle = *dma_handle;
dr->size = size;
+dr->attrs = attrs;
devres_add(dev, dr);
return vaddr;
}
-EXPORT_SYMBOL(dmam_alloc_noncoherent);
+EXPORT_SYMBOL(dmam_alloc_attrs);
-/**
- * dmam_free_coherent - Managed dma_free_noncoherent()
- * @dev: Device to free noncoherent memory for
- * @size: Size of allocation
- * @vaddr: Virtual address of the memory to free
- * @dma_handle: DMA handle of the memory to free
- *
- * Managed dma_free_noncoherent().
- */
-void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
-dma_addr_t dma_handle)
-{
-struct dma_devres match_data = { size, vaddr, dma_handle };
-dma_free_noncoherent(dev, size, vaddr, dma_handle);
-WARN_ON(!devres_destroy(dev, dmam_noncoherent_release, dmam_match,
-&match_data));
-}
-EXPORT_SYMBOL(dmam_free_noncoherent);
#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
@@ -251,7 +227,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
int ret = -ENXIO;
-#if defined(CONFIG_MMU) && !defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP)
+#ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
unsigned long user_count = vma_pages(vma);
unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
@@ -268,7 +244,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
user_count << PAGE_SHIFT,
vma->vm_page_prot);
}
-#endif /* CONFIG_MMU && !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */
+#endif /* !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */
return ret;
}
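Callers of the removed managed non-coherent helpers switch to dmam_alloc_attrs() with DMA_ATTR_NON_CONSISTENT. Conversion sketch (illustrative driver code, not from the series):

#include <linux/dma-mapping.h>

static void *foo_alloc_ring(struct device *dev, size_t size,
                            dma_addr_t *handle)
{
        /* was: dmam_alloc_noncoherent(dev, size, handle, GFP_KERNEL); */
        return dmam_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                DMA_ATTR_NON_CONSISTENT);
        /* released automatically on driver detach, like all devres memory */
}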


@@ -686,7 +686,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
blp = dma_map_single(dev, bufl, sz, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, blp)))
-goto err;
+goto err_in;
for_each_sg(sgl, sg, n, i) {
int y = sg_nctr;
@@ -699,7 +699,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
DMA_BIDIRECTIONAL);
bufl->bufers[y].len = sg->length;
if (unlikely(dma_mapping_error(dev, bufl->bufers[y].addr)))
-goto err;
+goto err_in;
sg_nctr++;
}
bufl->num_bufs = sg_nctr;
@@ -717,10 +717,10 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
buflout = kzalloc_node(sz_out, GFP_ATOMIC,
dev_to_node(&GET_DEV(inst->accel_dev)));
if (unlikely(!buflout))
-goto err;
+goto err_in;
bloutp = dma_map_single(dev, buflout, sz_out, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, bloutp)))
-goto err;
+goto err_out;
bufers = buflout->bufers;
for_each_sg(sglout, sg, n, i) {
int y = sg_nctr;
@@ -732,7 +732,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
sg->length,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, bufers[y].addr)))
-goto err;
+goto err_out;
bufers[y].len = sg->length;
sg_nctr++;
}
@@ -747,9 +747,20 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
qat_req->buf.sz_out = 0;
}
return 0;
-err:
-dev_err(dev, "Failed to map buf for dma\n");
-sg_nctr = 0;
+err_out:
+n = sg_nents(sglout);
+for (i = 0; i < n; i++)
+if (!dma_mapping_error(dev, buflout->bufers[i].addr))
+dma_unmap_single(dev, buflout->bufers[i].addr,
+buflout->bufers[i].len,
+DMA_BIDIRECTIONAL);
+if (!dma_mapping_error(dev, bloutp))
+dma_unmap_single(dev, bloutp, sz_out, DMA_TO_DEVICE);
+kfree(buflout);
+err_in:
+n = sg_nents(sgl);
for (i = 0; i < n; i++)
if (!dma_mapping_error(dev, bufl->bufers[i].addr))
dma_unmap_single(dev, bufl->bufers[i].addr,
@@ -759,17 +770,8 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
if (!dma_mapping_error(dev, blp))
dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
kfree(bufl);
-if (sgl != sglout && buflout) {
-n = sg_nents(sglout);
-for (i = 0; i < n; i++)
-if (!dma_mapping_error(dev, buflout->bufers[i].addr))
-dma_unmap_single(dev, buflout->bufers[i].addr,
-buflout->bufers[i].len,
-DMA_BIDIRECTIONAL);
-if (!dma_mapping_error(dev, bloutp))
-dma_unmap_single(dev, bloutp, sz_out, DMA_TO_DEVICE);
-kfree(buflout);
-}
+dev_err(dev, "Failed to map buf for dma\n");
return -ENOMEM;
}
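The fix replaces one catch-all err: label with layered labels so the output-side buffers are only unwound once they exist. The same idiom in a self-contained sketch (generic example, not QAT code):

#include <linux/slab.h>
#include <linux/dma-mapping.h>

static int setup_two_buffers(struct device *dev, size_t len, void **a,
                             void **b, dma_addr_t *da, dma_addr_t *db)
{
        *a = kzalloc(len, GFP_KERNEL);
        if (!*a)
                return -ENOMEM;
        *da = dma_map_single(dev, *a, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, *da))
                goto err_free_a;
        *b = kzalloc(len, GFP_KERNEL);
        if (!*b)
                goto err_unmap_a;
        *db = dma_map_single(dev, *b, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, *db))
                goto err_free_b;
        return 0;

err_free_b:
        kfree(*b);              /* undo only what the later stage built */
err_unmap_a:
        dma_unmap_single(dev, *da, len, DMA_TO_DEVICE);
err_free_a:
        kfree(*a);
        return -ENOMEM;
}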


@@ -839,8 +839,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
goto free_resources;
}
-for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
-dma_srcs[i] = DMA_ERROR_CODE;
for (i = 0; i < IOAT_NUM_SRC_TEST; i++) {
dma_srcs[i] = dma_map_page(dev, xor_srcs[i], 0, PAGE_SIZE,
DMA_TO_DEVICE);
@@ -910,8 +908,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
xor_val_result = 1;
-for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
-dma_srcs[i] = DMA_ERROR_CODE;
for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++) {
dma_srcs[i] = dma_map_page(dev, xor_val_srcs[i], 0, PAGE_SIZE,
DMA_TO_DEVICE);
@@ -965,8 +961,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
op = IOAT_OP_XOR_VAL;
xor_val_result = 0;
-for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
-dma_srcs[i] = DMA_ERROR_CODE;
for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++) {
dma_srcs[i] = dma_map_page(dev, xor_val_srcs[i], 0, PAGE_SIZE,
DMA_TO_DEVICE);
@@ -1017,18 +1011,14 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
goto free_resources;
dma_unmap:
if (op == IOAT_OP_XOR) {
-if (dest_dma != DMA_ERROR_CODE)
-dma_unmap_page(dev, dest_dma, PAGE_SIZE,
-DMA_FROM_DEVICE);
-for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
-if (dma_srcs[i] != DMA_ERROR_CODE)
-dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
-DMA_TO_DEVICE);
+while (--i >= 0)
+dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
+DMA_TO_DEVICE);
+dma_unmap_page(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE);
} else if (op == IOAT_OP_XOR_VAL) {
-for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
-if (dma_srcs[i] != DMA_ERROR_CODE)
-dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
-DMA_TO_DEVICE);
+while (--i >= 0)
+dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
+DMA_TO_DEVICE);
}
free_resources:
dma->device_free_chan_resources(dma_chan);
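Rather than pre-tagging every slot with an error sentinel, the test now unmaps exactly the entries that were mapped before the failure. The pattern, reduced to a sketch (illustrative helper, not the ioat code):

#include <linux/dma-mapping.h>

static int map_src_pages(struct device *dev, struct page **pages,
                         dma_addr_t *dma, int count)
{
        int i;

        for (i = 0; i < count; i++) {
                dma[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
                                      DMA_TO_DEVICE);
                if (dma_mapping_error(dev, dma[i]))
                        goto unwind;
        }
        return 0;

unwind:
        while (--i >= 0)        /* only the i pages mapped so far */
                dma_unmap_page(dev, dma[i], PAGE_SIZE, DMA_TO_DEVICE);
        return -ENOMEM;
}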


@@ -646,12 +646,12 @@ int tegra_ivc_init(struct tegra_ivc *ivc, struct device *peer, void *rx,
if (peer) {
ivc->rx.phys = dma_map_single(peer, rx, queue_size,
DMA_BIDIRECTIONAL);
-if (ivc->rx.phys == DMA_ERROR_CODE)
+if (dma_mapping_error(peer, ivc->rx.phys))
return -ENOMEM;
ivc->tx.phys = dma_map_single(peer, tx, queue_size,
DMA_BIDIRECTIONAL);
-if (ivc->tx.phys == DMA_ERROR_CODE) {
+if (dma_mapping_error(peer, ivc->tx.phys)) {
dma_unmap_single(peer, ivc->rx.phys, queue_size,
DMA_BIDIRECTIONAL);
return -ENOMEM;


@@ -133,7 +133,7 @@ static struct drm_framebuffer *armada_fb_create(struct drm_device *dev,
}
/* Framebuffer objects must have a valid device address for scanout */
-if (obj->dev_addr == DMA_ERROR_CODE) {
+if (!obj->mapped) {
ret = -EINVAL;
goto err_unref;
}


@@ -175,6 +175,7 @@ armada_gem_linear_back(struct drm_device *dev, struct armada_gem_object *obj)
obj->phys_addr = obj->linear->start;
obj->dev_addr = obj->linear->start;
+obj->mapped = true;
}
DRM_DEBUG_DRIVER("obj %p phys %#llx dev %#llx\n", obj,
@@ -205,7 +206,6 @@ armada_gem_alloc_private_object(struct drm_device *dev, size_t size)
return NULL;
drm_gem_private_object_init(dev, &obj->obj, size);
-obj->dev_addr = DMA_ERROR_CODE;
DRM_DEBUG_DRIVER("alloc private obj %p size %zu\n", obj, size);
@@ -229,8 +229,6 @@ static struct armada_gem_object *armada_gem_alloc_object(struct drm_device *dev,
return NULL;
}
-obj->dev_addr = DMA_ERROR_CODE;
mapping = obj->obj.filp->f_mapping;
mapping_set_gfp_mask(mapping, GFP_HIGHUSER | __GFP_RECLAIMABLE);
@@ -610,5 +608,6 @@ int armada_gem_map_import(struct armada_gem_object *dobj)
return -EINVAL;
}
dobj->dev_addr = sg_dma_address(dobj->sgt->sgl);
+dobj->mapped = true;
return 0;
}
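Without a universal error value to compare against, validity is tracked with an explicit flag. The pattern in isolation (made-up foo_buf, not the armada structures):

struct foo_buf {
        dma_addr_t dev_addr;
        bool mapped;            /* set only once a mapping is recorded */
};

static bool foo_buf_ready_for_scanout(const struct foo_buf *buf)
{
        return buf->mapped;
}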


@@ -16,6 +16,7 @@ struct armada_gem_object {
void *addr;
phys_addr_t phys_addr;
resource_size_t dev_addr;
+bool mapped;
struct drm_mm_node *linear; /* for linear backed */
struct page *page; /* for page backed */
struct sg_table *sgt; /* for imported */


@@ -181,8 +181,8 @@ dma_addr_t exynos_drm_fb_dma_addr(struct drm_framebuffer *fb, int index)
{
struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
-if (index >= MAX_FB_BUFFER)
-return DMA_ERROR_CODE;
+if (WARN_ON_ONCE(index >= MAX_FB_BUFFER))
+return 0;
return exynos_fb->dma_addr[index];
}


@@ -54,6 +54,8 @@
#include "amd_iommu_types.h"
#include "irq_remapping.h"
+#define AMD_IOMMU_MAPPING_ERROR 0
#define CMD_SET_TYPE(cmd, t) ((cmd)->data[1] |= ((t) << 28))
#define LOOP_TIMEOUT 100000
@@ -2394,7 +2396,7 @@ static dma_addr_t __map_single(struct device *dev,
paddr &= PAGE_MASK;
address = dma_ops_alloc_iova(dev, dma_dom, pages, dma_mask);
-if (address == DMA_ERROR_CODE)
+if (address == AMD_IOMMU_MAPPING_ERROR)
goto out;
prot = dir2prot(direction);
@@ -2431,7 +2433,7 @@ static dma_addr_t __map_single(struct device *dev,
dma_ops_free_iova(dma_dom, address, pages);
-return DMA_ERROR_CODE;
+return AMD_IOMMU_MAPPING_ERROR;
}
/*
@@ -2483,7 +2485,7 @@ static dma_addr_t map_page(struct device *dev, struct page *page,
if (PTR_ERR(domain) == -EINVAL)
return (dma_addr_t)paddr;
else if (IS_ERR(domain))
-return DMA_ERROR_CODE;
+return AMD_IOMMU_MAPPING_ERROR;
dma_mask = *dev->dma_mask;
dma_dom = to_dma_ops_domain(domain);
@@ -2560,7 +2562,7 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
npages = sg_num_pages(dev, sglist, nelems);
address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask);
-if (address == DMA_ERROR_CODE)
+if (address == AMD_IOMMU_MAPPING_ERROR)
goto out_err;
prot = dir2prot(direction);
@@ -2683,7 +2685,7 @@ static void *alloc_coherent(struct device *dev, size_t size,
*dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
size, DMA_BIDIRECTIONAL, dma_mask);
-if (*dma_addr == DMA_ERROR_CODE)
+if (*dma_addr == AMD_IOMMU_MAPPING_ERROR)
goto out_free;
return page_address(page);
@@ -2729,9 +2731,16 @@ static void free_coherent(struct device *dev, size_t size,
*/
static int amd_iommu_dma_supported(struct device *dev, u64 mask)
{
+if (!x86_dma_supported(dev, mask))
+return 0;
return check_device(dev);
}
+static int amd_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+return dma_addr == AMD_IOMMU_MAPPING_ERROR;
+}
static const struct dma_map_ops amd_iommu_dma_ops = {
.alloc = alloc_coherent,
.free = free_coherent,
@@ -2740,6 +2749,7 @@ static const struct dma_map_ops amd_iommu_dma_ops = {
.map_sg = map_sg,
.unmap_sg = unmap_sg,
.dma_supported = amd_iommu_dma_supported,
+.mapping_error = amd_iommu_mapping_error,
};
static int init_reserved_iova_ranges(void)


@@ -31,6 +31,8 @@
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>
+#define IOMMU_MAPPING_ERROR 0
struct iommu_dma_msi_page {
struct list_head list;
dma_addr_t iova;
@@ -500,7 +502,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
{
__iommu_dma_unmap(iommu_get_domain_for_dev(dev), *handle, size);
__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
-*handle = DMA_ERROR_CODE;
+*handle = IOMMU_MAPPING_ERROR;
}
/**
@@ -533,7 +535,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
dma_addr_t iova;
unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
-*handle = DMA_ERROR_CODE;
+*handle = IOMMU_MAPPING_ERROR;
min_size = alloc_sizes & -alloc_sizes;
if (min_size < PAGE_SIZE) {
@@ -627,11 +629,11 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
if (!iova)
-return DMA_ERROR_CODE;
+return IOMMU_MAPPING_ERROR;
if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
iommu_dma_free_iova(cookie, iova, size);
-return DMA_ERROR_CODE;
+return IOMMU_MAPPING_ERROR;
}
return iova + iova_off;
}
@@ -671,7 +673,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
s->offset += s_iova_off;
s->length = s_length;
-sg_dma_address(s) = DMA_ERROR_CODE;
+sg_dma_address(s) = IOMMU_MAPPING_ERROR;
sg_dma_len(s) = 0;
/*
@@ -714,11 +716,11 @@ static void __invalidate_sg(struct scatterlist *sg, int nents)
int i;
for_each_sg(sg, s, nents, i) {
-if (sg_dma_address(s) != DMA_ERROR_CODE)
+if (sg_dma_address(s) != IOMMU_MAPPING_ERROR)
s->offset += sg_dma_address(s);
if (sg_dma_len(s))
s->length = sg_dma_len(s);
-sg_dma_address(s) = DMA_ERROR_CODE;
+sg_dma_address(s) = IOMMU_MAPPING_ERROR;
sg_dma_len(s) = 0;
}
}
@@ -836,7 +838,7 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
-return dma_addr == DMA_ERROR_CODE;
+return dma_addr == IOMMU_MAPPING_ERROR;
}
static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
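For scatterlists there is no per-entry error value to inspect from a driver; failure is reported by dma_map_sg() returning 0, and the helpers above reset the entries internally. Driver-side sketch (illustrative foo_* name):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int foo_map_sgtable(struct device *dev, struct sg_table *sgt)
{
        int nents;

        nents = dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
        if (!nents)
                return -EIO;    /* nothing mapped, nothing to unmap */

        return nents;           /* may be fewer than sgt->nents after merging */
}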


@@ -3981,6 +3981,9 @@ struct dma_map_ops intel_dma_ops = {
.map_page = intel_map_page,
.unmap_page = intel_unmap_page,
.mapping_error = intel_mapping_error,
+#ifdef CONFIG_X86
+.dma_supported = x86_dma_supported,
+#endif
};
static inline int iommu_domain_cache_init(void)


@@ -469,56 +469,6 @@ static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter)
 	}
 }
-static void ibmveth_cleanup(struct ibmveth_adapter *adapter)
-{
-	int i;
-	struct device *dev = &adapter->vdev->dev;
-	if (adapter->buffer_list_addr != NULL) {
-		if (!dma_mapping_error(dev, adapter->buffer_list_dma)) {
-			dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
-					 DMA_BIDIRECTIONAL);
-			adapter->buffer_list_dma = DMA_ERROR_CODE;
-		}
-		free_page((unsigned long)adapter->buffer_list_addr);
-		adapter->buffer_list_addr = NULL;
-	}
-	if (adapter->filter_list_addr != NULL) {
-		if (!dma_mapping_error(dev, adapter->filter_list_dma)) {
-			dma_unmap_single(dev, adapter->filter_list_dma, 4096,
-					 DMA_BIDIRECTIONAL);
-			adapter->filter_list_dma = DMA_ERROR_CODE;
-		}
-		free_page((unsigned long)adapter->filter_list_addr);
-		adapter->filter_list_addr = NULL;
-	}
-	if (adapter->rx_queue.queue_addr != NULL) {
-		dma_free_coherent(dev, adapter->rx_queue.queue_len,
-				  adapter->rx_queue.queue_addr,
-				  adapter->rx_queue.queue_dma);
-		adapter->rx_queue.queue_addr = NULL;
-	}
-	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
-		if (adapter->rx_buff_pool[i].active)
-			ibmveth_free_buffer_pool(adapter,
-						 &adapter->rx_buff_pool[i]);
-	if (adapter->bounce_buffer != NULL) {
-		if (!dma_mapping_error(dev, adapter->bounce_buffer_dma)) {
-			dma_unmap_single(&adapter->vdev->dev,
-					 adapter->bounce_buffer_dma,
-					 adapter->netdev->mtu + IBMVETH_BUFF_OH,
-					 DMA_BIDIRECTIONAL);
-			adapter->bounce_buffer_dma = DMA_ERROR_CODE;
-		}
-		kfree(adapter->bounce_buffer);
-		adapter->bounce_buffer = NULL;
-	}
-}
 static int ibmveth_register_logical_lan(struct ibmveth_adapter *adapter,
 		union ibmveth_buf_desc rxq_desc, u64 mac_address)
 {
@@ -575,14 +525,17 @@ static int ibmveth_open(struct net_device *netdev)
 	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		rxq_entries += adapter->rx_buff_pool[i].size;
+	rc = -ENOMEM;
 	adapter->buffer_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
-	adapter->filter_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
-	if (!adapter->buffer_list_addr || !adapter->filter_list_addr) {
-		netdev_err(netdev, "unable to allocate filter or buffer list "
-			   "pages\n");
-		rc = -ENOMEM;
-		goto err_out;
+	if (!adapter->buffer_list_addr) {
+		netdev_err(netdev, "unable to allocate list pages\n");
+		goto out;
+	}
+	adapter->filter_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
+	if (!adapter->filter_list_addr) {
+		netdev_err(netdev, "unable to allocate filter pages\n");
+		goto out_free_buffer_list;
 	}
 	dev = &adapter->vdev->dev;
@@ -592,22 +545,21 @@ static int ibmveth_open(struct net_device *netdev)
 	adapter->rx_queue.queue_addr =
 		dma_alloc_coherent(dev, adapter->rx_queue.queue_len,
 				   &adapter->rx_queue.queue_dma, GFP_KERNEL);
-	if (!adapter->rx_queue.queue_addr) {
-		rc = -ENOMEM;
-		goto err_out;
-	}
+	if (!adapter->rx_queue.queue_addr)
+		goto out_free_filter_list;
 	adapter->buffer_list_dma = dma_map_single(dev,
 			adapter->buffer_list_addr, 4096, DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(dev, adapter->buffer_list_dma)) {
+		netdev_err(netdev, "unable to map buffer list pages\n");
+		goto out_free_queue_mem;
+	}
 	adapter->filter_list_dma = dma_map_single(dev,
 			adapter->filter_list_addr, 4096, DMA_BIDIRECTIONAL);
-	if ((dma_mapping_error(dev, adapter->buffer_list_dma)) ||
-	    (dma_mapping_error(dev, adapter->filter_list_dma))) {
-		netdev_err(netdev, "unable to map filter or buffer list "
-			   "pages\n");
-		rc = -ENOMEM;
-		goto err_out;
+	if (dma_mapping_error(dev, adapter->filter_list_dma)) {
+		netdev_err(netdev, "unable to map filter list pages\n");
+		goto out_unmap_buffer_list;
 	}
 	adapter->rx_queue.index = 0;
@@ -638,7 +590,7 @@ static int ibmveth_open(struct net_device *netdev)
 				     rxq_desc.desc,
 				     mac_address);
 		rc = -ENONET;
-		goto err_out;
+		goto out_unmap_filter_list;
 	}
 	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
@@ -648,7 +600,7 @@ static int ibmveth_open(struct net_device *netdev)
 			netdev_err(netdev, "unable to alloc pool\n");
 			adapter->rx_buff_pool[i].active = 0;
 			rc = -ENOMEM;
-			goto err_out;
+			goto out_free_buffer_pools;
 		}
 	}
@@ -662,22 +614,21 @@ static int ibmveth_open(struct net_device *netdev)
 			lpar_rc = h_free_logical_lan(adapter->vdev->unit_address);
 		} while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY));
-		goto err_out;
+		goto out_free_buffer_pools;
 	}
+	rc = -ENOMEM;
 	adapter->bounce_buffer =
 	    kmalloc(netdev->mtu + IBMVETH_BUFF_OH, GFP_KERNEL);
-	if (!adapter->bounce_buffer) {
-		rc = -ENOMEM;
-		goto err_out_free_irq;
-	}
+	if (!adapter->bounce_buffer)
+		goto out_free_irq;
 	adapter->bounce_buffer_dma =
 	    dma_map_single(&adapter->vdev->dev, adapter->bounce_buffer,
 			   netdev->mtu + IBMVETH_BUFF_OH, DMA_BIDIRECTIONAL);
 	if (dma_mapping_error(dev, adapter->bounce_buffer_dma)) {
 		netdev_err(netdev, "unable to map bounce buffer\n");
-		rc = -ENOMEM;
-		goto err_out_free_irq;
+		goto out_free_bounce_buffer;
 	}
 	netdev_dbg(netdev, "initial replenish cycle\n");
@@ -689,10 +640,31 @@ static int ibmveth_open(struct net_device *netdev)
 	return 0;
-err_out_free_irq:
+out_free_bounce_buffer:
+	kfree(adapter->bounce_buffer);
+out_free_irq:
 	free_irq(netdev->irq, netdev);
-err_out:
-	ibmveth_cleanup(adapter);
+out_free_buffer_pools:
+	while (--i >= 0) {
+		if (adapter->rx_buff_pool[i].active)
+			ibmveth_free_buffer_pool(adapter,
+						 &adapter->rx_buff_pool[i]);
+	}
+out_unmap_filter_list:
+	dma_unmap_single(dev, adapter->filter_list_dma, 4096,
+			 DMA_BIDIRECTIONAL);
+out_unmap_buffer_list:
+	dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
+			 DMA_BIDIRECTIONAL);
+out_free_queue_mem:
+	dma_free_coherent(dev, adapter->rx_queue.queue_len,
+			  adapter->rx_queue.queue_addr,
+			  adapter->rx_queue.queue_dma);
+out_free_filter_list:
+	free_page((unsigned long)adapter->filter_list_addr);
+out_free_buffer_list:
+	free_page((unsigned long)adapter->buffer_list_addr);
+out:
 	napi_disable(&adapter->napi);
 	return rc;
 }
@@ -700,7 +672,9 @@ static int ibmveth_open(struct net_device *netdev)
 static int ibmveth_close(struct net_device *netdev)
 {
 	struct ibmveth_adapter *adapter = netdev_priv(netdev);
+	struct device *dev = &adapter->vdev->dev;
 	long lpar_rc;
+	int i;
 	netdev_dbg(netdev, "close starting\n");
@@ -724,7 +698,27 @@ static int ibmveth_close(struct net_device *netdev)
 	ibmveth_update_rx_no_buffer(adapter);
-	ibmveth_cleanup(adapter);
+	dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
+			 DMA_BIDIRECTIONAL);
+	free_page((unsigned long)adapter->buffer_list_addr);
+	dma_unmap_single(dev, adapter->filter_list_dma, 4096,
+			 DMA_BIDIRECTIONAL);
+	free_page((unsigned long)adapter->filter_list_addr);
+	dma_free_coherent(dev, adapter->rx_queue.queue_len,
+			  adapter->rx_queue.queue_addr,
+			  adapter->rx_queue.queue_dma);
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
+		if (adapter->rx_buff_pool[i].active)
+			ibmveth_free_buffer_pool(adapter,
+						 &adapter->rx_buff_pool[i]);
+	dma_unmap_single(&adapter->vdev->dev, adapter->bounce_buffer_dma,
+			 adapter->netdev->mtu + IBMVETH_BUFF_OH,
+			 DMA_BIDIRECTIONAL);
+	kfree(adapter->bounce_buffer);
 	netdev_dbg(netdev, "close complete\n");
@@ -1719,11 +1713,6 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	}
 	netdev_dbg(netdev, "adapter @ 0x%p\n", adapter);
-	adapter->buffer_list_dma = DMA_ERROR_CODE;
-	adapter->filter_list_dma = DMA_ERROR_CODE;
-	adapter->rx_queue.queue_dma = DMA_ERROR_CODE;
 	netdev_dbg(netdev, "registering netdev...\n");
 	ibmveth_set_features(netdev, netdev->features);
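The ibmveth conversion above replaces a catch-all ibmveth_cleanup(), which needed DMA_ERROR_CODE sentinels to know what had been set up, with ordered unwind labels. A condensed sketch of that idiom, using purely illustrative names, is:

#include <linux/errno.h>
#include <linux/slab.h>

struct example_priv {           /* hypothetical driver state */
        void *first;
        void *second;
};

/*
 * Illustrative only: every setup step gets a label that undoes it, and a
 * failure jumps to the label of the last step that succeeded, so no
 * sentinel values are needed to track progress.
 */
static int example_open(struct example_priv *p)
{
        int rc = -ENOMEM;

        p->first = kzalloc(4096, GFP_KERNEL);
        if (!p->first)
                goto out;

        p->second = kzalloc(4096, GFP_KERNEL);
        if (!p->second)
                goto out_free_first;

        return 0;

out_free_first:
        kfree(p->first);
out:
        return rc;
}

The same labels can then be reused verbatim in the close path, which is what lets the driver stop pre-initialising DMA handles to an error value.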


@@ -532,10 +532,6 @@ static int au1100fb_drv_probe(struct platform_device *dev)
 		clk_disable_unprepare(fbdev->lcdclk);
 		clk_put(fbdev->lcdclk);
 	}
-	if (fbdev->fb_mem) {
-		dma_free_noncoherent(&dev->dev, fbdev->fb_len, fbdev->fb_mem,
-				     fbdev->fb_phys);
-	}
 	if (fbdev->info.cmap.len != 0) {
 		fb_dealloc_cmap(&fbdev->info.cmap);
 	}


@@ -1694,9 +1694,10 @@ static int au1200fb_drv_probe(struct platform_device *dev)
 		/* Allocate the framebuffer to the maximum screen size */
 		fbdev->fb_len = (win->w[plane].xres * win->w[plane].yres * bpp) / 8;
-		fbdev->fb_mem = dmam_alloc_noncoherent(&dev->dev,
+		fbdev->fb_mem = dmam_alloc_attrs(&dev->dev,
 				PAGE_ALIGN(fbdev->fb_len),
-				&fbdev->fb_phys, GFP_KERNEL);
+				&fbdev->fb_phys, GFP_KERNEL,
+				DMA_ATTR_NON_CONSISTENT);
 		if (!fbdev->fb_mem) {
 			print_err("fail to allocate frambuffer (size: %dK))",
 				  fbdev->fb_len / 1024);
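dmam_alloc_noncoherent() disappears with this series; callers pass DMA_ATTR_NON_CONSISTENT to the new dmam_alloc_attrs() instead, as the hunk above does. The conversion is mechanical; roughly, and with a hypothetical wrapper name:

#include <linux/dma-mapping.h>

/* Hypothetical helper showing the mechanical conversion. */
static void *example_alloc_noncoherent(struct device *dev, size_t size,
                                       dma_addr_t *handle)
{
        /* Was: dmam_alloc_noncoherent(dev, size, handle, GFP_KERNEL); */
        return dmam_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                DMA_ATTR_NON_CONSISTENT);
}

Because the allocation is device-managed, there is still no explicit free in the driver; devres releases it when the device is detached.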


@@ -67,6 +67,8 @@ static unsigned long dma_alloc_coherent_mask(struct device *dev,
 }
 #endif
+#define XEN_SWIOTLB_ERROR_CODE (~(dma_addr_t)0x0)
 static char *xen_io_tlb_start, *xen_io_tlb_end;
 static unsigned long xen_io_tlb_nslabs;
 /*
@@ -295,7 +297,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	free_pages((unsigned long)xen_io_tlb_start, order);
 	return rc;
 }
-void *
+static void *
 xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			   dma_addr_t *dma_handle, gfp_t flags,
 			   unsigned long attrs)
@@ -346,9 +349,8 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	memset(ret, 0, size);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_alloc_coherent);
-void
+static void
 xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 			  dma_addr_t dev_addr, unsigned long attrs)
 {
@@ -369,8 +371,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_free_coherent);
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode. The
@@ -379,7 +379,7 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_free_coherent);
  * Once the device is given the dma address, the device owns this memory until
  * either xen_swiotlb_unmap_page or xen_swiotlb_dma_sync_single is performed.
  */
-dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
+static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 				unsigned long offset, size_t size,
 				enum dma_data_direction dir,
 				unsigned long attrs)
@@ -412,7 +412,7 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir,
 				     attrs);
 	if (map == SWIOTLB_MAP_ERROR)
-		return DMA_ERROR_CODE;
+		return XEN_SWIOTLB_ERROR_CODE;
 	dev_addr = xen_phys_to_bus(map);
 	xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT),
@@ -427,9 +427,8 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	attrs |= DMA_ATTR_SKIP_CPU_SYNC;
 	swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
-	return DMA_ERROR_CODE;
+	return XEN_SWIOTLB_ERROR_CODE;
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_map_page);
 /*
  * Unmap a single streaming mode DMA translation. The dma_addr and size must
@@ -467,13 +466,12 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
 		dma_mark_clean(phys_to_virt(paddr), size);
 }
-void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
+static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, enum dma_data_direction dir,
 			    unsigned long attrs)
 {
 	xen_unmap_single(hwdev, dev_addr, size, dir, attrs);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_page);
 /*
  * Make physical memory consistent for a single streaming mode DMA translation
@@ -516,7 +514,6 @@ xen_swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 {
 	xen_swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_cpu);
 void
 xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
@@ -524,7 +521,25 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 {
 	xen_swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
+/*
+ * Unmap a set of streaming mode DMA translations. Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_page() above.
+ */
+static void
+xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
+			   int nelems, enum dma_data_direction dir,
+			   unsigned long attrs)
+{
+	struct scatterlist *sg;
+	int i;
+	BUG_ON(dir == DMA_NONE);
+	for_each_sg(sgl, sg, nelems, i)
+		xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir, attrs);
+}
 /*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
@@ -542,7 +557,7 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
  * Device ownership issues as mentioned above for xen_swiotlb_map_page are the
  * same here.
  */
-int
+static int
 xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 			 int nelems, enum dma_data_direction dir,
 			 unsigned long attrs)
@@ -599,27 +614,6 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 	}
 	return nelems;
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_map_sg_attrs);
-/*
- * Unmap a set of streaming mode DMA translations. Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_page() above.
- */
-void
-xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
-			   int nelems, enum dma_data_direction dir,
-			   unsigned long attrs)
-{
-	struct scatterlist *sg;
-	int i;
-	BUG_ON(dir == DMA_NONE);
-	for_each_sg(sgl, sg, nelems, i)
-		xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir, attrs);
-}
-EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_sg_attrs);
 /*
  * Make physical memory consistent for a set of streaming mode DMA translations
@@ -641,21 +635,19 @@ xen_swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sgl,
 				    sg_dma_len(sg), dir, target);
 }
-void
+static void
 xen_swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 			    int nelems, enum dma_data_direction dir)
 {
 	xen_swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_cpu);
-void
+static void
 xen_swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			       int nelems, enum dma_data_direction dir)
 {
 	xen_swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_device);
 /*
  * Return whether the given device DMA address mask can be supported
@@ -663,31 +655,18 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_device);
  * during bus mastering, then you would pass 0x00ffffff as the mask to
  * this function.
 */
-int
+static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
 	return xen_virt_to_bus(xen_io_tlb_end - 1) <= mask;
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_dma_supported);
-int
-xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask)
-{
-	if (!dev->dma_mask || !xen_swiotlb_dma_supported(dev, dma_mask))
-		return -EIO;
-	*dev->dma_mask = dma_mask;
-	return 0;
-}
-EXPORT_SYMBOL_GPL(xen_swiotlb_set_dma_mask);
 /*
  * Create userspace mapping for the DMA-coherent memory.
  * This function should be called with the pages from the current domain only,
  * passing pages mapped from other domains would lead to memory corruption.
  */
-int
+static int
 xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		     void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		     unsigned long attrs)
@@ -699,13 +678,12 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 #endif
 	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_dma_mmap);
 /*
  * This function should be called with the pages from the current domain only,
  * passing pages mapped from other domains would lead to memory corruption.
 */
-int
+static int
 xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
 			void *cpu_addr, dma_addr_t handle, size_t size,
 			unsigned long attrs)
@@ -727,4 +705,25 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
 #endif
 	return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size);
 }
-EXPORT_SYMBOL_GPL(xen_swiotlb_get_sgtable);
+static int xen_swiotlb_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+	return dma_addr == XEN_SWIOTLB_ERROR_CODE;
+}
+const struct dma_map_ops xen_swiotlb_dma_ops = {
+	.alloc = xen_swiotlb_alloc_coherent,
+	.free = xen_swiotlb_free_coherent,
+	.sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
+	.sync_single_for_device = xen_swiotlb_sync_single_for_device,
+	.sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
+	.sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
+	.map_sg = xen_swiotlb_map_sg_attrs,
+	.unmap_sg = xen_swiotlb_unmap_sg_attrs,
+	.map_page = xen_swiotlb_map_page,
+	.unmap_page = xen_swiotlb_unmap_page,
+	.dma_supported = xen_swiotlb_dma_supported,
+	.mmap = xen_swiotlb_dma_mmap,
+	.get_sgtable = xen_swiotlb_get_sgtable,
+	.mapping_error = xen_swiotlb_mapping_error,
+};
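With every entry point made static, the only thing the rest of the kernel sees is the exported xen_swiotlb_dma_ops table plus the generic dma_mapping_error() path. Architecture glue is then expected to install the whole table on a device rather than call the helpers directly; a hypothetical wiring sketch, assuming the generic set_dma_ops() helper and with the boolean standing in for whatever platform logic actually selects the Xen path:

#include <linux/dma-mapping.h>
#include <xen/swiotlb-xen.h>

/*
 * Hypothetical: use_xen_swiotlb is a placeholder for the platform's own
 * decision (e.g. running as the initial domain), not an in-tree symbol.
 */
static void example_setup_dma_ops(struct device *dev, bool use_xen_swiotlb)
{
        if (use_xen_swiotlb)
                set_dma_ops(dev, &xen_swiotlb_dma_ops);
}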


@@ -127,7 +127,6 @@ struct dma_map_ops {
 			   enum dma_data_direction dir);
 	int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
 	int (*dma_supported)(struct device *dev, u64 mask);
-	int (*set_dma_mask)(struct device *dev, u64 mask);
 #ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK
 	u64 (*get_required_mask)(struct device *dev);
 #endif
@@ -546,15 +545,9 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 	if (get_dma_ops(dev)->mapping_error)
 		return get_dma_ops(dev)->mapping_error(dev, dma_addr);
-#ifdef DMA_ERROR_CODE
-	return dma_addr == DMA_ERROR_CODE;
-#else
 	return 0;
-#endif
 }
-#ifndef HAVE_ARCH_DMA_SUPPORTED
 static inline int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
@@ -565,16 +558,10 @@ static inline int dma_supported(struct device *dev, u64 mask)
 		return 1;
 	return ops->dma_supported(dev, mask);
 }
-#endif
 #ifndef HAVE_ARCH_DMA_SET_MASK
 static inline int dma_set_mask(struct device *dev, u64 mask)
 {
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-	if (ops->set_dma_mask)
-		return ops->set_dma_mask(dev, mask);
 	if (!dev->dma_mask || !dma_supported(dev, mask))
 		return -EIO;
 	*dev->dma_mask = mask;
@@ -747,10 +734,9 @@ extern void *dmam_alloc_coherent(struct device *dev, size_t size,
 				 dma_addr_t *dma_handle, gfp_t gfp);
 extern void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
 			       dma_addr_t dma_handle);
-extern void *dmam_alloc_noncoherent(struct device *dev, size_t size,
-				    dma_addr_t *dma_handle, gfp_t gfp);
-extern void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
-				  dma_addr_t dma_handle);
+extern void *dmam_alloc_attrs(struct device *dev, size_t size,
+			      dma_addr_t *dma_handle, gfp_t gfp,
+			      unsigned long attrs);
 #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
 extern int dmam_declare_coherent_memory(struct device *dev,
 					phys_addr_t phys_addr,
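On the consumer side the ->set_dma_mask removal is invisible: drivers keep calling the generic mask helpers, and dma_set_mask() now validates the request purely through dma_supported(). A hypothetical probe fragment:

#include <linux/dma-mapping.h>

/*
 * Prefer 64-bit addressing, falling back to 32-bit when the ops'
 * ->dma_supported (or the default policy) rejects the wider mask.
 */
static int example_configure_mask(struct device *dev)
{
        if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
                return 0;
        return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}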


@@ -1,69 +1,9 @@
 #ifndef __LINUX_SWIOTLB_XEN_H
 #define __LINUX_SWIOTLB_XEN_H
-#include <linux/dma-direction.h>
-#include <linux/scatterlist.h>
 #include <linux/swiotlb.h>
 extern int xen_swiotlb_init(int verbose, bool early);
+extern const struct dma_map_ops xen_swiotlb_dma_ops;
-extern void
-*xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-			    dma_addr_t *dma_handle, gfp_t flags,
-			    unsigned long attrs);
-extern void
-xen_swiotlb_free_coherent(struct device *hwdev, size_t size,
-			  void *vaddr, dma_addr_t dma_handle,
-			  unsigned long attrs);
-extern dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
-				       unsigned long offset, size_t size,
-				       enum dma_data_direction dir,
-				       unsigned long attrs);
-extern void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
-				   size_t size, enum dma_data_direction dir,
-				   unsigned long attrs);
-extern int
-xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
-			 int nelems, enum dma_data_direction dir,
-			 unsigned long attrs);
-extern void
-xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
-			   int nelems, enum dma_data_direction dir,
-			   unsigned long attrs);
-extern void
-xen_swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-				size_t size, enum dma_data_direction dir);
-extern void
-xen_swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			    int nelems, enum dma_data_direction dir);
-extern void
-xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-				   size_t size, enum dma_data_direction dir);
-extern void
-xen_swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			       int nelems, enum dma_data_direction dir);
-extern int
-xen_swiotlb_dma_supported(struct device *hwdev, u64 mask);
-extern int
-xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask);
-extern int
-xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
-		     void *cpu_addr, dma_addr_t dma_addr, size_t size,
-		     unsigned long attrs);
-extern int
-xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
-			void *cpu_addr, dma_addr_t handle, size_t size,
-			unsigned long attrs);
 #endif /* __LINUX_SWIOTLB_XEN_H */


@@ -7,6 +7,7 @@
 #include <linux/mm.h>
 #include <linux/dma-mapping.h>
 #include <linux/scatterlist.h>
+#include <linux/pfn.h>
 static void *dma_noop_alloc(struct device *dev, size_t size,
 			    dma_addr_t *dma_handle, gfp_t gfp,
@@ -16,7 +17,8 @@ static void *dma_noop_alloc(struct device *dev, size_t size,
 	ret = (void *)__get_free_pages(gfp, get_order(size));
 	if (ret)
-		*dma_handle = virt_to_phys(ret);
+		*dma_handle = virt_to_phys(ret) - PFN_PHYS(dev->dma_pfn_offset);
 	return ret;
 }
@@ -32,7 +34,7 @@ static dma_addr_t dma_noop_map_page(struct device *dev, struct page *page,
 				    enum dma_data_direction dir,
 				    unsigned long attrs)
 {
-	return page_to_phys(page) + offset;
+	return page_to_phys(page) + offset - PFN_PHYS(dev->dma_pfn_offset);
 }
 static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
@@ -43,34 +45,23 @@ static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 	struct scatterlist *sg;
 	for_each_sg(sgl, sg, nents, i) {
+		dma_addr_t offset = PFN_PHYS(dev->dma_pfn_offset);
 		void *va;
 		BUG_ON(!sg_page(sg));
 		va = sg_virt(sg);
-		sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va);
+		sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va) - offset;
 		sg_dma_len(sg) = sg->length;
 	}
 	return nents;
 }
-static int dma_noop_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return 0;
-}
-static int dma_noop_supported(struct device *dev, u64 mask)
-{
-	return 1;
-}
 const struct dma_map_ops dma_noop_ops = {
 	.alloc = dma_noop_alloc,
 	.free = dma_noop_free,
 	.map_page = dma_noop_map_page,
 	.map_sg = dma_noop_map_sg,
-	.mapping_error = dma_noop_mapping_error,
-	.dma_supported = dma_noop_supported,
 };
 EXPORT_SYMBOL(dma_noop_ops);
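dev->dma_pfn_offset expresses, in pages, how far the bus view of memory is shifted from the CPU's physical view, so the noop ops above subtract PFN_PHYS(dev->dma_pfn_offset) whenever they hand out a dma_addr_t. A simplified sketch of that conversion (not the in-tree helper, just an illustration of the arithmetic):

#include <linux/device.h>
#include <linux/pfn.h>
#include <linux/types.h>

/*
 * Sketch only: translate a CPU physical address into the address the
 * device should issue, honouring a fixed per-device PFN offset.
 */
static dma_addr_t example_phys_to_dma(struct device *dev, phys_addr_t paddr)
{
        return (dma_addr_t)paddr - PFN_PHYS(dev->dma_pfn_offset);
}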


@@ -51,22 +51,10 @@ static int dma_virt_map_sg(struct device *dev, struct scatterlist *sgl,
 	return nents;
 }
-static int dma_virt_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return false;
-}
-static int dma_virt_supported(struct device *dev, u64 mask)
-{
-	return true;
-}
 const struct dma_map_ops dma_virt_ops = {
 	.alloc = dma_virt_alloc,
 	.free = dma_virt_free,
 	.map_page = dma_virt_map_page,
 	.map_sg = dma_virt_map_sg,
-	.mapping_error = dma_virt_mapping_error,
-	.dma_supported = dma_virt_supported,
 };
 EXPORT_SYMBOL(dma_virt_ops);