Commit Graph

1347 Commits

Author SHA1 Message Date
Baoquan He 9b1a12d291 iommu/amd: Correct the wrong setting of alias DTE in do_attach
In the commit below, the alias DTE is set when the DTE of
its peripheral device is set up. However, there is a bug in
that code which sets the alias DTE wrongly; correct it in
this patch.

commit e25bfb56ea
Author: Joerg Roedel <jroedel@suse.de>
Date:   Tue Oct 20 17:33:38 2015 +0200

    iommu/amd: Set alias DTE in do_attach/do_detach

Signed-off-by: Baoquan He <bhe@redhat.com>
Tested-by: Mark Hounschell <markh@compro.net>
Cc: stable@vger.kernel.org # v4.4
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-29 12:30:47 +01:00
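
A minimal sketch of the kind of slip described above, kept deliberately
generic: do_attach() and set_dte_entry() are the names referenced by the
quoted commit, while the parameters and body here are illustrative only.

    static void do_attach_sketch(u16 devid, u16 alias,
                                 struct protection_domain *domain, bool ats)
    {
            /* Program the device's own DTE. */
            set_dte_entry(devid, domain, ats);

            /* The alias DTE must be programmed with the alias ID; passing
             * devid here a second time is the kind of bug being corrected. */
            if (alias != devid)
                    set_dte_entry(alias, domain, ats);
    }
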
Jeremy McNicoll da972fb13b iommu/vt-d: Don't skip PCI devices when disabling IOTLB
Fix a simple typo when disabling IOTLB on PCI(e) devices.

Fixes: b16d0cb9e2 ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS")
Cc: stable@vger.kernel.org  # v4.4
Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-29 12:18:13 +01:00
Lada Trimasova 8f6aff9858 iommu/io-pgtable-arm: Fix io-pgtable-arm build failure
Trying to build a kernel for ARC with both options CONFIG_COMPILE_TEST
and CONFIG_IOMMU_IO_PGTABLE_LPAE enabled (e.g. as a result of "make
allyesconfig") results in the following build failure:

 | CC drivers/iommu/io-pgtable-arm.o
 | linux/drivers/iommu/io-pgtable-arm.c: In
 | function ‘__arm_lpae_alloc_pages’:
 | linux/drivers/iommu/io-pgtable-arm.c:221:3:
 | error: implicit declaration of function ‘dma_map_single’
 | [-Werror=implicit-function-declaration]
 | dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
 | ^
 | linux/drivers/iommu/io-pgtable-arm.c:221:42:
 | error: ‘DMA_TO_DEVICE’ undeclared (first use in this function)
 | dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
 | ^

Since IOMMU_IO_PGTABLE_LPAE depends on DMA API, io-pgtable-arm.c should
include linux/dma-mapping.h. This fixes the reported failure.

Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Lada Trimasova <ltrimas@synopsys.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-29 12:14:08 +01:00
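
As the message notes, the missing declarations live in linux/dma-mapping.h;
a minimal sketch of the dependency (the wrapper function is purely
illustrative):

    #include <linux/dma-mapping.h>  /* declares dma_map_single() and DMA_TO_DEVICE */

    static dma_addr_t map_table_sketch(struct device *dev, void *pages, size_t size)
    {
            return dma_map_single(dev, pages, size, DMA_TO_DEVICE);
    }
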
Linus Torvalds 99e38df892 IOMMU Updates for Linux v4.5
The updates include:
 
 	* Small code cleanups in the AMD IOMMUv2 driver
 
 	* Scalability improvements for the DMA-API implementation of the
 	  AMD IOMMU driver. This is just a starting point, but already
 	  showed some good improvements in my tests.
 
 	* Removal of the unused Renesas IPMMU/IPMMUI driver
 
 	* Updates for ARM-SMMU include:
 
 		* Some fixes to get the driver working nicely on
 		  Broadcom hardware
 
 		* A change to the io-pgtable API to indicate the unit in
 		  which to flush (all callers converted, with Ack from
 		  Laurent)
 
 		* Use of devm_* for allocating/freeing the SMMUv3
 		  buffers
 
 	* Some other small fixes and improvements for other drivers
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABAgAGBQJWnkkIAAoJECvwRC2XARrjpD8QAMCYfoqiq35QLQYn7Jh/LA5E
 ZotdNv6hONwCahQiNSSsaoP2f8IBpyvo7Nrgz1Fj3SEQYzAiBn6mWgXFu7WdQarD
 kw1SLwUIUweF/qjgpOvGD29F7mC4XIRYfPOFbLEPvkBwx6Vm4NSJkclfMZJeNCFm
 ghWxGdva+7HFyJX+gMS1flihfUzN31U5hKWRxQqHXcLbHuVOdEnL1by5ozbpcJNI
 vkpbkCcWaD1uKju918akFYJultcwMGb7Wm6HwKB2EjG2aOoe2Siw61MrJ1DUreOh
 J0fJubltaZwkMxFUTuNwrP9E+FH6arPtJBmvpMMz8ZQeLyQQQnBcHKDZFAgHu23Z
 /wOkjoA5uG5iy2XiPWbUFJQKp4q+Dlkp8LqT1RAKvp8kVbrrsSGUXQzBIf+DE5F7
 U0ghAWB70g6fREys/cvs0q7huX42Cuf3M82JKP9rksLj9ArWoT4TtkI2nvbNyKE8
 KhX57xj4OSROriZV8+XmaU/W7bK6BVXr7B0aVOCvf5y7GsIYhf6zSH+0cP/TmLuQ
 ZLtOr2UHFzvjZq7LHgRfEs1CYn+PhKw6kUM3rxjm/QZxiBSft7ABhhxJZKlMyE/f
 jTnPS5DH2XT+UKtt8D0nfS558h0kxqwXzhICQHC30lpJLIoWj9ulZcmOXdzlY1xM
 R5+4TTJ4l1tovPtQ9nUW
 =bm5E
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:
 "The updates include:

   - Small code cleanups in the AMD IOMMUv2 driver

   - Scalability improvements for the DMA-API implementation of the AMD
     IOMMU driver.  This is just a starting point, but already showed
     some good improvements in my tests.

   - Removal of the unused Renesas IPMMU/IPMMUI driver

   - Updates for ARM-SMMU include:
      * Some fixes to get the driver working nicely on Broadcom hardware
      * A change to the io-pgtable API to indicate the unit in which to
        flush (all callers converted, with Ack from Laurent)
      * Use of devm_* for allocating/freeing the SMMUv3 buffers

   - Some other small fixes and improvements for other drivers"

* tag 'iommu-updates-v4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (46 commits)
  iommu/vt-d: Fix up error handling in alloc_iommu
  iommu/vt-d: Check the return value of iommu_device_create()
  iommu/amd: Remove an unneeded condition
  iommu/amd: Preallocate dma_ops apertures based on dma_mask
  iommu/amd: Use trylock to aquire bitmap_lock
  iommu/amd: Make dma_ops_domain->next_index percpu
  iommu/amd: Relax locking in dma_ops path
  iommu/amd: Initialize new aperture range before making it visible
  iommu/amd: Build io page-tables with cmpxchg64
  iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses
  iommu/amd: Optimize dma_ops_free_addresses
  iommu/amd: Remove need_flush from struct dma_ops_domain
  iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc
  iommu/amd: Flush iommu tlb in dma_ops_free_addresses
  iommu/amd: Rename dma_ops_domain->next_address to next_index
  iommu/amd: Remove 'start' parameter from dma_ops_area_alloc
  iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc()
  iommu/amd: Retry address allocation within one aperture
  iommu/amd: Move aperture_range.offset to another cache-line
  iommu/amd: Add dma_ops_aperture_alloc() function
  ...
2016-01-19 09:35:06 -08:00
Joerg Roedel 32704253dc Merge branches 's390', 'arm/renesas', 'arm/msm', 'arm/shmobile', 'arm/smmu', 'x86/amd' and 'x86/vt-d' into next 2016-01-19 15:30:43 +01:00
Linus Torvalds 67c707e451 Merge branch 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cleanups from Ingo Molnar:
 "The main changes in this cycle were:

   - code patching and cpu_has cleanups (Borislav Petkov)

   - paravirt cleanups (Juergen Gross)

   - TSC cleanup (Thomas Gleixner)

   - ptrace cleanup (Chen Gang)"

* 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arch/x86/kernel/ptrace.c: Remove unused arg_offs_table
  x86/mm: Align macro defines
  x86/cpu: Provide a config option to disable static_cpu_has
  x86/cpufeature: Remove unused and seldomly used cpu_has_xx macros
  x86/cpufeature: Cleanup get_cpu_cap()
  x86/cpufeature: Move some of the scattered feature bits to x86_capability
  x86/paravirt: Remove paravirt ops pmd_update[_defer] and pte_update_defer
  x86/paravirt: Remove unused pv_apic_ops structure
  x86/tsc: Remove unused tsc_pre_init() hook
  x86: Remove unused function cpu_has_ht_siblings()
  x86/paravirt: Kill some unused patching functions
2016-01-11 16:26:03 -08:00
Joerg Roedel bc8474549e iommu/vt-d: Fix up error handling in alloc_iommu
Only check for error when iommu->iommu_dev has been assigned
and only assign drhd->iommu when the function can't fail
anymore.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-07 13:44:41 +01:00
Nicholas Krause 592033790e iommu/vt-d: Check the return value of iommu_device_create()
This adds the proper check to alloc_iommu to make sure that
the call to iommu_device_create has completed successfully
and, if not, returns the error code to the caller after
freeing up the previously allocated resources.

Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-07 13:43:56 +01:00
Robin Murphy 164afb1d85 iommu/dma: Use correct offset in map_sg
When mapping a non-page-aligned scatterlist entry, we copy the original
offset to the output DMA address before aligning it to hand off to
iommu_map_sg(), then later adding the IOVA page address portion to get
the final mapped address. However, when the IOVA page size is smaller
than the CPU page size, it is the offset within the IOVA page we want,
not that within the CPU page, which can easily be larger than an IOVA
page and thus result in an incorrect final address.

Fix the bug by taking only the IOVA-aligned part of the offset as the
basis of the DMA address, not the whole thing.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-07 13:36:41 +01:00
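
A small sketch of the rule the fix implements, with illustrative names:
only the offset within an IOVA page belongs in the returned DMA address.

    #include <linux/types.h>

    /* iova_base:  IOVA-page-aligned address from the allocator
     * sg_offset:  the scatterlist entry's byte offset
     * granule:    the IOVA page size (may be smaller than PAGE_SIZE) */
    static dma_addr_t sg_dma_addr_sketch(dma_addr_t iova_base,
                                         unsigned long sg_offset,
                                         unsigned long granule)
    {
            return iova_base + (sg_offset & (granule - 1));
    }
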
Dan Carpenter 1fb260bc00 iommu/amd: Remove an unneeded condition
get_device_id() returns an unsigned short device id.  It never fails and
it never returns a negative value, so we can remove this condition.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-07 13:08:07 +01:00
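
A standalone illustration of why the removed check was dead code: an
unsigned short can never compare below zero, so an error branch keyed on a
negative device id is unreachable.

    #include <stdio.h>

    int main(void)
    {
            unsigned short devid = 0xffff;  /* largest possible device id */

            if (devid < 0)                  /* always false after integer promotion */
                    printf("negative device id\n");
            else
                    printf("devid = %u is never negative\n", devid);
            return 0;
    }
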
Joerg Roedel a639a8eecf iommu/amd: Preallocate dma_ops apertures based on dma_mask
Preallocate between 4 and 8 apertures when a device gets its
dma_mask. With more apertures we reduce the lock contention
on the domain lock significantly.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
Joerg Roedel 7b5e25b84e iommu/amd: Use trylock to aquire bitmap_lock
First search for a non-contended aperture with trylock
before spinning.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
Joerg Roedel 5f6bed5005 iommu/amd: Make dma_ops_domain->next_index percpu
Make this pointer percpu so that we start searching for new
addresses in the range where we last stopped, which has a
higher probability of still being in the cache.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
Joerg Roedel 92d420ec02 iommu/amd: Relax locking in dma_ops path
Remove the long holding times of the domain->lock and rely
on the bitmap_lock instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
Joerg Roedel a73c156665 iommu/amd: Initialize new aperture range before making it visible
Make sure the aperture range is fully initialized before it
is visible to the address allocator.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel 7bfa5bd270 iommu/amd: Build io page-tables with cmpxchg64
This allows the page-tables to be built up without holding
any locks. As a consequence it removes the need to
pre-populate the dma_ops page-tables.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
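
A sketch of the lock-free publication pattern this refers to, with
hypothetical PTE flag macros: a newly allocated next-level table is
installed with cmpxchg64() and discarded if another CPU raced ahead.

    static u64 *install_table_sketch(u64 *ptep, u64 *new_table)
    {
            /* PTE_PRESENT / PTE_ADDR_MASK are illustrative names;
             * new_table is assumed to come from __get_free_page(). */
            u64 new_pte = virt_to_phys(new_table) | PTE_PRESENT;

            if (cmpxchg64(ptep, 0ULL, new_pte) != 0ULL) {
                    /* Lost the race: another CPU already installed a table. */
                    free_page((unsigned long)new_table);
            }

            return phys_to_virt(*ptep & PTE_ADDR_MASK);
    }
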
Joerg Roedel 266a3bd28f iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses
It really belongs there and not in __map_single.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel 4eeca8c5e7 iommu/amd: Optimize dma_ops_free_addresses
Don't flush the iommu tlb when we free something behind the
current next_bit pointer. Update the next_bit pointer
instead and let the flush happen on the next wraparound in
the allocation path.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel ab7032bb9c iommu/amd: Remove need_flush from struct dma_ops_domain
The flushing of iommu tlbs is now done on a per-range basis.
So there is no need anymore for domain-wide flush tracking.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel 2a87442c5b iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc
This way we don't need to care about the next_index wrapping
around in dma_ops_alloc_addresses.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel d41ab09896 iommu/amd: Flush iommu tlb in dma_ops_free_addresses
Instead of setting need_flush, do the flush directly in
dma_ops_free_addresses.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel ebaecb423b iommu/amd: Rename dma_ops_domain->next_address to next_index
It points to the next aperture index to allocate from. We
don't need the full address anymore because this is now
tracked in struct aperture_range.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel 05ab49e005 iommu/amd: Remove 'start' parameter from dma_ops_area_alloc
The parameter is not needed because the value is part of the
struct dma_ops_domain that is already passed in.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel ccb50e03da iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc()
Since the allocator wraparound happens in this function now,
flush the iommu tlb there too.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
Joerg Roedel 60e6a7cb44 iommu/amd: Retry address allocation within one aperture
Instead of skipping to the next aperture, first try again in
the current one.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
Joerg Roedel ae62d49c7a iommu/amd: Move aperture_range.offset to another cache-line
Moving it before the pte_pages array puts it into the same
cache-line as the spin-lock and the bitmap array pointer.
This should save a cache-miss.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
Joerg Roedel a0f51447f4 iommu/amd: Add dma_ops_aperture_alloc() function
Make this a wrapper around iommu_ops_area_alloc() for now
and add more logic to this function later on.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
Joerg Roedel b57c3c802e iommu/amd: Pass correct shift to iommu_area_alloc()
The page-offset of the aperture must be passed instead of 0.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 84b3a0bc88 iommu/amd: Flush the IOMMU TLB before the addresses are freed
This allows the bitmap_lock to be held only for a very
short period of time.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 53b3b65aa5 iommu/amd: Flush IOMMU TLB on __map_single error path
There were present PTEs which could, in theory, have made
it into the IOMMU TLB. Flush the addresses out on the error
path to make sure no stale entries remain.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 08c5fb938e iommu/amd: Introduce bitmap_lock in struct aperture_range
This lock only protects the address allocation bitmap in one
aperture.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 007b74bab2 iommu/amd: Move 'struct dma_ops_domain' definition to amd_iommu.c
It is only used in this file anyway, so keep it there. Same
with 'struct aperture_range'.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel a7fb668fd8 iommu/amd: Warn only once on unexpected pte value
This prevents possible flooding of the kernel log.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Geert Uytterhoeven f64232eee6 iommu/ipmmu-vmsa: Don't truncate ttbr if LPAE is not enabled
If CONFIG_PHYS_ADDR_T_64BIT=n:

    drivers/iommu/ipmmu-vmsa.c: In function 'ipmmu_domain_init_context':
    drivers/iommu/ipmmu-vmsa.c:434:2: warning: right shift count >= width of type
      ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32);
      ^

As io_pgtable_cfg.arm_lpae_s1_cfg.ttbr[] is an array of u64s, assigning
it to a phys_addr_t may truncate it.  Make ttbr a u64 to fix this.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:10:52 +01:00
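
The fix boils down to the declaration; a sketch around the line quoted in
the warning, where cfg stands for the domain's io_pgtable_cfg:

    u64 ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];        /* u64, not phys_addr_t */

    /* With a 64-bit ttbr the shift is well defined even when
     * CONFIG_PHYS_ADDR_T_64BIT=n. */
    ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32);
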
Robin Murphy 0a9afeda80 iommu/dma: Avoid unlikely high-order allocations
Doug reports that the equivalent page allocator on 32-bit ARM exhibits
particularly pathalogical behaviour under memory pressure when
fragmentation is high, where allocating a 4MB buffer takes tens of
seconds and the number of calls to alloc_pages() is over 9000![1]

We can drastically improve that situation without losing the other
benefits of high-order allocations when they would succeed, by assuming
memory pressure is relatively constant over the course of an allocation,
and not retrying allocations at orders we know to have failed before.
This way, the best-case behaviour remains unchanged, and in the worst
case we should see at most a dozen or so (MAX_ORDER - 1) failed attempts
before falling back to single pages for the remainder of the buffer.

[1]:http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/394660.html

Reported-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:06:26 +01:00
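
A sketch of the strategy described above, as a hypothetical helper:
remember the highest order that still succeeds and never retry an order
that has already failed for this buffer.

    static struct page *alloc_chunk_sketch(unsigned int *max_order,
                                           unsigned int want_order, gfp_t gfp)
    {
            unsigned int order = min(*max_order, want_order);

            for (;;) {
                    gfp_t alloc_gfp = order ? gfp | __GFP_NORETRY : gfp;
                    struct page *page = alloc_pages(alloc_gfp, order);

                    if (page) {
                            *max_order = order;     /* this order still works */
                            return page;
                    }
                    if (!order)
                            return NULL;            /* even single pages failed */
                    *max_order = --order;           /* don't retry the failed order */
            }
    }
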
Robin Murphy 5b11e9cd42 iommu/dma: Add some missing #includes
dma-iommu.c was naughtily relying on an implicit transitive #include of
linux/vmalloc.h, which is apparently not present on some architectures.
Add that, plus a couple more headers for other functions which are used
similarly.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:03:34 +01:00
Joerg Roedel 6d6c7e56a2 Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu 2015-12-22 11:26:37 +01:00
Borislav Petkov 362f924b64 x86/cpufeature: Remove unused and seldomly used cpu_has_xx macros
Those are stupid and code should use static_cpu_has_safe() or
boot_cpu_has() instead. Kill the least used and unused ones.

The remaining ones need more careful inspection before a conversion can
happen. On the TODO.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1449481182-27541-4-git-send-email-bp@alien8.de
Cc: David Sterba <dsterba@suse.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fb.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-12-19 11:49:55 +01:00
Linus Torvalds ccdd96be43 IOMMU Fixes for Linux v4.4-rc5
* Two similar fixes for the Intel and AMD IOMMU drivers to add
 	  proper access checks before calling handle_mm_fault.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABAgAGBQJWdCp7AAoJECvwRC2XARrjjAIP/0ihW2zF4R622RgY1C1Cm62j
 0eb/R4UqjI3PG0KsURgDHcIm9JP5Z//dgKTOtNX9KOkHlXLcO9MMSD5chVBd4HKG
 +Mgx7RM+Mr7f6ElRUa6s1GY1tcJlGf43fW5cMQ44BJIqVXlE47go4U09D86DVgXy
 KgyBxQldeOrkXZvAG82WLjGgkdGALQjbDlI8ktmfYWXAvIRWNGJqWY16BwAYOWfb
 9d3+1JPekSSBWHC6H+qbkDb8ueO69/Ux0HL5z2Q0zchqGjBb1gnfwLcz865KZpOB
 qUwsKFSXTl+jPCrAaLYJnVqAnH4qqKaF6WKAJSIHObTSVqXKHpFHrQrlGVzOvYNn
 s3216KIMsxG2nnvSgXCOFGqM/810MH2MSo8YcF5A3celrka3j2Gj08mxInrZXN7D
 3p51HSwq8ePo4i5jppT5ldOBSjNV9N3wKWcjDb4OL+OfkJc/u2VbSHNQtpvTclsV
 V6VSfWLDC8BCmUveMH2TrawQWkKOz0LqgqfQPX+VvSCIM7tgkrgVsTJrijPtGOs1
 zid/A/cfqMdBezSVALrZfB4OVBaM2UL2LJmmLJgApYV+N55Oxmx+nxnMr0aT5KlY
 crjcnVaypkq3rG1Wjpt+nTTwtllB0yXNEywQcu2edeswmaQCqsEgQRsDqi6S2/+S
 c8l9JKoTrB4+vToYjXyW
 =qrAB
 -----END PGP SIGNATURE-----

Merge tag 'iommu-fixes-v4.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU fixes from Joerg Roedel:
 "Two similar fixes for the Intel and AMD IOMMU drivers to add proper
  access checks before calling handle_mm_fault"

* tag 'iommu-fixes-v4.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/vt-d: Do access checks before calling handle_mm_fault()
  iommu/amd: Do proper access checking before calling handle_mm_fault()
2015-12-18 12:38:35 -08:00
Will Deacon 12c2ab0957 iommu/io-pgtable-arm: Ensure we free the final level on teardown
When tearing down page tables, we return early for the final level
since we know that we won't have any table pointers to follow.
Unfortunately, this also means that we forget to free the final level,
so we end up leaking memory.

Fix the issue by always freeing the current level, but just don't bother
to iterate over the ptes if we're at the final level.

Cc: <stable@vger.kernel.org>
Reported-by: Zhang Bo <zhangbo_a@xiaomi.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:36 +00:00
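
A plain-C sketch of the control flow after the fix (the table layout is
hypothetical): recursion into child pointers stops at the final level, but
the level's own memory is always freed.

    #include <stdlib.h>

    static void free_level_sketch(void **table, int lvl, int final_lvl, int entries)
    {
            int i;

            /* Only recurse while there can still be table pointers below us. */
            if (lvl != final_lvl)
                    for (i = 0; i < entries; i++)
                            if (table[i])
                                    free_level_sketch(table[i], lvl + 1,
                                                      final_lvl, entries);

            /* Always free this level; returning early here instead is what
             * leaked the final level before the fix. */
            free(table);
    }
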
Prem Mallappa 6380be0535 iommu/arm-smmu: Use STE.S1STALLD only when supported
It is ILLEGAL to set STE.S1STALLD to 1 if stage 1 is enabled and
either the stall or the terminate model is not supported.

This patch fixes the STALLD check and ensures that we don't set STALLD
in the STE when it is not supported.

Signed-off-by: Prem Mallappa <pmallapp@broadcom.com>
[will: consistently use IDR0_STALL_MODEL_* prefix]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:36 +00:00
Prem Mallappa 324ba10823 iommu/arm-smmu: Fix write to GERRORN register
When acknowledging global errors, the GERRORN register should be written
with the original GERROR value so that active errors are toggled.

This patch fixes the driver to write the original GERROR value to
GERRORN, instead of an active error mask.

Signed-off-by: Prem Mallappa <pmallapp@broadcom.com>
[will: reworked use of active bits and fixed commit log]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:35 +00:00
Robin Murphy fdc3896763 iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common
There is no need to keep a useful accessor for a public structure hidden
away in a private implementation. Move it out alongside the structure
definition so that other implementations may reuse it.

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:35 +00:00
Robin Murphy 75df138655 iommu/arm-smmu: Invalidate TLBs properly
When invalidating an IOVA range potentially spanning multiple pages,
such as when removing an entire intermediate-level table, we currently
only issue an invalidation for the first IOVA of that range. Since the
architecture specifies that address-based TLB maintenance operations
target a single entry, an SMMU could feasibly retain live entries for
subsequent pages within that unmapped range, which is not good.

Make sure we hit every possible entry by iterating over the whole range
at the granularity provided by the pagetable implementation.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: added missing semicolons...]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:35 +00:00
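
A sketch of the loop this implies, with an illustrative callback type:
walk the range in granule-sized steps and emit one invalidation per step
instead of a single one for the first address.

    static void inv_range_sketch(unsigned long iova, size_t size, size_t granule,
                                 void (*inv_one)(unsigned long iova, void *cookie),
                                 void *cookie)
    {
            unsigned long end = iova + size;

            while (iova < end) {
                    inv_one(iova, cookie);
                    iova += granule;
            }
    }
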
Robin Murphy 06c610e8f3 iommu/io-pgtable: Indicate granule for TLB maintenance
IOMMU hardware with range-based TLB maintenance commands can work
happily with the iova and size arguments passed via the tlb_add_flush
callback, but for IOMMUs which require separate commands per entry in
the range, it is not straightforward to infer the necessary granularity
when it comes to issuing the actual commands.

Add an additional argument indicating the granularity for the benefit
of drivers needing to know, and update the ARM LPAE code appropriately
(for non-leaf invalidations we currently just assume the worst-case
page granularity rather than walking the table to check).

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:34 +00:00
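
Roughly the shape of the callback after this change (abridged; treat it as
a sketch rather than the exact header):

    struct iommu_gather_ops {
            void (*tlb_flush_all)(void *cookie);
            void (*tlb_add_flush)(unsigned long iova, size_t size,
                                  size_t granule, bool leaf, void *cookie);
            void (*tlb_sync)(void *cookie);
    };
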
Robin Murphy 2eb97c7861 iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs
In the case of corrupted page tables, or when an invalid size is given,
__arm_lpae_unmap() may recurse beyond the maximum number of levels.
Unfortunately the detection of this error condition only happens *after*
calculating a nonsense offset from something which might not be a valid
table pointer and dereferencing that to see if it is a valid PTE.

Make things a little more robust by checking the level is valid before
doing anything which depends on it being so.

Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:34 +00:00
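
A sketch of the reordering described above, using the macro names of this
pagetable code but with the surrounding unmap logic omitted: validate the
level before it is used in any address math.

    /* Bail out on a bogus level before computing a table offset from it. */
    if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
            return 0;

    ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
    pte = *ptep;                            /* only dereferenced for valid levels */
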
Will Deacon a0d5c04c60 iommu/arm-smmu: Handle unknown CERROR values gracefully
Whilst the architecture only defines a few of the possible CERROR values,
we should handle unknown values gracefully rather than go out of bounds
trying to print out an error description.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:33 +00:00
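
A sketch of the defensive lookup this implies, with an illustrative
(incomplete) name table: unknown or reserved codes fall back to a generic
string instead of indexing past the end of the array.

    static const char * const cerror_str_sketch[] = {
            "no error",
            "illegal command",
            "abort on command fetch",
    };

    static const char *cerror_name_sketch(unsigned int code)
    {
            if (code < ARRAY_SIZE(cerror_str_sketch))
                    return cerror_str_sketch[code];
            return "unknown/reserved error";
    }
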
Peng Fan 9a4a9d8c34 iommu/arm-smmu: Correct group reference count
The basic flow for adding a device:
 arm_smmu_add_device
        |->iommu_group_get_for_dev
            |->iommu_group_get
                     return group;  (1)
            |->ops->device_group : Init/increase reference count to/by 1.
            |->iommu_group_add_device : Increase reference count by 1.
		     return group   (2)
        |->return 0;

Since we are adding one device, the flow is (2) and the group reference
count will be increased by 2. So, we need to add iommu_group_put at the
end of arm_smmu_add_device to decrease the count by 1.

Also take the failure path into consideration when we fail to add
a device.

Signed-off-by: Peng Fan <van.freenix@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:33 +00:00
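
A sketch of the balanced add path described above, with the driver-specific
setup elided: the reference taken by iommu_group_get_for_dev() is dropped
once the device has been added, leaving only the reference taken by
iommu_group_add_device().

    static int add_device_sketch(struct device *dev)
    {
            struct iommu_group *group;

            group = iommu_group_get_for_dev(dev);   /* takes an extra reference */
            if (IS_ERR(group))
                    return PTR_ERR(group);

            /* ... driver-specific per-device setup would go here; on failure,
             *     iommu_group_put() must be called before returning ... */

            iommu_group_put(group);                 /* drop the extra reference */
            return 0;
    }
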
Will Deacon a0eacd89e3 iommu/arm-smmu: Use incoming shareability attributes in bypass mode
When we initialise a bypass STE, we memset the structure to zero and
set the Valid and Config fields to indicate that the stream should
bypass the SMMU. Unfortunately, this results in an SHCFG field of 0
which means that the shareability of any incoming transactions is
overridden with non-shareable, leading to potential coherence problems
down the line.

This patch fixes the issue by initialising bypass STEs to use the
incoming shareability attributes. When translation is in effect at
either stage 1 or stage 2, the shareability is determined by the
page tables.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:33 +00:00
Markus Elfring 44830b0cbd iommu/arm-smmu: Delete an unnecessary check before free_io_pgtable_ops()
The free_io_pgtable_ops() function tests whether its argument is NULL
and then returns immediately. Thus the test around the call is not needed.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17 12:05:32 +00:00
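
The simplification amounts to dropping a caller-side check that
free_io_pgtable_ops() already performs; a sketch with an illustrative
field name:

    /* Before: redundant guard around a NULL-tolerant function. */
    if (smmu_domain->pgtbl_ops)
            free_io_pgtable_ops(smmu_domain->pgtbl_ops);

    /* After: the function itself returns early on NULL. */
    free_io_pgtable_ops(smmu_domain->pgtbl_ops);
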