Every call site of get_device_resources calls check_device
before it. So call check_device from get_device_resources
directly and simplify the code.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
The check_device logic needs to include the dma_supported
checks to be complete. Merge the dma_supported logic into
check_device and use it to implement dma_supported.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
The non-present cache flag was IOMMU-local until now, which
doesn't make sense. Make this a global flag so we can remove
the last user of 'struct iommu' in the map/unmap path.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch reimplements the function
flush_all_domains_on_iommu to use the global protection
domain list.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch reimplements the amd_iommu_flush_all_domains
function to use the global protection domain list instead
of flushing every domain on every IOMMU.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch adds code to keep a global list of all protection
domains. This allows us to simplify the resume code.
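As a rough sketch, and purely illustrative of the idea (the list, lock
and helper names below are assumptions, not necessarily the patch's
identifiers), the global list amounts to a list_head plus a lock and two
helpers, with every protection_domain gaining a list member:

        /* sketch: global bookkeeping of all protection domains */
        static LIST_HEAD(amd_iommu_pd_list);
        static DEFINE_SPINLOCK(amd_iommu_pd_lock);

        static void add_domain_to_list(struct protection_domain *domain)
        {
                unsigned long flags;

                spin_lock_irqsave(&amd_iommu_pd_lock, flags);
                list_add(&domain->list, &amd_iommu_pd_list);
                spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
        }

        static void del_domain_from_list(struct protection_domain *domain)
        {
                unsigned long flags;

                spin_lock_irqsave(&amd_iommu_pd_lock, flags);
                list_del(&domain->list);
                spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
        }

The resume path can then walk this one list instead of per-IOMMU state.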
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
The iommu_flush_tlb_pde function does essentially the same
thing, so the iommu_flush_domain function is redundant and
can be removed.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch extends the iommu_flush_pages function to flush
the TLB entries on all IOMMUs the domain has devices on.
This basically gives up the former assumption that dma_ops
domains are only bound to one IOMMU in the system.
For dma_ops domains this is still true but not for
IOMMU-API managed domains. Giving this assumption up for
dma_ops domains too allows code simplification.
Further, the patch splits out the main logic into a generic function
which can be used by iommu_flush_tlb too.
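A minimal sketch of the generic helper, assuming the per-IOMMU device
reference counts introduced elsewhere in this series (for_each_iommu,
iommu->index and the dev_iommu[] array are assumptions here):

        /* sketch: flush a range on every IOMMU this domain has devices on */
        static void __iommu_flush_pages(struct protection_domain *domain,
                                        u64 address, size_t size, int pde)
        {
                struct amd_iommu *iommu;

                for_each_iommu(iommu) {
                        if (!domain->dev_iommu[iommu->index])
                                continue; /* no devices behind this IOMMU */

                        /*
                         * Build an INVALIDATE_IOMMU_PAGES command for the
                         * range and queue it on this IOMMU here.
                         */
                }
        }

iommu_flush_pages and iommu_flush_tlb then become thin wrappers that pass
the right range and PDE flag.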
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch adds a function to the AMD IOMMU driver which
completes all queued commands on all IOMMUs a specific
domain has devices attached to. This is required in a later
patch when per-domain flushing is implemented.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch adds reference counting for protection domains
per IOMMU. This allows a smarter TLB flushing strategy.
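Conceptually the domain structure grows a per-IOMMU counter which the
attach and detach paths keep up to date, so flushing code can skip IOMMUs
that have no devices for that domain. A sketch (the dev_iommu/dev_cnt
field names, MAX_IOMMUS and iommu->index are assumptions):

        struct protection_domain {
                /* ... existing members ... */
                unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU device count */
                unsigned dev_cnt;               /* total attached devices */
        };

        /* attach path */
        domain->dev_iommu[iommu->index] += 1;
        domain->dev_cnt                 += 1;

        /* detach path */
        domain->dev_iommu[iommu->index] -= 1;
        domain->dev_cnt                 -= 1;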
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch updates the copyright headers in the relevant AMD
IOMMU driver files to match the date of the latest changes.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch moves all function declarations which are only
used inside the driver code to a separate header file.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
For some devices the ACPI table may define unity map
requirements which must be met when the IOMMU is enabled. So
we need to attach devices to their domains as early as
possible so that these mappings are in place when needed.
This patch assigns the domains right after they are
allocated. Otherwise I/O page faults can occur before a
driver binds to a device while the BIOS is still using it.
Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This kills the bad_dma_address variable, the old mechanism
that let IOMMU drivers make dma_mapping_error() work in an
IOMMU-specific way. However, it can't handle systems that
use both swiotlb and a HW IOMMU, so we introduced
dma_map_ops->mapping_error to solve that case.
Intel VT-d, GART, and swiotlb already use
dma_map_ops->mapping_error. Calgary, AMD IOMMU, and nommu use
zero for an error dma address. This adds DMA_ERROR_CODE and
converts them to use it (as SPARC and POWER do).
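On x86 the conversion is essentially mechanical: define DMA_ERROR_CODE to
the value those drivers already treated as bad (zero) and return or compare
it instead of bad_dma_address. A sketch, with a hypothetical driver
map_page callback and address check purely for illustration:

        /* sketch: x86 keeps zero, the error value those drivers used */
        #define DMA_ERROR_CODE  0

        static dma_addr_t example_map_page(struct device *dev,
                                           struct page *page,
                                           unsigned long offset, size_t size,
                                           enum dma_data_direction dir,
                                           struct dma_attrs *attrs)
        {
                phys_addr_t paddr = page_to_phys(page) + offset;

                if (!device_can_address(dev, paddr, size)) /* hypothetical */
                        return DMA_ERROR_CODE;  /* was: bad_dma_address */

                return paddr;
        }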
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: muli@il.ibm.com
Cc: joerg.roedel@amd.com
LKML-Reference: <1258287594-8777-3-git-send-email-fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
If HW IOMMU initialization fails (Intel VT-d often does this,
typically due to BIOS bugs), we fall back to nommu. That
doesn't work for most systems, since nowadays we have more
than 4GB of memory, so we must use swiotlb instead of nommu.
The problem is that it's too late to initialize swiotlb when HW
IOMMU initialization fails. We need to allocate swiotlb memory
earlier from bootmem allocator. Chris explained the issue in
detail:
http://marc.info/?l=linux-kernel&m=125657444317079&w=2
The current x86 IOMMU initialization sequence is too complicated
and handling the above issue makes it more hacky.
This patch changes x86 IOMMU initialization sequence to handle
the above issue cleanly.
The new x86 IOMMU initialization sequence is:
1. we initialize the swiotlb (and set swiotlb to 1) in the case
of (max_pfn > MAX_DMA32_PFN && !no_iommu). dma_ops is set to
swiotlb_dma_ops or nommu_dma_ops. if swiotlb usage is forced by
the boot option, we finish here.
2. we call the detection functions of all the IOMMUs
3. the detection function sets x86_init.iommu.iommu_init to the
IOMMU initialization function (so we can avoid calling the
initialization functions of all the IOMMUs needlessly).
4. if the IOMMU initialization function doesn't need swiotlb,
it sets swiotlb to zero (e.g. the initialization was
successful).
5. if we find that swiotlb is set to zero, we free the swiotlb
resources (see the sketch after this list).
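A condensed sketch of the resulting flow; the function names used here
(pci_swiotlb_detect, swiotlb_free) follow the series but should be read as
illustrative assumptions, not the exact code:

        /* early, from pci_iommu_alloc() -- steps 1-3 (sketch) */
        void __init pci_iommu_alloc(void)
        {
                /* step 1: swiotlb = 1 if (max_pfn > MAX_DMA32_PFN &&
                 * !no_iommu), dma_ops = &swiotlb_dma_ops or
                 * &nommu_dma_ops; return early if swiotlb was forced
                 * on the command line */
                pci_swiotlb_detect();

                /* steps 2+3: each detect routine may claim the hardware
                 * by setting x86_init.iommu.iommu_init to its own
                 * initialization function */
                detect_intel_iommu();
                amd_iommu_detect();
        }

        /* later, from an initcall -- steps 4-5 (sketch) */
        static int __init pci_iommu_init(void)
        {
                x86_init.iommu.iommu_init();    /* clears 'swiotlb' on success */

                if (!swiotlb)
                        swiotlb_free();         /* step 5: HW IOMMU took over */

                return 0;
        }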
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: chrisw@sous-sol.org
Cc: dwmw2@infradead.org
Cc: joerg.roedel@amd.com
Cc: muli@il.ibm.com
LKML-Reference: <1257849980-22640-10-git-send-email-fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
There is an erratum for IOMMU hardware which documents
undefined behavior when forwarding SMI requests from
peripherals while the DTE of that peripheral has a sysmgt
value of 01b. This problem caused weird IO_PAGE_FAULT events
in my case.
This patch implements the suggested workaround for that
erratum in the AMD IOMMU driver. The erratum is documented
as erratum number 63.
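The suggested workaround boils down to reading the two SysMgt bits of a
device's DTE and setting the IW bit when they read 01b. A sketch, where the
helper and bit names are assumptions:

        /* sketch of the erratum 63 workaround for one device */
        static void amd_iommu_apply_erratum_63(u16 devid)
        {
                int sysmgt;

                /* SysMgt is a 2-bit field in the device table entry */
                sysmgt = get_dev_entry_bit(devid, DEV_ENTRY_SYSMGT1) |
                         (get_dev_entry_bit(devid, DEV_ENTRY_SYSMGT2) << 1);

                /* erratum 63: SysMgt == 01b leads to undefined behavior
                 * when forwarding SMI requests; set IW as the documented
                 * workaround */
                if (sysmgt == 0x01)
                        set_dev_entry_bit(devid, DEV_ENTRY_IW);
        }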
Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch makes sure a device is not detached from the
passthrough domain when the device driver is unloaded or
otherwise releases the device.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
When the IOMMU driver runs in passthrough mode it has to
make sure that every device not assigned to an IOMMU-API
domain is put into the passthrough domain instead of being
left unassigned.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch makes the locking behavior between the functions
attach_device and __attach_device consistent with the
locking behavior between detach_device and __detach_device.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
The V bit of the device table entry has to be set after the
rest of the entry is written to not confuse the hardware.
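In code this means the part of the DTE that contains the V bit is stored
last, roughly like the following (the field layout and flag names are
illustrative assumptions):

        /* sketch: write the device table entry back-to-front */
        u64 pte_root = virt_to_phys(domain->pt_root);

        pte_root |= (domain->mode & DEV_ENTRY_MODE_MASK)
                    << DEV_ENTRY_MODE_SHIFT;
        pte_root |= IOMMU_PTE_IR | IOMMU_PTE_IW | IOMMU_PTE_P | IOMMU_PTE_TV;

        amd_iommu_dev_table[devid].data[2] = domain->id;              /* first */
        amd_iommu_dev_table[devid].data[1] = upper_32_bits(pte_root);
        amd_iommu_dev_table[devid].data[0] = lower_32_bits(pte_root); /* V last */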
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
When iommu=pt is passed on kernel command line the devices
should run untranslated. This requires the allocation of a
special domain for that purpose. This patch implements the
allocation and initialization path for iommu=pt.
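Conceptually the new init path allocates one protection domain without a
page table, so DMA passes through untranslated, and attaches every device
behind an IOMMU to it. A sketch (pt_domain, PAGE_MODE_NONE and the
allocation helper are assumptions):

        /* sketch: build the global passthrough domain for iommu=pt */
        static struct protection_domain *pt_domain;

        int __init amd_iommu_init_passthrough(void)
        {
                struct pci_dev *dev = NULL;

                pt_domain = protection_domain_alloc();
                if (!pt_domain)
                        return -ENOMEM;

                pt_domain->mode = PAGE_MODE_NONE; /* no page table: untranslated */

                for_each_pci_dev(dev) {
                        /* look up the IOMMU responsible for this device
                         * and attach the device to pt_domain here */
                }

                return 0;
        }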
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch factors some of the protection domain allocation
code into separate functions. This way the logic can be used
to allocate the passthrough domain later. As a side effect
this patch fixes an unlikely domain id leakage bug.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch adds a map_size parameter to the iommu_map_page
function which makes it generic enough to handle multiple
page sizes. This also requires a change to alloc_pte which
is also done in this patch.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
The driver now supports a dynamic number of levels for IO
page tables. This allows us to reduce the number of levels
for dma_ops domains by one, because a dma_ops domain usually
has an address space size between 128MB and 4GB.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch changes the alloc_pte function to be able to map
pages into the whole 64-bit address space supported by AMD
IOMMU hardware, up from the old limit of 2**39 bytes.
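The core of the change is that alloc_pte can now grow the page-table depth
on demand when an address does not fit into the current number of levels,
roughly like this (PM_LEVEL_PDE, PAGE_MODE_6_LEVEL and the field names are
assumptions based on this series):

        /* sketch: add one level so larger addresses become mappable */
        static bool increase_address_space(struct protection_domain *domain,
                                           gfp_t gfp)
        {
                u64 *pte;

                if (domain->mode == PAGE_MODE_6_LEVEL)
                        return false;   /* already covers 64 bits */

                pte = (void *)get_zeroed_page(gfp);
                if (!pte)
                        return false;

                /* the old root becomes the first entry of the new top level */
                *pte             = PM_LEVEL_PDE(domain->mode,
                                                virt_to_phys(domain->pt_root));
                domain->pt_root  = pte;
                domain->mode    += 1;
                domain->updated  = true; /* device table needs refreshing */

                return true;
        }

alloc_pte can then call this until the requested address fits within
domain->mode levels.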
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch introduces the update_domain function which
propagates the larger address space of a protection domain
to the device table and flushes all relevant DTEs and the
domain TLB.
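Roughly, update_domain only acts if the domain was marked as changed,
rewrites the DTEs of the attached devices and flushes them together with
the domain TLB; a sketch with assumed helper names:

        /* sketch: propagate address-space changes to the hardware */
        static void update_domain(struct protection_domain *domain)
        {
                if (!domain->updated)
                        return;                   /* nothing changed */

                update_device_table(domain);      /* rewrite DTEs           */
                flush_devices_by_domain(domain);  /* invalidate those DTEs  */
                iommu_flush_tlb_pde(domain);      /* invalidate domain TLB  */

                domain->updated = false;
        }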
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch factors out some logic of attach_device into a
separate function. This new function will be used to update
device table entries when necessary.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch adds a generic variant of the
amd_iommu_flush_all_devices function which flushes only the
DTEs for a given protection domain.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch changes the fetch_pte function in the AMD IOMMU
driver to support dynamic mapping levels.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Instead of panicking on a completion wait loop failure, try
to recover from that event by resetting the command buffer.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
To prevent the driver from doing recursive command buffer
resets, just panic when that recursion happens.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
On an ILLEGAL_COMMAND_ERROR the IOMMU stops executing
further commands. This patch changes the code to handle this
case better by resetting the command buffer in the IOMMU.
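Together with the patches above, the recovery path is conceptually: guard
against recursion, reset the command buffer registers, then flush all DTEs
and domain TLBs on that IOMMU because queued invalidations may have been
lost. A sketch with assumed helper names:

        /* sketch: recover one IOMMU after an ILLEGAL_COMMAND_ERROR */
        static void reset_iommu_command_buffer(struct amd_iommu *iommu)
        {
                pr_err("AMD-Vi: Resetting IOMMU command buffer\n");

                if (iommu->reset_in_progress)
                        panic("AMD-Vi: ILLEGAL_COMMAND_ERROR while resetting command buffer\n");
                iommu->reset_in_progress = true;

                amd_iommu_reset_cmd_buffer(iommu);  /* disable, clear head/tail,
                                                     * re-enable the buffer   */
                flush_all_devices_for_iommu(iommu); /* DTEs behind this IOMMU */
                flush_all_domains_on_iommu(iommu);  /* domain TLBs, this IOMMU */

                iommu->reset_in_progress = false;
        }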
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This function flushes all DTE entries on one IOMMU for all
devices behind this IOMMU. This is required for command
buffer resetting later.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
The amd_iommu_pd_table is indexed by protection domain
number and not by device id. So this check is broken and
must be removed.
Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch replaces the "AMD IOMMU" printk strings with the
official name for the hardware: "AMD-Vi".
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch removes some left-overs which were put into the code to
simplify merging code which also depends on changes in other trees.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch introduces a function to flush all domain TLBs
on one given IOMMU. This is required later to reset the
command buffer on one IOMMU.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch adds code to dump the command which caused an
ILLEGAL_COMMAND_ERROR raised by the IOMMU hardware.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>