Commit Graph

9 Commits

Will Deacon
cf27ec930b iommu/io-pgtable-arm: Unmap and free table when overwriting with block
When installing a block mapping, we unconditionally overwrite a non-leaf
PTE if we find one. However, this can cause a problem if the following
sequence of events occurs:

  (1) iommu_map called for a 4k (i.e. PAGE_SIZE) mapping at some address
      - We initialise the page table all the way down to a leaf entry
      - No TLB maintenance is required, because we're going from invalid
        to valid.

  (2) iommu_unmap is called on the mapping installed in (1)
      - We walk the page table to the final (leaf) entry and zero it
      - We only changed a valid leaf entry, so we invalidate leaf-only

  (3) iommu_map is called on the same address as (1), but this time for
      a 2MB (i.e. BLOCK_SIZE) mapping
      - We walk the page table down to the penultimate level, where we
        find a table entry
      - We overwrite the table entry with a block mapping and return
        without any TLB maintenance and without freeing the memory used
        by the now-orphaned table.

This last step can lead to a walk-cache caching the overwritten table
entry, causing unexpected faults when the new mapping is accessed by a
device. One way to fix this would be to collapse the page table when
freeing the last page at a given level, but this would require expensive
iteration on every map call. Instead, this patch detects the case when
we are overwriting a table entry and explicitly unmaps the table first,
which takes care of both freeing and TLB invalidation.

Cc: <stable@vger.kernel.org>
Reported-by: Brian Starkey <brian.starkey@arm.com>
Tested-by: Brian Starkey <brian.starkey@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-18 11:27:36 +02:00
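
For illustration, a minimal sketch of the map-path logic this fix describes, using hypothetical demo_* names and a simplified PTE layout rather than the kernel code: when the entry about to be overwritten is a table, its range is unmapped first so the orphaned table is freed and the walk cache invalidated.

#include <stddef.h>
#include <stdint.h>

typedef uint64_t pte_t;

#define PTE_VALID  (1ULL << 0)
#define PTE_TABLE  (1ULL << 1)   /* distinguishes table entries from block entries */

/* Stand-in for the real unmap path: clears/frees the sub-table and syncs the TLB. */
static size_t demo_unmap(uint64_t iova, size_t size)
{
    /* ... walk to the entry, zero it, free child tables, TLB flush + sync ... */
    return size;
}

static int demo_install_block(pte_t *ptep, uint64_t iova, uint64_t paddr,
                              size_t block_size)
{
    pte_t old = *ptep;

    /* The fix: an existing table entry is unmapped first, instead of being
     * silently overwritten with no TLB maintenance and a leaked table. */
    if ((old & PTE_VALID) && (old & PTE_TABLE)) {
        if (demo_unmap(iova, block_size) != block_size)
            return -1;
    }

    *ptep = paddr | PTE_VALID;   /* write the block (leaf) entry; attributes elided */
    return 0;
}
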
Robin Murphy
f5b831907d iommu/io-pgtable: Remove flush_pgtable callback
With the users fully converted to DMA API operations, it's dead, Jim.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-06 14:35:40 +01:00
Robin Murphy
87a91b15d6 iommu/io-pgtable-arm: Centralise sync points
With all current users now opted in to DMA API operations, make the
iommu_dev pointer mandatory, rendering the flush_pgtable callback
redundant for cache maintenance. However, since the DMA calls could be
nops in the case of a coherent IOMMU, we still need to ensure the page
table updates are fully synchronised against a subsequent page table
walk. In the unmap path, the TLB sync will usually need to do this
anyway, so just cement that requirement; in the map path, which may
consist solely of cacheable memory writes (in the coherent case),
insert an appropriate barrier at the end of the operation, and obviate
the need to call flush_pgtable on every individual update for
synchronisation.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: slight clarification to tlb_sync comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-06 14:35:39 +01:00
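
A rough sketch of the sync scheme described above, with hypothetical demo_* helpers standing in for the real code: the map path ends with a single write barrier (the kernel uses wmb(); a release fence models it here), while the unmap path relies on the TLB sync it must issue anyway.

#include <stddef.h>
#include <stdint.h>

static void demo_write_pte(volatile uint64_t *ptep, uint64_t pte)
{
    *ptep = pte;   /* no per-entry flush_pgtable call any more */
}

static int demo_map(volatile uint64_t *ptep, uint64_t pte)
{
    demo_write_pte(ptep, pte);

    /* Single sync point: make all cacheable PTE writes visible to the table
     * walker before map() returns, even when the DMA calls are no-ops. */
    __atomic_thread_fence(__ATOMIC_RELEASE);
    return 0;
}

static size_t demo_unmap(volatile uint64_t *ptep, size_t size)
{
    demo_write_pte(ptep, 0);

    /* The unmap path ends with a TLB sync anyway, which also provides the
     * required ordering, so no extra barrier is inserted here. */
    /* demo_tlb_sync(); -- hypothetical */
    return size;
}
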
Robin Murphy
f8d5496131 iommu/io-pgtable-arm: Allow appropriate DMA API use
Currently, users of the LPAE page table code are (ab)using dma_map_page()
as a means to flush page table updates for non-coherent IOMMUs. Since,
from the CPU's point of view, creating IOMMU page tables *is* passing
DMA buffers to a device (the IOMMU's page table walker), there's little
reason not to use the DMA API correctly.

Allow IOMMU drivers to opt into DMA API operations for page table
allocation and updates by providing their appropriate device pointer.
The expectation is that an LPAE IOMMU should have a full view of system
memory, so use streaming mappings to avoid unnecessary pressure on
ZONE_DMA, and treat any DMA translation as a warning sign.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-06 14:35:38 +01:00
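
A hedged sketch of the allocation pattern this describes (not the actual patch; demo_alloc_table is hypothetical and error handling is minimal): page-table memory gets a streaming DMA mapping against the IOMMU's device, and any translation between the DMA and physical addresses triggers a warning.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/io.h>

static void *demo_alloc_table(struct device *iommu_dev, size_t size)
{
    void *table = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
    dma_addr_t dma;

    if (!table)
        return NULL;

    /* Streaming mapping: avoids unnecessary pressure on ZONE_DMA. */
    dma = dma_map_single(iommu_dev, table, size, DMA_TO_DEVICE);
    if (dma_mapping_error(iommu_dev, dma))
        goto out_free;

    /* An LPAE IOMMU is expected to see all of system memory, so any
     * DMA-to-physical translation here is treated as a warning sign. */
    if (WARN_ON(dma != virt_to_phys(table)))
        goto out_unmap;

    return table;

out_unmap:
    dma_unmap_single(iommu_dev, dma, size, DMA_TO_DEVICE);
out_free:
    free_pages_exact(table, size);
    return NULL;
}
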
Will Deacon
63979b8da3 iommu/io-pgtable-arm: avoid speculative walks through TTBR1
Although we set TCR.T1SZ to 0, the input address range covered by TTBR1
is actually calculated using T0SZ in this case on the ARM SMMU. This
could theoretically lead to speculative table walks through physical
address zero, leading to all sorts of fun and games if we have MMIO
regions down there.

This patch avoids the issue by setting EPD1 to disable walks through
the unused TTBR1 register.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-27 13:39:36 +00:00
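
Illustrative only (the DEMO_* names are local to this sketch; bit positions follow the ARMv8 TCR layout): the point is simply that setting EPD1 forbids table walks via TTBR1 regardless of what T1SZ would otherwise imply.

#include <stdint.h>

#define DEMO_TCR_T0SZ_SHIFT  0
#define DEMO_TCR_EPD1        (1ULL << 23)   /* disable table walks via TTBR1 */

static uint64_t demo_build_tcr(unsigned int ias)
{
    uint64_t tcr = 0;

    /* Input address range for TTBR0 walks. */
    tcr |= (uint64_t)(64 - ias) << DEMO_TCR_T0SZ_SHIFT;

    /* TTBR1 is unused: disable walks outright rather than relying on T1SZ. */
    tcr |= DEMO_TCR_EPD1;

    return tcr;
}
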
Will Deacon
367bd978b8 iommu/io-pgtable-arm: Fix self-test WARNs on i386
Various build/boot bots have reported WARNs being triggered by the ARM
iopgtable LPAE self-tests on i386 machines.

This boils down to two instances of right-shifting a 32-bit unsigned
long (i.e. an iova) by at least the width of the type, which is undefined behaviour. On 32-bit ARM,
this happens to give us zero, hence my testing didn't catch this
earlier.

This patch fixes the issue by using DIV_ROUND_UP and an explicit cast
to avoid the erroneous shifts.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-02-25 13:37:32 +01:00
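
A small standalone illustration of the bug class (names are made up; this is not the kernel diff): shifting a 32-bit unsigned long by 32 or more is undefined behaviour, so the count is computed with a widened type and round-up division instead.

#include <stdint.h>

#define DEMO_DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

static uint64_t demo_num_entries(unsigned long iova_size, unsigned int shift)
{
    /* Buggy form: `iova_size >> shift` is undefined when shift >= the width
     * of unsigned long, which happens on i386 for shifts of 32 or more. */

    /* Safe form: widen first, then round-up-divide by the entry size. */
    return DEMO_DIV_ROUND_UP((uint64_t)iova_size, 1ULL << shift);
}
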
Laurent Pinchart
c896c132b0 iommu: io-pgtable-arm: add non-secure quirk
The quirk causes the Non-Secure bit to be set in all page table entries.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-01-19 14:46:45 +00:00
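
A sketch of the quirk's effect, assuming hypothetical DEMO_* names (the real quirk flag and PTE bit definitions live in the io-pgtable code): when the quirk is enabled, the Non-Secure bit is OR'd into each descriptor as it is built.

#include <stdint.h>

#define DEMO_QUIRK_ARM_NS  (1u << 0)     /* stand-in for the real quirk flag */
#define DEMO_PTE_NS        (1ULL << 5)   /* Non-Secure bit in leaf descriptors */

static uint64_t demo_finalise_pte(unsigned int quirks, uint64_t pte)
{
    if (quirks & DEMO_QUIRK_ARM_NS)
        pte |= DEMO_PTE_NS;              /* mark the mapping Non-Secure */
    return pte;
}
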
Will Deacon
fe4b991dcd iommu: add self-consistency tests to ARM LPAE IO page table allocator
This patch adds a series of basic self-consistency tests to the ARM LPAE
IO page table allocator that exercise corner cases in map/unmap, as well
as testing all valid configurations of pagesize, ias and stage.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-01-19 14:46:44 +00:00
Will Deacon
e1d3c0fd70 iommu: add ARM LPAE page table allocator
A number of IOMMUs found in ARM SoCs can walk architecture-compatible
page tables.

This patch adds a generic allocator for Stage-1 and Stage-2 v7/v8
long-descriptor page tables. 4k, 16k and 64k pages are supported, with
up to 4 levels of walk to cover a 48-bit address space.

Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-01-19 14:46:44 +00:00
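
As a rough usage sketch of the interface this allocator exposes (hedged: the configuration fields and required TLB callbacks have changed across kernel versions, and the callbacks a real driver must supply are omitted here), a driver picks a table format, page sizes and address sizes, and gets back a set of map/unmap operations:

#include <linux/io-pgtable.h>   /* header location/content varies by kernel version */
#include <linux/sizes.h>

static struct io_pgtable_ops *demo_alloc_s1_pgtable(void *cookie)
{
    struct io_pgtable_cfg cfg = {
        .pgsize_bitmap = SZ_4K | SZ_2M | SZ_1G,  /* 4k granule and its block sizes */
        .ias           = 48,                     /* input (IOVA) address bits */
        .oas           = 48,                     /* output (physical) address bits */
        /* .tlb = <driver's TLB maintenance callbacks>, required in practice */
    };

    /* 64-bit long-descriptor format, stage 1; stage-2 and 32-bit variants exist. */
    return alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, cookie);
}
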