Make of_device_id array const, because all OF functions handle it as const.
Signed-off-by: Kiran Padwal <kiran.padwal@smartplayin.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
We are using the same pfn for every pte we create while constructing the
pmd. Fix this by actually updating the pfn on each iteration of the pmd
construction loop.
It's not clear if we can actually hit this bug right now since iommu_map
splits up the calls to .map based on the page size, so we only ever seem to
iterate this loop once. However, things might change in the future that
could cause us to hit this.
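For illustration only, a standalone sketch of the corrected loop shape
(the names and constants are placeholders, not the driver's code):

  #include <stdio.h>

  #define PTRS_PER_PTE    512

  int main(void)
  {
      unsigned long pte[PTRS_PER_PTE];
      unsigned long pfn = 0x80000;        /* arbitrary starting frame */
      int i;

      /* The buggy version reused the same pfn for every entry. */
      for (i = 0; i < PTRS_PER_PTE; i++)
          pte[i] = pfn++;                 /* advance the pfn per pte */

      printf("first pte maps pfn 0x%lx, last maps pfn 0x%lx\n",
             pte[0], pte[PTRS_PER_PTE - 1]);
      return 0;
  }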
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
MMU-401 is similar to MMU-400, but updated with limited ARMv8 support.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The SMMU driver was relying on a quirk of MMU-500 r2px to identify
the correct architecture version. Since this does not apply to other
implementations, make the architecture version for each supported
implementation explicit.
While we're at it, remove the unnecessary #ifdef since the dependencies
for CONFIG_ARM_SMMU already imply CONFIG_OF.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order for nested translation to work correctly, we need to ensure
that the maximum output address size from stage-1 is <= the maximum
supported input address size to stage-2. The latter is currently defined
by VA_BITS, since we make use of the CPU page table functions for
allocating our tables, and so the driver currently enforces this
restriction by truncating the stage-1 output size during probe.
In reality, this doesn't make a lot of sense; the guest OS is responsible
for managing the stage-1 page tables, so we actually just need to ensure
that the ID registers of the virtual SMMU interface only advertise the
supported stage-2 input size.
This patch fixes the problem by treating the stage-1 and stage-2 input
address sizes separately.
Reported-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Arbitrary integer division is not available on all ARM CPUs, so GCC
may emit calls to helper functions which are not implemented in
the kernel.
This patch avoids these problems in the SMMU driver by using page shift
instead of page size, so that divisions by the page size (as required
by the vSMMU code) can be expressed as a simple right shift.
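As a standalone example (illustrative only) of why storing the shift is
enough:

  #include <stdio.h>

  int main(void)
  {
      unsigned long size = 6 * 4096 + 123; /* arbitrary byte count */
      unsigned int page_shift = 12;        /* 4K pages */

      /* Dividing by a run-time page size may need a helper call... */
      unsigned long pages_div = size / (1UL << page_shift);
      /* ...whereas the right shift compiles to a single instruction. */
      unsigned long pages_shift = size >> page_shift;

      printf("%lu == %lu\n", pages_div, pages_shift);
      return 0;
  }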
Signed-off-by: Will Deacon <will.deacon@arm.com>
In preparation for nested translation support, stick a pointer to the
iommu_domain in dev->archdata.iommu. This makes it much easier to grab
hold of the physical group configuration (e.g. cbndx) when dealing with
vSMMU accesses from a guest.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Whilst the driver currently creates one IOMMU group per device, this
will soon change when we start supporting non-transparent PCI bridges
which require all upstream masters to be assigned to the same address
space.
This patch reworks our IOMMU group code so that we can easily support
multi-master groups. The master configuration (streamids and smrs) is
stored as private iommudata on the group, whilst the low-level attach/detach
code is updated to avoid double alloc/free when dealing with multiple
masters sharing the same SMMU configuration. This unifies device
handling, regardless of whether the device sits on the platform or PCI
bus.
Signed-off-by: Will Deacon <will.deacon@arm.com>
When debugging and testing code on an SMMU that supports nested
translation, it can be useful to restrict the driver to a particular
stage of translation.
This patch adds a module parameter to the ARM SMMU driver to allow this
by restricting the ability of the probe() code to detect support for
only the specified stage.
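A sketch of the kind of parameter this adds (the name, values and
description here are illustrative rather than authoritative):

  #include <linux/module.h>
  #include <linux/moduleparam.h>

  /* 1 forces stage-1, 2 forces stage-2; other values leave probing alone. */
  static int force_stage;
  module_param_named(force_stage, force_stage, int, S_IRUGO);
  MODULE_PARM_DESC(force_stage,
      "Force SMMU mappings to be installed at a particular stage of translation.");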
Signed-off-by: Will Deacon <will.deacon@arm.com>
Working out the usable address sizes for the SMMU is surprisingly tricky.
We must take into account not only the limitations of the hardware for
VA, IPA and PA sizes, but also any restrictions imposed by the Linux page
table code, particularly when dealing with nested translation (where the
IPA size is limited by the input address size at stage-2).
This patch fixes a few corner cases in our address size handling so that
we correctly deal with 40-bit addresses in TTBCR2 and restrict the IPA
size differently depending on whether or not we have support for nested
translation.
Signed-off-by: Will Deacon <will.deacon@arm.com>
The prefix suggests the number should be printed in hex, so use
the %x specifier to do that.
Found by using regex suggested by Joe Perches.
Signed-off-by: Hans Wennborg <hans@hanshq.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The number of S2CR registers is not properly set when stream
matching is not supported. Fix this and add a check that we do not try
to access beyond the number of S2CR registers.
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[will: added missing NUMSIDB_* definitions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When we attach a device to a domain, we configure the SMRs (if we have
any) to match the Stream IDs for the corresponding SMMU master and
program the s2crs accordingly. However, on detach we tear down the s2crs
assuming stream-indexing (as opposed to stream-matching) and SMRs
assuming they are present.
This patch fixes the device detach code so that it operates as a
converse of the attach code.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If the split page table lock for PTE tables is enabled
(CONFIG_SPLIT_PTLOCK_CPUS <= NR_CPUS), pgtable_page_ctor() leads to a
non-atomic allocation for the ptlock with a spinlock held, resulting in:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 466 at kernel/locking/lockdep.c:2742 lockdep_trace_alloc+0xd8/0xf4()
DEBUG_LOCKS_WARN_ON(irqs_disabled_flags(flags))
Modules linked in:
CPU: 0 PID: 466 Comm: dma0chan0-copy0 Not tainted 3.16.0-3d47efb-clean-pl330-dma_test-ve-a15-a32-slr-mc-on-3+ #55
[<80014748>] (unwind_backtrace) from [<80011640>] (show_stack+0x10/0x14)
[<80011640>] (show_stack) from [<802bf864>] (dump_stack+0x80/0xb4)
[<802bf864>] (dump_stack) from [<8002385c>] (warn_slowpath_common+0x64/0x88)
[<8002385c>] (warn_slowpath_common) from [<80023914>] (warn_slowpath_fmt+0x30/0x40)
[<80023914>] (warn_slowpath_fmt) from [<8005d818>] (lockdep_trace_alloc+0xd8/0xf4)
[<8005d818>] (lockdep_trace_alloc) from [<800d3d78>] (kmem_cache_alloc+0x24/0x144)
[<800d3d78>] (kmem_cache_alloc) from [<800bfae4>] (ptlock_alloc+0x18/0x2c)
[<800bfae4>] (ptlock_alloc) from [<802b1ec0>] (arm_smmu_handle_mapping+0x4c0/0x690)
[<802b1ec0>] (arm_smmu_handle_mapping) from [<802b0cd8>] (iommu_map+0xe0/0x148)
[<802b0cd8>] (iommu_map) from [<80019098>] (arm_coherent_iommu_map_page+0x160/0x278)
[<80019098>] (arm_coherent_iommu_map_page) from [<801f4d78>] (dmatest_func+0x60c/0x1098)
[<801f4d78>] (dmatest_func) from [<8003f8ac>] (kthread+0xcc/0xe8)
[<8003f8ac>] (kthread) from [<8000e868>] (ret_from_fork+0x14/0x2c)
---[ end trace ce0d27e6f434acf8 ]---
The split page table lock is not used in the driver; in fact, the page
tables are guarded by the domain lock, so remove the calls to
pgtable_page_{c,d}tor().
Cc: <stable@vger.kernel.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Stage-1 context banks do not have the SMMU_CBn_TCR[SL0] field since it
is only applicable to stage-2 context banks.
This patch ensures that we don't set the reserved TCR bits for stage-1
translations.
Cc: <stable@vger.kernel.org>
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
request_irq shouldn't be called from atomic context since it might
sleep, but we're calling it with a spinlock held, resulting in:
[ 9.172202] BUG: sleeping function called from invalid context at kernel/mm/slub.c:926
[ 9.182989] in_atomic(): 1, irqs_disabled(): 128, pid: 1, name: swapper/0
[ 9.189762] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G W 3.10.40-gbc1b510b-38437-g55831d3bd9-dirty #97
[ 9.199757] [<c020c448>] (unwind_backtrace+0x0/0x11c) from [<c02097d0>] (show_stack+0x10/0x14)
[ 9.208346] [<c02097d0>] (show_stack+0x10/0x14) from [<c0301d74>] (kmem_cache_alloc_trace+0x3c/0x210)
[ 9.217543] [<c0301d74>] (kmem_cache_alloc_trace+0x3c/0x210) from [<c0276a48>] (request_threaded_irq+0x88/0x11c)
[ 9.227702] [<c0276a48>] (request_threaded_irq+0x88/0x11c) from [<c0931ca4>] (arm_smmu_attach_dev+0x188/0x858)
[ 9.237686] [<c0931ca4>] (arm_smmu_attach_dev+0x188/0x858) from [<c0212cd8>] (arm_iommu_attach_device+0x18/0xd0)
[ 9.247837] [<c0212cd8>] (arm_iommu_attach_device+0x18/0xd0) from [<c093314c>] (arm_smmu_test_probe+0x68/0xd4)
[ 9.257823] [<c093314c>] (arm_smmu_test_probe+0x68/0xd4) from [<c05aadd0>] (driver_probe_device+0x12c/0x330)
[ 9.267629] [<c05aadd0>] (driver_probe_device+0x12c/0x330) from [<c05ab080>] (__driver_attach+0x68/0x8c)
[ 9.277090] [<c05ab080>] (__driver_attach+0x68/0x8c) from [<c05a92d4>] (bus_for_each_dev+0x70/0x84)
[ 9.286118] [<c05a92d4>] (bus_for_each_dev+0x70/0x84) from [<c05aa3b0>] (bus_add_driver+0x100/0x244)
[ 9.295233] [<c05aa3b0>] (bus_add_driver+0x100/0x244) from [<c05ab5d0>] (driver_register+0x9c/0x124)
[ 9.304347] [<c05ab5d0>] (driver_register+0x9c/0x124) from [<c0933088>] (arm_smmu_test_init+0x14/0x38)
[ 9.313635] [<c0933088>] (arm_smmu_test_init+0x14/0x38) from [<c0200618>] (do_one_initcall+0xb8/0x160)
[ 9.322926] [<c0200618>] (do_one_initcall+0xb8/0x160) from [<c1200b7c>] (kernel_init_freeable+0x108/0x1cc)
[ 9.332564] [<c1200b7c>] (kernel_init_freeable+0x108/0x1cc) from [<c0b924b0>] (kernel_init+0xc/0xe4)
[ 9.341675] [<c0b924b0>] (kernel_init+0xc/0xe4) from [<c0205e38>] (ret_from_fork+0x14/0x3c)
Fix this by moving the request_irq out of the critical section. This
should be okay since smmu_domain->smmu is still being protected by the
critical section. Also, we still don't program the Stream Match Register
until after registering our interrupt handler so we shouldn't be missing
any interrupts.
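The resulting shape of the code is roughly the following sketch (the
types and handler are placeholders, not the driver's real definitions):

  #include <linux/interrupt.h>
  #include <linux/spinlock.h>

  struct sketch_domain {
      spinlock_t lock;
      void *smmu;
  };

  static irqreturn_t sketch_context_fault(int irq, void *dev)
  {
      return IRQ_HANDLED;
  }

  static int sketch_attach(struct sketch_domain *dom, int irq)
  {
      unsigned long flags;

      spin_lock_irqsave(&dom->lock, flags);
      /* pick a context bank, record dom->smmu, ... */
      spin_unlock_irqrestore(&dom->lock, flags);

      /*
       * request_irq() may sleep, so call it after dropping the lock.
       * The Stream Match Register is only programmed later, so no
       * context faults can be missed in the meantime.
       */
      return request_irq(irq, sketch_context_fault, IRQF_SHARED,
                         "arm-smmu-context-fault", dom);
  }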
Cc: <stable@vger.kernel.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
[will: code cleanup and fixed request_irq token parameter]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Fix some issues reported by checkpatch.pl. Mostly whitespace, but also
includes min=>min_t, kzalloc=>kcalloc, and kmalloc=>kmalloc_array.
The only issue I'm leaving alone is:
arm-smmu.c:853: WARNING: line over 80 characters
#853: FILE: arm-smmu.c:853:
+ (MAIR_ATTR_WBRWA << MAIR_ATTR_SHIFT(MAIR_ATTR_IDX_CACHE)) |
since it seems to be a case where "exceeding 80 columns significantly
increases readability and does not hide information."
(Documentation/CodingStyle).
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This structure is read-only data and should never be modified.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
If somebody attempts to check the capability of an IOMMU domain prior to
device attach, then we'll try to dereference a NULL SMMU pointer through
the SMMU domain (since we can't determine the actual SMMU instance until
we have a device attached).
This patch fixes the capability check so that non-global features are
reported as being absent when no device is attached to the domain.
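In outline (a sketch only, with a placeholder type standing in for the
driver's domain structure):

  #include <linux/types.h>

  struct sketch_domain {
      void *smmu;        /* NULL until a device is attached */
  };

  static bool sketch_domain_has_cap(struct sketch_domain *dom,
                                    unsigned long cap)
  {
      if (!dom->smmu)
          return false;  /* no device attached: non-global caps are absent */

      /* ...otherwise consult the SMMU instance's feature bits... */
      return false;
  }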
Signed-off-by: Will Deacon <will.deacon@arm.com>
For an SMMU that supports both stage-1 and stage-2 mappings (but not
nested translation), we should prefer stage-1 mappings, as we
otherwise rely on the memory attributes of the incoming transactions
for IOMMU_CACHE mappings.
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM SMMU driver has supported chained SMMUs (i.e. SMMUs connected
back-to-back in series) via the smmu-parent property in device tree.
This was in anticipation of somebody building such a configuration;
however, that seems not to be the case.
This patch removes the unused chained SMMU hack from the driver. We can
consider adding it back later if somebody decides they need it, but for
the time being it's just a pointless mess that we're carrying in mainline.
Removal of the feature also makes migration to the generic IOMMU bindings
easier.
Signed-off-by: Will Deacon <will.deacon@arm.com>
MSIs are just seen as bog standard memory writes by the ARM SMMU, so
they can be translated (and isolated) in the same way.
This patch adds the IOMMU_CAP_INTR_REMAP capability to the ARM SMMU
driver and reworks our capability code so that we don't assume the
caps are organised as bits in a bitmask (since this isn't the intention).
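A hedged sketch of the reworked capability code (illustrative, not the
exact driver function):

  #include <linux/iommu.h>

  static int sketch_domain_has_cap(struct iommu_domain *domain,
                                   unsigned long cap)
  {
      switch (cap) {
      case IOMMU_CAP_INTR_REMAP:
          return 1;  /* MSIs are translated like any other memory write */
      case IOMMU_CAP_CACHE_COHERENCY:
          return 0;  /* depends on the SMMU instance backing the domain */
      default:
          return 0;
      }
  }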
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch extends the ARM SMMU driver so that it can handle PCI master
devices in addition to platform devices described in the device tree.
The driver is informed about the PCI host controller in the DT via a
phandle to the host controller in the mmu-masters property. The host
controller is then added to the master tree for that SMMU, just like a
normal master (although it probably doesn't advertise any StreamIDs).
When a device is added to the PCI bus, we set the archdata.iommu pointer
for that device to describe its StreamID (actually its RequesterID for
the moment). This allows us to re-use our existing data structures using
the host controller of_node for everything apart from StreamID
configuration, where we reach into the archdata for the information we
require.
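For illustration, the StreamID for a PCI device is currently just its
RequesterID, along these lines (sketch only):

  #include <linux/pci.h>

  static u16 sketch_pci_streamid(struct pci_dev *pdev)
  {
      /* bus number in the upper byte, devfn in the lower byte */
      return PCI_DEVID(pdev->bus->number, pdev->devfn);
  }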
Cc: Varun Sethi <varun.sethi@freescale.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
T0SZ controls the input address range for TTBR0, so use the input
address range rather than the output address range for the calculation.
For stage-2, this means using the output size of stage-1.
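As a worked example (AArch64-style encoding; illustrative only, the
32-bit LPAE encoding differs):

  #include <stdio.h>

  /* T0SZ encodes the input address size: 64 - IA bits. */
  static unsigned int t0sz(unsigned int input_bits)
  {
      return 64 - input_bits;
  }

  int main(void)
  {
      /* stage-1 input is the VA size; stage-2 input is stage-1's output size */
      printf("T0SZ for a 39-bit input range: %u\n", t0sz(39)); /* 25 */
      printf("T0SZ for a 40-bit input range: %u\n", t0sz(40)); /* 24 */
      return 0;
  }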
Signed-off-by: Will Deacon <will.deacon@arm.com>
The S2CR_TYPE_TRANS macro already includes S2CR_TYPE_SHIFT, so drop the
second shift. Note that, since S2CR_TYPE_SHIFT is 0x0, there is no
functional change introduced by this patch.
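A minimal sketch of the change (the values follow the description above
and are not taken from the real header):

  #include <stdio.h>

  #define S2CR_TYPE_SHIFT   0
  #define S2CR_TYPE_TRANS   (0x0 << S2CR_TYPE_SHIFT)

  int main(void)
  {
      unsigned int before = S2CR_TYPE_TRANS << S2CR_TYPE_SHIFT; /* double shift */
      unsigned int after  = S2CR_TYPE_TRANS;  /* shift already in the macro */

      printf("before=%#x after=%#x\n", before, after); /* identical */
      return 0;
  }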
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The output size of stage-1 is currently limited by the input size of
stage-2, which is further limited by VA_BITS since we make use of the
standard pgd_alloc functions for creating page tables.
This patch ensures that we use VA_BITS instead of a hardcoded '39'
for the stage-1 output size limit.
Signed-off-by: Will Deacon <will.deacon@arm.com>
A kernel panic happened when iommu_unmap'ing a buffer larger than 2MB:
more pmd entries than expected got "invalidated", due to a wrong range
being passed to arm_smmu_alloc_init_pte. It was likely a typo; fix it by
passing the correct "end" address to arm_smmu_alloc_init_pte.
Signed-off-by: Bin Wang <binw@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The IOMMU core expects the unmap operation to return the number of bytes
that have been unmapped or 0 on failure, a negative return value being
treated like a number of bytes.
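A sketch of the expected contract (not the driver's actual unmap path):

  #include <linux/types.h>

  static size_t sketch_unmap(unsigned long iova, size_t size)
  {
      int ret = 0;  /* pretend the page-table walk succeeded */

      /* return bytes unmapped on success, 0 on failure -- never -errno */
      return ret ? 0 : size;
  }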
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 1463fe44fd ("iommu/arm-smmu: Don't use VMIDs for stage-1
translations") moved our TLB invalidation from context creation time to
context destruction time, but forgot to update an associated comment.
This patch fixes the broken comment.
Signed-off-by: Will Deacon <will.deacon@arm.com>
On coherent systems, publishing new page tables to the SMMU walker is
achieved with a dsb instruction. In fact, this can be a dsb(ishst) which
also provides the mandatory barrier option for arm64.
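For reference, the barrier in question can be written as below (the
helper name is illustrative):

  /* publish new page-table entries to a coherent SMMU walker */
  static inline void sketch_flush_pgtable(void)
  {
      asm volatile("dsb ishst" : : : "memory");
  }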
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 972157cac5 ("arm/smmu: Use irqsafe spinlock for domain lock")
fixed our page table locks to be the irq{save,restore} variants, since
the DMA mapping API can be invoked from interrupt context.
This patch cleans up our use of the flags variable so we can distinguish
between IRQ flags (now `flags') and pte protection bits (now `prot').
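The resulting naming convention looks roughly like this sketch (not the
driver's real mapping code; the pte layout is a placeholder):

  #include <linux/spinlock.h>

  static void sketch_map(spinlock_t *lock, unsigned long *pte,
                         unsigned long pfn, unsigned long prot)
  {
      unsigned long flags;                 /* IRQ state only */

      spin_lock_irqsave(lock, flags);
      *pte = (pfn << 12) | prot;           /* protection bits kept separate */
      spin_unlock_irqrestore(lock, flags);
  }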
Signed-off-by: Will Deacon <will.deacon@arm.com>
On SMMU implementations where configuration accesses must be made
secure, we have to use the secure aliases of some non-secure registers.
This handling is switched on by DT property
"calxeda,smmu-secure-config-access" for an SMMU node.
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[will: merged with driver option handling patch]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The DT parsing code that determines stream IDs uses
of_parse_phandle_with_args and thus MAX_MASTER_STREAMIDS
is always bound by MAX_PHANDLE_ARGS.
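So the limit can simply be expressed in terms of MAX_PHANDLE_ARGS, along
these lines (sketch):

  #include <linux/of.h>

  /* a master can never list more StreamIDs than a phandle can carry */
  #define MAX_MASTER_STREAMIDS   MAX_PHANDLE_ARGS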
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use an irqsafe spinlock for the domain lock, as the lock might be taken
through the DMA-API, which is allowed in interrupt context.
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Whilst trying to bring up an SMMUv2 implementation with the table
walker plumbed into a coherent interconnect, I noticed that the memory
transactions targeting the CPU caches from the SMMU were marked as
outer-shareable instead of inner-shareable.
After a bunch of digging, it seems that we actually need to program
CBARn.BPSHCFG for s1-s2-bypass contexts to act as non-shareable in order
for the shareability configured in the corresponding TTBCR not to be
overridden with an outer-shareable attribute.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we populate page tables as we traverse them ("iommu/arm-smmu:
fix pud/pmd entry fill sequence"), we need to ensure that we flush out
our zeroed tables after initial allocation, to prevent speculative TLB
fills using bogus data.
This patch adds additional calls to arm_smmu_flush_pgtable during
initial table allocation, and moves the dsb required by coherent table
walkers into the helper.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit a44a9791e7 ("iommu/arm-smmu: use mutex instead of spinlock for
locking page tables") replaced the page table spinlock with a mutex, to
allow blocking allocations to satisfy lazy mapping requests.
Unfortunately, it turns out that IOMMU mappings are created from atomic
context (e.g. spinlock held during a dma_map), so this change doesn't
really help us in practice.
This patch is a partial revert of the offending commit, bringing back
the original spinlock but replacing our page table allocations for any
levels below the pgd (which is allocated during domain init) with
GFP_ATOMIC instead of GFP_KERNEL.
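The allocation pattern below the pgd then looks roughly like this
(sketch only, not the driver's real table allocator):

  #include <linux/mm.h>
  #include <linux/slab.h>

  static void *sketch_alloc_pte_table(void)
  {
      /* may be reached with the page-table spinlock held, so no sleeping */
      return kzalloc(PAGE_SIZE, GFP_ATOMIC);
  }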
Cc: <stable@vger.kernel.org>
Reported-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM SMMU driver's population of puds and pmds is broken, since we
iterate over the next level of table repeatedly setting the current
level descriptor to point at the pmd being initialised. This is clearly
wrong when dealing with multiple pmds/puds.
This patch fixes the problem by moving the pud/pmd population out of the
loop and instead performing it when we allocate the next level (like we
correctly do for ptes already). The starting address for the next level
is then calculated prior to entering the loop.
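A simplified sketch of the corrected pattern (placeholder constants and
descriptor bits, not the driver's real code):

  #define SKETCH_PMD_SIZE   (1UL << 21)   /* 2MB, illustrative */

  static void sketch_init_pmd(unsigned long *pud, unsigned long *pmd_table,
                              unsigned long addr, unsigned long end)
  {
      unsigned long next;

      /* point the pud at the newly allocated pmd table exactly once */
      *pud = (unsigned long)pmd_table | 0x3;

      do {
          /* step to the next 2MB boundary, clamped to 'end' */
          next = (addr + SKETCH_PMD_SIZE) & ~(SKETCH_PMD_SIZE - 1);
          if (next > end)
              next = end;
          /* ...allocate and fill the pte table covering [addr, next)... */
          addr = next;
      } while (addr < end);
  }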
Cc: <stable@vger.kernel.org>
Signed-off-by: Yifan Zhang <zhangyf@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Previously, all of our mappings were marked as executable, which isn't
usually required. Now that we have the IOMMU_EXEC flag, use that to
determine whether or not a mapping should be marked as executable.
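In outline (sketch only; the flag value and XN bit below are
placeholders rather than the real definitions):

  #define SKETCH_IOMMU_EXEC   (1 << 3)     /* stand-in for IOMMU_EXEC */
  #define SKETCH_ATTR_XN      (1UL << 54)  /* stand-in for the execute-never bit */

  static unsigned long sketch_prot_to_pte(int iommu_prot)
  {
      unsigned long pteval = 0;

      if (!(iommu_prot & SKETCH_IOMMU_EXEC))
          pteval |= SKETCH_ATTR_XN;        /* non-executable unless asked for */

      return pteval;
  }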
Signed-off-by: Will Deacon <will.deacon@arm.com>
With the introduction of the VA_BITS definition for arm64, make use of
it in the driver, allowing up to 42-bits of VA space when configured
with 64k pages.
Signed-off-by: Will Deacon <will.deacon@arm.com>
IOMMU groups are expected by certain users of the IOMMU API,
e.g. VFIO. Add new devices found by the SMMU driver to an IOMMU
group to satisfy those users.
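The usual pattern for this is roughly the following sketch (error
handling trimmed; not the exact driver code):

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/iommu.h>

  static int sketch_add_device(struct device *dev)
  {
      struct iommu_group *group;
      int ret;

      group = iommu_group_alloc();
      if (IS_ERR(group))
          return PTR_ERR(group);

      ret = iommu_group_add_device(group, dev);
      iommu_group_put(group);  /* the device keeps its own reference */
      return ret;
  }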
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Fix to return -ENODEV instead of 0 when the context interrupt number
does not match in arm_smmu_device_dt_probe().
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When handling mapping requests, we dereference the SMMU domain before
checking whether it is NULL. This patch fixes the issue by removing the check
altogether, since we don't actually use the leaf_smmu when creating
mappings.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When creating IO mappings, we lazily allocate our page tables using the
standard, non-atomic allocator functions. This presents us with a
problem, since our page tables are protected with a spinlock.
This patch reworks the smmu_domain lock to use a mutex instead of a
spinlock. iova_to_phys is then reworked so that it only reads the page
tables, and can run in a lockless fashion, leaving the mutex to guard
against concurrent mapping threads.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'iommu-updates-v3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
"This time the updates contain:
- Tracepoints for certain IOMMU-API functions to make their use
easier to debug
- A tracepoint for IOMMU page faults to make it easier to get them in
user space
- Updates and fixes for the new ARM SMMU driver after the first
hardware showed up
- Various other fixes and cleanups in other IOMMU drivers"
* tag 'iommu-updates-v3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (26 commits)
iommu/shmobile: Enable the driver on all ARM platforms
iommu/tegra-smmu: Staticize tegra_smmu_pm_ops
iommu/tegra-gart: Staticize tegra_gart_pm_ops
iommu/vt-d: Use list_for_each_entry_safe() for dmar_domain->devices traversal
iommu/vt-d: Use for_each_drhd_unit() instead of list_for_each_entry()
iommu/vt-d: Fixed interaction of VFIO_IOMMU_MAP_DMA with IOMMU address limits
iommu/arm-smmu: Clear global and context bank fault status registers
iommu/arm-smmu: Print context fault information
iommu/arm-smmu: Check for num_context_irqs > 0 to avoid divide by zero exception
iommu/arm-smmu: Refine check for proper size of mapped region
iommu/arm-smmu: Switch to subsys_initcall for driver registration
iommu/arm-smmu: use relaxed accessors where possible
iommu/arm-smmu: replace devm_request_and_ioremap by devm_ioremap_resource
iommu: Remove stack trace from broken irq remapping warning
iommu: Change iommu driver to call io_page_fault trace event
iommu: Add iommu_error class event to iommu trace
iommu/tegra: gart: cleanup devm_* functions usage
iommu/tegra: Print phys_addr_t using %pa
iommu: No need to pass '0x' when '%pa' is used
iommu: Change iommu driver to call unmap trace event
...
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>